Apache SystemML™ Paper Takes Top Prize at This Year’s VLDB Conference

The Best Paper award for this year’s International Conference on Very Large Data Bases (VLDB) goes to "Compressed Linear Algebra for Large-Scale Machine Learning", authored by a PhD candidate at the University of Maryland and four senior researchers from IBM. Their method for compressing matrices for linear algebra operations promises significant speedups while using less memory.

In particular, the compression technology provides benefits at two different stages of the data science process.

Before training a model, a data scientist typically goes through multiple iterations of feature engineering. Common feature engineering tasks include examining the data with descriptive statistics and transforming the values in columns to better suit the assumptions built into different types of machine learning models. For example, a data scientist preparing to use linear models like linear regression or support vector machines might scale columns with numeric data so that they have an approximately Gaussian distribution. Compressed linear algebra makes common feature engineering tasks go orders of magnitude faster. It's possible to multiply every value in a column by a constant or square every value in a column nearly instantly, because those operations become metadata operations in the compressed domain. Similarly, it's possible to compute aggregates like sums extremely fast over compressed data. (Note that this aspect is not a major focus of the paper.)
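To see why scaling or squaring a column can become a metadata operation, consider a toy dictionary-style encoding. The paper's actual encodings (offset-list, run-length, and dictionary schemes) are more sophisticated; the class below is a simplified illustration of the principle, and its names are ours, not the paper's.

```python
from collections import Counter

class CompressedColumn:
    """Toy dictionary-compressed column: the distinct values, plus the
    number of rows holding each one. Row order is ignored here, which is
    fine for value-wise transforms and order-insensitive aggregates."""

    def __init__(self, values):
        counts = Counter(values)
        self.dictionary = list(counts.keys())   # distinct values only
        self.counts = list(counts.values())     # occurrences per value

    def scale(self, c):
        # Multiply every row by c: only the small dictionary is rewritten,
        # regardless of how many rows the column has.
        self.dictionary = [v * c for v in self.dictionary]

    def square(self):
        # Square every row: again a dictionary-only update.
        self.dictionary = [v * v for v in self.dictionary]

    def sum(self):
        # Aggregate without decompressing: value times occurrence count.
        return sum(v * n for v, n in zip(self.dictionary, self.counts))

col = CompressedColumn([2.0, 2.0, 5.0, 2.0, 5.0])
col.scale(10.0)
print(col.sum())  # 20*3 + 50*2 = 160.0
```

The cost of `scale` and `square` depends only on the number of distinct values, not the number of rows, which is the source of the near-instant behavior described above.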

While training a model, compressed linear algebra allows iterative model training algorithms to run in about an order of magnitude less memory. Many problems that previously required a large server will run on a laptop, which allows data scientists to work more productively. Data sets that previously required a Spark cluster will run on a single server, which can make these data sets run faster due to lower communication latency. And data sets that previously required streaming data from disk can run entirely in memory, leading to order-of-magnitude performance improvements.
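The operation at the heart of those iterative algorithms is matrix-vector multiplication executed directly on the compressed representation, without decompressing. A minimal sketch of the idea, assuming a per-column encoding that maps each distinct value to its row offsets (the data layout and function name here are illustrative, not the paper's implementation):

```python
def compressed_matvec(columns, x, n_rows):
    """Multiply an (n_rows x m) matrix by vector x, where column j is
    stored as {distinct_value: [row offsets holding that value]}."""
    y = [0.0] * n_rows
    for j, col in enumerate(columns):
        for v, offsets in col.items():
            contrib = v * x[j]      # computed once per distinct value
            for i in offsets:       # scattered to the rows that hold it
                y[i] += contrib
    return y

# The 3x2 matrix [[2, 1], [2, 1], [5, 1]] stored column-wise compressed:
cols = [{2.0: [0, 1], 5.0: [2]}, {1.0: [0, 1, 2]}]
print(compressed_matvec(cols, [10.0, 1.0], 3))  # [21.0, 21.0, 51.0]
```

Because each distinct value is multiplied only once, the work scales with the compressed size rather than the uncompressed size, so a matrix that fits in memory only in compressed form can still drive each training iteration at full speed.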

The code is already part of the SystemML nightly builds as an experimental feature. Explore the GitHub repository and, in particular, the JIRA associated with the work. And be sure to visit the official SystemML site for more information.

See below for an abstract of the paper — or view the complete PDF here.

The authors of the paper — Ahmed Elgohary, Matthias Boehm, Peter Haas, Fred Reiss, and Berthold Reinwald — will be honored in a morning ceremony at this year’s conference in New Delhi. This year marks the 42nd for this preeminent conference, which stands as "perhaps the most international (in terms of participation, technical content, organization, and location) among all comparable events."1


Large-scale machine learning (ML) algorithms are often iterative, using repeated read-only data access and I/O-bound matrix-vector multiplications to converge to an optimal model. It is crucial for performance to fit the data into single-node or distributed main memory. General-purpose, heavy- and lightweight compression techniques struggle to achieve both good compression ratios and fast decompression speed to enable block-wise uncompressed operations.

Hence, we initiate work on compressed linear algebra (CLA), in which lightweight database compression techniques are applied to matrices and then linear algebra operations such as matrix-vector multiplication are executed directly on the compressed representations. We contribute effective column compression schemes, cache-conscious operations, and an efficient sampling-based compression algorithm. Our experiments show that CLA achieves in-memory operations performance close to the uncompressed case and good compression ratios that allow us to fit larger datasets into available memory. We thereby obtain significant end-to-end performance improvements up to 26x or reduced memory requirements.

