Apache SystemML™ Paper Takes Top Prize at This Year’s VLDB Conference

The Best Paper award for this year’s International Conference on Very Large Data Bases (VLDB) goes to "Compressed Linear Algebra for Large-Scale Machine Learning", authored by a PhD candidate at the University of Maryland and four senior researchers from IBM. Their method compresses matrices and runs linear algebra operations directly on the compressed data, promising users significant speed increases with a smaller memory footprint.

In particular, the compression technology provides benefits at two distinct stages of the data science process.

Before training a model, a data scientist typically goes through multiple iterations of feature engineering. Common feature engineering tasks include examining the data with descriptive statistics and transforming the values in columns to better suit the assumptions built into different types of machine learning models. For example, a data scientist preparing to use linear models like linear regression or support vector machines might scale columns of numeric data so that they have approximately Gaussian distributions. Compressed linear algebra makes common feature engineering tasks run orders of magnitude faster. It's possible to multiply every value in a column by a constant, or to square every value in a column, nearly instantly, because those operations become metadata operations in the compressed domain. Similarly, aggregates like sums can be computed extremely fast over compressed data. (Note that this aspect is not a major focus of the paper.)
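To see why such operations can be nearly free, consider a minimal Python sketch of a dictionary-compressed column. The `CompressedColumn` class below (distinct values plus the row positions where each occurs) is a hypothetical illustration of the general idea, not SystemML's actual column-group format:

```python
class CompressedColumn:
    """Toy dictionary encoding of one matrix column: each distinct value is
    stored once, along with the list of row indices where it occurs."""

    def __init__(self, values, positions):
        self.values = values        # distinct cell values
        self.positions = positions  # positions[i]: rows holding values[i]

    def scale(self, c):
        # Multiplying every cell by a constant only rewrites the dictionary:
        # O(number of distinct values), regardless of the number of rows.
        self.values = [v * c for v in self.values]

    def square(self):
        # Likewise, squaring every cell is a dictionary-only update.
        self.values = [v * v for v in self.values]

    def colsum(self):
        # Aggregates use each value's occurrence count instead of scanning rows.
        return sum(v * len(p) for v, p in zip(self.values, self.positions))


# A million-row column with only three distinct values.
col = CompressedColumn(
    [1.0, 2.0, 5.0],
    [list(range(0, 500_000)),
     list(range(500_000, 900_000)),
     list(range(900_000, 1_000_000))],
)
col.scale(10.0)       # three multiplications, not one million
print(col.colsum())   # 18000000.0
```

The fewer distinct values a column has, the cheaper these "touch only the dictionary" operations become.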

While training a model, compressed linear algebra allows iterative model-training algorithms to run in about an order of magnitude less memory. Many problems that previously required a large server will run on a laptop, which allows data scientists to work more productively. Data sets that previously required a Spark cluster will run on a single server, which can make these workloads run faster due to lower communication latency. And data sets that previously required streaming data from disk can run entirely in memory, leading to order-of-magnitude performance improvements.
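The enabling idea, as the paper's abstract below explains, is to execute operations like matrix-vector multiplication directly on the compressed representation. Here is a minimal Python sketch over the same toy encoding as above; the name `compressed_matvec` is illustrative rather than SystemML API, and the paper's column co-coding and cache-conscious operators are considerably more refined:

```python
def compressed_matvec(columns, v, n_rows):
    """Toy X @ v where each column of X is a (values, positions) pair.
    Each distinct value is multiplied by v[j] exactly once; the inner
    loop over rows then needs only additions."""
    q = [0.0] * n_rows
    for j, (values, positions) in enumerate(columns):
        for val, rows in zip(values, positions):
            contrib = val * v[j]     # one multiply per distinct value
            for r in rows:
                q[r] += contrib      # scatter-add into the result
    return q


# Two compressed columns of a 6-row matrix.
X = [
    ([1.0, 3.0], [[0, 1, 2], [3, 4, 5]]),  # column 0: two distinct values
    ([2.0],      [[0, 1, 2, 3, 4, 5]]),    # column 1: a constant column
]
print(compressed_matvec(X, [1.0, 0.5], 6))  # [2.0, 2.0, 2.0, 4.0, 4.0, 4.0]
```

Because iterative algorithms repeat this multiplication over the same read-only matrix at every step, the savings compound across the whole training run.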

The code is already part of the SystemML nightly builds as an experimental feature. Explore the GitHub repository and, in particular, the JIRA associated with the work. And be sure to visit the official SystemML site for more information.

See below for an abstract of the paper — or view the complete PDF here.

The authors of the paper — Ahmed Elgohary, Matthias Boehm, Peter Haas, Fred Reiss, and Berthold Reinwald — will be honored in a morning ceremony at this year’s conference in New Delhi. This year marks the 42nd edition of this preeminent conference, which stands as "perhaps the most international (in terms of participation, technical content, organization, and location) among all comparable events."1


Large-scale machine learning (ML) algorithms are often iterative, using repeated read-only data access and I/O-bound matrix-vector multiplications to converge to an optimal model. It is crucial for performance to fit the data into single-node or distributed main memory. General-purpose, heavy- and lightweight compression techniques struggle to achieve both good compression ratios and fast decompression speed to enable block-wise uncompressed operations.

Hence, we initiate work on compressed linear algebra (CLA), in which lightweight database compression techniques are applied to matrices and then linear algebra operations such as matrix-vector multiplication are executed directly on the compressed representations. We contribute effective column compression schemes, cache-conscious operations, and an efficient sampling-based compression algorithm. Our experiments show that CLA achieves in-memory operations performance close to the uncompressed case and good compression ratios that allow us to fit larger datasets into available memory.

We thereby obtain significant end-to-end performance improvements up to 26x or reduced memory requirements.
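For rough intuition about the "sampling-based compression algorithm" the abstract mentions, here is a deliberately naive Python sketch of the planning question: given a small row sample, is a column likely to compress well? The function `worth_compressing` and its threshold are hypothetical; the paper derives principled sample-based estimates of compressed size.

```python
import random

def worth_compressing(column, sample_size=1000, threshold=0.5):
    """Naive sample-based planner: few distinct values in a small sample
    suggest a small dictionary and hence a good compression ratio."""
    sample = random.sample(column, min(sample_size, len(column)))
    distinct_ratio = len(set(sample)) / len(sample)
    return distinct_ratio < threshold


dense_ids = [float(i) for i in range(100_000)]             # nearly all distinct
categorical = [random.choice([0.0, 1.0, 2.5]) for _ in range(100_000)]
print(worth_compressing(dense_ids))    # False: compression unlikely to pay off
print(worth_compressing(categorical))  # True: tiny dictionary, long runs
```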
