Sparse Matrices

by Nimrod Priell

Motivation

If you do big data, you want to know about sparse matrices. In many contexts the data you have is represented as matrices or tables, and in many cases these have a lot of entries with a value of 0 (or no value, which is imputed as zero – not always a benign practice). In machine learning you run into these whenever you have categorical features (also known as factors) – i.e. a feature that can take a value from a predefined set of values (for example, if you’re classifying cars, the type of the car – ‘Sports’, ‘Compact’ or ‘SUV’ – might be a factor). These are usually encoded in the feature matrices used by your learning algorithms with one column per category, holding a value of 1 if that instance (row) belongs to the category and 0 if it doesn’t. Thus, if each row belongs to only one category, it will contain at least (number of categories – 1) zeroes.

Another common case is natural language processing, where you tokenize huge documents and then count the occurrences (or use TF/IDF scoring) of the tokens in each document. You end up with a dictionary of a few thousand or tens of thousands of terms (more if you use n-grams), but each document only contains a small percentage of them. A third interesting case is representing graphs as adjacency matrices, where for most networks the connectivity is quite low, resulting in a very sparse matrix.

The reason you should care is that keeping these matrices in their full splendor requires huge amounts of memory: if you hold data for 100,000 documents with 10,000 features each (hardly an extreme case), each feature encoded as an 8-byte double, you have an 8GB problem. If each document on average has only 5% of the features, there is really just 400MB of information there (and although you’d need a constant multiple of this to actually hold the sparse representation, it is still a manageable amount). Even with various workarounds, you usually just can’t afford to operate on the full matrices in memory. The issue here is one of feasibility more than runtime, although you do get major performance gains out of using sparse matrices in many cases.

For performance, operations on sparse matrices are best left to production-grade, field-tested C or Java libraries – I do not recommend trying to implement these on your own; use MATLAB, Matrix-toolkit-java, SciPy, etc. instead. But if your code is written on some other platform (maybe Ruby) and you need to get these matrices into your scientific code, you still want to read the features from somewhere (your DB, or documents you tokenize), build a matrix, and perhaps do rudimentary operations on it – selecting a sub-matrix for splitting into cross-validation sets or for sub-bagging, perhaps, or feature scaling. You want to be able to hold these matrices in memory and work with them.

The go-to book for working with sparse matrices is Direct Methods for Sparse Linear Systems. I highly recommend it – it’s an indispensable resource for all kinds of algorithms on these matrices and contains the C code that is the essence of MATLAB’s sparse matrix mojo. Here I just briefly discuss the considerations and methods; the book gives the details for moving between representations and operating on them.

Representation

So the main thing we’d like to deal with is how to represent these matrices, in memory or in a file. The natural idea that comes to mind is what is called a triplet representation: each non-zero entry is written down as a row index, a column index and a value. The resulting size of the representation is 3Z, where Z is the number of non-zero entries in the matrix – a quantity you will see again and again when evaluating the scale and asymptotics of operations on sparse matrices. This format is trivial to implement.
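
To make this concrete, here is a minimal sketch in plain Ruby – a hypothetical TripletMatrix class, not the API of any particular library – that just keeps three parallel arrays:

```ruby
# A minimal triplet (coordinate) sparse matrix: three parallel arrays of
# row indices, column indices and values. Zero entries are never stored.
class TripletMatrix
  attr_reader :rows, :cols, :vals, :n_rows, :n_cols

  def initialize(n_rows, n_cols)
    @n_rows, @n_cols = n_rows, n_cols
    @rows, @cols, @vals = [], [], []
  end

  # Record a non-zero entry (0-based indices).
  def add(i, j, value)
    @rows << i
    @cols << j
    @vals << value
  end

  # Z, the number of stored (non-zero) entries.
  def nnz
    @vals.size
  end
end
```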

However, it turns out this is not the best format for many of the operations you commonly want to do. In particular, you either end up using hashes to find your way around the matrix, or you have to scan the whole triplet list in order to operate on a specific row or column.

What does work well for almost all operations are the CSC and CSR representations. The two are mirror images of each other – one is geared towards row operations and the other towards column operations. Both involve three arrays: two of size Z and one of size N + 1, where N is one of the dimensions of your matrix (the number of rows for CSR, the number of columns for CSC). So they usually turn out to be a little smaller than the triplet representation, and the extra array acts as an index into the other two, guaranteeing that operations like picking out certain rows or certain columns are easy to implement efficiently. Consult better sources for the full details of these representations; I’m just pointing them out.
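
Here is a sketch of compressing the TripletMatrix above into CSC, loosely following the count-then-scatter approach used by CSparse (hypothetical code; it assumes the triplet list contains no duplicate entries, which a real implementation would sum):

```ruby
# Compressed Sparse Column: col_ptr has n_cols + 1 entries, and the row
# indices and values of column j live at positions col_ptr[j]...col_ptr[j+1].
CSCMatrix = Struct.new(:n_rows, :n_cols, :col_ptr, :row_idx, :vals)

def triplet_to_csc(t)
  counts = Array.new(t.n_cols, 0)
  t.cols.each { |j| counts[j] += 1 }               # entries per column

  col_ptr = [0]
  counts.each { |c| col_ptr << col_ptr.last + c }  # cumulative sum

  row_idx = Array.new(t.nnz)
  vals    = Array.new(t.nnz)
  nexts   = col_ptr.dup                            # next free slot per column
  t.nnz.times do |k|
    j = t.cols[k]
    p = nexts[j]
    row_idx[p] = t.rows[k]
    vals[p]    = t.vals[k]
    nexts[j]  += 1
  end

  CSCMatrix.new(t.n_rows, t.n_cols, col_ptr, row_idx, vals)
end

# Once col_ptr is built, picking out a column is a contiguous slice:
def column(csc, j)
  range = csc.col_ptr[j]...csc.col_ptr[j + 1]
  csc.row_idx[range].zip(csc.vals[range])          # [[row, value], ...]
end
```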

Pat yourself on the back if you figured out that you can get below 2Z + N (or 2Z + M) by viewing the matrix as a one-dimensional array with N*M entries and holding just two arrays, one with the non-zero values and one with their indices into that one-dimensional array; but don’t bother implementing it. Saving the extra N or M of space is negligible compared to Z (in the 8GB example above, it would amount to saving 40KB or 400KB), and you would either have to access everything through a hash (which would end up taking all the space you saved, in the extra structure required) or scan the whole array to get a single entry.

There are a fair number of standards for sparse matrix formats, such as the MatrixMarket format or Harwell-Boeing. But the underlying implementation is always one of the three – triplets, CSR or CSC – except when the matrix is known to have a very specific structure such as diagonal or band matrices, which is usually not relevant for general problems, or machine learning problems in particular. As a case in point, standard machine learning libraries like LibSVM or LibLinear also take their input in a kind of CSR format. You might also find that your problem requires special cases not handled by these formats, such as additional arrays identifying the rows (row names) or the features (column names), and these need to be handled appropriately as well.

Hence, unless you integrate specifically with software that uses one of these formats, you will probably be better off ignoring them and using whatever is appropriate for you. You do want, however, a general library for reading the formats you decided on, working with them in memory and changing the representation between the formats. You can always convert from a basic triplet representation to any of the formats using SMC. I am putting together a little library with Ruby code for working with these representations up on GitHub. Feel free to contribute – Ruby seems to be lacking support for these kinds of things.
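
As an illustration of how thin these formats are over the triplet idea, here is a minimal writer for the MatrixMarket coordinate format, using the TripletMatrix sketch above (real-valued, general matrices only; the format uses 1-based indices and has several other variants this ignores):

```ruby
# Write a TripletMatrix in MatrixMarket coordinate format: a header line,
# a size line "rows cols nnz", then one "row col value" line per entry.
def write_matrix_market(t, path)
  File.open(path, 'w') do |f|
    f.puts '%%MatrixMarket matrix coordinate real general'
    f.puts "#{t.n_rows} #{t.n_cols} #{t.nnz}"
    t.nnz.times do |k|
      f.puts "#{t.rows[k] + 1} #{t.cols[k] + 1} #{t.vals[k]}"
    end
  end
end
```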

There are more technicalities involved once you deal with really huge matrices. For example, you might not be able to guarantee that you can allocate contiguous arrays of size Z in memory, or that indices into them fit in a native integer. The Ruby code ignores these cases, but if you work at that scale you will need to work around them, for instance by building each of the arrays in the representation from smaller chunks.

The main issue with sparse matrices is keeping them sparse and never representing them fully in memory. For example, mean-centering a sparse matrix is a bad idea, as you lose the sparsity altogether: subtracting the column means turns nearly every zero entry into a non-zero one. Changing a single zero entry of the matrix to non-zero (sometimes referred to as changing the sparsity pattern of the matrix) is hard in CSR or CSC, and you are better off converting the matrix back to triplets, adding a bunch of triplet entries, and compressing it again. The same goes for initially building the matrix: build it from your data source as a triplet matrix, and then compress it to gain the size and efficiency of operations. I recommend you read through my code, adapted from CSparse, to see how that conversion is done.
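
Continuing the hypothetical sketch from above, going back from CSC to triplets is just a gather over the column pointers, which makes the decompress/edit/recompress round-trip straightforward:

```ruby
# Expand a CSCMatrix back into a TripletMatrix so entries can be added,
# then recompress. Editing the compressed form in place would mean shifting
# everything past the insertion point in all three arrays.
def csc_to_triplet(csc)
  t = TripletMatrix.new(csc.n_rows, csc.n_cols)
  csc.n_cols.times do |j|
    (csc.col_ptr[j]...csc.col_ptr[j + 1]).each do |p|
      t.add(csc.row_idx[p], j, csc.vals[p])
    end
  end
  t
end

# Usage, given a CSCMatrix csc built with triplet_to_csc above:
#   t = csc_to_triplet(csc)
#   t.add(17, 42, 3.0)        # flip a formerly-zero entry to non-zero
#   csc = triplet_to_csc(t)   # compress again for efficient column access
```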

How do I know if my matrix is sparse?

There are many measures for the sparsity of a matrix, but probably the most useful one is the simplest: the ratio of zero entries to the total number of entries in the matrix, N*M – equivalently, 1 - Z/(N*M), with Z the non-zero count as before. This gives a measure in [0, 1] where 1 is the zero matrix and 0 is a completely full matrix. Given the representations above, you can see that in terms of space it becomes worthwhile to hold the matrix as CSR or CSC once the sparsity is a little over 0.5 (roughly 1/(2N) or 1/(2M) over 0.5, if indices and values take comparable space), and in triplet format once it’s over 2/3. Even then, I would still treat these matrices as non-sparse, and keep holding them that way as long as you can. When they start taking up 2GB or more of memory, you’ll know you have no choice but to use sparse representations (or move to bigger servers).
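
In terms of the sketches above, this measure is a one-liner (Z is t.nnz):

```ruby
# Fraction of zero entries: 1.0 for the zero matrix, 0.0 for a full one.
def sparsity(t)
  1.0 - t.nnz.to_f / (t.n_rows * t.n_cols)
end
```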
