
What exactly does column-oriented mean?

It means that all the values in a column are stored together. This is in contrast to row-oriented, in which all the values for a row are stored together. Which organization is better depends on the access pattern of your application.

Many operations are more efficient with a column-oriented approach. In particular, operations that need to access a sequence of values from a particular column are much faster. If all the values in a column have the same size (which is true, by design, in kdb), things get even better. This type of access pattern is typical of the applications for which q and kdb are used.
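As a small sketch of the contrast (the table and column names here are illustrative, not from the examples below):

```q
/ a q table: each column is stored as one contiguous vector
trades: ([] price: 100.0 101.5 99.75; size: 200 300 150)

/ column access reads one contiguous vector of uniform 8-byte floats
avg trades`price

/ a "row" does not exist as stored data; it is assembled
/ on demand from the columns when you index the table
trades 0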

To make this concrete, let's examine a column of 64-bit, floating point numbers:

q).Q.w[] `used
q)t: ([] f: 1000000 ? 1.0)
q).Q.w[] `used

As you can see, the memory needed to hold one million 8-byte values is only a little over 8MB. That's because the data are being stored sequentially in an array. To clarify, let's create another table:

q)u: update g: 1000000 ? 5.0 from t
q).Q.w[] `used

Both t and u are sharing the column f. If q organized its data in rows, the memory usage would have gone up another 8MB. Another way to confirm this is to take a look at k.h.
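One way to check the sharing is to measure total growth across both steps. This is a sketch, not exact accounting: .Q.w[]`used includes allocator overhead, so the numbers will not be exact, but the shape of the result is the point. If u had copied f, the two tables together would need roughly 24MB; with sharing, expect roughly 16MB:

```q
q)before: .Q.w[]`used
q)t: ([] f: 1000000 ? 1.0)
q)u: update g: 1000000 ? 5.0 from t
q)(.Q.w[]`used) - before    / ~16MB: one f plus one g, not 24MB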

Now let's see what happens when we write the table to disk:

q)`:t/ set t
q)\ls -l t
"total 15632"
"-rw-r--r-- 1 kdbfaq staff 8000016 May 29 19:57 f"

16 bytes of overhead. Clearly, all of the numbers are being stored sequentially on disk. Efficiency is about avoiding unnecessary work, and here we see that q does exactly what needs to be done when reading and writing a column - no more, no less.
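A practical consequence, sketched below, is that a single column can be read back without touching the rest of the table (this assumes the splayed table written above):

```q
q)f: get `:t/f    / read just the one column file
q)count f         / 1000000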

OK, so this approach is space efficient. How does this data layout translate into speed?

If we ask q to sum all 1 million numbers, having the entire list packed tightly together in memory is a tremendous advantage over a row-oriented organization, because we'll encounter fewer misses at every stage of the memory hierarchy. Avoiding cache misses and page faults is essential to getting performance out of your machine.
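To put a number on it, \t reports elapsed milliseconds. Timings are machine-dependent, but a sequential pass over a contiguous vector like this is typically limited by memory bandwidth rather than by the arithmetic:

```q
q)v: 1000000 ? 1.0
q)\t sum v             / elapsed ms for one sequential pass
q)\t do[100; sum v]    / repeat the sum to get a stabler measurement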

Moreover, doing math on a long list of numbers that sit together in memory is exactly the kind of work modern CPU instruction sets have special features to handle, including instructions that prefetch array elements shortly before they are needed. Although those features were originally created to improve PC multimedia performance, they turned out to be great for statistics as well. The same combination of locality and CPU support also lets column-oriented systems perform linear searches (e.g., in where clauses on unindexed columns) faster than indexed searches, with their attendant branch mispredictions, up to surprisingly large row counts.
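A minimal example of such a linear scan, reusing the table t from above (the filter values are arbitrary):

```q
q)\t select from t where f within 0.25 0.5    / one sequential scan of the column vector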

A thorough introduction to these topics is given in Ulrich Drepper's 'What Every Programmer Should Know About Memory' and Scott Meyers's 2011 talk 'CPU Caches and Why You Care'.


Reader Comments (1)

-16! returns the reference count of an object and can be used to illustrate your columnar storage, e.g.

q)t: ([] f: 1000000 ? 1.0)
q)-16! t`f                            / reference count of the column f
q)u: update g: 1000000 ? 5.0 from t
q)-16! t`f                            / goes up: t and u now share f
q)delete f from `u
q)-16! t`f                            / back down once u drops its reference

/ John

March 31, 2013 | Unregistered Commenter John
