Short answer: .Q.view partition_list

By default, all partitions of a historical kdb database are available. The .Q.view function lets a kdb process restrict itself to an arbitrary subset of those partitions.

The following example restricts all operations of the database instance to the most recent partition date (typically the previous business day):

q)yesterday: last date
q).Q.view yesterday
q)count date
1
q)yesterday ~ value exec from select distinct date from trade
1b
q)
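
If you later need the whole database back, calling .Q.view with no argument should restore the default view of all partitions (a quick sketch; verify on your own instance):

q).Q.view[]    / reset: all partitions visible again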

Here is a more realistic example for a date-partitioned historical database, where the viewable partitions are limited to the current year to date:

q)start_date: "D" $ string[`year $ .z.D], ".01.01"
q)end_date: last date
q).Q.view date where date within (start_date; end_date)
q)count select from trade where not date within (start_date; end_date)
0
q)
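
A quick way to double-check the restriction, assuming .Q.pv (the list of partition values currently in view), is to inspect it directly:

q).Q.pv    / partition values the process can currently see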

See also: .Q.pn

Short answer: Use 7-zip. It has large-file support.

$ unzip verylargefile.zip -d /destination_dir/
I/O error: File too large

Make sure your file system supports large files (LFS).

$ 7za e -o/destination_dir/ verylargefile.zip
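
Note that 7-zip's e command extracts everything into a single directory; if you need to preserve the archive's internal directory structure, use x instead:

$ 7za x -o/destination_dir/ verylargefile.zip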

See also: 7-zip usage examples.

It means that all the values in a column are stored together. This is in contrast to a row-oriented layout, in which all the values for a row are stored together. Which organization is better depends on the access pattern of your application.
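
To see this in q itself, note that a table is just a flipped dictionary of column vectors, so each column lives in its own contiguous list (a tiny sketch with made-up column names):

q)tab: flip `sym`price!(`a`b`a; 10.0 10.5 9.9)    / a table is a flipped column dictionary
q)tab`price                                       / one column, one contiguous vector
10 10.5 9.9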

Many operations are more efficient with a column-oriented approach. In particular, operations that need to access a sequence of values from a particular column are much faster. If all the values in a column have the same size (which is true, by design, in kdb), things get even better. This type of access pattern is typical of the applications for which q and kdb are used.

To make this concrete, let’s examine a column of 64-bit floating-point numbers:

q).Q.w[] `used
108464j
q)t: ([] f: 1000000 ? 1.0)
q).Q.w[] `used
8497328j
q)

As you can see, the memory needed to hold one million 8-byte values is only a little over 8MB. That’s because the data are being stored sequentially in an array. To clarify, let’s create another table:

q)u: update g: 1000000 ? 5.0 from t
q).Q.w[] `used
16885952j
q)

Both t and u are sharing the column f. If q organized its data in rows, the memory usage would have gone up another 8MB. Another way to confirm this is to take a look at k.h.
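
One hedged way to peek at the sharing, assuming the -16! internal function that reports an object's reference count, is to ask how many references the f vector has; it should be greater than 1 while both tables exist:

q)-16! t[`f]    / reference count of the shared f vector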

Now let’s see what happens when we write the table to disk:

q)`:t/ set t
`:t/
q)\ls -l t
"total 15632"
"-rw-r--r--  1 kdbfaq  staff  8000016 May 29 19:57 f"
q)

The file holds 8,000,016 bytes for 1,000,000 × 8 = 8,000,000 bytes of data, i.e., 16 bytes of overhead. Clearly, all of the numbers are being stored sequentially on disk. Efficiency is about avoiding unnecessary work, and here we see that q does exactly what needs to be done when reading and writing a column – no more, no less.

OK, so this approach is space efficient. How does this data layout translate into speed?

If we ask q to sum all 1 million numbers, having the entire list packed tightly together in memory is a tremendous advantage over a row-oriented organization, because we’ll encounter fewer misses at every stage of the memory hierarchy. Avoiding cache misses and page faults is essential to getting performance out of your machine.
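
You can get a rough feel for this on your own hardware with \t, which times repeated evaluation in milliseconds (a sketch; absolute timings will vary):

q)\t:100 sum t`f    / time 100 sums over the million-element column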

Moreover, doing math on a long list of numbers that are all together in memory is a problem that modern CPU instruction sets have special features to handle, including instructions to prefetch array elements that will be needed in the near future. Although those features were originally created to improve PC multimedia performance, they turned out to be great for statistics as well. In addition, the same synergy of locality and CPU features enables column-oriented systems to perform linear searches (e.g., in where clauses on unindexed columns) faster than indexed searches (with their attendant branch prediction failures) up to astonishing row counts.
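
The same trick gives a sense of a linear scan in a where clause on the unindexed column f (again just a sketch; results depend on your machine):

q)\t:100 select from t where f within 0.4 0.6    / full scan of an unindexed column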

A thorough introduction to these topics is given in Ulrich Drepper’s ‘What every programmer should know about memory’ and Scott Meyers’s 2011 talk on ‘CPU caches and why you care’.