Btree access method specific configuration

There are a series of configuration tasks which you can perform when using the Btree access method. They are described in the following sections.

Btree comparison

The Btree data structure is a sorted, balanced tree structure storing associated key/data pairs. By default, the sort order is lexicographical, with shorter keys collating before longer keys. The user can specify the sort order for the Btree by using the DB->set_bt_compare() method.

Sort routines are passed pointers to keys as arguments. The keys are represented as DBT structures. The routine must return an integer less than, equal to, or greater than zero if the first argument is considered to be respectively less than, equal to, or greater than the second argument. The only fields that the routines may examine in the DBT structures are the data and size fields.

An example routine that might be used to sort integer keys in the database is as follows:

  int
  compare_int(DB *dbp, const DBT *a, const DBT *b, size_t *locp)
  {
      int ai, bi;

      locp = NULL;                  /* The locp parameter is unused. */
      /*
       * Returns:
       *   < 0 if a < b
       *   = 0 if a = b
       *   > 0 if a > b
       */
      memcpy(&ai, a->data, sizeof(int));
      memcpy(&bi, b->data, sizeof(int));
      return (ai - bi);
  }

Note that the data must first be copied into memory that is appropriately aligned, as Berkeley DB does not guarantee any kind of alignment of the underlying data, including for comparison routines. When writing comparison routines, remember that databases created on machines of different architectures may have different integer byte orders, for which your code may need to compensate.
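Once written, the comparison routine must be registered before the database is created or opened. The following fragment is a minimal sketch (assuming a DB handle dbp created with db_create() and an int variable ret) showing how compare_int might be associated with the database using DB->set_bt_compare():

  /* Register the comparison routine; this must precede DB->open(). */
  if ((ret = dbp->set_bt_compare(dbp, compare_int)) != 0) {
      dbp->err(dbp, ret, "DB->set_bt_compare");
      return (ret);
  }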

An example routine that might be used to sort keys based on the first five bytes of the key (ignoring any subsequent bytes) is as follows:

  int
  compare_dbt(DB *dbp, const DBT *a, const DBT *b, size_t *locp)
  {
      int len;
      u_char *p1, *p2;

      locp = NULL;                  /* The locp parameter is unused. */
      /*
       * Returns:
       *   < 0 if a < b
       *   = 0 if a = b
       *   > 0 if a > b
       */
      for (p1 = a->data, p2 = b->data, len = 5; len--; ++p1, ++p2)
          if (*p1 != *p2)
              return ((long)*p1 - (long)*p2);
      return (0);
  }

All comparison functions must cause the keys in the database to be well-ordered. The most important implication of being well-ordered is that the key relations must be transitive, that is, if key A is less than key B, and key B is less than key C, then the comparison routine must also return that key A is less than key C.

In some applications it is reasonable for a comparison function not to examine an entire key, which implies that partial keys may be specified to the Berkeley DB interfaces. When partial keys are specified, interfaces that retrieve data items based on a user-specified key (for example, DB->get() and DBC->get() with the DB_SET flag) will modify the user-specified key by returning the actual key stored in the database.

Btree prefix comparison

The Berkeley DB Btree implementation maximizes the number of keys that can be stored on an internal page by storing only as many bytes of each key as are necessary to distinguish it from adjacent keys. The prefix comparison routine is what determines this minimum number of bytes (that is, the length of the unique prefix), that must be stored. A prefix comparison function for the Btree can be specified by calling DB->set_bt_prefix().

The prefix comparison routine must be compatible with the overall comparison function of the Btree, since what distinguishes any two keys depends entirely on the function used to compare them. This means that if a prefix comparison routine is specified by the application, a compatible overall comparison routine must also have been specified.

Prefix comparison routines are passed pointers to keys as arguments. The keys are represented as DBT structures. The only fields the routines may examine in the DBT structures are the data and size fields.

The prefix comparison function must return the number of bytes necessary to distinguish the two keys. If the keys are identical (equal in content and equal in length), the length should be returned. If the keys are equal up to the smaller of the two lengths, then the length of the smaller key plus 1 should be returned.

An example prefix comparison routine follows:

  size_t
  compare_prefix(DB *dbp, const DBT *a, const DBT *b)
  {
      size_t cnt, len;
      u_int8_t *p1, *p2;

      cnt = 1;
      len = a->size > b->size ? b->size : a->size;
      for (p1 = a->data, p2 = b->data; len--; ++p1, ++p2, ++cnt)
          if (*p1 != *p2)
              return (cnt);
      /*
       * They match up to the smaller of the two sizes.
       * Collate the longer after the shorter.
       */
      if (a->size < b->size)
          return (a->size + 1);
      if (b->size < a->size)
          return (b->size + 1);
      return (b->size);
  }

The usefulness of this functionality is data-dependent, but in some data sets can produce significantly reduced tree sizes and faster search times.
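As with the overall comparison function, the prefix comparison routine must be registered before the database is opened. A minimal sketch (again assuming a DB handle dbp and an int variable ret, and that a compatible comparison routine has also been configured):

  /* Register the prefix comparison routine before DB->open(). */
  if ((ret = dbp->set_bt_prefix(dbp, compare_prefix)) != 0) {
      dbp->err(dbp, ret, "DB->set_bt_prefix");
      return (ret);
  }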

Minimum keys per page

The number of keys stored on each page affects the size of a Btree and how it is maintained. Therefore, it also affects the retrieval and search performance of the tree. For each Btree, Berkeley DB computes a maximum key and data size. This size is a function of the page size and the fact that at least two key/data pairs must fit on any Btree page. Whenever key or data items exceed the calculated size, they are stored on overflow pages instead of in the standard Btree leaf pages.

Applications may use the DB->set_bt_minkey() method to change the minimum number of keys that must fit on a Btree page from two to another value. Altering this value in turn alters the on-page maximum size, and can be used to force key and data items which would normally be stored in the Btree leaf pages onto overflow pages.

Some data sets can benefit from this tuning. For example, consider an application using large page sizes, with a data set almost entirely consisting of small key and data items, but with a few large items. By setting the minimum number of keys that must fit on a page, the application can force the outsized items to be stored on overflow pages. That in turn can potentially keep the tree more compact, that is, with fewer internal levels to traverse during searches.

The following calculation is similar to the one performed by the Btree implementation. (The minimum_keys value is multiplied by 2 because each key/data pair requires 2 slots on a Btree page.)

  maximum_size = page_size / (minimum_keys * 2)

Using this calculation, if the page size is 8KB and the default minimum_keys value of 2 is used, then any key or data items larger than 2KB will be forced to an overflow page. If an application were to specify a minimum_keys value of 100, then any key or data items larger than roughly 40 bytes would be forced to overflow pages.
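The following fragment is a minimal sketch of this tuning (assuming an unopened DB handle dbp, an int variable ret, and an 8KB page size); it raises the minimum number of keys per page to 100, forcing items larger than roughly 40 bytes onto overflow pages:

  /* Use 8KB pages. */
  if ((ret = dbp->set_pagesize(dbp, 8 * 1024)) != 0) {
      dbp->err(dbp, ret, "DB->set_pagesize");
      return (ret);
  }
  /*
   * Require at least 100 key/data pairs per page; items larger than
   * roughly 8192 / (100 * 2) = 40 bytes are stored on overflow pages.
   */
  if ((ret = dbp->set_bt_minkey(dbp, 100)) != 0) {
      dbp->err(dbp, ret, "DB->set_bt_minkey");
      return (ret);
  }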

It is important to remember that accesses to overflow pages do not perform as well as accesses to the standard Btree leaf pages, and so setting the value incorrectly can result in overusing overflow pages and decreasing the application’s overall performance.

Retrieving Btree records by logical record number

The Btree access method optionally supports retrieval by logical record numbers. To configure a Btree to support record numbers, call the DB->set_flags() method with the DB_RECNUM flag.
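For example, a minimal sketch of enabling record number support (assuming an unopened DB handle dbp and an int variable ret):

  /* Enable record number support; this must precede DB->open(). */
  if ((ret = dbp->set_flags(dbp, DB_RECNUM)) != 0) {
      dbp->err(dbp, ret, "DB->set_flags: DB_RECNUM");
      return (ret);
  }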

Configuring a Btree for record numbers should not be done lightly. While often useful, it may significantly slow down the speed at which items can be stored into the database, and can severely impact application throughput. Generally it should be avoided in trees with a need for high write concurrency.

To retrieve by record number, use the DB_SET_RECNO flag to the DB->get() and DBC->get() methods. The following is an example of a routine that displays the data item for a Btree database created with the DB_RECNUM option.

  int
  rec_display(DB *dbp, db_recno_t recno)
  {
      DBT key, data;
      int ret;

      memset(&key, 0, sizeof(key));
      key.data = &recno;
      key.size = sizeof(recno);
      memset(&data, 0, sizeof(data));
      if ((ret = dbp->get(dbp, NULL, &key, &data, DB_SET_RECNO)) != 0)
          return (ret);

      printf("data for %lu: %.*s\n",
          (u_long)recno, (int)data.size, (char *)data.data);
      return (0);
  }

To determine a key’s record number, use the DB_GET_RECNO flag to the DBC->get() method. The following is an example of a routine that displays the record number associated with a specific key.

  int
  recno_display(DB *dbp, char *keyvalue)
  {
      DBC *dbcp;
      DBT key, data;
      db_recno_t recno;
      int ret, t_ret;

      /* Acquire a cursor for the database. */
      if ((ret = dbp->cursor(dbp, NULL, &dbcp, 0)) != 0) {
          dbp->err(dbp, ret, "DB->cursor");
          return (ret);
      }

      /* Position the cursor. */
      memset(&key, 0, sizeof(key));
      key.data = keyvalue;
      key.size = strlen(keyvalue);
      memset(&data, 0, sizeof(data));
      if ((ret = dbcp->get(dbcp, &key, &data, DB_SET)) != 0) {
          dbp->err(dbp, ret, "DBC->get(DB_SET): %s", keyvalue);
          goto err;
      }

      /*
       * Request the record number, and store it into appropriately
       * sized and aligned local memory.
       */
      memset(&data, 0, sizeof(data));
      data.data = &recno;
      data.ulen = sizeof(recno);
      data.flags = DB_DBT_USERMEM;
      if ((ret = dbcp->get(dbcp, &key, &data, DB_GET_RECNO)) != 0) {
          dbp->err(dbp, ret, "DBC->get(DB_GET_RECNO)");
          goto err;
      }

      printf("record number for requested key was %lu\n", (u_long)recno);

  err: /* Close the cursor. */
      if ((t_ret = dbcp->close(dbcp)) != 0) {
          if (ret == 0)
              ret = t_ret;
          dbp->err(dbp, ret, "DBC->close");
      }
      return (ret);
  }

Compression

The Btree access method supports the automatic compression of key/data pairs upon their insertion into the database. The key/data pairs are decompressed before they are returned to the application, making an application’s interaction with a compressed database identical to that for a non-compressed database. To configure Berkeley DB for compression, call the DB->set_bt_compress() method and specify custom compression and decompression functions. If DB->set_bt_compress() is called with NULL compression and decompression functions, Berkeley DB will use its default compression functions.

Note

Compression only works with the Btree access method, and then only so long as your database is not configured for unsorted duplicates.

Note

The default compression function is not guaranteed to reduce the size of the on-disk database in every case. It has been tested and shown to work well with English-language text. Of course, in order to determine if the default compression algorithm is beneficial for your application, it is important to test both the final size and the performance using a representative set of data and access patterns.

The default compression function performs prefix compression on each key added to the database. This means that, for a key n bytes in length, the first i bytes that match the first i bytes of the previous key exactly are omitted and only the final n-i bytes are stored in the database. For example, if the previous key is "blueberry" and the key being stored is "blueprint", only the final five bytes ("print") are stored. If the bytes of the key being stored match the bytes of the previous key exactly, then the same prefix compression algorithm is applied to the data value being stored. To use Berkeley DB's default compression behavior, both the default compression and decompression functions must be used.

For example, to configure your database for default compression:

  DB *dbp = NULL;
  DB_ENV *envp = NULL;
  u_int32_t db_flags;
  const char *file_name = "mydb.db";
  int ret;

  ...

  /* Skipping environment open to shorten this example. */

  /* Initialize the DB handle. */
  ret = db_create(&dbp, envp, 0);
  if (ret != 0) {
      fprintf(stderr, "%s\n", db_strerror(ret));
      return (EXIT_FAILURE);
  }

  /* Turn on default data compression. */
  dbp->set_bt_compress(dbp, NULL, NULL);

  /* Now open the database. */
  db_flags = DB_CREATE;       /* Allow database creation */

  ret = dbp->open(dbp,        /* Pointer to the database */
      NULL,                   /* Txn pointer */
      file_name,               /* File name */
      NULL,                   /* Logical db name */
      DB_BTREE,                /* Database type (using btree) */
      db_flags,                /* Open flags */
      0);                      /* File mode. Using defaults */
  if (ret != 0) {
      dbp->err(dbp, ret, "Database '%s' open failed", file_name);
      return (EXIT_FAILURE);
  }

Custom compression

An application wishing to perform its own compression may supply compression and decompression functions which will be called instead of Berkeley DB's default functions. The compression function is passed five DBT structures:

  • The key and data immediately preceding the key/data pair that is being stored.

  • The key and data being stored in the tree.

  • The buffer where the compressed data should be written.

The total size of the buffer used to store the compressed data is identified in the destination DBT's ulen field. If the compressed data cannot fit in the buffer, the compression function should store the amount of space needed in the destination DBT's size field and then return DB_BUFFER_SMALL. Berkeley DB will subsequently re-call the compression function with the required amount of space allocated in the compression data buffer.

Multiple compressed key/data pairs will likely be written to the same buffer and the compression function should take steps to ensure it does not overwrite data.

For example, the following code fragments illustrate the use of a custom compression routine. This code is actually a much simplified example of the default compression provided by Berkeley DB. It does simple prefix compression on the key part of the data.

  int compress(DB *dbp, const DBT *prevKey, const DBT *prevData,
      const DBT *key, const DBT *data, DBT *dest)
  {
      u_int8_t *dest_data_ptr;
      const u_int8_t *key_data, *prevKey_data;
      size_t len, prefix, suffix;

      /* Find the length of the prefix shared with the previous key. */
      key_data = (const u_int8_t *)key->data;
      prevKey_data = (const u_int8_t *)prevKey->data;
      len = key->size > prevKey->size ? prevKey->size : key->size;
      for (; len-- && *key_data == *prevKey_data; ++key_data, ++prevKey_data)
          continue;

      prefix = (size_t)(key_data - (u_int8_t *)key->data);
      suffix = key->size - prefix;

      /* Check that we have enough space in dest. */
      dest->size = (u_int32_t)(__db_compress_count_int(prefix) +
          __db_compress_count_int(suffix) +
          __db_compress_count_int(data->size) + suffix + data->size);
      if (dest->size > dest->ulen)
          return (DB_BUFFER_SMALL);

      /* prefix length */
      dest_data_ptr = (u_int8_t *)dest->data;
      dest_data_ptr += __db_compress_int(dest_data_ptr, prefix);

      /* suffix length */
      dest_data_ptr += __db_compress_int(dest_data_ptr, suffix);

      /* data length */
      dest_data_ptr += __db_compress_int(dest_data_ptr, data->size);

      /* suffix */
      memcpy(dest_data_ptr, key_data, suffix);
      dest_data_ptr += suffix;

      /* data */
      memcpy(dest_data_ptr, data->data, data->size);

      return (0);
  }

The corresponding decompression function is likewise passed five DBT structures:

  • The key and data DBTs immediately preceding the decompressed key and data.

  • The compressed data from the database.

  • One DBT to store the decompressed key, and another to store the decompressed data.

Because the compression of record X relies upon record X-1, the decompression function can be called repeatedly to linearly decompress a set of records stored in the compressed buffer.

The total size of the buffer available to store the decompressed data is identified in the destination DBT's ulen field. If the decompressed data cannot fit in the buffer, the decompression function should store the amount of space needed in the destination DBT's size field and then return DB_BUFFER_SMALL. Berkeley DB will subsequently re-call the decompression function with the required amount of space allocated in the decompression data buffer.

For example, the decompression routine that corresponds to the example compression routine provided above is:

  int decompress(DB *dbp, const DBT *prevKey, const DBT *prevData,
      DBT *compressed, DBT *destKey, DBT *destData)
  {
      u_int8_t *comp_data, *dest_data;
      u_int32_t prefix, suffix, size;

      /* Unmarshal prefix, suffix and data length. */
      comp_data = (u_int8_t *)compressed->data;
      size = __db_decompress_count_int(comp_data);
      if (size > compressed->size)
          return (EINVAL);
      comp_data += __db_decompress_int32(comp_data, &prefix);

      size += __db_decompress_count_int(comp_data);
      if (size > compressed->size)
          return (EINVAL);
      comp_data += __db_decompress_int32(comp_data, &suffix);

      size += __db_decompress_count_int(comp_data);
      if (size > compressed->size)
          return (EINVAL);
      comp_data += __db_decompress_int32(comp_data, &destData->size);

      /* Check destination lengths. */
      destKey->size = prefix + suffix;
      if (destKey->size > destKey->ulen || destData->size > destData->ulen)
          return (DB_BUFFER_SMALL);

      /* Write the prefix. */
      if (prefix > prevKey->size)
          return (EINVAL);
      dest_data = (u_int8_t *)destKey->data;
      memcpy(dest_data, prevKey->data, prefix);
      dest_data += prefix;

      /* Write the suffix. */
      size += suffix;
      if (size > compressed->size)
          return (EINVAL);
      memcpy(dest_data, comp_data, suffix);
      comp_data += suffix;

      /* Write the data. */
      size += destData->size;
      if (size > compressed->size)
          return (EINVAL);
      memcpy(destData->data, comp_data, destData->size);
      comp_data += destData->size;

      /* Return bytes read. */
      compressed->size =
          (u_int32_t)(comp_data - (u_int8_t *)compressed->data);
      return (0);
  }
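Once written, the custom routines are registered with DB->set_bt_compress() before the database is opened. A minimal sketch (assuming the compress and decompress functions above, an unopened DB handle dbp, and an int variable ret):

  /* Register the custom compression and decompression callbacks. */
  if ((ret = dbp->set_bt_compress(dbp, compress, decompress)) != 0) {
      dbp->err(dbp, ret, "DB->set_bt_compress");
      return (ret);
  }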

Programmer Notes

As you use compression with your databases, be aware of the following:

  • Compression works by placing key/data pairs from a single database page into a single block of compressed data. This is true whether you use Berkeley DB's default compression or your own. Because all of the key/data pairs are placed in a single block of memory, you cannot decompress data unless you have decompressed everything that came before it in the block. That is, you cannot decompress item n in the data block unless you also decompress items 0 through n-1.

  • If you increase the minimum number of key/data pairs placed on a Btree leaf page (using DB->set_bt_minkey()), you will decrease your seek times on a compressed database. However, this will also decrease the effectiveness of the compression.

  • Compressed databases are fastest if bulk load is used to add data to them. See Retrieving and updating records in bulk for information on using bulk load.