Get Started - Quickstart Guide

Installing

To start using Badger, install Go 1.12 or above. Badger v3 needs Go modules. Run the following command to retrieve the library.

  $ go get github.com/dgraph-io/badger/v3

This will retrieve the library.

Note: Badger does not directly use CGO, but it relies on https://github.com/DataDog/zstd for compression, which requires gcc/cgo. If you wish to use Badger without gcc/cgo, you can run CGO_ENABLED=0 go get github.com/dgraph-io/badger/… which will download Badger without support for the ZSTD compression algorithm.

Installing Badger Command Line Tool

Download and extract the latest Badger DB release from https://github.com/dgraph-io/badger/releases and then run the following commands.

  $ cd badger-<version>/badger
  $ go install

This will install the badger command line utility into your $GOBIN path.

Choosing a version

BadgerDB is a somewhat special package, in that the most important changes we can make to it are not to its API but to how data is stored on disk.

This is why we follow a version naming schema that differs from Semantic Versioning.

  • New major versions are released when the data format on disk changes in an incompatible way.
  • New minor versions are released whenever the API changes but data compatibility is maintained. Note that API changes may be backward-incompatible, unlike in Semantic Versioning.
  • New patch versions are released when there are no changes to the data format or the API.

Following these rules:

  • v1.5.0 and v1.6.0 can be used on top of the same files without any concerns, as their major version is the same, therefore the data format on disk is compatible.
  • v1.6.0 and v2.0.0 are data incompatible as their major version implies, so files created with v1.6.0 will need to be converted into the new format before they can be used by v2.0.0.

For a longer explanation of the reasons behind this versioning schema, see VERSIONING.md.

Opening a database

The top-level object in Badger is a DB. It represents multiple files on disk in specific directories, which contain the data for a single database.

To open your database, use the badger.Open() function, with the appropriate options. The Dir and ValueDir options are mandatory and must be specified by the client. They can be set to the same value to simplify things.

  package main

  import (
      "log"

      badger "github.com/dgraph-io/badger/v3"
  )

  func main() {
      // Open the Badger database located in the /tmp/badger directory.
      // It will be created if it doesn't exist.
      db, err := badger.Open(badger.DefaultOptions("/tmp/badger"))
      if err != nil {
          log.Fatal(err)
      }
      defer db.Close()
      // Your code here…
  }

Please note that Badger obtains a lock on the directories so multiple processes cannot open the same database at the same time.

In-Memory Mode/Diskless Mode

By default, Badger ensures all data is persisted to disk. It also supports a pure in-memory mode. When Badger is running in in-memory mode, all data is stored in memory; reads and writes are much faster, but all data stored in Badger is lost in case of a crash or close. To open Badger in in-memory mode, set the InMemory option.

  opt := badger.DefaultOptions("").WithInMemory(true)
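
The resulting options are then passed to badger.Open() as usual. A minimal sketch, using only the API shown above:

  opt := badger.DefaultOptions("").WithInMemory(true)
  db, err := badger.Open(opt)
  if err != nil {
      log.Fatal(err)
  }
  defer db.Close()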

Encryption Mode

If you enable encryption on Badger, you also need to set the index cache size.

Tip: Having a cache improves performance. Otherwise, your reads will be very slow while encryption is enabled.

For example, to set a 100 MB cache:

  opts.IndexCacheSize = 100 << 20 // 100 MB or some other size based on the amount of data
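
Putting the two together, here is a hedged sketch of opening an encrypted database with an index cache, assuming the WithEncryptionKey and WithIndexCacheSize option builders available in recent Badger releases (encryptionKey is a placeholder for your own key material):

  // Badger expects an AES key, i.e. 16, 24 or 32 bytes long.
  opts := badger.DefaultOptions("/tmp/badger").
      WithEncryptionKey(encryptionKey).
      WithIndexCacheSize(100 << 20) // 100 MB index cache.
  db, err := badger.Open(opts)
  if err != nil {
      log.Fatal(err)
  }
  defer db.Close()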

Transactions

Read-only transactions

To start a read-only transaction, you can use the DB.View() method:

  err := db.View(func(txn *badger.Txn) error {
      // Your code here…
      return nil
  })

You cannot perform any writes or deletes within this transaction. Badger ensures that you get a consistent view of the database within this closure. Any writes that happen elsewhere after the transaction has started will not be seen by calls made within the closure.

Read-write transactions

To start a read-write transaction, you can use the DB.Update() method:

  err := db.Update(func(txn *badger.Txn) error {
      // Your code here…
      return nil
  })

All database operations are allowed inside a read-write transaction.

Always check the returned error value. If you return an error within your closure, it will be passed through.

An ErrConflict error will be reported in case of a conflict. Depending on the state of your application, you have the option to retry the operation if you receive this error.
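
For example, a simple retry loop for conflicts could look like the sketch below (the retry policy, including any cap on attempts, is up to your application):

  for {
      err := db.Update(func(txn *badger.Txn) error {
          // Your read-modify-write logic here…
          return nil
      })
      if err == badger.ErrConflict {
          continue // Another transaction touched the same keys; retry.
      }
      return err
  }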

An ErrTxnTooBig will be reported in case the number of pending writes/deletes in the transaction exceeds a certain limit. In that case, it is best to commit the transaction and start a new transaction immediately. Here is an example (we are not checking for errors in some places for simplicity):

  updates := make(map[string]string)
  txn := db.NewTransaction(true)
  for k, v := range updates {
      if err := txn.Set([]byte(k), []byte(v)); err == badger.ErrTxnTooBig {
          _ = txn.Commit()
          txn = db.NewTransaction(true)
          _ = txn.Set([]byte(k), []byte(v))
      }
  }
  _ = txn.Commit()

Managing transactions manually

The DB.View() and DB.Update() methods are wrappers around the DB.NewTransaction() and Txn.Commit() methods (or Txn.Discard() in case of read-only transactions). These helper methods will start the transaction, execute a function, and then safely discard your transaction if an error is returned. This is the recommended way to use Badger transactions.

However, sometimes you may want to manually create and commit your transactions. You can use the DB.NewTransaction() function directly, which takes a boolean argument to specify whether a read-write transaction is required. For read-write transactions, it is necessary to call Txn.Commit() to ensure the transaction is committed. For read-only transactions, calling Txn.Discard() is sufficient. Txn.Commit() also calls Txn.Discard() internally to clean up the transaction, so just calling Txn.Commit() is sufficient for read-write transactions. However, if your code doesn’t call Txn.Commit() for some reason (e.g., it returns prematurely with an error), then please make sure you call Txn.Discard() in a defer block. Refer to the code below.

  // Start a writable transaction.
  txn := db.NewTransaction(true)
  defer txn.Discard()

  // Use the transaction...
  err := txn.Set([]byte("answer"), []byte("42"))
  if err != nil {
      return err
  }

  // Commit the transaction and check for error.
  if err := txn.Commit(); err != nil {
      return err
  }

The first argument to DB.NewTransaction() is a boolean stating if the transaction should be writable.

Badger allows an optional callback to the Txn.Commit() method. Normally, the callback can be set to nil, and the method will return after all the writes have succeeded. However, if this callback is provided, the Txn.Commit() method returns as soon as it has checked for any conflicts. The actual writing to the disk happens asynchronously, and the callback is invoked once the writing has finished, or an error has occurred. This can improve the throughput of the application in some cases. But it also means that a transaction is not durable until the callback has been invoked with a nil error value.
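
As a sketch of this asynchronous style: in recent Badger releases the callback-taking variant is exposed as Txn.CommitWith (check the API of the version you are using):

  txn := db.NewTransaction(true)
  defer txn.Discard()
  if err := txn.Set([]byte("answer"), []byte("42")); err != nil {
      return err
  }
  // CommitWith returns as soon as conflict detection is done; the callback
  // fires once the writes have actually been persisted (or have failed).
  txn.CommitWith(func(err error) {
      if err != nil {
          log.Printf("async commit failed: %v", err)
      }
  })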

Using key/value pairs

To save a key/value pair, use the Txn.Set() method:

  1. err := db.Update(func(txn *badger.Txn) error {
  2. err := txn.Set([]byte("answer"), []byte("42"))
  3. return err
  4. })

A key/value pair can also be saved by first creating an Entry, then setting this Entry using Txn.SetEntry(). Entry also exposes methods to set properties on it.

  err := db.Update(func(txn *badger.Txn) error {
      e := badger.NewEntry([]byte("answer"), []byte("42"))
      err := txn.SetEntry(e)
      return err
  })

This will set the value of the "answer" key to "42". To retrieve this value, we can use the Txn.Get() method:

  err := db.View(func(txn *badger.Txn) error {
      item, err := txn.Get([]byte("answer"))
      handle(err)

      var valNot, valCopy []byte
      err = item.Value(func(val []byte) error {
          // This func with val would only be called if item.Value encounters no error.

          // Accessing val here is valid.
          fmt.Printf("The answer is: %s\n", val)

          // Copying or parsing val is valid.
          valCopy = append([]byte{}, val...)

          // Assigning val slice to another variable is NOT OK.
          valNot = val // Do not do this.
          return nil
      })
      handle(err)

      // DO NOT access val here. It is the most common cause of bugs.
      fmt.Printf("NEVER do this. %s\n", valNot)

      // You must copy it to use it outside item.Value(...).
      fmt.Printf("The answer is: %s\n", valCopy)

      // Alternatively, you could also use item.ValueCopy().
      valCopy, err = item.ValueCopy(nil)
      handle(err)
      fmt.Printf("The answer is: %s\n", valCopy)

      return nil
  })

Txn.Get() returns ErrKeyNotFound if the value is not found.
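
For example, a not-found check typically looks like this:

  _, err := txn.Get([]byte("does-not-exist"))
  if err == badger.ErrKeyNotFound {
      // The key is not present in the database.
  }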

Please note that values returned from Get() are only valid while the transaction is open. If you need to use a value outside of the transaction, you must use copy() to copy it to another byte slice (or use item.ValueCopy(), as shown above).

Use the Txn.Delete() method to delete a key.
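
For example, wrapped in a read-write transaction:

  err := db.Update(func(txn *badger.Txn) error {
      return txn.Delete([]byte("answer"))
  })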

Monotonically increasing integers

To get unique monotonically increasing integers with strong durability, you can use the DB.GetSequence method. This method returns a Sequence object, which is thread-safe and can be used concurrently by multiple goroutines.

Badger leases a range of integers to hand out from memory, with the size of the lease determined by the bandwidth provided to DB.GetSequence. The frequency of disk writes is determined by this lease bandwidth and the frequency of Next invocations. Setting the bandwidth too low causes more disk writes; setting it too high results in wasted integers if Badger is closed or crashes. To avoid wasted integers, call Release before closing Badger.

  seq, err := db.GetSequence(key, 1000) // key identifies the sequence; 1000 is the lease bandwidth.
  handle(err)
  defer seq.Release()
  for {
      num, err := seq.Next()
      handle(err)
      // Use num…
  }

Merge Operations

Badger provides support for ordered merge operations. You can define a func of type MergeFunc which takes an existing value and a value to be merged with it. It returns a new value which is the result of the merge operation. All values are specified as byte slices. For example, here is a merge function (add) which appends a []byte value to an existing []byte value.

  // Merge function to append one byte slice to another
  func add(originalValue, newValue []byte) []byte {
      return append(originalValue, newValue...)
  }

This function can then be passed to the DB.GetMergeOperator() method, along with a key, and a duration value. The duration specifies how often the merge function is run on values that have been added using the MergeOperator.Add() method.

The MergeOperator.Get() method can be used to retrieve the cumulative value of the key associated with the merge operation.

  key := []byte("merge")

  m := db.GetMergeOperator(key, add, 200*time.Millisecond)
  defer m.Stop()

  m.Add([]byte("A"))
  m.Add([]byte("B"))
  m.Add([]byte("C"))

  res, _ := m.Get() // res should have value ABC encoded

Example: Merge operator which increments a counter

  func uint64ToBytes(i uint64) []byte {
      var buf [8]byte
      binary.BigEndian.PutUint64(buf[:], i)
      return buf[:]
  }

  func bytesToUint64(b []byte) uint64 {
      return binary.BigEndian.Uint64(b)
  }

  // Merge function to add two uint64 numbers
  func add(existing, new []byte) []byte {
      return uint64ToBytes(bytesToUint64(existing) + bytesToUint64(new))
  }

It can be used as follows:

  key := []byte("merge")

  m := db.GetMergeOperator(key, add, 200*time.Millisecond)
  defer m.Stop()

  m.Add(uint64ToBytes(1))
  m.Add(uint64ToBytes(2))
  m.Add(uint64ToBytes(3))

  res, _ := m.Get() // res should have value 6 encoded

Setting Time To Live (TTL) and User Metadata on Keys

Badger allows setting an optional Time to Live (TTL) value on keys. Once the TTL has elapsed, the key will no longer be retrievable and will be eligible for garbage collection. A TTL can be set as a time.Duration value using the Entry.WithTTL() and Txn.SetEntry() API methods.

  err := db.Update(func(txn *badger.Txn) error {
      e := badger.NewEntry([]byte("answer"), []byte("42")).WithTTL(time.Hour)
      err := txn.SetEntry(e)
      return err
  })

An optional user metadata value can be set on each key. A user metadata value is represented by a single byte. It can be used to set certain bits along with the key to aid in interpreting or decoding the key-value pair. User metadata can be set using Entry.WithMeta() and Txn.SetEntry() API methods.

  err := db.Update(func(txn *badger.Txn) error {
      e := badger.NewEntry([]byte("answer"), []byte("42")).WithMeta(byte(1))
      err := txn.SetEntry(e)
      return err
  })

The Entry APIs can be used to add user metadata and a TTL to the same key. This Entry can then be set using Txn.SetEntry().

  err := db.Update(func(txn *badger.Txn) error {
      e := badger.NewEntry([]byte("answer"), []byte("42")).WithMeta(byte(1)).WithTTL(time.Hour)
      err := txn.SetEntry(e)
      return err
  })
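
When reading such a key back, the metadata byte is available on the returned Item. A small sketch using the Item.UserMeta() accessor:

  err := db.View(func(txn *badger.Txn) error {
      item, err := txn.Get([]byte("answer"))
      if err != nil {
          return err
      }
      fmt.Printf("user meta: %d\n", item.UserMeta())
      return item.Value(func(val []byte) error {
          fmt.Printf("value: %s\n", val)
          return nil
      })
  })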

Iterating over keys

To iterate over keys, we can use an Iterator, which can be obtained using the Txn.NewIterator() method. Iteration happens in byte-wise lexicographical sorting order.

  err := db.View(func(txn *badger.Txn) error {
      opts := badger.DefaultIteratorOptions
      opts.PrefetchSize = 10
      it := txn.NewIterator(opts)
      defer it.Close()
      for it.Rewind(); it.Valid(); it.Next() {
          item := it.Item()
          k := item.Key()
          err := item.Value(func(v []byte) error {
              fmt.Printf("key=%s, value=%s\n", k, v)
              return nil
          })
          if err != nil {
              return err
          }
      }
      return nil
  })

The iterator allows you to move to a specific point in the list of keys and move forward or backward through the keys one at a time.
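
For example, backward iteration can be enabled through the IteratorOptions.Reverse flag; a small sketch:

  err := db.View(func(txn *badger.Txn) error {
      opts := badger.DefaultIteratorOptions
      opts.Reverse = true // Iterate from the highest key down to the lowest.
      it := txn.NewIterator(opts)
      defer it.Close()
      for it.Rewind(); it.Valid(); it.Next() {
          fmt.Printf("key=%s\n", it.Item().Key())
      }
      return nil
  })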

By default, Badger prefetches the values of the next 100 items. You can adjust that with the IteratorOptions.PrefetchSize field. However, setting it to a value higher than GOMAXPROCS (which we recommend to be 128 or higher) shouldn’t give any additional benefits. You can also turn off the fetching of values altogether. See section below on key-only iteration.

Prefix scans

To iterate over a key prefix, you can combine Seek() and ValidForPrefix():

  db.View(func(txn *badger.Txn) error {
      it := txn.NewIterator(badger.DefaultIteratorOptions)
      defer it.Close()
      prefix := []byte("1234")
      for it.Seek(prefix); it.ValidForPrefix(prefix); it.Next() {
          item := it.Item()
          k := item.Key()
          err := item.Value(func(v []byte) error {
              fmt.Printf("key=%s, value=%s\n", k, v)
              return nil
          })
          if err != nil {
              return err
          }
      }
      return nil
  })

Key-only iteration

Badger supports a unique mode of iteration called key-only iteration. It is several orders of magnitude faster than regular iteration, because it involves access to the LSM-tree only, which is usually resident entirely in RAM. To enable key-only iteration, you need to set the IteratorOptions.PrefetchValues field to false. This can also be used to do sparse reads for selected keys during an iteration, by calling item.Value() only when required.

  err := db.View(func(txn *badger.Txn) error {
      opts := badger.DefaultIteratorOptions
      opts.PrefetchValues = false
      it := txn.NewIterator(opts)
      defer it.Close()
      for it.Rewind(); it.Valid(); it.Next() {
          item := it.Item()
          k := item.Key()
          fmt.Printf("key=%s\n", k)
      }
      return nil
  })

Stream

Badger provides a Stream framework, which concurrently iterates over all or a portion of the DB, converting data into custom key-values, and streams it out serially to be sent over the network, written to disk, or even written back to Badger. This is a much faster way to iterate over Badger than using a single Iterator. Stream supports Badger in both managed and normal mode.

Stream uses the natural boundaries created by SSTables within the LSM tree to quickly generate key ranges. Each goroutine then picks a range and runs an iterator over it. Each iterator iterates over all versions of values and is created from the same transaction, thus working over a snapshot of the DB. Every time a new key is encountered, it calls ChooseKey(item), followed by KeyToList(key, itr). This allows a user to select or reject that key, and if selected, convert the value versions into custom key-values. The goroutine batches up to 4 MB worth of key-values before sending them over to a channel. Another goroutine further batches up data from this channel using a smart batching algorithm and calls Send serially.

This framework is designed for high throughput key-value iteration, spreading the work of iteration across many goroutines. DB.Backup uses this framework to provide full and incremental backups quickly. Dgraph is a heavy user of this framework. In fact, this framework was developed and used within Dgraph, before getting ported over to Badger.

  stream := db.NewStream()
  // db.NewStreamAt(readTs) for managed mode.

  // -- Optional settings
  stream.NumGo = 16                     // Set number of goroutines to use for iteration.
  stream.Prefix = []byte("some-prefix") // Leave nil for iteration over the whole DB.
  stream.LogPrefix = "Badger.Streaming" // For identifying stream logs. Outputs to Logger.

  // ChooseKey is called concurrently for every key. If left nil, assumes true by default.
  stream.ChooseKey = func(item *badger.Item) bool {
      return bytes.HasSuffix(item.Key(), []byte("er"))
  }

  // KeyToList is called concurrently for chosen keys. This can be used to convert
  // Badger data into custom key-values. If nil, uses stream.ToList, a default
  // implementation, which picks all valid key-values.
  stream.KeyToList = nil
  // -- End of optional settings.

  // Send is called serially, while Stream.Orchestrate is running.
  stream.Send = func(list *pb.KVList) error {
      return proto.MarshalText(w, list) // Write to w.
  }

  // Run the stream
  if err := stream.Orchestrate(context.Background()); err != nil {
      return err
  }
  // Done.

Garbage Collection

Badger values need to be garbage collected for two reasons:

  • Badger keeps values separately from the LSM tree. This means that the compaction operations that clean up the LSM tree do not touch the values at all. Values need to be cleaned up separately.

  • Concurrent read/write transactions could leave behind multiple values for a single key, because they are stored with different versions. These could accumulate, and take up unneeded space beyond the time these older versions are needed.

Badger relies on the client to perform garbage collection at a time of their choosing. It provides the following method, which can be invoked at an appropriate time:

  • DB.RunValueLogGC(): This method is designed to do garbage collection while Badger is online. Along with randomly picking a file, it uses statistics generated by the LSM-tree compactions to pick files that are likely to lead to maximum space reclamation. It is recommended to call it periodically, during periods of low activity in your system. One call results in the removal of at most one log file. As an optimization, you could also immediately re-run it whenever it returns a nil error (indicating a successful value log GC), as shown below.

    ticker := time.NewTicker(5 * time.Minute)
    defer ticker.Stop()
    for range ticker.C {
    again:
        err := db.RunValueLogGC(0.7)
        if err == nil {
            goto again
        }
    }
  • DB.PurgeOlderVersions(): This method is DEPRECATED since v1.5.0. Now, Badger’s LSM tree automatically discards older/invalid versions of keys.

Note: The RunValueLogGC method does not garbage collect the latest value log file.

Database backup

There are two public API methods, DB.Backup() and DB.Load(), which can be used to do online backups and restores. Badger v0.9 provides a CLI tool badger, which can do offline backup/restore. Make sure you have $GOPATH/bin in your PATH to use this tool.

The command below will create a version-agnostic backup of the database to a file badger.bak in the current working directory:

  badger backup --dir <path/to/badgerdb>

To restore badger.bak in the current working directory to a new database:

  badger restore --dir <path/to/badgerdb>

See badger --help for more details.

If you have a Badger database that was created using v0.8 (or below), you can use the badger_backup tool provided in v0.8.1, and then restore it using the command above to upgrade your database to work with the latest version.

  badger_backup --dir <path/to/badgerdb> --backup-file badger.bak

We recommend all users use the Backup and Restore APIs and tools. However, Badger is also rsync-friendly because all files are immutable, barring the latest value log, which is append-only. So, rsync can be used as a rudimentary way to perform a backup. In the following script, we repeat rsync to ensure that the LSM tree remains consistent with the MANIFEST file while doing a full backup.

  #!/bin/bash
  set -o history
  set -o histexpand

  # Makes a complete copy of a Badger database directory.
  # Repeat rsync if the MANIFEST and SSTables are updated.
  rsync -avz --delete db/ dst
  while !! | grep -q "(MANIFEST\|\.sst)$"; do :; done

Memory usage

Badger’s memory usage can be managed by tweaking several options available in the Options struct that is passed in when opening the database using badger.Open().

  • Number of memtables (Options.NumMemtables)
    • If you modify Options.NumMemtables, also adjust Options.NumLevelZeroTables and Options.NumLevelZeroTablesStall accordingly.
  • Number of concurrent compactions (Options.NumCompactors)
  • Size of table (Options.MaxTableSize)
  • Size of value log file (Options.ValueLogFileSize)

If you want to decrease the memory usage of a Badger instance, tweak these options (ideally one at a time) until you achieve the desired memory usage.
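
As a hedged sketch, these options can also be set through the fluent With* builders on DefaultOptions (assuming the WithNumMemtables, WithNumCompactors and WithValueLogFileSize helpers of recent releases; names may differ slightly between versions, and the values below are purely illustrative):

  opts := badger.DefaultOptions("/tmp/badger").
      WithNumMemtables(2).            // Fewer memtables held in RAM.
      WithNumCompactors(2).           // Fewer concurrent compactions.
      WithValueLogFileSize(256 << 20) // 256 MB value log files.
  db, err := badger.Open(opts)
  if err != nil {
      log.Fatal(err)
  }
  defer db.Close()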