Introduction

In RocksDB, every write first goes to the WAL and then into the MemTable. In this post we look at how these two steps are actually implemented. The first thing to be clear about is that in RocksDB the WAL is written serially, in order, by one thread at a time, whereas the MemTable can be written concurrently by multiple threads.

RocksDB 5.5 introduced the option enable_pipelined_write, whose purpose is to pipeline the WAL and MemTable writes: once a thread has finished writing the WAL, the other writes waiting in the WAL write queue can start writing the WAL, while the current thread moves on to write the MemTable. In this way the WAL writes and MemTable writes of different Writers are executed concurrently.
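
For reference, turning the feature on is just a matter of setting the corresponding option before opening the DB. The snippet below is a minimal sketch (the option name enable_pipelined_write comes from DBOptions; the path and keys are placeholders):

    #include <cassert>
    #include "rocksdb/db.h"
    #include "rocksdb/options.h"

    int main() {
      rocksdb::Options options;
      options.create_if_missing = true;
      // Pipeline the WAL write and the MemTable write of different writers
      // (introduced in RocksDB 5.5, off by default).
      options.enable_pipelined_write = true;

      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::Open(options, "/tmp/pipelined_write_demo", &db);
      assert(s.ok());
      s = db->Put(rocksdb::WriteOptions(), "key", "value");
      assert(s.ok());
      delete db;
      return 0;
    }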

Implementation

Here we only analyze the pipelined implementation; the core function is DBImpl::PipelinedWriteImpl.

  • Each DB (DBImpl) owns a write_thread_ (class WriteThread).
  • Every call to Write first writes the WAL: a WriteThread::Writer object is created and joined into a group (by calling JoinBatchGroup):

    WriteThread::Writer w(write_options, my_batch, callback, log_ref,
                          disable_memtable);
    write_thread_.JoinBatchGroup(&w);
  • Now let us look at JoinBatchGroup. This function links every incoming WAL write into a group. Note that if the current Writer object is the leader (for example, the first one to enter), the function returns right away; otherwise it waits until the Writer is promoted to one of the expected states (a simplified sketch of this wait/notify mechanism follows the code below).

    void WriteThread::JoinBatchGroup(Writer* w) {
      ...................................
      bool linked_as_leader = LinkOne(w, &newest_writer_);
      if (linked_as_leader) {
        SetState(w, STATE_GROUP_LEADER);
      }

      TEST_SYNC_POINT_CALLBACK("WriteThread::JoinBatchGroup:Wait", w);

      if (!linked_as_leader) {
        /**
         * Wait until:
         * 1) An existing leader picks us as the new leader when it finishes
         * 2) An existing leader picks us as its follower and
         *    2.1) finishes the memtable writes on our behalf
         *    2.2) Or tells us to finish the memtable writes in parallel
         * 3) (pipelined write) An existing leader picks us as its follower and
         *    finishes book-keeping and WAL write for us, enqueues us as pending
         *    memtable writer, and
         *    3.1) we become memtable writer group leader, or
         *    3.2) an existing memtable writer group leader tells us to finish
         *         memtable writes in parallel.
         */
        AwaitState(w, STATE_GROUP_LEADER | STATE_MEMTABLE_WRITER_LEADER |
                          STATE_PARALLEL_MEMTABLE_WRITER | STATE_COMPLETED,
                   &jbg_ctx);
        TEST_SYNC_POINT_CALLBACK("WriteThread::JoinBatchGroup:DoneWaiting", w);
      }
    }
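
    AwaitState blocks a Writer until some other thread promotes it via SetState (the real implementation spins for a while before falling back to blocking). The following simplified sketch, using only a mutex and condition variable and invented ToyWriter/SetState/AwaitState names, shows the shape of this wait/notify mechanism; it is an illustration, not the actual RocksDB code.

      #include <condition_variable>
      #include <cstdint>
      #include <mutex>

      // Toy writer: just a state bitmask plus what is needed to block on it.
      struct ToyWriter {
        uint32_t state = 0;          // bitmask of STATE_* style flags
        std::mutex mu;
        std::condition_variable cv;
      };

      // Wake a writer up by setting one of its state bits.
      void SetState(ToyWriter* w, uint32_t new_state) {
        std::lock_guard<std::mutex> lock(w->mu);
        w->state |= new_state;
        w->cv.notify_one();
      }

      // Block until any of the goal states has been set, then return it.
      uint32_t AwaitState(ToyWriter* w, uint32_t goal_mask) {
        std::unique_lock<std::mutex> lock(w->mu);
        w->cv.wait(lock, [&] { return (w->state & goal_mask) != 0; });
        return w->state & goal_mask;
      }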
  • Next comes LinkOne, which prepends the current Writer object to the group. Because writes arrive concurrently, newest_writer_ (which always points at the most recently queued writer) has to be updated atomically with a compare-and-swap loop; a self-contained sketch of this lock-free prepend pattern follows the code below.

    bool WriteThread::LinkOne(Writer* w, std::atomic<Writer*>* newest_writer) {
      assert(newest_writer != nullptr);
      assert(w->state == STATE_INIT);
      Writer* writers = newest_writer->load(std::memory_order_relaxed);
      while (true) {
        w->link_older = writers;
        if (newest_writer->compare_exchange_weak(writers, w)) {
          return (writers == nullptr);
        }
      }
    }
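
    The retry loop around compare_exchange_weak is the classic lock-free "prepend to a singly linked list" pattern: if another writer slips in between the load and the CAS, the CAS fails, the local pointer is reloaded with the new head, and we try again. Here is a self-contained sketch of the same pattern with a toy Node type (not RocksDB code); the return value plays the same role as in LinkOne, telling the caller whether it became the leader.

      #include <atomic>
      #include <cassert>

      struct Node {
        Node* link_older = nullptr;  // previous head, like Writer::link_older
      };

      // Prepend w to the list whose head is stored in *newest.
      // Returns true iff the list was empty, i.e. w becomes the "leader".
      bool LinkOne(Node* w, std::atomic<Node*>* newest) {
        Node* head = newest->load(std::memory_order_relaxed);
        while (true) {
          w->link_older = head;
          // On failure compare_exchange_weak reloads the current head into
          // `head`, so the next iteration retries against the fresh value.
          if (newest->compare_exchange_weak(head, w)) {
            return head == nullptr;
          }
        }
      }

      int main() {
        std::atomic<Node*> newest{nullptr};
        Node a, b;
        assert(LinkOne(&a, &newest) == true);   // first writer becomes leader
        assert(LinkOne(&b, &newest) == false);  // later writers just queue up
        return 0;
      }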
  • After JoinBatchGroup returns, if the current Writer is the leader, it links all of the writes queued behind it into one WriteGroup (by calling EnterAsBatchGroupLeader) and starts writing the WAL. Note that a non-leader write proceeds directly to the MemTable write, because its WAL write is batched (grouped) and performed by the leader it belongs to; we will see this in the implementation below.

    size_t WriteThread::EnterAsBatchGroupLeader(Writer* leader,
                                                WriteGroup* write_group) {
      assert(leader->link_older == nullptr);
      assert(leader->batch != nullptr);
      assert(write_group != nullptr);
      ................................................
      Writer* newest_writer = newest_writer_.load(std::memory_order_acquire);

      // This is safe regardless of any db mutex status of the caller. Previous
      // calls to ExitAsGroupLeader either didn't call CreateMissingNewerLinks
      // (they emptied the list and then we added ourself as leader) or had to
      // explicitly wake us up (the list was non-empty when we added ourself,
      // so we have already received our MarkJoined).
      CreateMissingNewerLinks(newest_writer);

      // Tricky. Iteration start (leader) is exclusive and finish
      // (newest_writer) is inclusive. Iteration goes from old to new.
      Writer* w = leader;
      while (w != newest_writer) {
        w = w->link_newer;
        .........................................
        w->write_group = write_group;
        size += batch_size;
        write_group->last_writer = w;
        write_group->size++;
      }
      ..............................
    }
  • Note that the traversal above goes through link_newer. The effect is that, before writing the WAL, the leader takes a snapshot of the writers currently queued behind it (via CreateMissingNewerLinks); writers that link themselves in afterwards do not become part of this group. A small standalone illustration follows the code below.

    void WriteThread::CreateMissingNewerLinks(Writer* head) {
      while (true) {
        Writer* next = head->link_older;
        if (next == nullptr || next->link_newer != nullptr) {
          assert(next == nullptr || next->link_newer == head);
          break;
        }
        next->link_newer = head;
        head = next;
      }
    }
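
    To make the "snapshot" idea concrete, here is a small self-contained toy (the Node struct and names are invented for illustration, not RocksDB code): three writers are linked only through link_older, the backfill creates the link_newer pointers, and the leader then walks old-to-new exactly the way EnterAsBatchGroupLeader does.

      #include <cstdio>

      // Toy stand-in for WriteThread::Writer: only the two list pointers matter.
      struct Node {
        const char* name;
        Node* link_older = nullptr;  // set when the writer queues itself (LinkOne)
        Node* link_newer = nullptr;  // backfilled by CreateMissingNewerLinks
      };

      // Same logic as WriteThread::CreateMissingNewerLinks, on the toy struct.
      void CreateMissingNewerLinks(Node* head) {
        while (true) {
          Node* next = head->link_older;
          if (next == nullptr || next->link_newer != nullptr) {
            break;
          }
          next->link_newer = head;
          head = next;
        }
      }

      int main() {
        Node leader{"leader"}, w1{"w1"}, w2{"w2"};
        // Writers queue up newest-first via link_older: w2 -> w1 -> leader.
        w1.link_older = &leader;
        w2.link_older = &w1;
        Node* newest_writer = &w2;  // what newest_writer_ would hold

        CreateMissingNewerLinks(newest_writer);

        // The leader iterates old-to-new, excluding itself and including
        // newest_writer -- anything linked in after this point is not seen.
        for (Node* w = &leader; w != newest_writer;) {
          w = w->link_newer;
          std::printf("batched into group: %s\n", w->name);
        }
        return 0;
      }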
  • After the steps above, the WAL write itself takes place: the whole write_group is packed into a single WriteBatch (via MergeBatch) and written out in one shot. A toy illustration of this group-commit idea follows the snippet below.

    if (w.ShouldWriteToWAL()) {
      ...............................
      w.status = WriteToWAL(wal_write_group, log_writer, log_used,
                            need_log_sync, need_log_dir_sync, current_sequence);
    }
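
    The point of merging is that one leader issues a single log write (and at most one sync) on behalf of the whole group instead of one write per batch. The toy sketch below (plain stdio, no RocksDB APIs; every name in it is invented for illustration) shows the shape of that group commit.

      #include <cstdio>
      #include <string>
      #include <vector>

      // Toy group commit: the "leader" concatenates every pending record and
      // performs a single append to the log, instead of one write per record.
      void GroupCommit(const std::vector<std::string>& pending, std::FILE* wal) {
        std::string merged;
        for (const auto& rec : pending) {
          merged += rec;  // MergeBatch plays roughly this role for WriteBatches
        }
        std::fwrite(merged.data(), 1, merged.size(), wal);  // one WriteToWAL call
        std::fflush(wal);  // one flush/sync covers the whole group
      }

      int main() {
        std::FILE* wal = std::fopen("/tmp/toy_wal.log", "ab");
        if (wal == nullptr) return 1;
        GroupCommit({"put k1 v1;", "put k2 v2;", "delete k3;"}, wal);
        std::fclose(wal);
        return 0;
      }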
  • After the current leader has written itself and its followers to the WAL, it has to move on to the MemTable. The Writers that were blocked up to now fall into two groups: those that the leader has already batched into its WAL write (the leader included), which are now linked onto the memtable writer list; and those whose WAL write has not happened yet, out of which a new leader is chosen so the WAL writing can continue.

    void WriteThread::ExitAsBatchGroupLeader(WriteGroup& write_group,
                                             Status status) {
      Writer* leader = write_group.leader;
      Writer* last_writer = write_group.last_writer;
      assert(leader->link_older == nullptr);
      .....................................
      if (enable_pipelined_write_) {
        // Notify writers don't write to memtable to exit.
        ......................................
        // Link the remaining of the group to memtable writer list.
        if (write_group.size > 0) {
          if (LinkGroup(write_group, &newest_memtable_writer_)) {
            // The leader can now be different from current writer.
            SetState(write_group.leader, STATE_MEMTABLE_WRITER_LEADER);
          }
        }

        // Reset newest_writer_ and wake up the next leader.
        Writer* newest_writer = last_writer;
        if (!newest_writer_.compare_exchange_strong(newest_writer, nullptr)) {
          Writer* next_leader = newest_writer;
          while (next_leader->link_older != last_writer) {
            next_leader = next_leader->link_older;
            assert(next_leader != nullptr);
          }
          next_leader->link_older = nullptr;
          SetState(next_leader, STATE_GROUP_LEADER);
        }

        AwaitState(leader, STATE_MEMTABLE_WRITER_LEADER |
                               STATE_PARALLEL_MEMTABLE_WRITER | STATE_COMPLETED,
                   &eabgl_ctx);
      } else {
        .....................................
      }
    }
  • Next we look at the MemTable write. The logic mirrors the WAL path: if the current Writer is a leader, it again builds a group (a WriteGroup), walks the writers that need to write the MemTable and adds them to that group (EnterAsMemTableWriter), then records how many writers will run in parallel and sets their states accordingly (LaunchParallelMemTableWriters). Note that every SetState wakes up a Writer that was blocked earlier.

    void WriteThread::LaunchParallelMemTableWriters(WriteGroup* write_group) {
      assert(write_group != nullptr);
      write_group->running.store(write_group->size);
      for (auto w : *write_group) {
        SetState(w, STATE_PARALLEL_MEMTABLE_WRITER);
      }
    }
  • Note that when building the memtable group we do not need to create the link_newer pointers again: they were already built during the WAL phase, so reusing that pre-built group also guarantees that everything in the group has already been written to the WAL.

    void WriteThread::EnterAsMemTableWriter(Writer* leader,
                                            WriteGroup* write_group) {
      ....................................
      if (!allow_concurrent_memtable_write_ || !leader->batch->HasMerge()) {
        ....................................................
      }

      write_group->last_writer = last_writer;
      write_group->last_sequence =
          last_writer->sequence + WriteBatchInternal::Count(last_writer->batch) - 1;
    }
  • Finally the MemTable insert runs. Every Writer that was blocked during the WAL phase now goes through the logic below, which is exactly what makes the MemTable writes concurrent. A minimal sketch of the "last writer to finish closes out the group" pattern behind CompleteParallelMemTableWriter follows the snippet.

    if (w.state == WriteThread::STATE_PARALLEL_MEMTABLE_WRITER) {
      .........................
      w.status = WriteBatchInternal::InsertInto(
          &w, w.sequence, &column_family_memtables, &flush_scheduler_,
          write_options.ignore_missing_column_families, 0 /*log_number*/, this,
          true /*concurrent_memtable_writes*/);
      if (write_thread_.CompleteParallelMemTableWriter(&w)) {
        MemTableInsertStatusCheck(w.status);
        versions_->SetLastSequence(w.write_group->last_sequence);
        write_thread_.ExitAsMemTableWriter(&w, *w.write_group);
      }
    }
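
    CompleteParallelMemTableWriter essentially decrements the group's running counter and returns true only for the last writer to finish, which is then the one that performs the group exit. A minimal standalone sketch of that "last one out closes the group" pattern (ToyWriteGroup and the other names are invented; this is not the RocksDB implementation):

      #include <atomic>
      #include <cstdio>
      #include <thread>
      #include <vector>

      // Shared state of one parallel write group: how many writers still run.
      struct ToyWriteGroup {
        std::atomic<int> running;
        explicit ToyWriteGroup(int n) : running(n) {}
      };

      // Returns true only for the last writer to complete, mirroring the role
      // of WriteThread::CompleteParallelMemTableWriter.
      bool CompleteParallelWriter(ToyWriteGroup* group) {
        return group->running.fetch_sub(1) == 1;
      }

      int main() {
        const int kWriters = 4;
        ToyWriteGroup group(kWriters);
        std::vector<std::thread> threads;
        for (int i = 0; i < kWriters; ++i) {
          threads.emplace_back([&group, i] {
            // ... each writer inserts its own batch into the memtable here ...
            if (CompleteParallelWriter(&group)) {
              // Only one thread reaches this branch: it plays the role of
              // ExitAsMemTableWriter (publish last_sequence, wake next group).
              std::printf("writer %d finished last and exits the group\n", i);
            }
          });
        }
        for (auto& t : threads) t.join();
        return 0;
      }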
  • Finally, once every Writer in the group has written the MemTable, ExitAsMemTableWriter performs the cleanup: if there is a new memtable writer list waiting to be processed, the corresponding Writer is woken up as the next memtable writer leader, and the Writers that are done have their state set to completed.

    void WriteThread::ExitAsMemTableWriter(Writer* /*self*/,
                                           WriteGroup& write_group) {
      Writer* leader = write_group.leader;
      Writer* last_writer = write_group.last_writer;

      Writer* newest_writer = last_writer;
      if (!newest_memtable_writer_.compare_exchange_strong(newest_writer,
                                                           nullptr)) {
        CreateMissingNewerLinks(newest_writer);
        Writer* next_leader = last_writer->link_newer;
        assert(next_leader != nullptr);
        next_leader->link_older = nullptr;
        SetState(next_leader, STATE_MEMTABLE_WRITER_LEADER);
      }

      Writer* w = leader;
      while (true) {
        if (!write_group.status.ok()) {
          w->status = write_group.status;
        }
        Writer* next = w->link_newer;
        if (w != leader) {
          SetState(w, STATE_COMPLETED);
        }
        if (w == last_writer) {
          break;
        }
        w = next;
      }

      // Note that leader has to exit last, since it owns the write group.
      SetState(leader, STATE_COMPLETED);
    }

Summary

As we have seen, in RocksDB the WAL is always written serially, while the MemTable can be written by many threads in parallel. This means that once the system is under enough write pressure, the WAL write is bound to become the bottleneck.