RocksDB internals explained: write path, read path, WAL, MANIFEST, ColumnFamily, Memtable, and SST files

Table of Contents

      • Preface
      • RocksDB write flow diagram
      • WAL internals
        • Overview
        • File format
        • Tools for inspecting the WAL
        • Creating a WAL
        • Cleaning up WALs
      • MANIFEST internals
        • Overview
        • Tools for inspecting the MANIFEST
        • Creating and deleting the MANIFEST
        • File contents
      • ColumnFamily in detail
        • Overview
        • API overview
        • Core data structures
        • Creation and deletion
      • MEMTABLE implementation
        • Overview
        • Implementation
      • RocksDB write logic
        • Overview
        • Implementation
        • Summary
        • Tuning parameters for writes
      • Read path
        • Read flow diagram
        • Overview
        • memtable source analysis
        • SST source analysis
      • References

Preface

As the performance benchmark among NoSQL stores, RocksDB is used as the underlying storage engine by many systems (MySQL, TiKV, PMDK, BlueStore). Its LSM-tree-based core structure converts random writes into sequential writes, providing high write throughput while still guaranteeing read performance, and its extensive concurrency options reduce the impact of compaction. Recently the community also released BlobDB, a key-value separated store that further improves write performance for large-value workloads. Complete, comprehensive SDKs for many languages plus an active community have let RocksDB rapidly take over the storage-engine niche.

So, to improve my core storage-engine development skills, I studied RocksDB's core implementation.
This article consolidates what I learned recently into a summary that is convenient for future review.

RocksDB write flow diagram

[Figure 1: RocksDB write flow diagram]
The core files involved:

  1. WAL: records the data that currently lives only in the memtable. Once the data of a memtable -> immutable memtable has been flushed to L0, the corresponding WAL is deleted. Appears as 00012.log under the DB directory.
  2. MANIFEST: persists the state of the DB (similar to a snapshot), mainly the version information of the SST files (whenever an SST file changes, a VersionEdit is generated and synced into the MANIFEST); it is used for recovery after a crash or power loss. Appears as MANIFEST-000001.
  3. CURRENT: records the file number of the latest MANIFEST.
  4. Memtable: resides in memory and receives the actual key-value data after the WAL write. Both the size and the number of memtables are controlled by parameters: write_buffer_size is the size of a single memtable, and max_write_buffer_number is the maximum number of memtables that may exist in memory at the same time.
  5. Immutable memtable: when a memtable fills up, a new memtable is created to keep accepting writes, and the old one becomes a read-only immutable memtable that starts flushing to L0.
  6. SST files: the core key-value storage files, e.g. 000023.sst under the DB directory.

The IO path is analyzed mainly through a few RocksDB interfaces:

  • rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/testdb", &db);
  • rocksdb::Status s = db->Get(rocksdb::ReadOptions(), key1, &pinnable_val);
  • rocksdb::Status s = db->Put(rocksdb::WriteOptions(), key2, value);
    See basic operations for the complete API; a minimal usage sketch follows.

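A minimal, self-contained sketch combining these calls (the path /tmp/testdb is just an example):

#include <cassert>
#include <string>
#include "rocksdb/db.h"

int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;  // create the DB if it does not exist yet

  // Open runs recovery and creates the first WAL, as shown later.
  rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/testdb", &db);
  assert(status.ok());

  // Put writes the WAL first, then the memtable.
  status = db->Put(rocksdb::WriteOptions(), "key1", "value1");
  assert(status.ok());

  // Get reads memtable -> immutable memtable -> SST files, newest first.
  std::string value;
  status = db->Get(rocksdb::ReadOptions(), "key1", &value);
  assert(status.ok() && value == "value1");

  delete db;
  return 0;
}
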
Before walking through the write path in detail, we first need an overall understanding of the role of each important file in the flow diagram and of how it is basically implemented.

WAL internals

Overview

Every data update in RocksDB touches two structures: the memtable in memory (later flushed to disk as an SST), and the WAL (WriteAheadLog).

The WAL's main job is to make the in-memory (memtable) data recoverable after RocksDB exits abnormally, so by default RocksDB flushes every user write to the WAL. Whenever the memtable corresponding to the current WAL has been flushed to disk, a new WAL is created.

All WAL files live in the WAL directory (options.wal_dir), and to preserve the ordering of states their names are sequential (log_number).
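
Whether an individual write additionally waits for the WAL to be fsync'ed is chosen per write through WriteOptions; a minimal sketch (assuming an open rocksdb::DB* db):

rocksdb::WriteOptions wo;
wo.sync = true;           // fsync the WAL before acknowledging this write
// wo.disableWAL = true;  // or skip the WAL entirely (data is lost on a crash)
rocksdb::Status s = db->Put(wo, "key2", "value2");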

File format

A WAL file consists of variable-length records, grouped into blocks of kBlockSize (32 KB); if a record is larger than kBlockSize, it is split into multiple records (distinguished by the type field).

       +-----+-------------+--+----+----------+------+-- ... ----+
 File  | r0  |        r1   |P | r2 |    r3    |  r4  |           |
       +-----+-------------+--+----+----------+------+-- ... ----+
       <--- kBlockSize ------>|<-- kBlockSize ------>|

  rn = variable size records
  P = Padding

The record format is as follows:

+---------+-----------+-----------+--- ... ---+
|CRC (4B) | Size (2B) | Type (1B) | Payload   |
+---------+-----------+-----------+--- ... ---+

CRC = 32bit hash computed over the payload using CRC
Size = Length of the payload data
Type = Type of record
       (kZeroType, kFullType, kFirstType, kLastType, kMiddleType )
       The type is used to group a bunch of records together to represent
       blocks that are larger than kBlockSize
Payload = Byte stream as long as specified by the payload size
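
To make the layout concrete, here is a hedged sketch (not RocksDB source) that estimates how many fragments (kFirstType/kMiddleType/kLastType) a payload is split into, given the 7-byte header (4B CRC + 2B Size + 1B Type) and the 32 KB block size above; it ignores the writer's current offset within a block, which the real code has to account for:

#include <cstddef>
#include <cstdio>

// Constants matching the WAL format described above.
constexpr size_t kBlockSize  = 32 * 1024;  // 32 KB blocks
constexpr size_t kHeaderSize = 4 + 2 + 1;  // CRC + Size + Type

// Rough number of fragments needed for a payload starting at a block boundary.
size_t NumFragments(size_t payload) {
  size_t per_block = kBlockSize - kHeaderSize;   // usable bytes per block
  return (payload + per_block - 1) / per_block;  // ceiling division
}

int main() {
  printf("64 KB payload -> %zu fragments\n", NumFragments(64 * 1024));  // 3
  return 0;
}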

Finally, the format of the WAL payload: it is a collection of operations, and as the record shows, WAL writes happen batch by batch.

// WriteBatch::rep_ :=
//    sequence: fixed64
//    count: fixed32
//    data: record[count]
// record :=
//    kTypeValue varstring varstring
//    kTypeDeletion varstring
//    kTypeSingleDeletion varstring
//    kTypeMerge varstring varstring
//    kTypeColumnFamilyValue varint32 varstring varstring
//    kTypeColumnFamilyDeletion varint32 varstring varstring
//    kTypeColumnFamilySingleDeletion varint32 varstring varstring
//    kTypeColumnFamilyMerge varint32 varstring varstring
//    kTypeBeginPrepareXID varstring
//    kTypeEndPrepareXID
//    kTypeCommitXID varstring
//    kTypeRollbackXID varstring
//    kTypeNoop
// varstring :=
//    len: varint32
//    data: uint8[len]

In the format above there is a sequence value, which expresses the ordering of operations in the WAL; note that the sequence is advanced per WriteBatch.

Status DBImpl::WriteToWAL(const WriteThread::WriteGroup& write_group,
                          log::Writer* log_writer, uint64_t* log_used,
                          bool need_log_sync, bool need_log_dir_sync,
                          SequenceNumber sequence) {
  Status status;
.........................................
  WriteBatchInternal::SetSequence(merged_batch, sequence);
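
From the API side this means all operations in one WriteBatch are persisted as a single WAL record and consume consecutive sequence numbers; a minimal sketch (assuming an open rocksdb::DB* db):

rocksdb::WriteBatch batch;      // becomes one WAL payload (WriteBatch::rep_)
batch.Put("key1", "value1");    // a kTypeValue entry
batch.Delete("key2");           // a kTypeDeletion entry
batch.Put("key3", "value3");
// count = 3; if the batch is assigned sequence s, the entries use s, s+1, s+2
rocksdb::Status st = db->Write(rocksdb::WriteOptions(), &batch);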

Tools for inspecting the WAL

Here I used the tools from the rocksdb package installed directly on macOS (brew install rocksdb); on a standard Linux system, building the RocksDB source yields the ldb tool, which is the same thing.

bogon:rocksdb-master baron$ rocksdb_ldb dump_wal --walfile=./000285.log --header

Sequence,Count,ByteSize,Physical Offset,Key(s)
1255,1,110,0,PUT(1) : 0x00000006000000000000013C

The output above is a single record, and this record contains only one operation; if multiple operations form a batch, they are appended to the same record.

Creating a WAL

First, a WAL is created when a new DB is opened:

Status DB::Open(const DBOptions& db_options, const std::string& dbname,
                const std::vector<ColumnFamilyDescriptor>& column_families,
                std::vector<ColumnFamilyHandle*>* handles, DB** dbptr) {
......................................................................
  s = impl->Recover(column_families);
  if (s.ok()) {
    uint64_t new_log_number = impl->versions_->NewFileNumber();
.............................................
    s = NewWritableFile(
        impl->immutable_db_options_.env,
        LogFileName(impl->immutable_db_options_.wal_dir, new_log_number),
        &lfile, opt_env_options);
................................................

The second case: when a CF (column family) is flushed to disk, a new WAL is also created, via the SwitchMemtable function. This function switches memtables, i.e. performs the switch before a flush (create a new memtable, then flush the old one to disk):

Status DBImpl::SwitchMemtable(ColumnFamilyData* cfd, WriteContext* context) {
..................................................
  {
    if (creating_new_log) {
...............................................
      } else {
        s = NewWritableFile(
            env_, LogFileName(immutable_db_options_.wal_dir, new_log_number),
            &lfile, opt_env_opt);
      }
.................................
    }
...............................................
  return s;
}

From these two functions we can see that every new WAL gets a new_log_number, which becomes the prefix of the WAL file name; generating a new log_number essentially always goes through NewFileNumber. Note that if the recycle_log_file_num option is set, old log numbers may be reused. Let's look at NewFileNumber first:

uint64_t NewFileNumber() { return next_file_number_.fetch_add(1); }

The implementation is trivial: the file number is incremented each time, which is why WAL files are generally named like 0000001.log.

Cleaning up WALs

A WAL is deleted only after all of the data it contains has been persisted into SSTs (the deletion may also be delayed, because sometimes a master needs to ship the transaction log to a slave for replay). Start with DBImpl::FindObsoleteFiles; the function is long, so we only look at the WAL part. The logic is simple: iterate over the WALs and push every file whose log_number is smaller than the current min_log_number into the corresponding structure (log_delete_files).

if (!alive_log_files_.empty() && !logs_.empty()) {
    uint64_t min_log_number = job_context->log_number;
    size_t num_alive_log_files = alive_log_files_.size();
    // find newly obsoleted log files
    while (alive_log_files_.begin()->number < min_log_number) {
      auto& earliest = *alive_log_files_.begin();
      if (immutable_db_options_.recycle_log_file_num >
          log_recycle_files.size()) {
        ROCKS_LOG_INFO(immutable_db_options_.info_log,
                       "adding log %" PRIu64 " to recycle list\n",
                       earliest.number);
        log_recycle_files.push_back(earliest.number);
      } else {
        job_context->log_delete_files.push_back(earliest.number);
      }
.....................................................................
    }
    while (!logs_.empty() && logs_.front().number < min_log_number) {
      auto& log = logs_.front();
      if (log.getting_synced) {
        log_sync_cv_.Wait();
        // logs_ could have changed while we were waiting.
        continue;
      }
      logs_to_free_.push_back(log.ReleaseWriter());
      {
        InstrumentedMutexLock wl(&log_write_mutex_);
        logs_.pop_front();
      }
    }
    // Current log cannot be obsolete.
    assert(!logs_.empty());
  }

Two core data structures appear here, alive_log_files_ and logs_; the difference is that the former tracks WALs that have received writes, while the latter contains all WALs (for example, opening a DB without writing any data still creates a WAL).

The actual WAL deletion happens in DBImpl::DeleteObsoleteFileImpl; it is never triggered on its own, but together with temp/sst files (PurgeObsoleteFiles).

MANIFEST internals

Overview

In RocksDB, the MANIFEST stores some of the engine's internal state metadata. Simply put, when the system restarts abnormally or the process exits unexpectedly, RocksDB needs a mechanism to recover to a consistent state, and that consistent state is guaranteed by the MANIFEST.

The MANIFEST is a standalone file in RocksDB, and the data it stores essentially comes from the VersionEdit structure.

The MANIFEST actually comprises two kinds of files: a log file, and a file containing the name of the latest MANIFEST log. A MANIFEST log file is named MANIFEST-(seqnumber), where seq keeps increasing. Only when the file exceeds the configured size does RocksDB roll over to a new file; after the new file is flushed to disk (and the file name updated), the old one is deleted. Each MANIFEST update can be regarded as a snapshot.

The basic file composition of the MANIFEST:

MANIFEST = { CURRENT, MANIFEST-<seq-no>* } 
CURRENT = File pointer to the latest manifest log
MANIFEST-<seq no> = Contains snapshot of RocksDB state and subsequent modifications

In RocksDB, the engine's state at any point in time is stored as a Version (i.e. a set of SSTs), every modification to a Version is a VersionEdit, and those VersionEdits are exactly what make up the manifest-log file.

The basic structure of a MANIFEST log file:

version-edit      = Any RocksDB state change
version           = { version-edit* }
manifest-log-file = { version, version-edit* }
                  = { version-edit* }

Tools for inspecting the MANIFEST

Again the ldb tool; here I used the rocksdb_ldb tool installed on macOS.

bogon:rocksdb-master baron$ rocksdb_ldb manifest_dump --path=./MANIFEST-000001

--------------- Column family "default"  (ID 0) --------------
log number: 13
comparator: <NO COMPARATOR>
--- level 0 --- version# 0 ---
 11:80860[' 
--------------- Column family "__system__"  (ID 1) --------------
log number: 24
comparator: RocksDB_SE_v3.10
--- level 0 --- version# 1 ---
 25:1094[' 
next_file_number 27 last_sequence 190  prev_log_number 0 max_column_family 1

Creating and deleting the MANIFEST

The MANIFEST involves three data structures: VersionEdit, Version, and VersionSet. The first two were introduced above; VersionSet, as the name implies, is a set of Versions, i.e. it records the information of each version in order to manage all Versions.

class VersionSet {
 public:
  VersionSet(const std::string& dbname, const ImmutableDBOptions* db_options,
             const EnvOptions& env_options, Cache* table_cache,
             WriteBufferManager* write_buffer_manager,
             WriteController* write_controller);
  ~VersionSet();
.......................
 private:
  struct ManifestWriter;

  friend class Version;
.................................
  // Opened lazily
  unique_ptr<log::Writer> descriptor_log_;
  // generates a increasing version number for every new version
  uint64_t current_version_number_;

  // Queue of writers to the manifest file
  std::deque<ManifestWriter*> manifest_writers_;
..........................................

The two key members here are descriptor_log_ and manifest_writers_: the former represents the current manifest-log file, the latter the contents waiting to be written into it.

Below is the ManifestWriter structure; it contains an array of VersionEdits, which is exactly the content about to be written to the manifest-log file.

// this is used to batch writes to the manifest file
struct VersionSet::ManifestWriter {
  Status status;
  bool done;
  InstrumentedCondVar cv;
  ColumnFamilyData* cfd;
  const autovector<VersionEdit*>& edit_list;

  explicit ManifestWriter(InstrumentedMutex* mu, ColumnFamilyData* _cfd,
                          const autovector<VersionEdit*>& e)
      : done(false), cv(mu), cfd(_cfd), edit_list(e) {}
};

Now let's see how RocksDB creates and writes the file; all of the following code lives in VersionSet::LogAndApply.

First, every LogAndApply creates a new ManifestWriter and appends it to the manifest_writers_ queue. A writer only proceeds after all writers queued before it have finished writing; otherwise it waits (the role of writers is explained in the detailed write path later).

  // queue our request
  ManifestWriter w(mu, column_family_data, edit_list);
  manifest_writers_.push_back(&w);
  while (!w.done && &w != manifest_writers_.front()) {
    w.cv.Wait();
  }
  if (w.done) {
    return w.status;
  }

Next, the corresponding data is collected into batch_edits (from manifest_writers_).

autovector<VersionEdit*> batch_edits;
....................................
 if (w.edit_list.front()->IsColumnFamilyManipulation()) {
    // no group commits for column family add or drop
    LogAndApplyCFHelper(w.edit_list.front());
    batch_edits.push_back(w.edit_list.front());
  } else {
    v = new Version(column_family_data, this, current_version_number_++);
........................................................
    for (const auto& writer : manifest_writers_) {
      if (writer->edit_list.front()->IsColumnFamilyManipulation() ||
          writer->cfd->GetID() != column_family_data->GetID()) {
        break;
      }
      last_writer = writer;
      for (const auto& edit : writer->edit_list) {
...........................................
        batch_edits.push_back(edit);
      }
    }
    builder->SaveTo(v->storage_info());
  }

Then the logic for creating a new manifest-log file: a new file is created either on first entry or when the file size exceeds the value configured in the options.

if (!descriptor_log_ ||
      manifest_file_size_ > db_options_->max_manifest_file_size) {
    pending_manifest_file_number_ = NewFileNumber();
    batch_edits.back()->SetNextFile(next_file_number_.load());
    new_descriptor_log = true;
  } else {
    pending_manifest_file_number_ = manifest_file_number_;
  }

  if (new_descriptor_log) {
    // if we're writing out new snapshot make sure to persist max column family
    if (column_family_set_->GetMaxColumnFamily() > 0) {
      w.edit_list.front()->SetMaxColumnFamily(
          column_family_set_->GetMaxColumnFamily());
    }
  }

If a new manifest-log file is needed, the file information is assembled and the file created:

if (new_descriptor_log) {
      // create manifest file
      ROCKS_LOG_INFO(db_options_->info_log, "Creating manifest %" PRIu64 "\n",
                     pending_manifest_file_number_);
      unique_ptr<WritableFile> descriptor_file;
      EnvOptions opt_env_opts = env_->OptimizeForManifestWrite(env_options_);
      s = NewWritableFile(
          env_, DescriptorFileName(dbname_, pending_manifest_file_number_),
          &descriptor_file, opt_env_opts);
      if (s.ok()) {
        descriptor_file->SetPreallocationBlockSize(
            db_options_->manifest_preallocation_size);

        unique_ptr<WritableFileWriter> file_writer(
            new WritableFileWriter(std::move(descriptor_file), opt_env_opts));
        descriptor_log_.reset(
            new log::Writer(std::move(file_writer), 0, false));
        s = WriteSnapshot(descriptor_log_.get());
      }
    }

Then the VersionEdit records are written to the file (we will look at the record layout at the end); after writing, Sync is called to flush the contents to disk, and once all of that is done the CURRENT file is updated, i.e. the name of the newest manifest-log file is written into CURRENT.

  for (auto& e : batch_edits) {
    std::string record;
    if (!e->EncodeTo(&record)) {
      s = Status::Corruption(
          "Unable to Encode VersionEdit:" + e->DebugString(true));
      break;
    }
    TEST_KILL_RANDOM("VersionSet::LogAndApply:BeforeAddRecord",
                     rocksdb_kill_odds * REDUCE_ODDS2);
    s = descriptor_log_->AddRecord(record);
    if (!s.ok()) {
      break;
    }
  }
  if (s.ok()) {
    s = SyncManifest(env_, db_options_, descriptor_log_->file());
  }
.............................
// If we just created a new descriptor file, install it by writing a
// new CURRENT file that points to it.
if (s.ok() && new_descriptor_log) {
  s = SetCurrentFile(env_, dbname_, pending_manifest_file_number_,
                     db_directory);
}

Once the CURRENT file has been updated, the old manifest file can be deleted.

  // Append the old mainfest file to the obsolete_manifests_ list to be deleted
  // by PurgeObsoleteFiles later.
  if (s.ok() && new_descriptor_log) {
    obsolete_manifests_.emplace_back(
        DescriptorFileName("", manifest_file_number_));
  }

Finally, the manifest_writers_ queue is updated and the previously blocked writers are woken up.

  // wake up all the waiting writers
  while (true) {
    ManifestWriter* ready = manifest_writers_.front();
    manifest_writers_.pop_front();
    if (ready != &w) {
      ready->status = s;
      ready->done = true;
      ready->cv.Signal();
    }
    if (ready == last_writer) break;
  }
  // Notify new head of write queue
  if (!manifest_writers_.empty()) {
    manifest_writers_.front()->cv.Signal();
  }

File contents

For the exact file format, see the official wiki: MANIFEST.

For example, the edit produced for an SST file during compaction is recorded in the MANIFEST in the following format:

+--------------+-------------+--------------+------------+----------------+--------------+----------------+----------------+
| kNewFile4    | level       | file number  | file size  | smallest_key   | largest_key  | smallest_seqno | largest_seq_no |
+--------------+-------------+--------------+------------+----------------+--------------+----------------+----------------+
|<-- var32  -->|<-- var32 -->|<-- var64  -->|<-  var64 ->|<-- String   -->|<-- String -->|<-- var64    -->|<-- var64    -->|

+--------------+------------------+---------+------+----------------+--------------------+---------+------------+
|  CustomTag1  | Field 1 size n1  | field1  | ...  |  CustomTag(m)  | Field m size n(m)  | field(m)| kTerminate |
+--------------+------------------+---------+------+----------------+--------------------+---------+------------+
<-- var32   -->|<-- var32      -->|<- n1  ->|      |<-- var32   - ->|<--    var32     -->|<- n(m)->|<- var32 -->|

From the analysis above, the data is ultimately serialized through VersionEdit::EncodeTo. VersionEdit mainly contains fields such as log_number/last_sequence_, and one more important piece of information gets serialized as well: FileMetaData, the metadata of an SST file.

struct FileMetaData {
  FileDescriptor fd;
  InternalKey smallest;            // Smallest internal key served by table
  InternalKey largest;             // Largest internal key served by table
  SequenceNumber smallest_seqno;   // The smallest seqno in this file
  SequenceNumber largest_seqno;    // The largest seqno in this file

.........................................
  // File size compensated by deletion entry.
  // This is updated in Version::UpdateAccumulatedStats() first time when the
  // file is created or loaded.  After it is updated (!= 0), it is immutable.
  uint64_t compensated_file_size;
  // These values can mutate, but they can only be read or written from
  // single-threaded LogAndApply thread
  uint64_t num_entries;            // the number of entries.
  uint64_t num_deletions;          // the number of deletion entries.
  uint64_t raw_key_size;           // total uncompressed key size.
  uint64_t raw_value_size;         // total uncompressed value size.

  int refs;  // Reference count

  bool being_compacted;        // Is this file undergoing compaction?
  bool init_stats_from_file;   // true if the data-entry stats of this file
                               // has initialized from file.

  bool marked_for_compaction;  // True if client asked us nicely to compact this
                               // file.
};

ColumnFamily in detail

Overview

RocksDB 3.0 added the Column Family feature. With it, every KV pair is associated with a Column Family, the default one being "default". Column Families give RocksDB a logical partitioning. Implementation-wise, different Column Families share the WAL but each has its own Memtable and SSTs, which means we can quickly and conveniently set different properties for different Column Families, and quickly drop a Column Family.

API overview

First, creating a Column Family; note that there are two ways to do it:

  • Pass the Column Families to be created when opening the DB
  • After the DB has been created and opened, call CreateColumnFamily directly

The interfaces used by the two creation paths:

DB::Open(const DBOptions& db_options, const std::string& name, const std::vector<ColumnFamilyDescriptor>& column_families, std::vector<ColumnFamilyHandle*>* handles, DB** dbptr);
DB::CreateColumnFamily(const ColumnFamilyOptions& options, const std::string& column_family_name, ColumnFamilyHandle** handle);

Either way, a ColumnFamilyHandle is ultimately returned to the caller.

Dropping a Column Family is then simple: pass the previously created ColumnFamilyHandle to RocksDB (see the sketch after the declaration below).

DropColumnFamily(ColumnFamilyHandle* column_family);
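
A hedged end-to-end sketch of both creation paths plus the drop call (the CF name "new_cf" and the DB path are just examples):

#include <cassert>
#include <vector>
#include "rocksdb/db.h"

void ColumnFamilyDemo() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/testdb", &db);
  assert(s.ok());

  // Path 2: create a column family on an already open DB.
  rocksdb::ColumnFamilyHandle* cf;
  s = db->CreateColumnFamily(rocksdb::ColumnFamilyOptions(), "new_cf", &cf);
  assert(s.ok());

  // Every KV operation can take the handle.
  s = db->Put(rocksdb::WriteOptions(), cf, "key", "value");

  // Drop the column family, then release the handle.
  s = db->DropColumnFamily(cf);
  s = db->DestroyColumnFamilyHandle(cf);
  delete db;

  // Path 1: when reopening, every existing column family must be listed.
  std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
      {rocksdb::kDefaultColumnFamilyName, rocksdb::ColumnFamilyOptions()}};
  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  s = rocksdb::DB::Open(rocksdb::DBOptions(), "/tmp/testdb", cfs, &handles, &db);
}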

Core data structures

All Column Families are managed by a structure called ColumnFamilySet, and each individual Column Family is a ColumnFamilyData.

Start with ColumnFamilySet: it manages Column Families with two structures, a map (column_family_data_) and a doubly linked list (dummy_cfd_). The maps store the mapping from Column Family name to ID, and from ID to ColumnFamilyData. Note that internally RocksDB represents every ColumnFamily name as a uint32 ID (max_column_family_), a simple monotonically increasing value.

class ColumnFamilySet {
 public:
  // ColumnFamilySet supports iteration
   public:
.................................

  ColumnFamilyData* CreateColumnFamily(const std::string& name, uint32_t id,
                                       Version* dummy_version,
                                       const ColumnFamilyOptions& options);
  iterator begin() { return iterator(dummy_cfd_->next_); }
  iterator end() { return iterator(dummy_cfd_); }
...............................
 private:
  friend class ColumnFamilyData;
  // helper function that gets called from cfd destructor
  // REQUIRES: DB mutex held
  void RemoveColumnFamily(ColumnFamilyData* cfd);

  // column_families_ and column_family_data_ need to be protected:
  // * when mutating both conditions have to be satisfied:
  // 1. DB mutex locked
  // 2. thread currently in single-threaded write thread
  // * when reading, at least one condition needs to be satisfied:
  // 1. DB mutex locked
  // 2. accessed from a single-threaded write thread
  std::unordered_map<std::string, uint32_t> column_families_;
  std::unordered_map<uint32_t, ColumnFamilyData*> column_family_data_;

  uint32_t max_column_family_;
  ColumnFamilyData* dummy_cfd_;
  // We don't hold the refcount here, since default column family always exists
  // We are also not responsible for cleaning up default_cfd_cache_. This is
  // just a cache that makes common case (accessing default column family)
  // faster
  ColumnFamilyData* default_cfd_cache_;

..................................
};

Then ColumnFamilyData, the structure representing one ColumnFamily and holding its information; we can see the ID, the name, and all of this ColumnFamily's versions (dummy_versions_). The next_/prev_ pointers here form the doubly linked list of all ColumnFamilies inside ColumnFamilySet.

class ColumnFamilyData {
 public:
  ~ColumnFamilyData();

  // thread-safe
  uint32_t GetID() const { return id_; }
  // thread-safe
  const std::string& GetName() const { return name_; }

  // Ref() can only be called from a context where the caller can guarantee
  // that ColumnFamilyData is alive (while holding a non-zero ref already,
  // holding a DB mutex, or as the leader in a write batch group).
  void Ref() { refs_.fetch_add(1, std::memory_order_relaxed); }

  // Unref decreases the reference count, but does not handle deletion
  // when the count goes to 0.  If this method returns true then the
  // caller should delete the instance immediately, or later, by calling
  // FreeDeadColumnFamilies().  Unref() can only be called while holding
  // a DB mutex, or during single-threaded recovery.
  bool Unref() {
    int old_refs = refs_.fetch_sub(1, std::memory_order_relaxed);
    assert(old_refs > 0);
    return old_refs == 1;
  }
..............................

 private:
  friend class ColumnFamilySet;
  ColumnFamilyData(uint32_t id, const std::string& name,
                   Version* dummy_versions, Cache* table_cache,
                   WriteBufferManager* write_buffer_manager,
                   const ColumnFamilyOptions& options,
                   const ImmutableDBOptions& db_options,
                   const EnvOptions& env_options,
                   ColumnFamilySet* column_family_set);

  uint32_t id_;
  const std::string name_;
  Version* dummy_versions_;  // Head of circular doubly-linked list of versions.
  Version* current_;         // == dummy_versions->prev_
......................................................

  // Thread's local copy of SuperVersion pointer
  // This needs to be destructed before mutex_
  std::unique_ptr<ThreadLocalPtr> local_sv_;

  // pointers for a circular linked list. we use it to support iterations over
  // all column families that are alive (note: dropped column families can also
  // be alive as long as client holds a reference)
  ColumnFamilyData* next_;
  ColumnFamilyData* prev_;
...................................

  ColumnFamilySet* column_family_set_;
..................................
};

Then the ColumnFamilyHandleImpl structure returned to the caller, which mainly wraps ColumnFamilyData.

// ColumnFamilyHandleImpl is the class that clients use to access different
// column families. It has non-trivial destructor, which gets called when client
// is done using the column family
class ColumnFamilyHandleImpl : public ColumnFamilyHandle {
 public:
  // create while holding the mutex
  ColumnFamilyHandleImpl(
      ColumnFamilyData* cfd, DBImpl* db, InstrumentedMutex* mutex);
  // destroy without mutex
  virtual ~ColumnFamilyHandleImpl();
  virtual ColumnFamilyData* cfd() const { return cfd_; }
......................................

 private:
  ColumnFamilyData* cfd_;
  DBImpl* db_;
  InstrumentedMutex* mutex_;
};

Creation and deletion

Start from DBImpl::CreateColumnFamilyImpl. This function first calls GetNextColumnFamilyID to obtain the (auto-incremented) ID for the ColumnFamily being created, then calls LogAndApply to perform the corresponding operations, and finally returns the wrapped ColumnFamilyHandle to the caller.

Status DBImpl::CreateColumnFamilyImpl(const ColumnFamilyOptions& cf_options,
                                      const std::string& column_family_name,
                                      ColumnFamilyHandle** handle) {
.......................................

  {
...................................
    VersionEdit edit;
    edit.AddColumnFamily(column_family_name);
    uint32_t new_id = versions_->GetColumnFamilySet()->GetNextColumnFamilyID();
    edit.SetColumnFamily(new_id);
    edit.SetLogNumber(logfile_number_);
    edit.SetComparatorName(cf_options.comparator->Name());

    // LogAndApply will both write the creation in MANIFEST and create
    // ColumnFamilyData object
    {  // write thread
      WriteThread::Writer w;
      write_thread_.EnterUnbatched(&w, &mutex_);
      // LogAndApply will both write the creation in MANIFEST and create
      // ColumnFamilyData object
      s = versions_->LogAndApply(nullptr, MutableCFOptions(cf_options), &edit,
                                 &mutex_, directories_.GetDbDir(), false,
                                 &cf_options);
      write_thread_.ExitUnbatched(&w);
    }
    if (s.ok()) {
........................................
      *handle = new ColumnFamilyHandleImpl(cfd, this, &mutex_);
      ROCKS_LOG_INFO(immutable_db_options_.info_log,
                     "Created column family [%s] (ID %u)",
                     column_family_name.c_str(), (unsigned)cfd->GetID());
    }
.............................................
  }  // InstrumentedMutexLock l(&mutex_)

.................................
  return s;
}

LogAndApply eventually calls ColumnFamilySet's CreateColumnFamily (through VersionSet::CreateColumnFamily), which does three things:

  1. create the ColumnFamilyData object
  2. insert the newly created CFD into the doubly linked list
  3. update the corresponding map structures

// under a DB mutex AND write thread
ColumnFamilyData* ColumnFamilySet::CreateColumnFamily(
 const std::string& name, uint32_t id, Version* dummy_versions,
 const ColumnFamilyOptions& options) {
  assert(column_families_.find(name) == column_families_.end());
  ColumnFamilyData* new_cfd = new ColumnFamilyData(
   id, name, dummy_versions, table_cache_, write_buffer_manager_, options,
   *db_options_, env_options_, this);
  column_families_.insert({name, id});
  column_family_data_.insert({id, new_cfd});
  max_column_family_ = std::max(max_column_family_, id);
  // add to linked list
  new_cfd->next_ = dummy_cfd_;
  auto prev = dummy_cfd_->prev_;
  new_cfd->prev_ = prev;
  prev->next_ = new_cfd;
  dummy_cfd_->prev_ = new_cfd;
  if (id == 0) {
 default_cfd_cache_ = new_cfd;
  }
  return new_cfd;
}

Now deletion. Every drop ultimately calls ColumnFamilySet::RemoveColumnFamily, which erases the ColumnFamily from the two maps. You might ask: why doesn't the managing doubly linked list need updating here? The reason is that ColumnFamilyData is reference counted, so only when all references have been released is the ColumnFamilyData actually destroyed (and only then is it removed from the doubly linked list).

// under a DB mutex AND from a write thread
void ColumnFamilySet::RemoveColumnFamily(ColumnFamilyData* cfd) {
 auto cfd_iter = column_family_data_.find(cfd->GetID());
 assert(cfd_iter != column_family_data_.end());
 column_family_data_.erase(cfd_iter);
 column_families_.erase(cfd->GetName());
}

So let's look at the ColumnFamilyData destructor: it removes the entry from the doubly linked list and releases the corresponding Version (current_).

// DB mutex held
ColumnFamilyData::~ColumnFamilyData() {
  assert(refs_.load(std::memory_order_relaxed) == 0);
  // remove from linked list
  auto prev = prev_;
  auto next = next_;
  prev->next_ = next;
  next->prev_ = prev;

  if (!dropped_ && column_family_set_ != nullptr) {
    // If it's dropped, it's already removed from column family set
    // If column_family_set_ == nullptr, this is dummy CFD and not in
    // ColumnFamilySet
    column_family_set_->RemoveColumnFamily(this);
  }

  if (current_ != nullptr) {
    current_->Unref();
  }
..............................
}

Finally, how is a ColumnFamily persisted on disk? First, ColumnFamilies are stored in the MANIFEST file, and the stored information is simple (introduced earlier), no different from the other MANIFEST contents. So here we mainly look at reading and initializing the data; all of this happens in VersionSet::Recover.

The main logic reads the MANIFEST and initializes the ColumnFamily information read from disk (initializing the ColumnFamilySet structure). Effectively, all earlier create/drop operations are replayed, calling CreateColumnFamily/DropColumnFamily to bring the on-disk information into memory.

while (reader.ReadRecord(&record, &scratch) && s.ok()) {
      VersionEdit edit;
      s = edit.DecodeFrom(record);
      if (!s.ok()) {
        break;
      }

      // Not found means that user didn't supply that column
      // family option AND we encountered column family add
      // record. Once we encounter column family drop record,
      // we will delete the column family from
      // column_families_not_found.
      bool cf_in_not_found =
          column_families_not_found.find(edit.column_family_) !=
          column_families_not_found.end();
      // in builders means that user supplied that column family
      // option AND that we encountered column family add record
      bool cf_in_builders =
          builders.find(edit.column_family_) != builders.end();

      // they can't both be true
      assert(!(cf_in_not_found && cf_in_builders));

      ColumnFamilyData* cfd = nullptr;

      if (edit.is_column_family_add_) {
        if (cf_in_builders || cf_in_not_found) {
          s = Status::Corruption(
              "Manifest adding the same column family twice");
          break;
        }
        auto cf_options = cf_name_to_options.find(edit.column_family_name_);
        if (cf_options == cf_name_to_options.end()) {
          column_families_not_found.insert(
              {edit.column_family_, edit.column_family_name_});
        } else {
          cfd = CreateColumnFamily(cf_options->second, &edit);
          cfd->set_initialized();
          builders.insert(
              {edit.column_family_, new BaseReferencedVersionBuilder(cfd)});
        }
      } else if (edit.is_column_family_drop_) {
        if (cf_in_builders) {
          auto builder = builders.find(edit.column_family_);
          assert(builder != builders.end());
          delete builder->second;
          builders.erase(builder);
          cfd = column_family_set_->GetColumnFamily(edit.column_family_);
          if (cfd->Unref()) {
            delete cfd;
            cfd = nullptr;
          } else {
            // who else can have reference to cfd!?
            assert(false);
          }
        } else if (cf_in_not_found) {
          column_families_not_found.erase(edit.column_family_);
        } else {
          s = Status::Corruption(
              "Manifest - dropping non-existing column family");
          break;
        }
      } else if (!cf_in_not_found) {
        if (!cf_in_builders) {
          s = Status::Corruption(
              "Manifest record referencing unknown column family");
          break;
        }

        cfd = column_family_set_->GetColumnFamily(edit.column_family_);
        // this should never happen since cf_in_builders is true
        assert(cfd != nullptr);

        // if it is not column family add or column family drop,
        // then it's a file add/delete, which should be forwarded
        // to builder
        auto builder = builders.find(edit.column_family_);
        assert(builder != builders.end());
        builder->second->version_builder()->Apply(&edit);
      }

      if (cfd != nullptr) {
        if (edit.has_log_number_) {
          if (cfd->GetLogNumber() > edit.log_number_) {
            ROCKS_LOG_WARN(
                db_options_->info_log,
                "MANIFEST corruption detected, but ignored - Log numbers in "
                "records NOT monotonically increasing");
          } else {
            cfd->SetLogNumber(edit.log_number_);
            have_log_number = true;
          }
        }
        if (edit.has_comparator_ &&
            edit.comparator_ != cfd->user_comparator()->Name()) {
          s = Status::InvalidArgument(
              cfd->user_comparator()->Name(),
              "does not match existing comparator " + edit.comparator_);
          break;
        }
      }

      if (edit.has_prev_log_number_) {
        previous_log_number = edit.prev_log_number_;
        have_prev_log_number = true;
      }

      if (edit.has_next_file_number_) {
        next_file = edit.next_file_number_;
        have_next_file = true;
      }

      if (edit.has_max_column_family_) {
        max_column_family = edit.max_column_family_;
      }

      if (edit.has_last_sequence_) {
        last_sequence = edit.last_sequence_;
        have_last_sequence = true;
      }
    }

MEMTABLE implementation

Overview

We know that every RocksDB write goes first to the WAL and then to the Memtable; now let's analyze how the MemTable is implemented.

In RocksDB, every ColumnFamily has its own Memtable, independent of the others. There are several Memtable implementations (SkipList/HashSkipList/HashLinkList/Vector); see memtable for the differences. We analyze the default implementation, the skiplist (the only one that supports concurrent inserts). A hedged configuration sketch follows.
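
Selecting an implementation goes through Options::memtable_factory (SkipListFactory is the default, and the hash-based variants additionally require a prefix extractor):

rocksdb::Options options;
// Default: skiplist, the only implementation that allows concurrent inserts.
options.memtable_factory.reset(new rocksdb::SkipListFactory());

// Alternative: vector, e.g. for bulk loads that are sorted afterwards.
// options.memtable_factory.reset(new rocksdb::VectorRepFactory());

// Hash-based memtables need a prefix extractor:
// options.prefix_extractor.reset(rocksdb::NewFixedPrefixTransform(4));
// options.memtable_factory.reset(rocksdb::NewHashSkipListRepFactory());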

Implementation

Start with Memtable creation. The Memtable is created (ColumnFamilyData::CreateNewMemtable) when the ColumnFamily is created (VersionSet::CreateColumnFamily); it simply constructs the memtable and stores it in the mem_ field of ColumnFamilyData.

MemTable* ColumnFamilyData::ConstructNewMemtable(
    const MutableCFOptions& mutable_cf_options, SequenceNumber earliest_seq) {
  return new MemTable(internal_comparator_, ioptions_, mutable_cf_options,
                      write_buffer_manager_, earliest_seq, id_);
}
void ColumnFamilyData::CreateNewMemtable(
    const MutableCFOptions& mutable_cf_options, SequenceNumber earliest_seq) {
  if (mem_ != nullptr) {
    delete mem_->Unref();
  }
  SetMemtable(ConstructNewMemtable(mutable_cf_options, earliest_seq));
  mem_->Ref();
}

As mentioned above, RocksDB has multiple MemTable implementations; how does it pick one? RocksDB creates the different memtables through memtable_factory according to the user's configuration. Note that the core memtable implementation lives in the table_ field of the MemTable class.

MemTable::MemTable:
      table_(ioptions.memtable_factory->CreateMemTableRep(
          comparator_, &arena_, ioptions.prefix_extractor, ioptions.info_log,
          column_family_id)),


class MemTableRepFactory {
 public:
  virtual ~MemTableRepFactory() {}

  virtual MemTableRep* CreateMemTableRep(const MemTableRep::KeyComparator&,
                                         Allocator*, const SliceTransform*,
                                         Logger* logger) = 0;
  virtual MemTableRep* CreateMemTableRep(
      const MemTableRep::KeyComparator& key_cmp, Allocator* allocator,
      const SliceTransform* slice_transform, Logger* logger,
      uint32_t /* column_family_id */) {
    return CreateMemTableRep(key_cmp, allocator, slice_transform, logger);
  }
........................

Finally, the chosen implementation's CreateMemTableRep method is called; here is the SkipList version.

MemTableRep* SkipListFactory::CreateMemTableRep(
    const MemTableRep::KeyComparator& compare, Allocator* allocator,
    const SliceTransform* transform, Logger* /*logger*/) {
  return new SkipListRep(compare, allocator, transform, lookahead_);
}

This ultimately creates a SkipListRep object, which internally creates the SkipList (class InlineSkipList).

class SkipListRep : public MemTableRep {
  InlineSkipList<const MemTableRep::KeyComparator&> skip_list_;
...................................
public:
 explicit SkipListRep(const MemTableRep::KeyComparator& compare,
                      Allocator* allocator, const SliceTransform* transform,
                      const size_t lookahead)
     : MemTableRep(allocator),
       skip_list_(compare, allocator),
       cmp_(compare),
       transform_(transform),
       lookahead_(lookahead) {}

For now, all we need to know is that all memtable data ends up stored in the SkipList.

From the earlier analysis we know a Memtable insert goes through a WriteBatch and iterates over the ColumnFamilies, finally calling MemTable::Add.

bool MemTable::Add(SequenceNumber s, ValueType type,
                   const Slice& key, /* user key */
                   const Slice& value, bool allow_concurrent,
                   MemTablePostProcessInfo* post_process_info) {
..............................
  bool res = table->InsertKeyConcurrently(handle);
  if (UNLIKELY(!res)) {
    return res;
  }
..............................
}

This in turn calls into InlineSkipList to insert the data.

template <class Comparator>
bool InlineSkipList<Comparator>::InsertConcurrently(const char* key) {
  Node* prev[kMaxPossibleHeight];
  Node* next[kMaxPossibleHeight];
  Splice splice;
  splice.prev_ = prev;
  splice.next_ = next;
  return Insert<true>(key, &splice, false);
}

At this point you may wonder: the skiplist stores only keys, and RocksDB is a KV store, so how is the KV pair stored? RocksDB packs the KV pair into a single key and hands it to the SkipList; the key layout is:

  // Format of an entry is concatenation of:
  //  key_size     : varint32 of internal_key.size()
  //  key bytes    : char[internal_key.size()]
  //  value_size   : varint32 of value.size()
  //  value bytes  : char[value.size()]

This packing is implemented in MemTable::Add, seen earlier.

uint32_t key_size = static_cast<uint32_t>(key.size());
  uint32_t val_size = static_cast<uint32_t>(value.size());
  uint32_t internal_key_size = key_size + 8;
  const uint32_t encoded_len = VarintLength(internal_key_size) +
                               internal_key_size + VarintLength(val_size) +
                               val_size;
  char* buf = nullptr;
  std::unique_ptr<MemTableRep>& table =
      type == kTypeRangeDeletion ? range_del_table_ : table_;
  KeyHandle handle = table->Allocate(encoded_len, &buf);

  char* p = EncodeVarint32(buf, internal_key_size);
  memcpy(p, key.data(), key_size);
  Slice key_slice(p, key_size);
  p += key_size;
  uint64_t packed = PackSequenceAndType(s, type);
  EncodeFixed64(p, packed);
  p += 8;
  p = EncodeVarint32(p, val_size);
  memcpy(p, value.data(), val_size);

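A hedged sketch of the reverse operation, decoding one skiplist entry back into its parts (it uses a hand-rolled varint32 decoder instead of RocksDB's internal coding helpers, and assumes a little-endian packed tag as produced by PackSequenceAndType):

#include <cstdint>
#include <cstring>
#include <string>

// Minimal varint32 decoder (same wire format as EncodeVarint32 above).
const char* DecodeVarint32(const char* p, uint32_t* value) {
  uint32_t result = 0;
  for (int shift = 0; shift <= 28; shift += 7) {
    uint32_t byte = static_cast<unsigned char>(*p++);
    result |= (byte & 0x7f) << shift;
    if ((byte & 0x80) == 0) break;
  }
  *value = result;
  return p;
}

// Split one memtable entry into user_key, the (seq << 8 | type) tag and value.
void DecodeEntry(const char* entry, std::string* user_key, uint64_t* tag,
                 std::string* value) {
  uint32_t internal_key_size = 0, val_size = 0;
  const char* p = DecodeVarint32(entry, &internal_key_size);
  user_key->assign(p, internal_key_size - 8);  // strip the 8-byte tag
  memcpy(tag, p + internal_key_size - 8, 8);   // fixed64, little-endian
  p = DecodeVarint32(p + internal_key_size, &val_size);
  value->assign(p, val_size);
}
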
The parsing of the real KEY is done in the SkipList's Comparator (compare_). The snippet below shows the real key being decoded before the search and insert proceed.

bool InlineSkipList<Comparator>::Insert(const char* key, Splice* splice,
                                        bool allow_partial_splice_fix) {
  Node* x = reinterpret_cast<Node*>(const_cast<char*>(key)) - 1;
  const DecodedKey key_decoded = compare_.decode_key(key);
...............................
}

RocksDB write logic

Overview

In RocksDB, every write first goes to the WAL and then into the MemTable; this time we analyze how these two steps are implemented. First be clear that in RocksDB the WAL is written serially, in order, by a single thread, while the MemTable can be written concurrently by multiple threads.

RocksDB 5.5 introduced the option enable_pipelined_write, whose purpose is to pipeline the WAL and MemTable writes: when one thread finishes writing the WAL, the other writes waiting in the WAL write queue continue writing the WAL, while the current thread goes on to write the MemTable. This way the WAL writes and MemTable writes of different Writers execute concurrently.

Implementation

Here we analyze the pipelined implementation; the core function is DBImpl::PipelinedWriteImpl. Pipelined writing is enabled by setting enable_pipelined_write = true.

  • Every DB (DBImpl) has a write_thread_ (class WriteThread).

  • Each call to Write first writes the WAL; a WriteThread::Writer object is constructed and joined to a group (by calling JoinBatchGroup):

    WriteThread::Writer w(write_options, my_batch, callback, log_ref,
                          disable_memtable);
    write_thread_.JoinBatchGroup(&w);
    
  • JoinBatchGroup links all WAL writers into a group. When the current Writer object is the leader (e.g. the first one to enter), it returns immediately; otherwise it waits until it is moved to the corresponding state.

    void WriteThread::JoinBatchGroup(Writer* w) {
    ...................................
    bool linked_as_leader = LinkOne(w, &newest_writer_);
    if (linked_as_leader) {
      SetState(w, STATE_GROUP_LEADER);
    }
    
    TEST_SYNC_POINT_CALLBACK("WriteThread::JoinBatchGroup:Wait", w);
    
    if (!linked_as_leader) {
      /**
       * Wait util:
       * 1) An existing leader pick us as the new leader when it finishes
       * 2) An existing leader pick us as its follewer and
       * 2.1) finishes the memtable writes on our behalf
       * 2.2) Or tell us to finish the memtable writes in pralallel
       * 3) (pipelined write) An existing leader pick us as its follower and
       *    finish book-keeping and WAL write for us, enqueue us as pending
       *    memtable writer, and
       * 3.1) we become memtable writer group leader, or
       * 3.2) an existing memtable writer group leader tell us to finish memtable
       *      writes in parallel.
       */
      AwaitState(w, STATE_GROUP_LEADER | STATE_MEMTABLE_WRITER_LEADER |
                        STATE_PARALLEL_MEMTABLE_WRITER | STATE_COMPLETED,
                 &jbg_ctx);
      TEST_SYNC_POINT_CALLBACK("WriteThread::JoinBatchGroup:DoneWaiting", w);
    }
    }
    
  • Then the LinkOne function, which links the current Writer object into the group; because writes are concurrent, newest_writer_ (holding the most recent writer) must be updated atomically.

    bool WriteThread::LinkOne(Writer* w, std::atomic<Writer*>* newest_writer) {
    	assert(newest_writer != nullptr);
    	assert(w->state == STATE_INIT);
    	Writer* writers = newest_writer->load(std::memory_order_relaxed);
    	while (true) {
    	  w->link_older = writers;
    	  if (newest_writer->compare_exchange_weak(writers, w)) {
    	    return (writers == nullptr);
    	  }
    	}
    }
    
  • After JoinBatchGroup returns, if the current Writer object is the leader, it links all the writes under this leader into a WriteGroup (by calling EnterAsBatchGroupLeader) and starts writing the WAL. Note that non-leader writes proceed directly to the memtable write, because every non-leader write is packed (grouped) into the WAL write of the leader it belongs to; we will see the implementation below.

    size_t WriteThread::EnterAsBatchGroupLeader(Writer* leader,
                                          WriteGroup* write_group) {
    	assert(leader->link_older == nullptr);
    	assert(leader->batch != nullptr);
    	assert(write_group != nullptr);
    	................................................
    	Writer* newest_writer = newest_writer_.load(std::memory_order_acquire);
    	
    	// This is safe regardless of any db mutex status of the caller. Previous
    	// calls to ExitAsGroupLeader either didn't call CreateMissingNewerLinks
    	// (they emptied the list and then we added ourself as leader) or had to
    	// explicitly wake us up (the list was non-empty when we added ourself,
    	// so we have already received our MarkJoined).
    	CreateMissingNewerLinks(newest_writer);
    	
    	// Tricky. Iteration start (leader) is exclusive and finish
    	// (newest_writer) is inclusive. Iteration goes from old to new.
    	Writer* w = leader;
    	while (w != newest_writer) {
    	  w = w->link_newer;
    	.........................................
    	  w->write_group = write_group;
    	  size += batch_size;
    	  write_group->last_writer = w;
    	  write_group->size++;
    	}
    	..............................
    }
    
  • Note that the traversal proceeds via link_newer; this effectively takes a snapshot of the current leader's writes before the WAL write (through the CreateMissingNewerLinks function).

    void WriteThread::CreateMissingNewerLinks(Writer* head) {
    	while (true) {
    	  Writer* next = head->link_older;
    	  if (next == nullptr || next->link_newer != nullptr) {
    	    assert(next == nullptr || next->link_newer == head);
    	    break;
    	  }
    	  next->link_newer = head;
    	  head = next;
    	}
    }
    
  • After the steps above, the WAL write begins: the write_group is packed into a single WriteBatch (via MergeBatch) and written out.

       if (w.ShouldWriteToWAL()) {
    ...............................
          w.status = WriteToWAL(wal_write_group, log_writer, log_used,
                                need_log_sync, need_log_dir_sync, current_sequence);
        }
    
  • Once the current leader has written itself and its followers to the WAL, it must write the memtable. The Writers blocked until now fall into two kinds. The first kind have already been packed into the WAL by the current leader; these writers (including the leader itself) are linked onto the memtable writer list. The second kind have not written the WAL yet; they elect a new leader and continue writing the WAL.

    void WriteThread::ExitAsBatchGroupLeader(WriteGroup& write_group,
                                       Status status) {
    Writer* leader = write_group.leader;
    Writer* last_writer = write_group.last_writer;
    assert(leader->link_older == nullptr);
    .....................................
    
    if (enable_pipelined_write_) {
      // Notify writers don't write to memtable to exit.
    ......................................
      // Link the ramaining of the group to memtable writer list.
      if (write_group.size > 0) {
        if (LinkGroup(write_group, &newest_memtable_writer_)) {
          // The leader can now be different from current writer.
          SetState(write_group.leader, STATE_MEMTABLE_WRITER_LEADER);
        }
      }
      // Reset newest_writer_ and wake up the next leader.
      Writer* newest_writer = last_writer;
      if (!newest_writer_.compare_exchange_strong(newest_writer, nullptr)) {
        Writer* next_leader = newest_writer;
        while (next_leader->link_older != last_writer) {
          next_leader = next_leader->link_older;
          assert(next_leader != nullptr);
        }
        next_leader->link_older = nullptr;
        SetState(next_leader, STATE_GROUP_LEADER);
      }
      AwaitState(leader, STATE_MEMTABLE_WRITER_LEADER |
                             STATE_PARALLEL_MEMTABLE_WRITER | STATE_COMPLETED,
                 &eabgl_ctx);
    } else {
     .....................................
    }
    }
    
  • Next, the memtable write. The logic mirrors the WAL write: the leader creates a group (WriteGroup), iterates over the writers that need to write the memtable and adds them to the group (EnterAsMemTableWriter), then sets the parallelism and the corresponding states (LaunchParallelMemTableWriters). Note that every SetState wakes a previously blocked Writer.

    void WriteThread::LaunchParallelMemTableWriters(WriteGroup* write_group) {
    	assert(write_group != nullptr);
    	write_group->running.store(write_group->size);
    	for (auto w : *write_group) {
    	  SetState(w, STATE_PARALLEL_MEMTABLE_WRITER);
    	}
    }
    
  • Note that when building the memtable group we do not need to create link_newer: it was already built during the WAL write, so using the pre-built group means the group contains only operations that have already been written to the WAL.

    void WriteThread::EnterAsMemTableWriter(Writer* leader,
                                      WriteGroup* write_group) {
    ....................................
    
    if (!allow_concurrent_memtable_write_ || !leader->batch->HasMerge()) {
     ....................................................
    }
    
    write_group->last_writer = last_writer;
    write_group->last_sequence =
        last_writer->sequence + WriteBatchInternal::Count(last_writer->batch) - 1;
    }
    
  • Finally the MemTable writes are executed: every Writer blocked during the WAL write now enters the logic below, which means they write into the MemTable concurrently.

    if (w.state == WriteThread::STATE_PARALLEL_MEMTABLE_WRITER) {
    .........................
      w.status = WriteBatchInternal::InsertInto(
          &w, w.sequence, &column_family_memtables, &flush_scheduler_,
          write_options.ignore_missing_column_families, 0 /*log_number*/, this,
          true /*concurrent_memtable_writes*/);
      if (write_thread_.CompleteParallelMemTableWriter(&w)) {
        MemTableInsertStatusCheck(w.status);
        versions_->SetLastSequence(w.write_group->last_sequence);
        write_thread_.ExitAsMemTableWriter(&w, *w.write_group);
      }
    }
    
  • Finally, when all Writers of the current group have written to the MemTable, ExitAsMemTableWriter does the cleanup: if there is a new memtable writer list to process, the corresponding Writer is woken up, and the states of the finished Writers are set.

    void WriteThread::ExitAsMemTableWriter(Writer* /*self*/,
                                     WriteGroup& write_group) {
    	Writer* leader = write_group.leader;
    	Writer* last_writer = write_group.last_writer;
    	
    	Writer* newest_writer = last_writer;
    	if (!newest_memtable_writer_.compare_exchange_strong(newest_writer,
    	                                                     nullptr)) {
    	  CreateMissingNewerLinks(newest_writer);
    	  Writer* next_leader = last_writer->link_newer;
    	  assert(next_leader != nullptr);
    	  next_leader->link_older = nullptr;
    	  SetState(next_leader, STATE_MEMTABLE_WRITER_LEADER);
    	}
    	Writer* w = leader;
    	while (true) {
    	  if (!write_group.status.ok()) {
    	    w->status = write_group.status;
    	  }
    	  Writer* next = w->link_newer;
    	  if (w != leader) {
    	    SetState(w, STATE_COMPLETED);
    	  }
    	  if (w == last_writer) {
    	    break;
    	  }
    	  w = next;
    	}
    	// Note that leader has to exit last, since it owns the write group.
    	SetState(leader, STATE_COMPLETED);
    }
    

Summary

As we have seen, in RocksDB the WAL is always written serially while the MemTable can be written by multiple threads concurrently; in other words, once the system is under enough pressure, the WAL write will become the bottleneck.

Tuning parameters for writes

Self-tuning parameters can be applied when opening RocksDB:

options.OptimizeLevelStyleCompaction(); overall this adjusts the parameters to increase memtable throughput: it enlarges the memtables, raises the number of memtables that may coexist in memory, sizes L1 to roughly match the capacity of L0, and configures per-level compression. A rough test showed random-write throughput improving by 50-80% over the baseline with this configuration. It clearly consumes considerably more memory, so it is worth trying if system resources are sufficient and the workload is IO-intensive with high performance requirements.
options.allow_concurrent_memtable_write=true; allows multiple writers to insert into the memtable concurrently.
options.enable_pipelined_write=true; enables the pipelined write mechanism, letting memtable and WAL writes proceed concurrently.
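
Put together, a hedged sketch of opening a DB with these write-oriented settings (the 512 MB budget is the library default and just an example here):

rocksdb::Options options;
options.create_if_missing = true;

// Derive memtable/L0/L1 sizing and per-level compression from one budget.
options.OptimizeLevelStyleCompaction(512 * 1024 * 1024);

options.allow_concurrent_memtable_write = true;  // concurrent memtable inserts
options.enable_pipelined_write = true;           // overlap WAL and memtable writes

rocksdb::DB* db;
rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/testdb", &db);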

The detailed implementation of the RocksDB self-tuning parameters:

ColumnFamilyOptions* ColumnFamilyOptions::OptimizeLevelStyleCompaction(
    uint64_t memtable_memory_budget) {
  write_buffer_size = static_cast<size_t>(memtable_memory_budget / 4);
  // merge two memtables when flushing to L0
  min_write_buffer_number_to_merge = 2;
  // this means we'll use 50% extra memory in the worst case, but will reduce
  // write stalls.
  max_write_buffer_number = 6;
  // start flushing L0->L1 as soon as possible. each file on level0 is
  // (memtable_memory_budget / 2). This will flush level 0 when it's bigger than
  // memtable_memory_budget.
  level0_file_num_compaction_trigger = 2;
  // doesn't really matter much, but we don't want to create too many files
  target_file_size_base = memtable_memory_budget / 8;
  // make Level1 size equal to Level0 size, so that L0->L1 compactions are fast
  max_bytes_for_level_base = memtable_memory_budget;

  // level style compaction
  compaction_style = kCompactionStyleLevel;

  // only compress levels >= 2
  compression_per_level.resize(num_levels);
  for (int i = 0; i < num_levels; ++i) {
    if (i < 2) {
      compression_per_level[i] = kNoCompression;
    } else {
      compression_per_level[i] =
          LZ4_Supported()
              ? kLZ4Compression
              : (Snappy_Supported() ? kSnappyCompression : kNoCompression);
    }
  }
  return this;
}

Read path

Read flow diagram

[Figure 2: RocksDB read flow diagram]

Overview

In short, the core problem a read in RocksDB must solve is how to read the latest data. Because RocksDB is LSM-based, a delete or update of a key is not executed immediately; a new entry is inserted instead, and the final update (last-write-wins) or deletion of data happens during compaction.

So the real question is how to read the newest version of a piece of data, and for that we first need to know how RocksDB stores multiple versions. Data lives in two places, the memtable (memory) and the sstables (disk), so RocksDB also reads from these two places in turn:

  • memtable. The default memtable implementation is a skiplist; RocksDB turns the user key into an internal memtable key (user_key+seq+type), appends the user's value, and inserts the whole element into the skiplist. A read therefore has to locate the newest of these entries.
  • sstable. Except at level 0, the sstables within a level are guaranteed not to overlap, so in each of those levels at most one sstable can contain the key, and after checking it we can move on to the next level; at level 0, all sstables have to be examined.

memtable source analysis

First, we know that in RocksDB every version has a sequence number and every write advances it, so the different versions of the same key are distinguished by this seq. The sequence acts like a timestamp, and through it we can reach the newest data of a key.
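
This sequence is exactly what a read pins down: a Get either uses the latest published sequence or the one captured in a snapshot. A minimal sketch (assuming an open rocksdb::DB* db):

const rocksdb::Snapshot* snap = db->GetSnapshot();  // pins the current sequence

db->Put(rocksdb::WriteOptions(), "key1", "v2");     // newer version, larger seq

rocksdb::ReadOptions ro;
ro.snapshot = snap;                // the LookupKey below is built with snap's seq
std::string value;
db->Get(ro, "key1", &value);       // still sees the version before the Put

db->ReleaseSnapshot(snap);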

As noted above, the user only ever passes in a user_key when inserting or reading, so internally RocksDB also maintains an internal_key with the following format:

user_key + sequence + type

The corresponding code:

 InternalKey(const Slice& _user_key, SequenceNumber s, ValueType t) {
  AppendInternalKey(&rep_, ParsedInternalKey(_user_key, s, t));
}

void AppendInternalKey(std::string* result, const ParsedInternalKey& key) {
  result->append(key.user_key.data(), key.user_key.size());
  PutFixed64(result, PackSequenceAndType(key.sequence, key.type));
}

The type indicates the current operation. In the memtable there are three kinds of operations; kTypeValue below denotes an insert, and we skip merge for now (it will be covered in detail later):

enum ValueType : unsigned char {
  kTypeDeletion = 0x0,
  kTypeValue = 0x1,
  kTypeMerge = 0x2,
........................
}

How are different versions of a key organized internally when they are inserted? RocksDB sorts the versions of a key according to the following rules:

increasing user key (according to user-supplied comparator)
decreasing sequence number
decreasing type (though sequence# should be enough to disambiguate)

So, to read the newest entry, we only need to read the one with the largest seq; for example, if a key was written at seq 90 and again at seq 100, the seq-100 entry sorts first and is the version a read sees.

The corresponding code is the InternalKeyComparator class; when the user keys compare equal, we are looking at different versions of the same key, so the subsequent tie-breaking logic runs:

int InternalKeyComparator::Compare(const ParsedInternalKey& a,
                                   const ParsedInternalKey& b) const {
  int r = user_comparator_->Compare(a.user_key, b.user_key);
  PERF_COUNTER_ADD(user_key_comparison_count, 1);
  if (r == 0) {
    if (a.sequence > b.sequence) {
      r = -1;
    } else if (a.sequence < b.sequence) {
      r = +1;
    } else if (a.type > b.type) {
      r = -1;
    } else if (a.type < b.type) {
      r = +1;
    }
  }
  return r;
}

The InternalKey is completely transparent to the user, so when a user looks up a user_key, how does RocksDB construct the corresponding internal key? The core data structure here is LookupKey; let's look at its implementation:

class LookupKey {
 public:
  // Initialize *this for looking up user_key at a snapshot with
  // the specified sequence number.
  LookupKey(const Slice& _user_key, SequenceNumber sequence);
...................................................

 private:
  // We construct a char array of the form:
  //    klength  varint32               <-- start_
  //    userkey  char[klength]          <-- kstart_
  //    tag      uint64
  //                                    <-- end_
  // The array is a suitable MemTable key.
  // The suffix starting with "userkey" can be used as an InternalKey.
  const char* start_;
  const char* kstart_;
  const char* end_;
  char space_[200];      // Avoid allocation for short keys
...........................................
};

Every LookupKey construction requires a seq to be passed in; so how is this seq computed? Look at the code:

Status DBImpl::GetImpl(const ReadOptions& read_options,
                       ColumnFamilyHandle* column_family, const Slice& key,
                       PinnableSlice* pinnable_val, bool* value_found,
                       ReadCallback* callback, bool* is_blob_index) {
...........................................
SequenceNumber snapshot;
  if (read_options.snapshot != nullptr) {
    // Note: In WritePrepared txns this is not necessary but not harmful either.
    // Because prep_seq > snapshot => commit_seq > snapshot so if a snapshot is
    // specified we should be fine with skipping seq numbers that are greater
    // than that.
    snapshot =
        reinterpret_cast<const SnapshotImpl*>(read_options.snapshot)->number_;
  } else {
.............................................................
    snapshot = last_seq_same_as_publish_seq_
                   ? versions_->LastSequence()
                   : versions_->LastPublishedSequence();
  }
.........................................
  // First look in the memtable, then in the immutable memtable (if any).
  // s is both in/out. When in, s could either be OK or MergeInProgress.
  // merge_operands will contain the sequence of merges in the latter case.
  LookupKey lkey(key, snapshot);

}

The code above shows that every Get constructs a LookupKey; here we can simply treat this seq as the last successfully written seq of the current version (publish_seq will be introduced later).

The code then ends up calling MemTable::Get; before analyzing that function, let's look at the Saver structure, which holds the context of a lookup.

struct Saver {
  Status* status;
  const LookupKey* key;
  bool* found_final_value;  // Is value set correctly? Used by KeyMayExist
  bool* merge_in_progress;
  std::string* value;
  SequenceNumber seq;
  const MergeOperator* merge_operator;
  // the merge operations encountered;
  MergeContext* merge_context;
  RangeDelAggregator* range_del_agg;
  MemTable* mem;
  Logger* logger;
  Statistics* statistics;
  bool inplace_update_support;
  Env* env_;
  ReadCallback* callback_;
  bool* is_blob_index;

  bool CheckCallback(SequenceNumber _seq) {
    if (callback_) {
      return callback_->IsCommitted(_seq);
    }
    return true;
  }
};

Now MemTable::Get. Its core step is constructing a Saver object and calling MemTableRep::Get; note that the third argument passed to Get is a callback function, which we analyze in detail below.

bool MemTable::Get(const LookupKey& key, std::string* value, Status* s,
                   MergeContext* merge_context,
                   RangeDelAggregator* range_del_agg, SequenceNumber* seq,
                   const ReadOptions& read_opts, ReadCallback* callback,
                   bool* is_blob_index) {
...............................................
    Saver saver;
    saver.status = s;
    saver.found_final_value = &found_final_value;
    saver.merge_in_progress = &merge_in_progress;
    saver.key = &key;
    saver.value = value;
    saver.seq = kMaxSequenceNumber;
    saver.mem = this;
    saver.merge_context = merge_context;
    saver.range_del_agg = range_del_agg;
    saver.merge_operator = moptions_.merge_operator;
    saver.logger = moptions_.info_log;
    saver.inplace_update_support = moptions_.inplace_update_support;
    saver.statistics = moptions_.statistics;
    saver.env_ = env_;
    saver.callback_ = callback;
    saver.is_blob_index = is_blob_index;
    table_->Get(key, &saver, SaveValue);
..............................................
}

Then MemTableRep::Get. First, the MemTableRep class abstracts the different MemTable implementations, i.e. it is an abstract class that each MemTable type implements (a factory-method pattern maintains factories for skiplist, vector and others, letting users create differently organized memtables); here we only analyze the skiplist, the default MemTable implementation.

A different memtable organization can be configured by setting options.memtable_factory.reset(new rocksdb::VectorRepFactory());

void MemTableRep::Get(const LookupKey& k, void* callback_args,
                      bool (*callback_func)(void* arg, const char* entry)) {
  auto iter = GetDynamicPrefixIterator();
  for (iter->Seek(k.internal_key(), k.memtable_key().data());
       iter->Valid() && callback_func(callback_args, iter->key());
       iter->Next()) {
  }
}

The two key pieces of the function above are iter->Seek and callback_func; let's take them one at a time, starting with Seek. Two arguments are passed into Seek, an internal_key and a memtable_key; what do these two keys represent? Returning to the LookupKey class, memtable_key is (end_ - start_) and internal_key is (end_ - kstart_):

class LookupKey {
 public:

  // Return a key suitable for lookup in a MemTable (the memtable_key construction).
  Slice memtable_key() const {
    return Slice(start_, static_cast<size_t>(end_ - start_));
  }

  // Return an internal key, suitable for passing to an internal iterator (the internal_key construction).
  Slice internal_key() const {
    return Slice(kstart_, static_cast<size_t>(end_ - kstart_));
  }

So what do the pointers start_, kstart_ and end_ represent? Look at the LookupKey constructor:

LookupKey::LookupKey(const Slice& _user_key, SequenceNumber s) {
  size_t usize = _user_key.size();
  size_t needed = usize + 13;  // A conservative estimate
  char* dst;
  if (needed <= sizeof(space_)) {
    dst = space_;
  } else {
    dst = new char[needed];
  }
  start_ = dst; // start_: beginning of the whole encoded buffer (length prefix)
  // NOTE: We don't support users keys of more than 2GB :)
  dst = EncodeVarint32(dst, static_cast<uint32_t>(usize + 8));
  kstart_ = dst; // kstart_: start of user_key, i.e. of the internal key
  memcpy(dst, _user_key.data(), usize);
  dst += usize;
  EncodeFixed64(dst, PackSequenceAndType(s, kValueTypeForSeek));
  dst += 8;
  end_ = dst; // end_: one past the 8-byte (seq, type) tail
}
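
The resulting buffer layout is sketched below (derived from the constructor above):

start_              kstart_                                end_
  |                    |                                     |
  v                    v                                     v
  +--------------------+------------+-----------------------+
  | varint32(usize+8)  |  user_key  | PackSequenceAndType   |
  |   length prefix    |  (usize B) | (seq, type), 8 bytes  |
  +--------------------+------------+-----------------------+
  |<----------------- memtable_key ----------------------->|
                       |<---------- internal_key ---------->|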

From the constructor we can see that LookupKey keeps pointers into one encoded buffer so it can hand out both encodings separately: memtable_key covers the whole buffer (the varint32 length prefix plus the internal key), while internal_key covers just user_key + seq + type. Also note that the type stored for lookup is a special one, kValueTypeForSeek, which is the type with the largest value (kTypeBlobIndex). Why do that? Before answering, let's look at the corresponding Seek functions.

    // Advance to the first entry with a key >= target
    virtual void Seek(const Slice& user_key, const char* memtable_key)
        override {
      if (memtable_key != nullptr) {
        iter_.Seek(memtable_key);
      } else {
        iter_.Seek(EncodeKey(&tmp_, user_key));
      }
    }

template <class Comparator>
inline void InlineSkipList<Comparator>::Iterator::Seek(const char* target) {
  node_ = list_->FindGreaterOrEqual(target);
}

Since memtable_key above is never null here, the first branch is taken, which ultimately calls the skiplist's FindGreaterOrEqual to locate the first entry greater than or equal to memtable_key. At this point, recall the key ordering introduced at the beginning

(InternalKeyComparator::Compare): equal user keys are ordered by descending seq, and equal seqs by descending type. FindGreaterOrEqual is then easy to understand: it returns the first entry whose seq is no greater than the seq in our lookup key, and among entries with an equal seq, the first one whose type is no greater than our input type (since we pass in the largest type, such an entry is returned immediately). The returned position may hold a user key that does not match our input key at all, and we also have to act differently depending on the entry's type; that is exactly what the SaveValue callback is for.
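As a concrete illustration (hypothetical entries), suppose we look up user key "foo" at snapshot seq 101, so the seek target is ("foo", 101, kValueTypeForSeek); the skiplist entries sort as follows:

("foo", seq=102, kTypeValue)     // newer than the snapshot: sorts before the target, skipped
("foo", seq=101, kTypeDeletion)  // first entry >= target: FindGreaterOrEqual stops here
("foo", seq=100, kTypeValue)     // older version, shadowed by the deletion above
("zoo", seq=103, kTypeValue)     // different user key, sorts after every "foo" entry

SaveValue is then invoked on the located entry; since it is a deletion here, the lookup would report NotFound, as we will see next.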

Next, the callback_func (SaveValue). It takes two arguments: the first is the Saver object saved earlier, and the second is the entry located in the skiplist. What it does is fairly simple: first it checks whether the key found equals the key we passed in; if not, the lookup misses and it returns immediately. Here we focus on the handling of values and deletions.

static bool SaveValue(void* arg, const char* entry) {
......................................................
// Check that the entry's user key matches the lookup key
if (s->mem->GetInternalKeyComparator().user_comparator()->Equal(
          Slice(key_ptr, key_length - 8), s->key->user_key())) {
...........................................................
   case kTypeValue: {
        if (s->inplace_update_support) {
          s->mem->GetLock(s->key->user_key())->ReadLock();
        }
        Slice v = GetLengthPrefixedSlice(key_ptr + key_length);
        *(s->status) = Status::OK();
        if (*(s->merge_in_progress)) {
          if (s->value != nullptr) {
            *(s->status) = MergeHelper::TimedFullMerge(
                merge_operator, s->key->user_key(), &v,
                merge_context->GetOperands(), s->value, s->logger,
                s->statistics, s->env_, nullptr /* result_operand */, true);
          }
        } else if (s->value != nullptr) {
          s->value->assign(v.data(), v.size());
        }
        if (s->inplace_update_support) {
          s->mem->GetLock(s->key->user_key())->ReadUnlock();
        }
        *(s->found_final_value) = true;
        if (s->is_blob_index != nullptr) {
          *(s->is_blob_index) = (type == kTypeBlobIndex);
        }
        return false;
      }
      case kTypeDeletion:
      case kTypeSingleDeletion:
      case kTypeRangeDeletion: {
        if (*(s->merge_in_progress)) {
          if (s->value != nullptr) {
            *(s->status) = MergeHelper::TimedFullMerge(
                merge_operator, s->key->user_key(), nullptr,
                merge_context->GetOperands(), s->value, s->logger,
                s->statistics, s->env_, nullptr /* result_operand */, true);
          }
        } else {
          *(s->status) = Status::NotFound();
        }
        *(s->found_final_value) = true;
        return false;
      }
}
}

When the matching value is found, it is assigned and returned to the user directly (found_final_value is set). Note that for a Delete, NotFound is returned immediately.
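A small behavioral sketch of this (standard RocksDB API, reusing the db handle from the earlier examples): a deletion marker found in the memtable ends the lookup without touching any SST:

db->Put(rocksdb::WriteOptions(), "k", "v");
db->Delete(rocksdb::WriteOptions(), "k");   // writes a kTypeDeletion entry into the memtable

std::string value;
rocksdb::Status s = db->Get(rocksdb::ReadOptions(), "k", &value);
assert(s.IsNotFound());                     // SaveValue saw the deletion and set NotFound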

SST source code analysis

When the data is not in memory, the read goes down to the SST files on disk.

We start again from DBImpl::GetImpl. Earlier we only analyzed its memtable-related code; now let's see what RocksDB does once the memtable lookup comes up empty. When the MemTable yields nothing for the key (not even a deletion marker), RocksDB goes on to search the SSTs.

  if (!done) {
    PERF_TIMER_GUARD(get_from_output_files_time);
    sv->current->Get(read_options, lkey, pinnable_val, &s, &merge_context,
                     &range_del_agg, value_found, nullptr, nullptr, callback,
                     is_blob_index);
    RecordTick(stats_, MEMTABLE_MISS);
  }

The code above calls Get directly on the current version (sv->current), so that is the function we look at next. In short, it repeatedly selects a file whose key range may contain the lookup key, probes it, and loops until the key is resolved.

First, the initialization before the search:

  GetContext get_context(
      user_comparator(), merge_operator_, info_log_, db_statistics_,
      status->ok() ? GetContext::kNotFound : GetContext::kMerge, user_key,
      value, value_found, merge_context, range_del_agg, this->env_, seq,
      merge_operator_ ? &pinned_iters_mgr : nullptr, callback, is_blob);

  // Pin blocks that we read to hold merge operands
  if (merge_operator_) {
    pinned_iters_mgr.StartPinning();
  }

  FilePicker fp(
      storage_info_.files_, user_key, ikey, &storage_info_.level_files_brief_,
      storage_info_.num_non_empty_levels_, &storage_info_.file_indexer_,
      user_comparator(), internal_comparator());
  FdWithKeyRange* f = fp.GetNextFile();

The first structure is GetContext, which carries the lookup state and evaluates the key against each file it is handed. Then comes FilePicker, which selects candidate files for the key. Its most important method is GetNextFile, so let's look at that function.

This function walks all levels and, within each level, all candidate files. Level 0 gets special treatment: only level-0 files can have overlapping key ranges, so every level-0 file must be examined, i.e. they are scanned one by one.

At any other level the files are sorted and disjoint, so a binary search finds the single candidate file; if the key falls in no file there, we descend to the next level.

FdWithKeyRange* GetNextFile() {
    while (!search_ended_) {  // Loops over different levels.
      while (curr_index_in_curr_level_ < curr_file_level_->num_files) {
        // Loops over all files in current level.
        FdWithKeyRange* f = &curr_file_level_->files[curr_index_in_curr_level_];
        hit_file_level_ = curr_level_;
        is_hit_file_last_in_level_ =
            curr_index_in_curr_level_ == curr_file_level_->num_files - 1;
        int cmp_largest = -1;
        if (num_levels_ > 1 || curr_file_level_->num_files > 3) {
          // Check if key is within a file's range. If search left bound and
          // right bound point to the same find, we are sure key falls in
          // range.
          assert(
              curr_level_ == 0 ||
              curr_index_in_curr_level_ == start_index_in_curr_level_ ||
              user_comparator_->Compare(user_key_,
                ExtractUserKey(f->smallest_key)) <= 0);

          int cmp_smallest = user_comparator_->Compare(user_key_,
              ExtractUserKey(f->smallest_key));
          if (cmp_smallest >= 0) {
            cmp_largest = user_comparator_->Compare(user_key_,
                ExtractUserKey(f->largest_key));
          }

          // Setup file search bound for the next level based on the
          // comparison results
          if (curr_level_ > 0) {
            file_indexer_->GetNextLevelIndex(curr_level_,
                                            curr_index_in_curr_level_,
                                            cmp_smallest, cmp_largest,
                                            &search_left_bound_,
                                            &search_right_bound_);
          }
          // Key falls out of current file's range
          if (cmp_smallest < 0 || cmp_largest > 0) {
            if (curr_level_ == 0) {
              ++curr_index_in_curr_level_;
              continue;
            } else {
              // Search next level.
              break;
            }
          }
        }
        returned_file_level_ = curr_level_;
        if (curr_level_ > 0 && cmp_largest < 0) {
          // No more files to search in this level.
          search_ended_ = !PrepareNextLevel();
        } else {
          ++curr_index_in_curr_level_;
        }
        return f;
      }
      // Start searching next level.
      search_ended_ = !PrepareNextLevel();
    }
    // Search ended.
    return nullptr;
  }

Here RocksDB uses a trick to speed up the binary search. Whenever the SSTs change, RocksDB calls FileIndexer::UpdateIndex to rebuild a structure called FileIndexer, which records how each file's key range at one level relates to the files at level+1. If a lookup misses at a level, we immediately know the narrowed range of files to search at the next level.

Comparing a key against a file always yields one of three outcomes:

  • the key is smaller than the file's smallest key.
  • the key is larger than the file's largest key.
  • the key lies within the file's range.

So as long as the index, when built, records where each file's boundaries fall in the next level, these three outcomes are enough to bound the binary search at the next level. RocksDB defines the following four values:

    // Point to a left most file in a lower level that may contain a key,
    // which compares greater than smallest of a FileMetaData (upper level)
    int32_t smallest_lb;
    // Point to a left most file in a lower level that may contain a key,
    // which compares greater than largest of a FileMetaData (upper level)
    int32_t largest_lb;
    // Point to a right most file in a lower level that may contain a key,
    // which compares smaller than smallest of a FileMetaData (upper level)
    int32_t smallest_rb;
    // Point to a right most file in a lower level that may contain a key,
    // which compares smaller than largest of a FileMetaData (upper level)
    int32_t largest_rb;

Let's explain these values with an example. Given the two levels and four SSTs below, initialization gives the level-1 SST the values smallest_lb=1; largest_lb=2; smallest_rb=1; largest_rb=2;

        level 1:              [50 - 60]
        level 2:        [1 - 40], [45 - 55], [58 - 80]

Now look up a key of 49: the first comparison gives key < level1.sst->smallest, so at level 2 we only need to search between index 0 and smallest_rb, i.e. files 0 and 1. If instead the key is 55, i.e. level1.sst->smallest < key < level1.sst->largest, then at level 2 the search is bounded by smallest_lb and largest_rb (files 1 and 2). In effect this computes the overlap between the two levels' key ranges.

Here is how RocksDB computes the next level's binary-search bounds from the comparison results at the current level:

// During file search, a key is compared against smallest and largest
// from a FileMetaData. It can have 3 possible outcomes:
// (1) key is smaller than smallest, implying it is also smaller than
//     larger. Precalculated index based on "smallest < smallest" can
//     be used to provide right bound.
// (2) key is in between smallest and largest.
//     Precalculated index based on "smallest > greatest" can be used to
//     provide left bound.
//     Precalculated index based on "largest < smallest" can be used to
//     provide right bound.
// (3) key is larger than largest, implying it is also larger than smallest.
//     Precalculated index based on "largest > largest" can be used to
//     provide left bound.
//
// As a result, we will need to do:
// Compare smallest (<=) and largest keys from upper level file with
// smallest key from lower level to get a right bound.
// Compare smallest (>=) and largest keys from upper level file with
// largest key from lower level to get a left bound.
//
// Example:
//    level 1:              [50 - 60]
//    level 2:        [1 - 40], [45 - 55], [58 - 80]
// A key 35, compared to be less than 50, 3rd file on level 2 can be
// skipped according to rule (1). LB = 0, RB = 1.
// A key 53, sits in the middle 50 and 60. 1st file on level 2 can be
// skipped according to rule (2)-a, but the 3rd file cannot be skipped
// because 60 is greater than 58. LB = 1, RB = 2.
// A key 70, compared to be larger than 60. 1st and 2nd file can be skipped
// according to rule (3). LB = 2, RB = 2.
    
void FileIndexer::GetNextLevelIndex(const size_t level, const size_t file_index,
                                    const int cmp_smallest,
                                    const int cmp_largest, int32_t* left_bound,
                                    int32_t* right_bound) const {
  assert(level > 0);

  const IndexUnit* index_units = next_level_index_[level].index_units;
  const auto& index = index_units[file_index];

  if (cmp_smallest < 0) {
    *left_bound = (level > 0 && file_index > 0)
                      ? index_units[file_index - 1].largest_lb
                      : 0;
    *right_bound = index.smallest_rb;
  } else if (cmp_smallest == 0) {
    *left_bound = index.smallest_lb;
    *right_bound = index.smallest_rb;
  } else if (cmp_smallest > 0 && cmp_largest < 0) {
    *left_bound = index.smallest_lb;
    *right_bound = index.largest_rb;
  } else if (cmp_largest == 0) {
    *left_bound = index.largest_lb;
    *right_bound = index.largest_rb;
  } else if (cmp_largest > 0) {
    *left_bound = index.largest_lb;
    *right_bound = level_rb_[level + 1];
  } else {
    assert(false);
  }
}

With all that covered, back to the file probing itself. The actual per-file key lookup all happens in table_cache_->Get; we skip over its implementation for the moment and analyze it in detail at the end.

 while (f != nullptr) {
................................

    *status = table_cache_->Get(
        read_options, *internal_comparator(), f->fd, ikey, &get_context,
        cfd_->internal_stats()->GetFileReadHist(fp.GetHitFileLevel()),
        IsFilterSkipped(static_cast<int>(fp.GetHitFileLevel()),
                        fp.IsHitFileLastInLevel()),
        fp.GetCurrentLevel());
    // TODO: examine the behavior for corrupted key
    if (!status->ok()) {
      return;
    }
.......................
 }

After table_cache_->Get returns, get_context tells us how to interpret the result:

switch (get_context.State()) {
      case GetContext::kNotFound:
        // Keep searching in other files
        break;
      case GetContext::kMerge:
        break;
      case GetContext::kFound:
        if (fp.GetHitFileLevel() == 0) {
          RecordTick(db_statistics_, GET_HIT_L0);
        } else if (fp.GetHitFileLevel() == 1) {
          RecordTick(db_statistics_, GET_HIT_L1);
        } else if (fp.GetHitFileLevel() >= 2) {
          RecordTick(db_statistics_, GET_HIT_L2_AND_UP);
        }
        return;
      case GetContext::kDeleted:
        // Use empty error message for speed
        *status = Status::NotFound();
        return;
      case GetContext::kCorrupt:
        *status = Status::Corruption("corrupted key for ", user_key);
        return;
      case GetContext::kBlobIndex:
        ROCKS_LOG_ERROR(info_log_, "Encounter unexpected blob index.");
        *status = Status::NotSupported(
            "Encounter unexpected blob index. Please open DB with "
            "rocksdb::blob_db::BlobDB instead.");
        return;
    }

If no value was found, we continue with the next file:

f = fp.GetNextFile();

Finally, let's analyze the core function, TableCache::Get, in detail. It not only returns the lookup result but also caches the file's reader state, and if row_cache is enabled it additionally populates the row cache, which stores, per SST, the value of the key currently being looked up.
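Note that the row cache is disabled by default; enabling it is a single option (public RocksDB API; the 64 MB capacity is an arbitrary example):

rocksdb::Options options;
// Cache of (sst file, key) -> value entries that TableCache::Get consults
// before reading any block from the SST.
options.row_cache = rocksdb::NewLRUCache(64 << 20 /* 64 MB */);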

First, the path taken when the row cache is on: RocksDB computes the row-cache key, which as the code below shows is the row cache id followed by fd_number + seq_no + user_key.

    uint64_t fd_number = fd.GetNumber();
    auto user_key = ExtractUserKey(k);
    // We use the user key as cache key instead of the internal key,
    // otherwise the whole cache would be invalidated every time the
    // sequence key increases. However, to support caching snapshot
    // reads, we append the sequence number (incremented by 1 to
    // distinguish from 0) only in this case.
    uint64_t seq_no =
        options.snapshot == nullptr ? 0 : 1 + GetInternalKeySeqno(k);

    // Compute row cache key.
    row_cache_key.TrimAppend(row_cache_key.Size(), row_cache_id_.data(),
                             row_cache_id_.size());
    AppendVarint64(&row_cache_key, fd_number);
    AppendVarint64(&row_cache_key, seq_no);
    row_cache_key.TrimAppend(row_cache_key.Size(), user_key.data(),
                             user_key.size());

Then a lookup is performed in the row cache. On a hit the value is returned directly; otherwise the key will be read from the corresponding SST.

    if (auto row_handle =
            ioptions_.row_cache->Lookup(row_cache_key.GetUserKey())) {
      Cleanable value_pinner;
      auto release_cache_entry_func = [](void* cache_to_clean,
                                         void* cache_handle) {
        ((Cache*)cache_to_clean)->Release((Cache::Handle*)cache_handle);
      };
      auto found_row_cache_entry = static_cast<const std::string*>(
          ioptions_.row_cache->Value(row_handle));
....................................
      done = true;       
    } else {
      // Not found, setting up the replay log.
      RecordTick(ioptions_.statistics, ROW_CACHE_MISS);
      row_cache_entry = &row_cache_entry_buffer;
    }

Next the value of the key has to be read from the SST file. Notice that every fd carries a TableReader, the structure that represents the opened file's contents; the table_cache exists chiefly to cache this structure.

 Status s;
  TableReader* t = fd.table_reader;
  Cache::Handle* handle = nullptr;
  if (!done && s.ok()) {
    if (t == nullptr) {
      s = FindTable(env_options_, internal_comparator, fd, &handle,
                    options.read_tier == kBlockCacheTier /* no_io */,
                    true /* record_read_stats */, file_read_hist, skip_filters,
                    level);
      if (s.ok()) {
        t = GetTableReaderFromHandle(handle);
      }
    }
   ..........................
  }

The code above calls TableCache::FindTable, which opens the TableReader when necessary and caches it in the table cache.

Status TableCache::FindTable(const EnvOptions& env_options,
                             const InternalKeyComparator& internal_comparator,
                             const FileDescriptor& fd, Cache::Handle** handle,
                             const bool no_io, bool record_read_stats,
                             HistogramImpl* file_read_hist, bool skip_filters,
                             int level,
                             bool prefetch_index_and_filter_in_cache) {
...................................................
  if (*handle == nullptr) {
    if (no_io) {  // Don't do IO and return a not-found status
      return Status::Incomplete("Table not found in table_cache, no_io is set");
    }
    unique_ptr<TableReader> table_reader;
    s = GetTableReader(env_options, internal_comparator, fd,
                       false /* sequential mode */, 0 /* readahead */,
                       record_read_stats, file_read_hist, &table_reader,
                       skip_filters, level, prefetch_index_and_filter_in_cache);
    if (!s.ok()) {
      assert(table_reader == nullptr);
      RecordTick(ioptions_.statistics, NO_FILE_ERRORS);
      // We do not cache error results so that if the error is transient,
      // or somebody repairs the file, we recover automatically.
    } else {
      s = cache_->Insert(key, table_reader.get(), 1, &DeleteEntry<TableReader>,
                         handle);
      if (s.ok()) {
        // Release ownership of table reader.
        table_reader.release();
      }
    }
  }
  return s;
}

As the code shows, the implementation is plain cache logic: look the entry up, and on a miss read it and insert it into the cache. FindTable in turn calls TableCache::GetTableReader; let's take a quick look at it.

Status TableCache::GetTableReader(
    const EnvOptions& env_options,
    const InternalKeyComparator& internal_comparator, const FileDescriptor& fd,
    bool sequential_mode, size_t readahead, bool record_read_stats,
    HistogramImpl* file_read_hist, unique_ptr<TableReader>* table_reader,
    bool skip_filters, int level, bool prefetch_index_and_filter_in_cache,
    bool for_compaction) {
..........................................
  if (s.ok()) {
...............................................    
    s = ioptions_.table_factory->NewTableReader(
        TableReaderOptions(ioptions_, env_options, internal_comparator,
                           skip_filters, level),
        std::move(file_reader), fd.GetFileSize(), table_reader,
        prefetch_index_and_filter_in_cache);
    TEST_SYNC_POINT("TableCache::GetTableReader:0");
  }
  return s;
} 

The key call is ioptions_.table_factory->NewTableReader. RocksDB instantiates a different reader depending on which SST format is configured; the default format is block-based.

// Create default block based table factory.
extern TableFactory* NewBlockBasedTableFactory(
    const BlockBasedTableOptions& table_options = BlockBasedTableOptions());

All we need to know here is that the cached TableReader is ultimately a BlockBasedTable object (assuming the block-based SST format is used).
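For completeness, a minimal sketch of selecting the (default) block-based format explicitly and giving it a block cache (public RocksDB API; the 128 MB capacity is an arbitrary example):

rocksdb::BlockBasedTableOptions table_options;
// Shared cache for the data/index blocks read by BlockBasedTable readers.
table_options.block_cache = rocksdb::NewLRUCache(128 << 20 /* 128 MB */);

rocksdb::Options options;
options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_options));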

Once the TableReader is obtained, RocksDB gets the key from the SST file; the final key lookup is implemented by each SST format class's Get method.

    if (s.ok()) {
      get_context->SetReplayLog(row_cache_entry);  // nullptr if no cache.
      s = t->Get(options, k, get_context, skip_filters);
      get_context->SetReplayLog(nullptr);
    }

As above, this Get is the Get of the corresponding SST format.

Finally, if the key was found, the kv pair is inserted into the row_cache:

    size_t charge =
        row_cache_key.Size() + row_cache_entry->size() + sizeof(std::string);
    void* row_ptr = new std::string(std::move(*row_cache_entry));
    ioptions_.row_cache->Insert(row_cache_key.GetUserKey(), row_ptr, charge,
                                &DeleteEntry<std::string>);

References

http://mysql.taobao.org/monthly/2018/07/
https://github.com/facebook/rocksdb/wiki
https://github.com/facebook/rocksdb
