RocksDB Source Code Analysis: The Data Structures Beneath the API

RocksDB is a very popular KV store and a canonical LSM-Tree database; many distributed NewSQL databases and graph databases use RocksDB as their underlying storage engine, and it performs well in terms of both stability and performance.

The HugeGraph graph database also supports RocksDB as a backend store. HugeGraph is written in Java while RocksDB is written in C++; fortunately the project provides an official Java JNI binding that can be used directly. RocksDB's feature set is very focused: you can think of it as providing a set of Maps for storing key-value pairs, so the core API is essentially put, get, scan, etc., and it is quite simple to use. Beneath this simple API, however, lies a rather complex internal structure; this article analyzes several of the core structures behind the API.

The most frequently used RocksDB interfaces (a minimal usage sketch follows the list):

  • RocksDB: the database instance, the entry point for all operations
  • ColumnFamilyHandle: the CF descriptor, similar to a file descriptor; it can be roughly thought of as a pointer to a Map
  • RocksIterator: the query iterator, the interface for scan queries
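
For orientation, here is a minimal RocksJava sketch of these three interfaces working together; the path /tmp/rocksdb-demo and the CF name "graph" are made-up placeholders, while the open/put/get/newIterator calls are the standard RocksJava API:

import org.rocksdb.*;
import java.nio.charset.StandardCharsets;
import java.util.*;

public class RocksDbQuickStart {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    // open the DB with the default CF plus a user-defined CF "graph" (hypothetical name)
    List<ColumnFamilyDescriptor> cfDescriptors = Arrays.asList(
        new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY),
        new ColumnFamilyDescriptor("graph".getBytes(StandardCharsets.UTF_8)));
    List<ColumnFamilyHandle> cfHandles = new ArrayList<>();
    try (DBOptions dbOptions = new DBOptions()
             .setCreateIfMissing(true)
             .setCreateMissingColumnFamilies(true);
         RocksDB db = RocksDB.open(dbOptions, "/tmp/rocksdb-demo", cfDescriptors, cfHandles)) {
      ColumnFamilyHandle cfHandle = cfHandles.get(1);  // handle of the "graph" CF

      // put / get go through the CF handle
      db.put(cfHandle, "key1".getBytes(), "value1".getBytes());
      byte[] value = db.get(cfHandle, "key1".getBytes());

      // scans go through a RocksIterator
      try (RocksIterator it = db.newIterator(cfHandle)) {
        for (it.seekToFirst(); it.isValid(); it.next()) {
          System.out.println(new String(it.key()) + " = " + new String(it.value()));
        }
      }
      for (ColumnFamilyHandle h : cfHandles) {
        h.close();
      }
    }
  }
}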

Let's start with a few questions:

  1. How do Iterator and ColumnFamilyHandle organize the MemTable, ImmMemTable, Manifest, SST files, etc. behind the scenes?
  2. To find the values in a given key range of a CF, how do we locate a specific position in a specific file?
  3. How is the Iterator lifecycle managed? After the CF is closed, how does an Iterator remain usable without being released?

Key class structures and relationships: ColumnFamilyHandle

ColumnFamilyHandle <--- ColumnFamilyHandleImpl ---+ ColumnFamilyData  ---+ SuperVersion   -----------------------+ Version current ------------------+ uint64_t version_number
                                                  + MemTable mem         + MemTableListVersion imm               + ColumnFamilyData cfd
                                                  + MemTableList imm     + MemTable mem why?                     + VersionStorageInfo storage_info (SSTs meta)
                                                  + Ref refs             + Ref refs
                                                  + ColumnFamilyOptions
                                                  + ColumnFamilyData (next & prev)

ColumnFamilyHandle is the descriptor for a CF (analogous to a table). When a CF is created or the database is opened, you obtain a handle for each CF; any operation on the table, such as put, get, or scan, goes through the ColumnFamilyHandle descriptor, for example: rocksdb.put(cfHandle, key, value).

A ColumnFamilyHandle is obtained either when creating a CF or when opening the database, sketched here in RocksJava style (see the full example above for the real signatures):

cfHandle = db.createColumnFamily(new ColumnFamilyDescriptor(cfName));

RocksDB.open(dbOptions, path, cfDescriptors, cfHandles);  // open() fills cfHandles, one per CF

The ColumnFamilyData underneath the ColumnFamilyHandle manages the CF's state and resources, including the memtable and the immutable memtables, and it manages the CF's metadata, such as the current version number and the SST file information, through a SuperVersion; all ColumnFamilyData objects live in the DB instance's ColumnFamilySet.

Key class structures and relationships: Iterator

Iterator <--- ArenaWrappedDBIter ---+ DBIter db_iter ----------+ InternalIterator iter <------- MergingIterator ---+ vector children  ---+ MemTableIterator memtable
                                    + Arena arena              + bool valid                                        + MergerMinIterHeap minHeap             + MemTableIterator immutables
                                    + uint64_t sv_number       + IterKey saved_key                                 + InternalIterator current              + BlockBasedTableIterator level 0
                                                               + string saved_value                                                                        + LevelIterator level 1~n
                                                               + SequenceNumber sequence
                                                               + iterate_lower_bound、iterate_upper_bound、prefix_start_key
                                                               + user_comparator、merge_operator、prefix_extractor
                                                               + LocalStatistics local_stats

For queries, the outermost layer calls RocksDB.newIterator(cfHandle) to obtain an Iterator and then reads the specified CF through it. Apart from the point lookup get, which returns the value for a single key, every other query is built on top of the Iterator, including full table scans, range queries (greater-than, less-than, interval) and prefix queries. The Iterator covers a lot of ground and has a fairly involved lifecycle; the read path touches most of RocksDB's key concepts.
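
As an illustration, a small RocksJava sketch of a range scan and a prefix scan built on the same RocksIterator; the bound checks are done by hand here, and the helper names are hypothetical:

import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;
import java.util.Arrays;

public class IteratorScans {
  // lexicographic comparison of unsigned bytes, mimicking the default bytewise comparator
  static int compare(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int cmp = (a[i] & 0xff) - (b[i] & 0xff);
      if (cmp != 0) return cmp;
    }
    return a.length - b.length;
  }

  // range scan over [startKey, endKey)
  static void rangeScan(RocksDB db, ColumnFamilyHandle cf, byte[] startKey, byte[] endKey) {
    try (RocksIterator it = db.newIterator(cf)) {
      for (it.seek(startKey); it.isValid() && compare(it.key(), endKey) < 0; it.next()) {
        System.out.println(new String(it.key()) + " = " + new String(it.value()));
      }
    }
  }

  // prefix scan: all keys starting with prefix
  static void prefixScan(RocksDB db, ColumnFamilyHandle cf, byte[] prefix) {
    try (RocksIterator it = db.newIterator(cf)) {
      for (it.seek(prefix); it.isValid(); it.next()) {
        byte[] key = it.key();
        if (key.length < prefix.length
            || !Arrays.equals(Arrays.copyOf(key, prefix.length), prefix)) {
          break;  // left the prefix range
        }
        System.out.println(new String(key) + " = " + new String(it.value()));
      }
    }
  }
}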

Building the outermost iterator: the call stack of RocksDB.newIterator(cfHandle):

ArenaWrappedDBIter::Init 0x7feefb8f5c00, allow_refresh_=1
ArenaWrappedDBIter::Init()
 0   librocksdbjni3300438414871377681.jnilib 0x0000000121dc8236 _ZN7rocksdb18ArenaWrappedDBIter4InitEPNS_3EnvERKNS_11ReadOptionsERKNS_18ImmutableCFOptionsERKyyyPNS_12ReadCallbackEbb + 214
 1   librocksdbjni3300438414871377681.jnilib 0x0000000121dc85ba _ZN7rocksdb25NewArenaWrappedDbIteratorEPNS_3EnvERKNS_11ReadOptionsERKNS_18ImmutableCFOptionsERKyyyPNS_12ReadCallbackEPNS_6DBImplEPNS_16ColumnFamilyDataEbb + 266
 2   librocksdbjni3300438414871377681.jnilib 0x0000000121d640f9 _ZN7rocksdb6DBImpl11NewIteratorERKNS_11ReadOptionsEPNS_18ColumnFamilyHandleE + 617
 3   librocksdbjni3300438414871377681.jnilib 0x0000000121c8757e Java_org_rocksdb_RocksDB_iteratorCF__JJ + 78

RocksDB.newIterator() returns an ArenaWrappedDBIter object. ArenaWrappedDBIter is essentially a shell: the DBIter it holds carries a large amount of state (see the diagram above, e.g. the key & value currently being read) and in turn holds an inner InternalIterator. DBIter's job is to forward the query to the underlying InternalIterator; the KV returned by the InternalIterator is raw binary data, which DBIter parses into meaningful content: the trailing 8 bytes pack the sequence number together with the operation type in the low byte (a plain Value key, a Delete key, a Merge key, etc.), and the rest is the actual user key. A Delete key means skipping ahead to read the next key, a Merge key means merging the old and new values, and only after this processing is a result returned.
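
To make DBIter's job concrete, here is a simplified stand-alone sketch (not RocksDB code) of how the versions of one user key, ordered newest-sequence-first, collapse into at most one user-visible result; Merge is modeled as plain string concatenation just for illustration:

import java.util.*;

public class VisibleValueSketch {
  enum Type { VALUE, DELETE, MERGE }

  record InternalEntry(String userKey, long sequence, Type type, String value) {}

  // entries: all versions of ONE user key, sorted by sequence descending (newest first)
  static Optional<String> resolve(List<InternalEntry> entries, long snapshotSeq) {
    List<String> pendingMerges = new ArrayList<>();
    for (InternalEntry e : entries) {
      if (e.sequence() > snapshotSeq) continue;   // newer than our snapshot: not visible
      switch (e.type()) {
        case DELETE:
          // a deletion hides every older version; newer merge operands apply to "nothing"
          return pendingMerges.isEmpty()
              ? Optional.empty()
              : Optional.of(String.join(",", pendingMerges));
        case VALUE:
          // the newest visible value is the base; apply the newer merge operands on top
          String base = e.value();
          return Optional.of(pendingMerges.isEmpty()
              ? base : base + "," + String.join(",", pendingMerges));
        case MERGE:
          pendingMerges.add(0, e.value());        // remember the operand, keep looking older
          break;
      }
    }
    return pendingMerges.isEmpty() ? Optional.empty()
                                   : Optional.of(String.join(",", pendingMerges));
  }
}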

The Arena is where the DBIter and its inner InternalIterator are stored; the goal is to avoid lots of small memory fragments. DBIter has many members, so the Arena allocates one large chunk to hold all of them rather than allocating a small piece of memory per member.

In addition, ArenaWrappedDBIter holds some extra information used to Refresh the iterator: ColumnFamilyData cfd_, DBImpl db_impl_, and ReadOptions read_options_. Refresh means that when the current SuperVersion number is newer than the one the iterator was created with, the inner DBIter and InternalIterator are rebuilt; see ArenaWrappedDBIter::Refresh().

The detailed KV format is in db/memtable.cc, MemTable::Add(): internal_key_size(varint) + internal_key(user_key+sequence+type) + value_size(varint) + value. From the upper layer's point of view, the user_key may additionally carry a timestamp appended to the real user data.
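
A rough decoding sketch of that layout, assuming LEB128-style varints for the sizes and a little-endian fixed64 trailer packing (sequence << 8) | type; this is illustrative code, not the RocksDB implementation:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class MemTableEntrySketch {

  // decode a LEB128-style varint32 at pos[0], advancing the cursor
  static int readVarint32(byte[] buf, int[] pos) {
    int result = 0, shift = 0;
    while (true) {
      int b = buf[pos[0]++] & 0xff;
      result |= (b & 0x7f) << shift;
      if ((b & 0x80) == 0) return result;
      shift += 7;
    }
  }

  // decode one entry laid out as: internal_key_size + internal_key + value_size + value
  static void decodeEntry(byte[] entry) {
    int[] pos = {0};
    int internalKeySize = readVarint32(entry, pos);
    byte[] internalKey = Arrays.copyOfRange(entry, pos[0], pos[0] + internalKeySize);
    pos[0] += internalKeySize;
    int valueSize = readVarint32(entry, pos);
    byte[] value = Arrays.copyOfRange(entry, pos[0], pos[0] + valueSize);

    // internal_key = user_key + 8-byte trailer; the trailer packs (sequence << 8) | type
    byte[] userKey = Arrays.copyOf(internalKey, internalKeySize - 8);
    long packed = ByteBuffer.wrap(internalKey, internalKeySize - 8, 8)
                            .order(ByteOrder.LITTLE_ENDIAN).getLong();
    long sequence = packed >>> 8;        // 56-bit sequence number
    int type = (int) (packed & 0xff);    // value type: Delete / Value / Merge / ...
    System.out.printf("key=%s seq=%d type=%d value=%s%n",
        new String(userKey), sequence, type, new String(value));
  }
}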

The WriteBatch-level format is in db/write_batch.cc, WriteBatchInternal::Put(): tag(type) + cf_id(varint) + key_and_timestamp_size(varint) + key_data + timestamp + value_size(varint) + value_data.

Note that when TTL is enabled, DBWithTTLImpl::Write() shows that the timestamp is a 4-byte field appended after the value; the TTL filtering happens in TtlCompactionFilter.
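
From the Java side, this TTL path is taken by opening the database through the TtlDB wrapper instead of RocksDB; a minimal sketch (the path and TTL value are placeholders):

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.TtlDB;

public class TtlExample {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    try (Options options = new Options().setCreateIfMissing(true);
         // entries older than 3600 seconds become eligible for removal during compaction
         TtlDB db = TtlDB.open(options, "/tmp/rocksdb-ttl-demo", 3600, false)) {
      db.put("key1".getBytes(), "value1".getBytes());
    }
  }
}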

For more of the Put() path, see DBImpl::WriteImpl() -> WriteBatchInternal::InsertInto() -> WriteBatch::Iterate() -> WriteBatchInternal::Iterate() -> MemTableInserter::PutCFImpl() -> MemTable::Add().

MergingIterator is an all-encompassing iterator and itself a kind of InternalIterator. All the lower-level child iterators are placed inside it, including the InternalIterators of the memtable, the immutable memtables and the SSTs; they are held in a vector, and a min-heap (minHeap) is used to speed up picking which child iterator supplies the next KV.
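
The technique can be sketched independently of RocksDB: seek every sorted child iterator, push the valid ones into a min-heap keyed by their current key, and let the heap top supply the globally smallest next key. A stand-alone Java sketch of that idea (the ChildIterator interface is hypothetical, not the RocksDB API):

import java.util.*;

public class MergingIteratorSketch {
  // minimal interface that a memtable / SST / level iterator sketch would implement
  interface ChildIterator {
    void seek(String target);   // position at the first key >= target
    boolean isValid();
    String key();
    String value();
    void next();
  }

  private final List<ChildIterator> children;
  private final PriorityQueue<ChildIterator> minHeap =
      new PriorityQueue<>(Comparator.comparing(ChildIterator::key));

  MergingIteratorSketch(List<ChildIterator> children) {
    this.children = children;
  }

  // seek every child, then keep only the valid ones in the heap
  void seek(String target) {
    minHeap.clear();
    for (ChildIterator child : children) {
      child.seek(target);
      if (child.isValid()) {
        minHeap.add(child);
      }
    }
  }

  boolean isValid() { return !minHeap.isEmpty(); }
  String key()      { return minHeap.peek().key(); }
  String value()    { return minHeap.peek().value(); }

  // advance the child that supplied the current smallest key and restore heap order
  void next() {
    ChildIterator current = minHeap.poll();
    current.next();
    if (current.isValid()) {
      minHeap.add(current);    // re-insert with its new current key
    }
  }
}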

Key code overview:

  • Building the InternalIterator: DBImpl::NewInternalIterator(), full code at the end.
  • MergingIterator picking the next key-value from its child iterators: MergingIterator::SeekToFirst() & Next(), full code at the end.
  • The iterator's data-parsing method: DBIter::FindNextUserEntryInternal(), full code at the end.

Now back to the questions from the beginning:

Question 1: how do Iterator and ColumnFamilyHandle organize the MemTable, ImmMemTable, Manifest, SST files, etc. behind the scenes?

The analysis above should make this basically clear: the ColumnFamilyHandle reaches the memtable, immutable memtables and SST metadata through ColumnFamilyData and its SuperVersion/Version, and the Iterator stitches them together through the MergingIterator's child iterators.

Question 2: to find the values in a given key range of a CF, how do we locate a specific position in a specific file?

Follow ArenaWrappedDBIter::Seek(const Slice& target) down the call chain. When it reaches MergingIterator::Seek(const Slice& target), every child iterator performs a Seek, the child iterators are pushed into the min-heap ordered by key, and the child iterator holding the smallest key is returned. When ArenaWrappedDBIter::Next() fetches the next key, the value of the previously smallest child iterator is consumed and the heap again yields the child iterator with the smallest key; this repeats until the upper bound is reached.

So how does a child iterator's own Seek work?

  • For the in-memory MemTableIterator, taking the SkipList memtable as an example, Seek goes through SkipListRep::Iterator::Seek() to find the corresponding SkipList node;
  • For level 0 SST files (there may be several), Seek goes through BlockBasedTableIterator::Seek() / PlainTableIterator::Seek(). BlockBasedTable is the default SST format; BlockBasedTableIterator first uses the SST's block index, IndexIterator::Seek(), to quickly locate the rough position inside the file (which block, typically about 4K per block), and finally uses BlockIter::Seek() to binary-search within the block for the entry matching the key (a binary-search sketch follows this list);
  • For level 1~n SST files, each level has one LevelIterator. The SST files within a level are all sorted, so LevelIterator::Seek() first finds the file in that level that covers the key and returns a BlockBasedTableIterator for that SST file, then calls BlockBasedTableIterator::Seek(); from there the flow is the same as for level 0 above.
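
A stand-alone sketch of the lower-bound binary search that such a block-level Seek performs over sorted entries (illustrative only; the real BlockIter searches over restart points inside the encoded block):

import java.util.List;

public class BlockSeekSketch {
  // return the index of the first entry whose key is >= target (lower bound),
  // or sortedKeys.size() if every key is smaller than target
  static int seek(List<String> sortedKeys, String target) {
    int lo = 0, hi = sortedKeys.size();
    while (lo < hi) {
      int mid = (lo + hi) >>> 1;
      if (sortedKeys.get(mid).compareTo(target) < 0) {
        lo = mid + 1;   // everything up to mid is < target
      } else {
        hi = mid;       // mid might be the answer
      }
    }
    return lo;
  }
}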

Question 3: how is the Iterator lifecycle managed? After the CF is closed, how does an Iterator remain usable without being released?

The ColumnFamilyData structure contains a reference count refs. When ColumnFamilyHandle.close() releases the CF descriptor, it only decrements the reference count of the underlying ColumnFamilyData by 1; the data is actually released only when refs reaches 0 (see the destructor ~ColumnFamilyHandleImpl()).
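
The rule can be sketched with a simple reference counter (illustrative Java, not the C++ implementation): the handle and other users such as open iterators each hold a reference, and the data is freed only when the last one is dropped:

import java.util.concurrent.atomic.AtomicInteger;

public class ColumnFamilyDataSketch {
  private final AtomicInteger refs = new AtomicInteger(0);

  void ref() {                 // taken by the handle, and by each iterator / SuperVersion user
    refs.incrementAndGet();
  }

  void unref() {               // called from handle close / iterator close
    if (refs.decrementAndGet() == 0) {
      free();                  // only the last reference actually releases the resources
    }
  }

  private void free() {
    // release memtables, immutable memtables, versions, ...
  }
}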

 

Key structures


Key structure: ColumnFamilyData

Code path: rocksdb/db/column_family.h

// This class keeps all the data that a column family needs.
// Most methods require DB mutex held, unless otherwise noted
class ColumnFamilyData {
  uint32_t id_;
  const std::string name_;
  Version* dummy_versions_;  // Head of circular doubly-linked list of versions.
  Version* current_;         // == dummy_versions->prev_

  std::atomic<int> refs_;       // outstanding references to ColumnFamilyData
  std::atomic<bool> initialized_;
  std::atomic<bool> dropped_;   // true if client dropped it

  const InternalKeyComparator internal_comparator_;
  std::vector<std::unique_ptr<IntTblPropCollectorFactory>>
      int_tbl_prop_collector_factories_;

  const ColumnFamilyOptions initial_cf_options_;
  const ImmutableCFOptions ioptions_;
  MutableCFOptions mutable_cf_options_;

  const bool is_delete_range_supported_;

  std::unique_ptr<TableCache> table_cache_;

  std::unique_ptr<InternalStats> internal_stats_;

  WriteBufferManager* write_buffer_manager_;

  MemTable* mem_;
  MemTableList imm_;
  SuperVersion* super_version_;

  // An ordinal representing the current SuperVersion. Updated by
  // InstallSuperVersion(), i.e. incremented every time super_version_
  // changes.
  std::atomic<uint64_t> super_version_number_;

  // Thread's local copy of SuperVersion pointer
  // This needs to be destructed before mutex_
  std::unique_ptr<ThreadLocalPtr> local_sv_;

  // pointers for a circular linked list. we use it to support iterations over
  // all column families that are alive (note: dropped column families can also
  // be alive as long as client holds a reference)
  ColumnFamilyData* next_;
  ColumnFamilyData* prev_;

  // This is the earliest log file number that contains data from this
  // Column Family. All earlier log files must be ignored and not
  // recovered from
  uint64_t log_number_;

  std::atomic<FlushReason> flush_reason_;

  // An object that keeps all the compaction stats
  // and picks the next compaction
  std::unique_ptr<CompactionPicker> compaction_picker_;

  ColumnFamilySet* column_family_set_;

  std::unique_ptr<WriteControllerToken> write_controller_token_;

  // If true --> this ColumnFamily is currently present in DBImpl::flush_queue_
  bool queued_for_flush_;

  // If true --> this ColumnFamily is currently present in
  // DBImpl::compaction_queue_
  bool queued_for_compaction_;

  uint64_t prev_compaction_needed_bytes_;

  // if the database was opened with 2pc enabled
  bool allow_2pc_;

  // Memtable id to track flush.
  std::atomic<uint64_t> last_memtable_id_;

  // Directories corresponding to cf_paths.
  std::vector<std::unique_ptr<Directory>> data_dirs_;
};

Key structure: ArenaWrappedDBIter

Code path: rocksdb/db/db_iter.cc (created in rocksdb/db/db_impl.cc: ArenaWrappedDBIter* DBImpl::NewIteratorImpl() <= Iterator* DBImpl::NewIterator())

// A wrapper iterator which wraps DB Iterator and the arena, with which the DB
// iterator is supposed be allocated. This class is used as an entry point of
// a iterator hierarchy whose memory can be allocated inline. In that way,
// accessing the iterator tree can be more cache friendly. It is also faster
// to allocate.
class ArenaWrappedDBIter : public Iterator {
  DBIter* db_iter_;
  Arena arena_;
  uint64_t sv_number_;
  ColumnFamilyData* cfd_ = nullptr;
  DBImpl* db_impl_ = nullptr;
  ReadOptions read_options_;
  ReadCallback* read_callback_;
  bool allow_blob_ = false;
  bool allow_refresh_ = true;
};
ArenaWrappedDBIter* DBImpl::NewIteratorImpl(const ReadOptions& read_options,
                                            ColumnFamilyData* cfd,
                                            SequenceNumber snapshot,
                                            ReadCallback* read_callback,
                                            bool allow_blob,
                                            bool allow_refresh) {
  // Try to generate a DB iterator tree in continuous memory area to be
  // cache friendly. Here is an example of result:
  // +-------------------------------+
  // |                               |
  // | ArenaWrappedDBIter            |
  // |  +                            |
  // |  +---> Inner Iterator   ------------+
  // |  |                            |     |
  // |  |    +-- -- -- -- -- -- -- --+     |
  // |  +--- | Arena                 |     |
  // |       |                       |     |
  // |          Allocated Memory:    |     |
  // |       |   +-------------------+     |
  // |       |   | DBIter            | <---+
  // |           |  +                |
  // |       |   |  +-> iter_  ------------+
  // |       |   |                   |     |
  // |       |   +-------------------+     |
  // |       |   | MergingIterator   | <---+
  // |           |  +                |
  // |       |   |  +->child iter1  ------------+
  // |       |   |  |                |          |
  // |           |  +->child iter2  ----------+ |
  // |       |   |  |                |        | |
  // |       |   |  +->child iter3  --------+ | |
  // |           |                   |      | | |
  // |       |   +-------------------+      | | |
  // |       |   | Iterator1         | <--------+
  // |       |   +-------------------+      | |
  // |       |   | Iterator2         | <------+
  // |       |   +-------------------+      |
  // |       |   | Iterator3         | <----+
  // |       |   +-------------------+
  // |       |                       |
  // +-------+-----------------------+

 

Detailed code


Building the InternalIterator: DBImpl::NewInternalIterator():

InternalIterator* DBImpl::NewInternalIterator(
    const ReadOptions& read_options, ColumnFamilyData* cfd,
    SuperVersion* super_version, Arena* arena,
    RangeDelAggregator* range_del_agg) {
  InternalIterator* internal_iter;
  assert(arena != nullptr);
  assert(range_del_agg != nullptr);
  // Need to create internal iterator from the arena.
  MergeIteratorBuilder merge_iter_builder(
      &cfd->internal_comparator(), arena,
      !read_options.total_order_seek &&
          cfd->ioptions()->prefix_extractor != nullptr);
  // Collect iterator for mutable mem
  merge_iter_builder.AddIterator(
      super_version->mem->NewIterator(read_options, arena));
  std::unique_ptr<InternalIterator> range_del_iter;
  Status s;
  if (!read_options.ignore_range_deletions) {
    range_del_iter.reset(
        super_version->mem->NewRangeTombstoneIterator(read_options));
    s = range_del_agg->AddTombstones(std::move(range_del_iter));
  }
  // Collect all needed child iterators for immutable memtables
  if (s.ok()) {
    super_version->imm->AddIterators(read_options, &merge_iter_builder);
    if (!read_options.ignore_range_deletions) {
      s = super_version->imm->AddRangeTombstoneIterators(read_options, arena,
                                                         range_del_agg);
    }
  }
  TEST_SYNC_POINT_CALLBACK("DBImpl::NewInternalIterator:StatusCallback", &s);
  if (s.ok()) {
    // Collect iterators for files in L0 - Ln
    if (read_options.read_tier != kMemtableTier) {
      super_version->current->AddIterators(read_options, env_options_,
                                           &merge_iter_builder, range_del_agg);
    }
    internal_iter = merge_iter_builder.Finish();
    IterState* cleanup =
        new IterState(this, &mutex_, super_version,
                      read_options.background_purge_on_iterator_cleanup);
    internal_iter->RegisterCleanup(CleanupIteratorState, cleanup, nullptr);

    return internal_iter;
  } else {
    CleanupSuperVersion(super_version);
  }
  return NewErrorInternalIterator(s, arena);
}

 

MergingIterator picks the next key from its child iterators, using a min-heap to speed up the pick: MergingIterator::SeekToFirst() & Next():

virtual void SeekToFirst() override {
    ClearHeaps();
    status_ = Status::OK();
    for (auto& child : children_) {
      child.SeekToFirst();
      if (child.Valid()) {
        assert(child.status().ok());
        minHeap_.push(&child);
      } else {
        considerStatus(child.status());
      }
    }
    direction_ = kForward;
    current_ = CurrentForward();
  }
  
  IteratorWrapper* CurrentForward() const {
    assert(direction_ == kForward);
    return !minHeap_.empty() ? minHeap_.top() : nullptr;
  }
  
  virtual void Next() override {
    assert(Valid());

    // Ensure that all children are positioned after key().
    // If we are moving in the forward direction, it is already
    // true for all of the non-current children since current_ is
    // the smallest child and key() == current_->key().
    if (direction_ != kForward) {
      SwitchToForward();
      // The loop advanced all non-current children to be > key() so current_
      // should still be strictly the smallest key.
      assert(current_ == CurrentForward());
    }

    // For the heap modifications below to be correct, current_ must be the
    // current top of the heap.
    assert(current_ == CurrentForward());

    // as the current points to the current record. move the iterator forward.
    current_->Next();
    if (current_->Valid()) {
      // current is still valid after the Next() call above.  Call
      // replace_top() to restore the heap property.  When the same child
      // iterator yields a sequence of keys, this is cheap.
      assert(current_->status().ok());
      minHeap_.replace_top(current_);
    } else {
      // current stopped being valid, remove it from the heap.
      considerStatus(current_->status());
      minHeap_.pop();
    }
    current_ = CurrentForward();
  }

 

The iterator's data-parsing method: DBIter::FindNextUserEntryInternal():

bool DBIter::FindNextUserEntryInternal(bool skipping, bool prefix_check) {
  // Loop until we hit an acceptable entry to yield
  assert(iter_->Valid());
  assert(status_.ok());
  assert(direction_ == kForward);
  current_entry_is_merged_ = false;

  // How many times in a row we have skipped an entry with user key less than
  // or equal to saved_key_. We could skip these entries either because
  // sequence numbers were too high or because skipping = true.
  // What saved_key_ contains throughout this method:
  //  - if skipping        : saved_key_ contains the key that we need to skip,
  //                         and we haven't seen any keys greater than that,
  //  - if num_skipped > 0 : saved_key_ contains the key that we have skipped
  //                         num_skipped times, and we haven't seen any keys
  //                         greater than that,
  //  - none of the above  : saved_key_ can contain anything, it doesn't matter.
  uint64_t num_skipped = 0;

  is_blob_ = false;

  do {
    if (!ParseKey(&ikey_)) {
      return false;
    }

    if (iterate_upper_bound_ != nullptr &&
        user_comparator_->Compare(ikey_.user_key, *iterate_upper_bound_) >= 0) {
      break;
    }

    if (prefix_extractor_ && prefix_check &&
        prefix_extractor_->Transform(ikey_.user_key)
                .compare(prefix_start_key_) != 0) {
      break;
    }

    if (TooManyInternalKeysSkipped()) {
      return false;
    }

    if (IsVisible(ikey_.sequence)) {
      if (skipping && user_comparator_->Compare(ikey_.user_key,
                                                saved_key_.GetUserKey()) <= 0) {
        num_skipped++;  // skip this entry
        PERF_COUNTER_ADD(internal_key_skipped_count, 1);
      } else {
        num_skipped = 0;
        switch (ikey_.type) {
          case kTypeDeletion:
          case kTypeSingleDeletion:
            // Arrange to skip all upcoming entries for this key since
            // they are hidden by this deletion.
            // if iterartor specified start_seqnum we
            // 1) return internal key, including the type
            // 2) return ikey only if ikey.seqnum >= start_seqnum_
            // note that if deletion seqnum is < start_seqnum_ we
            // just skip it like in normal iterator.
            if (start_seqnum_ > 0 && ikey_.sequence >= start_seqnum_)  {
              saved_key_.SetInternalKey(ikey_);
              valid_ = true;
              return true;
            } else {
              saved_key_.SetUserKey(
                ikey_.user_key,
                !pin_thru_lifetime_ || !iter_->IsKeyPinned() /* copy */);
              skipping = true;
              PERF_COUNTER_ADD(internal_delete_skipped_count, 1);
            }
            break;
          case kTypeValue:
          case kTypeBlobIndex:
            if (start_seqnum_ > 0) {
              // we are taking incremental snapshot here
              // incremental snapshots aren't supported on DB with range deletes
              assert(!(
                (ikey_.type == kTypeBlobIndex) && (start_seqnum_ > 0)
              ));
              if (ikey_.sequence >= start_seqnum_) {
                saved_key_.SetInternalKey(ikey_);
                valid_ = true;
                return true;
              } else {
                // this key and all previous versions shouldn't be included,
                // skipping
                saved_key_.SetUserKey(ikey_.user_key,
                  !pin_thru_lifetime_ || !iter_->IsKeyPinned() /* copy */);
                skipping = true;
              }
            } else {
              saved_key_.SetUserKey(
                  ikey_.user_key,
                  !pin_thru_lifetime_ || !iter_->IsKeyPinned() /* copy */);
              if (range_del_agg_.ShouldDelete(
                      ikey_, RangeDelAggregator::RangePositioningMode::
                                 kForwardTraversal)) {
                // Arrange to skip all upcoming entries for this key since
                // they are hidden by this deletion.
                ...
            }
            break;
          case kTypeMerge:
            saved_key_.SetUserKey(
                ikey_.user_key,
                !pin_thru_lifetime_ || !iter_->IsKeyPinned() /* copy */);
            if (range_del_agg_.ShouldDelete(
                    ikey_, RangeDelAggregator::RangePositioningMode::
                               kForwardTraversal)) {
              // Arrange to skip all upcoming entries for this key since
              // they are hidden by this deletion.
              skipping = true;
              num_skipped = 0;
              PERF_COUNTER_ADD(internal_delete_skipped_count, 1);
            } else {
              // By now, we are sure the current ikey is going to yield a
              // value
              current_entry_is_merged_ = true;
              valid_ = true;
              return MergeValuesNewToOld();  // Go to a different state machine
            }
            break;
          default:
            assert(false);
            break;
        }
      }
    } else {
      PERF_COUNTER_ADD(internal_recent_skipped_count, 1);

      // This key was inserted after our snapshot was taken.
      // If this happens too many times in a row for the same user key, we want
      // to seek to the target sequence number.
      int cmp =
          user_comparator_->Compare(ikey_.user_key, saved_key_.GetUserKey());
      if (cmp == 0 || (skipping && cmp <= 0)) {
        num_skipped++;
      } else {
        saved_key_.SetUserKey(
            ikey_.user_key,
            !iter_->IsKeyPinned() || !pin_thru_lifetime_ /* copy */);
        skipping = false;
        num_skipped = 0;
      }
    }

    // If we have sequentially iterated via numerous equal keys, then it's
    // better to seek so that we can avoid too many key comparisons.
    if (num_skipped > max_skip_) {
      num_skipped = 0;
      std::string last_key;
      if (skipping) {
        // We're looking for the next user-key but all we see are the same
        // user-key with decreasing sequence numbers. Fast forward to
        // sequence number 0 and type deletion (the smallest type).
        AppendInternalKey(&last_key, ParsedInternalKey(saved_key_.GetUserKey(),
                                                       0, kTypeDeletion));
        // Don't set skipping = false because we may still see more user-keys
        // equal to saved_key_.
      } else {
        // We saw multiple entries with this user key and sequence numbers
        // higher than sequence_. Fast forward to sequence_.
        // Note that this only covers a case when a higher key was overwritten
        // many times since our snapshot was taken, not the case when a lot of
        // different keys were inserted after our snapshot was taken.
        AppendInternalKey(&last_key,
                          ParsedInternalKey(saved_key_.GetUserKey(), sequence_,
                                            kValueTypeForSeek));
      }
      iter_->Seek(last_key);
      RecordTick(statistics_, NUMBER_OF_RESEEKS_IN_ITERATION);
    } else {
      iter_->Next();
    }
  } while (iter_->Valid());

  valid_ = false;
  return iter_->status().ok();
}

 
