C++ LRU Cache Implementation

Contents

  • LRU
  • The Problem
  • Implementation

LRU

A quick recap of LRU (Least Recently Used): the entry that has gone unused for the longest time is the one evicted first.
An LRU cache should behave as follows (see the sketch after this list):

  • When a new entry is added, it is placed at the front of the cache
  • When an existing entry is accessed, it is also moved to the front of the cache
  • When capacity is exceeded, the entry at the tail of the cache is evicted
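
A minimal sketch of these three rules, using nothing but std::list<int> with capacity 2. The touch() helper is purely illustrative here; the hash-map lookup that a real cache needs comes later in the article.

#include <cstddef>
#include <iostream>
#include <list>

int main() {
  const std::size_t capacity = 2;
  std::list<int> cache;  // front = most recently used, back = least recently used

  // touch() models both "insert" and "access": the key always ends up at the front.
  auto touch = [&](int key) {
    cache.remove(key);             // drop the key if it is already cached
    cache.push_front(key);         // most recently used goes to the front
    if (cache.size() > capacity) {
      cache.pop_back();            // capacity exceeded: evict the tail (LRU)
    }
  };

  touch(1);  // [1]
  touch(2);  // [2 1]
  touch(1);  // [1 2]  accessing 1 moves it to the front
  touch(3);  // [3 1]  2 at the tail is evicted
  for (int k : cache) std::cout << k << " ";  // prints: 3 1
  std::cout << std::endl;
  return 0;
}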

I was recently reading through the mediapipe source code and came across image_multi_pool.cc. Its basic idea is to use an LRU cache to manage the pools of Image objects, but the LRU there is implemented with a deque + unordered_map.
The main code:

class ImageMultiPool {
 public:
  ImageMultiPool() {}
  explicit ImageMultiPool(void* ignored) {}
  ~ImageMultiPool();

  // Obtains a buffer. May either be reused or created anew.
  Image GetBuffer(int width, int height, bool use_gpu,
                  ImageFormat::Format format /*= ImageFormat::SRGBA*/)
                  {
    IBufferSpec key(width, height, format);
    auto pool_it = pools_cpu_.find(key);
    if (pool_it == pools_cpu_.end()) {
      // Discard the least recently used pool in LRU cache.
      if (pools_cpu_.size() >= kMaxPoolCount) {
        auto old_spec = buffer_specs_cpu_.front();  // Front has LRU.
        buffer_specs_cpu_.pop_front();
        pools_cpu_.erase(old_spec);
      }
      buffer_specs_cpu_.push_back(key);  // Push new spec to back.
      std::tie(pool_it, std::ignore) = pools_cpu_.emplace(
          std::piecewise_construct, std::forward_as_tuple(key),
          std::forward_as_tuple(MakeSimplePoolCpu(key)));
    } else {
      // Find and move current 'key' spec to back, keeping others in same order.
      auto specs_it = buffer_specs_cpu_.begin();
      while (specs_it != buffer_specs_cpu_.end()) {
        if (*specs_it == key) {
          buffer_specs_cpu_.erase(specs_it);
          break;
        }
        ++specs_it;
      }
      buffer_specs_cpu_.push_back(key);
    }
    return GetBufferFromSimplePool(pool_it->first, pool_it->second);
  }

  struct IBufferSpec {
    IBufferSpec(int w, int h, mediapipe::ImageFormat::Format f)
        : width(w), height(h), format(f) {}
    int width;
    int height;
    mediapipe::ImageFormat::Format format;
  };

 private:
  std::unordered_map<IBufferSpec, SimplePoolCpu, IBufferSpecHash> pools_cpu_;
  std::deque<IBufferSpec> buffer_specs_cpu_;
};
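
The excerpt uses IBufferSpecHash and relies on IBufferSpec being equality-comparable, but neither definition is shown. A minimal hypothetical sketch of what they might look like (these are my own stand-ins, not the mediapipe definitions; in a real build the hash functor has to be visible before the map member that uses it, and std::hash needs <functional>):

inline bool operator==(const ImageMultiPool::IBufferSpec& a,
                       const ImageMultiPool::IBufferSpec& b) {
  return a.width == b.width && a.height == b.height && a.format == b.format;
}

struct IBufferSpecHash {
  std::size_t operator()(const ImageMultiPool::IBufferSpec& spec) const {
    // Mix the three fields into one hash value; any reasonable combiner works.
    std::size_t h = std::hash<int>()(spec.width);
    h = h * 31 + std::hash<int>()(spec.height);
    h = h * 31 + std::hash<int>()(static_cast<int>(spec.format));
    return h;
  }
};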

The Problem

Using an LRU cache to manage the object pools is perfectly fine. What feels off is implementing the LRU bookkeeping with a deque: on every hit the code has to scan the deque linearly to find the spec of the just-used pool, which is O(n) and clearly not optimal.

Implementation

The fix is essentially to replace the deque with a std::list. A list guarantees O(1) insertion and erasure at any position, whereas a deque only offers amortized O(1) insertion and erasure at its two ends, and still has to relocate its internal block index, an O(n) step, when it has to grow.
The other change is to store an iterator into the list alongside each cached value, so the LRU bookkeeping entry can be erased directly without a search. This is also why a deque will not do here: insertions and erasures may invalidate deque iterators, while for a list only the iterator to the erased element is invalidated.

#include <iostream>
#include <list>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>
#include <utility>

template <typename KeyType, typename ValueType, int capacity = 10>
class LRUCache {
 private:
  std::unordered_map<KeyType, std::pair<ValueType, typename std::list<KeyType>::iterator>> cache;
  std::list<KeyType> lruList;
  mutable std::mutex mtx;  // Guards cache and lruList; mutable so the const operator<< can lock it

 public:
  ValueType get(const KeyType& key) {
    std::lock_guard<std::mutex> lock(mtx);  // Lock for thread safety
    auto it = cache.find(key);
    if (it != cache.end()) {
      // Move the accessed item to the front of the list
      lruList.erase(it->second.second);
      lruList.push_front(key);
      it->second.second = lruList.begin();
      return it->second.first;
    }
    return ValueType();  // Return a default-constructed value if the key is not in the cache
  }

  void put(const KeyType& key, const ValueType& value) {
    std::lock_guard<std::mutex> lock(mtx);  // Lock for thread safety
    auto it = cache.find(key);
    if (it != cache.end()) {
      // If the key exists, update its value and move it to the front
      it->second.first = value;
      lruList.erase(it->second.second);
      lruList.push_front(key);
      it->second.second = lruList.begin();
    } else {
      // If the key does not exist
      if (cache.size() >= static_cast<std::size_t>(capacity)) {
        // Remove the least recently used item (the tail of the list)
        KeyType lruKey = lruList.back();
        cache.erase(lruKey);
        lruList.pop_back();
      }
      // Add the new key-value pair at the front
      lruList.push_front(key);
      cache[key] = std::make_pair(value, lruList.begin());
    }
  }

  friend std::ostream& operator<<(std::ostream& os,
                                  const LRUCache<KeyType, ValueType, capacity>& rhs) {
    std::lock_guard<std::mutex> lock(rhs.mtx);  // Lock so printing does not race with put/get
    for (const auto& p : rhs.lruList) {
      os << p << " ";
    }
    return os;
  }
};

int main() {
  LRUCache<std::string, int, 2> cache;  // Capacity is set to 2

  std::thread t1([&]() {
    cache.put("one", 1);
    cache.put("two", 2);
    std::cout << "Thread 1 cache: " << cache << std::endl;
    std::cout << "Thread 1: " << cache.get("one") << std::endl;
    std::cout << "Thread 1 cache: " << cache << std::endl;
  });

  std::thread t2([&]() {
    cache.put("three", 3);
    std::cout << "Thread 2 cache: " << cache << std::endl;
    std::cout << "Thread 2: " << cache.get("two") << std::endl;
    std::cout << "Thread 2 cache: " << cache << std::endl;
    std::cout << "Thread 2: " << cache.get("one") << std::endl;
    std::cout << "Thread 2 cache: " << cache << std::endl;
  });

  t1.join();
  t2.join();

  return 0;
}
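
One possible refinement, not used in the code above: inside get() and put(), the erase() + push_front() pair can be replaced by a single std::list::splice call. splice relinks the existing node to the front in O(1) without destroying it, so the iterator stored in the map stays valid and does not even need to be refreshed. A self-contained sketch of that behaviour:

#include <iostream>
#include <iterator>
#include <list>

int main() {
  std::list<int> lruList = {1, 2, 3};       // front = most recently used
  auto it = std::next(lruList.begin(), 2);  // iterator to the element 3

  // Relink the node that `it` points to to the front of the same list, in O(1).
  // Nothing is copied or destroyed, and `it` still refers to that element at
  // its new position, so a stored iterator would need no update.
  lruList.splice(lruList.begin(), lruList, it);

  for (int k : lruList) std::cout << k << " ";  // prints: 3 1 2
  std::cout << "\n*it = " << *it << std::endl;  // still refers to 3
  return 0;
}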

