Caffe source code analysis: DataLayer

DataLayer is the data layer Caffe uses during training (it reads data on a separate thread to speed up the solver's training loop). It derives from BaseDataLayer and BasePrefetchingDataLayer:

template <typename Dtype>
class BaseDataLayer : public Layer<Dtype> {
public:
    explicit BaseDataLayer(const LayerParameter& param);
    // This method may not be overridden except by the BasePrefetchingDataLayer.
    virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
                            const vector<Blob<Dtype>*>& top);
    // Data layers should be shared by multiple solvers in parallel
    virtual inline bool ShareInParallel() const { return true; }
    virtual void DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,
                                const vector<Blob<Dtype>*>& top) {}
    // Data layers have no bottoms, so reshaping is trivial.
    virtual void Reshape(....) { /* do nothing */ }
    virtual void Backward_cpu(....) { /* do nothing */ }

protected:
    TransformationParameter transform_param_;
    shared_ptr<DataTransformer<Dtype> > data_transformer_;
    bool output_labels_;
};

data_transformer_ applies the crop_size, do_mirror, mean-subtraction, and similar operations to the input images.
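
These switches come from the transform_param block of the layer definition. For reference, a typical configuration looks like the following (the mean file path is illustrative):

transform_param {
  crop_size: 227    # randomly crop a 227x227 patch (center crop at TEST time)
  mirror: true      # randomly mirror images horizontally
  mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"  # per-pixel mean to subtract
}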

Next, look at its LayerSetUp definition (parameter initialization):

template <typename Dtype>
void BaseDataLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
                                      const vector<Blob<Dtype>*>& top) {
    if (top.size() == 1) {
        output_labels_ = false;
    } else {
        output_labels_ = true;
    }
    data_transformer_.reset(
            new DataTransformer<Dtype>(transform_param_, this->phase_));
    data_transformer_->InitRand();
    // The subclasses should setup the size of bottom and top
    DataLayerSetUp(bottom, top);
}

BasePrefetchingDataLayer additionally inherits from InternalThread:

template <typename Dtype>
class BasePrefetchingDataLayer :
        public BaseDataLayer<Dtype>, public InternalThread {
public:
    explicit BasePrefetchingDataLayer(const LayerParameter& param);

    void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
                    const vector<Blob<Dtype>*>& top);

    virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
                             const vector<Blob<Dtype>*>& top);

    // Prefetches batches
    static const int PREFETCH_COUNT = 3;

protected:
    virtual void InternalThreadEntry();
    virtual void load_batch(Batch<Dtype>* batch) = 0;

    Batch<Dtype> prefetch_[PREFETCH_COUNT];
    BlockingQueue<Batch<Dtype>*> prefetch_free_;
    BlockingQueue<Batch<Dtype>*> prefetch_full_;

    Blob<Dtype> transformed_data_;
};

template <typename Dtype>
void BasePrefetchingDataLayer<Dtype>::LayerSetUp(
        const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
    BaseDataLayer<Dtype>::LayerSetUp(bottom, top);
    //.........
    DLOG(INFO) << "Initializing prefetch";
    this->data_transformer_->InitRand();
    StartInternalThread();
    DLOG(INFO) << "Prefetch initialized.";
}

The pure virtual load_batch function is implemented by DataLayer (see below).

template <typename Dtype>
void BasePrefetchingDataLayer<Dtype>::InternalThreadEntry() {
    while (!must_stop()) {
        Batch<Dtype>* batch = prefetch_free_.pop();
        load_batch(batch);
        prefetch_full_.push(batch);
    }
}
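
prefetch_free_ / prefetch_full_ form a classic producer/consumer pair built on BlockingQueue (covered in its own post in this series). As a mental model, here is a minimal sketch of such a blocking queue and the double-queue handoff; this is not Caffe's actual implementation (which uses boost primitives), just the same idea in standard C++:

#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal blocking queue: pop() blocks until an element is available.
template <typename T>
class SimpleBlockingQueue {
public:
    void push(const T& t) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(t);
        }
        cond_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        T t = queue_.front();
        queue_.pop();
        return t;
    }
private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};

// The handoff mirrors BasePrefetchingDataLayer:
//   prefetch thread:  batch = free.pop(); load_batch(batch); full.push(batch);
//   Forward_cpu:      batch = full.pop(); /* copy to top */  free.push(batch);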

DataLayer's prefetch thread loads a Batch (image, label) and pushes it onto prefetch_full_; Forward_cpu then pops a filled batch, copies it into the top blobs, and recycles it by pushing it back onto prefetch_free_:

template <typename Dtype>
void BasePrefetchingDataLayer<Dtype>::Forward_cpu(
        const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {

    Batch<Dtype>* batch = prefetch_full_.pop("Data layer prefetch queue empty");
    // Reshape to loaded data.
    top[0]->ReshapeLike(batch->data_);

    // Copy the data
    caffe_copy(batch->data_.count(), batch->data_.cpu_data(),
               top[0]->mutable_cpu_data());

    if (this->output_labels_) {
        // Reshape to loaded labels.
        top[1]->ReshapeLike(batch->label_);
        // Copy the labels.
        caffe_copy(batch->label_.count(), batch->label_.cpu_data(),
                   top[1]->mutable_cpu_data());
    }

    prefetch_free_.push(batch);
}

Next, DataLayer itself:

template <typename Dtype>
class DataLayer : public BasePrefetchingDataLayer<Dtype> {
public:
    explicit DataLayer(const LayerParameter& param);
    virtual ~DataLayer();
    virtual void DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,
                                const vector<Blob<Dtype>*>& top);
    // DataLayer uses DataReader instead for sharing for parallelism
    virtual inline bool ShareInParallel() const { return false; }
    virtual inline const char* type() const { return "Data"; }
    virtual inline int ExactNumBottomBlobs() const { return 0; }
    virtual inline int MinTopBlobs() const { return 1; }
    virtual inline int MaxTopBlobs() const { return 2; }

protected:
    virtual void load_batch(Batch<Dtype>* batch);
    DataReader reader_;
};

DataLayerSetUp initializes the top blob shapes. Note MinTopBlobs() = 1 and MaxTopBlobs() = 2: the layer always outputs a data blob and optionally a label blob. For the MNIST example below (batch_size 64, 1×28×28 grayscale images), top[0] is reshaped to {64, 1, 28, 28} and top[1] to {64}:

template <typename Dtype>
void DataLayer<Dtype>::DataLayerSetUp(const vector<Blob<Dtype>*>& bottom,
                                      const vector<Blob<Dtype>*>& top) {
    const int batch_size = this->layer_param_.data_param().batch_size();
    // Read a data point, and use it to initialize the top blob.
    Datum& datum = *(reader_.full().peek());

    // Use data_transformer to infer the expected blob shape from datum.
    vector<int> top_shape = this->data_transformer_->InferBlobShape(datum);
    this->transformed_data_.Reshape(top_shape);
    // Reshape top[0] and prefetch_data according to the batch_size.
    top_shape[0] = batch_size;
    top[0]->Reshape(top_shape);
    for (int i = 0; i < this->PREFETCH_COUNT; ++i) {
        this->prefetch_[i].data_.Reshape(top_shape);
    }

    if (this->output_labels_) {  // label
        vector<int> label_shape(1, batch_size);
        top[1]->Reshape(label_shape);
        for (int i = 0; i < this->PREFETCH_COUNT; ++i) {
            this->prefetch_[i].label_.Reshape(label_shape);
        }
    }
}

load_batch runs in the prefetch thread, repeatedly reading data from the database (e.g. LMDB), applying the crop_size, do_mirror, mean and related transforms, and handing the result to the next layer via Forward_cpu.

template <typename Dtype>
void DataLayer<Dtype>::load_batch(Batch<Dtype>* batch) {
    CPUTimer batch_timer;
    batch_timer.Start();
    double read_time = 0;
    double trans_time = 0;
    CPUTimer timer;

    const int batch_size = this->layer_param_.data_param().batch_size();
    Datum& datum = *(reader_.full().peek());
    // Use data_transformer to infer the expected blob shape from datum.
    vector<int> top_shape = this->data_transformer_->InferBlobShape(datum);
    this->transformed_data_.Reshape(top_shape);
    // Reshape batch according to the batch_size.
    top_shape[0] = batch_size;
    batch->data_.Reshape(top_shape);

    Dtype* top_data = batch->data_.mutable_cpu_data();
    Dtype* top_label = NULL;

    if (this->output_labels_) {
        top_label = batch->label_.mutable_cpu_data();
    }
    for (int item_id = 0; item_id < batch_size; ++item_id) {
        timer.Start();
        // get a datum
        Datum& datum = *(reader_.full().pop("Waiting for data"));
        read_time += timer.MicroSeconds();
        timer.Start();
        // Apply data transformations (mirror, scale, crop...).
        // transformed_data_ is pointed at this item's slice of the batch
        // memory, so Transform writes the result in place (no extra copy).
        int offset = batch->data_.offset(item_id);
        this->transformed_data_.set_cpu_data(top_data + offset);
        this->data_transformer_->Transform(datum, &(this->transformed_data_));
        // Copy label.
        if (this->output_labels_) {
            top_label[item_id] = datum.label();
        }
        trans_time += timer.MicroSeconds();

        // Return the datum to the reader's free queue for reuse.
        reader_.free().push(const_cast<Datum*>(&datum));
    }
    timer.Stop();
    batch_timer.Stop();
//  DLOG(INFO) << "Prefetch batch: " << batch_timer.MilliSeconds() << " ms.";
//  DLOG(INFO) << "     Read time: " << read_time / 1000 << " ms.";
//  DLOG(INFO) << "Transform time: " << trans_time / 1000 << " ms.";
}
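
For intuition, the mean/scale arithmetic that Transform performs on each element is roughly the following; this is a simplified sketch that ignores the cropping, mirroring, and channel-layout handling in the real DataTransformer:

#include <cstdint>
#include <vector>

// Simplified sketch of DataTransformer's per-element work:
// subtract the (per-pixel) mean, then multiply by transform_param's scale.
std::vector<float> transform_sketch(const std::vector<uint8_t>& pixels,
                                    const std::vector<float>& mean,
                                    float scale) {
    std::vector<float> out(pixels.size());
    for (size_t i = 0; i < pixels.size(); ++i) {
        out[i] = (static_cast<float>(pixels[i]) - mean[i]) * scale;
    }
    return out;
}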

Example usage, from the MNIST example (scale: 0.00390625 = 1/256 rescales pixel values from [0, 255] into [0, 1)):

layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/home/xy/caffe-master/examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
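
The TEST phase usually gets a second Data layer pointing at the test set; in the standard LeNet example it looks roughly like this (path and batch size as shipped with Caffe's MNIST example):

layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}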

The relevant proto definition is:

message DataParameter {
  enum DB {
    LEVELDB = 0;
    LMDB = 1;
  }
  // Specify the data source.
  optional string source = 1;
  // Specify the batch size.
  optional uint32 batch_size = 4;
  optional uint32 rand_skip = 7 [default = 0];
  optional DB backend = 8 [default = LEVELDB];
  // Prefetch queue (Number of batches to prefetch to host memory).
  optional uint32 prefetch = 10 [default = 4];
}

Overview of the Caffe source code analysis series

The main contents of this series on the Caffe deep learning framework are as follows:

1. Caffe source code analysis: CMake project build

Main content:

Building the project from scratch myself, which is a good way to understand how a large project is put together. The original Caffe build (mainly the CMake part) is fairly complex, so here I use a simplified CMake-only setup; most importantly, it supports running and debugging directly from CLion (if you want this project, leave your email in a comment and I will send it to you).


2. Caffe's memory allocation class SyncedMemory, and Blob, the medium for passing data around.

Main content:
Caffe source code analysis: SyncedMemory
Caffe source code analysis: Blob
The Blob post also shows how to convert between Blob and OpenCV images and operate on them, which makes Blob easier to understand.

3. Analysis of Caffe layers: the layer categories, their proto definitions, and their core functions.

Contents:
Caffe source code analysis: layer
Caffe source code analysis: ReLULayer
Caffe source code analysis: inner_product_layer
Caffe source code analysis: layer_factory

It starts with the simplest layer, ReLU, then the fully connected inner_product_layer, and finally layer_factory, which Caffe uses (the factory pattern) to create the various layers.

4. The data input layers, which read training data using multiple threads plus a BlockingQueue:

Contents:
Caffe source code analysis: BlockingQueue
Caffe source code analysis: InternalThread
Caffe source code analysis: DataReader

5. IO handling, such as parsing proto files into networks and serializing network parameters.

Contents:
Caffe source code analysis: DataTransformer
Caffe source code analysis: db, io

6. Finally, an example of training MNIST with a multilayer perceptron in pure C++.

Contents:

Caffe C++ example (MNIST multilayer perceptron: training and testing in C++)

For a neural network implementation organized, like Caffe, into layer, solver, loss, and net modules, see the blog post below; once you understand that Python code, the Caffe framework becomes easier to follow.

Neural network implementation in Python


Finally, if you want the CMake + CLion project for running and debugging the Caffe code directly, leave your email in a comment and I will send it to you.
