Getting Started with Caffe (4): Building an HDF5 Data Source

Sometimes the input is not a standard image but some other format, such as a spectrogram or a feature vector. In those cases LMDB, LevelDB, and the ImageData layer no longer fit, and we need a different input interface: HDF5Data.

This article walks through one way to build an HDF5 data source using the Python h5py library.

Code example

import h5py
import os
import cv2
import math
import numpy as np
import random
import re

root_path = "/home/tyd/caffe_case/HDF5/image"  # directory containing the images

with open("/home/tyd/caffe_case/HDF5/hdf5.txt", 'r') as f:  # each line: an image path followed by its regression targets
    lines = f.readlines()

num = len(lines)
random.shuffle(lines)


imgs = np.zeros([num, 3, 224, 224])  # buffer for the image data
labels = np.zeros([num, 10])  # buffer for the regression labels
for i in range(num):
    line = lines[i]
    segments = re.split(r'\s+', line.strip())
    print(segments[0])
    img = cv2.imread(os.path.join(root_path, segments[0]))
    img = cv2.resize(img, (224, 224))
    img = img.transpose(2,0,1)
    imgs[i,:,:,:] = img.astype(np.float32)
    for j in range(10):
        # rescale the targets to the resized 224 image (annotations assumed in 256-pixel coordinates)
        labels[i,j] = float(segments[j+1])*224/256

batchSize = 1  # number of images per HDF5 file; generally keep this under 8000
batchNum = int(math.ceil(1.0*num/batchSize))

imgsMean = np.mean(imgs, axis=0)
# imgs = (imgs - imgsMean)/255.0  # center and normalize the images
labelsMean = np.mean(labels, axis=0)  # center and normalize the labels (add the mean back at prediction time)
labels = (labels - labelsMean)/10

if os.path.exists('/home/tyd/caffe_case/HDF5/h5/trainlist.txt'):
    os.remove('/home/tyd/caffe_case/HDF5/h5/trainlist.txt')
if os.path.exists('/home/tyd/caffe_case/HDF5/h5/testlist.txt'):
    os.remove('/home/tyd/caffe_case/HDF5/h5/testlist.txt')
comp_kwargs = {'compression': 'gzip', 'compression_opts': 1}

## Write the data into HDF5 files
for i in range(batchNum):
    start = i*batchSize
    end = min((i+1)*batchSize, num)
    if i < batchNum-1:
        filename = '/home/tyd/caffe_case/HDF5/h5/train{0}.h5'.format(i)
    else:
        filename = '/home/tyd/caffe_case/HDF5/h5/test{0}.h5'.format(i-batchNum+1)
    print(filename)
    with h5py.File(filename, 'w') as f:
        f.create_dataset('data', data = np.array((imgs[start:end]-imgsMean)/255.0).astype(np.float32), **comp_kwargs)
        f.create_dataset('label', data = np.array(labels[start:end]).astype(np.float32), **comp_kwargs)

    if i < batchNum-1:
        with open('/home/tyd/caffe_case/HDF5/h5/trainlist.txt', 'a') as f:
            f.write(filename + '\n')  # record the absolute path of each training file
    else:
        with open('/home/tyd/caffe_case/HDF5/h5/testlist.txt', 'a') as f:
            f.write(filename + '\n')

# save the per-channel image mean; it is needed again at prediction time
imgsMean = np.mean(imgsMean, axis=(1,2))
with open('mean.txt', 'w') as f:
    f.write(str(imgsMean[0]) + '\n' + str(imgsMean[1]) + '\n' + str(imgsMean[2]))
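Before training, it is worth verifying that the generated files have the layout Caffe's HDF5Data layer expects: float32 datasets named exactly 'data' and 'label'. A minimal self-contained check (the file name 'check.h5' and the synthetic data are illustrative, not from the script above):

```python
import h5py
import numpy as np

# Build a tiny HDF5 file with the same 'data'/'label' layout the script
# produces: 2 fake 3x224x224 images and 10 regression targets each.
imgs = np.random.rand(2, 3, 224, 224).astype(np.float32)
labels = np.random.rand(2, 10).astype(np.float32)
with h5py.File('check.h5', 'w') as f:
    f.create_dataset('data', data=imgs, compression='gzip', compression_opts=1)
    f.create_dataset('label', data=labels, compression='gzip', compression_opts=1)

# Read it back and confirm the dataset names, shapes, and dtypes.
with h5py.File('check.h5', 'r') as f:
    print(f['data'].shape, f['data'].dtype)    # (2, 3, 224, 224) float32
    print(f['label'].shape, f['label'].dtype)  # (2, 10) float32
```

The same read-back pattern can be pointed at any of the train/test files the script writes.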

Following the commented steps above, we can see the full workflow for building an HDF5 data source.
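To consume the generated files, a network prototxt points an HDF5Data layer at the list file written by the script. A minimal sketch (the batch size and blob names here are illustrative; 'data' and 'label' must match the dataset names in the HDF5 files):

```
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "/home/tyd/caffe_case/HDF5/h5/trainlist.txt"
    batch_size: 64
  }
  include { phase: TRAIN }
}
```

A second HDF5Data layer with phase TEST would point at testlist.txt in the same way.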
