Convolution
Fully Connected Networks
For a large image, learning features over the entire image through a fully connected network is computationally very expensive.
In the sparse autoencoder section, we designed the input layer and the hidden layer to be "fully connected". From a computational standpoint, this is feasible for the relatively small images used in other sections (such as the 8x8 patches in the sparse autoencoder exercise, or the 28x28 images in the MNIST dataset), where features can be computed over the whole image. But for larger images (say 96x96), learning features over the entire image with such a fully connected network becomes prohibitively slow. For example, with a 96x96 input and a hidden layer that is to learn 100 features, connecting every input unit to every hidden unit means learning roughly 96 * 96 * 100 = 921,600, i.e. about 10^6, parameters, which makes backpropagation noticeably slow.
Locally Connected Networks
A simple way to address this problem is to restrict the connections between hidden units and input units: each hidden unit is allowed to connect to only a subset of the input units.
For example, each hidden unit may connect only to a small contiguous region of the input image. (For input modalities other than images, there are often natural ways to choose the "connection region" of inputs that feed a single hidden unit. With audio, for instance, a hidden unit might connect only to the input units corresponding to a particular time window of the signal.)
This idea of partial connectivity is also inspired by the structure of the biological visual system: neurons in the visual cortex receive information locally, responding only to stimuli in certain specific regions.
Natural images have an intrinsic property of stationarity: the statistics of one part of an image are the same as those of any other part. This means that features learned on one part of an image can also be applied to other parts, so the same learned features can be used at every location of the image.
More concretely, having learned features from small patches (say 8x8) sampled at random from a large image, we can apply these learned features as detectors anywhere in that image. Specifically, we can convolve the features learned from the 8x8 samples with the large image, obtaining an activation value for each feature at every position of the large image.
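A minimal sketch of this idea in MATLAB/Octave (the image im, the single feature w, and its bias b below are illustrative placeholders, not the exercise's trained parameters):

im = rand(96, 96); % a 96x96 grayscale image (placeholder)
w = rand(8, 8); % one learned 8x8 feature (placeholder)
b = 0; % its bias term (placeholder)
% conv2 flips its kernel, so flip w first to get a correlation-style detector
response = conv2(im, rot90(w, 2), 'valid'); % (96-8+1) x (96-8+1) = 89 x 89
activation = 1 ./ (1 + exp(-(response + b))); % sigmoid activation at every position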
Pooling
After features have been obtained via convolution, the next step is to use them for classification. In principle one could train a classifier, such as a softmax classifier, on all of the extracted features, but doing so is computationally challenging.
For example, for a 96x96 pixel image, suppose we have learned 400 features over 8x8 inputs. Convolving the image with each feature yields a (96 − 8 + 1) * (96 − 8 + 1) = 7921-dimensional convolved feature map, and with 400 features every example yields a 89 * 89 * 400 = 3,168,400-dimensional convolved feature vector. Training a classifier with over 3 million input features is unwieldy and prone to over-fitting. (By contrast, the fully connected network in the earlier example had an output of only 100 dimensions.)
We can aggregate the convolved features because images have the "stationarity" property described above: a feature that is useful in one image region is very likely to be useful in other regions as well. Hence, to describe a large image, a natural idea is to aggregate statistics of the features at different locations.
For example, one can compute the mean (or maximum) value of a particular feature over a region of the image. These summary statistics have a much lower dimension (compared with using all of the extracted features) and also tend to improve results (less over-fitting). This aggregation operation is called pooling; depending on how the pooled value is computed, it is called mean pooling or max pooling.
[Figure: pooling applied to four non-overlapping regions of an image]
Translation Invariance of Pooling
If the pooling regions are chosen as contiguous areas of the image, and the pooled features come from the same (replicated) hidden units, then the pooling units are translation invariant. This means that even after the image undergoes a small translation, the same (pooled) features are produced.
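A minimal sketch of this effect (the feature map and the shift below are illustrative placeholders; exact invariance of max pooling holds only for shifts that keep a response inside the same pooling cell):

fm = zeros(8, 8); fm(2, 2) = 1; % a feature map with one strong response
fmShift = circshift(fm, [1 0]); % the same response shifted down by one pixel
% max-pool the four non-overlapping 4x4 quadrants
pool = @(m) [max(max(m(1:4,1:4))) max(max(m(1:4,5:8))); ...
max(max(m(5:8,1:4))) max(max(m(5:8,5:8)))];
isequal(pool(fm), pool(fmShift)) % true: the pooled features are unchanged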
Convolution addresses the computational complexity of the unsupervised feature-extraction stage described above, while pooling serves the subsequent supervised training of the feature classifier and also reduces the number of parameters the system needs to learn. In other words, the target's features are extracted with an unsupervised method, and the classifier is trained with a supervised method.
Experiment Steps
1. Initialize the parameters and load the results of the previous exercise, i.e. the color features learned from 100,000 8x8 RGB image patches, and visualize those features.
2. Load 8 images of size 64x64 (used to verify that convolution and pooling are implemented correctly), then implement the convolution function cnnConvolve.m and check that it is correct.
3. Implement the pooling function cnnPool.m and check that it is correct.
4. Load 2000 64x64 RGB images; use the convolution function to extract the convolved features convolvedFeaturesThis, then use the pooling function to extract from convolvedFeaturesThis the pooled features pooledFeaturesTrain, which serve as the training set for the softmax classifier. Load 3200 64x64 RGB images and, in the same way, extract the pooled features pooledFeaturesTest, which serve as the test set for the softmax classifier.
5. Train the softmax classifier on the training set pooledFeaturesTrain and its labels, obtaining the model parameters softmaxModel.
6. Use the trained model softmaxModel to classify the test set pooledFeaturesTest, i.e. obtain the classification results for the 3200 64x64 RGB images.
cnnExercise.m
%% CS294A/CS294W Convolutional Neural Networks Exercise
% Instructions
% ------------
%
% This file contains code that helps you get started on the
% convolutional neural networks exercise. In this exercise, you will only
% need to modify cnnConvolve.m and cnnPool.m. You will not need to modify
% this file.
%%======================================================================
%% STEP 0: Initialization
% Here we initialize some parameters used for the exercise.
imageDim = 64; % image dimension
imageChannels = 3; % number of channels (rgb, so 3)
patchDim = 8; % patch dimension
numPatches = 50000; % number of patches
visibleSize = patchDim * patchDim * imageChannels; % number of input units
outputSize = visibleSize; % number of output units
hiddenSize = 400; % number of hidden units
epsilon = 0.1; % epsilon for ZCA whitening
poolDim = 19; % dimension of pooling region
%%======================================================================
%% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn
% features from color patches. If you have completed the linear decoder
% exercise, use the features that you have obtained from that exercise,
% loading them into optTheta. Recall that we have to keep around the
% parameters used in whitening (i.e., the ZCA whitening matrix and the
% meanPatch)
% --------------------------- YOUR CODE HERE --------------------------
% Train the sparse autoencoder and fill the following variables with
% the optimal parameters:
optTheta = zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1);
ZCAWhite = zeros(visibleSize, visibleSize);
meanPatch = zeros(visibleSize, 1);
load STL10Features.mat; % load optTheta, ZCAWhite and meanPatch from the linear decoder exercise
% --------------------------------------------------------------------
% Display and check to see that the features look good
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
displayColorNetwork( (W*ZCAWhite)');
%%======================================================================
%% STEP 2: Implement and test convolution and pooling
% In this step, you will implement convolution and pooling, and test them
% on a small part of the data set to ensure that you have implemented
% these two functions correctly. In the next step, you will actually
% convolve and pool the features with the STL10 images.
%% STEP 2a: Implement convolution
% Implement convolution in the function cnnConvolve in cnnConvolve.m
% Note that we have to preprocess the images in the exact same way
% we preprocessed the patches before we can obtain the feature activations.
load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
%% Use only the first 8 images for testing
convImages = trainImages(:, :, :, 1:8);
% NOTE: Implement cnnConvolve in cnnConvolve.m first! (W and b are already in matrix/vector form here.)
convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);
%% STEP 2b: Checking your convolution
% To ensure that you have convolved the features correctly, we have
% provided some code to compare the results of your convolution with
% activations from the sparse autoencoder
% For 1000 random points
for i = 1:1000
featureNum = randi([1, hiddenSize]); % pick a random feature
imageNum = randi([1, 8]); % pick a random image
imageRow = randi([1, imageDim - patchDim + 1]); % pick a random top-left row
imageCol = randi([1, imageDim - patchDim + 1]); % ...and column
% extract the patch at the chosen top-left corner of the chosen image
patch = convImages(imageRow:imageRow + patchDim - 1, imageCol:imageCol + patchDim - 1, :, imageNum);
patch = patch(:); % vectorize in column-major order
patch = patch - meanPatch;
patch = ZCAWhite * patch; % whiten the patch with the same preprocessing parameters
features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch); % the autoencoder's hidden activations for this patch
if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
fprintf('Convolved feature does not match activation from autoencoder\n');
fprintf('Feature Number : %d\n', featureNum);
fprintf('Image Number : %d\n', imageNum);
fprintf('Image Row : %d\n', imageRow);
fprintf('Image Column : %d\n', imageCol);
fprintf('Convolved feature : %0.5f\n', convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
fprintf('Sparse AE feature : %0.5f\n', features(featureNum, 1));
error('Convolved feature does not match activation from autoencoder');
end
end
disp('Congratulations! Your convolution code passed the test.');
%% STEP 2c: Implement pooling
% Implement pooling in the function cnnPool in cnnPool.m
% NOTE: Implement cnnPool in cnnPool.m first!
pooledFeatures = cnnPool(poolDim, convolvedFeatures);
%% STEP 2d: Checking your pooling
% To ensure that you have implemented pooling, we will use your pooling
% function to pool over a test matrix and check the results.
% arrange the numbers 1..64 into an 8x8 matrix, increasing down the columns
testMatrix = reshape(1:64, 8, 8);
% compute the expected mean-pooling values directly
expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8))); ];
testMatrix = reshape(testMatrix, 1, 1, 8, 8);
% squeeze drops the singleton dimensions
pooledFeatures = squeeze(cnnPool(4, testMatrix)); % poolDim = 4, i.e. pool over 4x4 regions
if ~isequal(pooledFeatures, expectedMatrix)
disp('Pooling incorrect');
disp('Expected');
disp(expectedMatrix);
disp('Got');
disp(pooledFeatures);
else
disp('Congratulations! Your pooling code passed the test.');
end
%%======================================================================
%% STEP 3: Convolve and pool with the dataset
% In this step, you will convolve each of the features you learned with
% the full large images to obtain the convolved features. You will then
% pool the convolved features to obtain the pooled features for
% classification.
%
% Because the convolved features matrix is very large, we will do the
% convolution and pooling 50 features at a time to avoid running out of
% memory. Reduce this number if necessary
stepSize = 50;
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize'); % hiddenSize/stepSize must be an integer; the work is split into 8 passes here
load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
load stlTestSubset.mat % loads numTestImages, testImages, testLabels
pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ... % imageDim is the large-image size, 64 here
floor((imageDim - patchDim + 1) / poolDim), ... % poolDim is the pooling region size, 19 here (pool over 19x19 blocks)
floor((imageDim - patchDim + 1) / poolDim) ); % floor((64-8+1)/19) = 3, so pooledFeaturesTrain is 400 x 2000 x 3 x 3
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
floor((imageDim - patchDim + 1) / poolDim), ...
floor((imageDim - patchDim + 1) / poolDim) ); % pooledFeaturesTest is 400 x 3200 x 3 x 3
tic();
for convPart = 1:(hiddenSize / stepSize) % process stepSize hidden units (features) per pass to limit memory use
featureStart = (convPart - 1) * stepSize + 1; % first feature in this batch
featureEnd = convPart * stepSize; % last feature in this batch
fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);
Wt = W(featureStart:featureEnd, :);
bt = b(featureStart:featureEnd);
fprintf('Convolving and pooling train images\n');
convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ... % 2nd argument = number of features in this batch
trainImages, Wt, bt, ZCAWhite, meanPatch);
pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
toc();
clear convolvedFeaturesThis pooledFeaturesThis; % free these large intermediates before processing the test images
fprintf('Convolving and pooling test images\n');
convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
testImages, Wt, bt, ZCAWhite, meanPatch);
pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
toc();
clear convolvedFeaturesThis pooledFeaturesThis;
end
% You might want to save the pooled features since convolution and pooling takes a long time
save('cnnPooledFeatures.mat', 'pooledFeaturesTrain', 'pooledFeaturesTest');
toc();
%%======================================================================
%% STEP 4: Use pooled features for classification
% Now, you will use your pooled features to train a softmax classifier,
% using softmaxTrain from the softmax exercise.
% Training the softmax classifer for 1000 iterations should take less than
% 10 minutes.
% Add the path to your softmax solution, if necessary
% addpath /path/to/solution/
% Setup parameters for softmax
softmaxLambda = 1e-4; % weight decay parameter
numClasses = 4;
% Reshape the pooledFeatures to form an input vector for softmax
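% pooledFeaturesTrain is hiddenSize x numImages x poolRows x poolCols (400 x 2000 x 3 x 3 here);
% moving the image index to the last dimension lets the column-major reshape stack the
% 400*3*3 = 3600 pooled values of each image into a single column.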
softmaxX = permute(pooledFeaturesTrain, [1 3 4 2]); % move the image index to the last dimension
softmaxX = reshape(softmaxX, numel(pooledFeaturesTrain) / numTrainImages,...
numTrainImages); % one column per image, of the per-image feature-vector length
softmaxY = trainLabels;
options = struct;
options.maxIter = 200;
softmaxModel = softmaxTrain(numel(pooledFeaturesTrain) / numTrainImages,... % 1st argument is inputSize
numClasses, softmaxLambda, softmaxX, softmaxY, options);
%%======================================================================
%% STEP 5: Test classifier
% Now you will test your trained classifier against the test images
softmaxX = permute(pooledFeaturesTest, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTest) / numTestImages, numTestImages);
softmaxY = testLabels;
[pred] = softmaxPredict(softmaxModel, softmaxX);
acc = (pred(:) == softmaxY(:));
acc = sum(acc) / size(acc, 1);
fprintf('Accuracy: %2.3f%%\n', acc * 100); % print prediction accuracy as a percentage
% You should expect to get an accuracy of around 80% on the test images.
cnnConvolve.m
function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
% patchDim - patch (feature) dimension
% numFeatures - number of features
% images - large images to convolve with, matrix in the form
% images(r, c, channel, image number)
% W, b - W, b for features from the sparse autoencoder
% ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
% preprocessing
%
% Returns:
% convolvedFeatures - matrix of convolved features in the form
% convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
patchSize = patchDim*patchDim;
assert(numFeatures == size(W,1), 'W should have numFeatures rows');
numImages = size(images, 4); % size of dim 4 = number of images
imageDim = size(images, 1); % size of dim 1 = image rows (images are square)
imageChannels = size(images, 3); % size of dim 3 = number of channels
assert(patchSize*imageChannels == size(W,2), 'W should have patchSize*imageChannels cols');
% Instructions:
% Convolve every feature with every large image here to produce the
% numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1)
% matrix convolvedFeatures, such that
% convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
% value of the convolved featureNum feature for the imageNum image over
% the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
%
% Expected running times:
% Convolving with 100 images should take less than 3 minutes
% Convolving with 5000 images should take around an hour
% (So to save time when testing, you should convolve with fewer images, as
% described earlier)
% -------------------- YOUR CODE HERE --------------------
% Precompute the matrices that will be used during the convolution. Recall
% that you need to take into account the whitening and mean subtraction
% steps
WT = W*ZCAWhite; % equivalent network weights with the whitening folded in
b_mean = b - WT*meanPatch; % bias correction for inputs that are not mean-subtracted
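% Why this works: for a whitened, mean-subtracted patch x the autoencoder computes
%   a = sigmoid(W * (ZCAWhite * (x - meanPatch)) + b)
%     = sigmoid((W * ZCAWhite) * x + (b - W * ZCAWhite * meanPatch))
%     = sigmoid(WT * x + b_mean),
% so convolving the raw image with WT and adding b_mean reproduces those activations.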
% --------------------------------------------------------
convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);
for imageNum = 1:numImages
for featureNum = 1:numFeatures
% convolution of image with feature matrix for each channel
convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
for channel = 1:imageChannels
% Obtain the feature (patchDim x patchDim) needed during the convolution
% ---- YOUR CODE HERE ----
offset = (channel-1)*patchSize;
feature = reshape(WT(featureNum,offset+1:offset+patchSize), patchDim, patchDim); % this feature's weights for this channel
% Flip the feature matrix because of the definition of convolution, as explained later
feature = flipud(fliplr(squeeze(feature)));
% Obtain the image
im = squeeze(images(:, :, channel, imageNum)); % one channel of one image
% Convolve "feature" with "im", adding the result to convolvedImage
% be sure to do a 'valid' convolution
% ---- YOUR CODE HERE ----
convolvedoneChannel = conv2(im, feature, 'valid');
convolvedImage = convolvedImage + convolvedoneChannel; % sum over the 3 channels: they act like 3 input feature maps, as in later CNN layers
% ------------------------
end
% Subtract the bias unit (correcting for the mean subtraction as well)
% Then, apply the sigmoid function to get the hidden activation
% ---- YOUR CODE HERE ----
convolvedImage = sigmoid(convolvedImage+b_mean(featureNum));
% ------------------------
% The convolved feature is the sum of the convolved values for all channels
convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
end
end
end
function sigm = sigmoid(x)
sigm = 1./(1+exp(-x));
end
cnnPool.m
function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%cnnPool Pools the given convolved features
%
% Parameters:
% poolDim - dimension of pooling region
% convolvedFeatures - convolved features to pool (as given by cnnConvolve)
% convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
%
% Returns:
% pooledFeatures - matrix of pooled features in the form
% pooledFeatures(featureNum, imageNum, poolRow, poolCol)
%
numImages = size(convolvedFeatures, 2); % number of images
numFeatures = size(convolvedFeatures, 1); % number of features
convolvedDim = size(convolvedFeatures, 3); % rows/cols of each convolved feature map
resultDim = floor(convolvedDim / poolDim);
pooledFeatures = zeros(numFeatures, numImages, resultDim, resultDim);
% -------------------- YOUR CODE HERE --------------------
% Instructions:
% Now pool the convolved features in regions of poolDim x poolDim,
% to obtain the
% numFeatures x numImages x (convolvedDim/poolDim) x (convolvedDim/poolDim)
% matrix pooledFeatures, such that
% pooledFeatures(featureNum, imageNum, poolRow, poolCol) is the
% value of the featureNum feature for the imageNum image pooled over the
% corresponding (poolRow, poolCol) pooling region
% (see http://ufldl/wiki/index.php/Pooling )
%
% Use mean pooling here.
% -------------------- YOUR CODE HERE --------------------
for imageNum = 1:numImages
for featureNum = 1:numFeatures
for poolRow = 1:resultDim
offsetRow = 1+(poolRow-1)*poolDim;
for poolCol = 1:resultDim
offsetCol = 1+(poolCol-1)*poolDim;
patch = convolvedFeatures(featureNum,imageNum,offsetRow:offsetRow+poolDim-1,...
offsetCol:offsetCol+poolDim-1); % extract one poolDim x poolDim pooling region
pooledFeatures(featureNum,imageNum,poolRow,poolCol) = mean(patch(:)); % mean pooling
end
end
end
end
end
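As an aside (not part of the exercise), the quadruple loop above can be replaced, per feature map, by a single 'valid' convolution with an averaging kernel followed by subsampling. A minimal sketch, using a placeholder feature map with this exercise's dimensions:

% Vectorized mean pooling of one feature map (sketch; fm is a placeholder)
fm = rand(57, 57); % one convolvedDim x convolvedDim feature map (57 = 64 - 8 + 1)
poolDim = 19;
avg = conv2(fm, ones(poolDim)/poolDim^2, 'valid'); % mean over every 19x19 window
pooled = avg(1:poolDim:end, 1:poolDim:end); % keep non-overlapping windows -> 3x3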
References
Feature extraction using convolution (UFLDL tutorial)
Pooling (UFLDL tutorial)
Deep learning: 17 (Linear Decoders, Convolution and Pooling)
Deep learning: 23 (Convolution and Pooling exercise)
Andrew Ng's open course lectures