DeepLearningToolBox notes — SAE (stacked auto encoders)

Download: DeepLearningToolBox

The structure of an SAE (stacked auto encoders) is as follows:

[Figure 1: SAE structure]

The basic idea is a neural network with a single hidden layer whose target output equals its input x, so the training is unsupervised.
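To make this concrete, here is a minimal NumPy sketch (a Python re-implementation for illustration, not toolbox code) of a single-hidden-layer autoencoder's forward pass; the reconstruction target is x itself, and all names and sizes are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_vis, n_hid = 784, 100                       # mirrors saesetup([784 100])

# Small random weights with a bias column, analogous to nnsetup's W matrices.
W1 = rng.normal(0, 0.1, (n_hid, n_vis + 1))   # encoder: 784 -> 100
W2 = rng.normal(0, 0.1, (n_vis, n_hid + 1))   # decoder: 100 -> 784

x = rng.random((5, n_vis))                    # 5 dummy "images"

def forward(x):
    a1 = np.hstack([np.ones((x.shape[0], 1)), x])   # prepend bias unit
    h = sigmoid(a1 @ W1.T)                          # hidden activations
    a2 = np.hstack([np.ones((h.shape[0], 1)), h])
    x_hat = sigmoid(a2 @ W2.T)                      # reconstruction of x
    return h, x_hat

h, x_hat = forward(x)
# Unsupervised: the "label" is x itself, so the loss is reconstruction error.
loss = np.mean((x_hat - x) ** 2)
```

Because input and target are the same array, no labels are needed anywhere in this step.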

test_example_SAE

%%  ex1 train a 100 hidden unit SDAE and use it to initialize a FFNN
%  Setup and train a stacked denoising autoencoder (SDAE)
rand('state',0)
sae = saesetup([784 100]);
sae.ae{1}.activation_function       = 'sigm';
sae.ae{1}.learningRate              = 1;
sae.ae{1}.inputZeroMaskedFraction   = 0.5;
opts.numepochs =   1;
opts.batchsize = 100;
sae = saetrain(sae, train_x, opts);
visualize(sae.ae{1}.W{1}(:,2:end)')
% Use the SDAE to initialize a FFNN
nn = nnsetup([784 100 10]);
nn.activation_function              = 'sigm';
nn.learningRate                     = 1;
nn.W{1} = sae.ae{1}.W{1};

This trains an SDAE with 100 hidden units, visualizes the learned weights, and uses them to initialize a feed-forward neural network (FFNN).

Weight visualization result:
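The `inputZeroMaskedFraction = 0.5` setting above is what makes this a *denoising* autoencoder: each input unit is zeroed with probability 0.5 before the forward pass, while the reconstruction target stays the clean x. A hedged NumPy sketch of just that corruption step (illustrative names, not toolbox code):

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_mask(x, fraction):
    """Zero each entry of x independently with the given probability."""
    keep = rng.random(x.shape) >= fraction
    return x * keep

x = np.ones((100, 784))              # stand-in for a clean mini-batch
x_noisy = zero_mask(x, 0.5)          # corrupted input fed to the encoder
zero_rate = 1.0 - x_noisy.mean()     # roughly half the entries are zeroed
```

Training on `(x_noisy, x)` pairs forces the hidden units to learn features robust to missing inputs, rather than copying x through.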

saesetup

function sae = saesetup(size)
    % Each pair of consecutive layer sizes becomes one autoencoder
    % whose output layer mirrors its input layer.
    for u = 2 : numel(size)
        sae.ae{u-1} = nnsetup([size(u-1) size(u) size(u-1)]);
    end
end

Each layer of the SAE is an autoencoder trained as an ordinary neural network (via nnsetup/nntrain), unlike the RBM-based pre-training used earlier for DBNs.
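The shape bookkeeping in saesetup can be sketched in Python (an illustrative helper, not toolbox code): for layer sizes [n0, n1, ..., nk] it produces one mirrored (input, hidden, output) autoencoder per consecutive pair.

```python
def sae_layer_shapes(sizes):
    """For sizes [n0, n1, ..., nk], return the (input, hidden, output)
    architecture of each autoencoder, mirroring saesetup's loop."""
    return [(sizes[u - 1], sizes[u], sizes[u - 1]) for u in range(1, len(sizes))]

shapes = sae_layer_shapes([784, 100, 50])
# [(784, 100, 784), (100, 50, 100)]
```

So a two-hidden-layer SAE over MNIST-sized input would pre-train a 784-100-784 autoencoder and then a 100-50-100 one.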

saetrain

function sae = saetrain(sae, x, opts)
    for i = 1 : numel(sae.ae)
        disp(['Training AE ' num2str(i) '/' num2str(numel(sae.ae))]);
        sae.ae{i} = nntrain(sae.ae{i}, x, x, opts);  % input = target = x
        t = nnff(sae.ae{i}, x, x);                   % forward pass to get activations
        x = t.a{2};                                  % hidden activations feed the next AE
        %remove bias term
        x = x(:,2:end);
    end
end

Each autoencoder is trained with nntrain; its hidden-layer activations (with the bias column removed) then become the training input for the next autoencoder.
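This greedy, layer-wise data flow can be sketched in NumPy, with untrained random weights standing in for what nntrain would learn (all names illustrative; only the shapes of the pipeline are being shown):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
sizes = [784, 100, 50]               # visible, first hidden, second hidden
x = rng.random((10, sizes[0]))       # 10 dummy samples

for n_in, n_hid in zip(sizes[:-1], sizes[1:]):
    # In the toolbox this weight matrix would come from nntrain; here it is
    # random because only the data flow between layers is illustrated.
    W = rng.normal(0, 0.1, (n_hid, n_in + 1))
    a = np.hstack([np.ones((x.shape[0], 1)), x])   # bias column, as in nnff
    x = sigmoid(a @ W.T)             # hidden activations become the next AE's input
```

After the loop, x holds the top-layer representation of the original samples.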

Finally, fine-tune the initialized network with supervised training:
% Train the FFNN
opts.numepochs =   1;
opts.batchsize = 100;
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.16, 'Too big error');

The final test error is around 0.2.
