6-3 Sparse Autoencoder -- Keras Implementation

Click here to view the full code: http://www.daimapi.com/neuralnetwork6_3/

      This code is written for Python 3 and uses the deep learning toolkit Keras.

      From the principles of the autoencoder discussed earlier, we can see that an autoencoder directly learns a hidden-layer representation of the input data; a VAE, however, does not work this way.

      Instead, the VAE assumes that the dataset D of observed variables is governed by a set of latent variables z: the distribution of the data is entirely determined by these latent variables, which are mutually independent and follow a Gaussian distribution.
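      Written out explicitly (this is the standard formulation of that assumption, not notation taken from the original article), the generative model is

$$p(z) = \mathcal{N}(0, I), \qquad p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, dz,$$

      and the VAE trains an encoder that approximates the posterior $q_\phi(z \mid x)$ jointly with a decoder $p_\theta(x \mid z)$.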

# -*- coding: utf-8 -*-
from __future__ import print_function

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras import backend as K
from keras import metrics
from keras.datasets import mnist
from keras.utils import to_categorical

batch_size = 100
original_dim = 784
latent_dim = 2  # the latent variable is 2-dimensional only to make plotting easier later
intermediate_dim = 256
epochs = 10
epsilon_std = 1.0
num_classes = 10

# Load the MNIST dataset
(x_train, y_train_), (x_test, y_test_) = mnist.load_data()
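# Scale pixel values to [0, 1] and flatten each 28x28 image into a 784-dimensional vector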
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
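# One-hot encode the digit labels (10 classes)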
y_train = to_categorical(y_train_, num_classes)
y_test = to_categorical(y_test_, num_classes)
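
      The listing above stops after data preparation; the complete model definition is available at the link near the top. As a rough sketch of what a standard Keras VAE built on these variables typically looks like (it follows the well-known Keras VAE example; names such as z_mean, z_log_var and the sampling helper are assumptions and may differ from the linked code):

# --- Sketch only: encoder, reparameterization trick, decoder and loss ---
# (assumed structure based on the standard Keras VAE example, not the linked code)
x = Input(shape=(original_dim,))
h = Dense(intermediate_dim, activation='relu')(x)
z_mean = Dense(latent_dim)(h)        # mean of q(z|x)
z_log_var = Dense(latent_dim)(h)     # log-variance of q(z|x)

def sampling(args):
    # Reparameterization trick: z = mu + sigma * epsilon, with epsilon ~ N(0, I)
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                              mean=0., stddev=epsilon_std)
    return z_mean + K.exp(z_log_var / 2) * epsilon

z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# Decoder maps the latent code back to a 784-dimensional reconstruction
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='sigmoid')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)

vae = Model(x, x_decoded_mean)

# Loss = reconstruction term + KL divergence between q(z|x) and N(0, I)
xent_loss = original_dim * metrics.binary_crossentropy(x, x_decoded_mean)
kl_loss = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
vae.add_loss(K.mean(xent_loss + kl_loss))
vae.compile(optimizer='rmsprop')

vae.fit(x_train, shuffle=True, epochs=epochs, batch_size=batch_size,
        validation_data=(x_test, None))

      After training, the 2-dimensional latent_dim makes it straightforward to scatter-plot the test set in latent space, which is why the labels y_test_ are kept around.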
