Running Examples with the Keras Deep Learning Library

Since I need an LSTM, I started with the LSTM example, imdb_lstm.py.


1. After downloading Keras from the official site, I ran imdb_lstm.py directly. It kept failing with a download error. Opening the script, I saw that the data is fetched through load_data, but the dataset could not be downloaded online, so the example would not run.

print("Loading data...")
(X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=max_features, test_split=0.2)

Fix: change the path in imdb.py (the dataset loader), as shown below, and point it directly at a local copy of the file.

  #  path = get_file(path, origin="https://s3.amazonaws.com/text-datasets/imdb.pkl")
    path = "E:\\project\\deep learning\\RNN\\eeg rnn\\theano code\\keras-master\\imdb.pkl"

Run it again and the example goes through.
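As an alternative to editing imdb.py, the downloaded imdb.pkl can also be loaded and split by hand. A minimal sketch, assuming imdb.pkl is a pickled (X, labels) pair of word-index lists and 0/1 sentiment labels, which is what load_data() in this Keras version appears to unpickle:

import cPickle  # Python 2 / Theano setup, as used in this post

path = "E:\\project\\deep learning\\RNN\\eeg rnn\\theano code\\keras-master\\imdb.pkl"
with open(path, "rb") as f:
    X, labels = cPickle.load(f)

# keep only the top max_features most frequent word indices (mirrors nb_words)
max_features = 20000
X = [[w for w in seq if w < max_features] for seq in X]

# 80/20 train/test split (mirrors test_split=0.2)
split = int(len(X) * 0.8)
X_train, y_train = X[:split], labels[:split]
X_test, y_test = X[split:], labels[split:]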


2. Below is the example documentation from the official site, which explains things very clearly. Only after reading it did I really appreciate that this library is easy to use and very simple. It is just a bit slow.

Official page: http://keras.io/examples/

Here are a few examples to get you started!

Multilayer Perceptron (MLP):

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD

model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(Dense(64, input_dim=20, init='uniform'))   # fully-connected layer with 64 units
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))         # the last fully-connected layer uses softmax as its activation

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)  # stochastic gradient descent; nesterov=True enables Nesterov momentum (see the note after this example)
model.compile(loss='mean_squared_error', optimizer=sgd)

model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
score = model.evaluate(X_test, y_test, batch_size=16)
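About the nesterov=True flag above: Nesterov momentum evaluates the gradient at a "look-ahead" point (current parameters plus the momentum step) instead of at the current parameters, which usually gives smoother convergence than classical momentum. A rough NumPy sketch of the update rule on a toy quadratic objective (not the actual Keras internals):

import numpy as np

def grad(w):
    # toy objective: L(w) = 0.5 * ||w||^2, so the gradient is just w
    return w

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
lr, momentum = 0.1, 0.9

for _ in range(100):
    # Nesterov momentum: gradient is taken at the look-ahead point w + momentum * v
    # (classical momentum would take it at w instead)
    v = momentum * v - lr * grad(w + momentum * v)
    w = w + v

print(w)  # approaches the minimum at [0, 0]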

Alternative implementation of MLP:

model = Sequential()
model.add(Dense(64, input_dim=20, init='uniform', activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform', activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform', activation='softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)   
model.compile(loss='mean_squared_error', optimizer=sgd)
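To actually fit and evaluate either MLP, any (nb_samples, 20) float matrix will do as input; the only constraint is that the targets must be one-hot encoded to match the Dense(2) + softmax output. A hypothetical random dataset, just to exercise the model above:

import numpy as np
from keras.utils import np_utils

# hypothetical dummy data: 1000 random 20-dimensional training samples,
# 200 test samples, binary labels one-hot encoded for the 2-way softmax
X_train = np.random.random((1000, 20))
y_train = np_utils.to_categorical(np.random.randint(2, size=1000), 2)
X_test = np.random.random((200, 20))
y_test = np_utils.to_categorical(np.random.randint(2, size=200), 2)

model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
score = model.evaluate(X_test, y_test, batch_size=16)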

VGG-like convnet:

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD

model = Sequential()
# input: 100x100 images with 3 channels -> (3, 100, 100) tensors.
# this applies 32 convolution filters of size 3x3 each.
model.add(Convolution2D(32, 3, 3, border_mode='full', input_shape=(3, 100, 100)))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Convolution2D(64, 3, 3, border_mode='valid'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
# Note: Keras does automatic shape inference.
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))

model.add(Dense(10))
model.add(Activation('softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)

model.fit(X_train, Y_train, batch_size=32, nb_epoch=1)
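For reference, border_mode controls the spatial output size of each convolution: 'valid' (the default) applies the filter only where it fully fits, so the output shrinks by kernel - 1, while 'full' pads so that every partial overlap is kept, growing the output by kernel - 1. Tracing the 100x100 input through the stack above:

# spatial size after each layer of the convnet above (3x3 filters, 2x2 pooling)
size = 100
size = size + 3 - 1   # Convolution2D(32, 3, 3, border_mode='full')   -> 102
size = size - 3 + 1   # Convolution2D(32, 3, 3), 'valid' by default   -> 100
size = size // 2      # MaxPooling2D((2, 2))                           -> 50
size = size - 3 + 1   # Convolution2D(64, 3, 3, border_mode='valid')   -> 48
size = size - 3 + 1   # Convolution2D(64, 3, 3)                        -> 46
size = size // 2      # MaxPooling2D((2, 2))                           -> 23
print(size)  # 23, so Flatten() feeds 64 * 23 * 23 features into Dense(256)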

Sequence classification with LSTM:

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM

model = Sequential()
model.add(Embedding(max_features, 256, input_length=maxlen))
model.add(LSTM(output_dim=128, activation='sigmoid', inner_activation='hard_sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='rmsprop')

model.fit(X_train, Y_train, batch_size=16, nb_epoch=10)
score = model.evaluate(X_test, Y_test, batch_size=16)
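The X_train / X_test fed to this model must be integer matrices of shape (nb_samples, maxlen); variable-length word-index lists (such as those returned by imdb.load_data above) can be padded or truncated with pad_sequences. A minimal sketch, assuming maxlen and max_features match the Embedding layer:

from keras.preprocessing import sequence

maxlen = 100          # pad / cut every review to 100 word indices
max_features = 20000  # vocabulary size expected by the Embedding layer

X_train = sequence.pad_sequences(X_train, maxlen=maxlen)
X_test = sequence.pad_sequences(X_test, maxlen=maxlen)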

Architecture for learning image captions with a convnet and a Gated Recurrent Unit:

(word-level embedding, caption of maximum length 16 words).

Note that getting this to work well will require using a bigger convnet, initialized with pre-trained weights.

max_caption_len = 16
vocab_size = 10000

# first, let's define an image model that
# will encode pictures into 128-dimensional vectors.
# it should be initialized with pre-trained weights.
image_model = Sequential()
image_model.add(Convolution2D(32, 3, 3, border_mode='full', input_shape=(3, 100, 100)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(32, 3, 3))
image_model.add(Activation('relu'))
image_model.add(MaxPooling2D(pool_size=(2, 2)))

image_model.add(Convolution2D(64, 3, 3, border_mode='full'))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64, 3, 3))
image_model.add(Activation('relu'))
image_model.add(MaxPooling2D(pool_size=(2, 2)))

image_model.add(Flatten())
image_model.add(Dense(128))

# let's load the weights from a save file.
image_model.load_weights('weight_file.h5')

# next, let's define a RNN model that encodes sequences of words
# into sequences of 128-dimensional word vectors.
language_model = Sequential()
language_model.add(Embedding(vocab_size, 256, input_length=max_caption_len))
language_model.add(GRU(output_dim=128, return_sequences=True))
language_model.add(Dense(128))

# let's repeat the image vector to turn it into a sequence.
image_model.add(RepeatVector(max_caption_len))

# the output of both models will be tensors of shape (samples, max_caption_len, 128).
# let's concatenate these 2 vector sequences.
model = Sequential()
model.add(Merge([image_model, language_model], mode='concat', concat_axis=-1))
# let's encode this vector sequence into a single vector
model.add(GRU(256, 256, return_sequences=False))
# which will be used to compute a probability
# distribution over what the next word in the caption should be!
model.add(Dense(vocab_size))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

# "images" is a numpy float array of shape (nb_samples, nb_channels=3, width, height).
# "captions" is a numpy integer array of shape (nb_samples, max_caption_len)
# containing word index sequences representing partial captions.
# "next_words" is a numpy float array of shape (nb_samples, vocab_size)
# containing a categorical encoding (0s and 1s) of the next word in the corresponding
# partial caption.
model.fit([images, partial_captions], next_words, batch_size=16, nb_epoch=100)
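The example does not show how partial_captions and next_words are built. One plausible preprocessing step (a hypothetical helper, not part of the Keras docs) is to expand every full caption into all of its prefixes, each paired with the word that follows it:

import numpy as np

def make_caption_pairs(images, captions, max_caption_len, vocab_size):
    # hypothetical helper: expand each (image, full caption) pair into
    # (image, partial caption, next word) training triples
    imgs, partials, nexts = [], [], []
    for img, cap in zip(images, captions):       # cap: list of word indices
        for t in range(1, len(cap)):
            prefix = cap[:t]
            # left-pad / truncate the prefix to max_caption_len (0 = padding index)
            padded = [0] * (max_caption_len - len(prefix)) + prefix[-max_caption_len:]
            one_hot = np.zeros(vocab_size)
            one_hot[cap[t]] = 1.0                # categorical encoding of the next word
            imgs.append(img)
            partials.append(padded)
            nexts.append(one_hot)
    return np.array(imgs), np.array(partials), np.array(nexts)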

In the examples folder, you will find example models for real datasets: - CIFAR10 small images classification: Convolutional Neural Network (CNN) with realtime data augmentation - IMDB movie review sentiment classification: LSTM over sequences of words - Reuters newswires topic classification: Multilayer Perceptron (MLP) - MNIST handwritten digits classification: MLP & CNN - Character-level text generation with LSTM

...and more.

3. I came across a blog post online that annotates the LSTM layer. The annotations were probably written against an older version, but the parameter descriptions are still worth consulting.

Reference blog: http://www.jianshu.com/p/3992fe7bb847

Keras Recurrent Layers Explained


GRU

keras.layers.recurrent.GRU(input_dim, output_dim=128, init='glorot_uniform', inner_init='orthogonal', activation='sigmoid', inner_activation='hard_sigmoid', weights=None, truncate_gradient=-1, return_sequences=False)

Gated Recurrent Unit - Cho et al. 2014.

  • Input shape: 3D tensor: (nb_samples, timesteps, input_dim).
  • Output shape:
    • if return_sequences: 3D tensor of shape (nb_samples, timesteps, output_dim).
    • else: 2D tensor of shape (nb_samples, output_dim).
  • Arguments:
    • input_dim: dimensionality of the input
    • output_dim: dimensionality of the internal projections and of the final output
    • init: weight initialization function. Can be the name of an existing function (str) or a Theano function (see: initializations)
    • inner_init: initialization function for the weights of the inner cells
    • activation: activation function. Can be the name of an existing function (str) or a Theano function (see: activations)
    • inner_activation: activation function for the inner cells
    • weights: list of numpy arrays used to set the initial weights. The list should have 9 elements
    • truncate_gradient: number of time steps at which to truncate BPTT. See: Theano scan
    • return_sequences: Boolean. Whether to return only the last output in the output sequence, or the full sequence.
  • References:
    • On the Properties of Neural Machine Translation: Encoder–Decoder Approaches
    • Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling

LSTM

keras.layers.recurrent.LSTM(input_dim, output_dim=128, init='glorot_uniform', inner_init='orthogonal', forget_bias_init='one', activation='tanh', inner_activation='hard_sigmoid', weights=None, truncate_gradient=-1, return_sequences=False)

Long Short-Term Memory unit - Hochreiter et al. 1997

  • Input shape: 3D tensor: (nb_samples, timesteps, input_dim).
  • Output shape:
    • if return_sequences: 3D tensor of shape (nb_samples, timesteps, output_dim).
    • else: 2D tensor of shape (nb_samples, output_dim).
  • Arguments:
    • input_dim: dimensionality of the input
    • output_dim: dimensionality of the internal projections and of the final output
    • init: weight initialization function. Can be the name of an existing function (str) or a Theano function (see: initializations)
    • inner_init: initialization function for the weights of the inner cells
    • forget_bias_init: initialization function for the bias of the forget gate. Jozefowicz et al. recommend initializing it with ones
    • activation: activation function. Can be the name of an existing function (str) or a Theano function (see: activations)
    • inner_activation: activation function for the inner cells
    • weights: list of numpy arrays used to set the initial weights. The list should have 12 elements (W, U and b for each of the four gates)
    • truncate_gradient: number of time steps at which to truncate BPTT. See: Theano scan
    • return_sequences: Boolean. Whether to return only the last output in the output sequence, or the full sequence (see the sketch below).
  • References:
    • Long short-term memory
    • Learning to forget: Continual prediction with LSTM
    • Supervised sequence labelling with recurrent neural networks
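A minimal sketch of how return_sequences behaves, using the layer signature documented above (this older API takes input_dim / output_dim explicitly; newer versions infer the input shape):

from keras.models import Sequential
from keras.layers.recurrent import LSTM

model = Sequential()
# first LSTM returns the whole sequence: (nb_samples, timesteps, 64)
model.add(LSTM(input_dim=16, output_dim=64, return_sequences=True))
# second LSTM returns only the last output: (nb_samples, 32)
model.add(LSTM(input_dim=64, output_dim=32, return_sequences=False))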

JZS1, JZS2, JZS3

keras.layers.recurrent.JZS1(input_dim, output_dim=128, init='glorot_uniform', inner_init='orthogonal', activation='tanh', inner_activation='sigmoid', weights=None, truncate_gradient=-1, return_sequences=False)

A fully connected RNN whose output is fed back into the input. Not especially useful; included for reference only.

  • Input shape: 3D tensor: (nb_samples, timesteps, input_dim).
  • Output shape:
    • if return_sequences: 3D tensor of shape (nb_samples, timesteps, output_dim).
    • else: 2D tensor of shape (nb_samples, output_dim).
  • Arguments:
    • input_dim
    • output_dim
    • init: weight initialization function. Can be the name of an existing function (str) or a Theano function (see: initializations)
    • inner_init: initialization function for the inner cells
    • activation: activation function. Can be the name of an existing function (str) or a Theano function (see: activations)
    • weights: list of numpy arrays used to set the initial weights. The list should have 3 elements, of shapes: [(input_dim, output_dim), (output_dim, output_dim), (output_dim,)]
    • truncate_gradient: number of time steps at which to truncate BPTT. See: Theano scan
    • return_sequences: Boolean. Whether to return only the last output in the output sequence, or the full sequence.
  • References:
    An Empirical Exploration of Recurrent Network Architectures

SimpleDeepRNN

keras.layers.recurrent.SimpleDeepRNN(input_dim, output_dim, depth=3, init='glorot_uniform', inner_init='orthogonal', activation='sigmoid', inner_activation='hard_sigmoid', weights=None, truncate_gradient=-1, return_sequences=False)

A fully connected RNN where the outputs of several earlier time steps are fed back into the input (the depth parameter controls how many steps).

output = activation( W.x_t + b + inner_activation(U_1.h_tm1) + inner_activation(U_2.h_tm2) + ... )

Also not a commonly used model; included for reference only.
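Written out in NumPy for depth=2, one time step of the recurrence above looks roughly like this (a toy sketch of the formula, not the library implementation):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

input_dim, output_dim = 8, 4
W = np.random.randn(input_dim, output_dim)    # input-to-hidden weights
U1 = np.random.randn(output_dim, output_dim)  # recurrence over h_tm1
U2 = np.random.randn(output_dim, output_dim)  # recurrence over h_tm2
b = np.zeros(output_dim)

x_t = np.random.randn(input_dim)  # input at time t
h_tm1 = np.zeros(output_dim)      # hidden state at t-1
h_tm2 = np.zeros(output_dim)      # hidden state at t-2

# output = activation(W.x_t + b + inner_activation(U_1.h_tm1) + inner_activation(U_2.h_tm2))
h_t = sigmoid(x_t.dot(W) + b + sigmoid(h_tm1.dot(U1)) + sigmoid(h_tm2.dot(U2)))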

  • Input shape: 3D tensor: (nb_samples, timesteps, input_dim).
  • Output shape:
    • if return_sequences: 3D tensor of shape (nb_samples, timesteps, output_dim).
    • else: 2D tensor of shape (nb_samples, output_dim).
  • Arguments:
    • input_dim
    • output_dim
    • init: weight initialization function. Can be the name of an existing function (str) or a Theano function (see: initializations)
    • inner_init: initialization function for the weights of the inner cells
    • activation: activation function. Can be the name of an existing function (str) or a Theano function (see: activations)
    • inner_activation: activation function for the inner cells
    • weights: list of numpy arrays used to set the initial weights. The list should have 3 elements, of shapes: [(input_dim, output_dim), (output_dim, output_dim), (output_dim,)]
    • truncate_gradient: number of time steps at which to truncate BPTT. See: Theano scan
    • return_sequences: Boolean. Whether to return only the last output in the output sequence, or the full sequence.
