Time Series Classification: Handling Variable-Length Sequence Inputs with an LSTM Model

Source: https://datascience.stackexchange.com/questions/48796/how-to-feed-lstm-with-different-input-array-sizes

The easiest way is to use Padding and Masking.

There are three general ways to handle variable-length sequences:

(1) Padding and masking,

(2) Batch size = 1,

(3) Batch size > 1, with equi-length samples in each batch.

Padding and masking

In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10 is the special value, then

X = [
    [[1, 1.1],
     [0.9, 0.95]],   # sequence 1 (2 timestamps)
    [[2, 2.2],
     [1.9, 1.95],
     [1.8, 1.85]],   # sequence 2 (3 timestamps)
]

will be converted to

X2 = [
    [[1, 1.1],
     [0.9, 0.95],
     [-10, -10]],    # padded sequence 1 (3 timestamps)
    [[2, 2.2],
     [1.9, 1.95],
     [1.8, 1.85]],   # sequence 2 (3 timestamps)
]

This way, all sequences have the same length. Then, we use a Masking layer that skips those special timestamps as if they didn't exist. A complete example is given at the end.
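As a side note (not part of the original answer), Keras also ships a pad_sequences utility that can do this padding for you; the exact import path depends on your Keras version (e.g. keras.utils.pad_sequences in newer releases). A minimal sketch, assuming X is the ragged collection of (seq_len, dimension) float sequences from the example above and -10 is the padding value:

from keras.preprocessing.sequence import pad_sequences  # or: from keras.utils import pad_sequences

# X: list/array of (seq_len, dimension) float sequences, as in the example above
X2 = pad_sequences(X, padding='post', value=-10.0, dtype='float32')
# X2.shape == (num_sequences, max_seq_len, dimension); shorter sequences are padded at the end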

Without Padding and masking

For cases (2) and (3), you need to set the sequence-length dimension of the LSTM's input_shape to None, e.g.

model.add(LSTM(units, input_shape=(None, dimension)))

This way, the LSTM accepts batches with different lengths, although the samples inside each batch must all have the same length. Then, you need to feed a custom batch generator to model.fit_generator instead of model.fit (newer Keras versions accept generators in model.fit directly).

I have provided a complete example for the simple case (2) (batch size = 1) at the end. Based on that example, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size sequences with the same length (a hypothetical sketch of such a bucketing generator is given after the snippet below), or (b) select sequences with almost the same length, pad the shorter ones as in case (1), and use a Masking layer before the LSTM layer to ignore the padded timestamps, e.g.

model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))
model.add(LSTM(lstm_units))

where the first dimension of input_shape in Masking is again None, to allow batches with different lengths.
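Here is that hypothetical bucketing generator for option (a). It is a minimal sketch (not from the original answer), assuming X and y are indexable collections like the ones built in the code example below, so that all sequences of equal length can simply be stacked:

import numpy as np
from keras.utils import Sequence

class EqualLengthBatchGenerator(Sequence):
    'Hypothetical generator for case (3a): every batch contains only equal-length sequences'
    def __init__(self, X, y, batch_size=32):
        self.X, self.y, self.batch_size = X, y, batch_size
        # group sample indices by sequence length
        buckets = {}
        for i, x in enumerate(X):
            buckets.setdefault(len(x), []).append(i)
        # split each length bucket into batches of at most batch_size samples
        self.batches = [idxs[k:k + batch_size]
                        for idxs in buckets.values()
                        for k in range(0, len(idxs), batch_size)]

    def __len__(self):
        return len(self.batches)

    def __getitem__(self, index):
        idxs = self.batches[index]
        Xb = np.stack([self.X[i] for i in idxs])  # same length within a batch, so stacking is safe
        yb = np.stack([self.y[i] for i in idxs])
        return Xb, yb

Such a generator can be fed to the same model with input_shape=(None, dimension), since each batch is internally homogeneous while different batches may have different lengths.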

Code Example

Here is the code for cases (1) and (2):

from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np

class MyBatchGenerator(Sequence):
    'Generates data for Keras'
    def __init__(self, X, y, batch_size=1, shuffle=True):
        'Initialization'
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.y) / self.batch_size))

    def __getitem__(self, index):
        # use the (possibly shuffled) index so that shuffle=True actually has an effect
        return self.__data_generation(self.indexes[index])

    def on_epoch_end(self):
        'Shuffles indexes after each epoch'
        self.indexes = np.arange(len(self.y))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, index):
        Xb = np.empty((self.batch_size, *self.X[index].shape))
        yb = np.empty((self.batch_size, *self.y[index].shape))
        # naively use the same sample over and over again
        for s in range(0, self.batch_size):
            Xb[s] = self.X[index]
            yb[s] = self.y[index]
        return Xb, yb

# Parameters
N = 1000
halfN = int(N / 2)
dimension = 2
lstm_units = 3

# Data
np.random.seed(123)  # to generate the same numbers
# create sequence lengths between 1 and 9 (the upper bound of randint is exclusive)
seq_lens = np.random.randint(1, 10, halfN)
# dtype=object keeps the variable-length (ragged) sequences in one array
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
y_one = np.ones((halfN, 1))
p = np.random.permutation(N)  # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]

# Batch size = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
# newer Keras versions also accept the generator in model.fit
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)

# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
    seq_len = x.shape[0]
    Xpad[s, 0:seq_len, :] = x

model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)

Extra notes

Note that if we pad without masking, the padded values will be treated as actual values and thus become noise in the data. For example, a padded temperature sequence [20, 21, 22, -10, -10] looks the same as a sensor report with two noisy (wrong) measurements at the end. The model may learn to ignore this noise completely or at least partially, but it is more reasonable to clean the data first, i.e. to use a mask.
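To see that the mask really makes padded timestamps invisible, here is a small hypothetical sanity check (not from the original answer): an untrained Masking + LSTM model should produce the same output for a sequence and for its padded version.

import numpy as np
from keras import Sequential
from keras.layers import LSTM, Dense, Masking

special_value = -10.0
check = Sequential()
check.add(Masking(mask_value=special_value, input_shape=(None, 2)))
check.add(LSTM(3))
check.add(Dense(1, activation='sigmoid'))

seq = np.array([[20.0, 1.0], [21.0, 1.1], [22.0, 1.2]])    # 3 real timestamps
padded = np.vstack([seq, np.full((2, 2), special_value)])  # plus 2 padded timestamps

out_raw = check.predict(seq[np.newaxis, ...])
out_pad = check.predict(padded[np.newaxis, ...])
print(out_raw, out_pad)  # expected to match, because the masked timestamps are skipped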
