Semi-supervised Sequence Learning

encoder-decoder

This note walks through a toy encoder-decoder (seq2seq) example: the input sequences are digit strings and the targets are the corresponding English number words.

Input sequence texts = ['1 2 3 4 5' , '6 7 8 9 10' , '11 12 13 14 15' , '16 17 18 19 20' , '21 22 23 24 25']

Target sequence texts = ['one two three four five' , 'six seven eight nine ten' , 'eleven twelve thirteen fourteen fifteen' , 'sixteen seventeen eighteen nineteen twenty' , 'twenty_one twenty_two twenty_three twenty_four twenty_five']

Some of the parameters: ('Vocab size:', 51, 'unique words')  ('Input max length:', 5, 'words')  ('Target max length:', 5, 'words')

('Dimension of hidden vectors:', 20)  ('Number of training stories:', 5)  ('Number of test stories:', 5)

tokenize() splits a sentence into tokens: \w+ matches runs of word characters (letters, digits, and underscores), while punctuation such as '.' and '?' becomes its own token, e.g. ['Bob', 'dropped', 'the', 'apple', '.', 'Where', 'is', 'the', 'apple', '?']
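A minimal tokenize along these lines (a sketch; the exact regex in the original code may differ):

```python
import re

def tokenize(sent):
    """Split a sentence into word and punctuation tokens.

    \\w+ grabs runs of letters, digits, and underscores;
    [^\\w\\s] makes each punctuation mark its own token.
    """
    return re.findall(r"\w+|[^\w\s]", sent)

print(tokenize("Bob dropped the apple. Where is the apple?"))
# → ['Bob', 'dropped', 'the', 'apple', '.', 'Where', 'is', 'the', 'apple', '?']
```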

input_list=[['1', '2', '3', '4', '5'], ['6', '7', '8', '9', '10'], ['11', '12', '13', '14', '15'], ['16', '17', '18', '19', '20'], ['21', '22', '23', '24', '25']]

tar_list=[['one', 'two', 'three', 'four', 'five'], ['six', 'seven', 'eight', 'nine', 'ten'], ['eleven', 'twelve', 'thirteen', 'fourteen', 'fifteen'], ['sixteen', 'seventeen', 'eighteen', 'nineteen', 'twenty'], ['twenty_one', 'twenty_two', 'twenty_three', 'twenty_four', 'twenty_five']]

vocab=['1','10','11','12','13','14','15','16','17','18','19','2','20','21','22','23','24','25','3','4','5','6','7','8','9','eight','eighteen','eleven','fifteen','five','four','fourteen','nine','nineteen','one','seven','seventeen','six','sixteen','ten','thirteen','three','twelve','twenty','twenty_five','twenty_four','twenty_one','twenty_three','twenty_two','two']

vocab_size = len(vocab) + 1  # 51; the +1 reserves index 0 for padding

input_maxlen = max(map(len, input_list))  # 5

tar_maxlen = max(map(len, tar_list))  # 5

output_dim = vocab_size  # 51

hidden_dim = 20  # Dimension of hidden vectors

word_to_idx={'1': 1, '10': 2, '11': 3, ...  'twenty_one': 47, 'twenty_three': 48, 'twenty_two': 49, 'two': 50}

idx_to_word={1: '1', 2: '10', 3: '11' ...  49: 'twenty_two', 50: 'two'}
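The vocabulary and both lookup tables can be rebuilt from the token lists. A minimal sketch (assuming the tokens are sorted and indices start at 1 so that 0 stays free for padding):

```python
input_texts = ['1 2 3 4 5', '6 7 8 9 10', '11 12 13 14 15',
               '16 17 18 19 20', '21 22 23 24 25']
tar_texts = ['one two three four five', 'six seven eight nine ten',
             'eleven twelve thirteen fourteen fifteen',
             'sixteen seventeen eighteen nineteen twenty',
             'twenty_one twenty_two twenty_three twenty_four twenty_five']
input_list = [t.split() for t in input_texts]
tar_list = [t.split() for t in tar_texts]

# Deduplicate every token from inputs and targets, then sort.
vocab = sorted(set(tok for seq in input_list + tar_list for tok in seq))
vocab_size = len(vocab) + 1  # +1 reserves index 0 for padding

# Indices start at 1 so that 0 can pad short sequences.
word_to_idx = {w: i + 1 for i, w in enumerate(vocab)}
idx_to_word = {i + 1: w for i, w in enumerate(vocab)}
```

Sorting the deduplicated tokens reproduces the ordering shown above ('1', '10', '11', ..., 'two'), and starting the indices at 1 matches word_to_idx['1'] == 1 and word_to_idx['two'] == 50.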

inputs_train, tars_train = vectorize_stories(input_list, tar_list, word_to_idx, input_maxlen, tar_maxlen, vocab_size)

inputs_train=array([[ 1, 12, 19, 20, 21], [22, 23, 24, 25,  2],...[ 8,  9, 10, 11, 13], [14, 15, 16, 17, 18]])

tars_train=array([[[False, False, False, ..., False, False, False], [False, False, False, ..., False, False,  True], ...]])

tars_train.shape = (5, 5, 51)  # (num_stories, tar_maxlen, vocab_size)
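A plausible vectorize_stories (a sketch, not the original implementation; it assumes inputs become zero-padded index vectors and targets become boolean one-hot tensors, which matches the shapes above):

```python
import numpy as np

def vectorize_stories(input_list, tar_list, word_to_idx,
                      input_maxlen, tar_maxlen, vocab_size):
    """Turn token lists into padded index arrays and one-hot targets."""
    # Inputs: token indices, left-padded with 0 (the padding index).
    x = np.zeros((len(input_list), input_maxlen), dtype=int)
    # Targets: one boolean one-hot row over the vocabulary per time step.
    y = np.zeros((len(tar_list), tar_maxlen, vocab_size), dtype=bool)
    for i, seq in enumerate(input_list):
        idxs = [word_to_idx[w] for w in seq]
        x[i, input_maxlen - len(idxs):] = idxs
    for i, seq in enumerate(tar_list):
        for t, w in enumerate(seq):
            y[i, t, word_to_idx[w]] = True
    return x, y
```

On the five toy stories this yields inputs_train with shape (5, 5) and tars_train with shape (5, 5, 51).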

SA-LSTMs

When LSTMs are initialized with the weights of a pretrained sequence autoencoder (an encoder-decoder trained to reproduce its own input sequence), the methods are called SA-LSTMs.

LM-LSTMs

When LSTMs are initialized with the weights of a pretrained language model (trained to predict the next token in a sequence), the method is called LM-LSTM.
