deeplearning.ai Course Assignment: Recurrent Neural Networks - Course 5 Week 3


Part 1

Neural Machine Translation

Welcome to your first programming assignment for this week!

You will build a Neural Machine Translation (NMT) model to translate human readable dates (“25th of June, 2009”) into machine readable dates (“2009-06-25”). You will do this using an attention model, one of the most sophisticated sequence to sequence models.

This notebook was produced together with NVIDIA’s Deep Learning Institute.

Let’s load all the packages you will need for this assignment.

from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np

from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
%matplotlib inline

1 - Translating human readable dates into machine readable dates

The model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler “date translation” task.

The network will input a date written in a variety of possible formats (e.g. “the 29th of August 1958”, “03/30/1968”, “24 JUNE 1987”) and translate them into standardized, machine readable dates (e.g. “1958-08-29”, “1968-03-30”, “1987-06-24”). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD.

1.1 - Dataset

We will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let’s run the following cells to load the dataset and print some examples.

m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
dataset[:10]
[('9 may 1998', '1998-05-09'),
 ('10.09.70', '1970-09-10'),
 ('4/28/90', '1990-04-28'),
 ('thursday january 26 1995', '1995-01-26'),
 ('monday march 7 1983', '1983-03-07'),
 ('sunday may 22 1988', '1988-05-22'),
 ('tuesday july 8 2008', '2008-07-08'),
 ('08 sep 1999', '1999-09-08'),
 ('1 jan 1981', '1981-01-01'),
 ('monday may 22 1995', '1995-05-22')]

You’ve loaded:

  • dataset: a list of tuples of (human readable date, machine readable date)
  • human_vocab: a python dictionary mapping all characters used in the human readable dates to an integer-valued index
  • machine_vocab: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. These indices are not necessarily consistent with human_vocab.
  • inv_machine_vocab: the inverse dictionary of machine_vocab, mapping from indices back to characters.

Let’s preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since “YYYY-MM-DD” is 10 characters long).

Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)

print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)
X.shape: (10000, 30)
Y.shape: (10000, 10)
Xoh.shape: (10000, 30, 37)
Yoh.shape: (10000, 10, 11)

You now have:

  • X: a processed version of the human readable dates in the training set, where each character is replaced by its index in human_vocab. Each date is further padded to length $T_x$ with a special character (<pad>). X.shape = (m, Tx). A rough sketch of this preprocessing is shown just after this list.
  • Y: a processed version of the machine readable dates in the training set, where each character is replaced by its index in machine_vocab. You should have Y.shape = (m, Ty).
  • Xoh: one-hot version of X; the index of the “1” entry tells you which character it is via human_vocab. Xoh.shape = (m, Tx, len(human_vocab))
  • Yoh: one-hot version of Y; the index of the “1” entry tells you which character it is via machine_vocab. Yoh.shape = (m, Ty, len(machine_vocab)). Here, len(machine_vocab) = 11 since there are 11 characters (’-’ as well as 0-9).
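
Under the hood, this preprocessing is essentially a character-to-index lookup followed by padding and one-hot encoding. Here is a rough sketch of what it might look like for a single human-readable date; this is an illustration only, not the nmt_utils implementation, and it assumes human_vocab contains '<pad>' and '<unk>' entries (which the printed indices below suggest):

def preprocess_one(date_str, vocab, Tx):
    # Illustrative sketch only -- not the nmt_utils implementation.
    # normalize, drop commas, and truncate to at most Tx characters
    chars = list(date_str.lower().replace(',', ''))[:Tx]
    # characters -> indices, with '<unk>' for characters not in the vocabulary
    idx = [vocab.get(c, vocab['<unk>']) for c in chars]
    # pad with the '<pad>' index up to length Tx
    idx += [vocab['<pad>']] * (Tx - len(idx))
    # one-hot encode: shape (Tx, len(vocab))
    oh = np.zeros((len(idx), len(vocab)))
    oh[np.arange(len(idx)), idx] = 1
    return np.array(idx), oh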

Let's also look at some preprocessed training examples. Feel free to play with index in the cell below to navigate the dataset and see how source/target dates are preprocessed.

index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])
Source date: 9 may 1998
Target date: 1998-05-09

Source after preprocessing (indices): [12  0 24 13 34  0  4 12 12 11 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36
 36 36 36 36 36]
Target after preprocessing (indices): [ 2 10 10  9  0  1  6  0  1 10]

Source after preprocessing (one-hot): [[ 0.  0.  0. ...,  0.  0.  0.]
 [ 1.  0.  0. ...,  0.  0.  0.]
 [ 0.  0.  0. ...,  0.  0.  0.]
 ..., 
 [ 0.  0.  0. ...,  0.  0.  1.]
 [ 0.  0.  0. ...,  0.  0.  1.]
 [ 0.  0.  0. ...,  0.  0.  1.]]
Target after preprocessing (one-hot): [[ 0.  0.  1.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  1.  0.]
 [ 1.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  1.  0.  0.  0.  0.]
 [ 1.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  1.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  1.]]

2 - Neural machine translation with attention

If you had to translate a book’s paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down.

The attention mechanism tells a Neural Machine Translation model which parts of the input it should pay attention to at each step.

2.1 - Attention mechanism

In this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model. The diagram on the right shows what one “Attention” step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$, which are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t = 1, \ldots, T_y$).

Figure 1: Neural machine translation with attention

Here are some properties of the model that you may notice:

  • There are two separate LSTMs in this model (see diagram on the left). Because the one at the bottom of the picture is a Bi-directional LSTM and comes before the attention mechanism, we will call it the pre-attention Bi-LSTM. The LSTM at the top of the diagram comes after the attention mechanism, so we will call it the post-attention LSTM. The pre-attention Bi-LSTM goes through $T_x$ time steps; the post-attention LSTM goes through $T_y$ time steps.

  • The post-attention LSTM passes $s^{\langle t \rangle}, c^{\langle t \rangle}$ from one time step to the next. In the lecture videos, we were using only a basic RNN for the post-attention sequence model, so the only state the RNN carried forward was its output activation $s^{\langle t \rangle}$. Since we are using an LSTM here, the LSTM has both the output activation $s^{\langle t \rangle}$ and the hidden cell state $c^{\langle t \rangle}$. However, unlike previous text generation examples (such as Dinosaurus in week 1), in this model the post-attention LSTM at time $t$ will not take the previously generated $y^{\langle t-1 \rangle}$ as input; it only takes $s^{\langle t \rangle}$ and $c^{\langle t \rangle}$ as input. We have designed the model this way because, unlike language generation where adjacent characters are highly correlated, there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date.

  • We use $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}; \overleftarrow{a}^{\langle t \rangle}]$ to represent the concatenation of the activations of the forward and backward directions of the pre-attention Bi-LSTM.

  • The diagram on the right uses a RepeatVector node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times, and then Concatenation to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t' \rangle}$ to compute $e^{\langle t, t' \rangle}$, which is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$. We'll explain how to use RepeatVector and Concatenation in Keras below; a small shape check follows this list.
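
As a quick illustration of those two layers (not part of the graded code; the _demo names are made up for this check), here is how RepeatVector and Concatenate combine the repeated $s^{\langle t-1 \rangle}$ with the Bi-LSTM activations:

from keras.layers import Input, RepeatVector, Concatenate
from keras.models import Model

Tx_demo, n_a_demo, n_s_demo = 30, 32, 64
a_demo = Input(shape=(Tx_demo, 2 * n_a_demo))     # Bi-LSTM activations: (None, Tx, 2*n_a)
s_demo = Input(shape=(n_s_demo,))                 # previous post-attention hidden state: (None, n_s)
s_rep = RepeatVector(Tx_demo)(s_demo)             # copied Tx times: (None, Tx, n_s)
concat = Concatenate(axis=-1)([a_demo, s_rep])    # (None, Tx, 2*n_a + n_s)
demo_model = Model(inputs=[a_demo, s_demo], outputs=concat)
print(demo_model.output_shape)                    # (None, 30, 128)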

Let's implement this model. You will start by implementing two functions: one_step_attention() and model().

1) one_step_attention(): At step $t$, given all the hidden states of the Bi-LSTM ($[a^{<1>}, a^{<2>}, ..., a^{<T_x>}]$) and the previous hidden state of the second LSTM ($s^{<t-1>}$), one_step_attention() will compute the attention weights ($[\alpha^{<t,1>}, \alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$) and output the context vector (see Figure 1 (right) for details):

$$context^{<t>} = \sum_{t' = 1}^{T_x} \alpha^{<t,t'>} a^{<t'>} \tag{1}$$

Note that we are denoting the attention in this notebook $context^{\langle t \rangle}$. In the lecture videos, the context was denoted $c^{\langle t \rangle}$, but here we are calling it $context^{\langle t \rangle}$ to avoid confusion with the (post-attention) LSTM's internal memory cell variable, which is sometimes also denoted $c^{\langle t \rangle}$.
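
To make equation (1) concrete, here is a small numpy check (illustrative only; the _demo names are made up): the context vector is simply the attention-weighted sum of the Bi-LSTM activations, which is exactly the contraction that Dot(axes = 1)([alphas, a]) performs for every example in a batch.

import numpy as np

Tx_demo, n_a_demo = 30, 32
a_demo = np.random.randn(Tx_demo, 2 * n_a_demo)     # a^<1>, ..., a^<Tx>, each of size 2*n_a
e_demo = np.random.randn(Tx_demo)                    # unnormalized "energies"
alphas = np.exp(e_demo) / np.sum(np.exp(e_demo))     # softmax over the Tx axis; sums to 1
context = np.sum(alphas[:, None] * a_demo, axis=0)   # weighted sum: shape (2*n_a,)
print(context.shape, alphas.sum())                   # (64,) 1.0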

2) model(): Implements the entire model. It first runs the input through a Bi-LSTM to get $[a^{<1>}, a^{<2>}, ..., a^{<T_x>}]$. Then, it calls one_step_attention() $T_y$ times in a for loop. At each iteration of this loop, it gives the computed context vector $context^{<t>}$ to the post-attention LSTM, and runs the output of the LSTM through a dense layer with softmax activation to generate a prediction $\hat{y}^{<t>}$.

Exercise: Implement one_step_attention(). The function model() will call the layers in one_step_attention() $T_y$ times using a for-loop, and it is important that all $T_y$ copies have the same weights. That is, it should not re-initialize the weights every time; all $T_y$ steps should have shared weights. Here's how you can implement layers with shareable weights in Keras:

  1. Define the layer objects (as global variables, for example).
  2. Call these objects when propagating the input.

We have defined the layers you need as global variables. Please run the following cells to create them. Please check the Keras documentation to make sure you understand what these layers are: RepeatVector(), Concatenate(), Dense(), Activation(), Dot().

# Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation = "tanh")
densor2 = Dense(1, activation = "relu")
activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook
dotor = Dot(axes = 1)

Now you can use these layers to implement one_step_attention(). In order to propagate a Keras tensor object X through one of these layers, use layer(X) (or layer([X, Y]) if it requires multiple inputs). For example, densor2(X) would propagate X through the Dense(1) layer defined above.
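
To see why defining the layer objects once matters, here is a tiny weight-sharing illustration (not graded code; x1, x2, shared and demo are made-up names): calling the same Dense object on two different tensors reuses a single set of parameters instead of creating new ones, which is exactly what will happen when model() calls the layers above $T_y$ times.

x1 = Input(shape=(5,))
x2 = Input(shape=(5,))
shared = Dense(3, activation="tanh")        # one layer object ...
y1 = shared(x1)                             # ... called twice: both calls use
y2 = shared(x2)                             # the same kernel and bias
demo = Model(inputs=[x1, x2], outputs=[y1, y2])
print(len(shared.get_weights()))            # 2 tensors (one kernel, one bias), shared by both calls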

# GRADED FUNCTION: one_step_attention

def one_step_attention(a, s_prev):
    """
    Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
    "alphas" and the hidden states "a" of the Bi-LSTM.
    
    Arguments:
    a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
    s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)
    
    Returns:
    context -- context vector, input of the next (post-attention) LSTM cell
    """
    
    ### START CODE HERE ###
    # Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
    s_prev = repeator(s_prev)
    # Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
    concat = concatenator([a,s_prev])
    # Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈1 lines)
    e = densor1(concat)
    # Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈1 lines)
    energies = densor2(e)
    # Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line)
    alphas = activator(energies)
    # Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
    context = dotor([alphas,a])
    ### END CODE HERE ###
    
    return context

You will be able to check the expected output of one_step_attention() after you’ve coded the model() function.

Exercise: Implement model() as explained in Figure 1 and the text above. Again, we have defined global layers that will share weights to be used in model().

n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state = True)
output_layer = Dense(len(machine_vocab), activation=softmax)

Now you can use these layers $T_y$ times in a for loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps:

  1. Propagate the input into a Bidirectional LSTM.

  2. Iterate for $t = 0, \dots, T_y-1$:

    1. Call one_step_attention() on $[a^{<1>}, a^{<2>}, ..., a^{<T_x>}]$ and $s^{<t-1>}$ to get the context vector $context^{<t>}$.
    2. Give $context^{<t>}$ to the post-attention LSTM cell. Remember to pass in this LSTM's previous hidden state $s^{\langle t-1 \rangle}$ and cell state $c^{\langle t-1 \rangle}$ using initial_state = [previous hidden state, previous cell state]. Get back the new hidden state $s^{<t>}$ and the new cell state $c^{<t>}$.
    3. Apply a softmax layer to $s^{<t>}$ to get the output.
    4. Save the output by adding it to the list of outputs.
  3. Create your Keras model instance; it should have three inputs (“inputs”, $s^{<0>}$ and $c^{<0>}$) and output the list of “outputs”.

# GRADED FUNCTION: model

def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
    """
    Arguments:
    Tx -- length of the input sequence
    Ty -- length of the output sequence
    n_a -- hidden state size of the Bi-LSTM
    n_s -- hidden state size of the post-attention LSTM
    human_vocab_size -- size of the python dictionary "human_vocab"
    machine_vocab_size -- size of the python dictionary "machine_vocab"

    Returns:
    model -- Keras model instance
    """
    
    # Define the inputs of your model with a shape (Tx, human_vocab_size)
    # Define s0 and c0, the initial hidden state and cell state for the post-attention LSTM, each of shape (n_s,)
    X = Input(shape=(Tx, human_vocab_size))
    s0 = Input(shape=(n_s,), name='s0')
    c0 = Input(shape=(n_s,), name='c0')
    s = s0
    c = c0
    
    # Initialize empty list of outputs
    outputs = []
    
    ### START CODE HERE ###
    
    # Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. (≈ 1 line)
    a = Bidirectional(LSTM(n_a, return_sequences=True))(X)
    
    # Step 2: Iterate for Ty steps
    for t in range(Ty):
    
        # Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
        context = one_step_attention(a, s)
        
        # Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
        # Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
        s, _, c = post_activation_LSTM_cell(context,initial_state=[s,c])
        
        # Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
        out = output_layer(s)
        
        # Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
        outputs.append(out)
    
    # Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
    model = Model(inputs=[X,s0,c0],outputs=outputs)
    
    ### END CODE HERE ###
    
    return model

Run the following cell to create your model.

model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))

Let’s get a summary of the model to check if it matches the expected output.

model.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_2 (InputLayer)             (None, 30, 37)        0                                            
____________________________________________________________________________________________________
s0 (InputLayer)                  (None, 64)            0                                            
____________________________________________________________________________________________________
bidirectional_2 (Bidirectional)  (None, 30, 64)        17920       input_2[0][0]                    
____________________________________________________________________________________________________
repeat_vector_1 (RepeatVector)   (None, 30, 64)        0           s0[0][0]                         
                                                                   lstm_1[1][0]                     
                                                                   lstm_1[2][0]                     
                                                                   lstm_1[3][0]                     
                                                                   lstm_1[4][0]                     
                                                                   lstm_1[5][0]                     
                                                                   lstm_1[6][0]                     
                                                                   lstm_1[7][0]                     
                                                                   lstm_1[8][0]                     
                                                                   lstm_1[9][0]                     
____________________________________________________________________________________________________
concatenate_1 (Concatenate)      (None, 30, 128)       0           bidirectional_2[0][0]            
                                                                   repeat_vector_1[1][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[2][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[3][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[4][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[5][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[6][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[7][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[8][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[9][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[10][0]           
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 30, 10)        1290        concatenate_1[1][0]              
                                                                   concatenate_1[2][0]              
                                                                   concatenate_1[3][0]              
                                                                   concatenate_1[4][0]              
                                                                   concatenate_1[5][0]              
                                                                   concatenate_1[6][0]              
                                                                   concatenate_1[7][0]              
                                                                   concatenate_1[8][0]              
                                                                   concatenate_1[9][0]              
                                                                   concatenate_1[10][0]             
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 30, 1)         11          dense_1[1][0]                    
                                                                   dense_1[2][0]                    
                                                                   dense_1[3][0]                    
                                                                   dense_1[4][0]                    
                                                                   dense_1[5][0]                    
                                                                   dense_1[6][0]                    
                                                                   dense_1[7][0]                    
                                                                   dense_1[8][0]                    
                                                                   dense_1[9][0]                    
                                                                   dense_1[10][0]                   
____________________________________________________________________________________________________
attention_weights (Activation)   (None, 30, 1)         0           dense_2[1][0]                    
                                                                   dense_2[2][0]                    
                                                                   dense_2[3][0]                    
                                                                   dense_2[4][0]                    
                                                                   dense_2[5][0]                    
                                                                   dense_2[6][0]                    
                                                                   dense_2[7][0]                    
                                                                   dense_2[8][0]                    
                                                                   dense_2[9][0]                    
                                                                   dense_2[10][0]                   
____________________________________________________________________________________________________
dot_1 (Dot)                      (None, 1, 64)         0           attention_weights[1][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[2][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[3][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[4][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[5][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[6][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[7][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[8][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[9][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[10][0]         
                                                                   bidirectional_2[0][0]            
____________________________________________________________________________________________________
c0 (InputLayer)                  (None, 64)            0                                            
____________________________________________________________________________________________________
lstm_1 (LSTM)                    [(None, 64), (None, 6 33024       dot_1[1][0]                      
                                                                   s0[0][0]                         
                                                                   c0[0][0]                         
                                                                   dot_1[2][0]                      
                                                                   lstm_1[1][0]                     
                                                                   lstm_1[1][2]                     
                                                                   dot_1[3][0]                      
                                                                   lstm_1[2][0]                     
                                                                   lstm_1[2][2]                     
                                                                   dot_1[4][0]                      
                                                                   lstm_1[3][0]                     
                                                                   lstm_1[3][2]                     
                                                                   dot_1[5][0]                      
                                                                   lstm_1[4][0]                     
                                                                   lstm_1[4][2]                     
                                                                   dot_1[6][0]                      
                                                                   lstm_1[5][0]                     
                                                                   lstm_1[5][2]                     
                                                                   dot_1[7][0]                      
                                                                   lstm_1[6][0]                     
                                                                   lstm_1[6][2]                     
                                                                   dot_1[8][0]                      
                                                                   lstm_1[7][0]                     
                                                                   lstm_1[7][2]                     
                                                                   dot_1[9][0]                      
                                                                   lstm_1[8][0]                     
                                                                   lstm_1[8][2]                     
                                                                   dot_1[10][0]                     
                                                                   lstm_1[9][0]                     
                                                                   lstm_1[9][2]                     
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 11)            715         lstm_1[1][0]                     
                                                                   lstm_1[2][0]                     
                                                                   lstm_1[3][0]                     
                                                                   lstm_1[4][0]                     
                                                                   lstm_1[5][0]                     
                                                                   lstm_1[6][0]                     
                                                                   lstm_1[7][0]                     
                                                                   lstm_1[8][0]                     
                                                                   lstm_1[9][0]                     
                                                                   lstm_1[10][0]                    
====================================================================================================
Total params: 52,960
Trainable params: 52,960
Non-trainable params: 0
____________________________________________________________________________________

Expected Output:

Here is the summary you should see

**Total params:** 52,960
**Trainable params:** 52,960
**Non-trainable params:** 0
**bidirectional_1's output shape:** (None, 30, 64)
**repeat_vector_1's output shape:** (None, 30, 64)
**concatenate_1's output shape:** (None, 30, 128)
**attention_weights's output shape:** (None, 30, 1)
**dot_1's output shape:** (None, 1, 64)
**dense_3's output shape:** (None, 11)

As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using categorical_crossentropy loss, a custom Adam optimizer (learning rate = 0.005, $\beta_1 = 0.9$, $\beta_2 = 0.999$, decay = 0.01) and ['accuracy'] metrics:

### START CODE HERE ### (≈2 lines)
opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
### END CODE HERE ###

The last step is to define all your inputs and outputs to fit the model:

  • You already have X of shape $(m = 10000, T_x = 30)$ containing the training examples.
  • You need to create s0 and c0 to initialize your post_activation_LSTM_cell with zeros.
  • Given the model() you coded, you need “outputs” to be a list of $T_y = 10$ elements, each of shape (m, len(machine_vocab)): outputs[i] holds the true (one-hot) labels for the $i^{th}$ output character across all training examples, so outputs[i][j] is the true label of the $i^{th}$ character of the $j^{th}$ training example.
s0 = np.zeros((m, n_s))                # initial hidden state of the post-attention LSTM
c0 = np.zeros((m, n_s))                # initial cell state of the post-attention LSTM
outputs = list(Yoh.swapaxes(0, 1))     # list of Ty arrays, each of shape (m, len(machine_vocab))

Let’s now fit the model and run it for one epoch.

model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)

Epoch 1/1
10000/10000 [==============================] - 36s - loss: 22.0442 - dense_3_loss_1: 2.2764 - dense_3_loss_2: 2.1758 - dense_3_loss_3: 2.3413 - dense_3_loss_4: 2.6109 - dense_3_loss_5: 1.7309 - dense_3_loss_6: 1.8242 - dense_3_loss_7: 2.6554 - dense_3_loss_8: 1.5706 - dense_3_loss_9: 2.0752 - dense_3_loss_10: 2.7836 - dense_3_acc_1: 0.0292 - dense_3_acc_2: 0.3608 - dense_3_acc_3: 0.1945 - dense_3_acc_4: 0.0765 - dense_3_acc_5: 0.6003 - dense_3_acc_6: 0.2663 - dense_3_acc_7: 0.0338 - dense_3_acc_8: 0.6435 - dense_3_acc_9: 0.0952 - dense_3_acc_10: 0.0340    


While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples:
[Table: example per-position accuracies (dense_2_acc_1 … dense_2_acc_10) for a batch of 2 training examples]

Thus, dense_2_acc_8: 0.89 means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data.

We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.)

model.load_weights('models/model.h5')

You can now see the results on new examples.

EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
    
    source = string_to_int(example, Tx, human_vocab)
    source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1)
    prediction = model.predict([source, s0, c0])
    prediction = np.argmax(prediction, axis = -1)
    output = [inv_machine_vocab[int(i)] for i in prediction]
    
    print("source:", example)
    print("output:", ''.join(output))

source: 3 May 1979
output: 1979-05-03
source: 5 April 09
output: 2009-05-05
source: 21th of August 2016
output: 2016-08-21
source: Tue 10 Jul 2007
output: 2007-07-10
source: Saturday May 9 2018
output: 2018-05-09
source: March 3 2001
output: 2001-03-03
source: March 3rd 2001
output: 2001-03-03
source: 1 March 2001
output: 2001-03-01

You can also change these examples to test with your own examples. The next part will give you a better sense of what the attention mechanism is doing, i.e., what part of the input the network is paying attention to when generating a particular output character.

3 - Visualizing Attention (Optional / Ungraded)

Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input.

Consider the task of translating “Saturday 9 May 2018” to “2018-05-09”. If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this:

Figure 8: Full Attention Map

Notice how the output ignores the “Saturday” portion of the input. None of the output timesteps are paying much attention to that portion of the input. We also see that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs in order to make the translation. The year mostly requires the model to pay attention to the input's “18” in order to generate “2018.”

3.1 - Getting the activations from the network

Let's now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$.

To figure out where the attention values are located, let's start by printing a summary of the model.

____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_2 (InputLayer)             (None, 30, 37)        0                                            
____________________________________________________________________________________________________
s0 (InputLayer)                  (None, 64)            0                                            
____________________________________________________________________________________________________
bidirectional_2 (Bidirectional)  (None, 30, 64)        17920       input_2[0][0]                    
____________________________________________________________________________________________________
repeat_vector_1 (RepeatVector)   (None, 30, 64)        0           s0[0][0]                         
                                                                   lstm_1[1][0]                     
                                                                   lstm_1[2][0]                     
                                                                   lstm_1[3][0]                     
                                                                   lstm_1[4][0]                     
                                                                   lstm_1[5][0]                     
                                                                   lstm_1[6][0]                     
                                                                   lstm_1[7][0]                     
                                                                   lstm_1[8][0]                     
                                                                   lstm_1[9][0]                     
____________________________________________________________________________________________________
concatenate_1 (Concatenate)      (None, 30, 128)       0           bidirectional_2[0][0]            
                                                                   repeat_vector_1[1][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[2][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[3][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[4][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[5][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[6][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[7][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[8][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[9][0]            
                                                                   bidirectional_2[0][0]            
                                                                   repeat_vector_1[10][0]           
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 30, 10)        1290        concatenate_1[1][0]              
                                                                   concatenate_1[2][0]              
                                                                   concatenate_1[3][0]              
                                                                   concatenate_1[4][0]              
                                                                   concatenate_1[5][0]              
                                                                   concatenate_1[6][0]              
                                                                   concatenate_1[7][0]              
                                                                   concatenate_1[8][0]              
                                                                   concatenate_1[9][0]              
                                                                   concatenate_1[10][0]             
____________________________________________________________________________________________________
dense_2 (Dense)                  (None, 30, 1)         11          dense_1[1][0]                    
                                                                   dense_1[2][0]                    
                                                                   dense_1[3][0]                    
                                                                   dense_1[4][0]                    
                                                                   dense_1[5][0]                    
                                                                   dense_1[6][0]                    
                                                                   dense_1[7][0]                    
                                                                   dense_1[8][0]                    
                                                                   dense_1[9][0]                    
                                                                   dense_1[10][0]                   
____________________________________________________________________________________________________
attention_weights (Activation)   (None, 30, 1)         0           dense_2[1][0]                    
                                                                   dense_2[2][0]                    
                                                                   dense_2[3][0]                    
                                                                   dense_2[4][0]                    
                                                                   dense_2[5][0]                    
                                                                   dense_2[6][0]                    
                                                                   dense_2[7][0]                    
                                                                   dense_2[8][0]                    
                                                                   dense_2[9][0]                    
                                                                   dense_2[10][0]                   
____________________________________________________________________________________________________
dot_1 (Dot)                      (None, 1, 64)         0           attention_weights[1][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[2][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[3][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[4][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[5][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[6][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[7][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[8][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[9][0]          
                                                                   bidirectional_2[0][0]            
                                                                   attention_weights[10][0]         
                                                                   bidirectional_2[0][0]            
____________________________________________________________________________________________________
c0 (InputLayer)                  (None, 64)            0                                            
____________________________________________________________________________________________________
lstm_1 (LSTM)                    [(None, 64), (None, 6 33024       dot_1[1][0]                      
                                                                   s0[0][0]                         
                                                                   c0[0][0]                         
                                                                   dot_1[2][0]                      
                                                                   lstm_1[1][0]                     
                                                                   lstm_1[1][2]                     
                                                                   dot_1[3][0]                      
                                                                   lstm_1[2][0]                     
                                                                   lstm_1[2][2]                     
                                                                   dot_1[4][0]                      
                                                                   lstm_1[3][0]                     
                                                                   lstm_1[3][2]                     
                                                                   dot_1[5][0]                      
                                                                   lstm_1[4][0]                     
                                                                   lstm_1[4][2]                     
                                                                   dot_1[6][0]                      
                                                                   lstm_1[5][0]                     
                                                                   lstm_1[5][2]                     
                                                                   dot_1[7][0]                      
                                                                   lstm_1[6][0]                     
                                                                   lstm_1[6][2]                     
                                                                   dot_1[8][0]                      
                                                                   lstm_1[7][0]                     
                                                                   lstm_1[7][2]                     
                                                                   dot_1[9][0]                      
                                                                   lstm_1[8][0]                     
                                                                   lstm_1[8][2]                     
                                                                   dot_1[10][0]                     
                                                                   lstm_1[9][0]                     
                                                                   lstm_1[9][2]                     
____________________________________________________________________________________________________
dense_3 (Dense)                  (None, 11)            715         lstm_1[1][0]                     
                                                                   lstm_1[2][0]                     
                                                                   lstm_1[3][0]                     
                                                                   lstm_1[4][0]                     
                                                                   lstm_1[5][0]                     
                                                                   lstm_1[6][0]                     
                                                                   lstm_1[7][0]                     
                                                                   lstm_1[8][0]                     
                                                                   lstm_1[9][0]                     
                                                                   lstm_1[10][0]                    
====================================================================================================
Total params: 52,960
Trainable params: 52,960
Non-trainable params: 0

Navigate through the output of model.summary() above. You can see that the layer named attention_weights outputs the alphas of shape (m, 30, 1) before dot_1 computes the context vector for every time step $t = 0, \ldots, T_y-1$. Let's get the activations from this layer.

The function plot_attention_map() pulls the attention values out of your model and plots them.

attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64)
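
If you would like to pull the attention weights out yourself rather than relying on plot_attention_map, here is a rough sketch of one way to do it (illustrative only; it assumes the shared layers above were only ever called inside model(), so the layer's output nodes 0 … Ty-1 correspond to the Ty decoder steps, and the example here is encoded with expand_dims to form a batch of one):

# Build a sub-model whose outputs are the Ty output nodes of the shared
# 'attention_weights' Activation layer (one node per output time step).
attn_layer = model.get_layer('attention_weights')
attn_outputs = [attn_layer.get_output_at(t) for t in range(Ty)]
attn_model = Model(inputs=model.inputs, outputs=attn_outputs)

example = "Tuesday 09 Oct 1993"
src = string_to_int(example, Tx, human_vocab)
src = np.array([to_categorical(i, num_classes=len(human_vocab)) for i in src])
src = np.expand_dims(src, axis=0)                            # (1, Tx, len(human_vocab))
attn = attn_model.predict([src, np.zeros((1, n_s)), np.zeros((1, n_s))])
attn = np.concatenate(attn, axis=-1)                         # (1, Tx, Ty): alpha^<t, t'> values
print(attn.shape)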

On the generated plot you can observe the values of the attention weights for each character of the predicted output. Examine this plot and check that where the network is paying attention makes sense to you.

In the date translation application, you will observe that most of the time attention helps predict the year, and doesn't have much impact on predicting the day or month.

Congratulations!

You have come to the end of this assignment.

Here’s what you should remember from this notebook:

  • Machine translation models can be used to map from one sequence to another. They are useful not just for translating human languages (like French->English) but also for tasks like date format translation.
  • An attention mechanism allows a network to focus on the most relevant parts of the input when producing a specific part of the output.
  • A network using an attention mechanism can translate from inputs of length $T_x$ to outputs of length $T_y$, where $T_x$ and $T_y$ can be different.
  • You can visualize attention weights $\alpha^{\langle t, t' \rangle}$ to see what the network is paying attention to while generating each output.

Congratulations on finishing this assignment! You are now able to implement an attention model and use it to learn complex mappings from one sequence to another.


Part 2

Trigger Word Detection

Welcome to the final programming assignment of this specialization!

In this week’s videos, you learned about applying deep learning to speech recognition. In this assignment, you will construct a speech dataset and implement an algorithm for trigger word detection (sometimes also called keyword detection, or wakeword detection). Trigger word detection is the technology that allows devices like Amazon Alexa, Google Home, Apple Siri, and Baidu DuerOS to wake up upon hearing a certain word.

For this exercise, our trigger word will be “Activate.” Every time it hears you say “activate,” it will make a “chiming” sound. By the end of this assignment, you will be able to record a clip of yourself talking, and have the algorithm trigger a chime when it detects you saying “activate.”

After completing this assignment, perhaps you can also extend it to run on your laptop so that every time you say “activate” it starts up your favorite app, or turns on a network connected lamp in your house, or triggers some other event?
In this assignment you will learn to:

  • Structure a speech recognition project
  • Synthesize and process audio recordings to create train/dev datasets
  • Train a trigger word detection model and make predictions

Let's get started! Run the following cell to load the packages you are going to use.

import numpy as np
from pydub import AudioSegment
import random
import sys
import io
import os
import glob
import IPython
from td_utils import *
%matplotlib inline

1 - Data synthesis: Creating a speech dataset

Let’s start by building a dataset for your trigger word detection algorithm. A speech dataset should ideally be as close as possible to the application you will want to run it on. In this case, you’d like to detect the word “activate” in working environments (library, home, offices, open-spaces …). You thus need to create recordings with a mix of positive words (“activate”) and negative words (random words other than activate) on different background sounds. Let’s see how you can create such a dataset.

1.1 - Listening to the data

One of your friends is helping you out on this project, and they’ve gone to libraries, cafes, restaurants, homes and offices all around the region to record background noises, as well as snippets of audio of people saying positive/negative words. This dataset includes people speaking in a variety of accents.

In the raw_data directory, you can find a subset of the raw audio files of the positive words, negative words, and background noise. You will use these audio files to synthesize a dataset to train the model. The "activates" directory contains positive examples of people saying the word "activate". The "negatives" directory contains negative examples of people saying random words other than "activate". There is one word per audio recording. The "backgrounds" directory contains 10 second clips of background noise in different environments.

Run the cells below to listen to some examples.

IPython.display.Audio("./raw_data/activates/1.wav")
IPython.display.Audio("./raw_data/negatives/4.wav")
IPython.display.Audio("./raw_data/backgrounds/1.wav")

You will use these three types of recordings (positives/negatives/backgrounds) to create a labelled dataset.

1.2 - From audio recordings to spectrograms

What really is an audio recording? A microphone records little variations in air pressure over time, and it is these little variations in air pressure that your ear also perceives as sound. You can think of an audio recording as a long list of numbers measuring the little air pressure changes detected by the microphone. We will use audio sampled at 44100 Hz (44100 Hertz), meaning the microphone gives us 44100 numbers per second. Thus, a 10 second audio clip is represented by 441000 numbers ($= 10 \times 44100$).

It is quite difficult to figure out from this "raw" representation of audio whether the word "activate" was said. In order to help your sequence model more easily learn to detect trigger words, we will compute a spectrogram of the audio. The spectrogram tells us how much different frequencies are present in an audio clip at a given moment in time.

(If you've ever taken an advanced class on signal processing or on Fourier transforms: a spectrogram is computed by sliding a window over the raw audio signal and calculating the most active frequencies in each window using a Fourier transform. If you don't understand the previous sentence, don't worry about it.)
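The helper graph_spectrogram used below is provided in td_utils. For intuition only, here is a minimal sketch of how such a spectrogram could be computed with matplotlib; the window size and overlap values are illustrative assumptions, not necessarily what the helper uses.

import matplotlib.pyplot as plt
from scipy.io import wavfile

def simple_spectrogram(wav_file, nfft=200, noverlap=120):
    # Slide an nfft-sample window over the waveform and measure how much energy
    # each frequency carries inside every window -- that is the spectrogram.
    rate, data = wavfile.read(wav_file)
    signal = data[:, 0] if data.ndim > 1 else data        # use one channel if the file is stereo
    pxx, freqs, bins, im = plt.specgram(signal, NFFT=nfft, Fs=rate, noverlap=noverlap)
    return pxx                                            # shape: (n_frequencies, n_time_steps)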

Let's see an example.

IPython.display.Audio("audio_examples/example_train.wav")
x = graph_spectrogram("audio_examples/example_train.wav")

The graph above represents how active each frequency is (y axis) over a number of time-steps (x axis).

Figure 1: Spectrogram of an audio recording, where the color shows the degree to which different frequencies are present (loud) in the audio at different points in time. Green squares mean a certain frequency is more active or more present in the audio clip (louder); blue squares denote less active frequencies.

The dimension of the output spectrogram depends upon the hyperparameters of the spectrogram software and the length of the input. In this notebook, we will be working with 10 second audio clips as the "standard length" for our training examples. The number of timesteps of the spectrogram will be 5511. You'll see later that the spectrogram will be the input $x$ into the network, and so $T_x = 5511$.
[Note] The cell below reads the raw audio waveform so we can compare the number of time steps before and after computing the spectrogram.

from scipy.io import wavfile   # explicit import, in case td_utils does not already provide wavfile

_, data = wavfile.read("audio_examples/example_train.wav")
print("Time steps in audio recording before spectrogram", data[:,0].shape)
print("Time steps in input after spectrogram", x.shape)
Time steps in audio recording before spectrogram (441000,)
Time steps in input after spectrogram (101, 5511)

Now, you can define:

Tx = 5511 # The number of time steps input to the model from the spectrogram
n_freq = 101 # Number of frequencies input to the model at each time step of the spectrogram

Note that even with 10 seconds being our default training example length, 10 seconds of time can be discretized into different numbers of values. You've seen 441000 (raw audio) and 5511 (spectrogram). In the former case, each step represents $10/441000 \approx 0.000023$ seconds. In the second case, each step represents $10/5511 \approx 0.0018$ seconds.

For the 10sec of audio, the key values you will see in this assignment are:

  • $441000$ (raw audio)
  • $5511 = T_x$ (spectrogram output, and dimension of input to the neural network)
  • $10000$ (used by the pydub module to synthesize audio)
  • $1375 = T_y$ (the number of steps in the output of the GRU you'll build)

Note that each of these representations corresponds to exactly 10 seconds of time; they simply discretize it to different degrees. All of these are hyperparameters and can be changed (except the 441000, which is a function of the microphone). We have chosen values within the standard ranges used for speech systems.

Consider the $T_y = 1375$ number above. This means that for the output of the model, we discretize the 10s into 1375 time-intervals (each one of length $10/1375 \approx 0.0072$s) and try to predict for each of these intervals whether someone recently finished saying "activate."

Consider also the 10000 number above. This corresponds to discretizing the 10sec clip into $10/10000 = 0.001$ second intervals. 0.001 seconds is also called 1 millisecond, or 1ms. So when we say we are discretizing according to 1ms intervals, it means we are using 10,000 steps.
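To make these discretizations concrete, here is a small sketch, using only the constants quoted above, that prints how long a single step lasts under each representation:

clip_seconds = 10.0
for name, steps in [("raw audio", 441000), ("spectrogram (Tx)", 5511),
                    ("pydub (1 ms)", 10000), ("model output (Ty)", 1375)]:
    print(name, ":", steps, "steps,", clip_seconds / steps, "seconds per step")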

Ty = 1375 # The number of time steps in the output of our model

1.3 - Generating a single training example

Because speech data is hard to acquire and label, you will synthesize your training data using the audio clips of activates, negatives, and backgrounds. It is quite slow to record lots of 10 second audio clips with random "activates" in them. Instead, it is easier to record lots of positive and negative words, and record background noise separately (or download background noise from free online sources).

To synthesize a single training example, you will:

  • Pick a random 10 second background audio clip
  • Randomly insert 0-4 audio clips of “activate” into this 10sec clip
  • Randomly insert 0-2 audio clips of negative words into this 10sec clip

Because you synthesized the word "activate" into the background clip, you know exactly when in the 10sec clip the "activate" makes its appearance. You'll see later that this makes it easier to generate the labels $y^{\langle t \rangle}$ as well.

You will use the pydub package to manipulate audio. Pydub converts raw audio files into lists of Pydub data structures (it is not important to know the details here). Pydub uses 1ms as the discretization interval (1ms is 1 millisecond = 1/1000 seconds) which is why a 10sec clip is always represented using 10,000 steps.
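As a quick reference, here is a minimal sketch of the pydub operations you will rely on later (the file paths are just the example clips you listened to above):

activate = AudioSegment.from_wav("./raw_data/activates/1.wav")
background = AudioSegment.from_wav("./raw_data/backgrounds/1.wav")
print(len(background))                              # duration in ms, so about 10000 for a 10 sec clip
quieter = background - 20                           # subtracting dB makes the clip quieter
mixed = quieter.overlay(activate, position=1000)    # superimpose "activate" starting at the 1 sec mark
mixed.export("pydub_demo.wav", format="wav")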

# Load audio segments using pydub 
activates, negatives, backgrounds = load_raw_audio()

print("background len: " + str(len(backgrounds[0])))    # Should be 10,000, since it is a 10 sec clip
print("activate[0] len: " + str(len(activates[0])))     # Maybe around 1000, since an "activate" audio clip is usually around 1 sec (but varies a lot)
print("activate[1] len: " + str(len(activates[1])))     # Different "activate" clips can have different lengths 

Overlaying positive/negative words on the background:

Given a 10sec background clip and a short audio clip (positive or negative word), you need to be able to “add” or “insert” the word’s short audio clip onto the background. To ensure audio segments inserted onto the background do not overlap, you will keep track of the times of previously inserted audio clips. You will be inserting multiple clips of positive/negative words onto the background, and you don’t want to insert an “activate” or a random word somewhere that overlaps with another clip you had previously added.

For clarity, when you insert a 1sec "activate" onto a 10sec clip of cafe noise, you end up with a 10sec clip that sounds like someone saying "activate" in a cafe, with "activate" superimposed on the background cafe noise. You do not end up with an 11 sec clip. You'll see later how pydub allows you to do this.

Creating the labels at the same time you overlay:

Recall also that the labels $y^{\langle t \rangle}$ represent whether or not someone has just finished saying "activate." Given a background clip, we can initialize $y^{\langle t \rangle} = 0$ for all $t$, since the clip doesn't contain any "activates."

When you insert or overlay an "activate" clip, you will also update the labels $y^{\langle t \rangle}$ so that 50 steps of the output now have target label 1. You will train a GRU to detect when someone has finished saying "activate". For example, suppose the synthesized "activate" clip ends at the 5sec mark in the 10sec audio, exactly halfway into the clip. Recall that $T_y = 1375$, so timestep 687 = int(1375*0.5) corresponds to the moment 5sec into the audio. So, you will set $y^{\langle 688 \rangle} = 1$. Further, you would be quite satisfied if the GRU detects "activate" anywhere within a short time-interval after this moment, so we actually set 50 consecutive values of the label $y^{\langle t \rangle}$ to 1. Specifically, we have $y^{\langle 688 \rangle} = y^{\langle 689 \rangle} = \cdots = y^{\langle 737 \rangle} = 1$.

This is another reason for synthesizing the training data: it's relatively straightforward to generate these labels $y^{\langle t \rangle}$ as described above. In contrast, if you have 10sec of audio recorded on a microphone, it's quite time consuming for a person to listen to it and mark manually exactly when "activate" finished.

Here's a figure illustrating the labels $y^{\langle t \rangle}$ for a clip into which we have inserted "activate", "innocent", "activate", "baby". Note that the positive labels "1" are associated only with the positive words.

Figure 2

To implement the training set synthesis process, you will use the following helper functions. All of these functions use a 1ms discretization interval, so the 10sec of audio is always discretized into 10,000 steps.

  1. get_random_time_segment(segment_ms) gets a random time segment in our background audio
  2. is_overlapping(segment_time, existing_segments) checks if a time segment overlaps with existing segments
  3. insert_audio_clip(background, audio_clip, existing_times) inserts an audio segment at a random time in our background audio using get_random_time_segment and is_overlapping
  4. insert_ones(y, segment_end_ms) inserts 1’s into our label vector y after the word “activate”

The function get_random_time_segment(segment_ms) returns a random time segment onto which we can insert an audio clip of duration segment_ms. Read through the code to make sure you understand what it is doing.

def get_random_time_segment(segment_ms):
    """
    Gets a random time segment of duration segment_ms in a 10,000 ms audio clip.
    
    Arguments:
    segment_ms -- the duration of the audio clip in ms ("ms" stands for "milliseconds")
    
    Returns:
    segment_time -- a tuple of (segment_start, segment_end) in ms
    """
    
    segment_start = np.random.randint(low=0, high=10000-segment_ms)   # Make sure segment doesn't run past the 10sec background 
    segment_end = segment_start + segment_ms - 1
    
    return (segment_start, segment_end)

Next, suppose you have inserted audio clips at segments (1000,1800) and (3400,4500). I.e., the first segment starts at step 1000, and ends at step 1800. Now, if we are considering inserting a new audio clip at (3000,3600) does this overlap with one of the previously inserted segments? In this case, (3000,3600) and (3400,4500) overlap, so we should decide against inserting a clip here.

For the purpose of this function, define (100,200) and (200,250) to be overlapping, since they overlap at timestep 200. However, (100,199) and (200,250) are non-overlapping.

Exercise: Implement is_overlapping(segment_time, previous_segments) to check if a new time segment overlaps with any of the previous segments. You will need to carry out 2 steps:

  1. Create a “False” flag, that you will later set to “True” if you find that there is an overlap.
  2. Loop over the previous_segments’ start and end times. Compare these times to the segment’s start and end times. If there is an overlap, set the flag defined in (1) as True. You can use:
for ....:
        if ... <= ... and ... >= ...:
            ...

Hint: There is overlap if the segment starts before the previous segment ends, and the segment ends after the previous segment starts.

# GRADED FUNCTION: is_overlapping

def is_overlapping(segment_time, previous_segments):
    """
    Checks if the time of a segment overlaps with the times of existing segments.
    
    Arguments:
    segment_time -- a tuple of (segment_start, segment_end) for the new segment
    previous_segments -- a list of tuples of (segment_start, segment_end) for the existing segments
    
    Returns:
    True if the time segment overlaps with any of the existing segments, False otherwise
    """
    
    segment_start, segment_end = segment_time
    
    ### START CODE HERE ### (≈ 4 line)
    # Step 1: Initialize overlap as a "False" flag. (≈ 1 line)
    overlap = False
    
    # Step 2: loop over the previous_segments start and end times.
    # Compare start/end times and set the flag to True if there is an overlap (≈ 3 lines)
    for previous_start, previous_end in previous_segments:
        if segment_start <= previous_end and segment_end >= previous_start:
            overlap = True
    ### END CODE HERE ###

    return overlap
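(The test cell that produced the output below was not preserved in this write-up; here is a sketch of an equivalent check, where the segment tuples are illustrative values chosen so that the first call has no overlap and the second does:)

overlap1 = is_overlapping((950, 1430), [(2000, 2550), (260, 949)])
overlap2 = is_overlapping((2305, 2950), [(824, 1532), (1900, 2305), (3424, 3656)])
print("Overlap 1 = ", overlap1)
print("Overlap 2 = ", overlap2)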
Overlap 1 =  False
Overlap 2 =  True

Expected Output:

Overlap 1: False
Overlap 2: True

Now, let's use the previous helper functions to insert a new audio clip onto the 10sec background at a random time, while making sure that any newly inserted segment doesn't overlap with the previous segments.

Exercise: Implement insert_audio_clip() to overlay an audio clip onto the background 10sec clip. You will need to carry out 4 steps:

  1. Get a random time segment of the right duration in ms.
  2. Make sure that the time segment does not overlap with any of the previous time segments. If it is overlapping, then go back to step 1 and pick a new time segment.
  3. Add the new time segment to the list of existing time segments, so as to keep track of all the segments you’ve inserted.
  4. Overlay the audio clip over the background using pydub. We have implemented this for you.
# GRADED FUNCTION: insert_audio_clip

def insert_audio_clip(background, audio_clip, previous_segments):
    """
    Insert a new audio segment over the background noise at a random time step, ensuring that the 
    audio segment does not overlap with existing segments.
    
    Arguments:
    background -- a 10 second background audio recording.  
    audio_clip -- the audio clip to be inserted/overlaid. 
    previous_segments -- times where audio segments have already been placed
    
    Returns:
    new_background -- the updated background audio
    """
    
    # Get the duration of the audio clip in ms
    segment_ms = len(audio_clip)
    
    ### START CODE HERE ### 
    # Step 1: Use one of the helper functions to pick a random time segment onto which to insert 
    # the new audio clip. (≈ 1 line)
    segment_time = get_random_time_segment(segment_ms)
    
    # Step 2: Check if the new segment_time overlaps with one of the previous_segments. If so, keep 
    # picking new segment_time at random until it doesn't overlap. (≈ 2 lines)
    while is_overlapping(segment_time, previous_segments):
        segment_time = get_random_time_segment(segment_ms)

    # Step 3: Add the new segment_time to the list of previous_segments (≈ 1 line)
    previous_segments.append(segment_time)
    ### END CODE HERE ###
    
    # Step 4: Superpose audio segment and background
    new_background = background.overlay(audio_clip, position = segment_time[0])
    
    return new_background, segment_time
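(The test cell is also missing here; below is a sketch of how insert_audio_clip could be exercised. The seed, the choice of clips, and the pre-existing segment are assumptions, so the segment time you get may differ from the expected value shown afterwards:)

np.random.seed(5)
audio_clip, segment_time = insert_audio_clip(backgrounds[0], activates[0], [(3790, 4400)])
audio_clip.export("insert_test.wav", format="wav")
print("Segment Time: ", segment_time)
IPython.display.Audio("insert_test.wav")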
Segment Time:  (2254, 3169)

Expected Output

Segment Time: (2254, 3169)
# Expected audio
IPython.display.Audio("audio_examples/insert_reference.wav")

Finally, implement code to update the labels $y^{\langle t \rangle}$, assuming you just inserted an "activate." In the code below, y is a (1, 1375) dimensional vector, since $T_y = 1375$.

If the "activate" ended at time step $t$, then set $y^{\langle t+1 \rangle} = 1$ as well as up to 49 additional consecutive values. However, make sure you don't run off the end of the array and try to update y[0][1375], since the valid indices are y[0][0] through y[0][1374] because $T_y = 1375$. So if "activate" ends at step 1370, you would get only y[0][1371] = y[0][1372] = y[0][1373] = y[0][1374] = 1.

Exercise: Implement insert_ones(). You can use a for loop. (If you are an expert in python's slice operations, feel free also to use slicing to vectorize this.) If a segment ends at segment_end_ms (using a 10000 step discretization), to convert it to the indexing for the outputs $y$ (using a 1375 step discretization), we will use this formula:

    segment_end_y = int(segment_end_ms * Ty / 10000.0)
# GRADED FUNCTION: insert_ones

def insert_ones(y, segment_end_ms):
    """
    Update the label vector y. The labels of the 50 output steps strictly after the end of the segment
    should be set to 1. By strictly we mean that the label of segment_end_y itself should be 0, while the
    50 following labels should be ones.
    
    
    Arguments:
    y -- numpy array of shape (1, Ty), the labels of the training example
    segment_end_ms -- the end time of the segment in ms
    
    Returns:
    y -- updated labels
    """
    
    # end of the segment, converted to the index scale of the output labels (Ty time-steps)
    segment_end_y = int(segment_end_ms * Ty / 10000.0)
    
    # Add 1 to the correct index in the background label (y)
    ### START CODE HERE ### (≈ 3 lines)
    for i in range(segment_end_y+1, segment_end_y+51):
        if  i < Ty:
            y[0, i] = 1
    ### END CODE HERE ###
    
    return y
arr1 = insert_ones(np.zeros((1, Ty)), 9700)
plt.plot(insert_ones(arr1, 4251)[0,:])
print("sanity checks:", arr1[0][1333], arr1[0][634], arr1[0][635])
sanity checks: 0.0 1.0 0.0
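These values follow from the conversion formula: segment_end_ms = 9700 maps to segment_end_y = int(9700 * 1375 / 10000) = 1333, so the ones occupy steps 1334 through 1374 (the last 9 of the 50 are clipped at $T_y = 1375$), which is why index 1333 is still 0. Likewise, segment_end_ms = 4251 maps to 584, so the ones occupy steps 585 through 634, which is why index 634 is 1 and index 635 is back to 0.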

Expected Output

sanity checks: 0.0 1.0 0.0

Finally, you can use insert_audio_clip and insert_ones to create a new training example.

Exercise: Implement create_training_example(). You will need to carry out the following steps:

  1. Initialize the label vector $y$ as a numpy array of zeros with shape $(1, T_y)$.
  2. Initialize the set of existing segments to an empty list.
  3. Randomly select 0 to 4 "activate" audio clips, and insert them onto the 10sec clip. Also insert labels at the correct positions in the label vector $y$.
  4. Randomly select 0 to 2 negative audio clips, and insert them into the 10sec clip.
# GRADED FUNCTION: create_training_example

def create_training_example(background, activates, negatives):
    """
    Creates a training example with a given background, activates, and negatives.
    
    Arguments:
    background -- a 10 second background audio recording
    activates -- a list of audio segments of the word "activate"
    negatives -- a list of audio segments of random words that are not "activate"
    
    Returns:
    x -- the spectrogram of the training example
    y -- the label at each time step of the spectrogram
    """
    
    # Set the random seed
    np.random.seed(18)
    
    # Make background quieter
    background = background - 20

    ### START CODE HERE ###
    # Step 1: Initialize y (label vector) of zeros (≈ 1 line)
    y = np.zeros((1,Ty))

    # Step 2: Initialize segment times as empty list (≈ 1 line)
    previous_segments = []
    ### END CODE HERE ###
    
    # Select 0-4 random "activate" audio clips from the entire list of "activates" recordings
    number_of_activates = np.random.randint(0, 5)
    random_indices = np.random.randint(len(activates), size=number_of_activates)
    random_activates = [activates[i] for i in random_indices]
    
    ### START CODE HERE ### (≈ 3 lines)
    # Step 3: Loop over randomly selected "activate" clips and insert in background
    for random_activate in random_activates:
        # Insert the audio clip on the background
        background, segment_time = insert_audio_clip(background, random_activate, previous_segments)
        # Retrieve segment_start and segment_end from segment_time
        segment_start, segment_end = segment_time
        # Insert labels in "y"
        y = insert_ones(y, segment_end)
    ### END CODE HERE ###

    # Select 0-2 random negatives audio recordings from the entire list of "negatives" recordings
    number_of_negatives = np.random.randint(0, 3)
    random_indices = np.random.randint(len(negatives), size=number_of_negatives)
    random_negatives = [negatives[i] for i in random_indices]

    ### START CODE HERE ### (≈ 2 lines)
    # Step 4: Loop over randomly selected negative clips and insert in background
    for random_negative in random_negatives:
        # Insert the audio clip on the background 
        background, _ = insert_audio_clip(background, random_negative, previous_segments)
    ### END CODE HERE ###
    
    # Standardize the volume of the audio clip 
    background = match_target_amplitude(background, -20.0)

    # Export new training example 
    file_handle = background.export("train" + ".wav", format="wav")
    print("File (train.wav) was saved in your directory.")
    
    # Get and plot spectrogram of the new recording (background with superposition of positive and negatives)
    x = graph_spectrogram("train.wav")
    
    return x, y
x, y = create_training_example(backgrounds[0], activates, negatives)
File (train.wav) was saved in your directory.

Expected Output

Now you can listen to the training example you created and compare it to the spectrogram generated above.

IPython.display.Audio("train.wav")

Expected Output

IPython.display.Audio("audio_examples/train_reference.wav")

Finally, you can plot the associated labels for the generated training example.

plt.plot(y[0])

Expected Output

1.4 - Full training set

You’ve now implemented the code needed to generate a single training example. We used this process to generate a large training set. To save time, we’ve already generated a set of training examples.

# Load preprocessed training examples
X = np.load("./XY_train/X.npy")
Y = np.load("./XY_train/Y.npy")

1.5 - Development set

To test our model, we recorded a development set of 25 examples. While our training data is synthesized, we want to create a development set using the same distribution as the real inputs. Thus, we recorded 25 10-second audio clips of people saying “activate” and other random words, and labeled them by hand. This follows the principle described in Course 3 that we should create the dev set to be as similar as possible to the test set distribution; that’s why our dev set uses real rather than synthesized audio.
(Note: the dev set uses real-world recordings rather than synthesized audio, labeled by hand, so that its distribution matches the real inputs the system will see.)

# Load preprocessed dev set examples
X_dev = np.load("./XY_dev/X_dev.npy")
Y_dev = np.load("./XY_dev/Y_dev.npy")

2 - Model

Now that you've built a dataset, let's write and train a trigger word detection model!

The model will use 1-D convolutional layers, GRU layers, and dense layers. Let’s load the packages that will allow you to use these layers in Keras. This might take a minute to load.

from keras.callbacks import ModelCheckpoint
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking, TimeDistributed, LSTM, Conv1D
from keras.layers import GRU, Bidirectional, BatchNormalization, Reshape
from keras.optimizers import Adam

2.1 - Build the model

Here is the architecture we will use. Take some time to look over the model and see if it makes sense.
Figure 3: Model architecture (a 1-D convolutional layer followed by two GRU layers and a time-distributed dense sigmoid output).

One key step of this model is the 1D convolutional step (near the bottom of Figure 3). It inputs the 5511 step spectrogram and outputs a 1375 step sequence, which is then further processed by multiple layers to get the final $T_y = 1375$ step output. This layer plays a role similar to the 2D convolutions you saw in Course 4: extracting low-level features and then possibly generating an output of a smaller dimension.
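As a quick sanity check on these numbers, the standard length formula for a 1-D convolution with no padding (the default "valid" padding, which the Conv1D call below uses) gives exactly this reduction:

output_steps = (5511 - 15) // 4 + 1   # floor((n - kernel_size) / stride) + 1 for a "valid" Conv1D
print(output_steps)                   # 1375, which matches Ty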

Computationally, the 1-D conv layer also helps speed up the model because now the GRU has to process only 1375 timesteps rather than 5511 timesteps. The two GRU layers read the sequence of inputs from left to right, and a final dense+sigmoid layer makes a prediction for $y^{\langle t \rangle}$. Because $y$ is binary valued (0 or 1), we use a sigmoid output at the last layer to estimate the chance of the output being 1, corresponding to the user having just said "activate."

Note that we use a uni-directional RNN rather than a bi-directional RNN. This is really important for trigger word detection, since we want to be able to detect the trigger word almost immediately after it is said. If we used a bi-directional RNN, we would have to wait for the whole 10sec of audio to be recorded before we could tell if “activate” was said in the first second of the audio clip.

Implementing the model can be done in four steps:

Step 1: CONV layer. Use Conv1D() to implement this, with 196 filters, a filter size of 15 (kernel_size=15), and a stride of 4. [See documentation.]

Step 2: First GRU layer. To generate the GRU layer, use:

X = GRU(units = 128, return_sequences = True)(X)

Setting return_sequences=True ensures that all the GRU’s hidden states are fed to the next layer. Remember to follow this with Dropout and BatchNorm layers.

Step 3: Second GRU layer. This is similar to the previous GRU layer (remember to use return_sequences=True), but has an extra dropout layer.

Step 4: Create a time-distributed dense layer as follows:

X = TimeDistributed(Dense(1, activation = "sigmoid"))(X)

This creates a dense layer followed by a sigmoid, so that the parameters used for the dense layer are the same for every time step. [See documentation.]

Exercise: Implement model(); the architecture is presented in Figure 3.

# GRADED FUNCTION: model

def model(input_shape):
    """
    Function creating the model's graph in Keras.
    
    Argument:
    input_shape -- shape of the model's input data (using Keras conventions)

    Returns:
    model -- Keras model instance
    """
    
    X_input = Input(shape = input_shape)
    
    ### START CODE HERE ###
    
    # Step 1: CONV layer (≈4 lines)
    X = Conv1D(filters=196, kernel_size=15, strides=4)(X_input)  # CONV1D
    X = BatchNormalization()(X)                                 # Batch normalization
    X = Activation('relu')(X)                                 # ReLu activation
    X = Dropout(0.8)(X)                                 # dropout (use 0.8)

    # Step 2: First GRU Layer (≈4 lines)
    X = GRU(units = 128, return_sequences = True)(X)       # GRU (use 128 units and return the sequences)
    X = Dropout(0.8)(X)                                 # dropout (use 0.8)
    X = BatchNormalization()(X)                                 # Batch normalization
    
    # Step 3: Second GRU Layer (≈4 lines)
    X = GRU(units = 128, return_sequences = True)(X)       # GRU (use 128 units and return the sequences)
    X = Dropout(0.8)(X)                                # dropout (use 0.8)
    X = BatchNormalization()(X)                                 # Batch normalization
    X = Dropout(0.8)(X)                                 # dropout (use 0.8)
    
    # Step 4: Time-distributed dense layer (≈1 line)
    X = TimeDistributed(Dense(1, activation = "sigmoid"))(X) # time distributed  (sigmoid)

    ### END CODE HERE ###

    model = Model(inputs = X_input, outputs = X)
    
    return model  
model = model(input_shape = (Tx, n_freq))

Let's print the model summary to keep track of the shapes.

model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 5511, 101)         0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 1375, 196)         297136    
_________________________________________________________________
batch_normalization_1 (Batch (None, 1375, 196)         784       
_________________________________________________________________
activation_1 (Activation)    (None, 1375, 196)         0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 1375, 196)         0         
_________________________________________________________________
gru_1 (GRU)                  (None, 1375, 128)         124800    
_________________________________________________________________
dropout_2 (Dropout)          (None, 1375, 128)         0         
_________________________________________________________________
batch_normalization_2 (Batch (None, 1375, 128)         512       
_________________________________________________________________
gru_2 (GRU)                  (None, 1375, 128)         98688     
_________________________________________________________________
dropout_3 (Dropout)          (None, 1375, 128)         0         
_________________________________________________________________
batch_normalization_3 (Batch (None, 1375, 128)         512       
_________________________________________________________________
dropout_4 (Dropout)          (None, 1375, 128)         0         
_________________________________________________________________
time_distributed_1 (TimeDist (None, 1375, 1)           129       
=================================================================
Total params: 522,561
Trainable params: 521,657
Non-trainable params: 904
_________________________________________________________________

Expected Output:

Total params: 522,561
Trainable params: 521,657
Non-trainable params: 904

The output of the network is of shape (None, 1375, 1) while the input is (None, 5511, 101). The Conv1D has reduced the number of steps from 5511 in the spectrogram to 1375.

2.2 - Fit the model

Trigger word detection takes a long time to train. To save time, we’ve already trained a model for about 3 hours on a GPU using the architecture you built above, and a large training set of about 4000 examples. Let’s load the model.

model = load_model('./models/tr_model.h5')

You can train the model further, using the Adam optimizer and binary cross entropy loss, as follows. This will run quickly because we are training just for one epoch and with a small training set of 26 examples.

opt = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=["accuracy"])
model.fit(X, Y, batch_size = 5, epochs=1)
Epoch 1/1
26/26 [==============================] - 32s - loss: 0.0727 - acc: 0.9806    

2.3 - Test the model

Finally, let’s see how your model performs on the dev set.

loss, acc = model.evaluate(X_dev, Y_dev)
print("Dev set accuracy = ", acc)
25/25 [==============================] - 5s
Dev set accuracy =  0.946036338806

This looks pretty good! However, accuracy isn’t a great metric for this task, since the labels are heavily skewed to 0’s, so a neural network that just outputs 0’s would get slightly over 90% accuracy. We could define more useful metrics such as F1 score or Precision/Recall. But let’s not bother with that here, and instead just empirically see how the model does.
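If you do want a skew-robust number, here is a minimal sketch of precision/recall/F1 on the dev set, thresholding the sigmoid outputs at 0.5 (the threshold and the use of scikit-learn are assumptions of this sketch, not part of the graded assignment):

from sklearn.metrics import precision_score, recall_score, f1_score   # assumed available in the environment

y_prob = model.predict(X_dev)                  # shape (25, 1375, 1): per-step probabilities
y_pred = (y_prob > 0.5).astype(int).ravel()    # flatten all output steps into one long binary vector
y_true = Y_dev.astype(int).ravel()

print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1 score: ", f1_score(y_true, y_pred))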

3 - Making Predictions

Now that you have built a working model for trigger word detection, let’s use it to make predictions. This code snippet runs audio (saved in a wav file) through the network.

def detect_triggerword(filename):
    plt.subplot(2, 1, 1)

    x = graph_spectrogram(filename)
    # the spectrogram outputs (freqs, Tx) and we want (Tx, freqs) as input to the model
    x  = x.swapaxes(0,1)
    x = np.expand_dims(x, axis=0)
    predictions = model.predict(x)
    
    plt.subplot(2, 1, 2)
    plt.plot(predictions[0,:,0])
    plt.ylabel('probability')
    plt.show()
    return predictions

Once you've estimated the probability of having detected the word "activate" at each output step, you can trigger a "chiming" sound to play when the probability is above a certain threshold. Further, $y^{\langle t \rangle}$ might be near 1 for many values in a row after "activate" is said, yet we want to chime only once. So we will insert a chime sound at most once every 75 output steps. This will help prevent us from inserting two chimes for a single instance of "activate". (This plays a role similar to non-max suppression from computer vision.)

chime_file = "audio_examples/chime.wav"
def chime_on_activate(filename, predictions, threshold):
    audio_clip = AudioSegment.from_wav(filename)
    chime = AudioSegment.from_wav(chime_file)
    Ty = predictions.shape[1]
    # Step 1: Initialize the number of consecutive output steps to 0
    consecutive_timesteps = 0
    # Step 2: Loop over the output steps in the y
    for i in range(Ty):
        # Step 3: Increment consecutive output steps
        consecutive_timesteps += 1
        # Step 4: If prediction is higher than the threshold and more than 75 consecutive output steps have passed
        if predictions[0,i,0] > threshold and consecutive_timesteps > 75:
            # Step 5: Superpose audio and background using pydub
            audio_clip = audio_clip.overlay(chime, position = ((i / Ty) * audio_clip.duration_seconds)*1000)
            # Step 6: Reset consecutive output steps to 0
            consecutive_timesteps = 0
        
    audio_clip.export("chime_output.wav", format='wav')

3.3 - Test on dev examples

Let's explore how our model performs on two unseen audio clips from the development set. Let's first listen to the two dev set clips.

IPython.display.Audio("./raw_data/dev/1.wav")
IPython.display.Audio("./raw_data/dev/2.wav")

Now let's run the model on these audio clips and see if it adds a chime after "activate"!

filename = "./raw_data/dev/1.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")


filename  = "./raw_data/dev/2.wav"
prediction = detect_triggerword(filename)
chime_on_activate(filename, prediction, 0.5)
IPython.display.Audio("./chime_output.wav")

Congratulations!

You’ve come to the end of this assignment!

Here’s what you should remember:

  • Data synthesis is an effective way to create a large training set for speech problems, specifically trigger word detection.
  • Using a spectrogram and optionally a 1D conv layer is a common pre-processing step prior to passing audio data to an RNN, GRU or LSTM.
  • An end-to-end deep learning approach can be used to build a very effective trigger word detection system.

Congratulations on finishing the final assignment!

Thank you for sticking with us through the end and for all the hard work you’ve put into learning deep learning. We hope you have enjoyed the course!

4 - Try your own example! (OPTIONAL/UNGRADED)

In this optional and ungraded portion of this notebook, you can try your model on your own audio clips!

Record a 10 second audio clip of you saying the word “activate” and other random words, and upload it to the Coursera hub as myaudio.wav. Be sure to upload the audio as a wav file. If your audio is recorded in a different format (such as mp3) there is free software that you can find online for converting it to wav. If your audio recording is not 10 seconds, the code below will either trim or pad it as needed to make it 10 seconds.

# Preprocess the audio to the correct format
def preprocess_audio(filename):
    # Trim or pad audio segment to 10000ms
    padding = AudioSegment.silent(duration=10000)
    segment = AudioSegment.from_wav(filename)[:10000]
    segment = padding.overlay(segment)
    # Set frame rate to 44100
    segment = segment.set_frame_rate(44100)
    # Export as wav
    segment.export(filename, format='wav')

Once you’ve uploaded your audio file to Coursera, put the path to your file in the variable below.

your_filename = "audio_examples/my_audio.wav"
preprocess_audio(your_filename)
IPython.display.Audio(your_filename) # listen to the audio you uploaded 

Finally, use the model to predict when you say activate in the 10 second audio clip, and trigger a chime. If beeps are not being added appropriately, try to adjust the chime_threshold.

chime_threshold = 0.5
prediction = detect_triggerword(your_filename)
chime_on_activate(your_filename, prediction, chime_threshold)
IPython.display.Audio("./chime_output.wav")

