[Deep Learning] Andrew Ng's Deep Learning Course 1: Neural Networks and Deep Learning - Week 2 Quiz: Neural Network Basics

Video link: Andrew Ng's Deep Learning Course 1 - Neural Networks and Deep Learning (with Chinese and English subtitles)
The questions in this post come from:

  1. [CN/EN] [Andrew Ng Course Quizzes] Course 1 - Neural Networks and Deep Learning - Week 2 Quiz
  2. Andrew Ng Deep Learning Part 1 Week 2 Assignment Check-in

目录

  • Questions
  • Answers

Questions

1. What does a neuron compute?

A. A neuron computes the mean of all features before applying the output to an activation function
B. A neuron computes a linear function (z = Wx + b) followed by an activation function
C. A neuron computes an activation function followed by a linear function (z = Wx + b)
D. A neuron computes a function g that scales the input x linearly (Wx + b)
Note: The output of a neuron is a = g(Wx + b) where g is the activation function (sigmoid, tanh, ReLU, …).
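
The computation from the note can be sketched in a few lines of numpy (the sizes and values below are made up purely for illustration):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.random.randn(3, 1)   # one example with 3 input features
W = np.random.randn(1, 3)   # weights of a single neuron
b = 0.5                     # bias

z = np.dot(W, x) + b        # linear part: z = Wx + b
a = sigmoid(z)              # activation applied afterwards: a = g(z)
print(a.shape)              # (1, 1)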


2. Which of these is the “Logistic Loss”?

A. $\mathcal{L}^{(i)}(\hat{y}^{(i)}, y^{(i)}) = -\big(y^{(i)}\log\hat{y}^{(i)} + (1 - y^{(i)})\log(1 - \hat{y}^{(i)})\big)$
B. $\mathcal{L}^{(i)}(\hat{y}^{(i)}, y^{(i)}) = \max\big(0,\ y^{(i)} - \hat{y}^{(i)}\big)$
C. $\mathcal{L}^{(i)}(\hat{y}^{(i)}, y^{(i)}) = \big|y^{(i)} - \hat{y}^{(i)}\big|^2$
D. $\mathcal{L}^{(i)}(\hat{y}^{(i)}, y^{(i)}) = \big|y^{(i)} - \hat{y}^{(i)}\big|$
Note: We are using a cross-entropy loss function.
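
A minimal numpy sketch of the cross-entropy loss from option A, evaluated on made-up predictions (the helper name logistic_loss is ours, not from the course):

import numpy as np

def logistic_loss(y_hat, y):
    # L = -(y*log(y_hat) + (1 - y)*log(1 - y_hat))
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

print(logistic_loss(0.9, 1))  # small loss: confident and correct
print(logistic_loss(0.9, 0))  # large loss: confident but wrong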


3. Suppose img is a (32, 32, 3) array, representing a 32x32 image with 3 color channels: red, green and blue. How do you reshape this into a column vector?
A. x = img.reshape((32 * 32,3))
B. x = img.reshape((3,32 * 32))
C. x = img.reshape((32 * 32 * 3,1))
D. x = img.reshape((1,32 * 32 * 3))
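
If you want to check rather than memorize, each option's resulting shape can be printed directly (img replaced by a random stand-in):

import numpy as np

img = np.random.randn(32, 32, 3)            # stand-in for the image
print(img.reshape((32 * 32, 3)).shape)      # (1024, 3)
print(img.reshape((3, 32 * 32)).shape)      # (3, 1024)
print(img.reshape((32 * 32 * 3, 1)).shape)  # (3072, 1): a column vector
print(img.reshape((1, 32 * 32 * 3)).shape)  # (1, 3072): a row vector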


4. Consider the following two random arrays “a” and “b”:

a = np.random.randn(2, 3) # a.shape = (2, 3)
b = np.random.randn(2, 1) # b.shape = (2, 1)
c = a + b

What will be the shape of “c”?
A. The computation cannot happen because the sizes don’t match. It’s going to be “Error”!
B. c.shape = (3, 2)
C. c.shape = (2, 3)
D. c.shape = (2, 1)
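
Broadcasting questions like this are quick to settle in a REPL; a sketch:

import numpy as np

a = np.random.randn(2, 3)
b = np.random.randn(2, 1)
c = a + b        # b is stretched along its second axis to match a
print(c.shape)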


5. Consider the following two random arrays “a” and “b”:

a = np.random.randn(4, 3) # a.shape = (4, 3)
b = np.random.randn(3, 2) # b.shape = (3, 2)
c = a*b

What will be the shape of “c”?
A. c.shape = (3, 3)
B. c.shape = (4, 2)
C. The computation cannot happen because the sizes don’t match. It’s going to be “Error”!
D. c.shape = (4, 3)
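
Unlike question 4, these shapes are not broadcast-compatible, which numpy reports as a ValueError; a sketch to see the error for yourself:

import numpy as np

a = np.random.randn(4, 3)
b = np.random.randn(3, 2)
try:
    c = a * b    # element-wise product needs broadcastable shapes
except ValueError as e:
    print("ValueError:", e)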

6. Suppose you have $n_x$ input features per example. Recall that $X = \big[x^{(1)}\ x^{(2)}\ \dots\ x^{(m)}\big]$. What is the dimension of $X$?
A. $(m, 1)$
B. $(1, m)$
C. $(m, n_x)$
D. $(n_x, m)$
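
A sketch of how X is assembled, using hypothetical sizes (5 features, 10 examples) to make the dimensions concrete:

import numpy as np

n_x, m = 5, 10
examples = [np.random.randn(n_x, 1) for _ in range(m)]  # m column vectors x^(i)
X = np.hstack(examples)  # each example becomes one column of X
print(X.shape)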


7. Recall that “np.dot(a,b)” performs a matrix multiplication on a and b, whereas “a*b” performs an element-wise multiplication.
Consider the two following random arrays “a” and “b”:

a = np.random.randn(12288, 150) # a.shape = (12288, 150)
b = np.random.randn(150, 45) # b.shape = (150, 45)
c = np.dot(a,b)

What is the shape of c?
A. The computation cannot happen because the sizes don’t match. It’s going to be “Error”!
B. c.shape = (12288, 45)
C. c.shape = (12288, 150)
D. c.shape = (150, 150)
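
For np.dot the inner dimensions must agree and the outer ones remain, which is again easy to confirm:

import numpy as np

a = np.random.randn(12288, 150)
b = np.random.randn(150, 45)
c = np.dot(a, b)   # (12288, 150) times (150, 45): the 150s contract
print(c.shape)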


8. Consider the following code snippet:

# a.shape = (3,4)
# b.shape = (4,1)
# (assume c has been initialized with shape (3,4))
for i in range(3):
	for j in range(4):
		c[i][j] = a[i][j] + b[j]

How do you vectorize this?
A. c = a + b.T
B. c = a.T + b.T
C. c = a.T + b
D. c = a + b
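
A candidate can be tested by comparing it against the loop on random data; a sketch (with c pre-allocated as the loop requires):

import numpy as np

a = np.random.randn(3, 4)
b = np.random.randn(4, 1)

c_loop = np.zeros((3, 4))   # reference result from the original loop
for i in range(3):
    for j in range(4):
        c_loop[i][j] = a[i][j] + b[j]

c_vec = a + b.T             # candidate vectorization
print(np.allclose(c_loop, c_vec))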


9. Consider the following code:

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)
c = a*b

What will c be? (If you’re not sure, feel free to run this in Python to find out.)
A. This will invoke broadcasting, so b is copied three times to become (3,3), and ∗ is an element-wise product so c.shape will be (3, 3)
B. This will invoke broadcasting, so b is copied three times to become (3, 3), and ∗ invokes a matrix multiplication operation of two 3x3 matrices so c.shape will be (3, 3)
C. This will multiply a 3x3 matrix a with a 3x1 vector, thus resulting in a 3x1 vector. That is, c.shape = (3,1).
D. It will lead to an error since you cannot use “*” to operate on these two matrices. You need to instead use np.dot(a,b)

10. Consider the following computation graph.
[Figure: computation graph]

What is the output J?
A. J = (c - 1) * (b + a)
B. J = (a - 1) * (b + c)
C. J = a*b + b*c + a*c
D. J = (b - 1) * (c + a)


Answers

  1. B
  2. A
  3. C
  4. C (by Python broadcasting, b is automatically expanded to (2, 3) so it can be added to a)
  5. C
  6. D
  7. B
  8. A (Why not C? Note that the given for loop computes a[i][j] + b[j], not a[j][i] + b[j]! If the code had used the latter, the answer would be C.)
  9. A (When computing with “*”, b is broadcast to a (3, 3) matrix, and the product is element-wise: each element of a is multiplied by the element of b at the same position. np.dot instead performs matrix multiplication as in linear algebra. You can write a short test to see the difference, as follows:)
import numpy as np

a = np.random.randn(3, 3)
b = np.random.randn(3, 1)

print("a:")
print(a)
print("b:")
print(b)

c = a*b           # element-wise product; b (3, 1) is broadcast to (3, 3)
print("a*b:")
print(c)

c = np.dot(a, b)  # matrix product: (3, 3) times (3, 1) gives (3, 1)
print("np.dot(a, b):")
print(c)

Output:

a:
[[-0.03228798  1.28358425  0.66830909]
 [ 0.84179787  0.4902092  -0.09530794]
 [ 0.35068597 -0.60616291 -0.47161416]]
b:
[[-0.21431806]
 [ 0.71400772]
 [ 0.41949772]]
a*b:
[[ 0.0069199  -0.27509529 -0.14323071]
 [ 0.60105018  0.35001315 -0.06805061]
 [ 0.14711196 -0.25428396 -0.19784106]]
np.dot(a, b):
[[ 1.2037631]
 [ 0.1296192]
 [-0.7058044]]
  10. B
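
The computation graph image is not reproduced here; in the usual version of this quiz the graph computes u = a*b, v = a*c and w = b + c, with J = u + v - w. Under that assumption, a quick numeric check shows this matches the factored form in option B:

import numpy as np

# Assumed graph (image not shown): u = a*b, v = a*c, w = b + c, J = u + v - w
a, b, c = np.random.randn(3)
J_graph = a * b + a * c - (b + c)
J_factored = (a - 1) * (b + c)
print(np.isclose(J_graph, J_factored))  # True: the two forms agree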
