This post collects the dozen-plus quiz questions from the end of the Week 1 course. In other writeups the answers appear right next to the questions, which makes it too easy to skip the thinking, so this post does some light editing and moves all the answers to the end of the article.
Reference: https://blog.csdn.net/u013733326/article/details/79862336
For learning purposes only.
English version:
[ ] AI is powering personal devices in our homes and offices, similar to electricity.
[ ] Through the “smart grid”, AI is delivering a new wave of electricity.
[ ] AI runs on computers and is thus powered by electricity, but it is letting computers do things not possible before.
[ ] Similar to electricity starting about 100 years ago, AI is transforming multiple industries.
Note: Andrew illustrated the same idea in the lecture.
[ ] We have access to a lot more computational power.
[ ] Neural Networks are a brand new field.
[ ] We have access to a lot more data.
[ ] Deep learning has resulted in significant improvements in important applications such as online advertising, speech recognition, and image recognition.
[ ] Being able to try out ideas quickly allows deep learning engineers to iterate more quickly.
[ ] Faster computation can help speed up how long a team takes to iterate to a good idea.
[ ] It is faster to train on a big dataset than a small dataset.
[ ] Recent progress in deep learning algorithms has allowed us to train good models faster (even without changing the CPU/GPU hardware).
[ ] True
[ ] False
Note: The original quiz image was not available, so a correct picture has been placed here instead.
[ ] True
[ ] False
[ ] True
[ ] False
[ ] It can be trained as a supervised learning problem.
[ ] It is strictly more powerful than a Convolutional Neural Network (CNN).
[ ] It is applicable when the input/output is a sequence (e.g., a sequence of words).
[ ] RNNs represent the recurrent process of Idea->Code->Experiment->Idea->….
[ ] Increasing the training set size generally does not hurt an algorithm's performance, and it may help significantly.
[ ] Increasing the size of a neural network generally does not hurt an algorithm's performance, and it may help significantly.
[ ] Decreasing the training set size generally does not hurt an algorithm's performance, and it may help significantly.
[ ] Decreasing the size of a neural network generally does not hurt an algorithm's performance, and it may help significantly.
[x] Similar to electricity starting about 100 years ago, AI is transforming multiple industries.
Note: Andrew illustrated the same idea in the lecture.
[x] We have access to a lot more computational power.
[x] We have access to a lot more data.
[x] Being able to try out ideas quickly allows deep learning engineers to iterate more quickly.
[x] Faster computation can help speed up how long a team takes to iterate to a good idea.
[x] Recent progress in deep learning algorithms has allowed us to train good models faster (even without changing the CPU/GPU hardware).
Note: A bigger dataset generally requires more time to train the same model.
[x] False
Note: Prior experience may help, but no one can consistently find the best model or hyperparameters without iterating.
The correct image has been placed beneath the corresponding question.
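The question with the image appears to ask which plot shows the ReLU activation function. As a quick reference (not part of the quiz itself), a minimal sketch of ReLU in plain Python:

```python
def relu(x):
    """ReLU (Rectified Linear Unit): zero for negative inputs, identity otherwise."""
    return max(0.0, x)

# ReLU is flat at zero to the left of the origin and linear to the right,
# which is the shape the correct picture shows.
print([relu(x) for x in [-2.0, -0.5, 0.0, 1.5, 3.0]])  # -> [0.0, 0.0, 0.0, 1.5, 3.0]
```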
[x] False
Blogger's note: Images are unstructured data.
[x] False
Blogger's note: Taken on its own, the raw data would look unstructured, but here it has been integrated into a dataset, so it counts as structured data.
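To make the note above concrete, here is a hypothetical illustration of the distinction (all field names and values are invented for this sketch):

```python
# Structured data: every value sits in a named field with a defined meaning,
# like a column in a database table.
structured = [
    {"city": "A", "population": 1_200_000, "gdp_per_capita": 30_000},
    {"city": "B", "population": 800_000, "gdp_per_capita": 45_000},
]

# Unstructured data: raw content such as image pixels or free text,
# with no per-value schema.
unstructured_image = [
    [0, 255, 128],
    [64, 32, 200],
]
unstructured_text = "a photo of a cat sitting on a windowsill"

print(structured[0]["population"], len(unstructured_image))
```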
[x] It can be trained as a supervised learning problem.
[x] It is applicable when the input/output is a sequence (e.g., a sequence of words).
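A minimal sketch of why an RNN fits sequence input/output: each step's hidden state feeds into the next step, carrying context along the sequence. The weights and inputs here are illustrative values, not from the course:

```python
import math

def rnn_step(x_t, h_prev, w_x=0.5, w_h=0.8, b=0.0):
    # One recurrent step: the new hidden state combines the current input
    # x_t with the previous hidden state h_prev.
    return math.tanh(w_x * x_t + w_h * h_prev + b)

h = 0.0  # initial hidden state
for x_t in [1.0, 0.0, -1.0]:  # a toy input sequence
    h = rnn_step(x_t, h)      # the same weights are reused at every step
print(round(h, 4))
```

Because the same `rnn_step` is applied at every position, the network handles sequences of any length, which is what makes it suitable for inputs like word sequences.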
x-axis (horizontal axis) is the amount of data.
y-axis (vertical axis) is the performance of the algorithm (blogger's note: this can also be read as accuracy).
[x] Increasing the training set size generally does not hurt an algorithm's performance, and it may help significantly.
[x] Increasing the size of a neural network generally does not hurt an algorithm's performance, and it may help significantly.