[Week 1] Machine Learning Notes 1 - Stanford University Coursera Lesson

What is Machine Learning?

Two definitions of Machine Learning are offered. Arthur Samuel described it as: "the field of study that gives computers the ability to learn without being explicitly programmed." This is an older, informal definition.

Tom Mitchell provides a more modern definition: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."

Example: playing checkers.
E = the experience of playing many games of checkers.
T = the task of playing checkers.
P = the probability that the program will win the next game.

In general, any machine learning problem can be assigned to one of two broad classifications:
Supervised learning and Unsupervised learning.

Supervised Learning

In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output.
Supervised learning problems are categorized into "regression" and "classification" problems. In a regression problem, we are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function. In a classification problem, we are instead trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.
Example 1:
Given data about the size of houses on the real estate market, try to predict their price. Price as a function of size is a continuous output, so this is a regression problem.
We could turn this example into a classification problem by instead making our output about whether the house "sells for more or less than the asking price." Here we are classifying the houses based on price into two discrete categories.
Example 2:
(a) Regression - Given a picture of a person, we have to predict their age on the basis of the given picture.
(b) Classification - Given a patient with a tumor, we have to predict whether the tumor is malignant or benign.
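
To make the regression/classification distinction concrete, here is a minimal sketch in Python. It assumes scikit-learn is available, and the house sizes, prices, and above/below-asking labels are made-up numbers for illustration only:

```python
# Minimal sketch contrasting regression and classification on toy housing data.
# Assumes scikit-learn is installed; sizes, prices, and labels are made-up numbers.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Input: house size in square feet (one feature per example)
sizes = np.array([[650], [785], [1200], [1480], [1940], [2300]])

# Regression: predict a continuous output (price, in thousands of dollars)
prices = np.array([150, 180, 250, 310, 400, 470])
reg = LinearRegression().fit(sizes, prices)
print("Predicted price for 1600 sq ft:", reg.predict([[1600]])[0])

# Classification: predict a discrete output
# (1 = sold for more than the asking price, 0 = sold for less)
sold_above_asking = np.array([0, 0, 1, 0, 1, 1])
clf = LogisticRegression().fit(sizes, sold_above_asking)
print("Sold above asking for 1600 sq ft?", clf.predict([[1600]])[0])
```

The same input feature (size) feeds both models; only the type of output changes: a continuous price for the regression, a discrete 0/1 label for the classification.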

Supervised learning means the algorithm is given a data set in which the examples already carry the correct answer. For instance, in a housing-price data set the algorithm knows the correct price for every house, i.e. the price it actually sold for, and its job is to work out more correct prices, such as for the new house your friend wants to sell. In more formal terms, this is a regression problem (regression is one kind of supervised learning), meaning we want to predict a continuous-valued output such as the price. Classification is another kind of supervised-learning problem: based on one or more features, we predict a discrete-valued output. It is supervised because some correct answers are given in advance, and the algorithm learns from them in order to predict the results for new data.

Interestingly, some learning algorithms can handle an effectively infinite number of features, not just three or five but an enormous number of attributes. How to deal with infinitely many features, and how to avoid running out of memory trying to store them all, is the accomplishment of an algorithm called the Support Vector Machine (SVM).
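
The idea alluded to here is the kernel trick: with a kernel such as the RBF (Gaussian) kernel, an SVM implicitly works in an infinite-dimensional feature space while only ever computing pairwise similarities between examples, so nothing infinite has to be stored. A hedged sketch with scikit-learn (the 2-D points below are made-up toy data):

```python
# Sketch of the kernel trick: an RBF-kernel SVM implicitly uses an infinite-dimensional
# feature mapping, but only pairwise kernel values between examples are ever computed.
# Assumes scikit-learn is installed; the points below are made-up toy data.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9],
              [0.1, 1.0], [1.0, 0.1]])
y = np.array([0, 0, 1, 1, 0, 1])

model = SVC(kernel="rbf", gamma=2.0)   # RBF kernel: infinitely many implicit features
model.fit(X, y)
print(model.predict([[0.9, 0.8]]))     # classify a new point
```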

Summary: in supervised learning, every example in the data set (the training set) comes with the corresponding correct answer, and the algorithm makes predictions based on those answers. Regression and classification are both kinds of supervised learning: the former predicts a continuous-valued output, the latter a discrete-valued output.

Unsupervised Learning

Unsupervised learning allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don't necessarily know the effect of the variables.
We can derive this structure by clustering the data based on relationships among the variables in the data.
With unsupervised learning there is no feedback based on the prediction results.
Example:
Clustering: Take a collection of 1,000,000 different genes, and find a way to automatically group these genes into groups that are somehow similar or related by different variables, such as lifespan, location, roles, and so on.
Non-clustering: The "Cocktail Party Algorithm", allows you to find structure in a chaotic environment. (i.e. identifying individual voices and music from a mesh of sounds at a cocktail party).
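
As a small illustration of clustering, here is a sketch using scikit-learn's k-means. The data is synthetic rather than real gene measurements, and the number of clusters is picked by hand; the point is only that no labels are ever provided:

```python
# Minimal clustering sketch: group unlabeled points into k clusters with k-means.
# Assumes scikit-learn is installed; the data is synthetic, not real gene measurements.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two synthetic groups of points; note that the algorithm never sees any labels.
data = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
                  rng.normal(3.0, 0.5, size=(50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_[:10])        # cluster assignments discovered for the first 10 points
print(kmeans.cluster_centers_)    # the two cluster centers the algorithm found
```

k-means is only one clustering algorithm; the cocktail-party example belongs to a different family of techniques (source separation, such as independent component analysis) rather than clustering.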

In unsupervised learning there are no labels attached to the data; every example looks the same, and nothing distinguishes one from another. We are simply told, "here is a data set, can you find some structure in it?" For example, given such a data set, a clustering algorithm might decide that it contains two distinct clusters and split the data into those two groups accordingly. We never gave the algorithm a correct answer, yet it grouped the data on its own; that is unsupervised learning.

Q: Is there a prerequisite for this course?
A: Students are expected to have the following background:

0. Using Octave will help you learn faster.
1. A basic understanding of computer science and the ability to write simple programs.
2. Familiarity with basic probability theory.
3. Familiarity with basic linear algebra.
