Keywords: Supervised Learning, Unsupervised Learning, Regression, Classification, Clustering
As mentioned in my last blog, machine learning is divided into supervised learning and unsupervised learning.
Today, we're going to figure out what kinds of problems supervised learning and unsupervised learning are each suited to solving.
First, here are some questions we should discuss.
Why is machine learning divided into supervised learning and unsupervised learning?
Here are the definitions of supervised learning and unsupervised learning on Wikipedia.
Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples.
Unsupervised machine learning is the machine learning task of inferring a function that describes the structure of "unlabeled" data (i.e. data that has not been classified or categorized).
As we can see, the biggest difference between supervised learning and unsupervised learning is whether the data has labels.
In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output. On the contrary, unsupervised learning allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don't necessarily know the effect of the variables.
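To make the difference concrete, here is a minimal sketch in Python (assuming NumPy and scikit-learn, which this post does not otherwise depend on; the data is made up): the supervised model is trained on inputs together with their labels, while the unsupervised model only ever sees the inputs.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy inputs: four examples, two features each.
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
# Labels: the "correct answers" that only supervised learning gets to see.
y = np.array([0, 0, 1, 1])

# Supervised: the model is trained on inputs *and* their labels.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.2, 1.9]]))   # predicts a label for a new input

# Unsupervised: the model only sees the inputs and must find structure itself.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                  # groups discovered without any labels
```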
Supervised Learning
Concretely, supervised learning is divided into regression problem and classification problem.
Regression: try to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function.
For example, given data about the size of houses on the real estate market, try to predict their price. Price as a function of size is a continuous output, so this is a regression problem.
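As a rough sketch of this example (the sizes and prices below are invented purely for illustration), a linear regression on size alone might look like this:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up training data: house size in square meters -> sale price.
sizes = np.array([[50.0], [80.0], [110.0], [140.0], [200.0]])
prices = np.array([150_000, 230_000, 310_000, 390_000, 560_000])

model = LinearRegression().fit(sizes, prices)

# The prediction is a continuous number, which is what makes this regression.
print(model.predict([[120.0]]))   # estimated price for a 120 m^2 house
```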
Classification: instead, we try to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.
If we instead make our output whether the house "sells for more or less than the asking price", we are classifying the houses based on price into two discrete categories.
Tumor diagnosis is another classic classification problem: given a patient with a tumor, we have to predict whether the tumor is malignant or benign.
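A small sketch of the tumor example, with made-up single-feature data (tumor size only) and logistic regression chosen arbitrarily as the classifier, shows how the output becomes one of two discrete classes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up data: tumor size in cm -> 0 (benign) or 1 (malignant).
tumor_sizes = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.2]])
labels = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(tumor_sizes, labels)

# The prediction is one of a small set of discrete categories: classification.
print(clf.predict([[2.8]]))         # predicted class for a new tumor
print(clf.predict_proba([[2.8]]))   # estimated probability of each class
```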
Unsupervised Learning
Unsupervised learning can be divided into clustering and non-clustering.
Clustering: Take a collection of 1,000,000 different genes, and find a way to automatically group these genes into groups that are somehow similar or related by different variables, such as lifespan, location, roles, and so on.
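Here is a minimal sketch of that idea with k-means from scikit-learn, run on simulated "gene" feature vectors (the data is random and purely illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulated "genes": each row is one gene described by a few numeric
# variables (standing in for lifespan, location, role, ...). No labels.
genes = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 3)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 3)),
])

# Ask k-means to group the genes into clusters based only on similarity.
kmeans = KMeans(n_clusters=2, n_init=10).fit(genes)
print(kmeans.labels_[:10])        # cluster assigned to the first 10 genes
print(kmeans.cluster_centers_)    # the "typical" gene of each cluster
```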
Non-clustering: The "Cocktail Party Algorithm" allows you to find structure in a chaotic environment (i.e., identifying individual voices and music from a mesh of sounds at a cocktail party).
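One common way to attack this kind of problem is independent component analysis (ICA). The sketch below uses scikit-learn's FastICA on two synthetic signals standing in for a voice and background music; the signals, mixing matrix, and parameters are all made up for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)

# Two synthetic sources standing in for a voice and background music.
voice = np.sin(2 * t)
music = np.sign(np.sin(3 * t))
sources = np.c_[voice, music]

# Each "microphone" records a different mixture of the two sources.
mixing = np.array([[1.0, 0.5],
                   [0.5, 1.0]])
recordings = sources @ mixing.T

# ICA tries to recover the original sources from the mixtures alone,
# without ever being told what the sources were (no labels).
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(recordings)
print(recovered.shape)            # (2000, 2): two separated signals
```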
By unsupervised learning, we can derive structure by clustering the data based on relationships among the variables in the data.
How to solve a given supervised learning or unsupervised learning problem?
Before designing a learning algorithm, the first thing we should do is figure out what kind of problem we have. For example, if it belongs to supervised learning, the problem should be categorized as either a "regression" or a "classification" problem.
Steps to design a supervised learning or unsupervised learning algorithm
Generally, these are the steps you can take:
- Determine the type of training examples. Before doing anything else, the user should decide what kind of data is to be used as a training set. In case of handwriting analysis, for example, this might be a single handwritten character, an entire handwritten word, or an entire line of handwriting.
- Gather a training set. The training set needs to be representative of the real-world use of the function. Thus, a set of input objects is gathered and corresponding outputs are also gathered, either from human experts or from measurements.
- Determine the input feature representation of the learned function. The accuracy of the learned function depends strongly on how the input object is represented. Typically, the input object is transformed into a feature vector, which contains a number of features that are descriptive of the object. The number of features should not be too large, because of the curse of dimensionality; but should contain enough information to accurately predict the output.
- Determine the structure of the learned function and corresponding learning algorithm. For example, the engineer may choose to use support vector machines or decision trees.
- Complete the design. Run the learning algorithm on the gathered training set. Some supervised learning algorithms require the user to determine certain control parameters. These parameters may be adjusted by optimizing performance on a subset (called a validation set) of the training set, or via cross-validation.
- Evaluate the accuracy of the learned function. After parameter adjustment and learning, the performance of the resulting function should be measured on a test set that is separate from the training set. (A minimal sketch of these last two steps follows this list.)
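As a rough end-to-end sketch of the last two steps (with made-up data and a decision tree chosen arbitrarily as the learned function), tuning a control parameter by cross-validation and then evaluating on a held-out test set might look like this:

```python
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                # made-up feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # made-up labels

# The test set is kept aside and never used during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Tune a control parameter (here, tree depth) by cross-validation.
for depth in (1, 3, 5):
    scores = cross_val_score(DecisionTreeClassifier(max_depth=depth),
                             X_train, y_train, cv=5)
    print(depth, scores.mean())

# Final accuracy measured on the held-out test set.
final_model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(final_model.score(X_test, y_test))
```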
I'm not going to get into the details of these steps right now. A wide range of supervised learning algorithms are available, each with its strengths and weaknesses. There is no single learning algorithm that works best on all supervised learning problems.
Four major issues you should take care of in supervised learning and unsupervised learning
- Bias-variance tradeoff
- Function complexity and amount of training data
- Dimensionality of the input space
- Noise in the output values
- Other factors to consider: Heterogeneity of the data, Redundancy in the data, Presence of interactions and non-linearities and so on
The most widely used learning algorithms in supervised learning
- Support vector machines
- Linear regression
- Logistic regression
- Naive Bayes
- Linear discriminant analysis
- Decision trees
- k-nearest neighbor algorithm
- Neural networks
Algorithms commonly used in unsupervised learning
- Clustering
  - k-means
  - Mixture models
  - Hierarchical clustering
- Anomaly detection
- Neural networks
  - Autoencoders
  - Deep belief nets
  - Hebbian learning
  - Generative adversarial networks
  - Self-organizing map
- Approaches for learning latent variable models, such as
  - Expectation-maximization algorithm (EM)
  - Method of moments
- Blind signal separation techniques, e.g.,
  - Principal component analysis
  - Independent component analysis
  - Non-negative matrix factorization
  - Singular value decomposition
We'll cover each of these algorithms in the notes that follow.
How to evaluate the performance of a learning algorithm?
To be honest, this is a broad but extremely important question; we will talk about it later.
How to optimize a learning algorithm?
The same goes for this question.
Conclusion
By now, we know the two branches of machine learning, supervised learning and unsupervised learning, and have categorized some everyday problems as regression, classification, or even clustering (in unsupervised learning).
Then, I listed six common steps we usually take when designing a supervised learning algorithm (and similarly for an unsupervised learning one).
Finally, we listed eight common algorithms in supervised learning, which we will talk about later.
References
Supervised learning, from Wikipedia
Unsupervised learning, from Wikipedia