As a computer science student, I had a hard time understanding the Beta Distribution. What made it worse is that every time I tried to find something about it on the internet, most resources only focused on the pdf (probability density function).
In this post, I am going to talk about the Beta Distribution and some intuitive interpretations behind it.
Suppose we have two coins (A and B), and we are running a statistical experiment to identify whether these coins are biased or not.
For coin A, we tossed it 5 times and the results are: 1, 0, 0, 0, 0. (1 indicates Tail and 0 indicates Head)
For coin B, we tossed it 10 times and the results are: 1, 1, 0, 0, 0, 0, 0, 0, 0, 0.
The observed probability of Tails is identical for these two coins: 0.2. Is it safe to say that both coins equally favor Tails?
The answer is no. From the perspective of the law of large numbers (LLN), the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
That means that although the estimated standard deviations of the two distributions are the same, the standard error of A will be larger than that of B, because of A's smaller sample size. (Please note that the standard error of a sample average measures the rough size of the difference between the population average and the sample average.)
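As a quick numeric check (my own, not from the original post), the standard error of a sample proportion is sqrt(p(1−p)/n):

```python
import math

# Estimated probability of Tails is the same for both coins.
p_hat = 0.2

# Standard error of a sample proportion: sqrt(p * (1 - p) / n).
se_a = math.sqrt(p_hat * (1 - p_hat) / 5)   # coin A: 5 tosses
se_b = math.sqrt(p_hat * (1 - p_hat) / 10)  # coin B: 10 tosses

print(f"SE for coin A (n=5):  {se_a:.3f}")  # ~0.179
print(f"SE for coin B (n=10): {se_b:.3f}")  # ~0.126
```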
Now we know that the expected outcomes of coins A and B are the same, while our confidence in these two estimates is different. Let PA denote the probability for coin A to land Tails, and PB denote the probability for coin B to land Tails. We want to know the uncertainty of PA and PB taking different values, ranging from 0 to 1. In other words, that is the probability (uncertainty) for PA and PB to take each possible probability value. That is, a probability (uncertainty) over probabilities.
Below is the probability (uncertainty) that we wanted; a code sketch that generates a similar figure is shown below.
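Here is a minimal Python sketch of the figure, assuming each curve is a Beta density with α = the number of Tails and β = the number of Heads (Beta(1, 4) for coin A, Beta(2, 8) for coin B, Beta(20, 80) for the third coin):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

x = np.linspace(0, 1, 500)

# (alpha, beta) = (number of Tails, number of Heads) for each coin.
coins = [
    ("coin A: 1 Tail, 4 Heads", 1, 4, "red"),
    ("coin B: 2 Tails, 8 Heads", 2, 8, "green"),
    ("coin C: 20 Tails, 80 Heads", 20, 80, "blue"),
]

for label, a, b, color in coins:
    plt.plot(x, beta.pdf(x, a, b), color=color, label=label)

plt.xlabel("probability of Tails")
plt.ylabel("density")
plt.legend()
plt.show()
```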
In the above graph, the red line illustrates coin A, the green line illustrates coin B, and the blue line corresponds to another coin with 80 Heads and 20 Tails.
From the red curve (coin A), we can see that even though there is one Tail in the five tosses, the probability for it to be Tails peaks around zero. That means the most probable probability for coin A to be Tails is close to zero.
From the green curve (coin B), we can see that the peak is close to 0.15. That means the most probable probability for coin B to be Tails is close to 0.15.
From the blue curve, the peak is close to 0.2. That means the most probable probability for it to be Tails is close to 0.2.
Also, we can see that although the expected probabilities for these coins to be Tails are the same (0.2), the shapes of the probability distributions are different. And the more data we get, the more the probability distribution concentrates in a small area.
That is the Beta Distribution.
The pdf of the Beta Distribution is:

f(x; α, β) = x^(α−1) (1−x)^(β−1) / B(α, β), for 0 ≤ x ≤ 1
where B(α, β) is a normalizing constant that makes the total probability integrate to 1:

B(α, β) = Γ(α) Γ(β) / Γ(α + β)
where Γ(x) is the Gamma Function.
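As a quick sanity check (my own, not from the original post), we can evaluate the pdf directly from the formula and compare it with scipy:

```python
import math
from scipy.stats import beta

def beta_pdf(x, a, b):
    """Beta pdf computed directly from the formula above."""
    # B(a, b) = gamma(a) * gamma(b) / gamma(a + b)
    norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x ** (a - 1) * (1 - x) ** (b - 1) / norm

print(beta_pdf(0.2, 2, 8))  # direct formula: ~3.0199
print(beta.pdf(0.2, 2, 8))  # scipy's implementation gives the same value
```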
The Beta Distribution can express a wide range of different shapes for its pdf, as the sketch below illustrates.
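A minimal sketch (the parameter choices here are mine, not from the original post) showing uniform, U-shaped, bell-shaped, and skewed densities:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

# Avoid the endpoints, where some Beta densities diverge.
x = np.linspace(0.01, 0.99, 500)

# A few (alpha, beta) pairs chosen to show characteristic shapes.
shapes = [(1, 1), (0.5, 0.5), (5, 5), (2, 5), (5, 2)]

for a, b in shapes:
    plt.plot(x, beta.pdf(x, a, b), label=f"Beta({a}, {b})")

plt.xlabel("x")
plt.ylabel("density")
plt.legend()
plt.show()
```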
The expected value of the Beta Distribution is α / (α + β), which answers the intuitive question of why coins A and B have the same expected value: 1/(1+4) = 2/(2+8) = 20/(20+80) = 0.2.
The variance of a Beta distribution is:

Var = αβ / ((α + β)² (α + β + 1))
This answers the other question: if the expected values are the same, then as the number of trials becomes larger and larger, the dispersion of the Beta distribution becomes smaller and smaller.
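Plugging the three coins into these formulas (again my own check, using the counts as α and β):

```python
# Mean and variance from the formulas above, for the three coins.
def beta_mean_var(a, b):
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

for name, a, b in [("A", 1, 4), ("B", 2, 8), ("C", 20, 80)]:
    mean, var = beta_mean_var(a, b)
    print(f"coin {name}: mean = {mean:.2f}, variance = {var:.4f}")

# All three means are 0.20, but the variance shrinks from
# 0.0267 (coin A) to 0.0145 (coin B) to 0.0016 (coin C).
```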
One important application of the Beta distribution is that it can be used as a conjugate prior for the Binomial distribution in Bayesian analysis.
In Bayesian probability theory, if the posterior distributions P(h|D) are in the same family as the prior probability distribution P(h), the prior and posterior are called conjugate distributions, and the prior is called a conjugate prior.
A conjugate prior is an algebraic convenience, giving a closed-form expression for the posterior; otherwise a difficult numerical integration may be necessary. Further, conjugate priors may give intuition, by more transparently showing how a likelihood function updates a prior distribution.
For a Binomial distribution, suppose we have already observed α successes and β failures; we use this information as a prior when modeling a further s successes and f failures.
The prior is a Beta distribution:

P(p) = p^(α−1) (1−p)^(β−1) / B(α, β)

The likelihood is a Binomial distribution:

P(data | p) = C(s + f, s) p^s (1−p)^f

The posterior is another Beta distribution:

P(p | data) ∝ p^(α+s−1) (1−p)^(β+f−1), which is Beta(α + s, β + f)
This posterior distribution could then be used as the prior for more samples, with the hyperparameters simply adding each extra piece of information as it comes.
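A minimal sketch of this updating rule (the prior counts and new observations are my own example numbers):

```python
from scipy.stats import beta

# Prior: alpha = 2 successes, beta = 8 failures (e.g. coin B's record).
a, b = 2, 8

# New data: s further successes and f further failures.
s, f = 18, 72

# Conjugacy: the posterior is simply Beta(a + s, b + f).
posterior = beta(a + s, b + f)
print(posterior.mean())  # 0.2 = (2 + 18) / (2 + 8 + 18 + 72)

# This Beta(20, 80) posterior can in turn serve as the prior for the
# next batch of tosses; the hyperparameters just accumulate the counts.
```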
Reposted from: http://xiangacadia.github.io/statistics/2014/07/29/Beta-Distribution.html