# Bayesian Statistics, Maximum Likelihood Estimation, and Machine Learning

# Resources

- Wikipedia: Prior Probability
- Wikipedia: Posterior Probability
- Wikipedia: Maximum Likelihood Estimation
- YouTube: 오토인코더의 모든 것 1/3 (All About Autoencoders, Part 1/3)

# Prior probability

The prior probability distribution of an uncertain quantity is the probability distribution expressing our beliefs about that quantity **before** any evidence is taken into account. This is often expressed as $p(\theta)$, where $\theta$ denotes the uncertain quantity (e.g., the model parameters).

# Posterior probability

The posterior probability of a random event is the conditional probability that is assigned **after** the relevant evidence is taken into account. This is often expressed as $p(\theta \mid X)$, where $X$ denotes the observed evidence. The prior and posterior probabilities are related by Bayes' theorem as follows:

$$p(\theta \mid X) = \frac{p(X \mid \theta)\,p(\theta)}{p(X)}$$
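
As a small illustration (a minimal sketch added here, not part of the original notes), the posterior for a coin's head probability can be computed numerically on a grid of candidate parameter values:

```python
import numpy as np

# Hypothetical example: infer a coin's head probability theta from observed flips.
theta = np.linspace(0.01, 0.99, 99)       # grid of candidate parameter values
prior = np.ones_like(theta) / len(theta)  # uniform prior p(theta)

heads, tails = 7, 3                              # observed evidence X: 7 heads, 3 tails
likelihood = theta**heads * (1 - theta)**tails   # p(X | theta)

# Bayes' theorem: posterior is proportional to likelihood * prior.
unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()    # normalize by the evidence p(X)

print(theta[np.argmax(posterior)])        # posterior mode, around 0.7
```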

# Maximum Likelihood Estimation (MLE)

MLE is a method of estimating the parameters of a statistical model given observations. Intuitively, we are trying to find the model parameters that make the observed data most probable. This is done by finding the parameters that maximize the likelihood function $\mathcal{L}(\theta \mid x) = p(x \mid \theta)$. When we are dealing with discrete random variables, the likelihood is the probability mass of the observed data. When we are dealing with continuous random variables, the likelihood is the value of the probability density function at the observed data.

We can formulate the MLE problem as follows:

$$\hat{\theta} = \arg\max_{\theta} \; p(x \mid \theta)$$

where $\theta$ denotes the model parameters and $x$ the observed data.
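
For a tiny worked example of this formulation (added here for concreteness, not part of the original notes): suppose $x$ consists of $n$ independent coin flips with $k$ heads, modeled as Bernoulli($\theta$). Then

$$p(x \mid \theta) = \theta^{k}(1-\theta)^{n-k}, \qquad \frac{\partial}{\partial \theta}\log p(x \mid \theta) = \frac{k}{\theta} - \frac{n-k}{1-\theta} = 0 \;\;\Rightarrow\;\; \hat{\theta} = \frac{k}{n},$$

so the MLE is simply the observed fraction of heads (e.g., $\hat{\theta} = 0.7$ for 7 heads in 10 flips), which matches the posterior mode in the grid example above under a uniform prior.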

We often use the average log-likelihood function

$$\hat{\ell}(\theta \mid x) = \frac{1}{n} \log p(x \mid \theta)$$

since it has preferable qualities; one of these is illustrated later in this document.
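
As a minimal numerical sketch (an assumption added here, not from the original notes): if the data come from a Gaussian with known standard deviation, the maximizer of the average log-likelihood can be located by a simple grid search, and it coincides with the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=500)   # observed data x, true mean 2.0

def avg_log_likelihood(mu, x, sigma=1.0):
    # Average log-density of a Gaussian N(mu, sigma^2) evaluated at the observations.
    return np.mean(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2))

# Grid search over candidate means; the maximizer is the MLE of mu.
candidates = np.linspace(0.0, 4.0, 401)
scores = [avg_log_likelihood(mu, data) for mu in candidates]
mu_hat = candidates[int(np.argmax(scores))]

print(mu_hat, data.mean())   # the MLE of the mean matches the sample mean
```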

## Machine Learning in the MLE perspective

A traditional machine learning model for classification works as follows: we receive an input image $x$, and our model computes an output $f_\theta(x)$, a vector giving the probability of each class. Then, based on the label, we compute the loss function, which is minimized using gradient descent. Now, let us view this from a maximum likelihood perspective.

Now, when we create an ML model in this perspective, we choose a statistical model that our output may follow. Our ML model function then calculates the parameters of that statistical model. For example, let us assume that our output $y$ is one-dimensional and follows a Gaussian distribution. Then we set the model output $f_\theta(x)$ to be a two-dimensional vector and interpret it as

$$f_\theta(x) = (\mu_x, \sigma_x), \qquad p(y \mid x) = \mathcal{N}(y \mid \mu_x, \sigma_x^2).$$
Thus for each input $x$ we obtain a Gaussian distribution for $y$. Using the negative log-likelihood, our optimization problem is the following:

$$\hat{\theta} = \arg\min_{\theta}\,\bigl[-\log p(Y \mid f_\theta(X))\bigr]$$

where $X = \{x_1, \dots, x_n\}$ are the training inputs and $Y = \{y_1, \dots, y_n\}$ the corresponding labels.
If we assume that our samples are independent and identically distributed (i.i.d.), we can obtain the following:

$$-\log p(Y \mid f_\theta(X)) = -\log \prod_{i=1}^{n} p(y_i \mid f_\theta(x_i)) = -\sum_{i=1}^{n} \log p(y_i \mid f_\theta(x_i))$$
Rewriting our optimization problem:

$$\hat{\theta} = \arg\min_{\theta}\,\Bigl[-\sum_{i=1}^{n} \log p(y_i \mid f_\theta(x_i))\Bigr]$$
When we perform inference with this model, we no longer get a single deterministic output as we did with traditional machine learning models. Instead, for an input $x$ we get a distribution over $y$,

$$p(y \mid x) = \mathcal{N}(y \mid \mu_x, \sigma_x^2), \qquad (\mu_x, \sigma_x) = f_{\hat{\theta}}(x),$$

from which we should sample a single $y$.
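
To make this concrete, here is a minimal sketch in PyTorch (the framework and the toy data are assumptions, not part of the original notes): a small network outputs $(\mu, \log\sigma^2)$ for each input, is trained by minimizing the summed Gaussian negative log-likelihood, and at inference time a single $y$ is sampled from the predicted distribution.

```python
import torch
import torch.nn as nn

# Hypothetical toy data: 1-D inputs x, 1-D targets y.
x = torch.linspace(-2, 2, 200).unsqueeze(1)
y = x.pow(3) + 0.3 * torch.randn_like(x)

# The network outputs the parameters of the Gaussian: (mu, log sigma^2).
model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

def gaussian_nll(params, target):
    mu, log_var = params[:, :1], params[:, 1:]
    # -log N(y | mu, sigma^2), summed over samples, dropping the constant term.
    return 0.5 * (log_var + (target - mu).pow(2) / log_var.exp()).sum()

for step in range(500):
    optimizer.zero_grad()
    loss = gaussian_nll(model(x), y)   # -sum_i log p(y_i | f_theta(x_i))
    loss.backward()
    optimizer.step()

# Inference: the model gives a distribution over y; we sample a single value.
with torch.no_grad():
    params = model(torch.tensor([[1.5]]))
    mu, sigma = params[:, :1], (0.5 * params[:, 1:]).exp()
    y_sample = torch.distributions.Normal(mu, sigma).sample()
print(y_sample)
```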

## Loss Functions in the MLE perspective

Two famous loss functions, mean square error and cross-entropy error, can be derived using the MLE perspective.
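
As a brief sketch of why this holds (using the notation introduced above): if we model each output as a Gaussian with fixed variance, the negative log-likelihood reduces to a squared error, and if we model it as a categorical distribution, it reduces to the cross-entropy:

$$-\log \mathcal{N}(y \mid \hat{y}, \sigma^2) = \frac{(y - \hat{y})^2}{2\sigma^2} + \text{const} \quad\Longrightarrow\quad \text{mean square error,}$$

$$-\log p(y \mid \hat{y}) = -\sum_{k} y_k \log \hat{y}_k \quad\Longrightarrow\quad \text{cross-entropy error,}$$

where $\hat{y} = f_\theta(x)$ is the model output and, in the second line, $y$ is a one-hot label and $\hat{y}_k$ the predicted probability of class $k$.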
