Oct 10, 2024
Office hours:
This week: Thursday - Friday
Next week: Wednesday - Friday
No class next Monday or Tuesday
Likelihood
Maximum likelihood estimation
MLE for linear regression
Properties of maximum likelihood estimator
So far, we have found estimators for the coefficients of the linear model \[ \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}, \hspace{10mm} \boldsymbol{\epsilon} \sim N(\mathbf{0}, \sigma^2_{\epsilon}\mathbf{I}) \] using least-squares estimation
We have also shown some nice properties of the least-squares estimator \(\hat{\boldsymbol{\beta}}\), given \(E(\boldsymbol{\epsilon}) = \mathbf{0}\) and \(Var(\boldsymbol{\epsilon}) = \sigma^2_{\epsilon}\mathbf{I}\)
Today we will introduce another way to find these estimators: maximum likelihood estimation. We will see…
the maximum likelihood estimators have nice properties
the least-squares estimator is equal to the maximum likelihood estimator when certain assumptions hold
Suppose a basketball player shoots a single free throw, such that the probability of making a basket is \(p\)
What is the probability distribution for this random phenomenon?
Suppose the probability is \(p = 0.5\). What is the probability the player makes a single shot, given this value of \(p\)?
Suppose the probability is \(p = 0.8\). What is the probability the player makes a single shot, given this value of \(p\)?
Suppose the player shoots three free throws. They are all independent and the player has the same probability \(p\) of making each shot.
Let \(B\) represent a made basket, and \(M\) represent a missed basket. The player shoots three free throws with the outcome \(BBM\).
Suppose the probability is \(p = 0.5\). What is the probability of observing the data \(BBM\), given this value of \(p\)?
Suppose the probability is \(p = 0.3\). What is the probability of observing the data \(BBM\), given this value of \(p\)?
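As a quick numerical check, here is a minimal Python sketch (not part of the original slides) that computes the probability of the sequence \(BBM\) for each value of \(p\), using independence to multiply the three shot probabilities:

```python
# Probability of observing the sequence basket, basket, miss (BBM)
# when each shot independently succeeds with probability p
def prob_bbm(p):
    return p * p * (1 - p)

for p in (0.5, 0.3):
    print(f"p = {p}: P(BBM) = {prob_bbm(p):.3f}")  # 0.125 for p = 0.5, 0.063 for p = 0.3
```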
Suppose the player shoots three free throws. They are all independent and the player has the same probability \(p\) of making each shot.
The player shoots three free throws with the outcome \(BBM\).
How would you describe in words the probabilities we previously calculated?
New question: What parameter value of \(p\) do you think maximizes the probability of observing this data?
We will use a likelihood function to answer this question.
A likelihood is a function that tells us how likely we are to observe our data for a given parameter value (or values).
Note that this is not the same as the probability function.
Probability function: Fixed parameter value(s) + input possible outcomes \(\Rightarrow\) probability of seeing the different outcomes given the parameter value(s)
Likelihood function: Fixed data + input possible parameter values \(\Rightarrow\) probability of seeing the fixed data for each parameter value
The likelihood function for the probability of a basket \(p\) given we observed \(BBM\) when shooting three independent free throws is \[ L(p|BBM) = p \times p \times (1 - p) \]
Thus, the likelihood for \(p = 0.8\) is
\[ L(p = 0.8|BBM) = 0.8 \times 0.8 \times (1 - 0.8) = 0.128 \]
What is the general formula for the likelihood function for \(p\) given the observed data \(BBM\)?
Why do we need to assume independence?
Why does having identically distributed data simplify things?
The likelihood function for \(p\) given the data \(BBM\) is
\[ L(p|BBM) = p \times p \times (1 - p) = p^2 \times (1 - p) \]
We want the value of \(p\) that maximizes this likelihood function, i.e., the value of \(p\) that is most likely given the observed data.
The process of finding this value is maximum likelihood estimation.
There are three primary ways to find the maximum likelihood estimator:
Approximate using a graph
Use calculus
Approximate numerically
What do you think is the approximate value of the MLE of \(p\) given the data?
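A minimal Python sketch (not from the original slides) that evaluates the likelihood on a grid of \(p\) values; plotting `likelihood` against `p` gives the graph-based approach, and taking the grid maximum gives a numerical approximation:

```python
import numpy as np

# Evaluate L(p | BBM) = p^2 * (1 - p) on a fine grid of candidate values of p
p = np.linspace(0, 1, 1001)
likelihood = p**2 * (1 - p)

# The grid point with the largest likelihood approximates the MLE
p_mle = p[np.argmax(likelihood)]
print(f"Approximate MLE: p = {p_mle:.3f}")  # about 0.667
```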
Use calculus to find the MLE of \(p\) given the data \(BBM\).
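For reference, one way the calculus can go, working with the log-likelihood (which has the same maximizer as the likelihood):

\[ \begin{aligned} \log L(p|BBM) &= 2\log p + \log(1 - p) \\ \frac{d \log L}{dp} &= \frac{2}{p} - \frac{1}{1 - p} = 0 \quad\Rightarrow\quad 2(1 - p) = p \quad\Rightarrow\quad \tilde{p} = \frac{2}{3} \end{aligned} \]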
Suppose the player shoots \(n\) free throws. They are all independent and the player has the same probability \(p\) of making each shot.
Suppose the player makes \(k\) baskets out of the \(n\) free throws. This is the observed data.
What is the formula for the likelihood function for \(p\) given the observed data?
For what value of \(p\) do we maximize the likelihood given the observed data? Use calculus to find the response.
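For reference, the same approach as in the three-shot case: with \(k\) made baskets in \(n\) independent shots, the likelihood of the observed sequence is \(L(p) = p^k(1 - p)^{n-k}\), and

\[ \begin{aligned} \log L(p) &= k\log p + (n - k)\log(1 - p) \\ \frac{d \log L}{dp} &= \frac{k}{p} - \frac{n - k}{1 - p} = 0 \quad\Rightarrow\quad k(1 - p) = (n - k)p \quad\Rightarrow\quad \tilde{p} = \frac{k}{n} \end{aligned} \]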
“Maximum likelihood estimation is, by far, the most popular technique for deriving estimators.” (Casella and Berger 2024, 315)
MLEs have nice statistical properties. They are
Consistent
Efficient - asymptotically have the smallest MSE among consistent estimators
Asymptotically normal
Note
If the normality assumption holds, the least squares estimator is the maximum likelihood estimator for \(\boldsymbol{\beta}\). Therefore, it has all these properties of the MLE.
Recall the linear model
\[ \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}, \hspace{10mm} \boldsymbol{\epsilon} \sim N(\mathbf{0}, \sigma^2_{\epsilon}\mathbf{I}) \]
Suppose we have the simple linear regression (SLR) model
\[ y_i = \beta_0 + \beta_1x_i + \epsilon_i, \hspace{10mm} \epsilon_i \sim N(0, \sigma^2_{\epsilon}) \]
such that \(\epsilon_i\) are independently and identically distributed.
We can write this model in the form below and use this to find the MLE
\[ y_i | x_i \sim N(\beta_0 + \beta_1 x_i, \sigma^2_{\epsilon}) \]
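As an aside, a minimal Python sketch (with illustrative parameter values that are not from the slides) showing how this conditional form can be used to simulate data from the SLR model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameter values, assumed only for this sketch
beta0, beta1, sigma_eps = 2.0, 3.0, 1.5

x = rng.uniform(0, 10, size=100)
# Each y_i is drawn from N(beta0 + beta1 * x_i, sigma_eps^2), matching the form above
y = rng.normal(loc=beta0 + beta1 * x, scale=sigma_eps)
```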
Let \(X\) be a random variable, such that \(X \sim N(\mu, \sigma^2)\). Then the probability density function is
\[ f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\Big\{-{\frac{1}{2\sigma^2}(x - \mu)^2}\Big\} \]
The likelihood function for \(\beta_0, \beta_1, \sigma^2_{\epsilon}\) is
\[ \begin{aligned} L&(\beta_0, \beta_1, \sigma^2_{\epsilon} | x_1, \dots, x_n, y_1, \dots, y_n) \\[5pt] &= \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma_\epsilon^2}}\exp\Big\{{-\frac{1}{2\sigma_\epsilon^2}(y_i - [\beta_0 + \beta_1x_i])^2}\Big\} \\[10pt] & = (2\pi\sigma^2_{\epsilon})^{-\frac{n}{2}}\exp\Big\{-\frac{1}{2\sigma^2_{\epsilon}}\sum_{i=1}^n(y_i - \beta_0 - \beta_1x_i)^2\Big\} \end{aligned} \]
The log-likelihood function for \(\beta_0, \beta_1, \sigma^2_{\epsilon}\) is
\[ \begin{aligned} \log &L(\beta_0, \beta_1, \sigma^2_{\epsilon} | x_1, \dots, x_n, y_1, \dots, y_n) \\[8pt] & = -\frac{n}{2}\log(2\pi\sigma^2_{\epsilon}) -\frac{1}{2\sigma^2_{\epsilon}}\sum_{i=1}^n(y_i - \beta_0 - \beta_1x_i)^2 \end{aligned} \]
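A minimal Python sketch (illustrative, not part of the original slides) that evaluates this log-likelihood for candidate parameter values; maximizing it numerically, e.g. by minimizing its negative with a general-purpose optimizer, is the numerical-approximation route mentioned earlier:

```python
import numpy as np

def slr_log_likelihood(beta0, beta1, sigma2, x, y):
    """Log-likelihood of (beta0, beta1, sigma2) under y_i | x_i ~ N(beta0 + beta1 * x_i, sigma2)."""
    n = len(y)
    resid = y - (beta0 + beta1 * x)
    return -(n / 2) * np.log(2 * np.pi * sigma2) - np.sum(resid**2) / (2 * sigma2)
```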
We will use the log-likelihood function to find the MLEs
1️⃣ Take derivative of \(\log L\) with respect to \(\beta_0\) and set it equal to 0
\[ \frac{\partial \log L}{\partial \beta_0} = -\frac{2}{2\sigma^2_\epsilon}\sum_{i=1}^n (y_i - \beta_0 - \beta_1x_i)(-1) = 0 \]
2️⃣ Find the \(\tilde{\beta}_0\) that satisfies the equality on the previous slide
After a few steps…
\[ \begin{aligned} &\Rightarrow \sum_{i=1}^ny_i - n\tilde{\beta}_0 - \tilde{\beta}_1\sum_{i=1}^n x_i = 0 \\ &\Rightarrow \sum_{i=1}^ny_i - \tilde{\beta}_1\sum_{i=1}^n x_i = n\tilde{\beta}_0 \\ &\Rightarrow \frac{1}{n}\sum_{i=1}^ny_i - \frac{1}{n}\tilde{\beta}_1\sum_{i=1}^n x_i = \tilde{\beta}_0 \end{aligned} \]
3️⃣ We can use the second derivative to show we’ve found the maximum
\[ \frac{\partial^2 \log L}{\partial \beta_0^2} = -\frac{n}{\sigma^2_\epsilon} < 0 \]
Therefore, we have found the maximum, and the MLE for \(\beta_0\) is
\[ \tilde{\beta}_0 = \bar{y} - \tilde{\beta}_1\bar{x} \]
We can use a similar process to find the MLEs for \(\beta_1\) and \(\sigma^2_{\epsilon}\)
\[ \tilde{\beta}_1 = \frac{\sum_{i=1}^n y_i(x_i - \bar{x})}{\sum_{i=1}^n(x_i - \bar{x})^2} \]
\[ \tilde{\sigma}^2_{\epsilon} = \frac{\sum_{i=1}^n(y_i - \tilde{\beta}_0 - \tilde{\beta}_1x_i)^2}{n} = \frac{\sum_{i=1}^ne_i^2}{n} \]
The MLEs \(\tilde{\beta}_0\) and \(\tilde{\beta}_1\) are equivalent to the least-squares estimators when the errors are independently and identically normally distributed
This means the least-squares estimators \(\hat{\beta}_0\) and \(\hat{\beta}_1\) inherit all the nice properties of MLEs
From previous work, we also know the estimators \(\tilde{\beta}_0\) and \(\tilde{\beta}_1\) are unbiased
Note that the MLE \(\tilde{\sigma}^2_{\epsilon}\) is biased in finite samples (it divides by \(n\) rather than \(n - 2\)) but is asymptotically unbiased
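A minimal Python sketch (with simulated data and illustrative parameter values, not from the slides) checking that the closed-form MLEs above match a least-squares fit, and that \(\tilde{\sigma}^2_{\epsilon}\) divides by \(n\):

```python
import numpy as np

rng = np.random.default_rng(10)

# Simulate data from an SLR model (illustrative parameter values)
n = 200
x = rng.uniform(0, 10, size=n)
y = 2 + 3 * x + rng.normal(0, 1.5, size=n)

# Closed-form MLEs from the formulas above
beta1_mle = np.sum(y * (x - x.mean())) / np.sum((x - x.mean()) ** 2)
beta0_mle = y.mean() - beta1_mle * x.mean()
resid = y - beta0_mle - beta1_mle * x
sigma2_mle = np.sum(resid**2) / n  # divides by n; the usual unbiased estimate divides by n - 2

# Least-squares fit for comparison: should match the MLEs for beta_0 and beta_1
beta1_ls, beta0_ls = np.polyfit(x, y, deg=1)
print(beta0_mle, beta1_mle)
print(beta0_ls, beta1_ls)
```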