Oct 01, 2024
Project Proposal due Thursday, October 3 at 11:59pm
Lab 03 due Thursday, October 3 at 11:59pm
HW 02 due Thursday, October 3 at 11:59pm (released after class)
Exam 01: Tuesday, October 8 (in class + take-home)
Lecture recordings available until the start of the in-class exam (link in the sidebar of the course webpage)
Exam review on Thursday
Monday’s lab: Exam office hours
No office hours while take-home exam is out
Note
This is not a mathematical statistics class. There are semester-long courses that will go into these topics in much more detail; we will barely scratch the surface in this course.
Our goals are to understand:
That estimators have properties
A few properties of the least squares estimator and why they are useful
We have discussed how to use least squares to find the estimator \(\hat{\boldsymbol{\beta}}\) of \(\boldsymbol{\beta}\)
How do we know whether our least-squares estimator is a “good” estimator?
When we consider what makes an estimator “good”, we’ll look at three criteria:
We’ll take a look at these properties and motivate why we might prefer the least-squares estimator \(\hat{\boldsymbol{\beta}}\) over estimators computed by other methods
Suppose you are throwing darts at a target
Ideal scenario: Darts are clustered around the target (unbiased and low variance)
Worst case scenario: Darts are widely spread out and systematically far from the target (high bias and high variance)
Acceptable scenario: There’s some trade-off between the bias and variance.
Finite sample ( \(n\) ) properties
Unbiased estimator
Best Linear Unbiased Estimator (BLUE)
Infinite sample ( \(n \rightarrow \infty\) ) properties
Consistent estimator
Efficient estimator
The bias of an estimator is the difference between the estimator’s expected value and the true value of the parameter
Let \(\hat{\theta}\) be an estimator of the parameter \(\theta\). Then
\[ Bias(\hat{\theta}) = E(\hat{\theta}) - \theta \]
An estimator is unbiased if the bias is 0 and thus \(E(\hat{\theta}) = \theta\)
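For example, if \(y_1, \ldots, y_n\) each have mean \(\mu\), the sample mean \(\bar{y}\) is an unbiased estimator of \(\mu\):

\[ E(\bar{y}) = E\left(\frac{1}{n}\sum_{i=1}^n y_i\right) = \frac{1}{n}\sum_{i=1}^n E(y_i) = \frac{1}{n} \cdot n\mu = \mu \]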
\[ \begin{aligned} E(\hat{\boldsymbol{\beta}}) &= E[(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}] \\[8pt] & = E[(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T(\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon})] \\[8pt] & = E[(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{X}\boldsymbol{\beta}] + E[(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\boldsymbol{\epsilon}]\\[8pt] & = \boldsymbol{\beta} + (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^TE(\boldsymbol{\epsilon}) \\[8pt] & = \boldsymbol{\beta} \end{aligned} \]
The least-squares estimator \(\hat{\boldsymbol{\beta}}\) is an unbiased estimator of \(\boldsymbol{\beta}\)
\[ \begin{aligned} Var(\hat{\boldsymbol{\beta}}) &= Var((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}) \\[8pt] & = [(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T]Var(\mathbf{y})[(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T]^T \\[8pt] & = [(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T]\sigma^2_{\epsilon}\mathbf{I}[\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}] \\[8pt] & = \sigma^2_{\epsilon}[(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}] \\[8pt] & = \sigma^2_{\epsilon}(\mathbf{X}^T\mathbf{X})^{-1} \end{aligned} \]
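As a sanity check on both results, here is a small simulation sketch (not part of the original notes; the design matrix, \(\boldsymbol{\beta}\), and \(\sigma_{\epsilon}\) below are made-up values). Averaging \(\hat{\boldsymbol{\beta}}\) over many simulated data sets should recover \(\boldsymbol{\beta}\), and the empirical covariance of the estimates should be close to \(\sigma^2_{\epsilon}(\mathbf{X}^T\mathbf{X})^{-1}\).

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up fixed design with an intercept column and two predictors
n, p = 200, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, size=(n, p))])
beta = np.array([1.0, 2.0, -0.5])   # assumed "true" coefficients
sigma = 3.0                          # assumed true error standard deviation

XtX_inv = np.linalg.inv(X.T @ X)

# Simulate many data sets from y = X beta + eps and refit each time
reps = 5000
beta_hats = np.empty((reps, p + 1))
for r in range(reps):
    y = X @ beta + rng.normal(0, sigma, size=n)
    beta_hats[r] = XtX_inv @ X.T @ y          # least-squares estimate

print("average beta_hat :", beta_hats.mean(axis=0))       # close to beta (unbiased)
print("empirical Var    :\n", np.cov(beta_hats, rowvar=False))
print("theoretical Var  :\n", sigma**2 * XtX_inv)          # sigma_eps^2 (X'X)^{-1}
```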
What does it mean for a model to be a “linear” regression model?
Linear regression models are linear in the parameters, i.e. given an observation \(y_i\)
\[ y_i = \beta_0 + \beta_1f_1(x_{i1}) + \dots + \beta_pf_p(x_{ip}) + \epsilon_i \]
The functions \(f_1, \ldots, f_p\) can be non-linear functions of the predictors, as long as the parameters \(\beta_0, \beta_1, \ldots, \beta_p\) enter the model linearly
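For example, both of the following are linear regression models, because the coefficients enter linearly even though the predictors are transformed:

\[ y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i, \qquad y_i = \beta_0 + \beta_1 \log(x_{i1}) + \beta_2 \sqrt{x_{i2}} + \epsilon_i \]

By contrast, \(y_i = \beta_0 e^{\beta_1 x_i} + \epsilon_i\) is not a linear regression model, because \(\beta_1\) does not enter the model linearly.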
Gauss-Markov Theorem
The least-squares estimator of \(\boldsymbol{\beta}\) in the model \(\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}\) is given by \(\hat{\boldsymbol{\beta}}\). If the errors have mean \(\mathbf{0}\) and variance \(\sigma^2_{\epsilon}\mathbf{I}\), then \(\hat{\boldsymbol{\beta}}\) is BLUE (the best linear unbiased estimator).
“Best” means \(\hat{\boldsymbol{\beta}}\) has the smallest variance among all linear unbiased estimators for \(\boldsymbol{\beta}\) .
Suppose \(\tilde{\boldsymbol{\beta}}\) is another linear unbiased estimator of \(\boldsymbol{\beta}\) that can be expressed as \(\tilde{\boldsymbol{\beta}} = \mathbf{Cy}\), so that the fitted values are \(\tilde{\mathbf{y}} = \mathbf{X}\tilde{\boldsymbol{\beta}} = \mathbf{XCy}\)
Let \(\mathbf{C} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T + \mathbf{B}\) for a non-zero matrix \(\mathbf{B}\).
What is the dimension of \(\mathbf{B}\)?
\[ \tilde{\boldsymbol{\beta}} = \mathbf{Cy} = ((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T + \mathbf{B})\mathbf{y} \]
We need to show
\(\tilde{\boldsymbol{\beta}}\) is unbiased
\(Var(\tilde{\boldsymbol{\beta}}) \geq Var(\hat{\boldsymbol{\beta}})\), with equality only when \(\tilde{\boldsymbol{\beta}} = \hat{\boldsymbol{\beta}}\)
\[ \begin{aligned} E(\tilde{\boldsymbol{\beta}}) & = E[((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T + \mathbf{B})\mathbf{y}] \\ & = E[((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T + \mathbf{B})(\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon})] \\ & = E[((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T + \mathbf{B})(\mathbf{X}\boldsymbol{\beta})] \\ & = ((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T + \mathbf{B})(\mathbf{X}\boldsymbol{\beta}) \\ & = (\mathbf{I} + \mathbf{BX})\boldsymbol{\beta} \end{aligned} \]
What assumption(s) of the Gauss-Markov Theorem did we use?
What must be true for \(\tilde{\boldsymbol{\beta}}\) to be unbiased?
\(\mathbf{BX}\) must be the \(\mathbf{0}\) matrix (dimension = \((p+1) \times (p+1)\)) in order for \(\tilde{\boldsymbol{\beta}}\) to be unbiased
Now we need to find \(Var(\tilde{\boldsymbol{\beta}})\) and see how it compares to \(Var(\hat{\boldsymbol{\beta}})\)
\[ \begin{aligned} Var(\tilde{\boldsymbol{\beta}}) &= Var[((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T + \mathbf{B})\mathbf{y}] \\[8pt] & = ((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T + \mathbf{B})Var(\mathbf{y})((\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T + \mathbf{B})^T \\[8pt] & = \small{\sigma^2_{\epsilon}[(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1} + (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T \mathbf{B}^T + \mathbf{BX}(\mathbf{X}^T\mathbf{X})^{-1} + \mathbf{BB}^T]}\\[8pt] & = \sigma^2_\epsilon(\mathbf{X}^T\mathbf{X})^{-1} + \sigma^2_{\epsilon}\mathbf{BB}^T\end{aligned} \]
What assumption(s) of the Gauss-Markov Theorem did we use?
We have
\[ Var(\tilde{\boldsymbol{\beta}}) = \sigma^2_{\epsilon}(\mathbf{X}^T\mathbf{X})^{-1} + \sigma^2_\epsilon \mathbf{BB}^T \]
We know that \(\mathbf{BB}^T\) is positive semi-definite, so \(\sigma^2_{\epsilon}\mathbf{BB}^T \geq \mathbf{0}\).
When is \(\sigma^2_{\epsilon}\mathbf{BB}^T = \mathbf{0}\)?
Therefore \(Var(\tilde{\boldsymbol{\beta}}) = Var(\hat{\boldsymbol{\beta}}) + \sigma^2_{\epsilon}\mathbf{BB}^T \geq Var(\hat{\boldsymbol{\beta}})\), with equality only when \(\mathbf{B} = \mathbf{0}\), i.e., only when \(\tilde{\boldsymbol{\beta}} = \hat{\boldsymbol{\beta}}\). No linear unbiased estimator has smaller variance than \(\hat{\boldsymbol{\beta}}\), which completes the proof.
Gauss-Markov Theorem
The least-squares estimator of \(\boldsymbol{\beta}\) in the model \(\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}\) is given by \(\hat{\boldsymbol{\beta}}\). If the errors have mean \(\mathbf{0}\) and variance \(\sigma^2_{\epsilon}\mathbf{I}\), then \(\hat{\boldsymbol{\beta}}\) is BLUE (the best linear unbiased estimator).
“Best” means \(\hat{\boldsymbol{\beta}}\) has the smallest variance among all linear unbiased estimators for \(\boldsymbol{\beta}\) .
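To see the theorem in action, here is a hypothetical simulation sketch (not from the original notes; all numbers are made up). It constructs a competing linear unbiased estimator \(\tilde{\boldsymbol{\beta}} = \mathbf{Cy}\) by taking \(\mathbf{B} = \mathbf{A}(\mathbf{I} - \mathbf{H})\) for an arbitrary \((p+1) \times n\) matrix \(\mathbf{A}\), where \(\mathbf{H} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\) is the hat matrix; this guarantees \(\mathbf{BX} = \mathbf{0}\) (so \(\tilde{\boldsymbol{\beta}}\) is unbiased), and the simulated variances of \(\tilde{\boldsymbol{\beta}}\) should be at least as large as those of \(\hat{\boldsymbol{\beta}}\).

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up design and parameters
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta = np.array([1.0, 2.0, -0.5])
sigma = 2.0

XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T                     # hat matrix
A = 0.1 * rng.normal(size=(p + 1, n))     # arbitrary (p+1) x n matrix
B = A @ (np.eye(n) - H)                   # BX = 0 by construction
C = XtX_inv @ X.T + B                     # competing linear estimator: beta_tilde = C y

reps = 5000
ols = np.empty((reps, p + 1))
alt = np.empty((reps, p + 1))
for r in range(reps):
    y = X @ beta + rng.normal(0, sigma, size=n)
    ols[r] = XtX_inv @ X.T @ y            # beta_hat (least squares)
    alt[r] = C @ y                        # beta_tilde

print("mean of beta_hat  :", ols.mean(axis=0))   # both close to beta (unbiased)
print("mean of beta_tilde:", alt.mean(axis=0))
print("var of beta_hat   :", ols.var(axis=0))    # element-wise sampling variances
print("var of beta_tilde :", alt.var(axis=0))    # at least as large as the OLS variances
```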
Finite sample ( \(n\) ) properties
Unbiased estimator ✅
Best Linear Unbiased Estimator (BLUE) ✅
Infinite sample ( \(n \rightarrow \infty\) ) properties
Consistent estimator
Efficient estimator
The mean squared error (MSE) is the expected squared difference between the estimator and the parameter.
Let \(\hat{\theta}\) be an estimator of the parameter \(\theta\). Then
\[ \begin{aligned} MSE(\hat{\theta}) &= E[(\hat{\theta} - \theta)^2] \\ & = E(\hat{\theta}^2 - 2\hat{\theta}\theta + \theta^2) \\ & = E(\hat{\theta}^2) - 2\theta E(\hat{\theta}) + \theta^2 \\ & = \underbrace{E(\hat{\theta}^2) - E(\hat{\theta})^2}_{Var(\hat{\theta})} + \underbrace{E(\hat{\theta})^2 - 2\theta E(\hat{\theta}) + \theta^2}_{Bias(\hat{\theta})^2} \end{aligned} \]
\[ MSE(\hat{\theta}) = Var(\hat{\theta}) + Bias(\hat{\theta})^2 \]
The least-squares estimator \(\hat{\boldsymbol{\beta}}\) is unbiased, so \[MSE(\hat{\boldsymbol{\beta}}) = Var(\hat{\boldsymbol{\beta}})\]
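As a quick numerical check of the decomposition (a hypothetical sketch, not from the notes), consider a deliberately biased estimator: the variance estimator \(\hat{\sigma}^2 = \frac{1}{n}\sum_{i}(y_i - \bar{y})^2\), which divides by \(n\) instead of \(n - 1\). Its simulated MSE should equal its simulated variance plus squared bias.

```python
import numpy as np

rng = np.random.default_rng(3)

n, sigma2 = 10, 4.0        # small n makes the bias noticeable
reps = 200_000

y = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
sigma2_hat = y.var(axis=1, ddof=0)   # biased estimator: divides by n, not n - 1

mse  = np.mean((sigma2_hat - sigma2) ** 2)
var  = sigma2_hat.var()
bias = sigma2_hat.mean() - sigma2    # theoretical bias is -sigma2 / n = -0.4

print(f"MSE          : {mse:.4f}")
print(f"Var + Bias^2 : {var + bias**2:.4f}")   # matches the MSE, as the identity says
```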
An estimator \(\hat{\theta}\) is a consistent estimator of a parameter \(\theta\) if it converges in probability to \(\theta\). Given a sequence of estimators \(\hat{\theta}_1, \hat{\theta}_2, . . .\), then for every \(\epsilon > 0\),
\[ \displaystyle \lim_{n\to\infty} P(|\hat{\theta}_n - \theta| \geq \epsilon) = 0 \]
This means that as the sample size goes to \(\infty\) (and thus the sample information gets better and better), the estimator will be arbitrarily close to the parameter with high probability.
Why is this a useful property of an estimator?
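To illustrate the definition with a simple estimator (a hypothetical sketch, not from the notes), the simulation below estimates \(P(|\bar{y}_n - \mu| \geq \epsilon)\) for the sample mean at increasing sample sizes; the probability shrinks toward 0, as consistency requires.

```python
import numpy as np

rng = np.random.default_rng(4)

mu, sd, eps = 5.0, 2.0, 0.25   # made-up population mean/sd and tolerance
reps = 10_000

for n in [10, 100, 1000]:
    ybar = rng.normal(mu, sd, size=(reps, n)).mean(axis=1)   # sample means of size n
    prob = np.mean(np.abs(ybar - mu) >= eps)                 # estimate of P(|ybar - mu| >= eps)
    print(f"n = {n:>4}: estimated P(|ybar - mu| >= {eps}) = {prob:.4f}")
```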
Important
Theorem
An estimator \(\hat{\theta}\) is a consistent estimator of the parameter \(\theta\) if the sequence of estimators \(\hat{\theta}_1, \hat{\theta}_2, \ldots\) satisfies
\(\lim_{n \to \infty} Var(\hat{\theta}_n) = 0\)
\(\lim_{n \to \infty} Bias(\hat{\theta}_n) = 0\)
\(Bias(\hat{\boldsymbol{\beta}}) = \mathbf{0}\), so \(\lim_{n \to \infty} Bias(\hat{\boldsymbol{\beta}}) = \mathbf{0}\)
Now we need to show that \(\lim_{n \to \infty} Var(\hat{\boldsymbol{\beta}}) = \mathbf{0}\)
What is \(Var(\hat{\boldsymbol{\beta}})\)?
Does \(Var(\hat{\boldsymbol{\beta}}) \to \mathbf{0}\) as \(n \to \infty\)?
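One concrete way to explore this question (a hypothetical sketch with made-up predictor values, assuming the predictors are drawn from a fixed distribution so that \(\mathbf{X}^T\mathbf{X}\) keeps growing with \(n\)): compute the diagonal of \(Var(\hat{\boldsymbol{\beta}}) = \sigma^2_{\epsilon}(\mathbf{X}^T\mathbf{X})^{-1}\) for designs of increasing size and watch the variances shrink toward \(0\).

```python
import numpy as np

rng = np.random.default_rng(5)
sigma2 = 4.0   # assumed error variance

for n in [50, 500, 5000, 50_000]:
    # Intercept plus two predictors drawn from a fixed (made-up) distribution
    X = np.column_stack([np.ones(n), rng.uniform(0, 10, size=(n, 2))])
    var_beta_hat = sigma2 * np.diag(np.linalg.inv(X.T @ X))   # Var(beta_hat) diagonal
    print(f"n = {n:>6}:", np.round(var_beta_hat, 6))
```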
The efficiency of an estimator is concerned with its asymptotic variance.
The estimator with the smallest variance is considered the most efficient.
By the Gauss-Markov Theorem, we have shown that the least-squares estimator is the most efficient among linear unbiased estimators.
Finite sample ( \(n\) ) properties
Unbiased estimator ✅
Best Linear Unbiased Estimator (BLUE) ✅
Infinite sample ( \(n \rightarrow \infty\) ) properties
Consistent estimator ✅
Efficient estimator ✅