Logistic Regression: Inference

Prof. Maria Tackett

Nov 14, 2024

Announcements

  • Project: Draft report due and peer review in the December 2 lab

  • Statistics experience due Tuesday, November 26

  • HW 04 released later today. Due Thursday, November 21

Topics

  • Test of significance for overall logistic regression model

  • Test of significance for a subset of model coefficients

  • Test of significance for a single coefficient

Computational setup

library(tidyverse)
library(tidymodels)
library(pROC)      
library(knitr)
library(kableExtra)

# set default theme in ggplot2
ggplot2::theme_set(ggplot2::theme_bw())

Risk of coronary heart disease

This data set is from an ongoing cardiovascular study on residents of the town of Framingham, Massachusetts. We want to examine the relationship between various health characteristics and the risk of having heart disease.

  • high_risk:

    • 1: High risk of having heart disease in next 10 years
    • 0: Not high risk of having heart disease in next 10 years
  • age: Age at exam time (in years)

  • totChol: Total cholesterol (in mg/dL)

  • currentSmoker: 0 = nonsmoker, 1 = smoker

  • education: 1 = Some High School, 2 = High School or GED, 3 = Some College or Vocational School, 4 = College
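
A quick way to check how the categorical variables are coded before modeling (a sketch; it assumes the data frame is named heart_disease, as in the model-fitting code on the next slide):

# Tabulate the response and smoking status to confirm the 0/1 coding
heart_disease |>
  count(high_risk, currentSmoker)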

Modeling risk of coronary heart disease

Using age, totChol, currentSmoker

heart_disease_fit <- glm(high_risk ~ age + totChol + currentSmoker,
                         data = heart_disease, family = "binomial")

tidy(heart_disease_fit, conf.int = TRUE) |> 
  kable(digits = 3)
term            estimate  std.error  statistic  p.value  conf.low  conf.high
(Intercept)       -6.673      0.378    -17.647    0.000    -7.423     -5.940
age                0.082      0.006     14.344    0.000     0.071      0.094
totChol            0.002      0.001      1.940    0.052     0.000      0.004
currentSmoker1     0.443      0.094      4.733    0.000     0.260      0.627

Test for overall significance

Likelihood ratio test

Similar to linear regression, we can test the overall significance of a logistic regression model, i.e., whether at least one coefficient in the model is non-zero.

$$H_0: \beta_1 = \dots = \beta_p = 0 \quad \text{vs.} \quad H_a: \beta_j \neq 0 \text{ for at least one } j$$

The likelihood ratio test compares the fit of a model with no predictors to the current model.

Likelihood ratio test statistic

Let $L_0$ and $L_a$ be the likelihood functions of the model under $H_0$ and $H_a$, respectively. The likelihood ratio test statistic is

$$G = -2\left[\log L_0 - \log L_a\right] = -2\sum_{i=1}^{n}\left[y_i \log\left(\frac{\hat{\pi}_0}{\hat{\pi}_i^a}\right) + (1 - y_i)\log\left(\frac{1 - \hat{\pi}_0}{1 - \hat{\pi}_i^a}\right)\right]$$

where $\hat{\pi}_0$ is the predicted probability under $H_0$ and $\hat{\pi}_i^a = \dfrac{\exp\{\mathbf{x}_i^T\boldsymbol{\beta}\}}{1 + \exp\{\mathbf{x}_i^T\boldsymbol{\beta}\}}$ is the predicted probability under $H_a$.¹

  1. See Wilks (1935) for an explanation of why the factor of −2 is included.

Likelihood ratio test statistic

$$G = -2\sum_{i=1}^{n}\left[y_i \log\left(\frac{\hat{\pi}_0}{\hat{\pi}_i^a}\right) + (1 - y_i)\log\left(\frac{1 - \hat{\pi}_0}{1 - \hat{\pi}_i^a}\right)\right]$$

  • When $n$ is large, $G \sim \chi^2_p$ ($G$ follows a chi-square distribution with $p$ degrees of freedom)

  • The p-value is calculated as $\text{p-value} = P(\chi^2_p > G)$

  • Large values of $G$ (small p-values) indicate that at least one $\beta_j$ is non-zero

χ2 distribution
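
The original slide appears to contain only a figure here; a minimal ggplot2 sketch of χ² density curves is below (the degrees of freedom shown, 3, 5, and 10, are illustrative choices, not taken from the original figure):

# Chi-square density curves for a few illustrative degrees of freedom
# (packages from the computational setup chunk are assumed to be loaded)
ggplot(data.frame(G = c(0, 25)), aes(x = G)) +
  stat_function(aes(color = "df = 3"), fun = dchisq, args = list(df = 3)) +
  stat_function(aes(color = "df = 5"), fun = dchisq, args = list(df = 5)) +
  stat_function(aes(color = "df = 10"), fun = dchisq, args = list(df = 10)) +
  labs(x = "G", y = "Density", color = NULL)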

Heart disease model: likelihood ratio test

$$H_0: \beta_{\text{age}} = \beta_{\text{totChol}} = \beta_{\text{currentSmoker}} = 0 \quad \text{vs.} \quad H_a: \beta_j \neq 0 \text{ for at least one } j$$

Fit the null model

(we’ve already fit the alternative model)

null_model <- glm(high_risk ~ 1, data = heart_disease, family = "binomial")

tidy(null_model) |>
  kable()
term         estimate  std.error  statistic  p.value
(Intercept)  -1.72294  0.0436342    -39.486        0

Heart disease model: likelihood ratio test

Calculate the log-likelihood for the null and alternative models

(L_0 <- glance(null_model)$logLik)
[1] -1737.735
(L_a <- glance(heart_disease_fit)$logLik)
[1] -1612.406

Calculate the likelihood ratio test statistic

(G <- -2 * (L_0 - L_a))
[1] 250.6572
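
As a check on the sum formula from earlier (not in the original slides), G can also be computed directly from the observed responses and the fitted probabilities; this sketch assumes both models were fit to the same rows of heart_disease:

# Reproduce G from the sum formula using the fitted probabilities
y    <- heart_disease_fit$y          # 0/1 response vector stored in the glm object
pi_0 <- mean(y)                      # fitted probability under H0: the overall proportion of high-risk cases
pi_a <- fitted(heart_disease_fit)    # fitted probabilities under Ha

-2 * sum(y * log(pi_0 / pi_a) + (1 - y) * log((1 - pi_0) / (1 - pi_a)))  # should match G above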

Heart disease model: likelihood ratio test

Calculate the p-value

(p_value <- pchisq(G, df = 3, lower.tail = FALSE))
[1] 4.717158e-54

Conclusion

The p-value is small, so we reject $H_0$. The data provide evidence that at least one coefficient in the model is non-zero.
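
The same test can also be run in a single call to anova(), comparing the intercept-only model to the fitted model (a convenience check, not shown in the original slides; the deviance column should equal G, with a matching p-value):

# Likelihood ratio test via anova()
anova(null_model, heart_disease_fit, test = "Chisq")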

Test a subset of coefficients

Testing a subset of coefficients

  • Suppose there are two models:

    • Reduced Model: includes predictors $x_1, \dots, x_q$

    • Full Model: includes predictors $x_1, \dots, x_q, x_{q+1}, \dots, x_p$

  • We can use a likelihood ratio test to see whether any of the additional predictors are useful

$$H_0: \beta_{q+1} = \dots = \beta_p = 0 \quad \text{vs.} \quad H_a: \beta_j \neq 0 \text{ for at least one } j$$

This is called a drop-in-deviance test (also known as a nested likelihood ratio test)

Deviance

The deviance is a measure of the degree to which the predicted values are different from the observed values (compares the current model to a “saturated” model)


In logistic regression,

$$D = -2\log L$$


$D \sim \chi^2_{n-p-1}$ ($D$ follows a chi-square distribution with $n - p - 1$ degrees of freedom)


Note: $n - p - 1$ is the degrees of freedom associated with the error in the model (analogous to the residual degrees of freedom in linear regression)
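
A quick way to see this connection in R (a sketch assuming heart_disease_fit from earlier): for ungrouped 0/1 responses the saturated model's log-likelihood is 0, so the residual deviance reported by deviance() should equal $-2\log L$.

# Residual deviance vs. -2 log L for the fitted model
deviance(heart_disease_fit)
-2 * glance(heart_disease_fit)$logLik   # should match the line above for 0/1 response data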

Drop-in-deviance test

$$H_0: \beta_{q+1} = \dots = \beta_p = 0 \quad \text{vs.} \quad H_a: \beta_j \neq 0 \text{ for at least one } j$$

The test statistic is

$$G = D_{\text{reduced}} - D_{\text{full}} = -2\log L_{\text{reduced}} - \left(-2\log L_{\text{full}}\right) = -2\left(\log L_{\text{reduced}} - \log L_{\text{full}}\right)$$

The p-value is calculated using a $\chi^2_{\Delta df}$ distribution, where $\Delta df$ is the number of parameters being tested (the difference in the number of parameters between the full and reduced models).

Heart disease model: drop-in-deviance test

Should we add education to the model?

  • Reduced model: age, totChol, currentSmoker

  • Full model: age, totChol, currentSmoker, education

$$H_0: \beta_{\text{ed1}} = \beta_{\text{ed2}} = \beta_{\text{ed3}} = 0 \quad \text{vs.} \quad H_a: \beta_j \neq 0 \text{ for at least one } j$$

(education has four levels, so it enters the model through three indicator coefficients)

Heart disease model: drop-in-deviance test

reduced_model <- glm(high_risk ~ age + totChol + currentSmoker,
                     data = heart_disease, family = "binomial")

full_model <- glm(high_risk ~ age + totChol + currentSmoker + education,
                  data = heart_disease, family = "binomial")

Calculate deviances

(deviance_reduced <- -2 * glance(reduced_model)$logLik)
[1] 3224.812
(deviance_full <- -2 * glance(full_model)$logLik)
[1] 3217.6

Calculate test statistic

(G <- deviance_reduced - deviance_full)
[1] 7.212113

Heart disease model: drop-in-deviance test

Calculate p-value

pchisq(G, df = 3, lower.tail = FALSE)
[1] 0.06543567


What is your conclusion? Would you include education in a model that already includes age, totChol, and currentSmoker?

Drop-in-deviance test in R

Conduct the drop-in-deviance test using the anova() function in R with the option test = "Chisq"

anova(reduced_model, full_model, test = "Chisq") |> 
  tidy() |> 
  kable(digits = 3)
term                                                   df.residual  residual.deviance  df  deviance  p.value
high_risk ~ age + totChol + currentSmoker                     4082           3224.812  NA        NA       NA
high_risk ~ age + totChol + currentSmoker + education         4079           3217.600   3     7.212    0.065

AIC and BIC

Similar to linear regression, we can use AIC and BIC to compare models.

$$AIC = -2\log L + 2(p+1)$$
$$BIC = -2\log L + \log(n)(p+1)$$

You want to select the model that minimizes AIC / BIC
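
To see where these formulas come from, here is a sketch that recomputes AIC and BIC for the reduced model by hand (it assumes reduced_model from the drop-in-deviance slides and uses nobs() for the number of observations used in the fit):

# Hand-compute AIC and BIC for the reduced model from its log-likelihood
logL <- glance(reduced_model)$logLik   # log-likelihood
k    <- length(coef(reduced_model))    # p + 1 = number of estimated coefficients (4 here)
n    <- nobs(reduced_model)            # number of observations used in the fit

-2 * logL + 2 * k        # AIC; should match glance(reduced_model)$AIC
-2 * logL + log(n) * k   # BIC; should match glance(reduced_model)$BIC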

Compare models using AIC

AIC for reduced model (age, totChol, currentSmoker)

glance(reduced_model)$AIC
[1] 3232.812


AIC for full model (age, totChol, currentSmoker, education)

glance(full_model)$AIC
[1] 3231.6

Compare models using BIC

BIC for reduced model (age, totChol, currentSmoker)

glance(reduced_model)$BIC
[1] 3258.074


BIC for full model (age, totChol, currentSmoker, education)

glance(full_model)$BIC
[1] 3275.807

Test for a single coefficient

Distribution of β^

When $n$ is large, $\hat{\boldsymbol{\beta}}$, the vector of estimated coefficients of the logistic regression model, is approximately normally distributed.


How do we know the distribution of $\hat{\boldsymbol{\beta}}$ is approximately normal for large $n$?

Distribution of β^

The expected value of $\hat{\boldsymbol{\beta}}$ is the true parameter $\boldsymbol{\beta}$, i.e., $E(\hat{\boldsymbol{\beta}}) = \boldsymbol{\beta}$

$\text{Var}(\hat{\boldsymbol{\beta}})$, the matrix of variances and covariances of the estimators, is obtained from the second partial derivatives of the log-likelihood function (the Hessian matrix):

$$\text{Var}(\hat{\boldsymbol{\beta}}) = (\mathbf{X}^T\mathbf{V}\mathbf{X})^{-1}$$

where $\mathbf{V}$ is an $n \times n$ diagonal matrix such that $V_{ii}$ is the estimated variance for the $i$th observation
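
A sketch of how this formula can be recovered in R for the fitted model (it assumes heart_disease_fit from earlier; for a Bernoulli response, the estimated variance of observation $i$ is $\hat{\pi}_i(1 - \hat{\pi}_i)$):

# Recover Var(beta-hat) = (X^T V X)^{-1} and compare to vcov()
X      <- model.matrix(heart_disease_fit)   # design matrix
pi_hat <- fitted(heart_disease_fit)         # estimated probabilities
v      <- pi_hat * (1 - pi_hat)             # diagonal entries of V
var_beta <- solve(t(X) %*% (v * X))         # (X^T V X)^{-1}; v * X scales row i of X by v[i]

sqrt(diag(var_beta))                        # standard errors of the coefficients
sqrt(diag(vcov(heart_disease_fit)))         # should match (up to rounding)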

Test for a single coefficient

Hypotheses: $H_0: \beta_j = 0$ vs. $H_a: \beta_j \neq 0$, given the other variables in the model

(Wald) test statistic:

$$z = \frac{\hat{\beta}_j - 0}{SE(\hat{\beta}_j)}$$

where $SE(\hat{\beta}_j)$ is the square root of the $j$th diagonal element of $\text{Var}(\hat{\boldsymbol{\beta}})$

P-value: $P(|Z| > |z|)$, where $Z \sim N(0,1)$, the standard normal distribution
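
For reference, the Wald statistics and p-values can be reproduced directly from the tidy() output (a sketch; it simply recreates the statistic and p.value columns shown earlier):

# Wald z statistics and two-sided p-values from the estimates and standard errors
tidy(heart_disease_fit) |>
  mutate(z = estimate / std.error,
         p_value = 2 * pnorm(abs(z), lower.tail = FALSE)) |>
  kable(digits = 3)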

Confidence interval for βj

We can calculate the $C\%$ confidence interval for $\beta_j$ as follows:

$$\hat{\beta}_j \pm z^* SE(\hat{\beta}_j)$$

where $z^*$ is calculated from the $N(0,1)$ distribution

Note

This is an interval for the change in the log-odds for every one-unit increase in $x_j$.
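
As a sketch, the 95% Wald interval for the age coefficient can be computed by hand as below; note that the conf.low and conf.high columns from tidy(…, conf.int = TRUE) come from confint(), which uses profile-likelihood intervals for glm(), so they may differ slightly from this interval.

# 95% Wald confidence interval for the age coefficient
age_row <- tidy(heart_disease_fit) |> filter(term == "age")
z_star  <- qnorm(0.975)   # z* for a 95% interval

age_row$estimate + c(-1, 1) * z_star * age_row$std.error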

Interpretation in terms of the odds

To obtain an interval for the change in the odds for every one-unit increase in $x_j$, exponentiate the endpoints of the interval:

$$\exp\{\hat{\beta}_j \pm z^* SE(\hat{\beta}_j)\}$$

Interpretation: We are $C\%$ confident that for every one-unit increase in $x_j$, the odds multiply by a factor between $\exp\{\hat{\beta}_j - z^* SE(\hat{\beta}_j)\}$ and $\exp\{\hat{\beta}_j + z^* SE(\hat{\beta}_j)\}$, holding all else constant.
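
A sketch for getting these odds-scale estimates and intervals directly: tidy() has an exponentiate argument that exponentiates the coefficient estimates and confidence limits.

# Coefficient estimates and 95% CIs on the odds scale
tidy(heart_disease_fit, conf.int = TRUE, exponentiate = TRUE) |>
  kable(digits = 3)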

Coefficient for age

term            estimate  std.error  statistic  p.value  conf.low  conf.high
(Intercept)       -6.673      0.378    -17.647    0.000    -7.423     -5.940
age                0.082      0.006     14.344    0.000     0.071      0.094
totChol            0.002      0.001      1.940    0.052     0.000      0.004
currentSmoker1     0.443      0.094      4.733    0.000     0.260      0.627

Hypotheses:

$H_0: \beta_{\text{age}} = 0$ vs. $H_a: \beta_{\text{age}} \neq 0$, given total cholesterol and smoking status are in the model.

Coefficient for age

term            estimate  std.error  statistic  p.value  conf.low  conf.high
(Intercept)       -6.673      0.378    -17.647    0.000    -7.423     -5.940
age                0.082      0.006     14.344    0.000     0.071      0.094
totChol            0.002      0.001      1.940    0.052     0.000      0.004
currentSmoker1     0.443      0.094      4.733    0.000     0.260      0.627

Test statistic:

$$z = \frac{0.0825 - 0}{0.00575} = 14.34$$

Coefficient for age

term            estimate  std.error  statistic  p.value  conf.low  conf.high
(Intercept)       -6.673      0.378    -17.647    0.000    -7.423     -5.940
age                0.082      0.006     14.344    0.000     0.071      0.094
totChol            0.002      0.001      1.940    0.052     0.000      0.004
currentSmoker1     0.443      0.094      4.733    0.000     0.260      0.627

P-value:

$P(|Z| > |14.34|) \approx 0$

2 * pnorm(14.34, lower.tail = FALSE)
[1] 1.230554e-46

Coefficient for age

term            estimate  std.error  statistic  p.value  conf.low  conf.high
(Intercept)       -6.673      0.378    -17.647    0.000    -7.423     -5.940
age                0.082      0.006     14.344    0.000     0.071      0.094
totChol            0.002      0.001      1.940    0.052     0.000      0.004
currentSmoker1     0.443      0.094      4.733    0.000     0.260      0.627

Conclusion:

The p-value is very small, so we reject $H_0$. The data provide sufficient evidence that age is a statistically significant predictor of whether someone is at high risk of having heart disease, after accounting for total cholesterol and smoking status.

CI for age

term            estimate  std.error  statistic  p.value  conf.low  conf.high
(Intercept)       -6.673      0.378    -17.647    0.000    -7.423     -5.940
age                0.082      0.006     14.344    0.000     0.071      0.094
totChol            0.002      0.001      1.940    0.052     0.000      0.004
currentSmoker1     0.443      0.094      4.733    0.000     0.260      0.627

Interpret the 95% confidence interval for age in terms of the odds of being at high risk for heart disease.

Overview of testing coefficients

Test a single coefficient

  • Likelihood ratio test

  • Drop-in-deviance test

  • Wald hypothesis test and confidence interval

Test a subset of coefficients

  • Likelihood ratio test

  • Drop-in-deviance test

Can use AIC and BIC to compare models in both scenarios

References

Wilks, S. S. 1935. “The Likelihood Test of Independence in Contingency Tables.” The Annals of Mathematical Statistics 6 (4): 190–96.
