Logistic Regression

Kat Husar

Nov 07, 2024

Announcements

  • Lab 05 due TODAY 11:59pm

  • Monday, November 11: Project presentations

  • Looking ahead

    • Statistics experience due Tuesday, November 26

Topics

  • Logistic regression for binary response variable

  • Use logistic regression model to calculate predicted odds and probabilities

  • Interpret the coefficients of a logistic regression model with

    • a single categorical predictor
    • a single quantitative predictor
    • multiple predictors

Computational setup

# load packages
library(tidyverse)
library(tidymodels)
library(knitr)
library(Stat2Data) # contains the data set

# set default theme in ggplot2
ggplot2::theme_set(ggplot2::theme_bw())

Recap

Do teenagers get 7+ hours of sleep?

Students in grades 9-12 were surveyed about health risk behaviors, including whether they usually get 7 or more hours of sleep.

Sleep7

1: yes

0: no

# A tibble: 446 × 6
     Age Sleep7 Sleep           SmokeLife SmokeDaily MarijuaEver
   <int>  <int> <fct>           <fct>     <fct>            <int>
 1    16      1 8 hours         Yes       Yes                  1
 2    17      0 5 hours         Yes       Yes                  1
 3    18      0 5 hours         Yes       Yes                  1
 4    17      1 7 hours         Yes       No                   1
 5    15      0 4 or less hours No        No                   0
 6    17      0 6 hours         No        No                   0
 7    17      1 7 hours         No        No                   0
 8    16      1 8 hours         Yes       No                   0
 9    16      1 8 hours         No        No                   0
10    18      0 4 or less hours Yes       Yes                  1
# ℹ 436 more rows

Let’s fit a linear regression model

Outcome: Y = 1: yes, 0: no

Let’s use proportions

Outcome: Probability of getting 7+ hours of sleep

What happens if we zoom out?

Outcome: Probability of getting 7+ hours of sleep

🛑 This model produces predictions outside of 0 and 1.

Let’s try another model

✅ This model (called a logistic regression model) only produces predictions between 0 and 1.

Probabilities and odds

Binary response variable

  • $Y = 1$: yes, $Y = 0$: no
  • $\pi$: probability that $Y = 1$, i.e., $P(Y = 1)$
  • $\dfrac{\pi}{1-\pi}$: odds that $Y = 1$
  • $\log\left(\dfrac{\pi}{1-\pi}\right)$: log-odds
  • Go from $\pi$ to $\log\left(\dfrac{\pi}{1-\pi}\right)$ using the logit transformation (see the sketch below)
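
A quick check in R (a minimal sketch; the probability 0.8 is just an example value):

# From a probability to odds and log-odds
p <- 0.8
p / (1 - p)        # odds
log(p / (1 - p))   # log-odds; identical to qlogis(p)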

From odds to probabilities

  1. Logistic model: log odds $= \log\left(\dfrac{\pi}{1-\pi}\right) = X\beta$
  2. Odds $= \exp\left\{\log\left(\dfrac{\pi}{1-\pi}\right)\right\} = \dfrac{\pi}{1-\pi}$
  3. Combining (1) and (2) with what we saw earlier,

probability $= \pi = \dfrac{\exp\{X\beta\}}{1 + \exp\{X\beta\}}$
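
The inverse transformation is available in R as plogis(); a minimal sketch, with an arbitrary log-odds value for illustration:

# From log-odds back to a probability
log_odds <- 1.39
exp(log_odds) / (1 + exp(log_odds))  # same as plogis(log_odds)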

Sigmoid Function

We call this function relating the probability to the predictors a sigmoid function, $\sigma(x) = \dfrac{\exp\{x\}}{1 + \exp\{x\}} = \dfrac{1}{1 + \exp\{-x\}}$.
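
A minimal ggplot2 sketch of the sigmoid curve (the x-axis range is an assumption):

# Plot the sigmoid function over an illustrative range
tibble(x = seq(-6, 6, by = 0.1)) |>
  ggplot(aes(x = x, y = plogis(x))) +
  geom_line() +
  labs(y = expression(sigma(x)), title = "Sigmoid function")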


Logistic regression

Logistic regression model

Logit form: $\text{logit}(\pi) = \log\left(\dfrac{\pi}{1-\pi}\right) = X\beta$

Probability form:

$\pi = \dfrac{\exp\{X\beta\}}{1 + \exp\{X\beta\}}$

The logit link and the sigmoid function are inverses of each other.

Note

More on link functions later, if time permits

Goal

We want to use our data to estimate $\beta$ (find $\hat\beta$) and obtain the model:

$\hat\pi = \dfrac{\exp\{X\hat\beta\}}{1 + \exp\{X\hat\beta\}}$

In this modeling scheme, one typically finds $\hat\beta$ by maximizing the likelihood function.

Linear Regression vs. Logistic Regression

Linear regression:

  • Quantitative outcome

  • $y_i = x_i^\top \beta + \epsilon_i$

  • $E[Y_i] = x_i^\top \beta$

  • Estimate $\beta$

  • Use $\hat\beta$ to predict $\hat{y}_i$

Logistic regression:

  • Binary outcome

  • $\log\left(\dfrac{\pi_i}{1-\pi_i}\right) = x_i^\top \beta$

  • $E[Y_i] = \pi_i$

  • Estimate $\beta$

  • Use $\hat\beta$ to predict $\hat\pi_i$

Likelihood function for β

  • $P(Y_i = 1) = \pi_i$. What likelihood function should we use?
  • $f(y_i \mid x_i, \beta) = \pi_i^{y_i}(1-\pi_i)^{1-y_i}$
  • The $Y_i$'s are independent, so

$f(y_1, \ldots, y_n) = \prod_{i=1}^n \pi_i^{y_i}(1-\pi_i)^{1-y_i}$

Likelihood

The likelihood function for $\beta$ is

$L(\beta \mid x_1, \ldots, x_n, y_1, \ldots, y_n) = \prod_{i=1}^n \pi_i^{y_i}(1-\pi_i)^{1-y_i}$

We will use the log-likelihood function to find the MLEs.

Log-likelihood

The log-likelihood function for $\beta$ is

$\log L(\beta \mid x_1, \ldots, x_n, y_1, \ldots, y_n) = \sum_{i=1}^n \log\left(\pi_i^{y_i}(1-\pi_i)^{1-y_i}\right) = \sum_{i=1}^n \left( y_i \log(\pi_i) + (1-y_i)\log(1-\pi_i) \right)$

Log-likelihood

Plugging in $\pi_i = \dfrac{\exp\{x_i^\top \beta\}}{1 + \exp\{x_i^\top \beta\}}$ and simplifying, we get:

$\log L(\beta \mid x_1, \ldots, x_n, y_1, \ldots, y_n) = \sum_{i=1}^n y_i x_i^\top \beta - \sum_{i=1}^n \log\left(1 + \exp\{x_i^\top \beta\}\right)$

Finding the MLE

  • Taking the derivative:

$\dfrac{\partial \log L}{\partial \beta} = \sum_{i=1}^n y_i x_i^\top - \sum_{i=1}^n \dfrac{\exp\{x_i^\top \beta\}}{1 + \exp\{x_i^\top \beta\}} x_i^\top$

  • If we set this to zero, there is no closed-form solution.
  • R uses numerical approximation to find the MLE, as sketched below.
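
A minimal sketch of that numerical approach, using optim() on simulated data (the simulated data, sample size, and starting values are assumptions for illustration; glm() itself uses iteratively reweighted least squares rather than BFGS):

# Simulate binary data from a known logistic model
set.seed(221)
n <- 500
x <- rnorm(n)
beta_true <- c(-1, 2)
p <- 1 / (1 + exp(-(beta_true[1] + beta_true[2] * x)))
y <- rbinom(n, size = 1, prob = p)

X <- cbind(1, x)  # design matrix with an intercept column

# Log-likelihood from the previous slide, negated because
# optim() minimizes by default
neg_loglik <- function(beta) {
  eta <- as.vector(X %*% beta)
  -(sum(y * eta) - sum(log(1 + exp(eta))))
}

# Numerical maximization (BFGS), compared to glm()'s estimates
optim(c(0, 0), neg_loglik, method = "BFGS")$par
coef(glm(y ~ x, family = "binomial"))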

Example

Risk of coronary heart disease

This data set is from an ongoing cardiovascular study on residents of the town of Framingham, Massachusetts. We want to examine the relationship between various health characteristics and the risk of having heart disease.

  • high_risk: 1 = High risk of having heart disease in next 10 years, 0 = Not high risk of having heart disease in next 10 years

  • age: Age at exam time (in years)

  • education: 1 = Some High School, 2 = High School or GED, 3 = Some College or Vocational School, 4 = College

Data: heart_disease

# A tibble: 4,135 × 3
     age education high_risk
   <dbl> <fct>     <fct>    
 1    39 4         0        
 2    46 2         0        
 3    48 1         0        
 4    61 3         1        
 5    46 3         0        
 6    43 2         0        
 7    63 1         1        
 8    45 2         0        
 9    52 1         0        
10    43 1         0        
# ℹ 4,125 more rows

High risk vs. age

ggplot(heart_disease, aes(x = high_risk, y = age)) +
  geom_boxplot(fill = "steelblue") +
  labs(x = "High risk - 1: yes, 0: no",
       y = "Age", 
       title = "Age vs. High risk of heart disease")

Let’s fit the model

heart_edu_age_fit <- glm(high_risk ~ age + education,
                         data = heart_disease,
                         family = "binomial")
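
The coefficient table below comes from the fitted model; a minimal sketch of how it can be produced, assuming tidy() from broom (loaded with tidymodels) and kable() from knitr:

# Display the estimated coefficients as a formatted table
tidy(heart_edu_age_fit) |>
  kable(digits = 3)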
term         estimate  std.error  statistic  p.value
(Intercept)    -5.385      0.308    -17.507    0.000
age             0.073      0.005     13.385    0.000
education2     -0.242      0.112     -2.162    0.031
education3     -0.235      0.134     -1.761    0.078
education4     -0.020      0.148     -0.136    0.892

$\log\left(\dfrac{\hat\pi}{1-\hat\pi}\right) = -5.385 + 0.073 \times \text{age} - 0.242 \times \text{education2} - 0.235 \times \text{education3} - 0.020 \times \text{education4}$

where $\hat\pi$ is the predicted probability of being at high risk of heart disease in the next 10 years.

Interpretation in terms of log-odds

term         estimate  std.error  statistic  p.value
(Intercept)    -5.385      0.308    -17.507    0.000
age             0.073      0.005     13.385    0.000
education2     -0.242      0.112     -2.162    0.031
education3     -0.235      0.134     -1.761    0.078
education4     -0.020      0.148     -0.136    0.892

education4: The log-odds of being high risk for heart disease are expected to be 0.020 less for those with a college degree compared to those with some high school, holding age constant.

Warning

We would not use the interpretation in terms of log-odds in practice.

Interpretation in terms of log-odds

term         estimate  std.error  statistic  p.value
(Intercept)    -5.385      0.308    -17.507    0.000
age             0.073      0.005     13.385    0.000
education2     -0.242      0.112     -2.162    0.031
education3     -0.235      0.134     -1.761    0.078
education4     -0.020      0.148     -0.136    0.892

age: For each additional year in age, the log-odds of being high risk for heart disease are expected to increase by 0.073, holding education level constant.

Warning

We would not use the interpretation in terms of log-odds in practice.

Interpretation in terms of odds

term         estimate  std.error  statistic  p.value
(Intercept)    -5.385      0.308    -17.507    0.000
age             0.073      0.005     13.385    0.000
education2     -0.242      0.112     -2.162    0.031
education3     -0.235      0.134     -1.761    0.078
education4     -0.020      0.148     -0.136    0.892

education4: The odds of being high risk for heart disease for those with a college degree are expected to be 0.98 (exp{-0.020}) times the odds for those with some high school, holding age constant.

Note

In logistic regression with 2+ predictors, $\exp\{\hat\beta_j\}$ is often called the adjusted odds ratio (AOR).
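
As a quick computation with the fitted model above (a sketch):

# Adjusted odds ratios: exponentiate the estimated coefficients
exp(coef(heart_edu_age_fit)) |>
  round(2)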

Interpretation in terms of odds

term         estimate  std.error  statistic  p.value
(Intercept)    -5.385      0.308    -17.507    0.000
age             0.073      0.005     13.385    0.000
education2     -0.242      0.112     -2.162    0.031
education3     -0.235      0.134     -1.761    0.078
education4     -0.020      0.148     -0.136    0.892

age: For each additional year in age, the odds of being high risk for heart disease are expected to multiply by a factor of 1.08 (exp(0.073)), holding education level constant.

Alternate interpretation: For each additional year in age, the odds of being high risk for heart disease are expected to increase by 8%.

Note

In logistic regression with 2+ predictors, $\exp\{\hat\beta_j\}$ is often called the adjusted odds ratio (AOR).

Generalized Linear Models

Introduction to GLMs

  • A wider class of models.
  • The response variable does not have to be continuous and/or normal.
  • The variance does not have to be constant.
  • Still need to specify the distribution of the outcome variable (randomness).
  • Does not require a linear relationship between the response and the explanatory variables. Instead, assumes a linear relationship between the transformed expected response (e.g., $\text{logit}(\pi_i)$) and the predictors.

Generalization of Linear Model

Linear model

$E[Y_i] = \mu_i = x_i^\top \beta$.

$Y_i \overset{\text{ind}}{\sim} N(\mu_i, \sigma^2)$.

GLM

$g(E[Y_i]) = g(\mu_i) = x_i^\top \beta$. Alternatively, $\mu_i = g^{-1}(x_i^\top \beta)$.

$Y_i \overset{\text{ind}}{\sim} f(\mu_i)$.

Note

We call $g$ a link function.

Examples of link functions

Linear regression

$g(\mu_i) = \mu_i$ and $Y_i \sim N(\mu_i, \sigma^2)$.

Logistic regression

$g(\pi_i) = \text{logit}(\pi_i)$ (note $E[Y_i] = \pi_i$). $Y_i \sim \text{Bernoulli}(\pi_i)$. Alternatively, $\pi_i = \sigma(x_i^\top \beta)$, where $\sigma$ is the sigmoid function.

Probit model

$\pi_i = \Phi(x_i^\top \beta)$, where $\Phi$ is the CDF of the standard normal. $Y_i \sim \text{Bernoulli}(\pi_i)$. $g(\pi_i) = \Phi^{-1}(\pi_i)$ is called the probit link.
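
For instance, a probit model uses the same glm() call with a different link (a sketch reusing the heart disease data):

# Fit a probit model by changing the link in the binomial family
heart_probit_fit <- glm(high_risk ~ age + education,
                        data = heart_disease,
                        family = binomial(link = "probit"))
tidy(heart_probit_fit)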

Prediction

Predicted log odds

heart_disease_aug <- augment(heart_edu_age_fit)

heart_disease_aug |>
  select(.fitted, .resid) |>
  head(6)
# A tibble: 6 × 2
  .fitted .resid
    <dbl>  <dbl>
1   -2.55 -0.388
2   -2.26 -0.446
3   -1.87 -0.536
4   -1.15  1.69 
5   -2.25 -0.448
6   -2.48 -0.402

For observation 1

predicted odds $= \hat\omega = \dfrac{\hat\pi}{1-\hat\pi} = \exp\{-2.548\} = 0.078$
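
The same value can be computed directly from the augmented data:

# Predicted odds for observation 1: exponentiate the fitted log-odds
exp(heart_disease_aug$.fitted[1])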

Predicted probabilities

heart_disease_aug$predicted_prob <-
  predict(heart_edu_age_fit, heart_disease, type = "response")

heart_disease_aug |>
  select(.fitted, predicted_prob) |>
  head(6)
# A tibble: 6 × 2
  .fitted predicted_prob
    <dbl>          <dbl>
1   -2.55         0.0726
2   -2.26         0.0948
3   -1.87         0.134 
4   -1.15         0.240 
5   -2.25         0.0954
6   -2.48         0.0775

For observation 1

predicted probability $= \hat\pi = \dfrac{\exp\{-2.548\}}{1 + \exp\{-2.548\}} = 0.073$
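
Equivalently, applying the sigmoid to the fitted log-odds:

# Predicted probability for observation 1 via the sigmoid
plogis(heart_disease_aug$.fitted[1])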

Predicted classes

# Convert probabilities to binary predictions (0 or 1)
heart_disease_aug <- heart_disease_aug |>
  mutate(predicted_class = ifelse(predicted_prob > 0.5, 1, 0))

heart_disease_aug |>
  select(predicted_prob, predicted_class)
# A tibble: 4,135 × 2
   predicted_prob predicted_class
            <dbl>           <dbl>
 1         0.0726               0
 2         0.0948               0
 3         0.134                0
 4         0.240                0
 5         0.0954               0
 6         0.0775               0
 7         0.317                0
 8         0.0887               0
 9         0.172                0
10         0.0967               0
# ℹ 4,125 more rows

Observed vs. predicted

What does the following table show?

heart_disease_aug |>
  count(high_risk, predicted_class)
# A tibble: 2 × 3
  high_risk predicted_class     n
  <fct>               <dbl> <int>
1 0                       0  3507
2 1                       0   628

The predicted_class is 1 when the predicted probability of being high risk is greater than 0.5, and 0 otherwise. What is a limitation of using this method to determine the predicted class?
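
Notice that no observation is classified as high risk at the 0.5 cutoff. A sketch of the same table with a lower cutoff (0.25 is an arbitrary choice for illustration):

# Re-classify with a lower cutoff to see how the table changes
heart_disease_aug |>
  mutate(predicted_class_25 = ifelse(predicted_prob > 0.25, 1, 0)) |>
  count(high_risk, predicted_class_25)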

Recap

  • Reviewed the relationship between odds and probabilities

  • Introduced logistic regression for a binary response variable

  • Interpreted the coefficients of a logistic regression model with multiple predictors

  • Introduced generalized linear models
