Interpreting Logistic Regression Coefficients the Right Way

Lavanya Gupta
Published in Analytics Vidhya · Mar 2, 2021 · 3 min read


Learn to correctly interpret the coefficients of Logistic Regression and in the process naturally derive its cost function — the Log Loss!


Overview

Models like Logistic Regression often win over their more complex counterparts when explainability and interpretability are crucial to the solution. Unfortunately, Logistic Regression coefficients are not as easy to interpret as the usual Linear Regression coefficients.

Imagine choosing Logistic Regression solely for its explainability, yet presenting wrong interpretations to the business stakeholders. Ouch, definitely not a pleasant scenario!

In this blog, I describe how we can derive the interpretation of logistic regression coefficients naturally, so that there is no need to memorize any ugly terminology!

Interpreting Model Coefficients

1. Let’s start with what is known to us, the linear regression equation:
y = θ0 + θ1X1 + θ2X2 + θ3X3 +  ….. + θnXn                     (1)

However, with Logistic Regression our aim is to predict a class probability (rather than a real-valued continuous y as in linear regression). Hence, we need a way to restrict the range of y to [0, 1] (instead of the original (-∞, +∞)).

2. A very nice function that converts any value in the range (-∞, +∞) to (0, 1) is the Sigmoid function, 𝜎(z) = 1/(1 + e^(-z)). Let’s make use of it.
Applying sigmoid on both sides:

𝜎(y) = 𝜎(θ*X)                                                 (2)
Therefore, predicted probability p = 𝜎(θ*X)
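
To build some intuition for this squashing behavior, here is a quick sketch in NumPy (my own illustration, not from the original article):

import numpy as np

def sigmoid(z):
    # Maps any real number to the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Probe a wide range of inputs to see the squashing behavior
for z in [-100, -5, 0, 5, 100]:
    print(f"sigmoid({z:>4}) = {sigmoid(z):.6f}")
# sigmoid(-100) ≈ 0.0, sigmoid(0) = 0.5, sigmoid(100) ≈ 1.0

No matter how extreme the input, the output always stays strictly between 0 and 1, which is exactly what we need for a probability.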

So far, so good. We now have a model that can predict a class probability given a set of features X and their weights θ. This is essentially what Logistic Regression does, with a few more changes.
Wondering what they are? Read ahead to know more.

3. ML model explainability is crucial for businesses. Recall how we interpret coefficients in linear regression:

How much does the output (dependent) variable y change for a 1-unit change in the predictor (independent) variable x, given all the other predictors are held constant?

We want to interpret logistic regression coefficients in a similar fashion. Unfortunately, our coefficients are currently wrapped inside the sigmoid function 𝜎(θ*X) making it difficult to frame our interpretation:

How much does the output (dependent) variable y change for 1 unit sigmoid change in the predictor (independent) variable x, given all the other predictor variables are held constant?

▶️ Sounds weird, right?! What in the world is a 1 unit sigmoid change?!

We would definitely like to simplify this. And this is where the logit function comes to our rescue!

Logit and sigmoid are inverses of each other.
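
Here is a minimal numerical check of this inverse relationship (a sketch in NumPy, assuming the standard definitions of both functions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(p):
    # Inverse of the sigmoid: maps (0, 1) back to (-∞, +∞)
    return np.log(p / (1.0 - p))

# logit(sigmoid(z)) recovers z (up to floating-point precision)
for z in [-3.0, 0.0, 2.5]:
    print(z, logit(sigmoid(z)))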

4. Applying the logit function on both sides on eq. 2:

logit(p) = logit(𝜎(θ*X))                                      (3)

Canceling logit and sigmoid (𝜎) on the right side of the eq. 3:

logit(p) = θ*X                                                (4)

Recall, by definition, logit(p) = log(odds) = log(p/(1-p))

log(p/(1-p)) = θ*X                                            (5)

Bingo! All our θ coefficients are now free from the sigmoid function. We can thus interpret our coefficients (in the same manner as linear regression) as:

How much do the log odds of belonging to a class change for a 1-unit change in the predictor (independent) variable x, given all the other predictors are held constant?
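
To make this concrete, here is a small hypothetical simulation (using scikit-learn, which is one reasonable choice; the data and the true coefficient 0.7 are made up for illustration). Since a 1-unit change in x adds θ1 to the log odds, it multiplies the odds by e^θ1:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))
# Simulate labels from a known relationship: log(p/(1-p)) = 0.5 + 0.7*X1
p = 1 / (1 + np.exp(-(0.5 + 0.7 * X[:, 0])))
y = rng.binomial(1, p)

model = LogisticRegression().fit(X, y)
theta_1 = model.coef_[0][0]
print(f"theta_1 ≈ {theta_1:.2f}")             # should land near 0.7
print(f"odds ratio ≈ {np.exp(theta_1):.2f}")  # e^0.7 ≈ 2.01: a 1-unit increase in X1 roughly doubles the odds

This e^θ quantity is exactly the “odds ratio” that textbooks ask you to memorize; deriving it this way means there is nothing extra to remember.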

5. Performing the above activity has also led us to the doorstep of the cost function for logistic regression: the Log Loss (or Cross-Entropy Loss). Treating each label yi as a Bernoulli outcome with probability p and minimizing the negative log-likelihood gives:

Loss = -yi * log(p) - (1-yi) * log(1-p),    where p = 𝜎(θ*X)

In its general form, cross-entropy is written as:

Cross Entropy = - Σ yi * log(pi)
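
As a sanity check, here is a minimal NumPy implementation of the binary log loss above (my own sketch; the clipping constant is a common numerical-stability trick, not something from the article):

import numpy as np

def log_loss(y_true, p_pred, eps=1e-15):
    # Clip probabilities to avoid log(0) at the extremes
    p = np.clip(p_pred, eps, 1 - eps)
    return np.mean(-y_true * np.log(p) - (1 - y_true) * np.log(1 - p))

y_true = np.array([1, 0, 1, 1])
p_pred = np.array([0.9, 0.2, 0.6, 0.75])
print(f"Log loss: {log_loss(y_true, p_pred):.4f}")  # confident, correct predictions give a low loss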

Thank you for investing your time in reading this article!
