Mastering Logistic Regression: Unpacking WOE and IV Metrics for Variable Selection and Interpretation.

   Apr 24, 2023     7 min read

Ever heard of WoE and IV? They stand for “Weight of Evidence” and “Information Value” respectively, and they are essential for building, selecting, and interpreting variables for logistic regression models. These metrics help you determine how well a variable predicts the response you’re looking for, as well as in which direction the variable pushes the response.

Metrics Presentation

Remember the previous post on how to create WoE and IV functions in Python? Now I’ll show you how to interpret the table and extract even more insights!

The table presents several metrics that are useful in evaluating the relationship between the study variable and the occurrence of negative or positive outcomes. Some of these metrics include:

  • Proportion of targets 0 or 1 for each sector of the study variable, which helps to understand the variable’s distribution in relation to outcomes.
  • WoE, a useful metric for assessing the variable’s discrimination. The further the WoE is from 0, the more discriminating the variable is. A negative WoE indicates that the sector favors the occurrence of the target, while a positive WoE indicates that it works against the occurrence.
  • IV, the Information Value, which helps assess the predictive power of the variables. It is important to note that if a sector of the variable indicates a strong association with the target but appears infrequently in the population, its IV will not be high. The total Information Value of a variable is the sum of the IVs of each sector studied, as in the worked example after this list.
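
To make the formulas concrete, here is a minimal worked example in Python. It follows the convention used in the tables below: WoE is the natural log of a sector’s share of target-0 cases over its share of target-1 cases, and a sector’s IV is the difference of those shares times its WoE. This is my reading of the numbers in the tables, not the code from the previous post, so treat it as a sketch.

```python
import numpy as np

# Shares of each target class that fall into one sector of the variable.
# Numbers taken from the "female" row of the Sex table further below.
distr_0 = 0.147541  # share of all target-0 cases in this sector
distr_1 = 0.681287  # share of all target-1 cases in this sector

woe = np.log(distr_0 / distr_1)  # negative, so this sector favors the target
iv = (distr_0 - distr_1) * woe   # this sector's contribution to the total IV

print(round(woe, 4), round(iv, 4))  # -1.5299 0.8166
```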

Additionally, there is a table indicating the classification of the IV values:

| IV | Classification |
|---|---|
| ≤ 0.02 | Not useful for prediction |
| 0.02 – 0.1 | Weak predictive power |
| 0.1 – 0.3 | Moderate predictive power |
| 0.3 – 0.5 | Strong predictive power |
| > 0.5 | Suspect predictive power |
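
If you’d like that classification at hand in code, a tiny helper (my own convenience function, simply mirroring the table above) could look like this:

```python
def classify_iv(iv_total: float) -> str:
    """Label a variable's total IV using the classification table above."""
    if iv_total <= 0.02:
        return "Not useful for prediction"
    if iv_total <= 0.1:
        return "Weak predictive power"
    if iv_total <= 0.3:
        return "Moderate predictive power"
    if iv_total <= 0.5:
        return "Strong predictive power"
    return "Suspect predictive power"

print(classify_iv(0.37))  # Strong predictive power
```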

These metrics enable you to gain an even clearer view of your data!

Grouping Categories

Grouping categories is an essential step in variable preparation for predictive models. It involves analyzing how similarly the categories discriminate the target and whether each attribute has enough representative cases, so that categories end up grouped in a way that makes sense. Grouping categories based on IV and WoE analysis has several benefits, such as simplifying the equation, reducing the risk of overfitting, and making variables more suitable for the model. However, keep in mind that the information value can only decrease (or, at best, stay the same) when categories are grouped, so only categories with similar WoE should be combined to avoid losing important information, as the sketch below illustrates.
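
Here is a small numerical sketch of that point (the category shares are made up for illustration). When two categories are merged their class shares add, and the merged IV contribution is never larger than the sum of the separate ones; the drop is tiny when the WoEs are close and severe when they are not:

```python
import numpy as np

def iv_contribution(distr_0, distr_1):
    """IV contribution of one category from its target-0 and target-1 shares."""
    return (distr_0 - distr_1) * np.log(distr_0 / distr_1)

# Two categories with similar WoE (~+0.69 and ~+0.63): merging costs almost nothing.
separate = iv_contribution(0.20, 0.10) + iv_contribution(0.15, 0.08)
merged = iv_contribution(0.20 + 0.15, 0.10 + 0.08)
print(round(separate, 4), round(merged, 4))  # 0.1133 0.1130

# Two categories with opposite WoE (~+0.69 and ~-1.79): merging destroys information.
separate = iv_contribution(0.20, 0.10) + iv_contribution(0.05, 0.30)
merged = iv_contribution(0.20 + 0.05, 0.10 + 0.30)
print(round(separate, 4), round(merged, 4))  # 0.5173 0.0705
```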

Let’s dive right in!

When we apply the Woe_IV_Discrete function to the “Sex” variable in the Titanic - Machine Learning from Disaster competition data, we get some interesting insights.

| Sex | Survived = 0 | Survived = 1 | Distr | WoE | IV | IV_total |
|---|---|---|---|---|---|---|
| female | 0.147541 | 0.681287 | 0.216562 | -1.529877 | 0.816565 | 1.341681 |
| male | 0.852459 | 0.318713 | 2.674688 | 0.983833 | 0.525116 | 1.341681 |

The WoE metric shows that being female favors survival (the female row has a strongly negative WoE). The total IV of 1.34 indicates that the Sex variable is very strongly related to the response variable. Note that it falls in the “> 0.5” range that the classification table flags as suspect; in credit scoring that usually hints at leakage, but here it simply reflects how dominant sex is as a predictor of survival on the Titanic.
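
The actual Woe_IV_Discrete code is in the previous post; for readers landing here first, a minimal pandas sketch that reproduces the columns of the table above (my own approximation, not the original function) might be:

```python
import numpy as np
import pandas as pd

def woe_iv_discrete(df: pd.DataFrame, var: str, target: str) -> pd.DataFrame:
    """Sketch: per-category WoE/IV table for a categorical variable and a 0/1 target."""
    counts = pd.crosstab(df[var], df[target])   # rows: categories; columns: 0 and 1
    dist = counts / counts.sum()                # each class's share per category
    table = pd.DataFrame({"0": dist[0], "1": dist[1]})
    table["Distr"] = table["0"] / table["1"]
    table["WoE"] = np.log(table["Distr"])
    table["IV"] = (table["0"] - table["1"]) * table["WoE"]
    table["IV_total"] = table["IV"].sum()
    return table

# train = pd.read_csv("train.csv")  # Titanic training data
# print(woe_iv_discrete(train, "Sex", "Survived"))
```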

Now, let’s take a look at the table generated by the Woe_IV_Continuous function for the “Fare” variable.

| Variable | Limit | Survived = 0 | Survived = 1 | Distr | WoE | IV |
|---|---|---|---|---|---|---|
| Fare | <= [7.55] | 0.143898 | 0.038012 | 3.785624 | 1.331211 | 0.14 |
| Fare | [7.55] to [7.8542] | 0.111111 | 0.076023 | 1.461538 | 0.379490 | 0.01 |
| Fare | [7.8542] to [8.05] | 0.158470 | 0.055556 | 2.852459 | 1.048181 | 0.11 |
| Fare | [8.05] to [10.5] | 0.109290 | 0.052632 | 2.076503 | 0.730685 | 0.04 |
| Fare | [10.5] to [14.4542] | 0.087432 | 0.105263 | 0.830601 | -0.185606 | 0.00 |
| Fare | [14.4542] to [21.6792] | 0.092896 | 0.108187 | 0.858662 | -0.152380 | 0.00 |
| Fare | [21.6792] to [27.] | 0.078324 | 0.134503 | 0.582324 | -0.540729 | 0.03 |
| Fare | [27.] to [39.6875] | 0.103825 | 0.099415 | 1.044359 | 0.043403 | 0.00 |
| Fare | [39.6875] to [77.9583] | 0.076503 | 0.137427 | 0.556679 | -0.585766 | 0.04 |
| Fare | Total | 1.000000 | 1.000000 | 1.000000 | 0.000000 | 0.37 |

This variable also has a high IV (0.37, in the strong range of the classification table). To improve the model, it is recommended to create a binary variable indicating whether Fare is less than or equal to 10.5: the four bins up to 10.5 all have positive WoE (working against survival), while the bins above it have negative or near-zero WoE. When grouping categories, keep in mind that the information value tends to decrease, so it’s best to group only categories with similar WoE. In the “Fare” table, the [27.] to [39.6875] row has a positive WoE close to zero, which suggests a range that is neutral with respect to survival. Thus, including it in the group favoring survival is safe.
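
Creating the two flags is a one-liner each in pandas. A minimal sketch, assuming the competition’s train.csv has been loaded (the column names FLG_female and FLG_Fare_leq_10.5 match the table below):

```python
import pandas as pd

train = pd.read_csv("train.csv")  # Titanic - Machine Learning from Disaster training data

# Binary variables suggested by the WoE/IV analysis above
train["FLG_female"] = (train["Sex"] == "female").astype(int)
train["FLG_Fare_leq_10.5"] = (train["Fare"] <= 10.5).astype(int)
```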

Based on this quick analysis, we were able to create two variables (FLG_Fare_leq_10.5 and FLG_female) that will be useful in building the logistic regression model. You can see the details in the table below.

| PassengerId | Survived | Pclass | Sex | Fare | Cabin | Embarked | FLG_female | FLG_Fare_leq_10.5 |
|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 3 | male | 7.2500 | NaN | S | 0 | 1 |
| 2 | 1 | 1 | female | 71.2833 | C85 | C | 1 | 0 |
| 3 | 1 | 3 | female | 7.9250 | NaN | S | 1 | 1 |
| … | … | … | … | … | … | … | … | … |
| 889 | 0 | 3 | female | 23.4500 | NaN | S | 1 | 0 |
| 890 | 1 | 1 | male | 30.0000 | C148 | C | 0 | 0 |
| 891 | 0 | 3 | male | 7.7500 | NaN | Q | 0 | 1 |

In summary, analyzing WoE, IV, and the related metrics is crucial for identifying predictive variables and grouping categories effectively to improve model performance. I hope this explanation of how to interpret the metrics in the tables has given you the knowledge you need to create, interpret, and select variables for a logistic regression model.

Just a quick heads up that you can find all the support materials on my GitHub page. And in case you missed it, my previous post introduced the functions to calculate WoE and IV in Python. But guess what? I’m planning to go even deeper in a future post and really unpack how these metrics are calculated. So stay tuned! And if you have any questions, don’t hesitate to hit me up. I’m always here and happy to help in any way I can.

References:

  • Anderson, Raymond (2007). The Credit Scoring Toolkit: Theory and Practice for Retail Credit Risk Management and Decision Automation. Oxford University Press.

  • Siddiqi, Naeem (2006). Credit Risk Scorecards: Developing and Implementing Intelligent Credit Scoring. Wiley.

  • Thoppay, Sudarson Mothilal (2015). woe: Computes Weight of Evidence and Information Values. R package version 0.2. https://CRAN.R-project.org/package=woe

  • Eichenberg, Thilo (2018). woeBinning: Supervised Weight of Evidence Binning of Numeric Variables and Factors. R package version 0.1.6. https://CRAN.R-project.org/package=woeBinning