Function in Python to calculate WoE and IV

   Apr 9, 2023     10 min read

Improve Your Logistic Regression Model with Python Functions.

Hey, have you ever heard of Information Value (IV) and Weight of Evidence (WoE)? These two stats are a total power duo when it comes to picking predictor variables for logistic regression models, and together they can improve your model's predictive power big time. With IV and WoE you can see just how effective a variable is at predicting the response you care about, and even find out which direction that variable pushes the response.

But here’s the thing: when I started using Python to create and select variables, I realized I was missing a function that could bring these metrics into my analysis. So I decided to write my own Python functions to generate tables with WoE and IV.

WoE and IV for discrete variables

I started off by building a function for discrete variables and kept improving from there.
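A minimal sketch of what such a function can look like, using pandas (the function name and column labels are my own choices; WoE here is the natural log of the class-0 share over the class-1 share, which is the sign convention the numbers in the table below follow):

```python
import numpy as np
import pandas as pd

def woe_iv_discrete(df, feature, target):
    """WoE/IV table for one discrete variable.

    'Distr' is the ratio of the class-0 and class-1 distributions,
    WoE = ln(Distr), and IV is (dist0 - dist1) * WoE per category,
    with IV_total the sum over all categories.
    """
    counts = pd.crosstab(df[feature], df[target])
    dist0 = counts[0] / counts[0].sum()  # share of class-0 rows per category
    dist1 = counts[1] / counts[1].sum()  # share of class-1 rows per category
    table = pd.DataFrame({"0": dist0, "1": dist1})
    table["Distr"] = dist0 / dist1
    table["WoE"] = np.log(table["Distr"])
    table["IV"] = (dist0 - dist1) * table["WoE"]
    table["IV_total"] = table["IV"].sum()
    return table
```

Under those definitions, the female row in the table below checks out: ln(0.147541 / 0.681287) ≈ −1.5299.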

So, I plugged some data from the Titanic - Machine Learning from Disaster competition into my Python WoE and IV function, and let me tell you, the results were seriously eye-opening!

| Survived | 0 | 1 | Distr | WoE | IV | IV_total |
|---|---|---|---|---|---|---|
| female | 0.147541 | 0.681287 | 0.216562 | -1.529877 | 0.816565 | 1.341681 |
| male | 0.852459 | 0.318713 | 2.674688 | 0.983833 | 0.525116 | 1.341681 |
| C | 0.136612 | 0.273529 | 0.499442 | -0.694264 | 0.095057 | 0.122728 |
| Q | 0.085610 | 0.088235 | 0.970249 | -0.030203 | 0.000079 | 0.122728 |
| S | 0.777778 | 0.638235 | 1.218638 | 0.197734 | 0.027592 | 0.122728 |

The function generated a table that clearly laid out the WoE and IV values for each category of each variable, giving me a better understanding of which factors mattered most for predicting survival.

If you’re working with this same dataset (or any other for that matter), I highly recommend giving this function a go. It’s a powerful tool for gaining insights into your data and improving your predictive models.

WoE and IV for continuous variables

When it came to the continuous variables, things got a bit trickier for me. I wasn’t entirely sure how to break down these variables for analysis because they can vary so much depending on the problem at hand.

I spent a lot of time brainstorming and drafting different approaches until I finally settled on breaking everything into deciles. That way, I could still roll the bins up into larger chunks (like quartiles) and look at the IV at that level if needed.

Anyway, after a lot of hard work, I finally came up with a function for the continuous variables.

It was definitely a challenge, but the end result was super satisfying!
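A decile-based sketch of the idea, using pandas' `qcut` for the quantile binning (the function name, the `q` parameter, and the totals row are my own reconstruction, not necessarily the original details):

```python
import numpy as np
import pandas as pd

def woe_iv_continuous(df, feature, target, q=10):
    """WoE/IV table for one continuous variable, binned into q quantiles.

    Deciles by default (q=10); pass q=4 for quartiles, etc.
    """
    data = df[[feature, target]].dropna()
    bins = pd.qcut(data[feature], q=q, duplicates="drop")
    counts = pd.crosstab(bins, data[target])
    dist0 = counts[0] / counts[0].sum()  # class-0 share per bin
    dist1 = counts[1] / counts[1].sum()  # class-1 share per bin
    table = pd.DataFrame({"0": dist0, "1": dist1})
    table["Distr"] = dist0 / dist1
    table["WoE"] = np.log(table["Distr"])
    table["IV"] = (dist0 - dist1) * table["WoE"]
    # append a totals row like the one in the tables below
    table.index = table.index.astype(str)
    table.loc["total"] = [1.0, 1.0, 1.0, 0.0, table["IV"].sum()]
    return table
```

Because the shares in each class sum to 1, the totals row always shows Distr = 1 and WoE = 0, with the IV column carrying the variable's total IV.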

| variable | limit | 0 | 1 | Distr | WoE | IV |
|---|---|---|---|---|---|---|
| Age | <=[14.] | 0.058288 | 0.131579 | 0.442987 | -0.814214 | 0.06 |
| Age | [14.] to [19.] | 0.096539 | 0.099415 | 0.971070 | -0.029356 | 0.00 |
| Age | [19.] to [22.] | 0.087432 | 0.055556 | 1.573770 | 0.453474 | 0.01 |
| Age | [22.] to [25.] | 0.080146 | 0.076023 | 1.054224 | 0.052805 | 0.00 |
| Age | [25.] to [28.] | 0.067395 | 0.070175 | 0.960383 | -0.040424 | 0.00 |
| Age | [28.] to [31.8] | 0.072860 | 0.076023 | 0.958386 | -0.042505 | 0.00 |
| Age | [31.8] to [36.] | 0.085610 | 0.128655 | 0.665425 | -0.407330 | 0.02 |
| Age | [36.] to [41.] | 0.061931 | 0.055556 | 1.114754 | 0.108634 | 0.00 |
| Age | [41.] to [50.] | 0.085610 | 0.090643 | 0.944474 | -0.057127 | 0.00 |
| Age | | 1.000000 | 1.000000 | 1.000000 | 0.000000 | 0.09 |
| Fare | <=[7.55] | 0.143898 | 0.038012 | 3.785624 | 1.331211 | 0.14 |
| Fare | [7.55] to [7.8542] | 0.111111 | 0.076023 | 1.461538 | 0.379490 | 0.01 |
| Fare | [7.8542] to [8.05] | 0.158470 | 0.055556 | 2.852459 | 1.048181 | 0.11 |
| Fare | [8.05] to [10.5] | 0.109290 | 0.052632 | 2.076503 | 0.730685 | 0.04 |
| Fare | [10.5] to [14.4542] | 0.087432 | 0.105263 | 0.830601 | -0.185606 | 0.00 |
| Fare | [14.4542] to [21.6792] | 0.092896 | 0.108187 | 0.858662 | -0.152380 | 0.00 |
| Fare | [21.6792] to [27.] | 0.078324 | 0.134503 | 0.582324 | -0.540729 | 0.03 |
| Fare | [27.] to [39.6875] | 0.103825 | 0.099415 | 1.044359 | 0.043403 | 0.00 |
| Fare | [39.6875] to [77.9583] | 0.076503 | 0.137427 | 0.556679 | -0.585766 | 0.04 |
| Fare | | 1.000000 | 1.000000 | 1.000000 | 0.000000 | 0.37 |

All together now

Once I got over the initial obstacles, I decided to tackle creating another function. This one would combine the metrics for both discrete and continuous variables into a single, comprehensive table.

I must admit, this function was way easier and quicker to make! So, without further ado, check out the function I whipped up to bring it all together in one table:
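A self-contained sketch of how such a combining function could work, stacking per-variable WoE/IV tables into one DataFrame (the helper and function names are my own guesses, not necessarily the originals):

```python
import numpy as np
import pandas as pd

def _woe_table(counts):
    """Shared WoE/IV math for a crosstab with columns 0 and 1."""
    dist0 = counts[0] / counts[0].sum()
    dist1 = counts[1] / counts[1].sum()
    t = pd.DataFrame({"0": dist0, "1": dist1})
    t["Distr"] = dist0 / dist1
    t["WoE"] = np.log(t["Distr"])
    t["IV"] = (dist0 - dist1) * t["WoE"]
    return t

def woe_iv_all(df, target, discrete_cols, continuous_cols, q=10):
    """One table with WoE/IV rows for every listed variable."""
    frames = []
    for col in discrete_cols:
        t = _woe_table(pd.crosstab(df[col], df[target]))
        t["IV_total"] = t["IV"].sum()  # only filled for discrete variables
        t.insert(0, "variable", col)
        frames.append(t.rename_axis("limit").reset_index())
    for col in continuous_cols:
        data = df[[col, target]].dropna()
        bins = pd.qcut(data[col], q=q, duplicates="drop").astype(str)
        t = _woe_table(pd.crosstab(bins, data[target]))
        t.insert(0, "variable", col)
        frames.append(t.rename_axis("limit").reset_index())
    # concat leaves IV_total as NaN for the continuous rows,
    # matching the blank cells in the final table
    return pd.concat(frames, ignore_index=True)
```
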

The final table is:

| variable | limit | 0 | 1 | Distr | WoE | IV | IV_total |
|---|---|---|---|---|---|---|---|
| female | | 0.147541 | 0.681287 | 0.216562 | -1.529877 | 0.816565 | 1.341681 |
| male | | 0.852459 | 0.318713 | 2.674688 | 0.983833 | 0.525116 | 1.341681 |
| C | | 0.136612 | 0.273529 | 0.499442 | -0.694264 | 0.095057 | 0.122728 |
| Q | | 0.085610 | 0.088235 | 0.970249 | -0.030203 | 0.000079 | 0.122728 |
| S | | 0.777778 | 0.638235 | 1.218638 | 0.197734 | 0.027592 | 0.122728 |
| Age | <=[14.] | 0.058288 | 0.131579 | 0.442987 | -0.814214 | 0.060000 | |
| Age | [14.] to [19.] | 0.096539 | 0.099415 | 0.971070 | -0.029356 | 0.000000 | |
| Age | [19.] to [22.] | 0.087432 | 0.055556 | 1.573770 | 0.453474 | 0.010000 | |
| Age | [22.] to [25.] | 0.080146 | 0.076023 | 1.054224 | 0.052805 | 0.000000 | |
| Age | [25.] to [28.] | 0.067395 | 0.070175 | 0.960383 | -0.040424 | 0.000000 | |
| Age | [28.] to [31.8] | 0.072860 | 0.076023 | 0.958386 | -0.042505 | 0.000000 | |
| Age | [31.8] to [36.] | 0.085610 | 0.128655 | 0.665425 | -0.407330 | 0.020000 | |
| Age | [36.] to [41.] | 0.061931 | 0.055556 | 1.114754 | 0.108634 | 0.000000 | |
| Age | [41.] to [50.] | 0.085610 | 0.090643 | 0.944474 | -0.057127 | 0.000000 | |
| Age | | 1.000000 | 1.000000 | 1.000000 | 0.000000 | 0.090000 | |
| Fare | <=[7.55] | 0.143898 | 0.038012 | 3.785624 | 1.331211 | 0.140000 | |
| Fare | [7.55] to [7.8542] | 0.111111 | 0.076023 | 1.461538 | 0.379490 | 0.010000 | |
| Fare | [7.8542] to [8.05] | 0.158470 | 0.055556 | 2.852459 | 1.048181 | 0.110000 | |
| Fare | [8.05] to [10.5] | 0.109290 | 0.052632 | 2.076503 | 0.730685 | 0.040000 | |
| Fare | [10.5] to [14.4542] | 0.087432 | 0.105263 | 0.830601 | -0.185606 | 0.000000 | |
| Fare | [14.4542] to [21.6792] | 0.092896 | 0.108187 | 0.858662 | -0.152380 | 0.000000 | |
| Fare | [21.6792] to [27.] | 0.078324 | 0.134503 | 0.582324 | -0.540729 | 0.030000 | |
| Fare | [27.] to [39.6875] | 0.103825 | 0.099415 | 1.044359 | 0.043403 | 0.000000 | |
| Fare | [39.6875] to [77.9583] | 0.076503 | 0.137427 | 0.556679 | -0.585766 | 0.040000 | |
| Fare | | 1.000000 | 1.000000 | 1.000000 | 0.000000 | 0.370000 | |

Hey, did you like my Python functions for calculating IV and WoE? I had a blast creating them, but I’ve realized that a lot of folks out there might still feel a bit lost when it comes to these metrics.

Since they can be so helpful in improving your logistic regression models, I’ve decided it’s high time to write up a post and explain everything in detail. I’m going to walk you through how to calculate IV and WoE and give you some seriously valuable tips on how to use these metrics to make savvy decisions and elevate your models to the next level.

I’m really excited to share all of this with you, so don’t miss out! Keep your eyes peeled for my next post!

References:

  • Anderson, Raymond. The Credit Scoring Toolkit: Theory and Practice for Retail Credit Risk Management and Decision Automation. Oxford University Press, 2007.

  • Siddiqi, Naeem. Credit Risk Scorecards: Developing and Implementing Intelligent Credit Scoring. Wiley, 2006.

  • Sudarson Mothilal Thoppay (2015). woe: Computes Weight of Evidence and Information Values. R package version 0.2. https://CRAN.R-project.org/package=woe

  • Thilo Eichenberg (2018). woeBinning: Supervised Weight of Evidence Binning of Numeric Variables and Factors. R package version 0.1.6. https://CRAN.R-project.org/package=woeBinning