[Financial Risk Control Series] [0] Learning Scorecard Models from Scratch

P粉084495128
Published: 2025-07-22 13:40:56 · Original · 993 views
This article is a hands-on, from-scratch introduction to developing a credit scorecard for financial risk control. It uses roughly 400k loan records from a lending platform, with 16 columns of variables and Defaulter as the target for predicting the probability of overdue payment. The workflow covers data construction, exploratory analysis, preprocessing, feature selection, model development and evaluation, explains metrics such as WOE and IV, and compares logistic regression against several ensemble models.



Financial Risk Control from Scratch: Scorecard Development

The data comes from the internet and is used for learning and exchange only.

This project introduces the scorecard model and works through it in practice; it is suitable for complete beginners.

The data comes from a lending platform's loan records: roughly 400,000 rows with 16 columns of variables, where Defaulter indicates whether the customer defaulted. The task is to predict a customer's probability of overdue payment from their loan history, collateral value, and other information.


Field table

Variable | Description | Type
AppNo | record id | ID
Region | city where the user lives | categorical
Area | district within the user's city | categorical
Activity | the user's economic activity | categorical
Guarantor | whether the user provides a guarantor | binary
Collateral | whether the user provides collateral | binary
Collateral_valuation | value of the collateral | numerical
Age | age | numerical
Properties_Status | ownership status of the user's property | categorical
Properties_Total | number of properties the user owns | numerical
Amount | loan amount | numerical
Term | number of loan installments | numerical
Historic_Loans | number of the customer's past loans | numerical
Current_Loans | total amount of the customer's loans currently in repayment (excluding this one) | numerical
Max_Arrears | maximum days the customer has been in arrears (excluding this one) | numerical
Defaulter | whether the customer defaulted (TARGET) | binary


Scorecard models

The credit scorecard is a mature predictive method, used widely abroad for credit risk assessment and financial risk control. Its principle: discretize the model variables with WOE encoding, then fit a logistic regression, a generalized linear model for a binary target.

There are three kinds of credit scorecards:

  • A card (Application scorecard): used before loan approval to quantitatively assess the applicant;

  • B card (Behavior scorecard): used in post-loan management; predicts the borrower's future repayment ability and willingness from repayment and transaction behavior combined with other data;

  • C card (Collection scorecard): given that the borrower is already overdue, predicts the probability that the loan turns into a bad debt.

The three scorecards differ in when they are used, focusing respectively on the pre-loan, in-loan, and post-loan stages.

Data analysis workflow:

Data construction: training set and test set

  • Determine the modeling requirement

    The requirement determines whether to build an application, behavior, or collection scorecard

  • Determine the observation and performance windows

    The observation window is the period over which variables are computed, usually set to 6 to 24 months

    The performance window is the prediction horizon; to predict default within 12 months, the performance window is 12 months

For the time-series data at hand, we take records from June 2020 to June 2021 and predict whether a customer defaults within the next six months. The observation window is thus 12 months and the performance window 6 months; customers who default between June and December 2021 receive the "bad" label

  • Define good and bad customers (a labeling sketch follows this item)

    In real production, labels rarely come ready-made as they do in competitions; risk analysts must define them from business understanding.

    A bad customer is generally a non-target customer as defined by the company, e.g., one who reaches M2 delinquency within six months or, as above, one who defaults within six months
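As a minimal sketch of how such a label can be constructed, assuming a hypothetical loan-level table with obs_date and default_date columns (these names are illustrative, not from this dataset):

import pandas as pd

# Hypothetical loan-level table: one row per loan, with the observation date
# and the date of first default (NaT if the loan never defaulted).
perf = pd.DataFrame({
    "loan_id":      [1, 2, 3],
    "obs_date":     pd.to_datetime(["2021-06-30"] * 3),
    "default_date": pd.to_datetime(["2021-08-15", None, "2022-03-01"]),
})

# "Bad" = defaulted within the 6-month performance window after observation.
window_end = perf["obs_date"] + pd.DateOffset(months=6)
perf["Defaulter"] = (perf["default_date"].notna()
                     & (perf["default_date"] <= window_end)).astype(int)
print(perf)   # loan 1 is bad; loans 2 and 3 are good under this window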

  • Sample segmentation

    For the best predictive performance, samples are usually segmented by customer group or product, and a separate model is developed for each.

    For example, customers with annual income above 100k versus below 100k, or different risk models for different credit products

    Common segmentation dimensions:

    • Product type
    • Region (divided by business development status)
    • Historical delinquency
    • Account age
    • Customer age
    • Customer occupation, etc.

This trick is also widely used in data competitions; the author usually calls it "sample subdivision". In competitions it shows up in two forms: subdividing samples and subdividing labels.

Subdividing samples: for a binary classification task, interference from some samples may make it worth first building a clustering model that splits the training data into several groups, then building a separate binary classifier for each group. The author will demonstrate this in a later project.

Subdividing labels: likewise for a binary task, a deeper understanding of the data may lead one to refine the labels by hand. In cat-vs-dog classification, say, cats can be split further by color; this injects extra information into the labels and often improves results.

Sample segmentation here refers to subdividing by sample.

Exploratory data analysis (EDA): variable distributions (median, mean, etc.)


From here on the process resembles a standard data competition: analyze distributions and correlations to understand the data better.

Data preprocessing: missing-value handling, outlier handling, feature correlation analysis

Feature selection: variable discretization and WOE transformation

This step corresponds to feature engineering in a data competition, but scorecards mainly rely on binning-based methods.

Model development: logistic regression

Model evaluation: K-S statistic, goodness-of-fit curves

Credit scoring: build a standard scorecard from the good/bad odds, base score, etc.

Predict on the test set and convert the predictions into credit scores (a sketch of the score scaling follows)
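The scoring step itself is not shown in the notebook below, so here is a hedged sketch of the usual log-odds scaling; the anchor values (600 points at good:bad odds of 30:1, 20 points to double the odds) are illustrative assumptions, not values from this project:

import numpy as np

# Illustrative anchors, not values from this project.
BASE_SCORE, BASE_ODDS, PDO = 600.0, 30.0, 20.0

factor = PDO / np.log(2)                          # points per doubling of the odds
offset = BASE_SCORE - factor * np.log(BASE_ODDS)  # so odds of 30:1 map to 600 points

def prob_to_score(p_bad):
    """Map a predicted default probability to a scorecard score."""
    odds_good = (1.0 - p_bad) / p_bad             # good:bad odds
    return offset + factor * np.log(odds_good)

print(prob_to_score(np.array([0.01, 0.10, 0.50])))  # lower risk maps to a higher score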

The author's summary of the data-processing steps:

  • Data dimensions used

  • Feature extraction and risk rules

  • Feature derivation

Common scorecard metrics and methods

This part explains metrics and methods that are common in scorecard building but rarely used in finance-related data competitions.

WOE

WOE (Weight of Evidence) is a supervised encoding scheme: each bin is encoded by how concentrated the target class is within it.

It serves as a measure of how differently good (normal) and bad (default) samples are distributed.

WOEᵢ = ln( (Badᵢ / Bad_total) / (Goodᵢ / Good_total) ), the log ratio of bin i's share of bad samples to its share of good samples

Where is WOE commonly applied in practice?

  • Handling missing values:

    When a data source has less than 100% coverage there will be missing values, and null can be treated as a bin of its own. This is very useful when modeling per data source: even a source with only 20% coverage can be put to use.

  • Handling outliers:

    Discretizing through binning makes a variable robust to outliers. If age shows an anomalous value of 200, it can be placed into the "age > 60" bin, removing its influence.

  • Business interpretability:

    We tend to reason about variables linearly: as x grows, y grows. In practice x and y are often related non-linearly, and a WOE transform handles this (see the sketch after this list).
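A minimal, self-contained sketch of per-bin WOE, matching the definition used in the code below; the column, bins, and counts are toy values, not from this dataset:

import numpy as np
import pandas as pd

# Toy data: an already-binned feature and a binary default flag.
df = pd.DataFrame({
    "age_bin":   ["<30"]*4 + ["30-45"]*4 + [">45"]*4,
    "Defaulter": [1, 1, 0, 0,  1, 0, 0, 0,  1, 0, 0, 0],
})

bad_total = df["Defaulter"].sum()
good_total = len(df) - bad_total

grp = df.groupby("age_bin")["Defaulter"].agg(bad="sum", total="count")
grp["good"] = grp["total"] - grp["bad"]
# WOE_i = ln( (bad_i / bad_total) / (good_i / good_total) )
grp["woe"] = np.log((grp["bad"] / bad_total) / (grp["good"] / good_total))
print(grp)

A missing-value bin or an outlier bin gets its WOE the same way, which is exactly why the two applications above work.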

IV

IV (Information Value) is closely tied to WOE and is commonly used to measure a variable's predictive power, which makes it a quick variable-screening tool

  • A bin whose defaulter share exceeds its normal share has positive WOE under the ln(Bad%/Good%) convention used in this notebook (references that define WOE as ln(Good%/Bad%) flip the sign)

  • The larger the absolute value, the more strongly that bin separates good customers from bad ones

  • WOE values should be well spread across bins and follow a sensible low-to-high trend

    IV = Σᵢ₌₁ⁿ IVᵢ = Σᵢ₌₁ⁿ (Bad%ᵢ − Good%ᵢ) × WOEᵢ
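Continuing the toy numbers from the WOE sketch above, IV follows directly from the per-bin bad/good shares:

import numpy as np

bad  = np.array([2., 1., 1.])   # defaulters per bin (toy counts)
good = np.array([2., 3., 3.])   # non-defaulters per bin

bad_pct, good_pct = bad / bad.sum(), good / good.sum()
woe = np.log(bad_pct / good_pct)
iv = np.sum((bad_pct - good_pct) * woe)   # IV = sum_i (Bad%_i - Good%_i) * WOE_i
print(iv)                                 # ~0.27: medium-to-strong by the usual rules of thumb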


PSI

The Population Stability Index (PSI) measures how stable the distribution of a validation sample across score bands is relative to the modeling sample. In modeling it is commonly used to screen feature variables and assess model stability.

PSI needs two distributions, an actual one and an expected one. The training sample (In the Sample, INS) usually serves as the expected distribution, and a validation sample as the actual one. Validation samples generally include out-of-sample (OOS) and out-of-time (OOT) sets

Typically the training-set (INS) distribution is taken as expected, PSI is then computed per month or week across time windows to produce a Monthly PSI Report, and unstable variables are removed.


PSI judges a variable's stability; IV judges its predictive power.
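The index itself is PSI = Σᵢ (Actual%ᵢ − Expected%ᵢ) × ln(Actual%ᵢ / Expected%ᵢ), summed over score bins; common rules of thumb read PSI < 0.1 as stable and PSI > 0.25 as a significant shift. A minimal sketch with synthetic scores (all numbers below are illustrative):

import numpy as np

def psi(expected, actual, bins=10, eps=1e-6):
    """PSI between two score samples; bin edges come from quantiles of the
    expected (training) scores. eps guards against empty bins."""
    edges = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    a_pct = np.histogram(actual,   bins=edges)[0] / len(actual)   + eps
    return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

rng = np.random.default_rng(0)
train_scores = rng.normal(600, 50, 10000)  # expected distribution (INS)
oot_scores   = rng.normal(590, 55, 5000)   # actual distribution (OOT), slightly shifted
print(psi(train_scores, oot_scores))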

KS

KS evaluates a model's risk-ranking power: it measures the maximum difference between the cumulative distributions of good and bad samples. The larger that gap, the larger the KS and the stronger the model's ability to separate risk.

A larger KS means the model separates positive and negative customers more cleanly. As a rule of thumb, KS > 0.2 indicates reasonably good predictive power.

  1. Compute the cumulative ratios of normal and default accounts in each score band
  2. Take the difference of the cumulative ratios in each band
  3. The largest difference is the KS (a sketch of these steps follows this list)
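A minimal sketch of those three steps on synthetic scores, cumulating per record rather than per score band (equivalent in the limit):

import numpy as np

rng = np.random.default_rng(1)
# Synthetic labels and scores: defaulters (y=1) score lower on average.
y = rng.integers(0, 2, 2000)
score = np.where(y == 1, rng.normal(550, 60, 2000), rng.normal(620, 60, 2000))

y_sorted = y[np.argsort(score)]
# Step 1: cumulative bad and good ratios along the ascending score axis
cum_bad  = np.cumsum(y_sorted) / y_sorted.sum()
cum_good = np.cumsum(1 - y_sorted) / (1 - y_sorted).sum()
# Steps 2-3: KS is the maximum gap between the two cumulative curves
print("KS =", np.max(np.abs(cum_bad - cum_good)))

The notebook's In [26] cell computes the same quantity as max(tpr - fpr) from sklearn's roc_curve.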

Talk is cheap. Show me the code

In [1]
## Package setup: numpy and pandas are the standard third-party data-analysis packages
import numpy as np
import pandas as pd
from scipy.stats import chi2
In [2]
## Load the data with pandas' read_csv; the result is a pandas DataFrame
train = pd.read_csv('./work/data.csv')
In [3]
#### Descriptive statistics for the dataset ###
numerical = ['Collateral_valuation', 'Age', 'Properties_Total', 'Amount',
             'Term', 'Historic_Loans', 'Current_Loans', 'Max_Arrears']

categorical = ['Region', 'Area', 'Activity', 'Properties_Status']

binaray = ['Guarantor', 'Collateral']

### Keep the target variable in its own list
target_var = ['Defaulter']

train_X = train[numerical + categorical + binaray]
train_Y = train[target_var]

train_X.describe()
       Collateral_valuation           Age  Properties_Total         Amount  \
count          28463.000000  50000.000000      50000.000000   50000.000000   
mean            6399.752415     41.128860          1.992360    8580.784360   
std             8155.521062     10.443382          1.175521   10088.501785   
min               10.000000     18.000000          1.000000    1137.000000   
25%             1923.000000     33.000000          1.000000    3002.000000   
50%             3768.000000     41.000000          2.000000    5500.000000   
75%             7589.500000     49.000000          2.000000    9912.250000   
max           137618.000000     80.000000         15.000000  134750.000000   

               Term  Historic_Loans  Current_Loans   Max_Arrears  \
count  50000.000000    50000.000000   38523.000000  50000.000000   
mean      26.199240        4.261880       1.797679     58.077620   
std       11.511816        3.728208       1.147399    205.871957   
min       11.000000        1.000000       1.000000      0.000000   
25%       21.000000        2.000000       1.000000      0.000000   
50%       23.000000        3.000000       1.000000      0.000000   
75%       34.000000        6.000000       2.000000     24.000000   
max       69.000000       38.000000      12.000000   3483.000000   

             Region          Area      Activity     Guarantor    Collateral  
count  50000.000000  50000.000000  47422.000000  50000.000000  50000.000000  
mean       9.134600     35.360280      8.936527      0.086540      0.569260  
std        2.522406     24.703517      7.017887      0.281163      0.495185  
min        1.000000      5.000000      1.000000      0.000000      0.000000  
25%        8.000000     15.000000      1.000000      0.000000      0.000000  
50%        9.000000     30.000000     10.000000      0.000000      1.000000  
75%       10.000000     50.000000     14.000000      0.000000      1.000000  
max       15.000000     95.000000     19.000000      1.000000      1.000000
In [4]
### First convert the categorical variables into dummies to ease data exploration
dummy_region = pd.get_dummies(train_X["Region"], prefix='Region')
dummy_region_col = list(dummy_region.columns)
dummy_area = pd.get_dummies(train_X["Area"], prefix='Area')
dummy_area_col = list(dummy_area.columns)
dummy_activity = pd.get_dummies(train_X["Activity"], prefix='Activity', dummy_na=True)
dummy_activity_col = list(dummy_activity.columns)
dummy_status = pd.get_dummies(train_X["Properties_Status"], prefix='PropertiesStatus')
dummy_status_col = list(dummy_status.columns)
dummy_col_dict = {"Region": dummy_region_col, "Area": dummy_area_col,
                  "Activity": dummy_activity_col, "Properties_Status": dummy_status_col}
In [5]
### Build the feature matrix and the target vector
train_X = pd.concat([train[numerical + binaray], dummy_region, dummy_area,
                     dummy_activity, dummy_status], axis=1)
train_Y = train[target_var]

train = pd.concat([train_X, train_Y], axis=1)

### Descriptive statistics on the expanded feature set
train_X.describe()
       Collateral_valuation           Age  Properties_Total         Amount  \
count          28463.000000  50000.000000      50000.000000   50000.000000   
mean            6399.752415     41.128860          1.992360    8580.784360   
std             8155.521062     10.443382          1.175521   10088.501785   
min               10.000000     18.000000          1.000000    1137.000000   
25%             1923.000000     33.000000          1.000000    3002.000000   
50%             3768.000000     41.000000          2.000000    5500.000000   
75%             7589.500000     49.000000          2.000000    9912.250000   
max           137618.000000     80.000000         15.000000  134750.000000   

               Term  Historic_Loans  Current_Loans   Max_Arrears  \
count  50000.000000    50000.000000   38523.000000  50000.000000   
mean      26.199240        4.261880       1.797679     58.077620   
std       11.511816        3.728208       1.147399    205.871957   
min       11.000000        1.000000       1.000000      0.000000   
25%       21.000000        2.000000       1.000000      0.000000   
50%       23.000000        3.000000       1.000000      0.000000   
75%       34.000000        6.000000       2.000000     24.000000   
max       69.000000       38.000000      12.000000   3483.000000   

          Guarantor    Collateral  ...  Activity_15.0  Activity_16.0  \
count  50000.000000  50000.000000  ...   50000.000000   50000.000000   
mean       0.086540      0.569260  ...       0.002040       0.000620   
std        0.281163      0.495185  ...       0.045121       0.024892   
min        0.000000      0.000000  ...       0.000000       0.000000   
25%        0.000000      0.000000  ...       0.000000       0.000000   
50%        0.000000      1.000000  ...       0.000000       0.000000   
75%        0.000000      1.000000  ...       0.000000       0.000000   
max        1.000000      1.000000  ...       1.000000       1.000000   

       Activity_17.0  Activity_18.0  Activity_19.0  Activity_nan  \
count   50000.000000    50000.00000   50000.000000  50000.000000   
mean        0.030360        0.06984       0.077780      0.051560   
std         0.171578        0.25488       0.267828      0.221139   
min         0.000000        0.00000       0.000000      0.000000   
25%         0.000000        0.00000       0.000000      0.000000   
50%         0.000000        0.00000       0.000000      0.000000   
75%         0.000000        0.00000       0.000000      0.000000   
max         1.000000        1.00000       1.000000      1.000000   

       PropertiesStatus_A  PropertiesStatus_B  PropertiesStatus_C  \
count        50000.000000        50000.000000        50000.000000   
mean             0.121360            0.639960            0.016820   
std              0.326548            0.480016            0.128598   
min              0.000000            0.000000            0.000000   
25%              0.000000            0.000000            0.000000   
50%              0.000000            1.000000            0.000000   
75%              0.000000            1.000000            0.000000   
max              1.000000            1.000000            1.000000   

       PropertiesStatus_D  
count        50000.000000  
mean             0.221860  
std              0.415502  
min              0.000000  
25%              0.000000  
50%              0.000000  
75%              0.000000  
max              1.000000  

[8 rows x 69 columns]
In [6]
### describe the data separately by target value
train[train['Defaulter'] == 0].describe()
       Collateral_valuation           Age  Properties_Total         Amount  \
count          24439.000000  41781.000000      41781.000000   41781.000000   
mean            5858.894267     41.630646          2.041813    7979.738900   
std             7325.955843     10.315372          1.185661    9497.662856   
min               10.000000     18.000000          1.000000    1137.000000   
25%             1823.000000     34.000000          1.000000    2847.000000   
50%             3535.000000     42.000000          2.000000    5166.000000   
75%             6996.000000     49.000000          3.000000    9145.000000   
max           122388.000000     80.000000         15.000000  134750.000000   

               Term  Historic_Loans  Current_Loans   Max_Arrears  \
count  41781.000000     41781.00000   31991.000000  41781.000000   
mean      25.143367         4.44817       1.793411     43.002561   
std       10.790318         3.83826       1.139883    134.028353   
min       11.000000         1.00000       1.000000      0.000000   
25%       21.000000         2.00000       1.000000      0.000000   
50%       23.000000         3.00000       1.000000      0.000000   
75%       32.000000         6.00000       2.000000     23.000000   
max       69.000000        38.00000      12.000000   2701.000000   

          Guarantor    Collateral  ...  Activity_16.0  Activity_17.0  \
count  41781.000000  41781.000000  ...   41781.000000   41781.000000   
mean       0.087121      0.584931  ...       0.000550       0.033915   
std        0.282016      0.492740  ...       0.023456       0.181012   
min        0.000000      0.000000  ...       0.000000       0.000000   
25%        0.000000      0.000000  ...       0.000000       0.000000   
50%        0.000000      1.000000  ...       0.000000       0.000000   
75%        0.000000      1.000000  ...       0.000000       0.000000   
max        1.000000      1.000000  ...       1.000000       1.000000   

       Activity_18.0  Activity_19.0  Activity_nan  PropertiesStatus_A  \
count   41781.000000   41781.000000  41781.000000        41781.000000   
mean        0.072162       0.070439      0.051722            0.108662   
std         0.258759       0.255888      0.221468            0.311218   
min         0.000000       0.000000      0.000000            0.000000   
25%         0.000000       0.000000      0.000000            0.000000   
50%         0.000000       0.000000      0.000000            0.000000   
75%         0.000000       0.000000      0.000000            0.000000   
max         1.000000       1.000000      1.000000            1.000000   

       PropertiesStatus_B  PropertiesStatus_C  PropertiesStatus_D  Defaulter  
count        41781.000000        41781.000000        41781.000000    41781.0  
mean             0.657619            0.014528            0.219191        0.0  
std              0.474512            0.119655            0.413703        0.0  
min              0.000000            0.000000            0.000000        0.0  
25%              0.000000            0.000000            0.000000        0.0  
50%              1.000000            0.000000            0.000000        0.0  
75%              1.000000            0.000000            0.000000        0.0  
max              1.000000            1.000000            1.000000        0.0  

[8 rows x 70 columns]
In [7]
train[train['Defaulter']==1].describe()
       Collateral_valuation          Age  Properties_Total         Amount  \
count           4024.000000  8219.000000       8219.000000    8219.000000   
mean            9684.551690    38.578051          1.740966   11636.178002   
std            11488.015293    10.714466          1.088420   12224.974714   
min               55.000000    18.000000          1.000000    1138.000000   
25%             2760.000000    30.000000          1.000000    4029.000000   
50%             5833.000000    38.000000          1.000000    7771.000000   
75%            12312.000000    46.000000          2.000000   14260.000000   
max           137618.000000    78.000000         12.000000  132168.000000   

              Term  Historic_Loans  Current_Loans  Max_Arrears    Guarantor  \
count  8219.000000     8219.000000    6532.000000  8219.000000  8219.000000   
mean     31.566736        3.314880       1.818585   134.711157     0.083587   
std      13.411274        2.931627       1.183386   399.384840     0.276784   
min      11.000000        1.000000       1.000000     0.000000     0.000000   
25%      22.000000        1.000000       1.000000     0.000000     0.000000   
50%      31.000000        2.000000       1.000000     0.000000     0.000000   
75%      46.000000        4.000000       2.000000    39.000000     0.000000   
max      68.000000       33.000000      10.000000  3483.000000     1.000000   

        Collateral  ...  Activity_16.0  Activity_17.0  Activity_18.0  \
count  8219.000000  ...    8219.000000    8219.000000    8219.000000   
mean      0.489597  ...       0.000973       0.012289       0.058036   
std       0.499922  ...       0.031185       0.110177       0.233826   
min       0.000000  ...       0.000000       0.000000       0.000000   
25%       0.000000  ...       0.000000       0.000000       0.000000   
50%       0.000000  ...       0.000000       0.000000       0.000000   
75%       1.000000  ...       0.000000       0.000000       0.000000   
max       1.000000  ...       1.000000       1.000000       1.000000   

       Activity_19.0  Activity_nan  PropertiesStatus_A  PropertiesStatus_B  \
count    8219.000000   8219.000000         8219.000000         8219.000000   
mean        0.115099      0.050736            0.185911            0.550189   
std         0.319161      0.219472            0.389058            0.497505   
min         0.000000      0.000000            0.000000            0.000000   
25%         0.000000      0.000000            0.000000            0.000000   
50%         0.000000      0.000000            0.000000            1.000000   
75%         0.000000      0.000000            0.000000            1.000000   
max         1.000000      1.000000            1.000000            1.000000   

       PropertiesStatus_C  PropertiesStatus_D  Defaulter  
count         8219.000000         8219.000000     8219.0  
mean             0.028471            0.235430        1.0  
std              0.166323            0.424293        0.0  
min              0.000000            0.000000        1.0  
25%              0.000000            0.000000        1.0  
50%              0.000000            0.000000        1.0  
75%              0.000000            0.000000        1.0  
max              1.000000            1.000000        1.0  

[8 rows x 70 columns]
In [8]
## Data exploration: covariance and correlation matrices
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
train.cov()
train.corr()

### Plot a histogram and a box plot
from matplotlib import pyplot as plt
plt.hist(train[train['Defaulter']==0]['Age'], color='blue', label='Class 0', alpha=0.5, bins=20)
plt.hist(train[train['Defaulter']==1]['Age'], color='red', label='Class 1', alpha=0.5, bins=20)

plt.legend(loc='best')
plt.grid()
plt.show()

train[['Defaulter', 'Age']].boxplot(by='Defaulter', layout=(1,1))
plt.show()
<Figure size 432x288 with 1 Axes>
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/numpy/core/_asarray.py:102: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  return array(a, dtype, copy=False, order=order)
<Figure size 432x288 with 1 Axes>
In [9]
## Start with missing values
missing = train_X.isnull().sum()
missing = missing[missing > 0]
missing.sort_values(inplace=True)
missing.plot.bar()
<matplotlib.axes._subplots.AxesSubplot at 0x7ff273757b90>
<Figure size 432x288 with 1 Axes>
In [10]
## Two columns have missing values: current outstanding loans and collateral value.
## A missing collateral value simply means there is no collateral, which the
## Collateral flag already captures, so that column needs no treatment.
### A missing Current_Loans means no loan is currently being repaid; following
### the "missingness is information" principle, encode it as a flag
train_X.loc[train_X['Current_Loans'].isnull(), 'Current_Loans_nan'] = 1
train_X.loc[train_X['Current_Loans_nan'].isnull(), 'Current_Loans_nan'] = 0
binaray = binaray + ["Current_Loans_nan"]


train = pd.concat([train_X, train_Y], axis=1)

### Drop variables with very small variance, threshold 0.001
#### Variance filter on the numerical variables
drop_col = list()
for col in numerical:
    col_var = train_X[col].var()
    if col_var < 0.001:
        drop_col.append(col)
        train_X.drop(axis=1, columns=col, inplace=True)

numerical = list(set(numerical).difference(set(drop_col)))
train = pd.concat([train_X, train_Y], axis=1)
In [11]
### After missing values are handled, rebalance if the class ratio is skewed.
### Here the ratio is roughly 1:5, so no adjustment is needed.
##### Count the good and bad samples of the target variable
'''
neg_Y = train_Y.sum()
pos_Y = train_Y.count() - neg_Y

### The good/bad ratio is too skewed: use stratified sampling to rebalance
### Split into good (P_train) and bad (N_train) sample sets
P_train = train[train['Defaulter'] == 0]
N_train = train[train['Defaulter'] == 1]

### Sample the good set at 5x the number of bad samples
P_train_sample = P_train.sample(n=N_train.shape[0] * 5, frac=None, replace=False, weights=None, random_state=2, axis=0)
print(P_train_sample.shape)
print(N_train.shape)

### Merge the sampled good set with the bad set into a new training set
train_sample = pd.concat([N_train, P_train_sample])
print(train_sample.shape)

### Reshuffle and reset the new training set's index
train_sample = train_sample.sample(frac=1).reset_index(drop=True)
'''
In [12]
## Hand-rolled optimal chi-square (ChiMerge) binning
def get_chi2(X, col):
    '''
    Compute the chi-square statistic for each value of col
    '''
    # Expected default frequency over the whole sample
    pos_cnt = X['Defaulter'].sum()
    all_cnt = X['Defaulter'].count()
    expected_ratio = float(pos_cnt) / all_cnt

    # Sort the variable's values in ascending order
    df = X[[col, 'Defaulter']]
    df = df.dropna()
    col_value = list(set(df[col]))
    col_value.sort()

    # Chi-square statistic for each interval
    chi_list = []
    pos_list = []
    expected_pos_list = []
    for value in col_value:
        df_pos_cnt = df.loc[df[col] == value, 'Defaulter'].sum()
        df_all_cnt = df.loc[df[col] == value, 'Defaulter'].count()

        expected_pos_cnt = df_all_cnt * expected_ratio
        chi_square = (df_pos_cnt - expected_pos_cnt)**2 / expected_pos_cnt
        chi_list.append(chi_square)
        pos_list.append(df_pos_cnt)
        expected_pos_list.append(expected_pos_cnt)

    # Collect the results into a dataframe
    chi_result = pd.DataFrame({col: col_value, 'chi_square': chi_list,
                               'pos_cnt': pos_list, 'expected_pos_cnt': expected_pos_list})
    return chi_result


def chiMerge(chi_result, maxInterval=5):
    '''
    Merge intervals under the maximum-interval-count rule
    '''
    group_cnt = len(chi_result)
    # While there are more intervals than allowed, keep merging
    while group_cnt > maxInterval:
        ## Pick the interval with the smallest chi-square value
        min_index = chi_result[chi_result['chi_square'] == chi_result['chi_square'].min()].index.tolist()[0]
        # If it is the first interval, merge downward
        if min_index == 0:
            chi_result = merge_chiSquare(chi_result, min_index+1, min_index)
        # If it is the last interval, merge upward
        elif min_index == group_cnt - 1:
            chi_result = merge_chiSquare(chi_result, min_index-1, min_index)
        # Otherwise merge with the neighbor that has the smaller chi-square
        else:
            if chi_result.loc[min_index-1, 'chi_square'] > chi_result.loc[min_index+1, 'chi_square']:
                chi_result = merge_chiSquare(chi_result, min_index, min_index+1)
            else:
                chi_result = merge_chiSquare(chi_result, min_index-1, min_index)

        group_cnt = len(chi_result)
    return chi_result


def cal_chisqure_threshold(dfree=4, cf=0.1):
    '''
    Chi-square threshold for a given degree of freedom and significance level
    '''
    percents = [0.95, 0.90, 0.5, 0.1, 0.05, 0.025, 0.01, 0.005]
    ## Thresholds for each degree of freedom at each significance level
    df = pd.DataFrame(np.array([chi2.isf(percents, df=i) for i in range(1, 30)]))
    df.columns = percents
    df.index = df.index + 1

    pd.set_option('precision', 3)
    return df.loc[dfree, cf]


def chiMerge_chisqure(chi_result, dfree=4, cf=0.1, maxInterval=5):

    threshold = cal_chisqure_threshold(dfree, cf)

    min_chiSquare = chi_result['chi_square'].min()

    group_cnt = len(chi_result)

    # Merge while the smallest chi-square is below the threshold and there
    # are still more intervals than maxInterval
    while min_chiSquare < threshold and group_cnt > maxInterval:
        min_index = chi_result[chi_result['chi_square'] == chi_result['chi_square'].min()].index.tolist()[0]
        # If it is the first interval, merge downward
        if min_index == 0:
            chi_result = merge_chiSquare(chi_result, min_index+1, min_index)
        # If it is the last interval, merge upward
        elif min_index == group_cnt - 1:
            chi_result = merge_chiSquare(chi_result, min_index-1, min_index)
        # Otherwise merge with the adjacent interval that has the smaller chi-square
        else:
            if chi_result.loc[min_index-1, 'chi_square'] > chi_result.loc[min_index+1, 'chi_square']:
                chi_result = merge_chiSquare(chi_result, min_index, min_index+1)
            else:
                chi_result = merge_chiSquare(chi_result, min_index-1, min_index)

        min_chiSquare = chi_result['chi_square'].min()

        group_cnt = len(chi_result)
    return chi_result


def merge_chiSquare(chi_result, index, mergeIndex, a='expected_pos_cnt',
                    b='pos_cnt', c='chi_square'):
    '''
    Merge row `index` into row `mergeIndex` and recompute the chi-square value
    '''
    chi_result.loc[mergeIndex, a] = chi_result.loc[mergeIndex, a] + chi_result.loc[index, a]
    chi_result.loc[mergeIndex, b] = chi_result.loc[mergeIndex, b] + chi_result.loc[index, b]
    ## Chi-square of the merged interval, recomputed from the pooled counts
    chi_result.loc[mergeIndex, c] = (chi_result.loc[mergeIndex, b] - chi_result.loc[mergeIndex, a])**2 / chi_result.loc[mergeIndex, a]

    chi_result = chi_result.drop([index])
    ## Reset the index
    chi_result = chi_result.reset_index(drop=True)
    return chi_result
In [13]
## Main chi-square binning flow
# 1: compute the initial chi2 result
## Merge the X and Y datasets
### Equal-frequency pre-binning would speed up chi-square binning
## Note: copy the original data first
import copy
chi_train_X = copy.deepcopy(train_X)

### Equal-frequency pre-binning is skipped in this example
'''
def get_freq(train_X, col, bind):
    col_data = train_X[col]
    col_data_sort = col_data.sort_values().reset_index(drop=True)
    col_data_cnt = col_data.count()
    length = col_data_cnt / bind
    col_index = np.append(np.arange(length, col_data_cnt, length), (col_data_cnt - 1))
    col_interval = list(set(col_data_sort[col_index]))
    return col_interval
'''

'''
for col in train_X.columns:
    print("start get " + col + " equal-frequency result")
    col_interval = get_freq(train_X, col, 200)
    col_interval.sort()
    for i, val in enumerate(col_interval):
        if i == 0:
            freq_train_X.loc[train_X[col] <= val, col] = i + 1
        else:
            freq_train_X.loc[(train_X[col] <= val) & (train_X[col] > col_interval[i-1]), col] = i + 1
'''

## Chi-square bin each column, constrained by degrees of freedom
chi_result_all = dict()
for col in chi_train_X.columns:
    print("start get " + col + " chi2 result")
    chi2_result = get_chi2(train, col)
    chi2_merge = chiMerge_chisqure(chi2_result, dfree=4, cf=0.05, maxInterval=5)

    chi_result_all[col] = chi2_merge
In [14]
### WOE encoding
woe_iv = {}

### Compute a feature's WOE values and IV
def get_woevalue(train_all, col, chi2_merge):
    ## Ratio of defaulters to non-defaulters over all samples
    df_pos_cnt = train_all['Defaulter'].sum()
    df_neg_cnt = train_all['Defaulter'].count() - df_pos_cnt

    df_ratio = df_pos_cnt / (df_neg_cnt * 1.0)

    col_interval = chi2_merge[col].values
    woe_list = []
    iv_list = []
    for i, val in enumerate(col_interval):
        if i == 0:
            col_pos_cnt = train_all.loc[train_all[col] <= val, 'Defaulter'].sum()
            col_all_cnt = train_all.loc[train_all[col] <= val, 'Defaulter'].count()
            col_neg_cnt = col_all_cnt - col_pos_cnt
        else:
            col_pos_cnt = train_all.loc[(train_all[col] <= val) & (train_all[col] > col_interval[i-1]), 'Defaulter'].sum()
            col_all_cnt = train_all.loc[(train_all[col] <= val) & (train_all[col] > col_interval[i-1]), 'Defaulter'].count()
            col_neg_cnt = col_all_cnt - col_pos_cnt

        if col_neg_cnt == 0:
            col_neg_cnt = col_neg_cnt + 1

        col_ratio = col_pos_cnt / (col_neg_cnt * 1.0)

        woei = np.log(col_ratio / df_ratio)
        ivi = woei * ((col_pos_cnt / (df_pos_cnt * 1.0)) - (col_neg_cnt / (df_neg_cnt * 1.0)))
        woe_list.append(woei)
        iv_list.append(ivi)

    IV = sum(iv_list)
    return woe_list, iv_list, IV


for col in chi_train_X.columns:
    ## Bin the feature first, then compute its WOE/IV
    chi2_merge = chi_result_all[col]
    woe_list, iv_list, iv = get_woevalue(train, col, chi2_merge)
    woe_iv[col] = {'woe_list': woe_list, 'iv_list': iv_list, 'iv': iv,
                   'value_list': chi_result_all[col][col].values}

### Overall IV for the categorical variables (summed over their dummies)
woe_iv['Region'] = {'woe_list': [woe_iv[col]['woe_list'][1] for col in dummy_region_col],
                    'iv': np.sum([woe_iv[col]['iv_list'][1] for col in dummy_region_col]),
                    'value_list': [col.split('_')[1] for col in dummy_region_col]}
woe_iv['Area'] = {'woe_list': [woe_iv[col]['woe_list'][1] for col in dummy_area_col],
                  'iv': np.sum([woe_iv[col]['iv_list'][1] for col in dummy_area_col]),
                  'value_list': [col.split('_')[1] for col in dummy_area_col]}
woe_iv['Activity'] = {'woe_list': [woe_iv[col]['woe_list'][1] for col in dummy_activity_col],
                      'iv': np.sum([woe_iv[col]['iv_list'][1] for col in dummy_activity_col]),
                      'value_list': [col.split('_')[1] for col in dummy_activity_col]}
woe_iv['Properties_Status'] = {'woe_list': [woe_iv[col]['woe_list'][1] for col in dummy_status_col],
                               'iv': np.sum([woe_iv[col]['iv_list'][1] for col in dummy_status_col]),
                               'value_list': [col.split('_')[1] for col in dummy_status_col]}

### Filter features by IV
drop_numerical = list()
for col in numerical:
    iv = woe_iv[col]['iv']
    if iv < 0.02:
        drop_numerical.append(col)
        chi_train_X.drop(axis=1, columns=col, inplace=True)  ## drop features with tiny IV

drop_categorical = list()
for col in categorical:
    iv = woe_iv[col]['iv']
    if iv < 0.02:
        drop_categorical.append(col)
        chi_train_X.drop(axis=1, columns=dummy_col_dict[col], inplace=True)

drop_binary = list()
for col in binaray:
    iv = woe_iv[col]['iv']
    if iv < 0.02:
        drop_binary.append(col)
        chi_train_X.drop(axis=1, columns=col, inplace=True)

numerical = list(set(numerical).difference(drop_numerical))
categorical = list(set(categorical).difference(drop_categorical))
binaray = list(set(binaray).difference(drop_binary))

### WOE-encode the remaining features. WOE encoding only standardizes the
### scorecard format; it does not improve the model. Modeling directly on
### the bins would serve the same purpose.
woe_train_X = copy.deepcopy(chi_train_X)
for col in numerical:
    woe_list = woe_iv[col]['woe_list']
    col_interval = woe_iv[col]['value_list']
    for i, val in enumerate(col_interval):
        if i == 0:
            woe_train_X.loc[chi_train_X[col] <= val, col] = woe_list[i]
        else:
            woe_train_X.loc[(chi_train_X[col] <= val) & (chi_train_X[col] > col_interval[i-1]), col] = woe_list[i]
    woe_train_X.loc[woe_train_X[col].isnull(), col] = 0

for col in categorical:
    woe_list = woe_iv[col]['woe_list']
    col_interval = woe_iv[col]['value_list']
    for i, val in enumerate(col_interval):
        woe_train_X.loc[woe_train_X[dummy_col_dict[col][i]] == 1, col] = woe_list[i]
    woe_train_X.drop(axis=1, columns=dummy_col_dict[col], inplace=True)

for col in binaray:
    woe_list = woe_iv[col]['woe_list']
    col_interval = woe_iv[col]['value_list']
    for i, var in enumerate(col_interval):
        woe_train_X.loc[woe_train_X[col] == var, col] = woe_list[i]
In [15]
### Add an intercept column to the dataset
woe_train_X['intercept'] = [1] * woe_train_X.shape[0]

train_all = pd.concat([woe_train_X, train_Y], axis=1)

### Split the data for later model validation
from sklearn.model_selection import train_test_split
### 70/30 train/test split
train_all_train, train_all_test = train_test_split(train_all, test_size=0.3)
In [16]
!pip install statsmodels
Looking in indexes: https://mirror.baidu.com/pypi/simple/
Collecting statsmodels
  Downloading https://mirror.baidu.com/pypi/packages/da/69/8eef30a6237c54f3c0b524140e2975f4b1eea3489b45eb3339574fc8acee/statsmodels-0.12.2-cp37-cp37m-manylinux1_x86_64.whl (9.5MB)
     |████████████████████████████████| 9.5MB 13.3MB/s eta 0:00:01
Requirement already satisfied: scipy>=1.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from statsmodels) (1.6.3)
Requirement already satisfied: numpy>=1.15 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from statsmodels) (1.20.3)
Collecting patsy>=0.5 (from statsmodels)
  Downloading https://mirror.baidu.com/pypi/packages/ea/0c/5f61f1a3d4385d6bf83b83ea495068857ff8dfb89e74824c6e9eb63286d8/patsy-0.5.1-py2.py3-none-any.whl (231kB)
     |████████████████████████████████| 235kB 22.6MB/s eta 0:00:01
Requirement already satisfied: pandas>=0.21 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from statsmodels) (1.1.5)
Requirement already satisfied: six in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from patsy>=0.5->statsmodels) (1.15.0)
Requirement already satisfied: python-dateutil>=2.7.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pandas>=0.21->statsmodels) (2.8.0)
Requirement already satisfied: pytz>=2017.2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pandas>=0.21->statsmodels) (2019.3)
Installing collected packages: patsy, statsmodels
Successfully installed patsy-0.5.1 statsmodels-0.12.2
In [21]
import statsmodels
import statsmodels.api as smf
import pandas as pd

## Forward stepwise selection by BIC
def forward_selected(train_data, target):

    remaining = set(train_data.columns)
    remaining.remove(target)
    remaining.remove('intercept')

    selected = ['intercept']
    current_score, best_new_score = float("inf"), float("inf")

    while remaining and current_score == best_new_score:
        scores_candidates = []
        for candidate in remaining:
            # formula = "{} ~ {} + 1".format(target, ' + '.join(selected + [candidate]))
            score = smf.Logit(train_data[target], train_data[selected + [candidate]]).fit().bic
            # score = smf.logit(formula, train_data).fit().bic

            scores_candidates.append((score, candidate))

        scores_candidates.sort(reverse=True)
        print(scores_candidates)

        # pop() takes the candidate with the smallest BIC
        best_new_score, best_candidate = scores_candidates.pop()
        if current_score > best_new_score:
            remaining.remove(best_candidate)
            selected.append(best_candidate)
            current_score = best_new_score

    # formula = "{} ~ {} + 1".format(target, ' + '.join(selected))
    model = smf.Logit(train_data[target], train_data[selected]).fit()
    return model


model = forward_selected(train_all_train, 'Defaulter')
print(model.params)
print(model.bic)
In [22]
##### Wald chi-square test for each variable in the model
for col in model.params.index:
    result = model.wald_test(col)
    print(str(col) + " wald test: " + str(result.pvalue))
intercept wald test: 0.0
Region wald test: 6.423056389802783e-60
Amount wald test: 3.9363274620978153e-94
Max_Arrears wald test: 1.1398112750859715e-98
Term wald test: 9.821155677885337e-76
Properties_Total wald test: 1.8100481440703784e-83
Age wald test: 9.93192337340017e-57
Activity wald test: 2.8623600644364966e-46
Historic_Loans wald test: 2.5657230463575935e-40
Area wald test: 2.95622379290248e-10
Collateral_valuation wald test: 3.616031275444134e-10
Collateral wald test: 0.0008702893883406537
In [24]
### Check VIF values (multicollinearity)
from statsmodels.stats.outliers_influence import variance_inflation_factor


train_X_M = np.matrix(train_all_train[list(model.params.index)])

VIF_list = [variance_inflation_factor(train_X_M, i) for i in range(train_X_M.shape[1])]
print(VIF_list)
[1.2123722031437774, 1.5506345946415625, 1.331025384078001, 1.0170363634810329, 1.2318819768891884, 1.0460820427024398, 1.0247822491365703, 1.3242169042290288, 1.1028027520756603, 1.185377289699483, 1.450797592652846, 1.3110262542160958]
In [25]
### Retrain the model ##
model = smf.Logit(train_all_train['Defaulter'], train_all_train[list(model.params.index)]).fit()
Optimization terminated successfully.
         Current function value: 0.365929
         Iterations 7
In [26]
from sklearn.metrics import auc, roc_curve, roc_auc_score
from sklearn.metrics import precision_score, recall_score, accuracy_score

## Predict with the fitted model
## First separate the X and Y parts of each split
train_all_train_X = train_all_train[list(model.params.index)]
train_all_train_Y = train_all_train['Defaulter']

train_all_test_X = train_all_test[list(model.params.index)]
train_all_test_Y = train_all_test['Defaulter']

y_train_proba = model.predict(train_all_train_X)
## Predict on the test set
y_test_proba = model.predict(train_all_test_X)

### AUC on the training set
roc_auc_score(train_all_train_Y, y_train_proba)
### AUC on the test set
roc_auc_score(train_all_test_Y, y_test_proba)

import matplotlib.pyplot as plt
### Plot the ROC curve and mark the KS point
fpr, tpr, thresholds = roc_curve(train_all_test_Y, y_test_proba, pos_label=1)
auc_score = auc(fpr, tpr)
w = tpr - fpr
ks_score = w.max()
ks_x = fpr[w.argmax()]
ks_y = tpr[w.argmax()]
fig, ax = plt.subplots()
ax.plot(fpr, tpr, label='AUC=%.5f' % auc_score)
ax.set_title('Receiver Operating Characteristic')
ax.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6))
ax.plot([ks_x, ks_x], [ks_x, ks_y], '--', color='red')
ax.text(ks_x, (ks_x + ks_y) / 2, '  KS=%.5f' % ks_score)
ax.legend()
fig.show()
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/matplotlib/figure.py:457: UserWarning: matplotlib is currently using a non-GUI backend, so cannot show the figure
  "matplotlib is currently using a non-GUI backend, "
<Figure size 432x288 with 1 Axes>
In [27]
### Train other models and compare performance
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import ExtraTreesClassifier


x_col = list(set(train_all_train.columns).difference(set(['Defaulter'])))
train_all_train_X = train_all_train[x_col]
train_all_train_Y = train_all_train['Defaulter']

train_all_test_X = train_all_test[x_col]
train_all_test_Y = train_all_test['Defaulter']

## Build a classifier (any of the imports above could be swapped in)
model = GradientBoostingClassifier()

model.fit(train_all_train_X, train_all_train_Y)

## Predict on the training set with the fitted model
y_train_proba = model.predict_proba(train_all_train_X)
y_train_label = model.predict(train_all_train_X)
## Predict on the test set
y_test_proba = model.predict_proba(train_all_test_X)
y_test_label = model.predict(train_all_test_X)

print('Train accuracy: {:.2%}'.format(accuracy_score(train_all_train_Y, y_train_label)))
print('Test accuracy: {:.2%}'.format(accuracy_score(train_all_test_Y, y_test_label)))
print('Train precision: {:.2%}'.format(precision_score(train_all_train_Y, y_train_label)))
print('Test precision: {:.2%}'.format(precision_score(train_all_test_Y, y_test_label)))
print('Train recall: {:.2%}'.format(recall_score(train_all_train_Y, y_train_label)))
print('Test recall: {:.2%}'.format(recall_score(train_all_test_Y, y_test_label)))
print('Train AUC: {:.2%}'.format(roc_auc_score(train_all_train_Y, y_train_proba[:, 1])))
print('Test AUC: {:.2%}'.format(roc_auc_score(train_all_test_Y, y_test_proba[:, 1])))
Train accuracy: 85.76%
Test accuracy: 85.29%
Train precision: 73.58%
Test precision: 72.13%
Train recall: 19.93%
Test recall: 19.33%
Train AUC: 81.38%
Test AUC: 79.88%
