Thoroughly understand credit scorecard models & model validation in 3 minutes

The credit scorecard is a mature prediction method abroad and has also seen fairly wide use in credit risk assessment and financial risk control. Its principle is to discretize the model variables through WOE encoding and then fit a logistic regression, a generalized linear model for binary target variables.

This article focuses on the principles of WOE and IV for model variables. For convenience, a target variable of 1 is recorded as a defaulting user and a target variable of 0 as a normal user. WOE (Weight of Evidence) is, in essence, the influence that a particular value of an independent variable has on the default ratio. How to understand this? I will illustrate it with a table below.

The WOE formula is as follows:

WOE_i = ln( (#bad_i / #bad_total) / (#good_i / #good_total) )

| Age | #bad | #good | WOE |
|----------|------|-------|-----------------------------------------------------|
| 0-10 | 50 | 200 | ln((50/100)/(200/1000)) = ln((50/200)/(100/1000)) |
| 10-18 | 20 | 200 | ln((20/100)/(200/1000)) = ln((20/200)/(100/1000)) |
| 18-35 | 5 | 200 | ln((5/100)/(200/1000)) = ln((5/200)/(100/1000)) |
| 35-50 | 15 | 200 | ln((15/100)/(200/1000)) = ln((15/200)/(100/1000)) |
| above 50 | 10 | 200 | ln((10/100)/(200/1000)) = ln((10/200)/(100/1000)) |
| Total | 100 | 1000 | |

 

In the table, age is one of the independent variables. Since age is a continuous variable, it must be discretized; assume it is split into five groups (how to choose the grouping will be explained in a later article). #bad and #good give the counts of defaulting and normal users across the five groups, and the last column is the computed WOE. As the rearranged form of the formula shows, WOE reflects, for each group, the difference between the ratio of defaulting to normal users within the group and the same ratio in the population as a whole. Intuitively, then, WOE captures the influence of the variable's value on the target variable (the probability of default).
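As a minimal sketch (plain Python, no external libraries), the WOE column of the table above can be reproduced like this:

```python
import math

# (#bad, #good) per age group, taken from the table above
bins = {
    "0-10":     (50, 200),
    "10-18":    (20, 200),
    "18-35":    (5, 200),
    "35-50":    (15, 200),
    "above 50": (10, 200),
}
total_bad = sum(b for b, _ in bins.values())    # 100
total_good = sum(g for _, g in bins.values())   # 1000

# WOE_i = ln((bad_i / total_bad) / (good_i / total_good))
woe = {grp: math.log((b / total_bad) / (g / total_good))
       for grp, (b, g) in bins.items()}

for grp, w in woe.items():
    print(f"{grp:>8}  WOE = {w:+.3f}")
```

The 0-10 group gets WOE = ln(2.5) ≈ +0.916 (defaults over-represented there), while 10-18 gets exactly 0, because its bad/good mix matches the overall population.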

Moreover, since the form of WOE is so similar to the logistic transformation of the target variable in logistic regression (logit(p) = ln(p/(1-p))), the variable's WOE values can be used in place of its original values.

Having covered WOE, let's now look at IV:

The IV formula is as follows:

IV_i = (p1_i - p0_i) * WOE_i,    IV = Σ_i IV_i

where p1_i = #bad_i / #bad_total and p0_i = #good_i / #good_total are group i's shares of all defaulting and all normal users.

In fact, IV measures the amount of information carried by a variable. From the formula, it amounts to a weighted sum of the variable's WOE values, and its magnitude determines the variable's degree of influence on the target variable. From another angle, the IV formula is very similar to the formula for entropy.

In fact, to understand the significance of WOE, one needs to consider how a scoring model's performance is evaluated, because everything we do to the model's variables during modeling is, in essence, aimed at improving the model's performance.

In some earlier study notes I summarized evaluation methods for binary classification models, in particular the ROC curve. To describe the significance of WOE, we really do need to start from the ROC. As usual, let's first draw a table.

 

The data come from the famous German credit dataset; one independent variable was taken to illustrate the point. The first column is the variable's value; N is the number of samples for each value; n1 and n0 are the numbers of defaulting and normal samples; p1 and p0 are the shares of the defaulting and normal totals that each value accounts for; cump1 and cump0 are the cumulative sums of p1 and p0; woe is the WOE for each value (ln(p1/p0)); and iv is woe*(p1-p0).

Summing the iv column (which can be viewed as a weighted sum of the WOE values) yields IV (information value), one of the indicators of a variable's influence on the target variable (alongside the likes of Gini and entropy). Here it is 0.666, which frankly looks a bit too large.
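As a sketch of the same computation, here is IV on the toy age table from earlier (note the 0.666 quoted above comes from the German-credit variable, not from this table):

```python
import math

# (#bad, #good) per group from the age table earlier in the article
bins = [(50, 200), (20, 200), (5, 200), (15, 200), (10, 200)]
tb = sum(b for b, _ in bins)   # all defaulting users
tg = sum(g for _, g in bins)   # all normal users

iv = 0.0
for b, g in bins:
    p1, p0 = b / tb, g / tg    # group's share of all bad / all good
    if b and g:                # WOE (hence IV) is undefined for empty cells
        iv += (p1 - p0) * math.log(p1 / p0)

print(f"IV = {iv:.3f}")        # about 0.567 for this toy table
```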

The process above studies one independent variable's effect on the target variable; in fact it can also be seen as a single-variable scoring model. Going further, the variable's values can be used directly as a kind of credit score, provided the variable is assumed to be ordinal, i.e. the target variable is predicted directly from the ordering of this one variable.

It is from this perspective that we can unify the two processes of "evaluating model performance" and "variable screening and encoding". Screening the right variables and encoding them properly is, in fact, picking and constructing variables with high predictive power for the target variable; equivalently, the single-variable scoring models built from these variables perform quite well.

Take the table above as an example: cump1 and cump0 are, in a certain sense, the TPR and FPR we use when drawing an ROC curve. For example, with the score ordering A12, A11, A14, A13 and A14 as the cutoff, TPR = cumsum(p1)[3]/sum(p1) and FPR = cumsum(p0)[3]/sum(p0), which are exactly cump1[3] and cump0[3]. So we can draw the corresponding ROC curve.
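The mapping from cumulative shares to ROC points can be sketched as follows; the per-value counts here are invented for illustration (the real German-credit counts are in the omitted table):

```python
# Values already sorted by the scoring rule; counts are hypothetical.
vals = ["A12", "A11", "A14", "A13"]
n1 = [30, 20, 10, 40]     # defaulting samples per value (made up)
n0 = [10, 30, 40, 120]    # normal samples per value (made up)

s1, s0 = sum(n1), sum(n0)
tpr = fpr = 0.0
points = [(0.0, 0.0)]     # the ROC curve starts at the origin
for b, g in zip(n1, n0):
    tpr += b / s1         # cump1: cumulative share of all bad
    fpr += g / s0         # cump0: cumulative share of all good
    points.append((fpr, tpr))

# Each (cump0, cump1) pair is one point on the ROC curve;
# e.g. cutting off after A14 gives the third point after the origin.
print(points)
```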

You can see that this ROC is not very pretty. As studied before, the ROC curve has a quantifiable indicator, AUC, meaning the area under the curve. This area is in effect a measure of the distance between TPR and FPR.

From the description above, TPR and FPR can be viewed from another angle: as the conditional distributions of the independent variable (i.e. the score under some rating rule) given the 0/1 target variable. For example TPR, i.e. cump1, is the cumulative distribution of the variable (the score) when the target variable equals 1. When these two conditional distributions are far apart, the variable discriminates the target variable well.

If the conditional distribution functions can describe this discriminative ability, can the conditional density functions do the same? This leads to the concepts of IV and WOE. In fact, we can likewise measure the distance between the two conditional density functions, and that is IV. This can be seen from the IV formula, IV = sum((p1-p0)*log(p1/p0)), where p1 and p0 are the corresponding density values. This definition of IV evolved from relative entropy, and the shadow of x*ln(x) is still visible in it.

At this point we can conclude: a scoring model's performance can be considered from the two angles of "distance between conditional distribution functions" and "distance between conditional density functions", yielding the AUC and IV indicators respectively. Both can of course also be used as variable-screening indicators, IV seemingly being the more common choice. And WOE is the main ingredient of IV.

So why exactly use WOE to encode the independent variables? The two main considerations are: improving the model's predictive performance, and improving the model's interpretability.

First, for an existing scoring rule, e.g. the A12, A11, A14, A13 above, applying various functional transformations yields different ROC results. But if the transformation is monotone, the ROC curve in fact does not change at all. Therefore, to improve the ROC, one must look to non-monotone transformations of the scoring rule. The fabled Neyman-Pearson lemma proves that the transformation making the ROC optimal is computing the WOE of the existing score, which appears to be called the "conditional likelihood ratio" transformation.
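The invariance claim is easy to check numerically. A small sketch with synthetic scores (all numbers invented), using the rank-statistic definition of AUC:

```python
import math
import random

def auc(scores, labels):
    """P(random positive outranks random negative), ties counted as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
labels = [random.random() < 0.3 for _ in range(200)]   # ~30% "defaults"
scores = [random.gauss(1.0 if y else 0.0, 1.0) for y in labels]

a1 = auc(scores, labels)
a2 = auc([math.exp(2 * s) for s in scores], labels)    # strictly increasing map
# A monotone transform preserves the ranking, hence the ROC and AUC.
print(a1, a2)
```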

Using the example above, we sort the scoring rule (i.e. the values in the first column) by the computed WOE values to obtain a new scoring rule.

Here the values were sorted in descending order of WOE (since a larger WOE means a higher probability of default); as usual, the ROC curve can then be drawn.

It can be seen that after the WOE transformation the model performs much better. In fact, WOE can also be replaced by the default probability; there is no essential difference between the two. A major purpose of encoding variables with WOE is precisely to realize this "conditional likelihood ratio" transformation and maximize discriminative power.

Meanwhile, WOE has a certain linear relationship with the default probability, so WOE encoding can uncover nonlinear relationships between an independent variable and the target variable (e.g. U-shaped or inverted-U relationships). On this basis, we can expect all fitted coefficients of the variables to be positive; if a negative one appears, consider whether multicollinearity among the variables is at play.

Moreover, after WOE encoding the variables acquire a kind of standardized property: the different values within one variable can be compared directly (by comparing their WOEs), and values across different variables can likewise be compared directly through WOE. Going further, one can study the variation (volatility) of the WOE values within each variable and, combined with the fitted coefficients, construct each variable's contribution ratio and relative importance.

Generally, the larger the coefficient and the larger the variance of the WOE values, the larger the variable's contribution (akin to a variance contribution ratio), which is also easy to understand intuitively.

To summarize: when building a credit scoring model, the handling of the independent variables (including encoding and screening) is largely based on evaluating single-variable model performance. In that evaluation, ROC and IV examine a variable's influence on the target from different angles; based on that examination, we encode categorical variables with WOE values, which makes the direction and strength of each variable's effect on the target easier to understand while also improving predictive performance.

Summed up this way, credit-scoring modeling appears to be more a process of analysis than of model fitting. Precisely for this reason, we do not dwell much on topics like parameter estimation, but concentrate on studying the relationship between each variable and the target, screening and encoding the variables on that basis, finally re-evaluating the model's predictive performance and assessing the usefulness of each variable.

With the WOE and IV indicators in hand, we can proceed to the next step: model validation.

Model validation

When collecting data, all the prepared data are split into a modeling sample, used to build the model, and a hold-out sample, used for validation; the hold-out sample serves to verify the model's overall predictive power and stability. Validation metrics for an application scoring model include the K-S statistic, ROC, and others.

A binary classifier is usually evaluated with the ROC curve (Receiver Operating Characteristic; each point on the curve reflects sensitivity to the same signal stimulus) and the AUC value (Area Under Curve, the area under the ROC curve, between 0.5 and 1; as a single number, AUC gives an intuitive measure of classifier quality: the larger, the better).

Many binary classifiers produce a probability prediction rather than just a 0/1 prediction. We can use a cutoff (e.g. 0.5) to decide which predictions count as 1 and which as 0. With binary predictions in hand, a confusion matrix can be built to evaluate the classifier: all the training data fall into this matrix, and the numbers on the diagonal are the correct predictions, i.e. true positives + true negatives. From it we can compute TPR (true positive rate, or sensitivity) and TNR (true negative rate, or specificity). We would like both to be as large as possible, but unfortunately they trade off against each other. Besides the classifier's training parameters, the choice of cutoff also strongly affects TPR and TNR; sometimes the cutoff can be chosen according to the specific problem and needs.


Figure 7. Definitions of true/false positives and negatives
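A minimal confusion-matrix sketch at an illustrative cutoff of 0.5 (the probabilities and labels below are invented):

```python
# Probability predictions and true labels (hypothetical data)
probs  = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2, 0.6, 0.1]
labels = [1,   1,   1,   0,   1,   0,    0,   0,   1,   0]

cutoff = 0.5
pred = [int(p >= cutoff) for p in probs]

tp = sum(p == 1 and y == 1 for p, y in zip(pred, labels))
tn = sum(p == 0 and y == 0 for p, y in zip(pred, labels))
fp = sum(p == 1 and y == 0 for p, y in zip(pred, labels))
fn = sum(p == 0 and y == 1 for p, y in zip(pred, labels))

tpr = tp / (tp + fn)   # sensitivity: share of positives caught
tnr = tn / (tn + fp)   # specificity: share of negatives caught
print(tp, tn, fp, fn, tpr, tnr)
```

Raising the cutoff trades TPR for TNR and vice versa, which is exactly the trade-off described above.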

If we pick a series of cutoffs, we obtain a series of TPR and TNR values; connecting the corresponding points forms the ROC curve. The ROC curve lets us see a classifier's performance clearly and makes it easy to compare classifiers. By convention, the curve is drawn with 1-TNR, i.e. FPR (false positive rate), on the horizontal axis and TPR on the vertical axis.

AUC (Area Under Curve) is defined as the area under the ROC curve; obviously this area is no greater than 1. Since the ROC curve generally lies above the line y = x, AUC ranges between 0.5 and 1. AUC is used as an evaluation criterion because the ROC curve itself often does not make clear which classifier is better, whereas as a single number it does: the classifier with the larger AUC performs better.
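With the (FPR, TPR) points in hand, AUC is just the trapezoidal area under them. A sketch on invented points, assuming they are already sorted by FPR:

```python
# Hypothetical ROC points (FPR, TPR), sorted by FPR
roc = [(0.0, 0.0), (0.1, 0.5), (0.3, 0.8), (0.6, 0.95), (1.0, 1.0)]

# Trapezoid rule: sum of segment width times average height
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(roc, roc[1:]))
print(f"AUC = {auc:.4f}")
```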

Practical meaning of the ROC trade-off curve: it measures the trade-off between giving up good accounts and avoiding bad accounts. The ideal case is rejecting 100% of bad accounts while giving up 0% of good accounts, i.e. the model separates good and bad accounts perfectly.


Figure 8. Good/bad customer ratio on the ROC curve

The K-S statistic, named after two mathematicians (Kolmogorov and Smirnov), is similar to the trade-off curve: it measures the maximum gap between the cumulative distributions of good and bad accounts. The farther apart the good and bad accounts, the higher the K-S statistic and the stronger the model's discriminative power.


Figure 9. K-S statistic chart: another way to separate good and bad customers
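The K-S statistic is the largest vertical gap between the two cumulative curves; a sketch on hypothetical per-bin account counts:

```python
# Accounts per score bin, ordered from riskiest to safest (made-up counts)
bad  = [50, 20, 5, 15, 10]
good = [40, 60, 180, 320, 400]

tb, tg = sum(bad), sum(good)
cum_b = cum_g = 0.0
ks = 0.0
for b, g in zip(bad, good):
    cum_b += b / tb                  # cumulative share of bad accounts
    cum_g += g / tg                  # cumulative share of good accounts
    ks = max(ks, abs(cum_b - cum_g))  # widest gap between the two curves
print(f"KS = {ks:.3f}")
```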

Once these metrics are satisfactory, the development of the scorecard model is essentially complete.

Summary and outlook:

From the discussion above, today's scorecards are not especially complex, and many financial and banking institutions already have mature scorecard models of their own. But with security as the foremost consideration, the future transition is toward dual-engine data analysis and business expansion on peripheral data platforms, e.g. real-time BI, or, like Ant Financial, more flexible credit-limit metrics and business models. That many models work at basic data volumes does not mean they will perform well in large-scale batch runs on future cloud data platforms; great challenges and opportunities remain here.

 

Reprinted from: https://www.cnblogs.com/nxld/p/6365460.html
