Simple interpretation of ROC evaluation curves.

I mainly want to summarize what the commonly used evaluation curves do.

Code is included, along with what may be a fair amount of rambling.

It should be enough for homework.

# AUC and the ROC curve

import sklearn.metrics as metrics
# assuming a linear model is used here

'''
remember to replace model.predict with your own model;
for a smooth ROC curve, pass a continuous score such as
model.predict_proba(test_x)[:, 1] instead of hard 0/1 labels
'''

fpr1, tpr1, th1 = metrics.roc_curve(test_y, model.predict(test_x))


'''
the AUC value
'''

auc = metrics.auc(fpr1,tpr1)

'''
plot the ROC curve
'''

import matplotlib.pyplot as plt
plt.figure(figsize=[8, 8])
plt.plot(fpr1, tpr1, color='b')
plt.plot([0, 1], [0, 1], color='r', alpha=0.5, linestyle='--')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.show()
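For reference, here is the same pipeline as one self-contained, runnable sketch. The synthetic dataset from make_classification, the LogisticRegression model, and the use of predict_proba are illustrative assumptions on my part, not something prescribed above; swap in your own data and model.

```python
import matplotlib.pyplot as plt
import sklearn.metrics as metrics
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# toy binary-classification data (stand-in for your own train/test split)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
train_x, test_x, train_y, test_y = train_test_split(X, y, test_size=0.3, random_state=0)

# assumed model: plain logistic regression
model = LogisticRegression(max_iter=1000).fit(train_x, train_y)

# probability scores give the curve many thresholds to sweep over
scores = model.predict_proba(test_x)[:, 1]
fpr1, tpr1, th1 = metrics.roc_curve(test_y, scores)
auc = metrics.auc(fpr1, tpr1)

plt.figure(figsize=[8, 8])
plt.plot(fpr1, tpr1, color='b', label='ROC (AUC = %.3f)' % auc)
plt.plot([0, 1], [0, 1], color='r', alpha=0.5, linestyle='--')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend()
plt.show()
```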

The resulting plot looks something like this; it was drawn from a randomly chosen data set.

AUC is the area between the blue curve and the x-axis. For any classifier that does at least as well as random guessing it falls between 0.5 and 1, and the closer it is to 1, the better the classifier performs.

The closer the AUC is to 0.5, the closer the model's classification performance is to random guessing.
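To see those two numbers in code: scikit-learn's roc_auc_score computes the same area directly, and a constant score (one that ignores the input) lands at 0.5. The tiny label/score arrays below are made-up values, just for illustration.

```python
import numpy as np
import sklearn.metrics as metrics

# made-up labels and scores, purely for illustration
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])

fpr, tpr, _ = metrics.roc_curve(y_true, y_score)
print(metrics.auc(fpr, tpr))                   # area under the ROC curve
print(metrics.roc_auc_score(y_true, y_score))  # same value, computed directly

# a constant score carries no information, so its AUC is 0.5 (random guessing)
print(metrics.roc_auc_score(y_true, np.full_like(y_score, 0.5)))
```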

What is the ROC? The ROC is the blue curve itself. The horizontal axis is the FPR (false positive rate: the fraction of actual negatives, the 0s, that get predicted as 1), and the vertical axis is the TPR (true positive rate: the fraction of actual positives, the 1s, that get predicted as 1). The closer the ROC curve is to the upper-left corner, the better the classifier performs.
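To make the two axes concrete, here is a small sketch that computes FPR and TPR by hand from the confusion matrix at a single threshold; roc_curve simply repeats this at every threshold. The labels, scores, and the 0.5 cutoff are made-up values for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# made-up labels and scores, with an arbitrary threshold of 0.5
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)  # actual 0s that were called 1
tpr = tp / (tp + fn)  # actual 1s that were called 1
print(fpr, tpr)       # one (FPR, TPR) point on the ROC curve
```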
