Model Evaluation Metrics: Precision, Recall, F1 Score


First, consider the four possible outcomes when comparing a prediction against the ground truth:

True Positive (TP): predicted positive, actually positive

False Positive (FP): predicted positive, actually negative

False Negative (FN): predicted negative, actually positive

True Negative (TN): predicted negative, actually negative
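
To make these four outcomes concrete, here is a minimal sketch that counts each case for a binary problem; the y_true/y_pred arrays are made-up illustration data, not from this post:

# Minimal sketch: counting TP/FP/FN/TN for a binary problem.
# y_true / y_pred below are illustrative example labels.
y_true = [1, 0, 1, 1, 0, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1, 1]

TP = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)  # predicted positive, actually positive
FP = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)  # predicted positive, actually negative
FN = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 1)  # predicted negative, actually positive
TN = sum(1 for t, p in zip(y_true, y_pred) if p == 0 and t == 0)  # predicted negative, actually negative
print(TP, FP, FN, TN)  # 3 2 1 1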


Precision is defined, for a given class, as the fraction of samples predicted as that class that actually belong to it: P = TP / (TP + FP).

Recall is defined, for a given class, as the fraction of samples actually belonging to that class that are correctly predicted: R = TP / (TP + FN).

The F1 score is the harmonic mean of precision and recall: F1 = 2PR / (P + R) = 2*TP / (2*TP + FP + FN).
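
As a worked example, take the counts from the binary sketch above (TP = 3, FP = 2, FN = 1): P = 3 / 5 = 0.6, R = 3 / 4 = 0.75, and F1 = 2 * 0.6 * 0.75 / (0.6 + 0.75) = 0.9 / 1.35 ≈ 0.667, which matches the direct form 2*3 / (2*3 + 2 + 1) = 6 / 9 ≈ 0.667, confirming that the two expressions agree.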

All of the metrics above can be read directly off the confusion matrix. In the convention used here (and by sklearn), the rows of the confusion matrix correspond to the true labels and the columns to the predicted labels.
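
As a sanity check on that convention, the matrix can also be built by hand with plain NumPy; this minimal sketch reuses the 3-class labels from the script below. With rows as true labels and columns as predicted labels, the diagonal entry cm[i, i] is the TP count of class i, the i-th column sum is TP + FP, and the i-th row sum is TP + FN:

import numpy as np

# Minimal sketch: building the confusion matrix by hand
# (rows = true labels, columns = predicted labels).
y_true = [1, 0, 2, 0, 2, 1, 1, 2, 0]
y_pred = [1, 2, 0, 0, 2, 1, 0, 2, 0]

cm = np.zeros((3, 3), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1  # one sample with true label t predicted as p
print(cm)
# [[2 0 1]
#  [1 2 0]
#  [1 0 2]]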


The following Python code computes these metrics both by hand from the confusion matrix and via sklearn's classification_report:

# -*- coding: utf-8 -*-

import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report

# true labels and predicted labels for a 3-class problem
num_classes = 3
y_true = [1, 0, 2, 0, 2, 1, 1, 2, 0]
y_pred = [1, 2, 0, 0, 2, 1, 0, 2, 0]
precision = np.zeros(num_classes)
recall = np.zeros(num_classes)
f1 = np.zeros(num_classes)

# confusion matrix: rows are true labels, columns are predicted labels
cm = confusion_matrix(y_true=y_true, y_pred=y_pred)
print(cm)

# precision: TP / (TP + FP) -- the diagonal entry divided by its column sum
for i in range(num_classes):
    precision[i] = cm[i, i] / np.sum(cm[:, i])
    print(precision[i])

# recall: TP / (TP + FN) -- the diagonal entry divided by its row sum
for i in range(num_classes):
    recall[i] = cm[i, i] / np.sum(cm[i, :])
    print(recall[i])

# F1 score: 2*TP / (2*TP + FP + FN) -- row sum plus column sum in the denominator
for i in range(num_classes):
    f1[i] = 2 * cm[i, i] / (np.sum(cm[i, :]) + np.sum(cm[:, i]))
    print(f1[i])

# per-class precision, recall, F1, and support from sklearn
cr = classification_report(y_true=y_true, y_pred=y_pred)
print(cr)

Running the script prints the confusion matrix

[[2 0 1]
 [1 2 0]
 [1 0 2]]

followed by the per-class metrics: precision 0.5, 1.0, and 0.6667; recall 0.6667 for all three classes; and F1 scores 0.5714, 0.8, and 0.6667 for classes 0, 1, and 2. The classification report then summarizes the same per-class precision, recall, and F1 values in tabular form, together with the support of each class.
