Study Notes on Cross-validation

1. The train_test_split function

The train_test_split function quickly splits a dataset into a training set and a test set. Using the iris dataset as an example, with an SVM classifier for prediction:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import datasets
from sklearn import svm
iris = datasets.load_iris()
x = iris.data
y = iris.target

Now use the train_test_split function to split the data into a training set and a test set:

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.4, random_state=0)
X_train.shape, y_train.shape
out:((90, 4), (90,))
X_test.shape, y_test.shape
out:((60, 4), (60,))
clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
clf.score(X_test, y_test)
out:0.9666666666666667

As score shows, the trained classifier reaches an accuracy of about 0.967 on the test set.
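
train_test_split also accepts a stratify argument that keeps the class proportions equal across the two splits. A minimal sketch reusing x and y from above (the bincount check is only for illustration):

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.4, random_state=0, stratify=y)
np.bincount(y_train)
out:array([30, 30, 30])

Since iris has 50 samples per class, a stratified 60/40 split leaves exactly 30 samples of each class in the training set.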

2. The cross_val_score function

The following demonstrates 5-fold cross-validation with cross_val_score:

from sklearn.model_selection import cross_val_score
clf = svm.SVC(kernel='linear', C=1)
scores = cross_val_score(clf, iris.data, iris.target, cv=5)
scores
out:array([0.96..., 1.  ..., 0.96..., 0.96..., 1.        ])
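
The five scores can be summarized by their mean and spread, as the scikit-learn documentation does:

print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
out:Accuracy: 0.98 (+/- 0.03)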

3. The KFold function

from sklearn.model_selection import KFold
kf = KFold(n_splits=5)
for train, test in kf.split(x):
    print(test)  # indices of the validation fold in each split

This splits x into 5 equal folds; each iteration uses one fold as the validation set and the remaining four as the training set. With the default shuffle=False, the folds are taken in sample order, and the validation fold advances through them in turn: the first split uses fold 1 for validation and folds 2-5 for training, the second uses fold 2 for validation and folds 1, 3, 4, 5 for training, and so on.
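
The index arrays yielded by kf.split can drive a manual cross-validation loop. A minimal sketch using the iris arrays and the SVC settings from section 1 (equivalent in spirit to cross_val_score, just written out by hand):

scores = []
for train, test in kf.split(x):
    model = svm.SVC(kernel='linear', C=1).fit(x[train], y[train])
    scores.append(model.score(x[test], y[test]))
# scores now holds one accuracy value per fold

Note that the iris samples are stored sorted by class, so these unshuffled folds are badly imbalanced; section 7's StratifiedKFold is the usual fix.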

4. The LeaveOneOut function (leave-one-out validation)

>>> from sklearn.model_selection import LeaveOneOut
>>> X = [1, 2, 3, 4]
>>> loo = LeaveOneOut()
>>> for train, test in loo.split(X):
...     print("%s %s" % (train, test))
[1 2 3] [0]
[0 2 3] [1]
[0 1 3] [2]
[0 1 2] [3]

Each split holds out exactly one sample as the validation set, proceeding through the samples in index order.
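
Because every sample is held out exactly once, the number of splits equals the number of samples, which makes leave-one-out expensive on large datasets. get_n_splits confirms the count:

>>> loo.get_n_splits(X)
4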

5. The LeavePOut function (leave-P-out validation)

As the name suggests, P samples are held out as the validation set, and the function enumerates every possible combination of P samples exactly once.

>>> from sklearn.model_selection import LeavePOut
>>> X = np.ones(4)
>>> lpo = LeavePOut(p=2)
>>> for train, test in lpo.split(X):
...     print("%s %s" % (train, test))
[2 3] [0 1]
[1 3] [0 2]
[1 2] [0 3]
[0 3] [1 2]
[0 2] [1 3]
[0 1] [2 3]

Here X has 4 samples, so the validation indices range over [0, 1, 2, 3]; with P=2, every one of the C(4,2)=6 two-index combinations appears exactly once as the validation set, in order.
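
The number of splits is the binomial coefficient C(n, p), so leave-P-out becomes expensive very quickly as n grows. A quick check (math.comb requires Python 3.8+):

>>> from math import comb
>>> lpo.get_n_splits(X)
6
>>> comb(10, 2)
45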

6. The ShuffleSplit function

ShuffleSplit first shuffles the sample indices and then splits them into a training set and a validation set. Because each of the n_splits iterations reshuffles independently, a sample may land in the validation set of several splits:

>>> from sklearn.model_selection import ShuffleSplit
>>> X = np.arange(10)
>>> ss = ShuffleSplit(n_splits=5, test_size=0.25,
...     random_state=0)
>>> for train_index, test_index in ss.split(X):
...     print("%s %s" % (train_index, test_index))
[9 1 6 7 3 0 5] [2 8 4]
[2 9 8 0 6 7 4] [3 5 1]
[4 5 1 0 6 9 7] [2 3 8]
[2 7 5 8 0 3 4] [6 1 9]
[4 1 0 6 8 9 3] [5 2 7]
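
A splitter object such as ss can also be passed directly as the cv argument of cross_val_score, replacing the K consecutive folds of section 2 with random splits. A minimal sketch reusing clf from section 2:

>>> scores = cross_val_score(clf, iris.data, iris.target, cv=ss)
>>> scores.shape
(5,)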

7. The StratifiedKFold function

In classification problems where the classes are imbalanced, splitting with plain KFold can easily produce folds whose class proportions differ sharply from the full dataset. StratifiedKFold solves this: it splits the data while preserving the ratio of positive to negative samples in each fold.

>>> from sklearn.model_selection import StratifiedKFold
>>> X = np.ones(10)
>>> y = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
>>> skf = StratifiedKFold(n_splits=3)
>>> for train, test in skf.split(X, y):
...     print("%s %s" % (train, test))
[2 3 6 7 8 9] [0 1 4 5]
[0 1 3 4 5 8 9] [2 6 7]
[0 1 2 4 5 6 7] [3 8 9]

As you can see, the positive class dominates in this example; because both X and y are passed to split, StratifiedKFold can take the class imbalance into account, and each validation fold keeps roughly the same positive/negative ratio as the whole dataset.
As with ShuffleSplit, there is also a stratified variant, StratifiedShuffleSplit.
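
A minimal sketch of StratifiedShuffleSplit on the same X and y (test_size=0.3 is an arbitrary choice; the shuffled indices are omitted here, but each 3-sample validation set should contain roughly one 0 and two 1s, matching the 4:6 class ratio):

>>> from sklearn.model_selection import StratifiedShuffleSplit
>>> sss = StratifiedShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
>>> for train, test in sss.split(X, y):
...     print("%s %s" % (train, test))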

8. The TimeSeriesSplit function

A splitter for time-series data: each training set contains only samples that come before the corresponding validation set, so the model is always validated on data later than its training window. Successive training sets expand, each one a superset of the last:

>>> from sklearn.model_selection import TimeSeriesSplit
>>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
>>> y = np.array([1, 2, 3, 4, 5, 6])
>>> tscv = TimeSeriesSplit(n_splits=3)
>>> print(tscv)  
TimeSeriesSplit(max_train_size=None, n_splits=3)
>>> for train, test in tscv.split(X):
       print("%s %s" % (train, test))
[0 1 2] [3]
[0 1 2 3] [4]
[0 1 2 3 4] [5]
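
The max_train_size parameter visible in the repr above caps the size of the training window, turning the expanding window into a rolling one. A sketch with the same X:

>>> tscv = TimeSeriesSplit(n_splits=3, max_train_size=3)
>>> for train, test in tscv.split(X):
...     print("%s %s" % (train, test))
[0 1 2] [3]
[1 2 3] [4]
[2 3 4] [5]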

Reposted from www.cnblogs.com/xiaoma927/p/9949216.html