Machine Learning in Action: the kNN Algorithm

Main idea of the k-nearest-neighbors (kNN) algorithm: given an input vector inX, compute the distance d from every point in dataSet to inX and sort by d to get the ordered row indices. Each point (one row) corresponds to a label (usually the last attribute). Take the k smallest distances in order, count the occurrences of each label among them, sort the labels by count in descending order, and output the label with the highest count.

from numpy import *          # tile, zeros, shape and array operations used below
import operator              # itemgetter, used to sort the vote counts

def classify0(inX, dataSet, labels, k):
    dataSetSize = dataSet.shape[0]
    # Euclidean distance from inX to every row of dataSet
    diffMat = tile(inX, (dataSetSize,1)) - dataSet
    sqDiffMat = diffMat**2
    sqDistances = sqDiffMat.sum(axis=1)
    distances = sqDistances**0.5
    sortedDistIndicies = distances.argsort()     # row indices of dataSet, nearest first
    classCount={}
    # count the labels of the k nearest neighbors
    for i in range(k):
        voteIlabel = labels[sortedDistIndicies[i]]
        classCount[voteIlabel] = classCount.get(voteIlabel,0) + 1
    # sort the labels by vote count, descending, and return the most common one
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]
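
As a quick sanity check, classify0 can be called on a tiny hand-made dataset before touching the real file; the group and labels values below are made up purely for illustration.

group = array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])   # illustrative points, two per class
labels = ['A', 'A', 'B', 'B']                                     # illustrative class labels
print(classify0([0.0, 0.0], group, labels, 3))                    # the 3 nearest neighbors are mostly 'B'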

Approach: read the data from the file. The data file must sit in the same directory as the code; otherwise the full path has to be written out. Note: on each pass through the loop, returnMat[index,:] stores the first three attributes of one line of fr, so the loop fills in the first three columns of the data; classLabelVector likewise appends the last attribute of each line, building a list of the values in the file's last column.
def file2matrix(filename):
    fr = open(filename)
    arrayOLines = fr.readlines()                # read the file once
    numberOfLines = len(arrayOLines)            # number of samples in the file
    returnMat = zeros((numberOfLines,3))        # matrix for the first three columns (features)
    classLabelVector = []                       # list for the last column (labels)
    index = 0
    for line in arrayOLines:
        line = line.strip()
        listFromLine = line.split('\t')
        returnMat[index,:] = listFromLine[0:3]          # first three fields of this line
        classLabelVector.append(int(listFromLine[-1]))  # last field of this line
        index += 1
    return returnMat,classLabelVector
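
A minimal usage sketch, assuming datingTestSet2.txt sits next to the script and each line holds three tab-separated numeric features followed by an integer label:

datingDataMat, datingLabels = file2matrix('datingTestSet2.txt')
print(datingDataMat.shape)     # (number of lines, 3): the feature matrix
print(datingLabels[0:5])       # the first few integer labels from the last column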

Approach: normalization rescales each feature to the range 0 to 1 (or -1 to 1).
Formula: newValue = (oldValue - min)/(max - min); for example, with min = 0 and max = 100, an old value of 20 becomes 0.2. Note that dataSet.min(0) returns the minimum of each column.

def autoNorm(dataSet):
    minVals = dataSet.min(0)                        # column-wise minimums
    maxVals = dataSet.max(0)                        # column-wise maximums
    ranges = maxVals - minVals
    normDataSet = zeros(shape(dataSet))
    m = dataSet.shape[0]
    normDataSet = dataSet - tile(minVals, (m,1))    # oldValue - min
    normDataSet = normDataSet/tile(ranges, (m,1))   # element-wise divide by (max - min)
    return normDataSet, ranges, minVals
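
A short sketch chaining file2matrix and autoNorm (same file name as above); after normalization every column of normMat should lie between 0 and 1:

datingDataMat, datingLabels = file2matrix('datingTestSet2.txt')
normMat, ranges, minVals = autoNorm(datingDataMat)
print(normMat.min(0))    # expected: all zeros
print(normMat.max(0))    # expected: all ones
print(ranges, minVals)   # original column ranges and minimums, kept for normalizing new inputs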

Approach: take the first 50% of datingTestSet2.txt as test data, using each of those rows as the test vector inX, and take the last 50% as the training set; the labels here are the last column of that last 50%, and k = 3. Each predicted value is compared against the actual label in the last column of the corresponding row; on a mismatch the error count is incremented by 1, and the printed error rate = number of errors / number of test vectors.
def datingClassTest():
    hoRatio = 0.50      # hold out 50% of the rows as the test set
    datingDataMat,datingLabels = file2matrix('datingTestSet2.txt')       # load data set from file
    normMat, ranges, minVals = autoNorm(datingDataMat)
    m = normMat.shape[0]
    numTestVecs = int(m*hoRatio)                # the first numTestVecs rows are test vectors
    errorCount = 0.0
    for i in range(numTestVecs):
        # classify test row i against the remaining rows, with k = 3
        classifierResult = classify0(normMat[i,:],normMat[numTestVecs:m,:],datingLabels[numTestVecs:m],3)
        print ("the classifier came back with: %d, the real answer is: %d" % (classifierResult, datingLabels[i]))
        if (classifierResult != datingLabels[i]): errorCount += 1.0
    print ("the total error rate is: %f" % (errorCount/float(numTestVecs)))
    print (errorCount)
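
To run the hold-out test as a script, an entry-point guard can be appended at the bottom of the file (this guard is my addition, not part of the original listing):

if __name__ == '__main__':
    datingClassTest()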

Reposted from blog.csdn.net/lirika_777/article/details/78528657