Machine Learning: Andrew Ng machine-learning-ex3 Python Implementation

Contents

Exercise 3: Multi-class Classification and Neural Networks

1. Multi-class Classification

1.1 Loading and Visualizing Data

1.2 Vectorize Logistic Regression

1.3 One-vs-All Training

1.4 Predict for One-Vs-All

2. Neural Networks

2.1 Loading Parameters

2.2 Implement Predict


Exercise 3: Multi-class Classification and Neural Networks

Required libraries:

import numpy as np
import scipy.io
import matplotlib.pyplot as plt
import scipy.optimize as opt

1. Multi-class Classification

1.1 Loading and Visualizing Data

Load and display the data:

def displayData(sel):
    # Plot 100 examples in a 10x10 grid; each row of sel is one flattened 20x20 image
    fig, ax_array = plt.subplots(nrows=10, ncols=10, figsize=(10, 10))
    for row in range(10):
        for column in range(10):
            # Transpose because the .mat data is stored column-major (MATLAB order)
            ax_array[row, column].matshow(sel[10 * row + column].reshape((20, 20)).T, cmap='gray')
            ax_array[row, column].axis('off')
    plt.show()


## =========== Part 1: Loading and Visualizing Data =============
print('Loading and Visualizing Data ...')
data = scipy.io.loadmat('ex3data1.mat') # training data stored in arrays X, y
X, y = data['X'], data['y'].flatten()
m = np.size(X, 0)
# Randomly select 100 data points to display
rand_indices = np.random.permutation(m)
sel = X[rand_indices[0:100], :]
displayData(sel)
print('Program paused. Press enter to continue.')
input()

The loaded data is displayed as a 10x10 grid of sample digits; each image is 20x20 pixels.
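As a quick sanity check of the loaded arrays (expected values per the ex3 handout: 5000 examples of 400 pixels each, with label 10 standing for the digit 0):

print(X.shape)       # (5000, 400): 5000 flattened 20x20 images
print(np.unique(y))  # [ 1  2  3  4  5  6  7  8  9 10]: label 10 represents the digit 0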

1.2 Vectorize Logistic Regression

The cost function for regularized logistic regression is:

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left[-y^{(i)}\log\left(h_\theta(x^{(i)})\right) - \left(1-y^{(i)}\right)\log\left(1-h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

where:

$$h_\theta(x) = g(\theta^T x), \qquad g(z) = \frac{1}{1+e^{-z}}$$

The gradient with respect to $\theta_0$ (the bias term, which is not regularized) is:

$$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_0^{(i)}$$

and for each $\theta_j$ with $j \ge 1$:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} + \frac{\lambda}{m}\theta_j$$
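The implementation below also relies on a sigmoid helper that the original post does not show; a minimal definition:

def sigmoid(z):
    # Logistic function g(z) = 1 / (1 + exp(-z)), applied element-wise
    return 1 / (1 + np.exp(-z))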

def lrCostFunction(theta_t, X_t, y_t, lambda_t):
    m, n = X_t.shape
    y_t = np.asarray(y_t, dtype=float).reshape((m, 1))  # accept boolean labels too
    theta_t = theta_t.reshape((n, 1))
    h = sigmoid(X_t.dot(theta_t))
    # Vectorized unregularized cost and gradient
    cost = (-y_t.T.dot(np.log(h)) - (1 - y_t).T.dot(np.log(1 - h))) / m
    grad = X_t.T.dot(h - y_t) / m
    # Regularization, skipping the bias term theta_0
    cost += lambda_t / (2 * m) * theta_t[1:].T.dot(theta_t[1:])
    temp = theta_t.copy()
    temp[0, 0] = 0
    grad += lambda_t / m * temp
    return cost.item(), grad.flatten()


## ============ Part 2a: Vectorize Logistic Regression ============
print('Testing lrCostFunction() with regularization')
theta_t = np.array([-2, -1, 1, 2], dtype='float64')
X_t = np.c_[np.ones((5,1)), np.arange(1, 16).reshape(3,5).T / 10]
y_t = (np.array([1, 0, 1, 0, 1]) >= 0.5)
lambda_t = 3
J, grad = lrCostFunction(theta_t, X_t, y_t, lambda_t)
print('Cost: %f' % J)
print('Expected cost: 2.534819')
print('Gradients:')
print('%f %f %f %f' % (grad[0], grad[1], grad[2], grad[3]))
print('Expected gradients:')
print(' 0.146561 -0.548558 0.724722 1.398003')
print('Program paused. Press enter to continue.')
input()

The output matches the expected values: a cost of 2.534819 and gradients 0.146561, -0.548558, 0.724722, 1.398003.

1.3 One-vs-All Training

Training uses the oneVsAll strategy: each of the 10 labels is treated in turn as the positive class of a binary classification task, and a regularized logistic-regression classifier is trained for it.

def oneVsAll(X, y, num_labels, lambda_t):
    m, n = X.shape
    all_theta = np.zeros((num_labels, n + 1))
    X = np.c_[np.ones((m, 1)), X]
    # Labels run from 1 to num_labels (label 10 stands for the digit 0),
    # so loop over 1..num_labels and store the theta for label c in row c-1
    for c in range(1, num_labels + 1):
        initial_theta = np.zeros(n + 1)
        # lrCostFunction returns (cost, gradient), so pass jac=True
        result = opt.minimize(fun=lrCostFunction, x0=initial_theta, \
                              args=(X, (y == c), lambda_t), method='TNC', jac=True)
        print('Training label %d, cost is %.4f' % (c, result['fun']))
        all_theta[c - 1] = result['x']

    return all_theta


## ============ Part 2b: One-vs-All Training ============
print('Training One-vs-All Logistic Regression...')
num_labels = 10  # digits 0-9; label 10 stands for the digit 0
lambda_t = 0.1
all_theta = oneVsAll(X, y, num_labels, lambda_t)
print('Program paused. Press enter to continue.')
input()

1.4 Predict for One-Vs-All

Predict the classification results:

def predictOneVsAll(all_theta, X):
    m = np.size(X, 0)
    # Add the bias column, then pick the class whose classifier outputs the
    # highest probability; +1 maps the 0-based row index back to labels 1..10
    X = np.c_[np.ones((m, 1)), X]
    pred = np.argmax(sigmoid(X.dot(all_theta.T)), 1) + 1
    return pred


## ================ Part 3: Predict for One-Vs-All ================
pred = predictOneVsAll(all_theta, X)
print('Training Set Accuracy: %.2f%%' % (np.mean(pred == y) * 100))

The final output is a training-set accuracy of about 94.9%, the value quoted in the exercise handout.

2. Neural Networks

Data loading is the same as in Part 1.

2.1 Loading Parameters

In this part of the assignment no training is required: we only load the pre-trained weights and check the classification accuracy.

## ================ Part 2: Loading Parameters ================
print('Loading Saved Neural Network Parameters ...')

# Load the weights into variables Theta1 and Theta2
# (Theta1: 25x401, Theta2: 10x26 for the exercise's 400-25-10 network)
weight = scipy.io.loadmat('ex3weights.mat')
Theta1, Theta2 = weight['Theta1'], weight['Theta2']

2.2 Implement Predict
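
The predict function called below is not shown in the original post; the following is a minimal sketch of the standard feedforward pass for the exercise's 400-25-10 architecture, reusing the sigmoid helper defined earlier:

def predict(Theta1, Theta2, X):
    # Feedforward through the three-layer network
    m = X.shape[0]
    a1 = np.c_[np.ones((m, 1)), X]   # input layer plus bias unit: (m, 401)
    a2 = sigmoid(a1.dot(Theta1.T))   # hidden layer activations: (m, 25)
    a2 = np.c_[np.ones((m, 1)), a2]  # add bias unit: (m, 26)
    a3 = sigmoid(a2.dot(Theta2.T))   # output layer probabilities: (m, 10)
    # +1 maps the 0-based argmax to labels 1..10 (10 stands for the digit 0)
    return np.argmax(a3, axis=1) + 1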

## ================= Part 3: Implement Predict =================
pred = predict(Theta1, Theta2, X)
print('Training Set Accuracy: %.1f%%' % (np.mean(pred == y) * 100))
print('Program paused. Press enter to continue.')

The prediction accuracy is 97.5%, noticeably better than the logistic-regression result.

Reposted from blog.csdn.net/linghu8812/article/details/89786169