Week 4 Programming Assignment Review

Copyright notice: this is an original post by the author and may not be reproduced without permission. https://blog.csdn.net/qq_34271269/article/details/73821729

This week's lectures are an introduction to neural networks, including how networks can represent logical operations (AND, OR, and so on). Overall the difficulty is not high, but there are still quite a few things to watch out for in the programming.

Below is my code, along with the issues I ran into while writing it:

The overall task of this assignment is to recognize handwritten digits from 5,000 training examples, each a 20x20 grayscale image matrix.
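For context (this snippet is mine, not from the original post): if I recall correctly, the provided ex3.m script loads this data from ex3data1.mat, with each 20x20 image unrolled into a 400-element row:

load('ex3data1.mat');   % gives X (5000x400) and y (5000x1); the digit 0 is labeled 10
m = size(X, 1);         % number of training examples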

1. First, logistic regression with regularization. This actually has little to do with this week's lecture:

function [J, grad] = lrCostFunction(theta, X, y, lambda)
%LRCOSTFUNCTION Compute cost and gradient for logistic regression with 
%regularization
%   J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. to the parameters. 

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
% Vectorized regularized cost; subtracting theta(1)^2 keeps the bias term
% out of the regularization sum.
J = -1/m*(y'*log(sigmoid(X*theta))+(1-y')*log(1-sigmoid(X*theta)))+lambda/(2*m)*(sum(theta.^2)-theta(1)*theta(1));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Hint: The computation of the cost function and gradients can be
%       efficiently vectorized. For example, consider the computation
%
%           sigmoid(X * theta)
%
%       Each row of the resulting matrix will contain the value of the
%       prediction for that example. You can make use of this to vectorize
%       the cost function and gradient computations. 
%
% Hint: When computing the gradient of the regularized cost function, 
%       there're many possible vectorized solutions, but one solution
%       looks like:
%           grad = (unregularized gradient for logistic regression)
%           temp = theta; 
%           temp(1) = 0;   % because we don't add anything for j = 0  
%           grad = grad + YOUR_CODE_HERE (using the temp variable)
%

grad = 1/m*X'*(sigmoid(X*theta)-y);   % unregularized gradient
temp = theta;
temp(1) = 0;                          % do not regularize the bias term
grad = grad + (lambda/m)*temp;        % add the regularization term for j >= 1

% =============================================================


end



The key is the expression for the regularized logistic regression cost; once you have that expression, this function is easy to write. The assignment PDF states it very clearly:
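The equation image from the original post did not survive, so for reference here is the regularized logistic regression cost function as given in the assignment (and implemented in the code above):

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\log\big(h_\theta(x^{(i)})\big)-\big(1-y^{(i)}\big)\log\big(1-h_\theta(x^{(i)})\big)\Big]+\frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

where $h_\theta(x)=\mathrm{sigmoid}(\theta^{T}x)$; note the regularization sum starts at $j=1$ and skips the bias term $\theta_0$.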
The above is the cost function expression, and it is just as easy to express in Octave:

J = -1/m*(y'*log(sigmoid(X*theta))+(1-y')*log(1-sigmoid(X*theta)))+lambda/(2*m)*(sum(theta.^2)-theta(1)*theta(1));


Because we hand the optimization over to an advanced optimizer, the cost function must also return the current gradient for fmincg to use.


With the cost function in hand, we can easily derive the partial derivatives:
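The corresponding equation image is also missing; the partial derivatives implemented below are:

$$\frac{\partial J}{\partial \theta_0}=\frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)})-y^{(i)}\big)x_0^{(i)}$$

$$\frac{\partial J}{\partial \theta_j}=\frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)})-y^{(i)}\big)x_j^{(i)}+\frac{\lambda}{m}\theta_j \qquad (j\ge 1)$$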
These are also very easy to express:

grad = 1/m*X'*(sigmoid(X*theta)-y);
temp = theta;
temp(1)=0;
grad = grad + (lambda/m)*temp;


The point to note in both places above is that theta(1), the bias term, is not regularized; that is why we copy theta into temp and set temp(1) = 0 before adding the regularization term.




2. oneVsAll

This function does the actual training of our parameters: one regularized logistic regression classifier for each of the num_labels classes.

function [all_theta] = oneVsAll(X, y, num_labels, lambda)
%ONEVSALL trains multiple logistic regression classifiers and returns all
%the classifiers in a matrix all_theta, where the i-th row of all_theta 
%corresponds to the classifier for label i
%   [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels
%   logistic regression classifiers and returns each of these classifiers
%   in a matrix all_theta, where the i-th row of all_theta corresponds 
%   to the classifier for label i

% Some useful variables
m = size(X, 1);
n = size(X, 2);

% You need to return the following variables correctly 
all_theta = zeros(num_labels,n+1);

% Add ones to the X data matrix
X = [ones(m, 1),X];

% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the following code to train num_labels
%               logistic regression classifiers with regularization
%               parameter lambda. 
%
% Hint: theta(:) will return a column vector.
%
% Hint: You can use y == c to obtain a vector of 1's and 0's that tell you
%       whether the ground truth is true/false for this class.
%
% Note: For this assignment, we recommend using fmincg to optimize the cost
%       function. It is okay to use a for-loop (for c = 1:num_labels) to
%       loop over the different classes.
%
%       fmincg works similarly to fminunc, but is more efficient when we
%       are dealing with large number of parameters.
%
% Example Code for fmincg:
%
%     % Set Initial theta
%     initial_theta = zeros(n + 1, 1);
%     
%     % Set options for fminunc
%     options = optimset('GradObj', 'on', 'MaxIter', 50);
% 
%     % Run fmincg to obtain the optimal theta
%     % This function will return theta and the cost 
%     [theta] = ...
%         fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
%                 initial_theta, options);
%
initial_theta = zeros(n+1,1);
options = optimset('GradObj','on','MaxIter',50);
for i = 1:num_labels
  % Train a one-vs-all classifier for class i; (y == i) converts the labels to 0/1
  kk = fmincg(@(t)(lrCostFunction(t,X,(y==i),lambda)),initial_theta,options);
  all_theta(i,:) = kk;   % store the learned 401x1 vector as the i-th row
end
 % =========================================================================
end


What needs attention here is how the fmincg function is used:

Its first argument is the function we want to minimize, here lrCostFunction.

Note that the cost function we pass in must return two values, [jVal, gradient]: the cost and the gradient; fmincg needs both even though we never inspect them directly. fmincg itself returns the optimized parameter vector (the parameters we are training) and, as a second output, the cost at each iteration; the training is only meaningful when that cost ends up reasonably small.
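To make the return values concrete, here is a hypothetical snippet (not from the original post) that captures both of fmincg's outputs for one class c, with X, y, lambda, initial_theta and options as defined inside oneVsAll:

c = 1;                                   % class currently being trained
[theta_c, cost_hist] = fmincg(@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
                              initial_theta, options);
% theta_c:   optimized 401x1 parameter vector for class c
% cost_hist: cost value at each iteration; it should end up reasonably small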

After training, the result is assigned to kk and then copied from kk into all_theta (my MATLAB is not great; if you know a better way, please do let me know).

fmincg can only optimize a single parameter vector at a time, so instead of training the 10x401 matrix all at once, we train a 401x1 vector for each class separately and assemble the results row by row at the end.


3. predictOneVsAll

This one is very simple. As documented, max can also return the index of the maximum value, so you can either apply max to each row vector in a loop, or use max's second return value directly; look up Octave's max function for the details. A minimal sketch follows.
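A minimal vectorized sketch (my own, not the author's code) of the predictOneVsAll core, assuming the bias column has already been added to X as in the template:

% X: m x 401 (with bias column), all_theta: 10 x 401
[maxval, p] = max(sigmoid(X * all_theta'), [], 2);  % p(i) = label with the highest probability for example i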

4. predict

This one is even simpler: with the trained neural network weights, we just compute the predicted labels, essentially the same as step 3. The core code:

for i = 1:m
  p(i) = find(temp(i,:) == max(temp(i,:)));   % temp: output-layer activations for each example
end
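The loop above assumes temp already holds the output-layer activations. A minimal sketch of the full forward pass in predict.m, assuming the usual Theta1 (25x401) and Theta2 (10x26) weight matrices and m = size(X, 1) from the template:

a1 = [ones(m, 1) X];             % input layer with bias unit, m x 401
a2 = sigmoid(a1 * Theta1');      % hidden layer activations, m x 25
a2 = [ones(m, 1) a2];            % add bias unit to the hidden layer, m x 26
temp = sigmoid(a2 * Theta2');    % output layer activations, m x 10
[maxval, p] = max(temp, [], 2);  % vectorized alternative to the loop above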

