Machine Learning in Practice: Implementing Multivariate Linear Regression

Multivariate linear regression works almost the same way as single-variable linear regression, so here we give the implementation directly:
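For reference, the formulas that the code below implements are the standard cost function and gradient descent update:

$$J(\theta) = \frac{1}{2m}(X\theta - y)^T(X\theta - y)$$

$$\theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)\,x_j^{(i)}$$

where m is the number of training examples and all components of theta are updated simultaneously.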

The computeCostMulti function

function J = computeCostMulti(X, y, theta)
  m = length(y); % number of training examples
  J = 0;
  predictions = X * theta;
  J = 1/(2*m) * (predictions - y)' * (predictions - y);
end
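A quick way to sanity-check the function is to call it on a small, made-up data set (the numbers below are purely illustrative and not taken from the exercise data):

X = [1 2104 3; 1 1600 3; 1 2400 3];   % design matrix, first column is the intercept term
y = [400; 330; 369];                   % illustrative target values
theta = zeros(3, 1);                   % all-zero starting parameters
J = computeCostMulti(X, y, theta)      % with theta = 0 this equals mean(y.^2)/2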

The gradientDescentMulti function

function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
  m = length(y); % number of training examples
  J_history = zeros(num_iters, 1);
  feature_number = size(X, 2);
  temp = zeros(feature_number, 1);
  for iter = 1:num_iters
    % Compute every new theta(i) from the current theta (simultaneous update)
    for i = 1:feature_number
      temp(i) = theta(i) - (alpha / m) * sum((X * theta - y) .* X(:, i));
    end
    for j = 1:feature_number
      theta(j) = temp(j);
    end
    J_history(iter) = computeCostMulti(X, y, theta);
  end
end
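Because the new values are first collected in temp, every theta(j) is computed from the same old theta before any of them is overwritten. The two inner loops can also be collapsed into a single vectorized step; the variant below is an equivalent alternative sketch, not the code from the exercise:

function [theta, J_history] = gradientDescentMultiVec(X, y, theta, alpha, num_iters)
  % Vectorized variant: X' * (X*theta - y) is the gradient for all parameters at once,
  % so the whole theta vector is updated in a single assignment.
  m = length(y);
  J_history = zeros(num_iters, 1);
  for iter = 1:num_iters
    theta = theta - (alpha / m) * (X' * (X * theta - y));
    J_history(iter) = computeCostMulti(X, y, theta);
  end
end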



There are a few differences, though. For example, before starting gradient descent we need to perform feature scaling:

function [X_norm, mu, sigma] = featureNormalize(X)
  X_norm = X;
  mu = zeros(1, size(X, 2));
  sigma = zeros(1, size(X, 2));
  mu = mean(X);      % per-feature mean
  sigma = std(X);    % per-feature standard deviation
  for i = 1:size(mu, 2)
    X_norm(:, i) = (X(:, i) - mu(i)) ./ sigma(i);   % subtract mean, divide by std
  end
end
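Putting the pieces together, a typical call sequence looks roughly like the sketch below. The learning rate, iteration count, and the new example [1650 3] are placeholder values (assuming X holds two raw features without the intercept column), not values from the original post:

[X_norm, mu, sigma] = featureNormalize(X);       % scale each feature to zero mean, unit std
X_norm = [ones(size(X_norm, 1), 1), X_norm];     % prepend the column of ones for theta_0
alpha = 0.01;                                    % placeholder learning rate
num_iters = 400;                                 % placeholder iteration count
theta = zeros(size(X_norm, 2), 1);
[theta, J_history] = gradientDescentMulti(X_norm, y, theta, alpha, num_iters);
% New examples must be normalized with the SAME mu and sigma before predicting:
x_new = ([1650 3] - mu) ./ sigma;
price = [1, x_new] * theta;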


Implementing the Normal Equation


function [theta] = normalEqn(X, y)
  theta = zeros(size(X, 2), 1);
  theta = pinv(X' * X) * X' * y;
end
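Unlike gradient descent, the normal equation needs neither feature scaling nor a learning rate, and pinv is used rather than inv so the computation still succeeds when X'*X is singular (for example, when features are redundant). A minimal usage sketch with made-up numbers:

X = [1 2104 3; 1 1600 3; 1 2400 3];   % intercept column already included
y = [400; 330; 369];                   % illustrative targets
theta = normalEqn(X, y);               % closed-form solution
price = [1 1650 3] * theta;            % predict directly on the raw, unscaled features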






           
