Machine Learning - WEEK 1 2 3 - Linear Regression, Logistic Regression, Gradient Descent and Advanced Optimization Algorithms, the Normal Equation, and Getting Started with Octave

WEEK 1, 2, 3

These are personal notes that record only the key points; they are not suitable as a beginner's introduction.

Linear Regression

  • Training examples $(x^{(i)}, y^{(i)})$, $i = 1, 2, \dots, m$
  • $x^{(i)} = (x_1^{(i)}, x_2^{(i)}, \dots, x_n^{(i)})$, assuming each $x^{(i)}$ has $n$ features
  • Hypothesis (target) function:
    $h_\theta(x^{(i)}) = \theta_0 + \theta_1 x_1^{(i)} + \theta_2 x_2^{(i)} + \dots + \theta_n x_n^{(i)}$
  • The exact form of $h_\theta(x^{(i)})$ depends on the problem at hand
  • The "linear" in linear regression means that the parameters $\theta_j$ all appear to the first power, not that the inputs of $h_\theta$ are linear; the "regression" refers to fitting over the variables of the cost function $J$
  • $\theta = (\theta_0, \theta_1, \theta_2, \dots, \theta_n)$ are the parameters; the goal of machine learning is to find parameter values such that $h_\theta(x^{(i)})$ best captures the mapping from $x^{(i)}$ to $y^{(i)}$
  • To measure how good a choice of parameters is, we define the cost function:
    $J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$
  • The problem now becomes finding the parameters $\theta$ that minimize the cost function $J(\theta)$; two methods are given below:
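
As a quick numerical illustration of the hypothesis and the cost before the two methods (all numbers below are made up):

% Toy data: m = 3 examples, n = 2 features, intercept column x0 = 1 already included in X.
X = [1 2 3;
     1 4 5;
     1 6 7];
y = [10; 20; 30];
theta = [0; 1; 2];                              % theta_0, theta_1, theta_2

h = X * theta;                                  % h_theta(x^(i)) for every example
J = 1 / (2 * length(y)) * sum((h - y) .^ 2)     % prints J(theta), here about 23.33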

Gradient Descent

Time complexity $O(kn^2)$; suitable when $n > 10000$ ($k$ is the number of gradient-descent iterations)

$\theta_j := \theta_j - \alpha \frac{\partial J(\theta)}{\partial \theta_j}$

  • $\alpha$ is the learning rate (step size); its value depends on the problem. Too large a value prevents convergence. With an appropriate value, $\frac{\partial J(\theta)}{\partial \theta_j}$ shrinks as the parameters approach an extremum, so gradient descent still converges; there is no need to worry that keeping $\alpha$ fixed near the optimum will prevent convergence.
  • Note that you must first compute every $temp_j = \theta_j - \alpha \frac{\partial J(\theta)}{\partial \theta_j}$ and only then assign $\theta_j = temp_j$ (a simultaneous update; see the sketch below).
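
A minimal sketch of what the simultaneous update means for two parameters (theta, alpha, and the gradient values are placeholders; the vectorized code later in these notes performs this update implicitly):

theta = [1; 1];  alpha = 0.1;
grad  = [0.5; -0.2];                  % stand-ins for dJ/dtheta_0 and dJ/dtheta_1

temp0 = theta(1) - alpha * grad(1);   % compute every temp_j from the OLD theta ...
temp1 = theta(2) - alpha * grad(2);
theta(1) = temp0;                     % ... and only then overwrite theta
theta(2) = temp1;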

If we take $x_0^{(i)} = 1$, then after working out the partial derivatives the update can be written as:

$\theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m} \left[ \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} \right]$

In matrix form:

$\theta := \theta - \frac{\alpha}{m} \left[ X^T (X\theta - y) \right]$

  • $\theta \in \mathbb{R}^{(n+1) \times 1}$, $X \in \mathbb{R}^{m \times (n+1)}$, $y \in \mathbb{R}^{m \times 1}$

Feature scaling:

Try to keep each feature roughly within $-1 < x_i < 1$:

$x_i := \frac{x_i - \mu_i}{s_i}$

  • $\mu_i$ is the mean of the $i$-th feature and $s_i$ is its range ($max - min$)
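
A minimal sketch of this mean/range normalization (the matrix is made up; note that the featureNormalize code further down divides by the standard deviation instead of the range):

X = [2104 3; 1600 3; 2400 4; 1416 2];    % made-up feature matrix, no intercept column
mu = mean(X);                            % per-feature mean
s  = max(X) - min(X);                    % per-feature range (max - min)
X_norm = (X - mu) ./ s;                  % the row vectors broadcast over the rows of X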

Normal Equation

Time complexity $O(n^3)$; suitable when $n < 10000$

Set $\frac{\partial J(\theta)}{\partial \theta_i} = 0$ directly and solve, which gives:

$\theta = (X^T X)^{-1} X^T y$

  • $\theta \in \mathbb{R}^{(n+1) \times 1}$, $X \in \mathbb{R}^{m \times (n+1)}$, $y \in \mathbb{R}^{m \times 1}$

code

featureNormalize

function [X_norm, mu, sigma] = featureNormalize(X)
% Normalize each feature to zero mean and unit scale.
% Note: this implementation divides by the standard deviation rather than
% the range (max - min) used in the formula above.
X_norm = zeros(size(X));
mu = zeros(1, size(X, 2));
sigma = zeros(1, size(X, 2));
for iter = 1:size(X, 2)
    mu(iter) = mean(X(:, iter));
    sigma(iter) = std(X(:, iter));
    X_norm(:, iter) = (X(:, iter) - mu(iter)) / sigma(iter);
end
end

computeCostMulti

function J = computeCostMulti(X, y, theta)
% Vectorized cost for (multivariate) linear regression.
m = length(y); % number of training examples
J = 1 / (2 * m) * sum((X * theta - y) .^ 2);
end

gradientDescentMulti

function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    % Vectorized, simultaneous update of all theta_j.
    theta = theta - alpha / m * X' * (X * theta - y);
    J_history(iter) = computeCostMulti(X, y, theta);
end
end

normalEqn

function [theta] = normalEqn(X, y)
% Closed-form solution; pinv handles the case where X' * X is singular.
theta = pinv(X' * X) * X' * y;
end
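
A short usage sketch tying the functions above together on synthetic data (the data and hyperparameter values are invented; it assumes the files above are on the Octave path):

m = 100;                                           % synthetic regression problem
features = [rand(m, 1) * 10, rand(m, 1) * 5];      % two made-up features
y = 4 + 3 * features(:, 1) - 2 * features(:, 2) + 0.1 * randn(m, 1);

[X_norm, mu, sigma] = featureNormalize(features);  % scale the features first
X = [ones(m, 1), X_norm];                          % prepend the intercept column x0 = 1

[theta_gd, J_history] = gradientDescentMulti(X, y, zeros(3, 1), 0.1, 400);
theta_ne = normalEqn(X, y);                        % closed form, for comparison

Because the features are scaled, both answers are expressed in the normalized coordinates, but the two methods should agree with each other and J_history should be non-increasing.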

Octave

https://www.gnu.org/software/octave/

Ubuntu Install:

sudo apt-add-repository ppa:octave/stable
sudo apt-get update
sudo apt-get install octave

Getting started with Octave:

There is a lot to cover; two articles are recommended:

http://blog.csdn.net/weixin_36106941/article/details/64443944
https://www.cnblogs.com/leezx/p/5635056.html

Logistic Regression

Linear regression predicts the value at a data point; logistic regression predicts the probability that a data point belongs to a certain class.

To achieve this, we set up a new model:

$$h_\theta(x) = g(\theta^T x), \qquad g(z) = \frac{1}{1 + e^{-z}}, \qquad J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \mathrm{Cost}\left( h_\theta(x^{(i)}), y^{(i)} \right)$$

$$\mathrm{Cost}(h_\theta(x), y) = \begin{cases} -\log(h_\theta(x)) & y = 1 \\ -\log(1 - h_\theta(x)) & y = 0 \end{cases}$$

Note: $y$ is always 0 or 1, and $\log$ here means the natural logarithm $\ln$.

This can be written in a single expression:

$\mathrm{Cost}(h_\theta(x), y) = -\left[ y \log(h_\theta(x)) + (1 - y) \log(1 - h_\theta(x)) \right]$

Taking partial derivatives of this cost function yields a result that has exactly the same form as for linear regression:

$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left[ \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} \right]$

cost function code

function [J, grad] = costFunction(theta, X, y)
m = length(y); % number of training examples
tmp = sigmoid(X * theta);                                 % h_theta(x) for every example
J = -1 / m * (y' * log(tmp) + (1 - y)' * log(1 - tmp));   % cross-entropy cost
grad = 1 / m * X' * (tmp - y);                            % gradient of J
end
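
costFunction calls a sigmoid helper that is not listed in these notes; a minimal sketch of what it needs to compute:

function g = sigmoid(z)
% Element-wise logistic function g(z) = 1 / (1 + e^(-z));
% z may be a scalar, a vector, or a matrix.
g = 1 ./ (1 + exp(-z));
end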

Advanced Optimization Algorithms

Optimization algorithms:
- Gradient descent
- Conjugate gradient
- BFGS
- L-BFGS

Advantages:
- No need to manually pick α
- Often faster than gradient descent

Disadvantages:
- More complex

Example of calling an Octave optimization routine:

[first] Define the cost function:

costFunction.m

function [jval, gradient] = costFunction(theta, X, y)
    % jval     := J(theta)
    % gradient := grad J(theta), an (n + 1) x 1 vector
end

[then] Enter the commands:

options = optimset('GradObj', 'on', 'MaxIter', 100);
initialTheta = zeros(n + 1, 1);
[optTheta, functionVal, exitFlag] = fminunc(@(t)costFunction(t, X, y), initialTheta, options);
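
A self-contained toy run to check the call, similar in spirit to the lecture demo (toyCost and its quadratic cost are invented for illustration and are not course functions):

% toyCost.m: J(theta) = (theta_1 - 5)^2 + (theta_2 - 5)^2, minimized at [5; 5]
function [jval, gradient] = toyCost(theta)
    jval = (theta(1) - 5) ^ 2 + (theta(2) - 5) ^ 2;
    gradient = 2 * (theta - 5);
end

% then, at the Octave prompt:
options = optimset('GradObj', 'on', 'MaxIter', 100);
[optTheta, functionVal, exitFlag] = fminunc(@toyCost, zeros(2, 1), options)

optTheta should come back close to [5; 5], functionVal close to 0, and exitFlag should be positive if fminunc reports convergence.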

Regularization

Linear regression

To prevent overfitting, a regularization term $\frac{\lambda}{2m} \sum_{i=1}^{n} \theta_i^2$ is added to the cost function:

$$J(\theta) = \frac{1}{2m} \left[ \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 + \lambda \sum_{i=1}^{n} \theta_i^2 \right]$$

Taking partial derivatives:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \begin{cases} \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_0^{(i)} & j = 0 \\ \frac{1}{m} \left[ \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \lambda \theta_j \right] & j > 0 \end{cases}$$

Note: $\theta_0$ is not penalized.

Gradient descent

Repeat {

$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_0^{(i)}$$
$$\theta_j := \theta_j - \alpha \frac{1}{m} \left[ \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \lambda \theta_j \right]$$

}

Rearranging the update for $j > 0$:

$\theta_j := \theta_j \left( 1 - \alpha \frac{\lambda}{m} \right) - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$

Since $1 - \alpha \frac{\lambda}{m} < 1$, the factor $\theta_j \left( 1 - \alpha \frac{\lambda}{m} \right)$ behaves like $\theta_j \cdot 0.99$: every iteration first shrinks $\theta_j$ slightly and then applies the usual gradient step.
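
A minimal vectorized sketch of one such regularized gradient-descent step on toy values, excluding $\theta_0$ from the penalty by zeroing the first entry of a copy (the same trick the costFunctionReg code below uses):

X = [1 2; 1 3; 1 4];  y = [2; 3; 4];         % made-up data, intercept column included
theta = [0.5; 0.5];  alpha = 0.1;  lambda = 1;  m = length(y);

reg = theta;
reg(1) = 0;                                  % theta_0 is not penalized
theta = theta - alpha / m * (X' * (X * theta - y) + lambda * reg);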

Normal equation

Set $\frac{\partial J(\theta)}{\partial \theta_i} = 0$ directly and solve, which gives:

$$\theta = \left( X^T X + \lambda \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}_{(n+1) \times (n+1)} \right)^{-1} X^T y$$

  • Suppose $m \le n$ ($m$: number of examples, $n$: number of features), so that $X^T X$ itself may be singular
  • When $\lambda > 0$, the matrix inside the parentheses is always invertible
  • $\theta \in \mathbb{R}^{(n+1) \times 1}$, $X \in \mathbb{R}^{m \times (n+1)}$, $y \in \mathbb{R}^{m \times 1}$
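
A sketch of this regularized closed form in Octave (normalEqnReg is an illustrative name, not one of the course's provided functions):

function theta = normalEqnReg(X, y, lambda)
% Regularized normal equation: lambda multiplies an identity matrix whose
% (1,1) entry is zeroed so that theta_0 is not penalized.
L = eye(size(X, 2));
L(1, 1) = 0;
theta = pinv(X' * X + lambda * L) * X' * y;
end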

Logistic regression

Add the same regularization term $\frac{\lambda}{2m} \sum_{i=1}^{n} \theta_i^2$ to the cost function:

$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \mathrm{Cost}\left( h_\theta(x^{(i)}), y^{(i)} \right) + \frac{\lambda}{2m} \sum_{i=1}^{n} \theta_i^2 = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log(h_\theta(x^{(i)})) + (1 - y^{(i)}) \log(1 - h_\theta(x^{(i)})) \right] + \frac{\lambda}{2m} \sum_{i=1}^{n} \theta_i^2$$

Taking partial derivatives again gives the same form as for linear regression:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \begin{cases} \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_0^{(i)} & j = 0 \\ \frac{1}{m} \left[ \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} + \lambda \theta_j \right] & j > 0 \end{cases}$$

Note: $\theta_0$ is not penalized.

cost function code

function [J, grad] = costFunctionReg(theta, X, y, lambda)
m = length(y); % number of training examples
tmp = sigmoid(X * theta);
J = -1 / m * (y' * log(tmp) + (1 - y)' * log(1 - tmp)) + lambda / (2 * m) * sum(theta(2:end) .^ 2);
theta(1) = 0;                                  % exclude theta_0 from the penalty term
grad = 1 / m * (X' * (tmp - y) + lambda * theta);
end
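
A usage sketch plugging costFunctionReg into fminunc; lambda = 1 is only an example value, and X and y are assumed to be already loaded with the intercept column included:

lambda = 1;                                        % example value; tune as needed
initial_theta = zeros(size(X, 2), 1);
options = optimset('GradObj', 'on', 'MaxIter', 400);
[theta, J] = fminunc(@(t) costFunctionReg(t, X, y, lambda), initial_theta, options);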

https://www.coursera.org/learn/machine-learning
Instructor: Andrew Ng, Co-founder, Coursera; Adjunct Professor, Stanford University; formerly head of Baidu AI Group / Google Brain

Reposted from blog.csdn.net/ctsas/article/details/79232499