Machine Learning Notes ---- Evaluations and Diagnostics on Algorithms


Improvements and Diagnostics on Algorithms

1. How to Evaluate A Hypothesis

Split the data into 2 parts: a training set and a test set.
If $J_{test}(\theta)$ is high while the training cost $J(\theta)$ is low, then overfitting occurs.



Linear Regression Test Error:
The same squared-error form as $J(\theta)$, evaluated on the test set:

$$J_{test}(\theta) = \frac{1}{2m_{test}} \sum_{i=1}^{m_{test}} \left( h_\theta(x_{test}^{(i)}) - y_{test}^{(i)} \right)^2$$


Logistic Regression Test Error:

$$\mathrm{err}(h_\Theta(x), y) = \begin{cases} 1 & \text{if } h_\Theta(x) \ge 0.5 \text{ and } y = 0, \ \text{or } h_\Theta(x) < 0.5 \text{ and } y = 1 \\ 0 & \text{otherwise} \end{cases}$$

then

$$\text{Test Error} = \frac{1}{m_{test}} \sum_{i=1}^{m_{test}} \mathrm{err}\left(h_\Theta(x_{test}^{(i)}), y_{test}^{(i)}\right)$$
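
A minimal sketch of this 0/1 test error in Python (the names `theta`, `X_test`, `y_test` are illustrative, not from the notes):

```python
# Sketch: 0/1 misclassification test error for logistic regression,
# where h_theta(x) = sigmoid(theta^T x). All names are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_test_error(theta, X_test, y_test):
    """Average of err(h(x), y): the fraction of test examples where
    the 0.5-thresholded prediction disagrees with the label."""
    h = sigmoid(X_test @ theta)           # h_theta(x) for every test example
    predictions = (h >= 0.5).astype(int)  # threshold at 0.5
    return np.mean(predictions != y_test)
```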

2. Model Selection

Split the data into 3 parts: training set + cross validation (CV) set + test set.
1) Optimize the parameters $\Theta$ on the training set for each polynomial degree.
2) Find the polynomial degree $d$ with the least error on the cross validation set.
3) Estimate the generalization error on the test set as $J_{test}(\Theta^{(d)})$, where $d$ is the degree chosen in step 2.
In reality, the CV set and test set should be randomly sampled, as in the sketch below.
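
A sketch of the whole selection loop with scikit-learn; the synthetic data and the 60/20/20 split ratios are assumptions for illustration:

```python
# Sketch: pick a polynomial degree on the CV set, then report test error.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Illustrative synthetic data (1-D inputs, noisy targets).
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x).ravel() + 0.1 * rng.standard_normal(200)

# Random 60/20/20 split into training, CV, and test sets.
x_train, x_rest, y_train, y_rest = train_test_split(x, y, test_size=0.4, random_state=0)
x_cv, x_test, y_cv, y_test = train_test_split(x_rest, y_rest, test_size=0.5, random_state=0)

# Step 1 + 2: fit each candidate degree on the training set,
# score it on the CV set.
cv_errors = {}
for d in range(1, 11):
    poly = PolynomialFeatures(degree=d)
    model = LinearRegression().fit(poly.fit_transform(x_train), y_train)
    cv_errors[d] = mean_squared_error(y_cv, model.predict(poly.transform(x_cv)))

# Step 3: estimate generalization error of the chosen degree on the test set.
best_d = min(cv_errors, key=cv_errors.get)
poly = PolynomialFeatures(degree=best_d)
model = LinearRegression().fit(poly.fit_transform(x_train), y_train)
print("best degree:", best_d,
      "test MSE:", mean_squared_error(y_test, model.predict(poly.transform(x_test))))
```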

3. Diagnosing Bias & Variance

Training error decreases as the polynomial degree $d$ increases.
Cross-validation error first decreases, then increases as $d$ grows.

High Bias:
$J_{CV}(\theta) \approx J_{train}(\theta)$, and both are high.
High Variance:
$J_{CV}(\theta)$ is high, $J_{train}(\theta)$ is low.
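
The two rules can be encoded as a rough diagnostic; the threshold below is an illustrative assumption, not a value from the notes:

```python
# Rough diagnostic from the two rules above; `acceptable_error` is an
# illustrative threshold, not a fixed value from the course.
def diagnose(j_train, j_cv, acceptable_error=0.1):
    if j_train > acceptable_error:
        return "high bias"       # J_train high and J_CV close to it
    if j_cv - j_train > acceptable_error:
        return "high variance"   # J_train low but a large gap to J_CV
    return "looks fine"
```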

4. Choosing λ When Doing Regularization

Try a range of values, doubling each time ($\lambda := 2\lambda$, e.g. $0, 0.01, 0.02, 0.04, \ldots, 10.24$); pick the one with the least $J_{CV}(\theta)$ and check its test error.
High Bias:
$J_{CV}(\theta) \approx J_{train}(\theta)$, both high; $\lambda$ is too big.
High Variance:
$J_{CV}(\theta)$ is high, $J_{train}(\theta)$ is low; $\lambda$ is too small.
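
A sketch of the sweep, using Ridge regression where `alpha` plays the role of $\lambda$ (the data variables are assumed to exist as numpy arrays):

```python
# Sketch of the lambda sweep; Ridge's `alpha` parameter plays the role
# of lambda. X_train, y_train, X_cv, y_cv are assumed numpy arrays.
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

def pick_lambda(X_train, y_train, X_cv, y_cv):
    lambdas = [0.0] + [0.01 * 2 ** k for k in range(11)]  # 0, 0.01, 0.02, ..., 10.24
    cv_errors = {}
    for lam in lambdas:
        model = Ridge(alpha=lam).fit(X_train, y_train)
        cv_errors[lam] = mean_squared_error(y_cv, model.predict(X_cv))
    return min(cv_errors, key=cv_errors.get)  # lambda with the least J_CV
```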

5. Learning Curves

x-axis is the training-set size $m$; y-axis is error.

High Bias:

If bias is high, adding more training data won't help: both $J_{train}$ and $J_{CV}$ plateau at a high error.


High Variance:

If variance is high, adding more training data is likely to help: the gap between $J_{CV}$ and $J_{train}$ narrows as $m$ grows.
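
A sketch that produces the curve's data points by training on growing subsets (variable names are illustrative):

```python
# Sketch of a learning curve: train on the first m examples and record
# both errors. Flat, close, high curves suggest high bias; a persistent
# gap between the two suggests high variance.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def learning_curve_points(X_train, y_train, X_cv, y_cv):
    sizes, train_err, cv_err = [], [], []
    for m in range(2, len(X_train) + 1):
        model = LinearRegression().fit(X_train[:m], y_train[:m])
        sizes.append(m)
        train_err.append(mean_squared_error(y_train[:m], model.predict(X_train[:m])))
        cv_err.append(mean_squared_error(y_cv, model.predict(X_cv)))
    return sizes, train_err, cv_err
```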

6. Solutions for Bias & Variance

High Bias:
- add more features;
- add polynomial features;
- decrease $\lambda$

High Variance:
- get more training examples;
- use fewer features;
- increase $\lambda$

7. Bias & Variance for Neural Networks

Small Network: prone to high bias (underfitting), since it has few parameters.
Big Network: prone to high variance (overfitting); use $\lambda$ regularization to control it.
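
As a sketch, in scikit-learn's `MLPClassifier` the L2 penalty `alpha` plays the role of $\lambda$; the layer sizes below are illustrative assumptions:

```python
# Sketch: network size vs. regularization. MLPClassifier's `alpha` is
# an L2 penalty playing the role of lambda; sizes are illustrative.
from sklearn.neural_network import MLPClassifier

# A small network has few parameters and tends toward high bias.
small_net = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000)

# A big network tends toward high variance; raise alpha (lambda) to
# regularize it rather than shrinking the architecture.
big_net = MLPClassifier(hidden_layer_sizes=(100, 100), alpha=1.0, max_iter=2000)
```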

8. Error Metrics: Precision & Recall

Let y = 1 denote the rare (positive) class.
- Precision: of all examples predicted y = 1, what fraction actually have y = 1?
- Recall: of all the actual rare cases, what fraction are correctly detected?

How to combine precision and recall into a single number? Use the F score:

$$F = \frac{2PR}{P + R}$$
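
A sketch computing all three metrics from raw counts (assumes `y_true` and `y_pred` are 0/1 numpy arrays with at least one predicted and one actual positive, so the denominators are nonzero):

```python
# Sketch: precision, recall, and F score for the rare class (y = 1).
import numpy as np

def precision_recall_f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    precision = tp / (tp + fp)                  # of predicted positives, fraction correct
    recall = tp / (tp + fn)                     # of actual positives, fraction detected
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```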
