Hundred-Face Machine Learning notes

Chapter 3 Classic Algorithms: Support Vector Machine

December 12

For any two linearly separable sets of points, their projections onto the SVM separating hyperplane are linearly inseparable.
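A minimal numeric sketch of this claim (my own illustration, not code from the book), using scikit-learn's linear SVC with a large C to approximate a hard margin:

```python
import numpy as np
from sklearn.svm import SVC

# Two small, linearly separable point sets in 2D (hypothetical example data).
X = np.array([[3.0, 3.0], [4.0, 3.0], [3.0, 4.0],   # class +1
              [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])  # class -1
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard-margin SVM
w, b = clf.coef_[0], clf.intercept_[0]

# Orthogonal projection of every point onto the hyperplane w.x + b = 0.
proj = X - ((X @ w + b) / (w @ w))[:, None] * w

# The hyperplane is a line here, so a linear separator on it is a single
# threshold along the line's direction: the projections are separable
# iff the two classes' coordinate intervals do not overlap.
direction = np.array([-w[1], w[0]]) / np.linalg.norm(w)
t = proj @ direction
pos, neg = t[y == 1], t[y == -1]
separable = pos.min() > neg.max() or neg.min() > pos.max()
print("projections linearly separable on the hyperplane:", separable)  # False
```

In this toy example the two support vectors (3, 3) and (1, 1) project onto essentially the same point of the line, so no threshold along it can separate the projected classes.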

The proof goes roughly as follows. First, argue by contradiction: suppose there were a hyperplane such that the projections of all the support vectors onto it were still linearly separable. For that separable configuration, the support vectors would admit a better separating hyperplane, which contradicts the premise of the SVM that its hyperplane is, by definition, the maximum-margin separating hyperplane. Hence the projections are linearly inseparable.
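For reference, the "maximum-margin" premise that the contradiction relies on is the standard hard-margin formulation (textbook material, not quoted from the book):

```latex
% Hard-margin SVM: the separating hyperplane maximizes the margin 2/||w||,
% which is equivalent to
\[
\min_{w,b}\ \tfrac{1}{2}\|w\|^{2}
\qquad \text{s.t.}\qquad y_i\bigl(w^{\top}x_i + b\bigr) \ge 1,
\quad i = 1,\dots,N.
\]
```

A "better hyperplane" in the argument above means one that separates the support vectors with a strictly larger margin, contradicting the optimality of this problem's solution.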

The author then supplements the proof: the argument above only considered the support vectors, but the support vectors of the better hyperplane are no longer the support vectors of the original hyperplane, so can the contradiction still be drawn? The next step is therefore to prove that the SVM classification result depends only on the support vectors.
(I have a question at this point: do the support vectors stay fixed? That is, no matter how the hyperplane changes, are the support vectors still the same points?)
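A quick numeric check of the claim that the SVM solution depends only on the support vectors (my own sketch, not from the book): refitting a hard-margin linear SVM on the support vectors alone should recover the same hyperplane as fitting on all the points.

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable Gaussian blobs in 2D (hypothetical example data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([3.0, 3.0], 0.5, (50, 2)),
               rng.normal([0.0, 0.0], 0.5, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)

full = SVC(kernel="linear", C=1e6).fit(X, y)             # fit on all points
sv = full.support_                                       # indices of support vectors
reduced = SVC(kernel="linear", C=1e6).fit(X[sv], y[sv])  # fit on support vectors only

print("w, b from all points:      ", full.coef_[0], full.intercept_[0])
print("w, b from support vectors: ", reduced.coef_[0], reduced.intercept_[0])
# The two hyperplanes agree (up to numerical tolerance): dropping all
# non-support vectors leaves the SVM solution unchanged.
```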

The book gives two methods; only the first is stated here, since the second involves concepts from convex optimization. As shown in the figure, the KKT conditions characterize the optimal point of a problem with both equality and inequality constraints. First, from the KKT conditions one reads off the value of the constraint in each of the three possible cases for alpha_i. For a linearly separable problem, every non-support vector satisfies its constraint strictly and therefore has alpha_i = 0; when those alpha_i are zero the corresponding terms vanish, so at the optimum L = (1/2)||w||^2, and the optimization problem involves only the support vectors.
[Figure: the KKT conditions]
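Since the figure is not reproduced here, the standard KKT conditions for the hard-margin SVM are written out below for reference (textbook material, not copied from the book's figure):

```latex
% Lagrangian of the hard-margin SVM, with constraints
% g_i(w,b) = 1 - y_i(w^T x_i + b) <= 0 and multipliers alpha_i.
\[
\begin{aligned}
L(w,b,\alpha) &= \tfrac{1}{2}\|w\|^{2}
  + \sum_{i} \alpha_i \bigl(1 - y_i(w^{\top}x_i + b)\bigr),\\
\nabla_w L = 0 &\;\Rightarrow\; w = \sum_{i} \alpha_i y_i x_i,
  \qquad \frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_{i} \alpha_i y_i = 0,\\
\alpha_i \ge 0, &\qquad 1 - y_i(w^{\top}x_i + b) \le 0,
  \qquad \alpha_i \bigl(1 - y_i(w^{\top}x_i + b)\bigr) = 0.
\end{aligned}
\]
```

The last condition (complementary slackness) is what yields the three cases mentioned above: alpha_i > 0 with the constraint active (a support vector on the margin), alpha_i = 0 with the constraint strict (a non-support vector), and alpha_i = 0 with the constraint active.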


Origin: blog.csdn.net/qq_29027865/article/details/103516991