Paper reading (Feb 25): an optimization problem used in image tasks

Two-Sided Sparse Learning with Augmented Lagrangian Method

This paper targets the sparse learning problem that arises in any dictionary-based image task, whether classification, denoising, or inpainting. It introduces a two-sided sparse learning method that captures more information during sparse feature selection, and it solves the resulting optimization problem with ADMM and the augmented Lagrangian method.
https://www.researchgate.net/publication/331192514_Two-Sided_Sparse_Learning_with_Augmented_Lagrangian_Method

1. Sparse learning model

In this paper, the model is applied to a classification problem.

We use $\Phi \in R^{m\times n}$ ($m \ll n$) to denote the training data matrix consisting of $n$ input samples whose classes are known in advance.
Given an arbitrary sample $y \in R^m$, the sparse learning model aims to find the sparse representation $x$ of $y$ under $\Phi$:
$$y = \Phi x$$
where $x \in R^n$ and the number of nonzero elements in $x$ should be no more than a specified threshold $k$.

For computational convenience, the optimization problem can be written as:
$$\min_x \|x\|_1 \quad s.t. \quad y = \Phi x$$
The sparsity constraint on $x$ is enforced through the L1 norm, a convex surrogate for directly counting the nonzero elements.
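As a hands-on illustration (not code from the paper), this equality-constrained problem is commonly solved in its unconstrained Lasso form $\min_x \frac{1}{2}\|y - \Phi x\|_2^2 + \lambda\|x\|_1$. Below is a minimal ISTA (iterative soft-thresholding) sketch in NumPy; the weight `lam`, the iteration count, and the toy data are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||y - Phi x||_2^2 + lam*||x||_1 by proximal gradient."""
    x = np.zeros(Phi.shape[1])
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)       # gradient of the smooth data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage: m << n with a sparse ground-truth code.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 100))
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.0, -2.0, 0.5]
x_hat = ista(Phi, Phi @ x_true)
print("indices with large coefficients:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```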

2. Two-Sided Sparse Learning

Instead of considering only column-wise sparsity, this two-sided sparse learning model also takes the row-wise sparsity of features into account.

Two-sided sparsity can thus reduce the reconstruction error while picking out the most representative features.

The proposed model is as follows:

$$y = D\Phi x$$
Notation:
$y \in R^m$ is an arbitrary sample vector with $m$ features.
$\Phi \in R^{m\times n}$ ($m \ll n$) is the dictionary consisting of $n$ training samples.
$D \in R^{m\times m}$ ensures the sparsity of features.
$x \in R^n$ is the sparse representation of $y$.

Both $x$ and $y$ are vectors here.

In this work, we want to reconstruct $x$ from $y$.

Then our model can be transformed to:
$$\min_{X,D} \frac{1}{2}\|Y - D\Phi X\|_F^2 + \lambda_1\|X\|_1 + \lambda_2\Omega(D)$$
where $\lambda_1, \lambda_2 > 0$ and $\Omega(D)$ is a penalty term on $D$.
In this model, $X$ and $Y$ are matrices whose columns are the vectors $x$ and $y$.

For the penalty term $\Omega(D)$, we consider the L1 norm and the Frobenius norm.
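For concreteness, here is a sketch of that objective in NumPy. The exact form of $\Omega(D)$ is not spelled out in this note, so an elastic-net-style combination $\|D\|_1 + \|D\|_F^2$ is assumed purely for illustration:

```python
import numpy as np

def objective(Y, D, Phi, X, lam1, lam2):
    """Two-sided objective; Omega(D) assumed to mix L1 and Frobenius norms."""
    fit = 0.5 * np.linalg.norm(Y - D @ Phi @ X, "fro") ** 2
    l1_X = lam1 * np.abs(X).sum()                       # column-side sparsity
    omega_D = lam2 * (np.abs(D).sum()                   # L1 part of Omega(D)
                      + np.linalg.norm(D, "fro") ** 2)  # Frobenius part
    return fit + l1_X + omega_D
```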

We use ADMM to solve this optimization problem, alternately solving for one variable while fixing the other at each iteration.


(1) First, fixing $D$, the subproblem in $X$ is an L1-regularized least-squares problem:
$$\min_X \frac{1}{2}\|Y - D\Phi X\|_F^2 + \lambda_1\|X\|_1$$

(2) Then, fixing $X$ and introducing an auxiliary variable $Z$, the subproblem in $D$ can be written, following ADMM, as:
$$\min_{D,Z} \frac{1}{2}\|Y - D\Phi X\|_F^2 + \lambda_2\Omega(Z) \quad s.t. \quad D - Z = 0$$
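A minimal scaled-form ADMM sketch for this D-subproblem, not taken from the paper: it keeps the same elastic-net-style assumption $\Omega(Z) = \|Z\|_1 + \|Z\|_F^2$ and a fixed penalty parameter `rho`, both illustration-only choices. The D-update is a closed-form least-squares solve, the Z-update is an elementwise proximal step, and the dual variable `U` enforces $D = Z$; this dual update is where the augmented Lagrangian enters.

```python
import numpy as np

def soft_threshold(V, t):
    """Elementwise proximal operator of the L1 norm."""
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def solve_D_admm(Y, Phi, X, lam2, rho=1.0, n_iter=100):
    """ADMM for min_D 0.5||Y - D Phi X||_F^2 + lam2*Omega(Z), s.t. D - Z = 0,
    with Omega(Z) = ||Z||_1 + ||Z||_F^2 assumed for illustration."""
    m = Y.shape[0]
    A = Phi @ X                        # fixed while D is updated
    Z = np.zeros((m, m))
    U = np.zeros((m, m))               # scaled dual variable for D - Z = 0
    G = A @ A.T + rho * np.eye(m)      # precomputed for the D-update
    for _ in range(n_iter):
        # D-update: minimize 0.5||Y - DA||_F^2 + (rho/2)||D - Z + U||_F^2
        D = (Y @ A.T + rho * (Z - U)) @ np.linalg.inv(G)
        # Z-update: elementwise prox of lam2*(|z| + z^2) evaluated at D + U
        Z = soft_threshold(rho * (D + U), lam2) / (2.0 * lam2 + rho)
        # Dual ascent on the constraint residual
        U = U + D - Z
    return D
```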

Reposted from blog.csdn.net/weixin_39434589/article/details/87917147