The Interior Point Method

1. The interior point method searches inside the feasible region and finally converges to the optimal solution on the boundary

2. Commonly used interior point methods include the affine scaling method, the logarithmic barrier method, and the primal-dual method

Besides the simplex method and dual theory for solving linear programming (LP) problems, there is also a search-based approach that moves through the inside of the feasible region: the interior point method. Today we study three interior point methods: affine scaling, log-barrier, and primal-dual.

As usual, to ease the introduction, we use a new example, Franny's firewood:

Every year Franny sells 3 cords of firewood from her grove. One potential customer is willing to pay $90 per half-cord; another will pay $150 per cord. The question is how much firewood Franny should sell to each customer to maximize her revenue. Assume each customer will buy as much as she offers.

Modeling the above scenario, we easily get:

$$\max\ 90x_1 + 150x_2 \quad \text{s.t.}\quad 0.5x_1 + x_2 \le 3,\quad x_1, x_2 \ge 0$$

Its feasible region, drawn in the two-dimensional plane:

[Figure: feasible region of the Franny firewood LP in the (x1, x2) plane]

0. The Logic of the Interior Point Method

The interior point method is an algorithm that iterates its search inside the feasible region, which brings a significant advantage: no constraint is active, so every direction is feasible. But we know that the direction of greatest improvement for a maximization model is the gradient, and for a minimization model the negative gradient, so a naive search along that direction inevitably stops at a boundary point. In the Franny firewood problem, improving from the interior point x0 = (1, 0.5) along the gradient with the maximum step size, the next point x1 lands on the boundary of the feasible region:

[Figure: the gradient step from x0 = (1, 0.5) reaches the boundary point x1 ≈ (1.92, 2.04)]
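As a quick sanity check, a few lines of Python (a sketch we add here, using only the model data above) reproduce that boundary point:

import numpy as np

# Franny model: max 90*x1 + 150*x2  s.t.  0.5*x1 + x2 <= 3, x >= 0
c = np.array([90.0, 150.0])        # objective gradient
a = np.array([0.5, 1.0])           # constraint coefficients
b = 3.0

x0 = np.array([1.0, 0.5])          # interior starting point
lam = (b - a @ x0) / (a @ c)       # largest step before 0.5*x1 + x2 hits 3
x1 = x0 + lam * c
print(np.round(x1, 3))             # -> [1.923 2.038], on the boundary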

Therefore, the key to the interior point method is keeping the search points in the "middle" of the feasible region while steering toward the optimal solution. The method must start from an interior feasible point and search through a sequence of interior points, converging to the optimal solution on the boundary only in the limit.

To check whether each iterate is an interior point, we first convert the LP into standard form, again taking the Franny firewood problem as an example:

$$\max\ 90x_1 + 150x_2 + 0x_3 \quad \text{s.t.}\quad 0.5x_1 + x_2 + x_3 = 3,\quad x_1, x_2, x_3 \ge 0$$

In this form it is easy to judge whether a feasible solution is an interior point: given a feasible solution of the LP in standard form, if every component of the solution is strictly positive, then it is an interior point.

Here is an example to illustrate:

x = (1, 0.5, 2) satisfies 0.5(1) + 0.5 + 2 = 3 with every component strictly positive, so it is an interior point; x = (6, 0, 0) satisfies the constraint too, but its zero components place it on the boundary.
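In code this test is a one-liner; a minimal sketch (the helper name is_interior is our own choice):

import numpy as np

def is_interior(A, b, x, tol=1e-9):
    # interior point of the standard form: Ax = b holds and every component > 0
    return np.allclose(A @ x, b, atol=tol) and np.all(x > tol)

A = np.array([[0.5, 1.0, 1.0]])
b = np.array([3.0])
print(is_interior(A, b, np.array([1.0, 0.5, 2.0])))   # True: interior
print(is_interior(A, b, np.array([6.0, 0.0, 0.0])))   # False: on the boundary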

So for a standard LP, an interior point must satisfy two conditions: the constraint Ax = b holds, and every component is strictly greater than 0. The direction Δx toward the next search point must in turn satisfy:

$$A\,\Delta x = 0$$

so that $A(x + \lambda\,\Delta x) = Ax = b$ holds for any step size $\lambda$.

According to projection theory, we get the moving direction of the interior point:

$$\Delta x = P\,d, \qquad P = I - A^\top (A A^\top)^{-1} A$$

Generally, d is the gradient (for maximization) or the negative gradient (for minimization), since that is the fastest improving direction, while the projection makes Δx a feasible direction. The theory above says the move direction stays as close as possible to the improving direction while remaining feasible. But is Δx itself an improving direction? The answer is yes, and we can prove it:

$$c^\top \Delta x = c^\top P c = c^\top P^\top P c = \|P c\|^2 \ge 0$$

since the projection matrix is symmetric ($P = P^\top$) and idempotent ($P^2 = P$).

For example, in the Franny standard form, projecting c = (90, 150, 0) with A = (0.5, 1, 1) gives Δx ≈ (46.67, 63.33, -86.67); one can check that AΔx = 0 and c^T Δx > 0.
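The same arithmetic in NumPy, verifying both properties of the projected direction (a sketch using the Franny data; nothing here beyond the formulas above):

import numpy as np

c = np.array([90.0, 150.0, 0.0])
A = np.array([[0.5, 1.0, 1.0]])

P = np.eye(3) - A.T @ np.linalg.inv(A @ A.T) @ A    # projector onto null(A)
dx = P @ c                                          # projected direction

print(np.round(dx, 2))        # -> [ 46.67  63.33 -86.67]
print(np.round(A @ dx, 8))    # -> [0.]   feasible: A @ dx = 0
print(c @ dx > 0)             # -> True   improving for the max problem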

To sum up, the core of the interior point method is to avoid the boundary of the feasible region until the optimal solution is approached, while ensuring that every iteration moves in a feasible improving direction.

1. Affine Scaling Algorithm

Following the principle of the previous section, we introduce an effective tool for avoiding the boundary: scaling. First, define the diagonal matrix built from the current solution:

$$X_t = \operatorname{diag}\big(x^{(t)}\big) = \begin{pmatrix} x^{(t)}_1 & & \\ & \ddots & \\ & & x^{(t)}_n \end{pmatrix}$$

Because the components of an interior point are strictly greater than 0, the matrix X_t is invertible, and we can construct the affine scaling and its inverse:

$$y = X_t^{-1}\,x \quad\text{(affine scaling)}, \qquad x = X_t\,y \quad\text{(inverse affine scaling)}$$

After the transformation, the original standard linear program takes a new form:

$$\max\ (X_t c)^\top y \quad \text{s.t.}\quad (A X_t)\,y = b,\quad y \ge 0$$

Take the Franny firewood problem as an example, with c = (90, 150, 0), A = (0.5, 1, 1), b = (3); at the current interior point x_t = (1, 0.5, 2):

$$\max\ 90y_1 + 75y_2 \quad \text{s.t.}\quad 0.5y_1 + 0.5y_2 + 2y_3 = 3,\quad y \ge 0$$
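These transformed coefficients are easy to confirm in a couple of lines (a sketch; x_t = (1, 0.5, 2) is the interior starting point also used in the appendix script):

import numpy as np

c = np.array([90.0, 150.0, 0.0])
A = np.array([0.5, 1.0, 1.0])
x_t = np.array([1.0, 0.5, 2.0])    # current interior point

X_t = np.diag(x_t)
print(c @ X_t)    # scaled objective   -> [90. 75.  0.]
print(A @ X_t)    # scaled constraint  -> [0.5 0.5 2. ]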

To get the next search point in the affine-scaled form, we apply projection theory again:

$$\Delta y = P_t\,c_t, \qquad P_t = I - A_t^\top (A_t A_t^\top)^{-1} A_t, \qquad A_t = A X_t,\quad c_t = X_t\,c$$

The feasible improving direction Δx for the original objective is then recovered by the inverse affine scaling:

$$\Delta x = X_t\,\Delta y = X_t P_t X_t\,c$$

Then there is one last question: what step size to use for the move:

$$x^{(t+1)} = x^{(t)} + \lambda\,\Delta x$$

Obviously, we must ensure that every component of the solution stays strictly greater than 0 at every step, until the iterates converge to the optimal solution. For a maximization problem, if Δx is non-negative componentwise, the corresponding standard LP model is unbounded; otherwise:

$$\lambda = \frac{1}{\big\|X_t^{-1}\,\Delta x\big\|} = \frac{1}{\|\Delta y\|}$$

Why this choice? Because after the affine scaling, every component of the current point y equals 1:

$$y^{(t)} = X_t^{-1}\,x^{(t)} = (1, 1, \dots, 1)^\top$$

This shows that the search in the scaled space takes place inside the unit ball (of dimension n) centered at the all-ones point; limiting the step to this ball keeps every component of y, and hence of x, strictly positive, which is exactly what the affine scaling achieves.

In summary, we get the steps of the affine scaling algorithm:

Step 0: choose an interior feasible point x^(0) (Ax^(0) = b, every component strictly positive) and set t = 0.
Step 1: form X_t = diag(x^(t)) and compute the direction Δx = X_t P_t X_t c (use -c for a minimization).
Step 2: if Δx = 0, stop: the current point is optimal; if Δx ≥ 0 componentwise, stop: the model is unbounded.
Step 3: set λ = 1 / ||X_t^{-1} Δx||, update x^(t+1) = x^(t) + λ Δx, let t ← t + 1, and return to Step 1.
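The whole iteration condenses into a short function; a minimal sketch of one step (the function name is ours; the full iterative script is in the appendix):

import numpy as np

def affine_scaling_step(c, A, x):
    # one affine-scaling iteration for max c@x s.t. A@x = b, x > 0
    X = np.diag(x)
    A_s = A @ X                                   # scaled constraint matrix
    P = np.eye(len(x)) - A_s.T @ np.linalg.inv(A_s @ A_s.T) @ A_s
    dx = X @ (P @ (X @ c))                        # direction in original space
    if np.all(dx >= 0):
        raise ValueError('unbounded (or optimal if dx is all zeros)')
    lam = 1.0 / np.linalg.norm(np.linalg.inv(X) @ dx)
    return x + lam * dx

x = np.array([1.0, 0.5, 2.0])
x = affine_scaling_step(np.array([90.0, 150.0, 0.0]),
                        np.array([[0.5, 1.0, 1.0]]), x)
print(np.round(x, 4))    # -> roughly [1.7315 0.7979 1.3364]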

Taking the Franny firewood problem as an example, let's take a look at the details of the affine scaling algorithm:

[Table: affine-scaling iterations for the Franny firewood problem: objective value, direction, step size, and next point]

By the ninth iteration, the solution is already very close to the optimum. We can easily implement this process in Python:

[Screenshot: iteration log of the affine-scaling script; the code is in the appendix]

The following 3D diagram shows the search process (only the first 6 iterations are shown):

[Figure: 3D view of the affine-scaling search path for the Franny firewood problem]

The two-dimensional plot shows more clearly that the search stays at interior points of the feasible region:

[Figure: 2D view of the search path, strictly inside the feasible region]

Note: the code is placed in the appendix

2. Logarithmic Barrier Method

Next we introduce the second interior point method, the logarithmic barrier method. First, the model itself:

$$\max\ c^\top x + \mu \sum_{j=1}^{n} \ln x_j \quad \text{s.t.}\quad Ax = b$$

where μ > 0 is the barrier factor (the logarithm itself enforces x > 0).

The idea here is to keep the search points away from the boundary as much as possible. Why? Because:

$$\lim_{x_j \to 0^+} \ln x_j = -\infty$$

so the barrier term drives the objective toward $-\infty$ whenever any component approaches the boundary.

Take the Franny firewood problem as an example:

$$\max\ 90x_1 + 150x_2 + \mu(\ln x_1 + \ln x_2 + \ln x_3) \quad \text{s.t.}\quad 0.5x_1 + x_2 + x_3 = 3$$

It can be seen that the closer a point is to the boundary, the more the barrier term distorts the true objective value, which keeps the search algorithm from touching the boundary of the feasible region during iterations.
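A quick numeric illustration of the barrier effect (a sketch; the three feasible points are our choices, and μ = 16 matches the initial factor in the appendix script):

import numpy as np

mu = 16.0
c = np.array([90.0, 150.0, 0.0])
for x in (np.array([1.0, 0.5, 2.0]),         # comfortably interior
          np.array([1.0, 0.01, 2.49]),       # x2 near the boundary
          np.array([1.0, 1e-6, 2.499999])):  # x2 nearly zero
    true_val = c @ x
    barrier_val = true_val + mu * np.sum(np.log(x))
    # the barrier objective collapses as x2 -> 0
    print('true {:9.2f}   with barrier {:9.2f}'.format(true_val, barrier_val))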

The program with the logarithmic barrier is nonlinear, so we iterate using the idea of Newton's method, a second-order Taylor expansion of the multivariate function. We skip the derivation and give the move direction directly:

$$\Delta x = X_t\,P_t\,\big(X_t c + \mu e\big)$$

where $e = (1, \dots, 1)^\top$ and $P_t$ is the same projection matrix as in the affine scaling method.

The step size for the barrier method's Newton iteration is:

$$\lambda = \min\Big\{\frac{1}{\mu},\ \min_{j:\,\Delta x_j < 0} \frac{-0.9\,x_j}{\Delta x_j}\Big\}$$

In the formula above, the multiplier is set to 0.9 because, due to the nonlinearity of the objective, the search may stop improving before the maximum step is reached, or the objective may even begin to decrease for larger steps near the current solution; backing off by 0.9 keeps the step safe.

Next, only the choice of the barrier factor remains. Obviously, when the factor μ is large it strongly discourages interior points from approaching the boundary, and when it is small it allows the search to move toward the boundary. The barrier algorithm therefore usually starts with a large factor and gradually reduces it toward 0 as the search progresses.

Based on the above, we get the steps of the barrier algorithm:

Step 0: choose an interior feasible point x^(0) and an initial barrier factor μ (our code starts with μ = 16); set t = 0.
Step 1: compute the direction Δx = X_t P_t (X_t c + μe) and the step size λ above, and update x^(t+1) = x^(t) + λ Δx.
Step 2: if the barrier objective improved by less than a tolerance, either stop (if μ is already small enough) or halve μ.
Step 3: let t ← t + 1 and return to Step 1.
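As a sketch, one barrier iteration as a helper function (the name barrier_step is ours; the μ-halving loop around it is shown in the appendix script):

import numpy as np

def barrier_step(c, A, x, mu):
    # one log-barrier iteration for max c@x + mu*sum(log x) s.t. A@x = b
    X = np.diag(x)
    A_s = A @ X
    P = np.eye(len(x)) - A_s.T @ np.linalg.inv(A_s @ A_s.T) @ A_s
    dx = X @ (P @ (X @ c + mu))               # mu broadcasts onto every component
    ratios = [-0.9 * x[j] / dx[j] for j in range(len(x)) if dx[j] < 0]
    lam = min([1.0 / mu] + ratios)            # cap at 1/mu, back off from boundary
    return x + lam * dx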

Still taking the Franny firewood problem as an example, we implement this process in code. The following figure shows the first five iterations of the two-dimensional search:

[Figure: first five log-barrier iterations in the (x1, x2) plane]

Our code reached the optimum after 12 iterations. Some screenshots of the output follow; see the appendix for the code.

[Screenshot: iteration log of the log-barrier script; the code is in the appendix]

3. Primal-Dual Interior Point Method

In this section we discuss the last of the interior point algorithms, the primal-dual interior point method. From dual theory we have the standard primal problem, its dual, and the complementary slackness conditions:

$$\text{(P)}\quad \min\ c^\top x \quad \text{s.t.}\quad Ax = b,\ x \ge 0 \qquad\qquad \text{(D)}\quad \max\ b^\top v \quad \text{s.t.}\quad A^\top v \le c$$

with complementary slackness requiring $x_j\,(c - A^\top v)_j = 0$ for every $j$.

Related post: A dual theory of linear programming

We add slack variables to the dual problem to remove its inequality constraints, obtaining:

$$\max\ b^\top v \quad \text{s.t.}\quad A^\top v + w = c,\quad w \ge 0$$

At the optimal solution, we have:

$$x_j\,w_j = 0 \ \ \text{for all } j, \quad\text{equivalently}\quad x^\top w = 0$$

If a solution is not optimal, the equation above does not hold, and the primal-dual interior point search iterates on the duality gap:

$$\text{gap} = c^\top x - b^\top v = x^\top w$$

The iterative principle: the primal-dual interior point method keeps the solutions of the primal and dual problems strictly feasible at every iteration, and systematically reduces the violation of complementary slackness during the search.

Specifically, the primal-dual interior point method sets a target μ > 0 for the complementarity violations, finds a feasible move direction that reduces the difference between the current complementarity products and the target, and gradually drives μ to 0 to achieve complementary slackness. This requires:

$$A\,\Delta x = 0, \qquad A^\top \Delta v + \Delta w = 0, \qquad w_j\,\Delta x_j + x_j\,\Delta w_j = \mu - x_j\,w_j \ \ \text{for all } j$$

The first two equations preserve primal and dual feasibility; the third linearizes the complementarity target $x_j w_j = \mu$.
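Although we defer a full implementation, computing this direction is just one linear solve. A minimal sketch under the notation above (the function name and the dense np.block assembly are our choices, fine at this scale):

import numpy as np

def primal_dual_direction(A, x, w, mu):
    # solve: A dx = 0;  A^T dv + dw = 0;  w*dx + x*dw = mu - x*w (componentwise)
    m, n = A.shape
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)       ],
        [np.diag(w),       np.zeros((n, m)), np.diag(x)      ],
    ])
    rhs = np.concatenate([np.zeros(m), np.zeros(n), mu - x * w])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:n + m], sol[n + m:]   # dx, dv, dw

# e.g. for the Franny data: A = np.array([[0.5, 1.0, 1.0]]), so m = 1, n = 3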

Then the last question remains: the choice of the step size λ. The algorithm keeps the search strictly inside the feasible region by means of a boundary-prevention constant δ ∈ (0, 1):

$$\lambda = \min\Big\{\min_{j:\,\Delta x_j < 0} \frac{-\delta\,x_j}{\Delta x_j},\ \ \min_{j:\,\Delta w_j < 0} \frac{-\delta\,w_j}{\Delta w_j}\Big\}$$

In summary, we get the steps of the primal-dual interior point method:

Step 0: choose strictly feasible primal and dual solutions x^(0) and (v^(0), w^(0)), a complementarity target μ > 0, and a constant δ ∈ (0, 1); set t = 0.
Step 1: solve the direction system above for (Δx, Δv, Δw).
Step 2: compute the step size λ by the rule above and update x, v, and w along their directions.
Step 3: if the duality gap x^T w is small enough, stop; otherwise reduce μ, let t ← t + 1, and return to Step 1.

We will not develop a full implementation of this algorithm here; a future post will combine dual theory with implementation code. That is all for today. Happy National Day!

[Appendix]:

# Affine scaling method
import sys
import numpy as np


c = np.array([90.0, 150.0, 0.0])
A = np.array([0.5, 1.0, 1.0])
t = 0
x_t = np.array([1.0, 0.5, 2.0])                 # interior starting point
while True:
    print('objective value at t={}: {}'.format(t, np.dot(c, x_t)))
    # stop once a component has converged to 0: we have reached the boundary
    for i in list(x_t):
        if round(i, 6) == 0:
            print('iteration finished, optimal solution: {}'.format(np.around(x_t, 4)))
            sys.exit()
    X_t = np.diag(x_t)                          # scaling matrix
    c_t = np.dot(c, X_t)                        # scaled objective coefficients
    A_t = np.dot(A, X_t)                        # scaled constraint row
    # projection onto the null space of A_t (single-constraint case)
    P_t = np.identity(3) - np.dot(A_t.reshape(3, 1), A_t.reshape(1, 3)) / np.dot(A_t, A_t.T)
    delta_x = np.dot(X_t, np.dot(P_t, c_t))     # move direction in original space
    print('move direction: {}'.format(np.around(delta_x, 2)))
    a = np.dot(delta_x, np.linalg.inv(X_t))
    lam = 1 / np.sqrt(np.dot(a, a))             # step size 1/||X_t^-1 delta_x||
    print('step size: {:.5f}'.format(lam))
    x_t = x_t + lam * delta_x
    print('next search point: {}'.format(np.around(x_t, 4)))
    t += 1
    print('*****************************************')
# Log-barrier method
import numpy as np

c = np.array([90.0, 150.0, 0.0])
A = np.array([0.5, 1.0, 1.0])
D = []                                          # record iterates for the plots below
t = 0
x_t = np.array([1.0, 0.5, 2.0])                 # interior starting point
u = 16                                          # barrier factor mu
while True:
    D.append(list(x_t))
    y_old = np.dot(c, x_t) + u * np.sum(np.log(x_t))
    print('true objective at t={}: {}'.format(t, np.dot(c, x_t)))
    print('barrier objective at t={}: {}'.format(t, y_old))
    X_t = np.diag(x_t)
    c_t = np.dot(c, X_t)
    A_t = np.dot(A, X_t)
    P_t = np.identity(3) - np.dot(A_t.reshape(3, 1), A_t.reshape(1, 3)) / np.dot(A_t, A_t.T)
    delta_x = np.dot(np.dot(X_t, P_t), c_t + u) # direction X P (Xc + mu*e)
    print('move direction: {}'.format(np.around(delta_x, 3)))
    # step size: cap at 1/mu and back off 0.9 before any component reaches 0
    lam_list = []
    for i in range(3):
        if delta_x[i] < 0:
            lam_list.append(-0.9 * x_t[i] / delta_x[i])
    lam = min(1 / u, min(lam_list))
    print('barrier factor: {}, step size: {:.5f}'.format(u, lam))
    x_t = x_t + lam * delta_x
    print('next search point: {}'.format(np.around(x_t, 4)))
    y_new = np.dot(c, x_t) + u * np.sum(np.log(x_t))
    t += 1
    if y_new - y_old < 1:                       # little improvement: shrink mu or stop
        if u < 0.5:
            print('iteration finished, optimal solution: {}'.format(np.around(x_t, 2)))
            break
        else:
            u = u / 2
    print('*****************************************')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['SimHei']    # CJK-capable font (optional with English titles)
plt.rcParams['axes.unicode_minus'] = False
# Build a DataFrame of iterates from the list D recorded above; columns u, v, w
# hold the step from each point to the next (this reconstruction is ours: any
# table with these six columns works for the plots below).
A = pd.DataFrame(D, columns=['x1', 'x2', 'x3'])
steps = A.shift(-1) - A
A[['u', 'v', 'w']] = steps.to_numpy()
A = A.dropna()
fig = plt.figure(figsize=(12, 6))
ax = plt.axes(projection='3d')
ax.scatter(A['x1'], A['x2'], A['x3'], color='red', s=25)
ax.quiver(A['x1'], A['x2'], A['x3'], A['u'], A['v'], A['w'], arrow_length_ratio=0.2, color='black')
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.set_zlabel('x3')
ax.set_xlim(0, 6)
ax.set_ylim(0, 3)
ax.set_zlim(0, 2)
ax.set_title('Franny firewood problem: search process')
plt.show()
plt.figure(figsize=(6, 4))
# shade the feasible region 0.5*x1 + x2 <= 3, x >= 0
plt.fill_between(np.linspace(0, 6, 500), np.tile(np.array(0), 500), 3 - 0.5 * np.linspace(0, 6, 500), facecolor="#DCDCDC")
plt.scatter(A['x1'], A['x2'], color='black', linewidths=0.5)
for _, row in A.iterrows():
    plt.arrow(row.x1, row.x2, row.u, row.v, length_includes_head=True, head_width=0.1, width=0.02, alpha=0.1, fc='#CD2626', ec='blue')


plt.xlim(0, 6)
plt.ylim(0, 3)
plt.xlabel("x1")
plt.ylabel("x2")
plt.title("Franny firewood problem: 2D search process")
plt.tight_layout()
plt.show()
