Maintaining Long Connections with an Nginx Reverse Proxy

[Scenario]

Since HTTP/1.1, the HTTP protocol has supported persistent connections, also known as long connections. Their advantage is that a single TCP connection can carry multiple HTTP requests and responses, reducing the latency and overhead of establishing and closing connections.

If we use nginx as a reverse proxy or load balancer, requests arriving from the client over a long connection are, by default, converted into short connections when forwarded to the server side.

To support long connections end to end, we need some configuration on the nginx server.

【Requirements】

To use long connections with nginx, we must make sure of two things:
1. The connection from the client to nginx is a long connection
2. The connection from nginx to the server is a long connection

As far as the client is concerned, nginx plays the role of a server; conversely, to the server, nginx is a client.

[Keeping a long connection with the client]

To maintain a long connection between the client and nginx, we need:
1. The client to send requests carrying a "keep-alive" Connection header.
2. nginx to be configured with keep-alive support enabled.

[HTTP Configuration]

By default, nginx already enables keep-alive support for client connections. For special scenarios, you can adjust the relevant parameters.

http {
    keepalive_timeout  120s;    # client connection timeout; 0 disables long connections
    keepalive_requests 10000;   # maximum number of requests served over one long connection;
                                # once the limit is reached the connection is closed (default: 100)
}
 

In most cases the default keepalive_requests = 100 is good enough, but for high-QPS scenarios it is necessary to increase this parameter, to avoid large numbers of connections being repeatedly discarded and re-created and to reduce TIME_WAIT sockets.
 

At QPS = 10,000, the client sends 10,000 requests per second (usually over several established long connections). With each connection limited to at most 100 requests, on average about 100 long connections will be closed by nginx every second.

It also means that, to sustain this QPS, the client has to establish about 100 new connections per second.
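In numbers, the connection churn implied by the figures above:

\[
\frac{10000\ \text{requests/s}}{100\ \text{requests per connection}} = 100\ \text{connections closed and re-opened per second}
\]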

Thus, if we inspect the client machine with the netstat command, we will find a large number of socket connections in the TIME_WAIT state (even though keep-alive is in effect between the client and nginx the whole time).

[Keeping a long connection with the server]

To maintain a long connection between nginx and the upstream server, the simplest configuration is as follows:

http {
    upstream backend {
        server 192.168.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.0.2:8080 weight=1 max_fails=2 fail_timeout=30s;
        keepalive 300;    # this is very important!
    }

    server {
        listen 8080 default_server;
        server_name "";

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # use HTTP/1.1
            proxy_set_header Connection "";  # clear the Connection header to enable long connections (nginx's proxied default is "close")
        }
    }
}
 

[Upstream Configuration]

In the upstream block there is a particularly important parameter: keepalive.

This parameter is not the same as the keepalive_timeout shown earlier in the http block.

Its meaning is the maximum number of idle connections in the connection pool.

Not clear? No problem, let's look at an example:

Scenario:

There is an HTTP service acting as the upstream server that receives requests, with a response time of 100 milliseconds.

For a required performance of 10,000 QPS, we need to establish about 1,000 HTTP connections between nginx and the upstream server (1,000 connections / 0.1 s per request = 10,000 requests per second).
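This is in effect Little's law applied to the connection pool: the number of connections in flight equals the request rate times the time each request occupies a connection.

\[
N = \text{QPS} \times T_{\text{response}} = 10000/\text{s} \times 0.1\ \text{s} = 1000\ \text{connections}
\]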

Ideal case:

Suppose requests arrive evenly and smoothly, each request takes 100 ms, and at the end of each request the connection is immediately put back into the pool and set to the idle state.

Taking 0.1 s as the unit of time:

1. The keepalive value is set to 10, and in each 0.1 s interval there are 1,000 connections in use

2. In the first 0.1 s, we receive 1,000 requests in total, and their connections are released at the end of the interval

3. In the next interval (up to 0.2 s), another 1,000 requests arrive and are released by the end of 0.2 s

With requests and responses both uniform, the connections released every 0.1 s are exactly enough for the next interval: no new connections need to be established, and no connections in the pool are left idle.

Case one:

Responses are smooth, but requests are not.

4. In the interval up to 0.3 s, we receive only 500 requests; the other 500 fail to arrive due to network delays or other reasons.

 At this point, nginx detects 500 idle connections in the pool and, with keepalive = 10, directly closes (500 - 10) = 490 of them.

5. In the interval up to 0.4 s, we receive 1,500 requests, but the pool now holds only (500 + 10) = 510 connections, so nginx has to re-establish (1500 - 510) = 990 connections.

 If it had not closed those 490 connections in step 4, we would only have needed to re-establish 500 connections.

Case two:

Requests are smooth, but responses are not.

4. In the interval up to 0.3 s, we receive 1,500 requests in total.

 But the pool holds only 1,000 connections, so nginx creates 500 more, for a total of 1,500 connections.

5. In the next interval (up to 0.4 s), all of those connections are released, but only 500 requests arrive.

 nginx detects 1,000 connections sitting idle in the pool and closes (1000 - 10) = 990 of them.

The driver of this repeated oscillation in the connection count is precisely keepalive, the maximum number of idle connections.

In both cases above, an unreasonable keepalive setting caused nginx to repeatedly release and create connections, wasting resources.

The keepalive parameter must be set with care, especially for scenarios with relatively high QPS or an unstable network. In general, the required number of long connections can be roughly calculated from the average response time and the QPS.

Then set keepalive to 10% to 30% of that number of connections.
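Applied to the scenario above (10,000 QPS, 100 ms average response time, so about 1,000 connections in flight), a sketch of this sizing rule; the servers are carried over from the earlier example, and the exact value chosen inside the 10%-30% band is an assumption:

upstream backend {
    server 192.168.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.0.2:8080 weight=1 max_fails=2 fail_timeout=30s;

    # required long connections ≈ QPS × average response time = 10000 × 0.1 s = 1000
    # keepalive ≈ 10% to 30% of 1000, i.e. between 100 and 300 idle connections
    keepalive 300;
}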

[Location Configuration]

 

 

http {
    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # use HTTP/1.1
            proxy_set_header Connection "";  # clear the Connection header (nginx's proxied default is "close")
        }
    }
}
 

The HTTP protocol only supports long connections from version 1.1 onward, so it is best to set the version to 1.1 with the proxy_http_version directive.

HTTP/1.0 does not support the keepalive feature; when HTTP/1.1 is not used, the back-end service will return a 101 error and then disconnect.

The "Connection" header can choose to be cleaned up, so even short connections between Yes, Nginx and upstream also can open the long connection between the Client and Nginx.

[Advanced] Another approach

 

 

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream backend {
        server 192.168.0.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.0.2:8080 weight=1 max_fails=2 fail_timeout=30s;
        keepalive 300;
    }

    server {
        listen 8080 default_server;
        server_name "";

        location / {
            proxy_pass http://backend;
            proxy_connect_timeout 15;    # timeout for connecting to the upstream server (unitless means seconds; must not exceed 75s)
            proxy_read_timeout 60s;      # how long nginx waits for a response to the request
            proxy_send_timeout 12s;      # timeout for sending the request to the upstream server
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
 

The role of the map block inside http is:

to make the "Connection" header field forwarded to the proxied server depend on the value of the "Upgrade" field in the client request header.

If $http_upgrade matches no other value, the "Connection" header field is set to upgrade.

If $http_upgrade is an empty string, the "Connection" header field is set to close.

【Supplement】

NGINX supports WebSocket.

For NGINX to send the client's Upgrade request on to the back-end server, the Upgrade and Connection headers must be set explicitly.

This can be considered a very common use case of the situation described above.

The HTTP Upgrade header mechanism is used to upgrade a connection from HTTP to WebSocket; it relies on the Upgrade and Connection protocol headers.

【Note】

In an nginx configuration file, if the current block has no proxy_set_header settings of its own, it inherits the configuration from the enclosing level.

The order of inheritance is: http, server, location.

If a lower level uses proxy_set_header to change any header value, all the inherited header values may change: every proxy_set_header setting inherited from above is discarded.

So, as far as possible, keep proxy_set_header directives in one place; otherwise unexpected problems may arise.
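A minimal sketch of this pitfall (the backend upstream is hypothetical): the Host header set at the http level is silently dropped inside the location, because defining any proxy_set_header there discards everything inherited from above.

http {
    proxy_set_header Host $host;    # set at the http level

    server {
        listen 80;

        location / {
            # Defining ANY proxy_set_header at this level discards all headers
            # inherited from http/server, so Host must be re-declared here:
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host      $host;    # re-declare, or it is lost
            proxy_pass http://backend;           # hypothetical upstream
        }
    }
}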


Source: www.linuxidc.com/Linux/2019-08/159990.htm