[Linux] Deploy nginx under Ubuntu - load balancing of nginx

Introduction

This is a record of my growth and learning path. I hope we can grow together!

Here are two of my favorite quotes:

Live the simplest life and hold the most distant dream, even if tomorrow is cold, the mountains are high, the water is far, and the road is long.

Why should one work hard? The best answer I have ever seen is: because the things I like are expensive, the places I want to go are far away, and the person I love is wonderful. So let's encourage each other!

This article is part of my notes from systematically learning Linux.

Table of contents

1. Load balancing

1. What is load balancing?

2. What are the common nginx load balancing methods?

2. Implementing load balancing

1. Requirement

2. Steps

1. Configure multiple tomcat application servers (in my case, one under Windows and one under Linux)

2. Add the configuration in /etc/nginx/sites-available/default (it can also be placed in nginx.conf under the nginx root directory), defining two or more upstream groups and servers as shown below

3. nginx load balancing parameters

4. Make the configuration take effect and reload it

5. Access the services


1. Load balancing

1. What is load balancing?

Load balancing builds on the existing network structure and provides a cheap, effective, and transparent way to expand the bandwidth of network devices and servers, increase throughput, strengthen network data processing capability, and improve network flexibility and availability.

Load balancing (in English, Load Balance) means distributing work across multiple operating units, such as web servers, FTP servers, key enterprise application servers, and other mission-critical servers, so that they complete tasks together.

2. What are the common nginx load balancing methods?

Nginx provides a variety of load balancing methods; the following are the common ones:

1. Round-robin : the default load balancing strategy. Requests are distributed to the backend servers in turn; once the last backend server has received a request, distribution starts over from the first.

2. IP hash (ip_hash) : a hash is computed from the client's IP address, so requests from the same client are always assigned to the same backend server. This guarantees that all requests from one client reach the same backend, which solves problems in certain application scenarios (for example, server-local sessions).

3. Least connections (least_conn) : each request is sent to the backend server that currently has the fewest connections. This tends to select the server that is processing requests fastest and improves response time.

4. Weighted round-robin (weight) : requests are distributed according to each server's weight; servers with higher weights handle more requests. This lets you allocate requests according to each server's processing capacity and improve overall system performance.

5. Weighted least connections (least_conn + weight) : combines the least-connections method with weights, allocating requests based on both connection counts and server weights.

Nginx load balancing also supports advanced options, such as health checks, slow start, and a maximum number of failures. These options help keep load balancing stable and reliable and improve the availability of the application.

In short, Nginx provides a variety of load balancing methods, including round-robin, IP hash, least connections, weighted round-robin, and weighted least connections. Choose the method and parameter combination appropriate to your scenario and requirements to achieve flexible, efficient, and stable load balancing.
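As a sketch, the strategies listed above map to upstream blocks like the following (the server addresses and upstream names are placeholders, not from my setup):

```nginx
# Round-robin (the default): requests rotate across the servers
upstream backend_rr {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

# IP hash: the same client IP always reaches the same server
upstream backend_iphash {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

# Least connections: pick the server with the fewest active connections
upstream backend_least {
    least_conn;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

# Weighted round-robin: the first server receives about 3x the traffic
upstream backend_weighted {
    server 10.0.0.1:8080 weight=3;
    server 10.0.0.2:8080 weight=1;
}
```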

2. Implementing load balancing

1. Requirement

Nginx acts as the load balancing server. User requests arrive at nginx first, and nginx then forwards them to a tomcat server according to the load configuration.

For example:

nginx load balancing server: IP address 1:80

tomcat1 server: http://IP address 2:80

tomcat2 server: http://IP address 1:8080

2. Steps


1. Configure multiple tomcat application servers (in my case, one under Windows and one under Linux)


2. Add the configuration in /etc/nginx/sites-available/default (it can also be placed in nginx.conf under the nginx root directory), defining two or more upstream groups and servers as follows:

The following code is only an example; adapt it to your own setup.

# first server group
upstream tomcatserver1 {
    server 192.168.0.126:8080;
    server 192.168.0.126:8082;
}

# second server group
upstream tomcatserver2 {
    server 192.168.0.126:8082;
    # server 192.168.3.43:8082;
}

server {
    listen       8888;
    server_name  localhost;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    location / {
        proxy_pass   http://tomcatserver1;
        index  index.html index.jsp;
    }
}

server {
    # note: this block must differ from the one above in its port or
    # server_name; two blocks with the same listen and server_name make
    # nginx report a conflicting server name and ignore the second one
    listen       8889;
    server_name  localhost;

    #charset koi8-r;

    #access_log  logs/host.access.log  main;

    location / {
        proxy_pass   http://tomcatserver2;
        index  index.html index.jsp;
    }
}
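On Ubuntu, files under sites-available only take effect once they are linked into sites-enabled (the default file is already linked by the package). If you put the configuration in a new file instead, a sketch of enabling it would be (the file name mysite is a placeholder):

```shell
# link the new site file into sites-enabled so nginx loads it,
# then verify the syntax and reload
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite
sudo nginx -t && sudo systemctl reload nginx
```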


The figure below shows my own load balancing configuration (screenshot not reproduced here).

 

3. nginx load balancing parameters:

Under the server block that should use load balancing, add

proxy_pass http://myServer; (myServer here is the name defined in the upstream block)

Status options for each upstream server:

down : the marked server temporarily does not participate in load balancing

weight : defaults to 1; the larger the weight, the larger the share of the load

max_fails : the number of failed requests allowed, 1 by default; when the maximum is exceeded, the error defined by proxy_next_upstream is returned

fail_timeout : the time to pause the server after max_fails failures

backup : the backup machine is only requested when all other non-backup machines are down or busy, so it carries the least load
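Putting these parameters together, an upstream block might look like the following (addresses and values are illustrative, not from my setup):

```nginx
upstream myServer {
    # higher weight: receives about twice the traffic; taken out of
    # rotation for 30s after 3 consecutive failures
    server 10.0.0.1:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 weight=1;
    server 10.0.0.3:8080 backup;   # only used when the others are down or busy
    # server 10.0.0.4:8080 down;   # temporarily excluded from the load
}

server {
    listen 8888;
    location / {
        proxy_pass http://myServer;
    }
}
```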

4. Make the configuration take effect and reload it

Note: after every change to the configuration file, check that the syntax is correct with nginx -t; otherwise restarting the service will report an error.

Either of the following commands restarts nginx:

/etc/init.d/nginx restart

service nginx restart
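A safer pattern is to test first and only reload if the test passes; a graceful reload avoids dropping in-flight connections (this assumes nginx is on the PATH and, on newer Ubuntu releases, managed by systemd):

```shell
# test the configuration; reload gracefully only if it is valid
nginx -t && nginx -s reload

# equivalent on systemd-based Ubuntu releases
sudo systemctl reload nginx
```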

5. Access the services


Open http://<nginx IP address>:80 in the browser and keep refreshing: after 172.1.3.69:8080 has served the request twice, pressing Enter again sends the request to 172.1.3.29:8080, showing that requests alternate between the backends.
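The same alternation can be observed from the command line; a small sketch, assuming nginx listens on port 8888 as in the example config and curl is installed:

```shell
# send several requests to the nginx front end; with round-robin the
# responses come from the tomcat backends in turn
for i in 1 2 3 4; do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8888/
done
```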

The above is what I practiced. I hope it helps everyone, and thank you for reading!
 

Origin blog.csdn.net/weixin_60387745/article/details/131189061