Configuring Nginx + uWSGI load balancing on CentOS

Load balancing is one of the more important features in server-side development. Besides serving as a regular web server, Nginx is widely used as a reverse proxy in front of large back-ends: its asynchronous architecture can handle a large number of concurrent requests and distribute them across the live backend servers (hereafter "backends"), which do the heavy computation and build the responses. It also makes it easy to add more backends as traffic grows.

In other words, as the business and the user base grow, a single server can no longer shoulder the high concurrency on its own, so multiple servers need to work together to share the load, and the piece that spreads that load is Nginx.

First, start two Django services with uWSGI on different ports, e.g. 8000 and 8001 (do not start the nginx service yet).
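The article does not show how the two uWSGI instances are started; a minimal sketch of one instance's ini file might look like the following, where the socket, chdir, and module values are assumptions taken from the nginx config shown below (run a second copy with `socket = 127.0.0.1:8001` for the other port):

```ini
; Hypothetical uwsgi.ini for one Django instance; paths and the module
; name are assumed from the nginx site config below, not confirmed.
[uwsgi]
socket    = 127.0.0.1:8000        ; uwsgi-protocol socket nginx connects to
chdir     = /root/video_back      ; Django project directory
module    = video_back.wsgi      ; WSGI entry point
master    = true
processes = 2
```

Note that the nginx config below also sets UWSGI_SCRIPT and UWSGI_CHDIR via uwsgi_param, which can supply the module and directory dynamically instead.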

Then modify the nginx site configuration: comment out the hard-coded uwsgi_pass address and point uwsgi_pass at the upstream group name instead.

server {
    listen       8000;
    server_name  localhost;

    access_log      /root/myweb_access.log;
    error_log       /root/myweb_error.log;


    client_max_body_size 75M;

    location / {
        include uwsgi_params;
        uwsgi_pass mytest;
        #uwsgi_pass 127.0.0.1:8001;
        uwsgi_param UWSGI_SCRIPT video_back.wsgi;
        uwsgi_param UWSGI_CHDIR  /root/video_back;
    }


    location /static {
        alias /root/video_back/static;
    }

    location /upload {
        alias /root/video_back/upload;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
server {
    listen       80;
    server_name  localhost;

    access_log      /root/video_vue_access.log;
    error_log       /root/video_vue_error.log;
    
    client_max_body_size 75M;


    location / {
        #include uwsgi_params;
        # uwsgi_pass 127.0.0.1:8000;
        #uwsgi_pass mytest;
        root /root/video_vue;
        index index.html;
        try_files $uri $uri/ /index.html;

    }

    location /static {
        alias /root/video_vue/static;
    }

    error_log    /root/video_vue/error.log    error;

}

Then edit the main configuration file (vim /etc/nginx/nginx.conf) and add the load-balancing (upstream) configuration inside the http block:

user  root;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;

    upstream mytest {
        server 39.106.172.65:8000 weight=3;   # load-balanced backend servers
        server 39.97.117.229:8001 weight=7;
    }
}

Then you can restart the service: 

systemctl restart nginx.service
Note that the commonly used load-balancing strategies are the following:

1. Round-robin (default)
Each request is assigned to a different backend server in turn, in chronological order; if a backend server goes down, it is removed automatically.

upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}
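The rotation above can be sketched in a few lines of Python; the backend addresses are taken from the example upstream block, and this is only an illustration of the policy, not nginx's implementation:

```python
from itertools import cycle

# Backends mirroring the example upstream block above.
backends = ["192.168.0.14", "192.168.0.15"]

def round_robin(backends):
    """Yield backends in strict rotation, like nginx's default policy."""
    return cycle(backends)

rr = round_robin(backends)
picks = [next(rr) for _ in range(4)]
# Four successive requests alternate between the two servers.
```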


2. weight
Requests are distributed in proportion to each server's weight; use this when the backend servers have uneven performance.

upstream backserver {
    server 192.168.0.14 weight=3;
    server 192.168.0.15 weight=7;
}
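To see what "proportional to weight" means in practice, here is a minimal Python sketch of the smooth weighted round-robin algorithm (the scheme nginx uses for weighted upstreams), applied to the two backends and 3/7 weights from the main config above; over every 10 requests, 3 go to the first server and 7 to the second:

```python
def smooth_wrr(servers, n):
    """servers: list of (name, weight) pairs.
    Returns n picks using smooth weighted round-robin: each round, every
    server's current value grows by its weight; the largest current value
    wins and is reduced by the total weight."""
    current = {name: 0 for name, _ in servers}
    total = sum(w for _, w in servers)
    picks = []
    for _ in range(n):
        for name, w in servers:
            current[name] += w
        best = max(servers, key=lambda s: current[s[0]])[0]
        current[best] -= total
        picks.append(best)
    return picks

picks = smooth_wrr([("39.106.172.65:8000", 3), ("39.97.117.229:8001", 7)], 10)
# Out of 10 requests: 3 hit the weight=3 server, 7 hit the weight=7 server.
```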


3. ip_hash (IP binding)
The schemes above share a problem: in a load-balanced system, if a user logs in on one server, their next request may be relocated to a different server in the cluster, and the login (session) state held on the first server is lost, which is clearly unacceptable.

The ip_hash directive solves this: once a client has visited one server, subsequent requests from that client are routed, via a hash of its IP address, to the same server automatically.

Each request is assigned according to the hash of the client's IP, so every visitor consistently reaches a fixed backend server, which solves the session problem.

upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}
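The stickiness can be sketched as follows; note this is a simplified illustration (nginx actually hashes the first three octets of an IPv4 address, while this sketch hashes the whole string with MD5 for determinism):

```python
import hashlib

backends = ["192.168.0.14:88", "192.168.0.15:80"]

def pick_by_ip(client_ip, backends):
    """Map a client IP to a fixed backend index via a stable hash,
    illustrating ip_hash-style session stickiness (simplified)."""
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return backends[h % len(backends)]

# The same client IP always lands on the same backend, so its
# session state stays on one server.
first = pick_by_ip("203.0.113.9", backends)
again = pick_by_ip("203.0.113.9", backends)
```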


4. fair (third-party module)
Requests are allocated according to the backend servers' response times; servers with shorter response times are served first.

upstream backserver {
    server server1;
    server server2;
    fair;
}
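The selection rule reduces to "pick the backend with the lowest measured response time"; a minimal sketch, with made-up timing numbers (the real fair module measures and updates these continuously):

```python
def pick_fair(avg_ms):
    """avg_ms: dict of backend -> average response time in ms.
    Return the backend with the shortest response time (simplified
    version of the idea behind the third-party 'fair' module)."""
    return min(avg_ms, key=avg_ms.get)

# Hypothetical measurements: server2 is currently faster, so it wins.
choice = pick_fair({"server1": 120.0, "server2": 45.0})
```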


5. url_hash (third-party module)
Requests are distributed by the hash of the requested URL, so each URL is always directed to the same backend server; this is effective when the backend servers are caches.

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
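Why this helps a cache tier: hashing the URI with CRC32 (matching the hash_method crc32 above) means one URL is always fetched through the same squid instance, so each cache only stores its own slice of the URL space. A minimal sketch with a hypothetical URI:

```python
import zlib

backends = ["squid1:3128", "squid2:3128"]

def pick_by_uri(uri, backends):
    """Route a request URI to a backend by CRC32 hash, so the same
    URL always reaches the same cache server (simplified sketch)."""
    return backends[zlib.crc32(uri.encode()) % len(backends)]

# Repeated requests for the same URL hit the same cache backend.
first = pick_by_uri("/video/42.mp4", backends)
again = pick_by_uri("/video/42.mp4", backends)
```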

Origin www.linuxidc.com/Linux/2019-06/158949.htm