Nginx reverse proxy and load balancing for big-data and cloud-computing operations and maintenance

Introduction to Nginx

1. Overview of Nginx

1.1 Overview

Nginx ("engine x") is a high-performance HTTP/reverse proxy server and email (IMAP/POP3) proxy server.
According to official tests, Nginx can support up to 50,000 concurrent connections while consuming very little CPU and memory, and it runs very stably. Best of all, it is open source, free, and can be used commercially.

Nginx also supports hot deployment: it can run around the clock for months without a restart, and the software can be upgraded and maintained without interrupting service.

1.2 Nginx application scenarios

1. For reference, on a typical standalone server configuration Nginx handles roughly 7,000-8,000 concurrent connections; in cluster mode, 20,000+.

2. As a web server: compared with Apache, Nginx uses fewer resources, supports more concurrent connections, and achieves higher efficiency, which makes it especially popular with virtual-host providers. It is capable of handling up to 50,000 concurrent connections.

3. As a load-balancing server: Nginx can serve Rails and PHP applications directly, and can also act as an HTTP proxy server for external services. Written in C, it is far better than Perlbal in terms of system resource overhead and CPU efficiency.

4. As a mail proxy server: Nginx is also an excellent mail proxy server (one of its original design goals); Last.fm has described its successful experience using it in this role.

5. Nginx is very simple to install, its configuration file is very concise (it can even embed Perl syntax), and the server has very few bugs.

2. Nginx installation

2.1 Download from the official website (nginx.org)

2.2 Install related dependencies

2.2.1 The first step

1. Avoid a port conflict between Nginx and httpd

Uninstall any httpd package that may already be installed, to avoid a port conflict:

(rpm -qa | grep -P "^httpd-([0-9].)+") && rpm -e --nodeps httpd || echo "httpd not installed" 

2. Download the Nginx source package

cd ~ 
which wget || yum install -y wget 
wget http://nginx.org/download/nginx-1.19.1.tar.gz

3. Install other dependencies

yum install -y gcc pcre-devel zlib-devel 

4. Create a running account nginx

useradd -M -s /sbin/nologin nginx

2.3 install nginx

  1. Unzip the nginx-xx.tar.gz package

    tar -axf nginx-1.19.1.tar.gz 
    
  2. Enter the decompression directory, execute ./configure to set the installation path and running account

    cd ~/nginx-1.19.1 
    ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx 
    
  3. make && make install

    make && make install
    
  4. Write the configuration file

    cat >/usr/local/nginx/conf/nginx.conf <<EOF
    worker_processes 1;
    events {
        worker_connections  1024;
    }
    http {
         include            mime.types;
         default_type       application/octet-stream;
         sendfile           on;
         keepalive_timeout  65;
         charset            utf-8;
         server {
            listen          80;
            server_name     localhost;
            include         conf.d/*.conf;
            location / {
                root        html;
                index       index.html index.htm;
            }
            error_page 500 502 503 504  /50x.html;
                location =  /50x.html {
                root        html;
            }
         }
    }
    EOF
    
    # Create the directory for auxiliary configuration files
    [ -d /usr/local/nginx/conf/conf.d ] || mkdir -p /usr/local/nginx/conf/conf.d
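
  5. (Optional) Validate the new configuration before starting. Since Nginx is not on the PATH yet at this point, call the binary with its full path:

    /usr/local/nginx/sbin/nginx -t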
    

2.4 Set the environment variable and enable start at boot

1. Set variables

cat > /etc/profile.d/nginx.sh<<EOF 
export PATH="/usr/local/nginx/sbin:\$PATH" 
EOF

2. Refresh the environment

source /etc/profile 

3. Start the Nginx service and set it to start at boot

echo "/usr/local/nginx/sbin/nginx" >> /etc/rc.d/rc.local 	## set Nginx to start automatically at boot
chmod +x /etc/rc.d/rc.local 
nginx                                                        ## start the Nginx service (or: nginx -c /usr/local/nginx/conf/nginx.conf)
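
On distributions that use systemd, a unit file is an alternative to the rc.local approach above. This is only a sketch, assuming the install paths used in this article (with this prefix the default pid file is /usr/local/nginx/logs/nginx.pid):

cat > /etc/systemd/system/nginx.service <<EOF
[Unit]
Description=nginx web server
After=network.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable nginx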

2.5 Access

Open a browser and enter the virtual machine's IP address to test.
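
The same check can be done from the command line (the address below is a placeholder for your own virtual machine's IP):

curl -I http://<vm-ip>/        # expect an "HTTP/1.1 200 OK" response with a "Server: nginx/..." header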

3. Nginx common commands and configuration files

3.1 Common commands

# Check the version
nginx -v

# Check the configuration file for syntax errors
nginx -t

# Start nginx
nginx

# Stop nginx
nginx -s stop

# Reload the nginx configuration
nginx -s reload
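
A typical change-and-apply workflow combines the commands above: edit the configuration, validate it, then reload without dropping connections.

vim /usr/local/nginx/conf/nginx.conf    # edit the configuration
nginx -t                                # validate the syntax first
nginx -s reload                         # apply the change without interrupting service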

3.2 Detailed explanation of the configuration file

# Configuration file location
Location: /usr/local/nginx/conf/nginx.conf
★ View the default settings of the Nginx main configuration file (comments and blank lines stripped):
# cat /usr/local/nginx/conf/nginx.conf | grep -vE "^\s*(#|$)"
worker_processes  1;
events {
    worker_connections  1024;
}
http {
     include            mime.types;
    	## Include the MIME type definitions; the path is relative to the conf directory.
		## mime.types defines which application component should be used to open each file type.
     default_type       application/octet-stream;
     sendfile           on;
     keepalive_timeout  65;
     charset            utf-8;	## set the character encoding to utf-8 so pages can display Chinese
     server {
        listen          80;		## listening port (and, optionally, IP address)
        server_name     localhost;
        ## Include custom configuration files (a relative path is resolved
        ## against the directory of the main configuration file)
        include         conf.d/*.conf;
        location / {
            root        html;
            index       index.html index.htm;	## default index file names
        }
        error_page 500 502 503 504  /50x.html;
            location =  /50x.html {
            root        html;
        }
     }
}

3.3 Structure of the Nginx main configuration file: sections and default settings

main (global) section:
    worker_processes 1;             ## number of nginx worker processes
                                    ## single CPU: 1 is recommended; multiple CPUs: the total number of cores

events section:
    events {
        worker_connections 1024;    ## maximum concurrent connections per worker process
    }

http (web) section:
    http {
        include mime.types;
        default_type application/octet-stream;
        sendfile on;
        keepalive_timeout 65;
        server {                    ## each server { } block defines one virtual host
            listen 80;
            server_name localhost;
            location / {
                root html;
                index index.html index.htm;
            }
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }
    }

4. Nginx reverse proxy and load balancing

4.1 Reverse proxy

**Reverse Proxy:** the opposite of a forward proxy. To the client, the reverse proxy looks like the target server itself, and the client needs no special configuration. The client sends its request to the reverse proxy; the proxy decides which backend server should handle it, forwards the request there, and returns the response to the client as if the content were its own. The client is never aware of the servers behind the proxy; it simply treats the reverse proxy as the real server.
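
A minimal reverse-proxy sketch of this idea (the backend address 127.0.0.1:8080 is only an example; the proxy_set_header lines pass the original Host header and client IP on to the backend, which the later examples omit):

location / {
    proxy_pass       http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}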

4.2 Load Balancing

Load balancing builds on the existing network structure. It provides an effective and transparent way to expand the bandwidth of network devices and servers, increase throughput, strengthen data-processing capability, and improve the flexibility and availability of the network.

Nginx proxy module (reverse proxy, installed by default)

Function 1: acts as an application gateway, hiding the IP addresses of the intranet servers
from the outside world while publishing their service resources.
Function 2: enables separation of dynamic and static content; by matching the request URI
against location blocks, requests for dynamic pages are forwarded to backend servers.

vim /usr/local/nginx/conf/nginx.conf
        location / {
            proxy_pass http://<server pool>;
        }                                       ## the "upstream servers" are the servers being reverse-proxied
     or: proxy_pass http://<name of the backend server pool>;

## For example:
vim /usr/local/nginx/conf/nginx.conf
....
location / {
....
        proxy_pass http://xm;
}
....

Nginx upstream module (load balancing)

# Directives and their functions
upstream   defines a named backend server pool
server     defines a server inside the pool
ip_hash    enables a load-balancing algorithm based on the hash of the client IP address

Load-balancing scheduling methods

1. Round robin (polling, the default)

Each request is assigned to the backend servers one by one in turn; if a backend server goes down, it is automatically removed from the rotation.

vim /usr/local/nginx/conf/nginx.conf
....
http {
    upstream xm {                   ## "xm" is the name of the server pool; do not use underscores in it
		server <ip-address>:80;	## upstream server IP:port
		server <ip-address>:80;
    }
...
}
After saving, reload the configuration:
nginx -s reload

# For example:
vim /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
    worker_connections  1024;
}
http {
     include            mime.types;
     default_type       application/octet-stream;
     sendfile           on;
     keepalive_timeout  65;
     charset            utf-8;
     upstream test {                    ## define the "test" group
        server 192.168.106.147:80;	## machines that belong to the test group
        server 192.168.106.148:80;	## replace these IPs with your own web servers' addresses
     }
     server {
        listen          80;
        server_name     localhost;
        include         conf.d/*.conf;
        location / {
            root        html;
            index       index.html index.htm;
            proxy_pass  http://test;	## forward incoming requests to the test group
        }
        error_page 500 502 503 504  /50x.html;
            location =  /50x.html {
            root        html;
        }
     }
}
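
To see the round-robin behaviour, assuming each backend serves a page that identifies itself, repeated requests should alternate between the two servers:

for i in 1 2 3 4; do curl -s http://localhost/; done
# expected: responses alternate between the 192.168.106.147 and 192.168.106.148 backends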

2. weight (weighted round robin)
The greater a server's weight, the more requests it is assigned and the more often it is visited.

vim /usr/local/nginx/conf/nginx.conf
....
http {
	upstream xm {
		server <ip-address>:80 weight=7;   # the default is weight=1
		server <ip-address>:80 weight=3;
	}
...
}
# In this example the two servers receive roughly 70% and 30% of the requests.

After saving, reload the configuration:
nginx -s reload

## For example:
worker_processes 1;
events {
    worker_connections  1024;
}
http {
     include            mime.types;
     default_type       application/octet-stream;
     sendfile           on;
     keepalive_timeout  65;
     charset            utf-8;
     upstream test {                             ## define the "test" group
        server 192.168.106.147:80 weight=7;      ## machines in the test group
        server 192.168.106.148:80 weight=3;      ## machines in the test group
     }
     server {
        listen          80;
        server_name     localhost;
        include         conf.d/*.conf;
        location / {
            root        html;
            index       index.html index.htm;
            proxy_pass  http://test;
        }
        error_page 500 502 503 504  /50x.html;
            location =  /50x.html {
            root        html;
        }
     }
}

3. ip_hash (hash algorithm)
Requests are distributed according to a hash of the client's IP address, so after a client first reaches a particular server, later requests from that client (even after a short disconnection) are routed to the same server.

vim /usr/local/nginx/conf/nginx.conf
....
http {
	upstream xm {
		ip_hash;
		server <ip-address>:80;
		server <ip-address>:80;
	}
...
}

After saving, reload the configuration:
nginx -s reload

# For example:
worker_processes 1;
events {
    worker_connections  1024;
}
http {
     include            mime.types;
     default_type       application/octet-stream;
     sendfile           on;
     keepalive_timeout  65;
     charset            utf-8;
     upstream test {
     	ip_hash;		## enable the IP-hash algorithm
        server 192.168.106.147:80;
        server 192.168.106.148:80;
     }
     server {
        listen          80;
        server_name     localhost;
        include         conf.d/*.conf;
        location / {
            root        html;
            index       index.html index.htm;
            proxy_pass  http://test;
        }
        error_page 500 502 503 504  /50x.html;
            location =  /50x.html {
            root        html;
        }
     }
}

5. Nginx + PHP dynamic and static separation

To speed up website response, dynamic pages and static pages can be served by different servers, which accelerates processing and reduces the load on a single server. In general this means separating dynamic resources from static resources. Because Nginx handles high concurrency and caches static resources well, static resources are usually served by Nginx itself: if a request is for a static resource, Nginx fetches it directly from the static resource directory; if it is a request for a dynamic resource, Nginx uses the reverse-proxy mechanism to forward it to the appropriate backend application for processing. This achieves the separation of dynamic and static content.

1. Dynamic/static separation is mainly implemented with Nginx + PHP-FPM: Nginx serves static files such as images and HTML, while PHP handles the dynamic programs.
2. Dynamic/static separation is an architectural design method that separates static pages (or static content interfaces) from dynamic pages (or dynamic content interfaces) in the web server architecture, improving the access performance and maintainability of the whole service.
3. Simply put, when a user requests only static content such as images or HTML, Nginx returns it directly; when the user sends a dynamic request, Nginx forwards it to the backend program for dynamic processing. A sketch of the static half follows this list.
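
The sketch of the static half mentioned above: a location block matching common static file extensions lets Nginx serve them directly from the document root (the extension list and cache lifetime are only examples):

location ~* \.(html|htm|gif|jpg|jpeg|png|css|js|ico)$ {
    root    html;
    expires 30d;    # let browsers cache static assets
}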

1. Work with PHP so that dynamic pages and static pages are processed separately

1. Remove httpd
rpm -e httpd --nodeps
2. Install PHP and its components
yum install -y php php-devel php-mysql
yum install -y php-fpm
3. Start php-fpm and enable it at boot
systemctl enable php-fpm
systemctl start php-fpm

2. Modify the running user

sed -i -r 's/^\s*user\s*=.*/user = nginx/' /etc/php-fpm.d/www.conf
sed -i -r 's/^\s*group\s*=.*/group = nginx/' /etc/php-fpm.d/www.conf

Restart the service

systemctl restart php-fpm
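
Optionally, confirm that php-fpm is listening on 127.0.0.1:9000, which is where the fastcgi_pass directive below sends requests (this is typically the default listen address of the www pool on CentOS 7):

ss -tlnp | grep 9000
# expect a LISTEN entry owned by php-fpm on 127.0.0.1:9000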

3. Modify the configuration file

cat > /usr/local/nginx/conf/conf.d/location_php.conf <<EOF
location ~ \.php$ {
            root           html;
            fastcgi_index  index.php;
            fastcgi_pass   127.0.0.1:9000;
            include        fastcgi_params;
            fastcgi_param SCRIPT_FILENAME \$document_root\$fastcgi_script_name;
            if (!-f \$document_root\$fastcgi_script_name) {
                        return             404;
            }
}
EOF

Reload nginx

nginx -s reload

4. Write a PHP test page

cat > /usr/local/nginx/html/index.php <<EOF
<?php
            phpinfo();
?>
EOF
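
Then open http://<vm-ip>/index.php in a browser; if the separation works, Nginx forwards the request to php-fpm and the rendered phpinfo() page is returned instead of the raw PHP source. A quick command-line check:

curl -s http://localhost/index.php | grep -i "php version"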


Origin blog.csdn.net/Myx74270512/article/details/131284512