9. Nginx cross-domain configuration

Cross-domain problems were actually fairly rare under the old single-application architecture; unless a third-party SDK had to be integrated, they seldom needed to be dealt with. However, with the popularity of front-end/back-end separation and distributed architectures, cross-domain issues have become a problem every Java developer must know how to solve.

Causes of cross-domain problems

   The main cause of cross-domain problems is the browser's same-origin policy. To protect user information and prevent malicious websites from stealing data, the same-origin policy is necessary; otherwise cookies could be shared across sites. Since HTTP is a stateless protocol, cookies are usually used to record stateful information such as the user's identity, and once cookies can be shared, the user's identity information can be stolen.
The same-origin policy checks three things: two requests with the same protocol + domain name + port are considered to come from the same origin, but if any of the three differs, the two requests come from different origins. The same-origin policy restricts resource interaction between different origins.
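To make the rule concrete, here is a small sketch (my own illustration, not part of the original article) that applies the protocol + domain + port comparison to a few URLs:

```python
from urllib.parse import urlsplit

def origin_of(url):
    """Return the (scheme, host, port) triple that defines an origin."""
    parts = urlsplit(url)
    # Fall back to the scheme's default port when none is given explicitly
    port = parts.port or {"http": 80, "https": 443}[parts.scheme]
    return (parts.scheme, parts.hostname, port)

def same_origin(a, b):
    """Two URLs are same-origin only if protocol, domain and port all match."""
    return origin_of(a) == origin_of(b)

print(same_origin("http://example.com/a", "http://example.com:80/b"))   # True: :80 is implicit
print(same_origin("https://example.com/a", "http://example.com/a"))     # False: protocol differs
print(same_origin("http://example.com/a", "http://api.example.com/a"))  # False: domain differs
print(same_origin("http://example.com/a", "http://example.com:8080/a")) # False: port differs
```

Any of the three components differing is enough to trigger the browser's cross-origin restrictions.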

Nginx solves cross-domain problems

   Having figured out the cause of the cross-domain problem, how can Nginx solve it? It is actually quite simple: just add a little configuration to nginx.conf:

location / {
    # Allow cross-origin requests; you can use the variable $http_origin instead, * means all origins
    add_header 'Access-Control-Allow-Origin' *;
    # Allow requests to carry cookies
    add_header 'Access-Control-Allow-Credentials' 'true';
    # HTTP methods allowed for cross-origin requests: GET, POST, OPTIONS, PUT
    add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT';
    # Request headers allowed to be carried, * means all
    add_header 'Access-Control-Allow-Headers' *;
    # Allow requests that fetch resources in ranges (segments)
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
    # Required! Otherwise POST requests cannot cross domains!
    # Before a cross-origin POST, the browser sends an OPTIONS preflight request;
    # the real request is only issued once the server accepts the preflight
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain; charset=utf-8';
        add_header 'Content-Length' 0;
        # Return 204 for OPTIONS requests to indicate the cross-origin request is accepted
        return 204;
    }
}

After the above configuration is added to the nginx.conf file, cross-origin requests will work.
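To see the preflight handshake end to end, here is a small self-contained sketch (my own illustration, not part of the original setup) that mimics the nginx block above with Python's http.server: the OPTIONS preflight is answered with 204 plus the CORS headers, just as `return 204` does.

```python
import http.server
import threading
import urllib.request

class CorsHandler(http.server.BaseHTTPRequestHandler):
    """Toy handler mirroring the CORS headers of the nginx block above."""

    def _cors_headers(self):
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Credentials", "true")
        self.send_header("Access-Control-Allow-Methods", "GET,POST,OPTIONS,PUT")
        self.send_header("Access-Control-Allow-Headers", "*")

    def do_OPTIONS(self):
        # Preflight: accept with 204 No Content, like `return 204` in nginx
        self.send_response(204)
        self._cors_headers()
        self.send_header("Access-Control-Max-Age", "1728000")
        self.end_headers()

    def log_message(self, *args):
        # Keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), CorsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    "http://127.0.0.1:%d/" % server.server_port, method="OPTIONS")
resp = urllib.request.urlopen(req)
print(resp.getcode(), resp.headers["Access-Control-Allow-Origin"])  # 204 *
server.shutdown()
```

A browser performs exactly this OPTIONS round trip before a cross-origin POST; only after the 204 does the real request go out.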

However, if the backend uses a distributed architecture, RPC calls between services sometimes also need cross-domain handling, otherwise cross-origin request exceptions will still occur. In that case, in your backend projects you can extend the HandlerInterceptorAdapter class, implement the WebMvcConfigurer interface, or use the @CrossOrigin annotation to implement cross-origin configuration at the interface level.

10. Nginx anti-leech design

Author: Bamboo Loves Panda
Link: https://juejin.cn/post/7112826654291918855
Source: Rare Earth Nuggets
The copyright belongs to the author. For commercial reprint, please contact the author for authorization, for non-commercial reprint, please indicate the source.


   Let's first understand what hotlinking is: hotlinking means that one website directly embeds and displays resources hosted on another website. A simple example will make this clear:

Take two wallpaper websites, site X and site Y. Site X accumulated a large library of wallpaper material bit by bit, buying copyrights and signing authors, while site Y, for reasons such as lack of funds, simply copied all of site X's wallpaper resources via references like <img src="X-site/xxx.jpg" /> and then offered them to its own users for download.

If we were the boss of site X, we certainly would not be happy, so how can this kind of abuse be blocked? That is exactly what the anti-leech (hotlink protection) mechanism described next is for!

Nginx's anti-leech mechanism is based on the Referer header field analyzed in the earlier article "HTTP/HTTPS": this field describes where the current request was initiated from. Nginx can read this value and judge whether the request is a legitimate resource reference from our own site; if not, access is denied. Nginx provides a directive, valid_referers, that meets this requirement. The syntax is as follows:

  • valid_referers none | blocked | server_names | string ...;
    • none: accepts requests whose HTTP request carries no Referer field at all.
    • blocked: accepts requests whose Referer field is present but masked (e.g. by a firewall or proxy), i.e. its value does not start with http:// or https://.
    • server_names: a resource whitelist; the domain names allowed to access resources are specified here.
    • string: a custom string; wildcards and regular expressions are supported.
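As an illustration (my own sketch, not nginx source code), the decision valid_referers makes can be approximated like this, here for the `blocked` + whitelist combination used in the configuration below:

```python
import re

def referer_allowed(referer, whitelist, allow_none=False, allow_blocked=True):
    """Approximate the valid_referers check: is this request a legitimate
    resource reference, or a hotlink from another site?"""
    if not referer:
        return allow_none          # `none`: request carries no Referer at all
    m = re.match(r"^https?://([^/:]+)", referer)
    if not m:
        return allow_blocked       # `blocked`: Referer present but masked (no http(s):// prefix)
    return m.group(1) in whitelist # `server_names`: whitelist of allowed hosts

wl = {"192.168.12.129"}
print(referer_allowed("http://192.168.12.129/index.html", wl))  # True: our own site
print(referer_allowed("http://y-site.com/copy.html", wl))       # False: hotlink -> 403
print(referer_allowed("192.168.12.129", wl))                    # True: masked value, `blocked` accepts it
print(referer_allowed(None, wl))                                # False: no Referer and `none` not set
```

When the function returns False, nginx sets $invalid_referer, which is what the configuration below branches on.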

After a brief understanding of the syntax, the next implementation is as follows:

# Enable the anti-leech mechanism in the static-resource location of the dynamic/static split
location ~ .*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|js|css){
    # Before going live, change the last value to your allowed domain name
    valid_referers blocked 192.168.12.129;
    if ($invalid_referer) {
        # Can be configured to return a "no hotlinking" image
        # rewrite   ^/ http://xx.xx.com/NO.jpg;
        # Or simply return 403
        return   403;
    }
    
    root   /soft/nginx/static_resources;
    expires 7d;
}

After configuring the above, Nginx implements the most basic anti-leech mechanism; finally, just restart Nginx once more. Of course, there are also dedicated third-party modules such as ngx_http_accesskey_module that implement a more complete design; interested readers can look into them on their own.

PS: the anti-leech mechanism cannot stop crawlers that forge the Referer header to scrape data.

11. Nginx large file transfer configuration

   In some business scenarios large files need to be transferred, and large transfers often run into problems such as the file exceeding the size limit or the request timing out mid-transfer. Nginx can be configured to handle this; let's first look at the configuration items that may be involved when transferring large files:

  • client_max_body_size: the maximum size allowed for the request body
  • client_header_timeout: timeout for waiting for the client to send the request header
  • client_body_timeout: timeout for reading the client request body
  • proxy_read_timeout: the maximum time Nginx waits when reading a response from the backend server
  • proxy_send_timeout: timeout for Nginx to transmit the request to the backend server

When transferring large files, the values of client_max_body_size, client_header_timeout, proxy_read_timeout and proxy_send_timeout can be configured according to the actual needs of your project.

The above configuration is only what is needed at the proxy layer, because the client ultimately still interacts with the backend when transferring files; here Nginx, acting as the gateway layer, is merely adjusted to a level that can "accommodate" large file transfers.
Of course, Nginx can also act as a file server itself, but that requires the dedicated third-party module nginx-upload-module. If the project does not have many file-upload features, building on Nginx is recommended, since it saves a dedicated file-server resource. However, if uploads/downloads are frequent, it is recommended to set up a separate file server and hand the upload/download functionality to the backend.

12. Nginx SSL certificate configuration

   As more and more websites are served over HTTPS, configuring only HTTP in Nginx is no longer enough; it is often also necessary to listen for requests on port 443. As mentioned in the earlier article "HTTP/HTTPS", HTTPS requires the server to be configured with a digital certificate to ensure communication security. When the project uses Nginx as the gateway, the certificate also needs to be configured in Nginx. Let's briefly go through the SSL certificate configuration process:

  • ① First apply for the corresponding SSL certificate from a CA or from your cloud provider's console, and download the Nginx version of the certificate after the review passes.
  • ② The downloaded digital certificate consists of three files in total: .crt, .key, .pem:
    • .crt: the digital certificate file; .crt is an extension of .pem, so some downloads may not include it.
    • .key: the server's private key file, the private half of the asymmetric key pair, used to decrypt data encrypted with the public key.
    • .pem: the text file of the source certificate in Base64-encoded format; the extension can be changed as needed.
  • ③ Create a new certificate directory under the Nginx directory and upload the downloaded certificate/private key files into it.
  • ④ Finally, modify nginx.conf as follows:
# ----------HTTPS configuration-----------
server {
    # Listen on 443, the default HTTPS port
    listen 443;
    # Configure your project's domain name
    server_name www.xxx.com;
    # Enable SSL encrypted transfer
    ssl on;
    # Directory containing the home page files
    root html;
    # Home page file names
    index index.html index.htm index.jsp index.ftl;
    # The digital certificate you downloaded
    ssl_certificate  certificate/xxx.pem;
    # The server private key you downloaded
    ssl_certificate_key certificate/xxx.key;
    # Validity period of the encrypted session after communication stops;
    # no key re-exchange is needed within this window
    ssl_session_timeout 5m;
    # Cipher suites the server uses during the TLS handshake
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
    # TLS versions the server supports
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # Prefer the server's cipher-suite order over the client's
    ssl_prefer_server_ciphers on;

    location / {
        ....
    }
}

# ---------Redirect HTTP requests to HTTPS-------------
server {
    # Listen on 80, the default HTTP port
    listen 80;
    # Requests for this domain arriving on port 80
    server_name www.xxx.com;
    # Rewrite the request to HTTPS (use the domain you configured HTTPS for)
    rewrite ^(.*)$ https://www.xxx.com;
}

OK~, with the above Nginx configuration your website can be accessed via https://, and when a client accesses it via http:// the request is automatically rewritten to an HTTPS request.
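The rewrite rule is just a regular-expression substitution; its effect can be sketched in a few lines (my own illustration):

```python
import re

def rewrite_to_https(request_uri, https_host="www.xxx.com"):
    """Mimic `rewrite ^(.*)$ https://www.xxx.com;`: any URI arriving on
    port 80 is redirected to the HTTPS site."""
    # Note: as written, the rule drops the original path; a variant that
    # keeps it would be `rewrite ^(.*)$ https://www.xxx.com$1 permanent;`
    return re.sub(r"^(.*)$", "https://" + https_host, request_uri, count=1)

print(rewrite_to_https("/index.html"))      # https://www.xxx.com
print(rewrite_to_https("/shop/cart?id=1"))  # https://www.xxx.com
```

The `$1` variant mentioned in the comment preserves the requested path across the redirect, which is usually what production sites want.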

13. Nginx high availability

   If Nginx is deployed online as a single node, disasters natural and man-made are inevitable: system exceptions, program crashes, server power outages, machine-room explosions, the destruction of the earth... haha, exaggerating. But there are real hidden dangers in a production environment: since Nginx, as the gateway layer of the whole system, receives all external traffic, once Nginx goes down the entire system ultimately becomes unavailable, which is extremely bad for the user experience. Therefore Nginx's high availability must also be guaranteed.

Next, high availability will be achieved through keepalived's VIP mechanism. VIP here does not mean a paid membership, but Virtual IP, a virtual IP.

In the earlier era of single-node architectures, keepalived was a fairly frequently used high-availability technology; for example, MySQL, Redis, MQ, Proxy, Tomcat and so on would use the VIP mechanism provided by keepalived to achieve high availability of single-node applications.

Keepalived + restart script + dual-machine hot standby construction

① First create a corresponding directory, download the keepalived installation package (extraction code: s6aq) to Linux, and unzip it:

[root@localhost]# mkdir /soft/keepalived && cd /soft/keepalived
[root@localhost]# wget https://www.keepalived.org/software/keepalived-2.2.4.tar.gz
[root@localhost]# tar -zxvf keepalived-2.2.4.tar.gz

② Enter the decompressed keepalived directory, set up the build environment, then compile and install:

[root@localhost]# cd keepalived-2.2.4
[root@localhost]# ./configure --prefix=/soft/keepalived/
[root@localhost]# make && make install

③ Enter the installation directory /soft/keepalived/etc/keepalived/ and edit the configuration file:

[root@localhost]# cd /soft/keepalived/etc/keepalived/
[root@localhost]# vi keepalived.conf

④ Edit the master node's core configuration file keepalived.conf as follows:

global_defs {
    # Built-in email alert service; a dedicated monitoring system or third-party
    # SMTP is recommended, but email notification can also be configured here.
    notification_email {
        root@localhost
    }
    notification_email_from root@localhost
    smtp_server localhost
    smtp_connect_timeout 30
    # Identity of this host in the HA cluster (must be unique within the cluster;
    # configuring it as the local IP is recommended)
    router_id 192.168.12.129
}

# Configuration of the periodically executed script
vrrp_script check_nginx_pid_restart {
    # Location of the nginx restart script written earlier
    script "/soft/scripts/keepalived/check_nginx_pid_restart.sh"
    # Run once every 3 seconds
    interval 3
    # If the condition in the script holds (a restart happened), lower the weight by 20
    weight -20
}

# Define the virtual route; VI_1 is its identifier (the name can be customized)
vrrp_instance VI_1 {
    # Role of the current node: decides master/slave (MASTER is the master, BACKUP the standby)
    state MASTER
    # Network interface the virtual IP is bound to; set it according to your machine's NIC
    interface ens33
    # Virtual router ID; must be identical on the master and backup nodes
    virtual_router_id 121
    # Local IP of this machine
    mcast_src_ip 192.168.12.129
    # Node priority; the master must have a higher priority than the backup
    priority 100
    # Set nopreempt on the higher-priority node to avoid split-brain caused by
    # re-preemption after recovering from a failure
    nopreempt
    # Advertisement interval; must be identical on both nodes, default 1s (similar to a heartbeat)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Add the track_script block to the instance block
    track_script {
        # Run the Nginx monitoring script
        check_nginx_pid_restart
    }

    virtual_ipaddress {
        # Virtual IP (VIP); this can be extended, multiple VIPs can be configured.
        192.168.12.111
    }
}

⑤ Clone the previous virtual machine as the slave (standby) machine, then edit the slave's keepalived.conf as follows:

global_defs {
    # Built-in email alert service; a dedicated monitoring system or third-party
    # SMTP is recommended, but email notification can also be configured here.
    notification_email {
        root@localhost
    }
    notification_email_from root@localhost
    smtp_server localhost
    smtp_connect_timeout 30
    # Identity of this host in the HA cluster (must be unique within the cluster;
    # configuring it as the local IP is recommended)
    router_id 192.168.12.130
}

# Configuration of the periodically executed script
vrrp_script check_nginx_pid_restart {
    # Location of the nginx restart script written earlier
    script "/soft/scripts/keepalived/check_nginx_pid_restart.sh"
    # Run once every 3 seconds
    interval 3
    # If the condition in the script holds (a restart happened), lower the weight by 20
    weight -20
}

# Define the virtual route; VI_1 is its identifier (the name can be customized)
vrrp_instance VI_1 {
    # Role of the current node: decides master/slave (MASTER is the master, BACKUP the standby)
    state BACKUP
    # Network interface the virtual IP is bound to; set it according to your machine's NIC
    interface ens33
    # Virtual router ID; must be identical on the master and backup nodes
    virtual_router_id 121
    # Local IP of this machine
    mcast_src_ip 192.168.12.130
    # Node priority; the master must have a higher priority than the backup
    priority 90
    # Set nopreempt on the higher-priority node to avoid split-brain caused by
    # re-preemption after recovering from a failure
    nopreempt
    # Advertisement interval; must be identical on both nodes, default 1s (similar to a heartbeat)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Add the track_script block to the instance block
    track_script {
        # Run the Nginx monitoring script
        check_nginx_pid_restart
    }

    virtual_ipaddress {
        # Virtual IP (VIP); this can be extended, multiple VIPs can be configured.
        192.168.12.111
    }
}

⑥ Create a new scripts directory and write the Nginx restart script check_nginx_pid_restart.sh:

[root@localhost]# mkdir /soft/scripts /soft/scripts/keepalived
[root@localhost]# touch /soft/scripts/keepalived/check_nginx_pid_restart.sh
[root@localhost]# vi /soft/scripts/keepalived/check_nginx_pid_restart.sh

#!/bin/sh
# Use ps to count the nginx processes running in the background,
# saving the count in the variable nginx_number
nginx_number=`ps -C nginx --no-header | wc -l`
# Check whether any Nginx process is still running in the background
if [ $nginx_number -eq 0 ];then
    # If no Nginx process can be found, execute the restart command
    /soft/nginx/sbin/nginx -c /soft/nginx/conf/nginx.conf
    # Wait 1s after restarting, then count the background processes again
    sleep 1
    # If nginx still cannot be found after the restart
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        # Take this keepalived host offline so the virtual IP drifts to the
        # standby machine, which comes online and takes over the Nginx service
        systemctl stop keepalived.service
    fi
fi

⑦ The script's encoding format must be changed and execution permission granted, otherwise it may fail to run:

[root@localhost]# vi /soft/scripts/keepalived/check_nginx_pid_restart.sh

:set fileformat=unix # run inside vi to change the file's encoding format
:set ff # check the modified encoding format

[root@localhost]# chmod +x /soft/scripts/keepalived/check_nginx_pid_restart.sh

⑧ Since keepalived was installed to a custom location, some files need to be copied into the system directories:

[root@localhost]# mkdir /etc/keepalived/
[root@localhost]# cp /soft/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@localhost]# cp /soft/keepalived/keepalived-2.2.4/keepalived/etc/init.d/keepalived /etc/init.d/
[root@localhost]# cp /soft/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

⑨ Register keepalived as a system service, enable start on boot, then test whether it starts normally:

[root@localhost]# chkconfig keepalived on
[root@localhost]# systemctl daemon-reload
[root@localhost]# systemctl enable keepalived.service
[root@localhost]# systemctl start keepalived.service

Other commands:
systemctl disable keepalived.service # disable start on boot
systemctl restart keepalived.service # restart keepalived
systemctl stop keepalived.service # stop keepalived
tail -f /var/log/messages # view keepalived runtime logs

⑩ Finally, test whether the VIP takes effect by checking whether the virtual IP was successfully mounted on the machine:

[root@localhost]# ip addr

From the output above it is clear that the virtual IP 192.168.12.111 has been successfully mounted on this machine, but it will not be mounted on the other machine, 192.168.12.130; only when the master goes offline will the slave 192.168.12.130 come online and take over the VIP. Finally, test whether the VIP is reachable from the external network, i.e. ping the VIP directly from Windows:

External communication through the VIP works, and the VIP can also be pinged normally, which means the virtual IP configuration is successful.

Nginx High Availability Test

   Through the steps above, the keepalived VIP mechanism has been built successfully. In this last stage, several things were done:

  • First, a VIP was mounted for the machine where Nginx is deployed.
  • Second, master-slave dual-machine hot standby was established through keepalived.
  • Third, automatic restart of Nginx after a crash was implemented through keepalived.

Because there is no domain name here, the initial server_name was configured as the local machine's IP, so nginx.conf needs a small change:

server {
    listen    80;
    # Change the local machine IP to the virtual IP here
    server_name 192.168.12.111;
    # If a domain name is configured here, change the domain's mapping to the virtual IP instead
}

Finally, let's experiment with the effect:

In the process above, first start the keepalived and nginx services respectively, then simulate an Nginx crash by manually stopping the nginx process. Check the background processes again a moment later, and we find that nginx is still alive.

From this process it is not hard to see that keepalived has implemented automatic restart after a crash for Nginx, so next let's simulate what happens when the server itself fails:

In the process above, we manually stopped the keepalived service to simulate a machine power outage, hardware damage, and the like (because a machine power outage means the keepalived process on the host disappears). Checking the machine's IP information again, we can clearly see that the VIP has disappeared!

Now switch to the other machine, 192.168.12.130, and take a look:

At this moment we find that after the master 192.168.12.129 went down, the VIP automatically drifted from the master to the slave 192.168.12.130, and from then on the client's requests ultimately arrive at the Nginx on the 130 machine.

In the end, with Keepalived providing master-slave hot standby for Nginx, the application system can offer users 7x24-hour service no matter when it runs into failures such as online downtime or a machine-room power outage.

14. Nginx performance optimization

   The article is already quite long, so finally let's talk about Nginx performance optimization, briefly covering the highest-yield optimization items without expanding on each. After all, many factors affect performance: the network, server hardware, the operating system, backend services, the program itself, database services, and so on. If you are interested in performance tuning, you can refer to the tuning ideas in the earlier "JVM Performance Tuning" article.

Optimization 1: Enable long (keep-alive) connections

   Nginx usually acts as a proxy service responsible for distributing client requests, so it is recommended to enable HTTP long connections to reduce the number of handshakes and lower server overhead, as follows:

upstream xxx {
    # Number of idle keep-alive connections to cache per worker
    keepalive 32;
    # Maximum number of requests served over each keep-alive connection
    keepalive_requests 100;
    # Maximum time a keep-alive connection is held open with no new requests
    keepalive_timeout 60s;
}

Optimization 2. Enable zero-copy technology

   The concept of zero copy appears in most middleware with good performance, such as Kafka, Netty, etc., and zero-copy data transfer can also be enabled in Nginx, as follows:

sendfile on; # enable the zero-copy mechanism

The difference between the zero-copy read mechanism and the traditional resource read mechanism:

  • Traditional way: hardware --> kernel --> user space --> program space --> program kernel space --> network socket
  • Zero copy method: hardware --> kernel --> program kernel space --> network socket

From the comparison of the above process, it is easy to see the performance difference between the two.
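Python exposes the same kernel primitive, which makes the two paths easy to compare in code. A rough sketch (my own illustration, assuming a Linux-like platform where socket.sendfile delegates to the sendfile(2) syscall; it falls back to plain send elsewhere):

```python
import os
import socket
import tempfile

# Prepare a file to serve, as nginx would serve a static resource
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 16384)
    path = f.name

a, b = socket.socketpair()  # stands in for a client connection

# Traditional way: read() copies the data kernel -> user space,
# then send() copies it from user space back into the kernel
with open(path, "rb") as src:
    a.sendall(src.read())
classic = b.recv(16384, socket.MSG_WAITALL)

# Zero-copy way: socket.sendfile() delegates to sendfile(2) where available,
# so the kernel moves the pages straight to the socket and user space
# never touches the bytes
with open(path, "rb") as src:
    a.sendfile(src)
zerocopy = b.recv(16384, socket.MSG_WAITALL)

print(len(classic) == len(zerocopy) == 16384)  # True
a.close(); b.close(); os.unlink(path)
```

Both paths deliver identical bytes; the zero-copy path simply skips the two user-space copies, which is exactly what `sendfile on;` buys nginx when serving static files.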

Optimization 3. Enable the no-delay or packet-coalescing mechanisms

   There are two key performance parameters in Nginx, tcp_nodelay and tcp_nopush; they are enabled as follows:

tcp_nodelay on;
tcp_nopush on;

The TCP/IP protocol uses Nagle's algorithm by default: during network data transfer, each packet is not sent out immediately but waits a while, merging with subsequent packets into one datagram before being sent. Although this algorithm improves network throughput, it reduces real-time responsiveness.

Therefore, if your project is a highly interactive application, you can manually enable the tcp_nodelay configuration so that every packet the application submits to the kernel is sent out immediately. However, this produces a large number of TCP headers and increases network overhead.

Conversely, some projects' business does not demand real-time data but pursues higher throughput. In that case you can enable the tcp_nopush configuration item. This configuration acts like a "cork": it plugs the connection so data is not sent out immediately, and sends it once the cork is removed. With this option set, the kernel tries to splice small packets into one large packet (one MTU) before sending.

Of course, if after a certain period (usually 200ms) the kernel still has not accumulated a full MTU, the existing data must be sent anyway, otherwise it would block indefinitely.

The tcp_nodelay and tcp_nopush parameters are "mutually exclusive". For applications pursuing response speed, such as IM or financial projects, enabling tcp_nodelay is recommended; for applications pursuing throughput, such as scheduling systems or reporting systems, enabling tcp_nopush is recommended.

Note:
tcp_nodelay should generally be used with long (keep-alive) connections enabled.
tcp_nopush only takes effect when sendfile is enabled.
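At the application level, `tcp_nodelay on;` corresponds to the TCP_NODELAY socket option; a quick sketch of flipping Nagle's algorithm off on a socket:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Nagle's algorithm is on by default, i.e. TCP_NODELAY reads as 0 (disabled)
before = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

# What `tcp_nodelay on;` asks the kernel for: push each packet out immediately
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
after = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

print(before, after)  # 0 1 on Linux
s.close()
```

This is the trade-off described above in miniature: with the option set, every write goes out as its own segment, buying latency at the cost of more TCP headers on the wire.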

Optimization 4. Adjust the number of worker processes

   By default Nginx starts only one worker process to handle client requests. We can start a number of worker processes matching the machine's CPU core count to improve overall concurrency support, as follows:

# Automatically adjust the number of worker processes to the CPU core count
worker_processes auto;

Up to 8 worker processes is enough; beyond 8 there will be no further performance improvement.

At the same time, you can also slightly adjust the number of file handles that each worker process can open:

# File descriptors each worker can open; raise to at least 10,000,
# 20,000-30,000 recommended under high load
worker_rlimit_nofile 20000;

The operating system kernel uses file descriptors to access files; whether opening, creating, reading, or writing a file, a file descriptor must be used to specify the target file. So the larger the value, the more files a process can operate on (but it cannot exceed the kernel limit; about 38,000 is the recommended upper bound).
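The per-process limit that worker_rlimit_nofile raises can be inspected, and raised within the hard limit, from any process. A sketch using Python's resource module (Unix-only):

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors that
# worker_rlimit_nofile adjusts for each nginx worker
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit:", soft, "hard limit:", hard)

# A process may raise its soft limit up to, but not beyond, the hard limit;
# similar in spirit to `worker_rlimit_nofile 20000;` (if the hard limit allows)
target = min(20000, hard) if hard != resource.RLIM_INFINITY else 20000
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
print("new soft limit:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

If the hard limit itself is too low, it must be raised system-wide (e.g. via /etc/security/limits.conf) before nginx can benefit.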

Optimization 5. Turn on the CPU affinity mechanism

   Friends familiar with concurrent programming know that the number of processes/threads often far exceeds the number of CPU cores in the system, because the operating system executes them using a time-slice switching mechanism: one CPU core switches frequently among multiple processes, causing considerable performance loss.

The CPU affinity mechanism binds each Nginx worker process to a fixed CPU core, thereby reducing the time overhead and resource consumption caused by CPU switching. It is enabled as follows:

worker_cpu_affinity auto;

Optimization 6. Enable the epoll model and adjust the number of concurrent connections

   As mentioned at the very beginning, Nginx and Redis are both implemented on the multiplexing model; but the original multiplexing interfaces select/poll can monitor at most 1024 connections, while epoll is the enhanced version of the select/poll interface, so using this model can greatly improve the performance of a single worker, as follows:

events {
    # Use the epoll network model
    use epoll;
    # Adjust the upper limit of connections each worker can handle
    worker_connections  10240;
}

The select/poll/epoll models will not be elaborated on here; they will be analyzed in detail in later IO-model articles.
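For a feel of the event loop at the heart of a worker, Python's selectors module picks epoll on Linux (selectors.DefaultSelector); a minimal sketch of my own:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # EpollSelector on Linux
print(type(sel).__name__)

# A socketpair stands in for a client connection accepted by a worker
r, w = socket.socketpair()
r.setblocking(False)

# Register interest in readability, as a worker does for each connection
sel.register(r, selectors.EVENT_READ, data="client-conn")

w.sendall(b"GET / HTTP/1.1\r\n")            # the "client" writes a request
events = sel.select(timeout=1)              # like epoll_wait: which fds are ready?
for key, mask in events:
    print(key.data, key.fileobj.recv(1024)) # handle only the ready connection

sel.unregister(r); r.close(); w.close()
```

The key property is that one call reports only the ready connections, so a single worker can watch tens of thousands of sockets (worker_connections) without scanning them all, which is exactly what the select/poll interfaces could not do efficiently.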

15. In closing

   At this point most of the Nginx content has been explained. As for the performance-optimization content of the last section: the dynamic/static separation, buffer allocation, resource caching, anti-leeching, resource compression, etc. mentioned earlier can also all be counted as performance-optimization solutions.

Origin blog.csdn.net/weixin_37855495/article/details/130082400