
Introduction to Nginx

The official website of nginx: www.nginx.org

The latest stable version of nginx: 1.20. ("Stable" here refers to the even-numbered release branches; by nginx's versioning convention, odd-numbered minor versions are the mainline/development branch and even-numbered minor versions are the stable branch.)

Nginx (engine x) is a high-performance open-source HTTP and reverse proxy server, as well as an IMAP/POP3/SMTP proxy server. Nginx was developed by Igor Sysoev for Rambler.ru (Russian: Рамблер), then the second most visited site in Russia. The first public version, 0.1.0, was released on October 4, 2004. Its source code is released under a BSD-like license, and it is known for its stability, rich feature set, simple configuration, and low system resource consumption. On June 1, 2011, nginx 1.0.4 was released.

Nginx is a lightweight web server/reverse proxy server and email (IMAP/POP3) proxy server released under a BSD-like license. Its hallmarks are low memory usage and strong concurrency: among comparable web servers, nginx's handling of concurrent connections does stand out. Nginx users in mainland China include Baidu, JD.com, Sina, NetEase, Tencent, Taobao, and others.

Under high connection concurrency, Nginx is a good alternative to the Apache server.

Founder: Igor Sysoev.
 

Functions of nginx

  • Web server
  • Proxy server (forward proxy and reverse proxy)
  • Load balancer

Nginx is a high-performance web and reverse proxy server with many excellent features:

On a single machine with a reference server configuration, it handles roughly 7,000-8,000 concurrent connections; in cluster mode, 20,000+.

As a web server: compared with Apache, Nginx uses fewer resources, supports more concurrent connections, and achieves higher efficiency, which makes it especially popular with virtual-host providers. It can respond to as many as 50,000 concurrent connections, thanks to Nginx's choice of epoll and kqueue as its event models.

As a load-balancing server: Nginx can serve Rails and PHP applications directly, or act as an HTTP proxy server facing the outside. Written in C, Nginx is far better than Perlbal in both system resource overhead and CPU efficiency.

As a mail proxy server: Nginx is also an excellent mail proxy server (one of the original goals of the project), and Last.fm has described a successful and pleasant experience using it this way.

Nginx is very simple to install, its configuration files are concise (it even supports Perl syntax), and it has very few bugs. Nginx starts easily and can run 7x24 almost without interruption, even for months without a restart, and the software version can be upgraded without interrupting service.

Basic capabilities

1. Serves static files (a web server for static resources) and can cache open file descriptors;

2. Reverse proxying with caching, load balancing, and health checking;

3. Support for FastCGI;

4. Modular (non-DSO) mechanism; supports multiple filter modules such as gzip and SSI, plus an image module for resizing images, etc.;

5. Support for SSL;

extensions

    Name-based and IP-based virtual hosts;

    Keepalive support;

    Smooth configuration reloads and in-place program version upgrades;

    Customizable access logs, with a log cache to improve performance;

    URL rewriting;

    Path aliases;

    IP-based and user-based authentication;

    Rate limiting, concurrent-connection limiting, etc.;

Advantages and disadvantages of nginx

advantage:

  • Small memory footprint; can sustain highly concurrent connections with fast processing and responses.
  • Can act as an HTTP server, virtual host, reverse proxy, and load balancer.
  • Supports the epoll model; in real production environments it can support roughly 30,000 concurrent connections.
  • Can rate-limit by IP and cap the number of connections.
  • Cross-platform and relatively easy to configure.
  • Can keep the real backend server IP addresses from being exposed.
  • Built-in health checks: if one of the load-balanced servers goes down, incoming requests are sent to the other servers.
  • Supports gzip compression and can add headers for the browser's local cache.
  • Supports hot deployment: configuration can be changed smoothly without interrupting service.
  • Receives user requests asynchronously, reducing the pressure on backend web servers.

shortcoming:

  • Weak dynamic processing: nginx handles static files well and uses little memory, but it is mediocre at serving dynamic pages. Nowadays nginx is generally placed at the front as a reverse proxy to absorb load, with dynamic requests passed to a backend.

* asynchronous, non-blocking

$ pstree |grep nginx
 |-+= 81666 root nginx: master process nginx
 | |--- 82500 nobody nginx: worker process
 | \--- 82501 nobody nginx: worker process
1 master process and 2 worker processes.

Every time a request comes in, a worker process handles it, but not all the way through. How far? Up to the point where blocking may occur, for example forwarding the request to an upstream (backend) server and waiting for the reply. The worker does not wait idly: after sending the upstream request, it registers an event ("notify me when upstream returns, and I will continue") and goes off to handle other requests. That is asynchrony. If more requests come in meanwhile, it deals with them in the same way. That is non-blocking I/O and I/O multiplexing. Once the upstream server returns, the event fires, the worker takes over again, and the request continues. That is the asynchronous callback.

For example: after a customer (the caller) pays (sends a request) to the cashier (the callee), the customer can do other things while waiting for change, such as making phone calls or chatting; and while the cash register is processing the transaction (the I/O operation), the cashier can help the customer bag the goods. When the cash register produces a result, the cashier checks the customer out (responds to the request). Of the four combinations of synchronous/asynchronous and blocking/non-blocking, this one is the most efficient way for the two parties to communicate.

The difference between synchronous and asynchronous lies in how the result of a call is communicated back to the caller.

Synchronous: when a synchronous call is issued, the caller must wait for the result to be returned before continuing execution.

Asynchronous: when an asynchronous call is issued, the caller does not get the result immediately.

There are generally two ways to get the result of an asynchronous call:

1. The caller actively polls for the result of the asynchronous call;

2. The callee notifies the caller of the result through a callback.

For example:

Synchronous courier pickup: Xiao Ming receives a text message that a courier is on its way, and waits downstairs until the package is delivered.
Asynchronous courier pickup: Xiao Ming receives the text message, and goes downstairs to collect the package only after the courier has arrived.
With asynchronous pickup, there are two ways for Xiao Ming to learn that the courier has arrived downstairs:
1. Keep phoning to ask whether the courier has arrived, i.e. active polling;
2. The courier phones Xiao Ming on arrival, and Xiao Ming then goes downstairs to collect the package, i.e. callback notification.
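The two notification styles above can be sketched with Python's `concurrent.futures` (an illustrative analogy only; nginx itself is written in C and uses epoll, not Python threads):

```python
# Polling vs. callback for retrieving an asynchronous result.
import time
from concurrent.futures import ThreadPoolExecutor

def deliver_package():
    """Simulates the courier traveling to the building."""
    time.sleep(0.1)
    return "package"

results = []

with ThreadPoolExecutor(max_workers=1) as pool:
    # Way 1: actively poll the future until it is done
    # ("keep phoning the courier").
    future = pool.submit(deliver_package)
    while not future.done():
        time.sleep(0.01)
    results.append(("poll", future.result()))

    # Way 2: register a callback; the callee notifies the caller
    # ("the courier phones Xiao Ming on arrival").
    future = pool.submit(deliver_package)
    future.add_done_callback(
        lambda f: results.append(("callback", f.result())))

print(results)
```

Polling burns cycles checking repeatedly; the callback lets the caller do other work until notified, which is the style nginx's event loop relies on.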

Blocking versus non-blocking concerns the behavior of the process/thread while waiting for a message: whether the current process/thread is suspended while it waits.

- Blocking: after a blocking call is issued, the current process/thread is suspended until the result is returned; only then is it woken up again.

- Non-blocking: a non-blocking call returns immediately and does not suspend the current process/thread.

Blocking courier pickup: after Xiao Ming receives the message that the courier is on its way, he does nothing else and just waits for the courier.
Non-blocking courier pickup: after receiving the message, Xiao Ming writes code and browses WeChat while waiting for the courier.
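A minimal Python sketch of the same difference (it assumes nothing about nginx internals; it just demonstrates the two call styles on a socket pair):

```python
# Blocking vs. non-blocking reads on a connected socket pair.
import socket

a, b = socket.socketpair()

# Non-blocking: the call returns immediately instead of suspending
# the thread when no data is available yet.
b.setblocking(False)
try:
    b.recv(1024)                 # nothing has been sent yet
    got_data = True
except BlockingIOError:          # "go write code / browse WeChat instead"
    got_data = False

a.sendall(b"hello")
b.setblocking(True)              # blocking: recv suspends until data arrives
data = b.recv(1024)

a.close()
b.close()
print(got_data, data)
```

The first `recv` raises `BlockingIOError` rather than suspending the thread; the second, in blocking mode, simply waits until the data is there.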

I/O multiplexing

There are three models: select, poll, and epoll.

1. I/O multiplexing [handling many connections concurrently]

The most traditional approach is the multi-process concurrency model: every time a new I/O stream comes in, a new process is allocated to manage it.

The second approach is I/O multiplexing: a single thread manages multiple I/O streams at the same time by recording and tracking the state of each stream (socket).

The "multiplexing" in I/O multiplexing means exactly this: in a single thread, multiple I/O streams are managed at once by recording and tracking the state of each socket (I/O stream). It was invented to raise server throughput as much as possible: within the same thread, multiple I/O streams are serviced in turn, like flipping a switch between them.

2. When a request arrives, how does Nginx use epoll to receive it?

Nginx accepts a large number of connections, and epoll monitors all of them; then, like flipping a switch, whichever connection has data gets switched in and the corresponding handler code is called.

epoll can check a large number of fds efficiently; calls with similar functionality, such as kqueue, are provided on BSD-derived UNIX systems.

epoll can be understood as "event poll". Unlike busy polling or indiscriminate polling, when a connection has an I/O event, epoll tells the process which connection it is, and the process then handles that event. At that point every operation we perform on those streams is meaningful. (The complexity drops to O(k), where k is the number of streams with I/O events; some regard it as O(1).)

Analogy: there is a mail room downstairs in Xiao Ming's building. Every arriving package is first received and labeled, and then Xiao Ming is notified to come and pick it up.

epoll can be said to be the latest implementation of I/O multiplexing, and it fixes most of the problems of select/poll, for example:

• epoll is thread-safe.

• epoll tells you which socket has data, so you don't have to find it yourself.
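A small sketch with Python's `selectors` module, whose `DefaultSelector` picks epoll on Linux and kqueue on BSD/macOS. It shows the key property described above: the kernel reports exactly which streams are ready, so the single thread never scans idle ones:

```python
# One thread multiplexing several I/O streams via the OS event API.
import selectors
import socket

sel = selectors.DefaultSelector()

# Three independent I/O streams managed by one thread.
pairs = [socket.socketpair() for _ in range(3)]
for i, (_writer, reader) in enumerate(pairs):
    reader.setblocking(False)
    sel.register(reader, selectors.EVENT_READ, data=i)

# Only streams 0 and 2 actually receive data.
pairs[0][0].sendall(b"x")
pairs[2][0].sendall(b"y")

# select() returns only the streams with pending events (O(k)),
# instead of forcing us to poll every registered socket.
ready = sorted(key.data for key, _events in sel.select(timeout=1))
print(ready)

for writer, reader in pairs:
    sel.unregister(reader)
    writer.close()
    reader.close()
sel.close()
```

`sel.select()` reports only streams 0 and 2 as readable; stream 1, which received nothing, is never touched.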
 

Nginx's internal technical architecture

The Nginx server, with its high concurrency, high performance, and high efficiency in processing network requests, has won wide recognition in the industry. In recent years it has ranked second in web-server deployments and is widely used for reverse proxying and load balancing.

How does Nginx achieve these goals? The answer is its unique internal technical architecture design.

Briefly explain a few points:

1) When nginx starts, two types of processes are created: one main process (master) and one or more worker processes (workers). The main process does not handle network requests; it is responsible for managing the worker processes: loading configuration, starting workers, and performing non-stop (hot) upgrades. Therefore, after nginx starts, looking at the operating system's process list shows at least two nginx processes.

2) It is the worker processes that actually handle network requests and responses. On Unix-like systems, nginx can be configured with multiple workers, and each worker can handle thousands of network requests concurrently.

3) Modular design. An nginx worker consists of the core and functional modules. The core maintains a run-loop and invokes module functions at the different stages of request processing, such as network reads/writes, storage reads/writes, content transfer, outbound filtering, and forwarding requests to upstream servers. The modular code design also lets us select and modify functional modules as needed and compile a server with exactly the features we want.

4) Event-driven, asynchronous, non-blocking operation is the key to nginx's high concurrency and high performance. It also benefits from the event-notification and I/O-enhancement facilities in the Linux, Solaris, and BSD-family kernels, such as kqueue, epoll, and event ports.

5) Proxying is a design that runs deep in nginx. Whether for HTTP or for requests and responses to FastCGI, memcached, Redis, and so on, nginx uses a proxy mechanism at its core. Nginx is therefore inherently a high-performance proxy server.

Nginx installation and deployment

yum install nginx

Configure a Yum repository for nginx. (If the machine's upstream epel/extension sources already provide nginx, it can be installed directly; otherwise add the epel repository, or configure the official nginx package repository manually. After that, nginx can be installed and updated from the repository.)

The nginx yum repository configuration file is as follows:
[root@localhost ~]# vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
 
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
 
[root@localhost ~]# yum -y install nginx    # install the latest stable version
 
[root@localhost ~]# nginx -V
nginx version: nginx/1.20.2
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) 
built with OpenSSL 1.0.2k-fips  26 Jan 2017
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'
 
[root@localhost ~]# nginx -v    # show the version
nginx version: nginx/1.20.2
Disable the firewall and SELinux:
[root@nginx-server ~]# getenforce  
Enforcing
[root@nginx-server ~]# sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
[root@nginx-server ~]# setenforce 0
[root@nginx-server ~]# systemctl stop firewalld 
[root@nginx-server ~]# systemctl disable firewalld
Start the service:
[root@nginx-server ~]# systemctl start nginx

Compile and install Nginx

1. Install the build environment
[root@localhost ~]# yum -y install gcc gcc-c++ make ncurses ncurses-devel pcre pcre-devel openssl openssl-devel zlib zlib-devel
2. Create the nginx user
[root@localhost ~]# useradd nginx
3. Install Nginx (fetch the source package from the official site)
[root@localhost ~]# wget http://nginx.org/download/nginx-1.20.2.tar.gz
[root@localhost ~]# tar zxf nginx-1.20.2.tar.gz
[root@localhost ~]# cd nginx-1.20.2
[root@localhost nginx-1.20.2]# ./configure --prefix=/usr/local/nginx --group=nginx --user=nginx --sbin-path=/usr/local/nginx/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/tmp/nginx/client_body --http-proxy-temp-path=/tmp/nginx/proxy --http-fastcgi-temp-path=/tmp/nginx/fastcgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_gzip_static_module --with-pcre --with-http_realip_module --with-stream
[root@localhost nginx-1.20.2]# make && make install
 
Start it:
[root@localhost ~]# cd /usr/local/nginx/sbin/
[root@localhost sbin]# ./nginx
If startup fails with the following error:
nginx: [emerg] mkdir() "/tmp/nginx/client_body" failed (2: No such file or directory)
Solution:
[root@localhost sbin]# mkdir -pv /tmp/nginx/client_body
Then start again:
[root@localhost sbin]# ./nginx 
 
Note:
The compile-and-install workflow (customizable):
1. Install the environment needed for the build (required packages; create the user)
 
2. Download and unpack the source package needed for the build
 
3. In the unpacked directory, configure the build (./configure with the required parameters) [MySQL uses cmake for this step]
 
4. Compile: make
 
5. Install: make install

compile parameters

# List the modules nginx was built with
[root@localhost ~]# /usr/local/nginx/sbin/nginx -V
# What each configure option does
--with-cc-opt='-g -O2 -fPIE -fstack-protector    //extra flags appended to the CFLAGS variable (used on FreeBSD or Ubuntu)
--param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' 
--with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' 
 
--prefix=/usr/local/nginx                        //installation directory
--conf-path=/etc/nginx/nginx.conf                //configuration file
--http-log-path=/var/log/nginx/access.log        //access log
--error-log-path=/var/log/nginx/error.log        //error log
--lock-path=/var/lock/nginx.lock                 //lock file
--pid-path=/run/nginx.pid                        //pid file
 
--http-client-body-temp-path=/var/lib/nginx/body    //temporary path for HTTP client request bodies
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi     //temporary path for HTTP FastCGI data
--http-proxy-temp-path=/var/lib/nginx/proxy         //temporary path for HTTP proxy data
--http-scgi-temp-path=/var/lib/nginx/scgi           //temporary path for HTTP SCGI data
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi         //temporary path for HTTP uWSGI data
 
--with-debug                                        //enable debug logging
--with-pcre-jit                                     //build PCRE with just-in-time compilation
--with-ipv6                                         //enable IPv6 support
--with-http_ssl_module                              //enable SSL support
--with-http_stub_status_module                      //expose nginx status since last start
--with-http_realip_module                 //allow the client IP address to be taken from a request header; off by default
--with-http_auth_request_module           //client authorization based on the result of a subrequest: a 2xx response allows access, 401 or 403 denies it with the corresponding error code, and any other response code is treated as an error
--with-http_addition_module               //an output filter that adds text before and after a response; supports partial buffering and partial responses
--with-http_dav_module                    //adds the PUT, DELETE, MKCOL (create collection), COPY and MOVE methods; off by default, must be enabled at compile time
--with-http_geoip_module                  //resolve client IP addresses against a precompiled MaxMind database to set variables
--with-http_gunzip_module                 //decompresses responses with a "Content-Encoding: gzip" header for clients that do not support the gzip encoding
--with-http_gzip_static_module            //serves precompressed ".gz" versions of static files
--with-http_image_filter_module           //a filter for transforming JPEG/GIF/PNG images (disabled by default; requires the gd library)
--with-http_spdy_module                   //SPDY support, which can shorten page load times
--with-http_sub_module                    //replace some text in nginx responses with other text
--with-http_xslt_module                   //filter and transform XML responses
--with-mail                               //enable the POP3/IMAP4/SMTP proxy modules
--with-mail_ssl_module                    //enable ngx_mail_ssl_module

nginx deployment

Structure of nginx.conf: the file consists of three parts: the global block, the events block, and the http block. The http block in turn contains an http global block and one or more server blocks, and each server block contains a server global block and one or more location blocks. Configuration blocks nested at the same level have no ordering relationship among them. (The location of the server blocks may differ between nginx versions or installation methods: sometimes they sit next to the http global settings in the main configuration file, sometimes in included sub-configuration files. Either way, as long as the content of a server block is correct, it takes effect whether it is written in the main configuration file or a sub-configuration file.)

# Global settings 
worker_processes  4;          #number of worker processes to start; usually set equal to the number of logical CPUs 
Note:
1. Physical CPUs: the number of CPU packages actually plugged into the motherboard (count the distinct "physical id" values)
2. CPU cores: the number of cores on a single CPU package, e.g. dual-core, quad-core ("cpu cores")
3. Logical CPUs: normally, logical CPUs = number of physical CPUs x cores per CPU
 
error_log  logs/error.log;    #error log location 
worker_rlimit_nofile 102400;  #maximum number of files one nginx process may open 
pid        /var/run/nginx.pid; 
events { 
    worker_connections  1024; #maximum concurrent connections per worker process 
}
# HTTP service settings 
http { 
    include      mime.types; 
    default_type  application/octet-stream; 
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'; 
    access_log  /var/log/nginx/access.log  main;    #access log location and format 
    sendfile          on; #whether to use sendfile() to send files; usually on, but for heavy disk-I/O workloads it can be set to off to lower system load 
    gzip              on;      #enable gzip compression (remove the comment to turn it on) 
    keepalive_timeout  65;     #keep-alive connection timeout
# Virtual server settings 
    server { 
        listen      80;        #listening port 
        server_name  localhost;        #bound hostname, domain name, or IP address 
        charset koi8-r;        #character encoding 
        location / { 
            root  /var/www/nginx;           #document root of the default site; must be created manually
            index  index.html index.htm;    #default documents 
            } 
        error_page  500 502 503 504  /50x.html; #page returned on these errors 
        location = /50x.html { 
            root  html;        #the absolute path here is /usr/local/nginx/html
        } 
            include /etc/nginx/conf.d/*.conf;  #load sub-configuration files
 
    } 
 }
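As a quick check for the worker_processes note above (logical CPUs = physical CPUs x cores per CPU), Python can report the logical CPU count that worker_processes is usually matched to (a convenience sketch, not part of nginx; nginx itself also accepts "worker_processes auto;"):

```python
import os

# Logical CPU count as seen by the operating system; worker_processes
# is commonly set to this value.
logical_cpus = os.cpu_count()
print(logical_cpus)
```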
 
Check that the nginx configuration file is correct:
[root@localhost ~]# /usr/local/nginx/sbin/nginx -t
[root@localhost ~]# mkdir -p /tmp/nginx
 
Start the nginx service:
[root@localhost ~]# /usr/local/nginx/sbin/nginx
 
Controlling the nginx service with the nginx command
 
nginx -c /path/nginx.conf            # start nginx with a configuration file in a specific location
nginx -s reload                      # reload after a configuration change
nginx -s reopen                      # reopen the log files
nginx -s stop                        # stop nginx quickly
nginx -s quit                        # stop nginx gracefully
nginx -t                             # test whether the current configuration file is correct
nginx -t -c /usr/local/nginx/conf/nginx.conf  # test a specific configuration file
 
Note:
nginx -s reload loads the modified configuration file; after the command is issued, the following happens:
1. The master process checks the configuration file for correctness; on error it reports the problem and nginx keeps working with the original configuration (the workers are unaffected).
2. Nginx starts new worker processes that use the new configuration.
3. Nginx routes new requests to the new worker processes.
4. Nginx waits until all outstanding requests of the old worker processes have returned, then shuts those workers down.
5. This repeats until all old worker processes have been shut down.

The following content relates to nginx installed via yum

main configuration file content

Origin blog.csdn.net/qq_50660509/article/details/129687639