Chapter 4: Nginx Configuration Example - Reverse Proxy
4.1 Reverse proxy, example 1
Goal: use an nginx reverse proxy so that visiting www.123.com goes directly to 127.0.0.1:8080.
4.1.1 Experimental code
1) Start a Tomcat container; entering 127.0.0.1:8080 in the browser address bar should show the default Tomcat page.
[root@hadoop-104 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
tomcat latest 238e6d7313e3 9 months ago 506MB
[root@hadoop-104 ~]# docker run -d -p 8080:8080 tomcat
08c7bd6f8d662a6a0a651b4bf40936bbc078b46bd8acb3111c743f14032c302a
[root@hadoop-104 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
08c7bd6f8d66 tomcat "catalina.sh run" 15 seconds ago Up 11 seconds 0.0.0.0:8080->8080/tcp lucid_boyd
[root@hadoop-104 ~]#
# Check the host's IP
[root@hadoop-104 ~]# hostname -i
192.168.137.14
# Verify that Tomcat is reachable
[root@hadoop-104 ~]# curl http://192.168.137.14:8080/
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Apache Tomcat/8.5.43</title>
<link href="favicon.ico" rel="icon" type="image/x-icon" />
<link href="favicon.ico" rel="shortcut icon" type="image/x-icon" />
<link href="tomcat.css" rel="stylesheet" type="text/css" />
</head>
<body>
...
</body>
</html>
External access via browser: http://192.168.137.14:8080/
2) Map www.123.com to 192.168.137.14 by modifying the local hosts file
Add this entry to the hosts file:
192.168.137.14 www.123.com
Once configured, the Tomcat welcome page from step 1 is reachable at www.123.com:8080. To reach it by typing just www.123.com, without the port, we use an nginx reverse proxy.
Visit: http://www.123.com:8080/
3) Add the following configuration to the nginx.conf configuration file
![image-20200419164450432](https://img2020.cnblogs.com/blog/1722983/202004/1722983-20200420163045927-61717263.png)
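If the screenshot above does not render, the configuration it shows is roughly the following server block (a sketch reconstructed from the surrounding description; the exact directives in the screenshot may differ):

```nginx
server {
    listen       80;
    server_name  www.123.com;

    location / {
        # forward everything to the local Tomcat
        proxy_pass http://127.0.0.1:8080;
    }
}
```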
As configured above, nginx listens on port 80 for the server name www.123.com. When no port is given in the URL, the default is port 80, so a request to www.123.com is forwarded to 127.0.0.1:8080. Enter www.123.com in the browser and the results are as follows:
If the site can be accessed on the Linux host but not from outside, check whether the firewall is running. If it is, either open the required port (for example, firewall-cmd --permanent --add-port=80/tcp followed by firewall-cmd --reload) or stop the firewall and try again.
# Check the firewall status
[root@hadoop-104 nginx]# service firewalld status
Redirecting to /bin/systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2020-04-19 01:52:58 EDT; 2h 49min ago
Docs: man:firewalld(1)
Main PID: 5944 (firewalld)
CGroup: /system.slice/firewalld.service
└─5944 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
# Stop the firewall
[root@hadoop-104 nginx]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
# Check the firewall status again
[root@hadoop-104 nginx]# service firewalld status
Redirecting to /bin/systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Sun 2020-04-19 04:42:10 EDT; 6s ago
Docs: man:firewalld(1)
Process: 5944 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
Main PID: 5944 (code=exited, status=0/SUCCESS)
[root@hadoop-104 nginx]#
4.3 Reverse proxy, example 2
Goal: use an nginx reverse proxy to route to different ports according to the request path. nginx listens on port 9001:
visiting http://192.168.137.14:9001/edu/ is forwarded to 192.168.137.14:8001
visiting http://192.168.137.14:9001/vod/ is forwarded to 192.168.137.14:8002
4.3.1 Experimental code
Step 1: Prepare test data
Under the /opt/ directory, create two folders, "edu" and "vod"; the directory structure is as follows:
[root@hadoop-104 opt]# pwd
/opt
# The directory structure is as follows
[root@hadoop-104 opt]# ll -R edu vod
edu:
total 4
-rw-r--r--. 1 root root 523 Apr 19 09:59 a.jsp
drwxr-xr-x. 2 root root 21 Apr 19 07:01 webapps
edu/webapps:
total 4
-rw-r--r--. 1 root root 346 Apr 19 07:04 web.xml
vod:
total 4
-rw-r--r--. 1 root root 525 Apr 19 09:59 b.jsp
drwxr-xr-x. 2 root root 21 Apr 19 07:01 webapps
vod/webapps:
total 4
-rw-r--r--. 1 root root 346 Apr 19 07:04 web.xml
[root@hadoop-104 opt]#
The web.xml in both directories is identical:
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://java.sun.com/xml/ns/javaee"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
id="WebApp_ID" version="2.5">
<display-name>test</display-name>
</web-app>
a.jsp in the edu directory
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
</head>
<body>
-----------welcome------------
<%="hello world!!! i am 8081"%>
<br>
<br>
<% System.out.println("=============docker tomcat self");%>
</body>
</html>
b.jsp in the vod directory
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
</head>
<body>
-----------welcome------------
<%="hello world!!! i am 8082. "%>
<br>
<br>
<% System.out.println("=============docker tomcat self");%>
</body>
</html>
Step 2: Prepare two Tomcats, one on port 8001 and one on port 8002
Create the containers, mapping the local /opt/edu and /opt/vod directories as data volumes to /usr/local/tomcat/webapps in the Tomcat containers
[root@hadoop-104 opt]# docker run -d --name tomcat_8001 -v /opt/edu:/usr/local/tomcat/webapps/edu -p 8001:8080 tomcat
5b34385bec1e531ee94ab5b5346dffb596a0e4e655797380b4f7695afe27d5c3
[root@hadoop-104 opt]# docker run -d --name tomcat_8002 -v /opt/vod:/usr/local/tomcat/webapps/vod -p 8002:8080 tomcat
db9f6a62e7816a26ab9033889bcaf0d66aaada78d20657f7d4d9a67f6a471910
View the two containers created:
[root@hadoop-104 opt]# docker ps -l -n 2
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
db9f6a62e781 tomcat "catalina.sh run" 15 seconds ago Up 14 seconds 0.0.0.0:8002->8080/tcp tomcat_8002
5b34385bec1e tomcat "catalina.sh run" 20 seconds ago Up 19 seconds 0.0.0.0:8001->8080/tcp tomcat_8001
[root@hadoop-104 opt]#
Go into the "tomcat_8001" and "tomcat_8002" containers, and check the contents of the /usr/local/tomcat/webapps directory
[root@hadoop-104 opt]# docker exec -it tomcat_8001 /bin/bash
root@5b34385bec1e:/usr/local/tomcat# cd webapps/
root@5b34385bec1e:/usr/local/tomcat/webapps# ls -l
total 8
drwxr-xr-x. 3 root root 4096 Jul 18 2019 ROOT
drwxr-xr-x. 14 root root 4096 Jul 18 2019 docs
drwxr-xr-x. 3 root root 34 Apr 19 13:59 edu
drwxr-xr-x. 6 root root 83 Jul 18 2019 examples
drwxr-xr-x. 5 root root 87 Jul 18 2019 host-manager
drwxr-xr-x. 5 root root 103 Jul 18 2019 manager
root@5b34385bec1e:/usr/local/tomcat/webapps# ls -l -R edu/
edu/:
total 4
-rw-r--r--. 1 root root 523 Apr 19 13:59 a.jsp
drwxr-xr-x. 2 root root 21 Apr 19 11:01 webapps
edu/webapps:
total 4
-rw-r--r--. 1 root root 346 Apr 19 11:04 web.xml
root@5b34385bec1e:/usr/local/tomcat/webapps#
Go to the "tomcat_8002" container and check the directory structure
[root@hadoop-104 opt]# docker exec -it tomcat_8002 /bin/bash
root@db9f6a62e781:/usr/local/tomcat# cd webapps/
root@db9f6a62e781:/usr/local/tomcat/webapps# ls -l
total 8
drwxr-xr-x. 3 root root 4096 Jul 18 2019 ROOT
drwxr-xr-x. 14 root root 4096 Jul 18 2019 docs
drwxr-xr-x. 6 root root 83 Jul 18 2019 examples
drwxr-xr-x. 5 root root 87 Jul 18 2019 host-manager
drwxr-xr-x. 5 root root 103 Jul 18 2019 manager
drwxr-xr-x. 3 root root 34 Apr 19 13:59 vod
root@db9f6a62e781:/usr/local/tomcat/webapps# ls -l -R vod/
vod/:
total 4
-rw-r--r--. 1 root root 525 Apr 19 13:59 b.jsp
drwxr-xr-x. 2 root root 21 Apr 19 11:01 webapps
vod/webapps:
total 4
-rw-r--r--. 1 root root 346 Apr 19 11:04 web.xml
root@db9f6a62e781:/usr/local/tomcat/webapps#
Step 3: Modify the nginx configuration file
Add a server {} block inside the http block:
server {
listen 9001;
server_name 192.168.137.14;
location / {
proxy_pass http://192.168.137.14:8080;
root html;
index index.html index.htm;
}
location ~ /edu/ {
proxy_pass http://192.168.137.14:8001;
index index.html index.htm;
}
location ~ /vod/ {
proxy_pass http://192.168.137.14:8002;
index index.html index.htm;
}
}
location directive description
This directive is used to match URLs. The syntax is as follows:
- = : used before a URI without regular expressions; the request string must match the URI exactly. On a successful match, the search stops and the request is processed immediately.
- ~ : indicates that the URI contains a regular expression, matched case-sensitively.
- ~* : indicates that the URI contains a regular expression, matched case-insensitively.
- ^~ : used before a URI without regular expressions; nginx picks the location whose prefix best matches the request string and uses it to process the request immediately, without checking the regular-expression locations in the server block.
Note: if the URI contains a regular expression, it must be marked with ~ or ~*.
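As an illustration, the four modifiers might be combined like this (the paths and backends here are hypothetical, not part of the experiment):

```nginx
server {
    listen 9002;

    # = : exact match, checked first; processing stops on a hit
    location = /ping {
        return 200 "pong\n";
    }

    # ^~ : longest non-regex prefix; if it wins, regex locations are skipped
    location ^~ /static/ {
        root /opt/edu/data;
    }

    # ~ : case-sensitive regular expression
    location ~ \.jsp$ {
        proxy_pass http://192.168.137.14:8001;
    }

    # ~* : case-insensitive regular expression
    location ~* \.(png|jpg|gif)$ {
        root /opt/edu/data;
    }
}
```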
Step 4: Test
Visit: http://192.168.137.14:9001/edu/a.jsp
Visit: http://192.168.137.14:9001/vod/b.jsp
Chapter 5: Nginx Configuration Example - Load Balancing
Goal: configure load balancing
5.1 Experimental code
1) First prepare two Tomcats running at the same time
Create the /opt/test directory, then create the files /opt/test/a.jsp and /opt/test/webapps/web.xml
/opt/test/a.jsp
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
</head>
<body>
8080 !!!
</body>
</html>
/opt/test/webapps/web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://java.sun.com/xml/ns/javaee"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
id="WebApp_ID" version="2.5">
<display-name>test</display-name>
</web-app>
Create the "tomcat_8001" and "tomcat_8002" containers, mapping the /opt/test data volume to the /usr/local/tomcat/webapps/test directory
[root@hadoop-104 test]# docker run -d --name tomcat_8001 -v /opt/test:/usr/local/tomcat/webapps/test -p 8001:8080 tomcat
b7a0c6d3e9ca241a3d9b0dee972ab7837a0aefbaa0226219ae1c03eecb08beb4
[root@hadoop-104 test]# docker run -d --name tomcat_8002 -v /opt/test:/usr/local/tomcat/webapps/test -p 8002:8080 tomcat
4ecaec8c00da9b3cf207b7cd1e5c10ca711523476c132924bebe123bf0c767bc
Go into the "tomcat_8001" container, copy test to edu, and change a.jsp to display 8081
[root@hadoop-104 test]# docker exec -it tomcat_8001 /bin/bash
root@b7a0c6d3e9ca:/usr/local/tomcat# cd webapps/
root@b7a0c6d3e9ca:/usr/local/tomcat/webapps# ls
ROOT docs examples host-manager manager test
root@b7a0c6d3e9ca:/usr/local/tomcat/webapps# cp -rfp test edu
# Replace 8080 with 8081
root@b7a0c6d3e9ca:/usr/local/tomcat/webapps# sed -i "s/8080/8081/g" edu/a.jsp
root@b7a0c6d3e9ca:/usr/local/tomcat/webapps# cat edu/a.jsp
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
</head>
<body>
8081 !!!
</body>
</html>
root@b7a0c6d3e9ca:/usr/local/tomcat/webapps#
Go into the "tomcat_8002" container, copy test to edu, and change a.jsp to display 8082
[root@hadoop-104 ~]# docker exec -it tomcat_8002 /bin/bash
root@4ecaec8c00da:/usr/local/tomcat# cd webapps/
root@4ecaec8c00da:/usr/local/tomcat/webapps# ls
ROOT docs examples host-manager manager test
root@4ecaec8c00da:/usr/local/tomcat/webapps# cp -rfp test edu
# Replace 8080 with 8082
root@4ecaec8c00da:/usr/local/tomcat/webapps# sed -i "s/8080/8082/g" edu/a.jsp
root@4ecaec8c00da:/usr/local/tomcat/webapps# cat edu/a.jsp
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
</head>
<body>
8082 !!!
</body>
</html>
root@4ecaec8c00da:/usr/local/tomcat/webapps#
2) Configure in nginx.conf
upstream myserver {
server 192.168.137.14:8001;
server 192.168.137.14:8002;
}
server {
listen 80;
server_name 192.168.137.14;
location / {
proxy_pass http://myserver;
root html;
index index.html index.htm;
}
}
3) Test
Test: http://192.168.137.14:8001/edu/a.jsp
Test: http://192.168.137.14:8002/edu/a.jsp
Test: http://192.168.137.14/edu/a.jsp ; refresh repeatedly and the response alternates between the two backends.
With the explosive growth of Internet traffic, load balancing is no longer an unfamiliar topic. As the name implies, load balancing distributes load across different service units, which both keeps the service available and keeps responses fast, giving users a good experience. The rapid growth in traffic has spawned a variety of load-balancing products. Dedicated load-balancing hardware offers good functionality but is expensive, which has made load-balancing software very popular; on Linux, services such as Nginx, LVS, and HAProxy provide load balancing, and Nginx supports several distribution strategies:
- Round robin (the default)
  Requests are assigned to the backend servers one by one in order; if a backend server goes down, it is removed automatically.
- weight
  The weight defaults to 1; the higher the weight, the more requests a server is assigned. The polling probability is proportional to the weight, which is useful when backend servers have uneven performance. For example:
  upstream server_pool {
      server 192.168.5.21 weight=10;
      server 192.168.5.22 weight=10;
  }
- ip_hash
  Each request is assigned according to the hash of the client IP, so each visitor always reaches the same backend server; this can solve the session problem. For example:
  upstream server_pool {
      ip_hash;
      server 192.168.5.21:80;
      server 192.168.5.22:80;
  }
- fair (third party)
  Requests are distributed according to the response time of the backend servers, with shorter response times given priority. For example:
  upstream server_pool {
      server 192.168.5.21:80;
      server 192.168.5.22:80;
      fair;
  }
Chapter 6: Nginx Configuration Example - Dynamic/Static Separation
Nginx dynamic/static separation simply means handling dynamic and static requests separately. It should not be understood as merely physically separating dynamic and static pages; strictly speaking, it is separating dynamic requests from static requests, which can be understood as using Nginx to serve static pages and Tomcat to handle dynamic pages. In terms of current practice, there are roughly two approaches:
- One is to put static files under a separate domain name on separate servers, which is the current mainstream scheme;
- The other is to publish dynamic and static files mixed together and separate them with nginx.
Different requests are forwarded by matching on file suffixes in location blocks. By setting the expires parameter, you give the browser a cache expiry time, reducing requests and traffic to the server. Specifically, Expires sets an expiration time on a resource, so the browser can determine whether the resource is fresh by itself, without going to the server for verification, generating no extra traffic. This approach is well suited to resources that change infrequently (Expires caching is not recommended for frequently updated files). Here I set 3d, meaning: within 3 days of accessing this URL, the browser sends a request and compares the file's last-modified time with the server; if it has not changed, the server does not resend the file and returns status code 304; if it has been modified, the file is downloaded from the server again with status code 200.
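A sketch of such a static location with a 3-day expiry (the path is illustrative):

```nginx
location /images/ {
    root /opt/edu/data;
    expires 3d;      # browsers may cache images for 3 days
    autoindex on;
}
```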
6.1 Experimental code
- Project resource preparation
Under the /opt/edu directory, create two folders, "www" and "images", to store static resources:
[root@hadoop-104 edu]# mkdir -p data/{images,www}
/opt/edu/data/www/index.html:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>动静分离</title>
</head>
<body>
<h2>测试动静分离</h2>
</body>
</html>
Find an image and place it in the /opt/edu/data/images directory.
The entire /opt/edu directory structure is:
[root@hadoop-104 edu]# find ./
./
./webapps
./webapps/web.xml
./a.jsp
./data
./data/images
./data/images/timg.jpg
./data/www
./data/www/index.html
[root@hadoop-104 edu]#
Create the docker container tomcat_8001:
[root@hadoop-104 edu]# docker run -d --name tomcat_8001 -v /opt/edu:/usr/local/tomcat/webapps/edu -p 8001:8080 tomcat
1eaba92b56f0987f8c79cc84043b87a6dc2c7ae22607c46c169c1621cac6a9c2
- Nginx configuration
Find the nginx installation directory and open the conf/nginx.conf configuration file:
upstream myserver {
    server 192.168.137.14:8001;
    server 192.168.137.14:8002;
}
server {
    listen 80;
    server_name 192.168.137.14;
    location / {
        proxy_pass http://myserver;
        root html;
        index index.html index.htm;
    }
    location /www/ {
        root /opt/edu/data;
        index index.html index.htm;
    }
    location /images/ {
        root /opt/edu/data;
        autoindex on;
    }
}
- Test
Test static request: http://192.168.137.14/images/
Test static request: http://192.168.137.14/www/index.html
Test dynamic request: http://192.168.137.14/edu/a.jsp
Chapter 7: Nginx Principles and Optimization Parameters
1. master and worker
Looking at the nginx processes, you can see two kinds of processes, master and worker:
[root@master ~]# ps -ef|grep nginx
root 9494 1 0 03:20 ? 00:00:00 nginx: master process nginx
nginx 9495 9494 0 03:20 ? 00:00:00 nginx: worker process
[root@master ~]#
The following is the architectural model of master and worker:
![image-20200420002053601](images/image-20200420002053601.png)
2. How the worker works
![image-20200420002121940](images/image-20200420002121940.png)
3. Benefits of one master and multiple workers
(1) You can use nginx -s reload for hot deployment without stopping the service.
(2) Each worker is an independent process; if one worker has a problem, the other workers continue to compete for and handle requests independently, so the service is not interrupted.
4. How many workers are appropriate?
It is most appropriate for the number of workers to equal the number of CPU cores on the server.
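In nginx.conf this is controlled by the worker_processes directive; a sketch:

```nginx
# set explicitly, or use "auto" to match the number of CPU cores
worker_processes auto;
```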
5. Number of connections: worker_connections
- Question 1: how many connections does a worker occupy to serve one request?
  Answer: 2 or 4. Serving a static resource occupies 2 connections; when acting as a reverse proxy, the worker also holds a connection to the backend, occupying 4.
- Question 2: nginx has one master and four workers, and each worker supports at most 1024 connections. What is the maximum number of concurrent connections?
  For ordinary static access, the maximum concurrency is (worker_connections * worker_processes) / 2.
  For HTTP reverse proxying, the maximum concurrency is (worker_connections * worker_processes) / 4.
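These limits derive from directives in nginx.conf. With the values sketched below, maximum static concurrency would be (1024 * 4) / 2 = 2048, and reverse-proxy concurrency (1024 * 4) / 4 = 1024:

```nginx
worker_processes 4;

events {
    # maximum simultaneous connections per worker
    worker_connections 1024;
}
```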
Chapter 8: nginx to build a highly available cluster
8.1 Keepalived + Nginx high availability cluster (master-slave mode)
1. IP address planning
host | service | role |
---|---|---|
192.168.137.14 | docker tomcat | tomcat |
192.168.137.15 | nginx+keepalived | master |
192.168.137.16 | nginx+keepalived | backup |
192.168.137.50 | virtual IP | |
2. Install nginx and keepalived
Install nginx and keepalived on 192.168.137.15 and 192.168.137.16 respectively
Install nginx:
yum -y install nginx
Install keepalived:
yum -y install keepalived
3. Modify the configuration
192.168.137.15:/etc/keepalived/keepalived.conf
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 192.168.137.15
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_http_port {
script "/usr/local/src/nginx_check.sh"
interval 2 # interval (seconds) between runs of the check script
weight 2
}
vrrp_instance VI_1 {
state MASTER # change MASTER to BACKUP on the backup server
interface eth0
virtual_router_id 51 # virtual_router_id must be identical on master and backup
priority 100 # master and backup use different priorities; the master's value is higher
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.137.50
}
}
192.168.137.16:/etc/keepalived/keepalived.conf
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 192.168.137.15
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_http_port {
script "/usr/local/src/nginx_check.sh"
interval 2 # interval (seconds) between runs of the check script
weight 2
}
vrrp_instance VI_1 {
state BACKUP # change MASTER to BACKUP on the backup server
interface eth0
virtual_router_id 51 # virtual_router_id must be identical on master and backup
priority 90 # master and backup use different priorities; the backup's value is lower
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.137.50
}
}
Create the "nginx_check.sh" file in the /usr/local/src directory on both 192.168.137.15 and 192.168.137.16:
#!/bin/bash
# Count running nginx processes
A=`ps -C nginx --no-headers | wc -l`
if [ ${A} -eq 0 ]; then
    # nginx is not running; try to start it
    systemctl start nginx
    sleep 2
    var=`ps -C nginx --no-headers | wc -l`
    if [ ${var} -eq 0 ]; then
        # nginx failed to start; stop keepalived so the VIP fails over to the backup
        killall keepalived
    fi
fi
On 192.168.137.15, modify /usr/share/nginx/html/index.html to add the IP address identifier "192.168.137.15":
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx! 192.168.137.15</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
On 192.168.137.16, modify /usr/share/nginx/html/index.html to add the IP address identifier "192.168.137.16":
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx! 192.168.137.16</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
4. Create the tomcat container
Following the experiment in Chapter 5, create a tomcat container on 192.168.137.14:
[root@tomcat ~]# docker run -d --name tomcat_8001 -v /opt/edu:/usr/local/tomcat/webapps/edu -p 8001:8080 tomcat
b80ac12ae19c3784ffc9f08c698b73ac9bcd3bc60e1b7f98dd7c6e9f9212049c
[root@tomcat ~]#
Add the following statements in "/etc/nginx/nginx.conf" of "192.168.137.15" and "192.168.137.16" respectively:
upstream myserver {
server 192.168.137.14:8001;
server 192.168.137.14:8002;
}
server {
listen 9001;
server_name 192.168.137.15; # on 192.168.137.16, change this IP to 192.168.137.16
location / {
proxy_pass http://myserver;
root html;
index index.html index.htm;
}
}
5. Start nginx and keepalived on the two servers respectively
Start nginx: nginx
Start keepalived: systemctl start keepalived.service
6. Final test
(1) Enter the virtual IP address 192.168.137.50 in the browser address bar
[root@master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:ec:8a:c5 brd ff:ff:ff:ff:ff:ff
inet 192.168.137.15/24 brd 192.168.137.255 scope global eth0
valid_lft forever preferred_lft forever
# The virtual IP is bound here
inet 192.168.137.50/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feec:8ac5/64 scope link
valid_lft forever preferred_lft forever
[root@master ~]#
Visit: http://192.168.137.50/
(2) Stop the main server (192.168.137.15) nginx and keepalived, and then enter 192.168.137.50
[root@master ~]# nginx -s stop
[root@master ~]# systemctl stop keepalived.service
[root@master ~]#
Check whether the virtual IP is bound to the backup:
[root@backup html]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:65:e3:5e brd ff:ff:ff:ff:ff:ff
inet 192.168.137.16/24 brd 192.168.137.255 scope global eth0
valid_lft forever preferred_lft forever
# After the master goes down, the virtual IP is bound to the backup
inet 192.168.137.50/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe65:e35e/64 scope link
valid_lft forever preferred_lft forever
[root@backup html]#
Visit again: http://192.168.137.50/
(3) Start nginx and keepalived of the main server (192.168.137.15) again
Visit http://192.168.137.50/ again; you will find it has switched back to 192.168.137.15. Because we configured different priorities for the active and standby nodes, with the active node's priority higher, the active node regains the VIP when it recovers from failure.
(4) Visit: http://192.168.137.50:9001/
(5) Access http://192.168.137.50:9001/edu/a.jsp ; even if the master node is down, it can still be accessed normally.
8.2 Keepalived + Nginx high availability cluster (dual master mode)