LVS has three classic forwarding modes: NAT mode (LVS/NAT), direct routing mode (LVS/DR), and IP tunnel mode (LVS/TUN), plus a fourth mode developed later (FULL NAT).
I. NAT Mode
Environment
Three virtual machines running Red Hat Enterprise Linux 6.5:
server1 acts as the VS (director, with two NICs)
server2 and server3 act as the RS (real servers)
1. server(1)
1. Configure the yum repositories
The RHEL 6.5 installation media splits its packages across several repository directories that a single baseurl does not load, so each directory on the image must be configured as its own repo.
(This is no longer necessary from RHEL 7.0 onwards.)
[kiosk@oundation51 Desktop]$ cd /var/www/html/rhel6.5/
[kiosk@oundation51 rhel6.5]$ ls
EFI Packages RELEASE-NOTES-pa-IN.html
EULA README RELEASE-NOTES-pt-BR.html
EULA_de RELEASE-NOTES-as-IN.html RELEASE-NOTES-ru-RU.html
EULA_en RELEASE-NOTES-bn-IN.html RELEASE-NOTES-si-LK.html
EULA_es RELEASE-NOTES-de-DE.html RELEASE-NOTES-ta-IN.html
EULA_fr RELEASE-NOTES-en-US.html RELEASE-NOTES-te-IN.html
EULA_it RELEASE-NOTES-es-ES.html RELEASE-NOTES-zh-CN.html
EULA_ja RELEASE-NOTES-fr-FR.html RELEASE-NOTES-zh-TW.html
EULA_ko RELEASE-NOTES-gu-IN.html repodata
EULA_pt RELEASE-NOTES-hi-IN.html ResilientStorage
EULA_zh RELEASE-NOTES-it-IT.html RPM-GPG-KEY-redhat-beta
GPL RELEASE-NOTES-ja-JP.html RPM-GPG-KEY-redhat-release
HighAvailability RELEASE-NOTES-kn-IN.html ScalableFileSystem
images RELEASE-NOTES-ko-KR.html Server
isolinux RELEASE-NOTES-ml-IN.html TRANS.TBL
LoadBalancer RELEASE-NOTES-mr-IN.html
media.repo RELEASE-NOTES-or-IN.html
[kiosk@oundation51 rhel6.5]$
[root@server1: ~]# cat /etc/yum.repos.d/rhel-source.repo
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.51.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.51.250/rhel6.5/HighAvailability
gpgcheck=0
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.51.250/rhel6.5/LoadBalancer
gpgcheck=0
[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.51.250/rhel6.5/ResilientStorage
gpgcheck=0
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.51.250/rhel6.5/ScalableFileSystem
gpgcheck=0
[root@server1: ~]# yum repolist
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
repo id repo name status
HighAvailability HighAvailability 56
LoadBalancer LoadBalancer 4
ResilientStorage ResilientStorage 62
ScalableFileSystem ScalableFileSystem 7
rhel-source Red Hat Enterprise Linux 6Server - x86_64 - Source 3,690
repolist: 3,819 ## the extra repositories add more packages
2. Install ipvsadm
[root@server1: ~]# yum install ipvsadm -y
Start the service after installation:
[root@server1: ~]# /etc/init.d/ipvsadm start
3. Enable IP forwarding
[root@server1: ~]# vim /etc/sysctl.conf
7 net.ipv4.ip_forward = 1 ## 0 = off, 1 = on
[root@server1: ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
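Editing /etc/sysctl.conf makes the change persistent across reboots; for a quick, non-persistent toggle, the same parameter can also be flipped at runtime (a sketch):

```shell
# Enable IPv4 forwarding immediately (lost on reboot):
sysctl -w net.ipv4.ip_forward=1
# Equivalent low-level form:
echo 1 > /proc/sys/net/ipv4/ip_forward
# Verify the current value:
cat /proc/sys/net/ipv4/ip_forward
```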
4. Add ipvsadm rules
[root@server1: ~]# ipvsadm -A -t 172.25.254.151:80 -s rr
[root@server1: ~]# ipvsadm -a -t 172.25.254.151:80 -r 172.25.51.2:80 -m
[root@server1: ~]# ipvsadm -a -t 172.25.254.151:80 -r 172.25.51.3:80 -m
[root@server1: ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.254.151:80 rr
-> 172.25.51.2:80 Masq 1 0 0
-> 172.25.51.3:80 Masq 1 0 0
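The rules above live only in the kernel's in-memory IPVS table. A sketch of how they are typically persisted and managed with the RHEL 6 init script (the save target /etc/sysconfig/ipvsadm is that script's default):

```shell
# Save the current IPVS table so it is restored when the service starts:
/etc/init.d/ipvsadm save      # writes /etc/sysconfig/ipvsadm
# Flush all rules to start from scratch:
ipvsadm -C
# Flag reminder for the rules used above:
#   -A  add virtual service    -t  TCP addr:port    -s rr  round-robin
#   -a  add real server        -r  RS addr:port     -m     NAT (masquerade)
```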
2. server(2)
1. Install the httpd service
[root@server2: network-scripts]# yum install httpd -y
[root@server2: network-scripts]# cat /var/www/html/index.html
<h1>server2</h1>
[root@server2: network-scripts]# /etc/init.d/httpd restart
2. Point the RS default gateway at the VS
[root@server2: network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
ONBOOT="yes"
IPADDR=172.25.51.2
PREFIX=24
GATEWAY=172.25.51.1
DNS1=114.114.114.114
3. server(3)
1. Install the httpd service
[root@server3: network-scripts]# yum install httpd -y
[root@server3: network-scripts]# cat /var/www/html/index.html
<h1>server3</h1>
[root@server3: network-scripts]# /etc/init.d/httpd restart
2. Point the RS default gateway at the VS
[root@server3: network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
ONBOOT="yes"
IPADDR=172.25.51.3
PREFIX=24
GATEWAY=172.25.51.1
DNS1=114.114.114.114
Test: from the physical host
[kiosk@oundation51 rhel6.5]$ curl 172.25.254.151
<h1>server3</h1>
[kiosk@oundation51 rhel6.5]$ curl 172.25.254.151
<h1>server2</h1>
[kiosk@oundation51 rhel6.5]$ curl 172.25.254.151
<h1>server3</h1>
[kiosk@oundation51 rhel6.5]$ curl 172.25.254.151
<h1>server2</h1>
[kiosk@oundation51 rhel6.5]$ curl 172.25.254.151
<h1>server3</h1>
[kiosk@oundation51 rhel6.5]$ curl 172.25.254.151
<h1>server2</h1>
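The alternating responses above can be tallied to confirm an even round-robin split; a sketch (the printf lines just replay the captured output, in a live test pipe the curl loop instead):

```shell
# Replay the six captured responses and count per-backend hits:
printf '%s\n' '<h1>server3</h1>' '<h1>server2</h1>' '<h1>server3</h1>' \
              '<h1>server2</h1>' '<h1>server3</h1>' '<h1>server2</h1>' \
  | sort | uniq -c
# Live version:
#   for i in $(seq 6); do curl -s 172.25.254.151; done | sort | uniq -c
# With rr scheduling each page should appear the same number of times.
```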
II. TUN Mode
1. Set up the rules
[root@server1: ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:0a:79:ce brd ff:ff:ff:ff:ff:ff
inet 172.25.51.1/24 brd 172.25.51.255 scope global eth4
inet 172.25.51.100/32 scope global eth4
inet6 fe80::5054:ff:fe0a:79ce/64 scope link
valid_lft forever preferred_lft forever
[root@server1: ~]# ipvsadm -A -t 172.25.51.100:80 -s rr
[root@server1: ~]# ipvsadm -a -t 172.25.51.100:80 -r 172.25.51.2:80 -i
[root@server1: ~]# ipvsadm -a -t 172.25.51.100:80 -r 172.25.51.3:80 -i
[root@server1: ~]# ipvsadm -l -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.25.51.100:80 rr
-> 172.25.51.2:80 Tunnel 1 0 0
-> 172.25.51.3:80 Tunnel 1 0 0
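The only change from the NAT rules is the per-real-server forwarding flag; ipvsadm supports three forwarding methods, summarized here as a sketch:

```shell
#   -g  direct routing (LVS/DR, the default if no flag is given)
#   -i  IP-in-IP tunneling (LVS/TUN): the director encapsulates the packet,
#       the RS decapsulates it and replies straight to the client
#   -m  masquerading (LVS/NAT): replies return through the director
ipvsadm -a -t 172.25.51.100:80 -r 172.25.51.2:80 -i
```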
2. Disable the rp_filter kernel parameter and enable IP forwarding
[root@server1: ~]# vim /etc/sysctl.conf
7 net.ipv4.ip_forward = 1
10 net.ipv4.conf.default.rp_filter = 0
[root@server1: ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
server(2)
1. Install arptables_jf
The VIP 172.25.51.100 is configured on the real server as well as on the director, so the RS must not answer ARP for it. arptables is used to DROP all incoming ARP requests for the VIP, and to rewrite the source address of outgoing ARP packets to the host's own IP.
[root@server2: network-scripts]# yum install arptables_jf
[root@server2: network-scripts]# arptables -A IN -d 172.25.51.100 -j DROP
[root@server2: network-scripts]# arptables -A OUT -s 172.25.51.100 -j mangle --mangle-ip-s 172.25.51.2
[root@server2: network-scripts]# arptables -L -n
Chain IN (policy ACCEPT)
target source-ip destination-ip source-hw destination-hw hlen op hrd pro
DROP 0.0.0.0/0 172.25.51.100 00/00 00/00 any 0000/0000 0000/0000 0000/0000
Chain OUT (policy ACCEPT)
target source-ip destination-ip source-hw destination-hw hlen op hrd pro
mangle 172.25.51.100 0.0.0.0/0 00/00 00/00 any 0000/0000 0000/0000 0000/0000 --mangle-ip-s 172.25.51.2
Chain FORWARD (policy ACCEPT)
target source-ip destination-ip source-hw destination-hw hlen op hrd pro
[root@server2: network-scripts]# /etc/init.d/arptables_jf save
Saving current rules to /etc/sysconfig/arptables: [ OK ]
2. Add the tunl0 tunnel interface
[root@server2: network-scripts]# ip addr add 172.25.51.100/24 dev tunl0
[root@server2: network-scripts]# ip link set up dev tunl0
[root@server2: network-scripts]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:d4:19:89 brd ff:ff:ff:ff:ff:ff
inet 172.25.51.2/24 brd 172.25.51.255 scope global eth0
inet6 fe80::5054:ff:fed4:1989/64 scope link
valid_lft forever preferred_lft forever
3: tunl0: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN
link/ipip 0.0.0.0 brd 0.0.0.0
inet 172.25.51.100/24 scope global tunl0
[root@server2: network-scripts]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.25.51.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
172.25.51.0 0.0.0.0 255.255.255.0 U 0 0 0 tunl0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
0.0.0.0 172.25.51.1 0.0.0.0 UG 0 0 0 eth0
[root@server2: network-scripts]# route add -host 172.25.51.100 dev tunl0
[root@server2: network-scripts]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.25.51.100 0.0.0.0 255.255.255.255 UH 0 0 0 tunl0
172.25.51.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
172.25.51.0 0.0.0.0 255.255.255.0 U 0 0 0 tunl0
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
0.0.0.0 172.25.51.1 0.0.0.0 UG 0 0 0 eth0
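With LVS/TUN the real server receives decapsulated packets on tunl0 whose source is the client's address, which reverse-path filtering normally drops; relaxing rp_filter on each RS is therefore commonly needed as well (a sketch, run on server2 and server3):

```shell
# Disable reverse-path filtering so tunneled packets are accepted:
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
sysctl -w net.ipv4.conf.all.rp_filter=0
```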
server(3)
Configured the same way as server(2).
Test
From a host on the same network as the VIP (using the same gateway), request the VIP repeatedly; if the returned pages alternate between the two real servers, the load-balancing setup is working.