LVS + Keepalived on Linux
1. The four LVS cluster types: characteristics and use cases
LVS supports four cluster (forwarding) types: NAT, DR, TUN, and FULLNAT.
In terms of how they work, NAT and FULLNAT both rewrite the request packet: NAT rewrites the destination IP and destination port, while FULLNAT rewrites both source and destination IP and, if needed, source and destination port (normally you would not rewrite the source port). Both types share the characteristic that request and response packets all pass through the director (scheduler). In a NAT cluster the back-end real servers are usually on the same subnet as the director, use private addresses, and the director should be the gateway of each real server. With FULLNAT the real servers do not have to be on the same IP network, as long as they can reach the director. Of the two, NAT is used far more often; FULLNAT is rarely used and is not a standard feature, so the Linux kernel has to be patched before it can be used. NAT is generally used for clusters whose request traffic is not too heavy, with the director and the real servers on the same IP network, typically to hide the real servers' actual addresses. FULLNAT is used for intranet clusters where the real servers and the director are not on the same IP network but can still route to each other across subnets.
DR and TUN leave the original request packet unmodified; DR prepends a new MAC (Ethernet) header to it, and TUN prepends a new IP header. Both types share this characteristic: the request passes through the director, but each real server sends its response directly back to the client, which means the VIP has to be configured on every real server. In a DR cluster the director must be on the same physical network as the real servers, i.e. there must be no router between them, because DR works by putting a new MAC header in front of the original request, with the source MAC being the DIP interface's MAC and the destination MAC being the MAC of whichever RS was selected. DR is typically used when traffic is very heavy, as the inbound-traffic receiver: LVS takes the front-end traffic and hands it to back-end schedulers (schedulers that dispatch on richer match criteria, for example the request URL), which is common in multi-tier scheduling setups. A TUN cluster is similar to DR in that the request passes through the director while each real server responds to the client directly, bypassing the director, but the DIP and the real servers are not in the same server room or LAN; typically each real server sits in a public network (their egress addresses differ). It is implemented by encapsulating the original request in an outer IP header whose source IP is the DIP and whose destination IP is the RIP, so every real server must support and recognize tunneled packets (packets with two IP headers). TUN is typically used when the real servers do not share a public IP network and are far apart (across server rooms, cities, and so on). A sketch of how these forwarding modes are selected with ipvsadm follows below.
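For reference, the forwarding mode is chosen per real server when it is added with ipvsadm; only one of the three -a lines below would actually be used for a given RS, and FULLNAT is not selectable here without the patched kernel mentioned above. A minimal sketch using the addresses of the lab that follows:

ipvsadm -A -t 192.168.0.222:80 -s rr                 # define the cluster service, round-robin scheduler
ipvsadm -a -t 192.168.0.222:80 -r 192.168.0.20 -g    # -g: gatewaying  -> DR mode (the default)
ipvsadm -a -t 192.168.0.222:80 -r 192.168.0.20 -i    # -i: ipip        -> TUN mode
ipvsadm -a -t 192.168.0.222:80 -r 192.168.0.20 -m    # -m: masquerading-> NAT mode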
2. Describe how LVS-DR works and implement the configuration
As the diagram above shows, an LVS-DR cluster works like this: the client sends a request to the VIP. When the packet arrives at the LVS server, the LVS server inspects it, sees that the destination IP is the VIP and that it holds the VIP itself, and passes the packet up to the INPUT chain, where it is matched against the rules we wrote on the LVS server, i.e. the defined cluster service. When the packet matches a cluster service, the director leaves the original request untouched and wraps it in a new MAC header: the source MAC is the MAC of the DIP interface, the destination MAC belongs to an RIP chosen by the scheduling algorithm (the director obtains that interface's MAC via an ARP broadcast), and the encapsulated frame is forwarded straight out. Once the frame leaves the director's DIP interface, the switch delivers it to the corresponding RIP interface based on the destination MAC. The RS receives the encapsulated frame, sees the destination MAC is its own, strips the MAC header, finds the client's original request, sees the destination IP is also its own, strips the IP header, reads the client's request and produces the matching response. When the RS builds the response, it uses the VIP as the source IP and the client's IP as the destination IP and sends it out via the interface holding the VIP (because the request was received on the VIP interface, the response goes back out the interface the request came in on). The response is then routed hop by hop to the client based on the destination IP; the client sees the destination IP is its own, removes the IP header and gets the server's response. (The tcpdump sketch below shows one way to observe this on a real server.)
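If you want to see the MAC rewriting described above on the wire, capturing on a real server with link-layer headers printed is enough. A small sketch (the interface name ens33 matches the lab environment below; the filter syntax is standard pcap):

# -e prints the Ethernet header: the inbound request still has dst IP = VIP,
# but its destination MAC is this RS's MAC and its source MAC is the director's DIP interface MAC
tcpdump -i ens33 -e -nn host 192.168.0.222 and tcp port 80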
That is how LVS-DR handles packets. Next we build a test environment based on the topology shown above.
Environment:
Client: 192.168.0.99; LVS server: DIP 192.168.0.10, VIP 192.168.0.222; back-end RS1: 192.168.0.20; RS2: 192.168.0.30 (both RSs also carry the VIP 192.168.0.222).
1) Prepare two RSs and install and configure the web service (the two RSs are 192.168.0.20 and 192.168.0.30; set up the web service first and give each a test page — the pages are intentionally different so we can tell which RS a request was scheduled to).
[root@dr ~]# curl http://192.168.0.20/test.html
<h1>RS1,192.168.0.20</h1>
[root@dr ~]# curl http://192.168.0.30/test.html
<h1>RS2,192.168.0.30</h1>
[root@dr ~]#
Note: with the web service configured, the DR can reach both test pages.
2) Adjust the kernel parameters so that the two RSs neither announce nor answer ARP for the VIP on the LAN, add a host route that sends packets destined to the VIP out the VIP interface, and configure the VIP on each RS.
To make this easy we put everything in a script and just run it.
[root@rs1 ~]# cat setparam.sh
#!/bin/bash
vip='192.168.0.222'
mask='255.255.255.255'
interface='lo:0'
case $1 in
start)
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    ifconfig $interface $vip netmask $mask broadcast $vip up
    route add -host $vip dev $interface
    ;;
stop)
    ifconfig $interface down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    ;;
*)
    echo "Usage: bash $0 start|stop"
    exit 1
    ;;
esac
[root@rs1 ~]#
Note: the script sets the kernel parameters, binds the VIP to lo:0, and adds the host route.
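The echo-into-/proc settings above do not survive a reboot. A minimal sketch of making the ARP knobs persistent through sysctl — the file name /etc/sysctl.d/lvs-dr.conf is just an example, and the VIP/route still need the script (or an equivalent network config):

# /etc/sysctl.d/lvs-dr.conf  (hypothetical file name)
# Only answer ARP for addresses configured on the interface the request arrived on
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
# Never use the loopback-bound VIP as the source address in ARP announcements
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2

# load it without rebooting
sysctl -p /etc/sysctl.d/lvs-dr.conf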
[root@rs1 ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.20 netmask 255.255.255.0 broadcast 192.168.0.255 ether 00:0c:29:96:23:23 txqueuelen 1000 (Ethernet) RX packets 31990 bytes 42260814 (40.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 23112 bytes 1983590 (1.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 259 bytes 21752 (21.2 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 259 bytes 21752 (21.2 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@rs1 ~]# bash -x setparam.sh start + vip=192.168.0.222 + mask=255.255.255.255 + interface=lo:0 + case $1 in + echo 2 + echo 2 + echo 1 + echo 1 + ifconfig lo:0 192.168.0.222 netmask 255.255.255.255 broadcast 192.168.0.222 up + route add -host 192.168.0.222 dev lo:0 [root@rs1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_announce 2 [root@rs1 ~]# cat /proc/sys/net/ipv4/conf/all/arp_ignore 1 [root@rs1 ~]# cat /proc/sys/net/ipv4/conf/lo/arp_announce 2 [root@rs1 ~]# cat /proc/sys/net/ipv4/conf/lo/arp_ignore 1 [root@rs1 ~]# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 ens33 192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33 192.168.0.222 0.0.0.0 255.255.255.255 UH 0 0 0 lo [root@rs1 ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.20 netmask 255.255.255.0 broadcast 192.168.0.255 ether 00:0c:29:96:23:23 txqueuelen 1000 (Ethernet) RX packets 32198 bytes 42279504 (40.3 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 23266 bytes 2001218 (1.9 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 259 bytes 21752 (21.2 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 259 bytes 21752 (21.2 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 192.168.0.222 netmask 255.255.255.255 loop txqueuelen 1 (Local Loopback) [root@rs1 ~]#
Note: after running the script, the kernel parameters are in place and the VIP and its host route have been added successfully. For RS2, simply copy the same script over and run it once as well.
3) Configure the VIP on the LVS server and define the cluster service
3.1) First bind the VIP on the director
[root@dr ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.10 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::20c:29ff:fef2:820c prefixlen 64 scopeid 0x20<link> ether 00:0c:29:f2:82:0c txqueuelen 1000 (Ethernet) RX packets 11135 bytes 9240712 (8.8 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 7705 bytes 754318 (736.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 70 bytes 5804 (5.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 70 bytes 5804 (5.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@dr ~]# ifconfig ens33:0 192.168.0.222 netmask 255.255.255.255 broadcast 192.168.0.222 up [root@dr ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.10 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::20c:29ff:fef2:820c prefixlen 64 scopeid 0x20<link> ether 00:0c:29:f2:82:0c txqueuelen 1000 (Ethernet) RX packets 11277 bytes 9253418 (8.8 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 7800 bytes 765238 (747.3 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.222 netmask 255.255.255.255 broadcast 192.168.0.222 ether 00:0c:29:f2:82:0c txqueuelen 1000 (Ethernet) lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 70 bytes 5804 (5.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 70 bytes 5804 (5.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@dr ~]#
3.2) Add the cluster service
[root@dr ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@dr ~]# ipvsadm -A -t 192.168.0.222:80 -s rr
[root@dr ~]# ipvsadm -a -t 192.168.0.222:80 -r 192.168.0.20 -g
[root@dr ~]# ipvsadm -a -t 192.168.0.222:80 -r 192.168.0.30 -g
[root@dr ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.20:80              Route   1      0          0
  -> 192.168.0.30:80              Route   1      0          0
[root@dr ~]#
Note: the rules above define a cluster service 192.168.0.222:80 with the rr (round-robin) scheduler and add two real servers to it, 192.168.0.20 and 192.168.0.30, both in DR mode.
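Rules added with ipvsadm live only in the kernel and are lost on reboot. A sketch of saving and restoring them on CentOS 7, assuming the ipvsadm service unit shipped with the ipvsadm package is present (keepalived, used later in this post, regenerates the rules itself and does not need this):

# dump the current rules into the file the ipvsadm service unit reads at boot
ipvsadm-save -n > /etc/sysconfig/ipvsadm
systemctl enable ipvsadm
# restore by hand if needed
ipvsadm-restore < /etc/sysconfig/ipvsadm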
4) Test
Use the client 192.168.0.99 to access the VIP.
Note: the client can reach the service, and requests are distributed to the back-end servers in round-robin fashion. Let's switch to a different scheduling algorithm and try again.
[root@dr ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.20:80              Route   1      0          5
  -> 192.168.0.30:80              Route   1      0          5
[root@dr ~]# ipvsadm -E -t 192.168.0.222:80 -s sh
[root@dr ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 sh
  -> 192.168.0.20:80              Route   1      0          0
  -> 192.168.0.30:80              Route   1      0          0
[root@dr ~]#
Note: the command above changes the scheduler of the cluster service 192.168.0.222:80 to sh (source hashing).
Note: the change takes effect immediately. That completes the LVS-DR cluster. One caveat: if the VIP and the DIP are not on the same subnet, you also have to think about how the back-end real servers will get their response packets back out.
3. Implement LVS + Keepalived high availability
First, a walk-through of the diagram above. While the keepalived master node is healthy, traffic flows exactly as in the plain LVS cluster: the red or green solid lines are the request path and the red or green dashed lines are the response path. The backup node uses the heartbeat messages to decide whether the master is still alive; if it detects no master within the configured interval after the master goes down, it immediately takes over the VIP and starts serving. New requests are then handled by the backup node, which gives us service high availability and removes the single point of failure. The blue dashed lines show how the backup node handles requests and responses after the master has failed.
Following the diagram, we add one more server to the LVS cluster and install and configure keepalived on both directors, as shown above.
1) Install keepalived on both directors
[root@dr1 ~]# yum install -y keepalived Loaded plugins: fastestmirror epel | 5.4 kB 00:00:00 my_base | 3.6 kB 00:00:00 (1/2): epel/x86_64/updateinfo | 1.0 MB 00:00:00 (2/2): epel/x86_64/primary_db | 6.7 MB 00:00:01 Loading mirror speeds from cached hostfile Resolving Dependencies --> Running transaction check ---> Package keepalived.x86_64 0:1.3.5-1.el7 will be installed --> Processing Dependency: libnetsnmpmibs.so.31()(64bit) for package: keepalived-1.3.5-1.el7.x86_64 --> Processing Dependency: libnetsnmpagent.so.31()(64bit) for package: keepalived-1.3.5-1.el7.x86_64 --> Processing Dependency: libnetsnmp.so.31()(64bit) for package: keepalived-1.3.5-1.el7.x86_64 --> Running transaction check ---> Package net-snmp-agent-libs.x86_64 1:5.7.2-28.el7 will be installed --> Processing Dependency: libsensors.so.4()(64bit) for package: 1:net-snmp-agent-libs-5.7.2-28.el7.x86_64 ---> Package net-snmp-libs.x86_64 1:5.7.2-28.el7 will be installed --> Running transaction check ---> Package lm_sensors-libs.x86_64 0:3.4.0-4.20160601gitf9185e5.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================================== Package Arch Version Repository Size ================================================================================================== Installing: keepalived x86_64 1.3.5-1.el7 my_base 327 k Installing for dependencies: lm_sensors-libs x86_64 3.4.0-4.20160601gitf9185e5.el7 my_base 41 k net-snmp-agent-libs x86_64 1:5.7.2-28.el7 my_base 704 k net-snmp-libs x86_64 1:5.7.2-28.el7 my_base 748 k Transaction Summary ================================================================================================== Install 1 Package (+3 Dependent packages) Total download size: 1.8 M Installed size: 6.0 M Downloading packages: (1/4): lm_sensors-libs-3.4.0-4.20160601gitf9185e5.el7.x86_64.rpm | 41 kB 00:00:00 (2/4): keepalived-1.3.5-1.el7.x86_64.rpm | 327 kB 00:00:00 (3/4): net-snmp-agent-libs-5.7.2-28.el7.x86_64.rpm | 704 kB 00:00:00 (4/4): net-snmp-libs-5.7.2-28.el7.x86_64.rpm | 748 kB 00:00:00 -------------------------------------------------------------------------------------------------- Total 1.9 MB/s | 1.8 MB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : 1:net-snmp-libs-5.7.2-28.el7.x86_64 1/4 Installing : lm_sensors-libs-3.4.0-4.20160601gitf9185e5.el7.x86_64 2/4 Installing : 1:net-snmp-agent-libs-5.7.2-28.el7.x86_64 3/4 Installing : keepalived-1.3.5-1.el7.x86_64 4/4 Verifying : 1:net-snmp-libs-5.7.2-28.el7.x86_64 1/4 Verifying : 1:net-snmp-agent-libs-5.7.2-28.el7.x86_64 2/4 Verifying : lm_sensors-libs-3.4.0-4.20160601gitf9185e5.el7.x86_64 3/4 Verifying : keepalived-1.3.5-1.el7.x86_64 4/4 Installed: keepalived.x86_64 0:1.3.5-1.el7 Dependency Installed: lm_sensors-libs.x86_64 0:3.4.0-4.20160601gitf9185e5.el7 net-snmp-agent-libs.x86_64 1:5.7.2-28.el7 net-snmp-libs.x86_64 1:5.7.2-28.el7 Complete! [root@dr1 ~]#
Note: the keepalived package comes from the base repository, so no extra EPEL repo is needed. Install keepalived on DR2 the same way.
2) Write the mail notification script
[root@dr1 ~]# cat /etc/keepalived/notify.sh
#!/bin/bash
#
contact='root@localhost'

notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
    notify master
    ;;
backup)
    notify backup
    ;;
fault)
    notify fault
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac
[root@dr1 ~]# chmod +x /etc/keepalived/notify.sh
[root@dr1 ~]# ll /etc/keepalived/notify.sh
-rwxr-xr-x 1 root root 405 Feb 21 19:52 /etc/keepalived/notify.sh
[root@dr1 ~]#
Note: the idea of the script is to take a state name as its argument and send a notification mail for that state.
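Before wiring the script into keepalived, it is worth running it by hand once to confirm that local mail delivery works. A quick check (this assumes the mailx package is installed so the mail command exists):

# simulate a transition to master, then read local mail
bash /etc/keepalived/notify.sh master
mail    # the newest message should read "... changed to be master"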
3) Install the sorry server on the DRs
[root@dr1 ~]# yum install -y nginx Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile Resolving Dependencies --> Running transaction check ---> Package nginx.x86_64 1:1.16.1-1.el7 will be installed --> Processing Dependency: nginx-all-modules = 1:1.16.1-1.el7 for package: 1:nginx-1.16.1-1.el7.x86_64 --> Processing Dependency: nginx-filesystem = 1:1.16.1-1.el7 for package: 1:nginx-1.16.1-1.el7.x86_64 --> Processing Dependency: nginx-filesystem for package: 1:nginx-1.16.1-1.el7.x86_64 ……省略部份信息 Installed: nginx.x86_64 1:1.16.1-1.el7 Dependency Installed: centos-indexhtml.noarch 0:7-9.el7.centos fontconfig.x86_64 0:2.10.95-11.el7 fontpackages-filesystem.noarch 0:1.44-8.el7 gd.x86_64 0:2.0.35-26.el7 gperftools-libs.x86_64 0:2.4-8.el7 libX11.x86_64 0:1.6.5-1.el7 libX11-common.noarch 0:1.6.5-1.el7 libXau.x86_64 0:1.0.8-2.1.el7 libXpm.x86_64 0:3.5.12-1.el7 libjpeg-turbo.x86_64 0:1.2.90-5.el7 libpng.x86_64 2:1.5.13-7.el7_2 libunwind.x86_64 2:1.2-2.el7 libxcb.x86_64 0:1.12-1.el7 libxslt.x86_64 0:1.1.28-5.el7 lyx-fonts.noarch 0:2.2.3-1.el7 nginx-all-modules.noarch 1:1.16.1-1.el7 nginx-filesystem.noarch 1:1.16.1-1.el7 nginx-mod-http-image-filter.x86_64 1:1.16.1-1.el7 nginx-mod-http-perl.x86_64 1:1.16.1-1.el7 nginx-mod-http-xslt-filter.x86_64 1:1.16.1-1.el7 nginx-mod-mail.x86_64 1:1.16.1-1.el7 nginx-mod-stream.x86_64 1:1.16.1-1.el7 Complete! [root@dr1 ~]#
Note: do the same on DR2.
Give each sorry server a test home page
[root@dr1 ~]# cat /usr/share/nginx/html/index.html
<h1>sorry server 192.168.0.10</h1>
[root@dr1 ~]#
[root@dr2 ~]# cat /usr/share/nginx/html/index.html
<h1>sorry server 192.168.0.11<h1>
[root@dr2 ~]#
Note: the two pages could be identical; we deliberately make them different so we can tell them apart.
Start the service
[root@dr1 ~]# systemctl start nginx
[root@dr1 ~]# curl http://127.0.0.1
<h1>sorry server 192.168.0.10</h1>
[root@dr1 ~]#
[root@dr2 ~]# systemctl start nginx
[root@dr2 ~]# curl http://127.0.0.1
<h1>sorry server 192.168.0.11<h1>
[root@dr2 ~]#
Note: the sorry servers on both DRs are up and reachable.
4) Configure keepalived on the master node
1) Before configuring, make sure the clocks on all servers are in sync. Normally all hosts of a cluster point at the same time server. For how to set up a time server, see my other posts.
[root@dr1 ~]# grep "^server" /etc/chrony.conf
server 192.168.0.99 iburst
[root@dr1 ~]#
[root@dr2 ~]# grep "^server" /etc/chrony.conf
server 192.168.0.99 iburst
[root@dr2 ~]#
[root@rs1 ~]# grep "^server" /etc/chrony.conf
server 192.168.0.99 iburst
[root@rs1 ~]#
[root@rs2 ~]# grep "^server" /etc/chrony.conf
server 192.168.0.99 iburst
[root@rs2 ~]#
Note: point every host at the same time server, then restart chronyd and the clocks will sync.
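To confirm the sync actually happened, you can ask chrony for its sources on each host (just a verification step, not required by keepalived itself):

systemctl restart chronyd
chronyc sources -v     # the line for 192.168.0.99 should be marked '^*' once it is the selected source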
2) Make sure iptables and SELinux will not get in the way.
[root@dr1 ~]# getenforce
Disabled
[root@dr1 ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@dr1 ~]#
Note: we can simply disable SELinux and iptables. On CentOS 7 the default firewall may ship with many rules; either add explicit rules for our traffic or flush the rules and set the chains' default policy to ACCEPT.
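If you prefer to keep the firewall on, a rough sketch of the rules it would need on both directors — VRRP is IP protocol 112, and 224.10.10.222 is the multicast group configured further below:

# allow VRRP advertisements between the two directors
iptables -I INPUT -p 112 -d 224.10.10.222 -j ACCEPT
# allow the cluster service itself
iptables -I INPUT -p tcp --dport 80 -j ACCEPT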
3) The nodes should be able to reach each other by hostname (not strictly required by keepalived); using the /etc/hosts file is recommended.
[root@dr1 ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.0.10 dr1.ilinux.io dr1 192.168.0.11 dr2.ilinux.io dr2 192.168.0.20 rs1.ilinux.io rs1 192.168.0.30 rs2.ilinux.io rs2 [root@dr1 ~]# scp /etc/hosts 192.168.0.11:/etc/ The authenticity of host '192.168.0.11 (192.168.0.11)' can't be established. ECDSA key fingerprint is SHA256:EG9nua4JJuUeofheXlgQeL9hX5H53JynOqf2vf53mII. ECDSA key fingerprint is MD5:57:83:e6:46:2c:4b:bb:33:13:56:17:f7:fd:76:71:cc. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '192.168.0.11' (ECDSA) to the list of known hosts. root@192.168.0.11's password: hosts 100% 282 74.2KB/s 00:00 [root@dr1 ~]# scp /etc/hosts 192.168.0.20:/etc/ root@192.168.0.20's password: hosts 100% 282 144.9KB/s 00:00 [root@dr1 ~]# scp /etc/hosts 192.168.0.30:/etc/ root@192.168.0.30's password: hosts 100% 282 85.8KB/s 00:00 [root@dr1 ~]# ping dr1 PING dr1.ilinux.io (192.168.0.10) 56(84) bytes of data. 64 bytes from dr1.ilinux.io (192.168.0.10): icmp_seq=1 ttl=64 time=0.031 ms 64 bytes from dr1.ilinux.io (192.168.0.10): icmp_seq=2 ttl=64 time=0.046 ms ^C --- dr1.ilinux.io ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1000ms rtt min/avg/max/mdev = 0.031/0.038/0.046/0.009 ms [root@dr1 ~]# ping dr2 PING dr2.ilinux.io (192.168.0.11) 56(84) bytes of data. 64 bytes from dr2.ilinux.io (192.168.0.11): icmp_seq=1 ttl=64 time=1.36 ms 64 bytes from dr2.ilinux.io (192.168.0.11): icmp_seq=2 ttl=64 time=0.599 ms 64 bytes from dr2.ilinux.io (192.168.0.11): icmp_seq=3 ttl=64 time=0.631 ms ^C --- dr2.ilinux.io ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2003ms rtt min/avg/max/mdev = 0.599/0.865/1.366/0.355 ms [root@dr1 ~]# ping rs1 PING rs1.ilinux.io (192.168.0.20) 56(84) bytes of data. 64 bytes from rs1.ilinux.io (192.168.0.20): icmp_seq=1 ttl=64 time=0.614 ms 64 bytes from rs1.ilinux.io (192.168.0.20): icmp_seq=2 ttl=64 time=0.628 ms ^C --- rs1.ilinux.io ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.614/0.621/0.628/0.007 ms [root@dr1 ~]# ping rs2 PING rs2.ilinux.io (192.168.0.30) 56(84) bytes of data. 64 bytes from rs2.ilinux.io (192.168.0.30): icmp_seq=1 ttl=64 time=0.561 ms 64 bytes from rs2.ilinux.io (192.168.0.30): icmp_seq=2 ttl=64 time=0.611 ms 64 bytes from rs2.ilinux.io (192.168.0.30): icmp_seq=3 ttl=64 time=0.653 ms ^C --- rs2.ilinux.io ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2002ms rtt min/avg/max/mdev = 0.561/0.608/0.653/0.042 ms [root@dr1 ~]#
Note: once the hosts file is ready, just scp it to every node.
4) Make sure the interfaces used for the cluster service on each node support MULTICAST.
With those four points covered, we can configure keepalived.
[root@dr1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DR1
   vrrp_mcast_group4 224.10.10.222
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Yc15tnWa
    }
    virtual_ipaddress {
        192.168.0.222/24 dev ens33 label ens33:0
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.0.222 80 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.0.20 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 192.168.0.30 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@dr1 ~]#
Note: the multicast address carries the heartbeat (VRRP advertisement) traffic; it must be set to a class D (multicast) address.
5) Copy the mail notification script to the backup node and configure keepalived there
[root@dr1 ~]# scp /etc/keepalived/notify.sh 192.168.0.11:/etc/keepalived/
root@192.168.0.11's password:
notify.sh                                        100%  405   116.6KB/s   00:00
[root@dr1 ~]# scp /etc/keepalived/keepalived.conf 192.168.0.11:/etc/keepalived/keepalived.conf.bak
root@192.168.0.11's password:
keepalived.conf                                  100% 1162   506.4KB/s   00:00
[root@dr1 ~]#
Note: we can copy the master node's configuration file to the backup node and then just change the few lines that differ.
[root@dr2 ~]# ls /etc/keepalived/
keepalived.conf  keepalived.conf.bak  notify.sh
[root@dr2 ~]# cp /etc/keepalived/keepalived.conf{,.backup}
[root@dr2 ~]# ls /etc/keepalived/
keepalived.conf  keepalived.conf.backup  keepalived.conf.bak  notify.sh
[root@dr2 ~]# mv /etc/keepalived/keepalived.conf.bak /etc/keepalived/keepalived.conf
mv: overwrite ‘/etc/keepalived/keepalived.conf’? y
[root@dr2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DR2
   vrrp_mcast_group4 224.10.10.222
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass Yc15tnWa
    }
    virtual_ipaddress {
        192.168.0.222/24 dev ens33 label ens33:0
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 192.168.0.222 80 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.0.20 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }

    real_server 192.168.0.30 80 {
        weight 1
        HTTP_GET {
            url {
                path /test.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
"/etc/keepalived/keepalived.conf" 60L, 1161C written
[root@dr2 ~]#
Note: if we copy the configuration file from the master node, the only changes needed are: in global_defs change router_id; in vrrp_instance change state to BACKUP and priority to 99 — the priority value decides the election, and the lower the number, the lower the priority.
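A quick way to double-check that only the intended lines differ between the two nodes is to diff the live files over ssh; a sketch run from dr1 (given the two configs above, the only expected differences are router_id, state and priority):

ssh 192.168.0.11 cat /etc/keepalived/keepalived.conf | diff /etc/keepalived/keepalived.conf -
# expected: LVS_DR1 vs LVS_DR2, MASTER vs BACKUP, priority 100 vs 99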
6) Start keepalived on the master and backup nodes, and check whether the VIP is configured and the LVS rules are generated
[root@dr1 ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.10 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::20c:29ff:fef2:820c prefixlen 64 scopeid 0x20<link> ether 00:0c:29:f2:82:0c txqueuelen 1000 (Ethernet) RX packets 16914 bytes 14760959 (14.0 MiB) RX errors 0 dropped 1 overruns 0 frame 0 TX packets 12058 bytes 1375703 (1.3 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 15 bytes 1304 (1.2 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 15 bytes 1304 (1.2 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@dr1 ~]# ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn [root@dr1 ~]# systemctl start keepalived [root@dr1 ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.10 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::20c:29ff:fef2:820c prefixlen 64 scopeid 0x20<link> ether 00:0c:29:f2:82:0c txqueuelen 1000 (Ethernet) RX packets 17003 bytes 14768581 (14.0 MiB) RX errors 0 dropped 1 overruns 0 frame 0 TX packets 12150 bytes 1388509 (1.3 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.222 netmask 255.255.255.0 broadcast 0.0.0.0 ether 00:0c:29:f2:82:0c txqueuelen 1000 (Ethernet) lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 15 bytes 1304 (1.2 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 15 bytes 1304 (1.2 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@dr1 ~]# ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 192.168.0.222:80 rr -> 192.168.0.20:80 Route 1 0 0 -> 192.168.0.30:80 Route 1 0 0 [root@dr1 ~]#
Note: as soon as keepalived starts, the VIP and the LVS rules are created automatically. Next, let's capture packets on the backup node to see whether the master is sending heartbeats to the multicast group.
Note: the master node is advertising its heartbeat to the multicast group.
Start the backup node
[root@dr2 ~]# systemctl start keepalived [root@dr2 ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.11 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::20c:29ff:fe50:13f1 prefixlen 64 scopeid 0x20<link> ether 00:0c:29:50:13:f1 txqueuelen 1000 (Ethernet) RX packets 12542 bytes 14907658 (14.2 MiB) RX errors 0 dropped 1 overruns 0 frame 0 TX packets 7843 bytes 701839 (685.3 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 10 bytes 879 (879.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 10 bytes 879 (879.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@dr2 ~]# tcpdump -i ens33 -nn host 224.10.10.222 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on ens33, link-type EN10MB (Ethernet), capture size 262144 bytes 20:59:33.620661 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20 20:59:34.622645 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20 20:59:35.624590 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20 20:59:36.626588 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20 20:59:37.628675 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20 20:59:38.630562 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20 20:59:39.632673 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20 20:59:40.634658 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20 20:59:41.636699 IP 192.168.0.10 > 224.10.10.222: VRRPv2, Advertisement, vrid 51, prio 100, authtype simple, intvl 1s, length 20 ^C 9 packets captured 9 packets received by filter 0 packets dropped by kernel [root@dr2 ~]#
Note: after the backup node starts, it does not take the VIP, because the master has the higher priority; meanwhile the master keeps advertising its heartbeat to the multicast group.
Use the client 192.168.0.99 to access the cluster service
[qiuhom@test ~]$ ip a s enp2s0
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:30:18:51:af:3c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.99/24 brd 192.168.0.255 scope global noprefixroute enp2s0
       valid_lft forever preferred_lft forever
    inet 172.16.1.2/16 brd 172.16.255.255 scope global noprefixroute enp2s0:0
       valid_lft forever preferred_lft forever
    inet6 fe80::230:18ff:fe51:af3c/64 scope link
       valid_lft forever preferred_lft forever
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS2,192.168.0.30</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS1,192.168.0.20</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS2,192.168.0.30</h1>
[qiuhom@test ~]$
Note: with the master node healthy, the cluster service is reachable as expected.
Stop the master node and check whether the cluster service is still reachable
[root@dr1 ~]# systemctl stop keepalived [root@dr1 ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.10 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::20c:29ff:fef2:820c prefixlen 64 scopeid 0x20<link> ether 00:0c:29:f2:82:0c txqueuelen 1000 (Ethernet) RX packets 18001 bytes 15406859 (14.6 MiB) RX errors 0 dropped 1 overruns 0 frame 0 TX packets 14407 bytes 1548635 (1.4 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 15 bytes 1304 (1.2 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 15 bytes 1304 (1.2 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@dr1 ~]# ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn [root@dr1 ~]#
Note: after we stop keepalived on the master, its VIP and LVS rules are automatically removed. Now let's access the cluster service from the client again and see whether it still works.
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS2,192.168.0.30</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS1,192.168.0.20</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS2,192.168.0.30</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS1,192.168.0.20</h1>
[qiuhom@test ~]$ curl http://192.168.0.222/test.html
<h1>RS2,192.168.0.30</h1>
[qiuhom@test ~]$
Note: with the master down, the cluster service is unaffected, because the backup node has taken over the master's job and applied the VIP and the LVS rules on itself.
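To watch the failover continuously from the client side instead of with one-off curls, a simple loop like the following can be left running while keepalived is stopped and started on the master (a convenience sketch, not part of the original setup):

# print one response per second; during a clean failover at most a beat or two is lost
while true; do
    curl -s --connect-timeout 1 http://192.168.0.222/test.html || echo "request failed"
    sleep 1
done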
Let's also look at the IP addresses and LVS rules on the backup node
[root@dr2 ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.11 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::20c:29ff:fe50:13f1 prefixlen 64 scopeid 0x20<link> ether 00:0c:29:50:13:f1 txqueuelen 1000 (Ethernet) RX packets 13545 bytes 15227354 (14.5 MiB) RX errors 0 dropped 1 overruns 0 frame 0 TX packets 9644 bytes 828542 (809.1 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.222 netmask 255.255.255.0 broadcast 0.0.0.0 ether 00:0c:29:50:13:f1 txqueuelen 1000 (Ethernet) lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 10 bytes 879 (879.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 10 bytes 879 (879.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@dr2 ~]# ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 192.168.0.222:80 rr -> 192.168.0.20:80 Route 1 0 0 -> 192.168.0.30:80 Route 1 0 0 [root@dr2 ~]#
Now bring the master back and see whether the backup node gives up the VIP and the LVS rules.
[root@dr1 ~]# systemctl start keepalived [root@dr1 ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.10 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::20c:29ff:fef2:820c prefixlen 64 scopeid 0x20<link> ether 00:0c:29:f2:82:0c txqueuelen 1000 (Ethernet) RX packets 18533 bytes 15699933 (14.9 MiB) RX errors 0 dropped 1 overruns 0 frame 0 TX packets 14808 bytes 1589148 (1.5 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.222 netmask 255.255.255.0 broadcast 0.0.0.0 ether 00:0c:29:f2:82:0c txqueuelen 1000 (Ethernet) lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 17 bytes 1402 (1.3 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 17 bytes 1402 (1.3 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 [root@dr1 ~]# ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 192.168.0.222:80 rr -> 192.168.0.20:80 Route 1 0 0 -> 192.168.0.30:80 Route 1 0 0 [root@dr1 ~]#
Note: once keepalived starts on the master again, its VIP and LVS rules are regenerated automatically. Let's check whether the VIP and LVS rules are still present on the backup node.
[root@dr2 ~]# ifconfig ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.11 netmask 255.255.255.0 broadcast 192.168.0.255 inet6 fe80::20c:29ff:fe50:13f1 prefixlen 64 scopeid 0x20<link> ether 00:0c:29:50:13:f1 txqueuelen 1000 (Ethernet) RX packets 13773 bytes 15243276 (14.5 MiB) RX errors 0 dropped 1 overruns 0 frame 0 TX packets 10049 bytes 857748 (837.6 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1 (Local Loopback) RX packets 12 bytes 977 (977.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 12 bytes 977 (977.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 You have mail in /var/spool/mail/root [root@dr2 ~]# ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 192.168.0.222:80 rr -> 192.168.0.20:80 Route 1 0 0 -> 192.168.0.30:80 Route 1 0 0 [root@dr2 ~]#
Note: after the master comes back, the VIP is gone from the backup node, but the LVS rules are still there. Let's also see whether the backup node received any mail.
[root@dr2 ~]# mail Heirloom Mail version 12.5 7/5/10. Type ? for help. "/var/spool/mail/root": 1 message 1 new >N 1 root Fri Feb 21 08:13 18/673 "dr2.ilinux.io to be backup, vip floatin" & 1 Message 1: From root@dr2.ilinux.io Fri Feb 21 08:13:00 2020 Return-Path: <root@dr2.ilinux.io> X-Original-To: root@localhost Delivered-To: root@localhost.ilinux.io Date: Fri, 21 Feb 2020 08:13:00 -0500 To: root@localhost.ilinux.io Subject: dr2.ilinux.io to be backup, vip floating User-Agent: Heirloom mailx 12.5 7/5/10 Content-Type: text/plain; charset=us-ascii From: root@dr2.ilinux.io (root) Status: R 2020-02-21 08:13:00: vrrp transition, dr2.ilinux.io changed to be backup &
Note: there is one mail telling us DR2 has switched to the backup state. The master node should have mail too, so let's check it as well.
[root@dr1 ~]# mail Heirloom Mail version 12.5 7/5/10. Type ? for help. "/var/spool/mail/root": 1 message 1 new >N 1 root Fri Feb 21 08:13 18/673 "dr1.ilinux.io to be master, vip floatin" & 1 Message 1: From root@dr1.ilinux.io Fri Feb 21 08:13:01 2020 Return-Path: <root@dr1.ilinux.io> X-Original-To: root@localhost Delivered-To: root@localhost.ilinux.io Date: Fri, 21 Feb 2020 08:13:01 -0500 To: root@localhost.ilinux.io Subject: dr1.ilinux.io to be master, vip floating User-Agent: Heirloom mailx 12.5 7/5/10 Content-Type: text/plain; charset=us-ascii From: root@dr1.ilinux.io (root) Status: R 2020-02-21 08:13:01: vrrp transition, dr1.ilinux.io changed to be master &
Note: the master node also received a mail saying dr1 has switched to the master state.
At this point the LVS + keepalived high-availability setup passes its tests. Next, let's test whether, when a real server goes down, keepalived on the DR promptly takes the corresponding RS out of the cluster service.
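The removal is driven by the health checks defined in the virtual_server block above: TCP_CHECK for RS1 and HTTP_GET against /test.html for RS2. A rough manual equivalent of the HTTP_GET check, useful when debugging why keepalived marks an RS down (the curl options are standard; the 200 expectation mirrors status_code 200 in the config):

# prints only the HTTP status code; keepalived expects 200 for /test.html
curl -s -o /dev/null -w '%{http_code}\n' --connect-timeout 3 http://192.168.0.30/test.html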
[root@rs1 ~]# systemctl stop nginx
[root@rs1 ~]# ss -ntl
State      Recv-Q Send-Q Local Address:Port  Peer Address:Port
LISTEN     0      128                *:22               *:*
LISTEN     0      100        127.0.0.1:25               *:*
LISTEN     0      128               :::22              :::*
LISTEN     0      100              ::1:25              :::*
[root@rs1 ~]#
[root@dr1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.30:80              Route   1      0          0
[root@dr1 ~]#
Note: when RS1 fails, the DR immediately takes rs1 out of the cluster service.
Now stop RS2 as well and see whether the sorry server is properly added to the cluster service
[root@rs2 ~]# systemctl stop nginx
[root@rs2 ~]# ss -ntl
State      Recv-Q Send-Q Local Address:Port  Peer Address:Port
LISTEN     0      128                *:22               *:*
LISTEN     0      100        127.0.0.1:25               *:*
LISTEN     0      128               :::22              :::*
LISTEN     0      100              ::1:25              :::*
[root@rs2 ~]#
[root@dr1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 127.0.0.1:80                 Route   1      0          0
[root@dr1 ~]#
Note: once both back-end RSs are down, the sorry server is added to the cluster right away; if a client accesses the cluster service now, it gets the sorry server's page.
[qiuhom@test ~]$ curl http://192.168.0.222/
<h1>sorry server 192.168.0.10</h1>
[qiuhom@test ~]$
Note: this page mainly tells users that the site is under maintenance and the like; it exists purely to say sorry to users, hence the name sorry server. You could of course make it identical to the normal cluster page, but that is usually not recommended.
Bring RS1 back up and see whether the cluster takes the sorry server offline and adds RS1 back.
[root@rs1 ~]# systemctl start nginx
[root@rs1 ~]# ss -ntl
State      Recv-Q Send-Q Local Address:Port  Peer Address:Port
LISTEN     0      128                *:80               *:*
LISTEN     0      128                *:22               *:*
LISTEN     0      100        127.0.0.1:25               *:*
LISTEN     0      128               :::80              :::*
LISTEN     0      128               :::22              :::*
LISTEN     0      100              ::1:25              :::*
[root@rs1 ~]#
[root@dr1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.222:80 rr
  -> 192.168.0.20:80              Route   1      0          0
You have new mail in /var/spool/mail/root
[root@dr1 ~]#
Note: once a back-end real server is healthy again, the sorry server is removed from the cluster service and the real server resumes serving.
That concludes building and testing the LVS cluster with keepalived high availability!