
[Experience share] Simple high availability with HAProxy + Keepalived (dual-master mode)

HAProxy:

    HAProxy is one of the open-source, high-performance reverse proxy / load-balancing packages. It supports hot standby between two machines, virtual hosts, and proxying of TCP- and HTTP-based applications. Its configuration is simple, and it has good health checking of backend server nodes (comparable to keepalived's health checks): when a backend server it proxies fails, HAProxy automatically removes that server, and once the failure is fixed it automatically adds the real server (RS) back into rotation.
    HAProxy provides high availability, load balancing, and proxying for TCP and HTTP applications, supports virtual hosts, and is a free, fast, and reliable solution. It is particularly well suited to heavily loaded web sites, which usually also need session persistence or layer-7 processing. Running on current hardware, HAProxy can easily handle tens of thousands of concurrent connections, and its operating model makes it simple and safe to integrate into an existing architecture while keeping the web servers from being exposed directly to the network.
    An HAProxy configuration is divided into five sections:
    1. global: process-level parameters, usually tied to the operating system. They are normally set once and, if correct, never need to be changed again.
    2. defaults: default parameters that apply to the frontend, backend, and listen sections.
    3. frontend: the frontend virtual node that receives requests; a frontend selects a specific backend according to rules.
    4. backend: the backend server cluster, i.e. the real servers; one backend maps to one or more real servers.
    5. listen: a combination of a frontend and a backend.

HAProxy supports two main proxy modes:

    The first is layer-4 TCP proxying (for example, for mail protocols, MySQL, and other internal services). HAProxy's layer-4 TCP proxying works very well and is much easier to configure than LVS or Nginx, because no script has to be run on the real servers to make the proxying work. Note: because HAProxy works in a NAT-like fashion, packets in both directions pass through it, so under very heavy traffic its performance is lower than that of LVS. For typical small and medium-sized companies, HAProxy is the recommended load balancer rather than LVS or Nginx.
    The second is layer-7 proxying (such as HTTP proxying). In layer-4 TCP mode, HAProxy only forwards traffic in both directions between client and server. In layer-7 mode, it parses the application-layer protocol and can control it by allowing, rejecting, switching, adding, modifying, or deleting specified content in the request or the response. HAProxy's biggest strength is this layer-7 filtering based on the URL and request headers; it is typically deployed one layer below LVS or, as officially recommended, behind hardware load balancers such as NS or F5.
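
To make the distinction concrete, here are two illustrative fragments (not part of the configuration built later in this post; the MySQL backend address is made up). The difference is mainly the mode directive and what HAProxy is allowed to inspect:

## Layer-4 TCP proxying, e.g. for MySQL (illustrative only)
listen mysql
    bind *:3306
    mode tcp
    balance roundrobin
    server db-1 10.0.0.21:3306 check

## Layer-7 HTTP proxying: HAProxy can route on URL paths and headers
frontend web
    bind *:80
    mode http
    acl is_static path_beg -i /static
    use_backend static if is_static
    default_backend app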

Environment setup (CentOS 7)

The topology is as follows:


        Host         IP           Role
     haproxy-1    10.0.0.11    haproxy + keepalived
     haproxy-2    10.0.0.12    haproxy + keepalived
     web-1        10.0.0.13    web server
     web-2        10.0.0.14    web server
     VIPs: vip1 = 10.0.0.100 (master: haproxy-1), vip2 = 10.0.0.200 (master: haproxy-2)

Configuring the services

1. First configure the hosts and time synchronization:
## Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
## Disable SELinux (set SELINUX=disabled in /etc/selinux/config; run setenforce 0 or reboot for it to take effect)
vim /etc/selinux/config
SELINUX=disabled
## Synchronize time via a cron job
yum install -y ntpdate
crontab -e
*/5 * * * * ntpdate cn.pool.ntp.org
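
Host-name resolution is not shown above; if needed, a minimal /etc/hosts sketch for the four machines (host names taken from the topology table) can be added on every node:

## Optional: name resolution for the lab hosts
10.0.0.11  haproxy-1
10.0.0.12  haproxy-2
10.0.0.13  web-1
10.0.0.14  web-2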




2. Configure the web servers
## Install the Apache service
[iyunv@web-1 ~]#yum install -y httpd
[iyunv@web-1 ~]#echo "It is web1" >> /var/www/html/index.html
[iyunv@web-1 ~]#systemctl restart httpd
[iyunv@web-1 ~]#systemctl enable httpd
[iyunv@web-2 ~]#yum install -y httpd
[iyunv@web-2 ~]#echo "It is web2" >> /var/www/html/index.html
[iyunv@web-2 ~]#systemctl restart httpd
[iyunv@web-2 ~]#systemctl enable httpd

## Access the web services
[iyunv@haproxy-1 ~]# curl 10.0.0.13
It is web1
[iyunv@haproxy-1 ~]# curl 10.0.0.14
It is web2




3. Configure haproxy and keepalived

3.1 Install haproxy
[iyunv@haproxy-1 ~]# yum install -y haproxy
[iyunv@haproxy-2 ~]# yum install -y haproxy




## Configuration layout:
global: the global configuration section
    process- and security-related parameters
    performance-tuning parameters
    debug-related parameters
proxies: the proxy configuration sections
    defaults: provides default values for frontend, backend, and listen;
    frontend: the frontend, comparable to server { ... } in Nginx;
    backend: the backend, comparable to upstream { ... } in Nginx;
    listen: a direct combination of a frontend and a backend;
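
To make this layout concrete, a stripped-down haproxy.cfg skeleton might look like the following (values are only illustrative; the actual configuration used in this post is in section 3.3):

global
    log 127.0.0.1 local2
    maxconn 4000
    daemon

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend main
    bind *:80
    default_backend app

backend app
    balance roundrobin
    server web-1 10.0.0.13:80 check
    server web-2 10.0.0.14:80 check

## listen combines a frontend and a backend in one section, e.g. a stats page
listen stats
    bind *:9000
    stats enable
    stats uri /stats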




3.2 Configure the haproxy log file
## Log-related settings

log <address> [len <length>] <facility> [max level [min level]]:
<address>: address of the log server;
[len <length>]: maximum length of each log line;
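
For example (an illustrative line only), the following would send logs to the local syslog daemon on facility local2 at level info, with lines capped at 4096 bytes:

log 127.0.0.1 len 4096 local2 info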

## Add the log directive (haproxy.cfg):
[iyunv@haproxy-1 haproxy]# vim haproxy.cfg
log         127.0.0.1 local2

## Edit /etc/rsyslog.conf (uncomment the UDP/TCP syslog reception lines and add the local2 rule):
[iyunv@haproxy-1 haproxy]# vim /etc/rsyslog.conf
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

local2.*         /var/log/haproxy.log


## Restart both services and the log file appears:
[iyunv@haproxy-1 ~]# systemctl restart rsyslog
[iyunv@haproxy-1 ~]# systemctl restart haproxy
[iyunv@haproxy-1 ~]# cd /var/log/
[iyunv@haproxy-1 log]# ls haproxy.log
haproxy.log

## Sync the logging configuration to host haproxy-2
[iyunv@haproxy-1 ~]# scp /etc/haproxy/haproxy.cfg 10.0.0.12:/etc/haproxy/haproxy.cfg
[iyunv@haproxy-1 ~]# scp /etc/rsyslog.conf 10.0.0.12:/etc/rsyslog.conf
[iyunv@haproxy-2 ~]# systemctl restart rsyslog
[iyunv@haproxy-2 ~]# systemctl restart haproxy
[iyunv@haproxy-2 log]# cd /var/log/
[iyunv@haproxy-2 log]# ls haproxy.log
haproxy.log
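
As an optional sanity check, a test message can be written to the local2 facility with the standard logger tool; it should end up in haproxy.log if the rsyslog routing is correct:

[iyunv@haproxy-1 ~]# logger -p local2.info "rsyslog local2 test"
[iyunv@haproxy-1 ~]# tail -n 1 /var/log/haproxy.log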




3.3 Configure haproxy
## Edit the configuration file
[iyunv@haproxy-1 ~]# vim /etc/haproxy/haproxy.cfg
frontend  main *:80
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js
    use_backend static          if url_static
    default_backend             app
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance     roundrobin
    server   web-1  10.0.0.13:80 check
    server   web-2  10.0.0.14:80 check



## Sync the configuration file to haproxy-2
[iyunv@haproxy-1 ~]# scp /etc/haproxy/haproxy.cfg 10.0.0.12:/etc/haproxy/haproxy.cfg
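
Before restarting, the configuration can optionally be checked; haproxy -c only parses the file and reports errors, without touching the running service:

[iyunv@haproxy-1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg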

## Restart the services
[iyunv@haproxy-1 ~]# systemctl restart haproxy
[iyunv@haproxy-1 ~]# systemctl enable haproxy
[iyunv@haproxy-2 ~]# systemctl restart haproxy
[iyunv@haproxy-2 ~]# systemctl enable haproxy

## Test: check that the web backends can be reached through haproxy
[iyunv@haproxy-1 haproxy]# curl 10.0.0.11
It is web1
[iyunv@haproxy-1 haproxy]# curl 10.0.0.11
It is web2
[iyunv@haproxy-1 haproxy]# curl 10.0.0.11
It is web1
[iyunv@haproxy-1 haproxy]# curl 10.0.0.11
It is web2
## haproxy is now configured




3.4 Configure keepalived
Install keepalived

[iyunv@haproxy-1 ~]# yum install -y keepalived
[iyunv@haproxy-2 ~]# yum install -y keepalived




Configure keepalived. Two VRRP instances give the dual-master layout: haproxy-1 is MASTER for VI_1 (vip1 10.0.0.100) and BACKUP for VI_2, while haproxy-2 is MASTER for VI_2 (vip2 10.0.0.200) and BACKUP for VI_1, so both nodes normally carry traffic at the same time. The vrrp_script block raises the effective priority by 2 as long as the check script keeps succeeding.
## keepalived configuration file on host haproxy-1
[iyunv@haproxy-1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived  

vrrp_script check_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        interval 2
        weight 2
}
global_defs {  
   notification_email {  
     acassen@firewall.loc  
   }  
   router_id LVS_DEVEL  
}  
vrrp_instance VI_1 {  
    state MASTER
    interface ens33
    virtual_router_id 50
    nopreempt
    priority 150
    advert_int 1  
    authentication {  
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.100
    }
    track_script {
        check_haproxy
   }
}
vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    nopreempt
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.200
    }
    track_script {
        check_haproxy
   }
}



## keepalived configuration file on host haproxy-2 (a mirror image: the MASTER/BACKUP states and priorities are swapped)
[iyunv@haproxy-2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived  

vrrp_script check_haproxy {
        script "/etc/keepalived/check_haproxy.sh"
        interval 2
        weight 2
}
global_defs {  
   notification_email {  
     acassen@firewall.loc  
   }  
   router_id LVS_DEVEL  
}  
vrrp_instance VI_1 {  
    state BACKUP
    interface ens33
    virtual_router_id 50
    nopreempt
    priority 100
    advert_int 1  
    authentication {  
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.100
    }
    track_script {
        check_haproxy
   }
}
vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 51
    nopreempt
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.0.0.200
    }
    track_script {
        check_haproxy
   }
}



## haproxy check script (if haproxy is no longer running, keepalived is stopped so the VIPs fail over to the peer)
[iyunv@haproxy-1 ~]# cat /etc/keepalived/check_haproxy.sh
#!/bin/bash
# Stop keepalived if no haproxy process is running, so the VIPs fail over to the peer node
h=$(ps -C haproxy --no-header | wc -l)
if [ "$h" -eq 0 ]; then
    systemctl stop keepalived
fi
[iyunv@haproxy-1 ~]# chmod a+x /etc/keepalived/check_haproxy.sh
[iyunv@haproxy-2 ~]# cat /etc/keepalived/check_haproxy.sh
#!/bin/bash
# Stop keepalived if no haproxy process is running, so the VIPs fail over to the peer node
h=$(ps -C haproxy --no-header | wc -l)
if [ "$h" -eq 0 ]; then
    systemctl stop keepalived
fi
[iyunv@haproxy-2 ~]# chmod a+x /etc/keepalived/check_haproxy.sh
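
The script can be exercised by hand before relying on it; while haproxy is running, the process count is non-zero and the script does nothing:

[iyunv@haproxy-1 ~]# ps -C haproxy --no-header | wc -l
[iyunv@haproxy-1 ~]# bash /etc/keepalived/check_haproxy.sh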




Start keepalived

## Host haproxy-1 (because of the higher priority of VI_1 here, vip1 sits on haproxy-1)
[iyunv@haproxy-1 ~]# systemctl restart keepalived  
[iyunv@haproxy-1 ~]# systemctl enable keepalived         
[iyunv@haproxy-1 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-10-28 12:30:32 CST; 1s ago
  Process: 2445 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 2446 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─2446 /usr/sbin/keepalived -D
           ├─2447 /usr/sbin/keepalived -D
           └─2448 /usr/sbin/keepalived -D
Oct 28 12:30:33 haproxy-1 Keepalived_vrrp[2448]: VRRP_Instance(VI_1) Changing effective priority from 150 to 152
Oct 28 12:30:33 haproxy-1 Keepalived_vrrp[2448]: VRRP_Instance(VI_2) Changing effective priority from 100 to 102
Oct 28 12:30:34 haproxy-1 Keepalived_vrrp[2448]: VRRP_Instance(VI_1) Entering MASTER STATE
Oct 28 12:30:34 haproxy-1 Keepalived_vrrp[2448]: VRRP_Instance(VI_1) setting protocol VIPs.
Oct 28 12:30:34 haproxy-1 Keepalived_vrrp[2448]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:30:34 haproxy-1 Keepalived_vrrp[2448]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 10.0.0.100
Oct 28 12:30:34 haproxy-1 Keepalived_vrrp[2448]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:30:34 haproxy-1 Keepalived_vrrp[2448]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:30:34 haproxy-1 Keepalived_vrrp[2448]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:30:34 haproxy-1 Keepalived_vrrp[2448]: Sending gratuitous ARP on ens33 for 10.0.0.100
[iyunv@haproxy-1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1d:7a:63 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8ec5:50ac:d71:20d7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::f87c:449f:eb4a:ba03/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever




## Host haproxy-2 (because of the higher priority of VI_2 here, vip2 sits on haproxy-2)
[iyunv@haproxy-2 ~]# systemctl start keepalived
[iyunv@haproxy-2 ~]# systemctl enable keepalived
[iyunv@haproxy-2 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-10-28 12:30:09 CST; 5s ago
  Process: 9158 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 9159 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─9159 /usr/sbin/keepalived -D
           ├─9160 /usr/sbin/keepalived -D
           └─9161 /usr/sbin/keepalived -D
Oct 28 12:30:10 haproxy-2 Keepalived_vrrp[9161]: VRRP_Instance(VI_1) Changing effective priority from 100 to 102
Oct 28 12:30:10 haproxy-2 Keepalived_vrrp[9161]: VRRP_Instance(VI_2) Changing effective priority from 150 to 152
Oct 28 12:30:10 haproxy-2 Keepalived_vrrp[9161]: VRRP_Instance(VI_2) Entering MASTER STATE
Oct 28 12:30:10 haproxy-2 Keepalived_vrrp[9161]: VRRP_Instance(VI_2) setting protocol VIPs.
Oct 28 12:30:10 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.200
Oct 28 12:30:10 haproxy-2 Keepalived_vrrp[9161]: VRRP_Instance(VI_2) Sending/queueing gratuitous ARPs on ens33 for 10.0.0.200
Oct 28 12:30:10 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.200
Oct 28 12:30:10 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.200
Oct 28 12:30:10 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.200
Oct 28 12:30:10 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.200
[iyunv@haproxy-2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:76:bf:48 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.12/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 10.0.0.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::f87c:449f:eb4a:ba03/64 scope link
       valid_lft forever preferred_lft forever




Access the web service through the VIPs:
[iyunv@localhost ~]# curl 10.0.0.100
It is web1
[iyunv@localhost ~]# curl 10.0.0.100
It is web2
[iyunv@localhost ~]# curl 10.0.0.100
It is web1
[iyunv@localhost ~]# curl 10.0.0.100
It is web2
[iyunv@localhost ~]#
[iyunv@localhost ~]# curl 10.0.0.200
It is web1
[iyunv@localhost ~]# curl 10.0.0.200
It is web2
[iyunv@localhost ~]# curl 10.0.0.200
It is web1
[iyunv@localhost ~]# curl 10.0.0.200
It is web2




Failover test


Stop the haproxy service on haproxy-1 and check whether vip1 floats over to haproxy-2 and whether the pages can still be reached:

## Stop haproxy on haproxy-1
[iyunv@haproxy-1 ~]# systemctl stop haproxy
[iyunv@haproxy-1 ~]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
Oct 28 11:30:32 haproxy-1 haproxy-systemd-wrapper[2321]: haproxy-systemd-wrapper: exit, haproxy RC=1
Oct 28 11:30:32 haproxy-1 systemd[1]: Unit haproxy.service entered failed state.
Oct 28 11:30:32 haproxy-1 systemd[1]: haproxy.service failed.
Oct 28 12:18:47 haproxy-1 systemd[1]: Started HAProxy Load Balancer.
Oct 28 12:18:47 haproxy-1 systemd[1]: Starting HAProxy Load Balancer...
Oct 28 12:18:47 haproxy-1 haproxy-systemd-wrapper[2369]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Oct 28 12:45:34 haproxy-1 systemd[1]: Stopping HAProxy Load Balancer...
Oct 28 12:45:34 haproxy-1 haproxy-systemd-wrapper[2369]: haproxy-systemd-wrapper: SIGTERM -> 2371.
Oct 28 12:45:34 haproxy-1 systemd[1]: Stopped HAProxy Load Balancer.
Oct 28 12:45:34 haproxy-1 haproxy-systemd-wrapper[2369]: haproxy-systemd-wrapper: exit, haproxy RC=0




## Check that keepalived on haproxy-1 has been stopped (by the check script) and that vip1 is gone
[iyunv@haproxy-1 ~]# systemctl status keepalived              
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
Oct 28 12:30:39 haproxy-1 Keepalived_vrrp[2448]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 10.0.0.100
Oct 28 12:30:39 haproxy-1 Keepalived_vrrp[2448]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:30:39 haproxy-1 Keepalived_vrrp[2448]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:30:39 haproxy-1 Keepalived_vrrp[2448]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:30:39 haproxy-1 Keepalived_vrrp[2448]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:51:06 haproxy-1 Keepalived[2446]: Stopping
Oct 28 12:51:06 haproxy-1 systemd[1]: Stopping LVS and VRRP High Availability Monitor...
Oct 28 12:51:06 haproxy-1 Keepalived_vrrp[2448]: VRRP_Instance(VI_1) sent 0 priority
Oct 28 12:51:06 haproxy-1 Keepalived_vrrp[2448]: VRRP_Instance(VI_1) removing protocol VIPs.
Oct 28 12:51:07 haproxy-1 systemd[1]: Stopped LVS and VRRP High Availability Monitor.
[iyunv@haproxy-1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:1d:7a:63 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::8ec5:50ac:d71:20d7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::f87c:449f:eb4a:ba03/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever




## Check the keepalived status and addresses on haproxy-2: vip1 has successfully floated over to haproxy-2
[iyunv@haproxy-2 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-10-28 12:30:09 CST; 21min ago
  Process: 9158 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 9159 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─9159 /usr/sbin/keepalived -D
           ├─9160 /usr/sbin/keepalived -D
           └─9161 /usr/sbin/keepalived -D
Oct 28 12:51:07 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:51:07 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:51:07 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:51:07 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:51:12 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:51:12 haproxy-2 Keepalived_vrrp[9161]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 10.0.0.100
Oct 28 12:51:12 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:51:12 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:51:12 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.100
Oct 28 12:51:12 haproxy-2 Keepalived_vrrp[9161]: Sending gratuitous ARP on ens33 for 10.0.0.100
[iyunv@haproxy-2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:76:bf:48 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.12/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 10.0.0.200/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::f87c:449f:eb4a:ba03/64 scope link
       valid_lft forever preferred_lft forever




## Access the web pages (both VIPs still serve traffic normally)
[iyunv@localhost ~]# curl 10.0.0.100
It is web1
[iyunv@localhost ~]# curl 10.0.0.100
It is web2
[iyunv@localhost ~]# curl 10.0.0.100
It is web1
[iyunv@localhost ~]# curl 10.0.0.100
It is web2
[iyunv@localhost ~]# curl 10.0.0.200
It is web1
[iyunv@localhost ~]# curl 10.0.0.200
It is web2
[iyunv@localhost ~]# curl 10.0.0.200
It is web1
[iyunv@localhost ~]# curl 10.0.0.200
It is web2
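
To return to the original dual-master layout after the test, haproxy and keepalived can simply be started again on haproxy-1 and the VIP placement checked (a restore step not shown in the original capture; with nopreempt configured, vip1 does not necessarily move back immediately):

[iyunv@haproxy-1 ~]# systemctl start haproxy
[iyunv@haproxy-1 ~]# systemctl start keepalived
[iyunv@haproxy-1 ~]# ip addr show ens33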




    That concludes this simple HAProxy + keepalived high-availability experiment. If anything here is wrong, please point it out and I will correct it; please also forgive any rough spots in the write-up.
