1. On master, run:
service corosync start
ssh slave 'service corosync start'
Note: the second node (slave) must be started from the first node (master) with the command above; do not start corosync directly on the slave node.
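As an extra sanity check (not part of the original steps), the totem ring status can be inspected on each node with corosync's own tool:
[iyunv@master corosync]# corosync-cfgtool -s
This prints the local node ID and whether ring 0 is active without faults.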
Check whether the corosync engine started correctly:
[iyunv@master corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/corosync.log
May 19 23:11:05 corosync [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:2055.
May 19 23:11:46 corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
May 19 23:11:46 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'
Check that the initial membership notifications were sent correctly:
[iyunv@master corosync]# grep TOTEM /var/log/corosync.log
May 19 19:59:44 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
May 19 19:59:44 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
May 19 19:59:44 corosync [TOTEM ] The network interface [172.30.82.45] is now up.
May 19 19:59:44 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check whether any errors occurred during startup:
[iyunv@master corosync]# grep ERROR: /var/log/corosync.log
Check whether pacemaker started correctly:
[iyunv@master corosync]# grep pcmk_startup /var/log/corosync.log
May 19 23:11:46 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
May 19 23:11:46 corosync [pcmk ] Logging: Initialized pcmk_startup
May 19 23:11:46 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
May 19 23:11:46 corosync [pcmk ] info: pcmk_startup: Service: 9
May 19 23:11:46 corosync [pcmk ] info: pcmk_startup: Local hostname: master
Use the following command to check the startup state of the cluster nodes:
[iyunv@master corosync]# crm status
Last updated: Wed May 20 00:10:38 2015
Last change: Tue May 19 22:49:50 2015
Stack: classic openais (with plugin)
Current DC: slave - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ master slave ]
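For a one-shot or continuously refreshing view, pacemaker's crm_mon can also be used (optional; not part of the original steps):
[iyunv@master corosync]# crm_mon -1
The -1 flag prints the cluster status once and exits; without it, crm_mon keeps refreshing.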
2. Configure the cluster resources. Two primitive resources and one group resource are needed here.
a. Configure the VIP:
crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=172.30.82.61 nic=eth0 cidr_netmask=24
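If periodic health checking of the address is desired, the primitive could instead be defined with a monitor operation, for example (the interval and timeout values here are assumptions, not from the original setup):
crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=172.30.82.61 nic=eth0 cidr_netmask=24 op monitor interval=30s timeout=20s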
b. Configure the ldirectord service resource:
crm(live)configure# primitive ldir lsb:ldirectord
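ldirectord itself reads its virtual/real server definitions from /etc/ha.d/ldirectord.cf, which is assumed to have been prepared earlier. As a reference only, a minimal sketch matching this environment (rr scheduler, direct-routing real servers 172.30.82.3 and 172.30.82.11) might look like the following; the health-check request/receive values are assumptions:
# /etc/ha.d/ldirectord.cf (sketch)
checktimeout=3
checkinterval=5
autoreload=yes
quiescent=no
virtual=172.30.82.61:80
        real=172.30.82.3:80 gate
        real=172.30.82.11:80 gate
        scheduler=rr
        service=http
        request="index.html"
        receive="OK"
        protocol=tcp
        checktype=negotiate
Here "gate" corresponds to the Route (direct routing) forwarding method seen later in the ipvsadm output.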
c. Configure the group resource. A group keeps its member resources running on the same node; by default the cluster balances resources across all nodes.
crm(live)configure# group lvsserver vip ldir
d. Instead of defining a group, resource stickiness and resource constraints can also express placement preferences; the following are only examples (see the stickiness sketch after this sub-step):
Order constraint: the order in which resources start
crm(live)configure# order vip_before_ldir mandatory: vip ldir
Colocation constraint: which resources must run together
crm(live)configure# colocation ldir_with_vip inf: ldir vip
Location constraint: which node a resource prefers to run on
crm(live)configure# location vip_on_master vip rule 100: #uname eq master
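The resource stickiness mentioned above can be set as a cluster-wide default, for example (the value 100 is only an illustrative choice):
crm(live)configure# rsc_defaults resource-stickiness=100
A positive stickiness makes resources prefer to stay where they are currently running rather than move back after a failover.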
e. A few other settings
Disable the STONITH device:
crm(live)configure# property stonith-enabled=false
Set the cluster's behavior when quorum is lost to ignore; with only two nodes this is the only workable choice:
crm(live)configure# property no-quorum-policy=ignore
The corosync framework, how it works, and the full configuration command reference are left for self-study; the focus here is on setting up and testing the environment.
View the cluster configuration information base (CIB):
crm(live)configure# show
node master
node slave
primitive ldir lsb:ldirectord
primitive vip IPaddr \
params ip=172.30.82.61 nic=eth0 cidr_netmask=24
group lvsserver vip ldir
property cib-bootstrap-options: \
dc-version=1.1.11-97629de \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes=2 \
stonith-enabled=false \
no-quorum-policy=ignore
Verify the configuration syntax:
crm(live)configure# verify
If no errors are reported, commit to persist the configuration:
crm(live)configure# commit
3. Test the cluster service; clients access 172.30.82.61.
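Client traffic for this test can be generated with simple HTTP requests against the VIP from any client host, for example (assuming the real servers serve a test page over HTTP):
for i in 1 2 3 4; do curl -s http://172.30.82.61/; done
With the rr scheduler, successive requests should alternate between the two real servers.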
a. On master, run:
[iyunv@master corosync]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.30.82.61:80 rr
-> 172.30.82.3:80 Route 1 0 13
-> 172.30.82.11:80 Route 1 0 14
b. On slave, run:
[iyunv@slave ha.d]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
This shows that the cluster resources are running only on master.
4. Cluster resource failover test
a. On master, run:
service corosync stop
[iyunv@master log]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
b. On slave, run:
[iyunv@slave log]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.30.82.61:80 rr
-> 172.30.82.3:80 Route 1 1 17
-> 172.30.82.11:80 Route 1 0 18
This shows the cluster resources were successfully transferred to slave.
c. Back-end (real server) failure detection. On node1, run:
service httpd stop
Check the cluster service on master:
[iyunv@master log]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.30.82.61:80 rr
-> 172.30.82.11:80 Route 1 0 0
Restore the service on node1:
service httpd start
[iyunv@master log]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.30.82.61:80 rr
-> 172.30.82.11:80 Route 1 0
-> 172.30.82.3:80 Route 1