1. Environment
OS: CentOS 6.4 x86_64, minimal installation
node1 192.168.3.61 node1.test.com
node2 192.168.3.62 node2.test.com
vip 192.168.3.63
2. Basic configuration
a. Set up passwordless SSH between the nodes
node1:
[iyunv@node1 ~]# ssh-keygen
[iyunv@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.3.62
node2:
[iyunv@node2 ~]# ssh-keygen
[iyunv@node2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.3.61
b. Perform the same steps on node1 and node2; only node1 is shown here.
Configure local name resolution in /etc/hosts
[iyunv@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.3.61 node1.test.com node1
192.168.3.62 node2.test.com node2
Disable the firewall and SELinux
[iyunv@node1 ~]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
[iyunv@node1 ~]# getenforce
Disabled
Install the EPEL repository
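A minimal sketch of this step for CentOS 6 (the exact epel-release version and mirror URL are assumptions; adjust to a current mirror):
[iyunv@node1 ~]# rpm -ivh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm # assumed package URL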
Configure NTP time synchronization
[iyunv@node1 ~]# echo "*/10 * * * * /usr/sbin/ntpdate asia.pool.ntp.org &>/dev/null" >/var/spool/cron/root
[iyunv@node1 ~]# ntpdate asia.pool.ntp.org
11 Jun 14:42:40 ntpdate[1529]: step time server 120.119.31.1 offset 167.549052 sec
[iyunv@node1 ~]# hwclock -w
3. Installing and configuring corosync
a. Install corosync
node1:
[iyunv@node1 ~]# yum install corosync -y
node2:
[iyunv@node2 ~]# yum install corosync -y
b. Configure corosync
[iyunv@node1 ~]# cd /etc/corosync/
[iyunv@node1 corosync]# cp corosync.conf.example corosync.conf
[iyunv@node1 corosync]# egrep -v "^$|^#|^[[:space:]]+#" /etc/corosync/corosync.conf
compatibility: whitetank
totem {
version: 2
secauth: on # enable authentication
threads: 0
interface {
ringnumber: 0
bindnetaddr: 192.168.3.0 # heartbeat network
mcastaddr: 239.255.11.49 # multicast address
mcastport: 5405
ttl: 1
}
}
logging {
fileline: off
to_stderr: no
to_logfile: yes
logfile: /var/log/cluster/corosync.log
to_syslog: no
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}
amf {
mode: disabled
}
service { # start pacemaker as a corosync plugin
ver: 0
name: pacemaker
}
aisexec {
user: root
group: root
}
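This configuration relies on multicast between the nodes; omping (packaged in EPEL, assumed installed on both nodes) can verify basic multicast connectivity before the stack is started:
[iyunv@node1 ~]# omping 192.168.3.61 192.168.3.62 # run the same command on node2 at the same time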
c. Generate the authentication key
[iyunv@node1 corosync]# mv /dev/{random,random.bak}
[iyunv@node1 corosync]# ln -s /dev/urandom /dev/random # avoids blocking on entropy, which shortens key generation
# generate the key file
[iyunv@node1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Writing corosync key to /etc/corosync/authkey.
[iyunv@node1 corosync]# ll
total 24
-r-------- 1 root root 128 Jun 11 15:05 authkey # the newly generated key file
-rw-r--r-- 1 root root 2769 Jun 11 14:59 corosync.conf
-rw-r--r-- 1 root root 2663 Oct 15 2014 corosync.conf.example
-rw-r--r-- 1 root root 1073 Oct 15 2014 corosync.conf.example.udpu
drwxr-xr-x 2 root root 4096 Oct 15 2014 service.d
drwxr-xr-x 2 root root 4096 Oct 15 2014 uidgid.d
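Since /dev/random was replaced by a symlink above, the real device should be restored once the key exists; a short sketch:
[iyunv@node1 corosync]# rm -f /dev/random
[iyunv@node1 corosync]# mv /dev/random.bak /dev/random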
d. Copy node1's config file and key file to node2
[iyunv@node1 corosync]# scp authkey corosync.conf node2:/etc/corosync/
authkey 100% 128 0.1KB/s 00:00
corosync.conf 100% 2769 2.7KB/s 00:00
Because pacemaker is launched from the corosync config file, corosync must not be started until pacemaker has been installed.
4. Installing and configuring pacemaker
a. Install pacemaker
node1:
[iyunv@node1 corosync]# yum install pacemaker -y
node2:
[iyunv@node2 ~]# yum install pacemaker -y
b. Install crmsh
The steps are identical on node1 and node2.
[iyunv@node1 corosync]# wget http://download.opensuse.org/rep ... -2.1-1.2.x86_64.rpm
[iyunv@node1 corosync]# yum install python-dateutil python-lxml redhat-rpm-config pssh -y
[iyunv@node1 corosync]# rpm -ivh crmsh-2.1-1.2.x86_64.rpm
warning: crmsh-2.1-1.2.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID 17280ddf: NOKEY
Preparing... ########################################### [100%]
1:crmsh ########################################### [100%]
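A quick sanity check that the shell is usable (the --version flag is assumed to be present in crmsh 2.1):
[iyunv@node1 corosync]# crm --version # or simply: crm help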
c. Start corosync
[iyunv@node1 ~]# ssh node2 service corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
[iyunv@node1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync): [ OK ]
d. Verify startup
(1) Check that the corosync engine started correctly
[iyunv@node1 ~]# egrep "Corosync Cluster Engine|configuration file" /var/log/cluster/corosync.log
Jun 11 15:23:10 corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Jun 11 15:23:10 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
(2) Check the initial node membership information
[iyunv@node1 ~]# grep TOTEM /var/log/cluster/corosync.log
Jun 11 15:35:57 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Jun 11 15:35:57 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jun 11 15:35:57 corosync [TOTEM ] The network interface [192.168.3.61] is now up.
Jun 11 15:35:57 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jun 11 15:35:58 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
(3) Check for errors during startup
[iyunv@node1 ~]# grep ERROR: /var/log/cluster/corosync.log
Jun 11 15:23:10 corosync [pcmk ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Jun 11 15:23:10 corosync [pcmk ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
These two ERROR lines only warn that the plugin-based pacemaker integration is deprecated; for this CentOS 6 setup they can be ignored.
(4) Check that pacemaker started correctly
[iyunv@node1 ~]# grep pcmk_startup /var/log/cluster/corosync.log
Jun 11 15:23:10 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Jun 11 15:23:10 corosync [pcmk ] Logging: Initialized pcmk_startup
Jun 11 15:23:10 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Jun 11 15:23:10 corosync [pcmk ] info: pcmk_startup: Service: 9
Jun 11 15:23:10 corosync [pcmk ] info: pcmk_startup: Local hostname: node1.test.com
(5) Check the cluster status
[iyunv@node1 ~]# crm status
Last updated: Thu Jun 11 15:36:15 2015
Last change: Thu Jun 11 15:36:09 2015
Stack: classic openais (with plugin)
Current DC: node2.test.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ node1.test.com node2.test.com ]
The output shows that node1 and node2 are both online.
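In a two-node cluster like this one, STONITH and the quorum policy usually have to be relaxed before any resource is defined; a minimal sketch using crmsh:
[iyunv@node1 ~]# crm configure property stonith-enabled=false
[iyunv@node1 ~]# crm configure property no-quorum-policy=ignore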
5. Installing DRBD
DRBD is compiled from source here; the steps are the same on node1 and node2.
Building DRBD requires the kernel-devel and kernel-headers RPMs, whose versions must match the running kernel (uname -r); we extract both packages from the install DVD. DRBD download: http://oss.linbit.com/drbd/8.4/drbd-8.4.4.tar.gz
a. Install DRBD
[iyunv@node1 ~]# uname -r
2.6.32-358.el6.x86_64
[iyunv@node1 ~]# ll |grep rpm
-r--r--r-- 1 root root 8548160 Jun 11 15:44 kernel-devel-2.6.32-358.el6.x86_64.rpm
-r--r--r-- 1 root root 2426756 Jun 11 15:44 kernel-headers-2.6.32-358.el6.x86_64.rpm
[iyunv@node1 ~]# rpm -ivh kernel-devel-2.6.32-358.el6.x86_64.rpm kernel-headers-2.6.32-358.el6.x86_64.rpm
Preparing... ########################################### [100%]
1:kernel-headers ########################################### [ 50%]
2:kernel-devel ########################################### [100%]
[iyunv@node1 ~]# yum install gcc make flex -y
[iyunv@node1 ~]# tar xf drbd-8.4.4.tar.gz && cd drbd-8.4.4
[iyunv@node1 drbd-8.4.4]# ./configure --prefix=/usr/local/drbd --with-km --with-pacemaker --with-heartbeat
[iyunv@node1 drbd-8.4.4]# make KDIR=/usr/src/kernels/2.6.32-358.el6.x86_64/
[iyunv@node1 drbd-8.4.4]# make install
[iyunv@node1 drbd-8.4.4]# mkdir -p /usr/local/drbd/var/run/drbd
[iyunv@node1 drbd-8.4.4]# cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/init.d/
[iyunv@node1 drbd-8.4.4]# chkconfig --add drbd
[iyunv@node1 drbd-8.4.4]# chkconfig drbd off
# build and install the DRBD kernel module
[iyunv@node1 drbd-8.4.4]# cd drbd
[iyunv@node1 drbd]# make clean
rm -rf .tmp_versions Module.markers Module.symvers modules.order
rm -f *.[oas] *.ko .*.cmd .*.d .*.tmp *.mod.c .*.flags .depend .kernel*
rm -f compat/*.[oas] compat/.*.cmd
[iyunv@node1 drbd]# make KDIR=/usr/src/kernels/2.6.32-358.el6.x86_64/
[iyunv@node1 drbd]# cp drbd.ko /lib/modules/2.6.32-358.el6.x86_64/kernel/lib/
[iyunv@node1 drbd]# depmod
[iyunv@node1 drbd]# modprobe drbd
[iyunv@node1 drbd]# lsmod |grep drbd
drbd 340519 0
libcrc32c 1246 1 drbd
b. Configure DRBD
[iyunv@node1 ~]# egrep -v "^$|^#|^[[:space:]]+#" /usr/local/drbd/etc/drbd.d/global_common.conf
global {
usage-count no;
}
common {
protocol C;
handlers {
pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
}
startup {
}
options {
}
disk {
on-io-error detach;
resync-rate 200M;
no-disk-flushes;
no-md-flushes;
}
net {
cram-hmac-alg "sha1";
shared-secret "123456";
sndbuf-size 512k;
max-buffers 8000;
unplug-watermark 1024;
max-epoch-size 8000;
after-sb-0pri disconnect;
after-sb-1pri disconnect;
after-sb-2pri disconnect;
rr-conflict disconnect;
}
}
c. Split /dev/sdb on node1 and node2 into two partitions, /dev/sdb1 = 48 GB and /dev/sdb2 = the remaining space, then format /dev/sdb1 as ext4
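The partitions themselves are created in an interactive fdisk session; a minimal sketch (the +48G size matches the layout below; partprobe comes from the parted package and is an assumption on a minimal install):
[iyunv@node1 ~]# fdisk /dev/sdb # then: n, p, 1, <Enter>, +48G; n, p, 2, <Enter>, <Enter>; w
[iyunv@node1 ~]# partprobe /dev/sdb # re-read the partition table (or reboot)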
node1:
[iyunv@node1 ~]# fdisk -l |grep /dev/sdb
Disk /dev/sdb: 53.7 GB, 53687091200 bytes
/dev/sdb1 1 6267 50339646 83 Linux # ~48 GB
/dev/sdb2 6268 6527 2088450 83 Linux # the remaining space, used for DRBD metadata
[iyunv@node1 ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3147760 inodes, 12584911 blocks
629245 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
385 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[iyunv@node1 ~]# tune2fs -c -1 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
node2:
[iyunv@node2 drbd]# fdisk -l |grep dev/sdb
Disk /dev/sdb: 53.7 GB, 53687091200 bytes
/dev/sdb1 1 6267 50339646 83 Linux
/dev/sdb2 6268 6527 2088450 83 Linux
[iyunv@node2 ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3147760 inodes, 12584911 blocks
629245 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
385 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[iyunv@node2 ~]# tune2fs -c -1 /dev/sdb1
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
d. Define the resource
[iyunv@node1 ~]# cat /usr/local/drbd/etc/drbd.d/web.res
resource web {
on node1.test.com {
device /dev/drbd1;
disk /dev/sdb1;
address 192.168.3.61:7789;
meta-disk /dev/sdb2 [0];
}
on node2.test.com {
device /dev/drbd1;
disk /dev/sdb1;
address 192.168.3.62:7789;
meta-disk /dev/sdb2 [0];
}
}
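Letting drbdadm parse the resource and echo it back catches syntax errors early; a quick optional check:
[iyunv@node1 ~]# drbdadm dump web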
e. Copy the config file and resource definition to node2
[iyunv@node1 ~]# cd /usr/local/drbd/etc/drbd.d/
[iyunv@node1 drbd.d]# scp global_common.conf web.res node2:/usr/local/drbd/etc/drbd.d/
global_common.conf 100% 2542 2.5KB/s 00:00
web.res 100% 255 0.3KB/s 00:00
f. Initialize the DRBD resource on node1 and node2
node1:
[iyunv@node1 ~]# drbdadm create-md web
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
node2:
[iyunv@node2 ~]# drbdadm create-md web
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
g. Start DRBD
node1:
[iyunv@node1 ~]# /etc/init.d/drbd start
Starting DRBD resources: [
create res: web
prepare disk: web
adjust disk: web
adjust net: web
]
node2:
[iyunv@node2 ~]# /etc/init.d/drbd start
Starting DRBD resources: [
create res: web
prepare disk: web
adjust disk: web
adjust net: web
]
h. Check DRBD status
[iyunv@node1 ~]# ln -s /usr/local/drbd/sbin/* /usr/bin/
[iyunv@node1 ~]# drbd-overview
1:web/0 Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
Both nodes are Secondary and the data is Inconsistent, which is expected before the initial synchronization.
i. Promote node1 to primary
[iyunv@node1 ~]# drbdadm -- --overwrite-data-of-peer primary web
[iyunv@node1 ~]# drbd-overview
1:web/0 SyncSource Primary/Secondary UpToDate/Inconsistent C r---n-
[>....................] sync'ed: 0.9% (48744/49156)M
# mount the DRBD device on /mnt and write some test data
[iyunv@node1 ~]# mount /dev/drbd1 /mnt
[iyunv@node1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 18G 1.7G 16G 10% /
tmpfs 495M 22M 473M 5% /dev/shm
/dev/sda1 194M 28M 156M 16% /boot
/dev/drbd1 48G 180M 45G 1% /mnt
[iyunv@node1 ~]# touch /mnt/test.txt
[iyunv@node1 ~]# ll /mnt/
total 16
drwx------ 2 root root 16384 Jun 11 16:26 lost+found
-rw-r--r-- 1 root root 0 Jun 11 16:45 test.txt
[iyunv@node1 ~]# umount /mnt
# drbd-overview shows the background sync in progress
[iyunv@node1 ~]# drbd-overview
1:web/0 SyncSource Primary/Secondary UpToDate/Inconsistent C r-----
[====>...............] sync'ed: 27.2% (35808/49156)M
# sync complete
[iyunv@node1 ~]# drbd-overview
1:web/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
# on node2, mount /dev/sdb1 to check whether the data was replicated
[iyunv@node2 ~]# drbdadm down web
[iyunv@node2 ~]# mount /dev/sdb1 /mnt
[iyunv@node2 ~]# ll /mnt
total 16
drwx------ 2 root root 16384 Jun 11 16:26 lost+found
-rw-r--r-- 1 root root 0 Jun 11 16:45 test.txt # the data has been replicated
[iyunv@node2 ~]# umount /mnt
[iyunv@node2 ~]# drbdadm up web
6. Installing and configuring MySQL
node1:
# install the base build packages
[iyunv@node1 ~]# yum -y install make gcc-c++ cmake bison-devel ncurses-devel
# create the mysql user and group
[iyunv@node1 ~]# groupadd mysql
[iyunv@node1 ~]# useradd -g mysql mysql -s /sbin/nologin
# unpack the source; mysql-5.5.37.tar.gz was downloaded in advance
[iyunv@node1 ~]# tar xf mysql-5.5.37.tar.gz && cd mysql-5.5.37
# create the MySQL data directory; DRBD provides the HA storage, so the directory must live on the DRBD device
[iyunv@node1 ~]# mkdir /data
[iyunv@node1 ~]# mount /dev/drbd1 /data/
[iyunv@node1 ~]# mkdir -p /data/mysql/data # the data directory lives on the DRBD device
# build and install mysql
[iyunv@node1 mysql-5.5.37]# cmake \
> -DCMAKE_INSTALL_PREFIX=/usr/local/mysql-5.5.37 \
> -DMYSQL_DATADIR=/data/mysql/data \
> -DSYSCONFDIR=/etc \
> -DWITH_MYISAM_STORAGE_ENGINE=1 \
> -DWITH_INNOBASE_STORAGE_ENGINE=1 \
> -DWITH_MEMORY_STORAGE_ENGINE=1 \
> -DWITH_READLINE=1 \
> -DMYSQL_UNIX_ADDR=/var/lib/mysql/mysql.sock \
> -DMYSQL_TCP_PORT=3306 \
> -DENABLED_LOCAL_INFILE=1 \
> -DWITH_PARTITION_STORAGE_ENGINE=1 \
> -DEXTRA_CHARSETS=all \
> -DDEFAULT_CHARSET=utf8 \
> -DDEFAULT_COLLATION=utf8_general_ci
[iyunv@node1 mysql-5.5.37]# make && make install
# initialize the data directory
[iyunv@node1 mysql-5.5.37]# scripts/mysql_install_db --datadir=/data/mysql/data/ --user=mysql --basedir=/usr/local/mysql-5.5.37/
# copy the mysql config file
[iyunv@node1 mysql-5.5.37]# cp -rf support-files/my-large.cnf /etc/my.cnf
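Pacemaker will later need to start and stop MySQL as a service, so an init script plus a quick smoke test is the usual next step; a minimal sketch (paths assume the standard 5.5 support-files layout, verify locally):
[iyunv@node1 mysql-5.5.37]# cp support-files/mysql.server /etc/init.d/mysqld
[iyunv@node1 mysql-5.5.37]# chmod +x /etc/init.d/mysqld
[iyunv@node1 mysql-5.5.37]# chkconfig --add mysqld && chkconfig mysqld off # the cluster, not init, will start mysqld
[iyunv@node1 mysql-5.5.37]# service mysqld start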