32213211111 posted on 2016-12-8 08:46:12

Deploying a Ceph cluster on CentOS 7 (verified working)

Environment

Hostname     OS                             IP address      Ceph version
ceph-node1   CentOS Linux release 7.2.1511  192.168.1.120   jewel
ceph-node2   CentOS Linux release 7.2.1511  192.168.1.121   jewel
ceph-node3   CentOS Linux release 7.2.1511  192.168.1.128   jewel

Preparation
    ◆ Steps 1-7 must be run on all three Ceph nodes
    ◆ Step 8 is run on ceph-node1 only
1: Set the hostname (on CentOS 7 use hostnamectl; the HOSTNAME= line in /etc/sysconfig/network is the CentOS 6 method and is ignored here)
# hostnamectl set-hostname ceph-node1
2: Configure the IP address, netmask, and gateway
# cat /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE="Ethernet"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="no"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
NAME="eno16777736"
DEVICE="eno16777736"
ONBOOT="yes"
IPADDR="192.168.1.120"
PREFIX="24"
GATEWAY="192.168.1.1"
DNS1="192.168.0.220"
3: Configure the hosts file (each entry is the IP address first, then the hostname)
# vim /etc/hosts
192.168.1.120 ceph-node1
192.168.1.121 ceph-node2
192.168.1.128 ceph-node3
4: Configure the firewall: open 6789/tcp for the monitor and 6800-7100/tcp for the OSD daemons
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
success
# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
success
# firewall-cmd --reload
success
# firewall-cmd --zone=public --list-all
public (default, active)
interfaces: eno16777736
sources:
services: dhcpv6-client ssh
ports: 6789/tcp 6800-7100/tcp
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
5: Put SELinux into permissive mode
# setenforce 0
# sed -i "s/enforcing/permissive/g" /etc/selinux/config
6: Configure time synchronization
# yum -y install ntp ntpdate
# systemctl restart ntpd.service
# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
# systemctl enable ntpdate.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpdate.service to /usr/lib/systemd/system/ntpdate.service.
7: Add the Ceph jewel repository and update yum
# rpm -Uvh https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
# yum -y update
8: Set up passwordless SSH login from ceph-node1 to the other nodes
# ssh-keygen
# ssh-copy-id root@ceph-node2
# ssh-copy-id root@ceph-node3
Creating the cluster on ceph-node1
1: Install ceph-deploy
# yum -y install ceph-deploy
2: Use ceph-deploy to create a Ceph cluster
# mkdir /etc/ceph
# cd /etc/ceph/
# ceph-deploy new ceph-node1
# ls
ceph.conf  ceph.log  ceph.mon.keyring
3: Use ceph-deploy to install the Ceph binaries on all nodes
# ceph-deploy install ceph-node1 ceph-node2 ceph-node3
# ceph -v
ceph version 9.2.1 (752b6a3020c3de74e07d2a8b4c5e48dab5a6b6fd)
4: Create the first Ceph monitor on ceph-node1
# ceph-deploy mon create-initial
5: Create OSDs on ceph-node1
# ceph-deploy disk list ceph-node1
# ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
# ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
# ceph -s
Expanding the Ceph cluster
1: Add the public network address to the configuration file
# cd /etc/ceph/
# vim /etc/ceph/ceph.conf
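The post opens ceph.conf here but never shows the line that was added. Given that all three nodes sit on 192.168.1.x addresses, the entry would presumably look like the following (the /24 subnet is an inference from the addresses in the environment table, not something the post states):

```
[global]
public network = 192.168.1.0/24
```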
2: Create a monitor on each of ceph-node2 and ceph-node3
# ceph-deploy mon create ceph-node2
# ceph-deploy mon create ceph-node3
3: Add OSDs from ceph-node2 and ceph-node3
# ceph-deploy disk list ceph-node2 ceph-node3
# ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
# ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
# ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
# ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
4: Adjust the pg and pgp counts
# ceph osd pool set rbd pg_num 256
# ceph osd pool set rbd pgp_num 256
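The value 256 matches the common rule of thumb: total PGs ≈ (number of OSDs × 100) / replica count, rounded down to a power of two. A small sketch of that arithmetic — the 9-OSD and 3-replica figures come from this cluster, but the formula itself is the general guideline, not something the post spells out:

```shell
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded down to a power of two.
osds=9          # 3 nodes x 3 disks, as built above
replicas=3      # Ceph's default pool size
target=$(( osds * 100 / replicas ))   # 300
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"      # prints 256
```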
5: Check the cluster status; it should now report HEALTH_OK
# ceph -s
    cluster 266bfddf-7f45-416d-95df-4e6487e8eb20
   health HEALTH_OK
   monmap e3: 3 mons at {ceph-node1=192.168.1.120:6789/0,ceph-node2=192.168.1.121:6789/0,ceph-node3=192.168.1.128:6789/0}
            election epoch 8, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
   osdmap e53: 9 osds: 9 up, 9 in
            flags sortbitwise,require_jewel_osds
      pgmap v158: 256 pgs, 1 pools, 0 bytes data, 0 objects
            320 MB used, 134 GB / 134 GB avail
               256 active+clean


