Posted by yui on 2019-2-2 08:08:26

Building Ceph on CentOS 7.4

  

This post uses the ceph-deploy tool to quickly bring up a Ceph cluster.



  
1. Environment Preparation


- Modify the hostnames (see the sketch after the table below)
- The OS release used in this article:

# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
  

IP            Hostname     Role
10.10.10.20   admin-node   ceph-deploy
10.10.10.21   node1        mon
10.10.10.22   node2        osd
10.10.10.23   node3        osd
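The original post omits the hostname commands themselves; a minimal sketch, assuming the names from the table above, is to run hostnamectl on each machine:

# hostnamectl set-hostname admin-node    # on 10.10.10.20
# hostnamectl set-hostname node1         # on 10.10.10.21
# hostnamectl set-hostname node2         # on 10.10.10.22
# hostnamectl set-hostname node3         # on 10.10.10.23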


- Set up name resolution (here we edit the /etc/hosts file)
- Configure this on every node
   


# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.20 admin-node
10.10.10.21 node1
10.10.10.22 node2
10.10.10.23 node3
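A quick sanity check (my own addition, not from the original post) that each name resolves and is reachable from the admin-node:

# for h in node1 node2 node3; do ping -c 1 $h; done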
  
  


- Configure the yum repositories
- Configure this on every node
  


# mv /etc/yum.repos.d{,.bak}
# mkdir /etc/yum.repos.d
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Then create a Ceph repo file (e.g. /etc/yum.repos.d/ceph.repo) with the following content:

[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-jewel/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
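Not in the original post, but before installing anything it is worth confirming the new repos are visible:

# yum clean all
# yum repolist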
  


- Disable the firewall and SELinux
- Configure this on every node
  


# systemctl stop firewalld.service
# systemctl disable firewalld.service
# setenforce 0
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
  


- Set up passwordless SSH key login between the nodes
- Configure this on every node

# ssh-keygen
# ssh-copy-id 10.10.10.21
# ssh-copy-id 10.10.10.22
# ssh-copy-id 10.10.10.23
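ceph-deploy logs in by hostname, so it is worth confirming (my addition) that key-based login also works for the names defined in /etc/hosts:

# for h in node1 node2 node3; do ssh $h hostname; done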
  


- Use chrony to synchronize time
- Configure this on every node
  


# yum install chrony -y
# systemctl restart chronyd
# systemctl enable chronyd
# chronyc sources -v    (check whether time is synchronized; a * marks the selected, synced source)
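The original post does not show any chrony.conf changes; a minimal sketch, assuming node1-node3 should sync to the admin-node rather than the default pool servers (an assumed topology, not stated in the original), would be:

# echo "server 10.10.10.20 iburst" >> /etc/chrony.conf    # on node1, node2, node3 (assumed topology)
# systemctl restart chronyd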
  

2. Installing Ceph (Jewel)
  


- Install ceph-deploy
- Install this only on the admin-node
  


# yum install ceph-deploy -y
  



- On the admin node, create a directory to hold the configuration files and keyrings that ceph-deploy generates
- Run this only on the admin-node



# mkdir /etc/ceph
# cd /etc/ceph/
  


- Purge the configuration (run the following if you want to reinstall from scratch)
- Run this only on the admin-node
  


# ceph-deploy purgedata node1 node2 node3
# ceph-deploy forgetkeys
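purgedata removes the data and configuration but leaves the packages installed; ceph-deploy also provides a purge subcommand that removes the Ceph packages as well:

# ceph-deploy purge node1 node2 node3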
  


- Create the cluster
- Run this only on the admin-node
  


# ceph-deploy new node1
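ceph-deploy new writes the initial files into the current directory; you should now see something like:

# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring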
  


- Edit the Ceph configuration and change the replica count to 2
- Run this only on the admin-node
  


# vi ceph.conf

[global]
fsid = 183e441b-c8cd-40fa-9b1a-0387cb8e8735
mon_initial_members = node1
mon_host = 10.10.10.21
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd journal size = 1024
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
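For reference, the usual rule of thumb from the Ceph docs for pool PG counts is (number of OSDs × 100) ÷ replica count, rounded up to a power of two; with the two OSDs and size 2 used here that works out to (2 × 100) / 2 = 100, rounded up to 128, so 333 is on the generous side for a cluster this small.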
   


- Install Ceph on all nodes
- Run this only on the admin-node
  


# ceph-deploy install admin-node node1 node2 node3
  


- Deploy the initial monitor(s) and gather all the keys
- Run this only on the admin-node
  


# ceph-deploy mon create-initial
# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring
ceph.bootstrap-osd.keyring  ceph.conf                   rbdmap
# ceph -s    (check the cluster status)
    cluster 8d395c8f-6ac5-4bca-bbb9-2e0120159ed9
     health HEALTH_ERR
            no osds
     monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

HEALTH_ERR with "no osds" is expected at this point; it clears once the OSDs are created below.
  


- Create the OSDs
  


# lsblk    (node2 and node3 act as OSDs)
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0           2:0    1    4K  0 disk
sda           8:0    0   20G  0 disk
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   19G  0 part
  ├─cl-root 253:0    0   17G  0 lvm  /
  └─cl-swap 253:1    0    2G  0 lvm
sdb           8:16   0   50G  0 disk /var/local/osd0
sdc           8:32   0    5G  0 disk
sr0          11:0    1  4.1G  0 rom

On node2:

# mkfs.xfs /dev/sdb
# mkdir /var/local/osd0
# mount /dev/sdb /var/local/osd0
# chown ceph:ceph /var/local/osd0

On node3:

# mkfs.xfs /dev/sdb
# mkdir /var/local/osd1
# mount /dev/sdb /var/local/osd1/
# chown ceph:ceph /var/local/osd1
  


- Copy the keyring and configuration files from the admin-node to each node
- Run this only on the admin-node

  


# ceph-deploy admin admin-node node1 node2 node3
  


- Make sure you have the right permissions on ceph.client.admin.keyring
- Run this only on the OSD nodes
  


# chmod +r /etc/ceph/ceph.client.admin.keyring
  


- From the admin node, run ceph-deploy to prepare the OSDs
  


# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
  


- Activate the OSDs
  


# ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
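As an aside, jewel-era ceph-deploy also offers an osd create subcommand that combines the two steps; a sketch, assuming it accepts the same host:path arguments as prepare/activate in this version:

# ceph-deploy osd create node2:/var/local/osd0 node3:/var/local/osd1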
  


- Check the cluster's health
  


# ceph health
HEALTH_OK
# ceph -s
    cluster 69f64f6d-f084-4b5e-8ba8-7ba3cec9d927
     health HEALTH_OK
     monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
            15459 MB used, 45950 MB / 61410 MB avail
                  64 active+clean
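Not part of the original post, but ceph osd tree is a handy final check to see how the OSDs map onto the hosts:

# ceph osd tree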
  

  

  



