Ceph: too many PGs per OSD
1. Symptom: the Ceph cluster status reports "too many PGs per OSD (698 > max 300)"
# ceph -s
cluster e2ca994a-00c4-477f-9390-ea3f931c5062
health HEALTH_WARN
too many PGs per OSD (698 > max 300)
monmap e1: 3 mons at {hz-01-ops-tc-ceph-02=172.16.2.231:6789/0,hz-01-ops-tc-ceph-03=172.16.2.172:6789/0,hz-01-ops-tc-ceph-04=172.16.2.181:6789/0}
election epoch 14, quorum 0,1,2 hz-01-ops-tc-ceph-03,hz-01-ops-tc-ceph-04,hz-01-ops-tc-ceph-02
osdmap e54: 5 osds: 5 up, 5 in
flags sortbitwise,require_jewel_osds
pgmap v1670: 1164 pgs, 3 pools, 14640 kB data, 22 objects
240 MB used, 224 GB / 224 GB avail
1164 active+clean
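The 698 in the warning is the number of PG replicas each OSD carries: with 1164 PGs over 5 OSDs, the figure implies 3-way replicated pools (1164 × 3 / 5 ≈ 698). The inputs can be double-checked from the admin node; note that the replica size of 3 is inferred from the numbers above, not shown in this output:
# ceph osd dump | grep 'replicated size'
# ceph osd stat
The first command lists each pool's replica size and pg_num; the second shows how many OSDs are up and in.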
# ceph --show-config| grep mon_pg_warn_max_per_osd
mon_pg_warn_max_per_osd = 300

2. Adjust the Ceph configuration
# cd /my-cluster
# vim ceph.conf
Add the following parameter:
mon_pg_warn_max_per_osd = 1024
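For context, a minimal sketch of the relevant part of /my-cluster/ceph.conf after the edit; the fsid is the cluster id from the status output above, and the mon_initial_members/mon_host lines are assumed to match the monmap shown earlier, so they may differ in an actual deployment:
[global]
fsid = e2ca994a-00c4-477f-9390-ea3f931c5062
mon_initial_members = hz-01-ops-tc-ceph-02, hz-01-ops-tc-ceph-03, hz-01-ops-tc-ceph-04
mon_host = 172.16.2.231,172.16.2.172,172.16.2.181
mon_pg_warn_max_per_osd = 1024
The edited file is then pushed to the mon nodes with ceph-deploy: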
# ceph-deploy --overwrite-conf config push hz-01-ops-tc-ceph-04 hz-01-ops-tc-ceph-02 hz-01-ops-tc-ceph-03
found configuration file at: /root/.cephdeploy.conf
Invoked (1.5.39): /usr/bin/ceph-deploy --overwrite-conf config push hz-01-ops-tc-ceph-04 hz-01-ops-tc-ceph-02 hz-01-ops-tc-ceph-03
ceph-deploy options:
username : None
verbose : False
overwrite_conf : True
subcommand : push
quiet : False
cd_conf :
cluster : ceph
client : ['hz-01-ops-tc-ceph-04', 'hz-01-ops-tc-ceph-02', 'hz-01-ops-tc-ceph-03']
func :
ceph_conf : None
default_release : False
Pushing config to hz-01-ops-tc-ceph-04
connected to host: hz-01-ops-tc-ceph-04
detect platform information from remote host
detect machine type
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing config to hz-01-ops-tc-ceph-02
connected to host: hz-01-ops-tc-ceph-02
detect platform information from remote host
detect machine type
write cluster configuration to /etc/ceph/{cluster}.conf
Pushing config to hz-01-ops-tc-ceph-03
connected to host: hz-01-ops-tc-ceph-03
detect platform information from remote host
detect machine type
write cluster configuration to /etc/ceph/{cluster}.conf

Restart the mon service on each mon node:
# systemctl restart ceph-mon.target
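A restart is not strictly required: the new value can usually also be injected into the running monitors at runtime. This only changes the in-memory setting, so the ceph.conf push above is still what persists it across restarts. A sketch, run from the admin node:
# ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 1024'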
3. Check the cluster again from the admin node
# ceph -s
cluster e2ca994a-00c4-477f-9390-ea3f931c5062
health HEALTH_OK
monmap e1: 3 mons at {hz-01-ops-tc-ceph-02=172.16.2.231:6789/0,hz-01-ops-tc-ceph-03=172.16.2.172:6789/0,hz-01-ops-tc-ceph-04=172.16.2.181:6789/0}
election epoch 20, quorum 0,1,2 hz-01-ops-tc-ceph-03,hz-01-ops-tc-ceph-04,hz-01-ops-tc-ceph-02
osdmap e54: 5 osds: 5 up, 5 in
flags sortbitwise,require_jewel_osds
pgmap v1779: 1164 pgs, 3 pools, 14640 kB data, 22 objects
240 MB used, 224 GB / 224 GB avail
1164 active+clean
# ceph --show-config| grep mon_pg_warn_max_per_osd
mon_pg_warn_max_per_osd = 1024
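ceph --show-config generally reflects the defaults merged with the local ceph.conf rather than the live daemon state. To confirm that a running monitor actually picked up the new value, its admin socket can be queried on the mon host itself; this assumes the mon id matches the short hostname, as ceph-deploy sets it up:
# ceph daemon mon.hz-01-ops-tc-ceph-02 config get mon_pg_warn_max_per_osd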