Clearing the "too many PGs per OSD" warning on a Ceph storage cluster

[root@xxxxxxxxxxxxxx ~]# ceph -s
    cluster 334cfe7e-9ccc-483d-8d2c-218fde3a5fde
     health HEALTH_WARN
            too many PGs per OSD (307 > max 300)
            nodeep-scrub flag(s) set
     monmap e1: 3 mons at {node1=100.88.28.11:6789/0,node2=100.88.28.12:6789/0,node3=100.88.28.13:6789/0}
            election epoch 42, quorum 0,1,2 xxxxxx,xxxxxxx,xxxxxx
     osdmap e18318: 30 osds: 30 up, 30 in
            flags nodeep-scrub,sortbitwise,require_jewel_osds
      pgmap v177543357: 3072 pgs, 2 pools, 4910 GB data, 673 kobjects
            14752 GB used, 35506 GB / 50259 GB avail
                3072 active+clean
  client io 0 B/s rd, 57589 B/s wr, 8 op/s rd, 8 op/s wr
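
The figure in the warning is the PG-per-OSD ratio: the total number of placement group replicas divided by the number of OSDs. As a rough check, assuming both pools use the default replica size of 3 (the pool size is not shown in the output above):

ceph osd dump | grep 'replicated size'   # shows size and pg_num for each pool
echo $(( 3072 * 3 / 30 ))                # 3072 PGs x 3 replicas / 30 OSDs = 307, matching "307 > max 300"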

Log in to a monitor node and raise the warning threshold through its admin socket:

ceph daemon mon.xxxxxxxx config set mon_pg_warn_max_per_osd 312
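
Note that this only changes the in-memory value on the one monitor whose admin socket you are talking to, and it is lost when that monitor restarts. A sketch of how to push the same value to all monitors at runtime and persist it across restarts (the config-file section shown here is an assumption):

ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 312'

# /etc/ceph/ceph.conf on each monitor host
[global]
mon_pg_warn_max_per_osd = 312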

Check the status again to verify:

ceph -s
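
With the threshold raised above the current ratio of 307, the "too many PGs per OSD" line should no longer appear under health. The running value can also be read back through the same admin socket:

ceph daemon mon.xxxxxxxx config get mon_pg_warn_max_per_osd

The nodeep-scrub flag shown in the earlier output is a separate warning and is not affected by this change.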
