Ceph 17: deploying a single-node 17.2 cluster with cephadm
A record of deploying a single-node Ceph 17.2.0 cluster.
One virtual machine was prepared with three 20 GB disks attached as the OSD media; lsblk shows:
sda                         8:0    0    20G  0 disk
├─sda1                      8:1    0     1M  0 part
├─sda2                      8:2    0   1.8G  0 part /boot
└─sda3                      8:3    0  18.2G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0    10G  0 lvm  /
sdb                         8:16   0    20G  0 disk
sdc                         8:32   0    20G  0 disk
sdd                         8:48   0    20G  0 disk
# Point systemd-resolved at public DNS servers
cat >> /etc/systemd/resolved.conf << EOF
DNS=8.8.8.8 114.114.114.114
EOF
systemctl restart systemd-resolved.service
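To confirm that systemd-resolved picked up the new servers, the resolver can be checked (resolvectl is part of systemd on Ubuntu 22.04; the quay.io lookup is only an example, chosen because the Ceph container images are pulled from there):
# Show the DNS servers currently in use
resolvectl status
# Test a lookup against the registry the Ceph images come from
resolvectl query quay.io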
# Time synchronisation with chrony
sudo apt install -y chrony && sudo systemctl enable --now chronyd
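chronyc can then confirm that time synchronisation is actually working:
# List NTP sources; the selected one is marked with ^*
chronyc sources
# Show the current offset from the selected source
chronyc tracking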
# Stop and disable the ufw firewall
systemctl stop ufw
systemctl disable ufw
# Remove any previously installed Docker packages, then install docker-ce from the Aliyun mirror
sudo apt-get purge docker-ce docker-ce-cli containerd.io
sudo rm -rf /var/lib/docker
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get -y update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io
systemctl status docker
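The docker-ce package normally enables and starts the service by itself; if the status above shows it inactive, it can be enabled explicitly:
sudo systemctl enable --now docker
docker --version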
# ceph-volume creates OSDs on top of LVM, so lvm2 is required
apt install lvm2 -y
# Install cephadm
sudo apt install -y cephadm
# Installation result
cephadm is already the newest version (17.2.0-0ubuntu0.22.04.2)
cephadm bootstrap --mon-ip 192.168.150.37 --cluster-network 192.168.150.0/24 --single-host-defaults
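The bootstrap output ends with the dashboard URL and an initial admin password. Before the client tools are installed, the cluster can already be inspected through the containerised shell that cephadm provides, for example:
# Run a one-off ceph command inside the cephadm container
cephadm shell -- ceph -s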
# Install the Ceph client tools so ceph commands can be run directly on the host
apt install ceph-common
# Pin the number of mons to 1
ceph orch apply mon 1
# Pin the number of mgrs to 1
ceph orch apply mgr 1
# List the services managed by the orchestrator
ceph orch ls
# Output
NAME           PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager   ?:9093,9094      1/1  9m ago     34m  count:1
crash                           1/1  9m ago     34m  *
grafana        ?:3000           1/1  9m ago     34m  count:1
mgr                             2/1  9m ago     3s   count:1
mon                             1/1  9m ago     4s   count:1
node-exporter  ?:9100           1/1  9m ago     34m  *
prometheus     ?:9095           1/1  9m ago     34m  count:1
The mgr line shows 2/1 because cephadm deploys two mgr daemons by default; the orchestrator removes the extra one shortly after the count:1 spec is applied.
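Before deploying OSDs, it is worth checking which disks the orchestrator considers available:
# List the devices on each host and whether they can be used for OSDs
ceph orch device ls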
# Deploy OSDs on all available devices
ceph orch apply osd --all-available-devices
# Check the deployment result
ceph osd status
# Output
ID  HOST    USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  ceph01  22.5M  19.9G       0        0       0        0  exists,up
 1  ceph01  22.5M  19.9G       0        0       0        0  exists,up
 2  ceph01  19.4M  19.9G       0        0       0        0  exists,up
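For the CRUSH view and per-OSD utilisation, the following can also be used:
ceph osd tree
ceph osd df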
# Create a test pool with 64 PGs
ceph osd pool create test 64 64
ceph osd pool ls
# Output
.mgr
test
# Overall cluster status
ceph -s
  cluster:
    id:     672a7e5a-9642-11ed-b356-c34fd8a37286
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph01 (age 38m)
    mgr: ceph01.nokwhs(active, since 34m)
    osd: 3 osds: 3 up (since 119s), 3 in (since 2m)

  data:
    pools:   2 pools, 65 pgs
    objects: 2 objects, 577 KiB
    usage:   64 MiB used, 60 GiB / 60 GiB avail
    pgs:     65 active+clean
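The test pool created above has no application tag yet, which eventually raises a POOL_APP_NOT_ENABLED health warning; tagging it (rbd is used here only as an example) and writing a small object with rados verifies that I/O works. The object and file names below are placeholders:
# Tag the pool so ceph stops warning about a missing application
ceph osd pool application enable test rbd
# Write a test object and list the pool contents
echo hello > /tmp/hello.txt
rados -p test put hello /tmp/hello.txt
rados -p test ls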
If the deployment goes wrong, the commands below remove the cluster so it can be redeployed from scratch:
# Pause orchestrator background activity
ceph orch pause
# Look up the cluster fsid, then remove the cluster and zap the OSD disks
ceph fsid
cephadm rm-cluster --force --zap-osds --fsid <fsid>
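The fsid can also be substituted inline, assuming the cluster still answers ceph commands at this point:
cephadm rm-cluster --force --zap-osds --fsid $(ceph fsid)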
A quick write-up, kept as a reference for future deployments.