Adding multiple OSD storage devices on one machine in an existing cluster
1. Add a new disk to node3. Note: do not format it and do not mount it. (Disk /dev/sdc; a quick blank-disk check is sketched after the df output below.)
[root@node3 ~]# fdisk -lu
Disk /dev/sda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00049415
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    20971519     9436160   8e  Linux LVM
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@node3 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  8.0G  1.5G  6.6G  19% /
devtmpfs                 223M     0  223M   0% /dev
tmpfs                    235M     0  235M   0% /dev/shm
tmpfs                    235M  5.6M  229M   3% /run
tmpfs                    235M     0  235M   0% /sys/fs/cgroup
/dev/sda1               1014M  133M  882M  14% /boot
tmpfs                    235M   24K  235M   1% /var/lib/ceph/osd/ceph-2
tmpfs                     47M     0   47M   0% /run/user/0
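Before zapping, it can be worth confirming that the new disk really is blank: no partitions, no filesystem signatures, no mount points. A minimal check run on node3 is sketched below; the lsblk output shown is illustrative, and on a clean disk the FSTYPE and MOUNTPOINT columns are empty while the mount grep prints nothing.
[root@node3 ~]# lsblk -f /dev/sdc
NAME FSTYPE LABEL UUID MOUNTPOINT
sdc
[root@node3 ~]# mount | grep /dev/sdc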
2. Zap (wipe) the disk, still from the /usr/local/src/ceph directory on the admin node. An optional post-zap check follows the transcript below.
#cd /usr/local/src/ceph
#ceph-deploy disk zap node3 /dev/sdc
[root@node1 ceph]# ceph-deploy disk zap node3 /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap node3 /dev/sdc
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : node3
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : ['/dev/sdc']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdc on node3
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.6.1810 Core
[node3][DEBUG ] zeroing last few blocks of device
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdc
[node3][DEBUG ] --> Zapping: /dev/sdc
[node3][DEBUG ] --> --destroy was not specified, but zapping a whole device will remove the partition table
[node3][DEBUG ] Running command: wipefs --all /dev/sdc
[node3][DEBUG ] stdout: /dev/sdc: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
[node3][DEBUG ] /dev/sdc: 8 bytes were erased at offset 0x27ffffe00 (gpt): 45 46 49 20 50 41 52 54
[node3][DEBUG ] /dev/sdc: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
[node3][DEBUG ] /dev/sdc: calling ioclt to re-read partition table: Success
[node3][DEBUG ] Running command: dd if=/dev/zero of=/dev/sdc bs=1M count=10
[node3][DEBUG ] --> Zapping successful for:
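As an optional sanity check, not part of the original ceph-deploy run, you can verify on node3 that the zap removed every signature. wipefs without --all only lists what it finds, so empty output means the device is clean:
[root@node3 ~]# wipefs /dev/sdc
(no output is expected on a cleanly zapped disk)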
3. Prepare the OSD disk
Run the following on the admin node (node1):
#cd /usr/local/src/ceph
#ceph-deploy osd create node3 --data /dev/sdc
[root@node1 ceph]# ceph-deploy osd create node3 --data /dev/sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create node3 --data /dev/sdc
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf :
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : node3
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/sdc
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdc
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to node3
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdc
[node3][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[node3][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f1a29b24-eabe-47eb-b6e3-3dd7d5e0027c
[node3][DEBUG ] Running command: vgcreate --force --yes ceph-34063643-20a0-497f-a919-ca11f46ba910 /dev/sdc
[node3][DEBUG ] stdout: Physical volume "/dev/sdc" successfully created.
[node3][DEBUG ] stdout: Volume group "ceph-34063643-20a0-497f-a919-ca11f46ba910" successfully created
[node3][DEBUG ] Running command: lvcreate --yes -l 100%FREE -n osd-block-f1a29b24-eabe-47eb-b6e3-3dd7d5e0027c ceph-34063643-20a0-497f-a919-ca11f46ba910
[node3][DEBUG ] stdout: Logical volume "osd-block-f1a29b24-eabe-47eb-b6e3-3dd7d5e0027c" created.
[node3][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[node3][DEBUG ] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
[node3][DEBUG ] Running command: restorecon /var/lib/ceph/osd/ceph-3
[node3][DEBUG ] Running command: chown -h ceph:ceph /dev/ceph-34063643-20a0-497f-a919-ca11f46ba910/osd-block-f1a29b24-eabe-47eb-b6e3-3dd7d5e0027c
[node3][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-3
[node3][DEBUG ] Running command: ln -s /dev/ceph-34063643-20a0-497f-a919-ca11f46ba910/osd-block-f1a29b24-eabe-47eb-b6e3-3dd7d5e0027c /var/lib/ceph/osd/ceph-3/block
[node3][DEBUG ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
[node3][DEBUG ] stderr: got monmap epoch 1
[node3][DEBUG ] Running command: ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQCIY1tdzxZVFRAAwoHaI14E7dMaSzHREbPllA==
[node3][DEBUG ] stdout: creating /var/lib/ceph/osd/ceph-3/keyring
[node3][DEBUG ] added entity osd.3 auth auth(auid = 18446744073709551615 key=AQCIY1tdzxZVFRAAwoHaI14E7dMaSzHREbPllA== with 0 caps)
[node3][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
[node3][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
[node3][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid f1a29b24-eabe-47eb-b6e3-3dd7d5e0027c --setuser ceph --setgroup ceph
[node3][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sdc
[node3][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[node3][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-34063643-20a0-497f-a919-ca11f46ba910/osd-block-f1a29b24-eabe-47eb-b6e3-3dd7d5e0027c --path /var/lib/ceph/osd/ceph-3
[node3][DEBUG ] Running command: ln -snf /dev/ceph-34063643-20a0-497f-a919-ca11f46ba910/osd-block-f1a29b24-eabe-47eb-b6e3-3dd7d5e0027c /var/lib/ceph/osd/ceph-3/block
[node3][DEBUG ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
[node3][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-3
[node3][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[node3][DEBUG ] Running command: systemctl enable ceph-volume@lvm-3-f1a29b24-eabe-47eb-b6e3-3dd7d5e0027c
[node3][DEBUG ] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node3][DEBUG ] Running command: systemctl enable --runtime ceph-osd@3
[node3][DEBUG ] stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node3][DEBUG ] Running command: systemctl start ceph-osd@3
[node3][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 3
[node3][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdc
[node3][INFO ] checking OSD status...
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node3 is now ready for osd use.
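Optionally, the new daemon can also be inspected on node3 itself. The osd ID 3 below comes from the ceph-volume log above, and ceph-volume lvm list shows which logical volume backs it (output omitted here):
[root@node3 ~]# systemctl status ceph-osd@3
[root@node3 ~]# ceph-volume lvm list /dev/sdc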
4. Verify that the OSD was added and started successfully
[root@node1 ceph]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.06825 root default
-3       0.01949     host node1
 0   hdd 0.01949         osd.0       up  1.00000 1.00000
-5       0.01949     host node2
 1   hdd 0.01949         osd.1       up  1.00000 1.00000
-7       0.02928     host node3
 2   hdd 0.01949         osd.2       up  1.00000 1.00000
 3   hdd 0.00980         osd.3       up  1.00000 1.00000
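For a broader view than the tree, overall cluster health and per-OSD utilization can also be checked from the admin node. These are standard Ceph commands; the actual output depends on your cluster, so it is not reproduced here:
[root@node1 ceph]# ceph -s
[root@node1 ceph]# ceph osd df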