Learning Ceph: Mounting a Ceph RBD Image as a Local Device

This blog is mirrored on CSDN; the original blog is at airheaven.cn.


Original URL of this article: http://115.29.141.2/2016/01/11/ceph%E5%AD%A6%E4%B9%A0-ceph-rbd-%E4%BD%9C%E4%B8%BA%E8%AE%BE%E5%A4%87%E6%8C%82%E8%BD%BD%E5%88%B0%E6%9C%AC%E5%9C%B0/


Ceph provides block storage through RBD, and sometimes we want to mount an RBD image on a local machine and use it as an ordinary filesystem. How do we go about that?

1. Create an image in the Ceph cluster to serve as the disk
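The commands below assume a pool named test-pool already exists. If it does not, a minimal sketch of creating one (the placement-group count of 64 is only an assumption for a small test cluster; tune it for yours):

# create the pool that will hold the RBD image
ceph osd pool create test-pool 64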

root@ceph3:~# rbd create test-image --size 256 --pool test-pool
root@ceph3:~# rbd ls test-pool
test-image
root@ceph3:/test# rbd --image test-image info --pool test-pool
rbd image 'test-image':
    size 256 MB in 64 objects
    order 22 (4096 kB objects)
    block_name_prefix: rb.0.1023.6b8b4567
    format: 1
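Here, order 22 means each backing RADOS object is 2^22 bytes = 4 MB, so the object count is simply the image size divided by the object size: 256 MB / 4 MB = 64 objects.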

The command to delete an image; we remove test-image here so it can be recreated with a larger size in the next step:

root@ceph3:~# rbd rm test-image -p test-pool
Removing image: 100% complete...done.
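Note that the image above was created as format 1, which newer Ceph releases deprecate in favour of format 2. If your version still defaults to format 1, you can request format 2 explicitly; a sketch:

# create a format-2 image, which supports layering/cloning and other newer features
rbd create test-image --size 256 --pool test-pool --image-format 2

Keep in mind that older kernel RBD clients may not support every format-2 feature, so check feature compatibility before mapping such an image.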

2. Load the kernel module and map the RBD image to a device

Sometimes we need to mount an image locally and modify its contents, which is what the map operation is for. First load the rbd module into the kernel, then recreate the image (1024 MB this time) and map it:

root@ceph3:~# modprobe rbd
root@ceph3:~# rbd create test-image --size 1024 --pool test-pool
root@ceph3:~# rbd map test-image --pool test-pool --id admin 
root@ceph3:~# rbd showmapped 
id pool      image      snap device    
0  test-pool test-image -    /dev/rbd0
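A mapping created this way does not survive a reboot. Ceph ships an rbdmap helper that re-maps images listed in /etc/ceph/rbdmap at boot; a minimal sketch, assuming the admin keyring is in its default location and your distribution provides the rbdmap service:

# /etc/ceph/rbdmap -- one "pool/image" per line, followed by rbd map options
test-pool/test-image id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

Enable the rbdmap init script or systemd unit (whichever your distribution uses) so the image is mapped automatically after a reboot.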

Format /dev/rbd0 as ext4 and mount it on /test:

root@ceph3:/dev/rbd# mkfs.ext4 /dev/rbd0 
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

root@ceph3:/dev/rbd# mkdir /test
root@ceph3:/dev/rbd# mount /dev/rbd0 /test
root@ceph3:/dev/rbd# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        14G  3.9G  9.2G  30% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            480M  4.0K  480M   1% /dev
tmpfs            98M  1.5M   97M   2% /run
none            5.0M     0  5.0M   0% /run/lock
none            490M  152K  490M   1% /run/shm
none            100M   44K  100M   1% /run/user
/dev/rbd0       976M  1.3M  908M   1% /test
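The df output shows the 1024 MB image mounted at /test. If the image later turns out to be too small, it can be grown while mounted; a sketch (the new size of 2048 MB is just an example):

# grow the RBD image, then grow the ext4 filesystem to fill it
rbd resize --size 2048 test-pool/test-image
resize2fs /dev/rbd0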

Change into /test and create a file:

root@ceph3:/dev/rbd# cd /test
root@ceph3:/test# echo "hello world" > hello.txt
root@ceph3:/test# ls
hello.txt  lost+found
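To confirm that data written through the filesystem really ends up as RADOS objects, list the objects in the pool; the data objects share the block_name_prefix reported by rbd info:

rados -p test-pool ls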


A follow-up post will run performance tests against this device.
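For completeness, here is a sketch of tearing the setup down once the experiments are finished (the final rbd rm deletes the image and its data, so skip it if you want to keep them):

umount /test
rbd unmap /dev/rbd0
rbd rm test-image -p test-pool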

