Programmatic Access to Ceph with librados

Introduction

I need to access Ceph's object store directly from code, to compare performance with and without a gateway. The gateway-based access example is already working; the next step is to bypass the gateway and talk to the Ceph cluster directly through librados.

Environment Setup

1. Ceph cluster: you need an already-configured Ceph cluster; running ceph -s should show its status.
(Screenshot: output of ceph -s showing the cluster status)

2. Development library installation: my system is CentOS 6.5; install the C/C++ development package with the following command:

sudo yum install librados2-devel
After a successful installation, you can find the corresponding header files under /usr/include/rados.
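To verify that the headers and library are usable, a quick sanity check is to print the version of the installed librados. This is a minimal sketch (the file name version_check.cxx is just an example):

#include <rados/librados.hpp>
#include <iostream>

int main()
{
    int major, minor, extra;
    // librados::version() fills in the version of the linked librados
    librados::version(&major, &minor, &extra);
    std::cout << "librados version: " << major << "."
              << minor << "." << extra << std::endl;
    return 0;
}

Compile and run with: g++ version_check.cxx -lrados -o version_check && ./version_check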

Sample Program

The sample program below comes from the official documentation; see the librados intro:
http://docs.ceph.com/docs/master/rados/api/librados-intro/

#include <rados/librados.hpp>
#include <iostream>   // std::cout / std::cerr (missing in the original)
#include <cstdlib>    // EXIT_FAILURE
#include <string>
#include <list>

int main(int argc, const char **argv)
{
    int ret = 0;

    // Get a cluster handle and connect to the cluster
    std::cout << "ceph Cluster connect begin." << std::endl;
    std::string cluster_name("ceph");
    std::string user_name("client.admin");
    librados::Rados cluster;
    ret = cluster.init2(user_name.c_str(), cluster_name.c_str(), 0);
    if (ret < 0) {
        std::cerr << "Couldn't initialize the cluster handle! error " << ret << std::endl;
        return EXIT_FAILURE;
    } else {
        std::cout << "Created a cluster handle." << std::endl;
    }

    // Read the Ceph configuration file (monitor addresses, auth settings, etc.)
    ret = cluster.conf_read_file("/etc/ceph/ceph.conf");
    if (ret < 0) {
        std::cerr << "Couldn't read the Ceph configuration file! error " << ret << std::endl;
        return EXIT_FAILURE;
    } else {
        std::cout << "Read the Ceph configuration file." << std::endl;
    }

    ret = cluster.connect();
    if (ret < 0) {
        std::cerr << "Couldn't connect to cluster! error " << ret << std::endl;
        return EXIT_FAILURE;
    } else {
        std::cout << "Connected to the cluster." << std::endl;
    }
    std::cout << "ceph Cluster connect end." << std::endl;

    // Create an I/O context for the pool "pool-1"
    std::cout << "ceph Cluster create io context for pool begin." << std::endl;
    librados::IoCtx io_ctx;
    std::string pool_name("pool-1");
    ret = cluster.ioctx_create(pool_name.c_str(), io_ctx);
    if (ret < 0) {
        std::cerr << "Couldn't set up ioctx! error " << ret << std::endl;
        cluster.shutdown();
        return EXIT_FAILURE;
    } else {
        std::cout << "Created an ioctx for the pool." << std::endl;
    }
    std::cout << "ceph Cluster create io context for pool end." << std::endl;

    // Write an object synchronously
    std::cout << "Write an object synchronously begin." << std::endl;
    librados::bufferlist bl;
    std::string objectId("hw");
    std::string objectContent("Hello World!");
    bl.append(objectContent);
    ret = io_ctx.write_full(objectId, bl);
    if (ret < 0) {
        std::cerr << "Couldn't write object! error " << ret << std::endl;
        cluster.shutdown();
        return EXIT_FAILURE;
    } else {
        std::cout << "Wrote new object 'hw' " << std::endl;
    }
    std::cout << "Write an object synchronously end." << std::endl;

    // Add an xattr to the object
    librados::bufferlist lang_bl;
    lang_bl.append("en_US");
    io_ctx.setxattr(objectId, "lang", lang_bl);

    // Read the object back asynchronously
    librados::bufferlist read_buf;
    int read_len = 4194304;  // 4 MiB, large enough for the whole object

    // Create an I/O completion to be notified when the read finishes
    librados::AioCompletion *read_completion = librados::Rados::aio_create_completion();

    // Send the read request
    io_ctx.aio_read(objectId, read_completion, &read_buf, read_len, 0);

    // Wait for the request to complete and check its result
    read_completion->wait_for_complete();
    ret = read_completion->get_return_value();
    read_completion->release();  // free the completion (leaked in the original)
    if (ret < 0) {
        std::cerr << "Couldn't read object! error " << ret << std::endl;
        cluster.shutdown();
        return EXIT_FAILURE;
    }
    std::cout << "Object name: " << objectId << "\n"
              << "Content: " << read_buf.c_str() << std::endl;

    // Read the xattr back
    librados::bufferlist lang_res;
    io_ctx.getxattr(objectId, "lang", lang_res);
    std::cout << "Object xattr: " << lang_res.c_str() << std::endl;

    // Print the list of pools
    std::list<std::string> pools;
    cluster.pool_list(pools);
    std::cout << "List of pools from this cluster handle" << std::endl;
    for (std::list<std::string>::iterator i = pools.begin(); i != pools.end(); ++i)
        std::cout << *i << std::endl;

    // Print the list of objects in the pool
    librados::ObjectIterator oit = io_ctx.objects_begin();
    librados::ObjectIterator oet = io_ctx.objects_end();
    std::cout << "List of objects from this pool" << std::endl;
    for (; oit != oet; ++oit) {
        std::cout << "\t" << oit->first << std::endl;
    }

    // Remove the xattr
    io_ctx.rmxattr(objectId, "lang");

    // Remove the object
    io_ctx.remove(objectId);

    // Cleanup
    io_ctx.close();
    cluster.shutdown();
    return 0;
}

Build Commands

g++ -g -c cephclient.cxx -o cephclient.o
g++ -g cephclient.o -lrados -o cephclient
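
The two steps can also be combined into a single command. Note that running the binary requires /etc/ceph/ceph.conf on the local host (and the client.admin keyring, if cephx authentication is enabled), since the program reads both at startup:

g++ -g cephclient.cxx -lrados -o cephclient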

Output

[root@gnop029-ct-zhejiang_wenzhou-16-34 ceph-rados]# ./cephclient 
ceph Cluster connect begin.
Created a cluster handle.
Read the Ceph configuration file.
Connected to the cluster.
ceph Cluster connect end.
ceph Cluster create io context for pool begin.
Created an ioctx for the pool.
ceph Cluster create io context for pool end.
Write an object synchronously begin.
Wrote new object 'hw' 
Write an object synchronously end.
Object name: hw
Content: Hello World!
Object xattr: en_US
List of pools from this cluster handle
rbd
pool-1
pool-2
.rgw
.rgw.root
.rgw.control
.rgw.gc
.rgw.buckets
.rgw.buckets.index
.log
.intent-log
.usage
.users
.users.email
.users.swift
.users.uid
List of objects from this pool
rb.0.d402.238e1f29.00000000ee00
rb.0.d402.238e1f29.000000015000
rb.0.d402.238e1f29.00000000fa2f
rb.0.d402.238e1f29.00000001ac00
rb.0.d402.238e1f29.000000012000

API Summary

The sample code exercises the main librados interfaces:
1. Cluster handle creation
2. Cluster connection
3. I/O context initialization
4. Object read and write (a sketch of the asynchronous write path follows this list)
5. I/O context close
6. Cluster handle shutdown
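
The sample writes synchronously and reads asynchronously; librados also supports asynchronous writes. Below is a minimal sketch (not part of the original sample) using aio_write_full, assuming an IoCtx set up exactly as above; wait_for_safe() blocks until the write is durably committed:

#include <rados/librados.hpp>

// Asynchronously write a small object; io_ctx is assumed to have been
// created with cluster.ioctx_create() as in the sample program above.
int write_object_async(librados::IoCtx& io_ctx)
{
    librados::bufferlist write_bl;
    write_bl.append("Hello World, async!");

    librados::AioCompletion *write_completion =
        librados::Rados::aio_create_completion();

    // Queue the full-object write; this call returns immediately
    int ret = io_ctx.aio_write_full("hw_async", write_completion, write_bl);
    if (ret < 0) {
        write_completion->release();
        return ret;
    }

    // Block until the write is safely committed, then collect the result
    write_completion->wait_for_safe();
    ret = write_completion->get_return_value();
    write_completion->release();
    return ret;
}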

Notes

After reading the official documentation I walked through the whole process myself; if anything here is unclear, go straight to the official docs.

The official documentation describes access from C, C++, Java, Python, and PHP.

PS: performance test data to be added.
