Compiling Ceph from source on Red Hat 6.4

1. Background

A project at one of our sites requires Ceph to support Red Hat 6.4, so the source has to be compiled, and the RPM packages built, on a Red Hat 6.4 Basic Server installation.

2. Getting the Ceph source

There are several ways to obtain the Ceph source: clone it from the Git repository or download it from the official site. Note that by default a clone only fetches the main repository into the working directory and does not recursively clone the Ceph submodules, which often leads to missing-module errors at build time.

Here the source is fetched from the Git repository:
1. Clone the Ceph repository (main branch)

git clone --recursive https://github.com/ceph/ceph.git

2. Check out the Red Hat jewel branch

git checkout -b local origin/rh-jewel
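
If the repository was cloned without --recursive, the submodule code can still be pulled in afterwards; a minimal sketch, assuming the clone lives in ./ceph:

cd ceph
# fetch every submodule referenced by .gitmodules
git submodule update --init --recursive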

Alternatively, a prepackaged tarball can be downloaded from the official Ceph site:

http://download.ceph.com/tarballs/


3. Compiling the source

Before building RPMs we need to make sure the build environment works, so we compile the Ceph source first; the code used here is 10.2.5.
Run:

./autogen.sh

Install libtool as prompted:

yum install libtool

Then run:

./configure --with-radosgw

Install gcc-c++ as prompted:

yum install gcc-c++

Running configure again fails with an error (screenshot omitted): the Jewel code base requires C++11 support, so gcc has to be upgraded.

Download and build gcc 4.8.1:

wget http://ftp.gnu.org/gnu/gcc/gcc-4.8.1/gcc-4.8.1.tar.gz
tar xzf gcc-4.8.1.tar.gz
cd gcc-4.8.1
./contrib/download_prerequisites    # fetch the prerequisite libraries
cd ..
mkdir build_gcc4.8.1
cd build_gcc4.8.1
../gcc-4.8.1/configure --prefix=/usr --enable-checking=release --enable-languages=c,c++ --disable-multilib
make -j4
make install

At this point the gcc upgrade is complete and we can try building Ceph again.
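
A quick sanity check that the new compiler is the one on the PATH and that it accepts C++11 (a sketch; the exact version string may differ):

gcc --version | head -n1                                   # should report 4.8.1
echo 'int main() { auto x = 0; return x; }' > /tmp/cxx11.cpp
g++ -std=c++11 /tmp/cxx11.cpp -o /tmp/cxx11 && echo "C++11 OK"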

Back in the Ceph source directory, running ./configure --with-radosgw again reports an error (screenshot omitted): the Cython package is missing.
Before installing Cython, Python 2.7 is required; the system ships with Python 2.6 by default, so Python has to be upgraded.
Download the EPEL and IUS repository packages:

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm 
wget https://rhel6.iuscommunity.org/ius-release.rpm

Install the repositories:

rpm -Uvh epel-release-latest-6.noarch.rpm ius-release.rpm
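
A quick check that the new repositories are visible to yum (a sketch):

yum clean all
yum repolist | grep -Ei 'epel|ius'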

Re-running configure still reports quite a few missing packages, which are not listed one by one here. On my system the following packages were installed:

yum install snappy-devel leveldb-devel libblkid-devel expat-devel keyutils-libs-devel cryptopp cryptopp-devel fuse-devel fuse-libs libuuid-devel libedit-devel libatomic_ops-devel libatomic libaio-devel xfsprogs-devel openldap-devel fuse-python libudev libudev-devel boost boost-devel openssl openssl-devel libcurl libidn-devel curl fcgi-devel fcgi libcurl-devel

Note that the Boost version required here is 1.53.
After the Boost packages are installed, the shared library is named libboost_random.so.1.53.0, but configure probes for libboost_random-mt.so, so symlinks have to be created:

ln -s /usr/lib64/libboost_random.so.1.53.0 /usr/lib64/libboost_random-mt.so 
ln -s /usr/lib64/libboost_random.so.1.53.0 /usr/lib64/libboost_random.so
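
If configure complains about other missing -mt variants, the same symlink pattern can be applied in a loop; a sketch, assuming the Boost 1.53 libraries live under /usr/lib64:

for lib in /usr/lib64/libboost_*.so.1.53.0; do
    base=$(basename "$lib" .so.1.53.0)          # e.g. libboost_random
    ln -sf "$lib" "/usr/lib64/${base}-mt.so"    # the name configure probes for
    ln -sf "$lib" "/usr/lib64/${base}.so"
done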

With Boost in place, rebuild Ceph.

Run ./configure --with-radosgw again and then make the source. Before that, note rbd-nbd: the Red Hat 6.4 kernel does not support NBD, so building rbd-nbd requires upgrading the kernel to 3.10 or later:

rpm -Uvh kernel-3.10.105-1.el6.x86_64.rpm
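
After rebooting into the new kernel, NBD support can be verified before rebuilding (a sketch):

uname -r          # should show 3.10.105-1.el6.x86_64
modprobe nbd      # load the nbd module
ls /dev/nbd0      # the device nodes appear once the module is loaded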

However, the build fails again with "gcc rados.c cephfs.c No such file", which is puzzling; the output mentions the keyword pybind.

A quick search shows that the bindings under pybind are translated from Python to C by Cython and then compiled; rados.c and cephfs.c are the generated files. The build output also mentions python2.6 everywhere. Didn't we just upgrade 2.6 to 2.7, so why is it still picking up 2.6?

So the decision was made to remove the Python 2.7 RPMs and install Python 2.7 from source instead.

rpm -e python27 python27-libs

Build and install Python 2.7 from source:

tar xvf Python-2.7.13.tgz 
./configure --prefix=/usr/local/python2.7 --enable-shared

Adding --enable-shared makes the build produce the shared libraries libpython2.7.so and libpython2.7.so.1.0 in addition to the static libpython2.7.a (.so is the dynamic library, .a the static one).

Edit the Modules/Setup file:

vi Modules/Setup

# Socket module helper for SSL support; you must comment out the other
# socket line above, and possibly edit the SSL variable:
#SSL=/usr/local/ssl
_ssl _ssl.c \
        -DUSE_SSL -I$(SSL)/include -I$(SSL)/include/openssl \
        -L$(SSL)/lib -lssl -lcrypto

This block is commented out by default; uncommenting it enables the SSL module. Without it, the installed Python later complains that the ssl module cannot be found.

Run make, adding -j4 to use multiple cores:

make -j4
make install

Switch the system from 2.6 to 2.7 with symlinks:

ln -sf /usr/local/python2.7/bin/python /usr/bin/python  
ln -sf /usr/local/python2.7/lib/libpython2.7.so /usr/lib64/libpython2.7.so
ln -sf /usr/local/python2.7/lib/libpython2.7.so.1.0 /usr/lib64/libpython2.7.so.1.0

Add Python 2.7 to the environment variables:

vi /etc/profile

export PYTHON_HOME=/usr/local/python2.7/bin
export PATH=$PATH:$PYTHON_HOME

Apply the changes:

source /etc/profile
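
A quick check that the new interpreter and its SSL module are picked up (a sketch):

python -V                                            # should report Python 2.7.13
python -c "import ssl; print(ssl.OPENSSL_VERSION)"   # fails if Modules/Setup was not edited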

Install setuptools:

wget --no-check-certificate https://pypi.python.org/packages/41/5f/6da80400340fd48ba4ae1c673be4dc3821ac06cd9821ea60f9c7d32a009f/setuptools-38.4.0.zip#md5=3426bbf31662b4067dc79edc0fa21a2e

Unpack it, enter the setuptools directory, and run:

python setup.py install

Install pip:

wget https://pypi.python.org/packages/11/b6/abcb525026a4be042b486df43905d6893fb04f05aac21c32c638e939e447/pip-9.0.1.tar.gz#md5=35f01da33009719497f01a4ba69d63c9

With setuptools installed, enter the unpacked pip-9.0.1 directory and run:

python setup.py install

After that, running make again reports that the Cython.Build module cannot be found, so install Cython:

pip install distribute 
pip install Cython 
ln -s /usr/local/bin/sphinx-build /usr/local/bin/sphinx-1.0-build
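
A quick check that the tools configure and make look for are now in place (a sketch):

which cython
python -c "import Cython; print(Cython.__version__)"
python -c "from Cython.Build import cythonize; print('Cython.Build OK')"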

Build and install Ceph:

make -j4 
make install

Configure the environment variables:
vi /etc/profile

export CEPH_HOME=/usr/local/ceph/bin
export PATH=$PATH:$PYTHON_HOME:$CEPH_HOME

Copy the library files:
cp -R /usr/local/ceph/lib/* /usr/lib64/
cp -R /usr/local/ceph/lib/python2.7/site-packages/* /usr/local/python2.7/lib/python2.7/site-packages 
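
An alternative to copying the shared libraries into /usr/lib64 is to register the install directory with the dynamic linker; a sketch, assuming the default /usr/local/ceph prefix:

echo /usr/local/ceph/lib > /etc/ld.so.conf.d/ceph.conf
ldconfig                        # rebuild the linker cache
ldconfig -p | grep librados     # confirm librados is now resolvable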

Run make again, and this time it finally succeeds.

All installed Python packages:

Package          Version  
---------------- ---------
ceph-detect-init 1.0.1    
ceph-disk        1.0.0    
cephfs           0        
certifi          2018.8.24
chardet          3.0.4    
click            6.7      
Cython           0.28.5   
Flask            1.0.2    
idna             2.7      
itsdangerous     0.24     
Jinja2           2.10     
MarkupSafe       1.0      
pip              18.0     
pluggy           0.7.1    
py               1.6.0    
rados            0        
rbd              0        
requests         2.19.1   
setuptools       38.4.0   
six              1.11.0   
toml             0.9.6    
tox              3.3.0    
urllib3          1.23     
virtualenv       16.0.0   
Werkzeug         0.14.1 

4. Building the RPM packages

Once the source builds successfully, the environment is ready for RPM packaging.
Enter the ceph directory and run:

make dist-bzip2


When it finishes, the corresponding .tar.bz2 tarball is generated.

Prepare the rpmbuild working directories:

mkdir ~/rpmbuild/ ~/rpmbuild/BUILD ~/rpmbuild/BUILDROOT ~/rpmbuild/RPMS ~/rpmbuild/SOURCES ~/rpmbuild/SPECS ~/rpmbuild/SRPMS

Prepare the build inputs:

cp ceph.tar.bz2 ~/rpmbuild/SOURCES 
cp ceph/rpm/init-ceph.in-fedora.patch ~/rpmbuild/SOURCES 
cp ceph/ceph.spec ~/rpmbuild/SPECS

Build the RPMs:

rpmbuild -bb ~/rpmbuild/SPECS/ceph.spec

rpmbuild will report many missing dependencies. The Boost libraries are already installed from source, so simply comment out the corresponding BuildRequires lines in ceph.spec; also comment out policyhelp, and the packaging goes through (see the sketch below).
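
A sketch of commenting out the offending lines with sed (the exact BuildRequires text differs between spec versions, so adjust the patterns as needed):

# comment out any Boost BuildRequires and the policyhelp line in the spec
sed -i 's/^\(BuildRequires:.*boost.*\)/#\1/' ~/rpmbuild/SPECS/ceph.spec
sed -i 's/^\(.*policyhelp.*\)/#\1/' ~/rpmbuild/SPECS/ceph.spec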

Various RPM dependencies can be found at https://fedora.pkgs.org

5. Errors and solutions


### cython not found (fix: pip install Cython)
configure: error: in `/home/ceph/ceph-10.2.11':
configure: error: cython not found
See `config.log' for more details

### no suitable crypto library found (fix: rpm -ivh cryptopp*)

configure: error: in `/home/ceph/ceph-10.2.11':
configure: error: no suitable crypto library found
See `config.log' for more details


### no tcmalloc found (fix: ./configure --without-tcmalloc)

configure: error: in `/home/ceph/ceph-10.2.11':
configure: error: no tcmalloc found (use --without-tcmalloc to disable)
See `config.log' for more details




### Insufficient memory (cc1plus killed)

{standard input}:228141: Warning: end of file not at end of a line; newline inserted
{standard input}:228511: Error: open CFI at the end of file; missing .cfi_endproc directive
g++: internal compiler error: Killed (program cc1plus)
0x409a71 execute
    ../.././gcc/gcc.c:2823
Please submit a full bug report,
with preprocessed source if appropriate.
Please include the complete backtrace with any bug report.
See  for instructions.
make[3]: *** [mon/Monitor.lo] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory `/data/ceph/ceph-10.2.11/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/data/ceph/ceph-10.2.11/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/data/ceph/ceph-10.2.11/src'
make: *** [all-recursive] Error 1


### zlib.h not found (fix: install zlib from source)

./make_version -g ./.git_version
This is no git repository, not updating .git_version
  CXX      compressor/zlib/libceph_zlib_la-CompressionZlib.lo
compressor/zlib/CompressionZlib.cc:21:18: fatal error: zlib.h: No such file or directory
 #include <zlib.h>
                  ^
compilation terminated.
make[3]: *** [compressor/zlib/libceph_zlib_la-CompressionZlib.lo] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory `/data/ceph/ceph-10.2.11/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/data/ceph/ceph-10.2.11/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/data/ceph/ceph-10.2.11/src'
make: *** [all-recursive] Error 1


### python2.7 command missing (fix: create the python2.7 symlink)

/bin/sh: /tmp/ceph-disk-virtualenv/bin/pip: No such file or directory
make[3]: *** [/tmp/ceph-disk-virtualenv] Error 127
make[3]: *** Waiting for unfinished jobs....
  Downloading https://files.pythonhosted.org/packages/f2/94/3af39d34be01a24a6e65433d19e107099374224905f1e0cc6bbe1fd22a2f/argparse-1.4.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): argparse from https://files.pythonhosted.org/packages/f2/94/3af39d34be01a24a6e65433d19e107099374224905f1e0cc6bbe1fd22a2f/argparse-1.4.0-py2.py3-none-any.whl#sha256=c31647edb69fd3d465a847ea3157d37bed1f95f19760b11a47aa91c04b666314 in /opt/rh/python27/root/usr/lib/python2.7/site-packages (from -r requirements.txt (line 1))
/bin/sh: /tmp/ceph-detect-init-virtualenv/bin/pip: No such file or directory
make[3]: *** [/tmp/ceph-detect-init-virtualenv] Error 127
make[3]: Leaving directory `/data/ceph/ceph-10.2.11/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/data/ceph/ceph-10.2.11/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/data/ceph/ceph-10.2.11/src'
make: *** [all-recursive] Error 1


### Boost version must be 1.53 (a newer locally installed Boost breaks the build)


  CXX      rbd_replay/Replayer.lo
  CXX      rbd_replay/ios.lo
rbd_replay/Replayer.cc: In member function ‘void rbd_replay::Replayer::wait_for_actions(const Dependencies&)’:
rbd_replay/Replayer.cc:342:141: error: no matching function for call to ‘boost::date_time::subsecond_duration<  , 1000000l>::subsecond_duration(float)’
     boost::system_time sub_release_time(action_completed_time + boost::posix_time::microseconds(dep.time_delta * m_latency_multiplier / 1000));
                                                                                                                                             ^
rbd_replay/Replayer.cc:342:141: note: candidates are:
In file included from /usr/local/include/boost/date_time/posix_time/posix_time_config.hpp:16:0,
                 from /usr/local/include/boost/date_time/posix_time/posix_time_system.hpp:13,
                 from /usr/local/include/boost/date_time/posix_time/ptime.hpp:12,
                 from /usr/local/include/boost/date_time/posix_time/posix_time_types.hpp:12,
                 from /usr/local/include/boost/thread/thread_time.hpp:11,
                 from /usr/local/include/boost/thread/lock_types.hpp:18,
                 from /usr/local/include/boost/thread/pthread/mutex.hpp:16,
                 from /usr/local/include/boost/thread/mutex.hpp:16,
                 from rbd_replay/Replayer.hpp:18,
                 from rbd_replay/Replayer.cc:15:
/usr/local/include/boost/date_time/time_duration.hpp:285:14: note: template boost::date_time::subsecond_duration::subsecond_duration(const T&, typename boost::enable_if, void>::type*)
     explicit subsecond_duration(T const& ss,
              ^
/usr/local/include/boost/date_time/time_duration.hpp:285:14: note:   template argument deduction/substitution failed:
/usr/local/include/boost/date_time/time_duration.hpp: In substitution of ‘template boost::date_time::subsecond_duration::subsecond_duration(const T&, typename boost::enable_if, void>::type*) [with T = float]’:
rbd_replay/Replayer.cc:342:141:   required from here
/usr/local/include/boost/date_time/time_duration.hpp:285:14: error: no type named ‘type’ in ‘struct boost::enable_if, void>’
In file included from /usr/local/include/boost/date_time/posix_time/posix_time_config.hpp:16:0,
                 from /usr/local/include/boost/date_time/posix_time/posix_time_system.hpp:13,
                 from /usr/local/include/boost/date_time/posix_time/ptime.hpp:12,
                 from /usr/local/include/boost/date_time/posix_time/posix_time_types.hpp:12,
                 from /usr/local/include/boost/thread/thread_time.hpp:11,
                 from /usr/local/include/boost/thread/lock_types.hpp:18,
                 from /usr/local/include/boost/thread/pthread/mutex.hpp:16,
                 from /usr/local/include/boost/thread/mutex.hpp:16,
                 from rbd_replay/Replayer.hpp:18,
                 from rbd_replay/Replayer.cc:15:
/usr/local/include/boost/date_time/time_duration.hpp:270:30: note: boost::date_time::subsecond_duration::subsecond_duration(const boost::date_time::subsecond_duration&)
   class BOOST_SYMBOL_VISIBLE subsecond_duration : public base_duration
                              ^
/usr/local/include/boost/date_time/time_duration.hpp:270:30: note:   no known conversion for argument 1 from ‘float’ to ‘const boost::date_time::subsecond_duration&’
/usr/local/include/boost/date_time/time_duration.hpp:270:30: note: boost::date_time::subsecond_duration::subsecond_duration(boost::date_time::subsecond_duration&&)
/usr/local/include/boost/date_time/time_duration.hpp:270:30: note:   no known conversion for argument 1 from ‘float’ to ‘boost::date_time::subsecond_duration&&’
make[3]: *** [rbd_replay/Replayer.lo] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory `/data/ceph_source/ceph-10.2.11/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/data/ceph_source/ceph-10.2.11/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/data/ceph_source/ceph-10.2.11/src'
make: *** [all-recursive] Error 1



#####################################################################


### Upgrade the kernel to support NBD

rpm -Uvh kernel-3.10.105-1.el6.x86_64.rpm



  CXX      tools/rbd_nbd/rbd_nbd-rbd-nbd.o
  CXX      tools/cephfs/cephfs-journal-tool.o
  CXX      tools/cephfs/JournalTool.o
tools/rbd_nbd/rbd-nbd.cc: In member function ‘void NBDServer::reader_entry()’:
tools/rbd_nbd/rbd-nbd.cc:278:14: error: ‘NBD_CMD_FLUSH’ was not declared in this scope
         case NBD_CMD_FLUSH:
              ^
tools/rbd_nbd/rbd-nbd.cc:281:14: error: ‘NBD_CMD_TRIM’ was not declared in this scope
         case NBD_CMD_TRIM:
              ^
tools/rbd_nbd/rbd-nbd.cc: In function ‘std::ostream& operator<<(std::ostream&, const NBDServer::IOContext&)’:
tools/rbd_nbd/rbd-nbd.cc:392:8: error: ‘NBD_CMD_FLUSH’ was not declared in this scope
   case NBD_CMD_FLUSH:
        ^
tools/rbd_nbd/rbd-nbd.cc:395:8: error: ‘NBD_CMD_TRIM’ was not declared in this scope
   case NBD_CMD_TRIM:
        ^
tools/rbd_nbd/rbd-nbd.cc: In function ‘int do_map(int, const char**)’:
tools/rbd_nbd/rbd-nbd.cc:655:11: error: ‘NBD_FLAG_SEND_FLUSH’ was not declared in this scope
   flags = NBD_FLAG_SEND_FLUSH | NBD_FLAG_SEND_TRIM | NBD_FLAG_HAS_FLAGS;
           ^
tools/rbd_nbd/rbd-nbd.cc:655:33: error: ‘NBD_FLAG_SEND_TRIM’ was not declared in this scope
   flags = NBD_FLAG_SEND_FLUSH | NBD_FLAG_SEND_TRIM | NBD_FLAG_HAS_FLAGS;
                                 ^
tools/rbd_nbd/rbd-nbd.cc:655:54: error: ‘NBD_FLAG_HAS_FLAGS’ was not declared in this scope
   flags = NBD_FLAG_SEND_FLUSH | NBD_FLAG_SEND_TRIM | NBD_FLAG_HAS_FLAGS;
                                                      ^
tools/rbd_nbd/rbd-nbd.cc:657:14: error: ‘NBD_FLAG_READ_ONLY’ was not declared in this scope
     flags |= NBD_FLAG_READ_ONLY;
              ^
tools/rbd_nbd/rbd-nbd.cc:685:14: error: ‘NBD_SET_FLAGS’ was not declared in this scope
   ioctl(nbd, NBD_SET_FLAGS, flags);
              ^
  CXX      tools/cephfs/RoleSelector.o
make[3]: *** [tools/rbd_nbd/rbd_nbd-rbd-nbd.o] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory `/data/ceph_source/ceph-10.2.11/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/data/ceph_source/ceph-10.2.11/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/data/ceph_source/ceph-10.2.11/src'
make: *** [all-recursive] Error 1


  CXXLD  ceph-dencoder
cd ./pybind/rados; mkdir -p "/data/ceph-10.2.11/src/build"; CC="gcc" CXX="g++" LDSHARED="gcc -shared" CPPFLAGS="-iquote \/data/ceph-10.2.11/src/include -D__CEPH__ -D_FILE_OFFSET_BITS=64 -D_THREAD_SAFE -D__STDC_FORMAT_MACROS -D_GNU_SOURCE -DCEPH_LIBDIR=\"/usr/local/ceph/lib\" -DCEPH_PKGLIBDIR=\"/usr/local/ceph/lib/ceph\" -DGTEST_USE_OWN_TR1_TUPLE=0 -D_REENTRANT    " CFLAGS="-iquote \/data/ceph-10.2.11/src/include -Wall -Wtype-limits -Wignored-qualifiers -Winit-self -Wpointer-arith  -fno-strict-aliasing -fsigned-char -rdynamic -O2 -g -pipe -Wall -Wp,-U_FORTIFY_SOURCE -Wp,-D_FORTIFY_SOURCE=2 -fexceptions --param=ssp-buffer-size=4 -fPIE -fstack-protector    -I/usr/include/python2.6 -I/usr/include/python2.6 -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv" LDFLAGS="-L\/data/ceph-10.2.11/src/.libs -Wl,--as-needed  -Wl,-z,relro -Wl,-z,now  -latomic_ops -lpthread -ldl -lutil -lm -lpython2.6" CYTHON_BUILD_DIR="/data/ceph-10.2.11/src/build" /usr/bin/python ./setup.py build \
  --build-base /data/ceph-10.2.11/src/build \
  --verbose
Traceback (most recent call last):
  File "./setup.py", line 5, in 
    from setuptools.command.egg_info import egg_info
ImportError: No module named setuptools.command.egg_info
make[3]: *** [rados-pybind-all] Error 1
make[3]: Leaving directory `/data/ceph-10.2.11/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/data/ceph-10.2.11/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/data/ceph-10.2.11/src'
make: *** [all-recursive] Error 1


running build
running build_ext
building 'rados' extension
creating /data/ceph-10.2.11/src/build/temp.linux-x86_64-2.6
gcc -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -iquote /data/ceph-10.2.11/src/include -Wall -Wtype-limits -Wignored-qualifiers -Winit-self -Wpointer-arith -fno-strict-aliasing -fsigned-char -rdynamic -O2 -g -pipe -Wall -Wp,-U_FORTIFY_SOURCE -Wp,-D_FORTIFY_SOURCE=2 -fexceptions --param=ssp-buffer-size=4 -fPIE -fstack-protector -I/usr/include/python2.6 -I/usr/include/python2.6 -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -iquote /data/ceph-10.2.11/src/include -D__CEPH__ -D_FILE_OFFSET_BITS=64 -D_THREAD_SAFE -D__STDC_FORMAT_MACROS -D_GNU_SOURCE -DCEPH_LIBDIR=/usr/local/ceph/lib -DCEPH_PKGLIBDIR=/usr/local/ceph/lib/ceph -DGTEST_USE_OWN_TR1_TUPLE=0 -D_REENTRANT -fPIC -I/usr/include/python2.6 -c rados.c -o /data/ceph-10.2.11/src/build/temp.linux-x86_64-2.6/rados.o
gcc: error: rados.c: No such file or directory
gcc: fatal error: no input files
compilation terminated.
error: command 'gcc' failed with exit status 1
make[3]: *** [rados-pybind-all] Error 1
make[3]: Leaving directory `/data/ceph-10.2.11/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/data/ceph-10.2.11/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/data/ceph-10.2.11/src'
make: *** [all-recursive] Error 1








#################################################
Traceback (most recent call last):
  File "/usr/local/ceph/bin/ceph", line 118, in 
    import rados
ImportError: librados.so.2: cannot open shared object file: No such file or directory

### ImportError: librados.so.2 ### Fix:

cp -R /usr/local/ceph/lib/* /usr/lib64/


################################################
/usr/local/ceph/bin/ceph:118: RuntimeWarning: compiletime version 2.6 of module 'rados' does not match runtime version 2.7
  import rados
Traceback (most recent call last):
  File "/usr/local/ceph/bin/ceph", line 124, in 
    from ceph_argparse import \
ImportError: No module named ceph_argparse

### No module named ceph_argparse ### Fix:

cp -R /usr/local/ceph/lib/python2.7/site-packages/* /usr/local/python2.7/lib/python2.7/site-packages
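
An alternative to copying the site-packages is to point PYTHONPATH at the install location (a sketch, assuming the default /usr/local/ceph prefix):

export PYTHONPATH=/usr/local/ceph/lib/python2.7/site-packages:$PYTHONPATH
python -c "import ceph_argparse; print('ceph_argparse OK')"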

