Step 1. Install the development package group and update the operating system
#yum groupinstall "Development Tools" -y
#yum update -y
Notes:
1. If the Python on your system is older than 2.7, upgrade it to Python 2.7 or later (Scrapy requires Python 2.7+).
# Download Python 2.7
#wget http://python.org/ftp/python/2.7.3/Python-2.7.3.tar.bz2
# Extract
#tar -jxvf Python-2.7.3.tar.bz2
#cd Python-2.7.3
# Build and install
#./configure
#make all
#make install
#make clean
#make distclean
# Check the Python version
#/usr/local/bin/python2.7 -V
# Create a symlink so the system default python points to Python 2.7
#mv /usr/bin/python /usr/bin/python2.6.6
#ln -s /usr/local/bin/python2.7 /usr/bin/python
# After the system python symlink points to Python 2.7, yum stops working because it is not compatible with Python 2.7, so we have to pin yum to its original Python version
vim /usr/bin/yum
Change the shebang at the top of the file from
#!/usr/bin/python
to
#!/usr/bin/python2.6.6
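With the shebang fixed, a quick sanity check confirms that the default python now reports 2.7 and that yum still runs (a minimal check; exact version strings will differ on your system):
# The default interpreter should now report 2.7.x
python -V
# yum should work again because its shebang points at the old 2.6 binary
yum --version | head -1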
2. It is strongly recommended to finish the Python 2.7 upgrade first and only then install pip and setuptools; doing it the other way around produces all kinds of baffling problems that will keep you up until dawn!
3. If you upgraded to Python 2.7, you will most likely have to build and install everything from source with python setup.py. The required packages include, but are not limited to, the following (a sketch of the build pattern follows the list):
lxml, zope.interface, Twisted, characteristic, pyasn1-modules, service-identity, Scrapy
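Each of those packages follows the same download / extract / setup.py pattern; below is a rough sketch using lxml as an example (the download URL and version number are illustrative placeholders, not taken from the original post):
# Illustrative source-install pattern against the freshly built Python 2.7
wget https://pypi.python.org/packages/source/l/lxml/lxml-3.4.4.tar.gz   # URL/version are placeholders
tar -zxvf lxml-3.4.4.tar.gz
cd lxml-3.4.4
/usr/local/bin/python2.7 setup.py build
/usr/local/bin/python2.7 setup.py install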
PS: I started out with source builds myself, and the most frequent failure was:
error: command 'gcc' failed with exit status 1
I eventually realized that this message almost always means either a missing -devel package or a missing library file. The most maddening part was that Scrapy reported a successful install yet could not create a project and would not even run the sample code, so in the end I simply switched to CentOS 7.
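When gcc fails like that, the missing header named further up in the error output usually points straight at the -devel package you need. One way to track it down with yum (the header and package names here are only examples):
# Ask yum which package ships a given header, e.g. libxml/xmlversion.h
yum provides '*/libxml/xmlversion.h'
# Then install the -devel package it reports
yum install -y libxml2-devel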
All of the following was done on CentOS 7; if you took the Python 2.7 upgrade route above instead, these steps are not for you.
Step 2. Edit /etc/yum.repos.d/rpmforge.repo to point yum at the RPMforge repository so that libffi-devel can be installed (without this repo, yum install libffi-devel reports that the package cannot be found). The method from the original blog post no longer works, so I searched for an alternative; overwrite the file opened above with the following content:
#Name: RPMforge RPM Repository for Red Hat Enterprise 5 - dag
#URL: http://rpmforge.net/
[rpmforge]
name = Red Hat Enterprise $releasever - RPMforge.net - dag
#baseurl = http://apt.sw.be/redhat/el5/en/$basearch/dag
mirrorlist = http://apt.sw.be/redhat/el5/en/mirrors-rpmforge
#mirrorlist = file:///etc/yum.repos.d/mirrors-rpmforge
enabled = 1
protect = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmforge-dag
gpgcheck = 1
Then run the following commands:
sudo rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
sudo yum install libffi-devel
Original solution: http://www.lxway.com/164125081.htm
Note: if you do not have RPMforge yet, install it first.
RPMforge is a collection of packages from Dag, Dries, and other packagers; together they provide more than 10,000 packages for CentOS. RPMforge is not a Red Hat product and is not part of CentOS, but it is built specifically for these distributions. Because this repository is not part of CentOS itself, you must install the rpmforge-release package before you can use it.
How to get it:
# 32-bit:
wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.1-1.el5.rf.i386.rpm
rpm -ivh rpmforge-release-0.5.1-1.el5.rf.i386.rpm
# 64-bit:
wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.1-1.el5.rf.x86_64.rpm
Install:
rpm -ivh rpmforge-release-0.5.1-1.el5.rf.x86_64.rpm
After installation, the following files are created under /etc/yum.repos.d:
mirrors-rpmforge -- a list of mirror sites
rpmforge.repo -- the yum repository configuration file
rpmforge-testing.repo -- for testing only
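Once the release RPM is in place, you can confirm that yum actually sees the repository and that the libffi-devel install from earlier succeeded (a quick, hedged check):
# The rpmforge repo should show up in the enabled repo list
yum repolist enabled | grep -i rpmforge
# And libffi-devel should be present after the install above
rpm -q libffi-devel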
Step 3. If the audit package is installed on your system, remove it first; it interferes with the Scrapy installation.
#yum remove audit
Step 4. Install the development packages Scrapy needs
#yum install -y python-devel openssl-devel libxslt-devel libxml2-devel
Step 5. Install pip and setuptools
#yum install python-pip -y
yum reports that no package is available:
This happens because derivative distributions such as CentOS sometimes lag behind with repository updates, or simply never carry some of the extra packages, so yum cannot find python-pip when you search for it. To install such packages you first need the EPEL extra repository. EPEL (http://fedoraproject.org/wiki/EPEL) is a Fedora community project that provides high-quality packages for RHEL and derivatives such as CentOS and Scientific Linux.
First install the EPEL repository:
sudo yum -y install epel-release
Then install python-pip:
sudo yum -y install python-pip
After installing, don't forget to clean the yum cache:
sudo yum clean all
#pip install setuptools
#pip install setuptools --upgrade
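A quick check that pip and setuptools are usable before moving on (hedged; version numbers will vary):
pip --version
pip show setuptools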
Step 6. Install Scrapy
# pip install Scrapy
Collecting Scrapy
Using cached Scrapy-1.0.3-py2-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): cssselect>=0.9 in /usr/lib/python2.7/site-packages (from Scrapy)
Requirement already satisfied (use --upgrade to upgrade): queuelib in /usr/lib/python2.7/site-packages (from Scrapy)
Requirement already satisfied (use --upgrade to upgrade): pyOpenSSL in /usr/lib/python2.7/site-packages (from Scrapy)
Requirement already satisfied (use --upgrade to upgrade): w3lib>=1.8.0 in /usr/lib/python2.7/site-packages (from Scrapy)
Collecting lxml (from Scrapy)
Using cached lxml-3.4.4.tar.gz
Collecting Twisted>=10.0.0 (from Scrapy)
Using cached Twisted-15.4.0.tar.bz2
Requirement already satisfied (use --upgrade to upgrade): six>=1.5.2 in /usr/lib/python2.7/site-packages (from Scrapy)
Collecting service-identity (from Scrapy)
Using cached service_identity-14.0.0-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): cryptography>=0.7 in /usr/lib64/python2.7/site-packages (from pyOpenSSL->Scrapy)
Collecting zope.interface>=3.6.0 (from Twisted>=10.0.0->Scrapy)
Using cached zope.interface-4.1.3.tar.gz
Collecting characteristic>=14.0.0 (from service-identity->Scrapy)
Using cached characteristic-14.3.0-py2.py3-none-any.whl
Collecting pyasn1-modules (from service-identity->Scrapy)
Using cached pyasn1_modules-0.0.8-py2.py3-none-any.whl
Requirement already satisfied (use --upgrade to upgrade): pyasn1 in /usr/lib/python2.7/site-packages (from service-identity->Scrapy)
Requirement already satisfied (use --upgrade to upgrade): idna>=2.0 in /usr/lib/python2.7/site-packages (from cryptography>=0.7->pyOpenSSL->Scrapy)
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python2.7/site-packages (from cryptography>=0.7->pyOpenSSL->Scrapy)
Requirement already satisfied (use --upgrade to upgrade): enum34 in /usr/lib/python2.7/site-packages (from cryptography>=0.7->pyOpenSSL->Scrapy)
Requirement already satisfied (use --upgrade to upgrade): ipaddress in /usr/lib/python2.7/site-packages (from cryptography>=0.7->pyOpenSSL->Scrapy)
Requirement already satisfied (use --upgrade to upgrade): cffi>=1.1.0 in /usr/lib64/python2.7/site-packages (from cryptography>=0.7->pyOpenSSL->Scrapy)
Requirement already satisfied (use --upgrade to upgrade): pycparser in /usr/lib/python2.7/site-packages (from cffi>=1.1.0->cryptography>=0.7->pyOpenSSL->Scrapy)
Installing collected packages: lxml, zope.interface, Twisted, characteristic, pyasn1-modules, service-identity, Scrapy
Running setup.py install for lxml
Running setup.py install for zope.interface
Running setup.py install for Twisted
Successfully installed Scrapy-1.0.3 Twisted-15.4.0 characteristic-14.3.0 lxml-3.4.4 pyasn1-modules-0.0.8 service-identity-14.0.0 zope.interface-4.1.3
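Before creating a project, it is worth a quick import check (a minimal sketch; the version printed should match whatever pip just installed):
scrapy version
python -c "import scrapy; print(scrapy.__version__)"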
Step 7. Create a project
[root@localhost workspace]# scrapy startproject tutorial
2015-10-15 21:54:24 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2015-10-15 21:54:24 [scrapy] INFO: Optional features available: ssl, http11
2015-10-15 21:54:24 [scrapy] INFO: Overridden settings: {}
New Scrapy project 'tutorial' created in:
/workspace/tutorial
You can start your first spider with:
cd tutorial
scrapy genspider example example.com
Step 8. Project directory structure
[root@localhost workspace]# tree
.
└── tutorial
    ├── scrapy.cfg
    └── tutorial
        ├── __init__.py
        ├── items.py
        ├── pipelines.py
        ├── settings.py
        └── spiders
            └── __init__.py

3 directories, 6 files
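To smoke-test the new project, a minimal spider can be dropped into tutorial/tutorial/spiders/. This is only an illustrative sketch: the file name, spider name, and target URL are placeholders of my own, not part of the original walkthrough.
# tutorial/tutorial/spiders/example_spider.py  (illustrative path)
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"                       # run with: scrapy crawl example
    start_urls = ["http://example.com/"]   # placeholder start page

    def parse(self, response):
        # Log the page title just to prove that downloading and parsing work
        title = response.xpath("//title/text()").extract_first()
        self.log("Page title: %s" % title)
Run it from the project root with scrapy crawl example; a title line in the log output means the whole toolchain is working.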
Step 9. Scrapy documentation and references
http://www.tuicool.com/articles/URNVV3E [building and installing Scrapy from source; unfortunately it did not work for me]
https://scrapy-chs.readthedocs.org/zh_CN/0.24/intro/overview.html [an older Chinese translation of the Scrapy documentation]
http://scrapy.org/
http://doc.scrapy.org/en/master/