Linux + nginx + PHP: Blocking Web Crawlers

Go to the conf directory under the nginx installation directory and save the following rules as agent_deny.conf:

cd /usr/local/nginx/conf
vim agent_deny.conf

Block scraping by tools such as Scrapy:

if ($http_user_agent ~* (Scrapy|Curl|HttpClient)) {
    return 403;
}

Block a list of known bot User-Agents (matched case-insensitively via ~*), as well as requests with an empty User-Agent:

if ($http_user_agent ~* "FeedDemon|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|YisouSpider|CoolpadWebkit|Java|Feedly|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReports Bot|YYSpider|DigExt|HttpClient|MJ12bot|heritrix|EasouSpider|Ezooms|^$" ) {
     return 403;             
}

Block request methods other than GET, HEAD, and POST:

if ($request_method !~ ^(GET|HEAD|POST)$) {
    return 403;
}
if ($http_user_agent ~* (LWP::Simple|BBBike|wget|curl)) {
    return 444;
}

Then open your site's configuration and include the file inside the relevant location block:

location / {
	include agent_deny.conf;
}
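
For orientation, here is a minimal sketch of where the include might sit in a complete vhost; server_name, root, and the fastcgi_pass address are placeholders for your own setup. The rules are also included in the PHP location so that direct requests to .php files are filtered:

server {
    listen 80;
    server_name example.com;            # placeholder domain
    root /var/www/example;              # placeholder web root

    location / {
        include agent_deny.conf;        # filter crawlers on ordinary requests
        index index.php index.html;
    }

    location ~ \.php$ {
        include agent_deny.conf;        # filter crawlers on PHP requests too
        fastcgi_pass 127.0.0.1:9000;    # adjust to your php-fpm address/socket
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}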

Finally, reload nginx and test:
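
Assuming nginx was installed under /usr/local/nginx, matching the path used above, check the syntax and reload:

/usr/local/nginx/sbin/nginx -t         # verify the configuration parses cleanly
/usr/local/nginx/sbin/nginx -s reload  # apply the new rules without downtime

Then simulate blocked clients with curl: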

curl -I -A 'YisouSpider' your-domain

curl -I -A '' your-domain
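
Both requests should return 403 Forbidden: YisouSpider is on the blocklist, and ^$ matches the empty User-Agent. As a control, an illustrative browser-like User-Agent (which matches none of the rules) and a disallowed method can be checked the same way:

curl -I -A 'Mozilla/5.0' your-domain               # normal UA: expect a normal response
curl -i -X DELETE -A 'Mozilla/5.0' your-domain     # non-GET/HEAD/POST method: expect 403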
