Web Crawler (网络爬虫) || A First Crawler Program

Web Crawlers

A web crawler is a program or script that automatically fetches information from the World Wide Web according to a set of rules.



A First Crawler Program

Environment

  1. JDK 1.8
  2. IntelliJ IDEA
  3. The Maven bundled with IDEA

Development Steps

    1. Create a Maven project named itcast-crawler-first and add the following dependencies to pom.xml

    <!-- HttpClient -->
    <dependency>
        <groupId>org.apache.httpcomponents</groupId>
        <artifactId>httpclient</artifactId>
        <version>4.5.3</version>
    </dependency>

    <!-- Logging: SLF4J with the Log4j binding -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.25</version>
    </dependency>

    2. Add log4j.properties

log4j.rootLogger=DEBUG,A1
log4j.logger.cn.itcast = DEBUG

log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss,SSS} [%t] [%c]-[%p] %m%n
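With this configuration in place, any class under the cn.itcast package can obtain a logger through the SLF4J facade. A minimal sketch (the class name LoggingDemo is hypothetical, not part of the tutorial):

```java
package cn.itcast.crawler.test;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingDemo {

    // The logger is named after the class, so it is matched
    // by the cn.itcast logger configured above (level DEBUG)
    private static final Logger logger = LoggerFactory.getLogger(LoggingDemo.class);

    public static void main(String[] args) {
        logger.debug("starting crawl"); // appears on the console via appender A1
    }
}
```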

    3. Write the code

  1. "Open the browser": create an HttpClient object

  2. "Enter the URL": create an HttpGet object for the GET request

  3. "Press Enter": use the HttpClient object to send the request and get a response

  4. Parse the response and extract the data:
     check that the status code is 200
package cn.itcast.crawler.test;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class CrawlerFirst {

    public static void main(String[] args) throws Exception {
        //1. "Open the browser": create an HttpClient object
        CloseableHttpClient httpClient = HttpClients.createDefault();

        //2. "Enter the URL": create an HttpGet object for the GET request
        HttpGet httpGet = new HttpGet("http://www.itcast.cn");

        //3. "Press Enter": use the HttpClient object to send the request and get the response
        CloseableHttpResponse response = httpClient.execute(httpGet);

        //4. Parse the response and extract the data
        //Check that the status code is 200
        if (response.getStatusLine().getStatusCode() == 200) {
            HttpEntity httpEntity = response.getEntity();
            String content = EntityUtils.toString(httpEntity, "utf8");

            System.out.println(content);
        }
    }
}
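The example above never closes the response or the client. A variant using try-with-resources (a sketch with the same behavior; the class name CrawlerFirstClosing is hypothetical) releases both automatically, even if an exception is thrown:

```java
package cn.itcast.crawler.test;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class CrawlerFirstClosing {

    public static void main(String[] args) throws Exception {
        HttpGet httpGet = new HttpGet("http://www.itcast.cn");

        // Both CloseableHttpClient and CloseableHttpResponse implement
        // Closeable, so try-with-resources closes them in reverse order
        try (CloseableHttpClient httpClient = HttpClients.createDefault();
             CloseableHttpResponse response = httpClient.execute(httpGet)) {

            if (response.getStatusLine().getStatusCode() == 200) {
                String content = EntityUtils.toString(response.getEntity(), "utf8");
                System.out.println(content);
            }
        }
    }
}
```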
