Learning WebMagic (6): Custom Pipelines (a Simple Crawler)

The Pipeline interface:

public interface Pipeline {

    /**
     * Process extracted results.
     * ResultItems holds the extraction results as a Map: anything stored with
     * page.putField(key, value) can be retrieved via resultItems.get(key).
     * @param resultItems resultItems
     * @param task task
     */
    public void process(ResultItems resultItems, Task task);
}
  • Printing results to the console: ConsolePipeline
public class ConsolePipeline implements Pipeline {

    @Override
    public void process(ResultItems resultItems, Task task) {
        System.out.println("get page: " + resultItems.getRequest().getUrl());
        for (Map.Entry<String, Object> entry : resultItems.getAll().entrySet()) {
            System.out.println(entry.getKey() + ":\t" + entry.getValue());
        }
    }
}
  • Saving results to MySQL: a simple crawler
    Define a custom pipeline that implements the Pipeline interface; in its process method, write the extracted data to the database.
package com.sima.crawler;

import com.sima.db.MysqlDBUtils;
import us.codecraft.webmagic.ResultItems;
import us.codecraft.webmagic.Task;
import us.codecraft.webmagic.pipeline.Pipeline;

/**
 * Created by cfq on 2017/4/30.
 */
public class GankDaoPipeline implements Pipeline {
    @Override
    public void process(ResultItems resultItems, Task task) {
        GankModel gankModel = new GankModel(resultItems.get("title").toString(), resultItems.get("content").toString());
        // Persist the extracted fields to the database.
        System.out.println("Inserted " + MysqlDBUtils.insert(gankModel) + " row(s)!");
    }
}

The database helper class is shown below; Druid is used to manage database connections.

import com.alibaba.druid.pool.DruidDataSourceFactory;

import javax.sql.DataSource;
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Properties;

public class MysqlDBUtils {
    private static Connection getConn() {
        String confile = "druid.properties"; // configuration file name
        Properties properties = new Properties();
        Connection connection = null;
        confile = MysqlDBUtils.class.getResource("/").getPath() + confile; // resolve the file path
        try (InputStream inputStream = new BufferedInputStream(new FileInputStream(confile))) {
            properties.load(inputStream); // load the configuration

            // DruidDataSourceFactory returns a javax.sql.DataSource
            DataSource dataSource = DruidDataSourceFactory.createDataSource(properties);
            connection = dataSource.getConnection();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return connection;
    }

    public static int insert(GankModel gankModel) {
        int i = 0;
        String sql = "insert into gankinfo (title, content) values (?, ?)";
        try (Connection conn = getConn();
             PreparedStatement pstmt = conn.prepareStatement(sql)) {
            pstmt.setString(1, gankModel.getTitle());
            pstmt.setString(2, gankModel.getContent());
            i = pstmt.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return i;
    }
}
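One caveat: getConn() builds a brand-new DataSource on every call, so the connection pool is never actually reused. A common fix is to create the pool once and share it. Below is a minimal stdlib-only sketch of that pattern; the Supplier stands in for the DruidDataSourceFactory.createDataSource(properties) call, and PoolHolder is a hypothetical name, not part of Druid or WebMagic.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class PoolHolderDemo {
    // Hypothetical holder: lazily builds the pool once and reuses it on every call.
    static class PoolHolder<T> {
        private final Supplier<T> factory;
        private volatile T instance;

        PoolHolder(Supplier<T> factory) { this.factory = factory; }

        T get() {
            if (instance == null) {              // fast path: no locking once built
                synchronized (this) {
                    if (instance == null) {      // double-checked initialization
                        instance = factory.get();
                    }
                }
            }
            return instance;
        }
    }

    public static void main(String[] args) {
        AtomicInteger built = new AtomicInteger();
        PoolHolder<String> holder = new PoolHolder<>(() -> "pool#" + built.incrementAndGet());
        holder.get();
        holder.get();
        System.out.println("factory invocations: " + built.get()); // prints 1
    }
}
```

In the helper above, this would mean keeping a single static DataSource field and having getConn() only call dataSource.getConnection().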

The Druid configuration is shown below (see my Druid study notes).

# Basic properties: url, username, password
url=jdbc:mysql://localhost:3306/istep?useUnicode=true&characterEncoding=utf-8
username=istep
password=istep
# Initial, minimum, and maximum pool size
initialSize=1
minIdle=1
maxActive=20
# Maximum time to wait for a connection, in milliseconds
maxWait=60000
# Interval between checks for idle connections to close, in milliseconds
timeBetweenEvictionRunsMillis=60000
# Minimum time a connection stays idle in the pool before eviction, in milliseconds
minEvictableIdleTimeMillis=300000
validationQuery=SELECT 'x'
testWhileIdle=true
testOnBorrow=false
testOnReturn=false
#filters=config
#connectionProperties=config.decrypt=true;config.decrypt.key=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAIZcLMcxhrqm+TE10+o2KKI1eoVw1UdtRtBSpKggXkj460nBhO27QdahWZq0MlkwKEKYLyb79TZFdPov8V3pbdsCAwEAAQ==
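This file is a plain java.util.Properties file, which is exactly how MysqlDBUtils loads it before handing it to DruidDataSourceFactory. A quick stdlib-only check that the values parse as expected (the inlined string mirrors a few lines of the config above):

```java
import java.io.StringReader;
import java.util.Properties;

public class DruidConfigCheck {
    public static void main(String[] args) throws Exception {
        // A few lines from druid.properties; note that '&' in the JDBC URL
        // needs no escaping in the .properties format.
        String conf = "url=jdbc:mysql://localhost:3306/istep?useUnicode=true&characterEncoding=utf-8\n"
                + "maxWait=60000\n"
                + "testWhileIdle=true\n";
        Properties p = new Properties();
        p.load(new StringReader(conf));
        System.out.println(Long.parseLong(p.getProperty("maxWait")));                 // 60000
        System.out.println(p.getProperty("url").endsWith("characterEncoding=utf-8")); // true
    }
}
```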

The model class:

public class GankModel {

    private int id;
    private String title;
    private String content;

    public GankModel() {
    }

    public GankModel(String title, String content) {
        this.title = title;
        this.content = content;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public String getContent() {
        return content;
    }

    public void setContent(String content) {
        this.content = content;
    }

    @Override
    public String toString() {
        return "GankModel{" +
                "title='" + title + '\'' +
                ", content='" + content + '\'' +
                '}';
    }
}

For the crawler demo, register the custom GankDaoPipeline on the Spider via addPipeline(new GankDaoPipeline()). The main flow:

public class GankRepoPageProcessor implements PageProcessor {
    // Site-level crawl configuration: encoding, crawl interval, retry count, etc.
    private Site site = Site.me().setRetryTimes(3).setSleepTime(2000);

    // process is the core extension point of the crawler: extraction logic goes here
    public void process(Page page) {
        // Define how to extract page data.
        // Crawl the gank.io daily archive pages, e.g. http://gank.io/2017/04/26
        page.addTargetRequests(page.getHtml().links().regex("(http://gank\\.io/\\d+/\\d+/\\d+)").all());
        page.putField("title", page.getHtml().$("h1").toString());            // page title
        page.putField("content", page.getHtml().$("div.outlink").toString()); // page content
        if (page.getResultItems().get("title") == null) {
            // Skip pages with no data.
            page.setSkip(true);
        }
    }

    public Site getSite() {
        return site;
    }

    public static void main(String[] args) {
        Spider.create(new GankRepoPageProcessor())
                .addUrl("http://gank.io") // start crawling from this URL
                .addPipeline(new GankDaoPipeline())
                .thread(5)
                .run();
    }
}
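The regex passed to addTargetRequests is what restricts the crawl to gank.io daily-archive pages. A standalone check of the pattern:

```java
import java.util.regex.Pattern;

public class GankUrlFilterCheck {
    // Same pattern as in GankRepoPageProcessor.process.
    static final Pattern ARCHIVE = Pattern.compile("(http://gank\\.io/\\d+/\\d+/\\d+)");

    public static void main(String[] args) {
        System.out.println(ARCHIVE.matcher("http://gank.io/2017/04/26").matches()); // true: archive page
        System.out.println(ARCHIVE.matcher("http://gank.io/about").matches());      // false: filtered out
    }
}
```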
