A webmagic Project in Practice: Crawling a Novel Site

Project Background

The novel site Yousuu (http://yousuu.com/bookstore/) offers only limited search functionality and lacks most advanced queries. For example, there is no way to list novels rated 8.0 or higher, tagged '仙侠' (xianxia), and longer than 1,000,000 characters, sorted by rating in descending order. To fix this pain point, we crawl all the novel data (name, rating, introduction, author, and so on) to a local database, import it into Elasticsearch, and then build any query we want.

[Screenshot: book list page]

[Screenshot: book detail page]

Project Walkthrough

Implementation Analysis

Looking at the two screenshots above, the list page already contains everything except the introduction. So the plan is: extract the novel name, author, word count, rating, status, and tags from the list page, and extract the introduction from the detail page.

Code Implementation

As explained in the article《webmagic核心设计和运行机制分析》(an analysis of webmagic's core design and execution model), the two components we must customize for each site when using the WebMagic framework are the PageProcessor (parse the HTML and extract the target data) and the Pipeline (persist the target data), since both depend on the structure of the pages being crawled.

1. Spider: program entry point, the crawler launcher
@Component
public class YousuuTask {

    private static final String SITE_CODE = "yousuu";

    private static final String URL = "http://www.yousuu.com/bookstore/?type&tag&countWord&status&update&sort&page=";

    public void doTask() {
        MySpider mySpider = MySpider.create(new YousuuProcessor());

        mySpider.setDownloader(new MyDownloader(SITE_CODE));
        mySpider.setScheduler(new RedisScheduler(SITE_CODE));
        mySpider.addPipeline(new YousuuPipeline());

        mySpider.thread(10);

        int totalPage = 8187;

        // add the start URLs
        for(int i=1; i<=totalPage; i++) {
            Request request = new Request(URL + i);
            // store the page type in the Request extras
            request.putExtra(YousuuProcessor.TYPE, YousuuProcessor.LIST_TYPE);
            mySpider.addRequest(request);
        }

        mySpider.run();
    }
}
2. PageProcessor: parse the HTML and extract the target data
public class YousuuProcessor implements PageProcessor {

    private Site site = Site.me().setRetryTimes(0).setSleepTime(2000).setTimeOut(60000);

    public static final String TYPE = "type";
    public static final String LIST_TYPE = "list";
    public static final String DETL_TYPE = "detl";

    @Override
    public void process(Page page) {
        // read the page type back from the Request extras and dispatch accordingly
        String type = page.getRequest().getExtra(TYPE).toString();

        switch (type) {
            case LIST_TYPE:
                processList(page);
                break;
            case DETL_TYPE:
                processDetl(page);
                break;
            default:
                break;
        }
    }

    /**
     * Process a list page
     * @param page
     */
    private void processList(Page page) {
        Html html = page.getHtml();
        List<Selectable> bookInfoNodes = html.xpath("//div[@class=\"book-info\"]").nodes();

        List<Novel> novelList = new ArrayList<>();

        for(Selectable node : bookInfoNodes) {
            String novelName = node.xpath("/div/a/text()").toString();
            String novelUrl = node.xpath("/div/a/@href").toString();
            String id = novelUrl.substring(novelUrl.lastIndexOf("/") + 1);

            // enqueue the detail-page url into the scheduler
            Request detlRequest = new Request("http://www.yousuu.com/book/" + id);
            detlRequest.putExtra(TYPE, DETL_TYPE);
            page.addTargetRequest(detlRequest);


            // child node indexes start at 1
            String author = node.xpath("/div/p[1]/router-link/text()").toString();
            String wordNum = node.xpath("/div/p[1]/span[1]/text()").toString();
            String lastUpdateTime = node.xpath("/div/p[1]/span[2]/text()").toString();
            String status = node.xpath("/div/p[1]/span[3]/text()").toString();

            String scoreStr = node.xpath("/div/p[2]/text()").toString();
            scoreStr = scoreStr.substring("综合评分:".length());
            String[] split = scoreStr.split("\\(");
            Double score = Double.valueOf(split[0]);
            String scorePersonNumStr = split[1].substring(0, split[1].length() - 2);
            Integer scorePersonNum = Integer.valueOf(scorePersonNumStr);

            List<Selectable> tagNodes = node.xpath("/div/p[4]/label").nodes();
            StringBuilder tagBuff = new StringBuilder();
            for(Selectable tagNode : tagNodes) {
                String tag = tagNode.xpath("/label/text()").toString();
                tagBuff.append(tag).append(",");
            }

            String tags = null;
            if(tagBuff.length() > 0) {
                tags = tagBuff.substring(0, tagBuff.length()-1);
            }

            Novel novel = new Novel();
            novel.setId(Long.valueOf(id));
            novel.setName(novelName);
            novel.setAuthor(author);
            novel.setWordNum(NumberUtil.getDoubleNumber(wordNum));
            novel.setLastUpdateTime(lastUpdateTime);
            novel.setStatus(status);
            novel.setScore(score);
            novel.setScorePersonNum(scorePersonNum);
            novel.setTags(tags);

            novelList.add(novel);
        }

        page.putField("novelList", novelList);
    }


    /**
     * Process a detail page
     * @param page
     */
    private void processDetl(Page page) {
        Html html = page.getHtml();
        List<Selectable> nodes = html.xpath("//body/*[1]").nodes();
        String script = nodes.get(1).toString();

        int pos1 = script.indexOf("introduction");
        int pos2 = script.indexOf("countWord");

        String intro = script.substring(pos1+15, pos2-3);

        String url = page.getRequest().getUrl();
        String id = url.substring(url.lastIndexOf("/") + 1);

        NovelDTO novelDTO = new NovelDTO(Long.valueOf(id), intro);
        page.putField("novelDTO", novelDTO);
    }


    @Override
    public Site getSite() {
        site.addHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3");
        site.addHeader("Accept-Encoding", "gzip, deflate");
        site.addHeader("Accept-Language", "zh-CN,zh;q=0.9,en;q=0.8");
        site.addHeader("Cache-Control", "max-age=0");
        site.addHeader("Connection", "keep-alive");
        site.addHeader("Cookie", "Hm_lvt_42e120beff2c918501a12c0d39a4e067=1566530194,1566819135,1566819342,1566963215; Hm_lpvt_42e120beff2c918501a12c0d39a4e067=1566963215");
        site.addHeader("Host", "www.yousuu.com");
        site.addHeader("Upgrade-Insecure-Requests", "1");
        site.addHeader("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36");

        return site;
    }
}
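Both parsers above rely on raw index arithmetic over the page text, which breaks silently the moment the site changes its markup. The following standalone sketch isolates those two parsing steps so they can be tested in isolation; the sample inputs in `main` are fabricated for illustration, not captured from yousuu.com:

```java
// Standalone sketch of the string parsing done in YousuuProcessor.
// The sample inputs are fabricated; the real page text may differ.
public class ParseSketch {

    /** Parse a score line such as "综合评分:8.5(123人)" into {score, voterCount}. */
    public static double[] parseScoreLine(String scoreStr) {
        String body = scoreStr.substring("综合评分:".length()); // "8.5(123人)"
        String[] split = body.split("\\(");
        double score = Double.parseDouble(split[0]);
        // drop the trailing "人)" (two characters) to keep only the digits
        int voters = Integer.parseInt(split[1].substring(0, split[1].length() - 2));
        return new double[] { score, voters };
    }

    /** Pull the introduction out of the inline JSON blob on the detail page. */
    public static String extractIntro(String script) {
        int pos1 = script.indexOf("introduction");
        int pos2 = script.indexOf("countWord");
        // skip past `"introduction":"` (15 chars) and drop the trailing `","`
        return script.substring(pos1 + 15, pos2 - 3);
    }

    public static void main(String[] args) {
        double[] s = parseScoreLine("综合评分:8.5(123人)");
        System.out.println(s[0] + " / " + (int) s[1]); // 8.5 / 123
        String script = "{\"introduction\":\"a short intro\",\"countWord\":100}";
        System.out.println(extractIntro(script)); // a short intro
    }
}
```

If either offset stops matching the live page, these methods throw (or return garbage) immediately, which makes the breakage easy to spot in a unit test rather than deep inside a crawl.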

Note: the type value carried in the Request extras tells us which kind of page a URL pulled from the Scheduler corresponds to, so each page can be routed to the matching parsing logic.

3. Pipeline: persist the target data
/**
 * Persist the novel data
 */
public class YousuuPipeline implements Pipeline {
    private NovelMapper novelMapper = SpringContextUtil.getBean(NovelMapper.class);

    @Override
    public void process(ResultItems resultItems, Task task) {
        // every field except the introduction comes from the list page; insert in batch
        Object novelListObj = resultItems.get("novelList");
        if(null != novelListObj) {
            List<Novel> novelList = (List<Novel>) novelListObj;
            if(CollectionUtils.isNotEmpty(novelList)) {
                novelMapper.batchInsert(novelList);
            }
        }

        // the introduction comes from the detail page; update the existing row
        Object novelDTOObj = resultItems.get("novelDTO");
        if(null != novelDTOObj) {
            NovelDTO novelDTO = (NovelDTO) novelDTOObj;

            Novel novel = new Novel();
            BeanUtils.copyProperties(novelDTO, novel);
            novelMapper.updateByPrimaryKeySelective(novel);
        }
    }
}
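NovelMapper's batchInsert is referenced above but its implementation is not shown. Assuming a MyBatis mapper backed by a novel table whose columns mirror the Novel fields (both the table name and column names are assumptions), the XML fragment might look roughly like this:

```xml
<!-- Hypothetical mapper fragment; table and column names are assumptions -->
<insert id="batchInsert" parameterType="java.util.List">
  INSERT INTO novel
    (id, name, author, word_num, status, score,
     score_person_num, last_update_time, tags)
  VALUES
  <foreach collection="list" item="n" separator=",">
    (#{n.id}, #{n.name}, #{n.author}, #{n.wordNum}, #{n.status},
     #{n.score}, #{n.scorePersonNum}, #{n.lastUpdateTime}, #{n.tags})
  </foreach>
</insert>
```

A single multi-row INSERT like this avoids one database round trip per novel, which matters when each list page yields a batch of records.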
4. Entity class
@Data
public class Novel implements Serializable {
    /**
     * Novel id (auto-increment)
     */
    @Id
    private Long id;
    /**
     * Novel name
     */
    private String name;
    /**
     * Author
     */
    private String author;
    /**
     * Word count (in units of 10,000 characters)
     */
    private Double wordNum;
    /**
     * Status
     */
    private String status;
    /**
     * Rating
     */
    private Double score;
    /**
     * Number of raters
     */
    private Integer scorePersonNum;
    /**
     * Last update time
     */
    private String lastUpdateTime;
    /**
     * Tags, comma-separated
     */
    private String tags;
    /**
     * Introduction
     */
    private String intro;
}
5. Test program
@RunWith(SpringRunner.class)
@SpringBootTest
public class SpiderApplicationTests {
    @Autowired
    YousuuTask yousuuTask;

    @Test
    public void test() {
        yousuuTask.doTask();
    }
}

Results

[Screenshot: crawl results]

Building Queries

Finally, import the novel data saved in the database table into Elasticsearch (the source code links are given at the end of this article) and write DSL to build whatever queries we need.
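As a sketch of the query from the project background (rated 8.0 or higher, tagged '仙侠', over 1,000,000 characters, sorted by rating descending), the DSL might look like the following. The index name novel is an assumption, the field names mirror the Novel entity, and wordNum is in units of 10,000 characters as noted in the entity class, so 100 means 1,000,000:

```json
POST /novel/_search
{
  "query": {
    "bool": {
      "filter": [
        { "range": { "score":   { "gte": 8.0 } } },
        { "match": { "tags":    "仙侠" } },
        { "range": { "wordNum": { "gte": 100 } } }
      ]
    }
  },
  "sort": [ { "score": { "order": "desc" } } ]
}
```

Putting the conditions in a bool filter (rather than must) skips relevance scoring, which is appropriate here since the results are sorted by the score field, not by match relevance.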

[Screenshot: Elasticsearch query result]

Source Code

spider:https://github.com/xiawq87/sp...

es:https://github.com/xiawq87/no...
