Source Code Analysis of crawler4j, an Open-Source Java Crawler

crawler4j's architecture is compact and clear: just 35 classes in total, organized into the following packages:


edu.uci.ics.crawler4j.crawler  core crawling logic and configuration

edu.uci.ics.crawler4j.fetcher  page fetching

edu.uci.ics.crawler4j.frontier  the URL work queue

edu.uci.ics.crawler4j.parser  parsing of fetched content

edu.uci.ics.crawler4j.robotstxt  robots.txt checking

edu.uci.ics.crawler4j.url  URL handling, mainly WebURL

edu.uci.ics.crawler4j.util  utility classes
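The frontier package above is essentially a deduplicating URL work queue: crawler threads schedule newly discovered links and pull the next URL to fetch. As a rough illustration of that idea, here is a toy in-memory sketch (a hypothetical class, not crawler4j's actual implementation, which persists its queue via an embedded database):

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

// Toy illustration of what a crawl frontier does: keep a FIFO queue of
// URLs to visit while filtering out URLs that were already scheduled.
public class ToyFrontier {
    private final Queue<String> workQueue = new ArrayDeque<>();
    private final Set<String> seen = new HashSet<>();

    // Schedule a URL; returns false if it was already scheduled before.
    public boolean schedule(String url) {
        if (!seen.add(url)) {
            return false; // duplicate, skipped
        }
        return workQueue.offer(url);
    }

    // Hand the next URL to a crawler thread, or null when the queue is empty.
    public String next() {
        return workQueue.poll();
    }

    public static void main(String[] args) {
        ToyFrontier frontier = new ToyFrontier();
        frontier.schedule("http://example.com/");
        frontier.schedule("http://example.com/");      // deduplicated
        frontier.schedule("http://example.com/page");
        String url;
        while ((url = frontier.next()) != null) {
            System.out.println(url);
        }
    }
}
```

crawler4j's real frontier additionally assigns document IDs and survives restarts; the sketch only captures the scheduling/deduplication contract.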

A note up front on crawler4j and garbled Chinese text (mojibake): fetching itself has not shown any corruption; the garbling appears during parsing. The cause is that when Tika parses HTML it takes the page's meta charset as the default encoding, so when that declared encoding differs from the actual encoding, the text comes out garbled. The fix is to correct the meta charset to the real encoding, by modifying Page.load as follows:

 /**
     * Loads the content of this page from a fetched
     * HttpEntity.
     */
	public void load(HttpEntity entity) throws Exception {

		contentType = null;
		Header type = entity.getContentType();
		if (type != null) {
			contentType = type.getValue();
		}

		contentEncoding = null;
		Header encoding = entity.getContentEncoding();
		if (encoding != null) {
			contentEncoding = encoding.getValue();
		}

		Charset charset = ContentType.getOrDefault(entity).getCharset();
		if (charset != null) {
			contentCharset = charset.displayName();	
		}

		contentData = EntityUtils.toByteArray(entity);
		// Fix for garbled Chinese text (mojibake); the single-line rewrite
		// below was an earlier, rejected attempt:
		//if(contentCharset != null) contentData = new String(contentData, contentCharset).getBytes();
		if(charset != null) {
			String data = new String(contentData,contentCharset);

			boolean isEncodeGBK = true, isEncodeUTF = true;
			if(!java.nio.charset.Charset.forName("GBK").newEncoder().canEncode(data)) isEncodeGBK = false;
			if(!java.nio.charset.Charset.forName("UTF-8").newEncoder().canEncode(data)) isEncodeUTF = false;
			if(contentCharset.equalsIgnoreCase("GBK") && !isEncodeGBK) {
				contentCharset = "UTF-8"; 
				data = new String(contentData,contentCharset);
			} else if (contentCharset.equalsIgnoreCase("UTF-8") && !isEncodeUTF) {
				contentCharset = "GBK"; 
				data = new String(contentData,contentCharset);
			}
			

			// The regex literal here was lost when the post was published; the
			// stated intent is to strip the original <meta charset=...> tag so
			// Tika falls back to the corrected encoding (reconstructed pattern):
			data = data.replaceFirst("(?i)<meta[^>]*charset[^>]*>", " ");
			contentData = data.getBytes(contentCharset);
		} else {
			String data = new String(contentData);

			boolean isEncodeGBK = true, isEncodeUTF = true;
			if(!java.nio.charset.Charset.forName("GBK").newEncoder().canEncode(data)) isEncodeGBK = false;
			if(!java.nio.charset.Charset.forName("UTF-8").newEncoder().canEncode(data)) isEncodeUTF = false;
			if(isEncodeGBK && !isEncodeUTF) {
				contentCharset = "GBK"; 
				data = new String(contentData,contentCharset);
				// reconstructed pattern (original lost in publishing): strip the <meta charset> tag
				data = data.replaceFirst("(?i)<meta[^>]*charset[^>]*>", " ");
				contentData = data.getBytes(contentCharset);
			} else if (isEncodeUTF && !isEncodeGBK) {
				contentCharset = "UTF-8"; 
				data = new String(contentData,contentCharset);
				// reconstructed pattern (original lost in publishing): strip the <meta charset> tag
				data = data.replaceFirst("(?i)<meta[^>]*charset[^>]*>", " ");
				contentData = data.getBytes(contentCharset);
			}
		}
	}
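To see why the canEncode check above can detect a wrong declared charset, here is a small self-contained demo (hypothetical class and method names, not crawler4j code) of the same heuristic: the page's bytes are really UTF-8, but the declared charset says GBK, so decoding with GBK produces a replacement character that the GBK encoder refuses to re-encode, and the heuristic flips to UTF-8:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Standalone demo of the canEncode heuristic used in Page.load above.
public class CharsetGuessDemo {

    // Re-check a declared charset against the raw bytes; flip between
    // GBK and UTF-8 when the decoded text cannot be re-encoded.
    static String fixCharset(byte[] contentData, String declared) {
        String data = new String(contentData, Charset.forName(declared));
        boolean canReencode = Charset.forName(declared).newEncoder().canEncode(data);
        if (!canReencode && declared.equalsIgnoreCase("GBK")) {
            return "UTF-8";
        }
        if (!canReencode && declared.equalsIgnoreCase("UTF-8")) {
            return "GBK";
        }
        return declared;
    }

    public static void main(String[] args) {
        // UTF-8 bytes of "中" (E4 B8 AD): decoding them as GBK consumes E4 B8
        // as one double-byte character and leaves a dangling lead byte, which
        // becomes U+FFFD -- a character the GBK encoder cannot encode.
        byte[] utf8Bytes = "中".getBytes(StandardCharsets.UTF_8);
        String guessed = fixCharset(utf8Bytes, "GBK");
        System.out.println(guessed); // UTF-8
        System.out.println(new String(utf8Bytes, Charset.forName(guessed)));
    }
}
```

Note that the heuristic is best-effort: UTF-8 can encode nearly any decoded string (including U+FFFD), so the UTF-8-to-GBK direction rarely triggers, and some mis-decoded byte sequences still pass canEncode under the wrong charset.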

Reposted from: http://www.xuebuyuan.com/2018940.html
