org.jsoup.HttpStatusException: HTTP error fetching URL. Status=544, URL=

While scraping pages with jsoup, the following error occurred:

org.jsoup.HttpStatusException: HTTP error fetching URL. Status=544, URL=https://……
	at org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:760)
	at org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:705)
	at org.jsoup.helper.HttpConnection.execute(HttpConnection.java:295)
	at org.jsoup.helper.HttpConnection.get(HttpConnection.java:284)
	at xyz.util.Utility.getCategoryBlogs(Utility.java:81)
	at xyz.main.App.main(App.java:39)
Exception in thread "main" java.lang.NullPointerException
	at xyz.util.Utility.getCategoryBlogs(Utility.java:88)
	at xyz.main.App.main(App.java:39)

Pasting the URL into a browser worked fine, so I suspected an anti-crawling measure. The anti-bot responses I had run into before were mostly 403s; a 544 was a first for me, and since 5xx codes usually indicate a server-side problem, I kept investigating. But after a few more attempts, the failing URL seemed to change each time... I was baffled: the error wasn't reproducible... (The NullPointerException in the trace above is most likely just fallout: after the catch block, doc is still null and gets dereferenced at Utility.java:88.)
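To check whether the failure really was intermittent, something like the sketch below can retry the request a few times and log each failure; `HttpStatusException` exposes the status code and the failing URL via `getStatusCode()` and `getUrl()`. The `fetchWithRetry` helper is my own naming for illustration, not part of the original project:

    import org.jsoup.HttpStatusException;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import java.io.IOException;

    // Hypothetical diagnostic helper: retry the fetch a few times and
    // log each failure, to see whether the 544 repeats and on which URL.
    static Document fetchWithRetry(String pageUrl, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return Jsoup.connect(pageUrl).get();
            } catch (HttpStatusException e) {
                // HttpStatusException carries both the status code and the URL
                System.err.printf("attempt %d: status=%d url=%s%n",
                        attempt, e.getStatusCode(), e.getUrl());
            } catch (IOException e) {
                System.err.printf("attempt %d: %s%n", attempt, e);
            }
        }
        return null; // every attempt failed
    }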

Then I tried setting a user agent to disguise the request as a browser, and sure enough that was the cause: the problem went away immediately!

Original code:

        Document doc = null;
        try {
            doc = Jsoup.connect(pageUrl).get();
        } catch (IOException e) {
            e.printStackTrace();
        }

After the fix:

        // Fetch the document
        Document doc = null;
        try {
            Connection con = Jsoup.connect(pageUrl).userAgent(
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36")
                .timeout(30000); // connection/read timeout in milliseconds

            Connection.Response response = con.execute();

            if (response.statusCode() == 200) {
                // Parse the body of the response already fetched by execute();
                // calling con.get() here would fire a second HTTP request.
                doc = response.parse();
            } else {
                System.out.println(response.statusCode());
                return null;
            }
        } catch (IOException e) {
            e.printStackTrace();
        }

After that, everything ran smoothly.
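As a side note, if you would rather branch on non-200 responses yourself than have jsoup throw `HttpStatusException`, jsoup's `Connection.ignoreHttpErrors(true)` makes `execute()` return the response regardless of status. A minimal sketch, assuming the same `pageUrl`:

        // Variant: suppress HttpStatusException and inspect the status code ourselves
        Connection.Response response = Jsoup.connect(pageUrl)
                .userAgent("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36")
                .timeout(30000)
                .ignoreHttpErrors(true) // 4xx/5xx responses are returned, not thrown
                .execute();

        if (response.statusCode() == 200) {
            Document doc = response.parse(); // parse the body that was already fetched
        } else {
            System.out.println("HTTP " + response.statusCode());
        }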
