Shanbay Python | Python Web Scraping for Beginners: How to Scrape Shanbay Vocabulary in One Simple Article

I was idly browsing and happened to open the page for Shanbay's essential Python vocabulary list. Since it was already open, and I was feeling rather bored, why not try scraping it?

1. Webpage Analysis

Opening the site, and drawing on past scraping experience, we can tell this page is particularly easy to scrape.

A quick look shows that we only need to extract two things: each word and its meaning. Let's start by examining the page source.
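The page markup itself is not reproduced here, but judging from the XPath expressions used below, each word sits inside a `<strong>` tag and each meaning inside a `<td class="span10">` cell. Here is a minimal sketch against a made-up fragment mirroring that assumed structure:

```python
from lxml import etree

# Made-up fragment mirroring the assumed structure of the wordlist table
sample_html = """
<table>
  <tr>
    <td class="span2"><strong>python</strong></td>
    <td class="span10">n. 蟒蛇</td>
  </tr>
  <tr>
    <td class="span2"><strong>list</strong></td>
    <td class="span10">n. 列表</td>
  </tr>
</table>
"""

tree = etree.HTML(sample_html)
words = tree.xpath('//strong/text()')                  # the English words
meanings = tree.xpath('//td[@class="span10"]/text()')  # the Chinese meanings
print(list(zip(words, meanings)))
```

Both expressions return parallel lists, which is why `zip` works for pairing them up below.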

Next we parse each of them out. With the analysis complete, we can implement the extraction in code:

etree_obj = etree.HTML(html)
word_list = etree_obj.xpath('//strong/text()')
explain_list = etree_obj.xpath('//td[@class="span10"]/text()')
for item in zip(word_list, explain_list):
    items.append(item)  # items is a module-level list in the full script below

With content extraction sorted, let's look at pagination. Since this wordlist has only three pages, the simplest approach is to build the URLs by string formatting:

base_url = "https://www.shanbay.com/wordlist/110521/232414/?page={}"
for i in range(1, 4):
    url = base_url.format(i)
    print(url)

2. Code Implementation

# encoding: utf-8
'''
@author 李华鑫
@create 2020-10-08 8:10
Mycsdn:https://buwenbuhuo.blog.csdn.net/
@contact: [email protected]
@software: Pycharm
@file: 作业:爬扇贝Python必背词汇.py
@Version:1.0
'''
import csv
import requests
from lxml import etree

"""
https://www.shanbay.com/wordlist/110521/232414/?page=1
https://www.shanbay.com/wordlist/110521/232414/?page=2
https://www.shanbay.com/wordlist/110521/232414/?page=3

//strong                 # en (the word)
//td[@class="span10"]    # cn (the meaning)
"""

base_url = "https://www.shanbay.com/wordlist/110521/232414/?page={}"
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36',
}
items = []


def parse_url(url):
    """Fetch the URL and return the decoded response body."""
    response = requests.get(url=url, headers=headers)
    return response.content.decode("utf-8")


def parse_html(html):
    """Parse the HTML with XPath and collect (word, meaning) pairs."""
    etree_obj = etree.HTML(html)
    word_list = etree_obj.xpath('//strong/text()')
    explain_list = etree_obj.xpath('//td[@class="span10"]/text()')
    for item in zip(word_list, explain_list):
        items.append(item)


def save():
    """Write the collected data to a CSV file."""
    # newline="" prevents the csv module from writing blank rows on Windows
    with open("./shanbei.csv", "a", encoding="utf-8", newline="") as file:
        writer = csv.writer(file)
        for item in items:
            writer.writerow(item)


def start():
    """Run the scraper: fetch all three pages, then save."""
    for i in range(1, 4):
        url = base_url.format(i)
        html = parse_url(url)
        parse_html(html)
    save()


if __name__ == '__main__':
    start()

3. Running Results
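The original post showed a screenshot of the results here. As a quick sanity check, you can read the CSV back with the csv module. This sketch is self-contained: it first writes a couple of sample rows (in a real run, the scraper's save() produces shanbei.csv), then reads them back:

```python
import csv

# Self-contained demo: write two sample rows first
# (in the real run, save() in the scraper produces this file)
with open("shanbei.csv", "w", encoding="utf-8", newline="") as f:
    csv.writer(f).writerows([("python", "n. 蟒蛇"), ("list", "n. 列表")])

# Read the CSV back and inspect what was saved
with open("shanbei.csv", encoding="utf-8", newline="") as f:
    rows = list(csv.reader(f))

print(f"{len(rows)} entries, first: {rows[0]}")
```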

Good times are always short. Much as I'd like to keep chatting, this post ends here. If you haven't had enough, don't worry: see you in the next one!

