Python Web Scraping Notes

1. Overview
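These notes walk through a small movie-site crawler: fetching pages with requests, parsing them with BeautifulSoup or lxml, saving the results to text, CSV, and SQLite, downloading images, and finally packaging the script as a standalone executable.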


2. Web page parsing

2.1 Fetching page data

import requests
from bs4 import BeautifulSoup

def parse_url(page=1):
    # The list pages appear to be paginated as list_6_<page>.html
    base_url = 'https://www.ygdy8.com/html/gndy/rihan/list_6_'

    # Send a browser-like User-Agent so the request is not rejected
    req_headers = {
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'
    }

    target_url = '{}{}.html'.format(base_url, page)
    res = requests.get(target_url, headers=req_headers)
    if res.status_code == requests.codes.ok:
        res.encoding = "gb2312"   # the site serves gb2312-encoded pages
        html = res.text
        soup = BeautifulSoup(html, "lxml")
        # Each movie entry sits in a table with class "tbspan"
        movie_block_list = soup.find_all(class_="tbspan")
        return movie_block_list
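A note on the encoding: gbk is a superset of gb2312, so if some characters decode as mojibake, switching to res.encoding = 'gbk' usually helps.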

2.2 Parsing the HTML

There are several common ways to parse a page; in practice the following three cover most needs.

Option 1: requests + BeautifulSoup + find_all
Option 2: requests + BeautifulSoup + CSS select
Option 3: requests + lxml/etree + XPath

Example code for all three approaches:

from bs4 import BeautifulSoup
from lxml import etree

def parse(response):
    # Option 1: requests + BeautifulSoup + find_all
    soup = BeautifulSoup(response, 'lxml')
    movies = soup.find_all(class_="top-list")
    movie_list = movies[0].find_all('li')
    for each_movie in movie_list:
        title = each_movie.find_all(target="_blank")[0].get('title')
        print(title)

    # Option 2: requests + BeautifulSoup + CSS select
    soup = BeautifulSoup(response, 'lxml')
    movies = soup.select('ul[class="top-list fn-clear"] li')
    for each_movie in movies:
        title = each_movie.select('h5 > a')[0].get_text()
        print(title)

    # Option 3: requests + lxml/etree + XPath
    con = etree.HTML(response)
    movies = con.xpath('//ul[@class="top-list fn-clear"]/li')
    for each_movie in movies:
        title = each_movie.xpath('./h5/a/@title')
        print(title)

References:

  1. Python爬虫的两套解析方法和四种爬虫实现过程 - 脚本之家
  2. Beautiful Soup 的用法 - Python 爬虫学习系列教程 - 极客学院 Wiki


3. Data storage

3.1 Text file

    # movie_mark comes from the parsing step in section 2
    file_handle = open('movie.txt', mode='w', encoding='utf-8')

    file_handle.write("【{mark}】\n".format(mark=movie_mark))

    file_handle.close()
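The same write is safer with a with block, which closes the file even if an exception is raised mid-write (a sketch, assuming movie_mark is defined as above):

    with open('movie.txt', mode='w', encoding='utf-8') as file_handle:
        file_handle.write("【{mark}】\n".format(mark=movie_mark))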

3.2 CSV file

import csv

    csv_header = ["序号", "评分", "链接", "简介"]

    # newline="" avoids blank rows on Windows; utf-8-sig adds a BOM so Excel detects UTF-8
    csv_handle = open('file.csv', 'w', newline="", encoding='utf-8-sig')
    csv_write = csv.writer(csv_handle, dialect='excel')
    csv_write.writerow(csv_header)

    csv_write.writerow([str(num), movie_mark, movie_url, movie_summary])

    csv_handle.close()
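If the rows are collected into a list first, writerows writes them all in one call; a minimal sketch, where rows is a hypothetical list of [num, mark, url, summary] lists:

    with open('file.csv', 'w', newline="", encoding='utf-8-sig') as csv_handle:
        csv_write = csv.writer(csv_handle, dialect='excel')
        csv_write.writerow(csv_header)
        csv_write.writerows(rows)  # rows: hypothetical list of row lists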

3.3 SQLite

import sqlite3

    conn = sqlite3.connect("movie.db")
    c = conn.cursor()

    # IF NOT EXISTS makes re-runs safe once the table already exists
    c.execute('''CREATE TABLE IF NOT EXISTS MOVIE
       (ID            INT     PRIMARY KEY     NOT NULL,
       Mark           TEXT    NOT NULL,
       Summary        TEXT,
       Url            TEXT);''')

    # A parameterized INSERT avoids SQL injection and quoting issues
    ins = "INSERT INTO MOVIE (ID, Mark, Summary, Url) VALUES (?,?,?,?)"
    v = (csv_index, movie_mark, movie_summary, movie_url)
    c.execute(ins, v)

    conn.commit()
    conn.close()
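To verify what was stored, the table can be read back with a plain SELECT; a minimal sketch against the movie.db created above:

    conn = sqlite3.connect("movie.db")
    c = conn.cursor()
    for row in c.execute("SELECT ID, Mark, Url FROM MOVIE ORDER BY ID"):
        print(row)  # each row is a tuple (ID, Mark, Url)
    conn.close()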

3.4 Downloading images

Besides scraping text, a crawler can also download images. Reference code is below.

import urllib.request

def download_pic(download_path, pic_link, name):
    # Fetch the image and save it as <download_path><name>.jpg
    urllib.request.urlretrieve(pic_link, download_path + name + ".jpg")
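Since these notes already use requests for pages, the same download can be done with it as well; a minimal sketch (download_pic_requests is a hypothetical name, and the req_headers dict from section 2.1 can be passed if the site rejects bare requests):

import requests

def download_pic_requests(download_path, pic_link, name):
    # res.content holds the raw image bytes
    res = requests.get(pic_link)
    with open(download_path + name + ".jpg", 'wb') as f:
        f.write(res.content)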

Reference: Python 下载图片的三种方法 - qq_34504481 的博客 - CSDN 博客


4. Packaging the script

Instead of installing a Python runtime on every target machine, a script and its dependencies can be bundled into a single executable (an EXE on Windows).
The pyinstaller tool does this packaging; basic usage is shown below.

pip3 install pyinstaller
pyinstaller.exe -F demo.py
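The -F flag (short for --onefile) bundles everything into a single executable; PyInstaller writes the output to the dist/ directory next to the script.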


Appendix A: References

Pyinstaller(python打包为exe文件) - codeforu2 - 博客园
