Python Practical Plan Study Notes: Scraping Photos of Taylor Swift

### Result

[Result screenshot: Taylor Swift.png]

### Code

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from bs4 import BeautifulSoup
import os, requests, urllib.request, time

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.110 Safari/537.36',
    'Cookie': 'locale=zh-cn; __whiAnonymousID=b0f6b20dd1ca4c5a93dbccb177b825d7; __qca=P0-806928633-1464338850044; __utmt=1; hide_intercept=true; auth=no; _session=8733749a56921ad91dae906861a0b578; _ga=GA1.2.1548010943.1464338850; __utma=222371101.1548010943.1464338850.1464338850.1464403083.2; __utmb=222371101.16.9.1464405673489; __utmc=222371101; __utmz=222371101.1464403083.2.2.utmcsr=study.163.com|utmccn=(referral)|utmcmd=referral|utmcct=/course/courseLearn.htm; __utmv=222371101.|27=locale=zh-cn=1; _weheartit_anonymous_session=%7B%22page_views%22%3A7%2C%22search_count%22%3A0%2C%22last_searches%22%3A%5B%5D%2C%22last_page_view_at%22%3A1464405682330%7D',
}
pic_path = "D:/Users/Jessy zhou/Plan-for-combating-master/1_week/1.4 homework/pic/"
pic_recode = pic_path + 'pic.txt'
user_recode = pic_path + 'user.txt'

user_list = []  # accumulated across all pages; the per-page pic_url list is built inside get_info


def write_file(path, item):
    file = open(path, 'a')
    file.write(item)
    file.flush()


def down_pic(path, items):
    if not os.path.exists(path):  # create the target folder if it is missing
        os.makedirs(path)
    for item in items:
        time.sleep(2)  # throttle requests so we do not hammer the server
        write_file(pic_recode, item + '\n')
        # build a local file name from a slice of the image URL plus its extension
        urllib.request.urlretrieve(item, path + item[-24:-15] + item[-4:])


def get_info(url):
    pic_url = []
    try:
        web_data = requests.get(url, headers=headers)
        web_data.encoding = 'UTF-8'
        if web_data.status_code == 200:
            soup = BeautifulSoup(web_data.text, 'html.parser')
            users = soup.select('span.text-big')
            users_pic = soup.select('div.user-preview.grid-flex > a > img')
            pics = soup.select('img.entry_thumbnail')

            for user, user_pic, pic in zip(users, users_pic, pics):
                user_data = {
                    'user': user.get_text(),
                    'user_pic': user_pic.get('src'),
                }
                user_list.append(user_data)
                pic_url.append(pic.get('src'))
            down_pic(pic_path, pic_url)
        # write_file(user_recode, user_list)  # does not run yet; once the data is loaded into a database the gbk encoding problem should be solvable
        else:
            print('We got invalid page {} with status_code {}'.format(url, web_data.status_code))  # report the failing URL and its status code
    except IOError as e:  # catch network and download errors
        print(e)


def get_url(num):
    url = 'http://weheartit.com/inspirations/taylorswift?page='
    for i in range(1, num):  # pages 1 .. num-1
        get_info(url + str(i))


if __name__ == '__main__':
    get_url(200)
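`get_info` walks three parallel CSS selections with `zip`, pairing each user name with their avatar and photo. A minimal sketch of that pairing, with dummy lists standing in for the live BeautifulSoup selections (the names and URLs below are made up):

```python
# Dummy data standing in for soup.select(...) results in get_info.
users = ['alice', 'bob']
users_pic = ['http://example.com/a.jpg', 'http://example.com/b.jpg']
pics = ['http://example.com/p1.jpg', 'http://example.com/p2.jpg']

user_list = []
pic_url = []
# zip stops at the shortest list, so mismatched selections silently drop items
for user, user_pic, pic in zip(users, users_pic, pics):
    user_list.append({'user': user, 'user_pic': user_pic})
    pic_url.append(pic)

print(pic_url)  # the image URLs that would be handed to down_pic
```

Because `zip` truncates to the shortest input, a selector that matches fewer elements than the others will silently skip records rather than raise an error.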

### Problem summary

1. How to download an image and save it into a folder

import urllib.request
urllib.request.urlretrieve(item, path)  # path is the full image path: folder path + file name
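The script above names each saved file by slicing the URL (`item[-24:-15] + item[-4:]`), which breaks if the URL length changes. A more robust alternative is to derive the name from the URL's path segments; a sketch, where the weheartit-style URL is a made-up example:

```python
from urllib.parse import urlparse


def filename_from_url(url):
    """Build a local file name from the last two path segments of an image URL."""
    parts = urlparse(url).path.rstrip('/').split('/')
    # join the numeric id segment and the file name so different images do not collide
    return '_'.join(parts[-2:])


# hypothetical image URL in the style used by the site
url = 'http://data.whicdn.com/images/123456789/original.jpg'
print(filename_from_url(url))  # 123456789_original.jpg
```

Unlike a fixed slice, this keeps working when the host or path length changes, as long as the id precedes the file name.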

2. How to check whether a path exists

import os
if not os.path.exists(path):
    os.makedirs(path)  # create the directory if it does not exist
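Since Python 3.2, `os.makedirs` can be told to tolerate an existing directory, which removes the need for the `exists` check entirely. A self-contained sketch using a temporary directory so it is safe to run anywhere:

```python
import os
import tempfile

base = tempfile.mkdtemp()                     # throwaway base directory for the demo
target = os.path.join(base, 'pic', 'thumbs')  # nested path that does not exist yet

os.makedirs(target, exist_ok=True)  # creates all intermediate dirs
os.makedirs(target, exist_ok=True)  # calling again is harmless, no exception
print(os.path.isdir(target))        # True
```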

3. Unable to write to the txt file
a. Note that the file extension is txt, not text (even though the response is read as text).
b. The file has to be opened in the right mode and flushed; compare the two versions below.

### Broken code

def write_file(path, item):
    file = open(path, 'w')
    file.write(item)

Nothing gets written this way: the buffer is never flushed, and because mode 'w' truncates the file on every open, any earlier writes are discarded as well.

### Corrected code

def write_file(path, item):
    file = open(path, 'a')  # 'a': create the file if missing; otherwise seek to the end, so each call appends instead of overwriting
    file.write(item)  # item must be a string; a list or dict cannot be written directly
    file.flush()  # without this nothing reaches disk; file.close() would also work
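The difference between the two modes is easy to verify: mode 'w' truncates on every open so only the last write survives, while 'a' appends across opens; using `with` closes the file, which also flushes the buffer. A sketch with a temporary file:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'pic.txt')

# mode 'w': each open truncates the file, so only the last write survives
for line in ['first\n', 'second\n']:
    with open(path, 'w') as f:
        f.write(line)
print(open(path).read())  # second

# mode 'a': each open seeks to the end, so the writes accumulate
os.remove(path)
for line in ['first\n', 'second\n']:
    with open(path, 'a') as f:
        f.write(line)
print(open(path).read())  # first / second
```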
