Fixing the 403 Forbidden error from urlretrieve in Python 3

When downloading images with urllib.request.urlretrieve(urlcode, folder_path+'test.jpg') (note that urlretrieve belongs to the standard-library urllib, not to requests), the server may respond with a 403 Forbidden error.
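The 403 arises because urlretrieve identifies itself with a default "Python-urllib/3.x" User-Agent, which many servers reject. One fix (a minimal sketch, using a placeholder image URL) is to stay with urllib but attach a browser User-Agent through a Request object; the other, used in this post, is to switch to requests:

```python
import urllib.request

# urlretrieve sends "Python-urllib/3.x" as its User-Agent, which many
# servers reject with 403. Wrapping the URL in a Request object lets us
# attach a browser User-Agent instead.
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) '
                        'Gecko/20100101 Firefox/23.0'}
req = urllib.request.Request('http://example.com/test.jpg', headers=header)

# urllib.request.urlopen(req) would now fetch the image with the browser
# User-Agent; writing response.read() to a file in 'wb' mode saves it.
```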

res = requests.get(item)
with open(folder_path + item[-10:], 'wb') as f:
    f.write(res.content)

The snippet above performs the download, where item is the image URL and folder_path is the local directory to save into.
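A caveat on item[-10:]: it assumes every image URL ends in exactly ten characters of file name. A more robust alternative (a sketch, not part of the original code) derives the name from the URL path:

```python
import os
from urllib.parse import urlsplit

def filename_from_url(url):
    """Return the final path component of a URL, e.g. 'abc123.jpg'."""
    # urlsplit().path drops any query string; basename keeps the last segment
    return os.path.basename(urlsplit(url).path)

print(filename_from_url('http://ww1.sinaimg.cn/mw600/abc123.jpg'))  # abc123.jpg
```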
Below is the complete Python 3 code for scraping the images:

import requests
from bs4 import BeautifulSoup

url='http://jandan.net/pic/page-7707'
header = {'User-Agent':'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:23.0) Gecko/20100101 Firefox/23.0'}

# header={'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
source_code = requests.get(url, headers=header)

plain_text = source_code.text

download_links = []
Soup = BeautifulSoup(plain_text, 'html.parser')  # name a parser explicitly
folder_path = "/Users/chenmeiji/Desktop/a3/"

# collect the src attribute of every <img> tag on the page
for pic_tag in Soup.find_all("img"):
    pic_link = pic_tag.get('src')
    download_links.append(pic_link)

# fetch each image with the same browser header and save it locally;
# the last 10 characters of the URL serve as the file name
for item in download_links:
    res = requests.get(item, headers=header)
    with open(folder_path + item[-10:], 'wb') as f:
        f.write(res.content)
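One more caveat: img src attributes are sometimes protocol-relative (starting with //) or page-relative, which requests.get cannot fetch directly. Resolving each src against the page URL with urljoin normalizes them (a defensive sketch, not part of the original post):

```python
from urllib.parse import urljoin

page_url = 'http://jandan.net/pic/page-7707'

def absolutize(src, base=page_url):
    """Resolve protocol-relative or page-relative img src values
    into absolute URLs that requests can fetch."""
    return urljoin(base, src)

print(absolutize('//ww1.sinaimg.cn/mw600/abc.jpg'))
# http://ww1.sinaimg.cn/mw600/abc.jpg
```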

For the basics behind this crawler, see my other post, "Web Scraping from Scratch".
