A simple Scrapy application: scraping tupianzj.com (图片之家)
Scrapy version: 2.3
Goal: crawl the images from https://www.tupianzj.com/meinv/
GitHub: https://github.com/fddqfddq/scrapy/tree/master/tupianzj
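A note on dependencies (assumed, not spelled out in the repo): besides Scrapy itself, the ImagesPipeline used in step 4 needs Pillow, and the middleware in step 7 imports fake_useragent, so something like the following is expected to be installed:

pip install scrapy==2.3.0 pillow fake-useragent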
1. Create the project
scrapy startproject tupianzjproject
2. Generate the spider, using the crawl template
scrapy genspider tupianzj tupianzj.com -t crawl
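For orientation, the layout generated by the two commands above looks roughly like the sketch below (a sketch, not verbatim output; note that the rest of this post and the GitHub repo use tupianzj rather than tupianzjproject as the package name, e.g. tupianzj.items and tupianzj.spiders):

tupianzjproject/
    scrapy.cfg
    tupianzjproject/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            tupianzj.py        # created by genspider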
3. Edit items.py

The item defines the following fields:

url: page URL
category: category
title: title
imgurl: image URL(s)
imgname: image title (also used as the download folder name)
updatetime: update time
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class TupianzjItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    url = scrapy.Field()
    category = scrapy.Field()
    title = scrapy.Field()
    imgurl = scrapy.Field()
    imgname = scrapy.Field()
    updatetime = scrapy.Field()
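Since a scrapy.Item behaves like a dict, field access looks like the short sketch below (the values are made up; only the field names come from the item definition above):

>>> from tupianzj.items import TupianzjItem
>>> item = TupianzjItem()
>>> item['title'] = 'example title'
>>> item['imgurl'] = ['https://www.tupianzj.com/pic/example.jpg']
>>> item['title']
'example title'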
4. Edit pipelines.py
from scrapy.pipelines.images import ImagesPipeline
from scrapy.exceptions import DropItem
from scrapy.http import Request
import re


class TupianzjPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        for image_url in item['imgurl']:
            yield Request(image_url, meta={'item': item['imgname']})

    def file_path(self, request, response=None, info=None):
        name = request.meta['item']
        # name = filter(lambda x: x not in '()0123456789', name)
        name = re.sub(r'[?\\*|“<>:/()0123456789]', '', name)
        image_guid = request.url.split('/')[-1]
        # name2 = request.url.split('/')[-2]
        filename = u'full/{0}/{1}'.format(name, image_guid)
        return filename
        # return 'full/%s' % (image_guid)

    def item_completed(self, results, item, info):
        # image_path = [x['path'] for ok, x in results if ok]
        # if not image_path:
        #     raise DropItem('Item contains no images')
        # item['image_paths'] = image_path
        return item
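To see what the file_path() rule produces, here is a standalone sketch of the same naming logic (plain Python, no Scrapy objects; the title and URL are made-up examples). ImagesPipeline then joins the returned relative path onto IMAGES_STORE from step 6:

import re

def make_path(imgname, image_url):
    # same cleanup as file_path(): drop characters that are illegal in file names, plus digits
    name = re.sub(r'[?\\*|“<>:/()0123456789]', '', imgname)
    image_guid = image_url.split('/')[-1]
    return 'full/{0}/{1}'.format(name, image_guid)

print(make_path('某某图集(1)', 'https://www.tupianzj.com/pic/12345.jpg'))
# full/某某图集/12345.jpg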
5. Edit spiders/tupianzj.py
Note: item['imgurl'] = response.xpath("//div[@id='bigpic']//img//@src").getall() takes the src of the image under the div with id bigpic, and it must use getall() (which returns a list) rather than get(); the pipeline's get_media_requests() iterates over item['imgurl'], so a plain string would raise an error.
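A quick standalone way to see the difference between get() and getall() (made-up HTML, same XPath as the spider):

from scrapy.selector import Selector

html = "<div id='bigpic'><img src='/a.jpg'><img src='/b.jpg'></div>"
sel = Selector(text=html)
print(sel.xpath("//div[@id='bigpic']//img//@src").get())     # '/a.jpg'  -- first match only
print(sel.xpath("//div[@id='bigpic']//img//@src").getall())  # ['/a.jpg', '/b.jpg']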
The spider follows every link on a page that ends in .html; the detail pages all share the same layout, so one parser can handle them all. Sub-category pages such as https://www.tupianzj.com/meinv/xingan, including their pagination, are crawled as well.

Run it with:

scrapy crawl tupianzj
scrapy crawl tupianzj -a tag=xingan    (crawls https://www.tupianzj.com/meinv/xingan only)
import scrapy
from tupianzj.items import TupianzjItem


class TupianzjSpider(scrapy.Spider):
    name = 'tupianzj'

    def start_requests(self):
        url = 'https://www.tupianzj.com/meinv/'
        tag = getattr(self, 'tag', None)
        if tag is not None:
            url = url + tag
        yield scrapy.Request(url, self.parse)

    def parse(self, response):
        # alternative selectors that were tried:
        # page_links = response.xpath("//div[@id='container']//a")        # every link under div#container
        # page_links = response.xpath("//div[@class='warpbox_con']//a")   # every link under div.warpbox_con
        # page_links = response.css("ul.mv_list_r li a")
        # page_links = response.css("div.warpbox_con a[href$='.html']")
        page_links = response.css("a[href$='.html']")  # every link on the page ending in .html
        yield from response.follow_all(page_links, self.parse_detail)

        # the third-from-last link in the div.pages pagination list (the "next page" link)
        # next_link = response.css("div.pages ul li:nth-last-child(3) a::attr(href)").get()
        next_link = response.css("div.pages ul li:nth-last-child(3) a")
        if next_link is not None:
            yield from response.follow_all(next_link, self.parse)

    def parse_detail(self, response):
        try:
            if response.xpath("//div[@id='bigpic']//img//@src").get() is not None:
                item = TupianzjItem()
                item['url'] = response.url
                # text of the last link in the div.weizhi breadcrumb
                item['category'] = response.css("div.weizhi a:last-child::text").get(default='')
                item['title'] = response.xpath("//div[@class='warp']//h1//text()").get(default='')
                # getall() is required here (a list of image src values), otherwise the pipeline breaks
                item['imgurl'] = response.xpath("//div[@id='bigpic']//img//@src").getall()
                item['imgname'] = response.xpath("//div[@class='warp']//h1//text()").get(default='')
                item['updatetime'] = response.xpath("//div[@class='article_info']//u//text()").get(default='')
                yield item
                # pagination link inside the picture set
                next_url = response.css("div.pages ul li:last-child a::attr(href)").get()
                if next_url is not None:
                    # next page of the set
                    yield response.follow(next_url, callback=self.parse_detail)
        except Exception as e:
            print(e)
        finally:
            pass
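To also dump the scraped fields to a file for inspection, the built-in feed export can be added to either of the commands above (the output file name is just an example):

scrapy crawl tupianzj -o items.json
scrapy crawl tupianzj -a tag=xingan -o items_xingan.json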
6. Edit settings.py

The settings that matter for this project are:

IMAGES_STORE = r'C:\D\ImagesRename'   # where ImagesPipeline saves the downloaded images

DOWNLOADER_MIDDLEWARES = {
    'tupianzj.middlewares.TupianzjDownloaderMiddleware': 543,   # sets Referer and a random User-Agent
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}

RANDOM_UA_TYPE = 'random'   # pick a random user-agent for every request

HTTPERROR_ALLOWED_CODES = [514]   # let responses with HTTP status 514 reach the spider

The full settings.py, for reference:
# Scrapy settings for tupianzj project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'tupianzj'

SPIDER_MODULES = ['tupianzj.spiders']
NEWSPIDER_MODULE = 'tupianzj.spiders'

# save path
IMAGES_STORE = r'C:\D\ImagesRename'   # where ImagesPipeline saves the downloaded images

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tupianzj (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'tupianzj.middlewares.TupianzjSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'tupianzj.middlewares.TupianzjDownloaderMiddleware': 543,   # sets Referer and a random User-Agent
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}

RANDOM_UA_TYPE = 'random'   # pick a random user-agent for every request
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
HTTPERROR_ALLOWED_CODES = [514]   # let responses with HTTP status 514 reach the spider
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'tupianzj.pipelines.TupianzjPipeline': 300,
    #'tupianzj.JsonWriterPipeline.JsonWriterPipeline': 200,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
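The ITEM_PIPELINES block above also references a JsonWriterPipeline entry that is commented out and not shown in this post. If you want something like it, a minimal sketch (my assumption, modelled on the standard Scrapy docs example, not code from the repo) would be:

import json

class JsonWriterPipeline:
    def open_spider(self, spider):
        self.file = open('items.jl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # write each item as one JSON line
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item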
7. Edit middlewares.py

Use the downloader middleware to set a random User-Agent (and the Referer) on every request.
# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware
from fake_useragent import UserAgent
import random


class TupianzjSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class TupianzjDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.
    #
    # (An earlier, commented-out variant picked the User-Agent at random from a
    # USER_AGENT list in settings; it was replaced by fake_useragent below.)

    # switch to a random User-Agent for every request
    def __init__(self, crawler):
        super(TupianzjDownloaderMiddleware, self).__init__()
        self.ua = UserAgent()
        self.ua_type = crawler.settings.get("RANDOM_UA_TYPE", "random")

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_request(self, request, spider):
        def get_ua():
            return getattr(self.ua, self.ua_type)

        # use the page's own URL as the Referer
        referer = request.url
        if referer:
            request.headers['referer'] = referer
        request.headers.setdefault('User-Agent', get_ua())

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
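To sanity-check the random User-Agent part in isolation, here is a tiny standalone sketch of fake_useragent (RANDOM_UA_TYPE is just an attribute name on the UserAgent object, e.g. 'random', 'chrome' or 'firefox'):

from fake_useragent import UserAgent

ua = UserAgent()
print(ua.random)              # a random browser User-Agent string
print(getattr(ua, 'random'))  # what the middleware does with RANDOM_UA_TYPE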