1. The settings.py file

# -*- coding: utf-8 -*-

# Scrapy settings for jd project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'jd'

SPIDER_MODULES = ['jd.spiders']
NEWSPIDER_MODULE = 'jd.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./jingdong1.log"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'jd (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'jd.middlewares.JdSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'jd.middlewares.JdDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'jd.pipelines.JdPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
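
To confirm Scrapy actually picks these values up, you can print the resolved settings. This is a minimal check, not part of the original post, and it assumes it is run from the project root (next to scrapy.cfg) so the jd project is discoverable:

# Print the settings Scrapy resolves for this project (run from the project root).
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
print(settings.get('BOT_NAME'))   # expected: jd
print(settings.get('LOG_LEVEL'))  # expected: WARNING
print(settings.get('LOG_FILE'))   # expected: ./jingdong1.log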


2. The jingdong.py file

# -*- coding: utf-8 -*-
import scrapy
import logging
import json

logger = logging.getLogger(__name__)


class JingdongSpider(scrapy.Spider):
    name = 'jingdong'
    allowed_domains = ['zhaopin.jd.com']
    start_urls = ['http://zhaopin.jd.com/web/job/job_list?page=1']
    pageNum = 1

    def parse(self, response):
        content = response.body.decode()
        content = json.loads(content)
        # Strip empty values out of each dict in the list
        for i in range(len(content)):
            # list(content[i].keys()) snapshots the keys of the current dict
            for key in list(content[i].keys()):  # content[i] is a dict
                if not content[i].get(key):  # look up the value for this key
                    del content[i][key]  # drop the empty entry
        for i in range(len(content)):
            logging.warning(content[i])
        # Pagination: request the next page until page 355
        self.pageNum = self.pageNum + 1
        if self.pageNum <= 355:
            next_url = "http://zhaopin.jd.com/web/job/job_list?page=" + str(self.pageNum)
            yield scrapy.Request(next_url, callback=self.parse)
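
The nested loop that strips empty values is the trickiest part of parse, so here is the same idea pulled out into a standalone function that can be tested on its own. This is a sketch added for illustration; the helper name and the sample record are made up:

def drop_empty_values(records):
    """Remove keys with falsy values (empty string, 0, None) from each dict, in place."""
    for record in records:
        # list(...) snapshots the keys so entries can be deleted while iterating
        for key in list(record.keys()):
            if not record.get(key):
                del record[key]
    return records

sample = [{"jobName": "engineer", "city": "", "count": 0, "dept": None}]
print(drop_empty_values(sample))  # [{'jobName': 'engineer'}]

Run the spider with scrapy crawl jingdong; given the LOG_LEVEL and LOG_FILE settings above, the logging.warning(...) output goes to ./jingdong1.log instead of the console.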

3. A note: JD's job-list pagination is driven by JavaScript, so a CrawlSpider cannot follow the pages automatically. Instead, we can look in the browser's Network panel to see how the page fetches its data.

For example: http://zhaopin.jd.com/web/job/job_list?page=2
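
Before writing the spider, it is worth probing that endpoint directly to confirm it returns plain JSON. Here is a minimal sketch using the requests library; it is not from the original post, and it assumes the endpoint still responds with a JSON list of job dicts:

# Standalone probe of the JSON endpoint found in the Network panel.
import json

import requests

url = "http://zhaopin.jd.com/web/job/job_list?page=2"
resp = requests.get(url, timeout=10)
jobs = resp.json()  # assumed: a JSON list with one dict per job posting
print(len(jobs))
print(json.dumps(jobs[0], ensure_ascii=False, indent=2))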

############# jingdong works, so now let's try Tencent's job postings ###############

Give it a test run!

You've seen the result! Time to get to work!

1. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for tencent project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'tencent'

SPIDER_MODULES = ['tencent.spiders']
NEWSPIDER_MODULE = 'tencent.spiders'

LOG_LEVEL = "WARNING"
LOG_FILE = "./qq.log"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'

# Obey robots.txt rules
#ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'tencent.middlewares.TencentSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'tencent.middlewares.TencentDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'tencent.pipelines.TencentPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'


2. mahuateng.py

# -*- coding: utf-8 -*-
import scrapy
import json
import logging


class MahuatengSpider(scrapy.Spider):
    name = 'mahuateng'
    allowed_domains = ['careers.tencent.com']
    start_urls = ['https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=1&pageSize=10&language=zh-cn&area=cn']
    pageNum = 1

    def parse(self, response):
        content = response.body.decode()
        content = json.loads(content)
        content = content['Data']['Posts']
        # Drop empty values from each post dict
        for con in content:
            # print(con)
            for key in list(con.keys()):
                if not con.get(key):
                    del con[key]
        # Log every job posting
        for con in content:
            logging.warning(con)
        # Pagination: request the next page until page 118
        self.pageNum = self.pageNum + 1
        if self.pageNum <= 118:
            next_url = "https://careers.tencent.com/tencentcareer/api/post/Query?timestamp=1561688387174&countryId=&cityId=&bgIds=&productId=&categoryId=&parentCategoryId=40003&attrId=&keyword=&pageIndex=" + str(self.pageNum) + "&pageSize=10&language=zh-cn&area=cn"
            yield scrapy.Request(next_url, callback=self.parse)
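
The query string in next_url is long enough that plain concatenation gets error-prone. A possible alternative, not from the original post, is to rebuild the URL with urllib.parse.urlencode; the empty parameters (countryId, cityId, and so on) are omitted here on the assumption that the API ignores them:

# Build the Tencent careers API URL from a dict instead of concatenating strings.
from urllib.parse import urlencode

BASE = "https://careers.tencent.com/tencentcareer/api/post/Query"

def page_url(page_index):
    params = {
        "timestamp": "1561688387174",
        "parentCategoryId": "40003",
        "pageIndex": page_index,
        "pageSize": 10,
        "language": "zh-cn",
        "area": "cn",
    }
    return BASE + "?" + urlencode(params)

print(page_url(2))  # ...&pageIndex=2&pageSize=10&language=zh-cn&area=cn

In the spider, next_url = page_url(self.pageNum) would then replace the string concatenation.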

It worked when I tested it myself; for you, it's down to luck, haha!

This is all just personal tinkering, so the code is rather rough.

Reposted from: https://www.cnblogs.com/ywjfx/p/11101091.html
