Since the code is fairly long, the commentary is kept inline as code comments.

The scraper collects a novel's title, author, cover image, and the full text of every chapter.

(0) A quick look at the result

(1) Database design

from django.db import models
from django.contrib.auth.models import AbstractUser
from django.db.models.signals import pre_delete
from django.dispatch import receiver


# Novel list
class NovelList(models.Model):
    nid = models.AutoField(primary_key=True)
    name = models.CharField(verbose_name='小说标题', max_length=15)
    url = models.FileField(verbose_name='小说封面地址', upload_to='novel/', default='')
    author = models.CharField(verbose_name='小说作者', max_length=20, default="")
    introduce = models.CharField(verbose_name='小说介绍', max_length=300, default="")
    pages = models.IntegerField(verbose_name='小说章节数', default=0)

    class Meta:
        verbose_name_plural = '小说'


# Novel chapters
class NovelContent(models.Model):
    nid = models.AutoField(primary_key=True)
    content = models.CharField(verbose_name='章节内容', max_length=8000)
    # id of the novel this chapter belongs to
    novel = models.ForeignKey(verbose_name='小说id', to='NovelList', to_field='nid', on_delete=models.CASCADE)
    title = models.CharField(verbose_name='章节标题', max_length=25)
    # id of the previous chapter
    pre_chapter_id = models.IntegerField(verbose_name='上一章章节id', default=0)
    # id of the next chapter
    then_chapter_id = models.IntegerField(verbose_name='下一章章节id', default=0)

    class Meta:
        verbose_name_plural = '小说内容'
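The pre_chapter_id / then_chapter_id fields chain the chapters of one novel into a doubly linked list, with 0 marking either end. A minimal sketch of how a reader view could walk that chain forward; plain dicts stand in for NovelContent rows here so the snippet runs without Django, and the names walk_forward / chapters are mine, not from the project:

```python
# Dicts stand in for NovelContent rows: nid is the primary key,
# pre_chapter_id / then_chapter_id point at the neighbouring chapters (0 = none).
chapters = {
    1: {"nid": 1, "title": "Chapter 1", "pre_chapter_id": 0, "then_chapter_id": 2},
    2: {"nid": 2, "title": "Chapter 2", "pre_chapter_id": 1, "then_chapter_id": 3},
    3: {"nid": 3, "title": "Chapter 3", "pre_chapter_id": 2, "then_chapter_id": 0},
}


def walk_forward(chapters, start_nid):
    """Yield chapter titles from start_nid through the last chapter."""
    nid = start_nid
    while nid:  # then_chapter_id == 0 marks the last chapter
        chapter = chapters[nid]
        yield chapter["title"]
        nid = chapter["then_chapter_id"]


print(list(walk_forward(chapters, 1)))  # ['Chapter 1', 'Chapter 2', 'Chapter 3']
```

With the real models, each dict lookup would instead be a `NovelContent.objects.get(nid=...)` query, which is why the scraper below takes care to fill both link fields as it inserts chapters.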

(2) Main scraping function

import re
import requests
from lxml import etree
import os
import time

if __name__ == '__main__':
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'djangoProject.settings')
    import django
    import random
    django.setup()
    from app import models
    from django.core.files import File
    from django.core.files.base import ContentFile

    # Fetch a page and return it as an lxml element tree
    def get_html(url):
        user_agent = [
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
            "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
            "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
            "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
            "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
            "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
            "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
            "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
            "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 LBBROWSER",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; QQBrowser/7.0.3698.400)",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
            "Mozilla/5.0 (iPad; U; CPU OS 4_2_1 like Mac OS X; zh-cn) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8C148 Safari/6533.18.5",
            "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.0b13pre) Gecko/20110307 Firefox/4.0b13pre",
            "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
            "Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10",
        ]
        headers = {'User-Agent': random.choice(user_agent)}
        page_text = requests.get(url=url, headers=headers).text
        html = etree.HTML(page_text, etree.HTMLParser())
        return html

    """
    The site injects an ad line into each chapter; which one depends on the page id:
    page_id % 3 == 1 : 首发网址htTp://m.26w.cc
    page_id % 3 == 2 : 记住网址m.26ksw.cc
    page_id % 3 == 0 : 一秒记住http://m.26ksw.cc
    """

    def get_novel(book_id=57583, page_start_id=54365820, pages=0, is_force_read=False):
        """
        NB: "extra" chapters outside the normal numbering make it very hard to
        detect missing chapters reliably.
        Chapter URL pattern: http://www.26ksw.cc/book/{}/{}.html
        :param book_id: book id
        :param page_start_id: id of the first chapter page
        :param is_force_read: optional; keep reading even when the chapter count does not match
        :param pages: expected chapter count, optional; the mismatch check only runs when it is given
        :return:
        """
        # Read the book's index page, retrying until it loads
        while 1:
            try:
                url = 'http://www.26ksw.cc/book/{}.html'.format(book_id)
                html = get_html(url)
                page_all = pages
                total_a = len(html.xpath('//*[@id="list"]/dl/dd[position() > 12]/a'))
                novel_name = html.xpath('//*[@id="info"]/h1/text()')[0].strip()
                author = html.xpath('//*[@id="info"]/p[1]/a/text()')[0].strip()
                introduce = html.xpath('//*[@id="intro"]/text()')[0].strip()
                url = "http://www.26ksw.cc" + html.xpath('//*[@id="fmimg"]/img/@src')[0]
                if total_a <= page_all:
                    if not is_force_read:
                        raise Exception("Chapters are missing: expected {}, found {}".format(page_all, total_a))
                    else:
                        print("Warning! Chapters are missing: expected {}, found {}".format(page_all, total_a))
                break
            except IndexError:
                print('Load failed, waiting 5s')
                time.sleep(5)

        if not (novel_name and author and introduce):
            raise Exception("Failed to read book metadata")
        else:
            novel_obj = models.NovelList.objects.filter(name=novel_name).first()
            if not novel_obj:
                novel_obj = models.NovelList.objects.create(**{
                    "author": author,
                    "introduce": introduce,
                    "pages": total_a,
                    "name": novel_name,
                })
            # Save the cover image
            origin_path = os.path.dirname(os.getcwd()) + '/media/novel/'
            img_save_url = origin_path + str(novel_obj.nid) + '.jpg'
            with open(img_save_url, "wb") as file:
                img = requests.get(url)
                file.write(img.content)
            novel_obj.url = 'novel/{}.jpg'.format(novel_obj.nid)
            novel_obj.save()

        # id of the next chapter page
        next_page_id = page_start_id
        # current chapter number
        page_current = 1
        # finished flag
        end_flag = False
        page_start_then_nid = 0
        url = 'http://www.26ksw.cc/book/{}/{}.html'.format(book_id, next_page_id)
        # Keep reading until there is no next chapter
        while not end_flag:
            time.sleep((random.random() + 0.4) * 5)
            while 1:
                try:
                    html = get_html(url)
                    # Link to the next chapter
                    next_page_href = html.xpath('//*[@id="main"]/div/div/div[2]/div[1]/a[4]/@href')[0]
                    # Scrape the chapter title, the paragraphs and the book name
                    novel = {'title': "", 'name': "", 'content': []}
                    novel_name = html.xpath('//*[@id="main"]/div/div/div[1]/a[2]/text()')[0].strip()
                    novel_title = html.xpath('//*[@id="main"]/div/div/div[2]/h1/text()')[0].strip()
                    novel_content_list = html.xpath('//*[@id="content"]/p[@class="content_detail"]')
                    for article_p in novel_content_list:
                        # Strip the injected ad line; which ad appears depends on page_id % 3
                        if int(next_page_id) % 3 == 1:
                            p = "<p>" + article_p.xpath('./text()')[0].strip().replace('\r\n                \r\n                    首发网址htTp://m.26w.cc', '') + '</p>'
                        if int(next_page_id) % 3 == 2:
                            p = "<p>" + article_p.xpath('./text()')[0].strip().replace('\r\n                \r\n                    记住网址m.26ksw.cc', '') + '</p>'
                        if int(next_page_id) % 3 == 0:
                            p = "<p>" + article_p.xpath('./text()')[0].strip().replace('\r\n                \r\n                    一秒记住http://m.26ksw.cc', '') + '</p>'
                        novel['content'].append(p)
                    novel['title'] = novel_title
                    novel['name'] = novel_name
                    novel['content'] = "".join(novel['content'])
                    print('next_page_id={}'.format(next_page_id))
                    print('current chapter: {}'.format(page_current), 'content length: {}'.format(len(novel['content'])), novel, '\n')
                    # Create the chapter record
                    if page_current == 1:
                        page_obj = models.NovelContent.objects.create(
                            title=novel['title'],
                            content=novel['content'],
                            novel=novel_obj,
                            pre_chapter_id=0,
                            then_chapter_id=page_current + 1,
                        )
                        page_start_then_nid = page_obj.nid
                        page_obj.then_chapter_id = page_start_then_nid + 1
                        page_obj.save()
                    else:
                        page_start_then_nid += 1
                        models.NovelContent.objects.create(
                            title=novel['title'],
                            content=novel['content'],
                            novel=novel_obj,
                            pre_chapter_id=page_start_then_nid - 1,
                            then_chapter_id=page_start_then_nid + 1,
                        )
                    # id of the next chapter
                    next_page_id = re.search(r"/(\d+).html", next_page_href)
                    # No next chapter: stop
                    if not next_page_id:
                        end_flag = True
                        break
                    else:
                        next_page_id = next_page_id.group(1)
                        # URL of the next chapter
                        url = 'http://www.26ksw.cc/book/{}/{}.html'.format(book_id, next_page_id)
                    page_current += 1
                    # Success: return to the outer loop for the per-chapter rate-limiting sleep
                    break
                except (IndexError, ConnectionError):
                    # A tuple is needed here: `except IndexError or ConnectionError`
                    # would only ever catch IndexError
                    time.sleep(5)
                    print('Index error, waiting 5s')

        # Mark the last chapter as having no successor
        page_obj = models.NovelContent.objects.last()
        page_obj.then_chapter_id = 0
        page_obj.save()
        print("Novel scraped successfully, chapters scraped: {}".format(page_current))

    get_novel(book_id=2320, page_start_id=9723862, pages=1672, is_force_read=False)
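The ad-stripping logic above repeats three near-identical branches keyed on page_id % 3. It can be factored into a small table-driven helper; strip_site_ad and AD_BY_REMAINDER are hypothetical names of mine, and for brevity this sketch drops the long whitespace prefix that the original .replace() calls include:

```python
# The site injects one of three ad strings per chapter page,
# selected by page_id % 3 (mapping taken from the scraper above).
AD_BY_REMAINDER = {
    1: '首发网址htTp://m.26w.cc',
    2: '记住网址m.26ksw.cc',
    0: '一秒记住http://m.26ksw.cc',
}


def strip_site_ad(text, page_id):
    """Remove the ad string the site injects for this page id."""
    return text.replace(AD_BY_REMAINDER[int(page_id) % 3], '')


# 9723864 % 3 == 0, so the '一秒记住...' variant is removed
print(strip_site_ad('正文开始 一秒记住http://m.26ksw.cc', 9723864))
```

Keeping the mapping in one dict also makes it easier to update when the site rotates in a new ad string, instead of editing three separate branches.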
