Table of Contents

Introduction to Beautiful Soup

Parsers

Basic Usage

Node Selectors

Selecting Elements

Extracting Information

1. Extracting the Name

2. Getting Attributes

3. Getting the Content

Nested Selection

Associated Selection

1. Children and Descendants

2. Parents and Ancestors

3. Siblings

4. Extracting Information

Method Selectors

1. find_all()

2. find(): Returns a Single Element

CSS Selectors

1. Nested Selection

2. Getting Attributes

3. Getting the Text


Every web page has a particular structure and hierarchy, and many of its nodes carry id or class attributes to tell them apart, so why not use that structure and those attributes to extract data? This article introduces a powerful parsing tool, Beautiful Soup, which parses web pages by relying on exactly these structural features and attributes. With it, we no longer need to write complicated regular expressions; a few simple statements are enough to extract a given element from a page.

Introduction to Beautiful Soup

Beautiful Soup is a Python library for parsing HTML and XML, which makes it easy to extract data from web pages:

Beautiful Soup provides a handful of simple, Pythonic functions for navigating, searching, and modifying the parse tree. It is a toolkit that parses a document and hands you the data you need to scrape; because it is so simple, a complete application takes very little code.

Beautiful Soup automatically converts the input document to Unicode and the output document to UTF-8, so you do not need to think about encodings unless the document fails to declare one, in which case you only have to specify the original encoding.

Beautiful Soup works on top of excellent Python parsers such as lxml and html5lib, giving users the flexibility to choose between different parsing strategies and raw speed.

Parsers

When parsing, Beautiful Soup actually relies on an underlying parser. Besides the HTML parser in the Python standard library (html.parser), it also supports third-party parsers such as lxml and html5lib.

Among these, the lxml parser can parse both HTML and XML, is fast, and is fault-tolerant, so it is the recommended choice:

from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>Hello</p>", "lxml")
print(soup.p.string)
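To see lxml's fault tolerance in action, we can feed it a fragment with unclosed tags (a minimal sketch; the fragment is invented for illustration):

```python
from bs4 import BeautifulSoup

# Two <p> tags, neither of them closed. lxml repairs the markup:
# it closes each <p> and wraps the fragment in <html><body>.
broken = "<p>First<p>Second"
soup = BeautifulSoup(broken, "lxml")

texts = [p.string for p in soup.find_all("p")]
print(texts)
```

The two paragraphs come back as separate, properly closed sibling nodes.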

Basic Usage

from bs4 import BeautifulSoup

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.prettify())
print(soup.title.string)

Result:

<html>
 <head>
  <title>
   The Dormouse's story
  </title>
 </head>
 <body>
  <p class="title" name="dromouse">
   <b>
    The Dormouse's story
   </b>
  </p>
  <p class="story">
   Once upon a time there were three little sisters; and their names were
   <a class="sister" href="http://example.com/elsie" id="link1">
    <!-- Elsie -->
   </a>
   ,
   <a class="sister" href="http://example.com/lacie" id="link2">
    Lacie
   </a>
   and
   <a class="sister" href="http://example.com/tillie" id="link3">
    Tillie
   </a>
   ;
and they lived at the bottom of a well.
  </p>
  <p class="story">
   ...
  </p>
 </body>
</html>
The Dormouse's story

Step 1: first declare the variable html, an HTML string. Note that it is not a complete HTML document: the body and html tags are not closed. Next, pass it as the first argument to BeautifulSoup; the second argument is the parser type (lxml here). This completes the initialization of the BeautifulSoup object, which we assign to the variable soup.

Step 2: call the prettify() method, which outputs the parsed string with standard indentation. Note that the output contains the closing body and html tags; in other words, BeautifulSoup automatically corrects non-standard HTML. This correction is not done by prettify() itself but happens when BeautifulSoup is initialized.

Step 3: soup.title.string outputs the text content of the HTML title node. soup.title selects the title node, and calling its string attribute returns the text inside it, so we can extract text simply by chaining a few attributes.

Node Selectors

Calling a node's name directly selects that node element; calling the string attribute then returns its text.

Selecting Elements

from bs4 import BeautifulSoup

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.title)
print(type(soup.title))
print(soup.title.string)
print(soup.head)
print(soup.p)

Result:

<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
The Dormouse's story
<head><title>The Dormouse's story</title></head>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>

We also selected the head node; again the result is the node plus all of its inner content. Finally we selected the p node, and this case is special: the result is the content of the first p node only, and the p nodes after it are not selected. In other words, when multiple nodes match, this style of selection returns only the first matching node and ignores the rest.

Extracting Information

1. Extracting the Name

The name attribute returns a node's tag name. Select the title node, then call its name attribute:

from bs4 import BeautifulSoup

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.title.name)

Result:

title

2. Getting Attributes

A node may have several attributes, such as id and class. After selecting a node element, call attrs to get all of its attributes:

from bs4 import BeautifulSoup

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.p.attrs)
print(soup.p.attrs['name'])

Result:

{'class': ['title'], 'name': 'dromouse'}
dromouse

attrs returns a dictionary mapping attribute names to values. Getting the name attribute is therefore just a dictionary lookup: square brackets plus the attribute name.

This can be simplified further:

from bs4 import BeautifulSoup

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.p["name"])
print(soup.p["class"])

Result:

dromouse
['title']

Some results are strings and some are lists, depending on the attribute: name is unique, so it comes back as a string, while class can hold multiple values, so it comes back as a list.
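The difference can be seen directly with a hypothetical one-line document (attribute values invented for illustration):

```python
from bs4 import BeautifulSoup

# class is defined by HTML as a multi-valued attribute; id is single-valued.
soup = BeautifulSoup('<p id="intro" class="lead highlight">Hi</p>', "html.parser")

print(soup.p["id"])     # comes back as a plain string
print(soup.p["class"])  # comes back as a list of classes
```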

3. Getting the Content

The string attribute returns the text contained in a node element:

from bs4 import BeautifulSoup

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.p.string)

Result:

The Dormouse's story

Again, only the first p node is selected, so the text retrieved is the text inside that first p node.

Nested Selection

Every selection returns a bs4.element.Tag, which can itself keep selecting nodes. For example, after selecting the head node, we can continue the chain and select the title node inside it:

from bs4 import BeautifulSoup

html = '''
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.head.title)
print(type(soup.head.title))
print(soup.head.title.string)

Result:

<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
The Dormouse's story

Calling title after head selects the title node; printing its type shows that it is still bs4.element.Tag. In other words, selecting again from a Tag yields another Tag, which is what makes nested selection possible.

Associated Selection

Sometimes a node cannot be located in a single step. In that case, we first select some node element and then, taking it as a base, select its children, parent, siblings, and so on.

1. Children and Descendants

After selecting a node element, call the contents attribute to get its direct children:

from bs4 import BeautifulSoup

html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, "lxml")
print(soup.p.contents)

Result:

['\nOnce upon a time there were three little sisters; and their names were\n', <a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>, '\n', <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, '\nand\n', <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>, '\nand they lived at the bottom of a well.\n']

The result is a list: the p node contains both text and tag nodes, and all of them are returned together in list form. Each element of the list is a direct child of the p node.

The children attribute yields the same nodes, but returns an iterator:

from bs4 import BeautifulSoup

html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, "lxml")
print(soup.p.children)
for i, child in enumerate(soup.p.children):
    print(i, child)

Result:

<list_iterator object at 0x01B5AFF0>
0 
Once upon a time there were three little sisters; and their names were

1 <a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
2 

3 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
4 
and

5 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
6 
and they lived at the bottom of a well.

The descendants attribute returns all descendant nodes, again as an iterator (a generator):

from bs4 import BeautifulSoup

html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, "lxml")
print(soup.p.descendants)
for i, child in enumerate(soup.p.descendants):
    print(i, child)

Result:

<generator object Tag.descendants at 0x021BBD30>
0 
Once upon a time there were three little sisters; and their names were

1 <a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
2 

3 <span>Elsie</span>
4 Elsie
5 

6 

7 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
8 Lacie
9 
and

10 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
11 Tillie
12 
and they lived at the bottom of a well.

2. Parents and Ancestors

The parent attribute returns an element's direct parent node:

from bs4 import BeautifulSoup

html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>and they lived at the bottom of a well.<p class="story">...</p>
"""
soup = BeautifulSoup(html, "lxml")
print(soup.a.parent)

Result:

<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>

To get all the ancestor nodes, call the parents attribute:

from bs4 import BeautifulSoup

html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>and they lived at the bottom of a well.<p class="story">...</p>
"""
soup = BeautifulSoup(html, "lxml")
print(soup.a.parents)
print(list(enumerate(soup.a.parents)))

Result:

<generator object PageElement.parents at 0x014DBD70>
[(0, <p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>), (1, <body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>and they lived at the bottom of a well.<p class="story">...</p>
</body>), (2, <html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>and they lived at the bottom of a well.<p class="story">...</p>
</body></html>), (3, <html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
and
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>and they lived at the bottom of a well.<p class="story">...</p>
</body></html>)]

3. Siblings

To get the nodes at the same level, the siblings:

from bs4 import BeautifulSoup

html = """
<html>
<body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
Hello
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
"""
soup = BeautifulSoup(html, "lxml")
print("Next Sibling", soup.a.next_sibling)
print("Prev Sibling", soup.a.previous_sibling)
print("Next Siblings", list(enumerate(soup.a.next_siblings)))
print("Prev Siblings", list(enumerate(soup.a.previous_siblings)))

Result:

Next Sibling 
Hello

Prev Sibling Once upon a time there were three little sisters; and their names were

Next Siblings [(0, '\nHello\n'), (1, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>), (2, '\nand\n'), (3, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>), (4, '\nand they lived at the bottom of a well.\n')]
Prev Siblings [(0, 'Once upon a time there were three little sisters; and their names were\n')]

next_sibling and previous_sibling return the node's next and previous sibling element, respectively, while next_siblings and previous_siblings return generators over all following and all preceding siblings.

4. Extracting Information

We can extract the text, attributes, and other information of these related nodes:

from bs4 import BeautifulSoup

html = """
<html>
<body>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Linca</a><a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
</p>
"""
soup = BeautifulSoup(html, "lxml")
print("Next Sibling")
print(type(soup.a.next_sibling))
print(soup.a.next_sibling)
print(soup.a.next_sibling.string)
print("Parent")
print(type(soup.a.parents))
print(list(soup.a.parents)[0])
print(list(soup.a.parents)[0].attrs["class"])

Result:

Next Sibling
<class 'bs4.element.Tag'>
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
Lacie
Parent
<class 'generator'>
<p class="story">Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">Linca</a><a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
</p>
['story']

For a single node, call string, attrs, and similar attributes to get its text or attributes. When a generator of multiple nodes is returned, convert it to a list first, take the element you want, and then call string, attrs, and so on to get that node's text and attributes.

Method Selectors

Everything so far has selected nodes through attribute access, which becomes cumbersome and inflexible for more complex selections. Beautiful Soup therefore also provides query methods such as find_all() and find().

1. find_all()

find_all() queries all elements that match the given conditions. Pass in attributes or text and it returns every matching element; it is very powerful.

API

find_all(name=None, attrs={}, recursive=True, text=None, limit=None, **kwargs)

Querying by node name: name

Pass the name parameter to find_all() to query by tag name. Here its value is "ul", so all ul nodes are queried. The result is a list of length 2, and each element is of type bs4.element.Tag:

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(name="ul"))
print(type(soup.find_all(name="ul")[0]))

Result:

[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>, <ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]
<class 'bs4.element.Tag'>

Because every element is a Tag, we can still query inside each of them; the result is again a list whose elements are Tag objects:

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(name="ul"))
for ul in soup.find_all(name="ul"):
    print(ul.find_all(name="li"))

Result:

[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>, <ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]

Looping over the li nodes to get their text:

soup = BeautifulSoup(html, "lxml")
for ul in soup.find_all(name="ul"):
    print(ul.find_all(name="li"))
    for li in ul.find_all(name="li"):
        print(li.string)

Result:

[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
Foo
Bar
Jay
[<li class="element">Foo</li>, <li class="element">Bar</li>]
Foo
Bar

Querying by attributes: attrs

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(attrs={"id": "list-2"}))
print(soup.find_all(attrs={"name": "elements"}))

Result:

[<ul class="list list-small" id="list-2" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]
[<ul class="list list-small" id="list-2" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]

Attributes are passed in dictionary form, e.g. attrs={"id": "list-2"}. The result is a list containing every node whose id is list-2.

For frequently used attributes, we can skip attrs and pass them directly as keyword arguments; note that class is a Python keyword, so it must be written class_:

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(id="list-2"))
print(soup.find_all(class_="panel-body"))

Matching node text: text

The text parameter matches the text of nodes; it can be passed as a string or as a compiled regular expression:

import re

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-body">
<a>Hello, this is a link</a>
<a>Hello, this is a link, too</a>
</div>
</div>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.find_all(text=re.compile("link")))

Result:

['Hello, this is a link', 'Hello, this is a link, too']
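Besides name, attrs, and text, the find_all() signature shown earlier also takes recursive and limit. A short sketch of both (the HTML is invented for illustration):

```python
from bs4 import BeautifulSoup

html = "<div><span>outer</span><p><span>inner</span></p></div>"
soup = BeautifulSoup(html, "html.parser")

# recursive=False searches only the direct children of the node,
# so the <span> nested inside <p> is not matched.
direct = soup.div.find_all("span", recursive=False)
print([s.string for s in direct])

# limit stops the search after the given number of matches.
first_only = soup.find_all("span", limit=1)
print([s.string for s in first_only])
```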

2. find(): Returns a Single Element

find() returns a single element, namely the first match, whereas find_all() returns a list of all matching elements:

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.find(name="ul"))
print(type(soup.find(name="ul")))
print(soup.find(class_="list"))

Result:

<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<class 'bs4.element.Tag'>
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>

find() is used in exactly the same way as find_all(); only what it returns differs. There are also several related methods:

find_parents() and find_parent(): the former returns all ancestor nodes, the latter the direct parent node.

find_next_siblings() and find_next_sibling(): the former returns all following siblings, the latter the first following sibling.

find_previous_siblings() and find_previous_sibling(): the former returns all preceding siblings, the latter the first preceding sibling.

find_all_next() and find_next(): the former returns all qualifying nodes after the current node, the latter the first qualifying node.

find_all_previous() and find_previous(): the former returns all qualifying nodes before the current node, the latter the first qualifying node.
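A quick sketch of a few of these methods (the markup and ids are invented for illustration):

```python
from bs4 import BeautifulSoup

html = '<div id="box"><p id="a">one</p><p id="b">two</p><p id="c">three</p></div>'
soup = BeautifulSoup(html, "html.parser")

first = soup.find(id="a")
print(first.find_parent("div")["id"])                       # enclosing <div>
print(first.find_next_sibling("p").string)                  # first following <p>
print([p.string for p in first.find_next_siblings("p")])    # all following <p> nodes
print(soup.find(id="c").find_previous_sibling("p").string)  # first preceding <p>
```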

CSS Selectors

Beautiful Soup provides one more kind of selector: CSS selectors. If you are familiar with Web development, CSS selectors will be nothing new; if not, look up a CSS selector reference first. To use them, call the select() method and pass in a CSS selector:

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
soup = BeautifulSoup(html, "lxml")
print(soup.select(".panel .panel-heading"))
print(soup.select("ul li"))
print(soup.select("#list-2 .element"))
print(soup.select("ul")[0])

Result:

[<div class="panel-heading">
<h4>Hello</h4>
</div>]
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>, <li class="element">Foo</li>, <li class="element">Bar</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>

select("ul li") selects all li nodes under every ul node; the result is a list of all those li nodes.
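select() understands most standard CSS syntax, not just classes, ids, and descendant combinators. A small sketch, assuming bs4 4.7+ where select() is backed by the Soup Sieve library (the markup is invented for illustration):

```python
from bs4 import BeautifulSoup

html = "<ul><li>Foo</li><li>Bar</li><li>Jay</li></ul>"
soup = BeautifulSoup(html, "html.parser")

# Child combinator and a structural pseudo-class.
all_items = [li.string for li in soup.select("ul > li")]
second = [li.string for li in soup.select("li:nth-of-type(2)")]
print(all_items)
print(second)
```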

1. Nested Selection

The select() method also supports nesting: first select all ul nodes, then iterate over them and select their li nodes, printed as lists:

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
soup = BeautifulSoup(html, "lxml")
for ul in soup.select("ul"):
    print(ul.select("li"))

Result:

[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]

2. Getting Attributes

The nodes are Tag objects, so attributes can still be fetched with the methods shown earlier. The HTML text is the same as above:

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
soup = BeautifulSoup(html, "lxml")
for ul in soup.select("ul"):
    print(ul["id"])
    print(ul.attrs["id"])

Result:

list-1
list-1
list-2
list-2

Both square brackets with the attribute name and the attrs attribute work for getting attribute values.

3. Getting the Text

To get the text, you can of course use the string attribute described earlier. There is also a method for this, get_text():

from bs4 import BeautifulSoup

html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
soup = BeautifulSoup(html, "lxml")
for li in soup.select("li"):
    print("Get Text", li.get_text())
    print("String", li.string)

Result:

Get Text Foo
String Foo
Get Text Bar
String Bar
Get Text Jay
String Jay
Get Text Foo
String Foo
Get Text Bar
String Bar
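When the source markup carries stray whitespace, get_text() keeps it by default. It also accepts a strip argument that trims each piece of text, which is handy for scraped pages (a minimal sketch; the markup is invented for illustration):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<li>\n  Foo  \n</li>", "html.parser")

raw = soup.li.get_text()             # whitespace preserved
stripped = soup.li.get_text(strip=True)  # whitespace trimmed
print(repr(raw))
print(repr(stripped))
```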
