Beautiful Soup is a Python library for extracting data from HTML and XML files. Beautiful Soup 3 is no longer developed, so Beautiful Soup 4 is recommended instead. Note: the examples below come from the official Beautiful Soup 4.4.0 documentation.

The "Alice in Wonderland" sample document:

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
Parsing this document with BeautifulSoup yields a BeautifulSoup object, which can output the document in a standard indented structure:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.prettify())
<html>
<head>
<title>
The Dormouse's story
</title>
</head>
<body>
<p class="title">
<b>
The Dormouse's story
</b>
</p>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a class="sister" href="http://example.com/elsie" id="link1">
Elsie
</a>
,
<a class="sister" href="http://example.com/lacie" id="link2">
Lacie
</a>
and
<a class="sister" href="http://example.com/tillie" id="link3">
Tillie
</a>
;
and they lived at the bottom of a well.
</p>
<p class="story">
...
</p>
</body>
</html>
To run these examples, install Beautiful Soup 4, plus the optional lxml and html5lib parsers:

pip install beautifulsoup4
pip install lxml
pip install html5lib
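The parser is chosen by the second argument to the BeautifulSoup constructor. A minimal sketch (the snippet string is made up for illustration) comparing the bundled html.parser with an optional fall-back when lxml is not installed:

```python
from bs4 import BeautifulSoup

# A made-up snippet just for this demo.
snippet = "<p class='demo'>Hello<br>world</p>"

# The bundled html.parser needs no extra installation.
soup_std = BeautifulSoup(snippet, "html.parser")
print(soup_std.p["class"])    # ['demo']
print(soup_std.get_text())    # Helloworld

# lxml is faster and more lenient, but it is an optional dependency;
# fall back to the bundled parser when it is missing.
try:
    soup_fast = BeautifulSoup(snippet, "lxml")
except Exception:
    soup_fast = BeautifulSoup(snippet, "html.parser")
print(soup_fast.get_text())   # Helloworld
```

Different parsers can build slightly different trees from the same markup, so it is worth pinning one parser explicitly rather than relying on the default.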
print(f"Get title: {soup.title}\n")
Get title: <title>The Dormouse's story</title>
print(f"Get title.name: {soup.title.name}\n")
Get title.name: title
print(f"Get title.string: {soup.title.string}\n")
Get title.string: The Dormouse's story
print(f"Get title.parent.name: {soup.title.parent.name}\n")
Get title.parent.name: head
print(f"Get the first p tag: {soup.p}\n")
Get the first p tag: <p class="title"><b>The Dormouse's story</b></p>
print(f"Get the p tag's ['class']: {soup.p['class']}\n")
Get the p tag's ['class']: ['title']
print(f"Get the first a tag: {soup.a}\n")
Get the first a tag: <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
print(f"Get all a tags: {soup.find_all('a')}\n")
Get all a tags:
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
print(f"Get a specific link: {soup.find(id='link3')}\n")
Get a specific link: <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
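find() and find_all() accept more than a tag name: a class_ filter, an attrs dict, and a limit. A small sketch against the same Alice document:

```python
from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, "html.parser")

# class_ (note the trailing underscore: class is a Python keyword)
# filters by CSS class.
sisters = soup.find_all("a", class_="sister")
print(len(sisters))                      # 3

# limit stops the search after that many matches.
first_two = soup.find_all("a", limit=2)
print([a["id"] for a in first_two])      # ['link1', 'link2']

# A dict of attributes works as well.
lacie = soup.find("a", attrs={"id": "link2"})
print(lacie.string)                      # Lacie
```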
print("Get all a tag links:")
for link in soup.find_all('a'):
    print(f"{link.get('href')}")
Get all a tag links:
http://example.com/elsie
http://example.com/lacie
http://example.com/tillie
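CSS selectors are another way to reach the same links: select() returns every match and select_one() just the first (available in recent Beautiful Soup 4 releases). A sketch on a trimmed-down copy of the document:

```python
from bs4 import BeautifulSoup

html_doc = """
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
"""
soup = BeautifulSoup(html_doc, "html.parser")

# select() takes a CSS selector and returns a list of matching tags.
links = soup.select("p.story a.sister")
print([a.get("href") for a in links])

# select_one() returns only the first match, or None when nothing matches.
third = soup.select_one("#link3")
print(third.string)   # Tillie
```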
print(f"Get the document's text content: {soup.get_text()}")
Get the document's text content:
The Dormouse's story
The Dormouse's story
Once upon a time there were three little sisters; and their names were
Elsie,
Lacie and
Tillie;
and they lived at the bottom of a well.
...
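get_text() also accepts a separator string and a strip flag, which often give cleaner output than the raw concatenation above. A minimal sketch on a toy document:

```python
from bs4 import BeautifulSoup

doc = "<p>Once <b>upon</b> a time</p><p>...</p>"
soup = BeautifulSoup(doc, "html.parser")

# Default get_text() concatenates every text node as-is.
print(soup.get_text())                   # Once upon a time...

# With a separator and strip=True, each piece is whitespace-stripped
# and empty strings are dropped before joining.
print(soup.get_text(" | ", strip=True))  # Once | upon | a time | ...

# .stripped_strings yields the same stripped pieces one at a time.
pieces = list(soup.stripped_strings)
print(pieces)                            # ['Once', 'upon', 'a time', '...']
```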
# -*- coding:utf-8 -*-
# Author: NoamaNelson
# Date: 2023/2/13
# File name: bs01.py
# Purpose: a simple demonstration of BeautifulSoup4
# Contact: VX (NoamaNelson)
# Blog: https://blog.csdn.net/NoamaNelson
from bs4 import BeautifulSoup
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.prettify())
# Get title
print(f"Get title: {soup.title}\n")
# Get title.name
print(f"Get title.name: {soup.title.name}\n")
# Get title.string
print(f"Get title.string: {soup.title.string}\n")
# Get title.parent.name
print(f"Get title.parent.name: {soup.title.parent.name}\n")
# Get the first p tag
print(f"Get the first p tag: {soup.p}\n")
# Get the p tag's ['class']
print(f"Get the p tag's ['class']: {soup.p['class']}\n")
# Get the first a tag
print(f"Get the first a tag: {soup.a}\n")
# Get all a tags
print(f"Get all a tags: {soup.find_all('a')}\n")
# Get a specific link
print(f"Get a specific link: {soup.find(id='link3')}\n")
# Get all a tag links
print("Get all a tag links:")
for link in soup.find_all('a'):
    print(f"{link.get('href')}")
# Get the document's text content
print(f"Get the document's text content: {soup.get_text()}")
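The script above only searches the tree; the parse tree can also be walked directly through attributes such as .parent, .children, and the sibling helpers. A short sketch on a reduced document:

```python
from bs4 import BeautifulSoup

doc = """<html><body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time...</p>
</body></html>"""
soup = BeautifulSoup(doc, "html.parser")

title_p = soup.find("p", class_="title")

# .parent walks one level up the tree.
print(title_p.parent.name)   # body

# .find_next_sibling skips the whitespace text nodes
# that plain .next_sibling would return.
story_p = title_p.find_next_sibling("p")
print(story_p["class"])      # ['story']

# .children iterates direct children (tags and text nodes alike).
print([child.name for child in title_p.children])   # ['b']
```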