How does a Python crawler use CSS selectors? This article walks through the question and its solution in detail, in the hope of helping anyone facing the same problem find a simpler, more workable approach.
CSS selectors
This is another way of searching the document that achieves much the same thing as the find_all() method. In CSS, a tag name is written with no decoration, a class name is prefixed with a dot (.), and an id is prefixed with a hash (#).
Beautiful Soup lets us filter elements in a similar way with soup.select(), which returns a list.
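As a quick aside (not part of the original write-up), each of these selector forms has a find_all() counterpart. A minimal sketch, assuming Beautiful Soup 4 and lxml are installed, and using a cut-down HTML string rather than the document used in the examples below:

from bs4 import BeautifulSoup

# A simplified stand-in document, used only for this comparison
html = '<p class="story"><a class="sister" id="link1">Elsie</a></p>'
soup = BeautifulSoup(html, "lxml")

print(soup.select("a") == soup.find_all("a"))                    # tag name -> True
print(soup.select(".sister") == soup.find_all(class_="sister"))  # class    -> True
print(soup.select("#link1") == soup.find_all(id="link1"))        # id       -> True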
(1) Searching by tag name
#!/usr/bin/python3
# -*- coding:utf-8 -*-

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

print(soup.select("title"))
print(soup.select("b"))
print(soup.select("a"))
Output:
[<title>The Dormouse's story</title>]
[<b>The Dormouse's story</b>]
[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
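select() always returns a list, even for a single match. If only the first match is needed, Beautiful Soup also offers select_one(); a small sketch (not from the original article):

from bs4 import BeautifulSoup

html = "<html><head><title>The Dormouse's story</title></head><body></body></html>"
soup = BeautifulSoup(html, "lxml")

print(soup.select("title"))      # [<title>The Dormouse's story</title>]  -- a list
print(soup.select_one("title"))  # <title>The Dormouse's story</title>    -- a single tag (or None)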
(2) Searching by class name
#!/usr/bin/python3
# -*- coding:utf-8 -*-

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

print(soup.select(".title"))
Output:
[<p class="title"><b>The Dormouse's story</b></p>]
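When several tags share a class, select() returns all of them; a small extra sketch (not from the original article):

from bs4 import BeautifulSoup

html = ('<p class="story"><a class="sister" id="link1">Elsie</a>'
        '<a class="sister" id="link2">Lacie</a></p>')
soup = BeautifulSoup(html, "lxml")

print(soup.select(".sister"))  # both <a class="sister"> tags are returned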
(3) Searching by id
#!/usr/bin/python3
# -*- coding:utf-8 -*-

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

print(soup.select("#link1"))
Output:
[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
(4) Combining selectors
#!/usr/bin/python3
# -*- coding:utf-8 -*-

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

print(soup.select("p #link1"))
Output:
[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
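The space in "p #link1" is the standard CSS descendant combinator: it finds the element with id link1 anywhere inside a <p> tag. select() accepts the other usual CSS combinations as well; a short sketch (not from the original article, using a simplified document):

from bs4 import BeautifulSoup

html = ("<html><head><title>The Dormouse's story</title></head>"
        "<body><p class='story'><a class='sister' id='link1'>Elsie</a></p></body></html>")
soup = BeautifulSoup(html, "lxml")

print(soup.select("p.story a"))     # <a> anywhere inside <p class="story">
print(soup.select("a#link1"))       # tag name and id on the same node, no space
print(soup.select("head > title"))  # <title> that is a direct child of <head>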
(5) Searching by attribute
Attributes can also be used in a selector. The attribute must be wrapped in square brackets. Note that the attribute and the tag name belong to the same node, so there must be no space between them, otherwise nothing will match (see the short illustration after the output below).
#!/usr/bin/python3
# -*- coding:utf-8 -*-

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

print(soup.select("a[class='sister']"))
Output:
[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
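To make the spacing rule concrete, here is a small illustration (not from the original article): putting a space before the bracket turns the pattern into a descendant selector, so nothing matches.

from bs4 import BeautifulSoup

html = '<p><a class="sister" id="link1">Elsie</a></p>'
soup = BeautifulSoup(html, "lxml")

print(soup.select("a[class='sister']"))   # [<a class="sister" id="link1">Elsie</a>]
print(soup.select("a [class='sister']"))  # [] -- looks for class="sister" *inside* an <a>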
Similarly, attribute selectors can be combined with the forms above: parts that refer to different nodes are separated by a space, and parts that refer to the same node are written without one.
#!/usr/bin/python3
# -*- coding:utf-8 -*-

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

print(soup.select("p a[class='sister']"))
Output:
[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
(6) Getting the text content
The select() calls above all return lists. We can loop over the results and call get_text() on each element to retrieve its text.
#!/usr/bin/python3
# -*- coding:utf-8 -*-

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# Create the Beautiful Soup object, using the lxml parser
soup = BeautifulSoup(html, "lxml")

print(soup.select("p a[class='sister']"))
for item in soup.select("p a[class='sister']"):
    print(item.get_text())
Output:
[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

Lacie
Tillie
Note: the content of the first link, <!-- Elsie -->, is an HTML comment, so it is not output; the loop prints an empty line for it.
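To see why the first link contributes nothing, you can inspect its contents directly; a small sketch (not from the original article), assuming the same kind of markup:

from bs4 import BeautifulSoup, Comment

html = '<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>'
soup = BeautifulSoup(html, "lxml")
link = soup.select("#link1")[0]

print(link.contents)                          # [' Elsie '] -- a single Comment node
print(isinstance(link.contents[0], Comment))  # True
print(repr(link.get_text()))                  # '' -- comment text is not returned, as in the output above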
That is all on how a Python crawler can use CSS selectors. Hopefully the content above is of some help; if you still have questions, you can follow the 创新互联 industry news channel to learn more.