
Crawler Day 5 -- XPath and Simple CSV Usage

1. Introduction to XPath

1.1 What is XPath

  • XPath (XML Path Language) is a query language for XML: it locates nodes in an XML tree. XPath is used to navigate through elements and attributes in an XML document.
  • XML is a markup-based text format, and XPath makes it easy to locate elements and attribute values inside it. lxml is a third-party Python module that can parse HTML text into an element tree and run XPath queries against it.

1.2 Node relationships
xml_content = '''
<bookstore>
<book>
    <title lang='eng'>Harry Potter</title>
    <author>J.K. Rowling</author>
    <year>2005</year>
    <price>29</price>
</book>
</bookstore>
'''

<bookstore> (document node)
<author>J.K. Rowling</author> (element node)
lang='eng' (attribute node)

  • Parent: the book element is the parent of the title, author, year, and price elements
  • Children: title, author, year, and price are all children of the book element
  • Sibling: title, author, year, and price are siblings of one another
  • Ancestor: the ancestors of the title element are the book element and the bookstore element
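Assuming lxml is installed, the relationships above can be checked directly: lxml's `getparent()` method and the standard XPath `ancestor` and `following-sibling` axes walk the tree from a given node.

```python
from lxml import etree

xml_content = '''
<bookstore>
<book>
    <title lang="eng">Harry Potter</title>
    <author>J.K. Rowling</author>
    <year>2005</year>
    <price>29</price>
</book>
</bookstore>
'''

root = etree.fromstring(xml_content)        # the <bookstore> document element
title = root.find('book/title')

print(title.getparent().tag)                                 # parent of <title>
print([e.tag for e in title.xpath('ancestor::*')])           # ancestors of <title>
print([e.tag for e in title.xpath('following-sibling::*')])  # its siblings
```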

2. Basic XPath Usage

2.1 Tool installation

Common node-selection tools:

  • Chrome extension: XPath Helper
  • Firefox extension: XPath Checker

Installation reference:
https://blog.csdn.net/weixin_41010318/article/details/86472643?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522162273120216780264060924%2522%252C%2522scm%2522%253A%252220140713.130102334.pc%255Fall.%2522%257D&request_id=162273120216780264060924&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~all~first_rank_v2~rank_v29-1-86472643.pc_search_result_cache&utm_term=XPath+Helper%E5%AE%89%E8%A3%85&spm=1018.2226.3001.4187
Symbol    Purpose
/         Select from the root node
//        Select matching nodes anywhere in the document, regardless of their position
.         Select the current node
..        Select the parent of the current node
@         Select attributes
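A minimal sketch of these selectors with lxml (the tiny bookstore document here is made up for illustration):

```python
from lxml import etree

doc = etree.fromstring(
    '<bookstore><book><title lang="eng">Harry Potter</title>'
    '<price>29</price></book></bookstore>'
)

print(doc.xpath('/bookstore/book/title/text()'))  # /  : absolute path from the root
print(doc.xpath('//title/text()'))                # // : match anywhere in the document
book = doc.xpath('//book')[0]
print(book.xpath('./title/text()'))               # .  : relative to the current node
title = doc.xpath('//title')[0]
print(title.xpath('..')[0].tag)                   # .. : the parent node
print(doc.xpath('//title/@lang'))                 # @  : select an attribute value
```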


3. lxml

In Python, we install the lxml library to use XPath.
lxml is an HTML/XML parser; its main job is parsing and extracting data from HTML/XML. Using etree.HTML, a string can be converted into an Element object.
Official lxml documentation: http://lxml.de/index.html
Install with pip: pip install lxml
lxml can automatically repair malformed HTML.
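As a quick check of that auto-repair behavior, a sketch (the broken fragment is made up):

```python
from lxml import etree

broken = '<div><ul><li>one<li>two</ul>'   # unclosed <li> tags and no </div>
element = etree.HTML(broken)
fixed = etree.tostring(element, encoding='unicode')
print(fixed)   # lxml closes the tags and wraps the fragment in <html><body>...
```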

from lxml import etree
import csv

wb_data = """
        <div>
            <ul>
                 <li class="item-0"><a href="link1.html">first item</a></li>
                 <li class="item-1"><a href="link2.html">second item</a></li>
                 <li class="item-inactive"><a href="link3.html">third item</a></li>
                 <li class="item-1"><a href="link4.html">fourth item</a></li>
                 <li class="item-0"><a href="link5.html">fifth item</a>
             </ul>
         </div>
        """

element = etree.HTML(wb_data)
# print(element)
# Get the href attribute of the <a> tag under each <li>
links = element.xpath('//li/a/@href')
# Get the text of the <a> tag under each <li>
result = element.xpath('//li/a/text()')
# print(links)
# print(result)
rows = list(zip(links, result))  # pair each link with its text
headers = ('key', 'value')

with open('code2.csv', 'w', encoding='utf-8', newline='') as file_obj:
    writer = csv.writer(file_obj)
    writer.writerow(headers)
    writer.writerows(rows)

4. Simple CSV Usage

import csv

# Writing data, option 1: writerow (one row at a time)
headers = ['name', 'age', 'height']
persons = [('我的梦', 18, 178), ('125', 20, 180), ('城', 25, 190)]

with open('persons.csv', 'w', encoding='utf-8', newline='') as file_obj:
    writer = csv.writer(file_obj)
    writer.writerow(headers)
    for data in persons:
        writer.writerow(data)

# writerows: write all rows at once
headers = ('name', 'age', 'height')
persons = [('我的梦', 18, 178), ('125', 20, 180), ('城', 25, 190)]

with open('persons.csv', 'w', encoding='utf-8', newline='') as file_obj:
    writer = csv.writer(file_obj)
    writer.writerow(headers)
    writer.writerows(persons)

# Writing data, option 2: DictWriter
# Note: the dict keys must match the header fields
persons = [{'name': '我的梦', 'age': 20, 'height': 178},
           {'name': '125', 'age': 30, 'height': 178},
           {'name': '城', 'age': 40, 'height': 178}
           ]

with open('persons.csv','w',encoding='utf-8',newline='') as file_obj:
    writer = csv.DictWriter(file_obj,headers)
    writer.writeheader()
    writer.writerows(persons)

# Reading data
# Option 1: csv.reader (each row is a list)
with open('persons.csv', 'r', encoding='utf-8') as file_obj:
    reader = csv.reader(file_obj)
    for x in reader:
        print(x[0])

# Option 2: csv.DictReader (each row is a dict keyed by the header)
with open('persons.csv', 'r', encoding='utf-8') as file_obj:
    reader = csv.DictReader(file_obj)
    for x in reader:
        print(x['age'])
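One caveat with both readers: every field comes back as a str, so numeric columns such as age must be converted explicitly. A small self-contained sketch using an in-memory file (the sample row is made up):

```python
import csv
import io

data = io.StringIO('name,age,height\nTom,18,178\n')
for row in csv.DictReader(data):
    age = int(row['age'])   # fields are read back as strings; convert manually
    print(row['name'], age)
```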
