Input and output typically fall into a few categories: reading text files or other formats stored on disk, loading data from databases, and fetching data over the network via web APIs.
pandas has a number of functions for reading tabular data as a DataFrame; some of them are listed below. Of these, read_csv and read_table are the most frequently used:
All of these functions do roughly the same thing: convert text data into a DataFrame. Their optional arguments fall into a few categories:
Indexing: can treat one or more columns as the returned DataFrame's index; you can also choose whether to get column names from the file or not at all.
Type inference and data conversion: includes user-defined value conversions and custom handling of missing-value markers.
Datetime parsing: includes a combining capability, where date and time information spread over multiple columns can be merged into a single column.
Iterating: support for iterating over chunks of very large files.
Unclean data issues: skipping rows or a footer, comments, or other minor things like numeric data with commas as thousands separators (several of these options are combined in the sketch after this list).
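To make these categories concrete, here is a minimal sketch that combines one option from each group on a hypothetical file data.csv (the file name, column names, and sentinel string here are illustrative assumptions, not from the examples below):

import pandas as pd

# hypothetical file: one comment row, an 'id' column, a 'date' column,
# an 'amount' column, and 'N/A' used as a missing-value marker
df = pd.read_csv('data.csv',
                 index_col='id',               # indexing: use 'id' as the index
                 parse_dates=['date'],         # datetime parsing
                 dtype={'amount': 'float64'},  # explicit type conversion
                 na_values=['N/A'],            # missing-value sentinel
                 skiprows=[0])                 # unclean data: skip the comment row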
Because real-world data can be very messy, some of the data loading functions (especially read_csv) have accumulated a long list of optional arguments over time. It's normal to feel overwhelmed by how many there are (read_csv has over 50 parameters). For concrete examples, see the pandas online documentation.
Some of these functions, like pandas.read_csv, perform type inference, because the column data types are not part of the data format. That means you don't necessarily have to specify which columns are numeric, which are integers, and which are strings. Other data formats, like HDF5, store the data types in the format itself.
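Even so, read_csv lets you override the inference when you know better. A minimal sketch, assuming a hypothetical file whose column 'a' should stay a string (the file and column names are illustrative):

# read column 'a' as strings instead of whatever type would be inferred
frame = pd.read_csv('some_file.csv', dtype={'a': str})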
Let's warm up with a CSV file (a CSV file is a text file whose values are separated by commas):
!cat ../examples/ex1.csv
a,b,c,d,message
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo
cat is a Unix shell command. On Windows, use type instead of cat.
import pandas as pd
df = pd.read_csv('../examples/ex1.csv')
df
 | a | b | c | d | message
---|---|---|---|---|---
0 | 1 | 2 | 3 | 4 | hello
1 | 5 | 6 | 7 | 8 | world
2 | 9 | 10 | 11 | 12 | foo
We could also have used read_table, specifying the delimiter explicitly:
pd.read_table('../examples/ex1.csv', sep=',')
 | a | b | c | d | message
---|---|---|---|---|---
0 | 1 | 2 | 3 | 4 | hello
1 | 5 | 6 | 7 | 8 | world
2 | 9 | 10 | 11 | 12 | foo
A file will not always have a header row. Consider this file:
!cat ../examples/ex2.csv
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo
To read this file, you can either let pandas assign default column names or specify the names yourself:
pd.read_csv('../examples/ex2.csv', header=None)
 | 0 | 1 | 2 | 3 | 4
---|---|---|---|---|---
0 | 1 | 2 | 3 | 4 | hello
1 | 5 | 6 | 7 | 8 | world
2 | 9 | 10 | 11 | 12 | foo
pd.read_csv('../examples/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])
 | a | b | c | d | message
---|---|---|---|---|---
0 | 1 | 2 | 3 | 4 | hello
1 | 5 | 6 | 7 | 8 | world
2 | 9 | 10 | 11 | 12 | foo
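As an aside, if you want the message column of ex2.csv to be the index of the returned DataFrame, you can combine names with the index_col argument (this is standard read_csv behavior):

names = ['a', 'b', 'c', 'd', 'message']
pd.read_csv('../examples/ex2.csv', names=names, index_col='message')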
To form a hierarchical index from multiple columns, pass a list of column names to index_col:
!cat ../examples/csv_mindex.csv
key1,key2,value1,value2
one,a,1,2
one,b,3,4
one,c,5,6
one,d,7,8
two,a,9,10
two,b,11,12
two,c,13,14
two,d,15,16
parsed = pd.read_csv('../examples/csv_mindex.csv',
                     index_col=['key1', 'key2'])
parsed
key1 | key2 | value1 | value2
---|---|---|---
one | a | 1 | 2
 | b | 3 | 4
 | c | 5 | 6
 | d | 7 | 8
two | a | 9 | 10
 | b | 11 | 12
 | c | 13 | 14
 | d | 15 | 16
In some cases, a table might not have a fixed delimiter, using whitespace or some other pattern to separate fields. Consider this file:
list(open('../examples/ex3.txt'))
[' A B C\n',
'aaa -0.264438 -1.026059 -0.619500\n',
'bbb 0.927272 0.302904 -0.032399\n',
'ccc -0.264273 -0.386314 -0.217601\n',
'ddd -0.871858 -0.348382 1.100491\n']
Here the fields are separated by a variable number of spaces. In these cases, you can pass a regular expression as the delimiter for read_table. Using the regular expression \s+, we get:
result = pd.read_table('../examples/ex3.txt', sep=r'\s+')
result
 | A | B | C
---|---|---|---
aaa | -0.264438 | -1.026059 | -0.619500
bbb | 0.927272 | 0.302904 | -0.032399
ccc | -0.264273 | -0.386314 | -0.217601
ddd | -0.871858 | -0.348382 | 1.100491
Because there is one fewer column name than the number of data columns, read_table infers that the first column should be the DataFrame's index.
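As an alternative to the regular expression, read_table also accepts delim_whitespace=True, which should be equivalent to sep=r'\s+' for a file like ex3.txt:

result = pd.read_table('../examples/ex3.txt', delim_whitespace=True)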
The parser functions have many additional arguments to help you handle the wide variety of exception file formats that occur (see the table later in the chapter). For example, to skip the first, third, and fourth rows of a file, use skiprows:
!cat ../examples/ex4.csv
# hey!
a,b,c,d,message
# just wanted to make things more difficult for you
# who reads CSV files with computers, anyway?
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo
pd.read_csv('../examples/ex4.csv', skiprows=[0, 2, 3])
 | a | b | c | d | message
---|---|---|---|---|---
0 | 1 | 2 | 3 | 4 | hello
1 | 5 | 6 | 7 | 8 | world
2 | 9 | 10 | 11 | 12 | foo
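Since every junk line in ex4.csv happens to start with '#', this particular file could also be read with the comment option, which should drop everything after the given character:

pd.read_csv('../examples/ex4.csv', comment='#')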
For missing values, pandas recognizes a set of sentinel values, such as NA and NULL:
!cat ../examples/ex5.csv
something,a,b,c,d,message
one,1,2,3,4,NA
two,5,6,,8,world
three,9,10,11,12,foo
result = pd.read_csv('../examples/ex5.csv')
result
 | something | a | b | c | d | message
---|---|---|---|---|---|---
0 | one | 1 | 2 | 3.0 | 4 | NaN
1 | two | 5 | 6 | NaN | 8 | world
2 | three | 9 | 10 | 11.0 | 12 | foo
pd.isnull(result)
 | something | a | b | c | d | message
---|---|---|---|---|---|---
0 | False | False | False | False | False | True
1 | False | False | False | True | False | False
2 | False | False | False | False | False | False
The na_values option takes a list of strings to be treated as missing values:
result = pd.read_csv('../examples/ex5.csv', na_values=['NULL'])
result
 | something | a | b | c | d | message
---|---|---|---|---|---|---
0 | one | 1 | 2 | 3.0 | 4 | NaN
1 | two | 5 | 6 | NaN | 8 | world
2 | three | 9 | 10 | 11.0 | 12 | foo
Different NA sentinels can also be specified for each column by passing a dict:
sentinels = {'message': ['foo', 'NA'],
             'something': ['two']}
# treat 'foo' and 'NA' in the message column, and 'two' in the something column, as NA
pd.read_csv('../examples/ex5.csv', na_values=sentinels)
 | something | a | b | c | d | message
---|---|---|---|---|---|---
0 | one | 1 | 2 | 3.0 | 4 | NaN
1 | NaN | 5 | 6 | NaN | 8 | world
2 | three | 9 | 10 | 11.0 | 12 | NaN
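If you need full control over which strings count as missing, read_csv also has a keep_default_na option that disables the built-in sentinels; a sketch (note that with the defaults off, the empty field in ex5.csv would come through as an empty string rather than NaN):

pd.read_csv('../examples/ex5.csv', keep_default_na=False, na_values=['NA'])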
When processing very large files, you may want to read only a small piece of a file, or iterate through it in smaller chunks. Before we look at a large file, let's make the pandas display settings more compact:
pd.options.display.max_rows = 10
result = pd.read_csv('../examples/ex6.csv')
result
 | one | two | three | four | key
---|---|---|---|---|---
0 | 0.467976 | -0.038649 | -0.295344 | -1.824726 | L
1 | -0.358893 | 1.404453 | 0.704965 | -0.200638 | B
2 | -0.501840 | 0.659254 | -0.421691 | -0.057688 | G
3 | 0.204886 | 1.074134 | 1.388361 | -0.982404 | R
4 | 0.354628 | -0.133116 | 0.283763 | -0.837063 | Q
... | ... | ... | ... | ... | ...
9995 | 2.311896 | -0.417070 | -1.409599 | -0.515821 | L
9996 | -0.479893 | -0.650419 | 0.745152 | -0.646038 | E
9997 | 0.523331 | 0.787112 | 0.486066 | 1.093156 | K
9998 | -0.362559 | 0.598894 | -1.843201 | 0.887292 | G
9999 | -0.096376 | -1.012999 | -0.657431 | -0.573315 | 0
10000 rows × 5 columns
If you want to read only a small number of rows (avoiding reading the entire file), specify that with nrows:
pd.read_csv('../examples/ex6.csv', nrows=5)
 | one | two | three | four | key
---|---|---|---|---|---
0 | 0.467976 | -0.038649 | -0.295344 | -1.824726 | L
1 | -0.358893 | 1.404453 | 0.704965 | -0.200638 | B
2 | -0.501840 | 0.659254 | -0.421691 | -0.057688 | G
3 | 0.204886 | 1.074134 | 1.388361 | -0.982404 | R
4 | 0.354628 | -0.133116 | 0.283763 | -0.837063 | Q
To read a file in pieces, specify a chunksize as a number of rows:
chunker = pd.read_csv('../examples/ex6.csv', chunksize=1000)
chunker
The TextParser object returned by read_csv allows you to iterate over the parts of the file according to the chunksize. For example, we can iterate over ex6.csv, aggregating the value counts in the 'key' column, like so:
chunker = pd.read_csv('../examples/ex6.csv', chunksize=1000)
tot = pd.Series([], dtype='float64')  # explicit dtype avoids a warning on an empty Series
for piece in chunker:
    tot = tot.add(piece['key'].value_counts(), fill_value=0)
tot = tot.sort_values(ascending=False)
tot[:10]
E 368.0
X 364.0
L 346.0
O 343.0
Q 340.0
M 338.0
J 337.0
F 335.0
K 334.0
H 330.0
dtype: float64
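As a sanity check, the chunked aggregation should agree with counting the 'key' column of the whole file in one pass, memory permitting:

pd.read_csv('../examples/ex6.csv')['key'].value_counts()[:10]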
TextParser also has a get_chunk method, which lets you read pieces of an arbitrary size:
chunker = pd.read_csv('../examples/ex6.csv', chunksize=1000)
chunker.get_chunk(10)
 | one | two | three | four | key
---|---|---|---|---|---
0 | 0.467976 | -0.038649 | -0.295344 | -1.824726 | L
1 | -0.358893 | 1.404453 | 0.704965 | -0.200638 | B
2 | -0.501840 | 0.659254 | -0.421691 | -0.057688 | G
3 | 0.204886 | 1.074134 | 1.388361 | -0.982404 | R
4 | 0.354628 | -0.133116 | 0.283763 | -0.837063 | Q
5 | 1.817480 | 0.742273 | 0.419395 | -2.251035 | Q
6 | -0.776764 | 0.935518 | -0.332872 | -1.875641 | U
7 | -0.913135 | 1.530624 | -0.572657 | 0.477252 | K
8 | 0.358480 | -0.497572 | -0.367016 | 0.507702 | S
9 | -1.740877 | -1.160417 | -1.637830 | 2.172201 | G
Data can also be exported to a delimited format like CSV:
data = pd.read_csv('../examples/ex5.csv')
data
 | something | a | b | c | d | message
---|---|---|---|---|---|---
0 | one | 1 | 2 | 3.0 | 4 | NaN
1 | two | 5 | 6 | NaN | 8 | world
2 | three | 9 | 10 | 11.0 | 12 | foo
data.to_csv('../examples/out.csv')
!cat ../examples/out.csv
,something,a,b,c,d,message
0,one,1,2,3.0,4,
1,two,5,6,,8,world
2,three,9,10,11.0,12,foo
Other delimiters can be used, of course (writing to sys.stdout here, so the text result prints to the console for easy inspection):
import sys
data.to_csv(sys.stdout, sep='|')
|something|a|b|c|d|message
0|one|1|2|3.0|4|
1|two|5|6||8|world
2|three|9|10|11.0|12|foo
Missing values appear as empty strings in the output. You might want to denote them by some other sentinel value:
data.to_csv(sys.stdout, na_rep='NULL')
,something,a,b,c,d,message
0,one,1,2,3.0,4,NULL
1,two,5,6,NULL,8,world
2,three,9,10,11.0,12,foo
With no other options specified, both the row and column labels are written. Both can be disabled:
data.to_csv(sys.stdout, index=False, header=False)
one,1,2,3.0,4,
two,5,6,,8,world
three,9,10,11.0,12,foo
You can also write only a subset of the columns, and in an order of your choosing:
data.to_csv(sys.stdout, index=False, columns=['a', 'b', 'c'])
a,b,c
1,2,3.0
5,6,
9,10,11.0
Series also has a to_csv method:
import numpy as np
dates = pd.date_range('1/1/2000', periods=7)
ts = pd.Series(np.arange(7), index=dates)
ts.to_csv('../examples/tseries.csv')
!cat ../examples/tseries.csv
2000-01-01,0
2000-01-02,1
2000-01-03,2
2000-01-04,3
2000-01-05,4
2000-01-06,5
2000-01-07,6
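To read the Series back in, you can use read_csv, parsing the first column as a DatetimeIndex; a sketch, assuming the headerless output shown above (to_csv's header behavior for Series has changed across pandas versions):

# returns a one-column DataFrame indexed by the parsed dates
ts2 = pd.read_csv('../examples/tseries.csv', header=None,
                  index_col=0, parse_dates=True)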
It's possible to load most forms of tabular data from disk using functions like pandas.read_table. In some cases, however, some manual processing may be necessary. It's not uncommon to receive a file with one or more malformed lines that trip up read_table. To illustrate the basic tools, consider this small CSV file:
!cat ../examples/ex7.csv
"a","b","c"
"1","2","3"
"1","2","3"
For any file with a single-character delimiter, you can use Python's built-in csv module. Just pass csv.reader an open file:
import csv
f = open('../examples/ex7.csv')
reader = csv.reader(f)
Iterating over the reader yields one list of values per line:
for line in reader:
    print(line)
['a', 'b', 'c']
['1', '2', '3']
['1', '2', '3']
From there, it's up to us to do the wrangling necessary to put the data in the form we need. Let's take this step by step. First, read the file into a list of lines:
with open('../examples/ex7.csv') as f:
    lines = list(csv.reader(f))
Then we split the lines into the header line and the data lines:
header, values = lines[0], lines[1:]
Then we can create a dictionary of data columns using a dict comprehension and zip(*values), which transposes the rows to columns:
data_dict = {h: v for h, v in zip(header, zip(*values))}
data_dict
{'a': ('1', '1'), 'b': ('2', '2'), 'c': ('3', '3')}
header
['a', 'b', 'c']
print([x for x in zip(*values)])
[('1', '1'), ('2', '2'), ('3', '3')]
CSV files come in many different flavors. To define a new format with a different delimiter, string quoting convention, or line terminator, we can define a simple subclass of csv.Dialect:
class my_dialect(csv.Dialect):
    lineterminator = '\n'
    delimiter = ';'
    quotechar = '"'
    quoting = csv.QUOTE_MINIMAL
f = open('../examples/ex7.csv')
reader = csv.reader(f, dialect=my_dialect)
for line in reader:
    print(line)
['a,"b","c"']
['1,"2","3"']
['1,"2","3"']
We can also pass individual CSV dialect parameters as keywords to csv.reader without having to define a subclass:
reader = csv.reader(f, delimiter='|')
for line in reader:
    print(line)
f.close()
For files with more complicated or multicharacter delimiters, you won't be able to use the csv module. In those cases, you'll have to split the lines yourself with the string split method or re.split. To write delimited files manually, you can use csv.writer. It accepts an open, writable file object and the same dialect and format options as csv.reader:
with open('../examples/mydata.csv', 'w') as f:
    writer = csv.writer(f, dialect=my_dialect)
    writer.writerow(('one', 'two', 'three'))
    writer.writerow(('1', '2', '3'))
    writer.writerow(('4', '5', '6'))
    writer.writerow(('7', '8', '9'))
!cat ../examples/mydata.csv
one;two;three
1;2;3
4;5;6
7;8;9
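As with csv.reader, the same format options can be passed directly to csv.writer as keywords instead of defining a dialect class; a small sketch writing to a hypothetical mydata2.csv:

with open('../examples/mydata2.csv', 'w') as f:
    writer = csv.writer(f, delimiter=';', lineterminator='\n')
    writer.writerow(('one', 'two', 'three'))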
JSON (short for JavaScript Object Notation) has become a standard data format for sending data by HTTP request. It is a much more free-form format than tabular text like CSV:
obj = """
{"name": "Wes",
"places_lived": ["United States", "Spain", "Germany"],
"pet": null, "siblings": [{"name": "Scott", "age": 30, "pets": ["Zeus", "Zuko"]},
{"name": "Katie", "age": 38,
"pets": ["Sixes", "Stache", "Cisco"]}]
}
"""
JSON is very nearly valid Python code, apart from its null value and some other nuances. The basic types are objects (dicts), arrays (lists), strings, numbers, booleans, and nulls. All keys in an object must be strings. There are several Python libraries for reading and writing JSON; here we'll use json, which is built into the Python standard library. To convert a JSON string to Python form, use json.loads:
import json
result = json.loads(obj)
result
{'name': 'Wes',
'pet': None,
'places_lived': ['United States', 'Spain', 'Germany'],
'siblings': [{'age': 30, 'name': 'Scott', 'pets': ['Zeus', 'Zuko']},
{'age': 38, 'name': 'Katie', 'pets': ['Sixes', 'Stache', 'Cisco']}]}
json.dumps, on the other hand, converts a Python object back to JSON:
asjson = json.dumps(result)
How do you convert a JSON object to a DataFrame or some other data structure? You can pass a list of dicts (which were previously JSON objects) to the DataFrame constructor and select a subset of the data fields:
siblings = pd.DataFrame(result['siblings'], columns=['name', 'age'])
siblings
 | name | age
---|---|---
0 | Scott | 30
1 | Katie | 38
pandas.read_json can automatically convert JSON datasets into a Series or DataFrame:
!cat ../examples/example.json
[{"a": 1, "b": 2, "c": 3},
{"a": 4, "b": 5, "c": 6},
{"a": 7, "b": 8, "c": 9}]
The default options for pandas.read_json assume that each object in the JSON array is a row in the table:
data = pd.read_json('../examples/example.json')
data
 | a | b | c
---|---|---|---
0 | 1 | 2 | 3
1 | 4 | 5 | 6
2 | 7 | 8 | 9
If you need to export data from pandas to JSON, use the to_json method:
print(data.to_json())
{"a":{"0":1,"1":4,"2":7},"b":{"0":2,"1":5,"2":8},"c":{"0":3,"1":6,"2":9}}
print(data.to_json(orient='records'))
[{"a":1,"b":2,"c":3},{"a":4,"b":5,"c":6},{"a":7,"b":8,"c":9}]
orient='records' outputs the data as a list of records, one dict per row mapping column -> value:
records : list like [{column -> value}, ... , {column -> value}]
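to_json supports several other orient values as well ('split', 'index', 'columns', 'values', and 'table'); for example, orient='values' should emit just the nested arrays of values:

print(data.to_json(orient='values'))
[[1,2,3],[4,5,6],[7,8,9]]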
Python has many libraries for reading and writing data in HTML and XML formats, such as lxml, Beautiful Soup, and html5lib. While lxml is comparatively much faster in general, the other libraries can better handle malformed HTML or XML files.
pandas has a built-in function, read_html, which uses libraries like lxml and Beautiful Soup to automatically parse tables out of HTML files as DataFrame objects. These packages must be installed before read_html can be used:
conda install lxml
pip install beautifulsoup4 html5lib
The pandas.read_html function has a number of options, but by default it searches for and attempts to parse all tabular data contained within <table> tags. The result is a list of DataFrame objects:
tables = pd.read_html('../examples/fdic_failed_bank_list.html')
len(tables)
1
tables
[ Bank Name City ST CERT \
0 Allied Bank Mulberry AR 91
1 The Woodbury Banking Company Woodbury GA 11297
2 First CornerStone Bank King of Prussia PA 35312
3 Trust Company Bank Memphis TN 9956
4 North Milwaukee State Bank Milwaukee WI 20364
.. ... ... .. ...
542 Superior Bank, FSB Hinsdale IL 32646
543 Malta National Bank Malta OH 6629
544 First Alliance Bank & Trust Co. Manchester NH 34264
545 National State Bank of Metropolis Metropolis IL 3815
546 Bank of Honolulu Honolulu HI 21029
Acquiring Institution Closing Date \
0 Today's Bank September 23, 2016
1 United Bank August 19, 2016
2 First-Citizens Bank & Trust Company May 6, 2016
3 The Bank of Fayette County April 29, 2016
4 First-Citizens Bank & Trust Company March 11, 2016
.. ... ...
542 Superior Federal, FSB July 27, 2001
543 North Valley Bank May 3, 2001
544 Southern New Hampshire Bank & Trust February 2, 2001
545 Banterra Bank of Marion December 14, 2000
546 Bank of the Orient October 13, 2000
Updated Date
0 November 17, 2016
1 November 17, 2016
2 September 6, 2016
3 September 6, 2016
4 June 16, 2016
.. ...
542 August 19, 2014
543 November 18, 2002
544 February 18, 2003
545 March 17, 2005
546 March 17, 2005
[547 rows x 7 columns]]
failures = tables[0]
failures.head()
 | Bank Name | City | ST | CERT | Acquiring Institution | Closing Date | Updated Date
---|---|---|---|---|---|---|---
0 | Allied Bank | Mulberry | AR | 91 | Today's Bank | September 23, 2016 | November 17, 2016
1 | The Woodbury Banking Company | Woodbury | GA | 11297 | United Bank | August 19, 2016 | November 17, 2016
2 | First CornerStone Bank | King of Prussia | PA | 35312 | First-Citizens Bank & Trust Company | May 6, 2016 | September 6, 2016
3 | Trust Company Bank | Memphis | TN | 9956 | The Bank of Fayette County | April 29, 2016 | September 6, 2016
4 | North Milwaukee State Bank | Milwaukee | WI | 20364 | First-Citizens Bank & Trust Company | March 11, 2016 | June 16, 2016
From here we can do some data cleaning and analysis, like computing the number of bank failures by year:
close_timestamps = pd.to_datetime(failures['Closing Date'])
close_timestamps.dt.year.value_counts()
2010 157
2009 140
2011 92
2012 51
2008 25
...
2004 4
2001 4
2007 3
2003 3
2000 2
Name: Closing Date, dtype: int64
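Since the closing dates all share one long date format (e.g. 'September 23, 2016'), you could also pass an explicit format to to_datetime, which tends to be faster and stricter than letting it guess:

close_timestamps = pd.to_datetime(failures['Closing Date'], format='%B %d, %Y')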
XML (eXtensible Markup Language) is another common structured data format supporting hierarchical, nested data. Here we'll demonstrate how to use lxml to parse an XML file.
The New York Metropolitan Transportation Authority publishes data series about its bus and train services. Each train or bus service has a separate file (like Performance_MNR.xml for the Metro-North Railroad):
<INDICATOR>
  <INDICATOR_SEQ>373889</INDICATOR_SEQ>
  <AGENCY_NAME>Metro-North Railroad</AGENCY_NAME>
  <INDICATOR_NAME>Escalator Availability</INDICATOR_NAME>
  <DESCRIPTION>Percent of the time that escalators are operational systemwide. The availability rate is based on physical observations performed the morning of regular business days only. This is a new indicator the agency began reporting in 2009.</DESCRIPTION>
  <PERIOD_YEAR>2011</PERIOD_YEAR>
  <PERIOD_MONTH>12</PERIOD_MONTH>
  <CATEGORY>Service Indicators</CATEGORY>
  <FREQUENCY>M</FREQUENCY>
  <DESIRED_CHANGE>U</DESIRED_CHANGE>
  <INDICATOR_UNIT>%</INDICATOR_UNIT>
  <DECIMAL_PLACES>1</DECIMAL_PLACES>
  <YTD_TARGET>97.00</YTD_TARGET>
  <YTD_ACTUAL>97.00</YTD_ACTUAL>
</INDICATOR>
Using lxml.objectify, we parse the file and get a reference to the root node of the XML file with getroot:
from lxml import objectify
path = '../datasets/mta_perf/Performance_MNR.xml'
parsed = objectify.parse(open(path))
root = parsed.getroot()
root.INDICATOR returns a generator yielding each <INDICATOR> XML element. For each record, we can populate a dict of tag names (like YTD_ACTUAL) to data values (excluding a few tags):
data = []
skip_fields = ['PARENT_SEQ', 'INDICATOR_SEQ',
               'DESIRED_CHANGE', 'DECIMAL_PLACES']
for elt in root.INDICATOR:
    el_data = {}
    for child in elt.getchildren():
        if child.tag in skip_fields:
            continue
        el_data[child.tag] = child.pyval
    data.append(el_data)
Lastly, convert this list of dicts into a DataFrame:
perf = pd.DataFrame(data)
perf.head()
 | AGENCY_NAME | CATEGORY | DESCRIPTION | FREQUENCY | INDICATOR_NAME | INDICATOR_UNIT | MONTHLY_ACTUAL | MONTHLY_TARGET | PERIOD_MONTH | PERIOD_YEAR | YTD_ACTUAL | YTD_TARGET
---|---|---|---|---|---|---|---|---|---|---|---|---
0 | Metro-North Railroad | Service Indicators | Percent of commuter trains that arrive at thei... | M | On-Time Performance (West of Hudson) | % | 96.9 | 95 | 1 | 2008 | 96.9 | 95
1 | Metro-North Railroad | Service Indicators | Percent of commuter trains that arrive at thei... | M | On-Time Performance (West of Hudson) | % | 95 | 95 | 2 | 2008 | 96 | 95
2 | Metro-North Railroad | Service Indicators | Percent of commuter trains that arrive at thei... | M | On-Time Performance (West of Hudson) | % | 96.9 | 95 | 3 | 2008 | 96.3 | 95
3 | Metro-North Railroad | Service Indicators | Percent of commuter trains that arrive at thei... | M | On-Time Performance (West of Hudson) | % | 98.3 | 95 | 4 | 2008 | 96.8 | 95
4 | Metro-North Railroad | Service Indicators | Percent of commuter trains that arrive at thei... | M | On-Time Performance (West of Hudson) | % | 95.8 | 95 | 5 | 2008 | 96.6 | 95
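A natural follow-up, sketched here as one possible approach, is to combine PERIOD_YEAR and PERIOD_MONTH into a single datetime column (the new PERIOD column name is my own):

perf['PERIOD'] = pd.to_datetime(perf['PERIOD_YEAR'].astype(str) + '-' +
                                perf['PERIOD_MONTH'].astype(str))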
XML data can get much more complicated than this example. Each tag can also contain metadata. Consider an HTML link tag, which is also valid XML:
from io import StringIO
tag = '<a href="http://www.google.com">Google</a>'
root = objectify.parse(StringIO(tag)).getroot()
We can now access any of the fields (like href) in the tag, or the link text:
root
root.get('href')
'http://www.google.com'
root.text
'Google'