I'll give a rough overview of the mechanics these functions use to convert text data into a DataFrame. Their options fall into a few broad categories: indexing, type inference and data conversion, date and time parsing, iterating over large files, and dealing with unclean data.
Because data in the real world can be quite messy, some of the loading functions (especially read_csv) have accumulated a long list of options over time. Feeling overwhelmed by the number of parameters is normal (read_csv has over 50 of them). The pandas documentation has examples for these parameters, so if a particular file is giving you trouble, there is probably a similar enough example to help you find the right settings.
Some of these functions, like pandas.read_csv, perform type inference, because the column data types are not part of the data format. That means you don't have to specify whether a column is numeric, integer, boolean, or string. Other data formats, like HDF5, Feather, and msgpack, store the data types in the format itself.
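Type inference usually does the right thing, but you can override it explicitly when it guesses wrong. A minimal sketch, assuming the ex1.csv file used below; the dtype argument is a standard read_csv option, and the specific type mapping here is just for illustration:

import pandas as pd

# force particular column types instead of relying on inference
typed = pd.read_csv('F:/hellopython/数据分析/ex1.csv',
                    dtype={'a': 'float64', 'message': 'object'})
typed.dtypes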
df = pd.read_csv('F:/hellopython/数据分析/ex1.csv')
#pd.read_table('F:/hellopython/数据分析/ex1.csv', sep=',')  # this works as well
df
a b c d message
0 1 2 3 4 hello
1 5 6 7 8 world
2 9 10 11 12 foo
! type "ex2.csv" # take a quick look at the file first; type only finds it if it's in the current directory
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo
pd.read_csv('F:/hellopython/数据分析/ex2.csv', header=None)
0 1 2 3 4
0 1 2 3 4 hello
1 5 6 7 8 world
2 9 10 11 12 foo
pd.read_csv('F:/hellopython/数据分析/ex2.csv', names=['a', 'b', 'c', 'd', 'message'])
a b c d message
0 1 2 3 4 hello
1 5 6 7 8 world
2 9 10 11 12 foo
names = ['a', 'b', 'c', 'd', 'message']
pd.read_csv('F:/hellopython/数据分析/ex2.csv', names=names, index_col='message')
a b c d
message
hello 1 2 3 4
world 5 6 7 8
foo 9 10 11 12
! type "csv_mindex.csv"
key1,key2,value1,value2
one,a,1,2
one,b,3,4
one,c,5,6
one,d,7,8
two,a,9,10
two,b,11,12
two,c,13,14
two,d,15,16
parsed = pd.read_csv('csv_mindex.csv', index_col=['key1', 'key2'])
parsed
           value1  value2
key1 key2
one a 1 2
b 3 4
c 5 6
d 7 8
two a 9 10
b 11 12
c 13 14
d 15 16
list(open('ex3.txt'))
[' A B C\n',
'aaa -0.264438 -1.026059 -0.619500\n',
'bbb 0.927272 0.302904 -0.032399\n',
'ccc -0.264273 -0.386314 -0.217601\n',
'ddd -0.871858 -0.348382 1.100491\n']
result = pd.read_table('F:/hellopython/数据分析/ex3.txt', sep='\s+')
result
A B C
aaa -0.264438 -1.026059 -0.619500
bbb 0.927272 0.302904 -0.032399
ccc -0.264273 -0.386314 -0.217601
ddd -0.871858 -0.348382 1.100491
! type ex4.csv
# hey! # (note: lines 1, 3, and 4 of this file are comment lines)
a,b,c,d,message
# just wanted to make things more difficult for you
# who reads CSV files with computers, anyway?
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo
pd.read_csv('ex4.csv', skiprows=[0, 2, 3])
a b c d message
0 1 2 3 4 hello
1 5 6 7 8 world
2 9 10 11 12 foo
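skiprows works when you know in advance which line numbers to drop. read_csv also has a comment parameter that treats everything after a given character as a comment; a minimal sketch, under the assumption that '#' only ever marks comments in this file:

pd.read_csv('ex4.csv', comment='#')  # should give the same result as skiprows above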
! type ex5.csv
something,a,b,c,d,message
one,1,2,3,4,NA
two,5,6,,8,world
three,9,10,11,12,foo
result = pd.read_csv('ex5.csv', na_values=['NULL'])
result
something a b c d message
0 one 1 2 3.0 4 NaN
1 two 5 6 NaN 8 world
2 three 9 10 11.0 12 foo
sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
pd.read_csv('ex5.csv', na_values=sentinels)
something a b c d message
0 one 1 2 3.0 4 NaN
1 NaN 5 6 NaN 8 world
2 three 9 10 11.0 12 NaN
When processing very large files, or when experimenting to find the right set of arguments for processing a large file, you may only want to read a small piece of the file or iterate through it in smaller chunks.
pd.options.display.max_rows = 10
result = pd.read_csv('ex6.csv')
result
one two three four key
0 0.467976 -0.038649 -0.295344 -1.824726 L
1 -0.358893 1.404453 0.704965 -0.200638 B
2 -0.501840 0.659254 -0.421691 -0.057688 G
3 0.204886 1.074134 1.388361 -0.982404 R
4 0.354628 -0.133116 0.283763 -0.837063 Q
... ... ... ... ... ...
9995 2.311896 -0.417070 -1.409599 -0.515821 L
9996 -0.479893 -0.650419 0.745152 -0.646038 E
9997 0.523331 0.787112 0.486066 1.093156 K
9998 -0.362559 0.598894 -1.843201 0.887292 G
9999 -0.096376 -1.012999 -0.657431 -0.573315 0
10000 rows × 5 columns
pd.read_csv('ex6.csv', nrows=5)
one two three four key
0 0.467976 -0.038649 -0.295344 -1.824726 L
1 -0.358893 1.404453 0.704965 -0.200638 B
2 -0.501840 0.659254 -0.421691 -0.057688 G
3 0.204886 1.074134 1.388361 -0.982404 R
4 0.354628 -0.133116 0.283763 -0.837063 Q
chunker = pd.read_csv('ex6.csv', chunksize=1000)
tot = pd.Series([], dtype='float64')
for piece in chunker:
    tot = tot.add(piece['key'].value_counts(), fill_value=0)

tot = tot.sort_values(ascending=False)
tot[:10]  # top ten
E 368.0
X 364.0
L 346.0
O 343.0
Q 340.0
M 338.0
J 337.0
F 335.0
K 334.0
H 330.0
dtype: float64
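The object returned by read_csv with a chunksize is a TextFileReader. Besides iterating over it, you can also pull pieces of an arbitrary size with its get_chunk method; a minimal sketch:

chunker = pd.read_csv('ex6.csv', chunksize=1000)
piece = chunker.get_chunk(10)  # read just the next 10 rows
piece.shape                    # (10, 5)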
data = pd.read_csv('ex5.csv')
data
something a b c d message
0 one 1 2 3.0 4 NaN
1 two 5 6 NaN 8 world
2 three 9 10 11.0 12 foo
data.to_csv('out1.csv')
! type "out1.csv"
,something,a,b,c,d,message
0,one,1,2,3.0,4,
1,two,5,6,,8,world
2,three,9,10,11.0,12,foo
data.to_csv('out1.csv', sep='|')
! type "out1.csv"
|something|a|b|c|d|message
0|one|1|2|3.0|4|
1|two|5|6||8|world
2|three|9|10|11.0|12|foo
data.to_csv('out1.csv', sep='|', na_rep='NULL')
! type "out1.csv"
|something|a|b|c|d|message
0|one|1|2|3.0|4|NULL
1|two|5|6|NULL|8|world
2|three|9|10|11.0|12|foo
data.to_csv('out1.csv', index=False, header=False)
! type "out1.csv"
one,1,2,3.0,4,
two,5,6,,8,world
three,9,10,11.0,12,foo
data.to_csv('out1.csv', index=False, columns=['a', 'b', 'c'])
! type "out1.csv"
a,b,c
1,2,3.0
5,6,
9,10,11.0
import numpy as np
dates = pd.date_range('1/1/2000', periods=7)
ts = pd.Series(np.arange(7), index=dates)
ts.to_csv('ts.csv')
ts
2000-01-01 0
2000-01-02 1
2000-01-03 2
2000-01-04 3
2000-01-05 4
2000-01-06 5
2000-01-07 6
Freq: D, dtype: int32
! type "ts.csv"
,0 # where does this ',0' line come from? (see the note after this listing)
2000-01-01,0
2000-01-02,1
2000-01-03,2
2000-01-04,3
2000-01-05,4
2000-01-06,5
2000-01-07,6
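That extra ',0' line is just the header row that Series.to_csv writes: the index has no name and the Series has no name, so pandas falls back to 0 as the value column label. Instead of deleting it by hand, you can suppress the header or give the column a real name; a minimal sketch (the column label 'value' is made up here):

ts.to_csv('ts.csv', header=False)      # write no header line at all
ts.to_csv('ts.csv', header=['value'])  # or give the value column an explicit name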
Most forms of tabular data stored on disk can be loaded with functions like pandas.read_table. In some cases, however, some manual processing is still necessary. It's not uncommon to receive a file with one or more malformed lines that trip up read_table.
! type "ex7.csv"
"a","b","c"
"1","2","3"
"1","2","3"
import csv
f = open('ex7.csv')
reader = csv.reader(f)
for line in reader:
    print(line)
['a', 'b', 'c']
['1', '2', '3']
['1', '2', '3']
with open('ex7.csv') as f:
    lines = list(csv.reader(f))
print(lines)
[['a', 'b', 'c'], ['1', '2', '3'], ['1', '2', '3']]
header, values = lines[0], lines[1:]
data_dict = {h: v for h, v in zip(header, zip(*values))}  # zip(*values) transposes the rows into columns
data_dict
{'a': ('1', '1'), 'b': ('2', '2'), 'c': ('3', '3')}  # now the data is in the column-oriented form we want
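If zip(*values) looks mysterious: unpacking the rows as separate arguments to zip pairs up the first elements, then the second elements, and so on, which effectively transposes rows into columns. A tiny illustration:

rows = [('1', '2', '3'), ('1', '2', '3')]
list(zip(*rows))  # [('1', '1'), ('2', '2'), ('3', '3')]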
class my_dialect(csv.Dialect):
    lineterminator = '\n'        # line terminator used for writing; defaults to '\r\n'
    delimiter = ';'              # single character separating fields; defaults to ','
    quotechar = '"'              # quote character for fields containing special characters (such as the delimiter); defaults to '"'
    quoting = csv.QUOTE_MINIMAL  # quoting convention: quote only fields that need it
with open('mydata.csv', 'w') as f:
    writer = csv.writer(f, dialect=my_dialect)
    writer.writerow(('one', 'two', 'three'))
    writer.writerow(('1', '2', '3'))
    writer.writerow(('4', '5', '6'))
    writer.writerow(('7', '8', '9'))
! type "mydata.csv"
one;two;three # with delimiter='|' instead, the fields would be separated by | rather than ;
1;2;3
4;5;6
7;8;9
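You don't have to define a Dialect subclass; the individual dialect options can also be passed as keyword arguments to csv.reader and csv.writer. A minimal sketch, reading back the file written above:

with open('mydata.csv') as f:
    reader = csv.reader(f, delimiter=';')  # same effect as the my_dialect class
    for line in reader:
        print(line)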
JSON (short for JavaScript Object Notation) has become one of the standard formats for sending data by HTTP request between web browsers and other applications. It is a much more free-form data format than a tabular text form like CSV.
obj = """
{"name": "Wes",
 "places_lived": ["United States", "Spain", "Germany"],
 "pet": null,
 "siblings": [{"name": "Scott", "age": 30, "pets": ["Zeus", "Zuko"]},
              {"name": "Katie", "age": 38,
               "pets": ["Sixes", "Stache", "Cisco"]}]
}
"""
Apart from its null value and a few other nuances (such as not allowing trailing commas at the end of lists), JSON is very nearly valid Python code. The basic types are objects (dicts), arrays (lists), strings, numbers, booleans, and null. All of the keys in an object must be strings. There are several Python libraries for reading and writing JSON; I'll use json here, as it is built into the Python standard library.
import json
result = json.loads(obj)
result
{'name': 'Wes',
'pet': None,
'places_lived': ['United States', 'Spain', 'Germany'],
'siblings': [{'age': 30, 'name': 'Scott', 'pets': ['Zeus', 'Zuko']},
{'age': 38, 'name': 'Katie', 'pets': ['Sixes', 'Stache', 'Cisco']}]}
asjson = json.dumps(result)
siblings = pd.DataFrame(result['siblings'], columns=['name', 'age'])
siblings
name age
0 Scott 30
1 Katie 38
! type example.json
[{"a": 1, "b": 2, "c": 3},
{"a": 4, "b": 5, "c": 6},
{"a": 7, "b": 8, "c": 9}]
data = pd.read_json('example.json')
data
a b c
0 1 2 3
1 4 5 6
2 7 8 9
print(data.to_json())
{"a":{"0":1,"1":4,"2":7},"b":{"0":2,"1":5,"2":8},"c":{"0":3,"1":6,"2":9}}
print(data.to_json(orient='records'))
[{"a":1,"b":2,"c":3},{"a":4,"b":5,"c":6},{"a":7,"b":8,"c":9}]
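to_json and read_json also handle newline-delimited JSON (one record per line), which is common for log-style data. A minimal sketch; the filename records.jsonl is made up:

data.to_json('records.jsonl', orient='records', lines=True)  # one JSON object per line
pd.read_json('records.jsonl', lines=True)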
One of the easiest ways to store data efficiently in a binary format is using Python's built-in pickle serialization.
frame = pd.read_csv('ex1.csv')
frame
a b c d message
0 1 2 3 4 hello
1 5 6 7 8 world
2 9 10 11 12 foo
frame.to_pickle('frame_pickle')
pd.read_pickle('frame_pickle')
a b c d message
0 1 2 3 4 hello
1 5 6 7 8 world
2 9 10 11 12 foo
#### Note: pickle is only recommended as a short-term storage format. The problem is that it is hard to guarantee the format will be stable over time; an object pickled today may fail to unpickle with a later version of a library. pandas has tried to maintain backward compatibility where possible, but at some point in the future it may still be necessary to "break" the pickle format.
HDF5 is a well-regarded file format for storing large quantities of scientific array data. It is available as a C library, and it has interfaces in many other languages, including Java, Python, and MATLAB. The "HDF" in HDF5 stands for hierarchical data format. Each HDF5 file contains an internal, file-system-like node structure, enabling you to store multiple datasets along with metadata. Compared with simpler formats, HDF5 supports on-the-fly compression with a variety of compressors, so data with repeated patterns can be stored more efficiently. HDF5 is a good choice for very large datasets that don't fit into memory, as you can efficiently read and write small sections of much larger arrays.
frame = pd.DataFrame({'a': np.random.randn(100)})
store = pd.HDFStore('mydata.h5')
store['obj1'] = frame
store['obj1_col'] = frame['a']
store
File path: mydata.h5
/obj1 frame (shape->[100,1])
/obj1_col series (shape->[100])
/obj2 frame_table (typ->appendable,nrows->100,ncols->1,indexers->
[index])
/obj3 frame_table (typ->appendable,nrows->100,ncols->1,indexers->
[index])
store['obj1']
a
0 -0.204708
1 0.478943
2 -0.519439
3 -0.555730
4 1.965781
.. ...
95 0.795253
96 0.118110
97 -0.748532
98 0.584970
99 0.152677
[100 rows x 1 columns]
store.put('obj2', frame, format='table')
store.select('obj2', where=['index >= 10 and index <= 15'])
a
10 1.007189
11 -1.296221
12 0.274992
13 0.228913
14 1.352917
15 0.886429
store.close()
frame.to_hdf('mydata.h5', 'obj3', format='table')
pd.read_hdf('mydata.h5', 'obj3', where=['index < 5'])
a
0 -0.204708
1 0.478943
2 -0.519439
3 -0.555730
4 1.965781
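As mentioned above, HDF5 supports transparent compression. Both HDFStore and to_hdf accept complevel and complib arguments; a minimal sketch (the filename is made up, and the 'blosc' compressor must be available in your PyTables build):

frame.to_hdf('mydata_compressed.h5', 'obj1', format='table',
             complevel=9, complib='blosc')
pd.read_hdf('mydata_compressed.h5', 'obj1')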
pandas' ExcelFile class and pandas.read_excel function support reading tabular data stored in Excel 2003 (and newer) files. Internally, these tools use the add-on packages xlrd and openpyxl to read XLS and XLSX files, respectively.
xlsx = pd.ExcelFile('ex1.xlsx')
pd.read_excel(xlsx, 'Sheet1')
a b c d message
0 1 2 3 4 hello
1 5 6 7 8 world
2 9 10 11 12 foo
writer = pd.ExcelWriter('ex2.xlsx')
frame.to_excel(writer, 'Sheet1')
writer.save()
frame.to_excel('ex2.xlsx')
Many websites have public APIs providing data feeds via JSON or some other format. There are a number of ways to access these APIs from Python; one easy-to-use (and recommended) method is the requests package (http://docs.python-requests.org).
import requests
url = 'https://api.github.com/repos/pandas-dev/pandas/issues'
resp = requests.get(url)
data = resp.json()
issues = pd.DataFrame(data, columns=['number', 'title', 'labels', 'state'])
issues
number title labels state
0 47745 BUG: Behavior with fallback between raise and ... [] open
1 47744 ENH/TST: Add quantile & mode tests for ArrowEx... [{'id': 127685, 'node_id': 'MDU6TGFiZWwxMjc2OD... open
2 47743 BUG: groupby transform functions ignore dropna [{'id': 76811, 'node_id': 'MDU6TGFiZWw3NjgxMQ=... open
3 47742 DOC: Clarify return type cases in pandas.unique [] open
4 47740 DOC: update min package versions in install.rs... [{'id': 134699, 'node_id': 'MDU6TGFiZWwxMzQ2OT... open
5 47738 DEPR: deprecate pandas.tests [] open
6 47737 BUILD: install from source fails using Docker ... [{'id': 129350, 'node_id': 'MDU6TGFiZWwxMjkzNT... open
7 47736 TST: add test for last() on dataframe grouped ... [{'id': 127685, 'node_id': 'MDU6TGFiZWwxMjc2OD... open
8 47735 DOC: `pandsa.eval` main body text should cancl... [{'id': 134699, 'node_id': 'MDU6TGFiZWwxMzQ2OT... open
9 47734 BUG: `df.eval` can't concatenate string column... [{'id': 76811, 'node_id': 'MDU6TGFiZWw3NjgxMQ=... open
10 47732 DOC: Updating some capitalization in doc/sourc... [{'id': 134699, 'node_id': 'MDU6TGFiZWwxMzQ2OT... open
11 47730 ENH/TST: Add Reduction tests for ArrowExtensio... [{'id': 127685, 'node_id': 'MDU6TGFiZWwxMjc2OD... open
12 47729 TYP: freq and na_value [] open
13 47727 DOC: update min package versions in install.rs... [{'id': 134699, 'node_id': 'MDU6TGFiZWwxMzQ2OT... open
14 47726 PERF: Slow hdf_read on 10+ million row data + ... [{'id': 8935311, 'node_id': 'MDU6TGFiZWw4OTM1M... open
15 47725 ENH: Proposed min s3fs 2021.5.0 is incompatib... [{'id': 76812, 'node_id': 'MDU6TGFiZWw3NjgxMg=... open
16 47724 BUG: numeric_only with axis=1 in DataFrame.cor... [{'id': 76811, 'node_id': 'MDU6TGFiZWw3NjgxMQ=... open
17 47721 BUG: Rolling std() error [{'id': 76811, 'node_id': 'MDU6TGFiZWw3NjgxMQ=... open
18 47720 ENH: Timestamp.min/max/resolution support non-... [{'id': 3713792788, 'node_id': 'LA_kwDOAA0YD87... open
19 47719 GroupBy enhancement unifies the return of iter... [] open
20 47718 API: Consistent handling of duplicate input co... [{'id': 35818298, 'node_id': 'MDU6TGFiZWwzNTgx... open
21 47716 opt out of bottleneck for nanmean [{'id': 527603109, 'node_id': 'MDU6TGFiZWw1Mjc... open
22 47715 TST: Test for the Enum triggering TypeError (#... [{'id': 127685, 'node_id': 'MDU6TGFiZWwxMjc2OD... open
23 47712 DOC: fix typos in "See also" documentation sec... [] open
24 47711 ENH/TST: Add BaseUnaryOpsTests tests for Arrow... [{'id': 127685, 'node_id': 'MDU6TGFiZWwxMjc2OD... open
25 47710 GH: Add CITATION.cff [{'id': 32933285, 'node_id': 'MDU6TGFiZWwzMjkz... open
26 47708 BUG: json_normalize raises boardcasting error ... [{'id': 49379259, 'node_id': 'MDU6TGFiZWw0OTM3... open
27 47706 WEB: Governance community members [{'id': 32933285, 'node_id': 'MDU6TGFiZWwzMjkz... open
28 47705 BUG: groupby.resample have inconsistent behavi... [{'id': 76811, 'node_id': 'MDU6TGFiZWw3NjgxMQ=... open
29 47703 BUG: Assigning to a shallow copy did not chang... [{'id': 76811, 'node_id': 'MDU6TGFiZWw3NjgxMQ=... open
In a business setting, most data may not be stored in text or Excel files. SQL-based relational databases (such as SQL Server, PostgreSQL, and MySQL) are in wide use, and many alternative databases have become quite popular as well. The choice of database is usually dependent on the performance, data integrity, and scalability needs of an application.
import sqlite3
query = """
CREATE TABLE test
(a VARCHAR(20), b VARCHAR(20),
 c REAL, d INTEGER
);"""
con = sqlite3.connect('mydata.sqlite')
con.execute(query)
con.commit()
data = [('Atlanta', 'Georgia', 1.25, 6),
        ('Tallahassee', 'Florida', 2.6, 3),
        ('Sacramento', 'California', 1.7, 5)]
stmt = "INSERT INTO test VALUES(?, ?, ?, ?)"
con.executemany(stmt, data)
cursor = con.execute('select * from test')
rows = cursor.fetchall()
rows
[('Atlanta', 'Georgia', 1.25, 6),
('Tallahassee', 'Florida', 2.6, 3),
('Sacramento', 'California', 1.7, 5)]
pd.DataFrame(rows, columns=[x[0] for x in cursor.description])
a b c d
0 Atlanta Georgia 1.25 6
1 Tallahassee Florida 2.60 3
2 Sacramento California 1.70 5
cursor.description
(('a', None, None, None, None, None, None),
('b', None, None, None, None, None, None),
('c', None, None, None, None, None, None),
('d', None, None, None, None, None, None))
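Munging the cursor results by hand like this gets tedious quickly. pandas has a read_sql function that does it in one step; it accepts a SQLAlchemy connectable and, for SQLite, also the raw DBAPI connection. A minimal sketch reusing the connection above (SQLAlchemy is only needed for the second variant):

pd.read_sql('select * from test', con)

import sqlalchemy as sqla
db = sqla.create_engine('sqlite:///mydata.sqlite')
pd.read_sql('select * from test', db)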