import cPickle as pickle
Error: ModuleNotFoundError: No module named 'cPickle'
cPickle and pickle are both modules that turn an arbitrarily complex object into a text or binary representation of that object.
How do we fix this?
First, what is pickle actually for?
Persistence means keeping objects around, even across multiple runs of the same program.
You want to store objects on disk so you can retrieve them later; that is persistence (not whatever else you were thinking!).
There are several ways to achieve this, each with its own advantages and drawbacks.
If you want to store Python objects transparently, without losing information such as their identity and type,
you need some form of object serialization.
Serialization is the process of turning an arbitrarily complex object into a text or binary representation of that object.
In Python, this serialization process is called pickling.
You can pickle an object into a string, a file on disk, or any file-like object,
and you can unpickle those strings, files, or file-like objects back into the original object.
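A minimal sketch of that round trip, using only the standard library (the dict here is just an illustrative object):
import io
import pickle

obj = {'name': 'example', 'sizes': [32, 32, 3]}

# Pickle to an in-memory bytes string and back.
blob = pickle.dumps(obj)
assert pickle.loads(blob) == obj

# Pickle to any file-like object (here an io.BytesIO) and back.
buf = io.BytesIO()
pickle.dump(obj, buf)
buf.seek(0)
assert pickle.load(buf) == obj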
Now let's fix the actual problem!
It turns out that pickle itself is right there in the standard library; in Python 3 there is no separate cPickle module any more, because its C implementation was folded into the standard library (as _pickle) and the plain pickle module uses it automatically.
You can check this yourself by searching the Anaconda install folder, e.g. D:\Anaconda\Lib, for pickle.
(screenshot of the search result)
You will then find that all you need to write is
import pickle
and it works. Importing pandas first (import pandas as pd) is not actually required; pickle does not depend on it.
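If the same script has to run on both Python 2 and Python 3, a common pattern is a guarded import; a minimal sketch:
try:
    import cPickle as pickle   # Python 2: use the fast C implementation
except ImportError:            # ModuleNotFoundError is a subclass of ImportError
    import pickle              # Python 3: pickle already uses the C version (_pickle)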
---------------------
How to fix the following error when loading a file with pickle.load().
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-...> in <module>()
1 # Load the raw CIFAR-10 data.
2 cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
----> 3 X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
4
5 # As a sanity check, we print out the size of the training and test data.
D:\python3wp\assignment1\cs231n\data_utils.py in load_CIFAR10(ROOT)
20 for b in range(1,6):
21 f = os.path.join(ROOT, 'data_batch_%d' % (b, ))
---> 22 X, Y = load_CIFAR_batch(f)
23 xs.append(X)
24 ys.append(Y)
D:\python3wp\assignment1\cs231n\data_utils.py in load_CIFAR_batch(filename)
7 """ load single batch of cifar """
8 with open(filename, 'rb') as f:
----> 9 datadict = pickle.load(f)
10 X = datadict['data']
11 Y = datadict['labels']
UnicodeDecodeError: 'ascii' codec can't decode byte 0x8b in position 6: ordinal not in range(128)
First, the documentation for pickle.load():
pickle.load(file, *, fix_imports=True, encoding="ASCII", errors="strict")
Read a pickled object representation from the open file object file and return the reconstituted object hierarchy specified therein. This is equivalent to Unpickler(file).load().
The protocol version of the pickle is detected automatically, so no protocol argument is needed. Bytes past the pickled object’s representation are ignored.
The argument file must have two methods, a read() method that takes an integer argument, and a readline() method that requires no arguments. Both methods should return bytes. Thus file can be an on-disk file opened for binary reading, an io.BytesIO object, or any other custom object that meets this interface.
Optional keyword arguments are fix_imports, encoding and errors, which are used to control compatibility support for pickle stream generated by Python 2. If fix_imports is true, pickle will try to map the old Python 2 names to the new names used in Python 3. The encoding and errors tell pickle how to decode 8-bit string instances pickled by Python 2; these default to ‘ASCII’ and ‘strict’, respectively. The encoding can be ‘bytes’ to read these 8-bit string instances as bytes objects.
import pickle
li = [1, 2, 3]
# Write the list out to a file on disk (binary mode is required).
pickle.dump(li, open('db', 'wb'))
# Load it back and rebuild the original object.
ret = pickle.load(open('db', 'rb'))
print(ret)  # [1, 2, 3]
In other words, pickle.load() decodes with encoding="ASCII" by default, but the file we are loading was not written as ASCII text, so we have to change the encoding argument.
The fix is:
pickle.load(f, encoding='bytes')
With encoding='bytes', the 8-bit string instances pickled by Python 2 are read back as bytes objects.
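For example, applied to one of the CIFAR-10 batch files from the traceback above (a sketch; the path is the dataset directory the assignment uses):
import pickle

# One of the batch files under the dataset directory from the traceback.
path = 'cs231n/datasets/cifar-10-batches-py/data_batch_1'
with open(path, 'rb') as f:
    datadict = pickle.load(f, encoding='bytes')
# The file now loads without the UnicodeDecodeError.
print(type(datadict))  # <class 'dict'>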
KeyError: 'data'
I ran into this problem while doing the homework for Fei-Fei Li's deep learning and computer vision course (the cs231n assignments). I was using the officially provided code and only typing it out myself, yet it raised KeyError: 'data'.
Searching around turned up nothing. The offending code is this:
import pickle
import numpy as np

def load_CIFAR_batch(filename):
    """ load single batch of cifar """
    with open(filename, 'rb') as f:
        datadict = pickle.load(f, encoding='bytes')
        Y = datadict['labels']
        X = datadict['data']
        X = X.reshape(10000, 3, 32, 32).transpose(0, 2, 3, 1).astype("float")
        Y = np.array(Y)
        return X, Y
The error simply means the dict datadict has no key named 'data', so searching the web did not help. After debugging and inspecting datadict, I noticed that the keys data and labels each carry a b prefix, so I added a b in front of 'data' and 'labels', like this:
Y = datadict[b'labels']
X = datadict[b'data']
After that it ran normally...
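Putting both changes together (encoding='bytes' when loading, plus the b'' prefixes on the keys), the batch loader becomes (pickle and np imported as above):
def load_CIFAR_batch(filename):
    """ load single batch of cifar """
    with open(filename, 'rb') as f:
        datadict = pickle.load(f, encoding='bytes')
        Y = datadict[b'labels']
        X = datadict[b'data']
        X = X.reshape(10000, 3, 32, 32).transpose(0, 2, 3, 1).astype("float")
        Y = np.array(Y)
        return X, Y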
Only after more searching did I realize this is a Python 2 vs Python 3 issue. The official code library I was using was written in Python 2, and the official training data was also pickled with Python 2, but I was running everything with Python 3, which is why the error appeared.
Here is what the search turned up:
b: bytes
In Python 3.x, the default str is Unicode; bytes corresponds to the str of Python 2.x, and the b'' prefix denotes a bytes literal.
In Python 2.x, the b prefix has no real effect; it exists only for compatibility with the Python 3.x notation.
So adding the b'' prefix here is what makes the Python 2 code work under Python 3.
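A quick check in the Python 3 interpreter makes the difference visible:
>>> type('data'), type(b'data')
(<class 'str'>, <class 'bytes'>)
>>> 'data' == b'data'
False
Because 'data' and b'data' compare unequal in Python 3 (in Python 2 they would be equal), datadict['data'] raises KeyError even though datadict[b'data'] succeeds.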
One last remark: the differences between Python 2 and 3 are a real minefield for beginners.
---------------------
Author: kejizuiqianfang
Source: CSDN
Original post: https://blog.csdn.net/kejizuiqianfang/article/details/81625067
Copyright notice: this is an original article by the blogger; please include a link to the original when reposting.