msgpack: 10x faster than JSON

JSON and msgpack are both excellent serialization formats. Below is a simple performance comparison between the two:
  • Language: Python 2.7
  • Data to serialize: a dict of roughly 30 MB
  • Test procedure: first fake a dict with 200,000 entries
  • then serialize the dict to a file, read the file back, and deserialize it
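The original post stores the fake data in a Redis hash; as a stand-in, a comparable 200,000-entry dict can be built in memory like this (the key/value names here are made up for illustration):

```python
# Build a dict with 200,000 string key/value entries, roughly
# matching the shape of the post's fake Redis hash.
fake_data = {'field_%d' % i: 'value_%d' % i for i in range(200000)}
print(len(fake_data))  # 200000
```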

  • msgpack

    • serialization: 0.196 s

    • deserialization: 0.093 s

  • json

    • serialization: 0.282 s

    • deserialization: 1.218 s
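The numbers above come from the scripts below. A condensed, self-contained version of the same benchmark (no Redis, and a smaller dict, so the absolute times will differ from the post's) might look like:

```python
import json
import time

import msgpack  # pip install msgpack

# A smaller in-memory stand-in for the post's 200,000-entry dict.
data = {str(i): str(i) for i in range(100000)}

start = time.time()
packed = msgpack.dumps(data)
print('msgpack dumps: %.4f s' % (time.time() - start))

start = time.time()
assert msgpack.loads(packed) == data  # round-trip preserves the dict
print('msgpack loads: %.4f s' % (time.time() - start))

start = time.time()
text = json.dumps(data)
print('json dumps: %.4f s' % (time.time() - start))

start = time.time()
assert json.loads(text) == data
print('json loads: %.4f s' % (time.time() - start))
```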

# Serialize with msgpack: pull the fake data out of Redis, time
# msgpack.dumps, and write the packed bytes to disk.
import os
import time

import msgpack
import redis

pool = redis.ConnectionPool(host='localhost', port=6379, decode_responses=True)
redis_client = redis.Redis(connection_pool=pool)

# hgetall returns the whole hash as a dict (str keys and values,
# because decode_responses=True).
dict_sn = redis_client.hgetall('fake_data_2019-07-11')

start = time.time()
str_sn = msgpack.dumps(dict_sn)  # returns bytes
print(time.time() - start)

# msgpack output is binary, so open the file in 'wb' mode.
if not os.path.exists('./file.msgpack'):
    with open('./file.msgpack', 'wb') as f:
        f.write(str_sn)
else:
    print('File already exists!')

# -*- coding: utf-8 -*-
# Deserialize with msgpack: read the packed bytes back and time
# msgpack.loads.
import os
import time

import msgpack

start = time.time()
if os.path.exists('./file.msgpack'):
    with open('./file.msgpack', 'rb') as f:
        data = f.read()
    dict_sn = msgpack.loads(data)
    print(len(dict_sn))
else:
    print('file does not exist')
print(time.time() - start)
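One practical difference worth noting between the two formats: msgpack produces bytes (hence the binary file modes above) while json produces str, and the packed form is typically smaller. A quick sketch, using an illustrative record that is not from the post's dataset:

```python
import json

import msgpack

record = {'name': 'alice', 'score': 100}  # made-up example record

as_json = json.dumps(record)        # str
as_msgpack = msgpack.dumps(record)  # bytes

print(type(as_json).__name__, len(as_json))
print(type(as_msgpack).__name__, len(as_msgpack))
assert isinstance(as_msgpack, bytes)
assert len(as_msgpack) < len(as_json)  # msgpack is more compact here
```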


# Serialize with JSON: same data and flow as the msgpack script,
# using json.dumps instead.
import json
import os
import time

import redis

pool = redis.ConnectionPool(host='localhost', port=6379, decode_responses=True)
redis_client = redis.Redis(connection_pool=pool)

dict_sn = redis_client.hgetall('fake_data_2019-07-11')

start = time.time()
print(type(dict_sn))
str_sn = json.dumps(dict_sn)  # returns str
print(time.time() - start)

# JSON output is text, so open the file in 'wt' mode.
if not os.path.exists('./file.json'):
    with open('./file.json', 'wt') as f:
        f.write(str_sn)
else:
    print('File already exists!')

# -*- coding: utf-8 -*-
# Deserialize with JSON: read the text back and time json.loads.
import json
import os
import time

start = time.time()
if os.path.exists('./file.json'):
    with open('./file.json', 'rt') as f:
        data = f.read()
    dict_sn = json.loads(data)
    print(len(dict_sn))
else:
    print('file does not exist')
print(time.time() - start)
