I have recently been doing order-placement performance testing: measuring how long it takes from sending an order until the order acknowledgement comes back. The basic approach: place orders in a batch with a for loop, write a log line at the point of sending and at the point of receiving the acknowledgement, then analyze the logs to get each send-to-ack duration.
# Collected intervals from order placement to order acknowledgement, in ms
time_list=[585.044921875, 60.8359375, 69.81298828125, 64.8271484375, 73.803955078125, 61.836181640625, 62.83203125, 65.823974609375, 64.823974609375, 63.829833984375, 60.8359375, 57.843017578125, 94.739013671875, 62.8330078125, 63.8291015625, 62.77099609375, 65.82421875, 58.840087890625, 57.170166015625, 65.903076171875, 517.614990234375, 71.81005859375, 68.81494140625, 65.822021484375, 61.806884765625, 66.9072265625, 67.81689453125, 76.796875, 75.799072265625, 70.809814453125, 62.076904296875, 75.7958984375, 65.7529296875, 78.7861328125, 63.81396484375, 66.89697265625, 297.257080078125, 72.156982421875, 63.862060546875, 65.824951171875, 81.03515625, 63.8349609375, 60.764892578125, 64.959228515625, 65.824951171875, 1100.06005859375, 99.73291015625, 71.803955078125, 266.233154296875, 70.81005859375, 66.81494140625, 98.740966796875, 70.818115234375, 68.81298828125, 63.830078125, 66.74609375, 63.8681640625, 67.81787109375, 64.828857421875, 61.7998046875, 64.828125, 62.760009765625, 63.8349609375, 77.7900390625, 68.779052734375, 63.885009765625, 612.3271484375, 63.777099609375, 69.815185546875, 67.869873046875, 1836.067138671875, 843.743896484375, 65.822998046875, 66.822998046875, 71.80712890625, 63.864990234375, 58.80810546875, 911.56298828125, 60.842041015625, 67.842041015625, 67.81689453125, 71.80908203125, 61.833984375, 70.81005859375, 60.19384765625, 62.045166015625, 62.844970703125, 1718.408203125, 719.077880859375, 58.844970703125, 63.828125, 61.8359375, 62.8291015625, 63.77490234375, 63.83203125, 62.861083984375, 79.7861328125, 65.823974609375, 69.81298828125, 65.822998046875, 197.324951171875, 62.8349609375, 69.85205078125, 1675.525146484375, 678.18798828125, 63.8271484375, 152.591796875, 67.822998046875, 68.76123046875, 1570.799072265625, 570.47509765625, 60.838134765625, 62.037109375, 64.830078125, 69.818115234375, 60.8359375, 65.822021484375, 60.073974609375, 68.85595703125, 72.802978515625, 60.114990234375, 60.8369140625, 61.832763671875, 58.843994140625, 
63.831787109375, 71.806884765625, 60.837890625, 69.7529296875, 88.760986328125, 63.8291015625, 63.092041015625, 499.6669921875, 62.8359375, 71.85888671875, 72.80419921875, 61.833984375, 70.81005859375, 62.838134765625, 98.73388671875, 60.828857421875, 836.761962890625, 62.83203125, 72.968017578125, 59.83984375, 64.8251953125, 63.14990234375, 63.8291015625, 64.825927734375, 70.5458984375, 62.83203125, 61.833984375, 68.822021484375, 68.81689453125, 60.7998046875, 212.433837890625, 60.826904296875, 64.827880859375, 88.761962890625, 71.808837890625, 71.81103515625, 252.326904296875, 60.839111328125, 72.880859375, 66.0419921875, 1606.6689453125, 621.341064453125, 58.89111328125, 70.808837890625, 67.81982421875, 58.843994140625, 61.8349609375, 69.81103515625, 66.89404296875, 70.81201171875, 65.822998046875, 63.7568359375, 119.677978515625, 64.825927734375, 1284.568115234375, 336.101806640625, 1084.10205078125, 72.803955078125, 65.768798828125, 63.825927734375, 64.8388671875, 57.84716796875, 69.81396484375, 72.802978515625, 62.796142578125, 59.7861328125, 99.73291015625, 65.825927734375, 72.824951171875, 77.793212890625, 69.81591796875, 65.822998046875, 63.071044921875, 85.77197265625, 69.81103515625, 60.01416015625, 61.8349609375, 63.843994140625, 649.26416015625, 68.81396484375, 62.778076171875, 69.869140625, 66.197998046875, 66.8271484375, 68.81591796875, 71.053955078125, 66.822998046875, 679.182861328125, 67.818115234375, 84.764892578125, 68.782958984375, 62.83203125, 106.719970703125, 70.808837890625, 62.8310546875, 63.827880859375, 60.8369140625, 74.799072265625, 65.883056640625, 61.796142578125, 62.8369140625, 66.01708984375, 64.824951171875, 73.194091796875, 68.761962890625, 750.992919921875, 66.821044921875, 98.68212890625, 179.52001953125, 464.697998046875, 63.7900390625, 68.81787109375, 58.83984375, 68.81396484375, 72.804931640625, 62.828857421875, 69.80810546875, 61.215087890625, 67.820068359375, 66.821044921875, 61.77490234375, 65.82421875, 61.833984375, 
66.81787109375, 65.218994140625, 1513.9541015625, 514.62109375, 92.1669921875, 64.826904296875, 1236.7060546875, 225.39697265625, 63.828857421875, 64.779052734375, 943.47802734375, 99.735107421875, 67.81689453125, 1076.123046875, 82.779052734375, 62.8310546875, 64.864990234375, 73.801025390625, 65.822998046875, 67.0, 62.834228515625, 66.821044921875, 63.8310546875]
Run some simple statistics on it:
import numpy as np
print("Mean (ms):", np.mean(time_list))
print("Std dev (ms):", np.std(time_list))
print("Max (ms):", np.max(time_list))
print("Min (ms):", np.min(time_list))
print("Order count:", len(time_list))
Result:
Mean (ms): 166.8326877170139
Std dev (ms): 306.0126590408985
Max (ms): 1836.067138671875
Min (ms): 57.170166015625
Order count: 270
These simple statistics already show that the spread is huge: the minimum is 57 ms but the maximum reaches 1836 ms, so order placement is far from stable. This raises a few questions: how many orders took over 1 s, and how are they distributed? How many fall between 500 ms and 1 s, and how are those distributed? And which are the 10 largest and 10 smallest values?
To answer these questions, the first thing that comes to mind is probably a plain for loop. That is what I did at first too: when time is tight and the workload heavy, you do what works. Later I searched around and switched to the approach below. There may well be better ways; leave a comment if you know one!
For the distribution I did not do anything fancy: just find the matching indices.
big_value_index = [k for k, v in enumerate(time_list) if v >= 1000]
mid_value_index = [k for k, v in enumerate(time_list) if 500 <= v < 1000]
len_big = len(big_value_index)
len_mid = len(mid_value_index)
print("Count over 1 s:", len_big, ", at order numbers:", big_value_index)
print("Count between 500 ms and 1 s:", len_mid, ", at order numbers:", mid_value_index)
Result:
Count over 1 s: 11 , at order numbers: [45, 70, 87, 103, 109, 164, 178, 180, 249, 253, 260]
Count between 500 ms and 1 s: 15 , at order numbers: [0, 20, 66, 71, 77, 88, 104, 110, 140, 165, 202, 211, 229, 250, 257]
The slow orders are spread fairly evenly across the run, which means the instability is not confined to one stretch of time but persists through the whole test.
Technical note: the key tool above is the enumerate function, which wraps a sequence (list, tuple, str, etc.) so that each element comes paired with its index, starting at 0 by default.
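As a quick illustration on a made-up list: enumerate yields (index, value) pairs, and its optional start argument shifts the numbering, which is handy if you prefer counting orders from 1.

```python
letters = ["a", "b", "c"]

# default numbering starts at 0
print(list(enumerate(letters)))           # [(0, 'a'), (1, 'b'), (2, 'c')]

# start=1 to count "the Nth order" from 1
print(list(enumerate(letters, start=1)))  # [(1, 'a'), (2, 'b'), (3, 'c')]
```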
For the N largest and smallest values, are you reaching for a for loop again? No need: there are ready-made functions.
import heapq
print("10 largest times:", heapq.nlargest(10, time_list))
print("10 smallest times:", heapq.nsmallest(10, time_list))
Result:
10 largest times: [1836.067138671875, 1718.408203125, 1675.525146484375, 1606.6689453125, 1570.799072265625, 1513.9541015625, 1284.568115234375, 1236.7060546875, 1100.06005859375, 1084.10205078125]
10 smallest times: [57.170166015625, 57.843017578125, 57.84716796875, 58.80810546875, 58.83984375, 58.840087890625, 58.843994140625, 58.843994140625, 58.844970703125, 58.89111328125]
What if you also want the indices those values sit at? Think about it.
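One possible answer, sketched on a small made-up sample rather than the real measurements: feed nlargest() the (index, value) pairs from enumerate and rank them by the value via the key argument, so each result keeps its index.

```python
import heapq

# small made-up sample, not the real measurements
times = [585.0, 60.8, 1836.1, 64.8, 1718.4, 57.2]

# rank (index, value) pairs by the value component
top3 = heapq.nlargest(3, enumerate(times), key=lambda kv: kv[1])
print(top3)  # [(2, 1836.1), (4, 1718.4), (0, 585.0)]
```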
A passage from the classic Python Cookbook applies here:
The nlargest() and nsmallest() functions are most appropriate when the number of items you want is relatively small. If you simply want the single smallest or largest element (N=1), min() and max() are faster. Likewise, when N approaches the size of the collection, it is usually faster to sort the collection first and take a slice (sorted(items)[:N] or sorted(items)[-N:]). nlargest() and nsmallest() only show their advantage in the right setting; once N gets close to the collection size, sorting is the better choice.
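The trade-off the Cookbook describes can be sketched on toy data (all three approaches return the same elements; only their cost profiles differ):

```python
import heapq

items = [9, 1, 7, 3, 5, 8, 2]

# N = 1: plain min()/max() is fastest
print(min(items), max(items))      # 1 9

# small N relative to len(items): a heap avoids a full sort
print(heapq.nsmallest(2, items))   # [1, 2]

# N close to len(items): sort once, then slice
print(sorted(items)[:6])           # [1, 2, 3, 5, 7, 8]
print(sorted(items)[-6:])          # [2, 3, 5, 7, 8, 9]
```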
Use list slicing well and your data-processing efficiency will improve a great deal~~