Wrinkles should merely indicate where smiles have been. ------- Mark Twain
On Unix/Linux, the multiprocessing module wraps the fork() call, so we do not have to deal with the details of fork() ourselves. Windows has no fork() system call, so multiprocessing has to "simulate" the effect of fork(): every Python object from the parent process must be serialized with pickle and then passed to the child process. Therefore, if a multiprocessing call fails on Windows, the first thing to check is whether pickling failed.
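To see the pickling constraint in isolation, here is a minimal sketch (not part of the original program, just an illustration): pickle stores a function as a reference to an importable, module-level name, which a named function has and a lambda does not.
import pickle

def named_function():
    return 42

# Works: the function is pickled as a reference to __main__.named_function.
pickle.dumps(named_function)

# Fails: a lambda's name is '<lambda>', which cannot be looked up on the module.
try:
    pickle.dumps(lambda: 42)
except (pickle.PicklingError, AttributeError) as e:
    print('cannot pickle the lambda:', e)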
When simulating a distributed call in Python 3, the test program runs normally on Unix/Linux, but on Windows it raises an error.
The code is as follows:
import random, time, queue
from multiprocessing.managers import BaseManager

task_queue = queue.Queue()
result_queue = queue.Queue()

class QueueManager(BaseManager):
    pass

QueueManager.register('get_task_queue', callable=lambda: task_queue)
QueueManager.register('get_result_queue', callable=lambda: result_queue)

manager = QueueManager(address=('', 5000), authkey=b'abc')
manager.start()

task = manager.get_task_queue()
result = manager.get_result_queue()

for i in range(10):
    n = random.randint(0, 10000)
    print('Put task %d' % n)
    task.put(n)

print('Try get results..')
for i in range(10):
    r = result.get(timeout=10)
    print('Result: %s' % r)

manager.shutdown()
print('master exit.')
The error message:
"E:\python\python project\myfirst\venv\Scripts\python.exe" "E:/python/python project/myfirst/vari/distibuted_master.py"
Traceback (most recent call last):
File "E:/python/python project/myfirst/vari/distibuted_master.py", line 20, in
manager.start()
File "D:\program files\Python3.6\Lib\multiprocessing\managers.py", line 513, in start
self._process.start()
File "D:\program files\Python3.6\Lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\program files\Python3.6\Lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\program files\Python3.6\Lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "D:\program files\Python3.6\Lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x00000000003D2EA0>: attribute lookup <lambda> on __main__ failed
Process finished with exit code 1
The official documentation explains that the pickle module cannot serialize lambda functions, so we have to define ordinary module-level functions ourselves so that they can be pickled. Note that the fixed version also binds the manager to an explicit address ('127.0.0.1') and moves everything that starts the manager under an if __name__ == '__main__': guard, because the Windows "spawn" start method re-imports the module in the child process. The modified code is as follows:
import queue
import random
from multiprocessing.managers import BaseManager

task_queue = queue.Queue()
result_queue = queue.Queue()

# Module-level named functions can be pickled by reference, unlike lambdas.
def return_task_queue():
    global task_queue
    return task_queue

def return_result_queue():
    global result_queue
    return result_queue

class QueueManager(BaseManager):
    pass

if __name__ == '__main__':
    # Register the named functions instead of lambdas.
    QueueManager.register('get_task_queue', callable=return_task_queue)
    QueueManager.register('get_result_queue', callable=return_result_queue)
    # Bind to an explicit address and start the manager only under the
    # __main__ guard, since "spawn" re-imports this module in the child.
    manager = QueueManager(address=('127.0.0.1', 5000), authkey=b'abc')
    manager.start()
    task = manager.get_task_queue()
    result = manager.get_result_queue()
    for i in range(10):
        n = random.randint(0, 10000)
        print('Put task %d' % n)
        task.put(n)
    print('Try get results..')
    for i in range(10):
        r = result.get(timeout=10)
        print('Result: %s' % r)
    manager.shutdown()
    print('master exit.')
The output:
"E:\python\python project\myfirst\venv\Scripts\python.exe" "E:/python/python project/myfirst/vari/distibuted_master.py"
Put task 4752
Put task 5446
Put task 9628
Put task 8701
Put task 225
Put task 3059
Put task 5046
Put task 5630
Put task 9980
Put task 7492
Try get results..
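At this point the master has queued ten tasks and is blocking on result.get(timeout=10), so nothing more is printed until a worker connects and sends results back. The original article only shows the master; the following is a hypothetical worker sketch (the file name, the squaring logic, and the prints are made up for illustration) that connects with the same address and authkey as the master above.
# task_worker.py -- hypothetical worker sketch, not from the original article
import queue, time
from multiprocessing.managers import BaseManager

class QueueManager(BaseManager):
    pass

# Only the names need to be registered on the worker side; the actual queues
# live in the master's manager process and are accessed through proxies.
QueueManager.register('get_task_queue')
QueueManager.register('get_result_queue')

if __name__ == '__main__':
    # Connect to the running master (address and authkey must match).
    m = QueueManager(address=('127.0.0.1', 5000), authkey=b'abc')
    m.connect()
    task = m.get_task_queue()
    result = m.get_result_queue()
    for i in range(10):
        try:
            n = task.get(timeout=1)
            print('run task %d * %d...' % (n, n))
            time.sleep(1)
            result.put('%d * %d = %d' % (n, n, n * n))
        except queue.Empty:
            print('task queue is empty.')
    print('worker exit.')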