Python multiprocessing: TypeError: cannot pickle '_thread.lock' object on Windows but not on Linux

Code:

import multiprocessing as multi

class Controller(object):
    def handle_new(self):
        record = []
        for job_name in self.job_info_dic:
            tasks = self.job_info_dic[job_name]['tasks']
            corn = self.job_info_dic[job_name]['corn']
            interval = self.job_info_dic[job_name]['interval']

            # start one child process per task
            for task in tasks:
                process = multi.Process(target=self.cal_task_job, args=(task, corn, interval,))
                process.start()
                record.append(process)
        for process in record:
            process.join()

The full traceback is:

File "D:\anaconda3\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "D:\anaconda3\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "D:\anaconda3\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "D:\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "D:\anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle '_thread.lock' object

Judging from the traceback, the failure happens while pickling an object, so the next step is to try serializing each argument passed to multi.Process one by one and see which one cannot be pickled. It turns out that:

import pickle
a = pickle.dumps(self.cal_task_job)

raises exactly the same error, so self.cal_task_job is the culprit.
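A minimal way to run this check systematically is a loop like the one below. This is only a sketch: find_unpicklable is a hypothetical helper, and target/args stand in for whatever you actually pass to multi.Process.

import pickle

def find_unpicklable(target, args):
    # Try to pickle the target and every argument individually and
    # report the first one that spawn-based multiprocessing would choke on.
    for name, obj in [('target', target)] + [(f'args[{i}]', a) for i, a in enumerate(args)]:
        try:
            pickle.dumps(obj)
        except TypeError as e:
            print(f'{name} cannot be pickled: {e}')

Calling find_unpicklable(self.cal_task_job, (task, corn, interval)) right before process.start() points straight at the bound method.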

So why does the same code work on Linux?

Because on Windows the only available start method is spawn: the child process is a brand-new interpreter, so the Process object, including its target and args, has to be pickled and sent to it. self.cal_task_job is a bound method, so pickling it means pickling the whole Controller instance, which holds a _thread.lock and therefore cannot be serialized. On Linux (and on macOS before Python 3.8, where fork was still the default) the child is forked from the parent and inherits its memory directly, so nothing needs to be pickled and the lock never reaches the pickler.
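You can confirm that this is a start-method issue rather than a Windows-only quirk by forcing spawn on Linux, which reproduces the same TypeError. A sketch, assuming Controller() can be constructed the same way as in the original program:

import multiprocessing as multi

if __name__ == '__main__':
    multi.set_start_method('spawn')   # Linux defaults to 'fork'; forcing 'spawn' mimics Windows
    print(multi.get_start_method())   # -> spawn
    controller = Controller()
    controller.handle_new()           # now fails on Linux with the same pickling error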

The fix: move cal_task_job out of the class and make it a module-level function instead of a method. With that change the multiprocessing code runs without errors.
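Roughly, the refactor looks like this (a sketch; the body of cal_task_job is the original one, only its location changes):

import multiprocessing as multi

def cal_task_job(task, corn, interval):
    ...  # original method body, now a plain module-level function

class Controller(object):
    def handle_new(self):
        record = []
        for job_name in self.job_info_dic:
            tasks = self.job_info_dic[job_name]['tasks']
            corn = self.job_info_dic[job_name]['corn']
            interval = self.job_info_dic[job_name]['interval']
            for task in tasks:
                # a plain function pickles by reference; self is no longer dragged along
                process = multi.Process(target=cal_task_job, args=(task, corn, interval))
                process.start()
                record.append(process)
        for process in record:
            process.join()

Anything that keeps the Controller instance (and its lock) out of the arguments sent to the child process works; a module-level function is simply the most direct option.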
