PyTorch cifar10_tutorial.py problem: BrokenPipeError: [Errno 32] Broken pipe

The problem

If you run cifar10_tutorial.py on Windows, you have almost certainly hit this error:
BrokenPipeError: [Errno 32] Broken pipe

The solution up front: wrap the code that iterates the DataLoader in an if __name__ == '__main__': guard. See https://github.com/pytorch/examples/issues/201
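A minimal sketch of the fix, assuming the structure and variable names of the official tutorial (transform, trainset, trainloader, and so on); the essential change is that everything which starts DataLoader worker processes runs inside main(), which is only called under the guard:

    # Sketch only: the tutorial's code restructured so that nothing which
    # starts DataLoader worker processes runs at module import time.
    import torch
    import torchvision
    import torchvision.transforms as transforms

    def main():
        transform = transforms.Compose(
            [transforms.ToTensor(),
             transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
        trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                                download=True, transform=transform)
        trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                                  shuffle=True, num_workers=2)

        # ... define net, criterion and optimizer exactly as in the tutorial ...

        for epoch in range(2):
            running_loss = 0.0
            for i, data in enumerate(trainloader, 0):
                inputs, labels = data
                # ... forward pass, loss, backward pass, optimizer.step() ...

    if __name__ == '__main__':
        # Child processes re-import this file, hit this guard, and do NOT
        # re-run main(), so workers are started only by the real main process.
        main()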

Analysis

Running the script with IPython, the whole run up to the error produces the following output:

(pytorch) E:\APytorchDev\TutorialCode>IPython  cifar10_tutorial.py
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "E:\APytorchDev\TutorialCode\cifar10_tutorial.py", line 162, in <module>
    for i, data in enumerate(trainloader, 0):
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
    return _DataLoaderIter(self)
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
    w.start()
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
---------------------------------------------------------------------------
BrokenPipeError                           Traceback (most recent call last)
E:\APytorchDev\TutorialCode\cifar10_tutorial.py in <module>
    160
    161     running_loss = 0.0
--> 162     for i, data in enumerate(trainloader, 0):
    163         # get the inputs
    164         inputs, labels = data

C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
    817
    818     def __iter__(self):
--> 819         return _DataLoaderIter(self)
    820
    821     def __len__(self):

C:\ProgramData\Anaconda3\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    558                 #     before it starts, and __del__ tries to join but will get:
    559                 #     AssertionError: can only join a started process.
--> 560                 w.start()
    561                 self.index_queues.append(index_queue)
    562                 self.workers.append(w)

C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\process.py in start(self)
    110                'daemonic processes are not allowed to have children'
    111         _cleanup()
--> 112         self._popen = self._Popen(self)
    113         self._sentinel = self._popen.sentinel
    114         # Avoid a refcycle if the target function holds an indirect

C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\context.py in _Popen(process_obj)
    221     @staticmethod
    222     def _Popen(process_obj):
--> 223         return _default_context.get_context().Process._Popen(process_obj)
    224
    225 class DefaultContext(BaseContext):

C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\context.py in _Popen(process_obj)
    320         def _Popen(process_obj):
    321             from .popen_spawn_win32 import Popen
--> 322             return Popen(process_obj)
    323
    324     class SpawnContext(BaseContext):

C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
     63             try:
     64                 reduction.dump(prep_data, to_child)
---> 65                 reduction.dump(process_obj, to_child)
     66             finally:
     67                 set_spawning_popen(None)

C:\ProgramData\Anaconda3\envs\pytorch\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
     58 def dump(obj, file, protocol=None):
     59     '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60     ForkingPickler(file, protocol).dump(obj)
     61
     62 #

BrokenPipeError: [Errno 32] Broken pipe

(pytorch) E:\APytorchDev\TutorialCode>

In fact, the error message itself already tells you the solution:

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

The root cause, roughly, is that on Windows the multiprocessing library starts each child process by re-importing the main module (it cannot fork). If the training code is protected by the if __name__ == '__main__': guard, a child process imports the module but does not re-execute the training code, so only the single main process ever starts workers and child processes are not spawned recursively.
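To see why the guard matters, here is a small standalone sketch (the file name and print messages are made up for illustration, not part of the tutorial). Under the Windows "spawn" start method, every child process re-imports the main module, so module-level code runs again in each child, and any unguarded process creation would recurse:

    # spawn_demo.py -- illustrative sketch only.
    import multiprocessing as mp

    # This line runs in the parent AND runs again when each child
    # re-imports the module under the Windows "spawn" start method.
    print("module level: executed in every process")

    def worker():
        print("worker: running in a child process")

    if __name__ == '__main__':
        # Without this guard, the Process(...) call below would also run during
        # each child's import, trying to spawn children recursively; Python's
        # multiprocessing detects this and raises the RuntimeError shown above.
        p = mp.Process(target=worker)
        p.start()
        p.join()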

Reference: https://stackoverflow.com/questions/18204782/runtimeerror-on-windows-trying-python-multiprocessing/18205006#18205006

On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses recursively.
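If you would rather not restructure the script, another workaround often suggested for this error is to set num_workers=0 on the DataLoader, so the data is loaded in the main process and no child processes are spawned at all (at the cost of slower data loading). A hedged sketch, again assuming the tutorial's dataset setup:

    import torch
    import torchvision
    import torchvision.transforms as transforms

    transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)

    # num_workers=0 loads batches in the main process, so no worker
    # subprocesses are created and the guard is not strictly required.
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=0)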

Reposted from blog.csdn.net/tanmx219/article/details/86129623