MPICH or Openmpi


From: http://stackoverflow.com/questions/2427399/mpich-vs-openmpi


First, it is important to recognize how MPICH and OpenMPI are different, i.e. that they are designed to meet different needs. MPICH is meant to be a high-quality reference implementation of the latest MPI standard and a basis for derivative implementations that meet special-purpose needs. OpenMPI targets the common case, both in terms of usage and network conduits.

One common complaint about MPICH is that it does not support InfiniBand, whereas OpenMPI does. However, MVAPICH and Intel MPI (among others) - both of which are MPICH derivatives - support InfiniBand, so if one is willing to define MPICH as "MPICH and its derivatives", then MPICH has extremely broad network support, including both InfiniBand and proprietary interconnects like Cray Seastar, Gemini, and Aries, as well as IBM Blue Gene (/L, /P and /Q). OpenMPI also supports Cray Gemini, but it is not supported by Cray. Very recently, MPICH has gained InfiniBand support through a netmod, but MVAPICH2 has extensive optimizations that make it the preferred implementation in nearly all cases.

An orthogonal axis to hardware/platform support is coverage of the MPI standard. Here MPICH is far and away superior. MPICH has been the first implementation of every single release of the MPI standard, from MPI-1 to MPI-3. OpenMPI has only recently added MPI-3 support, and I find that some MPI-3 features are buggy on some platforms. Furthermore, OpenMPI still does not have holistic support for MPI_THREAD_MULTIPLE, which is critical for some applications. It might be supported on some platforms but cannot generally be assumed to work. On the other hand, MPICH has had holistic support for MPI_THREAD_MULTIPLE for many years.

One area where OpenMPI used to be significantly superior was the process manager. The old MPICH launcher (MPD) was brittle and hard to use. Fortunately, it has been deprecated for many years (see the MPICH FAQ entry for details). Thus, criticism of MPICH because of MPD is spurious. The Hydra process manager is quite good and has the same usability and feature set as ORTE (in OpenMPI).
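For reference, launching jobs with Hydra is straightforward; the program and host-file names below are placeholders:

```shell
# Run 8 ranks on the local machine (Hydra ships as mpiexec with MPICH)
mpiexec -n 8 ./my_mpi_app

# Spread 16 ranks across the machines listed in a host file
mpiexec -f hosts.txt -n 16 ./my_mpi_app

# OpenMPI's ORTE front end is equivalent in spirit:
# mpirun -np 16 --hostfile hosts.txt ./my_mpi_app
```

In day-to-day use the two launchers are interchangeable for simple jobs; differences only show up in advanced binding and topology options.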

Here is my evaluation on a platform-by-platform basis:

  • Mac OS: both OpenMPI and MPICH should work just fine. If you want a release version that supports all of MPI-3 or MPI_THREAD_MULTIPLE, you probably need MPICH though. There is absolutely no reason to think about MPI performance if you're running on a Mac laptop.
  • Linux with shared-memory: both OpenMPI and MPICH should work just fine. If you want a release version that supports all of MPI-3 or MPI_THREAD_MULTIPLE, you probably need MPICH though. I am not aware of any significant performance differences between the two implementations. Both support single-copy optimizations if the OS allows them.
  • Linux with Mellanox InfiniBand: use OpenMPI or MVAPICH2. If you want a release version that supports all of MPI-3 or MPI_THREAD_MULTIPLE, you need MVAPICH2 though. I find that MVAPICH2 performs very well but haven't done a direct comparison with OpenMPI on InfiniBand, in part because the features for which performance matters most to me (RMA aka one-sided) have been broken in OpenMPI every time I've tried to use them.
  • Linux with Intel/Qlogic True Scale InfiniBand: I don't have any experience with OpenMPI in this context, but MPICH-based Intel MPI is a supported product for this network and MVAPICH2 also supports it.
  • Cray or IBM supercomputers: MPI comes installed on these machines automatically and it is based upon MPICH in both cases.
  • Windows: I see absolutely no point in running MPI on Windows except through a Linux VM, but both Microsoft MPI and Intel MPI support Windows and are MPICH-based.

In full disclosure, I currently work for Intel in a research capacity (and therefore have no special knowledge about products) and formerly worked for Argonne National Lab for five years, where I collaborated extensively with the MPICH team.

