MPI Over InfiniBand
To take full advantage of InfiniBand, an MPI implementation with native InfiniBand support should be used.
Supported MPI Types
MVAPICH2, MVAPICH, and Open MPI support InfiniBand directly. Intel MPI supports InfiniBand through an abstraction layer called DAPL. Note that DAPL adds an extra step in the communication path and therefore has higher latency and lower bandwidth than MPI implementations that use InfiniBand natively.
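As a rough sketch (assuming a cluster with a working DAPL provider and an Intel MPI release that still supports it), the DAPL fabric is typically selected through the I_MPI_FABRICS environment variable before launching the job; the process count, hostfile name, and program name below are placeholders:

    # select shared memory within a node and DAPL between nodes (older Intel MPI)
    export I_MPI_FABRICS=shm:dapl
    mpirun -n 16 -f hosts ./my_mpi_app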
Running MPI over InfiniBand
Running an MPI program over InfiniBand is identical to running one using standard TCP/IP over Ethernet. The same hostnames are used in the machines file or in the queuing system.
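For example, an Open MPI launch over InfiniBand might look like the following (the hostfile name, process count, and program name are illustrative, not from this article); a native-InfiniBand MPI build will use the IB fabric automatically:

    # hosts contains the same node hostnames you would use for an Ethernet run
    mpirun -np 16 --hostfile hosts ./my_mpi_app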
Open MPI tries to intelligently choose which communication interface to use and will fall back to TCP/IP if there is a failure when opening the InfiniBand device. To prevent this behavior, add '--mca btl ^tcp' to your command line to exclude TCP/IP as a valid communication interface.
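For instance (with a hypothetical process count, hostfile, and program name), excluding the TCP BTL makes the job fail outright rather than silently falling back to Ethernet:

    # '^tcp' tells Open MPI to use any BTL except TCP
    mpirun --mca btl ^tcp -np 16 --hostfile hosts ./my_mpi_app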