

This library implements the high-performance MPI 3.1 standard on multiple fabrics. This lets you quickly deliver maximum application performance (even if you change or upgrade to new interconnects) without requiring major modifications to the software or operating systems.

- Thread safety allows you to trace hybrid multithreaded MPI applications for optimal performance on multicore and manycore Intel® architectures (a minimal hybrid example is sketched below).
- Support for multi-endpoint communications lets an application efficiently split data communication among threads, maximizing interconnect utilization.
- Improved start scalability comes from the mpiexec.hydra process manager, a process management system for starting parallel jobs that is designed to natively work with multiple network protocols such as ssh, rsh, pbs, slurm, and sge.

Develop MPI code independent of the fabric, knowing it will run efficiently on whatever network you choose at run time. The library provides an accelerated, universal, multifabric layer for fast interconnects via OFI, including for these configurations:

- Transmission Control Protocol (TCP) sockets
- Interconnects based on Remote Direct Memory Access (RDMA), including Ethernet and InfiniBand

It accomplishes this by dynamically establishing the connection only when needed, which reduces the memory footprint, and it automatically chooses the fastest transport available. Tuning for the underlying fabric happens at run time through simple environment settings, including network-level features like multirail for increased bandwidth, which helps you deliver optimal performance on extreme-scale solutions based on Mellanox InfiniBand* and Cornelis Networks*. As a result, you gain increased communication throughput, reduced latency, simplified program design, and a common communication infrastructure.
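To make the thread-safety and multi-endpoint points above concrete, here is a minimal sketch of a hybrid MPI + OpenMP program in which every thread of every rank issues its own MPI calls. It uses only standard MPI 3.1 routines (MPI_Init_thread requesting MPI_THREAD_MULTIPLE, and MPI_Sendrecv); the ring-exchange pattern, variable names, and tag scheme are illustrative choices for this sketch, not anything prescribed by the library.

```c
/* Minimal sketch: hybrid MPI + OpenMP with full thread safety, so several
 * threads per rank can drive MPI concurrently. Names and message pattern
 * are illustrative. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, size;

    /* Request the highest thread-support level and check what was granted. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each thread sends a value to the next rank and receives one from the
     * previous rank, using its thread id as the tag so per-thread message
     * streams stay separate. */
    #pragma omp parallel
    {
        int tid  = omp_get_thread_num();
        int next = (rank + 1) % size;
        int prev = (rank - 1 + size) % size;
        int sendbuf = rank * 100 + tid, recvbuf = -1;

        MPI_Sendrecv(&sendbuf, 1, MPI_INT, next, tid,
                     &recvbuf, 1, MPI_INT, prev, tid,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d thread %d received %d\n", rank, tid, recvbuf);
    }

    MPI_Finalize();
    return 0;
}
```

A program like this is typically built with an MPI compiler wrapper (for example, mpicc -fopenmp) and started through the process manager mentioned above, e.g. mpiexec.hydra -n 4 ./a.out. The fabric it runs over can then be chosen at run time through environment settings such as I_MPI_FABRICS, without recompiling.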


This optimized framework, OFI, exposes and exports communication services to HPC applications.
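To illustrate what exposing communication services looks like at the API level, the sketch below calls libfabric, the OFI implementation, to enumerate the providers and fabrics available on a node. fi_getinfo and fi_freeinfo are generic libfabric calls rather than an Intel MPI interface; the requested API version, file name, and output format are assumptions made for this example.

```c
/* Sketch: list the communication providers that OFI (libfabric) exposes on
 * this node. Build with something like: cc list_providers.c -lfabric
 * (file name and build line are illustrative). */
#include <stdio.h>
#include <rdma/fabric.h>

int main(void)
{
    struct fi_info *info = NULL;

    /* Ask libfabric for every provider/fabric it can offer, with no hints. */
    int ret = fi_getinfo(FI_VERSION(1, 11), NULL, NULL, 0, NULL, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo failed: %d\n", ret);
        return 1;
    }

    /* Each fi_info entry describes one usable provider/endpoint combination,
     * for example a TCP socket provider or an RDMA verbs provider. */
    for (struct fi_info *cur = info; cur != NULL; cur = cur->next)
        printf("provider: %-10s fabric: %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    return 0;
}
```

The MPI library sits on top of this layer, which is how the same binary can run over whichever provider, TCP sockets or an RDMA fabric, is selected at run time.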
