High Performance Computing (HPC) Clusters

High Performance Computing (HPC) clusters are designed to build cloud, technical, and analytical environments for workloads that demand high computing power, including artificial intelligence (AI). HPC solutions give customers a competitive edge in their industry and serve as the IT framework for the most complex R&D and production tasks. HPC clusters are energy efficient, significantly increasing data center performance while reducing power consumption by up to 40%.




Capabilities

  • HPC clusters have been tested and certified to run the following operating systems: 
    • SUSE Linux Enterprise Server,
    • Red Hat Enterprise Linux Server,
    • CentOS;
  • the high-performance computing system is managed with pre-installed specialized software:
    • centralized management of hardware resources over IPMI 2.0 with support for KVM over LAN (see the first sketch below);
  • high-performance servers support various compilers, libraries of mathematical/engineering and other routines, and parallel-computing libraries (see the second sketch below);
  • HPC clusters are fully assembled and cabled in standard 19" IT cabinets (including cabinets with water-cooled front doors) of various depths and are equipped with KVM switches.
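As an illustration of this management layer, the sketch below polls a node's baseboard management controller (BMC) over LAN with ipmitool through the IPMI 2.0 "lanplus" interface. The BMC address and credentials are hypothetical placeholders, and the script is an illustrative sketch rather than part of the delivered software.

    # Minimal sketch: query a node's BMC over LAN via IPMI 2.0 (lanplus interface).
    # BMC_HOST, BMC_USER and BMC_PASS are placeholders, not real credentials.
    import subprocess

    BMC_HOST = "10.0.0.101"
    BMC_USER = "admin"
    BMC_PASS = "password"

    def ipmi(*args: str) -> str:
        """Run an ipmitool command against the node's BMC and return its output."""
        cmd = ["ipmitool", "-I", "lanplus",
               "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    if __name__ == "__main__":
        print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
        print(ipmi("sensor", "list"))               # temperatures, fan speeds, voltages

The same out-of-band interface is what cluster-management software typically uses to power nodes on and off and to read hardware sensors across an entire rack.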

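The parallel-computing libraries mentioned above normally include an MPI implementation. The sketch below is a minimal mpi4py example and assumes that mpi4py and an MPI runtime (for example Open MPI) are installed on the cluster; it is illustrative only.

    # Minimal MPI sketch with mpi4py: each process reports which node it runs on.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()              # index of this process within the job
    size = comm.Get_size()              # total number of MPI processes
    host = MPI.Get_processor_name()     # node this process is running on

    # Rank 0 gathers (rank, hostname) pairs from every process and prints them.
    pairs = comm.gather((rank, host), root=0)
    if rank == 0:
        for r, h in sorted(pairs):
            print(f"rank {r} of {size} on {h}")

Launched with, for example, "mpirun -np 8 python hello_mpi.py" (the file name is arbitrary), the output shows how the job has been spread across the compute nodes.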


Construction

  • Computing nodes (servers) of various types:

    • 1U/diskless,
    • 1U/8SFF,
    • 2U/diskless,
    • 2U/12LFF,
    • 2U/24LFF,
    • others.
  • Any of the listed nodes can be equipped with high-performance GPU accelerators (see the GPU-listing sketch at the end of this section):

    • NVIDIA Tesla M10,
    • NVIDIA Tesla P40,
    • NVIDIA Tesla V100,
    • NVIDIA Tesla T4,
    • NVIDIA Quadro P6000,
    • NVIDIA Quadro P4000,
    • NVIDIA Quadro P2000,
    • NVIDIA Quadro P620,
    • NVIDIA Quadro RTX5000.
  • For the cluster interconnect, nodes can be equipped with high-speed adapters (see the link-check sketch at the end of this section):

    • Mellanox ConnectX-4 EDR InfiniBand Adapter,
    • Mellanox ConnectX-5 EDR InfiniBand Adapter,
    • Mellanox ConnectX-6 HDR InfiniBand Adapter,
    • Intel OPA 100 Series Single-port PCIe 3.0 x8 HFA,
    • Intel OPA 100 Series Single-port PCIe 3.0 x16 HFA.


  • LAN (Ethernet) equipment of various types, with performance, functionality, and port density determined by the range of tasks.

  • Cluster interconnect devices:

    • Mellanox SB7800 InfiniBand Smart Switch,
    • Mellanox SB7890 InfiniBand Smart Switch,
    • Mellanox QM8700 InfiniBand Smart Switch,
    • Mellanox QM8790 InfiniBand Smart Switch,
    • Intel OPA 100 Series 24-port Unmanaged Edge Switch,
    • Intel OPA 100 Series 48-port Unmanaged Edge Switch.


  • Storage hardware:
    • Dual Controller All-Flash System 2U/24SFF,
    • Dual Controller Hybrid-Flash System 2U/24SFF,
    • Dual Controller Hybrid-Flash System 2U/12LFF,
    • Dual Controller Hybrid-Flash High Density System 4U/24LFF,
    • Dual Controller Hybrid-Flash Ultra High-Density System 4U/60LFF,
    • Dual Controller High Performance NVMe System 2U/25 U.2,
    • Dual Controller Scalable NAS.
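As referenced above, the sketch below lists the accelerators installed in a node using nvidia-smi, which ships with the NVIDIA driver; it is an illustrative assumption, not vendor tooling.

    # Minimal sketch: list the NVIDIA GPUs visible on a compute node.
    import subprocess

    def list_gpus() -> list[str]:
        """Return the model name of each GPU reported by nvidia-smi."""
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line.strip() for line in out.splitlines() if line.strip()]

    if __name__ == "__main__":
        for i, name in enumerate(list_gpus()):
            print(f"GPU {i}: {name}")    # e.g. "GPU 0: Tesla V100-PCIE-32GB"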

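For the interconnect adapters and switches listed above, the following sketch checks that a node's InfiniBand ports have trained their links, using ibstat from the infiniband-diags package (assumed to be installed); again, this is illustrative rather than part of the delivered software.

    # Minimal sketch: verify that every InfiniBand port on this node is Active.
    import subprocess

    def infiniband_ports_active() -> bool:
        """Return True if all ports reported by ibstat are in the Active state."""
        out = subprocess.run(["ibstat"], capture_output=True,
                             text=True, check=True).stdout
        states = [line.split(":", 1)[1].strip()
                  for line in out.splitlines() if line.strip().startswith("State:")]
        return bool(states) and all(state == "Active" for state in states)

    if __name__ == "__main__":
        print("InfiniBand links up:", infiniband_ports_active())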



Advantages

  • rapid deployment of reliable, compatible, and highly efficient HPC systems;
  • support across the entire life cycle of HPC cluster projects, from design to commissioning;
  • effective integration and full-service support;
  • experienced, certified personnel comprehensively trained by the vendors;
  • delivery of the full scope of engineering-infrastructure work of any complexity, including high-performance computing clusters built on nodes with water-cooled processors and the data centers (server rooms) that host the cluster equipment.