HPC AI Server APY SCG 7U 8 GPU NVIDIA HGX H200 SXM Intel Xeon Scalable

Product available for orders

New APY AI server: a 7U NVIDIA HGX™ H200 platform with eight GPUs and dual 5th Gen Intel Xeon Scalable processors, designed for large-scale AI and HPC with up to 12 PCIe slots, 32 DIMM slots, and 10 NVMe bays.

CONTACT US
FOR A QUOTE OR APPOINTMENT REQUEST

  • Secure payment by Visa or Mastercard bank card, SEPA bank transfer, or instantly via FINTECTURE.
  • Fast shipping from €25 excluding VAT for small products via UPS, and from €60 excluding VAT per pallet via GEODIS for large products.
  • Free click and collect: simply buy your products online and collect them directly from our premises, free of charge.





    The 7U APY server is a high-end solution designed for demanding applications in artificial intelligence (AI), high-performance computing (HPC), and big data analytics. It stands out for its computing power, energy efficiency, and advanced connectivity.

    1. Exceptional computing performance

    Equipped with two 5th Gen Intel® Xeon® Scalable processors, the server supports a maximum TDP of 350 W per socket, providing high processing capacity suitable for compute- and AI-intensive workloads.

    2. Advanced GPU Acceleration with NVIDIA HGX H200

    It integrates the NVIDIA® HGX™ H200 platform with eight Tensor Core GPUs, delivering outstanding performance for AI model training and inference.
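
    As a quick illustration (not part of APY's documentation), a minimal PyTorch snippet can confirm that all eight GPUs are visible on the host; this sketch assumes a CUDA-enabled PyTorch installation:

        import torch

        # List the CUDA devices PyTorch can see and their on-board memory.
        print(f"CUDA devices visible: {torch.cuda.device_count()}")
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")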

    3. Ultra-fast GPU Interconnect

    With NVLink providing 900 GB/s of GPU-to-GPU bandwidth, the server delivers ultra-fast communication between GPUs, accelerating AI and HPC workloads.
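
    For a rough sanity check, GPU-to-GPU transfer rates can be probed with a simple PyTorch copy loop; this is only an illustrative sketch with arbitrary device indices and buffer sizes (NVIDIA's nccl-tests or nvbandwidth are the usual tools for measuring NVLink throughput):

        import time

        import torch

        def p2p_bandwidth_gib_s(src=0, dst=1, size_mib=1024, iters=10):
            # Copy a 1 GiB buffer between two GPUs and report the sustained rate.
            x = torch.empty(size_mib * 1024 * 1024, dtype=torch.uint8, device=f"cuda:{src}")
            y = torch.empty_like(x, device=f"cuda:{dst}")
            y.copy_(x)  # warm-up copy (establishes peer access)
            torch.cuda.synchronize(src)
            torch.cuda.synchronize(dst)
            start = time.perf_counter()
            for _ in range(iters):
                y.copy_(x, non_blocking=True)
            torch.cuda.synchronize(src)
            torch.cuda.synchronize(dst)
            elapsed = time.perf_counter() - start
            return size_mib * iters / 1024 / elapsed  # GiB/s

        if torch.cuda.device_count() >= 2:
            print(f"GPU 0 -> GPU 1: {p2p_bandwidth_gib_s():.1f} GiB/s")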

    4. Built for Generative AI and Large Language Models (LLM)

    APY has optimized this server for generative AI and deep learning applications, with specialized support for Large Language Models (LLM) and a dedicated software infrastructure.
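
    The page does not describe APY's dedicated software stack in detail, so the following is only a generic multi-GPU training sketch using PyTorch DistributedDataParallel, with a toy linear layer standing in for a real LLM; the script name and launch command are hypothetical:

        # Launch (hypothetical): torchrun --nproc_per_node=8 train_sketch.py
        import os

        import torch
        import torch.distributed as dist
        from torch.nn.parallel import DistributedDataParallel as DDP

        def main():
            # One process per GPU; gradients are all-reduced over NCCL/NVLink.
            dist.init_process_group("nccl")
            local_rank = int(os.environ["LOCAL_RANK"])
            torch.cuda.set_device(local_rank)

            # Toy linear layer standing in for a real LLM; one replica per GPU.
            model = torch.nn.Linear(4096, 4096).to(f"cuda:{local_rank}")
            model = DDP(model, device_ids=[local_rank])
            optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

            for _ in range(10):
                x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
                loss = model(x).pow(2).mean()  # dummy objective
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()

            dist.destroy_process_group()

        if __name__ == "__main__":
            main()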

    5. Energy Efficiency and Advanced Cooling

    It features independent airflow tunnels for the CPUs and GPUs, optimizing cooling and reducing power consumption. It is equipped with 3000 W power supplies in a 4+2 or 3+3 redundant configuration, certified 80 PLUS® Titanium, ensuring optimal energy efficiency.
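
    For reference, assuming six 3000 W modules, a 4+2 arrangement corresponds to about 4 × 3000 W = 12 kW of usable capacity with two modules in reserve, while 3+3 yields 3 × 3000 W = 9 kW with a fully mirrored redundant bank.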

    6. Exceptional Connectivity and Expandability

    The server offers:

    • 12+1 PCIe slots,
    • 32 DIMM slots for ultra-fast DDR5 memory,
    • 10 NVMe bays for high-speed storage,
    • Dual 10 Gb LAN connectivity, with expansion options for advanced networking needs (a quick host-side inventory sketch follows below).
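
    The inventory sketch below is a hypothetical host-side check, assuming a Linux operating system with the third-party psutil package installed; it simply counts NVMe block devices and reports NIC link speeds:

        import glob

        import psutil  # third-party package, assumed installed

        # NVMe namespaces show up as block devices such as /sys/block/nvme0n1.
        nvme_devices = glob.glob("/sys/block/nvme*")
        print(f"NVMe block devices: {len(nvme_devices)}")

        # Report link speed (in Mb/s) for every interface that is up.
        for name, stats in psutil.net_if_stats().items():
            if stats.isup:
                print(f"{name}: {stats.speed} Mb/s")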

    7. Reduced Latency and GPU-Storage Optimization

    The optimized proximity of network interface cards (NICs) and storage to the GPUs, together with a 1:1 GPU-to-NIC ratio, minimizes latency and improves performance, including support for NVIDIA GPUDirect Storage.
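
    As an illustrative sketch only, GPUDirect Storage can be exercised from Python with the RAPIDS KvikIO library; this assumes KvikIO, CuPy, and a GDS-enabled driver stack are installed, and the file path is hypothetical:

        import cupy
        import kvikio  # RAPIDS KvikIO, assumed installed with a GDS-enabled driver

        # Write a GPU-resident array directly to NVMe, then read it back.
        # With GPUDirect Storage active, the transfer bypasses host memory.
        a = cupy.arange(1_000_000, dtype=cupy.float32)

        f = kvikio.CuFile("/mnt/nvme0/sample.bin", "w")  # hypothetical path
        f.write(a)
        f.close()

        b = cupy.empty_like(a)
        f = kvikio.CuFile("/mnt/nvme0/sample.bin", "r")
        f.read(b)
        f.close()

        assert bool((a == b).all())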

    The APY server is an ideal platform for enterprises and data centers looking to deploy cutting-edge AI and HPC solutions. Its powerful architecture, energy efficiency, and advanced interconnect make it a compelling choice for the most demanding workloads.

    APY
    APY-7UIAintelxeon8GPUHGXH200

    Data sheet

    Case size: Rack 7U
    Dimensions: 447 mm x 222.25 mm x 945 mm
    CPU brand: Intel
    Processor range: Intel Xeon Scalable
    Number of graphics cards supported: 8
    Graphics card brand: NVIDIA
    Default graphics card: NVIDIA HGX H200
    Software solutions: AI
