© 2019 Mellanox Technologies | Confidential
InfiniBand In-Network Computing Technology
Paving the Road to Exascale
September 2019
HDR 200G InfiniBand Wins Next Generation Supercomputers
▪ 23.5 Petaflops, 8K HDR InfiniBand nodes, Fat-Tree topology
▪ 50 Petaflops, 7.2K HDR InfiniBand nodes, Dragonfly+ topology
▪ 3.1 Petaflops, 1.8K HDR InfiniBand nodes, Fat-Tree topology
▪ 1.7 Petaflops, 2K HDR InfiniBand nodes, Dragonfly+ topology
▪ 1.6 Petaflops, hybrid CPU-GPU-FPGA, Fat-Tree topology
▪ 3K HDR InfiniBand nodes, Dragonfly+ topology, highest-performance cloud
The Need for Intelligent and Faster Interconnect
CPU-Centric (Onload): must wait for the data, which creates performance bottlenecks
Data-Centric (Offload): faster data speeds and In-Network Computing enable higher performance and scale
[Diagram: onload network vs. In-Network Computing across CPU/GPU nodes]
Analyze Data as it Moves!
Higher Performance and Scale
Data Centric Architecture to Overcome Latency Bottlenecks
CPU-Centric (Onload): communication latencies of 30-40us
Data-Centric (Offload): communication latencies of 3-4us
[Diagram: CPU-centric vs. data-centric CPU/GPU node clusters]
Intelligent Interconnect Paves the Road to Exascale Performance
Accelerating All Levels of HPC / AI Frameworks
▪ Application: Data Analysis, Real Time, Deep Learning
▪ Communication: Mellanox SHARP In-Network Computing, MPI Tag Matching, MPI Rendezvous, Software Defined Virtual Devices
▪ Network: Network Transport Offload, RDMA and GPUDirect RDMA (see the registration sketch below), SHIELD (Self-Healing Network), Enhanced Adaptive Routing and Congestion Control
▪ Connectivity: Multi-Host Technology, Socket-Direct Technology, Enhanced Topologies
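For reference, below is a minimal sketch (in C) of the building block behind RDMA and GPUDirect RDMA: registering a buffer, here allocated in GPU memory, with the adapter through the verbs API so the NIC can read and write it directly. It assumes CUDA and a GPUDirect-capable (peer-memory) driver stack are installed and omits the rest of the connection setup (queue pairs, key exchange); it is an illustration, not the Mellanox implementation.

/* Minimal sketch: registering a GPU buffer for RDMA with libibverbs.
 * Assumes CUDA and a GPUDirect-capable (peer-memory) driver are installed.
 * Connection setup, key exchange and most error handling are omitted. */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Allocate a buffer directly in GPU memory. */
    void *gpu_buf = NULL;
    size_t len = 1 << 20;                           /* 1 MiB */
    cudaMalloc(&gpu_buf, len);

    /* Register the GPU buffer with the adapter; with GPUDirect RDMA the NIC
     * can then access it without staging the data through host memory. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }

    printf("registered GPU buffer: lkey=0x%x rkey=0x%x\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    cudaFree(gpu_buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}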
ICON (ICOsahedral Non-hydrostatic Model) Application
▪ New generation unified weather forecasting and climate model
▪ Developed by Max Planck Institute for Meteorology (MPI-M) and the German Meteorological Service (DWD)
▪ New data exchange module YAXT developed to replace traditional halo exchange mechanism
▪ Main challenge lies in the efficient handling of sparse data at scale
▪ Improvement jointly developed by DKRZ, UTK, and Mellanox
[Chart: measured performance improvements of 8%, 16%, and 2%]
Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)
Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)
▪ Reliable Scalable General Purpose Primitive
▪ In-network Tree based aggregation mechanism
▪ Large number of groups
▪ Multiple simultaneous outstanding operations
▪ Applicable to Multiple Use-cases
▪ HPC Applications using MPI / SHMEM
▪ Distributed Machine Learning applications
▪ Scalable High Performance Collective Offload (see the MPI sketch below)
▪ Barrier, Reduce, All-Reduce, Broadcast and more
▪ Sum, Min, Max, Min-loc, Max-loc, OR, XOR, AND
▪ Integer and Floating-Point, 16/32/64 bits
[Diagram: hosts send data up a tree of switches; each switch aggregates the data and the aggregated result is returned to all hosts]
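To make the offload concrete, below is a minimal MPI allreduce sketch in C. The application issues an ordinary MPI_Allreduce; whether the reduction is actually aggregated inside the switches depends on running it over a SHARP-enabled MPI stack (for example HPC-X with its collective library enabled), which is an assumption about the deployment rather than something visible in the code.

/* Minimal MPI_Allreduce sketch: every rank contributes a small float vector
 * and receives the element-wise sum. With a SHARP-enabled MPI the reduction
 * can be performed in the switch fabric; the application code is unchanged. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    enum { N = 4 };
    float local[N], global[N];
    for (int i = 0; i < N; i++)
        local[i] = (float)(rank + i);               /* per-rank contribution */

    /* Sum across all ranks; the result is delivered to every rank. */
    MPI_Allreduce(local, global, N, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global[0] = %.1f\n", global[0]);

    MPI_Finalize();
    return 0;
}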
SHARP AllReduce Performance Advantages (128 Nodes)
SHARP enables 75% Reduction in Latency
Providing Scalable Flat Latency
SHARP AllReduce Performance Advantages
1500 Nodes, 60K MPI Ranks, Dragonfly+ Topology
SHARP Enables Highest Performance
SHARP Accelerates AI Performance
▪ The CPU in a parameter server becomes the bottleneck
▪ SHARP performs the gradient averaging in the network
▪ Replaces all physical parameter servers
NCCL-SHARP Delivers Highest Performance
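For context, the collective that NCCL-SHARP accelerates is the standard NCCL allreduce; a single-node, single-process sketch in C follows. With the NCCL-SHARP plugin on an InfiniBand cluster, the inter-node part of this same operation can be aggregated in the switches without changing the code; treat this as an illustrative sketch, not the benchmark configuration behind the results shown here.

/* Sketch: gradient summation across the GPUs of one node with NCCL.
 * Dividing by the number of workers afterwards yields the average gradient.
 * The calls are identical whether or not NCCL-SHARP offload is available. */
#include <stdio.h>
#include <cuda_runtime.h>
#include <nccl.h>

int main(void)
{
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev == 0) { fprintf(stderr, "no GPUs found\n"); return 1; }
    if (ndev > 8) ndev = 8;                         /* keep the static arrays simple */

    ncclComm_t comms[8];
    cudaStream_t streams[8];
    float *grad[8];
    const size_t count = 1 << 20;                   /* gradient elements per GPU */

    /* One communicator per local GPU (single-process example). */
    ncclCommInitAll(comms, ndev, NULL);

    for (int i = 0; i < ndev; i++) {
        cudaSetDevice(i);
        cudaStreamCreate(&streams[i]);
        cudaMalloc((void **)&grad[i], count * sizeof(float));
    }

    /* In-place sum of the gradients across GPUs; group the calls so they
     * are issued together. */
    ncclGroupStart();
    for (int i = 0; i < ndev; i++)
        ncclAllReduce(grad[i], grad[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < ndev; i++) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
        cudaFree(grad[i]);
        ncclCommDestroy(comms[i]);
    }
    return 0;
}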
SHARP Performance Advantage for AI
▪ SHARP provides a 16% performance increase for deep learning (initial results)
▪ TensorFlow with Horovod running ResNet50 benchmark, HDR InfiniBand (ConnectX-6, Quantum)
[Chart: 16% and 11% performance gains]
P100 NVIDIA GPUs, RH 7.5, Mellanox OFED 4.4, HPC-X v2.3, TensorFlow v1.11, Horovod 0.15.0
Quality of Service
InfiniBand Quality of Service
[Diagram: users and workload categories (MPI, storage, backup, clock sync, other network traffic) are assigned Service Levels (SL 0-12); the SLs map to Virtual Lanes (VL-0 through VL-6) over the physical link, and high/low-priority VL arbitration with per-VL weights (W 32 / W 64) shares the link bandwidth]
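At the verbs level, the Service Level is chosen by the application, or more commonly by the MPI or storage middleware on its behalf, when a queue pair is connected; the subnet manager's SL-to-VL mapping and the VL arbitration tables then enforce the priorities sketched above. The C fragment below is a hedged illustration of where the SL is specified for a reliable-connected QP; the destination parameters are placeholders exchanged out of band, and production stacks normally expose this choice through configuration rather than code.

/* Illustrative fragment: moving an RC queue pair to Ready-to-Receive while
 * requesting a specific InfiniBand Service Level. The SL is later mapped to
 * a Virtual Lane by the fabric's SL2VL tables, which is what gives each
 * workload class its share of the physical link. */
#include <string.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int move_qp_to_rtr(struct ibv_qp *qp, uint32_t dest_qpn, uint16_t dlid,
                   uint32_t rq_psn, uint8_t port_num, uint8_t service_level)
{
    struct ibv_qp_attr attr;
    memset(&attr, 0, sizeof(attr));

    attr.qp_state           = IBV_QPS_RTR;
    attr.path_mtu           = IBV_MTU_4096;
    attr.dest_qp_num        = dest_qpn;
    attr.rq_psn             = rq_psn;
    attr.max_dest_rd_atomic = 1;
    attr.min_rnr_timer      = 12;
    attr.ah_attr.dlid       = dlid;
    attr.ah_attr.sl         = service_level;        /* e.g. a high-priority SL for MPI */
    attr.ah_attr.port_num   = port_num;

    return ibv_modify_qp(qp, &attr,
                         IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                         IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                         IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER);
}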
InfiniBand Congestion Control
Without congestion control: congestion causes throughput loss
With congestion control: no congestion, highest throughput!
SHIELD: Self-Healing Technology
SHIELD - Self-Healing Technology
▪ Enables the switches to overcome network failures locally
▪ Software-based solutions suffer long delays in detecting network failures
▪ 5-30 seconds for clusters of 1K to 10K nodes
▪ Accelerates network recovery time by 5000X
▪ The higher the speed or scale, the greater the value of fast recovery
▪ Available with EDR and HDR switches and beyond
Enables Unbreakable Data Centers
SHIELD: Consider a Flow From A to B
[Diagram: a data flow from Server A to Server B]
SHIELD: The Simple Case: Local Fix
[Diagram: the data from Server A to Server B is rerouted locally around the failure]
SHIELD: The Remote Case - Using Fault Recovery Notifications
[Diagram: a Fault Recovery Notification (FRN) triggers rerouting of the data from Server A to Server B]
Adaptive Routing
InfiniBand Proven Adaptive Routing Performance
▪ Oak Ridge National Laboratory – CORAL Summit supercomputer
▪ Bisection bandwidth benchmark, based on mpiGraph
▪ Explores the bandwidth between possible MPI process pairs
▪ AR results demonstrate an average performance of 96% of the maximum bandwidth measured
mpiGraph explores the bandwidth between
possible MPI process pairs. In the histograms,
the single cluster with AR indicates that all
pairs achieve nearly maximum bandwidth
while single-path static routing has nine
clusters as congestion limits bandwidth,
negatively impacting overall application
performance.
“The Design, Deployment, and Evaluation of the CORAL Pre-Exascale Systems”,
Sudharshan S. Vazhkudai, Arthur S. Bland, Al Geist, Christopher J. Zimmer, Scott Atchley, Sarp Oral, Don
E. Maxwell, Veronica G. Vergara Larrea, Wayne Joubert, Matthew A. Ezell, Dustin Leverman, James H.
Rogers, Drew Schmidt, Mallikarjun Shankar, Feiyi Wang, Junqi Yin (Oak Ridge National Laboratory) and
Bronis R. de Supinski, Adam Bertsch, Robin Goldstone, Chris Chambreau, Ben Casses, Elsa Gonsiorowski,
Ian Karlin, Matthew L. Leininger, Adam Moody, Martin Ohmacht, Ramesh Pankajakshan, Fernando
Pizzano, Py Watson, Lance D. Weems (Lawrence Livermore National Laboratory) and James Sexton, Jim
Kahle, David Appelhans, Robert Blackmore, George Chochia, Gene Davison, Tom Gooding, Leopold
Grinberg, Bill Hanson, Bill Hartner, Chris Marroquin, Bryan Rosenburg, Bob Walkup (IBM)
[Charts: InfiniBand high network efficiency - mpiGraph bandwidth histograms on the Oak Ridge National Lab Summit supercomputer, static routing vs. adaptive routing]
HDR InfiniBand
Highest-Performance 200Gb/s InfiniBand Solutions
▪ Switch: 40 HDR (200Gb/s) InfiniBand ports, 80 HDR100 InfiniBand ports, 16Tb/s throughput, <90ns latency
▪ Adapter: 200Gb/s, 0.6us latency, 215 million messages per second
▪ Transceivers, active optical and copper cables: 10 / 25 / 40 / 50 / 56 / 100 / 200Gb/s
▪ Communication acceleration: MPI, SHMEM/PGAS, UPC, for commercial and open source applications, leverages hardware accelerations
▪ System on Chip and SmartNIC: programmable adapter, smart offloads
ConnectX-6 HDR InfiniBand Adapter
Leading Connectivity, Performance and Features
▪ 200Gb/s InfiniBand and Ethernet
▪ HDR, HDR100, EDR (100Gb/s) and lower speeds
▪ 200GbE, 100GbE and lower speeds
▪ Single and dual ports
▪ 200Gb/s throughput, 0.6usec latency, 215 million messages per second
▪ PCIe Gen3 / Gen4, 32 lanes
▪ Integrated PCIe switch
▪ Multi-Host - up to 8 hosts, supporting 4 dual-socket servers
▪ In-network computing and memory for HPC collective offloads
▪ Security – Block-level encryption to storage, key management, FIPS
▪ Storage – NVMe Emulation, NVMe-oF target, Erasure coding, T10/DIF
HDR InfiniBand Switches
▪ 40 QSFP56 ports: 40 ports of HDR (200G) or 80 ports of HDR100 (100G)
▪ 800 QSFP56 ports: 800 ports of HDR (200G) or 1600 ports of HDR100 (100G)
Real Time Network Visibility
Network status/health in real time
Advanced monitoring for troubleshooting
▪ 8 mirror agents triggered by congestion, buffer usage and latency
▪ Measure queue depth using histograms (64ns granularity)
▪ Buffer snapshots
▪ Congestion notifications and buffers status
Built-in Hardware Sensors for Rich Traffic Telemetry and Data Collection
Highest Performance and Scalability for Exascale Platforms
▪ 7X higher performance
▪ 96% network utilization
▪ Flat latency
▪ 5000X higher resiliency
▪ 2X higher deep learning performance
▪ HDR 200G, NDR 400G, XDR 1000G
InfiniBand Delivers Highest Performance and ROI
▪ High data throughput, extremely low latency, high message rate, RDMA and GPUDirect
▪ In-Network Computing – SHARP™, MPI acceleration engines
▪ Self Healing Network with SHIELD for highest network resiliency
▪ End to end adaptive routing, congestion control and Quality of Service
▪ InfiniBand to Ethernet gateway for Ethernet storage or other Ethernet connectivity
▪ Backward and forward compatibility
[Diagram: compute servers and applications connected over an InfiniBand high-speed network with In-Network Computing engines, native InfiniBand NVMe/storage, and a high-speed gateway to Ethernet storage and services]
Thank You
