Mellanox VXLAN Acceleration 
VMworld 2014
Leading Supplier of End-to-End Interconnect Solutions
Server / Compute (Virtual Protocol Interconnect): 56G IB & FCoIB, 10/40/56GbE & FCoE
Switch / Gateway (Virtual Protocol Interconnect): 56G InfiniBand, 10/40/56GbE
Storage: Front / Back-End
Comprehensive End-to-End InfiniBand and Ethernet Portfolio: ICs, Adapter Cards, Switches/Gateways, Host/Fabric Software, Metro / WAN, Cables/Modules
ConnectX-3 Pro is the Next-Generation Cloud Competitive Asset
• World's first cloud offload interconnect solution
• Provides hardware offloads for Overlay Networks: enables mobility, scalability, serviceability
• Dramatically lowers CPU overhead, reduces cloud application cost and overall CAPEX and OPEX
• Highest throughput (10/40GbE, 56Gb/s InfiniBand), SR-IOV, PCIe Gen3, low power
The Foundation of Cloud 2.0: more users, mobility, scalability, simpler management, lower application cost
Overlay Networks

What is it?
Overlay Networks provide a method for "creating" virtual domains and enable large-scale multi-tenant isolation.
Protocols: VXLAN (VMware and Linux), NVGRE (Windows).

The Challenge:
A software implementation leads to performance degradation: CPU overhead increases, so fewer virtual machines can be supported.

[Diagram: three virtual domains (Domain 1, Domain 2, Domain 3) of VMs, spread across servers behind a physical switch and connected by Overlay Network protocols.]
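To make the encapsulation concrete, here is a minimal Python sketch (mine, not from the deck) of the 8-byte VXLAN header defined in RFC 7348 being prepended to an inner L2 frame. The function name, the dummy inner frame, and the VNI value are illustrative assumptions; a real sender also adds outer Ethernet/IP/UDP headers (UDP destination port 4789).

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner L2 frame.

    Header layout: flags (8 bits, 0x08 = 'VNI valid'), 24 reserved bits,
    24-bit VNI, 8 reserved bits. The caller still wraps the result in
    outer Ethernet/IP/UDP headers addressed to port 4789.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags_reserved = 0x08 << 24    # I-flag set, reserved bits zero
    vni_reserved = vni << 8        # VNI occupies the upper 24 bits
    header = struct.pack("!II", flags_reserved, vni_reserved)
    return header + inner_frame

# Example: place a dummy 64-byte inner frame into virtual domain (VNI) 100
encapsulated = vxlan_encapsulate(b"\x00" * 64, vni=100)
print(len(encapsulated))  # 72: 8-byte VXLAN header + 64-byte inner frame
```

Every encapsulated packet carries these extra headers (plus outer IP/UDP checksums), and producing them per packet is exactly the work the following slides move from the hypervisor's CPU into the NIC.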
VXLAN and NVGRE Hardware Offload
ConnectX-3 Pro combines overlay networks (VXLAN & NVGRE) with hardware offload, delivering scalable multi-tenant isolation for the cloud: lower CPU overhead, higher throughput, and OPEX and CAPEX savings.
VXLAN Hardware Offload

[Diagram: two virtualized servers, each running VMs on a hypervisor (+ vSwitch). With a legacy NIC, the CPU handles the VXLAN-encapsulated packets (payload, VXLAN header, CRC). With Mellanox ConnectX-3 Pro, the NIC's stateless offload engines handle the encapsulated packets instead.]

Result: improved CPU utilization, more VMs per server, higher networking throughput.
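As a rough illustration of the per-packet work being moved (my sketch, not part of the original deck), the snippet below computes the RFC 1071 Internet checksum that a software path must fill in for the outer IPv4/UDP headers of every VXLAN packet; a NIC with VXLAN stateless offload produces and verifies these in hardware. The sample header fields are hypothetical.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum, as used by IPv4 and UDP headers."""
    if len(data) % 2:                   # pad to an even number of bytes
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:                  # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Example: a hypothetical 20-byte outer IPv4 header, checksum field zeroed
hdr = struct.pack("!BBHHHBBH4s4s",
                  0x45, 0, 50,              # version/IHL, DSCP, total length
                  0x1234, 0x4000,           # identification, flags/fragment offset
                  64, 17, 0,                # TTL, protocol=UDP, checksum placeholder
                  bytes([10, 0, 0, 1]),     # outer source IP
                  bytes([10, 0, 0, 2]))     # outer destination IP
print(hex(internet_checksum(hdr)))          # value the CPU (or the NIC) must fill in
```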
VXLAN Throughput with Hardware Offload

40GbE VXLAN throughput between VM pairs (bandwidth in Gb/s), ConnectX-3 Pro with VXLAN HW offload vs. no offload:
  1 VM pair:   20.51 vs. 4.85
  2 VM pairs:  36.25 vs. 7.62
  4 VM pairs:  36.11 vs. 11.39
  8 VM pairs:  36.21 vs. 17.62
  16 VM pairs: 36.12 vs. 17.04
VXLAN CPU Utilization (Receive)

40GbE VXLAN receive-side CPU utilization, normalized per 1 Gb/s of traffic delivered (lower is better), ConnectX-3 Pro with VXLAN HW offload vs. no offload:
  1 VM pair:   0.39 vs. 1.78
  2 VM pairs:  0.43 vs. 2.49
  4 VM pairs:  0.64 vs. 2.42
  8 VM pairs:  0.72 vs. 3.48
  16 VM pairs: 1.00 vs. 3.72
CAPEX and OPEX Savings

Example, with 1 Gb/s of traffic per VM:
  Without VXLAN offload: 17 VMs, 61% CPU utilization
  With VXLAN offload:    36 VMs, 26% CPU utilization
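A back-of-envelope reconstruction of that example (mine, not the deck's), using approximate CPU%-per-Gb/s figures from the previous chart; the exact test setup is not given, so the rounded inputs below are assumptions:

```python
# Assumes ~1 Gb/s of traffic per VM and the CPU%/Gb/s figures above.
cpu_pct_per_gbps_no_offload = 3.6   # approx. no-offload figure at high VM counts
cpu_pct_per_gbps_hw_offload = 0.72  # approx. HW-offload figure

vms_no_offload = 17   # bounded by the ~17 Gb/s software throughput ceiling
vms_hw_offload = 36   # bounded by the ~36 Gb/s achieved line rate

print(vms_no_offload * cpu_pct_per_gbps_no_offload)  # ~61% CPU
print(vms_hw_offload * cpu_pct_per_gbps_hw_offload)  # ~26% CPU
```

Under these assumptions the offloaded server carries roughly twice the VMs at less than half the CPU cost, which is where the CAPEX and OPEX claim comes from.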
Thank You 
tomt@mellanox.com