Cormac Hogan
Andreas Scherr
STO1193BU
#STO1193BU
A Closer Look at vSAN
Networking Design and
Configuration
Considerations
• This presentation may contain product features that are currently under development.
• This overview of new technology represents no commitment from VMware to deliver these
features in any generally available product.
• Features are subject to change, and must not be included in contracts, purchase orders, or
sales agreements of any kind.
• Technical feasibility and market demand will affect final delivery.
• Pricing and packaging for any new technologies or features discussed or presented have not
been determined.
Disclaimer
2
Agenda
1 vSAN Networking Overview
2 Multicast and Unicast
3 NIC Teaming and Load Balancing
4 Network Topologies (incl. Stretched and 2-node)
5 Network Performance Considerations
3
Where should I begin? StorageHub!
• https://storagehub.vmware.com/#!/vmware-vsan/plan-and-design
4
vSAN Networking Overview
5
vSAN Networking – Major Software Components
• CMMDS (Cluster Monitoring, Membership, and Directory Service)
• Intra-cluster communication and metadata exchange
– Multicast with <= vSAN 6.5
– Unicast with >= vSAN 6.6
– Heartbeat sent from master to all hosts every second
• Traffic is light in steady state
• RDT (Reliable Datagram Transport)
• Bulk of vSAN traffic
– Virtual Disk data distributed across cluster
– Replication/resync traffic
6
vSAN Networking – Ports and Firewalls
• ESXi Firewall considerations
– On enablement of vSAN on a given cluster, all required ports are
opened/closed automatically; no admin action is required
• Ports
– CMMDS (UDP 12345, 23451, 12321)
– RDT (TCP 2233)
– VSANVP (TCP 8080)
– Witness Host (TCP port 2233 and UDP Port 12321)
– vSAN Encryption / KMS Server
• Communication between vCenter and KMS to obtain keys
• vSAN Encryption has special dynamic firewall rule opened on
demand on ESXi hosts
7
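The port list above can be captured as a simple lookup table. A minimal illustrative sketch (the `VSAN_PORTS` table and `service_for` helper are not a VMware API, just a convenient way to express the firewall rules listed on this slide):

```python
# Firewall ports used by vSAN, as listed above (illustrative lookup table).
VSAN_PORTS = {
    ("udp", 12345): "CMMDS",
    ("udp", 23451): "CMMDS",
    ("udp", 12321): "CMMDS / witness host",
    ("tcp", 2233): "RDT / witness host",
    ("tcp", 8080): "VSANVP",
}

def service_for(protocol, port):
    """Return the vSAN service behind a protocol/port pair, or 'unknown'."""
    return VSAN_PORTS.get((protocol.lower(), port), "unknown")
```

For example, `service_for("TCP", 2233)` maps back to the RDT traffic that carries the bulk of vSAN I/O.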
Network Connectivity – IPv6
• vSAN can operate in IPv6-only mode
– Available since vSAN 6.2
– All network communications are through IPv6 network
• vSAN supports mixed IPv4 & IPv6 during upgrade only
– Do not run mixed mode in production
8
Minimum NIC requirements for vSAN Networking
9
Configuration | 10Gb+ support | 1Gb support | Comments
Hybrid Cluster | Y | Y | 10Gb min. recommended, but 1Gb supported; <1ms RTT
All-Flash Cluster | Y | N | All-flash requires 10Gb min.; 1Gb not supported; <1ms RTT
Stretched Cluster – Data to Data | Y | N | 10Gb required between data sites*; <5ms RTT
Stretched Cluster – Witness to Data | Y | Y | 100Mbps connectivity required from data sites to witness; <200ms RTT
2-node Data to Data | Y | Y | 10Gb min. required for all-flash; 1Gb supported for hybrid, but 10Gb recommended
2-node Witness to Data | Y | Y | 1.5Mbps bandwidth required; <500ms RTT
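The RTT limits in the table above can be checked against measured latencies. A small illustrative sketch (the `MAX_RTT_MS` table and `rtt_ok` helper are hypothetical names, not a VMware tool):

```python
# Maximum round-trip time (ms) per link type, from the table above (illustrative).
MAX_RTT_MS = {
    "hybrid": 1,
    "all-flash": 1,
    "stretched-data": 5,
    "stretched-witness": 200,
    "2-node-witness": 500,
}

def rtt_ok(link_type, measured_rtt_ms):
    """True if a measured RTT is within the vSAN requirement for this link."""
    return measured_rtt_ms < MAX_RTT_MS[link_type]
```

So a 3ms inter-site RTT passes for a stretched cluster, while 7ms does not.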
Distributed or Standard Switches?
10
• vSphere Standard Switch
• No management dependence on vCenter
• Recovery is simple
• Prone to misconfiguration in larger setups
• vSphere Distributed Switch
• Consistency
Avoids configuration skew
• Teaming and Failover
LACP/LAG/ether-channel
• Network I/O Control
Manage/allocate network bandwidth for
different vSphere traffic types
vSphere Distributed Switch is Free with vSAN
Network I/O Control (NIOC) Configuration Sample
• A single 10GbE physical adapter for simplicity
• The NIC handles vSAN, vMotion, virtual machine, and management traffic
• If adapter becomes saturated, Network I/O Control controls bandwidth allocation
• Sample configuration:
11
Traffic Type Custom Shares Value Bandwidth
vSAN 100 5Gbps
vMotion 50 2.5Gbps
Virtual Machine 30 1.5Gbps
Management 20 1Gbps
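NIOC shares divide a saturated link proportionally, which is how the sample configuration above maps to bandwidth on a 10GbE uplink. A minimal sketch of that arithmetic (the `nioc_bandwidth` helper is illustrative, not a vSphere API):

```python
def nioc_bandwidth(shares, link_gbps):
    """Split a saturated link's bandwidth proportionally to NIOC shares."""
    total = sum(shares.values())
    return {traffic: link_gbps * s / total for traffic, s in shares.items()}

# The sample configuration from the slide: 200 total shares on a 10Gbps link.
alloc = nioc_bandwidth(
    {"vSAN": 100, "vMotion": 50, "Virtual Machine": 30, "Management": 20},
    link_gbps=10.0,
)
# alloc["vSAN"] == 5.0 and alloc["vMotion"] == 2.5, matching the table.
```

Note that shares only take effect under contention; an unsaturated adapter carries all traffic types at full rate.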
NIC Teaming and Failover options
12
• Keep it simple folks!
• All Virtual Switches Support (vSS + vDS)
– Route based on IP Hash / Virtual Port ID
• Distributed Switch Only (vDS)
– Route based on Physical NIC Load (LBT)
• Distributed Switch + Physical Switch Only
– Physical switches that support LACP/LAG/ether-channel provide additional load balancing algorithms
– Requires multi-chassis link aggregation capable switches
vSAN Multicast & Unicast
13
What is Multicast?
14
• vSAN 6.5 (and earlier) used multicast traffic as a discovery
protocol to find all other nodes trying to join a vSAN cluster.
• Multicast is a network communication technique used to send
information simultaneously (one-to-many or many-to-many) to a
group of destinations over an IP network.
• Multicast needs to be enabled on the switch/routers of the
physical network.
• Internet Group Management Protocol (IGMP) used within
an L2 domain for group membership (follow switch vendor
recommendations)
• Protocol Independent Multicast (PIM) used for routing
multicast traffic to a different L3 domain
Multicast added complexity to vSAN networking
IGMP Considerations
• Consideration with multiple vSAN clusters
– Prevent individual clusters from receiving all multicast streams
– Option 1 – Separate VLANs for each vSAN cluster
– Option 2 - When multiple vSAN clusters reside on the same layer 2 network, VMware
recommends changing the default multicast address
• See VMware KB 2075451
15
Multicast Group Address on vSAN
• The vSAN Master Group Multicast Address created is 224.1.2.3 – CMMDS updates.
• The vSAN Agent Group Multicast Address is 224.2.3.4 – heartbeats.
• The vSAN traffic service will assign the default multicast address settings to each host node.
16
# esxcli vsan network list
Interface
VmkNic Name: vmk2
IP Protocol: IP
Interface UUID: 26ce8f58-7e8b-062e-ba57-a0369f56deac
Agent Group Multicast Address: 224.2.3.4
Agent Group IPv6 Multicast Address: ff19::2:3:4
Agent Group Multicast Port: 23451
Master Group Multicast Address: 224.1.2.3
Master Group IPv6 Multicast Address: ff19::1:2:3
Master Group Multicast Port: 12345
Host Unicast Channel Bound Port: 12321
Multicast TTL: 5
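The `esxcli vsan network list` output above is simple `Key: Value` text, so it is easy to post-process when auditing multicast settings across hosts. A minimal illustrative parser (the `parse_esxcli_kv` helper is hypothetical, not part of esxcli):

```python
def parse_esxcli_kv(output):
    """Parse 'Key: Value' lines of `esxcli vsan network list` output into a dict."""
    result = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            if value.strip():
                result[key.strip()] = value.strip()
    return result

parsed = parse_esxcli_kv("Multicast TTL: 5")
# parsed == {"Multicast TTL": "5"}
```

The same approach works for spotting a host whose agent or master group address drifted from the cluster default.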
vSAN 6.6 introduces Unicast
in place of Multicast for vSAN
communication
17
vSAN and Unicast
• vSAN 6.6 now communicates using unicast for
CMMDS updates.
• A unicast transmission/stream sends IP packets to a
single recipient on a network.
• vCenter becomes the new source of truth for vSAN
membership.
– List of nodes is pushed to the CMMDS layer
• The Networking Mode (unicast/multicast) is not
configurable
18
vSAN 6.6 and above
Unicast
vSAN and Unicast
• The Cluster summary now shows if a vSAN cluster network mode is Unicast or Multicast:
19
Member Coordination with Unicast on vSAN 6.6
• Who tracks cluster membership if we no
longer have multicast?
• vCenter now becomes the source of truth for
vSAN cluster membership with unicast
• The vSAN cluster continues to operate in
multicast mode until all participating nodes are
upgraded to vSAN 6.6
• All hosts maintain a configuration generation
number in case vCenter has an outage.
– On recovery, vCenter checks the configuration
generation number to see if the cluster
configuration has changed in its absence.
20
vCenter
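The configuration generation number described above lets vCenter detect whether membership changed during an outage. A minimal sketch of that reconciliation check (the `membership_changed` helper and its logic are illustrative, not VMware's implementation):

```python
# On recovery, vCenter compares its last-known configuration generation
# with what the hosts report; any host that advanced past it means the
# cluster configuration changed while vCenter was away.
def membership_changed(vcenter_generation, host_generations):
    """True if any host advanced past vCenter's last-known generation."""
    return any(g != vcenter_generation for g in host_generations)
```

If nothing changed, vCenter can safely reuse its cached membership list; otherwise it must re-learn the cluster state from the hosts.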
New Unicast considerations
in vSAN 6.6
21
Upgrade / Mixed Cluster Considerations with unicast
22
vSAN Cluster Software Configuration | Disk Format Version(s) | CMMDS Mode | Comments
6.6 Only Nodes* | All Version 5 | Unicast | Permanently operates in unicast. Cannot switch to multicast. Adding older nodes will partition the cluster.
6.6 Only Nodes* | All Version 3 or below | Unicast | 6.6 nodes operate in unicast mode. Switches back to multicast if a pre-6.6 node is added.
Mixed 6.6 and pre-6.6 Nodes | Mixed Version 5 with Version 3 or below | Unicast | 6.6 nodes with v5 disks operate in unicast mode. Pre-6.6 nodes with v3 disks operate in multicast mode. *** This causes a cluster partition! ***
Mixed 6.6 and pre-6.6 Nodes | All Version 3 or below | Multicast | Cluster operates in multicast mode. All vSAN nodes must be upgraded to 6.6 to switch to unicast mode. *** Disk format v5 will make unicast mode permanent ***
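The table above boils down to two inputs: whether every node runs 6.6, and whether any disk group is at format version 5. A minimal sketch of that decision logic (the `cmmds_mode` helper is illustrative, not VMware code):

```python
def cmmds_mode(all_nodes_66, disk_format_versions):
    """Pick the cluster CMMDS mode per the table above (illustrative logic).

    Returns 'unicast', 'multicast', or 'partitioned' when a v5 disk group
    forces unicast while pre-6.6 nodes are present.
    """
    has_v5 = any(v >= 5 for v in disk_format_versions)
    if all_nodes_66:
        return "unicast"
    return "partitioned" if has_v5 else "multicast"
```

For example, a pure 6.6 cluster is unicast regardless of disk format, while mixing a pre-6.6 node into a cluster with a v5 disk group partitions it.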
vSAN 6.6 only nodes – additional considerations with unicast
• All hosts running vSAN 6.6, cluster will communicate using unicast
– Even if disk groups are formatted with < version 5.0, e.g. version 3.0
• vSAN will revert to multicast mode if a non-vSAN 6.6 node is added to the 6.6 cluster
– But only if no disk group is at format version 5.0
• A vSAN 6.6+ cluster will only ever communicate in unicast if a version 5.0 disk group exists
• If a non-vSAN 6.6 node is added to a 6.6 cluster which contains at least one version 5.0 disk
group, this node will be partitioned and will not join the vSAN cluster
23
Considerations with Unicast
• Considerations with vSAN 6.6 unicast and DHCP
– vCenter Server deployed on a vSAN 6.6 cluster
– vSAN 6.6 nodes obtain IP addresses via DHCP
– If IP addresses change, vCenter VM may become unavailable
• Can lead to cluster partition as vCenter cannot update membership
– This is not supported unless DHCP reservations are used.
• Considerations with vSAN 6.6 unicast and IPv6
– IPv6 is supported with unicast communications in vSAN 6.6.
– However IPv6 Link Local Addresses are not supported for
unicast communications on vSAN 6.6
• vSAN doesn’t use link local addresses to track membership
24
vCenter
Query Unicast with esxcli
• vSAN cluster node now displays the CMMDS networking mode - unicast or multicast.
– esxcli vsan cluster get
25
Query Unicast with esxcli
• One can also check which vSAN cluster nodes are operating in unicast mode
– esxcli vsan cluster unicastagent list:
• Unicast info is also displayed in vSAN network details
– esxcli vsan network list
26
NIC Teaming and Load-Balancing
Recommendations
27
NIC Teaming – single vmknic, multiple vmnics (uplinks)
• Route based on originating virtual port
– Pros
• Simplest teaming mode, with minimal physical
switch configuration required.
– Cons
• A single VMkernel interface cannot use more than a single
physical NIC's bandwidth.
• Route Based on Physical NIC Load
– Pros
• No physical switch configuration required.
– Cons
• With only one VMkernel port, the effectiveness of this
policy is limited.
• Minor overhead when ESXi re-evaluates the load
28
Load Balancing - single vmknic, multiple vmnics (uplinks)
• vSAN does not use NIC teaming for load
balancing
• vSAN has no load balancing mechanism
to differentiate between multiple vmknics.
• As such, the vSAN I/O path chosen is not
deterministic across physical NICs
29
[Chart: KBps utilization per vmnic (multiple vmknics), Nodes 1–4, vmnic0 vs. vmnic1, y-axis 0–1,000,000 KBps]
NIC Teaming – LACP & LAG (***Preferred***)
• Pros
– Improves performance and bandwidth
– If a NIC fails and the link-state goes down, the
remaining NICs in the team continue to pass traffic.
– Many load balancing options
– Rebalancing of traffic after failures is automatic
– Based on 802.3ad standards.
• Cons
– Requires that physical switch ports be configured in
a port-channel configuration.
– Added complexity in configuration and maintenance
30
Load Balancing – LACP & LAG (***Preferred***)
• More consistency compared to “Route
based on physical NIC load”
• More individual clients (VMs) further increase
the probability of a balanced load
31
[Chart: KBps utilization per vmnic (LACP setup), Nodes 1–4, vmnic0 vs. vmnic1, y-axis 0–500,000 KBps]
vSAN network on different subnets
• vSAN networks on two different subnets?
– If the subnets are routed and one host's NIC fails,
the host can still communicate over the other subnet
– If the subnets are air-gapped and one host's NIC
fails, it cannot reach the other hosts via the
other subnet
– The host with the failed NIC becomes isolated
– TCP timeout is ~90 seconds on failure
32
Supported Network Topologies
33
Topologies
• Single site, multiple hosts
• Single site, multiple hosts with Fault Domains
• Multiple sites, multiple hosts with Fault Domains (campus cluster but not stretched cluster)
• Stretched Cluster
• ROBO/2-node
• Design considerations
– L2/L3
– Multicast/Unicast
– RTT (round-trip-time)
34
Simplest topology - Layer-2, Single Site, Single Rack
• Single site, multiple hosts, shared subnet/VLAN/L2 topology, multicast with IGMP
• No need to worry about routing the multicast traffic in pre-vSAN 6.6 deployments
• Layer-2 implementations are simplified even further with vSAN 6.6 and unicast; with such a
deployment, IGMP snooping is not required.
35
Layer-2, Single Site, Multiple Racks – pre-vSAN 6.6 (multicast)
• pre-vSAN 6.6 where vSAN traffic is multicast
• Vendor specific multicast configuration required (IGMP/PIM)
36
Layer-2, Single Site, Multiple Racks – 6.6 and later (unicast)
• vSAN 6.6 where vSAN traffic is unicast
• No need to configure IGMP/PIM on the switches
37
Stretched Cluster Topologies
38
Stretched Cluster – L2 for data, L3 to witness or L3 everywhere
• In vSAN 6.5 and earlier, traffic between data sites is multicast (metadata) and unicast (I/O).
• vSAN 6.6 and later, all traffic is unicast.
• In all versions of vSAN, the witness traffic between a data site and the witness site has always
been unicast.
39
Stretched Cluster - Why not L2 everywhere? (unsupported)
• Consider a situation where the link between Data Site 1 and Data Site 2 is broken
• Spanning Tree may discover that a path between Data Site 1 and Data Site 2 exists via switch S1
• Possible performance decrease if data network traffic passes through a lower specification
witness site
40
2-Node (ROBO)
41
2-Node vSAN for Remote Locations
• Both hosts in remote office store data
• Witness in central office or 3rd site
stores witness data
• Unicast connectivity to witness
appliance
– <500ms RTT latency
– 1.5Mbps bandwidth from data site to
witness
[Diagram: two vSAN data nodes and a remote witness appliance; <500ms RTT latency, 1.5Mbps bandwidth to the witness]
42
2-node Direct Connect and Witness traffic separation
43
[Diagram: 2-node direct connect – vSAN data traffic over direct 10GbE cables between the hosts; management & witness traffic over the management network]
• Separating the vSAN data traffic from witness traffic
• Ability to connect Data nodes directly using Ethernet cables
• Two cables between hosts for higher availability of network
• Witness traffic uses management network
Note: Witness Traffic Separation is NOT supported for
Stretched Cluster at this time
vSAN and Performance
Network relevance
44
General Concept on Network Performance
• Understanding vSAN concepts and features
– Standard vSAN setup vs. Stretch Cluster, FTT=1 or RAID5/6
• Understand network best practices for optimum performance – physical switch topology
– ISL trunks are not oversubscribed
– MTU size factor
– No errors/drops/pause frames on the Network switches
45
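One practical check for the MTU size factor above is a jumbo-frame ping with fragmentation disabled (e.g. `vmkping -d -s 8972` from an ESXi host): a 9000-byte MTU leaves 8972 bytes of ICMP payload after the 20-byte IP and 8-byte ICMP headers. A small sketch of that arithmetic (the `max_ping_payload` helper is illustrative):

```python
def max_ping_payload(mtu, ip_header=20, icmp_header=8):
    """Largest ICMP payload that fits in one frame without fragmentation."""
    return mtu - ip_header - icmp_header

# max_ping_payload(9000) == 8972 (jumbo frames)
# max_ping_payload(1500) == 1472 (standard MTU)
```

If the jumbo-sized ping fails while the standard one succeeds, some device in the path is not configured for the larger MTU.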
General Concept on Network Performance
• Understand Host communication
– No errors/drops/CRC errors/pause frames on the network card
– Driver/firmware versions per the VMware HCL
– Use SFPs/GBICs certified by your hardware vendor
– Use NIOC to optimize traffic when links carry multiple traffic types (e.g. VM/vMotion/…)
46
DEMO: Adding 10ms network latency
47
Summary: Graphical interpretation IOPS vs. latency
48
[Chart: IOPS vs. added network latency (ms), 0–25ms, with linear trend]
Native = ~47,000 IOPS
+5ms latency = ~33,000 IOPS
+10ms latency = ~23,100 IOPS
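The measured points from the demo can be turned into a rough per-millisecond cost. A small illustrative calculation (the `iops_loss_per_ms` helper is not part of the original demo; it just restates the slide's numbers):

```python
# Rough slope from the measured points: IOPS lost per added ms of latency.
def iops_loss_per_ms(native_iops, degraded_iops, added_latency_ms):
    return (native_iops - degraded_iops) / added_latency_ms

# Using the slide's data: ~47,000 native vs. ~23,100 at +10ms.
slope_10ms = iops_loss_per_ms(47000, 23100, 10)  # ~2,390 IOPS lost per ms
```

In this demo environment, every added millisecond of network latency cost on the order of a few thousand IOPS.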
DEMO: Network 2% and 10% packet loss
49
Summary: Graphical interpretation IOPS vs. loss %
50
[Chart: IOPS vs. packet loss %, 0–25%, with exponential trend]
Native = ~47,000 IOPS
1% loss = ~42,300 IOPS
2% loss = ~32,000 IOPS
10% loss = ~3,400 IOPS
Nerd Out With These Key vSAN Activities at VMworld
#HitRefresh on your current data center and discover the possibilities!
Earn VMware digital badges to
showcase your skills
• New 2017 vSAN Specialist
Badge
• Education & Certification Lounge:
VM Village
• Certification Exam Center:
Jasmine EFG, Level 3
Become a
vSAN Specialist
Learn from self-paced and expert
led hands on labs
• vSAN Getting Started Workshop
(Expert led)
• VxRail Getting Started (Self
paced)
• Self-Paced lab available online
24x7
Practice with
Hands-on-Labs
Discover how to assess if your IT
is a good fit for HCI
• Four Seasons Willow Room/2nd
floor
• Open from 11am – 5pm Sun,
Mon, and Tue
• Learn more at Assessing &
Sizing in STO1500BU
Visit SDDC
Assessment Lounge
3 Easy Ways to Learn More about vSAN
52
• Live at VMworld
• Practical learning of
vSAN, VxRail and more
• 24x7 availability online
– for free!
vSAN Sizer
vSAN Assessment
New vSAN Tools
• StorageHub.vmware.com
• Reference architectures,
off-line demos and more
• Easy search function
• And More!
Storage Hub Technical Library Hands-On Lab
Test drive vSAN
for free today!
VMworld 2017 vSAN Network Design
Cormac Hogan
@CormacJHogan
Andreas Scherr
@vsantester

More Related Content

PDF
VSAN – Architettura e Design
PPTX
vSAN architecture components
PDF
vSAN Beyond The Basics
PPTX
A day in the life of a VSAN I/O - STO7875
PDF
VMware
PDF
VMware Virtual SAN Presentation
PPTX
VMware vSAN - Novosco, June 2017
PDF
VMware vSphere Networking deep dive
VSAN – Architettura e Design
vSAN architecture components
vSAN Beyond The Basics
A day in the life of a VSAN I/O - STO7875
VMware
VMware Virtual SAN Presentation
VMware vSAN - Novosco, June 2017
VMware vSphere Networking deep dive

What's hot (20)

PPTX
Esxi troubleshooting
PDF
VMware - Virtual SAN - IT Changes Everything
PPT
VMWARE ESX
PPTX
Vce vxrail-customer-presentation new
PDF
VMware vSphere Networking deep dive
PPTX
VMware Horizon Customer Presentation EN
PPTX
VMware Advance Troubleshooting Workshop - Day 3
PPTX
VMware virtual SAN 6 overview
PDF
VMware HCI solutions - 2020-01-16
PPTX
VMware Advance Troubleshooting Workshop - Day 2
PDF
Ceph with CloudStack
PPTX
WebSphere Application Server Family (Editions Comparison)
PPTX
Five common customer use cases for Virtual SAN - VMworld US / 2015
PPTX
VMware Advance Troubleshooting Workshop - Day 4
PPTX
Building a Stretched Cluster using Virtual SAN 6.1
PPTX
VMware vSphere technical presentation
PPTX
Presentation v mware virtual san 6.0
PPTX
VMware VSAN Technical Deep Dive - March 2014
PDF
SDN입문 (Overlay and Underlay)
PDF
Hcx intro preso v2
Esxi troubleshooting
VMware - Virtual SAN - IT Changes Everything
VMWARE ESX
Vce vxrail-customer-presentation new
VMware vSphere Networking deep dive
VMware Horizon Customer Presentation EN
VMware Advance Troubleshooting Workshop - Day 3
VMware virtual SAN 6 overview
VMware HCI solutions - 2020-01-16
VMware Advance Troubleshooting Workshop - Day 2
Ceph with CloudStack
WebSphere Application Server Family (Editions Comparison)
Five common customer use cases for Virtual SAN - VMworld US / 2015
VMware Advance Troubleshooting Workshop - Day 4
Building a Stretched Cluster using Virtual SAN 6.1
VMware vSphere technical presentation
Presentation v mware virtual san 6.0
VMware VSAN Technical Deep Dive - March 2014
SDN입문 (Overlay and Underlay)
Hcx intro preso v2
Ad

Viewers also liked (7)

PDF
VMware Site Recovery Manager
PPTX
VMware Horizon - news
PDF
Open source for you - November 2017
PDF
VMware Workspace One
PDF
Výhody Software Defined Storage od VMware
PPT
VMware Esx Short Presentation
PPTX
VMworld 2017 - Top 10 things to know about vSAN
VMware Site Recovery Manager
VMware Horizon - news
Open source for you - November 2017
VMware Workspace One
Výhody Software Defined Storage od VMware
VMware Esx Short Presentation
VMworld 2017 - Top 10 things to know about vSAN
Ad

Similar to VMworld 2017 vSAN Network Design (20)

PPTX
VMworld 2015: Networking Virtual SAN's Backbone
PDF
Presentation v mware v-sphere distributed switch—technical deep dive
PPTX
VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
PDF
10 sdn-vir-6up
PDF
Cumulus Linux 2.5 Overview
PPTX
Virtual Deep-Dive: Citrix Xen Server
PPTX
Inf net2227 heath
PDF
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
PPTX
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
PPTX
Stretching CloudStack over multiple datacenters
PDF
Partner Presentation vSphere6-VSAN-vCloud-vRealize
PDF
Конференция Brocade. 4. Развитие технологии Brocade VCS, новое поколение комм...
PPTX
VMworld - vSphere Distributed Switch 6.0 Technical Deep Dive
PPTX
DevOops - Lessons Learned from an OpenStack Network Architect
PDF
VMworld 2014: vSphere Distributed Switch
PDF
NSX: La Virtualizzazione di Rete e il Futuro della Sicurezza
PDF
VMworld 2013: vSphere Distributed Switch – Design and Best Practices
PDF
VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D...
PDF
Implementing an IPv6 Enabled Environment for a Public Cloud Tenant
VMworld 2015: Networking Virtual SAN's Backbone
Presentation v mware v-sphere distributed switch—technical deep dive
VMworld 2016: How to Deploy VMware NSX with Cisco Infrastructure
10 sdn-vir-6up
Cumulus Linux 2.5 Overview
Virtual Deep-Dive: Citrix Xen Server
Inf net2227 heath
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
NET4933_vDS_Best_Practices_For_NSX_Francois_Tallet_Shahzad_Ali
Stretching CloudStack over multiple datacenters
Partner Presentation vSphere6-VSAN-vCloud-vRealize
Конференция Brocade. 4. Развитие технологии Brocade VCS, новое поколение комм...
VMworld - vSphere Distributed Switch 6.0 Technical Deep Dive
DevOops - Lessons Learned from an OpenStack Network Architect
VMworld 2014: vSphere Distributed Switch
NSX: La Virtualizzazione di Rete e il Futuro della Sicurezza
VMworld 2013: vSphere Distributed Switch – Design and Best Practices
VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield D...
Implementing an IPv6 Enabled Environment for a Public Cloud Tenant

Recently uploaded (20)

PPTX
GROUP4NURSINGINFORMATICSREPORT-2 PRESENTATION
PDF
NewMind AI Weekly Chronicles – August ’25 Week IV
PDF
Electrocardiogram sequences data analytics and classification using unsupervi...
PDF
EIS-Webinar-Regulated-Industries-2025-08.pdf
PDF
INTERSPEECH 2025 「Recent Advances and Future Directions in Voice Conversion」
PDF
CXOs-Are-you-still-doing-manual-DevOps-in-the-age-of-AI.pdf
PDF
Connector Corner: Transform Unstructured Documents with Agentic Automation
PDF
LMS bot: enhanced learning management systems for improved student learning e...
PPTX
Module 1 Introduction to Web Programming .pptx
PDF
SaaS reusability assessment using machine learning techniques
PDF
Enhancing plagiarism detection using data pre-processing and machine learning...
PDF
The-Future-of-Automotive-Quality-is-Here-AI-Driven-Engineering.pdf
PDF
Transform-Your-Factory-with-AI-Driven-Quality-Engineering.pdf
PPTX
SGT Report The Beast Plan and Cyberphysical Systems of Control
PDF
5-Ways-AI-is-Revolutionizing-Telecom-Quality-Engineering.pdf
PPTX
future_of_ai_comprehensive_20250822032121.pptx
PDF
Dell Pro Micro: Speed customer interactions, patient processing, and learning...
PDF
Advancing precision in air quality forecasting through machine learning integ...
PDF
Transform-Your-Supply-Chain-with-AI-Driven-Quality-Engineering.pdf
DOCX
Basics of Cloud Computing - Cloud Ecosystem
GROUP4NURSINGINFORMATICSREPORT-2 PRESENTATION
NewMind AI Weekly Chronicles – August ’25 Week IV
Electrocardiogram sequences data analytics and classification using unsupervi...
EIS-Webinar-Regulated-Industries-2025-08.pdf
INTERSPEECH 2025 「Recent Advances and Future Directions in Voice Conversion」
CXOs-Are-you-still-doing-manual-DevOps-in-the-age-of-AI.pdf
Connector Corner: Transform Unstructured Documents with Agentic Automation
LMS bot: enhanced learning management systems for improved student learning e...
Module 1 Introduction to Web Programming .pptx
SaaS reusability assessment using machine learning techniques
Enhancing plagiarism detection using data pre-processing and machine learning...
The-Future-of-Automotive-Quality-is-Here-AI-Driven-Engineering.pdf
Transform-Your-Factory-with-AI-Driven-Quality-Engineering.pdf
SGT Report The Beast Plan and Cyberphysical Systems of Control
5-Ways-AI-is-Revolutionizing-Telecom-Quality-Engineering.pdf
future_of_ai_comprehensive_20250822032121.pptx
Dell Pro Micro: Speed customer interactions, patient processing, and learning...
Advancing precision in air quality forecasting through machine learning integ...
Transform-Your-Supply-Chain-with-AI-Driven-Quality-Engineering.pdf
Basics of Cloud Computing - Cloud Ecosystem

VMworld 2017 vSAN Network Design

  • 1. Cormac Hogan Andreas Scherr STO1193BU #STO1193BU A Closer Look at vSAN Networking Design and Configuration Considerations
  • 2. • This presentation may contain product features that are currently under development. • This overview of new technology represents no commitment from VMware to deliver these features in any generally available product. • Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind. • Technical feasibility and market demand will affect final delivery. • Pricing and packaging for any new technologies or features discussed or presented have not been determined. Disclaimer 2
  • 3. Agenda 1 vSAN Networking Overview 2 Multicast and Unicast 3 NIC Teaming and Load Balancing 4 Network Topologies (incl. Stretched and 2-node) 5 Network Performance Considerations 3
  • 4. Where should I begin? StorageHub! • https://storagehub.vmware.com/#!/vmware-vsan/plan-and-design 4
  • 6. vSAN Networking – Major Software Components • CMMDS (Cluster Monitoring, Membership, and Directory Service) • Inter cluster communications and metadata exchange – Multicast with <= vSAN 6.5 – Unicast with >= vSAN 6.6 – Heartbeat sent from master to all hosts every second • Traffic light in steady state • RDT (Reliable Datagram Transport) • Bulk of vSAN traffic – Virtual Disk data distributed across cluster – Replication /Resynch Traffic 6
  • 7. vSAN Networking – Ports and Firewalls • ESXi Firewall considerations – On enablement of vSAN on a given cluster, all required ports are enabled/disabled automatically; no admin action • Ports – CMMDS (UDP 12345, 23451, 12321) – RDT (TCP 2233) – VSANVP (TCP 8080) – Witness Host (TCP port 2233 and UDP Port 12321) – vSAN Encryption / KMS Server • Communication between vCenter and KMS to obtain keys • vSAN Encryption has special dynamic firewall rule opened on demand on ESXi hosts 7
  • 8. Network Connectivity – IPv6 • vSAN can operate in IPv6-only mode – Available since vSAN 6.2 – All network communications are through IPv6 network • vSAN supports mixed IPv4 & IPv6 during upgrade only – Do not run mixed mode in production 8
  • 9. Minimum NIC requirements for vSAN Networking 9 +10Gb support 1Gb support Comments Hybrid Cluster Y Y 10Gb min. recommended, but 1Gb supported, <1ms RTT All-Flash Cluster Y N All Flash requires 10Gb min. 1Gb not supported, <1ms RTT Stretched Cluster - Data to Data Y N 10Gb required between data sites*, <5ms RTT Stretched Cluster - Witness to Data Y Y 100Mbps connectivity required from data sites to witness. <200ms RTT 2-node Data to Data Y Y 10Gb min. required for All-Flash. 1Gb supported for hybrid, but 10Gb recommended 2-node Witness to Data Y Y 1.5Mbps bandwidth required. <500ms RTT
  • 10. Distributed or Standard Switches? 10 • vSphere Standard Switch • No management dependence on vCenter • Recovery is simple • Prone to misconfiguration in larger setups • vSphere Distributed Switch • Consistency Avoids configuration skew • Teaming and Failover LACP/LAG/ether-channel • Network I/O Control Manage/allocate network bandwidth for different vSphere traffic types vSphere Distributed Switch is Free with vSAN
  • 11. Network I/O Control (NIOC) Configuration Sample • Single 10-GbE physical adapters for simplicity • NICs handles traffic for vSAN, vMotion, and virtual machines and management traffic • If adapter becomes saturated, Network I/O Control controls bandwidth allocation • Sample configuration: 11 Traffic Type Custom Shares Value Bandwidth vSAN 100 5Gbps vMotion 50 2.5Gbps Virtual Machine 30 1.5Gbp Management 20 1Gbps
  • 12. NIC Teaming and Failover options 12 • Keep it simple folks! • All Virtual Switches Support (vSS + vDS) – Routed based on IP Hash / Virtual Port ID • Distributed Switch Only (vDS) – Route based on Physical NIC Load (LBT) • Distributed Switch + Physical Switch Only – Physical switches that support LACP/LAG/ether- channel provide additional load balancing algorithms Multi chassis link aggregation capable switches
  • 13. vSAN Multicast & Unicast 13
  • 14. What is Multicast? 14 • vSAN 6.5 (and earlier) used multicast traffic as a discovery protocol to find all other nodes trying to join a vSAN cluster. • Multicast is a network communication technique utilized to send information simultaneously (one-to-many or many-to- many) to a group of destinations over an IP network. • Multicast needs to be enabled on the switch/routers of the physical network. • Internet Group Management Protocol (IGMP) used within an L2 domain for group membership (follow switch vendor recommendations) • Protocol Independent Multicast (PIM) used for routing multicast traffic to a different L3 domain Multicast added complexity to vSAN networking
  • 15. IGMP Considerations • Consideration with multiple vSAN clusters – Prevent individual clusters from receiving all multicast streams – Option 1 – Separate VLANs for each vSAN cluster – Option 2 - When multiple vSAN clusters reside on the same layer 2 network, VMware recommends changing the default multicast address • See VMware KB 2075451 15
  • 16. Multicast Group Address on vSAN • The vSAN Master Group Multicast Address created is 224.1.2.3 – CMMDS updates. • The vSAN Agent Group Multicast Address is 224.2.3.4 – heartbeats. • The vSAN traffic service will assign the default multicast address settings to each host node. 16 # esxcli vsan network list Interface VmkNic Name: vmk2 IP Protocol: IP Interface UUID: 26ce8f58-7e8b-062e-ba57-a0369f56deac Agent Group Multicast Address: 224.2.3.4 Agent Group IPv6 Multicast Address: ff19::2:3:4 Agent Group Multicast Port: 23451 Master Group Multicast Address: 224.1.2.3 Master Group IPv6 Multicast Address: ff19::1:2:3 Master Group Multicast Port: 12345 Host Unicast Channel Bound Port: 12321 Multicast TTL: 5
  • 17. vSAN 6.6 introduces Unicast in place of Multicast for vSAN communication 17
  • 18. vSAN and Unicast • vSAN 6.6 now communicates using unicast for CMMDS updates. • A unicast transmission/stream sends IP packets to a single recipient on a network. • vCenter becomes the new source of truth for vSAN membership. – List of nodes is pushed to the CMMDS layer • The Networking Mode (unicast/multicast) is not configurable 18 vSAN 6.6 and above Unicast
  • 19. vSAN and Unicast • The Cluster summary now shows if a vSAN cluster network mode is Unicast or Multicast: 19
  • 20. Member Coordination with Unicast on vSAN 6.6 • Who tracks cluster membership if we no longer have multicast? • vCenter now becomes the source of truth for vSAN cluster membership with unicast • The vSAN cluster continues to operate in multicast mode until all participating nodes are upgraded to vSAN 6.6 • All hosts maintain a configuration generation number in case vCenter has an outage. – On recovery, vCenter checks the configuration generation number to see if the cluster configuration has changed in its absence. 20 vCenter
  • 22. Upgrade / Mixed Cluster Considerations with unicast 22 vSAN Cluster Software Configuration Disk Format Version(s) CMMDS Mode Comments 6.6 Only Nodes* All Version 5 Unicast Permanently operates in unicast. Cannot switch to multicast. Adding older nodes will partition cluster. 6.6 Only Nodes* All Version 3 or below Unicast 6.6 nodes operate in unicast mode. Switches back to multicast if < vSAN 6.6 node added. Mixed 6.6 and vSAN pre-6.6 Nodes Mixed Version 5 with Version 3 or below Unicast 6.6 nodes with v5 disks operate in unicast mode. Pre-6.6 nodes with v3 disks will operate in multicast mode. *** This causes a cluster partition! *** Mixed 6.6 and vSAN pre-6.6 Nodes All Version 3 or Below Multicast Cluster operates in multicast mode. All vSAN nodes must be upgraded to 6.6 to switch to unicast mode. *** Disk format v5 will make unicast mode permanent ***
  • 23. vSAN 6.6 only nodes – additional considerations with unicast • If all hosts are running vSAN 6.6, the cluster will communicate using unicast – Even if disk groups are formatted with a version below 5.0, e.g. version 3.0 • vSAN will revert to multicast mode if a pre-vSAN 6.6 node is added to the 6.6 cluster – But only if no disk group is formatted with version 5.0 • Once a version 5.0 disk group exists, a vSAN 6.6+ cluster will only ever communicate in unicast • If a pre-vSAN 6.6 node is added to a 6.6 cluster that contains at least one version 5.0 disk group, that node will be partitioned and will not join the vSAN cluster 23
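The mode rules on the last two slides can be condensed into a small decision function. This is an illustrative sketch only — not VMware code — with host and disk-format versions simplified to plain numbers:

```python
# Hypothetical sketch of the CMMDS mode rules for vSAN 6.6 mixed
# clusters, as described on the preceding slides. Not VMware code.

def cmmds_mode(host_versions, disk_format_versions):
    """Return 'unicast' or 'multicast' for the cluster as a whole.

    host_versions        -- vSAN version per host, e.g. [6.6, 6.6, 6.5]
    disk_format_versions -- on-disk format per disk group, e.g. [5, 5, 3]
    """
    all_66 = all(v >= 6.6 for v in host_versions)
    has_v5 = any(f >= 5 for f in disk_format_versions)

    if has_v5:
        # A version 5.0 disk group makes unicast permanent; a pre-6.6
        # node added to such a cluster is partitioned, not reverted.
        return "unicast"
    if all_66:
        # A 6.6-only cluster on older disk formats still runs unicast...
        return "unicast"
    # ...but reverts to multicast once a pre-6.6 node joins.
    return "multicast"

print(cmmds_mode([6.6, 6.6], [3, 3]))  # unicast
print(cmmds_mode([6.6, 6.5], [3, 3]))  # multicast
```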
  • 24. Considerations with Unicast • Considerations with vSAN 6.6 unicast and DHCP – vCenter Server deployed on a vSAN 6.6 cluster – vSAN 6.6 nodes obtained IP addresses via DHCP – If IP addresses change, the vCenter VM may become unavailable • Can lead to a cluster partition, as vCenter cannot update membership – This is not supported unless DHCP reservations are used. • Considerations with vSAN 6.6 unicast and IPv6 – IPv6 is supported with unicast communications in vSAN 6.6. – However, IPv6 link-local addresses are not supported for unicast communications on vSAN 6.6 • vSAN doesn't use link-local addresses to track membership 24
  • 25. Query Unicast with esxcli • A vSAN cluster node now displays the CMMDS networking mode – unicast or multicast. – esxcli vsan cluster get 25
  • 26. Query Unicast with esxcli • One can also check which vSAN cluster nodes are operating in unicast mode – esxcli vsan cluster unicastagent list • Unicast info is also displayed in the vSAN network details – esxcli vsan network list 26
  • 27. NIC Teaming and Load-Balancing Recommendations 27
  • 28. NIC Teaming – single vmknic, multiple vmnics (uplinks) • Route based on originating virtual port – Pros • Simplest teaming mode, requiring minimal physical switch configuration. – Cons • A single VMkernel interface cannot use more than a single physical NIC's bandwidth. • Route Based on Physical NIC Load – Pros • No physical switch configuration required. – Cons • With only one VMkernel port, the effectiveness of this policy is limited. • Minor overhead when ESXi re-evaluates the load 28
  • 29. Load Balancing – single vmknic, multiple vmnics (uplinks) • vSAN does not use NIC teaming for load balancing • vSAN has no load-balancing mechanism to differentiate between multiple vmknics. • As such, the vSAN I/O path chosen is not deterministic across physical NICs 29 [Chart: KBps utilization per vmnic (vmnic0 vs. vmnic1) on Nodes 1–4 with multiple vmknics, showing unbalanced traffic]
  • 30. NIC Teaming – LACP & LAG (***Preferred***) • Pros – Improves performance and bandwidth – If a NIC fails and its link state goes down, the remaining NICs in the team continue to pass traffic. – Many load-balancing options – Rebalancing of traffic after failures is automatic – Based on the 802.3ad standard. • Cons – Requires that physical switch ports be configured in a port-channel configuration. – More complex to configure and maintain 30
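Why a single host pair tends to stick to one uplink even under LACP: hash-based policies map each flow's addresses onto one link, so one connection never exceeds a single NIC's bandwidth, while many distinct flows spread out. A simplified illustration — this XOR hash is NOT the actual ESXi/LACP hash, which depends on the configured policy (src/dst MAC, IP, TCP/UDP port, ...):

```python
# Simplified illustration of hash-based uplink selection in a LAG.
# Real LACP hashing is policy- and vendor-specific; this XOR of the
# two IP addresses is only a stand-in to show the behavior.
import ipaddress

def pick_uplink(src_ip, dst_ip, n_uplinks):
    """Map a flow (src, dst) onto one of n_uplinks links."""
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return key % n_uplinks

# The same host pair always hashes to the same uplink, so one flow
# cannot use more than a single NIC's bandwidth...
assert pick_uplink("192.168.1.10", "192.168.1.11", 2) == \
       pick_uplink("192.168.1.10", "192.168.1.11", 2)

# ...but many distinct clients land on both uplinks, which is why more
# VMs increase the probability of a balanced load.
flows = {pick_uplink("192.168.1.10", f"192.168.1.{i}", 2) for i in range(20, 40)}
print(sorted(flows))  # prints [0, 1]: both uplinks in use
```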
  • 31. Load Balancing – LACP & LAG (***Preferred***) • More consistency compared to "Route based on physical NIC load" • More individual clients (VMs) further increase the probability of a balanced load 31 [Chart: KBps utilization per vmnic (vmnic0 vs. vmnic1) on Nodes 1–4 with an LACP setup, showing more even traffic]
  • 32. vSAN network on different subnets • vSAN networks on 2 different subnets? – If the subnets are routed and one host's NIC fails, the host will communicate on the other subnet – If the subnets are air-gapped and one host's NIC fails, it will not be able to reach the other hosts via the other subnet – The host with the failing NIC becomes isolated – TCP timeout of 90 seconds on failure 32
  • 34. Topologies • Single site, multiple hosts • Single site, multiple hosts with Fault Domains • Multiple sites, multiple hosts with Fault Domains (campus cluster but not stretched cluster) • Stretched Cluster • ROBO/2-node • Design considerations – L2/L3 – Multicast/Unicast – RTT (round-trip-time) 34
  • 35. Simplest topology - Layer-2, Single Site, Single Rack • Single site, multiple hosts, shared subnet/VLAN/L2 topology, multicast with IGMP • No need to worry about routing the multicast traffic in pre-vSAN 6.6 deployments • Layer-2 implementations are simplified even further with vSAN 6.6 and unicast. With such a deployment, IGMP snooping is not required. 35
  • 36. Layer-2, Single Site, Multiple Racks – pre-vSAN 6.6 (multicast) • pre-vSAN 6.6 where vSAN traffic is multicast • Vendor specific multicast configuration required (IGMP/PIM) 36
  • 37. Layer-2, Single Site, Multiple Racks – 6.6 and later (unicast) • vSAN 6.6 where vSAN traffic is unicast • No need to configure IGMP/PIM on the switches 37
  • 39. Stretched Cluster – L2 for data, L3 to witness or L3 everywhere • vSAN 6.5 and earlier: traffic between data sites is multicast (metadata) and unicast (I/O). • vSAN 6.6 and later: all traffic is unicast. • In all versions of vSAN, the witness traffic between a data site and the witness site has always been unicast. 39
  • 40. Stretched Cluster - Why not L2 everywhere? (unsupported) • Consider a situation where the link between Data Site 1 and Data Site 2 is broken • Spanning Tree may discover a path between Data Site 1 and Data Site 2 exists via switch S1 • Possible performance decrease if data network traffic passes through a lower specification witness site 40
  • 42. 2-Node vSAN for Remote Locations • Both hosts in the remote office store data • Witness in a central office or 3rd site stores the witness data • Unicast connectivity to the witness appliance – Up to 500ms RTT latency – 1.5Mbps bandwidth from data site to witness [Diagram: two 2-node remote sites (vSphere/vSAN), each linked to a witness over a 500ms RTT, 1.5Mbps connection] 42
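For sizing the witness link, VMware's design guidance works from component count rather than raw I/O: roughly 1138 B of metadata per component must reach the witness within about a 5-second window after a failover. A hedged sketch of that rule of thumb — treat the constants as approximations from the design guidance, not exact values:

```python
# Rough witness-link sizing from component count, following the
# published rule of thumb of ~1138 B of metadata per component
# transferred within a 5 s window. The constants are approximations
# from VMware's design guidance, not guarantees.

BYTES_PER_COMPONENT = 1138
WINDOW_SECONDS = 5

def witness_bandwidth_mbps(num_components):
    bits = num_components * BYTES_PER_COMPONENT * 8
    return bits / WINDOW_SECONDS / 1_000_000

# A small 2-node ROBO site with a few hundred components fits well
# inside the 1.5 Mbps minimum quoted on the slide.
print(round(witness_bandwidth_mbps(500), 2))   # 0.91 (Mbps)
print(round(witness_bandwidth_mbps(1000), 2))  # 1.82 (Mbps)
```

This is where the commonly quoted "~2 Mbps per 1000 components" figure comes from.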
  • 43. 2-node Direct Connect and Witness traffic separation 43 • Separates the vSAN data traffic from witness traffic • Ability to connect data nodes directly using Ethernet cables (10GbE vSAN traffic via direct cable) • Two cables between hosts for higher network availability • Witness traffic uses the management network Note: Witness Traffic Separation is NOT supported for Stretched Cluster at this time [Diagram: vSAN datastore; direct-cabled 10GbE vSAN traffic between the two hosts; management & witness traffic to the witness]
  • 45. General Concepts for Network Performance • Understand vSAN concepts and features – Standard vSAN setup vs. Stretched Cluster, FTT=1 or RAID-5/6 • Understand network best practices for optimum performance – Physical switch topology – ISL trunks are not oversubscribed – MTU size is consistent end to end – No errors/drops/pause frames on the network switches 45
  • 46. General Concepts for Network Performance • Understand host communication – No errors/drops/CRCs/pause frames on the network card – Driver/firmware versions as per the VMware HCL – Use SFP/GBIC modules certified by your hardware vendor – Use NIOC to prioritize traffic when links carry shared traffic (e.g. VM, vMotion, ...) 46
  • 47. DEMO: Adding 10ms network latency 47
  • 48. Summary: Graphical interpretation of IOPS vs. latency 48
  Added latency | IOPS (approx.)
  Native        | ~47,000
  +5 ms         | ~33,000
  +10 ms        | ~23,100
  [Chart: IOPS decreasing roughly linearly as injected latency rises from 0 toward 25 ms]
  • 49. DEMO: Network 2% and 10% packet loss 49
  • 50. Summary: Graphical interpretation of IOPS vs. loss % 50
  Packet loss | IOPS (approx.)
  Native (0%) | ~47,000
  1% loss     | ~42,300
  2% loss     | ~32,000
  10% loss    | ~3,400
  [Chart: IOPS falling off roughly exponentially as packet loss increases]
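To put both demos on the same scale, throughput retention relative to native can be computed directly from the measured numbers (approximate values taken from the two demo summaries above):

```python
# Percent of native IOPS retained under injected latency / packet loss,
# using the approximate measurements from the two demo summaries.
NATIVE = 47000

measurements = {
    "+5 ms latency":  33000,
    "+10 ms latency": 23100,
    "1% loss":        42300,
    "2% loss":        32000,
    "10% loss":       3400,
}

for label, iops in measurements.items():
    print(f"{label}: {iops / NATIVE:.0%} of native")
```

The takeaway: even 10 ms of added latency roughly halves throughput, and 10% packet loss is catastrophic (single-digit percent of native).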
  • 51. Nerd Out With These Key vSAN Activities at VMworld #HitRefresh on your current data center and discover the possibilities!
  Become a vSAN Specialist – Earn VMware digital badges to showcase your skills • New 2017 vSAN Specialist Badge • Education & Certification Lounge: VM Village • Certification Exam Center: Jasmine EFG, Level 3
  Practice with Hands-on-Labs – Learn from self-paced and expert-led hands-on labs • vSAN Getting Started Workshop (expert led) • VxRail Getting Started (self paced) • Self-paced lab available online 24x7
  Visit the SDDC Assessment Lounge – Discover how to assess if your IT is a good fit for HCI • Four Seasons Willow Room/2nd floor • Open from 11am – 5pm Sun, Mon, and Tue • Learn more at Assessing & Sizing in STO1500BU
  • 52. 3 Easy Ways to Learn More about vSAN 52
  New vSAN Tools – Live at VMworld • vSAN Sizer • vSAN Assessment
  Hands-On Lab – Practical learning of vSAN, VxRail and more • 24x7 availability online – for free! • Test drive vSAN for free today!
  Storage Hub Technical Library – StorageHub.vmware.com • Reference architectures, off-line demos and more • Easy search function • And more!

Editor's Notes

  • #5: Go to storagehub. Select vSAN. Select Plan and Design.
  • #6: This section will describe the fundamentals behind vSAN’s architecture.
  • #7: There are a number of distinct parts to vSAN networking. First there is the communication that takes place between all of the ESXi hosts in the vSAN cluster, indicating that they are still actively participating in vSAN. This has traditionally been done via multicast traffic, and a heartbeat is sent from the master to all hosts once every second to ensure they are still active. However, since the release of vSAN 6.6, this communication is now done via unicast traffic. This is a significant change compared to previous versions of vSAN, and should make vSAN configuration much easier from a networking perspective. Lastly, there is virtual machine disk I/O. This makes up the majority of the traffic on the vSAN network. Because VMs on the vSAN datastore are made up of a set of objects, these objects may be made up of one or more components. For example, a number of RAID-0 stripes or a number of RAID-1 mirrors. Invariably, a VM's compute and its storage will be located on different ESXi hosts in the cluster. It may also transpire that if a VM has been configured to tolerate one or more failures, the compute may be on host 1, the first RAID-1 mirror may be on host 2, and the second RAID-1 mirror could be on host 3. In this case, disk reads and writes for this virtual machine will have to traverse the vSAN network. This is unicast traffic, and forms the bulk of the vSAN network traffic. RDT traffic has always been unicast. VSANVP (Virtual SAN VASA Provider – Storage Awareness APIs) is used for Storage Policy Based Management; each vSAN node registers a VASA provider to vCenter Server via TCP.
  • #8: KMS server port varies from vendor to vendor Enabling encryption requires a rolling upgrade approach to write the DEKs (Disk Encryption Keys) to disk.
  • #10: Speaker notes: In some cases, 10GB may not be required for stretched cluster. It will depend on the number of components, and rebuild bandwidth. Details are in our documentation. In many cases, when the witness and the vSAN traffic are separated in 2-node, witness traffic is placed on the same network as the management traffic.
  • #11: Speaker notes. Single switches become a pain in large configs where you have to ensure MTU, VLAN, subnet mask, and gateway settings match. Distributed switches, while being a little more complex to configure, do offer greater advantages to vSAN customers. And it is free.
  • #13: Keep it Simple! Use a single VMkernel port for vSAN and have the networking stack provide resiliency. On this, please deploy what your networking team is most comfortable with! Designing vSAN networks – Dedicated or Shared Interfaces? https://blogs.vmware.com/virtualblocks/2017/01/20/designing-vsan-networks-dedicated-shared-interfaces/ IP hash balances on a per-host basis, so a connection between any two hosts will ONLY use one NIC. As a cluster grows, the opportunity for a host to use more than one NIC and balance flows grows. It should be noted that seeing 80% on uplink 1 and 20% usage on uplink 2 is not unheard of - this is the nature of how great LACP is. vMotion uses a less than documented hash that includes source and destination port and opens multiple connections, so LACP (even if its benefit for vSAN is not significant) can help vMotion with large VM evacuations.
  • #14: This section will describe the fundamentals behind vSAN’s architecture.
  • #15: An IP Multicast address is called a Multicast Group (MG). Internet Group Management Protocol (IGMP) is a communication protocol used to dynamically add receivers to IP Multicast group membership. There are multiple versions: v1, v2, v3. Protocol Independent Multicast (PIM) is a family of Layer 3 multicast routing protocols that provide different communication techniques for IP Multicast traffic to reach receivers that are in different Layer 3 segments from the Multicast Group's sources. IP multicast sends source packets to multiple receivers as a group transmission, and provides an efficient delivery of data to a number of destinations with minimum network bandwidth consumption. IGMP is a communication protocol used to dynamically add receivers to IP Multicast group membership. The IGMP operations are restricted within individual Layer 2 domains. IGMP allows receivers to send requests to the Multicast Groups they would like to join. Becoming a member of a Multicast Group allows routers to know to forward traffic that is destined for the Multicast Groups on the Layer 3 segment where the receiver is connected (switch port). This allows the switch to keep a table of the individual receivers that need a copy of the Multicast Group traffic. IP Multicast is a fundamental requirement of vSAN prior to v6.6. Earlier vSAN versions depended on IP multicast communication for the process of joining and leaving cluster groups as well as other intra-cluster communication services. IP multicast must be enabled and configured in the IP network segments that will carry the vSAN traffic service. Some customers who do not wish to use PIM for routing multicast traffic may consider encapsulating the multicast traffic in a VxLAN, or some other fabric overlay.
  • #16: IGMP snooping is a mechanism to constrain multicast traffic to only the ports that have receivers attached. The mechanism adds efficiency because it enables a Layer 2 switch to selectively send out multicast packets on only the ports that need them. When a network/VLAN does not have a router that can take on the multicast router role and provide the multicast router discovery on the switches, you can turn on the IGMP querier feature. The feature allows the Layer 2 switch to proxy for a multicast router and send out periodic IGMP queries in that network. This action causes the switch to consider itself a multicast router port. If it's a small Layer 2 environment and you only have 1 vSAN cluster per VLAN and NOTHING else goes on that VLAN besides VMkernel ports, you can use "flooding" and disable snooping for that VLAN. (No significant overhead, as you're basically having the switch broadcast that traffic.) In a larger environment you will want to keep IGMP snooping enabled.
  • #17: This is true even in vSAN 6.6, in case there is a ’revert’ to multicast from unicast. Port 23451 is used by the master for sending a heartbeat to each host in the cluster every second. Port 12345 is used for the CMMDS updates.
  • #18: This section will describe the fundamentals behind vSAN’s architecture.
  • #21: Speaker notes: vCenter Server and ESXi hosts must be 6.5.0d EP2 or later. The vSAN cluster IP address list is maintained by vCenter and is pushed to each node. The following changes will trigger an update from vCenter: A vSAN cluster is formed; a new vSAN node is added to or removed from a vSAN-enabled cluster; an IP address change or vSAN UUID change on an existing node.
  • #22: This section will describe the fundamentals behind vSAN’s architecture.
  • #23: Table for education purposes – only really interested in first 3 behaviours.
  • #25: When vCenter Server recovers, vCenter (vSAN health) will attempt to reconcile its current list of unicast addresses with the vSAN cluster, and may push down stale unicast addresses to vSAN nodes. This may trigger a vSAN cluster partition, and vCenter may no longer be accessible (since it runs on that vSAN cluster). DHCP with reservations (i.e. assigned IP addresses that are bound to the MAC addresses of vSAN VMkernel ports) is supported, as is DHCP without reservations but with the managing vCenter hosted outside of the vSAN cluster. IPv6 is supported with unicast communications in vSAN 6.6. With IPv6, a link-local address is an IPv6 unicast address that can be automatically configured on any interface using the link-local prefix FE80::/10 (1111 1110 10). vSAN, by default, does not add a node's link-local address to other cluster nodes (as a neighbor). As a consequence, IPv6 link-local addresses are not supported for unicast communications on vSAN 6.6. IPv6 link-local addresses are a special scope of address which can be used only within the context of a single Layer 2 domain.
  • #28: This section will describe the fundamentals behind vSAN’s architecture.
  • #29: Note: vSAN does not use NIC teaming for load balancing
  • #30: In a simple I/O test performed in our labs - using 120 VMs with a 70:30 read/write ratio with a 64K block size on a four-node all-flash vSAN cluster - we can clearly see vSAN makes no attempt to balance the traffic.
  • #32: Again using a simple I/O test performed in our labs, using 120 VMs with a 70:30 read/write ratio with a 64K block size on a four node all flash vSAN cluster
  • #33: Note: this is contrary to what we had in our documentation. Not recommended, if subnet not routed
  • #34: This section will describe the fundamentals behind vSAN’s architecture.
  • #37: Multiple TOR (top of rack) switches Explain IGMP needed on all switches
  • #38: Multiple TOR (top of rack) switches Explain IGMP needed on all switches
  • #39: This section will describe the fundamentals behind vSAN’s architecture.
  • #40: Multiple TOR (top of rack) switches Explain IGMP needed on all switches Between Data Site 1 and Data Site 2, VMware supports implementing a stretched L2 (switched) configuration or a L3 (routed) configuration. Both topologies are supported. Between Data Sites and Witness Site, VMware supports an L3 (routed) configuration
  • #41: Multiple TOR (top of rack) switches Explain IGMP needed on all switches
  • #42: This section will describe the fundamentals behind vSAN’s architecture.
  • #43: Key Message/Talk track: Remote offices and branch offices (ROBO) is a common geographic model for many organizations. vSAN makes it easy to run a 2-node vSAN cluster to provide all of the storage needs in these branch offices, while using the primary site as the location to house the witness appliances. This makes for fast, affordable, and flexible management and delivery of services in environments that require this type of topology. Additionally, vSAN ROBO Edition licensing has no host limit restriction. There is only a restriction on the number of VMs that may be run in a site. A maximum of 25 VMs can be run in a single site, or across sites. Any multiple of 25 requires an additional 25 VM-pack license. There is no upgrade path from vSAN ROBO licenses to regular CPU or Desktop vSAN licenses. ---------------------------------- Overview: 2 Node ROBO for vSAN is designed for branch office scenarios (retail, etc.) Both hosts in the remote office store data. Each remote office is seen as a 2 Node cluster. Witness VM(s) live in the primary site – one witness VM for each remote office. Can easily scale from 2 Node to more by adding additional hosts and removing the vSAN Witness. All sites managed by one vCenter instance. Minimum requirements when using 2 Node from the site to the location of the vSAN Witness Appliance: 500ms RTT latency, 1.5Mbps bandwidth.
  • #45: This section will describe the fundamentals behind vSAN’s architecture.
  • #48: 4k blocksize, threads=32, 10x VMs, Stretch-Cluster native, no latency introduced: ~ 46938.13 IOPS 1ms RTT latency: ~ 39377.07 IOPS, ~ 83% from native 5ms RTT latency: ~ 32679.18 IOPS, ~ 69% from native 10ms RTT latency: ~ 20333.63 IOPS, ~ 43% from native 20ms RTT latency: ~ 11828.53 IOPS, ~ 25% from native
  • #50: Native ~ 46k IOPS First test 1% -- 44k IOPS Second test 2% -- 30k IOPS 3rd test 5% -- 8k IOPS Native again ~ 46k IOPS 0.1% loss: ~ 45298.27 IOPS, ~ 97% from native 0.5% loss: ~ 44689.50 IOPS, ~ 96% from native 1% loss : ~ 43337.53 IOPS, ~ 93% from native 5% loss : ~ 9373.43 IOPS, ~ 20% from native 10% loss : ~ 3117.98 IOPS, ~ 6% from native
  • #53: Mention metrics in talk track