
ESP Campus Design

Validated Solution Guide

Aruba Solution TME

March 02, 2023


ESP Campus Design March 02, 2023

Table of Contents
ESP Campus Design
Aruba ESP Campus Introduction
    Purpose of This Guide
ESP Campus Design Goals
Aruba ESP Campus Connectivity Layer
    OSPF Routing
    IP Multicast
    Distributed Overlay
    Quality of Service
    Spanning Tree
    Radio Frequency Design
    Access Point Placement
    Channel Planning
    Proxy ARP on Gateway
    NAT/Routing on Gateway
    Network Resiliency
Aruba ESP Campus LAN Design
    Two-Tier Campus LAN
    Three-Tier Campus LAN
    OSPF Routing
    IP Multicast
    Overlay Networks
    Distributed Overlay
    Quality of Service
    Spanning Tree Protocol
    Network Resiliency
    Aruba ESP Campus LAN Design Summary
Aruba ESP Wireless LAN Design
    Radio Frequency Design
    Wireless Security
    Visitor Wireless
    WLAN Multicast and Broadcast
    WLAN QoS
    Bridge Mode Deployment
    Tunnel Mode Deployment
    WLAN Resiliency
    Campus Wireless Summary
Aruba ESP Campus Policy Design
    Network Segmentation
    Role-Based Policy
    Network Infrastructure Security
Aruba ESP Campus Reference Architectures
    Campus Components
    Campus Features
    Small Campus
    Medium Campus
    Large Campus
    Distributed Overlay
    Capacity Planning
Aruba Central NetConductor Scale Guide
Summary
What's New in This Version


ESP Campus Design


This guide is intended to help IT professionals understand the following design considerations for a
campus environment:

• Hardware selection
• Software selection
• Topology
• High availability
• Scalability
• Application performance
• Security


Aruba ESP Campus Introduction


The Aruba Networks ESP Campus is built on technology that provides tools to transform a campus net-
work into a modern, agile connectivity platform that satisfies the varied requirements of organizations
of any size, with distributed or centralized operations.

• The Aruba AOS-CX operating system applies consistent common switching operations across
the campus, branch, and data center that can be managed on-premises or from the cloud.
• AOS 10 is an enterprise-grade, cloud-native, wireless operating system that supports multiple
overlay designs, allowing maximum flexibility.
• Aruba Central is a data-rich cloud platform that provides network management services and
tools to wired, wireless, and policy infrastructure.

All three core components of Aruba ESP are backed by artificial intelligence capabilities that provide
best practice guidance and enable comprehensive analytics throughout the network’s operational
lifecycle.

Traditionally, IT personnel face difficult demands supporting extensive network services and meeting
continually expanding requirements for new technologies. The challenges are compounded because
many networks contain outdated or siloed legacy infrastructure and often require excessive manual
operation and complex maintenance.

Networks of any size require humans to find and address issues that arise. Troubleshooting can be as
difficult and time consuming as finding a needle in a haystack.

IT leaders must carefully assess their infrastructure and operational models with a long-term goal of
modernizing the architecture to take full advantage of emerging solutions that maximize productivity
and reduce strain on existing networks, tools and professional staff. The addition of new technology is a
constant in IT: making decisions that not only add new capability but also eliminate current challenges
is essential for business success.

Purpose of This Guide


This guide presents design details and considerations for campus networks in the Aruba ESP (Edge
Services Platform) architecture and associated hardware and software components, with examples of
reference architectures for small, medium, and large campuses.

Requirements that shape the design and benefits the design can provide to an organization are pre-
sented.

The guide uses a sample system that integrates access points (APs), gateways, access switches, aggre-
gation switches, and core switches, with cloud-based orchestration and network management.


ESP Campus Design Goals


The overall goal is to create a simple, scalable design that is easy to replicate at different sites. Compo-
nent selection should include a specific set of products that collectively perform all required operations
and maintenance functions.

The design must also target subsecond failover if a network device or a link between two network
devices becomes unavailable.

All protocols are tuned to deliver a highly available network that continually meets all functional
requirements.

This guide can be used to design new networks or to optimize and upgrade existing networks. The
guide is not intended to present an exhaustive description of all available options. It focuses on the
most highly recommended and proven designs, features, software, and hardware.

Customer Use Cases

With so many wireless devices on a network, performance and availability are key concerns. Wireless
clients with different capabilities operate at different performance levels. If the wireless network cannot
self-optimize, slower clients can degrade performance for faster clients.

Wi-Fi 5 and Wi-Fi 6 standards support speeds greater than 1 Gbps. To accommodate the increased data
rates, the APs implement the IEEE 802.3bz Ethernet standard of 2.5 and 5 Gbps.

An organization can achieve the higher data rates on existing twisted-pair cabling when connecting to
Aruba switches with Smart Rate ports that also support the 802.3bz Ethernet standard.

To support the explosion of Internet of Things (IoT) devices and the latest wireless technologies, IEEE
802.3bt Power over Ethernet (PoE), eliminates the need for dedicated power, while offering simplicity
and cost reduction. The access layer acts as a collection point for high-performance wired and wireless
devices. The access layer needs enough capacity to support current power and bandwidth and to scale
as the number of devices grows.

Security is a critical requirement for the campus networks. Authorized users must be authenticated
and given access to services they need to do their jobs. IoT devices must be identified using MAC
authentication and profiling to prevent rogue devices from using the network.

In addition to internal access within company-managed assets, users connect their personal devices,
guests require Internet access, and contractors or vendors must access the Internet and the organiza-
tion’s internal network.

Broad access must be enabled while maintaining network security and integrity. Because connecting
so many devices and user types significantly increases the administrative burden, the network should
be designed to maximize automated, secure device onboarding.

This guide discusses the following use cases:

• Use artificial intelligence to augment available network operator resources with Smart Telemetry.


• Enforce Zero Trust Security to secure the network from inside and outside attacks.
• Create a powerful unified infrastructure with centralized cloud-based management.


Aruba ESP Campus Connectivity Layer


The Connectivity Layer is the foundation for the Aruba ESP architecture. It features the physical in-
frastructure components with extensive flexibility and high availability, along with the telemetry for
analyzing the current state of the network. Enterprise networks need to support their existing end-
points, including many legacy systems, while transitioning to a modern architecture. This section
will discuss the different design aspects of the Connectivity Layer and how they relate to the ESP
Architecture.

OSPF Routing
In a large organization, all departments need to be connected and sharing information. To accomplish
this in an easy, scalable manner, a dynamic routing protocol is needed. The two most common choices
for an enterprise network are Open Shortest Path First (OSPF) and Border Gateway Protocol (BGP). In
this design, we recommend using OSPF for its simplicity and ease of configuration. OSPF is a dynamic,
link-state, standards-based routing protocol that is commonly deployed in campus networks. OSPF
provides fast convergence and excellent scalability, making it a good choice for large networks because
it can grow with the network without the need for redesign.

OSPF uses areas, which segment the network to limit routing advertisements and allow for route
summarization. Area segmentation is normally done on logical network boundaries,
such as buildings or locations, and it helps minimize the impact of routing changes across the network.
In large networks with WANs, multiple OSPF areas are very useful, but in a typical campus network, a
single area is recommended.

In the three-tier and two-tier campus designs, the access switches have a default gateway in the
management VLAN for operational access, and the VLANs are terminated at the aggregation layer
switches. Aggregation layer switches are prevented from forming adjacencies with access switches,
but they will advertise the available VLANs to the switches. The aggregation and core switches in the
three-tier design use OSPF to dynamically exchange routes for the campus network. OSPF uses
point-to-point links between aggregation and core devices for more efficient transfer of OSPF updates. The
collapsed core and WAN gateways in the two-tier design use OSPF to dynamically exchange routes for
the campus network.

Enabling authentication for OSPF devices is required only for highly secure networks; for all other
network deployments, authentication is optional. The Internet is accessed using a static default route
originating from the DMZ firewall. The diagram below outlines the key points for running OSPF in a
three-tier wired design.

OSPF in three-tier wired
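The OSPF recommendations above can be sketched in AOS-CX-style configuration. This fragment is illustrative only, not a validated configuration: the process ID, interface numbers, and addresses are assumptions, and exact command syntax varies by platform and software release.

```
router ospf 1
    router-id 10.0.0.1
    area 0.0.0.0

interface 1/1/49
    description Point-to-point routed uplink to core
    no shutdown
    ip address 10.0.1.0/31
    ip ospf 1 area 0.0.0.0
    ip ospf network point-to-point
```

Configuring the routed uplink as point-to-point skips the designated-router election and speeds up adjacency formation, which is why the design prefers point-to-point links between aggregation and core.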


IP Multicast
IP multicast allows a single IP data stream to be replicated by the network and sent from a single source
to multiple receivers. IP multicast is much more efficient than sending multiple unicast streams or
flooding a broadcast stream that would propagate everywhere. Common examples of multicast traffic
in a campus network are IP telephony music on hold and IP video broadcast streaming of pre-recorded
content.

This design uses protocol independent multicast-sparse mode (PIM-SM) to route multicast traffic on
the network. The mechanisms to route multicast traffic are rendezvous point (RP), bootstrap router
(BSR), Multicast Source Discovery Protocol (MSDP) and Internet Group Management Protocol (IGMP).
PIM-SM should be configured on all routed links to enable multicast traffic on the network. In this
design, the OSPF routing table is used for reverse path forwarding to direct the multicast traffic.

The RP is the root of the multicast tree when using sparse mode. Multiple RPs can be configured for
redundancy, although normally, only one RP is active at a time for each multicast group. Multiple
RPs can be active if MSDP is enabled because it allows a multicast domain to share source tree tables
between RPs. MSDP allows switches to provide inter- and intra-domain active-active redundancy using an
anycast IP address as the RP. Anycast is a networking technique that allows multiple devices to
share the same IP address. Based on the location of the user request, the switches send the traffic to
the closest device in the network, which reduces latency and increases redundancy.


In a campus, MSDP is needed for intra-domain redundancy and should be enabled on the core switches
in either the two-tier or three-tier topologies. The RP candidate announcement, in combination with
MSDP, advertises the Anycast IP address to neighboring devices. Neighboring devices will not know
what devices want to be the RP unless BSR is enabled. The BSR is elected from a list of candidate-BSRs
configured on the network. There can only be a single active BSR, and it advertises RP information to
all PIM-enabled routers, freeing the administrator from having to statically configure the RP address
on each router in the network. BSR, RP and MSDP should be enabled on the core switches to identify
the active RP and notify neighboring devices.

When a client wants to join a multicast group, it sends an IGMP join message to the local multicast
router which is also known as the designated router (DR). The DR forwards the join message towards
the RP and all routers in the path do the same until the IGMP join message reaches the RP. Multicast
traffic is forwarded back down the shared tree to the client. Periodic IGMP join messages are sent to
the RP for each multicast group with active clients. If a DR wants to stop traffic from a multicast group
because it no longer has active clients, it can send an IGMP prune message to the RP. To prevent the DR
from flooding traffic to all clients on a local subnet, Layer 2 switches snoop the IGMP messages and
only forward traffic to clients that have sent a join message. IGMP should be enabled on aggregation
switches and collapsed core switches.
NOTE:
IGMP timers must match across all switches on the network, including switches from other
vendors.
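The client side of the IGMP exchange described above can be illustrated with a short Python sketch. The group address and port below are arbitrary examples; the operating system, not the application, generates the actual IGMP membership report when the socket joins the group.

```python
import socket
import struct

GROUP = "239.1.1.10"  # example multicast group address (assumption)
PORT = 5004           # example UDP port (assumption)

# Create a UDP socket and bind to the multicast port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# mreq pairs the group address with a local interface; INADDR_ANY (0.0.0.0)
# lets the kernel pick the interface. Joining triggers the IGMP join message
# that the designated router forwards toward the rendezvous point.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    # Hosts without a multicast-capable route may refuse the join.
    pass
```

Layer 2 switches with IGMP snooping enabled observe this membership report and forward the group's traffic only to ports where a join was seen.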

Dynamic Multicast Optimization

The 802.11 standard states that multicast traffic over WLAN must be transmitted at the lowest basic rate
so all clients are able to decode it. Aruba recommends enabling Dynamic Multicast Optimization (DMO)
to allow the AP to convert the multicast traffic to unicast for each client device. DMO is an Aruba
technology that converts multicast frames sent over the air to unicast. This provides the same benefits as
ARP optimization to decrease the channel duty cycle and guarantee frame delivery. Unicast frames are
acknowledged by the client and can be retransmitted if a frame is lost over the air. Unicast frames are
also transmitted at the highest possible data rate supported by the client vs the lowest basic rate.

Multicast and broadcast frames are natively transmitted at the lowest basic rate in order to have a
higher chance of successful delivery to all associated clients. With DMO enabled, the multicast packet is
converted in the AP/Gateway datapath. This operation can be CPU intensive depending on the multicast
packet size and number of multicast streams. It is not recommended to have multiple multicast sources
with the same data crossing the datapath with DMO enabled. If multicast is required, it is recommended
to use the largest possible Layer 2 network so multiple multicast streams are not converted at the
same time. If large Layer 2 networks cannot be created, it is recommended to use additional Gateways
just for the conversion. Enabling DMO in a properly sized installation does not have a negative impact
on the Gateway or AP performance.


The following figure shows IP multicast BSR, RP, MSDP, IGMP Snooping and DMO placement in the
two-tier and three-tier wired designs.

IP Multicast BSR, RP, MSDP, IGMP Snooping and DMO placement

Broadcast to Unicast Conversion

Aruba WLANs can also convert broadcast packets into unicast frames over the air. Broadcast frames
over the air must be transmitted at the lowest configured data rate, called the "basic rate," in an
attempt to guarantee delivery of every frame. Because broadcasts are not acknowledged the way
unicast frames are, there is no option to retransmit a lost
broadcast frame. When the frame over the air is converted to unicast, the AP can send the frame at a
much higher data rate and get confirmation of delivery. Retransmitting lost unicast frames is possible
because each frame is acknowledged.

Converting broadcast to unicast has two large benefits: first, as described above, guaranteed delivery
of the frame; and second, a greatly decreased channel duty cycle, because frames are delivered at the
highest possible data rate on a per-client basis. When broadcast frames are delivered at the basic rate
for every client, it is equivalent to a single-lane road with someone driving under the speed limit: with
light traffic the impact is small, but with heavy traffic the AP duty cycle suffers.

ARP Optimization

Two modes are available for ARP optimization: "ARP Only" and "All". With "ARP Only", the AP
optimizes only ARP requests, and the rest of the broadcast and multicast traffic is forwarded as
usual. With "All" enabled, every multicast and broadcast frame is dropped. For multicast to be
converted to unicast instead, see the Dynamic Multicast Optimization section above.

Distributed Overlay
Distributed overlay networks in the Aruba ESP Campus are created using a technology called EVPN-
VXLAN. This suite of protocols creates a dynamically formed network fabric that extends layer 2 con-
nectivity over an existing physical network and layer 3 underlay. It is an open standards technology
that creates more agile, secure, and scalable networks in campuses and data centers. EVPN-VXLAN
consists of:

• Ethernet VPN (EVPN), a BGP-driven control plane for overlays that provides virtual connectivity
between different layer 2/3 domains over an IP or MPLS network.
• Virtual extensible LANs (VXLAN), a common network virtualization tunneling protocol that ex-
pands the layer 2 network address space from 4,000 to 16 million networks.

Aruba ESP implements EVPN-VXLAN overlays on an IP underlay using redundant, layer 3 links for
high-speed resiliency and maximum bandwidth utilization.


Figure 1: Distributed Overlay

EVPN-VXLAN Benefits

• Uniform bridging and routing across a diverse campus topology:

    – Efficient layer 2 extension across layer 3 boundaries.
    – Anycast gateways ensure consistent first-hop routing services across the campus.

• End-to-end segmentation using VXLAN Group-Based Policy (VXLAN-GBP) provides the ability
to propagate policy anywhere in the campus.
• Transport across any IP network supporting jumbo frames; VXLAN only needs to be deployed
on the edge devices of the fabric.

EVPN-VXLAN Control Plane

In Aruba ESP, the EVPN-VXLAN control plane is Multiprotocol BGP (MP-BGP), which communicates MAC
addresses, MAC/IP bindings, and IP prefixes to ensure endpoint reachability across the fabric. This
approach is far superior both to inefficient flood-and-learn communication on the data plane and to
centralized control-plane approaches with their inherent scaling limitations.


The use of MP-BGP with EVPN address families between virtual tunnel endpoints (VTEPs) provides
a standards-based, highly scalable control plane for sharing endpoint reachability information with
native support for multi-tenancy. MP-BGP has been used for many years by service providers to offer
secure Layer 2 and Layer 3 VPN services at very large scale. Network operations are simplified by using
an iBGP design with route reflectors so that peering is only required between access switches and the
core. BGP control plane constructs to be familiar with include:

• Route Distinguisher (RD) - In order to support multi-tenancy and the likelihood of overlapping
IP addressing, the use of an RD associated with a BGP prefix allows the unique identification
of the virtual network associated with each prefix. In VXLAN a Layer 3 VNID maps to a VRF and
represents a tenant with a dedicated virtual network and corresponding routing table.
• Route Target (RT) - Route targets are used as an attribute to flag the source of specific prefix
updates and as a filtering criteria for importing prefixes into a routing table. In a typical campus
environment, with any to any communication on the same tenant, this attribute is not relevant
unless route leaking is required between different virtual networks.
• Route Reflector (RR) - To optimize the process of sharing reachability information between
VTEPs, the use of route reflectors at the core will allow for simplified iBGP peering. This design
allows for all VTEPs to have the same iBGP peering configuration and eliminates the need for a
full mesh of iBGP neighbors.
• Address Family (AF) - Different types of routing tables (IPv4 unicast, IPv6 unicast, L3VPN, etc.)
are supported in MP-BGP. The Layer 2 VPN address family (AFI=25) with the EVPN subsequent
address family (SAFI=70) is used to advertise IP and MAC address information between BGP
speakers. The EVPN address family routing table contains reachability information for establishing
VXLAN tunnels between VTEPs.

The Aruba ESP Campus design uses two layer 3 connected core switches as iBGP route reflectors. The
quantity of destination prefixes and overlay networks will consume physical resources in the form of
forwarding tables and should be taken into consideration when designing the network. The reference
architecture section will provide hardware guidelines for scaling the design of the fabric underlay.
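As a sketch, the iBGP route-reflector peering described above might look like the following AOS-CX-style fragment on an access-layer VTEP. The AS number, loopback addresses, and exact command syntax are illustrative assumptions rather than validated configuration.

```
router bgp 65001
    bgp router-id 10.0.0.11
    neighbor 10.0.0.1 remote-as 65001
    neighbor 10.0.0.1 update-source loopback 0
    address-family l2vpn evpn
        neighbor 10.0.0.1 activate
        neighbor 10.0.0.1 send-community extended
```

Because the core route reflectors re-advertise EVPN routes to every client, each access VTEP carries only this single peering stanza (repeated for the second core) instead of a full mesh of iBGP neighbors.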

VXLAN Network Model

VXLAN encapsulates layer 2 Ethernet frames in layer 3 UDP packets. These VXLAN tunnels provide both
layer-2 and layer-3 virtualized network services to connected endpoints. A VTEP is the function within
a switch that handles the origination or termination of VXLAN tunnels. Similar to a traditional VLAN ID,
a VXLAN Network Identifier (VNI) identifies an isolated layer-2 segment in a VXLAN overlay topology.
Symmetric Integrated Routing and Bridging (IRB) capability allows the overlay networks to support
contiguous layer-2 forwarding and layer-3 routing across leaf nodes.
NOTE:
Always configure jumbo frames on VXLAN VTEPs and fabric intermediate devices to ensure the
accurate transport of additional encapsulation.


VXLAN networks are composed of two key virtual network constructs: Layer 2 VNI and Layer 3 VNI.
Below is an explanation of the relationship of an L2 VNI, L3 VNI, and VRF.

• L2 VNIs are analogous to VLANs and, on AOS-CX, use the configuration of a VLAN. An L2 VNI
bridges traffic between the endpoints within VLANs and, when appropriate, between VTEPs.
• L3 VNIs are analogous to VRFs and route between the subnets of L2 VNIs within a VTEP or
between VTEPs.

– Multiple L3 VNIs can exist within a single VRF.

VXLAN Encapsulation

An overlay network is implemented using Virtual Extensible LAN (VXLAN) tunnels that provide both
layer-2 and layer-3 virtualized network services to endpoints connected to the campus.
A VTEP encapsulates a frame in the following headers:

• IP header: Valid addresses of VTEPs or VXLAN multicast groups on the transport network. Devices
in the transport network forward VXLAN packets based on the outer IP header.
• UDP header for VXLAN: The default VXLAN destination UDP port number is 4789.
• VXLAN header: VXLAN information for the encapsulated frame.

– 8-bit VXLAN Flags: If the fifth bit (I flag) is 1, the VNI is valid. If the I flag is 0, the VNI is invalid.
When VXLAN-GBP is used, the first bit (G flag) indicates that the Group Policy ID is present.
All other bits are reserved and set to 0.
– 8-bit Reserved field, which carries additional policy flags when VXLAN-GBP is used.
– 16-bit VXLAN Group Policy ID: Provides a group ID which determines the policy enforced
on the packet.
– 24-bit VXLAN Network Identifier: Identifies the VXLAN of the frame. It is also called the
virtual network identifier (VNI).
– 8-bit Reserved field

Together these fields form the 8-byte (64-bit) VXLAN header.

Figure 2: VXLAN packet header
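The header layout can be made concrete with a small Python sketch that packs and unpacks the fields. This illustrates the bit layout of the VXLAN-GBP header variant only; it is not a full VXLAN implementation, and the example VNI and policy ID values are arbitrary.

```python
import struct

I_FLAG = 0x08  # fifth bit of the flags byte: VNI is valid
G_FLAG = 0x80  # first bit of the flags byte: Group Policy ID present (VXLAN-GBP)

def pack_vxlan_gbp(vni: int, gbp_id: int = 0) -> bytes:
    """Pack flags(8) | reserved(8) | group policy ID(16) | VNI(24) | reserved(8)."""
    flags = I_FLAG | (G_FLAG if gbp_id else 0)
    # The final 32 bits hold the 24-bit VNI followed by an 8-bit reserved field.
    return struct.pack("!BBHI", flags, 0, gbp_id, vni << 8)

def unpack_vxlan_gbp(header: bytes):
    """Return (flags, group policy ID, VNI) from an 8-byte VXLAN-GBP header."""
    flags, _, gbp_id, tail = struct.unpack("!BBHI", header)
    return flags, gbp_id, tail >> 8
```

A VNI and group policy ID round-trip through these helpers exactly as a VTEP would stamp a role-derived policy ID onto an encapsulated frame for enforcement elsewhere in the fabric.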


Quality of Service
Quality of service (QoS) refers to the ability of a network to provide higher levels of service using traffic
prioritization and control mechanisms. Applying the proper QoS policy is important for real-time traffic,
such as Skype or Zoom, and business-critical applications, like Oracle or Salesforce. To accurately
configure QoS on a network, there are several aspects to consider, such as bit rate, throughput, path
availability, delay, jitter, and loss. The last three—delay, jitter and loss—are easily improved by using an
appropriate queueing algorithm which allows the administrator to schedule and deliver applications
with higher requirements before applications with lower requirements. The areas of the network
that require queueing are the places with constrained bandwidth, like the wireless or WAN sections.
Wired LAN uplinks are designed to carry the appropriate amount of traffic for the expected bandwidth
requirements and since QoS queueing does not take effect until there is active congestion, queueing is
not typically needed on LAN switches.

The easiest strategy to deploy QoS is to identify the applications running in the network that are critical
and give them a higher level of service using the QoS scheduling techniques described in this guide.
The remaining applications stay in the best-effort queue to minimize the upfront configuration and
to lower the day-to-day operational effort of troubleshooting a complex QoS policy. If additional
applications become critical in the future, they are identified and added to the existing list of business-
critical applications. This can be repeated as needed without requiring a comprehensive policy for
all applications on the network. This strategy is normally used by organizations who do not have a
corporate-wide QoS policy or are troubleshooting application performance problems in specific areas
of the network.

An example of this strategy prioritizes real-time applications, along with a few other key applications that require fast response times because users are waiting on remote servers. Identifying
business-critical applications and giving them special treatment on the network allows employees to
remain productive doing the tasks that matter most to the business. Real-time applications are placed
into a strict-priority queue, and business-critical applications are serviced by one or more premium
queues that provide a higher level of service during times of congestion. The administrator must be
careful to limit the amount of traffic placed into strict-priority queues to prevent oversaturation of the
interface buffers. If all traffic is marked with strict priority, the QoS strategy becomes nothing more
than one large first-in, first-out queue, which defeats the purpose of creating a QoS strategy. After critical
traffic is identified and placed into the appropriate queues, the rest of the traffic is placed into a default
queue with a lower level of service than the applications used for running the business. If the higher-priority applications are not using their assigned bandwidth, the default queue can use all the available
bandwidth.

Validated Solution Guide 15


ESP Campus Design March 02, 2023

Traffic Classification and Marking

Aruba recommends using the access switch as the QoS policy enforcement point for traffic on the LAN.
Selected applications are identified by IP address and port number at the ingress of the
access switch and marked for special treatment. Traffic is optionally queued on the uplinks, but this is
not a requirement in a properly designed campus LAN environment. Any applications that are not
identified are re-marked to a value of zero, giving them a best-effort level of service. The diagram below
shows where traffic is classified, marked, and queued as it passes through the switch.

Classification, marking and queueing

In a typical enterprise network, applications with similar characteristics are categorized based on how
they operate over the network. These application categories are sent into different queues according
to the types of applications. For example, if broadcast video or multimedia streaming applications
are not used for business purposes, there is no need to account for them in a QoS policy. However, if
Skype and Zoom are used for making business-related calls, the traffic must be identified and given a
higher priority. Certain traffic that is not important to the business, like YouTube, gaming and general
web browsing, should be identified and placed into the scavenger class, which allows it to be dropped
first and with the highest frequency during times of congestion.

A comprehensive QoS policy requires business-relevant and scavenger-class applications to be categorized on the network. Layer 3 and Layer 4 classifications group applications into
categories to help identify the ones with similar characteristics. After the important applications
are sorted from the ones that are not, they are combined into groups for queuing. Scheduling
algorithms rely on classification markings to identify applications as they pass through a network
device. Aruba recommends marking applications at Layer 3 rather than Layer 2, so the markings are
carried throughout the life of a packet. The goal of the QoS policy is to allow critical applications to
share the available bandwidth with minimal system impact and engineering effort.

DiffServ Code Point (DSCP)

Layer 3 marking uses the IP type of service (ToS) byte, with either IP Precedence (the three most-significant bits, values 0 to 7) or DSCP (the six most-significant bits, values 0 to 63). DSCP
values are more common because they provide a higher level of QoS granularity, but they are also
backward compatible with IP Precedence because of their left-most placement in the ToS byte. Layer


3 markings are added in the standards-based IP header, so they remain with the packet as it travels
across the network. When an additional IP header is added to a packet, like in the case of traffic in an
IPsec tunnel, the inner header DSCP marking must be copied to the outer header to allow the network
equipment along the path to use the values.
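The bit arithmetic behind this backward compatibility can be sketched in a few lines. This is an illustrative sketch (function names are my own, not an Aruba API); the bit positions follow the DSCP definition described above.

```python
# Sketch: how a DSCP value sits inside the ToS byte, and how legacy
# devices recover an IP Precedence value from it.

def dscp_to_tos(dscp: int) -> int:
    """DSCP occupies the six most-significant bits of the ToS byte."""
    return dscp << 2  # the low two bits carry ECN and are left at zero

def dscp_to_precedence(dscp: int) -> int:
    """IP Precedence is the three most-significant bits, which is why
    DSCP markings remain meaningful to precedence-only devices."""
    return dscp >> 3

print(hex(dscp_to_tos(46)))    # EF (46) -> ToS byte 0xb8
print(dscp_to_precedence(46))  # EF maps to IP Precedence 5
```

For example, Expedited Forwarding (DSCP 46) occupies the same high-order bits as IP Precedence 5, so an older precedence-aware device still treats it as critical traffic.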

DSCP marking

There are several RFCs associated with the DSCP values as they pertain to the per-hop behaviors (PHBs)
of traffic as it passes through the various network devices along its path. The diagram below shows the
relationship between PHB and DSCP, along with their associated RFCs.

DSCP relationship with per-hop behaviors

Voice traffic is marked with the highest priority using the Expedited Forwarding (EF) class. Multimedia,
broadcast, and video conferencing applications are placed into an Assured Forwarding (AF31) class
to give them a percentage of the available bandwidth as they cross the network. Signaling, network
management, transactional, and bulk applications are given an Assured Forwarding (AF21) class. Finally,
default and scavenger applications are placed into the Default (DF) class to give them a reduced amount
of bandwidth without completely starving them during times of interface congestion. The figure below
shows an example of six application types mapped to a 4-class LAN queueing model.

Six application types in a 4-class LAN queueing model
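The class mapping described above can be expressed as a simple lookup. This is an illustrative sketch: the category and queue names are my own labels, while the PHB and DSCP values follow the text.

```python
# Sketch: 4-class mapping of application categories to queue, PHB, and DSCP.

CLASS_MAP = {
    "voice":              ("strict-priority", "EF",   46),
    "multimedia":         ("premium-1",       "AF31", 26),
    "video-conferencing": ("premium-1",       "AF31", 26),
    "signaling":          ("premium-2",       "AF21", 18),
    "transactional":      ("premium-2",       "AF21", 18),
    "scavenger":          ("default",         "DF",   0),
}

queue, phb, dscp = CLASS_MAP["voice"]
print(queue, phb, dscp)   # strict-priority EF 46
```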


The “Best effort” entry at the end of the QoS policy places all application flows that are not recognized
by the Layer 3 / Layer 4 classification into the best-effort queue. This prevents end users who mark
their own packets from receiving higher-priority access across the network.

Traffic Prioritization and Queuing

The Aruba switch supports up to eight queues, but in this example only four queues are used. The
real-time interactive applications are combined into one strict priority queue, while multimedia and
transactional applications are placed into deficit round robin (DRR) queues, and the last queue is used
for scavenger and default traffic. DRR is a packet-based scheduling algorithm that groups applications
into classes and shares the available capacity between them according to a percentage of the bandwidth
which is defined by the network administrator. Each DRR queue is given its fair share of the bandwidth
during times of congestion, but all of them can use as much of the bandwidth as needed when there is
no congestion.

The outbound interface uses the DSCP values shown in the second column to place applications
into the correct queues. The weighted values in the DRR LAN scheduler column are
added together, and each DRR queue is given a share of the total. The values need to be adjusted
according to the volume of applications in each category on the network; this adjustment is
often a matter of trial and error, observing how the QoS policy affects the applications in the environment.
The queues are assigned sequentially in top-down order, as shown in the 4-class example below.
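The weight-to-share arithmetic works like this. The weights below are illustrative values for three non-strict queues, not Aruba defaults:

```python
# Sketch: DRR weights translate to guaranteed bandwidth shares during
# congestion; with no congestion, any queue may use all spare capacity.

drr_weights = {"multimedia": 30, "transactional": 20, "default": 10}

total = sum(drr_weights.values())
shares = {name: 100 * w / total for name, w in drr_weights.items()}
print(shares)   # each queue's guaranteed percentage of non-strict bandwidth
```

With these example weights, the multimedia queue is guaranteed 50% of the non-strict bandwidth, transactional about 33%, and default about 17%.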

QoS summary for Aruba switch


Wi-Fi Multimedia

Wi-Fi Multimedia (WMM) is a certification program created by the Wi-Fi Alliance that covers
QoS over Wi-Fi networks. WMM prioritizes traffic into one of four queues, and traffic receives different
treatment in the network based on its traffic class. This treatment includes a shortened wait
time between packets and tagging of packets with DSCP and IEEE 802.1p markings. Aruba allows
users to define which traffic fits into each queue, and the DSCP and 802.1p values can be adjusted
appropriately to match the network.

To take advantage of WMM functionality in a Wi-Fi network, three requirements must be met:

Step 1 The access point must be Wi-Fi Certified™ for WMM and have WMM enabled.

Step 2 The client device must be Wi-Fi Certified™ for WMM.

Step 3 The source application must support WMM.


NOTE:
WMM is supported in all of Aruba’s Wi-Fi products.

QoS is not set only per VLAN or port; it can also be set dynamically per application using the Policy Enforcement Firewall. Most networks, including wireless LANs, operate far below capacity most of the time.
This means there is very little congestion, and traffic experiences good performance. QoS provides
predictable behavior for the occasions and points in the network where congestion does occur.
During overload conditions, QoS mechanisms grant certain traffic high priority while making fewer
resources available to lower-priority clients. For instance, increasing the number of voice users on the
network may entail delaying or dropping data traffic.


The Wi-Fi network is shared across multiple clients and the medium is bandwidth limited. The wireless
spectrum occupied by an RF channel is shared by an access point, its associated clients, and by all
other access points and clients in the vicinity that are using the same channel. Prior to Wi-Fi 6 and BSS
Coloring, only one client or AP could transmit at any given time on any channel.

Wi-Fi uses carrier-sense, multiple-access with collision avoidance (CSMA/CA), much like the shared
Ethernet networks did in the early days. Before transmitting a frame, CSMA/CA requires each device
to monitor the wireless channel for other Wi-Fi transmissions. If a transmission is in progress, the
device sets a back-off timer to a random interval and tries again when the timer expires. Once the
channel is clear, the device waits a short interval – the arbitration inter-frame space – before starting
its transmission. Since all devices follow the same set of rules, CSMA/CA ensures “fair” access to the
wireless channel for all Wi-Fi devices. The Wi-Fi standard defines a distributed system in which there is
no central coordination or scheduling of clients or APs.

The WMM protocol adjusts two CSMA/CA parameters, the random back-off timer and the arbitration
inter-frame space, according to the QoS priority of the frame to be transmitted. High-priority frames
are assigned shorter random back-off timers and arbitration inter-frame spaces, while low-priority
frames must wait longer. WMM thereby gives high-priority frames a much higher probability of being
transmitted sooner. A station with low-priority traffic, on seeing another station transmit, must set its
back-off timer to a random number within a broad range, say 15 to 1023. A station with high-priority
traffic selects a random number from a smaller range, say 7 to 31. Statistically, this ensures that
the high-priority frame is transmitted with a shorter delay and has a lower probability of being
dropped.
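A rough simulation makes the statistical argument concrete. The ranges below approximate the example in the text, not exact EDCA parameters for any access category:

```python
import random

# Sketch: a high-priority station drawing its back-off from a small range
# almost always counts down before a low-priority station drawing from a
# broad range, so it transmits first.

def backoff(cw_lo: int, cw_hi: int) -> int:
    """Pick a random back-off slot count from the contention window."""
    return random.randint(cw_lo, cw_hi)

random.seed(1)
trials = 10_000
wins = sum(backoff(7, 31) < backoff(15, 1023) for _ in range(trials))
print(f"high-priority frame goes first in {100 * wins / trials:.1f}% of trials")
```

Running this shows the high-priority station winning the vast majority of contention rounds, which is exactly the behavior WMM relies on.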

Back-off and Arbitration inter-frame timers for WMM

When a high-priority frame is served by a Wi-Fi network interface, the device is allowed to use a shorter
arbitration inter-frame space than other devices using the same channel. This means that when the
wireless channel goes quiet, devices with high-priority frames wait a shorter inter-frame space relative
to other devices with lower priority traffic. This mechanism thereby assures more rapid transmission
of high priority traffic.


The random back-off timer and arbitration inter-frame space mechanism address conditions during
which multiple devices have traffic to transmit at the same time, and the total offered traffic is high
relative to the capacity of the channel. However, these mechanisms don’t address how a particular
client or AP ensures QoS within its own interface during a temporary traffic peak. That capability is
handled by an internal priority queuing mechanism. As packets are sent to the MAC layer of the Wi-Fi
interface, they are internally lined up in their respective priority queues which are serviced in strict
priority order. If the device generates more traffic than it can transmit onto the wireless channel, the
higher priority traffic will override other packets within the interface.

WMM defines four priority levels in ascending priority: background, best effort, video, and voice. The
default values for random back-off timers and arbitration inter-frame spaces are defined in the 802.11
standard, as is the queuing structure in the Wi-Fi interface. Since QoS must be maintained end-to-end,
it is important that WMM priority levels be mapped to the QoS priorities in use on the LAN. The table
below shows how DSCP priorities are translated into the four WMM priority levels.

WMM to DSCP mapping

WMM Access Category     Description             DSCP
Voice Priority          Real-time interactive   46
Video Priority          Multimedia streaming    26
Best Effort Priority    Transactional           18
Background Priority     Best effort             0
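The table above can be expressed as a simple lookup. This is an illustrative sketch; the fallback for unlisted DSCP values is my own assumption, and on a real WLAN the mapping is part of the SSID/QoS configuration:

```python
# Sketch: DSCP-to-WMM access category mapping from the table above.
DSCP_TO_WMM = {46: "Voice", 26: "Video", 18: "Best Effort", 0: "Background"}

def wmm_access_category(dscp: int) -> str:
    # Unlisted DSCP values fall back to Background here (an assumption).
    return DSCP_TO_WMM.get(dscp, "Background")

print(wmm_access_category(46))   # Voice
print(wmm_access_category(5))    # Background (unlisted value)
```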

Spanning Tree
High availability is a primary goal of any enterprise that needs to conduct business on an ongoing basis. One
method of ensuring high availability is to provide Layer 2 redundancy with multiple links. Without this
redundancy, the failure of a single link results in a network outage. However, adding multiple links also
introduces the potential for Layer 2 loops in the network. The Spanning Tree Protocol (STP) can prevent
loops in the network, regardless of the network topology. This section covers which devices
should be the root of the spanning tree topology, which version of STP to use, and the supplemental
features that should be enabled.

With many versions of spanning tree available and different network devices using different defaults, it is
important to standardize on a common version in order to have a predictable
spanning tree topology. The recommended version of spanning tree for Aruba gateways and switches
is Rapid Per-VLAN Spanning Tree (Rapid PVST+).


Spanning Tree and Root Bridge Selection

Spanning tree should be enabled on all devices as a backstop loop-prevention mechanism, regardless
of network topology, to guard against accidental loops. Gateways and access
switches should be assigned high bridge IDs to prevent them from becoming the root bridge of the network.
Layer 3 devices can be left at the default priority; it is unlikely that Layer 2 VLANs will be stretched
across them, so there is no need to tune STP on them. The root bridge should be the
device or pair of devices that are central to the network and aggregate VLANs for downstream devices.
In the campus topologies discussed in this guide, the root bridge candidates are the collapsed core,
access aggregation, and services aggregation devices.

In the three-tier wired design, the root bridges are the access aggregation switches and the services
aggregation switches. As mentioned in the campus design overview, VSX and MC-LAG allow
dual connections between the access and aggregation layers without relying on STP to block individual
links. Even though there are multiple root bridges, they do not interfere with each other because
spanning tree does not extend across the Layer 3 boundary between the devices. In this example, the
access switches are Layer 2 and need to be set with a high bridge ID. The core devices are Layer 3
switches and do not need a specific spanning tree configuration.
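The reason a high bridge ID keeps access switches from becoming root can be sketched as the election itself. STP elects the bridge with the lowest bridge ID, which is the configured priority followed by the MAC address; the device names, priorities, and MACs below are made-up examples:

```python
# Sketch: STP root-bridge election compares (priority, MAC) as one value,
# lowest wins. A low priority on the aggregation switch guarantees it
# beats any access switch regardless of MAC address.

bridges = [
    ("agg-1",    4096,  "00:01:02:aa:aa:aa"),
    ("access-1", 61440, "00:01:02:bb:bb:bb"),
    ("access-2", 61440, "00:01:02:00:00:01"),  # lower MAC, but priority decides
]

root = min(bridges, key=lambda b: (b[1], b[2]))
print("root bridge:", root[0])   # agg-1
```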

Spanning tree root bridge placement


Spanning Tree Supplemental Features

Spanning tree has several supplementary features that help keep the network stable. This section
gives a brief overview of each feature and explains why it should be enabled.

Root Guard - Blocks a port when a superior BPDU is received on it, preventing a downstream device
from taking over as root. This should be enabled on the aggregation or collapsed-core
downlinks to prevent access switches from becoming the root of the network. It should not be enabled
on the links connecting the aggregation switches to the core switch.

Admin Edge - Allows a port to transition directly to forwarding without going through the listening and
learning phases of spanning tree. This should be used only on ports connected to a single device, or
to a PC daisy-chained to a phone. It must be used with caution: spanning tree does
not run on these ports, so a loop through them is not detected by spanning tree. This
feature should be used only on client-facing ports on access switches.


BPDU Guard - Automatically blocks a port if a BPDU is received on it. This feature should typically be
enabled on admin-defined, client-facing ports on access switches to prevent BPDUs from
being received on ports that have been configured as client-facing. BPDU guard ensures that
BPDUs are not accepted on access ports, preventing loops and spoofed BPDU packets.

Loop Protect - Allows the switch to automatically block a port when a loop is detected and unblock it
automatically when the loop disappears. This feature should be turned on for all access-port interfaces
to prevent accidental loops from access port to access port. Loop Protect should not be enabled on
the uplink interfaces of access switches or in the core, aggregation, or collapsed-core layers of the
network.

BPDU Filter - Ignores BPDUs received on an interface and suppresses the sending of BPDUs from it.
The main use case for this feature is a multi-tenancy environment in which the servicing
network does not want to participate in the customer's spanning tree topology. An interface with BPDU
filter enabled still allows other switches to participate in their own spanning tree topologies. BPDU
filter is not recommended unless there is a specific reason the network infrastructure should not
participate in the spanning tree topology.

Fault Monitor - Automatically detects excessive traffic and link errors. Fault monitor can log events,
send SNMP traps, or temporarily disable a port. It is recommended to enable fault monitor in
notification mode for all recognized faults on all interfaces, but do not enable the port-disable action,
because spanning tree and loop protect are used to stop loops.

Radio Frequency Design


A site survey is an important tool for understanding the radio frequency (RF) behavior at a site and,
more importantly, where and how much interference might be encountered within the intended coverage
zones. A survey also helps determine the type of wireless network equipment, where it goes, and
how it must be installed. A good survey identifies AP mounting locations and existing
cable plants, and yields a plan for achieving the wireless coverage the network requires. Because RF interacts with
the physical world around it, and because every office environment is unique, each wireless network
has slightly different characteristics.

To provide ubiquitous multimedia coverage with uninterrupted service in a multi-floor, multi-building
campus, the correct RF elements are required to ensure a successful implementation. Planning tools
have evolved with the radio technologies and applications in use today, but familiarity with RF design
elements and mobile applications is required to produce a good plan. Completing a site
survey yields information that can be used again and again as the wireless network grows and
continues to evolve.


AirMatch

After a successful site survey helps you properly place your APs, there are additional ways to provide
long-term performance management for your wireless network. The AirMatch feature provides auto-
mated RF optimization by dynamically adapting to the ever-changing RF environment at the network
facility.

AirMatch enables the following key functions:

• Compute channel, channel width, and transmit power for APs

• Deploy channel and power plan based on configured deployment times

• Provide neighbor APs list to the Key Management service

• Provide AP data to the Live Upgrade service

In the ESP solution, the AirMatch service is moved to Central, which is capable of computing and
deploying RF allocation to APs across the entire enterprise network. The AirMatch service receives
telemetry data from APs for radio measurements, channel range, transmit power range, operational
conditions, and local RF events like radar detection or high noise.

To determine plan improvement, AirMatch computes an average radio conflict metric. For each AP
radio, the channels that overlap with its neighbors are calculated, and path-loss weighting is used to
derive the average conflict value. After AirMatch computes a new plan, its conflict value is compared
with that of the currently operating network, and an improvement percentage is calculated. If the improvement
percentage is greater than or equal to the configured quality threshold (8% by default), the
new plan is deployed at the configured “Automatically Deploy Optimization” time. An AP can still
make local channel changes in the case of poor service to a client. These localized channel changes
are made without disturbing the entire channel plan, and the information is relayed to the AirMatch service
so the engine can factor the changes into future channel plans.
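The deployment decision reduces to a simple comparison. This is an illustrative sketch of that logic, not AirMatch internals; the conflict values are made up, while the 8% threshold is the default described above:

```python
# Sketch: deploy a new channel plan only if its conflict value improves
# on the current plan by at least the quality threshold.

def should_deploy(current_conflict: float, new_conflict: float,
                  threshold_pct: float = 8.0) -> bool:
    improvement_pct = 100 * (current_conflict - new_conflict) / current_conflict
    return improvement_pct >= threshold_pct

print(should_deploy(50.0, 44.0))   # 12% improvement -> True
print(should_deploy(50.0, 48.0))   # 4% improvement  -> False
```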

It is recommended to set the AirMatch wireless coverage tuning value to Balanced. When making
changes to AirMatch, remember that channel change events are disruptive, so they should be made only when
absolutely required. AirMatch can optimize the RF environment only to a certain degree: if
the APs are not located correctly in the environment, AirMatch may not be able to overcome
a poor physical deployment.

ClientMatch

Directing clients to the best-suited APs based on dynamic environment variables is essential to
achieving the best network performance. ClientMatch continuously optimizes client association
by scanning the wireless environment and sharing information about
the clients and the APs. Based on the dynamic data obtained, clients are steered to the most
suitable AP. No software changes are required on the clients to achieve this functionality.


ClientMatch considers the client’s view of the network when making a steering decision. The client’s view
of the network is built from all the probe requests received from the same client on different
APs. This is used to build the Virtual Beacon Report, the building block of ClientMatch,
which helps clients that tend to stay associated with an AP despite deteriorating signal levels. ClientMatch
continuously monitors the client’s RSSI while it is associated with an AP and, if needed, tries to
move the client to a radio that will give it a better experience. This prevents mobile clients from
remaining associated with an AP at less-than-ideal RSSI, which can cause poor connectivity and reduce
performance for other clients associated with the AP.

ClientMatch continually monitors the RF neighborhood for each client to provide ongoing client band-
steering and load balancing, and enhanced AP reassignment for roaming mobile clients. Since the
client is not aware of AP load from a client count and channel utilization perspective, ClientMatch will
help the client make a better decision as to which AP it is connected to. ClientMatch is not a roaming
assistant, but rather a network optimization tool to enhance a client’s decision when selecting the
proper AP.

ClientMatch is Wi-Fi 6 aware; there is no special knob for this behavior because the feature is enabled by
default, though it can be disabled if required. In a mixed AP deployment, it tries to match Wi-Fi 6 clients
to Wi-Fi 6 radios.

Roaming Optimizations

Each AP deployment and its RF environment are unique, so the best practice for one environment may
not be the best for another. Still, certain settings help clients roam seamlessly and achieve
an overall better user experience. Wireless client devices are extremely sensitive to the RF environment,
and device performance can be substantially improved by following the recommendations below.

Roaming recommendations


Feature: Transmit Power
Recommendation: Leave at the default values and let AirMatch manage them.
Description: Adjusting the AP's transmit power using Aruba's AirMatch technology is recommended for optimal roaming performance. Leave the power at the default values (5 GHz: Min 18 / Max 21 dBm; 2.4 GHz: Min 6 / Max 12 dBm) and let AirMatch take care of this configuration. If manual configuration is required, note that the difference between the minimum and maximum Tx power on the same radio should not be more than 6 dBm, and the Tx power of the 5 GHz radio should be 6 dBm higher than that of the 2.4 GHz radio. Keeping the power lower on the 2.4 GHz radio makes the 2.4 GHz band less attractive to clients, influencing them toward the 5 GHz radio. Because a 2.4 GHz signal is roughly twice as strong, a lower dBm power is needed to make that radio less attractive to a client. Some dual-band-capable clients still prefer a 2.4 GHz radio over a 5 GHz radio if the available power for both is the same. The best solution drives as many clients as possible to a 5 GHz radio while allowing 2.4 GHz-only clients to maintain connectivity. Setting consistent power levels across all available radios leads to more predictable roaming behavior among a group of APs.


Feature: Channel Width
Recommendation: Let AirMatch decide the optimal channel and the channel width suited to the particular RF environment.
Description: Use 80 MHz channels with DFS channels only if no radar signal interference is detected near the facility. Enabling DFS channels could create coverage holes for clients that do not support them. Most clients do not scan DFS channels initially, which makes roaming less consistent when these channels are used.

Feature: Band Steering
Recommendation: Enable 11ax-aware ClientMatch.
Description: ClientMatch optimizes the user experience by steering clients to the best AP with respect to client capabilities and AP load.

Feature: Local Probe Request Threshold
Recommendation: 15
Description: Prevents APs from responding to a client's probe request if its signal-to-noise ratio is below 15 dB, thereby encouraging the client to roam to closer APs that do respond to its probe requests.

Fast roaming recommendations

Feature: Opportunistic Key Caching (OKC)
Recommendation: Enable
Description: Avoids a full 802.1X key exchange during roaming by caching the opportunistic key. NOTE: macOS and iOS devices do not support OKC.

Feature: 802.11r Fast BSS Transition
Recommendation: Enable
Description: 802.11r enables clients to roam faster, and recent macOS, iOS, Android, and Windows 10 clients support the protocol. Some older 802.11n devices, handheld scanners, and printers may have connectivity issues with 802.11r enabled on a WLAN. This feature is disabled by default.

Feature: 802.11k
Recommendation: Enable
Description: Enable 802.11k with the following changes: set the Beacon Report to Active Channel Report and disable the Quiet Information Element parameter in the Radio Resource Management profile.


Air Slice

Air Slice is a unique RF feature that leverages Wi-Fi 6 core constructs to optimize the user and application
experience. By combining Aruba's stateful firewall with Layer 7 Deep Packet Inspection (DPI) to identify
user roles and applications, the APs dynamically allocate the bandwidth and other RF resources
required to meet the performance metrics of business-critical applications, ensuring a better
user experience. Using Air Slice, the network administrator can further orchestrate radio resources to
work with ClientMatch, going beyond the traditional capabilities of Airtime Fairness.

Air Slice uses internal hardware queues to prioritize traffic within the same access class. Zoom video,
for example, can be prioritized over other video traffic carrying the same WMM tag, which means Air Slice
crosses the barriers of traditional WMM QoS. WMM boost is also implemented to automatically increase the
WMM priority of applications that do not have DSCP/ToS markings set, allowing even best-effort traffic to be
prioritized by Air Slice. Air Slice also benefits non-11ax clients by applying adaptive priority queuing and
WMM boost to enterprise application flows.

A growing number of enterprises are using latency-sensitive, bandwidth-demanding applications
such as augmented reality and virtual reality, or collaborative applications such as Zoom, Skype for
Business, and Slack. These applications have stringent quality-of-service requirements, and when those
requirements are not met, the result is a poor user experience. Moreover, IoT devices such as security
cameras and HVAC sensors are becoming prevalent, and their requirements are very different in
terms of sleep cycles and latency sensitivity.

Since many of these new applications have stringent QoS requirements in terms of latency, bandwidth,
and throughput, an enhanced QoS is needed. Air Slice should be enabled in order to improve the user
experience while using latency sensitive applications.

The following table lists the applications supported by default with Air Slice.

Default Applications

Wi-Fi Calling
Office 365
GoToMeeting
Cisco WebEx
Dropbox
GitHub
Zoom
Skype For Business
Slack
Amazon Web Services


Access Point Placement


Aruba recommends a site survey for all wireless network installations, whether using a virtual
planning tool or, for special cases such as manufacturing floors and warehouses, a physical site survey.
The main goal of a site survey is to determine the feasibility of building a wireless network at the site.
The survey is also used to determine the best locations for access points and other equipment, such as
antennas and cables. With that in mind, the following guidelines are a good starting point for most
office environments.

For typical wireless bandwidth capacity in an office environment, we recommend placing APs ap-
proximately every 35-50 feet (10-15 meters). Each AP provides coverage for 1500-2500 square feet
(140-232 square meters) with enough overlap for seamless client roaming. In traditional offices, the
average space per user is approximately 175-200 square feet (16-18.5 square meters), and in open-office
environments, the space per user can be as low as 75-100 square feet (7-9.3 square meters). With three
devices per user, a traditional office layout with 50-foot AP spacing, and approximately ten users per
2000 square feet, leads to an average of 30 devices connected to each AP.

The numbers work out roughly the same in higher-density, open-office layouts with 35-foot AP spacing.
Because users move around and are not evenly distributed, the higher density allows the network to
handle spikes in device count and growth in the number of wireless devices over time. In an average
2500-user network with three devices per person, this works out to 7500 total devices; at 30
devices per AP, this translates to approximately 250 APs. A key point for AP placement is that
higher-frequency RF signals cover shorter distances than lower-frequency signals, so APs should be
placed such that the 5 GHz signal covers the target area.
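The sizing arithmetic above can be captured in a few lines and re-run for other user counts:

```python
# Sketch: AP count estimate from the text's example of 2500 users,
# three devices each, and a target of roughly 30 devices per AP.

users = 2500
devices_per_user = 3
devices_per_ap = 30

total_devices = users * devices_per_user
aps_needed = total_devices // devices_per_ap
print(total_devices, aps_needed)   # 7500 250
```

Substituting a different user count or device ratio gives a quick first-pass AP estimate before the detailed survey.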

For a Wi-Fi 6 AP deployment, the minimum Received Signal Strength Indicator (RSSI), a measurement
of how well a client device can hear the signal from an access point, should be -55 dBm throughout
the coverage area. An RSSI of -55 dBm or better allows the APs to reliably provide
a Modulation and Coding Scheme (MCS) data rate of at least MCS11 in 40 MHz high-density
deployments. MCS rates dictate both the technology chosen and the transmit and receive rates for the
wireless client. Wi-Fi 6 clients with a poor MCS rate fall back to older technologies such as Wi-Fi 5 and
use lower transmit and receive rates for successful data transmission.

While deploying a Wi-Fi 6 network using dual-band APs, 2.4 GHz radios of some of the APs should be
turned off to avoid co-channel interference. Sufficient coverage should be validated using heat maps
to make sure there are no coverage holes.

Whenever possible, APs should be placed near users and devices in offices, meeting rooms, and
common areas, instead of in hallways or closets. The following figure shows a sample office layout
with APs. The staggered spacing between APs is equal in all directions and ensures suitable coverage
with seamless roaming.

Sample office AP layout

Validated Solution Guide 30


ESP Campus Design March 02, 2023

After studying an environment with the 35-50-foot (10-15 meter) rule in mind, make sure there is enough
capacity for the number of users. In an average office environment with APs every 35-50 feet (10-15
meters), the 30 devices per AP average will easily be satisfied. If there are high-density areas such as
large conference rooms, cafeterias, or auditoriums, additional APs may be needed.

AP Mounting Recommendations

Indoor APs are typically deployed in ceiling-mount or wall-mount configurations. Aruba does not
recommend desk-mounted or cubicle-mounted deployments. These locations typically do not allow a
clear line of sight throughout the coverage area, which in turn can reduce overall WLAN performance.
With the exception of the hospitality-style AP, APs with internal antennas should not be wall mounted
if it can be avoided. Wall-mounted APs should have external antennas, and Aruba has a selection of
antenna options available. There are some cases where wall mounting an internal-antenna AP is valid;
however, this kind of design should include professional services to validate proper roaming and
coverage patterns based on the AP model selected.

Channel Planning
The Aruba AirMatch software automates channel and power assignment for even the most challenging
wireless installations. If you want to plan your channels manually following the details in this section,
contact an Aruba or partner systems engineer or consulting systems engineer to verify the design.
The following figure shows a typical 2.4 GHz channel layout with each color representing one of
the three available non-overlapping channels of 1, 6, and 11 for North America in this band. Reused
channels are separated as much as possible, but with only three available channels, there will be
some co-channel interference caused by two radios being on the same channel. Aruba recommends
using only these three channels for 2.4 GHz installations in order to avoid the more serious problem
of adjacent channel interference caused by radios on overlapping channels or adjacent channels
with radios too close together. A site survey could further optimize this type of design with a custom
power level, channel selection, and enabling and disabling 2.4 GHz radios for optimal coverage and to
minimize interference.

Channel layout for 2.4 GHz band with three unique channels

The 5 GHz band offers higher performance and suffers from less external interference than the 2.4
GHz band. It also has many more channels available, so it is easier to avoid co-channel interference
and adjacent channel interference. With more channels in the 5 GHz band, Aruba recommends that all
capable clients connect on 5 GHz and that older clients be converted from 2.4 GHz to 5 GHz when
possible. As with the 2.4 GHz spectrum, the radio management software handles the automatic
channel selection for the 5 GHz spectrum.

Channel Width

An important decision for 5 GHz deployments is what channel width to use. Wider channel widths
mean higher throughput for individual clients but fewer non-overlapping channels, while narrower
channel widths result in less available bandwidth per client but more available channels.

In most office environments, 40 MHz-wide channels are recommended because they provide a good
balance of performance and available channels. If there is a high-density open-office environment or a
loss of channels due to DFS interference, it is better to start with 20 MHz channels. Dynamic Frequency
Selection (DFS) is a Wi-Fi function that enables WLANs to use 5 GHz frequencies that are generally
reserved for radars.

Due to the high number of APs and increasing number of connected devices, there are certain office
environments that would benefit from 80 MHz-wide channels and the much wider 160 MHz channels.
The wider channels will make sense once there are enough Wi-Fi 6 clients to take advantage of the new
features outlined in the Wi-Fi 6 Enhancement section below. The following figure highlights the 40
MHz channel allocation for the 5 GHz band and shows the DFS channels.

Channel allocation for the 5 GHz band

Depending on country-specific or region-specific restrictions, some of the UNII-2/UNII-2 Extended
Dynamic Frequency Selection (DFS) channels may not be available. If an AP detects radar transmissions
on a channel, the AP stops transmitting on that channel for a time and moves to another channel. It is
recommended to use AirMatch for channel allocation because it detects interference and adapts the
channel plan to avoid active DFS channels.

In the past, it was common to disable all DFS channels, but today most organizations attempt to use
the channels available in their country. In some areas, DFS channels overlap with radar systems. If
specific DFS channels regularly detect radar in your environment, we recommend removing those
channels from your valid-channel plan to prevent coverage problems.

Using 40 MHz-wide channels, there are up to 12 channels available. Depending on local regulations
and interference from radar or other outside sources, the total number of usable channels will vary
from location to location. To get maximum performance, it is recommended to allow AirMatch to
automatically determine the channel width whenever possible. It is also recommended not to disable
any channel widths in case AirMatch determines they will work in the environment.
NOTE:
You can find a list of the 5 GHz channels available in different countries at the following: 5 GHz
WLAN Channels by Country

Power Settings

The optimum power settings vary based on the physical environment and the initial settings should
always follow the recommendations from an early site survey. For the long term, Aruba recommends
using AirMatch to decide the optimal transmit power values for each AP. AirMatch uses telemetry data
from the entire network to compute transmit power values unique for the particular deployment.
Please refer to the AirMatch section for more details.

When not using AirMatch, the following guidelines for a typical wireless design are recommended:

• In the 2.4 GHz band, set the minimum power threshold to 6 dBm and the maximum power to 12
dBm

• In the 5 GHz band, set the minimum power threshold to 18 dBm and the maximum to 21 dBm

• Do not exceed a power level difference of 6 dBm between the minimum and maximum settings
on all radio bands

• The minimum power level differences between equal coverage level 2.4 GHz radios and 5 GHz
radios should be 6 dBm
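
As an illustration, these guidelines might map to ArubaOS radio profile settings similar to the following sketch. This is a hedged example only: the profile names are placeholders, and exact parameter names and ranges should be verified against the AOS version in use.

```
! Illustrative radio profiles reflecting the power guidance above.
! Profile names are placeholders; verify parameters for your AOS release.
rf dot11g-radio-profile "office-2ghz"
    eirp-min 6
    eirp-max 12

rf dot11a-radio-profile "office-5ghz"
    eirp-min 18
    eirp-max 21
```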

Channel Planning Summary

The number of APs and their exact placement comes down to performance versus client density. In a
high-density deployment, better performance is possible using a larger number of lower-bandwidth
channels rather than fewer higher-bandwidth channels. One hundred wireless devices get better
performance split between two radios on 20 MHz channels than they do on one radio using a 40 MHz
channel. This is because the more channels you use, the better overall throughput is for a higher
number of devices. As mentioned previously, a typical Aruba wireless installation uses the AirMatch
software and AI Insights running in the cloud for RF channel planning and optimization.

Proxy ARP on Gateway


Enabling this feature instructs the Gateway to respond to ARP requests on behalf of a client in the user
table. When enabled on a VLAN with an IP address, the Gateway provides its MAC address in the Proxy ARP. If
the VLAN does not have an IP address the Gateway will supply the MAC address of the client in the
user table. This feature is off by default and should only be changed to address specific deployment
scenarios where the Gateway is a transparent hop to another device, for example when using Aruba
VIA VPN.

NAT/Routing on Gateway
Campus installations of a Gateway should always be Layer 2 and the Gateway should not perform
Layer 3 operations. The client’s default gateway should be another device like a router or switch and
the Layer 2 network should be dedicated for the clients attached to the Gateway. The broadcast and
multicast management features of the Gateway allow large subnets to be used without issue. It
is recommended that the Layer 2 network be as large as supported by the Gateway and the supporting
switching infrastructure. Table sizes, ARP learning rates, physical layer rates, and redundancy all are
factors to account for in the switching infrastructure.
NOTE:
Firewall policies must be used when routing is enabled on the Gateway to control inter-VLAN
traffic and determine whether traffic should be routed or NAT’d.

Network Resiliency
Aruba’s recommended network design is a highly available, fault tolerant network. There are features
that should be enabled from a software perspective to ensure the network is prepared for interruptions.
This section provides general guidelines for software features that provide fault tolerance and
allow upgrades with minimal service impact.

Wireless Resiliency Technologies

For campus wireless, Aruba recommends either APs on their own or APs with Gateways. Both designs
include features that can be enabled to ensure that the network is highly resilient.

Authentication State/Key Sync

Authentication keys are synchronized across APs by the Key Management Service (KMS) in Central.
This allows a client to roam between APs without re-authenticating or rekeying their encrypted traffic.
This not only decreases the load on the RADIUS servers but also speeds up the roaming process for a seamless
user experience. Key synchronization and management are automatically handled by the APs and
Central, so no additional user configuration is required.

Firewall State Sync

Traffic from a client can be synchronized across a primary and secondary Gateway when using a cluster.
This allows the client to fail over seamlessly from its primary Gateway to a secondary Gateway. The
system synchronizes encryption keys between access points so that when a client moves to its secondary
Gateway, the client does not need to re-authenticate or rekey its encrypted traffic. These two operations
working together make it completely transparent to the client when moving between Gateways or APs. This
is a key component to Aruba’s high availability design and Live Upgrade features. When using a bridged
SSID the firewall state is synced upon a roaming event from a client for a seamless roaming event with
no traffic disruption.

Cluster Design Failure Domain

If more than one Gateway fails, there is a possibility of an outage for some clients. When a Gateway fails,
client assignments are recalculated, and the cluster is rebalanced. This does not happen immediately,
and there is no set amount of time because it depends on the number of users. If a second Gateway fails
before the rebalancing can occur, the client is disassociated and reconnects to an available Gateway.
This will look like a dirty roam, but the client will reestablish a
connection as long as the Gateways are not at capacity during a failure. To mitigate a multiple Gateway
failure, a resilient deployment should minimize the common points of failure. Using disparate line
cards or switches, multiple uplinks spanning line cards or switches, port configuration validation, and
multiple Gateways are all foundational requirements to limit the failure domain.

Switching Resiliency Technologies

When it comes to the campus switches, Aruba recommends either a two-tier LAN with collapsed core
or a three-tier LAN with a routed core. In both of these designs there are common features that can
be enabled to ensure that the network is highly resilient. The two-tier campus is on the left and the
three-tier campus is on the right.

Two-tier and three-tier wired

Virtual Switching Extension

VSX is a virtualization technology used to logically combine two AOS-CX switches into a single logical
device. From a management/control plane perspective, each switch is independent of the other, while
the Layer 2 switch ports are treated like a single logical switch. VSX is supported on the 6400, 8320,
8325, and 8400 models, but it is not supported on the Aruba CX 6300, 6200, or 6100 models. VSX should
only be enabled if the devices are positioned in a collapsed core or aggregation layer. It is important
to note that a VSX pair cannot mix different models, meaning an 8325 cannot be paired with an
8320. The supported combinations are:

• Aruba CX 6400: All combinations within the 6400 family are supported
• Aruba CX 8320: All combinations within the 8320 family are supported
• Aruba CX 8325: All combinations within the 8325 family are supported
• Aruba CX 8400: All combinations within the 8400 family are supported

VSX pair placement

VSX pairs stay in sync using synchronization software and hardware tables shared over the inter-switch
link (ISL). The ISL is a standard link aggregation group (LAG) designated to run the ISL protocol between
the paired devices. VSX allows the two devices to appear as a single device using an Active Gateway
feature which is a shared IP and MAC address. Each VSX pair appears as a single Layer 2 switch to a
common downstream access switch utilizing a specialized LAG called a Multi-Chassis Link Aggregation
(MC-LAG). MC-LAG allows the aggregation layer switch pair to appear as a single device to other
devices in the network, such as the dual-connected access layer switches. MC-LAG allows all uplinks
between adjacent switches to be active and passing traffic for higher capacity and availability, as shown
in the right side of the following figure.

The access switch uses a standard LAG connection, and from the access switch perspective, the VSX
pair appears as a single upstream switch. This minimizes the fault domains for links by separating
the connections between the two VSX paired switches. This also minimizes the service impact with
the live upgrade feature because each device has its own independent control plane and link to the
downstream access devices.

Traditional spanning tree vs VSX

NOTE:
When using LAG or MC-LAG, STP is not required, but should be enabled as an additional enhanced
loop protection security mechanism.

LACP - The link aggregation control protocol (LACP) combines two or more physical ports into a single
trunk interface for redundancy and increased capacity.

LACP Fallback - LAGs with LACP Fallback enabled allow an active LACP interface to establish a connec-
tion with its peer before it receives LACP PDUs. This feature is useful for access switches using Zero
Touch Provisioning (ZTP) connecting to LACP configured aggregation switches.

Inter-switch Link - The best practice for configuring the ISL LAG is to permit all VLANs. Specifying a
restrictive list of VLANs is valid if the network administrator wants more control.

MC-LAG - These LAGs should be configured with the specific VLANs and use LACP active mode. MC-LAGs
should NOT be configured to permit all VLANs.
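
For example, a downstream MC-LAG on each VSX member might look like the following AOS-CX sketch. The LAG number, VLAN list, and port are illustrative only; LACP fallback is shown for the ZTP access-switch case described above.

```
! Illustrative MC-LAG toward an access switch (values are examples).
interface lag 10 multi-chassis
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed 10,20,30
    lacp mode active
    lacp fallback
interface 1/1/1
    no shutdown
    lag 10
```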

VSX Keepalive - The VSX keepalive is a UDP probe which sends hellos between the two VSX nodes and
is used to detect a split-brain situation. The keepalives should be enabled with a direct IP connection
between the VSX pairs in a designated VRF domain. The VSX keepalive is not yet supported over the
Out-of-Band Management (OOBM) port.
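
A minimal AOS-CX sketch of the ISL and keepalive configuration on the primary VSX member might look like the following. The LAG number, ports, IP addresses, and VRF name are examples; the keepalive source IP must exist on an interface in the designated VRF.

```
! Illustrative ISL and keepalive on the VSX primary (values are examples).
interface lag 256
    no shutdown
    no routing
    vlan trunk native 1
    vlan trunk allowed all
    lacp mode active
interface 1/1/31
    no shutdown
    lag 256
vsx
    inter-switch-link lag 256
    role primary
    keepalive peer 192.168.255.2 source 192.168.255.1 vrf KEEPALIVE
```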

Active-Gateway - This is the default-gateway for endpoints within the subnet and it needs to be
configured on both VSX primary and secondary switches. Both devices must also have the same virtual
MAC address configured from the Private MAC address spaces listed below. There are four ranges
reserved for private use.

• x2-xx-xx-xx-xx-xx
• x6-xx-xx-xx-xx-xx
• xA-xx-xx-xx-xx-xx
• xE-xx-xx-xx-xx-xx
NOTE:
x is any Hexadecimal value.
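
For example, both VSX members could share a virtual gateway as in this sketch. The addresses are examples; the virtual MAC uses the x2 private range above, and only the physical interface IP differs between the two members.

```
! Illustrative Active-Gateway on an SVI; configure on both VSX members
! (only the physical ip address differs per member).
interface vlan 10
    ip address 10.1.10.2/24
    active-gateway ip mac 12:00:00:00:01:00
    active-gateway ip 10.1.10.1
```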

PIM Dual - If the network is multicast enabled, the default PIM Designated Router is the VSX primary
switch. In order to avoid a long convergence time in case of a VSX primary failure, the VSX secondary
can also establish PIM peering. Aruba recommends configuring PIM active-active for each VSX pair.

VSX and Rapid PVST - Aruba recommends configuring VSX and spanning tree per the design guidelines
outlined in the document below.
NOTE:
Certain VSX use cases fall outside of the design guidance in this document, but they are covered
in detail in the VSX Configuration Best Practices guide. The best practices guide includes in-
depth information about spanning tree interactions, traffic flows, active forwarding and the live
upgrade process.

Virtual Switching Framework

Stacking allows multiple access switches to be connected to each other and behave like a single switch.
Stacking increases the port density by combining multiple physical devices into one virtual switch,
allowing management and configuration from one IP address. This reduces the total number of
managed devices while better utilizing the port capacity in an access wiring closet. The members of a
stack share the uplink ports, which provides additional bandwidth and redundancy.

AOS-CX access switches provide front-plane stacking using the Virtual Switching Framework (VSF)
feature, utilizing two of the four front-panel SFP ports operating at 10G, 25G, or 50G speeds. VSF
combines the control and management planes of the switches in a VSF stack, which allows for simpler
management and redundancy in the access closet. VSF is supported on the 6300 and 6200 models of
switches.

VSF supports up to 10 members on a 6300 and up to 8 members on a 6200. Aruba recommends a ring
topology for the stacked switches. A ring topology, which can be used for 2 switches all the way up to
10 switches, allows for fault tolerance in the case of a link failure because the devices can still reach the
commander or standby switch using the secondary path. The commander and standby switch should
have separate connections to the pair of upstream aggregation switches. If the commander fails, the
standby switch can still forward traffic upstream minimizing the failure domain to just the commander
switch. The recommended interface for switch stacking links is a 50G Direct Attach Cable (DAC) which
will allow enough bandwidth for traffic across members.

There are three stacking-device roles:

• Commander—Conducts overall management of the stack and manages the forwarding databases,
synchronizing them with the standby.
• Standby—Provides redundancy for the stack and takes over stack-management operations if the
commander becomes unavailable or if an administrator forces a commander failover.
• Members—Are not part of the overall stack management; however, they must manage their local
subsystems and ports to operate correctly as part of the stack. The commander and standby are
also responsible for their own local subsystems and ports.

VSF connections

In order to mitigate the effects of a VSF split stack, a split-detection mechanism known as Multi-Active
Detection (MAD) must be enabled on the commander and standby members of the stack. This is done
using a connection between the OOBM ports on the commander and standby members to detect when
a split has occurred. Aruba recommends that the OOBM ports be directly connected using an Ethernet
cable.
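
A minimal AOS-CX VSF sketch for a two-member ring with OOBM split detection might look like the following. Member numbers and stacking ports are examples and depend on the model and cabling.

```
! Illustrative two-member VSF stack (ports are examples).
vsf member 1
    link 1 1/1/25
    link 2 1/1/26
vsf member 2
    link 1 2/1/25
    link 2 2/1/26
! Enable Multi-Active Detection over the directly connected OOBM ports.
vsf split-detect mgmt
```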

VSF OOBM MAD and links

Aruba ESP Campus LAN Design


The Aruba CX switching portfolio provides a range of products for use in core, aggregation, and access
layers of the campus. Aruba switches are built using a cloud-native operating system called AOS-CX. To
achieve increased network resiliency and facilitate automation, AOS-CX implements a database-centric
operational model.

With features such as always-on PoE, Virtual Switching Framework (VSF) for access stacking, and Virtual
Switching Extension (VSX) for core and aggregation redundancy, organizations can rely on Aruba CX
switches to satisfy mission-critical requirements throughout the campus.

Aruba ESP campus LAN design focuses on the two most common topologies:

• Two-tier with collapsed core.


• Three-tier using aggregation.

Redundant, routed links are the preferred uplink configuration between switches. Though most
networks deployed today are configured for layer 2 access and routing at the core or aggregation layer,
organizations looking ahead to software-defined overlay topologies should start moving toward layer
3 access with routing at the edge to increase underlay network resilience.

The Aruba ESP campus LAN design may contain one or more of the following elements:

• Aruba Central
• Aruba ClearPass
• Aruba CX 8xxx Ethernet switches
• Aruba CX 6xxx Ethernet switches.

Two-Tier Campus LAN


The two-tier wired architecture includes access switches or switch stacks connected to a dual-switch
collapsed core. The access switches provide layer 2 services to connected endpoints and connect to
core switches providing both layer 2 and layer 3 services.

The two-tier design is well suited for small buildings with few wiring closets and access switches. It
also works well in larger environments when the fiber cables from each wiring closet are homed in a
single location. The following figure illustrates this design.

Two-Tier Collapsed Core

Collapsed Core Layer

Use Aruba CX switches that support VSX redundancy to provide access switches and other devices the
option of connecting over a redundant, MC-LAG layer 2 connection.

A foundation for establishing a network overlay can be built within the two-tier topology by configuring
OSPF peering between each access and core switch and adding a loopback interface to each access
switch in the OSPF backbone area.

Access Layer

Use Aruba CX switches that support VSF stacking for simplified growth in the network closet. In layer 2
access designs, use uplink ports on different VSF stack members, one into each MC-LAG configured
aggregation switch. This ensures efficient, fault-tolerant layer 2 bandwidth up from the access layer.

Enable Aruba ESP Colorless Ports by configuring port policies to allow 802.1x dynamic authentication
and network configuration.

Enable layer 2 protection mechanisms such as Loop Protection, BPDU Filter, Root Guard, and BPDU
Protection.
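
On an access edge port, these protections might be applied as in this AOS-CX sketch. The port number is an example, and root guard would instead be applied on ports facing other switches rather than on edge ports.

```
! Illustrative edge-port protection (port number is an example).
interface 1/1/1
    no shutdown
    loop-protect
    spanning-tree port-type admin-edge
    spanning-tree bpdu-guard
```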

To simplify the network as much as possible, all routing is performed on the core devices.

Three-Tier Campus LAN


Organizations move to a three-tier network design for the following primary reasons:

• Network growth producing cross-campus traffic beyond the capacity of a single collapsed core.

• Network growth beyond a small number of access aggregation points.


• Network growth beyond a few building aggregation points.

The three-tier campus design is recommended for large networks with thousands of users or where the
physical wiring layout is not suited for a two-tier design. Layer 3 services for wired network hosts are
moved from the core to VSX pairs of aggregation switches. A pair of core switches joins the aggregation
switches together using high-speed layer 3 links and multiple equal-cost multipath (ECMP). Additional
capacity between pairs of aggregation switches is added by increasing the number of links between the
core and aggregation switches. The access switches remain layer 2 only. The figure below illustrates
this design.

Three-Tier Redundant Core

When connecting a gateway cluster or other layer 2 device directly to the core, use Aruba CX switches
that support VSX in order to take advantage of MC-LAG.

When high volumes of wireless endpoints are connected to a gateway cluster, deploy a services ag-
gregation block off the core to isolate the unique demands of bridging wireless to wired traffic. This
offloads layer 2 connections from core switches allowing a layer-3-only standalone core, increasing
resiliency at the most critical point in the network. The figure below illustrates this design.

Three-Tier Standalone Core

Core Layer

Use Aruba CX switches with sufficient ports of appropriate speeds to service the full bandwidth demands
of the campus aggregation layer.

The core layer of the LAN is the most critical part of the campus network. It reduces network complexity
by carrying only routed traffic. The standalone core uses separate switches, acting independently of one
another, with dual ECMP connections to all aggregation switches.
ECMP is an advanced routing strategy in which next-hop packet forwarding occurs over multiple paths
with identical routing metric calculations.

When considering core topologies, it is important to use point-to-point links because link up/down
changes propagate almost immediately to the underlying protocols. Topologies with redundant ECMP
links are the most deterministic and convergence is measured in milliseconds.

Aggregation Layer

Use Aruba CX switches that support VSX redundancy with sufficient ports of appropriate speeds to
service the full bandwidth demands of the campus access layer. In layer 2 access designs, use VSX
MC-LAG to provide efficient layer 2 connectivity to the devices.

In most designs, the aggregation layer of the LAN limits cable distance to access devices, isolates the
network core from layer 2 traffic, and provides layer 3 services to access VLANs. In campus designs
using layer 3 access, routing moves to the edge switches and the aggregation devices fulfill a simpler,
transit-only function.

Consider running OSPF to provide loopback reachability to access layer devices even in networks
relying on layer 2 access. Enabling loopback reachability to all devices in the campus LAN ensures the
ability to leverage a growing range of options for network automation and orchestration.

Access Layer

Use Aruba CX switches that support VSF stacking for simplified growth in the network closet. In layer 2
access designs, use uplink ports on different VSF stack members, one into each MC-LAG-configured
aggregation switch. This ensures efficient, fault-tolerant layer 2 bandwidth up from the access layer.

Enable Aruba ESP Colorless Ports by configuring port policies to allow 802.1x dynamic authentication
and network configuration.

Enable layer 2 protection mechanisms such as Loop Protection, BPDU Filter, Root Guard, and BPDU
Protection.

To reduce load on the network core, all routing is performed on the aggregation devices.

OSPF Routing
Aruba ESP best practice uses OSPF for its simplicity and ease of configuration. OSPF is a dynamic,
link-state, standards-based routing protocol commonly deployed in campus networks. OSPF provides
fast convergence and excellent scalability, making it a good choice for large networks because it can
grow with the network without requiring redesign.

OSPF defines areas to limit routing advertisements and to allow for route summarization. The Aruba
ESP campus design uses a single area for the campus LAN. Multi-area, backbone designs are considered
when connecting multiple campus or WAN topologies. OSPF is often used to exchange routes between
the campus LAN and a WAN gateway or a DMZ firewall.

The ESP underlay best practice configuration uses OSPF point-to-point links between aggregation
and core devices. Interfaces on aggregation switches providing layer 3 services to downstream hosts
are configured as members of the OSPF domain. Configure the OSPF router for passive-interface
default to prevent unintended adjacencies from forming with devices plugged into a layer 2 access
port. When an OSPF neighbor is expected on a port, disable passive operation.

When configuring access switches, best practice is to configure an IP address in the management VLAN
and to enable OSPF on that VLAN IP interface. Adding /32 loopback interfaces to OSPF also lays the
foundation for a high-reliability management network.
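
Putting these practices together, an AOS-CX sketch for a routed point-to-point uplink, a loopback, and passive-by-default OSPF might look like the following. The process ID, addresses, and interface numbers are examples.

```
! Illustrative OSPF underlay configuration (values are examples).
router ospf 1
    router-id 10.250.0.1
    passive-interface default
    area 0.0.0.0
interface loopback 0
    ip address 10.250.0.1/32
    ip ospf 1 area 0.0.0.0
! Routed point-to-point uplink; disable passive where a neighbor is expected.
interface 1/1/49
    no shutdown
    ip address 10.255.0.0/31
    ip ospf 1 area 0.0.0.0
    ip ospf network point-to-point
    no ip ospf passive
```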

The diagram below illustrates the OSPF fundamentals of a three-tier campus LAN.

OSPF in Three-Tier Wired

IP Multicast
Aruba Networks ESP uses Protocol Independent Multicast — Sparse Mode (PIM-SM) to route multicast
traffic on the network. Additional mechanisms required to route multicast traffic are:

• Bootstrap router protocol (BSR)
• The rendezvous point (RP)
• Multicast Source Discovery Protocol (MSDP)
• Internet Group Management Protocol (IGMP)

Configure PIM-SM on all routed links to enable multicast traffic on the network. PIM-SM uses reverse
path forwarding (RPF) based on the active routing table to find the best path toward an RP or multicast
source.

The BSR protocol is enabled on all routers in a PIM multicast domain to share RP information dynami-
cally. After a single router is elected as the BSR, it advertises RP information to all participating routers,
freeing the administrator from having to configure an RP address on each network router. After the
BSR is elected, all routers with RP candidate interfaces send the candidate RP IP address to the BSR.
The active RP is selected from the list of candidates.

The RP contains information on active multicast sources in a PIM multicast domain and is the root of
the Rendezvous Point Tree (RPT). Anycast networking enables multiple RPs to be active at the same
time for redundancy and traffic flow optimization. Configure core switches to announce an anycast
loopback IP address as a candidate-RP to be selected by the BSR.

MSDP facilitates active-active RP anycast redundancy by sharing multicast source information between
RPs, ensuring that all multicast sources in a PIM multicast domain are known by the full set of anycast
RPs. MSDP is enabled on campus core switches.

Enable IGMP on aggregation switch layer 3 interfaces with downstream clients. If a wireless gateway or
another layer 2 device is connected directly to the core switches, enable IGMP on layer 3 interfaces
facing those devices. Enable IGMP snooping on access layer switches.
NOTE:
IGMP timers must match across all switches on the network, including switches from other
vendors.
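
On a core switch, the pieces above might combine as in this hedged AOS-CX sketch. The loopback, group prefix, and interfaces are examples; verify the exact PIM, BSR, and IGMP syntax for your platform and software release.

```
! Illustrative PIM-SM, BSR/RP candidate, and IGMP configuration
! (interface names and prefixes are examples).
router pim
    enable
    bsr-candidate source-ip-interface loopback1
    rp-candidate source-ip-interface loopback1 group-prefix 224.0.0.0/4
interface 1/1/49
    ip pim-sparse enable
interface vlan 10
    ip igmp enable
```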

Overlay Networks
Overlay networks provide a mechanism to deploy flexible topologies that meet the demands of ever-
changing endpoints and applications. By fully decoupling user data traffic and associated policy from
the physical topology of the network, overlays enable on-demand deployment of L2 or L3 services.
Overlays also make it simpler to carry device or user role information across the network without
requiring all devices in the path to understand or manage the roles. Overlay networks are virtual
networks, built on top of an underlay network composed of physical infrastructure. The underlay
should be designed for stability, scalability, and predictability. Technologies such as GRE and IPsec
have been used to create overlays in the campus and WAN space for many years. Virtual Extensible
LAN (VXLAN) is now an option to create distributed overlay networks in the campus.
Aruba provides the flexibility to choose between centralized overlays or distributed overlays to address a
range of traffic engineering and policy enforcement requirements. User Based Tunneling is a centralized
overlay architecture that provides simplified operations and advanced security features on gateway
devices. EVPN-VXLAN is a distributed overlay architecture that enables dynamic tunnels between
switches anywhere on a campus, providing consistent and continuous policy enforcement across the
network. Both overlay models support the “Colorless Ports” feature, which enables automated client
onboarding and access control for ease of operations.

Centralized Overlay Summary

User-Based Tunneling (UBT) is a centralized overlay that enables administrators to tunnel specified
user traffic to a gateway cluster to enforce policy using services such as firewalling, DPI, application
visibility, and bandwidth control. UBT selectively tunnels traffic based on a user or device role. The
policies associated with each client usually are assigned through a RADIUS server such as ClearPass
Policy Manager.


Distributed Overlay Summary

Distributed overlays are an evolution of the traditional campus network design. They are built using
EVPN-VXLAN on highly available underlays and support full policy-based micro-segmentation, based
on global roles, across the entire network infrastructure. Role-based policies abstract policy from the
underlying network and enable flexible, simplified policy definition and enforcement. The overlay is
fully automated, providing single-pane-of-glass management.


A Distributed Fabric is formed by assigning personas to various devices in the network. The list below
describes the purpose of each persona.

• Route Reflector (RR) - Core switches are configured as BGP route reflectors (the RR persona)
to share EVPN reachability information. This reduces the number of peering sessions required
across the fabric.
• Stub - Wireless aggregation switches are configured with the stub persona to extend policy
enforcement to wireless gateways, which only support static VXLAN tunnels. The aggregation
switches carry GPID values from the campus fabric VXLAN tunnels forward into static VXLAN
tunnels configured between the aggregation switches and the gateways.
• Border - Internet edge switches use the border persona to provide connectivity between the
fabric and services outside the fabric.
• Edge - The edge persona is applied to access switches that provide primary VXLAN tunnel in-
gress/egress and policy enforcement for endpoint traffic into or out of the fabric.
• Intermediate Devices - Wired aggregation switches are underlay devices with no fabric persona
assigned. They do not run a VTEP and must support jumbo frames.

Distributed Overlay
Distributed overlay networks in the Aruba ESP Campus are created using EVPN-VXLAN. This suite of
protocols creates a dynamically formed network fabric that extends layer 2 connectivity over an existing
physical network and layer 3 underlay. It is an open standards suite to create more agile, secure, and
scalable networks in campuses and data centers. EVPN-VXLAN consists of:

• Ethernet VPN (EVPN), a BGP-driven control plane for overlays that provides virtual connectivity
between different layer 2/3 domains over an IP or MPLS network.


• Virtual extensible LANs (VXLAN), a common network virtualization tunneling protocol that expands
  the number of layer 2 broadcast domains to more than 16 million, compared to the 4,094 available
  using traditional VLANs.

Aruba ESP implements EVPN-VXLAN overlays on an IP underlay using redundant, layer 3 links for
high-speed resiliency and maximum bandwidth utilization.

Figure 3: Distributed Overlay

EVPN-VXLAN Benefits

• Uniform bridging and routing across a diverse campus topology:

– Efficient layer 2 extension across layer 3 boundaries.

– Anycast gateways to ensure consistent first-hop routing services across the campus.

• End-to-end segmentation using VXLAN-Group Based Policies (VXLAN-GBP) provides the ability
to propagate policy anywhere in the campus.

• Because VXLAN is transported across any IP network that supports jumbo frames, it needs to be
  deployed only on the edge devices of the fabric.


EVPN-VXLAN Control Plane

In Aruba ESP, the EVPN-VXLAN control plane is Multi-Protocol BGP (MP-BGP) which communicates
MAC addresses, MAC/IP bindings, and IP Prefixes to ensure endpoint reachability across the fabric.
This approach is far superior to both inefficient flood-and-learn communication on the data plane and
centralized control plane approaches with inherent scaling limitations.

The use of MP-BGP with EVPN address families between virtual tunnel endpoints (VTEPs) provides
a standards-based, highly scalable control plane for sharing endpoint reachability information with
native support for multi-tenancy. For many years, service providers have used MP-BGP to offer secure
layer 2 and layer 3 VPN services at a very large scale. An iBGP design with route reflectors simplifies
design by eliminating the need for a full mesh of BGP peerings across the full set of switches containing
VTEPs. BGP peering is required only between VTEP terminating switches (access, stub, and service
aggregation) and the core.

BGP control plane constructs include:

• Address Family (AF) - MP-BGP enables the exchange of network reachability information for
multiple address types by categorizing them into address families (IPv4, IPv6, L3VPN, etc.). The
layer 2 VPN address family (AFI=25) and the EVPN subsequent address family (SAFI=70) advertise
IP and MAC address information between MP-BGP speakers. The EVPN address family contains
reachability information for establishing VXLAN tunnels between VTEPs.
• Route Distinguisher (RD) - A route distinguisher enables MP-BGP to carry overlapping layer
3 and layer 2 addresses within the same address family by prepending a unique value to the
original address. The RD is only a number with no inherent meaningful properties. It does
not associate an address with a route or bridge table. The RD value supports multi-tenancy
by ensuring that a route announced for the same address range via two different VRFs can be
advertised in the same MP-BGP address family.
• Route Target (RT) - Route targets are MP-BGP extended communities used to associate an
address with a route or bridge table. In an EVPN-VXLAN network, importing and exporting a
common VRF route target into the MP-BGP EVPN address family establishes layer 3 reachability
for a set of VRFs defined across a number of VTEPs. Layer 2 reachability is shared across a
distributed set of L2 VNIs by importing and exporting a common route target in the L2 VNI
definition. Additionally, layer 3 routes can be leaked between VRFs using the IPv4 address family
by importing route targets into one VRF that are exported by other VRFs.
• Route Reflector (RR) - To optimize the process of sharing reachability information between
VTEPs, the use of route reflectors at the core enables simplified iBGP peering. This design allows
all VTEPs to have the same iBGP peering configuration, eliminating the need for a full mesh of
iBGP neighbors.
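
A minimal sketch of how the RD and RT constructs interact: the RD keeps overlapping tenant prefixes distinct within one MP-BGP table, while route targets control which VRFs import each route. The tenant names, RD/RT values, and next hops below are invented for illustration.

```python
# Two tenants (VRFs) use the same subnet 10.0.10.0/24. Without an RD,
# the second advertisement would replace the first. Prepending a
# per-VRF route distinguisher keeps both routes distinct in the same
# MP-BGP address family.

bgp_table = {}

def advertise(rd: str, prefix: str, route_targets: set, next_hop: str):
    # The (RD, prefix) pair is the unique key, so overlapping tenant
    # prefixes coexist in one address family.
    bgp_table[(rd, prefix)] = {"rt": route_targets, "next_hop": next_hop}

advertise("65001:10", "10.0.10.0/24", {"65001:10"}, "vtep-access-1")
advertise("65001:20", "10.0.10.0/24", {"65001:20"}, "vtep-access-2")

def vrf_import(import_rt: str) -> dict:
    # A VRF imports only the routes carrying a route target it imports.
    return {k: v for k, v in bgp_table.items() if import_rt in v["rt"]}

print(len(bgp_table))          # 2: both tenant routes coexist
print(vrf_import("65001:20"))  # only the second tenant's route
```
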

The Aruba ESP Campus design uses two layer 3 connected core switches as iBGP route reflectors.
The number of destination prefixes and overlay networks consume physical resources in the form of
forwarding tables and should be considered when designing the network. Refer to the “Reference
Architecture” section for hardware guidelines when scaling the fabric underlay design.


VXLAN Network Model

VXLAN encapsulates layer 2 Ethernet frames in layer 3 UDP packets. These VXLAN tunnels provide both
layer-2 and layer-3 virtualized network services to connected endpoints. A VTEP is the function within
a switch that handles the origination or termination of VXLAN tunnels. Similar to a traditional VLAN ID,
a VXLAN Network Identifier (VNI) identifies an isolated layer-2 segment in a VXLAN overlay topology.
Symmetric Integrated Routing and Bridging (IRB) enables the overlay networks to support contiguous
layer-2 forwarding and layer-3 routing across leaf nodes.
NOTE:
Configure jumbo frames on all underlay links in the fabric to ensure the accurate transport of
additional encapsulation.

VXLAN networks comprise two key virtual network constructs: Layer 2 VNI and Layer 3 VNI. The rela-
tionship between an L2VNI, L3VNI, and VRF is described below:

• L2VNIs are analogous to VLANs and, on AOS-CX, use the configuration of a VLAN. An L2VNI
  bridges layer 2 traffic between endpoints attached to different VTEPs.
• L3VNIs are analogous to VRFs and route traffic between the subnets of L2VNIs across VTEPs.

– Multiple L2VNIs can exist within a single VRF.

VXLAN Encapsulation

An overlay network is implemented using Virtual Extensible LAN (VXLAN) tunnels that provide both
layer 2 and layer 3 virtualized network services to endpoints connected to the campus. The VXLAN
Network Identifier (VNI) associates tunneled traffic with the correct corresponding layer 2 VLAN or layer
3 route table so the receiving VTEP can forward the encapsulated frame appropriately. The Symmetric
Integrated Routing and Bridging (IRB) capability allows the overlay networks to support contiguous
layer 2 forwarding and layer 3 routing across leaf nodes.

A VTEP encapsulates a frame in the following headers:

• IP header: IP addresses in the header can be VTEPs or VXLAN multicast groups in the transport
network. Intermediate devices between the source and destination forward VXLAN packets
based on this outer IP header.

• UDP header for VXLAN: The default VXLAN destination UDP port number is 4789.

• VXLAN header: VXLAN information for the encapsulated frame.

– 16-bit VXLAN Flags: The first (G) bit signals that a Group Policy ID has been set on the packet,
  and the fifth (I) bit signals that the VNI is valid. All other bits are reserved and set to “0”.
– 16-bit VXLAN Group Policy ID: The group ID identifies the policy enforced on tunneled traffic.
– 24-bit VXLAN Network Identifier: Specifies the virtual network identifier (VNI) of the
  encapsulated frame.
– 8-bit Reserved field

Figure 4: VXLAN packet header
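
The header layout can be sketched with a short pack/parse routine. The VNI and group policy ID values below are arbitrary examples, and the flag positions assume the VXLAN-GBP header layout.

```python
import struct

G_FLAG = 0x8000   # first flag bit: a Group Policy ID is carried
I_FLAG = 0x0800   # fifth flag bit: the VNI is valid

def pack_vxlan_gbp(vni: int, gpid: int) -> bytes:
    """Build the 8-byte VXLAN-GBP header: 16 flag bits, a 16-bit group
    policy ID, then the 24-bit VNI and 8-bit reserved byte packed
    together as one big-endian 32-bit word."""
    assert 0 <= vni < 2**24 and 0 <= gpid < 2**16
    return struct.pack(">HHI", G_FLAG | I_FLAG, gpid, vni << 8)

def unpack_vxlan_gbp(header: bytes) -> dict:
    flags, gpid, vni_word = struct.unpack(">HHI", header)
    return {
        "gbp_present": bool(flags & G_FLAG),
        "vni_valid": bool(flags & I_FLAG),
        "gpid": gpid,
        "vni": vni_word >> 8,
    }

hdr = pack_vxlan_gbp(vni=10100, gpid=100)
print(len(hdr))                # 8: total header size in bytes
print(unpack_vxlan_gbp(hdr))
print(2**24)                   # 16777216 distinct VNIs vs 4,094 usable VLAN IDs
```
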

Quality of Service
Quality of service (QoS) refers to a network’s ability to provide higher levels of service using traffic
prioritization and control mechanisms. Applying the proper QoS policy is important for real-time traffic
(such as Skype or Zoom calls) and business-critical applications (such as Oracle or Salesforce).

To accurately configure QoS on a network, consider several aspects, including bit rate, throughput,
path availability, delay, jitter, and loss. The last three—delay, jitter, and loss—are easily improved with
a queuing algorithm that enables the administrator to prioritize applications with higher requirements
over those with lower requirements. The areas of the network that require queuing are those with
constrained bandwidth, such as the wireless or WAN sections. Wired LAN uplinks are designed to carry
the appropriate amount of traffic for the expected bandwidth requirements. Since QoS queuing does
not take effect until active congestion occurs, it is not typically needed on LAN switches.

The easiest strategy to deploy QoS is to identify the critical applications running in the network and
give them a higher level of service using the QoS scheduling techniques described in this guide. The
remaining applications stay in the best-effort queue to minimize upfront configuration and lower
the day-to-day operational effort of troubleshooting a complex QoS policy. If additional applications
become critical, they are added to the list. This can be repeated as needed without requiring a com-
prehensive policy for all applications. This strategy is often used by organizations that do not have a
corporatewide QoS policy or that are troubleshooting application performance problems in specific
areas of the network.

One example prioritizes real-time applications along with a few other critical applications that require
fast response times because users are waiting on remote servers. Identifying business-critical applica-
tions and giving them special treatment on the network helps employees remain productive. Real-time
applications are placed into a strict priority queue, and business-critical applications are given a higher
level of service during congestion.


The administrator must limit the amount of traffic placed into strict priority queues to prevent oversat-
uration of interface buffers. Giving all traffic priority defeats the purpose of creating a QoS strategy.
After critical traffic is identified and placed in the appropriate queues, the rest of the traffic is placed in a
default queue with a lower level of service. If the higher-priority applications do not use the bandwidth
assigned, the default queue uses all available bandwidth.

Traffic Classification and Marking

Aruba recommends using the access switch as a QoS policy enforcement point for traffic over the LAN.
This means selected applications are identified by IP address and port number at the access switch
and marked for special treatment. Optionally, traffic can be queued on the uplinks, but this is not
required in an adequately designed campus LAN environment. Any applications that are not identified
are re-marked with a value of zero, giving them a best-effort level of service. The diagram below shows
where traffic is classified, marked, and queued as it passes through the switch.

Classification, Marking, and Queuing

In a typical enterprise network, applications with similar characteristics are categorized based on how
they operate over the network. Then, the applications are placed in different queues according to category. For
example, if broadcast video or multimedia streaming applications are not used for business purposes,
there is no need to account for them in a QoS policy. However, if Skype and Zoom are used for making
business-related calls, the traffic must be identified and given a higher priority. Specific traffic that
is not important to the business (for example: YouTube, gaming, and general web browsing) should
be identified and placed into the “scavenger” class, where it is dropped first and with the highest
frequency during times of congestion.

A comprehensive QoS policy requires categorization of business-relevant and scavenger-class
applications on the network. Layer 3 and layer 4 classifications group applications into categories with
similar characteristics. After sorting the critical applications from the others, they are combined into
groups for queuing.

Scheduling algorithms rely on classification markings to identify applications passing through a net-
work device. Applications marked at layer 3 rather than layer 2 will carry the markings throughout the
life of a packet.

The goal of the QoS policy is to allow critical applications to share the available bandwidth with minimal
system impact and engineering effort.


DiffServ Code Point (DSCP)

Marking at layer 3 uses the IP type of service (ToS) byte, carrying either IP precedence in the three
most significant bits (values 0 to 7) or DSCP in the six most significant bits (values 0 to 63). DSCP
values are more common because they provide a higher level of QoS granularity. They are also
backward compatible with IP precedence because of their leftmost placement in the ToS byte.

Layer 3 markings are added in the standards-based IP header, so they remain with the packet as it
travels across the network. When an additional IP header is added to a packet, such as traffic in an
IPsec tunnel, the inner header DSCP marking must be copied to the outer header to allow the network
equipment along the path to use the values.
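
The backward compatibility is simple bit arithmetic, shown here as a sketch:

```python
# The six-bit DSCP value occupies the most significant bits of the ToS
# byte, so the legacy three-bit IP precedence is simply the top three
# bits of the DSCP value -- that leftmost placement is what makes DSCP
# backward compatible with IP precedence.

def tos_byte(dscp: int) -> int:
    assert 0 <= dscp <= 63
    return dscp << 2          # the low two bits are ECN, not QoS marking

def ip_precedence(tos: int) -> int:
    return tos >> 5           # top three bits of the ToS byte

EF = 46      # Expedited Forwarding (voice)
AF31 = 26
AF21 = 18

print(hex(tos_byte(EF)))             # 0xb8: the classic ToS value for voice
print(ip_precedence(tos_byte(EF)))   # 5: maps to the legacy "critical" precedence
print(ip_precedence(tos_byte(AF21))) # 2
```
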

DSCP marking

Several RFCs are associated with the DSCP values as they pertain to the per-hop behaviors (PHBs)
of traffic passing through the various network devices along its path. The diagram below shows the
relationship between PHB and DSCP and their associated RFCs.

DSCP relationship with PHBs

Voice traffic is marked with the highest priority using an Expedited Forwarding (EF) class. Multimedia
applications, broadcast, and video conferencing are placed into an assured forwarding (AF31) class
to give them a percentage of the available bandwidth as they cross the network. Signaling, network
management, transactional, and bulk applications are given an assured forwarding (AF21) class. Finally,
default and scavenger applications are placed into the default (DF) class to provide them a reduced
amount of bandwidth but not completely starve them during times of interface congestion. The figure
below shows an example of six application types mapped to a four-class LAN queuing model.

Six application types in a 4-class LAN queuing model


The best-effort entry at the end of the QoS policy marks all application flows that are not recognized
by the layer 3 / layer 4 classification into the best-effort queue. This prevents end-users who mark their
own packets from getting higher-priority access across the network.

Traffic Prioritization and Queuing

The Aruba switch supports up to eight queues, but this example uses only four queues. The real-time
interactive applications are combined into one strict priority queue. In contrast, multimedia and
transactional applications are placed into deficit round-robin (DRR) queues, and the last queue is used
for scavenger and default traffic. DRR is a packet-based scheduling algorithm that groups applications
into classes and shares available capacity among them according to a percentage of bandwidth defined
by the network administrator. Each DRR queue gets its fair share of bandwidth during congestion, but
all of them can use as much bandwidth as needed when there is no congestion.

The outbound interface requires the DSCP values shown in the second column to queue the applications.
The weighted values used in the DRR LAN scheduler column are added, and each DRR queue is given
a share of the total. The values must be adjusted according to the volume of applications in each
category. This is often a trial-and-error process because the QoS policy affects the applications in
the environment. As shown in the four-class example below, the queues are sequentially assigned in
top-down order.
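
The weight-to-bandwidth relationship can be sketched as follows; the weights are illustrative, not validated Aruba values:

```python
# Sketch of how DRR weights translate into guaranteed bandwidth shares
# during congestion. The strict-priority (EF) queue is serviced first
# and is not part of the DRR calculation; when other queues are idle,
# any DRR queue may burst to full link capacity.

drr_weights = {
    "multimedia (AF31)": 30,
    "transactional (AF21)": 20,
    "default/scavenger (DF)": 10,
}

total = sum(drr_weights.values())
shares = {queue: weight / total for queue, weight in drr_weights.items()}

for queue, share in shares.items():
    print(f"{queue}: {share:.0%} of non-priority bandwidth under congestion")
```
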

QoS Summary for Aruba Switch


Spanning Tree Protocol


High availability is the primary goal for any enterprise conducting business on an ongoing basis. Layer
2 loops cause catastrophic network disruptions, making prevention and timely removal of loops a
critical part of network administration.

The Spanning Tree Protocol (STP) dynamically discovers and removes layer 2 loops in a network. This
section covers STP topology, the type of STP to use, and supplemental features to enable.

One method of increasing availability between network infrastructure is to establish layer 2 redundancy
using multiple links, so an individual link failure does not result in a network outage. Multiple strategies
can be applied to prevent the redundant connections from forwarding layer 2 frames in an infinite
loop. The Aruba ESP architecture uses Virtual Switching Extension (VSX), discussed in the following
Network Resiliency section, to prevent loops between centrally administered network switches. STP in
combination with Loop Protect is configured primarily to resolve accidental loops created by users in
the access layer.

With many different types of STP and varying network devices using different defaults, it is important
to standardize on a common version for predictable STP topology. The recommended STP version for
Aruba Gateways and Switches is Rapid per VLAN Spanning Tree (Rapid PVST+).

STP and Root Bridge Selection

Enable STP on all devices providing layer 2 services to prevent accidental loops.


STP creates a loop-free topology by selecting a root bridge and subsequently permitting only one port
on each non-root switch to forward frames in the direction of the root. The root bridge is dynamically
selected, using the lowest bridge ID as its primary selector. The bridge ID begins with a bridge priority,
which can be set administratively to influence root bridge selection. Aggregation switches and collapsed
core switches should have low bridge priorities to ensure that a switch at that layer becomes the root
bridge of the network. The root bridge should be a device that is central to the aggregation of VLANs
for downstream devices. In the campus topologies discussed in this guide, the root bridge candidates
are the collapsed core, access aggregation, and services aggregation devices.
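
The election logic reduces to a tuple comparison on (priority, MAC). The priorities and MAC addresses below are hypothetical, and the sketch omits the per-VLAN extended system ID used by Rapid PVST+.

```python
# Root bridge election compares bridge IDs: the configured priority is
# the high-order portion and the bridge MAC address breaks ties, so the
# lowest (priority, MAC) pair wins.

def bridge_id(priority: int, mac: str) -> tuple:
    # Lower tuple sorts first, matching STP's "lowest bridge ID wins".
    return (priority, int(mac.replace(":", ""), 16))

switches = {
    "agg-1 (VSX primary)": bridge_id(4096, "00:fd:45:00:00:01"),
    "agg-2 (VSX secondary)": bridge_id(8192, "00:fd:45:00:00:02"),
    "access-1": bridge_id(32768, "00:fd:45:00:00:10"),  # default priority
}

root = min(switches, key=switches.get)
print(root)   # agg-1 wins: lowest configured priority
```
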

In the three-tier wired design, the root bridges are the access aggregation switches and service aggre-
gation switches. Virtual Switching Extension (VSX) and multichassis link aggregation groups (MC-LAGs)
are used to allow dual connections between the access and aggregation layers without the need for
STP on individual links. Each set of aggregation switches is a separate layer 2 domain with its own
root bridge, but they do not interfere with each other because STP does not extend over the layer 3
boundary between the devices. In this example, the aggregation switches are set with a low bridge
priority to ensure that one switch in each VSX pair becomes the root bridge. The core devices are layer
3 switches and do not run STP.

Spanning Tree Protocol root bridge placement


STP Supplemental Features

STP has several supplemental features that help keep the network stable. Here is a brief overview of
supplemental features, with justifications for enabling them.

Root Guard prevents a port from accepting a superior bridge protocol data unit (BPDU), which could
otherwise cause a neighboring switch to take over as root. Enable Root Guard on the aggregation or
collapsed core downlinks to prevent access switches from becoming the root of the network. Do not
enable it on the links connecting the aggregation switches to the core switches.

Admin Edge allows a port to be enabled automatically without going through the listening and learning
phases of STP on the switch. This should be used only on single device ports or with a PC daisy-chained
to a phone. Use Admin Edge with caution because STP does not run on these ports, so if there is a loop
on the network, STP cannot detect it. This feature should be used only for client-facing ports on access
switches.

BPDU Guard automatically blocks any port on which it detects a BPDU. This feature typically should
be enabled on admin-defined client-facing ports on access switches. Ensuring that BPDUs are never
accepted on client-facing access ports protects against loops and spoofed BPDU packets.

BPDU Filter causes an interface to ignore received BPDUs and to stop sending its own BPDUs. The
main use for this feature is in a multitenancy environment when the servicing network does not want to
participate in the customer’s STP topology. A BPDU Filter–enabled interface still allows other switches
to participate in their own STP topologies. Using BPDU Filter is not recommended unless the network
infrastructure does not need to participate in the STP topology.

Loop Protect is a supplemental protocol to STP that can detect loops when a device creating a loop
does not support STP and also drops BPDUs. Loop Protect automatically disables ports to block a
detected loop and re-enables ports when the loop disappears. This feature should be turned on for
all access port interfaces to prevent accidental loops from port to port. Loop Protect should not be
enabled on the uplink interfaces of access switches or in the core, aggregation, or collapsed core layers
of the network.

Fault Monitor can be used to automatically detect excessive traffic and link errors. It can log events,
send SNMP traps, or temporarily disable a port. Enable Fault Monitor in notification mode for all
recognized faults on all interfaces to maintain data continuity, but do not enable its port-disable
action, because Loop Protect is used to stop loops.

Network Resiliency

Switching Resiliency Technologies

For campus switches, Aruba recommends either a two-tier LAN with collapsed core or a three-tier LAN
with a routed core. In both designs, common features can be enabled to ensure that the network is
highly resilient. The two-tier campus is shown on the left below; the three-tier campus is on the right.


Two-Tier and Three-Tier Wired

Virtual Switching Extension (VSX)

VSX enables two AOS-CX switches to appear as a single switch to downstream-connected devices.
Use VSX in a collapsed core or aggregation point to add redundancy. In a standard link aggregation
group (LAG), multiple physical interfaces between two devices are combined into a single logical
link. Virtual Switching Extension (VSX) extends this capability by combining ports across two AOS-CX
switches on one side of the LAG, referred to as a multi-chassis LAG (MC-LAG). The VSX pair appears as a
single layer 2 switch to downstream connected devices, which can be another switch, a gateway, or
an individual network host. The active gateway feature enables configuration of a redundant layer 3
network gateway using a shared IP and MAC address. Dual-homing connected devices in this manner
adds network resiliency by eliminating a single point of failure for upstream connectivity.

From a management/control plane perspective, each switch is independent. VSX pairs synchronize
MAC, ARP, STP, and other state tables over an inter-switch link (ISL). The ISL is a standard LAG
between the VSX pair designated to run the ISL protocol.

VSX is supported on Aruba CX 6400, CX 8320, CX 8325, CX 8360, and CX 8400 models. A VSX pair
cannot mix different models; for example, a CX 8320 cannot be paired with a CX 8325.

VSX Pair Placement


The access switch uses a standard LAG connection. From the access switch perspective, the VSX pair is
a single upstream switch. This minimizes the fault domains for links by separating the connections
between the VSX-paired switches. This also minimizes the service impact with the Live Upgrade feature
because each device has its own control plane and link to the downstream access devices.

Traditional STP vs VSX

MC-LAG enables all uplinks between adjacent switches to be active and passing traffic for higher
capacity and availability, as shown in the right side of the figure below.

NOTE:
When using LAG or MC-LAG, STP is not required but should be enabled as an additional enhanced
loop protection security mechanism.


VSX Terminology

LACP—The Link Aggregation Control Protocol (LACP) combines two or more physical ports into a single
trunk interface for redundancy and increased capacity.

LACP Fallback—LAGs with LACP Fallback enabled allow an active LACP interface to connect with its
peer before it receives LACP protocol data units (PDUs). This feature is useful for access switches using
Zero Touch Provisioning (ZTP) connecting to LACP-configured aggregation switches.

Inter-switch link—Best practice for configuring the ISL LAG is to permit all VLANs. Specifying a restric-
tive list of VLANs is valid if the network administrator requires more control.

MC-LAG—These LAGs should be configured with the specific VLANs and use LACP active mode. MC-LAGs
should NOT be configured with all-VLAN permission.

VSX keepalive—The VSX keepalive is a User Datagram Protocol (UDP) probe that sends hellos between
the two VSX nodes to detect a split-brain situation. The keepalives should be sent using the out-of-band
management (OOBM) port connected to a dedicated management network, or enabled on a direct IP
connection using a dedicated physical link between the VSX pair members.

Active gateway—This is the default gateway for endpoints within the subnet. It must be configured
on VSX primary and secondary switches. Both devices also must have the same virtual MAC address
configured from the private MAC address spaces listed below. There are four ranges reserved for private
use.

• x2-xx-xx-xx-xx-xx
• x6-xx-xx-xx-xx-xx
• xA-xx-xx-xx-xx-xx
• xE-xx-xx-xx-xx-xx
NOTE:
x is any hexadecimal value.
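
These four ranges correspond exactly to locally administered unicast MAC addresses, which a short check makes explicit (the MAC values below are examples):

```python
# A second hex digit of 2, 6, A, or E means the "locally administered"
# bit of the first octet is set and the "multicast" bit is clear --
# precisely the address space safe to use for a virtual gateway MAC.

def valid_active_gateway_mac(mac: str) -> bool:
    first_octet = int(mac.split(":")[0], 16)
    locally_administered = bool(first_octet & 0x02)
    unicast = not (first_octet & 0x01)
    return locally_administered and unicast

print(valid_active_gateway_mac("12:01:00:00:01:00"))  # True:  x2 range
print(valid_active_gateway_mac("00:0c:29:aa:bb:cc"))  # False: globally unique OUI
```
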

PIM Dual—If the network is multicast-enabled, the default PIM DR is the VSX primary switch. The VSX
secondary also can establish PIM peering to avoid a long convergence time in case of VSX primary
failure. Aruba recommends configuring PIM as active-active for each VSX pair.

VSX and Rapid PVST—Aruba recommends configuring VSX and STP per the design guidelines outlined
in the best-practices document below.
NOTE:
Certain VSX use cases fall outside this design guidance but are covered in detail in VSX Configuration
Best Practices. It includes in-depth information about STP interactions, traffic flows, active
forwarding, and the Live Upgrade process.


Virtual Switching Framework

Stacking allows multiple access switches to be connected to each other and behave like a single
switch. Stacking combines multiple physical devices into one virtual switch, increasing port density
and allowing management and configuration from one IP address. This reduces the total number of
managed devices while better utilizing the port capacity in an access wiring closet. Stack members
share the uplink ports, which provides additional bandwidth and redundancy.
AOS-CX access switches provide front-plane stacking through the Virtual Switching Framework (VSF)
feature, which uses two of the four front-panel SFP ports operating at 10G, 25G, or 50G speeds. VSF
combines the control and management planes of all member switches in a VSF stack, which allows for
simpler management and redundancy in the access closet. VSF is supported on Aruba CX 6200 and 6300
switches.
VSF supports up to 10 members on a 6300 and up to eight members on a 6200. Aruba recommends a
ring topology for the stacked switches. A ring topology can be used for 2–10 switches and allows for
link-failure fault tolerance, because the devices can still reach the commander or standby switch using
the secondary path.
The commander and standby switches should have separate connections to the pair of upstream
aggregation switches. If the commander fails, the standby can still forward traffic, limiting the failure
to the commander switch. The recommended interface for switch stacking links is a 50G direct-attach
cable (DAC), which allows enough bandwidth for traffic across members.
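The fault tolerance of the recommended ring topology can be illustrated with a small graph model. The sketch below is illustrative Python, not an Aruba tool; the five-member stack size and member numbering are assumptions. It removes each stacking link in turn and checks that every member can still reach the commander over the secondary path:

```python
from collections import deque

def ring_links(n):
    """Stacking links for an n-member VSF ring (members numbered 1..n)."""
    return {(i, i % n + 1) for i in range(1, n + 1)}

def reachable(n, links, start):
    """Set of members reachable from `start` over bidirectional links."""
    adj = {i: set() for i in range(1, n + 1)}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adj[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

members = 5
links = ring_links(members)
# Cutting any single stacking link leaves every member able to reach
# the commander (member 1) over the secondary path around the ring.
assert all(reachable(members, links - {cut}, 1) == set(range(1, members + 1))
           for cut in links)
print("ring survives any single link failure")
```

A chain (non-ring) topology fails this check: cutting its middle link isolates part of the stack, which is why the ring is recommended for 2–10 members.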
There are three stacking-device roles:

• The commander conducts overall management of the stack and synchronizes forwarding
databases with the standby.
• The standby provides redundancy for the stack and takes over stack management operations if
the commander becomes unavailable or if an administrator forces a commander failover.
• Members are not part of the overall stack management, but they must manage their local
subsystems and ports to operate correctly as part of the stack. The commander and standby
also are responsible for their own local subsystems and ports.

VSF Connections


To mitigate the effects of a VSF split stack, a split-detection mechanism known as multi-active
detection (MAD) must be enabled on the commander and standby. MAD uses a connection between
the OOBM ports of the primary and secondary members to detect when a split occurs. Aruba
recommends connecting the OOBM ports directly with an Ethernet cable.

VSF OOBM MAD and links

Aruba ESP Campus LAN Design Summary


The ESP campus wired LAN provides network access for employees, APs, and Internet of Things (IoT)
devices. The campus LAN also becomes the logical choice for interconnecting the WAN, data center,
and Internet access, making it a critical part of the network.

The simplified access, aggregation, and core design provides the following benefits:

• An intelligent access layer protects from attacks while maintaining user transparency within the
layer 2 VLAN boundaries.
• The aggregation and core layers provide IP routing using OSPF and IP multicast using PIM sparse
mode with redundant BSRs and RPs.
• The services aggregation connects critical networking devices such as corporate servers, WAN
routers, and Internet edge firewalls.
• The core is a high-speed dual-switch interconnect that provides path redundancy and subsecond
failover for nonstop forwarding of packets.
• Combining the core and services aggregation into a single layer allows the network to scale when
a standalone core is not required.

When overlay networks are needed, Aruba provides the flexibility to choose between centralized or
distributed overlays to address different traffic and policy requirements. Both overlay models support
the “Colorless Ports” feature, which enables automated client onboarding and access control for ease
of operations.


Aruba ESP Wireless LAN Design


The Aruba ESP WLAN portfolio provides a range of products for indoor or outdoor coverage, centralized
or distributed bridging with policy enforcement, and capabilities for in-building 5G, location services,
and IoT.

The Aruba ESP WLAN can be designed using two different traffic engineering approaches:

• Bridge Mode - In this mode, the AP bridges traffic from the WLAN onto the correct user VLAN.
AP-connected switch ports must trunk all wireless user VLANs.
• Tunnel Mode - In this mode, traffic is tunneled back to a gateway, which then bridges the WLAN
frame to the correct user VLAN. Only the gateway-connected switch ports must trunk the wireless
user VLANs.

Tunnel Mode is used to centralize the data plane, apply advanced segmentation rules, or deploy in
combination with user-based tunneling (UBT).

Mixed Mode is a third, alternative option that enables a single SSID to bridge traffic locally or send it
over a tunnel based on policy.

Each wireless client should be able to connect to multiple APs from anywhere in the network. This
enables low latency roaming for real-time applications and allows the network to adapt during routine
AP maintenance or an unscheduled outage. A higher density of APs enables the network to support
more wireless devices while delivering consistent performance and increased connection reliability.

Aruba APs can include Bluetooth and IEEE 802.15.4 (ZigBee) radios to provide connectivity for IoT
devices. These hardware capabilities are coupled with deep integration at the software layer to provide
IoT-specific support.

The Aruba ESP campus WLAN design may contain one or more of the following elements:

• Aruba Central
• Aruba ClearPass
• Aruba Gateway 9xxx
• Aruba Gateway 7200
• Aruba AP-6xx
• Aruba AP-5xx

Radio Frequency Design


Aruba recommends conducting a wireless design survey for all Wi-Fi network installations. In today’s
deployments with higher AP density, a predictive survey tool can provide sound, estimated AP placement
for many typical environments such as offices, schools, and hotels. A predictive survey determines AP
placement based on user inputs to software.


For more complex environments such as manufacturing floors or unique architectures, a physical site
survey provides the most accurate AP design information because it measures both Wi-Fi data and
non-Wi-Fi RF interference onsite. An onsite survey involves placing an AP in the environment to be
covered and using software to measure the signal propagation.

Both methods can provide a heat map of the coverage areas that shows recommended AP placement.
An onsite survey can provide increased accuracy in the RF design, more detail for AP mounting locations,
and recommendations for antenna placement.

The primary design goal should be to determine the number and placement of APs required to support
the projected number and density of clients while limiting co-channel interference and maximizing
throughput.

Aruba recommends conducting a predictive survey by a wireless professional for common areas such as
general carpeted office space, warehouses, and retail areas. Use an onsite survey for more challenging
environments such as manufacturing facilities, large public venues, or areas where external interference
has been detected.

RF Signal Coverage

An important RF design consideration is the differing propagation patterns between higher and lower
radio frequencies. WLAN designs should target 5GHz signal coverage to ensure maximum network
performance and capacity. When properly implemented, a 5GHz network design can upgrade easily to
6GHz due to the similarity in propagation characteristics.

To maximize performance in Wi-Fi 6 deployments, the minimum Received Signal Strength Indicator
(RSSI) should be -55 dBm at cell edge to deliver an MCS11 data rate on a 40MHz-wide channel with soft
roaming support.
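As a rough plausibility check of the -55 dBm cell-edge target, the free-space path loss (FSPL) formula can estimate the distance at which signal strength falls to that level. The sketch below is illustrative only: the 18 dBm EIRP and 5.5 GHz center frequency are assumptions, and real buildings add wall and furniture attenuation that shrinks the distance considerably, which is why an actual survey is still required:

```python
import math

def fspl_db(distance_m, freq_mhz):
    """Free-space path loss in dB (distance in meters, frequency in MHz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def cell_edge_distance(eirp_dbm, target_rssi_dbm, freq_mhz):
    """Free-space distance at which RSSI falls to the target level."""
    budget_db = eirp_dbm - target_rssi_dbm  # total loss we can absorb
    return 10 ** ((budget_db + 27.55 - 20 * math.log10(freq_mhz)) / 20)

# Example: 18 dBm EIRP at 5.5 GHz, -55 dBm cell-edge target (free space).
d = cell_edge_distance(18, -55, 5500)
print(round(d, 1))  # roughly 19 m in free space; walls shrink this
```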

When deploying a Wi-Fi 6 network using dual-band APs, turn off the 2.4 GHz radios on some of the APs
to reduce co-channel interference. This accommodates the limited number of 2.4 GHz channels and the
greater propagation of a 2.4 GHz signal. Use VisualRF or onsite tools to ensure that coverage
gaps do not occur when disabling the radios.
NOTE:
Although signal availability is critical for WLAN operation, Aruba ESP wireless networks should
be designed for user and device capacity, not RF coverage.

Access Point Placement

Use the following recommendations as a starting point for designing Aruba indoor omnidirectional
access points within a typical office environment:

• Space APs 30-50 feet (10-15 meters) apart.

– Consider client performance, architecture, and interior design.


• Design for the following capacity:

– 30-40 clients per AP


– 2.5 Wi-Fi devices carried per user (laptop, phone, tablet, smartwatch, etc.) with a 50%
connection rate.

• Add APs to areas with frequent or increased user density.

– Conference rooms, atriums, or special event areas. Identify peak-load periods when the
WLANs have the highest usage or visibility.
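The capacity guidelines above reduce to simple arithmetic for a first-pass AP count. The sketch below uses the midpoint of the 30-40 clients-per-AP range; the 500-user example is hypothetical, and coverage requirements, hot spots, and building layout can raise the final count:

```python
import math

def estimate_ap_count(users, devices_per_user=2.5,
                      connect_rate=0.5, clients_per_ap=35):
    """Capacity-driven AP estimate from the guidelines above.

    clients_per_ap=35 is the midpoint of the 30-40 clients/AP range.
    Coverage, density hot spots, and building layout can raise the count.
    """
    concurrent_clients = users * devices_per_user * connect_rate
    return math.ceil(concurrent_clients / clients_per_ap)

# 500 users -> 500 * 2.5 * 0.5 = 625 concurrent clients -> 18 APs
print(estimate_ap_count(500))  # 18
```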

The figure below shows a sample office layout with APs. The staggered spacing between APs is equal
in all directions and ensures suitable coverage with seamless roaming.

Sample Office AP Layout

AP Mounting

AP mounting height, location, and orientation are as crucial for wireless coverage as streetlamp
locations are for roadway lighting. Follow the guidelines below when mounting indoor omnidirectional
down-tilt antenna APs:

• With the exception of the hospitality-style AP, avoid mounting AP-xx5 series APs vertically on a
wall.

• AP-xx5 series APs should be mounted horizontally (white surface facing down) at a height of
12-25 ft (4-8 meters).

• If mounting on a wall, use a 90° AP mount that supports the AP models used, available for
purchase from a third-party vendor.

• Do not install APs above drop ceilings. This introduces attenuation between APs and clients and
can negatively impact AirMatch calculations.


• Do not mount APs on building columns, pillars, or I-beams unless intentionally using a physical
structure to create RF shadows.

• Do not install APs in credenzas, bookcases, or other furniture that would introduce unnecessary
attenuation.

• Consult with an experienced WLAN engineer to review planned AP locations.

AirMatch

Aruba AirMatch analyzes periodic RF data across the entire network, or a subset of the network, to
derive configuration changes algorithmically for every Aruba AP on the network. The APs receive
regular updates based on environmental conditions, which benefits both IT staff and users. AirMatch
is the enhanced version of Adaptive Radio Management (ARM) technology. It has a new automated
channel optimization, transmit power adjustment, and channel width tuning system that uses machine
learning intelligence to generate the optimal view of the entire WLAN automatically.
NOTE:
Aruba’s Adaptive Radio Management (ARM) continues to run locally on APs and can alter the
transmit power of an individual AP in response to high interference.

ClientMatch

ClientMatch improves the experience of wireless users by reducing sticky clients, load-balancing
clients across APs, and steering them between supported bands. ClientMatch continually
monitors each client’s RF neighborhood to determine whether the client is receiving the required level
of service from the AP with which it is currently associated. When appropriate, it steers clients intelligently to
an AP radio that can provide better service. Aruba recommends keeping ClientMatch enabled.

ClientMatch is Wi-Fi 6 aware and requires no feature-specific configuration. It is enabled by default,
but it can be disabled if required. In a mixed-AP deployment, it attempts to match Wi-Fi 6 clients to
Wi-Fi 6 radios.
NOTE:
While ClientMatch is effective at matching clients to the best radio available to them, it should
not be used as a replacement for proper RF design.

Channel Planning

Aruba AirMatch can handle dynamic channel, bandwidth, and transmit power assignment for most
installations. RF configuration outside the recommended AirMatch feature should be discussed with
an Aruba or partner system engineer.


5GHz Channel Width

An important decision for 5 GHz deployments is the channel width to use. Wider channels mean
higher throughput for individual clients but fewer non-overlapping channels, while narrower channels
result in lower data rates per client but more available channels, and thus less risk of co-channel
interference.

AirMatch manages channel width dynamically for most installations, but WLAN performance
requirements can vary within a campus. For this reason, Aruba allows adjustment of the minimum and
maximum channel width. For example:

• Example 1: A marketing department may have lower client density and higher throughput
requirements for mobile users. Raising the minimum channel width from 20MHz to 40MHz ensures
that no AP uses channels less than 40MHz wide.
• Example 2: A distribution center has low throughput requirements for data collection guns, but
requires maximum reliability with low interference. Statically assigning the channel width to
20MHz may be ideal to reduce the risk of co-channel interference that could impact client devices
negatively.

Wider channels become easier to operate as Wi-Fi 6 clients become more prevalent within
the typical mobile device population. For optimal performance, allow AirMatch to determine channel
width whenever possible.

Use AirMatch for channel allocation; it detects interference and radar events and computes an optimal
channel plan around them. If specific DFS channels regularly detect radar in the environment, remove them
from the valid-channel plan to prevent coverage problems.

Power Settings

Optimum power settings vary based on the physical environment. Aruba recommends using AirMatch
to decide each AP’s optimal transmit power values.

When not using AirMatch, follow these guidelines for a typical wireless design:

• In the 2.4 GHz band, set the minimum power threshold to 6 dBm and the maximum to 12 dBm.

• In the 5 GHz band, set the minimum power threshold to 18 dBm and the maximum to 21 dBm.

• Do not exceed a power-level difference of 6 dB between the minimum and maximum settings
on any radio band.

• Set the transmit power for 5 GHz radios 6 dB higher than for 2.4 GHz radios.

Setting a consistent power level across all available radios leads to more predictable roaming behavior
among a group of APs.
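The guidelines above can be expressed as a simple validation check. The sketch below is an illustration, not an Aruba tool; it interprets "6 dB higher" as a minimum offset, which is an assumption made because the recommended per-band ranges themselves sit more than 6 dB apart:

```python
def check_power_plan(min_24, max_24, min_5, max_5):
    """Validate static transmit-power settings (dBm) against the
    guidelines above. "6 dB higher" is interpreted here as a minimum
    offset between bands (an assumption). Returns a list of
    violations; an empty list means the plan complies."""
    issues = []
    if max_24 - min_24 > 6 or max_5 - min_5 > 6:
        issues.append("min/max spread on a band exceeds 6 dB")
    if min_5 - min_24 < 6 or max_5 - max_24 < 6:
        issues.append("5 GHz power is not at least 6 dB above 2.4 GHz")
    return issues

# The recommended values (2.4 GHz: 6-12 dBm, 5 GHz: 18-21 dBm) pass.
print(check_power_plan(6, 12, 18, 21))  # []
```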


Roaming Optimizations

An Aruba WLAN can support any environmental requirements, user densities, and applications with
proper RF design and correct AP placement. The list below includes settings to consider modifying to
promote healthy client-initiated roaming.

Roaming Best Practice

• Transmit power — Default, AirMatch: Leave at default values (5 GHz: min 18 / max 21 dBm;
2.4 GHz: min 6 / max 12 dBm) and let AirMatch tune to the environment.
• Channel width — Default: Evaluate whether DFS channels should be allowed in the WLAN channel
plan. Remove them and limit 80 MHz channel width if radar interference is detected, if clients do
not support the channels, or if older-client roaming is inconsistent.
• Band steering — Enable 11ax-aware ClientMatch: ClientMatch optimizes the user experience by
steering clients to the best AP based on client capabilities and AP load.
• Local probe request threshold — 15: Prevents APs from responding to a client’s probe request
when the signal-to-noise ratio is below 15 dB, which encourages clients to associate with a
better-suited AP.

Fast Roaming Best Practice

• Opportunistic Key Caching (OKC) — Enable: Avoids a full 802.1X key exchange during client roams
by caching the session key within the WLAN. Note: macOS and iOS devices do not support OKC.
• 802.11r fast BSS transition — Enable: Implements the full 802.11r standard supported by recent
versions of macOS, iOS, Android, and Windows 10 clients. Some older 802.11n devices may have
connectivity issues with 802.11r enabled on the WLAN.
• 802.11k — Enable: Set the Beacon Report to Active Channel Report and disable the Quiet
Information Element parameter in the Radio Resource Management profile.


Wi-Fi 6 Enhancements

The following AOS 10 features support specific capabilities for the Wi-Fi 6 standard.

High Efficiency

Each of the Wi-Fi 6-specific features below falls within the high-efficiency profile. The high-efficiency
parameter activates all the Wi-Fi 6 features on the radio. High efficiency is enabled by default, and Aruba
recommends keeping it enabled.

OFDMA

OFDMA enables a Wi-Fi channel to be divided into smaller subchannels so the AP can send data to
multiple clients simultaneously. A 20 MHz–wide channel supports up to nine clients, and the number
of subchannels continually adjusts to support fewer higher-speed clients or additional lower-speed
clients. Subchannel use is dynamic and adjusts automatically every transmission cycle, depending on
client data needs.

This feature is enabled by default for Wi-Fi 6 clients and APs but works only when both sides are Wi-Fi 6
capable. Wider channels support more subchannels. This means an 80 MHz–wide channel can support
up to 37 clients at a time. OFDMA currently supports downlink traffic from the AP to the clients and will
eventually support uplink traffic from the clients to the AP.
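The client counts above correspond to the number of 26-tone resource units (the smallest OFDMA subchannel) that fit in each channel width. The sketch below encodes that relationship; the 9 and 37 values come from the text, while the 40 MHz and 160 MHz entries are filled in from the same 802.11ax resource-unit layout and should be treated as illustrative:

```python
# Maximum number of 26-tone resource units (the smallest OFDMA
# subchannel) per 802.11ax channel width. 9 and 37 match the figures
# cited above; 18 and 74 follow the same 802.11ax RU layout.
MAX_26_TONE_RUS = {20: 9, 40: 18, 80: 37, 160: 74}

def max_ofdma_clients(width_mhz):
    """Upper bound on clients served in a single OFDMA transmission.

    In practice the AP mixes larger RUs for higher-speed clients, so
    the realized client count per transmission is usually lower.
    """
    return MAX_26_TONE_RUS[width_mhz]

print(max_ofdma_clients(20), max_ofdma_clients(80))  # 9 37
```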

Downlink MU-MIMO

The Wi-Fi 6 standard enhances MU-MIMO to support up to eight clients simultaneously when using an
eight spatial stream (SS) AP, such as the Aruba 55X models. An increased number of spatial streams
has the following benefits:

• Achieving higher data rates when communicating with a single client.


• Achieving higher aggregate performance in an MU-MIMO environment when communicating
with multiple clients simultaneously.

Single and Dual Stream Clients


This feature is enabled by default. Keeping it enabled leads to increased capacity and higher speeds
per user.

Transmit Beamforming

Wi-Fi 6 employs an explicit beamforming procedure using channel-sounding with a null data packet to
compute the antenna weights and focus the RF energy for each user. Keep this feature enabled for
optimal performance benefits.

Target Wake Time

An important power-saving feature of Wi-Fi 6 is target wake time (TWT). TWT uses negotiated policies
based on expected traffic activity between Wi-Fi 6 clients and a Wi-Fi 6 AP to determine a scheduled
wake time for each client. Keep this feature enabled to allow clients to sleep longer and save more
power.

BSS Coloring

BSS coloring allows APs on the same channel to be closer together and transmit simultaneously when
they are tagged with different colors. This makes 80 MHz channels easier to operate in traditional
office settings.

Wireless Security

Wireless security is a key component of the Aruba ESP WLAN solution. The latest improvements to
wireless security come in a protocol update called WPA3, which Aruba was instrumental in defining.
Migrate wireless clients to WPA3 as soon as it is supported to ensure a reliable and secure WLAN.

WPA3

WPA3 can be deployed using WPA3-Personal (SAE) or WPA3-Enterprise. WPA3 increases security while
keeping complexity the same as WPA2: it requires no changes in workflows or usage, with no new steps
or caveats to remember. Aruba’s Simultaneous Authentication of Equals (SAE) protocol was added to
the IEEE 802.11s mesh networking standard and certified in 2012. SAE is an implementation of the
dragonfly key exchange, which performs a password-authenticated key exchange using a zero-knowledge
proof where each side proves it knows the password without exposing the password or password-derived
data. The WPA3-SAE user experience is identical to WPA2-PSK: the user simply enters the password and
connects.

The Wi-Fi Alliance has published a list of WPA3-Certified Client Devices.


WPA3-Personal

WPA3-Personal is a replacement for WPA2-PSK. It uses password-based authentication built on the
dragonfly key exchange (RFC 7664), which is resistant to active, passive, and dictionary attacks. For
backward compatibility, enable “Transition Mode” so that WPA3-capable clients connect using
WPA3-SAE and legacy clients connect using WPA2-PSK.

WPA3-Enterprise (CCM 128)

CCM 128 is WPA3 with AES CCM encryption and dynamic keys using 802.1X.

CCM 128 is the correct choice for networks moving to WPA3 today. The operating mode is backward
compatible with WPA2, but adds optional support for 802.11w Protected Management Frame (PMF).
Clients that are PMF capable (support 802.11w) and legacy clients can connect to the same SSID. The
mode is supported in bridge, tunnel, and mixed-mode SSIDs.

WPA3-Enterprise (CNSA)

WPA3 with AES GCM-256 encryption uses the Commercial National Security Algorithm (CNSA) suite
(192-bit), new key management (SHA-384), and mandatory PMF endpoint support. The WPA3-Enterprise
CNSA (192-bit) mode requires a compatible EAP server (such as Aruba ClearPass Policy Manager 6.8 or
later) and requires EAP-TLS. Strict key-exchange and cipher requirements may not be supported on
all devices. The mode is supported in bridge, tunnel, and mixed-mode SSIDs. It is used primarily by
government agencies.

WPA3-Enterprise (GCM 256)

WPA3 with AES GCM-256 encryption requires new key management (SHA-256), new ciphers, and PMF.
Legacy clients are not supported. The operating mode can be used for sites that require stronger key
management and encryption when the client population has support for GCM 256.

Enhanced Open

Aruba Opportunistic Wireless Encryption (OWE) provides unauthenticated data encryption to open
Wi-Fi networks. To the user, an OWE network looks just like an open network with no padlock symbol,
but data are encrypted. OWE performs an unauthenticated Diffie-Hellman key exchange when the
client associates with the AP.

This key is used to derive keys to encrypt all management and data traffic sent and received by the
client and AP. Central proactively copies the keys to neighboring APs.

No additional device provisioning is required for OWE. Aruba recommends enabling OWE for visitor
networks where encryption is needed but authentication is not required, such as coffee shops, bars,
schools, public venues, and stadiums.

Transition Mode enables an administrator to configure a single open SSID for backward compatibility.
The AP automatically creates two basic SSIDs with separate beacons when OWE is enabled.


• BSSID 1 — An open network for non-OWE stations with an information element (IE) to indicate a
BSSID 2 is available. Legacy clients connect to this BSSID, and their traffic is not encrypted.

• BSSID 2 — A hidden OWE network with the Robust Security Network Information Element (RSN-IE)
Authentication Key Management (AKM) field indicating the use of suite 18 (the OWE suite) for
authentication, along with an IE indicating that BSSID 1 is available. OWE-capable clients connecting
to the hidden SSID receive PMF and encryption benefits.

Aruba supports configuring OWE SSID in bridge or tunnel mode.

Multiple Pre-Shared Key

The Multiple Pre-Shared Keys (MPSK) feature enables devices connecting to the same SSID to use
different PSKs. One helpful example is headless IoT devices that do not support 802.1X. MPSK enhances
WPA2 pre-shared key mode by enabling device-specific or group-specific passphrases. Passphrases
are assigned administratively to individual or groups of devices based on common attributes such
as profiling data or assigned uniquely to individual device registrations using Aruba ClearPass Policy
Manager. This establishes a one-to-one relationship between devices and a specific user to provide
visibility, accountability, and management, and subsequently reduces the administrative burden when
changing the passphrase for a set of devices.
NOTE:
MPSK is not compatible with WPA3-Personal (SAE).

Visitor Wireless
The Aruba ESP Architecture provides access to visitors and employees over the same infrastructure,
while ensuring that visitor access does not compromise corporate network security.

Using the organization’s existing WLAN provides a convenient, cost-effective way to offer Internet
access for visitors and contractors. The wireless visitor network:

• Provides Internet access to visitors through an open wireless SSID, with web access control in
the gateway’s firewall.
• Supports the creation of temporary visitor authentication credentials that an authorized internal
user can manage.
• Keeps visitor network traffic separate from the internal network.

Every AP can be provisioned with controlled, open access to wireless connectivity to the Internet. Visitor
traffic is tunneled securely from the wireless AP back to the gateway and into a separate VLAN with
Internet-only access. The figure below shows how traffic is passed from the wireless visitor network
VLAN to the firewall.

Visitor wireless network


A visitor network should require a username and password entered on a captive portal. Lobby
ambassadors or other administrative staff can issue temporary visitor accounts. This design provides the
flexibility to tailor control and administration to the organization’s requirements while maintaining a
secure network infrastructure.

It is common for the gateway to act as a DHCP server and router for visitor clients. As long as the
projected load metrics are below the gateway’s recommended limits, layer 3 operations can be enabled
for visitors or IoT networks.

When routing is enabled on a gateway, use firewall policies to control traffic between VLANs. The DHCP
service on the gateway is not redundant, so an external DHCP server is recommended for
mission-critical visitor access.


WLAN Multicast and Broadcast

Dynamic Multicast Optimization

The 802.11 standard states that multicast traffic over a WLAN must be transmitted at the lowest basic
rate. Dynamic Multicast Optimization (DMO) is an Aruba technology that converts multicast frames to
unicast before forwarding from a gateway to an AP. Unicast frames are acknowledged by the client and
can be retransmitted if a frame is lost over the air. Unicast frames also are transmitted at the highest
possible data rate supported by the client, which greatly reduces duty cycle in the cell, freeing up
bandwidth for all users.
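The airtime benefit of DMO can be quantified with a back-of-the-envelope comparison. In the sketch below, the 6 Mbps basic rate, 300 Mbps client rate, 1400-byte payload, and 20 subscribed clients are all illustrative assumptions, and per-frame preamble and acknowledgment overhead is ignored:

```python
def airtime_us(payload_bytes, rate_mbps):
    """Transmit time in microseconds for the payload alone
    (Mbps == bits per microsecond), ignoring preamble/ACK overhead."""
    return payload_bytes * 8 / rate_mbps

payload = 1400  # bytes per multicast frame (assumed)

# One multicast frame sent at the lowest basic rate (6 Mbps assumed).
multicast = airtime_us(payload, 6)

# DMO: one unicast copy per subscribed client at each client's own
# data rate (a uniform 300 Mbps is assumed for illustration).
clients = 20
dmo = clients * airtime_us(payload, 300)

print(round(multicast), round(dmo))  # DMO uses less airtime despite 20 copies
```

Under these assumptions, twenty unicast copies still consume well under half the airtime of a single multicast transmission at the basic rate, which is why DMO frees up the cell for other traffic.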

For performance optimization purposes, avoid having more than one multicast source broadcasting
the same data on the same WLAN datapath. Use the largest possible layer 2 network to avoid converting
multiple multicast streams simultaneously.

The figures below show a typical IP multicast topology with DMO enabled.

IP multicast BSR, RP, MSDP, IGMP snooping, and DMO placement

Broadcast to Unicast Conversion

Aruba WLANs can convert broadcast packets into unicast frames to optimize airtime usage. Broadcast
frames over the air must be transmitted at the lowest configured data rate (the “basic rate”). Because
broadcasts have no delivery acknowledgment, a lost broadcast frame cannot be retransmitted. When
the frame is converted to unicast, the AP can send it at a much higher data rate and receive delivery
confirmation, so a lost frame can be retransmitted.

Unicast conversion greatly decreases channel duty cycle and delivers frames at the highest possible
data rate per client.


Broadcast Filtering

An Aruba ESP WLAN should be configured for ARP filtering to reduce broadcast transmissions.

• All - The WLAN drops all broadcast and multicast frames except DHCP, ARP, IGMP group queries,
and IPv6 neighbor discovery protocols.
• ARP - The WLAN drops broadcast and multicast frames except DHCP, ARP, IGMP group queries,
and IPv6 neighbor discovery protocols. Additionally, it converts ARP requests to unicast and
sends frames directly to the associated clients.

• Unicast ARP Only - This option enables the WLAN to convert ARP requests to unicast frames and
send them to the associated clients.
• Disabled - The AP forwards all broadcast and multicast traffic to the wireless interfaces.

WLAN QoS

Wi-Fi Multimedia

Wi-Fi Multimedia (WMM) is a certification program created by the Wi-Fi Alliance that covers QoS over
Wi-Fi networks. WMM prioritizes network traffic into one of four queues. Based on its assigned traffic
class, traffic receives different treatment, such as a shortened wait time between packets or tagging of
packets using DSCP and IEEE 802.1p markings.

Users can define the traffic assigned to each queue, and DSCP and 802.1p values can be adjusted as
needed to match the wired LAN.

To take advantage of WMM functionality in a Wi-Fi network, three requirements must be met:

• The AP is Wi-Fi Certified™ for WMM and has WMM enabled.


• The client device is Wi-Fi Certified™ for WMM.
• The source application supports WMM.

NOTE:
WMM is supported in all Aruba Wi-Fi products.

QoS is set for a VLAN or port and can be set dynamically per application using a policy enforcement
firewall. Most networks, including wireless LANs, operate below capacity. There is very little congestion,
and traffic flows well. QoS provides predictable behavior for congested periods. During overload
conditions, QoS mechanisms grant certain traffic high priority while making fewer resources available
to lower-priority traffic. Increasing the number of voice users, for instance, may mean delaying or
dropping data traffic.


Wi-Fi manages airtime contention using carrier-sense multiple access with collision avoidance
(CSMA/CA), much like shared Ethernet networks did in the past. CSMA/CA requires that each device
monitor the wireless channel for other Wi-Fi transmissions before transmitting a frame. The Wi-Fi
standard defines a distributed system in which there is no central coordination or scheduling of clients
or APs. However, with Wi-Fi 6 and BSS coloring, channel contention is greatly reduced.

The WMM protocol adjusts two CSMA/CA parameters: the random back-off timer and the arbitration
inter-frame space, according to the QoS priority of the frame to be transmitted. High-priority frames are
assigned shorter random back-off times and arbitration inter-frame spaces, while low-priority frames
must wait longer.
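The two parameters WMM adjusts are concrete numbers per access category. The sketch below uses the default 802.11 EDCA station parameters (standard values from the 802.11 specification, not Aruba-specific configuration) to show why a voice frame wins contention on average:

```python
import random

# Default 802.11 EDCA station parameters per access category:
# arbitration inter-frame space number (AIFSN) and contention-window
# bounds, all in slot times. Smaller values mean earlier channel access.
EDCA = {
    "voice":       {"aifsn": 2, "cwmin": 3,  "cwmax": 7},
    "video":       {"aifsn": 2, "cwmin": 7,  "cwmax": 15},
    "best_effort": {"aifsn": 3, "cwmin": 15, "cwmax": 1023},
    "background":  {"aifsn": 7, "cwmin": 15, "cwmax": 1023},
}

def initial_wait_slots(ac, rng=random):
    """AIFS plus one random back-off draw, in slot times, for a
    first transmission attempt (no retries)."""
    p = EDCA[ac]
    return p["aifsn"] + rng.randint(0, p["cwmin"])

# Voice draws from a much smaller window than background, so on
# average it transmits sooner once the channel goes idle.
avg = lambda ac: EDCA[ac]["aifsn"] + EDCA[ac]["cwmin"] / 2
print(avg("voice"), avg("background"))  # 3.5 14.5
```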

Back-off and arbitration inter-frame timers for WMM

WMM defines four priority levels for 802.11 traffic. In ascending priority:

• Background
• Best effort
• Video
• Voice

Since QoS must be maintained end-to-end, WMM priority levels must be mapped to the QoS priorities
in use on the LAN. The table below shows how DSCP priorities are translated to the four WMM priority
levels.

WMM to DSCP mapping

• Voice priority — Real-time interactive — DSCP 46
• Video priority — Multimedia streaming — DSCP 26
• Best-effort priority — Transactional — DSCP 18
• Background priority — Best effort — DSCP 0
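The mapping in the table above can be expressed directly in code. The sketch below is illustrative; the fallback for DSCP values not listed in the table is an assumption, not a statement of Aruba's default classification behavior:

```python
# DSCP-to-WMM mapping from the table above (the four reference points).
DSCP_TO_WMM = {46: "voice", 26: "video", 18: "best_effort", 0: "background"}

def wmm_access_category(dscp):
    """Classify a DSCP value using the table's reference points.
    Unlisted values fall back to best effort here (an assumption)."""
    return DSCP_TO_WMM.get(dscp, "best_effort")

print(wmm_access_category(46), wmm_access_category(34))
```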


AirSlice

AirSlice is a unique RF technology that uses Policy Enforcement Firewall (PEF) deep packet inspection
to guarantee performance for latency-sensitive, high-bandwidth, and IoT services such as 4K video
streaming or unified communications (UC). An Advanced License is required to enable Deep Packet
Inspection before configuring AirSlice.

The table below lists the applications supported by default with AirSlice.

Default applications

Wi-Fi Calling
Office 365
GoToMeeting
Cisco WebEx
Dropbox
GitHub
Zoom
Skype for Business
Slack
Amazon Web Services

More information about AirSlice can be found in the Aruba AirSlice Tech Brief.

Bridge Mode Deployment


Bridge Mode provides an easy solution when tunneled traffic is not needed and advanced gateway
features are not required. In this mode, wireless traffic is bridged directly from the AP into the wired
infrastructure. The access switch ports for the APs are trunked to provide SSID-to-VLAN connectivity.
The AP handles the packet encryption, user authentication, and policy enforcement functions, while
features such as RF management, key management, live upgrades, monitoring, and troubleshooting
are managed in Central.

The figure below illustrates the ArubaOS 10 (AOS 10) bridge mode topology.

Bridge Mode



Tunnel Mode Deployment


Aruba gateways can be added to a greenfield design or an existing bridge mode deployment. The only
requirement is to run the same software version on APs and gateways. A tunnel mode deployment offers
robust security features and maximum operational flexibility. Gateways can be deployed individually or
clustered for increased redundancy and scale. Clusters are automatically created by adding gateways
to the same group in Central.

Tunnel Mode increases visibility into applications, which helps prioritize business-critical data. This
model also provides microsegmentation, dynamic RADIUS proxy, and encryption over the LAN. Seamless
roaming is supported across an entire layer 3 campus.

The diagram below shows the AOS 10 tunneled mode topology.

APs in Tunneled Mode


Proxy ARP on Gateway

Enabling this feature directs the gateway to respond to an ARP request on behalf of a client in the
user table. When enabled on a VLAN with an IP address, the gateway provides its MAC address in the
proxy ARP. If the VLAN does not have an IP address, the gateway supplies the client’s MAC address.
This feature is turned off by default. Enable it only to address deployments in which the gateway is a
transparent hop to another device, such as with Aruba VIA (Virtual Internet Access) VPN.
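The decision described above can be sketched as pseudologic. The function and parameter names below are hypothetical, not gateway configuration or code.

```python
# Illustrative sketch of the gateway proxy ARP behavior described above.
# All names are hypothetical; this is not Aruba gateway code.

def proxy_arp_reply_mac(vlan_has_ip: bool, gateway_mac: str, client_mac: str,
                        proxy_arp_enabled: bool = False):
    """Return the MAC address the gateway answers with, or None.

    - Feature off (the default): no proxy ARP reply.
    - VLAN has an IP address: the gateway answers with its own MAC.
    - VLAN has no IP address: the gateway supplies the client's MAC.
    """
    if not proxy_arp_enabled:
        return None
    return gateway_mac if vlan_has_ip else client_mac
```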

Layer 3 Design

Campus installations of a gateway should always be layer 2, and the gateway should not perform layer
3 operations. The client’s default gateway should be another device, such as a router or switch, and the
layer 2 network should be dedicated for the clients attached to the gateway. The gateway’s broadcast
and multicast management features enable the use of large subnets without issue.


Make the layer 2 network as large as can be supported by the gateway and switching infrastructure.
Table sizes, ARP learning rates, physical layer rates, and redundancy all can affect the switching
infrastructure design.

WLAN Resiliency
Aruba ESP provides a variety of components useful for designing a highly available, fault-tolerant
network. This section provides general guidelines for software features that increase fault tolerance
and allow for upgrades with minimal service impact.

Authentication State/Key Sync

Authentication keys are synchronized across APs by the Key Management Service (KMS) in Central. This
allows clients to roam between APs without reauthenticating or rekeying encrypted traffic. Key sync
reduces load on the RADIUS servers and speeds the roaming process for a seamless experience. Key
synchronization and management are handled automatically by the APs and Central; no additional
user configuration is required.

Firewall State Sync

Traffic from a client can be synchronized across primary and secondary gateways when using a cluster.
This allows the client to fail from the primary gateway to the secondary seamlessly. The system
synchronizes encryption keys between APs so when a client moves to its secondary gateway, the client
does not need to reauthenticate or rekey its encrypted traffic. To the client, moving between gateways
or APs is transparent.

This is a crucial component of Aruba’s high availability design and Live Upgrade features. When using a
bridged SSID, the firewall state is synced for each roaming event, so the client experiences seamless
roaming with no traffic disruption.

Cluster Design Failure Domain

When a gateway fails, clients left with a single gateway connection are rebalanced across the cluster.
The length of time required for this operation depends on the number of clients on the network. If a
second gateway fails before the rebalancing can occur, the client is disassociated and reconnected
to an available gateway. The client can reestablish a connection as long as other gateways are not at
capacity.

To mitigate multiple gateway failures, minimize common points of failure. To limit the size of the
failure domain, use disparate line cards or switches, multiple uplinks spanning line cards or switches,
port configuration validation, and multiple gateways.


Campus Wireless Summary


The ESP campus WLAN provides network access for employees, visitors, and IoT devices. Regardless of
their location, wireless devices have the same experience when connecting to their services.

The benefits of the Aruba wireless solution include:

• Seamless network access for employees, visitors, and IoT devices.


• Plug-and-play deployment for wireless APs.
• Wi-Fi 6 enhancements that address connectivity issues for high-density deployments and improve
the performance of the network.
• Live upgrades to perform operating system updates with little to no impact on service.


Aruba ESP Campus Policy Design


Aruba ESP provides the tools needed to deploy a zero-trust network design that employs role-based
authentication, campus-wide micro-segmentation, and consistent policy enforcement at every point
in the network.

Traditional VLAN and access control list (ACL) security features remain important components of the
security framework. Aruba ESP greatly fortifies standard security practices by:

• Enabling automated application and enforcement of security policy throughout the overlay
network.
• Incorporating roles, or personas, that clearly identify and group access attempts for each user or
device for consistent policy application.
• Segmenting campus traffic originating from both wired and wireless endpoints.

Two policy frameworks are provided within the ESP architecture to allow organizations to choose the
approach that best meets current requirements.

Aruba Dynamic Segmentation provides mechanisms to assign user traffic to secure network segments
dynamically using policies defined by business requirements. This solution uses Generic Routing
Encapsulation (GRE) to tunnel traffic from APs and switches to a gateway cluster, for consistent,
high-throughput policy enforcement on north/south traffic.

Aruba Central NetConductor also provides mechanisms for securely and dynamically segmenting user
traffic. This solution uses a distributed overlay architecture built on VXLAN-GBP to segment user traffic
in the network using an efficient and fault-tolerant fabric topology and GBP enforcement at any point
in the network.

Figure 6: Policy intro

Network Segmentation
Logical network segmentation is a fundamental tool of secure IP network design. Virtual LANs (VLANs)
are used to separate IP broadcast domains. Virtual routing and forwarding (VRF) instances enable a
single network device to provide routing services within multiple, distinct routing domains. VLANs and
VRFs both use access control lists (ACLs) and route-maps to filter communication between subnets.


Virtual LAN

A VLAN segments traffic at layer 2 and restricts MAC reachability. From a policy standpoint, VLANs
are used for macro-segmentation. Devices are grouped into a VLAN by IP subnet, and an IP ACL is
applied to the VLAN interface. This serves as the main policy enforcement mechanism to determine
if endpoints in a VLAN or subnets are permitted to communicate with endpoints in other VLANs or
subnets.

In NetConductor, VLANs also are configured to create layer 2 segments.

Virtual Routing and Forwarding

A VRF instance enables a single network device to manage multiple routing domains and maintain
entirely separate route tables. Member interfaces forward traffic based on the VRF-specific route table.
A VRF can contain overlapping IP addresses with another VRF, because the individual route tables are
discrete.

Because a VRF separates IP routing domains, the network can be segmented to enforce policy effectively.
For example, a retail organization can place all PCI traffic in a VRF separate from other enterprise
networks to guarantee that regulated traffic cannot intermingle with enterprise traffic. VRF segmentation
should be kept to a minimum when designing campus networks in order to preserve resources on the
network infrastructure.

In NetConductor, VRFs can be used to create layer 3 overlay networks.

Access Control Lists

An access control list (ACL) is a filter applied to network traffic to identify hosts or networks and to
restrict communication with other hosts or network segments. Filtering rules can be applied to MAC
addresses, IPv4 addresses and ports, or IPv6 addresses and ports.

When traffic matches an ACL rule, policy is applied to permit, deny, or drop the traffic. The ACL also can
be used to apply Quality of Service (QoS) policies and route-map filters.

ACLs are commonly used to create an IPv4 filter at a layer 3 network boundary. This enables
configuration of filters based on IP address, type, and port number to regulate communication between IP
subnets. For example, an ACL can deny user VLANs (BYOD, employee, and visitor) from accessing a
network infrastructure management VLAN.
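As a sketch of this kind of filter, the snippet below models first-match ACL evaluation over IPv4 subnets. The subnets, rule list, and function names are illustrative only; this is not AOS-CX configuration syntax.

```python
# First-match ACL evaluation sketch; subnets and rules are illustrative,
# not AOS-CX configuration.
import ipaddress

MGMT = ipaddress.ip_network("10.0.0.0/24")     # hypothetical management VLAN
USERS = ipaddress.ip_network("10.10.0.0/16")   # hypothetical user VLANs
ANY = ipaddress.ip_network("0.0.0.0/0")

RULES = [
    ("deny", USERS, MGMT),   # user VLANs cannot reach the management VLAN
    ("permit", ANY, ANY),    # permit everything else
]

def acl_action(src_ip: str, dst_ip: str) -> str:
    """Return the action of the first rule matching (src, dst)."""
    src, dst = ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)
    for action, src_net, dst_net in RULES:
        if src in src_net and dst in dst_net:  # first matching rule wins
            return action
    return "deny"                              # implicit deny at end of ACL
```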

On AOS-CX devices, ACL support depends on the platform, the interface type, and the direction in which
the ACL is applied. The following table lists the AOS-CX capability to apply an ACL inbound or outbound
by interface type and by ACL type.


ACL Interface Type 8400X 8360 8325 8320 64xx 6300 6200 6100

Ingress IPv4 ACL on ports Yes Yes Yes Yes Yes Yes Yes Yes
Ingress IPv4 ACL on VLANs Yes Yes Yes Yes Yes Yes Yes Yes
Ingress routed IPv4 ACL on VLANs Yes Yes Yes Yes Yes Yes - -
Ingress IPv6 ACL on ports Yes Yes Yes Yes Yes Yes Yes Yes
Ingress IPv6 ACL on VLANs Yes Yes Yes Yes Yes Yes Yes Yes
Ingress routed IPv6 ACL on VLANs Yes Yes Yes Yes Yes Yes - -
Ingress MAC ACL on ports Yes Yes Yes Yes Yes Yes Yes Yes
Ingress MAC ACL on VLANs Yes Yes Yes Yes Yes Yes Yes Yes
Egress IPv4 ACL (on route-only ports) Yes Yes Yes Yes Yes Yes - -
Egress IPv4 ACL (on bridged ports) - Yes - - Yes Yes Yes -
Egress routed IPv4 ACL on VLANs - Yes Yes Yes Yes Yes - -
Egress IPv6 ACL on ports - Yes - - Yes Yes Yes -
Egress IPv4 ACL on VLANs - Yes Yes Yes Yes Yes Yes -
Egress routed IPv6 ACL on VLANs - Yes Yes Yes Yes Yes - -
Egress IPv6 ACL on VLANs - Yes Yes Yes Yes Yes Yes -
Egress MAC ACL on ports - Yes - - Yes Yes Yes -
Egress MAC ACL on VLANs - Yes - - Yes Yes Yes -
Control plane ACLs Yes Yes Yes Yes Yes Yes Yes Yes
Ingress ADC on ports Yes Yes Yes Yes Yes Yes Yes Yes

Role-Based Policy

User Role

The Aruba WLAN solution has long featured the concept of a user role, or persona. The user role is a
container for a variety of attributes that define how a user or device is permitted to use the network
and resources connected to it. In most cases, the role is derived and assigned to a user or device based
on the credentials provided during 802.1X-based authentication. Roles also can be based on device
properties, usage patterns, geography, time of day, and a number of additional attributes.


A major advancement in security policy enforcement, Aruba ESP provides the ability to assign a role
to any authenticated or profiled traffic in the network and enforce the associated policy at any point
in the network: switch, AP, or gateway. This ability enables a true zero-trust network environment by
ensuring that every packet is associated to a role and compliant with the policy of that role. Policy
can be applied using ACLs, VLAN assignment, QoS markings, time restrictions, and other security
requirements in a variety of combinations.

Policy

Plan security policy carefully to make the best use of roles for a comprehensive solution that does not
interfere with legitimate use of the network.

Common types or categories of roles with similar policies are:

• Trusted
• Untrusted
• IoT

Trusted

Trusted roles should require 802.1X-based, enterprise authentication. Trusted roles provide varying
levels of access to internal, limited-access resources. Examples of roles that typically fall into the
trusted category are:

• Employee

– Typical permissions include:


* Full access to general network resources, including file, print, email, Internet, and
internal systems.
– Typical exceptions/prohibitions include:
* Secure or sensitive systems, such as those related to PCI, PHI, or PII data.
* Operational controls - HVAC, lighting, building automation systems.
* Network Management System - management interfaces, configuration management
applications.

• IT

– Typical permissions include:


* Standard employee access.
* Network Management System - management interfaces, configuration management
applications.
– Typical exceptions/prohibitions include:
* Exceptions may include PCI, PHI, and PII data.

• Critical

– Typical permissions include:


* This role is typically assigned to a single account used only when access to
authentication services is disrupted.
* This role has specific system control rights needed to perform repair, recovery, and
restoration when required.

Untrusted

Untrusted roles should require authentication, but may rely on non-802.1X mechanisms such as a
pre-shared key or guest registration. Examples of roles that typically fall into the untrusted category
are:

• Visitor

– Typical permissions include:


* Internet-only access limited to HTTPS, HTTP, DNS, DHCP protocols.
– Typical exceptions/prohibitions include:
* Bandwidth limits.
* Application categories (gambling, gaming, adult content, etc.).
* Time of day.

• Contractor

– Typical permissions include:

* Internet access with limitations.


* Sensitive or operational systems as needed for the contracted function.

– Typical exceptions/prohibitions include:

* Device profiling for minimum patch level.


* Application categories (gambling, gaming, adult content, etc.).
* Time of day.
* Network Management System - management interfaces, configuration management
applications.

NOTE:
A contractor in this context is a project-oriented, short-term resource working from their own
client devices.

IoT

IoT roles must be tailored to the limitations and requirements of the devices to be deployed. Policy
must permit the IoT solution to function properly with correct peer-to-peer communication. Policy
also must permit the management plane of the IoT solution to reach all devices. Examples of roles that
typically fall into the IoT category are:

• Office devices


– Typical permissions include:


* Access to consumables management system.
* Access to print servers, DHCP, and DNS.
– Typical exceptions/prohibitions include:
* No access to Internet or other systems.

• Building management devices

– Typical permissions include:


* Access to building management system (BMS) hosts.
* Access to DHCP and DNS.
– Typical exceptions/prohibitions include:
* No access to Internet or other systems.

When device-to-device communication is not required, configure the user roles to deny traffic between
endpoints.

When designing network roles, the goal is to create roles that make policy enforcement consistent and
precise while limiting available attack surface.

Developing an accurate network profile and specific access requirements for each unique type of user
or device is essential.

Policy Enforcement

Centralized Overlay

When building networks using a centralized policy architecture, APs and switches form GRE tunnels to
a gateway cluster, and all user traffic is tunneled. This results in an efficient design for primarily
client-initiated, north/south traffic such as that destined for a data center or the Internet.

As the authenticator in 802.1X authentication sessions, the gateway applies a role to every
client session upon successful authentication. Aruba gateway products also serve as the primary policy
enforcement point, using a high-throughput policy enforcement firewall and deep packet inspection
engine.

Distributed Overlay

When building networks using a distributed policy architecture, switches form an EVPN-VXLAN overlay
fabric. Data traffic is encapsulated in a VXLAN packet that includes a group policy ID (GPID) on
every datagram. This enables policy enforcement on any VXLAN-GBP-aware device configured with a
corresponding role and policy.

The distributed policy architecture implemented in the Aruba Central NetConductor solution provides
consistent and efficient policy enforcement on all directions of traffic flow and opens the potential for
Internet-wide, Zero Trust networking.

The policy enforced on a switch is configured within Aruba Central and downloaded to the switch when
it is added to the overlay fabric.


802.1X and Policy Based Segmentation

Remote Authentication Dial-In User Service (RADIUS)

RADIUS is a networking protocol used to communicate user credentials to an authentication database,
then relay the corresponding user profile from the database to the network infrastructure.

The primary authentication method used in ESP design is 802.1X. Although Aruba products are
standards-compliant and integrate with many commonly available RADIUS server products, a
comprehensive ESP policy layer must use Aruba ClearPass Policy Manager as the RADIUS and policy
server.

Aruba ESP offers critical capabilities required to operate a secure and reliable RADIUS infrastructure.
Use “authentication priority” to enable fallback from 802.1X to MAC-based authentication to ensure
that critical devices always receive required network access. “Client limits” should be configured to
limit the number of authenticated clients from any single device port to prevent unintended MAC
flooding.

Another important method to increase fault-tolerance is creating a “critical authentication” role. This
role can be assigned to critical devices if an authentication request fails. It should be associated
with a policy that permits only basic connectivity for continued authentication attempts. A critical
authentication role on the switching infrastructure allows devices to access the network if the RADIUS
server is temporarily unreachable.

Role Assignment

Roles are typically assigned when a user or device is authenticated with 802.1X. The RADIUS server
queries a back-end LDAP containing a profile associated with the credentials provided. In Aruba ESP, a
vendor specific attribute (VSA) is used to communicate the assigned role.

After authentication, the RADIUS server can relay standard defined attributes and VSAs related to the
profile back to the network infrastructure.

When endpoints do not have appropriate credentials to connect to the network, a “rejection role”
should be used to prevent continued authentication failures. The rejection role should place the device
in a restricted VLAN with access to DHCP and DNS only. The supplicant then sees itself as authenticated
but unable to reach the Internet, and should discontinue further authentication attempts.

A change-of-authorization message is a special RADIUS message that enables modification of the policy
applied to an already authenticated user or device. A role can be changed for an existing authenticated
user or device based on time, location, or behavioral variables.

Basic principles of user role assignment include:

• The user role dictates if traffic is switched locally or tunneled to another device.

• Any RADIUS server can be used to return a local user role (LUR).

• Roles are assigned from the RADIUS server using the “HPE-User-Role” VSA.


• A role can assign a policy or QoS ACL, reauthentication timers, and captive portal redirect.

• The role must exist on a network device before the associated policy can be enforced by that
device.
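These principles can be sketched as follows. Only the “HPE-User-Role” VSA name comes from the text above; the role names, fallback behavior, and function are illustrative assumptions.

```python
# Sketch of local user role (LUR) assignment from a RADIUS Access-Accept.
# Only the "HPE-User-Role" VSA name comes from the text above; everything
# else here is an illustrative assumption.

LOCAL_ROLES = {"employee", "visitor", "iot-office"}  # roles defined on the device

def assign_role(radius_attrs: dict, default_role: str = "rejection") -> str:
    """Pick the user role from the RADIUS reply attributes.

    The role named in the HPE-User-Role VSA is applied only if it
    already exists on the device; otherwise fall back to a default.
    """
    role = radius_attrs.get("HPE-User-Role")
    if role in LOCAL_ROLES:
        return role
    return default_role
```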

Colorless Ports

The ESP colorless ports feature enables dynamic assignment of a role and policy to an access port
following device connection and authentication. A device can connect to any port in the network and
be assigned to the correct VLAN based on the associated policy. As a result of this VLAN assignment,
the traffic then can be put directly onto a traditional VLAN, encapsulated in a GRE tunnel and sent to a
gateway using UBT, or encapsulated in a VXLAN tunnel and sent across a NetConductor fabric.

Dynamic Segmentation

In Aruba ESP, dynamic segmentation is the solution for assigning user traffic to secure network segments
dynamically based on policies defined by business requirements. Colorless ports facilitate onboarding
wired devices to dynamically assigned roles and policies. For wireless devices, 802.1X is preferred for
authentication and role assignment.

Dynamic segmentation uses GRE to tunnel APs and switches to a centralized gateway cluster and
provide firewall-based segmentation and policy enforcement on the data plane. Policy can be applied to
traffic bridged at the AP; however, using a gateway cluster provides better throughput and enforcement
capabilities.

Micro-segmentation of traffic is achieved by enforcing role-based policy on the gateway cluster of the
tunneled data plane. In addition to applying IP-based policy between subnets, MAC layer traffic can be
filtered within a subnet to limit peer-to-peer communication.

Wireless Data Plane Modes

Dynamic segmentation supports three wireless data plane modes:

• Tunnel Mode
• Bridge Mode
• Mixed Mode

Tunnel mode SSIDs use a GRE tunnel from the APs to a centralized gateway cluster. Policy is configured
in Aruba Central, and the gateway is the enforcement point providing policy enforcement firewall and
DPI engine capabilities.

Gateways are required for campuses expecting to deploy more than 500 APs and 5000 clients at
a single site. Gateways are recommended for maximum policy enforcement capabilities with high
throughput.

Tunnel Mode in Three-Tier Wired


Figure 7: Tunnel mode in the three-tier wired design

Bridge mode SSIDs bridge wireless traffic to a VLAN locally and do not tunnel traffic to a gateway. As
a result, a bridge mode AP must have all wireless user VLANs trunked to the connected switch port.
Policy is configured in Aruba Central, and the AP is the enforcement point providing policy enforcement
firewall and DPI capabilities for a moderate density of clients. Bridge mode SSIDs should be considered
for locations expected to deploy fewer than 500 APs with fewer than 5000 clients per site. Consider the
overall security, policy, and traffic engineering requirements of the planned deployment.

Bridge Mode in Two-Tier Wired


Figure 8: Bridge mode in the two-tier wired design

Mixed mode SSIDs enable both bridge and tunnel forwarding modes on a single SSID. Reducing SSIDs
on a campus increases WLAN performance by reducing the number of management and beacon frames
transmitted. Mixed mode SSIDs only support 802.1X authentication. They should be considered if the
design requires both bridged and tunneled traffic within the campus. Aruba ClearPass is preferred, but
not mandatory, to deploy mixed mode SSIDs. Bridged and tunneled VLAN derivation uses a standard
VSA (such as filter-id) or an Aruba VSA (such as user role), which is sent by the RADIUS server.

User-Based Tunneling for Wired

User-Based Tunneling (UBT) allows traffic from a switch to be tunneled to a gateway cluster using GRE in
the same manner as traffic in a tunnel mode SSID. As with tunnel mode, policy is configured in Central,
and the gateway serves as the policy enforcement point. This results in consistent policy enforcement
on wired and wireless traffic and simplified engineering for northbound flows.

The following diagram shows UBT in the three-tier wired design. The employee, visitor, and Internet of
Things (IoT) devices have different access policies based on the assigned user role even though they
come from the same VLAN subnet in the access layer.

Policy Enforcement Using UBT in Three-Tier Wired


Figure 9: Policy enforcement using UBT in the three-tier wired design

Central NetConductor

In Aruba ESP, Central NetConductor is the solution for building a policy-based campus overlay network
using VXLAN-EVPN to enable policy enforcement at the per-packet level at any point in the network.
Colorless ports facilitate onboarding endpoints to the correct overlay network and policy domain using
802.1X for primary authentication.

NetConductor consistently distributes policy configuration across the ESP campus, ensuring that
gateways, APs, and switches all are able to identify the roles in use and enforce the associated policies.

VXLAN-GBP

NetConductor uses VXLAN-GBP to segment user traffic in a network by tagging VXLAN packets with a
Group Policy ID (GPID) associated to a user role. The roles and policies are defined in Aruba Central
and distributed to all devices in the overlay fabric.

The GPID is set at the ingress virtual tunnel endpoint (VTEP) based on the user role assigned when
the device or user initially authenticates to the network. Endpoints can be authenticated by MAC or
802.1X.


Associating policy to each packet at ingress to the fabric and enforcing policy on each packet at egress
from the fabric establishes a framework for a highly granular, scalable, and dynamic policy layer. An
assigned policy tag is carried on data traffic within the fabric, from user segment to WAN block or
physical security infrastructure to data center, with no loss of fidelity.

Segmentation using VXLAN-GBP provides the following advantages:

• Role-to-role micro-segmentation of user traffic on the same VLAN by tagging with a user role
stored as a GPID in the VXLAN header.
• Consistent policy enforcement across a campus and between fabric domains.
• At the egress, the switch determines if the traffic from the source role (carried in the Group Policy
ID) is permitted or denied for the destination role (determined from the destination MAC). Traffic
is forwarded or dropped accordingly.
• In a gateway, traffic on a VXLAN tunnel can be terminated and enter another VXLAN tunnel.
The Group Policy ID must be transported to the new tunnel for the final destination to enforce
role-based policies.
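The egress enforcement step described above can be sketched as a role-to-role permit lookup. The GPID values and the policy matrix below are illustrative assumptions, not actual NetConductor data structures.

```python
# Sketch of egress role-to-role enforcement with VXLAN-GBP.
# The source role travels as a GPID in the VXLAN header; the egress
# switch resolves the destination role and checks a policy matrix.
# GPID values and the matrix itself are illustrative assumptions.

GPID_TO_ROLE = {100: "employee", 200: "visitor", 300: "iot"}

# (source role, destination role) -> permit?
POLICY = {
    ("employee", "employee"): True,
    ("employee", "iot"):      True,
    ("visitor",  "employee"): False,
    ("iot",      "iot"):      False,  # no peer-to-peer between IoT endpoints
}

def egress_permit(gpid: int, dst_role: str) -> bool:
    """Permit traffic only if the (src, dst) role pair is allowed; default deny."""
    src_role = GPID_TO_ROLE.get(gpid)
    return POLICY.get((src_role, dst_role), False)
```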

Figure 10: Distributed overlay

EVPN to GRE Policy

Policy is communicated between GRE overlays and EVPN overlays using a static VXLAN tunnel between
the EVPN fabric border on the gateway aggregation switches and the gateway cluster where the GRE
sessions terminate. The GPID identifies an Aruba role and is carried in the VXLAN header across the
static VXLAN tunnel between the two solutions.


Ensure that fabric border switches are sized appropriately for the number of MAC addresses expected.

Campus to Branch Policy

When a Central gateway group is configured for SD-Branch, a VXLAN header can be inserted into SD-
Branch destined packets. The header contains the VXLAN GPID used within the EVPN fabric. This
enables the remote site to receive the GPID and apply the same role to the packet that was applied at
the originating site.

Network Infrastructure Security


The main responsibility of a secure network is to guarantee that only authorized users and devices can
access it. Security protocols must admit authorized users while keeping all other users out.

Using a centralized service to maintain user and device profiles is the easiest approach. A central service
running a secure networking protocol provides better security and easier maintenance. Policy can be
applied and usage can be tracked from a single point, for more consistent, accurate, and streamlined
security management.

Local Accounts

Other than required default accounts, local accounts should not be created. All management access
should be authenticated through TACACS or RADIUS for accountability. This includes service accounts:
the admin/root account should not be used for service accounts or monitoring systems. If a local
account is necessary, it should comply with the organization’s password complexity and rotation
requirements. Account information on any local account should be limited to as few people as possible,
with strict access and usage requirements.

Session Timeouts

Users connected to Central, gateways, or switches should not be allowed to stay connected for extended
periods of inactivity. This guideline limits the number of open sessions and reduces the risk of an
unauthorized user potentially editing information on devices with an idle connection.

The recommended timeout varies between one and five minutes, depending on security policy. Ensure
that the timeout is consistent across the network.

Minimum Password

Complex passwords are required for almost all user access, and network infrastructure is no exception.
Complex passwords dramatically decrease the likelihood of being “cracked.”


It is best practice across all Aruba platforms to require a complex password, defined as a password with
eight or more characters, requiring a combination of uppercase letters, lowercase letters, numerals,
and special characters.
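A complexity check of this kind can be sketched in a few lines. The exact rules enforced by each Aruba platform may differ; this is an illustration of the policy stated above, not a platform implementation.

```python
# Sketch of the complex-password rule described above: eight or more
# characters drawn from all four character classes. Platform-specific
# rules may differ; this is illustrative only.
import string

def is_complex(password: str) -> bool:
    """Check length plus uppercase, lowercase, numeral, and special character."""
    return (
        len(password) >= 8
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )
```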

It is also recommended that few to no local accounts exist on devices, and that the admin/root account
password be set to a 64-character complex password and stored in a safe or a secured
password-management application. Setting a password of this length and storing it securely typically
reduces the need to rotate the admin/root account password regularly. With RADIUS and TACACS,
password complexity is generally controlled by a directory service that complies with the organization’s
password complexity and expiration requirements.

SSH

The Secure Shell Protocol (SSH) is one of the most common ways to connect to network infrastructure
because of its strong security. It is important to ensure that the strongest ciphers are used to access
the network devices. Different regions and systems support different ciphers, so review the Aruba
hardening guide for details on which cipher to select. The hardening guide can be found on the Aruba
support site.

TACACS

Terminal Access Controller Access-Control System (TACACS) is a security protocol that provides central-
ized authentication and validation of users attempting to access a network device. TACACS servers
should integrate with an existing directory service to enforce group permission, password complexity,
and expiration requirements. It is recommended that all Aruba devices enable TACACS for authentica-
tion, command authorization, and auditing. Aruba ClearPass Policy Manager supports TACACS services
for any hardware vendor that supports the TACACS protocol, including Aruba devices.

Certificate Replacement

Replacing factory-issued certificates is a key recommendation of the Aruba hardening guide. Always
replace certificates before devices go into production.

TLS 1.2 should be the only protocol version enabled on any web interface. On gateways, the cipher
strength can be set to high, as recommended in the hardening guide.

Using a wildcard certificate for management of web interfaces is acceptable across all Aruba plat-
forms.


API Access

When using any application programming interface (API), it is critical to protect the interface. If the API
is enabled, apply an ACL so that only a management subnet can access the API. Depending on the Aruba
product, the local management interface may not need to be enabled at all. Devices managed through
Aruba Central no longer require the local web interface, and it can be disabled.


Aruba ESP Campus Reference


Architectures
This section describes the components and features of an Aruba ESP campus, with reference designs
for small, medium, and large campus networks. Each design includes a sample bill-of-materials.

Select the reference design that most closely aligns with specific production requirements as a starting
point for building the required campus solution.

Campus Components
The following products and features provide the foundation for the ESP campus architecture. Use the
tables below for guidance on designing a properly sized campus network.

Switches

Model   Function/Persona       VSF support   VSX support   Redundant power option   Layer 3 routes      MAC addresses
6100    Access                 No            No            No                       576 (static only)   8,192
6200    Access                 Yes           No            No                       1,024               16K
6300    Access/Edge            Yes           No            Yes                      64K                 32K
6400    Access/Edge            No            Yes           Yes                      64K                 32K
8325    Aggregation, Core/RR   No            Yes           Yes                      131K                98K
8360    Aggregation, Core/RR   No            Yes           Yes                      600K                212K
8400    Aggregation, Core/RR   No            Yes           Yes                      1M                  786K

Access Points

Model   Wi-Fi 6 support   Radio configuration                                                   Peak data rate   Max clients per radio (recommended)   Ethernet ports   Max active PoE
510     Yes               2 radios: 2x2:2 (2.4 GHz), 4x4:4 (5 GHz)                              2.7 Gb/s         100                                   1x2.5GE, 1xGE    20.8 W
530     Yes               2 radios: 4x4:4 (2.4 GHz), 4x4:4 (5 GHz)                              3 Gb/s           150                                   2x5GE            26.4 W
550     Yes               2 or 3 radios: 4x4:4 (2.4 GHz), single 8x8:8 or dual 4x4:4 (5 GHz)    5.4 Gb/s         150                                   2x5GE            38.2 W
630     Yes               3 radios: 2x2:2                                                       3.9 Gb/s         100                                   2x2.5GE          23.8 W
650     Yes               3 radios: 4x4:4                                                       7.8 Gb/s         150                                   2x5GE            32 W

Cables and Transceivers

Refer to the following document to ensure proper selection of supported cables and transceivers when
planning for physical connectivity within the campus:

ArubaOS-Switch and ArubaOS-CX Transceiver Guide

Campus Features
Additional design consideration should be given to enabling the following features within the campus
network.

Switching

Small campus Medium campus Large campus

IP Routing Optional Yes Yes


Multicast routing Optional Optional Yes
NTP Yes Yes Yes
sFlow Optional Yes Yes
Spanning tree Yes Yes Yes, consider layer 3 access
QoS Yes, if voice traffic Yes Yes

Wireless


Small campus Medium campus Large campus

DMO Optional Optional Yes


NTP Yes Yes Yes
QoS Yes, if voice over Wi-Fi Yes Yes
RADIUS Yes Yes Yes

Services

Services Description Notes

Aruba ClearPass Virtual appliance Recommended (on-premises)


Aruba Central Cloud services Recommended

Small Campus
The small campus typically supports up to 5,000 users with 2-3 devices per user. The network can be
a single building, several floors in a larger building, or a group of small buildings located near one
another.

The illustration below depicts a small campus consisting of three IDFs in a single location connected to
a VSX collapsed core.


Example Design

The example small campus design includes one combined server room/MDF and ten intermediate
distribution frames (IDFs) that connect to the MDF using multimode fiber. This small campus reference
design supports 750 employees and requires 75 APs to provide 2.4 GHz and 5 GHz coverage.

Building characteristics:

• 3 floors, 75,000 square feet total size


• 10 wiring closets (IDFs)
• 750 employees with up to 2500 concurrent IPv4 clients
• 75 APs
• 1 combined server room/MDF
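The density implied by these numbers can be sanity-checked with simple arithmetic. This is illustrative only and is not a substitute for an RF site survey:

```python
# Quick check of the small campus coverage numbers: 75,000 sq ft served by
# 75 APs across 3 floors.
def sq_ft_per_ap(total_sq_ft: int, ap_count: int) -> float:
    """Average floor area each AP must cover."""
    return total_sq_ft / ap_count

def aps_per_floor(ap_count: int, floors: int) -> float:
    """Average AP count per floor."""
    return ap_count / floors
```

For this design, each AP covers roughly 1,000 square feet (25 APs per floor), in line with typical indoor spacing guidance.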

Example Bill-of-Materials

The example small campus bill-of-materials includes redundancy and bandwidth suitable for a highly
reliable LAN and WLAN for a US based, small campus.


• 20 x Aruba 6300M 48-port HPE Smart Rate 1/2.5/5GbE Class 6 PoE and 4-port SFP56 Switch (JL659A): Operate the access switches as 2-device VSF stacks providing 96 access ports per IDF with redundant 25 Gb/s uplinks.
• 2 x Aruba 8360-32Y4C v2 32p 25G SFP/SFP+/SFP28 4 Sec 4p 100G QSFP+/QSFP28 Front-to-Back 3 Fans 2 AC (JL700C): Operate the collapsed core switches as a VSX pair with 25 Gb/s downlinks to the access stacks.
• 75 x Aruba AP-635 (US) Tri-radio 2x2:2 802.11ax Wi-Fi 6E Internal Antennas Campus AP (R7J28A): A cost-effective, 3-radio AP providing market-leading Wi-Fi 6E services and performance.
• Aruba ClearPass: Authentication and policy services for the campus network.
• Aruba Central: Cloud management and AI-driven insights for the campus network.

Medium Campus
The medium campus architecture is targeted for organizations supporting 5,000–15,000 users with
multiple devices per user. The network can be a group of buildings located near one another, one large
building, or several large or high-density floors in a building. This architecture uses access aggregation
switches to consolidate traffic onto higher bandwidth uplinks toward the core and to provide layer 3
services to the access layer.

The illustration below depicts a medium campus consisting of six IDFs connecting to two VSX access
aggregation points that are further connected to a VSX core with directly connected gateways.


Example Design

The example medium campus design includes a VSX redundant core with directly connected gateway
cluster and WAN services. Each floor includes two IDFs that connect to the MDF. This medium campus
reference design supports 5,000 employees and requires 500 APs to provide full 2.4 GHz and 5 GHz
coverage.

Campus characteristics:

• 3 buildings of 5 floors each and 500,000 square feet total size


• 2 IDFs per floor
• 1 aggregation point per building
• 5,000 employees with up to 18,000 concurrent IPv4 clients
• 500 APs
• 2 gateways
• 1 MDF/computer room
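The bandwidth and client figures above can be sanity-checked with simple ratios. The sketch assumes 144 edge ports per access stack with 2 x 25 Gb/s uplinks (as in the bill-of-materials below) and an average of 1 Gb/s per edge port, which is an assumption for illustration; Smart Rate ports can negotiate 1/2.5/5 Gb/s:

```python
# Illustrative sizing math for the medium campus design.
def oversubscription_ratio(edge_ports: int, gbps_per_port: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """Aggregate edge bandwidth divided by uplink bandwidth for one stack."""
    return (edge_ports * gbps_per_port) / (uplinks * uplink_gbps)

def clients_per_ap(concurrent_clients: int, ap_count: int) -> float:
    """Average concurrent client load per AP."""
    return concurrent_clients / ap_count
```

With these assumptions each access stack runs at a 2.88:1 oversubscription ratio, and 18,000 concurrent clients across 500 APs averages 36 clients per AP, comfortably below the per-radio maximums in the AP table above.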


Example Bill-of-Materials

The example medium campus bill-of-materials includes redundancy and bandwidth suitable for a
highly reliable LAN and WLAN for a US-based, medium campus.

• 90 x Aruba 6300M 48-port HPE Smart Rate 1/2.5/5GbE Class 6 PoE and 4-port SFP56 Switch (JL659A): Operate the access switches as 3-device VSF stacks providing 144 access ports per IDF with redundant 25 Gb/s uplinks.
• 6 x Aruba 6200F 48G Class4 PoE 4SFP+ 370W Switch (JL727A): Use alternative switches for wired access service to low-density areas or a management LAN.
• 66 x Aruba 25G SFP28 LC eSR 400m MMF XCVR (JL485A): 25 Gb/s links to aggregation switches from access switches.
• 6 x Aruba 8360-32Y4C v2 32p 25G SFP/SFP+/SFP28 4 Sec 4p 100G QSFP+/QSFP28 Front-to-Back 3 Fans 2 AC (JL700C): Operate the aggregation switches as a VSX pair with 25 Gb/s downlinks to the access stacks and 100 Gb/s uplinks to the core.
• 66 x Aruba 25G SFP28 LC eSR 400m MMF XCVR (JL485A): 25 Gb/s links to access switches from aggregation switches.
• 12 x Aruba 100G QSFP28 LC CWDM4 2km SMF Transceiver (R0Z30A): 100 Gb/s links to core switches from aggregation switches.
• 2 x Aruba 8360-12C v2 12-port 100G QSFP+/QSFP28 Front-to-Back 3 Fans 2 AC (JL708C): 12-port, VSX-capable core switches with 100 Gb/s ports.
• 12 x Aruba 100G QSFP28 LC CWDM4 2km SMF Transceiver (R0Z30A): 100 Gb/s links to building aggregation switches from core switches.
• 4 x Aruba 25G SFP28 LC SR 100m MMF Transceiver (JL484A): 25 Gb/s links to WLAN gateway cluster from core switches.
• 2 x Aruba 9240 (US) Campus Gateway 4xSFP28 1 Expansion Slot (R7H95A): Gateway cluster.
• 4 x Aruba 25G SFP28 LC SR 100m MMF Transceiver (JL484A): 25 Gb/s links to core from gateway cluster.
• 500 x Aruba AP-655 (US) Tri-radio 4x4:4 802.11ax Wi-Fi 6E Internal Antennas Campus AP (R7J39A): A high-performance, 3-radio AP providing best-in-class Wi-Fi 6E services and performance.
• Aruba ClearPass: Authentication and policy services for the campus network.
• Aruba Central: Cloud management and AI-driven insights for the campus network.


Large Campus
The large campus architecture is targeted for organizations supporting more than 15,000 users with
multiple devices per user. The network would typically be a group of large buildings located near one
another. This architecture uses a standalone, layer 3 core and services aggregation points connecting
to gateways and WAN services.

The illustration below depicts a large campus comprising six IDFs connecting to two access aggregation
points that are further connected to a standalone, layer 3-only core with gateways connected to a
services aggregation VSX pair.

Example Design

The example large campus design includes a standalone, layer 3 redundant core and services aggrega-
tion points for WLAN and WAN. Each floor includes two IDFs that connect to the MDF. This large campus
reference design supports 15,000 employees and requires 1500 APs to provide full 2.4 GHz and 5 GHz
coverage.

Building characteristics:


• 10 buildings of 3 floors each and 1.8 million square feet total size
• 2 IDFs per floor
• 1 aggregation point per building
• 15,000 employees with up to 50,000 concurrent IPv4 clients
• 1,500 APs
• 4 gateways
• 1 MDF/on-prem data center

Example Bill-of-Materials

The example large campus bill-of-materials includes redundancy and bandwidth suitable for a highly
reliable LAN and WLAN for a US based, large campus.

• 180 x Aruba 6300M 48-port HPE Smart Rate 1/2.5/5GbE Class 6 PoE and 4-port SFP56 Switch (JL659A): Operate the access switches as 3-device VSF stacks providing 144 access ports per IDF with redundant 25 Gb/s uplinks.
• 10 x Aruba 6200F 48G Class4 PoE 4SFP+ 370W Switch (JL727A): Use alternative switches for wired access service to low-density areas or a management LAN.
• 190 x Aruba 25G SFP28 LC eSR 400m MMF XCVR (JL485A): 25 Gb/s links to aggregation switches from access switches.
• 10 x Aruba 8360-32Y4C v2 32p 25G SFP/SFP+/SFP28 4 Sec 4p 100G QSFP+/QSFP28 Front-to-Back 3 Fans 2 AC (JL700C): Operate the aggregation switches as a VSX pair with 25 Gb/s downlinks to the access stacks and 100 Gb/s uplinks to the core.
• 190 x Aruba 25G SFP28 LC eSR 400m MMF XCVR (JL485A): 25 Gb/s links to access switches from aggregation switches.
• 20 x Aruba 100G QSFP28 LC CWDM4 2km SMF Transceiver (R0Z30A): 100 Gb/s links to core switches from aggregation switches.
• 2 x Aruba 8325-32C 32-port 100G QSFP+/QSFP28 Front-to-Back 6 Fans 2 AC (JL626A): 32-port, VSX-capable core switches with 100 Gb/s ports.
• 10 x Aruba 100G QSFP28 LC CWDM4 2km SMF Transceiver (R0Z30A): 100 Gb/s links to building aggregation switches from core switches.
• 2 x Aruba 8360-16Y2C v2 16p 25G SFP/SFP+/SFP28 2p 100G QSFP+/28 Front-to-Back 3 Fans 2 AC (JL702C): 16-port, VSX-capable WLAN aggregation switches.
• 4 x Aruba 100G QSFP28 LC CWDM4 2km SMF Transceiver (R0Z30A): 100 Gb/s links to core switches from aggregation switches.
• 8 x Aruba 25G SFP28 LC SR 100m MMF Transceiver (JL484A): 25 Gb/s links to WLAN gateway cluster from aggregation switches.
• 4 x Aruba 9240 (US) Campus Gateway 4xSFP28 1 Expansion Slot (R7H95A): Gateway cluster.
• 8 x Aruba 25G SFP28 LC SR 100m MMF Transceiver (JL484A): 25 Gb/s links to core from gateway cluster.
• 1500 x Aruba AP-655 (US) Tri-radio 4x4:4 802.11ax Wi-Fi 6E Internal Antennas Campus AP (R7J39A): A high-performance, 3-radio AP providing best-in-class Wi-Fi 6E services and performance.
• Aruba ClearPass: Authentication and policy services for the campus network.
• Aruba Central: Cloud management and AI-driven insights for the campus network.

Distributed Overlay
The large campus reference architecture section above provides guidance for hardware selection for
the various personas.

Figure 11: Distributed Overlay


Underlay Components

The table below lists the switch models appropriate for each persona required in a distributed overlay
design. Refer to the large campus reference architecture for additional switch configuration guidance.

Device Persona    Place in Network       Platform           Minimum Software Version
Route Reflector   Campus Core            8360, 8325, 8400   CX 10.10.0002
Edge              Campus Access          6300               CX 10.10.0002
Stub              Wireless Aggregation   8360               CX 10.10.0002
Border            WAN Aggregation        8360, 8325         CX 10.10.0002
WLAN Gateway      WLAN Gateway           7xxx and 9xxx GW   AOS 10.4.0

Services Recommendation

Deploy Aruba ESP distributed overlay networks using Aruba Central NetConductor for a simplified
workflow that enables the EVPN-VXLAN control plane, layer 2 topology, and layer 3 services.

Capacity Planning
The following section provides planning guidance for switching and gateway capacity within the ESP
campus reference architectures. The architectures were tested thoroughly in an end-to-end solution
environment that incorporates best-practice deployment recommendations, applications, and load
profiles that represent production environments.

The following tables provide validated values for capacity planning of the ESP campus design.

Core and Aggregation Switch Scaling

Refer to the Aruba product data sheets for detailed specifications not included in this guide.

Aruba campus core and aggregation switches

8XXX core and aggregation switch

#VLANs:                84xx 4,094; 8360 4,094; 8325 4,040; 8320 4,040
#ACLs:                 84xx 16,000 (64,000 entries per ACL); 8360 4,000 (8,000 entries per ACL); 8325 512 (2,304 entries per ACL); 8320 4,000 (14,336 entries per ACL)
ACL entries, ingress:  84xx IPv4 512,000, IPv6 98,304, MAC 98,304; 8360 IPv4 65,536, IPv6 16,384, MAC 65,536; 8325 IPv4 2,304, IPv6 2,304, MAC 2,304; 8320 IPv4 14,336, IPv6 7,168
ACL entries, egress:   84xx IPv4 198,656; 8360 IPv4 8,192, IPv6 2,048, MAC 8,192; 8325 IPv4 2,304, IPv6 256; 8320 IPv4 256, IPv6 255
MAC addresses:         84xx 768,000; 8360 212,992; 8325 98,304; 8320 98,304
ARP:                   84xx IPv4 756,000, IPv6 524,000; 8360 IPv4 145,780, IPv6 145,780; 8325 IPv4 120,000, IPv6 52,000; 8320 IPv4 120,000, IPv6 52,000
Routing:               84xx IPv4 1,011,712, IPv6 524,288, v4+v6 1,011,712; 8360 IPv4 606,977, IPv6 630,784, v4+v6 606,977; 8325 IPv4 131,072, IPv6 32,732, v4+v6 163,796; 8320 IPv4 131,072, IPv6 32,732, v4+v6 163,796
IGMP/MLD:              84xx I 32,767, M 32,767; 8360 I 7,000, M 7,000; 8325 I 4,094, M 4,094; 8320 I 4,094, M 4,094
Multicast routes:      84xx IPv4 32,767, IPv6 32,767; 8360 IPv4 7,000, IPv6 7,000; 8325 IPv4 4,094, IPv6 4,094; 8320 IPv4 4,094, IPv6 4,094
Active gateway:        84xx IPv4 4,094, IPv6 4,094, v4+v6 4,094; 8360 IPv4 1,024, IPv6 1,024, v4+v6 1,026; 8325 IPv4 4,040, IPv6 4,040, v4+v6 4,040; 8320 IPv4 4,040, IPv6 4,040, v4+v6 4,040
#LAGs:                 84xx 256 (16 ports per LAG); 8360 52 (16 ports per LAG); 8325 56 (32 for JL627A; 16 ports per LAG); 8320 54 (32 for JL759A; 16 ports per LAG)
#VRFs:                 84xx 256; 8360 256; 8325 256; 8320 256

6XXX aggregation switch

Feature               6400                                            6300
#VLANs                4,094                                           4,094
#ACLs                 4,000 (8,000 entries per ACL)                   4,000 (8,000 entries per ACL)
ACL entries ingress   IPv4 64,000, IPv6 64,000, MAC 64,000            IPv4 20,480, IPv6 5,120, MAC 20,480
ACL entries egress    IPv4 64,000, IPv6 20,460, MAC 64,000            IPv4 8,192, IPv6 2,048, MAC 8,192
MAC                   32,768                                          32,768
ARP                   IPv4 49,152, IPv6 49,152                        IPv4 49,152, IPv6 49,152
Routing               IPv4 61,000, IPv6 61,000, v4+v6 65,536          IPv4 61,000, IPv6 61,000, v4+v6 65,536
IGMP/MLD              I 7,000, M 7,000                                I 8,192, M 8,192
Multicast routes      IPv4 8,192, IPv6 8,192                          IPv4 8,192, IPv6 8,192
Active gateway        IPv4 1,024, IPv6 1,024, v4+v6 1,024             IPv4 1,024, IPv6 1,024, v4+v6 1,024
#LAGs                 256 (16 ports per LAG)                          52 (16 ports per LAG)
#VRFs                 256                                             256

Access Switch Scaling

Refer to the Aruba product data sheets for detailed specifications not included in this guide.

Aruba campus access switches

6XXX access switch

Feature                  6400                                   6300                                  6200
VLANs                    4,094                                  4,094                                 2,048
ACLs                     4,000 (8,000 entries per ACL)          4,000 (8,000 entries per ACL)         4,000 (8,000 entries per ACL)
ACL entries ingress      IPv4 64,000, IPv6 64,000, MAC 64,000   IPv4 20,480, IPv6 5,120, MAC 20,480   IPv4 5,120, IPv6 1,280, MAC 5,120
ACL entries egress       IPv4 64,000, IPv6 20,460, MAC 64,000   IPv4 8,192, IPv6 2,048, MAC 8,192     IPv4 2,048, IPv6 512, MAC 2,048
MAC table                32,768                                 32,768                                16,000
UBT clients per port     256                                    256                                   128
UBT clients per system   1,024                                  1,024                                 1,024
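A planned User-Based Tunneling deployment can be checked against the per-port and per-system limits in the table above. The limit values below are copied from the table; the function name and dictionary structure are illustrative:

```python
# UBT limits per access switch family, taken from the access switch
# scaling table (clients per port, clients per system).
UBT_LIMITS = {
    "6300": {"per_port": 256, "per_system": 1024},
    "6200": {"per_port": 128, "per_system": 1024},
}

def ubt_fits(model: str, max_clients_per_port: int, total_clients: int) -> bool:
    """Return True if the planned UBT load stays within platform limits."""
    lim = UBT_LIMITS[model]
    return (max_clients_per_port <= lim["per_port"]
            and total_clients <= lim["per_system"])
```

For example, a 6300 stack planned for 200 clients on its busiest port and 900 clients overall fits; the same plan on a 6200 does not, because the per-port limit is 128.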

Gateway Scaling

Refer to the Aruba product data sheets for detailed specifications not included in this guide.

Aruba Gateways

7XXX Gateway

Gateway   UBT tunnels   APs     Clients   VLANs   MACs      IPsec    Active firewall sessions   ACLs    Roles
7280      34,816        2,048   32,000    4,094   131,072   32,768   1,940,717                  2,680   1,318
7240      34,816        2,048   32,000    4,094   131,047   32,768   1,917,569                  2,680   1,318
7240XM    34,816        2,048   32,000    4,094   131,047   32,768   1,917,569                  2,680   1,318
7220      17,408        1,024   32,000    4,094   130,899   24,576   1,896,839                  2,680   1,318
7210      8,704         512     16,000    4,094   130,363   16,384   1,834,959                  2,680   1,318
7205      4,352         256     8,000     4,094   130,764   8,192    1,910,147                  2,680   1,318
7030      1,088         64      4,000     4,094   65,536    4,096    63,783                     2,680   1,318
7024      544           32      2,000     4,094   65,531    2,048    64,171                     2,680   1,318
7010      544           32      2,000     4,094   65,529    2,048    63,840                     2,680   1,318
7008      272           16      1,000     4,094   65,530    1,024    63,714                     2,680   1,318
7005      272           16      1,000     4,094   65,536    1,024    64,844                     2,680   1,318

9XXX Gateway

Gateway   UBT tunnels   APs   Clients   VLANs   MACs     IPsec   Active firewall sessions   ACLs    Roles
9004      544           32    2,000     4,094   16,384   2,048   64,000                     2,680   1,316
9012      544           64    4,000     4,094   16,384   2,048   64,000                     2,680   1,316
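Gateway cluster sizing should account for a member failure. The sketch below computes the per-gateway AP load with one cluster member out of service; the comparison against a capacity limit is left to the designer, since the per-gateway AP capacity for the 9240 used in the large campus design is not listed in the tables above:

```python
# N+1 sizing sketch for a gateway cluster: with one member failed, the
# remaining gateways must carry the full AP load.
def aps_per_gateway_after_failure(total_aps: int, cluster_size: int) -> float:
    """Per-gateway AP load with one gateway out of service."""
    if cluster_size < 2:
        raise ValueError("N+1 math requires at least 2 gateways")
    return total_aps / (cluster_size - 1)
```

For the large campus above (1,500 APs across 4 gateways), a single failure leaves 500 APs per surviving gateway.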


Aruba Central NetConductor Scale Guide


The following scale guidance is for Aruba Central NetConductor on Central 2.5.5 and AOS-CX 10.10.
Several, if not all, of these values are expected to increase as improvements are implemented and
additional scale testing is completed.

Per fabric IP scale:

• 128 VTEPs
• 131 BGP sessions
• 4 L3 VRFs
• 20 VLANs extended across all VTEPs
• 10k IPv4 and IPv6 host routes
• 40k fabric IPv4/IPv6 ARP/ND

VTEP MAC Capacity:

• Access and Stub VTEPs = 23000


• Border VTEP = 1000
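The per-fabric limits above can be encoded as a simple design check. The limit values are copied from the list above; the plan dictionary keys are hypothetical names chosen for this sketch:

```python
# Per-fabric NetConductor limits (Central 2.5.5 / AOS-CX 10.10) as listed above.
FABRIC_LIMITS = {
    "vteps": 128,
    "bgp_sessions": 131,
    "l3_vrfs": 4,
    "extended_vlans": 20,
    "host_routes": 10_000,
}

def within_fabric_limits(plan: dict) -> bool:
    """Return True if every planned value is at or below its fabric limit."""
    return all(plan.get(key, 0) <= limit for key, limit in FABRIC_LIMITS.items())
```

A plan of 100 VTEPs and 8,000 host routes passes; 200 VTEPs fails the 128-VTEP limit.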

Please find the detailed NTL validated scale numbers at the URL below.

https://hpe.sharepoint.com/teams/ArubaDocRepo/DocPortal/Shared%20Documents/Forms/AllItems.aspx?csf=1&web

Unicast Traffic Validation

The following combinations of traffic streams were validated:

• Traffic between the wired clients on physical VTEPs

• Traffic between the wired clients on physical VTEPs and simulated VTEPs

• Traffic between the wired clients and UBT clients

• Traffic between the wired clients and wireless clients

Multicast Validation on Simulated VTEPs

• The multicast sources on the border VTEP.


• The multicast clients on access VTEPs.
• The RP configured on the border VTEP.
• IGMP snooping enabled on all the extended VLANs across the VTEPs.
• IGMP enabled on the interface VLANs.
• PIM enabled on one of the VRFs.
• 2K joins were simulated from physical VTEPs.


Summary
The flow of information is a critical component of a well-run organization. The Aruba ESP campus design
provides a prescriptive solution based on best practices and validated topologies to keep information
moving seamlessly and securely. Organizations can build robust networks that easily accommodate
their technical requirements without sacrificing capacity.

Whether users are at a large campus or a smaller remote site, the design provides a consistent set of
features and functionality for network access, which helps improve user satisfaction and productivity
while reducing operational expense. The ESP campus delivers a consistent and scalable methodology,
improving overall usable network bandwidth and resilience while making it easier to deploy, maintain,
and troubleshoot.


What’s New in This Version


The following changes were made since Aruba last published this guide:

• New structure and consolidated content (August 2022)



© Copyright 2021 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The
only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be
liable for technical or editorial errors or omissions contained herein. Aruba Networks and the Aruba logo are registered trademarks of Aruba
Networks, Inc. Third-party trademarks mentioned are the property of their respective owners. To view the end-user software agreement,
go to: www.arubanetworks.com/assets/legal/EULA.pdf

ESP-CPDS
