Cisco DNA Center 1.2.6
Implementing Software-Defined Access with Fabric Enabled Wireless

Lab Guide
Version 2.00
10 March 2019

Presented by: Cisco’s Solutions Readiness Engineering Team


Contents

Executive Summary
    Software-Defined Access
Introducing Cisco DNA Center 1.2
    What is Cisco DNA Center?
    Software-Defined Access vs DNA Center
    DNA Center Version History and Release Dates
    SDA Versioning
    Introduction to Fabric Enabled Wireless
    Wireless Integration in SD-Access
    SD-Access Wireless Architecture
    SD-Access Wireless Architecture Components
        SD-Access Wireless Protocols and Communication Interfaces
        SD-Access Wireless Platform Support
        SD-Access Wireless Network Deployment
        AP to WLC Communication
About this Lab
    Lab Topology (Simplified View)
        Physical Topology – Simplified View
        Topology Diagram
    Addresses and Credentials
Exercise 1: Introduction to DNA Center 1.2
Exercise 2: Using the DNA Center Discovery Tool
Exercise 3: Using the DNA Center Inventory Tool
    About Device Roles
Exercise 4: Integrating DNA Center with the Identity Services Engine (ISE)
    About ISE ‘Day 0 Brownfield Support’
Exercise 5: Using the DNA Center Design Application
    Creating Sites and Buildings – Network Hierarchy Tab
    Defining Shared Common Servers – Network Settings Tab (Part 1)
        About Multiple AAA
    Device Credentials – Network Settings Tab (Part 2)
    IP Address Pools – Network Settings Tab (Part 3)
        About Global and Site-Specific Network Settings
    Create and Reserve IP Address Pool
Exercise 6: Using the DNA Center Policy Application
    Creating VNs and Binding SGTs to VNs
Exercise 7: Using the DNA Center Provision Application
    About Provisioning
    Assigning Devices to a Site – Provisioning Step 1
    Assigning Shared Services – Provisioning Step 2
Exercise 8: Creating the Fabric Overlay
    Identify and Create Transits
    Creating the Fabric Domain
    About the Software-Defined Access Validation Feature in DNA Center
        About Pre-Verification
        About Verification (Post-Verification)
    Adding Devices to the Fabric
Exercise 9: Fusion Routers and Configuring the FusionInternal Router
    Border Automation & Fusion Router Configuration Variations – BorderNode and FusionInternal
    Identifying the Variation – BorderNode and FusionInternal
    Creating Layer-3 Connectivity
        About Route Distinguishers (RD)
        About Route Targets (RT)
        Putting It All Together
    Extending the VRFs to the FusionInternal
    Optional – Extending the VRFs – Verification
    Use VRF Leaking to Share Routes on FusionInternal
    Use VRF Leaking to Share Routes and Advertise to BorderNode
        About Route Leaking
    Route Leaking – Validation (BorderNode and Control Plane Nodes)
    Optional – Route Leaking – Validation (Edge Nodes)
        LISP Forwarding Logic – Part 1
Exercise 10: DNA Center Host Onboarding
    Host Onboarding Part 1
    Host Onboarding Part 2 (for CAMPUS VN)
    Host Onboarding Part 2 (for INFRA VN)
    Host Onboarding Part 2 (for GUEST VN)
    Host Onboarding – Verification (Optional)
Exercise 11: Configuring External Connectivity
    VRF to GRT and Back Again
        Lab Solution
    Border Automation & Fusion Router Configuration Variations – DefaultBorder and FusionExternal
    Identifying the Variation – DefaultBorder and FusionExternal
    Creating Layer-3 Connectivity
    Advertise SDA Prefixes via BGP
    Extending the VRFs – Validation
    Advertise the Default Route to DefaultBorder
    Lab Environment Specific Configurations – Advertising Additional Routes
    Default Route Verification
Exercise 12: Testing Wireless Connectivity
    Enable the Wireless Client to Connect


Executive Summary
Digitization is transforming business in every industry – requiring every company to be an IT company.
Cisco’s Digital Network Architecture (DNA) is an open, software-driven architecture built on a set of
design principles with the objective of providing:
- Insights and Actions to drive faster business innovation
- Automation and Assurance to lower IT costs and complexity while meeting business and user
expectations
- Security and Compliance to reduce risk as the organization continues to expand and grow

Software-Defined Access

Cisco’s revolutionary Software-Defined Access (SD-Access) solution provides policy-based automation
from the edge to the cloud. Secure segmentation for users and things is enabled through a network
overlay fabric, drastically simplifying and scaling operations while providing complete visibility and
delivering new services quickly. By automating day-to-day tasks such as configuration, provisioning, and
troubleshooting, SD-Access reduces the time it takes to adapt the network, improves issue resolution,
and reduces the impact of security breaches. This results in significant CapEx and OpEx savings for the
business.

SD-Access Benefits



Introducing Cisco DNA Center 1.2

What is Cisco DNA Center?

The Cisco Digital Network Architecture Center (DNA Center) is a collection of applications, processes,
services, and packages running on a dedicated hardware appliance. DNA Center is the centralized
manager and the single pane of glass that powers Cisco’s Digital Network Architecture (DNA). DNA
begins with the foundation of a digital-ready infrastructure that includes the routers, switches, access
points, and Wireless LAN controllers. The Identity Services Engine (ISE) is the key policy manager for the
DNA solution. DNA Center is brought to life with the simplified workflow of Design, Policy, Provision,
and Assurance. These are the tools that power DNA Center.

Figure 0-1: Digital Network Architecture Solution

DNA Center has its own internal architecture that is composed of three parts. The base is called
Maglev. On this base are two software stacks: the Network Controller Platform (NCP) and the Network
Data Platform (NDP). NCP is often referred to as Automation, and NDP is referred to as Analytics. The
NCP software stack provides the Software-Defined Access solution, and the NDP software stack
provides the Assurance solution.

These software stacks have several abstracted network services, network applications, bundles, and
plugins. There are hundreds of microservices running, and this number continues to grow as additional
functionality is added to DNA Center. Exposed directly to the user are packages. These are the visible
deployed units seen on the settings page. From the perspective of the DNA Center dashboard – the GUI –
these packages make up the components that power the DNA Center workflows through applications
and tools. On the dashboard, applications are the top rows of icons and the tools are the bottom rows.



Figure 0-2: DNA Center 1.2 Dashboard

Software-Defined Access vs DNA Center

DNA Center is the physical appliance running Applications and Tools. Software-Defined Access (SDA) is
just one of the solutions provided by the DNA Center appliance. The difference is very subtle, but it is
critical to understand, particularly when referring to version numbering.

Another solution provided by DNA Center is Assurance. Additional solutions on the roadmap include
Cisco SD-WAN (Viptela) and Wide Area Bonjour. While the solutions provided by DNA Center are
tightly integrated, some can run in isolation. A DNA Center appliance can be deployed for Assurance
(Analytics) and/or deployed for Software-Defined Access (Automation). This is accomplished by the
installation or removal of certain packages, as listed in the release notes.

Note: If Automation and Assurance are deployed in this isolated manner, the two deployments would not coexist in
the same network. They would be independent and separate networks with no knowledge of each other. Unlike DNA
Center 1.0, where Automation and Assurance were a two-box solution, a DNA Center 1.2 deployment utilizing both
solutions must have them both installed on the same appliance.

Cisco DNA Center Version History and Release Dates

Cisco DNA Center has had a rich history of version upgrades since Early Field Trials (EFT) and First
Customer Shipment (FCS). A regular set of DNA Center patches is on the roadmap for the foreseeable
future.



1. DNA Center 1.1.6 – May 17, 2018
2. DNA Center 1.2.5 – October 2, 2018
3. DNA Center 1.2.6 – October 31, 2018
4. DNA Center 1.2.8 – December 20, 2018

Software-Defined Access (SDA) Versioning

The SDA solution version is a combination of Cisco DNA Center controller version, Device Platform
(IOS, IOS XE, NX-OS, and AireOS) version, and the version of Cisco ISE. The SDA version and the DNA
Center version are not the same thing. This means there are different device compatibility
specifications with DNA Center (for Assurance) and Software-Defined Access (for Automation).

When upgrading any SDA components, including DNA Center controller version, device platform
version, and ISE version, it is critically important to pay attention to the versions required to maintain
compatibility between all the components. The SD-Access Product Compatibility matrix – located here –
will indicate the compatible DNA Center, device platform, and ISE versions.

At the time this lab guide was created, the current SDA version was SDA 1.2 (update 1.2.6). This indicates
that SDA is running on DNA Center version 1.2.6. There are now two separate DNA Center code
trains – 1.1.x and 1.2.x – with 1.2.x bringing new features to the platform.

Introduction to Fabric Enabled Wireless

Fabric Enabled Wireless (FEW) or SD-Access Wireless is defined as the integration of wireless access in
the SD-Access architecture to gain all the advantages of the Fabric and DNA-C automation.

Some of these benefits are:

• Centralized Wireless control plane: The same innovative RF features that Cisco has today in
Cisco Unified Wireless Network (CUWN) deployments are leveraged in SD-Access Wireless as
well. Wireless operations stay the same as with CUWN in terms of RRM, client onboarding,
client mobility, and so on, which simplifies IT adoption
• Optimized distributed data plane: The data plane is distributed at the edge switches for
optimal performance and scalability, without the hassles usually associated with distributing
traffic (spanning VLANs, subnetting, large broadcast domains, etc.)
• Seamless L2 roaming everywhere: The SD-Access Fabric allows clients to roam seamlessly across
the campus while retaining the same IP address
• Simplified guest and mobility tunneling: An anchor WLC is no longer needed, and guest
traffic can go directly to the DMZ without hopping through a foreign controller
• Policy simplification: SD-Access breaks the dependencies between policy and network
constructs (IP addresses and VLANs), simplifying the way policies are defined and implemented
for both wired and wireless clients



• Segmentation made easy: Segmentation is carried end to end in the Fabric and is hierarchical,
based on Virtual Networks (VNs) and Scalable Group Tags (SGTs). The same segmentation policy
is applied to both wired and wireless users

All these advantages while still maintaining:

• Best-in-class wireless with future-proof Wave 2 APs and 3504/5520/8540 WLCs
• Investment protection by supporting existing AireOS WLCs; SD-Access Wireless is optimized for
802.11ac Wave 2 APs but also supports Wave 1 APs

The benefits are summarized in the picture below:

Wireless integration in SD-Access

If a customer has a wired network based on the SD-Access Fabric, there are two options for integrating
wireless access:

1. SD-Access Wireless Architecture

2. CUWN Wireless Over the Top (OTT)

Let's first examine SD-Access Wireless, since it brings the full advantages of the Fabric to wireless users
and things. The architecture and the main components are introduced first, and then the reader will
learn how to set up an SD-Access Wireless network using DNAC.

OTT is basically running traditional wireless on top of an SDA Fabric wired network. This option is
covered later in the document together with the design considerations.



SD–Access Wireless Architecture

The overall architecture is represented in the diagram below:

In SD-Access Wireless, the control plane is centralized: as with CUWN, a CAPWAP tunnel is
maintained between the APs and the WLC. The main difference is that the data plane is distributed
using VXLAN directly from the Fabric-enabled APs. The WLC and APs are integrated into the Fabric, and
the Access Points connect to the Fabric overlay (Endpoint ID space) network as “special” clients.

SD-Access Wireless Architecture Components

Below is the main architecture diagram with a description of the main components.



• Control Plane Nodes–Host database that manages Endpoint ID to Device relationships
• Fabric Border Nodes–A Fabric device (e.g. Core or Distribution switch) that connects external L3
network(s) to the SDA Fabric. There are two types of Border nodes: the Fabric Border, which
connects to “known” routes, and the Default Border, which connects to “unknown” routes. In the
rest of the document, the Border nomenclature is used when referring to the generic function of
connecting the Fabric to the outside network
• Fabric Edge Nodes–A Fabric device (e.g. Access switch) that connects Wired Endpoints to the
SDA Fabric
• Fabric Wireless Controller–Wireless Controller (WLC) that is fabric-enabled
• Fabric Mode APs–Access Points that are fabric-enabled
• DNA Center (DNAC)–Single pane of glass for Enterprise network Automation and Assurance.
DNAC brings together the Enterprise SDN Controller, the Policy Engine (ISE) and the Analytics
Engine (NDP)
• Policy Engine–External ID Services (e.g. ISE) is leveraged for dynamic User or Device to Group
mapping and policy definition
• Analytics Engine–External Data Collector (Cisco Network Data Platform) is leveraged to analyze
User or Device to App flows and monitor fabric status.

The following sections describe the roles and functions of the main components of the SD-Access
Wireless Architecture.



Control Plane Node

Fabric Control-Plane Node is based on a LISP Map Server / Resolver and runs the Fabric Endpoint ID
Database to provide overlay reachability information.

• The CP is the host database; it tracks Endpoint ID (EID) to Edge Node bindings, along with other
attributes
• The CP supports multiple types of Endpoint ID lookup keys (IPv4 /32, IPv6 /128 (*), or MAC
addresses)
• It receives prefix registrations from Edge Nodes and Fabric WLCs for wired local endpoints and
wireless clients, respectively
• It resolves lookup requests from remote Edge Nodes to locate Endpoints
• It updates Fabric Edge nodes and Border nodes with wireless client mobility and RLOC
information

(*) IPv6 is not supported in the first release of SDA
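DNA Center provisions these LISP functions automatically, but it helps to see roughly what they look like on the box. Below is a minimal, simplified sketch of a map-server/map-resolver configuration on IOS XE; the site name, authentication key, and EID prefix are illustrative assumptions, and the configuration DNA Center actually generates is more elaborate, using per-VN instance IDs:

router lisp
 site Example-Site
  ! Registration key the edge nodes must present (hypothetical value)
  authentication-key example-key
  ! Overlay host pool this site may register (hypothetical prefix)
  eid-prefix 10.20.30.0/24 accept-more-specifics
 exit
 ! Enable the map-server and map-resolver functions on this node
 ipv4 map-server
 ipv4 map-resolver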

Fabric Edge Node

Fabric edge provides connectivity for Users and Devices connected to the Fabric.

• Responsible for Identifying and Authenticating Wired Endpoints



• Registers wireless IPv4/IPv6 Endpoint ID information with the Control-Plane Node(s)
• Provides an Anycast L3 Gateway for connected Endpoints
• Provides VN services for wireless clients
• Onboards APs into the fabric and forms VXLAN tunnels with the APs
• Provides Guest functionality for wireless hosts, interacting with the Guest Border and Guest
Control-Plane node
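These functions can be spot-checked from the edge node CLI once the fabric is built. The commands below are read-only; the instance ID is a hypothetical placeholder, as SD-Access assigns one per Virtual Network:

show lisp session                          ! reliable-transport sessions to the Control-Plane node(s)
show lisp instance-id 4099 ipv4 database   ! EIDs this edge has registered (instance ID illustrative)
show lisp instance-id 4099 ipv4 map-cache  ! resolved EID-to-RLOC mappings for remote endpoints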

Fabric Border Node

All traffic entering or leaving the Fabric goes through the Fabric Border.

• There are 2 types of Fabric Border node: Border and Default Border. Both types provide the
fundamental routing entry and exit point for all data traffic going into and/or out of the Fabric
Overlay, as well as for VN and/or group-based policy enforcement (for traffic outside the Fabric).
• A Fabric Border is used to add “known” IP/mask routes to the map system. A known route is
any IP/mask that you want to advertise to your Fabric Edge nodes (e.g. remote WLC, Shared
Services, DC, Branch, Private Cloud, and so on)
• A Default Border is used for any “unknown” routes (e.g. Internet or Public Cloud), as a gateway
of last resort
• A Border is where the Fabric and non-Fabric domains exchange Endpoint reachability and policy
information
• The Border is responsible for translation of context (VRF and SGT) from one domain to another

Fabric Enabled WLC

The Fabric Enabled WLC integrates with the LISP Control Plane.

The control plane is centralized at the WLC for wireless functions.



• The WLC is still responsible for AP image/config, Radio Resource Management (RRM), client
session management and roaming, and all the other wireless control plane functions
• For Fabric integration:

▪ For wireless, the client MAC address is used as the EID

▪ The WLC interacts with the Host Tracking DB on the Control-Plane node for client MAC
address registration with SGT and Virtual Network information
▪ The VN information is mapped to a VLAN on the FEs
▪ The WLC is responsible for updating the Host Tracking DB with roaming information
for wireless clients
▪ A Fabric-enabled WLC needs to be physically located at the same site as its APs (the initial
release doesn't support APs deployed across a WAN from the WLC)

Note: The WLC and APs need to be within 20 ms of latency; usually this means the same physical site.

Fabric AP

Fabric AP extends the SD-Access data plane to the wireless edge.



• A Fabric AP is a local mode AP and needs to be directly connected to a Fabric Edge switch
• The CAPWAP control plane goes to the WLC using the Fabric as transport
• Fabric is enabled per SSID:

▪ For a Fabric-enabled SSID, the AP converts 802.11 traffic to 802.3 and encapsulates it into
VXLAN, encoding the VNI and SGT information of the client
▪ The AP forwards client traffic based on the forwarding table as programmed by the WLC.
Usually the VXLAN tunnel destination is the first-hop switch
▪ SGT- and VRF-based policies for wireless users on Fabric SSIDs are applied at the Fabric
edge, the same as for wired clients

• For Fabric-enabled SSIDs, the user data plane is distributed at the APs, leveraging VXLAN as the
encapsulation
• The AP applies all wireless-specific features like SSID policies, AVC, QoS, etc.

Note: For feature support on APs please refer to the WLC release notes.
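Once APs are onboarded later in the lab, the VXLAN tunnels from the edge switch to its fabric APs can be verified with a read-only command (a sketch, assuming a Catalyst 9300/3850 fabric edge):

show access-tunnel summary   ! lists the VXLAN access tunnels built from this edge switch to fabric APs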

SD–Access Wireless Protocols and Communication Interfaces

Below is a graphical and detailed explanation of the protocols and interfaces used in SD-Access Wireless:



• WLC <–> AP: Control plane communication between the WLC and AP is via CAPWAP, similar to
existing modes
• AP <–> Switch: Data traffic is switched from the AP to the Edge switch using VXLAN tunnel
encapsulation on UDP port 4789, per the standard
• WLC <–> Control Plane node: The wireless LAN controller communicates with the Control Plane
(CP) node using LISP on TCP port 4342 on the controller
• DNAC <–> WLC: In the first release, DNAC uses the CLI through SSH/Telnet to configure
the WLC
• Switch <–> Control Plane node: The fabric-enabled switches communicate with the Control
Plane node using LISP on UDP port 4342
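If a firewall sits between any of these components, the ports above must be permitted. The following is a hypothetical IOS extended ACL that illustrates the relevant ports, including the standard CAPWAP ports; it is not part of the lab configuration:

ip access-list extended FABRIC-CONTROL-AND-DATA
 permit udp any any eq 4342   ! LISP control plane (map-register / map-request)
 permit tcp any any eq 4342   ! LISP reliable transport (WLC to Control Plane node)
 permit udp any any eq 4789   ! VXLAN data plane
 permit udp any any eq 5246   ! CAPWAP control (AP to WLC)
 permit udp any any eq 5247   ! CAPWAP data (AP to WLC)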

SD-Access Wireless Platform Support

The SD-Access Wireless Architecture is supported on the following Wireless LAN controllers with AireOS
release 8.5 and higher:

• AIR-CT3504
• AIR-CT5520
• AIR-CT8540

This architecture is optimized for Wave 2 802.11ac access points in Local mode: AP1810, AP1815, AP1830,
AP1850, AP2800, and AP3800.

Wave 1 802.11ac access points are supported with SD-Access Wireless with a limited set of features
compared to Wave 2 (please refer to the release notes for more information). Outdoor APs are not
supported in SDA 1.1 (November release). Support for these access points is on the roadmap.

SD–Access Wireless Network deployment

There are some important considerations when deploying the WLC and APs in an SD-Access Wireless
network; please refer to the picture below:

For Access Points:

• AP is directly connected to FE (or to an extended node switch)


• AP is part of Fabric overlay
• AP belongs to the INFRA_VN which is mapped to the global routing table
• AP joins the WLC in Local mode

For the Wireless LAN Controller (WLC)

• WLC is connected outside Fabric (optionally directly to Border)


• WLC needs to reside in global routing table
• No need for inter-VRF leaking for AP to join the WLC
• WLC can only communicate to one Control Plane node (two for redundancy), hence one WLC
can only belong to one Fabric Domain (FD)



Note: In order to make the Fabric control plane protocol more resilient, it's important that a specific route to the
WLC is present in each fabric node's global routing table. The route to the WLC's IP address should be either
redistributed into the underlay IGP at the Border or configured statically at each node. In other words, the
WLC should not be reachable through the default route.
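As a sketch of the static option, each fabric node could carry a host route to the lab WLC (192.168.50.240); the next-hop address shown is a hypothetical underlay neighbor, not a lab value:

! Specific underlay route to the WLC, so it is not reached via the default route
ip route 192.168.50.240 255.255.255.255 192.0.2.1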

AP to WLC communication

From a network deployment perspective, the Access Points are connected in the overlay network, while
the WLC resides outside the SD-Access fabric in the traditional IP network.

The WLC subnet will be advertised into the underlay so fabric nodes in the network (fabric edge and
control plane) can do native routing to reach the WLC. The AP subnets in the overlay will be advertised
to the external network so the WLC can reach the APs via the overlay.

Let's look a bit deeper into how the CAPWAP traffic flows between the APs and the WLC for Fabric-enabled
SSIDs. This applies to control plane traffic only (AP join and all the other control plane traffic); client
data plane traffic does not go to the WLC, as it is distributed from the APs to the switch using VXLAN.

• The Border (internal or external) redistributes the WLC route into the underlay (using the IGP of choice)
• The FE learns the route in the Global Routing Table
• When the FE receives a CAPWAP packet from an AP, it finds a match in the RIB and the packet is
forwarded with no VXLAN encapsulation
• The AP-to-WLC CAPWAP traffic travels in the underlay



The South-North direction CAPWAP traffic, from APs to WLC, is described in the picture below:

The North-South direction CAPWAP traffic, from WLC to APs, is described in the picture below:

• The AP subnet is registered in the CP as it is part of the overlay

• The Border exports the AP local EID space from the CP to the global routing table and also
imports the AP routes into its LISP map-cache
• The Border advertises the local AP EID space to the external domain
• When the Border receives a CAPWAP packet from the WLC, a LISP lookup happens, and the traffic
is sent to the FE with VXLAN encapsulation
• The WLC-to-AP CAPWAP traffic travels in the overlay



Note: Here we have described the CAPWAP traffic path between the AP and the WLC. The same path applies to other
types of traffic originating from the AP and sent to destinations known in the global routing table, such as DHCP,
DNS, etc.
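This asymmetry can be spot-checked once the lab is built: from a fabric edge, the WLC should resolve in the global RIB and be forwarded natively, while on the Border the AP EID space is driven by LISP. The commands below are read-only; the lab WLC address is used, and the instance ID is a hypothetical placeholder:

show ip route 192.168.50.240               ! on a fabric edge: WLC reached via the underlay RIB
show ip cef 192.168.50.240                 ! confirms plain IP forwarding with no VXLAN imposition
show lisp instance-id 4097 ipv4 map-cache  ! on the Border: AP EID space learned via LISP (ID illustrative)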

About this Lab


This lab begins with the switching, routing, and underlay already provisioned, and with DNA Center and
Cisco ISE v2.3 installed and bootstrapped with IP addresses, NTP, and network connectivity. Cisco ISE
has authentication and authorization policies in place. This guide will walk users through the major use
cases and features of the Fabric enabled wireless solution in DNA Center 1.2.

Lab Exercises:

• Exercise 1: Introduction to DNA Center 1.2 - The lab begins with a quick overview of the new DNA-C
1.2.6.
• Exercise 2: Using the DNA Center Discovery Tool - The DNA Center Discovery tool will be used to
discover and view the underlay devices.
• Exercise 3: Using the DNA Center Inventory Tool - The Inventory tool will be used to help lay out
the topology maps for later use in the lab.
• Exercise 4: Integrating DNA Center with the Identity Services Engine (ISE) - DNA Center and ISE will
be integrated using ISE’s pxGrid interface.
• Exercise 5: Using the DNA Center Design Application - Sites will be created using the Design
Application. Here, common attributes, resources, and credentials are defined for re-use during
various DNA Center workflows.
• Exercise 6: Using the DNA Center Policy Application - Virtual Networks are then created in the
Policy Application, thus creating network-level segmentation. During this step, groups (SGTs)
learned from ISE will be associated with the Virtual Networks, creating micro-level segmentation.
• Exercise 7: Using the DNA Center Provision Application - Discovered devices will be provisioned to
the Site created in the Design Application.
• Exercise 8: Creating the Fabric Overlay - The overlay fabric will be provisioned.
• Exercise 9: Fusion Routers and Configuring FusionInternal Router - Fusion routers will be discussed
in detail, and FusionInternal will be configured.
• Exercise 10: DNA Center Host Onboarding – Access Points and end hosts will be onboarded in this
exercise.
• Exercise 11: Configuring External Connectivity – DefaultBorder (the external border node) will be
provisioned to join the Fabric, and external connectivity to the Internet from the Fabric will be
configured.
• Exercise 12: Testing Wireless Connectivity – Wireless clients will connect and external connectivity
will be tested.



Lab Topology (Simplified View)

The core of the network is a Catalyst 3850 (copper) switch called the LabAccessSwitch. It is the
intermediate node between the other devices that will ultimately become part of the Software-Defined
Access Fabric. It is directly connected to a Catalyst 9300 (#2) and another Catalyst 9300 (#3) switch.
These will both act as edge nodes in the Fabric. The LabAccessSwitch (#1) is also directly connected to a
pair of Catalyst 3850 (fiber) switches (#4 and #5) that will act as co-located control plane and border
nodes in the Fabric: CPN-DBN (#4) and CPN-BN (#5). The LabAccessSwitch (#1) is directly connected to
the segment that provides access to DNA Center, ISE, and the Jump Host (not pictured), and is the
default gateway for all of these management devices.

A pair of ISR-4451 routers provide fusion routing. FusionInternal (#6) acts as a fusion router for the
shared services; this internal fusion router provides network access to the WLC-3504 and the DHCP/DNS
server, which is a Windows Server 2012 R2 virtual machine. The Internet edge device, FusionExternal
(#7), will perform some route leaking and acts as an (external) fusion router.

Two Windows 7 machines act as the host machines in the network. They are connected to
GigabitEthernet 1/0/23 on EdgeNode1 (#2) and EdgeNode2 (#3).



Physical Topology – Simplified View
Topology Diagram



Addresses and Credentials

The table below provides the access information for the devices within a given pod. Usernames and
passwords are case sensitive.

IP Address                   Host Name         Username        Password
https://192.168.100.10/     DNA Center        admin           DNACisco!
https://192.168.100.20/     ISE               admin           ISEisC00L
https://192.168.50.240/     WLC_3504          Operator        CiscoDNA!12345
192.168.255.1                EdgeNode1         cisco           cisco
192.168.255.2                EdgeNode2         cisco           cisco
192.168.255.4                CPN-DBN           cisco           cisco
192.168.255.6                LabAccessSwitch   cisco           cisco
192.168.255.7                FusionInternal    cisco           Cisco
192.168.255.8                CPN-BN            cisco           Cisco
192.168.255.9                FusionExternal    cisco           Cisco
198.18.133.30                DHCP/DNS Server   Administrator   CiscoDNA!

Exercise 1: Introduction to DNA Center 1.2

Note: Due to current compatibility, please use Google Chrome as the browser throughout the lab.

Step 1. Open the browser to DNA Center using the management IP address
https://192.168.100.10 and login with the following credentials:

Username: admin
Password: DNACisco!

Note: DNA Center’s login screen is dynamic. It may have a different background. When first accessing the login
page, the browser may appear to freeze for up to approximately twenty seconds. This is directly related to our lab
environment and will not happen in production. It does not impact any configuration or actual performance of DNA
Center.

DNA Center’s SSL certificate may not be automatically accepted by your browser. If this occurs, use the advanced
settings to allow the connection – as shown below.



Step 2. Once logged in, the DNA Center dashboard is displayed.

Step 3. To view the DNA Center version, click on the gear at the top right and then select
About DNA Center. Notice the DNA Center Controller version 1.2.6.



Step 4. Press Show Packages to view the various packages that make up the DNA Center 1.2.6
Version.

Step 5. Click the OK button to close the dialog box.



Step 6. The DNA Center main screen is divided into two main areas, Applications and Tools.
Applications are the top half and Tools are the bottom half.

These areas contain the primary components for creating and managing the Solutions
provided by the DNA Center Appliance.

DNA Center 1.2 introduced the Assurance Application along with the Telemetry,
Network Plug and Play and Command Runner Tools among others.



Step 7. The System Settings pages control how the DNA Center system is integrated with other
platforms, show information on users and applications, and provide the ability to
perform system backup and restore.

To view the System Settings, click on the gear at the top right, and then select System
Settings.

Step 8. On the System 360 Tab, DNA Center displays the number of currently running primary
services. To view the running services, click the Up button.



Deployment Note: DNA Center can take 90-120 minutes or more to completely initialize after powering on.
This large collection of services takes time to be fully instantiated.

Note: Individual service log information can be accessed by hovering over a service and clicking the Grafana or
Kibana logos. These features are beyond the scope of this guide.

Step 9. Click the X to close the Services pane.


Remain on the System Settings Page.

Deployment Note: Recall that the solution, Software Defined Access, is dependent on the DNA Platform version, the
Individual package versions, and the IOS / IOS XE software versions. Research should be performed before
arbitrarily updating packages in DNA Center.

Note: The screen shot above was taken during an early upgrade cycle. The DNA Center in the lab may or may not
show updates available, as DNA Center updates are being released approximately every two weeks.

If updates are available, please DO NOT attempt to update the DNA Center appliance in the lab. This lab guide is
currently based on DNA Center 1.2.6.

Step 10. Click the Apps button or the DNA Center logo to return to the DNA Center dashboard.

This completes Exercise 1.



Exercise 2: Using the DNA Center Discovery Tool
In DNA Center, the Discovery tool is used to find existing devices using CDP (Cisco Discovery Protocol),
LLDP, or IP address ranges. An IP address range will be used in this lab. Before DNA Center can
perform automation and configuration on a device, the device must be discovered and added to the
inventory. Using the Discovery tool accomplishes both tasks.
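For discovery to succeed, each device must already accept CLI and SNMP access using the credentials that DNA Center will be given. As a rough device-side sketch using the credential values from this lab (exact commands vary by platform, and the lab devices are already configured; this is shown for illustration only):

! CLI user and enable secret used by the Discovery job
username Operator privilege 15 secret CiscoDNA!12345
enable secret cisco
! SNMP v2c read and write community strings
snmp-server community RO ro
snmp-server community RW rw
! Allow DNA Center to reach the VTY lines via Telnet or SSH
line vty 0 15
 login local
 transport input telnet ssh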

Step 11. Return to DNA Center in the browser.

Click on the Discovery tool from the home page.


A bookmark is also available in the browser.

Step 12. This opens the New Discovery page.


Enter the Discovery Name as SDA Lab.

Step 13. Choose the Discovery type as Range.

Note: Outside of the lab environment, this IP address could be any Layer-3 interface or Loopback Interface on any
switch that DNA Center has IP reachability to. In this lab, DNA Center is directly connected to the LabAccessSwitch
on Gig 1/0/12. That interface has an IP address of 192.168.100.6. The LabAccessSwitch is also DNA Center’s
default gateway to the actual Internet. It represents the best starting point to discover the lab topology.



Step 14. Since the lab is relatively small, we shall use the loopback IP range of all the devices
192.168.255.1 – 192.168.255.10.

Note: DNA Center uses the CDP table information (show cdp neighbors) of the defined device (192.168.100.6 /
LabAccessSwitch) to find CDP neighbors. It will continue to find CDP neighbors to the depth provided by CDP
level. This is done by querying the CISCO-CDP-MIB via SNMP.

Step 15. Change the Preferred Management IP to Use Loopback.

Note: This will instruct DNA Center to use the Loopback IP address of the discovered equipment for management
access. DNA Center will use Telnet/SSH to access the discovered equipment through their Loopback IP address.
Later, DNA Center will configure the Loopback as the source interface for RADIUS/TACACS+ packets.
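That later step corresponds to configuration along these lines, shown here only as an illustrative sketch of what DNA Center pushes:

! Source AAA traffic from the loopback so it matches the management IP
ip radius source-interface Loopback0
ip tacacs source-interface Loopback0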



Step 16. Expand the Credentials section, and then click on Add Credentials.
The Add Credentials pane will slide in from the right side of the page.

Step 17. At a minimum, CLI and SNMPv2 Read/Write credentials must be defined. The routers
and switches will be discovered using the CLI and SNMPv2 credentials.

Use the table below to populate the applicable credentials.


The Save button must be clicked after entering each credential set.

Field                                          Value            Save as Global
CLI Credentials             Username           Operator         Checked
                            Password           CiscoDNA!12345
                            Enable Password    cisco
SNMP v2c Read Credentials   Name/Description   RO               Checked
                            Read Community     RO
SNMP v2c Write Credentials  Name/Description   RW               Checked
                            Write Community    RW

Note: Follow the numbered screen shots below if needed.



Note: The Save as Global option saves the credentials to the Design > Network Settings > Device Credentials
screen. These can be used later as part of LAN Automation. The Save button must be clicked after entering each
credential set. This populates the Credentials section in the center of the screen.



Step 18. Once the credentials have been added, they will populate the center of the screen
under the Credentials section. Credentials that had the Save as Global option selected
appear in Purple(ish). Credentials that do not use this option are considered Job
Specific. They are only used for this particular Discovery job and appear in Green(ish).
Ensure the credentials are appropriately populated similarly to the screen shot below.

Step 19. The final step is to enable Telnet as a discovery protocol for any devices not configured
to support SSH.

Scroll down the page and click to open the Advanced section.
Click the Telnet protocol. Ensure it has a blue check mark next to it. A warning message
should appear. Click OK to close it.

Rearrange the order so that Telnet comes before SSH.



Note: The protocol order determines the order that DNA Center will use to try to access the device using the VTY
lines. If the first protocol method is unsuccessful, DNA Center will attempt the next protocol. Once a method is
successful, DNA Center will use that method for future device access and configuration. Telnet is used only because
of the nature of this lab environment. SSH should be used in production.

Bug Note: A current bug in DNA Center removes the previous Preferred Management IP selection after interacting
with the Advanced section. This bug is also intermittent and may not happen during your lab.

Please ensure The Preferred Management IP is Use Loopback before clicking the Start button in the next step.

Please read the bug notice above before continuing.

Step 20. Click the Start button in the lower right-hand corner to begin the discovery job.

Step 21. As the Discovery progresses, the page will present the devices on the right-hand side.
This may require clicking the icon to display the discovered devices.

Note: Full discovery with this number of CDP hops may take up to ten minutes to complete. While devices may show
as Discovered and the Discovery job may appear Complete, the entire process itself is not complete until devices
have fully populated in the Inventory tool.

Step 22. Verify that the Discovery process was able to find seven (7) devices.

Step 23. Verify that the devices with the following IP addresses have been discovered.

• 192.168.255.1 – EdgeNode1.dna.local
• 192.168.255.2 – EdgeNode2.dna.local
• 192.168.255.4 – CPN-DBN.dna.local
• 192.168.255.6 – LabAccessSwitch.dna.local
• 192.168.255.7 – FusionInternal.dna.local
• 192.168.255.8 – CPN-BN.dna.local
• 192.168.255.9 – FusionExternal.dna.local



Step 24. Ensure that there are green check marks for Status, ICMP, SNMP, and CLI for all seven (7)
discovered devices.
Step 25. For the WLC, we will run another Discovery task.
Step 26. Enter the Discovery Name as WLC and the Discovery type as Range.
Step 27. Manually input the From and To address as 192.168.50.240, which is the IP address of the WLC.



Step 28. Expand the Credentials section, and then click on Add Credentials.
The Add Credentials pane will slide in from the right side of the page.

Use the table below to populate the applicable credentials.


The Save button must be clicked after entering each credential set.

Field                                          Value            Save as Global
CLI Credentials             Username           Operator         Checked
                            Password           CiscoDNA!12345
                            Enable Password    cisco
SNMP v2c Read Credentials   Name/Description   RO               Checked
                            Read Community     RO
SNMP v2c Write Credentials  Name/Description   RW               Checked
                            Write Community    RW

Note: You may use the credentials that were saved earlier from the previous discovery as they are the same.

Step 29. The final step is to enable Telnet as a discovery protocol for any devices not configured
to support SSH.
Scroll down the page and click to open the Advanced section.
Click the Telnet protocol. Ensure it has a blue check mark next to it. A warning message
should appear. Click OK to close it.
Rearrange the order so that Telnet comes before SSH.

It should look like the screenshots below:



Step 30. Click the Start button in the lower right-hand corner to begin the discovery job.

Step 31. As the Discovery progresses, the page will present the devices on the right-hand side.
This may require clicking the icon to display the discovered devices.

Step 32. Verify that the Discovery process was able to find the WLC; it should show up as
follows:

Step 33. Click the Apps button or the DNA Center logo to return to the DNA Center dashboard.

This completes Exercise 2.



Exercise 3: Using the DNA Center Inventory Tool
The DNA Center Inventory tool is used to add, update, and/or delete devices that are managed by DNA
Center. The Inventory tool can also be used to perform additional functions and actions such as
updating device credentials, resyncing devices, and exporting device configurations. It also
provides access to the Command Runner tool.

Step 34. From the DNA Center Home Page, click the Inventory tool.
A bookmark is also available in the browser.

Step 35. All eight discovered devices should show as Reachable and Managed.
Their up time, last update time, and the default resync interval should also be displayed.

Note: While the DNA Center Discovery App may show the discovery process has completed, the full process of
discovery and adding to inventory is not completed until the Reachability Status and Last Inventory Collection
Status are listed as shown above, Reachable and Managed, respectively. The full process from beginning
Discovery to being added to Inventory may take up to ten minutes based on the size of the lab topology. Expect
longer times in production, particularly if the number of CDP hops is larger and if there is a larger number of devices.

Note: Please make sure all the devices shown above are found in your pod. If they are not, please notify the
instructor.



Step 36. Additional information is available in the Inventory tool by adding columns.
Use the Layout settings drop-down to add the Device Role, Config, and
IOS/Firmware columns.

Click Apply.

Note: The Config column allows a view of the running configuration of the device. The IOS/Firmware column is
useful for quickly viewing the software version running on the devices.

Step 37. Device Role controls where DNA Center displays a device in topology view in both the
Provision Application and Topology tool. It does not modify or add any configuration
with regards to the device role selected.

Use the chart below to confirm/set each device to the role shown.
Device            Device Role
CPN-BN            Core
CPN-DBN           Core
EdgeNode1         Access
EdgeNode2         Access
FusionExternal    Border Router
FusionInternal    Distribution
LabAccessSwitch   Distribution
WLC_3504          Access

Step 38. Click the Apps button or the DNA Center logo to return to the DNA Center dashboard.
Step 39. You can refer to the Topology tool to view the complete topology next.



About Device Roles

The Device Role is used to position devices in the DNA Center topology maps under the Fabric tab in the
Provision Application and in the Topology tool. The device positions in those applications and tools are
shown using the classic three-tiered Core (Border), Distribution, and Access layout. This will become
even clearer later in the lab guide.

Device Role     Topology Position
Internet        Top Row
Border Router   Below Internet (and shown as connected to the Internet)
Core            Third Row
Distribution    Fourth Row
Access          Bottom Row
Unknown         To the side of the Bottom Row



Figure 3-1: Device Role Topology Positions

1. Internet Role (Non-Configurable Option)


2. Border Router Role
3. Core Role
4. Distribution Role
5. Access Role
6. Unknown Role

This completes Exercise 3.



Exercise 4: Integrating DNA Center with the Identity Services Engine (ISE)
In DNA Center 1.0’s Design Application, there was a hyperlink that opened to the
Settings – Authentication and Policy Servers screen. This was a quick shortcut for the Settings page in
DNA Center for integrating with the Identity Services Engine.

Figure 4-1: DNA Center 1.0 AAA Shortcut and AAA System Settings

Step 40. This shortcut has been removed in DNA Center 1.1 and above.
To begin integration between DNA Center and ISE, navigate to System Settings.
Step 41. Click on the gear at the top right, and then select System Settings.
Step 42. From the System 360 Tab, click the Configure settings hyperlink in the Identity Service
Engine section.



Step 43. This hyperlink will navigate to:
System Settings > Settings > Authentication and Policy Servers.
Click Add. A dialog box will slide over from the right labeled Add AAA/ISE server.

Step 44. Use the table below to populate the credentials and fields.

Field                    Value
Server IP Address *      192.168.100.20
Shared Secret *          ISEisC00L (note: these are zeroes – 0 – not the letter 'O')
Cisco ISE Server         ON
Username *               admin
Password *               ISEisC00L (note: these are zeroes – 0 – not the letter 'O')
FQDN *                   ise.dna.local
Subscriber Name *        DNAC
View Advanced Settings   Open and Expand
Protocol                 TACACS Selected



Step 45. Click the View Advanced Settings button.
Use the scroll bar on the right to scroll down.
Step 46. Click the TACACS box. It should change color to indicate it is selected.
Step 47. Click Apply.

Note: The RADIUS protocol will be selected by default. Use the default Authentication and Accounting ports of 1812
and 1813 for RADIUS and the default port 49 for TACACS.

DNA Center will begin integrating with ISE using pxGrid.


This includes the process of mutual certificate authentication between DNA Center and ISE.

Additional details on this background process can be found in Appendix C.



Step 48. After a few moments, the Creating AAA server… icon will disappear.
Verify the status is listed as INPROGRESS.

Note: In DNA Center 1.0, once the mutual certificate authentication with ISE was completed, the process was listed
as Active in the DNA Center GUI. However, DNA Center still needed to be added to ISE’s trusted pxGrid
subscribers as shown in the following steps. Until that process was completed in the ISE GUI, the connection
between ISE and DNA Center was not truly Active. DNA Center 1.1 and above addresses this issue with the
INPROGRESS status. Once the following steps are completed, the status will be listed as Active.

Step 49. Open a new browser tab, and log into ISE using IP address https://192.168.100.20 and
credentials:
• user: admin
• password: ISEisC00L



Step 50. Once logged in, go to Administration > pxGrid Services.

Step 51. Validate the pxGrid connection is online.


The online connection information will appear on the bottom of the page.

Step 52. Validate that there are current online subscribers.


The subscribers (clients) are shown in the list in the center of the page along with their
status.

Step 53. On this page, the dnac client will appear as Pending.
Total Pending Approval (1) should show at the top of the list.



Note: The integration process typically takes 2-5 minutes to complete. If the dnac entry does not show up in the list,
please wait one minute and press the Refresh button. Continue this process of waiting one minute and refreshing
up to five times. If dnac does not appear by then, please contact your instructor.

Step 54. Click Total Pending Approval (1) and then select Approve All.

Step 55. ISE will ask for secondary validation.


Click Approve All in the pop-up dialog box.

Step 56. Ensure the dnac client now has an Online status.



Step 57. Close the ISE browser tab and return to the DNA Center browser tab.
Refresh the page in DNA Center using the Refresh button.

Verify that ISE (192.168.100.20) now has a status of ACTIVE.

Step 58. Click the Apps button or the DNA Center logo to return to the DNA Center dashboard.

About ISE ‘Day 0 Brownfield Support’

During the integration of ISE and DNA Center, all Scalable Group Tags (SGTs) present in ISE are pulled
into DNA Center. Whatever policy is configured in the (TrustSec) egress matrices of ISE when DNA
Center and ISE are integrated is also pulled into DNA Center. This is referred to as Day 0
Brownfield Support: if policies are present in ISE at the point of integration, those policies are pulled
into DNA Center and populated.

Except for SGTs, anything TrustSec and TrustSec Policy related that is created directly on ISE OOB (out-
of-band) from DNA Center after the initial integration will not be available or displayed in DNA
Center. There is a cross-launch capability in DNA Center to see what is present in ISE with respect to
TrustSec Policy.

Note: The information above is current as of DNA Center 1.2.6.


Additional capabilities in the future may extend the integration of ISE and DNA Center.

Deployment Note and Additional Caveat: Generally, changes made to the TrustSec Matrix in the ISE GUI trigger a
CoA (Change of Authorization) Push down to all devices. However, if a TrustSec policy is created OOB (in the ISE
GUI) after the initial integration with DNA Center, the CoA Push must be performed manually.

This completes Exercise 4.



Exercise 5: Using the DNA Center Design Application
DNA Center provides a robust Design Application to allow customers of every size and scale to easily
define their physical sites and common resources. Using an intuitive hierarchical format,
the Design Application removes the need to redefine the same resources, such as DHCP, DNS, and AAA
servers, in multiple places when provisioning devices.

Creating Sites and Buildings – Network Hierarchy Tab

Step 59. From the DNA Center home page, click Design to enter the Design Application.

Step 60. This navigates to Design > Network Hierarchy.


Click Add Site > Add Area.



Step 61. Enter the Area Name as “San Jose” and then click Add.

Step 62. Click San Jose, click the gear icon, and then select Add Building.

The Add Building dialog box will appear.

Step 63. Enter the Building name as “SJC-BLD13” and the address as “325 East Tasman Drive, San Jose,
California”, then click Add.
Step 64. A new floor will be added to the SJC-BLD13 building in the San Jose site.
Begin by expanding San Jose with the expand button.
Step 65. Click SJC-BLD13, click the gear icon, and then select Add Floor.
The Add Floor dialog box will appear.



Step 66. Use the table below to populate the fields.
Please upload the Floor image as the final step.

Field Value
Floor Name * Floor1
Parent SJC-BLD13
Type (RF Model) Cubes and Walled Offices
Floor Image SJC14.png
Width (ft) 100
Length (ft) 100
Height (ft) 10

Step 67. After filling in the fields, press the Upload File Button.
Step 68. Navigate to the Desktop > FEW > floorplans directory.
Select the SJC14.png file and press Open.



Note: DNA Center currently supports common raster image formats, including .jpg, .gif, and .png, as well as the CAD
formats .dxf and .dwg. PNG was chosen in the lab simply based on the availability of a high-quality floor map.

Step 69. DNA Center will return to the Add Floor dialog box.
The Floor map name and a preview of the floor map will now appear.
Notice that the Length (ft) has changed in response to the image size.
Click the Add Button.



Step 70. The Floor Plan appears, although it extends into the Network Hierarchy pane on the left.
Hover the mouse pointer over the light grey frame on the left-hand edge of the
map. It will highlight to a dark grey color (called out below).
Step 71. Click the frame once it turns dark grey, and drag the mouse to the right. This drags
the image over so the covered Network Hierarchy pane becomes visible again.



Defining Shared Common Servers – Network Settings Tab (Part 1)

DNA Center allows saving common network resources and settings with the Design Application’s
Network Settings sub-application (tab). As described earlier, this allows information pertaining to the
enterprise to be stored so it can be reused throughout DNA Center workflows.

The idea is to define once and use many.

By default, when clicking the Network Settings tab, newly configured settings are assigned as Global
network settings. They are applied to the entire hierarchy and inherited by each site, building, and floor.
It is possible to define specific network settings and resources to specific sites. The site-specific feature
will be used during LAN Automation. For this lab deployment, the entire network will share the same
network settings and resources.

DNA Center 1.0 supported AAA, DHCP, DNS, NTP Server, Syslog, SNMP Trap, and Netflow Collector
servers. However, by default, in DNA Center 1.0, only AAA, DHCP and DNS Servers were displayed, and
therefore the full server support for that release was easily missed.

Figure 5-1: DNA Center 1.0 Default Servers

DNA Center 1.2 has added additional shared servers. These include NTP Servers, Netflow Collector
Service and SNMP Server (renamed from SNMP Trap Server). It also provides the ability to configure
Time Zone and Message of the Day (MoTD).

Deployment Note: For a Software-Defined Access workflow, AAA, DHCP, and DNS servers are required to be
configured.
For an Assurance Workflow, SYSLOG, SNMP, and Netflow servers must be configured. These servers for
Assurance need to point to the DNA Center’s IP Address (192.168.100.10 in the Lab).



Step 72. From the Design Application, click the Network Settings tab.

Step 73. By default, the AAA, Netflow Collector, and NTP servers are not shown.

From the Network Tab, click Add Servers.


A dialog box appears.

Step 74. Select AAA, Netflow Collector, and NTP, and press OK.

Step 75. All available shared services servers should be available.


Confirm this by verifying that AAA Server is now shown on the page.



About Multiple AAA

In DNA Center 1.0, the same AAA server was configured for both network users (Device Authentication
and Authorization) and endpoints/clients (Client Authentication and Authorization).

In contrast, DNA Center 1.2 provides an option to configure separate AAA servers for network users and
for clients/endpoints. In DNA Center 1.0, only the RADIUS protocol was supported for network users. In
DNA Center 1.1 and above, both TACACS and RADIUS protocols are supported for network users.

These changes in DNA Center 1.1 are referred to as Multiple AAA.

Note: TACACS actually refers to Cisco TACACS+ and not TACACS (RFC 1492).

Step 76. Select the checkbox next to both Network and Client/Endpoint. The boxes change to a checked state and
additional settings are displayed.



Step 77. Configure the AAA Server for Network Authentication using the Table below.

Field Value
Servers ISE
Protocol TACACS
Network drop-down 192.168.100.20
IP Address (Primary) drop-down 192.168.100.20

Step 78. Configure the AAA Server for Client/Endpoint Authentication using the Table below.

Field Value
Servers ISE
Protocol RADIUS
Client/Endpoint drop-down 192.168.100.20
IP Address (Primary) drop-down 192.168.100.20

Deployment Note: TACACS is not supported for Client/Endpoint authentication in DNA Center.



Step 79. Configure the remaining servers using the information in the table below.

Field Value
DHCP Server 198.18.133.30
DNS Server – Domain Name dna.local
DNS Server – IP Address 198.18.133.30
SYSLOG Server 192.168.100.10
SNMP Server 192.168.100.10
Netflow Collector Server IP Address 192.168.100.10
Netflow Collector Server Port 2055
NTP Server 192.168.100.6
Time Zone EST5EDT



Step 80. Click Save to save the settings.

Step 81. Verify that an Information and a Success notification appears indicating the
settings were saved.



Device Credentials – Network Settings Tab (Part 2)

While configuring the earlier Discovery job, some credentials were added with the Save as Global
setting. Applying that setting will populate those specific credentials to the Device Credentials tab of
the Network Settings of the Design Application. Global Device Credentials should be the credentials
that most of the devices in a deployment use. The Save as Global setting also populates these
credentials automatically in the Discovery tool. In this way, they do not need to be re-entered when
configuring a future Discovery job.

Figure 5-2: Automatic Credential Population – New Discovery

The other major benefit of Global Device Credentials is that they are used as part of the LAN Automation
feature.

Note: The prerequisite steps could be completed later during the LAN Automation section. The steps will be
completed now to avoid numerous back-and-forth jumps between the DNA Center Provision and Design Applications
later in the lab.



DNA Center has the ability to store multiple sets of Global Device Credentials. However, a specific set
must be defined for use in LAN Automation. The minimum required credentials are CLI, SNMPV2C
Read, and SNMPV2C Write.

Note: There can be a maximum of five (5) global credentials defined for any category.

Step 82. Navigate to Design > Network Settings > Device Credentials.
Step 83. In the Device Credentials tab, click the selection button underneath CLI Credentials for the
username Operator.
The button changes to a selected state.

Step 84. Click the Read button underneath SNMP Credentials.

Ensure the SNMPV2C Read micro-tab is selected.
(This is indicated by the words being in grey.)
The button changes to a selected state.

Step 85. Click the SNMPV2C Write micro-tab. It will change color from blue to grey.
Click the Write button underneath SNMP Credentials.
The button changes to a selected state.

Step 86. Click Save.



Step 87. Verify that a Success notification appears indicating the settings are saved.

IP Address Pools – Network Settings Tab (Part 3)

DNA Center supports both manually entered IP address allotments as well as integration with IPAM
solutions, such as Infoblox and Bluecat.

Deployment Note: DHCP IP Address Pools required in the deployment must be manually defined and configured on
the DHCP Server. DNA Center does not provision the actual DHCP server, even if it is a Cisco device. It is simply
setting aside pools as a visual reference. These Address Pools will be associated with VN (Virtual Networks/VRFs)
during the Host Onboarding section.
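For illustration only – the lab’s DHCP server at 198.18.133.30 is pre-built and is not configured as part of this lab – a
matching scope for the Production pool on a Cisco IOS-based DHCP server would look something like this:

ip dhcp pool PRODUCTION
 network 172.16.101.0 255.255.255.0
 default-router 172.16.101.1
 dns-server 198.18.133.30

The equivalent scope, default gateway, and DNS options would be defined in whatever DHCP server the deployment
actually uses.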

About Global and Site-Specific Network Settings

Consider a large continental United States network deployment with sites in New York and Los Angeles.
Each site would likely use its own DHCP, DNS, and AAA (ISE Policy Service Nodes – PSN) servers. For
deployments such as these, it is possible to configure site-specific Network Settings for Network, Device
Credentials, IP Pools, and more.

By default, when navigating to the Network Settings tab, the Global site is selected. This can be seen by
the green vertical indicator. These green lines in DNA Center indicate the current navigation location in the
Design Application, helping the user understand which item is being configured for which site.



Figure 5-3: Network Hierarchy Position Indicators – DNA Center Design Application

Note that IP Pool Reservations are not available at the Global level of the hierarchy; they must be done at the Site,
Building, or Floor level.

Step 88. Navigate to Design > Network Settings > IP Address Pools.
Step 89. Several IP Address Pools will be created for various uses. Some will be used for Device
Onboarding (End-host IP Addresses) while others will be used for Guest Access, LAN
Automation, and Infrastructure.

In the IP Address Pools tab, click Add.


The Add IP Pool dialog box appears.

Step 90. Configure the seven IP Address Pools as shown in the table below.
Because the DHCP and DNS servers have already been defined, they are available from
the drop-down boxes and do not need to be manually defined.
This demonstrates the define once and use many concept that was described earlier.
The Overlapping checkbox should remain unchecked for all IP Address Pools.

IP Pool Name IP Subnet CIDR Prefix Gateway IP Address DHCP Server(s) DNS Server(s)
AccessPoints 172.16.50.0 /24 (255.255.255.0) 172.16.50.1 198.18.133.30 198.18.133.30
FusionExternal 192.168.130.0 /24 (255.255.255.0) 192.168.130.1 – –
FusionInternal 192.168.30.0 /24 (255.255.255.0) 192.168.30.1 – –
Production 172.16.101.0 /24 (255.255.255.0) 172.16.101.1 198.18.133.30 198.18.133.30
Staff 172.16.201.0 /24 (255.255.255.0) 172.16.201.1 198.18.133.30 198.18.133.30
WiredGuest 172.16.250.0 /24 (255.255.255.0) 172.16.250.1 198.18.133.30 198.18.133.30
WirelessGuest 172.16.150.0 /24 (255.255.255.0) 172.16.150.1 198.18.133.30 198.18.133.30

Step 91. Click the Save button between each IP Address Pool to save the settings.



Step 92. Verify that a Success notification appears when saving each IP Address Pool.

Step 93. Once completed, the IP Address Pools tab should appear as below.

Create and Reserve IP Address Pool



Step 94. Now reserve the IP pools at the floor level; they will be utilized later in the lab. Click
Floor 1.

Step 95. From the upper right corner, click on Reserve IP Pool.

Step 96. Add the details as follows for the IP Pools



IP Pool Name IP Subnet Global IP Pool Gateway IP Address DHCP Server(s) DNS Server(s)
AccessPoints_Floor1 172.16.50.0 AccessPoints 172.16.50.1 198.18.133.30 198.18.133.30
FusionExternal_Floor1 192.168.130.0 FusionExternal – – –
FusionInternal_Floor1 192.168.30.0 FusionInternal – – –
Production_Floor1 172.16.101.0 Production 172.16.101.1 198.18.133.30 198.18.133.30
Staff_Floor1 172.16.201.0 Staff 172.16.201.1 198.18.133.30 198.18.133.30
WiredGuest_Floor1 172.16.250.0 WiredGuest 172.16.250.1 198.18.133.30 198.18.133.30
WirelessGuest_Floor1 172.16.150.0 WirelessGuest 172.16.150.1 198.18.133.30 198.18.133.30

Step 97. The final screenshot should look as follows:



Step 98. Next, go to Global > Network Settings > Wireless.

Step 99. Under Enterprise Wireless, click on Add.

Step 100. Enter the Wireless SSID as ProductionSSID and click Next.

Step 101. Next, you will be prompted for a Wireless Profile name. Create a
wireless profile named CampusProfile.



Step 102. Choose the site Floor 1 under SJC-BLD13 from the dropdown,
and then click Finish.



Step 103. Next, add a guest SSID. Under Guest Wireless, click Add.

Step 104. Enter the Wireless SSID as GuestSSID. Choose Web Auth as the Level of Security,
ISE Authentication as the Authentication Server, and Self Registered as the Portal kind,
with redirect to Success Page after successful authentication. Then click Next.



Step 105. Add the Guest Portal.

Step 106. To build the Guest Portal, enter the portal name Guest_Portal_Access and click
Save.



Step 107. Now log in to ISE, navigate to Work Centers > Guest Access > Portals and Components >
Guest Portals, and confirm that Guest_Portal_Access was created.



Step 108. Since the wireless profile CampusProfile was created previously, it will be reused here.
Click the checkmark against the profile, and then click Finish.



Step 109. The final wireless SSID list should appear as follows:

Step 110. Click the Apps button to return to the DNA Center dashboard.

This completes Exercise 5.



Exercise 6: Using the DNA Center Policy Application
The Policy Application supports creating and managing Virtual Networks, Policy Administration
and Contracts, and Scalable Groups using the Registry tab. Most deployments will want to set
up their SD-Access Policy (Virtual Networks and Contracts) before doing any SD-Access Provisioning. The
general order of operation (for SDA) is Design, Policy, and Provision, corresponding with the order of the
applications seen on the DNA Center dashboard.

In this section, the segmentation for the overlay network (which has not yet been fully created) will be
defined in the DNA Center Policy Application. This process virtualizes the overlay network into multiple
self-contained Virtual Networks (VRFs).

Deployment Note: By default, any network device (or user) within a Virtual Network is permitted to communicate with
other devices (or users) in the same Virtual Network. To enable communication between different Virtual Networks,
traffic must leave the Fabric (Default) Border and then return, typically traversing a firewall or fusion router. This
process is done through route leaking and multi-protocol BGP (MP-BGP). This will be covered in later exercises.

Step 111. From the DNA Center home page, click Policy to enter the Policy Application.



Step 112. This navigates to Policy > Dashboard.
Verify that two (2) Virtual Networks (VNs) and eighteen (18) Scalable Groups (SGTs) are
present.

Note: The eighteen (18) SGTs were the SGTs present on ISE during the DNA Center and ISE integration. They were
imported as described in the About Day 0 Brownfield Support section.

DNA Center has two default Virtual Networks: DEFAULT_VN and INFRA_VN. The DEFAULT_VN is present to encourage
NetOps personnel to use segmentation, as this is why SDA was designed and built. For the present, it should be ignored
– specific VNs will be created.

Deployment Note: Future releases may remove the DEFAULT_VN. The INFRA_VN is for Access Points and Extended
Nodes only. It is not meant for end users or clients. The INFRA_VN will actually be mapped to the Global Routing
Table (GRT) in LISP and not a VRF instance of LISP. Despite being present in the GRT, it is still considered part of
the overlay network.

Creating VNs and Binding SGTs to VNs

Within the Software-Defined Access solution, two technologies are used to segment the network: VRFs
and Scalable Group Tags (SGTs). VRFs (VNs) are used to segment the network overlay itself. SGTs are
used to segment inside of a VRF. Encapsulation in SDA embeds the VRF and SGT information into the
packet to enforce policy end-to-end across the network.

The routing control plane of the overlay (the LISP process) makes forwarding decisions based on the
VRF information. The routing policy plane of the overlay makes forwarding decisions based on the SGT
information. Both pieces of information must be present for a packet to traverse the Fabric.

This exercise will focus on creating Virtual Networks and associating SGTs with them as this is the
minimum requirement for packet forwarding in an SDA Fabric.



Note: The DNA Center 1.0 Lab Guide focused on creating detailed policy that was then pushed to ISE. This was
used to show the interaction between these two platforms. Policy is a critical component of the SDA solution,
although policy creation is not the focus of this Lab Guide. Please refer to the DNA Center 1.0 SRE Lab Guide and
the DNA Center User Guide for information on Group-Based Access Control, Registry, Policies, and Contracts.

Step 113. Click the Virtual Network button or the Virtual Network tab.

Step 114. Begin by creating a new Virtual Network.

Click the Add (+) button.
Step 115. Enter the Virtual Network Name of CAMPUS.

Note: Please pay attention to capitalization. The name of the virtual network defined in DNA Center will later be
pushed down to the Fabric devices as a VRF definition. VRF definitions on the CLI are case sensitive: VRF Campus
and VRF CAMPUS would be considered two different VRFs.
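As a quick illustration of the case sensitivity on the CLI (the second name below is hypothetical):

vrf definition CAMPUS    ! the VRF this lab expects
vrf definition Campus    ! a completely separate VRF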

Step 116. Multiple Scalable Groups can be selected by clicking on them individually or by clicking
and dragging over them.

Move all Available Scalable Groups except BYOD, Guest, Quarantined_Systems, and
Unknown to the right-hand column Groups in the Virtual Network.

Step 117. Click Save.



Step 118. Verify the CAMPUS VN has been created and contains fourteen (14) SGTs.

Step 119. Click the Add (+) button to create a second VRF.


Enter the Virtual Network Name of GUEST.

Step 120. Select and move the BYOD, Guest, Quarantined_Systems, and Unknown SGTs from
Available Scalable Groups to the right-hand column Groups in the Virtual Network.
Step 121. Click Save.



Step 122. Verify the GUEST VN has been created and contains four (4) SGTs.

Step 123. Click the Apps button to return to the DNA Center dashboard.

Note: In the Host Onboarding section, the VNs that were just created will be associated with the created IP Address
Pools. This process is how a particular subnet becomes associated with a particular VRF.

This completes Exercise 6.



Exercise 7: Using the DNA Center Provision Application
Now that devices are discovered and added to Inventory, the Network Design elements have been completed,
and the VNs are created, the (LISP) overlay can be provisioned and Host Onboarding can begin.

About Provisioning

Provisioning consists of two separate actions:

1. Assigning devices to a Site, Building, or Floor
2. Provisioning devices added to a Site, Building, or Floor

When completing the first step, assigning a device to a site, DNA Center will push certain site-level
network settings configured in the Design Application to the devices, whether or not they will be used as part
of the Fabric overlay.

Specifically, DNA Center pushes the Netflow Exporter, SNMP Server and Traps, and Syslog network
server information configured in the Design Application for a site to the devices assigned to the site.

Note: Understanding what configuration elements are pushed at which step will be particularly important in future labs
when working with DNA Center Assurance.

When the devices are provisioned to the site – Step 2 – the remaining network settings from the Design Application
will be pushed down to these devices. They include time zone, NTP server, and AAA configuration.

The second step, provisioning the device (that has been assigned to a site), is a prerequisite before that
device can be added to the Fabric and perform a Fabric role.

DNA Center provides a helpful notification explaining this:

Figure 7-1: Provision Before Fabric



Step 124. From the DNA Center home page, click Provision to enter the Provision Application.

Step 125. The Provision Application will open to the Device Inventory Page.



Assigning Devices to a Site – Provisioning Step 1

The first step of the provisioning process begins by selecting devices and associating (assigning) them to
a site, building, or floor previously created with the Design Application.

Before devices can be provisioned, they must be discovered and added to Inventory. Therefore, the
Discovery tool and Design exercises were completed first. There is a distinct order-of-operation in DNA
Center workflows.

In this lab, all devices in Inventory will be assigned to a site (Step 1). After that, only some devices will
be provisioned to the site (Step 2). Among that second group, only certain devices will
become part of the Fabric and operate in a Fabric Role. This is the level of granularity that
DNA Center provides in Orchestration and Automation. In this lab, all devices provisioned to the site will
receive further provisioning to operate in a Fabric Role.

Use the table below for reference on how devices will be added to site, provisioned to site, and used in a
Fabric Role.

Device Assign to Site Provisioned to Site Assign as Fabric Role

CPN-DBN Yes Yes Yes
CPN-BN Yes Yes Yes
EdgeNode1 Yes Yes Yes
EdgeNode2 Yes Yes Yes
FusionExternal Yes – –
FusionInternal Yes – –
LabAccessSwitch Yes – –
WLC_3504 Yes Yes Yes

Deployment Note: There are two levels of compatibility in DNA Center and two different support matrices. The first
level is being compatible with DNA Center – or, ostensibly, compatible for Assurance. The second level is being
compatible with Software-Defined Access. Therefore, the distinction between SDA and DNA Center versioning and
platform support was called out specifically at the beginning of the lab.

To provision a device that has been assigned to a site (Step 2), it must be a device supported for SDA.

Just because a device can be Automated by DNA Center does not necessarily mean this device is being automated
by the SDA process in DNA Center. Device Automation and Software-Defined Access are technically separate
packages on the Appliance.

Step 126. From the Provision > Device Inventory page, click the top checkbox to select all
devices.
The box changes to and all devices are highlighted.



Step 127. Click Actions .

Step 128. Click Assign Device to Site.


DNA Center navigates to the Assign Devices to Site page.



Step 129. In the Choose a site drop-down box, select site …SJC-BLD13/Floor1.
This will close the drop-down.
Step 130. Click Apply to All.
Step 131. Click Apply on the right-hand side.



Step 132. Verify that a Success notification appears indicating the selected devices were
added to the site.

Assigning Shared Services – Provisioning Step 2

Step 133. Now that all the selected devices have been assigned to a site, the next step is to provision
them with the “Shared Services” (AAA, DHCP, DNS, NTP, etc.) which were set up in the
Design Application. To do this, select the same devices again and this time select
Actions > Provision.



Assign Sites, if you did not in the last step, and click Next.

Click Next on the “Configuration” and “Advanced Configuration” steps and go to
Summary. Here, confirm that what was configured in the Design Application is
being configured, and then click Deploy.



Click on Provision now.

Step 134. Back on the Device Inventory page within the Provision application, confirm that all
devices were successfully provisioned.

Step 135. The WLC will have to be provisioned separately since it is a different type of device.



Step 136. Under “Configuration” in the WLC provisioning workflow, choose whether this will be the Active Main
WLC. Also, select the Managed AP Location. This can be the location of the WLC or a
child of the parent location.

Step 137. Finally, under Summary, confirm that everything is configured correctly and click on
Deploy.



Make sure that you deploy the configuration now and that the provisioning completes successfully.

Step 138. Once this is done, you can go to the WLC GUI and, under WLANs, see that the
SSIDs are created but are disabled for now.

Step 139. Click the Apps button to return to the DNA Center dashboard.

This completes Exercise 7.



Exercise 8: Creating the Fabric Overlay
The Fabric Overlay is the central component that defines SDA. In documentation, devices that are
supported for SDA are devices that are capable of operating in one of the Fabric Overlay Roles –
operating as a Fabric Node. From a functionality standpoint, this means the device has the ability to run
LISP and to encapsulate LISP data packets in the VXLAN GPO format. When assigning devices to a fabric
role (Border, Control Plane, or Edge), DNA Center will provision a VRF-based LISP configuration on the
device.

Deployment Note: On IOS-XE devices this LISP configuration will utilize the syntax that was first introduced in IOS
XE 16.5.1 and enforced in 16.6.1.
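As a rough illustration of the style of configuration DNA Center pushes (a simplified sketch only – the actual
provisioned configuration is considerably longer, and the RLOC address and key below are placeholders rather than
lab values):

router lisp
 instance-id 4099
  service ipv4
   eid-table vrf CAMPUS
   itr map-resolver 10.1.1.1           ! placeholder control plane node RLOC
   etr map-server 10.1.1.1 key lab-key ! placeholder authentication key
   itr
   etr
  exit-service-ipv4
 exit-instance-id
exit-router-lisp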

Creating the Fabric Overlay is a multi-step workflow. Devices must be discovered, added to Inventory,
assigned to a Site, and provisioned to a Site before they can be added to the Fabric. Each of the Fabric
Overlay steps is managed under the Fabric tab of the Provision Application.
1. Identify and create Transits
2. Create Fabric Domain (or use Default)
3. Assign Fabric Role(s)
4. Setup Up Host Onboarding

Identify and create Transits

With version 1.2.X, the concept of SD-Access Multisite was introduced. There is also an obvious
requirement of connecting the SD-Access Fabric with the rest of the company. As a result, the new
workflow asks you to create a “Transit” which will connect the fabric beyond its domain.

As mentioned, there are two types of Transits:

1. SDA Transit: Connects two or more SDA Fabric Domains with each other (requires an end-to-end
MTU of 9100)
2. IP Transit: Connects the SDA Fabric Domain to the traditional network for a Layer-3 hand-off

In this lab, you will configure an IP Transit to connect the CPN-BN with the Fusion Internal Router
and the CPN-DBN with the Fusion External Router. Both will be IP Transits and must be configured
before configuring the Fabric Domain.

Step 140. From the Fabric tab in the Provision Application, click “Add Fabric Domain or
Transit” and then click “Add Transit”.



Step 141. Create the transit to the Fusion Internal router:
a. Give the Transit a name: “FusionInternal”
b. Choose the Transit Type “IP-Based”
c. Set the Routing Protocol to BGP
d. The Remote AS Number will be 65444

Creating the Fabric Domain

A fabric domain is a logical construct in DNA Center. A fabric domain is defined by a set of devices that
share the same control plane node(s) and border node(s). In a domain, end-host facing devices are
added as edge node(s).



Deployment Note: Currently, a DNA Center cluster can support up to ten (10) fabric domains. In this lab, only a
single fabric domain will be created. By default, DNA Center has the Default LAN Fabric configured. It is not required
to use this preconfigured fabric domain.

Step 142. From the DNA Center Provision > Device Inventory page, click the Fabric tab.

Step 143. Click the Add Fabric button to create a new Fabric Domain.
The Add Fabric Domain dialog box appears.

Step 144. Ensure that you choose the site level as Floor1.
Name the Fabric domain FEW_Fabric and click Add.

Note: You cannot use any special character (such as - symbol) as part of the Fabric domain name.



Step 145. DNA Center will create the Fabric Domain.

Verify a Success notification appears indicating the domain was created.

Step 146. DNA Center returns to the Fabric Domain and Transits page underneath Provision >
Fabric.
Step 147. Next, create a Transit, which will be IP-based, to provide access from
the Fabric to the non-Fabric network components.
Click Add Fabric Domain or Transit and then click Add Transit.

This opens a new fly-out window for adding a Transit.

Step 148. Choose “IP-Based” as the Transit Type; BGP is currently the only protocol option.



Step 149. Add the details as follows and then click Save.

Option Value
Transit Name FusionInternal
Transit Type Select IP-Based
Routing Protocol BGP
Autonomous Number 65444



Step 150. Similarly, add a new transit for the external fusion device. Click
Add Transit again in the upper right corner.

Step 151. Add the details as follows and then click Save.

Option Value
Transit Name FusionExternal
Transit Type Select IP-Based
Routing Protocol BGP
Autonomous Number 65333

Step 152. Now that the transits have been created, click the newly created FEW_Fabric Fabric
Domain.

The Fabric > FEW_Fabric > Fabric Infrastructure page is displayed.



About Software-Defined Access Validation Feature in DNA Center

When DNA Center is used to automate SDA, how can the automated configuration be validated? This
was the key question and motivation behind the Validation feature. There are two types of validation:
Pre-Verification and (Post-)Verification. Pre-Verification answers the question “Is my network ready to
deploy Fabric?” by running several prechecks before a device is added to a Fabric. The (Post-)
Verification validates a device’s state after a Fabric change has been made.

Lab Guide Critical Note: All Pre- and Post-Verification steps listed in the lab guide are Optional due to time
constraints. It is not possible to segment the Pre- and Post-Verification steps into separate sections with
headings indicating that these steps are optional. Please read through them, but skip over the Pre-
and Post-Verification steps if encountering time constraints.

About Pre-Verification

Pre-verification checks are run on devices that have been assigned a role but have not yet been added
to the Fabric. (This means that the Save button has not yet been clicked.) Currently, eight
pre-verification checks are supported.



Pre-Verification Purpose
Software Version Validates if the software version of device is at the minimum to support SDA Fabric
Hardware Version Validates if the hardware version of the device supports SDA Fabric
Software License Validates if device has valid license to deploy SDA Fabric
Image Type Validates if device has right image type supporting SDA Fabric
Loopback Interface Checks for presence of Loopback 0 interface on the device and verifies it is in the Up/Up state
Connectivity * Checks underlay connectivity between Fabric node devices
Multicast Checks that no instance of ip multicast vrf configuration is present on the device for
VRFs (VNs) defined in the Policy Application
Interface VLAN Checks that all interfaces on the device are assigned to default VLAN 1

Note: The connectivity check depends on the device role selected and what devices are already present in the
Fabric. During the connectivity check, DNA Center logs into the device and initiates a ping to the Loopback interface
of another device. It does not specify a source interface (such as the local Loopback).

If a device is selected as an edge node, DNA Center will perform a ping from that device to the control plane node
and the border nodes. If the device is selected as a border node, DNA Center will perform a ping from that device to
the control plane node(s) only.
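For reference, several of these checks can be approximated manually from a device’s CLI before adding it to the
Fabric (illustrative commands only, not a lab step; the target address is a placeholder):

EdgeNode1# show version                                    ! software version and image type
EdgeNode1# show license summary                            ! license level
EdgeNode1# show ip interface brief | include Loopback0     ! Loopback0 present and up/up
EdgeNode1# show running-config | include multicast         ! no multicast configuration for fabric VNs
EdgeNode1# ping <control-plane-loopback-ip>                ! underlay connectivity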

About Verification (Post-Verification)

This is performed after a device is added to the Fabric. There are two places where verification is
supported – the initial topology map under Provision > Fabric > Select Devices page and the Provision >
Fabric > Host Onboarding page. Verification checks whether the SDA provisioned configuration is
present on the device.

Verification Purpose
Select Device – VN Validates that all VRFs are created for the VNs in the Fabric
Select Device – Fabric Role Validates that all required configuration is present for a device to perform the Fabric Role
Host Onboarding – Segment Validates all segments are created under each VN in the Fabric
Host Onboarding – Port Assignment Ensures ports have an assigned VLAN and authentication method

Note: The Post-Verification check does not check for any configuration that may have been added manually to the
device using the CLI. It is only checking for parameters configured by DNA Center during the provisioning workflows.



Adding Devices to the Fabric

To create a Fabric, at minimum, an edge node and control plane (node) must be defined. For
communication outside of that Fabric, a border (node) must also be defined. Border nodes will be
discussed in detail in later sections, as they represent one of the biggest changes of DNA Center 1.1 from
DNA Center 1.0.

Icon and text color for each device name in the Fabric Topology map are very important. Grey devices
with grey text are not currently part of the Fabric. They are either simply not
assigned a Fabric Role or are an Intermediate Node – a Layer-3 device between two Fabric Nodes.

Devices of any color with blue text are currently selected. Devices with a blue outline have been
assigned a Fabric Role, although the Save button has not yet been pressed. This blue outline indicates
intention. Devices that are blue have been added in a Fabric role and have had that Fabric configuration
pushed down to the device.

Figure 10-1: Decoding the Fabric Topology Map

1. Device is not part of the Fabric.
2. Device is not selected.
3. Device is selected.
4. Device has been added in a Fabric role, although the Save button has not been pressed.
5. Device has been added in a Fabric role and configuration has been pushed down to the device
because the Save button has been pressed.
6. Device has been added in the border role in the Fabric.
7. Device has been added in the control plane role in the Fabric.
8. Device has been added in the edge role in the Fabric.



When a device is clicked in the Fabric Topology Map, a dialog box appears.

Figure 10-2: Fabric Provisioning Options

Option Role
Add to Fabric Edge Node
Add as CP Control Plane Node
Add as Border Border Node (Any of three varieties)
Add as CP+Border Co-located Control Plane Node and Border Node
Enable Guests This is related to Fabric Wireless.
View Device Info –

Note: In DNA Center 1.0, the hostname needed to be clicked in order to select the device and display the popup. In
DNA Center 1.2, the icon itself can be clicked. Due to the zooming and panning capabilities of the topology map, the
browser experience will vary when interacting with it. In tests, Chrome performed best; the
screen shots in this lab are from the Chrome browser, while all screen shots in the DNA Center 1.0 lab guide are
from the Firefox ESR browser. Firefox ESR performed better during testing for DNA Center 1.0.

Deployment Note: The topology map has created compatibility problems with some browsers. Both Firefox Quantum
and Firefox ESR are currently impacted. Internet Explorer and Microsoft Edge will not be tested, and Safari remains
untested.



Step 153. From the Fabric topology map, click on CPN-BN.dna.local
The icon turns blue, indicating it is selected.
Step 154. From the popup, select Add as CP+Border.
This indicates the device will be added to the Fabric in a Control Plane and Border role.

The grey icon now has a blue outline, indicating the intention to add this device to the
Fabric.

This opens a new fly-out window to the right with additional settings for the Fabric Border
role.



Step 155. Add the transit that was created earlier. Add the details as follows:

Option Value
Border to Select Rest of Company
Local Autonomous Number 65004
Select IP Address Pool FusionInternal_Floor1
Transits FusionInternal

Step 156. Click the dropdown for the FusionInternal device and click on Add Interface.



Step 157. Select the External Interface to be TenGigabitEthernet1/0/2 and select all the Virtual
Networks.

Step 158. Click Add.

Step 159. The device now has a blue outline, but the configuration has not yet been saved. Click
Save.



Step 160. This opens a new fly-out window to the right. Click Apply.

DNA Center indicates that Fabric device provisioning has been initiated. After the
requisite configurations have been pushed, a notification indicates that the device has been updated in the
Fabric Domain successfully.



Step 161. Once successfully added in its Fabric role, the device appears as follows.

Step 162. Next, add another device as a co-located Border and Control Plane.

Step 163. From the Fabric topology map, click CPN-DBN.dna.local.

The icon turns blue, indicating it is selected.

Step 164. From the popup, select Add as CP+Border.


This indicates the device will be added to the Fabric in a Control Plane and Border role.

The grey icon now has a blue outline, indicating the intention to add this device to the
Fabric.

Step 165. This opens a new fly-out window to the right with additional settings for
the Fabric Border role.



Step 166. Add the transit that was created earlier for the external fusion device.
Add the details as follows:

Option Value
Border to Select Outside World
Local Autonomous Number 65004
Select IP Address Pool FusionExternal_Floor1
Transits FusionExternal
Connected to Internet Checked



Step 167. Click the dropdown for the FusionExternal device and click on Add Interface.

Step 168. Select the External Interface to be TenGigabitEthernet1/0/2 and select only the GUEST
Virtual Network.



Step 169. Click Add.

Step 170. The device now has a blue outline, but the configuration has not yet been saved. Click
Save.



Step 171. This opens a new fly-out window to the right. Click Apply.

DNA Center indicates that Fabric device provisioning has been initiated. After the
requisite configurations have been pushed, a notification indicates that the device has been updated in the
Fabric Domain successfully.



Step 172. Next, click the devices EdgeNode1.dna.local and EdgeNode2.dna.local and add them
to the Fabric using the Add to Fabric option from the menu.

Step 173. Click Save and wait for the devices to be updated with their Fabric roles of
Edge Nodes. They should finally appear as follows.



Step 174. Next, add the Fabric Enabled Wireless LAN Controller to the Fabric. Do so
by selecting the WLC and clicking “Add to Fabric”.

Step 175. You will now see that the WLC also turns blue and is provisioned.



You have now successfully added all the required devices to the fabric.

This completes Exercise 8.



Exercise 9: Fusion Routers and Configuring Fusion Internal Router
The generic term fusion router comes from the MPLS world. The basic concept is that the fusion router
is usually aware of the prefixes available inside each VPN (VRF), either because of static routing
configuration or through route peering, and can therefore fuse some of these routes together. A fusion
router’s responsibilities are to route traffic using separate VRFs and to route traffic to and from a VRF to
a shared pool of resources such as DHCP servers, DNS servers, and the WLC.

A fusion router has several support requirements. It must support:


1. Multiple VRFs
2. 802.1q tagging (VLAN Tagging)
3. Sub-interfaces (when using a router)
4. BGPv4 and specifically the MP-BGP extensions

Deployment Note: While it is feasible to use a switch as a fusion router, switches add additional complexity, as
generally only the high-end chassis models support sub-interfaces. Therefore, on a fixed configuration model such
as a Catalyst 9300, an SVI must be created on the switches and added to VRF forwarding definition. This abstracts
the logical concept of a VRF even further through logical SVIs. A Layer-2 trunk is used to connect to the border
node, which itself is likely configured for a Layer-3 handoff using a sub-interface. To reduce unnecessary complexity,
an Integrated Services Router (ISR) is used in the lab as the fusion router.
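For completeness, a minimal sketch of the switch-based (SVI) alternative described above, using hypothetical VLAN,
interface, and address values rather than values from this lab:

vlan 3001
!
interface Vlan3001
 description SVI toward border node for VRF CAMPUS
 vrf forwarding CAMPUS
 ip address 192.168.130.2 255.255.255.252
!
interface TenGigabitEthernet1/0/1
 description Layer-2 trunk toward border node
 switchport mode trunk
 switchport trunk allowed vlan 3001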

Because the fusion router is outside the SDA fabric, it is not specifically managed (for Automation) by
DNA Center. Therefore, the configuration of a fusion router will always be manual. Future releases and
development may reduce or eliminate the need for a fusion router. FusionInternal will be used to
allow end-hosts in the Virtual Networks of the SDA Fabric to communicate with shared services.

Note: It is also possible, with minimal additional configuration, to allow hosts in separate VNs to communicate with
each other. This is outside the scope of this lab guide and not required for SDA.

This is a multi-step workflow performed on the CLI of FusionInternal.


1. Create the Layer-3 connectivity between CPN-BN and FusionInternal.
2. Use BGP to extend the VRFs from the CPN-BN to FusionInternal.
3. Use VRF leaking to share routes between the VRFs on FusionInternal.
4. Distribute the routes between the VRFs back to the CPN-BN.



Figure 15-1: Route Leaking Workflow

Border Automation & Fusion Router Configuration Variations – BorderNode and FusionInternal

Critical Lab Guide Note: The configuration elements provisioned during your run-through are likely to be different.
Please be sure not to copy and paste from the Lab Guide unless instructed specifically to do so. Be aware of what
sub-interface is forwarding for which VRF and what IP address is assigned to that sub-interface on your particular lab
pod during your particular lab run-through. The fusion router’s configuration is meant to be descriptive in nature, not
prescriptive.

There are six possible varieties in how DNA Center can provision the sub-interfaces and VRFs. This means there
are six variations in how FusionInternal needs to be configured to match CPN-BN. These are provided and detailed
in Appendix K and are also provided as text and image files in the DNAC 1.2 folder on the desktop of the Jump Host.

Identifying the Variation – BorderNode and FusionInternal

When following the instructions in the lab guide, DNA Center will provision three sub-interfaces on
BorderNode beginning with GigabitEthernet 0/0/2.3001 and continuing through GigabitEthernet
0/0/2.3003. These interfaces will be assigned an IP address with a /30 subnet mask (255.255.255.252)
and will always use the lower number (the odd number address) of the two available addresses.
DNA Center will vary which sub-interface is forwarding for which VRF and the Global Routing Table
(GRT).

To understand which explanatory graphic and accompanying configuration text file to follow, identify
the order of the VRF/GRT that DNA Center has provisioned on the sub-interfaces.
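One way to identify the mapping is from the BorderNode CLI: show vrf lists the interfaces assigned to each VRF
(illustrative output that matches the example below; the assignments on your pod may differ):

BorderNode# show vrf
  Name        Default RD    Protocols   Interfaces
  CAMPUS      1:4099        ipv4        Gi0/0/2.3001
  GUEST       1:4100        ipv4        Gi0/0/2.3002
  ...

The sub-interface forwarding for the GRT will not appear in the show vrf output; it is the remaining DNA
Center-provisioned sub-interface visible in show ip interface brief.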



In the example above, Gig0/0/2.3001 is forwarding for the CAMPUS VRF, Gig0/0/2.3002 is forwarding for
the GUEST VRF, and Gig0/0/2.3003 is forwarding for the GRT (also known as the INFRA_VN).

The corresponding graphic – located in the DNAC 1.2 folder on the desktop of the Jump Host – is
BorderNode Interface Order – CAMPUS, GUEST, GRT, and the corresponding text file, also located where
noted above, is FusionInternal - Campus, Guest, GRT. Please be sure to use the appropriate files and do
not directly copy and paste from the lab guide unless instructed directly and specifically to do so.




Creating Layer-3 Connectivity

The first task is to allow IP connectivity from the BorderNode to FusionInternal. This must be done for
each Virtual Network that requires connectivity to shared services. DNA Center has automatically
configured the BorderNode in previous exercises.

Table 15-1: DNA Center Configured Layer-3 Interfaces for Hand Off – BorderNode

VLAN VRF IP Address Interface

3001 CAMPUS 192.168.30.1/30 GigabitEthernet0/0/2.3001
3002 GUEST 192.168.30.5/30 GigabitEthernet0/0/2.3002
3003 INFRA_VN (GRT) 192.168.30.9/30 GigabitEthernet0/0/2.3003
N/A N/A 192.168.37.3 GigabitEthernet0/0/2



Figure 15-2: BorderNode and FusionInternal Interfaces

Using this information, a list of interfaces and IP addresses can be planned on FusionInternal.

Table 15-2: Interfaces to be Manually Configured – FusionInternal

Interface VRF IP Address VLAN
GigabitEthernet0/2.3001 CAMPUS 192.168.30.2/30 3001
GigabitEthernet0/2.3002 GUEST 192.168.30.6/30 3002
GigabitEthernet0/2.3003 INFRA_VN (GRT) 192.168.30.10/30 3003
GigabitEthernet0/2 N/A 192.168.37.7 N/A

However, to configure an interface to forward for a VRF forwarding instance, the VRF must first be
created. Before creating the VRFs on FusionInternal, it is important to understand the configuration
elements of a VRF definition. The most important portions of a VRF configuration – other than the
case-sensitive name – are the route-target (RT) and the route-distinguisher (RD).

Note: In older versions of IOS such as IOS 12.x, VRFs were not address-family aware. They were supported for IPv4
only and used a different syntax, ip vrf <name>, for configuration. It was mandatory to define the RT and RD in these
older code versions. In current versions of code, it is possible to create a VRF without the RT and RD values as long
as the address-family is defined. This method (of not defining the RT and RD) is often used when creating VRFs for
VRF-Lite deployments that do not require route leaking.
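For comparison, a minimal sketch of the two syntaxes using the CAMPUS values from this lab:

! Legacy, IPv4-only syntax (older IOS 12.x code):
ip vrf CAMPUS
 rd 1:4099
!
! Current, address-family-aware syntax (used throughout this lab):
vrf definition CAMPUS
 rd 1:4099
 address-family ipv4
 exit-address-family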

About Route Distinguishers (RD)

A route distinguisher makes an IPv4 prefix globally unique. It distinguishes one set of routes (in a VRF)
from another. This is particularly critical when different VRFs contain overlapping IP space. A route
distinguisher is an eight-octet/eight-byte (64-bit) field that is prepended to a four-octet/four-byte (32-
bit) IPv4 prefix. Together, these twelve octets/twelve bytes (96 bits) create the VPNv4 address.
Additional information can be found in RFC 4364. There are technically three supported formats for the
route distinguisher, although they are primarily cosmetic in difference. The distinctions are beyond the
scope of this guide.
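As an example, taking the CAMPUS route distinguisher generated later in this exercise (1:4099) and the lab’s
Production subnet:

RD 1:4099 + IPv4 prefix 172.16.101.0 = VPNv4 prefix 1:4099:172.16.101.0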



About Route Targets (RT)

Route targets, in contrast, are used to share routes among VRFs. While the structure is similar to the
route distinguisher, a route target is actually a BGP Extended-Community Attribute. The route target
defines which routes are imported and exported into the VRFs. Additional information can be found in
RFC 4360.

Many times, for ease of administration, the route-target and route-distinguisher are configured as the
same number, although this is not a requirement. It is simply a configuration convention that reduces
an administrative burden and provides greater simplicity. This convention is used in the configurations
provisioned by DNA Center. The RD and RT will also match the LISP Instance-ID.
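As a sketch of how asymmetric route targets leak routes between VRFs (the SHARED VRF below is hypothetical and
not part of this lab): a VRF imports any routes whose RT matches one of its import statements.

vrf definition SHARED
 rd 1:5000
 address-family ipv4
  route-target export 1:5000
  route-target import 1:4099    ! pull in routes exported by CAMPUS
 exit-address-family
!
vrf definition CAMPUS
 address-family ipv4
  route-target import 1:5000    ! pull in routes exported by SHARED
 exit-address-family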

Putting It All Together

For route leaking to work properly, FusionInternal must have the same VRFs configured as BorderNode
(CP-BN). In addition, the route-distinguisher (RD) and route-target (RT) values must be the same. These
have been auto-generated by DNA Center. The first step is to retrieve those values. Once the VRFs are
configured on FusionInternal, the interfaces can be configured to forward for a VRF.

Step 176. Return to the SecureCRT application.


Step 177. The consoles for the BorderNode (CP-BN) and FusionInternal should still be open.
Step 178. Display and then copy the DNA Center provisioned VRFs, RTs, and RDs shown on
BorderNode(CP-BN).

BorderNode# show run | section vrf definition

vrf definition CAMPUS
 rd 1:4099
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
 exit-address-family

vrf definition DEFAULT_VN
 rd 1:4098
 address-family ipv4
  route-target export 1:4098
  route-target import 1:4098
 exit-address-family

vrf definition GUEST
 rd 1:4100
 address-family ipv4
  route-target export 1:4100
  route-target import 1:4100
 exit-address-family
! Output omitted for brevity

(Copy and paste this output to FusionInternal in the next step.)



Note: The management vrf - Mgmt-intf is not part of the route-leaking process. It can be ignored as part of this
exercise.

Step 179. On the console of FusionInternal, paste the VRF configuration that was copied from
BorderNode (CP-BN).

Copying and pasting is required. The RDs and RTs must match exactly.

configure terminal
vrf definition CAMPUS
 rd 1:4099
 address-family ipv4
  route-target export 1:4099
  route-target import 1:4099
 exit-address-family
vrf definition DEFAULT_VN
 rd 1:4098
 address-family ipv4
  route-target export 1:4098
  route-target import 1:4098
 exit-address-family
vrf definition GUEST
 rd 1:4100
 address-family ipv4
  route-target export 1:4100
  route-target import 1:4100
 exit-address-family
end



Step 180. Create the Layer-3 sub-interface that will be used for the CAMPUS VRF.

configure terminal
interface GigabitEthernet0/2.3001

Step 181. Use a meaningful description to help with future troubleshooting.

description FusionInternal to BorderNode for VRF CAMPUS

Step 182. Use VLAN 3001 for sub-interface.

encapsulation dot1Q 3001

Step 183. Add the interface to the VRF forwarding instance CAMPUS.

vrf forwarding CAMPUS

Step 184. Configure the /30 IP Address that corresponds with BorderNode’s interface.

ip address 192.168.30.2 255.255.255.252

Step 185. Exit out of sub-interface configuration mode

exit



Note: Creating a sub-interface on a physical interface that already has existing IP configuration will cause the
IS-IS adjacency to bounce. This is expected.

Step 186. Create the Layer-3 sub-interface that will be used for the GUEST VRF.
Once completed, exit sub-interface configuration mode.
Use the following information:

Description: FusionInternal to BorderNode(CP-BN) for VRF GUEST


VLAN: 3002
VRF Instance: GUEST
IP Address: 192.168.30.6/30

Step 187. Create the final Layer-3 sub-interface used for the Global Routing table (INFRA_VN).
Once completed, exit global configuration mode completely.
Use the following information:

Description: FusionInternal to BorderNode(CP-BN) for GRT


VLAN: 3003
VRF Instance: Not-Applicable
IP Address: 192.168.30.10/30
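For reference, a completed configuration for Steps 186 and 187 might look like the following (a sketch only – verify
the VLANs and addresses against your pod’s variation as described earlier):

configure terminal
interface GigabitEthernet0/2.3002
 description FusionInternal to BorderNode(CP-BN) for VRF GUEST
 encapsulation dot1Q 3002
 vrf forwarding GUEST
 ip address 192.168.30.6 255.255.255.252
 exit
interface GigabitEthernet0/2.3003
 description FusionInternal to BorderNode(CP-BN) for GRT
 encapsulation dot1Q 3003
 ip address 192.168.30.10 255.255.255.252
end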

Step 188. Ping the BorderNode(CP-BN) from the FusionInternal using CAMPUS VRF.

ping vrf CAMPUS 192.168.30.1

Step 189. Ping the BorderNode from the FusionInternal using GUEST VRF.



ping vrf GUEST 192.168.30.5

Step 190. Ping the BorderNode (CP-BN) from the FusionInternal using the Global routing table
and a sub-interface.

ping 192.168.30.9

Step 191. Ping the BorderNode (CP-BN) from the FusionInternal using the Global routing table
and physical interface.

ping 192.168.37.3



Extending the VRFs to the FusionInternal

BGP is used to extend the VRFs to the FusionInternal router. As with the sub-interface configuration,
DNA Center has fully automated BorderNode's (CP-BN) BGP configuration. Earlier exercises verified the
BGP communications between the control plane nodes and BorderNode (CP-BN) to ensure that
communication is occurring and prefixes (NLRI) are being exchanged.

Note: The BGP Adjacencies created between a border node and fusion router use the IPv4 Address Family (not the
VPNv4 Address family). Note, however, the adjacencies will be formed over a VRF session.

Step 192. Create the BGP process on FusionInternal.


Use the corresponding Autonomous-System number automated by DNA Center on
BorderNode (CP-BN).

configure terminal
router bgp 65444

Step 193. Define the neighbor and its corresponding AS Number.


This neighbor should use the IP address associated with the GRT sub-interface.

neighbor 192.168.30.9 remote-as 65004

Step 194. Define the update-source to use the GRT sub-interface.

neighbor 192.168.30.9 update-source GigabitEthernet0/2.3003

Step 195. Activate the exchange of NLRI with the BorderNode (CP-BN).

address-family ipv4
neighbor 192.168.30.9 activate



Step 196. Add a network statement to advertise the DHCP and DNS Servers' subnet.

network 198.18.133.0

Step 197. Add a network statement to advertise the WLC subnet.

network 192.168.50.0

Step 198. Exit IPv4 Address-Family.

exit-address-family

Step 199. Enter IPv4 Address-Family for vrf CAMPUS.

address-family ipv4 vrf CAMPUS

Step 200. Define the neighbor and its corresponding AS Number.


This neighbor should use the IP address associated with the CAMPUS sub-interface.

neighbor 192.168.30.1 remote-as 65004

Step 201. Define the update-source to use the CAMPUS sub-interface.

neighbor 192.168.30.1 update-source GigabitEthernet0/2.3001

Step 202. Activate the exchange of NLRI with the BorderNode (CP-BN) for vrf CAMPUS.

neighbor 192.168.30.1 activate

Step 203. Exit IPv4 Address-Family.

exit-address-family

Step 204. Enter IPv4 Address-Family for vrf GUEST.

address-family ipv4 vrf GUEST

Step 205. Define the neighbor and its corresponding AS Number.


This neighbor should use the IP address associated with the GUEST sub-interface.

neighbor 192.168.30.5 remote-as 65004

Step 206. Define the update-source to use the GUEST sub-interface.

neighbor 192.168.30.5 update-source GigabitEthernet0/2.3002

Step 207. Activate the exchange of NLRI with the BorderNode (CP-BN) for vrf GUEST.

neighbor 192.168.30.5 activate

Step 208. Exit IPv4 Address-Family.



exit-address-family

Step 209. Exit BGP configuration mode and out of global configuration mode completely.

exit
end

Step 210. On FusionInternal, verify that three (3) BGP Adjacencies come up.
There should be a BGP adjacency for each VRF and for the GRT.

Step 211. On BorderNode (CP-BN), verify that three (3) BGP Adjacencies come up.
There should be a BGP adjacency for each VRF and for the GRT.
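
One way to check (using the same style of commands found elsewhere in this guide): the first command below
lists the GRT neighbor, and the second lists the neighbors formed over the VRF sessions. The same checks
apply on BorderNode (CP-BN).

FusionInternal# show ip bgp ipv4 unicast summary
FusionInternal# show ip bgp vpnv4 all summary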



Use VRF Leaking to Share Routes on FusionInternal

FusionInternal has routes to the SDA Prefixes learned from BorderNode (CP-BN). It also has routes to
its directly connected subnets where the DHCP/DNS servers and WLC reside. Now that all these routes
are in the routing tables on FusionInternal, they can be used for fusing the routes (route leaking).

Route-maps are used to specify which routes are leaked between the Virtual Networks. These
route-maps need to match very specific prefixes. This can be best accomplished by first defining a
prefix-list and then referencing that prefix-list in a route-map.

Prefix-lists are similar to ACLs in that they can be used to match something. Prefix-lists are configured to
match an exact prefix length, a prefix range, or a specific prefix. Once configured, the prefix-list can be
referenced in the route-map. Together, prefix-lists and route-maps provide the granularity
necessary to ensure the correct NLRI are advertised to BorderNode (CP-BN).

Note: The following prefix-lists and route-maps can be safely copied and pasted.



On FusionInternal, configure a two-line prefix-list that matches the /24 CAMPUS VRF subnets. Name the prefix list
CAMPUS_VRF_NETWORKS.

configure terminal
ip prefix-list CAMPUS_VRF_NETWORKS seq 5 permit 172.16.101.0/24
ip prefix-list CAMPUS_VRF_NETWORKS seq 10 permit 172.16.201.0/24
end

Step 212. Configure a prefix-list that matches the /24 GUEST VRF subnet.
Name the prefix list GUEST_VRF_NETWORKS.

configure terminal
ip prefix-list GUEST_VRF_NETWORKS seq 5 permit 172.16.250.0/24
end

Note: The prefix-list uses the plural NETWORKS and not the singular NETWORK. Please name the prefix-list exactly
as shown.

Step 213. Configure a two-line prefix-list that matches the DHCP/DNS Servers’ and WLC’s subnets.
Name the prefix list SHARED_SERVICES_NETWORKS.

configure terminal
ip prefix-list SHARED_SERVICES_NETWORKS seq 5 permit 198.18.133.0/24
ip prefix-list SHARED_SERVICES_NETWORKS seq 10 permit 192.168.50.0/24
end



Step 214. Route-maps can now be configured to match the specific prefixes referenced in the
prefix lists.
Configure a route-map to match the CAMPUS_VRF_NETWORKS prefix list.
Name the route-map CAMPUS_VRF_NETWORKS.

configure terminal
route-map CAMPUS_VRF_NETWORKS permit 10
match ip address prefix-list CAMPUS_VRF_NETWORKS
end

Step 215. Configure a route-map to match the GUEST_VRF_NETWORKS prefix list.


Name the route-map GUEST_VRF_NETWORKS.

configure terminal
route-map GUEST_VRF_NETWORKS permit 10
match ip address prefix-list GUEST_VRF_NETWORKS
end

Note: The route-map uses the plural NETWORKS and not the singular NETWORK. Please name the route-map
exactly as shown.

Step 216. Configure a route-map to match the SHARED_SERVICES_NETWORKS prefix list.


Name the route-map SHARED_SERVICES_NETWORKS.

configure terminal
route-map SHARED_SERVICES_NETWORKS permit 10
match ip address prefix-list SHARED_SERVICES_NETWORKS
end



Use VRF Leaking to Share Routes and Advertise to BorderNode

About Route Leaking

Route leaking is done by importing and exporting route-maps under the VRF configuration. Each VRF should
export its own prefixes using a route-map. The VRF should also import the desired routes used
for access to shared services using a route-map.

Using the route-map SHARED_SERVICES_NETWORKS with the import command will permit only the
shared services subnets to be leaked to the VRFs. This will allow the End-Hosts in the Fabric to
communicate with the DHCP/DNS Servers and the WLC, but not allow inter-VRF communication.

Figure 15-3: VRF Leaking for Shared Service Access Only

Using the route-target import command will allow for inter-VRF communication. Inter-VRF
communication is beyond the scope of this lab guide, and uncommon in campus production networks.



Figure 15-4: Inter-VRF Leaking and Leaking Shared Services

In production, rather than permitting inter-VRF communication (which adds additional complexity), the
entire non-Guest prefix space (IP Address Pools) is associated with a single VN (VRF). Scalable
Group Tags (SGTs) are then used to permit or deny communication between end-hosts. This is a simpler
solution that also provides more visibility and granularity into which hosts can communicate with each
other.

Note: When using route-maps, only a single import and a single export command can be used. This makes sense, as a
route-map is used for filtering the ingress and egress routes against the elements matched in that route-map.
Route-maps are used when finer control is required over the routes that are imported and exported from a VRF
than the control that is provided by the import route-target and export route-target commands.

If route-maps are not used to specify a particular set of prefixes, VRF leaking can be performed by importing and
exporting route-targets. Using route-targets in this way exports all routes from a particular VRF instance and
imports all routes from another VRF instance. It is less granular and more often used in MPLS. Route-targets allow
multiple import and export commands to be applied, as they are used without any filtering mechanism, such as a
route-map.
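
For illustration only (do not configure this in the lab), leaking the GUEST prefixes into the CAMPUS VRF
using route-targets alone would look like the following sketch, reusing the RT value (1:4100) that DNA Center
provisioned for GUEST:

configure terminal
vrf definition CAMPUS
address-family ipv4
route-target import 1:4100
exit-address-family
end

Because no route-map filter is involved, every prefix exported by the GUEST VRF would be imported into CAMPUS.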

Step 217. Configure the CAMPUS VRF for route leaking using route-maps.
The VRF should export its own routes and import the Shared Services Networks only.

configure terminal
vrf definition CAMPUS
address-family ipv4
import ipv4 unicast map SHARED_SERVICES_NETWORKS
export ipv4 unicast map CAMPUS_VRF_NETWORKS
exit-address-family
exit
end

Note: The VRF leaking configuration for both CAMPUS and GUEST VRFs can safely be copied and pasted.



Step 218. Configure the GUEST VRF for route leaking using route-maps.
The VRF should export its own routes and import the Shared Services Networks only.

configure terminal
vrf definition GUEST
address-family ipv4
import ipv4 unicast map SHARED_SERVICES_NETWORKS
export ipv4 unicast map GUEST_VRF_NETWORKS
exit-address-family
exit
end

Step 219. Please be patient. BGP converges slowly, but reliably.

It may take a few minutes for the routes to propagate.

If more than five minutes have passed, please check spelling, plurals, and capitalization
for the prefix-lists, route-maps, and VRFs. If routes are still not propagating, contact
your instructor.

Note: In Cisco software, import actions are triggered when a new routing update is received or when routes are
withdrawn. During the initial BGP update period, the import action is postponed to allow BGP to converge more
quickly. Once BGP converges, incremental BGP updates are evaluated immediately and qualified prefixes are
imported as they are received.

Route Leaking – Validation (BorderNode and Control Plane Nodes)

Route leaking will allow the shared services routes to be imported to the VRF forwarding tables on
FusionInternal. These routes will then be advertised via eBGP to BorderNode. BorderNode will use the
iBGP VPNv4 adjacency to advertise these routes to the control plane nodes. The VPNv4 transport
session carries all of the VRFs' information together, and the internal BGP process keeps track of which prefix is
associated with which VRF. Once these routes reach the control plane nodes, those routers will make



the decision on which routes are redistributed in and out of LISP and BGP. Once imported into LISP, the
routes are available to the end-hosts in the EID-space through their corresponding edge nodes.

Step 220. Display the routes known to vrf CAMPUS on BorderNode.

BorderNode# show ip route vrf CAMPUS

Routing Table: CAMPUS


Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is not set

172.16.0.0/16 is variably subnetted, 4 subnets, 2 masks
B 172.16.101.0/24 [200/0] via 192.168.255.4, 19:59:19
C 172.16.101.1/32 is directly connected, Loopback1021
B 172.16.201.0/24 [200/0] via 192.168.255.4, 19:59:19
C 172.16.201.1/32 is directly connected, Loopback1022
192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.30.0/30 is directly connected, GigabitEthernet0/0/2.3001
L 192.168.30.1/32 is directly connected, GigabitEthernet0/0/2.3001
B 192.168.50.0/24 [20/0] via 192.168.30.2, 00:39:16
B 198.18.133.0/24 [20/0] via 192.168.30.2, 00:39:16

The Shared Services routes are known to the CAMPUS VRF.

Step 221. Display the routes known to vrf CAMPUS on ControlPlaneNode.

ControlPlaneNode# show ip route vrf CAMPUS

Routing Table: CAMPUS


Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP



a - application route
+ - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is not set

172.16.0.0/16 is variably subnetted, 4 subnets, 2 masks
B 172.16.101.0/24 [200/0], 20:36:03, Null0
C 172.16.101.1/32 is directly connected, Loopback1021
B 172.16.201.0/24 [200/0], 20:36:03, Null0
C 172.16.201.1/32 is directly connected, Loopback1022
B 192.168.50.0/24 [200/0] via 192.168.255.3, 00:03:25
B 198.18.133.0/24 [200/0] via 192.168.255.3, 00:03:25

The Shared Services routes are known to the CAMPUS VRF.

Step 222. Display the routes known to vrf GUEST on BorderNode.

BorderNode# show ip route vrf GUEST

Routing Table: GUEST


Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is not set

172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
B 172.16.250.0/24 [200/0] via 192.168.255.4, 20:29:46
C 172.16.250.1/32 is directly connected, Loopback1023
192.168.30.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.30.4/30 is directly connected, GigabitEthernet0/0/2.3002
L 192.168.30.5/32 is directly connected, GigabitEthernet0/0/2.3002
B 192.168.50.0/24 [20/0] via 192.168.30.6, 00:03:18
B 198.18.133.0/24 [20/0] via 192.168.30.6, 00:03:18

The Shared Services routes are known to the GUEST VRF.



Step 223. Display the routes known to vrf GUEST on ControlPlaneNode.

ControlPlaneNode# show ip route vrf GUEST

Routing Table: GUEST


Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is not set

172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
B 172.16.250.0/24 [200/0], 20:36:28, Null0
C 172.16.250.1/32 is directly connected, Loopback1023
B 192.168.50.0/24 [200/0] via 192.168.255.3, 00:05:39
B 198.18.133.0/24 [200/0] via 192.168.255.3, 00:05:39

The Shared Services routes are known to the GUEST VRF.



Optional - Route Leaking – Validation (Edge Nodes)

The end-hosts attached to the Edge Nodes have not yet joined the network. This will be addressed in
the next exercise on Fabric DHCP. Therefore, the Edge Nodes have not registered any prefixes with the
control plane nodes. Using lig, it is possible to verify that a packet sourced from the EID-space sent to
the shared service subnets would be encapsulated and sent via the overlay.

LISP Forwarding Logic – Part 1

When a packet is sent from an end-host to an edge node, the edge node must determine if the packet is
to be LISP (VXLAN) encapsulated. If the packet is encapsulated, it is sent via the overlay. If not, it is
forwarded natively via the underlay. To be eligible for encapsulation, a packet must be sourced from
the EID-space. The edge node will then look for a default route or a null route in its routing table. If the
end-host packet matches either of those routes, it is eligible for encapsulation, and the edge node will
query the control plane node for how to reach the destination. When an edge node receives
information back from the control plane node after a query, it is stored in the LISP map-cache.

Note: If the control plane node does not know how to reach a destination, it will reply to the edge node with
a negative map reply (NMR). This triggers the edge node to forward the packet natively. However, if the use-petr
command is configured in the LISP configuration on the edge node, an NMR triggers the edge node to send the
packet via the overlay to a default border.

Step 224. Verify the LISP map-cache for CAMPUS VRF (LISP Instance 4099).

EdgeNode1# show ip lisp map-cache instance-id 4099


LISP IPv4 Mapping Cache for EID-table vrf CAMPUS (IID 4099), 3 entries

0.0.0.0/0, uptime: 02:15:30, expires: never, via static-send-map-request
  Negative cache entry, action: send-map-request
172.16.101.0/24, uptime: 02:15:30, expires: never, via dynamic-EID, send-map-request
  Negative cache entry, action: send-map-request
172.16.201.0/24, uptime: 02:15:30, expires: never, via dynamic-EID, send-map-request
  Negative cache entry, action: send-map-request



Step 225. Verify the LISP map-cache for GUEST VRF (LISP Instance 4100).

EdgeNode1# show ip lisp map-cache instance-id 4100


LISP IPv4 Mapping Cache for EID-table vrf GUEST (IID 4100), 2 entries

0.0.0.0/0, uptime: 02:19:21, expires: never, via static-send-map-request
  Negative cache entry, action: send-map-request
172.16.250.0/24, uptime: 02:19:21, expires: never, via dynamic-EID, send-map-request
  Negative cache entry, action: send-map-request

Step 226. Use lig to query the control plane nodes on how to reach the shared services from
CAMPUS VRF.

EdgeNode1# lig instance-id 4099 198.18.133.0

Mapping information for EID 198.18.133.0 from 192.168.255.3 with RTT 133 msecs
198.18.133.0/24, uptime: 00:00:00, expires: 23:59:59, via map-reply, complete
  Locator        Uptime    State  Pri/Wgt  Encap-IID
  192.168.255.3  00:00:00  up     10/10    -

Traffic for the DHCP and DNS servers should be sent to BorderNode when sourced from the vrf CAMPUS EID-space.

EdgeNode1# lig instance-id 4099 192.168.50.0

Mapping information for EID 192.168.50.0 from 192.168.255.3 with RTT 132 msecs
192.168.50.0/24, uptime: 00:00:00, expires: 23:59:59, via map-reply, complete
  Locator        Uptime    State  Pri/Wgt  Encap-IID
  192.168.255.3  00:00:00  up     10/10    -

Traffic for the WLC should be sent to BorderNode when sourced from the vrf CAMPUS EID-space.



Step 227. Use LIG to query the control plane nodes on how to reach the shared services from vrf
GUEST.

EdgeNode1# lig instance-id 4100 198.18.133.0

Mapping information for EID 198.18.133.0 from 192.168.255.3 with RTT 132 msecs
198.18.133.0/24, uptime: 00:00:00, expires: 1d00h, via map-reply, complete
  Locator        Uptime    State  Pri/Wgt  Encap-IID
  192.168.255.3  00:00:00  up     10/10    -

Traffic for the DHCP and DNS servers should be sent to BorderNode when sourced from the vrf GUEST EID-space.

EdgeNode1# lig instance-id 4100 192.168.50.0

Mapping information for EID 192.168.50.0 from 192.168.255.3 with RTT 132 msecs
192.168.50.0/24, uptime: 00:00:00, expires: 1d00h, via map-reply, complete
  Locator        Uptime    State  Pri/Wgt  Encap-IID
  192.168.255.3  00:00:00  up     10/10    -

Traffic for the WLC should be sent to BorderNode when sourced from the vrf GUEST EID-space.

Step 228. Verify the new LISP map-cache for CAMPUS VRF (LISP instance 4099).

EdgeNode1# show ip lisp map-cache instance-id 4099


LISP IPv4 Mapping Cache for EID-table vrf CAMPUS (IID 4099), 5 entries

0.0.0.0/0, uptime: 02:27:34, expires: never, via static-send-map-request
  Negative cache entry, action: send-map-request
172.16.101.0/24, uptime: 02:27:34, expires: never, via dynamic-EID, send-map-request
  Negative cache entry, action: send-map-request
172.16.201.0/24, uptime: 02:27:34, expires: never, via dynamic-EID, send-map-request
  Negative cache entry, action: send-map-request
192.168.50.0/24, uptime: 00:00:47, expires: 23:59:12, via map-reply, complete
  Locator        Uptime    State  Pri/Wgt  Encap-IID
  192.168.255.3  00:00:47  up     10/10    -
198.18.133.0/24, uptime: 00:00:56, expires: 23:59:03, via map-reply, complete
  Locator        Uptime    State  Pri/Wgt  Encap-IID
  192.168.255.3  00:00:56  up     10/10    -

The shared services subnets are associated with the RLOC 192.168.255.3, which is the Loopback 0 address of BorderNode.



Step 229. Verify the new LISP map-cache for GUEST VRF (LISP Instance 4100).

EdgeNode1# show ip lisp map-cache instance-id 4100


LISP IPv4 Mapping Cache for EID-table vrf GUEST (IID 4100), 4 entries

0.0.0.0/0, uptime: 02:30:39, expires: never, via static-send-map-request
  Negative cache entry, action: send-map-request
172.16.250.0/24, uptime: 02:30:39, expires: never, via dynamic-EID, send-map-request
  Negative cache entry, action: send-map-request
192.168.50.0/24, uptime: 00:01:33, expires: 23:58:27, via map-reply, complete
  Locator        Uptime    State  Pri/Wgt  Encap-IID
  192.168.255.3  00:01:33  up     10/10    -
198.18.133.0/24, uptime: 00:03:45, expires: 23:56:21, via map-reply, complete
  Locator        Uptime    State  Pri/Wgt  Encap-IID
  192.168.255.3  00:03:45  up     10/10    -

The shared services subnets are associated with the RLOC 192.168.255.3, which is the Loopback 0 address of BorderNode.

This completes the optional route leaking validation on the edge nodes.

This completes Exercise 9.



Exercise 10: DNA Center Host Onboarding

Host Onboarding Part 1

Host Onboarding consists of two distinct steps, both located under the Provision > Fabric > Host Onboarding
tab for a particular fabric domain.

The first step is to select the Authentication template. These templates are predefined and pushed
down to all devices that are operating as edge nodes. This step must be completed first.

There are currently four pre-built authentication templates in DNA Center.


1. Easy Connect
2. Closed Mode
3. Open Authentication
4. No Authentication

Each option will cause DNA Center to push down a separate configuration set to the edge nodes. Closed
Mode is the most restrictive and provides the best device security posture of the four options. It will
require connected end-host devices to authenticate to the network using 802.1x. If 802.1x fails, MAC
Authentication Bypass (MAB) will be attempted. If MAB fails, the device will not be permitted any
network access.

Figure 11-1: Closed Mode Port Behavior



Note: In DNA Center 1.0, the same Authentication templates existed, although they were known by different names.
The functionality and configuration have not changed.

1. DefaultEasyConnectAuth
2. DefaultWiredDot1xClosedAuth
3. DefaultWiredDot1xOpenAuth
4. DefaultWiredNoAuth

Figure 11-2: DNA Center 1.0 Host Onboarding Authentication Templates

The device CLI, as well as the DNA Center provisioned configuration, is moving towards the IBNS 2.0
(or C3PL) style. The particular 802.1x/MAB interface-level configuration provisioned by DNA Center may change
to IBNS 2.0 in future releases, although it did not change between DNA Center 1.0 and DNA Center 1.1.6.

Step 230. From Provision > Fabric > Host Onboarding for the FEW_Fabric Fabric,
select No Authentication. The icon changes to the selected state.

Step 231. Click the Save button.


The Modify Authentication Template dialogue box appears.



Step 232. Select Now and click Apply.

Step 233. DNA Center will save the configuration template, although configuration is not yet
pushed to the devices.
Verify a Success notification appears indicating the Authentication template was
saved.

Host Onboarding Part 2 (for CAMPUS VN)

The second step (of Host Onboarding) is to bind the IP Address Pools to the Virtual Networks (VNs). At
that point, these bound components are referred to as Host Pools. Multiple IP address pools can be
associated with the same VN. However, an IP Address Pool should not be associated with multiple VNs.
Doing so would allow communication between the VNs and break the first line of segmentation in SDA.

The second step (of Host Onboarding) has a multi-step workflow that must be completed for each VN.
1. Select the Virtual Network
2. Select the desired Pool(s)
3. Select the traffic type
4. Enable Layer-2 Extension (optional)



Step 234. From Provision > Fabric > Host Onboarding for the FEW_Fabric Fabric,
select CAMPUS under Virtual Networks.
The Edit Virtual Network dialog box appears for the CAMPUS VN.

Note: Notice that the available Virtual Networks are the VNs created during the Policy Application exercises. DNA
Center configuration operates on a specific workflow that has a distinct order of operation: Design, Policy, Provision,
and Assurance. This is also the order in which the applications are listed on DNA Center’s home page.

Step 235. The IP Address pools created during the Design Application exercises are displayed.
Select the checkboxes next to Production and Staff.
The boxes change to the checked state.
Step 236. From the Choose Traffic dialog box, select Data for both Production and Staff
Address Pools.
Step 237. By default, the Layer-2 Extension (Layer-2 Overlay) is enabled when an Address Pool is
associated with a VN. Leave these in the On position.



Step 238. Verify the options match the example below, and press Update.
The Modify Authentication Template dialogue box appears.

Note: The Layer-2 extension is not absolutely required in this lab, although the current
recommendation is to leave it On due to some changes in how ARP is forwarded in the Fabric. This extension is
used primarily with Fabric Wireless, although it might also be used in an environment where applications
communicate without IP (Layer-2 only). Leaving this extension On will not necessarily impact things in the lab
(except for some specifics of ARP), as the end hosts will have their packets forwarded by the Layer-3 process, not
the Layer-2 process. It will also allow the ability to see the full SDA LISP configuration for both Layer-3 and Layer-2.

Step 239. Select Now and click Apply.



Step 240. DNA Center will begin pushing configuration down to the devices.

Verify a Success notification appears indicating the Segment was associated with
the Virtual Network.

Step 241. DNA Center returns to Provision > Fabric > Host Onboarding for the FEW_Fabric.
Verify the CAMPUS VN is now highlighted in blue. This indicates IP Address Pools have been
bound to this VRF.



Host Onboarding Part 2 (for INFRA VN)

Step 242. From Provision > Fabric > Host Onboarding for the FEW_Fabric, select INFRA under
Virtual Network.
The Edit Virtual Network dialog box appears for the INFRA VN.

Host Onboarding Part 2 (for GUEST VN)

Step 243. From Provision > Fabric > Host Onboarding for the FEW_Fabric Fabric,
select GUEST under Virtual Networks.
The Edit Virtual Network dialog box appears for the GUEST VN.

Step 244. The IP Address Pools created during the Design Application exercises are displayed.
The IP Address Pools Production and Staff show no indication that they are currently
associated with another VN.



Scroll down and select the checkbox next to WiredGuest.
The box changes to the checked state.
Step 245. From the Choose Traffic dialog box, select Data.
Step 246. By default, the Layer-2 Extension (Layer-2 Overlay) is enabled when an Address Pool is
associated with a VN. Leave this in the On position.

Step 247. Verify the options match the example below, and press Update.
The Modify Authentication Template dialogue box appears.



Step 248. Select Now and click Apply.



Step 249. DNA Center will begin pushing configuration down to the devices.

Verify a Success notification appears indicating the Segment was associated with
the Virtual Network.

Step 250. DNA Center returns to the Provision > Fabric > Host Onboarding for the FEW_Fabric.
Verify the CAMPUS VN and GUEST VN are now highlighted in blue. This indicates
Address Pools have been bound to these VRFs.



Host Onboarding – Verification (Optional)

Host Onboarding verification supports two checks. The Segment verification validates that VLANs and
Interface VLANs (AnyCast Gateways) have been provisioned on edge nodes. Port Assignment
verification validates that a specific configuration was provisioned for a specific port.

Note: For Port Verification to provide meaningful information, a static port assignment must be made. This is done at
the bottom of the page in the Provision > Fabric > Host Onboarding page. The port assignment defines a static
Authentication assignment rather than using a dynamic authentication from ISE. It can be useful if specific ports
need a particular assignment or if Open or No Authentication templates are used.

Step 251. From Provision > Fabric > Host Onboarding for the FEW_Fabric, click the
Validation drop-down.
Step 252. Select Verification.
The Verification dialog box appears.

Step 253. Click next to Select All. box changes to .


Step 254. Click Start.



Step 255. The Verification check should complete with all green check marks for both edge nodes.
Click the link underneath Segment for EdgeNode2 to get more information.
A new browser window appears.

Step 256. Close the Segment browser window and close the Verification dialog box using their
respective buttons.

Note: DNA Center begins creating SVIs on the edge nodes at Interface VLAN 1021, moving upward for
the number of VNs. It also creates the associated (Layer-2) VLANs, 1021 and up.

VLAN 3999 is provisioned as the critical VLAN and VLAN 4000 as the voice VLAN. VLANs 1002-1005 were originally
intended for bridging with FDDI and Token Ring networks. Because of backward compatibility reasons in IOS and
VTP, these VLAN numbers remain reserved and cannot be used or deleted. They will always appear on the devices.
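
To spot-check these VLANs on an edge node (a suggested verification, not a required lab step), a standard
command can be used:

EdgeNode1# show vlan brief

The provisioned VLANs (1021 and up, 3999, and 4000) will appear alongside the reserved VLANs 1002-1005.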

This completes Exercise 10.



Exercise 11: Configuring External Connectivity
External (Internet) connectivity for the Fabric Domain has a significant number of possible variations,
and these variations are based on underlying network design. The common similarity among those
variations is that the DefaultBorder will be connected to some next-hop device. This could be an actual
Internet edge router, a fusion router, the ISP router, or some other next-hop device. While this next-
hop is critical to the solution, it is technically not part of Software-Defined Access. Software-Defined
Access is characterized not by the devices that DNA Center can discover – or what is referred to as DNA
Center compatibility – but by the device DNA Center can provision for a fabric role – or what is referred
to as SDA compatibility. Because of this, fusion routers or next-hop external devices are only briefly
mentioned or described in documentation such as the Software-Defined Access Cisco Validated Designs.
They are outside of the Fabric, and therefore they are not SDA – more specifically they are not managed
by the SDA processes of DNA Center.

In the lab, DefaultBorder is connected to FusionExternal as the external next-hop. DefaultBorder has a
static default route to FusionExternal via the Global Routing Table. FusionExternal has a static default
route to its next hop via its Global Routing Table. That next-hop – the cloud in the diagram below –
beyond FusionExternal provides access to the true Internet.

Figure 19-1: DefaultBorder and FusionExternal Default Routes

VRF to GRT and Back Again

Regardless of how the rest of the network itself is designed or deployed outside of the Fabric, a few
things are going to be in common. A default border will have the SDA prefixes in its VRF routing tables.
A default border will also have a route to its next hop in its global routing table.
Somehow, the default route must be advertised to the VRFs. This allows packets to egress the Fabric
towards the Internet. In addition, the SDA prefixes in the VRF tables must be advertised to the external
domain to draw (attract) packets back in.

The VRF configuration and BGP configuration will be pushed down to a default border by DNA Center.
This allows the device to operate as part of the Fabric. The VRF-Lite – the Layer-3 handoff – may or may
not be used in all deployments. However, using a Layer-3 handoff represents one of the less complex
ways to address this need of providing Internet access to the end-hosts.



Note: The solution used in the lab is one of many possible solutions to achieve the goal of providing Internet access
to the EID-space. The solution provided represents best practices based on the topology and need of the lab
deployment.

The concept behind the fusion router is that any manual configuration is completed on only the devices
that are not managed by the SDA processes in DNA Center. It is feasible to use policy-based routing or
other methods to configure route leaking between the Global and VRF routing tables on DefaultBorder.
Manual configuration should be strictly avoided on devices that have been added as Fabric Nodes. There
is a complex interaction between LISP, CEF, and (in some cases) BGP. To this end, the leaking between
Global and VRF routing tables will be completed on an external fusion router, FusionExternal, rather
than attempting to manually configure it on the DefaultBorder. This is considered a best practice.

One method would be to create a similar configuration on FusionExternal to the one completed on
FusionInternal. While this is technically feasible, it is not necessary to extend the VRFs to
FusionExternal unless the policy plane (SGTs) needs to be extended beyond the Fabric to the Internet
Domain. This type of policy extension is beyond the scope of this guide, although an early example can
be found in the LISP configuration guide for the ASR-1000 router.

Note: The default border solution in SDA – particularly with extending policy (SGTs) and with other types of hand-offs
and protocols – is continuously evolving. Its evolution is heavily dependent on what other Fabric (if any) the policy
plane is extending towards (example: SDA to ACI).

Lab Solution

In the lab, DefaultBorder will keep the DNA Center provisioned Layer-3 handoff. This automated
configuration will not be touched. On the other side of the link, FusionExternal will not become
VRF-aware. All SDA prefixes (the EID-space) will be learned in its Global Routing Table, and the default
route from FusionExternal will be advertised back to DefaultBorder via BGP. This BGP configuration on
FusionExternal will not use multi-protocol BGP or form adjacencies over a VRF session.

Why Will This Work?

VRFs are locally significant. The automated BGP configuration on DefaultBorder expects BGP
neighbors (adjacencies) via VRF routing tables. This just means that routes learned from a neighbor will
be installed into the VRF tables instead of the Global Routing Table. On the other side of the physical
link, FusionExternal, the BGP neighbor relationship does not need to be formed using VRFs. Said
another way, when a neighbor is defined under the BGP configuration, it simply means that the router is
going to exchange routes from that particular VRF or that address family with the defined neighbor.



Providing Internet access to the SDA Fabric (leaking between the global and VRF routing tables) is a
multistep process performed on the CLI of FusionExternal. These steps will be:
1. Creating the Layer-3 connectivity between DefaultBorder and FusionExternal
2. Using BGP to create adjacencies between DefaultBorder and FusionExternal
(This advertises the SDA prefixes to the external world).
3. Using network statements to advertise the default route back to DefaultBorder
to be inserted into and advertised by the VRF routing tables.

Figure 19-2: External Connectivity (Default Route Leaking) Workflow



Border Automation & Fusion Router Configuration Variations – DefaultBorder and FusionExternal

Critical Lab Guide Note: The configuration elements provisioned during your run-through are likely to be different.
Please be sure not to copy and paste from the Lab Guide unless instructed specifically to do so. Be aware of what
sub-interface is forwarding for which VRF and what IP address is assigned to that sub-interface on your particular lab
pod during your particular lab run-through. The fusion router’s configuration is meant to be descriptive in nature, not
prescriptive.

There are six possible variations in how DNA Center can provision the sub-interfaces and VRFs. This means there
are six variations in how FusionExternal needs to be configured to match DefaultBorder. These are provided and
detailed in Appendix K and are also provided as text and image files in the DNAC 1.1 folder on the desktop of the
Jump Host.

Identifying the Variation – DefaultBorder (CP-DBN) and FusionExternal

When following the instructions in the lab guide, DNA Center will provision three sub-interfaces on
DefaultBorder beginning with GigabitEthernet 0/0/0.3004 and continuing through GigabitEthernet
0/0/0.3006. These interfaces will be assigned an IP address with a /30 subnet mask (255.255.255.252)
and will always use the lower number (the odd number address) of the two available addresses.
DNA Center will vary which sub-interface is forwarding for which VRF and the Global Routing Table
(GRT).

To understand which explanatory graphic and accompanying configuration text file to follow, identify
the order of the VRF/GRT that DNA Center has provisioned on the sub-interfaces.

In the example above, Gig0/0/0.3004 is forwarding for the CAMPUS VRF, Gig0/0/0.3005 is forwarding for
the GRT, and Gig0/0/0.3006 is forwarding for the GUEST VRF.

The corresponding graphic – located in the DNAC 1.1 folder the desktop of the Jump Host and
Appendix K – is DefaultBorder Interface Order – CAMPUS, GRT, GUEST, and the corresponding text file, also
located where noted above, is FusionExternal - Campus, GRT, Guest. Please be sure to use the appropriate
files and do not directly copy and paste from the lab guide unless instructed directly and specifically to
do so.



Creating Layer-3 Connectivity

The first task is to provide IP connectivity from DefaultBorder to FusionExternal. This must be done for
each Virtual Network (VRF) that requires connectivity to unknown destinations outside of the Fabric.
DNA Center has automated the configuration of DefaultBorder in previous exercises. As a reminder, the
configuration of FusionExternal will be different than that of FusionInternal.

Table 19-1: DNA Center Configured Layer-3 Interfaces for HandOff - DefaultBorder

VLAN   VRF           IP Address         Interface
3004   CAMPUS        192.168.130.1/30   GigabitEthernet 0/0/0.3004
3005   Global (GRT)  192.168.130.5/30   GigabitEthernet 0/0/0.3005
3006   GUEST         192.168.130.9/30   GigabitEthernet 0/0/0.3006
N/A    N/A           192.168.59.5       GigabitEthernet 0/0

Figure 19-3: DefaultBorder (CP-DBN) and FusionExternal Interfaces

Using this information, a list of interfaces and IP addresses can be planned for FusionExternal.



Table 19-2: Interfaces to be Manually Configured – FusionExternal

Interface                 VRF  IP Address          VLAN
GigabitEthernet 0/0.3004  N/A  192.168.130.2/30    3004
GigabitEthernet 0/0.3005  N/A  192.168.130.6/30    3005
GigabitEthernet 0/0.3006  N/A  192.168.130.10/30   3006
GigabitEthernet 0/0       N/A  192.168.59.9        N/A

Since VRFs are not being extended to FusionExternal, no VRFs need to be created on this router. The
process can immediately begin with the creation of the sub-interfaces on FusionExternal.

Step 257. On the console of FusionExternal, create the Layer-3 sub-interface that will form an
adjacency with the CAMPUS VRF sub-interface on DefaultBorder.

configure terminal
interface GigabitEthernet0/0.3004

Step 258. Use a meaningful description for future troubleshooting.

description FusionExternal to DefaultBorder VRF CAMPUS Sub-Interface

Step 259. Use VLAN 3004 for the sub-interface.

encapsulation dot1Q 3004

Step 260. Configure the /30 IP Address that corresponds with DefaultBorder’s interface.

ip address 192.168.130.2 255.255.255.252

Step 261. Exit out of sub-interface configuration mode

exit

Note: The creation of the sub-interface on a physical interface that already has IP Address configuration will cause
the IS-IS adjacency to bounce. This is expected.

Step 262. Create the Layer-3 sub-interface that will form an adjacency using the GRT (INFRA-VN)
sub-interface on DefaultBorder. Once completed, exit sub-interface configuration
mode.

Description: FusionExternal to DefaultBorder GRT Sub-Interface



VLAN: 3005
IP Address: 192.168.130.6/30
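
Assembled from the values above and following the pattern in Steps 257-261, a sketch for the GRT
sub-interface (verify against your own pod's interface variation before pasting):

configure terminal
interface GigabitEthernet0/0.3005
description FusionExternal to DefaultBorder GRT Sub-Interface
encapsulation dot1Q 3005
ip address 192.168.130.6 255.255.255.252
exit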

Step 263. Create the Layer-3 sub-interface that will form an adjacency using the GUEST VRF
sub-interface on DefaultBorder.
Once completed, exit global configuration mode completely.

Description: FusionExternal to DefaultBorder VRF GUEST Sub-Interface


VLAN: 3006
IP Address: 192.168.130.10/30
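
And a sketch for the GUEST-facing sub-interface; as before, no vrf forwarding command is needed because
FusionExternal is not VRF-aware:

configure terminal
interface GigabitEthernet0/0.3006
description FusionExternal to DefaultBorder VRF GUEST Sub-Interface
encapsulation dot1Q 3006
ip address 192.168.130.10 255.255.255.252
end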

Step 264. Change to the console for DefaultBorder.


Ping FusionExternal from DefaultBorder using the vrf CAMPUS.

ping vrf CAMPUS 192.168.130.2

Step 265. Ping FusionExternal from DefaultBorder using vrf GUEST.

ping vrf GUEST 192.168.130.10

Step 266. Ping FusionExternal from DefaultBorder using the Global routing table and a sub-
interface.

ping 192.168.130.6

Step 267. Ping FusionExternal from DefaultBorder using the physical interface.



ping 192.168.59.9

Advertise SDA Prefixes via BGP

BGP is used to advertise the SDA prefixes to the FusionExternal router. As with the interface
configuration, DNA Center has fully automated the DefaultBorder BGP configuration. Earlier exercises
verified the BGP communications between the control plane nodes and DefaultBorder ensuring
communication is occurring and prefixes (NLRI) are being exchanged.

In the lab solution, the BGP Adjacencies created between DefaultBorder and FusionExternal use the
IPv4 Address Family. However, the adjacency will be formed over a VRF session on DefaultBorder’s side
of the link and formed over the Global Routing Table on the FusionExternal’s side of the link. Because
things were fully automated on DefaultBorder, it is simply a matter of configuring FusionExternal to
form adjacencies and accept the routes. No VRFs or address-family ipv4 vrf statements are needed. As a reminder,
when a neighbor is defined under the BGP configuration, it simply means that the router is going to
exchange routes from that particular VRF or that particular address family with the defined neighbor.
On a router without VRFs, like FusionExternal, routes are exchanged using the GRT.

Step 268. Create the BGP process on FusionExternal.


Use the corresponding Autonomous-System number defined in DNA Center on the
DefaultBorder for the external world.

configure terminal
router bgp 65333



Step 269. Use Loopback 0 as the BGP Router-ID.

bgp router-id interface Loopback0

Step 270. Define DefaultBorder as a neighbor using its corresponding AS Number.


This neighbor should use the IP address associated with the DefaultBorder’s VRF
CAMPUS sub-interface.

neighbor 192.168.130.1 remote-as 65004

Step 271. Define the update-source to use the appropriate sub-interface.

neighbor 192.168.130.1 update-source GigabitEthernet0/0.3004

Step 272. Define DefaultBorder as another neighbor using its corresponding AS Number.
This neighbor should use the IP address associated with the DefaultBorder’s GRT sub-
interface.

neighbor 192.168.130.5 remote-as 65004

Step 273. Define the update-source to use the appropriate sub-interface.

neighbor 192.168.130.5 update-source GigabitEthernet0/0.3005

Step 274. Define DefaultBorder as yet another neighbor using its corresponding AS Number.
This neighbor should use the IP address associated with the DefaultBorder’s VRF GUEST
sub-interface.

neighbor 192.168.130.9 remote-as 65004

Step 275. Define the update-source to use the appropriate sub-interface.

neighbor 192.168.130.9 update-source GigabitEthernet0/0.3006

Step 276. Activate the exchange of NLRI with the all the neighbors associated with DefaultBorder.

address-family ipv4
neighbor 192.168.130.1 activate
neighbor 192.168.130.5 activate
neighbor 192.168.130.9 activate

Step 277. Exit the IPv4 Address-Family.

exit-address-family

Step 278. Exit BGP configuration mode and exit global configuration mode entirely.

exit
end



Step 279. Confirm that three adjacencies come Up.
• 192.168.130.1
• 192.168.130.5
• 192.168.130.9
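
As a suggested check (the same command is used in Step 281), all three neighbors should be listed with a
numeric value in the State/PfxRcd column:

FusionExternal# show ip bgp ipv4 unicast summary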



Extending the VRFs – Validation

Because multi-protocol BGP and VPNv4 prefixes are not being used on FusionExternal, only BGP
commands for IPv4 can be used to verify prefixes learned from DefaultBorder. Both IPv4 and VPNv4
commands will be used on DefaultBorder for verification.

Note: Due to the recursive routing caused by mutual redistribution of BGP and LISP, it is possible to sometimes see
LISP routes for the shared services domain in the control plane nodes’ VRF routing tables, BGP routes to the shared
services domain in DefaultBorder’s VRF routing tables, and BGP routes to the shared services domain in the GRT of
FusionExternal. The following captures will not show these potential recursive routes.

Step 280. Display the IPv4 BGP Adjacency, messages, and prefix advertisements on DefaultBorder.

DefaultBorder# show ip bgp ipv4 unicast summary


BGP router identifier 192.168.255.5, local AS number 65004
BGP table version is 7, main routing table version 7
2 network entries using 496 bytes of memory
2 path entries using 272 bytes of memory
1/1 BGP path/bestpath attribute entries using 280 bytes of memory
1 BGP community entries using 24 bytes of memory
2 BGP extended community entries using 48 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 1120 total bytes of memory
BGP activity 19/10 prefixes, 27/11 paths, scan interval 60 secs

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


192.168.130.6 4 65333 24 21 7 0 0 00:15:21 0
192.168.255.4 4 65004 4904 4908 7 0 0 3d02h 1
192.168.255.8 4 65004 4899 4904 7 0 0 3d02h 1

1. The BGP adjacency with FusionExternal over the GRT is up and has received zero (0) prefixes.

Step 281. Display the IPv4 BGP Adjacency, messages, and prefix advertisements on FusionExternal.

FusionExternal# show ip bgp ipv4 unicast summary


BGP router identifier 192.168.255.9, local AS number 65333
BGP table version is 10, main routing table version 10
5 network entries using 720 bytes of memory
5 path entries using 440 bytes of memory
2/2 BGP path/bestpath attribute entries using 320 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 1504 total bytes of memory



BGP activity 7/2 prefixes, 7/2 paths, scan interval 60 secs

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


192.168.130.1 4 65004 5 10 10 0 0 00:02:47 2
192.168.130.5 4 65004 7 9 10 0 0 00:02:53 2
192.168.130.9 4 65004 6 10 10 0 0 00:02:34 1

1. The BGP adjacencies with all three neighbor interfaces on DefaultBorder are up and receiving prefixes.

Step 282. Display the VPNv4 BGP Adjacency, messages, and prefix advertisements on
DefaultBorder.

DefaultBorder# show bgp vpnv4 unicast all summary


BGP router identifier 192.168.255.5, local AS number 65004
BGP table version is 7, main routing table version 7
3 network entries using 768 bytes of memory
6 path entries using 816 bytes of memory
4/2 BGP path/bestpath attribute entries using 1184 bytes of memory
1 BGP community entries using 24 bytes of memory
2 BGP extended community entries using 48 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 2840 total bytes of memory
BGP activity 5/0 prefixes, 8/0 paths, scan interval 60 secs

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd


192.168.130.2 4 65333 15 10 7 0 0 00:06:43 0
192.168.130.10 4 65333 14 10 7 0 0 00:06:30 0
192.168.255.4 4 65004 18 11 7 0 0 00:06:46 3
192.168.255.8 4 65004 18 11 7 0 0 00:06:47 3

1. No VPNv4 Routes are learned from the FusionExternal. This is expected, as no routes have been advertised
from FusionExternal. The adjacency is up, though.



Step 283. Display the routes known to CAMPUS VRF on DefaultBorder (CP-DBN).
There will be no changes from the same verification performed earlier.

show ip route vrf CAMPUS

Step 284. Display the routes known to GUEST VRF on DefaultBorder (CP-DBN).
There will be no changes from the same verification performed earlier.

show ip route vrf GUEST



Step 285. Show all routes learned from BGP on FusionExternal along with their next-hop.

FusionExternal# show bgp all

For address family: IPv4 Unicast


BGP table version is 10, local router ID is 192.168.255.9
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
t secondary path,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

Network Next Hop Metric LocPrf Weight Path


*> 172.16.101.0/24 192.168.130.1 0 65004 i
*> 172.16.201.0/24 192.168.130.1 0 65004 i
*> 172.16.250.0/24 192.168.130.9 0 65004 i
*> 192.168.255.4/32 192.168.130.5 0 65004 i
*> 192.168.255.8/32 192.168.130.5 0 65004 i



Note: The command show bgp all can be used on DefaultBorder to see specifically which prefixes (NLRI) are learned
and by which BGP address-family they are learned.

1. Prefixes learned from the BGP IPv4 session.
2. Prefixes learned from the BGP VPNv4 session for the VRF CAMPUS.
3. Prefixes learned from the BGP VPNv4 session for the VRF GUEST.



Advertise the Default Route to DefaultBorder

FusionExternal has routes to the SDA prefixes learned from DefaultBorder in its global routing table. It
also has a default route to its next-hop router. This default route must be advertised back to
DefaultBorder.

There are four different methods to advertise a default route in BGP – each with its own caveats.
Three of the methods are very similar and result in the same effect: a default route is injected into the
BGP RIB and is then advertised to neighbors. The origin of that route is the key difference between
these methods.
1. network 0.0.0.0
a. This will inject the default route into BGP if there is a default route present in the GRT.
2. redistribution
a. This will inject the default route into BGP if distributing from another routing protocol
(OSPF, IS-IS, EIGRP). The default route must currently be in the GRT AND be learned
from the routing protocol that is being redistributed (for example, redistributing IS-IS
when the default route was learned from IS-IS).
3. default-information originate
a. This will cause the default route to be artificially generated and injected into the BGP
RIB regardless of whether or not it exists in the GRT.
b. In modern Cisco software versions, this also requires a redistribution statement to
trigger the default route to be advertised. The default-information originate command
alone is not enough to trigger the advertisement of the default route.
4. neighbor X.X.X.X default-originate
a. This command will only advertise the default route to a specific neighbor
- All previous approaches advertised the default route to all neighbors.
b. The default route will not be present in the BGP RIB where this command is configured –
which prevents advertisements to all neighbors.
c. The default route is artificially generated and injected into BGP similarly to default-
information originate.
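
As an illustration of the first method only (the lab itself uses the fourth method in the steps that follow),
injecting a default route with a network statement on FusionExternal would look like this sketch, assuming a
default route is already present in its GRT:

configure terminal
router bgp 65333
address-family ipv4
network 0.0.0.0
exit-address-family
end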



Each approach has its own benefits and considerations. The neighbor X.X.X.X default-originate option
is particularly useful in cases with stub autonomous systems, where a full BGP table is not
required.

If the lab were an actual deployment, FusionExternal would likely be BGP peered with an upstream
router. In this scenario, any method that advertises the default route to all
BGP peers presents a less than optimal approach.

The lab topology is also best described as a stub autonomous system. For these reasons, the
most optimal approach is the fourth option, neighbor X.X.X.X default-originate, for advertising
the default route.

Deployment Note: In production, BGP peering with upstream routers generally uses filter-lists to block things like an
ill-configured default route advertisement that might create routing loops.

Whichever method is used to advertise a default route via BGP, it must be carefully considered based on the needs and
design of the deployment.

Lab Environment Specific Configurations – Advertising Additional Routes

Due to the physical location (Cisco DMZ) and default firewall rules on the true edge (the next-hop router
beyond FusionExternal) of the lab network, pings, traceroutes, NTP, and nslookup are all blocked except
to certain IP addresses. To simplify verification of connectivity from the Fabric to non-Fabric,
FusionExternal will also advertise its Loopback 77 into BGP. That IP address is 7.7.7.7/32. It will
represent an unknown destination – a destination on the outside of the Fabric or ostensibly the Internet
– and will serve as a destination for testing connectivity and the configuration.

Step 286. Enter BGP configuration mode on FusionExternal.

configure terminal
router bgp 65333

Step 287. Add a network statement to advertise Loopback 77.

network 7.7.7.7 mask 255.255.255.255

Step 288. Advertise the default route to the specific neighbor 192.168.130.1.

address-family ipv4
neighbor 192.168.130.1 default-originate

Step 289. Advertise the default route to the specific neighbor 192.168.130.5.

neighbor 192.168.130.5 default-originate



Step 290. Advertise the default route to the specific neighbor 192.168.130.9.

neighbor 192.168.130.9 default-originate

Step 291. Exit address-family configuration mode, and then exit global configuration mode
entirely.

exit-address-family
exit
end
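
Note: Optionally, the advertisement can be confirmed from FusionExternal itself before moving on. Because the
peerings configured above are plain IPv4 sessions, the following command lists the prefixes advertised to a given
neighbor; 0.0.0.0/0 and 7.7.7.7/32 should appear in its output.

FusionExternal# show ip bgp neighbors 192.168.130.1 advertised-routes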

Default Route Verification


Step 292. Wait approximately one to two minutes to allow routes to propagate.
Step 293. Display the routes now known to vrf CAMPUS on the DefaultBorder.

DefaultBorder# show ip route vrf CAMPUS

Routing Table: CAMPUS


Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is 192.168.130.2 to network 0.0.0.0

B*    0.0.0.0/0 [20/0] via 192.168.130.2, 00:02:09      <-- Internet default route known to the CAMPUS VRF
      7.0.0.0/32 is subnetted, 1 subnets
B        7.7.7.7 [20/0] via 192.168.130.2, 00:03:51     <-- FusionExternal's Loopback 77 known to the CAMPUS VRF
      172.16.0.0/24 is subnetted, 2 subnets
B        172.16.101.0 [200/0] via 192.168.255.4, 00:35:37
B        172.16.201.0 [200/0] via 192.168.255.4, 00:35:37
      192.168.130.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.130.0/30 is directly connected, GigabitEthernet0/0/0.3004
L        192.168.130.1/32 is directly connected, GigabitEthernet0/0/0.3004

Step 294. Display the routes now known to vrf GUEST on the DefaultBorder.

DefaultBorder# show ip route vrf GUEST

Routing Table: GUEST


Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is 192.168.130.10 to network 0.0.0.0

B*    0.0.0.0/0 [20/0] via 192.168.130.10, 00:06:09     <-- Internet default route known to the GUEST VRF
      7.0.0.0/32 is subnetted, 1 subnets
B        7.7.7.7 [20/0] via 192.168.130.10, 00:07:51    <-- FusionExternal's Loopback 77 known to the GUEST VRF
      172.16.0.0/24 is subnetted, 1 subnets
B        172.16.250.0 [200/0] via 192.168.255.4, 00:39:37
      192.168.130.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.130.8/30 is directly connected, GigabitEthernet0/0/0.3006
L        192.168.130.9/32 is directly connected, GigabitEthernet0/0/0.3006
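
Note: As an optional end-to-end check, the 7.7.7.7 test destination can be pinged from within each VRF on
DefaultBorder. This sketch assumes the VRF names shown above.

DefaultBorder# ping vrf CAMPUS 7.7.7.7
DefaultBorder# ping vrf GUEST 7.7.7.7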

Step 295. If time permits, explore the routes known to the VPNv4 address-family and to the VRF
tables on ControlPlaneNode. DefaultBorder will learn the 7.7.7.7/32 and default routes
from FusionExternal; these will be advertised to both control plane nodes in each VRF.
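
Note: One way to perform this exploration on DefaultBorder, assuming the VRF names used in this lab:

DefaultBorder# show bgp vpnv4 unicast all
DefaultBorder# show bgp vpnv4 unicast vrf CAMPUS 0.0.0.0
DefaultBorder# show bgp vpnv4 unicast vrf GUEST 7.7.7.7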

Note: The command show bgp all can be used one final time on DefaultBorder to see specifically which prefixes (NLRI)
are learned and through which BGP address-family they are learned.

1. Default route learned through the BGP IPv4 session.
2. Default route learned through the BGP VPNv4 session for the VRF CAMPUS.
3. Default route learned through the BGP VPNv4 session for the VRF GUEST.

This completes Exercise 11.

Exercise 12: Testing Wireless Connectivity

To verify wireless host connectivity, you will use Apache Guacamole
(http://192.168.100.100:8080/guacamole/#/) to launch a console for the wireless host VM.

Step 296. From the jump host, open the web browser and click the bookmarked Guacamole
link (PC VMs) to connect to your wireless host.

Step 297. Open a console window to the PC-Wireless VM by selecting it and right-clicking.

Enable the Wireless Client to Connect

Step 298. Once the console is open, click the wireless networking icon in the
system tray and open the available SSID panel.
Click the SSID for your Pod and connect.

The connection should succeed.
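
Note: Optionally, connectivity through the Fabric can be confirmed from the wireless client itself. Assuming the
client received an address via DHCP, the following commands in a Windows command prompt display the assigned
address and test reachability to the 7.7.7.7 destination advertised earlier.

C:\> ipconfig
C:\> ping 7.7.7.7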

This completes Exercise 12.

This completes the DNA Center 1.2.6 Wireless Automation Lab.
