Implementing Software-
Defined Access w/ Fabric
Enabled Wireless
Lab Guide
Version 2.00
10 March 2019
Software-Defined Access
SD-Access Benefits
The Cisco Digital Network Architecture Center (DNA Center) is a collection of applications, processes,
services, and packages running on a dedicated hardware appliance. DNA Center is the centralized
manager and the single pane of glass that powers Cisco’s Digital Network Architecture (DNA). DNA
begins with the foundation of a digital-ready infrastructure that includes the routers, switches, access-
points, and Wireless LAN controllers. The Identity Services Engine (ISE) is the key policy-manager for the
DNA Solution. DNA Center is brought to life with the simplified workflow of Design, Provision, Policy,
and Assurance. These are the tools that power DNA Center.
DNA Center has its own internal architecture that is composed of three parts. The base part is called
maglev. On this base are two software stacks: Network Controller Platform (NCP) and Network Data
Platform (NDP). NCP is often referred to as Automation, and NDP is referred to as Analytics. The NCP
software stack provides the Software-Defined Access solution, and the NDP software stack provides the
Assurance solution.
These software stacks have several abstracted network services, network applications, bundles and
plugins. There are hundreds of microservices running, and this number continues to grow as additional
functionality is added to DNA Center. Exposed directly to the user are packages. These are the visible
deployed units seen in the Settings page. From the perspective of the DNA Center dashboard – the GUI –
these packages make up the components that power the DNA Center workflows through applications
and tools. On the dashboard, applications are the top rows of icons and the tools are the bottom rows.
DNA Center is the physical appliance running Applications and Tools. Software-Defined Access (SDA) is
just one of the solutions provided by the DNA Center appliance. The difference is very subtle, but it is
critical to understand, particularly when referring to version numbering.
Another solution provided by DNA Center is Assurance. Additional solutions on the roadmap include
Cisco SD-WAN (Viptela) and Wide Area Bonjour. While each of these solutions provided by DNA Center
is tightly integrated, some can run in isolation. A DNA Center appliance can be deployed for Assurance
(Analytics) and/or deployed for Software-Defined Access (Automation). This is accomplished by the
installation / uninstallation of certain packages as listed in the release notes.
Note: If Automation and Assurance are deployed in this isolated manner, these two deployments would not coexist in
the same network. They would be independent and separate networks with no knowledge of the other. Unlike DNA
Center 1.0 where Automation and Assurance were a two-box solution, a DNA Center 1.2 deployment utilizing both
solutions must have them both installed on the same appliance.
Cisco DNA Center has had a rich history of version upgrades since Early Field Trials (EFT) and First
Customer Shipment (FCS). A regular set of DNA Center patches is on the roadmap for the foreseeable
future.
The SDA solution version is a combination of Cisco DNA Center controller version, Device Platform
(IOS, IOS XE, NX-OS, and AireOS) version, and the version of Cisco ISE. The SDA version and the DNA
Center version are not the same thing. This means there are different device compatibility
specifications with DNA Center (for Assurance) and Software-Defined Access (for Automation).
When upgrading any SDA components, including DNA Center controller version, device platform
version, and ISE version, it is critically important to pay attention to the versions required to maintain
compatibility between all the components. The SD-Access Product Compatibility matrix – located here –
will indicate the compatible DNA Center, device platform, and ISE versions.
At the time this lab guide was created, the current SDA version is SDA 1.2 (update 1.2.6). This indicates
that SDA is running on DNA Center version 1.2.6. There are now two separate DNA Center code
trains – 1.1.x and 1.2.x – with 1.2.x bringing new features to the platform.
Fabric Enabled Wireless (FEW), or SD-Access Wireless, is defined as the integration of wireless access
into the SD-Access architecture to gain all the advantages of the Fabric and DNA-C automation.
• Centralized Wireless control plane: The same innovative RF features that Cisco has today in
Cisco Unified Wireless Network (CUWN) deployments will be leveraged in SD-Access Wireless as
well. Wireless operations stay the same as with CUWN in terms of RRM, client onboarding,
client mobility, and so on, which simplifies IT adoption
• Optimized Distributed Data Plane: The data plane is distributed at the edge switches for
optimal performance and scalability without the hassles usually associated with distributing
traffic (spanning VLANs, subnetting, large broadcast domains, etc.)
• Seamless L2 Roaming everywhere: SD-Access Fabric allows clients to roam seamlessly across
the campus while retaining the same IP address
• Simplified Guest and mobility tunneling: An anchor WLC is no longer needed, and guest
traffic can go directly to the DMZ without hopping through a Foreign controller
• Policy simplification: SD-Access breaks the dependencies between Policy and network
constructs (IP addresses and VLANs), simplifying the way Policies can be defined and implemented
for both wired and wireless clients
• Best in Class Wireless with Future-proof Wave2 APs and 3504/5520/8540 WLCs
• Investment protection by supporting existing AireOS WLCs; SD-Access Wireless is optimized for
802.11ac Wave 2 APs but also supports Wave 1 APs
If a customer has a wired network based on the SD-Access Fabric, there are two options to integrate
wireless access: SD-Access Wireless and traditional wireless running Over the Top (OTT).
Let's first examine SD-Access Wireless, since it brings the full advantages of the Fabric to wireless users
and things. The architecture and the main components are introduced first, and then the reader will
learn how to set up an SD-Access Wireless network using DNAC.
OTT is basically running traditional wireless on top of an SDA Fabric wired network. This option will be
covered later in the document together with the Design considerations.
In SD-Access Wireless, the control plane is centralized: this means that, as with CUWN, a CAPWAP
tunnel is maintained between the APs and the WLC. The main difference is that the data plane is distributed
using VXLAN directly from the Fabric-enabled APs. The WLC and APs are integrated into the Fabric, and the
Access Points connect to the Fabric overlay (Endpoint ID space) network as “special” clients.
Below is the main architecture diagram with a description of the main components.
The following sections describe the roles and functions of the main components of the SD-Access
Wireless Architecture
Fabric Control-Plane Node is based on a LISP Map Server / Resolver and runs the Fabric Endpoint ID
Database to provide overlay reachability information.
• CP is the Host Data Base, tracks Endpoint ID (EID) to Edge Node bindings, along with other
attributes
• CP supports multiple types of Endpoint ID lookup keys (IPv4 /32, IPv6 /128 (*) or MAC
addresses)
• It receives prefix registrations from Edge Nodes and Fabric WLCs for wired local endpoints and
wireless clients respectively
• It resolves lookup requests from remote Edge Nodes to locate Endpoints
• It updates Fabric Edge nodes and Border nodes with wireless client mobility and RLOC
information
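Note: Once the Fabric is built later in the lab, the control plane node's host database can be inspected from its CLI.
The commands below are a hedged verification sketch; exact output varies by IOS XE release.
! On the control plane node (for example CPN-BN) – view the LISP site and the EIDs
! registered by the fabric edges and the Fabric WLC
show lisp site
show lisp site detail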
Fabric edge provides connectivity for Users and Devices connected to the Fabric.
All traffic entering or leaving the Fabric goes through the Fabric Border.
• There are 2 types of Fabric Border node: Border and Default Border. Both types provide the
fundamental routing entry and exit point for all data traffic going into and/or out of the Fabric
Overlay, as well as for VN and/or group-based policy enforcement (for traffic outside the Fabric).
• A Fabric Border is used to add "known" IP/mask routes to the map system. A known route is
any IP/mask that you want to advertise to your Fabric Edge nodes (e.g. Remote WLC, Shared
Services, DC, Branch, Private Cloud, and so on)
• A Default Border is used for any "unknown" routes (e.g. Internet or Public Cloud), as a gateway
of last resort
• A Border is where Fabric and non-Fabric domains exchange Endpoint reachability and policy
information
• Border is responsible for translation of context (VRF and SGT) from one domain to another
Note: The WLC and APs need to be within 20 ms of latency of each other; usually this means the same physical site.
Fabric AP
▪ For Fabric enabled SSIDs, the AP converts 802.11 traffic to 802.3 and encapsulates it into
VXLAN, encoding the VNI and SGT info of the client
▪ The AP forwards client traffic based on the forwarding table as programmed by the WLC. Usually
the VXLAN tunnel destination is the first-hop switch
▪ SGT and VRF based policies for wireless users on Fabric SSIDs are applied at the Fabric
edge, the same as for wired
• For Fabric enabled SSIDs the user data plane is distributed at the APs, leveraging VXLAN as the
encapsulation
• The AP applies all wireless-specific features like SSID policies, AVC, QoS, etc.
Note: For feature support on APs please refer to the WLC release notes.
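Note: On the fabric edge side, the per-AP VXLAN tunnels described above can be checked with the access-tunnel
command shown below (a hedged example; exact output varies by IOS XE release).
! On a fabric edge switch – lists the VXLAN access tunnels built toward fabric enabled APs
show access-tunnel summary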
Below is a graphical and detailed explanation of the protocols and interfaces used in SD-Access Wireless:
The SD-Access Wireless Architecture is supported on the following Wireless LAN controllers with AireOS
release 8.5 and higher:
• AIR-CT3504
• AIR-CT5520
• AIR-CT8540
This architecture is optimized for Wave2 11ac access points in Local mode: AP1810, AP1815, AP1830,
AP1850, AP2800 and AP3800
Wave1 802.11ac access points are supported with SD-Access wireless with a limited set of features
compared to Wave2 (please refer to the Release notes for more information). Outdoor APs are not
supported in SDA 1.1 (November release). Support for these access points is on the roadmap.
There are some important considerations when deploying the WLC and APs in an SD-Access Wireless
network; please refer to the picture below:
AP to WLC communication
From a network deployment perspective, the Access Points are connected in the overlay network while
the WLC resides outside the SD-Access fabric in the traditional IP network.
The WLC subnet will be advertised into the underlay so fabric nodes in the network (fabric edge and
control plane) can do native routing to reach the WLC. The AP subnets in the overlay will be advertised to
the external network so the WLC can reach the APs via the overlay.
Let's look a bit deeper into how the CAPWAP traffic flows between the APs and the WLC for Fabric enabled
SSIDs. This is control plane traffic only – AP join and all other control plane traffic. (Client data
plane traffic does not go to the WLC, as it is distributed from the APs to the switch using VXLAN.)
• The Border (internal or external) redistributes the WLC route into the underlay (using the IGP of choice)
• The FE learns the route in the Global Routing Table
• When the FE receives a CAPWAP packet from an AP, the FE finds a match in the RIB and the packet is
forwarded with no VXLAN encapsulation
• The AP-to-WLC CAPWAP traffic travels in the underlay
The North-South direction CAPWAP traffic, from WLC to APs, is described in the picture below:
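Note: The redistribution step in the first bullet above is illustrated by the hedged sketch below. OSPF as the underlay
IGP, BGP AS 65004, and the 192.168.40.0/24 WLC subnet are assumptions for illustration only; your IGP and
addressing will differ.
! On the (internal) border – advertise only the WLC subnet into the OSPF underlay so
! fabric edges can reach the WLC natively (no VXLAN encapsulation)
ip prefix-list WLC-SUBNET seq 5 permit 192.168.40.0/24
!
route-map WLC-ONLY permit 10
 match ip address prefix-list WLC-SUBNET
!
router ospf 1
 redistribute bgp 65004 subnets route-map WLC-ONLY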
Lab Exercises:
• Exercise 1: Introduction to DNA Center 1.2 - The lab begins with a quick overview of the new DNA-C
1.2.6.
• Exercise 2: Using the DNA Center Discovery Tool - The DNA Center Discovery tool will be used to
discover and view the underlay devices.
• Exercise 3: Using the DNA Center Inventory Tool - The Inventory tool will be used to help lay out
the topology maps for later use in the lab.
• Exercise 4: Integrating DNA Center with the Identity Services Engine (ISE) - DNA Center and ISE will
be integrated using ISE’s pxGrid interface.
• Exercise 5: Using the DNA Center Design Application - Sites will be created using the Design
Application. Here, common attributes, resources, and credentials are defined for re-use during
various DNA Center workflows.
• Exercise 6: Using the DNA Center Policy Application - Virtual Networks are then created in the
Policy Application, thus creating network-level segmentation. During this step, groups (SGTs)
learned from ISE will be associated with the Virtual Networks, creating micro-level segmentation.
• Exercise 7: Using the DNA Center Provision Application - Discovered devices will be provisioned to
the Site created in the Design Application.
• Exercise 8: Creating the Fabric Overlay - The overlay fabric will be provisioned.
• Exercise 9: Fusion Routers and Configuring FusionInternal Router - Fusion routers will be discussed
in detail, and FusionInternal will be configured.
• Exercise 10: Host On-boarding – Access Points and End hosts will be on-boarded in this exercise.
• Exercise 11: DefaultBorder Provisioning - DefaultBorder (external border node) will be provisioned
to join the Fabric.
• Exercise 12: Configuring External Connectivity - External connectivity to the Internet from the
Fabric will be configured.
• Exercise 13: Testing Wireless External Connectivity - External connectivity will be tested.
The core of the network is a Catalyst 3850 (copper) switch called the LabAccessSwitch. It is the
intermediate node between the other devices that will ultimately become part of the Software-Defined
Access Fabric. It is directly connected to a Catalyst 9300 (#2) and another Catalyst 9300 (#3) switch.
These will both act as edge nodes in the Fabric. The LabAccessSwitch (#1) is also directly connected to a
pair of Catalyst 3850 (fiber) switches (#4 and #5) that will act as co-located control plane & border
nodes in the Fabric: CPN-DBN (#4) and CPN-BN (#5). LabAccessSwitch (#1) is directly connected to the
segment that provides access to DNA Center, ISE, and the Jump Host (not pictured) and is the default
gateway for all of these management devices.
The lab also contains a pair of ISR-4451s. FusionInternal (#6) acts as a fusion router for the shared services. This internal
fusion router provides network access to the WLC-3504 and the DHCP/DNS server, which is a Windows
Server 2012 R2 virtual machine. The Internet edge device, FusionExternal (#7), will perform some route
leaking and acts as an (external) fusion router.
Two Windows 7 machines act as the host machines in the network. The Windows 7 machines are
connected to GigabitEthernet 1/0/23 on EdgeNode1 (#2) and EdgeNode2 (#3).
The table below provides the access information for the devices within a given pod. Usernames and
passwords are case sensitive.
Note: Due to current compatibility, please use Google Chrome as the browser throughout the lab.
Step 1. Open the browser to DNA Center using the management IP address
https://192.168.100.10 and login with the following credentials:
Username: admin
Password: DNACisco!
Note: DNA Center’s login screen is dynamic. It may have a different background. When first accessing the login
page, the browser may appear to freeze for up to approximately twenty seconds. This is directly related to our lab
environment and will not happen in production. It does not impact any configuration or actual performance of DNA
Center.
DNA Center’s SSL certificate may not be automatically accepted by your browser. If this occurs, use the advanced
settings to allow the connection – as shown below.
Step 3. To view the DNA Center version, click on the gear at the top right and then select
About DNA Center. Notice the DNA Center Controller version 1.2.6.
These areas contain the primary components for creating and managing the Solutions
provided by the DNA Center Appliance.
DNA Center 1.2 introduced the Assurance Application along with the Telemetry,
Network Plug and Play and Command Runner Tools among others.
To view the System Settings, click on the gear at the top right, and then select System
Settings.
Step 8. On the System 360 Tab, DNA Center displays the number of currently running primary
services. To view the running services, click the Up button.
Note: Individual service log information can be accessed by hovering over a service and clicking the Grafana or
Kibana logos. These features are beyond the scope of this guide.
Deployment Note: Recall that the solution, Software Defined Access, is dependent on the DNA Platform version, the
Individual package versions, and the IOS / IOS XE software versions. Research should be performed before
arbitrarily updating packages in DNA Center.
Note: The screen shot above was taken during an early upgrade cycle. The DNA Center in the lab may or may not
show updates available, as DNA Center updates are being released approximately every two weeks.
If updates are available, please DO NOT attempt to update the DNA Center appliance in the lab. This lab guide is
currently based on DNA Center 1.2.6.
Step 10. Click the Apps button to return to the DNA Center dashboard.
Note: Outside of the lab environment, this IP address could be any Layer-3 interface or Loopback Interface on any
switch that DNA Center has IP reachability to. In this lab, DNA Center is directly connected to the LabAccessSwitch
on Gig 1/0/12. That interface has an IP address of 192.168.100.6. The LabAccessSwitch is also DNA Center’s
default gateway to the actual Internet. It represents the best starting point to discover the lab topology.
Note: DNA Center uses the CDP table information (show cdp neighbors) of the defined device (192.168.100.6 /
LabAccessSwitch) to find CDP neighbors. It will continue to find CDP neighbors to the depth provided by CDP
level. This is done by querying the CISCO-CDP-MIB via SNMP.
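Note: As a quick sanity check before starting the Discovery job, the same neighbor information that DNA Center will
walk via SNMP can be viewed on the seed device itself. The commands below are a suggested spot check, not part of
the required lab flow.
! On the seed device (LabAccessSwitch) – the CDP neighbors DNA Center will walk via the CISCO-CDP-MIB
show cdp neighbors
show cdp neighbors detail | include Device ID|IP address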
Note: This will instruct DNA Center to use the Loopback IP address of the discovered equipment for management
access. DNA Center will use Telnet/SSH to access the discovered equipment through their Loopback IP address.
Later, DNA Center will configure the Loopback as the source interface for RADIUS/TACACS+ packets.
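Note: For reference, the note above corresponds to source-interface commands of the following kind. This is an
illustrative sketch only – DNA Center provisions the equivalent configuration automatically, and the interface name is
an assumption.
! Illustrative only – pin RADIUS/TACACS+ traffic to the Loopback so it matches the management IP
ip radius source-interface Loopback0
ip tacacs source-interface Loopback0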
Step 17. At a minimum, CLI and SNMPv2 Read/Write credentials must be defined. The routers
and switches will be discovered using the CLI and SNMPv2 credentials.
Step 19. The final step is to enable Telnet as a discovery protocol for any devices not configured
to support SSH.
Scroll down the page and click to open the Advanced section.
Click the Telnet protocol and ensure it has a blue check mark next to it. A warning message
should appear. Click OK to close it.
Bug Note: A current bug in DNA Center removes the previous Preferred Management IP selection after interacting
with the Advanced section. This bug is also intermittent and may not happen during your lab.
Please ensure The Preferred Management IP is Use Loopback before clicking the Start button in the next step.
Step 20. Click the Start button in the lower right-hand corner to begin the discovery job.
Step 21. As the Discovery progresses, the page will present the devices on the right-hand side.
This may require clicking the icon to display the Discovered devices.
Note: Full discovery with this number of CDP hops may take up to ten minutes to complete. While devices may show
as Discovered and the Discovery job may appear Complete, the entire process itself is not complete until devices
have fully populated in the Inventory tool.
Step 22. Verify that the Discovery process was able to find Seven (7) devices.
Step 23. Verify that the devices with the following IP addresses have been discovered.
• 192.168.255.1 – EdgeNode1.dna.local
• 192.168.255.2 – EdgeNode2.dna.local
• 192.168.255.4 – CPN-DBN.dna.local
• 192.168.255.6 – LabAccessSwitch.dna.local
• 192.168.255.7 – FusionInternal.dna.local
• 192.168.255.8 – CPN-BN.dna.local
• 192.168.255.9 – FusionExternal.dna.local
Note: You may use the credentials that were saved earlier from the previous discovery as they are the same.
Step 29. The final step is to enable Telnet as a discovery protocol for any devices not configured
to support SSH.
Scroll down the page and click to open the Advanced section.
Click the Telnet protocol and ensure it has a blue check mark next to it. A warning message
should appear. Click OK to close it.
Rearrange the order so that Telnet comes before SSH.
Step 31. As the Discovery progresses, the page will present the devices on the right-hand side.
This may require clicking the icon to display the Discovered devices.
Step 32. Verify that the Discovery process was able to find the WLC. It should show up as
follows:
Step 33. Click the Apps button to return to the DNA Center dashboard.
Step 34. From the DNA Center Home Page, click the Inventory tool.
A bookmark is also available in the browser.
Step 35. All eight discovered devices should show as Reachable and Managed.
Their up time, last update time, and the default resync interval should also be displayed.
Note: While the DNA Center Discovery App may show the discovery process has completed, the full process of
discovery and adding to inventory is not completed until the Reachability Status and Last Inventory Collection
Status are listed as shown above, Reachable and Managed, respectively. The full process from beginning
Discovery to being added to Inventory may take up to ten minutes based on the size of the lab topology. Expect
longer times in production, particularly if the number of CDP hops is larger and if there is a larger number of devices.
Note: Please make sure all the devices shown above are found in your pod. If they are not, please notify the
instructor.
Click Apply.
Note: The Config column allows a view of the running configuration of the device. The IOS/Firmware column is
useful for quickly viewing the software version running on the devices.
Step 37. Device Role controls where DNA Center displays a device in topology view in both the
Provision Application and Topology tool. It does not modify or add any configuration
with regards to the device role selected.
Use the chart below to confirm/set each device to the role shown.
Device Device Role
CPN-BN Core
CPN-DBN Core
EdgeNode1 Access
EdgeNode2 Access
FusionExternal Border Router
FusionInternal Distribution
LabAccessSwitch Distribution
WLC_3504 Access
Step 38. Click the Apps button to return to the DNA Center dashboard.
Step 39. You can refer to the Topology tool to view the complete topology next.
The Device Role is used to position devices in the DNA Center topology maps under the Fabric tab in the
Provision Application and in the Topology tool. The device positions in those applications and tools are
shown using the classic three-tiered Core (Border), Distribution, and Access layout. This will become
even clearer later in the lab guide.
Figure 4-1: DNA Center 1.0 AAA Shortcut and AAA System Settings
Step 40. This shortcut has been removed in DNA Center 1.1 and above.
To begin integration between DNA Center and ISE, navigate to System Settings.
Step 41. Click on the gear at the top right, and then select System Settings.
Step 42. From the System 360 Tab, click the Configure settings hyperlink in the Identity Service
Engine section.
Step 44. Use the table below to populate the credentials and fields.
Field Value
Server IP Address * 192.168.100.20
Shared Secret * ISEisC00L (Note: These are zeroes – 0 – not the letter ‘O.’)
Cisco ISE Server ON
Username * admin
Password * ISEisC00L (Note: These are zeroes – 0 – not the letter ‘O.’)
FQDN * ise.dna.local
Subscriber Name * DNAC
View Advanced Settings Open and Expand
Protocol TACACS Selected
Note: The RADIUS protocol will be selected by default. Use the Default Authentication and Account Ports of 1812
and 1813 for RADIUS and the default port 49 for TACACS.
Note: In DNA Center 1.0, once the mutual certificate authentication with ISE was completed, the process was listed
as Active in the DNA Center GUI. However, DNA Center still needed to be added to ISE’s trusted pxGrid
subscribers as shown in the following steps. Until that process was completed in the ISE GUI, the connection
between ISE and DNA Center was not truly Active. DNA Center 1.1 and above addresses this issue with the
INPROGRESS status. Once the following steps are completed, the status will be listed as Active.
Step 49. Open a new browser tab, and log into ISE using IP address https://192.168.100.20 and
credentials:
• user: admin
• password: ISEisC00L
Step 53. On this page, the dnac client will appear as Pending.
Total Pending Approval (1) should show at the top of the list.
Step 54. Click Total Pending Approval (1) and then select Approve All.
Step 56. Ensure the dnac client now has an Online status.
Step 58. Click the Apps button to return to the DNA Center dashboard.
During the integration of ISE and DNA Center, all Scalable Group Tags (SGTs) present in ISE are pulled
into DNA Center. Whatever policy is configured in the (TrustSec) egress matrices of ISE when DNA
Center and ISE are integrated is also pulled into DNA Center. This is referred to as Day 0
Brownfield Support: if policies are present in ISE at the point of integration, those policies are pulled into
DNA Center and populated.
Except for SGTs, anything TrustSec and TrustSec Policy related that is created directly on ISE OOB (out-
of-band) from DNA Center after the initial integration will not be available or be displayed in DNA
Center. There is a cross launch capability in DNA Center to see what is present in ISE with respect to
TrustSec Policy.
Deployment Note and Additional Caveat: If something is created OOB in ISE after the initial integration with DNA Center,
then CoA (Change of Authorization) pushes need to be performed manually. Generally, in the ISE GUI, changes to the
TrustSec Matrix trigger a CoA push down to all devices. However, if a TrustSec policy is created OOB (in the ISE GUI)
after the initial integration with DNA Center, the CoA must be performed manually.
Step 59. From the DNA Center home page, click Design to enter the Design Application.
Step 62. Click San Jose, click the gear icon, and then select Add Building.
Step 63. Enter the Building name as “SJC-13” and the address as “325 East Tasman Drive, San Jose,
California,” and click Add.
Step 64. A new floor will be added to the SJC-BLD13 building in the San Jose site.
Begin by expanding San Jose with the expand button.
Step 65. Click SJC-1, click the gear icon, and then select Add Floor.
The Add Floor dialog box will appear.
Field Value
Floor Name * Floor1
Parent SJC-BLD13
Type (RF Model) Cubes and Walled Offices
Floor Image SJC14.png
Width (ft) 100
Length (ft) 100
Height (ft) 10
Step 67. After filling in the fields, press the Upload File Button.
Step 68. Navigate to the Desktop > FEW > floorplans directory.
Select the SJC14.png file and press Open.
Step 69. DNA Center will return to the Add Floor dialog box.
The Floor map name and a preview of the floor map will now appear.
Notice that the Length (ft) has changed in response to the image size.
Click the Add Button.
DNA Center allows saving common network resources and settings with Design Application’s
Network Settings sub-application (tab). As described earlier, this allows information pertaining to the
enterprise to be stored so it can be reused throughout DNA Center workflows.
By default, when clicking the Network Settings tab, newly configured settings are assigned as Global
network settings. They are applied to the entire hierarchy and inherited to each site, building, and floor.
It is possible to define specific network settings and resources to specific sites. The site-specific feature
will be used during LAN Automation. For this lab deployment, the entire network will share the same
network settings and resources.
DNA Center 1.0 supported AAA, DHCP, DNS, NTP Server, Syslog, SNMP Trap, and Netflow Collector
servers. However, by default, in DNA Center 1.0, only AAA, DHCP and DNS Servers were displayed, and
therefore the full server support for that release was easily missed.
DNA Center 1.2 has added additional shared servers. These include NTP Servers, Netflow Collector
Service and SNMP Server (renamed from SNMP Trap Server). It also provides the ability to configure
Time Zone and Message of the Day (MoTD).
Deployment Note: For a Software-Defined Access workflow, AAA, DHCP, and DNS servers are required to be
configured.
For an Assurance Workflow, SYSLOG, SNMP, and Netflow servers must be configured. These servers for
Assurance need to point to the DNA Center’s IP Address (192.168.100.10 in the Lab).
Step 73. By default, the AAA, Netflow Collector, and NTP servers are not shown.
Step 74. Select AAA, Netflow Collector, and NTP, and press OK.
In DNA Center 1.0, the same AAA server was configured for both network users (Device Authentication
and Authorization) and for endpoints/clients (Client Authentication and Authorization).
In contrast, DNA Center 1.2 provides an option to configure separate AAA servers for network users and
for clients/endpoints. In DNA Center 1.0, only the RADIUS protocol was supported for network users. In
DNA Center 1.1 and above, both TACACS and RADIUS protocols are supported for network users.
Note: TACACS actually refers to Cisco TACACS+ and not TACACS (RFC 1492).
Step 76. Select the checkbox next to both Network and Client/Endpoint. The boxes change to the
selected state and additional settings are displayed.
Field Value
Servers ISE
Protocol TACACS
Network drop-down 192.168.100.20
IP Address (Primary) drop-down 192.168.100.20
Step 78. Configure the AAA Server for Client/Endpoint Authentication using the Table below.
Field Value
Servers ISE
Protocol RADIUS
Client/Endpoint drop-down 192.168.100.20
IP Address (Primary) drop-down 192.168.100.20
Deployment Note: TACACS is not supported for Client/Endpoint authentication in DNA Center.
Field Value
DHCP Server 198.18.133.30
DNS Server – Domain Name dna.local
DNS Server – IP Address 198.18.133.30
SYSLOG Server 192.168.100.10
SNMP Server 192.168.100.10
Netflow Collector Server IP Address 192.168.100.10
Netflow Collector Server Port 2055
NTP Server 192.168.100.6
Time Zone EST5EDT
Step 81. Verify that an Information and a Success notification appears indicating the
settings were saved.
While configuring the earlier Discovery job, some credentials were added with the Save as Global
setting. Applying that setting will populate those specific credentials to the Device Credentials tab of
the Network Settings of the Design Application. Global Device Credentials should be the credentials
that most of the devices in a deployment use. The Save as Global setting also populates these
credentials automatically in the Discovery tool. In this way, they do not need to be re-entered when
configuring a future Discovery job.
The other major benefit of Global Device Credentials is that they are used as part of the LAN Automation
feature.
Note: The prerequisite steps could be completed later during the LAN Automation section. The steps will be
completed now to avoid numerous back-and-forth jumps between the DNA Center Provision and Design Applications
later in the lab.
Note: There can be a maximum of five (5) global credentials defined for any category.
Step 82. Navigate to Design > Network Settings > Device Credentials.
Step 83. In the Device Credentials tab, click the selection button underneath CLI Credentials for the
username Operator.
The button changes to the selected state.
Step 85. Click the SNMPV2C Write micro-tab. It will change color from blue to grey.
Click the Write button underneath SNMP Credentials.
The button changes to the selected state.
DNA Center supports both manually entered IP address allotments and integration with IPAM
solutions, such as Infoblox and Bluecat.
Deployment Note: DHCP IP Address Pools required in the deployment must be manually defined and configured on
the DHCP Server. DNA Center does not provision the actual DHCP server, even if it is a Cisco device. It is simply
setting aside pools as a visual reference. These Address Pools will be associated with VN (Virtual Networks/VRFs)
during the Host Onboarding section.
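Note: In this lab the pools live on the Windows Server 2012 R2 DHCP server. Purely as an illustration of what must be
configured somewhere outside DNA Center, a minimal IOS DHCP pool matching the AccessPoints pool defined later in
this exercise might look like the sketch below (WLC discovery options omitted). This is a hypothetical example, not a
lab task.
! Illustrative only – in this lab the pool is created on the Windows DHCP server, not on a Cisco device
ip dhcp excluded-address 172.16.50.1
ip dhcp pool AccessPoints
 network 172.16.50.0 255.255.255.0
 default-router 172.16.50.1
 dns-server 198.18.133.30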
Consider a large continental-United States network deployment with sites in New York and Los Angeles.
Each site would likely use their own DHCP, DNS, and AAA (ISE Policy Service Nodes – PSN). For
deployments such as these, it is possible to configure site-specific Network Settings for Network, Device
Credentials, IP Pools, and more.
By default, when navigating to the Network Settings tab, the Global site is selected. This can be seen by
the green vertical indicator. These green lines in DNA Center indicate the current navigation location of the
Design Application to help the user understand which item for which site is being configured.
Note that Pool Reservations are not available in the Global hierarchy and must be done at the Site,
Building, or Floor level.
Step 88. Navigate to Design > Network Settings > IP Address Pools.
Step 89. Several IP Address Pools will be created for various uses. Some will be used for Device
Onboarding (End-host IP Addresses) while others will be used for Guest Access, LAN
Automation, and Infrastructure.
Step 90. Configure the seven IP Address Pools as shown in the table below.
Because the DHCP and DNS servers have already been defined, they are available from
the drop-down boxes and do not need to be manually defined.
This demonstrates the define-once, use-many concept that was described earlier.
The Overlapping checkbox should remain unchecked for all IP Address Pools.
IP Pool Name IP Subnet CIDR Prefix Gateway IP Address DHCP Server(s) DNS Server(s)
AccessPoints 172.16.50.0 /24 (255.255.255.0) 172.16.50.1 198.18.133.30 198.18.133.30
FusionExternal 192.168.130.0 /24 (255.255.255.0) 192.168.130.1 – –
FusionInternal 192.168.30.0 /24 (255.255.255.0) 192.168.30.1 – –
Production 172.16.101.0 /24 (255.255.255.0) 172.16.101.1 198.18.133.30 198.18.133.30
Staff 172.16.201.0 /24 (255.255.255.0) 172.16.201.1 198.18.133.30 198.18.133.30
WiredGuest 172.16.250.0 /24 (255.255.255.0) 172.16.250.1 198.18.133.30 198.18.133.30
WirelessGuest 172.16.150.0 /24 (255.255.255.0) 172.16.150.1 198.18.133.30 198.18.133.30
Step 91. Click the Save button between each IP Address Pool to save the settings.
Step 93. Once completed, the IP Address Pools tab should appear as below.
Step 95. From the upper right corner, click on Reserve IP Pool.
Step 100. Enter the Wireless SSID as ProductionSSID and click Next.
Step 101. Next, you will be prompted to enter the Wireless Profile name. Create a
wireless profile named CampusProfile.
Step 104. Enter the Wireless SSID as GuestSSID, choose the level of security as Web Auth, set the
Authentication Server to ISE Authentication with the Portal type Self Registered, select
redirect to Success Page after successful authentication, and click Next.
Step 106. Build the Guest Portal: enter the name of the portal as Guest_Portal_Access and click
Save.
Step 110. Click the Apps button to return to the DNA Center dashboard.
In this section, the segmentation for the overlay network (which has not yet been fully created) will be
defined in the DNA Center Policy Application. This process virtualizes the overlay network into multiple
self-contained Virtual Networks (VRFs).
Deployment Note: By default, any network device (or user) within a Virtual Network is permitted to communicate with
other devices (or users) in the same Virtual Network. To enable communication between different Virtual Networks,
traffic must leave the Fabric (Default) Border and then return, typically traversing a firewall or fusion router. This
process is done through route leaking and multi-protocol BGP (MP-BGP). This will be covered in later exercises.
Step 111. From the DNA Center home page, click Policy to enter the Policy Application.
Note: The eighteen (18) SGTs were the SGTs present on ISE during the DNA Center and ISE integration. They were
imported as described in the About Day 0 Brownfield Support section.
DNA Center has two default Virtual Networks: DEFAULT_VN and INFRA_VN. The DEFAULT_VN is present to encourage
NetOps personnel to use segmentation, as this is why SDA was designed and built. For now, it should be ignored
– specific VNs will be created.
Deployment Note: Future releases may remove the DEFAULT_VN. The INFRA_VN is for Access Points and Extended
Nodes only. It is not meant for end users or clients. The INFRA_VN will actually be mapped to the Global Routing
Table (GRT) in LISP and not a VRF instance of LISP. Despite being present in the GRT, it is still considered part of
the overlay network.
Within the Software-Defined Access solution, two technologies are used to segment the network: VRFs
and Scalable Group Tags (SGTs). VRFs (VNs) are used to segment the network overlay itself. SGTs are
used to segment inside of a VRF. Encapsulation in SDA embeds the VRF and SGT information into the
packet to enforce policy end-to-end across the network.
The routing control plane of the overlay (the LISP process) makes forwarding decisions based on the
VRF information. The routing policy plane of the overlay makes forwarding decisions based on the SGT
information. Both pieces of information must be present for a packet to traverse the Fabric.
This exercise will focus on creating Virtual Networks and associating SGTs with them as this is the
minimum requirement for packet forwarding in an SDA Fabric.
Step 113. Click the Virtual Network button or the Virtual Network tab.
Click the add (+) button.
Step 115. Enter the Virtual Network Name of CAMPUS.
Note: Please pay attention to capitalization. The name of the virtual network defined in DNA Center will later be
pushed down to the Fabric devices as a VRF definition. VRF definitions on the CLI are case sensitive. VRF Campus
and VRF CAMPUS would be considered two different VRFs.
Step 116. Multiple Scalable Groups can be selected by clicking on them individually or by clicking
and dragging over them.
Move all Available Scalable Groups except BYOD, Guest, Quarantined_Systems, and
Unknown to the right-hand column Groups in the Virtual Network.
Step 120. Select and move the BYOD, Guest, Quarantined_Systems, and Unknown SGTs from
Available Scalable Groups to the right-hand column Groups in the Virtual Network.
Step 121. Click Save.
Step 123. Click the Apps button to return to the DNA Center dashboard.
Note: In the Host Onboarding section, the VNs that were just created will be associated with the created IP Address
Pools. This process is how a particular subnet becomes associated with a particular VRF.
About Provisioning
When completing the first step, assigning a device to a site, DNA Center will push certain site-level
network settings configured in the Design Application to the devices, whether or not they will be used as part
of the Fabric overlay.
Specifically, DNA Center pushes the Netflow Exporter, SNMP Server and Traps, and Syslog network
server information configured in the Design Application for a site to the devices assigned to the site.
Note: Understanding what configuration elements are pushed at which step will be particularly important in future labs
when working with DNA Center Assurance.
When the devices are provisioned to the site – Step 2 – the remaining network settings from the Design Application
will be pushed down to these devices. They include time zone, NTP server, and AAA configuration.
The second step, provisioning the device (that has been assigned to a site), is a prerequisite before that
device can be added to the Fabric and perform a Fabric role.
Step 125. The Provision Application will open to the Device Inventory Page.
The first step of the provisioning process begins by selecting devices and associating (assigning) them to
a site, building, or floor previously created with the Design Application.
Before devices can be provisioned, they must be discovered and added to Inventory. Therefore, the
Discovery tool and Design exercises were completed first. There is a distinct order-of-operation in DNA
Center workflows.
In this lab, all devices in Inventory will be assigned to a site (Step 1). After that, only some devices will
be provisioned to a site (Step 2). Among that second group, only certain devices that have been provisioned to
a site will become part of the Fabric and operate in a Fabric Role. This is the level of granularity that
DNA Center provides in Orchestration and Automation. In the lab, all devices provisioned to the site will
receive further provisioning to operate in a Fabric Role.
Use the table below for reference on how devices will be added to site, provisioned to site, and used in a
Fabric Role.
Deployment Note: There are two levels of compatibility in DNA Center and two different support matrices. The first
level is being compatible with DNA Center – or, ostensibly, compatible for Assurance. The second level is being
compatible with Software-Defined Access. Therefore, the distinction between SDA and DNA Center versioning and
platform support was called out specifically at the beginning of the lab.
To provision a device that has been assigned to a site (Step 2), it must be a device supported for SDA.
Just because a device can be Automated by DNA Center does not necessarily mean this device is being automated
by the SDA process in DNA Center. Device Automation and Software-Defined Access are technically separate
packages on the Appliance.
Step 126. From the Provision > Device Inventory page, click the top checkbox to select all
devices.
The box changes to the checked state and all devices are highlighted.
Step 133. Now that all the selected devices have been assigned a site, the next step is to Provision
them with the “Shared Services” (AAA, DHCP, DNS, NTP, etc.) which were set up in the
Design App. To do this, select the same devices again and this time select
Actions > Provision.
Step 134. Back on the Device Inventory page within the Provision application, confirm that all
devices were successfully provisioned.
Step 135. The WLC will have to be provisioned separately since it is a different type of device.
Step 137. Finally, under Summary, confirm that everything is configured correctly and click on
Deploy.
Step 138. Once this is done, you can go to the WLC GUI and under WLANs you can see that the
SSIDs are created but are disabled for now.
Step 139. Click the Apps button to return to the DNA Center dashboard.
Deployment Note: On IOS-XE devices this LISP configuration will utilize the syntax that was first introduced in IOS
XE 16.5.1 and enforced in 16.6.1.
Creating the Fabric Overlay is a multi-step workflow. Devices must be discovered, added to Inventory,
assigned to a Site, and provisioned to a Site before they can be added to the Fabric. Each of the Fabric
Overlay steps is managed under the Fabric tab of the Provision Application.
1. Identify and create Transits
2. Create Fabric Domain (or use Default)
3. Assign Fabric Role(s)
4. Set Up Host Onboarding
With version 1.2.X, the concept of SD-Access Multisite was introduced. There is also an obvious
requirement of connecting the SD-Access Fabric with the rest of the company. As a result, the new
workflow asks you to create a “Transit,” which connects the fabric beyond its domain.
1. SDA Transit: To connect two or more SDA Fabric Domains to each other (requires an end-to-end
MTU of 9100)
2. IP Transit: To connect the SDA Fabric Domain to the Traditional network for a Layer 3 hand-off
In this lab, you will be configuring an IP transit to connect the CPN-BN with the Fusion Internal Router
and the CPN-DBN with the Fusion External Router. Both of these will be IP Transits and must be configured
before configuring the Fabric domain.
Step 140. From the Fabric tab in the Provisioning Application, click on “Add Fabric Domain or
Transit” and then click on “Add Transit”
A fabric domain is a logical construct in DNA Center. A fabric domain is defined by a set of devices that
share the same control plane node(s) and border node(s). In a domain, end-host facing devices are
added as edge node(s).
Step 142. From the DNA Center Provision > Device Inventory page, click the Fabric tab.
Step 143. Click the Add Fabric button to create a new Fabric Domain.
The Add Fabric Domain dialog box appears.
Step 144. Ensure that you choose the site level as Floor1.
Name the Fabric domain FEW_Fabric and click Add.
Note: You cannot use any special character (such as - symbol) as part of the Fabric domain name.
Step 146. DNA Center returns to the Fabric Domain and Transits page underneath Provision >
Fabric.
Step 147. Next, you will create a Transit, which will be IP-based, to provide access from
the Fabric to the non-Fabric network components.
Click on Add Fabric Domain or Transit and then click on Add Transit.
Step 148. Add the Transit as “IP-Based”; the only routing protocol option currently available is BGP.
Option Value
Transit Name FusionInternal
Transit Type Select IP-Based
Routing Protocol BGP
Autonomous Number 65444
Step 151. Add the details as follows and click Save in the end.
Option Value
Transit Name FusionExternal
Transit Type Select IP-Based
Routing Protocol BGP
Autonomous Number 65333
Step 152. Now, once you have created the transits, click the newly created FEW_Fabric Fabric
Domain.
When DNA Center is used to automate SDA, how can the automated configuration be validated? This
was the key question and motivation behind the Validation feature. There are two types of validation,
Pre-verification and (Post-) Verification. Pre-verification answers the question “Is my network ready to
deploy Fabric?” by running several prechecks before a device is added to a Fabric. The (post-)
verification validates a device’s state after a Fabric change has been made.
Lab Guide Critical Note: All Pre- and Post-Verification steps listed in the lab guide are optional due to time
constraints. It is not practical to separate the Pre- and Post-Verification steps into their own sections with
headings indicating that they are optional. Please read through them, but skip over the Pre- and
Post-Verification steps if encountering time constraints.
About Pre-Verification
Pre-verification checks are run on devices that have been assigned a role but have not yet been added
to the Fabric. (This means that the Save button has not yet been clicked.) Currently, eight
pre-verification checks are supported.
Note: The connectivity check depends on the device role selected and what devices are already present in the
Fabric. During the connectivity check, DNA Center logs into the device and initiates a ping to the Loopback interface
of another device. It does not specify a source interface (such as the local Loopback).
If a device is selected as an edge node, DNA Center will perform a ping from that device to the control plane node
and the border nodes. If the device is selected as a border node, DNA Center will perform a ping from that device to
the control plane node(s) only.
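Note: As a rough idea of what the connectivity pre-check does, the equivalent manual test from an edge node is
simply a ping to the control plane and border node Loopbacks, with no source interface specified. The addresses
below are the Loopback/management IPs from the Discovery exercise; this is an optional spot check, not a lab task.
! From EdgeNode1, for example – plain pings with no source interface
ping 192.168.255.4
ping 192.168.255.8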
Post-verification is performed after a device is added to the Fabric. There are two places where verification is
supported – the initial topology map under Provision > Fabric > Select Devices page and the Provision >
Fabric > Host Onboarding page. Verification checks whether the SDA provisioned configuration is
present on the device.
Verification Purpose
Select Device – VN Validates that all VRFs are created for the VNs in the Fabric
Select Device – Fabric Role Validates that all required configuration is present for a device to perform the Fabric Role
Host Onboarding – Segment Validates all segments are created under each VN in the Fabric
Host Onboarding – Port Assignment Ensures ports have an assigned VLAN and authentication method
Note: The Post-Verification check does not check for any configuration that may have been added manually to the
device using the CLI. It is only checking for parameters configured by DNA Center during the provisioning workflows.
To create a Fabric, at minimum, an edge node and control plane (node) must be defined. For
communication outside of that Fabric, a border (node) must also be defined. Border nodes will be
discussed in detail in later sections, as they represent one of the biggest changes of DNA Center 1.1 from
DNA Center 1.0.
Icon and text color for each device name in the Fabric Topology map are very important. Grey devices
with grey text are not currently part of the Fabric. They are either simply not
assigned a Fabric Role or are an Intermediate Node – a Layer-3 device between two Fabric Nodes.
Devices that are any color with blue text are currently selected. Devices with a blue outline have been
assigned a Fabric Role, although the Save button has not been pressed yet. This blue outline indicates
intention. Devices that are blue have been assigned a Fabric role and have had that Fabric configuration
pushed down to the device.
Option Role
Add to Fabric Edge Node
Add as CP Control Plane Node
Add as Border Border Node (Any of three varieties)
Add as CP+Border Co-located Control Plane Node and Border Node
Enable Guests This is related to Fabric Wireless.
View Device Info –
Note: In DNA Center 1.0, the hostname needed to be clicked in order to select the device and display the popup. In
DNA Center 1.2 the icon itself can be clicked. Due to the zooming and panning capabilities of the topology map, the
browser experience is going to vary wildly when interacting with it. In tests, Chrome has performed best, and the
screen shots from this lab are from the Chrome browser while all screen shots from the DNA Center 1.0 lab guide are
from the Firefox ESR browser. Firefox ESR performed better during testing for DNA Center 1.0.
Deployment Note: The topology map has created compatibility problems with some browsers. Both Firefox Quantum
and Firefox ESR are currently impacted. Internet Explorer and Microsoft Edge will not be tested, and Safari remains
untested.
The grey icon now has a blue outline, indicating the intention to add this device to the
Fabric.
This will open a new flyout window to the right with additional settings for the fabric
border role.
Option Value
Border to Select Rest of Company
Local Autonomous Number 65004
Select IP Address Pool FusionInternal_Floor1
Transits FusionInternal
Step 156. Click the dropdown for the FusionInternal device and click on Add Interface.
Step 159. You will see that the device has a blue outline, but the change has not yet been saved. Click
Save to commit it.
DNA Center will show that the Fabric device provisioning has been initiated, and after it has pushed the
requisite configurations, it will show that the device has been updated to the Fabric Domain
successfully.
Step 162. You will next add another device as a co-located Border and Control Plane.
The grey icon now has a blue outline, indicating the intention to add this device to the
Fabric.
Step 165. This will open a new flyout window to the right with additional settings for
the fabric border role.
Option Value
Border to Select Outside World
Local Autonomous Number 65004
Select IP Address Pool FusionExternal_Floor1
Transits FusionExternal
Connected to Internet Checked
Step 168. Select the External Interface to be TenGigabitEthernet1/0/2 and select only the Guest
Virtual Network.
Step 170. You will see that the device has a blue outline, but the change has not yet been saved. Click
Save to commit it.
DNA Center will show that the Fabric device provisioning has been initiated, and after it has pushed the
requisite configurations, it will show that the device has been updated to the Fabric Domain
successfully.
Step 173. Click Save and wait for the devices to be updated with their Fabric role of
Edge Node. They should finally appear as shown.
Step 175. You will now see that the WLC also turns blue and is provisioned.
Deployment Note: While it is feasible to use a switch as a fusion router, switches add additional complexity, as
generally only the high-end chassis models support sub-interfaces. Therefore, on a fixed configuration model such
as a Catalyst 9300, an SVI must be created on the switch and added to a VRF forwarding definition. This abstracts
the logical concept of a VRF even further through logical SVIs. A Layer-2 trunk is used to connect to the border
node, which itself is likely configured for a Layer-3 handoff using a sub-interface. To reduce unnecessary complexity,
an Integrated Services Router (ISR) is used in the lab as the fusion router.
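Note: For comparison only, a hedged sketch of the switch-based approach described above is shown below. The
VLAN number, interface, and addressing are hypothetical and are not used anywhere in this lab.
! Hypothetical values – an SVI forwards for the VRF, and a Layer-2 trunk carries the VLAN to the border node
vlan 3001
!
interface Vlan3001
 description Hand-off SVI for VRF CAMPUS (illustrative)
 vrf forwarding CAMPUS
 ip address 192.0.2.2 255.255.255.252
!
interface TenGigabitEthernet1/0/1
 description Layer-2 trunk toward the border node
 switchport mode trunk
 switchport trunk allowed vlan 3001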
Because the fusion router is outside the SDA fabric, it is not specifically managed (for Automation) by
DNA Center. Therefore, the configuration of a fusion router will always be manual. Future releases and
development may reduce or eliminate the need for a fusion router. FusionInternal will be used to
allow end-hosts in Virtual Networks of the SDA Fabric to communicate with shared services.
Note: It is also possible, with minimal additional configuration, to allow hosts in separate VNs to communicate with
each other. This is outside the scope of this lab guide and not required for SDA.
Border Automation & Fusion Router Configuration Variations – BorderNode and FusionInternal
Critical Lab Guide Note: The configuration elements provisioned during your run-through are likely to be different.
Please be sure not to copy and paste from the Lab Guide unless instructed specifically to do so. Be aware of what
sub-interface is forwarding for which VRF and what IP address is assigned to that sub-interface on your particular lab
pod during your particular lab run-through. The fusion router’s configuration is meant to be descriptive in nature, not
prescriptive.
There are six possible varieties in how DNA Center can provision the sub-interfaces and VRFs. This means there
are six variations in how FusionInternal needs to be configured to match CPN-BN. These are provided and detailed
in Appendix K and are also provided as text and image files in the DNAC 1.1 folder on the desktop of the Jump Host.
When following the instructions in the lab guide, DNA Center will provision three sub-interfaces on
BorderNode beginning with GigabitEthernet 0/0/2.3001 and continuing through GigabitEthernet
0/0/2.3003. These interfaces will be assigned an IP address with a /30 subnet mask (255.255.255.252)
and will always use the lower number (the odd number address) of the two available addresses.
DNA Center will vary which sub-interface is forwarding for which VRF and the Global Routing Table
(GRT).
To understand which explanatory graphic and accompanying configuration text file to follow, identify
the order of the VRF/GRT that DNA Center has provisioned on the sub-interfaces.
The corresponding graphic – located in the DNAC 1.2 folder the desktop of the Jump Host – is
BorderNode Interface Order – CAMPUS, GUEST, GRT, and the corresponding text file, also located where
noted above, is FusionInternal - Campus, Guest, GRT. Please be sure to use the appropriate files and do
not directly copy and paste from the lab guide unless instructed directly and specifically to do so.
The first task is to allow IP connectivity from the BorderNode to FusionInternal. This must be done for
each Virtual Network that requires connectivity to shared services. DNA Center has automatically
configured the BorderNode in previous exercises.
Table 15-1: DNA Center Configured Layer-3 Interfaces for Hand Off – BorderNode
Using this information, a list of interfaces and IP addresses can be planned on the FusionInternal.
However, to configure an interface to forward for a VRF forwarding instance, the VRF must first be
created. Before creating the VRFs on FusionInternal, it is important to understand the configuration
elements of a VRF definition. The most important portion of a VRF configuration – other than the
case-sensitive name – is the route-target (RT) and the route-distinguisher (RD).
Note: In older versions of IOS such as IOS 12.x, VRFs were not address-family aware. They were supported for IPv4
only and used a different syntax, ip vrf <name>, for configuration. It was mandatory to define the RT and RD in these
older code versions. In current versions of code, it is possible to create a VRF without the RT and RD values as long
as the address-family is defined. This method (of not defining the RT and RD) is often used when creating VRFs for
VRF-Lite deployments that do not require route leaking.
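For illustration only, the two syntax styles look like this. The VRF name and values below are placeholders and are not part of the lab configuration:
! Older IOS 12.x style (IPv4 only; RD defined under ip vrf)
ip vrf EXAMPLE
rd 1:100
! Current address-family-aware style (RT/RD optional for simple VRF-Lite)
vrf definition EXAMPLE
address-family ipv4
exit-address-family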
A route distinguisher makes an IPv4 prefix globally unique. It distinguishes one set of routes (in a VRF)
from another. This is particularly critical when different VRFs contain overlapping IP space. A route
distinguisher is an eight-octet/eight-byte (64-bit) field that is prepended to a four-octet/four-byte (32-
bit) IPv4 prefix. Together, these twelve octets/twelve bytes (96 bits) create the VPNv4 address.
Additional information can be found in RFC 4364. There are technically three supported formats for the
route distinguisher, although they are primarily cosmetic in difference. The distinctions are beyond the
scope of this guide.
Route targets, in contrast, are used to share routes among VRFs. While the structure is similar to the
route distinguisher, a route target is actually a BGP Extended-Community Attribute. The route target
defines which routes are imported and exported into the VRFs. Additional information can be found in
RFC 4360.
Many times, for ease of administration, the route-target and route-distinguisher are configured as the
same number, although this is not a requirement. It is simply a configuration convention that reduces
an administrative burden and provides greater simplicity. This convention is used in the configurations
provisioned by DNA Center. The RD and RT will also match the LISP Instance-ID.
For route leaking to work properly, FusionInternal must have the same VRFs configured as BorderNode
(CP-BN). In addition, the route-distinguisher (RD) and route-target (RT) values must be the same. These
have been auto-generated by DNA Center. The first step is to retrieve those values. Once the VRFs are
configured on FusionInternal, the interfaces can be configured to forward for a VRF.
Step 179. On the console of FusionInternal, paste the VRF configuration that was copied from
BorderNode (CP-BN).
Copying and pasting is required. The RDs and RTs must match exactly.
configure terminal
vrf definition CAMPUS
rd 1:4099
address-family ipv4
route-target export 1:4099
route-target import 1:4099
exit-address-family
vrf definition DEFAULT_VN
rd 1:4098
address-family ipv4
route-target export 1:4098
route-target import 1:4098
exit-address-family
exit
end
configure terminal
interface GigabitEthernet0/2.3001
Step 183. Add the interface to the VRF forwarding instance CAMPUS.
Step 184. Configure the /30 IP Address that corresponds with BorderNode’s interface.
exit
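For reference, the sub-interface configuration described in Steps 183 and 184 (together with the dot1Q encapsulation for the sub-interface) generally takes the following form. The VLAN ID and IP address shown are placeholders only; the VLAN must match the tag DNA Center provisioned on BorderNode's corresponding sub-interface, and the address must be the pod-specific /30 peer address from Table 15-1:
encapsulation dot1Q 3001
vrf forwarding CAMPUS
ip address <pod-specific /30 address> 255.255.255.252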
Step 186. Create the Layer-3 sub-interface that will be used for GUEST VRF.
Once completed, exit sub-interface configuration mode.
Use the following information:
Step 187. Create the final Layer-3 sub-interface used for the Global Routing table (INFRA_VN).
Once completed, exit global configuration mode completely.
Use the following information:
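The referenced values are pod-specific. As a general pattern only (placeholder VLANs and addresses shown; note that the GRT sub-interface carries no vrf forwarding statement), Steps 186 and 187 look similar to this:
interface GigabitEthernet0/2.3002
encapsulation dot1Q 3002
vrf forwarding GUEST
ip address <pod-specific /30 address> 255.255.255.252
exit
interface GigabitEthernet0/2.3003
encapsulation dot1Q 3003
ip address <pod-specific /30 address> 255.255.255.252
end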
Step 188. Ping the BorderNode(CP-BN) from the FusionInternal using CAMPUS VRF.
Step 189. Ping the BorderNode from the FusionInternal using GUEST VRF.
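For Steps 188 and 189, the VRF-aware form of ping is used. The destinations are BorderNode's sub-interface addresses for the corresponding VRFs on your pod; the placeholders below are not literal values:
ping vrf CAMPUS <BorderNode CAMPUS sub-interface address>
ping vrf GUEST <BorderNode GUEST sub-interface address>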
Step 190. Ping the BorderNode (CP-BN) from the FusionInternal using the Global routing table
and a sub-interface.
ping 192.168.30.9
Step 191. Ping the BorderNode (CP-BN) from the FusionInternal using the Global routing table
and physical interface.
ping 192.168.37.3
BGP is used to extend the VRFs to the FusionInternal router. As with the sub-interface configuration,
DNA Center has fully automated BorderNode’s (CP-BN) BGP configuration. Earlier exercises verified the
BGP communications between the control plane nodes and BorderNode (CP-BN) to ensure that
communication is occurring and prefixes (NLRI) are being exchanged.
Note: The BGP Adjacencies created between a border node and fusion router use the IPv4 Address Family (not the
VPNv4 Address family). Note, however, the adjacencies will be formed over a VRF session.
configure terminal
router bgp 65444
Step 195. Activate the exchange of NLRI with the BorderNode (CP-BN).
address-family ipv4
neighbor 192.168.30.9 activate
network 198.18.133.0
network 192.168.50.0
exit-address-family
Step 202. Activate the exchange of NLRI with the BorderNode (CP-BN) for vrf CAMPUS.
exit-address-family
Step 207. Activate the exchange of NLRI with the BorderNode (CP-BN) for vrf GUEST.
Step 209. Exit BGP configuration mode and out of global configuration mode completely.
exit
end
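A minimal sketch of the per-VRF activation referenced in Steps 202 and 207 is shown below. The neighbor addresses and the BorderNode AS number are placeholders and must match the values DNA Center provisioned on your pod:
router bgp 65444
address-family ipv4 vrf CAMPUS
neighbor <BorderNode CAMPUS sub-interface address> remote-as <BorderNode AS>
neighbor <BorderNode CAMPUS sub-interface address> activate
exit-address-family
address-family ipv4 vrf GUEST
neighbor <BorderNode GUEST sub-interface address> remote-as <BorderNode AS>
neighbor <BorderNode GUEST sub-interface address> activate
exit-address-family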
Step 210. On FusionInternal, verify that three (3) BGP Adjacencies come up.
There should be a BGP adjacency for each VRF and for the GRT.
Step 211. On BorderNode (CP-BN), verify that three (3) BGP Adjacencies come up.
There should be a BGP adjacency for each VRF and for the GRT.
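For reference, these adjacencies can be displayed with standard IOS-XE commands; the exact command in your lab step may be a different variant:
show ip bgp summary
show ip bgp vpnv4 all summary
The first command shows the adjacency formed over the Global Routing Table, and the second shows the adjacencies formed over the VRF sessions.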
FusionInternal has routes to the SDA Prefixes learned from BorderNode (CP-BN). It also has routes to
its directly connected subnets where the DHCP/DNS servers and WLC reside. Now that all these routes
are in the routing tables on FusionInternal, they can be used for fusing the routes (route leaking).
Route-maps are used to specify which routes are leaked between the Virtual Networks. These
route-maps need to match very specific prefixes. This can be best accomplished by first defining a
prefix-list and then referencing that prefix-list in a route-map.
Prefix-lists are similar to ACLs in that they can be used to match something. Prefix-lists are configured to
match an exact prefix length, a prefix range, or a specific prefix. Once configured, the prefix-list can be
referenced in the route-map. Together, prefix-lists and route-maps provide the granularity
necessary to ensure the correct NLRI are advertised to BorderNode (CP-BN).
Note: The following prefix-lists and route-maps can be safely copied and pasted.
configure terminal
ip prefix-list CAMPUS_VRF_NETWORKS seq 5 permit 172.16.101.0/24
ip prefix-list CAMPUS_VRF_NETWORKS seq 10 permit 172.16.201.0/24
end
Step 212. Configure a prefix-list that matches the /24 GUEST VRF subnet.
Name the prefix list GUEST_VRF_NETWORKS.
configure terminal
ip prefix-list GUEST_VRF_NETWORKS seq 5 permit 172.16.250.0/24
end
Note: The prefix-list uses the plural NETWORKS and not the singular NETWORK. Please name the prefix-list exactly
as shown.
Step 213. Configure a two-line prefix-list that matches the DHCP/DNS Servers’ and WLC’s subnets.
Name the prefix list SHARED_SERVICES_NETWORKS.
configure terminal
ip prefix-list SHARED_SERVICES_NETWORKS seq 5 permit 198.18.133.0/24
ip prefix-list SHARED_SERVICES_NETWORKS seq 10 permit 192.168.50.0/24
end
configure terminal
route-map CAMPUS_VRF_NETWORKS permit 10
match ip address prefix-list CAMPUS_VRF_NETWORKS
end
configure terminal
route-map GUEST_VRF_NETWORKS permit 10
match ip address prefix-list GUEST_VRF_NETWORKS
end
Note: The route-map uses the plural NETWORKS and not the singular NETWORK. Please name the route-map
exactly as shown.
configure terminal
route-map SHARED_SERVICES_NETWORKS permit 10
match ip address prefix-list SHARED_SERVICES_NETWORKS
end
Route leaking is done by importing and exporting route-maps under the VRF configuration. VRFs should
export prefixes belonging to itself using a route-map. The VRF should also import desired routes used
for access to shared services using a route-map.
Using the route-map SHARED_SERVICES_NETWORKS with the import command will permit only the
shared services subnets to be leaked to the VRFs. This will allow the End-Hosts in the Fabric to
communicate with the DHCP/DNS Servers and the WLC, but not allow inter-VRF communication.
Using the route-target import command will allow for inter-VRF communication. Inter-VRF
communication is beyond the scope of this lab guide, and uncommon in campus production networks.
In production, rather than permitting inter-VRF communication, which adds complexity, the
entire non-Guest prefix space (IP Address Pools) will be associated with a single VN (VRF). Scalable
Group Tags (SGTs) are then used to permit or deny communication between end-hosts. This is a simpler
solution that also provides more visibility and granularity into which hosts can communicate with each
other.
Note: When using route-maps, only a single import and export command can be used. This makes sense, as a
route-map is used for filtering the ingress and egress routes against the element matched in that route-map.
Route-maps are used when finer control is required over the routes that are imported and exported from a VRF
than the control that is provided by the import route-target and export route-target commands.
If route-maps are not used to specify a particular set of prefixes, VRF leaking can be performed by importing and
exporting route-targets. Using route-targets in this way can export all routes from a particular VRF instance and
import all routes from another VRF instance. It is less granular and more often used in MPLS. Route-targets allow
for multiple import and export commands to be applied, as they are used without any filtering mechanism – such as a
route-map.
Step 217. Configure the CAMPUS VRF for route leaking using route-maps.
The VRF should export its own routes and import the Shared Services Networks only.
configure terminal
vrf definition CAMPUS
address-family ipv4
import ipv4 unicast map SHARED_SERVICES_NETWORKS
export ipv4 unicast map CAMPUS_VRF_NETWORKS
exit-address-family
exit
end
Note: The VRF leaking configuration for both CAMPUS and GUEST VRFs can safely be copied and pasted.
configure terminal
vrf definition GUEST
address-family ipv4
import ipv4 unicast map SHARED_SERVICES_NETWORKS
export ipv4 unicast map GUEST_VRF_NETWORKS
exit-address-family
exit
end
If more than five minutes have passed, please check spelling, plurals, and capitalization
for the prefix-lists, route-maps, and VRFs. If routes are still not propagating, contact
your instructor.
Note: In Cisco software, import actions are triggered when a new routing update is received or when routes are
withdrawn. During the initial BGP update period, the import action is postponed, allowing BGP to converge more
quickly. Once BGP converges, incremental BGP updates are evaluated immediately and qualified prefixes are
imported as they are received.
Route leaking will allow the shared services routes to be imported to the VRF forwarding tables on
FusionInternal. These routes will then be advertised via eBGP to BorderNode. BorderNode will use the
iBGP VPNv4 adjacency to advertise these routes to the control plane nodes. The VPNv4 transport
session carries the information for all of the VRFs together, and the internal BGP process keeps track of which prefix is
associated with which VRF. Once these routes reach the control plane nodes, those routers will make them available to the rest of the Fabric.
The end-hosts attached to the Edge Nodes have not yet joined the network. This will be addressed in
the next exercise on Fabric DHCP. Therefore, the Edge Nodes have not registered any prefixes with the
control plane nodes. Using lig, it is possible to verify that a packet sourced from the EID-space sent to
the shared service subnets would be encapsulated and sent via the overlay.
When a packet is sent from an end-host to an edge node, the edge node must determine if the packet is
to be LISP (VXLAN) encapsulated. If the packet is encapsulated, it is sent via the overlay. If not, it is
forwarded natively via the underlay. To be eligible for encapsulation, a packet must be sourced from
the EID-space. The edge node will then look for a default route or a null route in its routing table. If the
end-host packet matches either of those routes, it is eligible for encapsulation, and the edge node will
query the control plane node for how to reach the destination. When an edge node receives
information back from the control plane node after a query, it is stored in the LISP map-cache.
Note: If the control plane node does not know how to reach a destination, it will reply to the edge node with
a negative map reply (NMR). This triggers the edge node to forward the packet natively. However, if the use-petr
command is configured in the LISP configuration on the edge node, an NMR triggers the edge node to send the
packet via the overlay to a default border.
Step 224. Verify the LISP map-cache for CAMPUS VRF (LISP Instance 4099).
Step 226. Use lig to query the control plane nodes on how to reach the shared services from
CAMPUS VRF.
Step 228. Verify the new LISP map-cache for CAMPUS VRF (LISP instance 4099).
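For reference, the map-cache for an instance can be displayed with a standard IOS-XE LISP command (the lig syntax used in Step 226 varies slightly by release, so follow the output shown in your pod):
show lisp instance-id 4099 ipv4 map-cache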
This completes the optional route leaking validation on the edge nodes.
Host Onboarding consists of two distinct steps – both located under the Provision > Fabric > Host Onboarding
tab for a particular fabric domain.
The first step is to select the Authentication template. These templates are predefined and pushed
down to all devices that are operating as edge nodes. This step must be completed first.
Each option will cause DNA Center to push down a separate configuration set to the edge nodes. Closed
Mode is the most restrictive and provides the best device security posture of the four options. It will
require connected end-host devices to authenticate to the network using 802.1x. If 802.1x fails, MAC
Authentication Bypass (MAB) will be attempted. If MAB fails, the device will not be permitted any
network access.
1. DefaultEasyConnectAuth
2. DefaultWiredDot1xClosedAuth
3. DefaultWiredDot1xOpenAuth
4. DefaultWiredNoAuth
The device CLI – as well as the DNA Center provisioned configuration – is moving towards the IBNS 2.0
(or C3PL) style. The particular 802.1x/MAB interface-level configuration provisioned by DNA Center may change to the
IBNS 2.0 style in future releases, although it did not change between DNA Center 1.0 and DNA Center 1.1.6.
Step 230. From Provision > Fabric > Host Onboarding for the FEW_Fabric Fabric,
select No Authentication. The icon changes to show the selected template.
Step 233. DNA Center will save the configuration template, although configuration is not yet
pushed to the devices.
Verify a Success notification appears indicating the Authentication template was
saved.
The second step (of Host Onboarding) is to bind the IP Address Pools to the Virtual Networks (VNs). At
that point, these bound components are referred to as Host Pools. Multiple IP address pools can be
associated with the same VN. However, an IP Address Pool should not be associated with multiple VNs.
Doing so would allow communication between the VNs and break the first line of segmentation in SDA.
The second step (of Host Onboarding) has a multi-step workflow that must be completed for each VN.
1. Select the Virtual Network
2. Select the desired Pool(s)
3. Select the traffic type
4. Enable Layer-2 Extension (optional)
Note: Notice that the available Virtual Networks are the VNs created during the Policy Application exercises. DNA
Center configuration operates on a specific workflow that has a distinct order of operation: Design, Policy, Provision,
and Assurance. This is also the order in which the applications are listed on DNA Center’s home page.
Step 235. The IP Address pools created during the Design Application exercises are displayed.
Select the checkbox next to Production and Staff.
The boxes change to show they are selected.
Step 236. From the Choose Traffic dialog box, select Data for both Production and Staff
Address Pools.
Step 237. By default, the Layer-2 Extension (Layer-2 Overlay) is enabled when an Address Pool is
associated with a VN. Leave these in the On position.
Note: The Layer-2 extension is not absolutely required for this Wired-Only Lab Guide, although the current
recommendation is to leave it On due to some changes in how ARP is forwarded in the Fabric. This extension is required when fabric enabled wireless is deployed.
Verify a Success notification appears indicating the Segment was associated with
the Virtual Network.
Step 241. DNA Center returns to the Provision > Fabric > Host Onboarding for the FEW_Fabric.
Verify the CAMPUS VN is now highlighted in blue. This indicates IP addresses have been
bound with this VRF.
Step 242. From Provision > Fabric > Host Onboarding for the FEW_Fabric, select INFRA under
Virtual Network.
The Edit Virtual Network dialog box appears for the INFRA VN.
Step 243. From Provision > Fabric > Host Onboarding for the FEW_Fabric Fabric,
select GUEST under Virtual Networks.
The Edit Virtual Network dialog box appears for the GUEST VN.
Step 244. The IP Address Pools created during the Design Application exercises are displayed.
The IP Address Pools Production and Staff show no indication that they are currently
associated with another VN.
Step 247. Verify the options match the example below, and press Update.
The Modify Authentication Template dialog box appears.
Verify a Success notification appears indicating the Segment was associated with
the Virtual Network.
Step 250. DNA Center returns to the Provision > Fabric > Host Onboarding for the FEW_Fabric.
Verify the CAMPUS VN and GUEST VN are now highlighted in blue. This indicates
Address Pools have been bound with these VRFs.
Host Onboarding verification supports two checks. The Segment verification validates that VLANs and
Interface VLANs (AnyCast Gateways) have been provisioned on edge nodes. Port Assignment
verification validates that a specific configuration was provisioned for a specific port.
Note: For Port Verification to provide meaningful information, a static port assignment must be made. This is done at
the bottom of the Provision > Fabric > Host Onboarding page. The port assignment defines a static
Authentication assignment rather than using a dynamic authentication from ISE. It can be useful if specific ports
need a particular assignment or if Open or No Authentication templates are used.
Step 251. From Provision > Fabric > Host Onboarding for the FEW_Fabric, click the
Validation drop-down.
Step 252. Select Verification.
The Verification dialog box appears.
Step 256. Close the Segment browser window and close the Verification dialog box using their
respective buttons.
Note: DNA Center begins creating SVIs on the edge nodes beginning at Interface VLAN 1021 and moving upward for
the number of VNs. It also creates the associated (Layer-2) VLAN 1021 and up.
VLAN 3999 is provisioned as the critical VLAN and VLAN 4000 as the voice VLAN. VLANs 1002-1005 were originally
intended for bridging with FDDI and Token Ring networks. For backward-compatibility reasons in IOS and
VTP, these VLAN numbers remain reserved and cannot be used or deleted. They will always appear on the devices.
In the lab, DefaultBorder is connected to FusionExternal as the external next-hop. DefaultBorder has a
static default route to FusionExternal via the Global Routing Table. FusionExternal has a static default
route to its next hop via its Global Routing Table. That next-hop – the cloud in the diagram below –
beyond FusionExternal provides access to the true Internet.
Regardless of how the rest of the network itself is designed or deployed outside of the Fabric, a few
things are going to be in common. A default border will have the SDA prefixes in its VRF routing tables.
A default border will also have a route to its next hop in its global routing table.
Somehow, the default route must be advertised to the VRFs. This allows packets to egress the Fabric
towards the Internet. In addition, the SDA prefixes in the VRF tables must be advertised to the external
domain to draw (attract) packets back in.
The VRF configuration and BGP configuration will be pushed down to a default border by DNA Center.
This allows the device to operate as part of the Fabric. VRF-Lite – the Layer-3 handoff – may or may
not be used in all deployments. However, using a Layer-3 handoff represents one of the less complex
ways to address this need of providing Internet access to the end-hosts.
The concept behind the fusion router is that any manual configuration is completed on only the devices
that are not managed by the SDA processes in DNA Center. It is feasible to use policy-based routing or
other methods to configure route leaking between the Global and VRF routing tables on DefaultBorder.
Manual configuration should be strictly avoided on devices that have been added as Fabric Nodes. There
is a complex interaction between LISP, CEF, and (in some cases) BGP. To this end, the leaking between
Global and VRF routing tables will be completed on an external fusion router, FusionExternal, rather
than attempting to manually configure it on the DefaultBorder. This is considered a best practice.
One method would be to create a similar configuration on FusionExternal to the one completed on
FusionInternal. While this is technically feasible, it is not necessary to extend the VRFs to
FusionExternal unless the policy plane (SGTs) needs to be extended beyond the Fabric to the Internet
Domain. This type of policy extension is beyond the scope of this guide, although an early example can
be found in the LISP configuration guide for the ASR-1000 router.
Note: The default border solution in SDA – particularly with extending policy (SGTs) and with other types of hand-offs
and protocols – is continuously evolving. Its evolution is heavily dependent on what other Fabric (if any) the policy
plane is extending towards (example: SDA to ACI).
Lab Solution
In the lab, DefaultBorder will keep the DNA Center provisioned Layer-3 handoff. This automated
configuration will not be touched. On the other side of the link, FusionExternal will not become
VRF-aware. All SDA prefixes (the EID-space) will be learned in its Global Routing Table, and the default
route from FusionExternal will be advertised back to DefaultBorder via BGP. This BGP configuration on
FusionExternal will not use multi-protocol BGP or form adjacencies over a VRF session.
VRFs are locally significant. The automated BGP configuration on DefaultBorder is expecting BGP
neighbors (adjacencies) via VRF routing tables. This just means that routes learned from a neighbor will
be installed into the VRF tables instead of the Global Routing Table. On the other side of the physical
link – FusionExternal – the BGP neighbor relationship does not need to be formed using VRFs. Said in
another way, when a neighbor is defined under BGP configuration, it simply means that the router is
going to exchange routes from that particular VRF or that address family with a defined neighbor.
Critical Lab Guide Note: The configuration elements provisioned during your run-through are likely to be different.
Please be sure not to copy and paste from the Lab Guide unless instructed specifically to do so. Be aware of what
sub-interface is forwarding for which VRF and what IP address is assigned to that sub-interface on your particular lab
pod during your particular lab run-through. The fusion router’s configuration is meant to be descriptive in nature, not
prescriptive.
There are six possible varieties in how DNA Center can provision the sub-interfaces and VRFs. This means there
are six variations in how FusionExternal needs to be configured to match DefaultBorder. These are provided and
detailed in Appendix K and are also provided as text and image files in the DNAC 1.1 folder on the desktop of the
Jump Host.
When following the instructions in the lab guide, DNA Center will provision three sub-interfaces on
DefaultBorder beginning with GigabitEthernet 0/0/0.3004 and continuing through GigabitEthernet
0/0/0.3006. These interfaces will be assigned an IP address with a /30 subnet mask (255.255.255.252)
and will always use the lower number (the odd number address) of the two available addresses.
DNA Center will vary which sub-interface is forwarding for which VRF and the Global Routing Table
(GRT).
To understand which explanatory graphic and accompanying configuration text file to follow, identify
the order of the VRF/GRT that DNA Center has provisioned on the sub-interfaces.
In the example above, GigabitEthernet 0/0/0.3004 is forwarding for the CAMPUS VRF, GigabitEthernet 0/0/0.3005 is
forwarding for the GRT, and GigabitEthernet 0/0/0.3006 is forwarding for the GUEST VRF.
The corresponding graphic – located in the DNAC 1.1 folder on the desktop of the Jump Host and in
Appendix K – is DefaultBorder Interface Order – CAMPUS, GRT, GUEST, and the corresponding text file, also
located where noted above, is FusionExternal - Campus, GRT, Guest. Please be sure to use the appropriate
files and do not directly copy and paste from the lab guide unless instructed directly and specifically to
do so.
The first task is to provide IP connectivity from DefaultBorder to FusionExternal. This must be done for
each Virtual Network (VRF) that requires connectivity to unknown destinations outside of the Fabric.
DNA Center has automated the configuration of DefaultBorder in previous exercises. As a reminder, the
configuration of FusionExternal will be different than that of FusionInternal.
Table 19-1: DNA Center Configured Layer-3 Interfaces for Hand Off – DefaultBorder
VLAN   VRF            IP Address          Interface
3004   CAMPUS         192.168.130.1/30    GigabitEthernet 0/0/0.3004
3005   Global Route   192.168.130.5/30    GigabitEthernet 0/0/0.3005
3006   GUEST          192.168.130.9/30    GigabitEthernet 0/0/0.3006
N/A    N/A            192.168.59.5        GigabitEthernet 0/0
Using this information, a list of interfaces and IP addresses can be planned on FusionExternal.
Since VRFs are not being extended to FusionExternal, no VRFs need to be created on this router. The
process can immediately begin with the creation of the sub-interfaces on FusionExternal.
Step 257. On the console of FusionExternal, create the Layer-3 sub-interface that will form an
adjacency with the CAMPUS VRF sub-interface on DefaultBorder.
configure terminal
interface GigabitEthernet0/0.3004
Step 260. Configure the /30 IP Address that corresponds with DefaultBorder’s interface.
exit
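For reference only, this step sequence generally takes the following form. The address follows Table 19-1 (DefaultBorder holds 192.168.130.1/30, so FusionExternal takes the other usable address), the VLAN is assumed to match the sub-interface number, and no vrf forwarding statement is used because FusionExternal is not VRF-aware. Your pod's values may differ:
encapsulation dot1Q 3004
ip address 192.168.130.2 255.255.255.252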
Note: The creation of the sub-interface on a physical interface that already has IP Address configuration will cause
the IS-IS adjacency to bounce. This is expected.
Step 262. Create the Layer-3 sub-interface that will form an adjacency using the GRT (INFRA-VN)
sub-interface on DefaultBorder. Once completed, exit sub-interface configuration
mode.
Step 263. Create the Layer-3 sub-interface that will form an adjacency using the GUEST VRF
sub-interface on DefaultBorder.
Once completed, exit global configuration mode completely.
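As a reference only (the addresses follow Table 19-1 and the verification output later in this exercise, the VLANs are assumed to match the sub-interface numbers, and your pod's sub-interface ordering may differ), Steps 262 and 263 look similar to this:
interface GigabitEthernet0/0.3005
encapsulation dot1Q 3005
ip address 192.168.130.6 255.255.255.252
exit
interface GigabitEthernet0/0.3006
encapsulation dot1Q 3006
ip address 192.168.130.10 255.255.255.252
end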
Step 266. Ping FusionExternal from DefaultBorder using the Global routing table and a sub-
interface.
ping 192.168.130.6
Step 267. Ping FusionExternal from DefaultBorder using the physical interface.
BGP is used to advertise the SDA prefixes to the FusionExternal router. As with the interface
configuration, DNA Center has fully automated the DefaultBorder BGP configuration. Earlier exercises
verified the BGP communications between the control plane nodes and DefaultBorder, ensuring that
communication is occurring and prefixes (NLRI) are being exchanged.
In the lab solution, the BGP Adjacencies created between DefaultBorder and FusionExternal use the
IPv4 Address Family. However, the adjacency will be formed over a VRF session on DefaultBorder’s side
of the link and formed over the Global Routing Table on the FusionExternal’s side of the link. Because
things were fully automated on DefaultBorder, it is simply a matter of configuring FusionExternal to
form adjacencies and accept the routes. No VRFs or address-family ipv4 vrf are needed. As a reminder,
when a neighbor is defined under BGP configuration, it simply means that the router is going to
exchange routes from that particular VRF or that particular address family with the defined neighbor.
When VRFs are not used, as on FusionExternal, routes will be exchanged using the GRT.
configure terminal
router bgp 65333
Step 272. Define DefaultBorder as another neighbor using its corresponding AS Number.
This neighbor should use the IP address associated with the DefaultBorder’s GRT sub-
interface.
Step 274. Define DefaultBorder as yet another neighbor using its corresponding AS Number.
This neighbor should use the IP address associated with the DefaultBorder’s VRF GUEST
sub-interface.
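For reference, the neighbor definitions described in these steps take the following form. The addresses follow Table 19-1, and the DefaultBorder AS number shown is a placeholder that must match the AS DNA Center provisioned on your pod:
neighbor 192.168.130.1 remote-as <DefaultBorder AS>
neighbor 192.168.130.5 remote-as <DefaultBorder AS>
neighbor 192.168.130.9 remote-as <DefaultBorder AS>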
Step 276. Activate the exchange of NLRI with all the neighbors associated with DefaultBorder.
address-family ipv4
neighbor 192.168.130.1 activate
neighbor 192.168.130.5 activate
neighbor 192.168.130.9 activate
exit-address-family
Step 278. Exit BGP configuration mode and exit global configuration mode entirely.
exit
end
Because multi-protocol BGP and VPNv4 prefixes are not being used on FusionExternal, only BGP
commands for IPv4 can be used to verify prefixes learned from DefaultBorder. Both IPv4 and VPNv4
commands will be used on DefaultBorder for verification.
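For reference, the verification in the following steps typically relies on these standard commands; the lab steps may call for a specific variant:
show ip bgp summary
show ip bgp vpnv4 all summary
show ip route vrf GUEST
The IPv4 summary is used on both routers, while the VPNv4 summary and the per-VRF routing table apply only to DefaultBorder.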
Note: Due to the recursive routing caused by mutual redistribution of BGP and LISP, it is possible to sometimes see
LISP routes for the shared services domain in the control plane nodes’ VRF routing tables, BGP routes to the shared
services domain in DefaultBorder’s VRF routing tables, and BGP routes to the shared services domain in the GRT of
FusionExternal. The following captures will not show these potential recursive routes.
Step 280. Display the IPv4 BGP Adjacency, messages, and prefix advertisements on DefaultBorder.
1. The BGP adjacency with the FusionExternal using the GRT is up and exchanging zero (0) prefixes.
Step 281. Display the IPv4 BGP Adjacency, messages, and prefix advertisements on FusionExternal.
Step 282. Display the VPNv4 BGP Adjacency, messages, and prefix advertisements on
DefaultBorder.
1. No VPNv4 Routes are learned from the FusionExternal. This is expected, as no routes have been advertised
from FusionExternal. The adjacency is up, though.
Step 284. Display the routes known to GUEST VRF on the DefaultBorder (CP-DBN).
There will be no changes from the same verification performed earlier.
FusionExternal has routes to the SDA prefixes learned from DefaultBorder in its global routing table. It
also has a default route to its next-hop router. This default route must be advertised back to
DefaultBorder.
There are four different methods to advertise a default route in BGP – each with its own caveats.
Three of the methods are very similar and result in the same effect: a default route is injected into the
BGP RIB and is then advertised to neighbors. The origin of that route is the key difference between
these methods.
1. network 0.0.0.0
a. This will inject the default route into BGP if there is a default route present in the GRT.
2. redistribution
a. This will inject the default route into BGP if redistributing from another routing protocol
(OSPF, IS-IS, EIGRP). The default route must currently be in the GRT AND be learned
from that routing protocol that is being redistributed (example, redistributing IS-IS and
learned from IS-IS).
3. default-information originate
a. This will cause the default route to be artificially generated and injected into the BGP
RIB regardless of whether or not it exists in the GRT.
b. In modern Cisco software versions, this also requires a redistribution statement to
trigger the default route to be advertised. The default-information originate command
alone is not enough to trigger the advertisement of the default route.
4. neighbor X.X.X.X default-originate
a. This command will only advertise the default route to a specific neighbor
- All previous approaches advertised the default route to all neighbors.
b. The default route will not be present in the BGP RIB where this command is configured –
which prevents advertisements to all neighbors.
c. The default route is artificially generated and injected into BGP similarly to default-
information originate.
If the lab were an actual deployment, FusionExternal would likely be BGP peered with an upstream
router. In this scenario, any default route advertisement method that advertises the default route to all
BGP peers presents a less than optimal approach.
The lab topology is also best described as a stub autonomous system. For these reasons, the most
appropriate approach is to use the fourth option – neighbor X.X.X.X default-originate – for advertising
the default route.
Deployment Note: In production, BGP peering with upstream routers generally uses filter-lists to block things like an
ill-configured default route advertisement that might create routing loops.
Whichever method is used to advertise a default route in BGP, it must be carefully considered based on the needs and
design of the deployment.
Due to the physical location (Cisco DMZ) and default firewall rules on the true edge (the next-hop router
beyond FusionExternal) of the lab network, pings, traceroutes, NTP, and nslookup are all blocked except
to certain IP addresses. To simplify verification of connectivity from the Fabric to non-Fabric,
FusionExternal will also advertise its Loopback 77 into BGP. That IP address is 7.7.7.7/32. It will
represent an unknown destination – a destination on the outside of the Fabric or ostensibly the Internet
– and will serve as a destination for testing connectivity and the configuration.
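The advertisement of Loopback 77 itself is not shown in the steps below; a minimal sketch, assuming the loopback is configured as 7.7.7.7/32 as stated above, would be:
router bgp 65333
address-family ipv4
network 7.7.7.7 mask 255.255.255.255
exit-address-family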
configure terminal
router bgp 65333
Step 288. Advertise the default route to the specific neighbor 192.168.130.1.
address-family ipv4
neighbor 192.168.130.1 default-originate
Step 289. Advertise the default route to the specific neighbor 192.168.130.5.
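The command for Step 289 follows the same pattern as Step 288. Because the GUEST VRF routing table shown later also receives the default route, the GUEST-facing neighbor is assumed to be configured the same way:
neighbor 192.168.130.5 default-originate
neighbor 192.168.130.9 default-originate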
Step 291. Exit address-family configuration mode, and then exit global configuration mode
entirely.
exit-address-family
exit
end
In the CAMPUS VRF routing table on DefaultBorder, the Internet default route and the route to FusionExternal's Loopback 77 are now known:
B*    0.0.0.0/0 [20/0] via 192.168.130.2, 00:02:09
      7.0.0.0/32 is subnetted, 1 subnets
B        7.7.7.7 [20/0] via 192.168.130.2, 00:03:51
      172.16.0.0/24 is subnetted, 2 subnets
B        172.16.101.0 [200/0] via 192.168.255.4, 00:35:37
B        172.16.201.0 [200/0] via 192.168.255.4, 00:35:37
      192.168.130.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.130.0/30 is directly connected, GigabitEthernet0/0/0.3004
L        192.168.130.1/32 is directly connected, GigabitEthernet0/0/0.3004
In the GUEST VRF routing table on DefaultBorder, the Internet default route and the route to FusionExternal's Loopback 77 are likewise known:
B*    0.0.0.0/0 [20/0] via 192.168.130.10, 00:06:09
      7.0.0.0/32 is subnetted, 1 subnets
B        7.7.7.7 [20/0] via 192.168.130.10, 00:07:51
      172.16.0.0/24 is subnetted, 1 subnets
B        172.16.250.0 [200/0] via 192.168.255.4, 00:39:37
      192.168.130.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.130.8/30 is directly connected, GigabitEthernet0/0/0.3006
L        192.168.130.9/32 is directly connected, GigabitEthernet0/0/0.3006
To verify the wireless host connectivity, you will use Apache Guacamole
(http://192.168.100.100:8080/guacamole/#/) to launch consoles for the wireless host VM.
Step 296. From the jump host, use the web browser and click on the bookmarked (PC VMs)
Guacamole link to connect to your wireless host.
Step 297. Open a console window to access the PC-Wireless VM by selecting it and right-clicking.
Step 298. Once open, click on the wireless networking icon in the
system tray and open the available SSID panel.
Click on the SSID for your Pod and connect.