1 - 50 Days CCNA Journey - Study Notes - Complete Book
Reach Us At:
Email – [email protected]
YouTube - http://www.youtube.com/c/NetworkNuggets
LinkedIn - www.linkedin.com/in/kuldeep-sheokand-05636998
Table of Contents
1. Introduction to networking
1.1 Basic networking terms
1.2 What is network and types of network?
1.3 Protocols
1.4 Cabling
1.5 Data transmission types
1.6 Data modes
1.7 What to learn to start networking career?
2. OSI model
2.1 Types of network reference models
2.2 OSI model: Introduction
2.2.1 Layered architecture
2.2.2 Upper layers
2.2.3 Lower layers
2.2.4 Application layer
2.2.5 Concept of headers and trailers
2.2.6 Presentation layer
2.2.7 Session layer
2.2.8 Transport layer
2.2.9 Concept of TCP/UDP port numbers
2.2.10 TCP 3-way handshaking process
2.2.11 Internet using sockets
2.2.12 Network layer
2.2.13 Data link layer
2.2.14 Physical layer
2.2.15 PDU explained
2.2.16 Why layered architecture?
2.2.17 Encapsulation and de-encapsulation
15. VLAN
15.1 Broadcast domain
15.2 Why use VLAN?
15.3 What is VLAN?
15.4 VLAN id
15.5 Types of VLAN
15.6 How to configure VLANs
15.7 Types of switch ports
15.8 Trunking/Tagging concept
15.9 Operations on native VLAN
15.10 Operations on trunk links
16. VTP
16.1 Introduction
16.2 VTP conditions
16.3 VTP configurations
16.4 VTP modes
17. Inter-VLAN routing
17.1 Inter-VLAN using a router
17.2 Inter-VLAN using Router on a stick
17.3 SVI – switched virtual interface
18. STP
18.1 STP & loop conditions
18.1.1 What is a Loop in networking?
18.1.2 Loop conditions
18.2 Function of STP
18.2.1 Why use STP?
18.2.2 STP convergence
18.2.3 Working of STP
18.2.4 Broadcast storm
24.2.1 Autonomous AP
24.2.2 Cisco meraki
24.2.3 Lightweight AP
24.2.4 WLC functions
24.2.5 WLC deployment models
24.2.6 Cisco AP modes
24.3 Building a wireless LAN
24.3.1 Connecting a Cisco AP
24.3.2 Accessing a Cisco WLC
24.3.3 Types of ports & interfaces
24.3.4 Connecting a Cisco WLC
24.3.5 Using interfaces
24.3.6 WLAN configuration
24.4 Securing wireless networks
24.4.1 Authentication
24.4.2 Encryption
24.4.3 Authentication methods
24.4.4 Encryption methods
24.4.5 WPA protocols & versions
24.4.6 Security types
25. Cisco security
25.1 Security fundamentals
25.1.1 Common security terms
25.1.2 Common security threats
25.1.3 Human vulnerabilities
25.1.4 Password vulnerabilities
25.1.5 Password alternatives
25.1.6 Managing user access
25.1.7 AAA server
1. Introduction to Networking
What is networking?
Communication
Medium
Signal
Electrical & Electronics
Networks/ Types of network
Protocols
Cables
What to learn to start networking career
What is communication?
It is the process of – Sending information from one place to another place – Using signals –
Via a medium.
What is information?
It is any type of data: Text, Image, Audio and Video.
What is medium?
It is a link (way to connect devices) through which signals travel from one place to another
place. A medium can be either wired, or wireless.
What is signal?
It is a way of sending information through a medium.
Types of signals:
- Analog
- Digital
- Electrical
- Optical/ Light
- Radio frequency
What is a network?
It is a group of devices connected with each other to share data, hardware and software resources.
For example: Bluetooth file sharing, SHAREit and the Internet.
Types of Networks:
• PAN
• LAN
• CAN
• MAN
• WAN
• INTERNET
Personal Area Network (PAN)
It is the network between only two devices, connected using wires or wirelessly. For example, when you share data between 2 devices using a crossover cable, that is an example of a personal area network.
Local Area Network (LAN)
It is the network created by a switch. When you need to create a network for an office or a
building, you need to bring multiple hosts/devices on a single network. This single network
is created using switches. So, all hosts (PCs, phones, cameras etc.) are connected to the
switch. This type of network is called a local area network.
Campus Area Network (CAN)
It is the name given to the LAN designed for campuses and educational institutes like
schools, colleges and universities. It is much bigger than a local area network and requires special network designs (Cisco 3-layer architecture, which will be discussed in later chapters) and specialized switches like distribution and core switches, plus some other devices like access points, wireless LAN controllers and PBXs.
So yes, technology is always in flux: always changing, always improving, making it easier for people to access network and internet resources.
Internet
It is the network of networks, the biggest network of the planet. It is the interconnection of
multiple smaller and bigger networks spanned across the globe for different purposes.
It is the mutual agreement of different vendors to follow and use OSI and TCP/IP standards
to create hardware and software compatible with each other.
It is the reason of current economic independence, social media connectivity and work
culture flexibilities and emerging concepts like work from home.
Intranet
It is a term given to the inside/private network of an organization.
Extranet
It is a term given to the outside network (network of some other branch) of an organization.
1.3 What is a protocol?
It is the set of rules & regulations to do something specific.
In networking, all tasks need a particular protocol to perform its function. For example to
send email you will need SMTP. Following are some of the protocols used in networking: IP,
MAC, DNS and HTTP/HTTPS.
1.4 Network cables:
Following are some of the cables used in different types of networks:
• Co-axial (rarely used nowadays)
• Twisted pair (mostly used in LAN)
• Fiber (widely used in WAN)
1.5 Data transmission types:
Data transmission means when you are transferring some information in the network, how
many devices are receiving it at a time. There are 3 types of data transmission types:
1. Unicast (one to one communication)
2. Multicast (one to many or one to group communication)
3. Broadcast (one to all communication)
1.6 Data modes:
Data mode means the way data is sent and received between devices in a network, i.e. whether two devices can send and receive data at the same time or not. There are 2 possible data modes:
1. Simplex
2. Duplex
Simplex – means one sided communication.
Duplex – means two sided communication.
Duplex mode is also of 2 types:
- Half duplex
- Full duplex
Half duplex – means only one side can communicate at a time.
Full duplex – means both of the sides can communicate simultaneously.
1.7 What to learn to start networking career?
IT and networking are ever-growing fields. But to master advanced concepts and the latest technology trends, you will need to master the existing fundamental concepts first.
All technologies in IT are inter-related. For example, to learn Linux you will need to learn IP addressing and other similar concepts.
Following are the topics to be mastered to enter into networking:
- Network reference models like OSI and TCP/IP model
- IP addressing and Subnetting
- Network devices and network Cabling
2. OSI model
2.1 Types of network reference models
There are 3 network reference models, as following:
1. OSI Model
2. TCP/IP Model
3. Cisco 3 Layer Architecture Model
The OSI model is used mainly for training purposes, to understand the logical flow of data in networks.
The TCP/IP model is the actual implementation of the OSI model in the real world. It explains the various protocols used for different purposes at different layers.
Cisco 3 layer architecture model is for network designing reference.
This whole process is known as segmentation, and these smaller units are known as segments at the transport layer. The transport layer establishes end-to-end connectivity using port numbers and ensures reliable data delivery via error detection & re-transmission (using TCP).
Protocols used: (TCP/UDP)
TCP is used for reliable communication (via the TCP 3-way handshake process). E.g. SMTP, FTP
UDP is used for faster, best-effort (unreliable) communication. E.g. phone calls, video calls
2.2.9 Concept of TCP/UDP port numbers
- IPv4
- IPv6
Devices & Protocol at this layer:
- Router
- IPv4, IPv6, ICMP
Note: Type ‘IPCONFIG’ in your CMD and see the IP address details.
2.2.13 Data link layer
It completes the Final Formatting of the data before actually sending it over the physical
links.
- Web servers don't care if the requests are coming from wired cables or wireless frequencies.
- Switches don't care whether they are carrying IPv4 or IPv6, as they have nothing to do with it.
Allows inter-operability between devices & vendors:
- Google Chrome can freely talk to an Apache server, as they both agree on HTML standards.
- A HUAWEI Ethernet switch can talk to a D-Link Ethernet switch, as they agree on Ethernet standards.
- A CISCO router can connect to a Juniper router, as they agree on IP routing standards.
2.2.17 Encapsulation and de-encapsulation
Encapsulation:
• Process of adding data formatting on the Sending Host to create a PDU
• It occurs when data moves down the OSI stack i.e. 7>6>5> and so on
• Data is passed to the layer below
• The process repeats until the physical layer is reached
De-encapsulation:
• Process of removing data formatting on the Receiving Host to retrieve information
from a PDU
• It occurs when data moves up the OSI stack i.e. 1>2>3 and so on
• Each layer removes its own header/trailer
• Data is then passed up to the layer above
• Process repeats until the application is reached
2.3 Where is OSI model helpful in real life?
When troubleshooting a network problem we often go up the OSI model:
Physical - Is the network cable plugged in?
Data Link - Do you have a link light?
Network - Are you getting an IP?
Transport - Can you ping your default gateway?
Session - Do you have DNS information?
– Can you ping 8.8.8.8 but not www.google.com?
Presentation & Application - Can you browse a website?
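A rough router-side equivalent of this checklist, using standard IOS commands (interface names and addresses below are placeholders, not from the original notes):
Router# show interfaces gig0/0 (Layer 1/2: is the line up? is the protocol up?)
Router# show ip interface brief (Layer 3: is an IP address assigned?)
Router# ping 192.168.1.1 (can you reach the default gateway / next hop?)
Router# ping 8.8.8.8 (can you reach the Internet by IP?)
Router# ping www.google.com (does name resolution / DNS work?)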
3. IP Addressing
3.1 3-tier IP address assignment architecture
What is IANA?
IANA – Internet Assigned Numbers Authority
It is an internet organization which deals in the assignment of numbers used in networks
and internet: for example, IP addresses, autonomous system (AS) numbers, and protocol & port numbers.
So, you can understand that you don't just directly get an internet address from IANA when you are using home Wi-Fi or wired internet. You get an IP from your local vendor (a T3 provider), which gets its addresses from a T2 provider, which in turn gets its addresses from a T1 provider like Jio.
So, this is how IP address assignment works and makes the internet possible globally.
3.2 Introduction
• 32 bit, Binary Number
• Have 2 parts: Network Part & Host Part
• 32 bit long binary number is difficult to understand for us, so for simplicity, it is
divided into 4 parts. Each part is of 1 Byte called an octet:
• 1 Octet = 8 Bits
• This format is called ‘Dotted Decimal Notation’
• As each number is of 1byte (8bits): 2^8 = 256 where decimal range would be 0 – 255
Note: that total numbers are still 256.
3.3 Classes of IP address
Total IPv4 Addresses are broken into 5 different classes: A, B, C, D, & E
Class is determined by the most significant (leading) bits of the 1st octet!
Class A (0 – 127) - It starts with 0XXXXXXX
Class B (128 – 191) - Starts with 10XXXXXX
Class C (192 – 223) - Starts with 110XXXXX
Class D (224 – 239) - Starts with 1110XXXX
Class E (240 – 255) - Starts with 1111XXXX
Classes divide an IP address into 2 parts:
Network Address – Used by network devices such as routers. E.g. 192.168.10.0
Host Address – Used by end devices like PCs. E.g. 192.168.10.5
- DHCP server is used for automatically providing IP address, subnet mask and default
gateway etc.
- Very useful in larger networks.
4. Subnetting
4.1 Introduction
- Foundation of Internet
- Essential knowledge part for the administrator of any network
- Proved its worth in various areas like saving address spaces, Security and Traffic
control etc.
4.2 What is subnetting?
It is the process of dividing a single network into multiple sub-networks by borrowing bits
from host field & moving them to network field. The result is more number of sub-networks
with lesser number of hosts per sub-net.
Note: It doesn't give you more hosts; it actually costs you 2 addresses per sub-network:
1 – for the network address and 1 – for the broadcast address. But, on a larger scale, it saves IP addresses.
4.3 Need of subnetting?
• When company is using 2 or more different technologies (like Ethernet & token ring)
in their different LAN segments
• When hosts dominating most of the LAN bandwidth need to be isolated
• Breakdown network to decrease latency
• Breakdown broadcast domain to reduce network congestion
• To restrict 2 network segments by distance limitations
Note: CISCO recommends less than 500 hosts in one subnet.
Borrowing a bit from the host portion and writing it as a power of 2 (as we are using the binary system) means changing the prefix (/) notation.
For 4 sub-networks:
11111111.11111111.11111111.11000000 /26
For 8 sub-networks:
11111111.11111111.11111111.11100000 /27
& so on...
This is how sub-netting is done on the basis of the number of sub-networks needed.
In such a case, we borrow bits from the host portion & increase the network bits (converting the host's 0s into network 1s, from left to right within the host portion).
Following is how sub-netting is done on the basis of the number of IP addresses/hosts needed:
In such a case, we reduce the number of host bits (keeping as many host bits as needed, counting from right to left, and converting all the remaining 0s into 1s).
For <= 254 hosts, use 8 host bits (0s), as 2^8 = 256 (254 usable).
Similarly:
7 host bits: 2^7 = 128 -> up to 126 usable hosts
6 host bits: 2^6 = 64 -> up to 62 usable hosts
5 host bits: 2^5 = 32 -> up to 30 usable hosts
4 host bits: 2^4 = 16 -> up to 14 usable hosts
3 host bits: 2^3 = 8 -> up to 6 usable hosts
2 host bits: 2^2 = 4 -> up to 2 usable hosts
Why are 6 bits needed to create 60 hosts?
Convert 60 into a binary number – 111100
So, as we can see here, 6 bits are needed to represent a number equal to 60 (and 2^6 – 2 = 62 usable hosts covers 60).
So, this is the reason behind the following pattern:
256 – 2^8
128 – 2^7
64 – 2^6
And so on...
Like if we need 5 hosts in a sub-net;
11111111.11111111.11111111.11111000 /29
255. 255. 255. 248
Or if we need 12 hosts per sub-network;
11111111.11111111.11111111.11110000 /28
255. 255. 255. 240
Similarly, for 50 hosts per sub-network;
11111111.11111111.11111111.11000000 /26
255. 255. 255. 192
4.5 Subnet mask and subnetting
What is Subnet Mask?
It is an address that tells us about the total no. of network bits and hosts bits in an IP
address or that separate network part from the host part.
192.168.1.0/24
By Default:
11111111.11111111.11111111.00000000 /24 (N/W bits=24; Host bits=8)
255. 255. 255. 0
With Subnetting:
11111111.11111111.11111111.11100000 /27 (N/W bits=27; Host bits=5)
255. 255. 255. 224
How it is calculated?
11111111.11111111.11111111.00000000 / 24
Let's take single octet (8bits) of network portion for explanation:
1 1 1 1 1 1 1 1
128 64 32 16 8 4 2 1
Now, add the numbers falling under all the total 1s to get the desired subnet mask.
Here it is: 128+64+32+16+8+4+2+1 = 255
11111111.11111111.11111111.11100000 / 27
1 1 1 0 0 0 0 0
128 64 32 0 0 0 0 0
128+64+32+0+0+0+0+0 = 224
192.168.1.0 / 26
11111111.11111111.11111111.11000000
It will result into:
255. 255. 255. 192
Similarly: 192.168.1.0 / 29
11111111.11111111.11111111.11111000
255. 255. 255. 248
Note: For all 1's in the octet, subnet mask will always be 255
Note: Don't add the numbers falling under 0s
Subnetting of 192.168.1.0 /27
Let’s understand Subnetting by dividing it into 4 simple steps:
1st Step
Calculate the total no. of sub-networks formed by the new prefix (/) notation!
Formula is = 2^no of on bits (bits borrowed from host portion)
2^3 = 8
So, total 8 sub-networks will be created by using / 27 notation.
2nd Step
Calculate total no. of hosts (IP addresses) per sub-net!
Formula is = 2^no of off bits (remaining host bits)
2^5 = 32
So, there will be 32 hosts available per sub-net.
3rd Step
Calculate the IP range which will be used to create 8 sub-network blocks!
For the sake of simplification, consider it like this:
As we know, without subnetting there are 256 total addresses in 1 class C network, with a range from 0 to 255 (not from 1 to 256), the reason being that we are using the binary number system.
In the same way; With Subnetting here;
Total no. of hosts per subnet = 32
IP range = 0 to 31 (0.0.0.31)
Note: There is an alternate way also. (Using subnet masks)
Default subnet mask of class C:
255.255.255.0
New obtained subnet mask after Subnetting:
255.255.255.224
Now, to find IP range, use this formula:
  255.255.255.255 (total / all-ones subnet mask)
- 255.255.255.224 (new obtained subnet mask)
-------------------------------------------------
    0.  0.  0. 31 (IP range)
So, it’s clear that by using either way, same IP range is achieved.
4th Step
Find out the address blocks of all 8 sub-networks formed!
1st Subnet:
192.168.1.0 + IP Range means 192.168.1.0 + 0.0.0.31
192.168.1.0 – 192.168.1.31
This is the range of first sub-network's address block where
192.168.1.0 = Network Address &
192.168.1.31 = Broadcast Address
And both of these addresses can't be assigned to any host in the network.
So, out of 32 total hosts, only 30 hosts will be usable.
Remember:
192.168.1.31 is the broadcast address of the first sub-net block, so the second block will start with the next number, which is 192.168.1.32.
2nd Subnet:
192.168.1.32 + IP Range means 192.168.1.32 + 0.0.0.31
192.168.1.32 – 192.168.1.63
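Continuing the same pattern gives all 8 sub-network blocks of 192.168.1.0 /27 (in each block, the first address is the network address and the last is the broadcast address):
1st: 192.168.1.0 – 192.168.1.31
2nd: 192.168.1.32 – 192.168.1.63
3rd: 192.168.1.64 – 192.168.1.95
4th: 192.168.1.96 – 192.168.1.127
5th: 192.168.1.128 – 192.168.1.159
6th: 192.168.1.160 – 192.168.1.191
7th: 192.168.1.192 – 192.168.1.223
8th: 192.168.1.224 – 192.168.1.255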
5. Network devices
5.1 Types of network devices
- Hub
- Bridge
- Switch
- Router
- Gateway
- Access Point (AP)
- Firewall (FW)
- Wireless Controller (WLC)
- Private Branch Exchange (PBX)
5.2 Hub
Hub is a device that allows multiple computers to communicate with each other over a
network. It has several Ethernet ports that are used to connect two or more hosts together.
Each computer or device connected to the hub can communicate with any other device
connected to one of the hub's Ethernet ports.
Hubs are similar to switches, but are not as "smart." While switches send incoming data to a
specific port, hubs broadcast all incoming data to all active ports. For example, if five devices
are connected to an 8-port hub, data received by the hub from one device is relayed to the
other four active ports. While this ensures the data gets to the right port, it also leads to inefficient use of the
network bandwidth.
For this reason, switches are much more commonly used than hubs.
Disadvantages of hub
- It uses more bandwidth of the network due to unnecessary broadcasts.
- Hubs work on half duplex, which means a hub can't send and receive data at the same time.
- Due to this, data collisions may occur, which may corrupt your data or require it to be sent again.
Hubs in brief
- Layer 1 devices; called dumb devices because they always broadcast
- 1 collision domain
- 1 broadcast domain
- Work on half duplex
- Wasted bandwidth
- Security risks
- Use the CSMA/CD method to recover from collisions
- Replaced by switches
5.3 Bridge
A bridge connects two or more local area networks (LANs) together. It is a layer 2 device which works on MAC addresses and understands frames. It usually has 2 ports and segments a LAN into smaller sections. It has multiple collision domains (usually 2) within a single broadcast domain, which means data can be sent or received on each segment of the network at the same time.
A bridge can transfer data between different protocols and technologies (i.e. a Token Ring
and Ethernet network). The device is similar to a router, but it does not analyze the data
being forwarded. Because of this, bridges are typically fast at transferring data, but not as
versatile as a router. A bridge cannot be used as a firewall like most routers can be.
Nowadays bridges are not used anymore; they have been replaced by switches.
Functions of a bridge
- It allows a single LAN to be extended to greater distances. You can join different
types of network links with a bridge while retaining the same broadcast domain.
- For example, you can bridge two distant LANs with bridges joined by fiber-optic
cable.
- A bridge forwards frames, but a filtering mechanism can be used to prevent
unnecessary frames from propagating across the network.
- They provide a barrier that keeps electrical or other problems on one segment from
propagating to the other segment.
- A bridge isolates each LAN from the collisions that occur on other LANs. Thus, it
creates separate collision domains within the same broadcast domain.
Remember:
On Ethernet networks, collisions occur when two nodes attempt to transmit at the
same time.
As more nodes are added to a network, collisions increase.
A bridge can be used to divide a network into separate collision domains while
retaining the broadcast domain.
A broadcast domain is basically a LAN as compared to an internetwork, which is
multiple LANs connected by routers.
In a broadcast domain, any node can send a message to any other node using data
link layer addressing, while a routed network requires internetwork addressing.
5.4 Switch
A switch is used to connect multiple end devices together on a network. Switches come
with more ports than hubs, generally 12, 24 or 48 Ethernet ports. These ports
can connect to computers, cable or DSL modems, and other switches. High-end switches can
have more than 50 ports and often are rack mounted.
Switches are more advanced than hubs and less capable than routers. Unlike hubs, switches
can limit the traffic to and from each port so that each device connected to the switch has a
sufficient amount of bandwidth.
For this reason, you can think of a switch as a "smart hub." However, switches don't provide
the firewall and logging capabilities that routers do.
Switches in brief
- Layer 2 devices
- Intelligent devices: broadcast once (for unknown destinations), then unicast
- Work on MAC addresses
- Understand frames
- Can work on both full duplex and half duplex
- Have multiple collision domains
- Have a single broadcast domain
- Save bandwidth in comparison to a bridge
- Increased security
5.5 Router
This is a hardware device that routes data (hence the name) from a local area network (LAN)
to another network connection. So, a router is used to connect two networks, e.g. internal and external networks. It is also used to find the best path for packets to travel in the network.
It can have other functions as well, like packet filtering etc.
Routers in brief
- Layer 3 device
- Work on IP address
- Understand packets
- Fewer ports compared to a switch
- Can be used as a firewall
- Have multiple broadcast domains
5.6 Gateway
A gateway is a hardware device that acts as a "gate" between two networks. It may be a
router, firewall, server, or other device that enables traffic to flow in and out of the
network.
While a gateway protects the nodes within network, it is also a node itself. The gateway
node is considered to be on the "edge" of the network as all data must flow through it
before coming in or going out of the network.
It may also translate data received from outside networks into a format or protocol
recognized by devices within the internal network.
A router is a common type of gateway used in home networks. This is the reason it is known
as default gateway.
It allows computers within the local network to send and receive data over the Internet.
A firewall is a more advanced type of gateway, which filters inbound and outbound traffic,
disallowing incoming data from suspicious or unauthorized sources.
A proxy server is another type of gateway that uses a combination of hardware and
software to filter traffic between two networks. For example, a proxy server may only allow
local computers to access a list of authorized websites.
5.7 Access point
An access point is a device, such as a wireless router, that allows wireless devices to connect
to a network. Most access points have built-in routers, while others must be connected to a
router in order to provide network access. In either case, access points are typically
hardwired to other devices, such as network switches or broadband modems.
Access points can be found in many places, including houses, businesses, and public
locations. In most houses, the access point is a wireless router, which is connected to a DSL
or cable modem. However, some modems may include wireless capabilities, making the
modem itself the access point.
Large businesses often provide several access points, which allows employees to wirelessly
connect to a central network from a wide range of locations. Public access points can be
found in stores, coffee shops, restaurants, libraries, and other locations.
While access points typically provide wireless access to the Internet, some are intended only
to provide access to a closed network. For example, a business may provide secure access
points to its employees so they can wirelessly access files from a network server.
Also, most access points provide Wi-Fi access, but it is possible for an access point to refer
to a Bluetooth device or other type of wireless connection.
However, the purpose of most access points is to provide Internet access to connected
users.
It may also be abbreviated AP or WAP (for wireless access point). However, WAP is not as
commonly used as AP, since WAP is also the standard acronym for Wireless Application Protocol.
5.8 Firewall
Firewall is a network security device. It can be either hardware or software. A hardware
firewall acts as a barrier between a trusted system or network and outside connections,
such as the Internet.
However, a computer or software firewall is more of a filter than a wall, allowing trusted
data to flow through it. Software firewalls are more common for individual users and can be
custom configured via a software interface. Both Windows and OS X include built-in
firewalls.
Many businesses and organizations protect their internal networks using hardware firewalls.
A single or double firewall may be used to create a demilitarized zone (DMZ), which
prevents untrusted data from ever reaching the LAN.
DMZ – collection of servers which are available for public uses. They are kept separate from
other dedicated LAN servers.
5.9 Wireless LAN controller (WLC)
A WLAN controller manages wireless network access points that allow wireless devices to
connect to the network. What a wireless access point does for your network is similar to
what an amplifier does for your home stereo.
It takes the bandwidth coming from a router and stretches it so that many devices can go on
the network from farther distances away.
5.10 IP Phone
As technology is evolving, so is our voice industry. Traditional landline phones are being
replaced with IP phones. These phones run on existing Ethernet links (RJ-45 ports) instead of
old Rj-11 lines.
5.11 PBX
PBX stands for Private Branch Exchange, which is a private telephone network used within a
company or organization.
The users of the PBX phone system can communicate internally (within their company) and
externally (with the outside world), using different communication channels like Voice over
IP, ISDN or analog.
6.1 Router memory types
1. ROM
2. Flash
3. NVRAM
4. VRAM
6.1.1 ROM (Read only Memory)
Well I need my device to check the hardware, see what's installed, look at the various
components, and make sure they're functioning properly. And what do we call that
particular phase?
- POST (Power On Self-Test)
6.1.2 Flash memory
- Contains the operating system (Cisco IOS)
- VLAN.dat (Information of virtual LANs in switches)
6.1.3 NVRAM
Stores startup configuration.
This may include IP addresses (routing protocol, hostname of a router).
Startup-configuration:
It's a backup copy of your running configuration and it's stored in NVRAM because NVRAM
is nonvolatile.
6.1.4 VRAM
- Contains the running copy of the configuration file.
- Stores the routing table.
- ARP information can be stored here.
And remember, RAM is volatile. – Power is turned off, everything in RAM is gone.
- ROM – Bootstrap program & POST
- Flash – Find & load the IOS
- NVRAM – Load saved configuration (also known as startup-config)
- VRAM (RAM) – Stores the running configuration only
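A quick way to see these components together is the show version command, which (among other things) displays the IOS version, the system image file in flash, the amounts of RAM/NVRAM/flash and the configuration register:
Router# show version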
6.2 Router identification:
One side of the console cable is an 8-pin RJ-45 connector used to connect to the console port of the router, while the other side is an RS-232 serial connector which connects to the PC's serial port.
Note: Console cables may be of the following types:
RJ-45 to RS-232
RJ-45 to USB
USB to USB
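Note: once the cable is connected, open a terminal emulator (e.g. PuTTY or Tera Term) using the standard Cisco console settings: 9600 baud, 8 data bits, no parity, 1 stop bit, no flow control.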
IOS is the operating system used on the majority of Cisco devices like routers and switches.
Previously, Cisco Catalyst switches used CatOS, which was then replaced with IOS.
Cisco's PIX firewall used the Finesse operating system, which was then replaced with IOS.
The command-line interfaces of all of these operating systems are mostly identical.
It has been under continuous development since 1984.
It is a single gigantic image, coded in the C language.
The current stable version of Cisco IOS is 15.8
IOS configuration is usually done through a text-based command line interface (CLI).
The core function of Cisco IOS is to enable data communications between
network nodes.
6.4.2 IOS functions
Cisco IOS also offers dozens of additional services that an administrator can use to improve
the performance and security of network traffic, like:
- Encryption
- Authentication
- Firewall capabilities
  - Policy enforcement
  - Deep packet inspection
- Quality of service
- Intelligent routing, and
- Proxy capability, etc.
In Cisco's Integrated Services Routers (ISRs), IOS can also support call processing and unified
communications services.
6.4.3 IOS versions
For Cisco, there are 4 IOS feature sets (versions):
1. LAN Lite
2. LAN Base
3. IP Base and
4. IP Services
Source:http://nhprice.com/comparison-of-cisco-ios-image.html
6.4.4 IOS variants
There are three variants of the operating system:
- IOS XE
- IOS XR &
- Nexus OS
IOS XE runs on enterprise-grade Cisco ISRs, Aggregation Services Routers and Catalyst
switches.
IOS XR runs on Cisco's service provider products, such as its Carrier Routing System routers.
Nexus OS runs on Cisco's Nexus family of data center switches.
2. Privileged exec mode
- It is used mainly for displaying the outputs of router functions by using show commands.
- It is also used for saving and deleting configurations.
Some important commands of the 2nd mode are: copy, show, delete, erase, reload and setup
3. Global configuration mode
- It is used for the execution of all major commands of the router.
- It is used for assigning IP addresses on router interfaces, changing the hostname, running routing protocols and all similar advanced stuff.
As all of the commands can be run from this mode (even the commands of the first and second modes, by prefixing them with 'do'), it is called the global configuration mode.
6.5.3 Sub-working modes of a router
There are some sub-modes of a router:
1. Interface sub mode
When you are in some interface of a router.
Router(config-if)#
2. Line sub mode
When you are in console mode.
Router(config-line)#
3. Protocol sub mode
When you are running some routing protocol.
Router(config-router)#
6.5.4 Router mode navigation
Exit drops back down a level.
Router(config)# exit
Router# exit
Router>
End drops back to Privilege Exec Mode from any level.
For example:
Router(config)# interface fa0/1
Router(config-if)# end
Router#
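Putting it together, a typical navigation sequence looks like this (the interface name is a placeholder):
Router> enable
Router# configure terminal
Router(config)# interface fa0/1
Router(config-if)# end
Router# disable
Router>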
6.5.5 Some important 2nd mode commands
#clock – to set date & time settings
#copy – to copy data from one location to another
#debug – to troubleshoot network problems
#delete – to delete the content of flash memory
#erase – to erase the content of NVRAM
#ping – to check the connectivity between two nodes in the network
Router(config-if)# exit
Router(config)# do show run
Command to verify the description set on the router interface.
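For reference, a minimal description configuration plus verification sketch (interface name and description text are placeholders):
Router(config)# interface fa0/1
Router(config-if)# description LINK-TO-CORE-SWITCH
Router(config-if)# exit
Router(config)# do show interfaces description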
6.6.4 Speed & duplex settings
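A minimal sketch of typical speed & duplex commands (interface name and values are placeholders; 'auto' is the usual default):
Router(config)# interface fa0/1
Router(config-if)# speed 100
Router(config-if)# duplex full
Router(config-if)# end
Router# show interfaces fa0/1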
Rommon> IP_SUBNET_MASK=255.0.0.0
Rommon> IP_DEFAULT_GATEWAY=10.0.0.2
Rommon> TFTP_SERVER=10.0.0.2
Rommon> TFTP_FILE=manav.bin (type the IOS filename which is stored on the TFTP server)
Rommon> tftpdnld
Type “y” on the output screen.
Rommon> set
Rommon> reset
6.9 IOS upgradation and licensing
6.9.1 IOS backup
- Copies of the device's system IOS image and configuration can be saved to flash, TFTP, or USB.
- If you copy a configuration file into the running-config, it will be merged with the
current configurations.
- To replace a configuration, factory reset the device and then copy the new
configuration into the startup-config.
#copy flash: tftp
#copy start tftp
#copy run usb
6.9.2 Factory reset
How to factory reset a router:
Router# write erase
Factory resetting a router means to erase all the startup configurations of the router.
Reload to boot up with a blank configuration.
6.9.3 IOS image upgradation
IOS software images can be downloaded from: https://software.cisco.com/
After downloading the software, copy it to the device’s flash using TFTP server:
#copy tftp flash:
Delete the old system image or use the ‘boot system’ command to change the boot image.
#boot system
Go to the cisco license portal https://www.cisco.com/go/license and enter the PAK code
and UDI to generate the license.
Copy the license in the flash of the router.
#license install flash:
#show license
6.10 Password setting
6.10.1 Types of passwords
1. Console password
2. User mode password
- Enable password
- Enable secret
3. Aux password
4. Telnet password
5. SSH password
6.10.2 Console password
It is used to restrict the physical access of the CISCO device from unauthorized users on
console port. It is applied on the physical line of the device (line console).
()# line console 0
(-line)# password ****
(-line)# login
(-line)# exit
6.10.4 Aux password
It is used to restrict access to the CISCO device from unauthorized users on the aux port. It is applied on the physical line of the device (line aux).
()# line aux 0
(-line)# password ****
(-line)# login
6.10.5 Telnet password
()# line vty 0 15
(-line)# password ****
(-line)# login
6.10.6 SSH password
()# line vty 0 15
(-line)# transport input ssh
(-line)# password ****
(-line)# login
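For SSH to actually work, a few more commands are normally required beyond the vty-line settings above; a minimal sketch (hostname, domain name, username and password are placeholders):
R1(config)# hostname R1
R1(config)# ip domain-name example.local
R1(config)# crypto key generate rsa
R1(config)# username admin secret cisco@123
R1(config)# line vty 0 15
R1(config-line)# transport input ssh
R1(config-line)# login local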
6.11 Telnet & SSH configuration
6.11.1 Telnet V/S SSH
Step 2:
Type the following command in the rommon mode to ignore the startup-config at boot:
Rommon 1> confreg 0X2142
The startup-config is still there with the full configuration including the unknown enable
secret, but the router doesn’t check it when it boots. Type the reset command to reload:
Rommon1> reset
Step 3:
The router will boot up with no configuration. Type ‘no’ to bypass the setup wizard.
Enter enable mode. You will not be asked for the enable secret as it is not in the running-
config.
Step 4:
Copy the startup-config into the running-config.
# copy start run
Step 5:
Enter a new enable secret in global configuration mode to override the old one. It will go
into the running-config.
()# enable secret cisco@123
Step 6:
Change the value of configuration register from 0X2142 (which we did at the time of router
restart) to the original value which is 0X2102 for normal boot.
()# config-register 0X2102
Step 7:
Run the following command to save the running-config into startup-config:
# copy run start
This will merge the new enable secret with the existing startup-config.
7. Routing fundamentals
7.1 What is routing?
When multiple equal length routes are there for the same destination, all of them will be
added to the routing table and traffic will be load-balanced between them.
Scenario for lab practice:
8. Static routing
8.1 Concept
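In static routing, the administrator manually tells the router how to reach each remote network. A minimal sketch (the destination network, mask and next-hop address below are placeholders):
Router(config)# ip route 192.168.2.0 255.255.255.0 10.0.0.2
(destination network) (subnet mask) (next-hop IP address)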
9. Default routing
9.1 Concept
- It is a type of routing which is useful in those cases when the destination network id
is unknown to the router.
- In such cases, router will use a default route and will send all the incoming traffic to
that route, by default.
- We can say that it is a special case of static routing.
9.2 Configuration
Router(config)# ip route (any destination) (any subnet mask) (next-hop ip address)
Router(config)# ip route 0.0.0.0 0.0.0.0 123.19.17.33
9.3 Practical applications
Customer routers need default routing to be connected to ISP networks.
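On Cisco routers, this can be verified with show ip route: the configured default route appears as a static route flagged "S*" (candidate default), and the "Gateway of last resort" line shows the next hop.
Router# show ip route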
- When a routing protocol is used, routers automatically advertise their best paths to
known networks to each other.
- Routers use this information to determine their own best path to the known
destinations.
- When the state of the network changes, such as a link going down or a new subnet
being added, the routers update each other.
- Routers will automatically calculate a new best path and update the routing table if
the network changes.
10.2 Dynamic V/S static routing
Routing protocols are more scalable than administrator-defined static routes.
Using purely static routes is only feasible in very small environments.
10.3 Advantages of dynamic routing
The routers automatically advertise available subnets to each other without the
administrator having to manually enter every route on every router.
If a subnet is added or removed the routers will automatically discover that and
update their routing tables.
If the best path to a subnet goes down routers automatically discover that and will
calculate a new best path if one is available.
Using a combination of a dynamic routing protocol and static routes is very common
in real world environments.
In this case the routing protocol will be used to carry the bulk of the network
information.
Static routes can also be used on an as needed basis. For example for backup
purposes or for a static route to the Internet (which will typically be injected into the
dynamic routing protocol and advertised to the rest of the routers – will be discussed
in NAT)
In Distance Vector protocols, each router sends its directly connected neighbors a list of all
its known networks along with its own distance to each of those networks. Distance vector
routing protocols do not advertise the entire network topology.
A router only knows its directly connected neighbors and the lists of networks those
neighbors have advertised. It doesn't have detailed topology information beyond its directly
connected neighbors.
All of the IGPs do the same job, which is to advertise routes within an organization and
determine the best path or paths.
An organization will typically pick one of the IGPs.
If an organization has multiple IGPs in effect (for example because of a merger), information
can be redistributed between them. This should generally be avoided if possible.
Link state routing protocols are also known as “Intelligent Routing”
12. OSPF
12.1 Introduction
• O – Open & SPF – Shortest Path First
• The IETF developed this open-standard protocol, which works on the Shortest Path First i.e. SPF algorithm and supports very large networks.
• OSPF is a link state protocol.
• The metric of OSPF is “Cost”.
• Cost formula of OSPF: Cost = 10^8 / Bandwidth (a reference bandwidth of 100 Mbps divided by the interface bandwidth in bps)
• AD value is = 110
• OSPF works on Dijkstra algorithm.
• OSPF normally uses 224.0.0.5 as its multicast ip.
• OSPF creates and maintain all three tables:
- Routing table (#show ip route)
- Topology table and (#show ip ospf database)
- Neighbor table (#show ip ospf neighbor)
• Timers of OSPF:
- Hello timer - 10 secs
- Dead Interval/Flush timer - 40 secs
12.2 Link state concept
12.2.1 Explanation of link-state concept – why?
A major drawback of distance vector protocols is that they not only send routing updates at
a regularly scheduled time, but these routing updates contain full routing tables for that
protocol.
If the sending router knows of more than 25 RIP routes, the update will require multiple
packets, since a RIP update packet contains a max of 25 routes.
This takes up valuable bandwidth and puts an unnecessary drain on the receiving router's
CPU and memory.
Once the OSPF network has reached a state of convergence, the routers have synchronized
link state databases. The beauty of the Dijkstra algorithm is that recalculation of routes due
to a network change is so fast that routing loops literally have no time to form.
12.2.4 Explanation of link-state concept – what?
This exchange of LSAs between neighbors helps bring about one major advantage of link
state protocols - all routers in the network will have a similar view of the overall network.
In comparison to RIP updates (every 30 seconds!), OSPF LSAs aren't sent out all that often —
they're flooded when there's an actual change in the network, and each LSA is refreshed
every 30 minutes.
Before any LSA exchange can begin, a neighbor relationship must be formed. Neighbors
must be discovered and form an adjacency, after which LSAs will be exchanged.
12.2.5 Here's a live OSPF database
When an OSPF-enabled router receives an LSA, that router checks its OSPF database for any
pre-existing entries for that link.
If there is an entry for the link, the sequence numbers come into play:
12.2.8 Lsa sequence numbers – how?
Sequence number is the same: LSA is ignored, no additional action taken.
Sequence number is lower: The router ignores the update and transmits an LSU containing
an LSA back to the original sender. Basically, the router with the most recent information is
telling the original sender “Hey, you sent me old info. Here’s the latest info on that link.”
Sequence number is higher: The router adds the LSA to its database and sends an
LSAcknowledgement back to the original sender. The router floods the LSA and updates its
own routing table by running the SPF algorithm against the now-updated database.
12.3 Process-ID
OSPF needs a 16 bit ID known as “Process ID” to start its process.
Total value: 2^16 = 65536
0 - Not used
1 - 65535 -> Usable range
• The process ID can be the same or different on each router. This ID is local to the router only; it is not sent to other routers in the hello packet and it is not matched during the OSPF neighborship formation process.
• Multiple OSPF processes can be run on a single router, and routes are not exchanged
between such processes by default.
12.4 Wildcard mask concept
In simple words, it is opposite to the subnet mask.
In subnet mask, network part is given importance and host part is ignored. So, network part
is represented by 1 & host part by 0.
But, in wildcard mask, host part is given importance and network part is ignored.
So, network part is represented by 0 & host part is represented by 1.
12.4.1 How to calculate Wildcard mask?
So, the wildcard mask for class A: 00000000.11111111.11111111.11111111 => 0.255.255.255
Class B => 0.0.255.255
Class C => 0.0.0.255
OSPF areas allow us to build a hierarchy into our network, where we have a "backbone
area" (Area 0), and expand the network from there. It means the routers of other areas
should be connected to such a router which has at least one of its interfaces in area 0.
This is the basis of OSPF 2 level hierarchy: area 0 and all other areas.
This is the reason area 0 is known as “Backbone Area”.
2. OSPF Hellos allow the neighbors to remind each other that they are still there, which
means they're still neighbors!
R1()#router ospf 10
network 192.168.10.0 0.0.0.255 area 0
network 172.19.0.0 0.0.255.255 area 0
exit
R2()#router ospf 10
network 192.168.10.0 0.0.0.255 area 0
network 10.0.0.0 0.255.255.255 area 0
exit
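After such a configuration, OSPF can be verified with the standard show commands (R1 is from the example above):
R1# show ip ospf neighbor
R1# show ip ospf database
R1# show ip route ospf
R1# show ip protocols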
12.8 Neighborship conditions
R1()#router ospf 10
network 10.0.0.0 0.255.255.255 area 10
network 30.0.0.0 0.255.255.255 area 10
exit
R2()#router ospf 10
network 10.0.0.0 0.255.255.255 area 10
network 20.0.0.0 0.255.255.255 area 10
exit
R3()#router ospf 10
network 30.0.0.0 0.255.255.255 area 10
network 40.0.0.0 0.255.255.255 area 10
exit
R4()#router ospf 10
network 40.0.0.0 0.255.255.255 area 10
network 20.0.0.0 0.255.255.255 area 10
network 172.16.0.0 0.0.255.255 area 0
network 172.17.0.0 0.0.255.255 area 0
• These are the routers which sit at the border of a company; they connect the internal networks of one AS with the external networks of another AS.
• They are the routers where re-distribution is done.
12.12 OSPF Network types
1. Point to Point (Serial links)
2. Point to Multipoint (Wireless Internet)
3. Broadcast multi-access (Ethernet networks)
4. Non-broadcast multi-access (Frame relay links)
12.13 Concept of DR & BDR
If all routers in an OSPF network had to form adjacencies with every other router, and
continued to exchange LSAs with every other router, a large amount of bandwidth would be
used any time a router flooded a network topology change.
So, OSPF uses a designated router (DR) and a backup designated router (BDR) to handle
adjacency changes in its segments.
There's no need to have all four routers flooding news of the same network change -- so the
router that detects the change will let the DR and BDR for this segment know, and in turn
the DR will flood the change.
The Designated Router (DR) is the router that will receive the LSAs from the other routers in
the area, and then flood the LSA indicating the network change to all non-DR and non-BDR
routers.
If the DR fails, the Backup Designated Router (BDR) takes its place. The BDR is promoted to
DR and another election is held, this one to elect a new BDR.
Routers that are neither the DR nor the BDR for a given network segment are known as
DROTHERS.
12.13.1 DR & BDR working principle
When a router on an OSPF segment with a DR and BDR detects a change in the network, the
detecting router will not notify all of its neighbors.
The detecting router will send a multicast to 224.0.0.6, the All Designated Routers address,
where both the DR and BDR will hear it.
The DR then sends a multicast to 224.0.0.5, the All OSPF Routers Address, where every
OSPF-speaking router on that segment will hear it.
The BDR updates its OSPF database in order to stay ready to step into the DR role if needed,
but only the DR sends this multicast.
12.13.2 DR & BDR election process
• All router interfaces on the segment with an OSPF interface priority of 1 or greater
are eligible to participate in the election.
• The router with the highest interface priority is elected DR.
• This process is repeated to elect a new BDR. A single router cannot be the DR and
BDR for the same segment.
• Setting the interface priority to zero will disqualify that router from participating in
the election.
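The interface priority is set with the ip ospf priority command; a minimal sketch (interface name and value are placeholders; the default priority is 1, and 0 removes the router from the DR/BDR election on that segment):
Router(config)# interface gig0/0
Router(config-if)# ip ospf priority 100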
12.13.3 What is router id or R-ID?
As we have seen, by default the priority is the same (1) for all the routers, so there is a tie between all the routers.
#show ip ospf neighbor
In such scenario, router id comes into picture and plays its role.
Router ID:
1st priority – Manually Configured R-ID
2nd priority – Highest ip on Loopback interface
3rd priority – Highest IP on Physical interface
12.13.4 How to manually configure router id?
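A minimal sketch of manually configuring the router ID (process number and ID value are placeholders; the change takes effect only after the OSPF process restarts):
R1(config)# router ospf 10
R1(config-router)# router-id 1.1.1.1
R1(config-router)# end
R1# clear ip ospf process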
You can change the “Reference Bandwidth” part of the formula with the auto-cost
reference-bandwidth command.
If you have Gig Ethernet interfaces (or faster) in your network, you should use the auto-cost
command to set the reference bandwidth at least as high as the bandwidth of the fastest
interface in your OSPF network. (And probably higher)
12.15.1 How to change reference bandwidth in OSPF?
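A minimal sketch, assuming the reference bandwidth should cover 10 Gbps links (the value is in Mbps and should be set identically on all OSPF routers):
R1(config)# router ospf 10
R1(config-router)# auto-cost reference-bandwidth 10000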
Area types
Area – Restriction
Normal – None
Stub – No Type 5 AS-external LSAs allowed
Totally Stubby – No Type 3, 4 or 5 LSAs allowed, except the default summary route
NSSA – No Type 5 AS-external LSAs allowed, but Type 7 LSAs that convert to Type 5 at the NSSA ABR can traverse
LSA types
Type Description
LSA Type 1: Router LSA.
LSA Type 2: Network LSA.
LSA Type 3: Summary LSA.
LSA Type 4: Summary ASBR LSA.
LSA Type 5: Autonomous system external LSA.
LSA Type 6: Multicast OSPF LSA.
LSA Type 7: Not-so-stubby area LSA.
LSA Type 8: External attribute LSA for BGP
13. IP services
13.1 Need scenarios
These are the services that run on underlying IP networks.
It means, first we have devices connected in the network and then we have the connectivity
between them using static or dynamic routing depending on the type and size of the
network.
Then these IP services run on top of that IP network.
Each service serves a different purpose. Like:
- DHCP provide dynamic IP address assignment
- NAT does the public to private and private to public IP conversion
- ACL provides access security, and
- FHRP provides redundancy in the network.
13.2 Types of IP services
Following are some of the IP services discussed in this chapter:
1. DHCP: Dynamic Host Configuration Protocol
- Same network
- Different network (DHCP relay)
2. NAT: Network Address Translation
- Static
- Dynamic
- PAT
3. ACL: Access Control List
- Standard
- Extended
4. FHRP: First Hop Redundancy Protocol
- HSRP
- VRRP
- GLBP
5. NTP: Network Time Protocol
- Configuration
- Stratum levels
6. SNMP: Simple Network Management Protocol
- Concept
- Versions
7. Syslog: System Logging
- Concept
- Configuration
8. QoS: Quality of Service
- Concept
- Types
9. SSH: Secure Shell
- Already discussed in earlier videos with full explanation and configuration labs
10. FTP/TFTP: File Transfer Protocol/Trivial FTP
- Already discussed in earlier videos with full explanation and configuration labs
13.3 DHCP
Dynamic Host-Configuration Protocol (DHCP):
- It is a Dynamic/Automatic method to assign IP Addresses
- And it provide not only IP Addresses, but:
- Subnet Masks
- Gateways, and
- DNS
- Now, what is DNS?
- Domain Name System (DNS): resolves a URL (website name) to an IP address and vice-versa
- Works on UDP port 53
13.3.1 DHCP DORA process
To achieve the DHCP service, a negotiation known as the DORA process happens between the client and the server:
1. Discover – the client broadcasts a DHCPDISCOVER message to find a DHCP server
2. Offer – the server offers an IP address with a DHCPOFFER message
3. Request – the client requests the offered address with a DHCPREQUEST message
4. Acknowledge – the server confirms the lease with a DHCPACK message
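A minimal sketch of configuring a router as a DHCP server (pool name, network and addresses are placeholders):
R1(config)# ip dhcp excluded-address 192.168.10.1 192.168.10.10
R1(config)# ip dhcp pool LAN-POOL
R1(dhcp-config)# network 192.168.10.0 255.255.255.0
R1(dhcp-config)# default-router 192.168.10.1
R1(dhcp-config)# dns-server 8.8.8.8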
13.3 NAT
13.3.1 Introduction
- Private IP addresses are not routable on the Internet
- Public IP Addresses can't be assigned to private devices
- NAT will translate Private to Public and vice-versa
Note that: NAT is done ONLY by Routers, no Switches, no Multi-layer switches.
13.3.2 NAT types
1. Static
One to one translating
2. Dynamic
Group-to-group translation. Also, this did not solve everything (IP exhaustion was still there), so here comes:
3. PAT (Port Address Translation) - or NAT Overload
PAT will do a one to 65535 Translation
13.3.3 NAT terminology
Inside – client side (our side)
Outside – server side (their side)
Local – private (of the LAN)
Global – public (of the Internet)
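A minimal PAT (NAT overload) sketch tying these terms together (interface names, ACL number and addresses are placeholders):
R1(config)# access-list 1 permit 192.168.10.0 0.0.0.255
R1(config)# ip nat inside source list 1 interface gig0/1 overload
R1(config)# interface gig0/0
R1(config-if)# ip nat inside
R1(config-if)# interface gig0/1
R1(config-if)# ip nat outside
Verification: #show ip nat translations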
13.4 NTP
13.4.1 Introduction
NTP is used to synchronize date and time settings on all the devices in the network.
- We have to stay synchronized
- Give a precise information, with real timing and date
- Either by setting an inner clock manually, or
- Asking someone to inform us about timing
- Uses UDP = 123
13.4.2 Stratum levels
Each network device can either be a Server or a Client.
- Stratum indicates:
- How preferred and accurate this time source is
- Starts from 0 up to 15
- The closer (the lower the stratum), the better
- By default, a Cisco router = stratum 8
The NTP Stratum model is a representation of the hierarchy of time servers in an NTP
network, where the Stratum level (0-15) indicates the device's distance to the reference
clock.
For example:
Stratum 0 serves as a reference clock and is the most accurate and highest precision time
server (e.g., atomic clocks, GPS clocks, and radio clocks.)
Stratum 1 servers take their time from Stratum 0 servers and so on up to Stratum 15.
Stratum 16 clocks are not synchronized to any source.
The upper limit for stratum is 15; stratum 16 is used to indicate that a device is
unsynchronized.
13.4.3 Configuration
R1(config)#ntp server (ip address of ntp server)
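Synchronization can then be verified with:
R1# show ntp status
R1# show ntp associations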
13.5 SNMP
13.5.1 Introduction
SNMP is used to monitor all the devices in a network from a central point of surveillance.
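A minimal sketch of enabling basic read-only SNMP access on a device (the community string and NMS address are placeholders):
R1(config)# snmp-server community NUGGETS ro
R1(config)# snmp-server host 192.168.1.50 version 2c NUGGETS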
13.6 Syslog
13.6.1 Introduction
Syslog server is used to store all the messages generated by all the devices in the network at
a centralized location.
- It is aware of "everything" happening in the network
- Know all what's happening behind the scenes (or even in front of)
- Starts from the obvious information up to "Emergency"
- Server/Client Relationship
13.6.2 Syslog server types
- Server can be a Normal Server that collects all the loggings
- Server can use the "Syslog" or "Splunk" or “Kiwi” software
- Client is the networking device that generates logs
Quote: "Every Awesome Cisco Engineer Will Need Ice-Cream Daily"
13.6.3 Syslog message types
Syslog message severity levels range from 0 to 7
- 0 means most critical, needs immediate action
- 7 means least critical, just informational messages
- 0 – Emergency
- 1 – Alert
- 2 – Critical
- 3 – Error
- 4 – Warning
- 5 – Notifications
- 6 – Information
- 7 – Debugging
- "Every Awesome Cisco Engineer Will Need Ice-Cream Daily"
13.7 QoS
13.7.1 Introduction
Quality of Service i.e. QoS is used to prefer one type of traffic over another. It is also known as "traffic engineering".
- What if the traffic is more than the available bandwidth?
- If congestion WILL happen:
Can some traffic be preferred over another?
Generally, UDP will be preferred over TCP (TCP will automatically retransmit lost segments).
QoS decides preference based on a variety of factors, some of which are: classification, marking, queuing, shaping, and policing.
13.7.2 Classification and marking
Classifying the traffic according to its importance (Very High, High, Med, Low)
13.7.3 Queuing
Giving a specific priority to every type of packet (Giving the priority of "very high" to the
"UDP" traffic)
Dividing the Transmission capacity with respect to the priority (Giving 40% to the very high,
20% to the high, etc.)
13.7.4 Policing and shaping
- Policing is counting the traffic before transmitting it, and limiting it (e.g. limit FTP traffic to be transmitted at a maximum of only 2 Mbps)
  *counting the desired traffic, and dropping all that exceeds the limit
- Shaping limits the queued traffic to a certain rate, and whatever EXCEEDS that rate waits in the queue.
The incoming frame will be having the following MAC address as its destination MAC
address: ffff.ffff.ffff - destination MAC address of the unknown frame
When the switch sees this destination MAC address, it will flood that frame out of all the switch interfaces except the one it came in on. Then the device with the destination IP address will reply, and its MAC address will be learned and stored by the switch for all further communication. Next time, there will be no flooding, only unicast.
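The learned addresses can be checked in the switch's MAC address table:
Switch# show mac address-table
It lists the learned MAC addresses, their VLANs and the ports on which they were learned.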
When the destination network ID is different from the source network ID, the sending host checks whether a default gateway address is configured or not.
Again, there are 2 possible scenarios:
- A default gateway is configured. Or,
- It is not.
If a default gateway is configured, then:
The frame will be forwarded out of the respective port which is connected to the gateway device (by default a router), and the router will handle all further communication.
If a default gateway is not configured, then the frame will simply be dropped. This is basically how a switch works.
14.5 Functions of a switch
Following are the 3 main functions of a switch:
1. Address learning
2. Frame forwarding
3. Loop prevention
15. VLAN
15.1 Broadcast domain
- An area where all of the devices receive the same information at the same time. The larger the broadcast domain, the more broadcast traffic there will be.
- Broadcast traffic is always a challenge for all switched (L2) networks.
- The reason is wastage of bandwidth and uncontrolled/unmanaged traffic.
15.2 Why use VLAN?
Now; the question is:
- How to reduce the size of broadcast domain?
- How to reduce broadcasting traffic?
The answer to all of these questions is:
VLANs – virtual local area networks
15.3 What is VLAN?
It is a logical grouping of the same type of:
- Devices
- Traffic, or
- Departments
VLANs are represented by a number which is known as VLAN ID.
Each VLAN is assigned a different/unique ID.
15.4 VLAN id
The VLAN ID is a 12-bit number.
It means there are 4096 possible VLAN IDs (0 – 4095) on a switch.
0 = Reserved, not used
4095 = Reserved, not used
How to create a VLAN?
Switch(config)# vlan 10
Switch(config-vlan)# name HR
Remember:
Single VLAN – Single Network
Same VLAN – Same Network
Different VLAN – Different Network
How to add ports in a vlan?
For a single port:
Switch(config)# interface fa0/1
Switch(config-if)# switchport access vlan 10
For a range of ports:
Switch(config)# interface range fa0/2-5
Switch(config-if-range)# switchport access vlan 10
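The assignment can then be verified with:
Switch# show vlan brief
It lists each VLAN, its name, its status and the access ports assigned to it.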
Lab – default vlan communication in a single switch
Encapsulation/tagging types
- ISL : Inter Switch Link
CISCO proprietary
Obsolete
- DOT1Q
Open source protocol
Popularly used in industry, even by CISCO.
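Before verification, a port is made a trunk with the following minimal sketch (interface name is a placeholder; the encapsulation command is only needed/available on switches that support both ISL and DOT1Q):
Switch(config)# interface gig0/1
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk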
How to check/verify a trunk port
Note that a trunk port will not be shown in the “show vlan” or “show vlan brief” command.
The reason is it is not part of a single VLAN but part of multiple VLANs.
So, use the following commands to verify the configurations of a trunk link:
#show interface trunk
#show interfaces gig0/1 switchport
15.9 Operations on native VLAN
How to change native vlan of a trunk link
Switch(config)# interface gig0/1
Switch(config-if)# switchport trunk native vlan 50
Note: This same command should be run on both side of the link.
15.10 Operations on trunk links
How to permit/deny vlan on a trunk link
Switch(config)# interface gig0/1
Switch(config-if)# switchport trunk allowed vlan ?
WORD VLAN IDs of the allowed VLANs
add add VLANs to the current list
all all VLANs
except all VLANs except the following
none no VLANs
remove remove VLANs from the current list
#show interface trunk
16. VTP
16.1 Introduction
What VTP basically does is copy the VLAN database from one switch (the switch with the highest configuration revision number – will be discussed in coming slides) and paste it across all other switches in the network.
16.2 VTP conditions
There are 2 conditions that need to be satisfied to configure VTP:
1. All switches must be part of the same VTP domain
Switch1(config)# vtp domain cisco
Switch2(config)# vtp domain cisco
Switch3(config)# vtp domain cisco
2. All links between switches must be trunk links
#show vtp status
16.3 VTP configurations
Vtp mode server
1. Server mode
• VLANs can be created, deleted and modified in this mode.
• They synchronize their VLAN database with the switch of the highest configuration revision number.
SW1(config)# vtp mode server
Vtp mode client
2. Client mode
• VLANs cannot be created, deleted or modified in this mode.
• They synchronize their VLAN database with the switch of highest configuration
revision number.
SW1(config)# vtp mode client
Vtp mode transparent
3. Transparent mode
• VLANs can be created, deleted and modified in this mode but VLAN database of this
switch is not shared with any other switch in the network.
• It is only local to the switch on which it is configured.
• It does not synchronize its VLAN database with other switches, but it does pass (forward) VTP advertisements to other switches connected to it through trunk links.
SW1(config)# vtp mode transparent
17. Inter-VLAN routing
17.2 Inter-VLAN using Router on a stick
The reason it is called "Router on a Stick" is that a single router, connected over a single trunk link, is used to inter-connect multiple VLANs.
It looks as if the router is standing on a stick.
How to configure inter vlan routing using “router on a stick” method?
SW1 configurations:
- Configure respective VLANs like 10, 20 and all.
vlan 10
name HR
vlan 20
name Sales
exit
interface fa0/1
switchport access vlan 10
interface fa0/2
switchport access vlan 20
exit
interface gig0/1
switchport mode trunk
exit
Router Configurations:
- No shut the interface connected to the switch.
- Create multiple sub-interfaces in the router’s physical interface, one per VLAN.
interface gig0/0
no shutdown
exit
interface gig0/0.1
encapsulation dot1q 10
ip address 192.168.10.1 255.255.255.0
interface gig0/0.2
encapsulation dot1q 20
ip address 192.168.20.1 255.255.255.0
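To verify this router-on-a-stick setup (a minimal sketch based on the addressing above; exact output varies by platform), check the sub-interfaces on the router and then test from a PC in VLAN 10 by pinging its own gateway and the gateway of the other VLAN:
Router# show ip interface brief
Router# show vlans
PC-in-VLAN10> ping 192.168.10.1
PC-in-VLAN10> ping 192.168.20.1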
17.3 SVI – switched virtual interface
How to configure inter-VLAN routing using the SVI (switched virtual interface) method?
L2 Switch Configuration:
- Configure respective VLANs like 10, 20 and all.
- Add desired ports into respective VLANs.
- Configure the link between L2 switch and the L3 switch as a trunk link.
vlan 10
name HR
vlan 20
name Sales
exit
interface fa0/1
switchport access vlan 10
interface fa0/2
switchport access vlan 20
exit
interface gig0/1
switchport mode trunk
exit
L3 Switch Configuration:
- Configure desired VLANs like 10, 20 and all.
- Assign ip address on each VLAN of different network using VLANs as interfaces.
- These ip addresses will now act as the default gateways for the devices in those
VLANs.
- Make the link between L3 switch and L2 switch as trunk link.
- Run the following command to make the L3 switch work as a Router:
ip routing
vlan 10
name HR
vlan 20
name Sales
exit
interface vlan 10
ip address 192.168.10.1 255.255.255.0
interface vlan 20
ip address 192.168.20.1 255.255.255.0
exit
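To verify the SVI setup (a minimal sketch based on the addresses above; it assumes "ip routing" has been enabled and that each VLAN has at least one active access port or an allowing trunk so that the SVIs come up):
Switch# show ip interface brief | include Vlan
Switch# show ip route connected
Switch# show interfaces trunk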
Try this one!
18. STP
18.1 STP & loop conditions
It is a layer 2 protocol which was designed to prevent Switching Loops/broadcast Storm.
It is enabled by default in CISCO switches.
IEEE standard – 802.1d
18.1.1 What is a Loop in networking?
It is a phenomenon in which a frame neither arrives at its destination nor gets dropped along the path. It keeps on travelling in the network until it consumes all network resources and brings the network down.
How does a loop occur in switched networks?
When switches are connected to each other in a linear way, there is no concern of loop formation, because a frame has only one path from source to destination in the switched network.
But when switches are connected to each other in a ring so that we have backup/redundant links (as we know, redundancy is a must from a network design point of view and a prime factor to be considered while designing a network), multiple paths exist from source to destination.
This creates a loop in the switched network.
18.1.2 Loop conditions
- Switches connected in ring for redundancy
- Bad cabling (a cable in the 2 ports of a same switch)
18.2 Function of STP
In case there are multiple links to the same destination, STP blocks some ports based on the network topology, so that no more than one active path exists to the destination.
It blocks just enough ports so that:
- There is no loop, and
- There is no loss of connectivity
18.2.1 Why use STP?
As we know, a switch takes some time to bring its ports up. This phenomenon or behavior of
switch is defined by the term “STP convergence.”
It means STP takes a certain amount of time to understand the complete network topology
and to decide which ports to be put in blocking and which ports to be put in forwarding
state.
This is known as “STP Convergence Time.”
This time is typically from 30 to 50 seconds, depending upon the values of STP timers. (Will
discuss later)
18.2.3 Working of STP
STP works by selecting one of the switches as the main switch, which is known as the "Root Bridge". As a result, all of the frames in the network flow only along the paths that lead toward the "Root Bridge", and all redundant/backup paths are put into the blocking state.
So, there will be only one path from source to destination.
Hence, there will be no loop. Simple, isn't it?
18.2.4 Broadcast storm
To see the effect of a broadcast storm in the lab, disable the spanning-tree protocol on all 3 switches by running the following command on each of them:
SW1(config)# no spanning-tree vlan 1
STP is an automatic process; it is enabled by default and keeps performing its function continuously, so we do not need to do anything to turn it on.
Just as routing protocols use hello packets for their automatic route-learning process, STP uses BPDUs as its hello packets to send and receive information about the "Root Bridge" election, port role decisions, and so on.
So, BPDUs are the hello packets of STP, and they are sent every 2 seconds. The Root Bridge is elected by the exchange of BPDUs.
Bridge-ID: (B-ID)
It is the combination of: Priority + Mac-Address
It is an 8 byte value: 2 Byte Priority + 6 Byte Mac address
Root-ID: (R-ID)
It is the Bridge-ID of Root Bridge.
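A small worked example (hypothetical switches, both left at the default priority of 32768): SW1 has MAC address 0019.aaaa.aaaa and SW2 has MAC address 0019.bbbb.bbbb. The priorities tie, so the MAC addresses break the tie, and SW1 becomes the Root Bridge because the lowest Bridge-ID wins.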
Lab Verification
#show spanning-tree
After changing the Hello to list its own BID as the sender's BID and listing its own root cost, the switch forwards the Hello out all designated ports.
Step 3. Steps 1 and 2 repeat until something changes.
Each switch relies on these periodically received Hellos from the root as a way to know that
its path to the root is still working.
When a switch fails to receive a Hello, it knows a problem might be occurring in the
network.
When a switch ceases to receive the Hellos, or receives a Hello that lists different details,
something has failed, so the switch reacts and starts the process of changing the spanning-
tree topology.
- The cost is the sum of the costs of all the switch ports the frame would exit if it
flowed over that path.
- The switches also look at their neighbor’s root cost, as announced in Hello BPDUs
received from each neighbor.
Note that: The default cost values of links are based on the operating speed of the link, not
the maximum speed.
For example: If a 10/100/1000 port runs at 10 Mbps for some reason, its default STP cost on
a Cisco switch is 100, the default cost for an interface running at 10 Mbps.
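A small worked example (hypothetical topology, using the classic default costs of 4 for 1 Gbps and 19 for 100 Mbps): if SW3 can reach the root over a Gigabit link to SW2 and then a FastEthernet link from SW2 to the root, that path's root cost is 4 + 19 = 23. A second path made of two FastEthernet links would cost 19 + 19 = 38, so the first path is preferred.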
STP cost – in case of tie breaker
Switches need a tiebreaker to use in case the best root cost ties for two or more paths.
If a tie occurs, the switch applies these three tiebreakers to the paths that tie, in order, as
follows:
1. Choose based on the Lowest neighbor Bridge ID
2. Choose based on the Lowest neighbor Port Priority
3. Choose based on the Lowest neighbor internal Port Number
NOTE: Two additional tiebreakers are needed in some cases, (although these would be
unlikely today).
A single switch can connect two or more interfaces to the same collision domain by
connecting to a hub.
So, if a switch ties with itself, two additional tiebreakers are used:
- The lowest interface STP/RSTP priority and, if that ties,
- The lowest internal interface number.
STP port type selection – blocking/alternate (ALTN) port
3. Blocking Port – elected on the basis of the higher MAC address on a link.
As we have studied, all redundant/backup links are put into the blocking state by STP. This is because one side of each backup link is elected as the blocking port. When one of the main/functioning links goes down, the blocking port comes up and takes over responsibility for data transfer.
18.5 STP states
An STP port can be in one of the following states (four STP states, plus the Disabled state):

State         Forwards data frames?   Learns MAC addresses?   Transitory or Stable
Blocking      No                      No                      Stable
Listening     No                      No                      Transitory
Learning      No                      Yes                     Transitory
Forwarding    Yes                     Yes                     Stable
Disabled      No                      No                      Stable
As we have discussed earlier, STP timers are the reason why a switch takes 30 to 50 seconds
to bring its links up.
Depending upon the values of these timers, we can say that STP converges typically in the
time from 30 to 50 seconds.
- Listening to learning: 15 seconds
- Learning to forwarding: 15 seconds
In addition, a switch might have to wait MaxAge seconds (default 20 seconds) before even
choosing to move an interface from blocking to forwarding state.
Max age (if no hello from neighbor switch): 20 seconds
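Putting the default timers together: in the worst case a switch waits MaxAge (20 s) + Listening (15 s) + Learning (15 s) = 50 seconds; if the failure is detected directly (the link physically goes down, so there is no MaxAge wait), the wait is 15 + 15 = 30 seconds. This is where the "30 to 50 seconds" figure comes from.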
18.7 How to manually make a switch the "Root Bridge"?
It can be done by lowering the priority of the switch for a particular VLAN, using the following command:
Switch1(config)# spanning-tree vlan 1 priority 4096
Note: The priority of a switch must be a multiple of 4096 (0 to 61440); the lower the priority, the more likely the switch is to become the Root Bridge.
18.8 Root Bridge: Primary, Secondary
By using this concept, one switch is made as “Root Bridge” and another as “Backup Root
Bridge”.
To make primary Root Bridge:
Switch1(config)# spanning-tree vlan 1 root primary
To make secondary Root Bridge:
Switch2(config)# spanning-tree vlan 1 root secondary
18.9 PVSTP
The original Spanning Tree Protocol (802.1D) is quite outdated by today's standards and only ran a single spanning-tree instance, regardless of VLANs. Cisco saw the need for Spanning Tree on all VLANs and created the proprietary PVST and PVST+ protocols, which run a separate spanning-tree instance per VLAN. So in this case, every single VLAN on each switch has its own STP process running to detect and eliminate loops in a Layer 2 switched network.
So, in Cisco switches, the "Root Bridge" can be elected on a per-VLAN basis.
How to configure pvst?
Switch1(config)# spanning-tree mode pvst
Switch2(config)# spanning-tree mode pvst
Switch3(config)# spanning-tree mode pvst
To make Switch3 as Root Bridge for VLAN10:
Switch3(config)# spanning-tree vlan 10 priority 0
To make Switch2 as Root Bridge for VLAN20:
Switch2(config)# spanning-tree vlan 20 priority 0
18.9.1 Loop prevention v/s slow convergence – STP advanced features
• STP, as we know it, keeps the network loop free – but at what cost?
• The exact cost to you and me is up to 50 seconds! That is a long time in networking terms.
For almost a minute, data cannot flow across the network. In most cases this is a critical issue, especially for important network services. (Some services time out within this period.)
To deal with this issue, Cisco added the following features to STP implementation on its
switches:
- PortFast, BPDUGuard and BPDUFilter
- UplinkFast, BackboneFast etc.
18.10 Portfast
A switchport that connects to a laptop or a server can never create a switching loop, so there is no need for it to wait through the STP listening and learning states.
When you configure a switchport as PortFast, STP will be disabled on that port: it transitions to the forwarding state as soon as it comes up and is never blocked.
Switch(config)# interface fa0/1
Switch(config-if)#spanning-tree portfast
18.11 BPDU guard
As we learned, PortFast disables STP on a switchport, but an important fact is that a PortFast switchport will keep listening for BPDUs. If someone adds a switch to a port that has been configured as PortFast, the consequences will be unpredictable and in some cases disastrous.
To guard against this situation, Cisco provides the BPDUGuard and BPDUFilter features.
If a switch is plugged into a switchport configured as Portfast, it could change the STP
topology without the administrator knowing and could even bring down the network.
To prevent this, BPDUGuard can be configured on the switchport.
BPDU Guard feature protects the port from receiving STP BPDUs, however the port can
transmit STP BPDUs. When an STP BPDU is received on a BPDU Guard enabled port, the port
is shutdown and the state of the port changes to ErrDis (Error-Disable) state and an
administrator will have to bring the port up.
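A minimal configuration sketch (the interface number is a placeholder; the global command applies BPDU Guard to every PortFast-enabled port, while the interface command applies it to a single port):
Per interface:
Switch(config)# interface fa0/1
Switch(config-if)# spanning-tree portfast
Switch(config-if)# spanning-tree bpduguard enable
Globally, for all PortFast ports:
Switch(config)# spanning-tree portfast bpduguard default
A port shut down by BPDU Guard stays in the err-disabled state until an administrator does a "shutdown"/"no shutdown" on it (or error-disable recovery is configured).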
In modern networks, this Spanning Tree Protocol (STP) convergence time gap is not acceptable.
Cisco enhanced the original Spanning Tree Protocol (STP) IEEE 802.1D specification with features such as PortFast, UplinkFast and BackboneFast to speed up the Spanning Tree Protocol (STP) convergence time, but these were proprietary enhancements.
The Rapid Spanning Tree Protocol (RSTP) IEEE standard is available to address the Spanning
Tree Protocol (STP) convergence time gap issue.
Rapid Spanning Tree Protocol (RSTP) enables STP Root Ports and STP Designated Ports to
change from the blocking to forwarding port state in a few seconds.
In order to speed things up:
Rapid STP: NO Listening, NO Blocking,
Only 3 States: Discarding, Learning and Forwarding
Then delay will become = 3 + 3 = 6 Seconds
18.15.1 How to configure rstp?
Switch(config)# spanning-tree mode rapid-pvst
802.1w was actually an amendment to the 802.1D standard. The IEEE first published 802.1D
(STP) in 1990, and anew in 1998. After the 1998 version of 802.1D, the IEEE published the
802.1w (RSTP) amendment to 802.1D in 2001, which first standardized RSTP. IEEE replaced
STP with RSTP in the revised 802.1D standard in 2004. In another move, in 2011 the IEEE
moved all the RSTP details into a revised 802.1Q standard. As of today, RSTP actually lies in
the 802.1Q standards document. Many people refer to RSTP as 802.1w because that was
the first IEEE document to define it. They are right based on timing and context. However,
we are focusing on the concepts of STP and RSTP rather than the IEEE standard numbers.
STP & RSTP differences
In STP, the root switch creates the Hello, with all other switches updating and forwarding that Hello. With RSTP, each switch independently generates its own Hellos. Additionally, RSTP
allows for queries between neighbors, rather than waiting on timers to expire, as a means
to avoid waiting to learn information. So, RSTP lowers waiting times for cases in which RSTP
must wait for a timer.
19.2 RSTP timers
STP requires a switch to wait for MaxAge seconds, which STP defines based on 10 times the
Hello timer, or 20 seconds, by default. RSTP shortens this timer, defining MaxAge as three
times the Hello timer. Additionally, RSTP can send messages to the neighboring switch to
inquire whether a problem has occurred rather than wait for timers. The best way to get a
sense for these mechanisms is to see how the RSTP alternate port and the backup port both
work. RSTP uses the term Alternate Port to refer to a switch’s other ports that could be used
as the Root Port in the case of root port failure. The Backup Port concept provides a backup
port on the local switch for a Designated Port. Note that: Backup ports apply only to designs
that use hubs, so they are unlikely to be useful today.
19.3 RSTP port roles
Root Port – the port that begins a non-root switch's best path to the root bridge.
Alternate Port – the port that replaces the root port when the root port fails.
Backup Port – the port that replaces a designated port when a designated port fails.
STP waits for a time (forward delay) in both listening and learning states. The reason for this
delay in STP is that, at the same time, the switches have all been told to time out their MAC
table entries. When the topology changes, the existing MAC table entries may actually cause
a loop.
With STP, the switches all tell each other (with BPDU messages) that the topology has
changed and to time out any MAC table entries using the forward delay timer. This removes
the entries, which is good, but it causes the need to wait in both listening and learning state
for forward delay time (default 15 seconds each).
RSTP, to converge more quickly, avoids relying on timers. RSTP switches tell each other
(using messages) that the topology has changed. Those messages also direct neighboring
switches to flush the contents of their MAC tables in a way that removes all the potentially
loop-causing entries, without a wait.
As a result, in RSTP, a port can immediately transition to a forwarding state, without waiting,
and without using the learning state.
RSTP backup port
RSTP backup port role creates a way for RSTP to quickly replace a switch’s designated port.
The need for the backup port role only happens in designs that are a little unlikely today.
The reason is that a design must use hubs, which then allows the possibility that one switch
connects more than one port to the same collision domain.
With a backup port, if the current designated port fails, SW4 can start using the backup port
with rapid convergence.
With each pair of Ethernet links configured as an EtherChannel, STP treats each
EtherChannel as a single link.
In other words, both links to the same switch must fail for a switch to need to cause STP
convergence.
Without EtherChannel, if you have multiple parallel links between two switches, STP blocks
all the links except one.
With EtherChannel, all the parallel links can be up and working at the same time, while
reducing the number of times STP must converge, which in turn makes the network more
available.
19.4.2 RSTP & portfast
PortFast allows a switch to immediately transition from blocking to forwarding, bypassing
listening and learning states.
However, the only ports on which you can safely enable PortFast are ports on which you
know that no bridges, switches, or other STP-speaking devices are connected.
Otherwise, using PortFast risks creating loops, the very thing that the listening and learning
states are intended to avoid.
PortFast is most appropriate for connections to end-user devices.
19.4.3 RSTP & bpduguard
STP and RSTP open up the LAN to several different types of possible security exposures. For
example:
1. An attacker could connect a switch to one of these ports, one with a low STP/RSTP
priority value, and become the root switch.
The new STP/RSTP topology could have worse performance than the desired topology.
2. The attacker could plug into multiple ports, into multiple switches, become root, and
actually forward much of the traffic in the LAN.
Without the networking staff realizing it, the attacker could use a LAN analyzer to copy large
numbers of data frames sent through the LAN.
3. Users could innocently harm the LAN when they buy and connect an inexpensive
consumer LAN switch (one that does not use STP/RSTP).
Such a switch, without any STP/RSTP function, would not choose to block any ports and
could cause a loop.
The Cisco BPDU Guard feature helps defeat these kinds of problems by disabling a port if
any BPDUs are received on the port.
So, this feature is particularly useful on ports that should be used only as an access port and
never connected to another switch.
In addition, the BPDU Guard feature helps prevent problems with PortFast.
PortFast should be enabled only on access ports that connect to user devices, not to other
LAN switches.
Using BPDU Guard on these same ports makes sense because if another switch connects to
such a port, the local switch can disable the port before a loop is created.
19.6 Introduction to MSTP
• IEEE standard – 802.1s
• Multiple VLANs can have a common “Root Bridge”.
• Originally designed for vendors having lesser hardware capabilities than CISCO.
19.7 How to detect switching loops?
The network will be up for a while, then it will slowly start to slow down, and finally it will go down.
This will keep happening until the problem is resolved.
Question: What is the fastest way to remove a switching loop?
Answer: Unplug all the devices.
Using LLDP
- LLDP (Link Layer Discovery Protocol) is also a Layer 2 neighbor discovery protocol for the same task as CDP, but it is an open standard (IEEE 802.1AB).
- It is useful in multi-vendor networks with devices from vendors other than Cisco, such as Juniper and Huawei.
20.4 LLDP configuration
Enable LLDP globally:
Switch(config)# lldp run
Disable LLDP globally:
Switch(config)# no lldp run
Enable LLDP on an interface (LLDP is controlled per direction with the transmit and receive keywords):
Switch(config)# interface fa0/1
Switch(config-if)# lldp transmit
Switch(config-if)# lldp receive
Disable LLDP on an interface:
Switch(config)# interface fa0/1
Switch(config-if)# no lldp transmit
Switch(config-if)# no lldp receive
Remember:
CDP and LLDP are very useful in troubleshooting of network issues but they are also a threat
to the network integrity.
If taken advantage of by the attackers, these protocols can be used to steal important
network information which can be further used for privilege escalation.
So, some security experts advise to turn them off and use some other methods for the same
purpose.
LLDP verification
Switch# show lldp
Switch# show lldp neighbors
Switch# show lldp neighbors detail
What if the bandwidth of a single interface is not enough? EtherChannel can aggregate/bundle multiple interfaces into one new logical interface. EtherChannel is a port link aggregation technology (port-channel architecture) used primarily on Cisco switches. It allows grouping of several physical Ethernet links to create one logical Ethernet link for the purpose of:
- Providing fault tolerance, and
- High-speed links between devices in the network.
21.2 What is LACP?
LACP (Link Aggregation Control Protocol) forms the EtherChannel by negotiation between the two devices, using LACP messages and device roles.
LACP has 2 modes: Active and Passive. Watch out for both devices – at least one of them must be ACTIVE for the channel to form.
EtherChannel can be built on both Layer 2 (switch) and Layer 3 (router) interfaces. In the L3 router example below, the channel is configured statically, without LACP negotiation or device roles.
21.3 Switch LACP configuration
Switch1(config)# interface range fa0/1-2
Switch1(config-if-range)#channel-group 1 mode ?
active Enable LACP unconditionally
auto Enable PAgP only if a PAgP device is detected
desirable Enable PAgP unconditionally
on Enable Ether-channel only
passive Enable LACP only if a LACP device is detected
How to verify lacp on a switch?
Switch1# show spanning-tree
Switch1# show etherchannel port-channel
Switch1# show etherchannel summary
21.4 Router LACP configuration
Router1(config)# interface port-channel 1
Router1(config)# interface range gigabitEthernet 0/0/0-1
Router1(config-if-range)# channel-group 1
How to verify lacp on a router?
Router1# show interfaces port-channel 1
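Because this is a routed (Layer 3) port-channel, the bundle itself also needs an IP address before it can carry traffic. A minimal sketch with a hypothetical address:
Router1(config)# interface port-channel 1
Router1(config-if)# ip address 10.1.1.1 255.255.255.0
Router1(config-if)# no shutdown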
LANs typically connect nearby devices: devices in the same room, in the same building, or in
a campus of buildings.
LANs can be classified on the following two bases:
1. Technology based LANs, and
- Ethernet LANs, and
- Wireless LANs.
2. Scale/Size based LANs
- SOHO LANs, and
- Enterprise LANs.
22.2 Ethernet LANs
It is a combination of user devices, LAN switches, and different kinds of cabling. Each link
can use different types of cables, at different speeds.
However, they all work together to deliver Ethernet frames from the one device on the LAN
to some other device.
• Ethernet LANs happen to use cables for the links between nodes, and because many
types of cables use copper wires, Ethernet LANs are often called wired LANs.
• Ethernet LANs also make use of fiber-optic cabling, which includes a fiberglass core
that devices use to send data using light.
In comparison to Ethernet, wireless LANs do not use wires or cables, instead using radio
waves for the links between nodes.
22.3 Ethernet links
The term Ethernet link refers to any physical cable between two Ethernet nodes:
- The cable itself,
- The connectors on the ends of the cable, and
- The matching ports on the devices into which the connectors will be inserted.
The cable holds some copper wires, grouped as twisted pairs, to reduce crosstalk.
Crosstalk – EMI (electromagnetic interference) between the wire pairs inside the same cable is called crosstalk.
The 10BASE-T and 100BASE-T standards require two pairs of wires (one for each direction).
But the 1000BASE-T standard requires four pairs (to allow both ends to transmit and receive
simultaneously on each wire pair).
22.4 Cabling architecture
To understand the wiring of the cable—which wires need to be in which pin positions on
both ends of the cable—you need to first understand how the NICs and switches work.
- As a rule, Ethernet NIC transmitters use the pair connected to pins 1 and 2; the NIC
receivers use a pair of wires at pin positions 3 and 6.
- LAN switches, knowing those facts about what Ethernet NICs do, do the opposite:
Their receivers use the wire pair at pins 1 and 2, and their transmitters use the wire
pair at pins 3 and 6.
22.5 Ethernet family of standards
• The term Ethernet refers to a family of LAN standards that together define the
physical and data-link layers of the world’s most popular wired LAN technology.
• The standards, defined by the Institute of Electrical and Electronics Engineers (IEEE),
define the cabling, the connectors on the ends of the cables, the protocol rules, and
everything else required to create an Ethernet LAN.
• One of the most significant strengths of the Ethernet family of protocols is that these
protocols use the same data-link standard.
www.EthernetAlliance.org – to check all the latest developments of Ethernet.
The term Ethernet refers to an entire family of standards.
Some standards define the specifics of how to send data over a particular type of cabling,
and at a particular speed.
Other standards define protocols, or rules, that the Ethernet nodes must follow to be a part
of an Ethernet LAN.
All these Ethernet standards come from the IEEE and include the number 802.3 as the
beginning part of the standard name.
Although Ethernet includes many physical layer standards, Ethernet acts like a single LAN
technology because it uses the same data-link layer standard over all types of Ethernet
physical links.
That standard defines a common Ethernet header and trailer. (As a reminder, the header
and trailer are bytes of overhead data that Ethernet uses to do its job of sending data over a
LAN.)
No matter whether the data flows over a UTP cable or any kind of fiber cable, and no matter
the speed, the data-link header and trailer use the same format.
Ethernet standards are usually summarized in a table listing: Speed, Common name, Informal IEEE standard name, Formal IEEE standard name, and Cable type & Maximum length.
For example: 10GBASE-LR – single-mode fiber – maximum length 10 km.
An Ethernet header at the front, the encapsulated data in the middle, and an Ethernet
trailer at the end.
Following is the commonly used frame structure:
Padding – the process of adding extra bytes to bring the data field up to its minimum size (46 bytes) if it is smaller than that.
Maximum Transmission Unit (MTU) – Size of the maximum Layer 3 packet that can be sent
over a medium.
Because the Layer 3 packet rests inside the data portion of an Ethernet frame, 1500 bytes is
the largest IP MTU allowed over an Ethernet.
Note: Errors in a frame are checked at the receiving side, not at the sending side.
It means that, currently, the first 48 bits of a global unicast IPv6 address are used to identify the network globally, and the next 16 bits are used for subnetting (which makes 48 + 16 = 64 bits, the network part).
The remaining 64 bits are used for identifying hosts (the host part).
Global Unicast IPv6 Addresses range
Since the leftmost three bits are fixed as "001" for Global Unicast IPv6 addresses, the range of Global Unicast Addresses available now is from 2000 to 3FFF (2000::/3).
Unique Local IPv6 addresses can be viewed as globally unique "private routable" IPv6
addresses, which are typically used inside an organization.
A range of FC00::/7 means that Unique Local IPv6 addresses begin with the 7-bit binary pattern 1111 110 (the eighth bit, shown as L, can be 0 or 1).
So, we can have two Unique Local IPv6 Unicast Address prefixes:
1111 1100 (FC in hexadecimal) and 1111 1101 (FD in hexadecimal)
23.5.3 Link Local IPv6 Addresses
Allow communications between devices on a local link.
They start with FE80::/10
IPv6 addresses with prefixes FC00::/7 and FD00::/8 – Unique Local IPv6 addresses
IPv6 addresses with prefix FF00::/8 – Multicast IPv6 addresses
IPv6 addresses with prefixes 2001:0DB8::/32 and 3FFF:FFFF::/32 – Reserved for documentation
IPv6 uses the Router Solicitation (RS) & Router Advertisement (RA) messages to learn the
IPv6 Network Prefix, IPv6 Prefix Length, default router IPv6 address from network routers.
After obtaining the IPv6 Network Prefix, IPv6 Prefix Length, default router IPv6 address from
network routers, IPv6 network interfaces can automatically derive a Global Unicast IPv6
Address using EUI-64 method.
IPv6 can use Stateless DHCPv6 to learn the DNS Server IPv6 addresses.
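On a Cisco router interface, SLAAC can be enabled as follows (a minimal sketch; the interface number is a placeholder, and it assumes another router on the link is sending RAs):
R2(config)# interface fa 0/0
R2(config-if)# ipv6 enable
R2(config-if)# ipv6 address autoconfig
R2(config-if)# no shutdown
R2# show ipv6 interface brief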
23.6.2 Static IPv6 Address Configuration
There are two methods for Static IPv6 Global Unicast Address Configuration:
1. You can type-in the entire 128-bit IPv6 address for the network interface.
2. You can configure 64 bit IPv6 Global Unicast Address network prefix & then use EUI-64
method to derive the remaining 64 host part bits.
How to configure Static Global Unicast IPv6 Address?
R1(config)# interface fa 0/0
R1(config-if)# ipv6 address 2001:db8:aaaa:1::1/64
R1(config-if)# no shutdown
R1# Show ipv6 interface brief
EUI-64 based Global Unicast IPv6 address
• The EUI-64 method of generating a Global Unicast IPv6 Address takes the 6-byte (48-bit) interface MAC address and expands it into a 64-bit interface part (the host part).
• To make the interface part 64 bits, IPv6 inserts 2 bytes (16 bits) into the middle of the MAC address.
The 48-bit MAC address is divided into two 3-byte parts, and the binary value 1111111111111110 (FFFE in hexadecimal) is inserted between them to make the complete 64 bits.
• Also the 7th bit (from left) in the MAC address is flipped. Which means, if the 7th bit
in the MAC address (from left) is 1, change it to 0 or if the 7th bit (from left) in the
MAC address is 0, change it to 1.
• The 7th bit (from left) in the MAC address is called the Universal/Local (U/L) bit.
The Universal/Local (U/L) bit is used to indicate whether the address is universally assigned or locally assigned.
A U/L bit of 0 means the MAC address is an IEEE (universally) assigned address.
A U/L bit of 1 means the MAC address is a locally assigned address.
How to configure EUI-64 based global Unicast IPv6 Address?
R1(config)# interface fa 0/0
R1(config-if)# ipv6 address 2001:db8:aaaa:1::/64 eui-64
R1(config-if)# no shutdown
R1# Show ipv6 interface brief
FastEthernet0/0
FE80::C800:CFF:FEF0:8
2001:DB8:AAAA:1:C800:CFF:FEF0:8
The MAC address of the interface is "ca00.0cf0.0008".
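Working through this example: the MAC ca00.0cf0.0008 is split into CA-00-0C and F0-00-08, FFFE is inserted in the middle to give CA00:0CFF:FEF0:0008, and then the 7th bit of the first byte is flipped (CA = 1100 1010 becomes 1100 1000 = C8). The resulting interface ID is C800:0CFF:FEF0:0008, which is why both addresses above end in C800:CFF:FEF0:8.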
IPv4 v/s IPv6
- IPv4 addresses are 32 bits in length; IPv6 addresses are 128 bits in length.
- IPv4 addresses are binary numbers represented in decimal; IPv6 addresses are binary numbers represented in hexadecimal.
- The IPv4 header has an Options field; IPv6 has no Options field, but Extension headers are available.
As per the name, a wireless network means a network without wires. It removes the need
to be connected to a wire or cable.
Wired networks have some shortcomings.
- When a device is connected by a wire, it cannot move around very easily or very far.
- As devices get smaller and more mobile, it just is not practical to connect them to a
wire.
What is wireless about?
Wireless networking is not about having a completely wireless network, but about having a solution for the part of the network where you cannot extend cables.
More than 80% of your network is still wired. Routers are still there, switches are still there.
Some new devices and protocols are added to provide the wireless connectivity.
Which devices does wireless add?
A wireless network adds some extra devices to the infrastructure, like:
- Access Point (AP) and Wireless LAN Controller (WLC)
What wireless offers?
In comparison to a wired network, a wireless network offers:
- Mobility and Convenience
On a shared medium, a device must listen first and transmit only when the medium is free, to avoid colliding with other transmissions already in progress. The side effect – no host can transmit and receive at the same time on a shared medium. A wireless LAN is similar.
IEEE 802.11 WLANs are always half duplex because transmissions between stations use the
same frequency or channel. Only one station can transmit at any time; otherwise, collisions
occur.
Wireless LAN – full duplexed?
To achieve full-duplex mode, one station’s transmission would have to occur on one
frequency while it receives over a different frequency—much like full-duplex Ethernet links
work. Although this is certainly possible and practical, the 802.11 standard does not permit
full-duplex operation.
24.1.1 Common wireless terms
BSS – Basic Service Set
There should be a way to control:
- Which devices are allowed to use the wireless medium, and
- The methods that are used to secure the wireless transmissions.
The solution is to make every wireless service area a closed group of mobile devices that
forms around a fixed device; before a device can participate, it must advertise its
capabilities and then be granted permission to join. The 802.11 standard calls this a BSS –
Basic Service Set.
- At the heart of every BSS is a wireless AP – Access Point.
- The AP also establishes its BSS over a single wireless channel.
- The AP and the members of the BSS must all use the same channel to communicate
properly.
NOTE: Recall that wired Ethernet devices each have a unique MAC address to send frames
from a source to a destination over a Layer 2 network. Wireless devices must also have
unique MAC addresses to send wireless frames at Layer 2 over the air.
Service set identifier – SSID
It is a text string containing a logical name for the network connectivity provided by the AP.
Tip to remember:
BSSID – Machine readable, unique name that identifies the BSS (AP)
SSID – Human Readable, non-unique name that identifies the wireless service.
BSS association
Membership with the BSS is called an association. A wireless device must send an
association request to the AP and the AP must either grant or deny the request.
Once associated, a device becomes a client, or a station (STA), of the BSS. What then?
As we know that a BSS has a single AP and no connection to a regular Ethernet network. In
this way, the AP and its associated clients make up a standalone network. But sooner or
later, wireless clients will need to communicate with other devices that are not members of
the BSS. Fortunately, an AP can also uplink into an Ethernet network because it has both
wireless and wired capabilities. The upstream wired Ethernet network is known as the -
Distribution System (DS) for the wireless BSS, shown in the following figure.
Distribution System can be extended so that multiple VLANs are mapped to multiple SSIDs.
To do this, the AP must be connected to the switch by a trunk link that carries the VLANs.
The AP uses the 802.1Q tagging to map the VLAN numbers to the appropriate SSIDs.
For example:
VLAN 10 – mapped to SSID “Network1”
VLAN 20 – mapped to SSID “Network2” and
VLAN 30 – mapped to SSID “Guest.”
In effect, when an AP uses multiple SSIDs, it is trunking VLANs over the air, and over the
same channel, to wireless clients. The clients must use the appropriate SSID that has been
mapped to the respective VLAN when the AP was configured.
The AP then appears as multiple logical APs – One per BSS – With a unique BSSID for each.
Even though an AP can advertise and support multiple logical wireless networks, each of the
SSIDs covers the same geographic area. The reason is that the AP uses the same transmitter,
receiver, antennas, and channel for every SSID that it supports.
Extended service set – ESS
Normally, one AP cannot cover the entire area where clients might be located. A campus
infrastructure for example. To cover more area than a single AP’s cell can cover, you simply
need to add more APs and spread them out geographically. When APs are placed at
different geographic locations, they can all be interconnected by a switched infrastructure.
The 802.11 standard calls this an extended service set (ESS), as shown in the following
figure.
It is when two or more wireless clients communicate directly with each other, with no
other means of network connectivity. This is known as an ad hoc wireless network, or an
independent basic service set (IBSS).
Repeater
Normally, each AP in a wireless network has a wired connection to the switched network. To
extend wireless coverage beyond a normal AP’s cell, additional APs can be connected to the
switches using Ethernet wires.
But in some scenarios, it is not possible to run a wired connection to a new AP from the
switch because the cable distance is too great to support Ethernet communication.
In that case, you can add an additional AP that is configured for repeater mode. A wireless
repeater takes the signal it receives and repeats or retransmits it in a new cell area around
the repeater.
If the repeater has a single pair of transmitter and receiver, it must operate on the same
channel that the AP is using. Some repeaters can use two pairs of transmitters and receivers
to keep the original and repeated signals isolated on different channels.
One transmitter and receiver pair is dedicated to signals in the AP’s cell, while the other pair
is dedicated to signals in the repeater’s own cell.
The central site bridge is connected to an omnidirectional antenna, such that its signal is
transmitted equally in all directions so that it can reach the other sites simultaneously. The
bridges at each of the other sites can be connected to a uni-directional antenna aimed at
the central site.
Mesh network
To provide wireless coverage over a very large area, it is not always practical to run Ethernet
cabling to every AP that would be needed. Instead, you could use multiple APs configured in
mesh mode.
In a mesh topology, wireless traffic is bridged from AP to AP, in a daisy-chain fashion, using
another wireless channel. (Just like switches) Mesh APs can leverage dual radios – One using
a channel in one range of frequencies and one a different range.
Each mesh AP usually maintains a BSS on one channel, with which wireless clients can
associate. Client traffic is then usually bridged from AP to AP over other channels as a
backhaul network. At the edge of the mesh network, the backhaul traffic is bridged to the
wired LAN infrastructure.
With Cisco APs, you can build a mesh network indoors or outdoors. The mesh network runs
its own dynamic routing protocol to work out the best path for backhaul traffic to take
across the mesh APs.
24.1.3 RF overview
The sender – A transmitter sends an alternating current into a section of wire (an antenna),
which sets up moving electric and magnetic fields that propagate out and away as traveling
waves. The electric and magnetic fields travel along together at right angles. The signal must
keep alternating, by cycling up and down, to keep the electric and magnetic fields pushing
outward.
Note that the electromagnetic waves do not travel in a straight line, instead, they travel by
expanding in all directions away from the antenna. The waves begin small, expand outward
in all three dimensions and are replaced by new waves.
It is similar to throwing a stone into a pool of water: waves start expanding outward, fade after some distance, and are replaced by new waves.
The whole process is reversed at the receiving end of a wireless link. As the electromagnetic
waves reach the receiver’s antenna, they induce an electrical signal. If everything works
right, the received signal will be a copy of the original transmitted signal.
Frequency – the number of times the signal makes one complete up-and-down cycle in 1 second; in other words, the number of cycles per second.
Hertz (Hz) is the most commonly used frequency unit. (Hz, kHz, MHz, GHz, etc.)
1 Hertz = 1 cycle per second.
24.1.8 Wireless bands & channels
Wireless bands:
Search the web for "frequency spectrum" and you will find that a range of frequencies might be used for the same purpose. Such a range of frequencies, taken as a whole, is referred to as a frequency band.
One of the two main frequency ranges used for wireless LAN communication lies between
2.400 and 2.4835 GHz. This is usually called the 2.4-GHz band. The other wireless LAN range
lies between 5.150 and 5.825 GHz and is called the 5-GHz band.
The 5-GHz band actually contains the following four separate and distinct bands:
5.150 to 5.250 GHz; 5.250 to 5.350 GHz
5.470 to 5.725 GHz; 5.725 to 5.825 GHz
Do not worry about memorizing the band names or exact frequency ranges; just be aware
of the two main bands at 2.4 and 5 GHz.
Wireless channels:
To maintain things in order, bands are divided into a number of different channels. Each
channel is represented by a channel number and a specific frequency is assigned to each
channel. As long as the channels are defined by some standards body, they can be used
consistently everywhere.
Wireless Channels overlapping
An AP should use a channel number different than the channel number used by neighbor
APs. Now, each channel should have a range of unique frequencies so that they don’t
overlap each other.
In the 5-GHz band, this is possible. Each channel is allocated a frequency range that does not
overlap the frequencies allocated for any other channel. In other words, the 5-GHz band
consists of non-overlapping channels.
The same is not true of the 2.4-GHz band. Each of its channels is much too wide to avoid
overlapping the next lower or upper channel number.
In fact, each channel covers the frequency range that is allocated to more than four
consecutive channels! The only way to avoid any overlap between adjacent channels is to
configure APs to use only channels 1, 6, and 11.
24.1.9 Wireless generations
Remember that: Wireless devices and APs should all be capable of operating on the same
band. For example, a 5-GHz wireless phone can communicate only with an AP that offers
Wi-Fi service on 5-GHz channels. A device that supports 802.11b/g will support both
802.11b and 802.11g.
As we have studied, a device can operate on both bands, how does it decide which band to
use? APs can operate on both bands simultaneously to support clients that may be present
on each band. However, wireless clients associate with an AP on one band at a time, while
scanning for other APs on both bands. The band used to connect to an AP is chosen
depending upon several factors like operating system and wireless adapter driver etc.
A wireless client can have an association with one AP on one band and then switch to the other band if it finds that the signal conditions are better there. Cisco APs have
dual radios to support BSSs on one 2.4-GHz channel and other BSSs on one 5-GHz channel
simultaneously.
One AP – One Radio – One Channel – One BSS
Some models also have two 5-GHz radios that can be configured to operate BSSs on two different channels at the same time, to provide wireless coverage to a larger number of users in a condensed area. You can configure a Cisco AP to operate on a specific channel number.
But as the number of APs grows, manual channel assignment becomes a difficult task. Cisco
wireless architectures can automatically assign each AP to an appropriate channel.
2.4 GHz v/s 5 GHz
On the 2.4-GHz band, RF signals reach further than on the 5-GHz band and also penetrate
walls and objects easier.
However, the 2.4-GHz band is commonly more crowded with wireless devices, as most devices use the 2.4-GHz band as a default setting.
Remember that only three non-overlapping channels are available, so the chances of other
neighboring APs using the same channels is greater.
5-GHz band has many more channels available to use, making channels less crowded and
experiencing less interference.
An autonomous AP offers a simple path for data to travel between the wireless and wired
networks where data has to travel only through the AP to reach the network on the other
side.
Two wireless users that are associated with the same autonomous AP can reach each other through the AP, without the traffic having to pass up into the wired network. But remember that no two wireless devices communicate directly with each other – their frames always go through the AP.
Because SSIDs and their VLANs must be extended at Layer 2, you should consider how they
are extended throughout the switched network. As the wireless network expands, the
infrastructure becomes more difficult to configure correctly and becomes less efficient.
As the wireless network grows, to manage thousands of autonomous APs altogether, you could use an AP management platform such as Cisco Prime Infrastructure or Cisco DNA Center in the enterprise.
But the thing is that such a management platform would need to be purchased, configured, and maintained. A simpler approach is a cloud-based AP architecture, where the AP management function is not local in the enterprise but placed in the cloud, on the internet.
Cisco Meraki is cloud-based and offers centralized management of wireless, switched, and
security networks built from Meraki products.
Cisco Meraki APs can be deployed automatically, once you register with the Meraki cloud.
Each AP will contact the cloud when it powers up and will self-configure. From that point on,
you can manage the AP through the Meraki cloud dashboard.
Through the cloud networking service, you can:
- Configure APs
- Manage APs
- Monitor your wireless network, and
- Generate reports etc.
Remember that the network is arranged similar to the previous one of the autonomous AP
network.
The reason is: APs in a cloud-based network are all autonomous, too. The difference lies in
the fact that all of the APs are managed, controlled, and monitored centrally from the cloud
location.
It adds the intelligence – to automatically guide each AP on which channel and what
transmit power level to use. It can gather information from all of the APs about – RF
interference and wireless usage statistics.
24.2.3 Split-MAC architecture (Lightweight AP)
To overcome the limitations created by distributed autonomous APs (the configuration, management, and scalability issues described above), most of the functions of an autonomous AP had to be shifted to a central location – the WLC.
The lightweight AP-WLC separation is known as a split-MAC architecture, where the normal
MAC operations are divided into two different operations being managed from different
locations and by different devices.
This occurs for every AP in the network; each one must boot and connect itself to a WLC to
support wireless clients.
The WLC becomes the central hub that supports and coordinates a number of APs spanned
across the wireless network.
The CAPWAP data tunnel is used for packets traveling to and from wireless clients that are associated with the AP. Data packets are transported over the data tunnel but are not encrypted by default.
Note: CAPWAP – Is based on the Lightweight Access Point Protocol (LWAPP) – a legacy Cisco
proprietary protocol.
The tunnel exists between the IP address of the WLC and the IP address of the AP, which
allows all of the tunneled packets to be routed at Layer 3. The traffic to and from clients
associated with SSID 100 is transported across the network infrastructure encapsulated
inside the CAPWAP data tunnel.
Now the AP has only a single IP address (10.10.10.10 in this example) and can use that one IP address for both management and tunneling. Also remember that no trunk link is needed because
all of the VLANs it supports are encapsulated and tunneled as Layer 3 IP packets, not as
Layer 2 frames.
Each AP has a control and a data tunnel back to the centralized WLC. Like:
AP1 to WLC – 1st CAPWAP tunnel
AP2 to WLC – 2nd CAPWAP tunnel
AP3 to WLC – 3rd CAPWAP tunnel, and so on.
As the wireless network grows, the WLC simply builds more CAPWAP tunnels to reach more
APs. SSID 100 can exist on every AP, and VLAN 100 can reach every AP through the network
of tunnels.
24.2.4 WLC functions
Following are some of the important functions of a Cisco Wireless LAN Controller:
Dynamic channel assignment
Transmit power optimization
Self-healing wireless coverage
Dynamic client load balancing
Security management
Dynamic channel assignment:
Based on other active access points in the area, WLC can automatically choose and
configure the RF channel used by each AP.
Transmit power optimization:
The WLC can automatically set the transmit power of each AP based on the coverage area
needed.
Self-healing wireless coverage:
If the radio of one AP dies, the coverage gap can be cured by turning up the transmit power
of surrounding APs automatically.
Local:
This mode is the default mode on all the Cisco APs. If not configured in any other mode, an
AP will be configured in this mode. It offers one or more functioning BSSs on a specific
channel.
FlexConnect:
An AP at a remote site can locally switch traffic between an SSID and a VLAN if its CAPWAP
tunnel to the WLC is down and if it is configured to do so.
Rogue detector:
An AP dedicates itself to detecting rogue devices by correlating MAC addresses heard on the
wired network with those heard over the air. Rogue devices are those that appear on both
networks.
Bridge:
An AP becomes a dedicated bridge (point-to-point or point-to-multipoint) between two
networks. Two APs in bridge mode can be used to link two locations separated by a
distance. Multiple APs in bridge mode can form an indoor or outdoor mesh network.
Note:
Remember that a lightweight AP is normally in local mode when it is providing BSSs and
allowing client devices to associate to wireless LANs. When an AP is configured to operate in
one of the other modes, local mode (and the BSSs) is disabled. Other modes will be
activated (according to configuration) along with the ESS.
You can connect a serial console cable from your PC to the console port on the AP to
configure and manage Cisco APs. Once the AP is operational and has an IP address, you can
also use Telnet or SSH to connect to its CLI over the wired network.
Autonomous APs support browser-based management sessions via HTTP and HTTPS.
Lightweight APs can also be managed from a browser session on the WLC.
Connecting and configuring a WLC:
To connect and configure a WLC, you will need to open a web browser to the WLC’s
management address. This can be done only after the WLC has an initial configuration and a
management IP address assigned to its management interface.
The web-based GUI provides an effective way to monitor, configure, and troubleshoot a
wireless network. You can also connect to a WLC with an SSH session, where you can use its
CLI to monitor, configure, and debug activity.
But Cisco expects you to configure the WLC by using GUI.
24.3.2 Accessing a Cisco WLC
When you are logged in, the WLC will display a monitoring dashboard. Click on the
“Advanced” link in the upper-right corner to make further configurations.
This will bring up the full WLC GUI.
Controllers have multiple distribution system ports that you must connect to the network.
These ports can operate independently, each one transporting multiple VLANs to a unique
group of internal controller interfaces.
The CAPWAP tunnels also pass through the distribution system ports which extend to a
controller’s APs. Client data also passes from wireless LANs to wired VLANs over the ports.
In-band management traffic using a web browser, SSH, Simple Network Management
Protocol (SNMP), Trivial File Transfer Protocol (TFTP), etc. reaches the controller using these
ports.
Distribution system ports can be configured in redundant pairs. One port is primarily used; if
it fails, a backup port is used.
Remember that even though the LAG (link aggregation) acts as a traditional EtherChannel,
Cisco WLCs do not support any link aggregation negotiation protocol, like LACP or PAgP.
Therefore, you must configure the switch ports as an unconditional (always-on) EtherChannel.
Connecting a Cisco WLC – service port
Controllers can have a single service port that must be connected to a switched network.
The service port is assigned to a management VLAN so that you can access the controller
with SSH or a web browser to perform initial configuration or for maintenance.
Remember that the service port supports only a single VLAN, so the corresponding switch
port must be configured for access mode only.
24.3.5 Using WLC interfaces
As we know that a controller can connect to multiple VLANs on the switched network using
distribution system ports. Now, the controller must somehow map those external wired
VLANs to equivalent internal logical wireless networks. It means WLC should know that how
many and which VLANs are there in the network that it has to provide the wireless
connectivity to.
For example, VLAN 20 is set aside for wireless users in the “Training” division of a company.
That VLAN must be connected to a unique wireless LAN that exists on a controller and it’s
associated APs. The wireless LAN must then be extended to every client that associates with
the Service Set Identifier (SSID) “Training.”
Cisco wireless controllers provide the necessary connectivity through internal logical
interfaces, which must be configured with an IP address, subnet mask, default gateway, and
a Dynamic Host Configuration Protocol (DHCP) server.
Each interface is then assigned to a physical port and a VLAN ID.
The controller binds one WLAN to one of its dynamic interfaces and then pushes the WLAN configuration out to all of its APs by default. Then, wireless clients will be able to learn about
the new WLAN by receiving its beacons and will be able to probe and join the new BSS.
Similar to the concept of VLANs, you can use WLANs to separate wireless users and their
traffic into logical networks. Users associated with one WLAN cannot cross over into
another one unless their traffic is bridged or routed from one VLAN to another through the
wired network infrastructure.
But don’t be just tempted to use a new WLAN for every occasion, just to keep groups of
users isolated from each other or to support different types of devices. It is usually wise to
plan your wireless network first.
Let’s discuss two limitations here:
- Cisco controllers support a maximum of 512 WLANs, but only 16 of them can be
actively configured on an AP.
- Advertising each WLAN to wireless clients uses airtime.
If you create too many WLANs, a channel can be starved of any usable airtime and the
clients will have a hard time transmitting their own data because the channel is overly busy
with beacon transmissions coming from the AP.
So, it’s better to limit the number of WLANs to five or fewer. A maximum of three WLANs is
the best. Remember that by default, no WLANs are defined on a controller.
Before you create a new WLAN, think about the following parameters it will need to have:
- SSID string
- Controller interface and VLAN number
- Type of wireless security needed
First you will create the appropriate dynamic controller interface to support the new WLAN;
then you will enter the necessary WLAN parameters. Each configuration step is performed
using a web browser session that is connected to the WLC’s management IP address.
WLAN configuration steps
Step 1. Configure a RADIUS Server
Step 2. Create a Dynamic Interface
Step 3. Create a New WLAN
Step 4: Configure WLAN Security
Step 5: Configure WLAN QoS
Step 6: Configure Advanced WLAN Settings
Step 1. Configure a RADIUS Server
Security > AAA > RADIUS > Authentication
• Click New to create a new server.
• Next, enter the server’s IP address, shared secret key, and port number.
• Be sure to set the server status to enable so that the controller can begin using it.
• Click Apply to complete the server configuration.
Step 2. Create a Dynamic Interface
Controller > Interfaces > New
• Next, enter the IP address, subnet mask, and gateway address for the interface.
• You should also define DHCP server addresses that the controller will use when it
relays DHCP requests from clients that are bound to the interface.
• Click the Apply button to complete the interface configuration and return to the list
of interfaces.
Step 3. Create a New WLAN
Create a New WLAN – WLAN ID
The ID numbers are useful when you use templates for automated configuration on multiple controllers simultaneously.
• Click the Apply button to create the new WLAN.
Create a New WLAN – radio selection
You can control whether the WLAN is enabled or disabled with the Status check box.
By default, the WLAN will be offered on all radios that are joined with the controller.
You can select a more specific policy with 802.11a only, 802.11a/g only, 802.11g only, or
802.11b/g only. For example, if you are creating a new WLAN for devices that have only a
2.4-GHz radio, it probably does not make sense to advertise the WLAN on both 2.4- and 5-
GHz AP radios.
Create a New WLAN – hiding SSID
Use the Broadcast SSID check box to select whether the APs should broadcast the SSID
name in the beacons they transmit. Broadcasting SSIDs is convenient for users because their
devices can learn and display the SSID names automatically. Most devices need the SSID in
the beacons to understand that the AP is still available for that SSID. Hiding the SSID offers no
notable security benefit; it just prevents user devices from discovering an SSID and trying to use
it as a default network.
Step 4: Configuring WLAN Security
Use the Security tab to configure the security settings. Layer 2 Security tab is selected by
default. From there select the appropriate security scheme to use.
Step 5: Configuring WLAN QoS
The controller will consider all frames in the WLAN to be normal data by default and will
handle them in a “best effort” way. This setting can be changed to one of the following
QoS categories:
- Platinum (voice)
- Gold (video)
- Silver (best effort)
- Bronze (background)
Step 6: Configuring Advanced WLAN Settings
- Coverage hole detection
- Peer-to-peer blocking
- Client exclusion and Client load limits, etc.
Note that, by default, a controller will not allow management traffic that is initiated from a
WLAN. That means you cannot access the controller GUI or CLI from a wireless device that is
associated with the WLAN. Instead, you can access the controller through its wired
interfaces.
Finalizing WLAN Configuration
When you are satisfied with the settings in each of the WLAN configuration tabs, click the
Apply button in the upper-right corner of the WLAN Edit screen. The WLAN will be created
and added to the controller configuration.
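For reference, the same result can also be achieved from the WLC's CLI. The following is only a rough, hedged sketch using AireOS-style commands; the server address, shared secret, interface name, VLAN/WLAN IDs, and SSID are assumptions, and exact syntax varies by controller model and software version:
config radius auth add 1 192.168.1.20 1812 ascii MySharedSecret
[Defines RADIUS server 1 with its IP address, port, and shared secret]
config interface create staff-int 10
config interface address dynamic-interface staff-int 192.168.10.5 255.255.255.0 192.168.10.1
config interface dhcp dynamic-interface staff-int primary 192.168.10.100
[Creates the dynamic interface on VLAN 10 and sets its addressing and DHCP server]
config wlan create 1 Staff-Profile Staff-SSID
config wlan interface 1 staff-int
config wlan enable 1
[Creates WLAN 1, binds it to the dynamic interface, and enables it]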
If data is sent through open space, how can it be secured so that it stays private and intact?
The 802.11 standard offers a framework of wireless security mechanisms that can be used
to add trust, privacy, and integrity to a wireless network.
24.4.1 Authentication
Some common attacks focus on a malicious user pretending to be an AP. The fake AP can
send beacons, answer probes, and associate clients just like the real AP it is impersonating.
Once a client associates with the fake AP, the attacker can easily intercept all
communication to and from the client from its central position.
To prevent this type of man-in-the-middle attack, the client should authenticate the AP
before the client itself is authenticated.
Rogue clients – Unknown devices that happen to be within range of your network.
Potential clients must identify themselves by presenting some form of credentials to the
APs. Following figure shows the basic client authentication process.
24.4.2 Encryption
The client’s relationship with the AP might become much more trusted, but data passing to
and from the client is still available to eavesdroppers on the same channel.
To protect data privacy on a wireless network, the data should be encrypted for its journey
through free space.
This is accomplished by encrypting the data payload in each wireless frame just prior to
being transmitted, then decrypting it as it is received.
Message Integrity
The intended recipient should be able to decrypt the message and recover the original
contents, but what if someone managed to alter the contents along the way?
A message integrity check (MIC) is a security tool that can protect against data tampering.
It is like a secret stamp inside the encrypted data frame. The stamp is based on the contents
of the data bits to be transmitted. Once the recipient decrypts the frame, it can compare the
secret stamp to its own idea of what the stamp should be, based on the data bits that were
received. If the two stamps are identical, the recipient can safely assume that the data has
not been tampered with.
Following figure shows the MIC process:
With open authentication, if any client screening is used at all, it comes in the form of web
authentication. A client can associate right away but must open a web browser to see and
accept the terms for use and enter basic credentials. From that point, network access is
opened up for the client.
Client operating systems flag such networks to warn you that your wireless data will not be
secured if you join.
Wired Equivalent Privacy – WEP
WEP uses the RC4 cipher algorithm. The same algorithm encrypts data at the sender and
decrypts it at the receiver. The algorithm uses a string of bits as a key, commonly called a
WEP key, to derive other encryption keys – one per wireless frame.
WEP is known as a shared-key security method. A client can associate with an AP only if it
has the correct WEP key. WEP keys can be either 40 or 104 bits long, represented by a string
of 10 or 26 hex digits. WEP was defined in the original 802.11 standard in 1999. In 2001, a
number of weaknesses were discovered and revealed, so work began to find better wireless
security methods.
Both WEP encryption and WEP shared-key authentication are weak methods of securing a
wireless LAN. As a result, WEP was officially deprecated by 2004.
802.1x/EAP – Extensible Authentication Protocol
As its name implies (Able to be extended or stretched – extendable – designed to allow the
addition of new capabilities and functionality), EAP is extensible and does not consist of any
one particular authentication method.
Instead, EAP defines a set of common functions that actual authentication methods can use
to authenticate users. Each method is unique and different, but each one follows the EAP
framework.
EAP has another interesting quality:
A wireless client might be able to associate with an AP but will not be able to pass data to
any other part of the network until it successfully authenticates.
EAP requires an AAA server to function.
The following figure shows the three-step 802.1x process used with an AAA server:
With open and WEP authentication, wireless clients are authenticated locally at the AP
without further intervention. But with 802.1x; the client uses open authentication to
associate with the AP, and then the actual client authentication process occurs at an
authentication server.
Supplicant:
The client device that is requesting access
Authenticator:
The network device that provides access to the network (usually a wireless LAN controller
[WLC])
Authentication Server (AS):
The device which has the pre-configured database of user credentials and permits or denies
network access based on that database. This device is usually a RADIUS server.
The wireless LAN controller becomes a middleman in the authentication process between
the client and the AAA server. When you configure user authentication on a wireless LAN,
you will not have to select a specific EAP method. Instead, you select 802.1x on the WLC,
which can then work with any one (or more) of the EAP methods. It is then up to the
client and the authentication server to use a compatible method.
Following are some of the EAP-based authentication methods:
LEAP – Lightweight Extensible Authentication Protocol
EAP-FAST – EAP Flexible Authentication via Secure Tunneling
PEAP – Protected EAP
EAP-TLS – EAP Transport Layer Security
Lightweight Extensible Authentication Protocol – LEAP
A Cisco-proprietary wireless authentication method, developed as an early improvement
to the weaker WEP method. Instead of a static key, LEAP attempted to overcome WEP
weaknesses by using dynamic WEP keys that changed frequently. Even though wireless
clients and controllers still offer LEAP, you should not use it.
EAP Flexible Authentication via Secure Tunneling – EAP-FAST
A more secure method developed by Cisco.
Phase 1: After the supplicant and AS have authenticated each other, they negotiate a
Transport Layer Security (TLS) tunnel.
Phase 2: The end user can then be authenticated through the TLS tunnel for additional
security.
Protected EAP – PEAP
The AS presents a digital certificate to the supplicant to authenticate itself.
Once the identity of the supplicant is approved by the AS, the two will build a TLS tunnel to
be used for the client authentication and encryption key exchange. The digital certificate of
the AS consists of data in a standard format that identifies the owner and is
signed/validated by a third party. The third party is known as a certificate authority (CA) and
is known and trusted by both the AS and the supplicants.
Note: Digital certificate is only used at the AS.
EAP Transport Layer Security – EAP-TLS
In PEAP, it was easy to install a certificate on a single server, but the clients were left to
identify themselves through other means. EAP Transport Layer Security (EAP-TLS) goes one
step further by requiring certificates on the AS and on every client device.
It is considered to be the most secure wireless authentication method available; however,
implementing it can be complex.
24.4.4 Encryption methods
Wireless Privacy and Integrity Methods
We have discussed various authentication methods so far. Now comes the encryption part.
WEP has been compromised and deprecated, so what other options are available to
encrypt data and protect its integrity through free space?
Wireless Privacy and Integrity PROTOCOLS
Following are some of the protocols which are used in various encryption methods:
TKIP – Temporal Key Integrity Protocol
CCMP – Counter/CBC-MAC Protocol
GCMP – Galois/Counter Mode Protocol
Temporal Key Integrity Protocol – TKIP
TKIP is used in WPA wireless certifications.
TKIP adds the following security features using legacy hardware and the underlying WEP
encryption:
Sender’s MAC address: The MIC also includes the sender’s MAC address as evidence of the
frame source.
TKIP sequence counter: This feature provides a record of frames sent by a unique MAC
address, to prevent frames from being replayed as an attack.
TKIP was deprecated in the 802.11-2012 standard.
Counter/CBC-MAC Protocol – CCMP
A more secure method than TKIP.
CCMP consists of two algorithms:
- AES counter mode encryption
- Cipher Block Chaining Message Authentication Code (CBC-MAC) used as a message
integrity check (MIC)
CCMP is used in WPA2 (described in later pages)
The Advanced Encryption Standard (AES) is the current encryption algorithm adopted by the
U.S. National Institute of Standards and Technology (NIST) and the U.S. government, and is
widely used around the world. AES is open, publicly accessible, and represents the most
secure encryption method available today.
If you want to use CCMP to secure a wireless network, the client devices and APs must
support AES counter mode and CBC-MAC in hardware; CCMP cannot be used on legacy
devices that support only WEP or TKIP.
How can you know if a device supports CCMP? Look for the WPA2 designation, which is
described in the following pages.
Galois/Counter Mode Protocol – GCMP
A more efficient method than CCMP.
GCMP consists of two algorithms:
- AES counter mode encryption
- Galois Message Authentication Code (GMAC) used as a message integrity check
GCMP is used in WPA3, which is described in the following section.
24.4.5 WPA protocols & versions
WPA, WPA2, and WPA3
The Wi-Fi Alliance (http://wi-fi.org), which is a nonprofit wireless industry association, has
worked out ways of providing authentication, privacy, and message integrity through its
Wi-Fi Protected Access (WPA) industry certifications.
To date, there are three different versions:
- WPA (TKIP)
- WPA2 (CCMP) and
- WPA3 (GCMP)
As long as the Wi-Fi Alliance certifies a wireless client device, an AP, and its associated WLC
for the same WPA version, all three devices should be compatible with each other and
should offer the same security components.
WPA2 replaced WPA and uses the superior AES CCMP algorithms rather than the
deprecated TKIP used in WPA. (Note that the first version is called simply WPA, not WPA1.)
In 2018, WPA Version 3 (WPA3) was introduced as a future replacement for WPA2. WPA3
uses stronger encryption by AES with the Galois/Counter Mode Protocol (GCMP).
Each successive version is meant to replace prior versions by offering better security
features.
Summary of all authentication and encryption methods:
- A disgruntled internal employee can also cause serious damage to network resources
and reachability. This means networks can be attacked from inside the network
(the LAN side) as well as from outside (the WAN side).
So, security must be implemented at every part of the network.
Major concerns and what should be done!
The security architecture of a network should use:
- Firewalls and intrusion prevention systems (IPS) at the network boundaries
- Antivirus and antimalware tools on the hosts
- Routers (at the edge between LAN & WAN) with access lists to filter packets
- LAN tools like port security, DHCP snooping, and Dynamic ARP Inspection
Additionally, an infected removable device (such as an external HDD or a USB pen drive) can
become a security threat if connected to the internal network.
A typical enterprise network
An enterprise sometimes also needs to allow its workers to carry laptops and smartphones,
and it might want to provide network access to occasional visiting guests.
Additionally, the enterprise may provide wireless connectivity to its employees (and guests),
offering its wireless access to people who are within range.
So, you can see, as the network and its connectivity expand, the enterprise will have more
difficulty maintaining its network boundaries.
25.1.1 Common security terms
- Vulnerability
- Exploit
- Threat
Vulnerability
As you know, there is no door that cannot be penetrated; even the hardest kind of security
can be broken if you can find its weakness. In security terms, this weakness is called a
vulnerability. In other words:
- Anything that can be considered to be a weakness
- That can be used to compromise the security of something else, such as the
integrity of data, or how a system performs
is called a vulnerability.
Exploit
The tool used to break the system by taking advantage of a vulnerability is called an exploit –
like a piece of wire used to pick a lock. An exploit is effective only if it is used against the
targeted weakness or vulnerability; otherwise, an exploit is of no use.
Threat
Technically speaking, an exploit such as the piece of wire is not effective at all by itself.
Someone must use it to break the lock. Only then does an actual potential exist to break in,
destroy, and steal. This potential to break the system is known as a threat.
There are many different vulnerabilities and exploits that can be leveraged by malicious
users to become threats to an organization and its data.
Examples include vulnerabilities in operating systems and applications.
Mitigation techniques
These are the techniques that can be used to prevent the malicious activities and to protect
the network from possible attacks.
Amplification Attacks
The impact of a reflection attack is limited because only a single host is the victim and the
amount of traffic being reflected to the target is small. In amplification attacks, the
effect of the attack is amplified by using some protocol or service to generate a large
amount of traffic towards the target host.
As a result, large amounts of network bandwidth can be consumed forwarding the amplified
traffic toward the target, especially if many reflectors are involved.
Man-in-the-Middle Attacks
A type of attack used to eavesdrop on data passing from one machine to another, avoiding
detection.
The process shown in the above figure poisons the ARP table entry in any system that
receives the spoofed ARP reply. From that point on, a poisoned system will blindly forward
traffic to the attacker’s MAC address, which now represents the destination.
The attacker knows the real destination’s MAC address because he received an earlier ARP
reply from the destination host. This process can be repeated to poison the ARP entries on
multiple hosts and then forward traffic between them without detection.
Once an attacker is between two hosts, he can passively eavesdrop on and inspect all traffic
passing between them. The attacker might also take an active role and modify the data
passing through.
Reconnaissance Attacks
As they say: “It is always good to know about the strengths and weaknesses of an enemy.”
• In terms of security, to make the attack focused and more effective, it is always
better to get to know the details of a target.
• These details can reveal some vulnerabilities that can be used to execute the attack.
• Such attacks are known as Reconnaissance Attacks.
This type of attack is useful for discovering more details about the target and its systems
before an actual attack.
If an attacker knows the domain name of a business, “nslookup” can reveal the owner of the
domain and the IP address space registered to it.
The “whois” and “dig” commands are tools that can query DNS information to reveal
detailed information about domain owners, contact information, mail servers, authoritative
name servers, etc.
Then the attacker can progress to using ping sweeps to send pings to each IP address in the
target range. Hosts that answer the ping sweep then become live targets. Port scanning
tools can then sweep through a range of UDP and TCP ports to see if a target host answers
on any port numbers. Any replies indicate that a corresponding service is running on the
target host.
Keep in mind that a reconnaissance attack is not a true attack because nothing is exploited
as a result. It is used for gathering information about target systems and services so that
vulnerabilities can be discovered and exploited using other types of attacks.
Buffer Overflow Attacks
Operating systems and applications normally read and write data in temporary memory
spaces known as buffers. All processes work normally as long as the memory space is
maintained properly and data is placed within the correct buffer locations.
If a buffer is filled above its limit, the incoming data might be stored in unexpected memory
locations. An attacker can exploit this condition by sending data that is larger than expected.
The target system might store that data, overflowing its buffer into another area of
memory, eventually crashing a service or the entire system. The attacker might also be able
to craft the large message by inserting malicious code in it. If the target system stores that
data as a result of a buffer overflow, then it can run that code without realizing it.
Malware – Trojan horse
Some types of security threats can come in the form of malicious software or malware.
(Malicious + Software = Malware)
A “Trojan horse,” for example, is malicious software that is hidden and packaged inside
other software that looks normal and legitimate. If a user decides to install that software, the
“Trojan horse” software is silently installed too.
Then the malware can run attacks of its own on the local system or against other systems.
Trojan horse malware can spread from one computer to another only through user
interaction such as:
- Opening email attachments
- Downloading software from the Internet, and
- Inserting a USB drive into a computer
Malware – virus
Viruses are malware that can propagate between systems more readily. One thing to note:
a virus cannot spread by itself.
To spread, virus software must inject itself into another application, then rely on users to
transport the infected application software to other victims.
Malware – worm
There is another type of malware which is able to propagate to and infect other systems on
its own. An attacker develops worm software and deposits it on a system by any means.
From that point on, the worm replicates itself and spreads to other systems through their
vulnerabilities, then replicates and spreads again and again.
Summary of malware types
123456789
Sounds familiar!
This may well be the password many of you use for your different logins.
Right?
If I can guess it, so can an experienced and ill-intentioned attacker trying to log in to your
accounts, online or offline.
Types of password attacks
Online Password Attack –
By actually entering each password guess as the system prompts for user credentials.
Offline Password Attack –
Occurs when the attacker is able to retrieve the encrypted or hashed passwords ahead of
time, then goes offline to an external computer and uses software there to repeatedly
attempt to recover the actual password.
Attackers can also use software to perform “Dictionary Attacks” to discover a user’s
password. The software will automatically attempt to log in with passwords taken from a
dictionary or word list. It might have to go through thousands or millions of attempts before
discovering the real password.
The software can perform a “Brute-force Attack” by trying every possible combination of
letter, number, and symbol strings. Brute-force attacks require very powerful computing
resources and a large amount of time.
Password policies
To mitigate password attacks, an enterprise should implement password policies for all
users. Such policies include guidelines that require a long password string made up of a
combination of upper- and lowercase characters along with numbers and some special
characters.
(Generally known as complex passwords – which are difficult to guess or to reveal through a
password attack)
You enter your password for your mail account and then receive a code on another device
(such as a mobile phone). You cannot get into your mail account without also entering that
code received on the mobile. This type of authentication method is known as “two-factor
authentication.”
Digital certificates
A digital certificate provides information about the identity of a device. A digital certificate is
issued by a Certification Authority (CA).
Biometric
The idea behind using biometric means as authentication method is to use some physical
attribute from a user’s body to uniquely identify that person. These physical attributes are
usually unique to each individual’s body structure and cannot be easily stolen or duplicated.
For example, a user’s fingerprint can be scanned and used as an authentication factor. Other
examples include face recognition, palm prints, voice recognition, iris recognition, and
retinal scans.
Some methods are more trusted than others. Sometimes facial recognition systems can be
fooled by using photographs or masks of trusted people. (Mission Impossible)
Biometric patterns such as fingerprints, facial shapes, and iris patterns can be affected by
injuries and the aging process. So, multiple biometric credentials can be used to
authenticate users.
25.1.6 Managing user access
There are several methods to manage user access (a configuration sketch for the first two follows this list):
Global console password
(Same password for all users – user anonymity)
Individual console password
(Login local using local username & password database – not manageable)
AAA management
(Centralized management)
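As a small, hedged sketch of the first two options (the passwords and username below are assumptions):
line console 0
 password ConsolePass123
 login
[Option 1: one shared console password for all users]
username admin secret AdminPass123
line console 0
 login local
[Option 2: individual logins checked against the local username database]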
25.1.7 AAA server
A centralized authentication server can contain a database of all possible users and their
passwords, as well as policies to authorize user activities.
- Authentication: Who is the user?
- Authorization: What is the user allowed to do?
- Accounting: What did the user do?
For greater security, AAA servers can also support multifactor user credentials and more.
AAA servers usually support the following two protocols to communicate with enterprise
resources:
1. TACACS+
2. RADIUS
TACACS+ vs. RADIUS
TACACS+:
- A Cisco proprietary protocol
- Separates each of the AAA functions
- Secure and encrypted communication
- Uses TCP port 49
RADIUS:
- A standards-based protocol
- Combines authentication and authorization into a single resource
- Uses UDP ports 1812 and 1813 (accounting)
- Not completely encrypted (only the password is encrypted, not the entire packet).
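As a hedged illustration of how an IOS device might be pointed at such servers (the server names, IP addresses, and keys are assumptions, and the newer server-definition syntax shown here varies by IOS release):
aaa new-model
radius server MY-RADIUS
 address ipv4 192.168.1.20 auth-port 1812 acct-port 1813
 key MySharedSecret
tacacs server MY-TACACS
 address ipv4 192.168.1.21
 key MySharedSecret
aaa authentication login default group radius local
[Authenticates logins against the RADIUS server group first and falls back to the local user database]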
For example, if corporate users receive an email that contains a message concerning:
- A legal warrant for their arrest, or
- A threat to expose some supposed illegal behavior
They might be tempted to follow a link to a malicious site. Such an action might infect a
user’s computer and then open a back door or introduce malware or a worm that could
then impact the business operations.
This is why security programs are created and promoted throughout the enterprise, for
better coordination with the IT team and the overall protection of the enterprise network.
An effective security program should have the following basic elements:
User awareness
User training
Physical access control
User awareness
All users should be made aware of the need for data confidentiality to protect corporate
information, as well as their own credentials and personal information.
They should also be made aware of:
- Potential threats
- Schemes to mislead, and
- Proper procedures to report security incidents
Users should not include sensitive information in emails or attachments, should not keep or
transmit that information from a smartphone, or store it on cloud services or removable
storage drives.
User training
All users should be required to participate in periodic formal training so that they become
familiar with all corporate security policies.
Physical access control
Infrastructure locations, such as network closets and data centers, should remain securely
locked. Badge access to sensitive locations is a great solution because it provides a record of
which identities were granted access and when.
- A maximum of 2 ACLs can be applied on a single router interface (one for inbound and
one for outbound traffic).
- The list is read from top to bottom, and the first matching statement is applied.
There is an implicit deny at the end of every list.
25.2.2 Types of ACL
ACLs can be of 2 types: Standard and Extended.
Each list can further be of 2 types, based on how they are configured: Numbered & Named.
25.2.3 Standard numbered ACL
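As a small, hedged illustration (the network address and interface are assumptions), a standard numbered ACL could be written and applied like this:
access-list 10 permit 192.168.1.0 0.0.0.255
access-list 10 deny any
[The deny is shown for clarity; the implicit deny at the end of the list would drop this traffic anyway]
interface gigabitethernet0/1
 ip access-group 10 in
[Applies ACL 10 to inbound traffic on the interface]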
Step 1. IOS computes the MD5 hash of the password in the enable secret command and
stores the hash of the password in the configuration.
Step 2. When the user types the enable command to reach enable mode and supplies a
password that needs to be checked against that configuration command, IOS hashes the
clear-text password as typed by the user.
Step 3. IOS compares the two hashed values: if they are the same, the user-typed password
must be the same as the configured password.
25.3.2 Password attacks
Cisco IOS now supports two more alternative algorithm types – SHA-256 and scrypt – both of
which are considered stronger than MD5.
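As a hedged sketch (the password is an assumption, and the algorithm-type keywords are available only on newer IOS releases), the hash type can be chosen when the enable secret is configured:
enable secret MyEnablePass
[Stores an MD5 hash of the password by default]
enable algorithm-type sha256 secret MyEnablePass
enable algorithm-type scrypt secret MyEnablePass
[Alternative commands that store an SHA-256-based or scrypt hash instead; only one enable secret is kept at a time]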
As we studied, access control lists enable a router to take on packet filtering as an
additional role, working like a firewall.
(Remember that the dedicated role of a router is packet forwarding.)
A firewall normally does the same basic work that routers do with ACLs, but firewalls can
perform that packet-filtering function with many more options, and perform other security tasks.
Also, a router does stateless filtering whereas all effective firewalls are stateful firewalls.
Stateful – Keep state information by storing information about each packet, and make
decisions about filtering future packets based on the historical state information.
Firewalls use the following logic to make the choice of whether to deny or allow a packet:
- Like ACLs, match the source and destination IP addresses.
- Like ACLs, identify applications by matching their TCP and UDP port numbers.
- Observe application level flows to know what additional TCP and UDP ports are
used by a particular flow, and filter based on those ports.
25.4.2 Stateful & stateless firewall concept
Although routers can be used as firewalls to some extent (using ACLs), routers are designed
primarily for packet forwarding. So, routers must spend the least time possible processing
each packet so that packets experience little delay passing through the router.
To understand the concept of stateful firewall, let’s take the example of a simple denial of
service (DoS) attack. An attacker can attempt DoS attacks against a web server by using
some tools that can create a large number of TCP connections to the web server. Now, let's
say that normally a firewall allows the TCP connections to that server and the server typically
receives 20 new TCP connections per second.
A stateful firewall could be tracking the number of TCP connections per second – means,
recording state information based on earlier packets – Including the number of TCP
connection requests from each client IP address to each server address. The stateful firewall
could notice a large number of TCP connections, check its state information, and then notice
that the number of requests is very large from a small number of clients to that particular
server, which is typical of some kinds of DoS attacks.
The stateful firewall can filter such packets and can save the web server from crashing.
25.4.3 Security zones
Firewalls can control which hosts can initiate communications. No company wants any
random Internet user or attacker to be able to connect to their internal/private servers.
Firewalls use the concept of security zones when defining which hosts can initiate new
connections.
o Allow hosts from zone inside to initiate connections to hosts in zone outside, for a
predefined set of safe well-known ports (like HTTP port 80).
o There is also a separate zone dedicated to web servers, known as the DMZ. These are the
servers that need to be available for use by users in the public Internet.
o By separating the web servers into the DMZ, keeping them away from the rest of the
enterprise, the enterprise can prevent Internet users from attempting to connect to
the internal devices in the inside zone, and many types of attacks can be prevented.
- Then the IPS can examine packets, compare them to the known exploit signatures,
and notice when packets may be part of a known exploit.
- Once identified, the IPS can log the event, discard packets, or even redirect the
packets to another security application for further examination.
An IPS is only as good as its signature database. So, to do its job well, an IPS needs to
download and keep updating its signature database, because attackers are evolving day
by day. New protocols and attack methods are being developed to breach even highly
secured networks and the most sophisticated security devices, so security experts need to
stay updated and keep creating signatures to prevent zero-day attacks.
A next generation firewall – A firewall that looks at the application layer data to identify the
application instead of relying on the TCP/UDP port numbers used.
Cisco performs deep packet inspection (DPI) using a feature called Application Visibility
and Control (AVC). This means a next-generation firewall does not just analyze the transport
layer but the application and session layers as well.
Following are some of the important features of an NGFW:
Traditional firewall:
An NGFW performs traditional firewall features, like stateful firewall filtering, NAT/PAT, and
VPN termination.
Application Visibility and Control (AVC):
This feature looks deep into the application layer data to identify the application. For
example, it can identify the application based on the data, rather than port number, to
defend against attacks that use random port numbers.
Advanced Malware Protection:
NGFW platforms run multiple security services, not just as a platform to run a separate
service, but for better integration of functions. A network-based antimalware function can
run on the firewall itself, blocking file transfers that would install malware, and saving copies
of files for later analysis.
URL Filtering:
This feature examines the URLs in each web request, categorizes the URLs, and either filters
or rate limits the traffic based on rules. The Cisco security group monitors and creates
reputation scores for each domain known in the Internet, with URL filtering being able to
use those scores in its decision to categorize, filter, or rate limit.
NGIPS:
The Cisco NGFW products can also run their NGIPS feature along with the firewall. When the
design needs both a firewall and IPS at the same location in the network, these NGFW
products can run the NGIPS feature in the same device.
One of the biggest issues with a traditional IPS comes with the volume of security events
logged by the IPS.
An NGIPS helps with this issue in a couple of ways.
The NGIPS will know the:
- Operating system
- Software versions and the revision levels
- Running applications
- Port numbers in use, etc.
Using this data, the NGIPS can make much better choices about what events to log instead
of logging all of the events.
Let’s say an NGIPS is placed into a network to protect a campus LAN where end users
connect, but there is no data center in that part of the network.
Also, all PCs happen to be running Windows, and possibly the same version. The signature
database includes signatures for exploits of Linux hosts, Macs, Windows version nonexistent
in that part of the network, and exploits that apply to server applications that are not
running on those hosts.
After learning these facts, an NGIPS can decide which checks are relevant for the exploits
and which are not, spending more time and focus on events that could actually occur and
reducing the number of events logged.
Following are the features of an NGIPS:
Traditional IPS:
An NGIPS performs traditional IPS features, like using exploit signatures to compare packet
flows, creating a log of events, and possibly discarding and/or redirecting packets.
Application Visibility and Control (AVC):
As with NGFWs, an NGIPS has the ability to look deep into the application layer data to
identify the application.
Contextual Awareness:
NGFW platforms gather data from hosts – OS, software version/level, patches applied,
applications running, open ports, applications currently sending data, and so on.
Reputation-Based Filtering:
Some websites are friendly and others are not. If a website is continually flagged in the
various reputation databases, it is scored as a bad website. The NGIPS uses such scores when
deciding whether traffic to certain websites should be allowed. In other words, an NGIPS can
perform reputation-based filtering, taking the scores into account.
Switches are the kind of devices that are placed in open locations in the network. Because
end devices need to be connected to switches, not all switches can be placed in the server
room. So, switches need some extra security features to help prevent attacks launched from
locally connected devices.
Some of these features are:
- Port Security
- DHCP Snooping
- Dynamic ARP Inspection
As switches are Layer 2 devices, security at the switch level is implemented using Layer 2
addresses. This means Port Security is implemented on the basis of MAC addresses: port
security identifies devices based on the source MAC address of the Ethernet frames that the
devices send.
Port security defines a maximum number of unique source MAC addresses allowed for all
frames coming in the interface. The MAC addresses allowed on an interface can be all
statically configured, all dynamically learned, or some configured statically and others
learned dynamically.
Then it examines frames received on the interface to determine if a violation has occurred.
If the source MAC address of a received frame is among the MAC addresses allowed on the
port, the frame is processed; otherwise, a violation occurs and the frame is discarded.
25.5.2 Configuration
Port security can be configured on both access and trunk ports, but it requires you to
statically configure the port as a trunk or an access port, rather than let the switch decide.
Configuring Port Security
interface fa0/1
switchport mode access
switchport port-security
switchport port-security maximum 1
[Default value of allowed MAC address on an interface = 1]
switchport port-security mac-address (mac-address)
[Use the command multiple times to define more than one MAC address.]
switchport port-security mac-address sticky
When port security is enabled, MAC addresses are either statically configured or
dynamically learned. Either way, the default switch behavior changes: the secure MAC
addresses are no longer listed as dynamic entries, so they will not be seen with the command:
#show mac address-table dynamic
So, the commands used will be:
#show mac address-table secure
#show mac address-table secure interface fa0/1
#show mac address-table static
Port Security Shutdown Mode
When port security is enabled on a switch interface and violation mode “shutdown” has
been configured by the network admin, then if an unauthorized frame is received on that
particular interface – meaning port security has been violated – all frame forwarding is
stopped on the interface, both in and out.
It seems like port security has shut down the port; but the port is not literally down. Instead,
port security uses the err-disabled feature.
Err-disabled state can be used by Cisco switches for many reasons, but when using port
security shutdown mode and a violation occurs, the following happens (a short configuration
and verification sketch follows this list):
- The switch interface state (#show interfaces & #show interfaces status) changes to
an err-disabled state
- The switch interface port security state (#show port-security) changes to a secure-
down state
- The switch stops sending and receiving frames on the interface
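As a short, hedged illustration (the interface is an assumption, and shutdown is also the default violation mode), the mode can be set and the resulting state checked like this:
interface fa0/1
 switchport port-security violation shutdown
#show port-security interface fa0/1
#show interfaces fa0/1 status
[The show commands reveal the port security status and the err-disabled interface state after a violation]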
To recover from an err-disabled state, the interface must be shut down with the shutdown
command and then enabled with the no shutdown command, manually by the network
admin.
Alternately, the switch can be configured to automatically recover from the err-disabled
state, when caused by port security, with these commands:
errdisable recovery cause psecure-violation
[A global command to enable automatic recovery for interfaces in an err-disabled state caused by port security.]
errdisable recovery interval (seconds)
[A global command that sets how many seconds the switch waits before recovering the interface.]
Going by its functions, a DHCP server may seem like some big, complex piece of hardware,
placed in a server room with lots of air conditioning to keep it cool. But, like most servers, a
DHCP server is actually software running on a server OS. You can install a third-party app on
your Windows PC and use it as a DHCP server. This approach works in a small office or
training environment, though, not in enterprises. In smaller networks, even a router can be
used as the DHCP server, but in enterprise networks we need separate server hardware with
a server OS and high-availability features.
The DHCP service is still created by software, however.
25.6.2 DORA process
To get an IP address by DHCP, a host uses DORA process:
D – Discover
O – Offer
R – Request
A – Acknowledgement
For the sake of an example, let’s consider Host M wants to get an IP address from a DHCP
server.
Discover
Host M sends a Discover message, with source IP address of 0.0.0.0 because it does not
have an IP address to use yet and destination 255.255.255.255, which is sent in a LAN
broadcast frame, reaching all hosts in the subnet. The host hopes that there is a DHCP
server on the local subnet.
Why?
Because packets sent to 255.255.255.255 only go to hosts in the local subnet; routers will not
forward this packet.
Offer
Now look at the Offer message sent back by the DHCP server. The server sets the
destination IP address to 255.255.255.255 again.
Why?
Host M still does not have an IP address, so the server cannot send a packet directly to the
host M. So, the server sends the packet to “all local hosts in the subnet” address
(255.255.255.255).
(The packet is also encapsulated in an Ethernet broadcast frame.)
Note that all hosts in the subnet receive the Offer message. However, the original Discover
message includes a number called the client ID, which includes the host’s MAC address and
identifies the original host M. As a result, the desired host, which is host M in our case,
knows that the Offer message is meant for it. The rest of the hosts also receive the Offer
message but notice that it lists another device’s DHCP client ID, so they ignore it.
25.6.3 DHCP relay
Network engineers have a main design choice to make when using DHCP servers:
- Put a DHCP server in every LAN subnet, or locate a DHCP server at a central location?
Cisco design documents suggest a centralized design as a best practice, because it allows for
centralized control and configuration of all the IPv4 addresses assigned throughout the
enterprise network.
• By using a centralized DHCP server approach, the DHCP messages that flowed only
on the local subnet somehow need to flow over the IP network to the centralized
DHCP server and back to get the IP addresses for the hosts.
• To make it work, the routers connected to the remote LAN subnets rely on a concept
called “DHCP Relay” and need an interface subcommand: the “ip helper-address
(server-ip)” command.
The ip helper-address (server-ip) subcommand tells the router to do the following for the
messages coming in an interface, from a DHCP client:
- Watch for incoming DHCP messages, with destination IP address 255.255.255.255.
- Change that packet’s source IP address to the router’s incoming interface IP address.
- Change that packet’s destination IP address to the address of the DHCP server (as
configured in the ip helper-address command).
- Route the packet to the DHCP server.
This feature, by which a router relays DHCP messages by changing the IP addresses in the
packet header, is called DHCP relay. Many enterprise networks use a centralized DHCP
server, so the normal router configuration includes an ip helper-address command on every
LAN interface/subinterface.
Cisco routers and switches can also act as DHCP clients, learning their IP addresses from a
DHCP server. (Use this command – “ip address dhcp” under interface sub-mode)
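As a hedged sketch of both roles on a router (all addresses are assumptions):
interface gigabitethernet0/0
 ip address 192.168.10.1 255.255.255.0
 ip helper-address 10.1.1.100
[A LAN-facing interface relaying client DHCP broadcasts to the central DHCP server at 10.1.1.100]
interface gigabitethernet0/1
 ip address dhcp
[An interface acting as a DHCP client and learning its own address]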
DHCP server settings (a router-based configuration sketch follows this list):
- Subnet ID and subnet mask
- Reserved (excluded) addresses
- Default router(s)
- DNS IP address(es)
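Where a router is used as the DHCP server, as mentioned earlier, these settings map onto a pool configuration roughly like the following hedged sketch (the subnet, addresses, and pool name are assumptions):
ip dhcp excluded-address 192.168.10.1 192.168.10.10
[Reserves addresses that the server must not hand out]
ip dhcp pool LAN10
 network 192.168.10.0 255.255.255.0
 default-router 192.168.10.1
 dns-server 8.8.8.8 8.8.4.4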
25.6.4 DHCP allocation modes
DHCP uses three allocation modes, based on small differences in the configuration at the
DHCP server.
- Dynamic allocation
- Automatic allocation
- Static allocation
Dynamic allocation – the default DHCP mechanisms and configuration method that we are
using so far in our discussion.
Automatic allocation – sets the DHCP lease time to infinite.
Static allocation – some addresses are reserved to be configured as static IP addresses.
Once such a static IP is configured for a host, it cannot be used by any other host in the network.
The rules used by DHCP Snooping are different for different types of messages. For example,
the rules differ for DHCP messages coming from the client side (Discover and Request) and
for DHCP messages coming from the server side (Offer and Ack):
- DHCP messages that are supposed to be sent by DHCP servers (like Offer and
Acknowledgement) will be discarded if they are received on an untrusted port.
- DHCP messages that are supposed to be sent by DHCP clients (like Discover and
Request) and received on an untrusted port may be filtered if they appear to be part of
an attack.
- DHCP messages received on a trusted port will be forwarded; trusted ports do not
filter (discard) any DHCP messages.
25.7.2 DHCP attacks
An example of a DHCP attack – good IP but wrong default gateway
An example of a DHCP attack – a DHCP attack leading to a man-in-the-middle attack
The switch port connected to a DHCP server should be trusted; otherwise DHCP would not
work, because the switch would filter all DHCP messages sent by the DHCP server. So, all
messages coming from the DHCP server and received on trusted ports are approved.
DHCP Snooping – logic for untrusted ports
1. Examine all incoming DHCP messages.
2. If the messages are found to be the types of messages normally sent by the DHCP servers,
discard the messages.
3. If the messages are found to be the types of messages normally sent by the DHCP clients,
filter as follows:
- For DISCOVER and REQUEST messages, check for MAC address consistency
between the Ethernet frame and the DHCP message.
- For RELEASE or DECLINE messages, check the incoming interface plus IP address
versus the DHCP Snooping binding table.
4. For messages not filtered that result in a DHCP lease, build a new entry to the DHCP
Snooping binding table.
DHCP Snooping – summary of rules
It checks the Ethernet header source MAC address and compares that address to the MAC
address in the DHCP header, and if the values do not match, DHCP Snooping discards the
message.
A normal user may lease address 192.168.1.5, and at some point release the address back to
the server; however, before the client has finished with its lease, an attacker could send
DHCP RELEASE message to release that address back into the pool. The attacker could then
immediately try to lease that address, hoping the DHCP server assigns that same
192.168.1.5 address to the attacker.
DHCP snooping – defeats a dhcp release attack from another port
The figure shows the action by which the attacker off port Fa0/3 attempts to release PC1’s
address. DHCP Snooping compares the incoming message, incoming interface, and matching
table entry:
- The incoming message is a DHCP RELEASE message in port Fa0/3 listing address
192.168.1.5.
- The DHCP Snooping binding table lists 192.168.1.5 as being originally leased via
messages arriving on port Fa0/2.
- DHCP Snooping discards the DHCP RELEASE message.
25.7.4 DHCP snooping configuration
ip dhcp snooping
ip dhcp snooping vlan 10
no ip dhcp snooping information option
interface fa0/1
ip dhcp snooping trust
(all other ports are untrusted by default)
#show ip dhcp snooping
interface fa0/1
ip dhcp snooping limit rate 10
DHCP relay agents add new fields to DHCP requests – defined as option 82 DHCP header
fields – and the switch defaults to using the ip dhcp snooping information option.
So, to make DHCP Snooping work on a switch that is not also a DHCP relay agent, disable the
option 82 feature using the no ip dhcp snooping information option global command. In
other words, keep option 82 enabled only on switches that are actually working as DHCP relay agents.
Limiting DHCP Message Rates
As DHCP Snooping prevents these attacks, what can attackers do in return? They may try to
attack DHCP Snooping itself.
They know that DHCP Snooping uses the general-purpose CPU in a switch, so they can
generate large volumes of DHCP messages in an attempt to overload the DHCP Snooping
feature and the switch CPU itself.
This can act as a simple denial-of-service attack that causes DHCP Snooping to fail to inspect
every message, so that other DHCP attacks may work.
To prevent this type of attack, DHCP Snooping includes another optional feature that tracks
the number of incoming DHCP messages.
If the number of incoming DHCP messages exceeds the configured limit over a one-second
period, DHCP Snooping considers it an attack and the port is placed into the err-disabled state.
This feature can be enabled both on trusted and untrusted interfaces.
DHCP Snooping – err-disabled recovery configuration
SW1(config)# errdisable recovery cause dhcp-rate-limit
SW1(config)# errdisable recovery interval (seconds)
In normal cases, a host uses ARP when it knows the IP address of another host and wants to
know the MAC address of that host.
But sometimes a host might also want to inform all the hosts in the subnet about its MAC
address. That might be useful, for example, when a host changes its MAC address and wants
all other hosts in its subnet to update their ARP tables with the new MAC address.
Gratuitous ARP features:
- It is an ARP reply which is sent without having first received an ARP request.
- It is sent to an Ethernet destination broadcast address so that all hosts in the subnet
receive the message.
Gratuitous ARP as an Attack Vector
Suppose a host's MAC address is MAC1, and it changes to MAC2. To cause all the other hosts
to update their ARP tables, the host could send a gratuitous ARP that lists an origin MAC of
MAC2.
Attackers can take advantage of gratuitous ARPs because they let the sending host make
other hosts change their ARP tables. An attacker can send gratuitous ARPs that cause the ARP
tables of other hosts to register the attacker's MAC address as the valid one, which can lead
to more dangerous man-in-the-middle attacks.
DAI has features that can prevent these kinds of ARP attacks.
25.8.3 Inspection logic
DAI works with the idea of trusted and untrusted ports, using the same general rules as DHCP
Snooping. On untrusted ports, DAI filters out ARP messages such as:
- Messages with an Ethernet header source MAC address that is not equal to the ARP
origin hardware (MAC) address.
- ARP reply messages with an Ethernet header destination MAC address that is not
equal to the ARP target hardware (MAC) address.
- Messages with unexpected IP addresses in the two ARP IP address fields.
Like DHCP Snooping, DAI can also limit the number of ARP messages on a port to prevent
attacks on DAI itself.
25.8.4 Configuring DAI
Dynamic ARP Inspection configuration on L2 SW
Before configuring DAI, decide:
- Whether to rely on DHCP Snooping, or ARP ACLs, or both.
- If you choose to use DHCP Snooping, configure it and make the correct ports trusted.
- Choose the VLAN(s) on which to enable DAI.
- Then make DAI trusted on the selected ports in those VLANs, as shown in the sketch
after this list.
(Typically these will be the same ports you trusted for DHCP Snooping.)
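A minimal, hedged DAI configuration sketch (the VLAN, interfaces, and rate value are assumptions):
ip arp inspection vlan 10
[Enables DAI for VLAN 10]
interface gigabitethernet0/1
 ip arp inspection trust
[Trusts the uplink port – typically the same port trusted for DHCP Snooping]
interface fa0/2
 ip arp inspection limit rate 8
[Optionally rate-limits ARP messages on an untrusted access port]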
We know how switches and routers do their frame and packet forwarding – how switches
use their MAC address table and routers use their routing table to forward data.
Network programmability and Software-Defined Networking (SDN):
Software-defined networks take those concepts, improve them, and implement them with a
fresh, futuristic approach. The devices in the network still forward messages, but how they
do this and why they do this has changed over time.
26.1.2 Processing planes
Stop and think about what networking devices do.
What does a router do?
What does a switch do?
Everything that a networking device does can be placed in a particular plane. So, in
networking language, the functions of all network devices are divided among the
following 3 planes:
1. Data plane
2. Control plane
3. Management plane
What is Data Plane?
It refers to the tasks that a networking device performs in order to forward a message; that
is, it has everything to do with receiving data, processing it, and forwarding it.
All PDUs – whether you call them frames, packets, or just messages – are part of the data
plane. It is also called the forwarding plane.
Data Plane functions
- Encapsulating and de-encapsulating a packet in a data-link frame (routers, L3 switches)
- Adding/removing an 802.1Q trunking header (routers and switches)
- Forwarding an Ethernet frame as per its destination MAC address by matching it to
the MAC address table (L2 switches)
- forwarding an IP packet as per its destination IP address by matching it to the IP
routing table (routers, L3 switches)
- Encryption of the data by adding a new IP header (for VPN)
- Conversion of private IP address to the public and vice-versa (for NAT)
- Allowing or discarding a message based on a filter (for ACLs and port security)
What is CONTROL Plane?
Now let's think about the kinds of information that the data plane needs for its processing:
- A router needs routes in its routing table before it can forward packets.
- A switch needs MAC address entries in its MAC table before it can forward frames.
CONTROL Plane functions
From above examples, it is clear that the information supplied to the data plane controls
what the data plane does.
Now what controls the contents of the routing table?
What controls the content of the mac address table?
The answer is: Various control plane processes.
So, control plane refers to any action that controls the data plane.
Most of these actions have to do with creating the tables used by the data plane, like:
- The IP routing table,
- IP Address Resolution Protocol (ARP) table,
- Switch MAC address table, etc.
By adding, removing, and changing entries to the tables used by the data plane, the control
plane processes control what the data plane does.
Distributed planes
Traditional IP networks use both a distributed data plane and a distributed control plane.
It means, each device in the network has its own data plane and control plane, and the
network distributes those functions into each device, individually.
For example, OSPF, a control plane protocol, runs on each router; that is, it is
distributed among all the routers. Once populated with useful routes, the data plane’s IP
routing table on each router can forward incoming packets.
CONTROL Plane protocols
Routing protocols (OSPF, EIGRP, RIP, BGP), IPv4 ARP, IPv6 NDP and STP etc.
So, in short, the data plane relies on the control plane to provide useful information.
What is Management Plane?
The control plane directly impacts the behavior of the data plane. The management plane's
work does not directly impact the data plane. Instead, the management plane includes
protocols that allow network engineers to manage the devices.
Such management protocols include Telnet, SSH, SNMP, Syslog, etc.
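As one small, hedged example of a management-plane protocol being enabled on a device (the username, domain name, and key size are assumptions), SSH access could be configured like this:
username admin secret AdminPass123
ip domain-name example.local
crypto key generate rsa modulus 2048
[A local user, a domain name, and an RSA key pair are prerequisites for SSH]
line vty 0 4
 login local
 transport input ssh
[Restricts remote access on the vty lines to SSH with local authentication]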
Cisco Switch Data Plane Internals
To better understand SDN and network programmability, let's first look at the internals of
switches. Switches need to process millions of frames per second on each port, and a switch
can easily have 24 or more ports, so the data plane must handle tens of millions of frames
per second.
That's why LAN switches needed a faster data plane than a general-purpose CPU running
software could provide (as the old bridges did). As a result, switches have always had
specialized hardware to perform data plane processing. In switches, the switching logic
occurs not in the CPU with software, but in a dedicated hardware chip known as an
application-specific integrated circuit (ASIC).
The ASIC needs to perform lookups in the MAC address table, so for fast table lookups, the
switch uses a specialized type of memory to store the equivalent of the MAC address table:
ternary content-addressable memory (TCAM).
So, instead of looping through a search algorithm, the ASIC can feed the fields to be
matched, such as a MAC address value, into the TCAM, and the TCAM returns the matching
table entry.
A switch still has a general-purpose CPU and RAM as well. IOS runs in the CPU and uses
RAM. Most of the control and management plane functions run in IOS. The data plane
function (and the control plane function of MAC learning) happens in the ASIC. Note that
some routers also use hardware for data plane functions, for the same kinds of reasons that
switches use hardware.
The ideas of a hardware data plane in routers are similar to those in switches: use a
purpose-built ASIC for the forwarding logic, and TCAM to store the required tables for fast
table lookup.
With the advancement of programming in the networking field, specific pieces of software
were built to automate the configuration and management of the network. These pieces of
software are called "controllers."
New networking concepts that emerged around 2010 changed the location of the control
plane. Traditional networks use a distributed control plane, but in the newer controller-based
approaches the control plane is no longer distributed; it is centralized at a location that is
reachable by all the network devices in the IP network.
Many of those approaches move parts of the control plane work into software that runs as a
centralized application – a controller. The degree of the control held by the controller
depends upon the SDN solution you are designing for your network.
Controllers & Centralized Control
A controller, or SDN controller, centralizes the control of the networking devices. Each of
the network devices still has a data plane; however, in this fully centralized model, none of
the devices keeps its own control plane. The controller directly programs the data plane
entries into each device's tables.
The controller needs an interface – a way to communicate with the network devices –
because all of the network devices need to get their configuration from the controller.
So, there must be an interface or a protocol that connects the controller with the network
devices and makes the communication between them possible.
This interface or protocol is known as the SBI – the Southbound Interface.
Why southbound?
Because, in the usual drawings that follow north/south direction references, the network
devices sit south of (below) the controller.
(Interface – Software interface – that communicates between controller and networking
devices - it often includes an application programming interface (API))
API – application programming interface
An API is a method for one application (program) to exchange data with another application.
In other words, an API is an interface to access an application program.
(Will discuss later in detail)
SBI – It is an interface between a program (which is the controller) and a program (on the
networking device) which lets the two programs communicate.
The goal is – To allow the controller to program the data plane forwarding tables of the
networking device.
The Southbound Interface – examples
OpenFlow (from the ONF; www.opennetworking.org)
OpFlex (from Cisco; used with ACI)
CLI (Telnet/SSH) and SNMP (used with Cisco APIC-EM)
CLI (Telnet/SSH) and SNMP, and NETCONF (used with Cisco Software-Defined Access)
(The comparisons of SBIs go far beyond the scope of CCNA)
NBI – northbound interface
As discussed, a controller is responsible for programming the control-plane information of the
network devices. But how would a controller know what to put into those tables? It cannot
know by itself. It needs another interface – the northbound interface – through which
applications and engineers give the controller the data and intent it needs for its own processing.
In short, the controller can add entries to the networking device's forwarding tables; but how
does the controller know what to add? Through its NBI.
The ONF (Open Networking Foundation) – A group of users (operators) and vendors formed to
help establish SDN in the marketplace; it defines protocols, SBIs, NBIs, and anything else that
helps people implement their vision of SDN.
ONF's Open SDN model uses the controller with an OpenFlow SBI.
The OpenDaylight Controller (ODL)
The OpenDaylight open-source SDN controller is one of the successful SDN controller platforms
that emerged from the consolidation process of the 2010s.
Many different vendors worked together on the OpenDaylight project, with the idea that if
enough vendors cooperated on a common open-source controller, all of them would benefit.
All those vendors could then use the open-source controller as the basis for their own products,
with each vendor focusing on product differentiation rather than the fundamental features.
The result was the birth of the OpenDaylight SDN controller in the mid-2010s. OpenDaylight
(ODL) began as a separate project but now exists as a project managed by the Linux
Foundation.
OpenDaylight and OpenFlow
A vendor can then take ODL, use the parts that make sense for that vendor, add its
proprietary functions to it, and create a commercial ODL controller of its own.
The Cisco Open SDN Controller – OSC
In the 2010s, Cisco offered a commercial version of the OpenDaylight controller called the
Cisco Open SDN Controller (OSC). That controller followed the intended model for the ODL
project, but Cisco no longer produces OSC.
Cisco made a forward-looking move and adopted a completely different approach to SDN by
introducing the concept of IBN – intent-based networking. That move took Cisco away from
OpenFlow-based SDN.
Two Cisco offerings use an IBN approach to SDN:
- Application Centric Infrastructure (ACI) [Cisco's data center SDN product]
- Software-Defined Access (SDA) [for the enterprise campus]
26.1.6 Cisco ACI
Cisco Application Centric Infrastructure – ACI
The newer networking concepts built for data centers were designed around application
architectures. Let's consider this for a minute. Facebook.com, for example, is a social media
application. To store and access its data, it needs data centers at the back end to support the
application functions. The same is true for WhatsApp, Telegram, and so on. So it's evident
that modern data centers are built mainly to serve applications.
As a result, Cisco made the network infrastructure application centric, hence the name of the
Cisco data center SDN solution: Application Centric Infrastructure, or ACI.
26.1.7 Spine & leaf network design
Cisco ACI uses a specific physical switch topology called spine and leaf. With ACI, the
physical network has a number of spine switches and a number of leaf switches, as shown in
the following figure.
Spine & Leaf architecture operational conditions:
- Each leaf switch must connect to every spine switch
- Each spine switch must connect to every leaf switch
- Leaf switches cannot connect to each other
- Spine switches cannot connect to each other
- Endpoints connect only to the leaf switches
The endpoints can be connections to devices outside the data center, like the router on the
left. By volume, most of the endpoints will be either physical servers running a native OS or
servers running virtualization software with a number of VMs and containers, as shown in the
center of the following figure.
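As an illustration only (the switch names and links below are invented), a short Python sketch can check a cabling plan against the spine-and-leaf rules listed above:

# Hypothetical topology description; names are made up for illustration.
links = {
    ("leaf1", "spine1"), ("leaf1", "spine2"),
    ("leaf2", "spine1"), ("leaf2", "spine2"),
    ("leaf3", "spine1"), ("leaf3", "spine2"),
}
leaves = {"leaf1", "leaf2", "leaf3"}
spines = {"spine1", "spine2"}

def check_spine_leaf(links, leaves, spines):
    problems = []
    # Rules 1 and 2: every leaf must connect to every spine (and vice versa)
    for leaf in leaves:
        for spine in spines:
            if (leaf, spine) not in links and (spine, leaf) not in links:
                problems.append(f"missing link: {leaf} <-> {spine}")
    # Rules 3 and 4: no leaf-to-leaf and no spine-to-spine links
    for a, b in links:
        if a in leaves and b in leaves:
            problems.append(f"illegal leaf-to-leaf link: {a} <-> {b}")
        if a in spines and b in spines:
            problems.append(f"illegal spine-to-spine link: {a} <-> {b}")
    return problems

print(check_spine_leaf(links, leaves, spines) or "topology follows spine-leaf rules")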
To understand the working of ACI and the functions of the APIC, consider the application
architecture of a typical enterprise web app for a moment, Facebook.com for example.
Generally, a web application is a combination of three different server roles:
Web server:
Users from outside the data center connect to a web server, which sends web page content
to the user.
App (Application) server:
Because most web pages contain dynamic content, the app server does the processing to
build the next web page for that particular user based on the user’s profile and latest
actions and input.
DB (Database) server:
Many of the app server’s actions require data; the DB server retrieves and stores the data as
requested by the app server.
Using the intent-based networking (IBN) model, the controller must also be told, by a network
engineer or by an automation program, about the access policies, which define which endpoint
groups (EPGs) should be able to communicate (and which should not).
For example, the routers that connect to the network external to the data center should be
able to send packets to all web servers, but not to the app servers or DB servers.
Endpoint groups and policies
Notice that in such network architectures, we don't talk about physical interfaces – which
interface should be assigned to which VLAN, or which ports should be added to which
EtherChannel; the discussion moves to an application-centric view of what happens in the
network.
To make it all work, ACI uses a centralized controller called the Application Policy
Infrastructure Controller (APIC). The name defines the function in this case: it is the
controller that creates application policies for the data center infrastructure.
The APIC takes the intent (EPGs, policies, and so on) rather than a device-by-device
configuration, which completely changes the operational model away from configuring VLANs,
trunks, EtherChannels, ACLs, and so on.
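As a rough sketch of the idea only (this is not the real APIC object model or API), intent could be expressed as data about EPGs and which pairs may communicate, rather than as per-device commands; the EPG names and addresses below are invented for illustration:

# Hypothetical intent definition; not the real APIC object model or API.
# The point: the engineer states WHICH groups may communicate, not
# per-device VLANs, trunks, or ACL lines.

epgs = {
    "OUTSIDE": ["192.0.2.1"],              # connection to the external router
    "WEB":     ["10.1.1.11", "10.1.1.12"],
    "APP":     ["10.1.2.21", "10.1.2.22"],
    "DB":      ["10.1.3.31"],
}

# Allowed EPG-to-EPG communication (everything else is denied)
contracts = {
    ("OUTSIDE", "WEB"),   # external users may reach web servers
    ("WEB", "APP"),       # web tier may reach app tier
    ("APP", "DB"),        # app tier may reach database tier
}

def allowed(src_epg, dst_epg):
    # A controller would turn this intent into device-level forwarding rules
    return (src_epg, dst_epg) in contracts

print(allowed("OUTSIDE", "WEB"))  # True
print(allowed("OUTSIDE", "DB"))   # False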
26.1.9 Cisco APIC-EM
While Cisco was defining network automation for the enterprise, it faced a challenge: its
customers' networks were already full of existing, traditional Cisco switches and routers.
APIC-EM basics:
Cisco rejected the idea of replacing all of the existing hardware with new hardware built for an
enterprise-wide SDN solution. Instead, Cisco looked for ways to add the benefits of network
programmability with a centralized controller while keeping the same traditional switches and
routers in place.
APIC-EM does not directly program the data or control planes, but it does interact with the
management plane via Telnet, SSH, and/or SNMP. It can ask for and learn the configuration
and operational state of each device, and it can then reconfigure each device.
APIC-EM Replacement
Cisco announced the end of marketing for the APIC-EM product at the same time it
announced the new CCNA 200-301 exam in 2019.
We have kept a small section on it because many of the functions of APIC-EM have become
core features of the Cisco DNA Center (DNAC) product, so they are worth learning for a better
understanding of DNA Center.
(Discussed in the coming sections)
OpenFlow, ACI, APIC-EM – comparison
The following are some of the newer terms you will hear when people talk about controllers
and controller-based networking:
- Software Defined Networking
- Software Defined Architecture
- Programmable Networks
- Controller-Based Networks
26.1.11 How Automation Impacts Network Management
Let’s take the following example:
APIC-EM and DNA Center (its successor) both provide a feature – Path trace.
Path trace is used to see the path taken by a packet from source to destination, explaining
its forwarding logic used at each node.
Compare the two approaches: the traditional one, using the output of show commands at each
device, and the automated one, using tools like DNA Center (with their NBIs and SBIs).
You will find that the second option does almost all of the work by itself, while the first option
leaves most of the work to be done by you.
The second option is only possible because of a centralized controller. The controller is
provided, through its NBIs, with the data it uses for configuration and forwarding-table
information. Going beyond that, Cisco controllers analyze that data to provide much more
useful information. This is just one example; the power of these APIs is remarkable.
26.1.12 Traditional v/s controller-based networks
Benefits of controller-based networks:
- It is easier to automate networking functions than in traditional networks.
- It is possible to automate functions that were not easily automated without controllers.
- The time taken to complete projects is greatly reduced.
- The network engineer does not need to think about every command on every network device.
These models include the three Cisco models you are most likely to see, used in different
types of networks:
1. Software-Defined Access (SDA) – for campuses
2. Software-Defined WAN (SD-WAN) – for WAN solutions
3. Application Centric Infrastructure (ACI) – for data centers
This new concept of intent-based networking was only possible through controller-based
architectures. The new operational models allow configuration of the network as a whole
rather than per-device configuration. The automation features enabled by the controller's
northbound APIs allow third-party applications to automatically configure the network.
SDA also includes a completely different operational model, with a network fabric composed
of an underlay network and an overlay network.
In the 2010s, Cisco re-invented its campus networking model, and SDA was the result.
It still uses a physical network with switches, routers, cables, and different endpoints.
DNA Center is the controller for SDA networks. SDA is one implementation of Cisco DNA that
is used in campus networks and uses the DNA Center controller to configure and operate SDA.
Overlay:
The mechanisms used to create VXLAN tunnels between SDA switches, which are then used to
transport traffic from one fabric endpoint to another over the fabric.
Underlay:
The underlay network looks like a more traditional network architecture, with several
devices and links.
Fabric:
Both concepts (underlay and overlay) together create the SDA fabric.
SDA – Underlay
In simple words, the underlay exists as multilayer switches and their links, with IP
connectivity. The underlay supports the overlay's tunneling method, called VXLAN: traffic sent
by the endpoint devices flows through VXLAN tunnels in the overlay – a completely different
process than traditional LAN switching and IP routing.
For example, think about sending packets from hosts on the left of a network, over SDA, to
hosts on the right.
The SDA fabric uses a routed access layer design. SDA makes good use of this design, and it
works very well for the underlay, whose goal is to support the VXLAN tunnels in the overlay
network.
Routed access layer design:
It means that all the LAN switches are Layer 3 switches, with routing enabled, so all the links
between switches operate as Layer 3 links.
Greenfield SDA deployment – means using all new devices. In a greenfield deployment, DNA
Center will configure the devices' underlay to use a routed access layer; all new devices can be
configured by DNA Center with the best underlay configuration to support SDA.
Features of the routed access layer design
- All switches act as Layer 3 switches, even at the access layer.
- All links between switches (single links or EtherChannels) are routed Layer 3 links, not Layer 2 links.
- The switches use the IS-IS routing protocol for IP connectivity, so STP/RSTP is not needed; the routing protocol chooses which links to use based on the IP routing tables.
- The equivalent of a traditional access layer switch – an SDA edge node – acts as the default gateway for the endpoint devices, not the distribution switches, so HSRP (or any FHRP) is no longer needed.
Benefits of SDA fabric with layer 3 access
The following process takes place when an endpoint sends a frame to be delivered across
the SDA network.
The first SDA node that receives the frame encapsulates the frame in a new message, using a
tunneling method called VXLAN, and forwards the frame into the SDA fabric. The other SDA
nodes then forward the frame based on the VXLAN tunnel details. Finally, the last SDA node
removes the VXLAN details and forwards the original frame toward the destination endpoint.
All of this work happens in the ASIC of each switch, so there is no performance penalty for the
switches to perform this extra work. (It is one of the conditions in the SDA hardware
compatibility list: the switches must have ASICs that can perform the work.)
The use of VXLAN tunnels opened up the possibilities for a number of new networking
features that did not exist without VXLAN.
26.2.3 Concept of VXLAN tunnel
VXLAN Tunnels in the Overlay (Data Plane)
The VXLAN encapsulation supplies the header fields that SDA needs for its features, so the
tunneling protocol is flexible and extensible, while still being supported by the switch ASICs.
VXLAN encapsulates the entire data link frame instead of encapsulating only the IP packet.
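To make the encapsulation idea concrete, the following minimal Python sketch builds the basic 8-byte VXLAN header defined in RFC 7348 and prepends it to an original Ethernet frame. Real SDA carries additional information (for example, group tags), and the result would still travel inside outer Ethernet/IP/UDP headers; this is only a sketch of the basic format.

import struct

# Minimal sketch of the 8-byte VXLAN header from RFC 7348:
# flags (1 byte, "I" bit set), 3 reserved bytes, 24-bit VNI, 1 reserved byte.

def vxlan_encapsulate(original_frame: bytes, vni: int) -> bytes:
    flags = 0x08                                     # "I" bit set: VNI field is valid
    header = struct.pack("!B3xI", flags, vni << 8)   # VNI in the upper 24 bits of the last word
    # In a real network this payload would then be carried inside outer
    # Ethernet / IP / UDP (destination port 4789) headers.
    return header + original_frame

frame = bytes.fromhex("02001111111102002222222208004500")  # truncated example frame
packet = vxlan_encapsulate(frame, vni=5001)
print(packet[:8].hex())   # 0800000000138900 -> flags byte, then VNI 5001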
For us, the northbound API matters most, because as users of SDA networks, we interact with
SDA using Cisco DNA Center's northbound REST API or its GUI.
On the southbound side, two categories of protocols are used:
1. Protocols to support traditional networking devices/software versions: Telnet, SSH, SNMP
(to support the older Cisco devices and IOS versions)
2. Protocols to support newer devices/software versions: NETCONF and RESTCONF
Now, by looking at the configuration file you had left behind, no engineer could tell from the
ACL whether any lines in it could be safely removed. He or she would never know whether an
ACE was needed for one requirement or for many.
If a requirement was removed, even if the engineers were told which old project caused the
original requirement so that they could check the notes, they would not know whether
removing the ACEs would harm other requirements.
In short, traditional ACL management suffers from issues like these.
Now, imagine if you could implement security on the routers without even thinking about
ACLs. Imagine that, over time, there was a need for five different security requirements. Each
time, the engineer would simply define the policy with DNA Center, a different policy every time.
DNA Center – northbound IP security policies – to simplify operations
Look at the following grid of intent designed by a network engineer and try to identify which
SGTs (scalable group tags) can send packets to which other SGTs.
Access table for SDA scalable group access: a grid with source SGTs as rows and destination
SGTs as columns (for example, groups A, B, and C), with each cell marked permit or deny.
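As an illustration of the grid logic only (the group names and permit/deny entries below are invented), a controller-style check could look like this:

# Illustrative sketch only; the SGT names and the grid contents are made up.
# Mirrors the access grid: (source SGT, destination SGT) -> permit or deny.

policy = {
    ("Employees", "Servers"): "permit",
    ("Guests",    "Servers"): "deny",
    ("Employees", "Guests"):  "deny",
}

def tunnel_allowed(src_sgt, dst_sgt):
    # Default deny if the pair is not listed in the grid
    return policy.get((src_sgt, dst_sgt), "deny") == "permit"

for src, dst in [("Employees", "Servers"), ("Guests", "Servers")]:
    if tunnel_allowed(src, dst):
        print(f"{src} -> {dst}: direct the edge nodes to build the VXLAN tunnel")
    else:
        print(f"{src} -> {dst}: no tunnel; packets do not flow")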
Remember the example in which an endpoint tried to send a packet to another endpoint.
The ingress SDA node starts a process by sending messages to DNA Center. DNA Center then
works with security tools in the network, like Cisco's Identity Services Engine (ISE), to
identify the users and then match them to their respective SGTs.
DNA Center then checks the logic against the previously designed grid.
If DNA Center sees a permit action between the source/destination pair of SGTs, DNA
Center directs the edge nodes to create the VXLAN tunnel, as shown in the following figure.
If the security policy says the two SGTs should not be allowed to communicate, DNA Center
does not direct the fabric to create the tunnel, and the packets do not flow.
VXLAN header with source & destination SGTs
The above figure indicates why SDA uses VXLAN encapsulation for its data plane, rather than
performing traditional Layer 2 switching or Layer 3 routing.
This single example of using SGTs makes it evident that Cisco DNA Center is more than a
management platform; it acts as a controller of the activities in the network, providing a much
more powerful set of features and capabilities.
26.2.7 Network management platform
In this part, we will discuss Cisco's traditional enterprise network management platform,
Cisco Prime Infrastructure (PI), and also Cisco's newer network management solution, Cisco
DNA (Digital Network Architecture) Center.
We will discuss the features of each and then compare them. For many years, Cisco Prime
Infrastructure has been Cisco's primary network management product for the enterprise.
26.2.8 Cisco prime infrastructure as network management platform
Cisco PI as a Network Management Platform
It provides:
- A single point of control for all functions
- Discovery, inventory, and topology of network devices
- Support for all kinds of networks
- Management of all functions of a device (lifecycle management)
- Application visibility
- Converged wired and wireless networks
- Plug-and-Play
26.2.9 Cisco DNA center as network management platform
DNA Center as a Network Management Platform
Cisco PI runs as an application on a server platform, with GUI access via a web browser. The
PI server can be purchased from Cisco as a software package to be installed and run on your
own servers, or as a physical appliance. Now that you know the features of Cisco PI, let's
compare and contrast DNA Center with traditional management tools like PI.
Traditional Management v/s DNA center
(Google: DNA Center topology map using Cisco sandbox)
DNA Center can work with PI, using the data already discovered by PI rather than
performing the discovery work again.
The biggest difference that really makes Cisco DNA Center stand out:
Cisco DNA Center supports SDA, whereas other management apps do not.
On the other hand, Cisco PI still has some traditional management features that are not
found in Cisco DNA Center. Cisco DNA Center has many of those features (not all), with its
main focus on newer capabilities like SDA support.
By providing the Digital Network Architecture (DNA) as a set of tools, Cisco is committed to
helping its customers achieve some of their big goals, which include:
- Reduced costs & risks
- Better security
- Faster deployment of services through automation and simplified processes
- And the list goes on
Cisco DNA Center represents the future of network management for Cisco enterprises.
Software is not smart; it is dumb. It only does what we tell it to do. A piece of software can do
its own work only; it has no idea, by itself, how to connect to and work with another piece of
software.
And as we know, today's world is the world of interconnectivity. Literally everything is
connected to everything else. If you open amazon.com, for example, you will find thousands of
interconnected hyperlinks. They do not all belong to Amazon, but they are interconnected with
Amazon somehow.
Now, how is that possible? How do two pieces of software interconnect with each other? How
does data get from here to there, from source to destination as a request, and then back to
the source as a reply? How do different devices and applications connect with each other to
let you make a reservation, place an order, or book a flight?
It is all possible using Application Programming Interfaces – APIs. But what, exactly, is an
API? The API is the unsung hero of the programming world.
It is a messenger that takes requests from your side, tells the computer what you want to do,
and then returns the response back to you. It's like a waiter in a restaurant: you place an
order, and the chef (the computer or system) back in the kitchen waits for the waiter to
deliver that order. The chef/system then prepares only what you ordered, according to the
instructions passed along by the waiter/API.
Applications use application programming interfaces (APIs) to communicate with each
other. APIs allow programs running on different computers to work cooperatively,
exchanging data to achieve some goal.
Software developers add APIs to their software so that other applications can make use of its
features. To write an application, a developer writes some code, but most of the time the
developer first tries to find an API that can provide the needed data and functions, reducing
the amount of new code that he or she has to write.
A number of APIs exist, each with a different set of features, to meet different needs. The
modern software development approach is to use prebuilt software to accomplish tasks
rather than writing everything from scratch.
The CCNA blueprint mentions one type of API – REpresentational State Transfer (REST) –
because of its popularity as a type of API in networking automation applications.
26.3.2 RESTful APIs
REST APIs are defined mainly by three attributes:
- Client/server architecture
- Stateless operation
- Clear statement of cacheable/uncacheable
Client/Server Architecture
1. The REST client executes a REST API call, which generates a message which is then sent to
the REST server.
2. The REST server has API code that considers the request and decides how to reply.
3. Finally, the REST server sends back the reply message with the suitable data variables in
its reply message.
Many REST APIs use HTTP, but the use of HTTP is not a must for an API to be considered
RESTful.
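As a minimal sketch of a REST client call over HTTP, assuming the hypothetical URL and token shown below (they are not a real controller API), the popular Python requests library could be used like this:

import requests  # third-party library: pip install requests

# Hypothetical URL and token purely for illustration; a real controller's
# northbound REST API uses its own paths and an authentication step.
url = "https://controller.example.com/api/v1/devices"
headers = {"X-Auth-Token": "EXAMPLE-TOKEN", "Accept": "application/json"}

# Step 1: the client executes the API call, generating an HTTP request
response = requests.get(url, headers=headers, verify=False, timeout=10)

# Steps 2 and 3: the server's API code decides how to reply and returns data
print(response.status_code)   # e.g. 200 if the server accepted the request
data = response.json()        # reply body, typically JSON, parsed into Python objects
print(data)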
Stateless Operation
REST does not record and use information about one API exchange for the purpose of
processing another exchange. No past history is considered when processing a new request;
in other words, all API requests are independent of one another.
Cacheable (or Not)
Cacheable means saving certain information at the time it is first processed, so that time and
resources can be saved on later requests.
Let's understand it by taking the example of opening a web page.
The first time, all of the data, such as images, logos, and text, is loaded from scratch from the
server itself. But some of that data is stored on the client for faster processing the next time.
This is caching.
Some data is cacheable, like the company name and logo, but some data is not, like product
and price lists.
26.3.3 Basics of programming
To understand the upcoming topics, it's better to have an introductory understanding of
some basic concepts of programming and programming languages – especially data and
variables.
Variables
All applications need data to process. This data is provided as input to the application in the
form of variables, and the program then processes it. There are simple variables and complex
variables.
It is required for:
- Making comparisons
- Making decisions, and
- Performing mathematical formulas to analyze the data.
So, in simple words:
A variable is a name or label that has a value assigned to it.
This value can be:
- Unsigned integers (X = 7)
- Signed integers (Y = -9)
- Floating point numbers (Z = 1.234)
- Text (Output = “And the winner is “)
A more complex variable can hold a set of key:value pairs, for example:
Key      Value
Speed    Auto
Duplex   Auto
IP       192.168.1.1
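In Python, for example, the simple value types above and a key:value structure like the table could be written as follows (the variable names are just for illustration):

# Simple variables
x = 7                           # unsigned (non-negative) integer
y = -9                          # signed (negative) integer
z = 1.234                       # floating point number
output = "And the winner is "   # text (string)

# A more complex variable: a dictionary of key:value pairs,
# matching the Key/Value table above
interface = {
    "Speed": "Auto",
    "Duplex": "Auto",
    "IP": "192.168.1.1",
}

print(x + y)             # -2 (integers can be used in calculations)
print(output + "JSON")   # And the winner is JSON
print(interface["IP"])   # 192.168.1.1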
Software CRUD actions – Create, Read, Update, Delete:
Create:
Allows the client to create new instances of variables and data structures at the server and
initialize their values, which are kept at the server.
Read:
Allows the client to retrieve (read) the current value of variables that exist at the server,
storing a copy of the variables, structures, and values at the client.
Update:
Allows the client to change (update) the value of variables that exist at the server.
Delete:
Allows the client to delete from the server different instances of data variables.
26.3.6 HTTP verbs
HTTP uses verbs that mirror the CRUD actions. HTTP defines the concept of an HTTP request and
reply, with the client sending a request and the server answering back with a reply. Each
request lists an action verb in the HTTP request header, which defines the HTTP action: for
example, POST maps to create, GET to read, PUT and PATCH to update, and DELETE to delete.
In simple words, the HTTP verb in the HTTP header tells the server what to do with the request
and what kind of reply to give back to the client. The HTTP messages also include a URI, which
identifies the resource that the client is trying to access.
HTTP request header including verbs & URI
In simple words, without a common data format, it is difficult for a REST client to understand
and interpret the data transferred by a REST server, and vice versa.
Exchange of internal representation of variables – incorrect concept
Data serialization languages give us a way to represent variables with text rather than in the
internal representation used by any particular programming language. Each data serialization
language enables an API server to return data so that the API client can replicate the same
variable names and data structures as found on the API server.
Exchange of internal representation of variables – correct concept
At the end of the process, the REST client application now has equivalent variables to the
ones it requested from the server in the API call. Note that the final step—to convert from
the data serialization language to the native format—can be as little as a single line of code!
Remember that applications can also store data in JSON format.
(The current CCNA blueprint mentions only JSON.)
But there are other such languages as well, like XML and YAML.
26.3.8 Interpreting JSON output
JSON – JavaScript Object Notation
JSON provides a balance between human and machine readability. Once familiar with a few
necessary JSON rules, most humans can read JSON data.
Comparing data modeling languages:
- JSON (JavaScript Object Notation): data modeling and data serialization; used by REST APIs
- XML (eXtensible Markup Language): data-focused text markup that allows data modeling; used by REST APIs
- YAML (YAML Ain't Markup Language): human-readable data serialization; used by configuration management tools such as Ansible
But to analyze JSON data and find the data structures, including objects, arrays, and key:value
pairs, you need to know a bit more about JSON syntax.
Key value pairs
Key-Value Pair:
Each and every colon identifies one key: value pair, with the key before the colon and the
value after the colon.
o Key:
Text, inside double quotes, before the colon, used as the name that references a value.
o Value:
The item after the colon that represents the value of the key, which can be:
- Text: Listed in double quotes.
- Numeric: Listed without quotes.
- Array: A special value (more details later).
- Object: A special value (more details later)
Multiple Pairs:
When listing multiple key: value pairs, separate the pairs with a comma at the end of each
pair (except the last pair).
One JSON Object (Dictionary) with Three Key: Value Pairs
{
  "Rank1": "CEO",
  "Rank2": "GM",
  "Rank3": "AM"
}
Key – Rank1, Rank2, Rank3
Value – CEO, GM, AM
Commas and the curly brackets – Special characters
One pair of curly brackets – One JSON object
JSON files, and JSON data exchanged over an API, exist first as a JSON object, with an
opening (left) and closing (right) curly bracket.
Objects and Arrays
JSON uses JSON objects and JSON arrays to communicate data structures beyond a simple
key: value pair.
Objects can be flexible, but in most uses, they act like a dictionary.
Arrays list a series of values.
For general conversation, many people refer to the JSON structures as dictionaries and lists
rather than as objects and arrays.
{ } - Object:
A series of key:value pairs enclosed in a matched pair of curly brackets, with an opening left
curly bracket and its matching right curly bracket.
[ ] - Array:
A series of values (not key: value pairs) enclosed in a matched pair of square brackets, with
an opening left square bracket and its matching right square bracket.
Key-value pairs inside objects:
All key-value pairs inside an object conform to the earlier rules for key-value pairs.
Values inside arrays:
All values conform to the earlier rules for formatting values (for example, double quotes
around text, and no quotes around numbers).
Arrays
[
  "Manav",
  "Deep",
  "Manavdeep"
]
JSON Object with Two Key: Value Pairs
{
  "Team_1": [
    "Himansh",
    "Himanshi"
  ],
  "Team_2": [
    "Spyder",
    "Serena"
  ]
}
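As a small illustration in Python, the standard json module can convert the JSON text above into native variables so a program can read the objects and arrays:

import json

# The same JSON object shown above, held here as a text string
text = '''
{
  "Team_1": ["Himansh", "Himanshi"],
  "Team_2": ["Spyder", "Serena"]
}
'''

data = json.loads(text)        # convert JSON text into native Python objects

print(type(data))              # <class 'dict'>  (the JSON object)
print(data["Team_1"])          # ['Himansh', 'Himanshi']  (the JSON array)
print(data["Team_2"][0])       # Spyder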
By now, you know the basics of managing device configurations – for example:
- How to configure one device using global configuration mode commands and how to save
the running-config file to the startup-config file.
But that doesn't answer questions like the following:
- Can one engineer change the running-config of a device in a way that others don't know about?
- Can the responsible people know if the configuration file was changed?
- Who changed the file?
- What was changed?
And so on…
Not every company needs formal configuration management; it depends on the company's
size. Small companies can be managed by a small networking staff that monitors everything
manually. But as a company grows, it adds more devices and more networking staff. That
results in higher rates of configuration change, and manual management becomes a problem
in such scenarios. So it takes more than good practices and good people to deal with device
configuration management.
Configuration drift
Configuration drift is a change in the configuration file of a network device (a router, for
example) away from the desired configuration. For example, the hostname of a router is
supposed to be BR1, but somebody changed it to Branch1. That change is an example of
configuration drift.
26.4.2 Central configuration management
Why central configuration management?
A company may have hundreds or thousands of network devices and many network engineers.
As a result, the per-device manual configuration model does not work well for such large
networks. So, medium to large enterprises store configurations in a central location, in
addition to the startup-config files on the devices. The files are placed in a shared repository
accessible to the entire network team.
Even with this seemingly perfect solution, there are still dangers.
What if someone directly changes the configuration of a network device over a console
connection? It means some configuration drift can still occur.
As a solution:
Configuration management tools can monitor device configurations to discover when a device
configuration differs from the intended, ideal configuration, and then:
- Either reconfigure the device, or
- Notify the network engineering staff to make the change.
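As a simplified sketch of the monitoring idea (not any particular tool's implementation; real tools fetch the running-config over SSH or an API), Python's standard difflib module can reveal drift between the intended and running configurations:

import difflib

# Compare the intended configuration (from the central repository)
# with the configuration pulled from the device.
desired_config = """hostname BR1
interface GigabitEthernet0/1
 ip address 10.1.1.1 255.255.255.0
""".splitlines()

running_config = """hostname Branch1
interface GigabitEthernet0/1
 ip address 10.1.1.1 255.255.255.0
""".splitlines()

drift = list(difflib.unified_diff(desired_config, running_config,
                                  fromfile="desired", tofile="running", lineterm=""))
if drift:
    print("Configuration drift detected:")
    print("\n".join(drift))     # shows 'hostname BR1' vs 'hostname Branch1'
else:
    print("Device matches the intended configuration")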
Configuration Provisioning
Provisioning means copying the changed configuration from the configuration management
system to the network devices.
26.4.5 Features of a configuration management tool
- Implement automated configuration changes on devices.
- Store the logical steps in a file and schedule their execution, so that changes can be
implemented by the automation tool without the engineer being present.
26.4.6 Templates & variables
In enterprise networks, a large number of devices perform the same kind of role, so they use
almost identical configurations apart from a few standard differences.
These tools can represent configuration files as templates and variables, so that devices with
similar roles can use the same template but with different values.
These management tools divide the total configuration into two parts:
• Fixed part (common to all devices – known as the template)
• Changeable part (unique to each device – known as the variables)
Network engineers can then edit the template and variable files separately, according to the
requirements of the devices. The configuration management tool then processes the template
and variables to create the ideal configuration file for each device.
Each configuration management tool uses a different language for each type of file (one
language for writing templates and another for writing variables).
For example Ansible uses:
- Jinja2 language for templates and YAML for variables
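As a minimal sketch of the template-plus-variables idea (the hostname and interface values are made up, and real Ansible wraps this in playbooks and modules), the Jinja2 and PyYAML libraries can render a device configuration like this:

# Requires the third-party packages: pip install jinja2 pyyaml
from jinja2 import Template
import yaml

# Template (fixed part) -- shared by all devices in the same role
template_text = """hostname {{ hostname }}
interface {{ uplink }}
 ip address {{ ip }} {{ mask }}
"""

# Variables (changeable part) -- unique per device, written in YAML
variables_text = """
hostname: BR1
uplink: GigabitEthernet0/1
ip: 10.1.1.1
mask: 255.255.255.0
"""

variables = yaml.safe_load(variables_text)           # YAML text -> Python dict
config = Template(template_text).render(**variables)  # fill the template with the values
print(config)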
Each tool uses its own language to define the action steps (often a domain-specific language),
but these are normally easier to learn than general programming languages.
These configuration management tools can also be extended with additional features by using
programming languages like Python.
Due to its rich feature set, Python is widely used across the globe for many purposes, including
network automation and programmability.
Important files used by configuration management tools
For example:
- Action – change the router ID of OSPF router R7
- Logic – change the router ID of OSPF router R7 on the weekend (on a given date)
Ansible uses an agentless architecture, meaning it does not rely on any code (agent) running
on the network device. Instead, Ansible relies on features typical of network devices, namely
SSH and/or NETCONF, to make changes and extract information.
When using SSH, the Ansible control node actually makes changes to the device like any
other SSH user would, but the work is done by Ansible code rather than by a human.
Ansible uses a push model, rather than a pull model (like Puppet and Chef).
After installing Ansible, an engineer creates and edits the various Ansible files, including an
Ansible playbook. Then the engineer runs the playbook, which tells Ansible to perform the
steps.
Ansible push model
Like all other configuration management tools, Ansible can do both configuration
provisioning (configuring devices after changes are made in the files) and configuration
monitoring (checking to find out whether the device config matches the ideal configuration
on the control node).
However, Ansible’s architecture more naturally fits with configuration provisioning, as seen
in the above figure.
26.4.8 Puppet
Puppet master – a Linux server where Puppet is installed for use in production networks.
There are free versions with limited feature sets and paid versions with enhanced functionality.
Files created and used by Puppet:
Manifest: A human-readable text file on the Puppet master
Resource, Class, Module: Components of the manifest [the largest component is the module,
which is composed of classes, which are composed of resources]
Templates: Files on the Puppet master (typically written in Embedded Ruby, ERB) used to
generate manifests dynamically.
Imperative (Ansible): "Configure all OSPF internal routers in these locations, and if errors
occur for any device, do these extra tasks for that device."
Declarative (Puppet): "These OSPF internal routers should have the configuration in this file
(manifest) by the end of the process."
Puppet can use either an agent-based or an agentless architecture for network device support,
depending on whether the network device supports Puppet agents.
[Puppet agent – a software component that can run on the network device]
If a network device supports the Puppet agent, Puppet uses the agent-based approach;
otherwise, the agentless approach is used.
Agentless operation:
Not every Cisco OS supports Puppet agents, so a proxy agent running on some external host is
used. The external agent then uses SSH to communicate with the network device.
Agent-based & agentless operation for Puppet
Puppet uses a pull model to make the intended configuration appear on the device.
Step 1. The engineer creates the files on the Puppet server (the Puppet master).
Step 2. The engineer configures and enables the on-device agent or a proxy agent for each device.
Step 3. The agent pulls manifest details from the server, which tell the agent what its configuration should be.
Step 4. If the device's configuration should be updated, the Puppet agent performs additional pulls to get all required details and updates the device configuration.
Pull model with Puppet
26.4.9 Chef
Chef automate – or simply Chef.
Files created and used by Chef:
Resource:
Just like the ingredients of a recipe in a cookbook. (The configuration objects)
Recipe:
The Chef logic applied to resources to determine when and how to act against the resources
– just like a recipe in a cookbook.
Cookbooks:
Just like a set of recipes about the same kinds of work, grouped together for better
management.
Runlist:
Just like an ordered list of recipes to be run against a particular device.
Chef and Puppet use a similar architecture, in which each managed device (a Chef node, or
Chef client) runs an agent.
The Chef client pulls recipes and resources from the Chef server and then adjusts its
configuration to stay in sync with the details in those recipes and runlists.
Remember: Chef requires on-device Chef client code.
Because many Cisco devices do not support a Chef client, you will likely see more use of
Ansible and Puppet for Cisco device configuration management.
Summary of Configuration Management Tools
All configuration management tools have their unique strengths, use cases, and limitations.
Among the three, Ansible appears to draw the most interest, then Puppet, then Chef.
Ansible provides support for a wide range of Cisco devices due to its agentless architecture
and its use of SSH. Puppet's agentless (proxy agent) option also provides fairly wide support
for Cisco devices.
Term for the file that lists the actions:
- Ansible: Playbook
- Puppet: Manifest
- Chef: Recipe, Runlist
THANK YOU
&
ALL THE BEST