UNIT 3 notes
With a local interface, the SDN applications run on the same server as the
control plane software (the controller network operating system). Alternatively,
the applications can run on remote systems, in which case the northbound
interface is a protocol or application programming interface (API) that connects
the applications to the controller network operating system (NOS) running on a
central server. Both architectures are likely to be implemented.
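As a rough illustration of the remote case, the following sketch shows an application talking to a controller over a REST-style northbound API. The controller address, URL paths, and response fields here are illustrative assumptions, not the API of any particular controller.

# Minimal sketch of an SDN application using a REST-style northbound API.
# The controller address, URL paths, and response fields are hypothetical.
import requests

CONTROLLER = "http://controller.example.com:8181"  # assumed controller endpoint

def get_topology():
    # Ask the controller NOS for its current view of the network topology.
    resp = requests.get(f"{CONTROLLER}/api/topology", timeout=5)
    resp.raise_for_status()
    return resp.json()

def install_policy(policy):
    # Push a high-level policy; the controller translates it into flow rules.
    resp = requests.post(f"{CONTROLLER}/api/policies", json=policy, timeout=5)
    resp.raise_for_status()

if __name__ == "__main__":
    topo = get_topology()
    print("Switches known to the controller:", topo.get("switches", []))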
Network Services Abstraction Layer
RFC 7426 defines a network services abstraction layer between the control and
application planes and describes it as a layer that provides service abstractions that
can be used by applications and services.
This layer could provide an abstract view of network resources that hides the
details of the underlying data plane devices.
This layer could provide a generalized view of control plane functionality, so that
applications could be written that would operate across a range of controller
network operating systems.
This functionality is similar to that of a hypervisor or virtual machine monitor that
decouples applications from the underlying OS and underlying hardware.
This layer could provide a network virtualization capability that allows different
views of the underlying data plane infrastructure.
Network Applications
There are many network applications that could be implemented for an SDN.
Different published surveys of SDN have come up with different lists and even
different general categories of SDN-based network applications.
User Interface
The user interface enables a user to configure parameters in SDN applications and
to interact with applications that support user interaction.
Network Services Abstraction Layer
An abstraction layer is a mechanism that translates a high-level request into the
low-level commands required to perform the request. An API is one such
mechanism. It shields the implementation details of a lower level of abstraction
from software at a higher level. A network abstraction represents the basic
properties or characteristics of network entities (such as switches, links,
ports, and flows) in such a way that network programs can focus on the desired
functionality without having to program the detailed actions.
The Frenetic run-time system operates in the SDN controller. It translates
high-level policies and queries into low-level flow rules and then issues the
needed OpenFlow commands to install these rules on the switches.
Frenetic Architecture
Consider the following Python program, which executes at the run-time level to
control OpenFlow switches. The program combines forwarding functionality with
web traffic monitoring functionality:
def switch_join(s):
    # Patterns for the three classes of traffic handled by the switch.
    pat1 = {inport: 1}                    # traffic arriving on port 1
    pat2web = {inport: 2, srcport: 80}    # web traffic arriving on port 2
    pat2 = {inport: 2}                    # other traffic arriving on port 2
    # Install forwarding rules; the HIGH-priority rule takes precedence.
    install(s, pat1, DEFAULT, [fwd(2)])
    install(s, pat2web, HIGH, [fwd(1)])
    install(s, pat2, DEFAULT, [fwd(1)])
    # Request the counters associated with the web-traffic rule.
    query_stats(s, pat2web)

def stats_in(s, xid, pat, pkts, bytes):
    # Print the statistics polled on the previous iteration, wait 30 seconds,
    # and ask the switch again for statistics matching the same rule.
    print(bytes)
    sleep(30)
    query_stats(s, pat)
When a switch joins the network, the program installs three forwarding rules in the
switch for three types of traffic: traffic arriving on port 1, web traffic arriving on
port 2, and other traffic arriving on port 2. The second rule has HIGH priority and
so takes precedence over the third rule, which has default priority. The call to
query_stats generates a request for the counters associated with the pat2web rule.
When the controller receives the reply, it invokes the stats_in handler. This
function prints the statistics polled on the previous iteration of the loop, waits 30
seconds, and then issues a request to the switch for statistics matching the same
rule.
With Frenetic, these two functions can be expressed separately, as follows:
def repeater():
    # Static forwarding rules: port 1 <-> port 2.
    rules = [Rule(inport:1, [fwd(2)]),
             Rule(inport:2, [fwd(1)])]
    register(rules)

def web_monitor():
    # Query: bytes of web traffic arriving on port 2, reported every 30 seconds.
    q = (Select(bytes) *
         Where(inport=2 & srcport=80) *
         Every(30))
    q >> Print()

def main():
    repeater()
    web_monitor()
With this code, it would be easy to change the monitoring program or swap it out
for another monitoring program without touching the repeater code; similarly,
the repeater program could be changed without touching the monitoring code.
Traffic Engineering
Traffic engineering is a method for dynamically analyzing, regulating, and
predicting the behavior of data flowing in networks with the aim of performance
optimization to meet service level agreements (SLAs). Traffic engineering involves
establishing routing and forwarding policies based on QoS requirements. With
SDN, the task of traffic engineering should be considerably simplified compared
with a non-SDN network. SDN offers a uniform global view of heterogeneous
equipment and powerful tools for configuring and managing network switches.
Examples of traffic engineering functions that have been implemented as SDN
applications include the following:
• On-demand virtual private networks
• Load balancing
• Energy-aware routing
• Quality of service (QoS) for broadband access networks
• Scheduling/optimization
• Traffic engineering with minimal overhead
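To make the idea concrete, the following sketch shows one way a traffic engineering application might choose a path that satisfies a bandwidth requirement from an SLA and express it as per-switch forwarding rules. The topology representation, helper functions, and rule format are assumptions for illustration, not part of any specific controller API.

# Hedged sketch: choose a path that meets an SLA bandwidth requirement and
# express it as per-switch forwarding rules. All data structures are assumed.
def find_paths(topology, src, dst):
    # Enumerate simple paths between src and dst in an adjacency-dict topology.
    def walk(node, visited):
        if node == dst:
            yield [node]
            return
        for nxt in topology.get(node, {}):
            if nxt not in visited:
                for rest in walk(nxt, visited | {nxt}):
                    yield [node] + rest
    return list(walk(src, {src}))

def pick_path(topology, link_free_bw, src, dst, required_bw):
    # Return the first path whose bottleneck free bandwidth meets the SLA.
    for path in find_paths(topology, src, dst):
        links = list(zip(path, path[1:]))
        if all(link_free_bw.get(l, 0) >= required_bw for l in links):
            return path
    return None

def path_to_rules(topology, path, flow_match):
    # Translate a path into (switch, match, out_port) rules for installation.
    return [(sw, flow_match, topology[sw][nxt]) for sw, nxt in zip(path, path[1:])]

# Example with a hypothetical 3-switch topology (values are output port numbers):
topo = {"s1": {"s2": 2, "s3": 3}, "s2": {"s3": 1}, "s3": {}}
free_bw = {("s1", "s2"): 100, ("s2", "s3"): 80, ("s1", "s3"): 10}
path = pick_path(topo, free_bw, "s1", "s3", required_bw=50)
print(path_to_rules(topo, path, {"dst_ip": "10.0.0.3"}))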
PolicyCop
PolicyCop is an automated QoS policy enforcement framework for SDN. Its key
capabilities include the following:
• Dynamic traffic steering
• Flexible flow-level control
• Dynamic traffic classes
• Custom flow aggregation levels
Key features of PolicyCop are that it monitors the network to detect policy
violations (based on a QoS SLA) and reconfigures the network to enforce the
violated policy. PolicyCop consists of eleven software modules and two
databases, installed in both the application plane and the control plane.
PolicyCop uses the control plane of SDNs to monitor compliance with QoS
policies and can automatically adjust the control plane rules and flow tables
in the data plane based on dynamic network traffic statistics.
PolicyCop Architecture
In the control plane, PolicyCop relies on four modules and a database for storing
control rules, described as follows:
Admission Control: Accepts or rejects requests from the resource provisioning
module for reserving network resources, such as queues, flow-table entries, and
capacity.
Routing: Determines path availability based on the control rules in the rule
database.
Device Tracker: Tracks the up/down status of network switches and their ports.
Statistics Collection: Uses a mix of passive and active monitoring techniques to
measure different network metrics.
Rule Database: The application plane translates high-level network-wide policies
to control rules and stores them in the rule database.
A RESTful northbound interface connects these control plane modules to the
application plane modules, which are organized into two components: a policy
validator that monitors the network to detect policy violations, and a policy
enforcer that adapts control plane rules based on network conditions and high-level
policies. Both modules rely on a policy database, which contains QoS policy rules
entered by a network manager. The modules are as follows:
Traffic Monitor: Collects the active policies from the policy database and
determines the appropriate monitoring interval, network segments, and metrics
to be monitored.
Policy Checker: Checks for policy violations, using input from the policy
database and the Traffic Monitor.
Event Handler: Examines violation events and, depending on event type, either
automatically invokes the policy enforcer or sends an action request to the network
manager.
Topology Manager: Maintains a global view of the network, based on input from
the device tracker.
Resource Manager: Keeps track of currently allocated resources using admission
control and statistics collection.
Policy Adaptation: Consists of a set of actions, one for each type of policy
violation.
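The following sketch illustrates the kind of check-and-react loop that the Traffic Monitor, Policy Checker, and Event Handler described above might perform. The policy format, metric names, and enforcement hook are simplified assumptions and are not taken from the actual PolicyCop implementation.

# Hedged sketch of a PolicyCop-style policy check loop. Policies, measured
# metrics, and the enforcement behavior are all assumed/simplified.
import time

# Hypothetical policy database entries: metric, threshold, and comparison.
POLICIES = [
    {"id": "sla-latency", "metric": "latency_ms", "max": 20.0},
    {"id": "sla-bw",      "metric": "throughput_mbps", "min": 100.0},
]

def check_policies(measurements, policies):
    # Policy Checker: return the list of policies violated by the measurements.
    violations = []
    for p in policies:
        value = measurements.get(p["metric"])
        if value is None:
            continue
        if ("max" in p and value > p["max"]) or ("min" in p and value < p["min"]):
            violations.append({"policy": p["id"], "metric": p["metric"], "value": value})
    return violations

def handle_event(violation, auto_enforce):
    # Event Handler: enforce automatically or escalate to the network manager.
    if auto_enforce:
        print(f"Invoking policy adaptation for {violation['policy']}")
    else:
        print(f"Requesting operator action for {violation['policy']}: {violation}")

def monitor_loop(get_measurements, interval=30, rounds=1):
    # Traffic Monitor: poll statistics, check policies, hand violations on.
    for _ in range(rounds):
        for v in check_policies(get_measurements(), POLICIES):
            handle_event(v, auto_enforce=True)
        time.sleep(interval)

# Example with a stubbed measurement source:
monitor_loop(lambda: {"latency_ms": 35.0, "throughput_mbps": 120.0}, interval=0, rounds=1)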
PolicyCop Workflow
Defense4All
Defense4All is an open source SDN security application, developed by Radware
for the OpenDaylight controller (ODC), that detects and mitigates distributed
denial-of-service (DDoS) attacks by monitoring traffic for protected networks
(PNs) and diverting suspect traffic to attack mitigation systems (AMSs). Its
main modules are as follows:
ODL Reps: A pluggable set of modules for different versions of the ODC. It
comprises two functions, implemented in two submodules: statistics collection
for relevant traffic and diversion of relevant traffic.
SDN Stats Collector: Responsible for setting "counters" for every PN at specified
network locations (physical or logical). A counter is a set of OpenFlow flow entries
in ODC-enabled network switches and routers. The module periodically collects
statistics from those counters and feeds them to the SDNBasedDetectionMgr. The
module uses the SDNStatsCollectionRep both to set the counters and to read the
latest statistics from them. A stat report consists of the read time, the counter
specification, the PN label, and a list of trafficData entries, where each
trafficData element contains the latest bytes and packets values for the flow
entries configured for <protocol, port, direction> at the counter location. The
protocol can be {tcp, udp, icmp, other ip}, the port is any Layer 4 port, and the
direction can be {inbound, outbound}. (A sketch of this report structure appears
after the module list below.)
SDN Based Detection Manager: A container for pluggable SDN-based detectors.
It feeds stat reports received from the SDNStatsCollector to the plugged-in
SDN-based detectors, and it forwards to all detectors the notifications from
the AttackDecisionPoint about ended attacks (so that they can reset their
detection mechanisms). Each detector learns the normal traffic behavior of each
PN over time and notifies the AttackDecisionPoint when it detects traffic
anomalies.
Attack Decision Point: Responsible for maintaining the attack lifecycle, from
declaring a new attack to terminating diversion when an attack is considered over.
Mitigation Manager: A container for pluggable mitigation drivers. It maintains the
lifecycle of each mitigation being executed by an AMS. Each mitigation driver is
responsible for driving attack mitigations using the AMSs in its sphere of
management.
AMS Based Detector: This module is responsible for monitoring/querying attack
mitigation by AMSs.
AMS Rep: Controls the interface to AMSs.
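As a rough illustration of the data flow described above, the following sketch models a stat report from the SDN Stats Collector and a simple detector that learns a per-PN traffic baseline and flags anomalies. The field names, the moving-average baseline, and the spike threshold are simplified assumptions; Defense4All's actual detection logic is more involved.

# Hedged sketch of a Defense4All-style stat report and a baseline detector.
# Field names and the detection rule are simplified assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TrafficData:
    protocol: str      # e.g. "tcp", "udp", "icmp", "other ip"
    port: int          # Layer 4 port
    direction: str     # "inbound" or "outbound"
    bytes: int
    packets: int

@dataclass
class StatReport:
    read_time: float
    counter_spec: str
    pn_label: str                      # protected network this counter belongs to
    traffic: List[TrafficData] = field(default_factory=list)

class SimpleDetector:
    # Learns an exponential moving average of bytes per PN and flags spikes.
    def __init__(self, alpha=0.2, spike_factor=3.0):
        self.alpha = alpha
        self.spike_factor = spike_factor
        self.baseline: Dict[str, float] = {}

    def feed(self, report: StatReport) -> bool:
        total = sum(t.bytes for t in report.traffic)
        avg = self.baseline.get(report.pn_label)
        if avg is not None and total > self.spike_factor * avg:
            return True            # would notify the AttackDecisionPoint
        # Update the learned baseline for this protected network.
        self.baseline[report.pn_label] = total if avg is None else (
            (1 - self.alpha) * avg + self.alpha * total)
        return False

# Example: a steady baseline followed by a traffic spike.
det = SimpleDetector()
normal = StatReport(0.0, "ctr-1", "PN-A", [TrafficData("tcp", 80, "inbound", 1_000, 10)])
spike = StatReport(60.0, "ctr-1", "PN-A", [TrafficData("tcp", 80, "inbound", 50_000, 400)])
print(det.feed(normal), det.feed(spike))   # expect: False True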
Finally, it is worth noting that Radware has developed a commercial version of
Defense4All, named DefenseFlow. DefenseFlow implements more sophisticated
algorithms for attack detection based on fuzzy logic. The main benefit is that
DefenseFlow has a greater ability to distinguish attack traffic from abnormally
high but legitimate traffic volumes.
Big Data over SDN
SDN makes it possible to tailor the data center network to the needs of big
data servers and, more significantly, to react to changing application patterns
by using SDN to reconfigure flows in the network.
Compared to electronic switches, optical switches have the advantages of greater
data rates with reduced cabling complexity and energy consumption. A number of
projects have demonstrated how to collect network-level traffic data and
intelligently allocate optical circuits between endpoints (for example, top-of-rack
switches) to improve application performance. However, circuit utilization and
application performance can be inadequate unless there is a true application-level
view of traffic demands and dependencies. By combining an understanding of big
data computation patterns with the dynamic capabilities of SDN, efficient data
center network configurations can be constructed to support increasing big data
demands.
Consider a simple hybrid electrical and optical data center network, in which
OpenFlow-enabled top-of-rack (ToR) switches are connected to two aggregation
switches: an Ethernet switch and an optical circuit switch (OCS). All the
switches are controlled by an SDN controller that manages physical connectivity
among ToR switches over optical circuits by configuring the optical switch. It
can also manage forwarding at the ToR switches using OpenFlow rules.
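The following sketch suggests the kind of decision such a controller might make: large ("elephant") flows between ToR switch pairs are mapped onto the optical circuit switch, while the remaining traffic stays on the Ethernet aggregation switch. The threshold, port numbers, and rule format are assumptions for illustration.

# Hedged sketch: steer heavy ToR-to-ToR traffic over the optical circuit switch
# (OCS) and the rest over the packet (Ethernet) aggregation switch.
ELEPHANT_THRESHOLD_MBPS = 500       # assumed cutoff for using an optical circuit

# Hypothetical per-ToR uplink ports: port 1 -> Ethernet switch, port 2 -> OCS.
ETHERNET_UPLINK, OPTICAL_UPLINK = 1, 2

def plan_circuits(tor_demands, ocs_capacity):
    # tor_demands: {(src_tor, dst_tor): demand_mbps}. Returns circuits + rules.
    circuits, rules = [], []
    # Serve the largest demands first, as long as OCS ports remain available.
    for (src, dst), demand in sorted(tor_demands.items(), key=lambda kv: -kv[1]):
        if demand >= ELEPHANT_THRESHOLD_MBPS and len(circuits) < ocs_capacity:
            circuits.append((src, dst))
            out_port = OPTICAL_UPLINK
        else:
            out_port = ETHERNET_UPLINK
        # OpenFlow-style rule on the source ToR: match traffic to dst, set uplink.
        rules.append({"switch": src, "match": {"dst_tor": dst}, "out_port": out_port})
    return circuits, rules

demands = {("tor1", "tor2"): 900, ("tor1", "tor3"): 50, ("tor2", "tor3"): 700}
circuits, rules = plan_circuits(demands, ocs_capacity=1)
print(circuits)   # e.g. [('tor1', 'tor2')]
print(rules)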
CloudNaaS
CloudNaaS (Cloud Networking-as-a-Service) is a cloud networking framework in
which tenants can group their virtual machines (VMs) and apply network services
between groups. When a service is applied to a pair of groups, the service is
applied between any VM in the first group and any VM in the second group.
VM in the second group. Virtual networks can also connect to some predefined
groups, such as EXTERNAL, which indicates all endpoints outside of the cloud.
Its two main components are a cloud controller and a network controller. The cloud
controller provides a base Infrastructure as a Service (IaaS) service for managing
VM instances. The user can communicate standard IaaS requests, such as setting
up VMs and storage. In addition, the network policy constructs enable the user to
define the virtual network capabilities for the VMs. The cloud controller manages a
software programmable virtual switch on each physical server in the cloud that
supports network services for tenant applications, including the management of the
user-defined virtual network segments. The cloud controller constructs the
communication matrix and transmits this to the network controller.
CloudNaaS Architecture
The network controller uses the communication matrix to configure the data
plane physical and virtual switches, generating the requested virtual networks
between VMs.
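The following sketch illustrates what such a communication matrix might look like and how it could be derived from group-based policies. The policy constructs and data layout here are simplified assumptions, not the actual CloudNaaS formats.

# Hedged sketch: derive pairwise connectivity requirements (a communication
# matrix) from group-based virtual network policies. Formats are assumed.
from itertools import product

# Hypothetical tenant policy: VM groups and services applied between group pairs.
groups = {
    "web": ["vm1", "vm2"],
    "db":  ["vm3"],
}
services = [
    # (src_group, dst_group, service applied between any VM pair of the two groups)
    ("web", "db", "allow:tcp/3306"),
    ("EXTERNAL", "web", "allow:tcp/80"),   # EXTERNAL = endpoints outside the cloud
]

def build_matrix(groups, services):
    matrix = {}
    for src_g, dst_g, svc in services:
        src_vms = groups.get(src_g, ["EXTERNAL"])
        dst_vms = groups.get(dst_g, ["EXTERNAL"])
        for src, dst in product(src_vms, dst_vms):
            matrix.setdefault((src, dst), []).append(svc)
    return matrix

# The cloud controller would hand this matrix to the network controller,
# which configures physical and virtual switches accordingly.
for pair, svcs in build_matrix(groups, services).items():
    print(pair, svcs)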
Information-Centric Networking (ICN)
With ICN, a distinction exists between location and identity, thus decoupling
information from its sources. The essence of this approach is that information
sources can place, and information users can find, information anywhere in the
network, because the information is named, addressed, and matched independently
of its location. In ICN, instead of specifying a source-destination host pair for
communication, a piece of information itself is named. In ICN, after a request is
sent, the network is responsible for locating the best source that can provide the
desired information. Routing of information requests thus seeks to find the best
source for the information, based on a location-independent name.
Deploying ICN on traditional networks is challenging, because existing routing
equipment would need to be updated or replaced with ICN-enabled routing devices.
Further, ICN shifts the delivery model from host-to-user to content-to-user.
creates a need for a clear separation between the task of information demand and
supply, and the task of forwarding. SDN has the potential to provide the necessary
technology for deploying ICN because it provides for programmability of the
forwarding elements and a separation of control and data planes.
A number of projects have proposed using SDN capabilities to implement ICNs.
There is no consensus approach to achieving this coupling of SDN and ICN.
Suggested approaches include substantial enhancements/modifications to the
OpenFlow protocol, developing a mapping of names into IP addresses using a hash
function, using the IP option header as a name field, and using an abstraction layer
between an OpenFlow (OF) switch and an ICN router, so that the layer, OF switch,
and ICN router function as a single programmable ICN router.
The approach is built on an open protocol specification and a software reference
implementation of ICN known as CCNx.
CCNx
CCNx is being developed by the Palo Alto Research Center (PARC) as an open
source project, and a number of implementations have been experimentally
deployed.
Communication in CCN is via two packet types: Interest packets and Content
packets. A consumer requests content by sending an Interest packet. Any CCN
node that receives the Interest and has named data that satisfies the Interest
responds with a Content packet (also known as a Content Object). Content satisfies an
Interest if the name in the Interest packet matches the name in the Content Object
packet. If a CCN node receives an Interest, and does not already have a copy of the
requested Content, it may forward the Interest toward a source for the content. The
CCN node has forwarding tables that determine which direction to send the
Interest. A provider receiving an Interest for which it has matching named content
replies with a Content packet. Any intermediate node can optionally choose to
cache the Content Object, and it can respond with a cached copy of the Content
Object the next time it receives an Interest packet with the same name.
The basic operation of a CCN node is similar to an IP node. CCN nodes receive
and send packets over faces. A face is a connection point to an application, or
another CCN node, or some other kind of channel. A face may have attributes that
indicate expected latency and bandwidth, broadcast or multicast capability, or other
useful features. A CCN node has three main data structures:
Content Store: Holds a table of previously seen (and optionally cached) Content
packets.
Forwarding Information Base (FIB): Used to forward Interest packets toward
potential data sources.
Pending Interest Table (PIT): Used to keep track of Interests forwarded upstream
by that CCN node toward the content source so that Content packets later received
can be sent back to their requestors.
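To tie these structures together, here is a sketch of how a CCN node might process an incoming Interest and a returning Content packet using its Content Store, PIT, and FIB. The data structures and method names are simplified assumptions; real CCNx forwarding handles many more cases.

# Hedged sketch of CCN Interest/Content processing with the three data structures.
# Names and structures are simplified; real CCNx forwarding is richer.
class CCNNode:
    def __init__(self):
        self.content_store = {}   # name -> cached Content packet
        self.pit = {}             # name -> set of faces still waiting for Content
        self.fib = {}             # name prefix -> face toward a potential source

    def on_interest(self, name, in_face, send):
        # 1. Content Store: answer immediately from cache if we hold the content.
        if name in self.content_store:
            send(in_face, self.content_store[name])
            return
        # 2. PIT: if an Interest for this name is already pending, just record
        #    the extra requester; otherwise create a new PIT entry and forward.
        if name in self.pit:
            self.pit[name].add(in_face)
            return
        self.pit[name] = {in_face}
        # 3. FIB: longest-prefix match to pick the face toward a content source.
        for prefix in sorted(self.fib, key=len, reverse=True):
            if name.startswith(prefix):
                send(self.fib[prefix], ("Interest", name))
                return

    def on_content(self, name, content, send):
        # Satisfy all requesters recorded in the PIT, then (optionally) cache.
        for face in self.pit.pop(name, set()):
            send(face, content)
        self.content_store[name] = content

# Example: one consumer face ("face1") and one upstream face ("face2").
node = CCNNode()
node.fib["/videos/"] = "face2"
node.on_interest("/videos/movie", "face1", lambda f, m: print("send", m, "on", f))
node.on_content("/videos/movie", ("Content", "/videos/movie"),
                lambda f, m: print("send", m, "on", f))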
The details of how content sources become known and how routes are set up
through the CCN network are beyond our scope. Briefly, content providers
advertise names of content and routes are established through the CCN network by
cooperation among the CCN nodes.
ICN relies substantially on in-network caching, that is, caching content on the
path from content providers to requesters. This on-path caching achieves good
overall performance but is not optimal as content may be replicated on routers,
thus reducing the total volume of content that can be cached. To overcome this
limitation, off-path caching can be used, which allocates content to well-defined
off-path caches within the network and deflects the traffic off the optimal path
toward these caches, which are spread across the network. Off-path caching
improves the global hit ratio by efficiently utilizing the network-wide
available caching capacity and reduces bandwidth usage on egress links.
Use of an Abstraction Layer
The central design issue with using an SDN switch (in particular an OF switch) to
function as an ICN router is that the OF switch forwards on the basis of fields in
the IP packet, especially the destination IP address, whereas an ICN router
forwards on the basis of a content name. In essence, the proposed approach
hashes the content name into fields that an OF switch can process.
To link a CCNx node software module with an OF switch, an abstraction layer,
called the wrapper, is used. The wrapper pairs a switch interface to a CCNx face,
decodes and hashes content names in CCN messages into fields that an OF switch
can process (for example, IP addresses, port numbers). The large naming space
offered by these fields limits the probability of having collisions between two
different content names. The forwarding tables in the OF switch are set to forward
based on the contents of the hashed fields. The switch does not “know” that the
contents of these fields are no longer legitimate IP addresses, TCP port numbers,
and so forth. It forwards as always, based on the values found in the relevant fields
of incoming IP packets.
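The following sketch shows one plausible way a wrapper could hash a content name into the IPv4 destination address and TCP port fields that an OF switch matches on. The choice of hash function and field layout are assumptions; the actual wrapper implementation may differ.

# Hedged sketch: map a CCN content name onto IP/port header fields so that an
# OpenFlow switch can "forward on the name". Hash choice and layout are assumed.
import hashlib

def name_to_match_fields(content_name):
    digest = hashlib.sha256(content_name.encode()).digest()
    # Use 4 bytes for a pseudo IPv4 destination address and 2 bytes for a port.
    ip = ".".join(str(b) for b in digest[:4])
    port = int.from_bytes(digest[4:6], "big")
    return {"ipv4_dst": ip, "tcp_dst": port}

# The controller would install a flow entry matching these fields and forwarding
# toward the node where the content is cached (the deflection step).
fields = name_to_match_fields("/videos/movie")
flow_rule = {"match": fields, "actions": ["output:1"]}
print(flow_rule)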
The abstraction layer solves the problem of how to provide CCN functionality
using current OF switches. For efficient operation, two additional challenges need
to be addressed: how to measure the popularity of content accurately and without a
large overhead, and how to build and optimize routing tables to perform deflection.
To address these issues, the architecture calls for three new modules in the SDN
controller:
Measurement: Content popularity can be inferred directly from OF flow statistics.
The measurement module periodically queries and processes statistics from ingress
OF switches to return the list of most popular content.
Optimization: Uses the list of most popular contents as an input for the
optimization algorithm. The objective is to minimize the sum of the delays over
deflected contents under the following constraints: (1) each popular content is
cached at exactly one node, (2) the contents cached at a node do not exceed the
node's capacity, and (3) caching should not cause link congestion. (A compact
statement of this optimization appears below.)
Deflection: Uses the optimization results to build a mapping, for every content,
between the content name (by means of addresses and ports computed from the
content name hash) and an outgoing interface toward the node where the content is
cached (for example, ip.destination = hash(content name), action = forward to
interface 1).
Finally, mappings are installed on switches’ flow tables using the OF protocol such
that subsequent Interest packets can be forwarded to appropriate caches.
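Stated a bit more formally, the cache placement problem behind the Optimization module can be sketched as follows. The symbols (x_{cn} for placement decisions, d_{cn} for deflection delay, s_c for content size, C_n for node cache capacity) are notational assumptions, not taken from the source.

% Hedged sketch of the off-path cache placement optimization.
\begin{aligned}
\text{minimize} \quad & \sum_{c \in \mathcal{C}} \sum_{n \in \mathcal{N}} d_{cn}\, x_{cn} \\
\text{subject to} \quad & \sum_{n \in \mathcal{N}} x_{cn} = 1 \quad \forall c \in \mathcal{C}
    && \text{(each popular content cached at exactly one node)} \\
& \sum_{c \in \mathcal{C}} s_c\, x_{cn} \le C_n \quad \forall n \in \mathcal{N}
    && \text{(node cache capacity)} \\
& \text{deflected traffic respects link capacities} && \text{(no link congestion)} \\
& x_{cn} \in \{0,1\}
\end{aligned}

Here x_{cn} = 1 if content c is cached at node n, d_{cn} is the delay to reach content c when it is deflected to node n, s_c is the size of content c, and C_n is the cache capacity of node n.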
The OpenFlow switch forwards every packet it receives from other ports to the
wrapper, and the wrapper forwards it to the CCNx module. The OpenFlow switch
needs to help the wrapper identify the switch source port of the packet. To achieve
this, the OF switch is configured to set the ToS value of all packets it receives to
the corresponding incoming port value and then forward all of them to the
wrapper’s port.