

UNIT III : SDN APPLICATIONS


SDN Application Plane Architecture – Network Services Abstraction Layer –
Traffic Engineering – Measurement and Monitoring – Security – Data Center
Networking.
SDN Application Plane Architecture
The application plane contains applications and services that define, monitor, and
control network resources and behavior. These applications interact with the SDN
control plane via application-control interfaces, which allow the SDN control layer
to automatically customize the behavior and properties of network resources. The
programming of an SDN application makes use of the abstracted view of network
resources provided by the SDN control layer, by means of the information and data
models exposed via the application-control interface.

SDN Application Plane Functions and Interfaces


Northbound Interface
As described for the SDN control plane, the northbound interface enables
applications to access control plane functions and services without needing to
know the details of the underlying network switches. Typically, the northbound
interface provides an abstract view of the network resources controlled by the
software in the SDN control plane.

Sabarigirivason

With a local interface, the SDN applications run on the same server as the
control plane software (the controller network operating system). Alternatively,
the applications can run on remote systems, in which case the northbound interface
is a protocol or application programming interface (API) that connects the
applications to the controller network operating system (NOS) running on a central
server. Both architectures are likely to be implemented.
Network Services Abstraction Layer
RFC 7426 defines a network services abstraction layer between the control and
application planes and describes it as a layer that provides service abstractions that
can be used by applications and services.
This layer could provide an abstract view of network resources that hides the
details of the underlying data plane devices.
This layer could provide a generalized view of control plane functionality, so that
applications could be written that would operate across a range of controller
network operating systems.
This functionality is similar to that of a hypervisor or virtual machine monitor that
decouples applications from the underlying OS and underlying hardware.
This layer could provide a network virtualization capability that allows different
views of the underlying data plane infrastructure.
Network Applications
There are many network applications that could be implemented for an SDN.
Different published surveys of SDN have come up with different lists and even
different general categories of SDN-based network applications.
User Interface
The user interface enables a user to configure parameters in SDN applications and
to interact with applications that support user interaction.
Network Services Abstraction Layer
An abstraction layer is a mechanism that translates a high-level request into the
low-level commands required to perform the request. An API is one such
mechanism. It shields the implementation details of a lower level of abstraction
from software at a higher level. A network abstraction represents the basic
properties or characteristics of network entities (such as switches, links, ports,
and flows) in such a way that network programs can focus on the desired
functionality without having to program the detailed actions.
Abstractions in SDN

SDN Architecture and Abstractions


Forwarding Abstraction
The forwarding abstraction allows a control program to specify data plane
forwarding behavior while hiding details of the underlying switching hardware.
This abstraction supports the data plane forwarding function. By abstracting away
from the forwarding hardware, it provides flexibility and vendor neutrality.
Distribution Abstraction
This abstraction arises in the context of distributed controllers. A cooperating set of
distributed controllers maintains a state description of the network and of the
routes through the network. The distributed state of the entire network may involve
partitioned data sets, with controller instances exchanging routing information, or a
replicated data set, so that the controllers must cooperate to maintain a consistent
view of the global network.

This abstraction aims at hiding complex distributed mechanisms (used today in
many networks) and separating state management from protocol design and
implementation. It allows providing a single coherent global view of the network
through an annotated network graph accessible for control via an API. An
implementation of such an abstraction is an NOS, such as OpenDaylight or Ryu.
Specification Abstraction
The distribution abstraction provides a global view of the network as if there is a
single central controller, even if multiple cooperating controllers are used. The
specification abstraction then provides an abstract view of the global network. This
view provides just enough detail for the application to specify goals, such as
routing or security policy, without providing the information needed to implement
the goals.
Forwarding interface: An abstract forwarding model that shields higher layers
from forwarding hardware.
Distribution interface: A global network view that shields higher layers from
state dissemination/collection.
Specification interface: An abstract network view that shields application
programs from the details of the physical network.
The physical network is a collection of interconnected SDN data plane switches.
The abstract view is a single virtual switch. The physical network may consist of a
single SDN domain. Ports on edge switches that connect to other domains and to
hosts are mapped into ports on the virtual switch. At the application level, a
module can be executed to learn the media access control (MAC) addresses of hosts.
When a previously unknown host sends a packet, the application module associates
that host's address with the input port, and subsequent traffic destined for that
host is directed to that port. Similarly, if a packet arrives at one of the virtual
switch ports with an unknown destination address, the module floods that packet to
all output ports. The abstraction layer translates these actions onto the entire
physical network, performing the internal forwarding within the domain.
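The learn-then-forward behavior just described can be sketched in a few lines of Python. This is an illustrative model only: the `mac_table` dictionary, the `FLOOD` sentinel, and the `handle_packet` function are invented here to mirror the logic, and are not part of any real controller API.

```python
# Minimal sketch of MAC learning on the abstract "single virtual switch" view.
# The table maps a source MAC address to the virtual port it was learned on.
FLOOD = -1  # stand-in for "flood out all ports"

def handle_packet(mac_table, in_port, src_mac, dst_mac):
    """Learn the source address, then decide the output port."""
    mac_table[src_mac] = in_port              # associate host with its port
    return mac_table.get(dst_mac, FLOOD)      # unknown destination -> flood

table = {}
assert handle_packet(table, 1, "aa:aa", "bb:bb") == FLOOD  # bb unknown: flood
assert handle_packet(table, 2, "bb:bb", "aa:aa") == 1      # aa learned on port 1
assert handle_packet(table, 1, "aa:aa", "bb:bb") == 2      # bb now known on port 2
```

In a real deployment, the abstraction layer would translate each of these decisions into flow entries on the underlying physical switches.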


Virtualization of a Switching Fabric for MAC Learning


Frenetic
An example of a network services abstraction layer is the programming language
Frenetic. Frenetic enables network operators to program the network as a whole
instead of manually configuring individual network elements. Frenetic was
designed to solve challenges with the use of OpenFlow-based models by working
with an abstraction at the network level as opposed to OpenFlow, which directly
goes down to the network element level.
Frenetic includes an embedded query language that provides effective abstractions
for reading network state. This language is similar to SQL and includes segments
for selecting, filtering, splitting, merging and aggregating the streams of packets.
Another special feature of this language is that it enables the queries to be
composed with forwarding policies. A compiler produces the control messages
needed to query and tabulate the counters on switches.
Frenetic consists of two levels of abstraction. The upper level, which is the
Frenetic source-level API, provides a set of operators for manipulating streams of
network traffic. The query language provides means for reading the state of the
network, merging different queries, and expressing high-level predicates for
classifying, filtering, transforming, and aggregating the packet streams traversing
the network. The lower level of abstraction is provided by a run-time system that
operates in the SDN controller. It translates high-level policies and queries into
low-level flow rules and then issues the needed OpenFlow commands to install
these rules on the switches.

Frenetic Architecture
Consider the following Python program, which executes at the run-time level to
control OpenFlow switches. The program combines forwarding functionality with
web-traffic monitoring functionality:
def switch_join(s):
    pat1 = {inport: 1}
    pat2web = {inport: 2, srcport: 80}
    pat2 = {inport: 2}
    install(s, pat1, DEFAULT, [fwd(2)])
    install(s, pat2web, HIGH, [fwd(1)])
    install(s, pat2, DEFAULT, [fwd(1)])
    query_stats(s, pat2web)

def stats_in(s, xid, pat, pkts, bytes):
    print(bytes)
    sleep(30)
    query_stats(s, pat)
When a switch joins the network, the program installs three forwarding rules in the
switch for three types of traffic: traffic arriving on port 1, web traffic arriving on
port 2, and other traffic arriving on port 2. The second rule has HIGH priority and
so takes precedence over the third rule, which has default priority. The call to
query_stats generates a request for the counters associated with the pat2web rule.


When the controller receives the reply, it invokes the stats_in handler. This
function prints the statistics polled on the previous iteration of the loop, waits 30
seconds, and then issues a request to the switch for statistics matching the same
rule.
With Frenetic, these two functions can be expressed separately, as follows:
def repeater():
    rules = [Rule(inport:1, [fwd(2)]),
             Rule(inport:2, [fwd(1)])]
    register(rules)

def web_monitor():
    q = (Select(bytes) *
         Where(inport=2 & srcport=80) *
         Every(30))
    q >> Print()

def main():
    repeater()
    web_monitor()
With this code, it would be easy to change the monitor program or swap it out for
another monitor program without touching the repeater code, and similarly for the
changes to the repeater program.
Traffic Engineering
Traffic engineering is a method for dynamically analyzing, regulating, and
predicting the behavior of data flowing in networks with the aim of performance
optimization to meet service level agreements (SLAs). Traffic engineering involves
establishing routing and forwarding policies based on QoS requirements. With
SDN, the task of traffic engineering should be considerably simplified compared
with a non-SDN network. SDN offers a uniform global view of heterogeneous
equipment and powerful tools for configuring and managing network switches.
• On-demand virtual private networks
• Load balancing
• Energy-aware routing
• Quality of service (QoS) for broadband access networks
• Scheduling/optimization
• Traffic engineering with minimal overhead
• Dynamic QoS routing for multimedia apps
• Fast recovery through fast-failover groups
• QoS policy management framework
• QoS enforcement
• QoS over heterogeneous networks
• Multiple packet schedulers
• Queue management for QoS enforcement
• Divide and spread forwarding tables

PolicyCop
PolicyCop is an SDN-based QoS policy management and enforcement framework. Its
key capabilities include:
• Dynamic traffic steering
• Flexible flow-level control
• Dynamic traffic classes
• Custom flow aggregation levels
Key features of PolicyCop are that it monitors the network to detect policy
violations (based on a QoS SLA) and reconfigures the network to enforce the
violated policy. PolicyCop consists of eleven software modules and two databases,
installed in both the application plane and the control plane. It uses the SDN
control plane to monitor compliance with QoS policies and can automatically adjust
the control plane rules and the flow tables in the data plane based on dynamic
network traffic statistics.

PolicyCop Architecture


In the control plane, PolicyCop relies on four modules and a database for storing
control rules, described as follows:
Admission Control: Accepts or rejects requests from the resource provisioning
module for reserving network resources, such as queues, flow-table entries, and
capacity.
Routing: Determines path availability based on the control rules in the rule
database.
Device Tracker: Tracks the up/down status of network switches and their ports.
Statistics Collection: Uses a mix of passive and active monitoring techniques to
measure different network metrics.
Rule Database: The application plane translates high-level network-wide policies
to control rules and stores them in the rule database.
A RESTful northbound interface connects these control plane modules to the
application plane modules, which are organized into two components: a policy
validator that monitors the network to detect policy violations, and a policy
enforcer that adapts control plane rules based on network conditions and high-level
policies. Both modules rely on a policy database, which contains QoS policy rules
entered by a network manager. The modules are as follows:
Traffic Monitor: Collects the active policies from the policy database and
determines the appropriate monitoring interval, network segments, and metrics to
be monitored.
Policy Checker: Checks for policy violations, using input from the policy
database and the Traffic Monitor.
Event Handler: Examines violation events and, depending on event type, either
automatically invokes the policy enforcer or sends an action request to the network
manager.
Topology Manager: Maintains a global view of the network, based on input from
the device tracker.
Resource Manager: Keeps track of currently allocated resources using admission
control and statistics collection.

Policy Adaptation: Consists of a set of actions, one for each type of policy
violation.

TABLE Functionality of Some Example Policy Adaptation Actions (PAAs)


Resource Provisioning: This module allocates additional resources, releases
existing ones, or does both, depending on the violation event.

PolicyCop Workflow


Measurement and Monitoring


The area of measurement and monitoring applications can roughly be divided into
two categories: applications that provide new functionality for other networking
services, and applications that add value to OpenFlow-based SDNs.
An example of the first category is in the area of broadband home connections. If
the connection is to an SDN-based network, new functions can be added to the
measurement of home network traffic and demand, allowing the system to react to
changing conditions. The second category typically involves using different kinds
of sampling and estimation techniques to reduce the burden of the control plane in
the collection of data plane statistics.
Security
Applications in this area have one of two goals:
Address security concerns related to the use of SDN: SDN involves a three-layer
architecture (application, control, data) and new approaches to distributed control
and encapsulating data. All of this introduces the potential for new vectors for
attack. Threats can occur at any of the three layers or in the communication
between layers. SDN applications are needed to provide for the secure use of SDN
itself.
Use the functionality of SDN to improve network security: Although SDN
presents new security challenges for network designers and managers, it also
provides a platform for implementing consistent, centrally managed security
policies and mechanisms for the network. SDN allows the development of SDN
security controllers and SDN security applications that can provision and
orchestrate security services and mechanisms.
OpenDaylight DDoS Application
In 2014, Radware, a provider of application delivery and application security
solutions for virtual and cloud data centers, announced its contribution to the
OpenDaylight Project with Defense4All, an open SDN security application
integrated into OpenDaylight. Defense4All offers carriers and cloud providers
distributed denial of service (DDoS) detection and mitigation as a native network
service. Using the OpenDaylight SDN Controller that programs SDN-enabled
networks to become part of the DoS/DDoS protection service itself, Defense4All
enables operators to provision a DoS/DDoS protection service per virtual network
segment or per customer.
Defense4All uses a common technique for defending against DDoS attacks, which
consists of the following elements:
Collection of traffic statistics and learning of the statistical behavior of
protected objects during peacetime. The normal traffic baselines of the protected
objects are built from these collected statistics.
Detection of DDoS attack patterns as traffic anomalies deviating from normal
baselines.
Diversion of suspicious traffic from its normal path to attack mitigation systems
(AMSs) for traffic scrubbing, selective source blockage, and so on. Clean traffic
exiting out of scrubbing centers is re-injected back into the packet’s original
destination.
The underlying SDN network consists of a number of data plane switches that
support traffic among client and server devices. Defense4All operates as an
application that interacts with the controller over an OpenDaylight controller
(ODC) northbound API. Defense4All supports a user interface for network
managers that can either be a command line interface or a RESTful API. Finally,
Defense4All has an API to communicate with one or more AMSs.

OpenDaylight DDoS Application


Administrators can configure Defense4All to protect certain networks and servers,
known as protected networks (PNs) and protected objects (POs). The application
instructs the controller to install traffic-counting flows for each protocol of
each configured PO in every network location through which traffic of the subject
PO flows.
Defense4All then monitors traffic of all configured POs, summarizing readings,
rates, and averages from all relevant network locations. If it detects a deviation
from normal learned traffic behavior in a protocol (such as TCP, UDP, ICMP, or
the rest of the traffic) of a particular PO, Defense4All declares an attack against
that protocol in the subject PO. Specifically, Defense4All continuously calculates
traffic averages for the real-time traffic it measures using OpenFlow; when
real-time traffic deviates from the average by more than 80 percent, an attack is
assumed.
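The baseline-deviation test described above can be sketched as follows. Defense4All's real detection logic is more involved; here the "baseline" is a simple exponentially weighted running average, and the function names and the 0.8 threshold encoding are assumptions for illustration.

```python
# Sketch of peacetime baseline learning and attack detection.
def update_baseline(baseline, rate, weight=0.1):
    """Exponentially weighted running average of peacetime traffic rates."""
    return rate if baseline is None else (1 - weight) * baseline + weight * rate

def is_attack(baseline, rate, threshold=0.8):
    """Flag an attack when the rate deviates from baseline by > threshold."""
    if baseline is None or baseline == 0:
        return False  # no baseline learned yet: cannot judge
    return abs(rate - baseline) / baseline > threshold

assert not is_attack(100.0, 150.0)  # 50% above baseline: normal variation
assert is_attack(100.0, 190.0)      # 90% above baseline: declare attack
```

During peacetime the baseline keeps updating; once an attack is declared, the baseline would be frozen so the attack traffic does not pollute it.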
To mitigate a detected attack, Defense4All performs the following procedure:
1. It validates that the AMS device is alive and selects a live connection to it.
Currently, Defense4All is configured to work with Radware’s AMS, known as
DefensePro.
2. It configures the AMS with a security policy and normal rates of the attacked
traffic. This provides the AMS with the information needed to enforce a mitigation
policy until traffic returns to normal rates.
3. It starts monitoring and logging syslogs arriving from the AMS for the subject
traffic. As long as Defense4All continues receiving syslog attack notifications from
the AMS regarding this attack, Defense4All continues to divert traffic to the AMS,
even if the flow counters for this PO do not indicate any more attacks.
4. It maps the selected physical AMS connection to the relevant PO link. This
typically involves changing link definitions on a virtual network, using OpenFlow.
5. It installs higher-priority flow table entries so that the attack traffic flow
is redirected to the AMS, and re-injects traffic from the AMS back into the normal
traffic flow route.
When Defense4All decides that the attack is over (no attack indication from either
the flow table counters or the AMS), it reverts the previous actions: it stops
monitoring for syslogs about the subject traffic, removes the traffic diversion
flow table entries, and removes the security configuration from the AMS.
Defense4All then returns to peacetime monitoring.
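The diversion mechanism in step 5 relies on flow-entry priority: the redirect entry outranks the existing forwarding entry, so reverting is just a matter of removing it. The sketch below models this with a plain list of dictionaries; the match strings, priorities, and action names are illustrative, not OpenFlow syntax.

```python
# Sketch of priority-based traffic diversion and its reversal.
def lookup(flow_table, match):
    """Return the action of the highest-priority entry matching 'match'."""
    candidates = [e for e in flow_table if e["match"] == match]
    if not candidates:
        return "drop"
    return max(candidates, key=lambda e: e["priority"])["action"]

table = [{"match": "po1-tcp", "priority": 10, "action": "fwd:server"}]
assert lookup(table, "po1-tcp") == "fwd:server"    # peacetime path

# Attack declared: install a higher-priority entry diverting to the AMS.
table.append({"match": "po1-tcp", "priority": 100, "action": "fwd:ams"})
assert lookup(table, "po1-tcp") == "fwd:ams"       # diverted during attack

# Attack over: remove only the diversion entry; the original rule survives.
table.pop()
assert lookup(table, "po1-tcp") == "fwd:server"    # reverted
```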


Defense4All Software Architecture Detail


Web (REST) Server: Interface to the network manager.
Framework Main: Mechanism to start, stop, or reset the framework.
Framework REST Service: Responds to user requests received through the web
(REST) server.
Framework Management Point: Coordinates and invokes control and
configuration commands.
Defense4All Application: Described subsequently.
Common Classes and Utilities: A library of convenient classes and utilities from
which any framework or SDN application module can benefit.
Repository Services: One of the key elements in the framework philosophy is
decoupling the compute state from the compute logic. All durable states are stored
in a set of repositories that can be then replicated, cached, and distributed, with no
awareness of the compute logic (framework or application).
Logging and Flight Recorder Services: The logging service logs error, warning,
trace, and informational messages; these logs are mainly for Defense4All
developers. The Flight Recorder records events and metrics at run time from
Java applications.
Health Tracker: Holds aggregated run-time indicators of the operational health of
Defense4All and acts in response to severe functional or performance
deteriorations.
Cluster Manager: Responsible for managing coordination with other Defense4All
entities operating in a cluster mode.
The Defense4All Application module consists of the following elements.
DF App Root: The root module of the application.
DF Rest Service: Responds to Defense4All application REST requests.
DF Management Point: The point to drive control and configuration commands.
DFMgmtPoint in turn invokes methods against other relevant modules in the right
order.


ODL Reps: A pluggable module set for different versions of the ODC. It comprises
two submodules: one for collecting statistics on relevant traffic and one for
diverting that traffic.
SDN Stats Collector: Responsible for setting “counters” for every PN at specified
network locations (physical or logical). A counter is a set of OpenFlow flow entries
in ODC-enabled network switches and routers. The module periodically collects
statistics from those counters and feeds them to the SDNBasedDetectionMgr. The
module uses the SDNStatsCollectionRep to both set the counters and read latest
statistics from those counters. A stat report consists of read time, counter
specification, PN label, and a list of trafficData information, where each trafficData
element contains the latest bytes and packet values for flow entries configured for
<protocol,port,direction> in the counter location. The protocol can be
{tcp,udp,icmp,other ip}, the port is any Layer 4 port, and the direction can be
{inbound, outbound}.
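The shape of a stats report as described above can be modeled with a couple of small data classes. The class and field names here are assumptions for illustration, not Defense4All's actual Java types.

```python
# Illustrative model of an SDN Stats Collector report: read time, counter
# specification, PN label, and a list of per-<protocol,port,direction>
# traffic readings.
from dataclasses import dataclass, field

@dataclass
class TrafficData:
    protocol: str    # one of "tcp", "udp", "icmp", "other ip"
    port: int        # Layer 4 port
    direction: str   # "inbound" or "outbound"
    bytes: int       # latest byte counter value
    packets: int     # latest packet counter value

@dataclass
class StatsReport:
    read_time: float
    counter_spec: str
    pn_label: str
    traffic: list = field(default_factory=list)

report = StatsReport(0.0, "tor-1", "pn-web",
                     [TrafficData("tcp", 80, "inbound", 12000, 40)])
assert report.traffic[0].protocol == "tcp"
```

Reports of this shape are what the SDNStatsCollector periodically feeds to the SDNBasedDetectionMgr.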
SDN Based Detection Manager: A container for pluggable SDN-based detectors.
It feeds stat reports received from the SDNStatsCollector to the plugged-in
SDN-based detectors. It also forwards to these detectors notifications from the
AttackDecisionPoint about ended attacks (so that the detection mechanisms can
reset). Each detector learns each PN's normal traffic behavior over time and
notifies the AttackDecisionPoint when it detects traffic anomalies.
Attack Decision Point: Responsible for maintaining attack lifecycle, from
declaring a new attack, to terminating diversion when an attack is considered over.
Mitigation Manager: A container for pluggable mitigation drivers. It maintains the
lifecycle of each mitigation being executed by an AMS. Each mitigation driver is
responsible for driving attack mitigations using the AMSs in its sphere of
management.
AMS Based Detector: This module is responsible for monitoring/querying attack
mitigation by AMSs.
AMS Rep: Controls the interface to AMSs.
Finally, it is worth noting that Radware has developed a commercial version of
Defense4All, named DefenseFlow. DefenseFlow implements more sophisticated
attack-detection algorithms based on fuzzy logic. The main benefit is that
DefenseFlow is better able to distinguish attack traffic from an abnormal but
legitimate high volume of traffic.


Data Center Networking


So far we’ve discussed three areas of SDN applications: traffic engineering,
measurement and monitoring, and security. The provided examples of these
applications suggest the broad range of use cases for them, in many different kinds
of networks. The remaining three applications areas (data center networking,
mobility and wireless, and information-centric networking) have use cases in
specific types of networks.
Cloud computing, big data, large enterprise networks, and even in many cases,
smaller enterprise networks, depend strongly on highly scalable and efficient data
centers. [KREU15] lists the following as key requirements for data centers: high
and flexible cross-section bandwidth and low latency, QoS based on the
application requirements, high levels of resilience, intelligent resource utilization
to reduce energy consumption and improve overall efficiency, and agility in
provisioning network resources (for example, by means of network virtualization
and orchestration with computing and storage).
With traditional network architectures, many of these requirements are difficult to
satisfy because of the complexity and inflexibility of the network. SDN offers the
promise of substantial improvement in the ability to rapidly modify data center
network configurations, to flexibly respond to user needs, and to ensure efficient
operation of the network.
The remainder of this subsection examines two example data center SDN
applications.
Big Data over SDN
One approach leverages the capabilities of SDN to provide application-aware
networking. It also exploits the characteristics of structured big data
applications as well as recent trends in dynamically reconfigurable optical
circuits. With respect to
structured big data applications, many of these applications process data according
to well-defined computation patterns, and also have a centralized management
structure that makes it possible to leverage application-level information to
optimize the network. That is, knowing the anticipated computation patterns of the
big data application, it is possible to intelligently deploy the data across the
big data servers and, more significantly, react to changing application patterns
by using
SDN to reconfigure flows in the network.
Compared to electronic switches, optical switches have the advantages of greater
data rates with reduced cabling complexity and energy consumption. A number of
projects have demonstrated how to collect network-level traffic data and
intelligently allocate optical circuits between endpoints (for example, top-of-rack
switches) to improve application performance. However, circuit utilization and
application performance can be inadequate unless there is a true application-level
view of traffic demands and dependencies. Combining an understanding of the big
data computation patterns with the dynamic capabilities of SDN, efficient data
center networking configurations can be used to support the increasing big data
demands.
Consider a simple hybrid electrical and optical data center network in which
OpenFlow-enabled top-of-rack (ToR) switches are connected to two aggregation
switches: an Ethernet switch and an optical circuit switch (OCS). All the switches
are controlled by an SDN controller, which manages physical connectivity among ToR
switches over optical circuits by configuring the optical switch. It can also
manage forwarding at the ToR switches using OpenFlow rules.

Integrated Network Control for Big Data Applications


The SDN controller is also connected to the Hadoop scheduler, which forms queues
of jobs to be scheduled, and to the HBase controller of a distributed database
holding data for the big data applications. In addition, the SDN controller
connects to a Mesos cluster manager. Mesos is an open source software package that
provides scheduling and resource allocation services across distributed
applications.
The SDN controller makes network topology and traffic information available to
the Mesos cluster manager. In turn, the SDN controller accepts traffic demand
requests from the Mesos managers.
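The core allocation decision in such a hybrid network can be sketched simply: given application-level traffic demands between ToR pairs (for example, derived from a big data job's shuffle pattern), assign the limited optical circuits to the heaviest pairs and leave the rest on the electrical aggregation switch. This is purely illustrative; real systems also handle circuit setup delay and topology constraints.

```python
# Sketch of demand-driven optical circuit allocation between ToR switches.
def allocate_circuits(demands, num_circuits):
    """demands: {(src_tor, dst_tor): traffic_volume}
       Returns the set of ToR pairs granted an optical circuit."""
    ranked = sorted(demands, key=demands.get, reverse=True)  # heaviest first
    return set(ranked[:num_circuits])

demands = {("tor1", "tor2"): 900, ("tor1", "tor3"): 50, ("tor2", "tor3"): 400}
assert allocate_circuits(demands, 2) == {("tor1", "tor2"), ("tor2", "tor3")}
```

Because the SDN controller knows the anticipated computation pattern from the application layer, it can re-run this allocation whenever the demand matrix changes rather than reacting only to measured traffic.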
Cloud Networking over SDN
Cloud Network as a Service (CloudNaaS) is a cloud networking system that
exploits OpenFlow SDN capabilities to provide a greater degree of control over
cloud network functions by the cloud customer [BENS11]. CloudNaaS enables
users to deploy applications that include a number of network functions, such as
virtual network isolation, custom addressing, service differentiation, and flexible
interposition of various middleboxes. CloudNaaS primitives are directly
implemented within the cloud infrastructure itself using high-speed programmable
network elements, making CloudNaaS highly efficient.

Various Steps in the CloudNaaS Framework


a. A cloud customer uses a simple policy language to specify network services
required by the customer applications. These policy statements are issued to a
cloud controller server operated by the cloud service provider.
b. The cloud controller maps the network policy into a communication matrix that
defines desired communication patterns and network services. The matrix is used
to determine the optimal placement of virtual machines (VMs) on cloud servers
such that the cloud can satisfy the largest number of global policies in an efficient
manner. This is done based on the knowledge of other customers’ requirements and
their current levels of activity.
c. The logical communication matrix is translated into network-level directives for
data plane forwarding elements. The customer’s VM instances are deployed by
creating and placing the specified number of VMs.
d. The network-level directives are installed into the network devices via
OpenFlow.
The abstract network model seen by the customer consists of VMs and virtual
network segments that connect VMs together. Policy language constructs identify
the set of VMs that comprise an application and define various functions and
capabilities attached to virtual network segments. The main constructs are as
follows:
address: Specify a customer-visible custom address for a VM.
group: Create a logical group of one or more VMs. Grouping VMs with similar
functions makes it possible for modifications to apply across the entire group
without requiring changing the service attached to individual VMs.
middlebox: Name and initialize a new virtual middlebox by specifying its type and
a configuration file. The list of available middleboxes and their configuration
syntax is supplied by the cloud provider. Examples include intrusion detection and
audit compliance systems.
networkservice: Specify capabilities to attach to a virtual network segment, such
as a Layer 2 broadcast domain, link QoS, and a list of middleboxes that must be
traversed.
virtualnet: Virtual network segments connect groups of VMs and are associated
with network services. A virtual network can span one or two groups. With a single
group, the service applies to traffic between all pairs of VMs in the group. With
a pair of groups, the service is applied between any VM in the first group and any
VM in the second group. Virtual networks can also connect to some predefined
groups, such as EXTERNAL, which indicates all endpoints outside of the cloud.
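Step (b) of the framework, mapping the policy into a communication matrix, can be sketched for the virtualnet construct: a virtual network over one group expands to all VM pairs within it, and over two groups to every cross-group pair. The function and dictionary shapes below are assumptions for illustration, not CloudNaaS's actual implementation.

```python
# Sketch: expand a 'virtualnet' over one or two VM groups into
# (src VM, dst VM, service) entries of the communication matrix.
def expand_virtualnet(groups, net):
    """groups: {group_name: [vm, ...]}
       net: {"groups": [name] or [name1, name2], "service": service_name}"""
    g = net["groups"]
    if len(g) == 1:  # service between all pairs of VMs within the group
        vms = groups[g[0]]
        pairs = [(a, b) for a in vms for b in vms if a != b]
    else:            # service between any VM in group 1 and any VM in group 2
        pairs = [(a, b) for a in groups[g[0]] for b in groups[g[1]]]
    return [(src, dst, net["service"]) for src, dst in pairs]

groups = {"web": ["vm1", "vm2"], "db": ["vm3"]}
net = {"groups": ["web", "db"], "service": "qos_gold"}
assert expand_virtualnet(groups, net) == [
    ("vm1", "vm3", "qos_gold"), ("vm2", "vm3", "qos_gold")]
```

The resulting matrix is what the cloud controller hands to the network controller for VM placement and flow-rule generation.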
The CloudNaaS architecture has two main components: a cloud controller and a
network controller. The cloud
controller provides a base Infrastructure as a Service (IaaS) service for managing
VM instances. The user can communicate standard IaaS requests, such as setting
up VMs and storage. In addition, the network policy constructs enable the user to
define the virtual network capabilities for the VMs. The cloud controller manages a
software programmable virtual switch on each physical server in the cloud that
supports network services for tenant applications, including the management of the
user-defined virtual network segments. The cloud controller constructs the
communication matrix and transmits this to the network controller.

CloudNaaS Architecture
The network controller uses the communication matrix to configure data plane
physical and virtual switches. It generates virtual networks between VMs and
provides VM placement directives to the cloud controller. It monitors the traffic
and performance on the cloud data plane switches and makes changes to the
network state as needed to optimize use of resources to meet tenant requirements.
The controller invokes the placement optimizer to determine the best location to
place VMs within the cloud (and reports it to the cloud controller for provisioning).
The controller then uses the network provisioner module to generate the set of
configuration commands for each of the programmable devices in the network and
configures them accordingly to instantiate the tenant’s virtual network segment.
Thus, CloudNaaS gives the cloud customer the ability to go beyond simply
requesting processing and storage resources, to defining a virtual network of VMs
and controlling the service and QoS requirements of that virtual network.
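The single-group and pair-of-groups semantics of virtualnet, and the logical communication matrix the cloud controller derives from them, can be sketched in a few lines of Python. This is a hypothetical illustration, not CloudNaaS code; the function name and data shapes are assumptions:

```python
def communication_matrix(groups, virtualnets):
    """Derive (src_vm, dst_vm, service) entries from virtualnet declarations.

    groups: dict mapping group name -> list of VM names.
    virtualnets: list of (service, member_groups) tuples, where
    member_groups contains one or two group names.
    """
    matrix = []
    for service, members in virtualnets:
        if len(members) == 1:
            # Single group: the service applies between all pairs
            # of VMs in that group.
            vms = groups[members[0]]
            pairs = [(a, b) for a in vms for b in vms if a != b]
        else:
            # Pair of groups: the service applies between any VM in
            # the first group and any VM in the second.
            g1, g2 = groups[members[0]], groups[members[1]]
            pairs = [(a, b) for a in g1 for b in g2]
        matrix.extend((src, dst, service) for src, dst in pairs)
    return matrix

# Example: a two-group QoS service and a single-group broadcast service.
groups = {"web": ["vm1", "vm2"], "db": ["vm3"]}
vnets = [("qos:gold", ["web", "db"]), ("l2broadcast", ["web"])]
matrix = communication_matrix(groups, vnets)
```

The resulting matrix is what the cloud controller would hand to the network controller, which translates each entry into forwarding rules.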
Mobility and Wireless
In addition to all the traditional performance, security, and reliability requirements
of wired networks, wireless networks impose a broad range of new requirements
and challenges. Mobile users are continuously generating demands for new
services with high quality and efficient content delivery independent of location.
Network providers must deal with problems related to managing the available
spectrum, implementing handover mechanisms, performing efficient load
balancing, responding to QoS and QoE requirements, and maintaining security.
SDN can provide much-needed tools for the mobile network provider and in recent
years a number of SDN-based applications for wireless network providers have
been designed.
SDN support for wireless network providers is an area of intense activity, and a
wide range of application offerings is likely to continue to appear.
Information-Centric Networking
Information-centric networking (ICN), also known as content-centric networking,
has received significant attention in recent years, mainly driven by the fact that
distributing and manipulating information has become the major function of the
Internet today. Unlike the traditional host-centric networking paradigm where
information is obtained by contacting specified named hosts, ICN is aimed at
providing native network primitives for efficient information retrieval by directly
naming and operating on information objects.


With ICN, a distinction exists between location and identity, thus decoupling
information from its sources. The essence of this approach is that information
sources can place, and information users can find, information anywhere in the
network, because the information is named, addressed, and matched independently
of its location. In ICN, instead of specifying a source-destination host pair for
communication, a piece of information itself is named. In ICN, after a request is
sent, the network is responsible for locating the best source that can provide the
desired information. Routing of information requests thus seeks to find the best
source for the information, based on a location-independent name.
Deploying ICN on traditional networks is challenging, because existing routing
equipment would need to be updated or replaced with ICN-enabled routing devices.
Further, ICN shifts the delivery model from host-to-user to content-to-user. This
creates a need for a clear separation between the task of information demand and
supply, and the task of forwarding. SDN has the potential to provide the necessary
technology for deploying ICN because it provides for programmability of the
forwarding elements and a separation of control and data planes.
A number of projects have proposed using SDN capabilities to implement ICNs.
There is no consensus approach to achieving this coupling of SDN and ICN.
Suggested approaches include substantial enhancements/modifications to the
OpenFlow protocol, developing a mapping of names into IP addresses using a hash
function, using the IP option header as a name field, and using an abstraction layer
between an OpenFlow (OF) switch and an ICN router, so that the layer, OF switch,
and ICN router function as a single programmable ICN router.
The approach is built on an open protocol specification and a software reference
implementation of ICN known as CCNx.
CCNx
CCNx is being developed by the Palo Alto Research Center (PARC) as an open
source project, and a number of implementations have been experimentally
deployed.
CCNx
Communication in CCN is via two packet types: Interest packets and Content
packets. A consumer requests content by sending an Interest packet. Any CCN
node that receives the Interest and has named data that satisfies the Interest
responds with a Content packet (also known as a Content Object). Content satisfies an
Interest if the name in the Interest packet matches the name in the Content Object
packet. If a CCN node receives an Interest, and does not already have a copy of the
requested Content, it may forward the Interest toward a source for the content. The
CCN node has forwarding tables that determine which direction to send the
Interest. A provider receiving an Interest for which it has matching named content
replies with a Content packet. Any intermediate node can optionally choose to
cache the Content Object, and it can respond with a cached copy of the Content
Object the next time it receives an Interest packet with the same name.
The basic operation of a CCN node is similar to an IP node. CCN nodes receive
and send packets over faces. A face is a connection point to an application, or
another CCN node, or some other kind of channel. A face may have attributes that
indicate expected latency and bandwidth, broadcast or multicast capability, or other
useful features. A CCN node has three main data structures:
Content Store: Holds a table of previously seen (and optionally cached) Content
packets.
Forwarding Information Base (FIB): Used to forward Interest packets toward
potential data sources.
Pending Interest Table (PIT): Used to keep track of Interests forwarded upstream
by that CCN node toward the content source so that Content packets later received
can be sent back to their requestors.
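These three data structures and the Interest/Content handling described above can be sketched in illustrative Python. This is greatly simplified (exact-match names, integer faces, unconditional caching) and the class and method names are assumptions, not CCNx APIs:

```python
class CCNNode:
    def __init__(self, fib):
        self.content_store = {}  # name -> previously seen (cached) content
        self.fib = fib           # name -> face toward a potential source
        self.pit = {}            # name -> set of faces awaiting that content

    def receive_interest(self, name, in_face):
        if name in self.content_store:
            # Satisfy the Interest from the local Content Store.
            return ("content", in_face, self.content_store[name])
        # Record the pending Interest and forward it toward a source.
        self.pit.setdefault(name, set()).add(in_face)
        return ("interest", self.fib[name], name)

    def receive_content(self, name, content):
        # Optionally cache the Content, then satisfy all pending Interests.
        self.content_store[name] = content
        waiting = self.pit.pop(name, set())
        return [("content", face, content) for face in sorted(waiting)]
```

For example, the first Interest for a name is forwarded via the FIB and logged in the PIT; when the Content arrives it is cached and returned to the waiting face, and any later Interest for the same name is answered directly from the Content Store.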
The details of how content sources become known and how routes are set up
through the CCN network are beyond our scope. Briefly, content providers
advertise names of content and routes are established through the CCN network by
cooperation among the CCN nodes.
ICN relies substantially on in-network caching—that is, to cache content on the
path from content providers to requesters. This on-path caching achieves good
overall performance but is not optimal as content may be replicated on routers,
thus reducing the total volume of content that can be cached. To overcome this
limitation, off-path caching can be used, which allocates content to well-defined
off-path caches within the network and deflects the traffic off the optimal path
toward these caches that are spread across the network. Off-path caching improves
the global hit ratio by efficiently utilizing the network-wide available caching
capacity, and it reduces bandwidth usage on egress links.
Use of an Abstraction Layer


The central design issue with using an SDN switch (in particular an OF switch) to
function as an ICN router is that the OF switch forwards on the basis of fields in
the IP packet, especially the destination IP address, and an ICN router forwards on
the basis of a content name. In essence, the proposed approach hashes the content
name into fields that an OF switch can process.
To link a CCNx node software module with an OF switch, an abstraction layer,
called the wrapper, is used. The wrapper pairs a switch interface to a CCNx face,
decodes and hashes content names in CCN messages into fields that an OF switch
can process (for example, IP addresses, port numbers). The large naming space
offered by these fields limits the probability of having collisions between two
different content names. The forwarding tables in the OF switch are set to forward
based on the contents of the hashed fields. The switch does not “know” that the
contents of these fields are no longer legitimate IP addresses, TCP port numbers,
and so forth. It forwards as always, based on the values found in the relevant fields
of incoming IP packets.
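The hashing idea can be sketched as follows. This is an assumption-laden illustration: the actual choice of hash function and of which header fields carry the hashed name may differ from what the real system uses:

```python
import hashlib
import ipaddress

def name_to_fields(content_name):
    """Hash a content name into an IPv4 address and a port number,
    so an unmodified OF switch can match on them as ordinary header fields."""
    digest = hashlib.sha256(content_name.encode()).digest()
    # First 4 bytes -> a pseudo IPv4 address, next 2 bytes -> a pseudo port.
    addr = str(ipaddress.IPv4Address(int.from_bytes(digest[:4], "big")))
    port = int.from_bytes(digest[4:6], "big")
    return addr, port
```

Because the combined address and port space is large, collisions between two different content names are improbable, which is the property the text relies on.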


ICN Wrapper Approach

The abstraction layer solves the problem of how to provide CCN functionality
using current OF switches. For efficient operation, two additional challenges need
to be addressed: how to measure the popularity of content accurately and without a
large overhead, and how to build and optimize routing tables to perform deflection.
To address these issues, the architecture calls for three new modules in the SDN
controller:
Measurement: Content popularity can be inferred directly from OF flow statistics.
The measurement module periodically queries and processes statistics from ingress
OF switches to return the list of most popular content.
Optimization: Uses the list of most popular contents as an input for the
optimization algorithm. The objective is to minimize the sum of the delays over
deflected contents under the following constraints: (1) each popular content is
cached at exactly one node, (2) the contents cached at a node do not exceed the node’s
capacity, and (3) caching should not cause link congestion.
Deflection: Uses the optimization results to build a mapping, for every content,
between the content name (by means of addresses and ports computed from the
content name hash) and an outgoing interface toward the node where the content is
cached (for example, ip.destination = hash(content name), action = forward to
interface 1).
Finally, mappings are installed on switches’ flow tables using the OF protocol such
that subsequent Interest packets can be forwarded to appropriate caches.
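The deflection module's output can be sketched as a per-content mapping from hashed destination address to outgoing interface. This is a hedged illustration under assumed names and an assumed hash scheme, not the controller's real API:

```python
import hashlib
import ipaddress

def hashed_dst(content_name):
    """Hash a content name into a pseudo IPv4 destination address."""
    digest = hashlib.sha256(content_name.encode()).digest()
    return str(ipaddress.IPv4Address(int.from_bytes(digest[:4], "big")))

def deflection_rules(placement, out_iface):
    """Build flow-rule mappings at one switch.

    placement: content name -> node chosen by the optimizer to cache it.
    out_iface: node -> this switch's outgoing interface toward that node.
    """
    return {
        hashed_dst(name): {"action": "forward", "interface": out_iface[node]}
        for name, node in placement.items()
    }
```

Installing each mapping as an OpenFlow rule (match on ip.destination = hash(content name), action = forward to the given interface) then steers subsequent Interests for popular content toward their off-path caches.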
The OpenFlow switch forwards every packet it receives from other ports to the
wrapper, and the wrapper forwards it to the CCNx module. The OpenFlow switch
needs to help the wrapper identify the switch source port of the packet. To achieve
this, the OF switch is configured to set the ToS value of all packets it receives to
the corresponding incoming port value and then forward all of them to the
wrapper’s port.


Packet Flow Between CCNx and OpenFlow Switch


The wrapper maps a face of CCNx to an interface (that is, a port) of the OpenFlow
switch using the ToS value. Face W is a special face between the wrapper and the
CCNx module. W receives every Content packet from the wrapper and is
used to send every Interest packet from CCNx to the wrapper.
For an Interest packet, the wrapper extracts the face value from the ToS field
and forwards the packet to the corresponding CCNx face. If the CCNx node
holds a copy of the requested content, it composes a Content packet and
returns it back to the incoming face. Otherwise, it forwards this Interest to
face W and updates its PIT accordingly. Upon Content packet arrival from
the OF switch, the wrapper forwards it directly to face W.
The wrapper processes packets received from the CCNx module as follows.
For Content packets, it sets the ToS field to specify the output port.
Then, for any packet, it decodes the packet to extract the associated
content name. The name is hashed and the source IP address of the
packet is set to correspond to the hashed value. Finally, the wrapper forwards
the packets to the OF switch. Content packets are returned to their
corresponding incoming face, while Interest packets have the ToS value set to
zero so that the OF switch forwards them to the next hop.
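This ToS bookkeeping can be sketched in illustrative Python. The packet representation (a dict) and the function names are assumptions made for the sketch:

```python
FACE_W = "W"  # special face between the wrapper and the CCNx module

def switch_to_wrapper(pkt, in_port):
    """OF switch side: tag the packet's ToS with its ingress port."""
    pkt["tos"] = in_port
    return pkt

def wrapper_to_ccnx_face(pkt):
    """Wrapper side: deliver to the CCNx face paired with the tagged port."""
    return pkt["tos"]

def wrapper_from_ccnx(pkt, kind, out_port=None):
    """Wrapper side: packets coming back from the CCNx module.

    Content goes back to its incoming face (ToS = output port);
    Interests get ToS = 0, meaning 'forward to next hop'.
    """
    pkt["tos"] = out_port if kind == "content" else 0
    return pkt
```

For example, an Interest arriving on switch port 2 is tagged with ToS 2, delivered to face 2 of CCNx, and the eventual Content packet leaves the wrapper with ToS 2 so the switch can return it on the same port.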
