SDN Unit V
NFV Functionality
NFV Infrastructure
The heart of the NFV architecture is a collection of resources and functions
known as the NFV infrastructure (NFVI).
NFV Domains
Compute domain: Provides commercial off-the-shelf (COTS) high-volume
servers and storage.
Hypervisor domain: Mediates the resources of the compute domain to the VMs
of the software appliances, providing an abstraction of the hardware.
Infrastructure network domain (IND): Comprises all the generic high-volume
switches interconnected into a network that can be configured to supply
infrastructure network services.
Container Interface
The ETSI documents make a distinction between a functional block interface and
a container interface, as follows:
Functional block interface: An interface between two blocks of software that
perform separate (perhaps identical) functions. The interface allows
communication between the two blocks. The two functional blocks may or may
not be on the same physical host.
Container interface: An execution environment on a host system within which
a functional block executes. The functional block is on the same physical host as
the container that provides the container interface.
Application plane SDN functions may execute on the same host as the control
plane functions but may also execute remotely on another host. The numbers in
the second column of the table correspond to the numbered arrows in the figure.
Interfaces 4, 6, 7, and 12 are container interfaces, so that the components on both
sides of the interface execute on the same host. Interfaces 3, 8, 9, 10, 11, and
14 are functional block interfaces and, in most cases, the functional blocks on the
two sides of the interface execute on different hosts. The NFV documents
anticipate that NFV will typically be introduced into an enterprise facility over
time, so that interaction with non-NFV networks is necessary.
When a VNF is composed of multiple VNFCs, it is not necessary that all the VNFCs
execute in the same host. The VNFCs can be distributed across multiple compute
nodes interconnected by network hosts forming the infrastructure network
domain.
Logical Structure of NFVI Domains
The ISG NFV standards documents lay out the logical structure of the NFVI
domains and their interconnections. The specifics of the actual implementation of
the elements of this architecture will evolve in both open source and proprietary
implementation efforts. The deployed physical resources are in the compute
domain and encompass the following types of compute domain nodes:
Compute node: A functional entity capable of executing a generic
computational instruction set (each instruction being fully atomic and
deterministic) in such a way that the execution cycle time is on the order of units
to tens of nanoseconds, irrespective of what specific state is required for cycle
execution. In practical terms, this defines a compute node in terms of memory
access time. A distributed system cannot meet this requirement, because the time
taken to access state stored in remote memory is too great.
Gateway node: A single identifiable, addressable, and manageable element
within an NFVI-Node that implements gateway functions. Gateway functions
provide the interconnection between NFVI-PoPs and the transport networks.
They also connect virtual networks to existing network components. A gateway
may process packets going between different networks, such as removing
headers and adding headers. A gateway may operate at the transport level,
dealing with IP and data-link packets, or at the application level.
Storage node: A single identifiable, addressable, and manageable element
within an NFVI-Node that provides storage resources using compute, storage, and
networking functions. Storage may be physically implemented in a variety of
ways. It could, for example, be implemented as a component within a compute
node. An alternative approach is to implement storage nodes independent of the
compute nodes, as physical nodes within the NFVI-Node. An example of such a
storage node is a physical device accessible via a remote storage technology,
such as Network File System (NFS) or Fibre Channel.
Network node: A single identifiable, addressable, and manageable element
within an NFVI-Node that provides networking (switching/routing) resources
using compute, storage, and network forwarding functions.
A compute domain within an NFVI node will often be deployed as a number of
interconnected physical devices. Physical compute domain nodes may include a
number of physical resources, such as a multicore processor, memory
subsystems, and NICs. An interconnected set of these nodes comprise one NFVI-
Node and constitutes one NFVI point of presence (NFVI-PoP). An NFV provider
might maintain a number of NFVI-PoPs at distributed locations, providing service
to a variety of users, each of whom could implement their VNF software on
compute domain nodes at various NFVI-PoP locations.
The infrastructure network domain provides the container interfaces for
virtual network entities. The [Vn-Nf]/N reference point is the virtual
network (VN) container interface (for example, a link or a LAN) for carrying
communication between VNFC instances. Note that a single VN can support
communication between more than a single pairing of VNFC instances (for
example, a LAN).
There is an important distinction to be made between the virtualization function
provided by the hypervisor domain and that provided by the infrastructure
network domain. Virtualization in the hypervisor domain uses VM technology to
create an execution environment for individual VNFCs. Virtualization in IND
creates virtual networks for interconnecting VNFCs with each other and with
network nodes outside the NFV ecosystem. These latter types of nodes are called
physical network functions (PNFs).
Virtual Networks
Before proceeding, we need to clarify how the term virtual network is used in the
ISG NFV documents. In general terms, a virtual network is an abstraction of
physical network resources as seen by some upper software layer. Virtual
network technology enables a network provider to support multiple virtual
networks that are isolated from one another. Users of a single virtual network are
not aware of the details of the underlying physical network or of the other virtual
network traffic sharing the physical network resources. Two common approaches
for creating virtual networks are (1) protocol-based methods that define virtual
networks based on fields in protocol headers, and (2) virtual-machine-based
methods, in which networks are created among a set of VMs by the hypervisor.
The NFVI network virtualization combines both these forms.
L2 Versus L3 Virtual Networks
Protocol-based virtual networks can be classified by whether they are defined at
protocol Layer 2 (L2), which is typically the LAN media access control (MAC)
layer, or Layer 3 (L3), which is typically the Internet Protocol (IP). With an L2 VN,
a virtual LAN is identified by a field in the MAC header, such as the MAC address
or a virtual LAN ID field inserted into the header. Normally, an IP router will strip
off the MAC header of incoming Ethernet frames and insert a new MAC header
when forwarding the packet to the next network. The L2 VN could be extended
across this router only if the router had additional capability to support the L2
VN, such as being able to reinsert the virtual LAN ID field in the outgoing MAC
frame.
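The VLAN ID field described above can be made concrete. The following sketch (illustrative only; the sample frame bytes are invented) parses the 12-bit VLAN ID from an 802.1Q-tagged Ethernet frame:

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that signals an 802.1Q VLAN tag

def parse_vlan_id(frame: bytes):
    """Return the 12-bit VLAN ID if the Ethernet frame carries an
    802.1Q tag, else None. Layout: dst MAC (6) | src MAC (6) |
    TPID (2) | TCI (2) | ... for a tagged frame."""
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype != TPID_8021Q:
        return None            # untagged frame: no VLAN ID present
    (tci,) = struct.unpack_from("!H", frame, 14)
    return tci & 0x0FFF        # low 12 bits of the TCI are the VLAN ID

# A tagged frame for VLAN 42 (MAC addresses are zeroed placeholders)
tagged = bytes(6) + bytes(6) + struct.pack("!HH", TPID_8021Q, 42) + b"payload"
print(parse_vlan_id(tagged))   # -> 42
```

A router that supports the L2 VN would, as the text notes, have to reinsert this tag when building the outgoing MAC header.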
In the infrastructure-based VN approach, each VNF is assigned its own unique
IP address that does not overlap with any other address of elements within the
NFVI.
Logical partitioning of the VNFs into their VNs is achieved by managing access
control lists in the L3 forwarding function in each compute node.
The L3 forwarding between VNFs and the physical fabric can then be handled
by the L3 forwarding information base running on the hosting compute node.
Control plane solutions, such as Border Gateway Protocol (BGP), can be used to
advertise reachability of the VNFs to other compute hosts.
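The ACL-and-FIB scheme just described can be sketched as follows. All addresses, VN names, and node names are hypothetical, and a real compute node would populate its FIB from BGP advertisements rather than a static table:

```python
# Hypothetical data illustrating ACL-based logical partitioning of VNFs.
vn_membership = {                    # VN name -> set of member VNF IPs
    "vn-blue": {"10.0.1.1", "10.0.1.2"},
    "vn-red":  {"10.0.2.1", "10.0.2.2"},
}

fib = {                              # dest VNF IP -> hosting compute node
    "10.0.1.2": "compute-node-2",    # e.g. learned via BGP advertisements
    "10.0.2.2": "compute-node-3",
}

def forward(src_ip: str, dst_ip: str):
    """Forward only if src and dst belong to the same virtual network
    (the access-control check), then consult the FIB for the next hop."""
    same_vn = any(src_ip in m and dst_ip in m for m in vn_membership.values())
    if not same_vn:
        return None                  # dropped: partitioning enforced by ACL
    return fib.get(dst_ip)

print(forward("10.0.1.1", "10.0.1.2"))  # -> compute-node-2
print(forward("10.0.1.1", "10.0.2.2"))  # -> None (different VNs)
```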
The other two approaches are referred to as layered virtual network approaches.
These approaches allow overlapping address spaces. That is, a VNF may
participate in more than one VN using the same IP address. The virtualization
layer of the IND essentially creates private topologies on the underlying NFVI
network fabric, using either virtual overlays or virtual partitioning.
The virtual overlay VN uses the concept of an overlay network. In essence, an
overlay network is a logical network that is built on the top of another network.
Nodes in the overlay network can be thought of as being connected by virtual or
logical links, each of which corresponds to a path, perhaps through many
physical links in the underlying network. However, overlay networks do not have
the ability to control the routing between two overlay network nodes. In the NFV
context, the overlay networks are the VNs used by the VNFs and the underlay
network consists of the infrastructure network resources. These overlay
networks are normally created by edge nodes which have a dual personality,
participating in both the creation of the virtual networks and also acting as
infrastructure network resources. In contrast, the core nodes of the
infrastructure network only participate in the infrastructure network and have
no overlay awareness. The L2 and L3 virtual networks discussed earlier fit into
this category.
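A virtual overlay of this kind is typically realized by encapsulation at the edge nodes. The following simplified sketch models a VXLAN-style header carrying a 24-bit virtual network identifier (VNI); real VXLAN also wraps the result in outer Ethernet, IP, and UDP headers, which are omitted here:

```python
import struct

def vxlan_encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prefix an inner Ethernet frame with a simplified 8-byte
    VXLAN header. The VNI identifies the overlay virtual network."""
    flags = 0x08000000               # 'I' flag: VNI field is valid
    header = struct.pack("!II", flags, vni << 8)  # VNI in upper 24 bits
    return header + inner_frame

def vxlan_decapsulate(packet: bytes):
    """Recover the VNI and the inner frame at the far edge node."""
    _flags, vni_field = struct.unpack_from("!II", packet, 0)
    return vni_field >> 8, packet[8:]

vni, inner = vxlan_decapsulate(vxlan_encapsulate(5000, b"inner-frame"))
print(vni)      # -> 5000
print(inner)    # -> b'inner-frame'
```

Only the edge nodes run this logic; core nodes forward the encapsulated packets as ordinary infrastructure traffic, which is exactly the "no overlay awareness" property noted above.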
The virtual partitioning VN approach directly integrates VNs, called virtual
network partitions in this context, into the infrastructure network on an end-to-
end basis. Discrete virtual topologies are built in both the edge and core nodes of
the infrastructure network for each virtual network.
Virtualized Network Functions
A VNF is a virtualized implementation of a traditional network function.
The NFV orchestrator (NFVO) performs topology management of the network
service instances (also called VNF forwarding graphs).
Repositories
Associated with NFVO are four repositories of information needed for the
management and orchestration functions:
Network services catalog: List of the usable network services. A deployment
template for a network service, in terms of its VNFs and a description of their
connectivity through virtual links, is stored in the NS catalog for future use.
VNF catalog: Database of all usable VNF descriptors. A VNF descriptor (VNFD)
describes a VNF in terms of its deployment and operational behavior
requirements. It is primarily used by VNFM in the process of VNF instantiation
and lifecycle management of a VNF instance. The information provided in the
VNFD is also used by the NFVO to manage and orchestrate network services and
virtualized resources on NFVI.
NFV instances: List containing details about network services instances and
related VNF instances.
NFVI resources: List of NFVI resources utilized for the purpose of establishing
NFV services.
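The four repositories can be illustrated with a toy data model. The field names below are invented for illustration; the actual ETSI VNFD and NSD schemas are far richer:

```python
from dataclasses import dataclass

@dataclass
class VNFDescriptor:                 # entry in the VNF catalog
    name: str
    vcpus: int                       # deployment/behavior requirements
    memory_gb: int

@dataclass
class NetworkServiceTemplate:        # entry in the network services catalog
    name: str
    vnf_names: list                  # constituent VNFs
    virtual_links: list              # connectivity between them

vnf_catalog = {"vfirewall": VNFDescriptor("vfirewall", vcpus=2, memory_gb=4)}
ns_catalog = {
    "secure-access": NetworkServiceTemplate(
        "secure-access", ["vfirewall"], [("vfirewall", "external-net")]),
}
nfv_instances = []                   # details of instantiated services/VNFs
nfvi_resources = []                  # NFVI resources in use

# NFVO instantiating a service: look up the template, then each VNFD,
# reserving resources and recording the new instances.
tpl = ns_catalog["secure-access"]
for vnf_name in tpl.vnf_names:
    vnfd = vnf_catalog[vnf_name]
    nfvi_resources.append({"vcpus": vnfd.vcpus, "memory_gb": vnfd.memory_gb})
    nfv_instances.append({"service": tpl.name, "vnf": vnf_name})
print(nfv_instances)  # -> [{'service': 'secure-access', 'vnf': 'vfirewall'}]
```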
Element Management
The element management (EM) is responsible for fault, configuration, accounting,
performance, and security (FCAPS) management functionality for a VNF. These
management functions are also the responsibility of the VNFM; the difference is
that the EM may interact with the VNF through a proprietary interface, whereas
the VNFM uses open interfaces. The EM must, however, exchange information
with the VNFM through an open reference point (VeEm-Vnfm). The EM may be
aware of virtualization and collaborate with the VNFM to perform those
functions that require exchange of information regarding the NFVI resources
associated with the VNF. EM functions include the following:
Configuration for the network functions provided by the VNF
Fault management for the network functions provided by the VNF
Accounting for the usage of VNF functions
Collecting performance measurement results for the functions provided by the
VNF
Security management for the VNF functions
OSS/BSS
The OSS/BSS are the combination of the operator’s other operations and business
support functions that are not otherwise explicitly captured in the present
architectural framework, but are expected to have information exchanges with
functional blocks in the NFV-MANO architectural framework. OSS/BSS functions
may provide management and orchestration of legacy systems and may have full
end-to-end visibility of services provided by legacy network functions in an
operator’s network.
In principle, it would be possible to extend the functionalities of existing OSS/BSS
to manage VNFs and NFVI directly, but that may require a proprietary
implementation from a vendor. Because NFV is an open platform, managing NFV
entities through open interfaces (such as those in MANO) makes more sense. The
existing OSS/BSS can, however, add value to NFV MANO by offering additional
functions if they are not supported by a certain implementation of NFV MANO.
This is done through an open reference point (Os-Ma) between NFV MANO and
the existing OSS/BSS.
NFV Use Cases
ISG NFV has developed a representative set of service models and high-level use
cases that may be addressed by NFV. These use cases are intended to drive
further development of standards and products for network-wide
implementation. The Use Cases document identifies and describes a first set of
service models and high-level use cases that represent, in the view of NFV ISG
member companies, important service models and initial fields of application for
NFV, and that span the scope of technical challenges being addressed by the NFV
ISG.
There are currently nine use cases, which can be divided into the categories of
architectural use cases and service-oriented use cases.
NFVIaaS Example
VNF as a Service
Whereas NFVIaaS is similar to the cloud model of Infrastructure as a Service
(IaaS), VNFaaS corresponds to the cloud model of Software as a Service (SaaS).
NFVIaaS provides the virtualization infrastructure to enable a network service
provider to develop and deploy VNFs with reduced cost and time compared to
implementing the NFVI and the VNFs. With VNFaaS, a provider develops VNFs
that are then available off the shelf to customers. This model is well suited to
virtualizing customer premises equipment such as routers and firewalls.
Virtual Network Platform as a Service
VNPaaS is similar to an NFVIaaS that includes VNFs as components of the virtual
network infrastructure. The primary differences are the programmability and
development tools of the VNPaaS that allow the subscriber to create and
configure custom ETSI NFV-compliant VNFs to augment the catalog of VNFs
offered by the service provider. This allows all the third-party and custom VNFs
to be orchestrated via the VNF FG.
VNF Forwarding Graphs
A VNF FG allows virtual appliances to be chained together in a flexible manner.
This technique is called service chaining. For example, a flow may pass through
a network monitoring VNF, a load-balancing VNF, and finally a firewall VNF in
passing from one endpoint to another. The VNF FG use case is based on an
information model that describes the VNFs and physical entities to the
appropriate management/orchestration systems used by the service provider.
The model describes the characteristics of the entities including the NFV
infrastructure requirements of each VNF and all the required connections among
VNFs and between VNFs and the physical network included in the IaaS service.
To ensure the required performance and resiliency of the end-to-end service, the
information model must be able to specify the capacity, performance and
resiliency requirements of each VNF in the graph. To meet SLAs, the management
and orchestration system will need to monitor the nodes and linkages included in
the service graph. In theory, a VNF FG can span the facilities of multiple network
service providers.
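Service chaining through a VNF FG can be sketched as an ordered traversal of VNFs, following the monitoring/load-balancing/firewall example above. The VNF behaviors below are invented placeholders:

```python
# Each VNF is modeled as a function from packet to packet, or None if
# the packet is dropped. Names and rules are illustrative only.
def monitor(pkt):
    pkt["seen_by"].append("monitor")
    return pkt

def load_balancer(pkt):
    pkt["backend"] = hash(pkt["flow"]) % 2   # pick one of two backends
    pkt["seen_by"].append("lb")
    return pkt

def firewall(pkt):
    if pkt["port"] == 23:                    # drop telnet, say
        return None
    pkt["seen_by"].append("firewall")
    return pkt

def traverse(chain, pkt):
    """Apply the VNFs of a forwarding graph in order; stop if dropped."""
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt is None:
            return None
    return pkt

chain = [monitor, load_balancer, firewall]   # the example chain in the text
out = traverse(chain, {"flow": "a", "port": 443, "seen_by": []})
print(out["seen_by"])   # -> ['monitor', 'lb', 'firewall']
```

In a real deployment, the management and orchestration system would derive such a chain from the information model and monitor each node and link in it to meet SLAs.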
Service-Oriented Use Cases
These use cases focus on the provision of services to end customers, in which the
underlying infrastructure is transparent.
Virtualization of Mobile Core Network and IP Multimedia Subsystem
Mobile cellular networks have evolved to contain a variety of interconnected
network function elements, typically involving a large variety of proprietary
hardware appliances. NFV aims at reducing the network complexity and related
operational issues by leveraging standard IT virtualization technologies to
consolidate different types of network equipment onto industry standard high-
volume servers, switches, and storage, located in NFVI-PoPs.
Virtualization of Mobile Base Station
The focus of this use case is radio access network (RAN) equipment in mobile
networks. RAN is the part of a telecommunications system that implements a
wireless technology to access the core network of the mobile network service
provider. At a minimum, it involves hardware on the customer premises or in the
mobile device and equipment forming a base station for access to the mobile
network. There is the possibility that a number of RAN functions can be
virtualized as VNFs running on industry standard infrastructure.
Virtualization of the Home Environment
This use case deals with network provider equipment located as customer
premises equipment (CPE) in a residential location. These CPE devices mark the
operator/service provider presence at the customer premises and usually include
a residential gateway (RGW) for Internet and Voice over IP (VoIP) services (for
example, a modem/router for digital subscriber line [DSL] or cable), and a set-top
box (STB) for media services normally supporting local storage for personal video
recording (PVR) services. NFV technologies become ideal candidates to support
this concentration of computation workload from formerly dispersed functions
with minimal cost and improved time to market, while new services can be
introduced as required on a grow-as-you-need basis. Further, the VNFs can reside
on servers in the network service provider’s PoP. This greatly simplifies the
electronics environment of the home, reducing end user and operator capital
expenditure (CapEx).
Virtualization of CDNs
Delivery of content, especially of video, is one of the major challenges of all
operator networks because of the massive and growing amount of traffic to be
delivered to end customers of the network. The growth of video traffic is driven
by the shift from broadcast to unicast delivery via IP, by the variety of devices
used for video consumption and by increasing quality of video delivered via IP
networks in resolution and frame rate.
Some Internet service providers (ISPs) are deploying proprietary Content
Delivery Network (CDN) cache nodes in their networks to improve delivery of
video and other high-bandwidth services to their customers. Cache nodes
typically run on dedicated appliances running on custom or industry standard
server platforms. Both CDN cache nodes and CDN control nodes can potentially
be virtualized. The benefits of CDN virtualization are similar to those gained in
other NFV use cases, such as VNFaaS.
Fixed Access Network Functions Virtualization
NFV offers the potential to virtualize remote functions in the hybrid fiber/copper
access network and passive optical network (PON) fiber to the home and hybrid
fiber/wireless access networks. This use case has the potential for cost savings by
moving complex processing closer to the network. An additional benefit is that
virtualization supports multiple tenancy, in which more than one organizational
entity can either be allocated, or given direct control of, a dedicated partition of a
virtual access node. Finally, virtualizing broadband access nodes can enable
synergies to be exploited by the co-location of wireless access nodes in a common
NFV platform framework (that is, common NFVI-PoPs), thereby improving the
deployment economics and reducing the overall energy consumption of the
combined solution.
An indication of the relative importance of the various use cases is found in a
survey of 176 network professionals from a range of industries, reported in 2015
Guide to SDN and NFV [METZ14] and conducted in late 2014. The survey
respondents were asked to indicate the two use cases that they think will gain the
most traction in the market over the next two years. The data indicates that
although IT organizations have interest in a number of the ETSI-defined use
cases, by a wide margin they are most interested in the NFVIaaS use case.
Over the past few years, the hottest topics in networking have been SDN and NFV.
Separate standards bodies are pursuing the two technologies, and a large,
growing number of providers have announced or are working on products in the
two fields. Each technology can be implemented and deployed separately, but
there is clearly a potential for added value by the coordinated use of both
technologies. It is likely that over time, SDN and NFV will tightly interoperate to
provide a broad, unified software-based networking approach to abstract and
programmatically control network equipment and network-based resources.
The relationship between SDN and NFV is perhaps best viewed as SDN
functioning as an enabler of NFV. A major challenge with NFV is to enable the
user to configure a network so that VNFs running on servers are connected to the
network at the appropriate place, with the appropriate connectivity to other
VNFs, and with the desired QoS. With SDN, users and orchestration software can
dynamically configure the network and the distribution and connectivity of VNFs.
If demand for load-balancing capacity increases, a network orchestration layer
can rapidly spin up new load-balancing instances and also adjust the network
switching infrastructure to accommodate the changed traffic patterns. In turn,
the load-balancing VNF entity can interact with the SDN controller to assess
network performance and capacity and use this additional information to
balance traffic better, or even to request provisioning of additional VNF
resources.
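The load-balancing scenario above can be sketched as a toy reconciliation loop. The class name, rule format, and capacity threshold are all invented; a real orchestrator would call NFV MANO and SDN controller APIs rather than mutate local state:

```python
# Toy orchestration loop: spin up load-balancer VNF instances as demand
# rises and rebuild the (simulated) SDN flow rules to spread traffic.
class Orchestrator:
    def __init__(self, capacity_per_instance: int = 100):
        self.capacity = capacity_per_instance
        self.lb_instances = 1
        self.flow_rules = []

    def reconcile(self, demand: int):
        """Scale LB instances to cover demand, then reprogram switching."""
        needed = -(-demand // self.capacity)       # ceiling division
        while self.lb_instances < needed:
            self.lb_instances += 1                 # spin up a new LB VNF
        # Rebuild flow rules so traffic is hashed across all instances
        self.flow_rules = [
            {"match": f"hash%{self.lb_instances}=={i}", "action": f"lb-{i}"}
            for i in range(self.lb_instances)
        ]

orch = Orchestrator()
orch.reconcile(demand=250)       # needs ceil(250/100) = 3 instances
print(orch.lb_instances)         # -> 3
print(len(orch.flow_rules))      # -> 3
```

The point of the sketch is the two-way coupling: scaling a VNF (NFV) and reprogramming the switching fabric (SDN) happen in the same control decision.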
Some of the ways that ETSI believes that NFV and SDN complement each other
include the following:
The SDN controller fits well into the broader concept of a network controller in
an NFVI network domain.
SDN can play a significant role in the orchestration of the NFVI resources, both
physical and virtual, enabling functionality such as provisioning, configuration of
network connectivity, bandwidth allocation, automation of operations,
monitoring, security, and policy control.
SDN can provide the network virtualization required to support multitenant
NFVIs.
Forwarding graphs can be implemented using the SDN controller to provide
automated provisioning of service chains, while ensuring strong and consistent
implementation of security and other policies.
The SDN controller can be run as a VNF, possibly as part of a service chain
including other VNFs. For example, applications and services originally
developed to run on the SDN controller could also be implemented as separate
VNFs.