SDN 5

Software Defined Networks (Anna University)


UNIT V NFV FUNCTIONALITY


NFV Infrastructure – Virtualized Network Functions – NFV Management and
Orchestration – NFV Use cases – SDN and NFV.

NFV Functionality
NFV Infrastructure
The heart of the NFV architecture is a collection of resources and functions
known as the NFV infrastructure (NFVI).

NFV Domains
Compute domain: Provides commercial off-the-shelf (COTS) high-volume
servers and storage.
Hypervisor domain: Mediates the resources of the compute domain to the VMs
of the software appliances, providing an abstraction of the hardware.
Infrastructure network domain (IND): Comprises all the generic high-volume
switches interconnected into a network that can be configured to supply
infrastructure network services.
Container Interface
The ETSI documents make a distinction between a functional block interface and
a container interface, as follows:
Functional block interface: An interface between two blocks of software that
perform separate (perhaps identical) functions. The interface allows

communication between the two blocks. The two functional blocks may or may
not be on the same physical host.
Container interface: An execution environment on a host system within which
a functional block executes. The functional block is on the same physical host as
the container that provides the container interface.

General Domain Architecture and Associated Interfaces


The ETSI NFVI Architecture Overview document makes the following points
concerning this figure:
The architecture of the VNFs is separated from the architecture hosting the
VNFs (that is, the NFVI).
The architecture of the VNFs may be divided into a number of domains with
consequences for the NFVI and vice versa.
Given the current technology and industrial structure, compute (including
storage), hypervisors, and infrastructure networking are already largely separate
domains and are maintained as separate domains within the NFVI.
Management and orchestration tends to be sufficiently distinct from the NFVI
as to warrant being defined as its own domain.
The interface between the VNF domains and the NFVI is a container interface
and not a functional block interface.
The management and orchestration functions are also likely to be hosted in the
NFVI (as VMs) and therefore also likely to sit on a container interface.
The user view of a network of interconnected VNFs is of a virtualized network in
which the physical and lower-level logical details are transparent. But the VNFs
and the logical links between VNFs are hosted on NFVI containers, which in turn
are hosted on VMs and VM containers running on physical hosts. Therefore, if we
view the VNF architecture as having three layers (physical resource,
virtualization, application), all three layers are present on a single physical host.

Application plane SDN functions may execute on the same host as the control
plane functions but may also execute remotely on another host. The numbers in
the second column of the table correspond to the numbered arrows in the figure.
Interfaces 4, 6, 7, and 12 are container interfaces, so the components on both
sides of the interface execute on the same host. Interfaces 3, 8, 9, 10, 11, and
14 are functional block interfaces and, in most cases, the functional blocks on the
two sides of the interface execute on different hosts. The NFV documents
anticipate that typically NFV will be introduced over time into an enterprise
facility, so interaction with non-NFV networks is necessary.

Inter-Domain Interfaces Arising from Domain Architecture

Deployment of NFVI Containers


A single compute or network host can host multiple virtual machines (VMs), each
of which can host a single VNF. The single VNF hosted on a VM is referred to as a
VNF component (VNFC). A network function may be virtualized by a single VNFC,
or multiple VNFCs may be combined to form a single VNF. The compute container
interface hosts a hypervisor, which in turn can host multiple VMs, each hosting a
VNFC.
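
These containment relationships can be pictured with a small data model. The
following Python sketch is purely illustrative; the class names and attributes
are assumptions made for this example, not anything defined by ETSI.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class VNFC:
        name: str                      # a single VNF component

    @dataclass
    class VM:
        name: str
        vnfc: Optional[VNFC] = None    # a VM hosts at most one VNFC

    @dataclass
    class ComputeHost:
        name: str
        vms: List[VM] = field(default_factory=list)  # hypervisor hosts many VMs

    @dataclass
    class VNF:
        name: str
        vnfcs: List[VNFC] = field(default_factory=list)  # VNFCs forming the VNF

    # A VNF built from two VNFCs, distributed across two compute nodes.
    c1, c2 = VNFC("fw-vnfc-1"), VNFC("fw-vnfc-2")
    firewall = VNF("virtual-firewall", vnfcs=[c1, c2])
    host1 = ComputeHost("host-1", vms=[VM("vm-1", c1)])
    host2 = ComputeHost("host-2", vms=[VM("vm-2", c2)])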

Deployment of NFVI Containers

When a VNF is composed of multiple VNFCs, it is not necessary that all the VNFCs
execute in the same host. The VNFCs can be distributed across multiple compute
nodes interconnected by network hosts forming the infrastructure network
domain.
Logical Structure of NFVI Domains
The ISG NFV standards documents lay out the logical structure of the NFVI
domains and their interconnections. The specifics of the actual implementation of
the elements of this architecture will evolve in both open source and proprietary
implementation efforts.

Logical Structure of NFVI Domains


Compute Domain
The principal elements in a typical compute domain may include the following:
CPU/memory: A COTS processor, with main memory, that executes the code of
the VNFC.
Internal storage: Nonvolatile storage housed in the same physical structure as
the processor, such as flash memory.
Accelerator: Accelerator functions for security, networking, and packet
processing may also be included.

External storage with storage controller: Access to secondary memory
devices.
Network interface card (NIC): Provides the physical interconnection with the
infrastructure network domain. This interface is labeled Ha/CSr-Ha/Nr and
corresponds to interface 14.
Control and admin agent: Connects to the virtualized infrastructure manager
(VIM).
Eswitch: Server embedded switch. The eswitch function, described in the
following section, is implemented in the compute domain. However, functionally
it forms an integral part of the infrastructure network domain.
Compute/storage execution environment: This is the execution environment
presented to the hypervisor software by the server or storage device.
Eswitch
Control plane workloads: Concerned with signaling and control plane
protocols such as BGP. Typically, these workloads are more processor intensive
than I/O intensive and do not place a significant burden on the I/O system.
Data plane workloads: Concerned with the routing, switching, relaying, or
processing of network traffic payloads. Such workloads can require high I/O
throughput.
In a virtualized environment such as NFV, all VNF network traffic would go
through a virtual switch in the hypervisor domain, which invokes a layer of
software between the virtualized VNF software and the host networking hardware.
This can create a significant performance penalty. The purpose of the eswitch is to
bypass the virtualization software and provide the VNF with a direct memory
access (DMA) path to the NIC. The eswitch approach accelerates packet processing
without any processor overhead.
NFVI Implementation Using Compute Domain Nodes
A VNF consists of one or more logically connected VNFCs. The VNFCs run as
software on hypervisor domain containers that in turn run on hardware in the
compute domain. Although virtual links and networks are defined through the
infrastructure network domain, the actual implementation of network functions
at the VNF level consists of software on compute domain nodes. The IND
interfaces with the compute domain and not directly with the hypervisor domain
or the VNFs.
Before proceeding, we need to define the term node, which is used often in the
ISG NFV documents. The documents define an NFVI-Node as a collection of
physical devices deployed and managed as a single entity, providing the NFVI
functions required to support the execution environment for VNFs. NFVI nodes
are in the compute domain and encompass the following types of compute
domain nodes:
Compute node: A functional entity capable of executing a generic
computational instruction set (each instruction being fully atomic and
deterministic) in such a way that the execution cycle time is on the order of units
to tens of nanoseconds, irrespective of what specific state is required for cycle
execution. In practical terms, this defines a compute node in terms of memory
access time. A distributed system cannot qualify, because the time taken to
access state stored in remote memory is too long.
Gateway node: A single identifiable, addressable, and manageable element
within an NFVI-Node that implements gateway functions. Gateway functions
provide the interconnection between NFVI-PoPs and the transport networks.
They also connect virtual networks to existing network components. A gateway
may process packets going between different networks, such as removing
headers and adding headers. A gateway may operate at the transport level,
dealing with IP and data-link packets, or at the application level.
Storage node: A single identifiable, addressable, and manageable element
within an NFVI-Node that provides storage resources using compute, storage, and
networking functions. Storage may be physically implemented in a variety of
ways. It could, for example, be implemented as a component within a compute
node. An alternative approach is to implement storage nodes independent of the
compute nodes, as physical nodes within the NFVI-Node. An example of such a
storage node is a physical device accessible via a remote storage technology,
such as Network File System (NFS) or Fibre Channel.
Network node: A single identifiable, addressable, and manageable element
within an NFVI-Node that provides networking (switching/routing) resources
using compute, storage, and network forwarding functions.
A compute domain within an NFVI node will often be deployed as a number of
interconnected physical devices. Physical compute domain nodes may include a
number of physical resources, such as a multicore processor, memory
subsystems, and NICs. An interconnected set of these nodes comprises one NFVI-
Node and constitutes one NFVI point of presence (NFVI-PoP). An NFV provider
might maintain a number of NFVI-PoPs at distributed locations, providing service
to a variety of users, each of whom could implement their VNF software on
compute domain nodes at various NFVI-PoP locations.

Some Realistic Deployment Scenarios


Monolithic operator: One organization owns and houses the hardware
equipment and deploys and operates the VNFs and the hypervisors they run on.
A private cloud or a data center is an example of this deployment model.
Network operator hosting virtual network operators: Based on the
monolithic operator scenario, with the addition that the monolithic operator hosts
other virtual network operators within the same facility. A hybrid cloud is an
example of this deployment model.
Hosted network operator: An IT services organization (for example, HP,
Fujitsu) operates the compute hardware, infrastructure network, and hypervisors
on which a separate network operator (for example, BT, Verizon) runs VNFs.
These are physically secured by the IT services organization.
Hosted communications providers: Similar to the hosted network operator
scenario, but in this case multiple communications providers are hosted. A
community cloud is an example of this deployment model.
Hosted communications and application providers: Similar to the previous
scenario. In addition to hosting network and communications providers, servers in
a data center facility are offered to the public for deploying virtualized
applications. A public cloud is an example of this deployment model.
Managed network service on customer premises: Similar to the monolithic
operator scenario. In this case, the NFV provider's equipment is housed on the
customer's premises. One example of this model is a remotely managed gateway
in a residential or enterprise location. Another example is remotely managed
networking equipment such as firewalls or virtual private network gateways.
Managed network service on customer equipment: Similar to the monolithic
operator scenario. In this case, the equipment is housed on the customer's
premises on customer equipment. This scenario could be used for managing an
enterprise network. A private cloud could also be deployed in this fashion.
Hypervisor Domain

The hypervisor domain is a software environment that abstracts hardware and
implements services, such as starting a VM, terminating a VM, acting on policies,
scaling, live migration, and high availability. The principal elements in the
hypervisor domain are the following:
Compute/storage resource sharing/management: Manages these resources
and provides virtualized resource access for VMs.
Network resource sharing/management: Manages these resources and
provides virtualized resource access for VMs.
Virtual machine management and API: This provides the execution
environment of a single VNFC instance.
Control and admin agent: Connects to the virtualized infrastructure manager
(VIM).
Vswitch: The vswitch function, described in the next paragraph, is
implemented in the hypervisor domain. However, functionally it forms an
integral part of the infrastructure network domain.
The vswitch is an Ethernet switch implemented by the hypervisor that
interconnects the virtual NICs of VMs with each other and with the NIC of the
compute node. If two VNFs are on the same physical server, they are
connected through the same vswitch. If two VNFs are on different servers, the
connection passes through the first vswitch to the NIC and then to an external
switch. This switch forwards the connection to the NIC of the desired server.
Finally, that NIC forwards it to its internal vswitch and then to the destination
VNF.
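
The forwarding logic just described can be expressed as a short sketch. The
function and host names below are invented for illustration.

    def vswitch_path(src_host, dst_host):
        """Hops traversed between two VNFs under the vswitch rules above."""
        if src_host == dst_host:
            # Co-located VNFs are connected through the same vswitch.
            return ["src VNF", "vswitch@" + src_host, "dst VNF"]
        # VNFs on different servers traverse NICs and an external switch.
        return ["src VNF", "vswitch@" + src_host, "NIC@" + src_host,
                "external switch", "NIC@" + dst_host,
                "vswitch@" + dst_host, "dst VNF"]

    print(" -> ".join(vswitch_path("server-1", "server-1")))
    print(" -> ".join(vswitch_path("server-1", "server-2")))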
Infrastructure Network Domain
The infrastructure network domain (IND) performs a number of roles. It provides
The communication channel between the VNFCs of a distributed VNF
The communications channel between different VNFs
The communication channel between VNFs and their orchestration and
management
The communication channel between components of the NFVI and their
orchestration and management
The means of remote deployment of VNFCs
The means of interconnection with the existing carrier network
As already mentioned, Ha/CSr-Ha/Nr defines the interface between the IND and
the servers/storage of the compute domain, connecting the NIC in the compute
domain to a network resource in the infrastructure network domain. Ex-Nf is the
reference point between any existing/nonvirtualized network and the NFVI.
Reference point [VI-HA]/Nr is the interface between the hardware network
resources of the IND and the virtualization layer. The virtualization layer provides
container interfaces for virtual network entities. The [Vn-Nf]/N reference point is
the virtual network (VN) container interface (for example, a link or a LAN) for
carrying communication between VNFC instances. Note that a single VN can
support communication between more than a single pairing of VNFC instances
(for example, a LAN).
There is an important distinction to be made between the virtualization function
provided by the hypervisor domain and that provided by the infrastructure
network domain. Virtualization in the hypervisor domain uses VM technology to
create an execution environment for individual VNFCs. Virtualization in the IND
creates virtual networks for interconnecting VNFCs with each other and with
network nodes outside the NFV ecosystem. These latter types of nodes are called
physical network functions (PNFs).
Virtual Networks
Before proceeding, we need to clarify how the term virtual network is used in the
ISG NFV documents. In general terms, a virtual network is an abstraction of
physical network resources as seen by some upper software layer. Virtual
network technology enables a network provider to support multiple virtual
networks that are isolated from one another. Users of a single virtual network are
not aware of the details of the underlying physical network or of the other virtual
network traffic sharing the physical network resources. Two common approaches
for creating virtual networks are (1) protocol-based methods that define virtual
networks based on fields in protocol headers, and (2) virtual-machine-based
methods, in which networks are created among a set of VMs by the hypervisor.
The NFVI network virtualization combines both these forms.
L2 Versus L3 Virtual Networks
Protocol-based virtual networks can be classi昀椀ed by whether they are de昀椀ned at
protocol Layer 2 (L2), which is typically the LAN media access control (MAC)
layer, or Layer 3 (L3), which is typically the Internet Protocol (IP). With an L2 VN,
a virtual LAN is identi昀椀ed by a 昀椀eld in the MAC header, such as the MAC address
or a virtual LAN ID 昀椀eld inserted into the header. Normally, an IP router will strip
o昀昀 the MAC header of incoming Ethernet frames and insert a new MAC header
when forwarding the packet to the next network. The L2 VN could be extended
across this router only if the router had additional capability to support the L2
VN, such as being able to reinsert the virtual LAN ID 昀椀eld in the outgoing MAC
frame.
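
As a concrete illustration of an L2 VN tag, the sketch below builds an Ethernet
frame carrying an 802.1Q virtual LAN ID field using the Scapy library (which
must be installed separately); the addresses and VLAN number are arbitrary
example values.

    from scapy.all import Ether, Dot1Q, IP

    # Membership in the L2 virtual network is expressed by the VLAN ID
    # field (here 100) inserted into the MAC header.
    frame = (Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")
             / Dot1Q(vlan=100)
             / IP(src="10.0.0.1", dst="10.0.0.2"))
    frame.show()   # displays the layered headers, including the VLAN tag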

Levels of Network Virtualization


An L3 VN makes use of one or more fields in the IP header. A good example of
this is the virtual private network (VPN) defined using IPsec. Packets traveling on
a VPN are encapsulated in a new outer IP header and the data are encrypted so
that VPN traffic is isolated and protected as it transits third-party networks such
as the Internet.
NFVI Virtual Network Alternatives
ISG NFV defines a virtual network as the network construct that provides
network connectivity to one or more VNFs that are hosted on the NFVI.
Therefore, the concept of a virtual network that extends beyond the NFV
infrastructure is not currently addressed. In NFV, a virtual network is a network
among VNFs.
The Network Domain document indicates that three approaches are envisioned
for providing a virtual network service:
Infrastructure-based VNs
Layered VNs using virtual overlays
Layered VNs using virtual partitioning
A facility can use one or a combination of these approaches.
The infrastructure-based VN uses the native networking functions of the NFVI
compute and networking components. The address space is partitioned so that
VNF membership in a VN is defined by IP address. The IND document gives the
following example of an L3 infrastructure-based VN (a short membership-check
sketch follows the list):

Each VNF is assigned its own unique IP address that does not overlap with any
other address of elements within the NFVI.
Logical partitioning of the VNFs into their VNs is achieved by managing access
control lists in the L3 forwarding function in each compute node.
The L3 forwarding between VNFs and the physical fabric can then be handled
by the L3 forwarding information base running on the hosting compute node.
Control plane solutions, such as Border Gateway Protocol (BGP), can be used to
advertise reachability of the VNFs to other compute hosts.
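
A minimal sketch of the addressing scheme in the first two bullets, using only
the Python standard library (the prefix assignments are invented for the
example):

    import ipaddress

    # Non-overlapping address space, partitioned per virtual network.
    VN_PREFIXES = {
        "vn-blue": ipaddress.ip_network("10.1.0.0/24"),
        "vn-green": ipaddress.ip_network("10.2.0.0/24"),
    }

    def vn_of(vnf_address):
        """VN membership of a VNF is determined purely by its IP address."""
        addr = ipaddress.ip_address(vnf_address)
        for vn, prefix in VN_PREFIXES.items():
            if addr in prefix:
                return vn
        raise LookupError(vnf_address + " belongs to no configured VN")

    print(vn_of("10.1.0.7"))   # prints: vn-blue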
The other two approaches are referred to as layered virtual network approaches.
These approaches allow overlapping address spaces. That is, a VNF may
participate in more than one VN using the same IP address. The virtualization
layer of the IND essentially creates private topologies on the underlying NFVI
network fabric, using either virtual overlays or virtual partitioning.
The virtual overlay VN uses the concept of an overlay network. In essence, an
overlay network is a logical network that is built on the top of another network.
Nodes in the overlay network can be thought of as being connected by virtual or
logical links, each of which corresponds to a path, perhaps through many
physical links in the underlying network. However, overlay networks do not have
the ability to control the routing between two overlay network nodes. In the NFV
context, the overlay networks are the VNs used by the VNFs and the underlay
network consists of the infrastructure network resources. These overlay
networks are normally created by edge nodes which have a dual personality,
participating in both the creation of the virtual networks and also acting as
infrastructure network resources. In contrast, the core nodes of the
infrastructure network only participate in the infrastructure network and have
no overlay awareness. The L2 and L3 virtual networks discussed earlier fit into
this category.
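
One widely used realization of such an edge-built overlay is VXLAN
encapsulation, sketched below with Scapy. The addresses and VNI are example
values, and the sketch only illustrates the overlay idea; nothing in the ETSI
documents mandates VXLAN.

    from scapy.all import Ether, IP, UDP
    from scapy.layers.vxlan import VXLAN

    # Inner frame: traffic between two VNFC instances on the virtual network.
    inner = (Ether(src="02:aa:00:00:00:01", dst="02:aa:00:00:00:02")
             / IP(src="192.168.10.1", dst="192.168.10.2"))

    # Outer headers, added by an edge node. Core nodes forward on these
    # alone and have no overlay awareness.
    overlay = (IP(src="10.0.1.1", dst="10.0.2.1")
               / UDP(dport=4789)      # IANA-assigned VXLAN port
               / VXLAN(vni=5001)      # identifies the virtual network
               / inner)
    overlay.show()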
The virtual partitioning VN approach directly integrates VNs, called virtual
network partitions in this context, into the infrastructure network on an end-to-
end basis. Discrete virtual topologies are built in both the edge and core nodes of
the infrastructure network for each virtual network.
Virtualized Network Functions
A VNF is a virtualized implementation of a traditional network function.

Potential Network Functions to Be Virtualized


VNF Interfaces
As discussed earlier, a VNF consists of one or more VNF components (VNFCs). The
VNFCs of a single VNF are connected internal to the VNF. This internal structure
is not visible to other VNFs or to the VNF user.

VNF Functional View


SWA-1: This interface enables communication between a VNF and other VNFs,
PNFs, and endpoints. Note that the interface is to the VNF as a whole and not to
individual VNFCs. SWA-1 interfaces are logical interfaces that primarily make use
of the network connectivity services available at the SWA-5 interface.

SWA-2: This interface enables communications between VNFCs within a VNF.
This interface is vendor specific and therefore not a subject for standardization.
This interface may also make use of the network connectivity services available
at the SWA-5 interface. However, if two VNFCs within a VNF are deployed on the
same host, other technologies may be used to minimize latency and enhance
throughput, as described below.
SWA-3: This is the interface to the VNF manager within the NFV management
and orchestration module. The VNF manager is responsible for lifecycle
management (creation, scaling, termination, and so on). The interface typically is
implemented as a network connection using IP.
SWA-4: This is the interface for runtime management of the VNF by the element
manager.
SWA-5: This interface describes the execution environment for a deployable
instance of a VNF. Each VNFC maps to a virtualized container interface to a VM.
VNFC to VNFC Communication
As mentioned earlier, the internal structure of a VNF, in terms of multiple VNFCs,
is not exposed externally. The VNF appears as a single functional system in the
network it supports. However, internal connectivity between VNFCs within the
same VNF or across co-located VNFs needs to be specified by the VNF provider,
supported by the NFVI, and managed by the VNF manager. The VNF Architecture
document describes a number of architecture design models that are intended to
provide desired performance and quality of service (QoS), such as access to
storage or compute resources. One of the most important of these design models
relates to communication between VNFCs.

VNFC to VNFC Communication

1. Communication through a hardware switch. In this case, the VMs supporting
the VNFCs bypass the hypervisor to directly access the physical NIC. This provides
enhanced performance for VNFCs on different physical hosts.
2. Communication through the vswitch in the hypervisor. This is the basic
method of communication between co-located VNFCs but does not provide the
QoS or performance that may be required for some VNFs.
3. Greater performance can be achieved by using appropriate data processing
acceleration libraries and drivers compatible with the CPU being used. The
library is called from the vswitch. An example of a suitable commercial product is
the Data Plane Development Kit (DPDK), which is a set of data plane libraries and
network interface controller drivers for fast packet processing on Intel
architecture platforms. Scenario 3 assumes a Type 1 hypervisor (see Figure 7.3).
4. Communication through an embedded switch (eswitch) deployed in the NIC
with Single Root I/O Virtualization (SR-IOV). SR-IOV is a PCI-SIG specification that
defines a method to split a device into multiple PCI Express requester IDs (virtual
functions) in a fashion that allows an I/O memory management unit (MMU) to
distinguish different traffic streams and apply memory and interrupt translations,
so that these traffic streams can be delivered directly to the appropriate VM, and
in a way that prevents nonprivileged traffic flows from impacting other VMs.
5. Embedded switch deployed in the NIC hardware with SR-IOV, and with data
plane acceleration software deployed in the VNFC.
6. A serial bus directly connects two VNFCs that have extreme workloads or very
low-latency requirements. This is essentially an I/O channel means of
communication rather than a NIC means.
VNF Scaling
An important property of VNFs is referred to as elasticity, which simply means
the ability to scale up/down or scale out/in. Every VNF has associated with it an
elasticity parameter of no elasticity, scale up/down only, scale out/in only, or both
scale up/down and scale out/in.
A VNF is scaled by scaling one or more of its constituent VNFCs. Scale out/in is
implemented by adding/removing VNFC instances that belong to the VNF being
scaled. Scale up/down is implemented by adding/removing resources from
existing VNFC instances that belong to the VNF being scaled.
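
The two scaling styles can be stated directly in code. The structures below are
illustrative assumptions, not an ETSI-defined interface.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class VNFCInstance:
        vcpus: int
        memory_gb: int

    @dataclass
    class VNF:
        name: str
        instances: List[VNFCInstance]

    def scale_out(vnf, vcpus=2, memory_gb=4):
        """Scale out: add a VNFC instance to the VNF."""
        vnf.instances.append(VNFCInstance(vcpus, memory_gb))

    def scale_in(vnf):
        """Scale in: remove a VNFC instance from the VNF."""
        if len(vnf.instances) > 1:
            vnf.instances.pop()

    def scale_up(vnf, extra_vcpus, extra_gb):
        """Scale up: add resources to the existing VNFC instances."""
        for inst in vnf.instances:
            inst.vcpus += extra_vcpus
            inst.memory_gb += extra_gb

    lb = VNF("virtual-lb", [VNFCInstance(2, 4)])
    scale_out(lb)                            # now two VNFC instances
    scale_up(lb, extra_vcpus=2, extra_gb=4)  # each instance grows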
NFV Management and Orchestration
The NFV management and orchestration (MANO) component of NFV has as its
primary function the management and orchestration of an NFV environment.
This task, by itself, is complex. Further complicating MANO functionality is its
need to interoperate and cooperate with existing operations support
systems (OSS) and business support systems (BSS) in providing management
functionality for customers whose networking environment consists of a mixture
of physical and virtual elements. As can be seen in the figure, there are five
management blocks: three within NFV-MANO, the EMs associated with VNFs, and
the OSS/BSS. These two latter blocks are not part of MANO but do exchange
information with MANO for the purpose of the overall management of a
customer's networking environment.

The NFV-MANO Architectural Framework with Reference Points


Virtualized Infrastructure Manager
Virtualized infrastructure management (VIM) comprises the functions that are
used to control and manage the interaction of a VNF with computing, storage,
and network resources under its authority, as well as their virtualization. A single
instance of a VIM is responsible for controlling and managing the NFVI compute,
storage, and network resources, usually within one operator’s infrastructure
domain. This domain could consist of all resources within an NFVI-PoP, resources
across multiple NFVI-PoPs, or a subset of resources within an NFVI-PoP. To deal
with the overall networking environment, multiple VIMs within a single MANO
may be needed.
A VIM performs the following:
Resource management, in charge of the
Inventory of software (for example, hypervisors), computing, storage and
network resources dedicated to NFV infrastructure.
Allocation of virtualization enablers, for example, VMs onto hypervisors,
compute resources, storage, and relevant network connectivity
Management of infrastructure resources and allocation, for example, increasing
resources to VMs, improving energy efficiency, and reclaiming resources (a toy
allocator illustrating these functions follows the operations list below)
Operations, for
Visibility into and management of the NFV infrastructure

Root cause analysis of performance issues from the NFV infrastructure
perspective
Collection of infrastructure fault information
Collection of information for capacity planning, monitoring, and optimization
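
The resource management functions above can be pictured with a toy in-memory
allocator. Real VIMs (OpenStack is a common example) expose these functions
through full APIs; the class below is only a conceptual sketch with invented
names.

    class ToyVIM:
        """Tracks an NFVI inventory and allocates/reclaims VM resources."""

        def __init__(self, total_vcpus, total_gb):
            self.free_vcpus = total_vcpus
            self.free_gb = total_gb
            self.vms = {}

        def allocate_vm(self, name, vcpus, gb):
            """Allocate a virtualization enabler (a VM) from the inventory."""
            if vcpus > self.free_vcpus or gb > self.free_gb:
                return False              # insufficient free inventory
            self.free_vcpus -= vcpus
            self.free_gb -= gb
            self.vms[name] = (vcpus, gb)
            return True

        def reclaim_vm(self, name):
            """Resource reclamation when a VM is released."""
            vcpus, gb = self.vms.pop(name)
            self.free_vcpus += vcpus
            self.free_gb += gb

    vim = ToyVIM(total_vcpus=64, total_gb=256)
    assert vim.allocate_vm("vnfc-1", vcpus=4, gb=16)
    vim.reclaim_vm("vnfc-1")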
Virtual Network Function Manager
A VNF manager (VNFM) is responsible for VNFs. Multiple VNFMs may be
deployed; a VNFM may be deployed for each VNF, or a VNFM may serve multiple
VNFs. Among the functions that a VNFM performs are the following:
VNF instantiation, including VNF configuration if required by the VNF
deployment template (for example, VNF initial configuration with IP addresses
before completion of the VNF instantiation operation)
VNF instantiation feasibility checking, if required
VNF instance software update/upgrade
VNF instance modification
VNF instance scaling out/in and up/down
VNF instance-related collection of NFVI performance measurement results and
faults/events information, and correlation to VNF instance-related events/faults
VNF instance assisted or automated healing
VNF instance termination
VNF lifecycle management change notification
Management of the integrity of the VNF instance through its lifecycle
Overall coordination and adaptation role for configuration and event reporting
between the VIM and the EM
NFV Orchestrator
The NFV orchestrator (NFVO) is responsible for resource orchestration and
network service orchestration.
Resource orchestration manages and coordinates the resources under the
management of different VIMs. The NFVO coordinates, authorizes, releases, and
engages NFVI resources among different PoPs or within one PoP. It does so by
engaging with the VIMs directly through their northbound APIs instead of
engaging with the NFVI resources directly.
Network services orchestration manages/coordinates the creation of an end-to-
end service that involves VNFs from different VNFM domains. Service
orchestration does this in the following way:
It creates end-to-end services between different VNFs. It achieves this by
coordinating with the respective VNFMs, so that it does not need to talk to the
VNFs directly. An example is creating a service between the base station VNFs of
one vendor and the core node VNFs of another vendor.
It can instantiate VNFMs, where applicable.

It does the topology management of the network services instances (also called
VNF forwarding graphs).
Repositories
Associated with NFVO are four repositories of information needed for the
management and orchestration functions:
Network services catalog: List of the usable network services. A deployment
template for a network service, in terms of VNFs and a description of their
connectivity through virtual links, is stored in the NS catalog for future use.
VNF catalog: Database of all usable VNF descriptors. A VNF descriptor (VNFD)
describes a VNF in terms of its deployment and operational behavior
requirements (a simplified descriptor sketch follows this list). It is primarily used
by the VNFM in the process of VNF instantiation and lifecycle management of a
VNF instance. The information provided in the VNFD is also used by the NFVO to
manage and orchestrate network services and virtualized resources on the NFVI.
NFV instances: List containing details about network services instances and
related VNF instances.
NFVI resources: List of NFVI resources utilized for the purpose of establishing
NFV services.
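
As noted in the VNF catalog entry above, a VNFD captures deployment and
operational behavior requirements. Real descriptors follow ETSI-standardized
templates (commonly written in YAML); the dictionary below is a deliberately
simplified, hypothetical rendering of the kind of information involved.

    # A simplified, hypothetical VNF descriptor. The field names are
    # illustrative and do not reproduce the actual ETSI VNFD schema.
    vnfd = {
        "vnf_name": "virtual-firewall",
        "vendor": "example-vendor",
        "version": "1.0",
        "vnfcs": [
            {"name": "fw-data", "vcpus": 4, "memory_gb": 8, "max_instances": 8},
            {"name": "fw-ctrl", "vcpus": 2, "memory_gb": 4, "max_instances": 2},
        ],
        "elasticity": "scale out/in and up/down",
    }

    def worst_case_vcpus(descriptor):
        """vCPU demand the NFVO/VIM must satisfy at full scale-out."""
        return sum(c["vcpus"] * c["max_instances"] for c in descriptor["vnfcs"])

    print(worst_case_vcpus(vnfd))   # prints: 36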
Element Management
The element manager (EM) is responsible for fault, configuration, accounting,
performance, and security (FCAPS) management functionality for a VNF. These
management functions are also the responsibility of the VNFM, but, in contrast to
the VNFM, the EM can perform them through a proprietary interface with the
VNF. However, the EM needs to make sure that it exchanges information with the
VNFM through an open reference point (VeEm-Vnfm). The EM may be aware of
virtualization and collaborate with the VNFM to perform those functions that
require exchange of information regarding the NFVI resources associated with
the VNF. EM functions include the following:
Configuration for the network functions provided by the VNF
Fault management for the network functions provided by the VNF
Accounting for the usage of VNF functions
Collecting performance measurement results for the functions provided by the
VNF
Security management for the VNF functions
OSS/BSS
The OSS/BSS are the combination of the operator’s other operations and business
support functions that are not otherwise explicitly captured in the present
architectural framework, but are expected to have information exchanges with
functional blocks in the NFV-MANO architectural framework. OSS/BSS functions
may provide management and orchestration of legacy systems and may have full
end-to-end visibility of services provided by legacy network functions in an
operator’s network.
In principle, it would be possible to extend the functionalities of existing OSS/BSS
to manage VNFs and NFVI directly, but that may be a proprietary implementation
of a vendor. Because NFV is an open platform, managing NFV entities through
open interfaces (as in MANO) makes more sense. The existing OSS/BSS,
however, can add value to the NFV-MANO by offering additional functions if they
are not supported by a certain implementation of NFV-MANO. This is done
through an open reference point (Os-Ma) between NFV-MANO and the existing
OSS/BSS.
NFV Use Cases
ISG NFV has developed a representative set of service models and high-level use
cases that may be addressed by NFV. These use cases are intended to drive
further development of standards and products for network-wide
implementation. The Use Cases document identifies and describes a first set of
service models and high-level use cases that represent, in the view of NFV ISG
member companies, important service models and initial fields of application for
NFV, and that span the scope of technical challenges being addressed by the NFV
ISG.
There are currently nine use cases, which can be divided into the categories of
architectural use cases and service-oriented use cases.

ETSI NFV Use Cases


Architectural Use Cases
The four architectural use cases focus on providing general-purpose services and
applications based on the NFVI architecture.
NFVI as a Service
NFVIaaS is a scenario in which a service provider implements and deploys an
NFVI that may be used to support VNFs both by the NFVIaaS provider and by
other network service providers. For the NFVIaaS provider, this service provides
for economies of scale. The infrastructure is sized to support the provider’s own
needs for deploying VNFs and extra capacity that can be sold to other providers.
The NFVIaaS customer can offer services using the NFVI of another service
provider. The NFVIaaS customer has flexibility in rapidly deploying VNFs, either
for new services or to scale out existing services. Cloud computing providers may
find this service particularly attractive.
Service provider X offers a virtualized load balancing service. Some of carrier X's
customers need load balancing services at locations where X does not maintain
NFVI, but where service provider Z does. NFVIaaS offers a means for carrier Z to
lease NFV infrastructure (compute, network, hypervisors, and so on) to service
provider X, which gives the latter access to infrastructure that would otherwise
be prohibitively expensive to obtain. Through leasing, such capacity is available
on demand, and can be scaled as needed.

NFVIaaS Example
VNF as a Service
Whereas NFVIaaS is similar to the cloud model of Infrastructure as a Service
(IaaS), VNFaaS corresponds to the cloud model of Software as a Service (SaaS).
NFVIaaS provides the virtualization infrastructure to enable a network service
provider to develop and deploy VNFs with reduced cost and time compared to
implementing the NFVI and the VNFs. With VNFaaS, a provider develops VNFs
that are then available off the shelf to customers. This model is well suited to
virtualizing customer premises equipment such as routers and firewalls.
Virtual Network Platform as a Service
VNPaaS is similar to an NFVIaaS that includes VNFs as components of the virtual
network infrastructure. The primary differences are the programmability and
development tools of the VNPaaS that allow the subscriber to create and
configure custom ETSI NFV-compliant VNFs to augment the catalog of VNFs
offered by the service provider. This allows all the third-party and custom VNFs
to be orchestrated via the VNF FG.
VNF Forwarding Graphs
A VNF FG allows virtual appliances to be chained together in a flexible manner.
This technique is called service chaining. For example, a flow may pass through
a network monitoring VNF, a load-balancing VNF, and finally a firewall VNF in
passing from one endpoint to another. The VNF FG use case is based on an
information model that describes the VNFs and physical entities to the
appropriate management/orchestration systems used by the service provider.
The model describes the characteristics of the entities, including the NFV
infrastructure requirements of each VNF and all the required connections among
VNFs and between VNFs and the physical network included in the IaaS service.
To ensure the required performance and resiliency of the end-to-end service, the
information model must be able to specify the capacity, performance and
resiliency requirements of each VNF in the graph. To meet SLAs, the management
and orchestration system will need to monitor the nodes and linkages included in
the service graph. In theory, a VNF FG can span the facilities of multiple network
service providers.
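
At bottom, a forwarding graph is an ordered structure. The toy sketch below
chains three VNFs and passes a packet through them in order; the function
names are invented for illustration.

    def monitor(pkt):
        """Network monitoring VNF: observe and pass through."""
        print("monitor saw:", pkt)
        return pkt

    def load_balance(pkt):
        """Load-balancing VNF: pick a backend for the flow."""
        return {**pkt, "backend": "pool-member-1"}

    def firewall(pkt):
        """Firewall VNF: drop traffic to disallowed ports."""
        return pkt if pkt["dport"] in (80, 443) else None

    # The VNF forwarding graph expressed as an ordered service chain.
    chain = [monitor, load_balance, firewall]

    pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "dport": 80}
    for vnf in chain:
        pkt = vnf(pkt)
        if pkt is None:     # a VNF in the chain dropped the packet
            break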
Service-Oriented Use Cases
These use cases focus on the provision of services to end customers, in which the
underlying infrastructure is transparent.
Virtualization of Mobile Core Network and IP Multimedia Subsystem
Mobile cellular networks have evolved to contain a variety of interconnected
network function elements, typically involving a large variety of proprietary
hardware appliances. NFV aims at reducing the network complexity and related
operational issues by leveraging standard IT virtualization technologies to
consolidate different types of network equipment onto industry standard high-
volume servers, switches, and storage, located in NFVI-PoPs.
Virtualization of Mobile Base Station
The focus of this use case is radio access network (RAN) equipment in mobile
networks. RAN is the part of a telecommunications system that implements a
wireless technology to access the core network of the mobile network service
provider. At minimum, it involves hardware on the customer premises or in the
mobile device and equipment forming a base station for access to the mobile
network. There is the possibility that a number of RAN functions can be
virtualized as VNFs running on industry standard infrastructure.
Virtualization of the Home Environment
This use case deals with network provider equipment located as customer
premises equipment (CPE) in a residential location. These CPE devices mark the
operator/service provider presence at the customer premises and usually include
a residential gateway (RGW) for Internet and Voice over IP (VoIP) services (for
example, a modem/router for digital subscriber line [DSL] or cable), and a set-top
box (STB) for media services normally supporting local storage for personal video
recording (PVR) services. NFV technologies are ideal candidates to support
this concentration of computation workload from formerly dispersed functions
with minimal cost and improved time to market, while new services can be
introduced as required on a grow-as-you-need basis. Further, the VNFs can reside
on servers in the network service provider's PoP. This greatly simplifies the
electronics environment of the home, reducing end user and operator capital
expenditure (CapEx).
Virtualization of CDNs
Delivery of content, especially of video, is one of the major challenges for all
operator networks because of the massive and growing amount of traffic to be
delivered to end customers of the network. The growth of video traffic is driven
by the shift from broadcast to unicast delivery via IP, by the variety of devices
used for video consumption, and by the increasing quality of video delivered via
IP networks in resolution and frame rate.
Some Internet service providers (ISPs) are deploying proprietary Content
Delivery Network (CDN) cache nodes in their networks to improve delivery of
video and other high-bandwidth services to their customers. Cache nodes
typically run on dedicated appliances running on custom or industry standard
server platforms. Both CDN cache nodes and CDN control nodes can potentially
be virtualized. The benefits of CDN virtualization are similar to those gained in
other NFV use cases, such as VNFaaS.
Fixed Access Network Functions Virtualization
NFV offers the potential to virtualize remote functions in the hybrid fiber/copper
access network, passive optical network (PON) fiber to the home, and hybrid
fiber/wireless access networks. This use case has the potential for cost savings by
moving complex processing closer to the network. An additional benefit is that
virtualization supports multiple tenancy, in which more than one organizational
entity can either be allocated, or given direct control of, a dedicated partition of a
virtual access node. Finally, virtualizing broadband access nodes can enable
synergies to be exploited by the co-location of wireless access nodes in a common
NFV platform framework (that is, common NFVI-PoPs), thereby improving the
deployment economics and reducing the overall energy consumption of the
combined solution.
An indication of the relative importance of the various use cases is found in a
survey of 176 network professionals from a range of industries, reported in 2015
Guide to SDN and NFV [METZ14] and conducted in late 2014. The survey
respondents were asked to indicate the two use cases that they think will gain the
most traction in the market over the next two years. The data indicates that
although IT organizations have interest in a number of the ETSI-defined use
cases, by a wide margin they are most interested in the NFVIaaS use case.

Interest in ETSI NFV Use Cases


SDN and NFV

Over the past few years, the hottest topics in networking have been SDN and NFV.
Separate standards bodies are pursuing the two technologies, and a large and
growing number of providers have announced or are working on products in the
two fields. Each technology can be implemented and deployed separately, but
there is clearly a potential for added value in the coordinated use of both
technologies. It is likely that over time, SDN and NFV will tightly interoperate to
provide a broad, unified software-based networking approach to abstract and
programmatically control network equipment and network-based resources.
The relationship between SDN and NFV is perhaps best viewed as SDN
functioning as an enabler of NFV. A major challenge with NFV is to enable the
user to configure a network so that VNFs running on servers are connected to the
network at the appropriate place, with the appropriate connectivity to other
VNFs, and with the desired QoS. With SDN, users and orchestration software can
dynamically configure the network and the distribution and connectivity of VNFs.
If demand for load-balancing capacity increases, a network orchestration layer
can rapidly spin up new load-balancing instances and also adjust the network
switching infrastructure to accommodate the changed traffic patterns. In turn,
the load-balancing VNF entity can interact with the SDN controller to assess
network performance and capacity, and use this additional information to
balance traffic better, or even to request provisioning of additional VNF
resources.
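
That interplay can be sketched as pseudo-orchestration logic. Every name below
(the stub controller, spin_up_lb_instance, install_flow) is a hypothetical
stand-in; a real deployment would use a concrete MANO stack and a concrete
SDN controller API.

    class StubSDNController:
        """Hypothetical stand-in for an SDN controller's northbound API."""
        def link_utilization(self):
            return 0.9                 # pretend the network reports 90% load
        def install_flow(self, match, action):
            print("flow installed:", match, "->", action)

    def spin_up_lb_instance(n):
        """Hypothetical MANO call that instantiates one more LB VNFC."""
        return "lb-%d" % n

    controller = StubSDNController()
    instances = ["lb-1"]

    # Orchestration loop: when demand rises, add a load-balancing instance
    # and adjust the switching infrastructure through the SDN controller.
    if controller.link_utilization() > 0.8:
        new_lb = spin_up_lb_instance(len(instances) + 1)
        instances.append(new_lb)
        controller.install_flow(match="dst=vip:80", action="forward:" + new_lb)
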
Some of the ways that ETSI believes NFV and SDN complement each other
include the following:
The SDN controller fits well into the broader concept of a network controller in
an NFVI network domain.
SDN can play a significant role in the orchestration of the NFVI resources, both
physical and virtual, enabling functionality such as provisioning, configuration of
network connectivity, bandwidth allocation, automation of operations,
monitoring, security, and policy control.
SDN can provide the network virtualization required to support multitenant
NFVIs.
Forwarding graphs can be implemented using the SDN controller to provide
automated provisioning of service chains, while ensuring strong and consistent
implementation of security and other policies.
The SDN controller can be run as a VNF, possibly as part of a service chain
including other VNFs. For example, applications and services originally
developed to run on the SDN controller could also be implemented as separate
VNFs.

Mapping of SDN Components with NFV Architecture


SDN-enabled switches/NEs include physical switches, hypervisor virtual
switches, and embedded switches on the NICs.
Virtual networks created using an infrastructure network SDN controller
provide connectivity services between VNFC instances.
The SDN controller can be virtualized, running as a VNF with its own EM and
VNF manager.
SDN-enabled VNFs include any VNF that may be under the control of an SDN
controller (for example, a virtual router or virtual firewall).
SDN applications, for example service chaining applications, can be VNFs
themselves.
The Nf-Vi interface allows management of the SDN-enabled infrastructure.
The Ve-Vnfm interface is used between the SDN VNFs (SDN controller VNF, SDN
network function VNFs, SDN application VNFs) and their respective VNF manager
for lifecycle management.
The Vn-Nf reference point allows SDN VNFs to access connectivity services
between VNFC interfaces.
