Avaya Aura® Core Solution Description
Release 10.1.x
Issue 4
January 2024
© 2018-2024, Avaya LLC
All Rights Reserved.

Notice
While reasonable efforts have been made to ensure that the information in this document is complete and accurate at the time of printing, Avaya assumes no liability for any errors. Avaya reserves the right to make changes and corrections to the information in this document without the obligation to notify any person or organization of such changes.

Documentation disclaimer
“Documentation” means information published in varying mediums which may include product information, operating instructions and performance specifications that are generally made available to users of products. Documentation does not include marketing materials. Avaya shall not be responsible for any modifications, additions, or deletions to the original published version of Documentation unless such modifications, additions, or deletions were performed by or on the express behalf of Avaya. End User agrees to indemnify and hold harmless Avaya, Avaya's agents, servants and employees against all claims, lawsuits, demands and judgments arising out of, or in connection with, subsequent modifications, additions or deletions to this documentation, to the extent made by End User.

Link disclaimer
Avaya is not responsible for the contents or reliability of any linked websites referenced within this site or Documentation provided by Avaya. Avaya is not responsible for the accuracy of any information, statement or content provided on these sites and does not necessarily endorse the products, services, or information described or offered within them. Avaya does not guarantee that these links will work all the time and has no control over the availability of the linked pages.

Warranty
Avaya provides a limited warranty on Avaya hardware and software. Refer to your sales agreement to establish the terms of the limited warranty. In addition, Avaya’s standard warranty language, as well as information regarding support for this product while under warranty, is available to Avaya customers and other parties through the Avaya Support website: https://support.avaya.com/helpcenter/getGenericDetails?detailId=C20091120112456651010 under the link “Warranty & Product Lifecycle” or such successor site as designated by Avaya. Please note that if You acquired the product(s) from an authorized Avaya Channel Partner outside of the United States and Canada, the warranty is provided to You by said Avaya Channel Partner and not by Avaya.

“Hosted Service” means an Avaya hosted service subscription that You acquire from either Avaya or an authorized Avaya Channel Partner (as applicable) and which is described further in Hosted SAS or other service description documentation regarding the applicable hosted service. If You purchase a Hosted Service subscription, the foregoing limited warranty may not apply but You may be entitled to support services in connection with the Hosted Service as described further in your service description documents for the applicable Hosted Service. Contact Avaya or Avaya Channel Partner (as applicable) for more information.

Hosted Service
THE FOLLOWING APPLIES ONLY IF YOU PURCHASE AN AVAYA HOSTED SERVICE SUBSCRIPTION FROM AVAYA OR AN AVAYA CHANNEL PARTNER (AS APPLICABLE). THE TERMS OF USE FOR HOSTED SERVICES ARE AVAILABLE ON THE AVAYA WEBSITE, HTTPS://SUPPORT.AVAYA.COM/LICENSEINFO UNDER THE LINK “Avaya Terms of Use for Hosted Services” OR SUCH SUCCESSOR SITE AS DESIGNATED BY AVAYA, AND ARE APPLICABLE TO ANYONE WHO ACCESSES OR USES THE HOSTED SERVICE. BY ACCESSING OR USING THE HOSTED SERVICE, OR AUTHORIZING OTHERS TO DO SO, YOU, ON BEHALF OF YOURSELF AND THE ENTITY FOR WHOM YOU ARE DOING SO (HEREINAFTER REFERRED TO INTERCHANGEABLY AS “YOU” AND “END USER”), AGREE TO THE TERMS OF USE. IF YOU ARE ACCEPTING THE TERMS OF USE ON BEHALF OF A COMPANY OR OTHER LEGAL ENTITY, YOU REPRESENT THAT YOU HAVE THE AUTHORITY TO BIND SUCH ENTITY TO THESE TERMS OF USE. IF YOU DO NOT HAVE SUCH AUTHORITY, OR IF YOU DO NOT WISH TO ACCEPT THESE TERMS OF USE, YOU MUST NOT ACCESS OR USE THE HOSTED SERVICE OR AUTHORIZE ANYONE TO ACCESS OR USE THE HOSTED SERVICE.

Licenses
THE SOFTWARE LICENSE TERMS AVAILABLE ON THE AVAYA WEBSITE, HTTPS://SUPPORT.AVAYA.COM/LICENSEINFO, UNDER THE LINK “AVAYA SOFTWARE LICENSE TERMS (Avaya Products)” OR SUCH SUCCESSOR SITE AS DESIGNATED BY AVAYA, ARE APPLICABLE TO ANYONE WHO DOWNLOADS, USES AND/OR INSTALLS AVAYA SOFTWARE, PURCHASED FROM AVAYA INC., ANY AVAYA AFFILIATE, OR AN AVAYA CHANNEL PARTNER (AS APPLICABLE) UNDER A COMMERCIAL AGREEMENT WITH AVAYA OR AN AVAYA CHANNEL PARTNER. UNLESS OTHERWISE AGREED TO BY AVAYA IN WRITING, AVAYA DOES NOT EXTEND THIS LICENSE IF THE SOFTWARE WAS OBTAINED FROM ANYONE OTHER THAN AVAYA, AN AVAYA AFFILIATE OR AN AVAYA CHANNEL PARTNER; AVAYA RESERVES THE RIGHT TO TAKE LEGAL ACTION AGAINST YOU AND ANYONE ELSE USING OR SELLING THE SOFTWARE WITHOUT A LICENSE. BY INSTALLING, DOWNLOADING OR USING THE SOFTWARE, OR AUTHORIZING OTHERS TO DO SO, YOU, ON BEHALF OF YOURSELF AND THE ENTITY FOR WHOM YOU ARE INSTALLING, DOWNLOADING OR USING THE SOFTWARE (HEREINAFTER REFERRED TO INTERCHANGEABLY AS “YOU” AND “END USER”), AGREE TO THESE TERMS AND CONDITIONS AND CREATE A BINDING CONTRACT BETWEEN YOU AND AVAYA INC. OR THE APPLICABLE AVAYA AFFILIATE (“AVAYA”).

Avaya grants You a license within the scope of the license types described below, with the exception of Heritage Nortel Software, for which the scope of the license is detailed below. Where the order documentation does not expressly identify a license type, the applicable license will be a Designated System License as set forth below in the Designated System(s) License (DS) section as applicable. The applicable number of licenses and units of capacity for which the license is granted will be one (1), unless a different number of licenses or units of capacity is specified in the documentation or other materials available to You. “Software” means computer programs in object code, provided by Avaya or an Avaya Channel Partner, whether as stand-alone products, pre-installed on hardware products, and any upgrades, updates, patches, bug fixes, or modified versions thereto. “Designated Processor” means a single stand-alone computing device. “Server” means a set of Designated Processors that hosts (physically or virtually) a software application to be accessed by multiple users. “Instance” means a single copy of the Software executing at a particular time: (i) on one physical machine; or (ii) on one deployed software virtual machine (“VM”) or similar deployment.

License types
Designated System(s) License (DS). End User may install and use each copy or an Instance of the Software only: 1) on a number of Designated Processors up to the number indicated in the order; or 2) up to the number of Instances of the Software as indicated in the order, Documentation, or as authorized by Avaya in writing. Avaya may require the Designated Processor(s) to be identified in the order by type, serial number, feature key, Instance, location or other specific designation, or to be provided by End User to Avaya through electronic means established by Avaya specifically for this purpose.

Concurrent User License (CU). End User may install and use the Software on multiple Designated Processors or one or more Servers, so long as only the licensed number of Units are accessing and using the Software at any given time as indicated in the order, Documentation, or as authorized by Avaya in writing. A “Unit” means the unit on which Avaya, at its sole discretion, bases the pricing of its licenses and can be, without limitation, an agent, port or user, an e-mail or voice mail account in the name of a person or corporate function (e.g., webmaster or helpdesk), or a directory entry in the administrative database utilized by the Software that permits one user to interface with the Software. Units may be linked to a specific, identified Server or an Instance of the Software.

Named User License (NU). End User may: (i) install and use each copy or Instance of the Software on a single Designated Processor or Server per authorized Named User (defined below); or (ii) install and use each copy or Instance of the Software on a Server so long as only authorized Named Users access and use the Software as indicated in the order, Documentation, or as authorized by Avaya in writing. “Named User” means a user or device that has been expressly authorized by Avaya to access and use the Software. At Avaya’s sole discretion, a “Named User” may be, without limitation, designated by name, corporate function (e.g., webmaster or helpdesk), an e-mail or voice mail account in the name of a person or corporate function, or a directory entry in the administrative database utilized by the Software that permits one user to interface with the Software.

Shrinkwrap License (SR). End User may install and use the Software in accordance with the terms and conditions of the applicable license agreements, such as a “shrinkwrap” or “clickthrough” license accompanying or applicable to the Software (“Shrinkwrap License”) as indicated in the order, Documentation, or as authorized by Avaya in writing.

Heritage Nortel Software
“Heritage Nortel Software” means the software that was acquired by Avaya as part of its purchase of the Nortel Enterprise Solutions Business in December 2009. The Heritage Nortel Software is the software contained within the list of Heritage Nortel Products located at https://support.avaya.com/LicenseInfo under the link “Heritage Nortel Products” or such successor site as designated by Avaya. For Heritage Nortel Software, Avaya grants Customer a license to use Heritage Nortel Software provided hereunder solely to the extent of the authorized activation or authorized usage level, solely for the purpose specified in the Documentation, and solely as embedded in, for execution on, or for communication with Avaya equipment. Charges for Heritage Nortel Software may be based on extent of activation or use authorized as specified in an order or invoice.

Copyright
Except where expressly stated otherwise, no use should be made of materials on this site, the Documentation, Software, Hosted Service, or hardware provided by Avaya. All content on this site, the documentation, Hosted Service, and the product provided by Avaya, including the selection, arrangement and design of the content, is owned either by Avaya or its licensors and is protected by copyright and other intellectual property laws including the sui generis rights relating to the protection of databases. You may not modify, copy, reproduce, republish, upload, post, transmit or distribute in any way any content, in whole or in part, including any code and software unless expressly authorized by Avaya. Unauthorized reproduction, transmission, dissemination, storage, and/or use without the express written consent of Avaya can be a criminal, as well as a civil, offense under the applicable law.

Virtualization
The following applies if the product is deployed on a virtual machine. Each product has its own ordering code and license types. Unless otherwise stated, each Instance of a product must be separately licensed and ordered. For example, if the end user customer or Avaya Channel Partner would like to install two Instances of the same type of product, then two products of that type must be ordered.

Third Party Components
“Third Party Components” mean certain software programs or portions thereof included in the Software or Hosted Service may contain software (including open source software) distributed under third party agreements (“Third Party Components”), which contain terms regarding the rights to use certain portions of the Software (“Third Party Terms”). As required, information regarding distributed Linux OS source code (for those products that have distributed Linux OS source code) and identifying the copyright holders of the Third Party Components and the Third Party Terms that apply is available in the products, Documentation or on Avaya’s website at: https://support.avaya.com/Copyright or such successor site as designated by Avaya. The open source software license terms provided as Third Party Terms are consistent with the license rights granted in these Software License Terms, and may contain additional rights benefiting You, such as modification and distribution of the open source software. The Third Party Terms shall take precedence over these Software License Terms, solely with respect to the applicable Third Party Components, to the extent that these Software License Terms impose greater restrictions on You than the applicable Third Party Terms.

The following applies only if the H.264 (AVC) codec is distributed with the product. THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL USE OF A CONSUMER OR OTHER USES IN WHICH IT DOES NOT RECEIVE REMUNERATION TO (i) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD (“AVC VIDEO”) AND/OR (ii) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM.

Service Provider
THE FOLLOWING APPLIES TO AVAYA CHANNEL PARTNER’S HOSTING OF AVAYA PRODUCTS OR SERVICES. THE PRODUCT OR HOSTED SERVICE MAY USE THIRD PARTY COMPONENTS SUBJECT TO THIRD PARTY TERMS AND REQUIRE A SERVICE PROVIDER TO BE INDEPENDENTLY LICENSED DIRECTLY FROM THE THIRD PARTY SUPPLIER. AN AVAYA CHANNEL PARTNER’S HOSTING OF AVAYA PRODUCTS MUST BE AUTHORIZED IN WRITING BY AVAYA AND IF THOSE HOSTED PRODUCTS USE OR EMBED CERTAIN THIRD PARTY SOFTWARE, INCLUDING BUT NOT LIMITED TO MICROSOFT SOFTWARE OR CODECS, THE AVAYA CHANNEL PARTNER IS REQUIRED TO INDEPENDENTLY OBTAIN ANY APPLICABLE LICENSE AGREEMENTS, AT THE AVAYA CHANNEL PARTNER’S EXPENSE, DIRECTLY FROM THE APPLICABLE THIRD PARTY SUPPLIER.

WITH RESPECT TO CODECS, IF THE AVAYA CHANNEL PARTNER IS HOSTING ANY PRODUCTS THAT USE OR EMBED THE H.264 CODEC OR H.265 CODEC, THE AVAYA CHANNEL PARTNER ACKNOWLEDGES AND AGREES THE AVAYA CHANNEL PARTNER IS RESPONSIBLE FOR ANY AND ALL RELATED FEES AND/OR ROYALTIES. THE H.264 (AVC) CODEC IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL USE OF A CONSUMER OR OTHER USES IN WHICH IT DOES NOT RECEIVE REMUNERATION TO: (I) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD (“AVC VIDEO”) AND/OR (II) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION FOR H.264 (AVC) AND H.265 (HEVC) CODECS MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM.

Compliance with Laws
You acknowledge and agree that it is Your responsibility to comply with any applicable laws and regulations, including, but not limited to, laws and regulations related to call recording, data privacy, intellectual property, trade secret, fraud, and music performance rights, in the country or territory where the Avaya product is used.

Preventing Toll Fraud
“Toll Fraud” is the unauthorized use of your telecommunications system by an unauthorized party (for example, a person who is not a corporate employee, agent, subcontractor, or is not working on your company's behalf). Be aware that there can be a risk of Toll Fraud associated with your system and that, if Toll Fraud occurs, it can result in substantial additional charges for your telecommunications services.

Avaya Toll Fraud intervention
If You suspect that You are being victimized by Toll Fraud and You need technical assistance or support, call the Technical Service Center Toll Fraud Intervention Hotline at +1-800-643-2353 for the United States and Canada. For additional support telephone numbers, see the Avaya Support website: https://support.avaya.com or such successor site as designated by Avaya.
Security Vulnerabilities
Information about Avaya’s security support policies can be
found in the Security Policies and Support section of https://
support.avaya.com/security.
Suspected Avaya product security vulnerabilities are handled
per the Avaya Product Security Support Flow (https://
support.avaya.com/css/P8/documents/100161515).
Downloading Documentation
For the most current versions of Documentation, see the Avaya
Support website: https://support.avaya.com, or such successor site
as designated by Avaya.
Contact Avaya Support
See the Avaya Support website: https://support.avaya.com for
product or Hosted Service notices and articles, or to report a
problem with your Avaya product or Hosted Service. For a list of
support telephone numbers and contact addresses, go to the Avaya
Support website: https://support.avaya.com (or such successor site
as designated by Avaya), scroll to the bottom of the page, and select
Contact Avaya Support.
Trademarks
The trademarks, logos and service marks (“Marks”) displayed in this
site, the Documentation, Hosted Service(s), and product(s) provided
by Avaya are the registered or unregistered Marks of Avaya, its
affiliates, its licensors, its suppliers, or other third parties. Users
are not permitted to use such Marks without prior written consent
from Avaya or such third party which may own the Mark. Nothing
contained in this site, the Documentation, Hosted Service(s) and
product(s) should be construed as granting, by implication, estoppel,
or otherwise, any license or right in and to the Marks without the
express written permission of Avaya or the applicable third party.
Avaya is a registered trademark of Avaya Inc.
All non-Avaya trademarks are the property of their respective owners.
Contents

Chapter 1: Introduction
    Purpose
    Product compatibility
    Change history
Chapter 2: Solution overview
    Avaya Aura® overview
    Topology
    Avaya Aura® core components
        System Manager overview
        Communication Manager overview
        Session Manager overview
        Avaya Aura® Application Enablement Services overview
        Branch Gateways
        Avaya Aura® Media Server overview
        Presence Services overview
        Avaya Device Adapter Snap-in overview
        Avaya Breeze® platform overview
        WebLM overview
    Avaya Aura® applications deployment offers
        Virtualized Environment overview
        Software-only environment overview
    Benefits of deploying the Avaya Aura® platform
    Avaya Aura® Solution for Midsize Enterprise
    Avaya Aura® Suite Licensing V2
    Solution Deployment Manager
        Solution Deployment Manager overview
        Solution Deployment Manager Client
Chapter 3: Hardware and software components
    Hardware components
    Supported servers for Avaya Aura® applications
    Supported embedded Red Hat Enterprise Linux operating system versions of Avaya Aura® application OVAs
    Supported Red Hat Enterprise Linux operating system versions for Software-only Environment
    Supported ESXi version
    Supported gateways
    Software components
Chapter 4: Solution specification
    Reference configurations
    Messaging
Purpose
This document describes the Avaya Aura® core solution from a holistic perspective, focusing on the strategic, enterprise, and functional views of the architecture. This document also includes a high-level description of each verified reference configuration for the solution.
This document is intended for people who want to understand how the solution and the related verified reference configurations meet customer requirements.
Product compatibility
For the latest and most accurate compatibility information, go to TOOLS > Product Compatibility
Matrix on the Avaya Support website.
Change history
Topology
The following depicts the Avaya Aura® architecture and various components of Avaya Aura®:
Branch Gateways
Branch Gateways work with Communication Manager software installed on any of the following
servers to help deliver communication services to enterprises:
• Avaya Solutions Platform S8300: From Release 10.1, migrate from the S8300E to Avaya Solutions Platform S8300 Release 5.1 or later.
• Customer-provided server
• Infrastructure as a Service (IaaS)
• Avaya Solutions Platform 130 Appliance: Dell PowerEdge R640
Branch Gateways connect telephone exchanges and data networks by routing data and VoIP traffic over the WAN or LAN. Branch Gateways support IP, digital, and analog devices.
Branch Gateways are controlled by Communication Manager operating either as an External Call Controller (ECC) or an Internal Call Controller (ICC). In a configuration that includes both an ICC and an ECC, the ICC acts as a survivable remote server (SRS). The ICC takes over call control when the ECC fails or when the WAN link between the main office and the branch office is down.
Branch Gateways also provide standard local survivability (SLS) when the connection to the primary ECC fails and an SRS is not available. This feature is available only for IPv4 setups.
and economic benefits for the enterprise. As Avaya Aura® MS consolidates multiple systems into a
single server, you can manage the entire communications infrastructure from one location. Avaya
Aura® MS provides scalability, redundancy, and high availability.
Avaya Aura® MS supports SIP TLS, SRTP, VoiceXML 2.1, CCXML 1.0, MRCP, QOS Monitoring,
Audio, Video, MLPP, IM, and Webpush features.
Avaya Aura® MS powers diverse applications such as voice messaging, consumer conferencing, self-service, contact centers, basic media services, and communication applications.
and that work with Avaya Communication Server 1000 (CS 1000) to migrate to Avaya Aura®
without significant investment on the existing infrastructure. Device Adapter offers a feasible
solution to CS 1000 customers to take advantage of Avaya Aura® features while minimizing
expenses on the cables and hardware.
Device Adapter is deployed on the Avaya Breeze® platform. A Device Adapter node runs on an
Avaya Breeze® platform cluster that can have one or more Avaya Breeze® platform servers. A
standard deployment solution has one or more Avaya Breeze® platform clusters. Implementing
Device Adapter does not introduce any new hardware. Device Adapter works as a part of the
Avaya Breeze® platform solution.
In this deployment, Device Adapter replaces CS 1000 and the phone sets connect to Device Adapter. For SIP
signaling and terminal registration of phone sets, Device Adapter is connected to Avaya Aura®
Session Manager. Session Manager communicates with Avaya Aura® Communication Manager
to provide call-related services to the terminals. Device Adapter communicates with Avaya Aura®
System Manager for management operations as available in a typical Avaya Aura® deployment.
To support analog and digital/TDM set migration, Media Gateway Controllers (MGC) or Media
Gateway Extended Peripheral Equipment Controllers (MG-XPEC) must be in place to drive the
Digital/Analog Line Cards. Only Intelligent Peripheral Equipment (IPE) Digital/Analog Line cards
are supported.
Device Adapter support in an Avaya Aura® Call Center Elite environment
Device Adapter Release 8.1.2 supports migration of call center (CC) endpoints that are used in an
Avaya Aura® Call Center Elite environment and that work with a CS 1000 environment to Avaya
Aura®. Device Adapter retains the Call Center Elite functions on these endpoints and provides a
near CS 1000 user experience to the call center agents and supervisors.
Device Adapter supports only the 1140e (1140) IP phone and the i2050 (2050) soft phone in a call center environment.
Customers can use these phones either as Unified Communications (UC) or Call Center (CC)
phones in their call center environment. When a call center agent or supervisor logs in to the
phone, the phone operates as a CC phone and provides the call center features. Otherwise, it
operates as a UC phone. Customers can also use these phones exclusively as UC phones in their
call center environment.
Device Adapter for call center does not support any services that are not supported by a 96x1
SIPCC endpoint.
WebLM overview
Avaya provides a Web-based License Manager (WebLM) to manage licenses of one or more
Avaya software products for your organization. WebLM facilitates easy tracking of licenses. To
track and manage licenses in an organization, WebLM requires a license file from the Avaya
Product Licensing and Delivery System (PLDS) website at https://plds.avaya.com.
WebLM supports two configuration models:
• WebLM standard model. In this model, a single WebLM server supports one or more
licensed products. The WebLM standard model supports the Standard License File (SLF)
and Enterprise License File (ELF) types.
• WebLM enterprise model. This model includes multiple WebLM servers. One WebLM server
acts as a master WebLM server and hosts the license file from PLDS. The remaining WebLM
servers act as the local WebLM servers and host the allocation license files from the master
WebLM server. You require an ELF to set up the WebLM enterprise model. PLDS generates
license files that are SLFs or ELFs.
Note:
The master and local WebLM servers must be deployed on the same major release. The master WebLM server must be on the same or a later service pack than the local WebLM server.
For example, if the local WebLM server is on Release 7.1, the master WebLM server
must be on Release 7.1, 7.1.1, 7.1.2, or 7.1.3. The master WebLM server cannot be
higher than Release 7.1.x.
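The compatibility rule above reduces to two checks: the major release (for example, 7.1) must match, and the master's service pack level must be at or above the local server's. A minimal sketch, assuming dotted version strings such as "7.1" or "7.1.2" (the string format is an assumption, not taken from Avaya tooling):

```python
def master_compatible(master: str, local: str) -> bool:
    """Check the WebLM pairing rule: same major release, and the master's
    service pack level at or above the local server's.
    Versions are assumed to be dotted strings like "7.1" or "7.1.2".
    """
    m = [int(p) for p in master.split(".")]
    l = [int(p) for p in local.split(".")]
    if m[:2] != l[:2]:                 # major release (e.g., 7.1) must match
        return False
    m_sp = m[2] if len(m) > 2 else 0   # service pack level, 0 if absent
    l_sp = l[2] if len(l) > 2 else 0
    return m_sp >= l_sp
```

With a local server on 7.1, this accepts a master on 7.1 through 7.1.3 and rejects 7.2, matching the example in the note.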
You can purchase two products and choose the enterprise model of licensing for one product and
the standard model of licensing for the other product. PLDS generates a separate license file for
each product.
The license file is an SLF or ELF based on how the product is configured in PLDS. Verify the
installation options that the product supports before you install the WebLM server. To configure
the standard licensing, you can use an ELF or SLF. To configure enterprise licensing, you must
have an ELF. After you install the license file on the WebLM server, a product with an ELF can
have multiple instances of the WebLM server. However, a product with an SLF can have only one
instance of the WebLM server.
The license file of a software product is in an XML format. The license file contains information
regarding the product, the major release, the licensed features of the product, and the licensed
capacities of each feature that you purchase. After you purchase a licensed Avaya software
product, you must activate the license file for the product in PLDS and install the license file on the
WebLM server.
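Because the license file is XML listing the product, release, features, and capacities, a tool can read those fields with a standard parser. The element and attribute names below are invented for illustration; real PLDS license files use Avaya's own schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical license-file shape for illustration only; actual PLDS
# files follow Avaya's own schema and element names.
SAMPLE = """
<License product="ExampleProduct" release="10.1">
  <Feature name="VALUE_EXAMPLE_USERS" capacity="250"/>
  <Feature name="VALUE_EXAMPLE_PORTS" capacity="50"/>
</License>
"""

def feature_capacities(xml_text: str) -> dict[str, int]:
    """Map each licensed feature name to its purchased capacity."""
    root = ET.fromstring(xml_text)
    return {f.get("name"): int(f.get("capacity"))
            for f in root.findall("Feature")}
```

For the sample above, the function yields a mapping of the two feature names to the capacities 250 and 50.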
Topology
The following is an example of a deployment infrastructure for System Manager on VMware.
Note:
From VMware vSphere ESXi 6.7 onwards, only the HTML5-based vSphere Client is supported.
Avaya Communication Manager Security Service Packs (SSP) can be incompatible with, or fail to install on, a customer-controlled operating system.
For more details, see Avaya Aura® Release Notes on the Avaya Support website.
Supported third-party applications
With the software-only (ISO) offer, you can install third-party applications on the system. For the
list of supported third-party software applications in Release 10.1 and later, see Avaya Product
Support Notices.
Avaya Aura® Software-Only environment RPMs
In a software-only installation, the customer installs the Red Hat-provided RPM updates. To avoid possible issues or incompatibilities with new RPMs, check the list of tested RPMs and follow the instructions in PSN020558u, which Avaya publishes periodically on the Avaya Support website.
Note:
For information about RPM updates for the Red Hat Enterprise Linux operating system and
required changes to operating system files on Software only installation, see Avaya Aura®
Software Only White paper on the Avaya Support website.
With Release 10.1 and later, there are no separate Kernel Service Packs (KSP) or Linux Security Updates (LSU).
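One way to apply the guidance above is to diff the locally installed RPM versions against the tested-RPM list before updating. A minimal sketch, assuming the tested list has been exported to name/version pairs (that export format is an assumption; the PSN itself is a document, not a data file). In practice the installed set would come from `rpm -qa`; here it is passed in as a dict so the check stays self-contained:

```python
# Flag installed RPMs whose versions are not on the tested list (sketch).
# The tested-list format is an assumed export, not an Avaya-published file.

def untested_rpms(installed: dict[str, str],
                  tested: dict[str, str]) -> dict[str, str]:
    """Return packages whose installed version differs from the tested
    version, or which do not appear on the tested list at all."""
    return {name: ver for name, ver in installed.items()
            if tested.get(name) != ver}
```

Any package this returns warrants a check against the PSN instructions before the update proceeds.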
Supported platforms
You can deploy the Avaya Aura® application software-only ISO image on the following:
• On-premise platforms:
- VMware
- Kernel-based Virtual Machine (KVM)
- Hyper-V
• Cloud platforms:
- Amazon Web Services
- Google Cloud Platform
- Microsoft Azure
- IBM Cloud for VMware Solutions
Specifications for Avaya Aura® applications on IBM Cloud for VMware Solutions are the same as those for the Virtualized Environment offer.
For information about IBM Cloud for VMware Solutions, see IBM Cloud for VMware
Solutions product documentation.
Note:
With Release 10.1.x and later, Avaya Aura® no longer provides an Amazon Web Services OVA. Deployment on Amazon Web Services is supported through the software-only offer.
The Infrastructure as a Service environment supports the following offers:
Offer: ISO
Supported environments:
• Simplex: Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM Cloud for VMware Solutions
• Duplex: Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM Cloud for VMware Solutions
Supporting the Avaya Aura® applications on IaaS platforms provides the following benefits:
• Minimizes capital expenditure on infrastructure. Customers can move from capital expenditure to operational expenses.
• Reduces the maintenance cost of running the data centers.
• Provides a common platform for deploying the applications.
• Provides a flexible environment to accommodate the changing business requirements of
customers.
Important:
The setup must follow the Infrastructure as a Service deployment guidelines, but does not
need to include all the applications.
For the latest and most accurate Avaya product compatibility information, go to TOOLS > Product Compatibility Matrix on the Avaya Support website.
Note:
When an application is configured with Out of Band Management, Solution Deployment
Manager does not support upgrade for that application.
For information about upgrading the application, see the application-specific upgrade
document on the Avaya Support website.
• Download Avaya Aura® applications.
• Install service packs, feature packs, and software patches for the following Avaya Aura®
applications:
- Communication Manager and associated devices, such as gateways, media modules, and
TN boards.
- Session Manager
- Branch Session Manager
- AVP Utilities Release 8.x
- Avaya Aura® Appliance Virtualization Platform Release 8.x or earlier, the ESXi host that is
running on the Avaya Aura® Virtualized Appliance.
- AE Services
The upgrade process from Solution Deployment Manager involves the following key tasks:
• Discover the Avaya Aura® applications.
• Refresh applications and associated devices and download the necessary software
components.
• Run the preupgrade check to verify that the environment supports a successful upgrade.
• Upgrade Avaya Aura® applications.
• Install software patch, service pack, or feature pack on Avaya Aura® applications.
For more information about the setup of the Solution Deployment Manager functionality that is
part of System Manager 10.1.x, see Avaya Aura® System Manager Solution Deployment Manager
Job-Aid.
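The ordered upgrade tasks above can be modeled as a simple pipeline in which each stage must complete before the next one starts. The stage names below are illustrative stand-ins and are not Solution Deployment Manager APIs:

```python
def run_upgrade_pipeline(application):
    """Run the upgrade stages in order, recording each completed stage."""
    log = []

    # Stage functions are illustrative stand-ins, not SDM APIs.
    steps = [
        ("discover", lambda: log.append(f"discovered {application}")),
        ("refresh", lambda: log.append("refreshed elements and downloaded software")),
        ("precheck", lambda: log.append("preupgrade check passed")),
        ("upgrade", lambda: log.append(f"upgraded {application}")),
        ("patch", lambda: log.append("installed patches")),
    ]
    for _name, step in steps:
        step()  # each stage must complete before the next starts
    return log

stages = run_upgrade_pipeline("Session Manager")
```

The key property is the strict ordering: discovery and the preupgrade check gate the upgrade itself, and patching only happens on an upgraded element.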
A technician can access the user interface of the Solution Deployment Manager client through a web browser.
Use the Solution Deployment Manager client to:
• Deploy System Manager and Avaya Aura® applications on Avaya appliances, VMware-based
Virtualized Environment, and Software-only environment.
• Upgrade VMware-based System Manager from Release 7.x or 8.x to Release 10.1 or later.
• Install System Manager software patches, service packs, and feature packs.
• Configure Remote Syslog Profile.
• Create the Appliance Virtualization Platform Release 8.x or earlier Kickstart file.
• Generate the Avaya Solutions Platform S8300 (Avaya-Supplied ESXi 7.0) Release 5.1
Kickstart file.
• Install Appliance Virtualization Platform patches.
• Restart and shutdown the Appliance Virtualization Platform host.
• Start, stop, and restart a virtual machine.
• Change the footprint of Avaya Aura® applications that support dynamic resizing. For example,
Session Manager and Avaya Breeze® platform.
Note:
• You can deploy or upgrade the System Manager virtual machine only by using the
Solution Deployment Manager client.
• You must always use the latest Solution Deployment Manager client for deployment.
Hardware components
The Avaya Aura® solution includes supported hardware. The hardware includes servers,
gateways, desk telephones, and video devices.
Servers
Avaya software applications are installed on the following supported servers:
• Avaya S8300E Server, an embedded server that resides in G430 and G450 Branch Gateways.
Note:
Avaya Aura® Release 10.1 supports Avaya Solutions Platform S8300 Release 5.1.
Avaya Solutions Platform S8300 Release 5.1 is supported only on the S8300E server, not on earlier versions of the S8300 server such as the S8300C and S8300D.
• Standalone servers that come in a 1U configuration:
Avaya Solutions Platform 130 Appliance: Dell PowerEdge R640
Gateways
The Avaya Aura® solution uses the following supported gateways:
• Avaya G650 Media Gateway, a traditional gateway that houses TN circuit packs and is used
in port networks
• Branch gateways
- Avaya G430 Branch Gateway: a gateway that provides H.248 connectivity and houses
media modules.
- Avaya G450 Branch Gateway: a gateway that provides H.248 connectivity and houses
media modules.
• AudioCodes M3000 Gateway, a high-density SIP trunk gateway that provides SIP
connectivity to Communication Manager and Session Manager.
Circuit packs and media modules
Communication Manager often uses port networks made up of Avaya G650 Media Gateways that house TN circuit packs. The following circuit packs support IP connectivity:
• TN2312BP IP Server Interface (IPSI), which works with Communication Manager on a server to transport control (signaling) messages
• TN799DP Control LAN (C-LAN), which provides TCP/IP connectivity over Ethernet or PPP to adjuncts
• TN2302AP IP Media Processor (MedPro), the H.323 audio platform
• TN2501AP voice announcements over LAN (VAL), an integrated announcement circuit pack
that uses announcement files in the .wav format
• TN2602AP IP Media Resource 320, which provides high-capacity Voice over Internet Protocol (VoIP) audio access
Communication Manager also uses branch gateways in lieu of or in addition to port networks. The
G430 and G450 Branch Gateways house media modules.
For more information on circuit packs and media modules, see Avaya Aura® Communication
Manager Hardware Description and Reference.
Telephones, endpoints, and video devices
The Avaya Aura® solution supports the following Avaya and third-party digital, IP (H.323/H.320), and SIP telephones and video devices:
• Avaya IP telephones and devices
- Avaya IP deskphone series
- Avaya 1600/9600-series specialty handsets
- Avaya 4600-series IP telephones
- Avaya E159 and E169 media stations
- Avaya IP conference telephones
- Avaya H100 Video Collaboration device
- Avaya 2400 and 9400 series DCP devices
- Avaya 1000-series video devices
- Avaya 1400-series digital deskphones
- Avaya Workplace Client for Android, iOS, and Windows
- Avaya J129/J169/J169CC/J179/J179CC
• Third-party telephones and video devices
- Polycom VSX/HDX endpoints
- Tandberg MXP endpoint
• Scopia® endpoints
- Scopia® XT Telepresence
- Scopia® XT4200, XT5000, and XT7000 Room Systems
- Scopia® XT Meeting Center Room System
- Scopia® Control
- Scopia® XT Executive 240
- Scopia® Video Gateway for Microsoft Lync
For more information on telephones and video devices, see Avaya Aura® Communication
Manager Hardware Description and Reference and documentation on the individual telephones
and video devices.
1. You can migrate the S8300E server to Avaya Solutions Platform S8300 Release 5.1. For information, see Migrating from Appliance Virtualization Platform deployed on S8300 Server to Avaya Solutions Platform S8300 on the Avaya Support website.
2. Avaya Solutions Platform 120 Appliance supports virtualization using Appliance Virtualization Platform.
3. You can migrate the Avaya Solutions Platform 120 Appliance to Avaya Solutions Platform 130 Appliance Release 5.1.x.x. For information, see Migrating from Appliance Virtualization Platform to Avaya Solutions Platform 130 on the Avaya Support website.
4. Avaya Solutions Platform 130 Appliance supports virtualization using the VMware vSphere ESXi Standard License.
5. Avaya Solutions Platform S8300 supports virtualization using the VMware vSphere ESXi Foundation License for Communication Manager and Branch Session Manager.
Note:
• From Avaya Aura® Release 10.1 and later, Avaya-provided HP ProLiant DL360p G8,
HP ProLiant DL360 G9, Dell™ PowerEdge™ R620, Dell™ PowerEdge™ R630, and Avaya
Solutions Platform 120 servers are not supported.
However, in Release 10.1.x, Avaya Solutions Platform 120 can be upgraded to Avaya
Solutions Platform 130 Release 5.x.
• From Avaya Aura® Release 8.0 and later, S8300D, Dell™ PowerEdge™ R610, and HP
ProLiant DL360 G7 servers are not supported.
Note:
• As of October 15, 2022, VMware has ended support for VMware vSphere 6.x. Therefore, Avaya recommends upgrading to a supported vSphere version.
For Avaya-provided environments (Avaya Solutions Platform 120 and 130 Release 4.0.x), use only Avaya-provided updates. Updating directly from the Dell or VMware website results in an unsupported configuration.
For customer-provided environments and instructions on upgrading to a supported vSphere version, see the VMware website.
• From VMware vSphere ESXi 6.7 onwards, only HTML5 based vSphere Client is
supported.
• Avaya Aura® applications support the particular ESXi version and its subsequent update.
For example, the subsequent update of VMware ESXi 6.7 can be VMware ESXi 6.7
Update 3.
• Device Adapter and Presence Services are deployed on the Avaya Breeze® platform,
which supports VMware 6.7 and 7.0.
• WebLM Release 10.1.2 OVA and higher are certified with ESXi 8.0 and 8.0 Update 2
(U2) deployments.
Supported gateways
The following table lists the supported gateways of Avaya Aura® applications.
Software components
The Avaya Aura® solution consists of several Avaya software applications in addition to the core
components. The following products are part of the Avaya Aura® solution:
• Avaya Session Border Controller for Enterprise
• Avaya Workplace Client Conferencing
• Avaya Communication Server 1000
Reference configurations
This chapter covers the following sample configurations that can be deployed in a customer environment. The sample configurations can be integrated with third-party applications for complete network interconnection.
• Messaging
• Avaya Workplace Client Solution: Equinox Conferencing
• Survivability
• Avaya Aura® in a virtualized environment
• Avaya Breeze® platform
Messaging
This configuration uses Session Manager, Communication Manager, Avaya Aura® Messaging, Avaya 9600 Series IP Deskphones, and Avaya 9601 Series SIP Deskphones. The deskphones have SIP firmware installed. Two Session Manager instances are deployed, where each Session Manager serves as a backup for the other if the network or a Session Manager fails.
In this configuration, Avaya 9600 Series IP Deskphones and Avaya 9601 Series SIP deskphones
are configured as SIP endpoints. These endpoints register to Session Manager and use
Communication Manager for feature support.
Communication Manager Evolution Server also supports Avaya 2420 Digital telephones and
Avaya 9600 Series and 9601 Series IP deskphones running H.323 firmware. Communication
Manager is connected over SIP trunks to Session Manager servers. Communication Manager
uses the SIP Signaling network interface on each Session Manager.
Messaging consists of an Avaya Aura® Messaging Application Server (MAS) and Avaya Message
Storage Server (MSS) running on a single Avaya S8300 server. Messaging is also connected over
SIP trunks to both Session Manager instances. All users have mailboxes defined on Messaging
which they access through a dedicated pilot number.
All intersystem calls are carried over these SIP trunks. Calls between stations are redirected to Messaging, and the calling party can leave a voicemail message for the appropriate subscriber.
The following equipment and software are used for the sample configuration.
Avaya Messaging
For information about Avaya Messaging, see Avaya Messaging documentation on the Avaya
Support website.
The result is a software-based Avaya Meetings Server deployable in a virtualized environment:
• You do not need a dedicated appliance taking up rack space for each function. Fewer boxes and appliances make the solution considerably more efficient.
• End users have a single conferencing system to learn.
• IT managers have one system to support and one vendor to call for assistance.
• Avaya sales and partners have a single conferencing solution to sell.
Avaya Meetings Server is a single platform for:
• Avaya Meetings Server for Team Engagement with Avaya Aura® components
• Avaya Meetings Server for Over The Top for customers who want their conferencing solution
to be a standalone entity and not integrated with Avaya Unified Communications
• Service Provider offerings
Avaya has enhanced the room system product line for much easier deployment in enterprise
applications. For service providers, this means easy bundling of our endpoints with services,
while enterprise customers can enjoy much simpler installation and administration. You do
not need an expert or technical resource to install or provision a room system. Anyone who
can hook up the cables, connect the components together, and turn on the power can get
a room system operational without an onsite technical resource. For example, the general
facilities personnel.
As an open mobile enterprise engagement company, Avaya continues to extend its solutions
portfolio to address a wider set of customer challenges and areas of higher value. Avaya
solutions, innovation roadmap, and channel development plans position the company to address
trends over the coming years, including:
• Video becoming mainstream
• Increasing mobility demands driven by smartphones and tablets
• IT consumerization
• Demand for open, flexible platforms
• Common place adoption of communication-enabled business processes
• Context-driven communications
• Federation of communications across enterprise boundaries.
Related links
Solution specifications for large enterprises on page 46
Solution specifications for medium to large enterprises on page 50
Solution specifications for small to medium enterprises on page 53
as a standalone infrastructure without Avaya Aura® components. A solution that integrates with
Avaya Aura® is called Avaya Meetings Server for Team Engagement.
The following figures illustrate examples of distributed deployments:
• Adds the Avaya Meetings Management node for specific loads and geographic distribution
requirements. Usually, the customer must distribute User Portal + Web Gateway in large
deployments when numerous users access the portal to join conferences, download client
plugin, and schedule meetings. Likewise, a large deployment with numerous H.323 calls
requires a distributed H.323 Gatekeeper. The node includes these modules that can be
installed in one of the following ways:
- H.323 Gatekeeper
- User Portal + Web Gateway
- User Portal, when Web Gateway is disabled. This occurs in base upgrades or migrations,
or in non-encrypted versions of the core Avaya Meetings Management.
• Deploys Avaya Meetings Media Server that provides rich audio, video, and data conferencing
functionalities to the solution. The server includes: HD video transcoding Media Server for
processed video, High Scale Audio engine, and Web Collaboration engine. The server
supports two working modes per single OVA: video, audio, and web collaboration; high
capacity audio and web collaboration. The administrator can switch the working mode from
the Avaya Meetings Management interface. Avaya Meetings Media Server cannot work
in a mixed mode. For a solution with both working modes, the deployment must include
two Avaya Meetings Media Servers: one for Full Audio, Video, Web Collaboration, and one
for High Capacity Audio and Web Collaboration. For WebRTC, the Avaya Meetings Media
Server uses Avaya Aura® Media Server as a WebRTC Gateway.
To handle WebRTC calls in Over The Top from release 9.1 SP3, Avaya Meetings Media
Server instances are configured to run as a WebRTC Gateway and front other Avaya
Meetings Media Servers.
• Deploys Avaya Meetings H.323 Edge. The server provides firewall and NAT traversal for
Avaya remote H.323 video HD room systems and standard third-party rooms. The server is
installed as an OVA.
• Supports Avaya Session Border Controller for Enterprise or an Avaya-approved edge device,
as an option. Avaya SBCE provides SIP firewall traversal, HTTP Reverse proxy, and STUN/
TURN firewall traversal. Avaya SBCE is deployed as an OVA, or as an appliance with
pre-installed software.
• Adds the Avaya Meetings Streaming and Recording facility as an option. It is deployed as a
pre-installed appliance on Avaya Solutions Platform server.
Related links
Avaya Meetings Server overview on page 45
The solution is called Avaya Meetings Server for Over The Top when it ties into the customer's existing infrastructure and provides services over the top of this infrastructure without requiring it to be upgraded or replaced.
The solution that tightly integrates with Avaya Aura® components is called Avaya Meetings Server
for Team Engagement and is deployed in medium and large enterprises.
The following figures illustrate distributed deployments:
This solution:
• Targets up to 30,000 registered users.
• Supports up to 2,000 concurrent sessions.
• Requires port-based licenses when deployed as Avaya Meetings Server for Over The Top, and Avaya Aura® Power Suite user licenses when set up as Avaya Meetings Server for Team Engagement.
• Recommends the use of two DMZ zones with three firewalls: the web zone for publicly
accessed servers; the application zone for application servers.
• Includes Avaya Meetings Management for managing the organization’s network, web
services, and signaling/control components. This virtual application, which is delivered as an
OVA, fully integrates with the enterprise active directory and provides intelligent cross-zone
bandwidth management regardless of protocols being used for calls.
Avaya Meetings Management includes these modules: Management, User Portal + Web
Gateway for web services, SIP B2BUA for signaling and control, Avaya Meetings Control,
and H.323 Gatekeeper.
• Adds the Avaya Meetings Management node for specific loads and geographic distribution
requirements. Usually, the customer must distribute User Portal + Web Gateway in large
deployments when numerous users access the portal to join conferences, download client
plugin, and schedule meetings. Likewise, a large deployment with numerous H.323 calls
requires a distributed H.323 Gatekeeper. The node includes these modules that can be
installed in one of the following ways:
- H.323 Gatekeeper
- User Portal + Web Gateway
- User Portal, when Web Gateway is disabled. This occurs in base upgrades or migrations,
or in non-encrypted versions of the core Avaya Meetings Management.
• Deploys Avaya Meetings Media Server that provides rich audio, video, and data conferencing
functionalities to the solution. The server includes: HD video transcoding Media Server for
processed video, High Scale Audio engine, and Web Collaboration engine. The server
supports two working modes per single OVA: video, audio, and web collaboration; high
capacity audio and web collaboration. The administrator can switch the working mode from
the Avaya Meetings Management interface. Avaya Meetings Media Server cannot work
in a mixed mode. For a solution with both working modes, the deployment must include
two Avaya Meetings Media Servers: one for Full Audio, Video, Web Collaboration, and one
for High Capacity Audio and Web Collaboration. For WebRTC, the Avaya Meetings Media
Server uses Avaya Aura® Media Server as a WebRTC Gateway.
To handle WebRTC calls in Over The Top from release 9.1 SP3, Avaya Meetings Media
Server instances are configured to run as a WebRTC Gateway and front other Avaya
Meetings Media Servers.
• Deploys Avaya Meetings H.323 Edge. The server provides firewall and NAT traversal for
Avaya remote H.323 video HD room systems and standard third-party rooms. The server is
installed as an OVA.
• Supports Avaya Session Border Controller for Enterprise or an Avaya-approved edge device,
as an option. Avaya SBCE provides SIP firewall traversal, HTTP Reverse proxy, and STUN/
TURN firewall traversal. Avaya SBCE is deployed as an OVA, or as an appliance with
pre-installed software.
• Adds the Avaya Meetings Streaming and Recording facility as an option. It is deployed as a
pre-installed appliance on Avaya Solutions Platform server.
Related links
Avaya Meetings Server overview on page 45
supports two working modes per single OVA: video, audio, and web collaboration; high
capacity audio and web collaboration. The administrator can switch the working mode from
the Avaya Meetings Management interface. Avaya Meetings Media Server cannot work
in a mixed mode. For a solution with both working modes, the deployment must include
two Avaya Meetings Media Servers: one for Full Audio, Video, Web Collaboration, and one
for High Capacity Audio and Web Collaboration. For WebRTC, the Avaya Meetings Media
Server uses Avaya Aura® Media Server as a WebRTC Gateway.
To handle WebRTC calls in Over The Top from release 9.1 SP3, Avaya Meetings Media
Server instances are configured to run as a WebRTC Gateway and front other Avaya
Meetings Media Servers.
• Deploys Avaya Meetings H.323 Edge. The server provides firewall and NAT traversal for
Avaya remote H.323 video HD room systems and standard third-party rooms. The server is
installed as an OVA.
• Supports Avaya Session Border Controller for Enterprise or an Avaya-approved edge device,
as an option. Avaya SBCE provides SIP firewall traversal, HTTP Reverse proxy, and STUN/
TURN firewall traversal. Avaya SBCE is deployed as an OVA, or as an appliance with
pre-installed software.
• Adds the Avaya Meetings Streaming and Recording facility as an option. It is deployed as a
pre-installed appliance on Avaya Solutions Platform server.
Related links
Avaya Meetings Server overview on page 45
Survivability
The Embedded Survivable Remote solution supports survivable local call processing and SIP
routing for a branch when the connection with the core site fails. Branch Session Manager
provides a SIP-enabled branch survivability solution. When the core Session Manager is
unreachable, SIP phones receive Communication Manager features from the Avaya Aura® software installed on the Embedded Survivable Remote server. Branch Session Manager provides services to the SIP endpoints when the connection with the core site fails.
The sample configuration consists of the Embedded Survivable Remote server, Branch Session
Manager, and an Avaya Aura® 8.x infrastructure.
The embedded survivable remote template is installed on an Avaya S8300E server with G430
Branch Gateway and G450 Branch Gateway.
The site where the embedded survivable remote server is installed includes:
• Session Manager
• Branch Session Manager
• Communication Manager
• AVP Utilities (formerly known as Utility Services)
The Avaya Aura® core setup is in the head office, Location 1. The head office connects to
Location 2, a branch office, which is the parts warehouse. Location 2 requires a new setup for 150
users. Location 2 uses SIP endpoints. The network environment uses PoE. The communication system requires a 30-channel ISDN PRI trunk for inbound and outbound calling. The branch office
connects over WAN to the head office.
The second branch office, Location 3, requires a setup to support up to 40 users. The branch office uses SIP endpoints and a 30-channel ISDN PRI trunk for inbound and outbound calls. The
branch office connects over WAN to the head office.
Proposed solution
Location 1
The Location 1 datacenter consists of Communication Manager, Session Manager, and System
Manager. Virtualized Environment is on customer-provided hardware and VMware. The servers
are installed on VMware. Location 1 uses one G430 Branch Gateway for media resources. The
Location 1 system hosts all the licenses and provides services and control over WAN to Location
2 and Location 3. The Location 1 system has licenses for 190 users, 150 for Location 2 and 40 for
Location 3. Twenty EC-500 licenses are available at startup.
Location 2
Location 2 uses G430 Branch Gateway for media resources. The branch office uses 150 Avaya
9608 IP and SIP telephones and a 30-channel ISDN PRI card for PSTN connectivity. All endpoints
work on PoE and do not require a local power supply. G430 Branch Gateway connects to the head
office over WAN. The system uses Branch Session Manager and Survivable Remote in case of a
connectivity failure at the head office.
Location 3
Location 3 uses G430 Branch Gateway for media resources and local connectivity. Location 3 uses 40 Avaya 9608 IP and SIP telephones and a 30-channel PRI card for PSTN connectivity. All endpoints use PoE and do not require a local power supply. G430 Branch Gateway connects over WAN to the Avaya servers in the head office. The setup uses the standard survivability capabilities with limited survivability features.
Solution overview
The Avaya Breeze® platform server runs in the Avaya Aura® environment. Avaya Breeze®
platform complements and expands the core communication capabilities of Session Manager and
Communication Manager. System Manager manages Avaya Breeze® platform that interoperates
with Communication Manager 7.0.
Traditional H.248 gateways provide access to the PSTN and support for H.323 and legacy
endpoints. Connection to SIP service provider trunks is provided through Avaya Session Border
Controller for Enterprise to Session Manager.
All incoming and outgoing PSTN calls use Call Intercept services that run on Avaya Breeze®
platform, regardless of the type of endpoint and the type of trunk. For ISDN trunks,
Communication Manager routes outbound PSTN calls first to Session Manager and then to the
ISDN trunk. A similar configuration is required for incoming calls over an ISDN trunk. Station-to-
station calls cannot run Call Intercept services even if the endpoints are SIP endpoints.
Avaya Breeze® platform is deployed on one of the following:
• In the Avaya appliance offer, on Appliance Virtualization Platform.
• In a customer Virtualized Environment, on VMware.
Avaya Device Adapter Snap-in is connected to Avaya Aura® Session Manager over TLS for SIP
signaling. Session Manager works with Avaya Aura® Communication Manager for call services
and Avaya Aura® System Manager for management traffic.
The CS 1000 UNIStim endpoints and Media Gateways connect to Avaya Device Adapter Snap-in
over the IP network. The snap-in then presents these endpoints as Avaya SIP Telephony (AST)
sets to Session Manager. The Personal Directory for UNIStim endpoints migrates to Avaya Device
Adapter Snap-in. Corporate directory support for UNIStim and digital endpoints using Avaya Aura®
Device Services (AADS) is optional.
Security philosophy
This section describes the security-related considerations, features, and services for the Avaya
Aura® solution and its various components. Avaya Aura® needs to be resilient to attacks that can
cause service disruption, malfunction, or theft of service. Avaya’s products inherit a number of
mechanisms from legacy communications systems to protect against toll fraud or the unauthorized
use of communications resources. However, Unified Communications capabilities, which converge
telephony services with data services on the enterprise data network, have the additional need
for protections previously specific only to data networking. That is, telephony services need to be
protected from security threats such as:
• Denial of Service (DoS) attacks
• Malware (viruses, worms and other malicious code)
• Theft of data
• Theft of service
To prevent security violations and attacks, Session Manager uses Avaya’s multilayer hardening
strategy:
• Secure by design
• Secure by default
• Secure communications
For more information on security design for the various Avaya Aura® components, see the
following documents:
• Avaya Aura® Session Manager Security Design
• Avaya Aura® Communication Manager Security Design
• Avaya Aura® System Manager Security Design
• Avaya Aura® Messaging Security Design
Secure by design
Secure by design encompasses a secure deployment strategy that separates Unified Communications (UC) applications and servers from the enterprise production network. Because all SIP sessions flow through Session Manager, the SIP routing element, it can protect the UC applications and servers from network, transport, and SIP Denial of Service (DoS) attacks, as well as from other malicious network attacks. Customers that deploy SIP trunks to SIP service providers can use Avaya Session Border Controller for Enterprise to provide an additional layer of security between the SIP service provider and Session Manager.
The architecture is related to the trusted communication framework infrastructure security layer
and allows for the specification of trust relationships and the design of dedicated security zones
for:
• Administration
• Gateway control network
• Enterprise network
• Adjuncts
• SIP Elements
For Communication Manager, Avaya isolates assets such that each of the secure zones is not
accessible from the enterprise or branch office zones. The zones are like dedicated networks for
particular functions or services. They do not need to have access from or to any other zones
because they only accommodate the data they are built for. This provides protection against
attacks from within the enterprise and branch office zone.
Gateways with dedicated gatekeeper front-end interfaces (C-LAN) inspect the traffic and protect
the server zone from flooding attacks, malformed IP packets, and attempts to gain unauthorized
administrative access of the server through the branch gateways. This architecture and framework
can also flexibly enhance the virtual enterprise and integrate branch offices into the main
corporate network. The security zone from the branch office can terminate at the central branch
gateway interfaces, again protecting the heart of Communication Manager.
Secure by default
Secure by default is a security strategy of ensuring that Avaya products install only the software and services required for the operation of the product. For Avaya turnkey products, this includes a hardened configuration of the operating system and, wherever possible, product security features that are enabled by default.
In many cases, for Avaya products that run on the Linux operating system, modified kernels are
used. The Linux operating system limits the number of access ports, services, and executables
and helps protect the system from typical modes of attack. At the same time, the reduction of
Linux services limits the attack surface.
Secure communications
Secure communications uses numerous features and protocols to protect access to and the
transmissions from Avaya communications systems. Avaya uses media encryption to ensure
privacy for the voice stream. Alongside media encryption, integrated signaling security protects
and authenticates messages to all connected SIP elements, IP telephones, and gateways, and
minimizes an attacker's ability to tamper with confidential call information. These features protect
sensitive information like caller and called party numbers, user authorization, barrier codes,
sensitive credit card numbers, and other personal information that is keyed in during calls to
banks or automated retailers.
Critical adjunct connections are also encrypted. In addition, IP endpoints authenticate to the
network infrastructure by acting as 802.1X supplicants. Network infrastructure devices such as
gateways or data switches act as the authenticator and forward the authentication request to a
customer authentication service.
Trust management
Various protocols are used for inter-element communication within a deployment. These protocols
include SIP, HTTPS, RMI (Remote Method Invocation), and JMX (Java Management Extensions).
The common method for securing these protocols is TLS (Transport Layer Security). TLS secures
the communication channel to prevent eavesdropping and message tampering. In addition, the
credentials used to establish these mutually authenticated TLS sessions can be leveraged to
provide element-level authentication and authorization.
Identity (endpoint or Server) and Trusted (Root) Certificates are integral in establishing such
TLS sessions. PKI (Public Key Infrastructure) is a commonly used and scalable technology to
facilitate provisioning and remote management of these certificates and establish trust domains for
a deployment.
The Trust Management Service, delivered through the System Manager centralized management
system, is responsible for:
• Participating in a customer's Public Key Infrastructure (PKI), if one exists. Customers that
have a PKI within their enterprise can either create a separate domain of trust (derived from
their Root CA) for Avaya components or use a third party (for example, Verisign) as their
trust provider.
• Lifecycle management of identity certificates for Avaya products:
- Secure storage of private keys
- Issuance of certificates
- Renewal of certificates
- Revocation of issued certificates
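As an illustration of the mutually authenticated TLS sessions described here, the following minimal Python sketch builds a TLS context that requires the peer to present a certificate chained to a trusted root. This is a generic example of the underlying mechanism, not Avaya code; the function name and the commented-out certificate file names are hypothetical.

```python
import ssl

def make_mtls_context(server_side, ca_file=None):
    """Build a TLS context for a mutually authenticated session (generic sketch)."""
    proto = ssl.PROTOCOL_TLS_SERVER if server_side else ssl.PROTOCOL_TLS_CLIENT
    ctx = ssl.SSLContext(proto)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Require the peer to present a certificate signed by a trusted root (mutual auth).
    ctx.verify_mode = ssl.CERT_REQUIRED
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)
    # The element's identity certificate and private key would be loaded here:
    # ctx.load_cert_chain(certfile="element.pem", keyfile="element.key")
    return ctx
```

Both sides of the connection would use such a context, which is what allows the credentials to carry element-level authentication.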
Authentication
The Avaya Aura architecture defines authentication as the process of verifying an identity, which
may belong to a user, an application, or a system.
The Avaya Aura Architecture’s Session Manager provides the SIP Registrar/Proxy function
referred to in this section. Devices connecting to a SIP Registrar/Proxy can be divided into two
categories:
• Un-trusted: The SIP Proxy/Registrar does NOT accept P-Asserted-Identity (PAI) headers
from devices in the un-trusted realm. Any entity or device not identified in the SIP
Registrar/Proxy's trusted host list falls into this category.
• Trusted: The SIP Proxy/Registrar accepts PAI headers from trusted entities or devices. A
trusted device is also referred to as a trusted host. To be trusted, a device must have a
corresponding entry in the SIP Registrar/Proxy's trusted host list. To identify trusted hosts,
the following authentication mechanisms are applied.
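Before any such mechanism runs, the realm classification itself is a membership check against the trusted host list. A minimal sketch, with hypothetical addresses:

```python
TRUSTED_HOSTS = {"10.0.0.5", "10.0.0.6"}  # hypothetical trusted host list entries

def accept_pai(source_addr):
    """Honor a P-Asserted-Identity header only if the sender is a trusted host."""
    return source_addr in TRUSTED_HOSTS
```

Requests from any other source would have their PAI headers ignored or stripped rather than trusted.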
Authorization
Avaya Aura Session Manager is responsible for authentication of other SIP entities and acts as a
portal to the Avaya Aura solution. All service requests are dispatched through the portal and
orchestrated across applications to validate and complete each request. Asynchronous event
responses are delivered to clients by marshalling them through Session Manager.
Thus, client access authentication is centralized at Session Manager. However, access control to
resources and operations is distributed through the system, depending on the granularity of
control.
Coarse-grained (high-level) access controls are enforced at the service portal, service type
handlers or interface handlers.
Finer-grained Access Control, however, is distributed through services where the requested action
is executed - where knowledge (context) for application-specific decisions is available.
Reliability
Customers need the full reliability of their traditional voice networks, including feature richness
and robustness, and they want the option of using converged voice and data infrastructures. With
the convergence of voice and data applications that run on common systems, a communications
failure could bring an entire business to a halt. Enterprises are looking to vendors to help them
design their converged infrastructure to meet their expected availability level.
Note:
If you have an S8300 server configured in embedded CM main, survivable remote, or
embedded survivable remote configurations, migrate to Avaya Solutions Platform S8300.
The reliability feature includes:
• Alternate gatekeeper: Provides survivability between Communication Manager and IP
communications devices such as IP telephones and IP softphones.
• Auto fallback to primary for Branch Gateway: Automatically returns a fragmented network,
where several Branch Gateways are serviced by one or more Communication Manager
Survivable Remote sites, to the primary server. This feature is targeted for Branch Gateways.
• Connection preserving failover/failback for Branch Gateway: Preserves existing bearer or
voice connections while Branch Gateways migrate from one Communication Manager server
to another. A network or server failure can cause migration.
• Connection preserving upgrades for duplex servers: Provides connection preservation on
upgrades of duplex servers for:
- Connections involving IP telephones
- Connections involving TDM connections on port networks
- Connections on Branch Gateway
- IP connections between port networks and Branch Gateway
• Communication Manager Survivable Core: Provides survivability for backup servers to be
placed in various locations in the customer network. The backup servers supply service to
port networks when the main server or server pair fails or connectivity to the main server or
server pair is lost.
- When the Survivable Core is in control due to network fragmentation or a catastrophic
main server failure, the return to the main server is provided through scheduled, manual,
and automatic options.
- Dial Plan Transparency for Survivable Remote and Survivable Core preserves dialing
patterns of users if a Branch Gateway registers with Survivable Remote or when a port
network registers with Survivable Core.
• IP endpoint Time-to-Service: Improves a customer’s IP endpoint time to service, especially
where Communication Manager has many IP endpoints trying to register or re-register.
With this feature, the system considers IP endpoints to be in service immediately after
registration. TTS-TLS supports TTS over a secure TLS connection and is the
recommended configuration choice.
• Survivable processor: A survivable processor is an Internal Call Controller (ICC) with an
integral Branch Gateway, administered to function as a spare processor rather than the
main processor. The Avaya Solutions Platform S8300 Server runs in standby mode, ready
to take control if the main server fails in an outage, with no loss of communication.
• Handling of split registrations: Occurs when resources in one network region are registered
to different servers, for example, after an outage activates the Survivable Remote server.
Availability
Availability is the assurance of an agreed level of operational performance over a given time
period. For the Avaya Aura® solution, it means that users want their telephones and video
devices to be ready to serve them at all times.
In this context, the term availability describes the use of duplicated servers. The term high
availability specifically describes the Appliance Virtualization Platform's use of duplicated servers.
Server interchange
Server interchange is the process within a duplex server pair of a standby server becoming an
active server. An arbiter process analyzes the state-of-health of both the active and standby
servers and initiates a server interchange if the state of health of the active server is less than the
state of health of the standby server. During this process, the standby server sends a request for
the alias address. The ARP module resolves the IP address and sends an ARP reply packet with
its Ethernet MAC address. The active server is seen by all the devices in the same subnet.
Each server has a unique IP address for the Processor Ethernet interface. A separate shared
alias IP address is assigned to this interface on the active server and is used for connections
to the Processor Ethernet interface on the active server. As part of the operations for a server
interchange, the alias address is removed from the Processor Ethernet network interface on the
server going standby, and it is added to the Processor Ethernet network on the server going
active. After the interchange, a gratuitous ARP message is sent out from the Processor Ethernet
interface on the server going active to update the MAC address in the ARP data cache stored in
the IP endpoints on the local LAN that need to be connected to the PE interface.
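The gratuitous ARP announcement can be illustrated by constructing the raw frame. The sketch below only builds the 42-byte Ethernet/ARP frame; actually sending it would require a raw socket and elevated privileges. Using the broadcast address as the target hardware address is one common convention for gratuitous ARP, assumed here for illustration.

```python
import socket
import struct

def gratuitous_arp_frame(server_mac, alias_ip):
    """Build a gratuitous ARP reply announcing that alias_ip is now at server_mac."""
    broadcast = b"\xff" * 6
    eth_header = broadcast + server_mac + struct.pack("!H", 0x0806)  # EtherType = ARP
    ip = socket.inet_aton(alias_ip)
    # htype=Ethernet(1), ptype=IPv4, hlen=6, plen=4, opcode=2 (reply)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)
    arp += server_mac + ip + broadcast + ip  # sender and target IP are both the alias
    return eth_header + arp
```

Hosts on the local LAN that receive this frame update their ARP caches so that traffic for the alias address reaches the newly active server.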
The IP connection for the Processor-Ethernet-connected endpoints is not available during the
server interchange. This is similar to a short network outage. After the interchange, the Processor-
Ethernet-connected endpoints reconnect to the alias IP address, now served by the active server.
The IP connection for the C-LAN-connected endpoints is available during the server interchange.
However, some messages may be lost during the interchange. Normal operation resumes after
the interchange.
Fast server interchange
The fast server interchange process is available only for the devices connected to the Processor
Ethernet on duplicated servers. The branch gateways and IP telephones must have the updated
firmware. The active server preserves information about all the connections and connects to
IP telephones and branch gateways before resuming normal operation. The IP telephones and
branch gateways accept the incoming connection to replace the old connection.
In a scenario where some of the branch gateways and IP telephones are upgraded and others are
not, the following statements are true:
• The upgraded branch gateways and telephones reconnect faster.
• The other branch gateways and telephones take longer to reconnect.
• The other branch gateways and telephones may negatively impact the performance of the
server following the server interchange.
3. To add eth3 to the list of active adapters, type # esxcli network vswitch standard
policy failover set -v vSwitch0 --active-uplinks vmnic0,vmnic3.
The command changes vmnic3 to active mode.
4. To verify the mode of eth3, type # esxcli network vswitch standard policy
failover get -v vSwitch0.
The system displays the following message:
Load Balancing: srcport
Network Failure Detection: link
Notify Switches: true
Failback: true
Active Adapters: vmnic0, vmnic3
Standby Adapters:
Unused Adapters:
Warning:
The management and virtual machine network connections might be interrupted if you
do not use correct network commands. Do not remove or change vmnic0, vmnic1, and
vmnic2 from vSwitches or active modes.
Survivability
Survivability is the ability of the components within the Avaya Aura® solution to function during
and after a natural or man-made disturbance. Avaya qualifies survivability for a given range of
conditions over which the solution will survive.
This section addresses Session Manager and Communication Manager survivability options.
must have the same capacity as the original main server. For example, when a simplex server is
the system-preferred survivable server for a duplex main server, it is configured to have the same
capacities as the duplex servers. This can be done based on its license files.
Depending on the type of failure and how the survivable servers are configured, an individual
survivable server may accept control of all port networks, several port networks, a single port
network, or no port networks. When a LAN or WAN failure occurs in configurations where port
networks are widely dispersed, multiple survivable servers may be required to collectively accept
control with each survivable server controlling some portion of the set of port networks.
When a survivable core server accepts control, it communicates directly with each port network
through the IPSI circuit pack.
Stable calls remain up in the same state as they were before the outage occurred. The stable
calls do not have access to any features such as hold and conference. The state of the stable call
cannot be changed.
Note:
Only one IP address is available to the IP endpoint regardless of the server (main or
survivable) in control.
Note:
Two IP addresses are available to the IP endpoint: the IP address of the main server
and the IP address of the survivable server. If the IP endpoint loses connectivity to its
current primary gatekeeper, the IP device uses the alternate gatekeeper list for automatic
recovery of service.
An IP audio stream requires a VoIP RTP resource from either a TN2302AP IP Media Processor or
a branch gateway. Exactly how many audio streams these resources can support depends on the
codec selection. When the VoIP RTP resource limit is reached, IGAR immediately attempts to
use an alternative path for a bearer connection to the network region of the called party, using
PSTN facilities allocated for the IGAR feature.
In the branch office are the branch SIP endpoints and a branch gateway. The endpoints
are registered to both the core Session Managers and the survivable Session Manager. The
endpoints have the concept of an active controller. The active controller is defined as the
Session Manager to which the endpoints currently have subscriptions established. In sunny day
operations, the core Session Manager is always the active controller. The survivable Session
Manager receives no call traffic. The branch gateway is registered with the main Communication
Manager. In rainy day operations, the survivable Session Manager is always the active controller.
Currently, the only supported network outage is a complete branch WAN outage where all devices
in the branch have lost contact with all devices in the core. Partial network outages are not
guaranteed to exhibit desired redundancy behaviors.
For more information on the survivable remote server as it relates to Session Manager, see
Administering Avaya Aura® Session Manager.
Telephone perspective
Session Manager supports simultaneous registration of telephones, a method that provides
the greatest robustness using the SIP-outbound semantics. This means that SIP telephones
simultaneously register with core Session Managers and the survivable Session Manager. The
telephones accept incoming calls from any of these servers and automatically perform active
controller selection according to existing algorithms. This means that although SIP telephones
can receive calls from any of the registered controllers, telephones initiate calls through only the
highest priority controller, the active controller. With an active controller outage, telephones mark
the next controller in the list as the active controller for outbound services. Upon detecting the
revival of the highest priority server, telephones move back to the revived controller as the active
controller.
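This active-controller selection amounts to walking the ordered registration list and choosing the highest-priority reachable entry. A minimal sketch, with hypothetical controller names:

```python
def pick_active_controller(controllers, reachable):
    """Return the highest-priority reachable controller from an ordered list."""
    for controller in controllers:  # list is ordered highest priority first
        if controller in reachable:
            return controller
    return None  # no controller reachable

# Hypothetical registration list: two core Session Managers, then the survivable one.
CONTROLLERS = ["core-sm-1", "core-sm-2", "branch-sm"]
```

During a WAN outage only the survivable entry is reachable, so it becomes active; when the core revives, selection moves back to the highest-priority controller.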
The connectivity path directly to the Processor Ethernet interface of the server is as follows:
branch gateway ⇔ IP network ⇔ PE interface of the server
Link connectivity between the main call controller and the branch gateway is monitored through
the exchange of keep-alive messages between the two components. If the link between the
active call controller and the branch gateway breaks, the branch gateway tries to reestablish the
link using the alternate gatekeeper list. The alternate gatekeeper list is divided into primary and
secondary addresses. The primary addresses receive priority over the secondary addresses.
In the event of a WAN failure, any IP endpoint or branch gateway that cannot reach the primary
controlling server registers with a survivable remote server controller in survivable mode. In the
duplex server/branch gateway configuration, up to 50 survivable remote servers are available and
ready for the fail-over process. The survivable remote server is always ready to acknowledge
service requests from IP telephones and branch gateways that can no longer communicate with
their main controller. Once the telephones and the branch gateway are registered, end users
at the remote site have full feature functionality. This failover process usually takes less than 3
minutes. After failover, the remote system is stable and autonomous.
When main server is embedded server
In this configuration, the connectivity path between the branch gateway and the embedded
S8300E server is:
Endpoint ⇔ IP Network ⇔ S8300E server
The link failure discovery and recovery process is the same as above, except there are no C-LAN
addresses in the alternate gatekeeper list. In this configuration, up to 10 survivable remote servers
can back up the branch gateways that are controlled by the S8300E server.
Redundancy
Geographic Redundancy overview
Avaya Aura® provides System Manager Geographic Redundancy, a resiliency feature that
handles scenarios where the primary System Manager server fails or the data network partially
loses connectivity. In such scenarios, the system manages and administers products such as
Avaya Aura® Session Manager and Avaya Aura® Communication Manager across the customer
enterprise using the secondary System Manager server.
For customers who need highly fault-tolerant deployments, System Manager supports System
Manager Geographic Redundancy deployments that can provide the Active-Standby mode of
resiliency.
From Release 8.0.1, System Manager also supports Geographic Redundancy in a mixed
deployment environment.
From Release 7.0.1, System Manager supports deployment on different server types and different
deployment modes in Geographic Redundancy. System Manager supports mixed:
• Servers from any combination of Avaya-supplied Avaya Solutions Platform 130 servers
supported for System Manager.
• The primary System Manager server running as the only Avaya application on the server,
while the secondary System Manager runs along with other Avaya applications on another
server, and vice versa.
• Servers from both a customer-provided virtualized environment and Avaya Solutions
Platform 130.
For example, the primary System Manager server can be on Avaya Solutions Platform
130 and the secondary System Manager server can be on a customer-provided virtualized
environment.
The following are some key differences between Geographic Redundancy and High Availability
(HA) solutions:
Geographic Redundancy:
• Addresses sudden, site-wide disasters.
• Distributed across a WAN.
• Failover is manual.
HA:
• Addresses server outages due to network card, hard disk, electrical, or application failure.
• Deployed within a LAN.
• Failover is automated.
You must deploy System Manager on both the standalone servers with separate IP addresses
and configure Geographic Redundancy. If a managed product that supports the Geographic
Redundancy feature loses connectivity to the primary System Manager server, the secondary
System Manager server provides the complete System Manager functionality. However, you must
manually activate the secondary System Manager server.
Note:
Only the system administrator can perform Geographic Redundancy-related operations.
You must reconfigure the elements that do not support Geographic Redundancy so that the
elements can interact with the secondary System Manager server to receive configuration
information. For more information about configuring elements that do not support Geographic
Redundancy, see Geographic Redundancy-unaware elements overview.
During the installation of GR-unaware elements such as Presence Server, you must specify
whether you want to enable the Geographic Redundancy feature on the element.
Out of Band Management in a Geographic Redundancy setup
When you configure Geographic Redundancy, provide Management network details only.
Validation fails if you configure Geographic Redundancy with Public network details. In a
Geographic Redundancy setup, you do not have to enable or disable Out of Band Management
identically on the primary and secondary System Manager virtual machines. You can enable Out
of Band Management on the primary System Manager virtual machine and disable it on the
secondary System Manager virtual machine, and vice versa.
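The network-details validation rule can be sketched as follows. This is a generic illustration of the stated constraint, not the actual System Manager validation code:

```python
def validate_gr_network_details(network_type):
    """Geographic Redundancy accepts Management network details only."""
    if network_type != "Management":
        raise ValueError(
            "Validation failed: configure Geographic Redundancy with "
            "Management network details, not %s" % network_type
        )
```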
Recovery
Network recovery
Conventional wisdom holds that network reliability is typically three nines (99.9%) on a LAN, and
two nines (99%) on a WAN. The leading causes of network failure are a WAN link failure, administrator
error, cable failure, issues that involve connecting new devices or services, and malicious activity,
including DoS attacks, worms, and viruses. Somewhere lower down on the list are equipment
failures. To achieve the highest levels of availability, it is important that a strong change control
policy and network management strategy be implemented.
There are numerous techniques for improving the reliability of data networks, including spanning
tree, self-healing routing protocols, network management, and change control.
Related links
Change control on page 83
Dial backup on page 85
Convergence times on page 85
Change control
Change control describes a process by which an organization can control nonemergency network
changes and reduce the likelihood of administrator errors that cause network disruption. It involves
carefully planning for network changes (including back-out plans), reviewing proposed changes,
assessing risk, scheduling changes, notifying affected user communities, and performing changes
when they will be least disruptive. By implementing a strict change control process, organizations
can reduce the likelihood of administrator errors, which are a major cause of network disruption,
and increase the reliability of their networks.
Related links
Network recovery on page 82
become degraded (such as if a WAN link fails), the secondary router takes over. This is a useful
mechanism to protect endpoints from router failures, and works with IP Telephony endpoints.
Related links
Network recovery on page 82
Multipath routing
Modern routers and Layer 3 switches allow multiple routes for a particular destination to be
installed in the routing table. Depending on the implementation, this can be as high as six routes.
Some implementations require that all routes inserted in the routing table have the same metric,
while others allow unequal-metric routing. When the metric for all installed routes is the same,
the router load balances traffic evenly across each path. When the metrics for multiple routes
vary, the traffic is load balanced in proportion to the metrics (in other words, if one path is twice
as good as another, two-thirds of the traffic travels down the good path, and one-third of the
traffic takes the other one). Asymmetric routing is suboptimal for voice, so route-caching
(described earlier) should be considered in this environment.
In addition to using all (up to six) active paths and optimally using available bandwidth, multipath
routing greatly improves convergence time. As soon as a router detects a path failure, it removes
the path from the routing table and sends all traffic over the remaining links. If this is a physical
link failure, the detection time is nearly instantaneous. Therefore, you must use multipath routing,
where available, across multiple links to a particular location.
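The proportional split can be sketched as weights inversely proportional to the route metric (a lower metric means a better path). With metrics 1 and 2, the better path carries two-thirds of the traffic, matching the example above:

```python
from fractions import Fraction

def traffic_shares(metrics):
    """Split traffic across paths in inverse proportion to their metrics."""
    weights = [Fraction(1, m) for m in metrics]  # weight = 1 / metric
    total = sum(weights)
    return [w / total for w in weights]
```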
Related links
Network recovery on page 82
Dial backup
One cost-effective technique for installing backup WAN links is to use dial backup. This can
be done using either ISDN-BRI or analog lines. ISDN lines typically take 2 s to connect, while
56-kbps analog modems take approximately 1 min. Although this strategy is effective for data
traffic, it is less effective for voice. First, the bandwidth may have been greatly reduced. If this
is the case, the number of voice channels that can be supported might have been reduced
proportionally. Also, if QoS is not properly applied to the backup interface, high packet loss and
jitter can adversely affect voice quality. Finally, the time that is required to establish the new link
can be up to 1 minute, which disrupts active calls. However, providing that these considerations
are taken into account, proper QoS is applied, and a compressed codec is chosen, dial backup
can be an effective solution for two to four users.
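The reduced-bandwidth consideration can be quantified with a rough capacity calculation. The per-call bandwidth figure below, about 26.4 kbps for a compressed codec such as G.729 including IP overhead, is an assumption chosen for illustration only:

```python
def voice_channels(link_kbps, per_call_kbps=26.4):
    """Rough count of voice channels a backup link of the given bandwidth can carry."""
    return int(link_kbps // per_call_kbps)
```

On a 64-kbps or 128-kbps backup link this yields two to four channels, consistent with the sizing guidance above.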
Related links
Network recovery on page 82
Convergence times
Convergence is the time that it takes from the instant a failure occurs in the network until a new
path through the network is discovered, and all routers or switches are aware of the new path.
Convergence times vary, based on the complexity and size of a network. Sample convergence
times (single link failure) on page 86 lists some sample convergence times that are based
on a single link failing in a relatively simple network. They reflect update and/or hello timers
expiring. Dialup convergence times reflect the time that it takes to dial, connect, and authenticate
a connection. These times do not take into account LAG, fast spanning tree, or multipath routing,
which speed up convergence. This table shows the importance of carefully planning for fail-over
in a network. For example, both OSPF and EIGRP (Layer 3) protocols converge faster than
spanning tree (Layer 2). When designing a highly available data network, it is more advantageous
to use Layer 3 protocols, especially link-state (OSPF) or hybrid (EIGRP) protocols, than Layer 2
(spanning tree).
Related links
Network recovery on page 82
IP endpoint recovery
Avaya’s distributed IP-based systems experience increased availability by virtue of the alternate
gatekeeper feature. When IP telephones register with Communication Manager, they are given a
list of alternate gatekeepers to which they can re-register in the event of a failure. Thus, if a C-LAN
fails or becomes unavailable, users registered to a particular C-LAN can re-register to another
C-LAN that is unaffected by the failure.
The Avaya servers have a scalable architecture with different server components. These
components provide processing and relay signaling information between Communication Manager
and the Avaya IP endpoints. The system architecture is inherently distributed, providing the
scalability to support a large number of endpoints and the flexibility to work in various network
configurations.
This distributed nature of the architecture introduces additional complexity in dealing with endpoint
recovery, since failure of any element in the end-to-end connectivity path between an IP endpoint
and the switch software can result in service failure at the endpoint.
The recovery algorithm outlined here deals with detection and recovery from the failure of
signaling channels for IP endpoints. Such failures are due to connectivity outages between the
server and the endpoint, which could be due to failure in the IP network or any other component
between the endpoint and the server.
The connectivity paths between the endpoint and the server are:
Endpoint ⇔ IP network ⇔ C-LAN ⇔ PN backplane ⇔ IPSI ⇔ IP network ⇔ server
Endpoint ⇔ IP network ⇔ Server PE interface
In this configuration, IP endpoints register to the C-LAN circuit pack within the port network or
directly register to the server Processor Ethernet interface.
A C-LAN circuit pack provides two basic reliability functions:
• A C-LAN hides server interchanges from the IP endpoints. The signaling channels of the
endpoints remain intact during server interchanges and do not have to be re-established with
the new active server.
• A C-LAN terminates TCP keep-alive messages from the endpoints and thus frees the server
from handling frequent keep-alive messages.
Recovery algorithm
The recovery algorithm is designed to minimize service disruption to an IP endpoint in the case of
a signaling channel failure. When connectivity to a gatekeeper is lost, the IP endpoint progresses
through three phases:
• Recognition of the loss of the gatekeeper
• Search for (discovery of) a new gatekeeper
• Re-registration
When the IP endpoint first registers with the C-LAN circuit pack, the endpoint receives a list of
alternate gatekeeper addresses from the DHCP server. The endpoint uses the list of addresses to
recover from a signaling link failure to the C-LAN circuit pack and gatekeeper.
When the IP endpoint detects a failure with the signaling channel (H.225/Q.931), its recovery
algorithm depends on the call state of the endpoint:
• If the user of the telephone is on a call and the endpoint loses its call signaling channel, the
new IP robustness algorithm allows the telephone to reestablish the link with its gatekeeper
without dropping the call. As a result, the call is preserved. Call features are not available
during the time the telephone is trying to reestablish the connection.
• If the user of the telephone is not on a call, the telephone closes its signaling channels and
searches for a gatekeeper using the algorithm defined in the following section.
To reestablish the link, the telephone tries to register with a C-LAN circuit pack on its gatekeeper
list. The load balancing algorithm selects the C-LAN on the list with the fewest telephones
registered to it. As a result, the recovery time is short, and there is no congestion from too many
telephones trying to register to a single C-LAN circuit pack.
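The least-loaded selection can be sketched as picking the C-LAN with the minimum registration count; the names and counts below are hypothetical:

```python
def pick_clan(registrations):
    """Pick the C-LAN with the fewest registered telephones (load balancing)."""
    return min(registrations, key=registrations.get)
```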
In this configuration, the telephone registers to the server's Processor Ethernet Interface and the
IP endpoint connects directly to the server Processor Ethernet (there is no C-LAN circuit pack).
The connectivity path between the telephone and the server is:
Endpoint ⇔ IP network ⇔ Server
To discover connectivity failure, keep-alive messages are exchanged between the IP endpoint
and the server. When the endpoint discovers that it no longer has communication with its primary
gatekeeper, it looks at the next address on its list. If the next address is for a survivable remote
server, then that server accepts the registration and begins call processing as long as media
resources are available.
While the survivable server is not call preserving, the fail-over from primary gatekeeper to
survivable server is an automatic process and does not require human intervention. The failback
from a survivable server to a primary gatekeeper, however, is not currently automatic and requires
a system reset on the survivable server. During the fallback to the primary gatekeeper, all calls are
dropped with the exception of IP-to-IP calls.
If reregistration is not required, only the TCP socket must be re-established, which is also done
on demand. Without TTS, in a call center environment, agents must log in again whenever the
endpoint becomes unregistered. Because TTS avoids reregistration after most outages, agents’
logins persist, and the agents do not need to log in again.
Note that reregistration is still required for outages that cause the IP endpoints to fail over to a
survivable server (and then again when they recover back to the main server). In addition, a
Communication Manager reset of level 2 (or higher) or a power cycle on the IP endpoints also
requires IP endpoints to reregister because the information for the registration is erased under
these conditions. For security reasons, IP endpoints also need to reregister with Communication
Manager if they have not been able to communicate with Communication Manager over the RAS
signaling channel for an extended period of time.
Changes in IP endpoints
Time to Service (TTS) features work only if corresponding changes are made to the Avaya
H.323-based IP endpoints. The TTS algorithms are implemented in the IP endpoints. These TTS-
enabled endpoints continue to support previous link recovery algorithms when communicating with
a server that does not support TTS or does not have TTS enabled.
The TTS features work seamlessly with older IP endpoints. However, the benefits of the features
are limited to the TTS-capable endpoints deployed with Communication Manager.
Note:
16xx-series endpoints do not support TTS.
Operation with NAT/firewall environment
With the Time to Service (TTS) algorithm, the TCP connection for the call signaling channel is
initiated by the server, not by the endpoints. In environments with NAT devices or firewalls,
the firewalls must be configured appropriately to allow TCP connections from the server to the
endpoints.
Network outage time line for port networks
Port networks can survive short network outages of up to 15 s. This interval allows
time for Communication Manager and the affected port network to recover from a network outage
without closing the IPSI socket connection, which can cause data loss and port network warm
restarts.
If the network outage is shorter than 15 s (interval A in the figure):
• All stable calls that go through the port network are preserved.
• Most transient calls complete with a delay, but some may fail.
If the network outage is between 15 s and 60 s (interval B in the figure):
• Most stable calls that go through the port network are preserved.
• Most transient calls fail.
If the network outage is longer than 60 s but shorter than the port network cold reset delay timer
setting (interval C in the figure):
• Most stable calls that go through the port network are preserved.
• Most transient calls fail.
If the network outage is longer than the port network cold reset delay timer setting (interval D in
the figure):
• The port network is reset.
• All calls are dropped.
The following figure shows the survivability time line.
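The interval boundaries above can be sketched as a simple lookup. This is an illustrative sketch, not product code: the 15 s and 60 s boundaries come from the text, while the cold reset delay timer is administrable, so the 120 s default used here is only an assumption.

```python
# Map a network outage duration to the expected port-network behavior,
# per intervals A-D described above. The cold reset delay timer default
# is an assumed value for illustration only.

def outage_outcome(outage_s: float, cold_reset_delay_s: float = 120) -> str:
    if outage_s < 15:
        return "A: stable calls preserved; most transient calls complete"
    if outage_s < 60:
        return "B: most stable calls preserved; most transient calls fail"
    if outage_s < cold_reset_delay_s:
        return "C: most stable calls preserved; most transient calls fail"
    return "D: port network reset; all calls dropped"

print(outage_outcome(10))   # interval A
print(outage_outcome(90))   # interval C
print(outage_outcome(300))  # interval D
```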
Performance metrics
The following are Network Readiness Assessment requirements for VoIP specific to quality of a
call.
Metric                        Recommended         Acceptable
One-way Network Delay         < 80 milliseconds   < 180 milliseconds
Network Jitter                < 10 milliseconds   < 20 milliseconds
Network Packet Loss (Voice)   1.0%                3.0%
Network Packet Loss (Video)   0.1%                0.2%
QoS Enabled                   Required            Required
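As a rough illustration, the thresholds in the table can be applied to measured values as follows. The function and metric names are hypothetical, not part of any Avaya tool; the numeric thresholds are taken from the table above.

```python
# Hypothetical helper: rate measured network metrics against the
# Network Readiness Assessment thresholds quoted in the table above.

THRESHOLDS = {
    # metric: (recommended, acceptable) upper bounds
    "one_way_delay_ms":      (80, 180),
    "jitter_ms":             (10, 20),
    "voice_packet_loss_pct": (1.0, 3.0),
    "video_packet_loss_pct": (0.1, 0.2),
}

def rate_link(measurements: dict) -> str:
    """Return 'recommended', 'acceptable', or 'unacceptable'."""
    rating = "recommended"
    for metric, value in measurements.items():
        recommended, acceptable = THRESHOLDS[metric]
        if value > acceptable:
            return "unacceptable"
        if value > recommended:
            rating = "acceptable"
    return rating

print(rate_link({"one_way_delay_ms": 95, "jitter_ms": 8,
                 "voice_packet_loss_pct": 0.5}))  # acceptable
```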
Network delay
In IP networks, packet delay (latency) is the length of time for a packet to traverse the network.
Each element of the network, such as switches, routers, WAN circuits, firewalls, and jitter buffers,
adds to packet delay.
Delay can have a noticeable effect on voice quality, but can be controlled in a private environment,
such as a LAN or a WAN. Enterprises can reduce packet delays by managing the network
infrastructure or by agreeing on a Service Level Agreement (SLA) with their network provider. An
enterprise has less control over the delay when using the public Internet for VoIP.
Previously, ITU-T suggested 150 ms one-way delay as a limit for conversations. However, this
value was largely misinterpreted as the limit to calculate a network delay budget for connections.
Depending on the desired voice quality, network designers can choose to increase or decrease
this number for their network.
Customers must consider the following issues when designing a VoIP network:
• One-way delays of more than 250 ms can cause the well-known problem of talk-over.
Talk-over occurs when both parties talk at the same time because the delay prevents them from
realizing that the other person has already started talking.
• In some applications, delays less than 150 ms can impact the voice quality, particularly when
the voice is accompanied with an echo.
• Long WAN links are a major contributor to the network delay budget, adding
approximately 10-20 ms per 1000 miles. Some transport mechanisms, such as Frame Relay,
can add further delay. As a result, staying within 150 ms, end to end, might not be possible
for all types of connections.
• One-way delays of over 400 ms on signaling links between port networks and the S8300E
server can cause port network instability.
Again, there is a trade-off between voice quality and the technical and monetary constraints
that businesses must consider. For this reason, the following guidelines assist customers in
configuring one-way LAN/WAN delay between endpoints, not including IP telephones:
• 80 ms delay or less provides the best quality.
• 80 ms to 180 ms delay provides Business Communication quality. This delay range is
better than cell phone quality if echo is properly controlled, and is well suited for a majority
of businesses.
• Delays exceeding 180 ms can be acceptable depending on customer expectations, analog
trunks used, codec type, and the presence of echo control feature in endpoints or network
equipment.
Codec delay
In addition to packet delays, codecs also add some delay in the network. The delay of the
G.711 codec is minimal. However, the G.729 codec, for example, adds approximately 10 ms of
algorithmic delay in each direction, another 5 ms look-ahead, and signal processing delays.
The compression algorithm in G.723.1 uses multiple blocks, called frames, of 30 ms of voice
samples per packet, resulting in increased latency compared with codecs configured to use
samples of 20 ms or less per packet.
The G.722 codec adds a 0.82 ms delay.
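The codec figures above feed into an overall one-way delay budget. The following sketch sums the main contributions for a G.729 call; the 10 ms algorithmic and 5 ms look-ahead delays come from the text, while the packetization and jitter-buffer figures are typical assumed values, not Avaya specifications.

```python
# Rough one-way delay budget: codec + packetization + network + jitter
# buffer. Only the G.729 codec figures come from the text; the rest are
# illustrative assumptions.

def one_way_delay_ms(codec_ms, packetization_ms, network_ms, jitter_buffer_ms):
    return codec_ms + packetization_ms + network_ms + jitter_buffer_ms

g729 = one_way_delay_ms(codec_ms=10 + 5,      # algorithmic + look-ahead
                        packetization_ms=20,  # one 20 ms voice frame per packet
                        network_ms=80,        # "recommended" network delay
                        jitter_buffer_ms=20)  # assumed buffer depth
print(g729)         # 135
print(g729 <= 180)  # within the Business Communication quality range
```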
Jitter
Jitter is the variation in packet arrival times in an IP network. To compensate
for jitter, VoIP endpoints contain a de-jitter buffer, also called a jitter buffer. Jitter buffers hold
incoming packets for a specified duration so that voice samples can play at a normal rate to the
user. In doing so, the jitter buffer also adds packet delay.
Excessive jitter can add to delay if the jitter still fits the size of the jitter buffer. Excessive jitter
can also result in packet discard creating voice quality problems when the variation is greater
than the jitter buffer size. The size of the static jitter buffers must be twice the largest statistical
variance between packet arrivals. Dynamic jitter buffers give the best quality. However, the
resizing algorithm of dynamic buffers must not result in adverse effects. Dynamic jitter buffering
can exacerbate problems in an uncontrolled network. The network topology can also affect jitter.
Multiple paths between endpoints, combined with routers that perform load balancing, can contribute
significant amounts of jitter.
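The static sizing rule above (twice the largest variation between packet arrivals) can be sketched as follows. The arrival times are illustrative, assuming packets sent every 20 ms; this is not an Avaya algorithm.

```python
# Sketch of the static jitter-buffer sizing rule quoted above: size the
# buffer at twice the largest observed deviation from the nominal
# packet interval. Arrival times below are illustrative.

SEND_INTERVAL_MS = 20  # one voice packet every 20 ms

def required_static_buffer_ms(arrival_times_ms):
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    worst_variation = max(abs(g - SEND_INTERVAL_MS) for g in gaps)
    return 2 * worst_variation

arrivals = [0, 21, 39, 66, 80, 100]   # ms; worst gap deviates 7 ms from 20 ms
print(required_static_buffer_ms(arrivals))  # 14
```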
The following Avaya products have dynamic jitter buffers to minimize delay by automatically
adjusting the jitter buffer size:
• Avaya G430 and G450 Branch Gateways and the G650 Media Gateway with the TN2302AP
IP Media Processor or TN2602 IP Media Resource 320 circuit pack
• Avaya IP SoftPhone software
Packet loss
Packet loss occurs when the jitter buffer of an endpoint does not receive packets or receives the
packets too late for processing. Excessive delay or out-of-order packets can therefore register as
packet loss. The network might also appear to be losing packets when it intentionally discards
packets that arrive too late at the endpoint. Unintentional packet loss in the network and
discarded packets in the jitter buffers of the receiving endpoints characterize the quality of IP
networks.
Packet loss can be bursty or more evenly distributed. Bursty packet loss has a greater effect on
voice quality than distributed packet loss. Therefore, a 1% bursty loss has a more adverse effect
than a 1% distributed loss.
The following are some effects of packet loss on a VoIP network:
• Every codec has a Packet Loss Concealment (PLC) method, and PLC makes packet loss
harder for the listener to detect. Because of this, a PLC-enabled
compression codec, such as G.729A, can provide better voice quality than a full-bandwidth
G.711 codec without PLC.
• Packet loss is more noticeable for tones such as fax tones or modem tones (other than
DTMF) than for voice. The human ear is more likely to detect packet loss during a tone, which
uses a consistent pitch, than during speech, which uses a variable pitch.
• Packet loss is more noticeable for contiguous packet loss than for random packet loss over
time. For example, the effect of losing 10 contiguous packets is worse than losing 10 packets
evenly spaced over an hour.
• Packet loss is usually more noticeable with larger voice payloads per packet than with
smaller packets, because more voice samples are lost in a larger payload.
• In the presence of packet loss, the time for a codec to return to normal operation depends on
the codec type.
• Even minimal packet loss such as 0.12% can greatly affect the capability of a TTY/TDD
device meant for people who are hard of hearing.
• Packet loss for signaling traffic increases network traffic substantially when the loss exceeds
3%, possibly impacting voice quality.
Echo
The two main types of echo are acoustic echo and electrical echo caused by hybrid impedance
mismatch. Usually, in a two-party call, only the speaker hears an echo but the listener does not.
However, in a conference call, many parties might hear an echo.
Acoustic echo occurs when the voice of the speaker traverses through the airpath in the acoustic
environment of the listener and reflects back to the microphone of the terminal of the listener. The
severity of the echo effect depends on the acoustic properties of the room of the listener, such as,
room size and wall reflection characteristics.
Electrical echo is also a reflection effect but is due to an impedance mismatch between four-wire
and two-wire systems or in the interface between a headset and its adapter.
The perception of echo for the listener increases with delay. Usually, human ears ignore echo
received within 30 ms. However, if the level of the received echo signal is extremely high, even
2 ms of delay causes a perception of echo. Echo received after 30 ms is usually perceived as an
annoyance. The perception of echo can be greater in the IP Telephony system because the
end-to-end latency in some IP Telephony implementations exceeds the latency in some circuit-
switched systems.
To reduce echo, customers must deploy echo cancellers at strategic places in telephones or
network equipment. Echo cancellers, which have varying amounts of memory, store incoming
voice streams in a digital form in a buffer and compare the received voice with the previously
transmitted voice patterns stored in memory. If the patterns match, the echo canceller attempts
to remove the newly received voice stream, but a residual level of echo is left even in optimal
operating conditions.
Echo cancellers function properly only if the one-way delay between the echo canceller and the
echo source, for example, the acoustic airpath at the telephone set or electrical hybrid, is not
larger than the capacity of the echo canceller. Otherwise, the echo canceller does not find a
pattern to cancel.
The Avaya G430 and G450 Branch Gateways, the Avaya TN2302AP IP Media Processor and
TN2602AP IP Media Resource in the G650 Media Gateways, the Avaya IP SoftPhone, and all IP
Telephones incorporate echo cancellation designed for IP Telephony to improve voice quality.
Signal levels
To provide better sound quality in telephone conversations, voice communication systems add an
acoustic loss of 10 dB between the listener and the speaker. This 10 dB acoustic loss provides
the level of sound quality that emulates a scenario where the speaker and listener are only one
meter apart and having a face-to-face conversation. Any significant difference from this loss level
is audible as too soft or too loud and can result in some degree of listener discomfort.
In IP Telephony networks, the voice communication system implements the 10 dB acoustic loss as
follows:
• 8 dB in the telephone of the speaker
• 0 dB in the IP network
• 2 dB in the telephone of the receiver
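The apportionment above can be checked with simple arithmetic; this tiny sketch only restates the figures quoted in the list.

```python
# Consistency check of the 10 dB end-to-end acoustic loss split quoted
# above: speaker telephone + IP network + receiver telephone.

LOSS_PLAN_DB = {
    "speaker_telephone": 8,
    "ip_network": 0,
    "receiver_telephone": 2,
}

total_db = sum(LOSS_PLAN_DB.values())
print(total_db)  # 10 — matches the target acoustic loss
```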
To account for personal preferences or the presence of background noise, listeners can adjust the
volume control of the telephone relative to the 10 dB loss value. The IP Telephony loss values are
globally identical and specified in ITU-T Recommendations.
In traditional circuit-switched networks, the telephone send and receive losses and the inter-port
line or trunk losses are country-dependent. The end-to-end country-specified losses often differ
somewhat from the 10 dB loss value. The country-dependency of loss values makes it more
challenging to guarantee a proper listener signal level when the PSTN is involved or when signals
traverse country borders.
IP Telephony gateways must provide proper signal level adjustments from the IP network to the
circuit-switched network and in the reverse direction, and also between circuit-switched ports.
To configure Avaya endpoints across the globe, the devices must be programmed for loss values.
To ensure that the signal levels are controlled properly within the scope of a voice network
consisting of Avaya systems, customers must administer the appropriate country-dependent loss
plan.
In addition to administering loss for two-party calls, Communication Manager provides country-
dependent conference call loss administration. Loss is applied depending on the number of parties
involved in the conference.
Tone Levels
The level of call progress and DTMF tones played out through telephones must adhere to
specified levels. Different countries follow different tone level standards which can be administered
in Communication Manager. You can adjust the volume of received call progress tones using the
telephone volume control.
Audio codecs
Codecs (Coder-Decoders) convert analog voice signals to digital signals and vice versa. Avaya
supports several different codecs that offer varying bandwidth usage and voice quality. The
following are some codecs that Avaya supports:
• G.711: This codec produces uncompressed audio at 64 kbps.
• G.729: This codec produces compressed audio at 8 kbps.
• G.723.1: This codec produces compressed audio at 5.3 or 6.3 kbps.
• G.722: This codec produces compressed audio at 64, 56, or 48 kbps.
• G.726: This codec produces compressed audio at 32 kbps.
Note:
The Polycom-proprietary Siren codecs are audio only and support wideband. The Siren
codecs include:
• Siren 7, which supports 7 kHz audio bandwidth
• Siren 14, which supports 14 kHz audio bandwidth
In a properly functioning IP network, the G.711 codec offers the highest level of voice quality as
the codec does not use compression. Unfortunately, there is a trade-off with higher bandwidth
usage. In situations where bandwidth is limited, such as across WAN links, G.729 provides good
audio clarity and consumes less bandwidth.
Codecs with compression use twice as many signal processing resources as the G.711 codec.
On the TN2302AP IP Media Processor circuit pack, there are 64 DSP resources. Therefore, one
Media Processor circuit pack or G650 Media Gateway supports:
• A maximum of 64 calls that use the G.711 codec
• A maximum of 32 calls that use the G.729 codec with compression
The formula for calculating the number of calls one Media Processor board supports is
(Number of uncompressed calls) + 2 × (Number of compressed calls) ≤ 64
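The capacity formula above can be expressed as a quick check. The constants come from the text (64 DSP resources per board, compressed calls costing double); the function name is illustrative.

```python
# The Media Processor capacity formula above: a compressed call (e.g.
# G.729) costs twice the DSP resources of an uncompressed G.711 call,
# out of 64 resources per TN2302AP board.

DSP_RESOURCES = 64

def fits_on_board(uncompressed_calls: int, compressed_calls: int) -> bool:
    return uncompressed_calls + 2 * compressed_calls <= DSP_RESOURCES

print(fits_on_board(64, 0))   # True  — all G.711
print(fits_on_board(0, 32))   # True  — all G.729
print(fits_on_board(10, 28))  # False — 10 + 56 = 66 > 64
```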
The TN2602AP circuit pack supports:
• 320 channels of G.711 (u/a-law)
• 320 channels of G.729A/G.729AB
• 320 channels of G.726 (32 kbps only)
• 320 channels of T.38
• 320 channels of V.32 SPRT
The above channel counts are the same if Advanced Encryption Standard (AES) encryption and
SHA-1 authentication are enabled.
The Avaya One-X Deskphones (96xx) support the G.722 codec with 64 kbps and with 20 ms
packets.
Usually, G.711 is used on LANs because bandwidth is abundant and inexpensive whereas G.729
is used across bandwidth-limited WAN links. G430 and G450 Branch Gateways can have varying
amounts of DSP resources depending on the size and number of DAR daughter cards installed.
The functions of these resources are the same as those of the TN2602 IP Media Resource 320 circuit packs.
Video codecs
A video codec is a device or software that enables video compression or decompression or both.
There are various kinds of video codecs available. Several companies have implemented these
codecs with different algorithms; therefore, the codecs have different specifications and applications
in various fields. These video codecs generally comply with industry standards.
Avaya uses the following signaling and content codecs in video:
• Video codecs for transmitting content:
- H.261: An early ITU-T video codec and the first to support video over ISDN.
- H.263: This is a well-known video-conferencing codec that is optimized for low data rates
and low motion.
- H.264: This codec supports high definition video and is used by Blu-Ray players, YouTube,
iTunes, and Adobe Flash. You can use this codec for high definition video conferencing.
• Video codecs for multimedia signaling:
- H.224: This codec is well-supported by Microsoft and is primarily used by soft clients, such
as Avaya one-X® Communicator, that support video signaling.
- H.224.1 (data, far-end camera control): This signaling codec is used by video conferencing
companies like Polycom and LifeSize.
Various video codecs are technically differentiated from each other based on several factors such
as compression technology, compression algorithm, supported platform, sampling, and supported
OS.
Transcoding overview
Transcoding or tandeming occurs when a voice signal passes through multiple codecs, for
example, when the call coverage is applied on a branch office system to a centralized voice mail
system. These calls might experience multiple transcodings including, for example, G.729 across
the WAN and G.711 into the voice mailbox. Each transcoding action results in degradation of voice
quality. Avaya products minimize transcoding using methods such as shuffling and hairpinning.
Layer Description
Core The core layer is the heart of the network. The core layer forwards packets as quickly
as possible. The core layer must be designed with high availability in mind. Usually,
these high-availability features include redundant devices, redundant power supplies,
redundant processors, and redundant links. Today, core interconnections increasingly
use 10 Gigabit Ethernet or higher.
Distribution The distribution layer links the access layer with the core. The distribution layer is
where policy like the QoS feature and access lists are applied. Generally, Gigabit
Ethernet connects to the core, and either Gigabit Ethernet or 100base-TX/FX links
connect the access layer. Redundancy is important at this layer but not as important as
in the core. This layer is combined with the core in smaller networks.
Access The access layer connects servers and workstations. Switches at this layer are smaller,
usually 24 to 48 ports. Desktop computers, workstations, access points, and servers
are usually connected at 100 Mbps or 1 Gbps. Limited redundancy is used. Some QoS
and security features can be implemented in the access layer.
Mostly, Power over Ethernet (PoE) is included to power IP telephones and other
access devices.
For IP Telephony to work well, WAN links must be properly sized with sufficient bandwidth for
voice and data traffic. Each voice call uses 9.6 kbps to 120 kbps, depending on the desired
codec, payload size, and header compression used. Additional bandwidth might be used if video
or redundancy for fax, modem, and TTY is implemented. The addition of video can stress WAN
links engineered for voice only. WAN links must be re-engineered when video is introduced to an
existing network. The G.729 compression algorithm, which uses about 27 kbps of bandwidth, is
one of the most used standards today. Traditional telephone metrics, such as average call volume,
peak volume, and average call length, can be used to size interoffice bandwidth demands. For
more information, see Traffic engineering on page 157.
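The per-call bandwidth range above depends mainly on codec rate, payload size, and header overhead. The following back-of-envelope sketch assumes 20 ms packets and 40 bytes of IP/UDP/RTP headers; Layer 2 overhead, which pushes G.729 toward the ~27 kbps quoted above, is deliberately left out.

```python
# Back-of-envelope per-call WAN bandwidth: codec payload plus IP (20) +
# UDP (8) + RTP (12) header bytes per packet, at one packet per 20 ms.
# Layer 2 framing overhead is excluded from this estimate.

HEADER_BYTES = 40
PACKETS_PER_SEC = 50  # one packet per 20 ms

def call_bandwidth_kbps(codec_kbps: float) -> float:
    payload_bytes = codec_kbps * 1000 / 8 / PACKETS_PER_SEC
    return (payload_bytes + HEADER_BYTES) * 8 * PACKETS_PER_SEC / 1000

print(call_bandwidth_kbps(64))  # G.711 -> 80.0 kbps
print(call_bandwidth_kbps(8))   # G.729 -> 24.0 kbps
```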
Quality of Service (QoS) also becomes increasingly important with WAN circuits. In this case, QoS
means the classification and the prioritization of real-time traffic such as voice, video, or FoIP.
Real-time traffic must be given absolute priority through the WAN. If links are not properly sized or
queuing strategies are not properly implemented, the quality and the timeliness of voice and data
traffic will be less than optimal.
The following WAN technologies are commonly used with IP Telephony:
• Multiprotocol Label Switching (MPLS)
• Asynchronous Transfer Mode (ATM)
• Frame Relay
• Point-to-point (PPP) circuits
• Internet VPNs
MPLS, ATM, Frame Relay, and PPP circuits, all have good throughput, low latency, and low jitter.
MPLS and ATM have the added benefit of enhanced QoS. MPLS is a relatively new service
offering and can have issues with momentary outages of 1 to 50 seconds.
Frame Relay WAN circuits can be difficult to use with IP Telephony. Congestion in Frame Relay
networks can cause frame loss, which can significantly degrade the quality of IP Telephony
conversations. With Frame Relay, proper sizing of the committed information rate (CIR) is critical.
In a Frame Relay network any traffic that exceeds the CIR is marked as discard eligible, and is
discarded at the option of the carrier if it experiences congestion in its network. Because voice
packets and other real-time packets must not be dropped during periods of congestion, CIR must
be sized to maximum traffic usage. Also, Service Level Agreements (SLAs) must be established
with the carrier to define maximum levels of delay and frame loss and remediation if the agreed-to
levels are not met.
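The CIR sizing guidance above can be sketched as a simple calculation: size the CIR to peak voice load so that voice frames are never marked discard eligible. The call counts and the per-call rate below are assumptions for illustration.

```python
# Frame Relay CIR sizing sketch per the guidance above: CIR must cover
# maximum simultaneous voice usage (plus any data traffic to protect)
# so voice frames are never marked discard eligible.

def required_cir_kbps(max_concurrent_calls: int, per_call_kbps: float,
                      data_floor_kbps: float = 0.0) -> float:
    """CIR sized to peak voice load plus protected data traffic."""
    return max_concurrent_calls * per_call_kbps + data_floor_kbps

# e.g. 10 simultaneous G.729 calls at ~27 kbps each over the PVC
print(required_cir_kbps(10, 27))  # 270.0
```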
Internet VPNs are economical but more prone to quality issues than the other four technologies
because there is no control or SLA to modify the handling of voice packets over data packets.
Voice quality
Defining good voice quality varies with business needs, cultural differences, customer
expectations, and the hardware and software used. The requirements set forth are based on
the ITU-T and EIA/TIA guidelines and extensive testing. Avaya requirements meet or exceed most
customer expectations. However, the final determination of acceptable voice quality lies with the
customer definition of quality and the design, implementation, and monitoring of the end-to-end
data network.
Quality is not a discrete value where the low side is good and the high side is bad. A trade-off
exists between real-world limits and acceptable voice quality. Lower delay, jitter, and packet loss
values can produce the best voice quality, but might also come with a cost to upgrade the network
infrastructure to get to the low values. Another real-world limit is the inherent WAN delay. An IP
trunk that links the west coast of the United States to India could add a fixed delay of 150 ms into
the overall delay budget.
Perfectly acceptable voice quality is attainable but will not be toll quality. Therefore, Avaya
presents a tiered choice of elements that make up the requirements.
The critical objective factors in assessing IP Telephony quality are delay, jitter, and packet loss. To
ensure good and consistent levels of voice quality, Factors that affect voice quality on page 102
lists Avaya’s suggested network requirements. These requirements are valid for both LAN only
and for LAN and WAN connections. Note that all measurement values are between endpoints and
therefore reflect the performance of the network without endpoint consideration.
For more information, see Voice quality network requirements on page 91.
Best practices
To consistently ensure the highest quality voice, you must follow industry best practices when
implementing IP Telephony. Note that these suggestions are only options and might not fit
individual business needs in all cases.
• QoS/CoS
QoS for real-time packets is obtained only after a Class of Service (CoS) mechanism tags
voice packets as having priority over data packets. Networks with periods of congestion can
still provide excellent voice quality when using a QoS/CoS policy. The recommendation for
switched networks is to use IEEE 802.1p/Q. The recommendation for routed networks is
to use DiffServ Code Points (DSCP). The recommendation for mixed networks is to use
both. You can also use port priority to enhance DiffServ and IEEE 802.1p/Q. Even networks
with sufficient bandwidth should implement CoS/QoS to protect voice communications from
periods of unusual congestion that, for example, a computer virus might cause.
• Switched network
A fully switched LAN network is a network that allows full duplex and full endpoint bandwidth
for every endpoint that exists on that LAN. Although IP Telephony systems can work in
a shared or hub-based LAN, a switched network provides consistently high results to IP
Telephony.
• Network assessment
A Basic Network Readiness Assessment Offer from Avaya is vital to a successful
implementation of IP Telephony products and solutions. Go to the Avaya Support website
at https://support.avaya.com for current documentation, product notices, knowledge articles
related to the topic, or to open a service request.
• VLANs
Placing voice packets on a separate VLAN or subnetwork from data packets is a generally
accepted practice to reduce broadcast traffic and to reduce contention for the same
bandwidth as voice. Note that Avaya IP Telephones provide excellent broadcast storm
protection. Other benefits become available when using VLANs, but there can be a
substantial cost with initial administration and maintenance.
Common issues
Some common negative practices that can severely impact network performance, especially when
using IP Telephony, include:
• A flat, non-hierarchical network
For example, cascading small workgroup switches together in a flat non-hierarchical network.
This technique quickly results in bottlenecks, because all traffic must flow across the uplinks
at a maximum of 10 Gbps, versus traversing switch fabric at speeds of 256 Gbps or greater.
The greater the number of small switches or layers, the greater the number of uplinks, and
the lower the bandwidth for an individual connection. Under a network of this type, voice
performance can quickly degrade to an unacceptable level.
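The uplink bottleneck described above can be quantified as an oversubscription ratio. The port counts and speeds below are assumptions for the example, not figures from the text.

```python
# Oversubscription illustration for the flat-network bottleneck above:
# edge ports can offer far more traffic than a daisy-chained uplink
# carries. Port counts and speeds are assumed values.

def oversubscription_ratio(ports: int, port_gbps: float, uplink_gbps: float) -> float:
    """Ratio of total edge capacity to uplink capacity."""
    return ports * port_gbps / uplink_gbps

# 48 gigabit ports funneled through a single 10 Gbps uplink
print(oversubscription_ratio(48, 1, 10))  # 4.8
```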
• Multiple subnets on a VLAN
A network of this type can have issues with broadcasts, multicasts, and routing protocol
updates. This practice can have a significant negative impact on voice performance and
complicate troubleshooting.
• A hub-based network
All hubs must be replaced with switches if they will lie in the path of IP telephony. Hubs are
half-duplex by definition and can degrade the performance of real-time communications over
IP.
• Too many access lists
Access lists slow down a router. While access lists are appropriate for voice networks,
care must be taken not to apply them to unnecessary interfaces. Traffic should be modeled
beforehand and access lists applied only to the appropriate interface in the appropriate
direction, not to all interfaces in all directions.
Customers must exercise caution when using the following:
• Network Address Translation (NAT)
IP Telephony cannot work across NAT when private IP addresses are exchanged in
signaling messages, because these addresses are not reachable from the public side of the NAT and
cannot be used for the media sessions.
• Analog dial-up
Be careful in using analog dial-up (56 kbps) to connect two locations. Upstream bandwidth
is limited to a maximum of 33.6 kbps and is often lower, resulting in
insufficient bandwidth to provide quality voice. Some codecs and network parameters provide
connections that are acceptable, but consider each connection individually.
• Virtual Private Network (VPN)
Large delays are inherent in some VPN software products due to encryption, decryption, and
additional encapsulation. Some hardware-based products that encrypt at near wire speed
can be used. In addition, if the VPN is run over the Internet, sufficient quality for voice cannot
be guaranteed unless delay, jitter, and packet loss are contained within the listed parameters.
LAN issues
This section covers Local Area Network (LAN) issues, including speed and duplex, inline power,
and hubs versus switches.
General guidelines
Because of the time-sensitive nature of IP telephony applications, IP telephony should be
implemented on an entirely switched network. Ethernet collisions, which are a major contributor to
delay and jitter, are virtually eliminated on switched networks. Additionally, the C-LAN, Media
Processor circuit, and IP telephones should be placed on a separate subnetwork or VLAN
(that is, separated from other non-IP telephony hosts). This separation provides for a cleaner
design where IP telephony hosts are not subjected to broadcasts from other hosts and where
troubleshooting is simplified. This separation also provides a routed boundary between the IP
telephony segments and the rest of the enterprise network, where restrictions can be placed to
prevent unwanted traffic from crossing the boundary. When personal computers are attached to IP
telephones, the uplink to the Ethernet switch should be a 100 Mbps link or greater, so that there is
more bandwidth to be shared between the telephone and the computer.
Large, flat subnets with thousands of devices are not a supported configuration for Avaya
solutions. If IP telephones and Avaya servers must share a subnetwork with other hosts, place the
IP telephones and Avaya servers on a subnetwork of manageable size (24-bit
subnet mask or longer, with 254 hosts or fewer), with as low a rate of broadcasts as possible. A
worst-case example is a scenario where IP telephones and Avaya servers are
deployed on a large subnetwork that runs IPX or another broadcast-intensive protocol, with
broadcasts approaching 500 per second. The ARP cache is limited to 1024 entries; when the ARP
cache is full, the device cannot communicate with any new hosts until existing entries time
out. Therefore, segregating the network into smaller subnets, such as /24, creating VLANs, or
doing both is strongly recommended.
Ethernet switches
The following recommendations apply to Ethernet switches to optimize operation with Avaya
endpoints. These recommendations are meant to provide the simplest configuration by removing
unnecessary features.
• Enable spanning tree fast start feature or disable spanning tree at the port level. The
Spanning Tree Protocol (STP) is a Layer 2 loop-avoidance protocol. When a device is first
connected or reconnected to a port that is running spanning tree, the port takes 31 to 50 s
to cycle through the Blocking, Listening, and Learning states. This delay is neither necessary
nor desired on ports that are connected to IP endpoints. Instead, enable a fast start feature
on these ports to put them into the Forwarding state almost immediately. If this feature is
not available, you can consider the option of disabling the spanning tree on the port. Do
not disable spanning tree on an entire switch or VLAN. Also, Rapid Spanning Tree Protocol
(802.1w) is always preferred over STP (802.1D). When using RSTP, the Ethernet switch
ports connected to IP phones must be in the Edge-Type mode. This places the port in a
fast-start mode. Bridge Protocol Data Unit (BPDU) guard is also desirable if it is available on
the Ethernet switch to protect against a loop created through the IP phone.
• Disable the vendor features that are not required. Some vendor features that are not required
by Avaya endpoints include EtherChannel/LAG, CDP, and proprietary (not 802.3af) inline
power. These features are nonstandard mechanisms that are relevant only to vendor-specific
devices and can sometimes interfere with Avaya devices.
• Properly configure 802.1Q trunking on Cisco switches. When trunking is required on a Cisco
CatOS switch that is connected to an Avaya IP telephone, enable it for 802.1Q encapsulation
in the no-negotiate mode. This causes the port to become a plain 802.1Q trunk port with no
Cisco autonegotiation features. When trunking is not required, explicitly disable it.
Speed and duplex
One major issue with Ethernet connectivity is proper configuration of the speed and duplex
settings. The speed and duplex settings must be configured properly and must match.
A duplex mismatch condition results in a state where one side perceives a high number of
collisions, while the other side does not. This results in packet loss. Although it degrades
performance in all cases, this level of packet loss might go unnoticed in a data network because
protocols such as TCP retransmit lost packets. In voice networks, however, this level of packet
loss is unacceptable. Voice quality rapidly degrades in one direction. When voice quality problems
occur, first check for duplex mismatches.
The best practice is to use autonegotiation on both sides of an IP connection. You can also
lock down interfaces on both sides of a link. However, this practice often requires
coordination between the Ethernet switch data team and the voice team. Gigabit links should
always use autonegotiation. For details on all aspects of autonegotiation and lockdown, see the
Ethernet Link Guidelines for Avaya Aura Unified Communications Products whitepaper at http://
support.avaya.com/.
Whether you choose autonegotiation or lockdown, make sure that both ends of the link use the
same mode, which should result in 100 Mbps full duplex for 10/100 Mbps links. Also, ensure that
Gigabit links result in 1 Gbps full duplex in autonegotiation mode.
Caution:
Do not use the autonegotiation mode on one side of the IP connection and the lock down
mode on the other side as this can result in a duplex mismatch and cause voice quality and
signaling problems.
Virtual LANs
Virtual Local Area Networks (VLANs) are an often-misunderstood concept. This section defines
VLANs and addresses configurations that require the Avaya IP telephone to connect to an
Ethernet switch port that is configured for multiple VLANs. The IP telephone is on one VLAN,
and a personal computer that is connected to the telephone is on a separate VLAN. Two sets of
configurations are given: Cisco CatOS and Cisco IOS.
VLAN defined
With simple Ethernet switches, the entire switch is one Layer 2 broadcast domain that usually
equates to one IP subnetwork (Layer 3 broadcast domain). Consider a single VLAN on a VLAN
capable Ethernet switch as being equivalent to a simple Ethernet switch. A VLAN is a logical
Layer 2 broadcast domain that typically equates to one IP subnetwork. Therefore, multiple VLANs
are the same as logically separated subnetworks. This arrangement is analogous to multiple switches
being physically separated subnetworks. A Layer 3 routing process is required to route between
VLANs. This routing process can take place on a connected router or a router module within a
Layer 2/Layer 3 Ethernet switch. If no routing process is associated with a VLAN, devices on that
VLAN can only communicate with other devices on the same VLAN.
Port or native VLAN
Port VLAN and native VLAN are synonymous terms. The IEEE 802.1Q standard and most vendor
switches use the term port VLAN, but Cisco switches use the term native VLAN.
Every port has a port VLAN or a native VLAN. Unless otherwise configured, VLAN 1 is the default
VLAN. It can be configured on a per-port basis or over a range of ports.
All untagged Ethernet frames (with no 802.1Q tag, for example, from a personal computer) are
forwarded on the port VLAN or the native VLAN. This is true even if the Ethernet switch port is
configured as an 802.1Q trunk or otherwise configured for multiple VLANs.
Trunk configuration
A trunk port on an Ethernet switch is one that is capable of forwarding Ethernet frames on multiple
VLANs through the mechanism of VLAN tagging. IEEE 802.1Q specifies the standard method for
VLAN tagging.
A trunk link is a connection between two devices across trunk ports. This connection can be
between a router and a switch, between two switches, or between a switch and an IP telephone.
Some form of trunking or forwarding of multiple VLANs must be enabled to permit the IP telephone
and the attached personal computer to appear on separate VLANs.
Note that Cisco and other vendor switches can remove VLANs from a trunk port. This feature
is highly desirable because only a maximum of two VLANs should appear on a trunk port that
is connected to an IP telephone. That is, broadcasts from nonessential VLANs should not be
permitted to bog down the link to the IP telephone. Cisco IOS switches can have an implicit trunk
that contains only two VLANs, one for data and one for voice. You can configure an implicit trunk
using the following commands:
• switchport access vlan <vlan-id>
• switchport voice vlan <vlan-id>
Trunking for one-X Communicator and other softphones
You can set the Layer 2 priority on a softphone (or physical phone) using IEEE-802.1p bits in the
IEEE-802.1Q VLAN tag. This is useful if the telephone and the attached personal computer are on
the same VLAN (same IP subnetwork), but the telephone traffic requires higher priority (see Trunking
for softphones or physical phones on page 108). Enable 802.1Q tagging on the IP phone, set the
priorities as desired, and set the VID to zero. As per the IEEE standard, a VID of zero assigns the
Ethernet frame to the port VLAN or the native VLAN.
Cisco switches function differently in this scenario, depending on the hardware platforms and OS
versions.
Note:
Setting a Layer 2 priority is useful only if QoS is enabled on the Ethernet switch. Otherwise,
the priority-tagged frames are treated the same as clear frames.
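For reference, the 802.1Q tag's 16-bit Tag Control Information (TCI) field carries the 3-bit 802.1p priority, a drop-eligible bit, and the 12-bit VID. The following sketch (illustrative only, not tied to any particular phone firmware) shows how a priority-tagged frame with a VID of zero is encoded:

```python
# Sketch: how the 802.1Q TCI field encodes Layer 2 priority.
# TCI = 3-bit priority (802.1p) | 1-bit DEI | 12-bit VLAN ID.
def tci(priority: int, vid: int, dei: int = 0) -> int:
    if not (0 <= priority <= 7 and 0 <= vid <= 4095 and dei in (0, 1)):
        raise ValueError("priority is 3 bits, VID is 12 bits, DEI is 1 bit")
    return (priority << 13) | (dei << 12) | vid

# Priority 6 with VID 0: the frame is "priority-tagged", so the switch
# forwards it on the port/native VLAN while still honoring the priority bits.
assert tci(6, 0) == 0xC000
assert tci(6, 0) & 0x0FFF == 0  # VID of zero
```

A VID of zero is exactly the case described above: the priority bits are carried, but VLAN assignment falls back to the port or native VLAN.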
WAN
Because of the higher costs and lower bandwidths involved, there are some fundamental
differences in running IP telephony over a Wide Area Network (WAN) versus a LAN. Because more
problems occur in a WAN environment, you must pay close attention to network optimization and
proper network design.
WAN QoS
In particular, Quality of Service (QoS) becomes more important in a WAN environment than
in a LAN. In many cases, transitioning from the LAN to the WAN reduces bandwidth by
approximately 99%. Because of this severe bandwidth crunch, strong queuing, buffering, and
packet loss management techniques have been developed. These techniques are covered in more
detail in Quality of Service guidelines on page 125.
Recommendations for QoS
For both small and medium customers, a simple configuration is more effective than a complex
configuration when implementing QoS for voice, data, signaling and video. If traffic engineering
is done properly and sufficient bandwidth is available, especially for WAN links, voice and voice
signaling traffic can both be tagged as DSCP 46. This Class of Service (CoS) tagging places both
types of packets into the same High Priority queue with a minimum of effort. The key is to have
enough bandwidth to prevent any packets from dropping.
Large enterprises and multinational companies might find a stratified approach to CoS more
beneficial. This approach allows maximum control for many data and voice services. For this
environment, customers must use DSCP 46 (Expedited Forwarding) for voice (bearer), but
voice signaling and, especially, IPSI signaling could have their own DSCP values and dedicated
bandwidth. This prevents bearer traffic from contending with signaling. Although the configuration
can be more complex to manage and administer, the granularity will give the best results and is
regarded as a best practice.
For the routers, customers must use strict priority queuing for voice packets and weighted-fair
queuing for data packets. Voice packets should always get priority over non-network-control data
packets. This type of queuing is called Class-Based Queuing (CBQ) on Avaya data networking
products or Low-Latency Queuing (LLQ) on Cisco routers.
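For reference, the DSCP value rides in the upper 6 bits of the IP header's Differentiated Services byte. The following Python sketch (a generic illustration, not Avaya-specific configuration) shows how DSCP 46 maps to the on-wire byte and to the legacy IP Precedence value that older equipment reads:

```python
# Sketch: mapping a DSCP value such as 46 (Expedited Forwarding) onto the
# 8-bit Differentiated Services field of the IP header.
def dscp_to_ds_byte(dscp: int) -> int:
    """DSCP occupies the upper 6 bits of the DS (former TOS) byte."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value (0-63)")
    return dscp << 2  # low 2 bits are ECN, left as 0 here

def dscp_to_ip_precedence(dscp: int) -> int:
    """Legacy IP Precedence is the top 3 bits of the DSCP."""
    return dscp >> 3

# DSCP 46 appears on the wire as DS byte 0xB8, which legacy equipment
# interprets as IP Precedence 5.
assert dscp_to_ds_byte(46) == 0xB8
assert dscp_to_ip_precedence(46) == 5
```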
Codec selection and compression
Because of the limited bandwidth available on the WAN, using a compression codec allows
efficient use of resources without a significant decrease in voice quality. IP telephony
implementations across a WAN must use the G.729 codec with 20 ms packets. This configuration
uses 24 kbps (excluding Layer 2 overhead), 30% of the bandwidth of the G.711 uncompressed
codec (80 kbps).
To conserve bandwidth, RTP header compression (cRTP) can be used on point-to-point links.
cRTP reduces the IP/UDP/RTP overhead from 40 bytes to 4 bytes. With 20 ms packets, this
translates to a savings of 14.4 kbps, making the total bandwidth required for G.729 approximately
9.6 kbps. The trade-off for cRTP is a higher CPU utilization on the router. The processing power
of the router determines the amount of compressed RTP traffic that the router can handle. Avaya
testing indicates that a typical small branch-office router can handle 768 kbps of compressed
traffic. Larger routers can handle greater amounts. cRTP is available on several Avaya secure
routers (1000–series, 2330, 3120, and 4134) and on the Extreme, Juniper, Cisco, and other
vendor routers.
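The bandwidth figures above follow from simple arithmetic: each stream sends one packet per packetization interval, and each packet carries 40 bytes of IP/UDP/RTP headers (4 bytes with cRTP). A Python sketch of that calculation, excluding Layer 2 overhead as in the text:

```python
# Sketch of per-call VoIP bandwidth (Layer 2 overhead excluded).
def voip_bandwidth_kbps(codec_kbps: float, packet_ms: int,
                        header_bytes: int = 40) -> float:
    """Codec payload rate plus IP/UDP/RTP header overhead.
    header_bytes is 40 uncompressed (IP 20 + UDP 8 + RTP 12), or 4 with cRTP."""
    packets_per_second = 1000 / packet_ms
    header_kbps = header_bytes * 8 * packets_per_second / 1000
    return codec_kbps + header_kbps

assert voip_bandwidth_kbps(64, 20) == 80.0                 # G.711, 20 ms packets
assert voip_bandwidth_kbps(8, 20) == 24.0                  # G.729, 20 ms packets
assert voip_bandwidth_kbps(8, 20, header_bytes=4) == 9.6   # G.729 with cRTP
```

The 14.4 kbps savings quoted in the text is the difference between the uncompressed and compressed header overhead: 36 saved bytes per packet at 50 packets per second.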
Serialization delay
Serialization delay refers to the delay that is associated with sending bits across a physical
medium. Serialization delay is important to IP telephony because this delay can add significant
jitter to voice packets, and impair voice quality.
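As a rough sketch of the effect: serialization delay is simply frame size divided by link rate, so a large data frame queued ahead of a voice packet on a slow WAN link adds tens of milliseconds of jitter, while the same frame is negligible on a fast LAN:

```python
# Sketch: serialization delay = frame size / link rate.
def serialization_delay_ms(frame_bytes: int, link_kbps: float) -> float:
    return frame_bytes * 8 / link_kbps  # (bits) / (kbit/s) = milliseconds

# A 1500-byte data frame occupies a 128 kbps link for nearly 94 ms, but the
# same frame on a 100 Mbps LAN takes only 0.12 ms.
assert serialization_delay_ms(1500, 128) == 93.75
assert serialization_delay_ms(1500, 100_000) == 0.12
```

This is why fragmentation or interleaving of large data frames is commonly applied on slow WAN links carrying voice.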
Network design
Routing protocols and convergence
While designing an IP telephony network across a WAN, care should be taken when selecting a
routing protocol or a dial-backup solution. Different routing protocols have different convergence
times, which is the time that it takes to detect a failure and route around it. While a network is in
the process of converging, all voice traffic is lost.
The selection of a routing protocol depends on several factors:
• If a network has a single path to other networks, static routes are sufficient.
• If multiple paths exist, is convergence time an issue? If yes, Enhanced Interior Gateway
Routing Protocol (EIGRP) and Open Shortest Path First (OSPF) are appropriate.
• Are open standards-based protocols required? If yes, OSPF and RIP are appropriate, but not
EIGRP or IGRP, which are Cisco proprietary.
In general, use OSPF when a dynamic routing protocol is required. OSPF allows relatively fast
convergence and does not rely on proprietary networking equipment.
In many organizations, because of the expense of dedicated WAN circuits, dial-on-demand circuits
are provisioned as backup if the primary link fails. The two principal technologies are ISDN (BRI)
and analog modem. ISDN dial-up takes approximately 2 s to connect and offers 64 kbps to
128 kbps of bandwidth. Analog modems take approximately 60 s to connect and offer up to 56
kbps of bandwidth. If G.729 is used as the codec, either technology can support IP telephony
traffic. If G.711 is used as the codec, only ISDN is appropriate. Also, because of the difference in
connection time, ISDN is the preferred dial-on-demand technology for implementing IP telephony.
Multipath routing
Many routing protocols, such as OSPF, install multiple routes for a particular destination into a
routing table. Many routers attempt to load-balance across these paths. There are two methods
for load balancing across multiple paths. The first method is per-packet load balancing, where
each packet is serviced in a round-robin fashion across the links. The second method is
per-flow load balancing, where all packets in an identified flow (source and destination addresses
and ports) take the same path. IP telephony does not operate well over per-packet load-balanced
paths. This type of setup often leads to choppy voice quality. In situations with multiple active
paths, you must use per-flow load balancing instead of per-packet load balancing.
Balancing loads per-flow
About this task
In the presence of multiple links, data can be balanced across them in a round-robin fashion for
either individual packets or streams (flows) of data. Real-time media like voice and video should
use flow-based balancing only.
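A per-flow balancer can be sketched as a hash of the flow's 5-tuple modulo the number of links, so every packet of a given RTP stream takes the same path. The function and field names below are illustrative, not any vendor's implementation:

```python
# Sketch of per-flow load balancing: hash the 5-tuple so all packets of a
# flow take the same link, preserving packet order for RTP streams.
import zlib

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: str, num_links: int) -> int:
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    return zlib.crc32(flow) % num_links

# Every packet of this (hypothetical) RTP stream maps to the same link;
# per-packet round-robin would alternate links and risk reordering and jitter.
link_a = pick_link("10.1.1.10", "10.2.2.20", 2048, 2049, "udp", 2)
link_b = pick_link("10.1.1.10", "10.2.2.20", 2048, 2049, "udp", 2)
assert link_a == link_b
```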
Frame relay
The nature of Frame Relay poses a challenge for IP telephony, as described in this section.
Overview of frame relay
Frame Relay service is composed of three elements: the physical access circuit, the Frame Relay
port, and the virtual circuit. The physical access circuit is usually a T1 or fractional T1 and is
provided by the local exchange carrier (LEC) between the customer premise and the nearest
central office (CO). The Frame Relay port is the physical access into the Frame Relay network, a
port on the Frame Relay switch itself.
The access circuit rate and the Frame Relay port rate must match to eliminate the possibility
of discarded packets during periods of congestion. The virtual circuit is a logical connection
between Frame Relay ports that can be provided by the LEC for intra-LATA Frame Relay or by
the inter-exchange carrier (IXC) for inter-LATA Frame Relay. The most common virtual circuit is
a permanent virtual circuit (PVC), which is associated with a committed information rate (CIR).
The PVC is identified at each end by a separate data-link connection identifier (DLCI), as shown
in Data-link connection identifiers over an interexchange carrier Frame Relay network on page 111.
Figure 14: Data-link connection identifiers over an interexchange carrier Frame Relay network
This hypothetical implementation shows the Dallas corporate office connected to three branch
offices in a common star topology (or hub and spoke). Each office connects to a LEC CO over
a fractional T1 circuit, which terminates onto a Frame Relay port at the CO, and on to a Frame
Relay capable router at the customer premise. The port rates and the access circuit rates match.
PVCs are provisioned within the Frame Relay network between Dallas and each branch office.
The CIR of each PVC is sized so that it is half the respective port rate, which is a common
implementation. Each branch office is guaranteed its respective CIR, but it is also allowed to burst
up to the port rate without any guarantees.
The port rate at Dallas is not quite double the aggregate CIR, but it does not need to be, because
the expectation is that not all three branch offices will burst up to the maximum at the same time.
In an implementation like this, the service is probably negotiated through a single vendor. But it
is likely that Dallas and Houston are serviced by the same LEC, and that the Frame Relay is
intra-LATA, even if the service was negotiated through an IXC, such as AT&T or Sprint. The service
between Dallas and the other two branch offices, however, is most likely inter-LATA.
A frame relay issue and alternatives
The obstacle in running IP telephony over Frame Relay involves the treatment of traffic within and
outside the CIR, commonly termed the burst range.
As Committed information rate (burst range) on page 112 shows, traffic up to the CIR is
guaranteed, whereas traffic beyond the CIR usually is not. This is how Frame Relay is intended
to work. CIR is a committed and reliable rate, whereas burst is a bonus when network conditions
permit it without infringing upon the CIR of any user. For this reason, burst frames are marked
as discard eligible (DE) and are queued or discarded when network congestion exists. Although
customers can achieve significant burst throughput, burst throughput is unreliable, unpredictable,
and not suitable for real-time applications like IP telephony.
Therefore, the objective is to prevent voice traffic from entering the burst range and being marked
DE. One way to accomplish this is to prohibit bursting by shaping the traffic to the CIR and setting
the excess burst size (Be, which determines the burst range) to zero. However, this also prevents
data traffic from using the burst range.
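The shaping arithmetic can be sketched as follows: the committed burst Bc is the number of bits the shaper may send per measurement interval Tc, and with Be set to zero nothing is ever sent beyond the CIR. The 384 kbps CIR and 10 ms Tc below are illustrative values, not quoted recommendations:

```python
# Sketch of Frame Relay traffic-shaping arithmetic. With excess burst Be = 0,
# the shaper never sends beyond the CIR, so voice frames are never marked DE.
def committed_burst_bits(cir_bps: int, tc_ms: int) -> float:
    """Bc = CIR * Tc: bits the shaper may send per interval Tc."""
    return cir_bps * tc_ms / 1000

# Shaping a hypothetical 384 kbps CIR with a 10 ms interval; a short Tc keeps
# the per-interval burst small, which limits queuing delay for voice.
assert committed_burst_bits(384_000, 10) == 3840.0
```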
Additional frame relay information
Most IXCs convert the long-haul delivery of Frame Relay into ATM, that is, the Frame Relay
PVC is converted to an ATM PVC at the first Frame Relay switch after leaving the customer
premises. It is not converted back to Frame Relay until the last Frame Relay switch before
entering the customer premise. This is significant because ATM has built-in Class of Service
(CoS). A customer can enter a contract with a carrier to convert the Frame Relay PVC into a
constant bit rate (CBR) ATM PVC. ATM CBR cells are delivered with lower latency and higher
reliability.
Finally, under the best circumstances, Frame Relay is still inherently more susceptible to delay
than ATM or TDM. Therefore, after applying the best possible queuing mechanism, you can still
expect a longer delay over Frame Relay than over ATM or TDM.
VPN overview
VPNs refer to encrypted tunnels that carry packetized data between remote sites. VPNs can use
private lines or use the Internet through one or more Internet Service Providers (ISPs). VPNs
are implemented in both dedicated hardware and software but can also be integrated as an
application to existing hardware and software packages. A common example of an integrated
package is a firewall product that can provide a barrier against unauthorized intrusion as well as
perform the security features that are needed for a VPN session.
The encryption process can take from less than 1 ms to 1 s or more, at each end. VPNs
can represent a significant source of delay and therefore, have a negative impact on voice
performance. Also, because most VPN traffic runs over the Internet and there is little control
over QoS parameters for traffic crossing the Internet, voice quality might suffer due to excessive
packet loss, delay, and jitter. You can negotiate a service-level agreement with the VPN provider
to guarantee an acceptable level of service. Before implementing IP telephony with a VPN,
you should test their VPN network over time to ensure that it consistently meets the Avaya
requirements.
Convergence advantages
For an increasing number of enterprises, VPNs carry both data and voice communications.
Though voice communication over IP networks (IP telephony) creates new quality of service
(QoS) and other challenges for network managers, there are compelling reasons for moving
forward with convergence over maintaining a traditional voice and data infrastructure:
• A converged infrastructure makes it easier to deploy eBusiness applications such as
customer care applications that integrate voice, data, and video.
• Enterprises can reduce network costs by combining disparate network infrastructures and
eliminating duplicate facilities.
• A converged infrastructure can increase the efficiencies of the IT organization.
• Long distance charges can be reduced by sending voice over IP networks.
Voice over IP VPN is emerging as a viable way to achieve these advantages. The emergence
of public and virtual private IP services promises to make it easier for customers, suppliers,
and businesses to use data networks to carry voice services. As with any powerful new
technology, however, VPNs require skilled management to achieve top performance. The highest
network performance becomes imperative when the VPN network must deliver high-quality voice
communication. Not all IP networks can meet these quality requirements today. For instance,
the public Internet is a transport option for voice communication only when reduced voice
performance is acceptable and global reach has the highest priority. When high voice quality
is a requirement, ISPs and Network Service Providers (NSPs) can provide other VPN connections
that meet the required Service Level Agreements (SLAs).
Communication security
The public nature of the Internet, its reach, and its shared infrastructure provide cost savings when
compared to leased lines and private network solutions. However, those factors also make
Internet access a security risk. To reduce these risks, network administrators must use the
appropriate security measures.
A managed service can be implemented either as a premises-based solution or a network-based
VPN service. A premises-based solution includes customer premises equipment (CPE) that allows
end-to-end security and Service Level Agreements (SLAs) that include the local loop. These
end-to-end guarantees of quality are key differentiators. A network-based VPN, on the other
hand, is provisioned mainly by equipment at the service provider’s point-of-presence (PoP), so
it does not provide equivalent guarantees over the last mile. For a secure VPN that delivers
robust, end-to-end SLAs, an enterprise must demand a premises-based solution that is built on an
integrated family of secure VPN platforms.
The private in virtual private networking is also a matter of separating and insulating the traffic
of each customer so that other parties cannot compromise the confidentiality or the integrity of
data. IPSec tunneling and data encryption achieve this insulation by essentially carving private
end-to-end pipes or tunnels out of the public bandwidth of the Internet and then encrypting the
information within those tunnels to protect against wrongful access. In addition to IPSec, there are
two standards for establishing tunnels at Layer 2: Point-to-Point Tunneling Protocol (PPTP) and
Layer 2 Tunneling Protocol (L2TP). Neither PPTP nor L2TP includes the encryption capabilities of
IPSec. The value of IPSec beyond these solutions is that IPSec operates at IP Layer 3. IPSec at
IP Layer 3 allows for native, end-to-end secure tunneling. As an IP-layer service, IPSec is also
more scalable than the connection-oriented Layer 2 mechanisms.
Also, note that IPSec can be used with either L2TP or PPTP, since IPSec encrypts the payload
that contains the L2TP/PPTP data. IPSec provides a highly robust architecture for secure wide-
area VPN and remote dial-in services. IPSec is complementary to any underlying Layer 2 network
architecture. With its addition of security services that can protect the VPN of a company, IPSec
marks the clear transition from early tunneling to full-fledged Internet VPN services.
However, different implementations of IPSec confer varying degrees of security services. Products
must be compliant with the latest IPSec drafts, must support high-performance encryption, and
must scale to VPNs of industrial size.
A VPN platform should support a robust system for authentication of the identity of end users
based on industry standard approaches and protocols.
Firewall technologies
To reduce security risks, appropriate network access policies should be defined as part of
business strategy. Firewalls can be used to enforce such policies. A firewall is a network
interconnection element that polices traffic flows between internal or protected networks and
external or public networks such as the Internet. Firewalls can also be used to segment internal
networks.
The application of firewall technologies only represents a portion of an overall security strategy.
Firewall solutions do not guarantee 100% security by themselves. These technologies must be
complemented with other security measures, such as user authentication and encryption, to
achieve a complete solution.
The three technologies that are most commonly used in firewall products are packet filtering,
proxy servers, and hybrid. These technologies operate at different levels of detail and provide
varying degrees of network access protection. Therefore, these technologies are not mutually
exclusive. A firewall product might implement several such technologies simultaneously.
Network Management and outsourcing models
While enterprises acknowledge the critical role that the Internet and IP VPNs can play in their
strategic eBusiness initiatives, they face a range of choices for implementing their VPNs. The
options range from enterprise-based or do-it-yourself VPNs that are fully built, owned, and
operated by the enterprise to VPNs that are fully outsourced to a carrier or other partner. In
the near term, enterprise-operated and managed VPN services are expected to hover around a
50/50 split, including hybrid approaches.
Increasingly, enterprises are assessing their VPN implementation options across a spectrum of
enterprise-based, carrier-based/outsourced, or hybrid models. Each approach offers a unique
business advantage.
• Enterprise based
This option operates over a public network facility, most commonly, the Internet, using
equipment that is owned and operated by the enterprise. The benefit of the enterprise-
based option is the degree of flexibility and control this option offers over VPN deployment,
administration, and adaptability or change.
• Fully outsourced
This managed service can be implemented by a collection of partners, including an ISP and
a security integration partner. The advantages of the fully outsourced option include quick
deployment, easy global scalability, and freedom from overhead Network Management.
• Shared management
With this hybrid approach, a partner can take responsibility for major elements of
infrastructure deployment and management, but the enterprise retains control over key
aspects of policy definition and security management.
Conclusion
Moving to a multipurpose packet-based VPN that transports both voice and data with high quality
poses a number of significant management challenges. Managers must determine whether to
operate the network using an enterprise-based model, an outsourced or carrier-based model,
or a hybrid model. They must settle security issues that involve several layers of the network.
And they must ensure that they and their vendors can achieve the required QoS levels across
these complex networks. Yet, the advantages of converged, multipurpose VPNs remain a strong
attraction. The opportunity to eliminate separate, duplicate networks and costly dedicated facilities,
minimize costly public network long distance charges, and reduce administrative overhead
provides a powerful incentive. Most important, by helping integrate voice and data communication,
multimedia Messaging, supplier and customer relationship management, corporate data stores,
and other technologies and resources, converged networks promise to become a key enabler for
eBusiness initiatives.
NAT overview
IP telephony cannot work across Network Address Translation (NAT) when private IP
addresses (RFC 1918) are exchanged in signaling messages, because these addresses are not
reachable from the public side of the NAT and cannot be used for the media sessions.
The problem is not encountered in all VoIP scenarios. NAT is not used for VPN-based remote
access, and NATs are usually not needed internally within the enterprise network. VoIP usually has
to traverse NAT at the border between the enterprise and a VoIP trunk to a service provider, as well
as in hosted VoIP services.
If the network design includes a firewall within the enterprise network to protect certain servers
or some part of the network so that IP telephony traffic has to traverse the internal firewall, then
the firewall should not perform a NAT function. IP telephony will then work across the firewall
once the appropriate ports are open on the firewall. For information on the list of needed ports for
any Avaya product, go to the Avaya Support website at https://support.avaya.com for current
documentation and knowledge articles, or to open a service request.
When you connect the enterprise to an IPT SP through a VoIP trunk, either SIP or H.323, a NAT
is done at the enterprise border. The solution for this scenario is to deploy a Session Border
Controller (SBC) near the NAT, for example, in the enterprise DMZ. SBCs from multiple vendors
have been tested for interoperability with Avaya’s IP telephony solutions.
Alternatively, in certain cases, you can use an Application Layer Gateway (ALG). Another
alternative is to set up a C-LAN and a Media Processor card in the DMZ and to use
Communication Manager as a proxy server between the internal and external networks.
Solutions based on standards such as ICE and STUN are expected to be supported in some NAT
traversal scenarios.
IP Telephony without NAT on page 117 shows IP telephony without a NAT scenario.
• Extensible with minimum reconfiguration, that is, designed with enough resources to grow
with the business the IP network supports
Topologies
The recommended network topology consists of a redundant core with building blocks of layered
routers and switches, as shown in Figure 17: Typical network topology design on page 119. This is
the de facto standard for network design, supporting both modularity and reuse.
Real networks are far more complex with many more nodes and services. In addition, real
deployments typically have legacy constraints, multiple sites, and heterogeneous equipment. It
is beyond the scope of this document to detail solutions for the potential configurations of entire
networks. To address those issues, Avaya provides a full range of service offers from assessment
to outsourced management. For more information regarding these services, see the Avaya Web
site at https://support.avaya.com.
Server cluster
A review of the server cluster configuration as applied to a set of G650 IP-connected port
networks will serve to illustrate the principles discussed and validate the modular topology.
For example, each G650 is equipped with redundant TN2602AP Media Resource 320 circuit
packs optionally configured for load balancing or IP bearer duplication. Each G650 also contains
duplicated TN2312BP IP Server Interfaces (IPSI). The number of TN799DP Control LAN (C-LAN)
socket termination boards would be sized to accommodate the devices in the wider network and
the call capacity of the cluster, but for this small configuration, a C-LAN is assigned for each G650.
Note that the Layer 3 devices use hardware switching for Layer 3 forwarding and are also
capable of Layer 2 switching between ports. Remember that IP telephony LAN traffic consists
primarily of small packets of approximately 218 octets. The per-packet overhead impacts
software-based routing. Telephony traffic carries roughly an order of magnitude more packets per
unit bandwidth than typical Web page transfers.
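The packet-rate point can be checked with quick arithmetic: at the same bandwidth, 218-octet voice packets arrive nearly seven times as often as 1500-byte data frames, in line with the text's order-of-magnitude observation:

```python
# Sketch: packets per second at a given bandwidth for a given packet size.
def packets_per_second(bandwidth_bps: int, packet_bytes: int) -> float:
    return bandwidth_bps / (packet_bytes * 8)

voice_pps = packets_per_second(1_000_000, 218)   # ~573 pps per Mbps
data_pps = packets_per_second(1_000_000, 1500)   # ~83 pps per Mbps

# Small voice packets mean far more per-packet work for any software-based
# (per-packet) forwarding path than the same bandwidth of bulk data.
assert voice_pps / data_pps > 6
```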
Layers
There is a separate Layer 2 access level when the devices at the Layer 3 distribution layer are
capable of Layer 2 switching. The access layer reduces the complexity of the network block by
separating the functions of the devices and provides scalability when more ports are required
as the network grows. To ensure network modularity, the routers serving this cluster should be
dedicated to the cluster and sized to the task. Simplification argues for the reduction of subnets
and routed interfaces in the cluster since the service is common. If remote IPSIs or multiple server
clusters are implemented across the network, using a single subnet within the cluster simplifies
the configuration of the entire network. The addition of static subnets in the direction of the
cluster increases the configuration complexity with little benefit in terms of availability unless the
subnets terminate on different routers, which in turn implies separate modular clusters. A separate
management subnet is created but is unrelated to the service address configuration. Separate
subnets simplify diagnostic activities, but this benefit is achievable with address partitioning
within the subnet. Port densities for smaller full featured routers can be inadequate to scale to
the connectivity requirements of even this small cluster when the extra ports for management,
troubleshooting, and testing are considered.
An alternative design uses the smaller high density integrated switching and routing platforms
that are becoming popular as routing functions have moved into commodity Application-Specific
Integrated Circuits (ASIC).
When selecting this type of configuration, bandwidth and inter-switch traffic capacity must be
considered. In a load balanced configuration under fault conditions, approximately half the call
load can travel on the inter-switch link. The inter-switch link must be redundant to prevent a
single failure from causing a bifurcated network, and if a Link Aggregation Group (LAG) is used
to eliminate potential spanning tree loops, the individual link bandwidth must still be capable of
supporting the required traffic.
Redundancy
Hardware redundancy is a proven and well-defined tool for increasing the availability of a system.
Avaya Critical Availability solutions have traditionally employed this technique to achieve 99.999%
availability. One question to consider in the deployment of redundant hardware is whether to use
symmetric (active-active) or asymmetric (active-standby) configurations. Well-known reliability
expert Evan Marcus suggests asymmetric configurations for pure availability. The Avaya control
network and TDM bearer redundancy solutions follow the asymmetric configuration model. For
IP-PNC designs, bearer duplication supports asymmetric redundancy for bearer flows, but
symmetric redundancy or load-balanced configurations are the default setting. Because of the
inherent complexity of TCP state replication, the C-LAN configurations are always symmetric.
It is a good practice to connect the redundant boards of each PN to redundant Layer 2 switches
as shown in the figure to protect each PN from failure of the Layer 2 switch itself. If asymmetric
redundancy is configured through IP bearer duplication, it is essential for proper failover operation
that the active and standby TN2602 circuit packs have equivalent Layer 2 connectivity. In the case
of IP bearer duplication, the secondary TN2602 circuit pack takes over by assuming the L2 and
L3 address of the connection terminations. This takeover minimizes the disruption due to failover
but requires the network to be configured to accommodate the apparent move of an endpoint from
one switch to the other, as for a spanning tree change.
Moving the L2 address to the standby device limits the disruption to the address forwarding tables
of the L2 switches, which are designed to accommodate rapid connectivity moves.
Layer 2
Layer 2 configuration of the switches supporting the cluster should use IEEE 802.1w Rapid
Spanning Tree Protocol (RSTP) to prevent loops and for selection between redundant links. Most
modern switches implement this protocol. Selecting a device for Layer 2 access that does not
support RSTP should be very carefully considered since those devices are likely to be obsolete
and lacking in other highly desirable features in areas such as Quality of Service, security,
and manageability. RSTP is also preferred over most alternative solutions that are typically
not standards based and can cause problems with interoperability, scalability and configuration
complexity. The selected redundancy protocol must be well understood by the IT staff responsible
for maintaining the network.
It is good policy to enable RSTP on all ports of the Layer 2 switches, including the ports directly
connected to hosts. Misconfiguration and human error are more likely to occur than link failure,
and the added protection of loop avoidance is worth the minimal overhead. This possibility is an
additional argument in favor of using RSTP as a redundancy protocol since other solutions cannot
be uniformly applicable to the subnet.
With modular configuration, the spanning tree is kept simple and deterministic. Consider the
sample spanning tree configuration in Figure 20: Sample spanning tree on page 123. The
topology has been redrawn and the host connections have been removed to simplify the
explanation. Assume the bridge priorities are assigned such that the VRRP primary router has
the highest priority, the secondary router is next, Switch 1 is third, and Switch 2 is last. It is also
important that the bandwidth of all links be equivalent and adequate to handle the aggregated
traffic.
In Figure 20: Sample spanning tree on page 123, links A and B are directly attached to the root
bridge so links A and B will be in forwarding mode. Link C connects to a higher priority bridge than
link D, so link D will be disabled and Switch 1 will be the designated root for the secondary router.
In this configuration, traffic from the attached devices flows directly to the primary router on links A
and B.
If the primary router fails, the secondary router becomes both the active router and the root bridge,
and traffic from the switches flows on the reconfigured spanning tree along links C and D. If bridge
priorities are not managed, traffic from one switch can be directed through the secondary router
and the other switch even during normal operation.
In the alternate integrated device configuration, bridge priority is less significant, but other factors
such as link sizing become an issue if there are not enough Gigabit Ethernet aggregation ports.
If a link aggregation group (LAG) is used, flow distributions must be understood to ensure correct
behavior.
Layer 3
The symmetric or asymmetric question is linked to the configuration of redundancy for the routers
serving this cluster. If the single subnet model is used, the router configuration in the direction of
the cluster also follows the asymmetric model. Virtual Router Redundancy Protocol (VRRP) is configured
with one router as the primary and the other router as the secondary. If multiple subnets are
configured, it is common practice to make one router the primary for some of the subnets and
the other router as the primary for the rest. Note that VRRP should be configured with a failover
latency greater than the latency required for the Layer 2 loop avoidance protocol to prevent LAN
failures from disturbing the wider network. Typical defaults are between two and three seconds,
which should be enough to prevent LAN failures in a simple well configured spanning tree.
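The failover latency relationship can be checked against the VRRP timer formula from RFC 3768, sketched below. The advertisement interval and priority values are common defaults used only for illustration, not settings mandated by this document.

```python
# VRRP failover timing per RFC 3768: a backup declares the master down after
#   Master_Down_Interval = 3 * Advertisement_Interval + Skew_Time
#   Skew_Time = (256 - Priority) / 256 seconds

def vrrp_master_down_interval(advert_interval_s=1.0, priority=100):
    skew_time = (256 - priority) / 256.0
    return 3 * advert_interval_s + skew_time

# Default 1 s advertisements, priority 100: failover detection in ~3.6 s,
# comfortably longer than RSTP convergence in a simple, well-built tree.
print(round(vrrp_master_down_interval(), 3))  # 3.609
```

Comparing this interval against the measured Layer 2 convergence time confirms that a brief spanning tree reconfiguration will not trigger a spurious VRRP interchange.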
The cluster subnet is exported to OSPF through the interfaces to the core so that the devices are
reachable, but OSPF needs to know which router interface to use for the packets directed to the
cluster. For proper operation, the link to the primary router must be the preferred OSPF path. If
the primary router fails but the link to the core is still operational, packets do not reach the cluster
until the neighbor adjacency times out. Making these timeouts too small makes the protocol overly
sensitive and may still provide inadequate results.
The probability of a VRRP interchange that occurs asymmetrically is arguably lower than a router
failure that leaves the physical link state unchanged. Some implementations address this by
allowing the link state of different interfaces to be coupled. These techniques are also applicable
to the OSPF solution but are typically proprietary. The ability to couple the link state of different
interfaces, together with the decoupling of core routing disruption from local failures, are arguments for this configuration.
QoS guidelines
This section contains guidelines for deploying Quality of Service (QoS) for an IP Telephony
network.
Class of Service refers to mechanisms that tag traffic in such a way that the traffic can be
differentiated and segregated into various classes. Quality of Service refers to what the network
does to the tagged traffic to give higher priority to specific classes. If an endpoint tags its traffic
with Layer 2 802.1p priority 6 and Layer 3 Differentiated Services Code Point (DSCP) 46, for
example, the Ethernet switch must be configured to give priority to value 6, and the router must be
configured to give priority to DSCP 46. The fact that certain traffic is tagged with the intent to give
it higher priority does not necessarily mean it will receive higher priority. CoS tagging is ineffective
without the supporting QoS mechanisms in the network devices.
CoS overview
IEEE 802.1p/Q at the Ethernet layer (Layer 2) and DSCP at the IP layer (Layer 3) are two
standards-based CoS mechanisms that are used by Avaya products. These mechanisms are
supported by the IP telephone and the C-LAN and Media Processor circuit packs. Although
TCP/UDP source and destination ports are not CoS mechanisms, they can be used to identify
specific traffic and can be used much like CoS tags. Another non-CoS method of identifying specific
traffic is to key on source and destination IP addresses and specific protocols, such as RTP.
The Media Processor circuit pack and IP telephones use RTP to encapsulate audio.
Because 802.1Q tagging changes the Ethernet frame format, older switches had to be explicitly
configured to accept 802.1Q tagged frames. Otherwise, the switches rejected the tagged frames.
However, this problem has not been significant in recent years.
The two fields to be concerned with are the Priority and VLAN ID (VID) fields. The Priority field
is the p in 802.1p/Q, and ranges from 0 to 7. 802.1p/Q is a common term used to indicate that
the Priority field in the 802.1Q tag has significance. Prior to real-time applications, 802.1Q was
used primarily for VLAN trunking, and the Priority field was not important. The VID field is used to
indicate the VLAN to which the Ethernet frame belongs.
The IP header originally defined an 8-bit Type of Service (ToS) field, which is still used in some
cases. Because this original scheme was never widely adopted, the IETF developed a new Layer 3 CoS
tagging method for IP called Differentiated Services (DiffServ, RFC 2474/2475). DiffServ uses the
first 6 bits of the ToS field and ranges in value from 0 to 63. Comparison of DSCP with original
ToS on page 127 shows the original ToS scheme and DSCP in relation to the 8 bits of the ToS
field.
Ideally, any DSCP value should map directly to a precedence and traffic parameter combination of
the original scheme. However, this mapping does not exist in all cases and might cause problems
on some older devices.
On any device, new or old, a nonzero value in the ToS field has no effect if the device is not
configured to examine the ToS field. Problems arise on some legacy devices when the ToS field
is examined, either by default or by enabling QoS. These legacy devices (network and endpoint)
might contain code that implemented only the precedence portion of the original ToS scheme, with
the remaining bits defaulted to zeros. This means that only DSCP values that are divisible by 8
(XXX000) can map to the original ToS scheme. For example, if an endpoint is tagging with DSCP
40, a legacy network device can be configured to look for precedence 5, because both values
show up as 10100000 in the ToS field. However, a DSCP of 46 (101110) cannot be mapped to
any precedence value alone. Another problem arises if the existing code implemented precedence
with only one traffic parameter permitted to be set high. In this case, a DSCP of 46 still does not
work, because it requires two traffic parameter bits to be set high. When these mismatches occur,
an older device, roughly ten years old or more, might reject the DSCP-tagged IP packet or exhibit
some other abnormal behavior. Most newer devices support both DSCP and the original ToS scheme.
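The divisible-by-8 rule above follows directly from the bit layout of the ToS byte. The following sketch shows why DSCP 40 maps cleanly to precedence 5 while DSCP 46 cannot be mapped to any precedence value alone.

```python
# The ToS byte seen two ways: DSCP occupies the first (high-order) 6 bits;
# the original scheme used a 3-bit precedence plus traffic-parameter bits.

def tos_byte_from_dscp(dscp):
    """Place a 6-bit DSCP value into the 8-bit ToS field."""
    return dscp << 2          # two low-order bits unused by DiffServ

def precedence_from_tos(tos_byte):
    """Extract the original 3-bit IP precedence (the top three bits)."""
    return tos_byte >> 5

# DSCP 40 -> 10100000 -> precedence 5: a clean mapping, because 40 is
# divisible by 8 (binary 101000, traffic-parameter bits all zero).
print(format(tos_byte_from_dscp(40), '08b'))        # 10100000
print(precedence_from_tos(tos_byte_from_dscp(40)))  # 5

# DSCP 46 -> 10111000: precedence 5 plus two traffic-parameter bits set,
# which a precedence-only implementation cannot represent.
print(format(tos_byte_from_dscp(46), '08b'))        # 10111000
```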
When configured, the Ethernet switch can identify the high-priority traffic by the
802.1p/Q tag and assign that traffic to a high-priority queue. On some switches, a specific port can
be designated as a high-priority port, which causes all traffic that originates from that port to be
assigned to a high-priority queue.
QoS guidelines
There is no all-inclusive rule regarding the implementation of QoS because all networks and their
traffic characteristics are unique. A good practice is to baseline the IP telephony response on a
network without QoS and then apply QoS as necessary. Avaya Professional Services (APS) can
help with baselining services. Conversely, enabling multiple QoS features simultaneously without
knowing the effects of respective features is a bad practice.
For newer network equipment, best practices involve enabling Layer 3 (DiffServ) QoS on WAN
links traversed by voice. Tag voice traffic with DiffServ Code Point 46 (Expedited Forwarding),
and set up a strict priority queue for voice. If voice quality is still not acceptable, or if QoS is
desired for contingencies such as unexpected traffic storms, QoS can then be implemented on the
LAN segments as necessary.
Caution:
There is one caution to keep in mind about QoS with regard to the processor load on network
devices.
Simple routing and switching technologies have been around for many years and have advanced
significantly. Packet forwarding at Layer 2 and Layer 3 is commonly done in hardware, without
heavy processor intervention; Cisco calls this fast switching, with switching used as a generic
term here. When selection criteria such as QoS and other policies are added to the routing
and switching process, it inherently requires more processing resources from the network device.
Many new devices can handle this additional processing in hardware and maintain speed without
a significant processor burden. However, to implement QoS, some devices must move a hardware
process to software. Cisco calls this process switching. Process switching not only reduces the
speed of packet forwarding, but it also adds a processor penalty that can be significant. Processor
penalty can result in an overall performance degradation from the network device and even
device failure. You must examine each network device individually to determine if enabling QoS
will reduce its overall effectiveness by moving a hardware function to software or for any other
reason. Since most QoS policies are implemented on WAN links, the following points help ensure
that QoS remains effective:
• Hardware platforms such as the 2600, 3600, 7200, 7500 series, or later are required. Newer
platforms such as the 1800, 2800 and 3800 series can handle QoS well because of powerful
processors.
• Newer interface modules, such as WIC and VIP, are required.
Note:
If you are using Cisco devices with interfaces such as WIC and VIP, you must
consult Cisco to determine which hardware revision is required for any given module.
• Sufficient memory is required: device dependent.
• IOS 12.0 or later is recommended.
The IEEE 802.1Q standard is a Layer 2 tagging method that adds four bytes to the Layer 2
Ethernet header. IEEE 802.1Q defines the open standard for VLAN tagging. Two bytes house 12
bits that are used to tag each frame with a VLAN identification number. The IEEE 802.1p standard
uses three of the remaining bits in the 802.1Q header to assign one of eight different classes
of service. Communication Manager users can add the 802.1Q bytes and set the priority bits as
desired. Avaya suggests that a priority of 6 be used for both voice and signaling for simplicity.
However, the default values are: 5-Video, 6-Voice, and 7-Control. IEEE 802.1p and IEEE 802.1Q
are OSI layer 2 solutions and work on frames.
Because 802.1Q is a Layer 2 (Ethernet) standard, it only applies to the Ethernet header. At every
Layer 3 boundary (router hop), the Layer 2 header, including CoS parameters, is stripped and
replaced with a new header for the next link. Therefore, 802.1Q does not enable end-to-end QoS.
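The tag layout described above can be sketched as follows. This fragment builds the four inserted bytes from a priority and a VLAN ID; it illustrates the field layout only and is not code from any Avaya product. Priority 6 and VLAN 20 are illustrative values.

```python
import struct

# The four bytes 802.1Q inserts into the Ethernet header: a 16-bit TPID
# (0x8100) followed by 3 priority bits (802.1p), 1 DEI/CFI bit, and a
# 12-bit VLAN ID.

def dot1q_tag(priority, vlan_id, dei=0):
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 4095
    tci = (priority << 13) | (dei << 12) | vlan_id   # Tag Control Information
    return struct.pack('!HH', 0x8100, tci)

print(dot1q_tag(priority=6, vlan_id=20).hex())  # 8100c014
```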
Differentiated services
The Differentiated Services (DiffServ) prioritization scheme redefines the existing ToS byte in the
IP header (Differentiated Services (DiffServ) ToS byte on page 132) by combining the first 6
bits into 64 possible combinations. The ToS byte can be used by Communication Manager, IP
telephones, and other network elements such as routers and switches in the LAN and WAN.
Queuing methods
This section discusses common queuing methods and their appropriateness for voice.
Priority queuing
Strict priority queuing (PQ) divides traffic into different queues. These queues are usually high,
medium, normal, and low, based on traffic type. This form of queuing services the queues in order
of priority, from high to low. If there is a packet in the high-priority queue, it will always be serviced
before the queue manager services the lower-priority queues. With priority queuing, however, it
is possible to starve out lower-priority flows if sufficient traffic enters the high-priority queue. This
mechanism works very well for IP telephony traffic where IP telephony bearer and signaling are
inserted in the high-priority queue but does not work as well for routine data traffic that is starved
out in case of sufficient high-priority traffic.
Round-robin
Round-robin queuing, also called custom queuing, sorts data into queues and services each
queue in order. An administrator manually configures which type of traffic enters each queue, the
queue depth, and the amount of bandwidth to allocate to each queue.
Round-robin queuing is not particularly suited to IP telephony. It does not ensure strict priority to
voice packets, so they may still wait behind other traffic flows in other queues. Latency and jitter
can be at unacceptable levels.
Random Early Detection (RED) algorithms work by randomly discarding packets from a queue. RED takes advantage of the congestion
control mechanism of TCP. By randomly dropping packets prior to periods of high congestion,
RED causes the packet source to decrease its transmission rate. Assuming that the packet
source is using TCP, it will decrease its transmission rate until all the packets reach their
destination, which indicates that the congestion is cleared. Some implementations of RED, called
Weighted Random Early Detection (WRED), combine the capabilities of the RED algorithm
with IP Precedence. This combination provides preferential traffic handling for higher-priority
packets. RED/WRED can selectively discard lower-priority traffic when the interface begins to get
congested and provide differentiated performance characteristics for different classes of service.
RED and WRED are useful tools for managing data traffic but should not be used for voice.
Because IP telephony traffic runs over UDP, IP telephony protocols do not retransmit lost packets,
and IP telephony transmits at a constant rate. The IP telephony queue should never be configured
for WRED. WRED only adds unnecessary packet loss and reduces voice quality.
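As a point of reference, the textbook RED drop decision can be sketched as below. The thresholds and maximum drop probability are illustrative, not vendor defaults; WRED implementations differ in detail.

```python
# Textbook RED drop decision: below min_th no drops, above max_th all
# packets drop, and in between the drop probability rises linearly
# toward max_p.

def red_drop_probability(avg_queue, min_th, max_th, max_p=0.1):
    if avg_queue < min_th:
        return 0.0            # queue short enough: never drop
    if avg_queue >= max_th:
        return 1.0            # queue full enough: tail-drop everything
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_probability(30, min_th=20, max_th=40))  # 0.05
```

A voice queue should sit outside this logic entirely: as the text notes, dropping constant-rate UDP audio only degrades quality.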
This technique reduces the CIR in response to backwards explicit congestion notification
(BECN) messages from the service provider. Because traffic is being transmitted at the
CIR in the first place, it does not need to be throttled.
2. Set cir and mincir to the negotiated CIR.
If FRF.12 fragmentation is implemented, reduce the cir and mincir values to account for
the fragment headers.
3. Set be, the excess burst rate, to 0.
4. Set bc, the committed burst rate, to cir/100.
This accounts for a serialization delay of maximum 10 ms .
5. Apply this map class to an interface, subinterface, or VC.
Example
The complete configuration for Frame Relay traffic shaping looks like the following:
map-class frame-relay NoBurst
 no frame-relay adaptive-shaping
 frame-relay cir 384000      ! (for a 384K CIR)
 frame-relay mincir 384000
 frame-relay be 0
 frame-relay bc 3840
interface serial 0
 frame-relay class NoBurst
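The parameter arithmetic behind the example can be verified with a short calculation: bc = cir/100 yields a committed time interval (Tc = bc/cir) of 10 ms, matching the serialization-delay target discussed in this section.

```python
# Shaping arithmetic for the NoBurst example: bc = cir/100 gives a
# committed time interval Tc = bc/cir of 10 ms.

def frame_relay_shaping(cir_bps):
    bc = cir_bps // 100                 # committed burst, in bits
    tc_ms = bc / cir_bps * 1000         # committed time interval, in ms
    return {'cir': cir_bps, 'mincir': cir_bps, 'be': 0,
            'bc': bc, 'tc_ms': tc_ms}

print(frame_relay_shaping(384000))
# {'cir': 384000, 'mincir': 384000, 'be': 0, 'bc': 3840, 'tc_ms': 10.0}
```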
Fragmentation
A big cause of delay and jitter across WAN links is serialization delay or the time that it takes
to put a packet on a wire. For example, a 1500 byte FTP packet takes approximately 214 ms to
be fed onto a 56 Kbps circuit. For optimal voice performance, the maximum serialization delay
should be close to 10 ms. Therefore, a voice packet may have to wait behind a large data packet
on a slow circuit. The solution to this problem is to fragment the large data packet into smaller pieces for
propagation. If a smaller voice packet comes in, it can be squeezed between the data packet
fragments and be transmitted within a short period of time.
The following sections discuss some of the common fragmentation techniques:
For these reasons, you must decrease the MTU only as a last resort. The techniques described
later in this section are more efficient and should be used before changing the values of the MTU.
When changing the MTU, size it such that the serialization delay is less than or equal to 10 ms.
Thus, for a 384 kbps circuit, the MTU should be sized as follows: 384 kbps *0.01 second (10 ms)/8
bits/byte = 480 bytes. As the circuit size diminishes, however, care should be taken to not reduce
the MTU below 200 bytes. Below that size, telephony signaling and bearer (voice) packets can
also be fragmented, which reduces the link efficiency and degrades voice performance.
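The MTU sizing rule above reduces to simple arithmetic, sketched here with the 10 ms target and 200-byte floor from the text.

```python
# Serialization delay and MTU sizing, using the 10 ms target and the
# 200-byte floor from the text.

def serialization_delay_ms(packet_bytes, link_bps):
    return packet_bytes * 8 / link_bps * 1000

def mtu_for_delay(link_bps, target_ms=10, floor_bytes=200):
    """Largest MTU meeting the delay target, but never below the floor
    that would begin fragmenting voice packets themselves."""
    return max(int(link_bps * target_ms / 1000 / 8), floor_bytes)

print(round(serialization_delay_ms(1500, 56000)))  # 214
print(mtu_for_delay(384000))                       # 480
```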
FRF.12
FRF.12 is a Frame Relay standard for fragmentation. It works for Frame Relay in the same
way that LFI works for PPP, with similar increases in efficiency over MTU manipulation. When
implementing a Frame Relay network, you must use FRF.12 for fragmentation and size the
fragments such that the serialization delay is no more than 10 ms.
Application perspective
Anatomy of 20 ms G.729 audio packet on page 137 shows the anatomy of a 20 ms G.729 audio
packet, which is preferably used across limited-bandwidth WAN links. Notice that two-thirds of the
packet is consumed by overhead such as the IP, UDP, and RTP headers, and only one-third is used
by the actual audio.
All 20-ms G.729 audio packets, regardless of the vendor, are constructed like this. Not only is
the structure of the packet the same, but the method of encoding and decoding the audio itself
is also the same. This similarity allows an Avaya IP telephone to communicate directly with a
Cisco IP telephone or any other IP telephone when using matching codecs. The packets from the
application perspective are identical.
Network perspective
RTP header compression is a mechanism that routers use to reduce the 40 bytes of protocol
overhead to approximately 2 to 4 bytes. Cisco routers support RTP header compression (cRTP).
RTP header compression can drastically reduce the IP telephony bandwidth consumption on a
WAN link when using 20 ms G.729 audio. When the combined 40 byte header is reduced to 4
bytes, the total IP packet size is reduced by 60% (from 60 bytes to 24 bytes). This equates to
reducing the total IP telephony WAN bandwidth consumption by roughly half, and it applies to all
20 ms G.729 audio packets, regardless of the vendor.
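The packet-size arithmetic behind these figures is shown below. The 20-byte G.729 payload and 40-byte IP/UDP/RTP header are standard values; the 4-byte compressed header is the upper end of the 2-to-4-byte range mentioned above.

```python
# Packet-size arithmetic for 20 ms G.729 audio, with and without cRTP.
# Standard values: 20 bytes of G.729 payload per 20 ms, 40 bytes of
# IP (20) + UDP (8) + RTP (12) headers, 50 packets per second.

PAYLOAD = 20
HEADERS = 40
CRTP_HEADER = 4        # upper end of the 2-4 byte compressed header
PACKETS_PER_SEC = 50

def bw_kbps(packet_bytes):
    return packet_bytes * 8 * PACKETS_PER_SEC / 1000

uncompressed = PAYLOAD + HEADERS        # 60-byte IP packet
compressed = PAYLOAD + CRTP_HEADER      # 24-byte packet with cRTP
reduction = 1 - compressed / uncompressed

print(uncompressed, compressed)                    # 60 24
print(bw_kbps(uncompressed), bw_kbps(compressed))  # 24.0 9.6
print(round(reduction, 2))                         # 0.6
```

These are IP-layer figures; Layer 2 overhead is unchanged by cRTP, which is why the overall WAN bandwidth saving is roughly half rather than the full 60%.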
Recommendations for RTP header compression
Enterprises that deploy routers capable of this feature can benefit from the feature. However,
Cisco suggests caution in using RTP header compression on its routers because it can
significantly tax the processor if the compression is done in software. Depending on the processor
load before compression, enabling RTP header compression can significantly slow down the
router or cause the router to stop completely. For best results, use a hardware/IOS/interface
module combination that permits the compression to be done in hardware.
RTP header compression has to function with precision or audio will be disrupted. If, for any
reason, the compression at one end of the WAN link and decompression at the other end do
not function properly, the result can be intermittent loss of audio or one-way audio. RTP header
compression has been very difficult to quantify, but there is evidence that cRTP sometimes
leads to voice-quality issues. One production site in particular experienced intermittent one-way
audio, the cause of which was garbled RTP audio samples inserted by the cRTP device.
When, for experimentation purposes, RTP header compression was disabled, the audio problems
disappeared.
Attaching a PC
Ports 11 through 20 are assigned to the voice VLAN. This configuration is suitable for IP phones
and video devices that have a PC attached to them.
Switch(config)# interface range fa 0/11 - 20            ! context for ports 11 through 20
Switch(config-if-range)# description "IP phones with PCs attached"
Switch(config-if-range)# speed 100                      ! lock port speed to 100 Mbps (optionally auto-negotiate)
Switch(config-if-range)# duplex full                    ! lock port duplex to full (optionally auto-negotiate)
Switch(config-if-range)# switchport voice vlan 20       ! configure implicit trunk for IP phones or video endpoints
Switch(config-if-range)# no cdp enable                  ! disable CDP for ports 11 through 20
Switch(config-if-range)# spanning-tree portfast         ! place ports in forwarding mode immediately
Switch(config-if-range)# spanning-tree bpduguard enable ! enable BPDU guard in case of a Layer 2 loop
Using the Web interface built into the EVAT server, simulated calls can be set up, executed,
and monitored. The SBCs simulate calls, collect QoS measurements, and aggregate the
measurements onto the EVAT server for processing. All processed measurements are stored
in the EVAT server database. An Avaya Professional Services engineer can then generate QoS
metric graph reports from the stored information using the EVAT Web interface.
The graphs focus on the following factors that affect successful VoIP and video deployment:
• Bandwidth utilization
• Codec selection
• One-way delay
• Jitter
• Packet loss
• Packet prioritization
• Reliability
EVAT provides:
• Synthetic VoIP traffic generation and measurement of VoIP metrics
• Analysis of integrated VoIP, video and data
• Graphical depiction of measured calls
• OSI layer-2/layer-3 topology obtained from Simple Network Management Protocol (SNMP)
• Layer-3 topology obtained from Traceroute probes
• SNMP data collection from network devices
Note:
The SNMP data collection and the network topology discoveries are optional features. If these
features are enabled, EVAT can provide a more comprehensive analysis.
EVAT key differentiators
• Provides assessment of a live network 24x7, over a period of several days.
• Simulates IP calls, measures effectiveness of the QoS mechanisms, and optionally measures
bandwidth utilization across the network in real time.
• Delivers both layer-2 and layer-3 topology discovery for a more complete network view.
EVAT features
Voice traffic generation and measurement
EVAT uses the Real-time Transport Protocol (RTP) to simulate VoIP calls between two endpoints
in a pattern appropriate to the agreed upon test plan. You can configure the calling pattern for
minimum or maximum network load as required for the VoIP/video call volume. The various
parameters of the calls are:
• Codec selection
• DSCP value
• Call volume
• Port range
• Payload
The SBCs use a User Datagram Protocol (UDP) port in the range of 2048 to 3329 (configurable)
to simulate synthetic RTP calls.
Video traffic generation and measurement
Avaya Professional Services engineers can configure the video call patterns between SBCs or
Windows Agents for minimum or maximum network load. EVAT provides data which is analyzed
by Avaya engineers, who conduct assessments and provide detailed network readiness reports
based on parameters such as the number of video calls to simulate and DSCP values for the
synthetic video calls. The video calls use a configurable UDP port in the range of 2048 to 3329.
IPSI/TCP/SIP traffic generation assessment
In addition to voice and video traffic path analysis, EVAT also supports call signaling path
analysis. For call signaling analysis, EVAT supports Transmission Control Protocol (TCP), IP
Server Interface (IPSI), and Session Initiation Protocol (SIP) test patterns.
You can select a pair of SBCs in a TCP test to replicate the optimum message size
and frequency with set characteristics for endpoints. The options for a TCP test include the DSCP
values, message size, message frequency, bandwidth, and port number.
You can configure an SBC in an IPSI test to simulate a Communication Manager server and another
SBC to simulate an IPSI board. The SBC that simulates the Communication Manager server
sends a message every second to the SBC representing the IPSI board. The SBC at the
Communication Manager side then records an error if the time between responses from the IPSI
SBC is longer than the specified time.
You can select a pair of SBCs in a SIP test to replicate the expected call traffic, SIP endpoints,
and SIP trunks. Avaya Professional Services engineers can set the call volume selection to the
expected value on the network. The call volume selection options include the DSCP value, call
volume, and port number.
SNMP monitoring
You can configure EVAT to analyze SNMP data from routers and Ethernet switches that lie in the
path of the synthetic calls. EVAT gathers the information while making the synthetic calls. This
information can be divided into two categories:
• Device level management information bases (MIBs) that gather packet level counters for
traffic and errors.
• Interface level MIBs that gather octet level traffic and errors.
Scheduled calls
Using the scheduled calls feature of EVAT, Avaya Professional Services engineers can start and
end a call unattended. Call patterns can be scheduled to start and stop at specified dates and
times.
EVAT benefits
Real-time network assessment
EVAT initiates synthetic IP calls and measures QoS and utilization across the network in real
time.
Powerful network analysis
EVAT supports call signaling and video simulation, thereby providing a powerful network analysis,
used in concert with other gathered information to complete a network readiness assessment.
EVAT operation
Figure 26: Network schematic of Avaya ExpertNet™ VoIP assessment tool on page 143 shows
the network schematic of Avaya ExpertNet™ VoIP Assessment Tool. The EVAT call placement
software runs on SBCs.
Reports
EVAT provides the following types of metrics graphs/charts:
• Time Series QoS
• Summary All
• Summary One To Many
• Summary SNMP Device Errors One Device
• Summary SNMP Interface Errors One Interface
• Summary SNMP Utilization
• Time Series SNMP Device
• Time Series SNMP Interface
• TCP Bandwidth Report
• TCP Delay Report
• IPSI Bandwidth Report
• IPSI Delay Report
Advanced Services
Avaya SBCE Advanced Services is a specialized Unified Communications (UC) security product.
Advanced Services protects all IP-based real-time multimedia applications, endpoints, and
network infrastructure from potentially catastrophic attacks and misuse. This product provides the
real-time flexibility to harmonize and normalize enterprise communications traffic to maintain the
highest levels of network efficiency and security.
Advanced Services provides the security functions required by the ever changing and expanding
UC market. Advanced Services protects any wire-line or wireless enterprise or service provider
that has deployed UC from malicious attacks such as denial of service, teardrop, and IP sweep
attacks. These attacks can originate from anywhere in the world anytime. Advanced Services is
the only UC-specific security solution that effectively and seamlessly incorporates all approaches
into a single, comprehensive system.
Avaya SBCE Advanced Services incorporates the best practices of all phases of data security
to ensure that new UC threats are immediately recognized, detected, and eliminated. Advanced
Services incorporates security techniques that include UC protocol anomaly detection and filtering,
and behavior learning-based anomaly detection. Together, these techniques monitor, detect, and
protect any UC network from known security vulnerabilities by:
• Validating and supporting remote users for extension of Avaya Aura® UC services.
• Using encryption services such as SRTP.
Standard services
Avaya SBCE Standard Services provides a subset of the functionality of the Advanced Services
offer. Standard services has the functionality required for an enterprise to terminate SIP trunks
without the complexity and higher price associated with a typical Session Border Controller (SBC).
Avaya SBCE Standard Services is a true enterprise SBC, not a repackaged carrier SBC. This
product provides a lower-cost alternative to the more expensive Carrier SBCs. Standard Services
also provide an Enterprise SBC that is affordable, highly scalable, and easy to install and manage.
Standard Services is a Plug and Play solution for Enterprises and Small to Medium Businesses.
With this product, customers can benefit from Avaya’s extensive experience in SIP trunk
deployments and supporting large numbers of enterprise users. Avaya SBCE Standard
Services features the unique Signaling Manipulation module (SigMa module), which dramatically
simplifies the deployment of SIP trunks. The SigMa module streamlines integration of SIP
trunks into thousands of variations of enterprise SIP telephony environments, greatly reducing
implementation time. As a result, SIP trunk deployment in many standard configurations can occur
in 2 hours or less.
• Portwell CAD-0230
• Dell 3240
• Portwell CAF-0251
Other applications
Communication applications
Communication Manager supports a large variety of communication capabilities and applications,
including:
• CallCenter on page 149
• Unified Communication Center on page 149
• Avaya overview on page 149
• Computer Telephony Integration (CTI) on page 150
• Application Programming Interfaces (APIs) on page 150
• BSR polling on page 150
Call Center
Avaya Call Center provides a total solution for the sales and service needs of a customer. Avaya
Call Center connects callers with the appropriate agents. When a caller places a call to a contact
center, Communication Manager captures information about the caller and, based on this
information, routes the call to the appropriate agent in the contact center.
The Call Center solution consists of new and existing versions of Avaya servers, Communication
Manager, and Call Center peripherals. This solution supports:
• Extensions of up to 13 digits
• LAN backup of Call Management System for high availability
• Customer-requested enhancements
Some of the Call Center applications that integrate with Communication Manager are:
• Avaya Call Management System for real-time reporting and performance statistics
• Avaya Business Advocate for expert, predictive routing based on historical data and incoming
calls
improves the readability of management reports, and provides an administrative interface to the
ACD feature on Communication Manager.
Administrators can access the CMS database, generate reports, modify ACD parameters, and
monitor call activities to improve call processing efficiency.
CMS uses dual TCP/IP links for duplicated data collection and high availability. To prevent data
loss from ACD link failures, CMS hardware or software failures, and maintenance or upgrades, the
ACD data is sent to both servers over different network routes. You can administer the ACD data
from either server.
Soft clients
Avaya Workplace Client overview
Avaya Workplace Client is a soft phone application that provides access to Unified
Communications (UC) and Over the Top (OTT) services. You can access Avaya Workplace Client
on the following platforms:
• Mobile:
- Android: From a mobile phone, tablet, or an Avaya Vantage™ device
- iOS: From an iPad, iPhone, or iPod Touch
• Desktop:
- Mac
- Windows
- Chrome: From a Google Chromebook
Based on your feature requirement, you can deploy Avaya Workplace Client in several ways.
In the basic deployment type, you can have only voice calling. You can then include additional
features such as directory search, contact management, presence, instant messaging, and
conferencing.
With Avaya Workplace Client, you can use the following functionalities:
• Make point-to-point audio and video calls.
• Answer calls, send all calls to voice mail, and forward calls.
• Extend calls to your mobile phone if EC500 is configured.
• Log in to your extension and join calls with multiple devices if Multiple Device Access (MDA)
is configured.
• Listen to your voice mail messages.
• View your call history.
• Access your Avaya Aura® and local contacts.
• Perform an enterprise-wide search using Avaya Aura® Device Services, Client Enablement
Services, Avaya Cloud Services, ActiveSync on mobile platforms and Avaya Aura® Device
Services, LDAP, or Avaya Cloud Services on desktop platforms.
• Manage your presence status and presence status message.
• Send instant messages.
• Capture photo, audio, and video files, and send generic file attachments in an IM
conversation.
• Join and host conference calls with moderator controls.
• Use point-to-point and conference call control functionality. You can also add participants to a
conference.
• Share a screen portion, the entire display screen, an application, or a whiteboard while on a
conference call on desktop platforms.
• View a portion of the screen, the entire display screen, an application, or a whiteboard shared
by another conference participant on mobile and desktop platforms.
Note:
Some Avaya Workplace Client features must be configured for your enterprise before you can
use them.
Features
Avaya Workplace Client provides the following features:
• Enterprise capabilities with ease of use in a single experience.
- Enterprise voice: Supports mission-critical voice services, which ensure that people can talk
when and how they need to.
- Video everywhere: Enriches the quality of communication interactions.
- Persistent multimedia messaging: Provides a social style conversation hub with rich
multimedia and multiparty capabilities.
- Rich presence: Makes it easy to determine availability and reachability of your contacts.
- Integrated video collaboration with interactive content sharing: Makes remote team
meetings just as effective as face-to-face meetings.
• Available across a full range of platforms, such as Android, iOS, Mac, and Windows.
• Remote worker support with Avaya Session Border Controller for Enterprise. Enables secure
VPN-less access to services when working outside of the private network.
• Simplified provisioning. Avaya Workplace Client is designed to import administrator-defined
settings and remove virtually all end-user configuration tasks short of entering user name and
password.
• Solution resiliency. Includes automated Avaya Aura® Session Manager failover support with
primary, secondary, and branch simultaneous registration.
• Secure communication channels. Protects end-user privacy. Enhancements in this release
also include client identity certificate support to enable trusted connections and to reliably
authenticate both servers and connecting clients.
For a detailed list of features, see Using Avaya Workplace Client for Android, iOS, Mac, and
Windows.
Mobility
IP/SIP telephones and softphones
Using IP and SIP telephones, you can gain access to the features of Communication Manager
from multiple locations. Mobility is a major benefit of IP and SIP telephones: you can move a
telephone simply by plugging it in anywhere on the network. IP softphones offer similar mobility:
after you install a softphone on a laptop computer, you can connect it to Communication Manager
from any remote location. Users can place calls and handle multiple calls on their computers.
IP telephones support the following features: Time-To-Service (TTS) capability, gratuitous ARP
reply, and acceptance of incoming TCP connection from an active server. The following table
provides the capabilities of different types of IP telephones:
IP telephones                                        TTS aware   Incoming TCP   Gratuitous ARP
96x0 series (9610, 9620/C/L, 9630/G,                 Yes         Yes            Yes
9640/G, 9650/C, 9670G)
96x1 series (9608, 9611, 9621, 9641)                 Yes         Yes            Yes
46xx Broadcom series (4601+, 4602SW,                 Yes         Yes            No
4602SW+, 4610SW, 4620SW, 4621SW,
4622SW, 4625SW, 4630)
46xx Agere series (4601, 4602, 4606,                 No          No             No
4612, 4620, 4624)
16xx series (1603, 1603SW-I, 1603SW,                 No          No             No
1608, 1616)
IP wireless (Polycom) (3641, 3645)                   No          No             No
IP conference (Polycom) (1692, 4690)                 No          No             No
When both the PBFMC and the PVFMC applications are administered for a station, incoming
calls to that station are forked to both the public and private destinations specified in the station-
mapping administration list. If the private FMC application receives a message indicating that
the far-end has answered the call, Communication Manager cancels the call on the public FMC
application. Reception of an alerting indication means that the wireless endpoint must be present
in the private wireless network and therefore cannot be in the cellular network.
See also Application RTUs for Fixed Mobile Convergence.
Message Networking
Message Networking simplifies customer network topology and administration by supporting store-
and-forward message protocols. With Message Networking, customers can exchange messages
between supported multimedia messaging systems. The features of Message Networking include:
• Support for multiple network configurations, including hub and spoke, bridge, and hybrid. The
bridge and hybrid configurations use the bridging feature of Message Networking.
• Support for multisite-enabled Modular Messaging remote machines.
• Transport and protocol conversion that automatically transcodes message formats between
all supported networking protocols.
• Directory views to download a subset of names and subscriber remote pages from the
Message Networking system to a specific location.
• Variable-length numeric addressing from Modular Messaging MultiSite and Avaya Aura®
Messaging systems.
• Dial Plan Mapping of existing mailbox addresses to unique network addresses.
• Enterprise lists to which subscribers can forward multimedia messages. Enterprise lists use
the virtual mailbox of the Message Networking system.
Design inputs
This section discusses the essential design elements to be specified by the customer. Those
elements pertain to the configuration topology, the various endpoints involved, and the nature of
the traffic flow between those endpoints.
Topology
An Avaya Aura® enterprise solution consists of a network of various applications, including
Session Manager, Communication Manager, Experience Portal (Voice Portal), Messaging, and
Voice Recording. Communication Manager, which is currently the most prominent application,
consists of a server and all of the components under that server’s control. The various
components can be placed into logical and/or physical groups.
A single Communication Manager system comprises one or more network regions. Each network
region is a logical grouping of components such as endpoints, gateways, and certain circuit
packs. The components of a Communication Manager system could also span various physical
placements including gateways and geographical locations (sites).
Knowledge of the details of the configuration topology, from both logical and physical standpoints,
is essential to properly conduct a traffic analysis. In particular, the topology often plays a role in
determining the routes that are traversed by various call types.
Related links
Erlang and ccs definitions on page 159
Endpoint usages on page 161
Erlang B and C models on page 160
Required number of branch gateways and port networks on page 179
Determining the number of TN2602 circuit packs on page 181
Determining G450 Branch Gateway media resources on page 183
Determining G430 Branch Gateway media resources on page 182
Traffic usages
Erlang and ccs definitions
Consider a stream of calls flowing across a group of trunks from one population of endpoints to
another. The number of simultaneous calls traversing the trunks generally varies over time (that
is, it increments by one every time a new call arrives on an available trunk, and it decrements by
one every time an existing call terminates). The corresponding carried load (or usage), expressed
in Erlangs, is defined as the average number of simultaneous calls that are traversing the trunks
during a given time period (for example, during the busy hour). Note that in this example, the
number of active calls always equals the number of busy trunks (since each active call requires
exactly one trunk). Therefore, the call usage (that is, the average number of simultaneous active
calls) equals the trunk usage (that is, the average number of simultaneous busy trunks) in this
example.
If a call arrives while all trunks are busy, it is said to be blocked at the trunk group. In other
words, not all calls that are offered to the trunks are actually carried by the trunks. Accordingly,
the corresponding offered load, expressed in Erlangs, is defined as the average number of
simultaneous calls that would have been traversing the trunks during a given time period (for
example, during the busy hour), had there been enough trunks to prevent blocking. Note that
in this example, the offered call load (that is, the average number of simultaneous active calls
had there been enough trunks to carry all call attempts) equals the offered trunk load (that is,
the average number of simultaneous busy trunks had there been enough trunks to carry all call
attempts) in this example.
To summarize so far, the traffic load, expressed in Erlangs, represents the average number of
simultaneous active calls or busy resources, during a given time period (for example, the busy
hour).
Also note that the usage of a single station, when expressed in Erlangs, represents the fraction of
time the station is in use. For example, a station that carries 0.1 Erlang of usage is busy 10% of
the time (during the time interval of interest; for example, the busy hour).
Two fundamental characteristics of a stream of call traffic are the call rate (usually expressed in
calls per hour) and the average call duration (usually expressed in seconds). The corresponding
call usage can be defined as follows:
Usage (in Erlangs) = [(calls per hour)(seconds per call)]/3600
Note that in some traffic reports, the call rates are termed as call counts. If a particular report is
associated with a period of time other than one hour, care must be taken not to mistakenly apply
the call count as the calls per hour in the preceding formula. Be careful to convert call counts to
calls per hour before applying the formula.
The term ccs stands for centum call seconds, that is, a period of time 100 s in duration. Although
ccs is technically a unit of time and could be used as such, in this document it is used only to
designate traffic loads.
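The usage formula and the Erlang/ccs conversion above can be sketched in a few lines of Python. The helper names (`erlangs_from_rate` and so on) are ours, not Avaya's:

```python
def erlangs_from_rate(calls_per_hour: float, seconds_per_call: float) -> float:
    """Usage (in Erlangs) = (calls per hour * seconds per call) / 3600."""
    return calls_per_hour * seconds_per_call / 3600.0

def calls_per_hour_from_count(call_count: float, period_minutes: float) -> float:
    """Convert a call count measured over an arbitrary period to calls per hour."""
    return call_count * 60.0 / period_minutes

def erlangs_to_ccs(erlangs: float) -> float:
    """1 Erlang = 36 ccs, since one hour contains 36 hundred-second periods."""
    return erlangs * 36.0

# Hypothetical example: 90 calls counted in a 30-minute report, 200-second
# average call duration.
rate = calls_per_hour_from_count(90, 30)     # 180 calls per hour
usage = erlangs_from_rate(rate, 200)         # 10 Erlangs of usage
print(rate, usage, erlangs_to_ccs(usage))    # 180.0 10.0 360.0
```

Note the guard against misreading call counts: the 90 calls in a half-hour report must first be converted to 180 calls per hour before the usage formula applies.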
Recall that a traffic load expressed in Erlangs is tacitly associated with a given time period
(typically one hour). If that is the case, the relationship between a traffic load expressed in Erlangs
and that same load expressed in ccs is:
load (in ccs) = 36 x load (in Erlangs), because one hour contains 3,600 seconds, that is, 36
hundred-second periods.
Erlang B and C models
The Erlang B and C models each relate the following three parameters:
• Offered load
• Number of resources
• GoS (grade of service, which is the probability of blocking at the resources)
Given the values of any two of those three parameters, the Erlang C model produces the third
value.
Note that the GoS is often expressed as P01 or P001. P01 represents at most 1 call out of every
100 being blocked at the resource of interest (that is, 1% blocking), and P001 represents at most 1
call out of every 1000 being blocked at the resource of interest (that is, 0.1% blocking).
Consider a situation in which a call that is blocked is constantly retried until it receives service,
meaning that as soon as a busy signal is heard, the caller hangs up and immediately redials. This
is the most extreme form of retrial, and it is almost as if each blocked call is simply placed in
queue and receives service as soon as a resource frees up for it. In other words, the Erlang C
model is a reasonable approximation for constant retrials.
So, since Erlang B represents no retrials and Erlang C approximates constant retrials, the average
of the two models is a reasonable approximation for moderate retrials. In this document, the pure
Erlang B model is used when ignoring the effect of retrials, and the average of the Erlang B and
C models (that is, a mixed Erlang B/C model) is used when the effect of retrials is deemed to be
relevant.
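The Erlang B formula (no retrials), the Erlang C formula (approximating constant retrials), and the mixed B/C average used in this document can be sketched as follows. This is an illustrative implementation of the standard formulas, not Avaya tooling; the Erlang B recursion is the usual one:

```python
def erlang_b(offered: float, servers: int) -> float:
    """Blocked-calls-cleared model (no retrials): probability of blocking."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered * b / (n + offered * b)
    return b

def erlang_c(offered: float, servers: int) -> float:
    """Blocked-calls-delayed model (~constant retrials); requires offered < servers."""
    b = erlang_b(offered, servers)
    return servers * b / (servers - offered * (1.0 - b))

def mixed_b_c(offered: float, servers: int) -> float:
    """Average of the two models: the approximation used here for moderate retrials."""
    return (erlang_b(offered, servers) + erlang_c(offered, servers)) / 2.0

# Sanity checks: one Erlang offered to one server blocks half the time,
# and adding servers lowers blocking.
print(erlang_b(1.0, 1))                      # 0.5
print(erlang_b(1.0, 2) < erlang_b(1.0, 1))   # True
```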
Although the Erlang C model deals with queueing effects, it is not a particularly reasonable model
for inbound Call Centers unless the number of trunks is significantly higher than (for example,
several orders of magnitude greater than) the number of agents. The M/M/c/k Finite Queue model,
which is beyond the scope of this discussion, should be used instead. A pure Erlang C model is
never used in this discussion.
Related links
Topology on page 157
Erlang and ccs definitions on page 159
Endpoint usages on page 161
Required number of branch gateways and port networks on page 179
Determining the number of TN2602 circuit packs on page 181
Determining G450 Branch Gateway media resources on page 183
Determining G430 Branch Gateway media resources on page 182
Endpoint usages
The three fundamental components of general business call traffic are intercom (that is, calls
between two enterprise stations), outbound (that is, enterprise station to PSTN trunk), and
inbound (that is, PSTN trunk to enterprise station). There are two possible approaches for
determining default values for the corresponding per-station call usages; one approach typically
applies if the number of PSTN trunks is unknown and needs to be sized, and the other can only be
applied if the number of PSTN trunks is known (or assumed to be a specific value) a priori.
Endpoint usages in a 1/3-1/3-1/3 call mix
In a general business environment, the intercom, outbound, and inbound call usages are often
assumed to be equal. In other words, each of those three components represents 1/3 of the traffic.
The average duration of a general business call is typically assumed to be 200 s (20 s for call
set-up, and 180 s of talk time) as a default. Furthermore, the average station is assumed to induce
the following call rates during the busy hour.
Light General Business Traffic:
• originate 0.25 intercom call per hour
• originate 0.25 outbound call per hour
• receive 0.25 inbound call per hour
Moderate General Business Traffic:
• originate 0.50 intercom call per hour
• originate 0.50 outbound call per hour
• receive 0.50 inbound call per hour
Heavy General Business Traffic:
• originate 0.75 intercom call per hour
• originate 0.75 outbound call per hour
• receive 0.75 inbound call per hour
The corresponding default per-station busy-hour usages can be calculated using the preceding
call rates, a 200-s average hold time, and the formulas in the Erlang and ccs definitions section.
Light General Business Traffic:
• originate 0.5 ccs = 0.014 Erlang of intercom call usage
• originate 0.5 ccs = 0.014 Erlang of outbound call usage
• receive 0.5 ccs = 0.014 Erlang of inbound call usage
Moderate General Business Traffic:
• originate 1.0 ccs = 0.028 Erlang of intercom call usage
• originate 1.0 ccs = 0.028 Erlang of outbound call usage
• receive 1.0 ccs = 0.028 Erlang of inbound call usage
Heavy General Business Traffic:
• originate 1.5 ccs = 0.042 Erlang of intercom call usage
• originate 1.5 ccs = 0.042 Erlang of outbound call usage
• receive 1.5 ccs = 0.042 Erlang of inbound call usage
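As a quick check, the default per-station usages above can be reproduced from the call rates, the 200-second average duration, and the usage formula given earlier. This is an illustrative sketch; the constant name is ours:

```python
AVG_CALL_SECONDS = 200  # 20 s of call set-up plus 180 s of talk time

for label, calls_per_hour in [("light", 0.25), ("moderate", 0.50), ("heavy", 0.75)]:
    erlangs = calls_per_hour * AVG_CALL_SECONDS / 3600.0  # usage formula from earlier
    ccs = erlangs * 36.0                                  # 1 Erlang = 36 ccs
    print(f"{label}: {ccs:.1f} ccs = {erlangs:.3f} Erlang per traffic component")
# light: 0.5 ccs = 0.014 Erlang per traffic component
# moderate: 1.0 ccs = 0.028 Erlang per traffic component
# heavy: 1.5 ccs = 0.042 Erlang per traffic component
```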
Endpoint usages driven by the number of trunks
If the number of PSTN trunks is known (or is assigned some assumed value as part of the given
information), then an alternate approach to the one provided in Endpoint Usages in a 1/3-1/3-1/3
Call Mix can be used. Actually, the procedure for determining the per-station intercom usage is
identical to the procedure used in the 1/3-1/3-1/3 model. The difference appears in the outbound
and inbound usages; specifically, the outbound and inbound components of the traffic are derived
by assuming the trunks have been engineered to a P01 GoS. The results are as follows:
• The default per-station intercom usage is either 0.5 ccs = 0.014 Erlang (light general business
traffic), 1.0 ccs = 0.028 Erlang (moderate general business traffic), or 1.5 ccs = 0.042 Erlang
(heavy general business traffic)
• The default per-station outbound usage is determined by calculating the carried load
associated with the given number of outbound trunks, an assumed grade of service (P01
is standard for PSTN trunks), and the mixed Erlang B/C model
• The default per-station inbound usage is determined by calculating the carried load
associated with the given number of inbound trunks, an assumed grade of service (P01
is standard for PSTN trunks), and the mixed Erlang B/C model
One drawback to using this method is that it assumes the trunks have been engineered to a
P01 GoS. If the trunks are not heavily used (for example, if many extra trunks have been added
solely for redundancy), this model produces estimates for the outbound and inbound usages that
are far greater than the actual usages.
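The trunk-driven procedure above can be sketched as follows, assuming a mixed Erlang B/C model and a bisection search for the offered load that just meets the P01 GoS. The function names and the example figures (23 trunks, 500 stations) are hypothetical:

```python
def erlang_b(a: float, n: int) -> float:
    """Erlang B blocking probability (standard recursion)."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def mixed_b_c(a: float, n: int) -> float:
    """Average of Erlang B and Erlang C blocking, as used in this document."""
    b = erlang_b(a, n)
    c = n * b / (n - a * (1.0 - b))  # Erlang C, valid for a < n
    return (b + c) / 2.0

def carried_load_at_gos(trunks: int, gos: float = 0.01) -> float:
    """Bisect for the offered load whose mixed B/C blocking equals the GoS,
    then return the corresponding carried load, offered * (1 - blocking)."""
    lo, hi = 0.0, float(trunks)  # blocking increases with offered load
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if mixed_b_c(mid, trunks) < gos:
            lo = mid
        else:
            hi = mid
    offered = (lo + hi) / 2.0
    return offered * (1.0 - gos)

# Hypothetical example: 23 outbound trunks at P01, spread over 500 stations.
per_station_outbound = carried_load_at_gos(23) / 500
```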
Related links
Topology on page 157
Erlang and ccs definitions on page 159
Erlang B and C models on page 160
Required number of branch gateways and port networks on page 179
Determining the number of TN2602 circuit packs on page 181
Determining G450 Branch Gateway media resources on page 183
Determining G430 Branch Gateway media resources on page 182
Figure 29: Call between two SIP stations with same Session Manager instance and same
Communication Manager
The Session Manager resources associated with the call depicted in Figure 29: Call between
two SIP stations with same Session Manager instance and same Communication Manager on
page 165 include:
• 3 SIP sessions
- SIP A - Session Manager - Communication Manager
- Communication Manager - Session Manager - Communication Manager
- Communication Manager - Session Manager - SIP B
Communication Manager resources associated with the call depicted in Figure 29: Call between
two SIP stations with same Session Manager instance and same Communication Manager on
page 165 include:
• 2 SIP trunk channels if evolution server; 4 SIP trunk channels if feature server
• CPU for 2 SIP trunk call legs if evolution server; CPU for 4 SIP trunk call legs if feature server
Figure 30: Call between two SIP stations with same Session Manager instance and different
Communication Managers
Session Manager resources associated with the call depicted in Figure 30: Call between two
SIP stations with same Session Manager instance and different Communication Managers on
page 166 include:
• 3 SIP sessions
- SIP A - Session Manager - Communication Manager
- Communication Manager - Session Manager - Communication Manager
- Communication Manager - Session Manager - SIP B
Communication Manager resources associated with each Communication Manager for the call
depicted in Figure 30: Call between two SIP stations with same Session Manager instance and
different Communication Managers on page 166 include:
• 2 SIP trunk channels (for either feature server or evolution server)
• CPU for 2 SIP trunk call legs (for either feature server or evolution server)
Figure 31: Call between two SIP stations with different Session Manager instances and same
Communication Manager
Resources associated with each Session Manager instance for the call depicted in Figure 31: Call
between two SIP stations with different Session Manager instances and same Communication
Manager on page 167 include:
• 2 SIP sessions
- SIP A or SIP B - Session Manager - Communication Manager
- Communication Manager - one Session Manager - other Session Manager
Communication Manager resources associated with the call depicted in Figure 31: Call between
two SIP stations with different Session Manager instances and same Communication Manager on
page 167 include:
• 2 SIP trunk channels if evolution server; 4 SIP trunk channels if feature server
• CPU for 2 SIP trunk call legs if evolution server; CPU for 4 SIP trunk call legs if feature server
Figure 32: Call between two SIP stations with different Session Manager instances and different
Communication Managers
Resources associated with each Session Manager instance for the call depicted in Session
Manager call types: Example 3 on page 166 include:
• 2 SIP sessions
- SIP A or SIP B - Session Manager - Communication Manager
- Communication Manager - one Session Manager - other Session Manager
Communication Manager resources associated with each Communication Manager for the call
depicted in Session Manager call types: Example 3 on page 166 include:
• 2 SIP trunk channels (for either feature server or evolution server)
• CPU for 2 SIP trunk call legs (for either feature server or evolution server)
Figure 33: Call between a SIP station and a non-IMS SIP element
Session Manager resources associated with the call depicted in Figure 33: Call between a SIP
station and a non-IMS SIP element on page 169 include:
• 2 SIP sessions
- SIP - Session Manager - Communication Manager
- Communication Manager - Session Manager - non-IMS SIP element
Communication Manager resources associated with the call depicted in Figure 33: Call between a
SIP station and a non-IMS SIP element on page 169 include:
• Case 1
Non-IMS SIP element is a non-SIP endpoint on the same evolution server that is associated
with the SIP station
- 3 SIP trunk channels
- CPU for 3 SIP trunk call legs and for 1 non-SIP call leg
• Case 2
Non-IMS SIP element is a non-SIP endpoint on a different Communication Manager than the
one that is associated with the SIP station, or is any other type of non-IMS SIP element as
defined at the beginning of the Call Types Encountered in a Session Manager Enterprise
section.
- 2 SIP trunk channels
- CPU for 2 SIP trunk call legs
Call between a SIP station and a non-SIP Communication Manager or the PSTN on page 170
shows the signaling flow associated with a call between a SIP station and a non-SIP
Communication Manager or the PSTN.
Figure 34: Call between a SIP station and a non-SIP Communication Manager or the PSTN
Session Manager resources associated with the call depicted in Figure 34: Call between a SIP
station and a non-SIP Communication Manager or the PSTN on page 170 include:
• 2 SIP sessions
- SIP - Session Manager - Communication Manager
- Communication Manager - Session Manager - Communication Manager
Communication Manager resources associated with the call depicted in Figure 34: Call between a
SIP station and a non-SIP Communication Manager or the PSTN on page 170 include:
• 3 SIP trunk channels
• 1 non-SIP trunk channel
• CPU for 3 SIP trunk call legs and for 1 non-SIP trunk call leg
Note:
Session Manager skips origination processing and application sequencing for emergency
calling.
Session Manager resources associated with the call depicted in Figure 35: Call between two
non-IMS SIP elements on page 171 include:
• 1 non-IMS - non-IMS SIP session
- non-IMS SIP A - Session Manager - non-IMS SIP B
Figure 36: Call between a non-IMS SIP element and a non-SIP Communication Manager
Session Manager resources associated with the call depicted in Figure 36: Call between a non-
IMS SIP element and a non-SIP Communication Manager on page 171 include:
• 2 SIP sessions
- non-IMS SIP Element - Session Manager - Communication Manager
- Communication Manager - Session Manager - Communication Manager
Note:
The signaling path from Session Manager to Communication Manager to Session
Manager consists of two IMS SIP legs if Communication Manager is a feature server
or two non-IMS legs if Communication Manager is an evolution server.
Communication Manager resources associated with the call depicted in Figure 36: Call between a
non-IMS SIP element and a non-SIP Communication Manager on page 171 include:
• 3 SIP trunk channels
• 1 non-SIP trunk channel
• CPU for 3 SIP trunk call legs and for 1 non-SIP trunk call leg
Session Manager resources associated with the call depicted in Figure 37: Call between two
non-SIP Communication Managers on page 172 include:
• 1 non-IMS - non-IMS SIP session
- Communication Manager - Session Manager - Communication Manager
Communication Manager resources associated with the call depicted in Figure 37: Call between
two non-SIP Communication Managers on page 172 include:
• 2 SIP trunk channels
• 2 non-SIP trunk channels
• CPU for 2 SIP trunk call legs and for 2 non-SIP trunk call legs
Table 11: Session Manager SIP sessions required per call for various call types
Communication Manager core resources required per call for various call types on page 174
summarizes the number of SIP trunk channels per Communication Manager administered as
either a feature server or evolution server, the number of SIP trunk call legs per Communication
Manager administered as either a feature server or evolution server, and the number of non-SIP
trunk call legs per Communication Manager administered as either a feature server or evolution
server for each of the calls described in the Call Types Encountered in a Session Manager
Enterprise section.
Table 12: Communication Manager core resources required per call for various call types
Endpoints involved in the call                                SIP trunk call legs   Non-SIP trunk call
                                                              per FS or ES          legs per FS or ES
Two Avaya SIP stations registered to the same Session         2 or 4 (1)            NA
Manager instance, using the same core Communication
Manager as a feature server
Two Avaya SIP stations registered to the same Session         2                     NA
Manager instance, using different core Communication
Managers as feature servers
Two Avaya SIP stations registered to different Session        2 or 4 (1)            NA
Manager instances, using the same core Communication
Manager as a feature server
Two Avaya SIP stations registered to different Session        2                     NA
Manager instances, using different core Communication
Managers as feature servers
An Avaya SIP station and a non-IMS SIP element                2 or 3 (2)            0 or 1 (2)
An Avaya SIP station and a non-SIP Communication              3                     1
Manager or the PSTN
Two non-IMS SIP elements                                      NA                    NA
A non-IMS SIP element and a non-SIP Communication             3                     1
Manager
Two non-SIP Communication Managers                            2                     2
A Communication Manager as access element or non-SIP          NA                    1
Communication Manager and the PSTN via non-SIP trunks
(1) 2 SIP trunk channels per call if evolution server; 4 SIP trunk channels per call if
feature server.
(2) For a call between an Avaya SIP station and a non-IMS SIP element, the larger
numbers apply only when the non-IMS SIP element is a non-SIP endpoint on the same
evolution server that is associated with the SIP station.
Each non-SIP Communication Manager involved in a call with another element in the Session
Manager enterprise requires one non-SIP trunk channel for that call.
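For capacity tallies, Table 12 can be restated as a lookup. The key strings paraphrase the call types, NA entries are recorded as 0, and the "2 or 4" / "2 or 3" cells are kept as pairs per the footnoted alternatives; this encoding is our own illustration, not an Avaya data structure:

```python
# Values transcribed from Table 12; tuples are (SIP trunk call legs,
# non-SIP trunk call legs) per FS or ES. Nested tuples hold the footnoted
# alternatives, e.g. (2, 4) means 2 if evolution server, 4 if feature server.
CM_RESOURCES_PER_CALL = {
    "two SIP stations, same SM, same CM (FS)":        ((2, 4), 0),
    "two SIP stations, same SM, different CMs":       (2, 0),
    "two SIP stations, different SMs, same CM (FS)":  ((2, 4), 0),
    "two SIP stations, different SMs, different CMs": (2, 0),
    "SIP station + non-IMS SIP element":              ((2, 3), (0, 1)),
    "SIP station + non-SIP CM or PSTN":               (3, 1),
    "two non-IMS SIP elements":                       (0, 0),
    "non-IMS SIP element + non-SIP CM":               (3, 1),
    "two non-SIP CMs":                                (2, 2),
    "CM access element or non-SIP CM + PSTN":         (0, 1),
}

sip_legs, non_sip_legs = CM_RESOURCES_PER_CALL["two non-SIP CMs"]
print(sip_legs, non_sip_legs)  # 2 2
```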
Static occupancy: The processing required for keep-alive operations. Despite the nomenclature,
the value of static occupancy in a List Measurements report can vary slightly.
Call processing occupancy: The processing required for setting up, maintaining, and tearing
down calls, and for executing vectoring operations in call centers. For Communication Manager
Release 6.3.6 and later, the processor occupancy limit for H.323 RAS registration is 65%.
System management occupancy: The processing required for maintaining the sanity of
the system, including periodic maintenance and audits. Due to the bursty nature of system
management functions, a fixed portion of the overall processing capacity is allocated to system
management for design purposes. For all Communication Manager servers, 27% of the total
system processing capacity is assigned for system management. The 27% occupancy is not
dedicated to system management but only used for traffic configuration calculations.
If the overall processor occupancy, ST + CP + SM, exceeds approximately 92%, all system
management operations are temporarily delayed and subsequent call attempts are disallowed.
Therefore, the recommended total system processing occupancy is not more than 65%. That is,
100% - 27% for system management - 8% for the call throttling region.
Processing occupancy budgets for Communication Manager on page 176 shows the various
occupancy budgets involved. To illustrate, the relationship between Communication Manager
processor occupancy and the call rate is depicted as linear, although that is not always the case.
If the value of ST + CP occupancy is between 65% and 92%, some system management
functions will be postponed to a quieter traffic period to allow static occupancy and call processing
processes to use processor cycles from the system management budget. If the value of ST + CP
occupancy exceeds 92%, all system management functions are suppressed and call throttling is
initiated.
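The occupancy regions described above can be summarized in a small sketch; the function name and region labels are ours, while the 65% and 92% thresholds come from the text:

```python
# Thresholds from the text: recommended ST + CP budget of 65%, system
# management postponement between 65% and 92%, throttling beyond ~92%.
def occupancy_region(static_pct: float, call_processing_pct: float) -> str:
    """Classify combined static (ST) + call processing (CP) occupancy."""
    st_cp = static_pct + call_processing_pct
    if st_cp <= 65.0:
        return "normal: within the recommended ST + CP budget"
    if st_cp <= 92.0:
        return "elevated: some system management functions are postponed"
    return "overload: system management suppressed, call throttling initiated"

print(occupancy_region(10.0, 50.0))  # normal: within the recommended ST + CP budget
print(occupancy_region(10.0, 75.0))  # elevated: some system management functions are postponed
print(occupancy_region(10.0, 85.0))  # overload: system management suppressed, call throttling initiated
```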
For more information, see Avaya Aura® Call Center Elite Performance Report.
allocated to support adjuncts such as CDR recording and PMS links. Alternatively, a single
Processor Ethernet interface can support up to two CDR applications.
• A single Processor Ethernet interface can support up to 16 AES applications.
The information provided in the preceding bullet items is used to determine the number of C-LAN
circuit packs and/or Processor Ethernet interfaces that are required to support just the adjuncts,
based on registration considerations.
Each C-LAN socket and each ISDN D-channel requires one DLCI resource. Therefore, the
minimum number of TN2312 IPSI circuit packs required to support DLCIs is the minimum number
that is greater than the result in the following formula:
(total number of C-LAN sockets + total number of ISDN D-channels)/2480
The number of IPSI circuit packs used must be great enough to support all of the DLCI resources
required for the system.
IPSI throughput
While the throughput bottleneck on a C-LAN circuit pack is its on-board CPU, the throughput
bottleneck on an IPSI circuit pack is its packet interface buffer. To safeguard against buffer
overflow, enough IPSI circuit packs must be used to ensure that no single IPSI circuit pack
handles more than 15K busy hour call completions (BHCC).
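Combining the DLCI formula with the 15K BHCC throughput limit, IPSI sizing can be sketched as follows. The function and argument names are illustrative, not from any Avaya tool; the 2480 DLCI and 15,000 BHCC limits are taken from the text above.

```python
import math

def required_ipsi_packs(clan_sockets: int, isdn_d_channels: int,
                        busy_hour_calls: int) -> int:
    """Return the number of TN2312 IPSI circuit packs needed.

    Each IPSI supports 2480 DLCI resources (one per C-LAN socket or
    ISDN D-channel) and should handle at most 15,000 BHCC.
    """
    for_dlci = math.ceil((clan_sockets + isdn_d_channels) / 2480)
    for_throughput = math.ceil(busy_hour_calls / 15000)
    return max(for_dlci, for_throughput, 1)
```

For example, 3,000 C-LAN sockets and 100 D-channels need 2 packs for DLCIs, but 40,000 BHCC needs 3 packs, so throughput governs and 3 packs are required.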
Related links
Topology on page 157
There are at least three cases in which a TN2602 IP Media Resource 320 circuit pack can be a
blocking entity even though spare capacity remains among the 242 time slot pairs per port
network:
• No shuffling
Suppose there are 160 simultaneous nonshuffled IP station-to-IP station calls using a single
TN2602 circuit pack. In this case, all 320 media resource channels are in use (that is, one
two-way talk path between each of the 320 IP stations involved to the TDM bus), but only 160
out of 242 time slot pairs (that is, 320 out of 484 time slots) are in use. Therefore, subsequent
IP calls are blocked at the circuit pack, despite the fact that there is spare capacity on the
TDM bus.
• Conference calls
Suppose there are 106 simultaneous three-party IP station calls using a single TN2602 circuit
pack. Since calls involving more than two IP endpoints cannot shuffle, a total of 318 media
resource channels are required (that is, one two-way talk path between each of the 318 IP
stations involved to the TDM bus), but only 318 out of 484 time slots are in use. Therefore,
subsequent three-party IP calls are blocked at the TN2602 circuit pack, despite the fact that
there is spare capacity on the TDM bus.
• Music on Hold
Suppose there are 320 simultaneous IP trunk calls using a single TN2602 circuit pack, and
every call is listening to the same Music on Hold. A total of 320 media resource channels are
required (that is, one two-way talk path for each of the 320 calls). Even though each caller is
only listening and not talking, a talk path is allocated in advance for an agent’s voice. However,
only 321 out of 484 time slots are in use, because all parties in this example are listening to the
same music and therefore to the same time slot. Therefore, subsequent
IP calls are blocked at the media resource circuit pack, despite the fact that there is spare
capacity on the TDM bus.
Divide that number by 242 and round up to get the required number of TN2602 circuit
packs.
Related links
Determining G450 Branch Gateway media resources on page 183
Determining G430 Branch Gateway media resources on page 182
Topology on page 157
Erlang and ccs definitions on page 159
Endpoint usages on page 161
Erlang B and C models on page 160
Required number of branch gateways and port networks on page 179
If all TTRs in the system are busy, the request is put in a queue. The event of a full queue is
treated as an error and results in intercept treatment; that is, a reorder tone is returned to the
caller.
TTRs are used to collect digits from the following originating endpoints:
• analog sets
• DCP sets
• DS1 OPS (line-side T1)
• DS1 OPS (line-side E1)
• BRI sets
• analog trunks
• RBS digital trunks (T1)
• CAS digital trunks (E1)
TTRs are not used to collect digits from the following originating endpoints:
• IP telephones and trunks
• SIP telephones and trunks
• PRI T1 trunks
• PRI E1 trunks
TTR resources are determined by the originating station or trunk. For an outbound PSTN call,
the TTR must reside in the same port network or branch gateway as the originating station,
which is not necessarily the same port network or branch gateway as the trunk. IP and SIP
endpoints do not require a TTR. Incoming DID calls that do not use touch-tone dialing do not
require TTRs. Incoming PRI calls that use authorization codes do require TTRs.
TTRs are engineered to 0.001 blocking using the blocked-calls-cleared model. This is conservative
in that there is a small buffer (4 entries) for calls that find all TTRs busy.
Default holding time values for the different calls can be obtained by multiplying the number of
digits in the call by 0.65 s and adding 3 s, which represents the period from off-hook to the first
digit. The TTR usage, expressed in Erlangs, is calculated by multiplying the TTR holding time
by the calls per hour, then dividing by 3600. The Erlang B formula with a P001 grade of service
is then used to determine the required number of TTR resources. Each G430 branch gateway
supports 32 TTR resources, and each G450 branch gateway supports 64 TTR resources. The
TTR resources on a port network are scalable through the use of various circuit packs supporting
TTR.
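The sizing procedure above can be sketched directly. The recursive Erlang B computation is the standard one, and the timing constants (0.65 s per digit plus a 3 s off-hook delay) come from the text; the function names are illustrative.

```python
def erlang_b(offered_load_erlangs: float, servers: int) -> float:
    """Blocking probability via the standard Erlang B recurrence."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load_erlangs * b / (k + offered_load_erlangs * b)
    return b

def required_ttrs(calls_per_hour: float, digits_per_call: int,
                  grade_of_service: float = 0.001) -> int:
    """Smallest TTR count with blocking at or below the P001 target."""
    holding_time_s = 3.0 + 0.65 * digits_per_call   # off-hook delay + dialing
    load = calls_per_hour * holding_time_s / 3600.0  # TTR usage in Erlangs
    n = 1
    while erlang_b(load, n) > grade_of_service:
        n += 1
    return n
```

For example, 1,000 calls per hour with 10-digit dialing gives a 9.5 s holding time, about 2.64 Erlangs of TTR usage, and requires 10 TTRs at P001, well within a single G430 (32 TTRs) or G450 (64 TTRs).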
Media bandwidth
An IP packet consists of a payload and some amount of overhead, where the payload consists of
actual sampled voice, and the overhead represents headers and trailers, which serve to navigate
the packet to its proper destination. The overhead due to IP, UDP, and RTP is 40 bytes, while the
Ethernet overhead is between 18 and 22 bytes (18 is assumed in this discussion). This represents
a total overhead of 58 bytes (464 bits), regardless of the nature of the payload. Because Layer 2
(Ethernet) overhead has been included in that total, these calculations represent bandwidth on a
LAN. The Layer 2 framing is replaced at every router boundary, and because WAN protocol (for
example, PPP) Layer 2 headers are generally smaller than Ethernet headers, WAN bandwidth is
typically less than LAN bandwidth.
The size of the payload depends upon certain parameters relating to the codec being used.
The two most common codecs used with Communication Manager products are (uncompressed)
G.711 and (compressed) G.729. The transmission rates associated with those codecs are 64 kbps
for G.711 (8,000 samples per second at 8 bits per sample) and 8 kbps for G.729.
The packet size is sometimes expressed in units of time (specifically, in milliseconds). The
following formula yields the packet size, expressed in bits:
number of bits of payload per packet = transmission rate (kbps) x milliseconds per packet
Payload size per packet on page 185, which has been populated using this formula, provides the
payload size per packet (expressed in bits), as a function of packet size (milliseconds per packet)
and codec:
Note that the number of bits of payload per packet depends on the packet size, but it is
independent of the sizes of the individual frames contained in that packet. For example, a packet
size of 60 ms could be referring to six 10-ms frames per packet, or three 20-ms frames per packet,
or two 30-ms frames per packet. Presently, the most commonly used packet size is 20 ms.
Both G.711 and G.729 codecs typically use two 10-ms frames per packet.
As stated earlier, there is typically an overhead of 464 bits per packet in a LAN scenario. So,
the bandwidth (expressed in kbps) associated with a unidirectional media stream (assuming no
Silence Suppression is used) is augmented from 64 kbps and 8 kbps (for G.711 and G.729,
respectively) to account for this overhead. The results of this exercise are provided in the Typical
LAN bandwidth requirements for media streams on page 186:
The kilobits per second values in Typical LAN bandwidth requirements for media streams on
page 186 were calculated by multiplying the transmission rate by the ratio of the total bits per
packet (payload plus overhead) to the payload bits per packet. For example, for the G.711 codec,
20-ms packets, and 58 bytes of overhead per packet, the bandwidth per call is
(64 kbps)[(1280 + 464) / 1280] = 87.2 kbps
Note that the entries in Typical LAN bandwidth requirements for media streams on page 186
correspond with unidirectional media streams. A full-duplex connection with a kilobits per second
capacity at least as large as the number in one of the table cells would be sufficient for carrying a
two-way voice stream using the corresponding codec, packet size, and packet overhead. In other
words, a full-duplex connection with a particular capacity rating would support enough bandwidth
to carry that capacity in both directions. Alternatively, two half-duplex connections of the same
capacity rating could be used.
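The per-stream calculation above can be expressed directly. The 58-byte default overhead matches the LAN example in the text; the function name is illustrative.

```python
def stream_bandwidth_kbps(codec_rate_kbps: float, packet_ms: float,
                          overhead_bytes: int = 58) -> float:
    """Unidirectional LAN bandwidth for one RTP media stream.

    Payload bits per packet = codec rate (kbps) x packet size (ms);
    the codec rate is scaled by (payload + overhead) / payload.
    """
    payload_bits = codec_rate_kbps * packet_ms
    overhead_bits = overhead_bytes * 8  # IP/UDP/RTP (40 B) + Ethernet (18 B)
    return codec_rate_kbps * (payload_bits + overhead_bits) / payload_bits
```

With 20-ms packets this reproduces the worked example: 87.2 kbps per stream for G.711, and 31.2 kbps per stream for G.729.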
3. The parties on the call are members of incompatible (in the sense of codec) network
regions
4. The call was forcibly redirected over the PSTN for testing or debugging purposes
Dial Plan Transparency is somewhat similar to IGAR in that calls whose primary routes are
through IP networks are rerouted through the PSTN. However, IGAR applies only to intra-
Communication Manager calls, and Dial Plan Transparency applies only to inter-Communication
Manager calls. For example, consider a Communication Manager system in which endpoints in
two distinct geographic sites can only talk to each other via a particular WAN or via the PSTN.
Suppose that the WAN is lost because of a failure, and that the main server complex is coresident
with one of the two sites. In that case, the other site must have a survivable core or remote
server to keep the endpoints in that site active. In such a scenario, the call in question becomes
an inter-Communication Manager call (that is, a call between an endpoint controlled by the main
server and an endpoint controlled by a survivable server), and could be rerouted through the
PSTN through the use of Dial Plan Transparency. IGAR would not apply to such a scenario.
When engineering a configuration supporting IGAR or Dial Plan Transparency, it is important to
engineer the PSTN trunks to support the traffic that would be rerouted if IGAR or Dial
Plan Transparency were invoked. For example, if Dial Plan Transparency is being used to provide
inter-site connectivity over the PSTN in the event of a WAN failure, the PSTN trunks in both sites
should be engineered to an appropriate grade of service, assuming that the PSTN call usage
includes all of the traffic that would be rerouted following a WAN failure. For more information,
see Sizing of PSTN trunks.
Signaling bandwidth
The signaling bandwidth is normally considerably smaller than the corresponding media
bandwidth. However, we often must estimate it, especially in SIP configurations and when
separating the bearer and signaling network. Two components typically make up signaling
bandwidth:
• Bandwidth supporting keep-alive signaling
• Bandwidth supporting per-call signaling.
The value of the keep-alive signaling and per-call signaling associated with a particular
configuration depends on the types of endpoints and gateways involved and must be determined
empirically. Once we determine the per-call signaling bandwidth for the various call types involved,
those values are multiplied by the corresponding call rates, and those results are then added
together.
We can then apply the Erlang B formula with a P001 grade of service to determine the 99.9th
percentile bandwidth. See 99.9th percentile traffic on page 186.
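Assuming the empirically measured per-call values are in hand, the aggregation step that produces the mean signaling bandwidth can be sketched as follows; the function name and units are illustrative, and the 99.9th percentile step then applies Erlang B as described above.

```python
def mean_signaling_bandwidth_kbps(keepalive_kbps, call_types):
    """Sum keep-alive bandwidth with per-call signaling contributions.

    call_types: iterable of (kilobits of signaling per call, calls per hour)
    pairs; each pair contributes kbits_per_call * calls_per_hour / 3600 kbps.
    """
    per_call = sum(kbits * rate / 3600.0 for kbits, rate in call_types)
    return keepalive_kbps + per_call
```

For example, 5 kbps of keep-alive traffic plus 3,600 calls per hour at 10 kbits each and 1,800 calls per hour at 20 kbits each yields a mean of 25 kbps.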
Documentation
The following table lists the documents related to the components of Avaya Aura® Release 10.1.x.
Download the documents from the Avaya Support website at https://support.avaya.com.
Implementation

Deploying Avaya Aura® System Manager in Virtualized Environment
   Description: Deploy the Avaya Aura® System Manager application in a virtualized environment.
   Audience: Implementation personnel

Deploying Avaya Aura® System Manager in Software-Only and Infrastructure as a Service Environments
   Description: Deploy the Avaya Aura® System Manager application in software-only and Infrastructure as a Service environments.
   Audience: Implementation personnel

Upgrading Avaya Aura® System Manager
   Description: Upgrade the Avaya Aura® System Manager application to Release 10.1.
   Audience: System administrators and IT personnel

Deploying Avaya Aura® Communication Manager in Virtualized Environment
   Description: Describes the implementation instructions for deploying Communication Manager in a virtualized environment.
   Audience: Implementation personnel

Deploying Avaya Aura® Communication Manager in Software-Only and Infrastructure as a Service Environments
   Description: Describes the implementation instructions for deploying Communication Manager in software-only and Infrastructure as a Service environments.
   Audience: Implementation personnel

Upgrading Avaya Aura® Communication Manager
   Description: Describes instructions for upgrading Communication Manager.
   Audience: System administrators and IT personnel

Deploying Avaya Aura® Session Manager and Avaya Aura® Branch Session Manager in Virtualized Environment
   Description: Describes how to deploy the Session Manager virtual application in a virtualized environment.
   Audience: Implementation personnel

Deploying Avaya Aura® Session Manager in Software-Only and Infrastructure as a Service Environment
   Description: Describes how to deploy Session Manager in software-only and Infrastructure as a Service environments.
   Audience: Implementation personnel
Training
The following courses are available on the Avaya Learning website at www.avaya-learning.com.
After logging into the website, enter the course code or the course title in the Search field and
click Go to search for the course.
Course code Course title
20460W Virtualization and Installation Basics for Avaya Team Engagement Solutions
20970W Introducing Avaya Device Adapter
20980W What's New with Avaya Aura®
Note:
Videos are not available for all products.
Support
Go to the Avaya Support website at https://support.avaya.com for the most up-to-date
documentation, product notices, and knowledge articles. You can also search for release notes,
downloads, and resolutions to issues. Use the online service request system to create a service
request. Chat with live agents to get answers to questions, or request an agent to connect you to a
support team if an issue requires additional expertise.