RUCKUS-SZ (ST-GA) SmartZone Upgrade Guide, 7.0.0-RevC-20240430
No part of this content may be reproduced in any form or by any means or used to make any derivative work (such as translation, transformation, or adaptation)
without written permission from CommScope, Inc. and/or its affiliates (“CommScope”). CommScope reserves the right to revise or change this content from time to
time without obligation on the part of CommScope to provide notification of such revision or change.
Export Restrictions
These products and associated technical data (in print or electronic form) may be subject to export control laws of the United States of America. It is
your responsibility to determine the applicable regulations and to comply with them. The following notice is applicable for all products or
technology subject to export control:
These items are controlled by the U.S. Government and authorized for export only to the country of ultimate destination for use by the ultimate
consignee or end-user(s) herein identified. They may not be resold, transferred, or otherwise disposed of, to any other country or to any person other
than the authorized ultimate consignee or end-user(s), either in their original form or after being incorporated into other items, without first
obtaining approval from the U.S. government or as otherwise authorized by U.S. law and regulations.
Disclaimer
THIS CONTENT AND ASSOCIATED PRODUCTS OR SERVICES ("MATERIALS"), ARE PROVIDED "AS IS" AND WITHOUT WARRANTIES OF ANY KIND,
WHETHER EXPRESS OR IMPLIED. TO THE FULLEST EXTENT PERMISSIBLE PURSUANT TO APPLICABLE LAW, COMMSCOPE DISCLAIMS ALL
WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE, TITLE, NON-INFRINGEMENT, FREEDOM FROM COMPUTER VIRUS, AND WARRANTIES ARISING FROM COURSE OF DEALING
OR COURSE OF PERFORMANCE. CommScope does not represent or warrant that the functions described or contained in the Materials will be
uninterrupted or error-free, that defects will be corrected, or are free of viruses or other harmful components. CommScope does not make any
warranties or representations regarding the use of the Materials in terms of their completeness, correctness, accuracy, adequacy, usefulness,
timeliness, reliability or otherwise. As a condition of your use of the Materials, you warrant to CommScope that you will not make use thereof for
any purpose that is unlawful or prohibited by their associated terms of use.
Limitation of Liability
IN NO EVENT SHALL COMMSCOPE, COMMSCOPE AFFILIATES, OR THEIR OFFICERS, DIRECTORS, EMPLOYEES, AGENTS, SUPPLIERS, LICENSORS AND
THIRD PARTY PARTNERS, BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL, PUNITIVE, INCIDENTAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES, OR
ANY DAMAGES WHATSOEVER, EVEN IF COMMSCOPE HAS BEEN PREVIOUSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGES, WHETHER IN AN
ACTION UNDER CONTRACT, TORT, OR ANY OTHER THEORY ARISING FROM YOUR ACCESS TO, OR USE OF, THE MATERIALS. Because some jurisdictions
do not allow limitations on how long an implied warranty lasts, or the exclusion or limitation of liability for consequential or incidental damages,
some of the above limitations may not apply to you.
Trademarks
CommScope and the CommScope logo are registered trademarks of CommScope and/or its affiliates in the U.S. and other countries. For additional
trademark information see https://www.commscope.com/trademarks. All product names, trademarks, and registered trademarks are the property
of their respective owners.
Release Compatibility...........................................................................................................................................................................................23
Upgrade Tasks...................................................................................................................................................................................................... 25
Controller Upgrade.................................................................................................................................................................................................. 25
Performing the Upgrade.................................................................................................................................................................................. 25
Verifying the Upgrade...................................................................................................................................................................................... 26
Rolling Back to a Previous Software Version.................................................................................................................................................... 26
Creating a Cluster Backup................................................................................................................................................................................ 26
SigPack Upgrade....................................................................................................................................................................................................... 27
Upgrading Application Signature Package........................................................................................................................................................ 27
Verifying the SigPack Upgrade......................................................................................................................................................................... 27
Rolling Back SigPack Upgrade to the Previous Version.....................................................................................................................................28
AP Upgrade.............................................................................................................................................................................................................. 28
Upgrading the APs............................................................................................................................................................................................ 28
Verifying the AP Upgrade................................................................................................................................................................................. 32
Rolling Back the AP Upgrade............................................................................................................................................................................ 33
AP Bundle Upgrade.................................................................................................................................................................................................. 33
Uploading an AP Firmware Bundle.................................................................................................................................................................. 33
Data Plane Upgrade................................................................................................................................................................................................. 34
Upgrading the Data Plane................................................................................................................................................................................ 34
Verifying the DP Upgrade................................................................................................................................................................................. 35
Rolling Back the DP Upgrade............................................................................................................................................................................ 35
Switch Upgrade........................................................................................................................................................................................................ 36
Upgrade FAQs...................................................................................................................................................................................................... 61
Do I Need a Valid Support Contract to Upgrade Firmware?.....................................................................................................................................61
Is My Controller Supported by the Firmware Upgrade?.......................................................................................................................................... 61
How Do I Get Support?............................................................................................................................................................................................ 61
For product support information and details on contacting the Support Team, go directly to the RUCKUS Support Portal using https://support.ruckuswireless.com, or go to https://www.ruckusnetworks.com and select Support.
Open a Case
When your entire network is down (P1), or severely impacted (P2), call the appropriate telephone number listed below to get help:
• Continental United States: 1-855-782-5871
• Canada: 1-855-782-5871
• Europe, Middle East, Africa, Central and South America, and Asia Pacific: toll-free numbers are available at https://support.ruckuswireless.com/contact-us, and Live Chat is also available.
• Worldwide toll number for our support organization. Phone charges will apply: +1-650-265-0903
We suggest that you keep a physical note of the appropriate support number in case you have an entire network outage.
Self-Service Resources
The RUCKUS Support Portal at https://support.ruckuswireless.com offers a number of tools to help you to research and resolve problems with your
RUCKUS products, including:
• Technical Documentation—https://support.ruckuswireless.com/documents
• Community Forums—https://community.ruckuswireless.com
• Knowledge Base Articles—https://support.ruckuswireless.com/answers
• Software Downloads and Release Notes—https://support.ruckuswireless.com/#products_grid
• Security Bulletins—https://support.ruckuswireless.com/security
Using these resources will help you to resolve some issues, and will provide the Technical Assistance Center (TAC) with additional data from your
troubleshooting analysis if you still require assistance through a support case or Return Merchandise Authorization (RMA). If you still require help,
open and manage your case at https://support.ruckuswireless.com/case_management.
Document Feedback
RUCKUS is interested in improving its documentation and welcomes your comments and suggestions.
When providing feedback, include the document title and release number, the part number, and the page number. For example:
• RUCKUS SmartZone Upgrade Guide, Release 5.0
• Part number: 800-71850-001 Rev A
• Page 7
Release Notes and other user documentation are available at https://support.ruckuswireless.com/documents. You can locate the documentation by
product or perform a text search. Access to Release Notes requires an active support contract and a RUCKUS Support Portal user account. Other
technical documentation content is available without logging in to the RUCKUS Support Portal.
White papers, data sheets, and other product documentation are available at https://www.ruckusnetworks.com.
Document Conventions
The following table lists the text conventions that are used throughout this guide.
NOTE
A NOTE provides a tip, guidance, or advice, emphasizes important information, or provides a reference to related information.
ATTENTION
An ATTENTION statement indicates some information that you must read before continuing with the current action or task.
CAUTION
A CAUTION statement alerts you to situations that can be potentially hazardous to you or cause damage to hardware, firmware,
software, or data.
DANGER
A DANGER statement indicates conditions or situations that can be potentially lethal or extremely hazardous to you. Safety labels are
also attached directly to products to warn of these conditions or situations.
Convention Description
bold text Identifies command names, keywords, and command options.
italic text Identifies a variable.
[] Syntax components displayed within square brackets are optional.
• Multiple vSZ instances are not supported - Updated: Each vSZ instance requires a dedicated hardware server. See Virtual SmartZone Deployment Requirements on page 12.
• Supported AP models - Updated: Revised the table with the supported AP models. See Supported Matrix and Unsupported Models on page 59.
Upgrade overview
NOTE
RUCKUS recommends the SmartZone R7.0.0 release for users utilizing Wi-Fi 7 APs. For those with legacy APs, RUCKUS suggests using the SmartZone R6.1.2 release.
One complete controller release includes software for several components in this architecture, such as:
• Control Plane
• Data Plane
• Access Points
Each component may have its own version number, but all of them are grouped into a single controller version.
In addition, other software components can also be managed from the controller.
Software upgrades for these components may be performed separately, as covered in the 'Upgrade Tasks' section. The following is a summary:
• Controllers: The control and management plane; these are the first devices to upgrade.
• Access Points: APs are grouped into zones inside the controller. Once the controllers are upgraded to a new version, or a new AP bundle release is uploaded to the controller, the corresponding AP software release becomes available. APs can then be upgraded per zone.
• Data plane: For this component there are two possible scenarios:
– Controller physical appliances: This component is inside the appliance and is upgraded at the same time as the controller.
– Physical or virtual data planes: This component is a device independent of the controller. It is upgraded from the controller WebUI, after the controllers have been upgraded, using its own software file. It should not be upgraded before the controller, because management from the controller would be lost.
• Application Signature Package: This is used by AP packet inspection features and is updated independently of the Access Point firmware using its own software file. This update is done from the controller WebUI.
• ICX Switch Management: Switches are grouped into groups inside the controller. When a new ICX switch management software release is uploaded to the controller, the corresponding software release becomes available. Switches can then be upgraded individually or per group.
IMPORTANT
Controller upgrades are only performed to move the current cluster from the current version to a later one (if the path is supported), never to an older one. For that reason, a cluster or data plane backup is strongly recommended as the rollback point.
Other devices or components, such as IoT controllers, Cloudpath, or SCI, interact with or are included in some of the previous devices, but they have their own management systems and are therefore outside the scope of this guide.
Upgrade Considerations
CAUTION
Beginning with SmartZone 6.1, when using three interfaces, the SZ300 and vSZ-H platforms do not support network configuration with
the Control, Cluster, and Management interfaces in the same subnet or VLAN. As a workaround, separate the controller Control,
Cluster, and Management interfaces to different subnets or VLANs before upgrading.
CAUTION
Data migration is not supported if the system is upgraded from release 3.6.x. Existing system and network configuration is preserved, but data such as status and statistics, alarms or events, administrator logs, and mesh uplink history is not migrated to the new release. Contact RUCKUS support for concerns or additional clarifications. [SCG-73771]
CAUTION
When the controller meets the following conditions before upgrading:
• The access and core separation feature is enabled
• A UDI interface exists
• There are static routes for the UDI interface
then, after upgrading to release 6.1.0, the static routes for the UDI interface will be placed in an incorrect routing table, and these destinations will not be reachable. [ER-9597]
For assistance, contact RUCKUS support team using https://support.ruckuswireless.com.
NOTE
• Due to the change in the EAP supplicant timeout from the default 12 seconds to 60 seconds (SCG-124967), clients fail to get an IP address when the RADIUS proxy switches to the secondary server.
It is recommended to change the RADIUS Option values in the WLAN before upgrading the controller or AP to SZ6.1.
When upgrading vSZ-E/vSZ-H, if the memory/CPU allocation of the current VM instance does not match the lowest resource level of the new VM instance to which the new vSZ-E/vSZ-H version will be installed, you will lose AP capacity. On the other hand, if the new VM instance has insufficient hard disk space, a warning message is displayed after you upload the upgrade image, but you will still be able to perform the upgrade.
NOTE
The supported hypervisor versions may change with every vSZ release. For the hypervisor versions supported in a specific release, refer to the Upgrade Guide for that release.
vSZ, the leading network management solution, operates optimally when deployed on appropriate hardware resources. Following the deployment
requirements ensures efficient and reliable performance:
• Hypervisor Hardware Resource Dependency: vSZ demands sufficient hardware resources for service stability. It is not compatible with
hypervisors that offer low performance. Deploying vSZ on insufficient hardware risks degraded performance and service interruptions.
• Shared Hypervisor Hardware Limitations:
– Sharing hypervisor hardware among multiple vSZ instances is not recommended. Each vSZ VM must be deployed on dedicated
hardware to prevent resource contention, especially in deployments with thousands of devices.
– A single hypervisor hardware failure could lead to the loss of vSZ N-1 redundancy capability, potentially jeopardizing the entire vSZ
cluster and causing service disruptions
– Service disruptions in vSZ can be caused by shared hypervisor hardware. Therefore, it is essential to segregate vSZ instances onto different hypervisor hardware to effectively prevent such interruptions.
– RUCKUS discourages deploying other services alongside vSZ on the same hypervisor hardware.
• CPU and I/O Requirements: For optimal performance, vSZ VMs are required to adhere to specific CPU and I/O standards. Benchmarking
the CPU and I/O performance of the private hypervisor ensures compatibility with vSZ deployment standards. Disk I/O is critical for vSZ
cluster performance; any inadequacy in disk I/O capabilities can result in performance bottlenecks.
• Running multiple vSZ instances on a shared hypervisor may lead to conflicts over CPU and I/O resources, potentially impacting stability,
especially in large-scale deployments.
• Deploying vSZ on a low-performance hypervisor may result in frequent service outages or sluggish performance due to resource
constraints.
• Failure in meeting CPU and I/O requirements may lead to suboptimal vSZ performance, including network latency, packet loss, or service
downtime during peak usage periods.
• The performance of shared hypervisor hardware may vary, affecting CPU, I/O, and network bandwidth during unexpected situations.
vSZ requires a deployment environment with high IO performance. Measure the hypervisor IO performance before deployment. vSZ IO throughput requirements:
• IO requirement per resource level - Refer to the Disk IO Requirement column in the resource table.
• Avoid network-attached storage (NAS/SAN). The general claim is that a NAS/SAN solution is faster and more reliable than local drives; in practice, NAS/SAN is often slower, displays larger latencies with a wider deviation in average latency, and is a single point of failure.
• Virtual Disk - RUCKUS recommends Thick Preallocated/Eager Zeroed/Fixed Size disks to provide good performance and low latency for IO. Avoid using "Thin Provision" or "Thick Provision Lazy Zeroed/Dynamic Expanding" because it could impact IO performance. [*]
[*] If there are limitations with the virtualization platform and Thin Provision or Thick Provision Lazy Zeroed/Dynamic Expanding is the only available option, you can skip the initial validation setup, with the understanding of the potential risk of low performance. In a deployment environment with a high number of APs, high IO performance is required. The setup capability check can be skipped using the following debug CLI commands:
1-vSZ# debug
1-vSZ(debug)# debug-tools
[Change to system]
Welcome to Debug CLI Framework!
(debug tool-set) system $ use sz
[Change to sz]
(debug tool-set) sz $ skip-setup-capability-check
[Execution Done!]
(debug tool-set) sz $
vSZ requires low network latency between vSZ nodes on the control and cluster interface network. vSZ does not support deployment in a high network latency environment. The cluster interface network latency between nodes must be less than 34 ms.
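As a rough pre-check of inter-node latency, the round-trip time to the peer node's cluster interface can be sampled before forming or upgrading a cluster. The following is a minimal sketch, assuming Python is available on a test host, that the peer address shown is replaced with a real cluster interface IP, and that TCP connect time to port 7800 (the cluster communication port listed in the port table later in this guide) is an acceptable rough proxy; a dedicated tool such as ping gives a more precise figure.

import socket
import time

# Rough latency probe: measures TCP connect time to a peer node's cluster
# interface. Port 7800 is taken from the cluster-communication row of the
# port table in this guide; the peer address below is a placeholder.
PEER_CLUSTER_IP = "192.0.2.11"   # hypothetical peer node cluster interface IP
PEER_PORT = 7800
THRESHOLD_MS = 34                # cluster interface latency must be < 34 ms

def connect_latency_ms(host: str, port: int) -> float:
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=2):
        pass
    return (time.monotonic() - start) * 1000

samples = [connect_latency_ms(PEER_CLUSTER_IP, PEER_PORT) for _ in range(5)]
average = sum(samples) / len(samples)
print(f"average connect latency: {average:.1f} ms "
      f"({'OK' if average < THRESHOLD_MS else 'exceeds the 34 ms requirement'})")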
Before upgrading vSZ to this release, verify that the virtual machine on which vSZ is installed has sufficient resources to handle the number of APs,
wireless clients and ICX Switches that you plan to manage. See the resource tables below for the required virtual machine system resources.
The vCPU, RAM, and Disk Size values are interconnected and must fulfill the minimum requirements within each resource level. When adjusting any
of these parameters, all three values must be equal to or greater than the requirements of an existing Resource Level. For instance, considering vSZ-
H Resource Level 5, if the number of vCPUs is increased from 4 to 6 or more, the RAM must be adjusted to 22GB or more, and the Disk Size must be
adjusted to 300GB or more, ensuring compatibility with or exceeding all the minimum values of Resource Level 6. Failure to meet the minimum
requirements of level 6 will result in the vSZ remaining at level 5.
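The rule above can be expressed as a simple check: a VM qualifies for a resource level only if its vCPU, RAM, and disk size all meet that level's minimums, and otherwise it stays at the highest level it fully satisfies. The sketch below illustrates the logic; only the Level 6 figures (6 vCPU, 22 GB RAM, 300 GB disk) come from the example above, while the Level 5 RAM and disk figures are placeholders that should be read from the resource table for your release.

# Effective resource level check: all three parameters must meet or exceed a
# level's minimums; otherwise the vSZ remains at the lower level.
# Level 6 values (6 vCPU, 22 GB RAM, 300 GB disk) are from the example above;
# the Level 5 RAM/disk values are placeholders - consult the resource table.
LEVELS = [
    ("Level 5", {"vcpu": 4, "ram_gb": 18, "disk_gb": 250}),   # placeholder RAM/disk
    ("Level 6", {"vcpu": 6, "ram_gb": 22, "disk_gb": 300}),   # from the example above
]

def effective_level(vcpu: int, ram_gb: int, disk_gb: int) -> str:
    best = "below Level 5"
    for name, req in LEVELS:
        if vcpu >= req["vcpu"] and ram_gb >= req["ram_gb"] and disk_gb >= req["disk_gb"]:
            best = name
    return best

# Increasing only the vCPU count while RAM and disk stay at Level 5 sizing
# leaves the vSZ at Level 5, as described above.
print(effective_level(vcpu=6, ram_gb=18, disk_gb=250))   # Level 5
print(effective_level(vcpu=6, ram_gb=22, disk_gb=300))   # Level 6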
NOTE
The vSZ deployed on the Nutanix hypervisor introduces more memory overhead. The 10K AP per node capacity is not sustained with a 48 GB memory setting. [SCG-113477]
Workaround: When deploying vSZ on Nutanix, it is recommended to allocate more memory for vSZ usage. For the 10K AP resource level, the setup needs a 24-core CPU and 50 GB (+2 GB) of memory. Alternatively, decrease the AP deployment by 25% of the vSZ resource level; for example, 7,500 APs at the 10K AP resource level.
WARNING
These vSZ required resources may change from release to release. Before upgrading vSZ, always check the required resource tables for
the release to which you are upgrading.
NOTE
When initially building up the network, a higher Resource Level than needed for the number of APs first deployed can be used, provided all three parameters (vCPU, RAM, and Disk Size) match that higher Resource Level exactly.
ATTENTION
It is recommended that there should be only one concurrent CLI connection per cluster when configuring vSZ.
In the following tables the high scale resources are broken into two tables for easy readability. These tables are based on the AP Count Range.
In the following tables the essential scale resources are broken into two tables for easy readability. These tables are based on the AP Count Range.
NOTE
The recommended vCPU core for the vSZ-E with AP Count Range 1 through 100 is 2-4.
NOTE
[1] - vSZ-H and vSZ-E have different report intervals. For example, an AP sends its status to vSZ-E every 90 seconds but to vSZ-H every 180 seconds, which means that vSZ-E needs more RAM in a scaling environment at a given resource level.
[2] - NICs assigned to direct IO cannot be shared. Moreover, VMware features such as vMotion, DRS, and HA are unsupported.
In the following tables the essential scale resources are broken into two tables for easy readability. These tables are based on the AP Count Range.
NOTE
The recommended vCPU core for the vSZ-E with AP Count Range 1 through 100 is 2-4.
NOTE
• [1] - Increase the vSZ total memory by 2-4 GB when running in a special or extreme deployment environment in which vSZ raises a memory exceeded (90%) alarm. For example:
– Deploying a 4-node vSZ cluster on Nutanix with the full 30K AP capacity.
– One vSZ node down in a 4-node vSZ cluster, with 30K APs sustained long term on the 3 remaining vSZ nodes.
The failed vSZ node should be recovered as soon as possible. However, if you need to run 3 nodes with 30K APs for a long-term sustained period, the vSZ memory must be increased.
– All APs sending full statistics reports (AVC, HCCD, UE, and so on) to the controller under full-load stress conditions.
• [2] - Required Disk Type
– AWS: General Purpose SSD (gp2)
– GCE: SSD
– Azure: Standard-SSD
• [3] - If the deployed hardware CPU computing performance does not meet the recommendation for the 100 AP resource level, the 2-core CPU setting cannot be supported. Use 4 cores instead of 2 cores in this case.
• [4] - If the deployed hardware CPU computing performance of the hypervisor (such as Hyper-V) does not meet the recommendation for the 4-CPU setting to support 1000 APs, use 6 cores instead of 4 cores in this case.
• [5] - Resource level 6.6 (the 6000 AP resource profile level) can support up to a 4-node cluster. The total number of supported APs will be up to 18,000 APs in a 4-node vSZ cluster.
• [6] - Resource level 9 is added for situations where resource level 8 cannot sustain the load. It is introduced when level 8
reaches its upper bound in handling heavy loads, especially in fully-loaded vSZ environments. These environments might
include configurations with 30,000 access points across thousands of zones, along with resource-intensive features like HCCD,
SCI, and MLISA. This upgrade is crucial to ensure optimal performance under such demanding conditions, thereby enhancing
the vSZ’s ability to effectively manage complex networks.
• The 6000 AP resource profile level can support up to a 4-node cluster. The total supported AP number will be up to 18,000 APs in a 4-node vSZ cluster.
• Workaround if the virtualization platform supports only Thin Provision/Lazy Zeroed/Dynamic Expanding: skip the setup capability check using the debug CLI commands shown earlier.
1. Go to Administration > System Info > System Summary > Total Capacity.
The AP capacity license refers to the number of approved APs, while the Connected AP represents the total number of APs that are
currently connected to the controller. AP capacity is based on system resources (CPU/RAM) and not the AP license count.
Supported Platforms
This section provides a summary of the platforms supported by vSZ in this release.
Hypervisors supported by vSCG/vSZ
NOTE
RUCKUS only supports the use of vSZ on the virtualization platforms named above.
The following hypervisor platforms are not supported:
• Sangfor HCI
• VMware Workstation/Workstation Player
• Oracle VM VirtualBox
NOTE
These hypervisor platforms are not currently supported by the RUCKUS controller. If vSZ is run on any platform that is not officially supported or assured by RUCKUS, any errors encountered in vSZ cannot be investigated by RUCKUS support. The above list is not exhaustive; it serves only to clarify the unsupported hypervisors.
The System Benchmark Tool included in this release can be used to measure hypervisor performance. It provides the benchmark result and performance measurement for running vSZ, covering CPU (Central Processing Unit) and IO (Input/Output).
Command Location:
Performance requirements:
• CPU: Single-core performance must exceed 180 events/second per core.
• IO: Requirements change per resource level. Refer to the resource table for minimum values (column 'Disk IO Requirement').
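The following sketch shows how the benchmark output can be evaluated against these requirements. The 180 events/second per core threshold is the documented CPU requirement; the disk IO minimum depends on the resource level, so the value passed in must come from the 'Disk IO Requirement' column of the resource table (the figures in the example call are hypothetical).

# Evaluate hypervisor benchmark results against the vSZ requirements above.
def hypervisor_meets_requirements(cpu_events_per_sec_per_core: float,
                                  measured_disk_iops: float,
                                  required_disk_iops: float) -> bool:
    cpu_ok = cpu_events_per_sec_per_core > 180        # documented CPU threshold
    io_ok = measured_disk_iops >= required_disk_iops  # value from the resource table
    return cpu_ok and io_ok

# Example: the benchmark reports 210 events/s per core and 20,000 IOPS, and the
# resource table (hypothetically) requires 15,000 IOPS for the chosen level.
print(hypervisor_meets_requirements(210, 20_000, 15_000))   # True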
SmartZone 6.1.x supports dynamic (linear) AP/Switch capacity based on a capacity ratio. There is no separate AP-only or Switch-only mode; only mixed mode is supported, and the number of supported APs and switches is based on the total connected AP/Switch capacity.
Capacity Ratio
• High scale profile: switch support capacity has been increased to a 5:1 ratio from 8:1.
• vSZ-H L6 ~ L8: 5:1 (10,000 APs : 2,000 switches)
• 200 APs + 100 switches (1:5)
(200 x 1) + (100 x 5) = 700 (Total Capacity). This requirement could use L5, since the total capacity is smaller than 1,000.
• 400 APs + 10 switches (1:5)
(400 x 1) + (10 x 5) = 450 (Total Capacity). This requirement could use L4, since the total capacity is smaller than 500.
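The total-capacity arithmetic above (one capacity unit per AP and five units per switch at the 5:1 ratio) can be sketched as follows; the only level thresholds shown are the two mentioned in the examples (smaller than 500 for L4, smaller than 1,000 for L5).

# Total capacity at a 5:1 AP:switch ratio, as in the worked examples above:
# each AP consumes 1 capacity unit and each switch consumes 5 units.
SWITCH_WEIGHT = 5

def total_capacity(ap_count: int, switch_count: int) -> int:
    return ap_count * 1 + switch_count * SWITCH_WEIGHT

print(total_capacity(200, 100))   # 700 -> fits L5 (total capacity smaller than 1,000)
print(total_capacity(400, 10))    # 450 -> fits L4 (total capacity smaller than 500)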
NOTE
These required resources may change from release to release. Before upgrading, always check the required resource tables for the
release to which you are upgrading.
The following tables for three-node and four-node clusters are broken into two tables for easy readability.
Platform | 3 Nodes: AP Mode (APs / Switches) | 3 Nodes: Switch Mode (APs / Switches) | 3 Nodes: AP/Switch Capacity Ratio | 4 Nodes: AP Mode (APs / Switches) | 4 Nodes: Switch Mode (APs / Switches) | 4 Nodes: AP/Switch Capacity Ratio
SZ144 | 4,000 / 0 | 0 / 800 | 5:1 | 6,000 / 0 | 0 / 1,200 | 5:1
SZ300 | 20,000 / 0 | 0 / 4,000 | 5:1 | 30,000 / 0 | 0 / 6,000 | 5:1
vSZ-E L3 | 2,000 / 0 | 0 / 400 | 5:1 | 3,000 / 0 | 0 / 600 | 5:1
vSZ-H L8 | 20,000 / 0 | 0 / 4,000 | 5:1 | 30,000 / 0 | 0 / 6,000 | 5:1
NOTE
The controller data plane devices follow the AP zone compatibility policy set forth in this section. Data plane devices are compatible with the controller in the same way APs are compatible with the controller.
The supported paths for Short Term (ST) and Long Term (LT) releases are the same, and maintain the following compatibilities:
• An ST release is compatible with its current release train and the immediate prior LT GD release train for upgradability and AP Zone
support.
• An LT release is compatible with its current release train, the immediate prior ST release, and the latest LT GD (General Deployment)
release in immediate prior LT train for upgradability and AP Zone support.
• An LT GD release is compatible with its current release train, the immediate prior ST release, and two immediate prior LT GD releases for
upgradability and AP Zone support.
For more information on release definition, refer to RUCKUS Support Portal at https://support.ruckuswireless.com.
This SmartZone release is a long-term (LT) release. Refer to the following table for a compatibility matrix for all controller platforms.
NOTE
The AP firmware versions that come with the compatible controller releases included in the table, or any later AP firmware patches from those versions, can be used.
NOTE
*- Additional support for this path beyond the base policy has been added due to special requirements.
If you are running an earlier version, you must first upgrade to a compatible version, as shown in the table, before upgrading to this
release.
CAUTION
To help ensure that the cluster firmware upgrade process can be completed successfully, the cluster interfaces of all nodes must be
connected and up.
REMEMBER
Before you proceed with upgrading the controller to this release, ensure that all AP zones are running one of the supported firmware
versions. Otherwise, the cluster upgrade will be blocked.
Controller Upgrade
Performing the Upgrade
Consult the RUCKUS Support website on a regular basis for updates that can be applied to your RUCKUS network devices.
CAUTION
Although the software upgrade process has been designed to preserve all controller settings, RUCKUS strongly recommends that you
back up the controller cluster before performing an upgrade. Having a cluster backup will ensure that you can easily restore the
controller system if the upgrade process fails for any reason. Upload the backup files from all the nodes in a cluster to a remote FTP server, or download them from the SZ WebUI.
CAUTION
RUCKUS strongly recommends that you ensure that all interface cables are intact during the upgrade procedure.
CAUTION
RUCKUS strongly recommends that you ensure that the power supply is not disrupted during the upgrade procedure.
Before starting this procedure, you should have already obtained a valid controller software upgrade file from RUCKUS Support or an authorized
reseller.
1. Copy the software upgrade file that you received from RUCKUS to the computer where you are accessing the controller web interface or
to any location on the network that is accessible from the web interface.
2. Select Administration > Administration > Upgrade > Upgrade.
In Current System Information, the controller version information is displayed.
NOTE
The Upgrade History tab displays information about previous cluster upgrades.
3. From Upload, turn on Run Pre-Upgrade Validations. This triggers data migration validation during the upload process.
4. Click Browse to select the patch file.
5. Click Upload to upload the new image (.ximg file) to the system. The controller uploads the file to its database and then performs the data migration verification. After the verification is done, the Patch for Pending Upgrades section is populated with information about the upgrade file. If verification fails, the following error is displayed:
Exception occurred during the validation of data migration. Please apply the system
configuration backup and contact system administrator.
6. If the controller configuration upload is successful, click Backup & Upgrade to back up the controller cluster and system configuration
before performing the upgrade.
When the upgrade (or backup-and-upgrade) process is complete, the controller logs you off the web interface automatically. When the controller
login page is displayed again, you have completed upgrading the controller.
To be able to restore the software to the previous software version, you must perform a cluster backup before upgrading. Refer to Creating a Cluster
Backup on page 26. To roll back to the previous version, perform either Step 1 or Step 2 depending on the outcome of the software upgrade.
1. If the upgrade fails, access the command-line interface (CLI) of every node and run the restore command to restore the local system on all nodes simultaneously.
2. If the upgrade succeeded, the restore can be run from either the CLI or the WebUI.
For details about performing a cluster backup, see the “Backing Up and Restoring Clusters” section of the appropriate product Management Guide.
When the cluster backup process is complete, a new entry is displayed in the Cluster Backups History section with a Created On value
that is approximate to the time when you started the cluster backup process.
SigPack Upgrade
Upgrading Application Signature Package
The AP DPI feature uses an Application Signature Package that, in general, can optionally be updated when a new version is available.
NOTE
Because an upgrade from R5.1.x to R6.1 is not supported, RUCKUS does not have any signature-package upgrade restrictions during zone upgrade.
There are two types of Application Signature Package files available in the Support site:
• Regular Signature package: This package can be used for any scenario, but it is required if you intend to use the Client Virtual ID Extraction feature available in the WLAN configuration. This type of file is not supported by 802.11ac Wave 1 APs running firmware versions prior to 6.0.
• Non-regular Signature package: This package can be used for any other scenario and is compatible with any supported AP model and
firmware version.
This package can be updated to the latest available version suggested by the controller, or to a file downloaded from the RUCKUS download center.
Complete the following steps to update the signature package in all APs in a cluster.
1. Download one of the following files, as required, from the RUCKUS support site. Otherwise, you can skip this step.
• Regular Signature package only for SZ7.0.0: https://support.ruckuswireless.com/admin/softwares/3960-smartzone-7-0-ga-sigpack-1-670-2-regular-application-signature-package
• Non-Regular Signature package for SZ7.0.0 and older releases: https://support.ruckuswireless.com/admin/softwares/3961-smartzone-7-0-ga-sigpack-1-670-2-application-signature-package
2. From the controller web UI, select Security > Application Control > Application Signature package.
a. If you choose to install the latest version available on the support site, go to Latest available from support site, click Check Now, and if a newer update is available, click Install.
NOTE
If both Regular and Non-regular Signature package files are available in the latest version on the support site, SZ will offer to download and install the Regular version. If your AP models do not support it, they will fail to download it and will stay on their current Signature package version.
b. If you choose to install a version downloaded in step 1, go to Upload Signature Package section and click Browse.
c. Select the file for the version you intend to install and click Upload to install that signature package.
After the signature package file is installed or uploaded successfully, the controller will log off the users.
NOTE
More details can be found in the Security Guide, in the section 'Working with Application Signature Package'.
2. In the Current Signature Package Info section, verify that the filename matches the one that was uploaded and installed.
1. Download the desired Sigpack version from the RUCKUS support site.
2. From the controller web UI, select Security > Application Control > Application Signature package.
3. Go to the Upload Signature Package section, click Browse and select the file and the version you intend to install.
4. Click on Upload to install that signature package.
After the signature package file is installed or uploaded successfully, the controller will log off the users.
AP Upgrade
Upgrading the APs
When the controller is upgraded, a new software version becomes available for the APs, but the APs are not upgraded automatically along with the controller. Instead, they are upgraded independently per zone (different zones can run different software versions) using one of the following methods:
• Manual upgrade per zone or group of zones: Refer to Changing the AP Firmware Version of the Zone on page 29.
• Scheduled upgrade per zone or group of zones: Refer to Schedule Zone Firmware Upgrade on page 30.
Complete the following steps to change the AP firmware version of the zone.
1. From the Access Point page, locate a zone for which you want to upgrade the AP firmware version.
NOTE
To upgrade multiple zones, click the Zone view mode and select the zones by holding down the Ctrl key and clicking each of the
zones.
2. Click More and select Change AP Firmware. The Change AP Firmware dialog box displays the current AP firmware version.
3. Select the firmware version you need. If you upgrade to a new firmware version, a backup configuration file will be created. You can use
this backup file to downgrade to the original firmware version.
NOTE
If the multiple zones do not have the same supported firmware version, the dialog box displays the following message: These Zones do not have same supported AP firmware available for upgrade/downgrade.
4. Click Yes, and a confirmation message is displayed stating that the firmware version was updated successfully.
NOTE
If any zone fails to upgrade, a dialog box displays to download an error CSV list.
5. Click OK. You have completed changing the AP firmware version of the zone.
After a zone firmware upgrade/downgrade task is executed, the user can view the zone firmware change history.
2. Click Create. The Create Schedule Zone Firmware Upgrade Task page is displayed.
3. Select the required zone from the Select Zones list and move it to the Selected Zones list.
5. Enter a Name.
6. Select the upgrade version from the list. The Change Firmware to field will automatically be updated with the selected version.
7. Select the Schedule time.
8. Click Next.
9. Review the task and click OK.
NOTE
With this process, you will also roll back to the configuration and AP list that the zone had when it was upgraded from the selected version.
AP Bundle Upgrade
Uploading an AP Firmware Bundle
An AP patch is a separate software file containing only a new version of the AP component. It can be uploaded to an SZ cluster to add that new AP version. Its main purposes are:
• New AP model: The patch introduces a new AP model not yet supported in that release version.
• Bug fix: The patch contains additional software fixes compared to the previous official AP version.
After the patch file is updated, you will be prompted to log out.
When you login again, the AP Patch History section displays information about the patch file such as start time, AP firmware and model.
You have successfully updated the AP models and AP firmware with the patch file, without having to upgrade the controller software.
For upgrading, verifying and rolling back tasks using this AP patch, you can refer to the same steps described in AP Upgrade on page 28.
CAUTION
RUCKUS strongly recommends that you back up the data plane before performing an upgrade. Having this backup will ensure that you can easily restore the data plane if the upgrade process fails for any reason. Backup and Restore options are available on the same page where the upgrade is performed.
NOTE
This task is applicable only to external data planes.
3. In Patch File Upload, click Browse to select the patch file (.ximg file).
4. Click Upload.
The controller automatically identifies the type of DP (vSZ-D or SZ-D) and switches to the corresponding tab page, uploads the file to its database, and then performs file verification. After the file is verified, the Patch for Pending Upgrade section is populated with information about the upgrade file.
Switch Upgrade
Uploading the Switch Firmware to the Controller
You can upload the latest available firmware to a switch from the controller, thereby upgrading the firmware version of the switch.
3. In Firmware Upload click Browse to select the firmware file for upgrading the switch.
4. Click Open.
5. Click Upload. The upload status bar is displayed, and after the firmware file is uploaded, the Uploaded Switch Firmwares section is
populated with the firmware version and switch models it supports.
Upgrading Switches
Starting with the 08.0.90 release, RUCKUS switches support unified images, which require a two-step upgrade process from prior releases. The two-step process is:
1. Step 1 - Upgrade from 08.0.80 (non-Unified FastIron Image (UFI) or UFI) to 08.0.90 UFI
2. Step 2 - Upgrade from 08.0.90 UFI to 08.0.90a UFI
NOTE
Refer to RUCKUS FastIron Software Upgrade Guide, 08.0.90 for details.
Ensure that the image versions in both the primary and secondary partitions are 08.0.80 or later.
You can upgrade switches per switch group or selected switches as explained in the following sections.
NOTE
If the switch group has a default firmware version selected, the Firmware Upgrade option is unavailable.
NOTE
Beginning with FastIron release 10.0.0, a switch ("Layer 2") image is no longer provided for ICX devices. Only the router ("Layer 3") image is available. On upgrade to FastIron 10.0.00, the configuration of any ICX devices operating with the switch image will automatically be translated to the equivalent router image configuration. The target upgrade to 10.0.0 supports only router code.
The following features are deprecated as a result of this change:
• The IP default gateway
• The management VLAN
• Global configuration of the IP address (Going forward, the IP address must be configured at the interface level for each port.)
Refer to the RUCKUS FastIron Software Upgrade Guide for additional details.
Complete the following steps to perform a firmware upgrade on the switch group.
1. On the menu, click Network > Wired > Switches to display the Switches window.
2. In the Organization tab, select a Domain > Switch Group or Switch Group.
3. Click More > Firmware Upgrade to display the Upgrade Firmware (Group) dialog box.
Prerequisites
• Upload a valid FastIron firmware version (newer than version 8.0.80) to the controller.
• Sync the controller with the NTP server. On the controller user interface, navigate to Administration > System > Time then click Sync
Server.
NOTE
To upgrade the firmware for multiple switches simultaneously, hold down the Ctrl key as you select the desired switches.
5. Click OK.
6. To monitor the firmware upgrade progress, select the target switch and click the Firmware History tab. Hover your cursor over any
message in the Status field for a tooltip providing additional information regarding that stage of the upgrade process.
Images of the six stages of completion, along with their tooltips, are shown below.
You must upgrade the switch firmware as described in Scheduling a Firmware Upgrade for Selected Switches on page 39.
1. On the menu, click Network > Wired > Switches to display the Switches window.
2. From the system tree, select a Domain > Switch Group or Switch Group and select the Switch.
4. In the Upgrade Job Status section, you can verify the upgrade status including the time, switch ID, firmware version, image name, status
and any failure reasons (if applicable).
5. In the Firmware Upgrade History section, you can see the times of previous upgrades and the firmware versions used.
TABLE 17 Ports to Open Between Various RUCKUS Devices, Servers, and Controllers
From (Sender) | To (Listener) | Communication Port Number | Layer 4 Protocol | Interface | Configurable from Web Interface? | Purpose
AP | AP | 1883 | TCP | Control | No | AP-AP communication for neighbor AP information exchange in FT, Client Load Balance, etc.
AP | Control plane | 22 | TCP | Control | No | SSH tunnel for management of: SZ-100, SZ-300, vSZ
AP ZD SZ 69 UDP Control No ZD Migration
AP | vSZ control plane | 91 (AP firmware version 2.0 to 3.1.x) and 443 (AP firmware version 3.2 and later) | TCP | Control | No | AP firmware upgrade. APs need port 91 to download the Guest Logo and to update the signature package for the ARC.
NOTE: Starting with the SZ 3.2 release, the controller uses an HTTPS connection and an encrypted path for the firmware download. The port used for AP firmware downloads has been changed from port 91 to 443 to distinguish between the two methods. To ensure that all APs can be upgraded successfully to the new firmware, open both ports 91 and 443 in the network firewall.
AP | RAC (RADIUS Access Controller) | 1813 | UDP | Management, Cluster, Control | No | The RADIUS_Auth profile defines both inbound and outbound traffic. Information specified here is for inbound traffic only.
NOTE: The Management interface is only applicable when vSZ-H is in single-interface mode. If in 3-interface mode with Access and Core separation disabled, it depends on the configured Management traffic interface.
AP, DP | SZ | 8222 | TCP | Control | No | Captive Portal OAuth service port for HTTPS
AP, DP | SZ | 8280 | TCP | Control | No | Captive Portal Web Proxy service port for HTTPS
AP-MD | SZ-MD | 9191 | TCP | Cluster | No | Communication between AP-MD and SZ-MD
AP | vSZ control plane | 12223 | UDP | Control | No | LWAPP discovery sends image upgrade request to ZD-APs via LWAPP (RFC 5412).
AP | SZ | 18301 | UDP | Management, Cluster, Control | No | SpeedFlex tests the network performance between AP, UE, and SZ.
ICX | vSZ control plane | 22 | TCP | Control | No | SSH tunnel.
ICX | vSZ control plane | 443 | TCP | Control | No | Access to the vSZ/SZ control plane over secure HTTPS.
SZ | External FTP server | 20-21 | TCP | Control, Cluster, Management | No | Transfer data to external FTP servers.
Follower SZ nodes | Master SZ node | 123 | UDP | Cluster | No | Sync system time among SZ nodes.
SZ | External Licensing Server | 443 | TCP | Management | No | Download licensing and support entitlements from the licensing server.
SZ-RAC | External AAA | 1812 | UDP | Management, Cluster, Control | Yes | To support RADIUS Proxy Authentication.
NOTE: The Management interface is applicable when vSZ-H is in single-interface mode. If in 3-interface mode with Access and Core separation disabled, it depends on the configured Management traffic interface.
SZ | SZ | 7500 | UDP | Cluster | No | SZ clustering operation
SZ | SZ | 7800 | TCP/UDP | Cluster | No | Cluster node communication for cluster operations
SZ | SZ | 7800-7805 | TCP | Cluster | No | A protocol stack using TCP on the JGroups library for node-to-node communication
SZ | SZ | 7810 | TCP | Cluster | No | A protocol stack using FD_SOCK on the JGroups library for node-to-node communication
SZ | SZ | 7811 | TCP | Cluster | No | A protocol stack using FD_SOCK on the JGroups library for node-to-node communication
SZ | SZ | 7812 | TCP | Cluster | No | A protocol stack using FD_SOCK on the JGroups library for node-to-node communication
SZ | SPoT | 8883 | TCP | Management, Cluster, Control | No | Communication between SZ and SPoT.
NOTE: The connection between the controller and vSPoT is an outbound connection, so it depends on the destination IP address. If the destination IP address falls in the subnet of one interface, it is routed to that interface. Otherwise, it is routed via the default route.
DNS Server DNS 53 TCP/UDP Management, Cluster, Control No DNS
DHCP Server SZ 67,68 UDP Management, Cluster, Control No DHCP
Walled-Garden Web Server | Captive Portal with HTTP Proxy | 80 | TCP | Management, Cluster, Control | No | WISPr_WalledGarden
SNMP Client SZ 161 UDP Management No Simple Network Management
Protocol (SNMP)
LDAP Server RAC 389 TCP/UDP Management, Cluster, Control Yes SZ to LDAP
SZ rsyslog 514 TCP/UDP Management, Cluster, Control No Remote Syslog
DHCPv6 Server | SZ | 546, 547 | UDP | Management, Cluster, Control | No | DHCPv6 protocol
LDAPS Server RAC 636 TCP Management, Cluster, Control Yes SZ to LDAPS Server
AAA server | SZ | 2083 (RadSec) | TCP | Management, Cluster, Control | No | The default destination port number for RADIUS over TLS is TCP/2083 (as per RFC-6614).
AAA server | SZ | 2084 (CoA/DM over RadSec) | TCP | Management, Cluster, Control | No | SZ as a RadSec server listens on port 2084 for incoming TLS connections from clients (AAA clients) to process CoA/DM messages over RadSec.
AD Server (MSTF-GC) | RAC | 3268 | TCP | Management, Cluster, Control | Yes | SZ to AD (MSTF-GC)
External AAA Server (free RADIUS) | SZ-RAC (vSZ control plane) | 3799 | UDP | Management, Cluster, Control | No | Supports Disconnect Message and CoA (Change of Authorization), which allow dynamic changes to a user session, such as disconnecting users and changing authorizations applicable to a user session.
JITC CAC | SZ | 4443 | TCP | Control | No | Since the SZ 5.1.2 release, mainly for JITC CAC login support. This port is opened for NGINX to configure for client certificate authentication.
Legacy Public API Client | SZ | 7443 | TCP | Management | No | Deprecated Public API
Any Management 8022 No (SSH) Management Yes When the management ACL is
interface enabled, you must use port
8022 (instead of the default
port 22) to log on to the CLI or
to use SSH.
Any | vSZ control plane | 8090 | TCP | Control | No | Allows unauthorized UEs to browse to an HTTP website
Any | vSZ control plane | 8099 | TCP | Control | No | Allows unauthorized UEs to browse to an HTTPS website
Any | vSZ control plane | 8100 | TCP | Control | No | Allows unauthorized UEs to browse using a proxy UE
Any | vSZ management plane | 8443 | TCP | Management | No | Access to the controller web interface via HTTPS. NOTE: The Public API port has changed from 7443 to 8443.
Any | vSZ control plane | 9080 | HTTP | Management, Control | No | Northbound Portal Interface for hotspots
Any | vSZ control plane | 9443 | HTTPS | Management, Control | No | Northbound Portal Interface for hotspots
Client device | SZ control plane | 9997 | TCP | Control | No | Internal Subscriber Portal in HTTP
Any | vSZ control plane | 9998 | TCP | Control | No | Hotspot WISPr subscriber portal login/logout over HTTPS
NOTE
The destination interfaces are meant for three-interface deployments. In a single-interface deployment, all the destination ports must be
forwarded to the combined management and control interface IP address.
NOTE
Communication between APs is not possible across NAT servers.
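The port table can be translated into firewall rules mechanically. The sketch below covers only a few representative rows from Table 17 (AP management over SSH, AP firmware download, and web interface access); it is illustrative only, and the generic 'allow' rule format shown is not tied to any particular firewall product.

# A few representative rows from Table 17, expressed as data that a firewall
# provisioning script could consume. Illustrative only - the full table above
# is authoritative, and rule syntax varies by firewall product.
RULES = [
    # (source, destination, port, protocol, purpose)
    ("AP",  "control plane",    22,   "tcp", "SSH tunnel for AP management"),
    ("AP",  "control plane",    91,   "tcp", "AP firmware download (AP firmware 2.0 to 3.1.x)"),
    ("AP",  "control plane",    443,  "tcp", "AP firmware download (AP firmware 3.2 and later)"),
    ("Any", "management plane", 8443, "tcp", "Controller web interface over HTTPS"),
]

for src, dst, port, proto, purpose in RULES:
    print(f"allow {proto}/{port} from {src} to {dst}  # {purpose}")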
Active-Standby Mode
When an active cluster becomes inaccessible for APs, external DPs (vSZ-D), and ICX switches, a standby cluster restores the latest configuration of the out-of-service (OOS) active cluster and then takes over all external devices (including APs, external DPs, and ICX switches). The AP or ICX switch capacity is limited by the AP or ICX switch High Availability (HA) licenses on the standby cluster and the service license limits from the failed active cluster. When the active cluster returns to the in-service state, the end user can "rehome" all APs, external DPs, and ICX switches back to the active cluster.
The behavior of the standby cluster changes automatically when there is a configuration change in the following deployment types:
• One-to-one (one active cluster to one standby cluster) deployment
The standby cluster restores the configuration from the active cluster after the configuration synchronization is completed. The standby
cluster is always in backup mode and ready to receive the APs, external DPs, and ICX switches from the out-of-service active cluster.
• Many-to-one (two or three active clusters to one standby cluster) deployment
The time taken by the standby cluster between detecting that an active cluster is out of service and being ready to serve its APs and external DPs has been enhanced.
Active-Active Mode
When there are multiple clusters, one cluster can be the configuration source cluster, and all other active clusters restore their configuration from it periodically to make sure the configuration between the clusters is constantly synchronized. When an active cluster becomes inaccessible for APs and external DPs (vSZ-D), they fail over to the target active cluster according to priority. Refer to SOP Z on page 56 for more information.
SOP A
This section provides information on upgrade paths, preconditions, the applicable topology, and the upgrade flow for SOP A.
SZ Upgrade Path
• 5.0.x to 5.1.x
• 5.1.x to 5.2.x
• 5.2.x to 6.0.x
Precondition
The standby cluster must have the following license in order to be upgraded:
• SZ300: SUP_SZ300_HA_EU or SUP_SZ300_HA_PTNR
• vSZ-H: SUPPORT_HA_EU or SUPPORT_HA_PTNR
1. Turn off the monitoring status of each active cluster from the standby cluster. In the standby cluster interface, go to Network > Data and
Control Plane > Cluster.
2. Select the cluster root, and click the Configuration tab.
For example, select the active cluster in the table and click Switch monitor. Ensure the Monitoring Status is set to Off.
5. Repeat Step 2 and Step 3 to upgrade the Active 2 cluster and the Active 3 cluster.
6. Upgrade the standby cluster after all the active clusters are successfully upgraded.
7. Click Sync Now in each active cluster after all the active and standby clusters are successfully upgraded.
8. From the active cluster, go to Monitor > Events and Alarms > Events and check the details of event code 814. Refer to Performing the Upgrade on page 25.
9. Turn on the monitoring status of each active cluster from the standby cluster after the standby cluster is successfully upgraded.
10. If the new SZ version is 5.2.x (including 5.2), and the standby cluster monitors only one active cluster, the standby cluster will
automatically switch to Backup mode regardless of the number of nodes of the Active cluster. After cluster redundancy is enabled, the
standby cluster will take some time to restore the configuration of the latest active cluster.
SOP B
This section provides information on upgrade paths, preconditions, the applicable topology, and the upgrade flow for SOP B.
SZ Upgrade Path
• 5.0.x to 5.1.x
• 5.1.x to 5.2.x
• 5.2.x to 6.0.x
Preconditions
The standby cluster must have the following license in order to be upgraded:
• SZ300: SUP_SZ300_HA_EU or SUP_SZ300_HA_PTNR
• vSZ-H: SUPPORT_HA_EU or SUPPORT_HA_PTNR
1. Rehome any AP from the standby cluster and ensure all the APs are on the active cluster.
2. Disable cluster redundancy from the active cluster and check if all APs are online and up-to-date.
3. Upgrade the active cluster and ensure the active SZ is successfully upgraded.
4. From the active cluster, go to Events and Alarms > Events and check the details of event code 814.
5. From the active cluster, go to Administration > Upgrade > Upgrade History to verify previous cluster upgrades.
6. Ensure all APs are online and up-to-date after the upgrade.
7. Upgrade the standby cluster, and ensure it is successfully upgraded.
8. Repeat Step 4 and Step 5.
9. Enable cluster redundancy on the active cluster.
10. Verify from the admin activities that the active SZ has applied the latest cluster redundancy resources.
If the new SZ version is 5.2.x (including 5.2), the standby cluster will automatically switch to Backup mode after cluster redundancy is
enabled. The standby cluster will take some time to restore the configuration of the latest active cluster.
11. In the standby cluster interface, go to Network > Data and Control Plane > Cluster.
12. Repeat Step 4.
13. Select the cluster root, and click the Configuration tab.
SOP C
This section provides information on upgrade paths, preconditions, the applicable topology, and the upgrade flow for SOP C.
SZ Upgrade Path
• 3.6.x to 5.x
Preconditions
The standby cluster must have the following license in order to be upgraded:
• SZ300: SUP_SZ300_HA_EU or SUP_SZ300_HA_PTNR
• vSZ-H: SUPPORT_HA_EU or SUPPORT_HA_PTNR
1. From the active cluster, go to Network > Data and Control Plane > Cluster > Configuration and disable schedule configuration sync. Do
not change the cluster redundancy-enabled settings.
2. Upgrade the active cluster and ensure that the active SZ is successfully upgraded.
3. From the active cluster, go to Monitor > Events and Alarms > Events and check the details of event code 814.
4. From the active cluster, go to Administration > Administration > Upgrade > Upgrade History to verify previous cluster upgrades.
5. Rehome all APs in the standby cluster to the active cluster and ensure there is no AP on the standby cluster.
6. Ensure all APs are on the Active cluster with Status online and Configuration Status up-to-date after the upgrade.
7. Upgrade the standby cluster, and ensure it is successfully upgraded.
8. Repeat Step 3 and Step 4.
9. In the active cluster interface, go to Network > Data and Control Plane > Cluster > Configuration and disable cluster redundancy.
10. In the standby cluster interface, go to Network > Data and Control Plane > Cluster > Configuration and ensure that all active data is
deleted.
11. Enable cluster redundancy again from the active cluster.
12. From the active cluster, go to Administration > Administration > Admin Activities and check if cluster redundancy is set successfully by
investigating admin activities.
SOP Z
This section provides information on preconditions, the applicable topology, and the upgrade flow for SOP Z.
Preconditions
1. Ensure that all the APs, DPs, and switches are at their home clusters before upgrading the active cluster that is selected as the master
configuration. The switchover operation can be used to let the APs, DPs, and switches connect to another cluster.
2. From the active cluster, go to Network > Data and Control Plane > Cluster > Configuration and disable schedule configuration sync.
3. Upgrade the active cluster A.
4. When the Active A cluster is upgraded, begin to upgrade clusters Active B, Active C, and Active D sequentially.
5. After all the active clusters are upgraded successfully, click Sync Now to sync the configuration from the Active A cluster.
6. Re-enable the scheduled Configuration Sync task in the Active A cluster.
APs preconfigured with the SmartZone AP firmware may be used with SZ300 or vSZ in their native default configuration. APs factory-configured with
the ZoneFlex-AP firmware may be used with the controller when LWAPP discovery services are enabled.
LWAPP2SCG must be disabled on the controller if Solo APs running release 104.x are being moved under controller management. To disable the
LWAPP2SCG service on the controller, log on to the CLI, enter enable mode, go to config > lwapp2scg, and run policy deny-all. Enter Yes to save your
changes.
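For reference, a disabling session on the controller CLI might look like the following minimal sketch. The prompts are generic placeholders (the actual prompt reflects your controller or cluster name), and the exact confirmation wording may differ between releases.

ruckus> enable
Password: ********
ruckus# config
ruckus(config)# lwapp2scg
ruckus(config-lwapp2scg)# policy deny-all
ruckus(config-lwapp2scg)# exit
ruckus(config)# exit
Do you want to save your changes? [Yes/No] Yes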
NOTE
Solo APs running release 104.x or later are capable of connecting to both ZoneDirector and SmartZone platforms. If an AP is running
release 104.x or later and the LWAPP2SCG service is enabled on the controller, a race condition will occur.
IMPORTANT
AP PoE power modes: AP features may be limited depending on power provided via PoE. Refer to AP datasheets for more information.
Supported AP Models
This release supports the following RUCKUS AP models.
Indoor: R850, R770, R760, R750, R650, R560, R550, R350, H550, H350
Outdoor: T750SE, T750, T350SE, T350D, T350C
The following lists the supported AP models in this SmartZone release when placed in an AP Zone that uses an older AP version.
ATTENTION
The R730 AP must be removed from the AP Zone before upgrading the AP Zone to the AP firmware version 6.1.1 or later.
ATTENTION
APs that are not compatible with R7.0.0 must be kept on AP firmware version R6.1, 6.1.1, or 6.1.2. A zone that contains APs not supported in
R6.1, 6.1.1, and 6.1.2 cannot be upgraded.
Indoor Outdoor
NOTE
Supported on R6.1.0, 6.1.1, and 6.1.2.
ATTENTION
The R310 is the one exception: it is an 802.11ac Wave 1 AP that supports WPA3, whereas all other APs that support WPA3 are 802.11ac Wave 2 or
802.11ax.
Unsupported AP Models
The following AP models have reached end-of-life (EoL) status and, therefore, are no longer supported in this release.
NOTE
By downloading the SmartZone software and subsequently upgrading SmartZone to version 6.1.0, be advised that the software will
periodically connect to RUCKUS and RUCKUS will collect the hardware serial number, software version and build number. RUCKUS will
transmit a file back to the SmartZone device that will be used to display the current status of your SmartZone support contract. Any
information collected from the SmartZone device may be transferred and stored outside of your country of residence where data
protection standards may be different.
For information about the specific models and modules supported in a SmartZone model, refer to the appropriate hardware installation guide.
Concurrent CLI connections are limited to a maximum number per node, which allows segmented and controlled interaction with vSZ systems and is important for management under load. High AP loads may strain resources, but keeping CLI sessions within these per-node limits ensures that command processing remains efficient without overwhelming node capacity.
Changing the AP firmware of a zone involves selecting the desired firmware version and applying it to the zone, which may revert the zone configuration to an earlier state. Applying an AP patch, by contrast, provides incremental updates, such as support for new AP models or bug fixes, without altering the existing configuration of the AP zone.
The vSZ High Scale profile requires larger virtual resources than vSZ Essentials for comparable deployments. For an AP count of 10,001 to 30,000, vSZ High Scale needs a minimum of 24 vCPUs, 56 GB of RAM, and 600 GB of disk, while vSZ Essentials for an AP count of 1,025 to 3,000 requires 8 vCPUs, 20 GB of RAM, and 250 GB of disk.
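The Python sketch below is one way to sanity-check a planned virtual machine against the minimums quoted above; the numbers mirror this text and should be confirmed against the official release notes for your version.

# Illustrative sizing check: compares a planned VM against the minimum
# resources quoted in the text above (not an official sizing tool).
MINIMUMS = {
    # profile name: (vCPUs, RAM in GB, disk in GB)
    "vSZ-H 10,001-30,000 APs": (24, 56, 600),
    "vSZ-E 1,025-3,000 APs": (8, 20, 250),
}

def meets_minimum(profile: str, vcpu: int, ram_gb: int, disk_gb: int) -> bool:
    """Return True if the planned VM meets or exceeds the quoted minimums."""
    need_vcpu, need_ram, need_disk = MINIMUMS[profile]
    return vcpu >= need_vcpu and ram_gb >= need_ram and disk_gb >= need_disk

# Example: a 24 vCPU / 64 GB / 600 GB VM for a 30,000-AP High Scale deployment
print(meets_minimum("vSZ-H 10,001-30,000 APs", vcpu=24, ram_gb=64, disk_gb=600))  # True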
If one node goes down in a four-node vSZ cluster, sustaining 30,000 APs for an extended period without overloading the remaining nodes requires increasing overall memory to avoid overload and potential service disruption. Such measures help maintain performance but should be treated as temporary, with prompt recovery of the failed node to normal operation.
Sync Now ensures configurations are up to date after an upgrade, which helps maintain operational continuity. Disabling cluster redundancy can simplify an upgrade, but it introduces risk: if a failure occurs while redundancy is disabled, no backup cluster is available, potentially leading to data loss or downtime until redundancy is re-enabled.
The steps include opening the Administration > Upgrade section, uploading the DP patch file, and applying the patch, while keeping a backup of the current data plane so that it can be restored in case of failure. Precautions include verifying the patch file and confirming that the backup is complete before performing the upgrade.
Networking port configurations, such as opening port 21 for control traffic, port 1883 for AP communication, and port 69 for ZoneDirector migration, are critical to uninterrupted communication among RUCKUS devices. Misconfiguration can lead to communication failures that affect AP control and the upgrade processes these ports support.
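As a minimal illustration, the Python sketch below checks whether the TCP ports listed above are reachable from a management host. The controller hostname is a placeholder, and port 69 (TFTP for ZoneDirector migration) is normally UDP, so a TCP connect test applies only to the TCP ports.

# Illustrative reachability check for the TCP ports mentioned above.
# Hostname and port list are example values; port 69 is typically UDP (TFTP)
# and is therefore not covered by a TCP connect test.
import socket

CONTROLLER = "sz.example.net"   # placeholder controller address
TCP_PORTS = [21, 1883]          # control traffic / AP communication

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in TCP_PORTS:
        state = "reachable" if tcp_port_open(CONTROLLER, port) else "blocked or unreachable"
        print(f"{CONTROLLER}:{port} -> {state}")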
On-premises infrastructure may have limited scalability because of hardware constraints and upfront costs, whereas cloud-based deployments such as AWS or Azure provide flexible, scalable environments that can handle increased resource demands dynamically. The latter can adapt more easily to scaling requirements but may incur higher operational costs.
Upgrading vSZ RPM packages can enhance SmartZone controller capabilities by optimizing resource allocation and improving throughput management, which is vital for maintaining performance in high-traffic scenarios. This requires careful capacity planning to align resources with anticipated demand peaks and uphold SLAs.
Resource level 9 is required in vSZ High Scale environments where resource level 8 cannot handle the load, such as configurations with 30,000 APs across thousands of zones and resource-intensive features like HCCD and MLISA. This setting ensures optimal performance under heavy load but may demand significant infrastructure investment.