Doc05 - FusionCompute Product Documentation
FusionCompute
8.8.0
FusionCompute Product Documentation
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the
products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise
specified in the contract, all statements, information, and recommendations in this document are provided "AS IS" without warranties,
guarantees or representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to
ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of
any kind, express or implied.
Website: https://www.huawei.com
Email: [email protected]
Contents
1 Library Information
1.1 Change History
1.2 Conventions
Symbol Conventions
General Conventions
Command Conventions
Command Use Conventions
GUI Conventions
GUI Image Conventions
Keyboard Operations
Mouse Operations
1.3 Library Organization
Solution Documentation Overview
1.4 How to Obtain and Update Documentation
Obtaining Documentation
Updating Documentation
1.5 Obtaining the Open-source Software Notice
Obtaining the Open-source Software Notice
1.6 Feedback
1.7 Technical Support
2 Descriptions
2.1 Virtualization Suite Description
2.1.1 Introduction to FusionCompute Virtualization Suite
2.1.1.1 Virtualization Suite Overview
Application Scenarios
2.1.1.2 Features and Functions of the Virtualization Suite
On-demand Resource Allocation for Applications
Virtual Resource SLA
Centralized VDC Management
Wide Compatibility
Automated Resource Scheduling
Comprehensive Rights Management Functions
Intelligent Application Management
Sophisticated Metering
Various O&M Functions
Cloud Security
Container Management
2.1.1.3 Positioning
Cloud Facilities
Hardware infrastructure layer
Scale-Out Block Storage
FusionCompute Virtualization Suite
2.1.1.4 Technical Highlights
Unified Virtualization Platform
Supporting various hardware devices
Big Cluster
Automated Resource Scheduling
Comprehensive Rights Management Functions
Comprehensive O&M
Cloud Security
2.1.2 System Architecture
Definition
Benefits
Constraints and Limitations
2.1.5.8 DR and Backup
Definition
Benefits
Dependency
2.1.5.9 Dynamic Resource Scheduling
Definition
Benefits
Dependency
2.1.5.10 VM Resource QoS Control
Definition
Benefits
Dependency
2.1.5.11 User-Mode Switching Mode (x86 Architecture)
Definition
Benefits
Dependency
2.1.5.12 Distributed Virtual Switch
Definition
Benefits
Dependency
2.1.5.13 SR-IOV
Definition
Benefits
Dependency
2.1.5.14 GPU Passthrough
Definition
Benefits
Compatibility
2.1.5.15 GPU Virtualization (Intel)
Definition
Benefits
Compatibility
2.1.5.16 Container Management
Definition
Benefits
Dependency
2.1.6 System Principle
2.1.6.1 Communication Principles
Communication Planes
VLAN Planning Rules
2.1.6.2 Time Synchronization Mechanism
2.1.7 Reliability
2.1.7.1 FusionCompute System Reliability
FusionCompute Management Nodes in Active/Standby Mode
Quorum Server
Host OS Fault Locating Tool of FusionCompute: Black Box
Management Data Backup and Restoration
2.1.7.2 FusionCompute Software Reliability
VM HA
VM Live Migration
VM Load Balancing
Snapshot
VM Isolation
VM OS Fault Detection
Black Box
2.1.7.3 FusionCompute Architecture Reliability
Management Node HA
Management Data Backup and Restoration
Traffic Control
Fault Detection
Data Consistency Check
2.1.8 Technical Specifications
2.1.8.1 Compatibility
2.1.8.1.1 Hardware Compatibility
2.1.8.1.2 Supported OSs
2.2 Security Description
2.2.1 Security Overview
2.2.1.1 Security Threats and Advantages
Security Threats and Challenges
Security Advantages
2.2.1.2 Security Structure
2.2.2 Cloud Platform Security
2.2.2.1 Data Store Security
User Data Isolation
Data Access Control
Residual Information Protection
Data Storage Reliability
2.2.2.2 VM Isolation
Physical and Virtual Resource Isolation
vCPU Scheduling Isolation
Memory Isolation
Internal Network Isolation
Disk I/O Isolation
2.2.2.3 Network Transmission Security
2.2.2.3.1 Network Isolation Security
Network Plane Isolation
2.2.2.3.2 Transmission Security
2.2.3 O&M Management Security
2.2.3.1 Rights Management
2.2.3.2 Account Password Management
Password Encryption and Change Principles
2.2.3.3 Log Management
Log Type
Log Source
Log Storage
Log Display and Query
2.2.4 Host Security
2.2.4.1 Web Security
2.2.4.2 OS Hardening
2.2.4.3 Database Hardening
Database Type
Database Security Configuration
Database Backup
2.2.4.4 Security Patch
3 Installation and Configuration
Process
Procedure
3.2.3.5 Checking Key Data
Scenarios
Process
Procedure
3.2.3.6 Typical Configuration Tasks (Aggregation Layer)
Typical Tasks
Configuring the DHCP Relay for Aggregation Switches
Configuring an Eth-Trunk Interface
Disabling the Strict ARP Learning Function
Configuring a Rate Limit for ARP Packets
3.2.3.7 Typical Configuration Tasks (Core Layer)
Typical Tasks
Configuring Data for Core Switches to Interconnect with Service Gateways
Configuring Static Routes
Configuring OSPF Dynamic Routes
Isolating VMs
3.2.4 Configuring the 2288H V5 Servers or TaiShan 200 Servers (Model: 2280)
3.2.4.1 Overview
Purpose
Process
3.2.4.2 Preparation
Obtaining the License
Preparing Documents and Tools
3.2.4.3 Logging In to a Server Using the BMC
Scenarios
Process
Procedure
3.2.4.4 Checking the Server
Scenarios
Procedure
3.2.4.5 Configuring RAID 1
3.2.4.5.1 (Recommended) Configuring RAID 1 on the BMC WebUI
Scenarios
Procedure
3.2.4.5.2 Logging In to a Server Using the BMC WebUI to Configure RAID 1
Scenarios
Process
Procedure (x86 Architecture)
Procedure (Arm Architecture)
3.2.4.6 Setting the BIOS (x86 Architecture)
Scenarios
Process
Procedure
3.2.5 Common Operations
3.2.5.1 Logging In to Devices Using Serial Port Tools
Scenarios
Prerequisites
Procedure
3.2.5.2 Configuring VLANs That Are Allowed to Pass Through the Port
Scenarios
Prerequisites
Procedure
Problem 7: Automatic Logout After Login Using a Firefox Browser Is Successful but an Error Message Indicating that the
User Has Not Logged In or the Login Times Out Is Displayed When the User Clicks on the Operation Page
3.3.5 Manual Installation
3.3.5.1 Installation Overview
3.3.5.2 Installing FusionCompute
3.3.5.2.1 Installing Hosts Using ISO Images (x86)
Scenarios
Prerequisites
Data
Procedure
3.3.5.2.2 Installing Hosts Using ISO Images (Arm)
Scenarios
Prerequisites
Data
Procedure
3.3.5.2.3 Installing VRM Nodes Using ISO Images (x86)
Scenarios
Prerequisites
Data
Procedure
3.3.5.2.4 Installing VRM Nodes Using ISO Images (Arm)
Scenarios
Prerequisites
Data
Procedure
3.3.6 (Optional) Installing a Quorum Server
3.3.6.1 Solution Overview
Network Planning
Hardware Device Requirements
3.3.6.2 Preparing for Installation
3.3.6.3 Installing the Basic Package of the Quorum Server
3.3.6.3.1 By ISO Image
Prerequisites
Data
Procedure
3.3.6.3.2 By Template
3.3.6.4 Installing the Quorum Server Software
3.3.6.5 Configuring the Quorum Network
Prerequisites
Procedure
3.3.6.6 Configuring the Quorum Server
3.3.7 Checking Before Service Provisioning
3.3.7.1 System Management on SmartKit
Prerequisites
Procedure
3.3.7.2 Site Deployment Quality Inspection
Scenarios
Prerequisites
Procedure
3.3.7.3 Automated Acceptance
Scenarios
Prerequisites
Procedure
3.3.8 Appendix
3.3.8.1 FAQ
3.3.8.1.1 How Do I Handle the Issue that System Installation Fails Because the Disk List Cannot Be Obtained?
Symptom
Possible Causes
Troubleshooting Guideline
Procedure
3.3.8.1.2 How Do I Handle the Issue that VM Creation Fails Due to Time Difference?
Symptom
Procedure
3.3.8.1.3 How Do I Handle the Issue that a Service Port Has Been Occupied on FusionCompute Installer?
Symptom
Possible Causes
Troubleshooting Guideline
Procedure
3.3.8.1.4 How Do I Uninstall the FusionCompute Web Tool?
3.3.8.1.5 What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported During
System Installation?
Symptom
Possible Causes
Procedure
3.3.8.1.6 How Do I Handle Common Problems During Hygon Server Installation?
3.3.8.1.7 How Can I Handle the Issue that a Local Virtualized Datastore Fails to Be Added Due to a GPT Partition During
Tool-based Installation?
Symptom
Procedure
3.3.8.1.8 How Can I Handle the Issue that the Node Fails to Be Remotely Connected During the Host Configuration for
Customized VRM Installation?
Symptom
Solution
3.3.8.1.9 How Do I Handle the Issue that the Host Cannot Be Started Properly and the grub rescue Page Is Displayed During
the Starting Process?
Symptom
Possible Causes
Solution
3.3.8.1.10 How Do I Handle the Issue that the VRM Installation Fails Because Importing the Template Takes a Long Time?
Symptom
Procedure
3.3.8.1.11 What Can I Do If Disk Selection Fails When a Host Is Being Reinstalled After a Fault Is Rectified?
Scenarios
Procedure
3.3.8.1.12 How Do I Uninstall a Mellanox NIC Driver?
Scenarios
Prerequisites
Procedure
3.3.8.1.13 How Do I Log In to the Quorum Server as User root Using SSH?
Prerequisites
Procedure
3.3.8.2 Verifying the Software Package
3.3.8.3 Configuring the BIOS on Hygon Servers
Two Methods for Accessing the BIOS
3.3.8.4 How Do I Install the linux-firmware Firmware Package?
Procedure
3.4 Initial Configurations
3.4.1 Configuration Overview
Prerequisites
Procedure
3.4.3.1.8 How Do I Reconfigure Host Parameters?
Scenarios
Prerequisites
Procedure
3.4.3.1.9 How Do I Replace Huawei-related Information in FusionCompute?
Scenarios
Prerequisites
Procedure
Additional Information
3.4.3.1.10 How Do I Manually Change the System Time on a Node?
Scenarios
Prerequisites
Procedure
3.4.3.1.11 Installing and Configuring VPN (Windows)
Scenarios
Prerequisites
Procedure
3.4.3.1.12 How Do I Handle the Issue that VRM Services Become Abnormal Because the DNS Is Unavailable?
Symptom
Possible Causes
Procedure
3.4.3.1.13 What Can I Do If an Error Message Is Displayed Indicating That the Sales Unit HCore Is Not Supported When I
Import Licenses on FusionCompute?
Symptom
Possible Causes
Fault Diagnosis
Procedure
Related Information
3.4.3.2 Common Operations
3.4.3.2.1 Setting Google Chrome (Applicable to Self-Signed Certificates)
Scenarios
Prerequisites
Procedure
3.4.3.2.2 Setting Mozilla Firefox
Scenarios
Prerequisites
Procedure
3.4.3.2.3 Logging In to FusionCompute
Scenarios
Prerequisites
Procedure
1 Library Information
Change History
Conventions
Library Organization
Feedback
Technical Support
1.2 Conventions
Symbol Conventions
The symbols that may be found in this document are defined as follows:
DANGER: Indicates a hazard with a high level of risk which, if not avoided, could result in death or serious injury.
WARNING: Indicates a hazard with a medium level of risk which, if not avoided, could result in death or serious injury.
CAUTION: Indicates a hazard with a low level of risk which, if not avoided, could result in minor or moderate injury.
NOTICE: Indicates warning information about device or environment security which, if not avoided, could result in equipment damage, data loss, performance deterioration, or unanticipated results. NOTICE is used to address practices not related to personal injury.
General Conventions
Boldface: Heading 1, Heading 2, Heading 3, and Block Label are in Book Antiqua.
Courier New: Examples of information displayed on the screen are in Courier New. The messages input on terminals by users are displayed in boldface.
Command Conventions
{ x | y | ... }: Optional items are grouped in braces and separated by vertical bars. One item is selected.
[ x | y | ... ]: Optional items are grouped in brackets and separated by vertical bars. One item is selected or no item is selected.
{ x | y | ... }*: Optional items are grouped in braces and separated by vertical bars. A minimum of one item or a maximum of all items can be selected.
[ x | y | ... ]*: Optional items are grouped in brackets and separated by vertical bars. Several items or no item can be selected.
GUI Conventions
Boldface: Buttons, menus, parameters, tabs, windows, and dialog titles are in boldface. For example, click OK.
>: Multi-level menus are in boldface and separated by the ">" signs. For example, choose File > Create > Folder.
Italic: The names of the variable nodes in navigation trees and multi-level menus are in italic.
Keyboard Operations
Key: Press the key. For example, press Enter, Tab, Backspace, and a.
Key 1+Key 2: Press the keys concurrently. For example, pressing Ctrl+Alt+A means the three keys should be pressed concurrently.
Key 1, Key 2: Press the keys in turn. For example, pressing Alt, A means the two keys should be pressed in turn.
Mouse Operations
Click: Select and release the primary mouse button without moving the pointer.
Double-click: Press the primary mouse button twice continuously and quickly without moving the pointer.
Drag: Press and hold the primary mouse button and move the pointer to a certain position.
1.3 Library Organization

Procedure documentation, which describes how to operate and maintain the product
Reference documentation, which provides additional information helpful for operating and maintaining the product
1.4 How to Obtain and Update Documentation

Obtaining Documentation
Use the online search function provided by ICS Lite to find the documentation package you want and download it. This
method is recommended because you can directly load the desired documentation package to ICS Lite. For details about how
to download a documentation package, see the online help of ICS Lite.
Visit the Huawei support website to download the desired documentation package.
Apply for the documentation CD-ROM from your local Huawei office.
To use ICS Lite or visit the Huawei technical support website, you need a registered user account. You can apply for a user account at the support website or contact the service manager of your local Huawei office.
Updating Documentation
You can update a documentation package in the following ways:
Enable the documentation upgrade function of ICS Lite to automatically detect the latest version of your local documentation and load it to ICS Lite as required. This method is recommended because ICS Lite detects the latest version of your local documentation packages and prompts you to upgrade. For details about how to enable the documentation upgrade function, see the online help of ICS Lite.
Download the latest documentation packages from the Huawei technical support websites.
To use ICS Lite or visit the Huawei technical support website, you need a registered user account. You can apply for a user account at the support website or contact the service manager of your local Huawei office.
1.6 Feedback
Huawei welcomes your suggestions and comments. Please provide your feedback for us in any of the following ways:
Give your feedback using the information provided on the Contact Us page at the technical support website.
If the issue cannot be solved using the preceding methods, contact our local office or company headquarters.
2 Descriptions
Virtualization Suite Description
Security Description
System Architecture
Deployment Plan
Functions
Key Features
System Principle
Reliability
Technical Specifications
Positioning
Technical Highlights
Leverage the high availability (HA) and powerful restoration capabilities of the virtualized infrastructure to provide rapid fault recovery for services, thereby cutting data center costs and increasing system uptime.
The FusionCompute virtualization suite virtualizes hardware resources using the virtualization software deployed on physical servers, so that one physical server can function as multiple virtual servers. This solution maximizes resource utilization by consolidating existing VM workloads onto fewer servers, thereby releasing more servers to carry new applications and solutions.
Application Scenarios
Single-hypervisor scenarios
The single-hypervisor deployment applies to scenarios in which an enterprise uses only FusionCompute as a unified operation, maintenance, and management platform to operate and maintain the entire system. These scenarios include resource monitoring, resource management, and system management.
FusionCompute virtualizes hardware resources and centrally manages virtual resources, service resources, and user resources. It
virtualizes compute, storage, and network resources using the virtual computing, virtual storage, and virtual network technologies.
FusionCompute centrally schedules and manages virtual resources using a unified interface, thereby reducing the operating
expense (OPEX) and ensuring high system security and reliability.
The FusionCompute virtualization suite allows users to define service level agreement (SLA) policies to control VM resources,
thereby allocating physical resources based on application importance.
Wide Compatibility
The FusionCompute virtualization suite supports x86- or Arm-based servers, various storage devices, and mainstream
Linux/Windows OSs, allowing mainstream applications to run on virtualization platforms.
Sophisticated Metering
The FusionCompute virtualization suite collects information about the resource usage for each user and reports the statistics to
third-party systems to calculate service charges.
Cloud Security
The FusionCompute virtualization suite is compliant with local information security laws and regulations and incorporates various
security measures to provide end-to-end protection for user access, management and maintenance, data, networks, and
virtualization services.
Container Management
The FusionCompute virtualization suite provides tenant project management, cluster lifecycle management, container image and
application management, as well as container monitoring and O&M capabilities. It is an optimal foundation platform for
application modernization.
2.1.1.3 Positioning
FusionCompute is a cloud OS. It virtualizes hardware resources and centrally manages virtual resources, service resources, and
user resources.
Figure 1 shows the FusionCompute position in the FusionCompute virtualization suite.
Cloud Facilities
Cloud facilities refer to the auxiliary facilities and space required by the cloud data center, including the power supply, fire-fighting, wiring, and cooling systems.
Huawei is devoted to continuously enhancing the competitiveness of data center facilities based on the SAFE concept (smartness, availability, flexibility, and efficiency).
FusionCompute
FusionCompute is a cloud OS. It virtualizes hardware resources and centrally manages virtual resources, service resources,
and user resources. It virtualizes compute, storage, and network resources using the virtual computing, virtual storage, and
virtual network technologies. It centrally schedules and manages virtual resources over unified interfaces. FusionCompute
provides high system security and reliability and reduces the OPEX, helping carriers and enterprises build secure, green, and
energy-saving data centers.
eBackup
eBackup is a virtualized backup software product, which works with the FusionCompute snapshot function and the Changed
Block Tracking (CBT) function to back up VM data.
UltraVR
UltraVR is a DR management software product, which provides data protection and DR for the key VM data using the
asynchronous remote replication feature provided by the underlying SAN storage system.
Big Cluster
A cluster supports up to 128 hosts and 8000 VMs.
FusionCompute implements centralized IT resource scheduling, heat management, and power consumption management,
reducing maintenance costs.
FusionCompute dynamically schedules resources based on the load of servers and services, achieving load balancing across
servers and service provisioning systems and optimizing system response and user experience.
Comprehensive O&M
FusionCompute provides various O&M functions to control and manage services, improving O&M efficiency:
Black box
The black box provides logs and program heaps, which help carriers or enterprises rapidly locate and rectify faults.
Web interfaces
FusionCompute provides web interfaces, through which users can manage all hardware resources, virtual resources, and
service provisioning.
Cloud Security
FusionCompute complies with local information security laws and regulations. It adopts various security measures and policies to
provide end-to-end protection for user access, management and maintenance, data, networks, and virtualization.
Module: VRM, the FusionCompute management component
NOTE: A VRM node is a FusionCompute management component and manages resources in host clusters and logical clusters.
Deployment: 1+1 active/standby mode on physical servers or VMs
Host requirements: Network interface card (NIC): 2 x 10 Gbit/s (recommended). For details about other configuration requirements, see Deployment Rules.
If the hosts have been used before, restore the hosts to factory settings before configuring the basic input/output system (BIOS).
CPU requirements:
x86: Intel 64-bit CPUs and Hygon 64-bit CPUs; Arm: Kunpeng 920 processors and Phytium 64-bit CPUs
In the x86 architecture, the CPU must support hardware virtualization technologies, such as Intel VT-x, and CPU virtualization must be enabled in the BIOS.
The models of CPUs in one cluster must be the same; otherwise, VM migration between hosts will fail. Therefore, you are advised to deploy servers of the same model in a cluster.
If the host is connected to scale-out block storage, set the FSA vCPU reservation as instructed in Configuring Service Resource Reservation for the Host Management Domain.
If eBackup is connected, two more vCPUs need to be reserved for the host management domain.
NOTE:
If the CPU virtualization function is disabled on an x86 host, VMs cannot be created on the host.
If the live migration function is used, you are advised to use servers of the same model in a cluster and ensure that the CPU settings in the BIOS are the same. If the CPU settings differ, live migration may fail.
Memory requirements:
> 8 GB
If the host is used to deploy a management VM, the host memory must be greater than or equal to the total of the management VM memory and the memory of the management domain of the host accommodating the management VM.
If you need to configure the user-mode switching specifications for an x86 host, reserve another 5 GB in addition to the original host management domain memory.
If the host is connected to scale-out block storage, set the FSA memory reservation. For details, see Configuring Service Resource Reservation for the Host Management Domain.
Recommended memory size: ≥ 48 GB
When Huawei servers are installed, the memory needs to be configured based on the recommended configurations. Otherwise, the system cannot achieve optimal performance. For details about the recommended configurations, visit https://support.huawei.com/onlinetoolsweb/smca/.
NOTE:
V5 servers (x86 architecture) must use the recommended configurations. Otherwise, the system performance deteriorates noticeably.
Disk requirements:
System disk of the server where the VRM node resides: ≥ 270 GB
System disks of compute nodes: ≥ 150 GB
For compute hosts, it is recommended that two SAS disks form a RAID 1 group to be used as the system disks.
If service VMs use local storage, plan independent local storage for them. It is recommended that local disks form RAID 1 to provide storage space.
Whether to use GE or 10GE networking depends on the estimated network traffic. It is recommended that the network load be less than 60% of the network port bandwidth.
The RAID controller cards on certain servers require that server disks form RAID groups; otherwise, the host OS cannot be installed. For details about the RAID controller card requirements, see the server product documentation.
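The minimum values above lend themselves to a simple pre-installation check. The following Python sketch is illustrative only; the check_host helper and the sample values are hypothetical and not part of any FusionCompute tooling:

```python
# Minimal host-plan check against the minimums listed above (illustrative only).
MIN_MEMORY_GB = 8                # host memory must be > 8 GB
RECOMMENDED_MEMORY_GB = 48       # recommended memory size
MIN_VRM_SYSTEM_DISK_GB = 270     # system disk of the server hosting the VRM node
MIN_COMPUTE_SYSTEM_DISK_GB = 150 # system disks of compute nodes

def check_host(memory_gb, system_disk_gb, hosts_vrm=False,
               mgmt_vm_gb=0.0, mgmt_domain_gb=0.0):
    """Return a list of requirement violations for a planned host."""
    issues = []
    if memory_gb <= MIN_MEMORY_GB:
        issues.append("memory must be > 8 GB")
    if memory_gb < RECOMMENDED_MEMORY_GB:
        issues.append("memory below the recommended 48 GB")
    # A host carrying a management VM needs memory >= management VM memory
    # plus the management domain memory.
    if memory_gb < mgmt_vm_gb + mgmt_domain_gb:
        issues.append("memory below management VM + management domain total")
    min_disk = MIN_VRM_SYSTEM_DISK_GB if hosts_vrm else MIN_COMPUTE_SYSTEM_DISK_GB
    if system_disk_gb < min_disk:
        issues.append(f"system disk below {min_disk} GB")
    return issues

print(check_host(64, 300, hosts_vrm=True, mgmt_vm_gb=16, mgmt_domain_gb=8))  # []
print(check_host(32, 120))  # two issues: memory below 48 GB, disk below 150 GB
```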
2.1.4 Functions
Virtual Computing
Virtual Network
Virtual Storage
Availability
Security
Container Management
Server Virtualization
Server virtualization converts physical server resources into logical resources. With virtualization technologies, a server can be divided into multiple virtual compute resources that are isolated from each other. CPU, memory, disk, and I/O resources become pooled resources that are dynamically managed. Server virtualization increases resource utilization, simplifies system management, and enables server consolidation. In addition, the hardware-assisted virtualization technology increases virtualization efficiency and enhances VM security.
Bare-metal architecture
The FusionCompute hypervisor adopts the bare-metal architecture and can run directly on servers to virtualize hardware
resources. With the bare-metal architecture, FusionCompute delivers VMs with almost server-level performance, reliability,
and scalability.
CPU virtualization
FusionCompute converts physical CPUs into virtual CPUs (vCPUs) for VMs. When multiple vCPUs are running, FusionCompute dynamically allocates CPU capabilities among them.
Memory virtualization
FusionCompute adopts the hardware-assisted virtualization technology to reduce memory virtualization overhead.
GPU passthrough
In FusionCompute, a Graphic Processing Unit (GPU) on a physical server can be directly attached to a specified VM to
improve graphics and video processing capabilities. With this feature enabled, the system can meet user requirements for
high-performance graphics processing capabilities.
USB passthrough
In FusionCompute, a USB device on a physical server can be directly attached to a specified VM. This feature allows users
to use USB devices in virtualization scenarios.
VM Resource Management
VM resource management allows administrators to create VMs using a VM template or in a custom manner, and to manage cluster resources. This feature provides the following functions: automated resource scheduling (including the load balancing mode and dynamic energy-saving mode), VM lifecycle management (including creating, deleting, starting, stopping, restarting, and, in the x86 architecture, hibernating VMs), storage resource management (including managing common disks and shared disks), VM security management (including using custom VLANs), and VM QoS adjustment based on the service load (including setting CPU QoS and memory QoS).
VM template
A user can customize a standard template, which can be used to create VMs.
CPU QoS
The CPU QoS ensures optimal allocation of compute resources for VMs and prevents resource contention between VMs due
to different service requirements. It effectively increases resource utilization and reduces costs.
During creation of VMs, the CPU QoS is specified based on the services to be deployed. The CPU QoS determines VM
computing capabilities. The system ensures the CPU QoS of VMs by ensuring the minimum compute resources and resource
allocation priority.
The CPU QoS is determined by the following aspects:
CPU quota
CPU quota defines the proportion in which CPU resources are allocated to each VM when multiple VMs compete for physical CPU resources.
For example, three VMs (A, B, and C) run on the host that uses a single-core physical CPU with 2.8 GHz frequency,
and their quotas are set to 1000, 2000, and 4000, respectively. When the CPU workloads of the VMs are heavy, the
system allocates CPU resources to the VMs based on the CPU quotas. Therefore, VM A with 1000 CPU quota can
obtain a computing capability of 400 MHz. VM B with 2000 CPU quota can obtain a computing capability of 800
MHz. VM C with 4000 CPU quota can obtain a computing capability of 1600 MHz. (This example explains the
concept of CPU quota and the actual situations are more complex.)
The CPU quota takes effect only when resource contention occurs among VMs. If the CPU resources are sufficient, a
VM can exclusively use physical CPU resources on the host if required. For example, if VMs B and C are idle, VM A
can obtain all of the 2.8 GHz computing capability.
CPU reservation
CPU reservation defines the minimum CPU resources to be allocated to each VM when multiple VMs compete for
physical CPU resources.
If the computing capability calculated based on the CPU quota of a VM is less than the CPU reservation value, the
system allocates the computing capability to the VM according to the CPU reservation value. The offset between the
computing capability calculated based on the CPU quota and the CPU reservation value is deducted from computing
capabilities of other VMs based on their CPU quotas and is added to the VM.
If the computing capability calculated based on the CPU quota of a VM is greater than the CPU reservation value, the
system allocates the computing capability to the VM according to the CPU quota.
For example, three VMs (A, B, and C) run on the host that uses a single-core physical CPU with 2.8 GHz frequency,
their quotas are set to 1000, 2000, and 4000, respectively, and their CPU reservation values are set to 700 MHz, 0 MHz,
and 0 MHz, respectively. When the CPU workloads of the three VMs are heavy:
According to the VM A CPU quota, VM A should have obtained a computing capability of 400 MHz. However,
its CPU reservation value is greater than 400 MHz. Therefore, VM A obtains a computing capability of 700 MHz
according to its CPU reservation value.
The system deducts the offset (700 MHz minus 400 MHz) from VMs B and C based on their CPU quotas.
VM B obtains a computing capability of 700 (800 minus 100) MHz, and VM C obtains a computing capability of
1400 (1600 minus 200) MHz.
The CPU reservation takes effect only when resource contention occurs among VMs. If the CPU resources are
sufficient, a VM can exclusively use physical CPU resources on the host if required. For example, if VMs B and C
are idle, VM A can obtain all of the 2.8 GHz computing capability.
CPU limit
CPU limit defines the upper limit of physical CPU resources that can be used by a VM. For example, if the CPU limit of a VM with two vCPUs is set to 3 GHz, the computing capability of each vCPU is limited to 1.5 GHz.
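The quota, reservation, and limit rules above are simple arithmetic. As a quick check, the following Python sketch (illustrative only, not FusionCompute code) reproduces the numbers from the examples above, assuming full contention on a single-core 2.8 GHz CPU:

```python
# Illustrative sketch of the CPU QoS arithmetic described above (not FusionCompute code).
def allocate(total_mhz, quotas, reservations):
    """Per-VM MHz under full contention: quota shares first, then reservation
    top-ups deducted from the remaining VMs in proportion to their quotas."""
    total_quota = sum(quotas)
    shares = [total_mhz * q / total_quota for q in quotas]

    # Total amount that reserved VMs must be topped up by.
    deficit = sum(max(r - s, 0) for s, r in zip(shares, reservations))
    donors = [i for i, (s, r) in enumerate(zip(shares, reservations)) if s >= r]
    donor_quota = sum(quotas[i] for i in donors)

    result = []
    for i, (s, r) in enumerate(zip(shares, reservations)):
        if s < r:
            result.append(r)  # the reservation overrides the quota share
        else:
            result.append(s - deficit * quotas[i] / donor_quota)  # donates by quota
    return result

# Quotas 1000/2000/4000 on a 2.8 GHz CPU yield 400, 800, and 1600 MHz.
print(allocate(2800, [1000, 2000, 4000], [0, 0, 0]))
# With VM A reserving 700 MHz, VMs B and C give up 100 and 200 MHz: 700, 700, 1400 MHz.
print(allocate(2800, [1000, 2000, 4000], [700, 0, 0]))
```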
VM statistics
The system collects information about the resource usages for user VMs and disks.
FusionCompute supports dynamic adjustment of VM resources. Users can dynamically adjust resource usage based on the
changing workloads. VM resource adjustment includes:
Adjusting the number of vCPUs for VMs in the running or stopped state
You can increase or decrease the number of vCPUs for a VM that is stopped or running. When a VM is offline, you can
reduce the number of vCPUs for the VM as required. This allows compute resources to be adjusted in a timely manner.
Adjusting the memory size for VMs in the running or stopped state
You can expand or shrink the memory capacity for an online or offline VM. When a VM is offline, you can reduce the memory
capacity for the VM as required. This allows memory resources to be adjusted in a timely manner.
When a VM uses virtual storage and is in the running or stopped state, users can expand the VM storage capacity by enlarging the capacity of
existing disks on the VM.
VM Live Migration
FusionCompute allows VMs to migrate among the hosts that share the same storage. During the migration, services are not
interrupted. This reduces the service interruption time caused by server maintenance and saves power consumption for data
centers.
Virtual NIC
Each virtual NIC (vNIC) has an IP address and a MAC address. It has the same functions as a physical NIC on a network.
FusionCompute implements multiple queues, virtual switching, QoS, and uplink aggregation to improve the I/O performance of
virtual NICs.
Bandwidth control based on the sending direction and receiving direction of a port group member port
Traffic shaping and bandwidth priority control for each member port in a port group
DVS
Each host connects to a distributed virtual switch (DVS), which functions as a physical switch. In the downstream direction, the
DVS connects to VMs through virtual ports. In the upstream direction, the DVS connects to physical Ethernet adapters on hosts
where VMs reside. The DVS implements network communication between hosts and VMs and serves as a single virtual switch to which all associated hosts connect. The DVS also ensures that the network configuration of VMs remains unchanged when the VMs are migrated across hosts.
Logical unit numbers (LUNs) on SAN storage, including Internet Small Computer Systems Interface (iSCSI) and Fibre Channel (FC) SAN storage
The VIMS is a high-performance file system designed for storing VM files. The VIMS data can be stored on a local or shared storage device based on Small Computer System Interface (SCSI), such as Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI SAN devices.
Ext4
FusionCompute supports virtualization of local disks on servers.
Capacity monitoring
This function monitors datastore usage and generates an alarm if the usage exceeds the preset threshold.
VM Snapshot
Users can save the static data of a VM at a specific moment as a snapshot. The snapshot can be used to restore the VM to the state
when the snapshot was taken. A VM snapshot captures the entire status of the VM, including information about all VM disks. This
function applies to data backup and DR systems, for example, eBackup, to improve system security and availability.
2.1.4.4 Availability
VM Live Migration
In FusionCompute, this feature enables VMs to be migrated from one host to any host across compute clusters. During the
migration, services are not interrupted. If the migration fails, the VM on the destination server will be destroyed. The user can still
use the VM on the source server. This reduces the service interruption time caused by server maintenance and saves power
consumption for data centers.
VM Fault-based Migration
If a VM becomes faulty, FusionCompute automatically restarts the VM. When configuring a cluster, you can determine whether to enable the HA function. The system periodically checks the VM status. When detecting that the physical server on which a VM runs is faulty, the system restarts the VM on the original physical server or another physical server based on the host fault handling policies so that the VM can be restored in a timely manner. Because the restarted VM is re-created and reloads its OS in the same way a physical server does, any data unsaved when the fault occurred is lost.
The system can detect errors in the hardware and system software that cause VM failures.
2.1.4.5 Security
The port group to which a VM NIC belongs can be dynamically modified, enabling dynamic modification of VLAN IDs.
In addition to dynamically changing the NIC VLAN ID, you can change the NIC VLAN by binding a new VLAN to the NIC without adding a NIC.
Container Functions
Container management allows you to manage container functions and applications in a unified manner.
Content Library
This function allows you to view the name and capacity usage of the content library, and edit the content library and quota.
You can upload VM images and software packages, and view the names of VM images and software packages, the OS type,
and the CPU architecture.
Project Management
This function allows you to create container projects and manage users, clusters, and image namespaces.
Container Image
This function allows you to configure image namespaces, image repositories, and image versions.
Task Center
This function allows you to view details about container tasks.
Container Policy
This function allows you to set silence rules for container alarms, configure automatic recycling of container image garbage
based on time policies, and restrict network speeds for uploading and downloading files in the content library and image
repository.
VM HA
Thin Provisioning
Virtualization Antivirus
DR and Backup
SR-IOV
GPU Passthrough
Container Management
Context
The CPUs supported by FusionCompute are provided by Intel, Hygon, AMD, HiSilicon, and Phytium. Intel, AMD, and Hygon CPUs use the x86 architecture, and HiSilicon and Phytium CPUs use the Arm architecture. For details about the features and functions supported by servers of different CPU vendors on FusionCompute, visit https://support.huawei.com/enterprise, search for the feature list, and download it.
Definition
The live migration feature allows users to migrate VMs in a cluster from one physical server to another without interrupting
services. The VM manager provides quick recovery of memory data and memory sharing technologies to ensure that the VM data
before and after the live migration remains unchanged. The VM live migration applies to the following scenarios:
Before performing O&M operations on a physical server, system maintenance engineers need to migrate VMs from this
physical server to another physical server. This minimizes the risk of service interruption during the O&M process.
Before upgrading a physical server, system maintenance engineers need to migrate VMs from this physical server to other
physical servers. This minimizes the risk of service interruption during the upgrade process. After the upgrade is complete,
system maintenance engineers can migrate the VMs back to the original physical server.
System maintenance engineers need to migrate VMs from a light-loaded server to other servers and then power off the
server. This helps reduce service operation costs.
Manual migration (by destination): On the FusionCompute web client, system maintenance engineers manually migrate one VM to another server.
Automated migration (VM resource scheduling): The system automatically migrates VMs to other servers in the cluster based on the preset VM scheduling policies.
Benefits
Customers: This feature applies to planned server maintenance that requires no interruption of user services.
Dependency
None
Definition
Memory overcommitment allows VMs to use more memory space than the physical host has. Commonly used technologies include memory ballooning, memory sharing, and memory swapping. This feature allows a server to support more VMs, since it offers more memory resources than the physical server has.
This feature increases memory utilization, reduces the investment in storage devices, and prolongs the memory service time for servers.
FusionCompute supports the following memory overcommitment technologies:
Memory ballooning: The system automatically reclaims the unused memory from a VM and allocates it to other VMs to use.
Applications on the VMs are not aware of memory reclamation and allocation. The total amount of the memory used by all
VMs on a physical server cannot exceed the physical memory of the server.
Memory swapping: External storage is virtualized into memory for VMs to use. Data that is not used temporarily is stored in external storage. If the data needs to be used, it is exchanged with data kept in memory.
Memory sharing: Multiple VMs share the memory page on which the data content is the same.
The memory overcommitment degree is inversely proportional to the actual VM memory usage. Therefore, you must specify the
QoS of the memory overcommitment for compute nodes.
After memory overcommitment is enabled, the memory overcommitment policy is used to allocate memory. VMs can use all
physical memory when the memory is sufficient. If the memory is insufficient, the system schedules memory resources based on
the memory overcommitment policies using memory overcommitment technologies to release free memory.
Benefits
Dependency
The total memory size reserved for all VMs running on each compute node cannot exceed the total memory size of virtualization
domains on the compute node.
Constraints
Sufficient space for memory swapping must be configured for hosts to ensure stable running of the memory overcommitment function. The maximum memory overcommitment ratio depends on the swap partition size and is calculated as follows (see the worked example after this list):
Maximum memory overcommitment ratio supported by a host = 1 + (Swap partition size of the host – 0.1 x Physical memory size of the virtualization domain)/Physical memory size of the virtualization domain
The maximum memory overcommitment ratio supported by the host can be viewed on the Summary page of the host after the host
memory overcommitment function is enabled for the cluster.
For details about how to query the memory overcommitment ratio of the host, see Configuring the Host List Display Options .
The memory swap partition and the host OS are configured on the same disk by default (default size: 30 GB). A maximum of
150% overcommitment ratio is supported. If you manually configure the disk size, a minimum of 30 GB is required. For
details about how to expand the memory swap partition, see Adding a Memory Swap Partition to a Host .
The default system memory swap partition does not have its own datastore. The default memory swap partition is /dev/xxx.
Memory overcommitment is mutually exclusive with SR-IOV passthrough, GPU passthrough, and NVMe SSD passthrough. Passthrough VMs must exclusively occupy their memory, and memory exclusively used by VMs cannot be swapped to the memory swapping space. Memory overcommitment-enabled VMs with memory reservation less than 100% cannot be bound to physical devices.
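As a quick check of the overcommitment-ratio formula above, the following Python sketch (illustrative only, not FusionCompute code; the 50 GB and 100 GB memory sizes are hypothetical) computes the maximum ratio for hosts that use the default 30 GB swap partition:

```python
# Worked example of the memory overcommitment ratio formula (illustrative only).
def max_overcommit_ratio(swap_gb: float, virt_mem_gb: float) -> float:
    """Ratio = 1 + (swap - 0.1 * physical memory) / physical memory."""
    return 1 + (swap_gb - 0.1 * virt_mem_gb) / virt_mem_gb

# With the default 30 GB swap partition, a virtualization domain with
# 50 GB of physical memory reaches exactly the 150% maximum:
print(max_overcommit_ratio(30, 50))   # 1.5
# A host with more memory needs a larger swap partition for the same ratio:
print(max_overcommit_ratio(30, 100))  # 1.2
```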
Precautions
When the memory usage of a host exceeds 70%, the VM services on the host are memory-intensive. In this case, you are not advised to enable memory overcommitment. If memory overcommitment is enabled, memory will probably be insufficient and the memory swap policy will be used to release free memory. As a result, the performance of VMs without full memory reservation deteriorates.
After memory overcommitment is enabled, if the host memory is insufficient, the memory usage of the VM with low
memory resource reservation will decrease. By default, host memory overcommitment is disabled. To enable the function,
see Enabling Memory Overcommitment for a Cluster .
After memory overcommitment is enabled, adjust the threshold of the major alarm ALM-15.1000033 Host Memory Usage Exceeds the Threshold to 80%. If the alarm is generated and the alarm object is a host in a cluster with memory overcommitment enabled, migrate VMs on the host or stop some VMs on the host based on the alarm handling suggestion to prevent large-scale memory swapping.
To balance I/O pressure of memory swapping on local disks and maximize memory overcommitment, you are advised to
configure the memory swap partitions on different local virtual datastores of the same host and deploy the host OS and
memory swap partitions on different disks. In addition, you are advised to use high-performance local SSDs as swap
partitions. If no free disk is used for memory swapping, check the alarm provided in ALM-15.1000033 Host Memory Usage
Exceeds the Threshold every day (recommended). If the alarm object is a host in a cluster with memory overcommitment
enabled, handle the alarm in a timely manner.
The recommended memory overcommitment ratio is less than 150%. A major alarm is generated if the memory overcommitment ratio exceeds 120%. A critical alarm is generated if the actually used memory of a VM is greater than 90% of the total memory of the virtualization domain and swap partition.
If live migration is performed while memory overcommitment is enabled and some VM memory has been swapped to the memory swap disk, the migration takes a long time.
2.1.5.4 VM HA
This feature allows VMs on a server to automatically start on another properly-running server within a few minutes if their
original server becomes faulty. The services running on the VMs can also automatically recover on the new server after the VMs
are migrated to it.
Definition
The VM HA feature ensures quick recovery of VMs. With this feature, when a VM becomes faulty, the system automatically selects a properly-running compute node and re-creates the VM on it.
Benefits
Dependency
Ensure that sufficient resources are available in the cluster for VM HA.
Definition
This function enables disks on running VMs to be manually migrated to other storage units. The disks can be migrated between different storage devices, or between storage units on one storage device, under virtual storage management. With this function enabled, storage resources of VMs can be dynamically migrated, facilitating device maintenance.
Benefits
Customers: This feature applies to planned storage resource maintenance or migration that requires no interruption of user services.
Non-persistent disks on a VM in the Running state cannot be migrated. If permitted, stop the VM before migrating such disks.
Definition
Thin Provisioning provides users with larger virtual storage space than the actual physical storage space. The system allocates physical storage space only when data is written into the virtual storage.
Thin Provisioning of the FusionCompute virtualization suite does not depend on the storage device.
Thin Provisioning applies to the user data volumes of VMs. When the declared storage capacity is far beyond user requirements,
Thin Provisioning can be used to reduce initial investments for carriers.
Benefits
Customers: Thin Provisioning increases storage resource utilization and helps customers reduce the initial investment on storage.
Dependency
None.
Definition
To protect VMs on hosts against virus attacks, the antivirus function is required. However, if traditional antivirus products are
used, they must be installed on each VM. In this case, the products occupy VM resources and may even cause an antivirus
storm when users perform a global scan or antivirus update. To address this problem, FusionCompute provides dedicated antivirus
APIs, which support secondary development by antivirus product vendors, to offer a VM antivirus solution. In this solution, the
antivirus engine is installed on a dedicated secure VM on a host, and a lightweight antivirus driver is installed for the other VMs
on the host. All the other user VMs can scan for and remove viruses using the services provided by the secure VM, consuming
only a few VM resources.
Benefits
Customers: This feature implements central management of antivirus services. Customers do not need to install and update antivirus databases on each VM. This feature also protects VMs against antivirus storms.
For details about the OSs supported by guest virtual machines (GVMs), visit FusionCompute Compatibility Query.
The FusionCompute antivirus virtualization function has the following restrictions on secure service VMs (SVMs):
SVMs cannot be stopped or hibernated, because they must provide real-time services.
An SVM cannot be automatically migrated from a host due to cluster dynamic resource scheduling or a host failure, because it is bound to the host.
The FusionCompute antivirus virtualization function has the following restrictions on guest VMs (GVMs):
A GVM must be migrated to a host that has an SVM deployed due to dynamic resource scheduling or a host failure, because GVMs use the antivirus functions provided by an SVM.
Therefore, each host in a cluster must have the antivirus virtualization function enabled and an SVM deployed.
Definition
DR is the ability to provide continuous services after unexpected events, such as fires or earthquakes. This is achieved by setting up two or more sets of IT systems that are geographically far apart. These IT systems provide the same functions and monitor each other's health status. When one system stops, another system takes over the services from the faulty system.
Backup is the process of copying data to a dump device. A dump device is a tape or disk used to store data copies. When a system
is faulty or data loss occurs, the backup data can be used to restore the system or data.
For the FusionCompute virtualization suite, Huawei provides metropolitan active-active DR, array-based replication DR, geo-
redundant 3DC DR, scale-out block storage replication DR, scale-out block storage HA DR, and VM backup solutions. Customers
can choose solutions based on service requirements.
The metropolitan active-active DR solution allows two sites far from each other to use the HyperMetro feature of Huawei
OceanStor V3, OceanStor V5, or OceanStor Dorado series storage and the HA and DRS functions of FusionCompute to
implement DR. The two sites can both function as production sites that provide services and serve as redundancy sites for
each other.
The array-based replication DR solution is implemented by creating two sites, a production site and a DR site, at two places,
using the remote replication function of storage devices to copy the VM data from the production site to the DR site, and
using the DR management software UltraVR to register the VMs on the DR storage at the DR site with a hypervisor and
automatically start the VMs.
The geo-redundant 3DC DR solution is typically used in the networking with an intra-city DR center and a remote DR
center, thereby providing multi-level protection for data and services in the production center. If a disaster occurs in the
production center, services can be quickly switched to the intra-city DR center. When a disaster occurs in both the production
center and the intra-city DR center, services can be quickly started in the remote DR center to ensure service continuity.
The scale-out block storage replication DR solution is implemented by creating two sites, a production site and a DR site, at
two places, using the remote replication function of storage devices to copy the VM data from the production site to the DR
site, and using the DR management software UltraVR to register the VMs on the DR storage at the DR site with a hypervisor
and automatically start the VMs.
The scale-out block storage HA DR solution is implemented by creating two sites, a production site and a DR site, at two
places, configuring HA VMs using UltraVR, and using scale-out block storage HyperMetro to write the VM data into the
storage devices at the two sites. If a fault occurs, VM HA can be implemented on different storage devices to ensure rapid
service recovery and cross-storage migration.
In the replication DR solution for eVol storage, a production site and a DR site are established in two places. The two sites
use the remote replication function of storage devices to copy VM data from the production site to the DR site, and use the
DR management software UltraVR to register VMs on the DR storage at the DR site to a virtualization platform and
automatically start the VMs.
In the HA solution for eVol storage, two sites are created in two places. The two sites use the DR management software
UltraVR to configure HA VMs, and use HyperMetro of Huawei centralized eVol storage devices to write VM data to the
storage devices deployed at the two sites. If a fault occurs, VM HA can be implemented on storage devices that have
HyperMetro configured to ensure rapid service recovery. In the HA solution for centralized eVol storage, the RPO is 0 and
the RTO is within minutes. The specific RPO and RTO vary with service conditions.
The VM backup solution uses Huawei eBackup combined with the FusionCompute snapshot backup function and the CBT
backup function to back up VM data. Working together with FusionCompute, eBackup can back up a specific object based
on the configured policy. When a VM is faulty or its data is lost, the VM can be restored using the backup data. The backup
data is stored on the shared storage devices connected to eBackup. The snapshot-based backup and CBT-based backup
support full backup and incremental backup.
Benefits
For customers:
This feature shortens the system downtime after a disaster occurs.
This feature allows important data to be rapidly restored, minimizing the impact of data loss.
This feature enhances service reliability.
Dependency
For details, see FusionCompute 8.8.0 DR and Backup.
Definition
DRS uses intelligent scheduling algorithms to periodically monitor the workload on hosts in a cluster and migrates VMs between
the hosts based on the workload to achieve load balancing. This feature works with dynamic power management (DPM)
to increase resource utilization and reduce power consumption.
When the system is lightly loaded, the system consolidates VMs onto fewer physical hosts and powers off the idle
hosts.
When the system is heavily loaded, the system powers on additional physical hosts and evenly allocates VMs across the
hosts to ensure resource supply.
Scheduled tasks can be set to enable different resource scheduling policies at different times based on the system running
status to meet user requirements in different scenarios.
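The scheduling decision can be pictured as a greedy balancing pass: compute each host's load and, when the spread between the busiest and idlest host exceeds a threshold, move a VM from the former to the latter. The following Python sketch is illustrative only; the host names, load values, and threshold are assumptions, not FusionCompute internals:

```python
# Illustrative greedy balancing pass, not the actual FusionCompute DRS algorithm.
def balance_step(hosts, threshold=0.2):
    """hosts: dict mapping host name -> list of (vm_name, cpu_load_fraction)."""
    load = {h: sum(l for _, l in vms) for h, vms in hosts.items()}
    busiest = max(load, key=load.get)
    idlest = min(load, key=load.get)
    if load[busiest] - load[idlest] <= threshold:
        return None                     # cluster is considered balanced
    vm = min(hosts[busiest], key=lambda v: v[1])   # move the smallest VM first
    hosts[busiest].remove(vm)
    hosts[idlest].append(vm)
    return (vm[0], busiest, idlest)     # one migration decision per pass

hosts = {"cna01": [("vm-a", 0.5), ("vm-b", 0.4)], "cna02": [("vm-c", 0.1)]}
print(balance_step(hosts))              # ('vm-b', 'cna01', 'cna02')
```

A real scheduler also weighs memory load, migration cost, and scheduling baselines, and cooperates with DPM to power hosts on and off; the sketch only shows the balancing idea.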
Benefits
For customers: This feature optimizes resource allocation in different scenarios, reduces power consumption, and improves resource utilization.
Dependency
Dynamic resource scheduling does not take effect on the following types of VMs:
Definition
CPU QoS
The CPU QoS ensures optimal allocation of compute resources for VMs and prevents resource contention between VMs
with different service requirements. It effectively increases resource utilization and reduces costs.
During creation of VMs, the CPU QoS is specified based on the services to be deployed. The CPU QoS determines VM
computing capabilities. The system ensures the CPU QoS of VMs by ensuring the minimum compute resources and resource
allocation priority.
The CPU QoS is determined by the following aspects (a combined arithmetic sketch follows this list):
CPU quota
CPU quota defines the proportion in which CPU resources are allocated to each VM when multiple VMs compete for
physical CPU resources.
For example, three VMs (A, B, and C) run on the host that uses a single-core physical CPU with 2.8 GHz frequency,
and their quotas are set to 1000, 2000, and 4000, respectively. When the CPU workloads of the VMs are heavy, the
system allocates CPU resources to the VMs based on the CPU quotas. Therefore, VM A with 1000 CPU quota can
obtain a computing capability of 400 MHz. VM B with 2000 CPU quota can obtain a computing capability of 800
MHz. VM C with 4000 CPU quota can obtain a computing capability of 1600 MHz. (This example explains the
concept of CPU quota and the actual situations are more complex.)
The CPU quota takes effect only when resource contention occurs among VMs. If the CPU resources are sufficient, a
VM can exclusively use physical CPU resources on the host if required. For example, if VMs B and C are idle, VM A
can obtain all of the 2.8 GHz computing capability.
CPU reservation
CPU reservation defines the minimum CPU resources to be allocated to each VM when multiple VMs compete for
physical CPU resources.
If the computing capability calculated based on the CPU quota of a VM is less than its CPU reservation value, the
system allocates the computing capability to the VM according to the CPU reservation value. The difference between the
CPU reservation value and the quota-based computing capability is deducted from the computing
capabilities of other VMs in proportion to their CPU quotas and added to this VM.
If the computing capability calculated based on the CPU quota of a VM is greater than the CPU reservation value, the
system allocates the computing capability to the VM according to the CPU quota.
For example, three VMs (A, B, and C) run on the host that uses a single-core physical CPU with 2.8 GHz frequency,
their quotas are set to 1000, 2000, and 4000, respectively, and their CPU reservation values are set to 700 MHz, 0 MHz,
and 0 MHz, respectively. When the CPU workloads of the three VMs are heavy:
According to VM A's CPU quota, VM A should obtain a computing capability of 400 MHz. However,
its CPU reservation value is greater than 400 MHz. Therefore, VM A obtains a computing capability of 700 MHz
according to its CPU reservation value.
The system deducts the offset (700 MHz minus 400 MHz) from VMs B and C based on their CPU quotas.
VM B obtains a computing capability of 700 (800 minus 100) MHz, and VM C obtains a computing capability of
1400 (1600 minus 200) MHz.
The CPU reservation takes effect only when resource contention occurs among VMs. If the CPU resources are
sufficient, a VM can exclusively use physical CPU resources on the host if required. For example, if VMs B and C are
idle, VM A can obtain all of the 2.8 GHz computing capability.
CPU limit
CPU limit defines the upper limit of physical CPUs that can be used by a VM. For example, if a VM with two virtual
CPUs has a CPU limit of 3 GHz, each virtual CPU of the VM can obtain a maximum of 1.5 GHz compute resources.
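The three settings can be combined in one allocation rule: split the physical capability in proportion to the quotas, top up any VM whose share falls below its reservation by deducting the shortfall from the other VMs in proportion to their quotas, and finally cap each VM at its limit. A minimal sketch reproducing the figures above, assuming a single redistribution round and no cascading reservations:

```python
def allocate(cpu_mhz, quotas, reservations, limits):
    """Toy CPU QoS allocation under contention: quota -> reservation -> limit."""
    total = sum(quotas.values())
    alloc = {vm: cpu_mhz * q / total for vm, q in quotas.items()}
    # Top up VMs below their reservation; deduct the offset from the other VMs
    # in proportion to their quotas.
    short = {vm: reservations[vm] - a for vm, a in alloc.items()
             if reservations[vm] > a}
    donors = [vm for vm in alloc if vm not in short]
    donor_quota = sum(quotas[d] for d in donors)
    for vm, gap in short.items():
        alloc[vm] = reservations[vm]
        for d in donors:
            alloc[d] -= gap * quotas[d] / donor_quota
    # Apply the upper limits last.
    return {vm: min(a, limits.get(vm, float("inf"))) for vm, a in alloc.items()}

print(allocate(2800, {"A": 1000, "B": 2000, "C": 4000},
               {"A": 700, "B": 0, "C": 0}, {}))
# {'A': 700.0, 'B': 700.0, 'C': 1400.0}  -- matches the reservation example
```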
Network QoS
The network QoS policy controls the bandwidth configuration. The QoS function does not support traffic control among
VMs on the same host.
Bandwidth control based on the sending direction and receiving direction of a port group member port
Traffic shaping and bandwidth priority are configured for each member port in a port group to ensure network QoS.
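Per-port traffic shaping of this kind is commonly built on a token bucket: tokens accrue at the configured average bandwidth, bursts are bounded by the bucket size, and a frame is sent only when enough tokens are available. A generic sketch, not FusionCompute code; the rate and burst values are arbitrary examples:

```python
import time

class TokenBucket:
    """Generic token-bucket shaper: rate in bytes/s, burst in bytes."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.stamp = burst, time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True          # frame may be sent now
        return False             # frame must wait or be queued

shaper = TokenBucket(rate=125_000_000, burst=1_500_000)  # ~1 Gbit/s average
print(shaper.allow(1500))        # True: a 1500-byte frame fits in the burst
```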
Benefits
For customers: This feature allows IT administrators to set an upper limit on the resources available to a VM, preventing non-critical applications or malicious users from preempting shared resources.
Dependency
None
Definition
Data Plane Development Kit (DPDK) is a set of libraries and drivers used for fast packet processing on the x86 platform.
It uses multiple technologies, including kernel protocol stack bypass at the abstraction layer, interrupt-free packet sending and
receiving in polling mode, optimized memory, buffer, and queue management, and load balancing among multiple NIC queues
and data flows, to achieve high-performance packet forwarding on x86 processors and improve VM network
performance.
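The performance gain comes mainly from replacing per-packet interrupts with a dedicated polling core that drains the NIC queue in bursts. The following is a conceptual Python illustration of that loop, not the actual DPDK C API:

```python
import collections

rx_queue = collections.deque(range(100))   # stands in for a NIC hardware RX ring

def forward(burst):
    pass                                   # placeholder for packet processing

def poll_loop(max_burst=32):
    """Conceptual poll-mode receive: spin on the ring and drain it in bursts."""
    while rx_queue:                        # a real poll-mode driver spins forever
        n = min(max_burst, len(rx_queue))
        burst = [rx_queue.popleft() for _ in range(n)]
        forward(burst)                     # burst handling amortizes per-packet cost
        # No interrupts and no kernel context switches: one busy core buys
        # low, predictable per-packet latency.

poll_loop()
```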
Benefits
For Benefit
Dependency
For details about the NIC models supported by the user-mode switching mode, see FusionCompute Compatibility.
If Mellanox MT27712A0 NICs are used as user-mode NICs, you need to install the NIC drivers first. For details, see Installing the
Mellanox NIC Driver .
If the user-mode switching mode is used, you need to configure hugepage memory and user-mode switching specifications
for the host. For details, see Configuring the Hugepage Memory for a Host and Configuring User-mode Switch for a Host .
Definition
DVS management allows the system administrator to configure and maintain physical and virtual ports of DVSs on one or more CNA
servers.
Figure 1 shows the DVS model.
Figure legend: firewall; uplink port (cascading port); vNIC (virtual network interface card)
1. Multiple DVSs can be configured, and each DVS can serve multiple CNA nodes in a cluster.
2. A DVS provides several VSPs with their own attributes, such as the rate, statistics, and ACL. The ports with the same
attributes are assigned to a port group for management. The port groups with the same attributes are allocated to the same
VLAN.
3. An uplink port group can be configured for each DVS to enable external communication of VMs served by the DVS. An
uplink port group comprises multiple physical NICs working in load balancing mode.
4. Each VM provides multiple vNICs, which connect to the VSPs of the switch in one-to-one mapping.
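The model can be summarized with a few data structures: a DVS holds port groups and an uplink group, a port group holds VSPs that share attributes and a VLAN, and each VM vNIC maps to exactly one VSP. The class and field names below are assumptions chosen for illustration, not FusionCompute objects:

```python
from dataclasses import dataclass, field

@dataclass
class PortGroup:
    """VSPs with the same attributes (rate, statistics, ACL) share a port group."""
    name: str
    vlan: int
    vsps: list = field(default_factory=list)

@dataclass
class DVS:
    name: str
    uplink_nics: list                        # physical NICs bonded for external traffic
    port_groups: list = field(default_factory=list)

dvs = DVS("dvs-example", uplink_nics=["eth4", "eth5"])
pg = PortGroup("pg-vlan100", vlan=100)
dvs.port_groups.append(pg)
pg.vsps.append("vsp-0001")                   # one VM vNIC maps to exactly one VSP
```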
Benefits
For customers:
The cloud computing management system, which integrates DVS management, implements centralized management of the virtual networks of all CNA nodes, significantly reducing the management workload and minimizing misoperations.
Visualized network management provides customers with a clear view of traffic information, helping customers easily maintain the network.
Dependency
For details, see Distributed Virtual Switch Management .
2.1.5.13 SR-IOV
The single-root I/O virtualization (SR-IOV) feature enables FusionCompute to take full advantage of the forwarding
performance acceleration provided by physical NICs.
Definition
SR-IOV enables a Peripheral Component Interconnect Express (PCIe) device to be shared in a virtual environment using multiple
virtual interfaces, offering different virtual functions. SR-IOV directly allocates I/O data to VMs, which allows the I/O data to
bypass the software emulation layer, thereby reducing the I/O overhead at the software emulation layer.
Benefits
For customers: This feature uses hardware device functions on the network, reducing the I/O overhead at the software emulation layer.
Dependency
By default, the V6 server does not support SR-IOV. To use this function, set PCIe SR-IOV to Enabled in the BIOS.
For details about the NIC models supported by the SR-IOV passthrough mode, see FusionCompute Compatibility Query.
For a port group created on a DVS in SR-IOV mode, Port Type can only be set to Access, and only Send Bandwidth
Limiting can be set in Advanced Settings.
Aggregation network ports cannot be created for physical NICs in SR-IOV passthrough mode.
If a VM uses the SR-IOV-enabled NICs, the following functions become unavailable to the VM: VM hibernation, VM
waking up, VM live migration (changing host or changing host and datastore), memory snapshot, memory hot add, NIC
online adding and deletion, modifying port groups online, IP-MAC address binding, and security groups.
For details about the supported VM OSs, see FusionCompute SIA Guest OS Compatibility Guide (x86).
For details about how to query the FusionCompute SIA version, see How Do I Query the FusionCompute SIA Version? .
To improve GPU passthrough performance, you are advised to enable Auto Adjustment of NUMA Topology for VMs.
Definition
A GPU on a physical server can be directly attached to a specified VM to improve graphics and video processing capabilities.
With this feature enabled, the system can meet user requirements for high-performance graphics processing capabilities.
Benefits
For customers: This feature allows users to use the graphics processing capabilities of powerful GPUs.
In the computing scenario, you need to download the Tesla driver from the NVIDIA official website. Tesla series
drivers can be used only in computing scenarios.
In the graphics scenario, contact the customer service personnel of NVIDIA to obtain the GRID series drivers and the
corresponding license (the license needs to be paid separately). After applying for a license, perform the following operations
during installation:
1. Log in to the NVIDIA official website and obtain the license installation package and activation key.
For HA deployment, go to 4.
3. Obtain GRID License Server User Guide from the NVIDIA official website. For details about the installation
process, see section "Installing the NVIDIA vGPU Software License Server" in GRID License Server User Guide.
The section "Installing the NVIDIA vGPU Software License Server" in GRID License Server User Guide describes
the installation preparations and process of Windows and Linux servers. For details about subsequent configuration
operations, see section "Managing licenses on the NVIDIA vGPU Software License Server".
4. If you have high availability requirements on the license server, you can deploy the license server in active/standby
mode. You need to obtain Virtual GPU Software License Server Documentation from the NVIDIA official website.
For details about the installation process, see section "NVIDIA vGPU Software License Server High Availability" in
Virtual GPU Software License Server Documentation.
If exceptions occur on NVIDIA products, contact NVIDIA technical support by sending emails to
[email protected].
Compatibility
For details about the device models that support GPU passthrough and their GPU driver versions, see FusionCompute
Compatibility Query.
For details about the device models that support GPU passthrough and the corresponding guest OSs, see the SIA
compatibility guide for the desired version. You can upgrade the SIA version to increase the compatibility.
When a GPU is set to the passthrough mode, refer to the SIA compatibility guide for the desired version to know about the
supported OSs.
For details about how to query the FusionCompute SIA version, see How Do I Query the FusionCompute SIA Version? .
For the SIA compatibility guide, select FusionCompute SIA Guest OS Compatibility Guide (x86) or FusionCompute SIA
Guest OS Compatibility Guide (Arm) based on the scenario.
VM HA: Only automatically distributed VMs with a GPU bound support HA. Other VMs with GPUs bound do
not support HA.
VM live migration: A VM with a GPU bound does not support live migration.
VM snapshot: A VM with a GPU bound does not support memory snapshot creation.
Cluster scheduling policy: This policy does not take effect for a VM with a GPU bound.
VNC login: You can log in to Windows VMs using the following methods rather than FusionCompute VNC login:
Remote desktop
Definition
Based on hardware virtualization technology, GPUs on an x86 physical server can be virtualized into multiple vGPUs for use by
multiple VMs. The vGPUs provide the VMs with 2D graphics processing and 3D graphics rendering acceleration, featuring high
performance and low cost. In this way, GPUs are shared and user costs are lowered.
For NVIDIA RTX A5000 and NVIDIA RTX A6000 GPUs, you need to use the NVIDIA Display Mode Selector Tool to switch the mode
for creating vGPUs. For details, see Using the NVIDIA Display Mode Selector Tool to Switch the Mode of a GPU .
In the vGPU scheduling mechanism, a single VM may fail to fully use physical GPU resources. You are advised to use multiple VMs to
contend for physical GPU resources.
Benefits
For customers: This feature allows GPUs to be shared and users to remotely access the VDI.
In the virtualization scenario, contact the customer service personnel of NVIDIA to obtain the GRID series drivers and the
corresponding license (the license needs to be paid separately). After applying for a license, perform the following operations
during installation:
1. Log in to the NVIDIA official website and obtain the license installation package and activation key.
For HA deployment, go to 4.
3. Obtain GRID License Server User Guide from the NVIDIA official website. For details about the installation process, see
section "Installing the NVIDIA vGPU Software License Server" in GRID License Server User Guide.
The section "Installing the NVIDIA vGPU Software License Server" in GRID License Server User Guide describes the
installation preparations and process of Windows and Linux servers. For details about subsequent configuration operations,
see section "Managing licenses on the NVIDIA vGPU Software License Server".
4. If you have high availability requirements on the license server, you can deploy the license server in active/standby mode.
You need to obtain Virtual GPU Software License Server Documentation from the NVIDIA official website. For details
about the installation process, see section "NVIDIA vGPU Software License Server High Availability" in Virtual GPU
Software License Server Documentation.
If exceptions occur on NVIDIA products, contact NVIDIA technical support by sending emails to
[email protected].
Compatibility
For details about the device models that support GPU virtualization and their GPU driver versions, see FusionCompute
Compatibility Query.
For details about the device models that support GPU virtualization and the corresponding guest OSs, see the SIA
compatibility guide for the desired version. You can upgrade the SIA version to increase the compatibility.
When a GPU is set to the virtualization mode, refer to the SIA compatibility guide for the desired version to know about the
supported OSs.
For details about how to query the FusionCompute SIA version, see How Do I Query the FusionCompute SIA Version? .
For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download
the document of the required version.
For carrier users: Visit https://support.huawei.com , search for the document by name, and download the document
of the required version.
Kepler, Maxwell, Pascal, Tesla, and Ampere GPUs cannot be deployed on the same PM; otherwise, the GPU
virtualization function will be affected.
VM live migration: A VM with a GPU bound does not support live migration.
VM snapshot: A VM with a GPU bound does not support memory snapshot creation.
After a GPU virtual resource group is bound to a VM, if the page for logging in to the VM using VNC is abnormal (black screen or
blue screen), press Ctrl+Alt+1 or Ctrl+Alt+2 to switch to the native page.
Some NVIDIA cards do not support VM login using VNC after vGPUs are configured. If the method of switching to the native page
by pressing Ctrl+Alt+1 or Ctrl+Alt+2 does not work, you can log in to the VM only through a remote desktop connection or after
detaching the vGPUs. For details about such NVIDIA cards, see the description on the NVIDIA official website.
Definition
Container cluster management: supports K8s cluster lifecycle management, including deployment, upgrade, connection,
scaling, and configuration.
Container image management: supports container image repositories. Users can upload, push, and pull container images. This
feature can be interconnected with user-built image repositories and supports image synchronization.
Containerized application management: provides full-lifecycle visualized management for containerized applications,
including application templates, application instances, and various K8s resource objects.
Container project management: logically isolates tenant resources by container project, enabling self-service and self-
management of container resources for multiple teams in an enterprise.
Container monitoring and O&M: provides comprehensive K8s event monitoring and performance monitoring capabilities.
Benefits
For customers: Visualized management of VMs and containerized applications can be completed on a single interface, meeting service evolution requirements and simplifying management.
Dependency
None
Communication Planes
The FusionCompute system consists of the following communication planes:
Management plane: provides a communication plane to implement system monitoring, O&M (including system
configuration, system loading, and alarm reporting), and VM management (such as creating, deleting, and scheduling VMs).
Storage plane: provides a communication plane for the storage system and storage resources for VMs. This plane is used for
storing and accessing VM data (including data in the system disk and user disk of VMs).
Service plane: provides a plane for NICs of VMs to communicate with external devices.
This section uses IP SAN devices and scale-out block storage devices as examples to describe how storage devices in the FusionCompute
system communicate with other planes.
IP SAN: Figure 1 and Figure 3 show the communication between planes in the FusionCompute system.
Scale-out block storage: Figure 2 and Figure 4 show the relationship between FusionCompute communication planes.
Figure 1 Communication between planes (four network ports) in the x86 architecture
Figure 2 Communication between planes (four network ports) in the Arm architecture
Figure 3 Communication between planes (six network ports) in the x86 architecture
Figure 4 Communication between planes (six network ports) in the Arm architecture
The Baseboard Management Controller (BMC) network port of each node can be assigned to the BMC plane or the management plane.
You are advised to bind network ports on different NICs to the same plane to prevent network interruption caused by the fault of a single
NIC.
When binding network ports on different NICs, ensure that the models of the NICs to be bound are the same. If the models of the NICs to
be bound are different, bind the network ports on the same NIC.
Communication plane assignment (four network ports, IP SAN):

Management plane
Ports: network port eth0 on the host; network port eth0 on the active and standby VRM nodes
Network port eth0 on each node is assigned to the management plane VLAN, and the VLAN to which port eth0 on the node belongs becomes the default VLAN of the management plane.
Ports: BMC network ports on the VRM and host
The switch port connected to the BMC network port on each node is assigned to the BMC plane VLAN, and the VLAN to which the BMC network port on the node belongs becomes the default VLAN for the BMC plane.
NOTE:
The BMC network ports can be assigned to an independent BMC plane or to the same VLAN as management network ports. The specific assignment depends on network planning.

Storage plane
Ports: storage network ports A1, A2, A3, A4, B1, B2, B3, and B4 on the SAN storage devices
The storage plane is divided into four VLANs:
A1 and B1 are assigned to VLAN 1.
A2 and B2 are assigned to VLAN 2.
A3 and B3 are assigned to VLAN 3.
A4 and B4 are assigned to VLAN 4.
eth2 is assigned to VLAN 1 and VLAN 2.

Service plane
Ports: service network port eth1 on the host
The service plane is divided into multiple VLANs to isolate VMs. All data packets from different VLANs are forwarded over the service network ports on the CNA node. The data packets are marked with VLAN tags and sent to the service network port of the switch at the access layer.
Communication plane assignment (four network ports, scale-out block storage):

Management plane
Ports: network ports eth0 and eth1 on the host (bond 1); network port eth0 on the active and standby VRM nodes
Network port eth0 on each node is assigned to the management plane VLAN, and the VLAN to which port eth0 on the node belongs becomes the default VLAN of the management plane.
Ports: BMC network ports on the VRM and host (bond: N/A)
The switch port connected to the BMC network port on each node is assigned to the BMC plane VLAN, and the VLAN to which the BMC network port on the node belongs becomes the default VLAN for the BMC plane.
NOTE:
The BMC network ports can be assigned to an independent BMC plane or to the same VLAN as management network ports. The specific assignment depends on network planning.

Storage plane
Ports: storage network ports eth2 and eth3 on the host (bond 2)
You are advised to configure the storage network ports to form a load balancing bond. Hosts can access scale-out block storage through the storage ports created on the storage network port bond over the layer 2 network based on the planned storage VLAN.

Service plane
Ports: service network ports eth0 and eth1 on the host (bond 1)
The service plane is divided into multiple VLANs to isolate VMs. All data packets from different VLANs are forwarded over the service network ports on the CNA node. The data packets are marked with VLAN tags and sent to the service network port of the switch at the access layer.
Communication plane assignment (six network ports, IP SAN):

Management plane
Ports: network ports eth0 and eth1 on the host (bond0); network port eth0 on the active and standby VRM nodes
Network ports eth0 and eth1 on each node are assigned to the management plane VLAN, and the VLAN to which network ports eth0 and eth1 on each node belong becomes the default VLAN of the management plane.
Ports: BMC network ports on the VRM and host (bond: N/A)
The switch port connected to the BMC network port on each node is assigned to the BMC plane VLAN, and the VLAN to which the BMC network port on the node belongs becomes the default VLAN for the BMC plane.
NOTE:
The BMC network ports can be assigned to an independent BMC plane or to the same VLAN as management network ports. The specific assignment depends on network planning.

Storage plane
Ports: storage network ports A1, A2, A3, A4, B1, B2, B3, and B4 on the SAN storage devices
The storage plane is divided into four VLANs:
A1 and B1 are assigned to VLAN 1.
A2 and B2 are assigned to VLAN 2.
A3 and B3 are assigned to VLAN 3.
A4 and B4 are assigned to VLAN 4.
eth2 is assigned to VLAN 1 and VLAN 2.
eth3 is assigned to VLAN 3 and VLAN 4.
Ports: storage network ports eth2 and eth3 on the host
Network port eth2 can communicate with ports A1, A2, B1, and B2 over the layer 2 network. Network port eth3 can communicate with ports A3, A4, B3, and B4 over the layer 2 network. This allows compute resources to access storage resources through multiple paths (each computing server has eight iSCSI links to connect to the same storage device), ensuring storage network reliability.

Service plane
Ports: service network ports eth4 and eth5 on the host
The service plane is divided into multiple VLANs to isolate VMs. All data packets from different VLANs are forwarded over the service network ports (eth4 and eth5) on the CNA node. The data packets are marked with VLAN tags and sent to the service network port of the switch at the access layer.
Communication plane assignment (six network ports, scale-out block storage):

Management plane
Ports: network ports eth0 and eth1 on the host (bond 1)
Network ports eth0 and eth1 on each node are assigned to the management plane VLAN, and the VLAN to which network ports eth0 and eth1 on each node belong becomes the default VLAN of the management plane.
Ports: BMC network ports on the VRM and host (bond: N/A)
The switch port connected to the BMC network port on each node is assigned to the BMC plane VLAN, and the VLAN to which the BMC network port on the node belongs becomes the default VLAN for the BMC plane.
NOTE:
The BMC network ports can be assigned to an independent BMC plane or to the same VLAN as management network ports. The specific assignment depends on network planning.

Storage plane
Ports: storage network ports eth2 and eth3 on the host (bond 2)
You are advised to configure the storage network ports to form a load balancing bond. Hosts can access scale-out block storage through the storage ports created on the storage network port bond over the layer 2 network based on the planned storage VLAN.

Service plane
Ports: service network ports eth4 and eth5 on the host (bond 3)
The service plane is divided into multiple VLANs to isolate VMs. All data packets from different VLANs are forwarded over the service network ports (eth4 and eth5) on the CNA node. The data packets are marked with VLAN tags and sent to the service network port of the switch at the access layer.
Component: FusionCompute
In the FusionCompute virtualization suite, to ensure precise system time, you are advised to configure an external Network Time Protocol (NTP) clock source to serve FusionCompute.

Component: eBackup backup server
Backup server: To ensure time accuracy, an external NTP clock source is required. The backup server synchronizes time with the clock source.
Backup proxy: After the backup server is configured, the backup proxy synchronizes time with the backup server.

Component: UltraVR
If UltraVR is deployed on a physical server, an external NTP clock source needs to be deployed based on the server OS. UltraVR synchronizes time with the external NTP clock source.
If UltraVR is deployed on a VM, UltraVR synchronizes time with the host where the VM runs. When creating a VM template, set the clock policy to synchronize time with the host. The system time on the VM changes with the host system time.

Component: Application-oriented VM
Users can select a time synchronization policy based on their specific VM time precision requirements.
(Recommended) Free clock policy: Users customize a VM time synchronization policy, with which VM time is not affected by the FusionCompute virtualization suite system time. To use this policy, set the clock policy to not synchronizing time with the host clock when you create a VM template.
Synchronize time with the host on which the VMs are running: VM time is determined by the host time. To use this policy, set the clock policy to synchronizing time with the host clock when you create the VM.
If a user does not expect application-oriented VM time to be determined by the FusionCompute system time or the user is using a reliable clock source, the free clock policy is recommended.
2.1.7 Reliability
FusionCompute System Reliability
Quorum Server
FusionCompute management nodes support deployment of an independent quorum server for active/standby arbitration. If the
network between the active and standby management nodes is disconnected and they cannot obtain each other's fault status,
a split-brain condition will occur. In this scenario, the standby management node can use the quorum server to determine the fault status of the
active node. The quorum server mode is supported only when the management nodes are deployed in active/standby mode
(including when the nodes are deployed on physical servers) and is not supported when only a single node is
deployed.
VM HA
The VM HA feature ensures quick recovery of a VM. With this feature, when a VM is faulty, the system automatically re-creates
the VM on another healthy compute node.
When detecting a VM fault, the system selects a healthy compute node and re-creates the faulty VM on it.
VM Live Migration
If users migrate the VMs from one server to another server without interrupting the service, it is called live migration of VMs. The
VM manager provides quick recovery of memory data and memory sharing technologies to ensure that the VM data before and
after the live migration remains unchanged.
Before performing O&M operations on a physical server, system maintenance engineers need to migrate VMs from this
physical server to another physical server. This minimizes the risk of service interruption during the O&M process.
Before upgrading a physical server, system maintenance engineers need to migrate VMs from this physical server to other
physical servers. This minimizes the risk of service interruption during the upgrade process. After the upgrade is complete,
system maintenance engineers can migrate the VMs back to the original physical server.
System maintenance engineers need to migrate VMs from a light-loaded server to other servers and then power off the
server. This helps reduce service operation costs.
Manual migration (by destination): On the FusionCompute web client, system maintenance engineers manually migrate a VM to another server.
Automated migration (VM resource scheduling): The system automatically migrates VMs to other servers in the cluster based on the preset VM scheduling policies.
VM Load Balancing
In the load balancing mode, the system dynamically allocates the load based on the current load status of each physical server
node to implement load balancing in a cluster.
Snapshot
The snapshot feature enables FusionCompute to restore a damaged VM using its snapshots.
A snapshot stores VM status information (including hard disk information) at a certain point in time.
When a VM is faulty, a user can quickly create a VM based on the VM snapshot that has been backed up.
VM Isolation
The isolation feature ensures that all VMs running on the same physical server are independent. Therefore, the fault of one VM
does not affect other VMs.
VM isolation is implemented based on virtualization software. Each VM has independent memory space, network address space,
CPU stack register, and disk storage space.
VM OS Fault Detection
If a VM becomes faulty due to a VM failure or physical server failure, the system automatically restarts the faulty VM on the
physical server where the VM is located or on another physical server, depending on the preset policy. Users can also configure
the system to ignore the faults. The system can detect and rectify VM OS faults, such as the BSOD on Windows VMs (in the x86
architecture) or the panic status of Linux VMs.
Black Box
The black box embedded in FusionCompute collects information about the system. If a fault occurs, the black box collects and
stores the most recent system information, facilitating fault locating.
The black box stores the following information:
System snapshots
Management Node HA
Management nodes work in active/standby mode to ensure HA. If the active node is faulty, the standby node takes over services
from the active node, ensuring uninterrupted service processing of management nodes.
The active and standby nodes check each other's status using the heartbeat detection mechanism. The active node is automatically
determined.
Under normal conditions, only the active node provides services. The standby node provides only basic functions and periodically
synchronizes data with the active node.
If the active node is faulty, the standby node takes over services from the active node and changes to the active state. The
original active node changes to the idle state.
Active node faults include network interruption, abnormal states, and faulty service processes.
If both the active and standby nodes are faulty, HA will be triggered to start the active node on another normal node in the
cluster when certain conditions are met. For details about the constraints, see Configuring the HA Policy for a Cluster. A simplified sketch of the standby node's failover decision follows.
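A minimal sketch of that decision, assuming hypothetical check_peer and check_quorum probes (these are not FusionCompute APIs):

```python
import time

MISSED_LIMIT = 3     # heartbeats missed before arbitration (assumed value)
INTERVAL = 1.0       # seconds between heartbeat probes (assumed value)

def standby_loop(check_peer, check_quorum):
    """Standby-side failover decision with quorum arbitration."""
    missed = 0
    while True:
        missed = 0 if check_peer() else missed + 1
        if missed >= MISSED_LIMIT:
            # Ask the quorum server before taking over: a partition on our
            # own uplink must not produce a second active node (split-brain).
            if check_quorum() == "active-down":
                return "promote-standby"
            missed = 0               # we are the isolated side; stay standby
        time.sleep(INTERVAL)
```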
If a devastating fault occurs, both the active and standby management nodes are faulty at the same time, and they cannot be restored
by restarting, they can be rapidly restored (within 1 hour) using the remote data backup. With this service, the restoration time
is reduced.
Traffic Control
The traffic control mechanism enables the management node to provide highly available concurrent services without
collapsing under excessive traffic. Traffic control is enabled at the access point, so that excessive load on the front end is
prevented and system stability is enhanced. Traffic control performed in each internal phase with a bottleneck prevents traffic overload
and service failure, for example, traffic control on image downloading, authentication, VM services (including VM migration, HA, and
creation, as well as VM hibernation, waking up, starting, and stopping in the x86 architecture), and O&M.
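Front-end traffic control of this kind often reduces to an admission limit: at most N requests of a given class run concurrently, and excess requests are rejected for later retry instead of overloading the node. A generic sketch; the limit value is an assumption:

```python
import threading

class AdmissionLimiter:
    """Reject work beyond a fixed concurrency limit instead of queuing it."""
    def __init__(self, limit):
        self.slots = threading.BoundedSemaphore(limit)

    def try_run(self, task):
        if not self.slots.acquire(blocking=False):
            return "rejected by traffic control"   # caller retries later
        try:
            return task()
        finally:
            self.slots.release()

migrations = AdmissionLimiter(limit=8)   # e.g. cap concurrent VM migrations
print(migrations.try_run(lambda: "migration started"))
```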
Fault Detection
The system provides fault detection and alarm reporting functions, and a tool for displaying faults in web browsers. When a
cluster is running properly, users can monitor cluster management and load balancing using a data visualization tool to detect
faults, including load balancing problems, abnormal processes, and hardware performance deterioration trends. This fault detection
function allows the system to properly adjust and distribute resources, improving overall system performance. Users
can view historical records to obtain information about daily, weekly, and even annual hardware resource consumption.
2.1.8.1 Compatibility
Hardware Compatibility
Supported OSs
For details about the OSs that support the FusionCompute RDM function, see content about support for disk scsi (virtio-scsi) in
FusionCompute SIA Guest OS Compatibility Guide (x86) or FusionCompute SIA Guest OS Compatibility Guide (Arm).
For details about how to query the FusionCompute SIA version, see How Do I Query the FusionCompute SIA Version? .
For details about the OSs compatible with FusionCompute, see the compatibility query website. You are advised to upgrade
FusionCompute SIA to the latest version.
Host Security
Security Structure
physical environment. The virtualization capability of network firewalls or Intrusion Prevention Systems (IPSs) may be
insufficient, which causes the static network partition and isolation model to fail to meet the requirement of dynamic
resource sharing.
Data source authentication is different from the data integrity check, as the latter only proves that data has not been modified accidentally or
intentionally.
Security Advantages
The cloud computing platform provides comprehensive and unified security management for compute resources.
The centralized management of compute resources makes it easier to deploy boundary protection. Comprehensive security
management measures, such as security policies, unified data management, security patch management, and unexpected
event management, can be taken to manage compute resources. In addition, professional security expert teams can protect
resources and data for users.
VM isolation security
The hypervisor can isolate VMs running on the same physical machine to prevent data theft and attacks and ensure that
use of VM resources is not affected by other VMs. Users can only access resources allocated to their own VMs, such as
hardware and software resources and data, ensuring secure VM isolation.
Network plane isolation, firewalls, and data transfer encryption are used to ensure the security of service O&M.
In addition, the security of each physical host is ensured by repairing web application loopholes, hardening the system and
database, and installing patches and antivirus software. For details, see Host Security .
VM Isolation
Frontend driver: runs in a VM and transfers I/O requests of the VM to the backend driver in the host.
Backend driver: runs in the host, parses the I/O requests, maps them to the physical devices, and hands them over to the device
driver that controls the hardware.
When a VM or a data volume is deleted, the system reclaims resources, and a linked list of small data blocks is released to
the resource pool. These small data blocks are reorganized for storage resource reuse. Therefore, the possibility of restoring
original data from reallocated virtual disks is low, protecting residual information from being illegally obtained.
When system resources are reclaimed, the physical bits of logical volumes are formatted (zero-filled) to ensure data security, as sketched after this list.
After the physical disks of the data center are replaced, the system administrator of the data center degausses them or
physically destroys them to prevent data leakage.
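Formatting reclaimed space can be as simple as overwriting the blocks with zeros before they return to the pool. A toy illustration of the idea, not the storage layer's actual implementation:

```python
def reclaim(volume_blocks, pool):
    """Zero-fill blocks of a deleted volume before returning them to the pool."""
    for block in volume_blocks:
        block[:] = b"\x00" * len(block)   # destroy residual data in place
        pool.append(block)

pool = []
volume = [bytearray(b"tenant-secret-01"), bytearray(b"tenant-secret-02")]
reclaim(volume, pool)
print(pool[0])   # bytearray of zeros: the original content is unrecoverable
```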
2.2.2.2 VM Isolation
The hypervisor can isolate VMs running on the same physical machine to prevent data theft and attacks and ensure that use of VM
resources is not affected by other VMs. Users can only use VMs to access resources belonging to their own VMs, such as
hardware and software resources and data. Figure 1 shows the VM isolation.
Figure 1 VM isolation
Memory Isolation
The VM uses memory virtualization technology to virtualize the physical memory and isolate virtual memory. Based on the
existing mapping between virtual addresses and machine addresses, the technology introduces a new address type, the guest
physical address. On a VM, the guest OS translates a virtual address into a physical address. The hypervisor then
translates the physical address of the guest into a machine address, and sends the machine address to the physical server.
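The two-stage mapping can be pictured as two page tables chained together: the guest OS owns the first stage and the hypervisor owns the second, so a guest can never name a machine frame directly. A toy, page-granular illustration:

```python
PAGE = 4096

guest_page_table = {0x0: 0x5}    # guest-virtual page -> guest-physical page
hypervisor_p2m = {0x5: 0x9c}     # guest-physical page -> machine page

def translate(gva):
    gpa_page = guest_page_table[gva // PAGE]   # stage 1: done by the guest OS
    machine_page = hypervisor_p2m[gpa_page]    # stage 2: done by the hypervisor
    return machine_page * PAGE + gva % PAGE

print(hex(translate(0x0123)))    # 0x9c123: same offset, isolated machine frame
```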
Transmission Security
Service plane
The service plane provides service channels for users and works as the communication plane of the NICs of VMs to provide
services.
Storage plane
The storage plane works as the communication plane for iSCSI storage devices and provides storage resources for VMs. The
storage plane communicates with VMs over the virtualization platform.
Management plane
The management plane works as the communication plane for cloud computing system management, service deployment,
and system loading.
The BMC plane is used for server management, and whether this plane is isolated from the management plane is
configurable.
Some functions and features provided by the FusionCompute virtualization suite require the communication between the
management plane and the service plane. To reduce management plane vulnerability, different networking modes are
recommended in the following scenarios:
In high-security scenarios, it is recommended that the layer 2 networking mode be used for the FusionCompute virtualization
suite. In this case, the management plane cannot communicate with the service plane over the layer 2 network. If
communication between the two planes is required, they can communicate with each other in users' networks over layer 3;
therefore, users must take strong security measures, such as firewalls.
In common security scenarios, the FusionCompute virtualization suite can adopt the layer 3 networking mode. In this case,
the management plane communicates with the service plane over the layer 3 network by default. To isolate the
management plane from the service plane, ACL policies can be configured on the layer 3 switch.
Data transfer may be interrupted, and data may be replicated, modified, forged, intercepted, or monitored during transfer.
Therefore, it is necessary to ensure the integrity, confidentiality, and validity of data during network transmission. The solution
ensures data transfer security as follows:
When accessing management systems, administrators browse data-sensitive pages using Hypertext Transfer Protocol Secure
(HTTPS), and data transfer channels are encrypted using Secure Sockets Layer (SSL).
Users access VMs using HTTPS, and data transfer channels are encrypted using SSL.
HTTPS provides encrypted data transfer and identity authentication. HTTPS encrypts data using SSL, which has the
following functions:
Authenticates users and servers, ensuring that data is sent to the correct clients and servers.
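In client code, this means connecting only over TLS with certificate verification enabled. A minimal Python illustration using the standard library; the URL is taken from the document's contact page and stands in for any management endpoint:

```python
import ssl
import urllib.request

# The default SSL context verifies the server certificate chain and host name,
# so a forged or self-signed endpoint is rejected before any data is exchanged.
ctx = ssl.create_default_context()
with urllib.request.urlopen("https://www.huawei.com", context=ctx) as resp:
    print(resp.status)           # 200 on success, over an encrypted channel
```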
Log Management
Table 1 Principles for encrypting, setting, and changing passwords for accounts

Initial password setting:
The default password must be changed after the user logs in to the system for the first time.
The password must comply with the password policies.

Password change:
After the password validity period expires, users are required to change their passwords when they log in to the system.
The password must be changed regularly. It is recommended that administrator passwords be changed at least every 180 days.
To ensure the account and password security of the management systems, Table 2 provides the password policies for setting
account passwords. All the passwords can be changed.
Table 2 Password policies

Password length
Specifies the allowed password length range, in the format m-n, where m and n are integers ranging from 8 to 32 and m must be less than n. A password contains a string of 8 to 32 characters by default.
Default: 8-32

Shortest period between each password change (min)
Specifies the minimum interval between two password changes. The value ranges from 0 to 9999. (0 indicates that a new password can be changed immediately.)
Default: 5

Password validity period (days)
Specifies the password validity period. The value ranges from 0 to 999. (0 indicates that the password can be used permanently.)
Default: 90

Change password upon a reset or first login
Specifies whether the user needs to change the password upon the first login or after the password is reset.
Default: Yes

Whether weak password verification is supported for passwords of local users and interface interconnection users
Specifies whether to support weak password verification. Yes: Weak password verification is supported.
NOTE:
You can run the cat /etc/galax/sm/weakword.conf command to query the weak password dictionary.
You can run the vi /etc/galax/sm/weakword.conf command to expand the weak password dictionary. After opening the weakword.conf file, press i to enter the insert mode. A weak password dictionary across lines is supported. Each line in the file indicates a password. You are not allowed to delete the default weak passwords in the original dictionary. After the modification is complete, press Esc, enter :wq, and press Enter to save the modification and exit.
Default: Yes

Number of previous passwords that cannot be repeated
Specifies the number of previous passwords that cannot be reused. The value ranges from 3 to 32.
Default: 5

Password expiry alert (days)
Specifies how many days in advance a password expiration alert starts to be generated. The value ranges from 0 to 15. (0 indicates that no alert will be generated.)
Default: 7

Passwords can contain usernames or reversed usernames
Yes: The password is allowed to contain the username or reversed username. No: The password is not allowed to contain the username or reversed username.
Default: No

Account lockout threshold
Specifies the threshold for the number of unsuccessful login attempts. When the number of unsuccessful login attempts is greater than the threshold, the account will be locked. When this parameter is set to 0, the account will not be locked. The value ranges from 0 to 10.
Default: 3

Account lockout duration (min)
Specifies the duration for which an account is locked after too many incorrect inputs. When the account has been locked for the specified period, it is unlocked automatically. When this parameter is set to 0, the account must be unlocked by an administrator who has the required permission. The value ranges from 0 to 1440.
Default: 5
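Several of these policies can be enforced mechanically at password-change time. A simplified validator covering the length rule, the username rule, and the weak-password dictionary (the dictionary path comes from the table's NOTE; the sample words are illustrative):

```python
def check_password(pwd, username, weak_words,
                   min_len=8, max_len=32, allow_username=False):
    """Return the list of violated policies (an empty list means accepted)."""
    errors = []
    if not min_len <= len(pwd) <= max_len:
        errors.append(f"length must be {min_len}-{max_len} characters")
    lowered = pwd.lower()
    if not allow_username and (username.lower() in lowered
                               or username.lower()[::-1] in lowered):
        errors.append("must not contain the username or reversed username")
    if pwd in weak_words:
        errors.append("matches the weak password dictionary")
    return errors

# On a VRM node the dictionary lives in /etc/galax/sm/weakword.conf,
# one password per line; here we use an illustrative subset.
weak = {"admin123", "password"}
print(check_password("nimda", "admin", weak))
```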
Log Type
Logs record system running statuses and users' operations on the system, and can be used to query user behavior and locate
problems. Logs are divided into run logs and operation logs. Operation logs record information about system security.
An operation log records all user operations on the system and operation results, and helps identify whether the system is
maliciously operated and attacked. In the FusionCompute virtualization suite, operation logs record the following information:
Operations performed by the administrator on FusionCompute, such as logging in to and out of the system, and creating
VMs
Log Source
The following systems generate logs:
FusionCompute
Log Storage
Logs generated by systems in the FusionCompute virtualization suite are stored in their local databases. Table 1 describes the
database for storing each type of log.
System Database
Only administrators who have the permission to query logs can export them.
OS Hardening
Database Hardening
Security Patch
Session security
If users do not perform any operation within the configured session timeout duration, they are automatically logged out of the
system and must log in again to continue using it. For details about how to set the session timeout, see Changing the
Login Timeout Duration.
2.2.4.2 OS Hardening
In the FusionCompute virtualization suite, compute nodes, storage nodes, and management nodes use the Euler Linux OS. The
following basic security configurations are supported to ensure OS security on these nodes:
Stop unnecessary services, such as the Telnet and File Transfer Protocol (FTP) services.
Database Type
In the FusionCompute solution, the following databases are included:
GaussDB database
Redis database
For details about how to set the password complexity of the common service accounts, see Changing the Password for
Accessing Common Services in GaussDB on the VRM Node .
For details about how to set the password complexity of the administrator accounts, see Changing the Password for the
GaussDB Administrator on the VRM Node .
For details about how to set the password complexity of the administrator accounts, see Changing the Password of the Redis
Administrator on a CNA Node .
Database Backup
To ensure data security, databases must periodically back up their data to prevent loss of important data. The databases support the
following backup modes:
Local backup: Backup scripts are executed at 02:00 every day to back up data.
Remote backup: If a third-party backup server is configured for a database, the database automatically executes the backup
script at 02:00 every day to back up data and upload the backup data to the third-party server.
User VM patches
The FusionCompute virtualization suite does not provide any additional security patch for user VMs. You are advised to
obtain OS security patches from the official OS website and install patches for user VMs.
Installation Guide
Initial Configurations
Configuration Overview
In the FusionCompute solution, the network device configuration includes configurations of physical ports on hosts and storage
devices, cabling between physical switches, physical switch parameters, and network logic.
Whether to use the GE networking or 10GE networking depends on the amount of the estimated network traffic. It is
recommended that the network load be less than 60% of the network port bandwidth.
Based on different service types, a network can be manually divided into multiple network planes by VLAN. Each network plane
can be configured with one or more VLANs.
To ensure networking reliability, you are advised to deploy switches in stacking mode. If NICs are sufficient, you can use two or
more NICs for connecting the host to each plane.
Figure 1 or Figure 2 shows the typical internal networking for a host that has six physical NICs deployed and uses IP SAN or
scale-out block storage.
If the number of NICs deployed on the host is less than six, reduce the number of NICs for each network or combine some planes,
for example, combine the service plane and the management plane. Then, isolate them logically by VLAN. For example,
configure the same physical NIC for the service and management plane and separate the two planes using different VLANs.
If compute devices and storage devices are installed in different cabinets, connect the storage devices to access switches other than
the ones the hosts connect to, and enable their communication through aggregation switches.
In the 25GE to 10GE negotiation networking scenario where a Mellanox NIC is used (switches use a 10GE network port to connect to the 25GE
Mellanox NIC), about 1% of TCP packets may be retransmitted when the data transmission pressure is high. If customer applications have high
requirements on TCP retransmission, you are advised to use the 25GE to 25GE or 10GE to 10GE negotiation networking.
Configuration Requirements
SNMP and SSH must be enabled for the switches to enhance security. SNMPv3 is recommended.
The Spanning Tree Protocol (STP) must be disabled for the switches. Otherwise, a host fault alarm may be generated
incorrectly.
Network devices are interconnected on the planned planes. Table 1 shows the requirements for the interconnection.
Table 1 Requirements for communication plane interconnection

BMC plane
Description: Specifies the plane used by the BMC network port on the host. This plane enables remote access to the BMC system of a server.
Requirement: The management plane and the BMC plane of the VRM node can communicate with each other. The management plane and the BMC plane can be combined.

Management plane
Description: Specifies the plane used by the management system to manage all nodes in a unified manner. All nodes communicate on this plane, which provides the following IP addresses:
Management IP addresses of all hosts, that is, IP addresses of the management network ports on hosts
IP addresses of management VMs
IP addresses of storage device controllers
NOTE:
The management plane is accessible to the IP addresses in all network segments by default, because the network plans of different customers vary. You can deploy physical firewalls to deny access from IP addresses that are not included in the network plan.
If you use a firewall to set access rules for the floating IP address of the VRM node, set the same access rules for the management IP addresses of the active and standby VRM nodes.
On the FusionCompute management plane, some ports provide management services for external networks. If the management plane is deployed on an untrusted network, it is prone to denial of service (DoS) and DDoS attacks. Therefore, you are advised to deploy the management plane on a dedicated network or in the trusted zone of the firewall, protecting the FusionCompute system against external attacks.
SSH and SFTP ports are high-risk ports. Do not expose the SSH and SFTP ports of the system to the Internet without passing through the firewall. If the preceding services must be exposed to the Internet, take measures to protect perimeter network security.
It is recommended that you configure eth0 on a host as the management network port. If a host has more than four network ports, configure both eth0 and eth1 on the host as the management network ports, and bind them to work in active/standby mode after FusionCompute is installed.
Requirement: The VRM node communicates properly with CNA nodes over the management plane.

Storage plane
Description: Specifies the network plane on which hosts communicate with storage units on storage devices. This plane provides the following IP addresses:
Storage IP addresses of all hosts, that is, IP addresses of the storage network ports on hosts
Storage IP addresses of storage devices
If the multipathing mode is in use, configure multiple VLANs for the storage plane.
Requirement: Hosts communicate properly with storage devices over the storage plane. You are not advised to use the management plane to carry storage services, which ensures storage service continuity even when you subsequently expand the capacity of the storage plane.

Service plane
Description: Specifies the network plane used by service data of user VMs (such as VM migration service data).
Requirement: -

Quorum network
Description: If communication between the storage systems is interrupted or a storage system malfunctions, the quorum server determines which storage system is accessible.
Requirement: An independent quorum server must be deployed. Each controller of the storage systems must have quorum links.
Configuration Requirements
Configuration requirements vary depending on the storage device type.
If shared storage devices are used, including SAN and NAS storage devices, you must configure the management IP
addresses and storage link IP addresses for them. The following conditions must be met for different storage devices:
If SAN devices are used and you have requirements for thin provisioning and storage cost reduction, you are advised
to use the thin provisioning function provided by the VIMS, rather than the Thin LUN function of SAN devices.
Data may fail to be written due to insufficient space caused by Thin LUN overcommitment.
If SAN devices are used, configure LUNs or storage pools (datastores) as planned and map them to corresponding
hosts.
If NAS storage devices are used, configure shared directories (datastores) and the list of hosts that can access the shared directories as planned, and configure the no_all_squash and no_root_squash export options (see the example export entry after this list).
The OS compatibility of some non-Huawei SAN devices varies depending on the LUN space. For example, if the
storage space of a LUN on a certain SAN device is greater than 2 TB, certain OSs can identify only 2 TB storage
space on the LUN. Therefore, review your storage device product documentation to understand the OS compatibility
of the non-Huawei SAN devices before you use the devices.
If SAN devices are used, you are advised to use iSCSI to connect hosts and storage devices. The iSCSI connection
does not require additional switches, thereby reducing costs.
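For the NAS case above, the following is a minimal sketch of an NFS export entry carrying the required squash options, assuming a Linux-based NAS; the exported path /datastore1 and the storage-plane subnet 192.168.11.0/24 are hypothetical:
# /etc/exports on the NAS (hypothetical path and subnet)
/datastore1 192.168.11.0/24(rw,sync,no_all_squash,no_root_squash)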
If local storage resources are used, only available space on the disk where you install the host OS and other bare disks can be
used as datastores.
Datastores can be virtualized. Creating a common disk on a virtualized datastore takes a long time, but a virtualized datastore supports advanced features, such as thin provisioning disks, that improve storage utilization and system security and reliability. Creating a common disk on a non-virtualized datastore takes less time, and such a disk has higher I/O performance than one created on a virtualized datastore, but a non-virtualized datastore does not support advanced features. A virtualized datastore provides high performance when it serves a small number of hosts. Therefore, you are advised to add one virtualized datastore to a maximum of 16 hosts.
FusionCompute can be installed only on a local storage device.
If Thin LUNs are used as SAN storage, the storage pool may be overcommitted. As a result, the storage capacity may be used up and no storage
space is available, affecting VMs on FusionCompute and customer services on VMs. Do not use Thin LUNs as FusionCompute storage devices
unless otherwise specified.
A virtualized SAN datastore in a FusionCompute system can be added to hosts of the same CPU type.
Local disks can be provided only for the host accommodating the disks. Size the local disks in approximate proportion to the host's compute resources. With this proportion configured, if the host compute resources become exhausted, the local storage resources are exhausted at roughly the same time, preventing either resource from being wasted while the other is depleted.
Except for the management nodes, you are advised to deploy service VMs on shared storage.
Datastore Requirements
A virtualized SAN datastore in a FusionCompute system can be associated with hosts of the same CPU type.
One datastore can be added to only one FusionCompute system. If it is added to different FusionCompute systems, its data will be
overwritten.
In a system that uses datastores provided by shared storage, add the datastores to all hosts in the same cluster to allow VM
migration between hosts in a cluster.
To deploy management nodes on VMs, ensure that the datastores to be used by the VMs meet the requirements for the VM disk
space. That is, when you plan datastores, ensure that the capacity of datastores used by management node VMs is greater than or
equal to the disk space used by the VMs.
The disk space requirements for these VMs are as follows: each VM on which a FusionCompute management node is deployed requires 120 GB of disk space. Considering the datastore management overhead, a disk space of 140 GB or more is recommended. The management node (VRM node) is mandatory.
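For example, under these figures, an active/standby VRM pair deployed on VMs needs at least 2 × 140 GB = 280 GB of datastore capacity reserved for the management nodes.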
For details about the server hardware requirements, see Host Requirements .
Do not install any third-party software on CNA or VRM nodes. Otherwise, services may be abnormal.
For details about specific requirements for the server hardware and software, see the product documentation of servers.
Configuration Requirements
Table 1 describes the requirements for the server hardware in FusionCompute.
CPU
Requirement: FusionCompute can virtualize compute resources only when CPU virtualization is enabled for CPUs on physical servers.
Configuration method: The CPU virtualization function must be enabled in the BIOS.

RAID
Requirement: Use RAID 1 consisting of two disks to install the OS and service software on the server to enhance reliability. If VRM VMs use local storage resources, an additional unpartitioned disk is required after the OS and service software are installed. Otherwise, no available local storage resources can be provided for VRM VMs.
Configuration method: You are advised to use RAID 1 consisting of disks 1 and 2 on the server.
NOTE:
Configure the RAID controller card boot option. The following uses the 5288 V2/V3 as an example:
During server startup, press Ctrl+C as prompted to go to the RAID controller card configuration page.
On the Adapter Properties page, select SAS Topology and press Enter. The SAS Topology page is displayed.
On the SAS Topology page, press ↑ or ↓ to select a disk or RAID, press Alt+B to set the selected device as the first boot device, and press Alt+A to set the selected device as the second boot device.

Boot device
Requirement: Hard disk, Network, and CD-ROM are enabled as boot devices on servers. Network and CD-ROM are used for OS and service software installation. After the installation is complete, configure servers to start from Hard disk by default.
Configuration method: Set the first boot device to Hard disk, the second boot device to Network, and the third boot device to CD-ROM.

PXE
Requirement: Enable the PXE function for the NICs used to install OSs and service software in PXE mode. Disable the PXE function for other NICs.
Configuration method:
If another OS has been installed on the server before, ensure that the NIC driver is an onboard driver.
Enable the PXE function for the NICs used to install OSs in PXE mode and disable the PXE function for other NICs.
If the IPv6 protocol is used on x86 and Arm servers, check whether the server BIOS and NIC support IPv6 before installing a host in PXE mode. For details, see the product documentation of the corresponding server or contact the server vendor's technical support. If the server does not support IPv6, use a server that supports IPv6.
Before installing an x86 host in PXE mode, configure the server BIOS and NIC: set Boot Type to UEFI Boot and set PXE Boot Capability to UEFI:IPV6.
Configuring S5352/S5752
Configuring S9300
Common Operations
Scenarios
The typical configuration for FusionCompute consists of rack servers, IP SAN storage devices, and GE switches, or rack servers,
scale-out block storage, and GE switches. Figure 1 shows the typical networking.
All switching devices are deployed in stacking mode on the standard layer 3 network.
Access switches connect all the network ports on host network planes (management plane, service plane, and storage plane)
to the network, as shown in Figure 2.
Access switches connect the management plane and the storage plane of storage devices to the network, as shown in Figure
3 for the IP SAN storage access. For details about the scale-out block storage access, see Planning and Design > Network
Planning Guide > Typical Networking Schemes in OceanStor Pacific Series Product Documentation.
The aggregation switch connects to the access switches for the interconnection between different planes on the connected
devices. The core switch connects the cloud data center to external networks. The core switch and the aggregation switch can
be the same physical switch.
Figure 3 Networking between storage devices (IP SAN storage) and access switches
Typical Case
In the typical configuration example, the following devices are used:
Access switch: HUAWEI Quidway S5300/S5700 series. For details about the configuration operations, see Configuring
S5352/S5752 .
Aggregation or core switch: HUAWEI Quidway S9300 series. For details about the configuration operations, see
Configuring S9300 .
Storage device: For details about how to configure Huawei OceanStor 5500 V3 storage system, see Configure > Quick
Configuration Guide for Block in OceanStor 5300 V3&5500 V3&5600 V3&5800 V3&6800 V3 Product Documentation.
For details about how to configure the scale-out block storage system, see Installation > Hardware Installation Guide and
Installation > Software Installation Guide > Installing the Block Service > Connecting to FusionCompute in
OceanStor Pacific Series Product Documentation.
Host: HUAWEI 2288H V5 rack servers or TaiShan 200 servers (model: 2280). For details about the configuration method,
see Configuring the 2288H V5 Servers or TaiShan 200 Servers (Model: 2280) .
Preparations
Checking Switches
Configuring Switches
Typical Scripts
3.2.2.1 Overview
Purpose
Configure the HUAWEI Quidway S5352C-EI Ethernet switch (S5352 for short) and HUAWEI Quidway S5700-52C-EI Ethernet
switch (S5752 for short) using a script in FusionCompute.
The methods of configuring S5752 and S5352 switches are similar. This document uses S5352 switches as an example.
Process
Figure 1 shows the process for configuring S5352 switches.
3.2.2.2 Preparations
Software
PC terminal emulator: This software product can emulate the function of the switch terminal through serial interactions with the switch. The Windows OS supports serial port tools, such as PuTTY, ttermpro.exe, and HyperTrm.exe.

Documents
Integration Design Data Plan Template: This document describes the data planning for FusionCompute deployment. Source: network integration design documentation.
FusionCompute 8.8.0 Version Mapping: This document provides the version mapping of FusionCompute products. For enterprise users: visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: visit https://support.huawei.com, search for the document by name, and download it.
Quidway S5300 Series Ethernet Switches Product Documentation: This document provides information about S5300 series switches and the configuration and reference information. Visit https://support.huawei.com, search for the document by name, and download it.
vrpcfg.cfg: The switch configuration script generated in the network integration design. Obtain this script from integration design engineers.
Scenarios
Before configuring a switch, check the switch status and ensure that the switch meets the basic requirements for data configuration.
Process
Figure 1 shows the process for checking the switch.
Procedure
Log in to the switch through the Console port.
1. Run the emulation program on the local computer to create a connection between the local computer and the switch.
Key parameters are as follows:
Position of the switch port: CONSOLE port on the right of the front panel
After the hardware installation of the S5352 switch is complete and the switch is powered on, the stack system is configured. The local
computer needs to be connected to the active switch of the stack system.
If the local computer is connected to the standby switch of the stack system, an error will be generated during the execution of the
commands on this page.
When the switches are powered on for the first time, the one that is powered on first is the active switch.
2. Run the following command to switch to the system view:
<Quidway> system-view
3. Run the following command to check the software version of the switch:
[Quidway] display version
The command is successfully executed if information similar to the following is displayed. The value of Software Version
indicates the switch software version.
CPLD Version : 74
HINDCARD information
FANCARD information
PWRCARD I information
4. Confirm that the switch software version meets the product version mapping.
If the switch software version is not consistent with the planned version, download the corresponding software and the
upgrade guide and perform an upgrade.
5. Run the following command to check the logs in the log buffer:
[Quidway] display logbuffer summary
The command is successfully executed if information similar to the following is displayed. Columns EMERG to DEBUG
indicate the exception logs of different severities. The number corresponding to each parameter indicates the quantity of
the specified type of exception logs.
0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0
If exception logs are generated, handle the issue by performing the operations provided in "Reference" > "Log
Reference" in Quidway S5300 Series Ethernet Switches Product Documentation.
Alarm:
-------------------------------------------------------------------
If alarms are generated, clear the alarm by performing the operations provided in "Maintenance and Fault
Management" > "Alarm Reference" in Quidway S5300 Series Ethernet Switches Product Documentation.
Scenarios
Configure the S5352 switches in stacking mode using a script.
For a switch stack, the lower switch is numbered 1 and the upper switch is numbered 2. In this case, you need to configure only
switch 1, and power on switch 2. The configuration specified in the script takes effect for switch 2 after it is powered on.
Process
Figure 1 shows the process for configuring the switch.
SSHv1 has security risks. You are advised to use secure protocols such as SSHv2.
Procedure
Power off switch 2.
For a switch stack, the lower switch is numbered 1 and the upper switch is numbered 2.
2. Run the emulation program on the local computer to create a connection between the local computer and the switch.
Key parameters are as follows:
Position of the switch port: CONSOLE port on the right of the front panel
The initial switch only has VLAN 1. The management IP address and FTP information configured in this section are temporary data and
only used for uploading the switch configuration scripts. Such data will be overwritten after the configuration scripts take effect.
FTP has security risks. You are advised to use secure protocols such as SFTP and FTPS.
5. Run the following command to configure the IP address and the subnet mask of the switch:
[Quidway-Vlanif1] ip address Management IP address of the switch Subnet mask
For example, to set the management IP address of the switch to 10.85.199.48 and subnet mask to 24, run the following
command:
[Quidway-Vlanif1] ip address 10.85.199.48 24
(l): loopback
(s): spoofing
Vlanif1 10.85.199.48/24 up up
10. Run the following command to configure the FTP username and password:
[Quidway-aaa] local-user FTP username password simple FTP password
For example, to set the FTP username to ftp and password to Huawei123, run the following command:
[Quidway-aaa] local-user ftp password simple Huawei123
The command is successfully executed if information similar to the following is displayed:
11. Run the following command to configure the user access type:
[Quidway-aaa] local-user FTP username service-type ftp
For example, to configure the access type for the ftp user to ftp, run the following command:
[Quidway-aaa] local-user ftp service-type ftp
12. Run the following command to configure the user FTP directory:
[Quidway-aaa] local-user FTP username ftp-directory flash:/
For example, to configure the FTP directory for the ftp user, run the following command:
[Quidway-aaa] local-user ftp ftp-directory flash:/
13. Run the following command to switch back to the system view:
[Quidway-aaa] quit
14. Connect the local computer to a switch by a network cable, and assign the IP address of the local computer and
management IP address of the switch to the same network segment.
The port used to connect the local computer must be idle. If the port has been disabled, perform the operations provided in Configuring
VLANs That Are Allowed to Pass Through the Port .
17. Run the following command to establish an FTP connection with the switch on the CLI:
ftp Management IP address of the switch
For example, if the management IP address of the switch is 10.85.199.47, run the following command:
ftp 10.85.199.47
19. Run the following command to set the FTP transfer mode to binary:
ftp> binary
20. Run the following command to configure the path in which the script is stored on the local computer:
ftp> lcd Path in which the script is stored on the local computer
For example, if the script file is saved in E:\A01_LSW01_S5300 on the local computer, run the following command:
ftp> lcd E:\A01_LSW01_S5300
21. Run the following command to upload the script to the switch:
ftp> put Local file name FTP file name
For example, to upload the vrpcfg.cfg file to the switch and save it as vrpcfg.cfg, run the following command:
ftp> put vrpcfg.cfg vrpcfg.cfg
The command is successfully executed if information similar to the following is displayed:
23. Run the following command in the switch system view to check whether the slot ID is 0:
[Quidway] display stack
The command is successfully executed if information similar to the following is displayed:
If yes, go to 25.
If no, go to 24.
The slot number of switch 1 in a stacking system must be 0. If the slot number is not 0, change it manually.
24. Run the following command to change the slot number of switch 1 to 0:
[Quidway] stack slot 1 renumber 0
Restart switch 1.
25. Run the following command to switch back to the user view:
[Quidway] quit
26. Run the following command to set the script as the default configuration file:
<Quidway> startup saved-configuration vrpcfg.cfg
The command is successfully executed if information similar to the following is displayed:
Warning: All the configuration will be saved to the configuration file for the n
To prevent switch configuration errors in case that the current configuration data overwrites the data in the configuration script, do not
save the configuration in this step.
Power on switch 2.
SSHv1 has security risks. You are advised to use secure protocols such as SSHv2.
31. Run the following command to switch to the system view after switch 2 is powered on:
<Quidway> system-view
32. Run the following command to create a local Rivest Shamir Adleman (RSA) key pair:
[Quidway] rsa local-key-pair create
Information similar to the following is displayed:
...
The key length of the RSA asymmetric encryption algorithm must be 2048 bits or more.
34. Run the following command to set the maximum number of logins supported by SSH:
[Quidway] user-interface maximum-vty 15
36. Run the following command to set the authentication mode to aaa:
[Quidway-ui-vty-0-14] authentication-mode aaa
38. Run the following command to switch back to the system view:
[Quidway-ui-vty-0-14] quit
39. Run the following command to set the authentication type to password authentication:
[Quidway] ssh authentication-type default password
41. Run the following command to set the password of the root user:
[Quidway-aaa] local-user root password cipher Password of the root user
The password length cannot exceed 24 characters.
The password is set successfully if information similar to the following is displayed:
42. Run the following command to set rights for the root user:
[Quidway-aaa] local-user root privilege level 3
43. Run the following command to set the type of the available services for the root user to SSH:
[Quidway-aaa] local-user root service-type ssh telnet
44. Run the following command to switch back to the system view:
[Quidway-aaa] quit
46. Run the following command to switch back to the user view:
[Quidway] quit
Scenarios
After the S5352 switches have been configured, check key data of the switches to ensure that they are correctly configured.
Process
Figure 1 shows the process for checking the key data of the switch.
Telnet has security risks. You are advised to use secure protocols such as SSHv2.
Procedure
Check the VLANs that are allowed to pass through the ports.
The default view name of the switch is <Quidway>. If the name of the switch is changed, the switch view is <New switch name>.
The information in the following steps is an example.
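2. Run the following command to check the current configuration (a standard VRP command, assumed here; verify on your switch):
[Quidway] display current-configuration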
sysname A02_LSW01_S5352
......
interface Vlanif2
interface MEth0/0/1
interface Eth-Trunk0
stp disable
interface GigabitEthernet0/0/1
ntdp enable
ndp enable
bpdu enable
......
return
3. Ensure that the configured VLAN ID of each port is the same as planned.
If the network is configured as described in Network Device Configuration Requirements , ensure that the switch port to
which each host network plane NIC connects allows the packets tagged with the network plane VLAN to pass through. If a
NIC is shared by multiple planes, ensure that the switch port to which the NIC connects allows the packets tagged with
VLANs of these network planes to pass through.
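To spot-check a single port, a quick alternative is the display port vlan command, which lists a port's link type and allowed VLANs (a standard VRP command; the port number below is hypothetical):
[Quidway] display port vlan GigabitEthernet0/0/1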
4. Run the following command to check the detailed information about VLAN 1:
[Quidway] display vlan 1 verbose
Information similar to the following is displayed:
* : management-vlan
---------------------
VLAN ID :1
Status : Enable
Broadcast : Enable
Statistics : Disable
Property : default
----------------
GigabitEthernet1/0/13 GigabitEthernet1/0/14
----------------
Interface Physical
GigabitEthernet0/0/11 DOWN
GigabitEthernet0/0/12 DOWN
GigabitEthernet0/0/13 DOWN
GigabitEthernet0/0/14 DOWN
If there is no information under VLAN 1, all switch ports have been removed from VLAN 1.
If there is port information under VLAN 1 and Physical of the port is DOWN, the port has been disabled.
VLAN 1 is the default VLAN, and all switch ports are assigned to VLAN 1 by default.
To reduce the broadcast domain and enhance user security, VLAN 1 must be removed from all ports.
If VLAN 1 is not removed from a port, the idle port must be disabled.
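A minimal sketch of both remediations on one idle port, assuming it is a trunk port and using a hypothetical port number:
[Quidway] interface GigabitEthernet0/0/11
[Quidway-GigabitEthernet0/0/11] undo port trunk allow-pass vlan 1
[Quidway-GigabitEthernet0/0/11] shutdown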
6. Run the following command to check the read/write community name of the SNMP:
[Quidway] display snmp-agent community
Information similar to the following is displayed:
Community name:comaccess1
Group name:comaccess1
Storage-type: nonVolatile
Version :3
Hello time(s) :2
10. Run the following command to check the status of the switch port:
[Quidway] display interface brief
Information similar to the following is displayed:
PHY: Physical
^down: standby
(l): loopback
(s): spoofing
......
11. According to the data table, ensure that the idle ports have been disabled.
If the value of PHY is up, run the shutdown command to disable ports manually.
0 0 100 100
1 1 100 100
13. Confirm that the values of Current slot-id are 0 and 1, and are the same as the values of Next slot-id.
15. Confirm that the values of Slot# and Role are as follows:
Telnet has security risks. You are advised to use secure protocols such as SSHv2.
16. Connect the local computer to a switch by a network cable, and assign the IP address of the local computer and
management IP address of the switch to the same network segment.
17. Configure the idle port that is used to connect to the local computer.
Configure the VLAN that is allowed to pass through the port as the management plane VLAN. For details, see Configuring
VLANs That Are Allowed to Pass Through the Port .
Microsoft Telnet>
22. Enter the login password of the switch based on the prompt.
When the user view of the switch <Quidway> is displayed in the command output, the switch has been logged in to in the
Telnet mode remotely.
23. Disable the idle port that is used to connect to the local computer.
For details, see Disabling LAN Switch Ports .
Typical Networking
The two S5352 switches function as access switches and work in stacking mode. They are used to connect the management,
storage, and service planes.
If the number of servers in a cabinet is greater than 12, configure an S3328 switch to connect to the BMC.
This section uses a standard six-port server as an example to illustrate the scripts.
Figure 1 shows the downlink port allocation for the S5352 and S3328 switches.
Figure 1 Downlink port allocation for LAN switches in the server cabinet
1: The interfaces connect to the eth1 interfaces of high-density servers 1 to 16 in an ordered sequence.
2: The interfaces connect to the eth3 interfaces of high-density servers 1 to 16 in an ordered sequence.
3: The interfaces connect to the eth5 interfaces of high-density servers 1 to 16 in an ordered sequence.
4: The interfaces connect to the eth0 interfaces of high-density servers 1 to 16 in an ordered sequence.
5: The interfaces connect to the eth2 interfaces of high-density servers 1 to 16 in an ordered sequence.
6: The interfaces connect to the eth4 interfaces of high-density servers 1 to 16 in an ordered sequence.
The uplink ports of the two S5352 switches are the 10GE optical ports provided by the optical port cards. Each switch connects to
the aggregation switch through two 10GE optical ports: one interface is used to transmit packets of the storage plane, and the
other is used to transmit packets of the management and service planes.
The uplink ports of the S3328 switch are two GE optical ports.
Data Planning
Table 1 describes the data planning for the S5352 switches.
The two stacked switches share the management IP address and the switch name.
If the S5352 switches are stacked, the port name format is [X]GigabitEthernet Stacking number/Subboard number/Port number.
If the port rate is GE, the port name is prefixed with GigabitEthernet. If the port rate is 10GE, the port name is prefixed with XGigabitEthernet.
Stacking number: indicates the stacking ID. The value ranges from 0 to 8.
Subboard number: indicates the number of the subboard supported by the interface board. The value is 0 or 1. The subboard number on the switch itself is 0; the subboard number is 1 when the optical interface board is used.
Port number: indicates the port number of the device. The ports on the switch are numbered from 1 in ascending order from bottom to top and left to right. The ports on the front board are numbered from 1 in ascending order from left to right.
For example, GigabitEthernet1/0/16 is GE port 16 on the front board of stack member 1, and XGigabitEthernet0/1/1 is 10GE port 1 on the optical interface board of stack member 0.
Server | Server Port | Switch | Mgmt IP Address of the Switch | Switch Port | VLAN ID | VLAN Plane | VLAN Mode
A02_CNA02_X6000_S4 0/0/4
A02_CNA03_X6000_S5 0/0/5
A02_CNA04_X6000_S6 0/0/6
A02_CNA05_X6000_S7 0/0/7
A02_CNA06_X6000_S8 0/0/8
A02_CNA07_X6000_S9 0/0/9
A02_CNA08_X6000_S10 0/0/10
A02_CNA09_X6000_S11 0/0/11
A02_CNA10_X6000_S12 0/0/12
A02_CNA11_X6000_S13 0/0/13
A02_CNA12_X6000_S14 0/0/14
A02_CNA13_X6000_S15 0/0/15
A02_CNA14_X6000_S16 0/0/16
A02_CNA04_X6000_S6 0/0/22
A02_CNA05_X6000_S7 0/0/23
A02_CNA06_X6000_S8 0/0/24
A02_CNA07_X6000_S9 0/0/25
A02_CNA08_X6000_S10 0/0/26
A02_CNA09_X6000_S11 0/0/27
A02_CNA10_X6000_S12 0/0/28
A02_CNA11_X6000_S13 0/0/29
A02_CNA12_X6000_S14 0/0/30
A02_CNA13_X6000_S15 0/0/31
A02_CNA14_X6000_S16 0/0/32
A02_CNA04_X6000_S6 0/0/38
A02_CNA05_X6000_S7 0/0/39
A02_CNA06_X6000_S8 0/0/40
A02_CNA07_X6000_S9 0/0/41
A02_CNA08_X6000_S10 0/0/42
A02_CNA09_X6000_S11 0/0/43
A02_CNA10_X6000_S12 0/0/44
A02_CNA11_X6000_S13 0/0/45
A02_CNA12_X6000_S14 0/0/46
A02_CNA13_X6000_S15 0/0/47
A02_CNA14_X6000_S16 0/0/48
A02_CNA02_X6000_S4 1/0/4
A02_CNA03_X6000_S5 1/0/5
A02_CNA04_X6000_S6 1/0/6
A02_CNA05_X6000_S7 1/0/7
A02_CNA06_X6000_S8 1/0/8
A02_CNA07_X6000_S9 1/0/9
A02_CNA08_X6000_S10 1/0/10
A02_CNA09_X6000_S11 1/0/11
A02_CNA10_X6000_S12 1/0/12
A02_CNA11_X6000_S13 1/0/13
A02_CNA12_X6000_S14 1/0/14
A02_CNA13_X6000_S15 1/0/15
A02_CNA14_X6000_S16 1/0/16
A02_CNA04_X6000_S6 1/0/22
A02_CNA05_X6000_S7 1/0/23
A02_CNA06_X6000_S8 1/0/24
A02_CNA07_X6000_S9 1/0/25
A02_CNA08_X6000_S10 1/0/26
A02_CNA09_X6000_S11 1/0/27
A02_CNA10_X6000_S12 1/0/28
A02_CNA11_X6000_S13 1/0/29
A02_CNA12_X6000_S14 1/0/30
A02_CNA13_X6000_S15 1/0/31
A02_CNA14_X6000_S16 1/0/32
A02_CNA04_X6000_S6 1/0/38
A02_CNA05_X6000_S7 1/0/39
A02_CNA06_X6000_S8 1/0/40
A02_CNA07_X6000_S9 1/0/41
A02_CNA08_X6000_S10 1/0/42
A02_CNA09_X6000_S11 1/0/43
A02_CNA10_X6000_S12 1/0/44
A02_CNA11_X6000_S13 1/0/45
A02_CNA12_X6000_S14 1/0/46
A02_CNA13_X6000_S15 1/0/47
A02_CNA14_X6000_S16 1/0/48
Access Switch | Access Switch Port | Aggregation Switch | Aggregation Switch Port | Eth-Trunk Number | Eth-Trunk VLAN ID | VLAN Plane
Example Scripts
For details, see "Reference" > "Command Reference" in Quidway S5300 Series Ethernet Switches Product Documentation.
FTP has security risks. You are advised to use secure protocols such as SFTP and FTPS.
Telnet has security risks. You are advised to use secure protocols such as SSHv2.
sysname A02_LSW01_S5352
#By default, VLAN 4093 is used to exchange stack protocol messages between stacked switches. Other services cannot use this VLAN.
#The management plane VLAN ID is 2, the storage plane VLAN IDs are 11 to 14, and the service plane VLAN IDs are 51 to 52 and 101 to 2000.
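#Hypothetical illustration (not part of the delivered vrpcfg.cfg): the VLAN plan above could be created in one batch as follows
vlan batch 2 11 to 14 51 to 52 101 to 2000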
#To disable the Address Resolution Protocol (ARP) strict learning function
#To enable the Huawei Group Management Protocol (HGMP) cluster function
cluster enable
ntdp enable
ntdp hop 16
ndp enable
#To configure the FTP username, password, and directory. For example, set the username to ftp, password to ftppwd, and directory to flash:/.
#To configure the Telnet username and password. For example, set the username to telnet and password to telnetpwd.
#To configure the Console username and password. For example, set the username to console and password to consolepwd.
aaa
authentication-scheme default
authorization-scheme default
accounting-scheme default
domain default
domain default_admin
#To configure the management IP address of the switch. This IP address and the server management IP address can be on the same VLAN
interface Vlanif2
interface MEth0/0/1
#To create Eth-Trunk 0 to connect the management and service planes to the aggregation switch
interface Eth-Trunk0
stp disable
#To create Eth-Trunk 1 to connect the storage plane to the aggregation switch
interface Eth-Trunk1
stp disable
#To assign ports 0/0/1 to 0/0/16 of switch 1 to the management plane VLAN
interface GigabitEthernet0/0/1
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/2
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/3
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/4
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/5
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/6
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/7
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/8
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/9
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/10
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/11
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/12
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/13
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/14
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/15
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet0/0/16
ntdp enable
ndp enable
bpdu enable
#To disable ports 0/0/17 and 0/0/18 because two VRM servers do not need to connect to the storage plane
interface GigabitEthernet0/0/17
shutdown
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/18
shutdown
bpdu enable
ntdp enable
ndp enable
#To assign ports 0/0/19 to 0/0/32 of switch 1 to the storage plane VLAN 11 and VLAN 12
interface GigabitEthernet0/0/19
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/20
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/21
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/22
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/23
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/24
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/25
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/26
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/27
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/28
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/29
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/30
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/31
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/32
bpdu enable
ntdp enable
ndp enable
#To assign ports 0/0/33 to 0/0/48 of switch 1 to the service plane VLAN
interface GigabitEthernet0/0/33
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/34
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/35
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/36
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/37
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/38
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/39
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/40
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/41
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/42
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/43
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/44
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/45
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/46
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/47
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet0/0/48
bpdu enable
ntdp enable
ndp enable
#To configure two 10GE optical ports of switch 1 in the trunk mode, as the uplink port connected to the aggregation switch
#To connect the 10GE port of 0/1/1 as the common Eth-Trunk port of management plane and service plane to the aggregation switch
#To connect the 10GE port of 0/1/2 as the Eth-Trunk port of storage plane to the aggregation switch
interface XGigabitEthernet0/1/1
eth-trunk 0
interface XGigabitEthernet0/1/2
eth-trunk 1
#To assign ports 0/0/1 to 0/0/16 of switch 2 to the management plane VLAN
interface GigabitEthernet1/0/1
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/2
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/3
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/4
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/5
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/6
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/7
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/8
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/9
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/10
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/11
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/12
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/13
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/14
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/15
ntdp enable
ndp enable
bpdu enable
interface GigabitEthernet1/0/16
ntdp enable
ndp enable
bpdu enable
#To disable ports 1/0/17 and 1/0/18 because two VRM servers do not need to connect to the storage plane
interface GigabitEthernet1/0/17
shutdown
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/18
shutdown
bpdu enable
ntdp enable
ndp enable
#To assign ports 1/0/19 to 1/0/32 of switch 2 to the storage plane VLAN 13 and VLAN 14
interface GigabitEthernet1/0/19
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/20
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/21
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/22
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/23
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/24
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/25
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/26
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/27
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/28
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/29
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/30
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/31
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/32
bpdu enable
ntdp enable
ndp enable
#To assign ports 1/0/33 to 1/0/48 of switch 2 to the service plane VLAN
interface GigabitEthernet1/0/33
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/34
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/35
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/36
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/37
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/38
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/39
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/40
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/41
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/42
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/43
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/44
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/45
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/46
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/47
bpdu enable
ntdp enable
ndp enable
interface GigabitEthernet1/0/48
bpdu enable
ntdp enable
ndp enable
#To configure two 10GE optical ports of switch 2 in the trunk mode, as the uplink port connected to the aggregation switch
#To connect the 10GE port of 1/1/1 as the common Eth-Trunk port of management plane and service plane to the aggregation switch
#To connect the 10GE port of 1/1/2 as the Eth-Trunk port of storage plane to the aggregation switch
interface XGigabitEthernet1/1/1
eth-trunk 0
interface XGigabitEthernet1/1/2
eth-trunk 1
interface NULL0
#To set the SNMP community name to comaccess1 and assign the read and write permissions
snmp-agent
#To configure the switch system to synchronize time with the VRM host
#To configure the verification mode and command level for Console and Telnet users to connect to the switch
user-interface con 0
user-interface vty 0 4
authentication-mode aaa
idle-timeout 5 0
return
Preparations
Checking Switches
Configuring Switches
3.2.3.1 Overview
Purpose
Configure Huawei Quidway S9303 Terabit routing switches (S9303 for short), Huawei Quidway S9306 Terabit routing switches
(S9306 for short), or Huawei Quidway S9312 Terabit routing switches (S9312 for short) by using a script in FusionCompute.
The commands and methods for configuring the S9306 switch are the same as for the S9303 and S9312 switches; the only difference is that the
numbers of supported Line Processing Units (LPUs) are different. This section uses the S9306 switch as the aggregation switch to describe the
data configuration.
Process
Figure 1 shows the process for configuring S9300 series switches.
Figure 1 Process
3.2.3.2 Preparations
Software
PC terminal emulator: This software product can emulate the function of the switch terminal through serial interactions with the switch. The Windows OS supports serial port tools, such as PuTTY, ttermpro.exe, and HyperTrm.exe.

Documents
Integration Design Data Plan Template: This document describes the data planning for FusionCompute deployment. Source: network integration design documentation.
S9300 Product Documentation: This document provides information about S9300 series switches and the configuration and reference information.
FusionCompute 8.8.0 Version Mapping: This document provides the version mapping of FusionCompute products.
For the two preceding documents: enterprise users visit https://support.huawei.com/enterprise, search for the document by name, and download it; carrier users visit https://support.huawei.com, search for the document by name, and download it.
Scenarios
Before configuring a switch, connect the switch and the local computer through the Console port to check the switch working
status and ensure that conditions are met for configuring the switch.
Process
Figure 1 shows the process for checking the switch.
Procedure
Log in to the switch through the Console port.
1. Run the emulation program on the local computer to create a connection between the local computer and the switch.
Key parameters are as follows:
Position of the switch port: CONSOLE port on the right of the front panel
<Quidway> system-view
2. Run the following command to check the software version of the switch:
[Quidway] display version
Information similar to the following is displayed. The value of Software Version indicates the switch software version.
Quidway S9306 Terabit Routing Switch uptime is 0 week, 0 day, 1 hour, 3 minutes
...
Supporting PoE : No
4. Confirm that the switch software version meets the product version mapping.
If the switch software version is not consistent with the planned version, download the corresponding software and the
upgrade guide and perform an upgrade.
5. Run the following command to check switch log information in the log buffer:
[Quidway] display logbuffer summary
The command is successfully executed if information similar to the following is displayed. Columns EMERG to DEBUG
indicate the exception logs of different severities. The number corresponding to each parameter indicates the quantity of
the specified type of exception logs.
13 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0
4 0 0 0 0 0 0 0 0
5 0 0 0 0 0 0 0 0
9 0 0 0 0 0 0 0 0
11 0 0 0 0 0 0 0 0
If all the values from EMERG to DEBUG are 0, no exception log is generated.
If exception logs are generated, handle the issue by performing the operations provided in "S9300 User Manual" >
"Reference" > "Log Reference" in Quidway S9300 Terabit Routing Switch Product Documentation.
----------------------------------------------------------------------------
NO alarm
----------------------------------------------------------------------------
If alarms are generated, clear the alarms by performing the operations provided in "S9300 User Manual" >
"Reference" > "Log Reference" in Quidway S9300 Terabit Routing Switch Product Documentation.
Scenarios
After ensuring that the switches meet the basic requirements for data configuration, configure the two S9300 switches in stacking mode by running the required commands, and configure the switch data to enable data switching at layer 2 (data link layer) and layer 3 (network layer).
The commands and methods used to configure the S9306, S9303, and S9312 switches are the same. This document uses S9306
switches as an example.
Process
Figure 1 shows the process for configuring the switch.
SNMPv1 and SNMPv2 have security risks. You are advised to use secure protocols such as SNMPv3.
Procedure
Configure switch stacking.
2. Run the following command to set the frame IDs of the member switches:
[Quidway] set css id Frame ID
For example, if the switch frame ID is 2, run the following command:
[Quidway] set css id 2
The frame IDs of the member switches must be 1 and 2, respectively. If the same frame ID is set for both switches, they cannot be configured as a switch stack.
The default frame ID is 1. Change the frame ID to 2 for one of the switches.
A larger value indicates a higher priority. The default priority is 1. This setting is not required if the default priority 1 is used.
4. Run the following command to specify one member switch as the active switch:
[Quidway] css master force frame Frame ID
For example, to define switch 1 as the active switch in the switch stack, run the following command:
[Quidway] css master force frame 1
5. Run the following command to enable the stacking function of the switches:
[Quidway] css enable
The command is successfully executed if information similar to the following is displayed:
Reboot needed to change CSS config. Are you sure this operation and reboot now?
[Y/N]
Frame ID 1
Priority 255
Enable switch On
Run this command again after the two stacked switches have been configured. In the command output, the switch for which CSS status
is master is the active one; the switch for which CSS status is backup is the standby one.
Subboard number: number of the subboard supported by the interface board, either 0 or 1
9. Run the following command to switch to the switch's management port Ethernet1/0/0/0:
[Quidway] interface Ethernet1/0/0/0
11. Run the following command to create multiple VLANs in batches in the system view:
[Quidway] vlan batch vlan-id1 to vlan-idn
For example, to create VLANs with IDs 2 to 7, 20, 301, 351, 401, 451, and 4000 in one batch, run the following command:
[Quidway] vlan batch 2 to 7 20 301 351 401 451 4000
13. Run the following command to configure the IP address for the VLANIF interface:
[Quidway-Vlanifx] ip address ip-address mask
For example, to set the IP address of the VLANIF interface to 10.18.161.2 and subnet mask to 255.255.255.0, run the
following command:
[Quidway-Vlanif2] ip address 10.18.161.2 255.255.255.0
14. Repeat 12 and 13 to create multiple VLANIF interfaces and configure their IP addresses.
VLANIF interfaces, which are logical interfaces of VLANs, need to be created only when the S9306 switches communicate with the
network layer. A VLANIF interface is a network layer interface and can be configured with an IP address. With the VLANIF interface,
the S9306 switches can communicate with other devices at the network layer.
16. Run the following command to configure the link type of the interface according to the network plan:
[Quidway-GigabitEthernet1/4/0/1] port link-type access | hybrid | trunk
For example, to set the link type of the interface to trunk, run the following command:
[Quidway-GigabitEthernet1/4/0/1] port link-type trunk
Application scenarios of different link types are as follows:
Ports of the access link type are used to connect user hosts over access links. Ethernet frames transmitted over the
access links do not carry tags. If a default VLAN is configured for the access port, the packets sent from this port are
tagged with VID set to the default VLAN ID. In this case, only Ethernet frames tagged with the default VLAN ID can
be transmitted over the access link.
Ports of the trunk link type are used to connect other switch devices over trunk links. A trunk port allows frames of
multiple VLANs to pass through.
Ports of the hybrid type are used to connect user hosts and other switch devices over access or trunk link. A hybrid
port allows frames of multiple VLANs to pass through, and can determine whether to strip the VLAN tags off the
VLAN frames when sending the VLAN frames out.
Link Type | Method
access Run the following command to set the default VLAN so that after receiving an untagged frame, the access port can add a
default VLAN tag to the frame:
[Quidway-GigabitEthernet1/4/0/1] port default vlan vlan-id
For example, if the default VLAN ID is 4000, run the following command:
[Quidway-GigabitEthernet1/4/0/1] port default vlan 4000
trunk Run the following command to remove VLAN 1 from the VLANs that are allowed to pass through the trunk port:
[Quidway-GigabitEthernet1/4/0/1] undo port trunk allow-pass vlan 1
Run the following command to configure the VLANs that are allowed to pass through the trunk port:
[Quidway-GigabitEthernet1/4/0/1] port trunk allow-pass vlan vlan-id range
For example, to add interface 1/4/0/1 to VLAN 4001, run the following command:
[Quidway-GigabitEthernet1/4/0/1] port trunk allow-pass vlan 4001
hybrid Run the following command to set the default VLAN of the hybrid port. For example, to set the default VLAN ID to 4001, run the following command:
[Quidway-GigabitEthernet1/4/0/1] port hybrid pvid vlan 4001
Run the following command to configure the VLANs that are allowed to pass through the hybrid port:
[Quidway-GigabitEthernet1/4/0/1] port hybrid tagged | untagged vlan vlan-id range
For example, to add interface 1/4/0/1 to VLANs 2 to 4 and 4001, run the following command:
[Quidway-GigabitEthernet1/4/0/1] port hybrid untagged vlan 2 to 4 4001
19. Run the following command to start the SNMP Agent service:
[Quidway] snmp-agent
SNMPv1 and SNMPv2 have security risks. You are advised to use security protocols such as SNMPv3.
21. Run the following commands to configure the SNMP community name and read/write permissions:
[Quidway] snmp-agent community read comaccess1
[Quidway] snmp-agent community write comaccess1
22. Run the following command to switch back to the user view:
[Quidway]quit
23. Run the following command to reset the local time zone to the Universal Time Coordinated (UTC) time zone:
<Quidway>undo clock timezone
24. Run the following command to set the local time zone name and the offset time compared with the UTC time:
<Quidway>clock timezone time-zone-name {add | minus} offset
For example, to set the local time zone name to BeiJing with an offset of 08:00:00 ahead of UTC, run the following command:
<Quidway>clock timezone BeiJing add 08:00:00
26. Run the following command to check the current time and date of the system:
<Quidway>display clock
When the S9306 switch serves as an aggregation switch, configure data based on Typical Configuration Tasks
(Aggregation Layer) .
When the S9306 switch serves as a core switch, configure data based on Typical Configuration Tasks (Core
Layer) .
When the S9306 switch serves as both an aggregation switch and a core switch, configure data based on both
Typical Configuration Tasks (Aggregation Layer) and Typical Configuration Tasks (Core Layer) .
Typical configuration tasks described in this document are intended for use by software testing engineers as a reference. The
configuration for a specific site must be performed in accordance with the networking plan and the switch's configuration guide.
Scenarios
After S9300 series switches have been configured, check key data of the switches to ensure that they are correctly configured.
Process
Figure 1 shows the process for checking the key data of the switch.
Procedure
Check the VLANs that are allowed to pass through the ports.
The default view name of the switch is <Quidway>. If the name of the switch is changed, the switch view is <New switch name>.
The information in the following steps is an example.
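The configuration summary shown below is typically obtained with a command such as the following (the exact command for this step is elided in this document and is given here as an assumption):
[Quidway] display current-configuration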
sysname Quidway
vlan batch 10
interface Vlanif10
......
interface GigabitEthernet1/0/4
......
user-interface con 0
history-command max-size 30
user-interface vty 0 4
return
3. Ensure that the configured VLAN ID of each port is the same as planned.
If the network is configured as described in Network Device Configuration Requirements , ensure that the switch port to
which each host network plane NIC connects allows the packets tagged with the network plane VLAN to pass through. If a
NIC is shared by multiple planes, ensure that the switch port to which the NIC connects allows the packets tagged with
VLANs of these network planes to pass through.
4. Run the following command to check the detailed information about VLAN 1:
[Quidway] display vlan 1 verbose
Information similar to the following is displayed:
* : management-vlan
---------------------
VLAN ID :1
Status : Enable
Broadcast : Enable
Statistics : Disable
Property : default
----------------
GigabitEthernet1/0/13 GigabitEthernet1/0/14
----------------
Interface Physical
GigabitEthernet0/0/11 DOWN
GigabitEthernet0/0/12 DOWN
GigabitEthernet0/0/13 DOWN
GigabitEthernet0/0/14 DOWN
If there is no information under VLAN 1, all switch ports have been removed from VLAN 1.
If there is port information under VLAN 1 and Physical of the port is DOWN, the port has been disabled.
VLAN 1 is the default VLAN, and all switch ports are assigned to VLAN 1 by default.
To reduce the broadcast domain and enhance user security, VLAN 1 must be removed from all ports.
If VLAN 1 is not removed from a port, the idle port must be disabled.
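For example, to disable an idle port from which VLAN 1 has not been removed (the interface number is illustrative), run the following commands:
[Quidway] interface GigabitEthernet 0/0/11
[Quidway-GigabitEthernet0/0/11] shutdown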
6. Run the following command to check detailed information about the DHCP server group:
[Quidway] display dhcp relay all
Information similar to the following is displayed:
No information is displayed for the VLANIF interface that is not configured with the DHCP relay function.
7. Check that data is correctly configured for the DHCP relay function.
If the Server IP address and Gateway address in use values are the same as those planned for the DHCP server, the
DHCP relay function is enabled.
8. Run the following command to check the IP address of the VLANIF interface:
[Quidway] display ip interface brief vlanif interface-number
For example, to view the IP address of VLANIF interface 15, run the following command:
[Quidway] display ip interface brief vlanif 15
Information similar to the following is displayed:
(l): loopback
(s): spoofing
Vlanif15 10.1.1.119/24 up up
9. Confirm that the displayed IP address is the same as that planned for the VLANIF interface.
10. Run the following command to check the general information about the stacking:
[Quidway] display css status all
Information similar to the following is displayed:
Frame ID 1
Priority 1
Enable switch On
Frame ID 2
Priority 1
Enable switch On
CSS status is displayed as master for one switch and as backup for the other switch.
Typical Tasks
Table 1 lists the tasks and purpose of configuring the key functions of S9306 switches at the aggregation layer.
Configuring the DHCP Relay for Aggregation Switches: Applicable to the scenario where the DHCP server and the DHCP pool use different subnets when you install hosts in PXE mode.
Configuring an Eth-Trunk Interface: Connect the S9306 switch to the access switch in Eth-Trunk mode.
Disabling the Strict ARP Learning Function: Ensure that VMs can connect to the Internet immediately after being migrated.
Configuring a Rate Limit for ARP Packets: Prevent repeated server switchovers when the network is disconnected or is under capacity expansion.
1. Run the following command to enable DHCP relay in the system view:
[Quidway] dhcp enable
3. Run the following command to add a DHCP server to the DHCP server group:
[Quidway-dhcp-server-group-dhcp-group1] dhcp-server dhcp-server-ip-address
For example, if the IP address of the DHCP server is 10.18.163.15, run the following command:
[Quidway-dhcp-server-group-dhcp-group1] dhcp-server 10.18.163.15
4. Run the following commands to switch to the VLANIF interface view and enable the DHCP relay function:
[Quidway] interface vlanif 351
With the DHCP relay function, the VLANIF interface can forward DHCP packets to an external DHCP server, which then assigns IP
addresses to DHCP clients. DHCP relay is required if the DHCP server and clients are not on the same subnet.
6. Run the following command to configure the DHCP server group to which the DHCP relay belongs:
[Quidway-Vlanif351] dhcp relay server-select group-name
For example, if the DHCP server group is dhcp-group1, run the following command:
[Quidway-Vlanif351] dhcp relay server-select dhcp-group1
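Putting the DHCP relay steps together, a minimal sketch follows (the DHCP server group creation and the dhcp select relay command are assumed from standard switch usage, as the corresponding steps are elided above; names and addresses reuse the example values):
[Quidway] dhcp enable
[Quidway] dhcp server group dhcp-group1
[Quidway-dhcp-server-group-dhcp-group1] dhcp-server 10.18.163.15
[Quidway-dhcp-server-group-dhcp-group1] quit
[Quidway] interface vlanif 351
[Quidway-Vlanif351] dhcp select relay
[Quidway-Vlanif351] dhcp relay server-select dhcp-group1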
Prerequisites
The purpose of configuring the Eth-Trunk interface is to enhance the reliability or improve the bandwidth between two
switches. At least two network cables are used to connect the two switches.
The access switches have also been configured to enable connection to the aggregation switches in Eth-Trunk mode.
Procedure
1. Run the following command to create an Eth-Trunk interface in the system view:
[Quidway] interface eth-trunk Eth-Trunk number
For example, to create an Eth-Trunk interface with ID 0, run the following command:
[Quidway] interface eth-trunk 0
2. Run the following command to set the link type of the Eth-Trunk interface to trunk:
[Quidway-Eth-Trunk0] port link-type trunk
3. Run the following command to remove VLAN 1 from the VLANs that are allowed to pass through the Eth-Trunk
interface:
[Quidway-Eth-Trunk0] undo port trunk allow-pass vlan 1
4. Run the following command to configure the VLANs that are allowed to pass through the Eth-Trunk interface:
[Quidway-Eth-Trunk0] port trunk allow-pass vlan vlan-id range
For example, to add Eth-Trunk 0 to VLAN 2 to VLAN 6, run the following command:
[Quidway-Eth-Trunk0] port trunk allow-pass vlan 2 to 6
6. Run the following command to switch to the interface view of the port to be added to the Eth-Trunk interface:
[Quidway] interface GigabitEthernet interface-number
For example, to enter the interface view of Gigabit Ethernet interface 1/1/0/24, run the following command:
[Quidway] interface GigabitEthernet 1/1/0/24
7. Run the following command to check whether the port is configured:
[Quidway-GigabitEthernet1/1/0/24] display this
If the command output is blank, the port is not configured.
If the port has been configured, run the related commands to delete the configuration.
The following commands are commonly used to delete such configuration:
undo ntdp enable
undo ndp enable
8. Run the following command to add the port to the Eth-Trunk interface:
[Quidway-GigabitEthernet1/1/0/24] eth-trunk trunk-id
For example, to add the port with ID 0 to the Eth-Trunk interface, run the following command:
[Quidway-GigabitEthernet1/1/0/24] eth-trunk 0
10. Repeat 6 to 9 to add the other port to the same Eth-Trunk interface.
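To verify the result, you can check the Eth-Trunk member ports afterwards, for example (assuming Eth-Trunk 0 as above):
[Quidway] display eth-trunk 0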
2. Run the following command to disable the strict ARP learning function:
[Quidway] undo arp learning strict
1. Set the speed limit on ARP Miss packets from each IP address to 5 packets per second.
[Quidway] arp speed-limit source-ip maximum 5
2. Set the speed limit on ARP Miss packets from any of the IP addresses of the active and standby VRM servers to 200
packets per second.
The setting must be made for all the management and floating IP addresses of the active and standby VRM servers.
For example, if the IP address of the active VRM is 10.85.199.5 and the maximum rate of ARP Miss packets is 200, run the
following command:
[Quidway] arp speed-limit source-ip 10.85.199.5 maximum 200
Typical Tasks
Table 1 describes the tasks and purpose of configuring the key functions of S9306 switches at the core layer.
Configuring Data for Core Switches to Interconnect with Service Gateways: Configure data for the switches to communicate with service gateways, including firewalls and VPN access gateways, if they are interconnected with switches in bypass mode.
Configuring Static Routes: Enable static routing for precise routing control.
Configuring OSPF Dynamic Routes: Enable dynamic routing using the Open Shortest Path First (OSPF) protocol in the case of complicated network configurations.
Prerequisites
The VLAN to connect to the service gateways has been created. The VLAN IDs used by the core switches must be the same as
those configured on the service gateways.
Procedure
3. Run the following command to configure the IP address for the VLANIF interface:
[Quidway-Vlanif20] ip address ip-address mask
For example, to set the IP address of the VLANIF interface to 10.18.163.130 and subnet mask to 255.255.255.192, run the
following command:
[Quidway-Vlanif20] ip address 10.18.163.130 255.255.255.192
6. Run the following command to set the link type of the Eth-Trunk interface to trunk:
[Quidway-Eth-Trunk10] port link-type trunk
7. Run the following command to remove VLAN 1 from the VLANs that are allowed to pass through the Eth-Trunk
interface:
[Quidway-Eth-Trunk10] undo port trunk allow-pass vlan 1
8. Run the following command to configure the VLANs that are allowed to pass through the Eth-Trunk interface:
[Quidway-Eth-Trunk10] port trunk allow-pass vlan vlan-id
For example, to add Eth-Trunk 10 to VLAN 20, run the following command:
[Quidway-Eth-Trunk10] port trunk allow-pass vlan 20
10. Run the following command to switch to the interface view of the port to be added to the Eth-Trunk interface:
[Quidway] interface GigabitEthernet interface-number
For example, to enter the interface view of Gigabit Ethernet interface 1/1/0/44, run the following command:
[Quidway] interface GigabitEthernet 1/1/0/44
11. Run the following command to check whether the port is configured:
[Quidway-GigabitEthernet1/1/0/44] display this
If the command output is blank, the port is not configured.
If the port has been configured, run the related command to delete the configuration.
12. Run the following command to add the port to the Eth-Trunk interface:
[Quidway-GigabitEthernet1/1/0/44] eth-trunk trunk-id
For example, to add the port with ID 10 to the Eth-Trunk interface, run the following command:
[Quidway-GigabitEthernet1/1/0/44] eth-trunk 10
13. Run the following command to switch back to the system view:
[Quidway-GigabitEthernet1/1/0/44] quit
14. Repeat 10 to 13 to add the other port to the same Eth-Trunk interface.
15. Configure a static route to the service gateway on the switch. For details, see Configuring Static Routes.
ip-address and mask specify the range of the destination IP address. Use 0.0.0.0 for both parameters if you need to set a default route.
nexthop-address specifies the IP address of the next hop, for example, the IP address of a sub-interface of the firewall.
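For example, a default route whose next hop is a firewall sub-interface at 10.18.163.129 (the address is illustrative) would be configured as follows:
[Quidway] ip route-static 0.0.0.0 0.0.0.0 10.18.163.129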
Plan the following data before configuring OSPF:
Router ID of the switch: To ensure OSPF stability, the router ID must be determined during network planning and be manually configured. Each switch in an independent system must have a unique router ID. Generally, the router ID of the switch is configured as the IP address of an interface (for example, Loopback0) on the switch. Example value: 10.1.1.1
OSPF process ID: Several OSPF processes can run simultaneously on a switch, independent of and not interfering with each other. The route interaction between different OSPF processes is treated as an interaction between processes that use different routing protocols. Each interface of the switch belongs to only one OSPF process. Example value: 100
ID of the area to which an interface belongs: Logically, an OSPF network is divided into several groups (areas), and each group has its own area ID. The boundary of an area is a router, not a link. Example value: 0
IP address of the network segment: The network segment that uses the IP address of the interface running OSPF. A network segment must belong to an area; that is, each interface running OSPF must belong to a specific area. Example value: 192.168.0.0/24
Prerequisites
The VLAN connecting core switches to external routers, VLANIF interface, and IP addresses have been configured.
Procedure
3. Run the following command to enter the OSPF area view (area-id needs to be confirmed with the customer):
[Quidway-ospf-1] area area-id
For example, if the area ID is 0, run the following command:
[Quidway-ospf-1] area 0
4. Run the following command to specify the network segment running OSPF:
[Quidway-ospf-1-area-0.0.0.0] network ip-address wildcard-mask
For example, to set the network segments running OSPF to 192.168.0.0 0.0.0.255 and 192.168.1.0 0.0.0.255, run the
following commands:
[Quidway-ospf-1-area-0.0.0.0] network 192.168.0.0 0.0.0.255
[Quidway-ospf-1-area-0.0.0.0] network 192.168.1.0 0.0.0.255
6. Run the following command to enter the view of the VLANIF interface that connects the core switch to the external router:
[Quidway] interface interface-number
For example, to enter the interface view of VLANIF 4000, run the following command:
[Quidway] interface Vlanif4000
7. Run the following command to configure the cost for running OSPF on the VLANIF interface:
[Quidway-Vlanif4000] ospf cost cost
For example, if the cost required for running the OSPF protocol is 10, run the following command:
[Quidway-Vlanif4000] ospf cost 10
8. Run the following command to configure P2P as the network type of the VLANIF interface:
[Quidway-Vlanif4000] ospf network-type P2P
10. Configure the physical interface that connects the core switch to the external router to allow the VLAN of the connected
external router to pass through.
For example, to add interface 1/0/0/0 to VLAN 4000, run the following commands:
[Quidway] interface GigabitEthernet 1/0/0/0
[Quidway-GigabitEthernet1/0/0/0] port hybrid pvid vlan 4000
[Quidway-GigabitEthernet1/0/0/0] port hybrid untagged vlan 4000
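Putting the OSPF steps together, a minimal sketch follows (the process creation command with its router ID is assumed from standard switch usage, as the corresponding step is elided above; IDs and addresses reuse the example values):
[Quidway] ospf 100 router-id 10.1.1.1
[Quidway-ospf-100] area 0
[Quidway-ospf-100-area-0.0.0.0] network 192.168.0.0 0.0.0.255
[Quidway-ospf-100-area-0.0.0.0] quit
[Quidway-ospf-100] quit
[Quidway] interface Vlanif4000
[Quidway-Vlanif4000] ospf cost 10
[Quidway-Vlanif4000] ospf network-type P2P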
Isolating VMs
Purpose
In the current cloud platform networking, the S9300 switch provides service gateway functions for VMs. To isolate services on
the S9300 switch, VMs need to be isolated using the ACL.
Procedure
1. Run the following command to create an ACL and enter the ACL view:
[Quidway] acl number acl-number
For example, to create and access an ACL numbered 3001, run the following command:
[Quidway] acl number 3001
2. Run the following command to deny all IP packets from a specified source network segment to a specified destination
network segment:
[Quidway-acl-adv-3001] rule deny ip source source-address source-wildcard destination destination-address
destination-wildcard
For example, to isolate VMs for three network segments, 172.16.128.0/24, 192.168.0.0/16, and 10.0.0.0/8, run the
following commands:
[Quidway-acl-adv-3001] rule deny ip source 172.16.128.0 0.0.0.255 destination 10.0.0.0 0.255.255.255
[Quidway-acl-adv-3001] rule deny ip source 172.16.128.0 0.0.0.255 destination 192.168.0.0 0.0.255.255
[Quidway-acl-adv-3001] rule deny ip source 192.168.0.0 0.0.255.255 destination 172.16.128.0 0.0.0.255
[Quidway-acl-adv-3001] rule deny ip source 192.168.0.0 0.0.255.255 destination 10.0.0.0 0.255.255.255
[Quidway-acl-adv-3001] rule deny ip source 10.0.0.0 0.255.255.255 destination 172.16.128.0 0.0.0.255
[Quidway-acl-adv-3001] rule deny ip source 10.0.0.0 0.255.255.255 destination 192.168.0.0 0.0.255.255
4. Run the following command to create a traffic classifier and enter the traffic classification view:
[Quidway] traffic classifier classifier-name
For example, if the traffic classifier name is DENY, run the following command:
[Quidway] traffic classifier DENY
5. Run the following command to configure rules for matching the traffic classifier (IP packets of the same traffic
classification are processed in the same way):
[Quidway-classifier-DENY] if-match acl acl-number
For example, if the matching rule is 3001 for the ACL traffic classifier, run the following command:
[Quidway-classifier-DENY] if-match acl 3001
7. Run the following command to create a traffic behavior and enter the traffic behavior view:
[Quidway] traffic behavior behavior-name
For example, if the traffic behavior name is deny, run the following command:
[Quidway] traffic behavior deny
8. Run the following commands to configure the traffic behavior to deny the matched packets and exit from the traffic behavior view:
[Quidway-behavior-deny] deny
[Quidway-behavior-deny] quit
10. Run the following command to associate the traffic classifier with the traffic behavior:
[Quidway-trafficpolicy-ACL_DENY] classifier classifier-name behavior behavior-name
For example, if the traffic classifier name is DENY and traffic behavior is deny, run the following command:
[Quidway-trafficpolicy-ACL_DENY] classifier DENY behavior deny
13. Run the following command to apply the traffic policy for inbound traffic:
[Quidway-Eth-Trunk2] traffic-policy policy-name inbound
For example, if the traffic policy is ACL_DENY, run the following command:
[Quidway-Eth-Trunk2] traffic-policy ACL_DENY inbound
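The elided steps in this procedure create the traffic policy and enter the interface view; a minimal end-to-end sketch of that part (the traffic policy creation and interface commands are assumed from standard switch usage, and Eth-Trunk 2 is illustrative) is as follows:
[Quidway] traffic policy ACL_DENY
[Quidway-trafficpolicy-ACL_DENY] classifier DENY behavior deny
[Quidway-trafficpolicy-ACL_DENY] quit
[Quidway] interface eth-trunk 2
[Quidway-Eth-Trunk2] traffic-policy ACL_DENY inbound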
Preparation
Configuring RAID 1
3.2.4.1 Overview
Purpose
Configure the 2288H V5 servers (x86 architecture) or TaiShan 200 servers (model: 2280) (Arm architecture) that are not preinstalled before delivery so that the software can be properly installed on them in FusionCompute.
Process
Figure 1 shows the process for configuring the 2288H V5 servers.
Figure 2 shows the process for configuring the TaiShan 200 servers (model: 2280).
Figure 2 Process for configuring the TaiShan 200 servers (model: 2280) in the Arm architecture
3.2.4.2 Preparation
Documents required:
Integration Design Data Plan Template: Describes the data planning for FusionCompute deployment. Obtain this document from integration design engineers.
FusionServer Pro Rack Server Product Documentation: Describes the detailed information about the 2288H V5 server. For enterprise users: visit https://support.huawei.com/enterprise, search for the document by name, and download it. For carrier users: visit https://support.huawei.com, search for the document by name, and download it.
TaiShan 200 Server Product Documentation: Describes the detailed information about the TaiShan 200 servers (model: 2280). For enterprise users: visit https://support.huawei.com/enterprise, search for the TaiShan 200 server, and download it. For carrier users: visit https://support.huawei.com, search for the TaiShan 200 server, and download the software package.
Scenarios
Log in to the iBMC WebUI using the BMC IP address to set the server parameters.
Process
Figure 1 shows the process for logging in to the server using the BMC.
Procedure
Configure the login environment.
1. Connect the network port of the local computer to the BMC management port of the server using the network cable.
2. Set the IP address of the local computer and default BMC IP address of the server to the same network segment.
For example, set the IP address to 192.168.2.10, and subnet mask to 255.255.255.0.
The default BMC IP address of the server is 192.168.2.100, and the default subnet mask is 255.255.255.0.
3. On the menu bar of the Internet Explorer, choose Tools > Internet Options.
The Internet Options dialog box is displayed.
Windows 10 having Internet Explorer 11 installed is used as an example in the following descriptions.
5. In the Proxy server area, deselect Use a proxy server for your LAN.
6. Click OK.
The Local Area Network (LAN) Settings dialog box is closed.
7. Click OK.
The Internet Options dialog box is closed.
8. Restart the browser, enter https://IP address of the BMC management port in the address bar, and press Enter.
For example, enter https://192.168.2.100.
The system prompts Certificate Error.
10. Enter the username and password and select This iBMC from the Log on to drop-down list.
The default username for logging in to the iBMC system is Administrator, and the default password is Admin@9000.
Change the default password upon your first login to ensure the system security.
12. Check whether the Security Information dialog box asking "Do you want to display the nonsecure items?" is displayed.
If yes, go to 13.
Scenarios
Log in to all servers through their BMC ports to check server version information and the number of hard drives.
Procedure
Check the number and status of the hard drive.
1. On the System Info > Storage > Views page of the iBMC WebUI, check the status of hard drives.
If Health Status is Normal, the hard drive is functional.
iMana Version (x86 architecture) or iBMC Firmware Version (Arm architecture): indicates the BMC version number of the
server.
BIOS Version: indicates the BIOS version number of the server.
3. Check that the firmware version information is properly displayed (Arm architecture) or that the firmware version meets the requirements of the version mapping (x86 architecture).
If the firmware version of an x86 server does not meet the version mapping requirements, obtain the corresponding
software packages and the upgrade guide to upgrade the firmware.
Scenarios
Configure RAID 1 for a server on the BMC WebUI.
Procedure
1. Log in to the iBMC WebUI.
For details, see Logging In to a Server Using the BMC .
Parameter Description
Strip Size Specifies the size of a data strip on each physical drive.
Read Policy Specifies the data read policy of the logical drive.
Read Ahead: The controller pre-reads sequential data or the data predicted to be used and saves it in the cache.
No Read Ahead: The read ahead function is disabled.
Write Policy Specifies the data write policy of the logical drive.
Write Through: Once the drive subsystem receives all data, the controller card notifies the host that data
transmission is complete.
Write Back with BBU: When no battery backup unit (BBU) is configured or the configured BBU is faulty, the
controller automatically switches to the Write Through mode.
Write Back: After the controller cache receives all data, it sends the host a message indicating that data
transmission is complete.
IO Policy Specifies the I/O policy for reading data from special logical drives. This policy does not affect the pre-reading
cache. The value can be either of the following:
Cached IO: All the read and write requests are processed by the cache of the RAID controller. Select this value
only when CacheCade 1.1 is configured.
Direct IO: This value has different meanings in read and write scenarios.
In read scenarios, data is directly read from physical drives. (If Read Policy is set to Read Ahead, data read
requests are processed by the cache of the RAID controller.)
In write scenarios, data write requests are processed by the cache of the RAID controller. (If Write Policy is set
to Write Through, data is directly written into physical drives.)
Disk Cache Policy The disk cache policy can be any of the following:
Enable: indicates that data is written into the cache before being written into the hard drive. This option improves
data write performance. However, data will be lost if there is no protection mechanism against power failures.
Disable: indicates that data is written into a hard drive without caching the data. Data is not lost if power failures
occur.
Disk's default: indicates that the default cache policy is used.
Access Policy Specifies the access policy for the logical drive.
Read Write: Read and write operations are allowed.
Read Only: The logical drive is read-only.
Blocked: Access to the logical drive is denied.
Number of drives per span: Set this parameter when the RAID level is 10, 50, or 60.
Scenarios
Log in to the server through the BMC port to configure the hard disks of the server to RAID 1.
Process
Figure 1 or Figure 2 shows the process for configuring RAID 1.
2. On the menu bar, choose Remote. The Remote Console page is displayed, as shown in Figure 3.
3. Click Java Integrated Remote Console (Private), Java Integrated Remote Console (Shared), HTML5 Integrated
Remote Console (Private), or HTML5 Integrated Remote Console (Shared). The real-time desktop of the server is
displayed, as shown in Figure 4 or Figure 5.
Java Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
Java Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and
perform operations on the server using the iBMC. The users can view the operations of each other.
HTML5 Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
HTML5 Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS
and perform operations on the server using the iBMC. The users can view the operations of each other.
5. Select Reset.
The Are you sure to perform this operation dialog box is displayed.
6. Click Yes.
The server restarts.
8. Enter the BIOS password as prompted to switch to the BIOS setting page.
The default password for logging in to the BIOS is Admin@9000. Change the administrator password immediately after your
first login.
To alternate between the English, French, and Japanese keyboards, press F2.
For security purposes, change the administrator password periodically.
The system will be locked if incorrect passwords are entered three consecutive times. You need to restart the server to unlock it.
9. When message "Press <Ctrl><R> to Run MegaRAID Configuration Utility" shown in Figure 7 is displayed during the
server startup, press Ctrl+R.
The SAS3108 BIOS Configuration Utility screen is displayed, as shown in Figure 8. Table 1 describes the parameters.
Figure 7 Information
Foreign View: Manage foreign configurations. This menu is available only when foreign configurations are detected.
Data on a hard drive will be deleted after the hard drive is added to a RAID array. Before creating a RAID array, check that there
is no data on hard drives or the data on hard drives is not required.
Hard drives in the same RAID array must be of the same type and specifications.
For details about the number of hard drives required by each RAID level, see Table 2.
RAID 1: 2 hard drives in total; a maximum of 1 failed disk
RAID 5: 3 to 32 hard drives in total; a maximum of 1 failed disk
RAID 6: 3 to 32 hard drives in total; a maximum of 2 failed disks
Create RAID 1.
12. Press F2, select Create Virtual Drive on the displayed screen, and press Enter.
13. Press Enter in the RAID Level area, and use ↑ or ↓ to set the RAID level to RAID-1.
The default RAID level is RAID 0.
You do not need to perform this step on RAID 0, RAID 1, RAID 5, and RAID 6.
This step is mandatory for RAID 10, 50, and 60.
15. Set the number of hard drives for each RAID span.
17. Press ↑ or ↓ to select a hard drive to be added and press Enter, as shown in Figure 10.
If multiple virtual drives are not required, go to 24 after setting the virtual drive name. The RAID capacity is set to the maximum
value by default.
To divide a drive group into multiple virtual drives, you need to manually set the capacity. For details, see 21 to 23. You can create
multiple virtual drives based on the site requirements. Each drive group supports a maximum of 16 virtual drives.
19. Use ↓ to select Name, enter the RAID name, and select OK.
A message is displayed asking you whether to set the RAID name.
21. After creating a drive group, choose Drive Group on the VD Mgmt screen and press F2, and select Add New VD.
The Add VD in Drive Group screen is displayed.
22. Enter the virtual drive capacity in Size, select OK, and press Enter.
A confirmation message is displayed.
24. In the main menu displayed in Figure 10, select Advanced using ↓, and then press Enter. The page for setting advanced
properties for a RAID group is displayed, as shown in Figure 11.
Parameter Description
Strip Size: Size of a data strip on each hard disk. Default value: 256 KB.
Write Policy: Options for writing data into a virtual drive. The write policy of virtual drives varies with the firmware version of the LSI SAS3108 RAID controller card.
If the firmware version of the LSI SAS3108 RAID controller card is 4.270.00-4382 or earlier, the following write
policies are supported:
Write Back: After the controller cache receives all data, it sends the host a message indicating that data transmission
is complete.
Write Through: Once the drive subsystem receives all data, the controller card notifies the host that data transmission
is complete.
Write Back with BBU: The controller card automatically switches to the Write Through mode when the controller
card has no BBU, the BBU is charging or discharging, the BBU is faulty, or the pinned/preserved cache reaches 50% of
the physical cache. It is recommended that you set the write policy to this mode.
If the firmware version of the LSI SAS3108 RAID controller card is 4.650.00-6121, the following write policies are
supported:
Write Back: The controller card automatically switches to the Write Through mode when the controller card has no
BBU, the BBU is charging or discharging, the BBU is faulty, or the pinned/preserved cache reaches 50% of the
physical cache. It is recommended that you set the write policy to this mode.
Write Through: Once the drive subsystem receives all data, the controller card notifies the host that data transmission
is complete.
Always Write Back: After the controller cache receives all data, it sends the host a message indicating that data
transmission is complete.
NOTICE:
In Always Write Back mode, DDR write data of the controller card will be lost if the supercapacitor is not installed or
being charged. This mode is not recommended.
I/O Policy: Options for data I/O of special virtual drives. This policy does not affect cache pre-read. The options are as follows:
Direct:
In the read scenario, data is directly read from the hard drive. (If Read Policy is set to Ahead, data read requests are
processed through the cache of the RAID controller card.)
In the write scenario, data write requests are processed through the cache of the RAID controller card. (If Write
Policy is set to Write Through, data is directly written into hard drives.)
Cached: Data is read from or written into the cache. Select this value only when CacheCade 1.1 is configured.
Initialize: Performs initialization once the RAID array is created. The initialization will damage data stored on RAID member drives.
[X] is displayed in front of the selected hard drive.
Initialization will damage data on hard drives. If the original data on the hard drives needs to be retained, do not select Initialize on the
screen shown in Figure 11.
28. On the Configuration Utility main screen, press → to query configuration details, as shown in Figure 12.
b. On the menu bar, choose Remote. The Remote Console page is displayed, as shown in Figure 13.
c. Click Java Integrated Remote Console (Private), Java Integrated Remote Console (Shared), HTML5
Integrated Remote Console (Private), or HTML5 Integrated Remote Console (Shared). The real-time desktop
of the server is displayed, as shown in Figure 14 or Figure 15.
Java Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the
iBMC.
Java Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server
OS and perform operations on the server using the iBMC. The users can view the operations of each other.
HTML5 Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the
iBMC.
HTML5 Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the
server OS and perform operations on the server using the iBMC. The users can view the operations of each other.
e. Select Reset.
The Are you sure to perform this operation dialog box is displayed.
f. Click Yes.
The server restarts.
The default BIOS password is Admin@9000. After the first login, change the administrator password immediately.
For security purposes, change the administrator password periodically.
Enter the administrator password to go to the administrator screen. The server will be locked after three consecutive
failures with wrong passwords. You can restart the server to unlock it.
d. On the Advanced screen, select Avago MegaRAID <SAS3508> Configuration Utility and press Enter. The
Dashboard View screen is displayed.
3. On the Dashboard View screen, select Main Menu and press Enter. Then select Configuration Management and press
Enter. Select Create Virtual Drive and press Enter. The Create Virtual Drive screen is displayed.
If RAID has been configured, you need to format the hard disk. On the Configuration Management screen, select Clear
Configuration and press Enter. On the displayed confirmation screen, select Confirm and press Enter. Then select Yes and press
Enter to format the hard disk.
4. On the Create Virtual Drive screen, select Select RAID level using the up and down arrow keys and press Enter. Select
RAID1 from the drop-down list box and press Enter.
5. On the Create Virtual Drive screen, select Default Initialization using the up and down arrow keys and press Enter.
Select Fast from the drop-down list box and press Enter.
6. Select Select Drives From using the up and down arrow keys and press Enter. Select Unconfigured Capacity using the
up and down arrow keys.
7. Select Select Drives using the up and down arrow keys and press Enter. Select the first (Drive C0 & C1:01:02) and the
second (Drive C0 & C1:01:05) disks using the up and down arrow keys to configure RAID 1.
Drive C0 & C1 may vary on different servers. You can select a hard disk by entering 01:0x after Drive C0 & C1.
Press the up and down arrow keys to select the corresponding disk, and press Enter. [X] after a disk indicates that the disk has
been selected.
8. Select Apply Changes using the up and down arrow keys to save the settings. The message "The operation has been
performed successfully." is displayed. Press the down arrow key to choose OK and press Enter to complete the
configuration of member disks.
9. Select Save Configuration and press Enter. The operation confirmation screen is displayed. Select Confirm and press
Enter. Select Yes and press Enter. The message "The operation has been performed successfully" is displayed. Select OK
using the down arrow key and press Enter.
b. Select Virtual Drive Management and press Enter. Current RAID information is displayed.
Scenarios
Set the BIOS parameters of the x86 servers on the iBMC WebUI.
Process
Figure 1 shows the process for setting the BIOS.
Procedure
Restart the server.
2. On the menu bar, choose Remote. The Remote Console page is displayed, as shown in Figure 2.
3. Click Java Integrated Remote Console (Private), Java Integrated Remote Console (Shared), HTML5 Integrated
Remote Console (Private), or HTML5 Integrated Remote Console (Shared). The real-time desktop of the server is
displayed, as shown in Figure 3 or Figure 4.
Java Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
Java Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and
perform operations on the server using the iBMC. The users can view the operations of each other.
HTML5 Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
HTML5 Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS
and perform operations on the server using the iBMC. The users can view the operations of each other.
5. Select Reset.
The Are you sure to perform this operation dialog box is displayed.
6. Click Yes.
The server restarts.
To go to the Smart Provisioning GUI, press F6.
8. Enter the BIOS password as prompted to switch to the BIOS setting page.
The default password for logging in to the BIOS is Admin@9000. Change the administrator password immediately after your
first login.
To alternate between the English, French, and Japanese keyboards, press F2.
For security purposes, change the administrator password periodically.
The system will be locked if incorrect passwords are entered three consecutive times. You need to restart the server to unlock it.
10. In the displayed dialog box, select Legacy Boot or UEFI Boot, and press Enter.
The BIOS of the V5 platform uses the UEFI mode by default. If the legacy mode is used, ensure that the sum of option ROMs of
all PCIe devices does not exceed the upper limit (128 KB) specified by Intel. Otherwise, some PCIe devices may become
unavailable.
If the capacity of the target hard drive or RAID group is greater than 2 TB, set Boot Type to UEFI Boot. For details, see the
release notes of the OS.
If the OS is to be installed on an NVMe hard drive, you can only set Boot Type to UEFI Boot.
The default boot sequence is Hard Disk Drive > DVD-ROM Drive > PXE > Others.
12. Select a boot option and press F5 or F6 to change the boot order.
The PXE Configuration screen provides configurations of up to four on-board network ports. The default value is Enabled for
PXE 1 and PXE 3, and Disabled for other network ports.
The PXE settings for the I/O NIC are also displayed.
15. Select Enabled in the displayed dialog box and press Enter.
Scenarios
You can log in to switches, firewalls, and network gateways through serial ports. Operation personnel can log in to these devices
using serial port tools and configure them as required.
The Windows OS supports serial port tools, such as PuTTY, ttermpro.exe, and HyperTrm.exe.
This section uses PuTTY as an example.
Prerequisites
Conditions
A serial cable for connecting the local computer and the target device has been obtained.
Data
Data preparation is not required for logging in to a switch.
The following data is required for logging in to a firewall (Eudemon), IP SAN, or gateway (SVN3000 or SVN5000 series).
Username
Password
Procedure
1. Use the serial cable to connect the RS-232 port of the local computer and the target device.
Serial port names of different devices may vary.
2. Run PuTTY, set Connection type to Serial, enter the serial port name, such as COM1, in the Serial line area, and enter
the serial port rate, such as 115200, in the Speed area.
For details about the serial port name and rate, query the name of the serial port connected to the local computer and the serial port
parameter of the target device.
3.2.5.2 Configuring VLANs That Are Allowed to Pass Through the Port
Scenarios
During the software commissioning, enable a disabled LAN switch port. Configure the port to allow specified VLANs to pass
through so that the network of the plane to which the port belongs can be commissioned.
Prerequisites
Conditions
You have logged in to the switch through HyperTerminal.
Data
The following data has been obtained:
IDs of the VLANs that are allowed to pass through the port
Procedure
1. In the switch user view, run the following command to switch to the system view:
<Quidway>system-view
2. Run the following command to enter the interface view of a switch port:
[Quidway]interface gigabitethernet Switch port number
For example, to enter the interface view of Gigabit Ethernet interface 0/0/2, run the following command:
[Quidway]interface gigabitethernet 0/0/2
4. Run the following command to set the default VLAN ID for a hybrid port:
[Quidway-GigabitEthernet0/0/2]port hybrid pvid vlan VLAN ID
For example, if the default VLAN ID is 5, run the following command:
[Quidway-GigabitEthernet0/0/2]port hybrid pvid vlan 5
5. Run the following command to set the port to the untagged hybrid mode, and specify a VLAN to pass the port:
[Quidway-GigabitEthernet0/0/2]port hybrid untagged vlan VLAN ID
For example, to set interface 0/0/2 to the untagged hybrid mode and allow VLAN 5 to pass, run the following command:
[Quidway-GigabitEthernet0/0/2]port hybrid untagged vlan 5
6. Run the following command to exit from the switch interface view:
[Quidway-GigabitEthernet0/0/2]quit
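Putting the steps together, a minimal sketch follows (the undo shutdown and port link-type hybrid commands are assumed from standard switch usage, as the corresponding steps are elided above):
<Quidway>system-view
[Quidway]interface gigabitethernet 0/0/2
[Quidway-GigabitEthernet0/0/2]undo shutdown
[Quidway-GigabitEthernet0/0/2]port link-type hybrid
[Quidway-GigabitEthernet0/0/2]port hybrid pvid vlan 5
[Quidway-GigabitEthernet0/0/2]port hybrid untagged vlan 5
[Quidway-GigabitEthernet0/0/2]quit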
Scenarios
After the software commissioning is completed, disable all LAN switch ports that are enabled for the commissioning to ensure
system security.
Prerequisites
Conditions
You have logged in to the switch through HyperTerminal.
Data
The IDs of switch ports to be disabled have been obtained.
Procedure
1. In the switch user view, run the following command to switch to the system view:
<Quidway>system-view
2. Run the following command to enter the interface view of a switch port:
[Quidway]interface gigabitethernet Switch port number
For example, to enter the interface view of Gigabit Ethernet interface 0/0/2, run the following command:
[Quidway]interface gigabitethernet 0/0/2
3. Run the following command to restore the default VLAN ID for a hybrid port:
[Quidway-GigabitEthernet0/0/2]undo port hybrid pvid vlan
4. Run the following command to delete the VLAN that is allowed to pass the hybrid port:
6. Run the following command to exit from the switch interface view:
[Quidway-GigabitEthernet0/0/2]quit
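Putting the steps together, a minimal sketch follows (the undo port hybrid vlan and shutdown commands are assumed from standard switch usage, as the corresponding steps are elided above):
<Quidway>system-view
[Quidway]interface gigabitethernet 0/0/2
[Quidway-GigabitEthernet0/0/2]undo port hybrid pvid vlan
[Quidway-GigabitEthernet0/0/2]undo port hybrid vlan 5
[Quidway-GigabitEthernet0/0/2]shutdown
[Quidway-GigabitEthernet0/0/2]quit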
Installation Preparations
Manual Installation
Appendix
Installation Process
Deployment Rules
Logical Nodes
Figure 1 shows the logical nodes in FusionCompute.
Storage resources indicate storage units provided by SAN devices or local storage devices.
VRM: A VRM node manages the virtual resources in a unified manner through a management interface.
Host: A host is a physical server that provides compute resources for FusionCompute. A host also provides storage resources when local hard disks are used.
Deployment Scheme
Table 2 describes the FusionCompute node deployment schemes.
Host: Deployed on physical servers. Multiple hosts can be deployed based on customer requirements for compute resources. A host also provides storage resources when local storage resources are used. When VRM nodes are deployed on VMs, a host must be specified for creating a VRM VM. If a small number of hosts (for example, fewer than 10) are deployed, you can add all the hosts to the management cluster, which therefore also provides user services. If a large number of hosts are deployed, you are advised to add the hosts providing different user services to multiple service clusters to facilitate service management. To optimize compute resource utilization of each cluster, you are advised to configure the same types of data stores and DVSs for hosts in the same cluster.
VRM: Deployed on VMs. The active and standby VRM nodes must be deployed on two VMs on different hosts in the management cluster. You are advised to deploy VRM on VMs.
VRM: Deployed on physical servers. The active and standby VRM nodes must be deployed on different physical servers.
Methods
The FusionCompute system supports manual installation and FusionCompute web tool-based installation. Table 1 describes the
methods for installing FusionCompute.
Installation using the FusionCompute installation tool (recommended): Hosts and VRM nodes can be installed in a unified manner. VRM nodes are installed on VMs.
Manual installation: Install hosts and VRM nodes separately and on different physical servers.
Process
Figure 1 shows the process for installing FusionCompute.
Install hosts: Manually mount an ISO image to install a host and configure its parameters.
Install VRM nodes: To install a VRM node on a physical server, manually mount an ISO image of the VRM node and configure the VRM node.
When deploying a CNA node or VRM on a physical server, check whether the independent power supply system (battery or capacitor) of
the RAID works properly. If an exception occurs, disable the RAID cache. Otherwise, files may be damaged due to unexpected power
failures.
For details about how to check whether the independent power supply system of the RAID works properly and how to disable the RAID
cache, see the product documentation of the corresponding server.
Management scale: 5000 VMs or 101 to 200 physical hosts
VRM node specifications (container management disabled): vCPUs ≥ 30; memory size ≥ 40 GB; disk size ≥ 140 GB
NOTE: The default disk capacity of the VRM VM is 120 GB. The remaining disk capacity of the CNA host must be greater than or equal to 140 GB.
Specifications of VRM nodes for connecting to a non-FusionCompute management platform (container management disabled on FusionCompute): Not supported
VRM node specifications (container management enabled): vCPUs ≥ 30; memory size ≥ 50 GB; disk size ≥ 1600 GB
NOTE: VRM nodes can be deployed on only physical servers to support such specifications.
Specifications of VRM nodes for connecting to a non-FusionCompute management platform (container management enabled on FusionCompute): Not supported
In physical deployment scenarios, vCPUs in the table refer to the number of hyperthreads.
If a non-FusionCompute virtualization solution has specific requirements for VRM specifications, the VRM specifications of the solution
prevail. Otherwise, the preceding specifications prevail.
Non-FusionCompute management platforms include eDME and FusionAccess.
For details about the CPU model of the physical server, see the order information of the purchased server or the product manual of the
server model.
The required RAID configuration for VRM servers varies depending on the number of hosts and that of VMs in the system.
However, the actual number of hosts and that of VMs may vary from the following typical configurations. You need to configure
RAID to match the higher configuration.
50 hosts or 1000 VMs: RAID 10 consisting of four SAS disks or RAID 1 consisting of two SSDs
100 hosts or 3000 VMs: RAID 10 consisting of four SAS disks or RAID 1 consisting of two SSDs
200 hosts or 5000 VMs: RAID 10 consisting of four SAS disks or RAID 1 consisting of two SSDs
1000 hosts or 10,000 VMs: RAID 10 consisting of 10 SAS disks or RAID 1 consisting of two SSDs
All the mentioned disks must be SAS disks with 15000 revolutions per minute (rpm).
Data
Host Requirements
Network Requirements
3.3.2.1.1 PC Requirements
A local PC is required for FusionCompute software installation and configuration.
Table 1 describes the requirements for the local PC.
Memory: > 2 GB
Hard disk: Excluding the partition for the OS, at least one partition has more than 16 GB of free space for installing FusionCompute.
Network: Before installing FusionCompute, ensure that the local PC can communicate with the planned management plane.
Browser: Google Chrome 118, 119, or 120; Mozilla Firefox 118, 119, or 120; Microsoft Edge 118, 119, or 120
If the hosts have been used before, restore the hosts to factory settings before configuring the BIOS.
You can use FusionCompute Compatibility Query to query the software and hardware compatibility supported by FusionCompute. If
Remark in the displayed compatibility list provides information, you need to perform operations based on the information.
Item Requirement
CPU: x86: Intel 64-bit CPUs, AMD 64-bit CPUs, and Hygon 64-bit CPUs; Arm: Kunpeng 920 processors and Phytium 64-bit CPUs
The CPU supports hardware virtualization technologies, such as Intel VT-x, and the BIOS system must have the CPU
virtualization function enabled on the x86 servers.
The models of CPUs in one cluster are the same. Otherwise, the VM migration between hosts will fail. Therefore, you are
advised to deploy servers of the same model in a cluster.
If the host is connected to scale-out block storage, configure the CPUs based on the CPU configurations provided in
Installation > Software Installation Guide > Installing the Block Service > Node Requirements > Separate
Deployment of Compute and Storage Nodes > Compute Node in OceanStor Pacific Series Product Documentation.
If eBackup is connected, two more vCPUs need to be reserved for the host management domain.
NOTICE:
If the CPU virtualization function is disabled on a host, VMs cannot be created on the host.
Memory > 8 GB
If the host is used to deploy a management VM, the host memory must be greater than or equal to the total size of the
management VM memory and the memory of the management domain of the host accommodating the management VM.
If you need to configure the user-mode switching specifications for an x86 host, reserve an additional 5 GB on top of the original host management domain memory.
If the host is connected to scale-out block storage, the reserved memory of the management domain must be greater than
the total size of the scale-out block storage memory and the reserved memory of the management domain before the host is
connected to scale-out block storage. For details about the scale-out block storage memory requirements, see Installation
> Software Installation Guide > System Requirements > Node Requirements > VBS Deployed on Compute Nodes >
Compute Node in OceanStor Pacific Series Product Documentation of the required version.
Recommended memory size: ≥ 48 GB
When Huawei servers are installed, the memory needs to be set based on the recommended configurations. Otherwise, the
system cannot achieve optimal performance. For details about the recommended configurations, visit
https://support.huawei.com/onlinetoolsweb/smca/.
NOTE:
V5 servers (x86 architecture) must use recommended configurations. Otherwise, the system performance deteriorates obviously.
Hard disk: If the container function is enabled: ≥ 405 GB. Plan additional capacity based on the number of container image repositories, estimated at 3 GB per image repository (for example, 10 additional image repositories require about 30 GB extra).
If the disk contains historical data, format the disk before the installation.
If the number of disks on the server is greater than 2, form RAID 10. If the number of disks on the server is 2, form RAID
1.
The configurations of compute nodes are as follows:
You are advised to use disks 1 and 2 to form RAID 1 for installing the host OS to improve storage reliability.
When setting the boot device in the host BIOS, set the first boot device to the disks that form the RAID 1 group.
If the host has multiple disks, you are advised to use all the disks except disks 1 and 2 to form a RAID 5 group.
If the customers have special requirements on RAID, adjust the configurations to suit their requirements.
NOTE:
The RAID controller cards on certain servers require that server disks must form RAID groups. Otherwise, the host OS cannot be
installed. For details about the RAID controller card requirements, see the server product documentation.
Ensure that the first boot device of the host is the location where the host OS is installed. Otherwise, the host may fail to start or may boot another OS.
PXE configuration for NICs: If another OS has been installed on the host before, ensure that the NIC driver is an onboard driver.
If hosts are installed using ISO images, PXE must be disabled for all the NICs.
If hosts are installed using the FusionCompute installation tool, PXE must be enabled for the NICs used in the PXE boot and must be disabled for other NICs. For some servers, such as TaiShan 200 (model 2280), you need to set External Network Card Boot to Enable in the BIOS settings.
NOTE:
After host installation is complete using the FusionCompute installation tool, you are advised to disable PXE for all
the NICs to prevent incorrect installation caused by a PXE server on the management network when any host restarts.
Table 3 Requirements for advanced CPU configurations of the x86 host BIOS
Intel HT technology (Enabled): Intel hyper-threading technology. Enable it to allow the server to support multi-threading, thereby improving CPU performance.
Intel Virtualization tech (Enabled): CPU virtualization technology. Enable it to allow the CPUs on the server to support virtualization.
Execute Disable Bit (Enabled): Hardware-based antivirus technology, which is displayed as NX or XD on some servers. Enable it to prevent the server from restarting due to exceptions. This function must be enabled if the cluster housing the host requires the IMC function. For details about the IMC function of the cluster, see Creating a Cluster.
Intel SpeedStep tech (Disabled): CPU working mode switching technology, which is displayed as EIST on newly developed servers. Disable it to prevent disk loss or NIC failures.
C-State (Disabled): CPU power-saving function. Disable it to prevent disk loss, NIC failures, and clock inaccuracy.
Configuration item names may vary depending on different servers or BIOS versions. Therefore, you need to check whether the corresponding
items exist based on the requirements. If a server does not support a technology or function, you are not required to configure the function.
If local storage resources are used, only available space on the disk where you install the host OS and other bare disks can be
used as data stores.
If shared storage devices are used, including SAN and NAS storage devices, you must configure the management IP addresses
and storage link IP addresses for them. The following conditions must be met for different storage devices:
If SAN devices are used and you have requirements for thin provisioning and storage cost reduction, you are advised to use
the thin provisioning function provided by the VIMS, rather than the Thin LUN function of SAN devices.
If you use the Thin LUN function, an alarm indicating insufficient storage space may be generated after you delete a VM, usually because the storage space used by the VM is not zeroed out.
If SAN devices are used, configure LUNs or storage pools (data stores) as planned and map them to corresponding hosts.
If NAS storage devices are used, configure shared directories (data stores) and a list of hosts that can access the shared directories as planned, and configure no_all_squash and no_root_squash (see the export example after this list).
The OS compatibility of some non-Huawei SAN devices varies depending on the LUN space. For example, if the storage
space of a LUN on a certain SAN device is greater than 2 TB, certain OSs can identify only 2 TB storage space on the LUN.
Therefore, review your storage device product documentation to understand the OS compatibility of the non-Huawei SAN
devices before you use the devices.
If SAN devices are used, you are advised to use iSCSI to connect hosts and storage devices. The iSCSI connection does not
require additional switches, thereby reducing costs.
In a FusionCompute system, a virtualized SAN data store can be added only to hosts of the same CPU type.
In a system that uses data stores provided by shared storage, add the data stores to all hosts in the same cluster to allow VM migration
between hosts in a cluster.
Local disks can be provided only for the host accommodating the disks. Pay attention to the following when using local storage:
Configure the size of the local disk in proportion to the host compute resources. With this proportion configured, local storage resources are exhausted at roughly the same time as host compute resources, preventing resource waste caused by an imbalance.
Except for the management nodes, you are advised to deploy service VMs on shared storage.
A virtual data store provides high performance when it serves a small number of hosts. Therefore, you are advised to add one
virtual data store to a maximum of 16 hosts.
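As referenced in the NAS item above, the following is a minimal /etc/exports sketch for a Linux-based NFS server. The shared directory /export/fc_ds01 and the storage network segment 192.168.50.0/24 are hypothetical; on dedicated NAS devices, set the equivalent options in the device's management interface instead:
# /etc/exports on the NFS server: export the shared directory to the hosts' storage network
# no_all_squash and no_root_squash preserve client user IDs, as required above
/export/fc_ds01 192.168.50.0/24(rw,sync,no_root_squash,no_all_squash)
After editing the file, run exportfs -ra on the NFS server to apply the configuration.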
SNMP and SSH must be enabled for the switches to enhance security. SNMPv3 is recommended.
STP must be disabled on the switches. Otherwise, a false host fault alarm may be generated.
To ensure networking reliability, you are advised to deploy switches in stacking mode. If NICs are sufficient, you can use
two or more NICs for connecting the host to each plane.
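On a Huawei VRP-based switch, for example, the preceding requirements map to commands similar to the following. This is an illustrative sketch only, because command names and defaults vary by switch model and software version:
stp disable                      # disable STP globally, as required above
snmp-agent sys-info version v3   # allow only SNMPv3 for management access
stelnet server enable            # enable SSH (STelnet) access to the switch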
Table 1 describes the requirements for communication between network planes in the system.
BMC plane
Description: Specifies the plane used by the BMC network port on the host. This plane enables remote access to the BMC system of a server.
Requirement: The management plane and the BMC plane of the VRM node can communicate with each other. The management plane and the BMC plane can be combined.
Management plane
Description: Specifies the plane used by the management system to manage all nodes in a unified manner. All nodes communicate on this plane, which provides the following IP addresses:
Management IP addresses of all hosts, that is, IP addresses of the management network ports on hosts
IP addresses of management VMs
IP addresses of storage device controllers
Requirement: VRM and CNA nodes are on the same management plane and can communicate with each other.
NOTE:
The management plane is accessible to the IP addresses in all network segments by default, because the network plans of different customers vary. You can deploy physical firewalls to deny access from IP addresses that are not included in the network plan.
If you use a firewall to set access rules for the floating IP address of the VRM node, set the same access rules for the management IP addresses of the active and standby VRM nodes.
On the FusionCompute management plane, some ports provide management services for external networks. If the management plane is deployed on an untrusted network, it is prone to DoS and DDoS attacks. Therefore, you are advised to deploy the management plane on a dedicated network or in the trusted zone of the firewall, protecting the FusionCompute system against external attacks.
It is recommended that you configure eth0 on a host as the management network port. If a host has more than four network ports, configure both eth0 and eth1 on the host as the management network ports, and bind them to work in active/standby mode after FusionCompute is installed.
Storage plane
Description: Specifies the network plane on which hosts communicate with storage units on storage devices. This plane provides the following IP addresses:
Storage IP addresses of all hosts, that is, IP addresses of the storage network ports on hosts
Storage IP addresses of storage devices
If the multipathing mode is in use, configure multiple VLANs for the storage plane.
Requirement: Hosts communicate properly with storage devices over the storage plane. You are not advised to use the management plane to carry storage services; a separate storage plane ensures storage service continuity even when you subsequently expand the capacity of the storage plane.
For details about how network planes communicate, see Communication Principles .
Item | Requirement
Switch | A single aggregation switch or two aggregation switches (in stacking mode) are deployed on the network.
Firewall | A single firewall or two firewalls (active and standby) are connected to the aggregation switch in bypass mode. Virtual firewalls have been created on the in-use firewalls to provide gateways for networks in the VPC. For details about supported firewalls, see the compatibility list of network devices.
NOTE:
Port GigabitEthernet0/0/0 is the default firewall management port, and it cannot be used as a service port.
To enhance link reliability, connect the Trust zone of the firewall to the switch in trunk mode.
Only the Eudemon8000E firewall can be used in the dual-mode elastic IP address service, which allows one private IP address to bind to multiple elastic IP addresses.
Cluster | All servers in a cluster are of the same model, and the VLAN settings for all clusters are the same.
Documents
Table 1 describes the documents used for installing FusionCompute.
FusionCompute 8.8.0 Integration Design Guide
Description: Software deployment plan.
How to obtain: Enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it. Carrier users: Visit https://support.huawei.com, search for the document by name, and download it.
FusionCompute 8.8.0 Version Mapping
Description: Provides information about the mapping between software and hardware versions.
How to obtain: Same as the preceding document.
SmartKit User Guide
Description: For details about the SmartKit installation process, see section "Installing SmartKit." For details about how to use SmartKit to download the software packages required by FusionCompute, see section "Software Packages."
How to obtain: Obtain the SmartKit User Guide for the desired version according to FusionCompute 8.8.0 Version Mapping.
Tools
Table 2 describes the tools to be prepared before the installation.
PuTTY
Description: A cross-platform remote access tool used for accessing nodes on a Windows platform during software installation.
How to obtain: Visit the chiark homepage to download the PuTTY software. You are advised to use the latest version of PuTTY for a successful login to the storage system.
WinSCP
Description: A cross-platform file transfer tool used to transfer files between Windows and Linux OSs.
How to obtain: Visit the WinSCP homepage to download the WinSCP software.
kvm
Description: An independent remote console in the Windows version.
How to obtain: Visit the enterprise website and search for the software package kvm_client_windows.zip on the homepage to download it.
Software Packages
To ensure that the obtained software packages are valid for use, you need to verify the software package integrity and validity. For
details, see Verifying the Software Package .
After obtaining the software package, do not change the name of the software package. Otherwise, the software package cannot be verified when
it is uploaded. As a result, the software package cannot be installed.
Tool-based Installation
The FusionCompute installation tool can be used to install hosts and VRM nodes in a unified manner. Obtain the software
packages described in Table 3 or Table 4 before installing FusionCompute using the installation tool.
Table 3 Software packages required for tool-based installation in the x86 architecture
Table 4 Software packages required for tool-based installation in the Arm architecture
Manual Installation
Obtain the software packages described in Table 5 or Table 6 before manually installing FusionCompute.
Table 5 Software packages required for manual installation in the x86 architecture
Installing VRM nodes on physical servers | FusionCompute_VRM-8.8.0-X86_64.iso | System image file of the VRM nodes | Carrier users: Click here.
Table 6 Software packages required for manual installation in the Arm architecture
Installing VRM nodes on physical servers | FusionCompute_VRM-8.8.0-ARM_64.iso | System image file of the VRM nodes | Carrier users: Click here.
Installing the quorum server
Software package: FusionCompute-QuorumServer-x.x.x-Eluer-X86.zip or FusionCompute-QuorumServer-x.x.x-Eluer-X86.iso (x.x.x indicates the actual version number)
Description: The quorum software version must match the VRM version. The quorum server software must match the architecture type of the quorum server.
NOTE:
To use a template for deployment, download FusionCompute-QuorumServer-x.x.x-Eluer-X86.zip.
To use an image for deployment, download FusionCompute-QuorumServer-x.x.x-Eluer-X86.iso.
How to obtain: Enterprise users: Click here. Carrier users: Click here.
3.3.2.3 Data
VRM node information | VRM Node Name | This parameter is mandatory. It identifies a VRM node in the system. The value can contain only letters, digits, hyphens (-), and underscores (_). It must start with a letter or a digit and cannot exceed 64 characters. | Example: VRM01
Management Port address of HSM 1 and Management Port address of HSM 2: The two parameters are optional. If no management port is specified, the default port is used.
Default ports: Sansec SJJ1212 HSM: 8008; Westone SJJ1744 HSM: 6666
Installing FusionCompute
Troubleshooting
Installation Process
Data Preparation
Component Description
Host A physical server that provides compute resources for FusionCompute. A host also provides storage resources when
local hard disks are used.
In active/standby mode, the active and standby nodes are deployed on two physical servers or VMs. If the active node is
faulty, the system quickly switches services over to the standby node to ensure service continuity. Therefore, the
active/standby mode provides higher reliability than the single-node deployment mode.
Custom installation | Typical installation | Most parameters are set by default, which facilitates installation. Rights management mode: common.
One-click installation | - | Most parameters are set by default, which facilitates installation. Rights management mode: customized (common or role-based).
gandalf Password | Specifies the password of user gandalf for logging in to the CNA node to be installed. | -
Redis Password | Specifies the Redis password, which is set during CNA environment initialization. | -
The BIOS page varies depending on the server model and iBMC version. The method of obtaining the MAC address described here is for reference only.
Management plane VLAN tag | The default value is 0 and you can retain the default value. | -
Network Port of Management IP Address | The current node is the first node. You need to specify a network port in the management IP address configuration, for example, eth0. | -
Installing FusionCompute
Prerequisites
The host to be installed has been configured as required. For details, see Host Requirements .
The system date and time of the host to be installed have been set to the current date and time (UTC).
You have obtained the IP address, username, and password for logging in to the BMC system of the host. (If the host does
not have a BMC system, you do not need to obtain these data.)
The BMC manages host hardware using the Intelligent Platform Management Interface (IPMI), and enables the remote access, control, and management functions of a host.
You have obtained the password for logging in to the host BIOS. You do not need to obtain the password if no password is
configured for the BIOS.
No DHCP server is running on the installation plane subnet. During the installation, the only DHCP service allowed is the one provided by the tool for deploying hosts in batches.
Except for the hosts to be installed, no other servers on the installation plane network may obtain DHCP addresses.
If a host has multiple disks, the tool automatically installs the host OS on the disk that is located in the first boot position. For
details, see the requirements for setting the BIOS boot mode in Host Requirements .
During the host OS installation, Storage media is set to Local disk and Installation scenario is set to New installation. You can
modify these parameter values in the customized installation mode as needed.
The host management plane and the host BMC plane can communicate with each other. For details, see the network
environment requirements.
The installation planes between hosts can communicate with each other. The installation plane is the network plane that assigns DHCP addresses and is used for installation. You are advised to use the management plane as the installation plane (except when the management plane VLAN is configured). If the IP address of the host where the installation tool is located is not in the installation plane network segment, configure the DHCP relay agent on the switch. For details, see Example of Configuring Switches.
During PXE-based installation, ensure that the data packets on the PXE port do not carry VLAN tags, and allow these data packets in the network settings.
On the switch, configure the VLAN for nodes installed in PXE mode as the port PVID in untagged mode, using the same PVID as configured for nodes installed in non-PXE mode (see the configuration sketch below).
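The following is a minimal sketch of the corresponding switch configuration on a Huawei VRP-based switch. The VLAN ID 40, the interface name, and the relay server address are hypothetical; adapt them to your network plan and switch model (see also Example of Configuring Switches):
# Carry the installation plane untagged on the host-facing port
vlan batch 40
interface GigabitEthernet0/0/1
 port link-type trunk
 port trunk pvid vlan 40          # PXE packets arrive untagged and join VLAN 40
 port trunk allow-pass vlan 40
quit
# Relay DHCP requests if the installation tool host is in another network segment
dhcp enable
interface Vlanif 40
 dhcp select relay
 dhcp relay server-ip 192.168.40.10   # hypothetical IP address of the node running the installation tool
quit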
Procedure
1. Install the first CNA node by mounting the CD/DVD-ROM drive on the server. For details, see Installing Hosts Using ISO
Images (x86) or Installing Hosts Using ISO Images (Arm) .
You need to manually install one host to deploy the installation tool of FusionCompute.
Phytium does not support node installation using ISO images. The Phytium-based VRM node can be installed only on a Phytium-
based CNA node. To install a Phytium host, install a Kunpeng server as the Management Computing Node Agent (MCNA) node
(CNA node where the installation tool is located), and then use the MCNA node to install a Phytium-based CNA node.
After the Phytium-based CNA node is installed, uninstall the installation service from the MCNA node installed on a Kunpeng
server as instructed in How Do I Uninstall the FusionCompute Web Tool? . Then, perform 2 to 8 on the Phytium-based CNA node,
log in to the node using the node IP address, and install the Phytium-based VRM node as instructed in Installing FusionCompute .
If you need to install hosts and VRM nodes with VLANs, configure VLANs when installing hosts using an image.
2. Use WinSCP to log in to the CNA node as user gandalf using the management IP address and transfer the FusionCompute
installation tool package in ZIP format to the /home/GalaX8800 directory.
If you use WinSCP to transfer the installation tool package, you do not need to enable the SFTP service. If you use another remote
transfer tool, ensure that the SFTP service has been enabled for CNA. For details, see Enabling SFTP on CNA or VRM Nodes .
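If you prefer a command-line transfer over WinSCP, an equivalent copy can be made with scp from the local PC. The package file name below is a placeholder for the actual installation tool package, and, as noted above, tools other than WinSCP require the SFTP service to be enabled on the CNA node:
scp FusionCompute_Installer-8.8.0.zip gandalf@<CNA management IP>:/home/GalaX8800/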
4. Run the following command and enter the password of user root to switch to user root:
su - root
5. Run the cd /home/GalaX8800/ command to go to the GalaX8800 directory. Manually verify the signature of the
installation package uploaded in 2. For details, see Verifying the Software Package .
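As a sketch of this manual verification, assuming a SHA-256 checksum file with the same name as the package was obtained together with it (the file names are placeholders; see Verifying the Software Package for the authoritative procedure):
cd /home/GalaX8800
# Compare the computed digest against the published one; "OK" means the package is intact
sha256sum -c FusionCompute_Installer-8.8.0.zip.sha256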
7. Run the following command to access the directory generated after the installation package is decompressed:
cd Name of folder where the decompressed package is stored
During the installation service startup, if the bmc_ip value fails to be obtained, only parameter verification is affected, and the
installation process is not affected.
Other Operations
After the web tool-based installation of FusionCompute is complete, you can uninstall the tool as instructed in How Do I Uninstall
the FusionCompute Web Tool? .
If a CNA has been added to a FusionCompute cluster, the installation tool cannot be used to install VRM on the CNA. If you want to continue to
install the VRM, remove the CNA from the original site.
This document uses FusionCompute (CNA and VRM installation) as an example to describe the two installation modes.
FusionCompute Custom Installation
3. On the Prepare for Installation page, select Host and VRM and click Next.
The Install FusionCompute page is displayed.
If only the host needs to be installed, select only Host and click Next.
If the host needs to be installed and the CPU vendor of the server where the host is to be installed is Hygon, select the Hygon
server and configure the server BIOS in advance. For details, see Configuring the BIOS on Hygon Servers .
If the host needs to be installed and a PowerLeader AMD server is used, log in to the BMC or BIOS and set the boot mode to
legacy BIOS.
If only VRM needs to be installed, select VRM only. On the VRM Installation Principles page, click Next and go to 16.
If both Host and VRM are selected and VRM is to be installed on a host installed using the tool, do not change the password of user gandalf during the installation. Change the password after the VRM installation is complete. Otherwise, the VRM installation may fail.
4. Click Next.
5. On the Upload Host Package page, save the CNA installation package (in ISO format) to the specified directory and click
Upload.
If the RoCE NIC driver needs to be installed on the host, on the Upload Host Package page, select Driver Upload and
click Import Driver Package. In the dialog box that is displayed, click Add File, add a driver file and a verification file
(optional), and click Upload. You can check the upload progress, upload status, and verification status at the bottom of the
page.
If the host uses Mellanox ConnectX-4 or Mellanox ConnectX-5 series NICs, the NICs are in the compatibility list, and the
Mellanox NIC driver needs to be installed, you can upload the Mellanox driver package.
Both x86 and Arm support the installation of IPv6 hosts using PXE.
If the IPv6 protocol is used, check whether the server BIOS and NIC support IPv6 before installing a host in PXE mode. For details, see the product documentation of the corresponding server or contact the technical support of the server vendor. If the server does not support IPv6, use a server that supports IPv6.
Before installing an x86 host using PXE, configure the server BIOS and NIC: set Boot Type to UEFI Boot and PXE Boot Capability to UEFI:IPV6.
After the installation package is uploaded, the installation mode is locked. You cannot switch between Custom Installation and
One-Click Installation.
After obtaining the software package, do not change the name of the software package. Otherwise, the software package cannot be
verified when it is uploaded. As a result, the software package cannot be installed.
The verification file in SHA-256 or ASC format with the same name as the RoCE NIC driver file must be uploaded. Otherwise,
security risks exist.
Before the upload, the system checks whether the target space is sufficient. If the space is insufficient, check whether other
redundant files are manually copied to /installer on the host. If so, delete them.
For RoCE NICs, upload the driver file named MLNX_OFED_LINUX-23.10-0.5.5.0-euleros2.0sp12-aarch64.tgz or MLNX_OFED_LINUX-23.10-0.5.5.0-euleros2.0sp12-x86_64.tgz together with the verification file of the same name in SHA-256 or ASC format. The driver package is installed together with the host software package. For details, see 13. For compatibility information, see FusionCompute Compatibility Query.
If the host has both Mellanox NICs and HBAs, the Mellanox NIC driver is installed in common mode by default and does not
carry the NVMe module. The functionality of HBAs is prioritized. If eVol storage devices need to be connected using the NVMe
over RoCEv2 protocol, you need to reinstall the NIC driver by referring to the Mellanox NIC driver installation guide. If the host
has only Mellanox NICs, you need to install the NIC driver in the mode where the NVMe module is carried to ensure that the
NVMe over RoCEv2 protocol can be used to connect eVol storage devices. You can run the lspci | grep 'Fibre' command to
check whether the host is configured with an HBA.
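For example, the following commands provide a quick view of what the host actually contains; the grep patterns are illustrative, and exact device strings vary by vendor and model:
lspci | grep 'Fibre'          # lists Fibre Channel HBAs, if any are present
lspci | grep -i 'mellanox'    # lists Mellanox ConnectX NICs, if any are present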
7. On the Configure Host Disk Mode page, select an installation mode and click Next.
This section uses New installation as an example.
In the x86 architecture, when setting Swap Partition Size, ensure that the value is an integer greater than or equal to 30.
If the system automatically configures the swap partition, the available space on the host may be insufficient for installing the
VRM VM (or other planned management VMs, such as FusionStorage). It is recommended that the disk space of the VRM VM be
no less than 140 GB. In this case, manually configure the swap partition based on the site requirements.
In the Arm architecture, if the Arm-based server does not have a RAID controller card, configure system disks to form software
RAID 1 to improve system reliability. In this scenario, set System Disk Software RAID to Yes. In other scenarios, set System
Disk Software RAID to No. When the web-based installation tool is used to install the host, only the first two disks can be used
as the system disks by default to form software RAID 1.
8. On the Enter Node Information page, click Add to add node information.
You only need to add hosts other than the MCNA node (the CNA node where the installation tool is located).
Host Name: This parameter is mandatory and is used to uniquely identify a host.
BMC IP: This parameter is mandatory. Enter the BMC system IP address of the host server.
Login Authentication: Select this option if authentication is required for logging in to the BMC.
BMC Username: This parameter is mandatory if Login Authentication is selected. Otherwise, the installation will
fail.
BMC Password: This parameter is mandatory if Login Authentication is selected. Otherwise, the installation will
fail.
MAC Address: Specifies the MAC address of the physical port on the host for PXE booting host OSs. If the network
configuration needs to be specified before host installation, obtain the MAC address to identify the target host. For
details, see the host hardware documentation.
CNA Management IP: This parameter is optional and specifies the host IP address.
If this parameter is not set, the IP address is obtained from the DHCP pool. Otherwise, the host IP address is the configured IP
address.
Management Plane VLAN Tag: If you need to install hosts and VRM nodes with VLANs, configure VLANs.
DHCP Pool Start IP Address: This parameter is mandatory. Together with DHCP Pool Capacity, it determines the range of IP addresses that the DHCP service can allocate to the hosts to be installed. The start address is user-defined and must be in a network segment that the local PC can communicate with; other parameters are automatically allocated by the system.
DHCP Mask: Specifies the subnet mask of the IP address segment assigned by the DHCP pool. This parameter is
mandatory.
DHCP Gateway: Specifies the gateway of the IP address segment assigned by the DHCP pool. This parameter is
mandatory.
DHCP Pool Capacity: This parameter is mandatory. You are advised to set the number of IP addresses in the
DHCP address pool to twice or more than the number of physical nodes to be loaded. The default value is 50.
Do not run other DHCP servers on the network segment where the DHCP address pool resides. Configure the DHCP relay
from the current network segment to the default network segment by referring to Configuring the DHCP Relay for
Aggregation Switches .
After the host is successfully installed, you can adjust the partition size only by reinstalling the system. If the disk where the
host OS is installed has VRM VM or user data, the partition size must be the same as that before the OS reinstallation.
Otherwise, the VM data will be overwritten.
SAN BOOT install: Specifies whether to use the SAN BOOT mode for installation. No is selected by default.
Before using the SAN BOOT mode for installation, configure the storage, HBA, and BIOS. For details about the application
scenarios and installation configuration of SAN BOOT, see FusionCompute 8.8.0 Best Practices for Installation in the FC SAN
BOOT Mode.
Gandalf User Password: This parameter is mandatory and specifies the password of the gandalf user for logging in
to the CNA node to be installed.
Confirm Gandalf User Password: This parameter is mandatory and specifies the verification of the password of the
gandalf user for logging in to the CNA node to be installed.
Root User Password: This parameter is mandatory and specifies the password of the root user for logging in to the
CNA node to be installed.
Confirm Root User Password: This parameter is mandatory and specifies the verification of the password of the
root user for logging in to the CNA node to be installed.
GRUB Password: This parameter is mandatory and specifies the GRUB password of the CNA to be installed.
Confirm GRUB Password: This parameter is mandatory and specifies the verification of the GRUB password for
logging in to the CNA node to be installed.
Redis Password: This parameter is mandatory and specifies the password of the Redis database account.
Confirm Redis Password: This parameter is mandatory and specifies the verification of the password of the Redis
database account.
12. In the "Services configured successfully" dialog box, click Next.
The Install Host page is displayed.
13. Select the host to be installed and click Start Installation at the top of the page to install the host.
a. Before installing the host, check whether the BIOS password is required during the host restart. For details about the BIOS
password of the server, see the product documentation of the server.
b. Before installing the host, ensure that PXE is enabled for the NICs used in the PXE boot and disabled for other NICs. For some
servers, such as TaiShan 200 (model 2280), you need to set External Network Card Boot to Enable in the BIOS settings.
c. When installing hosts on some servers, you need to enter the IP addresses of the hosts. You are advised to log in to the BMC
environment to check the installation status during host installation. If you do not enter the host IP address during the
installation, the installation will be suspended for a long time. If information similar to "Please input the TFTP server IP address:
"is displayed, enter the host IP address.
d. If multiple hosts need to be installed at the same time, you are advised to install a maximum of 10 hosts at a time to prevent
slow host installation due to heavy traffic.
e. If a host fails to be installed or is abnormal, an alarm sound is played on the page. You can click the sound icon in the upper right corner to disable the alarm sound.
f. If the installation fails, locate and resolve the fault based on the following causes and click Start Installation in the Operation
column, or select the host that fails to be installed and click Start Installation.
The host fails to be booted from PXE (network). As a result, the installation times out.
1. Log in to the host node and run service dhcpd status to check whether the DHCP service
of the installation tool is normal.
2. Contact the network administrator to check whether the host to be installed and the host
where the installation tool is located are in the same network segment. If they are not in the
same network segment, configure the DHCP relay.
3. If the installation progress is still 0% 20 minutes after the host is installed using PXE, log in
to the BMC system and check whether the login page is displayed. If the login page is
displayed, the server does not enter the PXE boot mode. In this case, enter the BIOS to
check the PXE settings. If the login page is not displayed, contact technical support.
g. Before installing hosts with VLANs, configure the network for the switch ports corresponding to the host network ports. For
details about how to modify switch port information, see Example of Configuring Switches .
After the Phytium-based CNA node is installed, uninstall the installation service from the host node installed on a Kunpeng server as
instructed in How Do I Uninstall the FusionCompute Web Tool? . Then, perform 2 to 8 on the Phytium-based CNA node, log in to the
node using the node IP address, and install the Phytium-based VRM node and perform follow-up operations as instructed in this section.
16. On the Upload VRM Package page, upload the VRM installation package (in ZIP format) and click Upload.
If the x86 architecture is used or x86 hosts need to be added in the Arm scenario, you need to upload the virtio-win driver
package FusionCompute_SIA-8.1.0.1-GuestOSDriver_X86.zip.
For enterprise users: Visit https://support.huawei.com/enterprise , search for the software package by name, and
download it.
For carrier users: Visit https://support.huawei.com , search for the software package by name, and download it.
After the installation package is uploaded, the installation mode is locked. You cannot switch between Custom Installation
and One-Click Installation.
After obtaining the software package, do not change the name of the software package. Otherwise, the software package
cannot be verified when it is uploaded. As a result, the software package cannot be installed.
Before the upload, the system checks whether the target space is sufficient. If the space is insufficient, check whether other
redundant files are manually copied to the /installer directory on the host. If yes, delete them.
17. After the file is uploaded, click Next. On the Select Installation Mode page, select an installation mode and click Next.
Typical installation
In this mode, installation is easy because most parameters are configured by default (for example, the subnet gateway is used as the quorum IP address, the management plane VLAN can be configured, storage devices are selected automatically, and the system scale is set by scale).
Rights Management Mode is set to Common by default and cannot be changed.
Custom installation
You can customize all parameters and select storage devices.
Rights Management Mode is set to Custom (common or role-based).
If you need to use scale-out block storage, select Custom installation.
If you need to install hosts and VRM nodes with VLANs or customize VLANs, you can select Custom installation or Typical
installation.
In typical installation mode, ensure that the space of the disk on which the host OS is running is greater than the minimum
capacity required by the host OS and the selected management node. The minimum capacity required by the host OS is 70 GB.
The minimum capacity required by VRM is 140 GB. Otherwise, the installation tool automatically selects other disks that meet
the requirements. If you need to customize the disk on which the host OS is to be installed, select the customized installation
mode.
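For example, if the host OS and the VRM node share one disk in typical installation mode, that disk must provide at least 70 GB + 140 GB = 210 GB; otherwise, the installation tool automatically selects another disk that meets the requirements.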
18. On the Configure VRM page, enter the configuration information and click Next to verify the configuration information.
If the verification fails, an error message is displayed. Modify the configuration information based on the error message.
Installation Mode: Select the active/standby or single-node installation mode based on operation scenarios. The
active/standby mode is recommended.
If the VRM node works in standalone mode and is faulty, data cannot be restored, and the system reliability is low. The
active/standby installation mode is used as an example.
KMS Configuration: The KMS service must be enabled if vTPM devices are to be mounted to VMs.
Size of image repositories. The options are as follows: 10 GB (total required disk space: 332 GB), 100 GB (total required disk space: 422 GB), 200 GB (total required disk space: 522 GB), and 300 GB (total required disk space: 622 GB). In each case, the total required disk space is a fixed 322 GB base plus the image repository size.
Configuration Mode. This parameter is mandatory when Select Installation Mode is set to Custom
installation. You can select By scale or Custom.
Configuration Item. This parameter is mandatory when Select Installation Mode is set to Custom
installation. If Configuration Mode is set to By scale, the value range is the same as that of System
Scale. If you select Custom, the value range of CPU is 4 to 20, the value range of Memory is 8 to 30,
and the value range of Size of image repositories is 10 to 300.
If you select Container Management when installing the VRM for the first time, the container management function will be
automatically enabled after the installation is complete. If the container management function is not enabled when the VRM is
installed for the first time, you can manually enable the function. For details, see Enabling Container Management .
System scale: Set the VRM management scale based on the system deployment scale. In System Scale, VM
indicates the number of VMs, and PM indicates the number of hosts.
VM: Virtual machine; PM: Physical machine (host)
1000 VMs, 50 PMs, 10 K8s clusters, and 500 K8s nodes: VRM nodes require 4 CPUs, 8 GB memory, and 522 GB
disk space.
3000 VMs, 100 PMs, 20 K8s clusters, and 1000 K8s nodes: VRM nodes require 8 CPUs, 12 GB memory, and 522
GB disk space.
5000 VMs, 200 PMs, 50 K8s clusters, and 2500 K8s nodes: VRM nodes require 12 CPUs, 22 GB memory, and 522
GB disk space.
After the KMS service is enabled, the memory usage increases. You are advised to increase the memory specifications of the
VRM node by one level.
Floating IP Address: specifies the management plane floating IP address of the VRM nodes. This parameter is
required only when the VRM nodes are deployed in active/standby mode. It is user-defined, and other parameters are
automatically allocated by the system.
Management IP Address of Active VRM: This parameter is mandatory and is automatically generated.
Management IP Address of Standby VRM: This parameter is mandatory and is automatically generated.
MAC Address Pool: This parameter is mandatory and specifies the segment mode or the user-defined mode.
In the segment mode, any of the 10 segments can be selected. Each of the first nine segments contains 10,000 sequential
MAC addresses, and the tenth segment contains 5000 sequential MAC addresses.
1: 28:6E:D4:88:C6:29 to 28:6E:D4:88:ED:38
2: 28:6E:D4:88:ED:39 to 28:6E:D4:89:14:48
3: 28:6E:D4:89:14:49 to 28:6E:D4:89:3B:58
4: 28:6E:D4:89:3B:59 to 28:6E:D4:89:62:68
5: 28:6E:D4:89:62:69 to 28:6E:D4:89:89:78
6: 28:6E:D4:89:89:79 to 28:6E:D4:89:B0:88
7: 28:6E:D4:89:B0:89 to 28:6E:D4:89:D7:98
8: 28:6E:D4:89:D7:99 to 28:6E:D4:89:FE:A8
9: 28:6E:D4:89:FE:A9 to 28:6E:D4:8A:25:B8
10: 28:6E:D4:8A:25:B9 to 28:6E:D4:8A:39:40
In the user-defined mode, you can customize the range of the MAC address pool. The start MAC address cannot be smaller
than 28:6E:D4:88:C6:29.
The default value range is 28:6E:D4:88:C6:29 to 28:6E:D4:8A:39:40 (95,000 addresses in total: nine segments of 10,000 plus one segment of 5,000).
Regardless of the segment mode or user-defined mode, you can modify the MAC address pool range or add MAC address
segments after the deployment is successful.
Configure Management VLAN Tag: You can select this option to configure the management plane VLAN.
VLAN ID: Specifies the management plane VLAN. If no value is specified, the system uses VLAN 0 by default. For
details about the configuration method, see Table 1.
If you need to install hosts and VRM nodes with VLANs, configure VLANs.
Admin User Password: This parameter is mandatory and specifies the password of the admin user for logging in to
the VRM node to be configured.
Confirm Admin User Password: This parameter is mandatory and specifies the verification of the password of the
admin user for logging in to the VRM node to be configured.
Gandalf User Password: This parameter is mandatory and specifies the password of the gandalf user for logging in
to the VRM node to be configured.
Confirm Gandalf User Password: This parameter is mandatory and specifies the verification of the password of the
gandalf user for logging in to the VRM node to be configured.
Root User Password: This parameter is mandatory and specifies the password of the root user for logging in to the
VRM node to be configured.
Confirm Root User Password: This parameter is mandatory and specifies the verification of the password of the
root user for logging in to the VRM node to be configured.
GRUB Password: This parameter is mandatory and specifies the GRUB password for logging in to the VRM node to
be configured.
Confirm GRUB Password: This parameter is mandatory and specifies the verification of the GRUB password for
logging in to the VRM node to be configured.
Postgres Password: This parameter is mandatory and specifies the password of the Postgres GaussDB database.
Confirm Postgres Password: This parameter is mandatory and specifies the verification of the password of the
Postgres GaussDB database.
Galax Password: This parameter is mandatory and specifies the password of the Galax GaussDB database.
Confirm Galax Password: This parameter is mandatory and specifies the verification of the password of the Galax
GaussDB database.
Access
Configuration: If the default VLANs of the switch ports are the same, you need to configure the management plane VLAN for the network ports on the host.
This port type does not support allowing multiple VLANs or Layer 2 isolation. Therefore, you are advised not to use it as the uplink of the storage plane or service plane.
Remarks: For details about how to view the VLAN of a switch port, see the official guide of the switch vendor.
Trunk
Configuration: Based on the actual network plan:
If the VLAN has been added to the list of allowed VLANs of the switch ports, you need to configure the management plane VLAN for the network ports on the host.
If the default VLANs of the switch ports have been added to the list of allowed VLANs, you do not need to configure the management plane VLAN for the network ports on the host.
Remarks: For details about how to view the VLAN of a switch port, see the official guide of the switch vendor.
Hybrid
Configuration: Based on the actual network plan:
If the VLAN has been added to the list of allowed VLANs of the switch ports or the switch ports have been configured to carry the VLAN tag when sending data frames, you need to configure a VLAN for the network ports on the host.
If the default VLANs of the switch ports have been added to the list of allowed VLANs or the switch ports have been configured to remove the VLAN tag when sending data frames, you do not need to configure a VLAN for the network ports on the host.
Remarks: For details about how to view the VLAN of a switch port, see the official guide of the switch vendor.
Table 1 is for reference only. The actual networking depends on the actual network plan.
19. On the VRM installation page, select the host where the VRM is to be installed and click Install VRM.
Perform the following operations. Otherwise, residual data may exist in the system.
If you want to continue the installation, click the page redirection link or click Finish. In the dialog box that is
displayed, click Continue to trigger the installation data clearance task.
If you need to uninstall the service, click the page redirection link or click Finish and click Confirm in the dialog
box that is displayed to trigger the uninstallation task.
After the new FusionCompute environment is installed, if you log in to the environment within 30 minutes and the CNA status is normal but the alarm "ALM-10.1000027 Heartbeat Communication Between the Host and VRM Interrupted" is generated, the alarm will be automatically cleared after 30 minutes. Otherwise, clear the alarm as instructed in ALM-10.1000027 Heartbeat Communication Between the Host and VRM Interrupted.
21. On the Configure VRM page, enter the configuration information and click Next to verify the configuration information.
If the verification fails, an error message is displayed. Modify the configuration information based on the error message.
Installation Mode: Select the active/standby or single-node installation mode based on operation scenarios. The
active/standby mode is recommended.
If the VRM node works in standalone mode and is faulty, data cannot be restored, and the system reliability is low. The
active/standby installation mode is used as an example.
Size of image repositories. The options are as follows: 10 GB (total required disk space: 332 GB), 100
GB (total required disk space: 422 GB), 200 GB (total required disk space: 522 GB), and 300 GB (total
required disk space: 622 GB).
Configuration Mode. This parameter is mandatory when Select Installation Mode is set to Custom
installation. You can select By scale or Custom.
Configuration Item. This parameter is mandatory when Select Installation Mode is set to Custom
installation. If Configuration Mode is set to By scale, the value range is the same as that of System
Scale. If you select Custom, the value range of CPU is 4 to 20, the value range of Memory is 8 to 30,
and the value range of Size of image repositories is 10 to 300.
If you select Container Management when installing the VRM for the first time, the container management function will be
automatically enabled after the installation is complete. If the container management function is not enabled when the VRM is
installed for the first time, you can manually enable the function. For details, see Enabling Container Management .
Configuration Mode and Configuration Item: Set the VRM management scale based on the system deployment
scale or customize the VRM management scale based on the VM flavors. In System Scale, VM indicates the number
of VMs, and PM indicates the number of hosts.
Floating IP Address: specifies the management plane floating IP address of the VRM nodes. This parameter is
required only when the VRM nodes are deployed in active/standby mode. It is user-defined, and other parameters are
automatically allocated by the system.
Active VRM Node Name: uniquely identifies the active VRM node.
Standby VRM Node Name: uniquely identifies the standby VRM node.
The value of Subnet Mask must be the same as that of Netmask configured for the mounted CNA node.
Quorum IP Address: Enter 1 to 3 quorum IP addresses. The quorum IP address is required only when the VRM node
is deployed in active/standby mode. You are advised to set the first quorum IP address to the gateway address of the
management plane, and set other quorum IP addresses to IP addresses of global servers, such as the AD servers or the
DNS servers, that communicate with the management plane.
Configure Management VLAN Tag: You can select this option to configure the management plane VLAN.
VLAN ID: Specifies the management plane VLAN. If no value is specified, the system uses VLAN 0 by default. For
details about the configuration method, see Table 1.
If you need to install hosts and VRM nodes with VLANs, configure VLANs.
MAC Address Pool: This parameter is mandatory and specifies the segment mode or the user-defined mode.
In the segment mode, any of the 10 segments can be selected. Each of the first nine segments contains 10,000 sequential
MAC addresses, and the tenth segment contains 5000 sequential MAC addresses.
1: 28:6E:D4:88:C6:29 to 28:6E:D4:88:ED:38
2: 28:6E:D4:88:ED:39 to 28:6E:D4:89:14:48
3: 28:6E:D4:89:14:49 to 28:6E:D4:89:3B:58
4: 28:6E:D4:89:3B:59 to 28:6E:D4:89:62:68
5: 28:6E:D4:89:62:69 to 28:6E:D4:89:89:78
6: 28:6E:D4:89:89:79 to 28:6E:D4:89:B0:88
7: 28:6E:D4:89:B0:89 to 28:6E:D4:89:D7:98
8: 28:6E:D4:89:D7:99 to 28:6E:D4:89:FE:A8
9: 28:6E:D4:89:FE:A9 to 28:6E:D4:8A:25:B8
10: 28:6E:D4:8A:25:B9 to 28:6E:D4:8A:39:40
In the user-defined mode, you can customize the range of the MAC address pool. The start MAC address cannot be smaller
than 28:6E:D4:88:C6:29.
The default value range is 28:6E:D4:88:C6:29 to 28:6E:D4:8A:39:40 (95,000 addresses in total: nine segments of 10,000 plus one segment of 5,000).
Regardless of the segment mode or user-defined mode, you can modify the MAC address pool range or add MAC address
segments after the deployment is successful.
Admin User Password: This parameter is mandatory and specifies the password of the admin user for logging in to
the VRM node to be configured.
Confirm Admin User Password: This parameter is mandatory and specifies the verification of the password of the
admin user for logging in to the VRM node to be configured.
Root User Password: This parameter is mandatory and specifies the password of the root user for logging in to the
VRM node to be configured.
Confirm Root User Password: This parameter is mandatory and specifies the verification of the password of the
root user for logging in to the VRM node to be configured.
GRUB Password: This parameter is mandatory and specifies the GRUB password for logging in to the VRM node to
be configured.
Confirm GRUB Password: This parameter is mandatory and specifies the verification of the GRUB password for
logging in to the VRM node to be configured.
Gandalf User Password: This parameter is mandatory and specifies the password of the gandalf user for logging in
to the VRM node to be configured.
Confirm Gandalf User Password: This parameter is mandatory and specifies the verification of the password of the
gandalf user for logging in to the VRM node to be configured.
Postgres Password: This parameter is mandatory and specifies the password of the Postgres GaussDB database.
Confirm Postgres Password: This parameter is mandatory and specifies the verification of the password of the
Postgres GaussDB database.
Galax Password: This parameter is mandatory and specifies the password of the Galax GaussDB database.
Confirm Galax Password: This parameter is mandatory and specifies the verification of the password of the Galax
GaussDB database.
If the task fails to be started, you can click View Log on the right to view the failure cause.
If you enter an incorrect password of user gandalf when logging in to the host, the account will be locked for 5 minutes. After the account is unlocked, enter the password again.
For details about how to manually unlock the account, see How Can I Handle the Issue that the Node Fails to Be Remotely
Connected During the Host Configuration for Customized VRM Installation? .
During the host configuration, if the system displays a message indicating that the remote connection to the node fails, rectify the
fault based on How Can I Handle the Issue that the Node Fails to Be Remotely Connected During the Host Configuration for
Customized VRM Installation? .
23. After the configuration is complete, click Next. On the Configure Datastore page, click Refresh Storage Device. After
the refresh task is complete, the storage device is loaded to the list. Select the storage device.
Only the storage devices whose capacity is greater than 100 GB are returned when you query storage devices.
24. Click Next. On the displayed Select Rights Management Mode page, select Common (recommended) or Role-based.
If Role-based is selected, ensure that the interconnected components (such as FusionAccess) support this mode.
In Role-based mode:
Sysadmin User Password: This parameter is mandatory and specifies the password of the sysadmin user for
logging in to the VRM node to be configured.
Confirm Sysadmin User Password: This parameter is mandatory and specifies the verification of the password of
the sysadmin user for logging in to the VRM node to be configured.
Secadmin User Password: This parameter is mandatory and specifies the password of the secadmin user for
logging in to the VRM node to be configured.
Confirm Secadmin User Password: This parameter is mandatory and specifies the verification of the password of
the secadmin user for logging in to the VRM node to be configured.
Secauditor User Password: This parameter is mandatory and specifies the password of the secauditor user for
logging in to the VRM node to be configured.
Confirm Secauditor User Password: This parameter is mandatory and specifies the verification of the password
of the secauditor user for logging in to the VRM node to be configured.
The rights management mode cannot be changed after FusionCompute installation is complete.
25. Click Next to switch to the Install VRM page and click Install VRM to start the installation.
If you want to save the information on the Complete page, click Export Installation Information to export all information on this
page.
Perform the following operations. Otherwise, residual data may exist in the system.
If you want to continue the installation, click the page redirection link or click Finish. In the dialog box that is
displayed, click Continue to trigger the installation data clearance task.
If you need to uninstall the service, click the page redirection link or click Finish and click Confirm in the dialog
box that is displayed to trigger the uninstallation task.
After the new FusionCompute environment is installed, if you log in to the environment within 30 minutes, the CNA status is
normal, and the alarm "ALM-10.1000027 Heartbeat Communication Between the Host and VRM Interrupted" is generated, the
alarm will be automatically cleared after 30 minutes. Otherwise, clear the alarm as instructed in ALM-10.1000027 Heartbeat
Communication Between the Host and VRM Interrupted .
28. Install the linux-firmware firmware package on all CNA nodes. For details, see How Do I Install the linux-firmware
Firmware Package? .
Prerequisites
The FusionCompute web tool has been installed.
Procedure
1. Enter https://Host IP address:8080 in the address box of the browser to open the tool page. Enter the administrator
username and password and click Login.
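Before logging in, you can confirm that the tool page is reachable from the local PC. A minimal sketch, assuming a terminal is available on the PC, the host IP address is 192.0.2.10 (a placeholder), and the tool uses a self-signed certificate:
curl -k -I https://192.0.2.10:8080    # -k skips certificate validation; any HTTP response confirms the tool page is up
If the command times out, check the network connection between the PC and the host before retrying the login.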
One-Click Installation
1. On the Prepare for Installation page, select FusionCompute and click Next.
This section uses FusionCompute and IPv4 as an example.
If the CPU vendor of the server where the host is to be installed is Hygon, select the Hygon server and configure the server BIOS
in advance. For details, see Configuring the BIOS on Hygon Servers .
If the host needs to be installed and a PowerLeader AMD server is used, log in to the BMC or BIOS and set the boot mode to
legacy BIOS.
Do not change the password of user gandalf during the installation. Change the password after the VRM installation is complete.
Otherwise, the VRM installation may fail.
2. On the Upload Package page, save the CNA installation package (in ISO format) and VRM installation package (in ZIP
format) to the specified directory, and click Upload.
If the x86 architecture is used or x86 hosts need to be added in the Arm scenario, you need to upload the virtio-win driver
package FusionCompute_SIA-8.1.0.1-GuestOSDriver_X86.zip.
If the RoCE NIC driver needs to be installed on the host, on the Upload Package page, select Driver Upload and click
Import Driver Package. In the dialog box that is displayed, click Add File, add a driver file and a verification file
(optional), and click Upload. You can check the upload progress, upload status, and verification status at the bottom of the
page.
If the host uses Mellanox ConnectX-4 or Mellanox ConnectX-5 series NICs, the NICs are in the compatibility list, and the
Mellanox NIC driver needs to be installed, you can upload the Mellanox driver package.
x86 and Arm support the installation of IPv6 hosts using PXE.
If the IPv6 protocol is used on x86 or Arm, check whether the server BIOS and NIC support IPv6 before installing
a host in PXE mode. For details, see the product documentation of the corresponding server or contact the technical
support of the server vendor. If the server does not support IPv6, use a server that supports IPv6.
Before installing a host in the x86 architecture using PXE, configure the server BIOS and NIC, set Boot Type to
UEFI Boot, and set PXE Boot Capability to UEFI:IPV6.
After the installation package is uploaded, the installation mode is locked. You cannot switch between Custom Installation and
One-Click Installation.
After obtaining the software package, do not change the name of the software package. Otherwise, the software package cannot be
verified when it is uploaded. As a result, the software package cannot be installed.
The verification file in SHA-256 or ASC format with the same name as the RoCE NIC driver file must be uploaded. Otherwise,
security risks exist.
Before installing the host, check whether the BIOS password is required during the host restart. For details about the BIOS
password of the server, see the product documentation of the server.
Before installing the host, ensure that PXE is enabled for the NICs used in the PXE boot and disabled for other NICs. For details,
see the product documentation of the server. For some servers, such as TaiShan 200 (model 2280), you need to set External
Network Card Boot to Enable in the BIOS settings.
When installing hosts on some servers, you need to enter the IP addresses of the hosts. You are advised to log in to the BMC
environment to check the installation status during host installation. If you do not enter the host IP address during the installation,
the installation will be suspended for a long time. If information similar to "Please input the TFTP server IP address: " is displayed,
enter the host IP address.
If multiple hosts need to be installed at the same time, you are advised to install a maximum of 10 hosts at a time to prevent slow
host installation due to heavy traffic. When using the one-click installation function, do not install a VRM node on a CNA node
where the FusionCompute web tool is located. Otherwise, the one-click installation function cannot be used in subsequent large-
scale installation scenarios. If a VRM node has been installed on such a CNA node, you are advised to use the custom installation
mode in subsequent installation operations.
Before installing hosts with VLANs, configure the network for the switch ports corresponding to the host network ports. For
details about how to modify switch port information, see Example of Configuring Switches .
Before the upload, the system checks whether the target space is sufficient. If the space is insufficient, check whether other
redundant files are manually copied to /installer on the host. If so, delete them.
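The space check described above can also be performed manually before the upload. A minimal sketch, assuming shell access to the host where the tool runs:
df -h /installer     # confirm the target directory has enough free space for the packages
ls -lh /installer    # look for manually copied files that are no longer needed and can be deleted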
If secure boot needs to be enabled on the host, disable secure boot before installing the host OS. After the installation is complete,
enable secure boot as instructed in Enabling or Disabling Secure Boot for a Host .
For RoCE NICs, upload the driver file named MLNX_OFED_LINUX-23.10-0.5.5.0-euleros2.0sp12-aarch64.tgz or
MLNX_OFED_LINUX-23.10-0.5.5.0-euleros2.0sp12-x86_64.tgz and the verification file with the same name in SHA-256 or
ASC format. For compatibility information, see FusionCompute Compatibility Query.
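If you need to generate or check the SHA-256 verification file yourself, a hedged sketch follows; the .sha256 file-name suffix here is an assumption, so follow the naming used in your delivery:
sha256sum MLNX_OFED_LINUX-23.10-0.5.5.0-euleros2.0sp12-x86_64.tgz > MLNX_OFED_LINUX-23.10-0.5.5.0-euleros2.0sp12-x86_64.tgz.sha256    # generate the checksum file (hypothetical naming)
sha256sum -c MLNX_OFED_LINUX-23.10-0.5.5.0-euleros2.0sp12-x86_64.tgz.sha256    # verify the package before uploading it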
If the host has both Mellanox NICs and HBAs, the Mellanox NIC driver is installed in common mode by default and does not
carry the NVMe module. The functionality of HBAs is prioritized. If eVol storage devices need to be connected using the NVMe
over RoCEv2 protocol, you need to reinstall the NIC driver by referring to the Mellanox NIC driver installation guide. If the host
has only Mellanox NICs, you need to install the NIC driver in the mode where the NVMe module is carried to ensure that the
NVMe over RoCEv2 protocol can be used to connect eVol storage devices. You can run the lspci | grep 'Fibre' command to
check whether the host is configured with an HBA.
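To confirm which case applies before choosing the driver installation mode, a minimal check sketch (run on the host as a privileged user; the second command is an assumption, not from the original text):
lspci | grep 'Fibre'        # any output indicates an FC HBA is present
lspci | grep -i mellanox    # any output indicates a Mellanox NIC is present
If only the second command returns output, install the Mellanox NIC driver in the mode that carries the NVMe module, as described above.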
Other DHCP servers cannot be running in the network segment where the DHCP address pool resides.
After the host is successfully installed, you can adjust the partition size only by reinstalling the system. If the disk where the host
OS is installed has VRM VM or user data, the partition size must be the same as that before the OS reinstallation. Otherwise, the
VM data will be overwritten.
If the VRM node is deployed in standalone mode, data may fail to be restored when the node becomes faulty, so system
reliability is low. Active/standby installation is therefore used as an example. Set the VRM management scale based on
the system deployment scale, or customize the VRM management scale based on the VM flavors. In System Scale, VM
indicates the number of VMs, and PM indicates the number of hosts.
To import parameters, go to 5.
6. Click Download Template and set the parameters listed in the template.
9. Set parameters.
Swap Partition Size: This parameter is mandatory and specifies the size of the swap partition.
System Disk Software RAID: This parameter is mandatory and specifies whether system disks form software RAID 1.
DHCP Pool Start IP Address: This parameter is mandatory. For the DHCP service, the value is the start IP address and the number of IP addresses that can be allocated to the host to be installed. It is user-defined, and other parameters are automatically allocated by the system.
DHCP Mask: This parameter is mandatory and specifies the subnet mask of the IP address segment assigned by the DHCP pool.
DHCP Gateway: This parameter is mandatory and specifies the gateway of the IP address segment assigned by the DHCP pool.
DHCP Pool Capacity: This parameter is mandatory. You are advised to set the number of IP addresses in the DHCP address pool to at least twice the number of physical nodes to be loaded. The default value is 50.
SAN BOOT install: This parameter is mandatory and specifies whether to use the SAN BOOT mode for installation. No is selected by default. Before using the SAN BOOT mode for installation, configure the storage, HBA, and BIOS. For details about the application scenarios and installation configuration of SAN BOOT, see FusionCompute 8.8.0 Best Practices for Installation in the FC SAN BOOT Mode.
Gandalf User Password: This parameter is mandatory and specifies the password of user gandalf for logging in to the CNA node to be installed.
Confirm Gandalf User Password: This parameter is mandatory and specifies the verification of the password of user gandalf for logging in to the CNA node to be installed.
Root User Password: This parameter is mandatory and specifies the password of user root for logging in to the CNA node to be installed.
Confirm Root User Password: This parameter is mandatory and specifies the verification of the password of user root for logging in to the CNA node to be installed.
GRUB Password: This parameter is mandatory and specifies the GRUB password of the CNA node to be installed.
Confirm GRUB Password: This parameter is mandatory and specifies the verification of the GRUB password for logging in to the CNA node to be installed.
Redis Password: This parameter is mandatory and specifies the password of the Redis database account.
Confirm Redis Password: This parameter is mandatory and specifies the verification of the password of the Redis database account.
b. Add information about hosts except the first CNA node, as described in Table 2.
BMC IP: This parameter is mandatory. Enter the BMC system IP address of the host server.
BMC Username: This parameter is mandatory if Login Authentication is selected. If this parameter is left blank, the installation will fail.
BMC Password: This parameter is mandatory if Login Authentication is selected. If this parameter is left blank, the installation will fail.
MAC Address: Specifies the MAC address of the physical port on the host for PXE booting host OSs. If the network configuration needs to be specified before host installation, obtain the MAC address to identify the target host. For details, see the host hardware documentation.
CNA Management IP: This parameter is mandatory and specifies the host IP address. The value of Subnet Mask must be the same as that of Netmask configured for the mounted CNA node.
Management Plane VLAN Tag: The default value is 0. You do not need to change the value.
Installation Mode: Select an installation mode based on the operation scenario. You can select Single node or Active/standby. Active/standby is recommended.
Container Management: Whether to enable the container management function on the VRM node.
Size of image repositories: The options are as follows: 10 GB (total required disk space: 332 GB), 100 GB (total required disk space: 422 GB), 200 GB (total required disk space: 522 GB), and 300 GB (total required disk space: 622 GB).
Configuration Mode: This parameter is mandatory when Select Installation Mode is set to Custom installation. You can select By scale or Custom.
Configuration Item: This parameter is mandatory when Select Installation Mode is set to Custom installation. If Configuration Mode is set to By scale, the value range is the same as that of System Scale. If you select Custom, the value range of CPU is 4 to 20, the value range of Memory is 8 to 30, and the value range of Size of image repositories is 10 to 300.
NOTE: If you select Container Management when installing the VRM for the first time, the container management function will be automatically enabled after the installation is complete. If the container management function is not enabled when the VRM is installed for the first time, you can manually enable the function. For details, see Enabling Container Management.
Floating IP Address: Specifies the management plane floating IP address of the VRM nodes. This parameter is required only when the VRM nodes are deployed in active/standby mode. It is user-defined, and other parameters are automatically allocated by the system.
Active VRM Node Name: This parameter uniquely identifies the active VRM node.
Standby VRM Node Name: This parameter uniquely identifies the standby VRM node.
Quorum IP Address: Enter 1 to 3 quorum IP addresses. This parameter is required only when VRM nodes are deployed in active/standby mode. You are advised to set the first quorum IP address to the gateway of the management plane, and set other quorum IP addresses to IP addresses of servers that can communicate with the management plane, such as the AD server or the DNS server.
VLAN ID: Specifies the VLAN of the management plane. If no value is specified, the system uses VLAN 0 by default.
MAC Address Pool: This parameter is mandatory and specifies the segment mode or the user-defined mode.
In the segment mode, any of the 10 segments can be selected. Each of the first nine segments contains 10,000 sequential MAC addresses, and the tenth segment contains 5,000 sequential MAC addresses.
1: 28:6E:D4:88:C6:29 to 28:6E:D4:88:ED:38
2: 28:6E:D4:88:ED:39 to 28:6E:D4:89:14:48
3: 28:6E:D4:89:14:49 to 28:6E:D4:89:3B:58
4: 28:6E:D4:89:3B:59 to 28:6E:D4:89:62:68
5: 28:6E:D4:89:62:69 to 28:6E:D4:89:89:78
6: 28:6E:D4:89:89:79 to 28:6E:D4:89:B0:88
7: 28:6E:D4:89:B0:89 to 28:6E:D4:89:D7:98
8: 28:6E:D4:89:D7:99 to 28:6E:D4:89:FE:A8
9: 28:6E:D4:89:FE:A9 to 28:6E:D4:8A:25:B8
10: 28:6E:D4:8A:25:B9 to 28:6E:D4:8A:39:40
In the user-defined mode, you can customize the range of the MAC address pool. The start MAC address cannot be smaller than 28:6E:D4:88:C6:29. The default value range is 28:6E:D4:88:C6:29 to 28:6E:D4:8A:39:40 (95,000 addresses).
Regardless of the segment mode or user-defined mode, you can modify the MAC address pool range or add MAC address segments after the deployment is successful.
Root User Password: This parameter is mandatory and specifies the password of user root for logging in to the VRM node to be configured.
Confirm Root User Password: This parameter is mandatory and specifies the verification of the password of user root for logging in to the VRM node to be configured.
GRUB Password: This parameter is mandatory and specifies the GRUB password for logging in to the VRM node to be configured.
Confirm GRUB Password: This parameter is mandatory and specifies the verification of the GRUB password for logging in to the VRM node to be configured.
Gandalf User Password: This parameter is mandatory and specifies the password of user gandalf for logging in to the VRM node to be configured.
Confirm Gandalf User Password: This parameter is mandatory and specifies the verification of the password of user gandalf for logging in to the VRM node to be configured.
Postgres Password: This parameter is mandatory and specifies the password of the Postgres GaussDB database.
Confirm Postgres Password: This parameter is mandatory and specifies the verification of the password of the Postgres GaussDB database.
Galax Password: This parameter is mandatory and specifies the password of the Galax GaussDB database.
Confirm Galax Password: This parameter is mandatory and specifies the verification of the password of the Galax GaussDB database.
If the installation progress is still 0% 20 minutes after the host is installed using PXE, log in to the BMC system and check whether the
login page is displayed. If the login page is displayed, the server does not enter the PXE boot mode. In this case, enter the BIOS to check
the PXE settings. If the login page is not displayed, contact technical support.
12. When the installation progress of both VRM and the host reaches 100%, the installation is complete. In this case, click
Next.
The Complete page is displayed.
If the host installation progress is not 100%, hosts other than the active and standby nodes fail to be installed. In this case, mount
the CNA nodes manually.
If a host fails to be installed or is abnormal, an alarm sound is played on the page. You can click the icon in the upper right corner to
disable the alarm sound.
If the installation fails, locate and resolve the fault based on the following causes and click Start Installation in the Operation
column, or select the host that fails to be installed and click Start Installation.
The host fails to be booted from PXE (network). As a result, the installation times out.
a. Log in to the host node and run service dhcpd status to check whether the DHCP service of the
installation tool is normal.
b. Contact the network administrator to check whether the host to be installed and the host where
the installation tool is located are in the same network segment. If they are not in the same
network segment, configure the DHCP relay.
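A minimal check sequence for the two points above, assuming shell access to the installation tool node (tcpdump availability and the eth0 port name are assumptions; adjust to your environment):
service dhcpd status                    # confirm the DHCP service of the installation tool is running
ip addr show                            # confirm the node's IP address is on the installation plane
tcpdump -i eth0 port 67 or port 68      # optional: watch for DHCP requests from the hosts being installed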
Perform the following operations. Otherwise, residual data may exist in the system.
If you want to continue the installation, click the page redirection link or click Finish. In the dialog box that is
displayed, click Continue to trigger the installation data clearance task.
If you need to uninstall the service, click the page redirection link or click Finish and click Confirm in the dialog
box that is displayed to trigger the uninstallation task.
After the new FusionCompute environment is installed, if you log in to the environment within 30 minutes, the CNA status is
normal, and the alarm "ALM-10.1000027 Heartbeat Communication Between the Host and VRM Interrupted" is generated, the
alarm will be automatically cleared after 30 minutes. Otherwise, clear the alarm as instructed in ALM-10.1000027 Heartbeat
Communication Between the Host and VRM Interrupted .
14. Install the linux-firmware firmware package on all CNA nodes. For details, see How Do I Install the linux-firmware
Firmware Package? .
3.3.4.3 Troubleshooting
Solution:
Reinstall the target host and assign an IP address that does not conflict with that of other devices on the network to the host.
Possible cause 4: When the installation tool PXE is used to install hosts in a batch, the host installation progress varies. The IP
addresses of the installed hosts are temporarily occupied by the hosts that are still being installed.
Solution: Ensure that the address segment of the DHCP pool is different from the IP addresses of the planned host nodes to avoid
IP address conflicts. For details, see Data Preparation . You are advised to install a maximum of 10 hosts at a time.
The root account is locked because incorrect passwords are entered for multiple consecutive times.
Solution:
If the password of the root user is incorrect, enter the correct password.
If the root account is locked, wait for 5 minutes and try again.
Possible cause: The host OS names at the same site are duplicate.
Solution:
Log in to the OS of host 02 and run the following command:
sudo hostnamectl --static set-hostname host-name
If no command output is displayed, the change is successful. Continue the installation.
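To confirm that the change took effect, a short verification sketch (cna02 is a hypothetical host name used for illustration):
sudo hostnamectl --static set-hostname cna02     # set the new, non-duplicate host name
hostnamectl status | grep 'Static hostname'      # should print the new name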
Problem 5: The Host Where the Installation Tool Is Installed Does Not Automatically
Start Services After Being Restarted
Possible cause: The command for automatically starting services fails to be executed during the host startup.
Solution:
The MCNA node is the CNA node on which the installation tool is installed.
2. Run the following command and enter the password of user root to switch to user root:
su - root
Possible cause: The IP address of the installation tool node (that is, the configured DHCP service address) cannot
communicate with the installation plane.
Solution:
Check the physical connection between the installation tool node and the host to be installed. Ensure that no hardware
fault, such as network cable or network port damage, occurs.
Check the physical devices between the installation tool node and the host to be installed, such as switches and
firewalls. Ensure that the DHCP, TFTP, and FTP ports are not disabled or the rates of the ports are not limited.
TFTP and FTP have security risks. You are advised to use secure protocols such as SFTP and FTPS.
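As a quick local check that the required services are actually listening on the installation tool node, a hedged sketch (the standard port numbers are assumed; adjust if your deployment differs):
ss -ulpn | grep -E ':67|:69'    # DHCP (67/udp) and TFTP (69/udp) listeners
ss -tlpn | grep ':21'           # FTP control port (21/tcp), if FTP is used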
If the IP address of the installation tool is not in the network segment of the installation plane, check whether the DHCP
relay is configured on the switch.
If a VLAN is configured for the host management plane, ensure that the installation plane and the host management
plane are in different VLANs and the installation tool can communicate with the two planes.
During PXE-based installation, ensure that the data packets on the PXE port do not carry VLAN tags, and allow these data packets
in the network settings.
For nodes to be installed in PXE mode, configure the switch port VLAN as the PVID with untagged packets, the same as for nodes
installed in non-PXE mode.
After the possible faults are rectified, boot the hosts from the network again.
Possible cause: Multiple DHCP servers are deployed on the installation plane.
Solution: Disable redundant DHCP servers to ensure that the installation tool provides DHCP services.
Possible cause: The host to be installed is connected to multiple network ports, and DHCP servers exist on the network
planes of multiple network ports.
Solution: Disable DHCP servers on non-installation planes to ensure that the installation tool provides DHCP services.
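To locate unexpected DHCP servers on a plane, a hedged detection sketch (nmap is not part of the product tooling and is assumed to be installed; run it from a node on the same Layer 2 segment):
nmap --script broadcast-dhcp-discover    # sends a DHCPDISCOVER broadcast and lists every server that answers
Any responder other than the installation tool should be disabled or moved off the installation plane.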
Possible cause: The host to be installed supports booting from the network, but this function is not configured during booting.
Solution: Configure the hosts to be installed to boot from the network (by referring to the corresponding hardware documentation),
and then check Installation Progress in the Install Host step of the PXE process.
Possible cause: Hosts to be installed do not support booting from the network.
Solution: Install the hosts by mounting ISO images.
Possible cause: Packet loss or delay occurs due to network congestion or high loads on the switch.
Solution: Ensure that the network workloads are light during the installation process. If more than 10 hosts are to be
installed, boot 10 hosts from the network per batch.
Problem 7: The User Is Automatically Logged Out After a Successful Login Using a Firefox Browser, and an
Error Message Indicating that the User Has Not Logged In or the Login Timed Out Is Displayed When the
User Clicks on the Operation Page
Possible cause: The time of the server where the FusionCompute web installation tool is deployed is not synchronized with the
local time. As a result, the Firefox browser considers that the registered session has expired.
Solution: Change the local time, or run the date -s xx:xx:xx command (xx:xx:xx indicates hours:minutes:seconds) on the server
to ensure that the local time is the same as the time of the server where the web installation tool is deployed. Then refresh
the browser and log in again.
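For example, to set the server time to 09:30:00 (a hedged illustration; the time value is hypothetical):
date -s 09:30:00    # set the system time (run as root)
date                # verify the new time
hwclock -w          # optional: also write the time to the hardware clock, if hwclock is available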
Installing FusionCompute
Phytium does not support installation using an ISO image. For details about how to install Phytium-based hosts and VRM
nodes, see FusionCompute Web Tool-based Installation (Recommended) .
Install hosts: Install a host by using an ISO image. For details about the installation procedure, see Installing Hosts Using ISO Images
(x86) or Installing Hosts Using ISO Images (Arm).
If you plan to install a VRM node using an ISO image, you must install it on a new physical server; you cannot install it on a
physical server where a host is already installed. If you need to install a host and a VRM node on the same physical server, install
the VRM node on a VM using the tool.
Scenarios
When there are a small number of hosts in the system, install the OSs on the hosts by mounting an ISO image to them. This
enables them to provide hardware virtualization services.
For details about hardware models, firmware versions, and VM OS versions supported by the FusionCompute host OS, visit
FusionCompute Compatibility Query.
After the installation, if "unknown error" is displayed during the startup, unmount the ISO image and restart the host.
When the installation progress reaches 100%, the error message that contains "isopackage.sdf file does not match" is displayed. In this
case, rectify the fault based on What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported
During System Installation? .
Prerequisites
The host to be installed has been configured as required. For detailed requirements, see Host Requirements .
The system date and time of the host to be installed have been set to the current date and time (UTC).
The local PC is properly communicating with the management plane and BMC plane of the host. It is recommended that you
connect the local PC and the host to be installed to the same switch and assign an IP address to the PC from the planned
network segment of the management plane.
You have obtained the IP address, username, and password for logging in to the BMC system of the physical server.
You have obtained the BIOS passwords of the hosts if these passwords have been configured.
KVM is available. For details about how to obtain the tool, see Table 2 .
You have obtained the image file FusionCompute_CNA-8.8.0-X86_64.iso for installing the host and verified the file. For
details about the verification method, see Verifying the Software Package .
Data
Host name
The name can contain only digits, letters, hyphens (-), and underscores (_). It must start with a letter or a digit and cannot
exceed 64 characters.
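These naming rules can be sanity-checked before installation. A minimal sketch (the regular expression below is an illustration of the stated rules, not an official validator, and cna-01 is a hypothetical name):
echo "cna-01" | grep -Eq '^[A-Za-z0-9][A-Za-z0-9_-]{0,63}$' && echo valid || echo invalid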
Information about the management NIC, including the IP address, gateway address, and subnet mask/subnet prefix length
If you specify a management plane VLAN, set the type of the VLAN on the access switch port connected with the management
network port to tagged so that the management plane and the switch can communicate with each other.
If you do not specify a management plane VLAN, set the types of some VLANs on the access switch port connected with the
management network port to untagged so that the aggregation switch is reachable to the uplink IP packets from the management
plane through these VLANs.
Procedure
Switch to the host installation window.
The following example describes how to install the CNA host to a local disk on a 2288H V5 server whose BIOS version is 0.25 (U47) by
mounting an image file to the host using the KVM tool.
In the x86 scenario, FusionCompute supports the CNA host installation in SAN BOOT or local disk mode. For details about installation in
the SAN BOOT mode, see FusionCompute 8.8.0 Best Practices for Installation in the FC SAN BOOT Mode.
This section uses eth0 which functions as the management NIC as an example to describe operations related to NICs.
When you log in to a remote server using HTML5, if Caps Lock fails to switch between uppercase and lowercase letters, press
Shift+Letter to enter an uppercase letter.
For details about the default username and password for logging in to the BMC system, see the desired server
documentation. If the username and password have been changed, obtain the new username and password from the
administrator.
If you cannot log in to the BMC system of a single blade server, you are advised to log in to the SMM of the blade server and open the
remote control window of the server.
2. On the main menu of the remote control page of the host, choose Configuration > Boot Device. Set Effective to One-
time and Boot Medium to DVD-ROM. After the setting is complete, click Save.
PowerLeader AMD servers do not support UEFI deployment. You need to set the boot mode to legacy BIOS in the BIOS configuration.
3. Use the KVM tool to log in to the host remote control console using the BMC IP address, username, and password.
4. Mount an image.
b. Click Connect to connect to the image. When Connect changes to Disconnect, the image is mounted to the host.
c. Click Forced System Reset to restart the server. After the restart is successful, go to 5.
If the restart fails, perform the following operations and then go to 5:
i. Repeat the image mounting and restart operations. When the host restarts, press F11 repeatedly until the screen for
entering the BIOS password is displayed.
For hosts of some models, when the hosts are being restarted, you do not need to enter the BIOS password. In this
case, go to 4.c.ii.
The default BIOS password for a V5 server is Admin@9000. The default BIOS password for a Huawei RH-
series rack server, X-series high-density server, E6000 blade server, or Huawei Tecal E9000 server is
Huawei12#$ or uniBIOS123. Change the password upon the first login.
iii. On the displayed screen, select Virtual DVD-ROM VM 1.1.0 to set the boot device to CD/DVD-ROM
drive.
5. On the displayed screen, select Install within 30s and press Enter.
If Install is not selected within 30s, the system is booted from local disks by default. In this case, restart the host and select
Install.
If the host OS needs to be reinstalled due to an OS fault, select Install(recover) to rectify the fault.
When installing the host OS in UEFI mode, select Installation. In this mode, select Installation(recover) to rectify the fault.
The system starts automatic loading. The loading operation takes about 3 minutes. After the loading is successful, the host
configuration page is displayed.
If the error message "cdrom not found" is displayed due to a network problem during the system loading, reconnect the image file and
restart the system.
Figures in this section include sample parameter values, which may vary from the actual parameter values on the site.
Press Tab or the up and down arrow keys to move the cursor.
Press Enter to select or execute the item on which the cursor is located.
Press the space bar to toggle between options.
Do not use an SSD card to install the OS. You are advised to select disks of the Local-Disk type as system disks.
If you need to install the host OS again because the host OS is faulty, do not select Format all partitions. Otherwise, the
system disk partitions used by users are formatted, causing user data loss. After selecting the disk for installing the host OS,
deselect Format all partitions and click OK.
You are advised to disable the JBOD mode of the RAID controller card, create RAID 1, and then select sda for installation.
Otherwise, all disks are displayed on the disk selection page and are difficult to distinguish from each other. In this case, an
incorrect disk may be selected.
Set Hard Disk to its default value to allow the system to install the OS on the first identified disk, which is usually used
for a RAID 1 array.
If you need to modify the disk information, click Edit.
a. In the Choose the disk to install area, select a disk where an OS is to be installed. Local-Disk indicates
that the disk is a local disk. If you want to configure system disks as software RAID 1, select another disk in
the Choose the disk to set up software RAID1 area.
c. Set the swap partition size. The default value is the minimum size 30720 MB.
If memory overcommitment is not enabled for the cluster to which the host belongs, set the swap partition size
to the minimum value 30720 MB. If memory overcommitment needs to be enabled, calculate the swap
partition size as instructed in Smart Memory Overcommitment . For details about how to set the memory
overcommitment function for a cluster, see Enabling Memory Overcommitment for a Cluster .
If you need to restore the swap partition size to the default value, click Auto size. After you click Auto size, the
swap partition size is set to the smaller one of the remaining system disk space and 60% of the total host memory
size by default. If you need to install a VRM VM on this host and the total host memory size is close to or greater
than the system disk capacity, the system automatically sets the swap partition size. As a result, there may be no
sufficient space for installing a VRM VM on the host. Therefore, you need to manually configure the swap partition
size as required.
After the installation disk is switched, you need to click Auto size to update the swap partition size. Otherwise, the
swap partition size remains unchanged.
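A minimal sketch of the Auto size rule described above, for manually estimating the value (illustrative only; the installer computes this itself):
mem_mb=$(free -m | awk '/^Mem:/{print $2}')        # total host memory in MB
echo "60% of memory: $((mem_mb * 60 / 100)) MB"    # compare against the remaining system disk space; the smaller value is used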
d. Click OK. If Format all partitions is selected, the system will ask you whether to format all partitions of the disk. In this
case, click Yes.
If Failed to check whether the system can be restored is displayed when a host is being reinstalled after a fault is
rectified, perform operations as instructed in What Can I Do If Disk Selection Fails When a Host Is Being Reinstalled
After a Fault Is Rectified? .
7. Choose the IP address type of the host management plane and configure network information for the host.
The IP address type must be consistent with that of the actual system management plane. After the system is installed, you
are not allowed to switch the IP address type of the management plane.
IPv4
IPv6
Select the IP address type of the host management plane based on network planning.
The management plane supports only single-stack deployment. During installation, select only IPv4 or IPv6. Do not configure two
types of IP addresses at the same time.
In a resource pool, the IP address types of all host management planes must be the same.
Configure only one management NIC for a host. If you configure IP addresses for other NICs, network communication may fail.
For example, if a host is configured with ten NICs ranging from eth0 to eth9, configure an IP address for only one of them.
If the host OS needs to be reinstalled due to an OS fault, select the default network port (the first port added to the management
aggregation port) of the original host.
Check whether a VLAN needs to be configured for the management plane. If yes, go to 9.
The management plane VLAN must be planned, and the management plane VLANs used by the active and standby
management nodes must be the same. The management VLAN used by hosts can be different from that used by the
management nodes, but the packets with the VLAN tagged must be able to transfer between the hosts and
management nodes.
If no, go to 8.
If you specify a management plane VLAN, set the type of the VLAN on the access switch port connected with the management network
port to tagged so that the management plane and the switch can communicate with each other.
If you do not specify a management plane VLAN, set the types of some VLANs on the access switch port connected with the
management network port to untagged so that the aggregation switch is reachable to the uplink IP packets from the management plane
through these VLANs.
8. Configure host network information (without configuring the VLAN for the management plane).
IPv4
IPv6
Prefix: Enter the subnet prefix length of the host management plane.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical
keypad on the right.
9. Configure host network information (with configuring the VLAN for the management plane).
IPv4
VLAN ID: Enter the planned VLAN for the management plane.
IPv6
Prefix: Enter the subnet prefix length of the host management plane.
VLAN ID: Enter the planned VLAN for the management plane.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical
keypad on the right.
IPv4
IPv6
If no configuration is required, go to 11.
Both the time zone and system time must be set, even if the time in Date/Time is the correct local time.
The initial value of Date/Time is the time in the default time zone (Asia/Beijing). After the time zone is configured, change the value of
Date/Time to the time in the new time zone. If the value of Date/Time is not changed, only the time zone is changed and the system
hardware time remains unchanged. After the system is installed, the current system time is derived from the hardware time recorded
before the installation and the newly configured time zone.
The entered password is encrypted. If you enter an unwanted password by mistake and do not notice it, you cannot log in to the host
after installation. In this case, you need to install the host again. Therefore, to prevent this issue, perform the following:
Enter the password slowly and carefully.
To enter an uppercase letter, use the Shift key rather than the Caps Lock key.
The password must contain at least three types of the following characters:
Lowercase letters
Uppercase letters
Digits
Special characters
The password cannot be the same as the username or the reverse username.
If yes, go to 15.
If no, go to 16.
a. Choose Cmdline > Edit to enter the Choose boot command line screen.
b. On the Choose boot command line screen, manually add, modify, or delete new command line configurations.
When installing the Huawei KunLun server, add kernel.watchdog_thresh=30 udev.event-timeout=600 at the end
of the existing command line parameters.
Typically, add new configurations instead of deleting or modifying the existing system command line parameters.
c. Click OK to confirm the modification of system command line parameters. Click Reset to restore the parameters to
default system command line parameters.
You do not need to set the host TPM parameters.
The entered password is encrypted. If you enter an unwanted password by mistake and do not notice it, you cannot log in to the host
after installation. In this case, you need to install the host again. Therefore, to prevent this issue, perform the following:
Enter the password slowly and carefully.
To enter an uppercase letter, use the Shift key rather than the Caps Lock key.
The password must contain at least three types of the following characters:
Lowercase letters
Uppercase letters
Digits
Special characters
The password cannot be the same as the username or the reverse username.
The installation process takes about 15 minutes. After the installation, the host restarts automatically. When the login
information is displayed, the host installation is complete.
During the host restart, Failed may be displayed for some items. Failed items do not have adverse impact on the host.
If no operation is performed on the screen for a long period of time, a black screen may be displayed. Press Ctrl to switch
to the installation page.
If the mounted CD/DVD-ROM drive is disconnected during installation due to a network failure, reinstall the host.
After the host is restarted, if the system displays an error message indicating that the partition does not exist when you access
another OS or during the startup, the possible cause is that the first boot device of the host is not the one configured during host
OS installation. For details, see "How Do I Change the Boot Sequence of a Server?" in FusionCompute 8.8.0 Maintenance Cases.
If an error is reported during the restart after the host is installed, cancel the ISO file mounting and restart the host again to enter
the system.
18. After the host is installed and automatically restarts, log in to the host as user root in the remote control window and
install the linux-firmware firmware package. For details, see How Do I Install the linux-firmware Firmware Package? .
19. Run the following command to set the passwords of the gandalf user and Redis database account:
cnaInit
New password:
The password must contain at least three types of the following characters:
Uppercase letters
Lowercase letters
Digits
Spaces or special characters `~!@#$%^&*()-_=+|[{}];:'",<.>/?
The password cannot be the same as the username or the reverse username.
The password cannot contain any words in the Linux password dictionary.
Enter the password of the gandalf user again and press Enter.
The password of the gandalf user has been reset if the following information is displayed:
Enter the password of the Redis database account and press Enter.
The following command output is displayed:
Please input new password again:
If the host is installed for the first time, the passwords of the Redis database account set for different hosts must be the same.
If the host is faulty, you are advised to set the password of the Redis database account to the one before the fault occurs after the
fault is rectified.
Enter the password of the Redis database account again and press Enter.
If the following information is displayed, the password of the Redis database account is set successfully:
20. When setting the password of the gandalf user, check whether the user exits abnormally.
If yes, go to 19 to reset the password of user gandalf and the password of the Redis database account.
If no, go to 21.
21. Run the following command to check whether the host contains and uses a Mellanox ConnectX-4 or Mellanox ConnectX-
5 series NIC. If any command output is displayed, the host contains and uses such a NIC.
lspci -k | grep -i mlx5_core | grep 'Kernel driver in use: mlx5_core'
If no, go to 22.
23. Install the OSs and GPU drivers on the other hosts.
For details, see 1 to 22.
To ensure system security, it is recommended that administrators change the preset passwords immediately after the system
installation is complete and periodically change the passwords during the subsequent maintenance process. For details, see
Account Information Overview .
Scenarios
When there are a small number of hosts in the system, install the OSs on the hosts by mounting an ISO image to them. This
enables them to provide hardware virtualization services.
For details about hardware models, firmware versions, and VM OS versions supported by the FusionCompute host OS, visit
FusionCompute Compatibility Query.
After the installation, if "unknown error" is displayed during the startup, uninstall the ISO and restart it again.
When the installation progress reaches 100%, the error message that contains "isopackage.sdf file does not match" is displayed. In this
case, rectify the fault based on What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported
During System Installation? .
If secure boot needs to be enabled on the host, disable secure boot before installing the host OS. After the installation is complete, enable
secure boot as instructed in Enabling or Disabling Secure Boot for a Host .
Prerequisites
The host to be installed has been configured as required. For detailed requirements, see Host Requirements .
The system date and time of the host to be installed have been set to the current date and time (UTC).
The local PC is properly communicating with the management plane and BMC plane of the host. It is recommended that you
connect the local PC and the host to be installed to the same switch and assign an IP address to the PC from the planned
network segment of the management plane.
You have obtained the IP address, username, and password for logging in to the BMC system of the physical server.
You have obtained the BIOS passwords of the hosts if these passwords have been configured.
KVM is available. For details about how to obtain the tool, see Table 2 .
You have obtained the image file FusionCompute_CNA-8.8.0-ARM_64.iso for installing the host and verified the file. For
details about the verification method, see Verifying the Software Package .
Data
Host name
The name can contain only digits, letters, hyphens (-), and underscores (_). It must start with a letter or a digit and cannot
exceed 64 characters.
Information about the management NIC, including the IP address, gateway address, and subnet mask/subnet prefix length
If you specify a management plane VLAN, set the type of the VLAN on the access switch port connected with the management
network port to tagged so that the management plane and the switch can communicate with each other.
If you do not specify a management plane VLAN, set the types of some VLANs on the access switch port connected with the
management network port to untagged so that the aggregation switch is reachable to the uplink IP packets from the management
plane through these VLANs.
Procedure
Switch to the host installation window.
The following example describes how to install the host OS on a TaiShan 200 server (model: 2280) whose BIOS version is 0.59 (U75) by
mounting an image file to the host using the KVM tool.
This section uses eth0 which functions as the management NIC as an example to describe operations related to NICs.
When you log in to a remote server using HTML5, if Caps Lock fails to switch between uppercase and lowercase letters, press
Shift+Letter to enter an uppercase letter.
For details about the default username and password for logging in to the BMC system, see the required server
documentation. If the username and password have been changed, obtain the new username and password from the
administrator.
If you cannot log in to the BMC system of a single blade server, you are advised to log in to the SMM of the blade server and open the
remote control window of the server.
2. On the main menu of the remote control page of the host, choose Configuration > Boot Device. Set Effective to One-
time and Boot Medium to DVD-ROM. After the setting is complete, click Save.
3. Use the KVM tool to log in to the host remote control console using the BMC IP address, username, and password.
4. Mount an image.
b. Click Connect to connect to the image. When Connect changes to Disconnect, the image is mounted to the host.
c. Click Forced System Reset to restart the server. After the restart is successful, go to 5.
If the host fails to boot from the DVD-ROM, repeat the image mounting and restart operations until the host begins to restart,
and then press F2 repeatedly until the Boot Option screen is displayed.
Select UEFI DVD-ROM VM 1.1.0 and press Enter.
The system starts automatic loading. The loading operation takes about 10 minutes. After the loading is successful, the host
configuration page is displayed.
If the error message "cdrom not found" is displayed due to a network problem during the system loading, reconnect the image file and
restart the system.
During the host configuration process, the following configuration items are mandatory: Hard Drive, Network, Hostname,
Timezone, Password, and Grubpassword.
Figures in this section include sample parameter values, which may vary from the actual parameter values on the site.
Press Tab or the up and down arrow keys to move the cursor.
Press Enter to select or execute the item on which the cursor is located.
Press the space bar to toggle between options.
If you need to modify the disk information, click Edit.
a. In the Choose the disk to install area, select a disk where an OS is to be installed. If you want to
configure system disks as software RAID 1, select another disk in the Choose the disk to set up software
RAID1 area. Local-Disk indicates that the disk is a local disk.
c. Set the swap partition size. The default value is the minimum size 30720 MB.
You are advised to disable the JBOD mode of the RAID controller card, create RAID 1, and then select sda for
installation. Otherwise, all disks are displayed on the disk selection page and are difficult to distinguish from each
other. In this case, an incorrect disk may be selected.
Select disks of the Local-Disk type as system disks. Do not use an SSD card as the system installation disk.
If you need to install the host OS again because the host OS is faulty, do not select Format all partitions.
Otherwise, the system disk partitions used by users will be formatted, causing user data loss. After selecting the
disk for installing the host OS, deselect Format all partitions and click OK.
If the server does not have a RAID controller card, configure system disks as software RAID to improve system
reliability.
Only two local disks can form RAID 1. Network disks and other RAID levels are not supported.
After two disks form a software RAID group, the capacity of the software RAID disk is the smaller value of the
disk capacity.
If the software RAID information exists on the specified disk, the installation program clears the software RAID
information.
d. Click OK. If Format all partitions is selected, the system will ask you whether to format all partitions of the disk. In this
case, click Yes.
If Failed to check whether the system can be restored is displayed when a host is being reinstalled after a fault is
rectified, perform operations as instructed in What Can I Do If Disk Selection Fails When a Host Is Being Reinstalled
After a Fault Is Rectified? .
7. Choose the IP address type of the host management plane and configure network information for the host.
The IP address type must be consistent with that of the actual system management plane. After the system is installed, you
are not allowed to switch the IP address type of the management plane.
IPv4
IPv6
Select the IP address type of the host management plane based on network planning.
The management plane supports only single-stack deployment. During installation, select only IPv4 or IPv6. Do not configure two
types of IP addresses at the same time.
In a resource pool, the IP address types of all host management planes must be the same.
Configure only one management NIC for a host. If you configure IP addresses for other NICs, network communication may fail.
For example, if a host is configured with ten NICs ranging from eth0 to eth9, configure an IP address for only one of them.
If the host OS needs to be reinstalled due to an OS fault, select the default network port (the first port added to the management
aggregation port) of the original host.
Check whether a VLAN needs to be configured for the management plane. If yes, go to 9.
The management plane VLAN must be planned, and the management plane VLANs used by the active and standby
management nodes must be the same. The management VLAN used by hosts can be different from that used by the
management nodes, but the packets with the VLAN tagged must be able to transfer between the hosts and
management nodes.
If no, go to 8.
If you specify a management plane VLAN, set the type of the VLAN on the access switch port connected with the management network
port to tagged so that the management plane and the switch can communicate with each other.
If you do not specify a management plane VLAN, set the types of some VLANs on the access switch port connected with the
management network port to untagged so that the aggregation switch is reachable to the uplink IP packets from the management plane
through these VLANs.
8. Configure host network information (without configuring the VLAN for the management plane).
IPv4
IPv6
Prefix: Enter the subnet prefix length of the host management plane.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical
keypad on the right.
9. Configure host network information (with configuring the VLAN for the management plane).
If you specify a management plane VLAN, set the type of the VLAN on the access switch port connected with the management network
port to tagged so that the management plane and the switch can communicate with each other.
If you do not specify a management plane VLAN, set the types of some VLANs on the access switch port connected with the
management network port to untagged so that the aggregation switch is reachable to the uplink IP packets from the management plane
through these VLANs.
IPv4
VLAN ID: Enter the planned VLAN for the management plane.
IPv6
Prefix: Enter the subnet prefix length of the host management plane.
VLAN ID: Enter the planned VLAN for the management plane.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical
keypad on the right.
IPv4
IPv6
If no configuration is required, go to 11.
Both the time zone and system time must be set, even if the time in Date/Time is the correct local time.
The initial value of Date/Time is the time in the default time zone (Asia/Beijing). After you configure the time zone, change the value of Date/Time to the time in the new time zone. If you do not change the value of Date/Time, only the time zone is changed and the system hardware time remains unchanged. After the system is installed, the current system time is derived from the hardware time recorded before the installation and the newly configured time zone.
The entered password is hidden as you type. If you enter an unintended password by mistake and do not notice it, you cannot log in to the host after installation and need to reinstall the host. To prevent this issue, perform the following:
Enter the password slowly and carefully.
To enter an uppercase letter, use the Shift key rather than the Caps Lock key.
The password must contain at least three types of the following characters:
Lowercase letters
Uppercase letters
Digits
The entered password is hidden as you type. If you enter an unintended password by mistake and do not notice it, you cannot log in to the host after installation and need to reinstall the host. To prevent this issue, perform the following:
Enter the password slowly and carefully.
To enter an uppercase letter, use the Shift key rather than the Caps Lock key.
The password must contain at least three types of the following characters:
Lowercase letters
Uppercase letters
Digits
The password cannot be the same as the username or the reverse username.
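To sanity-check a candidate password against the character-type rule before entering it, you can count the listed character types offline; a minimal illustrative Bash sketch, where the candidate value is a hypothetical placeholder:
p='Example123'   # hypothetical candidate password
n=0
[[ $p =~ [a-z] ]] && n=$((n+1))   # lowercase letters present
[[ $p =~ [A-Z] ]] && n=$((n+1))   # uppercase letters present
[[ $p =~ [0-9] ]] && n=$((n+1))   # digits present
echo "character types present: $n (need at least 3)"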
The installation process takes about 30 to 40 minutes. After the installation, the host restarts automatically. When the login
information is displayed, the host installation is complete.
During the host restart, Failed may be displayed for some items. Failed items have no adverse impact on the host.
If no operation is performed for a long period of time, the screen may turn black. Press Ctrl to return to the installation page.
If the mounted CD/DVD-ROM drive is disconnected during installation due to a network failure, reinstall the host.
After the host is restarted, if the system displays an error message indicating that the partition does not exist when you access
another OS or during the startup, the possible cause is that the first boot device of the host is not the one configured during host
OS installation. For details, see "How Do I Change the Boot Sequence of a Server?" in FusionCompute 8.8.0 Maintenance Cases.
If an error is reported during the restart after the host is installed, cancel the ISO file mounting and restart the host again to enter
the system.
After the host is installed, if you enter the BIOS installation page again during the host restart, you need to cancel the ISO file
mounting and restart the host again to enter the system.
16. After the host is installed and automatically restarts, log in to the host as user root in the remote control window and
install the linux-firmware firmware package. For details, see How Do I Install the linux-firmware Firmware Package? .
17. Run the following command to set the passwords of the gandalf user and the Redis database account. For details about password complexity requirements, see FusionCompute 8.8.0 Account List.
cnaInit
New password:
Enter the password of the gandalf user again and press Enter.
The password of the gandalf user has been reset if the following information is displayed:
Enter the password of the Redis database account and press Enter.
The following command output is displayed:
If the host is installed for the first time, it is recommended that the passwords of the Redis database accounts set for different hosts
be the same.
If the host is faulty, you are advised to set the password of the Redis database account to the one before the fault occurs after the
fault is rectified.
Enter the password of the Redis database account again and press Enter.
If the following information is displayed, the password of the Redis database account is set successfully:
18. Check whether the command exits abnormally while you set the password of the gandalf user.
If yes, go to 17 to reset the password of user gandalf and the password of the Redis database account.
If no, go to 19.
19. Run the following command to check whether the host contains and uses a Mellanox ConnectX-4 or Mellanox ConnectX-
5 series NIC. If any command output is displayed, the host contains and uses such a NIC.
lspci -k | grep -i mlx5_core | grep 'Kernel driver in use: mlx5_core'
If no, go to 20.
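The check can also be scripted so that a message is printed only when such a NIC is in use; a minimal sketch based on the same command:
if lspci -k | grep -i mlx5_core | grep -q 'Kernel driver in use: mlx5_core'; then
echo "Mellanox ConnectX-4/5 series NIC in use"
fi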
Scenarios
VRM can be installed on a physical server using an ISO image file or on a VM accommodated by a CNA host using the
installation tool. This task provides guidance for the administrator to install VRM on a physical server by mounting an image file.
If two VRM nodes are deployed on one site, configure the active/standby mode of the VRM nodes.
If you install VRM on a VM, use the installation tool to install it.
After the installation, if "unknown error" is displayed during the startup, unmount the ISO file and restart the node.
If an error message containing "isopackage.sdf file does not match" is displayed when the installation progress reaches 100%, rectify the fault as instructed in What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported During System Installation?
Prerequisites
The first boot device is set to disk, the second boot device is set to network, and the third boot device is set to CD-ROM for
the server.
You have obtained the IP address, username, and password for logging in to the BMC system of the physical server.
You have obtained the BIOS passwords of the physical servers if these passwords have been set.
An application, such as PuTTY, which can be used for remote access on various platforms, is available.
KVM is available. For details about how to obtain the tool, see Table 2 .
You must set the first boot device to disk for the physical server to be installed.
You have obtained the image file FusionCompute_VRM-8.8.0-X86_64.iso for installing the VRM node and verified the
file. For details about the verification method, see Verifying the Software Package .
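As a quick integrity check, you can compute the package digest on the machine holding the ISO file and compare it with the published value; a minimal sketch, assuming a SHA256 digest is provided with the package:
sha256sum FusionCompute_VRM-8.8.0-X86_64.iso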
Data
Table 1 describes the data required for performing this operation.
MAC Address Pool (VRM configuration information): Randomly obtained from 10 segments and set in the environment by default. If multiple FusionCompute systems are deployed, you are advised to modify the range of the MAC address pool after the deployment.
There are 10 segments. Each of the first nine segments contains 10,000 sequential MAC addresses, and the tenth segment contains 5,000 sequential MAC addresses:
1: 28:6E:D4:88:C6:29 to 28:6E:D4:88:ED:38
2: 28:6E:D4:88:ED:39 to 28:6E:D4:89:14:48
3: 28:6E:D4:89:14:49 to 28:6E:D4:89:3B:58
4: 28:6E:D4:89:3B:59 to 28:6E:D4:89:62:68
5: 28:6E:D4:89:62:69 to 28:6E:D4:89:89:78
6: 28:6E:D4:89:89:79 to 28:6E:D4:89:B0:88
7: 28:6E:D4:89:B0:89 to 28:6E:D4:89:D7:98
8: 28:6E:D4:89:D7:99 to 28:6E:D4:89:FE:A8
9: 28:6E:D4:89:FE:A9 to 28:6E:D4:8A:25:B8
10: 28:6E:D4:8A:25:B9 to 28:6E:D4:8A:39:40
Procedure
Switch to the host installation window.
The following example describes how to install the VRM node on a local disk of a 2288H V5 server whose BIOS version is 0.25 (U47) by mounting an image file to the host using the KVM tool.
This section uses eth0, which functions as the management NIC, as an example to describe NIC-related operations. If two VRM nodes are deployed in active/standby mode, they must share one NIC, for example, eth0.
If two VRM nodes are deployed in active/standby mode, install the active VRM node first.
When you log in to a remote server using HTML5, if Caps Lock fails to switch between uppercase and lowercase letters, press
Shift+Letter to enter an uppercase letter.
For details about the default username and password for logging in to the BMC system, see the corresponding server documentation. If the username and password have been changed, obtain the new username and password from the administrator.
If you cannot log in to the BMC system of a single blade server, you are advised to log in to the SMM of the blade server and open the
remote control window of the server.
2. On the main menu of the remote control page of the host, choose Configuration > Boot Device. Set Effective to One-
time and Boot Medium to DVD-ROM. After the setting is complete, click Save.
PowerLeader AMD servers do not support UEFI deployment. You need to set the boot mode to legacy BIOS in the BIOS configuration.
3. Use the KVM tool to log in to the host remote control console using the BMC IP address, username, and password.
4. Mount an image.
b. Click Connect to connect to the image. When Connect changes to Disconnect, the image is mounted to the host.
c. Click Forced System Reset to restart the server. After the restart is successful, go to 5.
If the restart fails, perform the following operations and then go to 5:
i. Repeat the preceding substeps. When the host restarts, press F11 repeatedly until the screen for entering the BIOS password is displayed.
For hosts of some models, when the hosts are being restarted, you do not need to enter the BIOS password. In this
case, go to 4.c.ii.
The default BIOS password for a V5 server is Admin@9000. The default BIOS password for a Huawei RH-
series rack server, X-series high-density server, E6000 blade server, or Huawei Tecal E9000 server is
Huawei12#$ or uniBIOS123. Change the password upon the first login.
iii. On the displayed screen, select Virtual DVD-ROM VM 1.1.0 to set the boot device to CD/DVD-ROM
drive.
5. On the displayed screen, select Install within 30s and press Enter.
If Install is not selected within 30s, the system is booted from local disks by default. In this case, restart the server and select Install.
The system starts automatic loading. The loading operation takes about 3 minutes. After the loading is successful, the
server configuration page is displayed.
If the error message "cdrom not found" is displayed due to a network problem during the system loading, reconnect the image file and
restart the system.
During the VRM configuration process, the following configuration items are mandatory: Disk, Network, Hostname, Timezone,
Password, and Grubpassword.
Press Tab or the up and down arrow keys to move the cursor.
Press Enter to select or execute the item on which the cursor is located.
Press the space bar to toggle between options.
6. Select a disk.
Do not use an SSD card to install the OS. You are advised to select disks of the Local-Disk type as system disks.
Set Hard Disk to its default value to allow the system to install VRM on the first identified disk, which is usually used for
a RAID 1 array.
If disk information does not need to be modified, configure network information for the host.
You are advised to disable the JBOD mode of the RAID controller card, create RAID 1, and then select sda for installation. Otherwise,
all disks are displayed on the disk selection page and are difficult to distinguish from each other. In this case, incorrect disks may be
selected.
Select the disk where the VRM node is to be installed. Local-Disk indicates a local disk.
IPv4
IPv6
Select the IP address type of the management plane where the VRM node is located based on network planning.
The management plane supports only single-stack deployment. During installation, select only IPv4 or IPv6. Do not configure two
types of IP addresses at the same time.
In a resource pool, the IP address types of all host management planes must be the same.
Configure only one management NIC for a host. If you configure IP addresses for other NICs, network communication may fail.
If the management NIC is not eth0, ensure that eth0 is not on the management VLAN. Otherwise, eth0 obtains an IP address of the management plane when a DHCP server is deployed on the management plane, which results in a network exception.
If you specify a management plane VLAN, set the VLAN type on the access switch port connected to the management network port to tagged so that the management plane and the switch can communicate with each other.
If you do not specify a management plane VLAN, set the relevant VLANs on the access switch port connected to the management network port to untagged so that uplink IP packets from the management plane can reach the aggregation switch through these VLANs.
8. Configure VRM node network information (without configuring the VLAN for the management plane).
IPv4
IP Address: specifies the IP address of the management plane where the VRM node is located.
Netmask: specifies the subnet mask of the management plane where the VRM node is located.
IPv6
IP Address: specifies the IP address of the management plane where the VRM node is located.
Prefix: specifies the subnet prefix length of the management plane where the VRM node is located.
Default Gateway: specifies the gateway of the management plane where the VRM node is located.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical
keypad on the right.
9. Configure VRM node network information (with a VLAN configured for the management plane).
IPv4
IP Address: specifies the IP address of the management plane where the VRM node is located.
Netmask: specifies the subnet mask of the management plane where the VRM node is located.
VLAN ID: Enter the planned VLAN for the management plane.
IPv6
IP Address: specifies the IP address of the management plane where the VRM node is located.
Prefix: specifies the subnet prefix length of the management plane where the VRM node is located.
Default Gateway: specifies the gateway of the management plane where the VRM node is located.
VLAN ID: Enter the planned VLAN for the management plane.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical
keypad on the right.
IPv4
IPv6
If no configuration is required, go to 11.
Both the time zone and system time must be set, even if the time in Date/Time is the correct local time.
The initial value of Date/Time is the time in the default time zone (Asia/Beijing). After you configure the time zone, change the value of Date/Time to the time in the new time zone. If you do not change the value of Date/Time, only the time zone is changed and the system hardware time remains unchanged. After the system is installed, the current system time is derived from the hardware time recorded before the installation and the newly configured time zone.
13. Configure the password of the root user of the VRM node.
The entered password is hidden as you type. If you enter an unintended password by mistake and do not notice it, you cannot log in to the node after installation and need to reinstall the node. To prevent this issue, perform the following:
Enter the password slowly and carefully.
To enter an uppercase letter, use the Shift key rather than the Caps Lock key.
The password must contain at least three types of the following characters:
Lowercase letters
Uppercase letters
Digits
The password cannot be the same as the username or the reverse username.
14. Check whether the VRM server is a KunLun server.
If yes, go to 15.
If no, go to 16.
a. Choose Cmdline > Edit to enter the Choose boot command line screen.
b. On the Choose boot command line screen, manually add, modify, or delete command line configurations.
When installing the Huawei KunLun server, add kernel.watchdog_thresh=30 udev.event-timeout=600 at the end
of the existing command line parameters.
Typically, add new configurations instead of deleting or modifying the existing system command line parameters.
c. Click OK to confirm the modification of system command line parameters. Click Reset to restore the parameters to
default system command line parameters.
16. Configure the GRUB password for logging in to the VRM node.
The entered password is hidden as you type. If you enter an unintended password by mistake and do not notice it, you cannot log in to the node after installation and need to reinstall the node. To prevent this issue, perform the following:
Enter the password slowly and carefully.
To enter an uppercase letter, use the Shift key rather than the Caps Lock key.
The password must contain at least three types of the following characters:
Lowercase letters
Uppercase letters
Digits
The password cannot be the same as the username or the reverse username.
Install the VRM node.
If a dialog box is displayed, informing you that the current disk partitions do not meet VRM requirements and the
new partitioning will delete original disk data, perform the operation on .
This dialog box will be displayed if the system disk has not been partitioned or the partitions do not meet VRM
requirements.
If a dialog box is displayed, asking you whether all configurations are complete, select Yes and press Enter.
This dialog box will be displayed if the system disk had the VRM node or a similar OS installed before.
The installation process takes about 20 minutes. After the installation, the server restarts automatically. When the login
information is displayed, the VRM installation is complete.
If no operation is performed for a long period of time, the screen may turn black. Press Ctrl to return to the installation page.
If the mounted CD/DVD-ROM drive is disconnected during installation due to a network failure, reinstall the VRM node.
You can log in to the VRM node using SSH as the gandalf user. You can also switch to the root user when required.
After the host is restarted, if the system displays an error message indicating that the partition does not exist when you access
another OS or during the startup, the possible cause is that the first boot device of the host is not the one configured during host
OS installation. For details, see "How Do I Change the Boot Sequence of a Server?" in FusionCompute 8.8.0 Maintenance Cases.
If an error is reported during the restart after the host is installed, cancel the ISO file mounting and restart the host again to enter
the system.
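As a minimal sketch of the SSH login described above, assuming 192.168.1.10 is a hypothetical VRM management IP address:
ssh gandalf@192.168.1.10
su - root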
Initialize VRM.
18. After the VRM node is installed and automatically restarts, log in to the host as user root in the remote control window
and install the linux-firmware firmware package. For details, see How Do I Install the linux-firmware Firmware Package?
.
19. Set the passwords of the gandalf user, Portal user, Postgres GaussDB database, and Galax GaussDB database. The passwords must conform to the following rules (GalaX8800! is the example value in each case):
Password of user gandalf:
The password must contain at least eight characters.
Password of the user for logging in to the portal:
The password contains 8 to 30 characters.
Required character types include lowercase letters and digits.
The password cannot be the same as the username or the reverse username.
The password cannot contain any words in the Linux password dictionary.
Password of the Postgres GaussDB database and password of the Galax GaussDB database:
The password contains 8 to 30 characters.
The password must contain at least one of the following special characters: ~!@#$%^&*()-_=+|{};:<.>
The password must contain at least two of the following character types: uppercase letters, lowercase letters, digits.
The password cannot be the username or the username in reverse order.
The password cannot contain any words in the Linux password dictionary.
Run the following command to set the passwords of the gandalf user, Portal user, Postgres GaussDB database, and Galax
GaussDB database:
vrmInit
New password:
Enter the password of the gandalf user again and press Enter.
The password of the gandalf user has been reset if the following information is displayed:
passwd: all authentication tokens updated successfully.
When the following information is displayed, select an installation mode: enter 1 for common mode or 2 for role-based mode, and press Enter.
installation mode:
1.Common
2.Role-based
Enter the password of the admin user again and press Enter.
installation mode:
1.Common
2.Role-based
Enter the password of the sysadmin user again and press Enter.
Information similar to the following is displayed:
Enter the password of the secadmin user again and press Enter.
Information similar to the following is displayed:
Enter the password of the secauditor user again and press Enter.
Common mode: This mode ensures high usability. In this mode, one account can be granted all operation permissions in the
system.
Role-based mode: This mode provides high security. One account can be granted permissions of only one of the following
administrators: sysadmin, secadmin, or secauditor. In this mode, administrator permissions are separated from each other and
mutually supervised.
System administrator (sysadmin): has permission to operate and maintain system services and create and delete
user accounts. The created user accounts are locked and do not belong to any role.
Security administrator (secadmin): has the permission to manage the rights for users and roles but has no
permission to create a user. A user account created by the system administrator can be used only after the security
administrator assigns a role to it and unlocks it.
Security auditor (secauditor): has the permission to view and export logs and audit other users' operations.
Before selecting the role-based mode for FusionCompute, ensure that components that interconnect with FusionCompute support
this mode.
The rights management mode cannot be changed after FusionCompute installation is complete.
Enter the password of the Postgres GaussDB database and press Enter.
Information similar to the following is displayed:
If the VRM is installed for the first time, the passwords of Postgres GaussDB database on active and standby nodes must be the
same, and the passwords of Galax GaussDB database on active and standby nodes must be the same.
If the VRM is faulty, you are advised to set the passwords of Postgres and Galax GaussDB databases to those before the fault
occurs after the fault is rectified.
Enter the password of the Postgres GaussDB database again and press Enter.
If the following information is displayed, the password of the Postgres GaussDB database is set successfully:
Enter the password of the Galax GaussDB database and press Enter.
Information similar to the following is displayed:
Enter the password of the Galax GaussDB database again and press Enter.
If the following information is displayed, the VRM initialization is complete:
20. Check whether the command exits abnormally while you set the password of the gandalf user.
If yes, run the following command to reset the passwords of the gandalf user, Postgres GaussDB database, and
Galax GaussDB database:
vrmInit
New password:
Enter the password of the gandalf user again and press Enter.
The password of the gandalf user has been reset if the following information is displayed:
If you do not need to set the password of the gandalf user, enter N. The command execution is complete.
Enter the password of the Postgres GaussDB database and press Enter.
Information similar to the following is displayed:
If the VRM is installed for the first time, the passwords of Postgres GaussDB database on active and standby nodes must be
the same, and the passwords of Galax GaussDB database on active and standby nodes must be the same.
If the VRM is faulty, you are advised to set the passwords of Postgres and Galax GaussDB databases to those before the
fault occurs after the fault is rectified.
Enter the password of the Postgres GaussDB database again and press Enter.
If the following information is displayed, the password of the Postgres GaussDB database is set successfully:
Modified successfully!
Enter the password of the Galax GaussDB database and press Enter.
Information similar to the following is displayed:
Enter galax New password again:
Enter the password of the Galax GaussDB database again and press Enter.
21. If you need to use the vTPM function, run the following command to start the KMS service:
sh /opt/galax/root/kms/script/startKms.sh
For enterprise users: Visit https://support.huawei.com/enterprise , search for the software package by name, and
download it.
For carrier users: Visit https://support.huawei.com , search for the software package by name, and download it.
23. Ensure that the SFTP service has been enabled on the VRM node. If the SFTP service is not enabled, enable it as
instructed in Enabling SFTP on CNA or VRM Nodes . If you use WinSCP to transfer the installation tool package, you do
not need to enable the SFTP service.
24. Upload the driver package to the /home/GalaX8800 directory on the VRM node.
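If you upload the package from a Linux workstation over SFTP, a minimal sketch follows; the IP address and package name are hypothetical placeholders:
sftp gandalf@192.168.1.10
sftp> put driver-package.zip /home/GalaX8800/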
25. Run the following command and enter the password of user root to switch to user root. Go to the /home/GalaX8800/
directory.
su - root
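cd /home/GalaX8800/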
After the KMS service is enabled, the memory usage increases. You are advised to increase the memory specifications of the VRM node by one
level.
Configure the maximum memory resource for each service on the VRM node based on the VRM memory size.
If yes, go to 32.
If no, go to 33.
32. Perform the operations provided in How Do I Configure NIC Binding for a VRM Node?
33. Determine whether two VRM nodes are deployed in active/standby mode.
Active/standby deployment is recommended. If the VRM node works in standalone mode and is faulty, data cannot be restored, and the
system reliability is low.
If yes, go to 34.
Before the configuration, ensure that both VRM nodes are powered on. During the configuration, do not power them off. Otherwise, the system
will break down.
35. Run the date command on each of the active and standby VRM nodes to query the current time.
36. If the times are inconsistent, run the following command to change the earlier time to the later time.
date -s "xxxx-xx-xx xx:xx:xx"
Example: date -s "2023-08-08 16:55:00"
37. Run the following command to synchronize the software time with the hardware time:
hwclock -w
38. Run the date command to check whether the modification is successful.
39. Log in to FusionCompute using the management IP address of the active VRM node.
For details, see Logging In to FusionCompute .
41. Choose System > System Configuration > Services and Management Nodes.
The Services and Management Nodes page is displayed.
42. Locate the row that contains VRM service in Service List and click Configure Deployment Mode.
A dialog box is displayed.
43. Enter the host name of the active VRM node in the Host Name of Local Node area.
44. Enter the management IP address of the standby VRM node in Peer IP Address.
45. Enter the host name of the standby VRM node in the Host Name of Peer Node area.
Change the default password of the gandalf user for logging in to the peer node.
If The node name already exists. Please change the name of the peer node is displayed, enter a new host name for the standby
VRM node in Host Name of Peer Node.
46. Set Password of gandalf on the peer node and Password of root on the peer node.
Floating IP Address: Enter the floating IP address of the active and standby VRM nodes. It must be an idle IP address in the IP address segment planned for the management port of the VRM node.
Subnet Prefix Length: Enter the subnet prefix length of the management plane.
48. Set Quorum IP Address.
Quorum IP Address indicates the IP address which is used to check the active/standby VRM node status. You can enter up
to three quorum IP addresses. You are advised to set the first quorum IP address to the gateway address of the management
plane, and set other quorum IP addresses to IP addresses of servers that can communicate with the management plane, such
as the AD server or the DNS server.
If all the quorum IP addresses become invalid, the system fails to recognize the active VRM node. Therefore, both VRM nodes become
standby nodes and stop providing services.
If the quorum IP addresses are to be changed, update the VRM quorum IP address configuration before the actual IP addresses are changed.
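To pre-verify that a planned quorum IP address is reachable from a VRM node, you can ping it manually; a minimal sketch, assuming 192.168.1.1 is the planned management plane gateway:
ping -c 3 192.168.1.1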
To ensure system security, it is recommended that administrators change the preset passwords immediately after the system
installation is complete and periodically change the passwords during the subsequent maintenance process. For details, see
Account Information Overview .
After the VRM is installed, you can manually enable the container management function. For details, see Enabling Container Management .
Scenarios
VRM can be installed on a physical server using an ISO image file or on a VM accommodated by a CNA host using the
installation tool. This task provides guidance for the administrator to install VRM on a physical server by mounting an image file.
If two VRM nodes are deployed on one site, configure the active/standby mode of the VRM nodes.
If you install VRM on a VM, use the installation tool to install it.
After the installation, if "unknown error" is displayed during the startup, unmount the ISO file and restart the node.
If an error message containing "isopackage.sdf file does not match" is displayed when the installation progress reaches 100%, rectify the fault as instructed in What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported During System Installation?
If secure boot needs to be enabled for the VRM VM, disable secure boot when installing the VRM VM image. After the installation is
complete, enable secure boot for the VRM VM as instructed in Enabling or Disabling Secure Boot for a VM (Arm) .
Prerequisites
The first boot device is set to disk, the second boot device is set to network, and the third boot device is set to CD-ROM for
the server.
You have obtained the IP address, username, and password for logging in to the BMC system of the physical server.
You have obtained the BIOS passwords of the physical servers if these passwords have been set.
An application, such as PuTTY, which can be used for remote access on various platforms, is available.
KVM is available. For details about how to obtain the tool, see Table 2 .
You must set the first boot device to disk for the physical server to be installed.
You have obtained the image file FusionCompute_VRM-8.8.0-ARM_64.iso for installing the host and verified the file. For
details about the verification method, see Verifying the Software Package .
Data
Table 1 describes the data required for performing this operation.
Quorum IP Address: The active and standby VRM nodes periodically ping all quorum IP addresses. If the active VRM node cannot ping any quorum IP address but the standby VRM node can ping at least one quorum IP address, an active/standby switchover is triggered.
You are advised to set the first quorum IP address to the gateway address of the management plane, and set the other quorum IP addresses to IP addresses of servers that can communicate with the management plane, such as the AD server or the DNS server.
MAC Address Pool (VRM configuration information): Randomly obtained from 10 segments and set in the environment by default. If multiple FusionCompute systems are deployed, you are advised to modify the range of the MAC address pool after the deployment.
There are 10 segments. Each of the first nine segments contains 10,000 sequential MAC addresses, and the tenth segment contains 5,000 sequential MAC addresses:
1: 28:6E:D4:88:C6:29 to 28:6E:D4:88:ED:38
2: 28:6E:D4:88:ED:39 to 28:6E:D4:89:14:48
3: 28:6E:D4:89:14:49 to 28:6E:D4:89:3B:58
4: 28:6E:D4:89:3B:59 to 28:6E:D4:89:62:68
5: 28:6E:D4:89:62:69 to 28:6E:D4:89:89:78
6: 28:6E:D4:89:89:79 to 28:6E:D4:89:B0:88
7: 28:6E:D4:89:B0:89 to 28:6E:D4:89:D7:98
8: 28:6E:D4:89:D7:99 to 28:6E:D4:89:FE:A8
9: 28:6E:D4:89:FE:A9 to 28:6E:D4:8A:25:B8
10: 28:6E:D4:8A:25:B9 to 28:6E:D4:8A:39:40
Procedure
Switch to the host installation window.
The following example describes how to install the host OS on a TaiShan 200 server (model: 2280) whose BIOS version is 0.59 (U75) by
mounting an image file to the host using the KVM tool.
This section uses eth0, which functions as the management NIC, as an example to describe NIC-related operations. If two VRM nodes are deployed in active/standby mode, they must share one NIC, for example, eth0.
If two VRM nodes are deployed in active/standby mode, install the active VRM node first.
When you log in to a remote server using HTML5, if Caps Lock fails to switch between uppercase and lowercase letters, press
Shift+Letter to enter an uppercase letter.
For details about the default username and password for logging in to the BMC system, see the corresponding server documentation. If the username and password have been changed, obtain the new username and password from the administrator.
If you cannot log in to the BMC system of a single blade server, you are advised to log in to the SMM of the blade server and open the
remote control window of the server.
2. On the main menu of the remote control page of the host, choose Configuration > Boot Device. Set Effective to One-
time and Boot Medium to DVD-ROM. After the setting is complete, click Save.
3. Use the KVM tool to log in to the host remote control console using the BMC IP address, username, and password.
4. Mount an image.
b. Click Connect to connect to the image. When Connect changes to Disconnect, the image is mounted to the host.
c. Click Forced System Reset to restart the server. After the restart is successful, go to 5.
If the host fails to boot from the DVD-ROM, repeat the preceding substeps until the host begins to restart. Press F2 repeatedly until the Boot Option screen is displayed.
Select UEFI DVD-ROM VM 1.1.0 and press Enter.
The system starts automatic loading. The loading operation takes about 10 minutes. After the loading is successful, the
server configuration page is displayed.
If the error message "cdrom not found" is displayed due to a network problem during the system loading, reconnect the image file and
restart the system.
During the VRM configuration process, the following configuration items are mandatory: Disk, Network, Hostname, Timezone,
Password, and Grubpassword.
Press Tab or the up and down arrow keys to move the cursor.
Press Enter to select or execute the item on which the cursor is located.
Press the space bar to toggle between options.
6. Select a disk.
Do not use an SSD card to install the OS. You are advised to select disks of the Local-Disk type as system disks.
Set Hard Disk to its default value to allow the system to install VRM on the first identified disk, which is usually used for
a RAID 1 array.
If disk information does not need to be modified, configure network information for the host.
You are advised to disable the JBOD mode of the RAID controller card, create RAID 1, and then select sda for installation. Otherwise,
all disks are displayed on the disk selection page and are difficult to distinguish from each other. In this case, incorrect disks may be
selected.
In the Choose the disk to install area, select the disk where VRM is to be installed. If you want to configure system
disks as software RAID 1, select another disk in the Choose the disk to set up software RAID1 area. Local-Disk
indicates a local disk.
If the server does not have a RAID controller card, configure system disks as software RAID to improve system reliability.
Only two local disks can form RAID 1. Network disks and other RAID levels are not supported.
After two disks form a software RAID group, the capacity of the software RAID disk is the smaller value of the disk capacity.
If the software RAID information exists on the specified disk, the installation program clears the software RAID information.
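After the installation, you can verify the software RAID 1 status from the OS; a minimal sketch, assuming the installer creates a standard Linux md RAID set:
cat /proc/mdstat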
IPv4
IPv6
Select the IP address type of the management plane where the VRM node is located based on network planning.
The management plane supports only single-stack deployment. During installation, select only IPv4 or IPv6. Do not configure two
types of IP addresses at the same time.
In a resource pool, the IP address types of all host management planes must be the same.
Configure only one management NIC for a host. If you configure IP addresses for other NICs, network communication may fail.
If the management NIC is not eth0, ensure that eth0 is not on the management VLAN. Otherwise, eth0 obtains an IP address of the management plane when a DHCP server is deployed on the management plane, which results in a network exception.
If you specify a management plane VLAN, set the VLAN type on the access switch port connected to the management network port to tagged so that the management plane and the switch can communicate with each other.
If you do not specify a management plane VLAN, set the relevant VLANs on the access switch port connected to the management network port to untagged so that uplink IP packets from the management plane can reach the aggregation switch through these VLANs.
8. Configure VRM node network information (without configuring the VLAN for the management plane).
IPv4
IP Address: specifies the IP address of the management plane where the VRM node is located.
Netmask: specifies the subnet mask of the management plane where the VRM node is located.
IPv6
IP Address: specifies the IP address of the management plane where the VRM node is located.
Prefix: specifies the subnet prefix length of the management plane where the VRM node is located.
Default Gateway: specifies the gateway of the management plane where the VRM node is located.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical keypad
on the right.
9. Configure VRM node network information (with a VLAN configured for the management plane).
IPv4
IP Address: specifies the IP address of the management plane where the VRM node is located.
Netmask: specifies the subnet mask of the management plane where the VRM node is located.
VLAN ID: Enter the planned VLAN for the management plane.
IPv6
IP Address: specifies the IP address of the management plane where the VRM node is located.
Prefix: specifies the subnet prefix length of the management plane where the VRM node is located.
Default Gateway: specifies the gateway of the management plane where the VRM node is located.
VLAN ID: Enter the planned VLAN for the management plane.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical keypad
on the right.
IPv4
IPv6
If no configuration is required, go to 11.
Both the time zone and system time must be set, even if the time in Date/Time is the correct local time.
The initial value of Date/Time is the time in the default time zone (Asia/Beijing). After you configure the time zone, change the value of Date/Time to the time in the new time zone. If you do not change the value of Date/Time, only the time zone is changed and the system hardware time remains unchanged. After the system is installed, the current system time is derived from the hardware time recorded before the installation and the newly configured time zone.
13. Configure the password of the root user of the VRM node.
The entered password is hidden as you type. If you enter an unintended password by mistake and do not notice it, you cannot log in to the node after installation and need to reinstall the node. To prevent this issue, perform the following:
Enter the password slowly and carefully.
To enter an uppercase letter, use the Shift key rather than the Caps Lock key.
The password must contain at least three types of the following characters:
Lowercase letters
Uppercase letters
Digits
The password cannot be the same as the username or the reverse username.
14. Configure the GRUB password for logging in to the VRM node.
The entered password is hidden as you type. If you enter an unintended password by mistake and do not notice it, you cannot log in to the node after installation and need to reinstall the node. To prevent this issue, perform the following:
Enter the password slowly and carefully.
To enter an uppercase letter, use the Shift key rather than the Caps Lock key.
The password must contain at least three types of the following characters:
Lowercase letters
Uppercase letters
Digits
The password cannot be the same as the username or the reverse username.
If a dialog box is displayed, asking you whether all configurations are complete, perform the operation on .
This dialog box will be displayed if the system disk had the VRM node or a similar OS installed before.
If a dialog box is displayed, asking you whether to format the partition, perform the operation on .
The installation process takes about 30 to 40 minutes. After the installation, the server restarts automatically. When the
login information is displayed, the VRM installation is complete.
If no operation is performed for a long period of time, the screen may turn black. Press Ctrl to return to the installation page.
If the mounted CD/DVD-ROM drive is disconnected during installation due to a network failure, reinstall the VRM node.
You can log in to the VRM node using SSH as the gandalf user. You can also switch to the root user when required.
After the host is restarted, if the system displays an error message indicating that the partition does not exist when you access
another OS or during the startup, the possible cause is that the first boot device of the host is not the one configured during host
OS installation. For details, see "How Do I Change the Boot Sequence of a Server?" in FusionCompute 8.8.0 Maintenance Cases.
If an error is reported during the restart after the host is installed, cancel the ISO file mounting and restart the host again to enter
the system.
After the host is installed, if you enter the BIOS installation page again during the host restart, you need to cancel the ISO file
mounting and restart the host again to enter the system.
Initialize VRM.
16. After the VRM node is installed and automatically restarts, log in to the host as user root in the remote control window
and install the linux-firmware firmware package. For details, see How Do I Install the linux-firmware Firmware Package?
.
17. Set the passwords of the gandalf user, Portal user, Postgres GaussDB database, and Galax GaussDB database. The passwords must conform to the following rules (GalaX8800! is the example value in each case):
Password of user gandalf:
The password must contain at least eight characters.
Password of the user for logging in to the portal:
The password contains 8 to 30 characters.
Password of the Postgres GaussDB database and password of the Galax GaussDB database:
The password contains 8 to 30 characters.
The password must contain at least one of the following special characters: ~!@#$%^&*()-_=+|{};:<.>
The password must contain at least two of the following character types: uppercase letters, lowercase letters, digits.
The password cannot be the username or the username in reverse order.
The password cannot contain any words in the Linux password dictionary.
Run the following command to set the passwords of the gandalf user, Portal user, Postgres GaussDB database, and Galax
GaussDB database:
vrmInit
New password:
Enter the password of the gandalf user again and press Enter.
The password of the gandalf user has been reset if the following information is displayed:
When the following information is displayed, select an installation mode: enter 1 for common mode or 2 for role-based mode, and press Enter.
installation mode:
1.Common
2.Role-based
Enter the password of the admin user again and press Enter.
installation mode:
1.Common
2.Role-based
Enter the password of the sysadmin user again and press Enter.
Information similar to the following is displayed:
Enter the password of the secadmin user again and press Enter.
Information similar to the following is displayed:
Enter the password of the secauditor user again and press Enter.
Common mode: This mode ensures high usability. In this mode, one account can be granted all operation permissions in the
system.
Role-based mode: This mode provides high security. One account can be granted permissions of only one of the following
administrators: sysadmin, secadmin, or secauditor. In this mode, administrator permissions are separated from each other and
mutually supervised.
System administrator (sysadmin): has permission to operate and maintain system services and create and delete
user accounts. The created user accounts are locked and do not belong to any role.
Security administrator (secadmin): has the permission to manage the rights for users and roles but has no
permission to create a user. A user account created by the system administrator can be used only after the security
administrator assigns a role to it and unlocks it.
Security auditor (secauditor): has the permission to view and export logs and audit other users' operations.
Before selecting the role-based mode for FusionCompute, ensure that components that interconnect with FusionCompute support
this mode.
The rights management mode cannot be changed after FusionCompute installation is complete.
Enter the password of the Postgres GaussDB database and press Enter.
Information similar to the following is displayed:
If the VRM is installed for the first time, the passwords of Postgres GaussDB database on active and standby nodes must be the
same, and the passwords of Galax GaussDB database on active and standby nodes must be the same.
If the VRM is faulty, you are advised to set the passwords of Postgres and Galax GaussDB databases to those before the fault
occurs after the fault is rectified.
Enter the password of the Postgres GaussDB database again and press Enter.
If the following information is displayed, the password of the Postgres GaussDB database is set successfully:
Enter the password of the Galax GaussDB database and press Enter.
Information similar to the following is displayed:
Enter the password of the Galax GaussDB database again and press Enter.
If the following information is displayed, the VRM initialization is complete:
18. Check whether the command exits abnormally while you set the password of the gandalf user.
If yes, run the following command to reset the passwords of the gandalf user, Postgres GaussDB database, and
Galax GaussDB database:
vrmInit
New password:
Enter the password of the gandalf user again and press Enter.
The password of the gandalf user has been reset if the following information is displayed:
If you do not need to set the password of the gandalf user, enter N. The command execution is complete.
Enter the password of the Postgres GaussDB database and press Enter.
Information similar to the following is displayed:
If the VRM is installed for the first time, the passwords of Postgres GaussDB database on active and standby nodes must be
the same, and the passwords of Galax GaussDB database on active and standby nodes must be the same.
If the VRM is faulty, you are advised to set the passwords of Postgres and Galax GaussDB databases to those before the
fault occurs after the fault is rectified.
Enter the password of the Postgres GaussDB database again and press Enter.
If the following information is displayed, the password of the Postgres GaussDB database is set successfully:
Modified successfully!
Enter the password of the Galax GaussDB database and press Enter.
Information similar to the following is displayed:
Enter the password of the Galax GaussDB database again and press Enter.
If no, go to 19.
19. If you need to use the vTPM function, run the following command to start the KMS service:
sh /opt/galax/root/kms/script/startKms.sh
For enterprise users: Visit https://support.huawei.com/enterprise , search for the software package by name, and
download it.
For carrier users: Visit https://support.huawei.com , search for the software package by name, and download it.
21. Ensure that the SFTP service has been enabled on the VRM node. If the SFTP service is not enabled, enable it by referring
to Enabling SFTP on CNA or VRM Nodes . If you use WinSCP to transfer the installation tool package, you do not need
to enable the SFTP service.
22. Upload the driver package to the /home/GalaX8800 directory on the VRM node.
23. Run the following command and enter the password of user root to switch to user root. Go to the /home/GalaX8800/
directory.
su - root
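cd /home/GalaX8800/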
After the KMS service is enabled, the memory usage increases. You are advised to increase the memory specifications of the VRM node by one
level.
If yes, go to 30.
If no, go to 31.
30. Perform the operations provided in How Do I Configure NIC Binding for a VRM Node?
31. Determine whether two VRM nodes are deployed in active/standby mode.
Active/standby deployment is recommended. If the VRM node works in standalone mode and is faulty, data cannot be restored, and the
system reliability is low.
If yes, go to 32.
Before the configuration, ensure that both VRM nodes are powered on. During the configuration, do not power them off. Otherwise, the system
will break down.
33. Run the date command on each of the active and standby VRM nodes to query the current time.
34. If the times are inconsistent, run the following command to change the earlier time to the later time.
date -s "xxxx-xx-xx xx:xx:xx"
Example: date -s "2023-08-08 16:55:00"
35. Run the following command to synchronize the software time with the hardware time:
hwclock -w
36. Run the date command to check whether the modification is successful.
37. Log in to FusionCompute using the management IP address of the active VRM node.
For details, see Logging In to FusionCompute.
39. Choose System > System Configuration > Services and Management Nodes.
The Services and Management Nodes page is displayed.
40. Locate the row that contains VRM service in Service List and click Configure Deployment Mode.
A dialog box is displayed.
41. Enter the host name of the active VRM node in the Host Name of Local Node area.
42. Enter the management IP address of the standby VRM node in Peer IP Address.
43. Enter the host name of the standby VRM node in the Host Name of Peer Node area.
Change the default password of the gandalf user for logging in to the peer node.
If The node name already exists. Please change the name of the peer node is displayed, enter a new host name for the standby
VRM node in Host Name of Peer Node.
44. Set Password of gandalf on the peer node and Password of root on the peer node.
Floating IP Address: Enter the floating IP address of the active and standby VRM nodes. It must be an idle IP address in the IP address segment planned for the management port of the VRM node.
Subnet Prefix Length: Enter the subnet prefix length of the management plane.
If all the quorum IP addresses become invalid, the system cannot determine which VRM node is active. Both VRM nodes then become standby nodes and stop providing services.
If the quorum IP addresses are to be changed, you are advised to change the VRM quorum IP addresses to the new ones before the underlying IP addresses are changed.
The VRM active/standby mode configuration starts. It takes about 5 minutes and FusionCompute is unavailable during the
configuration process.
To ensure system security, administrators are advised to change the preset passwords immediately after the system installation is complete and to change the passwords periodically during subsequent maintenance. For details, see Account Information Overview.
After the VRM is installed, you can manually enable the container management function. For details, see Enabling Container Management.
Network Planning
Quorum network devices (such as Ethernet switches) must be properly configured to connect the quorum site to the active and standby
VRM nodes.
For details about how to configure quorum network devices, such as IP address and VLAN configurations, see the product documentation
of the corresponding hardware server.
Number of network ports: ≥ 2. One management network port is used for OS management, and one quorum port is used to
connect to the VRM nodes.
Software package: FusionCompute-QuorumServer-x.x.x-Eluer-X86.iso
NOTE: x.x.x indicates the actual version number.
Description: The quorum server software version must match the VRM version.
How to obtain:
For enterprise users: Visit https://support.huawei.com/enterprise, search for the software package by name, and download it.
For carrier users: Visit https://support.huawei.com, search for the software package by name, and download it.
By ISO Image
Prerequisites
You have set the first boot device to disk for the physical server to be installed.
You have obtained the IP address, username, and password for logging in to the BMC system of the physical server.
You have obtained the BIOS password of the physical server if the password has been configured.
You have prepared a tool used for remote access on various platforms, such as PuTTY.
Data
Table 1 describes the data required for performing this operation.
Procedure
The following example describes how to install the host OS on a 2288H V5 server whose BIOS version is 0.25 (U47) by mounting an
image file to the host using the BMC remote control function.
The following uses eth0 that is used by the quorum server node as a management NIC as an example to describe operations related to
NICs.
When you log in to a remote server using HTML5, if Caps Lock fails to switch between uppercase and lowercase letters, press
Shift+Letter to enter an uppercase letter.
b. Choose Configuration > Boot Device, set Effective to One-time and Boot Medium to DVD-ROM, and click
Save.
c. Choose Remote > Remote Virtual Console (Private Mode) to access the KVM.
d. Mount the ISO image file in Table 1 to install the basic package of the quorum server. For details, see 4 to 5.
a. Configure a disk.
Select a disk of the Local-Disk type for installing the basic package of the quorum server, as shown in Figure 1.
The network must be planned based on Table 1. The following procedure uses an IPv4 address as an example; set the parameters as required.
i. Select the IP address type of the quorum plane based on the actual network plan, as shown in Figure 2.
If you specify a quorum plane VLAN, set the type of the VLAN on the access switch port connected to the management network port to tagged so that the quorum plane and the switch can communicate with each other.
If you do not specify a quorum plane VLAN, set the type of a VLAN on the access switch port connected to the management network port to untagged so that uplink IP packets from the quorum plane can reach the aggregation switch through that VLAN.
Connect now: You are advised to retain the default setting, that is, do not select this option.
Both the time zone and system time must be set, even if the time in Date/Time already shows the correct local time.
The initial value of Date/Time is the time of the default time zone (Asia/Beijing). After you configure the time zone, change the value of Date/Time to the time of the new time zone. If you do not change Date/Time, only the time zone is changed and the system hardware time remains unchanged. After the system is installed, the current system time is then derived from the hardware time recorded before the installation and the newly configured time zone.
During the installation, one of the following dialog boxes may be displayed:
A dialog box informing you that the current disk partitions differ from those required for the installation and that repartitioning will delete the original disk data. In this case, perform the operation in . This dialog box is displayed if the system disk has not been partitioned or its partitions do not meet the quorum server requirements.
A dialog box asking whether all configurations are complete. You can select Yes and press Enter. This dialog box is displayed if the system disk previously had the basic package of the quorum server or a similar OS installed.
The installation process takes about 20 minutes. After the installation, the OS will restart automatically. When the login
information is displayed, the basic package installation is complete.
If no operation is performed on the screen for a long period of time, a black screen may be displayed. Press Ctrl to switch to the
installation page.
If the mounted CD/DVD-ROM drive is disconnected during the installation due to a network failure, reinstall the basic package of
the quorum server.
After the installation, if "unknown error" is displayed during the startup, unmount the ISO image and restart the quorum server.
If an error message containing "isopackage.sdf file does not match" is displayed when the installation progress reaches 100%, rectify the fault based on What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported During System Installation?
3.3.6.3.2 By Template
The basic package of the quorum server can be installed by creating a VM using a template. For details, see Creating a VM from a Template.
1. Modify parameters such as IPADDR, NETMASK, GATEWAY, and BOOTPROTO in the configuration file; a sample file is shown after these steps. BOOTPROTO must be set to static.
2. After editing the configuration file, run the service network restart command to restart the network service.
3. Run the ifconfig command to check whether the configuration of eth0 takes effect. (If the configured IP address is displayed
in the command output, the configuration takes effect.)
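A minimal sample of such a configuration file, modeled on the eth1 example later in this document (the device name eth0 comes from this procedure; the addresses are placeholders to replace with your planned values):
XXX@Linux:~# vi /etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO="static"
DEVICE="eth0"
IPADDR="192.168.5.30"
NETMASK="255.255.255.0"
ONBOOT="yes"
GATEWAY="192.168.5.1"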
Archive: OceanStor_XXX_QuorumServer_X86_64.zip
package/
package/quorum_server.sh
package/packages/
package/packages/OceanStor_XXX_QuorumServer_X86_64.rpm
package/qs_version.ini
package/tools/
2. Run the cd /usr/custom/rpm/package;sh quorum_server.sh -install command to install the quorum server software.
The current user is the root user. A quorum server administrator account needs to be provided. Continue to install?
<Y|N>:Y
New Password:
quorumsvr is the default user account for the quorum server software installation. If you want to install the quorum server software
under another user account, enter the username after Enter an administrator account for the quorum server:[default: quorumsvr],
for example, Enter an administrator account for the quorum server:[default: quorumsvr]:User_test.
a. After the quorum server software is installed, the quorum service automatically restarts. Log in to the quorum server, go to any directory, and run the qsadmin command to open the CLI of the quorum server software. If the software CLI is displayed, the software has started successfully.
XXX@Linux:~# qsadmin
start main!
admin:/>
c. Log in to the quorum server, go to any directory, and run the ps -elf | grep quo* command to check whether the quorum server software is installed successfully. If quorum_serverd is displayed in the command output, the installation is successful.
Prerequisites
Before configuring the quorum server software, ensure that the service IP address for providing the arbitration service has
been planned.
User root has been permitted to log in to the quorum server using SSH. For details, see How Do I Log In to the Quorum Server as User root Using SSH?
PuTTY is available.
Procedure
1. Configure the service IP address for the quorum server.
a. Use PuTTY and the quorum plane IP address to log in to the quorum server as user root.
b. Run the vi command to open the NIC configuration file. In this example, the network port corresponding to the
quorum port is eth1. Modify parameters such as BOOTPROTO, IPADDR, NETMASK, STARTMODE,
ONBOOT, and GATEWAY in the file.
XXX@Linux:~# vi /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO="static"
DEVICE="eth1"
IPADDR="192.168.6.31"
NETMASK="255.255.255.0"
STARTMODE="auto"
ONBOOT="yes"
GATEWAY="192.168.6.1"
c. After editing the configuration file, run the service network restart command to restart the network service.
d. Run the ifconfig command to check whether the configuration of eth1 takes effect. (If the configured IP address is
displayed in the command output, the configuration takes effect.)
XXX@Linux:~# ifconfig
XXX@Euler:~# qsadmin
start main!
admin:/>
After you start the quorum server software, run the help command for help information and to learn about the commands that are
required during the configuration process.
3. Add the service IP address and port number of the quorum server to the quorum server software.
On the CLI of the quorum server software, run the add server_ip command to add the service IP address and software
monitoring port number of the quorum server to the quorum server software for management.
The service IP address of the quorum server is used to interconnect with the VRM nodes and is used when the quorum server is
added to FusionCompute.
The default firewall port number used by the quorum server is 30002. The software monitoring port number must be the same as
this port number.
After the configuration is complete, run the show server_ip command. If the command output shows the IP address and
port number that you have added, the configuration is successful.
admin:/>show server_ip
1 192.168.6.31 30002
Automated Acceptance
Prerequisites
You have logged in to SmartKit.
Use either of the following methods to install the virtualization inspection service:
Method 1: On the SmartKit home page, click the Virtualization tab, click Function Management, select Datacenter
Virtualization Solution Inspection, and click Install.
Method 2: Import the software package of the DCS inspection service (SmartKit_version_Tool_Virtualization_Inspection.zip).
1. On the home page of SmartKit, click the Virtualization tab and click Function Management. On the page that is
displayed, click Import. In the Import dialog box, select the software package of the virtualization inspection
service and click OK.
2. In the dialog box that is displayed, click OK. In the Verification and Installation dialog box that is displayed, click
Install. In the dialog box that is displayed indicating a successful import, click OK. The status of Datacenter
Virtualization Solution Inspection changes to Installed.
Procedure
1. Access the inspection tool.
a. On the SmartKit home page, click the Virtualization tab. In Routine Maintenance, click Datacenter
Virtualization Solution Inspection.
d. In the displayed Create an Environment dialog box, set Environment Name and Customer Cloud Name,
and click OK.
The environment information can be added only once. If the environment information has been added, it cannot be added again.
If you need to add an environment, delete the original environment first.
3. Add nodes.
c. Locate the customer cloud name and click Add Node in the Operation column.
Parameter descriptions:
Username: For user admin, the username is admin, and the password is the one set upon first login.
e. Click OK. In the Add Node dialog box that is displayed, confirm the node information, select all nodes, and click
OK. To add multiple sets of devices of the same type, repeat the operations for adding a customer cloud and adding
a node.
Scenarios
After deploying the FusionCompute system, use the inspection function of SmartKit to check the environment before service
provisioning to determine whether the current environment configuration is optimal.
Prerequisites
You have logged in to SmartKit.
The environment has been configured. For details, see System Management on SmartKit.
Procedure
1. Access the inspection tool.
a. On the SmartKit home page, click the Virtualization tab. In Routine Maintenance, click Datacenter
Virtualization Solution Inspection.
a. In the main menu, click Health Check to go to the Health Check Tasks page.
b. In the upper left corner, click Create to go to the Create Task page. In the Task Scenario area, select Quality
check.
Parameter descriptions:
Task Scenario: Select a scenario where the health check task is executed.
Routine Health Check: checks the basic check items required for routine O&M.
Pre-upgrade Check: before the upgrade, checks whether the system status meets the upgrade requirements.
Quality Check: after the FusionCompute environment is deployed, checks the environment before service rollout.
Post-upgrade Acceptance: after the system is upgraded, checks whether the system is normal.
Send check report via email: Indicates whether to enable the email push task.
Customer Cloud: Select the target customer cloud where the health check task is executed.
Select Objects: Select the target objects for the health check task. Management: nodes of services on the management plane. Select at least one node to execute a health check task.
Select Check Items: Select the items for the health check task. Select at least one item for the health check task.
NOTE: By default, all check items of all nodes are selected. To modify the items, select the needed nodes.
Viewing basic information about the task: In the Basic Information area, view the name and status of the current task.
Viewing the object check pass rate and check item pass rate: The pass rates of objects and check items are displayed in pie charts. You can select By environment or By product.
NOTE: Determine the object status based on the results of its check items. The object status can be Passed or Failed. If all check items pass, the object status is Passed. If any check item fails, the object status is Failed.
Exporting the health check report: In the upper right corner of the page, click Export Report. Select a report type (Basic Report, Quality Report, or Synthesis Report). If you select Synthesis Report, enter the Customer Name (name of the user of the health check report) and Signature (name of the provider of the health check report).
NOTE:
If a storage plane is not planned, the result of check item Network103 does not comply with the best practice. This issue does
not need to be handled and will be automatically resolved after storage interfaces are added.
If a large amount of valid alarm information exists in the inspection environment, the inspection task may take a long time. Wait for it to complete.
Scenarios
The FusionCompute inspection function supports routine inspection, pre-upgrade check, and deployment quality inspection. In the
deployment quality inspection scenario, only the basic configuration and environment status after the deployment can be
inspected, but basic services and interfaces cannot be inspected. Therefore, after FusionCompute is deployed, self-check and
dialing test capabilities must be available.
The automated acceptance function of SmartKit provides the post-deployment self-check capability to ensure normal running of
FusionCompute.
Prerequisites
You have logged in to SmartKit.
The environment has been configured. For details, see System Management on SmartKit.
Procedure
1. Access the inspection tool.
a. On the SmartKit home page, click the Virtualization tab. In Routine Maintenance, click Datacenter
Virtualization Solution Inspection.
Parameter descriptions:
Customer Cloud: Customer cloud name, which was set when the FusionCompute environment was added.
Protocol: CIFS: Use CIFS to export the template to the local PC. NFS: Use NFS to export the template to a remote server. The NFS protocol does not support authentication or encryption, so ensure that NFS is used only on a trusted network.
f. Click Create Now. Then, the automated acceptance task is created. After the task is created, the system
automatically executes the task.
Basic Information: In the Basic Information area, view the task name, status, duration, and other information. You can click Export Report to export the acceptance report. You can click Retry to perform the acceptance task again.
Acceptance Result: The pass rates of acceptance cases are displayed in a pie chart. Failed cases of each customer cloud environment are displayed in a list. You can click Case Details to view the execution status of a case. You can click Retry to retry all cases of the customer cloud.
Acceptance Details: The acceptance results of all cases are displayed in a list, including passed and failed cases. You can click Case Details to view the execution status of a case. You can click Retry to retry all cases of the customer cloud.
3.3.8 Appendix
FAQ
3.3.8.1 FAQ
How Do I Handle the Issue that System Installation Fails Because the Disk List Cannot Be Obtained?
How Do I Handle the Issue that VM Creation Fails Due to Time Difference?
How Do I Handle the Issue that a Service Port Has Been Occupied on FusionCompute Installer?
What Do I Do If the Error "kernel version in isopackage.sdf file does not match current" Is Reported During System
Installation?
How Can I Handle the Issue that a Local Virtualized Datastore Fails to Be Added Due to a GPT Partition During Tool-based
Installation?
How Can I Handle the Issue that the Node Fails to Be Remotely Connected During the Host Configuration for Customized
VRM Installation?
How Do I Handle the Issue that the Host Cannot Be Started Properly and the grub rescue Page Is Displayed During the
Starting Process?
How Do I Handle the Issue that the VRM Installation Fails Because Importing the Template Takes a Long Time?
What Can I Do If Disk Selection Fails When a Host Is Being Reinstalled After a Fault Is Rectified?
Symptom
System installation fails because the disk list cannot be obtained. Figure 1 or Figure 2 shows failure information.
Possible Causes
No installation disk is available in the system. As a result, the installation fails and the preceding information is reported.
Storage media on the server are not initialized. As a result, the installation fails and the preceding information is reported.
The server was used, and its RAID controllers and disks contain residual data. As a result, the installation fails and the
preceding information is reported.
The system may not have a RAID controller card driver. You need to confirm the hardware driver model, download the
driver from the official website, and install it. For details about how to install the driver, see FusionCompute SIA Device
Driver Installation Guide.
Troubleshooting Guideline
Before installing the system, initialize the RAID controllers and disks on the server and delete their residual data.
Procedure
4. On the menu bar, choose Remote. The Remote Console page is displayed, as shown in Figure 3.
5. Click Java Integrated Remote Console (Private), Java Integrated Remote Console (Shared), HTML5 Integrated
Remote Console (Private), or HTML5 Integrated Remote Console (Shared). The real-time desktop of the server is
displayed, as shown in Figure 4 or Figure 5.
Java Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
Java Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS and
perform operations on the server using the iBMC. The users can view the operations of each other.
HTML5 Integrated Remote Console (Private): Only one local user or VNC user can connect to the server OS using the iBMC.
HTML5 Integrated Remote Console (Shared): Two local users or five VNC users can concurrently connect to the server OS
and perform operations on the server using the iBMC. The users can view the operations of each other.
7. Select Reset.
The Are you sure to perform this operation dialog box is displayed.
8. Click Yes.
The server restarts.
9. When the following information is displayed during the server restart, press Delete quickly.
The default password for logging in to the BIOS is Admin@9000. Change the administrator password immediately after your
first login.
For security purposes, change the administrator password periodically.
The system will be locked if incorrect passwords are entered three consecutive times. You need to restart the server to unlock it.
12. On the Advanced screen, select Avago MegaRAID <SAS3508> Configuration Utility and press Enter. The
Dashboard View screen is displayed.
13. Check whether the RAID array has been created for system disks on the server.
If yes, go to 14.
If no, go to 16.
14. On the Dashboard View screen, select Main Menu and press Enter. Then select Configuration Management and press
Enter.
15. On the Configuration Management screen, select Clear Configuration and press Enter. On the displayed confirmation
screen, select Confirm and press Enter. Then select Yes and press Enter to format the hard disk.
16. On the Dashboard View screen, select Main Menu and press Enter. Then select Configuration Management and press
Enter. Select Create Virtual Drive and press Enter. The Create Virtual Drive screen is displayed.
17. On the Create Virtual Drive screen, select Select RAID level using the up and down arrow keys and press Enter. Create
a RAID array (RAID 1 is used as an example) using disks. Select RAID1 from the drop-down list box, and press Enter.
18. On the Create Virtual Drive screen, select Default Initialization using the up and down arrow keys and press Enter.
Select Fast from the drop-down list box and press Enter.
19. Select Select Drives From using the up and down arrow keys and press Enter. Select Unconfigured Capacity using the
up and down arrow keys.
20. Select Select Drives using the up and down arrow keys and press Enter. Select the first (Drive C0 & C1:01:02) and the
second (Drive C0 & C1:01:05) disks using the up and down arrow keys to configure RAID 1.
Drive C0 & C1 may vary on different servers. You can select a disk by entering 01:0x after Drive C0 & C1.
Press the up and down arrow keys to select the corresponding disk, and press Enter. [X] after a disk indicates that the disk has
been selected.
21. Select Apply Changes using the up and down arrow keys to save the settings. The message "The operation has been
performed successfully." is displayed. Press the down arrow key to choose OK and press Enter to complete the
configuration of member disks.
22. Select Save Configuration and press Enter. The operation confirmation screen is displayed. Select Confirm and press
Enter. Select Yes and press Enter. The message "The operation has been performed successfully." is displayed. Select
OK using the down arrow key and press Enter.
23. Press ESC to return to the Main Menu screen. Select Virtual Drive Management and press Enter to view the RAID
information.
24. Press F10 to save all the configurations and exit the BIOS.
26. Before installing a system, access the disk RAID controller page to view disk information. Figure 8 shows disk
information. The method for accessing the RAID controller page varies depending on the RAID controller card in use. For
example, if RAID controller card 2308 is used, press Ctrl+C to access the disk RAID controller page.
27. Check whether the RAID array has been created for system disks on the server.
If yes, select Manage Volume in Figure 8 to access the page shown in Figure 9 and then click Delete Volume to
delete the residual RAID disk information from the system.
If no, go to 28.
29. After configuration is complete, select Save changes then exit this menu on the screen to exit, as shown in Figure 11.
Symptom
The VM fails to be created when VRM is installed using the installation tool. In some scenarios, a message is displayed indicating that the failure may be caused by a time difference. If the message is not displayed, the log may show that the time difference exceeded 5 minutes before the VM creation failed.
Procedure
1. Click Install VRM.
Check whether the VM is successfully created.
If no, the local PC may be a VM and is not restarted for a long time. In this case, go to 2.
3. Select Save.
5. Select Continue.
If you close the tool without saving data, or if you reopen the tool and uninstall and reinstall VRM instead of continuing the installation, the residual host data is not cleared. In this case, you need to install the host and VRM again. Otherwise, the system may prompt that the host has been added to another site when you configure the host.
3.3.8.1.3 How Do I Handle the Issue that a Service Port Has Been
Occupied on FusionCompute Installer?
Symptom
The installation is interrupted by a message displayed on FusionCompute Installer indicating that a port has been occupied.
Possible Causes
The default port used by the postgresql service is 5432. If the postgresql service has been installed on the local PC or a program
on the local PC uses port 5432, the postgresql service used by the FusionCompute installation wizard cannot start.
Troubleshooting Guideline
None
Procedure
1. On the local PC, click Start, enter cmd in Search programs and files, and press Enter to open the CLI.
2. Run the following command to view the ID of the process that is using port 5432:
netstat -ano | findstr :5432
If information similar to the following is displayed, make a note of the process ID (the value displayed in the last column):
3. Right-click a blank area on the taskbar and select Start Task Manager to open the Windows Task Manager window.
4. On the Processes page, choose View > Select Columns in the upper part, select PID (Process Identifier) in the Select
Process Page Columns window, and click OK.
5. On the Processes tab page, select Show processes from all users and locate the process in the PID column based on the
ID obtained in 2.
6. If ending the process does not adversely affect the local PC, end the process.
If ending the process adversely affects the local PC, run FusionCompute Installer on another PC.
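If you prefer the command line to Task Manager, the same check-and-end sequence can be performed as follows (the PID 1234 is a placeholder taken from the netstat output; end the process only after confirming it is safe to do so):
:: Find the PID listening on port 5432 (last column of the output)
netstat -ano | findstr :5432
:: Confirm which program owns that PID (1234 is a placeholder)
tasklist /FI "PID eq 1234"
:: End the process only if it is safe to do so
taskkill /PID 1234 /F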
After the installation is complete, log in to the host again, switch to the root user, and run cd
/home/GalaX8800/Name of the decompressed folder and sh bin/webInstaller.sh uninstall to uninstall the
installation service.
After the installation is complete, the Complete page is displayed. Click Finish or the FusionCompute address. In
the dialog box that is displayed, click OK to deliver the service uninstallation task. If the VRM installation is
complete and the task is successfully delivered, the FusionCompute login page is displayed.
2. Delete the installation tool.
b. Run the following command and enter the password of the root user to switch to the root user:
su - root
d. Run the following commands to delete the installation tool software package:
rm -rf Name of the installation software package
Example: rm -rf FusionCompute-LinuxInstaller-8.8.0-ARM_64.zip
3. Disable the SFTP service. For details, see Disabling SFTP on CNA or VRM Nodes.
Symptom
During the system installation, an error is reported when the installation information is compared with the isopackage.sdf file. As
a result, the installation fails. Figure 1 shows the reported information.
Possible Causes
During the installation, both the local and remote ISO files are mounted to the server.
Procedure
1. Confirm the ISO file to be installed.
4. Reinstall the host or VRM using the ISO file of the remote CD/DVD-ROM drive.
Install the host. For details, see Installing Hosts Using ISO Images (x86) or Installing Hosts Using ISO Images (Arm).
Install VRM. For details, see Installing VRM Nodes Using ISO Images (x86) or Installing VRM Nodes Using ISO Images (Arm).
No further action is required.
5. Reinstall the host or VRM using the ISO file of the local CD/DVD-ROM drive.
Symptom
add storage failed is displayed when you add a datastore during the VRM installation using the FusionCompute installation tool.
Error message "Storage device exists GPT partition, please clear and try again." is displayed in the log information.
Procedure
The following operations are high-risk operations because they will delete and format the specified storage device. Before performing the
following operations, ensure that the local disk of the GPT partition to be deleted is not used.
1. Take a note of the value of the storageUnitUrn field displayed in the log information about the storage device that fails to
be added.
For example: urn:sites:54830A53:storageunits:F1E6FF755C8C4AB49A8BD2791F1A4E3E
3. Run the following command and enter the password of user root to switch to user root:
su - root
4. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
The value of the name field is the name of the storage device.
In the following example command output, the storage device name is HUS726T4TALA600_V6KV5K2S.
If the storage device name is on the right of master in 6, the host IP address is the value of master_ip. If the storage device name is on
the right of slave in 6, the host IP address is the value of slave_ip.
9. Run the following command and enter the password of user root to switch to user root:
su - root
10. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
11. Run the following command to view the storage device path:
redis-cli -p 6543 -a Redis password hget StorageUnit:Storage device name op_path
For details about the default password of Redis, see Account Information Overview. The storage device name is the name obtained in 6.
12. Run the following command to delete the signature of the file system on the local disk:
This operation will clear the original data on the disk, which is a high-risk operation. Before performing this operation, ensure that the
disk is not in use.
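The command itself is not reproduced in this extract. On Linux, file-system signatures are commonly removed with wipefs; the following is an illustrative sketch only, with a hypothetical device path that must be replaced with the device path obtained via op_path in 11:
# Destructive: erases all file-system signatures on the device (path is hypothetical)
wipefs -a /dev/sdx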
If yes, go to 14.
3.3.8.1.8 How Can I Handle the Issue that the Node Fails to Be
Remotely Connected During the Host Configuration for Customized
VRM Installation?
Symptom
When a host is installed using an ISO image, gandalf is not initialized. As a result, the system displays a message indicating that
the remote connection to the node fails during the host configuration for customized VRM installation.
Solution
Check whether the IP address of the host where the VRM is to be installed is correct.
Check whether the password of user root for logging in to the host where the VRM is to be installed is correct.
Check whether the following command has been executed on the host to set the password of user gandalf and whether the
password is correct:
cnaInit
If you enter an incorrect password of user gandalf for logging in to the host, the user will be locked for 5 minutes. To
manually unlock the user, log in to the locked CNA node as user root through remote control (KVM) and run the faillock --
reset command.
3.3.8.1.9 How Do I Handle the Issue that the Host Cannot Be Started
Properly and the grub rescue Page Is Displayed During the Starting
Process?
Symptom
When installing the server OS, /dev/sda and /dev/sdb are used to form a software RAID 1. After the installation is successful, the
server can be started properly.
If the OS needs to be reinstalled and other disks are selected for installation, for example, /dev/sdb and /dev/sdc are used to form
a software RAID 1, or only /dev/sda is installed, the OS is successfully installed, but the server fails to be started and enters the
grub rescue mode, as shown in the following figure.
Possible Causes
RAID 1 uses disk mirroring: data written to one disk is simultaneously written to the other. Therefore, after RAID 1 is created, the grub boot information exists on both disks. When different disks are selected for reinstallation, the disk that was used in the previous installation but not in the current one is treated as a data disk, and the information on it is neither deleted nor formatted. When the server restarts after the reinstallation, the BIOS still detects the old installation information and selects the wrong default boot disk.
Solution
Solution 1:
1. If the system enters the grub rescue mode, you can forcibly restart the system. Press F11 to go to the Boot Manager page,
select Hard Disk Drive, and press + to select the boot disk, as shown in Figure 1 and Figure 2.
2. Then, you can restart the server or directly boot the server using the correct boot disk.
3. Manually delete the residual installation information on the incorrectly identified disk. For example, if the /dev/sda and
/dev/sdb disks are selected for installation for the first time and the /dev/sda disk is selected for installation for the second
time, you need to delete the residual installation information on the /dev/sdb disk.
Solution 2:
1. In the grub rescue mode, only a few commands can be used, for example, set, ls, insmod, root, prefix.
set: This command is used to view environment variables. You can view the boot path and partition.
root: This command is used to specify the partition used for the system startup and set the grub boot partition in
grub rescue mode.
2. You can run the preceding commands to restore the system and enter the system. The procedure is as follows:
To view the partitions, run the ls command at the grub rescue> prompt and press Enter. The command output is as follows:
To find the system installation partition, traverse the partitions by running ls (hd0,gptX) at the grub rescue> prompt, for example ls (hd0,gpt3), and press Enter, until "Filesystem is unknown." is no longer displayed.
To modify the boot partition, run the set command to view the current boot partition settings, change the boot partition to the correct one, and load normal.
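The original figure is not reproduced in this extract. A typical command sequence at the grub rescue prompt looks like the following (the partition (hd0,gpt3) and the grub directory path are illustrative and must match what was found with ls above):
grub rescue> set prefix=(hd0,gpt3)/boot/grub2
grub rescue> set root=(hd0,gpt3)
grub rescue> insmod normal
grub rescue> normal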
The system can be accessed normally. However, after the restart, the system still enters the grub rescue mode.
Therefore, you need to manually delete the residual installation information on the incorrectly identified disk by
running the following command:
dd if=/dev/zero of=/dev/sdb bs=512K count=1
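Before running the dd command, it is prudent to confirm which disk carries the residual boot data; a cautious sketch (the device name /dev/sdb is the one from the example above):
# List block devices to confirm the stale disk before wiping
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# Irreversibly zeroes the first 512 KiB of the stale disk, clearing residual boot data
dd if=/dev/zero of=/dev/sdb bs=512K count=1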
3.3.8.1.10 How Do I Handle the Issue that the VRM Installation Fails
Because Importing the Template Takes a Long Time?
Symptom
It takes more than 2 hours to import the template during VRM installation using a tool.
Procedure
1. Check the IP address of the host on which importing the template takes a long time.
View the installation tool log and search for keyword upload vhd successfully to determine the host where the template
has been imported. If the host is not found, the template has not been imported.
2. Check whether the network is abnormal on the host whose template has not been imported.
a. Log in to the target host in SSH mode and run the ifconfig command to view the NIC device name corresponding to the IP address. Run the ethtool [dev] command to view the NIC information, check whether auto-negotiation is enabled, and check the current negotiated rate. If the negotiated rate is 10 Mbit/s, the hardware configuration (such as the NICs and network cables) may be inadequate; you are advised to replace the hardware and try again. The key fields to check in the ethtool output are summarized after this list.
ethtool eth0
b. Ping the target host from the local PC and check whether the delay is too long. If it is, check the network status. You are advised to try again when the network is normal.
c. If any of the exceptions in 2.a or 2.b is found, wait until the template import task completes successfully, or stop the current template import task, rectify the network fault, and import the template again. If no network exception is found, you are advised to stop the current template import task and try again.
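As a supplement to 2.a, the key fields in the ethtool output look like the following (an illustrative excerpt; the exact output varies by NIC and driver):
Speed: 10Mb/s            <- the negotiated rate; 10 Mbit/s points to a link or hardware problem
Duplex: Full
Auto-negotiation: on     <- whether auto-negotiation is enabled
Link detected: yes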
Scenarios
This section describes how to rectify a disk selection failure if Failed to check whether the system can be restored is displayed
when a host is being reinstalled after a fault is rectified.
Procedure
1. Press Alt+F2 to log in to the system as user root.
3. Open the new_part_file_new and new_part_file_old files and check whether the content in the /boot line is the same in
the two files.
Starting with UUID:
If the content in one file starts with UUID and the content in the other file does not start with UUID, perform the
following operations:
b. Modify the new_part_file_old file to ensure that the content in the /boot line is the same as that in the
new_part_file_new file and the /boot line is in the same location in the two files.
If the UEFI installation mode is used, the same operation is also required for the content in the /boot/efi line in the
new_part_file_old file.
Scenarios
This section guides you through uninstalling a Mellanox NIC driver kernel module before using the FC HBA driver.
Prerequisites
You have obtained the IP address of the host where the driver is to be uninstalled.
PuTTY is available.
Procedure
1. Use PuTTY to log in to the host where the driver is to be uninstalled.
Ensure that the management IP address and user gandalf are used for login.
The system supports login authentication using a password or a private-public key pair. If you use a private-public key pair to authenticate the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication Mode?
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
4. Run the following command to go to the directory where the driver package resides (take the Arm architecture as an
example):
cd /opt/galax/install/driver/installfiles/MLNX_OFED_LINUX-23.10-0.5.5.0-euleros2.0sp12-aarch64
5. Run the following command to check whether the driver package contains the uninstall.sh script:
ll
7. Run the following commands to go to the driver directory and delete the Mellanox NIC driver package:
cd /opt/galax/install/driver
rm -rf /opt/galax/install/driver/installfiles/MLNX_OFED_LINUX-23.10-0.5.5.0-euleros2.0sp12-aarch64
rm -rf /opt/galax/install/driver/MLNX_OFED_LINUX-23.10-0.5.5.0-euleros2.0sp12-aarch64
8. After the uninstallation is successful, run the reboot command to restart the host.
Prerequisites
You have obtained the BMC IP address as well as the BMC username and password of the quorum server.
Procedure
1. Log in to the BMC system of the server, open the remote control window, and log in to the server as user root.
3. Check whether the management IP address of the server is contained in the value of ListenAddress in the file. If no,
manually add it.
ListenAddress 10.1.2.23
4. Check whether the root account exists in the value of AllowUsers in the file. If no, manually add it.
AllowUsers root
5. Check whether the value of PermitRootLogin is yes in the file. If no, manually change the value to yes.
PermitRootLogin yes
6. Press Esc to exit the insert mode and enter :wq to save the modification and exit.
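The procedure above does not show a restart of the SSH service; on a standard systemd-managed system, the edits typically take effect only after a restart such as the following (an assumption, as this step is not in the original procedure):
systemctl restart sshd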
Only Hygon servers support this BIOS configuration. For details about the BIOS parameters, see the server vendor's documentation.
To access the BIOS from the OS, run ipmitool chassis bootdev bios followed by ipmitool power reset or ipmitool power cycle. The BIOS setting page is displayed after the restart.
Dimmed options are unavailable. Items marked with a submenu symbol have submenus.
For details about how to set the baseline parameters, see Table 1.
SR-IOV: Enable
IO IOMMU: Enabled
Procedure
1. Prepare a Linux host that has the same architecture as the host where the firmware package is to be installed to create a
linux-firmware firmware image.
3. Use WinSCP to copy the firmware package to the host where the firmware image is to be created.
5. Run the following command and enter the password of user root as prompted to switch to user root and go to the directory
where the firmware package is stored:
su - root
6. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
8. Run the following command to generate the firmware image file dd.iso:
mkisofs -R -o dd.iso xxx/
Example: mkisofs -R -o dd.iso linux-firmware-20250211/
xxx indicates the name of the subfolder that stores the created firmware in the current path.
9. For a host where the linux-firmware firmware package is to be installed, mount the dd.iso firmware driver file. (If the host
is installed using an ISO image, disconnect the current image first.) After the mounting is successful, run the following
command to copy the firmware file:
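The copy command itself is not reproduced in this extract. A sketch of what this step typically looks like, assuming the mounted ISO appears as /dev/sr0 and the files are copied into the firmware directory checked in the next substep (both paths are assumptions):
# Mount the virtual CD-ROM carrying dd.iso (device path is an assumption)
mount /dev/sr0 /mnt
# Copy the firmware files into the firmware directory (target path is an assumption)
cp -r /mnt/* firmware/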
c. Run the following command to view the file structure in the firmware directory:
ll firmware/
d. Compare the file structure in the firmware directory with that in the firmware package downloaded in 2 and check
whether they are consistent.
Basic Configuration
Appendix
Process
Figure 1 shows the FusionCompute configuration process.
Configuration Tasks
Table 1 describes the configuration tasks in the FusionCompute configuration process.
Task Description
Scenarios
This section guides software commissioning engineers to load a license file to a site after FusionCompute is installed so that
FusionCompute can provide licensed services for this site within the specified period.
You can obtain the license using either of the following methods:
Apply for a license based on the electronic serial number (ESN) and load the license file.
Share a license file with another site. When a license is shared, the total number of physical resources (CPUs) and container
resources (vCPUs) at each site cannot exceed the license limit.
Prerequisites
Conditions
You need to obtain the following information before sharing a license file with another site:
If VRM nodes are deployed in active/standby mode at the site, you have obtained the VRM node floating IP address. If only
one VRM node is deployed at the site, you have obtained the management IP address.
Data
Data preparation is not required for this operation.
Procedure
Log in to FusionCompute.
6. Select License server and check whether the value of License server IP address is 127.0.0.1.
a. If yes, go to 7.
b. If no, set License server IP address to 127.0.0.1, and click OK. Then, go to 7.
For enterprise users: Visit https://support.huawei.com/enterprise, search for the document by name, and download it.
For carrier users: Visit https://support.huawei.com, search for the document by name, and download it.
If the VRM version of the license client is FusionCompute 8.8.0, a VRM node in a version earlier than FusionCompute 8.8.0 cannot be used as a
license server.
13. Run the following command on the VRM node of a later version to transfer the script to the /home/GalaX8800/ directory
of the VRM node of an earlier version. Then, move the script to the /opt/galax/gms/common/modsysinfo/ directory.
scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no /opt/galax/gms/common/modsysinfo/keystoreManage.sh gandalf@IP address of the VRM node of an earlier version:/home/GalaX8800/
cp /home/GalaX8800/keystoreManage.sh /opt/galax/gms/common/modsysinfo/
a. Import the VRM certificate of the site where the license file has been loaded to the local end. For details, see Manually Importing the Root Certificate.
b. Import the VRM certificate of the local end to the site where the license file has been loaded.
To obtain the VRM certificate of the local end, perform the following steps:
i. Use PuTTY and the management IP address to log in to the active VRM node as user gandalf.
ii. Run the following command and enter the password of user root to switch to user root:
su - root
iii. Run the following command to copy server.crt to the /home/GalaX8800 directory:
cp /etc/galax/certs/vrm/server.crt /home/GalaX8800/
License server IP address: Enter the management IP address of the VRM node of the site that has the license file
loaded. If the site has only one VRM node deployed, enter the VRM node management IP address. If the site has
two VRM nodes working in active/standby mode, enter the floating IP address of the VRM nodes.
Account: Enter the username of the FusionCompute administrator of the site that has the license file loaded.
Password: Enter the password of the FusionCompute administrator of the site that has the license file loaded.
The FusionCompute administrator at the site where the license has been loaded must be a new machine-machine account whose
Subrole is administrator or a new system super administrator account.
The VRM that is activated in associated mode cannot be set as the license server.
The keys of VRM nodes that share the license must be the same. If they are different, change them to be the same.
If the VRM nodes of different versions share the license, change the keys of the later version to the keys of the earlier version. The
procedure is as follows:
a. Run the following command on the VRM nodes of the later version to transfer the script to the /home/GalaX8800/
directory of the VRM nodes of the earlier version.
scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no /opt/galax/root/vrm/tomcat/script/updateLmKey.sh gandalf@IP address of a VRM node in an earlier version:/home/GalaX8800/
b. Run the following command on the VRM nodes of the earlier version to query the keys of VRM nodes of the earlier
version.
sh /home/GalaX8800/updateLmKey.sh query
c. Run the following command on the VRM nodes of the later version to change the keys of the later version to those of the
earlier version. After this command is executed, the VRM service automatically restarts.
sh /opt/galax/root/vrm/tomcat/script/updateLmKey.sh set
The following command output is displayed:
Please Enter aes key:
Enter the key and press Enter. If the following information is displayed in the command output, the key is changed
successfully.
Redirecting to /bin/systemctl restart vrmd.service
success
If VRM nodes of the same version share a license, change the key of the client to that of the server. The procedure is as follows:
a. Run the following command on the server VRM node to query the key:
sh /opt/galax/root/vrm/tomcat/script/updateLmKey.sh query
b. Run the following command on the client VRM node to set the key of the client to the key of the server. After this command is executed, the VRM service automatically restarts.
sh /opt/galax/root/vrm/tomcat/script/updateLmKey.sh set
The following command output is displayed:
Please Enter aes key:
Enter the key and press Enter. If the following information is displayed in the command output, the key is changed
successfully.
Redirecting to /bin/systemctl restart vrmd.service
success
Scenarios
This section guides administrators to configure available MAC address segments for the system on FusionCompute to allocate a
unique MAC address to each VM.
FusionCompute provides 100,000 MAC addresses for users, ranging from 28:6E:D4:88:B2:A1 to 28:6E:D4:8A:39:40. The
first 5000 addresses (28:6E:D4:88:B2:A1 to 28:6E:D4:88:C6:28) are dedicated for VRM VMs. The default address segment
for new VMs is 28:6E:D4:88:C6:29 to 28:6E:D4:8A:39:40.
If only one FusionCompute environment is available on the Layer 2 network, the FusionCompute environment can use the
default address segment (28:6E:D4:88:C6:29 to 28:6E:D4:8A:39:40) provided by the system. In this case, skip this section.
If multiple FusionCompute environments are available on the Layer 2 network, you need to divide the default address
segment based on the number of VMs in each FusionCompute environment and allocate unique MAC address segments to
each FusionCompute environment. Otherwise, MAC addresses allocated to VMs may overlap, adversely affecting VM
communication.
When configuring a custom MAC address segment, change the default MAC address segment to the custom address
segment or add a new address segment. A maximum of five MAC address segments can be configured for each
FusionCompute environment, and the segments cannot overlap.
Prerequisites
Conditions
You have logged in to FusionCompute.
Data
The MAC address segments for user VMs have been planned.
The address segments to be configured and the reserved 5000 MAC addresses dedicated for VRM VMs cannot overlap.
If only one FusionCompute environment is available on the Layer 2 network, you can use the default MAC address segment
(28:6E:D4:88:C6:29 to 28:6E:D4:8A:39:40).
If multiple FusionCompute environments are available on the Layer 2 network, you need to divide the default MAC address segment
based on the number of VMs in each FusionCompute environment.
For example, if two FusionCompute environments are available on the Layer 2 network, divide the 95,000 MAC addresses of the default segment between the two environments, for example, 45,000 MAC addresses to one environment and 50,000 MAC addresses to the other environment.
The following MAC address segments can be allocated:
The MAC address segment for FusionCompute 1 (the first 45,000 addresses): 28:6E:D4:88:C6:29 to 28:6E:D4:89:75:F0
The MAC address segment for FusionCompute 2 (the last 50,000 addresses): 28:6E:D4:89:75:F1 to 28:6E:D4:8A:39:40
The same rule applies when there are multiple environments.
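The boundary of each segment can be derived from the low three bytes of the MAC address. The following shell sketch (illustrative; it can be run on any Linux node) computes the last address of the first 45,000-address segment:
start=$(( 0x88C629 ))   # low three bytes of 28:6E:D4:88:C6:29
end=$(( start + 45000 - 1 ))   # last address of a 45,000-address segment
printf '28:6E:D4:%02X:%02X:%02X\n' $(( (end >> 16) & 0xFF )) $(( (end >> 8) & 0xFF )) $(( end & 0xFF ))
# Output: 28:6E:D4:89:75:F0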
Procedure
5. Click OK.
The MAC address segment is configured.
To modify or delete a MAC address segment, locate the row where the target MAC address segment resides and click
Modify or Delete.
Scenarios
Configure a third-party FTP server to back up important data on the VRM node. After the FTP server is configured, the VRM
node automatically backs up important data to the FTP server at 02:00 every day. If management data is backed up to a host, the
system automatically copies the management data excluding monitoring data to the /opt/backupdb directory on the host every
hour. The host retains only data generated in one day. If a system exception occurs, the backup data can be used to restore the
system.
Prerequisites
Conditions
You have logged in to FusionCompute.
Data
Table 1 describes the data required for performing this operation.
If no FTP server is used, select Host (the data backup does not include monitoring data).
Username: Specifies the username for logging in to the FTP server. Example value: ftpuser
Protocol Type: Specifies the protocol to be used, which can be FTP or FTPS. You are advised to select FTPS to enhance file transmission security. If the FTP server does not support the FTPS protocol, select FTP. Example value: FTPS
Backup Path: Specifies the relative path in which the backup files are stored. If multiple sites share the same backup server, set the directory to /Backup/VRM-VRM IP address/ for easy identification. VRM IP address indicates the VRM management IP address if the site has one VRM node or the floating IP address if the site has two VRM nodes working in active/standby mode. If data is backed up to a host, the directory is /opt/backupdb, which cannot be changed. Example value: /GalaxEngineBackup/VRM/
Procedure
Configure a backup server.
2. Choose System Management > System Configuration > Services and Management Nodes.
The Services and Management Nodes page is displayed.
3. In the Service List area, locate the row that contains VRM service, click More, and choose Configure Management
Data Backup.
If multiple sites share the same backup server, set the directory to /Backup/VRM-VRM IP address/ for easy identification. VRM IP
address indicates the VRM management IP address if the site has one VRM node or the floating IP address if the site has two VRM
nodes working in active/standby mode. If the management data is backed up to a host, the backup directory must be /opt/backupdb, and
only key data is backed up.
4. Select Back up to a 3rd-party FTP server or local VRM node and set the following parameters:
Protocol Type: You are advised to select FTPS or SFTP to enhance file transmission security. If
the FTP server does not support the FTPS or SFTP protocol, select FTP.
If FTPS is used, you need to deselect the TLS session resumption option of the FTP server.
Username: Enter the username for logging in to the third-party FTP server.
Password: Enter the password for logging in to the third-party FTP server.
Port: Enter the FTP port used by the third-party FTP server.
Backup Path: Enter the relative path in which the backup files are stored.
b. Host
Figure 2 Host
5. Click OK.
If the management data backup fails, check the following items:
a. Check whether the management data backup configuration is correct, for example, the username or password.
b. Check whether the network between VRM and the third-party FTP server or host is faulty.
c. Check whether the disk space of the third-party FTP server or host is insufficient.
d. Management data cannot be backed up to a host running an earlier version.
e. When the management data is backed up to a host, if DNS is configured on the host, communication times out when logging in to the host using SSH.
f. When the management data is backed up to a third-party FTP server, the FTP server configuration may be incorrect.
g. When the management data is backed up to a third-party FTP server, the FTP client may not have permission to create a folder or file on the server.
h. In IPv6 scenarios, if the IP address of the VRM VM is a site-local address (for example, an address prefixed with FEC0::/10), a third-party FTP server cannot be configured.
If an exception occurs on a third-party FTP server, contact the server administrator to handle this issue.
If the root certificate is not imported, alarm Invalid Certificate of FTP Server for Management Data will be generated.
7. Run the following command and enter the password of the root user to switch to the root user:
su - root
8. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
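For example, TMOUT=0 disables the inactivity timeout for the current session, and TMOUT=3600 allows one hour of inactivity (illustrative values; choose one that matches your security policy):
TMOUT=3600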
9. Run the following command to check whether the server certificate verification is enabled:
cat /etc/galax/vrm/certSecurity.properties | grep verifySrvCert
View the command output and perform the following operations:
If verifySrvCert is true in the command output, the server certificate verification is enabled. In this case, go to 10.
If verifySrvCert is false in the command output, the server certificate verification is disabled. In this case, no further
action is required.
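Because certSecurity.properties is a properties file, the matched line typically looks like the following (illustrative):
verifySrvCert=true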
10. Import the root certificate as instructed in Importing the Root Certificate of the FTP Server Certificate for Managing Data
Backups .
Scenarios
On FusionCompute, configure the system time synchronization and the time zone to ensure the proper running of FusionCompute
services. After the configuration, all VRM nodes and existing hosts synchronize time with the NTP server. For hosts added later,
select Use Site Time Sync Policy to apply time synchronization information configured at the site to the hosts.
You are advised to configure an external NTP clock source. If no external NTP clock source exists, configure the host accommodating the VRM VM (when VRM is deployed on a VM) or the VRM node itself (when VRM is deployed on a physical server) as the clock source.
If the external clock source is w32time, configure the NTP clock source by referring to How Do I Configure Time
Synchronization Between the System and an NTP Server of the w32time Type? .
If the external clock source is a Linux time server, set a host or VRM node (deployed on a physical server) as the internal clock
source, and configure the internal clock source to synchronize its time with the external clock source. For details, see How Do I
Configure Time Synchronization Between the System and a Host or VRM Node (NTP Server) When an External Linux Clock
Source Is Used? .
Configuring the system time zone affects the generation time recorded in exported alarms. However, the time displayed on FusionCompute still follows the time zone set for the browser.
Prerequisites
Conditions
If multiple NTP servers are deployed, all the NTP servers use the same upper-layer clock source so that the system time on
all NTP servers is the same.
If the NTP server domain name is to be used, ensure that a DNS is available. For details, see Configuring the DNS Server .
Data
Procedure
NTP Server: specifies the IP address or domain name of the NTP server. You can enter one to three IP addresses or
domain names of the NTP servers. If you enter a domain name for the configuration, ensure that a DNS is available.
If no external NTP server is deployed, configure this parameter based on the following deployment scenarios:
VRM node in virtualization deployment: Set this parameter to the management IP address of the host
accommodating the active VRM node.
VRM node in physical deployment: Set this parameter to the management IP address of the active VRM node.
If no external NTP server is deployed, set the system time on the node that serves as the NTP server first. For details, see How Do
I Manually Change the System Time on a Node?
4. Click Save.
A dialog box is displayed.
5. Click OK.
The time zone and NTP clock source are configured.
The configuration takes effect only after the FusionCompute service processes restart, which results in temporary service interruption, and the
antivirus service may be abnormal. Proceed with the subsequent operation only after the service processes restart.
3.4.3 Appendix
FAQ
Common Operations
3.4.3.1 FAQ
How Do I Handle the Issue that the Mozilla Firefox Browser Prompts Connection Timeout During the Login to
FusionCompute?
How Do I Handle the Storage Device Detection Failure on a FusionCompute Host During VRM Installation?
How Do I Configure Time Synchronization Between the System and an NTP Server of the w32time Type?
How Do I Configure Time Synchronization Between the System and a Host or VRM Node (NTP Server) When an External
Linux Clock Source Is Used?
How Do I Handle the Issue that VRM Services Become Abnormal Because the DNS Is Unavailable?
What Can I Do If an Error Message Is Displayed Indicating That the Sales Unit HCore Is Not Supported When I Import
Licenses on FusionCompute?
3.4.3.1.1 How Do I Handle the Issue that the Mozilla Firefox Browser
Prompts Connection Timeout During the Login to FusionCompute?
Symptom
FusionCompute is reinstalled multiple times and the Mozilla Firefox browser is used to log in to the management page. As a
result, too many certificates are loaded to the Mozilla Firefox browser. When FusionCompute is installed again and the Mozilla
Firefox browser is used to log in to the management page, the certificate cannot be loaded. As a result, the login fails and the
browser prompts connection timeout.
Possible Causes
FusionCompute is reinstalled repeatedly.
Procedure
1. Click the menu button in the upper right corner of the browser and choose Options.
2. In the Network Proxy area of the General page, click Settings (E).
The Connection Settings dialog box is displayed.
If yes, go to 5.
If no, go to 4.
6. In the Security area, click View Certificates under the Certificate module.
The Certificate Manager dialog box is displayed.
7. On the Servers and Authorities tab pages, delete certificates that conflict with those used by the current management
interface.
The certificates to be deleted are those that use the same IP address as the VRM node. For example, if the IP address of the
VRM node is 192.168.62.27:
On the Servers tab page, delete the certificates of servers whose IP addresses are 192.168.62.27:XXX.
Scenarios
In the following scenarios, the host may fail to scan storage resources on the corresponding storage devices:
During the installation of hosts using the x86 architecture, the size of the swap partition is 30 GB by default. If you select auto to automatically configure the swap partition size, the swap partition size is proportional to the memory size. On a host with a large memory, the swap partition may occupy so much storage space that the system disk has no space left, and no other disks are available except the system disk.
The local disk of the host has residual partition information. In this case, you need to manually clear the residual information
on the storage devices.
Prerequisites
Conditions
You have obtained the IP address for logging in to the host.
Data
Data preparation is not required for this operation.
Procedure
1. Use PuTTY to log in to the host.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key
pair to authenticate the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication
Mode? .
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
For a host using the x86 architecture, determine the number of disks based on the value in the NAME column in the
command output.
If the host has only one disk, install the host again, manually specify the swap partition size, and install
VRM again.
The swap partition size must be greater than or equal to 30 GB. If the disk space is insufficient for host installation and VRM installation, replace the disk with a larger one.
If the host has other disks except the system disk and VRM can be created on other disks, go to 5.
Do not clear the partitions on the system disk when clearing the disk partitions on the host. Otherwise, the host becomes unavailable unless you
reinstall an OS on the host.
/dev/sda is the default system disk on a host. However, the system may select another disk as the system disk, or a user may specify a system
disk during the host installation. Therefore, distinguish between the system disk and user disks when deleting host disk partitions.
5. Run the following command to query the name of the existing disk on the host:
fdisk -l
6. In the command output, locate the Device column that contains the partitioned disks, and make a note of the disk names.
Information similar to the following is displayed:
...
Partition /dev/sdb1 of the disk is displayed in the Device column, and you need to make a note of the disk name /dev/sdb.
If the disk has only one partition, the partition will be automatically deleted, and information similar to the following will
be displayed:
Selected partition 1
Run the d command to automatically delete the unique partition, and then go to 10.
If yes, go to 12.
If no, go to 11.
12. Enter w to save the configuration and exit the fdisk mode.
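A typical interactive sequence for clearing a single-partition user disk looks like the following (illustrative; /dev/sdb stands for the disk name noted in 6, and the system disk must never be selected):
fdisk /dev/sdb
Command (m for help): d
Selected partition 1
Command (m for help): w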
Scenarios
An IP SAN initiator is required for IP SAN storage devices to map hosts and storage devices using the world wide name (WWN)
generated after the storage devices are associated with hosts.
OceanStor 5500 V3 is used as an example in this section. For more details, see the documentation delivered with the storage
device.
Prerequisites
Conditions
You have logged in to the storage management system, and the storage devices have been detected.
You have configured the logical host (group) and LUNs on the storage management system of the SAN storage device,
including creating a logical host (group), dividing LUNs, and configuring the mapping between LUNs and the logical host
(group).
For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download
the document for the desired version.
For carrier users: Visit https://support.huawei.com , search for the document by name, and download the document
for the desired version.
Data
Data preparation is not required for this operation.
Procedure
1. Check whether the storage resource has been associated with the host.
If yes, go to 3.
If no, go to 2.
2. Create an initiator.
For details, see "Creating an Initiator" in OceanStor 5500 V3 Product Documentation.
Scenarios
An FC SAN initiator is required for FC SAN devices to map hosts and storage devices using the world wide name (WWN)
generated after the storage devices are associated with hosts. This section describes how to obtain the WWN of the host and
configure the FC SAN initiator.
OceanStor 5500 V3 is used as an example in this section. For more details, see the documentation delivered with the storage
device.
Prerequisites
Conditions
You have configured the logical host (group) and LUNs on the storage management system of the SAN storage device,
including creating a logical host (group), dividing LUNs, and configuring the mapping between LUNs and the logical host
(group).
For enterprise users: Visit https://support.huawei.com/enterprise , search for the document by name, and download
the document for the desired version.
For carrier users: Visit https://support.huawei.com , search for the document by name, and download the document
for the desired version.
Data
Data preparation is not required for this operation.
Procedure
5. Click Scan.
6. Click Recent Tasks in the lower left corner. In the expanded task list, verify that the scan operation is successful.
Scenarios
If the clock source is an NTP server of the w32time type, configure a host, or a VRM node deployed on a physical server, to synchronize time with the clock source, and then set this host or VRM node as the system clock source. This type of clock source is called the internal clock source. Configure time synchronization between the system and the internal clock source.
Prerequisites
Conditions
You have obtained the IP address or domain name of the NTP server of the w32time type of the Windows OS.
If the NTP server domain name is to be used, ensure that a domain name server (DNS) is available. For details, see
Configuring the DNS Server .
You have obtained the password of user root and the management IP address of the host or VRM node that is to be
configured as the internal clock source.
Procedure
Configure time synchronization between a host or VRM node and a w32time-type NTP server.
1. Use PuTTY to log in to the host or VRM node to be set as the internal clock source.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key
pair to authenticate the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication
Mode? .
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
4. Run the following command to synchronize time between the host or VRM node and the NTP server:
service ntpd stop;/usr/sbin/ntpdate NTPServer && /sbin/hwclock -w -u > /dev/null 2>&1; service ntpd start
You can set NTPServer to the NTP server IP address or domain name. If you enter a domain name for the configuration,
ensure that a DNS is available.
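For example, assuming the NTP server is reachable at 192.168.40.5 (an illustrative address), the command would be:
service ntpd stop;/usr/sbin/ntpdate 192.168.40.5 && /sbin/hwclock -w -u > /dev/null 2>&1; service ntpd start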
If the command output contains the following information, run this command again:
5. Run the following commands to set the time synchronization interval to 20 minutes:
sed -i -e '/ntpdate/d' /etc/crontab
echo "*/20 * * * * root service ntpd stop;/usr/sbin/ntpdate NTPServer > /dev/null 2>&1 && /sbin/hwclock -w -u >
/dev/null 2>&1;service ntpd start" >>/etc/crontab
You can set NTPServer to the NTP server IP address or domain name. If you enter a domain name for the configuration,
ensure that a DNS is available.
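To confirm that the entry has been appended, you can run the following check (illustrative):
grep ntpdate /etc/crontab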
6. Run the following command to restart the service for the configuration to take effect:
service crond restart
The configuration is successful if information similar to the following is displayed:
7. Run the following command to configure the host or VRM node as the internal clock source:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip Management IP address of the host or VRM node that is to be
configured as the internal clock source -cycle 6 -timezone Local time zone -force true
In the preceding command, the value of Local time zone is in Continent/Region format and must be the time zone used by
the external clock source.
For example, Local time zone is set to Asia/Beijing.
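Assuming the internal clock source has the management IP address 192.168.40.5 (an illustrative address) and the local time zone is Asia/Beijing, the full command would be:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip 192.168.40.5 -cycle 6 -timezone Asia/Beijing -force true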
If the command output contains the following information, the configuration is successful:
If a host is set as an internal clock source, the configuration causes the restart of the service processes on the host. If more than 40 VMs
run on the host, the service process restart will take a long time, triggering VM fault recovery tasks. However, the VMs will not be
migrated to another host. After the service processes restart, the fault recovery tasks will be automatically canceled.
8. Run the following command to check whether the synchronization status is normal:
ntpq -p
Information similar to the following is displayed:
==============================================================================
If the remote column contains * 6 to 10 minutes after you run the command, the synchronization status is normal.
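The output follows the standard ntpq format; a normal state looks similar to the following (values are illustrative):
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*192.168.40.5    LOCAL(0)        6 u   35   64  377    0.210    0.052   0.008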
10. Choose System Management > System Configuration > Time Management.
The Time Management page is displayed.
NTP Server: Set it to the management IP address of the host or VRM node that has been configured as the
internal clock source.
If a VRM node is set as the internal clock source and the VRM nodes are deployed in active/standby mode, NTP server must be set to
the management IP address of the active VRM node instead of the floating IP address of the VRM nodes.
The configuration takes effect only after the FusionCompute service processes restart, which results in temporary service interruption, and
abnormal antivirus service. Proceed with the subsequent operation only after the service processes restart.
Scenarios
In the scenario where the VRM node is deployed on a physical server, you are advised to connect two NICs to the management plane and bind them to improve network reliability for the VRM node. NIC binding on the management plane is recommended for both the standalone and the active/standby VRM deployment modes. To bind other NICs, perform the operations provided in this section.
Comply with the following principles when binding NICs for the VRM node:
When VRM nodes are deployed in active/standby mode, the NIC binding status on the active and standby VRM nodes must be the same. That is, the active and standby VRM nodes bind the same NICs (such as eth0 and eth1), or neither node binds NICs. The names of the bound NICs must also be the same.
Prerequisites
Conditions
You have obtained the password for the root user of the VRM node.
Data
You have obtained the names of the bound NICs, for example, bond0.
When you configure the active/standby relationship on FusionCompute, the bound NICs to be selected are named bond0, bond1, and bond2.
Therefore, you are advised to name the bound NICs bond0, bond1, and bond2, respectively when you configure NIC binding, to ease
identification.
Procedure
Log in to the VRM node.
1. Open Internet Explorer on a local PC, enter the following IP address in the address bar, and press Enter:
https://VRM server BMC IP address
If you cannot log in to the BMC system of a single blade server, you are advised to log in to the SMM of the blade server and open the
remote control window of the server.
5. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
6. Check whether the VRM management plane has a VLAN tag during VRM installation.
If yes, go to 7.
If no, go to 10.
7. Run the following command to switch to the screen for changing the VRM management plane VLAN:
sh /opt/galax/root/vrm/tomcat/script/nrm/updateVRMVlan.sh
Information similar to the following is displayed:
#####################/opt/galax/root/vrm/tomcat/script/nrm/updateVRMVlan START#####################
3.Obtain information about the management port group of the VRM node.
4.Update information about the management port group of the VRM node.
10. Run the following command to switch to the directory containing the script for NIC binding:
cd /opt/galax/root/vrm/tomcat/script/nrm/
11. Run the following command to bind the NICs for the VRM node:
sh bondNic.sh "Bound NIC name" "NIC 1 to be bound NIC 2 to be bound" -NIC type
The descriptions of the parameters in this command are as follows:
Bound NIC name: When you configure the active/standby relationship on FusionCompute, the bound NICs to be
selected are named bond0, bond1, and bond2. Therefore, you are advised to name the bound NICs bond0, bond1,
and bond2, respectively when you configure NIC binding, to ease identification.
NIC 1 to be bound and NIC 2 to be bound: To bind NICs on the management plane, enter the management NIC
configured during VRM installation for NIC 1 to be bound.
NIC type: To bind NICs on the management plane, enter m. To bind other NICs, enter o.
For example, to bind eth0 and eth1 on the management plane, run the following command:
sh bondNic.sh "bond0" "eth0 eth1" -m
The command is executed successfully if information similar to the following is displayed. In this case, you can close
the remote control window.
Running the sh bondNic.sh "bond0" "eth0 eth1" -m command causes the HA service to restart. Therefore, after executing this
command, wait for 3 minutes and then perform other operations.
12. Check whether the VRM management plane requires a VLAN tag.
If yes, go to 13.
13. Run the following command to switch to the screen for changing the VRM management plane VLAN:
sh /opt/galax/root/vrm/tomcat/script/nrm/updateVRMVlan.sh
Information similar to the following is displayed:
#####################/opt/galax/root/vrm/tomcat/script/nrm/updateVRMVlan START#####################
3.Obtain information about the management port group of the VRM node.
4.Update information about the management port group of the VRM node.
15. Enter the ID of the new VLAN, for example, enter 2000.
If no information is displayed, the command is executed successfully. You can enter 6 again to check the current VLAN
device.
Verification
After you bind the NICs, run the following command to query the NIC binding status:
cat /proc/net/bonding/Bound NIC name
For example, run the following command to check the binding status of bond0:
cat /proc/net/bonding/bond0
Information similar to the following is displayed:
...
MII Status: up
Up Delay (ms): 0
MII Status: up
MII Status: up
The information about the NICs that are bound and the current NIC in use is displayed in the command output.
In the command output, if the NIC information is abnormal and the VRM management network is disconnected, perform the
following operations:
Run the following command to restart the network service:
service network restart
This command is a high-risk command. Run this command only when absolutely necessary.
After the command is executed successfully, check whether the NIC information and VRM management network are normal.
Additional Information
Related Tasks
Unbind NICs.
To unbind NICs, perform the following operations:
During NIC unbinding, if the VRM management plane has a VLAN tag, delete the original VLAN by performing 7 to 9 and then configure a
new VLAN for the management plane by performing 13 to 15.
1. Switch to the directory containing the script for NIC binding. For details, see 10.
unbond success
Running the sh unbondNic.sh "bond0" -m command causes the HA service to restart. Therefore, after executing this command, wait for
3 minutes and then perform other operations.
If the unbinding fails, rectify the fault by referring to the operations for rectifying a binding failure.
Scenarios
If an external Linux clock source is used, manually configure a host, or a VRM node deployed on a physical server, to synchronize time with the external clock source. Then set the host or VRM node as the system clock source, which is also called the internal clock source. Configure time synchronization between the system and the internal clock source.
Prerequisites
Conditions
You have obtained the IP address or domain name of the external clock source.
If the NTP server domain name is to be used, ensure that a DNS is available. For details, see Configuring the DNS Server .
You have obtained the password of user root and the management IP address of the host or VRM node that is to be
configured as the internal clock source.
Procedure
Configure time synchronization between the system and the host or VRM node functioning as the internal clock source.
1. Use PuTTY to log in to the host or VRM node to be set as the internal clock source.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key
pair to authenticate the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication
Mode? .
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
4. Manually set the time on the host or VRM node to be consistent with that on the external clock source. For details, see
How Do I Manually Change the System Time on a Node?
NTP Server: Set it to the management IP address of the host or VRM node that has been configured as the
internal clock source.
If a VRM node is set as the internal clock source and the VRM nodes are deployed in active/standby mode, NTP Server must be set to
the management IP address of the active VRM node instead of the floating IP address of the VRM nodes.
8. Click Save.
A dialog box is displayed.
9. Click OK.
The time zone and NTP clock source are configured.
The configuration takes effect only after the FusionCompute service processes restart, which results in temporary service interruption,
and abnormal antivirus service. Proceed with the subsequent operation only after the service processes restart.
Configure time synchronization between the host or VRM node and the external Linux clock source.
If the internal clock source is a host, configure time synchronization between the host and the external clock
source as instructed in Setting Time Synchronization on a Host .
If the internal clock source is a VRM node deployed on a physical server, go to 11.
11. Switch back to the VRM node that is configured as the internal clock source and run the following command to
synchronize time between the VRM node and the external clock source:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip External clock source IP address or domain name -cycle 6 -
force true
Scenarios
During the host installation process, some parameters are incorrectly configured. As a result, the host cannot be added to
FusionCompute. In this case, you can run the hostconfig command to reconfigure the host parameters.
The following parameters can be reconfigured:
Host name
VLAN
Prerequisites
The OS has been installed on the host.
You have obtained the IP address, username, and password for logging in to the BMC system of the host.
You have obtained the password of user root for logging in to the host.
The host is not added to the site or cluster that has FusionCompute installed.
Procedure
Log in to the host.
1. Open the browser on the local PC, enter the following IP address in the address bar, and press Enter:
If you cannot log in to the BMC system of a single blade server (in the x86 architecture), you are advised to log in to the SMM of the
blade server and open the remote control window of the server.
5. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
6. Run the following command to enter Main Installation Window, as shown in Figure 1:
hostconfig
7. Choose Network > eth0 to enter the IP Configuration for eth0 screen, as shown in Figure 2.
Configure only one management NIC for a host. If you configure IP addresses for other NICs, network communication may fail.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical keypad
on the right.
10. Enter the gateway address of the host management plane in Default Gateway, as shown in Figure 3.
After the network configuration is complete, you can set the gateway address of the management plane and IP addresses of other planes
in Test Network to check whether the newly configured IP addresses are available.
12. Select Hostname. The Hostname Configuration screen is displayed, as shown in Figure 4.
13. Delete existing information, enter the new host name, and select OK.
15. Select VLAN. The VLAN Configuration screen is displayed, as shown in Figure 5.
16. Configure the VLAN ID, IP address, and subnet mask and select OK to complete VLAN configuration.
During configuration, use the number keys on your main keyboard as the program may not recognize input from the numerical keypad
on the right.
17. Select VLAN. The VLAN Configuration screen is displayed, as shown in Figure 6.
After deleting the VLAN, switch to the Network screen to reconfigure network information.
19. After the VLAN is configured, select Network and check whether the gateway is successfully configured based on the
gateway information in the Network Information list.
Scenarios
The FusionCompute web client displays Huawei-related information, including the product name, technical support website, product documentation links, online help links, copyright information, system language, system logo (displayed in different areas on the web client), and the background images on the login page, system page, and About page, as shown in Figure 1. This section guides you to change or hide such information.
The callouts in Figure 1 mark the product name, copyright information, system name, and background image.
Prerequisites
Conditions
You have prepared the following images:
A system logo in 16 x 16 pixels, displayed in the browser address box. The image must be named favicon.ico and saved in ICO format.
A system logo in 48 x 48 pixels, displayed on the login page and About page. The image must be named huaweilogo.png and saved in PNG format.
A background image in 550 x 550 pixels. The image must be named login_enbg.png and saved in PNG format.
A system logo in 33 x 33 pixels, displayed in the upper left corner of Online Help. The image must be named huaweilogo.gif and saved in GIF format.
PuTTY is available.
WinSCP is available.
Ensure that SFTP is enabled on CNA or VRM nodes. For details, see Enabling SFTP on CNA or VRM Nodes .
Procedure
1. Use WinSCP to log in to the active VRM node.
Ensure that the management plane floating IP address and username gandalf are used for login.
5. Run the following command and enter the password of user root to switch to user root:
su - root
6. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
7. Run the following command to open and edit the configuration file:
vi /opt/galax/vrmportal/tomcat/script/portalSh/syslogo/third/systitle.conf
Figure 2 shows information in the configuration file.
The entered information can contain only letters, digits, spaces, and special characters _-,.©:/
Set title to the new content. The value is a string of 1 to 18 characters (one uppercase letter is considered as two
characters).
Set link to the new content. The value is a string of 1 to 100 characters (one uppercase letter is considered as
two characters).
Set loginProductSupportText to false (to display information) or true (to hide information).
Set headProductSupportText to false (to display information) or true (to hide information).
Set copyrightEnUs to the new content displayed when the system language is English. The value is a string of
1 to 100 characters (one uppercase letter is considered as two characters).
Set portalsysNameEnUs to the new content displayed when the system language is English. The value is a
string of 1 to 18 characters (one uppercase letter is considered as two characters).
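As a sketch, a customized systitle.conf fragment might look like the following (the key names come from the descriptions above; the values and the exact file layout are illustrative):
title=CloudPortal
link=https://support.example.com
loginProductSupportText=false
headProductSupportText=false
copyrightEnUs=Copyright 2025 Example Co., Ltd.
portalsysNameEnUs=CloudPortal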
8. Press Esc and enter :wq to save the configuration and exit the vi editor.
a. Right-click Computer and choose Properties > Advanced system settings > Environment Variables.
b. Open the CLI on the local PC and switch to the directory in which file systitle.conf is saved.
12. Run the following command to make the configuration take effect:
sh /opt/galax/root/vrmportal/tomcat/script/portalSh/syslogo/modifylogo.sh third
The configuration is successful if information similar to the following is displayed:
13. Use the browser to access the FusionCompute web client and check whether the new information is displayed, such as the system logo, product name, copyright information, and support website.
14. Disable the SFTP service. For details, see Disabling SFTP on CNA or VRM Nodes .
Additional Information
Related Tasks
Restore the default Huawei logo.
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
b. Open the CLI on the local PC and switch to the directory in which file systitle.conf is saved.
7. Use the browser to access the FusionCompute web client and check whether the default Huawei interface is displayed.
Scenarios
If no external clock source is deployed, configure the host accommodating the VRM VM or the physical server that has VRM
installed as the NTP clock source. In this case, the system time on the target host or physical server must be accurate.
Prerequisites
You have obtained the passwords of users gandalf and root of the node to be configured as the NTP clock source.
Procedure
Log in to the operating system of the node.
1. Use PuTTY to log in to the node to be set as the NTP clock source.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key
pair to authenticate the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication
Mode? .
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
4. Check whether any external NTP clock source is configured for the node.
If yes, go to 5.
If no, go to 6.
5. Run the following command to set the node as its NTP clock source:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip 127.0.0.1 -cycle 6 -timezone Local time zone -force true
For example, if the local time zone is Asia/Beijing and the node is a physical server that has VRM installed, run the
following command:
perl /opt/galax/gms/common/config/configNtp.pl -ntpip 127.0.0.1 -cycle 6 -timezone Asia/Beijing -force true
6. Run the date command to check whether the current system time is accurate.
If yes, go to 11.
If no, go to 7.
7. Run the required command to stop a corresponding process based on the node type.
8. Run the following command to rectify the system time of the node:
date -s Current time
The current time must be set in HH:MM:SS format.
For example, if the current time is 16:20:15, run the following command:
date -s 16:20:15
9. Run the following command to synchronize the new time to the basic input/output system (BIOS) clock:
/sbin/hwclock -w -u
10. Run the required command to start a corresponding process based on the node type.
If * is displayed on the left of LOCAL, the time service is running properly on the node. The node can be used as an NTP
clock source.
If * is not displayed, run the ntpq -p command again five to ten minutes later to check the time service running status.
Scenarios
Install and configure a VPN when a user cannot access the management network due to network isolation. After the VPN is
configured, the user can connect to the management network through the VPN to install and use FusionCompute. Figure 1 shows
the networking when the user accesses the management network through the VPN.
OpenVPN installation and configuration must be implemented on the server and client respectively. The OpenVPN server is
usually a VM or physical machine provided by the user, which is called server S in this example. The OpenVPN client is usually a
laptop or PC that is used by the administrator to connect to FusionCompute, which is called client C in this example.
This section describes how to install and configure VPN software, and OpenVPN software installed in a Windows operating
system (OS) is used as an example.
The installation and configuration of the OpenVPN software varies depending on versions. This section uses the OpenVPN
software in version 2.2.1 as an example.
Prerequisites
The VPN server must meet the following conditions:
A VM with two NICs has been created in an idle virtualization cluster. NIC 1 connects to the management network, and NIC 2 connects to the office network. Preferentially create the VM in a service cluster; create it in the management cluster only when the service cluster does not have sufficient resources.
Procedure
Install the OpenVPN on the server and client, respectively.
2. Double-click openvpn-2.2.1-install.exe and install the OpenVPN software as instructed to the default directory.
3. Check whether a local connection is added to the server and client, respectively.
If yes, go to 5.
If no, go to 4.
4. Switch to the C:\Program Files\OpenVPN\bin directory on the server and client, respectively. Then, run addtap.bat.
A virtual NIC is added.
9. In the server OS, choose Start > Search, enter cmd, and press Enter.
The CLI is displayed.
10. Run the following command to enter the directory containing easy-rsa:
cd "\Program Files\OpenVPN\easy-rsa"
1 file(s) copied.
1 file(s) copied.
build-ca
......................................................+*++*++
21. Configure information including the password based on the following figure.
After the configuration, the certificate for the server is generated.
C:\Program Files\OpenVPN\easy-rsa>
32. Write the generated root CA certificate into the configuration file on the server.
# Any X509 key management system can be used.
# OpenVPN can also use a PKCS #12 formatted key file
# (see "pkcs12" directive in man page).
ca ca.crt
cert server.crt
key OpenVPNServer.key # This file should be kept secret
33. Write the generated dh1024.pem file into the configuration file on the server.
# Diffie hellman parameters.
# Generate your own with:
# openssl dhparam -out dh1024.pem 1024
# Substitute 2048 for 1024 if you are using
# 2048 bit keys.
dh dh1024.pem
34. Configure the intranet IP address and subnet mask of the OpenVPN server according to the actual network environment.
# Configure server mode and supply a VPN subnet
# for OpenVPN to draw client addresses from.
# The server will take 192.168.0.1 for itself,
# the rest will be made available to clients.
# Each client will be able to reach the server
# on 192.168.0.1. Comment this line out if you are
# ethernet bridging. See the man page for more info.
server 192.168.0.0 255.255.255.0
36. Push DNS and WINS server addresses to the client according to the actual network environment.
# Certain Windows-specific network settings
# can be pushed to clients, such as DNS
# or WINS server addresses. CAVEAT:
# https://openvpn.net/faq.html#dhcpcaveats
# The addresses below refer to the public
# DNS servers provided by opendns.com.
push "dhcp-option DNS 192.168.22.243"
push "dhcp-option WINS 192.168.0.20"
keepalive 10 120
40. Write the generated attack defense file into the configuration file on the server.
The hash-based message authentication code (HMAC) firewall defends against DoS attacks. Only controlled information
with an HMAC signature can be processed.
# The server and each client must have
# a copy of this key.
# The second parameter should be '0'
# on the server and '1' on the clients.
tls-auth ta.key 0
44. Copy the configured server.ovpn file to the C:\Program Files\OpenVPN\config directory.
45. Copy files ca.crt, ca.key, OpenVPNServer.crt, OpenVPNServer.csr, OpenVPNServer.key, dh1024.pem, and ta.key
stored in the C:\Program Files\OpenVPN\easy-rsa\keys directory to the C:\Program Files\OpenVPN\config directory.
46. Double-click OpenVPN GUI. After the OpenVPN starts, click the OpenVPN icon in the lower right corner of the server desktop and select Connect.
49. Configure the device type (tun or tap) used by the client to be the same as that on the server.
...
# the firewall for the TUN/TAP interface.
;dev tap
dev tun
50. Configure the protocol used by the client to be the same as that on the server.
# Are we connecting to a TCP or
# UDP server? Use the same setting as
# on the server.
;proto tcp
proto udp
52. Configure the connection mode so that the client chooses a random host from the remote list.
# Choose a random host from the remote
# list for load-balancing. Otherwise
# try hosts in the order specified.
remote-random
57. Write the generated certificate into the configuration file on the client.
# SSL/TLS parms.
# See the server config file for more
# description. It's best to use
# a separate .crt/.key file pair
# for each client. A single ca
# file can be used for all clients.
ca ca.crt
cert OpenVPNClient.crt
key OpenVPNClient.key
58. Copy the configured client.ovpn file to the C:\Program Files\OpenVPN\config directory.
59. Copy files ca.crt, ca.key, ta.key, OpenVPNClient.crt, OpenVPNClient.csr, and OpenVPNClient.key stored in the C:\Program Files\OpenVPN\easy-rsa\keys directory to the C:\Program Files\OpenVPN\config directory.
60. Double-click OpenVPN GUI. After the OpenVPN starts, click the OpenVPN icon in the lower right corner of the client desktop and select Connect.
Symptom
When the configured DNS is invalid or faulty, the system becomes abnormal. As a result, the user cannot log in to FusionCompute to change the DNS configuration.
Possible Causes
An invalid DNS is configured.
Procedure
1. Use PuTTY to log in to one VRM node.
Ensure that the management IP address and username gandalf are used for login.
The system supports the login authentication using a password or private-public key pair. If you use a private-public key
pair to authenticate the login, see How Do I Use PuTTY to Log In to a Node in Private-Public Key Pair Authentication
Mode? .
2. Run the following command and enter the password of user root to switch to user root:
su - root
3. Run the TMOUT=XXX command to set the timeout to prevent user logout upon timeout.
6. Repeat 1 to 5 to clear the DNS configurations for the other VRM node.
7. Wait for 10 minutes and then check whether you can log in to FusionCompute successfully.
If no, go to 8.
Symptom
After FusionCompute is upgraded from a version earlier than 850 to 850 or later, or FusionCompute 850 or later is installed, an error message indicating that licenses with the sales unit HCore are not supported is displayed when such licenses are imported.
Possible Causes
The imported licenses contain licenses with the sales unit HCore.
Fault Diagnosis
None
Procedure
1. Log in to the iAuth platform using a W3 account and password.
3. On the Apply By Application page, select ESDP-Electronic Software Delivery Platform in Enter an application, enter
GTS in Enter Privilege, and click Search.
7. In the navigation pane, choose License Commissioning and Maintenance > License Split.
8. Click Add Node, enter the ESN, and click Search to search for the license information. Select the license information and
click OK.
10. After the splitting, set Product Name to FusionCompute, Version to 8, and set ESN, and click Preview License.
11. Confirm that the license splitting result meets the expectation (16 HCore converted to 1 CPU, rounded down to the nearest integer) and click Submit.
12. Confirm the information in the displayed dialog box and click OK. (The dialog box asks whether to continue license splitting, because the operation processes the annual-fee NE as a common NE: only common BOMs are changed, while the annual fee time and annual fee code remain unchanged.) Confirm the settings and click OK.
13. After the AMS manager approves the modification, the license splitting is complete.
14. Refresh the license on ESDP to obtain the split license file.
Related Information
None
Logging In to FusionCompute
Scenarios
This section guides administrators to configure the Google Chrome browser before logging in to FusionCompute for the first time.
After the configuration, you can use Google Chrome to perform operations on FusionCompute.
Related configurations, such as certificate configuration, for Google Chrome are required.
Google Chrome 115 is used as an example.
If the security certificate is not installed when the Google Chrome browser is configured, the download capability and speed for converting a
VM to a template and importing a template are limited.
Prerequisites
Conditions
Data
Data preparation is not required for this operation.
Procedure
Enter the login page.
IPv4
IPv6
If a firewall is deployed between the local PC and FusionCompute, enable port 8443 on the firewall.
The HTTPS protocol used by FusionCompute supports only TLS 1.2. If SSL 2.0, SSL 3.0, TLS 1.0, or TLS 1.1 is used, the
FusionCompute system cannot be accessed.
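One way to check the negotiated protocol from a client PC with OpenSSL installed (the address and port below are illustrative) is:
openssl s_client -connect 192.168.40.3:8443 -tls1_2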
If Google Chrome slows down after running for a period of time and no data needs to be saved, press F6 on the current page to move the
cursor to the address bar of the browser. Then, press F5 to refresh the page and increase the browser running speed.
7. Locate the row that contains the address bar of the browser, and click Settings.
Select Privacy and security.
9. Click Import.
The Certificate Import Wizard dialog box is displayed.
11. Click Browse on the line where the file name is located.
Select the exported certificate.
To use a self-signed certificate, you need to generate a root certificate, issue a level-2 certificate based on the root certificate, use the
level-2 certificate as the web certificate, and import the root certificate to the certificate management page of the browser.
13. Select Place all certificates in the following store and click Browse.
The Select Certificate Store dialog box is displayed.
19. Locate the row that contains the address bar of the browser and select More tools.
Click Clear browsing data.
The Clear browsing data dialog box is displayed.
Browsing history
23. In the address box of the browser, repeat 2 to access the login page. You can see that Not secure is no longer displayed in
non-Chinese cryptographic algorithm scenarios.
Scenarios
This section guides administrators to set the Mozilla Firefox browser before logging in to FusionCompute the first time so that
they can use a Mozilla Firefox browser to perform operations normally on FusionCompute.
Prerequisites
Conditions
You have obtained the floating IP address of the VRM management nodes.
Data
Data preparation is not required for this operation.
Procedure
1. Open Mozilla Firefox.
IPv4
IPv6
If a firewall is deployed between the local PC and FusionCompute, enable port 8443 on the firewall.
4. Verify that Permanently store this exception is selected and click Confirm Security Exception.
Scenarios
This section guides administrators to log in to FusionCompute to manage virtual, service, and user resources in a centralized
manner.
Prerequisites
Conditions
You have configured the Google Chrome or Mozilla Firefox browser. For details, see Setting Google Chrome (Applicable to Self-Signed Certificates) or Setting Mozilla Firefox.
The screen resolution is set to 1280 x 1024 or higher based on the service requirements to ensure the optimal display effect on FusionCompute.
If the security certificate was not installed when Google Chrome was configured, the browser may display a message indicating that the web page cannot be displayed upon the first login to FusionCompute or to a VM using VNC. In this case, press F5 to refresh the page.
The system supports the following browsers:
Google Chrome 118, Google Chrome 119, and Google Chrome 120
Mozilla Firefox 118, Mozilla Firefox 119, and Mozilla Firefox 120
Microsoft Edge 118, Microsoft Edge 119, and Microsoft Edge 120
Data
Table 1 describes the data required for performing this operation.

Table 1 Data required for logging in to FusionCompute

Parameter: IP address of the VRM node
Description: Specifies the floating IP address of the VRM nodes if the VRM nodes are deployed in active/standby mode, or the management IP address of the VRM node if only one VRM node is deployed.
Example value: 192.168.40.3

Parameter: Username/Password
Description: Specifies the username and password used for logging in to FusionCompute.
Example value:
Common mode:
Username: admin
Password: Set during the installation (tool-based VRM installation), or when executing the initialization script after the installation is complete (manual VRM installation using an ISO image).
Role-based mode:
System administrator username: sysadmin
Security administrator username: secadmin
Security auditor username: secauditor
Password: For each of these users, set during the installation (tool-based VRM installation), or when executing the initialization script after the installation is complete (manual VRM installation using an ISO image).

Parameter: User type
Description: Specifies the type of the user to log in to the system.
Local user: Log in to the system using a local username and password.
Domain user: Log in to the system using a domain username and password.
Example value: Local user
Procedure
1. Open the browser.
2. In the address box of the browser, enter the FusionCompute access address, which is the IPv4 or IPv6 address of the VRM node, and press Enter.
If two VRM nodes are deployed in active/standby mode, the IP address is the floating IP address of the VRM nodes. If only one VRM node is deployed, the IP address is the management IP address of the VRM node.
When accessing the IP address, the system automatically converts the IP address into the HTTPS address to improve access
security.
If a firewall is deployed between the local PC and FusionCompute, enable port 8443 on the firewall.
3. Set Username and Password, select User type, and click Login. If you attempt to log in again after a failed login attempt, you also need to set Verification code.
Enter the username and password based on the permission management mode configured during VRM installation.
Role-based mode: The login username of the system administrator is sysadmin, the login username of the security administrator is secadmin, and the login username of the security auditor is secauditor.
If this is your first login using the administrator username, the system asks you to change the password of the admin user. The password must meet the following requirements (see the validation sketch after this list):
The password contains 8 to 32 characters.
The password must contain at least one space or one of the following special characters: `~!@#$%^&*()-_=+\|[{}];:'",<.>/?
The password must contain at least two types of the following characters:
Uppercase letters
Lowercase letters
Digits
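The following minimal sketch (an illustration only, not FusionCompute's actual validator) checks a candidate password against the rules above before you submit the change.

```python
import string

# Space plus the special characters listed in the requirements above.
SPECIALS = set("`~!@#$%^&*()-_=+\\|[{}];:'\",<.>/? ")

def meets_policy(password: str) -> bool:
    """Return True if the password satisfies all three rules in this section."""
    if not 8 <= len(password) <= 32:
        return False
    if not any(ch in SPECIALS for ch in password):
        return False
    # At least two of: uppercase letters, lowercase letters, digits.
    classes = sum([
        any(ch in string.ascii_uppercase for ch in password),
        any(ch in string.ascii_lowercase for ch in password),
        any(ch in string.digits for ch in password),
    ])
    return classes >= 2

print(meets_policy("Fusion@2024"))  # True: 11 chars, '@', upper/lower/digits
print(meets_policy("fusion2024"))   # False: no space or special character
```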
The FusionCompute management page is displayed after you log in to the system.
The user is automatically logged out of the FusionCompute management system in certain circumstances, for example, when the login session times out.
After you log in to FusionCompute, you can learn the product functions from the online help, product tutorial, and alarm help.