Cisco HyperFlex HX220 M6 Edge Spec Sheet
OVERVIEW.......................................................................................5
DETAILED VIEWS...............................................................................7
Chassis Front View - HXAF-E-220M6S (All Flash)..........................................................7
Chassis Front View - HX-E-220M6S (Hybrid)...............................................................8
Detailed Chassis Rear Views.................................................................................9
One Half-Height Riser................................................................................10
Three Half-Height Risers............................................................................11
Two Full-Height Risers...............................................................................12
BASE SERVER NODE STANDARD CAPABILITIES and FEATURES.......................13
CONFIGURING the SERVER..................................................................17
STEP 1 VERIFY SERVER SKU..............................................................................18
STEP 2 SELECT RISERS....................................................................................19
STEP 3 SELECT CPU(s)....................................................................................20
STEP 4 SELECT MEMORY..................................................................................24
Memory Configurations, Features.........................................................................26
STEP 5 SELECT DRIVE CONTROLLERS...................................................................29
Cisco 12G SAS HBA........................................................................................... 29
STEP 6 SELECT DRIVES...................................................................................30
STEP 7 SELECT OPTION CARD(s)........................................................................34
STEP 8 ORDER OPTIONAL PCIe OPTION CARD ACCESSORIES........................................36
STEP 9 SELECT HYPERFLEX EDGE NETWORK TOPOLOGY............................................39
STEP 10 ORDER GPU CARDS (OPTIONAL)..............................................................41
STEP 11 ORDER POWER SUPPLY.........................................................................42
STEP 12 SELECT INPUT POWER CORD(s)...............................................................43
STEP 13 ORDER TOOL-LESS RAIL KIT AND OPTIONAL REVERSIBLE CABLE MANAGEMENT ARM.....47
STEP 14 ORDER SECURITY DEVICES (OPTIONAL).............................................................48
STEP 15 SELECT LOCKING SECURITY BEZEL (OPTIONAL)............................................49
STEP 16 SELECT HYPERVISOR / HOST OPERATING SYSTEM..........................................50
STEP 17 SELECT HYPERFLEX DATA PLATFORM (HXDP) SOFTWARE.................................52
STEP 18 CISCO INTERSIGHT..............................................................................53
STEP 19 SELECT INSTALLATION SERVICE...............................................................54
STEP 20 SELECT SERVICE and SUPPORT LEVEL........................................................55
SUPPLEMENTAL MATERIAL.................................................................63
Supported Network Topologies for HyperFlex Edge 2 Node Deployments...........................63
10 or 25 Gigabit Ethernet Dual Switch Topology.................................................64
10 or 25 Gigabit Ethernet Single Switch Topology...............................................65
1 Gigabit Ethernet Dual Switch Topology.........................................................66
1 Gigabit Ethernet Single Switch Topology.......................................................67
NIC Based 10 or 25 Gigabit Ethernet Dual Switch Topology (quad port).....................68
NIC Based 10 or 25 Gigabit Ethernet Dual Switch Topology (dual port)......................69
Supported Network Topologies for HyperFlex Edge 3 and 4 Node Deployments...................70
10 or 25 Gigabit Ethernet Dual Switch Topology.................................................71
10 or 25 Gigabit Ethernet Single Switch Topology...............................................72
1 Gigabit Ethernet Dual Switch Topology.........................................................73
1 Gigabit Ethernet Single Switch Topology.......................................................74
NIC Based 10 or 25 Gigabit Ethernet Dual Switch Topology (quad port).....................75
NIC Based 10 or 25 Gigabit Ethernet Dual Switch Topology (dual port)......................76
Chassis......................................................................................................... 77
Risers.......................................................................................................... 79
OVERVIEW
Cisco HyperFlex Edge Systems are optimized for remote sites, branch offices, and edge environments. As a
smaller form factor of Cisco HyperFlex, Cisco HyperFlex Edge keeps the full power of a next-generation
hyperconverged platform even without connecting to Cisco UCS Fabric Interconnects. Cisco HyperFlex
Edge Systems support a variable configuration of 2, 3, or 4 HX converged nodes and support scale-up
of CPU, memory, and storage capacity (hot-add of additional capacity drives).
NOTE: HX220 M6 Edge requires Cisco Intersight for cluster deployment and ongoing management.
HyperFlex Edge operates using existing top of rack 1GE or 10/25GE switching with options for both single
and dual switch configurations. Edge clusters are configured with replication factor 2 (RF2) to ensure
availability during various failure scenarios. HyperFlex Edge is typically deployed in environments with a
minimal infrastructure footprint, hence the use of UCS compute-only nodes is not supported.
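As a rough illustration of how RF2 affects sizing, the sketch below (Python, with hypothetical node and drive counts; it ignores HX Data Platform metadata, deduplication, and compression) shows raw capacity being roughly halved:

# Hedged sizing sketch: with replication factor 2 (RF2), two copies of all data
# are kept, so usable capacity is roughly half of raw capacity before
# HX Data Platform overheads.
def usable_capacity_tb(nodes, capacity_drives_per_node, drive_tb, replication_factor=2):
    raw_tb = nodes * capacity_drives_per_node * drive_tb
    return raw_tb / replication_factor

# Hypothetical example: 3-node Edge cluster, 6 x 3.8 TB capacity drives per node
print(usable_capacity_tb(3, 6, 3.8))  # 34.2 TB (approximate, before overheads)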
The Cisco HyperFlex HX220 M6 Edge All Flash/Hybrid Server Nodes extend the capabilities of Cisco's
HyperFlex portfolio in a 1U form factor with the addition of 3rd Gen Intel® Xeon® Scalable Processors
(Ice Lake) and 16 DIMM slots per CPU for 3200-MHz DDR4 DIMMs, with DIMM capacity points up to 128 GB.
The maximum memory capacity for 2 CPUs with 128 GB DIMMs is 4096 GB (2 CPUs x 16 DIMMs x 128 GB).
The HX220 M6 Edge All Flash/Hybrid Server Nodes have two LOM ports (10GBASE-T LOM) and a single 1 GbE
management port. A modular LAN on motherboard (mLOM) module provides up to two 100 GbE ports. A
connector on the front of the chassis provides KVM functionality.
See Figure 1 on page 5 for front and rear views of the HX220 M6 Edge All Flash/Hybrid Server Nodes.
HX-E-220M6S (Hybrid)
10 front drives are SAS/SATA HDDs and SSDs
Front View (see Figure 3 on page 8 for details)
Rear View (one half-height riser version) (see Figure 4 on page 10 for details)
Rear View (three half-height riser version) (see Figure 5 on page 11 for details)
Rear View (two full-height riser version) (see Figure 6 on page 12 for details)
DETAILED VIEWS
Chassis Front View - HXAF-E-220M6S (All Flash)
Figure 2 shows the front view of the Cisco HyperFlex HXAF-E-220M6S (All Flash) Server Node.
14 Unit Identification button/LED
20 KVM connector (used with KVM cable that provides two USB 2.0, one VGA, and one serial connector)
Figure 5 shows the details of the rear panel for the HX220 M6 Edge All Flash/Hybrid Server Nodes with
three rear half-height PCIe risers.
Figure 6 shows the details of the rear panel for the HX220 M6 Edge All Flash/Hybrid Server Nodes with two
rear full-height PCIe risers.
NOTE: By default, 1-CPU servers come with only one half-height riser 1 installed.
2-CPU servers support all three half-height risers.
Figure 4 Chassis Rear View (one half-height, 3/4 length PCIe riser)
Figure 5 Chassis Rear View (three half-height, 3/4 length PCIe risers)
NOTE: 1-CPU servers support only full-height riser 1 while 2-CPU servers support
both full-height risers.
3 Power supplies (two, redundant as 1+1)
4 Modular LAN on motherboard (mLOM) slot
8 1 GbE dedicated Ethernet management port
9-10 Dual 1/10 GbE Ethernet ports (LAN1, LAN2); LAN1 is the left connector, LAN2 is the right connector
HX-E-220M6S (Hybrid):
■ Up to 10 SFF SAS/SATA hard drives (HDDs) and SAS/SATA solid state
drives (SSDs). The 10 drives are used as follows:
• Three to eight SAS HDDs (for capacity)
• One SAS/SATA SSD (for caching)
Other storage:
■ A mini-storage module connector on the motherboard supports a
boot-optimized RAID controller carrier that holds up to two SATA M.2 SSDs.
Mixing different capacity SATA M.2 SSDs is not supported.
Integrated management processor: Baseboard Management Controller (BMC) running Cisco Integrated
Management Controller (CIMC) firmware.
Depending on your CIMC settings, the CIMC can be accessed through the
1GE dedicated management port, the 1GE/10GE LOM ports, or a Cisco
virtual interface card (VIC).
CIMC manages certain components within the server, such as the Cisco 12G
SAS HBA.
Storage controllers:
■ Cisco 12G SAS HBA
• No RAID support
• JBOD/Pass-through Mode support
• Supports up to 10 SAS/SATA internal drives
Modular LAN on Motherboard (mLOM): The dedicated mLOM slot on the motherboard can flexibly
accommodate the following cards:
■ Cisco Virtual Interface Cards
Notes:
1. This product may not be purchased outside of the approved bundles (must be ordered under the MLB).
The Cisco HX220 M6 Edge All Flash/Hybrid Server Nodes do not include power supplies, CPUs,
DIMMs, hard disk drives (HDDs), solid-state drives (SSDs), riser 1, riser 2, riser 3, the tool-less
rail kit, or option cards.
The Cisco HX220 M6 Edge All Flash/Hybrid Server Nodes require selection of one HyperFlex
network topology based on the top-of-rack switch configuration and network redundancy
requirements. Selecting a topology automatically adds the necessary networking adapters to
the configuration.
HyperFlex Edge clusters can be configured in 2, 3 or 4 node configurations. Single node clusters
and clusters larger than 4 nodes are not supported with HyperFlex Edge.
NOTE:
■ Refer to Cisco HyperFlex Drive Compatibility document for future
expansion and drive compatibility within the same node and HX
cluster.
■ Use the steps on the following pages to configure the server with
the components that you want to include.
NOTE:
■ If you do not order any risers, the system automatically includes
the one half-height riser shown in the table.
■ If you order PID UCSC-R2R3-C220M6, the system includes three half-height
risers (riser 1, riser 2, and riser 3).
■ If you order PID HX-GPURKIT-C220, the system includes two full-height risers
(riser 1 and riser 2).
Approved Configurations
(1) Half-height riser 1 only (controlled from CPU1). This is the default and is
automatically included.
(2) Half-height risers 1, 2, and 3 only. Risers 1 and 2 are controlled from CPU1 and Riser 3 is
controlled from CPU2.
(3) Full-height risers 1 and 2 only. Riser 1 is controlled from CPU1 and riser 2 is controlled
from CPU2.
Select CPUs

Table 5 lists the available CPUs. For each CPU, the table lists the Product ID (PID), frequency (GHz),
power (W), number of cores, cache size (MB), UPI1 links (GT/s), and highest DDR4 DIMM clock support (MHz)2.
Notes:
1. UPI = Ultra Path Interconnect.
2. If DIMMs rated at a higher or lower speed than what is shown in Table 7 on page 25 are selected for a
given CPU, the DIMMs are clocked at the lower of the CPU's supported DIMM clock and the DIMMs' rated
clock (see the sketch after these notes).
3. The maximum number of HX-CPU-I8351N CPUs is one.
4. The maximum number of HX-CPU-I6314U CPUs is one.
5. The maximum number of HX-CPU-I6312U CPUs is one.
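A minimal sketch of the clock-matching rule in note 2 (Python; the example CPU and DIMM speeds are hypothetical stand-ins for the values in Table 5 and Table 7):

# The memory bus runs at the lower of the CPU's highest supported DDR4 DIMM
# clock and the rated clock of the installed DIMMs.
def effective_dimm_clock_mhz(cpu_max_dimm_clock_mhz, dimm_rated_clock_mhz):
    return min(cpu_max_dimm_clock_mhz, dimm_rated_clock_mhz)

# Hypothetical example: a CPU limited to 2933 MHz, populated with 3200-MHz DIMMs
print(effective_dimm_clock_mhz(2933, 3200))  # 2933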
CAUTION: For systems configured with 3rd Gen Intel® Xeon® Scalable Processors (Ice Lake) operating
above 28° C [82.4° F], a fan fault or execution of workloads that make extensive use of heavy
instruction sets such as Intel® Advanced Vector Extensions 512 (Intel® AVX-512) may assert thermal
and/or performance faults, with an associated event recorded in the System Event Log (SEL).
Approved Configurations
■ Choose one or two identical CPUs listed in Table 5 Available CPUs, page 20
(1) One-CPU Configuration:
■ Choose one CPU from any one of the rows of Table 5 Available CPUs, page 20
■ For 1-CPU systems, the server is shipped by default with riser 1 only
■ HX Edge supports single socket for CPUs with 10 cores and above
(2) Two-CPU Configuration:
■ Choose two identical CPUs from any one of the rows of Table 5 Available CPUs, page 20
■ For 2-CPU systems, the server is shipped:
— With half-height risers 1, 2, and 3 by default, or
— With full-height risers 1 and 2 if you order a non-T4 GPU with more than 75 W
power dissipation
NOTE:
■ You cannot have two I8351N, two I6314U, or two I6312U CPUs in a two-CPU
configuration (a validation sketch follows this note).
■ If you configure a server with one I8351N, I6314U, or I6312U CPU, you cannot later
upgrade to a 2-CPU system with two of these CPUs.
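The CPU-count rules above (one or two identical CPUs, with HX-CPU-I8351N, HX-CPU-I6314U, and HX-CPU-I6312U limited to single-socket use) can be checked with a small sketch; the function is illustrative only and is not part of any Cisco ordering tool:

# Single-socket-only CPU PIDs, per notes 3-5 under Table 5.
SINGLE_SOCKET_ONLY = {"HX-CPU-I8351N", "HX-CPU-I6314U", "HX-CPU-I6312U"}

def cpu_selection_is_valid(cpu_pids):
    # One or two CPUs; a 2-CPU configuration must use two identical PIDs
    # and may not use a single-socket-only PID.
    if len(cpu_pids) == 1:
        return True
    if len(cpu_pids) == 2:
        return cpu_pids[0] == cpu_pids[1] and cpu_pids[0] not in SINGLE_SOCKET_ONLY
    return False

print(cpu_selection_is_valid(["HX-CPU-I8380", "HX-CPU-I8380"]))    # True
print(cpu_selection_is_valid(["HX-CPU-I6314U", "HX-CPU-I6314U"]))  # False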
Caveats
■ The selection of 1 or 2 CPUs depends on the desired server functionality. See the
following sections:
— STEP 4 SELECT MEMORY, page 24
— STEP 5 SELECT DRIVE CONTROLLERS, page 29
— STEP 6 SELECT DRIVES, page 30
— STEP 7 SELECT OPTION CARD(s), page 34
Memory is organized with eight memory channels per CPU, with up to two DIMMs per channel,
as shown in Figure 7.
Figure 7 Memory Channel and DIMM Slot Layout: each CPU (CPU 1 and CPU 2) has eight memory channels
(A through H), with two DIMM slots per channel (for example, A1 and A2 in channel A).
Select DIMMs

The available DIMMs are listed below with the Product ID (PID), PID description, voltage, and ranks per DIMM.

3200-MHz DIMMs
HX-MR-X16G1RW 16 GB RDIMM SRx4 3200 (8Gb) 1.2 V 1
HX-MR-X32G1RW 32 GB RDIMM SRx4 3200 (16Gb) 1.2 V 1
HX-MR-X32G2RW 32 GB RDIMM DRx4 3200 (8Gb) 1.2 V 2
HX-MR-X64G2RW 64 GB RDIMM DRx4 3200 (16Gb) 1.2 V 2
HX-ML-128G4RW 128 GB LRDIMM QRx4 3200 (16Gb) (non-3DS) 1.2 V 4
DIMM Blank1
UCS-DIMM-BLK UCS DIMM Blank
Notes:
1. Any empty DIMM slot must be populated with a DIMM blank to maintain proper cooling airflow.
NOTE:
■ System performance is optimized when the DIMM type and quantity are equal
for both CPUs, and when all channels are filled equally across the CPUs in the
server.
■ The selected DIMMs must all be of the same type, and the number of DIMMs must be
equal for both CPUs.
■ HyperFlex Data Platform reserves memory for each controller VM. Refer to the
<Install Guide> for reservation details.
■ The memory mirroring feature is not supported with HyperFlex nodes.
■ The server supports the following memory reliability, availability, and serviceability
(RAS) BIOS options (only one option can be chosen):
— Adaptive Double Device Data Correction (ADDDC) (default).
— Maximum performance.
■ For best performance, observe the following:
— When one DIMM is used, it must be populated in DIMM slot 1 (farthest away from
the CPU) of a given channel.
— When single- or dual-rank DIMMs are populated in two DIMMs per channel (2DPC)
configurations, always populate the higher number rank DIMM first (starting from
the farthest slot). For a 2DPC example, first populate with dual-rank DIMMs in
DIMM slot 1. Then populate single-rank DIMMs in DIMM 2 slot.
■ DIMMs for CPU 1 and CPU 2 (when populated) must always be configured identically.
■ Cisco memory from previous generation servers (DDR3 and DDR4) is not compatible with
the server.
■ Memory can be configured in any number of DIMMs as pairs, although for
optimal performance, see the following document:
HX M6 Memory Guide
Approved Configurations

DIMM placement in channels is shown in the following tables (a lookup sketch follows the first table).

#DIMMs   CPU 1 DIMM Placement in Channels (for identically ranked DIMMs)
1        (A1)
2        (A1, E1)
4        (A1, C1); (E1, G1)
6        (A1, C1); (D1, E1); (G1, H1)
8        (A1, C1); (D1, E1); (G1, H1); (B1, F1)
12       (A1, C1); (D1, E1); (G1, H1); (A2, C2); (D2, E2); (G2, H2)
16       (A1, B1); (C1, D1); (E1, F1); (G1, H1); (A2, B2); (C2, D2); (E2, F2); (G2, H2)
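The approved CPU 1 placements can be captured as a simple lookup, shown in the sketch below (Python; a convenience illustration for checking a configuration, not an official tool, and limited to the identically ranked DIMM counts in the table above):

# Approved CPU 1 DIMM slot groups for identically ranked DIMMs (from the table above).
CPU1_DIMM_PLACEMENT = {
    1:  [("A1",)],
    2:  [("A1", "E1")],
    4:  [("A1", "C1"), ("E1", "G1")],
    6:  [("A1", "C1"), ("D1", "E1"), ("G1", "H1")],
    8:  [("A1", "C1"), ("D1", "E1"), ("G1", "H1"), ("B1", "F1")],
    12: [("A1", "C1"), ("D1", "E1"), ("G1", "H1"), ("A2", "C2"), ("D2", "E2"), ("G2", "H2")],
    16: [("A1", "B1"), ("C1", "D1"), ("E1", "F1"), ("G1", "H1"),
         ("A2", "B2"), ("C2", "D2"), ("E2", "F2"), ("G2", "H2")],
}

def approved_slots(dimm_count):
    # Returns the approved slot list for a DIMM count, or None if the count is not approved.
    groups = CPU1_DIMM_PLACEMENT.get(dimm_count)
    return None if groups is None else [slot for group in groups for slot in group]

print(approved_slots(8))  # ['A1', 'C1', 'D1', 'E1', 'G1', 'H1', 'B1', 'F1']
print(approved_slots(3))  # None (not an approved count)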
Table 8 3200-MHz DIMMs Memory Speeds with Different Intel® Xeon® Ice Lake® Processors
DIMM Rules
NOTE:
■ See the detailed mixing DIMM configurations at the following link
HX M6 Memory Guide
■ SAS/SATA drives are controlled through a Cisco 12G SAS pass-through HBA.
■ PCIe drives are controlled directly from the CPUs.
Approved Configurations
■ The Cisco 12G SAS HBA supports up to 10 internal drives with JBOD support.
Product ID (PID)   PID Description   Drive Type   Capacity
Front Capacity Drive
HX-SD19T61X-EV 1.9TB 2.5 inch Enterprise Value 6G SATA SSD SATA 1.9 TB
HX-SD38T61X-EV 3.8TB 2.5 inch Enterprise Value 6G SATA SSD SATA 3.8 TB
HX-SD76T61X-EV 7.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 7.6 TB
HX-SD960G6S1X-EV 960GB 2.5 inch Enterprise Value 6G SATA SSD SATA 960 GB
(HyperFlex Release 5.0(1c) and later)
HX-SD19T6S1X-EV 1.9TB 2.5 inch Enterprise Value 6G SATA SSD SATA 1.9 TB
(HyperFlex Release 5.0(1c) and later)
HX-SD38T6S1X-EV 3.8TB 2.5 inch Enterprise Value 6G SATA SSD SATA 3.8 TB
(HyperFlex Release 5.0(1c) and later)
HX-SD76T6S1X-EV 7.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 7.6 TB
(HyperFlex Release 5.0(1c) and later)
Front Cache Drive
HX-SD800GK3X-EP 800GB 2.5in Enterprise Performance 12G SAS SSD(3X endurance) SAS 800 GB
Front System Drive
HX-SD240GM1X-EV 240GB 2.5 inch Enterprise Value 6G SATA SSD SATA 240 GB
Boot Drive
HX-M2-240GB 240GB SATA M.2 SATA 240 GB
HX-M2-HWRAID Cisco Boot optimized M.2 Raid controller
NOTE: Cisco uses solid state drives (SSDs) from a number of vendors. All solid state drives (SSDs) are subject
to physical write limits and have varying maximum usage limitation specifications set by the manufacturer.
Cisco will not replace any solid state drives (SSDs) that have exceeded any maximum usage specifications set
by Cisco or the manufacturer, as determined solely by Cisco.
Approved Configurations
NOTE:
■ A minimum of 3 capacity drives is supported for HX Edge configuration.
■ For cluster scale related information please see the product release notes
NOTE:
■ Order two identical M.2 SATA SSDs for the boot-optimized RAID controller. You
cannot mix M.2 SATA SSD capacities.
■ It is recommended that M.2 SATA SSDs be used as boot-only devices.
■ The Boot-Optimized RAID controller supports VMware, Windows, and Linux
operating systems.
■ CIMC/UCSM is supported for configuring volumes and monitoring the
controller and installed SATA M.2 drives.
■ The minimum version of Cisco IMC and Cisco UCS Manager that supports this
controller is 4.2(1) and later. The name of the controller in the software is
MSTOR-RAID.
■ The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not
supported.
■ Hot-plug replacement is not supported. The server must be powered off.
■ The boot-optimized RAID controller is supported when the server is used as a
compute node in HyperFlex configurations.
■ See Figure 21 on page 77 for the location of the module connector on the
motherboard. This connector accepts the boot-optimized RAID controller.
Caveats
■ Self Encrypting Drives (SEDs) and NVMe drives are not supported for HX Edge configurations.
Product ID (PID)   PID Description   Drive Type   Capacity
Front Capacity Drive
HX-HD12TB10K12N 1.2 TB 12G SAS 10K RPM SFF HDD SAS 1.2 TB
HX-HD18TB10K4KN 1.8TB 12G SAS 10K RPM SFF HDD (4K) SAS 1.8 TB
HX-HD24TB10K4KN 2.4 TB 12G SAS 10K RPM SFF HDD (4K) SAS 2.4 TB
Front Cache Drive
HX-SD480G63X-EP 480GB 2.5in Enterprise Performance 6G SATA SSD(3X endurance) SATA 480 GB
Front System Drive
HX-SD240GM1X-EV 240GB 2.5 inch Enterprise Value 6G SATA SSD SATA 240 GB
Boot Drive
HX-M2-240GB 240GB SATA M.2 SATA 240 GB
HX-M2-HWRAID Cisco Boot optimized M.2 Raid controller
NOTE: Cisco uses solid state drives (SSDs) from a number of vendors. All solid state drives (SSDs) are subject
to physical write limits and have varying maximum usage limitation specifications set by the manufacturer.
Cisco will not replace any solid state drives (SSDs) that have exceeded any maximum usage specifications set
by Cisco or the manufacturer, as determined solely by Cisco.
Approved Configurations
NOTE:
■ A minimum of 3 capacity drives is supported for HX Edge configuration.
■ For cluster scale related information please see the product release notes
NOTE:
■ Order two identical M.2 SATA SSDs for the boot-optimized RAID controller. You
cannot mix M.2 SATA SSD capacities.
■ It is recommended that M.2 SATA SSDs be used as boot-only devices.
■ The Boot-Optimized RAID controller supports VMware, Windows, and Linux
operating systems.
■ CIMC/UCSM is supported for configuring volumes and monitoring the
controller and installed SATA M.2 drives.
■ The minimum version of Cisco IMC and Cisco UCS Manager that supports this
controller is 4.2(1) and later. The name of the controller in the software is
MSTOR-RAID
■ The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not
supported.
■ Hot-plug replacement is not supported. The server must be powered off.
■ The boot-optimized RAID controller is supported when the server is used as a
compute node in HyperFlex configurations.
■ See Figure 21 on page 77 for the location of the module connector on the
motherboard. This connector accepts the boot-optimized RAID controller.
Caveats
■ Self Encrypting Drives (SEDs) and NVMe drives are not supported for HX Edge configurations.
NOTE: Use of a 10GE PCIe card is not allowed with 6300 Series Fabric Interconnects (FIs).
Approved Configurations
■ You can select up to one of the PCIe option cards listed in Table 12 to be installed in Riser 1.
Riser 1 is controlled by CPU 1. Risers 2 and 3 are not installed in a 1-CPU system.
■ One additional PCIe card may be added with HX-E-TOPO4 or HX-E-TOPO3 selections.
■ No additional PCIe cards may be added with HX-E-TOPO2 selection.
■ You can select up to two of the PCIe option cards listed in Table 12 for a two-riser
system (Riser 1 and Riser 2 installed) and up to three of the PCIe option cards for a three-
riser system (Riser 1, Riser 2, and Riser 3 installed). Risers 1 and 2 are controlled by CPU
1 and riser 3 is controlled by CPU 2.
■ Three additional PCIe cards may be added with HX-E-TOPO4 or HX-E-TOPO3 selections.
■ Two additional PCIe cards may be added with HX-E-TOPO2 selection.
Caveats
Select
■ NIC Interoperability with Cisco Cables/Optics (Table 13 and Table 14 on page 37).
■ NIC Interoperability with Intel Cables/Optics (Table 15).
The Intel cables/optics XDACBL3M, XDACBL5M, and E10GSFPLR are listed as compatible in the tables
referenced above.
The information in the preceding tables was compiled from testing conducted by Cisco Transceiver Module
Group (TMG) and vendors. The latest compatibility with optical modules and DACs can be found at
https://tmgmatrix.cisco.com/.
Many topologies are supported to ensure the right fit for many environments. HyperFlex Edge
supports single and dual switch topologies, depending on the level of high availability required.
See the SUPPLEMENTAL MATERIAL, page 62 for more details on each topology option.
Select one network topology from the options listed in Table 16.
Notes:
1. Starting with HyperFlex 5.0(2a), the TOPO5 option is supported.
2. A minimum of 4 NIC ports is required. If NIC connectivity mode is selected, the Riser 1 HH x16 slot and
Riser 2 HH x8 slot options cannot be selected.
3. Refer to Table 16 for the list of available cards for the TOPO5 (NIC connectivity mode).
NOTE:
■ A topology selection is required. Intel NIC adapters in STEP 7 are used by guest
VMs/applications only. These adapters may not be substituted for the adapters
automatically included when selecting a topology.
■ Selecting HX-E-TOPO4 will include the Cisco UCS VIC 1467 quad port 25G SFP28 mLOM
card (HX-M-V25-04) for 10/25GE topologies. Two ports on the card are used for
HyperFlex functions. The remaining two ports may be used by applications after the
HyperFlex deployment is completed (summarized in the sketch after this note).
■ Selecting HX-E-TOPO2 will include the Intel i350 quad port PCIe NIC for 1GE topologies.
Two ports on the NIC are used for HyperFlex functions. The remaining two ports may be
used by applications after the HyperFlex deployment is completed.
■ Cisco strongly recommends HX-E-TOPO4 for all new deployments for the following
reasons:
• Higher storage performance
• Expansion ready - Ability to support node expansion in a future HyperFlex
Data Platform software release.
• Investment protection provides up to 100GE of theoretical throughput per server.
• Leaves PCIe slots free for accessories
■ Starting with HyperFlex 5.0(2a), the TOPO5 option is supported
■ For full details see the HyperFlex Networking Topologies Tech Note.
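For quick reference, the topology-to-adapter behavior described in this note can be summarized as data; the sketch below (Python) covers only the HX-E-TOPO4 and HX-E-TOPO2 details stated here, and the dictionary structure itself is illustrative:

# Adapters automatically included by topology selection, per the note above.
TOPOLOGY_ADAPTERS = {
    "HX-E-TOPO4": {
        "included_adapter": "HX-M-V25-04 (Cisco UCS VIC 1467 quad port 25G SFP28 mLOM)",
        "speeds": "10/25GE",
        "ports_used_by_hyperflex": 2,
        "ports_free_for_applications": 2,
    },
    "HX-E-TOPO2": {
        "included_adapter": "Intel i350 quad port PCIe NIC",
        "speeds": "1GE",
        "ports_used_by_hyperflex": 2,
        "ports_free_for_applications": 2,
    },
}

for topo, details in TOPOLOGY_ADAPTERS.items():
    print(topo, "->", details["included_adapter"])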
Product ID (PID)   PID Description   Card Size   Maximum Cards per Node
Notes:
1. Refer to https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/c/hw/c220m6/install/c220m6.html
for more details.
NOTE:
■ All GPU cards must be procured from Cisco as there is a unique SBIOS ID
required by CIMC and UCSM.
Caveats
http://ucspowercalc.cisco.com
WARNING:
■ Starting 1st January 2024, only Titanium rated PSUs are allowed to be shipped
to European Union (EU), European Economic Area (EEA), United Kingdom (UK),
Switzerland and other countries that adopted Lot 9 Regulation.
■ DC PSUs are not impacted by Lot 9 Regulation and are EU/UK Lot 9 compliant
NOTE: In a server with two power supplies, both power supplies must be identical.
NOTE: Table 20 lists the power cords for servers that use power supplies less than
2300 W. Table 21 lists the power cords for servers that use 2300 W power supplies.
Note that the power cords for 2300 W power supplies use a C19 connector so they
only fit the 2300 W power supply connector.
Table 20 Available Power Cords (for server PSUs less than 2300 W)

The table lists each power cord with its plug and connector type (IEC60320/C13 connectors) and cord
length. Entries recoverable here include CAB-AC-L620-C13 (AC Power Cord, NEMA L6-20 to C13, 2M/6.5ft)
and CAB-250V-10A-ID (Power Cord, 250V, 10A, India).
The reversible cable management arm mounts on either the right or left slide rails at the rear
of the server and is used for cable management. Use Table 23 to order a cable management
arm.
For more information about the tool-less rail kit and cable management arm, see the Cisco 220
M6 Installation and Service Guide at this URL:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/c/hw/c220m6/install/
c220m6.html
NOTE: If you plan to rackmount your HX220 M6 Edge All Flash/Hybrid Server Nodes,
you must order a tool-less rail kit. The same rail kits and CMAs are used for M5 and
M6 servers.
A chassis intrusion switch gives a notification of any unauthorized mechanical access into the
server.
NOTE:
■ The TPM module used in this system conforms to TPM 2.0, as defined by the
Trusted Computing Group (TCG). It is also SPI-based.
■ TPM installation is supported after-factory. However, a TPM installs with a
one-way screw and cannot be replaced, upgraded, or moved to another server.
If a server with a TPM is returned, the replacement server must be ordered
with a new TPM.
Table 25 Locking Bezel Option for HX220 M6 Edge All Flash/Hybrid Server Nodes
NOTE:
■ VMware ESXi Hypervisor - We no longer ship VMware ESXi from the factory.
Refer to this link for further details:
https://www.cisco.com/c/en/us/td/docs/hyperconverged_systems/HyperFlex_
HX_DataPlatformSoftware/BroadcomAgreementNotice/b-broadcom-terminated-
vmware-embedded-agreement.html
■ Microsoft operating system - Optional guest OS licenses that may be purchased
to run on top of the hypervisor.
Operating system
Microsoft Options
HX-MSWS-22-ST16C Windows Server 2022 Standard (16 Cores/2 VMs)
HX-MSWS-22-DC16C Windows Server 2022 Data Center (16 Cores/Unlimited VMs)
If you have noncritical implementations and choose to have no service contract, the following
coverage is supplied:
For support of the entire HyperFlex System, Cisco offers the Cisco Smart Net Total Care
Service. This service provides expert software and hardware support to help sustain
performance and high availability of the unified computing environment. Access to Cisco
Technical Assistance Center (TAC) is provided around the clock, from anywhere in the world.
For systems that include Unified Computing System Manager, the support service includes
downloads of UCSM upgrades. The Cisco Smart Net Total Care Service includes flexible hardware
replacement options, including replacement in as little as two hours. There is also access to
Cisco's extensive online technical resources to help maintain optimal efficiency and uptime of
the unified computing environment. For more information please refer to the following url:
http://www.cisco.com/c/en/us/services/technical/smart-net-total-care.html?stickynav=1
Note: For PID HX-E-220M6S, select Service SKU with HXE22M6S suffix (Example: CON-OSP-HXE22M6S)
**Includes Local Language Support (see below for full description) – Only available in China and Japan
***Includes Local Language Support and Drive Retention – Only available in China and Japan
An enhanced offer over traditional Smart Net Total Care which provides onsite troubleshooting
expertise to aid in the diagnosis and isolation of hardware issues within our customers' Cisco
HyperFlex System environment. It is delivered by a Cisco Certified field engineer (FE) in
collaboration with remote TAC engineers and Virtual Internetworking Support Engineers (VISE).
**Includes Local Language Support (see below for full description) – Only available in China and Japan
Solution Support includes both Cisco product support and solution-level support, resolving
complex issues in multivendor environments, on average, 43% more quickly than product
support alone. Solution Support is a critical element in data center administration, to help
rapidly resolve any issue encountered, while maintaining performance, reliability, and return
on investment.
This service centralizes support across your multivendor Cisco environment for both our
products and solution partner products you’ve deployed in your ecosystem. Whether there is
an issue with a Cisco or solution partner product, just call us. Our experts are the primary point
of contact and own the case from first call to resolution. For more information please refer to
the following url:
http://www.cisco.com/c/en/us/services/technical/solution-support.html?stickynav=1
For faster parts replacement than is provided with the standard Cisco HyperFlex warranty, Cisco
offers the Cisco Smart Net Total Care Hardware Only Service. You can choose from two levels of
advanced onsite parts replacement coverage in as little as four hours. Smart Net Total Care
Hardware Only Service provides remote access any time to Cisco support professionals who can
determine if a return materials authorization (RMA) is required.
***Includes Local Language Support and Drive Retention – Only available in China and Japan
Cisco Partner Support Service (PSS) is a Cisco Collaborative Services service offering that is
designed for partners to deliver their own branded support and managed services to enterprise
customers. Cisco PSS provides partners with access to Cisco's support infrastructure and assets
to help them:
■ Expand their service portfolios to support the most complex network environments
■ Lower delivery costs
■ Deliver services that increase customer loyalty
PSS options enable eligible Cisco partners to develop and consistently deliver high-value
technical support that capitalizes on Cisco intellectual assets. This helps partners to realize
higher margins and expand their practice.
PSS provides hardware and software support, including triage support for third party software,
backed by Cisco technical resources and level three support.
PSS Hardware Only provides customers with replacement parts in as little as two hours and
provides remote access any time to Partner Support professionals who can determine if a return
materials authorization (RMA) is required. You can choose a desired service listed in Table 36.
Note: For PID HX-E-220M6S, select Service SKU with HXE22M6S suffix (Example: CON-PSW7-HXE22M6S)
Note: For PID HX-E-220M6S, select Service SKU with HXE22M6S suffix (Example: CON-DSO-HXE22M6S)
Combined Services makes it easier to purchase and manage required services under one
contract. SNTC services help increase the availability of your vital data center infrastructure and
realize the most value from your unified computing investment. The more benefits you realize
from the Cisco HyperFlex System, the more important the technology becomes to your
business. These services allow you to:
With the Cisco Drive Retention Service, you can obtain a new disk drive in exchange for a
faulty drive without returning the faulty drive.
Sophisticated data recovery techniques have made classified, proprietary, and confidential
information vulnerable, even on malfunctioning disk drives. The Drive Retention service
enables you to retain your drives and ensures that the sensitive data on those drives is not
compromised, which reduces the risk of any potential liabilities. This service also enables you
to comply with regulatory, local, and federal requirements.
If your company has a need to control confidential, classified, sensitive, or proprietary data, you
might want to consider one of the Drive Retention Services listed in the above tables (where
available).
NOTE: Cisco does not offer a certified drive destruction service as part of this
service.
Where available, and subject to an additional fee, local language support for calls on all
assigned severity levels may be available for specific product(s) – see tables above.
For a complete listing of available services for Cisco Unified Computing System, see the
following URL:
http://www.cisco.com/en/US/products/ps10312/serv_group_home.html
SUPPLEMENTAL MATERIAL
Supported Network Topologies for HyperFlex Edge 2 Node Deployments
Cisco HyperFlex Edge offers both 1 Gigabit Ethernet (GE) and 10/25 Gigabit Ethernet (GE) installation
options. Both topologies support single top-of-rack (ToR) and dual ToR switch options for ultimate
network flexibility and redundancy.
Consider the following when determining the best topology for your cluster:
■ Cisco highly recommends the 10/25GE topology for higher performance and future node
expansion capabilities.
■ The 1GE topology is reserved for clusters that will never require node expansion, and for
instances where the ToR switch does not have 10/25GE ports available.
NOTE: A network topology is chosen during initial deployment and cannot be changed or
upgraded without a full reinstallation. Choose your network topology carefully and with
future needs in mind.
Below is a summary of the supported topologies; refer to the Cisco HyperFlex Edge Deployment Guide,
Pre-installation Checklist chapter, for full details.
Figure 8 Physical cabling for the 10/25GE Dual Switch Topology. Detailed diagrams for
network topologies can be found in the pre-installation checklist.
Figure 9 Physical cabling for the 10/25GE Single Switch Topology. Detailed diagrams for
network topologies can be found in the pre-installation checklist.
Figure 10 Physical cabling for the 1GE Dual Switch Topology. Detailed diagrams for
network topologies can be found in the pre-installation checklist.
Figure 11 Physical cabling for the 1GE Single Switch Topology. Detailed diagrams for
network topologies can be found in the pre-installation checklist.
Figure 12 Physical cabling for the Quad Port NIC Based 10/25GE Dual Switch Topology.
Figure 13 Physical cabling for the Dual Port NIC Based 10/25GE Dual Switch Topology.
Cisco HyperFlex Edge offers both 1 Gigabit Ethernet (GE) and 10/25 Gigabit Ethernet (GE) installation
options. Both topologies support single top-of-rack (ToR) and dual ToR switch options for ultimate
network flexibility and redundancy.
Consider the following when determining the best topology for your cluster:
■ Cisco highly recommends the 10/25GE topology for higher performance and future node
expansion capabilities.
■ The 1GE topology is reserved for clusters that will never require node expansion, and
instances where the ToR switch does not have 10/25GE ports available.
NOTE: A network topology is chosen during initial deployment and cannot be changed or
upgraded without a full reinstallation. Choose your network topology carefully and with
future needs in mind.
Below is a summary of the supported topologies; refer to the Cisco HyperFlex Edge Deployment Guide,
Pre-installation Checklist chapter, for full details.
Figure 14 Physical cabling for the 10/25GE Dual Switch Topology. Detailed diagrams for
network topologies can be found in the pre-installation checklist.
Figure 15 Physical cabling for the 10/25GE Single Switch Topology. Detailed diagrams for
network topologies can be found in the pre-installation checklist.
Figure 16 Physical cabling for the 1GE Dual Switch Topology. Detailed diagrams for
network topologies can be found in the pre-installation checklist.
Figure 17 Physical cabling for the 1GE Single Switch Topology. Detailed diagrams for
network topologies can be found in the pre-installation checklist.
Figure 18 Physical cabling for the Quad Port NIC Based 10/25GE Dual Switch Topology.
Figure 19 Physical cabling for the Dual Port NIC Based 10/25GE Dual Switch Topology.
Chassis
Internal views of the HX220 M6 Edge All Flash/Hybrid Server Nodes chassis with the top cover
removed are shown in Figure 20 and Figure 21 on page 77.
Figure 20 HX220 M6 Edge All Flash/Hybrid Server Nodes With Top Cover Off (full-height, full-
width PCIe cards)
An internal view of the HX220 M6 Edge All Flash/Hybrid Server Nodes chassis with the top cover
removed is shown in Figure 21.
Figure 21 HX220 M6 Edge All Flash/Hybrid Server Nodes With Top Cover Off (full-height, half-
width PCIe cards)
Risers
Figure 22 shows the locations of the PCIe riser connectors on the HX220 M6 Edge All
Flash/Hybrid Server Nodes motherboard. The allowed configurations are:
Figure 22 HX220 M6 Edge All Flash/Hybrid Server Nodes riser connector locations
HX220 M6 Motherboard
Riser 3 Connector
Riser 2 Connector
Riser 1 Connector
Figure 23 shows three half-height risers plugged into their respective connectors.
Figure 23 HX220 M6 Edge All Flash/Hybrid Server Nodes With Three Half-Height Risers Plugged In
HX220M6 Motherboard
Riser 3 Connector
Half-Height Riser 3
Half-Height Riser 2
Half-Height Riser 1
Riser 1 Connector
Riser 2 Connector
Figure 24 shows two full-height risers plugged in. Note that riser 1 is plugged into the riser 1
connector and riser 2 is plugged into the riser 3 connector. Riser 2 connector is not used.
Figure 24 HX220 M6 Edge All Flash/Hybrid Server Nodes With Two Full-Height Risers Plugged In
HX220 M6 Motherboard
Riser 3 Connector
Full-Height Riser 2
Full-Height Riser 1
Riser 1 Connector
Pin   Signal
1     RTS (Request to Send)
2     DTR (Data Terminal Ready)
3     TxD (Transmit Data)
4     GND (Signal Ground)
5     GND (Signal Ground)
6     RxD (Receive Data)
7     DSR (Data Set Ready)
8     CTS (Clear to Send)
KVM Cable
The KVM cable provides a connection into the server, providing a DB9 serial connector, a VGA connector
for a monitor, and dual USB ports for a keyboard and mouse. With this cable, you can create a direct
connection to the operating system and the BIOS running on the server.
2 DB-9 serial connector 4 Two-port USB connector (for a mouse and keyboard)
SPARE PARTS
This section lists the upgrade and service-related parts for the HX220 M6 Edge All Flash/Hybrid Server
Nodes. Some of these parts are configured with every server.
NOTE: Some spare parts you order may also require accessories for full
functionality. For example, drives or drive controllers may need accompanying
cables. CPUs may need heatsinks, thermal paste, and installation tools. The
spares and their accessory parts are listed in Table 40.
KVM Cable
N20-BKVM= KVM local IO cable for UCS servers console port
Risers
UCSC-R2R3-C220M6= Kit containing two half-height risers (risers 2 and 3)
HX-GPURKIT-C220= Kit containing a GPU mounting bracket and the following risers (risers
1 and 2)
UCSC-FBRS-C220M6= Riser 2 and Riser 3 blank panels
CPUs
Note: If you are ordering a second CPU, see the CPU Accessories section in this table for additional parts
you may need to order for the second CPU.
8000 Series Processors
HX-CPU-I8380= Intel 8380 2.3GHz/270W 40C/60MB DDR4 3200MHz
HX-CPU-I8368= Intel 8368 2.4GHz/270W 38C/57MB DDR4 3200MHz
CPU Accessories
UCSC-HSLP-M6= Heatsink for 1U/2U LFF/SFF GPU SKU
UCS-CPU-TIM= Single CPU thermal interface material syringe for M5 server HS seal1
UCS-M6-CPU-CAR= Spare CPU Carrier for M6
UCSX-HSCK= UCS Processor Heat Sink Cleaning Kit (when replacing a CPU)
UCS-CPUAT= CPU Assembly Tool for M5 Servers
UCSC-FAN-C220M6= C220M6 2U Fan
3200-MHz DIMMs
HX-MR-X16G1RW= 16 GB RDIMM SRx4 3200 (8Gb)
HX-MR-X32G1RW= 32 GB RDIMM SRx4 3200 (16Gb)
HX-MR-X32G2RW= 32 GB RDIMM DRx4 3200 (8Gb)
HX-MR-X64G2RW= 64 GB RDIMM DRx4 3200 (16Gb)
HX-ML-128G4RW= 128 GB LRDIMM QRx4 3200 (16Gb)
DIMM Blank
UCS-DIMM-BLK= UCS DIMM Blank
Drives
Note: When ordering additional SAS/SATA front or rear drives, you may need to order a cable to connect
from the drive to the motherboard. See the Drive Cables section in this table.
HXAF-E-220M6S (All Flash)
Front Capacity Drive
HX-SD19T61X-EV= 1.9TB 2.5 inch Enterprise Value 6G SATA SSD
HX-SD38T61X-EV= 3.8TB 2.5 inch Enterprise Value 6G SATA SSD
HX-SD76T61X-EV= 7.6TB 2.5 inch Enterprise Value 6G SATA SSD
HX-SD960G6S1X-EV= 960GB 2.5 inch Enterprise Value 6G SATA SSD
HX-SD19T6S1X-EV= 1.9TB 2.5 inch Enterprise Value 6G SATA SSD
HX-SD38T6S1X-EV= 3.8TB 2.5 inch Enterprise Value 6G SATA SSD
HX-SD76T6S1X-EV= 7.6TB 2.5 inch Enterprise Value 6G SATA SSD
Front Cache Drive
Boot Drive
HX-M2-240GB= 240GB SATA M.2
HX-M2-HWRAID= Cisco Boot optimized M.2 Raid controller
HX-E-220M6S (Hybrid)
Front Capacity Drive
HX-HD12TB10K12N= 1.2 TB 12G SAS 10K RPM SFF HDD
HX-HD18TB10K4KN= 1.8TB 12G SAS 10K RPM SFF HDD (4K)
HX-HD24TB10K4KN= 2.4 TB 12G SAS 10K RPM SFF HDD (4K)
Front Cache Drive
HX-SD480G63X-EP= 480GB 2.5in Enterprise Performance 6G SATA SSD(3X endurance)
Front System Drive
HX-SD240GM1X-EV= 240GB 2.5 inch Enterprise Value 6G SATA SSD
Boot Drive
HX-M2-240GB= 240GB SATA M.2
HX-M2-HWRAID= Cisco Boot optimized M.2 Raid controller
Drive Cables
CBL-SATA-C220M6= SATA cable C220M6 (1U)
Note: If you are ordering a HX-SAS-220M6 you might need to order SAS cables. See the Drive Controller
Cables section of this table.
CMA
HX-CMA-C220M6= Reversible CMA for C220 M6 ball bearing rail kit
Security
HX-TPM-002C= TPM 2.0, TCG, FIPS140-2, CC EAL4+ Certified, for M6 servers
HX-INT-SW02= C220 and C240 M6 Chassis Intrusion Switch
Bezel
HXAF220C-BZL-M5S= HXAF220C M5 Security Bezel
HX220C-BZL-M5= HX220C M5 Security Bezel
Notes:
1. This part is included with the purchase of option or spare CPU or CPU processor kits.
(1) Have the following tools and materials available for the procedure:
Carefully remove and replace the CPU and heatsink in accordance with the instructions found
in the "Cisco UCS C220 M6 Server Installation and Service Guide" at:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/c/hw/c220m6/install/c220m6.html
(1) Have the following tools and materials available for the procedure:
(3) Order one heat sink for each new CPU. Order PID UCSC-HSLP-M6=.
Carefully install the CPU and heatsink in accordance with the instructions found in the
"Cisco UCS C220 M6 Server Installation and Service Guide" at:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/c/hw/c220m6/install/c220m6.html
Step 2 Press evenly on both ends of the DIMM until it clicks into place in its slot.
Note: Ensure that the notch in the DIMM aligns with the slot. If the notch is misaligned, it is
possible to damage the DIMM, the slot, or both.
Step 3 Press the DIMM connector latches inward slightly to seat them fully.
Step 4 Populate all slots with a DIMM or DIMM blank. A slot cannot be empty.
For additional details on replacing or upgrading DIMMs, see the "Cisco UCS C220 M6 Server Installation
and Service Guide" at this link:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/c/hw/c220m6/install/c220m6.html
TECHNICAL SPECIFICATIONS
Dimensions and Weight
Table 41 HX220 M6 Edge All Flash/Hybrid Server Nodes Dimensions and Weight
Parameter Value
Power Specifications
The server is available with the following types of power supplies:
Table 42 HX220 M6 Edge All Flash/Hybrid Server Nodes Power Specifications (1050 W AC power supply)
Parameter Specification
Input Connector IEC320 C14
Input Voltage Range (V rms) 100 to 240
Maximum Allowable Input Voltage Range (V rms) 90 to 264
Frequency Range (Hz) 50 to 60
Maximum Allowable Frequency Range (Hz) 47 to 63
Maximum Rated Output (W)1 800 1050
Maximum Rated Standby Output (W) 36
Nominal Input Voltage (V rms) 100 120 208 230
Nominal Input Current (A rms) 9.2 7.6 5.8 5.2
Maximum Input at Nominal Input Voltage (W) 889 889 1167 1154
Maximum Input at Nominal Input Voltage (VA) 916 916 1203 1190
Minimum Rated Efficiency (%)2 90 90 90 91
Minimum Rated Power Factor2 0.97 0.97 0.97 0.97
Maximum Inrush Current (A peak) 15
Maximum Inrush Current (ms) 0.2
Minimum Ride-Through Time (ms)3 12
Notes:
1. Maximum rated output is limited to 800W when operating at low-line input voltage (100-127V)
2. This is the minimum rating required to achieve 80 PLUS Platinum certification, see test reports published
at http://www.80plus.org/ for certified values
3. Time output voltage remains within regulation limits at 100% load, during input voltage dropout
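The maximum-input figures in Table 42 follow from the rated output, minimum efficiency, and minimum power factor; below is a small sketch (Python) of that arithmetic, noting that the published table values remain authoritative:

# Approximate maximum input power (W) and apparent power (VA) from rated
# output, minimum efficiency, and minimum power factor (compare Table 42).
def max_input_watts(rated_output_w, efficiency):
    return rated_output_w / efficiency

def max_input_va(input_w, power_factor):
    return input_w / power_factor

input_w = max_input_watts(1050, 0.90)   # ~1167 W (208 V nominal, 90% efficiency)
input_va = max_input_va(input_w, 0.97)  # ~1203 VA (0.97 power factor)
print(round(input_w), round(input_va))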
Table 43 HX220 M6 Edge All Flash/Hybrid Server Nodes Power Specifications (1050 W V2 DC power
supply)
Parameter Specification
Input Connector Molex 42820
Input Voltage Range (V rms) -48
Maximum Allowable Input Voltage Range (V rms) -40 to -72
Frequency Range (Hz) NA
Maximum Allowable Frequency Range (Hz) NA
Maximum Rated Output (W) 1050
Maximum Rated Standby Output (W) 36
Nominal Input Voltage (V rms) -48
Nominal Input Current (A rms) 24
Maximum Input at Nominal Input Voltage (W) 1154
Maximum Input at Nominal Input Voltage (VA) 1154
Minimum Rated Efficiency (%)1 91
Minimum Rated Power Factor1 NA
Maximum Inrush Current (A peak) 15
Maximum Inrush Current (ms) 0.2
Minimum Ride-Through Time (ms)2 5
Notes:
1. This is the minimum rating required to achieve 80 PLUS Platinum certification, see test reports published
at http://www.80plus.org/ for certified values
2. Time output voltage remains within regulation limits at 100% load, during input voltage dropout
Table 44 HX220 M6 Edge All Flash/Hybrid Server Nodes 1600 W (AC) Power Supply Specifications
Parameter Specification
Input Connector IEC320 C14
Input Voltage Range (V rms) 200 to 240
Maximum Allowable Input Voltage Range (V rms) 180 to 264
Frequency Range (Hz) 50 to 60
Maximum Allowable Frequency Range (Hz) 47 to 63
Maximum Rated Output (W) 1600
Maximum Rated Standby Output (W) 36
Nominal Input Voltage (V rms) 100 120 208 230
Nominal Input Current (A rms) NA NA 8.8 7.9
Maximum Input at Nominal Input Voltage (W) NA NA 1778 1758
Maximum Input at Nominal Input Voltage (VA) NA NA 1833 1813
Minimum Rated Efficiency (%)1 NA NA 90 91
Minimum Rated Power Factor2 NA NA 0.97 0.97
Maximum Inrush Current (A peak) 30
Maximum Inrush Current (ms) 0.2
Minimum Ride-Through Time (ms)2 12
Notes:
1. This is the minimum rating required to achieve 80 PLUS Platinum certification, see test reports published
at http://www.80plus.org/ for certified values
2. Time output voltage remains within regulation limits at 100% load, during input voltage dropout
Table 45 HX220 M6 Edge All Flash/Hybrid Server Nodes 2300 W (AC) Power Supply Specifications
Parameter Specification
Input Connector IEC320 C20
Input Voltage Range (Vrms) 100 to 240
Maximum Allowable Input Voltage Range (Vrms) 90 to 264
Frequency Range (Hz) 50 to 60
Maximum Allowable Frequency Range (Hz) 47 to 63
Maximum Rated Output (W)1 2300
Maximum Rated Standby Output (W) 36
Nominal Input Voltage (Vrms) 100 120 208 230
Nominal Input Current (Arms) 13 11 12 10.8
Maximum Input at Nominal Input Voltage (W) 1338 1330 2490 2480
Maximum Input at Nominal Input Voltage (VA) 1351 1343 2515 2505
Minimum Rated Efficiency (%)2 92 92 93 93
Minimum Rated Power Factor2 0.99 0.99 0.97 0.97
Maximum Inrush Current (A peak) 30
Maximum Inrush Current (ms) 0.2
Minimum Ride-Through Time (ms)3 12
Notes:
1. Maximum rated output is limited to 1200W when operating at low-line input voltage (100-127V)
2. This is the minimum rating required to achieve 80 PLUS Titanium certification, see test reports published
at http://www.80plus.org/ for certified values
3. Time output voltage remains within regulation limits at 100% load, during input voltage dropout
For configuration-specific power specifications, use the Cisco UCS Power Calculator at this URL:
http://ucspowercalc.cisco.com
Environmental Specifications
The environmental specifications for the HX220 M6 server are listed in Table 46.
Table 47 HX220 M6 Edge All Flash/Hybrid Server Nodes Extended Operating Temperature Hardware
Configuration Limits
Compliance Requirements
The regulatory compliance requirements for servers are listed in Table 48.
Parameter Description
Regulatory Compliance Products should comply with CE Markings per directives
2014/30/EU and 2014/35/EU
Safety UL 60950-1 Second Edition
CAN/CSA-C22.2 No. 60950-1 Second Edition
EN 60950-1 Second Edition
IEC 60950-1 Second Edition
AS/NZS 60950-1
GB4943 2001
EMC - Emissions 47CFR Part 15 (CFR 47) Class A
AS/NZS CISPR32 Class A
CISPR32 Class A
EN55032 Class A
ICES003 Class A
VCCI Class A
EN61000-3-2
EN61000-3-3
KN32 Class A
CNS13438 Class A
EMC - Immunity EN55024
CISPR24
EN300386
KN35