Cisco UCS C240 M6 - Installation

Overview

This chapter contains the following topics:


• Overview
• External Features
• Summary of Server Features
• Serviceable Component Locations

Overview
The Cisco UCS C240 M6 is a 2U rack server that can operate either standalone or as part of the Cisco Unified Computing System (Cisco UCS).
The Cisco UCS C240 M6 servers support a maximum of two 3rd Gen Intel® Xeon® Scalable Processors, in either one- or two-CPU configurations.
The servers support:
• 16 DIMM slots per CPU for 3200-MHz DDR4 DIMMs, in capacities up to 256 GB per DIMM.
• A maximum of 8 TB or 12 TB of memory for a dual CPU configuration, populated with either 32 x 256 GB DDR4 DIMMs, or 16 x 256 GB DDR4 DIMMs plus 16 x 512 GB Intel® Optane™ Persistent Memory Modules (DCPMMs). See the worked capacity totals after this list.

• The servers have different supported drive configurations, depending on whether they are configured with large form factor (LFF) or small form factor (SFF) front-loading drives.
• The C240 M6 12 LFF supports midplane-mounted storage through a maximum of four LFF HDDs.
• Up to 2 M.2 SATA RAID cards for server boot.
• Rear storage risers (2 slots each).
• One rear PCIe riser (3 slots).
• Internal slot for a 12 G SAS RAID controller with SuperCap for write-cache backup, or for a SAS HBA.
• Network connectivity through either a dedicated modular LAN over motherboard (mLOM) card that accepts a 14xx/15xx series Cisco virtual interface card (VIC) or a third-party NIC. These options are in addition to the Intel x550 10Gbase-T mLOM ports built into the server motherboard.


• One mLOM/VIC card provides 10/25/40/50/100 Gbps. The following mLOMs are supported:
• Cisco UCS VIC 15238 Dual Port 40/100G QSFP28 mLOM (UCSC-M-V5D200G) supports:
• a x16 PCIe Gen4 Host Interface to the rack server
• two 40G/100G QSFP28 ports
• 4GB DDR4 Memory, 3200 MHz
• Integrated blower for optimal ventilation

• Cisco UCS VIC 15428 Quad Port CNA MLOM (UCSC-M-V5Q50G) supports:
• a x16 PCIe Gen4 Host Interface to the rack server
• four 10G/25G/50G SFP56 ports
• 4GB DDR4 Memory, 3200 MHz
• Integrated blower for optimal ventilation

• Cisco UCS VIC 1467 Quad Port 10/25G SFP28 mLOM (UCSC-M-V25-04) supports:
• a x16 PCIe Gen3 Host Interface to the rack server
• four 10G/25G SFP28 ports
• 2GB DDR3 Memory, 1866 MHz

• Cisco UCS VIC 1477 Dual Port 40/100G QSFP28 mLOM (UCSC-M-V100-04) supports:
• a x16 PCIe Gen3 Host Interface to the rack server
• two 40G/100G QSFP28 ports
• 2GB DDR3 Memory, 1866 MHz
• The following virtual interface cards (VICs) are supported (in addition to some third-party VICs):
• Cisco UCS VIC 1455 quad port 10/25G SFP28 PCIe (UCSC-PCIE-C25Q-04=)
• Cisco UCS VIC 1495 Dual Port 40/100G QSFP28 CNA PCIe (UCSC-PCIE-C100-04=)

• Two power supplies (PSUs) that support an N+1 power configuration.
• Six modular, hot-swappable fans.
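The maximum-memory figures above follow directly from the slot counts and module sizes. A minimal worked example in Python (plain arithmetic only; no Cisco tooling is assumed):

dimms_only_gb = 32 * 256                    # 32 x 256 GB DDR4 DIMMs
dimms_plus_pmem_gb = 16 * 256 + 16 * 512    # 16 x 256 GB DIMMs + 16 x 512 GB DCPMMs

print(dimms_only_gb // 1024)                # 8  -> 8 TB maximum with DIMMs only
print(dimms_plus_pmem_gb // 1024)           # 12 -> 12 TB maximum with DIMMs plus DCPMMs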

Server Configurations, LFF


The server is orderable with the following configuration for LFF drives.
• Cisco UCS C240 M6 LFF 12 (UCSC-C240-M6L)—Large form-factor (LFF) drives, with a 12-drive
backplane.
• Front-loading drive bays 1—12 support 3.5-inch SAS/SATA drives.


• The midplane drive cage supports four 3.5-inch SAS-only drives.


• Optionally, rear-loading drive bays support either two or four SAS/SATA or NVMe drives.

Server Configurations, SFF 12 SAS/SATA


The SFF 12 SAS/SATA configuration (UCSC-C240-M6-S) can be configured with 12 SFF drives and an optional optical drive. The SFF configurations can be ordered as either an I/O-centric configuration or a storage-centric configuration. This server supports the following:
• A maximum of 12 Small form-factor (SFF) drives, with a 12-drive backplane.
• Front-loading drive bays 1—12 support a maximum of 12 2.5-inch SAS/SATA drives as SSDs or
HDDs.
• Optionally, drive bays 1—4 can support 2.5-inch NVMe SSDs. In this configuration, any number
of NVMe drives can be installed up to the maximum of 4.

Note NVMe drives are supported only on a dual CPU server.

• The server can be configured with a SATA interposer card. With a SATA interposer card, up to a maximum of 8 SATA-only drives can be configured, and these drives can be installed only in slots 1 - 8. (See the population-check sketch after this list.)
• Drive bays 5 - 12 support SAS/SATA SSDs or HDDs only; no NVMe.
• Optionally, the rear-loading drive bays support four 2.5-inch SAS/SATA or NVMe drives.
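The bay-population rules above interact (NVMe only in bays 1 - 4, two CPUs required for any NVMe, and a SATA interposer limiting the server to 8 SATA-only drives in slots 1 - 8), so a quick pre-order check can help. The following Python sketch encodes only the rules stated in this section; the function name and the layout format are illustrative and are not part of any Cisco tool.

def check_sff12_layout(bays, cpu_count, sata_interposer=False):
    """Return a list of rule violations for a planned 12-drive SFF layout.

    bays maps bay number (1-12) to 'SAS/SATA', 'NVMe', or None.
    An empty list means the plan satisfies the rules quoted above.
    """
    problems = []
    nvme_bays = [b for b, d in bays.items() if d == 'NVMe']
    sata_bays = [b for b, d in bays.items() if d == 'SAS/SATA']

    if nvme_bays and cpu_count < 2:
        problems.append('NVMe drives are supported only on a dual CPU server')
    if any(b > 4 for b in nvme_bays):
        problems.append('NVMe drives are supported only in bays 1-4')
    if sata_interposer:
        if nvme_bays:
            problems.append('a SATA interposer configuration is SATA-only')
        if len(sata_bays) > 8:
            problems.append('a SATA interposer supports a maximum of 8 SATA-only drives')
        if any(b > 8 for b in sata_bays):
            problems.append('with a SATA interposer, drives can be installed only in slots 1-8')
    return problems

# Example: one CPU with two NVMe drives reports the dual-CPU requirement.
print(check_sff12_layout({1: 'NVMe', 2: 'NVMe', 5: 'SAS/SATA'}, cpu_count=1))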

Server Configurations, 24 SFF SAS/SATA


The SFF 24 SAS/SATA configuration (UCSC-C240-M6SX) can be ordered as either an I/O-centric configuration or a storage-centric configuration. This server supports the following:
• A maximum of 24 small form-factor (SFF) drives, with a 24-drive backplane.
• Front-loading drive bays 1—24 support 2.5-inch SAS/SATA drives as SSDs or HDDs.
• Optionally, drive bays 1—4 can support 2.5-inch NVMe SSDs. In this configuration, any number
of NVMe drives can be installed up to the maximum of 4.

Note NVMe drives are supported only on a dual CPU server.

• Drive bays 5 - 24 support SAS/SATA SSDs or HDDs only; no NVMe.
• Optionally, the rear-loading drive bays support four 2.5-inch SAS/SATA or NVMe drives.
• As an option, this server can be ordered in a "GPU ready" configuration. This option supports adding GPUs at a later date, even if no GPU is purchased when the server is initially ordered.


Note To order the GPU Ready configuration through the Cisco online ordering
and configuration tool, you must select the GPU air duct PID to enable GPU
ready configuration. Follow the additional rules displayed in the tool. For
additional information, see GPU Card Configuration Rules.

Server Configurations, 12 NVMe


The SFF 12 NVMe configuration (UCSC-C240-M6N) can be ordered as an NVMe-only server. The NVMe-optimized server requires two CPUs. This server supports the following:
• A maximum of 12 SFF NVMe drives as SSDs with a 12-drive backplane, NVMe-optimized.
• Front-loading drive bays 1—12 support 2.5-inch NVMe PCIe SSDs only.
• The two rear-loading drive bays support two 2.5-inch NVMe SSDs only. These drive bays are the
top and middle slot on the left of the rear panel.

Server Configurations, 24 NVMe


The SFF 24 NVMe configuration (UCSC-C240-M6SN) can be ordered as an NVMe-only server. The
NVMe-optimized server requires two CPUs. This server supports the following:
• A maximum of 24 SFF NVMe drives as SSDs with a 24-drive backplane, NVMe-optimized.
• Front-loading drive bays 1—24 support 2.5-inch NVMe PCIe SSDs only.
• The two rear-loading drive bays support two 2.5-inch NVMe SSDs only. These drive bays are the
top and middle slot on the left of the rear panel.
• As an option, this server can be ordered in a "GPU ready" configuration. This option supports adding GPUs at a later date, even if no GPU is purchased when the server is initially ordered.

Note To order the GPU Ready configuration through the Cisco online ordering
and configuration tool, you must select the GPU air duct PID to enable GPU
ready configuration. Follow the additional rules displayed in the tool. For
additional information, see GPU Card Configuration Rules.

External Features
This topic shows the external features of the different configurations of the server.
For definitions of LED states, see Front-Panel LEDs.


Cisco UCS C240 M6 Server 24 SAS/SATA Front Panel Features


The following figure shows the front panel features of Cisco UCS C240 M6SX, which is the small form-factor (SFF), 24 SAS/SATA drive version of the server. Front-loading drives can be mixed and matched in slots 1 through 4 to support up to four SFF NVMe or SFF SAS/SATA drives. UCS C240 M6 servers with any number of NVMe drives must be dual CPU systems.
This configuration can support up to 4 optional SAS/SATA drives in the rear PCIe slots.
Figure 1: Cisco UCS C240 M6 Server 24 SAS/SATA Front Panel

1 Power Button/Power Status LED 2 Unit Identification LED

3 System Status LEDs 4 Fan Status LED

5 Temperature Status LED 6 Power Supply Status LED

7 Network Link Activity LED 8 Drive Status LEDs


9 Drive bays, front loading
Drive bays 1 - 24 support front-loading SFF SAS/SATA drives.
Drive bays 1 through 4 can support SAS/SATA hard drives and solid-state drives (SSDs) or NVMe PCIe drives. Any number of NVMe drives up to 4 can reside in these slots.
Drive bays 5 - 24 support SAS/SATA hard drives and solid-state drives (SSDs) only.
Drive bays are numbered 1 through 24 with bay 1 as the leftmost bay.
10 KVM connector (used with KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB 2.0 connectors)

Cisco UCS C240 M6 Server 12 SAS/SATA Drives Front Panel Features


The following figure shows the front panel features of Cisco UCS C240 M6S, which is the small form-factor (SFF) drive, 12 SAS/SATA drive version of the server. Front-loading drives can be mixed and matched in slots 1 through 4 to support up to four SFF NVMe or SFF SAS/SATA drives. UCS C240 M6 servers with any number of NVMe drives must be dual CPU systems.
This configuration can support up to 4 optional SAS/SATA or NVMe drives in the rear PCIe slots.
For definitions of LED states, see Front-Panel LEDs.
Figure 2: Cisco UCS C240 M6 Server (SFF SAS/SATA, 12-Drive) Front Panel


1 Power Button/Power Status LED 2 Unit Identification LED

3 System Status LEDs 4 Fan Status LED

5 Temperature Status LED 6 Power Supply Status LED

7 Network Link Activity LED 8 Drive Status LEDs

9 Drive bays, front loading
Drive bays 1 - 12 support front-loading SFF SAS/SATA drives.
Drive bays 1 through 4 can support SAS/SATA hard drives and solid-state drives (SSDs) as well as NVMe PCIe drives. Any number of NVMe drives up to 4 can reside in these slots.
Drive bays 5 - 12 support SAS/SATA hard drives and solid-state drives (SSDs) only.
Drive bays are numbered 1 through 24 with bay 1 as the leftmost bay.
Note: If the server has a SATA interposer card, a maximum of 8 SATA drives is supported in slots 1 through 8.
10 Drive bays 13 through 24 are blocked off with sheet metal.
11 KVM connector (used with KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB 2.0 connectors)

Cisco UCS C240 M6 Server 12 SAS/SATA Drives (Plus Optical) Front Panel Features
The following figure shows the front panel features of Cisco UCS C240 M6S, which is the small form-factor (SFF) drive, 12-drive version of the server. Front-loading drives can be mixed and matched in slots 1 through 4 to support up to four SFF NVMe or SFF SAS/SATA drives. UCS C240 M6 servers with any number of NVMe drives must be dual CPU systems.
This configuration can support up to 4 optional SAS/SATA drives in the rear PCIe slots.
For definitions of LED states, see Front-Panel LEDs.


Figure 3: Cisco UCS C240 M6 Server 12 SAS/SATA Plus Optical Drive, Front Panel Features

1 Power Button/Power Status LED 2 Unit Identification LED

3 System Status LEDs 4 Fan Status LED

5 Temperature Status LED 6 Power Supply Status LED

7 Network Link Activity LED 8 Drive Status LEDs

9 Drive bays, front loading
Drive bays 1 - 12 support front-loading SFF drives.
Drive bays 1 through 4 can support SAS/SATA hard drives and solid-state drives (SSDs) as well as NVMe PCIe drives. Any number of NVMe drives up to 4 can reside in these slots.
Note: If the server has a SATA interposer card, a maximum of 8 SATA drives is supported in slots 1 through 8.
10 Drive bays 13 - 24 are blocked off with sheet metal.
11 KVM connector (used with KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB 2.0 connectors)
12 Optional optical DVD drive is installed horizontally.


Cisco UCS C240 M6 Server 12 NVMe Drive Front Panel Features


The following figure shows the front panel features of Cisco UCS C240 M6N, which is the small form-factor
(SFF) drive, 12 NVMe drive version of the server. Front-loading drives are all NVMe only. UCS C240 M6
servers with any number of NVMe drives must be dual CPU systems.
This configuration can support up to 2 optional NVMe drives in the rear PCIe slots.
For definitions of LED states, see Front-Panel LEDs.
Figure 4: Cisco UCS C240 M6 Server 12 NVMe Front Panel

1 Power Button/Power Status LED 2 Unit Identification LED

3 System Status LEDs 4 Fan Status LED

5 Temperature Status LED 6 Power Supply Status LED

7 Network Link Activity LED 8 Drive Status LEDs

9 Drive bays 1 - 12 support front-loading SFF NVMe drives only. Drive bays are numbered 1 through 12 with bay 1 as the leftmost bay.
10 Drive bays 13 - 24 are blocked off with sheet metal.
11 KVM connector (used with KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB 2.0 connectors)


Cisco UCS C240 M6 Server 24 NVMe Drives Front Panel Features


The following figure shows the front panel features of Cisco UCS C240 M6SN, which is the small form-factor
(SFF) drive, 24 NVMe drive version of the server. Front-loading drives are all NVMe; SAS/SATA drives are
not supported. UCS C240 M6 servers with any number of NVMe drives must be dual CPU systems.
This configuration can support up to 2 optional NVMe drives in the rear PCIe slots.
Figure 5: Cisco UCS C240 M6 Server 24 NVMe Front Panel

1 Power Button/Power Status LED 2 Unit Identification LED

3 System Status LEDs 4 Fan Status LED

5 Temperature Status LED 6 Power Supply Status LED

7 Network Link Activity LED 8 Drive Status LEDs

9 Drive Status LEDs
10 Drive bays 1 - 24 support front-loading SFF NVMe drives. Drive bays are numbered 1 through 24 with bay 1 as the leftmost bay.
11 KVM connector (used with KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB 2.0 connectors)


Cisco UCS C240 M6 Server 12 LFF Drive Front Panel Features


The following figure shows the front panel features of the large form-factor (LFF) configuration of the server.
This version of the server supports 12 3.5-inch LFF SAS-only front-loading hard disk drives (HDDs) plus up to four 3.5-inch LFF mid-plane mounted HDDs. As an option, the server can also support up to four SFF drives as SAS, SATA, or NVMe in the rear PCIe slots.
For definitions of LED states, see Front-Panel LEDs.
Figure 6: Cisco UCS C240 M6 Server 12 LFF Drive Front Panel

1 Power Button/Power Status LED 2 Unit Identification LED

3 System Status LEDs 4 Fan Status LED

5 Temperature Status LED 6 Power Supply Status LED

7 Network Link Activity LED 8 Drive Status LEDs

9 KVM connector (used with KVM cable that provides one DB-15 VGA, one DB-9 serial, and two USB 2.0 connectors)
10 Drive bays 1 - 12 support front-loading LFF SAS-only drives. Drive bays are numbered 1 through 12 with bay 1 as the leftmost bay and 12 as the rightmost bottom bay.


Common Rear Panel Features


The following illustration shows the rear panel hardware features that are common across all models of the
server.

1 Rear hardware configuration options:
• For I/O-Centric, these are PCIe slots.
• For Storage-Centric, these are storage drive bays.
This illustration shows the slots unpopulated.
2 Power supplies (two, redundant as 1+1). See Power Specifications for specifications and supported options.
3 VGA video port (DB-15 connector)
4 Serial port (RJ-45 connector)
5 1-Gb/10-Gb auto-negotiating Ethernet ports (two), which are ports 1 and 2 in the cluster. These LAN ports (LAN1 and LAN2) support 1 Gbps and 10 Gbps and auto-negotiate to the optimal speed based on the link partner capability. The third port in the cluster is the dedicated 1 Gb management port.
6 USB 3.0 ports (two)
7 Rear unit identification button/LED
8 Modular LAN-on-motherboard (mLOM) card slot (x16)

Cisco UCS C240 M6 Server 24 Drive Rear Panel, I/O Centric


The Cisco UCS C240 M6 24 SAS/SATA SFF version has a rear configuration option for either I/O (I/O Centric) or storage (Storage Centric). The I/O Centric version of the server offers PCIe slots, and the Storage Centric version offers drive bays.
The following illustration shows the rear panel features for the I/O Centric version of the Cisco UCS C240
M6SX.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs.

Overview
12
Overview
External Features

1 Riser 1A 2 Riser 2A

3 Riser 3A or 3C - -

The following table shows the riser options for this version of the server.

Table 1: Cisco UCS C240 M6 24 SFF SAS/SATA/NVMe (UCSC-C240-M6SX)

Riser Options

Riser 1: This riser is I/O-centric and controlled by CPU 1 or CPU 2. Riser 1A supports three PCIe slots, numbered bottom to top:
• Slot 1 is full-height, 3/4 length, x8, NCSI
• Slot 2 is full-height, full-length, x16, NCSI
• Slot 3 is full-height, full-length, x8, no NCSI
Riser 2: This riser is I/O-centric and controlled by CPU 2. Riser 2A supports three PCIe slots:
• Slot 4 is full-height, 3/4 length, x8, NCSI
• Slot 5 is full-height, full-length, x16, NCSI
• Slot 6 is full-height, full-length, x8, no NCSI
Riser 3: This riser is I/O-centric and controlled by CPU 2. Riser 3A supports two PCIe slots:
• Slot 7 is full-height, full-length, x8
• Slot 8 is full-height, full-length, x8
Riser 3C supports a GPU only:
• Supports one full-height, full-length, double-wide GPU (PCIe slot 7 only), x16
• Slot 8 is blocked by the double-wide GPU


Cisco UCS C240 M6 Server 12 SAS/SATA Drive Rear Panel, I/O Centric
The Cisco UCS C240 M6 12 SAS/SATA SFF version has a rear configuration option for either I/O (I/O Centric) or storage (Storage Centric). The I/O Centric version of the server offers PCIe slots, and the Storage Centric version offers drive bays.

Note This version of server has an option for a DVD drive on the front of the server. The rear panel shown
here is the same for both the standard server and the DVD drive version of the server.

The following illustration shows the rear panel features for the I/O Centric version of the Cisco UCS C240
M6S.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs.

1 Riser 1A 2 Riser 2A

3 Riser 3A or 3C -

The following table shows the riser options for this version of the server.

Table 2: Cisco UCS C240 M6 12 SFF SAS/SATA (UCSC-C240-M6S)

Riser Options

Riser 1: This riser is I/O-centric and controlled by CPU 1. Riser 1A supports three PCIe slots, numbered bottom to top:
• Slot 1 is full-height, 3/4 length, x8, NCSI
• Slot 2 is full-height, full-length, x16, NCSI
• Slot 3 is full-height, full-length, x8, no NCSI
Riser 2: This riser is I/O-centric and controlled by CPU 2. Riser 2A supports three PCIe slots:
• Slot 4 is full-height, 3/4 length, x8, NCSI
• Slot 5 is full-height, full-length, x16, NCSI
• Slot 6 is full-height, full-length, x8, no NCSI


Riser 3: This riser is I/O-centric and controlled by CPU 2. Riser 3A supports two PCIe slots:
• Slot 7 is full-height, full-length, x8, no NCSI
• Slot 8 is full-height, full-length, x8, no NCSI
Riser 3C supports a GPU only:
• Supports one full-height, full-length, double-wide GPU (PCIe slot 7 only), x16
• Slot 8 is blocked by the double-wide GPU

Cisco UCS C240 M6 Server 24 NVMe Drive Rear Panel, I/O Centric
The Cisco UCS C240 M6 24 NVMe version has a rear configuration option for either I/O (I/O Centric) or storage (Storage Centric). The I/O Centric version of the server offers PCIe slots, and the Storage Centric version offers drive bays.
The following illustration shows the rear panel features for the I/O Centric version of the Cisco UCS C240
M6SN.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs.

The following table shows the riser options for this version of the server.

1 Riser 1A 2 Riser 2A

3 Riser 3A, 3B, or 3C (not supported)


Table 3: Cisco UCS C240 M6 24 SFF NVMe (UCSC-C240M6-SN)

Riser Options

Riser 1: This riser is I/O-centric and controlled by CPU 1 or CPU 2. Riser 1A supports three PCIe slots:
• Slot 1 is full-height, 3/4 length, x8, NCSI
• Slot 2 is full-height, full-length, x16, NCSI
• Slot 3 is full-height, full-length, x8, no NCSI
Riser 2: This riser is I/O-centric and controlled by CPU 2. Riser 2A supports three PCIe slots:
• Slot 4 is full-height, 3/4 length, x8
• Slot 5 is full-height, full-length, x16
• Slot 6 is full-height, full-length, x8
Riser 3: Riser 3A, 3B, and 3C are not supported.

Cisco UCS C240 M6 Server 12 NVMe Drive Rear Panel, I/O Centric
The Cisco UCS C240 M6 12 NVMe version has a rear configuration option for either I/O (I/O Centric) or storage (Storage Centric). The I/O Centric version of the server offers PCIe slots, and the Storage Centric version offers drive bays.
The following illustration shows the rear panel features for the I/O Centric version of the Cisco UCS C240
M6N.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs.

The following table shows the riser options for this version of the server.

1 Riser 1A 2 Riser 2A

3 Riser 3A, 3B, or 3C (Not Supported)


Table 4: Cisco UCS C240 M6 12 SFF NVMe (UCSC-C240-M6N)

Riser Options

Riser 1: This riser is I/O-centric and controlled by CPU 1 or CPU 2. Riser 1A supports three PCIe slots:
• Slot 1 is full-height, 3/4 length, x8, NCSI
• Slot 2 is full-height, full-length, x16, NCSI
• Slot 3 is full-height, full-length, x8, no NCSI
Riser 2: This riser is I/O-centric and controlled by CPU 2. Riser 2A supports three PCIe slots:
• Slot 4 is full-height, 3/4 length, x8, NCSI
• Slot 5 is full-height, full-length, x16, NCSI
• Slot 6 is full-height, full-length, x8
Riser 3: Riser 3A, 3B, and 3C are not supported.

Cisco UCS C240 M6 Server 24 Drive Rear Panel, Storage Centric


The Cisco UCS C240 M6 24 SAS/SATA SFF version has a rear configuration option for either I/O (I/O Centric) or storage (Storage Centric). The I/O Centric version of the server offers PCIe slots, and the Storage Centric version offers drive bays.
The following illustration shows the rear panel features for the Storage Centric version of the Cisco UCS
C240 M6SX.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs.

The following table shows the riser options for this version of the server.

1 Riser 1B 2 Riser 2A (Not supported)

3 Riser 3B, 3C -


Table 5: Cisco UCS C240 M6 24 SFF SAS/SATA/NVMe (UCSC-C240-M6SX)

Riser Options

Riser 1: This riser is Storage-centric and controlled by CPU 2. Riser 1B supports two SFF SAS/SATA/NVMe drives:
• Slot 1 is reserved
• Slot 2 (drive bay 102), x4
• Slot 3 (drive bay 101), x4
When the server uses a hardware RAID controller card, SAS/SATA HDDs or SSDs, or NVMe PCIe SSDs, are supported in the rear bays.
Riser 2: Riser 2A is not supported for a Storage-centric version of the server. This riser is I/O-centric and controlled by CPU 2.
Riser 3: This riser is controlled by CPU 2. Riser 3B has two PCIe slots that can support two SFF drives (NVMe):
• Slot 7 (drive bay 104), x4
• Slot 8 (drive bay 103), x4
When the server uses a hardware RAID controller card, SAS/SATA HDDs or SSDs, or NVMe PCIe SSDs, are supported in the rear bays.
Riser 3C has two PCIe slots that can support a GPU:
• Slot 7 supports one full-height, full-length, double-wide GPU, x16
• Slot 8 is blocked when a double-wide GPU is installed

Cisco UCS C240 M6 Server 12 Drive Rear Panel, Storage Centric


The Cisco UCS C240 M6 12 SAS/SATA SFF version has a rear configuration option for either I/O (I/O Centric) or storage (Storage Centric). The I/O Centric version of the server offers PCIe slots, and the Storage Centric version offers drive bays.
The following illustration shows the rear panel features for the Storage Centric version of the Cisco UCS
C240 M6S.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs.


The following table shows the riser options for this version of the server.

1 Riser 1A 2 Riser 2A

3 Riser 3B, 3C -

Table 6: Cisco UCS C240 M6 12 SFF SAS/SATA (UCSC-C240-M6S)

Riser Options

Riser 1: Riser 1A is not supported for a Storage Centric version of the server.
Riser 2: Riser 2A is not supported for a Storage Centric version of the server.
Riser 3: This riser is Storage-centric and controlled by CPU 2. Riser 3B can support two SFF drives (SAS/SATA/NVMe):
• Slot 7 (drive bay 104), x4
• Slot 8 (drive bay 103), x4
When you are using a HWRAID controller, SAS/SATA HDDs or SSDs, or NVMe PCIe SSDs, are supported in the rear bays.
Riser 3C has two PCIe slots that can support a GPU:
• Slot 7 supports one full-height, full-length, double-wide GPU, x16
• Slot 8 is blocked when a double-wide GPU is installed

Cisco UCS C240 M6 Server 24 NVMe Drive Rear Panel, Storage Centric
The Cisco UCS C240 M6 24 NVMe SFF version has a rear configuration option for either I/O (I/O Centric) or storage (Storage Centric). The I/O Centric version of the server uses PCIe slots, and the Storage Centric version offers drive bays.
The following illustration shows the rear panel features for the Storage Centric version of the Cisco UCS
C240 M6SN.


• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs.

The following table shows the riser options for this version of the server.

1 Riser 1B 2 Riser 2A (Not Supported)

3 Riser 3A, 3B, or 3C (Not Supported)

Table 7: Cisco UCS C240 M6 24 SFF NVMe (UCSC-C240M6-SN)

Riser Options

Riser 1B: This riser is Storage-centric and controlled by CPU 2. Riser 1B supports two NVMe drives:
• Slot 1 is reserved
• Slot 2 (drive bay 102), x4
• Slot 3 (drive bay 101), x4
When the server uses a hardware RAID controller card, NVMe PCIe SSDs are supported in the rear bays.
Riser 2: Riser 2A does not support storage devices.
Riser 3: Riser 3A, 3B, and 3C are not supported.

Cisco UCS C240 M6 Server 12 NVMe Drive Rear Panel, Storage Centric
The Cisco UCS C240 M6 12 NVMe SFF version has a rear configuration option for either I/O (I/O Centric) or storage (Storage Centric). The I/O Centric version of the server uses PCIe slots, and the Storage Centric version offers drive bays.
The following illustration shows the rear panel features for the Storage Centric version of the Cisco UCS
C240 M6N.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs.


The following table shows the riser options for this version of the server.

1 Riser 1B 2 Riser 2A (Not Supported)

3 Riser 3A, 3B, or 3C (Not Supported)

Table 8: Cisco UCS C240 M6 12 SFF NVMe (UCSC-C240-M6N)

Riser Options

Riser 1B: This riser is Storage-centric and controlled by CPU 2. Riser 1B supports two NVMe drives:
• Slot 1 is reserved
• Slot 2 (drive bay 102), x4
• Slot 3 (drive bay 101), x4
When the server uses a hardware RAID controller card, NVMe PCIe SSDs are supported in the rear bays.
Riser 2: Riser 2A does not support storage devices.
Riser 3: Riser 3A, 3B, and 3C are not supported.

Cisco UCS C240 M6 Server 12 LFF Drive Rear Panel


Unlike the SFF versions of the server, the Cisco UCS C240 M6 LFF has one supported hardware configuration
on the rear panel. The rear panel hardware configuration offers both PCIe slots and drive bays.
The following illustration shows the rear panel features for the I/O Centric version of the Cisco UCS C240
LFF.
• For features common to all versions of the server, see Common Rear Panel Features.
• For definitions of LED states, see Rear-Panel LEDs.


The following table shows the riser options for this version of the server.

1 Riser 1B 2 Riser 2A

3 Riser 3B -

Table 9: Cisco UCS C240 M6 12 LFF (UCSC-C240-M6L)

Riser Options

Riser 1: This riser is controlled by CPU 1. Riser 1B supports three PCIe slots, numbered bottom to top:
• Slot 1 is reserved for a drive controller.
• Slot 2 supports one drive (drive bay 102), x4
• Slot 3 supports one drive (drive bay 101), x4
When using a hardware RAID controller card or SAS HBA in the server, SAS/SATA HDDs or SSDs are supported in the rear bays. NVMe PCIe SSDs are supported in the rear bays without the need for a RAID controller.
Riser 2: This riser is controlled by CPU 2. Riser 2A supports three PCIe slots:
• Slot 4 is full-height, 3/4 length, x8
• Slot 5 is full-height, full-length, x16
• Slot 6 is full-height, full-length, x8


Riser 3: This riser is controlled by CPU 2. Riser 3B supports two drives:
• Slot 7 (drive bay 104), x4
• Slot 8 (drive bay 103), x4
When the server uses a hardware RAID controller card or SAS HBA, SAS/SATA HDDs or SSDs are supported in the rear bays. NVMe PCIe SSDs are supported in the rear bays without the need for a RAID controller.

PCIe Risers
The following different PCIe riser options are available.

Riser 1 Options
This riser supports two options, Riser 1A and 1B.


Riser 1A callouts:
1 PCIe slot 1, full height, ¾ length, x8, NCSI
2 PCIe slot 2, full height, full length, x16, NCSI, GPU capable
3 PCIe slot 3, full height, full length, x8, no NCSI
4 Edge connectors


Riser 1B callouts:
1 PCIe slot 1, reserved for drive controller
2 Drive bay 102, x4
3 Drive bay 101, x4
4 Edge connectors


Riser 2
This riser supports one option, Riser 2A.

Riser 2A callouts:
1 PCIe slot 4, full height, ¾ length, x8
2 PCIe slot 5, full height, full length, x16
3 PCIe slot 6, full height, full length, x16
4 Edge connectors

Riser 3
This riser supports three options, 3A, 3B, and 3C.
Riser 3A callouts:
1 PCIe slot 7, full height, full length, x8
2 PCIe slot 8, full height, full length, x16
3 Edge connectors
Riser 3B callouts:
1 PCIe slot 7, drive bay 104, x4
2 PCIe slot 8, drive bay 103, x4
3 Edge connectors


Riser 3C callouts:
1 PCIe slot 7, supports one full height, full length, double-wide GPU (slot 7 only), x16
2 Edge connectors

Summary of Server Features


The following tables list a summary of the server features for the LFF version and SFF versions of the server.


Table 10: Server Features, SFF

Feature Description

Chassis Two rack-unit (2RU) chassis

Central Processor One or two 3rd Generation Intel Xeon processors.

Chipset Intel® C621 series chipset

Memory 32 slots for registered DIMMs (RDIMMs) or load-reduced DIMMs (LR DIMMs), with support for Intel® Optane™ Persistent Memory Modules (DCPMMs)

Multi-bit error protection Multi-bit error protection is supported

Video The Cisco Integrated Management Controller (CIMC) provides video using the Matrox G200e video/graphics controller:
• Integrated 2D graphics core with hardware acceleration
• Embedded DDR memory interface supports up to 512 MB
of addressable memory (8 MB is allocated by default to video
memory)
• Supports display resolutions up to 1920 x 1200 16bpp @
60Hz
• High-speed integrated 24-bit RAMDAC
• Single lane PCI-Express host interface running at Gen 1
speed

Network and management I/O Rear panel:


• One 1-Gb Ethernet dedicated management port (RJ-45
connector)
• Two 1-Gb/10-Gb BASE-T Ethernet LAN ports (RJ-45
connectors)
The dual LAN ports can support 1 Gbps and 10 Gbps,
depending on the link partner capability.
• One RS-232 serial port (RJ-45 connector)
• One VGA video connector port (DB-15 connector)
• Two USB 3.0 ports

Front panel:
• One front-panel keyboard/video/mouse (KVM) connector
that is used with the KVM breakout cable. The breakout
cable provides two USB 2.0, one VGA, and one DB-9 serial
connector.


Power Up to two of the following hot-swappable power supplies:


• 1050 W (AC)
• 1050 W (DC)
• 1600 W (AC)
• 2300 W (AC)

One power supply is mandatory; one more can be added for 1 + 1 redundancy, as long as both power supplies are the same type and wattage.
For additional information, see Supported Power Supplies.

ACPI The advanced configuration and power interface (ACPI) 4.0 standard is supported.

Front Panel The front panel controller provides status indications and control
buttons

Cooling Six hot-swappable fan modules for front-to-rear cooling.

InfiniBand The PCIe bus slots in this server support the InfiniBand
architecture.

Expansion Slots For the SFF versions of the server, three half-height riser slots
are supported:
• Riser 1A (3 PCIe slots)
• Riser 1B (2 drive bays)
• Riser 2A (3 PCIe slots)
• Riser 3A (2 PCIe slots)
• Riser 3B (2 drive bays)
• Riser 3C (1 full-length, double-wide GPU)

Note Not all risers are available in every server configuration option.
One or two dedicated slots (depending on the server type) for a SATA interposer or storage controller(s).


Interfaces Rear panel:


• One 1Gbase-T RJ-45 management port
• Two 10Gbase-T LOM ports
• One RS-232 serial port (RJ45 connector)
• One DB15 VGA connector
• Two USB 3.0 port connectors
• One flexible modular LAN on motherboard (mLOM) slot
that can accommodate various interface cards

Front panel supports one KVM console connector that supplies:


• two USB 2.0 connectors,
• one VGA DB15 video connector
• one serial port (RS232) RJ45 connector


Internal Storage Devices
• UCSC-C240-M6S:
• Up to 12 SFF SAS/SATA hard drives (HDDs) or
SAS/SATA solid state drives (SSDs)
• Optionally, up to four SFF NVMe PCIe SSDs. These drives must be placed in front drive bays 1, 2, 3, and 4 only, and can be mixed with SAS/SATA drives. The rest of the bays (5 - 12) can be populated with SAS/SATA SSDs or HDDs. Two CPUs are required in a server that has any number of NVMe drives.
• Optionally, one front-facing DVD drive
• Optionally, up to two SFF rear-facing
SAS/SATA/NVMe drives
• If using a SATA Interposer, up to 8 SATA-only drives
can be installed (slots 1-8 only).

• UCSC-C240-M6SX:
• Up to 24 front SFF SAS/SATA hard drives (HDDs) or
SAS/SATA solid state drives (SSDs).
• Optionally, up to four front SFF NVMe PCIe SSDs.
These drives must be placed in front drive bays 1, 2, 3,
and 4 only. The rest of the bays (5 - 24) can be
populated with SAS/SATA SSDs or HDDs. Two CPUs
are required in a server that has any number of NVMe
drives.
• Optionally, up to four SFF rear-facing
SAS/SATA/NVMe drives

• UCSC-C240-M6N:
• Up to 12 front NVMe (only) drives
• Optionally, up to 2 rear NVMe (only) drives
• Two CPUs are required in a server that has any number
of NVMe drives.

• UCSC-C240-M6SN:
• Up to 24 front NVMe drives (only).
• Optionally, up to 2 rear NVMe drives (only)
• Two CPUs are required when choosing NVMe SSDs

• Other Storage:
• A mini-storage module connector on the motherboard supports a boot-optimized RAID controller carrier that holds up to two SATA M.2 SSDs. Mixing different capacity SATA M.2 SSDs is not supported.

• Optional 2 M.2 RAID cards for use as boot volumes.

Integrated Management Processor Baseboard Management Controller (BMC) running Cisco Integrated Management Controller (CIMC) firmware.
Depending on your CIMC settings, the CIMC can be accessed through the 1GE dedicated management port, the 1GE/10GE LOM ports, or a Cisco virtual interface card (VIC).
CIMC manages certain components within the server, such as the Cisco 12G SAS HBA.

Storage Controllers One SATA Interposer board, 12G RAID HBA, or one or two 12G SAS HBAs plug into a dedicated slot. (A usable-capacity sketch for the RAID levels listed below follows this table.)
• SATA Interposer board:
• AHCI support for up to eight SATA-only drives (slots 1 - 8)
• Supported only on the UCSC-C240-M6S server
• Cisco 12G RAID controller with 4GB FBWC (for the UCSC-C240-M6S server):
• RAID support (RAID 0, 1, 5, 6, 10, 50 and 60) and SW RAID0
• Supports up to 14 internal SAS/SATA drives
• Cisco M6 12G SAS RAID controller with 4GB FBWC (for the UCSC-C240-M6SX server):
• RAID support (RAID 0, 1, 5, 6, 10, 50, and 60) and SRAID0
• Supports up to 28 internal SAS/SATA drives
• Cisco M6 12G SAS HBA (for UCSC-C240-M6S and UCSC-C240-M6SX servers):
• RAID support 0, 1, and 10
• JBOD/Pass-through Mode support
• Each HBA supports up to 14 SAS/SATA internal drives


Modular LAN over Motherboard (mLOM) slot The dedicated mLOM slot on the motherboard can flexibly
accommodate the following cards:
• Cisco Virtual Interface Cards (VICs)
• Quad Port Intel i350 1GbE RJ45 Network Interface Card
(NIC)

Note The four Intel i350 ports are provided on an optional card that plugs into the mLOM slot. These ports are separate from the two embedded LAN ports on the motherboard.

Server Management Cisco Intersight provides server management.

CIMC Cisco Integrated Management Controller (CIMC) 4.2(1) or later is required for the server.
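The RAID levels listed in the Storage Controllers row above trade capacity for redundancy in standard ways. The Python sketch below computes approximate usable capacity for equal-size drives; it is generic RAID arithmetic rather than a Cisco utility, and it ignores controller metadata overhead.

def usable_capacity_tb(level, drive_count, drive_tb, span_groups=2):
    """Approximate usable capacity for equal-size drives at a given RAID level.

    span_groups applies only to RAID 50/60 (the number of spanned parity sets).
    """
    if level == 0:
        data_drives = drive_count
    elif level in (1, 10):
        data_drives = drive_count // 2               # mirrored pairs
    elif level == 5:
        data_drives = drive_count - 1                # one parity drive
    elif level == 6:
        data_drives = drive_count - 2                # two parity drives
    elif level == 50:
        data_drives = drive_count - span_groups      # one parity drive per span
    elif level == 60:
        data_drives = drive_count - 2 * span_groups  # two parity drives per span
    else:
        raise ValueError('unsupported RAID level')
    return data_drives * drive_tb

# Example: 14 x 2.4 TB SAS drives behind the 12G RAID controller.
for level in (0, 1, 5, 6, 10, 50, 60):
    print(level, usable_capacity_tb(level, 14, 2.4))

For example, 14 x 2.4 TB drives at RAID 6 leave roughly 28.8 TB usable.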

Table 11: Server Features, LFF

Feature Description

Chassis Two rack-unit (2RU) chassis

Central Processor One or two 3rd Generation Intel Xeon processors.

Chipset Intel® C621 series chipset

Memory 32 slots for registered DIMMs (RDIMMs) or load-reduced DIMMs (LR DIMMs), with support for Intel® Optane™ Persistent Memory Modules (DCPMMs)

Multi-bit error protection Multi-bit error protection is supported

Video The Cisco Integrated Management Controller (CIMC) provides video using the Matrox G200e video/graphics controller:
• Integrated 2D graphics core with hardware acceleration
• DDR2/3 memory interface supports up to 512 MB of
addressable memory (8 MB is allocated by default to video
memory)
• Supports display resolutions up to 1920 x 1200 16bpp @
60Hz
• High-speed integrated 24-bit RAMDAC
• Single lane PCI-Express host interface running at Gen 1
speed


Network and management I/O Rear panel:


• One 1Gbase-T RJ-45 management port
• Two 10Gbase-T LOM ports
• One RS-232 serial port (RJ-45 connector)
• One DB15 VGA connector
• Two USB 3.0 ports
• One flexible modular LAN on motherboard (mLOM) slot
that can accommodate various interface cards

Front panel supports one KVM console connector that supplies:


• two USB 2.0 connectors,
• one VGA DB15 video connector
• one serial port (RS232) RJ45 connector

Power Up to two of the following hot-swappable power supplies:


• 1050 W (AC)
• 1050 W (DC)
• 1600 W (AC)
• 2300 W (AC)

One power supply is mandatory; one more can be added for 1 + 1 redundancy. With two power supplies, both must be the same type and wattage.

ACPI The advanced configuration and power interface (ACPI) 4.0 standard is supported.

Front Panel The front panel controller provides status indications and control
buttons

Cooling Six hot-swappable fan modules for front-to-rear cooling.

InfiniBand The PCIe bus slots in this server support the InfiniBand
architecture.

Expansion Slots • Riser 1B (1 PCIe slot reserved for a drive controller and 2
HDD slots)
• Riser 2A (3 PCIe slots)
• Riser 3B (2 HDD slots)


Interfaces Rear panel:


• One 1Gbase-T RJ-45 management port
• Two 10Gbase-T LOM ports
• One RS-232 serial port (RJ45 connector)
• One DB15 VGA connector
• Two USB 3.0 port connectors
• One flexible modular LAN on motherboard (mLOM) slot
that can accommodate various interface cards

Front panel: Supports one KVM console connector that supplies two USB 2.0 connectors, one VGA DB15 video connector, and one serial port (RS232) RJ45 connector.

Internal Storage Devices • Large Form Factor (LFF) drives, with a 12-drive backplane. The server can hold a maximum of:
• 12 LFF 3.5-inch front-loading SAS-only hard drives (HDDs)
• As an option, up to four 3.5-inch mid-plane SAS-only LFF HDDs
• Optionally, up to four rear-facing SAS/SATA HDDs/SSDs, or up to four rear-facing NVMe PCIe SSDs
• A mini-storage module connector on the motherboard supports a boot-optimized RAID controller carrier that holds up to two SATA M.2 SSDs. Mixing different capacity SATA M.2 SSDs is not supported.


Storage Controllers The 12G RAID HBA or 12G SAS HBA plugs into slot 1 (bottom slot) of riser 1B.
• Cisco M6 12G SAS RAID Controller with 4GB FBWC:
• RAID support (RAID 0, 1, 5, 6, 10, 50 and 60) and SRAID0
• Supports up to 32 internal SAS/SATA drives
• Plugs into drive slot 1 of riser 1B
• Cisco M6 12G SAS HBA:
• RAID 0, 1, and 10 support
• JBOD/Pass-through Mode support
• Supports up to 32 SAS/SATA internal drives
• Plugs into slot 1 of riser 1B

Integrated Management Processor Baseboard Management Controller (BMC) running Cisco Integrated Management Controller (CIMC) firmware.
Depending on your CIMC settings, the CIMC can be accessed through the 1GE dedicated management port, the 1GE/10GE LOM ports, or a Cisco virtual interface card (VIC). A hedged example of reaching the CIMC over its management port follows this table.
CIMC manages certain components within the server, such as the Cisco 12G SAS HBA.

Modular LAN over Motherboard (mLOM) slot The dedicated mLOM slot on the motherboard can flexibly
accommodate the following cards:
• Cisco Virtual Interface Cards (VICs)
• Quad Port Intel i350 1GbE RJ45 Network Interface Card
(NIC)

Note The four Intel i350 ports are provided on an optional card that plugs into the mLOM slot. These ports are separate from the two LAN ports embedded on the motherboard.

Server Management Cisco Intersight provides server management.

CIMC Cisco Integrated Management Controller 4.2(1) or later is required for this server.
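As noted in the Integrated Management Processor row above, the CIMC can be reached over the dedicated management port. A minimal Python sketch of querying it through the standards-based Redfish interface that recent CIMC firmware exposes is shown below; the IP address and credentials are placeholders, the exact resource layout depends on the firmware version, and this is an illustration rather than a supported procedure.

import requests

CIMC_IP = "192.0.2.10"           # placeholder management-port address
AUTH = ("admin", "password")     # placeholder credentials

# Query the DMTF Redfish service root for the systems collection.
# verify=False only because many CIMCs ship with self-signed certificates.
response = requests.get(
    f"https://{CIMC_IP}/redfish/v1/Systems",
    auth=AUTH,
    verify=False,
    timeout=10,
)
response.raise_for_status()
for member in response.json().get("Members", []):
    print(member.get("@odata.id"))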

Overview
39
Overview
Serviceable Component Locations

Serviceable Component Locations


This topic shows the locations of the field-replaceable components and service-related items. The view in the
following figure shows the server with the top cover removed.
Figure 7: Cisco UCS C240 M6 Server, Serviceable Component Locations

1 Front-loading drive bays
2 Cooling fan modules (six, hot-swappable)
3 DIMM sockets on motherboard (16 per CPU). See DIMM Population Rules and Memory Performance Guidelines for DIMM slot numbering.
Note An air baffle rests on top of the DIMMs and CPUs when the server is operating. The air baffle is not displayed in this illustration.
4 CPU socket 1
5 CPU socket 2
6 M.2 RAID Controller


7 PCIe riser 3 (PCIe slots 7 and 8, numbered from bottom to top), with the following options:
• 3A (Default Option): Slots 7 (x16 mechanical, x8 electrical) and 8 (x16 mechanical, x8 electrical). Both slots can accept a full height, full length GPU card.
• 3B (Storage Option): Slots 7 (x24 mechanical, x4 electrical) and 8 (x24 mechanical, x4 electrical). Both slots can accept 2.5-inch SFF universal HDDs.
• 3C (GPU Option): Slot 7 (x16 mechanical, x16 electrical) and slot 8 empty (NCSI support limited to one slot at a time). Slot 7 can support a full height, full length GPU card.
8 PCIe riser 2 (PCIe slots 4, 5, and 6, numbered from bottom to top), with the following option:
• 2A (Default Option): Slot 4 (x24 mechanical, x8 electrical) supports a full height, ¾ length card; Slot 5 (x24 mechanical, x16 electrical) supports a full height, full length GPU card; Slot 6 (x16 mechanical, x8 electrical) supports a full height, full length card.
9 PCIe riser 1 (PCIe slots 1, 2, and 3, numbered from bottom to top), with the following options:
• 1A (Default Option): Slot 1 (x24 mechanical, x8 electrical) supports a full height, ¾ length card; Slot 2 (x24 mechanical, x16 electrical) supports a full height, full length GPU card; Slot 3 (x16 mechanical, x8 electrical) supports a full height, full length card.
• 1B (Storage Option): Slot 1 (x24 mechanical, x8 electrical) supports a full height, ¾ length card; Slot 2 (x4 electrical) supports a 2.5-inch SFF universal HDD; Slot 3 (x4 electrical) supports a 2.5-inch SFF universal HDD.

The Technical Specifications Sheets for all versions of this server, which include supported component part
numbers, are at Cisco UCS Servers Technical Specifications Sheets (scroll down to Technical Specifications).
