Lenovo ThinkAgile MX Certified Configurations for Azure Stack HCI - V1 Servers
Last Update: April 2023
Dave Feisthammel
Mike Miller
David Ye
This document provides background information regarding the Microsoft Windows Server
Software-Defined (WSSD) program for Windows Server 2016 and the Microsoft Azure Stack
HCI program for Windows Server 2019, as well as the benefits of deploying certified
configurations based on Lenovo® ThinkAgile™ MX Certified Nodes and Appliances. We
focus on details of current Lenovo certified configurations for Azure Stack HCI that are based
on ThinkSystem™ SE350 and SR650 servers, including processor, memory, network, and
storage components available for each cluster node. This includes the following solutions:
ThinkAgile MX3520-H Hybrid Appliance
ThinkAgile MX3520-F All-Flash Appliance
ThinkAgile MX Certified Node on SR650
ThinkAgile MX1020-H Hybrid Appliance
ThinkAgile MX1020-F All-Flash Appliance
ThinkAgile MX1021-H Hybrid Certified Node
ThinkAgile MX1021-F All-Flash Certified Node
Looking for Lenovo ThinkAgile MX solutions that are based on our V2 servers? Check our
companion document at http://lenovopress.com/lp1520.
At Lenovo Press, we bring together experts to produce technical publications around topics of
importance to you, providing information and best practices for using Lenovo products and
solutions to solve IT challenges. See our publications at http://lenovopress.com.
Do you have the latest version? We update our papers from time to time, so check
whether you have the latest version of this document by clicking the Check for Updates
button on the front page of the PDF. Pressing this button will take you to a web page that
will tell you if you are reading the latest version of the document and give you a link to the
latest if needed. While you’re there, you can also sign up to get notified via email whenever
we make an update.
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Microsoft HCI certification overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
ThinkAgile MX Series solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Lenovo certified configurations for Microsoft Azure Stack HCI . . . . . . . . . . . . . . . . . . . . . . . 5
Component selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Network switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Other recommendations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Change History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Introduction
Azure Stack HCI, a host operating system from Microsoft, is Microsoft’s HCI solution for customers who wish to run workloads on-premises and extend easily to Microsoft Azure for hybrid capabilities such as backup, site recovery, storage, cloud-based monitoring, and more.
Whether you prefer to deploy the Azure Stack HCI operating system or take advantage of
Azure Stack HCI functional capabilities that are built into Windows Server, Lenovo ThinkAgile
MX solutions provide hardware that is certified for use in both scenarios.
Lenovo has worked closely with Microsoft for many years to ensure our products perform
smoothly and reliably with Microsoft operating systems and software. Our customers can
leverage the benefits of our partnership with Microsoft by taking advantage of HCI solutions
that have been certified under either the Microsoft Windows Server Software-Defined
(WSSD) program for Windows Server 2016 or the Microsoft Azure Stack HCI program for
Windows Server 2019.
Deploying Lenovo certified configurations for Microsoft HCI solutions takes the guesswork out
of system configuration. Whether you intend to build a converged or hyperconverged S2D
cluster, you can rest assured that purchasing a certified configuration will provide a rock solid
foundation with minimal obstacles along the way. These node configurations are certified by
Lenovo and validated by Microsoft for out-of-the-box optimization. Using the Lenovo
ThinkAgile MX Certified Node configurations presented in this document, you can get up and
running without lengthy design and build time, knowing that the solution will work as intended.
This document briefly discusses the Microsoft HCI certification programs, and then presents
the Lenovo certified configurations that have been validated for use in a Microsoft HCI
solution under these programs. Details of each node configuration are specified, including all
key components. Since there is some latitude for component customization in these
configurations, the rules for customization are also described.
Microsoft HCI certification overview
From a customer perspective, perhaps the greatest value of the WSSD program is that it reduces the risks and unknowns associated with deploying an HCI solution using “off the shelf” components. To earn certification in the WSSD program, Lenovo has met
or exceeded multiple criteria set by Microsoft for quality, accelerated time to value,
out-of-the-box optimization, and expedited problem resolution.
The Microsoft WSSD program is an intense certification program which includes the following
requirements for hardware components:
Servers and components must have Windows Server 2016 logo certification
Key components must have SDDC “Additional Qualifiers” certification (SDDC-AQ)
– Servers
– Network adapters
– Storage adapters (SAS/SATA HBAs)
– Storage devices (NVMe, SSD, and HDD)
For more information about the Microsoft WSSD program, visit the following URL:
https://docs.microsoft.com/en-us/windows-server/sddc
Many of the certification requirements from the WSSD program have been carried over into
the Azure Stack HCI program, which begins with Windows Server 2019 logo certification.
Each key hardware component must pass rigorous testing procedures and be certified as an
Azure Stack HCI component before it can be included in an Azure Stack HCI solution.
In addition to the specific certification requirements that must be met by the individual
hardware components, Microsoft requires end-to-end solution validation for each
configuration to be certified. This involves running the fully configured HCI solution for many
hours, while putting it through various usage and potential failure scenarios.
What is unique about Lenovo certified configurations for Microsoft HCI solutions is our
rigorous evaluation process to select the best components from our existing Lenovo product
portfolio. The main objective is to ensure our customers will have great confidence in our HCI
solutions for a production environment.
For more information about the Microsoft Azure Stack HCI program, visit the following URL:
https://docs.microsoft.com/en-us/windows-server/azure-stack-hci
To learn why deploying a certified configuration is an optimal path to success for S2D deployment, read the two-part Microsoft blog post at the following URL:
https://cloudblogs.microsoft.com/windowsserver/2018/02/20/the-technical-value-of-wssd-validated-hci-solutions-part-1
ThinkAgile MX Appliance
Lenovo ThinkAgile MX Appliances map to Microsoft “Azure Stack HCI Integrated Systems.”
These solutions are based on exactly the same hardware as ThinkAgile MX Certified Nodes.
The only difference between a ThinkAgile MX Certified Node and an Appliance based on the same server (for example, the ThinkSystem SR650 rack server) is that the Appliance configuration includes the following items:
Azure Stack HCI operating system is preloaded before shipping to the customer
ThinkAgile Advantage Support for 3 years (can be uplifted to a longer term, quicker
response time, or both via Premier support)
The remainder of this document focuses on describing the existing Lenovo configurations that
have been certified under the Microsoft HCI certification programs and the details of key
components contained in each configuration. The purpose of this document is to provide
guidance for Lenovo customers and technical pre-sales personnel during the process of
configuring a Microsoft certified HCI solution for production usage. This document assumes
the reader has prior knowledge of Microsoft HCI technologies, including S2D.
Lenovo certified configurations for Microsoft Azure Stack HCI
Refer to “Special considerations for ThinkAgile MX1020 and MX1021 on SE350” on page 20 for information related to the unique characteristics of this Edge Server when used as an Azure Stack HCI cluster node.
Table 1 lists the key components of the example configurations for S2D that have been
certified under the Microsoft WSSD program for Windows Server 2016 and the Azure Stack
HCI program for Windows Server 2019. Depending on the configuration, the number of nodes
can range from a minimum of 2 to a maximum of 16. Note that we regularly certify additional
configurations as time and resources allow.
The format of the configuration name follows a specific pattern. The first two or three
alphabetic characters define the storage types included in the configuration (“N” for NVMe,
“S” for SSD, and “H” for HDD). The next three or four alphanumeric characters define the total
raw storage capacity of the node (e.g. “80T” indicates a total capacity of 80TB per node). The
next numeric character defines the configuration sequence for the given component
parameters. For example, if there are two certified configurations that contain NVMe and
HDD storage devices with a total raw capacity of 80TB per node, they would be referred to as
NH80T1a and NH80T2a. The final letter represents the revision of that particular
configuration.
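As an illustration of the naming pattern described above, the following Python sketch (our own example, not part of any Lenovo tool; the function and pattern names are hypothetical) decodes a configuration name into its component fields:

import re

# Decode a ThinkAgile MX configuration name such as "NH80T1a" using the
# naming pattern described above.
NAME_PATTERN = re.compile(
    r"^(?P<storage>[NSH]{2,3})"   # storage types: N=NVMe, S=SSD, H=HDD
    r"(?P<capacity>\d{1,3})T"     # total raw capacity per node, in TB
    r"(?P<sequence>\d)"           # configuration sequence number
    r"(?P<revision>[a-z])$"       # configuration revision letter
)

STORAGE_TYPES = {"N": "NVMe", "S": "SSD", "H": "HDD"}

def decode_config_name(name):
    # Return the fields encoded in a certified configuration name.
    match = NAME_PATTERN.match(name)
    if match is None:
        raise ValueError(f"{name!r} does not follow the naming pattern")
    fields = match.groupdict()
    fields["storage"] = [STORAGE_TYPES[c] for c in fields["storage"]]
    fields["capacity_tb"] = int(fields.pop("capacity"))
    return fields

# NH80T1a: NVMe cache plus HDD capacity, 80TB raw per node, sequence 1, revision a
print(decode_config_name("NH80T1a"))
# NN16T1a: all-NVMe, 16TB raw per node
print(decode_config_name("NN16T1a"))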
Table 1 Key components of the Lenovo certified configurations for Azure Stack HCI (see note 1)
SH40T1a (hybrid): ThinkAgile MX, 2 CPUs, 192GB-1.5TB memory; cache: 4 x 800GB SSD (FC B170); capacity: 10 x 4TB HDD (FC AUU8); storage controller: 430-16i (FC AUNM); network: Mellanox CX-4 2-port 25GbE (FC AUAJ, see note 2); nodes: 2-16
SH60T1a (hybrid): ThinkAgile MX, 2 CPUs, 192GB-1.5TB memory; cache: 4 x 1.6TB SSD (FC B171); capacity: 10 x 6TB HDD (FC AUUA); storage controller: 430-16i (FC AUNM); network: Mellanox CX-4 2-port 25GbE (FC AUAJ, see note 2); nodes: 2-16
NH80T1a (hybrid): ThinkAgile MX, 2 CPUs, 192GB-1.5TB memory; cache: 4 x 3.2TB NVMe U.2 (FC B2XG); capacity: 10 x 8TB HDD (FC AUU9); storage controller: 430-16i (FC AUNM); network: Mellanox CX-4 2-port 25GbE (FC AUAJ, see note 2); nodes: 2-16
NH120T1a (hybrid): ThinkAgile MX, 2 CPUs, 192GB-1.5TB memory; cache: 4 x 3.2TB NVMe U.2 (FC B2XG); capacity: 10 x 12TB HDD (FC B118); storage controller: 430-16i (FC AUNM); network: Mellanox CX-4 2-port 25GbE (FC AUAJ, see note 2); nodes: 2-16
NS61T1a (all-flash): ThinkAgile MX, 2 CPUs, 192GB-1.5TB memory; cache: 4 x 750GB Optane NVMe U.2 (FC B2ZJ); capacity: 16 x 3.84TB SSD (FC B49C); storage controller: 3 x 430-8i (FC AUNL); network: Mellanox CX-4 2-port 100GbE (FC ATRP, see note 3); nodes: 4-16
NS77T1a (all-flash): ThinkAgile MX, 2 CPUs, 192GB-1.5TB memory; cache: 4 x 3.2TB NVMe U.2 (FC B11K); capacity: 20 x 3.84TB SSD (FC B49C); storage controller: 3 x 430-8i (FC AUNL); network: Mellanox CX-4 2-port 100GbE (FC ATRP, see note 3); nodes: 2-16
NN38T1a (all-flash, all-NVMe): ThinkAgile MX, 2 CPUs, 192GB-1.5TB memory; storage: 12 x 3.2TB NVMe U.2 (FC B11K); storage controller: 3 x 430-8i (FC AUNL); network: Mellanox CX-4 2-port 100GbE (FC ATRP, see note 3); nodes: 2-4
NN16T1a (all-flash, all-NVMe): ThinkAgile MX1021, 1 CPU, 64-256GB memory; storage: 8 x 2TB NVMe (FC B75E); storage controller: N/A; network: SE350 10GbE SFP+ 2-port Wired Network Module (FC B6F4, see note 4); nodes: 2-4 (see note 5)
NN12T1a (all-flash): ThinkAgile MX1021, 1 CPU, 64-256GB memory; cache: 2 x 650GB High Endurance NVMe (FC B75C); capacity: 6 x 2TB NVMe (FC B75E); storage controller: N/A; network: SE350 10GbE SFP+ 2-port Wired Network Module (FC B6F4, see note 4); nodes: 2-4 (see note 5)
SS08T1a (all-flash): ThinkAgile MX1021, 1 CPU, 64-256GB memory; storage: 4 x 1.92TB SATA SSD, non-SED (FC B75B); storage controller: onboard SATA controller; network: SE350 10GbE SFP+ 2-port Wired Network Module (FC B6F4, see note 4); nodes: 2-4 (see note 5)
Notes:
1. This list is not exhaustive and can be customized. Refer to “Component selection” on page 22 for information about customizing these configurations. All configurations use dual 480GB M.2 SSDs configured as a RAID-1 mirrored volume for OS boot.
2. Mellanox CX-4 1-port 40GbE (FC ATRN) and ThinkSystem Intel E810-DA2 10/25GbE SFP28 2-Port PCIe Ethernet Adapter (FC BCD6) are also certified for this configuration.
3. Mellanox CX-4 2-port 25GbE (FC AUAJ) and ThinkSystem Intel E810-DA2 10/25GbE SFP28 2-Port PCIe Ethernet Adapter (FC BCD6) are also certified for this configuration.
4. SE350 10GBASE-T 4-port Wired Network Module (FC B7Z7) and SE350 Wireless Network Module (FC B6F3) are also certified for this configuration.
5. Only 2 nodes are supported for direct-connect (switchless) configurations using the SE350.
Again, the configurations shown are example configurations and are not meant to provide an
exhaustive list of all available certified configurations. Refer to “Component selection” on
page 22 for additional information regarding components that have been certified. Also, refer
to “Special considerations for ThinkAgile MX1020 and MX1021 on SE350” on page 20 for
information related to the unique characteristics of this Edge Server when used as an Azure
Stack HCI cluster node. If you have questions about the validity of a configuration you would
like to purchase, check with your account team.
Figure: SH40T1a (hybrid) configuration on the ThinkSystem SR650, showing the front drive bay population (cache and HDD capacity devices) and the rear PCIe slots, network adapter, and power supplies
This is a general purpose configuration that uses SSD and HDD storage devices. It is
recommended when raw capacity requirements are less than 40TB per node. Network
bandwidth of 10GbE is generally adequate for this configuration. This is one of the
configurations that has been certified for use in a 2-node Microsoft HCI solution.
Figure: SH60T1a (hybrid) configuration on the ThinkSystem SR650, showing the front drive bay population (cache and HDD capacity devices) and the rear PCIe slots, network adapter, and power supplies
This is a general purpose configuration that uses SSD and HDD storage devices, with
increased raw capacity of 60TB per node. It is recommended when a bit more storage
capacity is required. A 16-node Microsoft HCI solution built using this configuration will
provide a total raw storage capacity of nearly a petabyte.
NH80T1a hybrid configuration
This configuration uses the Lenovo ThinkAgile MX Certified Node configured with hot-swap
NVMe U.2 devices for the cache tier and HDDs for the capacity tier. Total raw capacity of this
configuration is 80TB per node.
Figure: NH80T1a (hybrid) configuration on the ThinkSystem SR650, showing the front drive bay population (NVMe U.2 cache and HDD capacity devices) and the rear PCIe slots, network adapter, and power supplies
Note: It is recommended to use a minimum of 25GbE network bandwidth for better HDD
rebuild times for HDDs with a capacity of 8TB or more.
This is a high performance configuration that uses hot-swap NVMe U.2 devices inserted into the AnyBay drive bays as cache for the HDD capacity tier, providing a total raw capacity of 80TB per node. It is highly recommended to use a minimum
network bandwidth of 25GbE in order to keep up with NVMe storage performance and also to
potentially reduce HDD rebuild times. This is one of the configurations that has been certified
for use in a 2-node Microsoft HCI solution.
Figure: NH120T1a (hybrid) configuration on the ThinkSystem SR650, showing the front drive bay population (NVMe U.2 cache and HDD capacity devices) and the rear PCIe slots, network adapter, and power supplies
Note: It is recommended to use a minimum of 25GbE network bandwidth for better HDD
rebuild times for HDDs with a capacity of 8TB or more.
This is a high performance configuration that uses hot-swap NVMe U.2 devices inserted into
the AnyBay drive bays as cache for the HDD capacity tier and has a total raw capacity of
120TB per node. It is highly recommended to use a minimum network bandwidth of 25GbE in
order to keep up with NVMe storage performance and also to potentially reduce HDD rebuild
times. A 16-node Microsoft HCI solution built using this configuration will provide a total raw storage capacity of nearly 2 petabytes.
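The raw capacity figures quoted for these configurations follow directly from multiplying the number of capacity devices per node, the device size, and the node count. The short Python sketch below (our own illustration; the helper name is hypothetical) reproduces the NH120T1a and SH60T1a numbers:

def raw_capacity_tb(devices_per_node, device_tb, nodes):
    # Total raw capacity-tier storage across the cluster, in decimal TB.
    # This is raw capacity only; usable capacity is lower once S2D
    # resiliency (mirroring or parity) is applied.
    return devices_per_node * device_tb * nodes

print(raw_capacity_tb(10, 12, 1))    # NH120T1a: 120 TB raw per node
print(raw_capacity_tb(10, 12, 16))   # 16 nodes: 1920 TB, nearly 2 PB raw
print(raw_capacity_tb(10, 6, 16))    # SH60T1a, 16 nodes: 960 TB, nearly 1 PB raw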
NS61T1a all-flash configuration
This configuration uses the Lenovo ThinkAgile MX Certified Node with 24 2.5” drive bays
configured with U.2 NVMe devices for the cache tier and SSDs for the capacity tier. Total raw
capacity of this configuration is approximately 61TB per node. The focus of this configuration
is performance rather than large capacity.
Figure: NS61T1a (all-flash) configuration on the ThinkSystem SR650 with 24 x 2.5-inch drive bays (4 x 750GB NVMe cache devices, 16 x 3.84TB SSD capacity devices, and 4 empty bays), plus the rear PCIe slots and network adapter
This is an ultra-high performance all-flash configuration that uses NVMe devices as cache for
the SSD capacity tier, but has a smaller raw capacity of approximately 61TB per node. In
order to achieve maximum performance, this configuration includes a 2-port 100GbE
Mellanox network adapter in each node. The Mellanox ConnectX-6 adapters shown above
support Ethernet, including RoCEv2, and have been certified for Azure Stack HCI.
Figure: NS77T1a (all-flash) configuration on the ThinkSystem SR650 with 24 x 2.5-inch drive bays (4 NVMe cache devices and 20 x 3.84TB SSD capacity devices), plus the rear PCIe slots and network adapter
This is an ultra-high performance all-flash configuration that uses NVMe devices as cache for
the SSD capacity tier, but has a smaller raw capacity of approximately 77TB per node. In
order to achieve maximum performance, this configuration includes a 2-port 100GbE
Mellanox network adapter in each node. The Mellanox ConnectX-6 adapters shown above
support Ethernet, including RoCEv2, and have been certified for Azure Stack HCI.
NN38T1a all-flash configuration (all-NVMe)
This configuration uses the Lenovo ThinkAgile MX Certified Node with 24 2.5” drive bays
configured with U.2 NVMe devices as the only storage devices. Total raw capacity of this
configuration is approximately 38TB per node. The focus of this configuration is performance
rather than large capacity.
Figure: NN38T1a (all-flash) configuration on the ThinkSystem SR650 with 24 x 2.5-inch drive bays (12 x 3.2TB NVMe devices and 12 empty bays), plus the rear PCIe slots and network adapter
This is an ultra-high performance all-NVMe configuration that uses only NVMe devices for
storage, but has a smaller raw capacity of approximately 38TB per node. In order to achieve
maximum performance, this configuration includes a 2-port 100GbE Mellanox network
adapter in each node. The Mellanox ConnectX-6 adapters shown above support Ethernet,
including RoCEv2, and have been certified for Azure Stack HCI.
Figure: SS92T1a (all-flash) configuration on the ThinkSystem SR650 with 24 x 2.5-inch drive bays (24 x 3.84TB SSD devices), plus the rear PCIe slots and network adapter
This is an ultra-high performance all-SSD configuration that uses only SSD devices for
storage, but has a smaller raw capacity of approximately 92TB per node. In order to achieve
maximum performance, this configuration includes a 2-port 100GbE Mellanox network
adapter in each node. The Mellanox ConnectX-6 adapters shown above support Ethernet,
including RoCEv2, and have been certified for Azure Stack HCI.
NN16T1a all-flash configuration (all-NVMe)
This configuration uses the Lenovo ThinkAgile MX1021 on SE350 Certified Node with eight
2TB NVMe storage devices configured as a single-tier solution. Total raw capacity of this
configuration is approximately 16TB per node. The focus of this configuration is for Remote
Office/Branch Office (ROBO) environments at the edge. It is typically deployed as a two-node
direct-connected Azure Stack HCI solution.
Figure: NN16T1a (all-flash) data drive placement on the ThinkSystem SE350, showing the NVMe M.2 data adapters in the left wing and riser cage
This is a single-tier high performance all-NVMe configuration that uses only NVMe devices for
storage, with a raw capacity of approximately 16TB per node. Based on the small form factor
of the ThinkSystem SE350 Edge Server, this configuration is ideal for use at the edge, where
high-speed network switches are not available to handle storage traffic inside the cluster.
Note that for direct-connected scenarios, ThinkAgile MX1021 supports only 2 nodes due to
the limited number of high-speed network ports available in these systems.
Figure: NN12T1a (all-flash) data drive placement on the ThinkSystem SE350, showing the 2 x High Endurance NVMe cache devices and the NVMe capacity devices on the M.2 data adapters in the left wing and riser cage
This is a two-tier high performance all-NVMe configuration that uses only NVMe devices for
storage, with a raw capacity of approximately 12TB per node. Based on the small form factor
of the ThinkSystem SE350 Edge Server, this configuration is ideal for use at the edge, where
high-speed network switches are not available to handle storage traffic inside the cluster.
Note that for direct-connected scenarios, ThinkAgile MX1021 supports only 2 nodes due to
the limited number of high-speed network ports available in these systems.
SS08T1a all-flash configuration (all-SATA SSD)
This configuration uses the Lenovo ThinkAgile MX1021 on SE350 Certified Node with four 1.92TB SATA SSD storage devices configured as a single-tier solution. Total raw capacity of
this configuration is approximately 8TB per node. The focus of this configuration is for Remote
Office/Branch Office (ROBO) environments at the edge. It is typically deployed as a two-node
direct-connected Azure Stack HCI solution.
Figure: SS08T1a (all-flash) data drive placement on the ThinkSystem SE350, showing the SATA SSD M.2 data adapters in the left wing and riser cage
This is a single-tier all-SSD configuration that uses only non-SED SATA SSD devices for
storage, but has a relatively small raw capacity of under 8TB per node. Based on the small
form factor of the ThinkSystem SE350 Edge Server, this configuration is ideal for use at the
edge, where high-speed network switches are not available to handle storage traffic inside the
cluster. Note that for direct-connected scenarios, ThinkAgile MX1021 supports only 2 nodes
due to the limited number of high-speed network ports available in these systems. Since SED
storage devices cannot be shipped into certain countries, including China, the single-tier
all-SSD configuration is currently the only ThinkAgile MX1021 configuration available in these
countries.
Direct-connect networking
For a 2- or 3-node HCI cluster, it is possible to connect the network adapters directly to each
other without placing a network switch between the nodes. For a 2-node cluster using the
2-port Mellanox ConnectX-4 10/25GbE network adapter as an example, this means that Port
1 of the adapter on one node can be cabled directly into Port 1 of the second node and Port 2
from each node can be direct-connected as well. In this example, the network cables are
standard SFP28 Direct Attach Cables (DACs). There is no need for a “crossover” cable.
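As a concrete illustration of this cabling scheme, the Python sketch below (our own example; the helper function is hypothetical and not part of any Lenovo tool) enumerates the direct-connect links for a 2-node or 3-node switchless cluster built with 2-port adapters:

from itertools import combinations

def switchless_links(nodes, ports_per_adapter=2):
    # Enumerate the east-west cables for a small switchless HCI cluster.
    # 2 nodes: each adapter port is cabled to the same-numbered port on the
    # peer node. 3 nodes: one cable per pair of nodes, one port per peer.
    if nodes == 2:
        return [((1, port), (2, port)) for port in range(1, ports_per_adapter + 1)]
    if nodes == 3 and ports_per_adapter >= 2:
        next_free_port = {node: 1 for node in (1, 2, 3)}
        links = []
        for a, b in combinations((1, 2, 3), 2):
            links.append(((a, next_free_port[a]), (b, next_free_port[b])))
            next_free_port[a] += 1
            next_free_port[b] += 1
        return links
    raise ValueError("switchless cabling is practical only for 2 or 3 nodes")

print(switchless_links(2))  # [((1, 1), (2, 1)), ((1, 2), (2, 2))] -- two DAC cables
print(switchless_links(3))  # three cables, one per node pair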
One of the most significant benefits associated with the direct-connect method is that
high-speed network switches are not required for storage traffic inside the cluster (aka
“east-west” traffic). However, since the network adapters are connected to each other, a
separate network connection is required from the customer network to the cluster (aka
“north-south” traffic). A low-cost option for this additional network interface is to use one of the
LAN On Motherboard (LOM) cards available for the ThinkAgile MX Certified Node.
For most 2-node implementations, the 2-port 1GbE RJ45 LOM card option is sufficient.
Table 2 shows the LOM card options that are available. Figure 12 shows how the high-speed
network adapter and LOM card are used in a 2-node direct-connected solution.
Important: The LOM cards shown in the table above are NOT certified to carry RDMA
storage traffic inside the solution. These cards are offered only to connect the cluster to the
customer network.
Figure 12 Diagram showing network connectivity for a ThinkAgile MX Certified Node that is part of a 2-node direct-connected HCI cluster
Figure 13 Illustration of USB file share witness for a direct-connected 2-node HCI cluster.
Special considerations for ThinkAgile MX1020 and MX1021 on SE350
Due to the unique design of the ThinkSystem SE350 Edge Server, several special guidelines apply to ThinkAgile MX1020 and MX1021 on SE350. This section provides these guidelines, along with some recommendations for configuring ThinkAgile MX1020 and MX1021 solutions.
Although the SE350 Edge Server has been certified for use in Azure Stack HCI clusters
containing from two to four nodes, its intended purpose is at the edge in a two-node
direct-connected cluster. Due to the limited number of 10GbE network ports provided by the
SE350 Network Modules, it is not practical to build a cluster containing more than two nodes
without requiring a high-speed network switch to handle the storage traffic.
Since there are no hard disk drives available for the SE350, all-flash configurations are the
only configurations available for ThinkAgile MX1020 and MX1021 (i.e. hybrid configurations
are not available). Furthermore, the all-flash configurations that are available are actually
all-NVMe or all-SSD configurations. Lenovo has not certified a mix of NVMe and SATA SSD
data devices for use as an Azure Stack HCI cluster node. Three example configurations can
be found in this document, including NN16T1a (single-tier all-NVMe), NN12T1a (two-tier
all-NVMe), and SS08T1a (single-tier all-SSD).
In addition to the storage device caveats discussed in the previous paragraph, note that SED
storage devices cannot be shipped into certain countries, including China. The only non-SED
data storage devices that are currently available for the SE350 are SATA SSD devices. Also,
since the SE350 supports a maximum of four SATA SSDs, which is the minimum number of
devices required for an Azure Stack HCI cluster node, storage configuration for these
solutions is essentially limited to the size (480GB, 960GB, or 1.92TB) of the four SATA SSDs
selected. This represents an approximate total raw storage capacity per node of 2TB, 4TB,
and 8TB, respectively.
For the OS Boot device, ThinkAgile MX1020 and MX1021 are identical to other ThinkAgile
MX solutions, using dual 480GB M.2 SSD devices configured as a RAID-1 mirrored Boot
volume using the ThinkSystem SE350 M.2 Mirroring Enablement Kit.
The SE350 is a single-processor server, using a single Intel Xeon D-2100 series processor. In
addition, since there are only 4 DIMM sockets available, memory capacity is restricted to a
range between 64 and 256GB per node.
ThinkAgile MX1020 and MX1021 support only the iWARP implementation of RDMA via their integrated Network Modules. Lenovo does not currently support the addition of any PCIe
network adapters to ThinkAgile MX1020 or MX1021 solutions. If you are interested in adding
such a network adapter to nodes, please contact your sales representative to engage the
Lenovo Special Bid process.
For information related to deploying a two-node direct-connected Azure Stack HCI cluster
using ThinkAgile MX1021 on SE350, refer to the Lenovo Press document ThinkAgile MX1021
on SE350 Azure Stack HCI (S2D) Deployment Guide, which can be downloaded from the
following URL:
https://lenovopress.com/lp1298
Component selection
The Lenovo certified configurations listed above include several common hardware
components. Depending on workloads and other requirements, there is some flexibility in
customization of each configuration to meet a large range of customer needs. However, the
following configuration guidelines must be followed:
Nodes
– Besides Lenovo ThinkAgile MX Certified Nodes, only the Lenovo ThinkSystem SR630 is certified for Azure Stack HCI.
– The ThinkSystem SR630 is not available as a ThinkAgile MX Certified Node, but can
be configured appropriately for use as a Microsoft Azure Stack HCI cluster node.
– In general, the number of nodes can range from 2 to 16 (refer to Table 1 on page 6),
but not all configurations have been certified for this full range.
– The ThinkSystem SR630 can be ordered in the configurations listed in Table 1 on
page 6, with the exception of storage devices. Since the SR630 is a 1U rack server that
supports up to 12 x 2.5” or 4 x 3.5” storage devices, these limitations must be kept in
mind when configuring an SR630-based solution.
Processors
– Two Intel processors with a recommended minimum of 8 cores per CPU in the Silver (4100 series), Gold (5100 or 6100 series), or Platinum (8100 series) processor families
– A single Intel Xeon D-2100 series processor is supported for ThinkAgile MX1021 on SE350
– 205-watt processors are not supported
Memory
– Minimum of 192GB is required for converged, 384GB for hyperconverged
– For ThinkAgile MX1021 on SE350, minimum of 64GB for converged, 128GB for HCI
– Maximum of 1.5TB per node
– We strongly recommend a “balanced memory configuration” - for details, see the
following URL: http://lenovopress.com/lp0742.pdf
OS Boot (not part of Microsoft Azure Stack HCI certification)
– Minimum requirement is 200GB OS boot volume
– An M.2 Mirroring Kit with dual 480GB M.2 SSDs configured as RAID-1 for resilience is specified in all Lenovo ThinkAgile MX Certified Node configurations
– If hot swap storage is preferred for boot, we recommend a RAID card configured for
RAID-1 with two SSDs or two HDDs (SR650 server only)
GPUs
The following two tables show supported GPUs and whether they will support GPU-P
functionality upon release of an updated device driver from NVIDIA. Table 3 shows the
GPUs that are supported for ThinkAgile MX solutions running on Lenovo SR650 rack
servers.
Table 4 shows the GPUs that are supported for ThinkAgile MX solutions running on
Lenovo SE350 edge servers.
Note: Since a GPU consumes the only available PCIe slot, only 4 SSD or NVMe devices
can be configured in any SE350 server that includes a GPU.
Networking
– We recommend 25GbE or faster networking when using HDDs of 8TB or larger, for improved HDD rebuild times
– For RoCE v2:
• Mellanox ConnectX-4 Lx 2-port 10/25GbE Ethernet Adapter (use this NIC for typical
hybrid storage configurations)
• Mellanox ConnectX-4 2-port 100GbE Ethernet Adapter (use this NIC for all-flash storage configurations when the additional throughput is required)
• Mellanox ConnectX-6 HDR100 QSFP56 2-port PCIe InfiniBand Adapter (this NIC
also supports Ethernet, including RoCE v2, and can be used for all-flash storage
configurations when additional throughput is required)
• Mellanox ConnectX-6 HDR100 QSFP56 1-port PCIe InfiniBand Adapter (this NIC
also supports Ethernet, including RoCE v2, and can be used for all-flash storage
configurations when additional throughput is required and two 1-port NICs are
preferred over a single 2-port NIC)
• Mellanox ConnectX-6 Lx 10/25GbE SFP28 2-Port PCIe Ethernet Adapter
• Mellanox ConnectX-6 Dx 100GbE QSFP56 2-port PCIe Ethernet Adapter (use this
NIC for All-Flash configurations when the additional throughput is required)
• Mellanox ConnectX-4 Lx 1-port 40GbE Ethernet Adapter with Mellanox QSA 100G
to 25G Cable Adapter (use this NIC/Cable Adapter combination if two 1-port NICs
are preferred over a single 2-port NIC)
• SFP+ DAC cables support 10Gb and SFP28 DAC cables support 25Gb port speed
• Network switches must support the RoCE v2 feature set for best storage
performance (see “Network switches” on page 29 for more information regarding
Lenovo and NVIDIA/Mellanox network switches that have been tested with
ThinkAgile MX solutions)
– For iWARP:
• ThinkSystem SE350 supports iWARP via its integrated Network Modules
• ThinkSystem Intel E810-DA2 10/25GbE SFP28 2-Port PCIe Ethernet Adapter
Notes: For ThinkSystem SR630 and SR650 servers, LAN On Motherboard (LOM)
ports are supported for north-south traffic in direct-connected clusters, but not for
east-west (storage) traffic, as discussed in “Direct-connect networking” on page 19.
For these servers, only the Intel E810-DA2 has been certified for east-west traffic.
Storage HBAs
– Hybrid storage configurations
• ThinkSystem 430-16i SAS/SATA 12Gb HBA
• ThinkSystem 440-16i SAS/SATA 12Gb HBA
• ThinkSystem 4350-16i SAS/SATA 12Gb HBA
– All-flash storage configurations
• ThinkSystem 430-8i SAS/SATA 12Gb HBA
• ThinkSystem 440-8i SAS/SATA 12Gb HBA
• ThinkSystem 4350-8i SAS/SATA 12Gb HBA
– ThinkSystem SE350 M.2 SATA/NVMe 4-bay Data Drive Enablement Kit
• Used only for ThinkAgile MX1020 and MX1021 on SE350
NVMe switch adapters
– ThinkSystem 1610-4p NVMe Switch adapter
• NVMe switches are used for configurations that include more than 4 NVMe devices
Storage devices
– For configurations with two storage device types, the number of devices can be
reduced to a minimum of two cache and four capacity devices
– For configurations with a single storage device type (all-SSD or all-NVMe), the number
of devices can be reduced to a total of 4 SSD or 4 NVMe devices
– NVMe U.2 devices require AnyBay option
We strongly recommend a minimum 10% cache-to-capacity ratio (for example, 2 x 800GB SSD cache with 4 x 4TB HDD capacity). Although this is not a requirement, care should be taken to provide enough cache space for the amount of capacity available in the solution, or performance can be impacted significantly.
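As a quick check of this guideline, the following Python sketch (our own example; the function name is hypothetical) computes the ratio for the example above and for the NH80T1a certified configuration:

def cache_to_capacity_ratio(cache_tb, capacity_tb):
    # Cache size as a fraction of raw capacity; aim for at least 0.10 (10%).
    return cache_tb / capacity_tb

print(cache_to_capacity_ratio(2 * 0.8, 4 * 4))    # example above: 1.6/16 = 0.10
print(cache_to_capacity_ratio(4 * 3.2, 10 * 8))   # NH80T1a: 12.8/80 = 0.16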
Table 5 provides a list of all certified Lenovo storage devices that can be used to configure a hybrid storage HCI solution based on Lenovo ThinkSystem SR630 and SR650 rack servers.
This table does not include OS boot devices.
Table 5 Lenovo storage devices certified for ThinkAgile MX hybrid storage solutions on SR630 and SR650
Storage Devices Used for Hybrid Solutions based on SR630 and SR650    Feature Code    Type    Usage
ThinkSystem 3.5" Intel P4610 1.6TB Mainstream NVMe PCIe 3.0 x4 HS SSD B58C NVMe Cache
ThinkSystem 3.5" Intel P5600 1.6TB Mainstream NVMe PCIe 4.0 x4 HS SSD BCFM SSD Cache
ThinkSystem 3.5" Intel P5600 3.2TB Mainstream NVMe PCIe 4.0 x4 HS SSD BCFJ SSD Cache
ThinkSystem 3.5" Intel P5600 6.4TB Mainstream NVMe PCIe 4.0 x4 HS SSD BCFQ SSD Cache
ThinkSystem 3.5" Intel P5620 1.6TB Mixed Use NVMe PCIe 4.0 x4 HS SSD BNEK SSD Cache
ThinkSystem 3.5" Intel P5620 3.2TB Mixed Use NVMe PCIe 4.0 x4 HS SSD BNEM SSD Cache
ThinkSystem 3.5" Intel P5620 6.4TB Mixed Use NVMe PCIe 4.0 x4 HS SSD BNEN SSD Cache
ThinkSystem 3.5" Intel P5620 12.8TB Mixed Use NVMe PCIe 4.0 x4 HS SSD BNEP SSD Cache
ThinkSystem 3.5” SS530 800GB Performance SAS 12Gb HS SSD B4Y8 SSD Cache
ThinkSystem 3.5” SS530 1.6TB Performance SAS 12Gb HS SSD B4Y9 SSD Cache
ThinkSystem 3.5” SS530 3.2TB Performance SAS 12Gb HS SSD B4YA SSD Cache
ThinkSystem 3.5" PM1645a 800GB Mainstream SAS 12Gb HS SSD B8HT SSD Cache
ThinkSystem 3.5" PM1645a 1.6TB Mainstream SAS 12Gb HS SSD B8JN SSD Cache
ThinkSystem 3.5" PM1645a 3.2TB Mainstream SAS 12Gb HS SSD B8JK SSD Cache
ThinkSystem 3.5" PM1655 800GB Mixed Use SAS 24Gb HS SSD BNW7 SSD Cache
ThinkSystem 3.5" PM1655 1.6TB Mixed Use SAS 24Gb HS SSD BNWA SSD Cache
ThinkSystem 3.5" PM1655 3.2TB Mixed Use SAS 24Gb HS SSD BNWB SSD Cache
ThinkSystem 3.5" PM1655 6.4TB Mixed Use SAS 24Gb HS SSD BP3G SSD Cache
ThinkSystem 3.5" 4TB 7.2K SATA 6Gb HS 512n HDD AUU8 HDD Capacity
ThinkSystem 3.5" 6TB 7.2K SATA 6Gb HS 512e HDD AUUA HDD Capacity
ThinkSystem 3.5" 8TB 7.2K SATA 6Gb HS 512e HDD AUU9 HDD Capacity
ThinkSystem 3.5" 10TB 7.2K SATA 6Gb HS 512e HDD AUUB HDD Capacity
ThinkSystem 3.5" 12TB 7.2K SATA 6Gb HS 512e HDD B118 HDD Capacity
ThinkSystem 3.5" 14TB 7.2K SATA 6Gb HS 512e HDD B497 HDD Capacity
ThinkSystem 3.5" 4TB 7.2K NL SAS 12Gb HS 512n HDD AUU6 HDD Capacity
ThinkSystem 3.5" 6TB 7.2K NL SAS 12Gb HS 512e HDD AUU7 HDD Capacity
ThinkSystem 3.5" 8TB 7.2K NL SAS 12Gb HS 512e HDD B0YR HDD Capacity
ThinkSystem 3.5" 10TB 7.2K NL SAS 12Gb HS 512e HDD AUUG HDD Capacity
ThinkSystem 3.5" 12TB 7.2K NL SAS 12Gb HS 512e HDD B117 HDD Capacity
ThinkSystem 3.5" 14TB 7.2K NL SAS 12Gb HS 512e HDD B496 HDD Capacity
Table 6 provides a list of all certified Lenovo storage devices that can be used to configure an
All-Flash HCI solution based on Lenovo ThinkSystem SR630 and SR650 rack servers. This
table does not include OS boot devices.
Table 6 Lenovo storage devices certified for ThinkAgile MX all-flash storage solutions on SR630 and SR650
Storage Devices Used for All-Flash Solutions based on SR630 and SR650    Feature Code    Type    Usage
ThinkSystem U.2 Intel P4800X 750GB Performance NVMe PCIe 3.0 x4 HS SSD B2ZJ NVMe Cache
ThinkSystem 2.5” U.2 Intel P5600 1.6TB Mainstream NVMe PCIe 3.0 x4 HS SSD B589 NVMe Cache
ThinkSystem 2.5” U.2 Intel P5600 3.2TB Mainstream NVMe PCIe 3.0 x4 HS SSD B58A NVMe Cache
ThinkSystem 2.5” U.2 Intel P5600 6.4TB Mainstream NVMe PCIe 3.0 x4 HS SSD B58B NVMe Cache
ThinkSystem 2.5” U.2 Intel P5620 1.6TB Mixed Use NVMe PCIe 3.0 x4 HS SSD BNEG NVMe Cache
ThinkSystem 2.5” U.2 Intel P5620 3.2TB Mixed Use NVMe PCIe 3.0 x4 HS SSD BNEH NVMe Cache
ThinkSystem 2.5” U.2 Intel P5620 6.4TB Mixed Use NVMe PCIe 3.0 x4 HS SSD BNEZ NVMe Cache
ThinkSystem 2.5” U.2 Intel P5620 12.8TB Mixed Use NVMe PCIe 3.0 x4 HS SSD BA4V NVMe Cache
ThinkSystem 2.5” SS530 400GB Performance SAS 12Gb HS SSD B4Y4 SSD Cache
ThinkSystem 2.5” SS530 800GB Performance SAS 12Gb HS SSD B4Y5 SSD Cache
ThinkSystem 2.5” SS530 1.6TB Performance SAS 12Gb HS SSD B4Y6 SSD Cache
ThinkSystem 2.5” SS530 3.2TB Performance SAS 12Gb HS SSD B4Y7 SSD Cache
ThinkSystem 2.5" PM1645a 800GB Mainstream SAS 12Gb HS SSD B8HU SSD Cache
ThinkSystem 2.5" PM1645a 1.6TB Mainstream SAS 12Gb HS SSD B8J4 SSD Cache
ThinkSystem 2.5" PM1645a 3.2TB Mainstream SAS 12Gb HS SSD B8JD SSD Cache
ThinkSystem 2.5" PM1655 800GB Mixed Use SAS 24Gb HS SSD BNW8 SSD Cache
ThinkSystem 2.5" PM1655 1.6TB Mixed Use SAS 24Gb HS SSD BNW9 SSD Cache
ThinkSystem 2.5" PM1655 3.2TB Mixed Use SAS 24Gb HS SSD BNW6 SSD Cache
ThinkSystem 2.5" PM1655 6.4TB Mixed Use SAS 24Gb HS SSD BP3K SSD Cache
ThinkSystem 2.5" U.2 P5520 1.92TB Read Intensive NVMe PCIe 4.0 x4 HS SSD BMGD NVMe Capacity
ThinkSystem 2.5" U.2 P5520 3.84TB Read Intensive NVMe PCIe 4.0 x4 HS SSD BMGE NVMe Capacity
ThinkSystem 2.5" U.2 P5520 7.68TB Read Intensive NVMe PCIe 4.0 x4 HS SSD BNEF NVMe Capacity
ThinkSystem 2.5" U.2 P5520 15.36TB Read Intensive NVMe PCIe 4.0 x4 HS SSD BNEQ NVMe Capacity
ThinkSystem 2.5" Intel S4620 480GB Mixed Use SATA 6Gb HS SSD BA7Q SSD Capacity
ThinkSystem 2.5" Intel S4620 960GB Mixed Use SATA 6Gb HS SSD BA4T SSD Capacity
ThinkSystem 2.5" Intel S4620 1.92TB Mixed Use SATA 6Gb HS SSD BA4U SSD Capacity
ThinkSystem 2.5" Intel S4620 3.84TB Mixed Use SATA 6Gb HS SSD BK7L SSD Capacity
ThinkSystem 2.5" Intel S4510 1.92TB Entry SATA 6Gb HS SSD B49B SSD Capacity
ThinkSystem 2.5" Intel S4510 3.84TB Entry SATA 6Gb HS SSD B49C SSD Capacity
ThinkSystem 2.5" Intel S4520 480GB Read Intensive SATA 6Gb HS SSD BA7G SSD Capacity
ThinkSystem 2.5" Intel S4520 960GB Read Intensive SATA 6Gb HS SSD BA7H SSD Capacity
ThinkSystem 2.5" Intel S4520 1.92TB Read Intensive SATA 6Gb HS SSD BA7J SSD Capacity
ThinkSystem 2.5" Intel S4520 3.84TB Read Intensive SATA 6Gb HS SSD BK77 SSD Capacity
ThinkSystem 2.5" Intel S4520 7.68TB Read Intensive SATA 6Gb HS SSD BK78 SSD Capacity
ThinkSystem 2.5" 5300 1.92TB Entry SATA 6Gb HS SSD B8J5 SSD Capacity
ThinkSystem 2.5" 5300 3.84TB Entry SATA 6Gb HS SSD B8JP SSD Capacity
ThinkSystem 2.5" 5300 7.68TB Entry SATA 6Gb HS SSD B8J2 SSD Capacity
ThinkSystem 2.5" 5300 480GB Mainstream SATA 6Gb HS SSD B8HY SSD Capacity
ThinkSystem 2.5" 5300 960GB Mainstream SATA 6Gb HS SSD B8J6 SSD Capacity
ThinkSystem 2.5" 5300 1.92TB Mainstream SATA 6Gb HS SSD B8JE SSD Capacity
ThinkSystem 2.5" 5300 3.84TB Mainstream SATA 6Gb HS SSD B8J7 SSD Capacity
ThinkSystem 2.5" 5400 PRO 480GB Read Intensive SATA 6Gb HS SSD BQ1P SSD Capacity
ThinkSystem 2.5" 5400 PRO 960GB Read Intensive SATA 6Gb HS SSD BQ1R SSD Capacity
ThinkSystem 2.5" 5400 PRO 1.92TB Read Intensive SATA 6Gb HS SSD BQ1X SSD Capacity
ThinkSystem 2.5" 5400 PRO 3.84TB Read Intensive SATA 6Gb HS SSD BQ1S SSD Capacity
ThinkSystem 2.5" 5400 PRO 7.68TB Read Intensive SATA 6Gb HS SSD BQ1T SSD Capacity
ThinkSystem 2.5" PM1645a 800GB Mainstream SAS 12Gb HS SSD B8HU SSD Capacity
ThinkSystem 2.5" PM1645a 1.6TB Mainstream SAS 12Gb HS SSD B8J4 SSD Capacity
ThinkSystem 2.5" PM1645a 3.2TB Mainstream SAS 12Gb HS SSD B8JD SSD Capacity
ThinkSystem 2.5" PM1643a 1.92TB Entry SAS 12Gb HS SSD B91B SSD Capacity
ThinkSystem 2.5" PM1643a 3.84TB Entry SAS 12Gb HS SSD B91C SSD Capacity
ThinkSystem 2.5" PM1643a 7.68TB Entry SAS 12Gb HS SSD B91D SSD Capacity
ThinkSystem 2.5" PM1653 960GB Read Intensive SAS 24Gb HS SSD BNWC SSD Capacity
ThinkSystem 2.5" PM1653 1.92TB Read Intensive SAS 24Gb HS SSD BNWE SSD Capacity
ThinkSystem 2.5" PM1653 3.84TB Read Intensive SAS 24Gb HS SSD BNWF SSD Capacity
ThinkSystem 2.5" PM1653 7.68TB Read Intensive SAS 24Gb HS SSD BP3E SSD Capacity
ThinkSystem 2.5" PM1653 15.36TB Read Intensive SAS 24Gb HS SSD BP3J SSD Capacity
ThinkSystem 2.5" PM1653 30.72TB Read Intensive SAS 24Gb HS SSD BP3D SSD Capacity
Note: Do not use a storage device for a purpose other than listed in the Usage column. For
example, the Intel S4500 and S4510 SSDs have been certified for use only as a capacity
device, so should not be used as a cache device.
Table 7 Lenovo storage devices certified for ThinkAgile MX all-flash storage solutions on SE350
Storage Devices Used for All-Flash Solutions based on SE350 server    Feature Code    Type    Usage
ThinkSystem M.2 650GB P4511 NVMe SED High Endurance SSD B75C NVMe Cache
ThinkSystem M.2 7450 MAX 800GB Mixed Use NVMe PCIe 4.0 x4 NHS SSD BQUL NVMe Cache
ThinkSystem M.2 5300 480GB SATA 6Gbps Non-HS SSD B919 SSD Capacity
ThinkSystem M.2 5300 960GB SATA 6Gbps Non-HS SSD B8JJ SSD Capacity
ThinkSystem M.2 5300 1.92TB SATA 6Gbps Non-HS SSD BCNZ SSD Capacity
ThinkSystem M.2 5400 PRO 480GB Read Intensive SATA 6Gb NHS SSD BQ1Y SSD Capacity
ThinkSystem M.2 5400 PRO 960GB Read Intensive SATA 6Gb NHS SSD BQ20 SSD Capacity
ThinkSystem M.2 1TB P4511 NVMe SED SSD B75D NVMe Capacity
ThinkSystem M.2 2TB P4511 NVMe SED SSD B75E NVMe Capacity
ThinkSystem M.2 7450 PRO 960GB Read Intensive NVMe PCIe 4.0 x4 NHS SSD BQUJ NVMe Capacity
ThinkSystem M.2 7450 PRO 1.92TB Read Intensive NVMe PCIe 4.0 x4 NHS SSD BQUK NVMe Capacity
ThinkSystem M.2 7450 PRO 3.84TB Read Intensive NVMe PCIe 4.0 x4 NHS SSD BRFZ NVMe Capacity
Table 8 Storage device replacement history for the certified configurations
September 2018: Intel S4500 SSD devices replaced by Intel S4510 SSD devices
March 2019: Intel P4600 NVMe devices replaced by Intel P4610 NVMe devices
June 2019: ThinkSystem 3.5" HUSMM32 SSD devices replaced by ThinkSystem 3.5" SS530 SSD devices
August 2020: ThinkSystem 2.5" 5200 SSD devices replaced by ThinkSystem 2.5" 5300 SSD devices
November 2020: ThinkSystem 3.5" SS530 SSD devices replaced by ThinkSystem 3.5" PM1645a SSD devices
November 2020: ThinkSystem 2.5" SS530 SSD devices replaced by ThinkSystem 2.5" PM1645a SSD devices
February 2021: ThinkSystem M.2 5100 SSD devices replaced by ThinkSystem M.2 5300 SSD devices
July 2021: ThinkSystem U.2 Intel P4610 SSD devices replaced by ThinkSystem U.2 Intel P5600 SSD devices
June 2022: Intel S4510 SSD devices replaced by Intel S4520 SSD devices
June 2022: M.2 Intel P4511 NVMe devices replaced by M.2 7450M NVMe devices
March 2023: ThinkSystem 2.5" U.2 P5500 NVMe devices replaced by ThinkSystem 2.5" U.2 P5520 NVMe devices
March 2023: ThinkSystem 2.5" U.2 P5600 NVMe devices replaced by ThinkSystem 2.5" U.2 P5620 NVMe devices
March 2023: ThinkSystem 2.5" PM1643a SSD devices replaced by ThinkSystem 2.5" PM1653 SSD devices
March 2023: ThinkSystem 2.5" PM1645a SSD devices replaced by ThinkSystem 2.5" PM1655 SSD devices
March 2023: ThinkSystem 2.5" 5300 Entry SSD devices replaced by ThinkSystem 2.5" 5400 PRO Read Intensive SSD devices
March 2023: ThinkSystem 2.5" 5300 Mainstream SSD devices replaced by ThinkSystem 2.5" 5400 PRO Read Intensive SSD devices
Network switches
Network switches that we have tested in our labs include Lenovo and NVIDIA (Mellanox)
switches. Although Lenovo no longer sells network switches, we include information about
them here for customers who already own them. Mellanox switches must be ordered directly
from NVIDIA.
Although network switches are not specifically certified under the Microsoft HCI certification
programs, all of the Lenovo certified configurations for Microsoft HCI discussed in this
document have undergone rigorous end-to-end solution validation using Lenovo network
switches to carry all solution traffic.
Table 9 lists the recommended Lenovo networking switches for S2D deployment. These switches support the Remote Direct Memory Access (RDMA) feature of Microsoft SMB 3.x, which is used extensively by S2D, and are fully compatible with the Mellanox ConnectX-4 Lx network adapters used in these solutions to provide the highest storage performance.
Note: The first part number and feature code listed in Table 9 are for a switch with rear-to-front airflow. The second part number and feature code are for front-to-rear airflow.
RackSwitch G8272
The Lenovo RackSwitch G8272 uses 10 Gb SFP+ and 40 Gb QSFP+ Ethernet technology
and is specifically designed for the data center. It is ideal for today's big data, cloud, and
optimized workload solutions. It is an enterprise class Layer 2 and Layer 3 full featured switch
that delivers line-rate, high-bandwidth switching, filtering, and traffic queuing without delaying
data. Large data center-grade buffers help keep traffic moving, while the hot-swap redundant
power supplies and fans (along with numerous high-availability features) help provide high
availability for business sensitive traffic. In addition to the 10GbE and 40GbE connections, the
G8272 can use 1GbE connections.
Other recommendations
We also recommend the features and upgrades in this section to maximize the security and
manageability of the S2D solution built using the Lenovo certified configurations discussed in
this document.
For ThinkAgile MX solutions that are based on the ThinkSystem SR630, SR650, and SE350
servers, order Feature Code B0MK to enable TPM 2.0 or Feature Code AUK7 to enable TPM
2.0 and Secure Boot. Note that Secure Boot is required by Microsoft for all ThinkAgile MX Appliance solutions, so it is selected by default for all Appliance configurations.
Note: TPM is not supported in PRC. For systems shipped to China, NationZ TCM is used
and supported.
We also recommend Lenovo XClarity Administrator (LXCA) for hardware management and monitoring of the Microsoft S2D cluster solution. For more information, see the LXCA Product Guide at the following URL:
https://lenovopress.com/tips1200-lenovo-xclarity-administrator
For information about Lenovo XClarity Integrator for Microsoft Windows Admin Center, see the following URL:
https://sysmgt.lenovofiles.com/help/index.jsp?topic=%2Fcom.lenovo.lxci_wac.doc%2Fwac_welcome.html
Summary
Lenovo is a key partner in the Microsoft WSSD and Azure Stack HCI programs for
certification of HCI solutions. Based on Lenovo’s investment in these programs and the
tremendous amount of time, resources, and effort dedicated to certification and validation
testing for each certified configuration discussed in this document, our customers can rest
assured that the configurations presented will perform smoothly and reliably right out of the
box.
This document has provided some background information related to the Microsoft WSSD and Azure Stack HCI programs, as well as details of the Lenovo configurations that have been certified and validated under these programs to run Storage Spaces Direct. Selecting from
the list of Lenovo certified configurations found in this document to build an S2D HCI solution
will save time, money, and effort associated with designing and building a do-it-yourself
solution that could be riddled with issues.
We will add more examples of Lenovo certified configurations for Microsoft HCI solutions to
this document as additional certifications are completed.
Change History
Changes in the April 2023 update:
Updated certified storage device lists (Table 5 on page 24 and Table 6 on page 26)
Updated storage device EOL table (Table 8 on page 28)
Added a brief description of ThinkAgile MX Appliance offerings in “ThinkAgile MX
Appliance” on page 5
Added information regarding GPU adapter support for ThinkAgile MX solutions (see “Component selection” on page 22)
Updated storage device tables (Table 5 on page 24 and Table 4) with new supported
drives
Removed supported OS boot devices from storage device tables
Added supported OS boot devices to “Component selection” on page 22
Authors
This paper was produced by the following specialists:
Dave Feisthammel is a Senior Solutions Architect working at the Lenovo Center for Microsoft
Technologies in Bellevue, Washington. He has over 25 years of experience in the IT field,
including four years as an IBM client and over 18 years working for IBM and Lenovo. His
areas of expertise include Windows Server and systems management, as well as
virtualization, storage, and cloud technologies. He is currently a key contributor to Lenovo
solutions related to Microsoft Azure Stack HCI and Azure Stack Hub.
Mike Miller is a Windows Engineer with the Lenovo Server Lab in Bellevue, Washington. He
has over 35 years in the IT industry, primarily in client/server support and development roles.
The last 13 years have been focused on Windows Server operating systems and server-level
hardware, particularly on operating system/hardware compatibility, advanced Windows
features, and Windows test functions.
David Ye is a Principal Solutions Architect at Lenovo with over 25 years of experience in the
IT field. He started his career at IBM as a Worldwide Windows Level 3 Support Engineer. In
this role, he helped customers solve complex problems and critical issues. He is now working
in the Lenovo Infrastructure Solutions Group, where he works with customers on Proof of
Concept designs, solution sizing and reviews, and performance optimization. His areas of
expertise are Windows Server, SAN Storage, Virtualization and Cloud, and Microsoft
Exchange Server. He is currently leading the effort in Microsoft Azure Stack HCI and Azure
Stack Hub solutions development.
A special thank you to the following Lenovo colleagues for their contributions to this project:
Daniel Ghidali, Manager - Microsoft Technology and Enablement
Hussein Jammal - Advisory Engineer, Microsoft Solutions Lead EMEA
Vinay Kulkarni, Principal Technical Consultant - Microsoft Solutions and Enablement
Oana Adelina Onofrei, Solutions Engineer - ISG Software Development
Laurentiu Petre, Solutions Engineer - ISG Software Development
Vy Phan, Technical Program Manager - Microsoft OS and Solutions
David Watts, Senior IT Consultant - Lenovo Press
Lenovo may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some
jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this
statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
The products described in this document are not intended for use in implantation or other life support
applications where malfunction may result in injury or death to persons. The information contained in this
document does not affect or change Lenovo product specifications or warranties. Nothing in this document
shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo or
third parties. All information contained in this document was obtained in specific environments and is
presented as an illustration. The result obtained in other operating environments may vary.
Lenovo may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this Lenovo product, and use of those Web sites is at your own risk.
Any performance data contained herein was determined in a controlled environment. Therefore, the result
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Send us your comments via the Rate & Provide Feedback form found at
http://lenovopress.com/lp0866
Trademarks
Lenovo, the Lenovo logo, and For Those Who Do are trademarks or registered trademarks of Lenovo in the
United States, other countries, or both. These and other Lenovo trademarked terms are marked on their first
occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law
trademarks owned by Lenovo at the time this information was published. Such trademarks may also be
registered or common law trademarks in other countries. A current list of Lenovo trademarks is available on
the Web at http://www.lenovo.com/legal/copytrade.html.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
AnyBay™, Lenovo®, Lenovo(logo)®, Lenovo XClarity™, RackSwitch™, ThinkAgile™, ThinkSystem™
Intel, and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the
United States and other countries.
Azure, BitLocker, Microsoft, Windows, Windows Server, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.