
[EcoStruxure™ Reference Design 100]

3818 kW, Tier III, NAM, Chilled Water, Liquid-Cooled & Air-Cooled AI Clusters

Design Overview

Data Center IT Capacity
3818 kW (adaptable from 1864 kW to 3818 kW)

Target Availability
Tier III

Annualized PUE at 100% Load
San Francisco, CA: 1.16 – 1.18
Dallas, TX: 1.21 – 1.23
(Scenario dependent)

Racks and Density
Total Racks: 128 / 144 (Scenario dependent)
Rack Density:
Max air-cooled: 40 kW
Max liquid-cooled: 73 kW

Data Center Overall Space
32,920 ft²

Regional Voltage and Frequency
480 V, 60 Hz

Introduction
High-density AI clusters and liquid cooling bring new challenges to data center design. Schneider Electric’s data center reference designs help shorten the planning process by providing validated, proven, and documented data center physical infrastructure designs to address such challenges. This design focuses on the deployment of high-density AI clusters with two IT rooms. IT Room 1 depicts three retrofit scenarios, where a new, high-density AI cluster is installed alongside existing traditional IT.
• Scenario 1A shows a high-density air-cooled AI cluster.
• Scenario 1B shows a high-density liquid-cooled AI cluster which uses liquid-to-air coolant distribution units (CDUs) for heat rejection. This is ideal for scenarios where facility water systems are unavailable.
• Scenario 1C shows a high-density liquid-cooled AI cluster which uses liquid-to-liquid CDUs. This is ideal for scenarios where facility water systems are available.
IT Room 2 is purpose-built and optimized for a liquid-cooled AI cluster which uses liquid-to-liquid CDUs.
Reference Design 100 includes information for four technical areas: facility power, facility cooling, IT space, and lifecycle software. Together they represent the integrated systems required to meet the design’s specifications summarized in this overview document.

About this Design

• IT space and power distribution designed to accommodate AI clusters with density up to 73 kW per rack

• Various options to support liquid-cooled racks, including liquid-to-air coolant distribution units (CDUs) and liquid-to-liquid CDUs

• Chilled water systems optimized for high water temperatures using Uniflair FWCV fan walls and Uniflair XRAF HT air-cooled packaged chillers

• Redundant design for increased availability and concurrent maintainability

Facility Power

Facility Power Block Diagram (figure): three 2.5 MW IT powertrains (Utility A/B/C, each with a 2.5 MVA MV/LV transformer and standby genset on an ATS, feeding LV switchboards A/B/C, UPS pairs 1A/2A, 1B/2B, 1C/2C, and LV distribution A/B/C serving IT Rooms 1 and 2) plus two mechanical powertrains (Utility A/B, each with a 1.5 MVA MV/LV transformer and 1.36 MW genset on an ATS, feeding mechanical switchboards A/B and the chillers).

The facility power system supplies power to all components within the data center. In this concurrently maintainable electrical design, power to the IT rooms is supplied through three 2.5 MW powertrains. The three powertrains provide 2+1 distributed redundant UPS power to the IT space, backed up by diesel generators. Each powertrain consists of a 3000-amp QED-2 main switchboard feeding two parallel 1250 kW Galaxy VX UPSs, with 5 minutes of runtime, and a 3000-amp QED-2 distribution section. The main switchboards also feed the Uniflair FWCV fan walls in the two IT rooms. Downstream, these powertrains feed power distribution units (PDUs), either APC Modular PDUs or Galaxy PDUs, that power the IT racks with 2N redundancy. The UPSs also feed the CDUs and chilled water pumps. Separately, two 1.5 MW powertrains feed the chillers with 2N redundant power.

The facility power system is designed to support integrated peripheral devices like fire panels, access control systems, and environmental monitoring and control devices. Power meters in the electrical path monitor power quality and allow for predictive maintenance and diagnostics of the system. These meters also integrate with EcoStruxure Power Monitoring Expert.

Every component in this design is built and tested to the applicable IEEE or UL standards.

Further design details, such as dimensions, schematics, and equipment lists, are available in the engineering package.
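
As a quick cross-check of the 2+1 distributed redundant topology described above, the sketch below verifies that, with any one powertrain out of service, the remaining two can carry the full design IT load. It is an illustrative calculation only, using figures from this overview; it ignores the load allocation details covered in the engineering package.

# Illustrative sketch: 2+1 distributed redundant UPS capacity check.
# Values come from this overview (3 powertrains, 2 x 1250 kW Galaxy VX
# UPS modules each); rack-to-powertrain assignments are not modeled.

POWERTRAINS = ["A", "B", "C"]
UPS_KW_PER_POWERTRAIN = 2 * 1250          # two parallel 1250 kW Galaxy VX UPSs
TOTAL_IT_LOAD_KW = 3818                   # maximum design IT capacity

def worst_case_ok(total_it_load_kw: float) -> bool:
    """In a 2+1 distributed redundant design the load normally spreads across
    three powertrains; after losing any one, the remaining two must carry it."""
    surviving_capacity_kw = (len(POWERTRAINS) - 1) * UPS_KW_PER_POWERTRAIN
    return total_it_load_kw <= surviving_capacity_kw

if __name__ == "__main__":
    surviving = (len(POWERTRAINS) - 1) * UPS_KW_PER_POWERTRAIN
    print(f"Capacity with one powertrain out: {surviving} kW")          # 5000 kW
    print(f"IT load {TOTAL_IT_LOAD_KW} kW supported: {worst_case_ok(TOTAL_IT_LOAD_KW)}")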

Facility Power Attributes
Total facility peak power (IT and cooling): 6250 kW
Total amps (IT main bus, each): 3000 A
Input voltage (IT main bus): 480 V
Switchboard kAIC (IT main bus): 65 kA
Generator redundancy (IT main bus): Distributed redundant
IT power path: Dual
IT space UPS capacity, per powertrain: 2500 kW
IT space UPS redundancy: Distributed redundant
IT space UPS runtime @ rated load: 5 minutes
IT space UPS output voltage: 480 V
Total amps (facility cooling bus, each): 1600 A
Input voltage (facility cooling bus): 480 V
Switchboard kAIC (facility cooling bus): 35 kA
Generator redundancy (facility cooling bus): 2N
Facility cooling UPS capacity: N/A
Facility cooling UPS redundancy: N/A
Facility cooling UPS runtime @ rated load: N/A

Design Options
This reference design can be modified as follows without a significant effect on the design’s performance attributes:
• Add EcoStruxure Power Monitoring Expert
• Provision for load bank
• Change UPS battery type & runtime
• Add facility cooling UPS
• Add/remove/change standby generators:
  o Location & tank size


Facility Cooling
Facility Cooling Block Diagrams (figure)
The facility cooling design is based on the specified AI deployment scenarios.
For IT Room 1 (retrofit IT room scenario), a chilled water system with dual path
piping is implemented. Three Uniflair BCEF chillers with free cooling capabilities
deliver 68°F chilled water in an N+1 configuration.
The facility cooling design for IT Room 2 (new IT room scenario) comprises
two separate chilled water loops. A high temperature water loop, with two
Uniflair XRAF extra high temperature chillers with screw compressors and free
cooling capabilities, provides 88°F water to the IT room to cool the IT
equipment. A separate chilled water loop, with two Uniflair XRAF extra high
temperature chillers, provides 70°F water for the air handling units of the IT
room. Using Uniflair XRAF extra high temperature chillers for this chilled
water loop also provides headroom for future increases in water temperature.
Uniflair BCEF chillers and standard Uniflair XRAF chillers are also good
options at this specified chilled water temperature.
A thermal storage system provides 5 minutes of continuous cooling after a
power outage or chiller restart. The Uniflair BCEF and Uniflair XRAF chillers
can fully restart within 3 minutes.
More information on fan wall and CDU cooling architecture is provided in the IT
room section of this document.
This design is instrumented to work with EcoStruxure IT Expert and AVEVA
Unified Operations Center.
Further design details such as dimensions, schematics, and equipment lists are
available in the engineering package.
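
The 5-minute ride-through figure can be roughly reconciled with the 989 ft³ combined storage volume listed in the attributes below. The sketch assumes the tanks store chilled water drawn down across an 18 °F temperature rise; that usable ΔT is an assumption for illustration, not a value stated in this overview.

# Rough ride-through estimate for the combined chilled water storage tanks.
# The 989 ft3 volume and 5-minute target come from this document; the usable
# 18 degF temperature rise (68 -> 86 degF loop dT) is an assumption.

TANK_VOLUME_FT3 = 989
RIDE_THROUGH_MIN = 5
DELTA_T_F = 18.0                      # assumed usable temperature rise, degF

LITERS_PER_FT3 = 28.317
WATER_DENSITY_KG_PER_L = 1.0
WATER_CP_KJ_PER_KG_K = 4.186

mass_kg = TANK_VOLUME_FT3 * LITERS_PER_FT3 * WATER_DENSITY_KG_PER_L
delta_t_k = DELTA_T_F * 5.0 / 9.0     # convert the temperature difference to kelvin
stored_energy_kj = mass_kg * WATER_CP_KJ_PER_KG_K * delta_t_k

avg_cooling_kw = stored_energy_kj / (RIDE_THROUGH_MIN * 60)
print(f"Stored cooling energy: {stored_energy_kj / 1e3:.0f} MJ")
print(f"Average cooling power over {RIDE_THROUGH_MIN} min: {avg_cooling_kw:.0f} kW")
# ~3.9 MW for 5 minutes, on the order of the 3818 kW peak IT load.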
Facility Cooling Attributes
Total max cooling capacity (chillers): 4976 (Dallas) / 5450 (San Francisco) kW
Input voltage: 480 V
Heat rejection medium: Chilled water
Chiller redundancy: N+1
Outdoor heat exchange: Packaged chiller with free cooling
CW supply temperature: 68 – 70 °F
CW return temperature: 86 °F
CW supply temp (Room 2, to CDUs): 88 °F
CW return temp (Room 2, from CDUs): 104 °F
Combined* storage tank size: 989 ft³
Ride-through time: 5 minutes
Outdoor ambient temperature range: 11.1 to 110.5 °F
Economizer type: Water-side
*Summation of all three chilled water loops

Design Options
This reference design can be modified as follows without a significant effect on the design’s performance attributes:
• Add EcoStruxure IT Expert
• Change storage tank size
• Use standard temperature chillers, like Uniflair XRAF or Uniflair BCEF chillers, for the loop with fan walls in IT Room 2


Retrofit IT Room: Scenario 1A

IT Room 1A Diagrams
IT Room 1A Layout (figure): pods of 12 kW racks and the high-density AI pod served by fan walls, PDUs, and RPPs. Legend: PDU1 = 266 kVA, PDU2 = 500 kVA, RPPs = 2 x 400 A, rack types 12 kW, 40 kW, and NET 15 kW.

The first retrofit IT room scenario features eighty 12 kW air-cooled IT racks. The load has been expanded with an AI cluster consisting of twenty-four 40 kW air-cooled server racks with six 15 kW air-cooled networking racks (modeled after Nvidia’s DGX SuperPOD). This scenario demonstrates a 50/50 split in power between low- and high-density IT racks. The 12 kW IT racks are configured in pods of 20 racks and share a 4 ft wide hot aisle. The 40 kW air-cooled AI racks are configured with four racks together and two 15 kW networking racks in the middle of the row. The high-density pod shares an 8 ft wide hot aisle to allow proper airflow. Ducted hot aisles and a common ceiling plenum return hot air to the fan walls for cooling.

Six Uniflair FWCV chilled water fan walls deliver clean and conditioned supply air to the IT room in an N+1 configuration. The redundant piping system across the IT room provides an alternate path for chilled water in case of cooling equipment failure or maintenance.

The 12 kW IT racks are configured with 1+1 40 A NetShelter Metered rack-mount power distribution units (rPDUs). The 15 kW networking racks are configured with 1+1 30 A NetShelter Advanced rPDUs. The 40 kW AI racks are configured with 1+1 100 A NetShelter Advanced rPDUs. Each rack is powered by 2N redundant feeds from its pod’s PDUs, providing A- and B-side power to each rack. The pods of 12 kW IT racks are powered by APC Modular 266 kVA PDUs with their built-in power distribution modules. Each Galaxy 500 kVA PDU cabinet feeds power to a pair of Galaxy RPPs, with PowerLogic BCPM for branch circuit monitoring, which distribute power to the rows of 40 kW AI racks and 15 kW networking racks.
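
The wider 8 ft hot aisle for the high-density pod follows from the airflow the 40 kW racks require. The sketch below applies the standard sensible-heat relation; the 30 °F air temperature rise across the IT equipment is an assumed value for illustration, not a figure from this design.

# Illustrative airflow estimate for the air-cooled racks in IT Room 1A using
# the standard sensible-heat relation q[BTU/hr] = 1.08 * CFM * dT[degF].
# Rack powers come from this document; the 30 degF air temperature rise
# across the IT equipment is assumed for illustration.

BTU_PER_HR_PER_KW = 3412.0
AIR_FACTOR = 1.08                 # BTU/hr per CFM per degF at standard conditions

def required_cfm(rack_kw: float, delta_t_f: float = 30.0) -> float:
    """Airflow (CFM) needed to remove rack_kw of heat at a given air dT."""
    return rack_kw * BTU_PER_HR_PER_KW / (AIR_FACTOR * delta_t_f)

for kw in (12, 15, 40):
    print(f"{kw:>2} kW rack: ~{required_cfm(kw):,.0f} CFM")
# A 40 kW air-cooled rack needs roughly 4,200 CFM, which is why the
# high-density pod is given the wider 8 ft ducted hot aisle.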

IT Room 1A Attributes
IT load: 2010 kW
Supply voltage to IT: 415 V
Single or dual cord: Dual
Number of 12 kW air-cooled racks: 80
Number of 40 kW air-cooled racks: 24
Number of 15 kW networking racks: 6
IT floor space: 4467 ft²
CRAC/CRAH type: Fan wall
CRAC/CRAH redundancy: N+1
CW supply temperature: 68 °F
CW return temperature: 86 °F
Containment type: Ducted hot aisle
CDU type: N/A
CDU redundancy: N/A
TCS loop supply temperature: N/A
TCS loop return temperature: N/A

Design Options
This reference design can be modified as follows without a significant effect on the design’s performance attributes:
• Use Uniflair FXCV fan walls
• CRAHs can be selected instead of fan walls
• Variations in AI cluster configuration


Retrofit IT Room: Scenario 1B

IT Room 1B Diagrams (figure)

The second retrofit IT room scenario features eighty 12 kW air-cooled IT racks. The load has been expanded with an AI cluster consisting of eight 73 kW liquid-cooled AI racks and eight 40 kW air-cooled networking racks (modeled after Nvidia’s DGX SuperPOD). The AI cluster is configured with four liquid-cooled racks together and two networking racks on each end of the row. For the liquid-cooled racks, Uniflair ACSX liquid-to-air (L2A) coolant distribution units (CDUs) are placed on the opposite side of the hot aisle. The liquid-cooled servers use direct-to-chip cooling technology. The liquid cooling loop directly feeding coolant to the racks is known as the Technology Cooling System (TCS). An 8 ft wide hot aisle is designed for the high-density pods to ensure proper airflow. Ducted hot aisles and a common ceiling plenum return hot air to the fan walls for cooling.

L2A CDUs allow liquid-cooled racks to be deployed in data centers without the need for facility water. The CDUs supply coolant to the racks and reject heat into the air. Six Uniflair FWCV chilled water fan walls with redundant piping deliver supply air to the IT room in an N+1 configuration.

The 12 kW IT racks are powered by 1+1 40 A NetShelter Metered rPDUs. The 40 kW networking racks are configured with 1+1 100 A NetShelter Advanced rPDUs. The 73 kW liquid-cooled AI racks are configured with 3+3 OCP V3 power shelves, each fed with a 60 A power feed. Each rack is powered by 2N redundant feeds from its pod’s PDUs, providing A- and B-side power to each rack. The pods of 12 kW IT racks are powered by APC Modular 266 kVA PDUs with their built-in power distribution modules. Each Galaxy 500 kVA PDU cabinet feeds power to a pair of Galaxy RPPs with PowerLogic BCPM for branch circuit monitoring; the RPPs distribute power to the rows of 73 kW AI racks and 40 kW networking racks and can be configured with shunt trip units on the circuit breakers feeding liquid-cooled racks for leak detection.
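
The 3+3 power-shelf arrangement can be sanity-checked as follows. Treating each 60 A feed as a three-phase 415 V circuit and applying an 80% continuous-duty derating are assumptions for illustration; actual shelf ratings are in the engineering package.

import math

# Sanity check of the 3+3 OCP V3 power shelf feeds for a 73 kW rack.
# The 60 A feed rating and 415 V IT supply voltage come from this document;
# treating each feed as three-phase and derating to 80% continuous duty
# are assumptions for illustration.

RACK_KW = 73
FEED_AMPS = 60
VOLTS_LL = 415
ACTIVE_SHELVES = 3                # 3+3: three shelves carry the load, three are redundant
CONTINUOUS_DERATE = 0.8

kva_per_feed = math.sqrt(3) * VOLTS_LL * FEED_AMPS / 1000
usable_kva = ACTIVE_SHELVES * kva_per_feed * CONTINUOUS_DERATE

print(f"Per-feed capacity:   {kva_per_feed:.1f} kVA")      # ~43 kVA
print(f"Usable (3 shelves):  {usable_kva:.1f} kVA")        # ~103 kVA
print(f"Covers {RACK_KW} kW rack: {usable_kva >= RACK_KW}")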

IT Room 1B Attributes
IT load: 1864 kW
Supply voltage to IT: 415 V
Single or dual cord: Dual
Number of 12 kW air-cooled racks: 80
Number of 73 kW liquid-cooled racks: 8
Number of 40 kW networking racks: 8
IT floor space: 4467 ft²
CRAC/CRAH type: Fan wall
CRAC/CRAH redundancy: N+1
CW supply temperature: 68 °F
CW return temperature: 86 °F
Containment type: Ducted hot aisle
CDU type: L2A
CDU redundancy: N
TCS loop supply temperature: 104 °F
TCS loop return temperature: 122 °F

Design Options
This reference design can be modified as follows without a significant effect on the design’s performance attributes:
• Use Uniflair FXCV fan walls
• CRAHs can be selected instead of fan walls
• Variations in AI cluster configuration


Retrofit IT Room: Scenario 1C

IT Room 1C Diagrams (figure)

The third retrofit IT room scenario features eighty 12 kW air-cooled IT racks. The load has been expanded with an AI cluster consisting of eight 73 kW liquid-cooled IT racks and eight 40 kW air-cooled networking racks (modeled after Nvidia’s DGX SuperPOD). The AI cluster is configured with four liquid-cooled racks together and two networking racks on each end of the row. For the liquid-cooled racks, two Uniflair CPOR liquid-to-liquid (L2L) CDUs provide coolant to the racks. The L2L CDUs are placed in the service hallway. The liquid-cooled servers use direct-to-chip cooling technology. The liquid-cooled pod shares an 8 ft wide hot aisle for proper airflow.

L2L CDUs are the heat exchange interface between liquid-cooled IT racks and the facility water system (FWS). In this scenario, the CDUs are tied together on a common loop providing N+1 redundancy. The CDUs are fed the same facility supply water as the fan walls. Six Uniflair FWCV chilled water fan walls with redundant piping deliver supply air to the IT room in an N+1 configuration.

The 12 kW IT racks are powered by 1+1 40 A NetShelter Metered rPDUs. The 40 kW networking racks are configured with 1+1 100 A NetShelter Advanced rPDUs. The 73 kW liquid-cooled AI racks are configured with 3+3 OCP V3 power shelves, each fed with a 60 A power feed. Each rack is powered by 2N redundant feeds from its pod’s PDUs, providing A- and B-side power to each rack. The pods of 12 kW IT racks are powered by APC Modular 266 kVA PDUs with their built-in power distribution modules. Each Galaxy 500 kVA PDU cabinet feeds power to a pair of Galaxy RPPs with PowerLogic BCPM for branch circuit monitoring; the RPPs distribute power to the rows of 73 kW AI racks and 40 kW networking racks and can be configured with shunt trip units on the circuit breakers feeding liquid-cooled racks for leak detection.
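
For rough pipe and CDU sizing intuition, the facility water flow needed to absorb the AI pod’s heat can be estimated with the common water-side rule of thumb. Assuming the full rack load is rejected to the CDU loop at the 68/86 °F facility temperatures is a simplification for illustration; it ignores the share of each rack’s heat still removed by room air.

# Rough facility-water flow estimate for the liquid-to-liquid CDU loop in
# IT Room 1C, using the common water-side rule of thumb
#   q[BTU/hr] ~= 500 * GPM * dT[degF].
# The 68/86 degF facility loop temperatures and 8 x 73 kW rack count come
# from this document; sending the full rack load to the CDU loop is a
# simplifying assumption.

BTU_PER_HR_PER_KW = 3412.0
WATER_FACTOR = 500.0              # BTU/hr per GPM per degF for water

racks = 8
rack_kw = 73
supply_f, return_f = 68.0, 86.0

load_btuh = racks * rack_kw * BTU_PER_HR_PER_KW
gpm = load_btuh / (WATER_FACTOR * (return_f - supply_f))
print(f"AI pod heat load: {racks * rack_kw} kW")
print(f"Approx. facility water flow: {gpm:.0f} GPM at {return_f - supply_f:.0f} degF dT")
# ~220 GPM total across the two CDUs.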

IT Room 1C Attributes
IT load: 1864 kW
Supply voltage to IT: 415 V
Single or dual cord: Dual
Number of 12 kW air-cooled racks: 80
Number of 73 kW liquid-cooled racks: 8
Number of 40 kW networking racks: 8
IT floor space: 4467 ft²
CRAC/CRAH type: Fan wall
CRAC/CRAH redundancy: N+1
CW supply temperature: 68 °F
CW return temperature: 86 °F
Containment type: Ducted hot aisle
CDU type: L2L
CDU redundancy: 2N
TCS loop supply temperature: 104 °F
TCS loop return temperature: 122 °F

Design Options
This reference design can be modified as follows without a significant effect on the design’s performance attributes:
• Use Uniflair FXCV fan walls
• CRAHs can be selected instead of fan walls
• Variations in AI cluster configuration


New Build IT Room 2

IT Room 2 Diagrams (figure)

IT Room 2 is dedicated to a new AI cluster and features sixteen 73 kW liquid-cooled IT racks with sixteen 40 kW air-cooled networking racks placed at the ends of the rows. The racks are configured in one pod and share a 6 ft wide hot aisle. For the liquid-cooled racks, three Uniflair CPOR liquid-to-liquid (L2L) CDUs provide coolant to the racks. The L2L CDUs are placed in the service hallway. The liquid-cooled servers use direct-to-chip cooling technology.

Four Uniflair FWCV chilled water fan walls deliver supply air to the IT room in an N+1 configuration. The three Uniflair CPOR L2L CDUs are tied together on a common TCS loop with N+1 redundancy to provide coolant to the liquid-cooled racks. The CDUs run on a separate, high-temperature chilled water loop to increase the free cooling opportunity. Uniflair XRAF extra high temperature chillers make it possible to operate this chiller-based cooling loop at temperatures well above typical industry practice, improving cooling efficiency.

The 40 kW networking racks are configured with 1+1 100 A NetShelter Advanced rPDUs. The 73 kW liquid-cooled AI racks are configured with 3+3 OCP V3 power shelves, each fed with a 60 A power feed. Each rack is powered by 2N redundant feeds from its pod’s PDUs, providing A- and B-side power to each rack. Each Galaxy 1000 kVA PDU cabinet feeds power to an SE Hyper High Density RPP with PowerLogic HDPM for branch circuit monitoring; the RPP distributes power to a row of 73 kW AI racks and 40 kW networking racks and can be configured with shunt trip units on the circuit breakers feeding liquid-cooled racks for leak detection.
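
The N+1 CDU arrangement implies that each unit must be able to carry the pod’s liquid-cooled load with one CDU out of service. The sketch below computes that minimum per-unit duty; treating the full 73 kW per rack as liquid-cooled heat is a conservative assumption, since part of each rack’s load is still removed by room air, and actual CDU model ratings are in the engineering package.

# Illustrative N+1 sizing check for the three liquid-to-liquid CDUs serving
# IT Room 2. Rack counts and densities come from this document; the split of
# each 73 kW rack's heat between liquid and air is not specified here, so the
# result is an upper-bound minimum per-unit duty only.

liquid_racks = 16
rack_kw = 73
cdus_installed = 3
redundancy = 1                    # N+1: design load must be met with one CDU out

liquid_load_kw = liquid_racks * rack_kw          # upper bound: all heat to liquid
min_cdu_duty_kw = liquid_load_kw / (cdus_installed - redundancy)

print(f"Liquid-cooled load (upper bound): {liquid_load_kw} kW")
print(f"Minimum per-CDU duty for N+1:     {min_cdu_duty_kw:.0f} kW")
# Each CDU must be able to carry ~584 kW with one unit out of service.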

IT Room 2 Attributes
IT load: 1808 kW
Supply voltage to IT: 415 V
Single or dual cord: Dual
Number of 73 kW liquid-cooled racks: 16
Number of 40 kW networking racks: 16
IT floor space: 1711 ft²
CRAC/CRAH type: Fan wall
CRAC/CRAH redundancy: N+1
CW supply temperature: 70 °F
CW return temperature: 86 °F
Containment type: Ducted hot aisle
CDU type: L2L
CDU redundancy: N+1
CDU CW supply temperature: 88 °F
CDU CW return temperature: 104 °F
TCS loop supply temperature: 104 °F
TCS loop return temperature: 122 °F

Design Options
This reference design can be modified as follows without a significant effect on the design’s performance attributes:
• Use Uniflair FXCV fan walls
• CRAHs can be selected instead of fan walls
• Variations in AI cluster configuration


Lifecycle Software
High-density AI clusters push the limits of data center facility infrastructure, so it’s
critical to leverage advanced planning and operation tools to ensure safe and
reliable operations.
Planning & Design
Electrical Safety and Reliability: Due to the high amount of power supplied to an
AI cluster, design specifications such as available fault current, arc flash hazards,
and breaker selectivity must be analyzed in the design phase. Applications like
Ecodial and ETAP simulate the electrical design and reduce the chance of costly
mistakes or, worse, injury.
Cooling: AI clusters are pushing the limits of what can be done with air-cooling.
Modeling the IT space with computational fluid dynamics (CFD) helps spot issues
including high pressure areas, rack recirculation, and hot spots. This is especially
true when retrofitting an existing data center with an AI cluster. Schneider
Electric’s IT Advisor CFD can quickly model airflow, allowing rapid iteration to find
the best design and layout.
Operations
EcoStruxure™ is Schneider Electric’s open, interoperable, Internet of Things (IoT)-enabled system architecture and platform. It consists of three layers: connected products, edge control, and applications, analytics, and services.

EcoStruxure Data Center is a combination of three domains of EcoStruxure: Power, Building, and IT. Each domain is focused on a subsystem of the data center: power, cooling, and IT. Combined, these three domains reduce risk, increase efficiency, and speed operations across the entire facility.

• EcoStruxure Power monitors power quality and generates alerts while protecting and controlling the electrical distribution system of the data center from the MV level to the LV level. It provides monitoring and alerting on any device and uses predictive analytics for increased safety, availability, and efficiency while lowering maintenance costs.

• EcoStruxure Building controls cooling effectively while driving reliability, efficiency, and safety of building management, security, and fire systems. It performs data analytics on assets, energy use, and operational performance.

• EcoStruxure IT makes IT infrastructure more reliable and efficient while simplifying management by offering complete visibility, alerting, and modelling tools. It receives data and delivers alerts, predictive analytics, and system advice on any device to optimize availability and efficiency in the IT space.

Visit EcoStruxure for Data Center for more details.

There are several options for supervisory visibility and control. AVEVA Unified Operations Center can provide visibility at a site or across an entire enterprise.


Design Attributes

OVERVIEW
Target availability: Tier III
Annualized PUE at 100% load (1A & 2 / 1B & 2 / 1C & 2):
  San Francisco, CA: 1.17 / 1.18 / 1.16
  Dallas, TX: 1.22 / 1.23 / 1.21
Data center IT capacity: 3672 – 3818 kW
Data center overall space: 32,920 ft²
Maximum rack density: 73 kW/rack

FACILITY POWER
Total facility peak power (IT and cooling): 6500 kW
Total amps (IT main bus, each): 3000 A
Input voltage (IT main bus): 480 V
Switchboard kAIC: 65 kA
Generator redundancy (IT main bus): Distributed redundant
IT power path: Dual
IT space UPS capacity, per powertrain: 2500 kW
IT space UPS redundancy: Distributed redundant
IT space UPS runtime @ rated load: 5 minutes
IT space UPS output voltage: 480 V
Total amps (facility cooling bus, each): 1600 A
Input voltage (facility cooling bus): 480 V
Switchboard kAIC (facility cooling bus): 35 kA
Generator redundancy (facility cooling bus): 2N

FACILITY COOLING
Total max cooling capacity (chillers): 4976 (Dallas) / 5450 (San Francisco) kW
Input voltage: 480 V
Heat rejection medium: Chilled water
Chiller redundancy: N+1
Outdoor heat exchange: Packaged chiller with free cooling
CW supply temperature: 68 – 70 °F
CW return temperature: 86 °F
CW supply temp (IT Room 2, to CDUs): 88 °F
CW return temp (IT Room 2, from CDUs): 104 °F
Combined* storage tank size: 989 ft³
Ride-through time: 5 minutes
Outdoor ambient temperature range: 11.1 to 110.5 °F
Economizer type: Water-side
*Summation of all three chilled water loops

Document Number RD100DS Revision 1


[EcoStruxure™ Reference Design 100] 10

Design Attributes continued

IT SPACE (values listed as retrofit room 1A / 1B / 1C / new room 2; total)
IT load: 2010 / 1864 / 1864 / 1808; total 3672 – 3818 kW
Supply voltage to IT: 415 / 415 / 415 / 415; 415 V
Maximum density: 40 / 73 / 73 / 73; 73 kW/rack
Number of racks: 110 / 96 / 96 / 32; total 128 – 142 racks
IT floor space: 4467 / 4467 / 4467 / 522; total 4989 ft²
Single or dual cord: Dual / Dual / Dual / Dual; Dual
CRAC/CRAH type: Fan wall / Fan wall / Fan wall / Fan wall; Fan wall
CRAC/CRAH redundancy: N+1 / N+1 / N+1 / N+1; N+1
Containment type: Ducted hot aisle / Ducted hot aisle / Ducted hot aisle / Ducted hot aisle; Ducted hot aisle
CDU type: N/A / L2A / L2L / L2L
CDU redundancy: N/A / N / N+1 / N+1
CW supply temperature: 68 / 68 / 68 / 70 °F
CW return temperature: 86 / 86 / 86 / 86 °F
CDU CW supply temperature: N/A / N/A / 68 / 88 °F
CDU CW return temperature: N/A / N/A / 86 / 104 °F
TCS loop supply temperature: N/A / 104 / 104 / 104 °F
TCS loop return temperature: N/A / 122 / 122 / 122 °F
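
For reference, the annualized PUE values listed in the overview above are energy ratios over a full year, not ratios of instantaneous peak power. A minimal sketch of that calculation follows; the hourly load and overhead profiles in it are hypothetical placeholders, not measured data from this design.

# Minimal sketch of how an annualized PUE figure is formed: total facility
# energy divided by IT energy over a full year. The hourly profiles below
# are hypothetical placeholders, not data from this design.

def annualized_pue(facility_kwh_by_hour, it_kwh_by_hour):
    """Annualized PUE = total facility energy / total IT energy."""
    return sum(facility_kwh_by_hour) / sum(it_kwh_by_hour)

# Hypothetical example: constant 3,600 kW IT load, with cooling/auxiliary
# overhead that is lower in cooler months thanks to free cooling.
it = [3600.0] * 8760
overhead = [3600.0 * (0.12 if h < 4380 else 0.22) for h in range(8760)]
facility = [i + o for i, o in zip(it, overhead)]

print(f"Annualized PUE: {annualized_pue(facility, it):.2f}")   # ~1.17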


Schneider Electric Life-Cycle Services

Life Cycle Services
Plan: What are my options?
Install: How do I install and commission?
Operate: How do I operate and maintain?
Optimize: How do I optimize?
Renew: How do I renew my solution?

1. Team of over 7,000 trained specialists covering every phase and system in the data center
2. Standardized, documented, and validated methodology leveraging automation tools and repeatable processes developed over 45 years
3. Complete portfolio of services to solve your technical or business challenge, simplify your life, and reduce costs

Get more information for this design:

Engineering Package
Every reference design is built with technical documentation for engineers and project managers. This includes engineering schematics (CAD, PDF), floor layouts, equipment lists containing all the components used in the design, and 3D images showing real-world illustrations of our reference designs. The package includes 3D spatial views, floor layouts, one-line schematics, and a bill of materials.
Documentation is available in multiple formats to suit the needs of both engineers and managers working on data center projects. For the engineering package of this design, please email us at [email protected].

Email [email protected] for further assistance.

Document Number RD100DS Revision 1
