HPC Seminar – September 2013

Scale Out Computing With NeXtScale
Systems
Karl Hansen, HPC and Technical Computing, IBM systemX, Nordic

IBM Confidential – Presented under NDA

© 2013 IBM Corporation
IBM Systems and Technology Group Technical Symposium
Melbourne Australia | 20 – 23 August 2013

Journey Started in 2008 – iDataPlex
Flexible computing optimized for Data Center serviceability
Race car design
– Performance centric approach
– Cost efficient
– Energy Conscious

All-front access
– Reduces time behind the rack
– Reduces cabling errors
– Highly energy efficient

Low cost, Flexible chassis
– Support for servers, GPUs, and Storage
– Easy to install and service
– Greater density than traditional 1U systems

Optimized for Top of Rack (TOR) switching
– No expensive mid plane
– Latency Optimized
– Open Ecosystem


IBM iDataPlex dx360 M4 Refresh 3

WHAT’S NEW:
Intel Xeon E5-2600 v2 product family
Intel Xeon Phi 7120P coprocessor
New 1866MHz and 1.35V RDIMMs
Higher Performance:
Intel Xeon E5-2600 v2 processors providing up to 12 cores, 30MB cache and 1866MHz
maximum memory speed to deliver more performance in the same power envelope
Intel Xeon Phi coprocessor delivers over 1 Teraflop of double precision peak
performance providing up to 4x more performance per watt than with processors alone
Increased memory performance with 1866MHz DIMMs and new energy efficient 1.35V
RDIMM options, ideal for HPC workloads

Learn More: http://www-03.ibm.com/systems/x/hardware/rack/dx360m4/index.html

2013 – Introducing IBM NeXtScale
A superior building block approach for scale-out computing

Standard rack chassis with Compute, Storage, and Acceleration building blocks
Primary target workloads: High Performance Computing, Public Cloud, Private Cloud

– Better data center density and flexibility
– Compatible with standard racks
– Optimized for Top of Rack switching
– Top-bin E5-2600 v2 processors
– Designed for solution redundancy
– The best of iDataPlex
– Very powerful roadmap

More coming

IBM NeXtScale: Elegant Simplicity
One Architecture Optimized for Many Use Cases

One simple, light chassis – the IBM NeXtScale n1200 – in an IBM rack or a client rack, with three node types:

Compute – IBM NeXtScale nx360 M4
– Dense compute
– Top performance
– Energy efficient
– IO flexibility
– Swappable

Storage – nx360 M4 + Storage NeX
– Add RAID card + cable
– Dense 32TB in 1U
– Simple direct connect
– No trade-offs in the base node
– Mix and match

PCI (GPU / Phi) – nx360 M4 + PCI NeX
– Add PCI riser + GPUs
– 2 x 300W GPU in 1U
– Full x16 Gen3 connect
– No trade-offs in the base node
– Mix and match


Deep Dive into the NeXtScale n1200 Enclosure
The ultimate high-density server platform, designed for your Technical, Grid, and Cloud computing workloads.

n1200 enclosure (new MT: 5456) – twice the density of regular 1U servers.

Dense Chassis – The Foundation
Form factor: 6U tall, fits a standard rack, with 12 half-wide bays
Number of bays: 12
Power supplies: 6 x 900W hot-swap, 80 PLUS Platinum high-efficiency; non-redundant, N+1, or N+N configurations
Fans: 10 hot-swap

Mix and match compute, storage, or GPU nodes within the chassis
– Each system is individually serviceable
– No left- or right-specific parts – a system can be installed in any bay
Up to 7 chassis (up to 84 servers) in a standard 19” rack
No in-chassis networking integration
– Systems connect to TOR switches
– No need to manage the chassis via FSM, IMM, etc.
Shared power and cooling
– 6 hot-swap power supplies (non-redundant, N+1, or N+N) keep business-critical applications up and running
– 10 hot-swap fans
Front-access cabling – no need to go to the rear of the rack or chassis
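The density figures above follow from simple arithmetic. A minimal sketch (the 42U rack height is an assumption for illustration; the slide only says "standard 19” rack"):

```python
# Rack density for NeXtScale n1200 chassis in a standard 19" rack.
CHASSIS_U = 6           # n1200 enclosure height
NODES_PER_CHASSIS = 12  # half-wide bays per enclosure
RACK_U = 42             # assumed standard rack height

chassis_per_rack = RACK_U // CHASSIS_U
nodes_per_rack = chassis_per_rack * NODES_PER_CHASSIS
print(chassis_per_rack, nodes_per_rack)  # 7 chassis, 84 nodes
```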


NeXtScale – Dense Chassis
IBM NeXtScale n1200 enclosure

Optimized shared infrastructure
– 6U chassis, 12 bays, ½-wide component support
– Up to 6 x 900W power supplies, N+N or N+1 configurations
– Up to 10 hot-swap fans
– Fan and Power Controller
– Mix and match compute, storage, or GPU nodes
– No built-in networking
– No chassis management required

Front view: 12 half-wide bays (Bay 1 through Bay 12), shown with 12 compute nodes installed.
Rear view: Fan and Power Controller in the center, with 3 power supplies and 5 x 80mm fans on each side.


n1200 Chassis Details
Chassis height: 262.7 mm (6U)
Rear view: 10 x 80mm fans; 6 x hot-swap 80 PLUS Platinum 900W power supplies
Power design supports non-redundant, N+1, and N+N power
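What each redundancy policy leaves as usable budget with six 900W supplies can be sketched as follows (an illustration of the usual interpretation of these terms: N+1 survives any single supply failure, N+N survives loss of one whole 3-supply feed; the slide itself does not spell this out):

```python
# Usable power for a 6-bay, 900W-per-PSU chassis under each redundancy policy.
PSU_W, PSU_COUNT = 900, 6

def usable_watts(policy: str) -> int:
    """Watts available while still covering the policy's failure scenario."""
    if policy == "non-redundant":
        return PSU_W * PSU_COUNT        # all 6 carry load; no failure cover
    if policy == "N+1":
        return PSU_W * (PSU_COUNT - 1)  # any single PSU may fail
    if policy == "N+N":
        return PSU_W * (PSU_COUNT // 2) # one whole feed (3 PSUs) may fail
    raise ValueError(policy)

for p in ("non-redundant", "N+1", "N+N"):
    print(p, usable_watts(p))  # 5400, 4500, 2700
```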


View of Chassis
Front view: access cover, 12 node bays, fan & system LEDs.
Rear view: power distribution board, fan/power control card, 6 power supplies, 10 x 80 mm fans.


NeXtScale – The Compute Node
IBM NeXtScale nx360 M4 – Hyperscale Server

Simple architecture
– New ½-wide 1U, 2-socket server
– Next-generation Intel processors (Ivy Bridge EP)
– Flexible slot-less I/O design
– Generous PCIe capability
– Open design, works with existing x86 tools
– Versatile design with flexible Native Expansion options:
  • 32TB local storage (Nov)
  • GPU/Phi adapters (2014)

Board callouts: power button and information LED, KVM connector, 1 GbE ports, IMM management port, labeling tag, drive bay(s), x24 PCIe 3.0 slot, x8 mezzanine connector for a dual-port mezzanine card (IB/Ethernet), CPU #1 and CPU #2 each flanked by two groups of 2x DIMMs (8 DIMMs total), and the power connector.

½ Wide Node Details
Dimensions: 216 ± 0.5 mm (8.5 inches) wide, 41 ± 0.5 mm tall

Storage choice
– 1 x 3.5” HDD
– 2 x 2.5” HDD/SSD
– 4 x 1.8” SSD

– Top-bin processors x 2
– Power interposer card
– Power button and information LEDs
– Motherboard: IMM v2 (IPMI / SoL-compliant BMC), 2 x 1Gb Intel NICs, 8 DIMMs @ 1866MHz
– Mezzanine card (IO – IB, 10Gb)
– PCIe adapter – full-height, half-length
– All external cable connectors exit the front of the server for easy access in the cool aisle


nx360 M4 Node

The essentials
– Dedicated or shared 1Gb port for management
– Two production 1Gb Intel NICs and one additional port for the IMM
– Standard PCI card support
– Flexible LOM/mezzanine for IO expansion
– Power, basic Light Path diagnostics, and KVM crash-cart access
– Simple pull-out asset tag for naming or RFID
– Intel Node Manager 2.0 power metering/management

The first silver System x server
– Clean, simple, and lower cost
– Blade-like weight and size; rack-like individuality and control
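Because the IMM v2 is an IPMI-compliant BMC, the Node Manager power readings can be pulled with standard tooling such as `ipmitool dcmi power reading`. A minimal parsing sketch (the sample text below is illustrative output, not captured from an nx360 M4; real field sets vary by BMC firmware):

```python
import re

# Illustrative output of `ipmitool dcmi power reading` (not from real hardware).
SAMPLE = """\
Instantaneous power reading:                   231 Watts
Minimum during sampling period:                198 Watts
Maximum during sampling period:                304 Watts
Average power reading over sample period:      226 Watts
"""

def parse_power(text: str) -> dict:
    """Return {field name: watts} for every 'NNN Watts' line in the output."""
    readings = {}
    for line in text.splitlines():
        m = re.match(r"(.+?):\s+(\d+)\s+Watts", line)
        if m:
            readings[m.group(1).strip()] = int(m.group(2))
    return readings

print(parse_power(SAMPLE)["Instantaneous power reading"])  # 231
```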

IBM NeXtScale: Elegant Simplicity
NeXtScale will keep you in front (of the rack, that is)

Hot aisle: >100ºF. Cold aisle: 65-80ºF. Know which cable you are pulling – which aisle would you rather be working in?

Service NeXtScale from the front of the rack
– Cold-aisle accessibility to most components
– Tool-less access to servers
– Server removal without unplugging power
– Front access to networking cables & switches
– Simple cable routing (front or traditional rear switching)
– Power and LEDs all front-facing


nx360 M4 block diagram (figure)


nx360 M4 is optimized for HPC and Grid
Full CPU lineup supported, up to 130W
8 DIMM slots
– Optimized for maximum speed at 1 DIMM per channel (1866MHz)
– Optimized for HPC workloads
  • 2-4GB/core with 24 cores fits nicely into the 16GB cost sweet spot
– Optimized for cost (board reduced to 8 layers; HP uses 12)
– Optimized for efficiency (greater processor spread to reduce preheating)

InfiniBand FDR mezzanine
– Optimized for performance and cost

Chassis capable of non-redundant or N+1 power to reduce cost
– HPC typically deploys non-redundant power (software resiliency)
– An N+1 option protects all 12 nodes from throttling on a PSU failure for a minimal cost add

Flexible integrated storage for boot and scratch
– 1 x 3.5” (or stateless – no HDD) is common for HPC
– 2 x 2.5” is used in some grid applications
– 4 x 1.8” SSD for low power and additional flexibility

Enabled for GPU and storage trays
– Pre-positioned PCIe slots (1 in front, 1 in back)
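The GB-per-core arithmetic behind the 8-slot DIMM sizing can be checked directly. A sketch (the DIMM sizes listed are common options used for illustration, not a statement of the nx360 M4's exact supported set):

```python
# GB per core at 1 DIMM per channel (8 slots) on a 2-socket, 24-core node.
DIMM_SLOTS, CORES = 8, 24

for dimm_gb in (4, 8, 16, 32):
    total = DIMM_SLOTS * dimm_gb
    print(f"{dimm_gb:2d}GB DIMMs -> {total:3d}GB total, "
          f"{total / CORES:.1f} GB/core")
# 8GB DIMMs land at ~2.7 GB/core, inside the 2-4 GB/core HPC target;
# 16GB DIMMs give ~5.3 GB/core for memory-heavier jobs.
```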

iDataPlex and NeXtScale – Complementary Offerings
iDataPlex is being refreshed with Intel Xeon E5-2600 v2 processors – full stack
– Will ship thousands of iDataPlex nodes in 3Q
– Expect continued sales through 2015
– Clients with proven iDataPlex solutions can continue to purchase

iDataPlex provides several functions that Gen 1 NeXtScale will not
– Water cooling – stay with iDataPlex for direct water cooling until the next generation
– 16 DIMM slots – for users that need 256GB or more of memory, iDataPlex is a better choice until the next-gen NeXtScale offering
– Short term, we will use iDataPlex for our GPU/GPGPU support

Key point – NeXtScale is not a near-term replacement for iDataPlex
NeXtScale will be our architecture of choice for HPC, Cloud, Grid, IPDC & Analytics
– More flexible architecture with a stronger roadmap
– As NeXtScale continues to add functionality, iDataPlex will no longer be needed – outlook 2015


NeXtScale Improves on an Already Great iDataPlex Platform

iDataPlex requires a unique rack to achieve density – most customers prefer standard racks. NeXtScale fits in any standard rack.

84 servers per rack are difficult to utilize and configure – InfiniBand switches come in multiples of 18 or 24 ports, creating a mismatch with 84 servers. A single NeXtScale rack holds all InfiniBand and Ethernet switching with 72 servers – the perfect multiple.

iDataPlex clusters are difficult to configure, with unused switch ports at maximum density. NeXtScale offers 72 nodes per rack plus infrastructure, making configuration straightforward.

The wide iDataPlex rack drives longer, higher-cost cables. NeXtScale is optimized for the 19” rack, reducing rack-to-rack cable length and cost.

Other servers and storage in a cluster force the addition of standard racks to the layout, which eliminates the iDataPlex data center advantage. NeXtScale, System x, and storage use the same rack – easy to optimize and deploy.

Cabling illustration: iDataPlex requires longer optical cables (x3, x6 runs); NeXtScale uses shorter copper cables (x3, x6 runs) with filler panels.
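The port-count mismatch described above is easy to verify: 72 is a common multiple of typical InfiniBand switch radixes, while 84 always strands ports. A sketch (18, 24, and 36 are used here as example switch port counts):

```python
def leftover_ports(servers: int, switch_ports: int) -> int:
    """Unused ports on the last switch when cabling `servers` nodes."""
    return (-servers) % switch_ports

for radix in (18, 24, 36):
    print(radix, leftover_ports(84, radix), leftover_ports(72, radix))
# 84 servers strand 6, 12, and 24 ports respectively;
# 72 servers divide evenly into all three radixes.
```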


NeXtScale Product Timeline

Shipping: Oct 2013
– 6U dense chassis
– 1U-tall ½-wide compute node

Shipping: Nov 2013
– Storage Native Expansion (Storage NeX)
– 1U (2U-tall ½-wide combination)
– Up to 32TB total capacity

Shipping: 1H 2014
– PCI Native Expansion (PCI NeX)
– 1U (2U-tall ½-wide combination)
– GPU or Xeon Phi support

A lot more coming
– More storage
– More IO options
– Next-gen processors
– MicroServers

The 6U chassis will support mix-and-match nodes


Storage NeX – Internals
– Seven LFF (3.5”) drives internal to the Storage NeX (drives 1-7), plus one additional drive (drive 0) on the nx360 M4
– Cable-attached to a SAS or SATA RAID adapter or HBA on the nx360 M4
– Drives are not hot-swap
– Initial capacity is up to 4TB per drive
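The "Dense 32TB in 1U" figure quoted earlier follows directly from these drive counts (a trivial check):

```python
# Storage NeX capacity: 7 internal LFF drives plus 1 drive on the nx360 M4.
INTERNAL_DRIVES, NODE_DRIVES, DRIVE_TB = 7, 1, 4

total_tb = (INTERNAL_DRIVES + NODE_DRIVES) * DRIVE_TB
print(total_tb)  # 32 TB per nx360 M4 + Storage NeX combination
```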



PCI NeX
– Supports 2 full-height, full-length, double-wide adapters at up to 300W each
– Provides 2 x16 slots
– Requires 1300W power supplies in the chassis
– Will support Intel Xeon Phi and NVIDIA GPUs
– Expected availability: 1H 2014


PCI NeX – Slots
Block diagram summary: the PCI NeX tray carries GPU 0 (upper 1U) and GPU 1 (lower 1U), each on a PCIe Gen3 x16 link, plus an x8 FHHL slot. On the server planar, Proc 0 (x16 and x24 lanes) and Proc 1 (x8 lanes), each with its own DRAM and linked by QPI, feed the x24 planar connector, the x16 planar connector, and the x8 IB/10GbE mezzanine connector.
Note: all PCIe slots & server GPU slots have separate SMBus connections.


How the PCI NeX attaches
PCIe connector locations (figure)


Dense chassis – flexibility for the future
Room to scale – future-proof flexibility. The investment platform for HPC.

– Dense compute: 2-socket, 12 compute nodes, ½ wide
– Dense storage: 2-socket, 6 compute nodes, ½ wide, 8 x 3.5” HDD
– GPU / accelerator: 2-socket, 6 compute nodes + 2 GPUs
– GPU / accelerator with IO: 2-socket, 4 compute nodes + 4 GPUs
– Full-wide: 1-2 socket (full-wide), 6 compute nodes, memory- or HDD-rich
– Ultra-dense: dense microservers (3U)
– Dense storage: dense hot-swap storage (4U)
– More…


More Related Content

PPT
IBM Special Announcement session Intel #IDF2013 September 10, 2013
PPTX
ttec / transtec | IBM NeXtScale
PDF
Presentazione IBM Flex System e System x Evento Venaria 14 ottobre
PDF
Summit workshop thompto
PDF
OpenCAPI-based Image Analysis Pipeline for 18 GB/s kilohertz-framerate X-ray ...
PPTX
Green Networking With Blade
PDF
PLX Technology Company Overview
PDF
RONNIEE Express: A Dramatic Shift in Network Architecture
IBM Special Announcement session Intel #IDF2013 September 10, 2013
ttec / transtec | IBM NeXtScale
Presentazione IBM Flex System e System x Evento Venaria 14 ottobre
Summit workshop thompto
OpenCAPI-based Image Analysis Pipeline for 18 GB/s kilohertz-framerate X-ray ...
Green Networking With Blade
PLX Technology Company Overview
RONNIEE Express: A Dramatic Shift in Network Architecture

What's hot (18)

PDF
Fujitsu World Tour 2017 - Compute Platform For The Digital World
PDF
IBM Intelligent Cluster-Data Sheet
PDF
Xilinx Edge Compute using Power 9 /OpenPOWER systems
PDF
Bullx HPC eXtreme computing cluster references
PPT
Virtualización en la Red del Data Center - Extreme Networks
PDF
SCFE 2020 OpenCAPI presentation as part of OpenPWOER Tutorial
PDF
Optimized packet processing software for networking and security
PPTX
Altera’s Role In Accelerating the Internet of Things
PDF
Fortissimo converged super_converged_hyper
PDF
SuperO Desktop gaming solutions -by Supermicro
PDF
SGI HPC Update for June 2013
PDF
Supermicro X12 Performance Update
DOC
Blade server technology report
PPTX
Consumption Based On-Demand Private Cloud in a Box
PDF
@IBM Power roadmap 8
PPTX
Rack Cluster Deployment for SDSC Supercomputer
PDF
Open compute technology
 
PDF
OpenPOWER System Marconi100
Fujitsu World Tour 2017 - Compute Platform For The Digital World
IBM Intelligent Cluster-Data Sheet
Xilinx Edge Compute using Power 9 /OpenPOWER systems
Bullx HPC eXtreme computing cluster references
Virtualización en la Red del Data Center - Extreme Networks
SCFE 2020 OpenCAPI presentation as part of OpenPWOER Tutorial
Optimized packet processing software for networking and security
Altera’s Role In Accelerating the Internet of Things
Fortissimo converged super_converged_hyper
SuperO Desktop gaming solutions -by Supermicro
SGI HPC Update for June 2013
Supermicro X12 Performance Update
Blade server technology report
Consumption Based On-Demand Private Cloud in a Box
@IBM Power roadmap 8
Rack Cluster Deployment for SDSC Supercomputer
Open compute technology
 
OpenPOWER System Marconi100
Ad

Similar to NeXtScale HPC seminar (20)

PPT
Xtw01t4v011311 i dataplex
PDF
Blue line Supermicro Server Building Block Solutions
PDF
IBM NeXtScale nx360 M4
PPT
Xtw01t7v021711 cluster
PPTX
22by7 and DellEMC Tech Day July 20 2017 - Power Edge
PPT
Introducing Affordable HPC or HPC for the Masses - IBM NeXtScale System
PPT
Jan 2011 Presentation
PPTX
Data OnTAP Cluster Mode Administrator
PPTX
Arm DynamIQ: Intelligent Solutions Using Cluster Based Multiprocessing
 
PDF
IBM Flex System - Evento Torino 19 novembre 2013
PDF
IBM Flex System™ : ultimi annunci – Risultati migliori, costi inferiori, per ...
PDF
IBM System x iDataPlex dx360 M4
PPT
HP Bladesystem Overview September 2009
PPTX
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
PPTX
Red Hat Storage Day Boston - Supermicro Super Storage
PDF
Evolution of Supermicro GPU Server Solution
PPT
Grid rac preso 051007
PDF
IBM HPC Transformation with AI
PDF
OpenPOWER/POWER9 Webinar from MIT and IBM
Xtw01t4v011311 i dataplex
Blue line Supermicro Server Building Block Solutions
IBM NeXtScale nx360 M4
Xtw01t7v021711 cluster
22by7 and DellEMC Tech Day July 20 2017 - Power Edge
Introducing Affordable HPC or HPC for the Masses - IBM NeXtScale System
Jan 2011 Presentation
Data OnTAP Cluster Mode Administrator
Arm DynamIQ: Intelligent Solutions Using Cluster Based Multiprocessing
 
IBM Flex System - Evento Torino 19 novembre 2013
IBM Flex System™ : ultimi annunci – Risultati migliori, costi inferiori, per ...
IBM System x iDataPlex dx360 M4
HP Bladesystem Overview September 2009
Ibm symp14 referent_marcus alexander mac dougall_ibm x6 und flex system
Red Hat Storage Day Boston - Supermicro Super Storage
Evolution of Supermicro GPU Server Solution
Grid rac preso 051007
IBM HPC Transformation with AI
OpenPOWER/POWER9 Webinar from MIT and IBM
Ad

More from IBM Danmark (20)

PPTX
DevOps, Development and Operations, Tina McGinley
PPTX
Velkomst, Universitetssporet 2013, Pia Rønhøj
PPTX
Smarter Commerce, Salg og Marketing, Thomas Steglich-Andersen
PPT
Mobile, Philip Nyborg
PPTX
IT innovation, Kim Escherich
PPTX
Echo.IT, Stefan K. Madsen
PPT
Big Data & Analytics, Peter Jönsson
PPTX
Social Business, Alice Bayer
PDF
Numascale Product IBM
PDF
Mellanox IBM
PDF
Intel HPC Update
PDF
IBM general parallel file system - introduction
PDF
Future of Power: PowerLinux - Jan Kristian Nielsen
PDF
Future of Power: Power Strategy and Offerings for Denmark - Steve Sibley
PDF
Future of Power: Big Data - Søren Ravn
PDF
Future of Power: IBM PureFlex - Kim Mortensen
PDF
Future of Power: IBM Trends & Directions - Erik Rex
PDF
Future of Power: Håndtering af nye teknologier - Kim Escherich
PDF
Future of Power - Lars Mikkelgaard-Jensen
PDF
Future of Power: IBM Power - Lars Johanneson
DevOps, Development and Operations, Tina McGinley
Velkomst, Universitetssporet 2013, Pia Rønhøj
Smarter Commerce, Salg og Marketing, Thomas Steglich-Andersen
Mobile, Philip Nyborg
IT innovation, Kim Escherich
Echo.IT, Stefan K. Madsen
Big Data & Analytics, Peter Jönsson
Social Business, Alice Bayer
Numascale Product IBM
Mellanox IBM
Intel HPC Update
IBM general parallel file system - introduction
Future of Power: PowerLinux - Jan Kristian Nielsen
Future of Power: Power Strategy and Offerings for Denmark - Steve Sibley
Future of Power: Big Data - Søren Ravn
Future of Power: IBM PureFlex - Kim Mortensen
Future of Power: IBM Trends & Directions - Erik Rex
Future of Power: Håndtering af nye teknologier - Kim Escherich
Future of Power - Lars Mikkelgaard-Jensen
Future of Power: IBM Power - Lars Johanneson

Recently uploaded (20)

PDF
Transform Your ITIL® 4 & ITSM Strategy with AI in 2025.pdf
PDF
Web App vs Mobile App What Should You Build First.pdf
PPTX
MicrosoftCybserSecurityReferenceArchitecture-April-2025.pptx
PPTX
cloud_computing_Infrastucture_as_cloud_p
PDF
A contest of sentiment analysis: k-nearest neighbor versus neural network
PDF
ENT215_Completing-a-large-scale-migration-and-modernization-with-AWS.pdf
PPTX
Programs and apps: productivity, graphics, security and other tools
PDF
NewMind AI Weekly Chronicles - August'25-Week II
PDF
Getting started with AI Agents and Multi-Agent Systems
PDF
Microsoft Solutions Partner Drive Digital Transformation with D365.pdf
PPT
Module 1.ppt Iot fundamentals and Architecture
PPTX
Modernising the Digital Integration Hub
PDF
TrustArc Webinar - Click, Consent, Trust: Winning the Privacy Game
PDF
Enhancing emotion recognition model for a student engagement use case through...
PPTX
TLE Review Electricity (Electricity).pptx
PDF
A novel scalable deep ensemble learning framework for big data classification...
PDF
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
PDF
August Patch Tuesday
PDF
Univ-Connecticut-ChatGPT-Presentaion.pdf
PPTX
O2C Customer Invoices to Receipt V15A.pptx
Transform Your ITIL® 4 & ITSM Strategy with AI in 2025.pdf
Web App vs Mobile App What Should You Build First.pdf
MicrosoftCybserSecurityReferenceArchitecture-April-2025.pptx
cloud_computing_Infrastucture_as_cloud_p
A contest of sentiment analysis: k-nearest neighbor versus neural network
ENT215_Completing-a-large-scale-migration-and-modernization-with-AWS.pdf
Programs and apps: productivity, graphics, security and other tools
NewMind AI Weekly Chronicles - August'25-Week II
Getting started with AI Agents and Multi-Agent Systems
Microsoft Solutions Partner Drive Digital Transformation with D365.pdf
Module 1.ppt Iot fundamentals and Architecture
Modernising the Digital Integration Hub
TrustArc Webinar - Click, Consent, Trust: Winning the Privacy Game
Enhancing emotion recognition model for a student engagement use case through...
TLE Review Electricity (Electricity).pptx
A novel scalable deep ensemble learning framework for big data classification...
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
August Patch Tuesday
Univ-Connecticut-ChatGPT-Presentaion.pdf
O2C Customer Invoices to Receipt V15A.pptx

NeXtScale HPC seminar

  • 1. HPC Seminar – September 2013 Scale Out Computing With NeXtScale Systems Karl Hansen, HPC and Technical Computing, IBM systemX, Nordic 1 IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 2. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 Journey Started in 2008 – iDataPlex Flexible computing optimized for Data Center serviceability Race car design – Performance centric approach – Cost efficient – Energy Conscious All-front access – Reduces time behind the rack – Reduces cabling errors – Highly energy efficient Low cost, Flexible chassis – Support for servers, GPUs, and Storage – Easy to install and service – Greater density than traditional 1U systems Optimized for Top of Rack (TOR) switching – No expensive mid plane – Latency Optimized – Open Ecosystem 2 IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 3. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 IBM iDataPlex dx360 M4 Refresh 3 WHAT’S NEW: Intel Xeon E5-2600 v2 product family Intel Xeon Phi 7120P coprocessor New 1866MHz and 1.35V RDIMMs Higher Performance: Intel Xeon E5-2600 v2 processors providing up to 12 cores, 30MB cache and 1866MHz maximum memory speed to deliver more performance in the same power envelope Intel Xeon Phi coprocessor delivers over 1 Teraflop of double precision peak performance providing up to 4x more performance per watt than with processors alone Increased memory performance with 1866MHz DIMMs and new energy efficient 1.35V RDIMM options, ideal for HPC workloads Learn More: http://www-03.ibm.com/systems/x/hardware/rack/dx360m4/index.html 3 IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 4. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 2013 - Introducing IBM NeXtScale A superior building block approach for scale-out computing Standard Rack Chassis Primary Target Workloads High Performance Computing Public Cloud Private Cloud Compute Storage Better data center density and flexibility Compatible with standard racks Optimized for Top of Rack Switching Top BIN E-5 2600 v2 processors Acceleration Designed for solution redundancy The best of iDataPlex Very powerful roadmap More Coming 4 IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 5. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 IBM NeXtScale: Elegant Simplicity One Architecture Optimized for Many Use Cases IBM Rack or Client Rack One Simple Light Chassis IBM NeXtScale n1200 Compute Storage PCI – GPU / Phi IBM NeXtScale nx360 M4 nx360 M4 + Storage NeX nx360 M4 + PCI NeX Dense Compute Add RAID card + cable Add PCI riser + GPUs Top Performance Dense 32TB in 1U 2 x 300W GPU in 1U Energy Efficient Simple direct connect Full x16 Gen3 connect IO flexibility No trade offs in base No trade offs in base Swappable Mix and Match Mix and Match 5 IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 6. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 Deep Dive into the NeXtScale n1200 enclosure The ultimate high density server, designed for your Technical, Grid, and Cloud computing workloads. n1200 enclosure New MT: 5456 Twice the amount of density than regular 1U servers. n1200 Enclosure Dense Chassis – The Foundation Form factor 6U tall with 12 half-wide bays Mix and match compute, storage, or GPU nodes within chassis 6U tall – standard rack Number of Bays 12 Power Supplies 6 hot swap, non redundant, N+N or N+1 Redundant. 80 PLUS Platinum high energy efficiency 900W Fans 10 Hot swap – Each system is individually serviceable – No left or right specific parts – meaning system can be put in any slot Can have up to 7 chassis (up to 84 servers) in a standard 19” rack No in-chassis networking integration – Systems connect to TOR switches – No need to manage the chassis via FSM, iMM, etc Shared power and cooling – 6 non redundant, N+1, N+N, hot swap power supplies to keep business critical applications up and running – 10 hot swap fans Front access cabling – no need to go to rear of the rack or chassis 6 IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 7. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 NeXtScale – Dense Chassis IBM NeXtScale n1200 enclosure The Dense Chassis System infrastructure Optimized shared infrastructure Bay 11 6U Chassis, 12 bays ½ wide component support Up to 6 900W power supplies N+N or N+1 configurations Up to 10 hot swap fans Fan and Power Controller Mix and match compute, storage, or GPU nodes No built in networking No chassis management required Bay 12 Bay 9 Bay 10 Bay 7 Bay 8 Bay 5 Bay 6 Bay 3 Bay 4 Bay 1 Bay 2 Bay 11 Front view of the IBM NeXtScale n1200 enclosure shown with 12 compute nodes installed 5x 80mm fans 3x power supplies 7 Fan and Power Controller 3x power supplies 5x 80mm fans Rear view of the IBM NeXtScale n1200 enclosure IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 8. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 n1200 Chassis Details Rear View 262.7 mm (6U) Front View 10 x 80mm Fans 6 x Hot Swap 80+ Platinum 900W Power Supplies Power design supports non-redundant, N+1, and N+N power 8 IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 9. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 View of Chassis Rear View Front View Power Distribution Board Access Cover Fan / Power Control Card 6 Power Supplies 12 Node Bays 9 Fan & System LEDs 10 x 80 mm Fans IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 10. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 NeXtScale – The Compute Node System infrastructure Simple architecture The Compute Node New ½ Wide 1U, 2 socket server Next generation Intel processors (IVB EP) Flexible slot-less I/O design Generous PCIe capability Open design, works with existing x86 tools Versatile design with flexible Native Expansion options 32TB local storage (Nov) GPUs/Phi adapters (2014) IBM NeXtScale nx360 M4 – Hyperscale Server Power button and information LED PCIe 3.0 Slot 1 GbE ports Dual-port mezzanine card (IB/Ethernet) x8 mezz. connector KVM connector CPU #2 Labeling tag 2x DIMMs IMM management port Drive bay(s) 2x DIMMs 2x DIMMs 10 x24 PCIe 3.0 slot 2x DIMMs IBM Confidential – Presented under NDA CPU #1 Power connector © 2013 IBM Corporation
  • 11. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 ½ Wide Node Details 216 +/- 0.5 mm 8.5 inches 41 +/- 0.5 mm Storage Choice 1 x 3.5” HDD 2 x 2.5” HDD/SSD 4 x 1.8” SSD Power Interposer Card Top BIN processors x 2 All External Cable Connectors Out the front of server for easy access on cool aisle Power Button and Information LEDs 11 Motherboard IMM v2 (IPMI / SoL compliant BMC) 2 x 1Gb Intel NIC 8 DIMMs @ 1866MHz Mezzanine Card (IO – IB, 10Gb) PCIe Adapter – Full High, Half Length IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 12. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 nx360 M4 Node The essentials – Dedicated or shared 1Gb for management – Two production 1Gb Intel NICs and 1 additional port for IMM – Standard PCI card support – Flexible LOM/Mezzanine for IO expansion – Power, Basic LightPath, and KVM crash cart access – Simple pull out asset tag for naming or RFID – Intel Node Manager 2.0 Power Metering/Management The first silver System x server – Clean, simple, and lower cost – Blade like weight and size – rack like individuality and control 12 IBM Confidential – Presented under NDA © 2013 IBM Corporation
  • 13. IBM Systems and Technology Group Technical Symposium Melbourne Australia | 20 – 23 August 2013 IBM NeXtScale: Elegant Simplicity NeXtScale will keep you in front (of the rack that is) >100 ºF!!! 65-80ºF Know what cable you are pulling! Which aisle would rather be working in? Service NeXtScale from the front of the rack Cold aisle accessibility to most components Tool-less access to servers Server removal without unplugging power 13 Front-access to Networking cables & Switches Simple cable routing (front or traditional rear switching) Power and LEDs all front facing IBM Confidential – Presented under NDA © 2013 IBM Corporation
nx360 M4 Block Diagram

[Figure: nx360 M4 block diagram]
nx360 M4 is Optimized for HPC and Grid
– Full CPU lineup supported, up to 130W
– 8 DIMM slots
  – Optimized for maximum speed: 1866MHz at 1 DIMM per channel
  – Optimized for HPC workloads: 2–4GB/core across 24 cores fits nicely into the 16GB cost sweet spot
  – Optimized for cost: board reduced to 8 layers (HP uses 12)
  – Optimized for efficiency: greater processor spread reduces preheating
– InfiniBand FDR mezzanine – optimized for performance and cost
– Chassis capable of non-redundant or N+1 power to reduce cost
  – HPC typically deploys non-redundant power (software resiliency)
  – Option for N+1 protects 12 nodes from throttling on a PSU failure, for a minimal cost adder
– Flexible integrated storage for boot and scratch
  – 1x 3.5" (or stateless – no HDD) is common for HPC
  – 2x 2.5" is used in some grid applications
  – 4x 1.8" SSD for low power and additional flexibility
– Enabled for GPU and storage trays – pre-positioned PCIe slots (one in front, one in back)
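The 1-DIMM-per-channel point can be sanity-checked with a back-of-envelope bandwidth calculation. A minimal sketch, assuming 4 DDR3 memory channels per E5-2600 v2 socket (a spec not stated on the slide):

```python
# Back-of-envelope peak memory bandwidth at 1 DIMM per channel.
# Assumes 4 DDR3 channels per E5-2600 v2 socket (not stated on the slide).
CHANNELS_PER_SOCKET = 4
SOCKETS = 2
TRANSFERS_PER_SEC = 1866e6   # DDR3-1866
BYTES_PER_TRANSFER = 8       # 64-bit data path per channel

dimm_slots = CHANNELS_PER_SOCKET * SOCKETS   # 1 DIMM/channel across 2 sockets
peak_gbs_per_socket = CHANNELS_PER_SOCKET * TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9

print(f"DIMM slots at 1 per channel: {dimm_slots}")              # -> 8
print(f"Peak bandwidth per socket: {peak_gbs_per_socket:.1f} GB/s")  # -> 59.7 GB/s
```

Populating one DIMM per channel keeps every channel at the full 1866MHz, which is why the node stops at 8 slots rather than trading speed for capacity.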
iDataPlex and NeXtScale – Complementary Offerings
iDataPlex is being refreshed with Intel Xeon E5-2600 v2 processors – full stack
– Will ship thousands of iDataPlex nodes in 3Q
– Expect continued sales through 2015
– Clients with proven iDataPlex solutions can continue to purchase

iDataPlex provides several functions that Gen 1 NeXtScale will not
– Water cooling – stay with iDataPlex for direct water cooling until the next generation
– 16 DIMM slots – for users who need 256GB or more of memory, iDataPlex is the better choice until the next-generation NeXtScale offering
– Short term, iDataPlex remains the vehicle for GPU/GPGPU support

Key point – NeXtScale is not a near-term replacement for iDataPlex
NeXtScale will be our architecture of choice for HPC, Cloud, Grid, IPDC, and Analytics
– More flexible architecture with a stronger roadmap
– As NeXtScale continues to add functionality, iDataPlex will no longer be needed – outlook 2015
NeXtScale Improves on an Already Great iDataPlex Platform
– iDataPlex requires a unique rack to achieve density, and most customers prefer standard racks. NeXtScale fits in any standard rack.
– 84 servers per rack is difficult to utilize and configure – InfiniBand fits into multiples of 18 or 24, creating a mismatch with 84 servers. A single NeXtScale rack takes all InfiniBand and Ethernet switching with 72 servers – the perfect multiple.
– iDataPlex clusters are difficult to configure, leaving unused switch ports at maximum density. NeXtScale offers 72 nodes per rack plus infrastructure, making configuration straightforward.
– The wide iDataPlex rack drives longer, higher-cost cables. NeXtScale is optimized for the 19" rack, reducing rack-to-rack cable length and cost.
– Adding other servers and storage to a cluster forces standard racks into the layout, which eliminates the iDataPlex data-center advantage. NeXtScale, System x, and storage use the same rack – easy to optimize and deploy.

[Diagram: rack layouts – iDataPlex with longer optical cables vs. NeXtScale with shorter copper cables and filler panels]
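The 72-versus-84 arithmetic above can be checked directly. A small sketch, assuming 36-port InfiniBand switch ASICs as the building block behind the 18- and 24-port leaf groupings (an assumption; the slide only cites the multiples themselves):

```python
# Nodes left stranded after filling whole switch-port groups of each size.
def stranded(nodes: int, group: int) -> int:
    """Servers that do not fill a complete group of `group` switch ports."""
    return nodes % group

for nodes in (72, 84):
    leftovers = {g: stranded(nodes, g) for g in (18, 24, 36)}
    print(nodes, leftovers)
# 72 divides evenly by 18, 24, and 36; 84 leaves 12 stranded ports in every case.
```

This is the "perfect multiple" claim in numbers: a 72-node rack fills its leaf switches exactly, while an 84-node iDataPlex rack always leaves ports unused.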
NeXtScale Product Timeline
Shipping Oct 2013
– 6U dense chassis
– 1U-tall ½-wide compute node

Shipping Nov 2013
– Storage Native Expansion (Storage NeX): 1U (making a 2U-tall ½-wide unit)
– Up to 32TB total capacity

Shipping 1H 2014
– PCI Native Expansion (PCI NeX): 1U (making a 2U-tall ½-wide unit)
– GPU or Xeon Phi support

A lot more coming
– More storage
– More I/O options
– Next-generation processors
– Microservers
– The 6U chassis will support mixing and matching nodes
Storage NeX

[Figure: Storage NeX expansion unit]
Storage NeX – Internals
– Seven LFF (3.5") drives internal to the Storage NeX, plus one additional drive in the nx360 M4
– Cable-attached to a SAS or SATA RAID adapter or HBA in the nx360 M4
– Drives are not hot-swap
– Initial capacity is up to 4TB per drive

[Diagram: Storage NeX drive layout, drives numbered 0–7]
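The drive counts above line up with the 32TB figure quoted on the product timeline; a quick check:

```python
# Storage NeX capacity: 7 internal LFF drives plus the 1 drive in the nx360 M4,
# at the initial 4TB-per-drive maximum stated on the slide.
INTERNAL_DRIVES = 7
NODE_DRIVES = 1
TB_PER_DRIVE = 4

total_drives = INTERNAL_DRIVES + NODE_DRIVES
total_tb = total_drives * TB_PER_DRIVE
print(f"{total_drives} drives x {TB_PER_DRIVE}TB = {total_tb}TB")  # -> 8 drives x 4TB = 32TB
```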
PCI NeX

[Figure: PCI NeX expansion unit]
PCI NeX
– Supports 2 full-height, full-length, double-wide adapters at up to 300W each
– Provides 2 x16 slots
– Requires 1300W power supplies in the chassis
– Will support Intel Xeon Phi and NVIDIA GPUs
– Expected availability: 1H 2014
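A rough power budget makes the 1300W PSU requirement concrete. A sketch using the 300W-per-adapter and 130W-per-CPU figures from the slides; the 100W allowance for memory, drives, fans, and board overhead is an illustrative assumption, not a number from the deck:

```python
# Rough per-node power budget for a compute node paired with a PCI NeX tray.
ADAPTER_W = 300      # per GPU / Xeon Phi (from this slide)
CPU_W = 130          # top-bin TDP (from the nx360 M4 slide)
NODE_OTHER_W = 100   # assumed: memory, drives, fans, board overhead

compute_node_w = 2 * CPU_W + NODE_OTHER_W   # two-socket node alone
with_pci_nex_w = compute_node_w + 2 * ADAPTER_W

print(f"Compute node alone: ~{compute_node_w} W")  # -> ~360 W
print(f"Node + 2 adapters:  ~{with_pci_nex_w} W")  # -> ~960 W
```

Roughly tripling a node's draw is why accelerated configurations push the chassis onto the larger 1300W supplies.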
PCI NeX – Slots
Note: all PCIe slots and server GPU slots have separate SMBus connections.

[Diagram: PCI NeX block diagram – GPU 0 and GPU 1 on PCIe Gen3 x16 slots in the upper 1U; in the lower 1U, Proc 0 and Proc 1 (linked by QPI, each with DRAM) feed an x8 IB/10Gb mezzanine connector, x24 and x16 planar connectors, and an FHHL slot]
How the PCI NeX Attaches

[Figure: PCIe connector locations]
Dense Chassis – Flexibility for the Future
Room to scale – future-proof flexibility. The investment platform for HPC.
– Dense compute: 2-socket, 12 compute nodes (½-wide)
– Dense storage: 2-socket, 6 compute nodes (½-wide) + 8x 3.5" HDD
– Ultra-dense: dense microservers (3U)
– GPU/accelerator: 2-socket, 6 compute nodes + 2 GPUs
– GPU/accelerator with I/O: 2-socket, 4 compute nodes + 4 GPUs
– Full-wide: 1–2 socket (full-wide), 6 compute nodes, memory- or HDD-rich
– Dense storage: dense hot-swap storage (4U)
– More…