Virtual I/O Architecture and Performance
Li Ming Jun
[email protected]
Outline
Why Virtualization
Virtual I/O on system p5
[Charts: Fibre Channel and Ethernet bandwidth trends in Gb, 1998-2006; System p performance trend, 2003-2007.]
Outline
Why Virtualization
Virtual I/O on system p5
Virtual I/O Server*
[Diagram: the Virtual I/O Server owns the physical Ethernet and Fibre Channel adapters and provides Virtual Ethernet and Virtual SCSI functions to client partitions through the Hypervisor.]
Benefits
Virtual SCSI
Virtual Ethernet
* Available on System p via the Advanced POWER Virtualization features. IVM supports a single Virtual I/O Server.
Hypervisor: Type 1
[Diagram: guest operating systems and their applications run directly on the hypervisor, which runs on the SMP server hardware.]
Paravirtualization
  Architected hypervisor calls
  More efficient than trap and emulate
Hardware assists
  POWER Hypervisor mode
  Hypervisor timer
  Processor utilization registers
  Real mode offset registers
  I/O drawers and towers (isolation/performance)
  SMT dormant thread
Hypervisor: Type 2
[Diagram: the hypervisor runs on top of a host operating system; guest operating systems and their applications run on the hypervisor.]
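Hypervisor call overhead can be observed directly from an AIX partition; a minimal sketch, assuming AIX 5.3 or later where lparstat provides the -h and -H options:

    # Show hypervisor time (%hypv) and call counts, 5-second interval, 3 samples
    lparstat -h 5 3
    # Detailed breakdown of time spent in individual hypervisor calls
    lparstat -H 5 1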
[Diagram: Virtual SCSI read flow. A client LPAR (LPAR 1, 2, or 3) issues a read against its virtual disk; the read message is passed to the Virtual SCSI server in the VIO Server, which drives its FC adapter against the physical disks and returns the data to the client's buffer.]
Virtual SCSI
[Diagram: micro-partitions on a POWER5 server access LUNs (A1-A5, B1-B5) on external storage through the VIOS, which owns the shared Fibre Channel and SCSI adapters; client traffic reaches the VIOS over vSCSI and VLAN connections through the POWER Hypervisor.]
VIOS owns the physical disk resources
LVM-based storage on the VIO Server
Physical storage can be SCSI or FC, local or remote
A partition sees its disks as vSCSI (Virtual SCSI) devices
Virtual SCSI devices are added to a partition via the HMC
LUNs on the VIOS are accessed as vSCSI disks
The VIOS must be active for the client to boot
Available via the optional Advanced POWER Virtualization or POWER Hypervisor and VIOS features.
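On the VIOS command line, a backing device (a logical volume or physical disk) is exported to a client by mapping it onto the vhost adapter that pairs with the client's vSCSI adapter; a minimal sketch, assuming a backing logical volume rootvg_lpar1 and server adapter vhost0 (hypothetical names):

    # Create a virtual target device that maps the backing LV to vhost0
    mkvdev -vdev rootvg_lpar1 -vadapter vhost0 -dev vtscsi_lpar1
    # Verify the server adapter to backing device mapping
    lsmap -vadapter vhost0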
[Diagram: vSCSI client/server stack. On the client, the LVM, multi-path or disk drivers, and the optical driver sit above hdisk and DVD devices that are backed by a vSCSI client adapter. On the VIOS, the vSCSI server adapter presents LV VSCSI and Optical VSCSI virtual targets backed by the physical adapter/drivers and the FC or SCSI device. The two adapters communicate through the POWER5 Hypervisor.]
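On the AIX client, the exported device is discovered like any other disk and identifies itself as a virtual SCSI device; a minimal sketch, assuming the new disk configures as hdisk0 (hypothetical name):

    # Scan for newly mapped devices
    cfgmgr
    # Virtual SCSI disks report as 'Virtual SCSI Disk Drive'
    lsdev -Cc disk
    lscfg -vl hdisk0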
2007 IBM Corporation
15
15
2007 IBM Corporation
16
16
2007 IBM Corporation
17
17
2007 IBM Corporation
LVM Mirroring with Dual VIOS
Complexity
  Requires LVM mirroring to be set up on the client
Resilience
  Protection against failure of a single VIOS, SCSI disk, or SCSI controller
Notes
  Requires multiple adapters on each VIOS
[Diagram: clients AIX A and AIX B each LVM-mirror across two vSCSI disks, one served by VIOS 1 and one by VIOS 2; each VIOS uses SCSI MPIO to its local disks.]
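Setting up the mirror on the client is ordinary AIX LVM administration; a minimal sketch for rootvg, assuming the disk served by the second VIOS appears as hdisk1 (hypothetical name):

    # Add the second vSCSI disk to rootvg and mirror onto it
    extendvg rootvg hdisk1
    mirrorvg rootvg hdisk1
    # Rebuild the boot image and allow booting from either copy
    bosboot -ad /dev/hdisk1
    bootlist -m normal hdisk0 hdisk1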
[Diagram: device view of the same configuration. Each client mirrors across vscsi0 and vscsi1; through the Hypervisor these connect to vhost0/vhost1 and virtual target devices vtscsi0/vtscsi1 on VIOS 1 and VIOS 2, and each VIOS runs SCSI MPIO over its physical scsi0 and scsi1 adapters.]
Client MPIO with Dual VIOS
Complexity
  Requires MPIO to be set up on the client
Resilience
  Protection against failure of a single VIOS, FC adapter, or path
  Protection against FC adapter failures within a VIOS
Throughput / Scalability
  Potential for increased bandwidth due to Multi-Path I/O
  Primary LUNs can be split across multiple VIOS to help balance the I/O load
Notes
  Must be PV VSCSI disks
[Diagram: clients AIX A and AIX B use the MPIO default PCM over two vSCSI paths, one through each VIOS; each VIOS runs a multi-path driver (MPATH*) over its FC adapters to PV LUNs A and B on the SAN.]
* Note: See the slide labeled VIOS Multi-Path Options for a high-level overview of MPATH options.
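Client-side MPIO over vSCSI uses the AIX default PCM in failover mode; a minimal sketch of checking and tuning the paths, assuming disk hdisk0 (hypothetical name; attribute names and values should be confirmed with lsattr on the target AIX level):

    # List both paths to the disk, one through each VIOS
    lspath -l hdisk0
    # Enable periodic path health checking so a failed path is detected and reclaimed
    chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -P
    # -P defers the change to the next reboot because the disk is in use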
[Diagram: device view. Each client has vscsi0 (active path) and vscsi1 (passive path); through the Hypervisor these map to vhost0/vhost1 and vtscsi0/vtscsi1 on VIOS 1 and VIOS 2, each running a multi-path driver over fcs0 and fcs1 to PV LUNs A and B.]
Shared Ethernet Adapter
[Diagram: Client 1 and Client 2 each have a virtual Ethernet adapter ent0 (Vir) under an en0 interface; the VIOS bridges its virtual adapter ent1 (Vir) and its physical adapter ent0 (Phy) through the Shared Ethernet Adapter to the external Ethernet switch.]
[Diagram: frame forwarding. A frame made up of an Ethernet header, TCP/IP header, and data is forwarded by bridges/routers and switches; on VLAN-aware switches the Ethernet header additionally carries a VLAN tag.]
[Diagram: inside the POWER5 server, the Hypervisor's virtual switch ports and logic connect the client LPAR's virtual adapter ent0 (Virt) to the VIOS virtual adapter ent1 (Virt); the SEA ent2 bridges ent1 to the physical adapter ent0 (phy), which connects to the traditional external switches.]
VLAN Benefits
[Diagram: Ethernet switch with VLANs.]
802.1Q tagged Ethernet frame layout:
Preamble | Start Frame Delimiter | Dest. MAC Address | Source MAC Address | Tag Protocol ID (0x8100) | Tag Control Info | Length | Data | Pad | Frame Check Sequence
Tag Control Info = User Priority (3 bits) + CFI (1 bit, = 0) + VLAN ID (12 bits)
Link Aggregation
All adapters must be configured for the same speed and must be full duplex.
All network adapters that form the link aggregation (not including a backup adapter) must be connected to the same network switch.
Greater reliability
[Diagram: a host with several NICs aggregated into one logical link to an Ethernet switch.]
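On the VIOS, a link aggregation device is created from the physical ports and can then serve as the physical side of an SEA; a minimal sketch, assuming physical adapters ent0 and ent1 with ent2 as backup adapter (hypothetical names; the mode and backup_adapter attributes should be confirmed with lsdev/lsattr on the installed VIOS level):

    # Aggregate ent0 and ent1 with IEEE 802.3ad negotiation and ent2 as the backup adapter
    mkvdev -lnagg ent0,ent1 -attr mode=8023ad backup_adapter=ent2
    # The resulting device (for example ent3) behaves as a single Ethernet adapter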
EtherChannel Standard Algorithm
All traffic to the same host goes out the same NIC.
[Diagram: a hash function over fields such as the source/destination IP addresses and ports selects which NIC of the aggregation carries a given flow to the Ethernet switch.]
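Which header fields feed the hash is controlled by the aggregation's hash_mode attribute; a minimal sketch of changing it, assuming the aggregation device is ent3 and that the hash_mode attribute with value src_dst_port is available on the installed VIOS/AIX level (both are assumptions to verify with lsattr):

    # Distribute flows by source and destination TCP/UDP port rather than by destination IP
    chdev -dev ent3 -attr hash_mode=src_dst_port
    # The aggregation may need to be idle (interface detached) for the change to apply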
Shared Ethernet Adapters and VLANs
[Diagram: VIOS 1 hosts two SEAs. ent7 (SEA) bridges the physical adapter ent0 for the virtual adapter ent3 (PVID 100); ent8 (SEA) bridges the link aggregation ent6 (over physical adapters ent1 and ent2) for the virtual adapters ent4 and ent5, carrying PVIDs 2, 200, and 300 plus tagged VIDs 200 and 300. Client 1 and Client 2 each have virtual Ethernet adapters (ent0, ent1) under en0/en1 interfaces, using PVIDs 100 and 2 and tagged VIDs 200 and 300 (with PVID 3).]
SEA options annotated on the diagram:
-vadapter ent3 -default ent3 -defaultid 100
-vadapter ent4,ent5 -default ent4 -defaultid
-vadapter: the virtual Ethernet adapters in the VIOS that will be used with this SEA
-default: the virtual Ethernet adapter that will carry the default VLAN
-defaultid: the default VLAN ID
The SEA is built on a physical Ethernet adapter or link aggregation device.
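These fragments are options to the VIOS mkvdev -sea command; a minimal sketch of creating the first SEA in the diagram, with ent0 as the physical adapter, ent3 as the virtual adapter, and 100 as the default VLAN ID (the SEA's device name, for example ent7, is assigned by the system):

    # Bridge virtual adapter ent3 to physical adapter ent0; untagged frames use VLAN 100
    mkvdev -sea ent0 -vadapter ent3 -default ent3 -defaultid 100
    # Verify the SEA-to-virtual-adapter mapping
    lsmap -all -net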
Dual VIOS Network Redundancy: NIB vs. SEA Failover
[Diagram, left: each AIX client runs Network Interface Backup (NIB) over two virtual Ethernet adapters, one path through the SEA in VIOS 1 and one through the SEA in VIOS 2. Needs NIB configuration in every client; supported only on AIX; VLAN-tagged traffic cannot be used.
Diagram, right: the client has a single virtual Ethernet adapter; the SEAs in VIOS 1 (primary) and VIOS 2 (backup) provide SEA failover. VLAN-tagged traffic is supported.]
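SEA failover is enabled when the SEAs are created on the two VIOS, using a control channel between them; a minimal sketch, assuming a dedicated control-channel virtual adapter ent9 and trunk priorities already set on the virtual adapters at the HMC (hypothetical names; ha_mode and ctl_chan are the SEA attributes involved, to be confirmed on the installed VIOS level):

    # On each VIOS: create the SEA with failover enabled and a control channel on ent9
    mkvdev -sea ent0 -vadapter ent3 -default ent3 -defaultid 100 -attr ha_mode=auto ctl_chan=ent9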
[Diagram: end-to-end example with two VIO servers serving a database LPAR and an application LPAR. Each VIO server provides networking through a link aggregation (LA) and an SEA bridged to the clients' virtual Ethernet adapters (VLANs 101 and 102), and provides storage through vhost adapters backed by logical volumes for the client rootvgs on local SCSI disks, by MPIO disks, and by SAN LUNs reached over dual HBAs, SDD vpaths, and SAN switches. The client LPARs mirror their rootvgs across both VIO servers and run MPIO/SDD over the vpaths for data volumes.]
Special thanks to Tom Prokop for this slide.
Outline
Why Virtualization
Virtual I/O on system p5
[Diagram: test configuration. A VIO Server with an SEA (ent3 bridging the physical adapter ent2 and the virtual adapter ent1), FC adapters fcs0 and fcs1 to DS4500 storage, and disks hdisk2-n mapped through vhost0-n. VIO clients vLPAR1 and vLPAR2(3) (the remaining partitions suspended) each see hdisk0-hdiskx through vscsi0-vscsix and reach the network through a virtual Ethernet adapter ent0.]
Virtual SCSI Performance
Throughput        Expense (CPU)
Write  63 MB/s    100%
Read   71 MB/s    79%
Write  62 MB/s    100% + 3%
Read   70 MB/s    79% + 3%
Write 200 MB/s    100%
Read  200 MB/s    70%
Write 160 MB/s    100% + 10%
Read  185 MB/s    73% + 10%
Note: maximum throughput depends on the back-end storage system; the data in this table is only used to compare virtual SCSI with physical SCSI.
Comments
1. Virtual SCSI provides efficient performance at low resource consumption on the VIO Server side.
2. Multiple virtual adapters do not improve performance; one virtual adapter per client LPAR is enough.
3. Virtual SCSI itself imposes no bandwidth restriction; throughput depends on the backing physical adapters.
Virtual Ethernet Performance
Throughput        Expense (CPU)
 900 Mbps         32%
1262 Mbps         98%   (Virtual LAN)
 940 Mbps         20%
4062 Mbps         85%
 839 Mbps         75% + 75%
Note: maximum throughput depends on system resources; the data in this table is only used to compare virtual LAN with physical LAN.
Comments
1. A virtual Ethernet adapter provides higher throughput than a physical adapter if enough CPU is available.
2. Virtual Ethernet needs more CPU cycles than a physical network adapter.
3. SEA doubles the CPU cost, on the VIOS and on the client LPAR.
4. Jumbo frames help to improve performance, with higher bandwidth at lower resource consumption.
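Jumbo frames must be enabled end to end, on the switch, the physical adapter, the SEA, and the interface MTUs; a minimal sketch of the AIX-side changes, assuming a physical adapter ent0 and an interface en0 (hypothetical names; the jumbo_frames attribute and valid MTU values should be confirmed with lsattr on the installed level):

    # Physical adapter: enable jumbo frames (may need -P plus a reboot if the adapter is in use)
    chdev -l ent0 -a jumbo_frames=yes -P
    # Interface: raise the MTU to carry the 9000-byte payload
    chdev -l en0 -a mtu=9000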
Outline
Why Virtualization
Virtual I/O on system p5
Integrated Virtual Ethernet (IVE) adapter options: Dual 1 Gb, Quad 1 Gb, Dual 10 Gb
Address sharing:
Dual 1 Gb: 16 MAC addresses / port group
Quad 1 Gb: 16 MAC addresses / port group
Dual 10 Gb: 16 MAC addresses / port group
IVE advantages
[Diagram: Linux, i5/OS, and AIX partitions each run an Ethernet driver connected to the Host Ethernet Adapter (HEA); the POWER Hypervisor (PHYP) is shown outside the data path.]
Advantages:
No POWER Hypervisor hits
Outline
Why Virtualization
Virtual I/O architecture on system p5
References
IBM Redbooks: http://www.redbooks.ibm.com