IBM System Storage DS4000 Storage Manager Version 9.23
Concepts Guide
GC26-7734-04
Note: Before using this information and the product it supports, be sure to read the general information under “Notices” on page 131.
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . .1
Storage Manager documentation and readme files . . . . . . . . . . . .1
Product updates . . . . . . . . . . . . . . . . . . . . . . . . .1
FAStT product renaming . . . . . . . . . . . . . . . . . . . . . .2
Machine types and supported software . . . . . . . . . . . . . . . .3
Terms to know . . . . . . . . . . . . . . . . . . . . . . . . .6
New features and enhancements . . . . . . . . . . . . . . . . . .7
FAStT product renaming . . . . . . . . . . . . . . . . . . . . .8
| Controller firmware 6.23: New features . . . . . . . . . . . . . . .8
| Controller firmware 6.19: New features . . . . . . . . . . . . . . .8
Controller firmware 6.16: New features . . . . . . . . . . . . . . .8
Controller firmware 6.14 and 6.15: New features . . . . . . . . . . . .8
Controller firmware 6.12: New features . . . . . . . . . . . . . . .9
Controller firmware 6.10: New features . . . . . . . . . . . . . . .9
Storage Manager premium features . . . . . . . . . . . . . . . . . 11
Storage subsystem components . . . . . . . . . . . . . . . . . . 12
Storage subsystem model types . . . . . . . . . . . . . . . . . 12
Storage partitioning specifications . . . . . . . . . . . . . . . . . 14
Software components . . . . . . . . . . . . . . . . . . . . . . 15
Storage Manager client (SMclient) . . . . . . . . . . . . . . . . . 15
Storage Manager host agent (SMagent). . . . . . . . . . . . . . . 16
Redundant disk array controller (RDAC) multipath driver . . . . . . . . 16
NetWare native failover driver . . . . . . . . . . . . . . . . . . 17
Storage Manager utility (SMutil) . . . . . . . . . . . . . . . . . . 17
| Microsoft MPIO . . . . . . . . . . . . . . . . . . . . . . . . 18
Host types . . . . . . . . . . . . . . . . . . . . . . . . . . 19
System requirements . . . . . . . . . . . . . . . . . . . . . . 20
Hardware requirements . . . . . . . . . . . . . . . . . . . . . 20
Storage subsystem management . . . . . . . . . . . . . . . . . . 21
Direct (out-of-band) management method . . . . . . . . . . . . . . 22
Host-agent (in-band) management method. . . . . . . . . . . . . . 24
Reviewing a sample network . . . . . . . . . . . . . . . . . . . 25
Managing coexisting storage subsystems . . . . . . . . . . . . . . . 26
Managing the storage subsystem using the graphical user interface . . . . . 27
Enterprise Management window . . . . . . . . . . . . . . . . . 27
Failure notification . . . . . . . . . . . . . . . . . . . . . . . 79
Updating the firmware in the storage subsystem and storage expansion enclosures . . . . . . 79
Downloading controller firmware . . . . . . . . . . . . . . . . . 80
Traditional controller firmware download. . . . . . . . . . . . . . 80
The staged controller firmware download feature . . . . . . . . . . 81
Downloading NVSRAM . . . . . . . . . . . . . . . . . . . . . 81
Downloading NVSRAM from a firmware image . . . . . . . . . . . 81
Downloading NVSRAM as a standalone image . . . . . . . . . . . 81
Downloading drive firmware . . . . . . . . . . . . . . . . . . . 81
General Considerations . . . . . . . . . . . . . . . . . . . . 82
Parallel drive firmware download . . . . . . . . . . . . . . . . 82
Environmental services module card . . . . . . . . . . . . . . . . 83
Downloading ESM firmware . . . . . . . . . . . . . . . . . . 83
Viewing and recovering missing logical drives . . . . . . . . . . . . . 84
Alert notification overview . . . . . . . . . . . . . . . . . . . . . 85
Configuring mail server and sender address . . . . . . . . . . . . . 85
Selecting the node for notification . . . . . . . . . . . . . . . . . 85
Setting alert destinations . . . . . . . . . . . . . . . . . . . . 85
Configuring alert destinations for storage subsystem critical-event notification 86
Event Monitor overview . . . . . . . . . . . . . . . . . . . . . . 86
Installing the Event Monitor . . . . . . . . . . . . . . . . . . . 87
Setting alert notifications . . . . . . . . . . . . . . . . . . . . 87
Synchronizing the Enterprise Management window and Event Monitor . . . 88
Recovery Guru . . . . . . . . . . . . . . . . . . . . . . . . . 89
DS4500 Storage Subsystem library . . . . . . . . . . . . . . . . . 122
DS4400 Storage Subsystem library . . . . . . . . . . . . . . . . . 123
DS4300 Storage Subsystem library . . . . . . . . . . . . . . . . . 124
DS4200 Express Storage Subsystem library . . . . . . . . . . . . . 125
DS4100 Storage Subsystem library . . . . . . . . . . . . . . . . . 126
DS4000 Storage Expansion Enclosure documents . . . . . . . . . . . 127
Other DS4000 and DS4000-related documents . . . . . . . . . . . . 128
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Trademarks. . . . . . . . . . . . . . . . . . . . . . . . . . 131
Important notes . . . . . . . . . . . . . . . . . . . . . . . . 132
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Figures
1. Direct (out-of-band) managed storage subsystems . . . . . . . . . . . . . . . . . . 23
2. Host-agent (in-band) managed storage subsystems . . . . . . . . . . . . . . . . . . 25
3. Sample network using direct and host-agent managed storage subsystems . . . . . . . . . 26
4. The Enterprise Management window . . . . . . . . . . . . . . . . . . . . . . . 27
5. Device tree with a management domain . . . . . . . . . . . . . . . . . . . . . . 28
6. Subsystem Management window Logical View and Physical View . . . . . . . . . . . . . 30
7. The script editor window . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
8. Unconfigured and free capacity nodes . . . . . . . . . . . . . . . . . . . . . . . 69
9. The task assistant in the Enterprise Management window . . . . . . . . . . . . . . . . 76
10. The task assistant in the Subsystem Management window . . . . . . . . . . . . . . . 77
11. Monitoring storage subsystem health using the Enterprise Management window . . . . . . . 78
12. Event monitoring example . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
13. Location of the Recovery Guru toolbar button . . . . . . . . . . . . . . . . . . . . 89
14. Recovery Guru window . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
15. Recovery Guru window showing Replaced status icon . . . . . . . . . . . . . . . . . 91
16. Recovered drive failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Use this guide to better understand the storage manager software and to perform
the following tasks:
v Determine what storage-subsystem configuration you will use to maximize data
availability
v Set up alert notifications and monitor your storage subsystems in a management
domain
v Identify storage manager features that are unique to your specific installation
See also: The DS4000 Storage Server and Storage Expansion Enclosure Quick
Start Guide provides an excellent overview of the installation process.
Table 1. Where to find DS4000 installation and configuration procedures (continued)
Installation task Where to find information or procedures
4 Route the storage expansion unit Fibre Channel cables
v DS4100 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4200 Express Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4300 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4400 Fibre Channel Cabling Instructions
v DS4500 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4700 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4800 Storage Subsystem Installation, User’s, and Maintenance Guide
5 Route the host server Fibre Channel cables
v DS4100 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4200 Express Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4300 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4400 Fibre Channel Cabling Instructions
v DS4500 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4700 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4800 Storage Subsystem Installation, User’s, and Maintenance Guide
6 Power up the subsystem
v DS4100 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4200 Express Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4300 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4400 Fibre Channel Storage Server Installation and Support Guide
v DS4500 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4700 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4800 Storage Subsystem Installation, User’s, and Maintenance Guide
14 Verify DS4000 subsystem health
v DS4100 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4200 Express Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4300 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4400 Fibre Channel Storage Server Installation and Support Guide
v DS4500 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4700 Storage Subsystem Installation, User’s, and Maintenance Guide
v DS4800 Storage Subsystem Installation, User’s, and Maintenance Guide
15 Enable DS4000 Storage Manager premium feature keys
Copy Services premium features:
v DS4000 Storage Manager Copy Services Guide
FC/SATA Intermix premium feature:
v DS4000 Fibre Channel and Serial ATA Intermix Premium Feature Installation Overview
Storage Partitioning (and general premium features information):
v DS4000 Storage Manager Concepts Guide
v DS4000 Storage Manager Installation and Support Guide for AIX, HP-UX, Solaris and Linux on POWER
v DS4000 Storage Manager Installation and Support Guide for Windows 2000/Server 2003, NetWare, ESX Server, and Linux
16 Configure arrays and logical drives
17 Configure host partitions
18 Verify host access to DS4000 storage
v DS4000 Storage Manager Installation and Support Guide for AIX, HP-UX, Solaris and Linux on POWER
v DS4000 Storage Manager Installation and Support Guide for Windows 2000/Server 2003, NetWare, ESX Server, and Linux
v DS4000 Storage Manager online help
Chapter 2, “Storing and protecting your data,” on page 45 describes the various
data protection features of the DS4000 Storage Subsystem. These features include
input/output (I/O) data path failover support, Media Scan, and copy services.
Chapter 6, “Critical event problem solving,” on page 97 provides a list of all the
critical events that the storage management software sends if a failure occurs. The
list includes the critical event number, describes the failure, and refers you to the
procedure to correct the failure.
Appendix A, “Online help task reference,” on page 113 provides a task-based index
to the appropriate online help. There are two separate online help systems in the
storage-management software that correspond to each main window: the Enterprise
Management window and the Subsystem Management window.
You can solve many problems without outside assistance by following the
troubleshooting procedures that IBM provides in the DS4000 Storage Manager
online help or in the documents that are provided with your system and software.
The information that comes with your system also describes the diagnostic tests
that you can perform. Most subsystems, operating systems, and programs come
with information that contains troubleshooting procedures and explanations of error
messages and error codes. If you suspect a software problem, see the information
for the operating system or program.
Web sites
The most up-to-date information about DS4000 storage subsystems and DS4000
Storage Manager, including documentation and the most recent software, firmware,
and NVSRAM downloads, can be found at the following Web sites.
DS4000 Midrange Disk Systems
Find the latest information about IBM System Storage disk storage systems,
including all of the DS4000 storage subsystems:
www-1.ibm.com/servers/storage/disk/ds4000/
IBM System Storage products
Find information about all IBM System Storage products:
www.storage.ibm.com/
Support for IBM System Storage disk storage systems
Find links to support pages for all IBM System Storage disk storage
systems, including DS4000 storage subsystems and expansion units:
www-304.ibm.com/jct01004c/systems/support/supportsite.wss/brandmain?brandind=5345868
System Storage DS4000 interoperability matrix
Find the latest information about operating system and HBA support,
clustering support, storage area network (SAN) fabric support, and DS4000
Storage Manager feature support:
www-1.ibm.com/servers/storage/disk/ds4000/interop-matrix.html
DS4000 Storage Manager readme files
1. Go to the following Web site:
www-304.ibm.com/jct01004c/systems/support/supportsite.wss/brandmain?brandind=5345868
2. In the Product family drop-down menu, select Disk systems, and in the
Product drop-down menu, select your Storage Subsystem (for example,
DS4800 Midrange Disk System). Then click Go.
3. When the subsystem support page opens, click the Install/use tab, then
click the DS4000 Storage Manager Pubs and Code link. The
Downloads page for the subsystem opens.
www.ibm.com/servers/storage/support/san/index.html
DS4000 technical support
Find downloads, hints and tips, documentation, parts information, HBA and
Fibre Channel support:
www-304.ibm.com/jct01004c/systems/support/supportsite.wss/brandmain?brandind=5345868
In the Product family drop-down menu, select Disk systems, and in the Product drop-down menu, select your Storage Subsystem (for example, DS4800 Midrange Disk System). Then click Go.
Premium feature activation
Generate a DS4000 premium feature activation key file by using the online
tool:
www-912.ibm.com/PremiumFeatures/jsp/keyInput.jsp
IBM publications center
Find IBM publications:
www.ibm.com/shop/publications/order/
Support for System p™ servers
Find the latest information supporting System p AIX and Linux servers:
www-304.ibm.com/jct01004c/systems/support/supportsite.wss/brandmain?brandind=5000025
Support for System x™ servers
Find the latest information supporting System x Intel®- and AMD-based
servers:
www-304.ibm.com/jct01004c/systems/support/supportsite.wss/brandmain?brandind=5000008
Fix delivery center for AIX and Linux on POWER
Find the latest AIX and Linux on POWER information and downloads:
www-912.ibm.com/eserver/support/fixes/fcgui.jsp
In the Product family drop-down menu, select UNIX® servers. Then select
your product and fix type from the subsequent drop-down menus.
eServer System p and AIX information center
Find everything you need to know about using AIX with System p and
POWER servers:
publib.boulder.ibm.com/infocenter/pseries/index.jsp?
Support for Linux on System p
Find information about using Linux on System p servers:
www.ibm.com/servers/eserver/pseries/linux/
Linux on POWER resource center
Find information about using Linux on POWER servers:
www.ibm.com/servers/enable/linux/power/
www.ibm.com/services/sl/products/
For more information about the IBM Support Line and other IBM services, go to the
following Web sites:
v www.ibm.com/services/
v www.ibm.com/planetwide/
www.ibm.com/planetwide/
In the U.S. and Canada, hardware service and support is available 24 hours a day,
7 days a week. In the U.K., these services are available Monday through Friday,
from 9 a.m. to 6 p.m.
Be sure to include the name and order number of the document and, if
applicable, the specific location of the text that you are commenting on,
such as a page number or table number.
Chapter 1. Introduction
This concepts guide provides the conceptual framework that is necessary to
understand the terminology and features of the IBM DS4000 Storage Manager
Version 9.23.
1. Go to the following Web site:
www-1.ibm.com/servers/storage/support/disk/
2. Click the link for your storage subsystem.
3. When the subsystem page opens, click the Download tab.
4. When the download page opens, click the Storage Mgr tab, and then click
the appropriate link under the Current Versions and Readmes
column.
Important: Updated readme files contain the latest device driver versions,
firmware levels and other information that supersedes this document.
IBM DS4000 Storage Manager Installation and Support Guides
Use the installation and support guide for your operating system or platform
to set up, install, configure, and work with the IBM DS4000 Storage
Manager Version 9.23.
After you complete all of the Storage Manager and host installation procedures,
refer to the following online help systems, which contain information and procedures
that are common to all host operating system environments.
Enterprise Management window help
Use this online help system to learn more about working with the entire
management domain.
Subsystem Management window help
Use this online help system to learn more about managing individual
storage subsystems.
Note: To access the help systems from the Enterprise Management and
Subsystem Management windows in IBM DS4000 Storage Manager Version
9.1x, click Help on the toolbar, or press F1.
Product updates
Important
To keep your system up to date with the latest firmware and other
product updates, use the following information to register for and use the My
support Web site.
To be notified of important product updates, you must first register at the IBM
Support and Download Web site:
www-1.ibm.com/servers/storage/support/disk/index.html
In the Additional Support section of the Web page, click My support. On the next
page, if you have not already done so, register to use the site by clicking Register
now.
Note: During this process a check list displays. Do not check any of the items
in the check list until you complete the selections in the pull-down
menus.
5. When you finish selecting the menu topics, place a check in the box for the
machine type of your DS4000 series product, as well as any other attached
DS4000 series product(s) for which you would like to receive information, then
click Add products. The My Support page opens again.
6. On the My Support page, click the Edit profile tab, then click Subscribe to
email. A pull-down menu displays.
7. In the pull-down menu, select Storage. A check list displays.
8. Place a check in each of the following boxes:
a. Please send these documents by weekly email
b. Downloads and drivers
c. Flashes
d. Any other topics that you may be interested in
Then, click Update.
9. Click Sign out to log out of My Support.
Table 2. Mapping of FAStT names to DS4000 series names
Previous FAStT Product Name Current DS4000 Product Name
IBM TotalStorage FAStT Storage Server IBM TotalStorage DS4000
FAStT DS4000
FAStT Family DS4000 Mid-range Disk System
FAStT Storage Manager vX.Y (for example, v9.10) DS4000 Storage Manager vX.Y (for example, v9.10)
FAStT100 DS4100
FAStT600 DS4300
FAStT600 with Turbo Feature DS4300 Turbo
FAStT700 DS4400
FAStT900 DS4500
EXP700 DS4000 EXP700
EXP710 DS4000 EXP710
EXP100 DS4000 EXP100
FAStT FlashCopy® FlashCopy for DS4000
FAStT VolumeCopy VolumeCopy for DS4000
FAStT Remote Mirror (RM) Enhanced Remote Mirroring for DS4000
FAStT Synchronous Mirroring Metro Mirroring for DS4000
Global Copy for DS4000
(New Feature = Asynchronous Mirroring
without Consistency Group)
Global Mirroring for DS4000
(New Feature = Asynchronous Mirroring with
Consistency Group)
Table 3. Machine types, supported controller firmware versions, and supported Storage Manager software

Product name | Machine type | Model | Controller firmware version | Supported Storage Manager software version
IBM TotalStorage DS4800 Storage Subsystem | 1815 | 80A/H | 06.16.xx.xx, 06.23.xx.xx | 9.16, 9.19, 9.23
IBM TotalStorage DS4800 Storage Subsystem | 1815 | 82A/H, 84A/H, 88A/H | 06.14.xx.xx, 06.15.xx.xx, 06.16.xx.xx, 06.23.xx.xx | 9.14, 9.15, 9.16, 9.19, 9.23
IBM TotalStorage DS4200 Disk Storage Subsystem | 1814 | 7VA/H | 06.16.xx.xx, 06.23.xx.xx | 9.16, 9.19, 9.23
IBM TotalStorage DS4700 Disk Storage Subsystem | 1814 | 70A/H, 72A/H, 70T/S, 72T/S | 06.16.xx.xx, 06.23.xx.xx | 9.16, 9.19, 9.23
IBM TotalStorage DS4100 Storage Subsystem (Base Model) | 1724 | 100 | 6.10.xx.xx, 06.12.xx.xx | 8.42, 9.10, 9.12, 9.14, 9.15, 9.16, 9.19, 9.23
IBM TotalStorage DS4100 Storage Subsystem (Single Controller Model) | 1724 | 1SC, 1S | 5.42.xx.xx, 06.12.xx.xx |
IBM TotalStorage DS4500 Disk Storage Subsystem | 1742 | 90X, 90U | 5.30.xx.xx, 5.40.xx.xx, 5.41.xx.xx (supports EXP100 only), 6.10.xx.xx, 06.12.xx.xx, 06.19.xx.xx, 06.23.xx.xx | 8.3, 8.4, 8.41, 8.42, 9.10, 9.12, 9.14, 9.15, 9.16
IBM TotalStorage DS4400 Disk Storage Subsystem | 1742 | 1RU, 1RX | 5.00.xx.xx, 5.20.xx.xx, 5.21.xx.xx, 5.30.xx.xx, 5.40.xx.xx, 6.10.xx.xx, 6.12.xx.xx | 8.0, 8.2, 8.21, 8.3, 8.41, 8.42, 9.10, 9.12, 9.14, 9.15, 9.16, 9.19, 9.23
IBM TotalStorage DS4300 Disk Storage Subsystem (Single Controller) | 1722 | 6LU, 6LX | 5.34.xx.xx | 8.41.xx.03 or later, 8.42, 9.10, 9.12, 9.14, 9.15, 9.16, 9.19, 9.23
IBM TotalStorage DS4300 Disk Storage Subsystem (Base Model) | 1722 | 60U, 60X | 5.33.xx.xx, 5.34.xx.xx, 5.40.xx.xx, 6.10.xx.xx, 6.12.xx.xx, 06.19.xx.xx, 06.23.xx.xx | 8.3, 8.4, 8.41, 8.42, 9.10, 9.12, 9.14, 9.15, 9.16, 9.19, 9.23
IBM TotalStorage DS4300 Disk Storage Subsystem (Turbo Model) | 1722 | 60U, 60X | 5.41.xx.xx (supports EXP100 only), 6.10.xx.xx, 6.12.xx.xx, 06.19.xx.xx, 06.23.xx.xx |
IBM Netfinity® FAStT500 RAID Controller Enclosure Unit (no longer available for purchase) | 3552 | 1RU, 1RX | 4.x, 5.00.xx.xx, 5.20.xx.xx, 5.21.xx.xx, 5.30.xx.xx | 7.0, 7.01, 7.02, 7.10, 8.0, 8.2, 8.21, 8.3, 8.41, 8.42, 9.10, 9.12, 9.14, 9.15, 9.16, 9.19, 9.23
IBM FAStT200 High Availability (HA) Storage Subsystem (no longer available for purchase) | 3542 | 2RU, 2RX | 4.x, 5.20.xx.xx, 5.30.xx.xx | 7.02, 7.10, 8.0, 8.2, 8.21, 8.3, 8.41, 8.42, 9.10, 9.12, 9.14, 9.15, 9.16, 9.19, 9.23
IBM FAStT200 Storage Subsystem (no longer available for purchase) | 3542 | 1RU, 1RX | 4.x, 5.20.xx.xx, 5.30.xx.xx | 7.02, 7.10, 8.0, 8.2, 8.21, 8.3, 8.41, 8.42, 9.10, 9.12, 9.14, 9.15, 9.16, 9.19, 9.23
IBM Netfinity Fibre Channel RAID Controller Unit (no longer available for purchase) | 3526 | 1RU, 1RX | 4.x | 7.0, 7.01, 7.02, 7.10, 8.0, 8.2, 8.21, 8.3, 8.41, 8.42, 9.10, 9.14, 9.15, 9.16
Notes:
1. All of the controller firmware versions listed in the table are available
free-of-charge.
2. Storage subsystems with controller firmware version 04.00.02.xx through
4.01.xx.xx must be managed with Storage Manager 8.x.
3. Controller firmware level 06.12.xx.xx supports EXP100 SATA expansion
enclosures with the following storage subsystems:
v DS4100 and DS4300 Base models
v DS4300 Turbo models
v DS4400
v DS4500
If you want to upgrade to 06.12.xx.xx and your controller firmware level is
currently 05.41.1x.xx, you must first upgrade to firmware version 05.41.5x.xx
(provided on the CD that is shipped with the EXP100). After your firmware is at
level 05.41.5x.xx, you can then upgrade to 06.12.xx.xx.
4. Firmware levels 5.40.xx.xx and earlier provide support for EXP500 and EXP700
storage expansion enclosures only. For EXP710 support, firmware versions
06.1x.xx.xx or later are required.
Terms to know
If you are upgrading from a previous version of Storage Manager, you will find that
some of the terms that you are familiar with have changed. It is important that you
familiarize yourself with the new terminology. Table 4 provides a list of some of the
old and new terms.
Table 4. Old and new terminology
Term used in previous versions New term
RAID module or storage array Storage subsystem
Drive group Array
Logical unit number (LUN) (See note) LUN
Drive module Storage expansion enclosure
Controller module Controller enclosure
Environmental card CRU Environmental service module (ESM) customer replaceable unit (CRU)
Fan canister Fan CRU
Power-supply canister Power-supply CRU
LED Indicator light
Auto-volume transfer Auto logical-drive transfer
Volume Logical drive
Volume group Array
Note: In Storage Manager 7.10 and later, the term logical unit number (LUN) refers to a
logical address that is used by the host computer to access a logical drive.
It is important to understand the distinction between the following two terms when
reading this document:
Management station
A management station is a system that is used to manage the storage
subsystem. It is attached to the storage subsystem in one of the following
ways:
v Through a TCP/IP Ethernet connection to the controllers in the storage
subsystem
v Through a TCP/IP connection to the host-agent software that is installed
on a host computer that is directly attached to the storage subsystem
through the Fibre Channel I/O path
Host and host computer
A host computer is a system that is directly attached to the storage
subsystem through a Fibre Channel I/O path. This system is used to do the
following tasks:
v Serve data (typically in the form of files) from the storage subsystem
v Function as a connection point to the storage subsystem for a remote
management station
Notes:
1. The terms host and host computer are used interchangeably throughout this
document.
2. A host computer can also function as a management station.
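A rough way to see the difference between the two attachment methods is in how a storage subsystem is addressed from the command line. The sketch below only echoes the commands it describes, so it runs anywhere; the SMcli syntax shown is typical of the command-line interface that is installed with the SMclient, and every address and name in it is a hypothetical placeholder, not a value taken from this guide.

```shell
# Direct (out-of-band) management: the management station addresses the
# Ethernet ports of both controllers in the storage subsystem directly.
# (Hypothetical placeholder addresses.)
CTRL_A=192.168.128.101
CTRL_B=192.168.128.102
echo "SMcli $CTRL_A $CTRL_B -c 'show storageSubsystem profile;'"

# Host-agent (in-band) management: the management station addresses the
# host computer that runs the SMagent software, and the management traffic
# then travels over that host's Fibre Channel I/O path. Because one agent
# can front more than one subsystem, the subsystem is picked out by name.
AGENT_HOST=fileserver.example.com
echo "SMcli $AGENT_HOST -n DS4000_Subsystem_A -c 'show storageSubsystem profile;'"
```

Either way, the same management functions become available; only the path that the management traffic takes differs.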
FAStT product renaming
IBM is in the process of renaming some FAStT family products. For a reference
guide that identifies each new DS4000 product name with its corresponding FAStT
product name, see “FAStT product renaming” on page 2.
v All of the features listed in “Controller firmware 6.12: New features”
and “Controller firmware 6.10: New features.”
Note: The terms “Enhanced Remote Mirror Option,” “Metro/Global Remote
Mirror Option,” “Remote Mirror,” “Remote Mirror Option,” and
“Remote Mirroring” are used interchangeably throughout this
document, the SMclient, and the online help system to refer to
remote mirroring functionality.
Parallel hard drive firmware download
You can now download drive firmware packages to multiple drives
simultaneously, which minimizes downtime. In addition, all files that are
associated with a firmware update are now bundled into a single firmware
package. See the Subsystem Management window online help for drive
firmware download procedures.
Notes:
1. Drive firmware download is an offline management event. You must
schedule downtime for the download because I/O to the storage
subsystem is not allowed during the drive firmware download process.
2. Parallel hard drive firmware download is not the same thing as
concurrent download.
Staged controller firmware download
You can now download the DS4000 controller firmware and NVSRAM to
DS4300 Turbo and DS4500 Storage Subsystems for later activation.
Depending on your firmware version, DS4000 Storage Subsystem model,
and host operating system, the following options might be available:
v Controller firmware download only with immediate activation
v Controller firmware download with the option to activate the firmware at a
later time
DS4000 FC/SATA Intermix premium feature
Storage Manager 9.1x with controller firmware 6.10.xx.xx (and later)
supports the DS4000 FC/SATA Intermix premium feature. This premium
feature supports the concurrent attachment of Fibre Channel and SATA
storage expansion enclosures to a single DS4000 controller configuration.
With controller firmware 6.10.xx.xx and later versions, the FC/SATA Intermix
premium feature is enabled using NVSRAM.
For more information about using the Intermix premium feature, including
configuration and set-up requirements, see the IBM TotalStorage DS4000
Fibre Channel and Serial ATA Intermix Premium Feature Installation
Overview (GC26-7713).
Support for DS4000 EXP710 storage expansion enclosures
Storage Manager 9.1x with controller firmware 6.10.xx.xx (and later)
supports DS4000 EXP710 storage expansion enclosures.
Increased support for DS4000 EXP100 SATA storage expansion enclosures
DS4000 EXP100 SATA storage expansion enclosures are now supported
on DS4400 Fibre Channel Storage Subsystems.
Also, the DS4100 storage subsystem now supports up to seven EXP100
SATA storage expansion enclosures.
DS4000 Storage Manager usability enhancements
DS4000 Storage Manager 9.10 and later versions feature the following
usability enhancements:
v One-click collection of support data, drive diagnostic data, drive channel
state management, controller ‘service mode,’ and the ability to save host
topology information
v Improved media error handling for better reporting of unreadable sectors
in the DS4000 Storage Subsystem event log, and persistent reporting of
unreadable sectors
copy data from one logical drive (the source logical drive) to another logical
drive (the target logical drive) in a single storage subsystem. The
VolumeCopy feature can be used to copy data from arrays that use smaller
capacity drives to arrays that use larger capacity drives, to back up data, or
to restore FlashCopy logical drive data. The VolumeCopy feature includes a
Create Copy Wizard that is used to assist in creating a VolumeCopy, and a
Copy Manager that is used to monitor VolumeCopies after they have been
created. For more information about VolumeCopy, see “VolumeCopy” on
page 63 or see the IBM TotalStorage DS4000 Storage Manager Version 9
Copy Services Guide.
Enhanced Remote Mirroring
The Enhanced Remote Mirroring option provides real-time replication of
data between storage subsystems over a remote distance. In the event of a
disaster or unrecoverable error at one storage subsystem, the Enhanced
Remote Mirroring option enables you to promote a second storage
subsystem to take over responsibility for normal I/O operations. For more
information about the Enhanced Remote Mirroring option, see “Enhanced
Remote Mirroring option” on page 63 or see the IBM TotalStorage DS4000
Storage Manager Version 9 Copy Services Guide.
Fibre Channel/SATA Intermix
The IBM TotalStorage DS4000 Fibre Channel and Serial ATA Intermix
premium feature supports the concurrent attachment of Fibre Channel and
SATA storage expansion enclosures to a single DS4000 controller
configuration.
| The DS4700, DS4200, FAStT200, DS4300 Turbo storage subsystems, and DS4100
| and DS4300 base or SCU storage subsystems integrate the storage expansion
| enclosure and the RAID controller function in the same physical enclosure.
A DS4000 storage subsystem model might not support the attachment of all
available DS4000 drive expansion enclosure models. For example, the DS4800
storage subsystem supports the attachment of the DS4000 EXP810, EXP710, and
EXP100 drive expansion enclosures only. Refer to the Installation and User’s Guide
for your DS4000 storage subsystem model for the supported drive expansion
enclosure models for that storage subsystem.
12 IBM System Storage DS4000 Storage Manager Version 9.23: Concepts Guide
In addition, DS4000 storage subsystems also support the intermixing of different
DS4000 drive expansion enclosure models behind a given DS4000 storage
subsystem. There are restrictions, prerequisites, and rules for connecting the
different drive enclosure models behind a DS4000 storage subsystem. For more
information, refer to the Installation and User’s Guide for your DS4000 storage
subsystem model and DS4000 drive expansion enclosure model, and to the Hard
Drive and Storage Expansion Enclosure Installation and Migration Guide.
The maximum number of drives and storage expansion enclosures that a RAID
controller can support depends on the model of the RAID storage subsystems. See
the Installation and User’s Guide for your DS4000 storage subsystem model for the
maximum number of drives and storage expansion enclosures that are supported
per storage subsystem.
The physical disk capacity of the storage subsystem is divided into arrays and
logical drives. These are recognized by the operating system as unformatted
physically attached disks. Each logical component can be configured to meet data
availability and I/O performance needs. Table 6 describes the storage subsystem
logical components.
Table 6. Storage subsystem logical components
Component Description
Array An array is a set of physical drives that are grouped together
logically by the controllers in a storage subsystem. Each array
is created with a RAID level to determine how user and
redundancy data is written to and retrieved from the drives.
Storage partition A storage partition is a logical identity that consists of one or
more storage subsystem logical drives. The storage partition
is shared with host computers that are part of a host group or
is accessed by a single host computer.
Table 7. Storage partitioning specifications per DS4000 storage subsystem model (continued)

DS4000 subsystem product name (machine type, model number) | Storage partitioning enabled by default | Maximum number of defined storage partitions | Available storage partition purchase options
DS4400 (1742, 1RU/1RX) | Yes | 64 | None
DS4300 base (1722, 60U/60X) | No | 16 | 4, 8, 4 - 8, 16, 8 - 16
DS4300 Turbo (1722, 60U/60X) | Yes (8 partitions) | 64 | 8 - 16, 8 - 64, 16 - 64
DS4300 SCU (1724, 6LU/6LX) | No | 16 | 4, 8, 4 - 8, 16, 8 - 16
DS4200 (1814-7VH) | Yes (2 partitions standard) | 64 | 2 - 4, 2 - 8, 4 - 8, 4 - 16, 8 - 16, 8 - 64, 16 - 64
DS4200 (1814-7VA) | Choice of 2, 4, 8, 16, 64 | 64 | 2 - 4, 2 - 8, 4 - 8, 4 - 16, 8 - 16, 8 - 64, 16 - 64
DS4100 (1724, 100) | No | 16 | 4, 8, 4 - 8, 16, 8 - 16
FAStT500 (3552, 1RU/1RX) | Yes | 64 | None
FAStT200 (3542, 1RU/1RX) | Yes | 16 | None
Software components
This section describes the IBM TotalStorage DS4000 Storage Manager version 9.23
software components.
Storage Manager client (SMclient)
The Storage Manager client is called thin because it only provides an interface for
storage management based on information that is supplied by the storage
subsystem controllers. When you install the SMclient software component on a
management station to manage a storage subsystem, you send commands to the
storage subsystem controllers. The controller firmware contains the necessary logic
to carry out the storage management commands. The controller validates and runs
the commands and provides the status and configuration information that is sent
back to the SMclient.
Note: Do not start more than eight instances of the Storage Manager client
programs at the same time if the Storage Manager program is installed in
multiple host servers or management stations. In addition, do not send more
than eight SMcli commands to a storage subsystem at any given time.
Storage Manager host agent (SMagent)
The host agent, along with the network connection on the host computer, provides
an in-band host agent type network-management connection to the storage
subsystem instead of the out-of-band direct network-management connection
through the individual Ethernet connections on each controller.
The management station can communicate with a storage subsystem through the
host computer that has host agent management software installed. The host agent
receives requests from the management station through the network connection to
the host computer, and sends the requests to the controllers in the storage
subsystem through the Fibre Channel I/O path.
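The request flow just described can be sketched in a few lines of Python. This is an illustrative model only, not the SMagent implementation; the class and method names are hypothetical:

```python
# Sketch of the in-band (host-agent) request flow: the management station
# sends a request over the network to the host agent, which forwards it to
# a controller over the Fibre Channel I/O path. Names are hypothetical.

class Controller:
    def handle(self, command: str) -> str:
        # The controller firmware validates and runs the command, then
        # returns status and configuration information.
        return f"status: OK ({command})"

class HostAgent:
    """Runs on the host computer that has a Fibre Channel path to the subsystem."""
    def __init__(self, controllers):
        self.controllers = controllers

    def forward(self, command: str) -> str:
        # Requests arrive over the TCP/IP network connection and leave
        # over the Fibre Channel I/O path to a controller.
        return self.controllers[0].handle(command)

agent = HostAgent([Controller(), Controller()])
print(agent.forward("show storageSubsystem profile"))
```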
Notes:
1. Host computers that have the host agent software installed are automatically
discovered by the storage management software. They are displayed in the
device tree in the Enterprise Management window along with their attached
storage subsystems.
A storage subsystem might be duplicated in the device tree if you are managing
it through its Ethernet connections and it is attached to a host computer with the
host agent software installed. In this case, you can remove the duplicate
storage subsystem icon from the device tree by using the Remove Device
option in the Enterprise Management window.
2. Unless you are using Windows NT®, you must make a direct (out-of-band)
connection to the DS4000 Storage Subsystem in order to set the correct host
type. The correct host type will allow the DS4000 Storage Subsystem to
configure itself properly for the host server operating system. After you make a
direct (out-of-band) connection to the DS4000 Storage Subsystem, depending
on your particular site requirements, you can use either or both management
methods. Therefore, if you want to manage your subsystem with the in-band
management method, you must establish both in-band and out-of-band
management connections.
Note: Starting with controller firmware 06.14.xx.xx, the default host type is
Windows 2000/Server 2003 non-clustered, instead of Windows (SP5 or
higher) non-clustered.
Redundant disk array controller (RDAC) multipath driver
When a component in the Fibre Channel I/O path, such as a cable or the controller
itself, fails, the RDAC multipath driver transfers ownership of the logical drives that
are assigned to that controller to the other controller in the pair.
RDAC requires that the non-failover version of the Fibre Channel host bus adapter
device driver is installed in the host server. In addition, the storage subsystem
controller must be set to non-ADT mode.
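The ownership-transfer behavior described above can be pictured with a small sketch. This is an illustrative model of preferred/alternate controller failover, not IBM's RDAC driver; the controller and LUN names are hypothetical:

```python
# Sketch of RDAC-style failover (illustrative only): each logical drive has a
# preferred controller; when the path to that controller fails, ownership of
# its logical drives transfers to the alternate controller in the pair.

class DualControllerSubsystem:
    def __init__(self):
        # Hypothetical path-health table for controllers A and B.
        self.paths_ok = {"A": True, "B": True}
        # Current owner of each logical drive (preferred owner shown).
        self.owner = {"lun0": "A", "lun1": "B"}

    def path_failed(self, controller: str) -> None:
        """Record a failed I/O path and fail over the affected drives."""
        self.paths_ok[controller] = False
        alternate = "B" if controller == "A" else "A"
        for lun, ctrl in self.owner.items():
            if ctrl == controller and self.paths_ok[alternate]:
                self.owner[lun] = alternate  # transfer ownership

    def route(self, lun: str) -> str:
        """Return the controller that currently services I/O for this drive."""
        return self.owner[lun]

subsystem = DualControllerSubsystem()
subsystem.path_failed("A")      # e.g., a failed cable or controller
print(subsystem.route("lun0"))  # lun0 is now serviced by controller B
```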
Note: The RDAC driver is not available for Hewlett Packard HP-UX and Novell
NetWare operating systems. In the Novell NetWare environment, the Novell
native failover driver is used in place of RDAC.
NetWare native failover driver
For NetWare 6.5 with SP6 and later, Novell NetWare provides native multipath
support, which requires that the AVT/ADT mode be disabled on the DS4000 Storage
Subsystem. To disable the AVT/ADT function so that the Novell native multipath
driver can be used, you must run the DS4000 "DisableAVT_Netware.script" SMcli
script file.
Novell NetWare 6.5 SP6 includes the following multipath driver modules: MM.NLM,
NWPA.NLM, SCSIHD.CDM, and LSIMPE.CDM. Always use the latest version of
LSIMPE.CDM, either the one provided with the IBM DS4000 Fibre Channel HBA
device driver or the one that is part of the Novell NetWare operating system
distribution CD. LSIMPE.CDM enables the Novell multipath failover driver to
identify the logical drives that have been mapped from the DS4000 Storage
Subsystem to the host server. Refer to the Fibre Channel HBA NetWare driver
readme file for more information about how to configure LUN failover and failback.
Note: Storage Manager 9.23 is not supported with NetWare. You can still attach
your host to a subsystem that is running 6.23.xx.xx to run I/O, you just
cannot manage that system from the NetWare host.
Notes:
1. In a Linux operating system environment, you must install RDAC for multipath
failover protection in order to use the utilities in the Storage Manager Utility
package.
2. Refer to the Storage Manager readme files for all supported operating systems.
See “Storage Manager documentation and readme files” on page 1 for
instructions that describe how to find the readme files online.
| Microsoft® MPIO
| MPIO or MPIO/DSM: This multipath driver is included in the DS4000 Storage
| Manager host software package for Windows version 9.19 and later releases; it is
| not included in the Storage Manager host software for Windows releases prior to
| Storage Manager 9.19. MPIO is a Microsoft Driver Development Kit (DDK) for
| developing code that manages multipath devices. It contains a core set of binary
| drivers, which are installed with the IBM DS4000 Device Specific Module (DSM) to
| provide a transparent system architecture that relies on Microsoft Plug and Play to
| provide LUN multipath functionality while maintaining compatibility with existing
| Microsoft Windows device driver stacks. The MPIO driver performs the following tasks:
| v Detects and claims the physical disk devices presented by the DS4000 storage
| subsystems based on Vendor/Product ID strings, and manages the logical paths to
| the physical devices
| v Presents a single instance of each LUN to the rest of the Windows operating
| system
| v Provides an optional interface via WMI for use by user-mode applications
| v Relies on the vendor’s (IBM) customized Device-Specific Module (DSM) for
| information about the behavior of storage subsystem devices, including:
| – I/O routing information
| – Conditions requiring a request to be retried, failed, failed over, or failed
| back (for example, vendor-unique errors)
| – Miscellaneous functions, such as Release/Reservation command handling
| Multiple Device-Specific Modules (DSMs) for different disk storage subsystems
| can be installed in the same host server.
| Co-existence of RDAC and MPIO/DSM in the same host is not supported. You must
| use two different servers: one server with the RDAC multipath driver for performing
| IOs to the DS4000 subsystem that does not support MPIO as the multipath driver
| and the other server with the MPIO multipath driver for performing IOs to the
| DS4000 subsystem that does support MPIO as the multipath driver. RDAC and
| MPIO/DSM drivers handle logical drives (LUNs) in failure conditions similarly,
| because the DSM code that handles these conditions is ported from RDAC.
| However, the MPIO/DSM driver will be the required Microsoft multipath driver for
| future Microsoft Windows operating systems.
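The claim-and-collapse behavior described above can be sketched as follows. This is an illustrative model, not Microsoft's MPIO or IBM's DSM code, and the Vendor/Product ID strings and path records are hypothetical:

```python
# Sketch of MPIO/DSM-style device claiming (illustrative only): a path is
# claimed when its Vendor/Product ID matches the DSM, and all claimed paths
# to the same LUN collapse into a single device instance presented to the OS.

CLAIMED_IDS = {("IBM", "1814"), ("IBM", "1742")}  # hypothetical ID strings

def claim_paths(paths):
    """Group claimed physical paths by LUN identifier."""
    devices = {}
    for path in paths:
        if (path["vendor"], path["product"]) in CLAIMED_IDS:
            devices.setdefault(path["lun_id"], []).append(path["port"])
    return devices

paths = [
    {"vendor": "IBM", "product": "1814", "lun_id": "wwn-01", "port": "hba0"},
    {"vendor": "IBM", "product": "1814", "lun_id": "wwn-01", "port": "hba1"},
    {"vendor": "OTHR", "product": "x", "lun_id": "wwn-99", "port": "hba0"},
]

devices = claim_paths(paths)
# One LUN instance presented to the OS, with two logical paths behind it.
print(devices)  # {'wwn-01': ['hba0', 'hba1']}
```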
Host types
The host type setting that you specify when you configure Storage Manager
determines how the storage subsystem controllers work with the operating systems
on the connected hosts.
All Fibre Channel HBA ports that are defined with the same host type are handled
the same way by the DS4000 controllers. This determination is based on the
specifications that are defined by the host type. Some of the specifications that
differ according to the host type setting include the following options:
Auto Volume Transfer
Enables or disables the Auto-Logical Drive Transfer feature (ADT/AVT). For
more information about ADT, see “Auto-Logical Drive Transfer feature” on
page 48.
Enable Alternate Controller Reset Propagation
Determines whether the controller will propagate a Host Bus Reset/Target
Reset/Logical Unit Reset to the other controller in a dual controller
subsystem to support Microsoft Clustering Services.
Allow Reservation on Unowned LUNs
Determines the controller response to Reservation/Release commands that
are received for LUNs that are not owned by the controller.
Sector 0 Read Handling for Unowned Volumes
v Enable Sector 0 Reads for Unowned Volumes: Applies only to host
types with the Auto-Logical Drive Transfer feature enabled. For non-ADT
hosts, this option will have no effect.
v Maximum Sectors Read from Unowned Volumes: Specifies the
maximum allowable sectors (starting from sector 0) that can be read by a
controller that does not own the addressed volume. The value of these
bits specifies the maximum number of additional sectors that can be read
in addition to sector 0.
Reporting of Deferred Errors
Determines how the DS4000 controllers’ deferred errors are reported to the
host.
Do Not Report Vendor Unique Unit Attention as Check Condition
Determines whether the controller will report a vendor-unique Unit Attention
condition as a Check Condition status.
World Wide Name In Standard Inquiry
Enables or disables Extended Standard Inquiry.
Ignore UTM LUN Ownership
Determines how inquiry for the Universal Access LUN (UTM LUN) is
reported. The UTM LUN is used by the DS4000 Storage Manager host
software to communicate to the DS4000 storage subsystem in DS4000
storage subsystem in-band management configurations.
Report LUN Preferred Path in Standard Inquiry Data
Reports the LUN preferred path in bits 4 and 5 of the Standard Inquiry Data
byte 6.
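The options above can be pictured as a per-host-type settings record: every HBA port defined with a given host type is handled according to one such record. The host type names below appear in this guide, but the option values are placeholders, not actual NVSRAM contents; consult the Installation and Support Guide for the real settings:

```python
# Hypothetical illustration of how a host type groups the NVSRAM options
# listed above. The values are placeholders only, NOT real NVSRAM settings.

HOST_TYPE_SETTINGS = {
    "Windows 2000/Server 2003 non-clustered": {
        "auto_volume_transfer": False,            # placeholder value
        "propagate_reset_to_alternate": True,     # placeholder value
        "allow_reservation_on_unowned_luns": False,
        "ignore_utm_lun_ownership": False,
    },
    "Windows (SP5 or higher) non-clustered": {
        "auto_volume_transfer": False,            # placeholder value
        "propagate_reset_to_alternate": True,     # placeholder value
        "allow_reservation_on_unowned_luns": True,
        "ignore_utm_lun_ownership": False,
    },
}

def settings_for(host_type: str) -> dict:
    """All HBA ports defined with this host type share one settings record."""
    return HOST_TYPE_SETTINGS[host_type]

print(settings_for("Windows 2000/Server 2003 non-clustered"))
```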
In most DS4000 configurations, the NVSRAM settings for each supported host type
for a particular operating system environment are sufficient for connecting a host to
the DS4000 storage subsystems. You should not need to change any of the host
type settings for NVSRAM. If you think you need to change the NVSRAM settings,
please contact your IBM support representative for advice before proceeding.
For information about which host type setting you need to specify for your host
operating system and how to specify the setting, see the IBM Storage Manager
Installation and Support Guide for your operating system.
System requirements
This section provides detailed information about the hardware, software, and
storage management architecture for IBM DS4000 Storage Manager Version 9.1x.
Hardware requirements
Table 8 lists the hardware that is required to install Storage Manager 9.1x.
Table 8. Storage management architecture hardware components
Component Description
Management station (one or more): A management station is a computer that is
connected through an Ethernet cable to the host computer or directly to the
controller. It requires:
v A monitor setting of 1024 x 768 pixels with 64,000 colors. The minimum display
setting that is allowed is 800 x 600 pixels with 256 colors.
v Hardware-based Windows acceleration. Desktop computers that use system
memory for video memory are not preferred for use with the storage
management software.
Fibre Channel switches: Fibre Channel switches are used if there are more host
servers that need to access the storage subsystem than the available number of
physical Fibre Channel ports on the storage subsystem.
Note: For FAStT500, DS4400, and DS4500 Storage Subsystems, if one of the two
ports of a host minihub is connected to the Fibre Channel switch, the other minihub
port must be left unconnected (open). This restriction does not apply to the host
ports in DS4100, DS4300, and DS4800 Storage Subsystems.
Host computer: A host computer is a computer that runs one or more applications
that access the storage subsystem through the Fibre Channel I/O data connection.
Storage subsystem and controller (one or more): The storage subsystem and
storage controller are storage entities, managed by the storage management
software, that consist of both physical components (such as drives, controllers,
fans, and power supplies) and logical components (such as arrays and logical
drives).
File server: You can store the storage management software on a central file
server. Management stations on the network can then remotely access the storage
management software.
Note: Do not start more than eight instances of the program at the same time if the
DS4000 Storage Manager client program is installed in multiple host servers
or management stations. You should manage all DS4000 Storage
Subsystems in a SAN from a single instance of the Storage Manager client
program.
Direct (out-of-band) management method
When you use the direct (out-of-band) management method, you manage storage
subsystems directly over the network through the Ethernet connection to each
controller. To manage the storage subsystem through the Ethernet connections, you
must define the IP address and host computer name for each controller and attach
a cable to the Ethernet connectors on each of the storage subsystem controllers.
See Figure 1 on page 23.
Note: You can avoid DHCP/BOOTP server and network tasks by assigning static
IP addresses to the controllers, by using a default IP address, or by
managing all storage subsystems through the Fibre Channel I/O path
using a host agent.
Important: Unless you are using Windows NT, you must make a direct (out-of-band)
connection to the DS4000 Storage Subsystem in order to obtain the correct host
type. The correct host type will allow the DS4000 Storage Subsystem to configure
itself properly for the host server operating system. After you make a direct
(out-of-band) connection to the DS4000 Storage Subsystem, depending on your
particular site requirements, you can use either or both management methods.
Therefore, if you wish to manage your subsystem with the in-band management
method, you must establish both in-band and out-of-band management
connections.
If your controller firmware is at 05.4x.xx.xx or later, you should set the controller
static IP address via the SMclient Subsystem Management window after making
management connections to the DS4000 controller via in-band or out-of-band
management (using the default IP address as indicated in Table 9).
Table 9 lists the default settings for storage subsystem controllers that have
firmware version 05.00.xx or later:
Table 9. Default settings for controllers with firmware version 05.00.xx or later
Controller | IP address | Subnet mask
A | 192.168.128.101 (and 192.168.129.101 for DS4800 only) | 255.255.255.0
B | 192.168.128.102 (and 192.168.129.102 for DS4800 only) | 255.255.255.0
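As a quick illustration of these defaults, the standard-library `ipaddress` module can show which subnets a management station must be able to reach before the controllers are reconfigured. The helper function is ours, not part of Storage Manager:

```python
import ipaddress

# Default controller addresses from Table 9 (firmware 05.00.xx or later);
# the second address on each controller applies to the DS4800 only.
DEFAULTS = {
    "A": ["192.168.128.101", "192.168.129.101"],
    "B": ["192.168.128.102", "192.168.129.102"],
}
SUBNET_MASK = "255.255.255.0"

def subnet_of(ip: str, mask: str = SUBNET_MASK) -> ipaddress.IPv4Network:
    """Return the network a controller address belongs to."""
    return ipaddress.IPv4Network(f"{ip}/{mask}", strict=False)

# A management station needs a route onto each of these subnets before it
# can reach the controllers at their factory-default addresses.
for ctrl, addrs in DEFAULTS.items():
    for ip in addrs:
        print(ctrl, ip, subnet_of(ip))
```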
Figure 1 shows a system in which the storage subsystems are managed through
the direct (out-of-band) management method.
[Figure 1. Direct (out-of-band) managed storage subsystems: one or more
management stations connect through an Ethernet network-management
connection to the controllers in each storage subsystem; the host computer
connects through the Fibre Channel I/O path.]
Host-agent (in-band) management method
When you use the host-agent (in-band) management method, the controllers in the
storage subsystem are managed through the host-agent Fibre Channel network
connection to a host computer, rather than through the direct (out-of-band) Ethernet
network connections to each controller. The host-agent software on the host
computer enables communication between the management software and the
controllers in the storage subsystem. The management software can be installed in
the host or in the management station that is connected to the host through the
Ethernet network connection. To manage a storage subsystem using the host-agent
management method, you must install the host-agent software on the host
computer and then use the Enterprise Management window to add the host
computer to the management domain. By including the host computer in the
domain, you will include attached host-agent managed storage subsystems also.
Managing storage subsystems through the host agent has these advantages:
v You do not have to run Ethernet cables to the controllers.
v You do not need a DHCP/BOOTP server to connect the storage subsystems to
the network.
v You do not need to perform the controller network configuration tasks.
v When adding devices, you must specify a host computer name or IP address
only for the host computer instead of for the individual controllers in a storage
subsystem. Storage subsystems that are attached to the host computer are
automatically detected.
Managing storage subsystems through the host agent has these disadvantages:
v The host agent requires a special logical drive, called an access volume, to
communicate with the controllers in the storage subsystem. Therefore, you are
limited to configuring one less logical drive than the maximum number that is
allowed by the operating system and the host adapter that you are using.
Important:
– If your host already has the maximum number of logical drives configured,
either use the direct management method or give up a logical drive for use as
the access logical drive.
– Systems running the Windows XP operating system can only be used as
storage management stations. You cannot use Windows XP as a host
operating system.
Note: The access logical drive is also referred to as the Universal Xport Device.
v If the connection through the Fibre Channel is lost between the host and the
subsystem, the subsystem cannot be managed or monitored.
[Figure 2. Host-agent (in-band) managed storage subsystems: one or more
management stations connect over the network to a host computer that runs
the host-agent software; the host computer communicates with the controllers
in each storage subsystem through the Fibre Channel I/O path.]
Figure 3. Sample network using direct and host-agent managed storage subsystems
For example, a coexisting situation exists when you have a new storage subsystem
with controllers that are running firmware version 06.10.xx.xx, and the storage
subsystem is attached to the same host as one or more of the following
configurations:
v A storage subsystem with controllers running firmware versions 04.00.xx.xx
through 04.00.01.xx, which is managed by a separate management station with
Storage Manager 7.10
v A storage subsystem with controllers running firmware versions 04.01.xx.xx
through 06.1x.xx.xx, which is managed with Storage Manager 9.1x
Important: The common host must have the latest level (version 9.1x) of RDAC
and SMagent installed. For DS4300 Turbo, DS4400 and DS4500, the 06.12.xx.xx
firmware is available free of charge for download from the IBM support Web site
along with all the fixes and software patches. In a coexisting environment, you must
upgrade all DS4000 controller firmware to the latest supported code level.
Managing the storage subsystem using the graphical user interface
This section includes information about managing the storage subsystem using the
SMclient graphical user interface (GUI) and covers the following topics:
v The Enterprise Management window
v The Subsystem Management window
v Populating the management domain
v The script editor
The two main windows are the Enterprise Management window and the Subsystem
Management window. The Enterprise Management window is shown in Figure 4.
The Subsystem Management window is shown in Figure 6 on page 30.
[Figure 4. The Enterprise Management window]
The emwdata.bin configuration file contains a list of the storage subsystems that
are included in the management domain, and any alert destinations you have
configured. After adding the storage subsystems, use the Enterprise Management
window primarily for coarse-level monitoring and alert notification of non-optimal
storage subsystem conditions. You can also use it to open the Subsystem
Management window for a particular storage subsystem. The emwdata.bin
configuration file is stored in a default directory. The name of the default directory
depends on your operating system and firmware version.
When storage subsystems are added to the Enterprise Management window, they
are shown in the device tree as child nodes of the storage management station
node. A storage subsystem can be managed through an Ethernet connection on
each controller in the storage subsystem (out-of-band) or through a host interface
connection to a host with the host-agent installed (in-band).
Read the following points before you populate a management domain:
v Be sure to specify the IP addresses for both controllers when you add new
storage subsystems to existing storage subsystems that are managed using the
out-of-band management method.
v If a given DS4000 Storage Subsystem is listed in the device tree as being both
out-of-band and in-band managed, the DS4000 Storage Manager program will
select the out-of-band route to manage the storage subsystem.
Note: If the DS4000 Storage Subsystem is seen by the SMclient through both
in-band and out-of-band management methods, the subsystem will be
displayed in two places in the device tree.
v When you add new storage subsystems to the existing storage subsystems in a
SAN that are managed through the host-agent software, you must stop and
restart the host-agent service in the host server that has a Fibre-channel
connection to the new storage subsystem. When the host-agent service restarts,
the new storage subsystem is detected. Then, go to the Enterprise Management
window, select the host server on which you just restarted the host-agent service,
and click Tools > Rescan to add the new storage subsystems to the
management domain under the host server node in the device tree.
v If you have a large network, the Automatic Recovery option might take a while to
complete. You might also get duplicate storage subsystem entries listed in the
device tree if there are multiple hosts in the same network that have a host-agent
connection to the storage subsystems. You can remove a duplicate storage
management icon from the device tree by using the Remove Device option in the
Enterprise Management window.
v When storage subsystems are detected or added to the Enterprise Management
window for the first time, they are shown as Unnamed in the device tree unless
they have been named by another storage management station.
For more information about populating a management domain, see the Enterprise
Management window online help.
The features of a particular release of firmware will be accessible when a
Subsystem Management window is launched from the Enterprise Management
window to manage a storage subsystem. For example, you can manage two
storage subsystems using the Storage Manager software; one storage subsystem
has firmware version 6.1x.xx.xx and the other has firmware version 5.30.xx.xx.
When you open a Subsystem Management window for a particular storage
subsystem, the correct Subsystem Management window version is used. The
storage subsystem with firmware version 6.1x.xx.xx will use version 9.1x of the
storage management software, and the storage subsystem with firmware version
5.30.xx.xx will use version 8.3. You can verify the version you are currently using by
clicking Help > About in the Subsystem Management Window.
[Figure 6. The Subsystem Management window]
Table 10. Subsystem Management window tabs
Tabs Description
Logical/Physical View The Subsystem Management Window
Logical/Physical View contains two panes: the
Logical View and the Physical View.
Table 11. The Subsystem Management window menus
Menu Definition
Storage Subsystem The Storage Subsystem menu contains
options to perform the following storage
subsystem management operations:
v Locating functions (locating the storage
subsystem by flashing indicator lights)
v Automatically configuring the storage
subsystem and saving storage subsystem
configuration data in a file using the SMcli
script commands
v Enabling and disabling premium features
v Displaying the Recovery Guru and the
corresponding problem summary, details
and recovery procedures
v Monitoring performance
v Changing various Storage Subsystem
settings - passwords, default host types,
Media scan settings, enclosure order,
cache settings and failover alert delay.
v Setting controller clocks
v Activating or deactivating the Enhanced
Remote Mirroring option - Upgrade Mirror
Repository Logical Drive
v Renaming storage subsystem
v Viewing the storage subsystem profile
v Managing the controller enclosure alarm
(DS4800 only)
View The View menu allows you to perform the
following tasks:
v Open the Task Assistant tool
v Switch the display between the
Logical/Physical view and the Mappings
view
v View associated components for a selected
drive in the Physical pane of the
Logical/Physical view
v Find a particular node in the Logical view
or Mappings view
v Go directly to a particular FlashCopy,
FlashCopy Repository, VolumeCopy source
or target logical drive node in the Logical
Drive tree.
Mappings The Mappings menu allows you to make
changes to or retrieve details about mappings
associated with a selected node. The
Mappings menu contains the following
options:
v Define hosts, host groups, host ports, or
storage partitioning
v Change
v Move
v Replace Host Port
v Show All Host Port Information
v Remove
v Rename
Note: You must be in the Mappings View to
access the options available in this menu.
Array The Array menu presents options to perform
the following storage management operations
on arrays:
v Locating logical drives
v Changing RAID level or controller
ownership
v Adding free capacity (drives)
v Deleting an array
Note: These menu options are only available
when an array is selected.
Logical Drive The Logical Drive menu provides options to
perform the following storage management
operations on volumes:
v Creating logical drives
v Changing ownership/preferred path,
segment size, Media Scan settings, cache
settings, modification priority
v Increasing capacity
v Creating a VolumeCopy
v Viewing VolumeCopies using the Copy
Manager
v Creating, recreating, or disabling a
FlashCopy logical drive
v Creating, suspending, resuming, or
changing remote mirror settings and testing
communication.
v Removing a mirror relationship
v Viewing logical drive properties
v Deleting, or renaming a logical drive
Note: These menu options are only available
when a logical drive is selected.
Controller The Controller menu displays options to
perform the following storage management
operations on controllers:
v Changing the preferred loop ID
v Modifying the IP address, gateway address,
or network subnet mask of a controller
v Viewing controller properties
Note: These menu options are only available
when a controller is selected.
Drive The Drive menu contains options to perform
the following storage management operations
on drives:
v Locating a drive and storage expansion
enclosure
v Assigning or unassigning a hot spare
v Viewing drive properties
Note: These menu options are only available
when a drive is selected.
Advanced The Advanced menu allows you to perform
certain maintenance functions. The Advanced
menu contains the following options:
v Maintenance
– Downloading firmware and NVSRAM
files
– Downloading drive expansion enclosure
ESM firmware
– Activating or clearing staged controller
firmware
– Managing persistent reservations
– Downloading drive mode pages
– Placing an array online or offline
v Troubleshooting
– Collecting support data and drive data
– Viewing the event log
– Viewing drive channel details
– Running Read Link Status diagnostics
– Capturing state information
– Running controller diagnostics
– Running Discrete lines diagnostics
(DS4800 only)
v Recovery
– Failing, reconstructing, reviving, or
initializing a drive
– Initializing, reviving, or defragmenting
an array
– Checking an array for redundancy
– Initializing logical drives
– Resetting the configuration and
controller
– Placing a controller online, offline, or in
service mode
– Redistributing logical drives
– Displaying unreadable sectors reports
– Enabling or disabling data transfer (I/O)
Help The Help menu provides options to perform
the following actions:
v Display the contents of the Subsystem
Management window online help
v View a reference of all Recovery Guru
procedures
v View the software version and copyright
information
A script editor is provided for running scripted management commands. If the controller firmware
version is 5.4x.xx.xx or earlier, some of the management functions that can be done
through the GUI are not implemented through script commands. Storage Manager
9.1x in conjunction with controller firmware version 6.10.xx.xx and higher provides
full support of all management functions via SMcli commands.
Important: Use caution when running commands in the script window, because
the script editor does not prompt for confirmation before destructive operations
such as the Delete arrays and Reset Storage Subsystem configuration commands.
Not all script commands are implemented in all versions of the controller firmware.
The earlier the firmware version, the smaller the set of script commands. For more
information about script commands and firmware versions, see the DS4000 Storage
Manager Enterprise Management window online help.
For a list of available commands and their syntax, see the online Command
Reference help.
– Ctrl+A: To select everything in the window
– Ctrl+C: To copy the marked text in the window into a Windows clipboard
buffer
– Ctrl+V: To paste the text from the Windows clipboard buffer into the
window
– Ctrl+X: To delete (cut) the marked text in the window
– Ctrl+Home: To go to the top of the script window
– Ctrl+End: To go to the bottom of the script window
v The Output view displays the results of the operations.
A splitter bar divides the window between the Script view and the Output view.
Drag the splitter bar to resize the views.
The following list includes some general guidelines for using the script editor:
v All statements must end with a semicolon (;).
v Each base command and its associated primary and secondary parameters must
be separated by a space.
v The script editor is not case sensitive.
v Each new statement must begin on a separate line.
v Comments can be added to your scripts to make it easier for you and future
users to understand the purpose of the command statements. A comment such
as //The following command assigns hot spare drives. is included for
clarification only and is not processed by the script editor.
Important: You must end a comment that begins with // with an end-of-line
character, which you insert by pressing the Enter key. If the script engine does
not find an end-of-line character in the script after processing a comment, an
error message displays and the script fails.
v Text contained between the /* and */ characters is also treated as a
comment and is not processed by the script engine.
The command line interface (SMcli)
You can use the command line interface (SMcli) to perform the following tasks:
v Run scripts on multiple storage systems
v Create batch files
v Run mass operations on multiple storage systems
v Access the script engine directly without using the Enterprise Management
window
In Storage Manager 9.1x with controller firmware 06.10.xx.xx or higher, there is full
support for management functions via SMcli commands. For a list of the available
commands with the usage syntax and examples, see the Command Reference in
the Enterprise window online help.
Using SMcli
Perform the following steps to use the SMcli:
1. Go to the command line shell of your operating system. At the command
prompt, type SMcli, followed by either the controller name, host-agent name,
worldwide name (WWN) or user-supplied name of the specific storage
subsystems. The name that you enter depends on your storage subsystem
management method:
v For directly managed subsystems, enter the host name or IP address of the
controller or controllers
v For host-agent managed subsystems, enter the host name or IP address of
the host
Note: Some command line shells might not support commands longer than 256
characters. If your command is longer than 256 characters, use a
different shell or enter the command into the Storage Manager script
editor.
If you specify a host name or an IP address, the command line utility verifies
that a storage subsystem exists.
If you specify the user-supplied storage subsystem name or WWN, the utility
ensures that a storage subsystem with that name exists at the specified location
and can be contacted.
Notes:
v You must use the -n parameter if more than one host-agent managed
storage subsystem is connected to the host. For example:
SMcli hostmachine -n sajason
v Use the -w parameter if you specify the WWN of the storage subsystem. For
example:
SMcli -w 600a0b800006602d000000003beb684b
v You can specify the storage subsystem by its user-supplied name using the
-n parameter only if the storage subsystem is configured in the Enterprise
Management window. For example:
SMcli -n Storage Subsystem London
The name must be unique to the Enterprise Management window.
2. Type one or more commands, for example:
-c "<command>;[<command2>;...]"
or
type the name of a script file, for example:
-f <scriptfile>.
SMcli first verifies the existence and locations of the specified storage
subsystems and, if applicable, the script file. Next, it verifies the script command
syntax and then runs the commands.
3. Then you can do one of the following actions:
v Specify the output file, for example:
[-o <outputfile>]
v Specify the password, for example:
[-p <password>]
v Run the script only, for example:
[-e]
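Taken together, the steps above form a single command line. The following sketch (in Python, purely to compose and inspect the string; the host, subsystem, and file names are hypothetical placeholders, not values from this guide) shows how the pieces fit:

```python
# Assemble a complete SMcli invocation from the parts described in
# steps 1-3. All values (hostmachine, sajason, scriptfile.scr,
# output.txt) are illustrative placeholders.
parts = [
    "SMcli",
    "hostmachine",           # step 1: host name of the host-agent system
    "-n", "sajason",         # required when the host agent manages more
                             # than one storage subsystem
    "-f", "scriptfile.scr",  # step 2: a file of script commands (or use -c)
    "-o", "output.txt",      # step 3: send the output to a file
    "-e",                    # step 3: run the script only
]
command = " ".join(parts)
print(command)
```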
-a email:MAILADDRESS
Table 12. Command line parameters (continued)
Command line Action
parameter
-A Specify a storage subsystem to add to the management domain.
Specify an IP address (xx.xx.xx.xx) for each controller in the
storage subsystem.
-d Display the contents of the configuration file. The configuration file lists
all known storage subsystems that are currently configured in the Enterprise
Management window.
-e Run the commands only, without performing a syntax check first.
-f Specify the name of a file containing script engine commands to
be performed on the specified storage subsystem. Use the -f
parameter in place of the -c parameter.
Note: Any errors that are encountered when running the list of
commands will by default cause the command to stop. Use the on
error continue; command in the script file to override this
situation.
-F Specify the e-mail address that will send the alerts.
-i When used with the -d parameter, display the IP addresses of the known
storage subsystems instead of their host names.
-n Specify the storage subsystem name on which you want to
perform the script commands.
-x trap:COMMUNITY, HOST
where
v COMMUNITY is the SNMP community name
v HOST is the IP address or the host name of a station running an
SNMP service.
-x email:MAILADDRESS
Notes:
v All statements must end with a semicolon (;).
v Separate each base command and any parameters with a space.
v Separate each parameter and its parameter value with an equal sign (=).
v The SMcli is not case-sensitive. You can enter any combination of upper and
lowercase letters. The usage shown in the examples in the section “SMcli
examples” on page 43 follows the convention of having a capital letter start the
second word of a parameter.
v For a list of supported commands and their syntax, see the Enterprise
Management window online help. The online help contains commands that are
current with the latest version of the storage management software.
Some of the commands might not be supported if you are managing storage
subsystems running firmware for previous releases. See the Firmware
Compatibility List in the Enterprise Management window online help for a
complete list of commands and the firmware levels on which they are supported.
Note: If you invoke SMcli and specify a storage subsystem, but do not specify
the commands or script file to run, SMcli runs in interactive mode. This
allows you to specify the commands interactively. Use Ctrl+D to stop
SMcli.
v Insert three carets (^^^) before each special script character when it is used
within a literal script command string. For example, to change the name of the
storage subsystem to Finance&Payroll, type the following command:
-c "set storageSubsystem userLabel=\"Finance^^^&payroll\";"
See the appropriate operating system documentation for a list of special script
characters.
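The three-caret rule can be expressed as a small helper. A minimal sketch, assuming an illustrative set of special characters (consult your operating-system documentation for the actual list):

```python
# Prefix each special script character with three carets (^^^) so it is
# treated literally inside a quoted script command string. The SPECIAL
# set below is an assumption for illustration only; see your
# operating-system documentation for the real list.
SPECIAL = set("&|<>^")

def escape_script_chars(text):
    return "".join("^^^" + ch if ch in SPECIAL else ch for ch in text)

print(escape_script_chars("Finance&Payroll"))  # Finance^^^&Payroll
```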
SMcli examples
Following are examples of how you can use the SMcli to access and run script
engine commands.
Note: The usage of the -c and the -p parameters varies depending on your
operating system.
v For Microsoft Windows systems, the -c and the -p parameters must be
enclosed in double quotation marks (").
v For UNIX systems, the -c and the -p parameter strings must be enclosed
in single quotation marks (’).
1. Rename “Payroll Array” to “Finance Array” using the host name ICTSANT.
For Windows systems:
2. In the storage subsystem with controller names “finance 1” and “finance 2,” use
the password Test Array to do the following:
v Delete the logical drive named “Stocks & Bonds”.
v Create a new logical drive named “Finance”.
v Show the health status of the storage subsystem, which is managed using
the direct management method.
For Windows systems:
3. Run the commands that are in the script file named scriptfile.scr in the storage
subsystem named “Example” without performing a syntax check.
For both Windows and UNIX systems:
4. Run the commands found in the script file named scriptfile.scr on the storage
subsystem named “Example.” Use “My Array” as the password and direct all
output to output.txt.
For Windows systems:
5. Display all storage subsystems that are currently configured in the Enterprise
Management window (configuration file), using <IP address> format instead of
<hostname> format.
For Windows and UNIX systems:
SMcli -d –i
Chapter 2. Storing and protecting your data
When you configure a storage subsystem, review the appropriate data protection
strategies and decide how you will organize the storage capacity into logical drives
that are shared among hosts in the enterprise.
Storage subsystems are designed for reliability, maximum data protection, and 24
hour data availability through a combination of hardware redundancy and controller
firmware configurations.
Logical drives
The storage management software identifies several distinct types of logical
drives. The following list describes each type.
Standard logical drive
A standard logical drive is a logical structure that is created on a storage
subsystem for data storage. Use the Create Logical Drive wizard to create
a standard logical drive. If neither the FlashCopy nor the Enhanced Remote
Mirroring premium feature is enabled, only standard logical drives are
created. Standard logical drives are also used when creating FlashCopy
logical drives and Enhanced Remote Mirroring logical drives.
FlashCopy logical drive
A FlashCopy logical drive is a point-in-time image of a standard logical
drive. A FlashCopy logical drive is the logical equivalent of a complete
physical copy, but you create it much more quickly and it requires less disk
space. The logical drive from which you are creating the FlashCopy logical
drive, called the base logical drive, must be a standard logical drive in your
storage subsystem. For more information about FlashCopy logical drives,
see “FlashCopy” on page 62.
FlashCopy repository logical drive
A FlashCopy repository logical drive is a special logical drive in the storage
Dynamic Logical Drive Expansion
Attention: Increasing the capacity of a standard logical drive is only supported on
certain operating systems. If you increase the logical drive capacity on a host
operating system that is unsupported, the expanded capacity will be unusable, and
you cannot restore the original logical drive capacity. For information about
supported operating systems, see Increase Logical Drive Capacity: Additional
Instructions in the Storage Subsystem Management window online help.
Dynamic Logical Drive Expansion (DVE) is a modification operation that you use to
increase the capacity of standard or FlashCopy repository logical drives. You can
increase the capacity by using any free capacity available on the array of the
standard or FlashCopy repository logical drive.
Data is accessible on arrays, logical drives, and disk drives throughout the entire
modification operation.
During the modification operation, the logical drive for which the capacity is being
increased shows the following three factors:
v A status of Operation in Progress
v The original logical drive capacity
v The total capacity being added
After the capacity increase completes, the expanded capacity of the logical drive
displays, and the final capacity for the Free Capacity node that is involved shows a
reduction in capacity. If you use all of the free capacity to increase the logical drive
size, then the Free Capacity node that is involved is removed from the Logical
View.
You cannot increase the storage capacity of a logical drive if any of the following
conditions apply:
v One or more hot spare drives are in use in the logical drive
v The logical drive has a non-optimal status
v Any logical drive in the array is in any state of modification
v The controller that owns this logical drive is in the process of adding capacity to
another logical drive. (Each controller can add capacity to only one logical drive
at a time.)
v No free capacity exists in the array
v No unconfigured capacity (in the form of drives) is available to add to the array
For more information, see Learn About Increasing the Capacity of a Logical Drive
on the Learn More tab in the Storage Subsystem Management online help window.
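The blocking conditions above amount to a simple checklist. The following sketch illustrates it (the dictionary keys are hypothetical names for illustration, not fields exposed by the product):

```python
def can_increase_capacity(ld):
    """Return True if none of the blocking conditions listed above apply.

    The last test folds the final two bullets together: capacity must be
    available either as free capacity in the array or as unconfigured
    drives that can be added to it.
    """
    return not (
        ld["hot_spares_in_use"]
        or ld["status"] != "optimal"
        or ld["array_modification_in_progress"]
        or ld["controller_adding_capacity_elsewhere"]
        or (ld["array_free_capacity_gb"] == 0
            and ld["unconfigured_drive_count"] == 0)
    )

logical_drive = {
    "hot_spares_in_use": False,
    "status": "optimal",
    "array_modification_in_progress": False,
    "controller_adding_capacity_elsewhere": False,
    "array_free_capacity_gb": 72,
    "unconfigured_drive_count": 0,
}
print(can_increase_capacity(logical_drive))  # True
```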
Arrays
An array is a set of drives that the controller logically groups together to provide
one or more logical drives to an application host. When you create a logical drive
To create an array, a minimum of two parameters must be specified: RAID level and
capacity (how large you want the array). For the capacity parameter, you can either
choose the automatic choices provided by the software or select the manual
method to indicate the specific drives to include in the array. The automatic method
should be used whenever possible, because the software provides the best
selections for drive groupings.
In addition to these two parameters, you can also specify the segment size, the
cache read-ahead count, and which controller is the preferred owner.
Multipath drivers, such as the redundant disk array controller (RDAC) and VERITAS
Volume Manager with Dynamic Multipathing (DMP), are installed on host computers
that access the storage subsystem and provide I/O path failover.
This section describes ADT and other operating-system specific failover protection
features.
For controller firmware versions 05.2x.xx.xx and higher, the ADT feature is
automatically disabled or enabled depending on the type of host ports in the host
partition to which you mapped the logical drives. It is disabled by default for
Microsoft Windows, IBM AIX, and Sun Solaris operating systems. It is enabled by
default for Linux, Novell NetWare, and HP-UX operating systems.
Notes:
1. In most cases, ADT is disabled for the operating system for which RDAC is the
failover driver. In the “remote boot” configurations, ADT must be enabled.
2. If you are using Dynamic Multi-pathing (DMP) as your default failover driver, you
must uninstall RDAC.
| Note: Storage Manager 9.23 is not supported with NetWare. You can still
| attach your host to a subsystem that is running firmware 6.23.xx.xx to run
| I/O; you just cannot manage that subsystem from the NetWare host.
With Novell native failover support, the Automatic Logical Drive Transfer
(ADT)/Automatic Volume Transfer (AVT) mode must be disabled. For more
information, see:
www-307.ibm.com/pc/support/site.wss/document.do?lndocid=MIGR-59039
v Multipath driver software on the host or hosts and ADT disabled on the storage
subsystem
v No multipath driver software on the host or hosts and ADT enabled on the
storage subsystem (no failover)
Note: If you want to change the default ADT settings, contact technical support.
Multipath driver software with ADT enabled on the storage subsystem
This is the normal configuration setting for Novell NetWare, Linux (when
using FC HBA failover driver instead of RDAC), and Hewlett Packard
HP-UX systems.
Two active controllers are located in a storage subsystem. When you create
a logical drive, you assign one of the two active controllers to own the
logical drive (called preferred controller ownership) and to control the I/O
between the logical drive and the application host along the I/O path. The
preferred controller normally handles the I/O requests for the logical
drive. If a problem along the data path (such as a component failure)
causes an I/O to fail, the multipath driver issues the I/O to the alternate
controller.
When ADT is enabled and used with a host multipath driver, it helps ensure
that an I/O data path is available for the storage subsystem logical drives.
The ADT feature changes the ownership of the logical drive that is receiving
the I/O to the alternate controller. After the I/O data path problem is
corrected, the preferred controller automatically reestablishes ownership of
the logical drive as soon as the multipath driver detects that the path is
normal again.
Multipath driver software with ADT disabled on the storage subsystem
This is the configuration setting for Microsoft Windows, IBM AIX, Sun
Solaris, and Linux (when using the RDAC driver and non-failover
Fibre Channel HBA driver) systems.
When ADT is disabled, the I/O data path is still protected as long as you
use a multipath driver. However, when an I/O request is sent to an
individual logical drive and a problem occurs along the data path to its
preferred controller, all logical drives on the preferred controller are
transferred to the alternate controller. In addition, after the I/O data path
problem is corrected, the preferred controller does not automatically
re-establish ownership of the logical drive. You must open a storage
management window and select Redistribute Logical Drives from the
Advanced menu.
No multipath driver software with ADT enabled on the storage subsystem (no
failover protection)
In this configuration, failover protection is not provided if a component
in the I/O path fails.
RAID-1, RAID-3, and RAID-5 write redundancy data to the drive media for fault
tolerance. The redundancy data might be a copy of the data (mirrored) or an
error-correcting code that is derived from the data. The redundancy data is
stored on a different drive from the data that it protects; if a drive fails, the
redundancy data is used to reconstruct the drive information on a hot-spare
replacement drive.
RAID-1 uses mirroring for redundancy. RAID-3 and RAID-5 use redundancy
information, sometimes called parity, that is constructed from the data bytes and
striped along with the data on each disk.
Table 13 describes the RAID level configurations that are available with the Storage
Manager 9.1x software.
Table 13. RAID level configurations
RAID level Short description Detailed description
RAID-0 Non-redundant, RAID-0 offers simplicity, but does not provide data
striping mode redundancy. A RAID-0 array spreads data across all
drives in the array. This normally provides the best
performance, but there is no protection against a
single drive failure. If one drive in the array fails, all
logical drives contained in the array fail. This RAID level
is not recommended for high data-availability needs.
RAID 0 is better for non-critical data.
Table 13. RAID level configurations (continued)
RAID level Short description Detailed description
RAID-1 Striping/Mirroring v A minimum of two drives is required for RAID-1: one
mode for the user data and one for the mirrored data. The
DS4000 Storage Subsystem implementation of RAID-1
is basically a combination of RAID-1 and RAID-10,
depending on the number of drives selected. If only
two drives are selected, RAID-1 is implemented. If you
select four or more drives (in multiples of two), RAID
10 is automatically configured across the volume
group: two drives for user data, and two drives for the
mirrored data.
v RAID-1 provides high performance and the best data
availability. On a RAID-1 logical drive, data is written
to two duplicate disks simultaneously. On a RAID-10
logical drive, data is striped across mirrored pairs.
v RAID-1 uses disk mirroring to make an exact copy of
data from one drive to another drive. If one drive fails
in a RAID-1 array, the mirrored drive takes over.
v RAID-1 is costly in terms of capacity. One-half of the
drives are used for redundant data.
RAID-3 High-bandwidth v RAID-3 requires one dedicated disk in the logical drive
mode to hold redundancy information (parity). User data is
striped across the remaining drives.
v RAID-3 is a good choice for applications such as
multimedia or medical imaging that write and read
large amounts of sequential data. In these
applications, the I/O size is large, and all drives
operate in parallel to service a single request,
delivering high I/O transfer rates.
RAID-5 High I/O mode v RAID-5 stripes both user data and redundancy
information (parity) across all of the drives in the
logical drive.
v RAID-5 uses the equivalent of one drive’s capacity for
redundancy information.
v RAID-5 is a good choice in multi-user environments
such as database or file-system storage, where the I/O
size is small and there is a high proportion of read
activity. When the I/O size is small and the segment
size is appropriately chosen, a single read request is
retrieved from a single individual drive. The other
drives are available to concurrently service other I/O
read requests and deliver fast read I/O request rates.
Note: One array uses a single RAID level and all redundancy data for that array is
stored within the array.
The capacity of the array is the aggregate capacity of the member drives, minus the
capacity that is reserved for redundancy data. The amount of capacity that is
needed for redundancy depends on the RAID level that is used.
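The capacity rule can be written as a short calculation. A sketch assuming equal-sized drives (the function and its example values are illustrative, not from the product):

```python
def usable_capacity_gb(raid_level, drive_count, drive_size_gb):
    """Usable array capacity after redundancy overhead, per Table 13.

    RAID-0 reserves nothing; RAID-1 (and RAID-10) mirrors, so half the
    drives hold redundant data; RAID-3 and RAID-5 use the equivalent of
    one drive's capacity for parity. Assumes all drives are the same size.
    """
    total = drive_count * drive_size_gb
    if raid_level == 0:
        return total
    if raid_level == 1:          # includes RAID-10 (multiples of two drives)
        return total / 2
    if raid_level in (3, 5):
        return total - drive_size_gb
    raise ValueError("unsupported RAID level")

# Eight 146 GB drives at RAID-5: one drive's worth reserved for parity.
print(usable_capacity_gb(5, 8, 146))  # 1022
```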
Important: A warning box opens when you select the Check array redundancy
option that cautions you to only use the option when instructed to do so by the
Recovery Guru. It also informs you that if you need to check redundancy for any
reason other than recovery, you can enable redundancy checking through Media
Scan. For more information on Media Scan, see “Media scan” on page 56.
Note: You can enable the write-cache mirroring parameter for each logical drive but
when write-cache mirroring is enabled, half of the total cache size in each
controller is reserved for mirroring the cache data from the other controller.
To prevent data loss or damage, the controller writes cache data to the logical drive
periodically. When the cache holds a specified start percentage of unwritten data,
the controller writes the cache data to the logical drive. When the cache is flushed
down to a specified stop percentage, the flush is stopped. For example, the default
start and stop settings for a logical drive are 80% and 20% of the total cache size,
respectively. With these settings, the controller starts flushing the cache data when
the cache reaches 80% full and stops flushing cache data when the cache is
flushed down to 20% full. For maximum data safety, you can choose low start and
stop percentages, for example, a start setting of 25% and a stop setting of 0%.
However, these low start and stop settings increase the chance that data that is
needed for a host computer read will not be in the cache, decreasing the cache-hit
percentage and, therefore, the I/O request rate. It also increases the number of disk
writes necessary to maintain the cache level, increasing system overhead and
further decreasing performance.
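The start and stop percentages behave as a simple threshold rule, sketched below (a toy model for illustration; the real flushing is continuous and internal to the controller firmware):

```python
# Simulate the cache-flush start/stop thresholds described above. The
# defaults of 80% start and 20% stop come from the text; the one-step
# flush is a simplification of the controller's continuous behavior.
def flush_cache(unwritten_pct, start=80, stop=20):
    """Return the unwritten-data level after the flush rule runs.

    Flushing begins only once unwritten data reaches the start
    percentage, then continues until the cache is down to the stop
    percentage.
    """
    if unwritten_pct < start:
        return unwritten_pct   # below the start threshold: no flush occurs
    return stop                # flush runs down to the stop threshold

print(flush_cache(85))                    # 20: flushed to the stop level
print(flush_cache(50))                    # 50: no flush triggered
print(flush_cache(30, start=25, stop=0))  # 0: safer, but lower cache-hit rate
```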
If a power outage occurs, data in the cache that is not written to the logical drive is
lost, even if it is mirrored to the cache memory of both controllers. Therefore, there
are batteries in the controller enclosure that protect the cache against power
outages. The controller battery backup CRU change interval is three years from the
date that the backup battery CRU was installed for all models of the following
DS4000 Storage Subsystems only: FAStT200, FAStT500, DS4100, DS4300,
DS4400, and DS4500. There is no replacement interval for the cache battery
backup CRU in other DS4000 Storage Subsystems. The storage management
software features a battery-age clock that you can set when you replace a battery.
This clock keeps track of the age of the battery (in days) so that you know when it
is time to replace the battery.
Note: For the FAStT200, DS4100, and DS4300 or DS4300 Turbo disk systems, the
battery CRU is located inside each controller CRU. For the DS4800, the
battery CRUs are located in the interconnect-batteries CRU.
Write caching is disabled when batteries are low or discharged. If you enable a
parameter called write-caching without batteries on a logical drive, write caching
continues even when the batteries in the controller enclosure are removed.
Attention: For maximum data integrity, do not enable the write-caching without
batteries parameter, because data in the cache is lost during a power outage if the
controller enclosure does not have working batteries. Instead, contact IBM service
to get a battery replacement as soon as possible to minimize the time that the
subsystem is operating with write-caching disabled.
The hot-spare drive adds another level of redundancy to the storage subsystem. If
a drive fails in the storage subsystem, the hot-spare drive is automatically
substituted without requiring a physical swap. If the hot-spare drive is available
when a drive fails, the controller uses redundancy data to reconstruct the
data from the failed drive onto the hot-spare drive. When you have physically
replaced the failed drive, the data from the hot-spare drive is copied back to
the replacement drive. This is called copyback.
Media scan
A media scan is a background process that runs on all logical drives in the storage
subsystem for which it is enabled, providing error detection on the drive media.
Media scan checks the physical disks for defects by reading the raw data from the
disk and, if there are errors, writing it back. The advantage of enabling media scan
is that the process can find media errors before they disrupt normal logical-drive
read and write functions. The media scan process scans all logical-drive data to
verify that it is accessible.
Note: The background media scan operation does not scan hot-spare or unused
optimal hard drives (those that are not part of a defined logical drive) in a
DS4000 Storage Subsystem configuration. To perform a media scan on
hot-spare or unused optimal hard drives, you must convert them to logical
drives at certain scheduled intervals and then revert them back to their
hot-spare or unused states after you scan them.
When enabled, the media scan runs on all logical drives in the storage subsystem
that meet the following conditions:
v The logical drive is in an optimal status
v There are no modification operations in progress
v The Media Scan parameter is enabled
Note: The media scan must be enabled for the entire storage subsystem and
enabled on each logical drive within the storage subsystem to protect the
logical drive from failure due to media errors.
Media scan only reads data stripes, unless there is a problem. When a block in the
stripe cannot be read, the read command is retried a certain number of times. If the
read continues to fail, the controller calculates what that block should be and issues
a write-with-verify command on the stripe. As the disk attempts to complete the
write command, if the block cannot be written, the drive reallocates sectors until the
data can be written. Then the drive reports a successful write and Media Scan
checks it with another read. There should not be any additional problems with the
stripe. If there are additional problems, the process repeats until there is a
successful write, or until the drive is failed due to many consecutive write failures
and a hot-spare drive takes over. Repairs are made only on successful writes, and
the drives are responsible for the repairs. The controller issues only
write-with-verify commands. Therefore, data stripes can be read repeatedly and
report bad sectors, but the controller recalculates the missing information with RAID.
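The retry-and-repair sequence described above can be sketched as follows. The Drive stub and the retry count of three (taken from the later note that media scan makes three attempts to read bad blocks) are illustrative assumptions, not the controller's actual implementation:

```python
# Sketch of the media-scan repair sequence. The Drive stub models a disk
# with a set of unreadable blocks; write_with_verify stands in for the
# drive's sector-reallocation behavior described in the text.
class Drive:
    def __init__(self, bad_blocks):
        self.bad_blocks = set(bad_blocks)

    def read(self, lba):
        return lba not in self.bad_blocks

    def reconstruct_from_raid(self, lba):
        return b"rebuilt"             # parity lets the controller recompute data

    def write_with_verify(self, lba, data):
        self.bad_blocks.discard(lba)  # the drive reallocates sectors as needed

def scan_block(drive, lba, retries=3):
    """Return True if the block is readable, or was repaired by a
    write-with-verify after the retried reads failed."""
    for _ in range(retries):
        if drive.read(lba):
            return True
    data = drive.reconstruct_from_raid(lba)
    drive.write_with_verify(lba, data)
    return drive.read(lba)            # Media Scan checks the repair with a read

drive = Drive(bad_blocks=[42])
print(scan_block(drive, 42))  # True: block repaired via write-with-verify
print(scan_block(drive, 7))   # True: block was readable all along
```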
In a DS4000 dual controller storage subsystem, there are two controllers handling
I/O (Controllers A and B). Each logical drive that you create has a preferred
controller which normally handles I/O for it. If a controller fails, the I/O for logical
drives “owned” by the failed controller fails over to the other controller. Media scan
I/O is not impacted by a controller failure and scanning continues on all applicable
logical drives when there is only one remaining active controller.
If a drive fails during the media scan process due to errors, normal
reconstruction tasks are initiated in the controller's operating system, and Media
Scan attempts to rebuild the array using a hot-spare drive. While this reconstruction
process occurs, no more media scan processing is done on that particular array.
Note: Because additional I/O reads are generated for media scanning, there might
be a performance impact depending on the following factors:
v The amount of configured storage capacity in the DS4000 Storage
Subsystem.
The greater the amount of configured storage capacity in the DS4000
storage subsystem, the higher the performance impact is.
v The configured scan duration for the media scan operations.
The longer the scan, the lower the performance impact is.
v The status of the redundancy check option (enabled or disabled).
If redundancy check is enabled, the performance impact is higher due to
the need to read the data and recalculate the redundancy information.
Note: Media scan makes three attempts to read the bad blocks.
Redundancy mismatches Redundancy errors are found.
Note: This error could occur only when the optional redundancy
checkbox is enabled, when the media scan feature is enabled,
and the logical drive or array is not RAID-0.
Unfixable error The data could not be read and parity or redundancy information
could not be used to regenerate it. For example, redundancy
information cannot be used to reconstruct data on a degraded
logical drive.
Note: With redundancy check, media scan goes through the same process as
without redundancy check, but, in addition, the parity block is recalculated
and verified. If the parity has data errors, the parity is rewritten. The
recalculation and comparison of the parity data requires additional I/O
which can affect performance.
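The redundancy check described above can be illustrated with a small sketch. The following Python example is illustrative only: the function names and block sizes are invented, and the DS4000 controllers implement this check in firmware. It recalculates RAID-5-style XOR parity for a stripe and compares it with the stored parity block, returning a corrected parity when a mismatch is found:

```python
from functools import reduce

def xor_parity(data_blocks):
    """Compute RAID-5 style parity as the bytewise XOR of the data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*data_blocks))

def redundancy_check(data_blocks, stored_parity):
    """Recalculate parity from the data blocks and compare it with the stored
    parity; on a mismatch, return the recalculated parity to be rewritten."""
    recalculated = xor_parity(data_blocks)
    if recalculated != stored_parity:
        # Redundancy mismatch: the parity block must be rewritten.
        return False, recalculated
    return True, stored_parity

# Example stripe: three data blocks plus one parity block.
data = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]
parity = xor_parity(data)
ok, _ = redundancy_check(data, parity)          # parity matches
bad, fixed = redundancy_check(data, bytes(3))   # simulated mismatch
```

The extra read of every data block plus the XOR recalculation is exactly the additional I/O that makes redundancy check more expensive than a plain media scan.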
Important: Changes to the media settings will not go into effect until the current
media scan cycle completes.
To change the media scan settings for the entire storage subsystem, perform the
following steps:
1. Select the storage subsystem entry in the Logical/Physical view of the
Subsystem Management window.
2. Click Storage Subsystem > Change > Media Scan Settings.
To change the media scan settings for a given logical drive, perform the following
steps:
1. Select the logical drive entry in the Logical/Physical view of the Subsystem
Management window.
2. Click Logical Drive > Change > Media Scan Settings.
Whenever the storage subsystem has some idle time, it starts or continues media
scanning operations. If application-generated disk I/O is received, it gets
priority. Therefore, the media scan process can slow down, speed up, or in some
cases be suspended as the workload changes. If a storage subsystem receives
a great deal of application-generated disk I/O, it is possible for the media scan to
fall behind in its scanning. As the storage subsystem gets closer to the end of the
duration window during which it should finish the media scan, the background
scan process increases in priority (that is, more time is dedicated to the media
scan process). This increase in priority is limited, however, because the first
priority of the DS4000 Storage Subsystem is to process application-generated disk
I/O. In this case, it is possible that the media scan will take longer than the
configured media scan duration.
Note: If you change the media scan duration setting, the changes will not take
effect until the current media scan cycle completes or the controller is reset.
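As a rough illustration of how scan priority rises as the duration window runs out, the following sketch models the behavior described above. All names, the priority scale, and the cap are hypothetical; the actual scheduling is internal to the controller firmware:

```python
def media_scan_priority(elapsed_days, duration_days, fraction_scanned,
                        max_priority=0.3):
    """Model of the background-scan priority described above: as the scan
    falls behind its duration window, its share of controller time rises,
    but it is capped so that application I/O always takes precedence.
    The names and the 0.3 cap are illustrative, not DS4000 internals."""
    expected = elapsed_days / duration_days          # where the scan "should" be
    behind = max(0.0, expected - fraction_scanned)   # how far behind schedule
    return min(max_priority, behind)                 # never exceeds the cap

# Halfway through a 30-day window with only 25% scanned: a modest boost.
p = media_scan_priority(elapsed_days=15, duration_days=30, fraction_scanned=0.25)
```

Because the priority is capped, a heavily loaded subsystem can still exceed its configured scan duration, which matches the behavior described in the paragraph above.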
Table 15 lists the restrictions that apply to the copy service features.
Table 15. Restrictions to copy services premium feature support

The features listed for each controller firmware version are not supported on
that version.

v DS4800
  – Controller firmware 5.3x.xx.xx: N/A
  – Controller firmware 5.4x.xx.xx: N/A
  – Controller firmware 6.1x.xx.xx: None. (DS4800 is supported in the
    controller firmware 6.1x.xx.xx code thread starting at version 06.14.xx.xx.)
v DS4700
  – Controller firmware 5.3x.xx.xx: N/A
  – Controller firmware 5.4x.xx.xx: N/A
  – Controller firmware 6.1x.xx.xx: None. (DS4700 is supported in the
    controller firmware 6.1x.xx.xx code thread starting at version 06.16.82.xx.)
v DS4200
  – Controller firmware 5.3x.xx.xx: N/A
  – Controller firmware 5.4x.xx.xx: N/A
  – Controller firmware 6.1x.xx.xx: None. (DS4200 is supported in the
    controller firmware 6.1x.xx.xx code thread starting at version 06.16.88.xx.)
v DS4100
  – Controller firmware 5.3x.xx.xx: N/A
  – Controller firmware 5.4x.xx.xx: Enhanced Remote Mirroring option,
    VolumeCopy
  – Controller firmware 6.1x.xx.xx: VolumeCopy. (DS4100 base is supported in
    the controller firmware 6.1x.xx.xx code thread starting at version
    06.12.xx.xx.)
v DS4100 SCU
  – Controller firmware 5.3x.xx.xx: N/A
  – Controller firmware 5.4x.xx.xx: N/A
  – Controller firmware 6.1x.xx.xx: N/A
v DS4300
  – Controller firmware 5.3x.xx.xx: Enhanced Remote Mirroring option,
    FlashCopy, VolumeCopy
  – Controller firmware 5.4x.xx.xx: Enhanced Remote Mirroring option
  – Controller firmware 6.1x.xx.xx: Enhanced Remote Mirroring option. (DS4300
    base is supported in the controller firmware 6.1x.xx.xx code thread
    starting at version 06.12.xx.xx.)
v DS4300 SCU
  – Controller firmware 5.3x.xx.xx: Enhanced Remote Mirroring option,
    VolumeCopy
  – Controller firmware 5.4x.xx.xx: N/A
  – Controller firmware 6.1x.xx.xx: N/A
v DS4300 Turbo
  – Controller firmware 5.3x.xx.xx: Enhanced Remote Mirroring option,
    VolumeCopy
  – Controller firmware 5.4x.xx.xx: Enhanced Remote Mirroring option
  – Controller firmware 6.1x.xx.xx: None. (DS4300 Turbo is supported in the
    controller firmware 6.1x.xx.xx code thread starting at version 06.10.xx.xx.)
v DS4400
  – Controller firmware 5.3x.xx.xx: VolumeCopy
  – Controller firmware 5.4x.xx.xx: None
  – Controller firmware 6.1x.xx.xx: None. (Controller firmware 05.3x.xx.xx and
    05.4x.xx.xx support the first version of Remote Mirroring rather than the
    second version that is supported in controller firmware 06.1x.xx.xx.)
v DS4500
  – Controller firmware 5.3x.xx.xx: VolumeCopy
  – Controller firmware 5.4x.xx.xx: None
  – Controller firmware 6.1x.xx.xx: None. (Controller firmware 05.3x.xx.xx and
    05.4x.xx.xx support the first version of Remote Mirroring rather than the
    second version that is supported in controller firmware 06.1x.xx.xx.)
v FAStT200
  – Controller firmware 5.3x.xx.xx: Enhanced Remote Mirroring option,
    VolumeCopy
  – Controller firmware 5.4x.xx.xx: N/A
  – Controller firmware 6.1x.xx.xx: N/A
Note: The VolumeCopy feature is not available on Storage Manager 8.3 and
earlier.
FlashCopy
Use FlashCopy to create and manage FlashCopy logical drives. A FlashCopy
logical drive is a point-in-time image of a standard logical drive in your storage
subsystem. The logical drive that is copied is called a base logical drive.
When you make a FlashCopy, the controller suspends writes to the base logical
drive for a few seconds while it creates a FlashCopy repository logical drive. This is
a physical logical drive where FlashCopy metadata and copy-on-write data are
stored.
You can create up to four FlashCopy logical drives of a base logical drive and then
write data to the FlashCopy logical drives to perform testing and analysis. For
example, before upgrading a database management system, you can use
FlashCopy logical drives to test different configurations. You can disable the
FlashCopy when you are finished with it, for example after a backup completes.
Then you can re-create the FlashCopy the next time you do a backup and reuse
the same FlashCopy repository logical drive.
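The copy-on-write behavior described above can be sketched as follows. This is a conceptual model only; the class and method names are invented and do not reflect the DS4000 on-disk repository format:

```python
class FlashCopySketch:
    """Conceptual sketch of a point-in-time copy-on-write image: before a
    base block is overwritten, its original contents are preserved in a
    repository, so the image always reads as the base did at creation time."""

    def __init__(self, base):
        self.base = base          # dict: block number -> data
        self.repository = {}      # copy-on-write data saved at first overwrite

    def write_base(self, block, data):
        # Preserve the point-in-time contents on the first write to a block.
        if block not in self.repository:
            self.repository[block] = self.base.get(block)
        self.base[block] = data

    def read_flashcopy(self, block):
        # The image is the repository copy if one exists, else the base.
        if block in self.repository:
            return self.repository[block]
        return self.base.get(block)

base = {0: "A", 1: "B"}
snap = FlashCopySketch(base)
snap.write_base(0, "A2")      # the base changes after the snapshot is taken
```

Only blocks that are actually overwritten consume repository space, which is why a FlashCopy repository can be much smaller than its base logical drive.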
For operating-system specific information and instructions for using FlashCopy, see
the IBM TotalStorage DS4000 Storage Manager Version 9 Copy Services User’s
Guide or the FlashCopy online help.
VolumeCopy
The VolumeCopy feature is a premium feature that comes with the DS4000 Storage
Manager 9.1x software and is enabled by purchasing a premium feature key.
VolumeCopy is used with FlashCopy and, therefore, it can be purchased together
with FlashCopy as a single copy service option, or at a later time as an
enhancement to FlashCopy. The VolumeCopy feature is a firmware-based
mechanism that is used to copy data from one logical drive (the source logical
drive) to another logical drive (the target logical drive) in a single storage
subsystem. This feature can be used to perform the following tasks:
v Copy data from arrays that use smaller capacity drives to arrays that use larger
capacity drives
v Back up data
v Restore FlashCopy logical drive data to the base logical drive
This feature includes a Create Copy wizard that you can use to create a logical
drive copy, and a Copy Manager that you can use to monitor logical drive copies
after they have been created.
Backing up data
The VolumeCopy feature allows you to create a backup of a logical drive by
copying data from one logical drive to another logical drive in the same storage
subsystem. The target logical drive can be used as a backup for the source logical
drive, for system testing, or to back up to another device, such as a tape drive.
Attention: If the logical drive that you want to copy is used in a production
environment, the FlashCopy feature must be enabled. A FlashCopy of the logical
drive must be created and then specified as the VolumeCopy source logical drive,
instead of using the actual logical drive itself. This requirement allows the original
logical drive to continue to be accessible during the VolumeCopy operation.
For more information about VolumeCopy, see the IBM TotalStorage DS4000
Storage Manager Version 9 Copy Services User’s Guide.
The Enhanced Remote Mirroring option is a premium feature that comes with the
IBM DS4000 Storage Manager software and is enabled by purchasing a premium
feature key.
Read access to mirror secondary logical drives
This feature allows direct host read access as well as creation of
FlashCopy logical drives on mirror secondary logical drives. Read and write
access is allowed to FlashCopies of the secondary logical drive.
Enhanced Remote Mirroring diagnostics
There are three new diagnostic services now offered with the Enhanced
Remote Mirroring option:
v First, the mirror creation process is improved to provide explicit return
status for failed mirror creation requests.
v Second, an inter-subsystem communication diagnostic allows the user to
test connectivity between two subsystems after a mirror relationship is in
place.
v Third, a new feature also included in this release provides RLS data for
host ports. This data can be used to isolate and diagnose intermittent
connections at the Fibre Channel level.
Increased number of mirror relationships per subsystem
Storage Manager 9.1x offers 64 mirror relationships per subsystem.
However, the increased number of mirrors requires additional logging
resources in the mirror repository logical drives. This release creates larger
logical drives to accommodate the additional resources, but if smaller
repositories exist, the number of mirrors is limited to 32. You can expand your
existing repository logical drives so that they can handle 64 mirror relationships.
Resynchronization methods
Two resynchronization methods are available in the current release of the
storage management software: Manual Resynchronization, which is the
recommended method, and Automatic Resynchronization. Selecting the
Manual Resynchronization option allows you to manage the
resynchronization process in a way that provides the best opportunity for
recovering data.
If a link interruption occurs and prevents communication between the
primary logical drive and secondary logical drive in a remote mirror pair, the
data on the logical drives might no longer be mirrored correctly. When
connectivity is restored between the primary logical drive and secondary
logical drive, a resynchronization takes place either automatically or needs
to be started manually. During the resynchronization, only the blocks of data
that have changed on the primary logical drive during the link interruption
are copied to the secondary logical drive.
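The changed-block resynchronization can be sketched like this. It is a conceptual model only; the dirty-block set stands in for the controller's internal change log, and all names are invented:

```python
def resynchronize(primary, secondary, dirty_blocks):
    """Delta resynchronization sketch: after a link interruption, only the
    blocks flagged as changed on the primary logical drive are copied to the
    secondary (illustrative model of the behavior described above)."""
    copied = 0
    for block in sorted(dirty_blocks):
        secondary[block] = primary[block]
        copied += 1
    dirty_blocks.clear()   # the change log is cleared once the mirror is in sync
    return copied

primary = {0: "a", 1: "b", 2: "c"}
secondary = {0: "a", 1: "old", 2: "old"}
n = resynchronize(primary, secondary, {1, 2})   # only blocks 1 and 2 move
```

Copying only the changed blocks, rather than performing another full synchronization, is what keeps recovery after a short link interruption fast.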
FlashCopy logical drive enhancement
When creating a FlashCopy in conjunction with Enhanced Remote
Mirroring, you are now permitted to base the FlashCopy logical drive on the
primary logical drive or secondary logical drive of a remote mirror
configuration. This enhancement allows the secondary logical drive to be backed
up through its FlashCopy image.
Note: There is a limit to how many logical drives you can create in a single storage
subsystem. When the Enhanced Remote Mirroring option is enabled, the
total number of logical drives that are supported for each storage subsystem
is reduced by two from the number of logical drives that you would have
without the Enhanced Remote Mirroring option enabled.
Primary logical drives: The primary logical drive is the drive that accepts host
computer I/O operations and stores program data. When the mirror relationship is
first created, data from the primary logical drive is copied (becomes a mirror image)
in its entirety to the secondary logical drive. This process is known as a full
synchronization and is directed by the controller owner of the primary logical drive.
During a full synchronization, the primary logical drive remains fully accessible for
all normal I/O operations.
Secondary logical drives: The secondary logical drive stores the data that is
copied from the primary logical drive associated with it. The controller owner of the
secondary logical drive receives remote writes from the controller owner of the
primary logical drive and does not accept host computer write requests.
The new remote mirror option allows the host server to issue read requests to the
secondary logical drive.
Note: The host server must have the ability to mount the file system as read-only
in order to properly mount and issue read requests to the data in the
secondary logical drive.
When you activate the Enhanced Remote Mirroring option on the storage
subsystem, the system creates two mirror repository logical drives, one for each
controller in the storage subsystem. An individual mirror repository logical drive is
not needed for each mirror logical drive pair.
When you create the mirror repository logical drives, you specify their location. You
can either use existing free capacity or you can create an array for the logical
drives from unconfigured capacity and then specify the RAID level.
Because of the critical nature of the data that is stored, the RAID level of mirror
repository logical drives must be non-zero. The required size is 128 MB for each
mirror repository logical drive (256 MB total). If you are upgrading
from the previous version of the Enhanced Remote Mirroring option, you must
upgrade the size of the repository logical drive from 4 MB to 128 MB in order to
support a maximum of 64 remote mirror pairs. Only a maximum of 32 remote mirror
pairs is supported with the 4 MB repository logical drive.
Write modes
When a write request is made to the primary logical drive, the controller owner of
the primary logical drive also initiates a remote write request to the secondary
logical drive. The timing of the write I/O completion indication that is sent back to
the host depends on the write mode option that is selected.
Asynchronous write mode, which is a new remote mirroring feature, allows the
primary-side controller to return the write I/O request completion to the host server
before data has been successfully written to the secondary-side controller.
Synchronous write mode, also known as Metro Mirroring, requires that all data has
been successfully written to the secondary-side controller before the primary-side
controller returns the write I/O request completion to the host server.
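The difference between the two write modes can be sketched as follows. The function names and the pending queue are invented; the sketch models only when completion is reported relative to the remote write, not the controllers' actual protocol:

```python
def mirrored_write(block, data, primary, secondary, pending, synchronous):
    """Sketch of the two write modes described above (illustrative, not the
    controller implementation). Synchronous mode applies the remote write
    before reporting completion; asynchronous mode queues it for later."""
    primary[block] = data
    if synchronous:
        secondary[block] = data           # remote write acknowledged first
    else:
        pending.append((block, data))     # completion is returned immediately
    return "complete"

def drain(pending, secondary):
    """Apply queued asynchronous writes to the secondary, in order."""
    while pending:
        block, data = pending.pop(0)
        secondary[block] = data

primary, secondary, pending = {}, {}, []
mirrored_write(0, "x", primary, secondary, pending, synchronous=True)
mirrored_write(1, "y", primary, secondary, pending, synchronous=False)
# After the asynchronous write returns, block 1 is not yet on the secondary.
drain(pending, secondary)
```

The sketch makes the trade-off visible: asynchronous mode gives the host lower write latency, at the cost of a window in which the secondary lags the primary.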
Mirror relationships
Before you define a mirror relationship, the Enhanced Remote Mirroring option must
be enabled on both the primary and secondary storage subsystems. A secondary
standard logical drive candidate (a logical drive that is intended to become one of a
mirrored pair) must be created on the secondary storage subsystem if one does not
already exist. It must be a standard logical drive and at least the same size as or
larger than the primary logical drive.
When secondary logical drive candidates are available, you can define a mirror
relationship in the storage management software by identifying the storage
subsystem that contains the primary logical drive and the storage subsystem that
contains the secondary logical drive.
When you set up the mirror relationship, a full synchronization occurs as data from
the primary logical drive is copied in its entirety to the secondary logical drive.
For more information on the Enhanced Remote Mirroring option, see the IBM
TotalStorage DS4000 Storage Manager Version 9 Copy Services User’s Guide.
The Persistent Reservations option enables you to view and clear volume
reservations and associated registrations. Persistent reservations are configured
and managed through the cluster server software, and prevent other hosts from
accessing particular volumes.
You can also manage persistent reservations through the script engine and the
command line interface. For more information, see the Enterprise Management
Window online help.
After you have set the password for each storage subsystem, you are prompted for
that password the first time that you attempt a destructive operation in the
Subsystem Management window. You are asked for the password only once during
a single management session.
Important: There is no way to change the password once it is set. Ensure that the
password information is kept in a safe and accessible place. Contact
IBM technical support for help if you forget the password to the storage
subsystem.
Chapter 3. Configuring storage subsystems
This chapter describes the storage subsystem configuration options that you can
use to maximize data availability. It also outlines the high-level steps to configure
available storage subsystem capacity into logical drives and storage partitions.
Beginning with Storage Manager 9.12 and later versions, in conjunction with
controller firmware 06.12 and later, there are Task Wizards in the Enterprise
Management and Subsystems Management windows that will guide you through
most of the common DS4000 Storage Subsystem management tasks.
A logical drive is a logical structure that you create on a storage subsystem for data
storage. A logical drive is defined by a set of physical drives called an array, which
has a defined RAID level and capacity. You can define logical drives from either
unconfigured capacity nodes or free capacity nodes in the storage subsystem from
the Subsystem Management window. See Figure 8.
If you have not configured any logical drives on the storage subsystem, the only
node that is available is the unconfigured capacity node.
When you create logical drives from unconfigured capacity, array candidates are
shown in the Create Logical Drive pull-down menu of the Subsystem Management
window, along with information about whether each array candidate has channel
protection. In a SCSI environment, channel protection depends on the RAID level
of the logical drive and how many logical drives are present on any single drive
channel.
Storage partitioning
You can use the Storage Partitions feature of the Storage Manager software to
consolidate logical drives into sets called storage partitions. You grant visibility of
partitions to defined host computers or a defined set of hosts called a host group.
Storage partitions enable host computers to share storage capacity. Storage
partitions consolidate storage and reduce storage management costs.
For procedures that describe how to create storage partitions and host groups, see
the IBM TotalStorage DS4000 Storage Manager 9 Installation and Support Guide
for your operating system. For more detailed information about storage partitions,
see the Subsystem Management window online help.
Switch zoning
You might need to configure switch zoning before you create storage partitions.
Switch zoning is a SAN partitioning method that controls the traffic that runs through
a storage networking device, or switch. When you create zones on the switch, the
ports outside of a zone are invisible to ports within the zone. In addition, traffic
within each zone can be physically isolated from traffic outside the zone.
You can find more information about switch zoning in the IBM TotalStorage DS4000
Storage Manager Installation and Support Guide for your operating system.
Table 16. Storage partitioning terminology (continued)
Term Description
Host port Host ports physically reside on the host adapters and are
automatically discovered by the Storage Manager
software. To give a host computer access to a partition,
you must define its associated host ports.
You can use storage partitioning to enable access to logical drives by designated
host computers in a host group or by a single host computer. A storage partition is
created when a collection of host computers (a host group) or a single host
computer is associated with a logical drive-to-LUN mapping. The mapping defines
which host group or host computer can access a particular logical drive in a storage
subsystem. Host computers and host groups can access data only through
assigned logical drive-to-LUN mappings.
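The mapping rule described above can be sketched as a simple lookup. The host names, group names, and logical drive names here are hypothetical, and the dictionaries stand in for the subsystem's mapping tables:

```python
def accessible_logical_drive(mappings, host, host_groups, lun):
    """Sketch of the logical drive-to-LUN access rule described above: a host
    can reach a logical drive only through a mapping defined for the host
    itself or for its host group. All names are illustrative."""
    group = host_groups.get(host)
    for owner in (host, group):
        if owner is not None and (owner, lun) in mappings:
            return mappings[(owner, lun)]
    return None   # no mapping: the logical drive is not visible to this host

host_groups = {"web1": "WebCluster", "db1": "DbCluster"}
mappings = {("WebCluster", 0): "LogicalDrive_Web",
            ("db1", 0): "LogicalDrive_Db"}
drive = accessible_logical_drive(mappings, "web1", host_groups, 0)
```

A host with no applicable mapping simply never sees the logical drive, which is how storage partitioning keeps partitions invisible to non-member hosts.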
Note: DS4000 controller firmware versions 04.00.xx.xx and earlier allow only host
computers that were running the same operating system to access a single
storage subsystem.
Host computers can run different operating systems (for example, Sun Solaris and
Windows 2000) or variants of the same operating system (for example, Windows
2000 running in a cluster environment or Windows 2000 running in a non-cluster
environment). When you specify a host computer type in the Define New Host Port
window, the Heterogeneous Hosts feature enables the controllers in the storage
subsystem to tailor their behavior (such as LUN reporting and error conditions) to
the needs of the operating system or variant of the host computer that is sending
the information. For detailed information about defining heterogeneous host
computer types, see the Subsystem Management window online help.
Note: To receive critical alerts, the Enterprise Management window must be open
(it can be minimized), or the Event Monitor must be installed and running.
To open the Task Assistant, choose View > Task Assistant from either the
Enterprise Management window or the Subsystem Management window, or click
the Task Assistant button in the toolbar:
Figure 9. The task assistant in the Enterprise Management window
Note: The Task Assistant is automatically invoked every time you open the
Subsystem Management window unless you check the Don’t show the task
assistant at start-up again check box at the bottom of the window.
Figure 11. Monitoring storage subsystem health using the Enterprise Management window
Table 17. Storage subsystem status icon quick reference (continued)
Icon Status Description
Unresponsive An Unresponsive status indicates that the management station
cannot communicate with the controller or controllers in the
storage subsystem over its network management connection.
Failure notification
When you monitor a storage subsystem, there are several indicators that show that
the storage subsystem has failed. The following list describes the various indicators:
v The Subsystem Management window displays the Needs Attention icon in the
following locations:
– The Overall Health Status pane, Device Tree view, or Device Table of the
Enterprise Management window
– The Subsystem Management window Logical view
– Individual storage subsystems in the Enterprise Management window
v The Recovery Guru button in the Subsystem Management window changes
from Optimal to Needs Attention status and flashes.
v Non-optimal component icons are displayed in the Subsystem Management
window Logical view and Physical view.
v Critical SNMP trap or e-mail error messages are sent.
v The hardware displays fault lights.
Failure notification
You might receive failure notifications about your storage subsystem at the network
management station or in e-mail. Hardware fault lights display on the affected
controller and storage expansion enclosures.
Note: For Storage Manager 8.3 and later, you can perform ESM and drive firmware
downloads by using the Advanced menu in the Subsystem Management window of
the SMclient.
Important: The following sections include information that is useful to know before
you download your firmware and NVSRAM. These sections do not include
procedures for downloading the firmware and NVSRAM. For detailed instructions on
the firmware and NVSRAM downloading procedures, see the IBM TotalStorage
DS4000 Storage Manager Installation and Support Guide for your operating system.
Attention:
1. IBM supports firmware download with I/O, sometimes referred to as “concurrent
firmware download.” Before proceeding with concurrent firmware download,
check the readme file packaged with the firmware code or your particular host
operating system’s DS4000 Storage Manager host software for any restrictions
to this support. See “Storage Manager documentation and readme files” on
page 1 for instructions that describe how to find the readme files online.
2. Suspend all I/O activity while downloading firmware and NVSRAM to a DS4000
Storage Subsystem that has only a single controller; otherwise, you will not have
redundant controller connections between the host server and the DS4000
Storage Subsystem.
Note: The traditional download process takes significantly longer and must be done
in one phase, rather than in two phases as with the staged controller
firmware download. Therefore, the staged controller firmware download,
which is described in “The staged controller firmware download feature” on
page 81, is the preferred method.
The staged controller firmware download feature
Storage Manager 9.1x, in conjunction with controller firmware version 06.1x.xx.xx or
later, offers a new feature in addition to the traditional controller firmware download
called the staged controller firmware download.
The staged controller firmware download feature separates firmware loading and
firmware activation into two separately executable steps. You can perform the
time-consuming task of loading the firmware online so that it is functionally
transparent to the application. You can then defer the activation of the loaded
firmware to a convenient time. Controller firmware or NVSRAM packages can be
downloaded from the storage management software to all storage subsystem
controllers. This feature allows you to perform the following actions:
v Controller firmware download only with immediate activation
v NVSRAM download with immediate activation
v Controller firmware download and, optionally, NVSRAM download with the option
to activate both later
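The two-step process can be sketched as a small state machine. The class name, method names, and version strings are illustrative only; the real staged download is performed by the controller firmware:

```python
class StagedFirmwareDownload:
    """Sketch of the two-step staged download described above: load the
    firmware image online, then activate it later at a convenient time."""

    def __init__(self):
        self.running = "06.10.xx.xx"   # currently active firmware (example)
        self.staged = None             # loaded but not yet activated

    def load(self, image):
        # The time-consuming transfer; transparent to application I/O.
        self.staged = image

    def activate(self):
        # The deferred, disruptive step; requires a loaded image.
        if self.staged is None:
            raise RuntimeError("no firmware image has been loaded")
        self.running, self.staged = self.staged, None

ctl = StagedFirmwareDownload()
ctl.load("06.23.xx.xx")   # step 1: download now, defer activation
ctl.activate()            # step 2: activate at a convenient time
```

Separating the two steps is what lets you keep the long transfer outside the maintenance window and schedule only the short activation.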
Downloading NVSRAM
There are two methods for downloading the NVSRAM, from a firmware image or
from a standalone image. The following sections describe the two methods.
Attention: Note the following considerations before you download the drive
firmware:
v The drive firmware files for various Fibre Channel hard drive types are not
compatible with each other. Ensure that the firmware that you download to the
drives is compatible with the drives that you select. If incompatible firmware is
downloaded, the selected drives might become unusable, which will cause the
logical drive to be in a degraded or even failed state.
v The drive firmware update must be performed without making any host I/O
operations to the logical drives that are defined in the storage subsystem.
Otherwise, it could cause the firmware download to fail and make the drive
unusable, which could lead to loss of data availability.
v Do not make any configuration changes to the storage subsystem while
downloading drive firmware or it could cause the firmware download to fail and
make the selected drives unusable.
v If you download the drive firmware incorrectly, it could result in damage to the
drives or loss of data.
With parallel drive firmware download, a drive firmware image is sent to the
controller with a list of drives to update. The controller issues download commands
to multiple drives simultaneously. The controller still blocks all I/O access to all
logical drives on the subsystem during the download sequence, but the overall
downtime is significantly reduced because multiple drives can be updated
concurrently.
The following list includes some restrictions and limitations of the parallel drive
firmware download feature:
v The maximum number of packages that can be downloaded simultaneously is
four.
v The maximum number of drives allowed in one download list is equal to the
maximum number of drives that are supported by the storage subsystem.
v A drive cannot be associated with more than one download package in any
download command.
v The download of an unpackaged file is not supported.
With Storage Manager 9.1x and controller firmware 05.4x.xx.xx or higher, you can
update the ESM firmware while host I/O operations are made to the logical drives
that are defined in the storage subsystem. You can only do this if, in the ESM
firmware download window, you select and download to one storage expansion
enclosure at a time. The ESM firmware version must be the same in all of the Fibre
Channel storage expansion enclosures, and it must be of the same type in a given
DS4000 Storage Subsystem configuration.
For example, if the DS4000 Storage Subsystem has three EXP810 Fibre Channel
storage expansion enclosures and two EXP710 Fibre Channel storage expansion
enclosures, the firmware of all the ESMs in the two EXP710 Fibre Channel storage
expansion enclosures must be the same and the firmware of all ESMs in the three
EXP810 Fibre Channel storage expansion enclosures must be the same.
The ESM code for one model (for example, the EXP810) Fibre Channel storage
expansion enclosure is not compatible with a different model (for example, EXP710)
Fibre Channel storage expansion enclosure.
Before you begin to download the ESM firmware, consider the following points:
v The IBM Fibre Channel storage expansion enclosures must be connected
together in an IBM supported storage expansion enclosure Fibre Channel
connection scheme.
v Both of the ESMs in each of the storage expansion enclosures must be
connected in dual redundant drive loops.
v Use SMclient to check for any loss of redundancy errors in the drive loop and to
make the appropriate corrections before you attempt to download the ESM
firmware.
Automatic ESM firmware synchronization: When you install a new ESM into an
existing storage expansion enclosure in a DS4000 storage subsystem that supports
automatic ESM firmware synchronization, the firmware in the new ESM is
automatically synchronized with the firmware in the existing ESM. This automatically
resolves any ESM firmware mismatch conditions.
To enable automatic ESM firmware synchronization, ensure that your system meets
the following requirements:
Missing logical drives are only displayed in the Logical view if they are standard
logical drives or repository logical drives. In addition, one of the following conditions
must exist:
v The logical drive has an existing logical drive-to-LUN mapping, and drives that
are associated with the logical drive are no longer accessible.
v The logical drive is participating in a remote mirror as either a primary logical
drive or a secondary logical drive, and drives that are associated with the logical
drive are no longer accessible.
v The logical drive is a mirror repository logical drive, and drives that are
associated with the logical drive are no longer accessible. The Recovery Guru
has a special recovery procedure for this case. Two mirror repository logical
drives are created together on the same array when the Global/Metro remote
mirror option feature is activated and one is used for each controller in the
storage subsystem. If drives that are associated with the array are no longer
accessible, then both mirror repository logical drives are missing, and all remote
mirrors are in an unsynchronized state.
v The logical drive is a base logical drive with associated FlashCopy logical drives,
and drives that are associated with the logical drive are no longer accessible.
v The logical drive is a FlashCopy repository logical drive, and drives that are
associated with the logical drive are no longer accessible.
If missing logical drives are detected by the storage subsystem, a Missing Logical
Drives group is created in the Logical view of the Subsystem Management window.
Each missing logical drive is shown and identified by its worldwide name and logical
drive type. Missing logical drives are identified as being one of the following types
of drives:
v A standard logical drive
v A base logical drive
v A FlashCopy repository logical drive
v A primary logical drive
v A secondary logical drive
v A mirror repository logical drive
Missing logical drives, in most cases, are recoverable. Do not delete missing logical
drives without confirming that the logical drives are no longer needed, because they
will be permanently removed from the configuration.
If the storage subsystem detects that logical drives are missing because they have
either been accidentally removed or their storage expansion enclosures have
sustained a power loss, you can recover these logical drives by using either of the
following methods:
v Reinsert the drives back into the storage expansion enclosure.
v Ensure that the power supplies of the storage expansion enclosure are properly
connected to an operating power source and have an optimal status.
Important: To set up alert notifications using SNMP traps, you must copy and
compile a management information base (MIB) file on the designated NMS. See the
Storage Manager installation guide for your operating system for details.
After alert destinations are set, a check mark is displayed in the left pane next
to the management station, host computer, or storage subsystem. When a critical
problem occurs on the storage subsystem, the software sends a notification to the
specified alert destinations.
You can also use the storage management software to validate potential
destination addresses and to specify management-domain global e-mail alert
settings, such as the mail server and sender e-mail address.
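Configuring the mail server and sender address amounts to an ordinary SMTP submission. The sketch below is a hypothetical illustration in Python, not part of Storage Manager; the addresses, subsystem name, and event text are invented:

```python
import smtplib
from email.message import EmailMessage

def build_alert_email(sender, recipient, subsystem, event_text):
    """Assemble a critical-event alert message (hypothetical format)."""
    msg = EmailMessage()
    msg["From"] = sender          # the configured sender e-mail address
    msg["To"] = recipient         # one of the configured alert destinations
    msg["Subject"] = f"Critical event on storage subsystem {subsystem}"
    msg.set_content(event_text)
    return msg

def send_alert(msg, mail_server):
    # Submit through the mail server configured once for all e-mail
    # alert destinations.
    with smtplib.SMTP(mail_server) as smtp:
        smtp.send_message(msg)

msg = build_alert_email("storage-admin@example.com", "oncall@example.com",
                        "DS4500_Finance",
                        "Event 1001 - Channel failed (6/3F/C3)")
```

A real deployment would then call `send_alert(msg, "smtp.example.com")` against the mail server configured in the Enterprise Management window.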
The Event Monitor is a separate program that is bundled with the Storage Manager
client software.
The Event Monitor and SMclient send alerts to a remote system. The emwdata.bin
file on the management station contains the name of the storage subsystem that is
being monitored and the addresses to which alerts are sent. SMclient and the
Event Monitor continually monitor the alerts and errors that occur on the storage
subsystem. The Event Monitor takes over for the client after SMclient is shut
down. When an event is detected, a notification is sent to the remote system.
To install the Event Monitor software, you must have administrative permissions on
the computer where the Event Monitor will reside, and you must install both
SMclient and the Event Monitor software together. After the software is installed, the
Event Monitor icon (shown in Figure 12 on page 88) is displayed in the lower left
corner of the Enterprise Management window.
The e-mail alert destinations will not work unless you also configure a mail server
and sender e-mail address. Click Edit > Configure Mail Server in the Enterprise
Management window. Configure the mail server and sender e-mail address only
one time for all e-mail alert destinations.
Note: If you want to set identical alert destinations on more than one management
station or host computer, you must install the Event Monitor on each system.
Then you can either repeat setting up the alert destinations or copy the
emwdata.bin file from one system to the other. However, be aware that if you
have configured the Event Monitor on multiple systems that will monitor the
same storage subsystem, you will receive duplicate alert notifications for the
same critical problem on that storage subsystem.
If the Event Monitor is configured and running on more than one host computer or
management station that is connected to the storage subsystem, you will receive
duplicate alert notifications for the same critical problem on that storage subsystem.
The Event Monitor and the Enterprise Management window share the information to
send alert messages. The Enterprise Management window displays alert status to
help you install and synchronize the Event Monitor. The parts of the Enterprise
Management window that are related to event monitoring are shown in Figure 12 on
page 88.
When the Event Monitor and the Enterprise Management window are synchronized,
the Synchronization button is unavailable. When a configuration change occurs,
the Synchronization button becomes active. Clicking the Synchronization button
synchronizes the Event Monitor and the Enterprise Management software
components.
Note: The Enterprise Management window and the Event Monitor are automatically
synchronized whenever you close the Enterprise Management window. The
Event Monitor continues to run and send alert notifications as long as the
operating system is running.
For detailed information about setting up alert destinations or about the Enterprise
Management window, see the Enterprise Management window online help.
Recovery Guru
The Recovery Guru is a component of the Subsystem Management window in the
SMclient package. The Recovery Guru diagnoses storage subsystem problems and
suggests recovery procedures to correct the problems. To start the Recovery Guru,
click Recovery Guru in the Subsystem Management window, shown in Figure 13,
or click Storage Subsystem > Recovery Guru.
The Recovery Guru window is shown in Figure 14 on page 90. The Summary pane
shows that there are two different failures in this storage subsystem: a hot spare in
use, and a failed battery CRU.
When you select a failure from the list in the Summary pane, the appropriate
details and a recovery procedure display in the Details pane. For example, the
Recovery Guru window shows that Logical Drive - Hot Spare in Use is selected.
The Details pane shows that in logical drive ‘SWest’, a hot-spare drive has
replaced a failed drive in enclosure 6, slot 9. The Recovery Procedure
pane shows the details about this failure and how to recover from it.
As you follow the recovery procedure to replace the failed drive in the
Subsystem Management window, the associated logical drive (‘SWest’) icon
changes to Operation in Progress, and the replaced drive icon changes to
Replaced Drive. The data that is reconstructed to the hot-spare drive is copied
back to the replaced physical drive. During the copyback operation, the status icon
changes to Replaced, as shown in Figure 15 on page 91.
The drive icon changes from Failed to Replaced status. The logical drive icon
changes from Optimal to Operation in Progress. The hot-spare drive icon remains
in use during the copyback operation.
When the copyback operation is complete, the status icon changes to Optimal, as
shown in Figure 16 on page 92.
Chapter 5. Tuning storage subsystems
The information in this chapter helps you use data from the Performance Monitor.
This chapter also describes the tuning options that are available in Storage
Manager 9.1x for optimizing storage subsystem and application performance. Use
the Subsystem Management window Performance Monitor to monitor storage
subsystem performance in real time and to save performance data to a file for later
analysis. You can specify the logical drives and controllers to monitor and the
polling interval. Also, you can receive storage subsystem totals, which is data that
combines the statistics for both controllers in an active-active controller pair.
Table 18 describes the Performance Monitor data that is displayed for selected
devices.
Table 18. Performance Monitor tuning options in the Subsystem Management window

Total I/Os: Total I/Os performed by this device since the beginning of the
polling session. For more information, see “Balancing the Fibre Channel I/O
load.”

Read percentage: The percentage of total I/Os that are read operations for this
device. Write percentage is calculated as 100 minus this value. For more
information, see “Optimizing the Fibre Channel I/O request rate” on page 94.

Cache-hit percentage: The percentage of read operations that are processed with
data from the cache, rather than requiring a read from the logical drive. For
more information, see “Optimizing the Fibre Channel I/O request rate” on page 94.

Current KB per second: The average transfer rate during the current polling
interval. The transfer rate is the amount of data, in KB, that is moved through
the Fibre Channel I/O path in one second (also called throughput). For more
information, see “Optimizing the I/O transfer rate” on page 94.

Maximum KB per second: The maximum transfer rate that is achieved during the
Performance Monitor polling session. For more information, see “Optimizing the
I/O transfer rate” on page 94.

Current I/O per second: The average number of I/O requests that are serviced per
second during the current polling interval (also called the I/O request rate).
For more information, see “Optimizing the Fibre Channel I/O request rate” on
page 94.

Maximum I/O per second: The maximum number of I/O requests that are serviced
during a one-second interval over the entire polling session. For more
information, see “Optimizing the Fibre Channel I/O request rate” on page 94.
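The arithmetic behind these fields is simple ratios over the polling interval. The following Python sketch is illustrative only (the counter values are invented), not how Storage Manager itself computes its statistics:

```python
def interval_stats(total_ios, read_ios, cache_hits, kb_transferred, interval_s):
    """Derive Performance Monitor-style statistics for one polling interval."""
    read_pct = 100.0 * read_ios / total_ios if total_ios else 0.0
    write_pct = 100.0 - read_pct                  # write % = 100 minus read %
    hit_pct = 100.0 * cache_hits / read_ios if read_ios else 0.0
    return {
        "read_pct": read_pct,
        "write_pct": write_pct,
        "cache_hit_pct": hit_pct,
        "io_per_sec": total_ios / interval_s,       # current I/O request rate
        "kb_per_sec": kb_transferred / interval_s,  # current transfer rate
    }

stats = interval_stats(total_ios=3000, read_ios=2400, cache_hits=1800,
                       kb_transferred=24000, interval_s=10)
# read_pct 80.0, cache_hit_pct 75.0, io_per_sec 300.0, kb_per_sec 2400.0
```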
You can identify actual Fibre Channel I/O patterns to the individual logical drives
and compare those with the expectations based on the application. If a controller
has more I/O activity than expected, move an array to the other controller in the
storage subsystem by clicking Array > Change Ownership.
If you notice that the workload across the storage subsystem (total Fibre Channel
I/O statistic) continues to increase over time while application performance
decreases, you might need to add storage subsystems to the enterprise.
One of the ways to improve the I/O transfer rate is to improve the I/O request rate.
Use the host-computer operating system utilities to gather data about I/O size to
understand the maximum transfer rates possible. Then, use the tuning options that
are available in Storage Manager 9.1x to optimize the I/O request rate to reach the
maximum possible transfer rate.
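Request rate and transfer rate are linked by the average I/O size, which is why knowing the I/O size matters. A minimal sketch with invented numbers:

```python
def transfer_rate_kb_s(io_per_sec, avg_io_kb):
    # Throughput follows directly from the request rate and the average I/O size.
    return io_per_sec * avg_io_kb

# The same 4000 requests per second yields very different throughput
# at different I/O sizes:
small = transfer_rate_kb_s(4000, 4)    # 16000 KB/s with 4 KB I/Os
large = transfer_rate_kb_s(4000, 256)  # 1024000 KB/s with 256 KB I/Os
```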
Note: Fragmentation affects logical drives with sequential Fibre Channel I/O
access patterns, not random Fibre Channel I/O access patterns.
Determining the Fibre Channel I/O access pattern and I/O size
To determine if the Fibre Channel I/O access has sequential characteristics, enable
a conservative cache read-ahead multiplier (for example, 4) by clicking Logical
Drive > Properties. Then, examine the logical drive cache-hit percentage to see if
it has improved. An improvement indicates that the Fibre Channel I/O has a
sequential pattern. For more information, see “Optimizing the cache-hit percentage”
on page 95. Use the host-computer operating-system utilities to determine the
typical I/O size for a logical drive.
Enabling write-caching
Higher Fibre Channel I/O write rates occur when write-caching is enabled,
especially for sequential Fibre Channel I/O access patterns. Regardless of the Fibre
Channel I/O access pattern, be sure to enable write-caching to maximize the Fibre
Channel I/O rate and shorten the application response time.
If the cache-hit percentage of all logical drives is low or trending downward and you
do not have the maximum amount of controller cache memory installed, you might
need to install more memory.
If an individual logical drive has a low cache-hit percentage, you can enable cache
read-ahead for that logical drive. Cache read-ahead can increase the cache-hit
percentage for a sequential I/O workload. When cache read-ahead is enabled, the
cache fetches more data than was requested, usually from adjacent data blocks on
the drive. This increases the chance that a future request for data is fulfilled
from the cache, rather than requiring a logical drive access.
The cache read-ahead multiplier values specify the multiplier to use for determining
how many additional data blocks are read into the cache. Choosing a higher cache
read-ahead multiplier can increase the cache-hit percentage.
If you determine that the Fibre Channel I/O access pattern has sequential
characteristics, set an aggressive cache read-ahead multiplier (for example, 8).
Then examine the logical-drive cache-hit percentage to see if it has improved.
Continue to customize logical-drive cache read-ahead to arrive at the optimal
multiplier. (For a random I/O pattern, the optimal multiplier is 0.)
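The effect of the read-ahead multiplier on the cache-hit percentage can be illustrated with a toy cache model. This Python sketch is a deliberately simplified simulation (the prefetch policy and block counts are assumptions, not Storage Manager internals):

```python
import random

def hit_percentage(accesses, multiplier):
    """Simulate a cache that prefetches `multiplier` blocks past each miss.

    Simplified model of cache read-ahead: after reading block b from the
    drive, blocks b+1 .. b+multiplier are also staged into the cache.
    """
    cache = set()
    hits = 0
    for b in accesses:
        if b in cache:
            hits += 1
        else:
            cache.update(range(b, b + multiplier + 1))
    return 100.0 * hits / len(accesses)

sequential = list(range(1000))
rng = random.Random(1)
scattered = [rng.randrange(1_000_000) for _ in range(1000)]

# Read-ahead helps the sequential stream but not the random one.
seq_hits = hit_percentage(sequential, multiplier=8)   # 88.8 for this stream
rnd_hits = hit_percentage(scattered, multiplier=8)    # close to zero
```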
Important: In Storage Manager 7.01 and 7.02, the segment size is expressed in
the number of data blocks. The segment size in Storage Manager 9.1x is expressed
in KB.
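Assuming the usual 512-byte data block, converting between the two units is straightforward; the helper below is hypothetical, not a Storage Manager utility:

```python
BYTES_PER_BLOCK = 512  # assumption: one data block is 512 bytes

def blocks_to_kb(blocks):
    """Convert a 7.x-style segment size (blocks) to a 9.1x-style size (KB)."""
    return blocks * BYTES_PER_BLOCK // 1024

def kb_to_blocks(kb):
    """Convert a 9.1x-style segment size (KB) back to data blocks."""
    return kb * 1024 // BYTES_PER_BLOCK

# A 128-block segment is 64 KB, and vice versa.
```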
When you create a logical drive, the default segment size is a good choice for the
expected logical-drive usage. To change the default segment size, click Logical
Drive > Change Segment Size.
If the I/O size is larger than the segment size, increase the segment size to
minimize the number of drives that are needed to satisfy an I/O request. This
technique helps even more if you have random I/O access patterns. Using a
single drive for a single request leaves other drives available to
simultaneously service other requests.
When you use the logical drive in a single-user, large I/O environment such as a
multimedia application, storage performance is optimized when a single I/O request
is serviced with a single array data stripe (which is the segment size multiplied by
the number of drives in the array that are used for I/O requests). In this
case, multiple drives are used for the same request, but each drive is
accessed only once.
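The stripe arithmetic above can be sketched as follows. The RAID layout in the example is invented, and the model ignores request alignment:

```python
import math

def stripe_kb(segment_kb, data_drives):
    # One full array data stripe: segment size multiplied by the number of
    # drives in the array that service I/O requests.
    return segment_kb * data_drives

def drives_per_request(io_kb, segment_kb, data_drives):
    # How many drives a single stripe-aligned request touches (simplified:
    # partial-segment alignment effects are ignored).
    return min(data_drives, math.ceil(io_kb / segment_kb))

# With a 64 KB segment on a 4+1 RAID 5 array (4 data drives):
# - a 32 KB request touches one drive, leaving the others free;
# - a 256 KB multimedia-style request spans the full 256 KB stripe,
#   touching each drive exactly once.
```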
Chapter 6. Critical event problem solving
When a critical event occurs, it is logged in the Event Log. It is also sent to any
e-mail and SNMP trap destinations that you have configured. The critical event type
and the sense key/ASC/ASCQ data are both shown in the event log details.
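The sense data is three hexadecimal fields separated by slashes, as in 6/3F/C3. A hypothetical parser for post-processing exported event logs might look like this:

```python
def parse_sense(field):
    """Split a 'sense key/ASC/ASCQ' string such as '6/3F/C3' into integers."""
    if field.strip().lower() == "none":
        return None                      # some events carry no sense data
    key, asc, ascq = (int(part, 16) for part in field.split("/"))
    return {"sense_key": key, "asc": asc, "ascq": ascq}

parsed = parse_sense("6/3F/C3")
# sense key 0x6, ASC 0x3F, ASCQ 0xC3
```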
If a critical event occurs and you plan to call technical support, you can use the
Customer Support Bundle feature to gather and package various pieces of data that
can aid in remote troubleshooting. Perform the following steps to use the Customer
Support Bundle feature:
1. From the Subsystem Management window of the storage subsystem that is
exhibiting problems, go to the Advanced menu.
2. Select Troubleshooting > Advanced > Collect All Support Data. The Collect
All Support Data window opens.
3. Type the name of the file where you want to save the collected data, or click
Browse to select the file. Click Start.
Depending on the amount of data to be collected, it might take several
seconds for the zip file to be created.
4. When the process completes, you can send the zip file electronically to
customer support for troubleshooting.
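The effect of Collect All Support Data, many diagnostic files bundled into one zip archive, can be approximated with Python's standard library. This is a sketch of the idea, not the actual bundle format, and the file names in any real bundle will differ:

```python
import zipfile
from pathlib import Path

def collect_support_bundle(files, bundle_path):
    """Zip a set of diagnostic files into a single support bundle."""
    with zipfile.ZipFile(bundle_path, "w", zipfile.ZIP_DEFLATED) as bundle:
        for f in files:
            # Store each file by name only, without its directory path.
            bundle.write(f, arcname=Path(f).name)
    return bundle_path
```

The resulting single file is what you would send electronically to support.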
Table 19 provides more information about events with a critical priority, as shown in
the Subsystem Management window event log.
Table 19. Critical events

Event 1001 - Channel failed
Sense key/ASC/ASCQ: 6/3F/C3
Description: The controller failed a channel and can no longer access drives on
this channel. The FRU group qualifier (byte 26) in the sense data indicates the
relative channel number of the failed channel. Typically, this condition is
caused by a drive ignoring the SCSI protocol on one of the controller
destination channels. The controller fails a channel if it issued a reset on a
channel and continues to see the drives ignore the SCSI Bus Reset on this
channel.
Event 1510 - ESM canister miswire
Sense key/ASC/ASCQ: None
Description: Two ESM canisters in the same storage expansion enclosure are
connected to the same Fibre Channel loop. A level of redundancy has been lost,
and the I/O performance for this storage expansion enclosure is reduced.
Event 2248 - Drive failed - write failure
Sense key/ASC/ASCQ: 6/3F/80
Description: The drive failed during a write command. The drive is marked
failed.
Event 2803 - Uninterruptible power supply battery - two minutes to failure
Sense key/ASC/ASCQ: 6/3F/C9
Description: The uninterruptible power supply has indicated that its standby
power supply is nearing depletion.
Event 281C - Maximum temperature exceeded
Sense key/ASC/ASCQ: 6/3F/C6
Description: The maximum temperature of the enclosure has been exceeded. Either
a fan has failed or the temperature of the room is too high. This condition is
critical and might cause the enclosure to shut down if you do not fix the
problem immediately. The automatic shutdown conditions depend on the model of
the enclosure.
Event 2830 - Mixed drive types not supported
Sense key/ASC/ASCQ: None
Description: The storage subsystem currently contains drives of different drive
technologies, such as Fibre Channel (FC) and Serial ATA (SATA). Mixing
different drive technologies is not supported on this storage subsystem.
Event 560C - CtlrDiag task on controller's alternate cannot obtain Mode
Sense key/ASC/ASCQ: None
Description: The alternate controller in this pair is attempting to run
diagnostics but could not secure the test area from other storage subsystem
operations. The diagnostics were canceled.

Event 6101 - Internal configuration database full
Sense key/ASC/ASCQ: None
Description: Because of the amount of data that is required to store certain
configuration data, the maximum number of logical drives has been
underestimated. One or both of the following types of data might have caused
the internal configuration database to become full:
v FlashCopy logical drive configuration data
v Global/Metro remote mirror configuration data
Event 6202 - Failed FlashCopy logical drive
Sense key/ASC/ASCQ: None
Description: Either the FlashCopy repository logical drive that is associated
with the FlashCopy logical drive is full, or its associated base or FlashCopy
repository logical drives have failed due to one or more drive failures on
their respective arrays.
Action: Start the Recovery Guru and click the Failed FlashCopy Logical Drive
recovery procedure. Follow the instructions to correct this failure.
Event 6503 - Remote logical drive link down
Sense key/ASC/ASCQ: None
Description: This event is triggered when a cable between one array and its
peer has been disconnected, the Fibre Channel switch has failed, or the peer
array has reset. This error could result in a Mirror Data Unsynchronized
condition (event 6402). The affected remote logical drive displays an
Unresponsive icon, and this state is shown in the tooltip when you pass your
cursor over the logical drive.
Appendix A. Online help task reference
The Enterprise Management software and Subsystem Management software have
unique online help systems. This reference is a task-oriented index to the
appropriate help system.
See the Enterprise Management window online help for information about the
following tasks:
v Adding comments to a script
v Creating logical drives using the Script Editor
v Deleting an array or logical drive using the Script Editor
v Downloading new firmware or NVSRAM to the storage subsystem using the Script
Editor
v Editing an existing script
v Running the currently loaded script
v Interpreting script results
v Opening a new script
v Saving the script results to a local file
v Saving the script in the Script view
v Using the Script Editor
v Verifying the syntax of the currently loaded script
Event notification
See the Enterprise Management window online help for information about the
following tasks:
v Configuring destination addresses for notifications about an individual
storage subsystem
v Configuring destination addresses for notifications about every storage
subsystem that is attached and managed through a particular host computer
v Configuring destination addresses for notifications about every storage
subsystem in the management domain
v Interpreting an e-mail or SNMP trap message
v Specifying management-domain global e-mail alert settings
v Validating potential destination addresses

See the Subsystem Management window online help for information about the
following tasks:
v Displaying storage subsystem events in the Event Viewer
v Interpreting event codes
v Interpreting event summary data
v Saving selected events to a file
v Viewing and interpreting event details
v Viewing events stored in the Event Log
v Running and displaying Drive Channel diagnostics
v Capturing all support data and storage subsystem state information
Recovering from problems
If a critical event occurs and you plan to call technical support, you can use the
Customer Support Bundle feature to gather and package various pieces of data that
can aid in remote troubleshooting. For more information about the Customer
Support Bundle feature, see page 97.
See the Subsystem Management window online help for information about the
following tasks:
v Failing a selected drive or drives
v Identifying when to use the Recovery Guru
v Initializing drives, logical drives, or arrays
v Interpreting Recovery Guru information
v Manually reconstructing a drive
v Moving arrays (and their associated logical drives) back to their preferred
controller owners
v Placing a controller online or offline
v Recovering from connection failures
v Recovering from storage subsystem problems
v Reviving the drives in a selected array or an individual drive
v Saving Recovery Guru information to a text file
Appendix B. Additional DS4000 documentation
The following tables present an overview of the IBM System Storage DS4000
Storage Manager, Storage Subsystem, and Storage Expansion Enclosure product
libraries, as well as other related documents. Each table lists documents that are
included in the libraries and what common tasks they address.
You can access the documents listed in these tables at both of the following Web
sites:
www.ibm.com/servers/storage/support/disk/
www.ibm.com/shop/publications/order/
DS4700 Storage Subsystem library
Table 22 associates each document in the DS4700 Storage Subsystem library with
its related common user tasks.
Table 22. DS4700 Storage Subsystem document titles by user tasks
(User tasks: Planning, Hardware Installation, Software Installation,
Configuration, Operation and Administration, Diagnosis and Maintenance)

v IBM System Storage DS4700 Storage Subsystem Installation, User's and
Maintenance Guide: addresses five of the six user tasks
v IBM System Storage DS4700 Storage Subsystem Fibre Channel Cabling Guide:
addresses one user task
DS4400 Storage Subsystem library
Table 24 associates each document in the DS4400 (previously FAStT700) Storage
Subsystem library with its related common user tasks.
Table 24. DS4400 Storage Subsystem document titles by user tasks
(User tasks: Planning, Hardware Installation, Software Installation,
Configuration, Operation and Administration, Diagnosis and Maintenance)

v IBM TotalStorage DS4400 Fibre Channel Storage Server User's Guide: addresses
five of the six user tasks
v IBM TotalStorage DS4400 Fibre Channel Storage Server Installation and Support
Guide: addresses four of the six user tasks
v IBM TotalStorage DS4400 Fibre Channel Cabling Instructions: addresses two of
the six user tasks
DS4200 Express Storage Subsystem library
Table 26 associates each document in the DS4200 Express Storage Subsystem
library with its related common user tasks.
Table 26. DS4200 Express Storage Subsystem document titles by user tasks
(User tasks: Planning, Hardware Installation, Software Installation,
Configuration, Operation and Administration, Diagnosis and Maintenance)

v IBM System Storage DS4200 Express Storage Subsystem Installation, User's and
Maintenance Guide: addresses five of the six user tasks
v IBM System Storage DS4200 Express Storage Subsystem Cabling Guide: addresses
one user task
DS4000 Storage Expansion Enclosure documents
Table 28 associates each of the following documents with its related common user
tasks.
Table 28. DS4000 Storage Expansion Enclosure document titles by user tasks
(User tasks: Planning, Hardware Installation, Software Installation,
Configuration, Operation and Administration, Diagnosis and Maintenance)

v IBM System Storage DS4000 EXP810 Storage Expansion Enclosure Installation,
User's, and Maintenance Guide: addresses five of the six user tasks
v IBM TotalStorage DS4000 EXP700 and EXP710 Storage Expansion Enclosures
Installation, User's, and Maintenance Guide: addresses five of the six user
tasks
v IBM DS4000 EXP500 Installation and User's Guide: addresses five of the six
user tasks
v IBM System Storage DS4000 EXP420 Storage Expansion Enclosure Installation,
User's, and Maintenance Guide: addresses five of the six user tasks
v IBM System Storage DS4000 Hard Drive and Storage Expansion Enclosures
Installation and Migration Guide: addresses two of the six user tasks
Notes:
1. The IBM TotalStorage DS4000 Hardware Maintenance Manual does not contain maintenance information for the
IBM System Storage DS4100, DS4200, DS4300, DS4500, DS4700, or DS4800 storage subsystems. You can find
maintenance information for these products in the IBM System Storage DSx000 Storage Subsystem Installation,
User's, and Maintenance Guide for the particular subsystem.
Appendix C. Accessibility
This section provides information about alternate keyboard navigation, which is a
DS4000 Storage Manager accessibility feature. Accessibility features help a user
who has a physical disability, such as restricted mobility or limited vision, to use
software products successfully.
By using the alternate keyboard operations that are described in this section, you
can use keys or key combinations to perform Storage Manager tasks and initiate
many menu actions that can also be done with a mouse.
Note: In addition to the keyboard operations that are described in this section, the
DS4000 Storage Manager 9.14, 9.15, and 9.16 software installation packages for
Windows include a screen reader software interface. To enable the screen reader,
select Custom Installation when using the installation wizard to install Storage
Manager 9.14, 9.15, or 9.16 on a Windows host/management station. Then, in the
Select Product Features window, select Java™ Access Bridge in addition to the
other required host software components.
Table 30 defines the keyboard operations that enable you to navigate, select, or
activate user interface components. The following terms are used in the table:
v Navigate means to move the input focus from one user interface component to
another.
v Select means to choose one or more components, typically for a subsequent
action.
v Activate means to carry out the action of a particular component.
Notices
This publication was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service can be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may be
used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
Any references in this publication to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for this
IBM product, and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
Trademarks
The following terms are trademarks of International Business Machines Corporation
in the United States, other countries, or both:
IBM
AIX
e-server logo
FlashCopy
HelpCenter
Intellistation
Netfinity
ServerProven
TotalStorage
System x

© Copyright IBM Corp. 2004, 2007
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Other company, product, or service names may be the trademarks or service marks
of others.
Important notes
Processor speeds indicate the internal clock speed of the microprocessor; other
factors also affect application performance.
CD-ROM drive speeds list the variable read rate. Actual speeds vary and are often
less than the maximum possible.
When referring to processor storage, real and virtual storage, or channel volume,
KB stands for approximately 1000 bytes, MB stands for approximately 1000000
bytes, and GB stands for approximately 1000000000 bytes.
Maximum internal hard disk drive capacities assume the replacement of any
standard hard disk drives and population of all hard disk drive bays with the largest
currently supported drives available from IBM.
Some software may differ from its retail version (if available), and may not include
user manuals or all program functionality.
Glossary
This glossary provides definitions for the terminology and abbreviations used in IBM TotalStorage DS4000 publications.

See. Refers you to (a) a term that is the expanded form of an abbreviation or acronym, or (b) a synonym or more preferred term.

See also. Refers you to a related term.

Abstract Windowing Toolkit (AWT). A Java graphical user interface (GUI).

accelerated graphics port (AGP). A bus specification that gives low-cost 3D graphics cards faster access to main memory on personal computers than the usual peripheral component interconnect (PCI) bus. AGP reduces the overall cost of creating high-end graphics subsystems by using existing system memory.

access volume. A special logical drive that allows the host-agent to communicate with the controllers in the storage subsystem.

adapter. A printed circuit assembly that transmits user data input/output (I/O) between the internal bus of the host system and the external fibre-channel (FC) link and vice versa. Also called an I/O adapter, host adapter, or FC adapter.

array. A collection of fibre-channel or SATA hard drives that are logically grouped together. All the drives in the array are assigned the same RAID level. An array is sometimes referred to as a ″RAID set.″ See also redundant array of independent disks (RAID), RAID level.

asynchronous write mode. In remote mirroring, an option that allows the primary controller to return a write I/O request completion to the host server before data has been successfully written by the secondary controller. See also synchronous write mode, remote mirroring, Global Copy, Global Mirroring.

AT. See advanced technology (AT) bus architecture.

ATA. See AT-attached.

AT-attached. Peripheral devices that are compatible with the original IBM AT computer standard in which signals on a 40-pin AT-attached (ATA) ribbon cable followed the timings and constraints of the Industry Standard Architecture (ISA) system bus on the IBM PC AT computer. Equivalent to integrated drive electronics (IDE).
command. A statement used to initiate an action or start a service. A command consists of the command name abbreviation, and its parameters and flags if applicable. A command can be issued by typing it on a command line or selecting it from a menu.

community string. The name of a community contained in each Simple Network Management Protocol (SNMP) message.

concurrent download. A method of downloading and installing firmware that does not require the user to stop I/O to the controllers during the process.

CRC. See cyclic redundancy check.

CRT. See cathode ray tube.

disk array controller (dac). A disk array controller device that represents the two controllers of an array. See also disk array router.

disk array router (dar). A disk array router that represents an entire array, including current and deferred paths to all logical unit numbers (LUNs) (hdisks on AIX). See also disk array controller.

DMA. See direct memory access.

domain. The most significant byte in the node port (N_port) identifier for the fibre-channel (FC) device. It is not used in the Fibre Channel-small computer system interface (FC-SCSI) hardware path ID. It is required to be the same for all SCSI targets logically connected to an FC adapter.
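The domain byte can be illustrated with a short sketch. This is not part of Storage Manager; it only assumes the standard 24-bit N_port identifier layout (domain, area, and port bytes), and the sample port ID is hypothetical:

```python
# Extract the domain (most significant byte) from a 24-bit
# fibre-channel N_port identifier. 0x010203 is a hypothetical
# port ID used only for illustration.
def domain_of(n_port_id: int) -> int:
    """Return the most significant byte of a 24-bit N_port ID."""
    return (n_port_id >> 16) & 0xFF

print(hex(domain_of(0x010203)))  # prints 0x1
```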
drive channels. The DS4200, DS4700, and DS4800 subsystems use dual-port drive channels that, from the physical point of view, are connected in the same way as two drive loops. However, from the point of view of the number of drives and enclosures, they are treated as a single drive loop instead of two different drive loops. A group of storage expansion enclosures is connected to the DS4000 storage subsystems using a drive channel from each controller. This pair of drive channels is referred to as a redundant drive channel pair.

drive loops. A drive loop consists of one channel from each controller combined to form one pair of redundant drive channels or a redundant drive loop. Each drive loop is associated with two ports. (There are two drive channels and four associated ports per controller.) For the DS4800, drive loops are more commonly referred to as drive channels. See drive channels.

DRAM. See dynamic random access memory.

Dynamic Host Configuration Protocol (DHCP). A protocol defined by the Internet Engineering Task Force that is used for dynamically assigning Internet Protocol (IP) addresses to computers in a network.

dynamic random access memory (DRAM). A storage in which the cells require repetitive application of control signals to retain stored data.

ECC. See error correction coding.

EEPROM. See electrically erasable programmable read-only memory.

EISA. See Extended Industry Standard Architecture.

electrically erasable programmable read-only memory (EEPROM). A type of memory chip which can retain its contents without consistent electrical power. Unlike the PROM, which can be programmed only once, the EEPROM can be erased electrically. Because it can only be reprogrammed a limited number of times before it wears out, it is appropriate for storing small amounts of data that are changed infrequently.

electrostatic discharge (ESD). The flow of current that results when objects that have a static charge come into close enough proximity to discharge.

environmental service module (ESM) canister. A component in a storage expansion enclosure that monitors the environmental condition of the components in that enclosure. Not all storage subsystems have ESM canisters.

E_port. See expansion port.

error correction coding (ECC). A method for encoding data so that transmission errors can be detected and corrected by examining the data on the receiving end. Most ECCs are characterized by the maximum number of errors they can detect and correct.

ESD. See electrostatic discharge.

ESM canister. See environmental service module canister.

automatic ESM firmware synchronization. When you install a new ESM into an existing storage expansion enclosure in a DS4000 storage subsystem that supports automatic ESM firmware synchronization, the firmware in the new ESM is automatically synchronized with the firmware in the existing ESM.

EXP. See storage expansion enclosure.

expansion port (E_port). A port that connects the switches for two fabrics.

Extended Industry Standard Architecture (EISA). A bus standard for IBM compatibles that extends the Industry Standard Architecture (ISA) bus architecture to 32 bits and allows more than one central processing unit (CPU) to share the bus. See also Industry Standard Architecture.

fabric. A Fibre Channel entity which interconnects and facilitates logins of N_ports attached to it. The fabric is responsible for routing frames between source and destination N_ports using address information in the frame header. A fabric can be as simple as a point-to-point channel between two N_ports, or as complex as a frame-routing switch that provides multiple and redundant internal pathways within the fabric between F_ports.

fabric port (F_port). In a fabric, an access point for connecting a user's N_port. An F_port facilitates N_port logins to the fabric from nodes connected to the fabric. An F_port is addressable by the N_port connected to it. See also fabric.

FC. See Fibre Channel.

FC-AL. See arbitrated loop.

feature enable identifier. A unique identifier for the storage subsystem, which is used in the process of generating a premium feature key. See also premium feature key.

Fibre Channel (FC). A set of standards for a serial input/output (I/O) bus capable of transferring data between two ports at up to 100 Mbps, with standards proposals to go to higher speeds. FC supports point-to-point, arbitrated loop, and switched topologies.

Fibre Channel-Arbitrated Loop (FC-AL). See arbitrated loop.

Fibre Channel Protocol (FCP) for small computer system interface (SCSI). A high-level fibre-channel mapping layer (FC-4) that uses lower-level fibre-channel
(FC-PH) services to transmit SCSI commands, data, and status information between a SCSI initiator and a SCSI target across the FC link by using FC frame and sequence formats.

field replaceable unit (FRU). An assembly that is replaced in its entirety when any one of its components fails. In some cases, a field replaceable unit might contain other field replaceable units. Contrast with customer replaceable unit (CRU).

FlashCopy. A premium feature for DS4000 that can make an instantaneous copy of the data in a volume.

F_port. See fabric port.

FRU. See field replaceable unit.

GBIC. See gigabit interface converter.

gigabit interface converter (GBIC). A transceiver that performs serial, optical-to-electrical, and electrical-to-optical signal conversions for high-speed networking. A GBIC can be hot swapped. See also small form-factor pluggable.

Global Copy. Refers to a remote logical drive mirror pair that is set up using asynchronous write mode without the write consistency group option. This is also referred to as ″Asynchronous Mirroring without Consistency Group.″ Global Copy does not ensure that write requests to multiple primary logical drives are carried out in the same order on the secondary logical drives as they are on the primary logical drives. If it is critical that writes to the primary logical drives are carried out in the same order in the appropriate secondary logical drives, Global Mirroring should be used instead of Global Copy. See also asynchronous write mode, Global Mirroring, remote mirroring, Metro Mirroring.

Global Mirroring. Refers to a remote logical drive mirror pair that is set up using asynchronous write mode with the write consistency group option. This is also referred to as ″Asynchronous Mirroring with Consistency Group.″ Global Mirroring ensures that write requests to multiple primary logical drives are carried out in the same order on the secondary logical drives as they are on the primary logical drives, preventing data on the secondary logical drives from becoming inconsistent with the data on the primary logical drives. See also asynchronous write mode, Global Copy, remote mirroring, Metro Mirroring.

graphical user interface (GUI). A type of computer interface that presents a visual metaphor of a real-world scene, often of a desktop, by combining high-resolution graphics, pointing devices, menu bars and other menus, overlapping windows, icons, and the object-action relationship.

HBA. See host bus adapter.

hdisk. An AIX term representing a logical unit number (LUN) on an array.

heterogeneous host environment. A host system in which multiple host servers, which use different operating systems with their own unique disk storage subsystem settings, connect to the same DS4000 storage subsystem at the same time. See also host.

host. A system that is directly attached to the storage subsystem through a fibre-channel input/output (I/O) path. This system is used to serve data (typically in the form of files) from the storage subsystem. A system can be both a storage management station and a host simultaneously.

host bus adapter (HBA). An interface between the fibre-channel network and a workstation or server.

host computer. See host.

host group. An entity in the storage partition topology that defines a logical collection of host computers that require shared access to one or more logical drives.

host port. Ports that physically reside on the host adapters and are automatically discovered by the DS4000 Storage Manager software. To give a host computer access to a partition, its associated host ports must be defined.

hot swap. To replace a hardware component without turning off the system.

hub. In a network, a point at which circuits are either connected or switched. For example, in a star network, the hub is the central node; in a star/ring network, it is the location of wiring concentrators.

IBMSAN driver. The device driver that is used in a Novell NetWare environment to provide multipath input/output (I/O) support to the storage controller.

IC. See integrated circuit.

IDE. See integrated drive electronics.

in-band. Transmission of management protocol over the fibre-channel transport.

Industry Standard Architecture (ISA). Unofficial name for the bus architecture of the IBM PC/XT™ personal computer. This bus design included expansion slots for plugging in various adapter boards. Early versions had an 8-bit data path, later expanded to 16 bits. The ″Extended Industry Standard Architecture″ (EISA) further expanded the data path to 32 bits. See also Extended Industry Standard Architecture.
initial program load (IPL). The initialization procedure that causes an operating system to commence operation. Also referred to as a system restart, system startup, and boot.

integrated circuit (IC). A microelectronic semiconductor device that consists of many interconnected transistors and other components. ICs are constructed on a small rectangle cut from a silicon crystal or other semiconductor material. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. Also known as a chip.

integrated drive electronics (IDE). A disk drive interface based on the 16-bit IBM personal computer Industry Standard Architecture (ISA) in which the controller electronics reside on the drive itself, eliminating the need for a separate adapter card. Also known as an Advanced Technology Attachment Interface (ATA).

Internet Protocol (IP). A protocol that routes data through a network or interconnected networks. IP acts as an intermediary between the higher protocol layers and the physical network.

Internet Protocol (IP) address. The unique 32-bit address that specifies the location of each device or workstation on the Internet. For example, 9.67.97.103 is an IP address.

interrupt request (IRQ). A type of input found on many processors that causes the processor to suspend normal processing temporarily and start running an interrupt handler routine. Some processors have several interrupt request inputs that allow different priority interrupts.

IP. See Internet Protocol.

IPL. See initial program load.

IRQ. See interrupt request.

ISA. See Industry Standard Architecture.

Java Runtime Environment (JRE). A subset of the Java Development Kit (JDK) for end users and developers who want to redistribute the Java Runtime Environment (JRE). The JRE consists of the Java virtual machine, the Java Core Classes, and supporting files.

JRE. See Java Runtime Environment.

label. A discovered or user-entered property value that is displayed underneath each device in the Physical and Data Path maps.

LAN. See local area network.

LBA. See logical block address.

local area network (LAN). A computer network located on a user's premises within a limited geographic area.

logical block address (LBA). The address of a logical block. Logical block addresses are typically used in hosts' I/O commands. The SCSI disk command protocol, for example, uses logical block addresses.

logical partition (LPAR). (1) A subset of a single system that contains resources (processors, memory, and input/output devices). A logical partition operates as an independent system. If hardware requirements are met, multiple logical partitions can exist within a system. (2) A fixed-size portion of a logical volume. A logical partition is the same size as the physical partitions in its volume group. Unless the logical volume of which it is a part is mirrored, each logical partition corresponds to, and its contents are stored on, a single physical partition. (3) One to three physical partitions (copies). The number of logical partitions within a logical volume is variable.

logical unit number (LUN). An identifier used on a small computer system interface (SCSI) bus to distinguish among up to eight devices (logical units) with the same SCSI ID.

loop address. The unique ID of a node in fibre-channel loop topology, sometimes referred to as a loop ID.

loop group. A collection of storage area network (SAN) devices that are interconnected serially in a single loop circuit.

loop port. A node port (N_port) or fabric port (F_port) that supports arbitrated loop functions associated with an arbitrated loop topology.

LPAR. See logical partition.

LUN. See logical unit number.

MAC. See medium access control.

management information base (MIB). The information that is on an agent. It is an abstraction of configuration and status information.

man pages. In UNIX-based operating systems, online documentation for operating system commands, subroutines, system calls, file formats, special files, stand-alone utilities, and miscellaneous facilities. Invoked by the man command.

MCA. See micro channel architecture.

media scan. A media scan is a background process that runs on all logical drives in the storage subsystem for which it has been enabled, providing error detection on the drive media. The media scan process scans all
logical drive data to verify that it can be accessed, and optionally scans the logical drive redundancy information.

medium access control (MAC). In local area networks (LANs), the sublayer of the data link control layer that supports medium-dependent functions and uses the services of the physical layer to provide services to the logical link control sublayer. The MAC sublayer includes the method of determining when a device has access to the transmission medium.

Metro Mirroring. This term is used to refer to a remote logical drive mirror pair which is set up with synchronous write mode. See also remote mirroring, Global Mirroring.

MIB. See management information base.

micro channel architecture (MCA). Hardware that is used for PS/2 Model 50 computers and above to provide better growth potential and performance characteristics when compared with the original personal computer design.

Microsoft Cluster Server (MSCS). MSCS, a feature of Windows NT Server (Enterprise Edition), supports the connection of two servers into a cluster for higher availability and easier manageability. MSCS can automatically detect and recover from server or application failures. It can also be used to balance server workload and provide for planned maintenance.

mini hub. An interface card or port device that receives short-wave fiber channel GBICs or SFPs. These devices enable redundant Fibre Channel connections from the host computers, either directly or through a Fibre Channel switch or managed hub, over optical fiber cables to the DS4000 Storage Server controllers. Each DS4000 controller is responsible for two mini hubs. Each mini hub has two ports. Four host ports (two on each controller) provide a cluster solution without use of a switch. Two host-side mini hubs are shipped as standard. See also host port, gigabit interface converter (GBIC), small form-factor pluggable (SFP).

mirroring. A fault-tolerance technique in which information on a hard disk is duplicated on additional hard disks. See also remote mirroring.

model. The model identification that is assigned to a device by its manufacturer.

MSCS. See Microsoft Cluster Server.

network management station (NMS). In the Simple Network Management Protocol (SNMP), a station that runs management application programs that monitor and control network elements.

NMI. See non-maskable interrupt.

NMS. See network management station.

non-maskable interrupt (NMI). A hardware interrupt that another service request cannot overrule (mask). An NMI bypasses and takes priority over interrupt requests generated by software, the keyboard, and other such devices and is issued to the microprocessor only in disastrous circumstances, such as severe memory errors or impending power failures.

node. A physical device that allows for the transmission of data within a network.

node port (N_port). A fibre-channel defined hardware entity that performs data communications over the fibre-channel link. It is identifiable by a unique worldwide name. It can act as an originator or a responder.

nonvolatile storage (NVS). A storage device whose contents are not lost when power is cut off.

N_port. See node port.

NVS. See nonvolatile storage.

NVSRAM. Nonvolatile storage random access memory. See nonvolatile storage.

Object Data Manager (ODM). An AIX proprietary storage mechanism for ASCII stanza files that are edited as part of configuring a drive into the kernel.

ODM. See Object Data Manager.

out-of-band. Transmission of management protocols outside of the fibre-channel network, typically over Ethernet.

partitioning. See storage partition.

parity check. (1) A test to determine whether the number of ones (or zeros) in an array of binary digits is odd or even. (2) A mathematical operation on the numerical representation of the information communicated between two pieces. For example, if parity is odd, any character represented by an even number has a bit added to it, making it odd, and an information receiver checks that each unit of information has an odd value.

PCI local bus. See peripheral component interconnect local bus.

PDF. See portable document format.

performance events. Events related to thresholds set on storage area network (SAN) performance.

peripheral component interconnect local bus (PCI local bus). A local bus for PCs, from Intel, that provides a high-speed data path between the CPU and up to 10 peripherals (video, disk, network, and so on). The PCI bus coexists in the PC with the Industry Standard Architecture (ISA) or Extended Industry
Standard Architecture (EISA) bus. ISA and EISA boards plug into an ISA or EISA slot, while high-speed PCI controllers plug into a PCI slot. See also Industry Standard Architecture, Extended Industry Standard Architecture.

polling delay. The time in seconds between successive discovery processes during which discovery is inactive.

port. A part of the system unit or remote controller to which cables for external devices (such as display stations, terminals, printers, switches, or external storage units) are attached. The port is an access point for data entry or exit. A device can contain one or more ports.

portable document format (PDF). A standard specified by Adobe Systems, Incorporated, for the electronic distribution of documents. PDF files are compact; can be distributed globally by e-mail, the Web, intranets, or CD-ROM; and can be viewed with the Acrobat Reader, which is software from Adobe Systems that can be downloaded at no cost from the Adobe Systems home page.

premium feature key. A file that the storage subsystem controller uses to enable an authorized premium feature. The file contains the feature enable identifier of the storage subsystem for which the premium feature is authorized, and data about the premium feature. See also feature enable identifier.

private loop. A freestanding arbitrated loop with no fabric attachment. See also arbitrated loop.

program temporary fix (PTF). A temporary solution or bypass of a problem diagnosed by IBM in a current unaltered release of the program.

PTF. See program temporary fix.

RAID. See redundant array of independent disks (RAID).

RAID level. An array's RAID level is a number that refers to the method used to achieve redundancy and fault tolerance in the array. See also array, redundant array of independent disks (RAID).

RAID set. See array.

RAM. See random-access memory.

random-access memory (RAM). A temporary storage location in which the central processing unit (CPU) stores and executes its processes. Contrast with DASD.

RDAC. See redundant disk array controller.

read-only memory (ROM). Memory in which stored data cannot be changed by the user except under special conditions.

recoverable virtual shared disk (RVSD). A virtual shared disk on a server node configured to provide continuous access to data and file systems in a cluster.

redundant array of independent disks (RAID). A collection of disk drives (array) that appears as a single volume to the server, which is fault tolerant through an assigned method of data striping, mirroring, or parity checking. Each array is assigned a RAID level, which is a specific number that refers to the method used to achieve redundancy and fault tolerance. See also array, parity check, mirroring, RAID level, striping.

redundant disk array controller (RDAC). (1) In hardware, a redundant set of controllers (either active/passive or active/active). (2) In software, a layer that manages the input/output (I/O) through the active controller during normal operation and transparently reroutes I/Os to the other controller in the redundant set if a controller or I/O path fails.

remote mirroring. Online, real-time replication of data between storage subsystems that are maintained on separate media. The Enhanced Remote Mirror Option is a DS4000 premium feature that provides support for remote mirroring. See also Global Mirroring, Metro Mirroring.

ROM. See read-only memory.

router. A computer that determines the path of network traffic flow. The path selection is made from several paths based on information obtained from specific protocols, algorithms that attempt to identify the shortest or best path, and other criteria such as metrics or protocol-specific destination addresses.

RVSD. See recoverable virtual shared disk.

SAI. See Storage Array Identifier.

SA Identifier. See Storage Array Identifier.

SAN. See storage area network.

SATA. See serial ATA.

scope. Defines a group of controllers by their Internet Protocol (IP) addresses. A scope must be created and defined so that dynamic IP addresses can be assigned to controllers on the network.

SCSI. See small computer system interface.

segmented loop port (SL_port). A port that allows division of a fibre-channel private loop into multiple segments. Each segment can pass frames around as an independent loop and can connect through the fabric to other segments of the same loop.

sense data. (1) Data sent with a negative response, indicating the reason for the response. (2) Data describing an I/O error. Sense data is presented to a host system in response to a sense request command.
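The parity checking that the RAID entry on this page refers to can be sketched in a few lines. This is an illustration only, not Storage Manager code; the data blocks are hypothetical, and real controllers work on fixed-size segments rather than short byte strings:

```python
# Illustrative XOR parity, as used by parity-based RAID levels:
# XOR the blocks on the data drives to form a parity block, then
# use the same XOR to rebuild a lost block.
def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data drives
parity = xor_blocks(data)            # block on the parity drive

# If one data drive fails, XOR of the surviving data blocks and
# the parity block reconstructs the missing block.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # prints True
```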
serial ATA. The standard for a high-speed alternative to small computer system interface (SCSI) hard drives. The SATA-1 standard is equivalent in performance to a 10 000 RPM SCSI drive.

serial storage architecture (SSA). An interface specification from IBM in which devices are arranged in a ring topology. SSA, which is compatible with small computer system interface (SCSI) devices, allows full-duplex packet multiplexed serial data transfers at rates of 20 Mbps in each direction.

server. A functional hardware and software unit that delivers shared resources to workstation client units on a computer network.

server/device events. Events that occur on the server or a designated device that meet criteria that the user sets.

SFP. See small form-factor pluggable.

Simple Network Management Protocol (SNMP). In the Internet suite of protocols, a network management protocol that is used to monitor routers and attached networks. SNMP is an application layer protocol. Information on devices managed is defined and stored in the application's Management Information Base (MIB).

SL_port. See segmented loop port.

SMagent. The DS4000 Storage Manager optional Java-based host-agent software, which can be used on Microsoft Windows, Novell NetWare, AIX, HP-UX, Solaris, and Linux on POWER host systems to manage storage subsystems through the host fibre-channel connection.

SMclient. The DS4000 Storage Manager client software, which is a Java-based graphical user interface (GUI) that is used to configure, manage, and troubleshoot storage servers and storage expansion enclosures in a DS4000 storage subsystem. SMclient can be used on a host system or on a storage management station.

SMruntime. A Java compiler for the SMclient.

SMutil. The DS4000 Storage Manager utility software that is used on Microsoft Windows, AIX, HP-UX, Solaris, and Linux on POWER host systems to register and map new logical drives to the operating system. In Microsoft Windows, it also contains a utility to flush the cached data of the operating system for a particular drive before creating a FlashCopy.

small computer system interface (SCSI). A standard hardware interface that enables a variety of peripheral devices to communicate with one another.

small form-factor pluggable (SFP). An optical transceiver that is used to convert signals between optical fiber cables and switches. An SFP is smaller than a gigabit interface converter (GBIC). See also gigabit interface converter.

SNMP. See Simple Network Management Protocol and SNMPv1.

SNMP trap event. An event notification sent by the SNMP agent that identifies conditions, such as thresholds, that exceed a predetermined value. See also Simple Network Management Protocol.

SNMPv1. The original standard for SNMP is now referred to as SNMPv1, as opposed to SNMPv2, a revision of SNMP. See also Simple Network Management Protocol.

SRAM. See static random access memory.

SSA. See serial storage architecture.

static random access memory (SRAM). Random access memory based on the logic circuit known as a flip-flop. It is called static because it retains a value as long as power is supplied, unlike dynamic random access memory (DRAM), which must be regularly refreshed. It is, however, still volatile, meaning that it can lose its contents when the power is turned off.

storage area network (SAN). A dedicated storage network tailored to a specific environment, combining servers, storage products, networking products, software, and services. See also fabric.

Storage Array Identifier (SAI or SA Identifier). The Storage Array Identifier is the identification value used by the DS4000 Storage Manager host software (SMclient) to uniquely identify each managed storage server. The DS4000 Storage Manager SMclient program maintains Storage Array Identifier records of previously discovered storage servers in the host resident file, which allows it to retain discovery information in a persistent fashion.

storage expansion enclosure (EXP). A feature that can be connected to a system unit to provide additional storage and processing capacity.

storage management station. A system that is used to manage the storage subsystem. A storage management station does not need to be attached to the storage subsystem through the fibre-channel input/output (I/O) path.

storage partition. Storage subsystem logical drives that are visible to a host computer or are shared among host computers that are part of a host group.

storage partition topology. In the DS4000 Storage Manager client, the Topology view of the Mappings window displays the default host group, the defined host group, the host computer, and host-port nodes. The host port, host computer, and host group topological
elements must be defined to grant access to host Transmission Control Protocol (TCP). A
computers and host groups using logical drive-to-LUN communication protocol used in the Internet and in any
mappings. network that follows the Internet Engineering Task Force
(IETF) standards for internetwork protocol. TCP
striping. Splitting data to be written into equal blocks provides a reliable host-to-host protocol between hosts
and writing blocks simultaneously to separate disk in packed-switched communication networks and in
drives. Striping maximizes performance to the disks. interconnected systems of such networks. It uses the
Reading the data back is also scheduled in parallel, with Internet Protocol (IP) as the underlying protocol.
a block being read concurrently from each disk then
reassembled at the host.

subnet. An interconnected but independent segment of a network that is identified by its Internet Protocol (IP) address.

sweep method. A method of sending Simple Network Management Protocol (SNMP) requests for information to all the devices on a subnet by sending the request to every device in the network.

switch. A fibre-channel device that provides full bandwidth per port and high-speed routing of data by using link-level addressing.

switch group. A switch and the collection of devices connected to it that are not in other groups.

switch zoning. See zoning.

synchronous write mode. In remote mirroring, an option that requires the primary controller to wait for the acknowledgment of a write operation from the secondary controller before returning a write I/O request completion to the host. See also asynchronous write mode, remote mirroring, Metro Mirroring.

system name. Device name assigned by the vendor’s third-party software.

TCP. See Transmission Control Protocol.

TCP/IP. See Transmission Control Protocol/Internet Protocol.

terminate and stay resident program (TSR program). A program that installs part of itself as an extension of DOS when it is executed.

topology. The physical or logical arrangement of devices on a network. The three fibre-channel topologies are fabric, arbitrated loop, and point-to-point. The default topology for the disk array is arbitrated loop.

TL_port. See translated loop port.

transceiver. A device that is used to transmit and receive data. Transceiver is an abbreviation of transmitter-receiver.

translated loop port (TL_port). A port that connects to a private loop and allows connectivity between the private loop devices and off-loop devices (devices not connected to that particular TL_port).

Transmission Control Protocol/Internet Protocol (TCP/IP). A set of communication protocols that provide peer-to-peer connectivity functions for both local and wide-area networks.

trap. In the Simple Network Management Protocol (SNMP), a message sent by a managed node (agent function) to a management station to report an exception condition.

trap recipient. Receiver of a forwarded Simple Network Management Protocol (SNMP) trap. Specifically, a trap receiver is defined by an Internet Protocol (IP) address and port to which traps are sent. Presumably, the actual recipient is a software application running at the IP address and listening to the port.

TSR program. See terminate and stay resident program.

uninterruptible power supply. A source of power from a battery that is installed between a computer system and its power source. The uninterruptible power supply keeps the system running if a commercial power failure occurs, until an orderly shutdown of the system can be performed.

user action events. Actions that the user takes, such as changes in the storage area network (SAN), changed settings, and so on.

worldwide port name (WWPN). A unique identifier for a switch on local and global networks.

worldwide name (WWN). A globally unique 64-bit identifier assigned to each Fibre Channel port.

WORM. See write-once read-many.

write-once read-many (WORM). Any type of storage medium to which data can be written only a single time, but can be read from any number of times. After the data is recorded, it cannot be altered.

WWN. See worldwide name.

zoning. (1) In Fibre Channel environments, the grouping of multiple ports to form a virtual, private storage network. Ports that are members of a zone can communicate with each other, but are isolated from ports in other zones. (2) A function that allows segmentation of nodes by address, name, or physical port and is provided by fabric switches or hubs.
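The sweep method described above amounts to enumerating every address on a subnet and sending each one an SNMP request. As a minimal sketch of the enumeration step only (the 192.168.1.0/29 subnet is an arbitrary example, and the actual SNMP request is omitted), Python's standard ipaddress module can produce the target list:

```python
import ipaddress

def sweep_targets(cidr: str) -> list[str]:
    """Return every usable host address on the subnet, i.e. the set of
    devices a sweep-method SNMP request would be sent to."""
    network = ipaddress.ip_network(cidr)
    # hosts() excludes the network and broadcast addresses.
    return [str(host) for host in network.hosts()]

# A /29 subnet has 6 usable host addresses.
targets = sweep_targets("192.168.1.0/29")
print(len(targets), targets[0], targets[-1])  # → 6 192.168.1.1 192.168.1.6
```

A real sweep would then send an SNMP GET to each address in the list; tools differ in how they parallelize and time out those requests.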
Glossary 141
142 IBM System Storage DS4000 Storage Manager Version 9.23: Concepts Guide
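The glossary defines a worldwide name (WWN) as a globally unique 64-bit identifier assigned to each Fibre Channel port. As an illustration only (the value below is made up, and the colon-separated hex-pair display is the convention used by most Fibre Channel management tools rather than something this guide mandates), a 64-bit WWN can be rendered like this:

```python
def format_wwn(value: int) -> str:
    """Render a 64-bit worldwide name as colon-separated hex pairs,
    the notation commonly shown by switch and storage management tools."""
    if not 0 <= value < 2**64:
        raise ValueError("a WWN is a 64-bit identifier")
    raw = value.to_bytes(8, "big")  # big-endian, as WWNs are displayed
    return ":".join(f"{b:02x}" for b in raw)

print(format_wwn(0x200000E08B050504))  # → 20:00:00:e0:8b:05:05:04
```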
Index
A
about this document xi
access volume 24
add Storage Subsystem option 28
address, IBM xix
ADT feature 48, 50
Advanced menu 35
AIX and Sun Solaris, failover protection 50
alert destinations 75
   configuration 86
   configuring 86
   setting 85
alert notification
   configuring alert destinations 86
   mail server configuration 85
   overview 85
   selecting the node 85
   setting 87
   setting alert destinations 85
array 13, 47, 52
Array menu 33
asynchronous write mode 64
audience xi
Auto-Logical Drive Transfer (ADT) feature 48
automatic discovery option 28

B
background media scan 56

C
cache flush
   described 54
   performance impacts 54
   settings 54
   start percentage 54
   stop flush percentage 54
cache hit
   optimizing 95
   percentage 95
cache read-ahead, choosing a multiplier 94
capacity
   Dynamic Capacity Expansion (DCE) 48
   free 14
   free and unconfigured 66
   unconfigured 14
channel protection, using 69
coexisting storage subsystems, managing 26
command line interface (SMcli)
   examples 43
   overview 38
   parameters 39
   usage and formatting requirements 42
   using 38
comments about this document, how to send xix
components
   button 92
   software 15
   storage subsystem 12
Concepts Guide 119
configuration
   mail server 85
   sender address 85
   storage subsystem 69
Contacting Device status 79
controller
   cache memory, data protection 54
   description 13
   enclosure 13, 54
   transfer rate, optimizing 94
Controller menu 34
copy services
   Enhanced Remote Mirroring option 59
   FlashCopy 59
   VolumeCopy 59
Copy Services Guide 119
copyback 55
critical event
   notification 85, 86
   problem solving 97
customer support alert notification
   how to configure 85

D
data
   backing up 63
   copying for greater access 63
   path failover protection 48
   protection 116
   protection in the controller cache memory 54
   protection strategies 45
   redundancy 52
   restoring FlashCopy logical drive data 63
DCE (Dynamic Capacity Expansion) 48
default host group, defined 71
default logical drive-to-LUN mapping
   defined 72
default LUN 71
default settings for failover protection 50
device drivers
   downloading latest versions 1
Device Table 78
Device Tree 78
DHCP/BOOTP server 22, 25
direct (out-of-band) management method
   advantages 22
   described 22
   disadvantages 22
directly managed storage subsystems 22
disk access, minimize 96
document organization xv
Fibre Channel I/O (continued)
   size 94
Fibre Channel switches 21
files, defragmenting 96
fire suppression xix
firmware
   downloading 80
   new features
      version 6.10.xx.xx 9
      version 6.12.xx.xx 9
   updating in the storage expansion enclosures 79
   updating in the storage subsystem 79
   version 6
Fixing status 78
FlashCopy
   description 59
   logical drive 45
   overview 11, 62
   repository logical drive 45
   script scenarios 59
form, reader comment xix
free capacity 14, 66
free-capacity nodes 69
full synchronization 66

G
Global Copy 64
Global Mirroring 64
glossary 133
graphical user interface (GUI)
   managing the storage subsystem 27

H
hardware
   requirements 20
hardware components
   DHCP server, BOOTP or BOOTP compliant 20
   file server 21
   host computer 21
   management station 20
   network-management station 20
   storage subsystem 21
hardware service and support xix
Help menu 35
heterogeneous hosts
   defining types 72
   overview 72
host adapters 20
host bus adapters 20
host computer 7, 71
host group
   definition 70
   description 71
host port
   defined 71, 72
   discovery of 71
host-agent managed storage subsystems 24
host-agent management method
   advantages 24
   described 24
   disadvantages 24
Hot Add utility 17
hot spare drive
   configuring 55
   defined 55
how to send your comments xix
HP-UX, failover protection 50

I
I/O access pattern and I/O size 94
I/O data field 93
I/O data path protection 48
I/O request rate
   impact from cache flush settings 54
   optimizing 94
I/O transfer rate, optimizing 94
IBM address xix
IBM Safety Information 128
Intermix
   enabling with NVSRAM (firmware 6.10.xx.xx) 11
   enabling with premium feature key 9

L
Linux
   failover protection 50
local storage subsystems 63
Logical Drive menu 33
logical drive types
   primary 65
   secondary 65
logical drive-to-LUN mapping
   default 72
   defined 71
   specific 71
logical drive-to-LUN terminology
   default host group 71
   host 71
   host group 71
   host port 71
   mapping 71, 72
   storage partition topology 70
   storage partitions 70
   mapping preference 72
logical drives
   base 62
   creating step-by-step 69
   definition 13
   Dynamic Logical Drive Expansion (DVE) 47
   FlashCopy 45, 62
   FlashCopy repository 45
   mirror relationship 67
   mirror repository 46, 66
   missing 84
   modification priority setting 95
   overview 45
   primary 46
   recovering 84
logical drives (continued)
   repository 62
   secondary 46
   source 46
   standard 45
   target 46
   VolumeCopy 63
Logical/Physical view 29, 78
Logical/Physical View 31
LUN
   address space 71
   defined 71

M
machine types and supported software 3
mail server configuration 85
managed hub 20
management domain, populating 113
   automatic discovery option 28
   overview 28
   using Add Storage Subsystem 28
management methods for storage subsystem
   direct (out-of-band) management method 22
   host-agent management method 24
management station 7, 20
management, storage subsystem
   direct (out-of-band) 22
   host-agent 24
   overview 21
Mappings menu 33
Mappings View 31
media scan
   changing settings 56
   duration 59
   errors reported 57
   overview 56
   performance impact 57
   settings 58
medical imaging applications 53
menus, Subsystem Management window 31
Microsoft Windows failover protection 49
Migration Guide 119
mirror relationships 67
mirror repository 65
mirror repository logical drives 46, 66
missing logical drives, viewing and recovering 84
MPIO 18
multi-user environments 53
multimedia applications 53

N
Needs Attention
   icon 79
   status 78
new features 7
new features in this edition 7
notes, important 132
notices xvi, 131
notification
   alert 85
   configuring alert destinations 86
   failure 79
   of events 116
   selecting the node 85
   setting alert destinations 85
   setting alert notifications 87
Novell NetWare failover protection 49
NVSRAM, downloading
   from a firmware image 81
   from a standalone image 81

O
online help systems
   configuring storage partitions 115
   configuring storage subsystems 113, 114
   Enterprise Management window 115
   event notification 116
   miscellaneous system administration 117
   performance and tuning 118
   populating a management domain 113
   protecting data 116
   recovering from problems 117
   security 118
   Subsystem Management window 113, 114, 115
   using a script editor 114
operating system specific failover protection 49
organization of the document xv
overall health status 78
ownership, preferred controller 51

P
parallel drive firmware download 82
parameters, SMcli 39
parity 52
password protection, configuring 68
performance and tuning 118
performance monitor 93
Persistent Reservations, managing 67
physical view, subsystem-management window 29
point-in-time (PIT) image 62
power outage 54
preferred controller ownership 51
premium feature support
   restrictions 60
premium features
   Enhanced Remote Mirroring option 59
   FlashCopy 59
   Intermix 9, 11
   VolumeCopy 59
primary logical drive 46
primary logical drives 66
priority setting, modification 95
problem recovery 117
problem solving, critical event 97
Q
quick reference status
   Contacting Device 79
   Fixing 78
   Optimal 78
   Optimal status 78
   Unresponsive 79

R
RAID level
   and channel protection 69
   application behavior 53, 95
   choosing 53, 95
   configurations 52
   data redundancy 52
   described 52
RAID-0
   described 52
   drive failure consequences 52
RAID-1
   described 53
   drive failure consequences 53
RAID-3
   described 53
   drive failure consequences 53
RAID-5
   described 53
   drive failure consequences 53
RDAC feature 49
reader comment form processing xix
reconstruction 55
Recovery Guru
   Recovery Procedure 89
   Summary area 89
   window 89
redundancy of Fibre Channel arbitrated loops 69
Redundant disk array controller (RDAC) 16
reference, task 113
remote mirror setup, logical drive types 65
remote storage subsystems 63
renaming 2
requirements
   hardware 20
   SMcli 42
resources
   Web sites xvii
restrictions
   premium feature support 60
resynchronization methods 65
role reversal 66

S
sample network, reviewing 25
script editor
   adding comments to a script 37
   using 36, 114
   window 35
secondary logical drive 46, 66
security 118
segment size, choosing 96
sender address configuration 85
sending your comments to IBM xix
settings, media scan 58
Simple Network Management Protocol (SNMP)
   traps 25
SMagent disk space requirements 16
SMcli
   examples 43
   overview 38
   parameters 39
   usage and formatting requirements 42
   using 38
SMclient 15
SMdevices utility 17
software components
   RDAC 16
   SMagent 16
   SMclient 15
software, supported 3
source logical drive 46
staged controller firmware download 81
start percentage, cache flush 54
stop percentage, cache flush 54
storage area network (SAN)
   technical support Web site xviii
storage expansion enclosure 13
storage expansion enclosures, updating the firmware 79
storage management software
   Enterprise Management window 27
   hardware requirements
      BOOTP server 20
   installation requirements 20
   new terminology 6
   Subsystem Management window 29
Storage Manager 9.1 client 15
Storage Manager software
   new features 9
Storage Manager Utility (SMutil) 17
storage partition
   feature 11, 72
   switch zoning 70
storage partition topology, defined 70
storage partitioning specifications 14
storage partitions
   configuring 115
   creating 70
   described 70
   description 14
   enabled 14
   feature key 72
   major steps to creating 87
storage subsystem
   components 12
   configuration 69, 114
   creating logical drives 69
   description 20
   device tree 28
   failure notification 79
storage subsystem (continued)
   hardware requirements 20
   logical components 13
   maintaining and monitoring 75
   maintaining in a management domain 78
   managing using the graphical user interface 27
   password protection configuration 68
   physical components 13
   quick reference status icon 78
   status quick reference 78
   tuning options available 93
   updating the firmware 79
storage subsystem management
   direct (out-of-band) 22
   host-agent 24
   overview 21
Storage Subsystem menu 32
storage subsystems
   coexisting 26
   directly managed 22
   host-agent managed 24
   local and remote 63
   tuning 93
storage subsystems maintenance
   in a management domain 78
   overview 75
storage-partition mapping preference
   defined 72
storage-subsystem failures, recovering from 89
Subsystem Management window 79
   Advanced menu 35
   Array menu 33
   component of SMclient 15
   Controller menu 34
   Drive menu 34
   event log 97
   Help 1
   Help menu 35
   Logical Drive menu 33
   Logical/Physical View 31
   Mappings menu 33
   Mappings View 31
   menus 31
   monitoring storage subsystems with 75
   overview 29
   Storage Subsystem menu 32
   tabs 30
   View menu 32
supported software 3
suspend and resume mirror synchronization 64
switch
   technical support Web site xviii
   zoning 70
system administration 117

T
target logical drive 46
task reference 113
tasks by document title 119
tasks by documentation title 119
terminology 6
topological elements, when to define 70
trademarks 131
transfer rate 93

U
unconfigured capacity 14, 66
unconfigured nodes 69
UNIX BOOTP server 20
Unresponsive status 79

V
version, firmware 6
View menu 32
VolumeCopy
   backing up data 63
   copying data for greater access 63
   description 59
   overview 11, 63
   restoring FlashCopy logical drive data to the base logical drive 63

W
Web sites
   AIX fix delivery center xviii
   DS4000 interoperability matrix xvii
   DS4000 storage subsystems xvii
   DS4000 technical support xviii
   IBM publications center xviii
   IBM System Storage products xvii
   Linux on POWER support xix
   Linux on System p support xix
   list xvii
   premium feature activation xviii
   readme files xvii
   SAN support xviii
   switch support xviii
who should read this document xi
window, script editor 35
write cache mirroring
   described 54
   how to enable 54
write caching
   and data loss 54
   and performance 54
   enabling 95
write order consistency 64

Z
zoning 70
Readers’ comments — we would like to hear from you.
IBM System Storage DS4000 Storage Manager Version 9.23
Concepts Guide
We appreciate your comments about this publication. Please comment on specific errors or omissions, accuracy,
organization, subject matter, or completeness of this book. The comments you send should pertain to only the
information in this manual or product and the way in which the information is presented.
For technical questions and information about products and prices, please contact your IBM branch office, your IBM
business partner, or your authorized remarketer.
When you send comments to IBM, you grant IBM a nonexclusive right to use or distribute your comments in any
way it believes appropriate without incurring any obligation to you. IBM and any other organizations will use the
personal information that you supply only to contact you about the issues that you state on this form.
Comments:
If you would like a response from IBM, please fill in the following information:
Name
Address
Company or Organization
GC26-7734-04
Printed in USA