FibreCAT SX
Operating Manual
Certified documentation
according to DIN EN ISO 9001:2000
To ensure a consistently high quality standard and
user-friendliness, this documentation was created to
meet the regulations of a quality management system which
complies with the requirements of the standard
DIN EN ISO 9001:2000.
cognitas. Gesellschaft für Technik-Dokumentation mbH
www.cognitas.de
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Important Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4 Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5 Installing Enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
6 Connecting Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
CAUTION    Reference to hazards that can lead to personal injury, loss of data or damage to equipment
Table 1: Typographic Conventions
Step  Task
1     Verify the site installation requirements (see page 20)
2     Install the enclosures (see page 45)
3     Cable the enclosures (see page 53)
4     Connect the power cords (see page 56)
5     Test the enclosure connections (see page 56)
6     Connect the hosts (see page 57)
7     Configure the system for the first time (see page 65)
Table 12: Installation and Configuration Checklist
CAUTION!
● The device must only be connected to a properly grounded wall outlet (the
device is fitted with a tested and approved power cable).
● Make sure that the power sockets on the device and the protective grounded
outlet of the building’s wiring system are freely accessible.
● Switching off the device does not cut off the supply of power. To do this you must
remove the power plugs.
● Before opening the unit, switch off the device and then pull out the power plugs.
● Route the cables in such a way that they do not form a potential hazard (make
sure no-one can trip over them) and that they cannot be damaged. When
connecting up a device, refer to the relevant notes in this manual.
● Never connect or disconnect data transmission lines during a storm (lightning
hazard).
● Systems which comprise a number of cabinets must use a separate fused
socket for each cabinet.
● The servers and the directly connected external storage subsystems should be
connected to the same power supply distributor. Otherwise you run the risk of
losing data if, for example, the central processing unit is still running but the
storage subsystem has failed during a power failure.
● Make sure that no objects (such as bracelets or paper clips) fall into the device
and that no liquids spill into it (risk of electric shock or short circuit).
● In emergencies (e.g. damage to housings, power cords or controls or ingress of
liquids or foreign bodies), immediately power down the device, pull out the
power plugs and notify your service department.
● Note that proper operation of the system (in accordance with IEC 60950/DIN
EN 60950) is guaranteed only if slot covers are installed on all vacant slots
and/or dummies on all vacant bays and the housing cover is fitted (cooling, fire
protection, RFI suppression).
You must follow the instructions below when handling modules containing
electrostatic-sensitive components:
● Discharge static electricity from your body (for example by touching a grounded metal object) before handling modules containing electrostatic-sensitive components.
● The equipment and tools you use must be free of static charge.
● Remove the power plug before installing or removing modules containing electrostatic-sensitive components.
● Only hold modules containing electrostatic-sensitive components by their edges.
● Do not touch any of the pins or track conductors on a module containing electrostatic-sensitive components.
● Use a grounding strap designed for the purpose to connect you to the system unit as you install the modules.
● Place all components on a static-safe base.
Note: An exhaustive description of the handling of modules containing electrostatic-sensitive components can be found in the relevant European and international standards (DIN EN 61340-5-1, ANSI/ESD S20.20).
2.3 CE Certificate
The shipped version of this device complies with the requirements of the EEC
directives 89/336/EEC “Electromagnetic compatibility” and 73/23/EEC “Low
voltage directive”. The device therefore qualifies for the CE certificate
(CE = Communauté Européenne).
This product has been designed in accordance with standards for “environmentally friendly
product design and development”. This means that the designers have taken into account
important criteria such as durability, selection of materials, emissions, packaging, the ease
with which the product can be dismantled and the extent to which it can be recycled.
This saves resources and thus reduces the harm done to the environment.
Notes on packaging
Please do not throw away the packaging. We recommend that you keep the original
packaging in case you need it later for transport.
Please avoid attaching your own labels to plastic housing parts wherever possible, since
this makes it difficult to recycle them.
The device must not be disposed of with household rubbish. This appliance is
labelled in accordance with European Directive 2002/96/EC concerning used
electrical and electronic appliances (WEEE - waste electrical and electronic
equipment).
The directive defines the framework for the return and recycling of used
appliances throughout the EU. To return your used device, please use the
return and collection systems available to you. You will find further
information on this at www.fujitsu-siemens.com/recycling.
For details on returning and reuse of devices and consumables within Europe, refer to the
“Returning used devices” manual, or contact your Fujitsu Siemens Computers branch
office/subsidiary or our recycling centre in Paderborn:
Fujitsu Siemens Computers
Recycling Center
D-33106 Paderborn
Tel. +49 5251 8180-10
Fax +49 5251 8180-15
The floor space at the installation site must be strong enough to support the combined
weight of the rack, controller enclosures, expansion modules, and any additional
equipment. The site must provide sufficient space for installing, operating, and servicing
the enclosures, and sufficient ventilation to allow a free flow of air to all enclosures.
The following table lists enclosure dimensions and weights. Weights are based on an
enclosure having 12 drive modules, 2 controller or expansion modules, and 2 power and
cooling modules installed.
Specification                                     Rackmount
Height                                            2U (8.76 cm)
Width
●  Chassis excluding mounting ears                44.6 cm
●  Chassis including mounting ears                48.0 cm
Depth
●  Chassis                                        55.37 cm
●  To back of power and cooling module handle     57.12 cm
Weight, controller enclosure (12 drives)
●  SAS drives                                     33.1 kg
●  SATA drives                                    33.6 kg
Weight, expansion enclosure (12 drives)
●  SAS drives                                     30.8 kg
●  SATA drives                                    31.3 kg
Table 13: Dimension and Weight Specification Examples
Specification        Range
Altitude             To 3 km; derate 2 °C for every 1 km up to 3 km
Relative Humidity    10% to 90% RH, 27 °C max. wet bulb, non-condensing
Temperature          5 °C to 40 °C, non-condensing
Shock                3.0 g, 11 ms, half-sine
Vibration            0.15 g (vertical), 0.10 g (horizontal), 5 to 500 Hz, swept-sine
Table 14: Environmental Requirements
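For example, applying the altitude derating to the 40 °C maximum operating temperature gives 38 °C at 1 km, 36 °C at 2 km, and 34 °C at 3 km altitude.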
Each enclosure is shipped with two AC power cords that are appropriate for use in a typical
outlet in the destination country. Each power cord should connect one of the power and
cooling modules to an independent, external power source. To ensure power redundancy,
connect the two power cords to two separate circuits; for example, to one commercial circuit
and to one uninterruptible power source (UPS).
The safety status of the I/O connections complies with Separated Extra Low Voltage (SELV)
requirements.
Each enclosure has two power and cooling modules for redundancy. If full redundancy is
required, use a separate power source for each module. The AC power supply unit in each
power and cooling module is auto-ranging and is automatically configured to an input
voltage range from 88–264 VAC with an input frequency of 47–63 Hz. The power and
cooling modules meet standard voltage requirements for both U.S. and international
operation. The power and cooling modules use standard industrial wiring with line-to-
neutral or line-to-line power connections.
As you prepare for installation, follow these requirements:
● All AC mains and supply conductors to power distribution boxes for the rack-mounted
system must be enclosed in a metal conduit or raceway when specified by local,
national, or other applicable government codes and regulations.
● Ensure that the voltage and frequency of your power source match the voltage and
frequency inscribed on the equipment’s electrical rating label.
● To ensure redundancy, provide two separate power sources for the enclosures. These
power sources must be independent of each other, and each must be controlled by a
separate circuit breaker at the power distribution point.
● The system requires a stable supply voltage. The customer-supplied facility voltage
must not fluctuate by more than ±5 percent (for a 230 V nominal supply, for example,
218.5 V to 241.5 V). The customer facilities must also provide suitable surge protection.
● Site wiring must include an earth ground connection to the AC power source. The
supply conductors and power distribution boxes (or equivalent metal enclosure) must
be grounded at both ends.
● Power circuits and associated circuit breakers must provide sufficient power and
overload protection. To prevent possible damage to the AC power distribution boxes and
other components in the rack, use an external, independent power source that is
isolated from large switching loads (such as air conditioning motors, elevator motors,
and factory loads).
3.1.1 Components
[Figure: Controller enclosure back panel showing the enclosure ID LED, the power switch, the link speed settings, and FC host ports 0 and 1 of each controller module]
Port/Switch       Description
Power switch      Toggle, where “|” is On and “O” is Off.
Host ports        4-Gbps FC ports used to connect to data hosts. Each port contains an SFP¹ transceiver. Host ports 0 and 1 connect to host channels 0 and 1, respectively.
CLI port          Micro-DB9 port used to connect the controller module to a local management host using RS-232 communication for out-of-band configuration and management.
Ethernet port     10/100BASE-T Ethernet port used for TCP/IP-based out-of-band management of the RAID controller. An internal Ethernet device provides standard 10 Mbit/second and 100 Mbit/second full-duplex connectivity.
Expansion port    3-Gbps, 4-lane (12 Gbps total) table-routed egress port used to connect SAS expansion enclosures.
Table 17: Controller Enclosure Ports and Switches (Back)
¹ The SFPs are part of the controller modules and must not be removed (SFP = Small Form-factor Pluggable).
[Figure: Controller enclosure back-panel LEDs: AC Power Good; DC Voltage/Fan Fault/Service Required; host link status; host link speed; host activity; cache status; expansion port status]
3.2.1 Components
Description                      Quantity
SAS expansion module             1 or 2¹
SAS or SATA drive module         2–12 per enclosure
AC power and cooling module      2 per enclosure
3-Gbps, 4-lane SAS In port       1 per expansion module
3-Gbps, 4-lane SAS Out port      1 per expansion module
Table 20: Expansion Enclosure Components
¹ Air management system drive blanks or I/O blanks must fill empty slots to maintain optimum airflow through the chassis.
[Figure: Expansion enclosure back panel showing the power switches and the SAS In and SAS Out ports of each expansion module]
Port/Switch    Description
Power switch   Toggle, where “|” is On and “O” is Off.
SAS In port    3-Gbps, 4-lane (12 Gbps total) subtractive ingress port used to connect to a controller enclosure.
SAS Out port   3-Gbps, 4-lane (12 Gbps total) table-routed egress port used to connect to another expansion enclosure.
Table 21: Expansion Enclosure Ports and Switches (Back)
[Figure: Expansion enclosure back-panel LEDs: AC Power Good; DC Voltage/Fan Fault/Service Required; SAS In port status; SAS Out port status]
[Figure: A data host with two HBAs connected directly to host ports P0/P1 of controllers A and B; Ethernet management connections to the LAN; host port interconnect enabled]
Figure 6: Direct Attached FibreCAT SX for Windows and Linux
The figure shows the preferred DAS (Direct Attached Storage) redundant controller and path
configuration. This configuration requires the host port interconnect circuitry between the
controller modules to be enabled, and host-based multipathing software is necessary for
path failover. The cabling example shows a high-availability configuration.
Configuration Rules
FC-Topology:               Arbitrated Loop (FC-AL)
FC Speed:                  max. 4 Gbit/s (note “Host Interface Speed for FibreCAT SX60 / SX80 in Direct Attached Configurations” on page 42)
Host Port Interconnect:    Interconnected
Path-Failover Software:    FSC DDM V3; native MPIO (DSM); FSC Multipath V4; native DM-MP (RedHat, SuSE)
[Figure: The same direct attached configuration during a controller failover; host port interconnect enabled]
Figure 7: Direct Attached FibreCAT SX (Controller Failover Scenario)
[Figure: The same direct attached configuration during a path failover; the multipathing software redirects I/O through the surviving path and the enabled host port interconnect]
Figure 8: Direct Attached FibreCAT SX (Path Failover Scenario)
In a path-failover scenario, the host sees its B LUNs via the host port interconnect. The filter
driver (multipathing software) redirects the B-LUN I/Os through the other HBA.
[Figure: A data host with two HBAs and multipathing software connected through switches A and B (MMF cabling) to host ports of controllers A and B; host port interconnect straight-through; the host reaches A+B LUNs]
Figure 9: Switch Attached FibreCAT SX for Windows, Linux, VMware and - not for FibreCAT SX60 - Solaris
This configuration requires the host port interconnect circuitry between the controller
modules to be straight through. The cabling example shows a controller and path
high-availability configuration. For path failover, this configuration requires host-based
multipathing software. VMware is not supported in direct attached environments, only in
FC switch configurations.
Configuration Rules
FC-Topology:                       Arbitrated Loop (FC-AL)
FC Speed:                          2 / 4 Gbit/s
Host Port Interconnect:            Straight through
Max. Members in Switch Zone:       only 1 initiator and 1 target per zone
Path-Failover Software Linux:      native DM-MP (RedHat, SuSE)
Path-Failover Software Windows:    FSC DDM V3, native MPIO (DSM), FSC Multipath V4
Path-Failover Software VMware:     native MP
FibreCAT SX Configuration:
● Host Port Configuration Link Speed: 4 Gbit/s or 2 Gbit/s (depending on HBA)
● Internal Host Port Interconnect: Straight-through
[Figure: A data host connected directly to host ports P0/P1 of controllers A and B; the host reaches A+B LUNs; host port interconnect enabled]
Configuration Rules
FC-Topology:               Arbitrated Loop (FC-AL)
FC Speed:                  max. 4 Gbit/s (note “Host Interface Speed for FibreCAT SX60 / SX80 in Direct Attached Configurations” on page 42)
Host Port Interconnect:    Interconnected
Path-Failover Software:    native MPxIO
[Figure: A data host connected to host ports P0/P1 of controllers A and B; the host reaches A+B LUNs; Ethernet management connections to the LAN]
Configuration Rules
FC-Topology:                    Arbitrated Loop (FC-AL)
FC Speed:                       2 / 4 Gbit/s
Host Port Interconnect:         Straight through
Max. Members in Switch Zone:    only 1 initiator and 1 target per zone
Path-Failover Software:         native MPxIO
The following figures show how to find out the HIM model of your controller module(s):
Figure 12: Detecting the HIM Model with FibreCAT SX Manager’s WBI (Example with two HIM Models 0)
Figure 13: Detecting the HIM Revision with FibreCAT SX Manager’s WBI (Example with two HIM Models 1)
[Figure 15 shows the mounting outline for PRIMECENTER racks: the positions of the positioning tappets and of the screws and nuts on the front and rear rack posts, repeated for each height unit (1. U, 2. U) of the two enclosures.]
Legend
*: height units (U), counted starting from the bottom line of the device
**: “rear left” and “rear right”: seen from the rear side of the rack; “front left” and “front right”: seen from the front side of the rack
(nut symbol): M5 square caged nut
(screw symbol): M5 screw with centering washer (“mounting spring”)
Figure 15: Rack Post Mounting Positions of Two Enclosures
2. Place the right rail (support angle down) into the rack from the front, putting the
positioning tappet into the appropriate hole of the right rear rack post.
3. Compress the spring-mounted rail to the required length and screw the front rail end
onto the right front rack post as shown below.
Figure 18: Mounting the Right Sliding Rail (Front Side of the Rack)
Figure 19: Mounting the Right Sliding Rail (Rear Side of the Rack)
Figure 20: Mounting the Left Sliding Rail (Rear Side of the Rack)
6. Mount two square cage nuts to the front rack posts (one to the right and one to the left
post), into the post holes between the rail screws. See the figure below for the
position of a nut. The nuts must be inserted from inside the rack and lock in place in the
square post holes.
Figure 21: Mounting Position of a Cage Nut Between the Rail Screws
7. From the front of the rack, place the enclosure on the support angles of the sliding rails
and push it into the rack up to the back stop.
8. Screw the enclosure to both cage nuts mounted to the front rack posts, as shown
below for the right side of the enclosure.
[Figure: Fault-tolerant cabling between the controller enclosure (controllers A and B with FC ports 0 and 1) and expansion enclosures 1–4 (expansion modules 1A/1B through 4A/4B), shown for one expansion enclosure (Controller + 1) and for four (Controller + 4)]
Figure 23: Fault-Tolerant Cabling Connections Between Controller and Expansion Enclosures
[Figure: Non-fault-tolerant cabling between the controller enclosure (controllers A and B with FC ports 0 and 1) and expansion enclosures 1–4 (expansion modules 1A/1B through 4A/4B)]
Figure 24: Non-Fault-Tolerant Cabling Connections Between Controller and Expansion Enclosures
[Figure: Two data hosts, each with one HBA and access to A+B LUNs, connected directly to host ports P0/P1 of controllers A and B; host port interconnect enabled]
Figure 25: High-Availability, Dual-Controller, Direct Attach Connection to Dual Data Hosts
The following figure shows a non-redundant path-failover configuration that can be used
when high performance is more important than high availability. This configuration requires
the host port interconnect circuitry to be disabled, which it is by default.
[Figure: Two data hosts, each with two HBAs (separate paths for A LUNs and B LUNs, MMF cabling up to 300 m), connected directly to host ports of controllers A and B; host port interconnect straight-through]
Figure 26: High-Performance, Dual-Controller, Direct Attach Connection to Dual Data Hosts (No Solaris Support)
Note “Host Interface Speed for FibreCAT SX60 / SX80 in Direct Attached Configurations”
on page 42.
The following figure shows a redundant connection through a single switch to a single data
host with two HBA ports. This configuration requires that host port interconnects are
disabled. It also requires host-based multipathing software.
[Figure: A data host with two HBA ports connected through a single switch (MMF cabling up to 300 m) to host ports of controllers A and B; host port interconnect straight-through; the host reaches A+B LUNs]
Figure 27: Redundant Connection Through a Single Switch to a Single Data Host
[Figure: A data host with two HBAs and multipathing software connected through a single switch to host ports of controllers A and B; host port interconnect enabled; the host reaches A+B LUNs]
Figure 28: Minimal Connection Through a Single Switch to a Single Data Host
Note “Host Interface Speed for FibreCAT SX60 / SX80 in Direct Attached Configurations”
on page 42.
4. Using fiber optic cables, connect each of the two switches to FC HBAs on the host as
shown in the following figure.
This configuration requires the host port interconnect circuitry between the controller
modules to be straight through. The cabling examples show a controller and path
high-availability configuration. For path failover, this configuration requires host-based
multipathing software. The controller enclosure is equipped with two FC RAID controllers
and the host has two FC HBAs.
[Figure: Data hosts connected through two switches to host ports P0/P1 of controllers A and B; host port interconnect straight-through]
Figure 29: Redundant, High-Availability Connection Through Switches to Dual Data Hosts
2. Use the provided micro-DB9 serial cable to connect controller A to a serial port on a
host computer.
¹ If HyperTerminal is not part of your Windows installation, install it from your Windows data medium: Start > Control Panel > Add or remove programs > Add/Remove Windows Components > Accessories and Utilities > Details > Communications > Details > HyperTerminal (check box) > OK
Communication settings    Values
Baud Rate                 115,200
Data Bits                 8
Stop Bits                 1
Parity                    None
Flow Control              None
Connector                 COM1 (typically)
Table 27: Terminal Emulator Connection Settings
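If you prefer to script the serial connection instead of using a terminal emulator, the settings from Table 27 can be applied programmatically. The following is a minimal sketch using the third-party Python package pyserial; the port name is an example and depends on your host.

    # Minimal sketch: open the controller CLI serial port with the
    # settings from Table 27. Requires the third-party "pyserial"
    # package; the port name is host-specific (e.g. /dev/ttyS0 on Linux).
    import serial

    cli = serial.Serial(
        port="COM1",                    # Connector: COM1 (typically)
        baudrate=115200,                # Baud Rate: 115,200
        bytesize=serial.EIGHTBITS,      # Data Bits: 8
        stopbits=serial.STOPBITS_ONE,   # Stop Bits: 1
        parity=serial.PARITY_NONE,      # Parity: None
        xonxoff=False, rtscts=False,    # Flow Control: None
        timeout=2,
    )
    cli.write(b"\r")                    # request a CLI prompt
    print(cli.read(256).decode("ascii", errors="replace"))
    cli.close()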
where:
<address> is the IP address of the controller.
<netmask> is the subnet mask, in dotted-decimal format.
<gateway> is the IP address of a default router.
a|b specifies the controller whose network parameters you are setting. The upper
controller is a; the lower controller is b.
6. Type show network-parameters to verify the new IP addresses.
7. Type exit to exit the CLI.
8. Verify Ethernet connectivity by typing ping <ip-address> in the host computer's
command prompt window.
9. Connect controller B to the host computer and connect to it with the terminal emulator.
10. Repeat Step 5–Step 8 to configure controller B.
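Step 8 can also be scripted once both controllers are configured. The following sketch pings both controllers from the host; the addresses are examples only (substitute the values you assigned in step 5), and the ping count option differs between Windows (-n) and other systems (-c).

    # Minimal sketch: verify Ethernet connectivity to both controllers.
    # The addresses are examples; use the values assigned in step 5.
    import platform
    import subprocess

    controllers = {"a": "192.168.0.10", "b": "192.168.0.11"}
    count_flag = "-n" if platform.system() == "Windows" else "-c"

    for name, address in controllers.items():
        result = subprocess.run(["ping", count_flag, "2", address],
                                capture_output=True)
        state = "reachable" if result.returncode == 0 else "NOT reachable"
        print(f"controller {name} ({address}): {state}")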
4. Press the power switches at the back of each expansion enclosure to the On position.
While enclosures power up, their LEDs turn on and off intermittently. After the LEDs
stop blinking, if no LEDs on the front and back of the enclosure are yellow, the power-
on sequence is complete and no faults have been detected.
5. Press the power switches at the back of the controller enclosure to the On position.
If the enclosure’s power-on sequence succeeded as described in Step 4, the system is
ready to use.
active-active
Synonym for dual active components or controllers. A pair of components, such
as the controllers in a failure-tolerant storage subsystem, that share a task or
class of tasks when both are functioning normally. When one of the components
fails, the other takes on the entire task. Dual active controllers are connected to
the same set of storage devices, improving both I/O performance and failure
tolerance compared to a single controller. (SNIA)
address
A data structure or logical convention used to identify a unique entity, such as a
particular process or network device.
AL_PA
See arbitrated loop physical address (AL_PA).
ANSI
American National Standards Institute
API
Application Programming Interface
ARP
Address Resolution Protocol
array
See storage system.
block
A single sector on a disk. The smallest unit of data stored (written) to or
retrieved (read) from a disk.
broadcast write
Technology that provides simultaneous caching of write data to both RAID
controllers’ cache memory with positive direct memory access
acknowledgement (certified DMA).
cache
A high speed memory or storage device used to reduce the effective time
required to read data from or write data to a lower speed memory or device.
Read cache holds data in anticipation that it will be requested by a client. Write
cache holds data written by a client until it can be safely stored on more
permanent storage media such as disk or tape. (SNIA)
See also write-back cache, write-through cache.
capacitor pack
The controller module component that provides backup power to transfer
unwritten data from cache to Compact Flash memory in the event of a power
failure. Storing the data in Compact Flash provides unlimited backup time. The
unwritten data can be committed to the disk drives when power is restored.
channel
A physical path used for the transfer of data and control information between
storage devices and a RAID controller or a host; or, a SCSI bus in a controller
module.
chassis
An enclosure’s metal housing.
chunk size
The number of contiguous blocks in a stripe on a disk drive in a virtual disk. The
number can be adjusted to improve performance. Generally, larger chunks are
more effective for sequential reads. See block.
CLI
The command-line interface that system administrators can use to configure,
monitor, and manage FibreCAT SX storage systems. The CLI is accessible from
any management host that can access a controller module through an
out-of-band Ethernet or RS-232 connection.
controller
The control logic in a storage subsystem that performs command
transformation and routing, aggregation (RAID, mirroring, striping, or other), high-level
error recovery, and performance optimization for multiple storage devices.
(SNIA)
A controller is also referred to as a RAID controller.
controller enclosure
An enclosure that contains disk drives and one or two controller modules.
See controller module.
controller module
A FRU that contains: a storage controller processor; a management controller
processor; out-of-band management interfaces; a LAN subsystem; cache
protected by a capacitor pack and Compact Flash memory; host, expansion,
and management ports; and midplane connectivity. If a controller enclosure
contains redundant controller modules, the upper one is designated A and the
lower one is designated B.
CPLD
Complex programmable logic device. A generic term for an integrated circuit
that can be programmed in a laboratory to perform complex functions.
DAS
See direct attach storage (DAS).
data host
A host that reads/writes data to the storage system. A data host can be
connected directly to the system (direct attached storage, or DAS) or can be
connected to an external switch that supports multiple data hosts (storage area
network, or SAN).
data mirroring
Data written to one disk drive is simultaneously written to another disk drive. If
one disk fails, the other disk can be used to run the virtual disk and reconstruct
the failed disk. The primary advantage of disk mirroring is 100 percent data
redundancy: since the disk is mirrored, it does not matter if one of the disks fails;
both disks contain the same data at all times and either can act as the opera-
tional disk. The disadvantage of disk mirroring is that it is expensive because
each disk in the virtual disk is duplicated. RAID 1 and 10 use mirroring.
data striping
The storing of sequential blocks of incoming data on all the different disk drives
in a virtual disk. This method of writing data increases virtual disk throughput
because multiple disks are working simultaneously, retrieving and storing. RAID
0, 10, 3, 5 and 50 use striping.
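As a simplified illustration of the technique (a sketch of plain RAID 0 striping, not the controller's actual on-disk layout), the following shows how a logical block address maps to a drive and a block offset:

    # Simplified illustration of striping (RAID 0 style): map a logical
    # block address (LBA) to a drive index and a block offset on that
    # drive. Real RAID levels add mirror or parity placement on top.
    def stripe_map(lba, num_drives, chunk_blocks):
        stripe, pos = divmod(lba, num_drives * chunk_blocks)
        drive, offset = divmod(pos, chunk_blocks)
        return drive, stripe * chunk_blocks + offset

    # 4 drives, 64-KByte chunks = 128 blocks of 512 bytes each
    for lba in (0, 127, 128, 512):
        print(lba, "->", stripe_map(lba, num_drives=4, chunk_blocks=128))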
DHCP
Dynamic Host Configuration Protocol
disk mirroring
See data mirroring.
DMA
Direct Memory Access
drive module
A FRU consisting of a disk drive and drive sled.
dynamic spare
An available disk drive that is used to replace a failed drive in a virtual disk, if
the Dynamic Spares feature is enabled and no vdisk spares or global spares are
designated.
EIA
Enterprise Information Architecture
EMP
See enclosure management processor (EMP).
enclosure
A physical storage device that contains disk drives. If the enclosure contains
integrated RAID controllers it is known as a controller enclosure; otherwise it is
an expansion enclosure.
expansion module
A FRU that contains: host, expansion, and management ports; an Enclosure
Management Processor; and midplane connectivity. If a system contains
redundant expansion modules, the upper one is designated A and the lower one
is designated B.
expansion enclosure
An enclosure that contains disk drives and one or two expansion modules. See
expansion module.
fabric
A Fibre Channel switch or two or more Fibre Channel switches interconnected
in such a way that data can be physically transmitted between any two N_Ports
on any of the switches. (SNIA)
fabric switch
A Fabric switch functions as a routing engine that actively directs data transfer
from source to destination and arbitrates every connection. Bandwidth per node
via a Fabric switch remains constant when more nodes are added, and a node
on a switch port uses a data path of up to 100 MByte/sec to send or receive
data.
failback
See recovery.
failover
In an active-active configuration, failover is the act of temporarily transferring
ownership of controller resources from a failed controller to a surviving
controller. The resources include virtual disks, cache data, host ID information,
and LUNs and WWNs. See also recovery.
fault tolerance
The capacity to cope with internal hardware problems without interrupting the
system’s data availability, often by using backup systems brought online when
a failure is detected. Many systems provide fault tolerance by using RAID archi-
tecture to give protection against loss of data when a single disk drive fails.
Using RAID 1, 3, 5, 10, or 50 techniques, the RAID controller can reconstruct
data from a failed disk drive and write it to a spare or replacement disk drive.
FC
See Fibre Channel (FC).
FC-AL
See Fibre Channel-Arbitrated Loop (FC-AL).
FRU
See field-replaceable unit (FRU).
FSM
FibreCAT SX Manager comprising both a WBI and a CLI.
GByte
Gigabyte (GB)
global spare
A spare disk drive that is available to all virtual disks in a system.
HBA
See host bus adapter (HBA).
hot swap
The ability to remove and replace a FRU while the system is powered on and
operational.
in-band management
Transmission of a protocol other than the primary data protocol over the same
medium as the primary data protocol. Management protocols are a common
example of in-band transmission. (SNIA)
initialization
The process of writing a specific pattern to all data blocks on all disk drives in a
virtual disk. This process overwrites and destroys existing data on the disk
drives and the virtual disk. Initialization is required to make the entire virtual disk
consistent at the onset. Initialization ensures that virtual-disk verifications
performed in the future are executed correctly.
I/O
Input/Output
IP
Internet Protocol
JBOD
Just a Bunch of Disks. An expansion enclosure that is directly attached to a
host.
KByte
Kilobyte (KB)
LAN
See local area network (LAN).
leftover drive
A disk drive that contains metadata but is no longer part of a virtual disk.
loop address
Indicates the unique ID of a node in FC loop topology. A loop address is
sometimes referred to as a Loop ID.
loop topology
See Fibre Channel-Arbitrated Loop (FC-AL).
LUN
See logical unit number (LUN).
management host
A workstation with a direct or local connection to the system and that is used to
manage the system.
master volume
A volume that is enabled for snapshots. A master volume must be owned by the
same controller as the associated snap pool.
MByte
Megabyte (MB).
MC
See management controller (MC).
metadata
Data in the first sectors of a disk drive that the system uses to identify virtual
disk members.
MIB
See management information base (MIB).
Non-RAID
The RAID level option that can be used for a virtual disk having a single disk
drive and that does not need the data redundancy or performance benefits of
RAID. The capacity of a non-RAID virtual disk equals the capacity of its disk
drive. For fault tolerance, use RAID 0 or above.
out-of-band management
Method of accessing and managing a system using the RS-232 or Ethernet
connection.
ownership
In an active-active configuration, one controller has ownership of the following
resources: virtual disks and vdisk spares. When a controller fails, the other
controller assumes temporary ownership of its resources.
PHY
Hardware component that converts between digital and analog in the signal
path between the storage controller, expander controller, disk drives, and SAS
ports.
PID
Primary controller identifier number.
port interconnect
A dual-controller enclosure includes host port interconnect circuitry which can
be used to connect the host ports on the upper controller module to those on
the lower controller module. When enabled, the port interconnect gives each
host access to all the volumes assigned to both controllers and makes it
possible to create a redundant configuration without using an external Fibre
Channel switch. The port interconnect should only be enabled when the system
is used in direct attach configurations. When using a switch attached configu-
ration, the port interconnect must be disabled.
priority
Priority enables controllers to serve other I/O requests while running jobs
(utilities) such as rebuilding virtual disks. Priority ranges from low, which uses
the controller’s minimum resources, to high, which uses the controller’s
maximum resources.
RAID
Redundant Array of Independent Disks, a family of techniques for managing
multiple disks to deliver desirable cost, data availability, and performance
characteristics to host environments. (SNIA)
RAID controller
See controller.
RAS
Reliability, availability, and serviceability. These headings refer to a variety of
features and initiatives all designed to maximize equipment uptime and mean
time between failures, minimize downtime and the length of time necessary to
repair failures, and eliminate or decrease single points of failure in favor of
redundancy.
rebuild
The regeneration and writing onto one or more replacement disks of all of the
user data and check data from a failed disk in a virtual disk with RAID level 1,
10, 3, 5, and 50. A rebuild can occur while applications are accessing data on
the system’s virtual disks.
recovery
In an active-active configuration, recovery (also known as failback) is the act of
returning ownership of controller resources from a surviving controller to a
previously failed (but now active) controller. The resources include virtual disks,
cache data, host ID information, and LUNs and WWNs.
rollback
The process of resetting a volume's data to become identical to a snapshot
taken of that volume.
SAN
See Storage Area Network (SAN).
SAS
Serial Attached SCSI
SATA
Serial Advanced Technology Attachment
SC
See storage controller (SC).
SCSI
Small Computer System Interface. A collection of ANSI standards and
proposed standards which define I/O buses primarily intended for connecting
storage subsystems or devices to hosts through host bus adapters. (SNIA)
SFP
Small form-factor pluggable connector, used in FC controller module host ports.
With FibreCAT SX storage systems, the SFPs belong to the controller modules
and are not FRUs.
SID
Secondary controller identifier number.
SMART
Self-Monitoring Analysis and Reporting Technology. The industry-standard
reliability prediction indicator for both the IDE/ATA and SCSI hard disk drives.
Hard disk drives with SMART offer early warning of some hard disk failures so
critical data can be protected.
SMI-S
Storage Management Interface Specification
SMTP
Simple Mail Transfer Protocol. A protocol for sending email messages between
servers and from mail clients to mail servers. The messages can then be
retrieved with an email client using either POP or IMAP.
snap pool
A volume that is configured to store snapshot data.
snapshot
A fully usable copy of a defined collection of data that contains an image of the
data as it appeared at the point in time at which the copy was initiated. (SNIA)
SNIA
Storage Networking Industry Association. A non-profit trade association of
producers and consumers of storage networking products, whose goal is to
further storage networking technology and applications.
SNMP
Simple Network Management Protocol. An IETF protocol for monitoring and
managing systems and devices in a network. The data being monitored and
managed is defined by a MIB. The functions supported by the protocol are the
request and retrieval of data, the setting or writing of data, and traps that signal
the occurrence of events. (SNIA)
spare
See dynamic spare, global spare, vdisk spare.
standard volume
A volume that is not enabled for snapshots.
standby
See spare.
state
The current operational status of a disk drive, a virtual disk, or controller. A
controller module stores the states of drives, virtual disks, and the controller in
its nonvolatile memory. This information is retained across power interruptions.
storage system
One or more enclosures, referred to in a logical (as opposed to physical) sense.
strip size
See chunk size.
stripe size
The number of data disks in a virtual disk multiplied by the chunk size.
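For example, a virtual disk with four data disks and a chunk size of 64 KByte has a stripe size of 4 × 64 = 256 KByte.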
subvirtual disk
One of multiple RAID 1 virtual disks across which data is striped to form a RAID
10 virtual disk; or one of multiple RAID 5 virtual disks across which data is
striped to form a RAID 50 virtual disk.
system
See array.
TByte
Terabyte (TB)
TCP/IP
Transmission Control Protocol/Internet Protocol
topology
The logical layout of the components of a computer system or network and their
interconnections. Topology deals with questions of what components are
directly connected to other components from the standpoint of being able to
communicate. It does not deal with questions of physical location of
components or interconnecting cables. (SNIA)
trap
A type of SNMP message used to signal that an event has occurred. (SNIA)
UPS
Uninterruptible Power Supply
vdisk
Abbreviation for virtual disk.
vdisk spare
A disk drive that is marked as a spare to support automatic data rebuilding after
a disk drive associated with a virtual disk fails. For a vdisk spare to take the
place of another disk drive, it must be at least equal in size to the failed disk drive
and all of the virtual disks dependent on the failed disk drive must be
redundant—RAID 1, 10, 3, 5, or 50.
verify
A process that checks the integrity of the redundant data on fault-tolerant virtual
disks. For RAID 3, 5, and 50, the verify process recalculates the parity of data
stripes in each of the virtual disk’s RAID stripe sets and compares it with the
stored parity. If a discrepancy is found, an error is reported and the new correct
parity is substituted for the stored parity. For RAID 1 and 10, the verify process
checks for mirror mismatches. If an inconsistency is encountered, data is copied
from the master disk drive to the slave disk drive. If a bad block is encountered
when the parity is regenerated, the data is copied from the other disk drive,
master or slave, to the reporting disk drive reallocating the bad block.
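At its core, the parity recalculation for RAID 3, 5, and 50 is a bytewise XOR across the data chunks of a stripe. The following simplified sketch (an illustration only; it ignores parity rotation and bad-block reallocation) recalculates the parity and reports a discrepancy:

    # Simplified illustration of a RAID 5 verify pass: recompute the
    # parity of a stripe by XOR and compare it with the stored parity.
    def xor_parity(chunks):
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, byte in enumerate(chunk):
                parity[i] ^= byte
        return bytes(parity)

    data_chunks = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
    stored_parity = b"\x77\x01"        # second byte deliberately corrupted
    recalculated = xor_parity(data_chunks)
    if recalculated != stored_parity:
        print(f"discrepancy: substituting {recalculated.hex()} "
              f"for stored parity {stored_parity.hex()}")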
virtual disk
For FibreCAT SX storage systems, a set of disk drives that share a RAID level
and disk type, and across which host data is spread for redundancy or
performance.
volume
A logical subdivision of a virtual disk. Multiple LUNs can be assigned to the
same volume, one for each host port given access to the volume.
See also standard volume.
volume mapping
The process by which volume permissions (read only, read/write, or none) and
LUNs are assigned to a host port.
WBI
See web-browser interface (WBI).
write policy
A cache-writing strategy used to control write operations. The write policy
options are write-back and write-through cache.
write-back cache
A caching technique in which the completion of a write request is signaled as
soon as the data is in cache, and actual writing to non-volatile media occurs at
a later time. Write-back cache includes an inherent risk that an application will
take some action predicated on the write completion signal, and a system failure
before the data is written to non-volatile media will cause media contents to be
inconsistent with that subsequent action. For this reason, good write-back
cache implementations include mechanisms to preserve cache contents across
system failures (including power failures) and to flush the cache at system
restart time. (SNIA)
This is how FibreCAT SX storage systems operate.
See also write-through cache.
write-through cache
A caching technique in which the completion of a write request is not signaled
until data is safely stored on non-volatile media. Write performance with a write-
through cache is approximately that of a non-cached system, but if the data
written is also held in cache, subsequent read performance may be dramatically
improved. (SNIA)
FibreCAT SX storage systems use write-through cache when write-back cache
is disabled or when cache backup power is not working.
See also write-back cache.
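The practical difference between the two policies is when the completion signal is given relative to the write to non-volatile media. The following conceptual sketch illustrates the general technique only; it is not the controller firmware.

    # Conceptual contrast of write-back and write-through caching:
    # write-back signals completion before the media write,
    # write-through only after it.
    class Cache:
        def __init__(self, write_back):
            self.write_back = write_back
            self.cache, self.media = [], []

        def write(self, data):
            self.cache.append(data)
            if not self.write_back:
                self.flush()           # write-through: commit first
            return "complete"          # completion signal to the client

        def flush(self):
            self.media.extend(self.cache)  # commit cached data to media
            self.cache.clear()

    wb, wt = Cache(write_back=True), Cache(write_back=False)
    wb.write("block")
    wt.write("block")
    print("write-back data on media before flush:", bool(wb.media))  # False
    print("write-through data on media:", bool(wt.media))            # True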
Figure 12: Detecting the HIM Model with FibreCAT SX Manager’s WBI
(Example with two HIM Models 0) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Figure 13: Detecting the HIM Revision with FibreCAT SX Manager’s WBI
(Example with two HIM Models 1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Figure 18: Mounting the Right Sliding Rail (Front Side of the Rack) . . . . . . . . . . . . . . . . 50
Figure 19: Mounting the Right Sliding Rail (Rear Side of the Rack) . . . . . . . . . . . . . . . . 51
Figure 20: Mounting the Left Sliding Rail (Rear Side of the Rack) . . . . . . . . . . . . . . . . . 51
Figure 21: Mounting Position of a Cage Nut Between the Rail Screws . . . . . . . . . . . . . . 52
Figure 27: Redundant Connection Through a Single Switch to a Single Data Host . . . . 61
Figure 28: Minimal Connection Through a Single Switch to a Single Data Host . . . . . . . 62
Table 5: Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Table 6: Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Table 18: Controller Enclosure LEDs (Back, Power and Cooling Module). . . . . . . . . . . . 28
Table 22: Expansion Enclosure LEDs (Back, Power and Cooling Module) . . . . . . . . . . . 32
system requirements 68
web browser configuration 68
web-based interface
configure 68
create virtual disks 71
description 8, 68
log out 73
set date and time 69
test array configuration 72
weight guidelines 21
weight, enclosure 20
wiring requirements, site 23