Dell EMC PowerStore SAP HANA
H18336.1
Validation Guide
Abstract
This validation guide describes storage best practices for SAP HANA in
Tailored Data Center Integration (TDI) deployments on SAP certified
Dell EMC PowerStore enterprise storage systems. The solution
enables customers to use PowerStore for SAP HANA TDI deployments
in a fully supported environment with their existing data center
infrastructures.
The information in this publication is provided as is. Dell Inc. makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2021 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Intel, the Intel logo, the Intel Inside logo and Xeon are trademarks
of Intel Corporation in the U.S. and/or other countries. Other trademarks may be trademarks of their respective owners.
Published in the USA 03/21 Validation Guide H18336.1.
Dell Inc. believes the information in this document is accurate as of its publication date. The information is subject to change
without notice.
Contents
Introduction
Conclusion
References
Introduction
Solution overview

SAP HANA is an in-memory data platform that can be deployed on-premises or in the cloud. Organizations use the SAP HANA platform to analyze large volumes of data and develop and deploy applications in real time. The Dell EMC PowerStore platform is a midrange enterprise storage system that is optimized for simplicity across the storage life cycle. The PowerStore platform is simple to acquire, deploy, manage, and service.
The solution that this guide describes uses SAP HANA in a TDI deployment scenario on
PowerStore enterprise storage systems. These PowerStore storage systems meet the
SAP HANA performance and functional requirements and are certified by SAP; customers
can therefore use PowerStore systems for SAP HANA TDI deployments in a fully
supported environment using their existing data center infrastructures.
The solution configuration recommendations in this guide are based on SAP requirements
for high availability (HA) and on the performance test results that are required to meet
SAP’s key performance indicators (KPIs) for SAP HANA TDI.
Key benefits

This Dell EMC solution for SAP HANA TDI deployments on PowerStore storage systems reduces hardware and operational costs, lowers risks, and increases server and network vendor flexibility for customers. Customers can:
• Integrate SAP HANA into an existing data center.
• Choose between the following storage protocols and connectivity options for the
SAP HANA nodes:
▪ NAS—Shared file system (NAS/NFS)
▪ SAN—Fibre Channel (FC SAN)
• Use their existing operational processes, skills, and tools, thus avoiding the
significant risks and costs that are associated with operational change.
• Use the performance and scale benefits of PowerStore systems to obtain real-time
insights across the business.
• Expect significant benefits from using NVMe storage class memory (SCM) and
solid-state device (SSD) drives for the SAP HANA persistence by reducing SAP
HANA startup, host autofailover, and backup times.
• Expect storage efficiencies and capacity savings with PowerStore always-on inline data reduction.
• Transition easily from an appliance-based model to the PowerStore-based TDI
architecture while relying on Dell Technologies Professional Services to minimize
risk.
This guide describes SAP HANA TDI deployments in physical environments. If you plan to
use SAP HANA in VMware virtualized environments on VMware vSphere, see SAP HANA
on VMware vSphere.
Audience

This guide is intended for system integrators, system or storage administrators, customers, partners, and Dell Technologies Professional Services personnel who must configure a PowerStore storage system for an SAP HANA TDI environment.
Term table

The following table defines abbreviations that are used in this guide:

Term | Definition
FC | Fibre Channel
We value your feedback

Dell Technologies and the authors of this guide welcome your feedback on the solution and the solution documentation. Contact the Dell Technologies Solutions team by email or provide your comments by completing our documentation survey.
Note: For links to additional documentation for this solution, see the Dell Technologies Solutions
Info Hub for SAP.
Technology overview
SAP HANA deployment models

The SAP HANA system combines SAP software components that are optimized on proven and certified SAP partner-provided hardware. Two models are available for on-premises deployment, as shown in the following figure:
Appliance model
By default, an SAP HANA appliance includes integrated storage, compute, and network
components. The appliance is certified by SAP, built by an SAP HANA hardware partner,
and shipped to customers with all its software components preinstalled, including
operating systems and SAP HANA software.
Dell Technologies provides preinstalled SAP HANA appliance solutions for a faster time-
to-market and easy integration into an SAP landscape. However, the SAP HANA
appliance model presents the following limitations for customers:
• Limited choice of servers, networks, and storage
• Inability to use existing data center infrastructure and operational processes
• Little control of the critical components in the appliance
• Fixed sizes for SAP HANA server and storage capacities, leading to higher costs
from a capacity shortfall and inability to respond rapidly to unexpected growth
demands
TDI model
The TDI deployment model enables customers to choose from a broad portfolio of SAP
HANA-certified servers that can be combined with SAP-certified network and storage
components. Different workloads can share the storage and network components to
optimize the total cost of ownership (TCO). Customers can seamlessly integrate SAP
HANA into existing data center operations such as disaster recovery, data protection,
monitoring, and management, reducing the cost, time-to-value, and risk of an overall SAP
HANA adoption. For more information, see the following SAP documents:
• SAP HANA Tailored Data Center Integration – Overview
• SAP HANA Tailored Data Center Integration - Frequently Asked Questions
SAP HANA database

SAP HANA is an in-memory database, which means the data is kept in the RAM of one or more SAP HANA worker hosts (active components that accept and process database requests). All database operations (reads, inserts, updates, and deletions) are performed
in the main memory of the host. This feature differentiates the SAP HANA database from
traditional databases, where only a part of the data is cached in RAM and the remaining
data resides on disk.
Persistent storage enables you to restore the SAP HANA database to its most recent
committed state in the event of failure. The log captures all changes by database
transactions (redo logs). Data and undo log information are automatically saved to disk at
regular savepoints (the default is five minutes).
In single-host environments, the database must fit into the RAM of a single server. Single-
host environments are preferred for online transaction processing (OLTP)-type workloads
such as S/4HANA and SAP Business Suite on SAP HANA.
In multihost environments, the database tables are distributed across the RAM of multiple
servers. These environments use worker and standby hosts. The worker hosts accept and
process database requests, whereas standby hosts are passive components that have
the database services running but no data in RAM. A standby host waits for a worker host
to fail and then takes over its role, a process known as host autofailover. Because the in-
memory capacity in these deployments can be high, scale-out SAP HANA clusters are
perfectly suited for online analytical processing (OLAP)-type workloads with large
datasets such as SAP Business Warehouse on SAP HANA and BW/4HANA. By default,
SAP supports scale-out deployments of up to 16 worker hosts. If more than 16 worker
hosts are needed, a site-specific SAP certification is required.
PowerStore overview

The new Dell EMC PowerStore system is a midrange storage product with a container-based active/active architecture. The PowerStore system supports the latest storage
media such as NVMe SCM, NVMe SSDs, and NVMe NVRAM drives. This system is a
highly available 2U two-node-based appliance that comes with a flexible consumption
model. The PowerStore system can scale out to up to four appliances (eight nodes) in a
cluster and scale up to three expansion enclosures per appliance.
PowerStore enclosure
Appliances in the PowerStore series are available in one of the following configurations:
• PowerStore T appliances are storage-centric. These models enable you to
manage and provision block and file storage to external hosts. During the initial
configuration, you can configure an appliance for block-only storage or for
unified (block and file) storage.
In SAP HANA environments, configure the appliance for unified storage even when
block storage is used for the SAP HANA persistence. The SAP HANA shared file
system (/hana/shared) can reside on a PowerStore NFS share.
PowerStore Manager UI
The PowerStore system also supports management, configuration, and monitoring using
a command-line interface (PSTCLI) and a REST API. For relevant documentation, see the
PowerStore Info Hub.
PowerStore T models certified for SAP HANA

Dell Technologies has certified the following PowerStore T models for running SAP HANA: 1000T, 3000T, 5000T, 7000T, and 9000T. The following table shows high-level specifications for these models. For more information, see the PowerStore data and specification sheet.
Note: SAP HANA certification tests for both FC SAN and NAS/NFS were performed on the 1000T
model with a base enclosure configuration of 2 x NVRAM cache drives, 23 x NVMe SSD drives,
and R5 (8+1) protection. The PowerStore X models are not certified for SAP HANA TDI
production deployments but can be used for nonproduction deployments.
Model | 1000T | 3000T | 5000T | 7000T | 9000T
CPUs per appliance | 4 x Intel CPUs, 32 cores, 1.8 GHz | 4 x Intel CPUs, 48 cores, 2.1 GHz | 4 x Intel CPUs, 64 cores, 2.1 GHz | 4 x Intel CPUs, 80 cores, 2.4 GHz | 4 x Intel CPUs, 80 cores, 2.4 GHz
Supported drives | NVMe SSD, NVMe Optane SCM SSD, SAS SSD
To determine the proper storage system model, number, and type of front-end I/O
modules, and disk configuration for an SAP HANA TDI deployment, you will need the
following information:
• Number of SAP HANA nodes to be deployed on the storage system
• Capacity requirements of the SAP HANA nodes
An SAP HANA node can be either a single (scale-up) SAP HANA server or an SAP HANA
worker node that is part of an SAP HANA multihost cluster. Multiple SAP HANA nodes
can be connected to a PowerStore T storage system up to a recommended maximum
number. For more information, see SAP HANA capacity requirements.
As part of the SAP HANA certification, we performed extensive testing using the SAP
HANA-HWC-ES-1.1 certification scenario to determine the scalability of the PowerStore
product family. The following table shows the number of SAP HANA nodes that are
supported on a specified PowerStore storage system model or cluster. These numbers
represent the recommended maximum number of SAP HANA production nodes that can
be connected to the PowerStore system while still meeting the SAP performance KPIs.
Table 2. SAP HANA FC scalability (supported number of nodes) per appliance and cluster

Model | 1 appliance | 2 appliances | 3 appliances | 4 appliances
1000T | 6 | 12 | 18 | 24
3000T | 8 | 16 | 24 | 32
5000T | 12 | 24 | 36 | 48
7000T | 14 | 28 | 42 | 56
9000T | 16 | 32 | 48 | 64
A PowerStore cluster can have up to four appliances. Each appliance has two active/active storage nodes, and each node has two four-port front-end (FE) I/O modules. The table shows, for each model, the number of production SAP HANA nodes that can be deployed on the specified configuration using FC SAN connectivity. SAP HANA standby nodes in a scale-out cluster are not counted for scalability because they do not have a storage persistence.
Achieving these numbers in a customer environment requires:
• FC network with 16 or 32 Gb/s link speed on all components
• Two four-port I/O modules (16 or 32 Gb/s) per PowerStore node
Note: Two I/O modules with 16 FE ports per PowerStore node are required for the
maximum number of SAP HANA nodes. A single I/O module with eight FE ports per
PowerStore node is acceptable for 50 percent of the maximum number.
• 16 dedicated FE ports for SAP HANA, with SAP HANA nodes equally balanced
across the FE ports
• Capacity for the SAP HANA persistence allocated on PowerStore volumes or file
systems with the “Volume Performance Policy” set to high
• Host connections configured as described in SAP HANA host configuration and
setup
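Because the supported node counts in Table 2 scale linearly with the number of appliances, they can be expressed as a small helper script. This is an illustrative sketch only: the per-appliance figures come from Table 2, and the function name is an assumption, not a Dell tool.

```shell
#!/bin/sh
# Maximum supported SAP HANA FC production nodes per Table 2.
# Node counts scale linearly with the appliance count (1-4 per cluster).
max_hana_nodes() {
  model=$1; appliances=$2
  case "$model" in
    1000T) per=6 ;;
    3000T) per=8 ;;
    5000T) per=12 ;;
    7000T) per=14 ;;
    9000T) per=16 ;;
    *) echo "unknown model" >&2; return 1 ;;
  esac
  echo $(( per * appliances ))
}

max_hana_nodes 5000T 3   # -> 36
max_hana_nodes 9000T 4   # -> 64
```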
To determine the proper storage system model and the disk configuration for an SAP
HANA TDI deployment, you will need the following information:
• Number of SAP HANA nodes to be deployed on the storage system
• Capacity requirements of the SAP HANA nodes
An SAP HANA node can be either a single (scale-up) SAP HANA server or an SAP HANA
worker node that is part of an SAP HANA multihost cluster.
As part of the SAP HANA certification, Dell Technologies performed extensive testing
using the SAP HANA-HWC-ES-1.1 certification scenario to determine the scalability of the
PowerStore product family. The following table shows the recommended maximum
number of SAP HANA production nodes that can be connected to a specified PowerStore
model while still meeting the SAP performance KPIs:
Table 3. SAP HANA NAS scalability (number of production nodes) per appliance

Model | Production nodes
1000T | 6
3000T | 8
5000T | 12
7000T | 14
Table 3 shows the number of production SAP HANA nodes that can be deployed on the specified configuration using NAS/NFS connectivity. SAP HANA standby nodes in a scale-out cluster are not counted for scalability because they do not have a storage persistence.
Note: PowerStore appliance clustering for NAS storage is not yet available.
Introduction

SAP HANA production systems in TDI environments must meet the SAP KPIs for storage
performance. These systems also demand HA and redundancy in storage and network
components such as network cards (HBAs), network switches, and storage FE
components. HA and redundant systems protect SAP HANA database operations from
component failure.
The following sections of this guide provide configuration recommendations and best
practices for the networks and storage systems based on the connectivity option that you
choose for the SAP HANA production systems.
Network configuration

This section describes the considerations that arise when you connect SAP HANA to PowerStore storage systems over an FC SAN network or an IP network for NAS/NFS.
Direct attachments
PowerStore systems support direct attachment of the SAP HANA hosts to the appliance through FC if the hosts connect to both PowerStore nodes and have the required native Linux multipathing software (DM-MPIO). Even though direct-attached FC connections are supported with PowerStore, Dell Technologies strongly recommends using switches in production environments to provide HA and redundancy and to protect SAP HANA against component failures.
Overview

This section provides configuration recommendations for connecting the PowerStore storage system to SAP HANA nodes.
Module connectivity options

All PowerStore models have the same connectivity options. The following figure shows these connectivity options as seen from the rear of the unit:
PowerStore node A is the bottom node, while PowerStore node B is the top node. Each
node has one embedded module (red rectangles) with management and SAS expansion
ports and a four-port Mezzanine card for connections such as NAS or cluster
interconnects. This card can be used for NAS/NFS and the SAP HANA NAS persistence
and shared file system /hana/shared. The system configures all NAS server interfaces
on the first two bonded ports on this four-port card. This configuration cannot be changed.
Because both bonded ports are used for file traffic, link loss on a single port does not
affect connectivity to the NAS server.
Each node has two additional slots for four-port I/O modules (green rectangles). These
slots must be used for SAP HANA client FC connections, which are either 16 Gb/s or 32
Gb/s FC.
PowerStore disk storage

The PowerStore base enclosure can be configured with NVMe SSDs or NVMe storage class memory (SCM) drives for user data. While NVMe SCM drives provide better
performance, the disk capacity of an NVMe SCM-based appliance is limited to the disks in
a base enclosure and cannot be expanded with an expansion enclosure. It is
recommended that all drives within a PowerStore system be the same size to maximize
the usable capacity from each drive. PowerStore systems require between six and 23
NVMe SSD drives in the base enclosure.
PowerStore uses NVMe NVRAM drives to provide persistent storage for cached write
data. PowerStore 1000 and 3000 appliance models have two NVRAM drives per system,
while the PowerStore 5000, 7000, and 9000 models have four NVRAM drives per system.
The following figure shows how the NVRAM drives are configured in the base enclosure.
NVMe SSD-based systems can also be expanded with an expansion enclosure using up
to 25 SAS SSD drives to increase the amount of available storage capacity, as shown in
the following figure:
The PowerStore Dynamic Resiliency Engine (DRE) manages the drives in the system. RAID settings are not user-configurable within the PowerStore system. RAID 5 protects user data that is stored on the drives, with a stripe width that depends on the number of drives:
• If the PowerStore system is installed with fewer than 10 data drives, the system
configures RAID 5 with a 4+1 stripe width.
• If the PowerStore system is installed with 10 or more data drives, the system
configures RAID 5 with an 8+1 stripe width.
The system does not change the stripe width as more drives are added. Because
RAID 5 8+1 provides greater usable capacity from the same number of drives, we
recommend initially installing a PowerStore system with a minimum of 10 drives.
All drives in the system are automatically used to provide storage capacity. User
configuration of the drives is not necessary and dedicated hot spare drives are not
required. Spare space for a rebuild is automatically distributed across all drives, providing
better resource utilization and enabling a faster rebuild if there is a drive failure.
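As a rough illustration of why the 8+1 stripe width yields more usable capacity from the same drives, the following sketch compares the parity overhead of the two widths. It deliberately ignores the distributed spare space and metadata that DRE also reserves, so real usable capacity is lower; the function name and drive sizes are assumptions for the example.

```shell
#!/bin/sh
# Approximate usable capacity for the two DRE RAID 5 stripe widths.
# data/(data+parity) is the usable fraction of each stripe.
usable_tb() {
  drives=$1; drive_tb=$2
  if [ "$drives" -lt 10 ]; then data=4; else data=8; fi
  parity=1
  echo $(( drives * drive_tb * data / (data + parity) ))
}

usable_tb 9 2    # 9 x 2 TB drives, RAID 5 (4+1)  -> 14
usable_tb 10 2   # 10 x 2 TB drives, RAID 5 (8+1) -> 17
```

Adding one drive to cross the 10-drive threshold moves the system from 4+1 to 8+1 stripes, which is why the text recommends installing at least 10 drives initially.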
NAS servers

The PowerStore file system uses virtualized file servers that are known as NAS servers. A
NAS server contains the configuration, interfaces, and environmental information that is
used to facilitate access to the file systems. This information includes services such as
DNS, LDAP, NIS, NDMP, anti-virus, and more.
New NAS servers are automatically assigned on a round-robin basis across the available
PowerStore nodes. The preferred node acts as a marker to indicate the node that the
NAS server runs based on this algorithm. After it is provisioned, the preferred node for a
NAS server never changes. The current node indicates the node on which the NAS server
is running. Changing the current node moves the NAS server to a different node, which
can be used for load-balancing purposes. When a NAS server is moved to a new node, all
file systems on the NAS server are moved along with it.
Volumes and file systems

As a unified storage system, the PowerStore system allocates block LUNs (volumes) and file systems on the same storage resources.
Performance policy
All block storage resources in a PowerStore system have a defined performance policy.
By default, this policy is set to medium. The performance policy does not have any impact
on system behavior unless some volumes are set to low performance while other volumes
are set to medium or high performance. During times of system resource contention, the
PowerStore system devotes fewer compute resources to volumes with a low-performance
policy. Reserve the low-performance setting for volumes with less-critical performance needs and use the high-performance setting for all SAP HANA persistence volumes.
Note: ALUA is an industry-standard protocol for identifying optimized paths between a storage system and a host.

Dell Technologies recommends that you always use two NAS servers (one on each PowerStore node) and, when creating the file systems, distribute them evenly across the two NAS servers.
SAP HANA capacity requirements

Every SAP HANA node requires storage devices and capacity for:
• Operating system boot image
• SAP HANA installation (/hana/shared)
• SAP HANA persistence (data and log)
• Backup
For more information, see SAP HANA Storage Requirements.
Note: The formulas for capacity sizing in SAP HANA Storage Requirements are subject to change
by SAP. Ensure that you review the latest version of that document before you determine your
capacity requirements.
When booting from a SAN, follow the best practices that are described in the Dell EMC
Host Connectivity Guide for Linux.
Every SAP HANA scale-up or scale-out (worker) node requires two disk volumes or file
systems to save the in-memory database on disk (data) and to keep a redo log (log). The
size of these volumes or file systems depends on the anticipated total memory
requirement of the database and the RAM size of the node. To assist with disk sizing
preparation, SAP provides several tools and documents, as described in SAP HANA
Storage Requirements. The latest version of the requirements document (version 2.10)
provides the following formulas to calculate the size of the data volume:
• Option 1: If an application-specific sizing program can be used:

Size(data) = 1.2 x anticipated net disk space for data

where anticipated net disk space is the anticipated total memory requirement of the database plus an additional 20 percent free space. If the database is distributed across multiple nodes in a scale-out cluster, divide the anticipated net disk space by the number of SAP HANA worker nodes in the cluster. For example, if the anticipated net disk space is 2 TB and the scale-out cluster consists of four worker nodes, each worker node requires a data volume of 1.2 x (2 TB / 4) = 0.6 TB.

• Option 2: If the anticipated net disk space is unknown at the time of the storage sizing, use the RAM size of the node plus 20 percent free space to calculate the capacity of the data file system.

The size of the log volume depends on the RAM size of the node. SAP HANA Storage Requirements provides the following formulas to calculate the minimum size of the log volume:

Size(redolog) = 0.5 x RAM [systems with ≤ 512 GB RAM]
Size(redolog) = 512 GB (minimum) [systems with > 512 GB RAM]
Before you install the SAP HANA binary, configuration, and trace files and logs, every
SAP HANA node must have access to a file system that is mounted under the local
/hana/shared/ mount point. In an SAP HANA scale-out cluster, a single shared file
system is required and must be mounted on every node. Most SAP HANA installations
use a Network File System (NFS) for this purpose. PowerStore unified storage systems
can provide this file system with the NAS option.
Calculate the size of the /hana/shared/ file system by using the latest formula in
SAP HANA Storage Requirements. Version 2.10 (February 2017) of the requirements
document uses the following formulas for calculation:
• Single node (scale-up):

Size(installation) = 1 x RAM, up to a maximum of 1 TB

• Multinode (scale-out):

Size(installation) = 1 x RAM_of_worker per four worker nodes
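As an illustrative aid (not an official SAP tool), the data, log, and /hana/shared formulas above can be combined into a small calculator. Values are in GB with integer arithmetic, and the function names are assumptions for this sketch.

```shell
#!/bin/sh
# Sizing sketch based on the SAP HANA Storage Requirements v2.10 formulas
# quoted above. All sizes in GB; integer math rounds down.

# Data volume per worker: 1.2 x (anticipated net disk space / workers)
size_data_gb() {
  net_disk_gb=$1; workers=$2
  echo $(( net_disk_gb * 12 / 10 / workers ))
}

# Redo log: RAM/2 for systems with <= 512 GB RAM, else a 512 GB minimum
size_log_gb() {
  ram_gb=$1
  if [ "$ram_gb" -le 512 ]; then echo $(( ram_gb / 2 )); else echo 512; fi
}

# /hana/shared: scale-up = RAM capped at 1 TB;
# scale-out = 1 x worker RAM per (started) group of four workers
size_shared_gb() {
  ram_gb=$1; workers=$2
  if [ "$workers" -le 1 ]; then
    [ "$ram_gb" -lt 1024 ] && echo "$ram_gb" || echo 1024
  else
    echo $(( ram_gb * ( (workers + 3) / 4 ) ))
  fi
}

# Example: 2 TB net disk space across 4 workers, 512 GB RAM per node
size_data_gb 2048 4    # -> 614
size_log_gb 512        # -> 256
size_shared_gb 512 4   # -> 512
```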
Backup
SAP HANA supports backup to a file system or the use of third-party tools that SAP has
certified. Dell EMC supports data protection strategies for SAP HANA backup using Dell EMC
Data Domain systems and Dell EMC NetWorker software. While an SAP HANA backup to
an NFS file system on a PowerStore all-flash or hybrid storage system is possible, we do not
recommend backing up the SAP HANA database to the storage system where the primary
persistence resides. If you plan to back up SAP HANA to an NFS file system on a different
PowerStore storage system, see SAP HANA Storage Requirements for information about
sizing the backup file system. The capacity requirement depends not only on the data size
and the frequency of change operations in the database, but also on the backup generations
that are kept on disk.
SAP HANA NSE

SAP HANA Native Storage Extension (NSE) is a general-purpose, built-in warm data store in SAP HANA that lets you manage less-frequently accessed data without fully loading it into memory. NSE integrates flash-drive-based database technology with the SAP HANA in-memory database for an improved price-performance ratio.
• Hot data is used to store mission-critical data for real-time processing and
analytics. It is retained continuously in SAP HANA memory for fast performance
and is persisted to storage.
• Warm data is primarily used to store mostly read-only data that need not be
accessed frequently. The data need not reside continuously in SAP HANA
memory and can be unloaded to disk. It is still managed as a unified part of the
SAP HANA database, transactionally consistent with hot data, participates in
SAP HANA backup and system replication operations, and is stored in lower
cost stores within SAP HANA.
Note: NSE is subject to certain functional restrictions. For more information, see SAP
Note 2771956: SAP HANA NSE Functional Restrictions with HANA 2.0 SPS 04 and SAP Note
2927591: SAP HANA NSE Functional Restrictions with HANA 2.0 SPS 05 (access requires an
SAP username and password).
The following figure shows the difference between standard SAP HANA in-memory
storage and the storage offered with NSE:
Comparison: standard SAP HANA database vs. SAP HANA database with NSE
The capacity of a standard SAP HANA database is limited by the amount of main memory. Using SAP HANA NSE, customers can bypass this limit by storing warm data on PowerStore storage. Paging operations require a relatively small amount of SAP HANA memory for the NSE buffer cache, because the buffer cache can handle warm data on disk of up to eight times its own size. For example, a 2 TB SAP HANA system without NSE equates to a 1 TB database in memory. With NSE and the addition of a 500 GB buffer cache, the same system can manage up to 4 TB of warm data on disk.
Hot data is ‘column loadable’: it resides completely in memory for fast processing and is loaded from disk into SAP HANA memory in columns. With SAP HANA NSE, you can specify certain warm data as ‘page loadable.’ This data is loaded into memory page by page, as required for query processing. Unlike column-loadable data, page-loadable data does not need to reside completely in memory.
The following figure depicts the SAP HANA database with NSE:
NSE reduces the memory footprint for page-loadable data. The database is partly in
memory and partly on disk, as illustrated in Figure 9. The PowerStore storage system
together with SAP HANA NSE can be used to substantially increase SAP HANA data
capacity and reduce TCO for customers.
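The eight-to-one buffer-cache ratio described above can be expressed as a one-line sketch; the function name is an assumption, and the ratio is the only fact carried over from the text.

```shell
#!/bin/sh
# NSE rule of thumb from the text: warm data on disk can be up to
# eight times the size of the NSE buffer cache held in memory.
warm_capacity_gb() { echo $(( $1 * 8 )); }  # $1 = buffer cache size in GB

warm_capacity_gb 500   # -> 4000 (about 4 TB of warm data)
```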
NSE is integrated with other SAP HANA functional layers, such as the query optimizer,
query execution engine, column store, and persistence layers. For more information about
SAP HANA NSE and related topics, see SAP HANA Native Storage Extension
Whitepaper.
Host connection and setup using FC SAN

When you connect an SAP HANA host to a PowerStore storage system using the FC protocol, you must connect two host bus adapter (HBA) ports supporting 16 Gb/s or 32 Gb/s link speed to the PowerStore system. Even though it is possible to use a single dual-port HBA, Dell Technologies highly recommends using two HBAs for HA and redundancy.
Connect and zone each port over the FC SAN network to two FC FE ports per storage
processor on the PowerStore storage system. This setup produces two active and two
standby paths per LUN, as shown in the following figure:
After the HBAs of the SAP HANA server have been zoned to the PowerStore FC FE
ports, use the PowerStore Manager UI to create a host entry in the PowerStore system
and add volumes to the host.
1. From the PowerStore Manager UI, select Compute > Host & Host Groups and
click +ADD HOST.
3. Select Fibre Channel as the host protocol type and click Next.
A list of automatically discovered initiators is displayed, as shown in the following
figure:
4. Select the two FC initiators (WWNs) of the SAP HANA host HBAs from the list,
and then click Next.
Creating a volume

When you create a volume, specify the following:
▪ A volume name
▪ Optionally, a description
▪ Quantity: You can create multiple volumes. When you specify a quantity, PowerStore automatically adds a number suffix to the volume name.
▪ The required size
▪ Volume performance policy: Select High.
2. Check the host and let PowerStore automatically generate the logical unit number
under which the host discovers the volume. PowerStore selects the next available
number.
3. Click NEXT to access a summary page, as shown in the following figure:
2. To verify that the host can see the new volumes, run:
fdisk -l | grep -B1 -A4 PowerStore
3. After the new LUNs have been rescanned and are visible to the host, add them to
multipathing. Native Linux multipathing (DM-MPIO) must be enabled on the Linux
host. When multipathing is enabled, the host accesses the block LUNs over
multiple paths, providing redundancy if there is a component failure.
The /etc/multipath.conf file controls multipathing. This file does not exist
by default. Create the file by running the following command:
multipath -t > /etc/multipath.conf
Note: Ensure that you refer to the Dell EMC Host Connectivity Guide for Linux for the latest
PowerStore MPIO configuration settings.
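For orientation only, a file generated by `multipath -t` has the general shape shown below. The device-section values here are illustrative placeholders, not Dell's published settings; always take the authoritative vendor/product strings and path parameters from the Dell EMC Host Connectivity Guide for Linux.

```
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
devices {
    device {
        # Placeholder entry for PowerStore volumes; confirm every value
        # against the Host Connectivity Guide before use.
        vendor               "DellEMC"
        product              "PowerStore"
        path_grouping_policy "group_by_prio"
        failback             "immediate"
        no_path_retry        3
    }
}
```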
When the block devices are under multipath control, you can format them by
using XFS and then mount them as required.
When you install an SAP HANA multihost scale-out cluster, the SAP HANA storage connector (fcClient) mounts the devices during SAP HANA startup.
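As a hedged sketch of how the fcClient connector is typically declared, the [storage] section of the SAP HANA global.ini might look like the following. The WWIDs are placeholders for the multipath IDs of the PowerStore volumes; verify the exact parameter names against the SAP HANA Fibre Channel Storage Connector documentation for your HANA release.

```
[storage]
ha_provider = hdb_ha.fcClient
partition_*_*__prtype = 5
# Placeholder WWIDs -- replace with the multipath IDs of your volumes
partition_1_data__wwid = 368ccf09800000000000000000000001
partition_1_log__wwid  = 368ccf09800000000000000000000002
```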
Determine the host number (in this example, host20) by running the multipath -ll
command, as shown in step 6 of Scan LUNs on the host and create file systems. Linux
assigns a specific number to every connected HBA port. The example uses host18 and
host20.
QLogic HBAs:
cat /sys/module/qla2xxx/parameters/ql2xmaxqdepth
3. Rebuild your Linux RAM disk or kernel image by following the instructions in the
Administration Guide for your Linux distribution. For example, for SUSE Linux
Enterprise Server 15, run the following commands and then reboot your system:
cd /boot
sync
dracut -f
Using NAS/NFS for SAP HANA shared directory with FC SAN deployments
In an SAP HANA scale-out cluster, a single shared file system (/hana/shared) is
required and must be mounted on every node. Most SAP HANA installations use a
Network File System (NFS) for this purpose. PowerStore unified storage systems can
provide this file system with the NAS option. To use NAS for the shared file system, see
Creating the SAP HANA shared file system.
Host connection and setup using NAS/NFS

Each SAP HANA host requires a network connection with 25 Gb/s link speed to the dedicated storage network switches. For redundancy, the connection must have at least two NICs.
With two 25 Gb/s NICs on the SAP HANA hosts, you can optionally configure
active/active interface groups on the SAP HANA hosts to use network bonding (bonds),
where multiple network interfaces are aggregated into a single logical bonded interface.
Hosts that are configured with bonds require MLAG and LACP on the switches. The
following figure shows a sample network topology for eight SAP HANA nodes with two
25 Gb/s NICs, each connected through redundant switches to all four 25 Gb/s IP interface
ports on the PowerStore system:
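As a sketch (interface names are assumptions), an LACP bond of the two 25 Gb/s NICs can be created with iproute2 as follows; the matching MLAG/LACP configuration on the switches is outside the scope of this example:

```shell
# Create an LACP (802.3ad) bond from two 25 GbE interfaces
# (eth2/eth3 are example names):
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth2 down
ip link set eth2 master bond0
ip link set eth3 down
ip link set eth3 master bond0
ip link set bond0 up
```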
The file systems for the SAP HANA persistence are created on the PowerStore appliance,
as described in Creating NAS file systems for the SAP HANA persistence. Add the
_netdev mount option when mounting NAS devices. This mount option prevents the
system from attempting to mount these file systems until the network has been enabled
on the system.
When you install an SAP HANA system, either as a single-node instance or a multinode
scale-out cluster, you must automate the mounting of the SAP HANA persistent devices
by using /etc/fstab. The mounts must be evenly distributed across the PowerStore IP
addresses of the NAS servers on both PowerStore nodes within the appliance.
Note: The /hana/shared file system must be shared on all hosts on SAP HANA scale-out
systems in both FC SAN and NAS/NFS deployments.
1. In the PowerStore Manager UI, select Storage > File Systems and click
+ CREATE.
The Create File System page opens, as shown in the following figure:
2. On the Create a File System Details page, enter the required details and click
NEXT to specify the sharing protocol, as shown in the following figure:
3. Choose NFSv3, NFSv4, or both, depending on your requirements, and then define
the UNIX directory servers and DNS. Click NEXT to view a summary, as shown in
the following figure:
4. Click NEXT and then provide an NFS export name and (optionally) a description
for the export, as shown in the following figure:
5. Click NEXT to configure access to the file system based on your security
requirements, as shown in the following figure:
Configuring access
Mounting the HANA shared file system on the SAP HANA host
On the SAP HANA host:
1. Create the /hana/shared mount point by running the following command:
mkdir -p /hana/shared
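Then mount the shared file system from the PowerStore NAS server. The command below is a sketch: the export name hana_shared and the NFS mount options are assumptions and must be adapted to your environment:

```shell
# Mount the HANA shared file system using NFSv4 (export name is an example):
mount -t nfs -o vers=4,hard,timeo=600,rsize=1048576,wsize=1048576 \
    <NAS Server>:/hana_shared /hana/shared
```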
where <NAS Server> is the IP address or hostname of your NAS server on the
PowerStore system.
If you are using NFSv3, adjust the NFS version.
2. Create two file systems for every SAP HANA node, one for data and the other for
log. See Create a shared file system for the steps.
3. Distribute the file systems evenly across the two NAS servers.
This step is important for performance and load-balancing. For example, place
data and log for the first SAP HANA node on NAS server A and place data and
log for the second SAP HANA node on NAS server B. Continue to balance
subsequent nodes across the two NAS servers. The following figure shows an
example:
Mount the SAP HANA file system for data and log
On the SAP HANA hosts, create the mount points by running the following commands,
where x = 1, 2, 3, and so on up to the number of hosts in the scale-out system:
mkdir -p /hana/data/SID/mnt0000x
mkdir -p /hana/log/SID/mnt0000x
chmod -R 777 /hana/data/SID/
chmod -R 777 /hana/log/SID/
Note: For SAP HANA scale-out systems, all the mount points must be created on each host.
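The corresponding /etc/fstab entries follow the pattern below; export names and mount options are assumptions to be adapted to your environment:

```
<NAS Server>:/SID_data_mnt0000x /hana/data/SID/mnt0000x nfs vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,_netdev 0 0
<NAS Server>:/SID_log_mnt0000x  /hana/log/SID/mnt0000x  nfs vers=4,hard,timeo=600,rsize=1048576,wsize=1048576,_netdev 0 0
```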
where <NAS Server> is the IP address or hostname of your NAS server on the
PowerStore system. If you are using NFSv3, adjust the NFS version.
To achieve optimal performance, ensure that the SAP HANA mounts are evenly
distributed across all available PowerStore front-end IP networks of the NAS servers in
the /etc/fstab file. For more information, see Host connection and setup using
NAS/NFS.
In a 2+1 SAP HANA scale-out system with SID = NAS, the mount points are included in
the /etc/fstab file on each of the SAP HANA clients including any standby hosts. The
following code extract shows an example:
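A sketch of such an /etc/fstab for the 2+1 system with SID = NAS follows; the NAS server IP addresses, export names, and mount options are assumptions, with data and log for each node placed on alternating NAS servers as described above:

```
# NAS server A: 192.168.80.10, NAS server B: 192.168.80.11 (example addresses)
192.168.80.10:/NAS_data_mnt00001 /hana/data/NAS/mnt00001 nfs vers=4,hard,timeo=600,_netdev 0 0
192.168.80.10:/NAS_log_mnt00001  /hana/log/NAS/mnt00001  nfs vers=4,hard,timeo=600,_netdev 0 0
192.168.80.11:/NAS_data_mnt00002 /hana/data/NAS/mnt00002 nfs vers=4,hard,timeo=600,_netdev 0 0
192.168.80.11:/NAS_log_mnt00002  /hana/log/NAS/mnt00002  nfs vers=4,hard,timeo=600,_netdev 0 0
192.168.80.10:/hana_shared       /hana/shared            nfs vers=4,hard,timeo=600,_netdev 0 0
```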
If preferred, use the following example for /etc/fstab with NFSv3 parameters.
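The NFSv3 entries differ only in the version option; the following lines are a sketch with assumed export names and options:

```
192.168.80.10:/NAS_data_mnt00001 /hana/data/NAS/mnt00001 nfs vers=3,hard,timeo=600,rsize=1048576,wsize=1048576,_netdev 0 0
192.168.80.10:/NAS_log_mnt00001  /hana/log/NAS/mnt00001  nfs vers=3,hard,timeo=600,rsize=1048576,wsize=1048576,_netdev 0 0
```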
The operating system command mount -a mounts all the directories from the
PowerStore system. Run this command on each SAP HANA client.
Overview
This section describes the required settings in the SAP HANA global.ini file for an
SAP HANA scale-out installation on PowerStore systems.
SAP HANA storage connector
Deploying an SAP HANA multinode scale-out cluster on FC-connected systems requires
the SAP HANA storage connector (fcClient or fcClientMpath). The storage
connector is responsible for mounting and unmounting the persistence (data and log) to
the SAP HANA worker nodes.
The storage connector also writes SCSI-3 persistent reservations (PRs) to the devices.
Using the Linux sg_persist or mpathpersist command, it initiates an operation known as
I/O fencing, which ensures that only one SAP HANA worker host at a time has access to
a given set of data and log devices.
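The current reservation state of a device can be inspected from any host; the commands below are a sketch, with the WWID taken from the global.ini sample later in this section:

```shell
# Read the registered PR keys on a multipath device (WWID is an example):
mpathpersist --in -k /dev/mapper/3600601604f804a00a79d565d90f63337
# Read the active reservation:
mpathpersist --in -r /dev/mapper/3600601604f804a00a79d565d90f63337
```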
SAP HANA global.ini file
The storage connector API is controlled in the storage section of the SAP HANA
global.ini file, as shown in the following code sample. The storage section of the file
contains entries for the block devices. You can run the multipath -ll command on the
SAP HANA hosts to determine the worldwide identifiers (WWIDs) of the partition entries.
[communication]
listeninterface = .global
[multidb]
mode = multidb
database_isolation = low
singletenant = yes
[persistence]
basepath_datavolumes = /hana/data/ANA
basepath_logvolumes = /hana/log/ANA
use_mountpoints = yes
[storage]
ha_provider = hdb_ha.fcClient
partition_*_*__prtype = 5
partition_1_data__wwid = 3600601604f804a00a79d565d90f63337
partition_1_log__wwid = 3600601604f804a00f99d565d224bd861
partition_2_data__wwid = 3600601604f804a00a89d565d2fd702d9
partition_2_log__wwid = 3600601604f804a00fa9d565d72e37054
partition_3_data__wwid = 3600601604f804a00a99d565d2e9c1d5b
partition_3_log__wwid = 3600601604f804a00fa9d565d2593cbc1
[trace]
ha_fcclient = info
For more information about the SAP HANA scale-out installation, the storage connector,
and configuring the global.ini file, see the SAP HANA Administration Guide and the
SAP HANA Server Installation and Update Guide on the SAP Help Portal.
Installing an SAP HANA scale-out cluster
Installing an SAP HANA scale-out cluster on PowerStore NAS systems involves:
• Configuring the /etc/fstab file on each SAP HANA client and mounting all
the SAP HANA data, log, and shared directories from the PowerStore NAS
storage to the SAP HANA nodes
• Installing an SAP HANA scale-out instance with the SAP HANA hdblcm
command-line tool by using the NAS storage directories that you previously
created
Prerequisites
The configuration example in this guide assumes that the following basic installation and
configuration operations are complete on the SAP HANA nodes:
• The operating system is installed and configured according to the SAP
recommendations. Our example uses SUSE Linux 15 SP1 for SAP applications.
• The directories to be used for mount points have been created for
/hana/shared and also for the data directories
(/hana/data/SID/mnt0000x) and log directories
(/hana/log/SID/mnt0000x)on each of the SAP HANA nodes with 777
permissions.
• All network settings and bandwidth requirements for internode communications
are configured according to the SAP requirements.
• SSH keys have been exchanged between all SAP HANA nodes.
• System time synchronization has been configured through an NTP server.
• The SAP HANA installation DVD ISO file has been downloaded from the SAP
website and is available on a shared file system.
Note: SAP HANA can be installed only on certified server hardware. A certified SAP HANA expert
must perform the installation.
After all the file systems are mounted, you are ready to install the SAP HANA scale-out
cluster. Our example uses the hdblcm tool to install the SAP HANA 2+1 scale-out cluster.
For more information, see the SAP HANA Server Installation and Update Guide.
After the SAP HANA installation DVD ISO file has been extracted to a shared software-
repository file system that is mounted on all hosts, begin the installation by running the
following command from the extracted installation folder:
# /SAPShareNew/software/SAP/HANA/hana2_rev2053/SAP_HANA_DATABASE/hdblcm
Choose an action
Index | Action | Description
-----------------------------------------------
1 | install | Install new system
2 | extract_components | Extract components
3 | Exit (do nothing) |
Enter selected action index [3]: 1
SAP HANA Database version '2.00.053.00.1605092543' will be installed.
Implementing STONITH with the HA/DR provider for SAP HANA
This section applies only to multihost SAP HANA scale-out instances on NAS and the
host autofailover. On failover, the database on the standby host must have read- and
write-access to the files of the failed active host. If the failed host can still write to these
files, the files might become corrupted. Preventing this corruption is called fencing.
When you use shared file systems such as PowerStore NAS storage and NFSv3 or
NFSv4, the STONITH method is implemented to achieve proper fencing capabilities and
ensure that locks are always freed.
Note: For multihost SAP HANA scale-out instances and the host auto-failover with NFSv3, the
STONITH (SAP HANA HA/DR provider) implementation is mandatory. With NFSv4, a locking
mechanism based on lease-time is available. The locking mechanism can be used for I/O fencing
and STONITH is not required. However, STONITH can be used to speed up failover and ensure
that locks are always released.
In such a setup, the storage connector API can be used for invoking the STONITH calls.
During failover, the SAP HANA leading host calls the STONITH method of the custom
storage connector with the hostname of the failed host as the input value.
#
# IPMI mapping
#
10.230.79.85 hana01-ipmi
10.230.79.86 hana02-ipmi
10.230.79.87 hana03-ipmi
4. Verify that the IPMI tool is working on each SAP HANA host by running the
following command as root:
ipmitool power status -H hana01-ipmi -U root -P xxxx
5. Set the set-user-ID bit on ipmitool to grant <sid>adm execution permissions by
running:
chmod u+s /usr/bin/ipmitool
When using your own code here, copy this file to a location on /hana/shared
outside the HANA installation. This file will be overwritten with each
hdbupd call! To configure your own changed version of this file, add
lines similar to the following to your global.ini:
[ha_dr_provider_<HA_STONITH_Hook>]
provider = <HA_STONITH_Hook>
path = /hana/shared/HANA_Hooks
execution_order = 1
class HA_STONITH_Hook(HADRBase):

    def stonith(self, failingHost):
        # Excerpt: the full listing defines power_status, power_off, and
        # power_on earlier as ipmitool command strings for failingHost and
        # issues an initial power-off whose output is traced here.
        rc = 1
        self.tracer.info(output)
        time.sleep(10)
        (code, output) = Helper._runOsCommand(power_status)
        self.tracer.info(output)
        if 'is off' in output:
            msg = "Successfully powered off %s" % failingHost
            self.tracer.info(msg)
            rc = 0
        elif 'is on' in output:
            msg = "failed to power off %s, will try again" % failingHost
            self.tracer.info(msg)
            (code, output) = Helper._runOsCommand(power_off)
            self.tracer.info(output)
            time.sleep(10)
            (code, output) = Helper._runOsCommand(power_status)
            self.tracer.info(output)
            if 'is off' in output:
                msg = "Successfully powered off %s" % failingHost
                self.tracer.info(msg)
                rc = 0
            elif 'is on' in output:
                msg = "unable to power off %s - Please CHECK" % failingHost
                self.tracer.info(msg)
                return 1

        # Power back on the failed host
        if rc == 0:
            (code, output) = Helper._runOsCommand(power_on)
            time.sleep(10)
            self.tracer.info(output)
            (code, output) = Helper._runOsCommand(power_status)
            self.tracer.info(output)
            if 'is on' in output:
                msg = "successfully powered on %s" % failingHost
                self.tracer.info(msg)
                rc = 0
            elif 'is off' in output:
                msg = "unable to power on %s - will try again to power on" % failingHost
                self.tracer.info(msg)
                (code, output) = Helper._runOsCommand(power_on)
                self.tracer.info(output)
                time.sleep(10)
                (code, output) = Helper._runOsCommand(power_status)
                self.tracer.info(output)
                if 'is off' in output:
                    msg = "unable to power on %s - Please CHECK" % failingHost
                    self.tracer.info(msg)
                    rc = 1
                elif 'is on' in output:
                    msg = "Successfully powered on %s - Please CHECK" % failingHost
                    self.tracer.info(msg)
                    rc = 0
        self.tracer.info("leaving HANA HA stonith hook")
        return rc
    def preTakeover(self, isForce):
        if not isForce:
            # run pre-takeover code
            # run pre-check; return != 0 in case of error => will abort takeover
            return 0
        else:
            # possible force-takeover-only code
            # usually nothing to do here
            return 0

    def postTakeover(self, rc):
        if rc == 0:
            # normal takeover succeeded
            return 0
        elif rc == 1:
            # waiting for force takeover
            return 0
        elif rc == 2:
            # error, something went wrong
            return 0
port = parameters['port']
volume = parameters['volume']
serviceName = parameters['service_name']
database = parameters['database']
status = parameters['status']
databaseStatus = parameters['database_status']
systemStatus = parameters['system_status']
timestamp = parameters['timestamp']
isInSync = parameters['is_in_sync']
reason = parameters['reason']
siteName = parameters['siteName']
[ha_dr_provider_HA_STONITH_Hook]
provider = HA_STONITH_Hook
path = /hana/shared/HANA_Hooks
execution_order = 50
2. Using the SAP HANA Cockpit, in your SAP HANA database, select Database
Administration > Manage system configuration. You can add, configure, and
monitor the HA/DR provider information, as shown in the following figure:
Perform host autofailovers to ensure that the failovers work as expected and that
STONITH has been implemented on the failed host. The following figure shows an
example of output from the name server trace file following a host autofailover and
successful implementation of STONITH:
Post-installation configuration
File I/O optimization after the SAP HANA installation
The base layer of SAP HANA provides two file I/O interfaces:
• SimpleFile―Used for small, simple I/O requests on configuration files, traces, and
so on. This interface uses lightweight, platform-independent wrappers around
system calls.
• FileFactory & File―Used for huge, complex streams of I/O requests on the data
and log volumes and for backup and recovery. This interface uses synchronous
and asynchronous I/O operations.
You can configure the SAP HANA file I/O layer to optimize file I/O for a specific file system
and storage system.
Note: The Linux XFS file system is used on all Dell EMC storage volumes for the SAP HANA
persistence.
su - <sid>adm
hdbparam -p # lists current parameter setting
hdbparam --paramset fileio [LOG].max_parallel_io_requests=128
hdbparam --paramset fileio [LOG].num_completion_queues=4
hdbparam --paramset fileio [LOG].num_submit_queues=8
The following figure shows what the fileio section of global.ini looks like in the
Cockpit after the parameters are set:
All other parameters are set by default during installation. For more information, see SAP
Note 2399079—Elimination of hdbparam in SAP HANA 2 (access requires SAP user
credentials).
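Because hdbparam was removed with SAP HANA 2.0 SPS 02, the same log-volume settings can instead be maintained directly in the fileio section of global.ini. The following snippet is a sketch of that mapping; consult SAP Note 2399079 for the exact parameter names before applying it:

```
[fileio]
max_parallel_io_requests[log] = 128
num_completion_queues[log] = 4
num_submit_queues[log] = 8
```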
Configuring SAP HANA NSE
SAP HANA NSE uses the data volume that the main database is using. While no special
configuration steps are necessary for the NSE persistence layer, sizing must consider the
additional capacity for the feature.
When you use the SAP HANA NSE feature, a portion of DRAM is used as a buffer cache
to dynamically load paged data from the persistence (data volume). By default, the size of
this buffer cache is 10 percent of the total main memory of the system. For more
information, see “SAP HANA Buffer Cache” in the SAP HANA Administration Guide.
SAP recommends not exceeding a ratio of 1:8 between the buffer cache size and the
total amount of warm data handled by SAP HANA NSE. When using SAP HANA NSE,
you can store data in the warm tier at the following granularities:
• Tables
• Columns
• Partitions
Data location handling is built into SAP HANA's Data Definition Language (DDL). Manage
the configuration by using the SAP HANA CLI SQL client hdbsql or the SQL editor in SAP
HANA Studio or SAP HANA Cockpit.
To create a table using SAP HANA NSE (the warm tier), run the following DDL command:
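As a sketch (the table and column names are hypothetical), a table is placed in NSE with the PAGE LOADABLE clause, and existing objects can be moved with ALTER TABLE:

```sql
-- Create a new column table directly in the warm tier (NSE):
CREATE COLUMN TABLE sales_history (id INTEGER PRIMARY KEY, region NVARCHAR(8)) PAGE LOADABLE;

-- Move an existing hot table, including its columns and partitions, to NSE:
ALTER TABLE sales_archive PAGE LOADABLE CASCADE;
```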
SAP HANA NSE Advisor, which is based on real-time statistics from an existing SAP
HANA database, provides recommendations for which data to move from the hot (in-
memory) tier to the warm tier (SAP HANA NSE). Use the SAP HANA NSE Advisor
information for guidance regarding the amount of data to move to the warm tier on an
existing system.
For more information about SAP HANA NSE, SAP HANA NSE Data Sizing, and related
topics, see the SAP Help Portal.
Conclusion
Summary
Using SAP HANA in TDI deployments with Dell EMC PowerStore enterprise storage
systems provides many benefits, including reduced hardware and operational costs, lower
risks, and greater hardware vendor flexibility.
You can easily transition to this new architecture and rely on Dell Technologies
Professional Services to minimize risk.
SAP certifies PowerStore storage systems for both the FC SAN and NAS/NFS protocols,
giving the customer the choice of block or file for SAP HANA. You can use these storage
systems for SAP HANA production and nonproduction installations and for single-node
and scale-out systems.
Findings
During our tests with SAP HANA on PowerStore storage systems, we observed that:
• SAP HANA production installations on PowerStore meet the SAP HANA
storage performance KPIs when the configuration and scalability rules that are
described in this guide are applied.
• Using PowerStore clusters for block storage with up to four appliances enables
customers to linearly scale the number of FC SAN-connected SAP HANA nodes.
• PowerStore always-on inline data reduction provides significant storage efficiency
and reduces the capacity requirement. Specific savings might vary depending on
use case scenarios.
References
Dell Technologies documentation
The following documentation provides additional relevant information. Access to these
documents depends on your login credentials. If you do not have access to a document,
contact your Dell Technologies representative.
• PowerStore Info Hub
• Solutions Info Hub for SAP
• Dell EMC PowerStore: Best Practices Guide
• Host Connectivity Guide for Linux
Note: The following SAP notes are available at the SAP Knowledge Base. Access to the notes
requires an SAP username and password.