NetBackup™ Deduplication Guide
Release 10.5
Last updated: 2024-09-25
Legal Notice
Copyright © 2024 Veritas Technologies LLC. All rights reserved.
Veritas, the Veritas Logo, Veritas Alta, and NetBackup are trademarks or registered trademarks
of Veritas Technologies LLC or its affiliates in the U.S. and other countries. Other names may
be trademarks of their respective owners.
This product may contain third-party software for which Veritas is required to provide attribution
to the third party (“Third-party Programs”). Some of the Third-party Programs are available
under open source or free software licenses. The License Agreement accompanying the
Software does not alter any rights or obligations you may have under those open source or
free software licenses. Refer to the Third-party Legal Notices document accompanying this
Veritas product or available at:
https://www.veritas.com/about/legal/license-agreements
The product described in this document is distributed under licenses restricting its use, copying,
distribution, and decompilation/reverse engineering. No part of this document may be
reproduced in any form by any means without prior written authorization of Veritas Technologies
LLC and its licensors, if any.
The Licensed Software and Documentation are deemed to be commercial computer software
as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19
"Commercial Computer Software - Restricted Rights" and DFARS 227.7202, et seq.
"Commercial Computer Software and Commercial Computer Software Documentation," as
applicable, and any successor regulations, whether delivered by Veritas as on premises or
hosted services. Any use, modification, reproduction release, performance, display or disclosure
of the Licensed Software and Documentation by the U.S. Government shall be solely in
accordance with the terms of this Agreement.
http://www.veritas.com
Technical Support
Technical Support maintains support centers globally. All support services will be delivered
in accordance with your support agreement and the then-current enterprise technical support
policies. For information about our support offerings and how to contact Technical Support,
visit our website:
https://www.veritas.com/support
You can manage your Veritas account information at the following URL:
https://my.veritas.com
If you have questions regarding an existing support agreement, please email the support
agreement administration team for your region as follows:
Japan CustomerCare_Japan@veritas.com
Documentation
Make sure that you have the current version of the documentation. Each document displays
the date of the last update on page 2. The latest documentation is available on the Veritas
website:
https://sort.veritas.com/documents
Documentation feedback
Your feedback is important to us. Suggest improvements or report errors or omissions to the
documentation. Include the document title, document version, chapter title, and section title
of the text on which you are reporting. Send feedback to:
NB.docs@veritas.com
You can also see documentation information or ask a question on the Veritas community site:
http://www.veritas.com/community/
Contents
About MSDP optimized duplication within the same domain ................ 130
About the media servers for MSDP optimized duplication within
the same domain ............................................................ 132
About MSDP push duplication within the same domain ................. 132
About MSDP pull duplication within the same domain ................... 135
Configuring MSDP optimized duplication within the same
NetBackup domain ......................................................... 136
Configuring NetBackup optimized duplication or replication
behavior ....................................................................... 140
Setting NetBackup configuration options by using the command
line .............................................................................. 142
About MSDP replication to a different domain ................................... 143
Configuring MSDP replication to a different NetBackup domain ............ 144
About NetBackup Auto Image Replication .................................. 146
About trusted primary servers for Auto Image Replication ............. 153
About the certificate to use to add a trusted primary server ............ 157
Add a trusted primary server ................................................... 158
Remove a trusted primary server ............................................. 159
Enable inter-node authentication for a NetBackup clustered primary
server ........................................................................... 160
Configuring NetBackup CA and NetBackup host ID-based
certificate for secure communication between the source and
the target MSDP storage servers ....................................... 161
Configuring external CA for secure communication between the
source MSDP storage server and the target MSDP storage
server ........................................................................... 163
Configuring a target for MSDP replication to a remote domain
.................................................................................... 163
About configuring MSDP optimized duplication and replication
bandwidth ........................................................................... 167
About performance tuning of optimized duplication and replication for
MSDP cloud ........................................................................ 168
About storage lifecycle policies ...................................................... 169
About the storage lifecycle policies required for Auto Image
Replication .................................................................... 170
Creating a storage lifecycle policy ............................................ 171
Storage Lifecycle Policy dialog box settings ............................... 173
About MSDP backup policy configuration ......................................... 175
Creating a backup policy .............................................................. 176
Resilient network properties .......................................................... 176
Resilient connection resource usage ........................................ 178
Specify resilient connections for clients ..................................... 179
Adding an MSDP load balancing server ........................................... 180
Storage Platform Web Service (spws) does not start .................... 734
Disk volume API or command line option does not work ............... 734
Viewing MSDP disk errors and events ............................................. 735
MSDP event codes and messages ................................................. 735
Unable to obtain the administrator password to use an AWS EC2
instance that has a Windows OS .............................................. 738
Troubleshooting multi-domain issues ............................................. 738
Unable to configure OpenStorage server from another domain
.................................................................................... 738
MSDP storage server is down when you configure an OpenStorage
server ........................................................................... 739
MSDP server is overloaded when it is used by multiple NetBackup
domains ....................................................................... 740
Troubleshooting the cloud compaction error messages ....................... 741
Type Description
NetBackup appliance Veritas provides several hardware and software solutions that
deduplication include NetBackup deduplication.
https://www.veritas.com/content/support/en_US/Appliances.html
Chapter 2
Quick start
This chapter includes the following topics:
Note: When deduplication is performed on the server side or the client side, the
same plug-in library is loaded. As a result, the deduplication capabilities and results
are not different.
By default, deduplication from the client side is disabled. From a policy, you can
enable client-side deduplication for all clients. Or, you can enable it on a per-host
basis in the host properties.
The following is an example of how to use the bpclient command with the
-client_direct option to add the client to the clientDB and enable Prefer to use
client-side deduplication:
Linux:
/usr/openv/NetBackup/bin/admincmd/bpclient
-client client_name -add -client_direct 1
Windows:
\Program Files\Veritas\NetBackup\bin\admincmd\bpclient.exe
-client client_name -add -client_direct 1
5 On the Review page, verify that all settings and information are correct. Click
Finish.
The disk pool creation and replication configuration continue in the background
if you close the window. If there is an issue with validating the credentials and
configuration of the replication, you can use the Change option to adjust any
settings.
6 Click Add storage unit at the top of the screen.
7 Select Media Server Deduplication Pool (MSDP) from the list and click Start.
8 In Basic properties, enter the Name of the MSDP storage unit and click Next.
9 In Disk pool, select the disk pool that was created and then click Next.
10 In the Media server tab, use the default selection of Allow NetBackup to
automatically select and then click Next.
11 Review the setup of the storage unit and then click Save.
Model Description
One-to-one model A single production datacenter can back up to a disaster recovery site.
Many-to-one model Remote offices in multiple domains can back up to a storage device in
a single domain.
2 The target primary server (Domain 2) The storage server in the target domain
recognizes that a replication event has
occurred. It notifies the NetBackup
primary server in the target domain.
4 The target primary server (Domain 2) After the image is imported into the
target domain, NetBackup continues to
manage the copies in that domain.
Figure 2-1 is a typical A.I.R. setup that shows an image that is replicated from one
source domain to one target domain.
9 Repeat these steps in the target domain. Use the source primary server name
as the primary server name in the Validate Certificate Authority field.
10 Configure a storage server at both the source domain and the target domain.
The image is replicated from one storage server in the source domain to one
storage server in the target domain. You must configure the MSDP storage
at both the source domain and the target domain.
Use the NetBackup web UI to configure the MSDP storage server, disk pool,
and storage unit.
■ Windows:
install_path\NetBackup\bin
\nbcertcmd -getCACertificate -server target_primary_server
■ UNIX:
/usr/openv/netbackup/bin
/nbcertcmd -getCACertificate -server target_primary_server
2 On the source MSDP storage server, run the following command to get the
certificate generated by target NetBackup primary server:
■ Windows:
install_path\NetBackup\bin
\nbcertcmd -getCertificate
-server target_primary_server -token token_string
■ UNIX:
/usr/openv/netbackup/bin
/nbcertcmd -getCertificate
-server target_primary_server -token token_string
3 Enter the storage lifecycle policy name and select the data
classification.
4 Click Add.
6 For the Destination storage, select the storage unit of the target
MSDP storage server.
Create a backup policy to perform a backup and run the SLP.
At the source domain, create a backup policy and use the SLP as the policy
storage. Run the backup. After the backup runs, the replication job at the
source domain runs. After a short period of time, the import job at the target
domain runs. The target domain manages the replicated image at the target
storage server.
Chapter 3
Planning your deployment
This chapter includes the following topics:
Step 1 Learn about deduplication nodes See “About MSDP deduplication nodes” on page 36.
and storage destinations
See “About the NetBackup deduplication destination” on page 36.
Step 2 Understand the storage capacity See “About MSDP capacity support and hardware requirements”
and requirements on page 37.
Step 3 Determine which type of See “About NetBackup media server deduplication” on page 42.
deduplication to use
See “About NetBackup Client Direct deduplication” on page 47.
Step 4 Determine the requirements for See “About MSDP storage servers” on page 44.
deduplication hosts
See “About MSDP server requirements” on page 45.
Step 5 Determine the credentials for See “About the NetBackup Deduplication Engine credentials”
deduplication on page 50.
Step 6 Read about compression and See “About MSDP compression” on page 121.
encryption
See “About MSDP encryption” on page 122.
Step 7 Read about optimized synthetic See “About MSDP optimized synthetic backups” on page 53.
backups
Step 8 Read about deduplication and SAN See “About MSDP and SAN Client” on page 53.
Client
Step 9 Read about optimized duplication See “About MSDP optimized duplication and replication” on page 54.
and replication
Step 10 Read about stream handlers See “About MSDP stream handlers” on page 55.
Step 11 Read about best practices for See “MSDP deployment best practices” on page 61.
implementation
Step 12 Determine the storage requirements See “About provisioning the storage for MSDP” on page 67.
and provision the storage
See “About MSDP storage and connectivity requirements” on page 39.
Step 13 License MSDP See “About the MSDP license” on page 71.
Step 14 Configure MSDP See “Configuring MSDP server-side deduplication” on page 75.
Step 15 Migrate from other storage to See “Migrating from another storage type to MSDP” on page 742.
NetBackup deduplication
The naming conventions for the NetBackup Deduplication Engine differ from these
NetBackup naming conventions.
See “About the NetBackup Deduplication Engine credentials” on page 50.
Storage server The storage server deduplicates the backups, writes the data to the
storage, and manages the storage.
Load balancing Load balancing servers assist the storage server by deduplicating
servers backups. Load balancing servers are optional.
Clients The clients may include the clients that deduplicate their own data
(Client Direct).
See “About NetBackup Client Direct deduplication” on page 47.
Multiple media server deduplication nodes can exist. Nodes cannot share servers
or storage.
Each node manages its own storage. Deduplication within each node is supported;
deduplication between nodes is not supported.
See “About NetBackup media server deduplication” on page 42.
See “About MSDP storage servers” on page 44.
Table 3-2 MSDP capacity for NetBackup 10.1.1 and earlier versions
The following table lists the MSDP capacity for NetBackup version 10.2 and later
with P/S cache MSDP pools.
Note: Although the new P/S cache enables support of larger MSDP pools, you
must ensure that you have the appropriate resources available to support the larger
pools.
To identify the supported applications and usage information for Flex Appliance,
see the following article:
https://www.veritas.com/support/en_US/article.100042995
NetBackup reserves 4 percent of the storage space for the deduplication database
and transaction logs. Therefore, a storage full condition is triggered at a 96-percent
threshold. If you use separate storage for the deduplication database, NetBackup
still uses the 96-percent threshold to protect the data storage from any possible
overload.
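The threshold arithmetic can be sketched as follows; the 100-TB pool size is only an assumed example for illustration:

```shell
# Illustrative only: compute where the storage-full condition triggers for an
# assumed pool size, given the 4-percent reservation described above.
pool_tb=100                           # assumed total pool size in TB
full_at_tb=$(( pool_tb * 96 / 100 ))  # 96-percent storage-full threshold
echo "Storage full triggers at ${full_at_tb} TB of ${pool_tb} TB"
```

For a 100-TB pool, the storage-full condition is therefore reached with 4 TB still unoccupied, because that space is reserved for the deduplication database and transaction logs.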
About MSDP storage and connectivity requirements
Storage media
The following are the minimum requirements for single stream read or write
performance for each disk volume. Greater individual data stream capability or
aggregate capability may be required to satisfy your objectives for writing to and
reading from disk.
Local disk storage may leave you vulnerable in a disaster. SAN disk can be
remounted at a newly provisioned server with the same name.
When you deploy NetBackup, provide a dedicated file system for the MSDP storage.
If the file system that is used for MSDP storage is shared with other applications,
performance may degrade and the reporting of storage utilization may be affected.
If another application writes an excessive amount of data, the file system may fill
unexpectedly. If storage reaches 96% of capacity, the MSDP storage server
becomes unavailable for backup jobs.
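Because an unexpectedly full file system stops backups at the 96-percent threshold, a simple usage check can give early warning. The following is a hypothetical sketch, not a NetBackup tool: the mount point defaults to / here only so the example can run anywhere; point MSDP_MOUNT at your dedicated MSDP file system (for example, /msdp/vol0):

```shell
# Hypothetical monitoring sketch: warn before the MSDP file system reaches
# the 96-percent storage-full threshold. MSDP_MOUNT is an assumed variable.
MSDP_MOUNT="${MSDP_MOUNT:-/}"
used_pct=$(df -P "$MSDP_MOUNT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$used_pct" -ge 90 ]; then
    echo "WARNING: $MSDP_MOUNT is ${used_pct}% full; MSDP stops backups at 96%"
else
    echo "$MSDP_MOUNT is ${used_pct}% full"
fi
```

Such a check could run from cron on the storage server, alerting well before the storage-full condition makes the storage server unavailable.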
NetBackup Media Server Deduplication Pool does not support the following
storage types for deduplication storage:
■ Network Attached Storage (that is, file based storage protocols) such as CIFS
or NFS.
■ The ZFS file system.
The NetBackup compatibility lists are the definitive source for supported operating
systems, computers, and peripherals. See the compatibility lists available at the
following website:
http://www.netbackup.com/compatibility
The storage must be provisioned and operational before you can configure
deduplication in NetBackup.
See “About provisioning the storage for MSDP” on page 67.
Storage connection
The storage must be direct-attached storage (DAS), internal disks, or connected
by a dedicated, low latency storage area network (Fibre Channel or iSCSI).
A storage area network should conform to the following criteria:
HBAs The storage server should have an HBA or HBAs dedicated to the storage.
Those HBAs must have enough bandwidth to satisfy your throughput
objectives.
See “Fibre Channel and iSCSI comparison for MSDP” on page 41.
See “About NetBackup media server deduplication” on page 42.
Genesis
■ Fibre Channel: Storage networking architecture that is designed to handle the
same block storage format that storage devices use.
■ iSCSI: Storage network protocol that is built on top of TCP/IP to use the same
wiring as the rest of the enterprise.
Protocol
■ Fibre Channel: FCP is a thin, single-purpose protocol that provides lossless,
in-order frame delivery and low switch latency.
■ iSCSI: iSCSI is a multiple-layer implementation that facilitates data transfers
over intranets and long distances. The SCSI protocol expects lossless, in-order
delivery, but iSCSI uses TCP/IP, which experiences packet loss and out-of-order
delivery.
Host CPU load
■ Fibre Channel: Low. Fibre Channel frame processing is offloaded to dedicated
low-latency HBAs.
■ iSCSI: Higher. Most iSCSI implementations use the host processor to create,
send, and interpret storage commands. Therefore, Veritas requires dedicated
network interfaces on the storage server to reduce storage server load and
reduce latency.
Flow control
■ Fibre Channel: A built-in flow control mechanism ensures that data is sent to
a device when it is ready to accept it.
■ iSCSI: No built-in flow control. Veritas recommends that you use the Ethernet
priority-based flow control as defined in the IEEE 802.1Qbb standard.
[Figure: deduplication plug-ins on the media servers send data to the NetBackup
Deduplication Engine on the storage server, which writes to the Media Server
Deduplication Pool]
Note: If you want to use Fibre Channel for VMware backups in a Veritas appliance
configuration, the same Fibre Channel datastore LUNs must be zoned to all of the
load balancing media servers.
Intel and AMD have similar performance and perform well on single-core throughput.
Newer SPARC processors, such as the SPARC64 VII, provide single-core
throughput that is similar to AMD and Intel. In contrast, UltraSPARC T1 and T2
single-core performance does not approach that of the AMD and Intel processors.
Tests show that the UltraSPARC processors can achieve high aggregate throughput.
However, they require eight times as many backup streams as AMD and Intel
processors to do so.
CPU
■ Storage server: Veritas recommends at least a 2.2-GHz clock rate. A 64-bit
processor is required. At least four cores are required; Veritas recommends
eight cores. For 64 TBs of storage, the Intel x86-64 architecture requires eight
cores.
■ Load balancing server: Veritas recommends at least a 2.2-GHz clock rate. A
64-bit processor is required. At least two cores are required. Depending on
throughput requirements, more cores may be helpful.
Operating system
■ The operating system must be a supported 64-bit operating system. See the
NetBackup Software Compatibility List for your NetBackup release on the
Veritas Support website.
Note: In some environments, a single host can function as both a NetBackup primary
server and as a deduplication server. Such environments typically run fewer than
100 total backup jobs a day. (Total backup jobs are backups to any storage
destination, including deduplication and non-deduplication storage.) If you perform
more than 100 backups a day, deduplication operations may affect primary server
operations.
[Figure: deduplication plug-ins and the NetBackup Deduplication Engine]
The clients that deduplicate their own data conform to the standard NetBackup
release level compatibility. The NetBackup Release Notes for each release defines
the compatibility between NetBackup releases. To take advantage of any new
features, improvements, and fixes, Veritas recommends that the clients and the
servers be at the same release and revision.
See “About NetBackup Client Direct deduplication” on page 47.
See “About the NetBackup deduplication options” on page 20.
See “About MSDP server requirements” on page 45.
■ Asterisk (*)
■ Backward slash (\) and forward slash (/)
■ Double quote (")
■ Left parenthesis [(] and right parenthesis [)]
■ Less than (<) and greater than (>) signs
■ Caret sign (^)
■ Percent sign (%)
■ Ampersand (&)
■ Spaces
■ Leading and trailing quotes
■ Square brackets ([])
■ At sign (@)
Veritas appliance products that use the deduplication engine may have more
restrictive password requirements than those mentioned here. See the
appliance-specific documentation for password guidelines.
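As an illustrative pre-check only (a sketch, not NetBackup's own validation), a candidate password can be screened for the characters listed above before you enter the credentials:

```shell
# Hypothetical helper: report "invalid" if a candidate Deduplication Engine
# password contains any character from the list above (including spaces and
# leading or trailing quotes), otherwise report "ok".
check_dedupe_password() {
    # Bracket expression covers: ] [ * \ / " ( ) < > ^ % & space @
    if printf '%s' "$1" | LC_ALL=C grep -q '[][*\\/"()<>^%& @]'; then
        echo invalid
    elif printf '%s' "$1" | grep -q "^'\|'\$"; then
        echo invalid
    else
        echo ok
    fi
}
check_dedupe_password 'Str0ngPassword'   # prints: ok
check_dedupe_password 'bad pass%'        # prints: invalid
```

This only catches the forbidden characters; it does not enforce any length or complexity rules that an appliance product may add.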
Note: You cannot change the NetBackup Deduplication Engine credentials after
you enter them. Therefore, carefully choose and enter your credentials. If you must
change the credentials, contact your Veritas support representative.
Configure a specific To use a specific interface, you can enter that interface name
interface when you configure the deduplication storage server. NetBackup
uses this interface for all deduplication traffic unless you also
configure a separate interface for duplication and replication.
Configure an interface You can configure a separate network interface for the duplication
for duplication and and the replication traffic. The backup and restore traffic continues
replication traffic to use the default interface or the specific configured interface.
Port Usage
10082 The NetBackup Deduplication Engine (spoold). Open this port between the
hosts that deduplicate data. Hosts include load balancing servers and the clients
that deduplicate their own data.
10102 The NetBackup Deduplication Manager (spad). Open this port between the
hosts that deduplicate data. Hosts include load balancing servers and the clients
that deduplicate their own data.
443 Open this port between the MSDP server and the cloud storage target such as
AWS or Azure.
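A quick reachability check of these ports from a host that deduplicates data can be sketched as follows. The function and host name are hypothetical, and the /dev/tcp probe requires a shell that supports it, such as bash:

```shell
# Hypothetical sketch: probe the MSDP ports listed above (spoold, spad, and
# the cloud storage target port) from a deduplicating host.
check_msdp_ports() {
    host="$1"
    for port in 10082 10102 443; do
        # Attempt a TCP connection; failure output is suppressed.
        if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
            echo "${host}:${port} reachable"
        else
            echo "${host}:${port} not reachable"
        fi
    done
}
# Example invocation (host name is a placeholder):
# check_msdp_ports msdp-server.example.com
```

A "not reachable" result for 10082 or 10102 usually points at a firewall between the hosts that deduplicate data.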
About MSDP optimized synthetic backups
What Description
Requirements The target storage unit's deduplication pool must be the same
deduplication pool on which the source images reside.
Limitations NetBackup does not support storage unit groups as a destination for
optimized synthetic backups. If NetBackup cannot produce the optimized
synthetic backup, NetBackup creates the more data-movement intensive
synthetic backup.
SAN clients can be used with the deduplication option; however, the deduplication
must occur on the media server, not the client. Configure the media server to be
both a deduplication storage server (or load balancing server) and an FT media
server. The SAN client backups are then sent over the SAN to the deduplication
server/FT media server host. At that media server, the backup stream is
deduplicated.
Do not enable client-side deduplication on SAN Clients. The data processing for
deduplication is incompatible with the high-speed transport method of Fibre
Transport. Client-side deduplication relies on two-way communication over the LAN
with the media server. A SAN client streams the data to the FT media server at a
high rate over the SAN.
Within the same NetBackup domain See “About MSDP optimized duplication within the
same domain” on page 130.
Table 3-9 MSDP job load performance for an MSDP storage server
When Description
Normal operation
Normal operation is when all clients have been backed up once.
Approximately 15 to 20 jobs can run concurrently and with high performance
under the following conditions:
Storage approaches full capacity
NetBackup maintains the same number of concurrent backup jobs as during
normal operation under the following conditions:
■ The hardware meets minimum requirements. (More capable hardware
improves performance.)
■ The amount of data that is stored is between 85% to 90% of the capacity
of the storage.
For data that has already been deduplicated, the first backup with a new stream
handler produces a lower deduplication rate. After that first backup, the deduplication
rate should surpass the rate from before the new stream handler was used.
Veritas continues to develop additional stream handlers to improve backup
deduplication performance.
Note: When you use the Oracle stream handler, Veritas recommends that you do
not use variable-length deduplication.
The cacontrol command utility with the --sth flag overrides the default
behavior of NetBackup by creating a Marker Entry for a client, policy, or stream
type in a configuration file. The cacontrol command utility is located in the following
locations:
■ Windows: install_path\Veritas\pdde\cacontrol
■ UNIX: /usr/openv/pdde/pdcr/bin/cacontrol
In the following examples for cacontrol, STHTYPE must be set to Oracle to
configure the Oracle stream handler.
In NetBackup 8.3, you can configure cacontrol using the following options:
■ You can query the settings for the stream handler per client and policy.
■ You can enable the stream handler per client and policy.
■ You can delete the settings for client and policy (return to default behavior).
When using the cacontrol command utility to create a Marker Entry in NetBackup
10.0, priority is given to the more granular configuration. For example:
The stream handler is enabled because the more granular configuration in Marker
Entry 1 has higher priority.
In NetBackup 10.0, you can configure cacontrol using the following options:
■ You can query the settings for the stream handler per client and policy.
■ You can enable the stream handler per client and policy.
■ You can delete the settings for a client and policy (return to default behavior).
■ You can query the settings for the stream handler per policy.
■ You can delete the settings for the stream handler per policy (return to default
behavior).
■ You can query the settings for the stream handler per stream handler type.
■ You can delete the settings for a stream handler (return to default behavior).
■ You can disable the stream handler per stream handler type.
You can protect the Microsoft SQL database with NetBackup using the following
methods:
■ MS-SQL-Server policy
Use the MS-SQL-Server policy type to protect the Microsoft SQL database. This
policy type is the recommended way to protect the Microsoft SQL database.
In this case, the Microsoft SQL stream handler is enabled automatically. The
policy has multiple options, including the use of a user-provided batch file and
intelligent policy.
■ Standard policy
Use a standard policy to protect Microsoft SQL dump files.
In this case, the user dumps Microsoft SQL Server to a file and creates a
standard policy to back up the dumped file.
You must use the cacontrol command to enable the Microsoft SQL stream handler
manually.
Following are the cacontrol options that you can use to manage the Microsoft
SQL Server stream handler:
Option Description
cacontrol --sth get <Oracle|MSSQL> <client> <policy>
Get the specified marker entry status.
cacontrol --sth getbypolicy <Oracle|MSSQL> <policy>
Get the specified marker entry status for a given policy.
cacontrol --sth deletebypolicy <Oracle|MSSQL> <policy>
Delete the specified marker entry for a given policy.
cacontrol --sth updatebypolicy <Oracle|MSSQL> <policy> enabled|disabled
Update the specified marker entry status for a given policy.
2 Delete the stream handler settings for a policy (returns to the default behavior).
Microsoft SQL stream handler is enabled by default.
cacontrol --sth deletebypolicy MSSQL <POLICY name>
5 Delete the stream handler settings for a policy and a client (returns to the default
behavior). Microsoft SQL stream handler is enabled by default.
cacontrol --sth delete MSSQL <Client name> <POLICY name>
6 Query the settings for the stream handler for a policy and a client.
cacontrol --sth get MSSQL <Client name> <POLICY name>
Things to consider:
■ When the Client Direct setting is enabled, the Microsoft SQL stream handler is
not used even if the MS-SQL-Server policy type is used, because the pdplugin
does not know the policy type when it runs on the client side.
To enable the stream handler, use the cacontrol command to enable the
Microsoft SQL stream handler for the policy or for the client and policy.
■ Do not enable the NetBackup compression settings when the storage type is
MSDP. This setting causes deduplication loss whether the Microsoft SQL stream
handler is enabled or disabled.
■ The Microsoft SQL compression setting is provided by Microsoft SQL Server.
It compresses the SQL data. When Microsoft SQL native compression is enabled,
the deduplication rate may drop.
■ Microsoft SQL stream handler works well with the Microsoft SQL Server
Transparent Data Encryption (TDE). Microsoft SQL TDE only encrypts the SQL
Planning your deployment 61
MSDP deployment best practices
data and the SQL page structure does not change. The deduplication is not lost
when the Microsoft SQL Server TDE is enabled.
If you configure client deduplication, the clients deduplicate their own data. Some
of the deduplication load is removed from the deduplication storage server and
the load balancing servers.
Veritas recommends the following strategies to scale MSDP:
■ For the initial full backups of your clients, use the deduplication storage server.
For subsequent backups, use load balancing servers.
■ Enable client-side deduplication gradually.
If a client cannot tolerate the deduplication processing workload, be prepared
to move the deduplication processing back to a server.
Table 3-11 MSDP requirements and limitations for storage unit groups
What Description
Requirements A group must contain storage units of one storage destination type only.
That is, a group cannot contain both Media Server Deduplication Pool
storage units and storage units with other storage types.
Limitations NetBackup does not support the following for storage unit groups:
recovery, you may need to set the storage server configuration by using a saved
configuration file.
If you save the storage server configuration, you must edit it so that it includes only
the information that is required for recovery.
See “About saving the MSDP storage server configuration” on page 202.
See “Saving the MSDP storage server configuration” on page 203.
See “Editing an MSDP storage server configuration file” on page 204.
Up to 64 TB
400 TB
How many storage instances you provision depends on your storage requirements
for your backups. If your requirements are greater than one deduplication node can
accommodate, you can configure more than one node.
See “About MSDP deduplication nodes” on page 36.
Optimized duplication and replication can also affect the number of nodes you
provision.
See “About MSDP optimized duplication and replication” on page 54.
Other NetBackup requirements may affect how you provision the storage.
See “About MSDP storage and connectivity requirements” on page 39.
How to provision the storage is beyond the scope of the NetBackup documentation.
Consult the storage vendor’s documentation.
Up to 64 TB of storage
Provision the backup storage so that it appears as a single mount point to the
operating system.
Because the storage requires a directory path, do not use only the root node (/) or
a drive letter (E:\) as the storage path. (That is, do not mount the storage as a root
node (/) or a drive letter (E:\).)
If you use a separate disk volume for the deduplication database, provision a 1-TB
volume on a different mount point than the backup data storage.
5 Configure an MSDP storage server. Ensure that the Use alternate path for
deduplication database option is selected. Provide the storage path as
/msdp/vol0/data and the database path as /msdp/cat.
If a non-root user is used for the NetBackup media server, run the following
command to change the owner of the newly created volumes to the NetBackup
media server service user:
chown -R <NBU-service-user>:root <MSDP-volume-path>
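As a sketch, the preparation in step 5 can be scripted as follows. The paths and the service user are placeholders taken from the example above; BASE="." makes this a dry run, and on a real media server you would target the /msdp mount points directly:

```shell
# Sketch of step 5's directory preparation (hypothetical paths and user).
# BASE="." makes this a dry run; target the real /msdp mount points on a
# real media server.
BASE="."
mkdir -p "$BASE/msdp/vol0/data"   # storage path entered in the wizard
mkdir -p "$BASE/msdp/cat"         # alternate deduplication database path

# With a non-root NetBackup service user, hand the volumes to that user.
# NBU_USER is a placeholder; the chown is skipped here when it is unset.
if [ -n "$NBU_USER" ]; then
    chown -R "$NBU_USER":root "$BASE/msdp/vol0" "$BASE/msdp/cat"
fi
```

Remember that on a real deployment the volumes are mount points of separately provisioned storage, not plain directories.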
For supported systems, see the InfoScale hardware compatibility list at the Veritas
website:
http://www.veritas.com/
Note: Although InfoScale Storage supports NFS, NetBackup does not support NFS
targets for Media Server Deduplication Pool storage. Therefore, Media Server
Deduplication Pool does not support NFS with InfoScale Storage.
Chapter 5
Licensing deduplication
This chapter includes the following topics:
■ About NetBackup WORM storage support for immutable and indelible data
Step 1 Install the license for deduplication.
See “Licensing NetBackup MSDP” on page 72.
Step 2 Create NetBackup log file directories on the primary server and the media
servers.
See “NetBackup MSDP log files” on page 714.
See “Creating NetBackup log file directories for MSDP” on page 714.
Step 3 Configure the Deduplication Multi-Threaded Agent behavior.
The Deduplication Multi-Threaded Agent uses the default configuration values that
control its behavior. You can change those values if you want to do so.
See “About the MSDP Deduplication Multi-Threaded Agent” on page 78.
Step 4 Configure the fingerprint cache behavior.
Configuring the fingerprint cache behavior is optional.
See “About the MSDP fingerprint cache” on page 86.
Step 5 Enable support for 400 TB MSDP.
Before you configure a storage server that hosts a 400 TB Media Server
Deduplication Pool, you must enable support for that size storage.
Step 6 Configure a deduplication storage server.
How many storage servers you configure depends on your storage requirements
and on whether you use optimized duplication or replication. When you configure
a storage server, the wizard also lets you configure a disk pool and a storage unit.
Step 7 Configure a disk pool.
If you already configured a disk pool when you configured the storage server, you
can skip this step. How many disk pools you configure depends on your storage
requirements and on whether you use optimized duplication or replication.
Step 8 Create the data directories for 400 TB support.
For a 400 TB Media Server Deduplication Pool, you must create the data directories
under the mount points for the storage directories.
See “Creating the data directories for 400 TB MSDP support” on page 97.
Step 9 Add the other volumes for 400 TB support.
For a 400 TB Media Server Deduplication Pool, you must add the second and third
volumes to the disk pool.
Step 10 Configure a storage unit.
See “Configuring a Media Server Deduplication Pool storage unit” on page 115.
See “Configuring encryption for MSDP local storage volume” on page 122.
Step 13 Configure MSDP restore behavior.
Optionally, you can configure NetBackup to bypass media servers during restores.
Step 16 Configure a backup policy.
Use the deduplication storage unit as the destination for the backup policy. If you
configured replication, use the storage lifecycle policy as the storage destination.
Step 18 Protect the MSDP data and catalog.
See “About protecting the MSDP data” on page 65.
See “About protecting the MSDP catalog” on page 208.
Step 1 Configure media server deduplication.
See “Configuring MSDP server-side deduplication” on page 75.
Step 2 Learn about client deduplication.
See “About NetBackup Client Direct deduplication” on page 47.
Step 4 Enable client-side deduplication.
See “Configuring client attributes for MSDP client-side deduplication” on page 119.
Step 5 Configure remote client fingerprint cache seeding.
Configuring remote client fingerprint cache seeding is optional.
See “Configuring MSDP fingerprint cache seeding on the client” on page 90.
See “About seeding the MSDP fingerprint cache for remote client deduplication”
on page 88.
Step 6 Configure client-direct restores.
Configuring client-direct restores is optional. If you do not do so, restores travel
through the NetBackup media server components.
The Deduplication Multi-Threaded Agent uses the default configuration values that
control its behavior. You can change those values if you want to do so. The following
table describes the Multi-Threaded Agent interactions and behaviors. It also provides
links to the topics that describe how to configure those interactions and behaviors.
Interaction Procedure
The clients that should use the Deduplication Multi-Threaded Agent for backups:
See “Configuring deduplication plug-in interaction with the Multi-Threaded Agent”
on page 85.
The backup policies that should use the Deduplication Multi-Threaded Agent:
See “Configuring deduplication plug-in interaction with the Multi-Threaded Agent”
on page 85.
Table 6-4 describes the operational notes for MSDP multithreading. If the
Multi-Threaded Agent is not used, NetBackup uses the single-threaded mode.
Item Description
Unsupported use cases NetBackup does not use the Multi-Threading Agent for the
following use cases:
/usr/openv/pdde/pdag/bin/mtstrmd --terminate
/usr/openv/pdde/pdag/bin/mtstrmd
Logging parameters
The following table describes the logging parameters of the mtstrm.conf
configuration file.
Logging Description
Parameter
■ Windows: LogPath=install_path\Veritas\pdde\..\netbackup\logs\pdde
■ UNIX: LogPath=/var/log/puredisk
Possible values:
To enable or disable other logging information, append one of the following to the logging value,
without using spaces:
Retention How long to retain log files (in days) before NetBackup deletes them.
LogMaxSize The maximum log size (MB) before NetBackup creates a new log file. The existing log files that
are rolled over are renamed mtstrmd.log.<date/time stamp>.
Possible value: 1 to the maximum operating system file size in MB, inclusive.
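Taken together, the logging parameters might appear in mtstrm.conf as follows on UNIX. The values are illustrative examples, not documented defaults; only the LogPath syntax is shown in the table above:

```
LogPath=/var/log/puredisk
Retention=7
LogMaxSize=100
```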
Process parameters
The following table describes the process parameters of the mtstrm.conf
configuration file.
MaxConcurrentSessions The maximum number of concurrent sessions that the Multi-Threaded Agent
processes. If it receives a backup job when the MaxConcurrentSessions value
is reached, the job runs as a single-threaded job.
NetBackup configures the value for this parameter during installation or upgrade.
The value is the hardware concurrency value of the host divided by the
BackupFpThreads value (see Table 6-7). (For the purposes of this parameter,
the hardware concurrency is the number of CPUs or cores or hyperthreading units.)
On media servers, NetBackup may not use all hardware concurrency for
deduplication. Some may be reserved for other server processes.
For more information about hardware concurrency, see the pd.conf file
MTSTRM_BACKUP_ENABLED parameter description.
BackupShmBufferSize The size of the buffers (MB) for shared memory copying. This setting affects three
buffers: The shared memory buffer itself, the shared memory receive buffer in the
mtstrmd process, and the shared memory send buffer on the client process.
BackupReadBufferSize The size (MB) of the memory buffer to use per session for read operations from a
client during a backup.
BackupReadBufferCount The number of memory buffers to use per session for read operations from a client
during a backup.
BackupBatchSendEnabled Determines whether to use batch message protocols to send data to the storage
server for a backup.
FpCacheMaxMbSize The maximum amount of memory (MB) to use per session for fingerprint caching.
SessionCloseTimeout The amount of time to wait, in seconds, for threads to finish processing when a
session is closed before the agent times out with an error.
SessionInactiveThreshold The number of minutes for a session to be idle before NetBackup considers it
inactive. NetBackup examines the sessions and closes inactive ones during
maintenance operations.
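The MaxConcurrentSessions derivation described above (hardware concurrency divided by BackupFpThreads) amounts to the following arithmetic. The BackupFpThreads value shown is an example, not the value the installer chooses:

```shell
# Illustrative recalculation of MaxConcurrentSessions; NetBackup performs
# this internally at install or upgrade time.
CORES=$(getconf _NPROCESSORS_ONLN)   # hardware concurrency of the host
BACKUP_FP_THREADS=2                  # example BackupFpThreads value
MAX_SESSIONS=$((CORES / BACKUP_FP_THREADS))
echo "$MAX_SESSIONS"
```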
Threads parameters
The following table describes the threads parameters of the mtstrm.conf
configuration file.
BackupFpThreads The number of threads to use per session to fingerprint incoming data.
Default value: BackupFpThreads= (calculated by NetBackup; see the following
explanation).
NetBackup configures the value for this parameter during installation or upgrade.
The value is equal to the following hardware concurrency threshold values.
For more information about hardware concurrency, see the pd.conf file
MTSTRM_BACKUP_ENABLED parameter description.
BackupSendThreads The number of threads to use per session to send data to the storage server during
a backup operation.
■ (Windows) install_path\Veritas\NetBackup\bin\ost-plugins
2 To change a setting, specify a new value. The following are the settings that
control the interaction:
■ MTSTRM_BACKUP_CLIENTS
■ MTSTRM_BACKUP_ENABLED
■ MTSTRM_BACKUP_POLICIES
■ MTSTRM_IPC_TIMEOUT
Behavior Description
On the client Configure seeding on the client for one or only a few clients.
On the storage server The use case that benefits the most is many clients to seed,
and they can use the fingerprint cache from a single host.
To ensure that NetBackup uses the seeded backup images, the first backup of a
client after you configure seeding must be a full backup with a single stream.
Specifically, the following two conditions must be met in the backup policy:
■ The Attributes tab Allow multiple data streams attribute must be unchecked.
■ The backup selection cannot include any NEW_STREAM directives.
If these two conditions are not met, NetBackup may use multiple streams. If the
Attributes tab Limit jobs per policy is set to a number less than the total number
of streams, only those streams use the seeded images to populate the cache. Any
streams that are greater than the Limit jobs per policy value do not benefit from
seeding, and their cache hit rates may be close to 0%.
After the first backup, you can restore the original backup policy parameter settings.
The following items are examples of the informational messages that show that
seeding occurred:
02:15:17.433[4452.4884][DEBUG][dummy][11:bptm:6340:nbmaster1][DEBUG]
PDSTS: cache_util_load_fp_cache_nbu: enter
dir_path=/nbmaster1#1/2/#pdseed/host1, t=16s,
me=1024
02:15:17.449[4452.4884][DEBUG][dummy][11:bptm:6340:nbmaster1][DEBUG]
PDSTS: cache_util_load_fp_cache_nbu: adding
'nbmaster1_1420181254_C1_F1.img' to cache list
(1)
02:15:17.449[4452.4884][DEBUG][dummy][11:bptm:6340:nbmaster1][DEBUG]
PDSTS: cache_util_load_fp_cache_nbu: opening
/nbmaster1#1/2/#pdseed/host1/nbmaster1_1420181254_C1_F1.img
for image cache (1/1)
02:15:29.585[4452.4884][DEBUG][dummy][11:bptm:6340:nbmaster1][DEBUG]
PDVFS: pdvfs_lib_log: soRead: segment
c32b0756d491871c45c71f811fbd73af already
present in cache.
02:15:29.601[4452.4884][DEBUG][dummy][11:bptm:6340:nbmaster1][DEBUG]
PDVFS: pdvfs_lib_log: soRead: segment
346596a699bd5f0ba5389d4335bc7429 already
present in cache.
Warning: Do not use this procedure on the storage server or the load balancing
server. If you do, it affects all clients that are backed up by that host.
clienthostmachine The name of the existing similar client from which to seed
the cache.
Note: NetBackup treats long and short host names
differently, so ensure that you use the client name as it
appears in the policy that backs it up.
(By default, NetBackup uses the same path for the storage and the catalog; the
database_path and the storage_path are the same. If you configure a separate
path for the deduplication database, the paths are different.)
When a backup runs, NetBackup loads the fingerprints from the #pdseed directory
for the client. (Assuming that no fingerprints exist for that client in the usual catalog
location.)
Information about when to use this seeding method and how to choose a client
from which to seed is available.
See “About seeding the MSDP fingerprint cache for remote client deduplication”
on page 88.
To seed the fingerprint cache from the storage server
1 Before the first backup of the remote client, specify the clients and the policy
in the following format:
UNIX: /usr/openv/pdde/pdag/bin/seedutil -seed -sclient client_name
-spolicy policy_name -dclient destination_client_name
Note: NetBackup treats long and short host names differently, so ensure that
you use the client name as it appears in the policy that backs it up.
After one full backup for the client or clients, NetBackup clears the seeding
directory automatically. If the first backup fails, the seeded data remains for
successive attempts. Although NetBackup clears the seeding directory
automatically, Veritas recommends that you clear the client seeding directories
manually.
-dclient destination_client_name The name of the new client for which you are
seeding the data.
-sclient source_client_name The client from which to copy the data for
seeding.
Note: NetBackup treats long and short host
names differently, so ensure that you use the
client name as it appears in the policy that
backs it up.
MaxCacheSize 50%
MaxPredictiveCacheSize 20%
MaxSamplingCacheSize 5%
EnableLocalPredictiveSamplingCache false
in contentrouter.cfg
EnableLocalPredictiveSamplingCache false
in spa.cfg
MaxCacheSize 512MiB
MaxPredictiveCacheSize 40%
MaxSamplingCacheSize 20%
EnableLocalPredictiveSamplingCache true
in contentrouter.cfg
EnableLocalPredictiveSamplingCache true
in spa.cfg
For MSDP non-BYO deployments, the local volume and the cloud volume share
the same S-cache and P-cache sizes. For BYO deployments, S-cache and P-cache
apply only to the cloud volume, and MaxCacheSize is still used for the local volume.
If the system is not used for cloud backups, MaxPredictiveCacheSize and
MaxSamplingCacheSize can be set to a small value (for example, 1% or 128MiB)
and MaxCacheSize can be set to a large value (for example, 50% or 60%). Similarly,
if the system is used for cloud backups only, MaxCacheSize can be set to 1% or
128MiB, and MaxPredictiveCacheSize and MaxSamplingCacheSize can be set
to larger values.
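For example, on a BYO system that is not used for cloud backups, the guidance above might translate into settings like the following. The values are illustrative only; set them in contentrouter.cfg as described above:

```
MaxCacheSize 50%
MaxPredictiveCacheSize 1%
MaxSamplingCacheSize 128MiB
```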
The S-cache size is determined by the back-end MSDP capacity, or the number of
fingerprints from the back-end data. Assuming an average segment size of 32 KB,
the S-cache size is about 100 MB per TB of back-end capacity. The P-cache size
is determined by the number of concurrent jobs and the data locality, or working
set, of the incoming data, with a working set of about 250 MB per stream (about 5
million fingerprints). For example, 100 concurrent streams need a minimum of
25 GB of memory (100 * 250 MB). The working set can be larger for certain
applications with multiple streams and large data sets. Because P-cache is used
for fingerprint deduplication lookup, and all fingerprints that are loaded into P-cache
stay there until its allocated capacity is reached, a larger P-cache size gives a better
potential lookup hit rate at the cost of more memory. Under-sizing S-cache or
P-cache reduces deduplication rates, and over-sizing increases the memory cost.
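As a worked example of these sizing rules, assume a hypothetical 96 TB pool with 100 concurrent streams:

```shell
# Worked example of the S-cache and P-cache sizing guidance (illustrative).
BACKEND_TB=96     # back-end MSDP capacity in TB
STREAMS=100       # concurrent backup streams

SCACHE_MB=$((BACKEND_TB * 100))   # ~100 MB of S-cache per TB of capacity
PCACHE_MB=$((STREAMS * 250))      # ~250 MB of P-cache per stream
echo "S-cache ~${SCACHE_MB} MB, P-cache ~${PCACHE_MB} MB"
```

That is, about 9.6 GB of S-cache and 25 GB of P-cache for this configuration.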
You are not required to specify the log level. Logs are saved at the following location:
■ UNIX: /var/log/puredisk/<date and time>-rebuild-scache.log
■ Windows: C:\rebuild-scache.log
Consider the following recommendations when you run the script to rebuild the
sampling cache.
■ After the upgrade, run this script immediately before running the backup tasks
to rebuild the sampling cache.
■ Ensure that the predictive and sampling cache are enabled when the script is
running.
■ For S3 interface for MSDP and universal share environments, rebuilding the
sampling cache may affect deduplication ratios.
■ Running the script causes all backup jobs to fail during the rebuilding process;
no backups are performed while the script runs.
If backup jobs cannot be performed after the rebuilding process is complete,
restart the MSDP services.
■ When the script is initiated, ensure that it is successful. If it is terminated for
some reason, run the script again.
■ If errors occur during the rebuilding process, run the script again.
To rebuild the sampling cache after an upgrade
1 Enable predictive and sampling cache in contentrouter.cfg and spa.cfg.
See “About sampling and predictive cache” on page 93.
2 On the storage server, run the script:
UNIX: /usr/openv/pdde/pdag/scripts/rebuild_scache.sh
Windows: install_path\Program Files\Veritas\bin\rebuild_scache.bat
3 Type y and press Enter to proceed. A warning about the job status is displayed.
4 Type y and press Enter to initiate the process to rebuild. The script displays
the rebuild state and percentage.
In the Flex WORM environment, use the deduplication shell to start the sampling
cache rebuild.
Prerequisite ■ The volumes must be formatted with the file systems that NetBackup
supports for MSDP and mounted on the storage server.
See “About provisioning the storage for MSDP” on page 67.
■ The storage server must be configured already.
See “Configuring a storage server for a Media Server Deduplication
Pool” on page 106.
The following is an example of the mount points for the three required storage
volumes:
Note: The number of storage volumes can vary based on your setup. The
maximum amount of storage space is 400 TB.
/msdp/cat
/msdp/vol1
...
/msdp/vol8
/msdp/vol1/data
...
/msdp/vol8/data
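The example layout above can be created with a short script. This is a minimal sketch: BASE="." makes it a dry run, and on a real storage server /msdp/vol1 through /msdp/vol8 are mount points of separately provisioned volumes, so only the data subdirectories beneath them need to be created.

```shell
# Sketch: create the catalog path and per-volume data directories for the
# example layout above (eight volumes; adjust the count to your setup).
BASE="."
mkdir -p "$BASE/msdp/cat"
for i in 1 2 3 4 5 6 7 8; do
    mkdir -p "$BASE/msdp/vol$i/data"
done
```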
6 Configure the MSDP through the Storage Server Configuration Wizard and
ensure that the Use alternate path for deduplication database option is
checked.
7 Provide the storage path as /msdp/vol1 and the database path as /msdp/cat.
If a non-root user is used for the NetBackup media server, run the following
command to change the owner of the newly created volumes to the NetBackup
media server service user:
chown -R <NBU-service-user>:root <MSDP-volume-path>
9 Verify that the deduplication pool contains the new volumes using the following
command:
mkdir c:\etc
echo Windows_BYO > "c:\etc\nbapp-release"
The sizing recommendations for Windows are the same as they are for Linux. One
of the storage volumes must have 1 TB of storage space and the other storage
volumes can equal up to 400 TB of storage space. On Windows, there are a few
additional requirements:
■ The DCHeaderHashSize setting in the <MSDP Storage
DIR>\etc\puredisk\contentrouter.cfg file must be modified to be 2000000
/ number_of_volumes. For example, with the full eight mount points, set the
DCHeaderHashSize to 250000.
■ The volumes that are used should be present as nested volumes, not as drive
letters (C: or E:). Veritas qualified this solution using NTFS volumes.
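The DCHeaderHashSize guideline above is a simple division, for example:

```shell
# DCHeaderHashSize = 2000000 / number_of_volumes, per the guideline above.
VOLUMES=8                                   # full eight mount points
DC_HEADER_HASH_SIZE=$((2000000 / VOLUMES))
echo "$DC_HEADER_HASH_SIZE"                 # 250000 for eight volumes
```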
The following is an example volume layout and each data# directory is a nested
mount:
Note: MSDP storage capacity has a defined maximum, and not following these
settings can result in performance-related issues because data is not balanced
across all volumes.
For more information about MSDP storage capacity, review the following section:
See “About MSDP capacity support and hardware requirements” on page 37.
Note: NetBackup supports a pool size up to 400 TB. A pool can be a smaller size
and expanded later by adding additional volumes.
Note: You cannot disable the MSDP KMS service after you enable it.
If the KMS service is not available for MSDP, or the key in the KMS service that
MSDP uses is not available, MSDP waits in an infinite loop and the backup job
may fail. When MSDP enters this infinite loop, some commands that you run might
not respond.
After you configure KMS encryption or once the MSDP processes restart, check
the KMS encryption status after the first backup finishes.
The keys in the key dictionary must not be deleted, deprecated, or terminated. All
keys that are associated with the MSDP disk pool must be in an active or an inactive
state.
You can use the following commands to get the status of the KMS mode:
■ For UNIX:
/usr/openv/pdde/pdcr/bin/crcontrol --getmode
For MSDP cloud, run the following keydictutil command to check if the Logical
Storage Unit (LSU) is in KMS mode:
/usr/openv/pdde/pdcr/bin/keydictutil --list
■ For Windows:
<install_path>\Veritas\pdde\crcontrol.exe --getmode
Note: If you use the nbdevconfig command to add a new encrypted cloud LSU
and an encrypted LSU exists in this MSDP, the keygroupname must be the same
as the keygroupname in the previous encrypted LSU.
V7.5 "operation" "set-local-lsu-kms-property" string
You can only update the KMS status from disabled to enabled.
V7.5 "encryption" "1" string
Specifies the encryption status. This value must be 1.
V7.5 "kmsenabled" "1" string
Specifies the KMS status. This value must be 1.
V7.5 "kmsservertype" "0" string
Specifies the KMS server type. This value must be 0.
V7.5 "kmsservername" "" string
The KMS server name that is shared among all LSUs.
V7.5 "keygroupname" "" string
The key group name can include the following valid characters:
■ A-Z
■ a-z
■ 0-9
■ Underscore (_)
■ Hyphen (-)
■ Colon (:)
■ Period (.)
■ Space
Note: All encrypted LSUs in one storage server must use the same keygroupname
and kmsservername. The KMS server must be configured, and the key group and
key must exist in the KMS server.
Note: The following steps are not supported on Solaris OS. For Solaris, refer to
the following article:
Upgrade KMS encryption for MSDP on the Solaris platform
■ For Windows:
<install_path>\Veritas\NetBackup\bin\nbkms.exe -createemptydb
Enter the following parameters when you receive a prompt:
■ Enter the HMK passphrase
Enter a password that you want to set as the host master key (HMK)
passphrase. Press Enter to use a randomly generated HMK passphrase.
The passphrase is not displayed on the screen.
■ Enter HMK ID
Enter a unique ID to associate with the host master key. This ID helps
you to determine an HMK associated with any key store.
■ Enter KPK passphrase
Enter a password that you want to set as the key protection key (KPK)
passphrase. Press Enter to use a randomly generated KPK passphrase.
The passphrase is not displayed on the screen.
■ Enter KPK ID
Enter a unique ID to associate with the key protection key. This ID helps
you to determine a KPK associated with any key store.
After the operation completes successfully, run the following command on the
primary server to start KMS:
■ For UNIX:
/usr/openv/netbackup/bin/nbkms
■ For Windows:
sc start "NetBackup Key Management Service"
2 Create a key group and an active key by entering the following commands:
■ For UNIX:
/usr/openv/netbackup/bin/admincmd/nbkmsutil -createkg -kgname
msdp
/usr/openv/netbackup/bin/admincmd/nbkmsutil -createkey -kgname
msdp -keyname name -activate
■ For Windows:
<install_path>\Veritas\NetBackup\bin\admincmd\nbkmsutil.exe
-createkg -kgname msdp
<install_path>\Veritas\NetBackup\bin\admincmd\nbkmsutil.exe
-createkey -kgname msdp -keyname name -activate
■ On Windows:
<install_path>\Veritas\pdde\kms.cfg
[KMSOptions]
KMSEnable=true
KMSKeyGroupName=YourKMSKeyGroupName
KMSServerName=YourKMSServerName
KMSType=0
For KMSServerName, enter the host name of the server where the KMS service
runs, typically the primary server host name.
After completing the steps, you can upgrade MSDP.
The type of storage. Select Media Server Deduplication Pool for the type of disk
storage.
The credentials for the deduplication engine.
See “About the NetBackup Deduplication Engine credentials” on page 50.
The storage paths.
See “MSDP storage path properties” on page 108.
The network interface.
See “About the network interface for MSDP” on page 51.
The load-balancing servers, if any.
See “About MSDP storage servers” on page 44.
When you configure the storage server, the wizard also lets you create a disk pool
and storage unit.
Prerequisite For a 96-TB Media Server Deduplication Pool, you must create the
required directories before you configure the storage server.
Media server Select the media server that you want to configure as the
storage server.
7 On the Storage server options page, search or enter the Storage path.
8 Enter the alternate path in the Use alternate path for deduplication database
field.
9 In Use specific network interface field, enter the interface.
10 If required, select the Enable encryption check box.
11 Click Next.
12 On the Media servers page, click Add.
13 Select the additional media servers.
14 Click Add.
15 Click Next.
16 On the Review page, verify all the information and click Save.
Property Description
Storage path The path to the storage. The storage path is the directory in which NetBackup stores the
raw backup data. Backup data should not be stored on the system disk.
Because the storage requires a directory path, do not use only the root node (/) or a drive
letter (E:\) as the storage path. (That is, do not mount the storage as a root node (/) or a
drive letter (E:\).)
For a 400 TB Media Server Deduplication Pool, you must enter the path name of the
mount point for the volume that you consider the first 32 TB storage volume. The following
is an example of a volume naming convention for the mount points for the backups:
See “About MSDP capacity support and hardware requirements” on page 37.
See “Creating the data directories for 400 TB MSDP support” on page 97.
You can use the following characters in the storage path name:
NetBackup requirements for the deduplication storage paths may affect how you expose
the storage.
Use alternate path for By default, NetBackup uses the storage path for the MSDP database (that is, the MSDP
deduplication database catalog) location. The MSDP database is different from the NetBackup catalog.
Select this option to use a location other than the default for the deduplication database.
For a 400 TB Media Server Deduplication Pool, you must select this option.
For performance optimization, it is recommended that you use a disk volume for the
deduplication database that is separate from the backup data volume.
Database path If you selected Use alternate path for deduplication database, enter the path name for
the database. The database should not be stored on the system disk.
For a 400 TB Media Server Deduplication Pool, you must enter the path name of the
partition that you created for the MSDP catalog. For example, if the naming convention for
your mount points is /msdp/volx, the following path is recommended for the MSDP
catalog directory:
/msdp/cat
For performance optimization, it is recommended that you use a disk volume for the
deduplication database that is separate from the backup data volume.
You can use the following characters in the path name:
If the directory or directories do not exist, NetBackup creates them and populates
them with the necessary subdirectory structure. If the directory or directories exist,
NetBackup populates them with the necessary subdirectory structure.
Caution: You cannot change the paths after NetBackup configures the deduplication
storage server. Therefore, decide during the planning phase where and how you
want the deduplicated backup data to be stored and then carefully enter the paths.
Caution: You cannot change the network interface after NetBackup configures the
deduplication storage server. Therefore, enter the properties carefully.
Property Description
Use specific network Select this option to specify a network interface for the
interface deduplication traffic. If you do not specify a network interface,
NetBackup uses the operating system host name value.
How many deduplication pools you configure depends on your storage requirements.
It also depends on whether or not you use optimized duplication or replication, as
described in the following table:
Type Requirements
Optimized duplication within the same NetBackup domain
Optimized duplication in the same domain requires the following deduplication pools:
■ At least one for the backup storage, which is the source for the duplication
operations. The source deduplication pool is in one deduplication node.
■ Another to store the copies of the backup images, which is the target for the
duplication operations. The target deduplication pool is in a different deduplication
node.
See “About MSDP optimized duplication within the same domain” on page 130.
Auto Image Replication to a different NetBackup domain
Auto Image Replication deduplication pools can be either replication source or replication
target. The replication properties denote the purpose of the deduplication pool. The
deduplication pools inherit the replication properties from their volumes.
See “About the replication topology for Auto Image Replication” on page 150.
Auto Image Replication requires the following deduplication pools:
■ The deduplication storage server to query for the disk storage to use for the
pool.
■ The disk volume to include in the pool.
NetBackup exposes the storage as a single volume.
■ The disk pool properties.
Veritas recommends that disk pool names be unique across your enterprise.
To configure a deduplication disk pool by using the wizard
1 In the NetBackup Administration Console, select either NetBackup
Management or Media and Device Management.
2 From the list of wizards in the right pane, click Configure Disk Pool.
3 Click Next on the welcome panel of the wizard.
The Disk Pool Configuration Wizard panel appears.
4 On the Disk Pool Configuration Wizard panel, select the type of disk pool
you want to configure in the Storage server type window.
The types of disk pools that you can configure depend on the options for which
you are licensed.
After you select the disk pool in the Storage server type window, click Next.
5 On the Storage Server Selection panel, select the storage server for this disk
pool. The wizard displays the deduplication storage servers that are configured
in your environment.
Click Next.
6 On the Volume Selection panel, select the volume for this disk pool.
Media Server Deduplication Pool All of the storage in the Storage Path that you
configured in the Storage Server Configuration Wizard is exposed as a
single volume. The PureDiskVolume is a virtual name for
that storage.
8 On the Disk Pool Configuration Summary panel, verify the selections. If they
are correct, click Next to configure the disk pool.
9 The Disk Pool Configuration Status panel describes the progress of the
operation.
After the disk pool is created, you can do the following:
Configure a storage unit Ensure that Create a storage unit using the disk pool that
you have just created is selected and then click Next. The
Storage Unit Creation wizard panel appears. Continue to
the next step.
10 In the Storage Unit Creation panel, enter the appropriate information for the
storage unit.
After you enter the appropriate information or select the necessary options,
click Next to create the storage unit.
11 After NetBackup configures the storage unit, the Finished panel appears. Click
Finish to exit from the wizard.
See “Viewing Media Server Deduplication Pool attributes” on page 508.
Property Description
Storage server The storage server name. The storage server is the same as the
NetBackup media server to which the storage is attached.
Storage server type For a Media Server Deduplication Pool, the storage type is
PureDisk.
Disk volumes For a Media Server Deduplication Pool, all disk storage is
exposed as a single volume.
Total available space The amount of space available in the disk pool.
Total raw size The total raw size of the storage in the disk pool.
Disk Pool name The disk pool name. Enter a name that is unique across your
enterprise.
High water mark The High water mark indicates that the volume is full. When the
volume reaches the High water mark, NetBackup fails any backup
jobs that are assigned to the storage unit. NetBackup also does
not assign new jobs to a storage unit in which the deduplication
pool is full.
The High water mark includes the space that is committed to other
jobs but not already used.
Low water mark The Low water mark has no effect on the PureDiskVolume.
Limit I/O streams Select to limit the number of read and write streams (that is, jobs)
for each volume in the disk pool. A job may read backup images
or write backup images. By default, there is no limit. If you select
this property, also configure the number of streams to allow per
volume.
per volume Select or enter the number of read and write streams to allow per
volume.
Property Description
Storage unit name A unique name for the new storage unit. The name can describe the
type of storage. The storage unit name is the name used to specify a
storage unit for policies and schedules. The storage unit name cannot
be changed after creation.
Disk type Select PureDisk for the disk type for a Media Server Deduplication
Pool.
Disk pool Select the disk pool that contains the storage for this storage unit.
All disk pools of the specified Disk type appear in the Disk pool list.
If no disk pools are configured, no disk pools appear in the list.
Media server The Media server setting specifies the NetBackup media servers that
can deduplicate the data for this storage unit. Only the deduplication
storage server and the load balancing servers appear in the media
server list.
Specify the media server or servers as follows:
NetBackup selects the media server to use when the policy runs.
Maximum fragment size For normal backups, NetBackup breaks each backup image into
fragments so that a fragment does not exceed the maximum file size that the file
system allows. You can enter a value from 20 MB to 51,200 MB.
For a FlashBackup policy, Veritas recommends that you use the default,
maximum fragment size to ensure optimal deduplication performance.
Maximum concurrent jobs The Maximum concurrent jobs setting specifies the maximum number
of jobs that NetBackup can send to a disk storage unit at one time.
(Default: one job. The job count can range from 0 to 256.) This setting
corresponds to the Maximum concurrent write drives setting for a Media
Manager storage unit.
NetBackup queues jobs until the storage unit is available. If three backup
jobs are scheduled and Maximum concurrent jobs is set to two,
NetBackup starts the first two jobs and queues the third job. If a job
contains multiple copies, each copy applies toward the Maximum
concurrent jobs count.
The number to enter depends on the available disk space and the
server's ability to run multiple backup processes.
Warning: A Maximum concurrent jobs setting of 0 disables the storage
unit.
Use WORM This option is enabled for storage units that are WORM capable.
WORM is the acronym for Write Once Read Many.
Select this option if you want the backup images on this storage unit to
be immutable and indelible until the WORM Unlock Time.
Note: NetBackup uses storage units for media server selection for write activity
(backups and duplications) only. For restores, NetBackup chooses among all media
servers that can access the disk pool.
For example, two storage units use the same set of media servers. One of the
storage units (STU-GOLD) has a higher Maximum concurrent jobs setting than
the other (STU-SILVER). More client backups occur for the storage unit with the
higher Maximum concurrent jobs setting.
See “Configuring a Media Server Deduplication Pool storage unit” on page 115.
Option Description
Compression for backups For backups, the deduplication plug-in compresses the data after it is
deduplicated. The data remains compressed during transfer from the plug-in to the
NetBackup Deduplication Engine on the storage server. The Deduplication Engine
writes the compressed data to the storage. For restore jobs, the process functions
in the reverse direction.
The COMPRESSION parameter in the pd.conf file on each MSDP host controls compression
and decompression for that host. By default, backup compression is enabled on all MSDP
hosts. Therefore, compression and decompression occur on the following hosts as necessary:
■ The clients that deduplicate their own data (that is, client-side deduplication).
■ The load balancing servers.
■ The storage server.
MSDP compression cannot occur on normal NetBackup clients (that is, the clients that do
not deduplicate their own data).
Note: Do not enable backup compression by selecting the Compression option on the
Attributes tab of the Policy dialog box. If you do, NetBackup compresses the data before
it reaches the plug-in that deduplicates it. Consequently, deduplication rates are very low.
Also, NetBackup does not use the Deduplication Multi-Threaded Agent if policy-based
encryption is configured.
Compression for duplication and replication For duplication and replication, the
deduplication plug-in compresses the data for transfer. The data remains compressed
during transfer from the plug-in to the NetBackup Deduplication Engine on the storage
server and remains compressed on the storage.
NetBackup chooses the least busy host to initiate and manage each duplication job and
replication job. To ensure that compression occurs for all optimized duplication and
replication jobs, do not change the default setting of the OPTDUP_COMPRESSION parameter.
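The COMPRESSION and OPTDUP_COMPRESSION parameters both live in the pd.conf file on each MSDP host. A representative fragment is shown below; the UNIX path is the default plug-in location in recent releases, and the values shown reflect the defaults described in this section, so verify both against the pd.conf reference for your release:

```
# /usr/openv/lib/ost-plugins/pd.conf (default UNIX location; illustrative)
# 1 = compress deduplicated backup data (default)
COMPRESSION = 1
# 1 = compress optimized duplication and replication data (default)
OPTDUP_COMPRESSION = 1
```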
For initial MSDP setup, you can use the NetBackup web UI to configure encryption.
For existing systems, use the following steps to enable encryption manually.
Once encryption is enabled, all data that is written to the MSDP server's local disk
volume is encrypted, regardless of its source: NetBackup media servers, optimized
duplication or Auto Image Replication servers, and Client Direct hosts. You do not
need to configure encryption anywhere else.
Note: The following steps are for MSDP local disk volume only. For MSDP cloud
volume encryption, see the following topic.
See “Configuring encryption for MSDP cloud storage volumes” on page 123.
Encryption is enabled for all the data that is stored on the server, which includes
the MSDP storage server, the MSDP load-balancing servers, and the
NetBackup Client Direct deduplication clients.
3 Restart the MSDP services.
Note: Encryption configuration through the pd.conf file requires changes on NetBackup
media servers or clients, and its use is deprecated.
Note: Encryption is configured individually for each MSDP storage volume. As with
MSDP disk storage pool encryption, once it is configured, all data written to the
cloud storage volume is encrypted regardless of the data source.
Note: The performance numbers shown were observed in the Veritas test
environment and are not a guarantee of performance in your environment.
The rolling data conversion is enabled by default and works in the background after the
MSDP conversion completes. Only the data that existed before the upgrade is converted.
All new data uses the new SHA-512/256 fingerprint and does not need conversion.
While in Fast mode, the rolling data conversion affects the performance of backup,
restore, duplication, and replication jobs. To minimize this effect, use the Normal
mode, which pauses the conversion when the system is busy, but slows down the
conversion process. The Fast mode keeps the conversion active regardless of
system state.
You can manage and monitor the rolling data conversion using the following
crcontrol command options.
Table 6-16 MSDP crcontrol command options for rolling data conversion
Turn on the rolling data conversion:
Windows: install_path\Veritas\pdde\Crcontrol.exe --dataconverton
UNIX: /usr/openv/pdde/pdcr/bin/crcontrol --dataconverton
Turn off the rolling data conversion:
Windows: install_path\Veritas\pdde\Crcontrol.exe --dataconvertoff
UNIX: /usr/openv/pdde/pdcr/bin/crcontrol --dataconvertoff
Check the state of the rolling data conversion:
Windows: install_path\Veritas\pdde\Crcontrol.exe --dataconvertstate
UNIX: /usr/openv/pdde/pdcr/bin/crcontrol --dataconvertstate
Set the rolling data conversion mode (Fast or Normal):
Windows: install_path\Veritas\pdde\Crcontrol.exe --dataconvertmode mode
UNIX: /usr/openv/pdde/pdcr/bin/crcontrol --dataconvertmode mode
Client with NetBackup version earlier than 8.0, using the Client Direct deduplication:
AES (using inline data conversion)
Load balancing server with NetBackup version earlier than 8.0:
AES (using inline data conversion)
Table 6-18 Encryption behavior for optimized duplication and Auto Image
Replication operations to a NetBackup 8.0 target server
Note: Inline data conversion takes place while the backup, duplication, or replication
operations are in progress.
Backups and restores For backups and restores, NetBackup uses the network interface
that was configured during the storage server configuration. Both the backup and
restore traffic and the control traffic travel over the backup network.
Duplication and replication For the duplication and the replication traffic, configure
your host operating systems to use a different network than the one you use for backups
and restores. Both the duplication and the replication data traffic and the control
traffic travel over the duplication and replication network.
See “About MSDP optimized duplication within the same domain” on page 130.
See “About MSDP replication to a different domain” on page 143.
■ Both the source and the destination storage servers must have a network
interface card that is dedicated to the other network.
■ The separate network must be operational and using the dedicated network
interface cards on the source and the destination storage servers.
■ On UNIX MSDP storage servers, ensure that the Name Service Switch first
checks the local hosts file before querying the Domain Name System (DNS).
See the operating system documentation for information about the Name Service
Switch.
To configure a separate network path for MSDP duplication and replication
1 On the source storage server, add the destination storage server's dedicated
network interface to the operating system hosts file. If TargetStorageServer
is the name of the destination host on the network that is dedicated for
duplication, the following is an example of the hosts entry in IPv4 notation:
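A representative entry is shown below; the 10.80.25.66 address is illustrative only, so substitute the address of the destination host's dedicated network interface:

```
10.80.25.66   TargetStorageServer.example.com   TargetStorageServer
```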
Veritas recommends that you always use the fully qualified domain name when
you specify hosts.
2 On the destination storage server, add the source storage server's dedicated
network interface to the operating system hosts file. If SourceStorageServer
is the name of the source host on the network that is dedicated for duplication,
the following is an example of the hosts entry in IPv4 notation:
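A representative entry is shown below; the 10.80.25.88 address is illustrative only, so substitute the address of the source host's dedicated network interface:

```
10.80.25.88   SourceStorageServer.example.com   SourceStorageServer
```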
Veritas recommends that you always use the fully qualified domain name when
specifying hosts.
3 To force the changes to take effect immediately, flush the DNS cache. See the
operating system documentation for how to flush the DNS cache.
4 From each host, use the ping command to verify that each host resolves the
name of the other host.
If the ping command returns positive results, the hosts are configured for
duplication and replication over the separate network.
5 When you configure the target storage server, ensure that you select the host
name that represents the alternate network path.
■ The destination storage unit cannot be the same as the source storage unit.
■ The copy operation uses the maximum fragment size of the source storage unit,
not the setting for the destination storage unit. The optimized duplication copies
the image fragments as is. For greater efficiency, the duplication does not resize
and reshuffle the images into a different set of fragments on the destination
storage unit.
See “About the media servers for MSDP optimized duplication within the same
domain” on page 132.
See “About MSDP push duplication within the same domain” on page 132.
See “About MSDP pull duplication within the same domain” on page 135.
About the media servers for MSDP optimized duplication within the same domain
For optimized Media Server Deduplication Pool duplication within the same
domain, the source storage and the destination storage must have at least one
media server in common. The common server initiates, monitors, and verifies the
duplication operation. The common server requires credentials for both the source
storage and the destination storage. (For deduplication, the credentials are for the
NetBackup Deduplication Engine, not for the host on which it runs.)
Which media server initiates the duplication operation determines if it is a push or
a pull operation, as follows:
■ If the media server is co-located physically with the source storage server, it is
a push duplication.
■ If the media server is co-located physically with the destination storage server,
it is a pull duplication.
Technically, neither push duplication nor pull duplication offers an inherent
advantage. However, the media server that initiates the duplication operation also
becomes the write host for the new image copies.
A storage server or a load balancing server can be the common server. The common
server must have the credentials and the connectivity for both the source storage
and the destination storage.
See “About MSDP optimized duplication within the same domain” on page 130.
See “About MSDP push duplication within the same domain” on page 132.
See “About MSDP pull duplication within the same domain” on page 135.
[Figure: Push duplication within the same domain. The NetBackup deduplication plug-in
on each node sends data to its Deduplication Engine, and the common server verifies
that the data arrived in the remote pool (MSDP_L is the local pool). The common server
holds credentials for both StorageServer-L and StorageServer-R; the other hosts hold
credentials only for their own storage server.]
Figure 6-2 shows the settings for the storage unit for the normal backups for the
local deduplication node. The disk pool is the MSDP_L in the local environment.
Because all hosts in the local node are co-located, you can use any available media
server for the backups.
Figure 6-3 shows the storage unit settings for the optimized duplication. The
destination is the MSDP_R in the remote environment. You must select the common
server, so only the load balancing server LB_L2 is selected.
If you use the remote node for backups also, select StorageServer-R and load
balancing server LB_R1 in the storage unit for the remote node backups. If you
select server LB_L2, it becomes a load balancing server for the remote Media
Server Deduplication Pool. In such a case, data travels across your WAN.
[Figure: Pull duplication. The deduplication plug-in on the common server verifies
that the data arrived in MediaServer_DedupePool_B after duplication from
MediaServer_DedupePool_A.]
Figure 6-5 shows the storage unit settings for the duplication destination. They are
similar to the push example except host B is selected. Host B is the common server,
so it must be selected in the storage unit.
If you use node B for backups also, select host B and not host A in the storage unit
for the node B backups. If you select host A, it becomes a load balancing server
for the node B deduplication pool.
Step 1 Review optimized duplication See “About MSDP optimized duplication within the same domain”
on page 130.
Step 2 Configure the storage servers See “Configuring a storage server for a Media Server Deduplication Pool”
on page 106.
One server must be common between the source storage and the
destination storage. Which you choose depends on whether you want a
push or a pull configuration.
See “About the media servers for MSDP optimized duplication within the
same domain” on page 132.
Step 3 Configure the deduplication pools If you did not configure the deduplication
pools when you configured the storage servers, use the Disk Pool Configuration Wizard
to configure them.
Step 4 Configure the storage unit for backups In the storage unit for your backups,
do the following:
1 For the Disk type, select PureDisk.
2 For the Disk pool, select your Media Server Deduplication Pool.
If you use a pull configuration, do not select the common media server in
the backup storage unit. If you do, NetBackup uses it to deduplicate backup
data. (That is, unless you want to use it for a load balancing server for the
source deduplication node.)
Step 5 Configure the storage unit for duplication Veritas recommends that you
configure a storage unit specifically to be the target for the optimized duplication.
Configure the storage unit in the deduplication node that performs your normal
backups. Do not configure it in the node that contains the copies.
Also select Only use the following media servers. Then, select the media
server or media servers that are common to both the source storage server
and the destination storage server. If you select more than one, NetBackup
assigns the duplication job to the least busy media server.
If you select only a media server (or servers) that is not common, the
optimized duplication job fails.
Step 6 Configure optimized duplication bandwidth Optionally, you can configure the
bandwidth for duplication and replication.
See “About configuring MSDP optimized duplication and replication
bandwidth” on page 167.
Step 7 Configure optimized duplication behaviors Optionally, you can configure the
optimized duplication behavior.
See “Configuring NetBackup optimized duplication or replication behavior”
on page 140.
Step 8 Configure a storage lifecycle policy for the duplication Configure a storage
lifecycle policy only if you want to use one to duplicate images. The storage
lifecycle policy manages both the backup jobs and the duplication jobs. Configure the
lifecycle policy in the deduplication environment that performs your normal backups.
Do not configure it in the environment that contains the copies.
When you configure the storage lifecycle policy, do the following:
■ The first operation must be a Backup. For the Storage for the Backup
operation, select the storage unit that is the target of your backups.
That storage unit can use a Media Server Deduplication Pool.
These backups are the primary backup copies; they are the source
images for the duplication operation.
■ For the second, child Operation, select Duplication. Then, select the
storage unit for the destination deduplication pool. That pool can be a
Media Server Deduplication Pool.
Step 9 Configure a backup policy Configure a policy to back up your clients. Configure the backup policy in
the deduplication environment that performs your normal backups. Do not
configure it in the environment that contains the copies.
■ If you use a storage lifecycle policy to manage the backup job and the
duplication job: Select that storage lifecycle policy in the Policy storage
field of the Policy Attributes tab.
■ If you do not use a storage lifecycle policy to manage the backup job
and the duplication job: Select the storage unit that contains your normal
backups. These backups are the primary backup copies.
Step 10 Configure NetBackup Vault for the duplication Configure Vault duplication
only if you use NetBackup Vault to duplicate the images.
Step 11 Duplicate by using the bpduplicate command Use the NetBackup bpduplicate
command only if you want to duplicate images manually.
http://www.veritas.com/docs/DOC5332
Behavior Description
You can change the number of hours for the wait period.
If you use a storage lifecycle policy for duplication, do not also configure optimized
duplication behavior for NetBackup Vault or the bpduplicate command, and vice versa;
otherwise, NetBackup behavior may be unpredictable.
Caution: These settings affect all optimized duplication jobs; they are not limited
to a specific NetBackup storage option.
■ Windows: install_path\NetBackup\db\config.
Configuration options are key and value pairs, as shown in the following examples:
■ CLIENT_READ_TIMEOUT = 300
■ LOCAL_CACHE = NO
■ RESUME_ORIG_DUP_ON_OPT_DUP_FAIL = TRUE
■ SERVER = server1.example.com
You can specify some options multiple times, such as the SERVER option.
/usr/openv/netbackup/bin/nbsetconfig
On a NetBackup server:
/usr/openv/netbackup/bin/admincmd/bpsetconfig
install_path\NetBackup\bin\nbsetconfig.exe
On a NetBackup server:
install_path\NetBackup\bin\admincmd\bpsetconfig.exe
2 At the command prompt, enter the key and the value pairs of the configuration
options that you want to set, one pair per line.
You can change existing key and value pairs.
You can add key and value pairs.
Ensure that you understand the values that are allowed and the format of any
new options that you add.
3 To save the configuration changes, type the following, depending on the
operating system:
Windows: Ctrl + Z Enter
UNIX: Ctrl + D Enter
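As an alternative to typing the pairs interactively, the same key and value pairs can be supplied on standard input, which the procedure above implies the command accepts. A sketch using the UNIX server path shown earlier (the Windows path works the same way):

```
echo "RESUME_ORIG_DUP_ON_OPT_DUP_FAIL = TRUE" | /usr/openv/netbackup/bin/admincmd/bpsetconfig
```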
Media Server Deduplication Pool A Media Server Deduplication Pool, which can be
hosted on the following systems:
Auto Image Replication does not support replicating from a storage unit group. That
is, the source copy cannot be in a storage unit group.
If a replication job fails, NetBackup retries the replication until it succeeds or the
source images expire. You can change the retry interval behavior.
See “Configuring NetBackup optimized duplication or replication behavior”
on page 140.
If a job fails after it replicates some of the images, NetBackup does not run a
separate image cleanup job for the partially replicated images. The next time the
replication runs, that job cleans up the image fragments before it begins to replicate
the images.
You can use a separate network for the duplication traffic.
See “About a separate network path for MSDP duplication and replication”
on page 128.
See “Configuring MSDP replication to a different NetBackup domain” on page 144.
See “About MSDP optimized duplication and replication” on page 54.
Step 1 Learn about MSDP replication See “About MSDP replication to a different domain” on page 143.
See “About NetBackup Auto Image Replication” on page 146.
Step 3 Add the remote storage server as a replication target See “Configuring a target
for MSDP replication to a remote domain” on page 163.
Step 4 Configure a storage lifecycle policy The following are the options when you
configure the SLP operations:
■ If you configured a trust relationship with the target domains, you can
specify one of the following options:
■ All replication target storage servers (across different
NetBackup domains)
NetBackup automatically creates an import SLP in the target
domain when the replication job runs.
■ A specific Master Server. If you choose this option, you then
select Target master server and Target import SLP.
You must create an import SLP in the target domain before you
configure an SLP in the source domain.
■ If you did not configure a trust relationship with the target domains,
All replication target storage servers (across different NetBackup
domains) is selected by default. You cannot choose a specific target
storage server.
NetBackup automatically creates an import SLP in the target domain
when the replication job runs.
See “About the storage lifecycle policies required for Auto Image
Replication” on page 170.
Step 5 Configure replication bandwidth Optionally, you can configure the bandwidth for replication.
■ Synchronize the clocks of the primary servers in the source and the target
domains so that the primary server in the target domain can import the images
as soon as they are ready. The primary server in the target domain cannot import
an image until the image creation time is reached. Time zone differences are
not a factor because the images use Coordinated Universal Time (UTC).
Process Overview
Table 6-23 is an overview of the process, generally describing the events in the
originating and target domains.
NetBackup uses storage lifecycle policies in the source domain and the target
domain to manage the Auto Image Replication operations.
See “About the storage lifecycle policies required for Auto Image Replication”
on page 170.
1 Originating primary server (Domain 1) Clients are backed up according to a backup
policy that indicates a storage lifecycle policy as the Policy storage selection. The
SLP must include at least one Replication operation to similar storage in the target
domain.
2 Target primary server (Domain 2) The storage server in the target domain recognizes
that a replication event has occurred. It notifies the NetBackup primary server in the
target domain.
3 Target primary server (Domain 2) NetBackup imports the image immediately, based on
an SLP that contains an import operation. NetBackup can import the image quickly
because the metadata is replicated as part of the image. (This import process is not
the same as the import process available in the Catalog utility.)
4 Target primary server (Domain 2) After the image is imported into the target domain,
NetBackup continues to manage the copies in that domain. Depending on the
configuration, the media server in Domain 2 can replicate the images to a media server
in Domain 3.
[Figure: Cascading Auto Image Replication across three domains. The SLP (D1toD2toD3)
in Domain 1 performs the backup and replicates to the target primary in Domain 2; the
Domain 2 SLP (D1toD2toD3) imports the copy and replicates it to Domain 3, where it is
imported.]
In the cascading model, the originating primary server for Domain 2 and Domain 3
is the primary server in Domain 1.
Note: When the image is replicated in Domain 3, the replication notification event
indicates that the primary server in Domain 2 is the originating primary server.
However, after the image is imported successfully into Domain 3, NetBackup
correctly indicates that the originating primary server is in Domain 1.
The cascading model presents a special case for the Import SLP that replicates the
imported copy to a target primary server (that is, a primary server that is neither
the first nor the last in the string of target primary servers).
The Import SLP must include at least one operation that uses a Fixed retention
type and at least one operation that uses a Target Retention type. So that the
Import SLP can satisfy these requirements, the import operation must use a Target
Retention.
Table 6-24 shows the difference in the import operation setup.
At least one operation must use the Target retention. Here is the difference: to meet
the criteria, the import operation must use Target retention.
[Figure: Cascading Auto Image Replication in which the import operation uses Target
retention. The Domain 1 SLP (D1toD2toD3) performs the backup and replicates to the
target primary in Domain 2; the Domain 2 SLP imports and duplicates the copy, then
replicates it to the target primary in Domain 3.]
Caution: Choose the target storage server carefully. A target storage server must
not also be a storage server for the originating domain.
Source A source volume contains the backups of your clients. The volume is the
source for the images that are replicated to a remote NetBackup domain.
Each source volume in an originating domain has one or more replication
partner target volumes in a target domain.
Target A target volume in the remote domain is the replication partner of a source
volume in the originating domain.
The following are the options and arguments for the command:
Save the output to a file so that you can compare the current topology with the
previous topology to determine what has changed.
See “Sample volume properties output for MSDP replication” on page 152.
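One way to capture the volume properties for such a comparison is the nbdevquery command; this is a sketch, so confirm the exact options against the NetBackup Commands Reference Guide for your release:

```
/usr/openv/netbackup/bin/admincmd/nbdevquery -listdv -stype PureDisk -U > /tmp/replication_topology.txt
```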
Trust relationship You can select a subset of your trusted domains as a target
for replication. NetBackup then replicates to the specified
domains only rather than to all configured replication targets.
This type of Auto Image Replication is known as targeted
A.I.R.
Note: The NetBackup web UI does not support adding a trusted primary server
using an external CA-signed certificate.
See “About the certificate to use to add a trusted primary server” on page 157.
The following diagram illustrates the different tasks for adding trusted primary
servers when NetBackup CA-signed certificate (or host ID-based certificate) is used
to establish trust between the source and the target primary servers.
[Figure: Tasks for adding trusted primary servers between Domain A (source) and
Domain B (target): (1) the administrators get the CA certificate fingerprint and an
authorization token or user credentials from the remote primary server; (3) trust is
established between Primary Server A and Primary Server B; (4, 5) an import policy and
a replication policy are configured; (6) images move between the media servers through
targeted A.I.R.]
Step 1 Administrators of both the source and the target primary servers must obtain
each other’s CA certificate fingerprint and authorization tokens or the user
credentials. This activity must be performed offline.
Note: It is recommended to use an authentication token to connect to the remote
primary server. An authentication token provides restricted access and allows secure
communication between both the hosts. The use of user credentials (user name and
password) may present a possible security breach.
To obtain the authorization tokens, use the bpnbat command to log on and nbcertcmd to
get the authorization tokens.
To obtain the SHA1 fingerprint of the root certificate, use the nbcertcmd
-displayCACertDetail command.
To perform this task, see the NetBackup Commands Reference Guide.
Note: When you run the commands, keep the target as the remote server.
Step 2 Establish trust between the source and the target domains.
■ On the source primary server, add the target primary server as a trusted server.
■ On the target primary server, add the source primary server as a trusted server.
To perform this task in the NetBackup web UI, see the following topic:
To perform this task using the nbseccmd command, see the NetBackup Commands
Reference Guide.
Step 3 After you have added the source and target trusted servers, To understand the use of host ID-based
they have each other’s host ID-based certificates. The certificates, see the NetBackup Security and
certificates are used during each communication. Encryption Guide.
Step 3.1 Configure the source media server to get the security See “Configuring NetBackup CA and
certificates and the host ID certificates from the target primary NetBackup host ID-based certificate for
server. secure communication between the source
and the target MSDP storage servers”
on page 161.
Step 4 Create an import storage lifecycle policy in the target domain. See “About storage lifecycle policies”
on page 169.
Note: The import storage lifecycle policy name should
contain less than or equal to 112 characters.
Step 5 On the source MSDP server, use the Replication tab from See “Configuring a target for MSDP
the Change Storage Server dialog box to add the credentials replication to a remote domain” on page 163.
of the target storage server.
Step 5.1 Create a replication storage lifecycle policy in the source See “About storage lifecycle policies”
domain using the specific target primary server and storage on page 169.
lifecycle policy.
Step 6 The backups that are generated in one NetBackup domain See “About NetBackup Auto Image
can be replicated to storage in one or more target NetBackup Replication” on page 146.
domains. This process is referred to as Auto Image
Replication.
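The 112-character limit on the import storage lifecycle policy name (Step 4) can be checked before you create the policy. The following is an illustrative shell snippet, not a NetBackup command; the SLP name is hypothetical:

```shell
# Illustrative check (not a NetBackup command): the import SLP name
# must contain no more than 112 characters.
slp_name="dr_site_import_slp"   # hypothetical SLP name
if [ "${#slp_name}" -le 112 ]; then
  echo "OK: ${#slp_name} characters"
else
  echo "Too long: ${#slp_name} characters" >&2
fi
```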
If your source and target trusted servers use different NetBackup versions, consider
the following.
Note: When you upgrade both the source and the target primary server to version
8.1 or later, you need to update the trust relationship. Run the following command:
nbseccmd -setuptrustedmaster -update
■ Source 8.1 and later, target 8.1 and later: Add a trusted primary server using an authorization token.
■ Source 8.1 and later, target 8.0 or earlier: On the source server, add the target as the trusted primary server using the remote (target) server’s credentials.
■ Source 8.0 or earlier, target 8.1 and later: On the source server, add the target as the trusted primary server using the remote (target) server’s credentials.
Which certificate authorities (CA) does the target primary server support? The target primary server may support an external CA, the NetBackup CA, or both.
The following table lists the CA support scenarios and the certificate to use to
establish trust between the source and the target primary servers.
4 For the servers that use the NetBackup certificate authority (CA), obtain the
fingerprint for each server.
More information
For more information on using an external CA with NetBackup, see the NetBackup
Security and Encryption Guide.
Note: Any trusted primary servers at NetBackup version 8.0 or earlier must be
removed using the NetBackup Administration Console or the NetBackup CLI.
You can remove a trusted primary server, which removes the trust relationship
between primary servers. Note the following implications:
■ Any replication operations fail that require the trust relationship.
■ A remote primary server is not included in any usage reporting after you remove
the trust relationship.
To remove a trusted primary server, you must perform the following procedure on
both the source and the target server.
Note: If you use multiple NICs and you established trust using more than one host NIC, removing the trust relationship with any one host NIC breaks the trust with all the other host NICs.
Targeted A.I.R. (Auto Image Replication): Auto Image Replication in which a primary server is in a cluster requires inter-node authentication among the hosts in that cluster. The NetBackup authentication certificates provide the means to establish the proper trust relationships.
# bpnbaz -setupat
You will have to restart Netbackup services on this machine after
the command completes successfully.
Do you want to continue(y/n)y
Gathering configuration information.
Please be patient as we wait for 10 sec for the security services
to start their operation.
Generating identity for host 'bit1.remote.example.com'
Setting up security on target host: bit1.remote.example.com
nbatd is successfully configured on Netbackup Primary Server.
Operation completed successfully.
Note: After you upgrade to NetBackup 8.1.2 or later, manually deploy NetBackup
CA and the NetBackup host ID-based certificate on the source MSDP server to use
the existing Auto Image Replication.
1. On the target NetBackup primary server, run the following command to display
the NetBackup CA fingerprint:
■ Windows
install_path\NetBackup\bin\nbcertcmd -displayCACertDetail
■ UNIX
/usr/openv/netbackup/bin/nbcertcmd -displayCACertDetail
2. On the source MSDP storage server, run the following command to get the
NetBackup CA from target NetBackup primary server:
■ Windows
install_path\NetBackup\bin\nbcertcmd -getCACertificate -server
target_primary_server
■ UNIX
/usr/openv/netbackup/bin/nbcertcmd -getCACertificate -server
target_primary_server
When you accept the CA, ensure that the CA fingerprint is the same as
displayed in the previous step.
3. On the source MSDP storage server, run the following command to get a
certificate generated by target NetBackup primary server:
■ Windows
install_path\NetBackup\bin\nbcertcmd -getCertificate -server
target_primary_server -token token_string
■ UNIX
/usr/openv/netbackup/bin/nbcertcmd -getCertificate -server
target_primary_server -token token_string
■ NetBackup commands
■ Use the bpnbat command to log on the target NetBackup primary server.
■ Use the nbcertcmd command to get the authorization tokens.
For more information on the commands, refer to the NetBackup Commands
Reference Guide.
Note: About clustered primary servers: If you add a trusted primary server for
replication operations, you must enable inter-node authentication on all of the nodes
in the cluster. Enable the authentication before you begin the following procedure.
This requirement applies regardless of whether the clustered primary server is the
source of the replication operation or the target.
See “About trusted primary servers for Auto Image Replication” on page 153.
See “Enable inter-node authentication for a NetBackup clustered primary server”
on page 160.
Caution: Choose the target storage server or servers carefully. A target storage
server must not also be a storage server for the source domain. Also, a disk volume
must not be shared among multiple NetBackup domains.
Option Description
Target master server: All trusted primary servers are in the drop-down list.
Target storage server type: If a trusted primary server is configured, the value is Target storage server name.
Target storage server name: If a trusted primary server is configured, select the target storage server. If a trusted primary server is not configured, enter the name of the target storage server.
See “Configuring a target for MSDP replication to a remote domain” on page 163.
UNIX
/usr/openv/pdde/pdcr/bin/spauser -a -u <username> -p <password>
--role air --owner root
By default, bandwidthlimit=0.
The agent.cfg file resides in the following directory:
■ UNIX: storage_path/etc/puredisk
■ Windows: storage_path\etc\puredisk
By default, OPTDUP_BANDWIDTH = 0.
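For example, to cap each optimized duplication and replication stream at about 10 MB/sec, the pd.conf entry might look like the following sketch. The value 10240 is an arbitrary illustration; OPTDUP_BANDWIDTH is specified in KBytes/second and 0 means unlimited:

```ini
# Illustrative pd.conf entry: limit each optimized duplication and
# A.I.R. stream to 10240 KB/sec (about 10 MB/sec). 0 means unlimited.
OPTDUP_BANDWIDTH = 10240
```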
See “Configuring MSDP optimized duplication within the same NetBackup domain”
on page 136.
See “Configuring MSDP replication to a different NetBackup domain” on page 144.
A storage lifecycle policy (SLP) is a storage plan for a set of backups. An SLP is
configured within the Storage Lifecycle Policies utility.
An SLP contains instructions in the form of storage operations, to be applied to the
data that is backed up by a backup policy. Operations are added to the SLP that
determine how the data is stored, copied, replicated, and retained. NetBackup
retries the copies as necessary to ensure that all copies are created.
SLPs offer the opportunity for users to assign a classification to the data at the
policy level. A data classification represents a set of backup requirements, which
makes it easier to configure backups for data with different requirements. For
example, email data and financial data.
SLPs can be set up to provide staged backup behavior. They simplify data
management by applying a prescribed behavior to all the backup images that are
included in the SLP. This process allows the NetBackup administrator to leverage
the advantages of disk-based backups in the near term. It also preserves the
advantages of tape-based backups for long-term storage.
The SLP Parameters properties in the NetBackup web UI allow administrators to
customize how SLPs are maintained and how SLP jobs run.
Best-practice information about SLPs appears in the following document:
https://www.veritas.com/content/support/en_US/article.100009913
For more information, see the NetBackup Administrator's Guide, Volume I.
Domain 1 (source domain): The Auto Image Replication SLP in the source domain must meet the following criteria:
■ The first operation must be a Backup operation to a Media Server Deduplication Pool. Indicate the exact storage unit from the drop-down list. Do not select Any Available.
Note: The target domain must contain the same type of storage to import the image.
■ At least one operation must be a Replication operation to a Media Server Deduplication Pool in another NetBackup domain. You can configure multiple Replication operations in an Auto Image Replication SLP. The Replication operation settings determine whether the backup is replicated to all replication targets in all primary server domains or only to specific replication targets.
■ The SLP must be of the same data classification as the Import SLP in Domain 2.

Domain 2 (target domain): If replicating to all targets in all domains, in each domain NetBackup automatically creates an Import SLP that meets all the necessary criteria.
Note: If replicating to specific targets, you must create the Import SLP before creating the Auto Image Replication SLP in the originating domain.
■ The first operation in the SLP must be an Import operation. NetBackup must support the Destination storage as a target for replication from the source storage. Indicate the exact storage unit from the drop-down list. Do not select Any Available.
■ The SLP must contain at least one operation that has the Target retention specified.
■ The SLP must be of the same data classification as the SLP in Domain 1. Matching the data classification keeps a consistent meaning to the classification and facilitates global reporting by data classification.
Figure 6-9 shows how the SLP in the target domain is set up to replicate the images
from the originating primary server domain.
Figure 6-9 Storage lifecycle policy pair required for Auto Image Replication
Note: Restart nbstserv after you make changes to the underlying storage for any
operation in an SLP.
5 Select an Operation type. If you're creating a child operation, the SLP displays
only those operations that are valid based on the parent operation that you
selected.
6 Configure the properties for the operation.
7 The Window tab displays for the following operation types: Backup From
Snapshot, Duplication, Import, Index From Snapshot, and Replication. If
you'd like to control when the secondary operation runs, create a window for
the operation.
8 On the Properties tab, click Advanced. Choose if NetBackup should process
active images after the window closes.
9 Click Create to create the operation.
10 Add additional operations to the SLP as needed. (See step 4.)
11 Change the hierarchy of the operations in the SLP if necessary.
12 Click Create to create the SLP. NetBackup validates the SLP when it is first
created and whenever it is changed.
13 Configure a backup policy and select a storage lifecycle policy as the Policy
storage.
See “Creating a backup policy” on page 176.
Note: The SLP options can be configured on the NetBackup web UI.
Setting Description
Storage lifecycle policy name: The Storage lifecycle policy name describes the SLP. The name cannot be modified after the SLP is created.
Data classification The Data classification defines the level or classification of data that the SLP is allowed
to process. The drop-down menu contains all of the defined classifications as well as the
Any classification, which is unique to SLPs.
The Any selection indicates to the SLP that it should preserve all images that are submitted,
regardless of their data classification. It is available for SLP configuration only and is not
available to configure a backup policy.
In an Auto Image Replication configuration where the master server domains run different
versions of NetBackup, see the following topic for special considerations:
See “About the storage lifecycle policies required for Auto Image Replication” on page 170.
One data classification can be assigned to each SLP and applies to all operations in the
SLP.
If a data classification is selected (other than Any), the SLP stores only those images from
the policies that are set up for that data classification. If no data classification is indicated,
the SLP accepts images of any classification or no classification.
The Data classification setting allows the NetBackup administrator to classify data based
on relative importance. A classification represents a set of backup requirements. When data
must meet different backup requirements, consider assigning different classifications.
For example, email backup data can be assigned to the silver data classification and financial
data backup may be assigned to the platinum classification.
A backup policy associates backup data with a data classification. Policy data can be stored
only in an SLP with the same data classification.
Once data is backed up in an SLP, the data is managed according to the SLP configuration.
The SLP defines what happens to the data from the initial backup until the last copy of the
image has expired.
Priority for secondary operations: The Priority for secondary operations option is the priority that jobs from secondary operations have in relationship to all other jobs. The priority applies to the jobs that result from all operations except for Backup and Snapshot operations. Range: 0 (default) to 99999 (highest priority).
For example, you may want to set the Priority for secondary operations for a policy with
a gold data classification higher than for a policy with a silver data classification.
The priority of the backup job is set in the backup policy on the Attributes tab.
Operations Use the Add, Change, and Remove buttons to create a list of operations in the SLP. An
SLP must contain one or more operations. Multiple operations imply that multiple copies
are created.
The list also contains the columns that display information about each operation. Not all
columns display by default.
Arrows Use the arrows to indicate the indentation (or hierarchy) of the source for each copy. One
copy can be the source for many other copies.
Active and Postponed: The Active and Postponed options appear under State of Secondary Operation Processing and refer to the processing of all duplication operations in the SLP.
Note: The Active and Postponed options apply to the duplication operations that create tar-formatted images, for example, those created with bpduplicate. The Active and Postponed options do not affect the images that are duplicated as a result of OpenStorage optimized duplication, NDMP, or if one or more destination storage units are specified as part of a storage unit group.
■ Enable Active to let secondary operations continue as soon as possible. When changed
from Postponed to Active, NetBackup continues to process the images, picking up
where it left off when secondary operations were made inactive.
■ Enable Postponed to postpone the secondary operations for the entire SLP. Postponed
does not postpone the creation of duplication jobs, it postpones the creation of images
instead. The duplication jobs continue to be created, but they are not run until secondary
operations are active again.
All secondary operations in the SLP are inactive indefinitely unless the administrator
selects Active or until the Until option is selected and an activation date is indicated.
Validate Across Backup Policies button: Click this button to see how changes to this SLP can affect the policies that are associated with this SLP. The button generates a report that displays on the Validation Report tab. This button performs the same validation as the -conflict option performs when used with the nbstl command.
For VMware backups, select the Enable file recovery from VM backup option
when you configure a VMware backup policy. The Enable file recovery from VM
backup option provides the best deduplication rates.
NetBackup deduplicates the client data that it sends to a deduplication storage unit.
See “About storage unit groups for MSDP” on page 64.
See “Use MSDP compression and encryption” on page 64.
Note: If a client is in a subdomain that is different from the server subdomain, add
the fully qualified domain name of the server to the client’s hosts file. For example,
india.veritas.org is a different subdomain than china.veritas.org.
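For example, if a client in china.veritas.org backs up to a media server in india.veritas.org, the client's hosts file might contain an entry like the following. The host name and IP address are hypothetical:

```text
# Client's hosts file (/etc/hosts on UNIX,
# %SystemRoot%\System32\drivers\etc\hosts on Windows)
10.80.40.5   mediaserver1.india.veritas.org
```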
When a backup or restore job for a client starts, NetBackup searches the Resilient
network list from top to bottom looking for the client. If NetBackup finds the client,
NetBackup updates the resilient network setting of the client and the media server
that runs the job. NetBackup then uses a resilient connection.
Property Description
FQDN or IP address: The fully qualified domain name or IP address of the host. The
address can also be a range of IP addresses so you can
configure more than one client at once. You can mix IPv4
addresses and ranges with IPv6 addresses and subnets.
Use the arrow buttons on the right side of the pane to move
up or move down an item in the list of resilient networks.
Note: The order is significant for the items in the list of resilient networks. If a client
is in the list more than once, the first match determines its resilient connection
status. For example, suppose you add a client and specify the client IP address
and specify On for Resiliency. Suppose also that you add a range of IP addresses
as Off, and the client IP address is within that range. If the client IP address appears
before the address range, the client connection is resilient. Conversely, if the IP
range appears first, the client connection is not resilient.
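The first-match behavior can be sketched as follows. This is illustrative code, not NetBackup logic; the function name, entry format, and addresses are assumptions for the example:

```shell
# Illustrative sketch (not NetBackup code) of the first-match rule described
# above: the first list entry that matches the client decides its resiliency.
resiliency_for() {
  client="$1"; shift
  for entry in "$@"; do            # entries are "pattern:state", in list order
    pattern="${entry%%:*}"
    state="${entry##*:}"
    case "$client" in
      $pattern) echo "$state"; return ;;
    esac
  done
  echo "OFF"                       # not listed: connection is not resilient
}

# Client 10.0.0.5 is listed ON before the 10.0.0.* OFF range, so ON wins:
resiliency_for "10.0.0.5" "10.0.0.5:ON" "10.0.0.*:OFF"   # prints ON
resiliency_for "10.0.0.9" "10.0.0.5:ON" "10.0.0.*:OFF"   # prints OFF
```

Reversing the list order would make the range match first and the client connection would not be resilient, which mirrors the note above.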
Other NetBackup properties control the order in which NetBackup uses network
addresses.
The NetBackup resilient connections use the SOCKS protocol version 5.
Resilient connection traffic is not encrypted. It is recommended that you encrypt
your backups. For deduplication backups, use the deduplication-based encryption.
For other backups, use policy-based encryption.
Resilient connections apply to backup connections. Therefore, no additional network
ports or firewall ports must be opened.
Note: If multiple backup streams run concurrently, the Remote Network Transport Service writes a large amount of information to the log files. In such a scenario, Veritas recommends that you set the logging level for the Remote Network Transport Service to 2 or less. For instructions on configuring unified logging, see the NetBackup Logging Reference Guide.
■ More processes run on media servers and clients. Usually, only one more
process per host runs even if multiple connections exist.
■ The processing that is required to maintain a resilient connection may reduce
performance slightly.
4 Click Save.
The settings are propagated to the affected hosts through normal NetBackup
inter-host communication, which can take up to 15 minutes.
6 If you want to begin a backup immediately, restart the NetBackup services on
the primary server.
Note: Use variable-length deduplication for data types that do not show a good deduplication ratio with the current MSDP intelligent deduplication algorithm and affiliated streamers. Enabling variable-length deduplication might improve the deduplication ratio, but CPU performance might be affected.
The following table describes the effect of variable-length deduplication on the data
backup:
Option Description
vldtype ■ VLD
Version 1 of the variable-length deduplication algorithm.
■ VLD_2
Version 2 of the variable-length deduplication algorithm. Veritas recommends this version as the default.
■ VLD_3
Another version of the variable-length deduplication algorithm.
When you set parameters for a client and a policy, you can use an asterisk (*) to indicate all clients or policies.
For example:
cacontrol --vld updatebypolicy "*" VLD_V2 32 64
Note: Veritas recommends that you make a backup copy of the file before you edit
it.
2 To activate a setting, remove the pound character (#) in column 1 from each
line that you want to edit.
3 To change a setting, specify a new value.
Note: The spaces to the left and right of the equal sign (=) in the file are
significant. Ensure that the space characters appear in the file after you edit
the file.
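For example, activating the LOGLEVEL setting and changing its value might look like the following before and after the edit. The value 3 is illustrative; note that the spaces around the equal sign are preserved:

```ini
# Before (inactive):
#LOGLEVEL = 0
# After (pound character removed, new value set):
LOGLEVEL = 3
```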
Parameter Description
BACKUPRESTORERANGE On a client, specifies the IP address or range of addresses that the local
network interface card (NIC) should use for backups and restores.
Specify the value in one of two ways, as follows:
CR_STATS_TIMER Specifies a time interval in seconds for retrieving statistics from the storage
server host. The default value of 0 disables caching and retrieves statistics
on demand.
Consider the following information before you change this setting:
■ If disabled (set to 0), a request for the latest storage capacity information
occurs whenever NetBackup requests it.
■ If you specify a value, a request occurs only after the specified number
of seconds since the last request. Otherwise, a cached value from the
previous request is used.
■ Enabling this setting may reduce the queries to the storage server. The
drawback is the capacity information reported by NetBackup becomes
stale. Therefore, if storage capacity is close to full, Veritas recommends
that you do not enable this option.
■ On high load systems, the load may delay the capacity information
reporting. If so, NetBackup may mark the storage unit as down.
DEBUGLOG Specifies the file to which NetBackup writes the deduplication plug-in log
information. NetBackup prepends a date stamp to each day's log file.
On Windows, a partition identifier and slash must precede the file name.
On UNIX, a slash must precede the file name.
Note: This parameter does not apply for NDMP backups from a NetApp
appliance.
Default value:
DISABLE_BACKLEVEL_TLS When secure communication is established between the client and the
server, this parameter specifies whether or not to disable older TLS
versions. NetBackup version 8.0 and earlier use older TLS versions such
as SSLV2, SSLV3, TLS 1.0, and TLS 1.1.
For a standard backup, NetBackup client version 8.0 and earlier can
communicate with NetBackup server (media server or load balance server)
version 8.1 that has TLS 1.2 enabled.
ENCRYPTION Specifies whether to encrypt the data during backups. By default, files are
not encrypted.
If you set this parameter to 1 on all hosts, the data is encrypted during
transfer and on the storage.
To encrypt all data in the MSDP server, it is recommended that you use
the server option. ENCRYPTION parameter is useful only for the backups
or replication using the hosts where the pd.conf file exists.
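A minimal pd.conf fragment that turns on encryption for the hosts that use this file might look like:

```ini
# Encrypt data during transfer and on storage (default is 0, not encrypted)
ENCRYPTION = 1
```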
FIBRECHANNEL Enables Fibre Channel for backup and restore traffic to and from a NetBackup appliance.
To determine the keep alive interval that NetBackup uses, examine the
deduplication plug-in log file for a message similar to the following:
For more information about the deduplication plug-in log file, see DEBUGLOG
and LOGLEVEL in this table.
FP_CACHE_CLIENT_POLICY
Note: Veritas recommends that you use this setting on the individual
clients that back up their own data (client-side deduplication). If you use it
on a storage server or load balancing server, it affects all backup jobs.
Specifies the client, backup policy, and date from which to obtain the
fingerprint cache for the first backup of a client.
By default, the fingerprints from the previous backup are loaded. This
parameter lets you load the fingerprint cache from another, similar backup.
It can reduce the amount of time that is required for the first backup of a
client. This parameter is especially useful for remote office backups to a
central datacenter in which data travels long distances over a WAN.
clienthostmachine,backuppolicy,date
The date is the last date in mm/dd/yyyy format to use the fingerprint cache
from the client you specify.
See “Configuring MSDP fingerprint cache seeding on the client” on page 90.
Because incremental backups only back up what has changed since the last backup, cache loading has little effect on backup performance for incremental backups.
FP_CACHE_LOCAL Specifies whether or not to use the fingerprint cache for the backup jobs
that are deduplicated on the storage server. This parameter does not apply
to load balancing servers or to clients that deduplicate their own data.
FP_CACHE_MAX_COUNT Specifies the maximum number of images to load in the fingerprint cache.
FP_CACHE_MAX_MBSIZE Specifies the amount of memory in MBs to use for the fingerprint cache.
FP_CACHE_PERIOD_REBASING_THRESHOLD Specifies the threshold (MB) for periodic rebasing during backups. A
container is considered for rebasing if both of the following are true:
■ The container has not been rebased within the last three months.
■ For that backup, the data segments in the container consume less
space than the FP_CACHE_PERIOD_REBASING_THRESHOLD value.
FP_CACHE_REBASING_THRESHOLD Specifies the threshold (MB) for normal rebasing during backups. A
container is considered for rebasing if both of the following are true:
■ The container has been rebased within the last three months.
■ For that backup, the data segments in the container consume less
space than the FP_CACHE_REBASING_THRESHOLD value.
Default value: FP_CACHE_REBASING_THRESHOLD = 4
If you change this value, consider the new value carefully. If you set it too
large, all containers become eligible for rebasing. Deduplication rates are
lower for the backup jobs that perform rebasing.
LOCAL_SETTINGS Specifies whether to use the pd.conf settings of the local host or to allow
the server to override the local settings. The following is the order of
precedence for local settings:
■ Local host
■ Load balancing server
■ Storage server
LOGLEVEL Specifies the amount of information that is written to the log file. The range
is from 0 to 10, with 10 being the most logging.
MAX_LOG_MBSIZE The maximum size of the log file in megabytes. NetBackup creates a new
log file when the log file reaches this limit. NetBackup prepends the date
and the ordinal number beginning with 0 to each log file, such as
120131_0_pdplugin.log, 120131_1_pdplugin.log, and so on.
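Putting the logging parameters together, an illustrative pd.conf logging configuration might be the following. The values are examples, not documented defaults:

```ini
# Write deduplication plug-in logs at a moderate level
LOGLEVEL = 5
# Roll the log file when it reaches 500 MB
MAX_LOG_MBSIZE = 500
```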
MTSTRM_BACKUP_CLIENTS If set, limits the use of the Multi-Threaded Agent to the backups of the
specified clients. The clients that are not specified use single-threading.
This setting does not guarantee that the specified clients use the
Multi-Threaded Agent. The MaxConcurrentSessions parameter in the
mtstrm.conf file controls the number of backups the Multi-Threaded
Agent processes concurrently. If you specify more clients than the
MaxConcurrentSessions value, some of the clients may use
single-threaded processing.
MTSTRM_BACKUP_ENABLED Use the Multi-Threaded Agent in the backup stream between the
deduplication plug-in and the NetBackup Deduplication Engine.
The following items describe the values that are used for the determination
algorithm:
■ A Linux media server that has 8 CPU cores with two hyperthreading
units per core has a hardware concurrency of 16. Therefore, the
hardware concurrency value for the algorithm is 8 (for media servers,
half of the system's hardware concurrency). Eight is greater than two
(the threshold value of Windows and Linux), so multithreading is enabled
(MTSTRM_BACKUP_ENABLED = 1).
■ A Solaris client that has 2 CPU cores without hyperthreading has a
hardware concurrency of 2. The hardware concurrency value for the
algorithm is 2 (for clients, all of the system's hardware concurrency).
Two is not greater than four (the threshold value of Solaris), so
multithreading is not enabled (MTSTRM_BACKUP_ENABLED = 0).
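The determination algorithm described in the two examples above can be sketched as follows. This is illustrative code, not NetBackup's implementation; the function name and parameters are assumptions:

```shell
# Illustrative sketch (not NetBackup code) of the algorithm described above:
# media servers count half of the hardware concurrency, clients count all of
# it, and multithreading is enabled when that value exceeds the OS threshold.
mtstrm_enabled() {
  hw=$1        # hardware concurrency (e.g. from getconf _NPROCESSORS_ONLN)
  role=$2      # "media" or "client"
  threshold=$3 # per-OS threshold (2 for Windows/Linux, 4 for Solaris)
  if [ "$role" = "media" ]; then eff=$((hw / 2)); else eff=$hw; fi
  if [ "$eff" -gt "$threshold" ]; then echo 1; else echo 0; fi
}

mtstrm_enabled 16 media 2   # the Linux media server example: prints 1
mtstrm_enabled 2 client 4   # the Solaris client example: prints 0
```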
MTSTRM_BACKUP_POLICIES If set, limits the use of the Multi-Threaded Agent to the backups of the
specified policies. The clients in the policies that are not specified use
single-threading, unless the client is specified in the
MTSTRM_BACKUP_CLIENTS parameter.
This setting does not guarantee that all of the clients in the specified policies
use the Multi-Threaded Agent. The MaxConcurrentSessions parameter
in the mtstrm.conf file controls the number of backups the Multi-Threaded
Agent processes concurrently. If the policies include more clients than the
MaxConcurrentSessions value, some of the clients may use
single-threaded processing.
MTSTRM_IPC_TIMEOUT The number of seconds to wait for responses from the Multi-Threaded
Agent before the deduplication plug-in times out with an error.
OPTDUP_BANDWIDTH Determines the bandwidth that is allowed for each optimized duplication
and Auto Image Replication stream on a deduplication server.
OPTDUP_BANDWIDTH does not apply to clients. The value is specified in
KBytes/second.
OPTDUP_COMPRESSION Specifies whether to compress the data during optimized duplication and
Auto Image Replication. By default, files are compressed. To disable
compression, change the value to 0. This parameter does not apply to
clients.
OPTDUP_ENCRYPTION Specifies whether to encrypt the data during optimized duplication and
replication. By default, files are not encrypted. If you want encryption,
change the value to 1 on the MSDP storage server and on the MSDP load
balancing servers. This parameter does not apply to clients.
If you set this parameter to 1 on all hosts, the data is encrypted during
transfer.
OPTDUP_TIMEOUT Specifies the number of minutes before the optimized duplication times
out.
PREFERRED_EXT_SEGKSIZE Specifies the file extensions and the preferred segment sizes in KB for specific file types. File extensions are case sensitive. The following describe the default values: edb are Exchange Server files; mdf are SQL Server master database files; ndf are SQL Server secondary data files; and segsize64k are Microsoft SQL streams.
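An illustrative entry might look like the following. The comma-separated extension:sizeKB format and the pst addition are assumptions for the example:

```ini
# Illustrative entry (format assumed: comma-separated extension:sizeKB pairs;
# pst:128 is a hypothetical addition)
PREFERRED_EXT_SEGKSIZE = edb:32,mdf:64,ndf:64,segsize64k:64,pst:128
```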
PREFETCH_SIZE The size in bytes to use for the data buffer for restore operations.
PREDOWNLOAD_FACTOR Specifies the predownload factor to use when restoring data from a
cloud LSU.
RESTORE_DECRYPT_LOCAL Specifies on which host to decrypt and decompress the data during restore
operations.
You can also specify the segment size for specific file types. See
PREFERRED_EXT_SEGKSIZE.
Configuring deduplication 200
About the MSDP pd.conf configuration file
You can also specify different maximum and minimum segment sizes with
this parameter for different NetBackup clients. If you do not specify the
segment sizes, then the default values are considered.
■ VLD_CLIENT_NAME = *
Enables variable-length deduplication for all NetBackup clients and
uses the default VLD_MIN_SEGKSIZE and VLD_MAX_SEGKSIZE values.
■ VLD_CLIENT_NAME = clientname
Enables variable-length deduplication for NetBackup client
clientname and uses the default VLD_MIN_SEGKSIZE and
VLD_MAX_SEGKSIZE values.
■ VLD_CLIENT_NAME = clientname (64, 256)
Enables variable-length deduplication for NetBackup client
clientname and uses 64 KB as the VLD_MIN_SEGKSIZE and 256
KB as the VLD_MAX_SEGKSIZE value.
VLD_MIN_SEGKSIZE The minimum size of the data segment for variable-length deduplication,
in KB. The segment size must be a multiple of 4 and fall between 4
KB and 16384 KB. The default value is 64 KB.
VLD_MAX_SEGKSIZE The maximum size of the data segment for variable-length deduplication,
in KB. VLD_MAX_SEGKSIZE sets a boundary for the data
segments. The segment size must be a multiple of 4 and fall between
4 KB and 16384 KB. The default value is 128 KB.
You can also specify different maximum and minimum segment sizes with
this parameter for different NetBackup policies. If you do not specify the
segment sizes, then the default values are considered.
■ VLD_POLICY_NAME = *
Enables variable-length deduplication for all NetBackup policies and
uses the default VLD_MIN_SEGKSIZE and VLD_MAX_SEGKSIZE
values.
■ VLD_POLICY_NAME = policyname
Enables variable-length deduplication for NetBackup policy
policyname and uses the default VLD_MIN_SEGKSIZE and
VLD_MAX_SEGKSIZE values.
■ VLD_POLICY_NAME = policyname (64, 256)
Enables variable-length deduplication for NetBackup policy
policyname and uses 64 KB as the VLD_MIN_SEGKSIZE and 256
KB as the VLD_MAX_SEGKSIZE value.
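These pd.conf parameters are plain name = value lines. The sketch below writes an illustrative fragment to a temporary file and reads one setting back; the specific values (bandwidth cap, client name, segment sizes) are examples chosen for this sketch, not defaults from this guide.

```shell
# Illustrative pd.conf fragment (not the file's full contents; values are
# examples, not shipped defaults):
#   OPTDUP_BANDWIDTH  - per-stream cap in KBytes/second
#   OPTDUP_ENCRYPTION - 1 encrypts optimized duplication / A.I.R. transfers
#   VLD_CLIENT_NAME   - enables variable-length dedupe for one client
PDCONF=$(mktemp)
cat > "$PDCONF" <<'EOF'
OPTDUP_BANDWIDTH = 51200
OPTDUP_COMPRESSION = 1
OPTDUP_ENCRYPTION = 1
VLD_CLIENT_NAME = client01.example.com (64, 256)
VLD_MIN_SEGKSIZE = 64
VLD_MAX_SEGKSIZE = 128
EOF
# Read a single setting back out of the fragment:
awk -F' = ' '$1 == "OPTDUP_ENCRYPTION" {print $2}' "$PDCONF"
```

The same name = value layout applies to the real pd.conf on the storage server; edit it with a text editor rather than generating it.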
Usually, you do not need to change settings in the file. However, in some cases,
you may be directed to change settings by a Veritas support representative.
The NetBackup documentation exposes only some of the contentrouter.cfg file
parameters. Those parameters appear in topics that describe a task or process to
change configuration settings.
V7.0 represents the version of the I/O format not the NetBackup release level. The
version may differ on your system.
If you get the storage server configuration when the server is not configured or is
down and unavailable, NetBackup creates a template file. The following is an
example of a template configuration file:
To use a storage server configuration file for recovery, you must edit the file so that
it includes only the information that is required for recovery.
See “Saving the MSDP storage server configuration” on page 203.
See “Editing an MSDP storage server configuration file” on page 204.
See “Setting the MSDP storage server configuration” on page 205.
For sshostname, use the name of the storage server. For file.txt, use a file name
that indicates its purpose.
If you get the file when a storage server is not configured or is down and unavailable,
NetBackup creates a template file.
V7.0 "storagepath" " " string The value should be the same as the value that was used
when you configured the storage server.
V7.0 "spalogpath" " " string For the spalogpath, use the storagepath value and
append log to the path. For example, if the storagepath
is D:\DedupeStorage, enter D:\DedupeStorage\log.
V7.0 "dbpath" " " string If the database path is the same as the storagepath
value, enter the same value for dbpath. Otherwise, enter
the path to the database.
V7.0 "required_interface" " " string A value for required_interface is required only if you
configured one initially; if a specific interface is not
required, leave it blank. In a saved configuration file, the
required interface defaults to the computer's hostname.
V7.0 "spalogin" "username" string Replace username with the NetBackup Deduplication
Engine user ID.
V7.0 "spapasswd" "password" string Replace password with the password for the NetBackup
Deduplication Engine user ID.
V7.0 "encryption" " " int The value should be the same as the value that was used
when you configured the storage server.
V7.0 "kmsenabled" " " int The value is used to enable or disable MSDP KMS
configuration. The value should be the same as the value
that was used when you configured the storage server.
V7.0 "kmsservertype" " " int The value is the KMS server type. This value should be 0.
V7.0 "kmsservername" " " string The value is the name of the NetBackup Key Management
Server. The value should be the same as the value that was used when you
configured the storage server.
V7.0 "keygroupname" " " string The value should be the same as the value that was used
when you configured the storage server.
See “About saving the MSDP storage server configuration” on page 202.
See “Recovering from an MSDP storage server disk failure” on page 538.
See “Recovering from an MSDP storage server failure” on page 539.
To edit the storage server configuration
1 If you did not save a storage server configuration file, get a storage server
configuration file.
See “Saving the MSDP storage server configuration” on page 203.
2 Use a text editor to enter, change, or remove values.
Remove lines from and add lines to your file until only the required lines (see
Table 6-34) are in the configuration file. Enter or change the values between
the second set of quotation marks in each line. A template configuration file
has a space character (" ") between the second set of quotation marks.
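As a sketch of step 2, the substitution between the second set of quotation marks can be scripted. The storage path below is an example, and a one-line temporary file stands in for the saved configuration file:

```shell
# Sketch: fill in the empty value field of one template line.
# The template has " " between the second pair of quotation marks;
# replace it with the real value (/msdp/storage is an example path).
CFG=$(mktemp)
echo 'V7.0 "storagepath" " " string' > "$CFG"
sed -i 's|"storagepath" " "|"storagepath" "/msdp/storage"|' "$CFG"
cat "$CFG"
```

In practice you would repeat the substitution for each required line (dbpath, spalogin, and so on) or simply make the changes in a text editor.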
Note: The only time you should use the nbdevconfig command with the -setconfig
option is for recovery of the host or the host disk.
The storage_server_name is the fully qualified domain name if that was used to
configure the storage server. For example, if the storage server name is
DedupeServer.example.com, the configuration file name is
DedupeServer.example.com.cfg.
Warning: Only follow these procedures if you are reconfiguring your storage server
and storage paths.
rm /etc/pdregistry.cfg
cp -f /usr/openv/pdde/pdconfigure/cfg/userconfigs/pdregistry.cfg
/etc/pdregistry.cfg
■ HKLM\SOFTWARE\Symantec\PureDisk\Agent\EtcPath
Daily shadow copies NetBackup automatically creates copies of the MSDP catalog.
Catalog backup policy Veritas provides a utility that you can use to configure a NetBackup
policy that backs up the MSDP catalog.
Warning: You can change the path only during initial MSDP configuration. If
you change it after MSDP backups exist, data loss may occur.
The NetBackup Deduplication Manager creates a shadow copy at 0340 hours daily,
host time. To change the schedule, you must change the scheduler definition file.
See “Changing the MSDP shadow catalog schedule” on page 212.
By default, the NetBackup Deduplication Manager keeps five shadow copies of the
catalog. You can change the number of copies.
See “Changing the number of MSDP catalog shadow copies” on page 213.
Retention 2 weeks
UNIX:
/database_path/databases/catalogshadow
/storage_path/etc
/database_path/databases/spa
/storage_path/var
/usr/openv/lib/ost-plugins/pd.conf
/usr/openv/lib/ost-plugins/mtstrm.conf
/database_path/databases/datacheck
Windows:
database_path\databases\catalogshadow
storage_path\etc
storage_path\var
install_path\Veritas\NetBackup\bin\ost-plugins\pd.conf
install_path\Veritas\NetBackup\bin\ost-plugins\mtstrm.conf
database_path\databases\spa
database_path\databases\datacheck
By default, NetBackup uses the same path for the storage and the
catalog; the database_path and the storage_path are the
same. If you configure a separate path for the deduplication database,
the paths are different. Regardless, the drcontrol utility captures the
correct paths for the catalog backup selections.
You should consider the following items carefully before you configure an MSDP
catalog backup:
■ Do not use the Media Server Deduplication Pool as the destination for the
catalog backups. Recovery of the MSDP catalog from its Media Server
Deduplication Pool is impossible.
■ Use a storage unit that is attached to a NetBackup host other than the MSDP
storage server.
■ Use a separate MSDP catalog backup policy for each MSDP storage server.
The drcontrol utility does not verify that the backup selections are the same
for multiple storage servers. If the backup policy includes more than one MSDP
storage server, the backup selection is the union of the backup selections for
each host.
■ You cannot use one policy to protect MSDP storage servers on both UNIX hosts
and Windows hosts.
UNIX MSDP storage servers require a Standard backup policy and Windows
MSDP storage servers require an MS-Windows policy.
See “Configuring an MSDP catalog backup” on page 214.
See “Updating an MSDP catalog backup policy” on page 218.
Warning: You can change the shadow catalog path during initial MSDP configuration
only. If you change it after MSDP backups exist, data loss may occur.
Note: There is a period (.) in front of the file name that denotes a hidden file.
2 Edit the second section of the file (40 3 * * *). The schedule section conforms
to the UNIX crontab file convention, as follows:
40 3 * * *
┬ ┬ ┬ ┬ ┬
│ │ │ │ │
│ │ │ │ │
│ │ │ │ └───── Day of week (0 - 7, Sunday is both 0 and 7, or use
│ │ │ │ sun, mon, tue, wed, thu, fri, sat; asterisk (*) is
│ │ │ │ every day)
│ │ │ └────────── Month (1 - 12; asterisk (*) is every month)
│ │ └─────────────── Day of month (1 - 31; asterisk (*) is every
│ │ day of the month)
│ └──────────────────── Hour (0 - 23; asterisk (*) is every hour)
└───────────────────────── Minute (0 - 59; asterisk (*) is every
minute of the hour)
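For example, to move the daily shadow copy from 0340 to 0130 host time, only the minute and hour fields change. This sketch edits a stand-in file, since the scheduler definition file path is not shown in this section:

```shell
# Sketch: change the daily shadow-copy schedule from 03:40 to 01:30.
# SCHED is a temporary stand-in for the scheduler definition file.
SCHED=$(mktemp)
echo '40 3 * * *' > "$SCHED"
# Minute field 40 -> 30, hour field 3 -> 1; day/month/weekday stay "*".
sed -i 's/^40 3 /30 1 /' "$SCHED"
cat "$SCHED"
```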
Descriptions of the options are available in another topic. Note: To ensure that
NetBackup activates the policy, you must specify the --residence residence
option.
See “MSDP drcontrol options” on page 215.
The utility creates a log file and displays its path in the command output.
See “NetBackup MSDP log files” on page 714.
Table 6-35 MSDP drcontrol options for catalog backup and recovery
Option Description
--auto_recover_DR Recover the MSDP catalog from the most recent backup image.
This option automatically recovers the catalog and performs all of
the actions necessary to return MSDP to full functionality.
To recover the catalog from a backup other than the most recent,
contact your Veritas Support representative.
--client host_name The client to back up (that is, the host name of the MSDP storage
server).
--cleanup Remove all of the old MSDP catalog directories during the catalog
recovery process. Those directories are renamed during the
recovery.
--disk_pool This option is required for auto_recover_DR when the disk pool
name cannot be determined from the host name.
--dsid The data selection ID is the catalog directory for one of the
NetBackup domains.
--hardware machine_type The hardware type or the computer type for the host.
Default: Unknown.
--initialize_DR Performs the following actions to prepare for MSDP catalog recovery:
--list_files List the files in the most recent MSDP catalog backup.
--log_file pathname The pathname for the log file that the drcontrol utility creates. By
default, the utility writes log files to
/storage_path/log/drcontrol/.
--new_policy Create a new policy to protect the deduplication catalog on this host.
If a policy with the given name exists already, the command fails.
Note: To ensure that NetBackup activates the policy, you must
specify the --residence residence option.
Default: Dedupe_Catalog_shorthostname
--recover_last_image Restore the MSDP catalog from the last set of backup images (that
is, the last full plus all subsequent incrementals). The drcontrol
utility calls the NetBackup bprestore command for the restore
operation.
--refresh_shadow_catalog Deletes all existing shadow catalog copies and creates a new catalog
shadow copy.
--residence residence The name of the storage unit on which to store the MSDP catalog
backups.
--update_policy Update an existing catalog backup policy. Performs the following
actions:
■ If the client name (of this media server) is not in the policy’s client
list, add the client name to the policy’s client list.
■ If you specify the --OS or --hardware options, replace the
values currently in the policy with the new values.
■ Update the backup selection based on the locations of the MSDP
storage directories and configuration files. Therefore, if you modify
any of the following, you must use this option to update the
catalog backup policy:
■ Any of the following values in the spa.cfg file
(section:variable pairs):
■ StorageDatabase:CatalogShadowPath
■ StorageDatabase:Path
■ Paths:Var
■ The spa.cfg or contentrouter.cfg locations in the
pdregistry.cfg file.
This option fails if there is no policy with the given policy name. It
also fails if the existing policy type is incompatible with the operating
system of the host on which you run the command.
■ To add the client name of the storage server to the policy’s client list.
■ To update the --OS value.
■ To update the --hardware value.
■ To update the backup selection if you modified any of the following configuration
values:
■ Any of the following values in the spa.cfg file (section:variable pairs):
■ StorageDatabase:CatalogShadowPath
■ StorageDatabase:Path
■ Paths:Var
Windows: install_path\Veritas\pdde\drcontrol --update_policy
--policy policy_name [--client host_name] [--hardware
machine_type] [--OS operating_system] [--NB_install_dir
install_directory]
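For comparison, a UNIX invocation might look like the following sketch. The drcontrol path, policy name, and client name here are assumptions for illustration, not values from this guide:

```shell
# Sketch only: assemble a drcontrol --update_policy invocation.
# The binary path, policy name, and client name are hypothetical.
DRCONTROL=/usr/openv/pdde/pdcr/bin/drcontrol   # assumed UNIX location
CMD="$DRCONTROL --update_policy --policy Dedupe_Catalog_msdp01 --client msdp01.example.com"
echo "$CMD"
```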
It describes the approved security functions for symmetric and asymmetric key
encryption, message authentication, and hashing.
For more information about the FIPS 140-2 standard and its validation program,
see the National Institute of Standards and Technology (NIST) and the
Communications Security Establishment Canada (CSEC) Cryptographic Module
Validation Program website at
https://csrc.nist.gov/projects/cryptographic-module-validation-program.
The NetBackup MSDP is now FIPS validated and can be operated in FIPS mode.
Note: You must run FIPS mode on a new installation of NetBackup 8.1.1. You can
only enable OCSD FIPS on NetBackup 10.0 and newer versions.
Caution: Enabling MSDP FIPS mode might affect the NetBackup performance on
a server with the Solaris operating system.
Enable the FIPS mode for MSDP by running the following commands:
■ For UNIX:
/usr/openv/pdde/pdag/scripts/set_fips_mode.sh 1
■ For Windows:
<install_path>\Veritas\pdde\set_fips_mode.bat 1
Restart the NetBackup services on the server for these changes to take effect:
■ For UNIX:
■ /usr/openv/netbackup/bin/bp.kill_all
■ /usr/openv/netbackup/bin/bp.start_all
■ For Windows:
■ <install_path>\NetBackup\bin\bpdown
■ <install_path>\NetBackup\bin\bpup
Enable the FIPS mode on the NetBackup client by running the following commands:
■ For Windows:
<install_path>\Veritas\pdde\set_fips_mode.bat 1
■ For UNIX:
/usr/openv/pdde/pdag/scripts/set_fips_mode.sh 1
Restart the NetBackup services on the server and the client for these changes to
take effect:
■ For Windows:
■ <install_path>\NetBackup\bin\bpdown
■ <install_path>\NetBackup\bin\bpup
■ For UNIX:
■ /usr/openv/netbackup/bin/bp.kill_all
■ /usr/openv/netbackup/bin/bp.start_all
Warning: For security reasons, the recommendation is that you do not disable the
MSDP FIPS mode once it has been enabled.
For Windows:
<install_path>\Veritas\pdde\crcontrol.exe --getmode
4 Configure the NetBackup client-side deduplication backup policy and run the
backup operation.
To use an MSDP storage server from another NetBackup domain, the MSDP storage
server must have multiple MSDP users. Then NetBackup media servers or clients
can use the MSDP storage server from another NetBackup domain by using a
different MSDP user. Multiple NetBackup domains can use the same MSDP storage
server, but each NetBackup domain must use a different MSDP user to access that
MSDP storage server.
To add an MSDP user on an MSDP storage server, run the following command:
■ Windows
<install_path>\pdde\spauser -a -u <username> -p <password> --role
admin
■ UNIX
/usr/openv/pdde/pdcr/bin/spauser -a -u <username> -p <password>
--role admin
If the storage server is a NetBackup WORM Storage Server or NetBackup Flex Scale
storage server, run the following NetBackup Deduplication Shell command:
setting MSDP-user add-MSDP-user username=<username> [role=<role-name>]
The role is optional and can be admin or app. If the role is not specified, the
default role admin is used.
To list all the MSDP users, run the following command on the MSDP storage server:
■ Windows
<install_path>\pdde\spauser -l
■ UNIX
/usr/openv/pdde/pdcr/bin/spauser -l
If the storage server is a NetBackup WORM Storage Server or NetBackup Flex Scale
storage server, run the following NetBackup Deduplication Shell command:
setting MSDP-user list
Note: We recommend that the total number of MSDP users that you create to
support multi-domain not exceed 128.
To use an MSDP storage server from another NetBackup domain, you must obtain
a NetBackup certificate from another NetBackup domain.
Run the following commands on every NetBackup media server or client that wants
to use an MSDP storage server from another domain:
■ Windows
<install_path>\NetBackup\bin\nbcertcmd -getCACertificate -server
another_primary_server
<install_path>\NetBackup\bin\nbcertcmd -getCertificate -server
another_primary_server -token token_string
■ UNIX
/usr/openv/netbackup/bin/nbcertcmd -getCACertificate -server
another_primary_server
/usr/openv/netbackup/bin/nbcertcmd -getCertificate -server
another_primary_server -token token_string
If the storage server is NetBackup WORM Storage Server or NetBackup Flex Scale
storage server, run the following NetBackup Deduplication Shell command:
setting certificate get-CA-certificate
primary_server=another_primary_server
■ NetBackup commands
■ Use the bpnbat command to log on the target NetBackup primary server.
■ Use the nbcertcmd command to get the authorization tokens.
For more information on the commands, refer to the NetBackup Commands
Reference Guide.
The following example environment has two NetBackup domains: domain A
(primaryA, mediaA1, mediaA2, clientA) and domain B (primaryB, mediaB).
PrimaryA is the host name of the primary server of NetBackup domain A and the
domain contains two media servers (mediaA1, mediaA2), and one client (clientA).
PrimaryB is the host name of the primary server of NetBackup domain B and the
domain contains one media server (mediaB).
Using the following sample steps, create an MSDP storage server in domain B and
let domain A use the MSDP storage server:
1. Create an MSDP storage server on the media server mediaB of NetBackup
domain B.
■ Open the web UI.
■ On the left, click Storage > Disk storage.
■ On the Storage servers tab, click Add and select Media Server
Deduplication Pool to local or cloud storage.
2. Run the following command on mediaB to create a new MSDP user testuser1
with password as testuser1pass.
spauser -a -u "testuser1" -p "testuser1pass" --role admin
To run client-direct backup from clientA to MSDP storage server mediaB, run
the following certificate command on clientA:
■ nbcertcmd -GetCACertificate -server primaryB
5. After creating the MSDP OpenStorage server, create a related NetBackup disk
pool and storage unit. Use the storage unit to run all the related NetBackup
jobs.
When multi-domain is used with optimized duplication or A.I.R., there is
communication between the MSDP storage servers from two different NetBackup
domains. The MSDP storage server from the other domain must have a certificate
generated by primary server of the local NetBackup domain. Run the nbcertcmd
commands on the source side MSDP storage server to request a certificate from
the NetBackup primary server of the target MSDP storage server.
When backup and restore jobs on the client and multi-domain are used together,
there is communication between the NetBackup client and MSDP storage server
from two different NetBackup domains. Run the nbcertcmd commands on the
NetBackup client to request a certificate from the NetBackup primary server of
MSDP storage server.
When one NetBackup domain uses the MSDP storage server of another NetBackup
domain, the MSDP storage server cannot be the A.I.R target of that NetBackup
domain.
If an external CA is used in the NetBackup setup, you do not need to run the
nbcertcmd -GetCACertificate and the nbcertcmd -GetCertificate commands.
If NetBackup domains A and B do not use the same external CA, synchronize the
external root CA between the two NetBackup domains for MSDP communication.
For more information about the external CA, refer to the NetBackup Security and
Encryption Guide.
When one NetBackup domain uses an MSDP storage server that has multiple
network interfaces and related host names, another NetBackup domain can use
any one host name to configure the OpenStorage server. If the MSDP storage
server that has multiple host names uses an external CA, the Subject Alternative
Name field of the external certificate must contain all the host names that are used
to configure the OpenStorage server.
Only local storage of one MSDP storage can be used by other NetBackup domains.
Cloud LSU in one MSDP storage server cannot be used by other NetBackup
domains. Different NetBackup domains should not use the same MSDP user to
access the MSDP storage server; otherwise, the storage server shuts down
after several minutes. To solve this issue, see “Troubleshooting multi-domain
issues” on page 738.
If there is a data corruption in the MSDP storage server, only the first domain of
the MSDP storage server receives the data corruption notification. With multi-domain,
one storage server is used by multiple NetBackup domains, and each domain can
check space usage of this storage server. The used space of the storage server is
the sum of data from all domains.
Note: The universal share backup to the target domain is not supported in the
multi-domain setup.
mediaA - (10.XX.30.2/24)
mediaA2 - (10.XX.40.2/24)
primaryA is the primary server of domain A and has two host names and IP
addresses. mediaA is the media server of domain A and has two host names and
IP addresses. MSDP storage server is created on media server mediaA.
To let domain B access the MSDP storage server on mediaA of domain A, run the
following steps:
1. Create an MSDP storage server on media server mediaA of NetBackup domain
A.
Open the NetBackup web UI. Click Storage > Disk storage. Click on the
Storage servers tab. Click Add and select Media Server Deduplication Pool
to local or cloud storage.
2. Run following command on mediaA to create a new MSDP user testuser1
with password testuser1pass:
spauser -a -u "testuser1" -p "testuser1pass" --role admin
ssl_verify_client off;
ssl_protocols TLSv1.2;
ssl_ciphers
ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-G
CM-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES
128-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256;
ssl_prefer_server_ciphers on;
include /etc/nginx/locations/nginx_loc_spws.conf;
}
/usr/openv/java/jre/bin/keytool
-keystore
/usr/openv/var/global/wsl/credentials/truststoreMSDP.bcfks
-storetype BCFKS
-providername CCJ
-providerclass
com.safelogic.cryptocomply.jcajce.provider.CryptoComplyFipsProvider
-providerpath /usr/openv/wmc/webserver/lib/ccj.jar
-storepass
4d2b912d38dff8d2406b2aba2d023740aafa520a16cd3bb8b1b39b10b58a4ce5
-keypass
4d2b912d38dff8d2406b2aba2d023740aafa520a16cd3bb8b1b39b10b58a4ce5
-alias primaryB -importcert -file cluster_spws_cert.pem
Note: Your version of the bc-fips-X.X.X.X.jar file can be different than the
one in the previous example. Search that directory for bc-fips* to find the
right version for your NetBackup installation.
3. When you run the -list command on primaryB, you should see something
similar to the following example:
/usr/openv/java/jre/bin/keytool -list
-keystore
/usr/openv/var/global/wsl/credentials/truststoreMSDP.bcfks
-storetype BCFKS
-providername CCJ
-providerclass
com.safelogic.cryptocomply.jcajce.provider.CryptoComplyFipsProvider
-providerpath /usr/openv/wmc/webserver/lib/ccj.jar
-storepass
4d2b912d38dff8d2406b2aba2d023740aafa520a16cd3bb8b1b39b10b58a4ce5
The following table describes the WORM-specific options and arguments for the
catdbutil command.
Table 6-36 The options and arguments for the catdbutil command.
The spadb command line utility that lets you use the NetBackup Deduplication
Manager (spad) to set WORM for an LSU and define the WORM mode and the
interval for making the image immutable and indelible.
The Deduplication Manager reads the WORM mode from the
/etc/lockdown-mode.conf file file.
The following table describes the WORM-specific options and arguments for the
spadb command.
Table 6-37 The options and arguments for the spadb command.
spadb Command line utility that lets you use the NetBackup Deduplication
Manager (spad).
Syntax:
spadb update WORM set ${FIELD1_NAME}=xxx, ${FIELD2_NAME}=xxxx where
id=${DSID}
Field names:
■ indelible_minimum_interval
■ indelible_maximum_interval
Use the data selection ID to configure the following WORM properties:
■ indelible_minimum_interval and indelible_maximum_interval
Set the minimum and maximum interval in days for making the image
indelible.
For example,
spadb -c "update WORM set indelible_minimum_interval=1 where dsid=2"
spadb -c "update WORM set indelible_maximum_interval=1000000 where dsid=2"
■ On the NetBackup BYO, run the following command to check the maximum
number of files that the service user can open:
ulimit -Hn
Set the limit to 1048576 in the /etc/security/limits.conf file.
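As a sketch, the limits.conf entry could look like the line below; "msdpsvc" is a hypothetical service user name, and a temporary file stands in for /etc/security/limits.conf:

```shell
# Sketch: raise the open-file hard limit for an MSDP service user.
# "msdpsvc" is an example user name; LIMITS stands in for
# /etc/security/limits.conf.
LIMITS=$(mktemp)
echo 'msdpsvc hard nofile 1048576' >> "$LIMITS"
# Confirm the entry is present:
grep -c 'nofile 1048576' "$LIMITS"
```

The new limit takes effect for sessions started after the file is changed.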
To change the MSDP service user on NetBackup BYO
1 Stop the following services:
systemctl stop crond.service
/usr/openv/netbackup/bin/bp.kill_all
/opt/VRTSpbx/bin/vxpbx_exchanged stop
/usr/openv/netbackup/bin/bp.start_all
To change the MSDP service user on the media server on Flex Appliance
1 Stop the following services:
/opt/veritas/vxapp-manage/health disable
/opt/veritas/vxapp-manage/stop
/opt/veritas/vxapp-manage/health enable
For the usage of NetBackup Appliance Shell Menu, see the Veritas NetBackup
Appliance Commands Reference Guide.
2 Stop the NetBackup processes from the NetBackup Appliance shell menu.
Main_Menu > Support > Processes > NetBackup Stop
3 Run the following command from the NetBackup CLI to change the MSDP
service user.
nbucliuser-!> msdpserviceusercmd
For the usage of NetBackup CLI, see About the NetBackupCLI user role topic
of the Veritas NetBackup Appliance Security Guide.
4 Start NetBackup processes from the NetBackup Appliance shell menu.
Main_Menu > Support > Processes > NetBackup Start
5 Start the crond service from the NetBackup Appliance shell menu.
Main_Menu > Support > Service Restart crond
msdpserviceusercmd can take a long time depending on the MSDP storage data
size. If you think that the command may be interrupted (for example, if you turn off
the laptop), run the msdpserviceusercmd command in the background using the
Linux nohup command.
If msdpserviceusercmd is interrupted, MSDP service fails to start. In that case, run
the command again to restart the process to change the service user.
When you add an additional MSDP storage volume using the command crcontrol
--dsaddpartition [volume path], ensure that the MSDP service user has the
read and write permissions on the new storage volume path.
The services spad, spoold, ocsd, and s3srv are the MSDP services that run with
the service user. MSDP web service spws always runs with the spws user.
For example,
$ /usr/openv/netbackup/bin/nbcmdrun msdpcmdrun crstats
For more information about nbcmdrun, see Running NetBackup commands using
the nbcmdrun wrapper command topic of the NetBackup Security and Encryption
Guide.
nbcmdrun does not support passing environment variables or user input to the
MSDP commands. Alternatively, you can run msdpcmdrun directly, as follows:
sudo -E /usr/openv/pdde/pdcr/bin/msdpcmdrun <msdp commands>
This requires a sudoers configuration that allows only the msdpcmdrun command.
The administrator creates and edits the /etc/sudoers.d/custom file to configure
it.
For example, the following configuration gives the 'test' user permission
to run msdpcmdrun with root user privileges:
test ALL=NOPASSWD:SETENV: /usr/openv/pdde/pdcr/bin/msdpcmdrun
■ Run the following command to list immutable cloud volumes and configurations.
$ export MSDPC_ACCESS_KEY=AccessKeyID
$ export MSDPC_SECRET_KEY=SecretAccessKey
$ export MSDPC_REGION=us-east-1
$ export MSDPC_PROVIDER=amazon
$ sudo -E /usr/openv/pdde/pdcr/bin/msdpcmdrun msdpcldutil list
Run msdpcmdrun -l command to list the MSDP commands that are supported by
msdpcmdrun.
When an MSDP command runs as the service user, if an option requires a file path,
the file path should be accessible to the service user. For example, msdpcldutil list
In NetBackup 10.5, it can run on any NetBackup media server on the Linux Red
Hat platform. We recommend that you run it on a Flex HA appliance or along
with the NetBackup primary server.
■ MVG volume
An MVG volume is a virtual volume. It is an abstraction of a group of regular
volumes from individual MSDP servers. One MVG server can manage multiple
MVG volumes.
It can be used by NetBackup in the same way as a regular volume in a disk pool
and storage unit configurations.
Task Description
Ensure that the MSDP volume group See “MSDP volume group requirements”
requirements are met. on page 242.
Configure the MVG server. See “Configuring an MVG server using the
web UI” on page 243.
Configure the MVG volume. See “Creating an MVG volume using the web
UI” on page 244.
Configure an MVG server using the See “Configuring an MVG server using the
command-line command-line” on page 245.
Create an MVG volume using the See “Creating an MVG volume using the
command-line command-line” on page 247.
Update an MVG volume using the See “Updating an MVG volume using the
command-line command-line” on page 248.
Configure the targeted AIR with MVG volume. See “Configuring the targeted AIR with an
MVG volume ” on page 248.
Update an MVG volume. See “Updating an MVG volume using the web
UI” on page 249.
List the MVG volumes. See “Listing the MVG volumes” on page 249.
Configure the MSDP server to be used by See “Configuring the MSDP server to be used
MVG server if they have different credentials by an MVG server having different
credentials” on page 250.
Migrate a backup policy to use the MSDP See “Migrate a backup policy to use the
volume group. MSDP volume group” on page 252.
Migrate a backup policy from MVG to a See “Migrate a backup policy from an MVG
regular MSDP disk volume. volume to a regular MSDP disk volume”
on page 252.
Assign a client policy combination to another See “Assigning a client policy combination to
MSDP server. another MSDP server” on page 253.
Remove the MVG server configuration. See “Removing an MVG server configuration”
on page 253.
5 In the Storage server options, enter all required information and select Enable
MSDP volume group (MVG) service.
This option configures an MSDP server as an MSDP volume group (MVG)
server, which lets you group the volumes from other MSDP servers to create
MVG volumes. After you enable it, the MVG server can only host MVG volumes
and cannot host its own local or cloud volumes.
If you use Key Management Service (KMS), it must be configured before you
can select the KMS option.
Click Next.
6 (Optional) In Media servers, click Add to add any additional media servers
that you want to use.
Click Next.
7 On the Review page, confirm that all options are correct and click Save.
3 Get the MSDP configuration file template. The returned template should have
a configuration item mvgenabled 0.
nbdevconfig -getconfig -storage_server <media-server-fqdn> -stype
PureDisk -configlist ./cfg.msdp.template
# cat ./cfg.msdp
V7.5 "storagepath" "/sample-msdp-path" string
V7.5 "spalogin" "sample-username" string
V7.5 "spapasswd" "sample-password" string
V7.5 "spalogretention" "7" int
V7.5 "verboselevel" "3" int
V7.5 "dbpath" "/sample-msdp-path" string
V7.5 "required_interface" "" string
V7.5 "encryption" "1" string
V7.5 "mvgenabled" "1" string
It is recommended to specify the same path for MSDP storage and catalog
with storagepath and dbpath.
5 Initialize the MSDP server.
nbdevconfig -setconfig -storage_server <media-server-fqdn> -stype
PureDisk -configlist ./cfg.msdp
On success, mvg-controller and mvg-mds along with the other MSDP services
are initialized and started.
For example,
# cat sample-mvg-local.cfg
V7.5 "operation" "add-virtual-volume" string
V7.5 "virtualVolume" "sample-mvg-local" string
V7.5 "diskVolume" "sample-msdp-server1:PureDiskVolume:Y" string
V7.5 "diskVolume" "sample-msdp-server2:PureDiskVolume:Y" string
V7.5 "diskVolume" "sample-msdp-server3:PureDiskVolume:Y" string
V7.5 "diskVolume" "sample-msdp-server4:PureDiskVolume:Y" string
# cat sample-mvg-cloud.cfg
V7.5 "operation" "add-virtual-volume" string
V7.5 "virtualVolume" "sample-mvg-cloud" string
V7.5 "diskVolume" "sample-msdp-server1:cloud-volume1:Y" string
V7.5 "diskVolume" "sample-msdp-server2:cloud-volume2:Y" string
V7.5 "diskVolume" "sample-msdp-server3:cloud-volume3:Y" string
V7.5 "diskVolume" "sample-msdp-server4:cloud-volume4:Y" string
For example,
# cat sample-mvg-local.cfg
V7.5 "operation" "update-virtual-volume" string
V7.5 "virtualVolume" "sample-mvg-local" string
V7.5 "diskVolume" "sample-msdp-server1:PureDiskVolume:Y" string
V7.5 "diskVolume" "sample-msdp-server2:PureDiskVolume:Y" string
V7.5 "diskVolume" "sample-msdp-server3:PureDiskVolume:Y" string
V7.5 "diskVolume" "sample-msdp-server4:PureDiskVolume:Y" string
V7.5 "diskVolume" "sample-msdp-server-new1:PureDiskVolume:Y" string
V7.5 "diskVolume" "sample-msdp-server-new2:PureDiskVolume:Y" string
Configuring replication targets with MSDP volume group is the same as regular
MSDP.
See “About Auto Image Replication (A.I.R.)” on page 27.
Note: If a disk volume fails and cannot be recovered, you can change it to read-only
mode in the MVG volume. Alternatively, you can re-create the MVG volume by
removing the failed volume from the list of disk volumes.
2 Create the configuration file for the MVG volume deletion in the following format.
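A sketch of such a file follows. The "delete-virtual-volume" operation keyword is an assumption, inferred by analogy with the "add-virtual-volume" and "update-virtual-volume" keywords shown earlier; verify the exact keyword for your release before use.

```
# cat sample-mvg-delete.cfg
V7.5 "operation" "delete-virtual-volume" string
V7.5 "virtualVolume" "sample-mvg-local" string
```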
2 After you add an alias user, ensure that the alias user has the same data
selection ID as the admin user. Run the following command:
spauser -l
For example,
root@host ~ # spauser -l
user 1 : msdp, data selection: 2, role: admin
root@host ~ # spauser -a -u mvg-user --owner msdp --alias
Password:
Reenter Password:
Please input password of super user [msdp]:
One MSDP user can only be used by one NBU domain! Do not use one
same user in two or more NBU domains!
root@host ~ # spauser -l
user 1 : msdp, data selection: 2, role: admin
user 2 : mvg-user, data selection: 2, role: admin
Migrate the backup policies gradually over time to reduce any potential impact.
You can also find the information in the job details of the recent backup jobs.
Migrate the backup policies gradually over time to reduce any potential impact.
MSDP volume group (MVG) 253
Configuring the MSDP volume group
Note: Do not delete the MVG volumes if you want to preserve the MVG data
or want to reconfigure the MVG server by reusing the existing data.
Note: Do not perform this step if you want to preserve the MVG data or
want to reconfigure the MVG server by reusing the existing data.
Table 7-2
Step 1: Reassemble the MSDP volume group configuration. Find out which MVG volumes the MVG server had and which physical volumes each MVG volume used.
Step 2: Reconfigure an MVG server. Configure the MSDP server with the MVG option selected, as for any MSDP server. Using the previous host name is recommended but not required.
Step 3: Add back the MVG volumes one by one. If NetBackup still has the disk pool configured with the MVG volumes, run the NetBackup command nbdevconfig -setconfig with the add-virtual-volume option to add back the MVG volumes one by one, using the information collected in Step 1.
Step 4: (Optional) Adjust the client and policy assignment table. The client and policy assignment table is rebuilt in Step 3. If a client and policy combination was served by multiple MSDP servers before the disaster, the information can be rebuilt, but the primary server may be different from the one before the disaster. The MSDP server that has more data for the client and policy is usually set as the primary one.
Example:
PATH=$PATH:/usr/openv/pdde/pdcr/bin
cacontrol --mvg set-maintenance-mode vramvg037340.rsv.ven.veritas.com
Updated virtual volumes: mvg-local
cacontrol --mvg get-maintenance-mode
On the NetBackup web UI, go to the Disk pool page of the MVG volume, select
and edit a disk volume of the MVG volume, then switch the mode between
"Maintenance" and "Normal".
Alternatively, you can deactivate the corresponding backup policies temporarily
during the node maintenance.
Alternatively, if the maintenance cycle is short, you can rely on or tune the MVG
rebalance freezing time on the MVG server, so that the MVG server does not move
the client policy assignments out if the node becomes stable again within the MVG
rebalance freezing time. By default, the value is 0.5 hour. It can be tuned through
the MVG Controller API:
curl -K /uss/config/certificates/controller/.curl_config
https://localhost:10100/v1/agent/mvg/loadbalance/config -d
'{"asmt_rb_freezing_timeout": 3600 }' -X PATCH
curl -K /uss/config/certificates/controller/.curl_config
https://localhost:10100/v1/agent/mvg/loadbalance/config
Note: "MSDPLB+" policy means a backup policy of which the name starts
with "MSDPLB+".
Table 7-3
Planned node failure: During routine node maintenance, we recommend that you reduce the impact on the deduplication ratio and the risk of job failures. Set the node to maintenance mode on the MVG server. The client policy assignments on the node are not automatically changed while the node is in maintenance mode.
Unplanned node failure: The MVG server has a rebalance freezing time-out. A client policy is not moved to another node until the current node has stayed unreachable for longer than the rebalance freezing time when the next backup starts. The value is configurable in the MVG tuning API with the keyword asmt_rb_freezing_timeout. By default, it is 0.5 hour.
Node is back from the failure: If a node stays down for long, the client policy assignments are moved out when NetBackup runs backup jobs with the backup policies.
Node is full: If a disk volume is full, a client policy is moved to another node. It is not moved back unless the new node is inactive or full and the original node has space available again.
■ Add the media servers of the regular MSDP servers to the load-balancing media
server list of the MVG server. Consider adding more media servers to the list if
more are available.
■ Put the MVG server in a reliable place, preferably with physical isolation from
the regular MSDP servers.
For example, the MVG server can be configured on the NetBackup primary
server if it is the only MVG server, or in an HA-enabled Flex appliance or BYO
environment.
■ The physical volumes in one MVG volume should have a similar configuration.
They should have a similar volume size, similar server CPU and memory, and
similar disk and network performance. This makes the load more balanced
across the physical volumes and MSDP servers.
They are enforced to have the same encryption, KMS, WORM, and other security
configuration. The MVG volume creation fails if the settings do not match.
■ Add a new MVG server only when needed, to reduce cost without compromising
performance.
Add a new MVG server only when the existing ones cannot meet the
requirements. For example: physical isolation between the MVG servers and
the MVG volumes is needed, the existing MVG server is too small to manage
many MVG volumes, or the MVG server has too many MVG volumes.
■ Do not use multiple MVG servers to manage the physical volumes of the same
MSDP servers.
Do not assign a physical volume to multiple MVG volumes, regardless of whether
the MVG volumes are managed by the same or different MVG servers.
If an MSDP server has multiple volumes, they can be assigned to different MVG
volumes of different MVG servers. However, it is not recommended.
■ Once a physical volume is assigned to an MVG volume, using it directly for
NetBackup jobs is not recommended.
Nothing is blocked if the physical volume continues to be used. You can still
use it to restore pre-existing backup images, and a currently working
configuration with the physical volume can continue working. However, for a
more balanced load, remove the physical volume from the backup policy
configuration when it is assigned to an MVG volume.
■ Deactivate the media server on an MVG server if the server is small.
If the MVG server is small, deactivate the media server on the MVG server so
that NetBackup does not schedule jobs on it, provided there are additional
load-balancing media servers for the MVG server. Do not deactivate it if it is
the only load-balancing media server of the MVG server.
In the NetBackup web UI, go to Storage > Media servers, select the MVG
server, and then click Deactivate.
Task Command
List the data selections of MSDP server. Run the following command on the MSDP or the MVG server.
Get the MSDP disk volume configuration. Run the following command on the MSDP or the MVG server.
cacontrol --dataselection getlsuconfig
Refresh the MSDP disk volume of an existing NetBackup disk pool. Run the following command on the MSDP server.
cacontrol --dataselection refresh-disk-volume <msdp_server> <volume-name>
Add the MVG volume association state on an MSDP disk volume. Run the following command on the MSDP server.
cacontrol --dataselection assigntovvol <dsid-of-disk-volume> <disk-volume> <msdp-server> <mvg-server> <mvg-vol>
Remove the MVG volume association state on an MSDP disk volume. Run the following command on the MSDP server.
cacontrol --dataselection removefromvvol <dsid-of-disk-volume> <disk-volume> <msdp-server> <mvg-server> <mvg-vol>
Get which MSDP server the client and the policy combinations are assigned to. Run the following command on the MVG server.
cacontrol --cluster get-cp-assignment <dsid-of-mvg-volume> [<client> [<policy>]]
Assign a client and policy combination to an MSDP server. Run the following command on the MVG server.
cacontrol --cluster set-cp-assignment <dsid-of-mvg-volume> <client> <policy> <msdp-server>
Find the MSDP catalog. Run the following command on the MSDP server. For example:
/usr/openv/pdde/pdcr/bin/cacontrol --catalog find 2 / "*" --listtype VV_ONLY
Task Command
List the MVG volumes on MVG server. Run the following command on the MVG server.
Set the MSDP server in maintenance mode. Run the following command on the MVG server.
cacontrol --mvg get-maintenance-mode | set-maintenance-mode <msdp_server> | unset-maintenance-mode <msdp_server>
Remotely find the MSDP catalog of the regular MSDP servers on MVG server. Run the following command on the MVG server.
cacontrol --mvg catalog-find <dsid-of-mvg-volume> <dirname> <basename> [ --listtype VV_ONLY|DV_ONLY|ALL ] [ --hostname <msdp_server> ]
Validate the MSDP server credentials. Run the following command on the MSDP or the MVG server.
/usr/openv/pdde/pdcr/bin/spauser -v --stdin
Error message: Failed to validate the name: <detailed info>.
Description: A specified volume name does not meet the MSDP requirements. See “NetBackup naming conventions” on page 35.
Error message: Listing the data selections failed. OR Checking virtual volume awareness failed.
Description: The error occurs if the MVG server cannot use its default credentials to communicate with some MSDP server of the MSDP disk volumes of the MVG volume.
Error message: Conflict in the configuration on <description>. Note: <description> can be “encryption”, “KMS”, “WORM”, “need Warming (for cold cloud storage)”, and so on.
Description: The disk volumes for the MVG volume do not have compatible settings for WORM, encryption, KMS, or storage class types if they are cloud volumes.
Error message: Including both cloud LSU and local LSU in a virtual volume is not expected.
Description: You tried to add a cloud volume to an MVG volume with PureDiskVolumes, or to add a PureDiskVolume to an MVG volume with cloud volumes.
Error message: Member number (<#>) of the MVG volume exceeds the maximum allowed number (<#>).
Description: This error occurs when you try to assign too many disk volumes to an MVG volume. By default, an MVG volume can have up to 8 disk volumes. The number is configurable, but changing it is not recommended.
Error message: Maximum allowed MVG volume number (<#>) is reached.
Description: This error occurs when you try to create too many MVG volumes on the MVG server.
Error message: It's not allowed to reduce the member number from <#> to <#> for virtual volume <volume-name>.
Description: This error occurs when you try to remove some disk volumes from an MVG volume.
Error message: It's not allowed to change disk volume <msdp-server1>:<volume-name1> to <msdp-server2>:<volume-name2> for virtual volume %s.
Description: This error occurs when you try to insert a disk volume or change the order of the disk volume list of an MVG volume.
Error message: Cannot find dsid for member <msdp-server>:<volume-name>.
Description: A bad disk volume name is specified.
Error message: Changing virtual volume assignment failed: No disk pool was found for the disk volume.
Description: The disk volume that is being assigned to an MVG volume does not have a corresponding disk pool configured in NetBackup.
■ Create a Media Server Deduplication Pool storage server in the NetBackup web
UI
■ Configuring Veritas Alta Recovery Vault Azure and Azure Government using
the CLI
Limitations
■ Instant access for cloud LSU of AWS Glacier, AWS Deep Archive, and Microsoft
Azure Archive is not supported.
■ Universal share for cloud LSU of AWS Glacier, AWS Deep Archive, and Microsoft
Azure Archive is not supported.
■ Accelerator for cloud LSU of AWS Glacier, AWS Deep Archive, and Microsoft
Azure Archive is not supported.
■ Cloud DR for cloud LSU of AWS Glacier, AWS Deep Archive, and Microsoft
Azure Archive is not supported if the storage server name changes.
■ The cloud LSU for AWS Glacier, AWS Deep Archive, and Microsoft Azure
Archive cannot be used as either a source or a target of AIR of any type, targeted
or classic.
■ The Cloud LSU for AWS Glacier, AWS Deep Archive, and Microsoft Azure
Archive can be used as targets of optimized duplication but they cannot be used
as sources of it.
■ Synthetic backup for cloud LSU of AWS Glacier, AWS Deep Archive, and
Microsoft Azure Archive is not supported.
■ Image verification for backups residing on a cloud LSU for AWS Glacier, AWS
Deep Archive, and Microsoft Azure Archive is not supported.
■ SAP HANA for cloud LSU of Microsoft Azure Archive is not supported.
■ Multi-threaded Agent must be disabled when a Client-Direct backup is in use
by NetBackup clients that have a NetBackup version earlier than 8.3.
■ If you select a load-balancing media server that has a NetBackup version earlier
than 8.3, the cloud LSUs are not listed. Even if you select cloud LSUs with a
media server that has a NetBackup version earlier than 8.3, the backups can
fail.
■ Image sharing is not supported on AWS Glacier and AWS Deep Archive.
■ Malware scan is not supported on AWS Glacier and AWS Deep Archive.
■ Instant access, universal share, and malware scan features are not supported
on SUSE Linux Enterprise.
Additional notes
Review the following additional information:
■ Currently, AWS S3 and Azure storage API types are supported.
For more information about the storage API types that NetBackup supports,
refer to the topic About the cloud storage vendors for NetBackup in the
NetBackup Cloud Administrator’s Guide.
■ When you enable Server-Side Encryption, you can configure AWS
Customer-Managed keys. These keys cannot be deleted once they are in use
by NetBackup. Each object is encrypted with the key during upload and deleting
the key from AWS causes NetBackup restore failures.
■ For more information on environments and deployment of Veritas Alta Recovery
Vault for NetBackup, refer to the following article:
https://www.veritas.com/docs/100051821
Before you enable the Veritas Alta Recovery Vault Azure and Azure Government
options, review the steps from the Configuring Veritas Alta Recovery Vault Azure
and Azure Government section in the NetBackup Deduplication Guide.
Veritas Alta Recovery Vault supports multiple options. For Veritas Alta Recovery
Vault Azure and Azure Government options in the web UI, you must contact
your Veritas NetBackup account manager for credentials or with any questions.
See “About Veritas Alta Recovery Vault Azure and Amazon” on page 329.
5 Click Next.
6 Add a role that you want to have access to the credential.
■ Click Add.
■ Select the role.
■ Select the credential permissions that you want the role to have.
The following steps describe the method to create a cloud storage unit using
the command line:
1 Create an MSDP storage server.
See “Configuring MSDP server-side deduplication” on page 75.
2 Create a cloud instance alias.
For example:
Example 1: Creating an Amazon S3 cloud instance alias
# /usr/openv/netbackup/bin/admincmd/csconfig cldinstance -as -in
amazon.com -sts <storage server> -lsu_name <lsu name>
The cloud alias name is <storage server>_<lsu name>, and is used to create
a bucket.
3 (Optional) Create a new bucket.
For example:
# /usr/openv/netbackup/bin/nbcldutil -createbucket -storage_server
<storage server>_<lsu name> -username <cloud user> -bucket_name
<bucket name>
V7.5 "operation" "add-lsu-cloud" string Specifies the value “add-lsu-cloud” for adding a new
cloud LSU.
V7.5 "lsuCloudBucketName" " " string Specifies the cloud bucket name.
V7.5 "lsuCloudBucketSubName" " " string Multiple cloud LSUs can use the same cloud bucket, this
value distinguishes different cloud LSUs.
Note: All encrypted LSUs in one storage server must use the same
keygroupname and kmsservername. If you use the nbdevconfig command to
add a new encrypted cloud logical storage unit (LSU) and an encrypted LSU
exists in this MSDP, the keygroupname must be the same as the keygroupname
in the previous encrypted LSU.
See “About MSDP Encryption using NetBackup Key Management Server
service” on page 101.
Create a configuration file and then run the following nbdevconfig command:
# /usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig
-storage_server <storage server> -stype PureDisk -configlist
<configuration file path>
Note: The parameter <storage server> must be the same as the parameter
<storage server> in Step 2.
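As an illustrative sketch only, a minimal configuration file could combine the items documented above. The "lsuName" item and the sample values are assumptions for illustration; your release may require additional cloud credential and provider items that are not shown here.

```
# cat ./add-lsu-cloud.cfg
V7.5 "operation" "add-lsu-cloud" string
V7.5 "lsuName" "s3amazon" string
V7.5 "lsuCloudBucketName" "cloud-bucket1" string
V7.5 "lsuCloudBucketSubName" "sub1" string
```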
Note: You can also create the disk pool from the NetBackup web UI.
Note: You can also create the storage unit from the NetBackup web UI.
V7.5 "operation" "update-lsu-cloud" string Use the value “update-lsu-cloud” to update some
cloud LSU parameters.
For example:
# /usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig
-storage_server <storage server> -stype PureDisk -configlist
<configuration file path>
Note: Use the original storage account to update the Azure Recovery Vault or
Veritas Alta Recovery Vault CMS credentials.
V7.5 "operation" "update-lsu-cloud" string You can only update the KMS status from disabled to
enabled.
V7.5 "lsuKmsEnable" "YES" string Specifies the KMS status for the cloud LSU.
Key group name must have valid characters: A-Z, a-z, 0-9,
_ (underscore), - (hyphen), : (colon), . (period), and space.
Example to enable the KMS status from disabled to enabled for cloud LSU
“s3amazon”:
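A sketch of such a configuration file follows. The "lsuName" and "lsuKmsKeyGroupName" item names are assumptions for illustration, not confirmed names; only the "operation" and "lsuKmsEnable" items are documented above.

```
# cat ./enable-kms.cfg
V7.5 "operation" "update-lsu-cloud" string
V7.5 "lsuName" "s3amazon" string
V7.5 "lsuKmsEnable" "YES" string
V7.5 "lsuKmsKeyGroupName" "amazon_key_group" string
```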
Note: All encrypted LSUs in one storage server must use the same keygroupname
and kmsservername. If you use the nbdevconfig command to add a new encrypted
cloud Logical storage unit (LSU) and an encrypted LSU exists in this MSDP, the
keygroupname must be the same as the keygroupname in the previous encrypted
LSU.
For more information, See “About MSDP Encryption using NetBackup Key
Management Server service” on page 101.
/usr/openv/pdde/pdcr/bin/pddecfg -a listcloudlsu
dsid, lsuname, storageId, CachePath
3, S3Volume, server1_S3Volume/cloud-bucket1/sub1, /msdp/data/ds_3
4, S3Volume2, server1_S3Volume2/cloud-bucket1/sub2, /msdp/data/ds_4
V7.5 "operation" "delete-lsu-cloud" string The value “delete-lsu-cloud” for deleting the MSDP
cloud LSU configurations in spad.
For example:
# /usr/openv/pdde/pdconfigure/pdde stop
9 Remove the cache and other back-end folders by using the following commands
(Optional):
# rm -r <CachePath>
# rm -r <msdp_storage_path>/spool/ds_<dsid>
# rm -r <msdp_storage_path>/queue/ds_<dsid>
# rm -r <msdp_storage_path>/processed/ds_<dsid>
# rm -r <msdp_storage_path>/databases/refdb/ds_<dsid>
# rm -r <msdp_storage_path>/databases/datacheck/ds_<dsid>
# /usr/openv/netbackup/bin/nbsvcmon
V7.5 "operation" " " string The value must be “set-replication” for adding a new
replication target.
V7.5 " rephostname" " " string Specifies the replication target's host name.
V7.5 "replogin" " " string Specifies the replication target storage server's user name.
V7.5 "reppasswd" " " string Specifies the replication target storage server's password.
V7.5 "repsourcevolume" " " string Specifies the replication source volume name.
V7.5 "reptargetvolume" " " string Specifies the replication target volume name.
Example:
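Based on the items listed above, a set-replication configuration file might look like the following sketch; the host, user, and volume names are placeholders, and the file is applied with the same nbdevconfig -setconfig pattern used elsewhere in this chapter.

```
# cat ./set-replication.cfg
V7.5 "operation" "set-replication" string
V7.5 "rephostname" "target-server.example.com" string
V7.5 "replogin" "targetuser" string
V7.5 "reppasswd" "targetpassword" string
V7.5 "repsourcevolume" "PureDiskVolume" string
V7.5 "reptargetvolume" "s3cloud1" string
```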
See “About the storage lifecycle policies required for Auto Image Replication”
on page 170.
See “Creating a storage lifecycle policy” on page 171.
V7.5 "operation" " " string The value must be “delete-replication” for deleting a new
replication target.
V7.5 " rephostname" " " string Specifies the replication target's host name.
V7.5 "repsourcevolume" " " string Specifies the replication source volume name.
V7.5 "reptargetvolume" " " string Specifies the replication target volume name.
For example:
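A sketch of a delete-replication configuration file, using the items listed above with placeholder host and volume names:

```
# cat ./delete-replication.cfg
V7.5 "operation" "delete-replication" string
V7.5 "rephostname" "target-server.example.com" string
V7.5 "repsourcevolume" "PureDiskVolume" string
V7.5 "reptargetvolume" "s3cloud1" string
```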
For example, suppose there is a target storage server whose user name is userA,
and the target storage server has a cloud LSU s3cloud1. To replicate an image
from an old storage server to the cloud LSU of the target server, you can use the
following user name while adding the A.I.R. target:
userA?LSU=s3cloud1
You must also create an import SLP for the local volume of the target storage server
in the target primary server. Then select the imported SLP when you create the
target A.I.R. SLP on the source side. When the A.I.R. runs, the import job on the
target side shows the policy name as SLP_No_Target_SLP in the Activity monitor,
but the data is sent to cloud.
If the NetBackup client version is 8.2 or earlier, the client direct backup from the
old client to cloud LSU of one storage server might fail. During the backup if mtstrmd
is used on the client side, the job fails with a media write error. To disable mtstrmd
at the client side, open the configuration file pd.conf on the client and change the
following:
MTSTRM_BACKUP_ENABLED = 1 to MTSTRM_BACKUP_ENABLED = 0.
■ Windows
install_path\Veritas\NetBackup\bin\ost-plugins
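For reference, the resulting line in pd.conf after the change described above:

```
MTSTRM_BACKUP_ENABLED = 0
```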
When a client direct backup runs with a cloud LSU and an old client, the client does
only client-side deduplication.
To use cloud LSU, the load balance server of the storage server must not be an
earlier version (NetBackup 8.2 or earlier). If there are new and old load balancers,
a new load balance server is selected automatically to make sure that the job can
be done successfully. When you restore a backup image on cloud LSU and you
select the media server explicitly, the media server that you select must not be an
earlier version of NetBackup.
UseMemForUpload (default: true)
  If set to true, the upload cache directory is mounted in memory as tmpfs. It is especially useful for high-speed cloud storage where disk speed is the bottleneck. It can also reduce disk contention with the local LSU. The value is set to true if the system memory is sufficient.
DownloadDataCacheGB (default: 500)
  The maximum space usage of data files, mainly SO BIN files. The larger this cache, the more data files can reside in the cache, so these files do not need to be downloaded from the cloud during a restore.
  Note: The initial value of DownloadDataCacheGB in the cloud.json file is the value of CloudDataCacheSize in the contentrouter.cfg file.
DownloadMetaCacheGB (default: 500)
  The maximum space usage of metadata files, mainly DO files and SO BHD files. The larger this cache, the more metadata files can reside in the cache, so these files do not need to be downloaded from the cloud during a restore.
  Note: The initial value of DownloadMetaCacheGB in the cloud.json file is the value of CloudMetaCacheSize in the contentrouter.cfg file.
MapCacheGB (default: 5)
  The maximum space usage of map files, which are used for compatibility with MD5-type fingerprints. The larger this cache, the more map files can reside in the cache.
  Note: The initial value of MapCacheGB in the cloud.json file is the value of CloudMapCacheSize in the contentrouter.cfg file. When you add a new cloud LSU, the value of MapCacheGB is equal to CloudMapCacheSize. You can later change this value in the cloud.json file.
KeepData (default: false)
  Keep uploaded data in the data cache. The value is always false if UseMemForUpload is true.
KeepMeta (default: false)
  Keep uploaded metadata in the metadata cache. The value is always false if UseMemForUpload is true.
ReadOnly (default: false)
  The LSU is read-only; you cannot write to or delete from this LSU.
WriteThreadNum (default: 2)
  The number of threads for writing data to the data container in parallel, which can improve I/O performance.
RebaseThresholdMB (default: 4)
  Rebasing threshold (MB). When the image data in a container is less than this threshold, none of the image data in the container is used for deduplication, to achieve good locality. Allowed values: 0 to half of MaxFileSizeMB; 0 = disabled.
AgingCheckContainerIntervalDay (default: 180)
  The interval of checking a container for this cloud LSU (in days).
  Note: For an upgraded system, you must add this item manually if you want to change the value for a cloud LSU.
CloudDataCacheSize (default: 500 GiB)
  Default data cache size when adding a cloud LSU.
CloudMapCacheSize (default: 5 GiB)
  Default map cache size when adding a cloud LSU.
CloudMetaCacheSize (default: 500 GiB)
  Default meta cache size when adding a cloud LSU.
CloudUploadCacheSize (default: 12 GiB)
  Default upload cache size when adding a cloud LSU.
CloudBits (default: auto-sized according to MaxCloudCacheSize)
  The number of top-level entries in the cloud cache. This number is (2^CloudBits). Increasing this value improves cache performance at the expense of extra memory usage. Minimum value = 16, maximum value = 48.
DCSCANDownloadTmpPath (default: disabled)
  While using dcscan to check a cloud LSU, data is downloaded to this folder. For details, see the dcscan tool in the cloud support section.
Note: MaxCacheSize + MaxPredictiveCacheSize + MaxSamplingCacheSize + cloud in-memory upload cache size must be less than or equal to the value of UsableMemoryLimit.
ClusterHookMinHistoryAgeInSecond (default: 604800)
  The minimum age in seconds for the history data to be valid. Data newer than the minimum age is not used.
ClusterHookMaxHistoryAgeInSecond (default: 2592000)
  The maximum age in seconds for the valid history data. Data older than the maximum age is removed.
Adding a new cloud LSU fails if no partition has free space more than the following:
CloudDataCacheSize + CloudMapCacheSize + CloudMetaCacheSize +
CloudUploadCacheSize + WarningSpaceThreshold * partition size
Use the crcontrol --dsstat 2 command to check the space of each partition.
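As a rough sizing sketch of the formula above, using the default cache sizes listed in this chapter; the 3% WarningSpaceThreshold and the 4096 GiB partition size are illustrative assumptions, not product defaults.

```shell
#!/bin/sh
# Minimum free space (GiB) a partition needs before a new cloud LSU
# can be added: the four default cache sizes plus the reserved
# WarningSpaceThreshold fraction of the partition size.
PARTITION_GB=4096
THRESHOLD_PCT=3
CACHE_GB=$((500 + 5 + 500 + 12))   # data + map + meta + upload cache defaults
RESERVED_GB=$((PARTITION_GB * THRESHOLD_PCT / 100))
REQUIRED_GB=$((CACHE_GB + RESERVED_GB))
echo "Required free space: ${REQUIRED_GB} GiB"
```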
Note: Each cloud LSU has a cache directory. The directory is created under an
MSDP volume that is selected according to the disk space usage of all the MSDP
volumes. The cloud LSU reserves some disk space for the cache from that volume,
and the local LSU cannot use that disk space.
The initial reserved disk space for each cloud LSU is the sum of the values of
UploadCacheGB, DownloadDataCacheGB, DownloadMetaCacheGB, and MapCacheGB
in the <STORAGE>/etc/puredisk/cloud.json file. The reserved disk space decreases
as the caches are used.
There is a Cache field in the crcontrol --dsstat 2 output:
# crcontrol --dsstat 2
Path = /msdp/data/dp1/1pdvol
Data storage
The Cache value is the disk space currently reserved by the cloud for this volume.
The disk space is the sum of the reserved space for all cloud LSUs that have cache
directories on this volume. The space actually available for the local LSU on this
volume is Avail - Cache.
EnablePOIDListCache (default: true)
  The status of the POID (Path Object ID) list cache as enabled or disabled. A path object contains the metadata associated with that image.
AgingCheckSleepSeconds (default: 20)
  The aging check thread wakes up periodically at this time interval (in seconds).
AgingCheckBatchNum (default: 400)
  The number of containers for the aging check each time.
AgingCheckSizeLowBound (default: 8 MiB)
  Containers whose size is less than this value are filtered out of the aging check.
AgingCheckLowThreshold (default: 10%)
  Containers whose garbage percentage is less than this value are filtered out of the aging check.
After you update the aging check related parameters, you must restart the MSDP
service. Alternatively, you can use the crcontrol command line to update those
parameters without restarting the MSDP service.
8 Change cloud aging check to fast mode for a specified cloud LSU.
/usr/openv/pdde/pdcr/bin/crcontrol --cloudagingfastcheck <dsid>
CompactBatchNum (default: 400)
  The number of containers for cloud compaction each time.
CompactLboundMB (default: 16)
  Containers whose garbage size is less than this value are filtered out of cloud compaction.
CompactSizeLboundMB (default: 32)
  Containers whose size is less than this value are filtered out of cloud compaction.
CompactMaxPo (default: 100)
  Containers that are referenced by more than this number of path objects are filtered out of cloud compaction.
The dcscan tool downloads data containers from the cloud. The default download
path is <STORAGE>/tmp/DSID_#dsid, where #dsid depends on the cloud LSU DSID
value. Different cloud storage providers have different DSID values. You do not
need to know the DSID value; dcscan obtains it automatically. The default download
path can be modified in the contentrouter.cfg file using the
DCSCANDownloadTmpPath field.
When you use the dcscan tool to examine cloud data, the -a option is disabled
because downloading all data containers from the cloud is an expensive operation.
The -fixdo option is disabled as well, because dcscan only downloads data
containers from the cloud. Other operations are the same as for a local LSU.
dcscan downloads data containers to its own cache. When compaction is enabled
for an LSU, remove any stale containers from the dcscan cache directory before
you run dcscan for that LSU.
SEEDUTIL:
seedutil can be used to seed a backup for a better deduplication rate. It creates
links in the <destination client name> directory to all the backup files found in
the path <client name>/<policy name> that have <backup ID> in their names.
You must know which DSID value the cloud LSU uses, and pass that value to
seedutil so that it knows which cloud LSU seeds the client. For a local LSU, the
default DSID is 2 and you do not need to provide the DSID value. seedutil cannot
seed across different DSIDs.
For example:
/usr/openv/pdde/pdag/bin/seedutil -seed -sclient <source_client_name>
-spolicy <source_policy_name> -dclient <destination_client_name>
-dsid <dsid_value>
CRCONTROL
Use the crcontrol --clouddsstat option to show cloud LSU data store usage.
You must provide the DSID value. Because cloud storage has unlimited space, the
size is hard-coded to 8 PB.
For example:
CRSTATS:
Use the crstats -cloud -dsid option to show the cloud LSU statistics. You must
provide the DSID value. Because cloud storage has unlimited space, the size is
hard-coded to 8 PB.
For example:
PDDECFG:
/usr/openv/pdde/pdcr/bin/pddecfg -a listcloudlsu
dsid, lsuname, storageId, CachePath
3, S3Volume, amazon_1/cloud-bucket1/sub1, /msdp/data/ds_3
4, S3Volume2, amazon_1/cloud-bucket1/sub2, /msdp/data/ds_4
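Given output in that format, a short awk filter can pull out the DSID for a particular LSU name, for example to feed seedutil or crcontrol. This is an illustrative sketch: lookup_dsid is a hypothetical helper name, and the here-document stands in for the live pddecfg output you would pipe in practice.

```shell
# Print the dsid column for the LSU whose lsuname matches $1.
lookup_dsid() {
  awk -F', *' -v lsu="$1" '$2 == lsu { print $1 }'
}

# Sample listcloudlsu output (as shown above) stands in for:
#   /usr/openv/pdde/pdcr/bin/pddecfg -a listcloudlsu | lookup_dsid S3Volume2
lookup_dsid S3Volume2 <<'EOF'
dsid, lsuname, storageId, CachePath
3, S3Volume, amazon_1/cloud-bucket1/sub1, /msdp/data/ds_3
4, S3Volume2, amazon_1/cloud-bucket1/sub2, /msdp/data/ds_4
EOF
# prints: 4
```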
■ The primary server has no catalog of the images in MSDP storage; for example,
when the primary server is reinstalled and the catalog on the primary server is
lost. The catalog is needed, so you must import the backup images. See the
section “About importing backup images” in the NetBackup Administrator's
Guide, Volume I for more information.
■ The primary server has an incorrect catalog record about the MSDP storage.
The catalog on the primary server is no longer correct because the storage
server is moved to a new media server after the disaster recovery. To correct
the catalog on the primary server, run the bpimage command. The new media
server can be a newly added media server or another existing media server.
■ When the primary server has a catalog of the images in MSDP storage and the
same media server is used for disaster recovery, you do not need to import the
backup images.
■ Importing backup images is not supported when the cloud LSU is based on
Amazon S3 Glacier, Deep Archive, or Microsoft Azure Archive.
■ A cloud LSU on Amazon S3 Glacier, Deep Archive, or Microsoft Azure Archive
supports cloud disaster recovery only in Scenario 1 and Scenario 3.
You can do the disaster recovery for cloud LSU with the following three steps:
1. Set up the MSDP storage server with local storage.
2. Add a cloud LSU to reuse existing cloud data.
3. Import the backup images if the catalog is not available on the primary
server.
1 Delete old storage server-related configurations. Perform the following steps from the
Recovering from an MSDP storage server failure topic. See “Recovering from an MSDP
storage server failure” on page 539.
2 Configure the new storage server. Run the following command on the primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-creatests -storage_server <storage server> -stype
PureDisk -media_server <media server> -st 9
Note: You can also create the storage server from the NetBackup
web UI.
5 Create disk pool for cloud LSUs. Run the following commands on the primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-previewdv -storage_servers <storage server name>
-stype PureDisk | grep <LSU name> > /tmp/dvlist
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-createdp -dp <disk pool name> -stype PureDisk
-dvlist /tmp/dvlist -storage_servers <storage server
name>
Note: You can also create the disk pool from the NetBackup web UI.
1 Ensure that the old MSDP server is stopped if recovering from an old MSDP server
to a new MSDP server. Run the following command on the old MSDP server:
/usr/openv/netbackup/bin/bp.kill_all
2 Configure the new storage server. Run the following command on the primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-creatests -storage_server <storage server> -stype
PureDisk -media_server <media server> -st 9
Note: You can also create the storage server from the NetBackup
web UI.
4 Restart the storage server. Run the following commands on the primary server:
/usr/openv/netbackup/bin/bp.kill_all
/usr/openv/netbackup/bin/bp.start_all
5 Create disk pool for cloud LSUs. Run the following commands on the primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-previewdv -storage_servers <storage server name>
-stype PureDisk | grep <LSU name> > /tmp/dvlist
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-createdp -dp <disk pool name> -stype PureDisk
-dvlist /tmp/dvlist -storage_servers <storage server
name>
Note: You can also create the disk pool from the NetBackup web UI.
6 Update the catalog images. Run the following command on the primary server:
7 Delete old storage server-related configurations. Run the following command on the
old MSDP server:
/usr/openv/netbackup/bin/bp.start_all
1 Delete old storage server-related configurations. Perform the following steps from the
Recovering from an MSDP storage server failure topic. See “Recovering from an MSDP
storage server failure” on page 539.
2 Configure the new storage server. Run the following command on the primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-creatests -storage_server <storage server> -stype
PureDisk -media_server <media server> -st 9
Note: You can also create the storage server from the NetBackup
web UI.
4 Restart the storage server. Run the following commands on the new MSDP server:
/usr/openv/netbackup/bin/bp.kill_all
/usr/openv/netbackup/bin/bp.start_all
5 Create disk pool for cloud LSUs. Run the following commands on the primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-previewdv -storage_servers <storage server name>
-stype PureDisk | grep <LSU name> > /tmp/dvlist
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-createdp -dp <disk pool name> -stype PureDisk
-dvlist /tmp/dvlist -storage_servers <storage server
name>
Note: You can also create the disk pool from the NetBackup web UI.
1 Configure the new storage server. Run the following command on the primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-creatests -storage_server <storage server> -stype
PureDisk -media_server <media server> -st 9
Note: You can also create the storage server from the NetBackup
web UI.
4 Create disk pool for cloud LSUs. Run the following commands on the primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-previewdv -storage_servers <storage server name>
-stype PureDisk | grep <LSU name> > /tmp/dvlist
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-createdp -dp <disk pool name> -stype PureDisk
-dvlist /tmp/dvlist -storage_servers <storage server
name>
Note: You can also create the disk pool from the NetBackup web UI.
5 Update the catalog images. Run the following command on the primary server:
6 Delete old storage server-related configurations. Run the following command on the
old MSDP server:
/usr/openv/netbackup/bin/bp.start_all
1 Delete old storage server-related configurations. Perform the following steps from the
Recovering from an MSDP storage server failure topic. See “Recovering from an MSDP
storage server failure” on page 539.
2 Configure the new storage server. Run the following command on the primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-creatests -storage_server <storage server> -stype
PureDisk -media_server <media server> -st 9
Note: You can also create the storage server from the NetBackup
web UI.
5 Create disk pool for cloud LSUs. Run the following commands on the primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-previewdv -storage_servers <storage server name>
-stype PureDisk | grep <LSU name> > /tmp/dvlist
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-createdp -dp <disk pool name> -stype PureDisk
-dvlist /tmp/dvlist -storage_servers <storage server
name>
Note: You can also create the disk pool from the NetBackup web UI.
4 Generate the config template. Make changes to the template file local-lsu.txt and delete all other
entries that are not needed.
Parameters:
/root/local-lsu.txt
V7.5 "storagepath" "/Storage" string
V7.5 "spalogin" "my-user-name" string
V7.5 "spapasswd" "my-password" string
V7.5 "spalogretention" "90" int
V7.5 "verboselevel" "3" int
1 Get the LSU name before you reuse the cloud LSU configuration. Run any of the
following commands to get the LSUs (disk volumes) on this MSDP server.
/usr/openv/netbackup/bin/admincmd/nbdevquery -listdp
-stype PureDisk -U
/usr/openv/netbackup/bin/admincmd/nbdevquery -listdv
-stype PureDisk -U
Sample output:
3 Check if the lsuCloudAlias exists. Run the following command to list the instances
and check whether the lsuCloudAlias exists.
/usr/openv/netbackup/bin/admincmd/csconfig
cldinstance -i | grep <lsuname>
If an alias does not exist, run the following command to add them.
/usr/openv/netbackup/bin/admincmd/csconfig
cldinstance -as -in <cloud_provider_name> -sts
<storageserver> -lsu_name <lsuname>
/usr/openv/netbackup/bin/admincmd/csconfig
cldinstance -l
4 Reuse cloud LSU configuration. Run the following command for each LSU to configure the cloud LSU.
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-setconfig -storage_server <storageserver> -stype
PureDisk -configlist /root/dr-lsu.txt
5 Recover spad/spoold metadata from the cloud. For each cloud LSU perform the
previous four steps, and then run the following command.
/usr/openv/pdde/pdcr/bin/cacontrol --catalog
clouddrstatus <lsuname>
/usr/openv/netbackup/bin/bp.start_all
/usr/openv/pdde/pdcr/bin/pddecfg -a
startdatafullcheck -d <dsid>
/usr/openv/pdde/pdcr/bin/crcontrol --processqueue
--dsid <dsid>
Note: The -d and --dsid options are optional parameters and apply to cloud
LSUs only. Use /usr/openv/pdde/pdcr/bin/pddecfg -a listcloudlsu to get
the cloud LSU DSID value. If the given dsid value is “0”, the local LSU is
processed.
See “Recovering the MSDP S3 IAM configurations from cloud LSU” on page 480.
2 On the NetBackup primary server, run the following command to reuse the
cloud LSU. Use the same credentials, bucket name, and sub bucket that were
used before the disaster recovery.
/usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig
-storage_server <storageserver> -stype PureDisk -configlist
<configuration file>
3 On the NetBackup primary server, get the storage server name. On the engines
container with the storage server name, run the following command to get the
catalog from the cloud:
/usr/openv/pdde/pdcr/bin/cacontrol --catalog clouddr <lsuname>
Note: The cloud recovery server version must be the same as or later than the
on-premises NetBackup version.
Task Description
Prepare a cloud recovery server. You must have a virtual machine in your cloud
environment with NetBackup installed on it. You
can deploy the virtual machine in one of the
following ways.
Configure the NetBackup KMS If KMS encryption is enabled, perform the following
server. tasks.
Configure image sharing on the The NetBackup virtual machine in the cloud that is
cloud recovery server. configured for image sharing is called a cloud recovery
server. Perform the following steps to configure the
image sharing:
Task Description
Use the image sharing. After you configure this NetBackup virtual machine for
image sharing, you can import the images from your
on-premises environment to the cloud and recover them
when required. You can also convert VMs to VHD in
Azure or AMI in AWS.
of the bucket are the same as the Veritas Alta Recovery Vault account on the
image sharing server.
The temporary bucket or blob container name format is
vrtsonvert-<timestamp>/VRTSConvert-<timestamp>.
■ For Veritas Alta Recovery Vault Amazon, MSDP-C credentials with AWS account
with IAM and EC2 related permissions must be created before the VM
conversion. For Veritas Alta Recovery Vault Azure, MSDP-C credentials with
Azure general-purpose storage accounts must be created before the VM
conversion.
■ For Veritas Alta Recovery Vault, import uses Veritas Alta Recovery Vault
credentials, but conversion needs regular Azure credentials because Recovery
Vault storage does not support creating an AMI or a VHD.
■ For a new image sharing server, ensure that NGINX is installed and running.
Install NGINX from Red Hat Software Collections. Refer to
https://www.softwarecollections.org/en/scls/rhscl/rh-nginx114/ for instructions.
Because the package name depends on the NGINX version, run yum search
rh-nginx to check if a new version is available. (For NetBackup 8.3, an EEB
is required if NGINX is installed from Red Hat Software Collections.)
If you have configured an IAM role in the EC2 instance, use the following command:
-pt hitachicp: Specify the cloud provider type as hitachicp (HCP LAN)
-sp <s3_http_port>: Specify an HCP storage server HTTP port (Default is 80).
Note: Configuring image sharing using MSDP cloud with the ims_system_config.py
script is not supported for SUSE Linux Enterprise. Use NetBackup web UI to
configure image sharing using MSDP cloud for SUSE Linux Enterprise.
Table 8-2 Steps for image sharing and the command options
Step Command
List all the backup images that are in the cloud. nbimageshare --listimage <LSU name>
<MSDP image sharing server>
You can import multiple images. For every 100 images,
a new import job is created.
Recover the VM as an AWS EC2 AMI or VHD in Azure. nbimageshare --recovervm <LSU name> <MSDP
image sharing server>
"ec2:CreateTags"
"ec2:DescribeImportImageTasks"
"ec2:ImportImage"
"ec2:DescribeImages"
"iam:ListRolePolicies"
"iam:ListRoles"
"iam:GetRole"
"iam:GetRolePolicy"
"iam:CreateRole"
"iam:PutRolePolicy"
/usr/openv/netbackup/bin/nbkms -createemptydb
/usr/openv/netbackup/bin/nbkms
/usr/openv/netbackup/bin/nbkmscmd -discovernbkms -autodiscover
Workaround:
■ You can change the maximum policy size limit for the vmimport role.
■ You can list and delete the existing policies using the following commands:
■ The recover operation with AWS provider includes the AWS import process.
Therefore, a vmdk image cannot be recovered concurrently in two restore jobs
at the same time.
■ In AWS, the image sharing feature can recover the virtual machines that satisfy
the Amazon Web Services VM import prerequisites.
For more information about the prerequisites, refer to the following article:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html
■ If you cannot obtain the administrator password to use an AWS EC2 instance
that has a Windows OS, the following error is displayed:
Password is not available. This instance was launched from a custom
AMI, or the default password has changed. A password cannot be
retrieved for this instance. If you have forgotten your password,
you can reset it using the Amazon EC2 configuration service. For
more information, see Passwords for a Windows Server Instance.
This error occurs after the instance is launched from an AMI that is converted
using image sharing.
For more information, refer to the following articles:
■ Amazon Elastic Compute Cloud Common Messages
the VMs that originate from other hypervisors might not. For more information,
see Information for Non-Endorsed Distributions.
■ Hyper-V Drivers in source virtual machine
For Linux, the following Hyper-V drivers are required on the source VM:
■ hv_netvsc.ko
■ hv_storvsc.ko
■ hv_vmbus.ko
You may need to rebuild the initrd so that required kernel modules are available
on the initial ramdisk. The mechanism for rebuilding the initrd or initramfs image
may vary depending on the distribution. Many distributions have these built-in
drivers available already. For Red Hat or CentOS, the latest Hyper-V drivers
(LIS) may be required if the built-in drivers do not work well. For more information,
see Linux Kernel requirements.
For example, before you perform a backup for a Linux source VM that runs
CentOS or Red Hat, verify that required Hyper-V drivers are installed on the
source VM. Those drivers must be present on the source VM backup to start
the VM after conversion.
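That verification can be sketched as a small loop over the three module names. The sample listing below is hypothetical; on a real source VM you would use the live lsinitrd output instead:

```shell
# Hypothetical stand-in for: lsinitrd | grep hv
listing='hv_netvsc.ko
hv_storvsc.ko
hv_vmbus.ko'

missing=0
for m in hv_netvsc hv_storvsc hv_vmbus; do
  # Check that each required Hyper-V module appears in the initramfs listing.
  printf '%s\n' "$listing" | grep -q "$m" || { echo "missing: $m"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required Hyper-V drivers present"
```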
■ Take a snapshot of the source VM.
■ Run the following command to modify the boot image:
sudo dracut -f -v -N
■ Run the following command to verify that Hyper-V drivers are present in the
boot image:
lsinitrd | grep hv
■ Disk
■ The OS in source VMs is installed on the first disk of the source VMs. Do
not configure a swap partition on the operating system disk. See Information
for Non-endorsed Distributions
■ Multiple data disks that are attached to a new VM created from the converted
VHD are offline on Windows and unmounted on Linux. You must bring them
online and mount them manually after the conversion.
■ After you create a VM from the converted VHD, Azure may add one extra
temporary storage disk whose size is determined by the VM size, on both
Linux and Windows systems. For more information, see Azure VM Temporary
Disk.
■ Networking
If the source VM has multiple network interfaces, only one interface remains
available in the new VM that is created from the converted VHD.
Linux: The name of the primary network interface on source VMs must be eth0
for endorsed Linux distributions. If it is not, you cannot connect to the new VM
that is created from the converted VHD, and some manual steps must be
performed on the converted VHDs. For more information, see Can't connect to
Azure Linux VM through network.
Windows: Enable Remote Desktop Protocol (RDP) on the source VM. On some
Windows systems, you must disable the firewall in the source VM; otherwise,
you cannot connect remotely.
■ Azure account
When you convert a VMDK to a VHD, the Azure account in image sharing using
MSDP cloud must be an Azure general-purpose storage account. See Storage
account overview.
Note: For VM conversion, if the image sharing volume is Veritas Alta Recovery
Vault, only access credentials are supported; Azure Service Principal or AWS IAM
Anywhere credentials are not supported.
cd LISISO
./install
reboot
For example, run any of the following commands to check if the new modules
exist in the new initial ramdisk images.
lsinitrd | grep -i hv
grub2-mkconfig -o /boot/grub2/grub.cfg
■ Rebuild initrd.
cd /boot/
cp initrd-$(uname -r) initrd-$(uname -r).backup
mkinitrd -v -m "hv_vmbus hv_netvsc hv_storvsc" -f
/boot/initrd-$(uname -r) $(uname -r)
■ We recommend that the network interface in the source VM uses DHCP and
is enabled on start.
See Prepare a Red Hat-based virtual machine for Azure
To convert the RHEL 8.6 VM image to VHD
1 Install Hyper-V device drivers and rebuild the initramfs image file.
Check whether the Hyper-V drivers (hv_netvsc, hv_storvsc, hv_vmbus) are
installed.
lsinitrd | grep hv
cd /boot
cp initramfs-`uname -r`.img initramfs-`uname -r`.img.bak
Note: Separate the driver names inside the quotes with spaces.
2 Rename the network interface to the name eth0 and enable the NIC on boot.
Azure Linux VMs use traditional NIC names by default.
In the network interface configuration file, configure ONBOOT=yes.
For example,
mv /etc/sysconfig/network-scripts/ifcfg-ens192 /etc/sysconfig/network-scripts/ifcfg-eth0
sed -i 's/ens192/eth0/g' /etc/sysconfig/network-scripts/ifcfg-eth0
cd /boot
cp initramfs-`uname -r`.img initramfs-`uname -r`.img.bak
Note: Separate the driver names inside the quotes with spaces.
2 Check that the network interface name is eth0. Ensure that the network interface
is using DHCP, and it is enabled on boot.
/etc/sysconfig/network/ifcfg-eth0 contains the following:
BOOTPROTO='dhcp'
STARTMODE='auto'
3 Regenerate the grub.cfg to ensure that console logs are sent to the serial
port.
■ To use the traditional NIC names, in the file /etc/default/grub change
the line GRUB_CMDLINE_LINUX="xxxxxxx" to
GRUB_CMDLINE_LINUX="xxxxxxx net.ifnames=0".
Remove the following parameters if they exist: rhgb quiet
crashkernel=auto
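A scripted version of that edit might look like the following sketch. It assumes GNU sed and operates on a sample copy rather than the real /etc/default/grub; the file name and kernel command line are illustrative:

```shell
GRUB_FILE=grub.sample   # stand-in for /etc/default/grub
printf 'GRUB_CMDLINE_LINUX="console=ttyS0 rhgb quiet crashkernel=auto"\n' > "$GRUB_FILE"

# Drop rhgb, quiet, and crashkernel=auto, then append net.ifnames=0
# inside the existing GRUB_CMDLINE_LINUX quotes.
sed -i -e 's/ *\(rhgb\|quiet\|crashkernel=auto\)//g' \
       -e 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 net.ifnames=0"/' "$GRUB_FILE"
cat "$GRUB_FILE"
# prints: GRUB_CMDLINE_LINUX="console=ttyS0 net.ifnames=0"
```

After such an edit, regenerate grub.cfg (for example with grub2-mkconfig -o /boot/grub2/grub.cfg, as shown later in this section) so the change takes effect.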
For more information about Veritas Alta Recovery Vault, see Explore Recovery
Vault.
Note: Veritas Alta Recovery Vault supports multiple options. For Veritas Alta
Recovery Vault Azure and Azure Government options in the web UI, you must
contact your Veritas NetBackup account manager for credentials or with any
questions.
Table 8-4 Steps for configuring Alta Recovery Vault for Azure and Azure
Government
Step 1 Retrieve credentials. Retrieve Veritas Alta Recovery Vault credentials from your
Veritas NetBackup account manager.
Step 3 Add a disk pool. In the NetBackup web UI, create a disk pool. Follow the
procedure in Create a disk pool in the NetBackup Web UI
Administrator’s Guide.
Step 4 Add a storage unit. In the NetBackup web UI, create a storage unit. Follow the
procedure in Create a storage unit in the NetBackup Web UI
Administrator’s Guide.
When you create the storage unit, select the Media Server
Deduplication Pool (MSDP) option. In the Disk pool step,
select the disk pool that was created in Step 3.
Note: If an update to the refresh token for an existing storage account is needed,
you must edit the credentials that are associated with the storage account. Use the
web UI and update the refresh token within Credential management.
You cannot have multiple credentials for the same storage account. Credentials
must be unique to the storage account. If you do not have unique credentials, you
can encounter issues such as the disk volume going down or backup and restore
failures to that disk volume.
Note: Use the original storage account to update the Veritas Alta Recovery Vault
CMS credentials.
Table 8-5 Steps for configuring Alta Recovery Vault for Azure and Azure
Government with the CLI
Step 1 Retrieve credentials. Retrieve Veritas Alta Recovery Vault credentials from your
Veritas NetBackup account manager.
Step 2 Add credentials using the Credential management option. Log in to the
NetBackup web UI and perform the following:
1 On the left, click Credential management.
2 On the Named credentials tab, click Add and provide
the following properties:
■ Credential name
The Credential name must use alphanumeric
characters with hyphens or underscores and cannot
contain spaces or illegal characters.
■ Tag
■ Description
3 Click Next.
Step 4 Create a cloud instance alias. Use the following examples depending on
your environment:
■ Creating a Veritas Alta Recovery Vault Azure cloud
instance alias:
/usr/openv/netbackup/bin/admincmd/csconfig
cldinstance -as -in
Veritas-Alta-Recovery-Vault-Azure
-sts <storage server>
-lsu_name <lsu name>
/usr/openv/netbackup/bin/admincmd/csconfig
Step 6 Create a configuration file, then run the nbdevconfig command.
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-setconfig -storage_server <storage server>
-stype PureDisk
-configlist <configuration file path>
Step 7 Create a disk pool. Create a disk pool by running the nbdevconfig
command. The following are examples of using the
nbdevconfig command:
Example 1:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-previewdv -storage_servers <storage server name>
-stype PureDisk
| grep <LSU name> > /tmp/dvlist
Example 2:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-createdp -dp <disk pool name>
-stype PureDisk -dvlist /tmp/dvlist
-storage_servers <storage server name>
Note: You can also create the disk pool from the NetBackup
web UI or NetBackup Administration Console.
Step 8 Create a storage unit. Create a storage unit by using the bpstuadd command.
The following is an example of using the bpstuadd command:
/usr/openv/netbackup/bin/admincmd/bpstuadd
-label <storage unit name>
-odo 0 -dt 6 -dp <disk pool name>
-nodevhost
Note: You can also create the storage unit from the
NetBackup web UI or NetBackup Administration Console.
Note: If an update to the refresh token for an existing storage account is needed,
you must edit the credentials that are associated with the storage account. Use the
web UI and update the refresh token within Credential management.
You cannot have multiple credentials for the same storage account. Credentials
must be unique to the storage account. If you do not have unique credentials, you
can encounter issues such as the disk volume going down or backup and restore
failures to that disk volume.
csconfig cldinstance -us -in <instance name> -sts <alias name> -ntr <0,1>
Note: When you add the cloud LSU on a back-level media server using the CLI,
the -ntr option must be set to No (0). You must set the option to No because older
versions of the media server do not support token-based credentials. When you
use a NetBackup storage server version 10.2 or newer, the cloud alias instance
must have the -ntr option set to Yes. The setting cannot be set to No.
Instead of putting the storage account name for -username, use the name of the
credentials created using Credential Management. Also, when prompted for a
password, provide a dummy input because no password is needed.
/usr/openv/netbackup/bin/admincmd/csconfig cldinstance
-as -in Veritas-Alta-Recovery-Vault-Azure -sts <storage_server_name>
-stype PureDisk -lsu_name test1
The --enable_sas option was added for use with Veritas Alta Recovery Vault
Azure. Additionally, if the --enable_sas option is used, you must export the
following environment variables:
■ MSDPC_MASTER_SERVER - The name of the NetBackup primary server.
export MSDPC_PROVIDER=vazure
export MSDPC_REGION="East US"
export MSDPC_ENDPOINT="https://<storage-account>.blob.core.windows.net/"
export MSDPC_ACCESS_KEY=<credential name>
export MSDPC_SECRET_KEY="dummy"   # any non-null string
export MSDPC_MASTER_SERVER=<primary server>
export MSDPC_ALIAS=<storage_server_name>_test1
Alternatively, you can provide an access token that you receive from Veritas to
create the WORM bucket or volume. This option is not recommended because the
media server must connect to the Recovery Vault web server and Veritas has to
provide the Recovery Vault web server URI.
■ MSDPC_RVLT_API_URI - A new environment parameter for use when Veritas
provides a different endpoint.
■ MSDPC_ACCESS_TOKEN - An access token which is part of the credentials that
Veritas provides.
■ MSDPC_CMS_CRED_NAME - The credential name provided for storing the credentials.
Example:
export MSDPC_CMS_CRED_NAME=your_cms_credential_name
export MSDPC_ALIAS=your_alias_name
export MSDPC_REGION=your_region
export MSDPC_PROVIDER=vazure
export MSDPC_ENDPOINT="https://your_storage_account.blob.core.windows.net/"
export MSDPC_MASTER_SERVER=<primary server>
Note: Veritas Alta Recovery Vault supports multiple options. For Veritas Alta
Recovery Vault Amazon and Amazon Government options in the web UI, you must
contact your Veritas NetBackup account manager for credentials or with any
questions.
Table 8-6
Steps Task Instructions
Step 1 Retrieve credentials. Retrieve Veritas Alta Recovery Vault credentials from your
Veritas NetBackup account manager.
Step 3 Add a disk pool. In the NetBackup web UI, create a disk pool. Follow the
procedure in Create a disk pool in the NetBackup Web UI
Administrator’s Guide.
Step 4 Add a storage unit. In the NetBackup web UI, create a storage unit. Follow the
procedure in Create a storage unit in the NetBackup Web UI
Administrator’s Guide.
When you create the storage unit, select the Media Server
Deduplication Pool (MSDP) option. In the Disk pool step,
select the disk pool that was created in Step 3.
Note: Use the web UI and update the refresh token within Credential management.
Table 8-7 Steps for configuring Alta Recovery Vault for Amazon and
Amazon Government with the CLI
Step 1 Retrieve credentials. Retrieve Veritas Alta Recovery Vault credentials from your
Veritas NetBackup account manager.
Step 2 Add credentials using the Credential management option. Log in to the
NetBackup web UI and perform the following:
1 On the left, click Credential management.
2 On the Named credentials tab, click Add and provide the
following properties:
■ Credential name
The Credential name must use alphanumeric characters
with hyphens or underscores and cannot contain spaces
or illegal characters.
■ Tag
■ Description
3 Click Next.
Step 6 Create a configuration file, then run the nbdevconfig command.
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-setconfig -storage_server <storage server>
-stype PureDisk
-configlist <configuration file path>
Step 7 Create a disk pool. Create a disk pool by running the nbdevconfig command.
The following are examples of using the nbdevconfig command:
Example 1:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-previewdv -storage_servers <storage server name>
-stype PureDisk
| grep <LSU name> > /tmp/dvlist
Example 2:
/usr/openv/netbackup/bin/admincmd/nbdevconfig
-createdp -dp <disk pool name>
-stype PureDisk -dvlist /tmp/dvlist
-storage_servers <storage server name>
Note: You can also create the disk pool from the NetBackup web
UI or NetBackup Administration Console.
Step 8 Create a storage unit. Create a storage unit by using the bpstuadd command.
The following is an example of using the bpstuadd command:
/usr/openv/netbackup/bin/admincmd/bpstuadd
-label <storage unit name>
-odo 0 -dt 6 -dp <disk pool name>
-nodevhost
Note: You can also create the storage server from the NetBackup
web UI or NetBackup Administration Console.
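The preview-and-filter pattern in Step 7's Example 1 can be sketched with simulated output. The disk-volume lines and LSU name below are illustrative placeholders, not real nbdevconfig output:

```shell
# Filter a disk-volume listing for one LSU name into /tmp/dvlist, as in
# Step 7, Example 1. The listing here is simulated with printf; in practice
# it comes from 'nbdevconfig -previewdv'.
lsu="test1"
printf '%s\n' \
  'V7.5 DiskVolume "test1" "bucket/test1" 0' \
  'V7.5 DiskVolume "test2" "bucket/test2" 0' \
  | grep "\"$lsu\"" > /tmp/dvlist
cat /tmp/dvlist
```

The resulting /tmp/dvlist contains only the volume lines for the chosen LSU, which is what the -createdp call in Example 2 consumes through -dvlist.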
Note: Use the web UI and update the refresh token within Credential management.
csconfig cldinstance -us -in <instance name> -sts <alias name> -ntr <0,1>
Note: When you add the cloud LSU on a back-level media server using the CLI, the -ntr option must be set to No (0) because older versions of the media server do not support token-based credentials. When you use a NetBackup storage server version 10.3.1 or newer, the cloud alias instance must have the -ntr option set to Yes (1); it cannot be set to No.
/usr/openv/netbackup/bin/admincmd/csconfig cldinstance
-as -in Veritas-Alta-Recovery-Vault-Amazon -sts <storage_server_name>
-stype PureDisk -lsu_name test1
The --enable_sts option is used for Veritas Alta Recovery Vault Amazon. If the --enable_sts option is used, you must also export the following environment variables:
■ MSDPC_MASTER_SERVER - The name of the NetBackup primary server.
Example:
export MSDPC_PROVIDER=vamazon
export MSDPC_REGION="us-east-1"
export MSDPC_CMS_CRED_NAME=<credential name>
export MSDPC_MASTER_SERVER=<primary server>
export MSDPC_ALIAS=<storage_server_name>_testnew
Alternatively, you can provide an access token that you receive from Veritas to
create the WORM bucket or volume. This option is not recommended because the
media server must connect to the Recovery Vault web server and Veritas has to
provide the Recovery Vault web server URI.
■ MSDPC_RVLT_API_URI - A new environment parameter for use when Veritas
provides a different endpoint.
■ MSDPC_ACCESS_TOKEN - An access token which is part of the credentials that
Veritas provides.
■ Click Next.
■ In the drop-down, select Veritas Alta Recovery vault.
■ Click Veritas Alta Recovery Vault Azure or Veritas Alta Recovery Vault
Amazon.
■ Add the Storage account and Refresh token.
■ Select or add a role that can access this credential.
■ Review the information and click Finish.
Confirm the change by making sure that the need-token-renew option (-ntr) is set to 1 to enable this option on the storage server:
<install path>/netbackup/bin/admincmd/nbdevconfig
-setconfig -stype PureDisk -storage_server <storage_server>
-configlist <config file path>
Restart the services on the primary server and the media server for the changes
to take effect.
6 Verify the restore of the old backup and run a new backup. Restore the new
backup.
See “About bucket-level immutable storage support for Google Cloud Storage” on page 359.
With NetBackup 10.0.1, you can use cloud immutable storage in a cluster environment. For more information, see “About using the cloud immutable storage in a cluster environment” on page 364.
5 On the Review page, verify that all settings and information are correct. Click
Finish.
The disk pool creation and replication configuration continue in the background
if you close the window. If there is an issue with validating the credentials and
configuration of the replication, you can use the Change option to adjust any
settings.
6 In the Storage unit tab, click Add.
7 Select Media Server Deduplication Pool (MSDP) and click Start.
8 In Basic properties, enter the Name of the MSDP storage unit and click Next.
9 Select the disk pool that was created and select the Enable WORM/Lock until
expiration box, and click Next.
10 In Media server, use the default selection of Allow NetBackup to
automatically select, and click Next.
If there are multiple media servers, select one at version 9.1 or later.
11 Review the setup of the storage unit and then click Save.
Performance tuning
The MSDP spad process has a retention cache that stores each data container’s retention time. When a data container’s retention time is less than retentionCacheTimeThreshold, the data is not deduplicated against again, so that the storage can be reclaimed quickly. If the data were deduplicated against, its retention time could be extended and the data could not be deleted. The configuration items are in cloudlsu.cfg.
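The effect of the threshold can be sketched as follows. This is illustrative pseudologic only, not the actual spad implementation; the variable names and values are assumptions:

```shell
# Illustrative only: skip deduplication against a container whose remaining
# retention time is below the threshold, so its storage can be reclaimed
# quickly once it expires.
retentionCacheTimeThreshold=86400   # hypothetical value, in seconds
remaining=3600                      # container's remaining retention time
if [ "$remaining" -lt "$retentionCacheTimeThreshold" ]; then
  decision="skip-dedupe"            # container expires soon; reclaim quickly
else
  decision="dedupe"                 # safe to reference data in this container
fi
echo "$decision"
```

Raising the threshold keeps more near-expiry containers out of deduplication, trading some dedupe ratio for faster space reclamation.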
Amazon cloud users need the permissions to manage and use the cloud immutable volumes. The cloud administrator needs the permissions to run msdpcldutil to manage cloud volumes. These two tasks need different sets of permissions: the principal who has the first set of permissions is a cloud administrator, and the principal who has the second set of permissions is a backup administrator.
"s3:BypassGovernanceRetention",
"s3:CreateBucket",
"s3:DeleteBucket",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetBucketLocation",
"s3:GetBucketObjectLockConfiguration",
"s3:GetBucketVersioning",
"s3:GetObject",
"s3:GetObjectRetention",
"s3:GetObjectVersion",
"s3:ListAllMyBuckets",
"s3:ListBucket",
"s3:ListBucketVersions",
"s3:PutBucketObjectLockConfiguration",
"s3:PutBucketVersioning",
"s3:PutObject",
"s3:PutObjectRetention",
"s3:BypassGovernanceRetention",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetBucketLocation",
"s3:GetBucketObjectLockConfiguration",
"s3:GetBucketVersioning",
"s3:GetObject",
"s3:GetObjectRetention",
"s3:GetObjectVersion",
"s3:ListAllMyBuckets",
"s3:ListBucket",
"s3:ListBucketVersions",
"s3:PutObject",
"s3:PutObjectRetention",
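The permissions above go into the Action array of an IAM policy document. As a quick local sanity check before uploading such a document (the file name is hypothetical and the permission list is abbreviated), you can verify that it parses as JSON:

```shell
# Write an abbreviated backup-administrator policy and verify it parses as
# JSON. python3 is used here only as a JSON syntax checker.
cat > /tmp/msdp-backup-admin-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:BypassGovernanceRetention",
        "s3:GetObjectRetention",
        "s3:PutObjectRetention",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::*"]
    }
  ]
}
EOF
python3 -m json.tool /tmp/msdp-backup-admin-policy.json > /dev/null && echo "valid JSON"
```

A syntax error (for example, a missing comma between permission strings) makes the check fail before the policy ever reaches AWS.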
{
"Version": "2012-10-17",
"Id": "vtas-lockdown-mode-file-protection",
"Statement": [
{
"Sid": "vrts-lockdown-file-read-only",
"Effect": "Deny",
"Principal": "*",
"Action": [
"s3:DeleteObject",
"s3:PutObject",
"s3:PutObjectRetention"
],
"Resource": [
"arn:aws:s3:::bucket-name/volume-name/lockdown-mode.conf",
"arn:aws:s3:::bucket-name/volume-name/lsu-worm.conf"
],
"Condition": {
"StringNotEquals": {
"aws:userid": "YOUR-USER-ID-HERE"
}
}
}
]
}
See “AWS user permissions to create the cloud immutable volume” on page 354.
See “Creating a cloud immutable storage unit using the web UI” on page 351.
From the NetBackup 10.1 release, cloud immutable object support is added for the following S3-compatible platforms:
■ Wasabi (Wasabi cloud storage)
■ Cloud admin role and backup admin role are combined into a single role.
■ Scality RING/ARTESCA
■ Cloud admin role and backup admin role are combined into a single role.
See “Creating a cloud immutable storage unit using the web UI” on page 351.
From the NetBackup 10.3 release, cloud immutable object support is added for the following S3-compatible platforms:
■ IBM cloud object storage (iCOS)
■ Cloud admin role and backup admin role are combined into a single role.
■ Only compliance mode is supported.
See “Creating a cloud immutable storage unit using the web UI” on page 351.
See “Updating a cloud immutable volume” on page 352.
See “Creating a cloud immutable storage unit using the web UI” on page 351.
See “Updating a cloud immutable volume” on page 352.
■ Does not support NetBackup for AKS/EKS and NetBackup Flex Scale.
See “Creating a Google cloud immutable storage using the Web UI” on page 360.
See “Managing a Google cloud immutable storage using msdpcldutil tool”
on page 361.
6 On the Review page, verify that all settings and information are correct. Click
Finish.
The disk pool creation and replication configuration continue in the background
if you close the window. If there is an issue with validating the credentials and
configuration of the replication, you can use the Change option to adjust any
settings.
7 In the Storage unit tab, click Add.
8 Select Media Server Deduplication Pool (MSDP) and click Start.
9 In Basic properties, enter the Name of the MSDP storage unit and click Next.
10 Select the disk pool that was created and click Next.
11 In Media server, use the default selection of Allow NetBackup to
automatically select, and click Next.
12 Review the setup of the storage unit and then click Save.
To get the service account key, see Create and manage service account keys
To get the ACCESS_KEY and SECRET_KEY, see HMAC keys
2 Create a Google cloud immutable storage.
# msdpcldutil bucket create --bucket bucketname --mode ENTERPRISE
--period 2D
7 If you change the retention policy through Google WebUI, you must sync the
MSDP configuration file.
# /usr/openv/pdde/pdcr/bin/msdpcldutil bucket sync --bucket
bucketname
■ A retention range is defined for the cloud volume. The retention of any backup
images must be in this range. NetBackup checks this condition when the backup
policy is created. You can define and modify this range in the NetBackup web
UI.
NetBackup uses the Google S3 XML API to handle data management in Google Cloud storage. However, the Google S3 XML API lacks the functionality to retrieve and configure the default retention of a bucket. So NetBackup is unable to determine if S3 buckets have a default retention policy configured and, as a result, it perceives them as non-object-lock buckets. When you opt for a bucket with a default retention policy in Google Cloud Storage, certain unexpired objects cannot be deleted.
We recommend that you avoid bucket creation in the Google Cloud console. Create
all buckets exclusively in the NetBackup web UI. Additionally, identify any buckets
with default retention policy in the Google Web Console and refrain from using them
in NetBackup.
See “Creating a cloud immutable storage unit using the web UI” on page 351.
See “Updating a cloud immutable volume” on page 352.
See “Extend the cloud immutable volume live duration automatically” on page 354.
See “Performance tuning” on page 354.
"storage.buckets.create",
"storage.buckets.delete",
"storage.buckets.enableObjectRetention",
"storage.buckets.get",
"storage.buckets.list",
"storage.buckets.update",
"storage.objects.create",
"storage.objects.delete",
"storage.objects.list",
"storage.objects.overrideUnlockedRetention",
"storage.objects.setRetention",
"storage.objects.update"
"storage.buckets.get",
"storage.buckets.list",
"storage.buckets.update",
"storage.objects.create",
"storage.objects.delete",
"storage.objects.list",
"storage.objects.overrideUnlockedRetention",
"storage.objects.setRetention",
"storage.objects.update",
"storage.multipartUploads.create",
"storage.multipartUploads.abort",
"storage.multipartUploads.listParts",
"storage.multipartUploads.list"
2 Run the following command to find the backup ID and copy number.
catdbutil --worm list --allow_worm
4 Expire the WORM image by using the NetBackup command on the NetBackup
primary server.
bpexpdate -backupid ${my_backup_id} -d 0 -try_expire_worm_copy
-copy ${my_copy_num}
■ msdpcldutil bucket
For more information, see msdpcldutil section in the Veritas NetBackup Commands
Reference Guide.
in your AWS account and use the necessary certificate and private key to
authenticate.
Note: IAM Role Anywhere is not supported for AWS Recovery Vault. Also, switching
between IAM Role Anywhere and other authentication types such as Access Key
is not supported.
https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaWelcome.html
To create a CA certificate for free by setting up a private CA authority and creating
a CA certificate, see:
https://docs.aws.amazon.com/rolesanywhere/latest/userguide/getting-started.html
Signature validation
To authenticate a request for credentials, IAM Roles Anywhere validates the
incoming signature by using the signature validation algorithm required by the key
type of the certificate, for example RSA or ECDSA. After validating the signature,
IAM Roles Anywhere checks that the certificate was issued by a certificate authority
configured as a trust anchor in the account using algorithms defined by public key
infrastructure X.509 (PKIX) standards.
End entity certificates must satisfy the following constraints to be used for
authentication:
■ The certificates MUST be X.509v3.
■ Basic constraints MUST include CA: false.
■ The key usage MUST include Digital Signature.
■ The signing algorithm MUST include SHA256 or stronger. MD5 and SHA1
signing algorithms are rejected.
Certificates used as trust anchors must satisfy the same requirements for the
signature algorithm, but with the following differences:
■ The key usage must include Certificate Sign, and may include CRL Sign.
Certificate Revocation Lists (CRLs) are an optional feature of IAM Roles
Anywhere.
■ Basic constraints MUST include CA: true.
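One way to check a candidate certificate against these constraints is to inspect it with openssl. The sketch below generates a throwaway self-signed end-entity certificate purely for illustration (the file names are placeholders; in practice you inspect your real certificate), and -addext requires OpenSSL 1.1.1 or later:

```shell
# Generate a throwaway end-entity certificate with the required extensions,
# then print the fields that IAM Roles Anywhere checks: signing algorithm,
# basic constraints (CA:FALSE), and key usage (Digital Signature).
openssl req -x509 -newkey rsa:2048 -sha256 -nodes -days 1 \
  -keyout /tmp/ee.key -out /tmp/ee.crt -subj "/CN=example" \
  -addext "basicConstraints=critical,CA:FALSE" \
  -addext "keyUsage=critical,digitalSignature" 2>/dev/null
openssl x509 -in /tmp/ee.crt -noout -text \
  | grep -E "Signature Algorithm|CA:|Digital Signature"
```

For a trust-anchor certificate you would expect the inspection to show CA:TRUE and Certificate Sign instead.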
Create policy
Create a policy in your AWS console and grant the following permissions required
by NetBackup.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets",
"s3:CreateBucket",
"s3:ListBucket",
"s3:GetBucketObjectLockConfiguration",
"s3:DeleteObject",
"s3:PutObject",
"s3:GetObject",
"s3:GetBucketPolicyStatus",
"s3:GetObjectRetention",
"s3:DeleteObjectVersion",
"s3:GetBucketVersioning",
"s3:BypassGovernanceRetention",
"s3:GetBucketPolicy",
"s3:PutBucketPolicy",
"s3:PutBucketObjectLockConfiguration",
"s3:DeleteBucket",
"s3:DeleteBucketPolicy",
"s3:PutBucketVersioning",
"s3:GetObjectVersion",
"s3:ListBucketVersions",
"s3:PutObjectRetention",
"s3:RestoreObject"
],
"Resource": [
"arn:aws:s3:::*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeImages"
],
"Resource": [
"*"
]
}
]
}
Create role
Create a role in your AWS console. See AWS documentation for the details:
https://docs.aws.amazon.com/rolesanywhere/latest/userguide/getting-started.html#getting-started-step2
Your policy should look like this with the required parameters filled out.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"rolesanywhere.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole",
"sts:TagSession",
"sts:SetSourceIdentity"
],
"Condition": {
"ArnEquals": {
"aws:SourceArn": [
"arn:aws:rolesanywhere:<REGION>:<ACCOUNT
NUMBER>:trust-anchor/<TRUST ANCHOR ID>"
]
},
"StringEquals": {
"<PRINCIPAL TAG CHECK>"
}
}
}
]
}
NetBackup does not require the ArnEquals and the Principal Tag StringEquals
checks, but they are suggested security constraints.
Create profile
Create a profile in your AWS console. For details, see:
https://docs.aws.amazon.com/rolesanywhere/latest/userguide/getting-started.html
Table 8-8
Steps Task Instructions
Step 2 Add a disk pool.
In the NetBackup web UI, create a disk pool. Follow the procedure in Create a disk pool in the NetBackup Web UI Administrator’s Guide.
Step 3 Add a storage unit.
In the NetBackup web UI, create a storage unit. Follow the procedure in Create a storage unit in the NetBackup Web UI Administrator’s Guide.
When you create the storage unit, select the Media Server Deduplication Pool (MSDP) option. In the Disk pool step, select the disk pool that was created in Step 2.
About Azure service principal support
Note: Service principal is not supported with Azure Recovery Vault. Also, switching between Azure service principal and other authentication types is not supported.
"Microsoft.Storage/storageAccounts/blobServices/containers/delete"
"Microsoft.Storage/storageAccounts/blobServices/containers/read"
"Microsoft.Storage/storageAccounts/blobServices/containers/write"
"Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action"
"Microsoft.Storage/storageAccounts/blobServices/read"
"Microsoft.Storage/storageAccounts/read"
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete"
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write"
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action"
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action"
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action"
"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/immutableStorage/runAsSuperUser/action"
■ Create the following and keep the information handy, before starting with the
configuration:
■ Storage Account
■ Client ID
■ Tenant ID
■ Secret Key
Table 8-9
Steps Task Instructions
Step 2 Add a disk pool.
In the NetBackup web UI, create a disk pool. Follow the procedure in Create a disk pool in the NetBackup Web UI Administrator’s Guide.
Step 3 Add a storage unit.
In the NetBackup web UI, create a storage unit. Follow the procedure in Create a storage unit in the NetBackup Web UI Administrator’s Guide.
When you create the storage unit, select the Media Server Deduplication Pool (MSDP) option. In the Disk pool step, select the disk pool that was created in Step 2.
Amazon Elastic Kubernetes Service (EKS)
This platform is supported and enabled by default.
MSDP on-premises (BYO, Flex media server, NetBackup appliance)
This platform is supported. You must manually enable this option. The connection from MSDP to the object store must have good network bandwidth and latency.
The instant access for object storage is enabled by default on the AKS/EKS
platforms. If instant access is not enabled by default, you must manually perform
the following steps to enable it:
1. Add the instant-access-object-store = 1 option into the /etc/msdp-release file on the storage server.
2. On the primary or media server, run the following commands to verify that the
IA_OBJECT_STORE name is in the extendedcapabilities option.
Example:
3. On the primary or media server, run the following commands to reload the
storage server attributes:
NetBackup can also recover NetBackup data which has been loaded onto the
device by image sharing.
Using Credentials
Use local user credentials with the device rather than your normal S3 IAM
credentials. You can get the root credentials from the AWS Snowball Edge client.
Retrieve the access key first:
Note: It’s best practice to configure non-root users. Refer to the following instructions
for creating local users:
Setting Up Local Users AWS Snowball
Note: You can use a different instance name in step 2, however be sure to use that
instance name for the remaining steps.
/usr/openv/netbackup/bin/admincmd/csconfig cldinstance -a
-in <instance name> -pt amazon
-sh <IP address of snowball>
-http_port <8080>
-https_port <8443>
-access_style 2
Once the MSDP Cloud storage pointing to AWS Snowball Edge device is configured,
you can create a backup policy to write data directly to the device. You can also
create a storage lifecycle policy (SLP) to duplicate data from your local MSDP
storage to the AWS Snowball Edge device. This device can also be used to perform
other supported NetBackup operations.
Note: The bucket on the AWS Snowball Edge device must also exist in AWS. You must have an existing bucket in AWS before an AWS Snowball Edge device can be used for an import job.
2 Obtain the certificate. Run the following AWS Snowball Edge client command:
<client install location>/snowballEdge get-certificate
--certificate-arn <arn-value-from-last-cmd> --manifest-file
<path-to-manifest-file> --unlock-code
<unlock-code-from-aws-portal> --endpoint
https://<snowball-edge-IP>
2 During Disk pool creation for MSDP Cloud AWS Snowball Edge, make sure
to select Use SSL and clear Check certificate revocation under Security.
3 The other steps are the same as in the Configuring NetBackup for AWS Snowball Edge section.
Ship the device to the cloud vendor. Refer to the AWS documentation for detailed
steps.
Once the device is at AWS, it takes a couple of days to import data into the S3
bucket. The import time depends on the size of the data residing on the device.
You can view the progress of your import job from AWS portal > AWS Snow
Family. Once the import job is completed, review the success log and the failure
log to verify that the required data is successfully imported into the S3 bucket.
After backups are imported into the S3 bucket, perform the steps in the Reconfigure NetBackup to work with S3 section before doing any NetBackup operation.
2 To get the storage server name created for the custom instance, run the
following and note the storage server which was configured when you created
the disk pool.
/usr/openv/netbackup/bin/admincmd/csconfig cldinstance -i -in
<name of your instance>
■ /usr/openv/pdde/pdconfigure/pdde start
10 In the NetBackup web UI, go to Storage > Disk storage > Disk pools, select
the disk pool and click on Update disk volume.
11 Run the following and verify Status = UP:
/usr/openv/netbackup/bin/admincmd/nbdevquery -listdv -stype
PureDisk -U -dp <disk_pool_name>
12 Open the Disk pool details page in NetBackup web UI and verify that the
Service host is updated to the AWS service host for the region in the Cloud
details section.
13 Activate backup policies or activate the secondary operation processing in the SLP. You can use the following command to activate secondary operations in the SLP:
/usr/openv/netbackup/bin/admincmd/nbstlutil active -lifecycle
<slp_name>
14 Perform the restore and verify the data. Use the NetBackup web UI to verify
the images.
2 Navigate in the NetBackup web UI to Hosts > Master servers > <your master
server> > Cloud Storage and edit the cloud storage pointing to the Snowball
Edge device. Service host should be the S3 endpoint (s3.dualstack.<region
ID>.amazonaws.com), HTTP/HTTPS ports should be 80/443, region should be
<region ID> with endpoint the same as the service host.
3 If you disabled SSL on your AWS Snowball Edge instance, enable it again.
You can only enable SSL from the NetBackup Administration Console.
■ Go to Host Properties > Master Servers > <your master server> > Cloud
Storage.
■ Click on the cloud storage pointing to the AWS Snowball Edge device.
■ In the table Associated Cloud Storage Servers for, select your storage
server name and click Change.
■ Select Use SSL and Data Transfer.
■ Click Save.
4 Go to your Disk pool in the NetBackup web UI and update the Cloud
credentials with the credentials of your AWS account.
5 To refresh cloud instance, run:
/usr/openv/netbackup/bin/admincmd/csconfig r
■ /usr/openv/pdde/pdconfigure/pdde start
Note: If configuring NetBackup for AWS Snowball Edge with SSL enabled,
use -https_port 8443.
Note: If you use CMS for cloud authentication, use the CMS credential
name in the configuration file instead of "lsuCloudUser" and
"lsuCloudPassword". Use the following format:
The first option requires you to restore the whole backup set, which may contain many backups, and the amount of data can be huge. The second option allows you to move only the backups that are needed.
During an AWS Snowball Edge export job, data in the bucket being exported is
read-only. This limitation is an AWS limitation to prevent race conditions with the
data. During data transit, no backups can be made to the bucket.
Depending on network speed, duplication of 1 TB of data from one S3 bucket to another using an image sharing server in an EC2 instance can take time. So for a typical Snowball workload of many TB of data, a duplication can take many hours or up to a few days. The alternative to image sharing is to export directly from the source bucket and accept that its data cannot be accessed during device transit. Device transit generally takes a few days. Weigh the benefits and drawbacks of these two solutions for your particular export needs.
To use image sharing to export data by AWS Snowball Edge
1 Create an EC2 instance within the same region as both the source and the
destination buckets reside. Network performance is important for this workflow.
Ensure an S3 endpoint is configured in the VPC the VM is in, as this speeds
up network speed between EC2 and S3.
2 Install NetBackup on the EC2 instance.
3 Configure an MSDP storage server for image sharing.
4 Use the web UI to configure a disk pool, disk volume, and storage unit that
points to the source bucket.
The volume should have the same name as the original volume the data was
created in.
5 Import the images.
6 Use the CLI to configure a disk pool, disk volume, and storage unit that points
to the (empty) destination bucket.
■ To create a cloud instance alias, run:
/usr/openv/netbackup/bin/admincmd/csconfig cldinstance -as -in
amazon.com -sts <storage server> -lsu_name <lsu name>
/usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig
-storage_server <storage server> -stype PureDisk -configlist
<configuration file path>
Currently, the web UI prevents you from creating additional disk pools on image
sharing servers.
7 Duplicate the desired images into the destination storage.
8 Initiate an export job from the destination bucket with Snowball.
9 When the device arrives on-premises, create an image sharing server that
points to the AWS Snowball Edge device to perform the desired restores.
■ Bucket listing is not supported (AWS does not support S3 API for bucket listing)
■ The NetBackup web UI shows "No cloud buckets available" if you try to
Retrieve the buckets.
■ The device does not support bucket creation. This limitation is because you are
not allowed to create any new buckets on the device.
Upgrading to NetBackup 10.3 and cluster environment
See the section called “Enable client-side deduplication from the command line”
on page 24.
To configure the cloud direct for backup
1 Configure the cloud direct for backup.
/usr/openv/pdde/pdag/bin/pdagutil --config-cloud-direct-backup
--storage-path <storage path> --cache-size <cache size>
■ Recovering the MSDP object store data from the backup images
■ Best practices
S3 interface for MSDP also supports object versioning, IAM, and identity-based policy. It uses snowball-auto-extract to support batch upload of small objects.
S3 interface for MSDP can be configured on MSDP build-your-own (BYO) server,
Flex appliance, Flex WORM, and NetBackup on AKS/EKS.
For S3 interface for MSDP configuration on a Flex appliance, log in to the Flex media server and see “Configuring S3 interface for MSDP on MSDP build-your-own (BYO) server” on page 392.
For S3 interface for MSDP configuration on Flex WORM, see “Managing S3 service from the deduplication shell” on page 704.
For S3 interface for MSDP configuration on NetBackup on AKS/EKS, see the Using
S3 service in MSDP Scaleout topic of NetBackup Deployment Guide for Kubernetes
Clusters document.
Note: The clocks of the clients and the S3 server must be synchronized for API calls to succeed.
■ It's recommended that the storage server has more than 64 GB of memory and
8 CPUs.
■ Ensure that NGINX is installed on the storage server.
■ The NGINX version must be the same as the one in the corresponding official RHEL version release. Install it from the corresponding RHEL yum source.
■ Run the following command to confirm that NGINX is ready:
systemctl is-active <nginx service name>
If you want to use your certificates in S3 interface for MSDP, run the following
command:
/usr/openv/pdde/vxs3/cfg/script/s3srv_config.sh --cert=<certfile>
--key=<keypath> [--port=<port>] [--loglevel=<0-4>]
■ None: 0
■ Error: 1
■ Warning: 2
■ Info: 3 (default)
■ Debug: 4
■ Only certificates and secret keys in PEM format are supported. Convert certificates and secret keys in other formats to PEM format.
■ After configuring the S3 server, you can check the S3 server status.
Root user: systemctl status pdde-s3srv
Other service users: sudo -E /usr/openv/pdde/pdcr/bin/msdpcmdrun
/usr/openv/pdde/vxs3/cfg/script/s3srv_adm.sh status
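For the PEM requirement noted above, a DER-encoded certificate can be converted to PEM with openssl. The sketch below creates a throwaway certificate just to demonstrate the round trip; the file names are placeholders:

```shell
# Create a throwaway PEM certificate, convert it to DER, then back to PEM,
# to show the conversion used before passing --cert to s3srv_config.sh.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/s3srv.key -out /tmp/s3srv.pem -subj "/CN=example" 2>/dev/null
openssl x509 -in /tmp/s3srv.pem -outform DER -out /tmp/s3srv.der
openssl x509 -in /tmp/s3srv.der -inform DER -out /tmp/s3srv-converted.pem
grep -q "BEGIN CERTIFICATE" /tmp/s3srv-converted.pem && echo "PEM certificate ready"
```

A private key in DER format is converted the same way with `openssl rsa -inform DER`.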
IAM workflow
This section describes the typical IAM workflow. You can install the AWS CLI to send IAM-related API requests to complete the tasks.
IAM workflow
1 Reset and get S3 server root user's credentials.
Create root user credentials. You can use the root user to create users with
limited permissions.
After S3 interface for MSDP is configured, run the following command to create
root user's credentials:
/usr/openv/pdde/vxs3/cfg/script/s3srv_config.sh --reset-iam-root
You can also use this command if you have lost the root user's access keys. The new access key and secret key of the root user are available in the command output.
To create or reset root user's credentials using NetBackup web UI, see
Resetting the MSDP object store root credentials topic of the NetBackup Web
UI Administrator's Guide.
2 Create a user.
aws --endpoint https://<MSDP_HOSTNAME>:8443 [--ca-bundle
<CA_BUNDLE_FILE>] iam create-user --user-name <USER_NAME>
Note: If you omit the --user-name option, the access key is created under the
user who sends the request.
Note: If you omit the --user-name option, the access key is deleted under the
user who sends the request. You cannot delete the last active access key of
a root user.
Note: If you omit the --user-name option, the access key is listed under the
user who sends the request.
If you omit the --user-name option, the access key is updated under the user
who sends the request.
The --status option must be followed by the Active or Inactive parameter (case-sensitive). You cannot update the last active access key of a root user to Inactive status.
8 Get a specific user policy.
aws --endpoint https://<MSDP_HOSTNAME>:8443 [--ca-bundle
<CA_BUNDLE_FILE>] iam get-user-policy --user-name <USER_NAME>
--policy-name <POLICY_NAME>
13 Delete a user.
aws --endpoint https://<MSDP_HOSTNAME>:8443 [--ca-bundle
<CA_BUNDLE_FILE>] iam delete-user --user-name <USER_NAME>
Note: Before you delete a user, you must delete the user policies and access
keys that are attached to the user. You cannot delete a root user.
Common Parameters
The following table contains the parameters that all actions use for signing Signature
Version 4 requests with a query string.
Parameters Description
Action The action to be performed.
Type: string
Required: Yes
Version The API version that the request is written for, expressed in
the format YYYY-MM-DD.
Type: string
Required: No
X-Amz-Algorithm The hash algorithm that you used to calculate the request signature. The value must be AWS4-HMAC-SHA256.
Type: string
Required: Conditional
X-Amz-Credential The credential scope value that includes your access key,
the date, the region, the service, and a termination string.
The value is configured in the following format:
access_key/YYYYMMDD/region/service/aws4_request.
Type: string
Required: Conditional
X-Amz-Date The date that is used to create the signature. The format
must be ISO 8601 basic format (YYYYMMDD'T'HHMMSS'Z').
For example, the following date time is a valid X-Amz-Date
value: 20220525T120000Z.
Type: string
Required: Conditional
Type: string
Required: Conditional
X-Amz-SignedHeaders Specifies all the HTTP headers that were included as part
of the canonical request. For more information about
specifying signed headers, see Task 1: Create a Canonical
Request For Signature Version 4 in the Amazon Web
Services General Reference.
Type: string
Required: Conditional
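These parameters come together in the standard AWS Signature Version 4 key-derivation chain. The following Python sketch shows only that derivation step (for MSDP S3 the region is the LSU name; construction of the string-to-sign is omitted):

```python
import hashlib
import hmac

def sigv4_signature(secret_key, date_stamp, region, service, string_to_sign):
    """Derive the SigV4 signing key and sign the string-to-sign.

    date_stamp is YYYYMMDD and, together with region (the LSU name for
    MSDP S3) and service, forms the credential scope
    access_key/YYYYMMDD/region/service/aws4_request described above.
    """
    def _hmac(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    k_signing = _hmac(k_service, "aws4_request")
    # X-Amz-Signature is the hex digest of the string-to-sign under the
    # derived signing key.
    return hmac.new(k_signing, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()
```

The resulting hex string is what a client places in the X-Amz-Signature query parameter.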
CreateUser
Creates a new IAM user for MSDP S3.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ UserName
The name of the user to create.
IAM user names must be unique. User names are case-sensitive.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: Yes
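The UserName constraints above can be checked client-side before the request is sent; a minimal sketch (the function name is illustrative):

```python
import re

# Constraints from the CreateUser request parameters above:
# length 1-64, pattern [\w+=,.@-]+
USER_NAME_PATTERN = re.compile(r"^[\w+=,.@-]+$")

def is_valid_user_name(name):
    """Return True if name satisfies the documented UserName constraints."""
    return 1 <= len(name) <= 64 and bool(USER_NAME_PATTERN.match(name))
```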
Response Elements
The following element is returned by the server.
■ User
A structure with details about the new IAM user.
Type: User object
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ EntityAlreadyExists
The request was rejected because it attempted to create a resource that already
exists.
HTTP Status Code: 409
■ InvalidInput
The request was rejected because an invalid or out-of-range value was supplied
for an input parameter.
HTTP Status Code: 400
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=CreateUser
&UserName=User1
&Version=2010-05-08
&AUTHPARAMS
Sample Response:
GetUser
Retrieves the information about the specified IAM user.
If you do not specify a user name, IAM determines the user name implicitly based
on the MSDP S3 access key ID used to sign the request to this operation.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ UserName
The name of the user to get information about.
This parameter is optional. If it is not included, it defaults to the user making the
request.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: No
Response Elements
The following element is returned by the server.
■ User
A structure with details about the specified IAM user.
Type: User object
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ NoSuchEntity
The request was rejected because it referenced a resource entity that does not
exist. The error message describes the resource.
HTTP Status Code: 404
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=GetUser
&UserName=User1
&Version=2010-05-08
&AUTHPARAMS
Sample Response:
<GetUserResponse>
<GetUserResult>
<User>
<CreateDate>2022-03-25T06:57:08Z</CreateDate>
<UserName>User1</UserName>
</User>
</GetUserResult>
</GetUserResponse>
ListUsers
Lists all the IAM users of the server.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
This API does not need any specific request parameters.
Response Elements
The following elements are returned by the server.
■ Users.member.N
A list of users.
Type: Array of User objects
■ IsTruncated
A flag that indicates whether there are more items to return.
Type: Boolean
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=ListUsers
&Version=2010-05-08
&AUTHPARAMS
Sample Response:
<ListUsersResponse>
<ListUsersResult>
<Users>
<member>
<CreateDate>2022-03-22T13:35:03Z</CreateDate>
<UserName>root</UserName>
</member>
<member>
<CreateDate>2022-03-25T06:57:08Z</CreateDate>
<UserName>User1</UserName>
</member>
</Users>
<IsTruncated>false</IsTruncated>
</ListUsersResult>
</ListUsersResponse>
DeleteUser
Deletes the specified IAM user.
You must manually delete the items (for example, access keys and policies) that
are attached to the user before you delete the user.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ UserName
The name of the user to delete.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: Yes
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ DeleteConflict
The request was rejected because it attempted to delete a resource that has
attached subordinate entities. The error message describes these entities.
HTTP Status Code: 409
■ NoSuchEntity
The request was rejected because it referenced a resource entity that does not
exist. The error message describes the resource.
HTTP Status Code: 404
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=DeleteUser
&UserName=User1
&Version=2010-05-08
&AUTHPARAMS
Sample Response:
CreateAccessKey
Creates a new AWS secret access key and corresponding MSDP S3 access key
ID for the specified user. The default status for new keys is Active.
If you do not specify a user name, IAM determines the user name implicitly based
on the MSDP S3 access key ID signing the request.
A user can have up to two access keys.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ UserName
The name of the IAM user that the new key will belong to.
This parameter is optional. If it is not included, it defaults to the user making the
request.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: No
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=CreateAccessKey
&UserName=User1
&Version=2010-05-08
&AUTHPARAMS
Sample Response:
<CreateAccessKeyResponse>
<CreateAccessKeyResult>
<AccessKey>
<AccessKeyId>2PPM4XHAKMG5JHZIUPEUG</AccessKeyId>
<CreateDate>2022-03-28T01:43:46Z</CreateDate>
<SecretAccessKey>9TvXcpw2YRYRZXZCyrCELGVWMNBZyJYY95jhDc1xgH
</SecretAccessKey>
<Status>Active</Status>
<UserName>User1</UserName>
</AccessKey>
</CreateAccessKeyResult>
</CreateAccessKeyResponse>
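A response like this can be parsed with any XML library to capture the new credentials; for example, with Python's standard library (the element names follow the sample response above):

```python
import xml.etree.ElementTree as ET

def parse_create_access_key(xml_text):
    """Extract the new credentials from a CreateAccessKey response."""
    root = ET.fromstring(xml_text)
    key = root.find(".//AccessKey")
    return {
        "AccessKeyId": key.findtext("AccessKeyId"),
        "SecretAccessKey": key.findtext("SecretAccessKey"),
        "Status": key.findtext("Status"),
        "UserName": key.findtext("UserName"),
    }
```

The SecretAccessKey is returned only in this response, so it should be stored securely at this point.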
ListAccessKeys
Returns the information about the access key IDs that are associated with the
specified IAM user. If there are none, the operation returns an empty list.
If the UserName field is not specified, the user name is determined implicitly based
on the MSDP S3 access key ID used to sign the request.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ UserName
The name of the IAM user.
This parameter is optional. If it is not included, it defaults to the user making the
request.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: No
Response Elements
The following elements are returned by the server.
■ AccessKeyMetadata.member.N
A list of objects that contains metadata about the access keys.
Type: Array of AccessKeyMetadata objects. See “Data Types” on page 419.
■ IsTruncated
A flag that indicates whether there are more items to return.
Type: Boolean
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ NoSuchEntity
The request was rejected because it referenced a resource entity that does not
exist. The error message describes the resource.
HTTP Status Code: 404
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=ListAccessKeys
&UserName=User1
&Version=2010-05-08
&AUTHPARAMS
Sample Response:
<ListAccessKeysResponse>
<ListAccessKeysResult>
<AccessKeyMetadata>
<member>
<AccessKeyId>2PPM4XHAKMG5JHZIUPEUG</AccessKeyId>
<CreateDate>2022-03-28T01:43:46Z</CreateDate>
<Status>Active</Status>
<UserName>User1</UserName>
</member>
</AccessKeyMetadata>
<IsTruncated>false</IsTruncated>
</ListAccessKeysResult>
</ListAccessKeysResponse>
DeleteAccessKey
Deletes the access key pair that is associated with the specified IAM user.
If you do not specify a user name, IAM determines the user name implicitly based
on the MSDP S3 access key ID signing the request.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ AccessKeyId
The access key ID for the access key ID and secret access key you want to
delete.
Type: String
Length Constraints: Minimum length of 16. Maximum length of 128.
Pattern: [\w]+
Required: Yes
■ UserName
The name of the IAM user.
This parameter is optional. If it is not included, it defaults to the user making the
request.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: No
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ NoSuchEntity
The request was rejected because it referenced a resource entity that does not
exist. The error message describes the resource.
HTTP Status Code: 404
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=DeleteAccessKey
&AccessKeyId=GAATH0QN9N5W8TBQPSKPJ
&UserName=User1
&Version=2010-05-08
&AUTHPARAMS
Sample Response:
UpdateAccessKey
Changes the status of the specified access key from Active to Inactive, or vice
versa. This operation can be used to disable a user's key as part of a key rotation
workflow.
If the UserName is not specified, the user name is determined implicitly based on
the MSDP S3 access key ID used to sign the request.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ AccessKeyId
The access key ID for the access key that you want to update.
Type: String
Length Constraints: Minimum length of 16. Maximum length of 128.
Pattern: [\w]+
Required: Yes
■ Status
The status you want to assign to the secret access key. Active means that the
key can be used for programmatic calls to the MSDP S3 server, while Inactive
means that the key cannot be used.
Type: String
Valid Values: Active | Inactive
Required: Yes
■ UserName
The name of the IAM user.
This parameter is optional. If it is not included, it defaults to the user making the
request.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: No
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ NoSuchEntity
The request was rejected because it referenced a resource entity that does not
exist. The error message describes the resource.
HTTP Status Code: 404
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=UpdateAccessKey
&AccessKeyId=GAATH0QN9N5W8TBQPSKPJ
&Status=Inactive
&UserName=User1
&Version=2010-05-08
&AUTHPARAMS
Sample Response:
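The key rotation workflow mentioned above (create a new key, deactivate the old key, then delete it) can be sketched as follows. The `iam` object and its method names are assumptions of this sketch (they follow the boto3 IAM client convention), not part of the MSDP S3 API itself:

```python
def rotate_access_key(iam, user_name, old_access_key_id):
    """Rotate a user's access key using CreateAccessKey, UpdateAccessKey,
    and DeleteAccessKey, in that order.

    `iam` is any client object exposing create_access_key,
    update_access_key, and delete_access_key (for example, a boto3 IAM
    client pointed at the MSDP S3 endpoint).
    """
    # 1. Create the replacement key (a user can have up to two keys).
    new_key = iam.create_access_key(UserName=user_name)
    # 2. Deactivate the old key so clients fail over to the new one.
    iam.update_access_key(UserName=user_name,
                          AccessKeyId=old_access_key_id,
                          Status="Inactive")
    # 3. After verifying the new key works, delete the old key.
    iam.delete_access_key(UserName=user_name,
                          AccessKeyId=old_access_key_id)
    return new_key
```

In practice you would verify the new key with a test request between steps 2 and 3 before deleting the old key.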
PutUserPolicy
Adds or updates an inline policy document that is embedded in the specified IAM
user.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ PolicyDocument
The policy document.
You must provide policies in JSON format in IAM.
Type: String
Required: Yes
■ PolicyName
The name of the policy document.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 128.
Pattern: [\w+=,.@-]+
Required: Yes
■ UserName
The name of the user to associate the policy with.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: Yes
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ MalformedPolicyDocument
The request was rejected because the policy document was malformed.
HTTP Status Code: 400
■ NoSuchEntity
The request was rejected because it referenced a resource entity that does not
exist. The error message describes the resource.
HTTP Status Code: 404
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=PutUserPolicy
&UserName=User1
&PolicyName=ExamplePolicy
&PolicyDocument={"Version":"2012-10-17","Statement":[{"Effect":"Allow",
"Action":["s3:*"],"Resource":["arn:aws:s3:::bkt3/*"]}]}
&Version=2010-05-08
&AUTHPARAMS
Sample Response:
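The PolicyDocument value is a JSON document that must be URL-encoded when it is passed in the query string. A minimal Python sketch (the bucket name bkt3 mirrors the sample request above):

```python
import json
import urllib.parse

# Statement mirroring the PutUserPolicy sample request.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": ["arn:aws:s3:::bkt3/*"],
        }
    ],
}

# The PolicyDocument query parameter carries URL-encoded JSON.
encoded = urllib.parse.quote(json.dumps(policy, separators=(",", ":")))
```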
GetUserPolicy
Retrieves the specified inline policy document that is embedded in the specified
IAM user.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ PolicyName
The name of the policy document to get.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 128.
Pattern: [\w+=,.@-]+
Required: Yes
■ UserName
The name of the user that the policy is associated with.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: Yes
Response Elements
The following elements are returned by the server.
■ PolicyDocument
The policy document.
IAM stores policies in JSON format.
Type: String
■ PolicyName
The name of the policy.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 128.
Pattern: [\w+=,.@-]+
■ UserName
The user the policy is associated with.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 128.
Pattern: [\w+=,.@-]+
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ NoSuchEntity
The request was rejected because it referenced a resource entity that does not
exist. The error message describes the resource.
HTTP Status Code: 404
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=GetUserPolicy
&UserName=User1
&PolicyName=ExamplePolicy
&Version=2010-05-08
&AUTHPARAMS
Sample Response:
ListUserPolicies
Lists the names of the inline policies that are embedded in the specified IAM user.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ UserName
The name of the user to list policies for.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: Yes
Response Elements
The following element is returned by the server.
■ PolicyNames.member.N
A list of policy names.
Type: Array of strings
Length Constraints: Minimum length of 1. Maximum length of 128.
Pattern: [\w+=,.@-]+
■ IsTruncated
A flag that indicates whether there are more items to return.
Type: Boolean
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ NoSuchEntity
The request was rejected because it referenced a resource entity that does not
exist. The error message describes the resource.
HTTP Status Code: 404
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=ListUserPolicies
&UserName=User1
&AUTHPARAMS
Sample Response:
DeleteUserPolicy
Deletes the specified inline policy that is embedded in the specified IAM user.
Request Parameters
For information about the parameters that are common to all actions, See “Common
Parameters” on page 398.
■ PolicyName
The name identifying the policy document to delete.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 128.
Pattern: [\w+=,.@-]+
Required: Yes
■ UserName
The name identifying the user that the policy is embedded in.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern: [\w+=,.@-]+
Required: Yes
Errors
For information about the errors that are common to all actions, See “Common
Error Codes” on page 400.
■ NoSuchEntity
The request was rejected because it referenced a resource entity that does not
exist. The error message describes the resource.
HTTP Status Code: 404
■ ServiceFailure
The request processing has failed because of an unknown error, exception, or
failure.
HTTP Status Code: 500
Examples
Sample Request:
https://msdps3.veritas.com:8443/?Action=DeleteUserPolicy
&PolicyName=ExamplePolicy
&UserName=User1
&AUTHPARAMS
Sample Response:
Data Types
Table 9-3 Data types
User
■ UserName
The friendly name identifying the user.
Type: String
Length Constraints: Minimum length of 1. Maximum length
of 64.
Pattern: [\w+=,.@-]+
Required: Yes
■ CreateDate
The date and time, in ISO 8601 date-time format, when the
user was created.
Type: Timestamp
Required: Yes
AccessKey
■ AccessKeyId
The ID for this access key.
Type: String
Length Constraints: Minimum length of 16. Maximum length
of 128.
Pattern: [\w]+
Required: Yes
■ CreateDate
The date when the access key was created.
Type: Timestamp
Required: No
■ SecretAccessKey
The secret key that is used to sign requests.
Type: String
Required: Yes
■ Status
The status of the access key. Active means that the key is
valid for API calls, while Inactive means it is not.
Type: String
Valid Values: Active | Inactive
Required: Yes
■ UserName
The name of the IAM user that the access key is associated
with.
Type: String
Length Constraints: Minimum length of 1. Maximum length
of 64.
Pattern: [\w+=,.@-]+
Required: Yes
AccessKeyMetadata
■ AccessKeyId
The ID for this access key.
Type: String
Length Constraints: Minimum length of 16. Maximum length
of 128.
Pattern: [\w]+
Required: No
■ CreateDate
The date when the access key was created.
Type: Timestamp
Required: No
■ Status
The status of the access key. Active means that the key is
valid for API calls, while Inactive means that it is not.
Type: String
Valid Values: Active | Inactive
Required: No
■ UserName
The name of the IAM user that the access key is associated
with.
Type: String
Length Constraints: Minimum length of 1. Maximum length
of 64.
Pattern: [\w+=,.@-]+
Required: No
The following example policy document allows read-only access to all buckets:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        }
    ]
}
s3:*
All S3 and IAM operations. This is an administrator permission.
Note: The CreateBucket API requires this permission. The permission
s3:BypassGovernanceRetention is applied only to the action s3:*.
Write operations:
CompleteMultipartUpload
CreateMultipartUpload
AbortMultipartUpload
PutObject
DeleteObject
DeleteObjects
PutBucketVersioning
DeleteBucket
CopyObject
PutObjectLockConfiguration
PutObjectRetention
Read operations:
GetObject
GetBucketVersioning
GetBucketLocation
GetBucketEncryption
HeadBucket
CopyObject
GetObjectLockConfiguration
GetObjectRetention
List operations:
ListObjects
ListObjectsV2
ListObjectVersions
ListMultipartUploads
Supported Effect:
Only "Allow" effect is supported.
Note: The root user has embedded administrator permissions, so you cannot
attach a policy to the root user.
The permission s3:BypassGovernanceRetention is not
applied to the current resource.
Note: The Governance mode in Flex WORM S3 object lock is Enterprise mode in
the MSDP LSU on Flex WORM. The Compliance mode in Flex WORM S3 object
lock is Compliance mode in the MSDP LSU on Flex WORM.
You can use the following S3 APIs for Object Lock in Flex WORM:
■ Create Bucket
See “CreateBucket ” on page 426.
■ Put Object
See “PutObject” on page 467.
■ Copy Object
See “Copy Object” on page 468.
■ Get Object
See “GetObject” on page 462.
■ Head Object
See “HeadObject” on page 465.
■ Delete Object
See “DeleteObject” on page 458.
■ Delete Objects
See “DeleteObjects” on page 460.
■ Create Multipart Upload
See “CreateMultipartUpload” on page 457.
■ Put Object Retention (Flex WORM only)
See “Put Object Retention (Flex WORM only)” on page 474.
■ Get Object Retention (Flex WORM only)
See “Get Object Retention (Flex WORM only)” on page 475.
■ Put Object Lock Configuration (Flex WORM only)
See “Put Object Lock Configuration (Flex WORM only)” on page 449.
■ GET Object Lock Configuration (Flex WORM only)
See “Get Object Lock Configuration (Flex WORM only)” on page 451.
S3 APIs on Buckets
S3 APIs on buckets perform the following functions:
■ Create a bucket.
■ Delete a bucket.
■ Check the bucket status.
■ List the buckets.
CreateBucket
Creates a new bucket. The bucket name is globally unique across LSUs. Not
every string is an acceptable bucket name. For information about bucket naming
restrictions, see Bucket naming rules. You must specify the Region (the LSU name)
in the request body. You cannot create buckets using anonymous requests.
Request Syntax
PUT /bucket HTTP/1.1
Host: msdps3.server:8443
<?xml version="1.0" encoding="UTF-8"?>
<CreateBucketConfiguration>
<LocationConstraint>string</LocationConstraint>
</CreateBucketConfiguration>
Request Parameters
■ Bucket
Name of the bucket to be created.
Required: Yes
Type: String
■ x-amz-bucket-object-lock-enabled (Flex WORM only)
Specifies if S3 Object Lock is enabled for the new bucket.
Request Body
■ CreateBucketConfiguration
Root level tag for the CreateBucketConfiguration parameters.
Required: Yes
■ LocationConstraint
Specifies the Region where the bucket will be created.
Note: The regions in S3Srv are the LSU names. If you don't specify a region,
the bucket is created in the region PureDiskVolume (Local LSU).
Type: String
Valid Values: PureDiskVolume, CLOUD_LSU_NAME
Required: No
Response Syntax
HTTP/1.1 200
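Exact naming restrictions are product-specific (see Bucket naming rules); as an illustration under common S3 naming rules (3 to 63 characters; lowercase letters, digits, dots, and hyphens; starting and ending with a letter or digit), a client-side pre-check could be:

```python
import re

# Common S3 bucket-naming rules; consult the product's Bucket naming
# rules for the authoritative list.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_plausible_bucket_name(name):
    """Return True if name looks like a valid S3 bucket name."""
    return bool(BUCKET_RE.match(name))
```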
DeleteBucket
Deletes the bucket. All objects including all object versions and delete markers in
the bucket must be deleted before the bucket itself can be deleted.
Request Syntax
DELETE /bucket HTTP/1.1
Host: msdps3.server:8443
Request Parameters
■ Bucket
Name of the bucket to be deleted.
Required: Yes
Type: String
Response Syntax
HTTP/1.1 204
GetBucketEncryption
Returns the default encryption configuration for a bucket.
Request Syntax
GET /bucket?encryption HTTP/1.1
Host: msdps3.server:8443
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<ServerSideEncryptionConfiguration>
<Rule>
<ApplyServerSideEncryptionByDefault>
<SSEAlgorithm>string</SSEAlgorithm>
</ApplyServerSideEncryptionByDefault>
</Rule>
</ServerSideEncryptionConfiguration>
Response Body
■ ServerSideEncryptionConfiguration
Root level tag for the ServerSideEncryptionConfiguration parameters.
Required: Yes
■ Rule
Container for information about a particular server-side encryption
configuration rule.
■ ApplyServerSideEncryptionByDefault
Specifies the default server-side encryption to apply to objects in the
bucket.
■ SSEAlgorithm
Server-side encryption algorithm to use for the default encryption.
GetBucketLocation
Returns the region of the bucket in the LocationConstraint element. The
bucket's region is the MSDP LSU.
Request Syntax
GET /bucket?location HTTP/1.1
Host: msdps3.server:8443
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<LocationConstraint>
<LocationConstraint>string</LocationConstraint>
</LocationConstraint>
Response Body
■ LocationConstraint
Root level tag for the LocationConstraint parameters.
Required: Yes
■ LocationConstraint
The LocationConstraint of the bucket, which is the MSDP LSU.
GetBucketVersioning
Returns the versioning state of a bucket.
Request Syntax
GET /bucket?versioning HTTP/1.1
Host: msdps3.server:8443
Request Parameters
■ Bucket
Bucket name for which you want to get the versioning information.
Required: Yes
Type: String
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<VersioningConfiguration>
<Status>string</Status>
</VersioningConfiguration>
Response Body
■ VersioningConfiguration
Root level tag for the VersioningConfiguration parameters.
Required: Yes
■ Status
Versioning status of bucket.
Valid Values: Enabled
Possible Error Response
■ Success
HTTP status code 200.
■ NoSuchBucket
The specified bucket does not exist.
HTTP status code 404.
■ InternalError
Request failed because of an internal server error.
HTTP status code 500.
HeadBucket
Determines if a bucket exists or not. The operation returns a 200 OK if the bucket
exists and the user has permissions to access it.
Request Syntax
HEAD /bucket HTTP/1.1
Host: msdps3.server:8443
Request Parameters
■ Bucket
The name of the bucket.
Required: Yes
Type: String
Response Syntax
HTTP/1.1 200
ListBuckets
Lists all the buckets.
Request Syntax
GET / HTTP/1.1
Host: msdps3.server:8443
Request Parameters
The request does not use any URI parameters.
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult>
<Buckets>
<Bucket>
<CreationDate>timestamp</CreationDate>
<Name>string</Name>
</Bucket>
</Buckets>
</ListAllMyBucketsResult>
Response Body
■ ListAllMyBucketsResult
Root level tag for all bucket results.
Required: Yes
■ Buckets
The list of buckets owned by the user that is authenticated for the request.
■ Bucket
Information of the bucket.
■ CreationDate
Bucket creation date and time.
■ Name
Name of the bucket.
ListMultipartUploads
Lists in-progress multipart uploads. An in-progress multipart upload is a multipart
upload that was initiated with the Create Multipart Upload request but is not yet
completed or aborted. This operation returns a maximum of 10,000 multipart
uploads in the response, sorted by object key in ascending order. The operation
does not support paging.
Request Syntax
GET /bucket?uploads&prefix=Prefix HTTP/1.1
Host: msdps3.server:8443
Request Parameters
■ Bucket
Name of the bucket on which the multipart upload was initiated.
Required: Yes
Type: String
■ prefix
Limits the response to uploads that begin with the specified prefix.
Type: String
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<ListMultipartUploadsResult>
<Bucket>string</Bucket>
<KeyMarker>string</KeyMarker>
<UploadIdMarker>string</UploadIdMarker>
<NextKeyMarker>string</NextKeyMarker>
<Prefix>string</Prefix>
<NextUploadIdMarker>string</NextUploadIdMarker>
<MaxUploads>integer</MaxUploads>
<IsTruncated>boolean</IsTruncated>
<Upload>
<Initiated>timestamp</Initiated>
<Key>string</Key>
<StorageClass>string</StorageClass>
<UploadId>string</UploadId>
</Upload>
...
</ListMultipartUploadsResult>
Response Body
■ ListMultipartUploadsResult
Root level tag for the ListMultipartUploadsResult parameters.
Required: Yes
■ Bucket
Name of the bucket on which the multipart upload was initiated.
■ IsTruncated
A flag indicating whether all the results satisfying the search criteria were
returned by MSDP S3.
■ KeyMarker
The key marker at which the listing began. S3 interface for MSDP expects the
key-marker that the server returned in the previous request: use the
NextKeyMarker value from the response as the key-marker in the next request.
■ MaxUploads
Limits the number of multipart uploads that are returned in the response.
NextKeyMarker
When the response is truncated, you can use this value as the marker in a
subsequent request to get the next set of objects.
■ NextUploadIdMarker
When the response is truncated, you can use this value as marker in
subsequent request to get next set of objects.
■ UploadIdMarker
The value of UploadIdMarker passed in the request.
■ Prefix
Limits the response to keys that begin with the specified prefix.
■ Upload
Information that is related to a particular multipart upload. Response can
contain zero or multiple uploads.
■ Initiated
The time and date when the multipart upload was initiated.
Type: Timestamp
■ Key
Object name for which multipart upload was initiated.
■ StorageClass
Storage class of the uploaded part.
■ UploadId
Upload ID that identifies the multipart upload.
ListObjects
Returns a list of all the objects in a bucket. You can use the request parameters as
selection criteria to return a subset of the objects in a bucket. The API returns
objects with the latest version when the versioning is enabled on the bucket. A 200
OK response can contain valid or invalid XML. Ensure that you design your
application to parse the contents of the response and handle it appropriately.
Request Syntax
GET /bucket?delimiter=Delimiter&marker=Marker&max-keys
=Maxkeys&prefix=Prefix HTTP/1.1
Host: msdps3.server:8443
Request Parameters
■ Bucket
Name of the bucket that contains the objects.
Required: Yes
Type: String
■ delimiter
A delimiter is a character used to group keys. It rolls up the keys that contain
the same character between the prefix and the first occurrence of the delimiter
into a single result element in the CommonPrefixes collection. These rolled-up
keys are not returned elsewhere in the response. Each rolled-up result counts
as only one return against the MaxKeys value. MSDP S3 supports only "/" string
as delimiter.
Type: String
■ marker
The marker is the point where S3 interface for MSDP begins listing objects.
S3 interface for MSDP expects the marker that the server returned in the
previous request: use the NextMarker value from the response as the marker
in the next request.
Type: String
■ max-keys
Limits the number of keys that are returned in the response. By default, the
action returns up to 1,000 key names.
Type: Integer
■ prefix
Limits the response to keys that begin with the specified prefix.
Type: String
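The delimiter roll-up described above can be illustrated with a small, self-contained sketch (this simulates the documented behavior for illustration; it is not the server implementation):

```python
def group_keys(keys, prefix="", delimiter="/"):
    """Simulate how ListObjects rolls keys up into CommonPrefixes.

    Keys that contain the delimiter after the prefix are rolled up to
    the first delimiter occurrence; each rolled-up prefix appears once.
    """
    contents, common = [], []
    for key in sorted(keys):
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        pos = rest.find(delimiter)
        if pos == -1:
            contents.append(key)           # returned in Contents
        else:
            rolled = prefix + rest[:pos + 1]
            if rolled not in common:       # counts once against MaxKeys
                common.append(rolled)
    return contents, common
```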
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult>
<IsTruncated>boolean</IsTruncated>
<Marker>string</Marker>
<NextMarker>string</NextMarker>
<Contents>
<ETag>string</ETag>
<Key>string</Key>
<LastModified>timestamp</LastModified>
<Size>integer</Size>
<StorageClass>string</StorageClass>
</Contents>
...
<Name>string</Name>
<Prefix>string</Prefix>
<Delimiter>string</Delimiter>
<MaxKeys>integer</MaxKeys>
<CommonPrefixes>
<Prefix>string</Prefix>
</CommonPrefixes>
...
</ListBucketResult>
Response Body
■ ListBucketResult
Root level tag for the ListBucketResult parameters.
Required: Yes
For a versioned bucket, it is recommended that you use the List Object Versions
API to get information about all objects. If you use ListObjects on a versioned
bucket and the results are truncated, the key count in the results may be less
than MaxKeys and you can make a follow-up paginated request.
When you use the list objects APIs on a versioned bucket, if all objects under the
specified prefix are delete markers, the specified prefix is displayed as a
CommonPrefixes element.
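The marker/NextMarker pagination contract described above can be sketched as a loop. The `list_objects` callable here is a hypothetical wrapper around the HTTP request, not part of the API:

```python
def list_all_objects(list_objects, bucket, prefix=""):
    """Collect all keys by following NextMarker across truncated responses.

    `list_objects` must return a dict with "Contents" (a list of keys),
    "IsTruncated", and, when truncated, "NextMarker".
    """
    keys, marker = [], None
    while True:
        page = list_objects(bucket=bucket, prefix=prefix, marker=marker)
        keys.extend(page.get("Contents", []))
        if not page.get("IsTruncated"):
            return keys
        # Use NextMarker from this response as the marker for the next request.
        marker = page["NextMarker"]
```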
ListObjectsV2
Returns a list of all the objects in a bucket. You can use the request parameters as
a selection criteria to return a subset of the objects in a bucket. The API returns
objects with the latest version when the versioning is enabled on the bucket. A 200
OK response can contain valid or invalid XML. Make sure to design your application
to parse the contents of the response and handle it appropriately.
Request Syntax
GET /bucket?list-type=2&continuation-token=ContinuationToken
&delimiter=Delimiter&max-keys=MaxKeys&prefix=Prefix HTTP/1.1
Host: msdps3.server:8443
Request Parameters
■ Bucket
Name of the bucket that contains the objects.
Required: Yes
Type: String
■ continuation-token
The continuation-token is the point from which you want S3 interface for MSDP
to start listing objects. S3 interface for MSDP expects the continuation-token
that is returned by the server in the last request. The value of
NextContinuationToken in the response should be used in the request as
ContinuationToken. The token can be used only once and is valid for two minutes
by default.
Type: String
■ delimiter
A delimiter is a character used to group keys. It rolls up the keys that contain
the same character between the prefix and the first occurrence of the delimiter
into a single result element in the CommonPrefixes collection. These rolled-up
keys are not returned elsewhere in the response. Each rolled-up result counts
as only one return against the MaxKeys value. MSDP S3 supports only the "/"
string as a delimiter.
Type: String
■ max-keys
Limits the number of keys that are returned in the response. By default, the
action returns up to 1,000 key names.
Type: Integer
■ prefix
Limits the response to keys that begin with the specified prefix.
Type: String
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult>
<IsTruncated>boolean</IsTruncated>
<Contents>
<ETag>string</ETag>
<Key>string</Key>
<LastModified>timestamp</LastModified>
<Size>integer</Size>
<StorageClass>string</StorageClass>
</Contents>
...
<Name>string</Name>
<Prefix>string</Prefix>
<Delimiter>string</Delimiter>
<MaxKeys>integer</MaxKeys>
<CommonPrefixes>
<Prefix>string</Prefix>
</CommonPrefixes>
...
<KeyCount>integer</KeyCount>
<ContinuationToken>string</ContinuationToken>
<NextContinuationToken>string</NextContinuationToken>
</ListBucketResult>
Response Body
■ ListBucketResult
Root level tag for the ListBucketResult parameters.
Required: Yes
■ CommonPrefixes
When determining the number of returns, all the keys (up to 1,000) rolled
into a common prefix count as one. CommonPrefixes contains all keys
between Prefix and the next occurrence of the string that is specified by the
delimiter.
■ Contents
Metadata about each object that is returned.
■ ETag
SHA256 digest of the object.
■ Key
Object name.
■ LastModified
Last modification date and time of the object.
■ Size
Size of the object.
■ StorageClass
Storage class of the object.
■ Delimiter
Delimiter value that is passed in request.
■ IsTruncated
A flag indicating whether all the results satisfying the search criteria were
returned by MSDP S3.
■ ContinuationToken
The ContinuationToken is the point from which you want S3 interface for
MSDP to start listing objects. S3 interface for MSDP expects the
ContinuationToken that is returned by the server in the last request. The value
of NextContinuationToken in the response should be used in the request as
ContinuationToken.
■ KeyCount
The number of objects that are returned in the response body.
■ MaxKeys
The maximum number of objects that can be returned in the response body.
■ Name
The name of the bucket
■ NextContinuationToken
When the response is truncated, you can use this value as
ContinuationToken in a subsequent request to get the next set of objects.
■ Prefix
Limits the response to keys that begin with the specified prefix.
For a versioned bucket, it is recommended that you use the List Object Versions
API to get information about all objects. If you use list objects on a versioned
bucket and the results are truncated, the key count in the results may be less than
MaxKeys, and you can make a follow-up paginated request.
It is recommended to keep fewer than 1000 CommonPrefixes elements under a
specified prefix separated by the slash (/) delimiter character. If more than 10000
CommonPrefixes elements exist under a specified prefix, listing objects with the
prefix and the delimiter parameters in the request returns only 10000 elements.
You can use list objects without the delimiter if you want to list all elements under
a specified prefix.
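How the delimiter rolls keys up into CommonPrefixes can be illustrated with a small simulation. This is a sketch of the documented grouping behavior, not the server implementation:

```python
def group_keys(keys, prefix="", delimiter="/"):
    """Roll keys up the way a delimiter-based listing does: keys that share
    the substring between the prefix and the first delimiter collapse into
    a single CommonPrefixes entry and are not returned as Contents."""
    contents, common = [], []
    for key in sorted(keys):
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        cut = rest.find(delimiter)
        if cut == -1:
            contents.append(key)              # appears in <Contents>
        else:
            rolled = prefix + rest[:cut + 1]  # appears in <CommonPrefixes>
            if rolled not in common:
                common.append(rolled)
    return contents, common
```

Each rolled-up prefix counts as one return against MaxKeys, which is why a listing with a delimiter can return far fewer entries than objects exist.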
When using list objects APIs on a versioned bucket, if all of the objects under the
specified prefix are delete markers, the specified prefix is shown as a
CommonPrefixes element.
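A typical client drains a truncated ListObjectsV2 listing by feeding NextContinuationToken back as continuation-token. The following is a minimal sketch, where fetch is a hypothetical callable that issues the GET /bucket?list-type=2 request and returns the parsed ListBucketResult as a dict:

```python
def list_objects_v2_all(fetch):
    """Collect every key from a ListObjectsV2-style listing.
    `fetch(token)` performs the request, adding continuation-token
    when token is not None."""
    keys, token = [], None
    while True:
        page = fetch(token)
        keys.extend(entry["Key"] for entry in page.get("Contents", []))
        if not page.get("IsTruncated"):
            return keys
        # The token is single-use and expires (two minutes by default),
        # so issue the follow-up request promptly.
        token = page["NextContinuationToken"]
```

The same loop shape applies to the marker-based ListObjects API, with Marker and NextMarker in place of the continuation token.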
ListObjectVersions
Returns metadata about all versions of the objects in a bucket. You can also use
request parameters as selection criteria to return metadata about a subset of all
the object versions. S3 interface for MSDP recommends using this API with 1000
max keys and object name as a prefix to list all object versions in one request.
Request Syntax
GET /bucket/?versions&delimiter=Delimiter&key-marker=
KeyMarker&max-keys=MaxKeys&prefix=Prefix HTTP/1.1
Host: msdps3.server:8443
Or
GET /bucket/?versions&delimiter=Delimiter&max-keys=
MaxKeys&prefix=Prefix&version-id-marker=VersionIdMarker HTTP/1.1
Host: msdps3.server:8443
Request Parameters
■ Bucket
Name of the bucket that contains the objects.
Required: Yes
Type: String
■ key-marker
The value of NextKeyMarker of response should be used in request as marker.
The marker can only be used once and valid for two minutes by default. This
parameter can only be used with version-id-marker.
Type: String
■ delimiter
A delimiter is a character used to group keys. It rolls up the keys that contain
the same character between the prefix and the first occurrence of the delimiter
into a single result element in the CommonPrefixes collection. These rolled-up
keys are not returned elsewhere in the response. Each rolled-up result counts
as only one return against the MaxKeys value. MSDP S3 supports only the "/"
string as a delimiter.
Type: String
■ max-keys
Limits the number of keys returned in the response. By default, the action returns
up to 1,000 key names.
Type: Integer
■ prefix
Limits the response to keys that begin with the specified prefix.
Type: String
■ version-id-marker
The value of NextVersionIdMarker in the response should be used in the request
as VersionIdMarker. The marker can be used only once and is valid for two
minutes by default. This parameter can only be used with key-marker.
Type: String
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<ListVersionsResult>
<IsTruncated>boolean</IsTruncated>
<KeyMarker>string</KeyMarker>
<VersionIdMarker>string</VersionIdMarker>
<NextKeyMarker>string</NextKeyMarker>
<NextVersionIdMarker>string</NextVersionIdMarker>
<Version>
<ETag>string</ETag>
<IsLatest>boolean</IsLatest>
<Key>string</Key>
<LastModified>timestamp</LastModified>
<Size>integer</Size>
<StorageClass>string</StorageClass>
<VersionId>string</VersionId>
</Version>
...
<DeleteMarker>
<IsLatest>boolean</IsLatest>
<Key>string</Key>
<LastModified>timestamp</LastModified>
<VersionId>string</VersionId>
</DeleteMarker>
...
<Name>string</Name>
<Prefix>string</Prefix>
<Delimiter>string</Delimiter>
<MaxKeys>integer</MaxKeys>
<CommonPrefixes>
<Prefix>string</Prefix>
</CommonPrefixes>
...
</ListVersionsResult>
Response Body
■ ListVersionsResult
Root level tag for the ListVersionsResult parameters.
Required: Yes
■ DeleteMarker
Metadata about each delete marker. The response can have zero or more delete
markers.
■ Contents
Metadata about each object that is returned.
■ IsLatest
Specifies whether the object is the latest version.
Type: Boolean
■ Key
Delete marker name.
■ LastModified
Last modification date and time of the delete marker.
Type: Timestamp
■ VersionId
Specifies the version ID of the delete marker.
■ Delimiter
Delimiter value that is passed in the request.
■ IsTruncated
A flag indicating whether all the results satisfying the search criteria were
returned by MSDP S3.
■ KeyMarker
The value of NextKeyMarker in the response should be used in the request as
KeyMarker.
■ MaxKeys
The maximum number of objects that can be returned in the response body.
■ Name
The name of the bucket.
■ NextKeyMarker
When the response is truncated, you can use this value as KeyMarker in a
subsequent request to get the next set of objects.
■ NextVersionIdMarker
When the response is truncated, you can use this value as VersionIdMarker
in a subsequent request to get the next set of objects.
■ Prefix
Limits the response to keys that begin with the specified prefix.
■ VersionIdMarker
The value of NextVersionIdMarker in the response should be used in the request
as VersionIdMarker.
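A ListVersionsResult body interleaves Version and DeleteMarker elements, and both can be collected with the Python standard library. The following is a sketch against a hypothetical sample response:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample shaped like the ListVersionsResult response syntax above.
VERSIONS_XML = """<?xml version="1.0" encoding="UTF-8"?>
<ListVersionsResult>
  <Version><Key>a.txt</Key><VersionId>v2</VersionId><IsLatest>true</IsLatest></Version>
  <Version><Key>a.txt</Key><VersionId>v1</VersionId><IsLatest>false</IsLatest></Version>
  <DeleteMarker><Key>b.txt</Key><VersionId>v9</VersionId><IsLatest>true</IsLatest></DeleteMarker>
</ListVersionsResult>"""

def all_versions(body):
    """Return (kind, key, version_id, is_latest) for every Version and
    DeleteMarker element in the response."""
    root = ET.fromstring(body)
    return [(tag, e.findtext("Key"), e.findtext("VersionId"),
             e.findtext("IsLatest") == "true")
            for tag in ("Version", "DeleteMarker")
            for e in root.findall(tag)]
```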
PutBucketVersioning
Sets the versioning state of an existing bucket. You can set the versioning state
with the value Enabled, which enables versioning for the objects in the bucket.
If the versioning state has never been set on a bucket, the bucket has no versioning
state. After you enable versioning on the bucket, the bucket is in the versioning
state and cannot be set back to a non-versioning state.
Request Syntax
Request Parameters
■ Bucket
Name of the bucket that contains the objects.
Required: Yes
Type: String
Request body
■ Status
The versioning state of the bucket.
Valid Values: Enabled
Required: Yes
Type: String
Response Syntax
HTTP/1.1 200
PutObjectLockConfiguration
Creates or replaces an Object Lock configuration on the specified bucket.
Request Syntax
<?xml version="1.0" encoding="UTF-8"?>
<ObjectLockConfiguration>
<ObjectLockEnabled>string</ObjectLockEnabled>
<Rule>
<DefaultRetention>
<Days>integer</Days>
<Mode>string</Mode>
<Years>integer</Years>
</DefaultRetention>
</Rule>
</ObjectLockConfiguration>
Request Parameters
■ Bucket
The bucket for which you want to create or replace Object Lock configuration.
Required: Yes
Type: String
Request body
■ ObjectLockConfiguration
Root level tag for the ObjectLockConfiguration parameters.
Required: Yes
■ ObjectLockEnabled
Indicates whether this bucket has an Object Lock configuration enabled. Enable
ObjectLockEnabled when you apply ObjectLockConfiguration to a bucket.
Valid Values: Enabled
Required: No
Type: String
■ Rule
Specifies the Object Lock rule for the specified objects. Enable the rule when
you apply ObjectLockConfiguration to a bucket.
The settings require both a mode and a period. The period can be either Days
or Years. You cannot specify Days and Years at the same time.
Required: No
Type: ObjectLockRule data type
Response Syntax
HTTP/1.1 200
Possible Error Response
■ InvalidBucketState
Object Lock configuration cannot be enabled on existing buckets.
HTTP status code 409.
■ InvalidRequest
This error can occur for several reasons. For details, refer to the error message.
HTTP status code 400.
GetObjectLockConfiguration
Gets the Object Lock configuration for a bucket.
Request Parameters
■ Bucket
The bucket for which you want to retrieve Object Lock configuration.
Required: Yes
Type: String
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<ObjectLockConfiguration>
<ObjectLockEnabled>string</ObjectLockEnabled>
<Rule>
<DefaultRetention>
<Days>integer</Days>
<Mode>string</Mode>
<Years>integer</Years>
</DefaultRetention>
</Rule>
</ObjectLockConfiguration>
Response body
■ ObjectLockConfiguration
Root level tag for the ObjectLockConfiguration parameters.
Required: Yes
■ ObjectLockEnabled
Indicates whether this bucket has an Object Lock configuration enabled. Enable
ObjectLockEnabled when you apply ObjectLockConfiguration to a bucket.
Valid Values: Enabled
Required: No
Type: String
■ Rule
Specifies the Object Lock rule for the specified objects. Enable the rule when
you apply ObjectLockConfiguration to a bucket.
The settings require both a mode and a period. The period can be either Days
or Years. You cannot specify Days and Years at the same time.
Required: No
Type: ObjectLockRule data type
Possible Error Response
■ Success
HTTP status code 200.
■ AccessDenied
Access Denied.
HTTP status code 403.
■ NoSuchBucket
The specified bucket does not exist.
HTTP status code 404.
■ S3srvExtObjectLockConfigurationNotFound
Object Lock configuration does not exist for this bucket.
HTTP status code 404.
■ InvalidRequest
This error can occur for several reasons. For details, refer to the error message.
HTTP status code 400.
S3 APIs on Objects
S3 APIs on objects perform the following main functions:
■ Upload data (object) to the MSDP server.
■ Download data from the MSDP server.
■ Delete data from the MSDP server.
■ List data in the MSDP server.
AbortMultipartUpload
Aborts a multipart upload. After a multipart upload is aborted, no additional parts
can be uploaded using that upload ID. The storage that is consumed by any
previously uploaded parts is freed.
Request Syntax
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
■ Key
The name of the object for which multipart upload was initiated.
Required: Yes
Type: String
■ uploadId
Upload ID of multipart upload.
Required: Yes
Type: String
Response Syntax
HTTP/1.1 204
Possible Error Response
■ InternalError
Request failed because of an internal server error.
HTTP status code 500.
CompleteMultipartUpload
Completes a multipart upload by assembling previously uploaded parts.
Request Syntax
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
■ Key
The name of the object.
Required: Yes
Type: String
■ uploadId
Upload ID of multipart upload.
Required: Yes
Type: String
Request body
■ CompleteMultipartUpload
Root level tag for the CompleteMultipartUpload parameters.
Required: Yes
■ Part
List of parts to create final object. It contains ETag and PartNumber.
■ ETag
ETag of the uploaded part.
■ PartNumber
PartNumber of the uploaded part.
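The CompleteMultipartUpload request body can be assembled from the (PartNumber, ETag) pairs collected from the UploadPart responses. The following is a minimal sketch:

```python
def complete_body(parts):
    """Build the CompleteMultipartUpload request body from
    (PartNumber, ETag) pairs. Parts are sorted by part number,
    since the final object is assembled in part order."""
    rows = "".join(
        f"<Part><PartNumber>{number}</PartNumber><ETag>{etag}</ETag></Part>"
        for number, etag in sorted(parts)
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            f"<CompleteMultipartUpload>{rows}</CompleteMultipartUpload>")
```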
Response Syntax
HTTP/1.1 200
x-amz-version-id: VersionId
<?xml version="1.0" encoding="UTF-8"?>
<CompleteMultipartUploadResult>
<Bucket>string</Bucket>
<Key>string</Key>
<ETag>string</ETag>
</CompleteMultipartUploadResult>
Response Headers
■ x-amz-version-id
Version ID of the created object.
Response Body
■ CompleteMultipartUploadResult
Root level tag for the CompleteMultipartUploadResult parameters.
Required: Yes
■ Bucket
Name of the bucket.
Required: Yes
Type: String
■ Key
The name of the object.
Required: Yes
Type: String
■ ETag
SHA256 digest of the object.
Possible Error Response
■ AccessDenied
Request was rejected because user authentication failed.
HTTP status code 403.
■ NoSuchBucket
The specified bucket does not exist.
HTTP status code 404.
■ InternalError
Request failed because of an internal server error.
HTTP status code 500.
CreateMultipartUpload
Initiates a multipart upload and returns an upload ID. This upload ID is used to
associate all the parts in the specific multipart upload.
Request Syntax
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
■ Key
The name of the object for which multipart upload was initiated.
Required: Yes
Type: String
■ x-amz-object-lock-mode (Flex WORM only)
Specifies the Object Lock mode that you want to apply to the uploaded object.
Valid Values: GOVERNANCE, COMPLIANCE
■ x-amz-object-lock-retain-until-date (Flex WORM only)
Specifies the date and time when you want the Object Lock to expire.
Note: If this option is not specified, the retention value will be calculated using
the bucket default object lock configuration.
object_lock_retain_until_date = current_system_timestamp +
bucket_default_object_lock_retention
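The default-retention calculation quoted above can be sketched as follows. Exactly one of Days or Years is set, mirroring the rule that both cannot be specified together; treating a year as 365 days is an assumption for illustration only, not documented behavior:

```python
from datetime import datetime, timedelta, timezone

def default_retain_until(now, days=None, years=None):
    """object_lock_retain_until_date = current_system_timestamp +
    bucket_default_object_lock_retention (sketch)."""
    if (days is None) == (years is None):
        raise ValueError("specify exactly one of Days or Years")
    span = timedelta(days=days) if days is not None else timedelta(days=365 * years)
    return now + span
```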
Response Syntax
HTTP/1.1 200
x-amz-version-id: VersionId
<?xml version="1.0" encoding="UTF-8"?>
<InitiateMultipartUploadResult>
<Bucket>string</Bucket>
<Key>string</Key>
<UploadId>string</UploadId>
</InitiateMultipartUploadResult>
Response Body
■ InitiateMultipartUploadResult
Root level tag for the InitiateMultipartUploadResult parameters.
Required: Yes
■ Bucket
Name of the bucket.
■ Key
The name of the object.
■ UploadId
ID for the initiated multipart upload.
DeleteObject
Deletes the specified object in the bucket for a non-versioned bucket. If the
versioning is enabled on the bucket and VersionId is passed, the specified version
of the object is deleted. If the versioning is enabled on the bucket and VersionId
is not passed, a DeleteMarker is created for the object.
Request Syntax
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
■ Key
The name of the object for which multipart upload was initiated.
Required: Yes
Type: String
■ versionId
The version ID of the Object.
Type: String
■ x-amz-bypass-governance-retention (Flex WORM only)
Indicates whether S3 Object Lock should bypass Governance mode restrictions
to process this operation. To use this header, you must have the
s3:BypassGovernanceRetention permission.
Response Syntax
HTTP/1.1 204
x-amz-delete-marker: DeleteMarker
x-amz-version-id: VersionId
Response Headers
■ x-amz-delete-marker
Specifies whether the deleted object is a delete marker.
■ x-amz-version-id
Specifies Version ID of the deleted object.
Possible Error Response
■ Success
HTTP status code 204.
■ InvalidArgument
Invalid Argument.
HTTP status code 400.
■ AccessDenied
Request was rejected because user authentication failed.
HTTP status code 403.
■ NoSuchKey
The specified key does not exist.
HTTP status code 404.
■ NoSuchBucket
The specified bucket does not exist.
HTTP status code 404.
■ InternalError
Request failed because of an internal server error.
HTTP status code 500.
■ InvalidRequest
Current object is protected by Object Lock and can not be overwritten.
HTTP status code 400.
DeleteObjects
Deletes multiple objects from a bucket by using a single request.
The Content-MD5 header is required for a Multi-Object Delete request. The S3
interface uses the header value to ensure that your request body has not been
altered in transit.
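The required Content-MD5 value is the base64-encoded 128-bit MD5 digest of the request body, which can be computed with the Python standard library:

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    """Compute the Content-MD5 header for a Multi-Object Delete request:
    the base64-encoded 128-bit MD5 digest of the request body."""
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")
```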
Request Syntax
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
■ x-amz-bypass-governance-retention (Flex WORM only)
Indicates whether S3 Object Lock should bypass Governance mode restrictions
to process this operation. To use this header, you must have the
s3:BypassGovernanceRetention permission.
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<DeleteResult>
<Deleted>
<DeleteMarker>boolean</DeleteMarker>
<DeleteMarkerVersionId>string</DeleteMarkerVersionId>
<Key>string</Key>
<VersionId>string</VersionId>
</Deleted>
...
<Error>
<Code>string</Code>
<Key>string</Key>
<Message>string</Message>
<VersionId>string</VersionId>
</Error>
...
</DeleteResult>
Response Body
■ DeleteResult
Root level tag for the DeleteResult parameters.
Required: Yes
■ Deleted
Information of the objects that are successfully deleted.
■ DeleteMarker
Specifies whether the deleted object was a delete marker.
■ DeleteMarkerVersionId
Specifies versionId of the deleted delete marker.
■ Key
The name of the object
■ VersionId
■ Error
Information of the objects which failed to be deleted.
■ Code
Error code of the error that occurred while deleting the object.
■ Key
The name of the object
■ Message
Error message
■ VersionId
VersionId of the object or delete marker for which error occurred.
GetObject
Retrieves objects from an S3 bucket. For larger object downloads, use the
range-based Get Object API.
Request Syntax
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
■ Key
Name of the object.
Required: Yes
Type: String
■ partNumber
Number of the part of the object that is being read. This is a positive integer
between 1 and 10,000.
Type: Integer
■ versionId
Version Id of object.
Type: String
Request Headers
■ Range
Returns the specified range bytes of object.
Type: Integer
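For range-based downloads, a client splits the object into Range header values and issues one GetObject request per range. A minimal sketch, assuming a simple fixed chunk size:

```python
def byte_ranges(total_size, chunk_size):
    """Split an object of total_size bytes into Range header values
    (inclusive byte ranges) for ranged GetObject requests."""
    ranges, start = [], 0
    while start < total_size:
        end = min(start + chunk_size, total_size) - 1
        ranges.append(f"bytes={start}-{end}")
        start = end + 1
    return ranges
```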
Response Syntax
HTTP/1.1 200
x-amz-delete-marker: DeleteMarker
accept-ranges: AcceptRanges
Last-Modified: LastModified
Content-Length: ContentLength
ETag: ETag
x-amz-version-id: VersionId
Content-Range: ContentRange
x-amz-storage-class: StorageClass
Body
Response Headers
■ x-amz-delete-marker
Specifies whether the returned object is a delete marker. If the object is not a
delete marker, this header is not added to the response.
■ Last-Modified
The last modified time of the object.
■ Content-Length
Returned body size in bytes.
■ ETag
Specifies SHA256 of the returned object.
■ x-amz-version-id
Specifies the version ID of returned object.
■ Content-Range
The range of object that is returned in response.
■ x-amz-storage-class
Specifies the storage class of the returned object.
■ x-amz-object-lock-mode (Flex WORM only)
The Object Lock mode currently in place for this object.
Valid Values: GOVERNANCE, COMPLIANCE
■ x-amz-object-lock-retain-until-date (Flex WORM only)
The date and time when this object's Object Lock expires.
■ x-amz-meta-msdps3-object-creator
The API used to upload the object. The value PutGroupObject means PutObject
with the header x-amz-meta-snowball-auto-extract.
Valid Values: PutObject, PutGroupObject, UploadPart
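Because the ETag is documented here as the SHA256 digest of the object, a downloaded body can be verified locally. The following sketch assumes the digest is hex-encoded and tolerates optional surrounding quotes:

```python
import hashlib

def etag_matches(data: bytes, etag: str) -> bool:
    """Verify a downloaded body against the ETag response header,
    documented as the SHA256 digest of the object. Hex encoding and
    optional quotes are assumptions, not documented behavior."""
    return hashlib.sha256(data).hexdigest() == etag.strip('"').lower()
```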
Possible Error Response
■ Success
HTTP status code 200.
■ InvalidArgument
Invalid Argument. Invalid version ID specified.
HTTP status code 400.
■ EntityTooLarge
Your proposed upload exceeds the maximum allowed object size.
HTTP status code 400.
■ AccessDenied
Request was rejected because user authentication failed.
HTTP status code 403.
■ NoSuchKey
The specified key does not exist.
HTTP status code 404.
■ NoSuchBucket
The specified bucket does not exist.
HTTP status code 404.
■ InternalError
Request failed because of an internal server error.
HTTP status code 500.
HeadObject
Retrieves metadata from an object without returning the object itself. This operation
is used when you are interested only in an object’s metadata.
Request Syntax
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
■ Key
Name of the object.
Required: Yes
Type: String
■ partNumber
Number of the part of the object that is being read. This is a positive integer
between 1 and 10,000.
Type: Integer
■ versionId
Version ID of object.
Type: String
Response Syntax
HTTP/1.1 200
x-amz-delete-marker: DeleteMarker
accept-ranges: AcceptRanges
Last-Modified: LastModified
Content-Length: ContentLength
ETag: ETag
x-amz-version-id: VersionId
Content-Range: ContentRange
Response Headers
■ x-amz-delete-marker
Specifies whether the returned object is a delete marker. If the object is not a
delete marker, this header is not added to the response.
■ Last-Modified
The last modified time of the object.
■ Content-Length
Returned body size in bytes.
■ ETag
Specifies SHA256 of returned object.
■ x-amz-version-id
Specifies the version ID of returned object.
■ Content-Range
The range of object that is returned in response.
■ x-amz-object-lock-mode (Flex WORM only)
The Object Lock mode currently in place for this object.
Valid Values: GOVERNANCE, COMPLIANCE
■ x-amz-object-lock-retain-until-date (Flex WORM only)
The date and time when this object's Object Lock expires.
■ x-amz-meta-msdps3-object-creator
The API used to upload the object. The value PutGroupObject means PutObject
with the header x-amz-meta-snowball-auto-extract.
Valid Values: PutObject, PutGroupObject, UploadPart
Possible Error Response
■ Success
HTTP status code 200.
■ InvalidArgument
Invalid Argument. Invalid version ID specified.
HTTP status code 400.
■ AccessDenied
Request was rejected because user authentication failed.
HTTP status code 403.
■ NoSuchKey
The specified key does not exist.
HTTP status code 404.
■ NoSuchBucket
The specified bucket does not exist.
HTTP status code 404.
PutObject
Adds an object to a bucket. If the bucket is enabled for versioning, the Put Object
API returns the VersionId of the object.
Request Syntax
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
■ Key
Name of the object.
Required: Yes
Type: String
■ x-amz-object-lock-mode (Flex WORM only)
The Object Lock mode that you want to apply to this object.
Valid Values: GOVERNANCE, COMPLIANCE
■ x-amz-object-lock-retain-until-date (Flex WORM only)
The date and time when you want this object's Object Lock to expire. Must be
formatted as a timestamp parameter.
Response Syntax
HTTP/1.1 200
ETag: ETag
x-amz-version-id: VersionId
Response Headers
■ x-amz-version-id
The version ID of the object PUT in the bucket.
Possible Error Response
■ Success
HTTP status code 200.
■ EntityTooLarge
The object size exceeded maximum allowed size.
HTTP status code 400.
■ AccessDenied
Request was rejected because user authentication failed.
HTTP status code 403.
■ NoSuchBucket
The specified bucket does not exist.
HTTP status code 404.
■ InternalError
Request failed because of an internal server error.
HTTP status code 500.
■ InvalidRequest
This error can occur for several reasons. For details, refer to the error message.
HTTP status code 400.
Copy Object
Creates a copy of an object in the storage server. You must have read access to
the source object and write access to the destination bucket. If the bucket is
versioning-enabled, the Copy Object API returns the VersionId of the object. When
copying an object, neither the metadata nor the ACLs are preserved.
Request Syntax
Request Parameters
■ Bucket
The name of the destination bucket.
Required: Yes
Type: String
■ Key
The key of destination object.
Required: Yes
Type: String
■ x-amz-copy-source
Specifies the source object for the copy operation.
The value format: Specify the name of the source bucket and the key of the
source object, separated by a slash (/).
For example, to copy the object msdps3/copyright.txt from the bucket srcbk,
use srcbk/msdps3/copyright.txt. The value must be URL-encoded.
To copy a specific version of an object, append ?versionId=<version-id> to
the value. For example,
srcbk/msdps3/copyright.txt?versionId=AAAA1234567890
If you don't specify a version ID, the latest version of the source object is copied.
Pattern: \/.+\/.+
Required: Yes
■ x-amz-object-lock-mode (Flex WORM only)
The Object Lock mode that you want to apply to this copied object.
Valid Values: GOVERNANCE, COMPLIANCE
■ x-amz-object-lock-retain-until-date (Flex WORM only)
The date and time when you want this copied object's Object Lock to expire. It
must be formatted as a timestamp parameter.
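The x-amz-copy-source value can be built and URL-encoded with the Python standard library. A minimal sketch with the optional versionId suffix described above:

```python
from urllib.parse import quote

def copy_source(bucket, key, version_id=None):
    """Build the x-amz-copy-source header value: URL-encoded bucket/key,
    with an optional ?versionId= suffix."""
    value = quote(f"{bucket}/{key}")
    if version_id is not None:
        value += f"?versionId={version_id}"
    return value
```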
Response Syntax
HTTP/1.1 200
x-amz-copy-source-version-id: CopySourceVersionId
x-amz-version-id: VersionId
<?xml version="1.0" encoding="UTF-8"?>
<CopyObjectResult>
<ETag>string</ETag>
<LastModified>timestamp</LastModified>
</CopyObjectResult>
Response Headers
■ x-amz-copy-source-version-id
Version of the copied object in the source bucket.
■ x-amz-version-id
The version ID of the copied object in the destination bucket.
UploadPart
Uploads a part in a multipart upload.
Request Syntax
Request Parameters
■ Bucket
Name of the bucket.
Required: Yes
Type: String
■ Key
Name of the object.
Required: Yes
Type: String
■ partNumber
Number of the part that is being uploaded.
Required: Yes
Type: String
■ uploadId
Upload ID of multipart upload.
Required: Yes
Type: String
■ Content-MD5
The base64-encoded 128-bit MD5 digest of the part data. This parameter is
required if object lock parameters are specified.
Response Syntax
HTTP/1.1 200
Run the tar or gzip command to manually batch small files, and then transfer them
to S3 interface for MSDP.
For example: tar -czf <archive-file> <small files or directory of small
files>
Request Parameters
■ Bucket
HTTP/1.1 200
ETag: ETag
x-amz-version-id: VersionId
Response Headers
■ x-amz-version-id
The version-id of the object PUT in the bucket.
Possible Error Response
■ Success
HTTP status code 200.
■ EntityTooLarge
The object size exceeded maximum allowed size.
HTTP status code 400.
■ AccessDenied
Request was rejected because user authentication failed.
HTTP status code 403.
■ NoSuchBucket
The specified bucket does not exist.
HTTP status code 404.
PutObjectRetention
Places an Object Retention configuration on an object.
Request Parameters
■ Bucket
The bucket name that contains the object you want to apply this Object Retention
configuration to.
Required: Yes
Type: String
■ Key
The key name for the object that you want to apply this Object Retention
configuration to.
Required: Yes
Type: String
■ versionId
The version ID for the object that you want to apply this Object Retention
configuration to.
■ x-amz-bypass-governance-retention
Indicates whether this action should bypass Governance mode restrictions.
Request Body
■ Retention
Root level tag for the Retention parameters.
Required: Yes
■ Mode
Indicates the Retention mode for the specified object.
Valid Values: GOVERNANCE, COMPLIANCE
Required: No
Type: String
■ RetainUntilDate
The date on which this Object Lock Retention will expire.
Required: No
Type: Timestamp
Response Syntax
HTTP/1.1 200
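RetainUntilDate must be supplied as a timestamp. The following sketch formats an aware datetime as an ISO 8601 UTC string; the exact accepted format is an assumption based on common S3 timestamp handling:

```python
from datetime import datetime, timezone

def retain_until(when):
    """Format RetainUntilDate from an aware datetime as an
    ISO 8601 UTC timestamp (assumed format)."""
    return when.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
```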
GetObjectRetention
Retrieves the retention settings of an object.
Request Syntax
Request Parameters
■ Bucket
The bucket name that contains the object for which you want to retrieve the
retention settings.
Required: Yes
Type: String
■ Key
The key name for the object for which you want to retrieve the retention settings.
Required: Yes
Type: String
■ versionId
The version ID for the object for which you want to retrieve the retention settings.
Type: String
Response Syntax
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<Retention>
<Mode>string</Mode>
<RetainUntilDate>timestamp</RetainUntilDate>
</Retention>
Response Body
■ Retention
Root level tag for the Retention parameters.
Required: Yes
■ Mode
Indicates the Retention mode for the specified object.
Valid Values: GOVERNANCE, COMPLIANCE
Type: String
■ RetainUntilDate
The date on which this Object Lock Retention expires.
Type: Timestamp
Possible Error Response
■ Success
HTTP status code 200.
■ InvalidArgument
Invalid Argument. Invalid version id specified.
HTTP status code 400.
■ AccessDenied
Request was rejected because user authentication failed.
HTTP status code 403.
■ NoSuchKey
The specified key does not exist.
HTTP status code 404.
■ NoSuchBucket
The specified bucket does not exist.
HTTP status code 404.
■ Eliminate each inner .. path name element (the parent directory) along with
the non-.. element that precedes it.
■ Eliminate .. elements that begin a rooted path: that is, replace "/.." by "/" at
the beginning of a path.
■ The returned path ends in a slash only if it is the root "/".
■ If the result of this process is an empty string, it returns the string ".".
■ If the object name includes "%", it is treated as an encoded name.
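The normalization rules above can be sketched as a small function. This is a minimal illustration of those rules, not the product code:

```python
def clean(path):
    """Lexical path cleaning per the rules above: resolve inner ".."
    elements, replace a leading "/.." with "/", keep a trailing slash
    only for the root, and return "." for an empty result."""
    rooted = path.startswith("/")
    parts = []
    for element in path.split("/"):
        if element in ("", "."):
            continue
        if element == "..":
            if parts and parts[-1] != "..":
                parts.pop()          # .. removes the preceding element
            elif not rooted:
                parts.append("..")   # keep leading .. in relative paths
            # a rooted leading ".." is dropped: "/.." becomes "/"
        else:
            parts.append(element)
    cleaned = ("/" if rooted else "") + "/".join(parts)
    return cleaned or "."
```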
7 On the Recovery options page, select the Destination of the recovery. This
is a file system path of the destination client. Click Next.
8 On the Review page, review the details and click Start recovery.
After the restore job is finished, the recovered objects are in the destination
file system path of the destination NetBackup client.
The command displays the IAM configurations in the cloud LSU and current
IAM configurations.
The following warning appears:
WARNING: This operation overwrites current IAM configurations
with the IAM configurations in cloud LSU.
To overwrite the current IAM configurations, type the following and press Enter.
overwrite-with-<cloud_LSU_name>
; @restart
LogLevel=<log level>
Best practices
Following are the best practices for using S3 interface for MSDP:
■ The time-out settings in the S3 client can affect the request procedure. If the
server does not respond before the time-out, the client cancels the request
automatically.
Chapter 10
Monitoring deduplication
activity
This chapter includes the following topics:
For the method to show the MSDP compression rate, see “Viewing MSDP job
details” on page 485.
On UNIX and Linux, you can use the NetBackup bpdbjobs command to display
the deduplication rate. However, you must configure it to do so.
To view the global MSDP deduplication ratio
1 Open the web UI.
2 On the left, click Storage > Disk storage.
3 Click the Storage servers tab.
4 Click the storage server name to view the global MSDP deduplication ratio.
To view the MSDP deduplication ratio for a backup job in the Activity monitor
1 In the NetBackup web UI, click Activity monitor.
2 Click the Jobs tab.
The Deduplication ratio column shows the ratio for each job.
Field descriptions
Table 10-1 describes the deduplication activity fields.
Field Description
Dedup space saving The percentage of space that is saved by data deduplication (data is not written again).
Compression space saving The percentage of space that is saved because the deduplication engine compressed some data before writing it to storage.
cache hits The percentage of data segments in the backup that are represented in the local fingerprint cache. The deduplication plug-in did not have to query the database about those segments.
If the pd.conf file FP_CACHE_LOCAL parameter is set to 0 on the storage server, the cache hits output is not included for the jobs that run on the storage server.
CR sent The amount of data that is sent from the deduplication plug-in to the component that stores
the data. In NetBackup, the NetBackup Deduplication Engine stores the data.
If the storage server deduplicates the data, it does not travel over the network. The
deduplicated data travels over the network when the deduplication plug-in runs on a computer
other than the storage server, as follows:
CR sent over FC The amount of data that is sent from the deduplication plug-in over Fibre Channel to the
component that stores the data. In NetBackup, the NetBackup Deduplication Engine stores
the data.
dedup The percentage of data that was stored already. That data is not stored again.
multi-threaded stream used Indicates that the Deduplication Multi-Threaded Agent processed the backup.
See “About the MSDP Deduplication Multi-Threaded Agent” on page 78.
PDDO stats Indicates that the job details are for storage on the following destinations:
rebased The percentage of segments that were rebased (that is, defragmented) during the backup.
Those segments had poor data locality.
NetBackup reports backup job completion only after backup rebasing is completed.
Using OpenStorage client direct to restore... Indicates that the restore travels over the client-direct data path and does not use NetBackup media server components to process the data.
encrypted Indicates whether the newly transferred data that is written to the deduplication pool is encrypted.
Descriptions of the job details that are not related to deduplication are in a different
topic.
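As a rough illustration of how a field such as Dedup space saving relates to a job's byte counts, the percentage can be derived from the bytes scanned and the bytes actually written to storage. The function and the numbers below are hypothetical, not NetBackup output:

```python
def dedup_space_saving(scanned_bytes, stored_bytes):
    """Percentage of scanned data that was NOT written again.

    Hypothetical illustration of the "Dedup space saving" field;
    not taken from NetBackup source code.
    """
    if scanned_bytes == 0:
        return 0.0
    return 100.0 * (scanned_bytes - stored_bytes) / scanned_bytes

# Example: 100 units scanned, 8 units of unique data stored -> 92% saving
saving = dedup_space_saving(100, 8)
```

The compression space saving field is analogous, computed against the bytes remaining after deduplication rather than the full scanned size.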
If you use operating system tools to examine storage space usage, their results
may differ from the usage reported by NetBackup, as follows:
■ NetBackup usage data includes the reserved space that the operating system
tools do not include.
■ If other applications use the storage, NetBackup cannot report usage accurately.
NetBackup requires exclusive use of the storage.
Table 10-2 describes the options for monitoring capacity and usage.
Option Description
Change Storage Server dialog box The Properties tab of the Change Storage Server dialog box displays storage capacity and usage. It also displays the global deduplication ratio.
This dialog box displays the most current capacity usage that is
available in the NetBackup web UI.
Disk Pools window The Disk Pools window of the NetBackup web UI displays the
values that were stored when NetBackup polled the disk pools.
NetBackup polls every 5 minutes; therefore, the value may not
be as current as the value that is displayed in the Change
Storage Server dialog box.
To display the window, click Storage > Disk storage > Disk pools.
Disk Pool Status report The Disk Pool Status report displays the state of the disk pool
and usage information.
Disk Logs report The Disk Logs report displays event and message information.
A useful event for monitoring capacity is event 1044; the following is the description of the event in the Disk Logs report: The usage of one or more system resources has exceeded a warning level.
The nbdevquery command The nbdevquery command shows the state of the disk volume and its properties and attributes. It also shows capacity, usage, and percent used.
Number of containers : 1
Average container size : 1049 bytes (0.00MiB)
Space allocated for containers : 1049 bytes (1.02KiB)
Reserved space : 136.25GiB (68.1%)
Reserved space for cloud cache : 14.00GiB (22.0%)
Reserved space for vpfs cloud cache : 128.00GiB (64.0%)
For systems that host a Media Server Deduplication Pool, you can use the
following crcontrol command to show information about each partition:
/usr/openv/pdde/pdcr/bin/crcontrol --dsstat 3
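If you collect these values in a monitoring script, the `Name : value` layout shown above can be parsed generically. This sketch assumes the layout shown in this guide and is not an official Veritas tool:

```python
def parse_dsstat(text):
    """Parse 'Name : value' lines like the crcontrol --dsstat sample above.

    Assumes the layout shown in this guide; not an official parser.
    """
    stats = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        # Split on the first colon only; sizes may contain parentheses
        key, _, value = line.partition(":")
        stats[key.strip()] = value.strip()
    return stats

sample = """\
Number of containers : 1
Average container size : 1049 bytes (0.00MiB)
Reserved space : 136.25GiB (68.1%)
"""
stats = parse_dsstat(sample)
```

The values remain strings; converting sizes such as `136.25GiB` to bytes would be a separate step.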
Size The size of the storage that NetBackup can use: the Raw size of the
storage minus the file system Reserved space.
If the file system has a concept of root reserved space (such as EXT3
or VxFS), that space cannot be used for storage. The crcontrol
command does not include reserved space in the available space.
Unlike the crcontrol command, some operating system tools report
root reserved space as usable space.
Used The amount of deduplicated data that is stored on the file system.
NetBackup obtains the file system used space from the operating
system.
See “About MSDP storage capacity and usage reporting” on page 488.
See “About MSDP container files” on page 490.
See “Processing the MSDP transaction queue manually” on page 519.
See “MSDP storage full conditions” on page 733.
Replication The job that replicates a backup image to a target primary displays in the Activity Monitor as a
Replication job. The Target Master label displays in the Storage Unit column for this type of job.
Similar to other Replication jobs, the job that replicates images to a target primary can work on
multiple backup images in one instance.
The detailed status for this job contains a list of the backup IDs that were replicated.
Import The job that imports a backup copy into the target primary domain displays in the Activity Monitor as
an Import job. An Import job can import multiple copies in one instance. The detailed status for an
Import job contains a list of processed backup IDs and a list of failed backup IDs.
Note that a successful replication does not confirm that the image was imported at the target primary.
If the data classifications are not the same in both domains, the Import job fails and NetBackup does
not attempt to import the image again.
Failed Import jobs fail with status 191 and appear in the Problems report when run on the target
primary server.
The image is expired and deleted during an Image Cleanup job. Note that the originating domain
(Domain 1) does not track failed imports.
For example,
** Encryption Crawler:
Encryption Crawler is unavailable for WORM Deduplication pools
or data stored on Cloud Tier.
--verbose or -v: Verbose output includes the KMS key IDs for a KMS-encrypted
image or the data container IDs for an unencrypted image.
Note: This command may run for a long time if the image consumes a large
number of data containers.
For example,
This example output is shortened; more flags may appear in actual output.
The following describes the options that require the arguments that are specific
to your domain:
-setattribute attribute The attribute is the name of the argument that represents the new functionality.
-setattribute attribute The attribute is the name of the argument that represents the functionality.
See “About changing the MSDP storage server name or storage path” on page 501.
Step 1 Ensure that no deduplication activity occurs Deactivate all backup policies that use deduplication storage.
See the NetBackup Administrator's Guide, Volume I.
Step 2 Expire the backup images Expire all backup images that reside on the deduplication disk storage.
Warning: Do not delete the images. They are imported back into NetBackup
later in this process.
If you use the bpexpdate command to expire the backup images, use the
-nodelete parameter.
Step 3 Delete the storage units that use the disk pool See the NetBackup Administrator's Guide, Volume I.
Step 4 Delete the disk pool See “Deleting a Media Server Deduplication Pool” on page 516.
Step 5 Delete the deduplication storage server See “Deleting an MSDP storage server” on page 504.
Step 7 Delete the deduplication host configuration file Each load balancing server contains a deduplication host configuration file. If you use load balancing servers, delete the deduplication host configuration file from those servers.
Step 8 Delete the identity file and the file system table file Delete the following files from the MSDP storage server, depending on the operating system:
UNIX:
/storage_path/data/.identity
/storage_path/etc/puredisk/fstab.cfg
Windows:
storage_path\data\.identity
storage_path\etc\puredisk\fstab.cfg
Step 9 Change the storage server name or the storage location See the computer or the storage vendor's documentation.
See “Use fully qualified domain names” on page 61.
Step 10 Reconfigure the storage server When you configure deduplication, select the host by the new name and enter the new storage path (if you changed the path). You can also use a new network interface.
Step 11 Import the backup images See the NetBackup Administrator's Guide, Volume I.
See “Changing the MSDP storage server name or storage path” on page 502.
Windows: install_path\Program Files\Veritas\pdde\PDDE_deleteConfig.bat
On UNIX/Linux:
/usr/openv/volmgr/bin/tpconfig -add -storage_server sshostname
-stype PureDisk -sts_user_id UserID -password PassWord
On UNIX/Linux:
/usr/openv/volmgr/bin/tpconfig -delete -storage_server sshostname
-stype PureDisk -sts_user_id UserID
This example output is shortened; more flags may appear in actual output.
The following describes the options that require the arguments that are specific
to your domain:
-setattribute attribute The attribute is the name of the argument that represents the new functionality.
It is recommended that you take the following actions when the volume topology
changes:
■ Discuss the changes with the storage administrator. You need to understand
the changes so you can change your disk pools (if required) so that NetBackup
can continue to use them.
■ If the changes were not planned for NetBackup, ask your storage administrator
to revert the changes so that NetBackup functions correctly again.
NetBackup can process changes to the following volume properties:
■ Replication Source
■ Replication Target
■ None
If these volume properties change, NetBackup can update the disk pool to match
the changes. NetBackup can continue to use the disk pool, although the disk pool
may no longer match the storage unit or storage lifecycle purpose.
The following table describes the possible outcomes and how to resolve them.
Outcome Description
NetBackup discovers the new volumes that you can add to the disk pool. The new volumes appear in the Change Disk Pool dialog box. Text in the dialog box changes to indicate that you can add the new volumes to the disk pool.
The replication properties of all of the volumes changed, but they are still consistent. A Disk Pool Configuration Alert pop-up box notifies you that the properties of all of the volumes in the disk pool changed, but they are all the same (homogeneous).
You must click OK in the alert box, after which the disk pool properties in the Change Disk Pool dialog box are updated to match the new volume properties.
If new volumes are available that match the new properties, NetBackup displays those
volumes in the Change Disk Pool dialog box. You can add those new volumes to the
disk pool.
In the Change Disk Pool dialog box, select one of the following two choices:
■ OK. To accept the disk pool changes, click OK in the Change Disk Pool dialog box.
NetBackup saves the new properties of the disk pool.
NetBackup can use the disk pool, but it may no longer match the intended purpose
of the storage unit or storage lifecycle policy. Change the storage lifecycle policy
definitions to ensure that the replication operations use the correct source and target
disk pools, storage units, and storage unit groups. Alternatively, work with your storage
administrator to change the volume properties back to their original values.
■ Cancel. To discard the changes, click Cancel in the Change Disk Pool dialog box.
NetBackup does not save the new disk pool properties. NetBackup can use the disk
pool, but it may no longer match the intended use of the storage unit or storage
lifecycle policy.
The replication properties of the volumes changed, and they are now inconsistent. A Disk Pool Configuration Error pop-up box notifies you that the replication properties of some of the volumes in the disk pool changed. The properties of the volumes in the disk pool are not homogeneous.
In the Change Disk Pool dialog box, the properties of the disk pool are unchanged, and
you cannot select them (that is, they are dimmed). However, the properties of the individual
volumes are updated.
Because the volume properties are not homogeneous, NetBackup cannot use the disk
pool until the storage configuration is fixed.
NetBackup does not display new volumes (if available) because the volumes already in
the disk pool are not homogeneous.
To determine what has changed, compare the disk pool properties to the volume
properties.
See “Viewing the replication topology for Auto Image Replication” on page 151.
Work with your storage administrator to understand the changes and why they were
made. The replication relationships may or may not have to be re-established. If the
relationship was removed in error, re-establishing it seems justified. If you
are retiring or replacing the target replication device, you probably do not want to
re-establish the relationships.
The disk pool remains unusable until the properties of the volumes in the disk pool are
homogeneous.
In the Change Disk Pool dialog box, click OK or Cancel to exit the Change Disk Pool
dialog box.
NetBackup cannot find a volume or volumes that were in the disk pool. A Disk Pool Configuration Alert pop-up box notifies you that an existing volume or volumes were deleted from the storage device.
NetBackup can use the disk pool, but data may be lost.
To protect against accidental data loss, NetBackup does not allow volumes to be deleted
from a disk pool.
To continue to use the disk pool, do the following:
■ Use the bpimmedia command or the Images on Disk report to display the images
on the specific volume.
■ Expire the images on the volume.
■ Use the nbdevconfig command to set the volume state to DOWN so NetBackup
does not try to use it.
-setattribute attribute The attribute is the name of the argument that represents the new functionality.
■ UNIX: /usr/openv/netbackup/bin/admincmd
■ Windows: install_path\NetBackup\bin\admincmd
See “Recovering the MSDP storage server after NetBackup catalog recovery” on page 542.
Windows: install_path\NetBackup\bin\admincmd\nbdevconfig
-changestate -stype PureDisk -dp disk_pool_name -dv PureDiskVolume
-state state
See “Recovering the MSDP storage server after NetBackup catalog recovery” on page 542.
For example,
msdpimgutil spacereport --client sadiexxvmxx.xxx.xxx.veritas.com
--policy PCloud --startdate 2023-09-02T01:23:22 --enddate
2023-12-03T01:23:22 --copynumber 1
2 Get the size consumed by all the backup images from a specified client.
msdpimgutil spacereport --client sadiexxvmxx.xxx.xxx.veritas.com
3 Get the size consumed by all backup images from a specified policy or client
and policy.
msdpimgutil spacereport --client sadiexxvmxx.xxx.xxx.veritas.com
--policy dirPlocal2
4 Get the size consumed by all the backup images between the specified start
date and end date.
msdpimgutil spacereport --startdate 2023-09-02T01:23:22 --enddate
2023-12-03T01:23:22
--dsid is an optional parameter. Without a dsid value, all local and cloud LSUs
process the MSDP transaction queue.
2 To determine if the queue processing is still active, run the following command:
UNIX: /usr/openv/pdde/pdcr/bin/crcontrol --processqueueinfo --dsid
<dsid>
Warning: Veritas recommends that you do not disable the data integrity checking.
If you do so, NetBackup cannot find, repair, or report data corruption.
Windows: install_path\Veritas\pdde\pddecfg -a
enabledataintegritycheck -d <dsid>
Windows: install_path\Veritas\pdde\pddecfg -a
disabledataintegritycheck -d <dsid>
Windows: install_path\Veritas\pdde\pddecfg -a
getdataintegritycheck -d <dsid>
Enable CRC does not run if queue processing is active or during disk read
or write operations.
UNIX: /usr/openv/pdde/pdcr/bin/crcontrol
--crccheckon
Windows: install_path\Veritas\pdde\Crcontrol.exe
--crccheckon
Windows: install_path\Veritas\pdde\Crcontrol.exe
--crccheckoff
Enable fast checking Fast check CRC mode begins the check from container 64 and does not sleep between checking containers.
When the fast CRC ends, CRC behavior reverts to the behavior
before fast checking was invoked.
UNIX: /usr/openv/pdde/pdcr/bin/crcontrol
--crccheckrestart
Windows: install_path\Veritas\pdde\Crcontrol.exe
--crccheckrestart
Windows: install_path\Veritas\pdde\Crcontrol.exe
--crccheckstate
Warning: Veritas recommends that you do not disable the data integrity checking.
If you do so, NetBackup cannot find, repair, or report data corruption.
Table 11-3 The contentrouter.cfg file parameters for data integrity checking
EnableCRCCheck true Enable or disable cyclic redundancy checking (CRC) of the data
container files.
The longer the sleep interval, the more time it takes to check containers.
The greater the number of containers, the less time it takes to check
all containers, but the more system resources it takes.
ShutdownCRWhenError false Stops the NetBackup Deduplication Manager when a data loss is
discovered.
GarbageCheckRemainDCCount 100 The number of containers from failed jobs not to check for garbage. A
failed backup or replication job still produces data containers. Because
failed jobs are retried, retaining those containers means NetBackup
does not have to send the fingerprint information again. As a result,
retried jobs consume less time and fewer system resources than when
first run.
The greater the number of days, the fewer the objects that are checked
each day. A greater number of days also means that fewer storage server
resources are consumed each day.
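Taken together, these parameters appear in the contentrouter.cfg file as simple key=value entries. The following fragment is illustrative only; the values shown are the defaults listed in Table 11-3, and you should verify parameter placement against your own contentrouter.cfg before editing:

```
EnableCRCCheck=true
ShutdownCRWhenError=false
GarbageCheckRemainDCCount=100
```

After you edit contentrouter.cfg, the change takes effect per the restart behavior described for that file elsewhere in this guide.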
Type Description
Normal backup rebasing The rebasing that occurs during a backup if the normal rebasing criteria are met, as follows:
Backup rebasing occurs only for the full backups that pass through
the normal MSDP backup process. For example, the NetBackup
Accelerator backups do not pass through the MSDP backup process.
Periodic backup rebasing The rebasing that occurs during a backup if the periodic rebasing criteria are met, as follows:
■ The container has not been rebased within the last 3 months.
■ For that backup, the data segments in the container consume less
space than the FP_CACHE_REBASING_THRESHOLD value. The
FP_CACHE_REBASING_THRESHOLD parameter is in the pd.conf
file.
See “MSDP pd.conf file parameters” on page 186.
Backup rebasing occurs only for the full backups that pass through
the normal MSDP backup process. For example, the NetBackup
Accelerator backups do not pass through the MSDP backup process.
Server-side rebasing The storage rebasing that occurs on the server if the rebasing criteria
are met. Server-side rebasing includes the deduplicated data that
does not pass through the normal MSDP backup process. For
example, the NetBackup Accelerator backups do not pass through
the MSDP backup process.
Parameter Description
RebaseMaxPercentage The maximum percentage of the data segments to be rebased in a file. For any
file, if the percentage of the data segments reaches this threshold, the remainder
of the data segments are not rebased.
RebaseMaxTime The maximum time span in seconds of data segments to be rebased in a file. If
this threshold is reached, NetBackup does not rebase the remainder of the data
segments.
RebaseMinContainers The minimum number of containers in which a file’s data segments are stored for
the file to be eligible for rebasing. If the number of containers in which a file’s data
segments are stored is less than RebaseMinContainers, NetBackup does not
rebase the data segments.
RebaseScatterThreshold The data locality threshold for a container. If the total size of a file’s data segments
in a container is less than RebaseScatterThreshold, NetBackup rebases all
of the file’s data segments.
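As a hedged illustration, the rebasing parameters take the same key=value form in the contentrouter.cfg file. The values below are placeholders, not recommended settings; check your installed contentrouter.cfg for the actual defaults and value formats:

```
RebaseMaxPercentage=5
RebaseMaxTime=150
RebaseMinContainers=4
RebaseScatterThreshold=64MB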
Type Description
Normal restore The MSDP storage server first rehydrates (that is, reassembles) the data. NetBackup then chooses
the least busy media server to move the data to the client. (NetBackup chooses the least busy
media server from those that have credentials for the NetBackup Deduplication Engine.) The
media server bptm process moves the data to the client.
The following media servers have credentials for the NetBackup Deduplication Engine:
Restore directly to the client The storage server can bypass the media server and move the data directly to the client.
You must configure NetBackup to bypass a media server and receive the restore data directly
from the storage server.
By default, NetBackup decompresses the data on the NetBackup media server, except for a client
direct restore. In that case, the data decompression is done on the client. You can configure
NetBackup so that the data is decompressed on the client rather than the media server. See
the RESTORE_DECRYPT_LOCAL parameter in the MSDP pd.conf file.
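As a hedged example, a pd.conf entry takes this general key = value form (the value shown is illustrative; see the pd.conf parameter descriptions in this guide for the supported values and the default):

```
RESTORE_DECRYPT_LOCAL = 1
```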
Scenario 1 Yes Configure the client in another domain and restore directly to the client.
Scenario 2 No Create the client in the recovery domain and restore directly to the
client. This is the most likely scenario.
The steps to recover the client are the same as any other client recovery. The actual
steps depend on the client type, the storage type, and whether the recovery is an
alternate client restore.
For restores that use Granular Recovery Technology (GRT), an application instance
must exist in the recovery domain. The application instance is required so that
NetBackup has something to recover to.
Restore from a shadow copy If NetBackup detects corruption in the MSDP catalog, the Deduplication Manager restores the catalog automatically from the most recent shadow copy. The automatic restore process also plays a transaction log so that the recovered MSDP catalog is current.
Although the shadow copy restore process is automatic, a restore procedure is available if
you need to recover from a shadow copy manually.
See “Restoring the MSDP catalog from a shadow copy” on page 536.
Recover from a backup If you configured an MSDP catalog backup policy and a valid backup exists, you can recover
the catalog from a backup. As a general rule, you should only attempt to recover the MSDP
catalog from a backup if you have no alternatives. As an example: A hardware problem or
a software problem results in the complete loss of the MSDP catalog and the shadow copies.
The greatest chance for a successful outcome when you recover the MSDP catalog from
a backup is when the recovery is guided. An unsuccessful outcome may cause data loss.
For the customers who need to recover the MSDP catalog, Veritas wants to guide them
through the process. Therefore, to recover the MSDP catalog from a backup, contact your
Veritas support representative. You can refer the support representative to Knowledge Base
Article 000047346, which contains the recovery instructions.
Caution: You must determine if your situation is severe enough to recover the
catalog. Veritas recommends that you contact your Veritas Support representative
before you restore or recover the MSDP catalog. The Support representative can
help you determine if you need to recover the catalog or if other solutions are
available.
Restore the entire MSDP catalog from a shadow copy In this scenario, you want to restore the entire catalog from one of the shadow copies.
Restore a specific MSDP database file The MSDP catalog is composed of multiple small database files. Those files are organized in the file system by the client name and policy name, as follows:
UNIX:
/database_path/databases/catalogshadow/2/ClientName/PolicyName
Windows:
database_path\databases\catalogshadow\2\ClientName\PolicyName
You can restore the database files for a client and a policy
combination. The restore of a specific client’s and policy’s
database files is always from the most recent shadow copy.
4 Enable all policies and storage lifecycle policies that back up to the Media
Server Deduplication Pool.
5 Restart the jobs that were canceled before the recovery.
To restore a specific MSDP database file from a shadow copy
1 If any MSDP jobs are active for the client and the backup policy combination,
either cancel them or wait until they complete.
2 Disable the policies and storage lifecycle policies for the client and the backup
policy combination that back up to the Media Server Deduplication Pool.
3 Change to the shadow directory for the client and policy from which you want
to recover that database file. That directory contains the database files from
which to recover. The following are the pathname formats:
UNIX:
/database_path/databases/catalogshadow/2/ClientName/PolicyName
Windows:
database_path\databases\catalogshadow\2\ClientName\PolicyName
5 Enable all policies and storage lifecycle policies that back up to the Media
Server Deduplication Pool.
6 If you canceled jobs before you recovered the database files, restart them.
Note: This procedure describes recovery of the disk on which the NetBackup media
server software resides, not the disk on which the deduplicated data resides. The
disk may or may not be the system boot disk.
Step 1 Replace the disk. If the disk is a system boot disk, also install the operating system.
See the hardware vendor and operating system documentation.
Step 2 Mount the storage. Ensure that the storage and database are mounted at the same locations.
Step 3 Install and license the NetBackup media server software See the NetBackup Installation Guide for UNIX and Windows:
http://www.veritas.com/docs/DOC5332
See “About the MSDP license” on page 71.
Step 4 Delete the deduplication host configuration file Each load balancing server contains a deduplication host configuration file. If you use load balancing servers, delete the deduplication host configuration file from those servers.
Step 5 Delete the credentials on deduplication servers If you have load balancing servers, delete the NetBackup Deduplication Engine credentials on those media servers.
Step 6 Add the credentials to the storage server Add the NetBackup Deduplication Engine credentials to the storage server.
See “Adding NetBackup Deduplication Engine credentials” on page 506.
Step 7 Get a configuration file template If you did not save a storage server configuration file before the disk failure, get a template configuration file.
Step 8 Edit the configuration file See “Editing an MSDP storage server configuration file” on page 204.
Step 9 Configure the storage server Configure the storage server by uploading the configuration from the file you
edited.
Step 10 Add load balancing servers If you use load balancing servers in your environment, add them to your
configuration.
Veritas recommends that you consider the following items before you recover:
■ The new computer must use the same byte order as the old computer.
Warning: If the new computer does not use the same byte order as the old
computer, you cannot access the deduplicated data. In computing, endianness
describes the byte order that represents data: big endian and little endian. For
example, SPARC processors and Intel processors use different byte orders.
Therefore, you cannot replace an Oracle Solaris SPARC host with an Oracle
Solaris host that has an Intel processor.
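The incompatibility can be demonstrated in a few lines of Python; this only illustrates why the same stored bytes are interpreted differently on the two architectures, and is not NetBackup code:

```python
import struct

value = 0x01020304

# Big-endian layout (most significant byte first, e.g. SPARC)
big = struct.pack(">I", value)
# Little-endian layout (least significant byte first, e.g. Intel x86)
little = struct.pack("<I", value)

# The same 32-bit integer produces reversed byte sequences on disk,
# so storage written on one byte order cannot be read on the other
# without conversion.
assert big == b"\x01\x02\x03\x04"
assert little == b"\x04\x03\x02\x01"
assert big == little[::-1]
```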
■ Veritas recommends that the new computer use the same operating system as
the old computer.
■ Veritas recommends that the new computer use the same version of NetBackup
as the old computer.
If you use a newer version of NetBackup on the new computer, ensure that you
perform any data conversions that may be required for the newer release.
If you want to use an older version of NetBackup on the replacement host,
contact your Veritas support representative.
Step 1 Delete the storage units that use the disk pool See the NetBackup Administrator's Guide, Volume I.
Step 2 Delete the disk pool See “Deleting a Media Server Deduplication Pool” on page 516.
Step 3 Delete the deduplication storage server See “Deleting an MSDP storage server” on page 504.
Step 4 Delete the deduplication host configuration file Each load balancing server contains a deduplication host configuration file. If you use load balancing servers, delete the deduplication host configuration file from those servers.
Step 5 Delete the credentials on deduplication servers If you have load balancing servers, delete the NetBackup Deduplication Engine credentials on those media servers.
Step 6 Configure the new host so it meets deduplication requirements When you configure the new host, consider the following:
■ You can use the same host name or a different name.
■ You can use the same Storage Path or a different Storage Path. If you
use a different Storage Path, you must move the deduplication storage
to that new location.
■ If the Database Path on the original host is different from the Storage
Path, you can do one of the following:
■ You can use the same Database Path.
■ You can use a different Database Path. If you do, you must move
the deduplication database to the new location.
■ You do not have to continue to use a different Database Path. You
can move the databases directory into the Storage Path and then
specify only the Storage Path when you configure the storage server.
■ You can use the host’s default network interface or specify a network
interface.
If the original host used a specific network interface, you do not have to
use the same interface name.
■ If you had configured the previous MSDP storage server to use MSDP
Encryption using KMS service, you must use the same configuration for
the new MSDP storage server.
Step 7 Connect the storage to the host Use the storage path that you configured for this replacement host.
See the computer or the storage vendor's documentation.
Step 8 Install the NetBackup media server software on the new host See the NetBackup Installation Guide.
Step 9 Reconfigure deduplication You must use the same credentials for the NetBackup Deduplication Engine.
Step 10 Import the backup images. See the NetBackup Administrator's Guide, Volume I.
Note: Import the backup images only when the NetBackup catalog is not present; otherwise, use the bpimage command to update the storage server names and disk pool names for catalog backup images.
Recovering MSDP 542
Recovering the MSDP storage server after NetBackup catalog recovery
Warning: If the new computer does not use the same byte order as the old
computer, you cannot access the deduplicated data. In computing, endianness
describes the byte order that represents data: Big endian and little endian. For
example, SPARC processors and Intel processors use different byte orders.
Therefore, you cannot replace an Oracle Solaris SPARC host with an Oracle
Solaris host that has an Intel processor.
■ Veritas recommends that the new computer use the same operating system as
the old computer.
■ Veritas recommends that the new computer use the same version of NetBackup
as the old computer.
If you use a newer version of NetBackup on the new computer, ensure that you
perform any data conversions that may be required for the newer release.
Replacing MSDP hosts 544
Replacing the MSDP storage server host computer
Step 1 Expire the backup images Expire all backup images that reside on the deduplication disk storage.
Warning: Do not delete the images. They are imported back into NetBackup
later in this process.
If you use the bpexpdate command to expire the backup images, use the
-nodelete parameter.
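As a sketch only, the expire-without-delete step can be scripted as a dry run. The backup IDs below are illustrative assumptions, and the loop prints each command rather than running it; review the output before executing anything on a real NetBackup server:

```shell
# Dry-run sketch: print the bpexpdate invocations that would expire, but not
# delete, each image. -d 0 expires immediately; -nodelete keeps the image
# data on disk so it can be imported later. Backup IDs are placeholders.
ADMINCMD=/usr/openv/netbackup/bin/admincmd   # default NetBackup location
for backup_id in clientA_1695600000 clientB_1695600100; do
  echo "${ADMINCMD}/bpexpdate -backupid ${backup_id} -d 0 -nodelete"
done
```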
Step 2 Delete the storage units that See the NetBackup Administrator's Guide, Volume I.
use the disk pool
Step 3 Delete the disk pool See “Deleting a Media Server Deduplication Pool” on page 516.
Step 4 Delete the deduplication See “Deleting an MSDP storage server” on page 504.
storage server
Step 5 Delete the deduplication host Each load balancing server contains a deduplication host configuration file.
configuration file If you use load balancing servers, delete the deduplication host configuration
file from those servers.
Step 6 Delete the credentials on If you have load balancing servers, delete the NetBackup Deduplication
deduplication servers Engine credentials on those media servers.
Step 7 Configure the new host so it When you configure the new host, consider the following:
meets deduplication
■ You can use the same host name or a different name.
requirements
■ You can use the same Storage Path or a different Storage Path. If you
use a different Storage Path, you must move the deduplication storage
to that new location.
■ If the Database Path on the original host is different from the Storage
Path, you can do one of the following:
■ You can use the same Database Path.
■ You can use a different Database Path. If you do, you must move
the deduplication database to the new location.
■ You do not have to continue to use a different Database Path. You
can move the databases directory into the Storage Path and then
specify only the Storage Path when you configure the storage server.
■ You can use the host’s default network interface or specify a network
interface.
If the original host used a specific network interface, you do not have to
use the same interface name.
■ If you had configured the previous MSDP storage server to use MSDP
Encryption using KMS service, you must use the same configuration for
the new MSDP storage server.
Step 8 Connect the storage to the Use the storage path that you configured for this replacement host.
host
See the computer or the storage vendor's documentation.
Step 9 Install the NetBackup media See the NetBackup Installation Guide.
server software on the new
host
Step 10 Reconfigure deduplication See “Configuring MSDP server-side deduplication” on page 75.
Step 11 Import the backup images See the NetBackup Administrator's Guide, Volume I.
Chapter 14
Uninstalling MSDP
This chapter includes the following topics:
■ Deactivating MSDP
Deactivating MSDP
You cannot remove the deduplication components from a NetBackup media server.
You can disable the components and remove the deduplication storage files and
the catalog files. The host remains a NetBackup media server.
This process assumes that all backup images that reside on the deduplication disk
storage have expired.
Warning: If you remove deduplication and valid NetBackup images reside on the
deduplication storage, data loss may occur.
Step 1 Remove client deduplication Remove the clients that deduplicate their own data from the client
deduplication list.
Step 2 Delete the storage units that See the NetBackup Administrator's Guide, Volume I:
use the disk pool
http://www.veritas.com/docs/DOC5332
Step 3 Delete the disk pool See “Deleting a Media Server Deduplication Pool” on page 516.
Step 4 Delete the deduplication See “Deleting an MSDP storage server” on page 504.
storage server
Deleting the deduplication storage server does not alter the contents of the
storage on physical disk. To protect against inadvertent data loss, NetBackup
does not automatically delete the storage when you delete the storage server.
Step 6 Delete the deduplication host Each load balancing server contains a deduplication host configuration file.
configuration file If you use load balancing servers, delete the deduplication host configuration
file from those servers.
Step 7 Delete the storage directory Delete the storage directory and database directory. (Using a separate
and the database directory database directory was an option when you configured deduplication.)
Warning: If you delete the storage directory and valid NetBackup images
reside on the deduplication storage, data loss may occur.
[Figure: MSDP server components — deduplication plug-in, Multi-Threaded Agent,
proxy plug-in, NetBackup Deduplication Engine (spoold), and NetBackup
Deduplication Manager (spad)]
Component Description
Deduplication plug-in The plug-in runs on the deduplication storage server and on load balancing servers.
Multi-Threaded Agent The NetBackup Deduplication Multi-Threaded Agent uses multiple threads for
asynchronous network I/O and CPU core calculations. The agent runs on the storage
server, load balancing servers, and clients that deduplicate their own data.
NetBackup Deduplication The NetBackup Deduplication Engine is one of the storage server core components.
Engine It provides many of the deduplication functions, which are described in Table 15-2.
The binary file name is spoold, which is short for storage pool daemon; do not confuse
it with a print spooler daemon. The spoold process appears as the NetBackup
Deduplication Engine in the NetBackup web UI.
NetBackup Deduplication The deduplication manager is one of the storage server core components. The
Manager deduplication manager maintains the configuration and controls internal processes,
optimized duplication, security, and event escalation.
The deduplication manager binary file name is spad. The spad process appears as
the NetBackup Deduplication Manager in the NetBackup web UI.
Proxy plug-in The proxy plug-in manages control communication with the clients that back up their
own data. It communicates with the OpenStorage proxy server (nbostpxy) on the
client.
Reference database The reference database stores the references that point to every data segment of which
a file is composed. Unique fingerprints identify data segments. The reference database
is partitioned into multiple small reference database files to improve scalability and
performance.
The reference database is separate from the NetBackup catalog. The NetBackup
catalog maintains the usual NetBackup backup image information.
Table 15-2 describes the components and functions within the NetBackup
Deduplication Engine.
Deduplication architecture 550
MSDP server components
Component Description
Connection and Task The Connection and Task Manager manages all of the
Manager connections from the load balancing servers and the clients
that deduplicate their own data. The Connection and Task
Manager is a set of functions and threads that does the
following:
Data integrity checking The NetBackup Deduplication Engine checks the integrity of
the data and resolves integrity problems.
Data Store Manager The Data Store Manager manages all of the data container
files. The Data Store Manager is a set of functions and threads
that provides the following:
Index Cache Manager The Index Cache Manager manages the fingerprint cache.
The cache improves fingerprint lookup speed.
Reference Database Engine The Reference Database Engine stores the references that
point to the data segments, such as read-from or write-to
references. It manipulates a single database file at a time.
Media server deduplication backup process
[Figure: Media server deduplication backup process — control path and data path
between the client (bpbkar), the media server, and the Media Server
Deduplication Pool]
The following list describes the backup process when a media server deduplicates
the backups and the destination is a Media Server Deduplication Pool:
■ The NetBackup Job Manager (nbjm) starts the Backup/Restore Manager (bpbrm)
on a media server.
■ The Backup/Restore Manager starts the bptm process on the media server and
the bpbkar process on the client.
■ The Backup/Archive Manager (bpbkar) on the client generates the backup
images and moves them to the media server bptm process.
The Backup/Archive Manager also sends the information about files within the
image to the Backup/Restore Manager (bpbrm). The Backup/Restore Manager
sends the file information to the bpdbm process on the primary server for the
NetBackup database.
■ The bptm process moves the data to the deduplication plug-in.
■ The deduplication plug-in retrieves a list of IDs of the container files from the
NetBackup Deduplication Engine. Those container files contain the fingerprints
from the last full backup for the client. The list is used as a cache so the plug-in
does not have to request each fingerprint from the engine.
■ The deduplication plug-in separates the files in the backup image into segments.
■ The deduplication plug-in buffers the segments and then sends batches of them
to the Deduplication Multi-Threaded Agent. Multiple threads and shared memory
are used for the data transfer.
■ The NetBackup Deduplication Multi-Threaded Agent processes the data
segments in parallel using multiple threads to improve throughput performance.
The agent then sends only the unique data segments to the NetBackup
Deduplication Engine.
If the host is a load-balancing server, the Deduplication Engine is on a different
host, the storage server.
■ The NetBackup Deduplication Engine writes the data to the Media Server
Deduplication Pool.
The first backup may have a 0% deduplication rate, although a 0% rate is
unlikely. Zero percent means that all file segments in the backup data are unique.
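The segment-and-fingerprint idea described in the steps above can be illustrated with ordinary shell tools. This is a toy sketch only: fixed 4-byte segments and sha256sum stand in for MSDP's variable-length segmentation and its own fingerprinting engine, and nothing here is product code:

```shell
# Toy illustration of deduplication: split a data stream into fixed-size
# "segments", fingerprint each one, and count the unique fingerprints.
# Real MSDP uses variable-length segments and its own engine.
workdir=$(mktemp -d)
printf 'AAAABBBBAAAACCCC' > "$workdir/stream.bin"
split -b 4 "$workdir/stream.bin" "$workdir/seg_"
unique=$(sha256sum "$workdir"/seg_* | awk '{print $1}' | sort -u | wc -l)
total=$(ls "$workdir"/seg_* | wc -l)
echo "segments=${total} unique=${unique}"   # prints: segments=4 unique=3
rm -rf "$workdir"
```

Here one of the four segments repeats, so only three unique segments would be sent to the Deduplication Engine.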
The following list describes the backup process for an MSDP client to a Media
Server Deduplication Pool:
■ The NetBackup Job Manager (nbjm) starts the Backup/Restore Manager (bpbrm)
on a media server.
■ The NetBackup Deduplication Engine writes the data to the Media Server
Deduplication Pool.
The first backup may have a 0% deduplication rate, although a 0% deduplication
rate is unlikely. Zero percent means that all file segments in the backup data
are unique.
Chapter 16
Configuring and managing
universal shares
This chapter includes the following topics:
Key benefits
The following list briefly describes the key benefits of universal shares:
■ NAS-based storage target
Unlike traditional NAS-based storage targets, universal shares offer all of the
data protection and management capabilities that are provided by NetBackup,
including Storage Lifecycle Policies (SLPs).
■ Database dump location
Universal shares offer a space saving (deduplicated) dump location, along with
direct integration with NetBackup technologies including data retention,
replication, and direct integration with cloud technologies.
■ Financial and time savings
Universal shares eliminate the need to purchase and maintain third-party
intermediary storage. Use of this storage typically doubles the required I/O
throughput since the data must be moved twice. Universal shares also cut in
half the time it takes to protect valuable application or database data.
■ Protection points
The universal share protection point offers a fast point-in-time copy of all data
that exists in the share. This copy of the data can be retained like any other data
that is protected within NetBackup. All advanced NetBackup data management
facilities such as Auto Image Replication (A.I.R.), storage lifecycle policies,
optimized duplication, cloud, and tape are all available with any data in the
universal share.
■ Copy Data Management (CDM)
The universal share protection point also offers powerful CDM tools. A read/write
copy of any protection point can be "provisioned" or made available through a
NAS (CIFS/NFS) based share. A provisioned copy of any protection point can
be used for common CDM activities, including instant recovery or access of
data in the provisioned protection point. For example, a database that has been
previously dumped to the universal share can be run directly from the provisioned
protection point.
■ Back up and restore without client software and restore to any client
Client software is not required for universal share backups or restores. You can
also restore universal shares to any client. Universal shares work with any
POSIX-compliant operating system that supports NFS or CIFS.
Introduction to universal shares
Client support
The universal share feature supports a wide array of clients and data types.
NetBackup software is not required on the client where the share is mounted. Any
operating system that uses a POSIX-compliant file system and can mount a CIFS
or an NFS network share can write data to a universal share. As the data comes
in to the universal share, it is written directly into the Media Server Deduplication
Pool (MSDP). No additional step or process of writing the data to a standard disk
partition and then moving it to the deduplication pool is necessary.
Table 16-1 Process for configuring and using universal shares with an MSDP
build-your-own (BYO) server
Step Description
1 Identify a machine. Make sure that the MSDP BYO server complies with
prerequisites and hardware requirements.
2 In the NetBackup web UI, create a universal share. See Create a universal share
in the NetBackup Web UI Administrator's Guide.
3 Mount the universal share that was created from the NetBackup web UI. See
“Mounting a universal share” on page 582.
5 Optionally, use the ingest mode to dump data or to load backup data from a
workload to the universal share over NFS/CIFS.
When ingest mode is turned on, the backup script triggers the universal share to
persist all the data from memory to disk on the client side at the end of the backup
or the dump. Ingest mode is faster than normal mode because it does not guarantee
that all of the ingested data is persisted to disk until ingest mode is turned off.
See “Load backup data to a universal share with the ingest mode” on page 607.
■ Samba services must be installed and running if you want to use share over
CIFS/SMB.
Ensure that the Linux samba and samba winbind packages are installed.
■ yum install samba samba-common samba-winbind
samba-winbind-clients samba-winbind-modules -y
■ You must configure Samba users on the corresponding storage server and enter
the credentials on the client.
See “Configuring universal share user authentication” on page 562.
■ Ensure that the xfsprogs is installed:
■ yum install xfsprogs -y
■ Ensure that the following commands are run to grant permissions to the SMB
shares:
■ setsebool -P samba_export_all_rw on
■ setsebool -P samba_export_all_ro on
■ Install and run NGINX. The minimum recommended NGINX version is 1.24.0.
■ After NGINX is installed, the HTTP web service at port 80 is enabled by default.
Remove /etc/nginx/conf.d/default.conf or edit the file to disable the HTTP
web service if it is not needed.
■ Ensure that the /mnt folder on the storage server is not directly mounted by any
mount points. Mount points should be created on subfolders of /mnt instead.
■ Ensure that you configure the storage server for MSDP. If you configure the
universal share feature on BYO after storage is configured or upgraded without
the NGINX service installed, run the command on the storage server:
/usr/openv/pdde/vpfs/bin/vpfs_config.sh --configure_byo
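The /mnt caveat above can be verified with a small check against the mounts table. This helper is an illustration, not part of NetBackup; it reads the table from an argument so it can be exercised against a sample file (pass /proc/mounts on a real storage server):

```shell
# Returns 0 when the given directory is itself a mount point according to
# the supplied mounts table (same format as /proc/mounts).
is_mounted() {
  awk -v d="$1" '$2 == d { found = 1 } END { exit !found }' "$2"
}

# Exercise against a sample table: /mnt/msdp is mounted, /mnt itself is not.
sample=$(mktemp)
printf '/dev/sda1 /mnt/msdp ext4 rw 0 0\n' > "$sample"
is_mounted /mnt "$sample"      && echo "/mnt is directly mounted" \
                               || echo "/mnt is free"
is_mounted /mnt/msdp "$sample" && echo "/mnt/msdp is mounted"
rm -f "$sample"
```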
■ CPU: Minimum 2.2-GHz clock rate. 64-bit processor. Minimum 4 cores; 8 cores
recommended. For 64 TBs of storage, the Intel x86-64 architecture requires
eight cores. Enable the VT-X option in the CPU configuration.
■ RAM: 16 GB for 8 TBs to 32 TBs of storage (1 GB of RAM for 1 TB of storage).
32 GBs of RAM for more than 32 TBs of storage. An additional 500 MB of RAM
for each live mount.
■ Disk: Disk size depends on the size of your backup. Refer to the hardware
requirements for NetBackup and Media Server Deduplication Pool (MSDP). If a
system has multiple data partitions, all the partitions must be the same
size. Example: If a BYO server has a first partition at 4 TB, all additional
data partitions must be 4 TB in size.
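The RAM sizing rules above can be captured in a small helper. This is an illustrative reading of the requirements (a 16 GB floor, 1 GB per TB between 8 TB and 32 TB, 32 GB beyond that, plus 500 MB per live mount rounded up to a whole GB), not a Veritas-supplied sizing tool:

```shell
# Estimate RAM in GB for an MSDP BYO universal-share server from the sizing
# rules: 1 GB per TB of storage with a 16 GB floor up to 32 TB, 32 GB above
# 32 TB, plus 500 MB (0.5 GB, rounded up) per live mount.
recommended_ram_gb() {
  storage_tb=$1
  live_mounts=$2
  if [ "$storage_tb" -le 32 ]; then
    base=$(( storage_tb < 16 ? 16 : storage_tb ))
  else
    base=32
  fi
  echo $(( base + (live_mounts + 1) / 2 ))
}

recommended_ram_gb 24 4    # 24 TB with 4 live mounts: prints 26
recommended_ram_gb 64 0    # 64 TB, no live mounts:   prints 32
```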
Upgrading to NetBackup
You must unmount all the NFS mount points on the client side before you upgrade
to NetBackup to avoid issues when accessing the universal share on the client side
over NFS.
1. Unmount all the universal shares that were mounted on the Linux/UNIX client.
2. Upgrade to NetBackup.
3. Start the NetBackup services.
4. Mount the universal shares again on the Linux/UNIX client.
When you configure MSDP, the SPWS service uses the specified user and group; the
user spws and the group spwsgrp are not created.
To manage the SPWS service after the MSDP server is configured
If the MSDP storage server is already configured, and the user spws and the
group spwsgrp are created, run the following command on the storage server
to specify the new user and the group.
/usr/openv/pdde/vpfs/bin/spws_config.sh --spwsuser=<spwsuser>
--spwsgroup=<spwsgroup>
The SPWS service changes to use the specified user and the group. The owner
of the files that are used by the SPWS service is also changed.
The command does not delete the user spws and the group spwsgrp if they were
previously created. You can delete the user and the group manually.
Note: You must run the spws_config.sh script under the root user so that it can
create the log file in /var/run/vpfs/ and configure the system services.
When you create a universal share from the NetBackup web UI, you can specify
Active Directory users or groups. This approach restricts access to only specified
users or groups. You can also control permissions from the Windows client where
the universal share is mounted. See the NetBackup Web UI Administrator’s Guide
for more information.
For information about setting up Active Directory users or groups with an appliance,
see the NetBackup Appliance Security Guide.
Universal shares can be created with NFS or SMB protocol. When the SMB protocol
is used, SMB must be set up with ADS or in local user mode. The following table
describes how to configure the media server with Active Directory for various
platforms and create a universal share using SMB.
Table 16-3 Describes the requirements for different platforms to join the
Active Directory domain
Platform Requirements
/usr/openv/pdde/vpfs/bin/register_samba_to_ad.sh
--domain=<domain> --username=<username>
Table 16-3 Describes the requirements for different platforms to join the
Active Directory domain (continued)
Platform Requirements
WORM enabled The storage server can be configured to join or leave Active Directory
storage server with Restricted Shell commands.
Flex Scale Review the section Configuring AD server for Universal shares and
Instant Access in the NetBackup Flex Scale Administrator’s Guide.
AKS/EKS NetBackup supports only SMB local user mode. The SMB server is
configured with local user mode by default.
Once the storage server has been added to an Active Directory domain, a universal
share can be created as normal. Any users and user groups that are specified are
validated using the wbinfo command. The following procedure describes how to
add a universal share to an Active Directory.
Adding a universal share to an Active Directory
1 Open the NetBackup web UI.
2 Create a universal share with the SMB protocol.
3 Mount the shared storage on a Windows client.
Provide all necessary credentials.
4 Verify that the universal share is fully set up, and can be backed up and restored
using a Universal-Share policy.
The following requirements exist to add Microsoft SQL Server instant access to an
Active Directory:
■ The storage server and the client must be in the same domain.
■ The domain user account is required with the necessary permissions to log on
to the Microsoft SQL Server client.
■ In the web UI, register the Microsoft SQL Server instance with the domain user.
■ See the information on manually adding a SQL Server instance in the NetBackup
for Microsoft SQL Server Administrator's Guide.
■ The domain user credentials are required to use instant access.
Note: For Azure Kubernetes Service (AKS) and Amazon Elastic Kubernetes Service
(EKS) cloud platforms, only a SMB local user can access the SMB share. You must
add SMB users to access the SMB share.
If the SMB service is not part of Windows domain, perform the following steps:
■ For a NetBackup Appliance:
For a NetBackup Appliance, local users are also SMB users. To manage local
users, log in to the CLISH and select Main > Settings > Security >
Authentication > LocalUser. The SMB password is the same as the local
user’s login password.
■ For an MSDP BYO server:
For an MSDP BYO server, create a Linux user (if one does not exist). Then,
add the user to SMB.
For example, the following commands create a username for the SMB service
only:
adduser --no-create-home -s /sbin/nologin <username>
smbpasswd -a <username>
To add an existing user to the SMB service, run the following command:
smbpasswd -a <username>
■ Run the following commands to create a user and set the password:
useradd <username>
passwd <username>
■ Run the following commands to create user credentials for the SMB service
and enable the user:
smbpasswd -a <username>
smbpasswd -e <username>
Kerberos-based authentication
Use Kerberos-based authentication for universal shares to secure the connection
between clients and servers. All the Kerberos security types krb5, krb5i, and krb5p
are supported for the universal share configuration.
You must follow these steps to configure the Kerberos authentication for universal
shares.
Table 16-4
Step Task Description
5 Enter the domain user information. User logon name is used for Active
Directory domain login and authentication.
For storage servers, the logon name must be nfs/<storage server FQDN>.
Where nfs is the NFS service principal and <storage server FQDN> is the host where
your universal shares are created. For example,
nfs/storage-server.mydomain.com.
For a universal share server, create one more user host/<storage server
FQDN>.
For a universal share server, you must create two Active Directory users,
nfs/<storage server FQDN> and host/<storage server FQDN>. For a universal
share client, create only one user, host/<universal share client FQDN>.
6 Set password for the new user.
7 Click Finish to finish the user creation.
8 Double-click the user you have created to open the property window.
9 In Account options list, select AES 128 and AES 256 encryption items.
Note: The password must be the password of the Active Directory user. Otherwise,
the previous password is modified.
You must configure Kerberos-based authentication both on the servers and the
clients.
For a NetBackup BYO environment, before you configure Kerberos authentication
on NetBackup servers and clients, check that the necessary krb5 packages are
installed on the system. Run the following command to install them if needed:
yum install krb5-workstation pam_krb5
For NetBackup BYO, run the script in the command window. For Flex media
server, you must log in to the media server instance and run the script with
sudo.
For Flex WORM and Flex Scale, you must log in to the WORM or MSDP engine
Restricted Shell to run these commands.
■ Add the key entries.
setting SecureNfs add-krb-user
krbuser=nfs/storage-server.mydomain.com
Note: If a kdc section is defined in the krb5.conf file, copy the kdc.conf file
along with the /etc/krb5.conf file.
Note: If the universal share client has the existing /etc/krb5.keytab file, use
the vpfs_nfs_krb.sh script to add the key entries.
This option is available only if the universal share with object storage in cloud
feature is enabled.
■ In Cloud cache properties, specify the size of the local disk cache in the
Request cloud cache disk space.
The Request cloud cache disk space can only be set here on initial setup.
Any subsequent changes must be made on the storage server properties
page.
Note: When you update the Cloud cache properties setting in storage
server properties page, there is an interruption of the current shared mounts.
When you click Save, the vpfsd process restarts to apply the new value.
In addition, new universal shares cannot be created if the available size is
less than 128 GB.
5 At this point, continue to enter values in the remaining fields or click Save to
save the universal share. You can update the remaining fields later from the
universal share’s details page:
■ Select a Quota type: Unlimited or Custom. If you select Custom, also
specify the quota in MB, GB, or TB units.
The Custom quota value limits the amount of data that is ingested into the
share. Quotas are enforced using the front-end terabyte (FETB) calculation
method. They are implemented per share and can be modified at any time.
You do not need to remount the share for the change to take effect.
To update the quota type or value from the universal share’s details page,
click Edit in the Quota section.
■ Specify the User names (Local or Active Directory) and the Group names
(Active Directory only). Only the specified users or groups can access the
share. You can add and update the User names and the Group Names
later from the details page of an existing universal share.
Note: Currently, the User names and the Group names are supported
only for the SMB (CIFS) protocol.
■ Specify Kerberos security methods if the selected protocol is NFS and the
Kerberos service is supported on the selected Storage server.
If you select more than one Kerberos security method, you can specify any of
them as a mount command option when you mount the share from a client host.
■ Kerberos 5
Uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate
the users.
■ Kerberos 5i
Uses Kerberos V5 for user authentication and performs integrity checking
of NFS operations using the secure checksums to prevent tampering
of the data.
■ Kerberos 5p
Uses Kerberos V5 for user authentication and integrity checking. It
encrypts NFS traffic to prevent traffic sniffing. This option is the most
secure setting but it also involves the most performance overhead.
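When mounting such a share from a client, the chosen Kerberos method is passed with the standard NFS sec= mount option. The host name and export path below are illustrative assumptions, and the command is printed rather than executed:

```shell
# Print an example NFS mount that requests Kerberos privacy (sec=krb5p).
# Substitute your storage server, export path, and mount point.
SERVER=storage-server.mydomain.com        # assumed storage server FQDN
EXPORT=/mnt/vpfs_shares/usha/ushare1      # assumed export path
echo "mount -t nfs -o vers=4,sec=krb5p ${SERVER}:${EXPORT} /mnt/ushare1"
```

Use sec=krb5 or sec=krb5i instead of sec=krb5p to request the other two methods.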
Note: The image sharing storage server is not available while creating a new
universal share.
*.example.com
..example.com
*.vxindia.veritas.com
*.veritas.com
some.example.com
*.some.example.com
s???me.example.com
s?me.example.com
so*me.example.com/
s?me.examp!e.com/
s*me.examp!!!!e.com
some.example.com?
some.example.com*
some.ex*ample.com
s*ome.example.com
s*me.example.com
*some.example.com
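Assuming these host patterns follow conventional shell-glob semantics — * for any run of characters and ? for exactly one, which is an assumption about the matcher, not a statement from the product — the matching behavior can be demonstrated with a shell case statement:

```shell
# Glob-style matching demo: '*' matches any run of characters, '?' exactly
# one character. matches PATTERN NAME returns 0 on a match.
matches() {
  case "$2" in
    $1) return 0 ;;
    *)  return 1 ;;
  esac
}

matches '*.example.com'    'some.example.com'  && echo "wildcard suffix: yes"
matches 's?me.example.com' 'some.example.com'  && echo "single-char: yes"
matches 's?me.example.com' 'soome.example.com' || echo "single-char vs two: no"
```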
Note: Instant access on Flex WORM storage requires the following services: NGINX,
NFS, SAMBA, WINBIND (if Active Directory is required), SPWS, VPFS.
■ Ensure that the TCP network buffer size is large enough to not hinder Direct
NFS performance. The following command can verify the TCP buffer size:
sysctl -a |grep -e net.ipv4.tcp_[rw]mem
TCP buffer output
To enable Direct NFS, run the following commands and remove the oranfstab file:
cd $ORACLE_HOME/rdbms/lib
Parameter Usage
nfs_version The NFS protocol version that the Direct NFS client uses.
security_default The default security mode that is applicable for all the exported
NFS server paths for a server entry.
server: myNFSServer1
local: 192.168.1.1 path: 192.168.1.2
local: 192.168.2.1 path: 192.168.2.2
local: 192.168.3.1 path: 192.168.3.2
local: 192.168.4.1 path: 192.168.4.2
export: /vol/oradata1 mount: /mnt/oradata1
export: /vol/oradata2 mount: /mnt/oradata2
mnt_timeout: 600
Ensure that you set up the oradism file at the following path:
$ORACLE_HOME/bin/oradism. Direct NFS uses this oradism binary to issue mounts
as root. The file must be local to each node and owned by the root user. To set
the ownership, run the chown root $ORACLE_HOME/bin/oradism command. Run chmod
4755 $ORACLE_HOME/bin/oradism to give the oradism file the correct access
permissions.
Client monitoring
Refer to the contents of the following tables for client monitoring.
Item Description
v$dnfs_files Lists the files that the Direct NFS client has
opened.
C:\>type %ORACLE_HOME%\dbs\oranfstab
server: lnxnfs <=== NFS server Host name
path: 10.171.52.54 <--- First path to NFS server ie NFS server NIC
local: 10.171.52.33 <--- First client-side NIC
export: /oraclenfs mount: y:\
uid:1000
gid:1000
C:\>
The Direct NFS client uses the UID or the GID value to access all NFS servers that
are listed in the oranfstab file. Direct NFS ignores a UID or the GID value of 0.
The UID and the GID used in the earlier example are those of an Oracle user on
the NFS server.
The exported path from the NFS server must be accessible for read, write, and
execute operations by the Oracle user with the UID and the GID specified in the
oranfstab file. If neither UID nor GID is listed, the default value of 65534 is
used to access all NFS servers listed in the oranfstab file.
Hosts Click Edit to add or delete the hosts that can mount the
share.
3 Mount the universal share using one of the following commands:
■ NFSv3:
mount -t nfs <MSDP storage server>:<export path> -o
rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,actimeo=0,vers=3,timeo=600
/mnt/<your_ushare_mount_point_subfolder>
For example:
mount -t nfs
server.example.com:/mnt/vpfs_shares/3cc7/3cc77559-64f8-4ceb-be90-3e242b89f5e9
-o
rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,actimeo=0,vers=3,timeo=600
/mnt/<your_ushare_mount_point_subfolder>
■ NFSv4:
mount -t nfs <MSDP storage server>:<export path> -o
vers=4.0,rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,actimeo=0,timeo=600
/mnt/<your_ushare_mount_point_subfolder>
Note: If you use NFSv4 on a Flex Appliance application instance, the export
path must be entered as a relative path. Do not include /mnt/vpfs_shares.
For example:
mount -t nfs
server.example.com:/3cc7/3cc77559-64f8-4ceb-be90-3e242b89f5e9
-o
rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,actimeo=0,vers=4,timeo=600
/mnt/<your_ushare_mount_point_subfolder>
For NetBackup Flex Scale and AKS/EKS cloud platforms, if you use NFSv4
to mount the NFS share on NFS client, you must use the relative share
path without the prefix /mnt/vpfs_shares.
For example, if the export share path is
engine1.com:/mnt/vpfs_shares/usha/ushare1, use NFSv4 to mount it on
client as follows:
mount -t nfs -o 'vers=4' engine1.com:/usha/ushare1
/tmp/testdir
You can find the mount path on the NetBackup web UI: Storage > Disk storage
> Universal shares.
6 On the Backup Selections tab, enter the path of the universal share.
You can find the export path on the Universal share details page in the NetBackup
web UI: Storage > Storage Configuration > Universal Share.
For example:
/mnt/vpfs_shares/3cc7/3cc77559-64f8-4ceb-be90-3e242b89f5e9
You can use the NEW_STREAM directive if you require multistream backups.
You can also use the BACKUP X USING Y directive, which allows cataloging
under a different directory than the universal share path. For example: BACKUP
/demo/database1 USING
/mnt/vpfs_shares/3cc7/3cc77559-64f8-4ceb-be90-3e242b89f5e9. In this
example, the backup is cataloged under /demo/database1.
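As an illustrative sketch, a Backup Selections list that combines these directives might look like the following. The share path is the example path above; /demo/database1 is a hypothetical catalog path:

```
NEW_STREAM
BACKUP /demo/database1 USING /mnt/vpfs_shares/3cc7/3cc77559-64f8-4ceb-be90-3e242b89f5e9
```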
7 Run the Universal-Share policy.
After the backups are created, you can manage the backups with NetBackup
features, such as restore, duplication, Auto Image Replication, and others.
You can instantly access backup copies from a local LSU or a cloud LSU with the
web UI or the NetBackup Instant Access APIs.
For more information about instant access for cloud LSU:
See “About instant access for object storage ” on page 374.
For information about NetBackup APIs, see the following website:
https://sort.veritas.com/documents
Select NetBackup and then the version at the bottom of the page.
NetBackup also supports a wide variety of APIs, including an API that can be used
to provision (instant access) or create an NFS/SMB share that is based on any
protection point's point-in-time copy. This point-in-time copy can be mounted on the
originating system where the universal share was previously mounted, or it can be
provisioned on any other system that supports the mounting of a network share.
NetBackup client software is not required on the system where the provisioned
share is mounted.
This API and all NetBackup APIs are described in the NetBackup API Reference
documentation, which is located on the NetBackup primary server.
2 Ensure that the NFS export list exists at the following location:
[MSDP storage directory]/etc/vpfs-shares.exports on BYO or [MSDP
storage directory]/cat/config/vpfs-shares.exports on AKS or EKS, if
there are any cloud-configured NFS shares.
If the list does not exist, run the following command:
/usr/openv/pdde/vpfs/bin/vpfscld --download_export_list
--share_type nfs
3 Ensure that the SMB export list exists at the following location:
[MSDP storage directory]/etc/vpfs-shares.conf on BYO or [MSDP
storage directory]/cat/config/samba/vpfs-shares.conf on AKS or EKS,
if there are any cloud-configured SMB shares.
If the list does not exist, run the following command:
/usr/openv/pdde/vpfs/bin/vpfscld --download_export_list
--share_type smb
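The two checks above can be wrapped in a small shell helper. This is a hedged sketch, not part of the product: the export-list path differs between BYO and AKS/EKS (as noted above), so the caller supplies it, and the actual vpfscld call is left commented out.

```shell
# ensure_export_list: download the share export list only if it is missing.
# $1 = share type (nfs or smb), $2 = expected export-list path for this platform
ensure_export_list() {
  if [ -f "$2" ]; then
    echo "export list for $1 already present"
  else
    echo "downloading export list for $1"
    # Uncomment on a real MSDP storage server:
    # /usr/openv/pdde/vpfs/bin/vpfscld --download_export_list --share_type "$1"
  fi
}
```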
4 Restore data to the shares and recover the metadata with the following
command:
/usr/openv/pdde/vpfs/bin/vpfs_actions -a disasterRecovery
--cloudVolume CLOUDVOLUMENAME
/usr/openv/netbackup/bin/bp.start_all
■ Enter a Display name. This name is used in the universal share path.
■ Select the Protocol: NFS or SMB (CIFS)
■ Specify a Host that is allowed to mount the share and then click Add to
list. You can use the host name, IP address, short name, or the FQDN to
specify the host. You can enter multiple hosts for each share.
6 Click Save.
7 NetBackup creates a recovery job. You can view this job by clicking on Restore
activity.
8 Select Workloads > Universal shares > Instant access universal shares
to review the universal share.
Table 16-7 Supported platforms for universal shares with object store
Amazon Elastic Kubernetes Service (EKS): This platform is supported and enabled
by default.
RHEL 7.6+/8/9 on premises (object storage on premises): This platform is
supported. You must manually enable this option.
Example:
cat /etc/msdp-release
universal-share-object-store = 1
3 On the media server or the primary server, run the following commands to
reload the storage server attributes:
The following are optional parameters you can add to the universal share with object
store. These options are located in:
storage_path/etc/puredisk/vpfsd_config.json
Snapshot retention:
■ "cloudFullTaskInterval": 36000,: The interval at which a full snapshot of
the universal share is automatically created. The default value is 10 hours. This
entry must be an integer, in seconds.
■ "cloudIncrTaskInterval": 1800,: The interval at which an incremental
snapshot of the universal share is automatically created. The default value is 30
minutes. This entry must be an integer, in seconds.
■ "cloudFullSnapshotRetention": 172800,: The retention time of the full
snapshot copy. When the retention expires, the full snapshot is deleted from
local storage and the cloud bucket storage. The default value is 48 hours. If the
retention is set longer than 48 hours, there might be an effect on space
reclamation.
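Taken together, the snapshot retention entries above might appear in storage_path/etc/puredisk/vpfsd_config.json as in the following fragment. The values shown are the defaults described above; the surrounding keys in the file are omitted here:

```json
{
  "cloudFullTaskInterval": 36000,
  "cloudIncrTaskInterval": 1800,
  "cloudFullSnapshotRetention": 172800
}
```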
0.5 1 4 400
0.5 20 4 400
1 1 4 600
1 20 4 600
2 1 4 800
3 1 4 1200
4 1 4 1600
5 1 4 2000
5 20 4 2000
6 1 4 2200
7 1 4 2600
8 1 4 3000
9 1 4 3400
10 1 4 3800
10 20 4 3800
Snapshot management:
■ List all of the snapshots which include the full snapshot and incremental snapshot
in the cloud bucket:
/usr/openv/pdde/vpfs/bin/vpfscld --list
■ Manually take a snapshot and upload the snapshot and data to the cloud bucket:
/usr/openv/pdde/vpfs/bin/vpfscld --snapshot
--share_id <share> --snap_type <full|incr>
■ Manually remove a snapshot from local and cloud storage. Be aware that an
expired snapshot is not recoverable:
/usr/openv/pdde/vpfs/bin/vpfscld --expire
--share_id <share> --pit <point in time>
■ Recover a share from a snapshot:
/usr/openv/pdde/vpfs/bin/vpfscld --recover
--share_id <share> [--tgt_id <target>] [--pit <point in time>]
[--force]
Note: To enable object store for universal share and instant access, add
universal-share-object-store = 1 and instant-access-object-store =
1 to /etc/msdp-release.
/etc/msdp-release
instant-access-object-store = 1
3 On the media server or the primary server, run the following commands to
reload the storage server attributes:
Supported platforms
■ Client:
■ RHEL 7.6 and newer, RHEL 8.x and RHEL 9.x
■ Only supported in AWS or Azure.
■ Storage server:
■ Storage server version 19.0 and later
■ RHEL 7.6 and newer, RHEL 8.x and RHEL 9.x
■ NetBackup:
■ Primary server: 10.3 and newer
■ Media server: 10.3 and newer
■ Client: 10.3 and newer
Limitations
■ Universal share accelerator doesn’t support multiple vpfsd instances.
■ DR is not supported for universal share accelerator.
Veritas recommends the use of separate mount points for the universal share
accelerator. You must ensure that there is enough free usable space for the universal
share accelerator if it uses a shared disk or mount point with other applications.
6 On the Backup Selections tab, click Add and enter the path of the universal
share in Pathname or directive and then click Add to list.
You can find the export path on the universal share details page in the NetBackup
web UI: Storage > Disk storage > Universal shares. For example:
/mnt/vpfs_shares/accl/accl
You can use the NEW_STREAM directive if you require multistream backups.
You can also use the BACKUP X USING Y directive, which allows cataloging
under a different directory than the universal share path. For example: BACKUP
/demo/database1 USING /mnt/vpfs_shares/accl/accl. In this example,
the backup is cataloged under /demo/database1.
7 Run the Universal-Share policy.
After the backups are created, you can manage the backups with NetBackup
features, such as restore, duplication, Auto Image Replication, and others.
■ engine-host: The engine name of the cluster or the storage server name of a
standalone server.
■ mode: The accelerator mode; it must be byo.
■ share-id: The universal share ID, which can be found on the universal shares
tab under storage configuration in the NetBackup web UI.
■ cloud-volume: The cloud volume, shown in the Volume column on the universal
shares tab.
■ /usr/openv/pdde/vpfs/bin/vpfs_accelerator.sh --delete
--share-id=<id>
■ /usr/openv/pdde/vpfs/bin/vpfs_accelerator.sh --stop-all
2. In the NetBackup web UI, open the universal share list page and select the
accelerator and delete it.
The universal share accelerators on the server side are not deleted. To delete the
universal share accelerator, you must delete it from the NetBackup web UI.
To unconfigure the universal share accelerator on the client side, use the
following: /usr/openv/pdde/vpfs/bin/vpfs_accelerator.sh --unconfig
Note: If you run vpfs_accelerator.sh --unconfig, it deletes the data for the
universal share accelerator and the data cannot be recovered.
# df -h /mnt/vpfs_shares/test/test
Filesystem Size Used Avail Use% Mounted on
vpfsd 2.0T 1.9G 2.0T 1% /mnt/vpfs_shares/test/test
3. Use the vpfs_quota command to query the quota usage. The command should
be run on the workload computer if it's a universal share accelerator. The output
of vpfs_quota may not be the same as what is shown in the web UI.
vpfs_quota
Usage:
vpfs_quota <status> <share_id>
Example:
mv /msdp/meta_dir/test/test/quota.dat
/msdp/meta_dir/test/test/quota.dat.bak
Note: This procedure doesn’t support quota repair from the deduplication shell. To
repair the quota on a WORM storage server, you must unlock the appliance.
vpfscld --list
/msdp/meta_dir/usa/usa
Name: pit_0f0430a8-4cea-41dc-8e22-ed1b6f7804e9
Type: full
Create_time: 1683642234
■ Name: The snapshot ID, which can be used to recover the share.
vpfscld --recover --share_id usa --tgt_id usa_snap
--pit pit_0f0430a8-4cea-41dc-8e22-ed1b6f7804e9 --lsu labvol
4 The new mount point can access all the files in the accelerator.
Example:
/mnt/vpfs_shares/usa_/usa_snap
■ Storage/media server:
■ <storage-path>/log
■ /usr/openv/netbackup/logs/
■ /usr/openv/logs/
■ Primary server:
■ /usr/openv/netbackup/logs/
■ /usr/openv/logs/
Make sure to check the return value of the commands. If the return value is
not 0, the data might not have been persisted successfully. In that case, you
must back up or dump the data again.
snapshot=[full/incr]
Example:
the policy each time the policy runs. If you use backup_selection, you are required
to always provide the backup_selection.
Use the following for the backup key-value pair:
backup_selection=<path1>:<path2>:<pathN>
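For illustration, a key-value configuration that requests a full snapshot of two hypothetical paths might look like the following. The paths are placeholders, not paths from this guide:

```
snapshot=full
backup_selection=/db/data:/db/logs
```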
Note: The first data volume cannot be disabled for universal share.
vpfs_mounts stop
vpfs_mounts start
Options:
■ --data-volume [--local] [--cloud]
Displays the VPFS data volume usage statistics. By default it displays the
statistics for local storage usage and cloud storage usage. To display the
statistics for local usage only, use the --local option. To display the statistics
for cloud usage only, use --cloud option.
■ -help, --help
Displays help text for the utility.
Here is an example of vpfs_stats --data-volume:
Or
/usr/openv/netbackup/bin/goodies/netbackup stop
Or
/usr/openv/netbackup/bin/goodies/netbackup start
Note: NetBackup 10.3 uses a separate vpfsd instance for malware scanning, so
at least one vpfsd instance must be reserved. The vpfsd instance for malware scanning
can be configured by changing the value of numOfScanInstance. The value must
be an integer between 1 and 4, and numOfScanInstance must be less than
numOfInstance.
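The constraint above can be sanity-checked in a script before you edit the configuration. The helper below is an illustrative sketch, not part of the product:

```shell
# valid_scan_config: check the documented vpfsd instance constraints.
# $1 = numOfInstance, $2 = numOfScanInstance
# Returns 0 (success) when numOfScanInstance is an integer 1-4 and is
# strictly less than numOfInstance.
valid_scan_config() {
  [ "$2" -ge 1 ] && [ "$2" -le 4 ] && [ "$2" -lt "$1" ]
}
```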
/usr/openv/pdde/vpfs/bin/vpfs_metadump dedupe
/mnt/vpfs_shares/02b1/02b1e846-949f-5e55-8e39-e9900cd6a25e LT_0.1_20_1
File Name File Size Stored Size Overall Rate Dedupe Rate Compress Rate
[INFO]: /LT_0.1_20_1/db_dump.1of14: 3043.42MB, 30.26MB, 99%, 93.31%, 85%
[INFO]: /LT_0.1_20_1/db_dump.2of14: 3043.42MB, 28.10MB, 99%, 93.94%, 84%
[INFO]: /LT_0.1_20_1/db_dump.3of14: 3045.02MB, 32.78MB, 98%, 92.82%, 85%
[INFO]: /LT_0.1_20_1/db_dump.4of14: 3044.93MB, 38.48MB, 98%, 91.44%, 85%
[INFO]: /LT_0.1_20_1/db_dump.5of14: 3044.93MB, 29.05MB, 99%, 93.78%, 84%
[INFO]: /LT_0.1_20_1/db_dump.6of14: 3044.93MB, 30.06MB, 99%, 93.45%, 84%
[INFO]: /LT_0.1_20_1/db_dump.9of14: 3043.42MB, 26.71MB, 99%, 94.27%, 84%
[INFO]: /LT_0.1_20_1/db_dump.8of14: 3043.42MB, 32.05MB, 98%, 93.07%, 84%
[INFO]: /LT_0.1_20_1/db_dump.10of14: 3043.42MB, 31.12MB, 98%, 93.36%, 84%
[INFO]: /LT_0.1_20_1/db_dump.12of14: 3044.93MB, 31.57MB, 98%, 93.13%, 84%
[INFO]: /LT_0.1_20_1/db_dump.11of14: 3044.93MB, 27.08MB, 99%, 94.23%, 84%
[INFO]: /LT_0.1_20_1/db_dump.7of14: 3043.42MB, 25.31MB, 99%, 94.65%, 84%
[INFO]: /LT_0.1_20_1/db_dump.13of14: 3044.93MB, 31.09MB, 98%, 93.33%, 84%
[INFO]: /LT_0.1_20_1/db_dump.14of14: 3044.93MB, 36.60MB, 98%, 91.79%, 85%
[INFO]: total size: 42620.06MB, stored size: 430.25MB, overall rate: 98.99%,
dedupe rate: 93.33%, compress rate:84%
[0K, 8K): 0.0%
[8K, 16K): 0.0%
[16K, 24K): 0.7%
[24K, 32K): 0.5%
[32K, 40K): 98.8%
[INFO]: total SO: 1368688, average SO: 31K
for MSSQL and Sybase applications. Use the vpfs_actions command-line utility
to manage the algorithm configuration.
To configure the variable-length deduplication algorithm for universal shares
1 Navigate to the following location on the media server:
/usr/openv/pdde/vpfs/bin/
Sample output:
segment_type: "vld"
applications: [{"type": "vld", "sw_min": 16, "sw_max": 32}]
status: 0
Note: In a new environment where image backups do not exist in the
storage, the universal share automatically uses VLD v2 instead of VLD when you
specify -segment VLD in the first-time configuration.
Option Description
segment ■ alignment
Uses the fixed-length deduplication method. This is the default value.
If set to alignment, sw_min and sw_max are not required.
■ vld
Version 1 of the variable-length deduplication algorithm.
■ vldv2
Version 2 of the variable-length deduplication algorithm. Veritas
recommends using this version as the default.
■ vldv3
Another version of the variable-length deduplication algorithm.
sw_min The minimum segment size (KB) of the segmentation range (16
- 127).
sw_max The maximum segment size (KB) of the segmentation range (17
- 128). This value must be greater than sw_min.
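The sw_min and sw_max constraints above can be checked before configuration. This helper is an illustrative sketch, not a product utility:

```shell
# valid_vld_window: validate a VLD segmentation window per the documented
# ranges. $1 = sw_min (16-127 KB), $2 = sw_max (17-128 KB); sw_max must be
# strictly greater than sw_min.
valid_vld_window() {
  [ "$1" -ge 16 ] && [ "$1" -le 127 ] &&
  [ "$2" -ge 17 ] && [ "$2" -le 128 ] &&
  [ "$2" -gt "$1" ]
}
```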
2 The VPFS scheduler daemon starts automatically after at least one feature is
enabled. To start VPFS scheduler manually, run the following command:
/usr/openv/pdde/vpfs/bin/vpfs_sched start
Allowed
values: error, warning, information, and
debug.
3 Copy the dump scripts to the existing shares that were created before the feature
was enabled.
/usr/openv/pdde/vpfs/bin/vpfs_actions --action syncShareScript
4 Add the dump_start script at the start of the workload and the dump_end
script at the end of the workload.
For example:
■ Windows batch dump scripts
\\hostname\ushare-smb1\.share-builtin-scripts\dump_start.bat
--dump-id <dump_id>
\\hostname\ushare-smb1\.share-builtin-scripts\dump_end.bat
--dump-id <dump_id>
/mnt/ushare_smb1/.share-builtin-scripts/dump_start.sh
--dump-id <dump_id>
/mnt/ushare_smb1/.share-builtin-scripts/dump_end.sh
--dump-id <dump_id>
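On Linux, the two scripts can be wrapped around a workload as in the following sketch. The script directory and the workload command are placeholders; the --dump-id flag comes from the examples above, and the wrapper itself is an assumption, not a product tool:

```shell
# run_with_dump_markers: run a workload between dump_start.sh and dump_end.sh.
# $1 = directory that contains the built-in scripts; remaining args = workload.
run_with_dump_markers() {
  scripts_dir=$1; shift
  dump_id=$(date +%Y%m%d%H%M%S)                      # any unique ID works
  "$scripts_dir/dump_start.sh" --dump-id "$dump_id" || return 1
  "$@" && rc=0 || rc=$?                              # the actual workload dump
  "$scripts_dir/dump_end.sh" --dump-id "$dump_id"
  return $rc
}

# Example (placeholder paths):
# run_with_dump_markers /mnt/ushare_smb1/.share-builtin-scripts my_db_dump.sh
```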
pl-ushare-nfs-1,host.domain.com,/mnt/vpfs_shares/usha/ushare-nfs-1/backup
pl-ushare-nfs-2,host.domain.com,/mnt/vpfs_shares/usha/ushare-nfs-2/backup
The backup schedule is configured with a key-value pair where the key is the
weekday (Monday - Sunday) and the value is the list of universal share backup
types. The default configuration is FULL, INCR, CINC every day from Monday
to Sunday.
The following are the supported backup types:
■ FULL: Full backup
■ INCR: Differential incremental backup
■ CINC: Cumulative incremental backup
5 Create a marker file in the configured location in the universal share. After you
configure the marker file path in the policy
file VpfsScheduler.autoUShareBackupPolicyFile, create a marker file in the
designated location to trigger the backup.
The marker must be named with the following format:
BACKUP_SUCCESS_<XXXX>_touch_<Date> or BACKUP_SUCCESS_<XXXX>_touch_<Date>_<Timestamp>
<job_id>,<schedule>,<state>,<status>,<total_size_kb>KB,<start_time>,
<end_time>,<client>,<policy>,<ushare_mount>
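A marker name in the required format can be generated as in this sketch. The share identifier and the YYYY-MM-DD date format are assumptions for illustration; confirm the expected <Date> format for your environment:

```shell
# marker_name: build a BACKUP_SUCCESS marker file name.
# $1 = share identifier (the <XXXX> part), $2 = optional timestamp suffix.
# The YYYY-MM-DD date format is an assumption, not confirmed by the guide.
marker_name() {
  if [ -n "$2" ]; then
    echo "BACKUP_SUCCESS_${1}_touch_$(date +%Y-%m-%d)_$2"
  else
    echo "BACKUP_SUCCESS_${1}_touch_$(date +%Y-%m-%d)"
  fi
}

# Example (placeholder share path):
# touch "/mnt/vpfs_shares/usha/ushare-nfs-1/backup/$(marker_name db01)"
```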
3 Resolve the issues that are identified in the log. For example, restart any
services that are required for instant access.
Make sure that the InstantAccess flag is listed in the command's output.
If the flag is not listed, see one of the guides mentioned above to enable instant
access on the storage server.
3 Run the following command:
nbdevconfig -getconfig -stype PureDisk -storage_server
storage_server_name
Whenever a universal share is created on the NetBackup web UI, a mount point is
also created on the storage server.
For example:
{"timestamp":1682906052.752,"threadId":140498966533888,"UID":"ushare-1","UpdateCounter:":4846,"SegmentNumber":39148,"DedupeRate":0.00,"AvgDedupeTimePerSegment(ms)":0.534,"AvgSegmentSize":128792.79,"AvgWriteTimePerSegment(ms)":0.003,"TotalWriteSize":5041984256}
The vpfsd logging is enabled for the following file system operations:
■ getattr
■ rename
■ open
■ read
■ write
■ fsync
■ truncate
■ getDataCacheNodeForWrite
The vpfsd logging is enabled for all the file system operations.
3 Search the messages in the file:
cat /msdp/vol/log/vpfs/vpfsd/vpfs0_vpfsd.log | grep
"WARNING.*exceeded configured threshold"
Checkpoint failures can affect data consistency and an event is triggered if the
checkpoint repeatedly fails.
It is recommended that you configure the NetBackup universal-share policy to
protect the universal share. An event is sent to the NetBackup primary server if the
universal share is not protected or if there is no successful backup in a
specified period of time (last 24 hours by default).
A syslog and a history log are created for the file. The backup of the universal share
skips the files that have an issue.
Apart from the automatic process, you can use the vpfsck command to manually
verify and detect file corruption. A corrupted file is moved to the following
location:
/mnt/vpfs_share/<share dir>/<share ID>/.vpfs_corrupted
■ Requirements
■ Replicating the backup images from the IRE domain to the production domain
Requirements
Following are the requirements to configure isolated recovery environment (IRE)
in a Pull model:
■ WORM storage server: 17.0 or later (on Flex Appliance: 2.1.1 or later)
■ NetBackup BYO media server: 10.1 or later
■ NetBackup Flex Scale: 3.2 or later
■ Access Appliance: 8.2 or later
Table 17-1 lists the supported configuration for MSDP source and targets for isolated
recovery environment.
Configuring isolated recovery environment (IRE) 627
Configuring the network isolation
(Source storage / target storage / platform / NetBackup version / MSDP version / method)
■ MSDP, WORM / MSDP, WORM / BYO, Flex 2.1.1 / 10.1 / 17.0 / Pull model
■ MSDP, WORM / MSDP, WORM / BYO, Flex / 10.1.1 / 17.1 / Pull model with IPv6 +
mixed CA support for IRE hosts
■ MSDP, WORM, MSDP Scaleout / MSDP, WORM, MSDP Scaleout / BYO, Flex, Flex
Scale / 10.2 / 18.0 / Pull model
■ MSDP, WORM, MSDP Scaleout / MSDP, WORM, MSDP Scaleout / BYO, Flex, Flex
Scale / 10.3 / 19.0 / Web UI
Note: For NetBackup 10.0 and WORM 16.0, download the hotfix
VRTSflex-HF3-2.1.0-0.x86_64.rpm for the Flex Appliance in the IRE and
NetBackup EEB VRTSflex-msdp_EEB_ET4067891-16.0-3.x86_64.rpm for the
WORM storage server application from the Veritas Download Center.
Note: Both the source and the target domain must meet the minimum software
version requirements.
[Figure: IRE deployment example — cloud services, the production primary server
(prod-primary), and a Flex Appliance hosting storage and malware scan.]
Network administrator:
- Allows ire-msdp outbound connection
- Denies all inbound and outbound connections
IRE administrator:
- Gets certificate from prod-primary
- Allows subnets 10.20.1.0/24, 10.20.2.0/24, 10.100.1.2
- Adds reverse connection for prod-msdp
Note: The list must have at least the primary server, the media servers, and the
DNS server in IRE domain.
Do not add subnets or IP addresses from the domains outside the IRE domain.
■ Enables unidirectional network access (allows outbound connections from the IRE
MSDP server to the other domains) in the IRE air gap window. By default, the window
is 24 hours per day.
All the inbound connections that are not in the allowed subnet list are denied.
Note: You can also configure and manage an IRE from the deduplication shell.
See the NetBackup Deduplication Guide for more details.
IRE web UI relies on Storage Platform Web Service (spws) on the IRE MSDP
storage server. If the IRE MSDP storage server runs on a BYO media server, ensure
that the spws service is configured and running. Ensure that NGINX is installed
and started before you configure spws service.
See “Storage Platform Web Service (spws) does not start” on page 734 to configure
the spws service.
Note: For Flex Scale, the allowed subnets protect the entire cluster including the
nodes, the NetBackup servers, and the MSDP engines. Ensure that all the subnets
that need to access the cluster are in the allowed subnets.
See “Configuring A.I.R. for replicating backup images from production environment
to IRE BYO environment” on page 642.
For WORM storage server, Flex Scale, and Access Appliance: See step 2 of
the Configuring data transmission between a production environment and an IRE
WORM storage server topic.
See “Configuring data transmission between a production environment and an IRE
WORM storage server” on page 652.
To configure the reverse connections
1 On the left, click Storage > Disk storage.
2 Click the Storage servers tab.
3 Click on the MSDP storage server that you want to configure.
4 Under Isolated Recovery Environment > Reverse connections, click Add
reverse connection.
5 On the Add reverse connection page, provide the production primary server
name.
6 Select the existing login credentials or add new credentials and click Next.
■ Select existing credentials: Select the existing credentials.
■ Add a new credential: Add a new credential for the production primary
server. Under Credential type, select Username Password authentication
or Use API key.
Note: The user of the production primary server needs privileges in the
default IRE SLP Administrator role.
7 Click Connect.
8 On the next page, select Remote MSDP storage server.
You can select an MSDP storage server from the production domain. If the
MSDP storage server has multiple network interfaces configured and you want
the reverse connection to use an interface other than the storage server
name, you can type the FQDN of that network interface for the production
MSDP storage server.
9 In the Local interface field, provide the local storage server interface name
for data transmission.
If the IRE MSDP server has multiple interfaces and you want the IRE MSDP
server to use a specific interface to connect to the production MSDP storage
server, type the FQDN of the network interface for the IRE MSDP storage
server.
If nothing is specified in the Local interface field, the IRE MSDP server uses the
default network interface to connect to the production storage server.
10 Click Add.
A reverse connection is configured from the IRE MSDP server to the production
MSDP server.
5 On the Modify SLP on the remote primary server page, provide the
production primary server name.
6 Select the existing login credentials or add new credentials and click Next.
■ Select existing credentials: Select the existing credentials.
■ Add a new credential: Add a new credential for the production primary
server. Under Credential type, select Username Password authentication
or Use API key.
Note: The user of the production primary server needs privileges in the
default IRE SLP Administrator role.
7 Click Connect.
8 Select the SLP that you want to add a replication operation to the IRE MSDP
storage server and click Next.
9 Select an operation that you want to replicate to IRE MSDP storage server
after the operation and click Next.
10 Select an SLP of the IRE domain for image import after the replication completes.
11 On the Window tab, configure SLP window for the replication operation. Create
a new SLP window or select an existing SLP window.
When you adjust the SLP window, ensure that the SLP window is covered by
the IRE schedule. If a replication is triggered outside the IRE schedule, the reverse
connection does not happen, and the replication job fails.
The Synchronize with the reverse connection schedule option replaces
the current SLP window with the IRE schedule. You can adjust the SLP window
based on the IRE schedule.
The date and time that are shown on the page are based on the time zone of
the IRE primary server. If the production primary server and the IRE primary server
are in different time zones, the time difference is calculated and the SLP window
for the production primary server is converted automatically.
Click Finish.
12 Click Save.
All the configurations including MSDP storage server replication target, SLP
window, and replication operation in the SLP are applied to the production
primary server.
Task: Configure A.I.R. for replicating backup images from the production
environment to the IRE BYO environment.
See “Configuring A.I.R. for replicating backup images from production
environment to IRE BYO environment” on page 642.
Task: Configure the data transmission between a production environment and an
IRE WORM storage server.
See “Configuring data transmission between a production environment and an
IRE WORM storage server” on page 652.
Where:
■ The production primary server name is the fully qualified domain name
(FQDN) of the primary server in your production environment.
■ The production primary server username is the username of a NetBackup
user with permission to list SLPs and SLP windows in the production
environment.
The production primary server username must be in
domain_name\user_name format on Windows.
■ The target primary server name is the FQDN of the primary server in the
IRE. Use the same hostname that you used to configure the SLPs in the
production environment.
■ The target primary server username is the username of a NetBackup user
with permission to list the SLPs and storage units in the IRE environment.
The target primary server username must be in domain_name\user_name
format on Windows.
For example:
production_primary_server=examplePrimary.domain.com
production_primary_server_username=appadmin
ire_primary_server=exampleIREPrimary.domain.com
ire_primary_server_username=appadmin
3 Based on the output for your environment, determine a daily schedule that
accommodates the SLP windows and take note of it. In the previous example,
a daily schedule from 10 A.M. to 12:00 P.M. accommodates both SLP windows.
The start times in the output of this command are in the IRE server's time zone.
Note: If the time zone of the production primary server is changed, you must
restart the NetBackup services.
4 Run the following command to configure the subnets and IP addresses that
are allowed to access the media server:
/usr/openv/pdde/shell/bin/ire_network_control allow-subnets
--subnets CIDR subnets or IP addresses
Note: The IRE primary server, the IRE media servers, and the DNS server for
the IRE environment must be included in the allowed list. If all these servers
are in the same subnet, only the subnet is required to be in the allowed list.
Note: If your network environment is dual stack, ensure that both IPv4 and
IPv6 subnets and IP addresses of the IRE domain are configured in allowed
subnets. For example, if you specify only IPv6 subnets in the allowed subnet,
all the IPv4 addresses are not allowed to access the IRE storage server.
5 Run the following command to set the daily air gap schedule:
/usr/openv/pdde/shell/bin/ire_network_control set-schedule
--start_time time --duration duration [--weekday 0-6]
weekday is optional and starts from Sunday (0 = Sunday). You can configure a
different open or close window for a specific weekday. If it is not specified, the IRE
schedule is the same on each day.
For example:
/usr/openv/pdde/shell/bin/ire_network_control set-schedule
--start_time 10:00:00 --duration 03:00:00
Note: The SLP replication window on the production domain must be configured
to be open at the same time as the IRE schedule. The IRE schedule window
can be different for weekdays. You can configure a window for a specific
weekday.
For example:
/usr/openv/pdde/shell/bin/ire_network_control set-schedule
--start_time 11:00:00 --duration 10:00:00 --weekday 0
Note: If the production and the IRE environments are in different time zones,
the schedule must begin only once per day in both time zones.
For example, if one environment is in the Asia/Kolkata time zone and the other
is in the America/New_York time zone, the following schedule in Kolkata is not
supported: Tuesday start time 22:00:00 and Wednesday start time 03:00:00.
When these start times are converted to the New York time zone, they become
Tuesday start time 12:30:00 and Tuesday start time 17:30:00, which is not
supported.
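The conversion in this example can be checked with GNU date (the TZ="..." input syntax is a GNU coreutils feature and an assumption about your tooling): 22:00 in Asia/Kolkata during US daylight-saving time corresponds to 12:30 in America/New_York.

```shell
# Convert a Kolkata start time to the New York time zone (GNU date required).
# IST is UTC+5:30; EDT is UTC-4, so the offset between them is 9:30.
TZ=America/New_York date -d 'TZ="Asia/Kolkata" 2024-07-02 22:00' '+%H:%M'
```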
Note: If you want to open the air gap network for 24 hours on all days, you do not
need to configure an IRE schedule. However, the IRE media server still restricts
network access from the hosts that are not configured in the subnets that the
air gap allows.
Where:
■ The production primary server name is the fully qualified domain name
(FQDN) of the primary server in your production environment.
■ The production primary server username is the username of a NetBackup
user with permission to list SLPs and SLP windows in the production
environment.
The production primary server username must be in
domain_name\user_name format on Windows.
■ The target primary server name is the FQDN of the primary server in the
IRE. Use the same hostname that you used to configure the SLPs in the
production environment.
■ The target primary server username is the username of a NetBackup user
with permission to list the SLPs and storage units in the IRE environment.
The target primary server username must be in domain_name\user_name
format on Windows.
For example:
production_primary_server=examplePrimary.domain.com
production_primary_server_username=appadmin
ire_primary_server=exampleIREPrimary.domain.com
ire_primary_server_username=appadmin
Note: The IRE primary server, the IRE media servers, and the DNS server
for the IRE environment must be included in the allowed list. If all these servers
are in the same subnet, only the subnet is required to be in the allowed list.
Note: If your network environment is dual stack, ensure that both IPv4 and
IPv6 subnets and IP addresses of the IRE domain are configured in allowed
subnets. For example, if you specify only IPv6 subnets in the allowed subnet,
all the IPv4 addresses are not allowed to access the IRE storage server.
For example:
/usr/openv/pdde/shell/bin/ire_network_control set-schedule
--start_time 10:00:00 --duration 03:00:00
To view the current network status and check whether the external network
is open or closed
Run the following command:
/usr/openv/pdde/shell/bin/ire_network_control
external-network-status
To manually close the external network and resume the air gap schedule
Run the following command:
/usr/openv/pdde/shell/bin/ire_network_control resume-schedule
Note: A.I.R. configuration operations can be performed only while the external
network is opened by the IRE air gap. All of the following operations are
performed on the IRE MSDP server.
Prerequisites
Before you configure A.I.R. to replicate backup images from the production
environment to the IRE BYO environment, ensure the following:
■ In the case of NetBackup certificate authority (CA), get the CA certificate and
host certificate for the IRE MSDP storage server from the production primary
server.
■ External certificate:
/usr/openv/netbackup/bin/nbcertcmd -enrollCertificate -server
<production primary server>
3 This step is not required if you have not configured an IRE schedule. If no
IRE schedule is configured, the MSDP reverse connection is enabled for 24
hours on all days, and the production primary server can configure the SLP
replication operation with any SLP window.
Once the MSDP reverse connection is configured, copy the IRE schedule to
the NetBackup production domain as an SLP window. Use the following
command:
/usr/openv/pdde/shell/bin/sync_ire_window
--production_primary_server production primary server name
--production_primary_server_username production primary server
username [--slp_window_name slp_window_name ]
Where:
The production primary server name is the fully qualified domain name (FQDN)
of the primary server in your production environment.
The production primary server username is the username of a NetBackup user
with permission to list SLPs and SLP windows in the production environment.
The production primary server username must be in domain_name\user_name
format on Windows.
The slp_window_name is the name of the SLP window to be synced with the
IRE window. It is an optional parameter. If the SLP window is not specified, an
SLP window with the name IRE_DEFAULT_WINDOW is created on the production
primary server.
4 You can then add the IRE MSDP storage server as a replication target of the
production NetBackup domain. Then add the replication operation to an existing
SLP to replicate from the production NetBackup domain to the IRE MSDP storage
server using the following command:
/usr/openv/pdde/shell/bin/add_replication_op
--production_primary_server production primary server name
--production_primary_server_username production primary server
username --source_slp_name source slp name
--target_import_slp_name target import slp name
--production_storage_server production storage server name
--ire_primary_server_username ire primary server username
--target_storage_server target storage server name
--target_storage_server_username target storage server username
--production_storage_unit msdp storage unit name used in source
SLP [--slp_window_name slp window name]
Where:
The production primary server name is the fully qualified domain name (FQDN)
of the primary server in your production environment.
The production primary server username is the username of a NetBackup user
with permission to list SLPs and SLP windows in the production environment.
The production primary server username must be in domain_name\user_name
format on Windows.
The production storage server name is the fully qualified domain name (FQDN)
of the production storage server in your production environment.
The ire primary server username is the username for administrator user of IRE
primary server.
The ire primary server username must be in domain_name\user_name format
on Windows.
The source slp name is the SLP name on the production primary server against
which a replication operation is added.
The target import slp name is the import SLP name from IRE primary server.
The target storage server name is the fully qualified domain name (FQDN) of
the target MSDP storage server.
The target storage server username is the username of the target MSDP
storage server.
The slp_window_name is the name of the SLP window that is synced with the
IRE window. Alternatively, it is created on the production primary server before
the replication operation is added.
Note: The source SLP and target import SLP need to be created before the
operation.
Where:
■ <production domain> is the fully qualified domain name (FQDN) of the
primary server in your production environment.
EveryDayAtNoon:
SLPs: SLP1
Sunday start: 12:00:00 duration: 00:59:59
Monday start: 12:00:00 duration: 00:59:59
Tuesday start: 12:00:00 duration: 00:59:59
Wednesday start: 12:00:00 duration: 00:59:59
Thursday start: 12:00:00 duration: 00:59:59
Friday start: 12:00:00 duration: 00:59:59
Saturday start: 12:00:00 duration: 00:59:59
WeeklyWindow:
SLPs: SLP2
Sunday start: 10:00:00 duration: 01:59:59
Monday NONE
Tuesday NONE
Wednesday NONE
Thursday NONE
Friday NONE
Saturday start: 10:00:00 duration: 01:59:59
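The duration values in these windows are offsets from the start time, so a window that opens at 12:00:00 with a duration of 00:59:59 closes at 12:59:59. The following Python sketch (illustrative only, not a NetBackup tool) computes the close time of a window:

```python
from datetime import datetime, timedelta

def window_end(start: str, duration: str) -> str:
    """Compute when an SLP window closes from its start time and duration."""
    fmt = "%H:%M:%S"
    opened = datetime.strptime(start, fmt)
    h, m, s = (int(x) for x in duration.split(":"))
    return (opened + timedelta(hours=h, minutes=m, seconds=s)).strftime(fmt)

print(window_end("12:00:00", "00:59:59"))  # 12:59:59 (the EveryDayAtNoon window)
print(window_end("10:00:00", "01:59:59"))  # 11:59:59 (the WeeklyWindow window)
```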
Note: If the production environment and the IRE are in different time zones,
the schedule must begin only once per day in both time zones. For example,
if one environment is in the Asia/Kolkata time zone and the other is in the
America/New_York time zone, the following schedule in Kolkata is not
supported: Tuesday start time 22:00:00 and Wednesday start time 03:00:00.
When these start times get converted to the New York time zone, they become
Tuesday start time 12:30:00 and Tuesday start time 17:30:00, which is not
supported.
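The time-zone constraint above can be checked with a short Python sketch (illustrative only, not a NetBackup tool). It converts the two Kolkata start times to New York time, assuming a sample date when US daylight saving time is in effect, and shows that both starts land on the same weekday:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

kolkata = ZoneInfo("Asia/Kolkata")
new_york = ZoneInfo("America/New_York")

# The two proposed window starts, expressed in Kolkata time.
# 2024-07-02 is a Tuesday; New York is on daylight saving time (UTC-4).
starts = [
    datetime(2024, 7, 2, 22, 0, tzinfo=kolkata),  # Tuesday 22:00:00
    datetime(2024, 7, 3, 3, 0, tzinfo=kolkata),   # Wednesday 03:00:00
]
converted = [s.astimezone(new_york) for s in starts]
for c in converted:
    print(c.strftime("%A %H:%M:%S"))  # Tuesday 12:30:00, Tuesday 17:30:00

# The schedule is unsupported because both starts fall on the same weekday
# in the New York time zone, so the schedule would begin twice on that day.
weekdays = {c.strftime("%A") for c in converted}
print("supported" if len(weekdays) == len(converted) else "not supported")
```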
3 Run the following command to configure which subnets and IP addresses are
allowed to access the WORM storage server:
setting ire-network-control allow-subnets subnets=<CIDR subnets
or IP addresses>
Note: The IRE primary server, the IRE media servers, and the DNS server for
the IRE must be included in the allowed list. If all of these servers are in the
same subnet, only the subnet is required to be in the allowed list. If you have
a dual stack IPv4-IPv6 network, make sure that you add both the IPv4 and the
IPv6 addresses to the allowed list.
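As an illustration of how such an allowed list behaves, the following Python sketch (not a NetBackup tool; the subnet and address values are hypothetical) checks hosts against a dual-stack allowed list:

```python
import ipaddress

# Hypothetical allowed list for the IRE: one IPv4 and one IPv6 entry, as the
# note above requires for a dual-stack network.
allowed = [ipaddress.ip_network(n) for n in ("10.20.0.0/24", "fd00:10:20::/64")]

def is_allowed(host_ip: str) -> bool:
    """Return True if the host address falls inside any allowed subnet."""
    addr = ipaddress.ip_address(host_ip)
    # Membership tests are only valid between addresses and networks of the
    # same IP version, so guard on the version first.
    return any(addr.version == net.version and addr in net for net in allowed)

print(is_allowed("10.20.0.15"))     # True: covered by the IPv4 entry
print(is_allowed("fd00:10:20::5"))  # True: covered by the IPv6 entry
print(is_allowed("10.30.0.15"))     # False: not in any allowed subnet
```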
4 Run the following command to set the daily air gap schedule:
setting ire-network-control set-schedule start_time=<time>
duration=<duration> [weekday=<0-6>]
5 Before you can send data between the production domain and the IRE storage
server, you must add MSDP reverse connections and add the replication
operation.
See “Configuring data transmission between a production environment and
an IRE WORM storage server” on page 652.
Note: The SLP replication window on the production domain must be configured
to be open at the same time as the IRE schedule.
Note: The IRE primary server, the IRE media servers, and the DNS server for
the IRE must be included in the allowed list. If all of these servers are in the
same subnet, only the subnet is required to be in the allowed list. If you have a
dual stack IPv4-IPv6 network, make sure that you add both the IPv4 and the
IPv6 addresses to the allowed list.
Note: If the production environment and the IRE are in different time zones, the
schedule must begin only once per day in both time zones. For example, if one
environment is in the Asia/Kolkata time zone and the other is in the
America/New_York time zone, the following schedule in Kolkata is not supported:
Tuesday start time 22:00:00 and Wednesday start time 03:00:00. When these
start times get converted to the New York time zone, they become Tuesday
start time 12:30:00 and Tuesday start time 17:30:00, which is not supported.
■ To view the current network status and check whether the external network is
open or closed:
setting ire-network-control external-network-status
■ To manually close the external network and resume the air gap schedule:
setting ire-network-control resume-schedule
2 Depending on the type of certificate authority that you use for host
communication, do one of the following:
■ If you use a NetBackup Certificate Authority, run the following commands
to request the certificates from the production domain:
setting certificate get-CA-certificate
primary_server=<production primary server>
setting certificate get-certificate primary_server=<production
primary server> token=<token>
Where:
■ <production MSDP server> is the fully qualified domain name (FQDN) of
the MSDP server in your production environment.
■ [remote_primary_server=<production primary server>] is an optional
parameter for the FQDN of the primary server in your production
environment. This parameter is required if the IRE domain uses an
alternative name to access the production primary server. This scenario
usually occurs if the production primary server runs on multiple networks
with multiple hostnames.
■ [local_storage_server=<IRE network interface>] is an optional parameter
for the hostname of the network interface to use for image replication on
the IRE storage server. This parameter is required if the network interface
for replication is different than the IRE storage server name.
Where:
■ <production primary server> is the FQDN of the primary server in your
production environment.
■ <production username> is the username of a NetBackup user with
permission to list SLPs and SLP windows in the production environment.
For Windows users, enter the username in the format <domain
name>\<username>. For other users, enter the username only.
■ [slp_window_name=<SLP window name>] is an optional parameter to give
a name for the SLP window. If you do not provide this parameter, the name
of the SLP window is IRE_DEFAULT_WINDOW.
6 If you do not have them already, create a source SLP on the production primary
server and a target import SLP on the IRE primary server. See the section
"Creating a storage lifecycle policy" in the NetBackup Deduplication Guide for
details.
Note: You cannot add the replication operation from NetBackup when you
create the SLPs. Continue to the next step to add the replication operation.
7 Run the following command to add the IRE WORM storage server as a
replication target of the production NetBackup domain and to add the replication
operation to the SLP:
setting ire-network-control add-replication-op
production_primary_server=<production primary server>
production_primary_server_username=<production username>
production_storage_server=<production storage server>
ire_primary_server_username=<IRE username>
source_slp_name=<production SLP name> target_import_slp_name=<IRE
SLP name> target_storage_server=<target storage server>
target_storage_server_username=<target storage server username>
production_storage_unit=<MSDP storage unit> [slp_window_name=<slp
window name>]
Where:
■ <production primary server> is the FQDN of the primary server in your
production environment.
■ <production username> is the username of a NetBackup user with
permission to list SLPs and SLP windows in the production environment.
For Windows users, enter the username in the format <domain
name>\<username>. For other users, enter the username only.
■ <production storage server> is the FQDN of the production storage server
in your production environment.
■ <IRE username> is the username for an administrator on the IRE primary
server. For Windows users, enter the username in the format <domain
name>\<username>. For other users, enter the username only.
■ <source SLP name> is the SLP name from the production primary server
to add the replication operation to.
■ <target SLP name> is the import SLP name from the IRE primary server.
■ <target storage server> is the FQDN of the target WORM storage server
in your IRE environment.
Configuring isolated recovery environment (IRE) 655
Replicating the backup images from the IRE domain to the production domain
8 If you opened the external network at the beginning of this procedure, run the
following command to close it and resume the air gap schedule:
setting ire-network-control resume-schedule
2 On the IRE primary server, configure the replication target using the command line.
■ Create a configuration file for adding a replication target with the following
information:
V7.5 "rephostname" " " string Specifies the replication target host name.
V7.5 "replogin" " " string Specifies the replication target storage server username.
V7.5 "reppasswd" " " string Specifies the replication target storage server password.
■ Run the following command to apply the configuration file and add the
replication target:
/usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig
-storage_server <IRE storage server name> -stype PureDisk
-configlist <config file name>
■ Run the following command to update the disk pool information on the
primary server:
/usr/openv/netbackup/bin/admincmd/nbdevconfig -updatedp -stype
PureDisk -dp <IRE disk pool name>
3 Ensure that the production primary server has one import SLP that targets the
relevant MSDP storage unit. Create it if it does not exist.
4 Run the following command on the IRE primary server to manually replicate
the backup images to the production domain.
/usr/openv/netbackup/bin/admincmd/nbreplicate -backupid <backup
id> -cn <local copy number> -rcn <copy number plus 101> -slp_name
reverse-air -target_sts <production MSDP storage server name>
For example,
/usr/openv/netbackup/bin/admincmd/nbreplicate -backupid
client1_1234567890 -cn 1 -rcn 102 -slp_name reverse-air
-target_sts msdp-prod.example.com
Note: Before you start the replication, ensure that the backup image with the
same backup ID does not exist in the production domain. If it exists, you must
expire the image at the production domain first. Otherwise, the new replicated
image will not get automatically imported by the import SLP at the production
primary server.
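In the nbreplicate command, the -rcn value is always the local copy number plus 101. A small Python sketch (illustrative only, not a NetBackup tool) that assembles the command line from the values in the example above:

```python
def build_nbreplicate_cmd(backup_id, copy_number, slp_name, target_sts):
    """Build the nbreplicate argument list for a reverse replication."""
    return [
        "/usr/openv/netbackup/bin/admincmd/nbreplicate",
        "-backupid", backup_id,
        "-cn", str(copy_number),
        # The remote copy number is the local copy number plus 101.
        "-rcn", str(copy_number + 101),
        "-slp_name", slp_name,
        "-target_sts", target_sts,
    ]

cmd = build_nbreplicate_cmd(
    "client1_1234567890", 1, "reverse-air", "msdp-prod.example.com")
print(" ".join(cmd))
```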
Chapter 18
Using the NetBackup Deduplication Shell
This chapter includes the following topics:
Where <username> is the username of the user that you want to add, and
<password> is a password for that user.
The password must have between 15 and 32 characters and must include at
least one uppercase letter, one lowercase letter, one number, and one special
character (_.+~@={}?!).
4 Run the following commands to view the new user:
■ setting user show-user username=<username>
This command shows the information about the new user.
■ setting user list-users
This command shows a list of all local users.
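The password rules from step 3 can be expressed as a short validation sketch in Python (illustrative only; the sample passwords are hypothetical):

```python
import re

# Special characters permitted by the shell password rules above.
SPECIALS = "_.+~@={}?!"

def valid_password(pw: str) -> bool:
    """Check the deduplication-shell password rules: 15-32 characters with at
    least one uppercase letter, lowercase letter, number, and special character."""
    return (15 <= len(pw) <= 32
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and any(ch in SPECIALS for ch in pw))

print(valid_password("Example_Passw0rd!"))  # True: 17 chars, all classes present
print(valid_password("short_Pw1!"))         # False: fewer than 15 characters
```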
Where <username> is the username of the user that you want to remove.
3 Run the following command. In NetBackup Flex Scale, run this command on
the primary node.
setting MSDP-user add-MSDP-user username=<username>
password=<password>
Where <username> is the username of the user that you want to add, and
<password> is a password for that user.
The username must have between 4 and 30 characters and can include letters
and numbers.
The password must be between 15 and 32 characters and must include at
least one uppercase letter, one lowercase letter, one number, and one special
character (_.+~={}?!).
4 Run the following commands to view the new user. In NetBackup Flex Scale,
run this command on the primary node.
■ setting MSDP-user verify-user username=<username>
This command verifies the username and the password for the new user.
■ setting MSDP-user list
This command shows a list of all MSDP users.
Note: The AD domain is used only for Universal Shares and Instant Access. AD
users are not currently supported on the deduplication shell.
Use the following procedure to connect to an AD user domain from the
deduplication shell.
Using the NetBackup Deduplication Shell 663
Managing users from the deduplication shell
3 Open an SSH session to the server as the msdpadm user, or for NetBackup
Flex Scale, as an appliance administrator.
4 Run the following command:
setting ActiveDirectory configure ad_server=<server name>
domain=<domain name> domain_admin=<username>
Where <server name> is the AD server name, <domain name> is the domain
that you want to connect, and <username> is the username of an administrator
user on that domain.
5 When the prompt appears, enter the password for the domain administrator
user.
Where <server name> is the AD server name, <domain name> is the domain
that you want to disconnect, and <username> is the username of an
administrator user on that domain.
3 When the prompt appears, enter the password for the domain administrator
user.
Note: Remote directory user passwords cannot be changed from the shell. They
must be changed from the server on which they reside.
Where <username> is the username of the user whose password you want to
change.
4 Follow the prompt to change the password.
The password must have between 15 and 32 characters and must include at
least one uppercase letter, one lowercase letter, one number, and one special
character (_.+~@={}?!).
5 (Optional) By default, passwords do not expire. To specify an expiration date
for the password, run the following command:
setting user set-password-exp-date username=<username>
password_exp_date=<date>
3 Run the following command and specify the maximum duration to keep the
storage immutable and indelible:
setting WORM set-max worm_max=<duration in seconds>
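Because worm_max is given in seconds, longer retention periods must be converted. A minimal Python sketch (illustrative only) for converting a retention period in days to the value the command expects:

```python
def worm_max_seconds(days: int) -> int:
    """Convert a retention period in days to the seconds worm_max expects."""
    return days * 24 * 60 * 60

print(worm_max_seconds(30))  # 2592000 seconds for a 30-day maximum
print(worm_max_seconds(1))   # 86400 seconds for a 1-day maximum
```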
Note: You can also run the catdbutil command in the shell to manage the images.
This command does not appear in the shell menu, but you can run it directly.
However, the arguments for the command cannot include path separators (/). See
“About the NetBackup command line options to configure immutable and indelible
data” on page 232.
You can find the backup ID and the copy number in the output of the retention
policy list command.
3 Run the following command to disable multiple backup images with the same
copy number.
retention policy batch-disable
backupids=<backupid1,backupid2,backupid3,...,backupidn>
copynumber=<number>
You can find the backup ID and the copy number in the output of the retention
policy list command.
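The backupids argument is a single comma-separated list. A small Python sketch (illustrative only; the backup IDs are hypothetical) that assembles the arguments for the batch-disable command:

```python
def batch_disable_args(backup_ids, copy_number):
    """Join backup IDs into the comma-separated list batch-disable expects."""
    return f"backupids={','.join(backup_ids)} copynumber={copy_number}"

print(batch_disable_args(
    ["client1_1234567890", "client1_1234567999"], 1))
```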
Note: You can also run the catdbutil command in the shell to audit the retention
changes. This command does not appear in the shell menu, but you can run it
directly. However, the arguments for the command cannot include path separators
(/). See “About the NetBackup command line options to configure immutable and
indelible data” on page 232.
Using the NetBackup Deduplication Shell 669
Protecting the NetBackup catalog from the deduplication shell
Note: To configure an additional catalog copy, at least one volume other than
vol0 must exist in the /mnt/msdp directory.
Where <volume name> is the volume that you chose in the previous step.
For example:
cacontrol --catalog addshadowcopy /mnt/msdp/vol1
■ /storage_path/etc
■ /database_path/databases/spa
■ /storage_path/var
■ /usr/openv/lib/ost-plugins/pd.conf
■ /usr/openv/lib/ost-plugins/mtstrm.conf
Using the NetBackup Deduplication Shell 671
About the external MSDP catalog backup
■ /database_path/databases/datacheck
5 Run the following command on the MSDP server to set up MSDP catalog
backups.
cacontrol --catalog setupexternalcopy <username> <password>
<frequency in minutes> <slp_name>
The backup interval should only be changed after the external MSDP catalog
backup is set up.
To change the import SLP name
1 On the NetBackup web UI, create an SLP with an import operation.
Select the destination storage as the MSDP local LSU storage unit.
Verify that the retention type is set to Target retention.
2 Add a child rule to the SLP.
Set the operation to Duplication and the destination storage to the
desired external storage server to store the MSDP catalog backup.
The duplication storage server cannot be the same as the MSDP storage server
specified in step 1. Verify that the retention type is set to Fixed and set the
retention period as desired.
3 Open an SSH session to the MSDP server.
4 Run the following command to change the SLP name that is used to import
the MSDP catalog backup image to NetBackup:
cacontrol --catalog editexternalcopyslpname <slp_name>
The SLP name should only be changed after the external MSDP catalog backup
is set up.
Using the NetBackup Deduplication Shell 673
Managing certificates from the deduplication shell
By default, the command uses the first primary server entry in the NetBackup
configuration file. You can specify an alternate primary server with the
primary_server parameter. For example:
setting certificate get-certificate primary_server=<alternate
primary server hostname>
Depending on the primary server security level, the host may require an
authorization or a reissue token. If the command prompts that a token is
required for the request, enter the command again with the token for the
host ID-based certificate. For example:
setting certificate get-certificate primary_server=<alternate
primary server hostname> token=<certificate token> force=true
Where:
■ <trust store> is the trust store in PEM format.
■ <host certificate> is the X.509 certificate of the host in PEM format.
■ <key> is the RSA private key in PEM format.
■ [passphrase=<passphrase>] is an optional parameter for the passphrase
of the private key. This parameter is required if the key is encrypted.
■ <host> is the hostname of the host that stores the external certificates.
■ <port> is the port to connect to on the remote host.
Where:
■ <trust store> is the trust store in PEM format.
■ <host> is the hostname of the host that stores the external certificates.
Where:
■ <host certificate> is the X.509 certificate of the host in PEM format.
■ <key> is the RSA private key in PEM format.
■ [passphrase=<passphrase>] is an optional parameter for the passphrase
of the private key. This parameter is required if the key is encrypted.
■ <host> is the hostname of the host that stores the external certificates.
■ <port> is the port to connect to on the remote host.
3 (Optional) Run the following command to specify the revocation check level
for the external certificates:
setting certificate set-CRL-check-level check_level=<DISABLE,
LEAF, or CHAIN>
Warning: If you remove the existing certificates but have not installed new
certificates, the WORM server can no longer communicate with the primary server.
To switch from one type of certificate authority (CA) to the other, install the new
NetBackup or external certificates before you remove the existing certificates.
Where <server> is the host name of the external KMS server and <key group>
is the KMS server key group name.
3 To verify the KMS encryption status, run the setting encryption kms-status
command.
To configure MSDP encryption without KMS
1 Open an SSH session to the server as the msdpadm user, or for NetBackup
Flex Scale, as an appliance administrator.
2 Run the following command:
setting encryption enable
3 To verify the MSDP encryption status, run the setting encryption status
command.
■ The rotate-kms-keys command rotates the KMS keys under the new KMS
system. KEKs, which are stored in the KMS proxy database, are unencrypted
using the corresponding KMS key and then re-encrypted using the active KMS
key.
AllocationUnitSize
The allocation unit size for the data on the server.
To set the parameter: setting set-MSDP-param allocation-unit-size value=<number of MiB>
DataCheckDays
The number of days to check the data for consistency.
To set the parameter: setting set-MSDP-param data-check-days value=<number of days>
LogRetention
The length of time to keep logs.
To set the parameter: setting set-MSDP-param log-retention value=<number of days>
SpadLogging
The log level for the NetBackup Deduplication Manager (spad).
To set the parameter: setting set-MSDP-param spad-logging log_level=<value>
SpooldLogging
The log level for the NetBackup Deduplication Engine (spoold).
To set the parameter: setting set-MSDP-param spoold-logging log_level=<value>
WriteThreadNum
The number of threads for writing data to the data container in parallel.
To set the parameter: setting set-MSDP-param write-thread-num value=<number of threads>
CloudDataCacheSize
The default data cache size when the cloud LSU is added. Decrease this value if sufficient free space is not available.
To set the parameter: setting set-MSDP-param cloud-data-cache-size value=<number>
To view the parameter: setting get-MSDP-param cloud-data-cache-size
CloudMapCacheSize
The default map cache size when the cloud LSU is added. Decrease this value if sufficient free space is not available.
To set the parameter: setting set-MSDP-param cloud-map-cache-size value=<number>
To view the parameter: setting get-MSDP-param cloud-map-cache-size
CloudMetaCacheSize
The default meta cache size when the cloud LSU is added. Decrease this value if sufficient free space is not available.
To set the parameter: setting set-MSDP-param cloud-meta-cache-size value=<number>
To view the parameter: setting get-MSDP-param cloud-meta-cache-size
CloudUploadCacheSize
The default upload cache size when the cloud LSU is added. The minimum value is 12 GiB.
To set the parameter: setting set-MSDP-param cloud-upload-cache-size value=<number>
To view the parameter: setting get-MSDP-param cloud-upload-cache-size
Using the NetBackup Deduplication Shell 682
Tuning the MSDP configuration from the deduplication shell
The following additional parameters can be viewed with the setting get-MSDP-param command:
setting get-MSDP-param enable-local-predictive-sampling-cache
setting get-MSDP-param max-predictive-cache-size
setting get-MSDP-param max-sampling-cache-size
setting get-MSDP-param usable-memory-limit
setting get-MSDP-param max-cache-size-cluster
setting get-MSDP-param max-predictive-cache-size-cluster
setting get-MSDP-param max-sampling-cache-size-cluster
setting get-MSDP-param usable-memory-limit-cluster
Using the NetBackup Deduplication Shell 684
Setting the MSDP log level from the deduplication shell
setting get-MSDP-param enable-local-predictive-sampling-cache-cluster
setting get-MSDP-param vpfs-pcache-reload-threshold-cluster
Where:
■ <value> is one of the following:
■ minimal: enables the critical, error, authorization, and bug logs
■ short: enables all minimal logs and adds warning logs
■ long: enables all short logs and adds info logs
■ verbose: enables all long logs and adds notice logs
■ full: enables all verbose logs and adds trace messages (all available
logs)
■ none: disables logging
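The levels above are cumulative: each one enables everything the previous level enables plus more. A Python sketch (illustrative only) of that ordering:

```python
# Cumulative log levels, in increasing order of verbosity; "none" sits outside
# this ordering because it disables logging entirely.
LEVELS = ["minimal", "short", "long", "verbose", "full"]

def includes(level: str, other: str) -> bool:
    """Return True if `level` enables at least everything `other` enables."""
    return LEVELS.index(level) >= LEVELS.index(other)

print(includes("verbose", "short"))  # True: verbose includes all short logs
print(includes("minimal", "long"))   # False: minimal enables fewer logs
```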
Note: These commands do not appear in the shell menu, but you can run them
directly. The arguments for these commands cannot include path separators
(/).
■ To enable fast checking, which begins the check from container 64 and
does not sleep between checking containers:
dedupe CRC fast
When the fast CRC ends, CRC behavior reverts to the behavior before fast
checking was invoked.
■ To enable fix mode, which runs the check and attempts to fix any
inconsistent metadata:
dedupe CRC enable-fixmode
3 If the health monitor is enabled, stop the monitor using the following command:
setting health disable
4 After disabling the health monitor, use the following command to stop the MSDP
services:
dedupe MSDP stop
Using the NetBackup Deduplication Shell 690
Managing NetBackup services from the deduplication shell
Note: In a cluster environment, you must run this command from the catalog
engine. It cannot be run from the other engines.
the number of vpfsd instances from 1 up to 16 and distribute the shares across
all the vpfsd instances.
■ CloudCacheSize
This parameter specifies the local disk cache size. This option applies only to
Universal Shares with object store and Instant Access with object store.
Use the following procedures to manage the VPFS configuration parameters.
To view a VPFS configuration parameter
1 Open an SSH session to the server.
2 Run the following command:
setting vpfs-config get-vpfs-param vpfs_configkey=<parameter>
Where <parameter> is the parameter that you want to change, and <value>
is the value that you want to change it to. For example:
setting vpfs-config set-vpfs-param vpfs_configkey=numOfInstance
vpfs_configvalue=2
Note: This command is meant for a short debugging session, and the change
is not preserved if you restart the instance. To permanently change the log
level, use the following command:
setting vpfs-config set-vpfs-param vpfs_configkey=logLevel
vpfs_configvalue=<level>
dedupe vpfs force-stop (Use this command only if the dedupe vpfs stop
command does not work or becomes stuck.)
dedupe vpfs start
3 You can view the details of the NGINX certificate with the following command:
setting nginx show-cert
Note: These commands do not appear in the shell menu, but you can run them
directly. The arguments for these commands cannot include path separators
(/).
See “About the tool updates for cloud support” on page 293.
■ The msdpimgutil command
This command lets you check deduplication pool encryption status or image
encryption status on the storage server.
See “Checking the image encryption status” on page 493.
■ support diskio vmstat: Displays the information about the wait on the disk
I/O
■ support diskio nmon: Displays the information about the monitor system,
which monitors the disk I/O, the network I/O, and the CPU usage.
■ support diskio disk-volume: Displays the information about the disk volume
■ support process memory-usage: Displays the free and the used memory
Using the NetBackup Deduplication Shell 699
Monitoring and troubleshooting NetBackup services from the deduplication shell
To view a file
1 Open an SSH session to the server.
2 Do one of the following:
■ To view an entire file, run one of the following commands:
To search a file
1 Open an SSH session to the server.
2 Run one of the following commands:
■ support MSDP-history grep file=<file> pattern=<keyword>
Where <file> is the file name of the file that you want to search and <keyword>
is the naming pattern that you want to search for. For example:
support MSDP-config grep file=spa.cfg pattern=address
To view a file
1 Open an SSH session to the server.
2 Do one of the following:
■ To view an entire file, run the command:
support proc cat file=<file>
Where <file> is the file name of the file that you want to view.
■ To view the last 10 lines of a file, run the following command:
To search a file
1 Open an SSH session to the server.
2 Run the following command:
support proc grep file=<file> pattern=<keyword>
Where <file> is the file name of the file that you want to search and <keyword>
is the naming pattern that you want to search for. For example:
support proc grep file=spa.cfg pattern=address
Where <share ID> is the ID of the share that you want to view the deduplication
rate of.
To view a file
1 Open an SSH session to the server.
2 Do one of the following:
■ To view an entire file, run one of the following commands:
■ support MSDP-log cat file=<file>
To search a file
1 Open an SSH session to the server.
2 Run one of the following commands:
■ support MSDP-log grep file=<file> pattern=<keyword>
Where <file> is the file name of the file that you want to search and <keyword>
is the naming pattern that you want to search for. For example:
support MSDP-log grep file=spad* pattern=sessionStartAgent
3 Run one of the following commands to collect files of interest from the desired
category:
■ support MSDP-history collect
To collect the files from less than x days ago, enter mtime="-x". To collect
the files from more than x days ago, enter mtime="+x".
For example:
support MSDP-log collect pattern=spoold* mmin="+2"
4 Run the scp command from any category to create a tarball of all previously
collected files (from all categories) and transfer the tarball to the target host
using the scp protocol. For example:
support MSDP-config scp scp_target=user@example.com:/tmp
5 If applicable, run the following command to set the SSH time-out back to the
default:
setting ssh set-ssh-timeout ssh_timeout=600
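The mtime="-x" and mtime="+x" values in step 3 appear to follow the find(1) age convention, which the following sketch demonstrates with find directly rather than the deduplication shell (the directory and file names are illustrative, and GNU touch is assumed for the -d option):

```shell
# Demo of the find(1)-style age convention that mtime="-x"/"+x" appears
# to follow. This uses plain find, not the MSDP deduplication shell.
DEMO=/tmp/msdp_collect_demo
rm -rf "$DEMO" && mkdir -p "$DEMO"
touch "$DEMO/new.log"                   # modified now
touch -d '3 days ago' "$DEMO/old.log"   # modified 3 days ago (GNU touch)
find "$DEMO" -name '*.log' -mtime -2    # "less than 2 days ago": new.log
find "$DEMO" -name '*.log' -mtime +2    # "more than 2 days ago": old.log
```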
again.
Operation failed
If the ticket is still pending for approval, and the MSDP administrator runs the
command again, the deduplication shell asks the administrator to wait for an
approval.
For more information about cloud LSU, see the MSDP cloud support chapter.
Chapter 19
Troubleshooting
This chapter includes the following topics:
■ Unable to obtain the administrator password to use an AWS EC2 instance that
has a Windows OS
Windows: install_path\NetBackup\logs
UNIX: /usr/openv/logs
Note: Only the following types of users can access the logs: root and service users
in Linux systems, and users present in the administrators group of Windows systems.
You can access logging controls in Logging host properties. You can also manage
unified logging with the following commands:
vxlogmgr Manages the log files that the products that support unified logging
generate.
UNIX: /usr/openv/logs
Windows: install_path\NetBackup\logs
STDATE (Long Integer or string): Provide the start date in seconds or in the
locale-specific short date and time format. For example, a locale can have
the format 'mm/dd/yy hh:mm:ss AM/PM'. Examples: STDATE = 98736352 or
STDATE = '4/26/11 11:01:00 AM'
ENDATE (Long Integer or string): Provide the end date in seconds or in the
locale-specific short date and time format. For example, a locale can have
the format 'mm/dd/yy hh:mm:ss AM/PM'. Examples: ENDATE = 99736352 or
ENDATE = '04/27/11 10:01:00 AM'
1 = WARNING
2 = ERR
3 = CRIT
4 = EMERG
■ (PRODID == 51216) && ((PID == 178964) || ((STDATE == '2/5/15 09:00:00 AM')
&& (ENDATE == '2/5/15 12:00:00 PM')))
Retrieves the log file messages for the NetBackup product ID 51216 between
9 AM and 12 PM on 2015-02-05.
■ ((prodid = 'NBU') && ((stdate >= '11/18/14 00:00:00 AM') && (endate <=
'12/13/14 12:00:00 PM'))) || ((prodid = 'BENT') && ((stdate >= '12/12/14
00:00:00 AM') && (endate <= '12/25/14 12:00:00 PM')))
Retrieves the log messages for the NetBackup product NBU between 2014-11-18
and 2014-12-13, and the log messages for the NetBackup product BENT between
2014-12-12 and 2014-12-25.
■ (STDATE <= '04/05/15 0:0:0 AM')
Retrieves the log messages that were logged on or before 2015-04-05 for all
of the installed Veritas products.
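Query strings like the ones above are typically assembled and then handed to vxlogview; a minimal shell sketch of that assembly follows. The product ID and dates are illustrative, and passing the string via a where-clause flag is an assumption, so the vxlogview call itself is left commented out:

```shell
# Assemble a vxlogview-style query string from shell variables.
# PRODID, START, and END are example values only.
PRODID=51216
START='2/5/15 09:00:00 AM'
END='2/5/15 12:00:00 PM'
QUERY="(PRODID == $PRODID) && ((STDATE >= '$START') && (ENDATE <= '$END'))"
echo "$QUERY"
# The string would then be passed to vxlogview (assumed flag):
# vxlogview -w "$QUERY"
```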
Note: Only the following types of users can access the logs: root and service users
in Linux systems, and users present in the administrators group of Windows systems.
Troubleshooting 712
About unified logging
Item Example
Display specific Display the log messages for NetBackup (51216) that show only the date, time, message
attributes of the log type, and message text:
messages
vxlogview --prodid 51216 --display D,T,m,x
Display the latest log Display the log messages for originator 116 (nbpem) that were issued during the last 20
messages minutes. Note that you can specify -o nbpem instead of -o 116:
Display the log Display the log messages for nbpem that were issued during the specified time period:
messages from a
specific time period # vxlogview -o nbpem -b "05/03/15 06:51:48 AM"
-e "05/03/15 06:52:48 AM"
Display results faster You can use the -i option to specify an originator for a process:
# vxlogview -i nbpem
The vxlogview -i option searches only the log files that the specified process (nbpem)
creates. By limiting the log files that it has to search, vxlogview returns a result faster. By
comparison, the vxlogview -o option searches all unified log files for the messages that
the specified process has logged.
Note: If you use the -i option with a process that is not a service, vxlogview returns the
message "No log files found." A process that is not a service has no originator ID in the file
name. In this case, use the -o option instead of the -i option.
The -i option displays entries for all OIDs that are part of that process including libraries (137,
156, 309, etc.).
Search for a job ID You can search the logs for a particular job ID:
The jobid= search key should contain no spaces and must be lowercase.
When searching for a job ID, you can use any vxlogview command option. This example
uses the -i option with the name of the process (nbpem). The command returns only the
log entries that contain the job ID. It misses related entries for the job that do not explicitly
contain the jobid=job_ID.
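Because the command returns only entries that literally contain jobid=job_ID, the search behaves like a plain text filter. The following sketch shows that behavior with fabricated stand-in log lines (real entries would come from vxlogview output):

```shell
# Fabricated stand-ins for unified log entries. The point: a jobid=
# filter matches only lines that literally contain the key and misses
# related lines for the same job that do not.
cat > /tmp/nbpem_demo.log <<'EOF'
05/03/15 06:51:48 AM V-116-1 started processing jobid=42
05/03/15 06:51:49 AM V-116-2 scheduling worklist
05/03/15 06:51:50 AM V-116-3 finished processing jobid=42
EOF
grep 'jobid=42' /tmp/nbpem_demo.log   # returns 2 of the 3 lines
```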
Troubleshooting 713
About legacy logging
Windows: install_path\NetBackup\logs
install_path\Volmgr\debug
UNIX: /usr/openv/netbackup/logs
/usr/openv/volmgr/debug
To use legacy logging, a log file directory must exist for a process. If the directory
is not created by default, you can use the Logging Assistant or the mklogdir batch
files to create the directories. Or, you can manually create the directories. When
logging is enabled for a process, a log file is created when the process begins.
Each log file grows to a certain size before the NetBackup process closes it and
creates a new log file.
You can use the following batch files to create all of the log directories:
■ Windows: install_path\NetBackup\Logs\mklogdir.bat
■ UNIX: /usr/openv/netbackup/logs/mklogdir
Follow these recommendations when you create and use legacy log folders:
■ Do not use symbolic links or hard links inside legacy log folders.
■ If any process runs for a non-root or non-admin user and there is no logging
that occurs in the legacy log folders, use the mklogdir command to create a
folder for the required user.
■ To run a command line for a non-root or non-admin user (troubleshooting when
the NetBackup services are not running), create user folders for the specific
command line. Create the folders either with the mklogdir command or manually
with the non-root or non-admin user privileges.
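A minimal sketch of what an mklogdir-style script does follows. The root directory and process list are illustrative stand-ins (the real scripts cover many more processes and also handle per-user ownership and permissions):

```shell
# Create one legacy log directory per NetBackup process name.
# LOGROOT stands in for /usr/openv/netbackup/logs; the process list
# below is a small illustrative subset.
LOGROOT="${LOGROOT:-/tmp/nb_logs_demo}"
for proc in bpbrm bpdbm bptm nbostpxy; do
    mkdir -p "$LOGROOT/$proc"
    chmod 755 "$LOGROOT/$proc"   # real mklogdir can set per-user modes
done
ls "$LOGROOT"
```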
Troubleshooting 714
NetBackup MSDP log files
servers, see the NetBackup Logging Reference Guide. The guide is available
through the following URL:
Most interaction occurs on the NetBackup media servers. Therefore, the log files
on the media servers that you use for disk operations are of most interest.
Warning: The higher the log level, the greater the effect on NetBackup performance.
Use a log level of 5 (the highest) only when directed to do so by a Veritas
representative. A log level of 5 is for troubleshooting only.
Specify the NetBackup log levels in the Logging host properties on the NetBackup
primary server. The log levels for some processes specific to certain options are
set in configuration files as described in Table 19-4.
Backups and restores N/A Messages appear in the log files for the following processes:
■ The bpbrm backup and restore manager. The following is the path to the log
files:
UNIX: /usr/openv/netbackup/logs/bpbrm
Windows: install_path\Veritas\NetBackup\logs\bpbrm
■ The bpdbm database manager. The following is the path to the log files:
UNIX: /usr/openv/netbackup/logs/bpdbm
Windows: install_path\Veritas\NetBackup\logs\bpdbm
■ The bptm tape manager for I/O operations. The following is the path to the
log files:
UNIX: /usr/openv/netbackup/logs/bptm
Windows: install_path\Veritas\NetBackup\logs\bptm
Catalog shadow copies N/A The MSDP catalog shadow copy process writes messages to the following log
files and directories:
UNIX:
/storage_path/log/spad/spad.log
/storage_path/log/spad/sched_CatalogBackup.log
/storage_path/log/spad/client_name/
Windows:
storage_path\log\spad\spad.log
storage_path\log\spad\sched_CatalogBackup.log
storage_path\log\spad\client_name\
Client deduplication N/A The client deduplication proxy plug-in on the media server runs under bptm,
proxy plug-in bpstsinfo, and bpbrm processes. Examine the log files for those processes
for proxy plug-in activity. The strings proxy or ProxyServer embedded in the
log messages identify proxy server activity.
They write log files to the following directories:
■ For bptm:
UNIX: /usr/openv/netbackup/logs/bptm
Windows: install_path\Veritas\NetBackup\logs\bptm
■ For bpstsinfo:
UNIX: /usr/openv/netbackup/logs/admin
UNIX: /usr/openv/netbackup/logs/bpstsinfo
Windows: install_path\Veritas\NetBackup\logs\admin
Windows: install_path\Veritas\NetBackup\logs\stsinfo
■ For bpbrm:
UNIX: /usr/openv/netbackup/logs/bpbrm
Windows: install_path\Veritas\NetBackup\logs\bpbrm
Client deduplication N/A The deduplication proxy server nbostpxy on the client writes messages to files
proxy server in the following directory, as follows:
UNIX: /usr/openv/netbackup/logs/nbostpxy
Windows: install_path\Veritas\NetBackup\logs\nbostpxy.
Deduplication N/A The following is the path name of the log file for the deduplication configuration
configuration script script:
■ UNIX: storage_path/log/pdde-config.log
■ Windows: storage_path\log\pdde-config.log
NetBackup creates this log file during the configuration process. If your
configuration succeeded, you do not need to examine the log file. The only reason
to look at the log file is if the configuration failed. If the configuration process fails
after it creates and populates the storage directory, this log file identifies when
the configuration failed.
Deduplication plug-in N/A The DEBUGLOG entry and the LOGLEVEL in the pd.conf file determine the log
location and level for the deduplication plug-in. The following are the default
locations for log files:
■ UNIX: /var/log/puredisk/pdplugin.log
■ Windows: C:\pdplugin.log
You can configure the location and name of the log file and the logging level. To
do so, edit the DEBUGLOG entry and the LOGLEVEL entry in the pd.conf file.
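For example, the relevant pd.conf entries might look like the following (the path and level shown are illustrative values, not required defaults):

```
# pd.conf fragment (illustrative values)
DEBUGLOG = /var/log/puredisk/pdplugin.log
LOGLEVEL = 1
```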
Device configuration 178 The Disk Service Manager process that runs in the Enterprise Media Manager
and monitoring (EMM) process.
Device configuration 202 The storage server interface process that runs in the Remote Manager and
and monitoring Monitor Service. RMMS runs on media servers.
Device configuration 230 The Remote Disk Service Manager interface (RDSM) that runs in the Remote
and monitoring Manager and Monitor Service. RMMS runs on media servers.
drcontrol utility N/A You must run the drcontrol utility on the MSDP storage server host. The
command requires administrator privileges.
The utility creates a log file and displays its pathname in the command output.
The utility writes log files to the following directory, depending on the operating
system:
UNIX:
/storage_path/log/drcontrol/policy_admin
/storage_path/log/drcontrol/dedupe_catalog_DR
Windows:
storage_path\log\drcontrol\policy_admin
storage_path\log\drcontrol\dedupe_catalog_DR
Installation N/A The NetBackup installation process writes information about the installation of
the deduplication components to a log file in the following directory:
■ UNIX: /var/log/puredisk
■ Windows: %ALLUSERSPROFILE%\Symantec\NetBackup\InstallLogs
NetBackup N/A The NetBackup Deduplication Engine writes several log files, as follows:
Deduplication Engine
■ Log files in the storage_path/log/spoold directory, as follows:
■ The spoold.log file is the main log file.
■ The storaged.log file is for queue processing.
■ The storaged_<dsid>.log file is for cloud LSU queue processing.
■ A log file for each connection to the engine is stored in a directory in the
storage path spoold directory. The following describes the pathname to
a log file for a connection:
hostname/application/TaskName/MMDDYY.log
For example, the following is an example of a crcontrol connection log
pathname on a Linux system:
/storage_path/log/spoold/server.example.com/crcontrol/Control/010112.log
Usually, the only reason to examine these connection log files is if a Veritas
support representative asks you to.
■ A VxUL log file for the events and errors that NetBackup receives from polling.
The originator ID for the deduplication engine is 364.
NetBackup 364 The NetBackup Deduplication Engine that runs on the deduplication storage
Deduplication Engine server.
NetBackup N/A The log files are in the /storage_path/log/spad directory, as follows:
Deduplication Manager
■ spad.log
■ sched_QueueProcess.log
■ SchedClass.log
■ A log file for each connection to the manager is stored in a directory in the
storage path spad directory. The following describes the pathname to a log
file for a connection:
hostname/application/TaskName/MMDDYY.log
For example, the following is an example of a bpstsinfo connection log
pathname on a Linux system:
/storage_path/log/spad/server.example.com/bpstsinfo/spad/010112.log
Usually, the only reason to examine these connection log files is if a Veritas
support representative asks you to.
You can set the log level and retention period in the Change Storage Server
dialog box Properties tab.
Optimized duplication N/A For optimized duplication and Auto Image Replication, the following are the log
and replication files that provide information:
■ The NetBackup bptm tape manager for I/O operations. The following is the
path to the log files:
UNIX: /usr/openv/netbackup/logs/bptm
Windows: install_path\Veritas\NetBackup\logs\bptm
■ The following is the path name of MSDP replication log file:
/storage_path/log/spad/replication.log
Resilient network 387 The Remote Network Transport Service (nbrntd) manages resilient network
connections connection sockets. It runs on the primary server, on media servers, and on
clients. Use the VxUL originator ID 387 to view information about the socket
connections that NetBackup uses.
Note: If multiple backup streams run concurrently, the Remote Network
Transport Service writes a large amount of information to the log files. In such
a scenario, Veritas recommends that you set the logging level for OID 387 to 2
or less. To configure unified logs, see the NetBackup Logging Reference Guide.
Troubleshooting 720
Troubleshooting MSDP configuration issues
Resilient network N/A The deduplication plug-in logs information about keeping the connection alive.
connections
For more information about the deduplication plug-in log file, see “Deduplication
plug-in” in this table.
Diagnosis The PDDE_initConfig script was invoked, but errors occurred during
the storage initialization.
Second, examine the tpconfig command log file errors about creating
the credentials for the server name. The tpconfig command writes
to the standard NetBackup administrative commands log directory.
Second, refresh the NetBackup web UI. This step clears cached information from
the failed attempt to display the disk volume.
Examine the disk error logs to determine why the volume was marked DOWN.
If the storage server is busy with jobs, it may not respond to primary server disk
polling requests in a timely manner. A busy load balancing server also may cause
this error. Consequently, the query times out and the primary server marks the
volume DOWN.
If the error occurs for an optimized duplication job, verify that the source
storage server is configured as a load balancing server for the target storage
server. Also verify that the target storage server is configured as a load
balancing server for the source storage server.
See “Viewing MSDP disk errors and events” on page 735.
Then, restart both components. Do not change the values of the other two shared
memory parameters.
The SharedMemoryEnabled parameter is stored in the following file:
storage_path\etc\puredisk\agent.cfg
If the job details also include errors similar to the following, it indicates that an image
clean-up job failed:
This error occurs if a deduplication backup job fails after the job writes part of the
backup to the Media Server Deduplication Pool. NetBackup starts an image
cleanup job, but that job fails because the data necessary to complete the image
clean-up was not written to the Media Server Deduplication Pool.
Deduplication queue processing cleans up the image objects, so you do not need
to take corrective action. However, examine the job logs and the deduplication logs
to determine why the backup job failed.
See “About MSDP queue processing” on page 518.
See “NetBackup MSDP log files” on page 714.
Note: RefDBEngine and refdb do not refer to, nor are they related to, the open
source RefDB reference database and bibliography tool.
Troubleshooting 726
Troubleshooting MSDP operational issues
To delete the disk pool, you must first delete the image fragments. The nbdelete
command deletes expired image fragments from disk volumes.
To delete the fragments of expired images
Run the following command on the primary server:
UNIX: /usr/openv/netbackup/bin/admincmd/nbdelete -allvolumes -force
Windows: install_path\NetBackup\bin\admincmd\nbdelete -allvolumes
-force
The -allvolumes option deletes expired image fragments from all volumes that
contain them.
The -force option removes the database entries of the image fragments even if
fragment deletion fails.
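The command differs across platforms only in its installation path; a small sketch that selects the appropriate binary and shows the invocation (echoed rather than executed, since nbdelete exists only on a NetBackup primary server):

```shell
# Select the platform-specific nbdelete path from the text above, then
# display the invocation instead of running it.
case "$(uname -s)" in
    Linux|SunOS|AIX|HP-UX)
        NBDELETE=/usr/openv/netbackup/bin/admincmd/nbdelete ;;
    *)
        NBDELETE='install_path\NetBackup\bin\admincmd\nbdelete' ;;
esac
echo "$NBDELETE" -allvolumes -force
```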
2 Cancel the incomplete jobs by running the following command for each backup
ID returned by the previous command (xxxxx represents the backup ID):
UNIX: /usr/openv/netbackup/bin/admincmd/nbstlutil cancel -backupid
xxxxx
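When the previous command returns several incomplete jobs, the cancel command runs once per backup ID; a sketch of that loop follows. The backup IDs are fabricated, and each invocation is echoed rather than executed:

```shell
# Echo one nbstlutil cancel invocation per backup ID. On a real primary
# server, the ID list would come from the earlier nbstlutil output.
NBSTLUTIL=/usr/openv/netbackup/bin/admincmd/nbstlutil
CMDS=$(for bid in client1_1428405567 client2_1428405890; do
    echo "$NBSTLUTIL cancel -backupid $bid"
done)
echo "$CMDS"
```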
■ The Both IPv4 and IPv6 option is enabled for the primary server, the media
server that hosts the NetBackup Deduplication Engine, and the client. The Both
IPv4 and IPv6 option is configured in the Network settings host properties.
■ The IPv6 network is configured as a preferred network for the primary server,
the media server that hosts the NetBackup Deduplication Engine, and the client.
The preferred network Match (Above network will be preferred for
communication) property also is enabled. Preferred networks are configured
in the Preferred network host properties.
■ The IPv6 network is chosen for the backup.
Examine the bpbrm log file for an error similar to the following:
probe_ost_plugin: sts_get_server_prop_byname failed: error 2060057
If the error message appears, the NetBackup host name cache may not contain
the correct host name mapping information. The cache may be out of sync if DNS
changes in your network environment were not fully propagated throughout your
environment. It takes some amount of time for DNS changes to propagate throughout
a network environment.
To resolve the problem, do the following on the NetBackup primary server and on
the MSDP storage server:
1. Stop the NetBackup services.
2. Run the following command:
UNIX: /usr/openv/netbackup/bin/bpclntcmd -clearhostcache
Windows: install_path\NetBackup\bin\bpclntcmd.exe -clearhostcache
3. Start the NetBackup services.
For more information about client deduplication logging, see the description of
“Client deduplication proxy plug-in” in the “MSDP log files” topic.
See “NetBackup MSDP log files” on page 714.
The messages may indicate a client name case sensitivity issue in your MSDP
environment.
The NetBackup Deduplication Engine (spoold) was too busy to respond. Examine
the Disk Logs report for errors that include the name PureDisk. Examine the
disk monitoring services log files for details from the deduplication plug-in.
A user tampered with the storage. Users must not add files to, change files on, delete
files from, or change file permissions on the
storage. If a file was added, remove it.
Storage capacity was increased. If you grew the storage, you must restart the
NetBackup services on the storage server so the
new capacity is recognized.
Firewall ports are not open. Ensure that ports 10082 and 10102 are open in
any firewalls between the deduplication hosts.
Operation Activity Monitor job details Status in bpdm and bptm log
files
The message may indicate a client name case sensitivity issue in your MSDP
environment.
Catalog backup
Table 19-8 describes the error messages that may occur when you create or update
a catalog backup policy. The messages are displayed in the shell window in which
you ran the drcontrol utility. The utility also writes the messages to its log file.
Code or message Description
1 Fatal error in an operating system or deduplication command that the drcontrol utility calls.
110 The command cannot find the necessary NetBackup configuration information.
140 The user who invoked the command does not have administrator privileges.
227 This error code is passed from the NetBackup bplist command. The MSDP catalog backup policy
you specified does not exist or no backups exist for the given policy name.
For more information about status codes and error messages, see the following:
■ The Troubleshooter in the NetBackup Administration Console.
■ NetBackup Status Codes Reference Guide
1009 8 Authorization Authorization request from <IP> for user <USER> denied
(<REASON>).
1015 1 Critical Task creation failed, could not initialize task class on
server PureDisk:server1.example.com on host
server1.example.com.
1044 multiple multiple The usage of one or more system resources has exceeded a
warning level. Operations will or could be suspended.
Please take action immediately to remedy this situation.
1057 A data corruption has been detected. The data consistency check
detected a data loss or data corruption in the Media Server Deduplication Pool
(MSDP) and reported the affected backups.
The backup ID and policy name appear in the NetBackup Disk Logs report
and the storage_path/log/spoold/storaged.log file on the storage
server.
See “About MSDP storage capacity and usage reporting” on page 488.
See “Troubleshooting MSDP operational issues” on page 723.
Troubleshooting 738
Unable to obtain the administrator password to use an AWS EC2 instance that has a Windows OS
If there is an error log, the issue is that different NetBackup domains use the
same MSDP user to access one MSDP storage server, which multi-domain support
does not allow.
■ Backup or restore jobs fail with following error message in job details:
2 If two NetBackup domains use the same MSDP user to access an MSDP server,
update the MSDP storage server credentials in the second NetBackup domain.
The second NetBackup domain should use the new MSDP user to access the
MSDP storage server. Run the following command on every load balancing server
of the MSDP storage server in the second NetBackup domain:
tpconfig -update -stype PureDisk -storage_server
<msdp_storage_server> -sts_user_id <user_name> -password
<password>
For Windows:
<install_path>\Veritas\pdde\crcontrol.exe --taskstat
Troubleshooting 741
Troubleshooting the cloud compaction error messages
2. Check the client column for the list of clients that belong to the NetBackup
domain, and identify the workload of the clients from each domain.
3. Run the bpplclients command on one NetBackup domain to list all clients
of that domain.
Step 2 Redirect your backup jobs Redirect your backup jobs to the media server
deduplication pool storage unit. To do so, change the backup policy storage
destination to the storage unit for the deduplication pool.
http://www.veritas.com/docs/DOC5332
Step 3 Repurpose the storage After all of the backup images that are associated with
the storage expire, repurpose that storage.
■ About direct migration from Cloud Catalyst to MSDP direct cloud tiering
NetBackup 8.3 and later releases include support for MSDP direct cloud tiering.
This technology offers improved performance, reliability, usability, and
flexibility over the previous Cloud Catalyst product. You are encouraged to
move to MSDP direct cloud tiering to take advantage of these improvements as
well as future enhancements.
If you want to continue using Cloud Catalyst, you can do so on servers that run
NetBackup versions 8.1 through 8.3.0.2, because those versions are compatible
with NetBackup 9.0 and later. Those older versions are supported as back-level
servers for NetBackup 9.0 and later primary server installations. After you
upgrade the NetBackup primary server to version 9.0 or later, you must use
the command line to configure a Cloud Catalyst server. You cannot use the web
UI with NetBackup 9.0 and later to configure Cloud Catalyst.
An nbcheck utility test has been added to the NetBackup install process to prevent
Cloud Catalyst servers from being upgraded to version 9.0 and later. If Cloud
Catalyst is detected on the server, the install stops. The server remains unchanged,
and continues to run the currently installed version of NetBackup after the upgrade
is stopped.
To use this strategy, you must first configure a new NetBackup 8.3 or later
MSDP direct cloud tier storage server. Or, add an
MSDP direct cloud tier disk pool and storage unit to an existing NetBackup 8.3 or
later MSDP storage server (verify server capacity). Next, modify the storage lifecycle
policies and backup policies to use the new MSDP direct cloud tier storage. Once
all new duplication or backup jobs write to the new MSDP direct cloud tier storage,
the images on the old Cloud Catalyst storage gradually expire. After all those images
have expired, the Cloud Catalyst server can be retired or repurposed.
The advantages of the natural expiration strategy are as follows:
■ Available with NetBackup version 8.3 and later. This strategy gives you improved
performance, reliability, usability, and flexibility available in MSDP direct cloud
tier. Can be used without upgrading to NetBackup 10.0.
■ Can be implemented gradually using new MSDP Cloud storage servers while
Cloud Catalyst storage servers continue to be used.
■ Can be used for all environments including public cloud cold storage (for
example: AWS Glacier or AWS Glacier Deep Archive).
■ All new data is uploaded with the MSDP direct cloud tiering, which uses cloud
storage more efficiently than Cloud Catalyst. The long-term total cloud storage
usage and cost may be reduced.
The disadvantages of the natural expiration strategy are as follows:
■ Until all the old Cloud Catalyst images have been expired and deleted, there is
some duplication of data in cloud storage. This duplication can occur between
the old Cloud Catalyst images and new MSDP direct cloud tier images. Additional
storage costs could be incurred if you use a public cloud environment.
■ Requires a separate server.
■ Cloud Catalyst servers must be maintained until all uploaded images from those
servers have expired or are otherwise no longer needed.
Combination strategy
This strategy works in most environments except those using public cloud cold
storage (example: AWS Glacier or AWS Glacier Deep Archive). This strategy is a
combination of the previous two strategies. To use this strategy, you must first
configure a new NetBackup 8.3 or later MSDP direct cloud tier storage server. Or,
add an MSDP direct cloud tier disk pool and storage unit to an existing NetBackup
8.3 or later MSDP storage server (verify server capacity). Next, modify the storage
lifecycle policies and backup policies to use the new MSDP direct cloud tier storage.
Once all the new duplication or backup jobs write to the new MSDP direct cloud
tier storage, the oldest images on the old Cloud Catalyst storage gradually expire.
When the number of remaining unexpired images on the old Cloud Catalyst storage
drops below a determined threshold, those remaining images are moved. These
Migrating from Cloud Catalyst to MSDP direct cloud tiering 748
About Cloud Catalyst migration strategies
images are moved to the new MSDP direct cloud tier storage using a manually
initiated bpduplicate command. After all remaining images have been moved from
the old Cloud Catalyst storage to the new MSDP direct cloud tier storage, the Cloud
Catalyst server can be retired or repurposed.
The advantages of the combination strategy are as follows:
■ Available with NetBackup version 8.3 and later. This strategy gives you improved
performance, reliability, usability, and flexibility available in MSDP direct cloud
tier. Can be used without upgrading to NetBackup 10.0.
■ Can be implemented gradually using new MSDP direct cloud tier storage servers
while Cloud Catalyst storage servers continue to be used.
■ All new data and all old Cloud Catalyst data are uploaded with MSDP direct
cloud tiering, which uses cloud storage more efficiently than Cloud Catalyst.
The long-term total cloud storage usage and cost may be reduced.
■ Enables retiring of the old Cloud Catalyst servers before all images on those
servers have expired.
The disadvantages of the combination strategy are as follows:
■ Public cloud cold storage environments (for example: AWS Glacier or AWS
Glacier Deep Archive) support restore from the cloud but do not support
duplication from the cloud, so this strategy cannot be used.
■ If public cloud storage is used, potentially significant data egress charges are
incurred. This issue can happen when old Cloud Catalyst images are read to
duplicate them to the new MSDP direct cloud tier storage.
■ Additional network traffic to and from the cloud occurs when the old Cloud
Catalyst images are duplicated to the new MSDP direct cloud tier storage.
■ Until all Cloud Catalyst images have expired or have been moved to MSDP
direct cloud tier storage, there is some duplication of data in cloud storage. This
duplication can occur between the old Cloud Catalyst images and new MSDP
direct cloud tier images, so additional costs could be incurred if you use a public
cloud environment.
■ Requires a separate server.
■ Cloud Catalyst servers must be maintained until all uploaded images from those
servers have expired, have been moved to the new MSDP direct cloud tier, or
are no longer needed.
With this strategy, the existing Cloud Catalyst
server can be reimaged and reinstalled as a new MSDP direct cloud tier storage
server using the latest release. If you use an existing server, that server must meet
the minimum requirements to be used.
See “About the media server deduplication (MSDP) node cloud tier” on page 25.
See “Planning your MSDP deployment” on page 34.
Note that this operation would not be an upgrade. Instead, it would be a remove
and reinstall operation. Once the new MSDP direct cloud tier storage server is
available, the nbdecommission -migrate_cloudcatalyst utility is used to create
a new MSDP direct cloud tier. This new storage can reference the data that Cloud
Catalyst previously uploaded to cloud storage. When the migration process is
complete and the utility has been run, the new MSDP direct cloud tier can be
used for new backup and duplication operations. This new storage can also be
used for restore operations of older Cloud Catalyst images.
For more information about the nbdecommission command, see the NetBackup
Commands Reference Guide.
The advantages of the direct migration strategy are as follows:
■ Can be used for all environments including public cloud cold storage (for
example: AWS Glacier or AWS Glacier Deep Archive).
■ Does not require a separate server since the Cloud Catalyst server can be
reimaged as an MSDP direct cloud tier server and used for migration.
The disadvantages of the direct migration strategy are as follows:
■ Cannot be implemented gradually using the new MSDP direct cloud tier storage
servers while Cloud Catalyst storage servers continue to be used for new backup
or duplication jobs. The old Cloud Catalyst storage server cannot be used for
new backup or duplication jobs while the migration process is running.
■ Cloud Catalyst uses cloud storage less efficiently than MSDP direct cloud tier.
This issue is especially true for Cloud Catalyst on NetBackup versions older
than 8.2. Because this strategy continues to use the existing Cloud Catalyst
objects for new MSDP direct cloud tier images, some of the cloud storage
efficiency that is gained with MSDP direct cloud tier is not realized.
■ Requires a new MSDP server so an existing MSDP server cannot be used and
consolidation of any Cloud Catalyst servers is not possible.
See “About beginning the direct migration” on page 751.
Migrating from Cloud Catalyst to MSDP direct cloud tiering 750
About direct migration from Cloud Catalyst to MSDP direct cloud tiering
About requirements for a new MSDP direct cloud tier storage server
You must use a new MSDP server with no existing disk pools as the new MSDP
direct cloud tier storage server for the migration. You can reinstall and reuse the
Cloud Catalyst server as the new MSDP direct cloud tier server. However, it may
be better to use a new MSDP server with newer hardware and keep the existing
Cloud Catalyst server intact. You can keep the existing Cloud Catalyst server as a
failsafe in case of an unexpected failure during the migration process.
For more information about the minimum requirements for a new MSDP direct cloud
tier storage server:
See “About the media server deduplication (MSDP) node cloud tier” on page 25.
The new MSDP direct cloud tier server must have at least 1 TB of free disk
space. Migration is possible to a system with less free disk space. However, an
extra step is required after the creation of the new MSDP server and before you
run the Cloud Catalyst migration. This extra step involves modifying the default
values for CloudDataCacheSize and CloudMetaCacheSize in the contentrouter.cfg
file.
■ If the Cloud Catalyst storage server type ends with _rawd, check the
KMSOptions section of contentrouter.cfg on the Cloud Catalyst server.
Verify whether KMS is enabled, and then locate the KMS key group name. If
the KMSOptions section does not exist, KMS is not enabled. If the
KMSOptions section does exist, the KMSEnable entry is True if KMS is
enabled and False if it is disabled.
■ You can use the /usr/openv/pdde/pdcr/bin/keydictutil --list
command on the Cloud Catalyst server to view these KMS settings (version
8.2 and later of Cloud Catalyst).
■ You can use the /usr/openv/netbackup/bin/admincmd/nbkmsutil
-listkgs command on the NetBackup primary server to list the KMS key
group names. Verify that the KMS key group name you have gathered exists
and is correct.
■ The name to be used for the new disk volume for the migrated MSDP direct
cloud tier storage server.
■ The name to be used for the new disk pool for the migrated MSDP direct cloud
tier storage server.
■ Any cloud credentials (if you use an AWS IAM role, plan to enter dummy for
both the access key and the secret access key).
■ All other cloud-specific configuration information.
■ A list of all NetBackup policies and SLPs that currently write to the Cloud Catalyst
storage server.
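The KMS check in the list above can be scripted. The following sketch is an assumption-laden helper, not a documented interface: it assumes contentrouter.cfg uses the INI-style [KMSOptions] section layout described above, and the path to the file varies by installation.

```shell
# Hedged sketch: report KMS settings from a Cloud Catalyst contentrouter.cfg.
# Assumes an INI-style [KMSOptions] section holding KMSEnable and the key
# group entry, as described in the text above; adjust the path for your server.
show_kms_options() {
  local cfg="$1"
  if ! grep -q '^\[KMSOptions\]' "$cfg"; then
    echo "KMS not enabled: no KMSOptions section in $cfg"
    return 1
  fi
  # Print only the lines between [KMSOptions] and the next section header.
  awk '/^\[KMSOptions\]/ { in_kms = 1; next }
       /^\[/            { in_kms = 0 }
       in_kms && NF     { print }' "$cfg"
}
```

For example, `show_kms_options /msdp/etc/puredisk/contentrouter.cfg` (path illustrative) would print the KMSEnable entry and the key group name if KMS is configured.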
After you have gathered the previous list of information, download the
sync_to_cloud utility from the Veritas Download Center and make it available on
the Cloud Catalyst server for use during the premigration procedure.
Verify that the MSDP data selection ID (DSID) used for Cloud Catalyst is 2. Review
the contents of the <CloudCatalyst cache
directory>/storage/databases/catalog directory. There should be one
subdirectory and the name of that subdirectory should be 2. If there are more
subdirectories or if the subdirectory 2 does not exist, contact Veritas Support for
assistance as this issue must be corrected before continuing.
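The DSID check above can be sketched as a small helper; CACHE_DIR stands in for your Cloud Catalyst cache directory, and the expected layout (exactly one subdirectory named 2) is taken from the text above.

```shell
# Hedged sketch of the DSID sanity check: there should be exactly one
# subdirectory under <cache dir>/storage/databases/catalog, named "2".
check_dsid() {
  local catalog_dir="$1/storage/databases/catalog"
  local subdirs
  subdirs=$(cd "$catalog_dir" && ls -d */ 2>/dev/null | tr -d '/')
  if [ "$subdirs" = "2" ]; then
    echo "OK: DSID is 2"
  else
    echo "Unexpected catalog layout: '$subdirs' - contact Veritas Support"
    return 1
  fi
}
```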
On the primary server, ensure a catalog backup policy (policy type: NBU-Catalog)
exists and it has a policy storage destination other than the Cloud Catalyst storage
server to be migrated. A manual backup of this catalog backup policy is initiated at
certain points in the migration process to enable rollback recovery from a failed
migration. If a catalog backup on storage other than the Cloud Catalyst server does
not exist, recovery from a failed migration may be difficult or impossible.
Note: Any errors that are seen in the following procedure should be addressed
before you begin the final migration. Read the full procedure and text following the
procedure before you begin this process in your environment.
4 Run a catalog cleanup on the primary server using the bpimage -cleanup
command.
Location: /usr/openv/netbackup/bin/admincmd/bpimage -cleanup
-allclients -prunetir
5 Once the catalog cleanup completes, process the MSDP transaction queue
manually on the Cloud Catalyst server using the crcontrol --processqueue
command and wait for the processing to complete.
Location: /usr/openv/pdde/pdcr/bin/crcontrol --processqueue
See “Processing the MSDP transaction queue manually” on page 519.
6 Repeat step 5 to verify that all images have been processed.
7 Monitor the /usr/openv/netbackup/logs/esfs_storage log on the Cloud Catalyst
server for at least 15 minutes to ensure that all delete requests have been
processed.
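One way to automate the 15-minute watch in step 7 is to wait until the log has stopped changing. This is only a rough proxy for "all delete requests processed", and the helper below is a sketch: pass it the log file (for example, the newest file under the esfs_storage log location named above) and a quiet period in seconds.

```shell
# Hedged sketch: block until a file has not been modified for a given number
# of seconds (900 = 15 minutes). Uses GNU stat's "-c %Y" (Linux).
wait_until_quiet() {
  local logfile="$1" quiet_secs="${2:-900}"
  while true; do
    local age=$(( $(date +%s) - $(stat -c %Y "$logfile") ))
    if [ "$age" -ge "$quiet_secs" ]; then
      break
    fi
    sleep 10
  done
  echo "$logfile quiet for at least $quiet_secs seconds"
}
```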
8 On the Cloud Catalyst server run the /usr/openv/pdde/pdcr/bin/cacontrol
--catalog recover all_missing command.
Warning: If this step reports any errors, those errors must be addressed before
you continue to the next step. Contact Veritas Support if assistance is needed
in addressing the errors.
Monitor this file for errors and contact Veritas Technical Support if any errors
are reported.
10 When the previous steps have been completed without error, run the
sync_to_cloud utility and wait for it to complete. Running this utility may take
time depending on the environment.
See “About beginning the direct migration” on page 751.
About installing and configuring the new MSDP direct cloud tier server
You need a new MSDP direct cloud tier server with no existing disk pools for the
Cloud Catalyst migration. This section assumes that the primary server has been
upgraded to the latest version of NetBackup (10.0 or later) which supports migration.
This section also assumes that the latest version of NetBackup (10.0 or later)
has been installed on the media server or appliance to be used for migration.
See “About requirements for a new MSDP direct cloud tier storage server”
on page 750.
See “About the media server deduplication (MSDP) node cloud tier” on page 25.
Configure an MSDP direct cloud tier server on the media server to be used for
migration. Do not configure any disk pools for that storage server. You must use
the same KMS settings when configuring the new MSDP direct cloud tier server as
were used for Cloud Catalyst. If the Cloud Catalyst storage server type ends in
_cryptd (for example: PureDisk_amazon_cryptd) then KMS needs to be enabled.
If the Cloud Catalyst storage server type ends in _rawd (for example:
PureDisk_azure_rawd) then KMS may or may not need to be enabled. This
information should be compiled before migration as noted in the About beginning
the direct migration section.
Note: If KMS needs to be enabled then all three KMS-related checkboxes on the
MSDP server configuration screen in the web UI need to be checked. Also, the
KMS key group name from Cloud Catalyst needs to be entered. Mismatched KMS
settings can cause problems attempting to access any of the data that Cloud Catalyst
uploaded. You must verify that all KMS-related information matches.
The new MSDP direct cloud tier server must have at least 1 TB free disk space.
You can migrate to a system with less free disk space. However, an extra step is
required after you create the new MSDP direct cloud tier server and before you run
the Cloud Catalyst migration. This extra step involves modifying the default values
for CloudDataCacheSize and CloudMetaCacheSize in the contentrouter.cfg file.
See “About the configuration items in cloud.json, contentrouter.cfg, and spa.cfg”
on page 285.
The new MSDP server should be set to the correct time, for example by using an
NTP server. If the time is incorrect on the MSDP server, some cloud
providers may report an error (for example: Request Time Too Skewed) and fail
upload or download requests. Refer to your specific cloud vendor documentation
for more information.
Note: After configuring the new MSDP server and before continuing, run a manual
backup of the catalog backup policy (policy type NBU-Catalog). Do not skip this
step as it is very important to run this manual backup. This backup establishes a
point in time to return to if the migration does not complete successfully.
Running the migration to the new MSDP direct cloud tier server
Before you continue the process of installing and configuring the new MSDP direct
cloud tier server, it is recommended that you set up logging. If any issues arise
during installation, the logs help with diagnosing any potential errors during migration.
The following items are recommended:
■ Ensure that the /usr/openv/netbackup/logs/admin directory exists before
running the nbdecommission command.
■ Set the log level to VERBOSE=5 in the bp.conf file.
■ Set loglevel=3 in /etc/pdregistry.cfg for OpenCloudStorageDaemon.
■ Set Logging=full in the contentrouter.cfg file.
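The bp.conf part of this checklist can be scripted as below. The helper takes the file and directory paths as parameters so it can be tried against copies first; the VERBOSE entry format follows standard bp.conf syntax. The pdregistry.cfg and contentrouter.cfg edits are section-scoped, so they are safer to make by hand or with an INI-aware editor.

```shell
# Hedged sketch: create the admin log directory and ensure a "VERBOSE = 5"
# entry exists in bp.conf, per the checklist above.
prepare_logging() {
  local bpconf="$1" admin_log_dir="$2"
  mkdir -p "$admin_log_dir"
  if grep -q '^VERBOSE' "$bpconf" 2>/dev/null; then
    # Replace any existing VERBOSE entry rather than appending a duplicate.
    sed -i 's/^VERBOSE.*/VERBOSE = 5/' "$bpconf"
  else
    echo 'VERBOSE = 5' >> "$bpconf"
  fi
}
```

For example: `prepare_logging /usr/openv/netbackup/bp.conf /usr/openv/netbackup/logs/admin`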
To run the migration, go to the command prompt on the MSDP direct cloud tier
server and run:
/usr/openv/netbackup/bin/admincmd/nbdecommission -migrate_cloudcatalyst
Note: This utility needs to be run in a window that does not time out or close,
even if it runs for several hours or more. If the migration is performed on an
appliance, you need access to the maintenance shell, and the maintenance shell
must remain unlocked and enabled while the migration runs, even if it runs for
several hours or more.
Select the Cloud Catalyst storage server to migrate and enter the information as
prompted by the nbdecommission utility.
The following is an example of what you may see during the migration:
# /usr/openv/netbackup/bin/admincmd/nbdecommission -migrate_cloudcatalyst
MSDP storage server to use for migrated CloudCatalyst: myserver.test.com
Cloud Storage Server Cloud Bucket CloudCatalyst Server Storage Server Type
1) amazon.com my-bucket myserver.test.com PureDisk_amazon_rawd
Enter new disk volume name for migrated CloudCatalyst server: newdv
Enter new disk pool name for migrated CloudCatalyst server: newdp
Enter cloud account username or access key: AAAABBBBBCCCCCDDDDD
Enter cloud account password or
secret access key: aaaabbbbccccddddeeeeffffggg
Before proceeding further, please make sure that no jobs are running on
media server myserver.test.com or media server newmsdpserver.test.com.
This command may not be able to migrate CloudCatalyst
with active jobs on either of those servers.
The next step is to list the objects in the cloud and migrate
the MSDP catalog. The duration of this step depends on how much data
was uploaded by CloudCatalyst.
It may take several hours or longer, so please be patient.
Monitor the output of the nbdecommission command for errors. Other logs to
monitor for activity and potential errors are in the storage_path/log/ directory.
Monitor the ocsd_storage log, and check the spad and spoold logs for any
cacontrol command issues.
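A quick sweep for error lines across those log locations might look like the following sketch. The directory layout under storage_path/log/ is taken from the text above, and the match pattern is illustrative, not an official error format.

```shell
# Hedged sketch: scan a log directory tree for error-looking lines and
# report a clean result when nothing matches.
scan_logs_for_errors() {
  local log_dir="$1"
  if grep -riE 'error|fatal' "$log_dir" 2>/dev/null; then
    echo "Review the matches above before continuing."
    return 1
  fi
  echo "No error lines found under $log_dir"
}
```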
If an error is encountered and you can correct the error, you can resume the
migration from that point using the start_with option as noted in the output from
the nbdecommission command. If you have any questions about the error, contact
Veritas Support before you resume the migration.
Prompt: No MSDP storage server found on myserver.test.com. Please create the
MSDP storage server before running this utility.
Description: This output is displayed when the nbdecommission
-migrate_cloudcatalyst command is run on a media server that does not have an
MSDP storage server configured.
See “About installing and configuring the new MSDP direct cloud tier server”
on page 754.

Prompt: Disk pools exist for storage server PureDisk myserver.test.com.
CloudCatalyst migration requires a new storage server with no configured disk
pools.
Description: This sample output is displayed when the nbdecommission
-migrate_cloudcatalyst command is run on a media server that does have an MSDP
storage server configured and does have existing disk pools configured. Cloud
Catalyst migration can only be run on a new MSDP cloud tier server with no
existing disk pools.

Prompt: Enter cloud bucket name:
Description: If the Cloud Catalyst server is not running at the time of
migration, you need to manually enter the existing Cloud Catalyst bucket or
container name. This information is used for migration.
Prompt: Enter CloudCatalyst server hostname:
Description: If the Cloud Catalyst server is not running at the time of
migration, you need to manually enter the server hostname of the existing Cloud
Catalyst server to be migrated.

Prompt: Is MSDP KMS encryption enabled for amazon.com? (y/n) [n]:
Description: If the Cloud Catalyst server is not running at the time of
migration, you may need to manually enter the KMS configuration settings for
the existing Cloud Catalyst server.

Prompt: Enter new disk volume name for migrated CloudCatalyst server:
Description: Enter the name of the MSDP Cloud disk volume to be created on the
new MSDP cloud tier server. This name is used for the migrated Cloud Catalyst
data.

Prompt: Enter new disk pool name for migrated CloudCatalyst server:
Description: Enter the name of the MSDP Cloud disk pool to be created on the
new MSDP server and used for the migrated Cloud Catalyst data.

Prompt: Enter cloud account username or access key: / Enter cloud account
password or secret access key:
Description: Enter the credentials for the cloud account that is used to access
the Cloud Catalyst data to be migrated. If you use an AWS IAM role to access
the data, enter dummy for both the access key and the secret access key.
Do not run this cleanup until you are certain that you have no need of reverting
to Cloud Catalyst to access your data. This step is optional, and no
functionality is affected if it is never done.
Run the following command to clean up the obsolete objects:
/usr/openv/pdde/pdcr/bin/cacontrol --catalog
cleanupcloudcatalystobjects <lsuname>
You can answer y to this question if you are certain that you do not use the
image sharing feature in your NetBackup environment.
You should leave the default answer of n in place for all other situations, or
if you are unsure whether your environment uses image sharing.
You must run an additional command on the image sharing server before you can
access any images that were uploaded to the cloud by Cloud Catalyst. This
command should only be run if you use image sharing. Run the following command
on the image sharing server:
/usr/openv/pdde/pdcr/bin/cacontrol
--catalog buildcloudcatalystobjects <lsuname>
/usr/openv/netbackup/bin/admincmd/nbemmcmd -listhosts
-display_server -machinename myserver.test.com
-machinetype media -verbose
If you need to clear the administrative pause MachineState for a server, run
the following command:
/usr/openv/netbackup/bin/admincmd/nbemmcmd -updatehost
-machinename myserver.test.com -machinetype media
-machinestateop clr_admin_pause -masterserver mymaster.test.com
Note: The -dryrun option does not modify the primary server catalog entries to
move the images to the new MSDP cloud tier server. Therefore, you cannot do a
test restore or other operations to access the data when using the -dryrun option.
After using the -dryrun option you must manually delete the newly added cloud
volume in the cloud storage (for example: AWS, Azure, or other cloud vendor) using
the cloud console or other interface. If you do not delete this new volume, then
future migration operations are affected.
Migrating from Cloud Catalyst to MSDP direct cloud tiering 764
About Cloud Catalyst migration cacontrol options
Note: Many of the cacontrol command options are not intended to be run directly
because the nbdecommission command invokes them as needed. Carefully review
all options in Table B-2.
Table B-2 lists the cacontrol command options that you can use during the Cloud
Catalyst migration and how to use those options.
buildcloudcatalystobjects Location:
/usr/openv/pdde/pdcr/bin/cacontrol
--catalog buildcloudcatalystobjects <lsuname>
This option creates a lookup table for image sharing after successful migration to
the MSDP cloud tier. After migration, this command should be run on the image
sharing server and then the services on that server should be restarted.
cleanupcloudcatalystobjects Location:
/usr/openv/pdde/pdcr/bin/cacontrol
--catalog cleanupcloudcatalystobjects <lsuname>
This option removes unused Cloud Catalyst objects from the cloud after successful
migration to the MSDP cloud tier server. This optional step may be run a few
days or weeks after the migration. It cleans up any Cloud Catalyst objects that
the new MSDP cloud tier server does not need. Do not run this command unless
you are confident that the migration was successful: you cannot revert to Cloud
Catalyst to access the data once this command is run.
migratecloudcatalyst Location:
/usr/openv/pdde/pdcr/bin/cacontrol
--catalog migratecloudcatalyst <lsuname>
<cloudcatalystmaster> <cloudcatalystmedia>
[skipimagesharing] [start_with]
migratecloudcatalyststatus Location:
/usr/openv/pdde/pdcr/bin/cacontrol
--catalog migratecloudcatalyststatus <lsuname>
4 Select the catalog backup image that was created before running the
nbdecommission -migrate_cloudcatalyst command to migrate Cloud
Catalyst to MSDP cloud tier server.
5 Complete all steps in the wizard to recover the NetBackup catalog.
6 Stop and restart the NetBackup services on the primary server.
7 On the Cloud Catalyst server, ensure that the esfs.json file has the setting
ReadOnly set to 0.
If you only need to do restores and do not intend to run new backup or
duplication jobs to Cloud Catalyst, then set ReadOnly to 1.
8 Start the NetBackup services on the Cloud Catalyst server.
9 Once the Cloud Catalyst storage server has come online, you can proceed
with restores, backups, or optimized duplication jobs.
Backup or optimized duplication jobs require that ReadOnly is set to 0 in the
esfs.json file.
10 If running a Cloud Catalyst version older than 8.2 (example: 8.1, 8.1.1, 8.1.2),
you may need to deploy a new host name-based certificate for the media
server. You can deploy the certificate by running the following command on
the primary server:
/usr/openv/netbackup/bin/admincmd/bpnbaz -ProvisionCert
<CloudCatalyst host-name>
You must restart the NetBackup services on the Cloud Catalyst server.
11 You may need to run the following command to allow Cloud Catalyst to read
from the bucket in the cloud storage:
/usr/openv/esfs/bin/setlsu_ioctl
<cachedir>/storage/proc/cloud.lsu <bucketname>
No harm is done if you run this command when it is not needed. If you do run
the command, you can see the following output:
return code: -1
File exists.
12 (Optional) Remove the entire MSDP cloud sub-bucket folder in cloud storage
to avoid wasted space and avoid any problems with future migration to MSDP
cloud tier server.
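The ReadOnly edit in step 7 can be made with a JSON-aware tool rather than by hand. The sketch below is an assumption-laden helper: it assumes ReadOnly is a flat top-level key in esfs.json holding "0" or "1" as described above, and the file location varies by installation.

```shell
# Hedged sketch: set the ReadOnly flag in esfs.json while preserving the rest
# of the file. Assumes a flat top-level "ReadOnly" key with a string value
# ("0" = read-write, "1" = read-only), per the procedure above.
set_esfs_readonly() {
  local esfs_json="$1" value="$2"
  python3 - "$esfs_json" "$value" <<'PYEOF'
import json, sys
path, value = sys.argv[1], sys.argv[2]
with open(path) as f:
    cfg = json.load(f)
cfg["ReadOnly"] = value
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
PYEOF
}
```

Make a backup copy of esfs.json before editing it, and verify the change before restarting the NetBackup services.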
Migrating from Cloud Catalyst to MSDP direct cloud tiering 768
Reverting back to Cloud Catalyst from a failed migration
The following procedure assumes that the Cloud Catalyst server was reused and
reinstalled as an MSDP cloud tier server, or is unavailable for some other reason.
Reverting back to Cloud Catalyst when the server was reused and reinstalled
when the migration was performed
1 Stop the NetBackup services on the new MSDP cloud tier server.
2 Open the NetBackup web UI.
3 Click Recovery. Then click NetBackup catalog recovery.
4 Select the catalog backup image that was created before running the
nbdecommission -migrate_cloudcatalyst command to migrate Cloud
Catalyst to MSDP cloud tier server.
5 Complete all steps in the wizard to recover the NetBackup catalog.
6 Stop and restart the NetBackup services on the primary server.
7 Reinstall the Cloud Catalyst server using the same NetBackup version and
EEB bundles that were active when migration was performed.
8 Then contact Veritas Technical Support to use the rebuild_esfs process to
recover that Cloud Catalyst server from the data in cloud storage. (The
rebuild_esfs process supersedes the old drcontrol method of recovering
a Cloud Catalyst server. The drcontrol method is deprecated.)
9 (Optional) Remove the entire MSDP cloud sub-bucket folder in cloud storage
to avoid wasted space and avoid any problems with future migration to MSDP
cloud tier server.
Migration failures that occur after the Disk pool message is displayed require
recovering the primary server catalog to revert to Cloud Catalyst.
If you do not recover the primary server catalog, you must manually delete the new
disk pool, disk volume, cloud storage server, and the MSDP cloud tier server. You
must delete these after reverting back to Cloud Catalyst.
The following procedure assumes that the migration fails before the Disk pool
message appears in the output. The procedure also assumes that the Cloud Catalyst
server is not reused as the MSDP cloud tier server for migration.
Reverting back to Cloud Catalyst after a failed migration
1 Stop the NetBackup services on the new MSDP cloud tier server.
2 On the Cloud Catalyst server, ensure that the esfs.json file has ReadOnly
set to 0.
If you only need to do restores and do not intend to run new backup or
duplication jobs to Cloud Catalyst, then set ReadOnly to 1.
3 Start the NetBackup services on the Cloud Catalyst server.
4 Once the Cloud Catalyst storage server has come online, you can proceed
with restores, backups, or optimized duplication jobs.
Backup or optimized duplication jobs require that ReadOnly is set to 0 in the
esfs.json file.
5 If running a Cloud Catalyst version 8.2 or earlier, you may need to deploy a
new host name-based certificate for the media server. You can deploy the
certificate by running the following command on the primary server:
/usr/openv/netbackup/bin/admincmd/bpnbaz -ProvisionCert
<CloudCatalyst host-name>
You must restart the NetBackup services on the Cloud Catalyst server.
6 You may need to run the following command to allow Cloud Catalyst to read
from the bucket in the cloud storage:
/usr/openv/esfs/bin/setlsu_ioctl
<cachedir>/storage/proc/cloud.lsu <bucketname>
No harm is done if you run this command when it is not needed. If you do run
the command, you can see the following output:
return code: -1
File exists.
7 (Optional) Remove the entire MSDP cloud sub-bucket folder in cloud storage
to avoid wasted space and avoid any problems with future migration to MSDP
cloud tier server.
The following procedure assumes that the migration fails on a Cloud Catalyst
server that was reused and reinstalled as an MSDP cloud tier server.
Reverting back to Cloud Catalyst after a failed migration when the Cloud
Catalyst server was reused
1 Stop the NetBackup services on the new MSDP cloud tier server.
2 Reinstall the Cloud Catalyst server using the same NetBackup version and
EEB bundles that were active when migration was performed.
3 Then contact Veritas Technical Support to use the rebuild_esfs process to
recover that Cloud Catalyst server from the data in cloud storage. (The
rebuild_esfs process supersedes the old drcontrol method of recovering
a Cloud Catalyst server. The drcontrol method is deprecated.)
4 (Optional) Remove the entire MSDP cloud sub-bucket folder in cloud storage
to avoid wasted space and avoid any problems with future migration to MSDP
cloud tier server.
Appendix C
Encryption Crawler
This appendix includes the following topics:
■ Advanced options
■ Tuning options
Graceful mode
Unless the user specifies a different mode with the crcontrol --encconvertlevel
command, Encryption Crawler’s default mode is Graceful. In this mode, it runs only
when the MSDP pool is relatively idle and no compaction or CRQP jobs are active.
An idle MSDP pool usually means that no backup, restore, duplication, or
replication jobs are active on it. To prevent the Encryption Crawler from
overloading the system, it does not run continuously. In Graceful mode, the
Encryption Crawler may therefore take longer to finish.
The Graceful mode checks that the MSDP pool is relatively idle. It checks the pool
state by calculating the I/O statistics on the MSDP pool, and checks that no
compaction or CRQP jobs are active, before it processes each data container. It
pauses if the MSDP pool is not idle or if compaction or CRQP jobs are active. In
most cases, Graceful mode pauses when backup, restore, duplication, or
replication jobs are active on the MSDP pool.
If the data deduplication rate of the active NetBackup jobs is high, the I/O operation
rate could be low and the MSDP pool could be relatively idle. In this case, the
Graceful mode may run if no compaction or CRQP jobs are active.
Encryption Crawler 773
About the two modes of the Encryption Crawler
If the MSDP fingerprint cache loading is in progress, the I/O operation rate on the
MSDP pool is not low. In this case, the Graceful mode may pause and wait for the
fingerprint cache loading to finish. The Encryption Crawler monitors the spoold log
and waits for the message that begins with ThreadMain: Data Store nodes have
completed cache loading before restarting. The location of the spoold log is:
storage_path/log/spoold/spoold.log. To check if compaction or CRQP jobs
are active, run the crcontrol --compactstate or crcontrol --processqueueinfo
command.
To have the Graceful mode run faster, you can use the Advanced Options of
CheckSysLoad, BatchSize, and SleepSeconds to tune the behavior and performance
of Graceful mode. With a larger number for BatchSize and a smaller number for
SleepSeconds, Graceful mode runs more continuously.
If you turn off CheckSysLoad, Graceful mode runs while backup, restore, duplication,
replication, compaction, or CRQP jobs are active. Such changes can make Graceful
mode more active, however it’s not as active as Aggressive mode.
Aggressive mode
In this mode, the Encryption Crawler disables CRC check and compaction. It runs
while backup, restore, duplication, replication, or CRQP jobs are active.
The Aggressive mode affects the performance of backup, restore, duplication, and
replication jobs. To minimize the effect, use the Graceful mode. This choice
temporarily pauses the encryption process when the system is busy and can slow
down that process. The Aggressive mode keeps the process active and
aggressively running regardless of system state.
The following points are items to consider when Aggressive mode is active:
■ Any user inputs and the last progress are retained on MSDP restart. You do
not need to rerun the command to recover. The Encryption Crawler recovers
and continues from the last progress automatically.
■ You must enforce encryption with the encrypt keyword on the ServerOptions
option in the contentrouter.cfg file in MSDP. You must also restart MSDP
before enabling Encryption Crawler, otherwise the Encryption Crawler does not
indicate that it is enabled.
■ If your environment is upgraded from a release older than NetBackup 8.1, you
must wait until the rolling Data Conversion finishes before you enable the
Encryption Crawler. If you don’t wait, the Encryption Crawler does not indicate
that it is enabled.
■ You cannot repeat the Encryption Crawler process after it finishes. Only the
data that existed before you enable encryption is unencrypted. All the new data
is encrypted inline and does not need the scanning and crawling.
Encryption Crawler 774
Managing the Encryption Crawler
Option Description
The num variable is optional and indicates the number for the
partition index (starting from 1). The parameter enables the
Encryption Crawler for the specified MSDP partition.
For example,
[root@vrawebsrv4663 ~]#
/usr/openv/pdde/pdcr/bin/crcontrol --encconverton
Encryption conversion turned on for all
partitions
Optionally, you can specify a verbose level (0-2) for this option.
For example,
/usr/openv/pdde/pdcr/bin/crcontrol
--encconvertstate 2
Once the Encryption Crawler is turned on, you can monitor the status, mode, and
progress with the crcontrol --encconvertstate command.
Item Description
Level Shows in which level and mode the Encryption Crawler is.
The value is in the format mode (level), for example Graceful
(1).
Containers Estimated The estimated number of data containers in the MSDP pool
that the Encryption Crawler must process. It is a statistical
estimate and may be inaccurate for performance reasons.
Once the Encryption Crawler is turned on, the value is not
updated.
Containers Scanned The number of data containers that the Encryption Crawler
has scanned so far.
Containers Skipped The number of data containers that the Encryption Crawler
skipped. The reasons vary and are described in About the
skipped data containers.
Data Size Scanned The aggregated data size of the scanned data containers for
Containers Scanned.
Data Size Converted The aggregated data size of the converted data containers
for Containers Converted.
Progress The proportion of the total estimated data containers that the
Encryption Crawler has scanned.
Conversion Ratio The proportion of the scanned data size which the Encryption
Crawler has converted.
The Progress line in the log can be used to extrapolate how long the Encryption
Crawler is expected to take. For example, if 3.3% of the pool is completed in 24
hours, the process may take about 30 days to finish.
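That extrapolation is simple arithmetic: estimated total time ≈ elapsed time divided by the fraction complete. The one-liner below reproduces the example figures from the text.

```shell
# ETA from the Progress value: total ~= elapsed / fraction done.
# With 3.3% done after 24 hours, the estimate is about 30 days.
progress_pct=3.3
elapsed_hours=24
awk -v p="$progress_pct" -v h="$elapsed_hours" \
    'BEGIN { printf "estimated total: %.1f days\n", (100 / p) * h / 24 }'
```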
Note: The Encryption Crawler processes the data containers in reverse order from
new to old.
It is possible to back up new data after encryption is enforced but before the
Encryption Crawler is turned on. If that happens, the Conversion Ratio could be
less than 99% for the new data containers at the beginning. While the process is
running, the value of Conversion Ratio can increase because the older data
containers can potentially contain more unencrypted data. In this case, the
Conversion Ratio, Containers Converted, and Containers Estimated can help
estimate the speed for these data containers.
Monitoring the change of Conversion Ratio can give some indication for the
proportion of the unencrypted data while the Encryption Crawler is active.
Note: During the encryption process, the progress survives in the case of MSDP
restart.
■ Check if the VpFS root share vpfs0 owns the data containers.
■ The data containers that the VpFS root share vpfs0 owns are empty.
Advanced options
You can specify the options that are shown under the EncCrawler section in
contentrouter.cfg to change the default behavior of the Encryption Crawler. The
options affect only the Graceful mode and are not present in the file by default. You must add them if needed.
After you change any of these values, you must restart the Encryption Crawler
process for the changes to take effect. Restart the Encryption Crawler process with
the crcontrol command and the --encconvertoff and --encconverton options.
You do not need to restart the MSDP services.
After the initial tuning, you may want to occasionally check the progress and the
system effect for the active jobs. You can do further tuning at any point during the
process if desired.
Tuning options
SleepSeconds
Type: Integer
Range: 1-86400
Default: 5
The idle time, in seconds, for the Graceful mode after it processes a batch of data containers.

BatchSize
Type: Integer
Range: 1-INT_MAX
Default: 20
The number of data containers that the Graceful mode processes as a batch between idle periods.

CheckSysLoad
Type: Boolean
Range: yes or no
Default: yes
The Graceful mode does not run if it detects an active backup, restore, duplication, replication, compaction, or CRQP job. When you set this option to no, the Graceful mode skips that check. Instead, it processes BatchSize data containers, sleeps for SleepSeconds seconds, processes another batch, and sleeps again. It continues this cycle until complete.
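For example, to make the Graceful mode process larger batches with shorter idle time, you might add a section like the following to contentrouter.cfg (the values shown are illustrative, not recommendations; the option names, ranges, and defaults are as documented above):

```ini
[EncCrawler]
; Process 100 containers per batch instead of the default 20.
BatchSize=100
; Sleep 2 seconds between batches instead of the default 5.
SleepSeconds=2
; Keep running even while backup/restore/duplication jobs are active.
CheckSysLoad=no
```

Remember that after changing these values you must restart the Encryption Crawler process with crcontrol --encconvertoff and --encconverton for the changes to take effect.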
Tuning the Graceful mode
To speed up the Graceful mode, you can use the CheckSysLoad, BatchSize, and SleepSeconds options to tune its behavior and performance.
See “Advanced options” on page 780.
With a larger BatchSize and a smaller SleepSeconds, the Graceful mode runs more continuously. When you turn off CheckSysLoad, the Graceful mode keeps running while backup, restore, duplication, replication, compaction, or CRQP jobs are active. Such changes make the Graceful mode more aggressive, although not as aggressive as the Aggressive mode. The advantage is that the tuned Graceful mode has less effect on system performance for backup, restore, duplication, and replication jobs than the Aggressive mode, even at the Aggressive mode's lowest level, 2. The trade-off, especially when CheckSysLoad is turned off, is that the Graceful mode becomes semi-aggressive: it can affect system performance for the active jobs, and it makes the CRC check, CRQP processing, or compaction take longer to run and finish.
Actions and explanations:

Turn on the Encryption Crawler in the Graceful mode with the default settings.
Veritas recommends that you wait for fingerprint cache loading to complete before you perform any backups or turn on the Encryption Crawler. Determine when to start by monitoring the spoold log and waiting for the message that begins with ThreadMain: Data Store nodes have completed cache loading.
After you turn on the Encryption Crawler process, review the following: first, check the Progress item and confirm that the Encryption Crawler is making progress. If there is no progress, or progress is slower than expected, tune the settings to speed up the process. Use the Progress item to extrapolate how long the Encryption Crawler is expected to take. For example, if 3.3% of the pool is completed in 24 hours, the process may take about 30 days to finish.
Tune the Graceful mode to run faster.
You can use the information in Tuning the Graceful mode to speed up the Graceful mode. After the initial tuning, you may need to check the progress and the system effect for the active jobs occasionally. You can do further tuning at any point during the process if desired. If the tuned Graceful mode negatively affects system performance for the active jobs, consider turning off the Encryption Crawler for some of the MSDP partitions and keeping it running for the others, by following the recommendations in Turn on Encryption Crawler for part of the MSDP partitions to reduce system effect. You can also consider turning off the DataStore Write permission for some of the MSDP partitions that have the Encryption Crawler running, by following the recommendations in Selectively disable DataStore Write for the MSDP partitions to reduce system effect. If the processing speed still does not meet your expectations, you can use the Aggressive mode for your environment.
Turn on the Aggressive mode.
You can use the information in Tuning the Aggressive mode to get the best performance from the Encryption Crawler. Veritas recommends that you start from the lowest level, 2, then gradually increase to a higher level. You may need to check the progress and the system effect for the active jobs occasionally. You can perform further tuning at any point during the process if desired.
Find the tuning point that best balances the process speed and the system effect.
A faster Encryption Crawler usually means more effect on the system for all active jobs. A combination of tuning options can strike a good balance between the two.
Check the data format of a data container before the Encryption Crawler process. The following is an example of the output:

data format : [LZO Compressed Streamable, v2, window size 143360 bytes]
data format : [LZO Compressed Streamable, v2, window size 143360 bytes]
data format : [LZO Compressed Streamable, v2, window size 143360 bytes]
data format : [LZO Compressed Streamable, v2, window size 143360 bytes]
data format : [LZO Compressed Streamable, v2, window size 143360 bytes]

Check the data format of a data container after the Encryption Crawler process. For example:

[root@rsvlmvc01vm0771 /]# /usr/openv/pdde/pdcr/bin/dcscan --so-data-format 3080|grep "data format"|grep -i -e "AES" -e "Encrypted"

The following is an example of the output:
Note: The encryption reporting tool is not supported on Flex WORM setups.
Table C-5

OS and Python requirements, with details:

Python requirements for encryption_reporting on Linux Red Hat installations.
NetBackup Red Hat installations come with Python, and there are no extra steps for getting Python running.

Python requirements for encryption_reporting on Windows and Linux SUSE BYO installations.
NetBackup 10.0 and newer versions require you to install Python 3.6.8-3.9.16. Currently, no additional software packages are required to be installed. Navigate to the directory containing encryption_reporting (\Veritas\pdde on Windows and /usr/openv/pdde/pdcr/bin on Linux SUSE) and run it as a Python script.
By default, the reporting tool creates a thread pool of two threads. The tool uses
these threads to search for unencrypted data or to encrypt the unencrypted data.
A thread is used to process one MSDP mount point to completion. Upon completing
the processing of a mount point, the thread is returned to the thread pool. The
thread is then used to process any additional mount point that is queued up for
processing.
The number of threads is equal to the number of mountpoints that can be processed
concurrently. You can increase or decrease the thread pool’s thread count by
specifying the -n option. The minimum thread count is 1 and the maximum is 20.
The reporting tool is I/O intensive. Increasing the thread count up to the total number
of MSDP mountpoints usually means better performance for the reporting tool. It
also means more load on the system which can affect performance of backup,
restore, deduplication, and replication jobs. No performance gains are observed
for using more threads than there are mountpoints.
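The thread-pool behavior described above can be sketched as follows. This is an illustrative model only, not the actual encryption_reporting implementation; the function and mount-point names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def process_mount_point(mount_point: str) -> str:
    # Placeholder for searching one MSDP mount point to completion.
    return f"processed {mount_point}"

def run_report(mount_points, n_threads=2):
    # Default pool of two threads; the -n option accepts 1-20.
    n_threads = max(1, min(n_threads, 20))
    # Each thread takes one mount point to completion, returns to the
    # pool, and then picks up the next queued mount point. At most
    # n_threads mount points are processed concurrently.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(process_mount_point, mount_points))

print(run_report(["/msdp/vol0", "/msdp/vol1", "/msdp/vol2"]))
```

As the text notes, raising the thread count above the number of mount points yields no further concurrency gain, because idle threads simply have no queued mount point to pick up.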
When using the reporting tool to search for the unencrypted data, each thread
invokes one instance of dcscan. Each dcscan instance uses roughly N * 160 MB
of memory. In this equation, N is the number of MSDP mountpoints on the server.
If there are a total of 12 MSDP mountpoints, each dcscan instance uses about 1.8
GB of memory. If there are four threads running in the reporting tool, the reporting
tool and the dcscan processes consume more than 7 GB of memory.
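The memory estimate above can be worked through as a quick calculation (using the guide's rule of thumb of roughly 160 MB per mount point per dcscan instance):

```python
def dcscan_memory_gb(n_mount_points: int, n_threads: int) -> float:
    # Each dcscan instance uses roughly n_mount_points * 160 MB,
    # and one instance runs per reporting-tool thread.
    per_instance_mb = n_mount_points * 160
    return n_threads * per_instance_mb / 1024.0

# 12 mount points, 4 threads -> about 7.5 GB across the dcscan processes
print(round(dcscan_memory_gb(12, 4), 1))
```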
On a Windows BYO, the default path to dcscan is C:\Program Files\Veritas\pdde. If you have dcscan installed somewhere else, you must use the -d or --dcscan_dir option to specify the correct location.
The encryption_reporting tool does not account for data that the Encryption Crawler encrypted. If you previously ran the Encryption Crawler to encrypt data, clear the metadata files with the -c option if they exist, then re-run encryption_reporting to get up-to-date information.
Veritas does not recommend that you run the reporting tool while the Encryption
Crawler process is active.
Perform the KMS conversion during a maintenance window. Conversion may take a very long time to complete because it depends on the amount of data that needs to be converted.
To convert the legacy KMS to KEK-based KMS
1 Reset the Encryption Crawler if it was used previously.
/usr/openv/pdde/pdcr/bin/crcontrol --encconvertreset
2 Run the following command to start the KEK rotation process in MSDP.
/usr/openv/pdde/pdcr/bin/crcontrol --kekconverton
3 Set up the new KMS service with the same key group name the current KMS
service is using.
4 Create an active KMS key in the new KMS service.
5 Configure the new KMS service in NetBackup with a priority of 0.
6 Verify that NetBackup reports both KMS services on the primary server.
/usr/openv/netbackup/bin/nbkmscmd -listKMSConfig
7 Update the priority of the new KMS service to a priority greater than the priority
that is set on the previous KMS service.
/usr/openv/netbackup/bin/nbkmscmd -updateKMSConfig -name
configuration_name [-server primary_server_name] [-priority
priority_of_KMS_server]
troubleshooting (continued)
general operational problems 727
server not found error 721
U
unified logging 708
format of files 709
uninstalling media server deduplication 546
V
viewing deduplication pool attributes 508
viewing storage server attributes 498
VM backup 176
volume manager
Veritas Volume Manager for deduplication
storage 69
vxlogview command 709
with job ID option 712