
Best Practices for Running Dassault Systèmes ENOVIA on SQL Server 2008


Linking enterprise and multi-discipline engineering users, and enabling all
users to work within a single version of the truth

Technical Reviewers:
Randy Dyess, Solid Quality Mentors
Shaun Tinline-Jones, Microsoft
boB Taylor, Microsoft

Microsoft Corporation
Published: December 2010
Abstract
Enterprises have come to rely on increasingly robust product lifecycle management (PLM) systems to
streamline their businesses and keep costs down. PLM systems, in turn, rely on high-performance
databases. When the database system underlying the PLM software is unreliable or slow, the PLM
software cannot work efficiently, and the value of the system is eroded. Therefore, selecting a PLM
system, and selecting a database for that PLM system, are important business decisions.
Dassault Systèmes ENOVIA V6 is a standards-based, scalable PLM system that is able to support the
largest, most complex product lifecycle management deployments. It provides the flexibility to easily
configure business processes, user interfaces, and infrastructure options to help ensure that they meet an
organization's needs.
This white paper provides best practices for setting up a Microsoft SQL Server 2008 database platform for
ENOVIA V6. It also describes some of the benefits that SQL Server 2008 data management software brings
to an ENOVIA V6 PLM system, demonstrating that together, SQL Server 2008 and ENOVIA V6 provide a
reliable and cost-effective PLM system that can help an enterprise improve time to market and
profitability.

Silvano Coriani, Microsoft
Michael Finocchiaro, Dassault Systèmes
Tim Besser, Dassault Systèmes

Disclaimer
This document is provided "as-is." Information and views expressed in this document, including URL and
other Internet Web site references, may change without notice. You bear the risk of using it.
Some examples are for illustration only and are fictitious. No real association is intended or inferred.
This document does not provide you with any legal rights to any intellectual property in any Microsoft
product. You may copy and use this document for your internal, reference purposes.



Table of Contents
OVERVIEW
  ENOVIA V6 ARCHITECTURE
BEST PRACTICES FOR RUNNING ENOVIA ON SQL SERVER 2008
  PLANNING AND INSTALLING THE SQL SERVER 2008 INSTANCE
    Use a Dedicated Database Server and SQL Instance
    Selecting the Appropriate SQL Server 2008 Edition
    Select Which Features to Install
    Set the Service Account for SQL Server
    Set the Server Collation During Installation
    Choose the Security Mode
    Decide Which Services to Auto-Start
  TEMPDB BEST PRACTICES
    Move tempdb
    Size tempdb
    Performance Tuning for Microsoft SQL Server (tempdb)
    Use Read-Committed Snapshot
    Use Multiple, Dedicated Disks
  CONFIGURE MEMORY FOR SQL SERVER 2008
    Configure RAM on 64-bit Servers
    Configure RAM on 32-bit Servers (V6R2011 and Earlier Only)
  INSTALL ENOVIA DATABASE
    File Placement on the Disk Subsystem
    Size Data Files Up Front
    Distribute the PRIMARY Filegroup Across Files on Multiple Disks (Optional)
    Use Data Compression (Enterprise Edition Only)
  ADJUST SQL SERVER 2008 AND DATABASE SETTINGS
    Verifying Database Collation
    Set Automatic Statistic Updates, Auto Shrink, and Other Options
    Choose a Database Recovery Model
  ONGOING MANAGEMENT
    The Database Backup Plan
    Synchronize ENOVIA Database and File Backups
    Monitor Index Fragmentation and Defragment when Necessary
    Periodically Check Database Integrity
    Monitor Space Used
    Use SQL Server Agent Jobs and Alerts
  TOOLS FOR MONITORING DATABASE SERVER PERFORMANCE
    SQL Server Activity Monitor
    Standard Database Reports in SSMS
    Performance Monitor
CHECKLIST
SUMMARY
LINKS FOR FURTHER INFORMATION



Overview
Dassault Systèmes ENOVIA delivers a collaborative product lifecycle management (PLM) solution
to customers in a diverse set of industries, including aerospace, automotive, life sciences, and
industrial equipment. ENOVIA V6, the latest version of the ENOVIA software, is an enterprise
PLM solution for implementing business processes, letting all users view, share, and create in
3D. ENOVIA V6 delivers a single PLM platform for intellectual property (IP) management that
makes global collaboration possible via an HTTP connection.
ENOVIA V6 Architecture
The ENOVIA V6 architecture is composed of a centralized database with distributed file stores.
The database is accessible to all users, regardless of their location. The remote sites use a local
file server to load large representation files, eliminating time-consuming network transmissions.
The ENOVIA V6 clients use an HTTP or HTTPS connection to the application server. All
communication between sites supports proxy-reverse proxy implementation and encryption
capabilities if Secure Sockets Layer (SSL) is implemented. In addition to the IT security schemas,
the ENOVIA V6 applications deliver specific security schemas for user management (P&O, or
permissions and ownerships) and data access (security).
The ENOVIA V6 server contains the ENOVIA Live Collaboration environment, and the
applications are deployed via a Web Application Archive (WAR) or Enterprise Archive (EAR) file
in a Java Platform, Enterprise Edition (Java EE) web application server. These components, along
with the required database server and other application servers, make up the ENOVIA PLM
solution. Figure 1 provides a graphical representation of the ENOVIA V6 architecture.

Figure 1: ENOVIA V6 architecture

Best Practices for Running ENOVIA on SQL
Server 2008
Among the strengths of using Microsoft SQL Server 2008 as a database platform for ENOVIA is
the relative simplicity of managing and tuning the database engine. There is, nevertheless, a
range of best practices for managing SQL Server 2008 production deployments and for getting
the best performance and reliability from the database platform. Many of these best practices
are standard procedures, irrespective of the ENOVIA application, while some specific guidance is
driven by the nature of the ENOVIA database and application.
The following sections discuss a range of best practices and recommended choices for ENOVIA
deployments on SQL Server 2008.
Planning and Installing the SQL Server 2008 Instance
The following sections describe best practices for planning and installing the SQL Server 2008
instance.
Use a Dedicated Database Server and SQL Instance
Dassault Systèmes recommends hosting the SQL Server 2008 database on a dedicated server for
better scalability. Using a dedicated server prevents the ENOVIA database workload from having
to compete for CPU, memory, network, and I/O with other processes, and permits individual
tuning of servers dedicated to different roles (for example, application servers).
Selecting the Appropriate SQL Server 2008 Edition
The database platform for ENOVIA V6R2011 can be either the Standard or Enterprise edition of
Microsoft SQL Server 2008 Service Pack 1 (SP1) or Microsoft SQL Server 2008 R2. For most
functional aspects of ENOVIA, SQL Server 2008 Standard and Enterprise editions are equivalent.
However, the Enterprise edition of SQL Server 2008 includes a variety of higher-end features,
including the ability to use more than four CPUs in a database server. Table 1 summarizes some
of the key distinctions between editions.

Table 1: Choose Enterprise or Standard edition of SQL Server

Choose Standard edition if:
- Your database server has <= 4 CPU sockets
- Your high-availability requirements can be met with a 2-node failover cluster

Choose Enterprise edition if:
- Your server has > 4 CPU sockets
- Your high-availability requirements are best met with a failover cluster with 3+ nodes
- You will use SQL Server Backup Compression
- Any of these other features are important for your deployment of ENOVIA: managing server capacity with Resource Governor, online index rebuilds, or hot add of RAM and CPU
SQL Server 2008 Enterprise edition also has a wide array of advanced data warehousing features
for analytics, data mining, and extraction, transformation, and loading (ETL) of data. For more
information about the features of SQL Server 2008, see SQL Server 2008 Editions.
Note: SQL Server 2008 data compression helps compress the data inside a database,
and it can help reduce the size of the database. Data compression provides data space
savings, and because compressed data is stored in fewer pages, queries need to read
fewer pages from the disk, thereby improving the performance of I/O-intensive
workloads. However, extra CPU resources are required on the database server to
compress and decompress the data while data is exchanged with the application.
Therefore, it is important to understand the workload characteristics when deciding to
enable SQL Server data compression.
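As a sketch of how that evaluation might look, the following T-SQL first estimates the savings for a table and then enables page compression. The table name dbo.LXBO is an illustrative placeholder, not an actual ENOVIA object name; both statements require the Enterprise edition.

```sql
-- Estimate the space savings for a (hypothetical) table before committing
-- to compression; the procedure returns current vs. compressed sizes.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'LXBO',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- If the estimate is worthwhile for this workload, rebuild the table
-- with page compression enabled.
ALTER TABLE dbo.LXBO REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Running the estimate first lets you weigh the projected space savings against the extra CPU cost before changing any production table.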
Table 2 provides information on which version of the Windows Server operating system
supports which version of SQL Server 2008.
Table 2: Platform support for SQL Server 2008

Platform: Windows Server 2008 (64-bit) x64 (Intel, AMD)
SQL Server client: SQL Native Client 10.0 on each server that requires connection to SQL Server 2008
SQL Server: Microsoft SQL Server 2008 Standard or Enterprise editions

Platform: Windows Server 2008 R2 (64-bit) x64 (Intel, AMD)
SQL Server client: SQL Native Client 10.0 on each server that requires connection to SQL Server 2008 R2
SQL Server: Microsoft SQL Server 2008 R2 Standard or Enterprise editions

Note: ENOVIA V6R2011 requires SQL Server 2008 SP1.

Because host operating system support for ENOVIA V6R2011 is available only for Windows
Server 2008 and Windows Server 2008 R2 operating systems with the x64 (64-bit) architecture,
installing the corresponding x64 (64-bit) binaries for SQL Server 2008 and SQL Server 2008 R2 is
recommended.
Before installation, ensure that you are using the correct setup program. Fortunately, SQL Server
2008 makes this verification easy. The main SQL Server 2008 setup wizard has an Options page
to specify which architecture will be installed (see Figure 2). Ensure that the 32-bit variant (x86)
is not selected.
Note: The ENOVIA V6 Application Server is not supported on the Itanium 64-bit IA64
architecture.

Figure 2: Verify the architecture to be installed
All computers that need to connect to the SQL Server back end must have the SQL Native Client
components installed. Depending on their host operating system architecture (x86 or x64) and
the relative ENOVIA V6 application components installed (32-bit or 64-bit), they also require the
Open Database Connectivity (ODBC) Data Source Name (DSN) to be configured in the
appropriate way.
On the application server side, ENOVIA V6R2011 (May 2010 release) is available in both 32-bit
and 64-bit versions running on 32-bit and 64-bit Java virtual machines (JVMs) respectively. In a
production system, installing the 64-bit release is highly recommended to fully take advantage
of the characteristics of the host operating system (since Windows Server 2008 and Windows
Server 2008 R2 are only supported as 64-bit operating systems). Only the V6R2011x release is
supported in 64-bit mode on 64-bit JVMs. To configure the required data source name and
connect to SQL Server from the application server, see Table 3.
connect to SQL Server from the application server, see Table 3.
Table 3: Creating the ODBC Data Source Names

Windows Server 2008 x64, 64-bit JVM (R2011 and later) | Control Panel's ODBC Data Source Administrator
Windows Server 2008 x64, 32-bit JVM (R2011 and earlier) | %WINDIR%\SysWOW64\odbcad32.exe
Windows Server 2008 R2 x64, 64-bit JVM (R2011 and later) | Control Panel's ODBC Data Source Administrator
Windows Server 2008 R2 x64, 32-bit JVM (R2011 and earlier) | %WINDIR%\SysWOW64\odbcad32.exe

For ENOVIA Studio Modeling Platform Rich Clients such as MQL, Matrix, System, and Business,
the supported operating system architectures have changed. ENOVIA V6R2010x and previous
releases were 32-bit only, while ENOVIA V6R2011 has both 32-bit and 64-bit versions. This
means that for 32-bit applications running on 64-bit operating system versions, the required
ODBC Data Source names have to be created using the 32-bit version of the ODBC Data Source
Administrator found at %WINDIR%\SysWOW64\odbcad32.exe. For any other combinations (32-
bit apps on 32-bit operating systems and 64-bit apps on 64-bit operating systems), the required
ODBC Data Source names must be created using the Control Panel's ODBC Data Source
Administrator.
For ENOVIA V6R2011x and later releases, only the 64-bit versions of the ENOVIA Studio
Modeling Platform Rich Clients are available, so the required ODBC Data Source names must be
created using the Control Panel's ODBC Data Source Administrator. Refer to Table 4 for a
summary.
Table 4: Creating the ODBC Data Source Names

32-bit Windows client operating system, 32-bit client (R2011 and earlier) | Control Panel's ODBC Data Source Administrator
64-bit Windows client operating system, 32-bit client (R2011 and earlier) | %WINDIR%\SysWOW64\odbcad32.exe
64-bit Windows client operating system, 64-bit client | Control Panel's ODBC Data Source Administrator

It is important to note that for ENOVIA V6R2011, the 32-bit version is mainly intended for test
and demonstration environments that are still running on 32-bit Windows Server operating
systems; for production environments, the 64-bit version of ENOVIA V6R2011 on x64 releases of
the Windows Server operating system is recommended.
Select Which Features to Install
SQL Server 2008 includes a rich array of components for data management and business
intelligence (BI). A minimal installation for ENOVIA requires only the SQL Server Database Engine
and Management Tools. These selections are made in the Feature Selection step of the SQL
Server 2008 setup. The choices shown in Figure 3 are the minimum features for an ENOVIA
database installation.

Figure 3: SQL Server features required for an ENOVIA installation
Many database administrators (DBAs) also find it useful to install the SQL Server Books Online
product documentation and Integration Services. For other servers that need to access SQL
Server, you must install SQL Native Client. The installation package (Sqlncli.msi) is available
on the SQL Server 2008 installation DVD, or you can download it from the Microsoft web site
as part of the Microsoft SQL Server 2008 or SQL Server 2008 R2 Feature Pack.

Note: When using SQL Server 2008 as the database platform for ENOVIA, you must
install SQL Server 2008 SP1 on the server and all client computers. This service pack is
also required for SQL Native Client 10 components on every computer that needs to
connect to the database server. After the service pack is installed, the version of the
Native Client should be 10.00.1779 or newer. If you have already created a data source,
you need to re-create the data source after downloading this patch. Please refer to the
Dassault Systmes Program Directory to know the exact SQL Server service pack
supported for ENOVIA V6 on SQL Server 2008.
Set the Service Account for SQL Server
The SQL Server Database Engine runs as a Windows service. Like all Windows services, the SQL
Server Database Engine must execute in the context of a specific Windows user account. The
service account for the SQL Server Database Engine is set during installation and can be changed
afterward using SQL Server Configuration Manager. Changing the service account for any SQL
Server-related service directly through the Windows Services management snap-in is not
supported.
In general, a domain account is recommended instead of a SQL Server account or a local account
for the Database Engine service. The same recommendation applies to the SQL Server Agent
service. As shown in Figure 4, you configure the service account during SQL Server 2008 setup at
the Server Configuration step.

Figure 4: Specify SQL Server service accounts

Note that in Figure 4, svc-EnoviaSqlAgent and svc-EnoviaDatabase are accounts on the Active
Directory domain of which the server is a member; you do not have to use these exact names
for your accounts.
Ensure That the Database Engine Can Use Fast File Initialization
SQL Server 2008 can initialize its data files instantly, without arduously zeroing out the files on
disk. This saves time when creating new database files and when growing existing files. For large
databases, the impact can be significant. To use this capability, however, the Database Engine
service requires a specific permission in the Windows operating system. Dassault Systèmes
recommends granting this permission to the service account to get the performance benefits of
fast file initialization.
The permission required to use fast file initialization is called Perform Volume Maintenance
Tasks. To grant this permission to the Database Engine service account, use the following steps:
1. Open the Local Security Policy management snap-in by running Secpol.msc.
2. In the left task pane, expand the tree to expose Security Settings > Local Policies > User
Rights Assignment, and then select the User Rights Assignment node. Scroll through the
list in the right pane until you see the Perform volume maintenance tasks policy (see
Figure 5).


Figure 5: Manage Windows user rights to allow fast file initialization
3. Double-click Perform volume maintenance tasks to open the properties dialog box. Use
this dialog box to add the service account used for the SQL Server Database Engine.

Figure 6 shows svc-EnoviaDatabase (the example service account) added to the list of
accounts having this permission.


Figure 6: Grant SE_MANAGE_VOLUME_NAME to SQL Server Database Engine service account
You can grant this permission any time before or after SQL Server 2008 is installed. If SQL Server
2008 is already installed, however, you need to restart the Database Engine service to see the
effects of this change. Until you restart the Database Engine, large file initializations will take a
long time, because SQL Server 2008 defaults to zeroing out the entire file on disk.
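One way to confirm that fast file initialization is actually in effect is to temporarily trace file-initialization activity to the SQL Server error log and create a throwaway database. This is a diagnostic sketch using documented trace flags; IFI_Test is a hypothetical database name, and sp_readerrorlog is an undocumented but commonly used procedure.

```sql
-- Log file-initialization activity (trace flag 3004) to the error log (3605).
DBCC TRACEON (3004, 3605, -1);

-- Create and drop a throwaway database. If fast file initialization is NOT
-- working, the error log will show a "Zeroing ..." message for the .mdf file.
-- (Log files are always zeroed, so a zeroing message for the .ldf is expected.)
CREATE DATABASE IFI_Test;
DROP DATABASE IFI_Test;

DBCC TRACEOFF (3004, 3605, -1);

-- Search the current error log for zeroing messages.
EXEC sp_readerrorlog 0, 1, 'Zeroing';
```

If the only zeroing message refers to the log file, the Perform Volume Maintenance Tasks permission is working as intended for the Database Engine service account.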
Set the Server Collation During Installation
The collation setting in SQL Server 2008 affects how the database software handles sorting and
comparisons of text data. Different collations let the Database Engine account for various
expectations of sorting and comparing text across languages and cultures. It also makes it
possible for a particular installation to be configured to use case-sensitive vs. case-insensitive
comparison (e.g., does "smith" = "Smith"?) and accent-sensitive vs. accent-insensitive
comparison (e.g., does "elan" = "élan"?). SQL Server 2008 lets collation be configured at the
instance and database levels and at more granular levels within databases.


Dassault Systèmes ENOVIA is designed to use Windows collation with SQL Server 2008.
Windows collations are collations defined for SQL Server 2008 to support the Windows system
locales available for the operating system on which SQL Server 2008 instances are installed. By
specifying a Windows collation for SQL Server 2008, the instance of SQL Server 2008 uses the
same code pages and sorting and comparison rules as an application that is running on a
computer for which you have specified the associated Windows locale. For example, the French
Windows collation for SQL Server 2008 matches the collation attributes of the French locale for
Windows.
Dassault Systèmes recommends setting this collation at the SQL Server 2008 instance level to
ensure that it affects all databases used by ENOVIA. It is strongly recommended that server
collation be set at the time SQL Server 2008 is installed. Changing the server collation after
installation is a difficult and involved procedure.
When deploying ENOVIA, you must decide whether to use a case-sensitive (CS) or case-insensitive
(CI) collation. Whichever you choose, you need to make sure that your queries are designed for the
proper case sensitivity.
Administrators can configure ENOVIA for either CI mode or CS mode when using a SQL Server
database. The underlying database must be configured to support the desired sensitivity. By
default, SQL Server 2008 is supported in case-insensitive mode.
In general, case sensitivity does not have an impact on ENOVIA Live Collaboration performance.
An exception may be queries using case-insensitive comparison operators on a case-sensitive
database. For information on configuring the case sensitivity, see Case-Sensitive Mode in the
ENOVIA MQL Guide.
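To see which collation an existing instance and database are actually using, the built-in property functions can be queried directly. MyEnoviaDB below is a placeholder for your actual ENOVIA database name.

```sql
-- Instance-level (server) collation:
SELECT SERVERPROPERTY('Collation') AS ServerCollation;

-- Collation of a specific database (replace MyEnoviaDB with the
-- actual ENOVIA database name):
SELECT DATABASEPROPERTYEX('MyEnoviaDB', 'Collation') AS DatabaseCollation;
```

Comparing the two results is a quick way to confirm that the database inherited the intended instance-level collation.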
Collation is set during SQL Server 2008 installation at the Server Configuration step (see
Figure 7). Click the Collation tab, and then click Customize for Database Engine collation.


Figure 7: Set the server Windows collation for the Database Engine
After clicking Customize, the dialog box shown in Figure 8 appears. Select Windows collation
designator and sort order. In the Collation designator drop-down list, select the Windows
collation. Click OK to return to the Setup wizard.


Figure 8: Choose server collation for a dedicated ENOVIA instance
Choose the Security Mode
ENOVIA requires mixed mode authentication, which can be configured during SQL Server 2008
setup in the Database Engine Configuration step (see Figure 9).


Figure 9: Set the server authentication to Mixed Mode
When choosing mixed mode, you must provide a strong password for the built-in system
administrator (sa) account.
While the sa account can be used to administer SQL Server, it is a best practice to have
administrators log on to the server with their individual user accounts. Use the Specify SQL
Server administrators section of the dialog box to add one or more domain accounts as SQL
Server administrators.
SQL Server 2008 setup does not add any accounts or groups to the sysadmin role by default, but
it is possible to add administrators after the setup is complete by logging in to the SQL Server
using the sa logon and password and adding users to the sysadmin role. Clicking Add Current
User grants sysadmin rights to the user currently executing the setup program.
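For example, an administrator's individual domain account can be granted sysadmin rights with T-SQL such as the following sketch, run while connected as sa or another sysadmin. CONTOSO\EnoviaDBA is a hypothetical account name.

```sql
-- Create a login for a (hypothetical) domain account and add it to the
-- sysadmin fixed server role. In SQL Server 2008, role membership is
-- managed with sp_addsrvrolemember.
CREATE LOGIN [CONTOSO\EnoviaDBA] FROM WINDOWS;
EXEC sp_addsrvrolemember
    @loginame = 'CONTOSO\EnoviaDBA',
    @rolename = 'sysadmin';
```

Granting individual accounts in this way keeps the sa password out of day-to-day use, consistent with the best practice above.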
Decide Which Services to Auto-Start
At minimum, the SQL Server Database Engine should be configured with automatic startup for
stand-alone servers. The SQL Server Agent service is a job scheduling engine that is useful for
running regular maintenance or health-checking tasks. Dassault Systèmes therefore
recommends configuring SQL Server Agent with automatic startup for production database
servers.
The service startup types can be configured in SQL Server 2008 setup at the Server Configuration
step, which is shown in Figure 4. After installation, service startup types can be changed in SQL
Server Configuration Manager.

Note: While the startup for a stand-alone SQL Server 2008 instance can be set to
Automatic, the startup for Windows Server failover cluster service should be set to
Manual. Instead of the service being started automatically, the Windows Server failover
cluster service monitors and manages the instance. The startup type should be set to
Manual for all cluster-aware services, including full-text search and SQL Server Agent,
and cannot be changed during installation.
Microsoft recommends that you configure service accounts individually to provide least
privileges for each service, where SQL Server services are granted the minimum
permissions they need to complete their tasks. For more information, see Setting Up
Windows Service Accounts (http://msdn.microsoft.com/en-us/library/ms143504.aspx)
and SQL Server Configuration - Service Accounts (http://msdn.microsoft.com/en-
us/library/cc281953.aspx) in SQL Server Books Online.

Tempdb Best Practices
The configuration of the SQL Server tempdb database is critical for best performance because of
the potential of added performance stress on tempdb from new features such as read-
committed snapshot isolation level and online index operations.
It is recommended that tempdb be isolated from other database activity and be placed on its
own set of physical disks. It is especially important to use RAID 10 for tempdb.
Move tempdb
By default, tempdb files are located in the same directory as the other SQL Server system
databases. For busy servers, this is usually not optimal. Tempdb should be moved during the
installation process to a high-performance, fault-tolerant disk volume, for example a RAID 1 or
RAID 10 volume on fast disks. Tempdb should not be placed on a RAID 5 volume because of the
inferior write performance of RAID 5. When configuring especially busy high-performance
systems, solid-state disks (SSDs) may also be considered for tempdb.
If you did not move the tempdb during the installation process, move it after the installation is
completed by using the ALTER DATABASE statement. For example, to move tempdb to the G:
drive and the log file to the H: drive, execute the following commands in a Microsoft SQL Server
Management Studio (SSMS) query window:
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'G:\SQLData\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'H:\SQLData\templog.ldf');
GO

You must restart SQL Server 2008 for this change to take effect. Note that SQL Server 2008
creates fresh copies of the tempdb data and log files every time the service starts, so there is no
need to move any files yourself. Just restart the service, and the files will appear in the new
location(s) you have specified; the old files can then be deleted.
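After the restart, the new file locations can be confirmed from the system catalog, for example:

```sql
-- Confirm the configured locations and sizes of the tempdb files; after the
-- service restart, physical_name should reflect the new drives.
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```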
Size tempdb
Pre-sizing tempdb to a sufficiently large size is strongly recommended. Tempdb should be
configured on a separate disk from the operating system and data files. All the files in tempdb
should be identical in size since SQL Server 2008 uses a proportional fill algorithm. Tempdb
sizing should be performed after a series of thorough tests so that it has had a chance to grow to
a meaningful size. You may also want to increase the FILEGROWTH setting for tempdb to 50
megabytes (MB). This prevents tempdb from expanding too frequently, which can affect
performance. Leave the tempdb database set to auto grow, but treat this option as a safety net
for unplanned exceptions rather than as a sizing strategy.
For information on SQL Server 2008 tempdb size recommendations, see Optimizing tempdb
Performance.
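Before pre-sizing tempdb, it is useful to see its current file layout. As a sketch, the following query against the sys.master_files catalog view reports each tempdb file's size and growth setting (file sizes are stored internally in 8-KB pages):

```sql
SELECT name,
       physical_name,
       size * 8 / 1024 AS size_mb,            -- size is stored in 8-KB pages
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```

Comparing size_mb against the values tempdb reaches after a representative test workload indicates how large the files should be pre-sized.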
When the READ_COMMITTED_SNAPSHOT database option is set to ON, logical copies are
maintained for all data modifications performed in the database. Every time a row is modified
by a specific transaction, the instance of the Database Engine stores a version of the previously
committed image of the row in tempdb until the transaction that modified the row is
committed. The tempdb database should be sized to have sufficient capacity to store these row
versions in addition to the other objects that are usually stored in tempdb.
Set the file growth increment to a reasonable size to prevent the tempdb database files from
growing by too small a value. If the file growth is too small compared to the amount of data that
is being written to tempdb, tempdb may have to constantly expand, and this will negatively
affect performance.
Note: To avoid negatively impacting performance, administrators should monitor file
growth and avoid relying on automatic file growth. Every time SQL Server 2008 is started, the
tempdb file is re-created with the default size. While tempdb can grow, it does take
resources to perform this task. To reduce the overhead of tempdb growing, you may
want to permanently increase the default size of tempdb after carefully monitoring its
growth.
Table 5 provides general guidelines for setting the FILEGROWTH increment for tempdb files.
Table 5: Tempdb FILEGROWTH increment recommendations

Tempdb File Size      FILEGROWTH Increment
0 to 100 MB           10 MB
100 to 200 MB         20 MB
200 MB or more        10%
Note: You may have to adjust this percentage based on the speed of the
I/O subsystem on which the tempdb files are located. To avoid potential
latch time-outs, we recommend limiting the auto grow operation to
approximately two minutes. For example, if the I/O subsystem can
initialize a file at 50 MB per second, the FILEGROWTH increment should
be set to a maximum of 6 gigabytes (GBs), regardless of the tempdb file
size. If possible, use instant database file initialization to improve the
performance of auto grow operations.
You should also consider adding multiple data files to the tempdb filegroup. Using multiple files
reduces tempdb contention and yields significantly better scalability. As a general rule, create
one data file for each processor core on the server. For example, a four-processor dual-core
server would be configured with eight tempdb data files.
It is also important to remember the proportional fill strategy when creating additional tempdb
files. Even if the files are different sizes, it is possible to have contention due to the proportional
fill strategy. When creating new files, ensure that all tempdb data files are the same size so that
the proportional fill algorithm does not introduce file contention.
To add multiple data files, use the ALTER DATABASE statement with the ADD FILE clause:
USE master;
GO
ALTER DATABASE tempdb
ADD FILE
(
NAME = tempdev2,
FILENAME = 'C:\tempdb2.ndf',
SIZE = 100MB,
FILEGROWTH = 50MB
) ;
For more information on capacity planning and optimization of tempdb for SQL Server 2008, see
Optimizing Databases.
Additional information related to tempdb for SQL Server 2008 can be found in Working with
tempdb in SQL Server 2005.
Performance Tuning for Microsoft SQL Server (tempdb)
SQL Server provides two options to keep statistics current: auto create statistics and auto
update statistics. These two options are set to TRUE by default. For versions of ENOVIA V6 prior
to V6R2011x, it is necessary to set both options to FALSE for tempdb. Modifying these settings
improves CPU utilization and user response time for ENOVIA VPLM.
You can set auto create statistics and auto update statistics for tempdb using SQL Server
Management Studio or by using the following command:
USE [master]
GO
ALTER DATABASE [tempdb] SET AUTO_CREATE_STATISTICS OFF WITH NO_WAIT
GO
ALTER DATABASE [tempdb] SET AUTO_UPDATE_STATISTICS OFF WITH NO_WAIT
GO
Note: For subsequent releases of ENOVIA V6R2011x, it will no longer be necessary to
manually set these options for tempdb.
Use Read-Committed Snapshot
ENOVIA requires the use of read-committed snapshot isolation. By using the read-committed
snapshot isolation level introduced in Microsoft SQL Server 2005, you can reduce blocking and
deadlocking issues caused by lock contention.
Every time a row is modified by a specific transaction, the instance of the Database Engine
stores a version of the previously committed image of the row in tempdb. The read-committed
snapshot isolation level provides the following benefits:
- SELECT statements do not lock data during a read operation. Read transactions do not
  block write transactions, and vice versa.
- Because SELECT statements do not acquire locks in most cases, the number of locks
  required by a transaction is reduced, which reduces the system overhead required to
  manage locks.
- The possibility of blocking is significantly reduced.
- SELECT statements can access the last committed value of the row while other
  transactions are updating the row, without being blocked.
- The number of blocks and deadlocks is reduced.
- Fewer lock escalations occur.
For more information about the read-committed snapshot isolation level, see Using Snapshot
Isolation.
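The statement below sketches how this isolation level can be enabled for the ENOVIA database (the database name ENOVIA is a placeholder; substitute your actual database name). Changing the option requires exclusive access, so it should be run during a maintenance window; WITH ROLLBACK IMMEDIATE terminates any open transactions so the option can be applied:

```sql
USE master;
GO
-- Enable read-committed snapshot for the application database
ALTER DATABASE ENOVIA
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
GO
-- Verify the setting (1 = enabled)
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'ENOVIA';
```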
Use Multiple, Dedicated Disks
If your database server has sufficient disk resources and your server is heavily utilized, typical
best practices for database file placement should be followed for tempdb. Table 6 summarizes
these best practices.
Table 6: Best practices for tempdb file placement

If You Have                                        Then
One disk volume available for tempdb               Place all tempdb data and log files here, preferably
                                                   with no other heavily utilized files on this volume.
Two disk volumes available for tempdb              Place the tempdb log on one volume. Place all
                                                   tempdb data files on the other volume.
More than two disk volumes available for tempdb    Place the tempdb log alone on one volume.
                                                   Distribute tempdb data files evenly across the
                                                   remaining volumes.
Configure Memory for SQL Server 2008
The following sections describe best practices for configuring memory.
Configure RAM on 64-bit Servers
SQL Server 2008 can dynamically manage memory without administrator intervention. While in
most cases this is adequate, servers that host large databases with heavy user traffic may need
some limits on memory usage to prevent performance degradation. If the size of the ENOVIA
database (actual data space used) is significantly greater than the amount of RAM on the
database server, conflicting demands on the memory may develop over time. SQL Server 2008
will try to continue growing its in-memory data cache to avoid reading from disk, but the
operating system will push back, trying to allocate memory for itself and other processes. The
result can be excessive memory paging and a negative effect on performance.
If your ENOVIA database (data + indexes) is or is expected to be much larger than the total RAM
on the database server, Dassault Systèmes recommends configuring the minimum (min) and
maximum (max) server memory options in SQL Server 2008. Use the recommendations in Table
7 to select memory settings appropriate for your environment.
Table 7: Memory recommendations

Physical RAM   SQL Server Min Server Memory (MB)   SQL Server Max Server Memory (MB)
8 GB           4,096                               5,120 (5 GB)
16 GB          8,192                               12,288 (12 GB)
24 GB          12,288                              18,432 (18 GB)
32 GB          16,384                              25,600 (25 GB)
48 GB          32,768                              39,936 (39 GB)
64 GB          49,152                              56,320 (55 GB)
96 GB          73,728                              88,064 (86 GB)
128 GB         104,448                             120,832 (118 GB)
The min and max server memory options can be set in the Server Properties dialog box in SSMS
or by using Transact-SQL (T-SQL), as in the following example:
exec sp_configure 'show advanced options', 1
RECONFIGURE
GO
exec sp_configure 'min server memory', 16384
RECONFIGURE
GO
exec sp_configure 'max server memory', 25600
RECONFIGURE
GO
An in-depth discussion of sizing SQL Server 2008 memory can be found at Ask the Performance
Team: SQL and the Working Set and in this entry in Slava Oks's blog.
Another option to consider is granting the Lock pages in memory permission to the SQL Server
service account, which lets SQL Server 2008 lock its buffer memory and refuse to page the
buffer memory out in response to requests from the operating system. Granting this permission
can give the Database Engine additional ability to preserve its caches and maintain performance,
especially if other processes are competing for resources on the server.
Dassault Systèmes, however, recommends hosting the ENOVIA database on a dedicated server.
If this recommendation is followed, the Lock pages in memory permission is less important. In
addition, improvements in memory management in both SQL Server 2008 and Windows Server
2008 make it less likely that this setting will have a large impact. For additional information, see
How To: Enable the Lock Pages in Memory Option.
Configure RAM on 32-bit Servers (V6R2011 and Earlier Only)
The use of 32-bit database servers is increasingly rare. Dassault Systèmes strongly recommends
that you run your ENOVIA database on a 64-bit server running a 64-bit Windows operating
system with a 64-bit edition of SQL Server 2008. The key advantage of a 64-bit server for
database workloads is the ability to efficiently use much larger amounts of RAM.
If you are using a 32-bit server to host the ENOVIA database, Dassault Systèmes recommends
configuring SQL Server 2008 to use as much RAM as possible. By default, a 32-bit process (i.e.,
an x86 install of SQL Server 2008) can use at most 2 GB of system RAM. If the database server
has 4 GB or more of RAM, SQL Server 2008 should be configured to use Address Windowing
Extensions (AWE) memory. A simplified description of the procedure for enabling AWE memory
for SQL Server 2008 is as follows:
1. Add the /PAE switch to the Windows boot.ini file. An example of this procedure can be
found at http://support.microsoft.com/kb/283037. Note that a server restart is required
for this change to take effect.
2. Grant the Lock pages in memory permission to the SQL Server service account. This
permission is required for SQL Server 2008 to use AWE memory. Refer to How To:
Enable the Lock Pages in Memory Option if necessary.
3. Execute the following commands in a query window in SSMS:
exec sp_configure 'show advanced options', 1
RECONFIGURE
GO
exec sp_configure 'awe enabled', 1
RECONFIGURE
GO
exec sp_configure 'min server memory', 8192
RECONFIGURE
GO
exec sp_configure 'max server memory', 12288
RECONFIGURE
GO
Substitute appropriate values (in MB) for the min and max server memory settings. The
example uses values that might be appropriate for a server with 16 GB of RAM.
4. Restart SQL Server 2008.
More information and background on supporting large amounts of memory in 32-bit servers can
be found at the following links:
- Memory Architecture
- Using AWE
- Enabling Memory Support for Over 4 GB of Physical Memory
- Enabling AWE Memory for SQL Server
- Server Memory Options
- PAE and /3GB and AWE, Oh My
Install ENOVIA Database
The following sections describe best practices for installing the ENOVIA database.
File Placement on the Disk Subsystem
Before installing the ENOVIA database, Dassault Systèmes recommends planning the placement
of the files that make up the database. A SQL Server 2008 best practice is to place these files on
separate physical disk volumes to get the best performance from the overall system. These disk
volumes should not be used for any other heavily accessed files or for the Windows Server
operating system.
It is critical to place your database files on fault-tolerant RAID sets. The ultimate safety of your
ENOVIA data depends on the data and log files residing on redundant physical disk storage.
Dassault Systèmes recommends RAID 10 (also called RAID 1+0) for the data file, although RAID
5 may be acceptable for systems with lower I/O requirements. For the log file, RAID 10 or RAID 1
is recommended. The log file should not be placed on a RAID 5 volume because RAID 5 has a
lower write performance than the other RAID levels. Table 8 shows a summary of
recommendations for where to place the ENOVIA database files.
Table 8: Recommendations for placement of ENOVIA database files

File Type           Placement                                     Recommended RAID Level    Disk Speed
Data (.mdf, .ndf)   Physically separate disks or logical unit     RAID 10 or RAID 5         10,000 RPM or faster
                    number (LUN) from the log; disk(s) should
                    not be used for other heavy file I/O
Log (.ldf)          Physically separate disks or LUN from the     RAID 1 or RAID 10         10,000 RPM or faster
                    data files; disk(s) should not be used for    (RAID 5 is discouraged
                    other heavy file I/O                          for log)
Note that when placing data and log files on separate disk volumes, the disk volumes should be
physically separate. The drive letters on which the files are placed should map to distinct
physical disk drive sets. Placement on different partitions/LUNs on a single disk drive set does
not help performance or fault tolerance.
Depending on the available I/O subsystem and the number of physical disks, it is recommended
that you keep the following entities on separate volumes, controllers, or I/O paths:
- Application database data files
- Application database log files
- Tempdb database data files
- Tempdb database log files
Size Data Files Up Front
The databases of most ENOVIA installations grow over time. For best long-term performance,
Dassault Systèmes recommends sizing data and log files to match the expected long-term size of
the ENOVIA database.
If no explicit size is specified, SQL Server 2008 creates data and log files based on the size of a
special template database called model. This is usually just a few MB in size. As data flows into
the ENOVIA database, SQL Server 2008 automatically extends the data and log files as needed.
For large, heavily used databases, this eventually produces data files that are highly fragmented
on disk, and this fragmentation has a negative effect on performance. There is also a small
run-time performance penalty every time a file is extended.
These problems can easily be avoided by sizing files appropriately when the database is created.
The following guidelines are recommended for setting the initial size of database files.
1. Estimate long-term size of the ENOVIA database (for example, one to three years).
2. To get the recommended data file size, add a buffer to the estimate:
   - Add 50% for small databases (< 100 GB).
   - Add 35% for medium databases (100 GB to 500 GB).
   - Add 25% for larger databases (> 500 GB).
3. Maintain the size of the transaction log file so that it scales with the level of transaction
activity.
Note: Optimization of the tempdb transaction log benefits performance and disk usage.
For more information on managing the tempdb transaction log for SQL Server 2008, see
Managing the Size of the Transaction Log File.
Table 9 illustrates these calculations for a new ENOVIA database.
Table 9: Recommendations for sizing the ENOVIA database

Constituent                                              Example 1   Example 2
Estimated long-term size of data + indexes (1-3 years)   50 GB       400 GB
Added buffer                                             25 GB       140 GB
Initial data file size                                   75 GB       540 GB
Initial log file size                                    15 GB       100 GB
Although the initial size of the database should be sufficient for some time, you should enable
automatic file growth to avoid a production problem if the data files run out of space.
As a rule, the file sizes for the ENOVIA database should be actively managed. Dassault Systèmes
recommends configuring both data and log files with 1-GB growth increments. Avoid using large
percent-based growth settings on large log files because the
initialization of new log segments can temporarily block activity on the database.
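As a sketch, the growth increments can be set with ALTER DATABASE ... MODIFY FILE. The database name ENOVIA and the logical file names enovia_data and enovia_log below are placeholders; substitute the names reported by sp_helpfile for your installation:

```sql
-- Set a fixed 1-GB growth increment on the data file
ALTER DATABASE ENOVIA
MODIFY FILE (NAME = enovia_data, FILEGROWTH = 1GB);
GO
-- Set a fixed 1-GB growth increment on the log file
ALTER DATABASE ENOVIA
MODIFY FILE (NAME = enovia_log, FILEGROWTH = 1GB);
GO
```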
Distribute the PRIMARY Filegroup Across Files on Multiple Disks (Optional)
For heavily used ENOVIA databases, overall performance can be affected by the throughput
limitations of working with a single data file on a single disk volume.
With SQL Server 2008, you can easily spread the I/O load for your ENOVIA database across
multiple disk volumes or RAID sets by mapping SQL Server filegroups to multiple files. (Note that
it is not necessary to create multiple database files on the same disk volume because this does
not provide significant performance gains).
You should only take on the extra administrative work of distributing the primary filegroup
across files on multiple disks if you have confirmed that disk I/O is a bottleneck for your system.
The monitoring techniques discussed later in this paper can help you make this determination.
For high-end systems and workloads, additional tuning and optimization techniques are
available; refer to this guide (Analyzing I/O Characteristics and Sizing Storage Systems for SQL
Server Database Applications) for more information.
Use Data Compression (Enterprise Edition Only)
SQL Server 2008 Enterprise edition includes the ability to natively compress its data pages on
disk. This saves space on the disk, and because it reduces the amount of disk I/O executed for a
given amount of data, data compression also improves performance. When data compression is
enabled, data pages are also compressed in the buffer pool (i.e., RAM). This provides the
additional benefit of increasing the effective amount of application data that can be accessed
from the data cache rather than accessed from the disk.
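The statements below sketch how page compression might be evaluated and applied to a single table; the schema and table name dbo.ExampleTable are placeholders for illustration, not actual ENOVIA object names. It is advisable to estimate the savings before rebuilding:

```sql
-- Estimate the space savings before committing to compression
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'ExampleTable',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';
GO
-- Apply page-level compression by rebuilding the table
ALTER TABLE dbo.ExampleTable
REBUILD WITH (DATA_COMPRESSION = PAGE);
GO
```

Note that the rebuild is an offline operation unless ONLINE = ON is also specified, so it should be scheduled during a maintenance window.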
Adjust SQL Server 2008 and Database Settings
The following sections describe best practices for database settings.
Verifying Database Collation
When the ENOVIA database is created, it assumes the server collation (set during SQL Server
2008 installation) unless otherwise specified. It is good practice to verify that the collation for
the database is the one the ENOVIA software expects. As shown in Figure 10, this can be verified
in the Database Properties dialog box in SSMS. The collation displayed in this dialog box should
be the expected Windows collation.
Figure 10: Verify the Windows collation for ENOVIA
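The same check can be scripted. The following query (the database name ENOVIA is a placeholder) returns both the server default collation and the collation actually in effect for the database:

```sql
SELECT SERVERPROPERTY('Collation')                  AS server_collation,
       DATABASEPROPERTYEX('ENOVIA', 'Collation')    AS database_collation;
```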
Set Automatic Statistic Updates, Auto Shrink, and Other Options
SQL Server 2008 uses internal statistics about the data stored in tables to decide which indexes,
if any, to use for most efficiently executing a query. As table data changes over time, these
statistics need to be kept current. Each SQL Server database has the option for these statistics to
be updated automatically (statistics update can also be initiated manually, if needed). Dassault
Systèmes recommends that auto update of statistics be enabled for the ENOVIA database.
Figure 11 shows the Database Properties dialog box with the Options page selected. The
highlighted section lets you view and configure the settings for automatic statistics. Auto Create
Statistics and Auto Update Statistics should be set to True. Auto Update Statistics
Asynchronously can be set to your preference.
Figure 11: Enable automatic updates of statistics
Two other options are also shown in the highlighted section of Figure 11 with their
recommended settings selected; both Auto Close and Auto Shrink should be set to False.
For additional information about how SQL Server uses statistics and best practices in defining
and maintaining statistics information to support the query processor during query executions,
you can refer to Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 and Using
Statistics to Improve Query Performance.
Choose a Database Recovery Model
The recovery model of a SQL Server database determines how changes are recorded and stored
in the database's transaction log. The three options for the recovery model setting on your
ENOVIA database are Full, Bulk-logged, and Simple:

- Full recovery model. Best for preventing critical data loss and restoring data to a specific
  point in time, and generally used by enterprise production systems. If the transaction
  log is available, it is possible to get up-to-the-minute recovery and point-in-time restore
  if the end of the transaction log is backed up and restored. The trade-off for the full
  recovery model is that it requires more administrator attention and a more complex
  backup/restore strategy.
- Bulk-logged recovery model. For databases that have critical data, but for which the
  DBA wants to minimize the storage or performance impact of bulk operations (for
  example, bulk data loads, table copies, and index maintenance). It provides nearly the
  recovery power of the full recovery model, but with some limitations.
- Simple recovery model. Appropriate if the data backed up is not critical, the data is static
  or does not change often, or data loss is not a concern for the organization. If an active
  database is lost under the simple recovery model, the organization loses all transactions
  since the last full or last differential backup. This model is typical for test environments
  or production databases that are not mission critical or have very light activity.
In general, ENOVIA databases are frequently updated and hold information that is of high
importance to the organization. Especially for large ENOVIA databases, full backups may not be
feasible at high frequency (e.g., daily or more often). For these reasons, Dassault Systèmes
recommends using the full recovery model with ENOVIA databases.
Administrators must be aware of the management implications of the full and bulk-logged
recovery models. In particular, under these recovery models, transaction logs must be backed
up regularly to prevent them from filling up or exhausting available disk space.
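A minimal sketch of setting and verifying the recommended model follows (the database name ENOVIA is a placeholder):

```sql
-- Switch the application database to the full recovery model
ALTER DATABASE ENOVIA SET RECOVERY FULL;
GO
-- Confirm the recovery model currently in effect
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'ENOVIA';
```

Remember that once the database is in the full recovery model, scheduled transaction log backups become mandatory to keep the log from growing without bound.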
Ongoing Management
The following sections describe best practices for ongoing database management.
The Database Backup Plan
One of the most important ongoing maintenance tasks for a business-critical database such as
ENOVIA is managing backups. Dassault Systèmes recommends you carefully plan, document,
and test the backup and recovery strategy for the ENOVIA database.
The types of database backups allowed in SQL Server 2008 are:

- Full database backup. Contains all the data in the database plus enough transaction log
  information to allow for recovering that data.
- Differential database backup. Contains all the data changed since the last full backup.
  Differential backups make it possible for a smaller, faster backup to be taken, which can
  later be combined with the full backup to restore the database to the time of the
  differential backup.
- Transaction log backup. Contains all the database changes since the last (full or
  differential) database or log backup. Periodically backing up the transaction log lets you
  restore the last database backup followed by subsequent log backups to recover to the
  latest point in time needed.
- Database file or filegroup backup. Contains only data from specified file(s) or
  filegroup(s) to allow quick, piecemeal restores if only part of the database is lost. These
  types of backup are not recommended for ENOVIA deployments.
The different types of backup let you back up with the frequency appropriate to your business
requirements, even if your databases are very large. A typical backup strategy for a large
database (for example, several hundred gigabytes) might include a weekly full backup, nightly
differential backups, and transaction log backups every 30 minutes.
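Such a schedule can be sketched with the following commands (the database name ENOVIA and the X:\Backups paths are placeholders; in practice each backup file would carry a timestamped name and the jobs would be scheduled with SQL Server Agent):

```sql
-- Weekly full backup; CHECKSUM validates pages as they are written
BACKUP DATABASE ENOVIA
TO DISK = 'X:\Backups\ENOVIA_full.bak'
WITH CHECKSUM, INIT;
GO
-- Nightly differential backup (changes since the last full backup)
BACKUP DATABASE ENOVIA
TO DISK = 'X:\Backups\ENOVIA_diff.bak'
WITH DIFFERENTIAL, CHECKSUM, INIT;
GO
-- Transaction log backup, scheduled every 30 minutes
BACKUP LOG ENOVIA
TO DISK = 'X:\Backups\ENOVIA_log.trn'
WITH CHECKSUM;
GO
```

On Enterprise edition, adding WITH COMPRESSION to each command reduces backup size and duration.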
If the database was lost, the DBA would:
1. Perform a log backup if the log file is still accessible (this is referred to as a "tail-of-the-log"
backup).
2. Restore the last full backup.
3. Restore the last differential backup.
4. Restore all the transaction log backups since the last differential backup, in sequence.
If properly executed, this returns the database to operation with at most 30 minutes of work
lost.
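The restore sequence above can be sketched as follows (database name, paths, and file names are placeholders; an actual recovery would restore every log backup taken since the differential, in order):

```sql
-- 1. Back up the tail of the log (possible only if the log file survived)
BACKUP LOG ENOVIA
TO DISK = 'X:\Backups\ENOVIA_tail.trn'
WITH NO_TRUNCATE;
GO
-- 2. Restore the last full backup, leaving the database non-operational
RESTORE DATABASE ENOVIA
FROM DISK = 'X:\Backups\ENOVIA_full.bak'
WITH NORECOVERY;
GO
-- 3. Restore the last differential backup
RESTORE DATABASE ENOVIA
FROM DISK = 'X:\Backups\ENOVIA_diff.bak'
WITH NORECOVERY;
GO
-- 4. Restore log backups in sequence; recover on the last one
RESTORE LOG ENOVIA FROM DISK = 'X:\Backups\ENOVIA_log.trn'  WITH NORECOVERY;
RESTORE LOG ENOVIA FROM DISK = 'X:\Backups\ENOVIA_tail.trn' WITH RECOVERY;
GO
```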
For small databases, it is perfectly valid to simply perform full backups once a day or even more
frequently. This approach is simple, but is practical only for databases that can be backed up
very quickly.
It is important to remember that for databases using the full or bulk-logged recovery model,
regular transaction log backups are required to keep the log from growing out of control.
The right backup strategy depends on the database size and level of change activity in your
installation. This can vary greatly, even across deployments of ENOVIA, so there is no
one-size-fits-all recommendation. However, keep the following in mind as you plan your backup
strategy:
- If you choose the full recovery model, the transaction log can grow very quickly and
  should be scheduled for backup at multiple times throughout the day. Performing
  multiple backups ensures that the log is truncated frequently and provides for improved
  data recoverability.
- If you choose the simple recovery model, schedule full database backups for at least
  once per day. With this method, data recoverability is limited to the last full database
  backup.
- Database backups should be stored off site when possible and appropriate.
- Back up a database immediately after you make extensive changes or perform
  non-logged operations.
- Be sure to include regular backups of the master and msdb databases as part of your
  overall backup strategy. The master database contains information about the databases
  on your system, and the msdb database stores information about scheduled jobs.
- If you are using SQL Server 2008 Enterprise edition or later, use backup compression to
  reduce the size of backups and the time required to take them.
Finally, remember that what you really need in an emergency is a restore, not a backup.
Therefore, plan your restore strategy carefully, and test restoration of your database from
actual backups to validate that you have covered everything.
Synchronize ENOVIA Database and File Backups
An additional consideration when managing overall backup of the ENOVIA application is the
interaction between the SQL Server database and the file repository. Since the Resource Tier
includes both the ENOVIA database and the ENOVIA file server(s), a full and consistent backup
of the Resource Tier requires both database and file backups.
The ENOVIA database contains references (essentially, file pointers) to content in the file
repository. To ensure maximum consistency between the database file references and the
actual content of the file repository, some attention must be paid to backing up both
simultaneously.
Dassault Systèmes recommends choosing one of two approaches to do this:

- Option 1: Take cold backups. That is, take the system offline to halt user activity, back
  up both the database and file repository, and then put the system back online.
- Option 2: Take online backups and accept some small potential inconsistency. The
  issues with this approach can be minimized if the backups are taken during periods of
  low activity.
More information about this advanced topic is available from Dassault Systèmes.
Monitor Index Fragmentation and Defragment when Necessary
SQL Server 2008 uses indexes to provide fast access to information when users or applications
request it. These indexes are maintained by the Database Engine as the table data grows and/or
changes. Over time, the indexes can become fragmented, especially in databases that handle
heavy insert, update, and delete activity. An index is fragmented when the physical ordering on
disk does not match the logical order of the data (as defined by the index key) or when data
pages that contain the index are dispersed across non-adjacent sections of the disk.
Fragmentation of an index can reduce the speed of data access and result in slower application
performance. It can also cause more disk space to be used than is actually necessary. Index
fragmentation can be corrected by reorganizing or rebuilding the index.
You can tell which indexes, if any, have fragmentation problems by using the
sys.dm_db_index_physical_stats() system function. This function provides many details about the
physical layout of the index. However, the most important result column for tracking
fragmentation is avg_fragmentation_in_percent. This column indicates how fragmented the
index is on disk. A low number means low fragmentation (good); a high number means high
fragmentation (bad).
For example, this query returns index physical stats for all the indexes in the current database:
SELECT OBJECT_NAME(object_id) AS table_name,
       index_id,
       page_count,
       index_type_desc,
       avg_fragmentation_in_percent,
       fragment_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED');
To identify the indexes by name, you can join against the sys.indexes system view.
Similar information is also available in the Standard Reports in SSMS. To view this report, right-
click the ENOVIA database, select Reports > Standard Reports, and select the Index Physical
Statistics report on the fly-out menu.
Table 10 provides general guidelines for interpreting the avg_fragmentation_in_percent value.
Table 10: Guidelines for handling fragmentation

avg_fragmentation_in_percent   Recommended Action
< 5%                           Do nothing
5% to 30%                      Reorganize with ALTER INDEX ... REORGANIZE
> 30%                          Rebuild with ALTER INDEX ... REBUILD WITH (ONLINE = ON)
                               or re-create with CREATE INDEX ... WITH (DROP_EXISTING = ON)
Reorganizing an index does not block user access to the index while underway. However,
rebuilding or re-creating the index does prevent user access to the index. The exception to this is
if the ALTER INDEX REBUILD statement is used with the ONLINE = ON option. Note that online
index rebuild requires the Enterprise edition of SQL Server 2008.
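The two corrective actions can be sketched as follows; the index and table names (IX_Example, dbo.ExampleTable) are placeholders for whichever indexes the fragmentation query identifies:

```sql
-- 5% to 30% fragmentation: reorganize (always online, does not block users)
ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE;
GO
-- Above 30% fragmentation: rebuild online (Enterprise edition only)
ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD WITH (ONLINE = ON);
GO
```

A rebuild also refreshes the index statistics, whereas a reorganize does not; after a reorganize you may want to run UPDATE STATISTICS on the affected table.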
Periodically checking index fragmentation and taking any necessary corrective action is
important to maintaining the performance of your ENOVIA deployment. The rate at which
fragmentation may occur depends on user activity, but as a general rule, Dassault Systèmes
recommends checking index fragmentation at least monthly.
For more information about reorganizing and rebuilding indexes, see Reorganizing and
Rebuilding Indexes.
Periodically Check Database Integrity
Despite the sophisticated data management techniques embedded in the SQL Server Database
Engine, some corruption can still occur in a database, most notably as the result of a hardware
problem. To head off the potential impact of such problems, you should regularly check the
integrity of the database with the T-SQL statement DBCC CHECKDB.
It is best practice to include database integrity checks in a scheduled job that executes DBCC
CHECKDB or to use a scheduled Maintenance Plan that includes the Check Database Integrity
task.
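A typical scheduled integrity check might look like the following (the database name ENOVIA is a placeholder):

```sql
-- Full integrity check; NO_INFOMSGS suppresses informational messages,
-- ALL_ERRORMSGS reports every error found rather than a truncated list
DBCC CHECKDB ('ENOVIA') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

On very large databases where a full check is too expensive to run frequently, the lighter-weight DBCC CHECKDB ... WITH PHYSICAL_ONLY option can be run more often, with a full check scheduled less frequently.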
If any problems are detected, you can restore from backups (usually the best option) or use one
of the REPAIR options on DBCC CHECKDB.
Monitor Space Used
Over time, the space used in your ENOVIA database for data and indexes will increase.
Regardless of the initial space allocations you have made, you occasionally need to check the
amount of space remaining in data and log files and on the host disk volumes.
You can check the space consumed and remaining for the ENOVIA database using the Standard
Reports in SSMS. The Disk Usage report displays the currently reserved space in data and log
files and the amount of disk space that is in use. This is also a convenient place to check for any
recent data or log file growth. If growth is occurring, it may indicate that a rearrangement or
expansion of data files is necessary.
Details on space usage can also be obtained via T-SQL by using the sp_spaceused stored
procedure. See sp_spaceused (Transact-SQL) in SQL Server Books Online for details.
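For example (the database and table names are placeholders):

```sql
USE ENOVIA;
GO
-- Database-level totals: reserved, data, index, and unused space
EXEC sp_spaceused;
GO
-- Space used by a single table
EXEC sp_spaceused 'dbo.ExampleTable';
GO
```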
Use SQL Server Agent Jobs and Alerts
Many of the ongoing database management and monitoring tasks required to administer your
ENOVIA database can and should be automated. SQL Server 2008 includes SQL Server Agent, a
job scheduling and alerting service that is tightly intertwined with the SQL Server Database
Engine. SQL Server Agent can be used to schedule regular backups, database integrity checks,
index reorganization jobs, and many other routine tasks. It can also be used to check Database
Engine metrics and alert someone if they are outside normal bounds.
SQL Server 2008 Maintenance Plans also provide a neatly packaged way to define and schedule
routine management tasks for a database or for multiple databases.
Dassault Systèmes recommends using these tools to configure automated maintenance and
monitoring tasks for the ENOVIA database.
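As an illustrative sketch (the job name, schedule, and database name are all assumptions), a nightly integrity-check job can be created with the msdb stored procedures that SQL Server Agent uses:

```sql
USE msdb;
GO
-- Create the job, a single T-SQL step, a daily 2:00 AM schedule,
-- and register the job on the local server.
EXEC dbo.sp_add_job         @job_name = N'ENOVIA - Nightly CHECKDB';

EXEC dbo.sp_add_jobstep     @job_name = N'ENOVIA - Nightly CHECKDB',
                            @step_name = N'Run DBCC CHECKDB',
                            @subsystem = N'TSQL',
                            @command   = N'DBCC CHECKDB (N''ENOVIA'') WITH NO_INFOMSGS;';

EXEC dbo.sp_add_jobschedule @job_name = N'ENOVIA - Nightly CHECKDB',
                            @name = N'Nightly at 2 AM',
                            @freq_type = 4,        -- daily
                            @freq_interval = 1,    -- every day
                            @active_start_time = 020000;

EXEC dbo.sp_add_jobserver   @job_name = N'ENOVIA - Nightly CHECKDB';
```

The same job can equally be created interactively through the SQL Server Agent node in SSMS or as part of a Maintenance Plan.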
Tools for Monitoring Database Server Performance
Once your ENOVIA database is in production, it is important to establish a baseline of its normal
performance characteristics and to check its health and status periodically. The following
sections discuss some of the tools and techniques available for monitoring database server
performance.
SQL Server Activity Monitor
A good, easy-to-use starting point for monitoring your SQL Server 2008 instance is the Activity
Monitor in SSMS. This tool combines several key pieces of information into one view. Although
it does not provide highly detailed insight on its own, Activity Monitor can be a good part of
your SQL Server monitoring toolbox.
To open Activity Monitor, right-click the server icon in the Object Explorer pane of SSMS and
select the Activity Monitor item. There is also a button for Activity Monitor on the Standard
toolbar in SSMS. The Activity Monitor opens as a new tab in the main Management Studio
window.
The Overview section shows summary graphs of four key metrics for getting a sense of the load
on the server. The % Processor Time is an average of CPU utilization across all processors.
Waiting Tasks shows the number of tasks waiting on CPU, memory, disk, locks, or other
resources. The Database I/O graph shows overall disk I/O throughput of SQL Server 2008. Batch
Requests/sec shows the number of individual T-SQL batches executed per second (for example,
SQL statements or stored procedure calls from applications).
The other sections (also called panes) in the Activity Monitor tab provide detailed snapshots of
different aspects of the Database Engine's current workload. While a full discussion of these
sections and how to interpret the information provided is beyond the scope of this white paper,
a brief introduction to each section follows.
• Processes section: This section presents a list of the active user connections to the server. For each connection, useful troubleshooting information is provided, including the current database the connection is using, what resource the connection is waiting on (if any), and whether the connection is blocked by another connection. The Processes section is thus useful for getting an idea of the number of active connections and what sort of performance roadblocks they may be experiencing.
• Resource Waits section: This section provides a summary of the waits the Database Engine is observing in current activity. Examples of waits are network I/O for clients to pull results across the wire, disk I/O for the server to pull data from disk into buffer memory, or lock waits while one connection waits on a transaction lock held by another connection. Besides the type of wait, this section shows the cumulative and recent amount of time spent on this type of wait and the number of connections experiencing it. This information is just a coarse summary of the highly detailed set of tuning information available from SQL Server 2008 wait statistics, but it can be quite useful for getting an indication of system bottlenecks at a glance. Much more information about using wait stats to understand the performance of SQL Server 2008 can be found in SQL Server 2005 Waits and Queues.
• Data File I/O section: This section lists all data and transaction log files in use by databases on the current SQL Server 2008 instance. For each file, recent read and write throughput (in MB/sec) is reported. The last column in this section is particularly interesting: the Response Time (ms) column reports disk latency for I/O against the file. This is a good number to look at to begin an investigation of whether a workload is I/O bound.
• Recent Expensive Queries section: This section lists some of the most expensive queries the engine has recently executed. Besides the query text, average execution metrics are listed for each.
For more information about this tool, see the Activity Monitor topic in SQL Server Books Online.
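The wait summary shown in the Resource Waits section can also be queried directly from the engine. The following sketch lists the top waits by cumulative wait time since the instance last started (the excluded wait types are a small, illustrative subset of the engine's benign background waits):

```sql
-- Top waits by cumulative wait time, excluding some benign background waits
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'REQUEST_FOR_DEADLOCK_SEARCH', N'WAITFOR',
                        N'SQLTRACE_BUFFER_FLUSH', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```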
Standard Database Reports in SSMS
Another easy-to-use resource is the set of database Standard Reports in SSMS. These reports
are accessible by right-clicking the icon for the ENOVIA database in the Object Explorer pane and
selecting Reports > Standard Reports. The reports available on the fly-out menu provide basic
and advanced information on many aspects of the database.
Performance Monitor
Windows Performance Monitor (also known as System Monitor in some editions of Windows
Server), a management tool included with Windows, is a key tool for understanding the
performance characteristics of your database server. Performance Monitor lets you observe a
wealth of metrics about the activity of the operating system and applications such as SQL Server
2008. Performance Monitor can be used to observe these metrics interactively in real time or to
log them to disk for later review.
It is good practice to establish a baseline understanding of the performance of your database
system after it is deployed. This lets you better understand issues if your system runs into
performance or other problems later. By comparing observed behavior in a problem situation to
your baseline expectations, you will be able to more easily focus on the root cause.
The following list of Performance Monitor counters is a condensed set of valuable
measurements you can use to understand your database server's activity. It only scratches the
surface of the SQL Server 2008 monitoring that is possible, but is a useful starting point.
Table 11: Some Performance Monitor counters

Type   | Object                         | Counter                  | Instance                | Description
CPU    | Processor                      | % Processor Time         | -                       | Total CPU use on the server
CPU    | Process                        | % Processor Time         | Sqlservr                | CPU use by SQL Server alone
I/O    | Physical Disk                  | Disk sec/Read            | All volumes used by SQL | Latency of reads from disk. If this is high, the server may be I/O bound.
I/O    | Physical Disk                  | Disk sec/Write           | All volumes used by SQL | Latency of writes to disk. If this is high, the server may be I/O bound.
Memory | Memory                         | Page Faults/sec          | -                       | Rate at which Windows could not find the page it needed at the expected location in memory.
Memory | Memory                         | Pages/sec                | -                       | Rate at which Windows read or wrote to the system paging file. If the ratio of this to Page Faults/sec is more than a few percentage points, memory may be insufficient.
Memory | SQL Server: Buffer Manager     | Buffer cache hit ratio   | -                       | % of the time data is found in the SQL Server 2008 memory cache rather than read from disk. Should be > 90%.
Memory | SQL Server: Memory Manager     | Total Server Memory (KB) | -                       | Total memory used by the SQL Server 2008 process. If this is near total system memory, more memory may be needed, or you may need to reduce max server memory.
SQL    | SQL Server: SQL Statistics     | Batch Requests/sec       | -                       | Number of SQL batches executed by SQL Server 2008 per second. An easy gauge of how actively the server is servicing client activity.
SQL    | SQL Server: General Statistics | User Connections         | -                       | Number of open connections to SQL Server 2008.
Note that if you are using a named instance of SQL Server 2008, the name of the object in
Performance Monitor is slightly different. For default (not named) instances, the object name is
SQL Server: <object>. For a named instance, the object name is MSSQL$InstanceName:
<object>. An example of this for an instance named ENOVIA is shown in Figure 12.
Figure 12: Performance Monitor counters for a named instance of SQL Server
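Several of the SQL Server counters in Table 11 can also be read from inside the engine via the sys.dm_os_performance_counters view, which is convenient for scripted baseline collection:

```sql
-- A few of the counters from Table 11, read directly from the Database Engine.
-- For a named instance, object_name appears as MSSQL$InstanceName:<object>.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Buffer cache hit ratio',
                       N'Batch Requests/sec',
                       N'User Connections');
```

Note that ratio counters such as Buffer cache hit ratio must be divided by their matching base counter (Buffer cache hit ratio base) to obtain the actual percentage.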
Checklist
For convenience, the following checklist summarizes and consolidates the best practices
discussed in this white paper. After familiarizing yourself with the preceding in-depth discussion,
you can use this as a reminder to cover all the recommended best practices.
• Use a dedicated database server and instance.
• Choose the right SQL Server 2008 edition (Standard or Enterprise) for your needs.
• Use 64-bit SQL Server 2008 if your hardware allows. Consider acquiring 64-bit hardware if it does not.
• Use a low-privilege Windows account for the SQL Server 2008 and SQL Server Agent services.
• Grant the Perform volume maintenance tasks permission to the SQL Server service account to allow fast file initialization.
• Set the collation to a Windows collation when installing SQL Server 2008.
• Set the server authentication mode to mixed.
• Configure the SQL Server Database Engine service and the SQL Server Agent service to auto-start.
• Size and place tempdb per best practices (as described in detail in this document).
• Set the maximum server memory for SQL Server per recommendations.
• Place the data and log files on physically distinct disk volumes.
• Use RAID 10 or RAID 5 for data files; use RAID 10 or RAID 1 for log files.
• Use a large initial size for data and log files, based on projected growth.
• Leave automatic file growth enabled just in case.
• Spread the PRIMARY filegroup across equal-sized files on multiple disk volumes (optional).
• Use SQL Server data compression (SQL Server 2008 Enterprise edition only).
• Verify that the ENOVIA database is using a Windows collation.
• On the ENOVIA database, use the following settings:
o Auto Create Statistics = True
o Auto Update Statistics = True
o Auto Close = False
o Auto Shrink = False
• Use the full recovery model for the ENOVIA database.
• Ensure that frequent transaction log backups are scheduled to prevent the log file from growing out of control.
• Plan and implement an effective backup strategy.
• Test database restore using live backups.
• Be sure to back up the master and msdb databases in addition to the ENOVIA database.
• Monitor index fragmentation, and defragment when necessary.
• Periodically check database integrity.
• Monitor the amount of space used.
• Use SQL Server Agent jobs and alerts to automate and schedule maintenance and monitoring.
• Get familiar with SQL Server monitoring tools, and establish a baseline performance profile for your system.
Summary
Dassault Systèmes ENOVIA software, coupled with Microsoft SQL Server 2008 database
software, provides your enterprise with the platform to build an agile, globally competitive
product development process.
SQL Server 2008 provides an ideal database platform for ENOVIA. SQL Server 2008 is an
enterprise-ready, comprehensive, integrated data management and analysis platform that
makes it possible for organizations to reliably manage large, mission-critical workloads and
complex business applications. SQL Server 2008 provides rapid data integration and data mining
and fast, intuitive analysis and reporting capabilities. Its built-in features deliver reliability and
security on a scalable foundation that includes a powerful database engine.
Using the best practices discussed in this paper can help you optimize the performance of
Dassault Systèmes ENOVIA for PLM and avoid or minimize problems. The links in the
following section provide even more resources to ensure a successful implementation of
ENOVIA on SQL Server 2008.
Links for Further Information
For general information, visit the Dassault Systèmes ENOVIA Resource Library.
SQL Server 2008 information can be found in Books Online:
SQL Server 2008 Books Online
SQL Server 2008 Product Documentation (TechNet)
SQL Server 2008 R2 Product Documentation (TechNet)
See the SQL Server Best Practices portal for technical white papers, the SQL Server Best
Practices Toolbox, Top 10 Lists, and other resources.
Following is a list of technical white papers that were tested and validated by the SQL Server
development team. These can help you learn more about specific SQL Server topics.
Analyzing I/O Characteristics and Sizing Storage Systems for SQL Server Database
Applications
Statistics Used by the Query Optimizer in Microsoft SQL Server 2008
Microsoft SQL Server 2008 R2 Best Practices Analyzer
Troubleshooting Performance Problems in SQL Server 2008
SQL Server 2008 Monitoring
Using SQL Server 2008 Resource Governor
Using Statistics to Improve Query Performance
SQL Server 2008 Administration Topics
High Availability with SQL Server 2008 R2
Getting Started with SQL Server 2008 Failover Clustering
SQL Server 2008 Failover Clustering
Running SQL Server 2008 in a Hyper-V Environment
SQL Server 2008 Upgrade Technical Reference Guide
The Data Loading Performance Guide for SQL Server 2008
Introduction to New T-SQL Programmability Features in SQL Server 2008