1660220 - Microsoft SQL Server - Common misconceptions
The SAP Knowledge Base Article 1660220 addresses common misconceptions about Microsoft SQL Server, clarifying that many widely held beliefs are incorrect. It emphasizes the importance of understanding SQL Server's behavior regarding database statistics, transaction logs, and maintenance tasks, and warns against manual updates and certain repair options that may lead to data loss. The article also provides guidance on managing database performance and space utilization effectively.
SAP Knowledge Base Article
1660220 - Microsoft SQL Server: Common misconceptions
Component: BC-DB-MSS (Basis Components > Database Interface, Database Platforms > SQL Server in SAP
NetWeaver Products), Version: 9, Released On: 05.04.2024
| Symptom
Some widely accepted information about the Microsoft SQL Server database is shown not to be entirely correct upon closer examination. This Knowledge Base Article lists widespread incorrect assumptions about Microsoft SQL Server and explains why they are wrong.
1. The Microsoft SQL Server Agent job Update_Tabstats updates the database statistics which are used by the database optimizer to calculate execution plans and therefore it is critical for performance if the job fails.
2. When using recovery model simple the transaction log cannot run full.
3. The result set of database accesses is always ordered by the primary key even if no ORDER BY clause is used explicitly.
4. If database accesses hang for a long time, the problem is caused by a deadlock.
5. Updating Microsoft SQL Server database statistics manually (with a SQL Server Agent job, with a SQL Server Maintenance Plan or by other means) is part of maintenance and therefore required on a regular basis.
6. Reorganizing some or all database objects is a required maintenance task and should therefore be carried out on a regular basis.
7. DBCC CHECKDB and DBCC CHECKTABLE with option repair_allow_data_loss allow you to repair database inconsistencies and will not cause any data loss.
8. After archiving or deleting data from a table the table and its indexes will occupy less space in the database and the database itself will also occupy less space.
9. If the autogrow option is configured for all datafiles Microsoft SQL Server will grow all files in a balanced way.
10. High memory paging may cause the database to go in SUSPECT status.
| Environment
+ SAP Systems based on SAP NetWeaver
+ Microsoft SQL Server
| Resolution
1. The SQL Server Agent job Update_Tabstats updates the database statistics which are used by the database optimizer to calculate execution plans and therefore it is critical for performance if the job fails.
The SAP CCMS Update_Tabstats job does not touch the optimizer statistics at all and has no influence on execution plans.
Instead, it is part of the database monitoring framework implemented and provided by SAP.
It is not natively included in a Microsoft SQL Server installation, but is developed and delivered by SAP and was introduced with Basis Release 7.00 SP12.
The job executes the stored procedure sap_update_tabstats.
It collects meta information about database objects (e.g. key figures like the number of table rows, the reserved size of an object, the row modification counter, and many more) and stores them persistently in the database.
As SQL Server does not keep any history for such key figures, SAP collects and stores them with this job in order to make historical information about database objects available.
This allows you to analyze how certain properties of database objects change over time and serves as a source of information for SAP DB monitoring transactions (e.g. DBACockpit, fastest growing tables, ...).
Also see SAP Note 1178916 for more information.
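As a rough illustration of what such a key-figure history enables (the table name, figures and dates below are invented, and this is only a sketch of the idea, not SAP's actual implementation):

```python
# Minimal model of collecting per-table key figures over time, the way the
# Update_Tabstats job builds a history that SQL Server itself does not keep.
from datetime import date

history = []  # persisted in the database by the real job; a plain list here

def collect_keyfigures(tables, snapshot_date):
    """Append one snapshot of per-table key figures to the history."""
    for name, figures in tables.items():
        history.append({
            "date": snapshot_date,
            "table": name,
            "rows": figures["rows"],
            "reserved_kb": figures["reserved_kb"],
        })

# Two snapshots taken on different days:
collect_keyfigures({"VBAK": {"rows": 1000, "reserved_kb": 512}}, date(2024, 1, 1))
collect_keyfigures({"VBAK": {"rows": 1500, "reserved_kb": 768}}, date(2024, 2, 1))

# With a history, growth analysis (e.g. "fastest growing tables") becomes possible:
snapshots = [h for h in history if h["table"] == "VBAK"]
growth = snapshots[-1]["rows"] - snapshots[0]["rows"]
print(growth)  # 500
```

The point is only that the job adds a time dimension: a single live query against SQL Server can report the current row count, but never how it changed.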
2. When using recovery model simple the transaction log cannot run full.
The transaction log of a database consists of one or more files. You can decide for each file whether you allow SQL Server to autogrow the file if required or not (autogrow on/off). With recovery model simple you ensure that SQL Server will truncate the log at each checkpoint - still, this doesn't mean that the transaction log cannot run full. Imagine you have a very long-running transaction and all transaction log space is consumed before the transaction reaches the point in time where it commits. In such a case the transaction log can run full even if you are using recovery model simple. To resolve this you need to take a closer look at the transactions - is there a long-running transaction which keeps SQL Server from truncating the log? Is it normal that this transaction takes so much time, or is it caused by a wrong execution plan, bad I/O or any other performance-degrading issue?
To understand this in detail you need to make yourself familiar with how transaction log truncation works, meaning: which parts of the transaction log are considered active and which are considered inactive when the truncation is carried out. Parts cannot be truncated as long as they are still active. For a detailed example please see SAP Note 421644.
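The effect can be modeled with a toy simulation (pure illustration, not SQL Server internals): a fixed-size log is truncated at every checkpoint, but records of a still-open transaction stay active, so one long runner can fill the log despite the checkpoints:

```python
# Toy transaction log: truncation at a checkpoint can only release records
# of committed (inactive) transactions.
LOG_CAPACITY = 5
log = []           # each entry: id of the transaction that wrote it
open_txns = set()  # transactions not yet committed

def write(txn_id):
    if len(log) >= LOG_CAPACITY:
        raise RuntimeError("transaction log full")
    open_txns.add(txn_id)
    log.append(txn_id)

def checkpoint():
    # truncate: keep only records whose transaction is still open (active part)
    log[:] = [t for t in log if t in open_txns]

def commit(txn_id):
    open_txns.discard(txn_id)

write("long_runner")               # stays open the whole time
for i in range(4):                 # short transactions come and go
    write(f"short_{i}")
    commit(f"short_{i}")
    checkpoint()                   # frees the short transactions' records

# The long runner keeps writing without committing: truncation frees nothing
# and the log eventually runs full, even though we checkpoint constantly.
try:
    while True:
        write("long_runner")
        checkpoint()
except RuntimeError:
    full = True

print(full)  # True
```

This is exactly the "long-running transaction keeps SQL Server from truncating the log" situation the note describes for recovery model simple.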
3. The result set of database accesses is always ordered by the primary key even if no ORDER BY clause is used explicitly.
The result set is only ordered by the key if SQL Server uses the primary index for the access.
Since the database optimizer chooses the access path dynamically based on existing statistics, you cannot assume that SQL Server ALWAYS uses the primary index for certain accesses and therefore cannot rely on an ordered result.
If you require the result of a query to be ordered you must use an ORDER BY clause.
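A minimal illustration using SQLite from Python (the SQL principle is the same on SQL Server; the table and column names are invented): without an ORDER BY the result order is whatever the chosen access path yields, and only an explicit ORDER BY gives a guarantee:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(3, "C"), (1, "A"), (2, "B")])

# No ORDER BY: the order is an implementation detail of the access path
# and must not be relied upon.
unordered = con.execute("SELECT id FROM orders").fetchall()

# Explicit ORDER BY: the only portable way to a guaranteed ordering.
ordered = [row[0] for row in con.execute("SELECT id FROM orders ORDER BY id")]
print(ordered)  # [1, 2, 3]
```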
4. If database accesses hang for a long time, the problem is caused by a deadlock.
This assertion is incorrect.
Genuine deadlocks (in other words, the mutual blocking of several transactions) are quickly recognized by Microsoft SQL Server and eliminated within seconds by canceling one of the blocking transactions with an SQL error 1205.
Database accesses that hang for a long time may have a wide variety of causes (blocking locks or suboptimal execution plans, for example), but are not deadlocks.
A good starting point to analyze why certain actions hang is creating snapshots of the current situation with hangman. Refer to SAP Notes 948633, 541256 and 806342.
If you really encounter a deadlock (which becomes evident by the occurrence of an SQL error 1205), you can analyze it in more detail by following SAP Notes 111291 and 32129.
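Conceptually, a database engine tells the two situations apart with a wait-for graph: a chain of waiters is mere blocking, while a cycle is a deadlock and one participant gets cancelled (SQL Server does this with error 1205). A simplified sketch of the detection idea, not SQL Server's actual detector:

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {waiter: blocker}."""
    for start in wait_for:
        seen, node = set(), start
        while node in wait_for:       # follow the chain of waits
            if node in seen:
                return True           # we came back around: deadlock
            seen.add(node)
            node = wait_for[node]
    return False

# T1 waits on T2, T2 waits on T3: a blocking chain, not a deadlock.
print(has_cycle({"T1": "T2", "T2": "T3"}))  # False

# T1 waits on T2 and T2 waits on T1: a genuine deadlock.
print(has_cycle({"T1": "T2", "T2": "T1"}))  # True
```

The chain case can last arbitrarily long (until the head of the chain finishes), which is why long hangs are usually blocking, not deadlocks.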
5. Updating Microsoft SQL Server database statistics manually, with a SQL Server Agent job or a SQL Server Maintenance Plan, is required on a regular basis.
Updating statistics on a regular basis is important, but with SQL Server there is no need to schedule this task explicitly!
As long as the autostats feature is properly enabled, SQL Server automatically detects if statistics need to be updated and carries out this task for you.
The decision whether statistics are considered out of date depends on several factors like the number of rows in the table and the number of rows modified since the last statistics update.
For details on the algorithm please see:
+ Statistics Used by the Query Optimizer in Microsoft SQL Server 2000
+ Statistics Used by the Query Optimizer in Microsoft SQL Server 2005
+ Statistics Used by the Query Optimizer in Microsoft SQL Server 2008
To ensure that the automatic statistics update is enabled correctly, please refer to the configuration note for your SQL Server release:
+ SQL Server 2008: SAP Note 1237682
+ SQL Server 2012: SAP Note 1702408
+ SQL Server 2014: SAP Note 1986775
+ SQL Server 2016: SAP Note 2312935
+ SQL Server 2017: SAP Note 2484657
+ SQL Server 2019: SAP Note 279607
+ SQL Server 2022: SAP Note 3218776
Bottom line:
Don't update optimizer statistics manually for any object (or the whole database) unless SAP explicitly asks you to do so. It will produce I/O load and will not have any benefit.
The only exception to this rule are tables which contain date information. To ensure proper statistics for those tables at any time you need to follow SAP Note 1558087 and schedule an update job for such tables.
6. Reorganizing some or all database objects is a required maintenance task and should therefore be carried out on a regular basis.
A bad overall performance or a bad performance of single database operations is often believed to be caused by the fragmentation of tables and indexes. As a solution, reorganization or rebuild appears to be the cure. This might apply for other relational database management systems, but for SQL Server, in most situations both the assumption that the bad performance is caused by fragmentation and trying to solve the problem by reorganizing or rebuilding database objects are false.
For this reason, SAP explicitly recommends not to reorganize or rebuild any database objects on a regular basis. You should not even reorganize or rebuild objects as an attempt to solve a performance problem as long as it is not evident that fragmentation is the root cause of the problem (which it hardly ever is).
Please see SAP Note 159316 which explains this topic in more detail.
7. DBCC CHECKDB and DBCC CHECKTABLE with option repair_allow_data_loss allow you to repair database inconsistencies and will not cause any data loss.
The repair_allow_data_loss option is not a tool which can perform magic to retrieve data from pages which are physically damaged or contain logically incorrect information. Instead, it does more or less exactly what its name says: it tries to retrieve as much data as possible and will discard as much data as required to return to a consistent version of the affected object(s).
It is important to understand that database inconsistencies are in almost all cases caused by malfunctions on lower layers (typically hardware or driver malfunctions). This means that due to a malfunction on these lower layers one or more database pages are damaged - meaning their content is not fully correct anymore to a certain extent. There are various types of database inconsistencies: e.g. pages might not be linked properly anymore, links between pages might be missing completely, pages from the allocation maps (GAM, IAM, SGAM) might contain incorrect data, or pages might be damaged to an extent that they do not even have the physical structure of a SQL Server page anymore.
If you are very, very lucky this affects a page which was cached for faster access in your main memory and the inconsistent page hasn't yet been written back to the disk. This is what we call a transient inconsistency, but unfortunately an inconsistency is hardly ever a transient one.
In most cases the inconsistent pages are in the database files or in the log files. This means the incorrect information is on disk and there is no proper version of the affected page(s) anymore. This should make it clear why you cannot simply "recover" from an inconsistency. An inconsistency is a damaged page - there is no way to make the database guess what the correct content of an inconsistent page would have been and to let the DB simply revert the page content back to the correct state.
In most cases you will have more than one inconsistency. In order to judge how bad the situation is you need very exhaustive knowledge of SQL Server to understand which kind of pages (e.g. index pages, data pages, leaf pages, allocation map pages) are affected and which impact this has.
Using repair_allow_data_loss in order to let the database discard everything that cannot be interpreted or read properly anymore is no solution - in most cases it will even make things worse, and still there is no guarantee that this option will even be able to recreate a consistent version of the affected object(s) - despite accepting data loss. You might still have inconsistencies left afterwards as, depending on how bad the situation is, it might not even be possible anymore to return to a physically consistent state. On the other hand, and much more important: this leads to completely uncontrollable, unpredictable data loss and there is no way to log or trace what is thrown away. You will have data loss and this will cause inconsistencies on SAP application level (usually SEVERE inconsistencies). For these reasons, SAP does not support the usage of repair_allow_data_loss. See SAP Note 142731 and SAP Note 1597910 for further details.
8. After archiving or deleting data from a table the table and its indexes will occupy less space in the database and the database itself will also occupy less space.
There are different key figures which inform you about the space consumption of an object (reserved size, data size, index size, unused size). If you have a large table and you delete a large amount of data from the table (e.g. by reorganizing the entries from application level or by archiving), SQL Server will not release the freed space back to the data files. Instead, it will keep the freed space reserved for the object.
If you really have the need to release the space back to the data files and to then release it back to the filesystem, please refer to SAP KBA 1721843 for more details. If you are not urged to gain back the space on filesystem level, SAP recommends simply leaving the object as it is. SQL Server will reuse the freed space as soon as new entries are inserted into the table.
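The behavior can be pictured with a toy model of reserved versus used pages (illustration only, not the real storage engine): deletes return pages to the object's own free pool, and later inserts reuse them before any new space is reserved:

```python
class Table:
    """Toy model: 'reserved' space an object keeps vs. pages actually in use."""
    def __init__(self):
        self.reserved_pages = 0   # space the object keeps reserved in the files
        self.used_pages = 0       # pages currently holding data

    def insert_pages(self, n):
        free = self.reserved_pages - self.used_pages
        if n > free:                        # only reserve more when needed
            self.reserved_pages += n - free
        self.used_pages += n

    def delete_pages(self, n):
        self.used_pages -= n                # reserved_pages is NOT reduced

t = Table()
t.insert_pages(100)   # reserved 100, used 100
t.delete_pages(80)    # archive/delete: used drops to 20, reserved stays at 100
t.insert_pages(50)    # new inserts reuse the freed pages: reserved still 100
print(t.reserved_pages, t.used_pages)  # 100 70
```

This matches the recommendation in the note: unless you must hand the space back to the filesystem, leaving the reservation in place is harmless, because new rows simply fill it again.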
9. If the autogrow option is configured for all datafiles Microsoft SQL Server will grow all files in a balanced way.
The assumption that MS SQL Server grows files in a balanced way if autogrow is switched on for all files is a common misconception.
SQL Server uses a proportional filling algorithm to distribute new data over all existing datafiles. This is described in more detail in SAP Note 1238993 - even though the note explicitly mentions SQL Server Release 2008, it works the same way for all releases.
Briefly explained: if new data needs to be added to the database, SQL Server distributes the new data over all datafiles which still provide free space. SQL Server will not grow any file as long as there is at least one datafile which still provides free space. Even if autogrow is configured for all datafiles SQL Server will wait until ALL files are full - only if all datafiles are completely full and SQL Server needs to add new data will it autogrow files. If you are using SQL Server release >= 2008 and have set trace flag 1117, SQL Server will grow all existing datafiles with autogrow=on at an autogrow event. In any other case, SQL Server will grow a single file only! All new data will then go into this single grown file until this file is full again. Then, as soon as new data needs to be added again, SQL Server will repeat the previously explained procedure and will grow a single file only.
In order to allow MS SQL Server to use all files at any time, we strongly recommend you to make sure that you always have free space left in all existing datafiles. If some of your datafiles are currently full, extend them if your disk layout allows it.
For SQL Server releases >= 2008 Microsoft provides trace flag 1117 which makes SQL Server grow all files instead of growing a single file only at an autogrow event. This reduces monitoring efforts to ensure proper data distribution. Please see SAP Note 1238993 for more details and to learn how to set the trace flag.
As of SQL Server 2016, the trace flag 1117 is not needed anymore. See SAP Note 2204688 for more information.
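The proportional fill and autogrow behavior described above can be sketched in a simplified model (illustration only, not the actual algorithm): extents go to the file with the most free space, and growth only happens once every file is full; the grow_all flag here plays the role of trace flag 1117:

```python
def place_extents(files, n, growth=10, grow_all=False):
    """files: list of dicts with 'size' and 'used'. Places n extents."""
    for _ in range(n):
        candidates = [f for f in files if f["used"] < f["size"]]
        if not candidates:                       # ALL files full -> autogrow
            targets = files if grow_all else files[:1]   # TF1117: grow all
            for f in targets:
                f["size"] += growth
            candidates = [f for f in files if f["used"] < f["size"]]
        # proportional fill (simplified): favor the file with most free space
        target = max(candidates, key=lambda f: f["size"] - f["used"])
        target["used"] += 1

files = [{"size": 4, "used": 0}, {"size": 4, "used": 0}]
place_extents(files, 8)             # both files provide space: filled evenly
print([f["used"] for f in files])   # [4, 4]

place_extents(files, 1)             # everything full -> a single file is grown
print([f["size"] for f in files])   # [14, 4]
```

After the single-file growth, that file is the only one with free space, so it receives all new data - exactly the imbalance the note warns about and the reason to keep free space in all datafiles.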
10. High memory paging may cause the database to go in SUSPECT status.
Messages in the ERRORLOG like "A significant part of sql server process memory has been paged out. This may result in a performance degradation" are an indication that you have not set Locked Pages in Memory (LPiM) as per SAP Note 1134345. SAP recommends using LPiM.
The reasons for your database going into Suspect state are described in SAP Note 8166
| See Also
- Suspect database
SAP Note 1134345 - Using locked pages for SQL Server
SAP Note 421644 - SQL error 9002: The transaction log is full
SAP Note 1237682 - Configuration Parameters for SQL Server 2008
SAP Note 1702408 - Configuration Parameters for SQL Server 2012
SAP Note 1986775 - Configuration Parameters for SQL Server 2014
SAP Note 2312935 - Configuration Parameters for SQL Server 2016
SAP Note - Configuration Parameters for SQL Server 2019
SAP Note 3218776 - Configuration Parameters for SQL Server 2022
SAP Note 948633 - Hangman.vbs
SAP Note 541256 - Hangman 4.1
SAP Note 806342 - FAQ: Analyzing exclusive database locks on SQL Server
SAP Note 1178916 - Update_Tabstats job causes performance problems
SAP Note 111291 - FAQ: Deadlock analysis for SQL Server
SAP Note 32129 - Deadlock analysis for the SQL Server
SAP Note 1558087 - SQL Server Statistics Maintenance
SAP Note 159316 - Reorganizing tables on SQL Server
SAP Note 142731 - DBCC checks for SQL Server
SAP Note 1597910 - Handling of database corruptions on SQL Server
SAP Note 1238993 - Data File Management for SQL Server
SAP Note 2204688 - Trace flags 1117 and 1118 not used in SQL Server 2016
SAP KBA 1721843 - MSSQL: Post-steps after archiving, deleting or compression
| Keywords
corruption, DB02, execution path, update statistics, repair_allow_data_loss, autogrow, paging, page out, paged, suspect status, suspect mode
| Attributes
Key Value
Other Components: SAP Business Warehouse > Basis System and Installation > BW Database Platforms > BW on Microsoft SQL Server (BW-SYS-DB-MSS)
Products: SAP NetWeaver all versions
| This document refers to
SAP Note/KBA | Component | Title
1721843 | BC-DB-MSS | MSSQL: Post-steps after archiving, deleting or compression
948633 | BC-DB-MSS | Hangman.vbs
1692 | BC-DB-MSS | Suspect database
806342 | BC-DB-MSS | FAQ: Analyzing exclusive database locks on SQL Server
541256 | BC-DB-MSS | Hangman 4.1
421644 | BC-DB-MSS | SQL error 9002: The transaction log is full
32129 | BC-DB-MSS | Deadlock analysis for the SQL Server
2312935 | BC-DB-MSS | Configuration Parameters for SQL Server 2016
2204688 | BC-DB-MSS | Trace flags 1117 and 1118 not used in SQL Server 2016 and higher
1986775 | BC-DB-MSS | Configuration Parameters for SQL Server 2014
1702408 | BC-DB-MSS | Configuration Parameters for SQL Server 2012
1597910 | BC-DB-MSS | Handling of database corruptions on SQL Server
159316 | BC-DB-MSS | Reorganization of tables on SQL Server
1558087 | BC-DB-MSS | SQL Server Statistics Maintenance
142731 | BC-DB-MSS | SQL Server DBCC checks
1238993 | BC-DB-MSS | Data File Management for SQL Server
1237682 | BC-DB-MSS | Configuration Parameters for SQL Server 2008
1178916 | BC-DB-MSS-CCM | Update_Tabstats job causes system performance problems
1134345 | BC-DB-MSS | Using locked pages for SQL Server
111291 | BC-DB-MSS | FAQ: Deadlock analysis for SQL Server
| This document is referenced by
SAP Note/KBA | Title
1721843 | MSSQL: Post-steps after archiving, deleting or compression