What is causing database slowdowns?

February 2, 2016 by Kimberly Killian

Why is my database so slow? This query used to be so much faster. Why does it take so long to rebuild my index? How come it was fine last month? Every day I am asked these types of questions by clients. Every day! Many database and application developers do not realize that indexes are ever-changing entities within a database; they need to be monitored closely and managed periodically to remain efficient. I cannot even count the times someone has told me "but we have indexes on this or that column and it was fine last month," all while failing to mention that the database just inserted, updated, or deleted 1,000,000 records, which would definitely change the footprint of the data and leave the indexes unsound or in need of help. Even adding 50 new users who use the data differently could require new indexes.

That being said, I decided to automate a quick and easy data gathering and reporting job that helps answer these questions. Most query performance questions can be answered by determining the fragmentation levels of indexes; whether there are missing, duplicate, or unused indexes; which queries are the heavy hitters; whether queries are running in memory or going to disk; and how many times they execute. My favorite thing to do with SQL Server is to automate, automate, and automate the tasks that are asked of me over and over.

This simple job includes 9 tables used for collecting data, 7 reports, and one SQL Agent job that ties it all together, performs a cleanup step, and kicks off the tasks. Steps 1-5 below show how to set up the job piece by piece; alternatively, you can just run the included scripts "1-setup tables.sql" and "2-agent job.sql". Either way, make sure you update the variables with your information prior to running.

Prerequisites:

  1. I have created a small database called DBA_Monitoring on each server that I monitor; this is where I do most of my DBA tasks. This is optional, of course; I like to keep my DBA monitoring separate from other production databases. You can call yours whatever you like, just be sure to change it in the scripts provided before you run them.
  2. You must have Database Mail (dbMail) enabled on the server you will be monitoring, and note your mail profile name, as it will be used to send the reports via email.
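
If Database Mail is not yet enabled, it can be switched on with sp_configure. This is a standard server-level step (it requires sysadmin rights) and is not part of the scripts included with this article:

```sql
-- Enable the Database Mail extended stored procedures on the monitored server.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;
```

You will still need a configured mail account and profile (via the Database Mail Configuration Wizard or the msdb sysmail procedures) before the reports can be sent.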

Creating the Collection Package

Step 1: Create the tables

I have created 9 tables in DBA_Monitoring to house each type of data collected in this job. 7 are for data collection and 2 are staging tables. These will need to be generated only once and will be truncated between data collection runs. Copy / Paste the scripts below and run them in your DBA_Monitoring database.
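
As an illustration of the kind of table involved, here is a hedged sketch of one collection table. The actual column lists come from the included "1-setup tables.sql"; the names below simply mirror the columns of the Unused Index report shown later:

```sql
USE DBA_Monitoring;
GO
-- Illustrative schema only; the real script defines all 9 tables.
IF OBJECT_ID('dbo.IndexOverview_UnusedIndex') IS NULL
CREATE TABLE dbo.IndexOverview_UnusedIndex (
    ServerName    sysname,
    DatabaseName  sysname,
    ObjectName    sysname,
    IndexName     sysname,
    UserSeeks     bigint,
    UserScans     bigint,
    UserLookups   bigint,
    UserUpdates   bigint,
    TableRows     bigint,
    DropStatement nvarchar(max)
);
GO
```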


Step 2: Collect the data

This step collects data into the tables created in Step 1. The queries use SQL Server Dynamic Management Views (DMVs), so the data is only valid since the last server restart. Because of this, it is a good idea to know when that restart was. In my case, restarts for database servers are usually my responsibility, or I am in touch with the server team to verify the last reboot. This ensures that enough data has accumulated in the DMVs for the reports to be useful. Make sure you change the variables marked with <> throughout the scripts prior to running. In addition, some of the queries contain a threshold for the number of rows to bring back, so do not forget to adjust this to your needs as well. Copy/Paste the scripts below to collect data.
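
A quick way to check how far back the DMV data goes is to query the instance start time:

```sql
-- When did SQL Server last start? DMV counters only accumulate since then.
SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;
```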


Step 3: Generate the reports

Each report is generated with HTML and sent via dbMail. These reports are called from within the SQL Agent Job one at a time after all of the data has been collected. Copy / Paste and run the scripts below to see the reports. Make sure to change the variables marked <> throughout.
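
The mailing step in each report boils down to a call to msdb.dbo.sp_send_dbmail with an HTML body. A hedged sketch (the profile name, recipients, and the HTML string are the variables to change; the real scripts build the HTML from the collected tables):

```sql
-- Illustrative only: the real report scripts assemble @htmlBody from the
-- collection tables before sending.
DECLARE @htmlBody nvarchar(max) =
    N'<html><body><h3>Unused Index Report</h3><p>(report rows here)</p></body></html>';

EXEC msdb.dbo.sp_send_dbmail
     @profile_name = '<YourMailProfile>',
     @recipients   = '<dba@yourcompany.com>',
     @subject      = 'Index Overview: Unused Index Report',
     @body         = @htmlBody,
     @body_format  = 'HTML';
```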

Report: Unused Index

This report lists indexes that exist but are not being used. DMV data matters greatly here, because not every index is touched all the time; an index may simply not have been used yet since the last server restart, so be mindful of that when reviewing this report. The report also provides the appropriate DROP statement that you can run to remove an unused index following analysis. I highly recommend scripting out and saving the indexes prior to removing them, in the event you need to add them back later.
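
A hedged sketch of the kind of query behind this report, based on sys.dm_db_index_usage_stats (the included script also captures row counts and writes into the collection table):

```sql
-- Nonclustered indexes with zero reads since the last restart, plus a
-- ready-made DROP statement. LEFT JOIN: an index absent from the usage
-- DMV has never been touched at all.
SELECT @@SERVERNAME              AS ServerName,
       DB_NAME()                 AS DBName,
       o.name                    AS ObjectName,
       i.name                    AS IndexName,
       ISNULL(s.user_seeks, 0)   AS UserSeeks,
       ISNULL(s.user_scans, 0)   AS UserScans,
       ISNULL(s.user_lookups, 0) AS UserLookups,
       ISNULL(s.user_updates, 0) AS UserUpdates,
       'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
         + QUOTENAME(SCHEMA_NAME(o.schema_id)) + '.' + QUOTENAME(o.name) AS DropStatement
FROM sys.indexes i
JOIN sys.objects o
  ON o.object_id = i.object_id AND o.is_ms_shipped = 0
LEFT JOIN sys.dm_db_index_usage_stats s
  ON s.object_id = i.object_id
 AND s.index_id  = i.index_id
 AND s.database_id = DB_ID()
WHERE i.index_id > 1   -- skip heaps and clustered indexes
  AND ISNULL(s.user_seeks, 0) + ISNULL(s.user_scans, 0) + ISNULL(s.user_lookups, 0) = 0;
```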


| Server Name | DB Name | Object Name | Index Name | User Seeks | User Scans | User Lookups | User Updates | Table Rows | Drop Statement |
|---|---|---|---|---|---|---|---|---|---|
| ServerName | DatabaseName | sampleObject | IX_Sample | 196 | 77 | 0 | 140 | 631 | DROP INDEX [IX_Sample] ON [dbo].[sampleObject] |

Report: Missing Index

This report contains the missing indexes along with a CREATE statement that could be used, following analysis, to generate each missing index. DMV data matters greatly here, because not every index is touched all the time; be mindful of when the last server restart was when reviewing this report.
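
The raw material for this report comes from the missing-index DMVs. A minimal sketch (the included script adds server/database names and assembles the full CREATE statement from these columns):

```sql
-- Missing-index suggestions accumulated by the optimizer since the last
-- restart, highest estimated impact first.
SELECT d.statement           AS TableName,        -- fully qualified table
       gs.avg_user_impact,                        -- estimated % improvement
       gs.last_user_seek,
       d.equality_columns,                        -- key columns for = predicates
       d.inequality_columns,                      -- key columns for range predicates
       d.included_columns
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats gs
  ON gs.group_handle = g.index_group_handle
ORDER BY gs.avg_user_impact DESC;
```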


| Server Name | DB Name | Avg Estimated Impact (milliseconds) | Last User Seek | Table Name | Create Statement |
|---|---|---|---|---|---|
| ServerName | DatabaseName | 299.64 | 2016-01-19T15:38:11.190 | Some Table | CREATE INDEX [IX_xxx_servername] ON [databaseName].[dbo].[xxx] (xx) |

Report: Duplicate Index

This report contains duplicate or overlapping indexes and a drop statement to clean them up following analysis. I highly recommend scripting and saving the indexes prior to removing them in the event you need to add them back later.
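
One common way to find overlap is to compare index key columns in sys.index_columns. A hedged, simplified sketch that flags index pairs on the same table sharing the same leading key column (the full report compares up to 10 key columns):

```sql
-- Index pairs on the same table whose first key column matches: a common
-- sign of duplicate or overlapping indexes.
SELECT o.name  AS TableName,
       i1.name AS IndexName,
       i2.name AS OverlappingIndex,
       c.name  AS LeadingColumn
FROM sys.indexes i1
JOIN sys.indexes i2
  ON i2.object_id = i1.object_id
 AND i2.index_id  > i1.index_id          -- avoid self/mirror pairs
JOIN sys.index_columns ic1
  ON ic1.object_id = i1.object_id AND ic1.index_id = i1.index_id AND ic1.key_ordinal = 1
JOIN sys.index_columns ic2
  ON ic2.object_id = i2.object_id AND ic2.index_id = i2.index_id AND ic2.key_ordinal = 1
 AND ic2.column_id = ic1.column_id       -- same leading key column
JOIN sys.columns c
  ON c.object_id = i1.object_id AND c.column_id = ic1.column_id
JOIN sys.objects o
  ON o.object_id = i1.object_id AND o.is_ms_shipped = 0;
```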


The example below has been truncated to only show up to 3 columns however the report will return up to 10 index columns.

| Server Name | DB Name | Table Name | Overlapping Index | Col1 | Col2 | Col3 |
|---|---|---|---|---|---|---|
| ServerName | DatabaseName | Table name | Index name | Column name | Column name | Column name |

Report: Longest Running Queries

This report will contain the longest running/most expensive queries. This report is useful to see how many times a query is executed along with how long it takes to run.
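
A hedged sketch of this report's core query against sys.dm_exec_query_stats. Note that the DMV reports elapsed times in microseconds, so they are divided by 1,000 here to match the report's milliseconds:

```sql
-- Top 20 statements by total elapsed time since the last restart.
SELECT TOP (20)
       qs.execution_count,
       qs.max_elapsed_time / 1000                         AS MaxElapsedMs,
       (qs.total_elapsed_time / qs.execution_count) / 1000 AS AvgElapsedMs,
       st.text                                            AS QueryText
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_elapsed_time DESC;
```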


| Server Name | DB Name | Query | Execution Count | Max Elapsed Time (ms) | Avg Elapsed Time (ms) |
|---|---|---|---|---|---|
| ServerName | DatabaseName | Query | 16 | 200011 | 29751 |

Report: Index Reads with Statement

This report contains statistics for each object executed, including the query that ran. You will also see total logical reads and total physical reads, which are great for determining whether queries are being served from memory or from disk. These statistics are useful for deciding whether additional indexes could help a query run faster. Another benefit is in determining hardware updates such as memory, or even server-level settings such as "maximum server memory".
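
One way to get per-object read counts with the statement text is sys.dm_exec_procedure_stats; the article's actual script may use a different DMV, so treat this as an illustrative sketch:

```sql
-- Per-procedure read counts since the last restart. A high logical-to-physical
-- ratio means the pages were mostly served from the buffer pool (memory).
SELECT DB_NAME(ps.database_id)                   AS DBName,
       OBJECT_NAME(ps.object_id, ps.database_id) AS ObjectName,
       ps.execution_count,
       ps.total_logical_reads,
       ps.total_physical_reads,
       st.text                                   AS Statement
FROM sys.dm_exec_procedure_stats ps
CROSS APPLY sys.dm_exec_sql_text(ps.sql_handle) st
WHERE ps.database_id = DB_ID('<YourDatabase>')
ORDER BY ps.total_logical_reads DESC;
```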


| Server Name | DB Name | Object | Execution Count | Total Logical Reads | Total Physical Reads | Statement |
|---|---|---|---|---|---|---|
| ServerName | DatabaseName | Object name | 1 | 1 | 0 | Query using the index here |

Report: Index Reads with Counts

This report contains counts for each database object executed against. It is similar to the Index Reads with Statement report, but shows just the counts, which is great for a quick peek into what is going on.



| Server Name | DB Name | Object | Execution Count | Total Logical Reads | Total Physical Reads |
|---|---|---|---|---|---|
| ServerName | DatabaseName | Object Name | 10 | 420 | 4 |

Report: Fragmentation Levels

This report contains the fragmentation percentage and page count for each index. When looking at the fragmentation percentage, a good rule of thumb for managing indexes is:

  • 0-5% fragmentation leave alone
  • 6-30% reorganize the index
  • 31% or higher rebuild the index

Also, do not forget to consider page count. Low page count with high fragmentation may not be an issue; for example, 80% fragmentation on 5 pages should not cause alarm.
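
The rule of thumb above maps directly onto a query against sys.dm_db_index_physical_stats. A hedged sketch (the thresholds and the page-count filter are the knobs to tune):

```sql
-- Fragmentation and page count per index in the current database, with a
-- suggested action based on the 5%/30% rule of thumb.
SELECT OBJECT_NAME(ips.object_id)          AS TableName,
       i.name                              AS IndexName,
       ips.avg_fragmentation_in_percent    AS [Fragmentation %],
       ips.page_count                      AS PageCount,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
            ELSE 'LEAVE ALONE' END         AS SuggestedAction
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count > 100   -- ignore tiny indexes per the page-count caveat
ORDER BY ips.avg_fragmentation_in_percent DESC;
```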


| Server Name | DB Name | Table | Index | Fragmentation % | Page Count |
|---|---|---|---|---|---|
| ServerName | DatabaseName | Table | Index | 44 | 1254 |

Step 4: Purge Data

Following data collection, you will need to clean up the temp and staging tables before the next run. In the SQL Agent job, I have this as the first step, so the previous run's data remains available between runs. If you are running the process manually outside the SQL Agent job, you can run the purge after you run the reports. Copy/Paste the script below to purge the data prior to the next run.
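
The purge amounts to truncating the tables created in Step 1. A sketch using the table names listed in the summary below (adjust if you renamed them):

```sql
USE DBA_Monitoring;
GO
-- Reset all collection and staging tables before the next run.
TRUNCATE TABLE dbo.IndexOverviewReadsWithCounts_stage;
TRUNCATE TABLE dbo.IndexOverviewReadsWithQuery_stage;
TRUNCATE TABLE dbo.IndexOverviewReadsWithCounts;
TRUNCATE TABLE dbo.IndexOverviewReadsWithQuery;
TRUNCATE TABLE dbo.IndexOverview_DuplicateIndex;
TRUNCATE TABLE dbo.IndexOverview_FragLevels;
TRUNCATE TABLE dbo.IndexOverview_LongestQueries;
TRUNCATE TABLE dbo.IndexOverview_MissingIndex;
TRUNCATE TABLE dbo.IndexOverview_UnusedIndex;
GO
```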


Step 5: SQL Agent Job

The SQL Agent job ties the entire process together by calling steps 2-4 above. I usually run this job ad hoc when someone complains about performance, but you could schedule it as well. Again, a warning: the data is only valid since the last server restart, so if the server was restarted a day or two ago, the data will not be as rich as data collected a month after the last restart. Below is what the SQL Agent job looks like:

Copy/Paste/Run the script below to create the above SQL Agent job. Again, please change the variables marked with <>.
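
For orientation, here is a hedged skeleton of how such a job is built with the msdb stored procedures. The job name "Index Overview" comes from this article; the step and procedure names below are illustrative placeholders, not the ones in the included script:

```sql
USE msdb;
GO
EXEC dbo.sp_add_job       @job_name = N'Index Overview';

EXEC dbo.sp_add_jobstep   @job_name = N'Index Overview',
                          @step_name = N'Purge previous data',
                          @subsystem = N'TSQL',
                          @database_name = N'DBA_Monitoring',
                          @command = N'EXEC dbo.usp_IndexOverview_Purge;',    -- hypothetical proc
                          @on_success_action = 3;  -- 3 = go to next step

EXEC dbo.sp_add_jobstep   @job_name = N'Index Overview',
                          @step_name = N'Collect DMV data',
                          @subsystem = N'TSQL',
                          @database_name = N'DBA_Monitoring',
                          @command = N'EXEC dbo.usp_IndexOverview_Collect;',  -- hypothetical proc
                          @on_success_action = 3;

EXEC dbo.sp_add_jobstep   @job_name = N'Index Overview',
                          @step_name = N'Send reports',
                          @subsystem = N'TSQL',
                          @database_name = N'DBA_Monitoring',
                          @command = N'EXEC dbo.usp_IndexOverview_Reports;';  -- hypothetical proc

EXEC dbo.sp_add_jobserver @job_name = N'Index Overview', @server_name = N'(local)';
```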


Summary:

This is a step-by-step method for collecting a high-level overview of the condition of the indexes and query executions for a specific database on your server. The information is taken directly from SQL Server, making it readily available at any time, as long as a reasonable amount of time has passed since the last server restart. The data collected may help you answer the ever-recurring questions about database performance and what you can do to improve it.

Included in this package are the following:

  • Tables:
    • IndexOverviewReadsWithCounts_stage
    • IndexOverviewReadsWithQuery_stage
    • IndexOverviewReadsWithCounts
    • IndexOverviewReadsWithQuery
    • IndexOverview_DuplicateIndex
    • IndexOverview_FragLevels
    • IndexOverview_LongestQueries
    • IndexOverview_MissingIndex
    • IndexOverview_UnusedIndex
  • Reports
    • Duplicate Index
    • Missing Index
    • Unused Index
    • Longest Queries
    • Fragmentation Levels
    • Index Read Counts
    • Index Read Statements
  • SQL Agent Job
    • Index Overview


About Kimberly Killian

With over 20 years of total IT experience, Kim has a very diverse background. She has worked as a programmer, IT architect, DB engineer, DBA, ETL developer, and senior-level technical consultant. For the last 15 years she has focused her technology experience on database solutions, specifically MS SQL Server. She has published numerous articles spotlighting SQL Server and is an expert in monitoring and reporting on SQL Server performance.
