Quick and dirty server monitoring with SQL Server Reporting Services

October 8, 2014 by Steve Simon

Introduction

A few months back, I found myself in a position where the client wanted a ‘monitoring tool’ to utilize on a daily basis to ascertain the status of their SQL Server systems and to continually monitor disk space capacity. Being a typical Monday-morning quarterback, and utilizing my favorite SQL Server tool, SQL Server Reporting Services, I came up with the following ‘ah-ha’ solution.

Getting started

Whilst the dashboard that I designed for the above-mentioned client contained twenty queries (most of which utilized dynamic management views, or DMVs), I have chosen one query in particular to share with you. We shall see how this query may be incorporated into a similar dashboard.

The query below records the actual and average run times of a particular Agent job, in addition to passing back a defined field called ‘color’. ‘Color’ is set at run time, depending on how the actual elapsed seconds compare with the average run time.

The eagle-eyed reader will note that the job name has been hard-coded. We shall see that this may be controlled by a parameter as we go forward. Lastly, note the ‘color’ field, which shall be used extensively in our monitoring activities.

Let us have a quick look at the code below:
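A minimal sketch of such a query is shown here; the job name ‘Nightly ETL Load’ and the 1.2x and 1.5x thresholds are illustrative assumptions rather than the exact values used for the client. It converts the run_duration column of msdb.dbo.sysjobhistory (stored as an HHMMSS integer) into seconds, compares each run against the overall average and derives the ‘color’ field accordingly:

-- Sketch only: job name and thresholds are assumptions for illustration
;WITH JobRuns AS
(
    SELECT  j.name                            AS job_name,
            h.run_date,
            -- run_duration is an integer in HHMMSS format; convert it to seconds
            (h.run_duration / 10000) * 3600
          + (h.run_duration / 100 % 100) * 60
          + (h.run_duration % 100)            AS actual_seconds
    FROM    msdb.dbo.sysjobs AS j
            INNER JOIN msdb.dbo.sysjobhistory AS h
                ON h.job_id = j.job_id
    WHERE   j.name    = 'Nightly ETL Load'    -- hard-coded job name (parameterized later)
      AND   h.step_id = 0                     -- step 0 = the job outcome row
)
SELECT  job_name,
        run_date,
        actual_seconds,
        AVG(actual_seconds) OVER ()           AS average_seconds,
        CASE                                  -- 'color' drives the conditional formatting
            WHEN actual_seconds > AVG(actual_seconds) OVER () * 1.5 THEN 'RED'
            WHEN actual_seconds > AVG(actual_seconds) OVER () * 1.2 THEN 'YELLOW'
            ELSE 'GREEN'
        END                                   AS color
FROM    JobRuns
ORDER BY run_date DESC;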

When run, the code generates the following results:

The actual and average run times of a particular agent job - results generated after running the above query

Creating our monitoring project

As a starting point, I create a Reporting Services project within the Visual Studio development tools, with a shared data connection through to our SQL Server.

The Reporting Services project

Our database connection looks as follows:

Database connection

Note that I have selected MSDB as my database, as the SQL Server Agent job tables that we shall be querying (such as sysjobs and sysjobhistory) reside within this database.

As I want my dashboard to be as flexible as possible, I want the user to have the freedom to select the job of his or her choice.

This said, what I need is a list of relevant agent jobs on my server.

To create this list, I now create a dataset to ‘store’ these job names. I create a ‘local’ data source from the shared data source ‘Timelinx’ (created in a previous step). This data will be accessed via a report parameter (discussed below):

Data source properties dialog

The code to populate this list is fairly straightforward and may be seen below:
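A query along the following lines returns the Agent job names from msdb for the drop-down list (a sketch; restricting the list to enabled jobs is my own assumption):

-- List the Agent jobs available on this server
SELECT  name AS JobName
FROM    msdb.dbo.sysjobs
WHERE   enabled = 1          -- only show jobs that are currently enabled
ORDER BY name;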

My ‘create a data set’ screen is shown below:

Create a dataset screen

My completed dataset now appears under the Dataset tab on the left (as seen below):

Completed dataset appears under the Dataset tab on the left

This said and done, I now need to define a parameter to permit the user to select which job he or she wishes to view.

Creating our job parameter

Declaring a parameter is quick and fairly simple. The data for the parameter will be obtained from the dataset that we just created. The completed parameter definition is shown below:

Report Parameter Properties dialog

I can now preview the contents of my ‘Jobs’ dataset, to select the name of the job that I am interested in examining:

Previewing the contents of the ‘Jobs’ dataset

Now this is ‘fine and dandy’; however, in order to actually view the data (relevant to the selected job), we must incorporate a matrix onto our drawing surface:

Incorporating a matrix onto the drawing surface

Our matrix appears as follows:

Appearance of the matrix

Our next task is to define the dataset that will contain the results for our matrix. There are numerous ways of obtaining the necessary data including the creation of a stored procedure and using native T-SQL code.

While I normally use a stored procedure (which is more efficient), in this case most of the results are coming from MSDB and I am averse to creating a stored procedure within that system database.

Now, there are workarounds that one can use to ‘compile’ the stored procedure from another database and yet trick it into thinking that it is in fact being created within MSDB itself; however, this is beyond the scope of this article.

This said, I enclose the code within the ‘Query Text Box’.

Further, note the parameter (@JobName) inserted for the job name (see below):
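In essence, the only change from the hard-coded query shown earlier is the WHERE clause. A trimmed-down sketch of the parameterized filter might look like this (the parameter name @JobName matches the report parameter created above):

-- Sketch: the hard-coded job name is replaced with the report parameter
SELECT  j.name,
        h.run_date,
        h.run_duration
FROM    msdb.dbo.sysjobs AS j
        INNER JOIN msdb.dbo.sysjobhistory AS h
            ON h.job_id = j.job_id
WHERE   j.name    = @JobName   -- value supplied by the 'JobName' report parameter
  AND   h.step_id = 0;         -- step 0 = the job outcome row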

JobName parameter

I now click the ‘Refresh Fields’ button on the screen above to ensure that all my fields will be extracted. Going to the ‘Fields’ tab on the left-hand side of my dataset screen, I note the fields that will be extracted:

Change query and calculated fields dialog

Once I click ‘OK’ to complete, my work surface should show our newly created dataset as follows:

Work surface shows newly created dataset

Connecting the parameter value to the dataset query

This done, I click on the parameter tab to complete the process, letting the dataset know that a parameter is necessary to complete my query and that this parameter will be selected by the end user:

Parameter tab - Choose query parameter values

Upon completion of this step, we shall note that we now have two datasets. One contains a list of the ‘Agent Jobs’ and the other will eventually contain data relevant to the selected job.

Two datasets now appear in the Report Data tab

Setting the ‘DataSetName’ for the matrix

Let us now set up the matrix by connecting the matrix’s ‘DataSetName’ property to the dataset that we just created. This may be seen in the following screen dump:

Connecting the Matrix’s ‘Data Set Name’ Property to the dataset

Our last chore is to remove the column grouping from the matrix (as it is not required for our example). Merely right click on the ‘ColumnGroup’ tab and select ‘Delete group only’:

Removing the column grouping from the matrix

Further, I add a few more columns to the matrix through the use of the context menu (right-click):

Adding more columns to the matrix

Our completed matrix looks as follows, once fully populated. Note that the grouping is by date:

Appearance of the fully populated matrix

Sorting our data

As I wish to have the run dates in descending order, i.e. my most recent run at the top of the list, I right-click on the ‘run_date’ grouping (highlighted above), select the sorting option, and order it from ‘Z-A’ as shown below:

Change sorting options tab in the Group properties dialog

Now running the query, I obtain the following results. First my selection screen:

Selecting Agent Job Name

The screen dump below shows the returned dataset:

The resulting dataset

Adding the eye candy

With the skeleton of our report now complete, we need to add a bit of ‘oomph’ to the report, highlighting anomalies. The solution that I utilized for this example was to set the appropriate background fill based on the ‘color’ field. I use a Switch expression to achieve this goal:
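The BackgroundColor expression on the relevant cells might look something like the following sketch (it assumes the query returns the literal values ‘RED’, ‘YELLOW’ and ‘GREEN’ in the color field; the exact values used in the client report may differ):

=Switch(Fields!color.Value = "RED",    "Red",
        Fields!color.Value = "YELLOW", "Yellow",
        True,                          "LightGreen")

The final True condition acts as the catch-all, returning the default fill when neither threshold has been breached.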

Using a Switch expression to highlight anomalies

When run, the results become immediately apparent, as shown below:

Image showing the results becoming immediately apparent

It should be noted that there are certainly other ways of achieving the same results, for example through calculations made at run time; these are left for the reader to try for themselves.

A final touch would be to show only the results for the latest run. In reality, this would be done as part of a series of report ‘blocks’, and it is shown fully completed in the following screen dumps.

Creating a full monitoring dashboard based upon what we have just developed

For our last example, I created a report with a series of six rectangles. Five of these are purely for visual effect. The upper-left rectangle has had its ‘Fill’ property altered as follows:
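One way of achieving this (an assumption on my part as to the exact expression, and the dataset name ‘JobRunTimes’ is hypothetical) is to point the rectangle’s BackgroundColor at the first row of the job’s dataset; since the dataset is sorted with the most recent run first, that row represents the latest run:

=Switch(First(Fields!color.Value, "JobRunTimes") = "RED",    "Red",
        First(Fields!color.Value, "JobRunTimes") = "YELLOW", "Yellow",
        True,                                                "Green")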

Fill property has been altered

Thus, when the report is run, the following is achieved. Note that the latest run today was clear (green). Had there been issues, the color would have changed to red.

Run results screen

Conclusion

Having a dashboard similar to the one shown above provides the DBA and developer alike with a quick-reaction information system to monitor the server and jobs BEFORE small issues become major fires.


About Steve Simon

Steve Simon is a SQL Server MVP and a senior BI Development Engineer with Atrion Networking. He has been involved with database design and analysis for over 29 years. Steve has presented papers at 8 PASS Summits and at PASS Europe 2009 and 2010. He recently gave a Master Data Services presentation at the PASS Amsterdam Rally. Steve has presented 5 papers at the Information Builders' Summits. He is a PASS regional mentor.
