SQL Server Audit feature – DDL event auditing examples

April 10, 2017 by Jefferson Elias

Introduction

In a previous article, “SQL Server Audit feature – discovery and architecture”, we saw the basics of building an auditing system based on the SQL Server Audit feature. In its introduction, we talked a little bit about the “General Data Protection Regulation”, a.k.a. GDPR, which regulates personal data processing and circulation in Europe and is a subject on the lips of a lot of IT professionals.

GDPR makes auditing a requirement, and that’s a sufficient reason to take a look at different solutions that achieve the same goal. For each one, we can pinpoint the advantages and disadvantages of adopting it, and eventually make a final decision on which tool is the “best”.

In this article, we will take a close look at three ways to achieve the following goal: audit all DDL activity. We will review the implementation of this goal using:

  • DDL Triggers
  • Default SQL Trace mining
  • SQL Server Audits

I recommend that readers interested in security, and particularly in auditing, read Minette Steynberg’s article on SQLShack.com entitled “Creating a successful auditing strategy for your SQL Server databases”.

As you would expect, in the following sections we will implement the example of auditing DDL activity at the server and database levels in three manners: using DDL Triggers, by mining the default SQL Trace, and finally with SQL Server Audits. This will allow us to list the pros and cons of adopting each of these three methods, at least from my point of view. Once we are done with the example, we can conclude on the best choice from my own perspective.

A concrete example: auditing DDL events

In my opinion, there is no fixed rule that applies once and for all, for every company, for every application or software. It’s actually the responsibility of the audit implementer to choose the appropriate solution for a given situation. After all, the only question their manager will ask is whether or not there is an audit on a particular aspect of SQL Server.

But there are many elements that can influence this implementation:

  • Degree of knowledge and experience of the audit implementer. We tend to choose the solution we are most familiar with.
  • Application environment or context. For instance: the SQL Server version and edition, or the disk space we can dedicate to auditing.
  • Business requirements and needs for auditing. Sometimes they are not justified but have to be implemented anyway; sometimes it’s the contrary, like enabling C2 auditing on environments that are not C2-certified.
  • Available resources: number of persons to assign, time, effort, budget…

But this is not the subject here. So, to convince you that “All Roads Lead to Rome”, let’s implement DDL auditing using three different techniques (including SQL Server Audits). We will then review them and try to find the advantages of using SQL Server Audits in comparison to the two other techniques.

Note
Alternatively, we can use ApexSQL Log to perform DDL auditing.

Auditing DDL Events with triggers

As we saw in the presentation of the architecture of SQL Server Audits, there are mainly two kinds of DDL events: those that are database-related and those that are server-related. It’s the same for DDL triggers! So we will have one server-level DDL trigger and multiple database-level DDL triggers.

As most of us already know, DDL triggers are programmable objects in SQL Server, which means we can do more than just auditing with them. On TechNet, you will find the following list of use cases that DDL triggers are fit to accomplish:

DDL triggers can be used for administrative tasks such as auditing and regulating database operations.

Use DDL triggers when you want to do the following:

  • You want to prevent certain changes to your database schema.
  • You want something to occur in the database in response to a change in your database schema.
  • You want to record changes or events in the database schema.

Each DDL trigger involved in the audit should capture the event that fired it, extract the relevant details and store them in an audit table.

So, whoever wants to audit DDL events with this solution must define which information to collect and where to store it. For demonstration purposes, we will keep it simple and say that we will collect the following information and store it in tables:

  • The moment when the event occurred
  • The login name
  • The client computer name and application
  • The current database name of the session
  • The kind of event that occurred
  • The information about the object (SchemaName, ObjectName) – We could also include a “SubObjectName” item in the list in order to handle, for example, events related to table partitioning management
  • The T-SQL statement that fired the trigger
  • Optionally, we could also keep the complete event descriptor which is of XML data type.

In order to get this information, we will use the EVENTDATA built-in function, which returns an XML document describing the event that has just occurred. This document contains the information listed above. As we get an XML data type back, we can query it using XQuery.

You will find below two examples of what we get back using this function. The first one is extracted from a server-level DDL event and the next one from a database-level DDL event.

Server Level Event Example
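The original screenshot is not reproduced here, but an illustrative server-level EVENTDATA document (all values below are made up for the example) looks roughly like this:

```xml
<EVENT_INSTANCE>
  <EventType>CREATE_LOGIN</EventType>
  <PostTime>2017-04-10T10:21:05.470</PostTime>
  <SPID>55</SPID>
  <ServerName>MYSERVER</ServerName>
  <LoginName>DOMAIN\JDoe</LoginName>
  <ObjectName>AppLogin</ObjectName>
  <ObjectType>LOGIN</ObjectType>
  <TSQLCommand>
    <SetOptions ANSI_NULLS="ON" ANSI_NULL_DEFAULT="ON" ANSI_PADDING="ON"
                QUOTED_IDENTIFIER="ON" ENCRYPTED="FALSE" />
    <CommandText>CREATE LOGIN [AppLogin] WITH PASSWORD = ******</CommandText>
  </TSQLCommand>
</EVENT_INSTANCE>
```

A database-level event additionally carries elements such as DatabaseName, UserName and SchemaName, which is why the two scopes follow different XML Schema Definitions.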

As we can see, we don’t get exactly the same structure. This means there are two different XML Schema Definitions, and we can specialize our change logs accordingly.

The following table structure should be defined in order to store data about DDL events:
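The original table definition did not survive extraction; here is a minimal sketch matching the columns listed above (the table and column names are my own, not the original article’s):

```sql
-- Change log table for server-level DDL events. Create a similar
-- DatabaseChangeLog table for database-level events.
CREATE TABLE dbo.ServerChangeLog (
    ChangeLogId   INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    EventTime     DATETIME       NOT NULL,  -- moment the event occurred
    LoginName     SYSNAME        NOT NULL,
    HostName      NVARCHAR(128)  NULL,      -- client computer name
    ProgramName   NVARCHAR(128)  NULL,      -- client application
    DatabaseName  SYSNAME        NULL,      -- current database of the session
    EventType     NVARCHAR(128)  NOT NULL,  -- kind of event (CREATE_TABLE, ...)
    SchemaName    SYSNAME        NULL,
    ObjectName    SYSNAME        NULL,
    SqlCommand    NVARCHAR(MAX)  NULL,      -- T-SQL statement that fired the trigger
    EventDataXml  XML            NULL       -- optional: complete event descriptor
);
```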

And the following trigger code should be defined:
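The trigger code itself is not reproduced in this page either; a sketch of the server-level version (object names are assumptions, and AuditDb stands for whatever database hosts the change log table) might be:

```sql
CREATE TRIGGER TRG_Audit_Server_DDL
ON ALL SERVER
FOR DDL_SERVER_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;

    -- EVENTDATA() returns the XML descriptor of the DDL event
    DECLARE @EventData XML = EVENTDATA();

    INSERT INTO AuditDb.dbo.ServerChangeLog
        (EventTime, LoginName, HostName, ProgramName, DatabaseName,
         EventType, SchemaName, ObjectName, SqlCommand, EventDataXml)
    SELECT
        @EventData.value('(/EVENT_INSTANCE/PostTime)[1]',  'DATETIME'),
        @EventData.value('(/EVENT_INSTANCE/LoginName)[1]', 'SYSNAME'),
        HOST_NAME(),
        PROGRAM_NAME(),
        DB_NAME(),
        @EventData.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)'),
        @EventData.value('(/EVENT_INSTANCE/SchemaName)[1]', 'SYSNAME'),
        @EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'SYSNAME'),
        @EventData.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'),
        @EventData;
END;
```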

As explained above, there are DDL events at server scope and at database scope, and it’s not possible to create a single trigger that does the whole job. Actually, you will need to create the previous table and trigger multiple times:

  • A server DDL change log table and a server DDL trigger using the following CREATE statement:

  • A database DDL change log table and a database DDL trigger in every database you want to audit. The trigger will be created as follows:
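As a sketch (again, the names are mine, not the original article’s), the database-level counterpart differs only in its scope, its event group and its target table:

```sql
-- To be created in every database you want to audit
CREATE TRIGGER TRG_Audit_Database_DDL
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @EventData XML = EVENTDATA();

    INSERT INTO dbo.DatabaseChangeLog
        (EventTime, LoginName, HostName, ProgramName, DatabaseName,
         EventType, SchemaName, ObjectName, SqlCommand, EventDataXml)
    SELECT
        @EventData.value('(/EVENT_INSTANCE/PostTime)[1]',  'DATETIME'),
        @EventData.value('(/EVENT_INSTANCE/LoginName)[1]', 'SYSNAME'),
        HOST_NAME(),
        PROGRAM_NAME(),
        DB_NAME(),
        @EventData.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)'),
        @EventData.value('(/EVENT_INSTANCE/SchemaName)[1]', 'SYSNAME'),
        @EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'SYSNAME'),
        @EventData.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'),
        @EventData;
END;
```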

Note:

  1. You can create a central table to store DDL events data for all audited databases, but you will need to create a database trigger in every database you want to audit. Plus, you will eventually need to review login mappings and user permissions as well.
  2. If you want to implement this solution, don’t forget to adjust the table names so that they are, for instance:
    • ServerChangeLog for server-related DDL events
    • DatabaseChangeLog for database-related DDL events

You should also adjust the INSERT statements in the triggers so that they use those names.

We’ve seen the first method. Let’s review some of its advantages and disadvantages.

Advantages

Flexibility and “storage-friendliness”
  • We can store any information we consider relevant and ignore the rest.
  • As it’s a piece of code we write and manage, we can easily extend it later.
“User-friendliness”
  • Once in place, it’s easy to run a SELECT statement and filter to get just what we want.
Advanced usages
  • We can also build complex solutions like a “DDL firewall” which, based on session information and some parameter tables, would disallow even an authorized SQL login from completing its DDL statement (by raising an error and setting the session in an uncommittable state).

Disadvantages

Additional management tasks
  • If there is a bug in our code, we have to correct it and deploy it again on all servers (and databases).
  • Depending on environment requirements, you may need to clean up “old” records, or records you don’t want to keep forever, as part of regular maintenance.
  • I already had a case, during the installation or upgrade of an application’s database, where I needed to drop the trigger because it was causing trouble and I was unable to install the application.
Performance impact
  • Triggers are known to have a performance impact on queries because they execute synchronously, as part of the transaction. Thus, they can impact applications and eventually lead to outages.
Can’t prevent external alteration
  • Someone with sufficient permissions can either disable the trigger or delete rows from the audit log tables, so that we won’t even know something has happened.

We could modify this trigger-based approach to output to a file, but it would require either assigning advanced permissions or enabling xp_cmdshell, and it’s not necessarily the best approach…

Let’s now do the same using default SQL Trace.

Auditing DDL events using default trace

The SQL Server default trace is an old feature (available since at least SQL Server 2005) that provides the ability to track some key events, primarily related to configuration options. This feature is considered deprecated and should not be used in new development; we should use Extended Events instead.

The default trace tracks DDL changes, password changes, server configuration and database setting changes, file growths…

It’s enabled when the “default trace enabled” server configuration option is set to 1. So, when implementing auditing with the default trace, we must ensure that this setting is always turned on.
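Checking and, if needed, re-enabling the option can be done with sp_configure, as in this sketch (run with appropriate permissions):

```sql
-- Check the current state of the default trace
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'default trace enabled';

-- Re-enable it if it was turned off
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'default trace enabled', 1;
RECONFIGURE;
```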

Another point that has to be mentioned is that the output of this feature is limited to a set of 5 rollover files of 20 MB each. This means that, to keep a suitable history, we must take a copy of the trace files as regularly as possible.

This means that we do not need a lot of work to audit DDL events. We “just” need to:

  1. Enable default trace.
  2. Create a scheduled task that will ensure the functionality is still enabled, and re-enable it when it’s not the case.
  3. Build and regularly schedule a script that will copy audit files.

Until now, we’ve just talked about writing and storing default trace files. Let’s now talk about how to read from a trace file.

Reading from a trace file requires the use of the sys.fn_trace_gettable built-in function:
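The original call is not reproduced here; a minimal sketch that reads the current default trace file (its path is looked up in sys.traces rather than hard-coded) would be:

```sql
DECLARE @TracePath NVARCHAR(260);

-- Locate the current default trace file
SELECT @TracePath = [path]
FROM sys.traces
WHERE is_default = 1;

-- Read its content as a table
SELECT TOP (10) *
FROM sys.fn_trace_gettable(@TracePath, DEFAULT);
```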

For the default trace, the following query should fit the basic need of knowing that “something happened”:
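The article’s query is lost; a sketch that filters the default trace down to object creation, alteration and deletion events (the event names come from sys.trace_events) could look like this:

```sql
DECLARE @TracePath NVARCHAR(260);

SELECT @TracePath = [path]
FROM sys.traces
WHERE is_default = 1;

-- Keep only DDL-related events and the session details around them
SELECT t.StartTime,
       te.name AS EventName,
       t.LoginName,
       t.HostName,
       t.ApplicationName,
       t.DatabaseName,
       t.ObjectName
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
INNER JOIN sys.trace_events AS te
        ON te.trace_event_id = t.EventClass
WHERE te.name IN (N'Object:Created', N'Object:Altered', N'Object:Deleted')
ORDER BY t.StartTime;
```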

This will give us the following kind of results:

As we did for DDL Trigger auditing, let’s review some advantages and disadvantages of auditing using default trace.

Advantages

“User-friendliness”
  • Once in place, it’s easy to run a SELECT statement and filter to get just what we want.
Performance impact
  • This feature is built-in and asynchronous. That means it’s totally integrated with SQL Server and has (in theory) no impact on application performance.
Alteration detection
  • A user with sufficient privileges can delete a trace file, but this will leave gaps in the default trace file numbering. So, we won’t be able to know what happened, but we’ll know that somebody did something and tried to cover their tracks.
  • Changes to the default trace state are tracked, so we will get some information about who made such a change.

Disadvantages

Additional management tasks
  • The tasks listed above have to be performed (at installation), and regular checks that the audit is still running must be planned.
Flexibility
  • We can’t alter the columns tracked by the default trace.
  • It can’t be extended.
  • This feature is deprecated.
Readability
  • It’s not that easy to get to the information we want…

Auditing DDL events using SQL Audits

Reminder

SQL Server Audit fundamentals were introduced in my previous article. For readers who don’t have time to read that full article, let’s review a summary of its contents (it’s actually taken from that article).

SQL Server Audit takes advantage of the Extended Events feature and uses event groups as input. We also refer to these groups as audit action groups. Audit action groups relate to a “depth level” in SQL Server: they are either server-, database- or audit-related. Audit action groups are mapped to a server audit specification or a database audit specification, primarily based on our audit policy.

All these components together form what we call a server audit. The product of an audit must be stored somewhere. SQL Server Audit has some flexibility here and provides three different kinds of output, known as audit targets: a file on the server host, the Windows Application log, or the Windows Security log.

You will find below a diagram that summarizes the architecture of SQL Server Audits.

Back to the example

We will first create our audit object, which will be called Audit-Demo-DDL. Here is the T-SQL statement to do so. Alternatively, you can use SSMS.
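The original statement did not survive extraction; a sketch of the audit creation (the target file path and file options are assumptions) would be:

```sql
USE [master];
GO

-- Create the audit object with a file target (path is an example)
CREATE SERVER AUDIT [Audit-Demo-DDL]
TO FILE (
    FILEPATH = N'C:\SQLServerAudit\',
    MAXSIZE = 256 MB,
    MAX_ROLLOVER_FILES = 4
)
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE);
GO
```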

This object is created with an OFF status, so it should not appear in the list returned by the following query:
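A query against the sys.dm_server_audit_status dynamic management view, which only lists started audits, serves this purpose (a sketch):

```sql
-- Only audits in a started state show up here
SELECT audit_id, name, status_desc, audit_file_path
FROM sys.dm_server_audit_status;
```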

Now, let’s create the server audit specification using the following statement:
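The original specification is not shown in this page; here is a plausible sketch (the specification name is mine, and the exact action groups the article used cannot be recovered):

```sql
CREATE SERVER AUDIT SPECIFICATION [AuditSpec-Demo-DDL]
FOR SERVER AUDIT [Audit-Demo-DDL]
    ADD (SERVER_OBJECT_CHANGE_GROUP),
    ADD (DATABASE_CHANGE_GROUP),
    ADD (DATABASE_OBJECT_CHANGE_GROUP)
WITH (STATE = ON);
GO
```

Note that SCHEMA_OBJECT_CHANGE_GROUP is deliberately absent from this sketch, which is consistent with the behavior observed further down.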

Now, we can enable the server audit:
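Enabling the audit is a simple ALTER statement:

```sql
ALTER SERVER AUDIT [Audit-Demo-DDL]
WITH (STATE = ON);
GO
```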

Now, let’s try it out! You will find below a little script which creates a table, adds a primary key constraint, inserts some data and drops the table.
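A sketch of such a test script (the database and object names are made up for the example):

```sql
USE [AuditTestDb];  -- any test database
GO

CREATE TABLE dbo.AuditDemoTable (
    Id    INT         NOT NULL,
    Label VARCHAR(64) NULL
);

ALTER TABLE dbo.AuditDemoTable
ADD CONSTRAINT PK_AuditDemoTable PRIMARY KEY (Id);

INSERT INTO dbo.AuditDemoTable (Id, Label)
VALUES (1, 'demo');

DROP TABLE dbo.AuditDemoTable;
GO
```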

I said previously that sys.dm_server_audit_status was particularly useful. Here is the first usage of this great view: getting the location of the current server audit file. This is performed with the following query:
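A sketch of this query:

```sql
SELECT name, status_desc, audit_file_path
FROM sys.dm_server_audit_status
WHERE name = N'Audit-Demo-DDL';
```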

As I get back the audit_file_path, I am able to read this file and get the list of actions that have been audited so far, using the sys.fn_get_audit_file function.

We can, by the way, put the value of the audit_file_path column into a variable, so the query to read the current audit file looks like this:
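A sketch of that query (the column list is a selection from the columns returned by sys.fn_get_audit_file):

```sql
DECLARE @AuditFilePath NVARCHAR(260);

-- Grab the current audit file path
SELECT @AuditFilePath = audit_file_path
FROM sys.dm_server_audit_status
WHERE name = N'Audit-Demo-DDL';

-- Read the audited actions from the file
SELECT event_time,
       action_id,
       server_principal_name,
       database_name,
       [schema_name],
       [object_name],
       [statement]
FROM sys.fn_get_audit_file(@AuditFilePath, DEFAULT, DEFAULT)
ORDER BY event_time;
```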

And here is what I get back when I run this statement:

Apparently, the ALTER TABLE and DROP TABLE statements have not been tracked… We should maybe add the SCHEMA_OBJECT_CHANGE_GROUP action group, or create a database audit specification if that option is available…

Anyway, this shows that we must really take care with the action groups we add to a server audit, and also test that the audit does the job we think it should do.

Let’s clean up our stuff. You will find below the statements to drop the objects we created in this section.
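A sketch of the cleanup (an audit or a specification must be disabled before it can be dropped):

```sql
ALTER SERVER AUDIT SPECIFICATION [AuditSpec-Demo-DDL] WITH (STATE = OFF);
DROP SERVER AUDIT SPECIFICATION [AuditSpec-Demo-DDL];
GO

ALTER SERVER AUDIT [Audit-Demo-DDL] WITH (STATE = OFF);
DROP SERVER AUDIT [Audit-Demo-DDL];
GO
```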

As for the previous ways to implement a DDL audit, let’s review some advantages and disadvantages of using SQL Server Audits.

Advantages

“User-friendliness”
  • Once in place, it’s easy to run a SELECT statement and filter to get just what we want.
  • We can easily add new action groups to an existing server audit.
Performance impact
  • This feature is built-in and asynchronous. That means it’s totally integrated with SQL Server and has (in theory) no impact on application performance.
Alteration detection
  • The audit itself is audited, so there is a possibility to detect (not prevent) audit alteration.

Disadvantages

Additional management tasks
  • A backup of audit files should be in place if we use a file as the audit target.
  • A backup, or at least an extractor, of audit events written to the Application log should be implemented to keep them in a safe place.
  • We can also invent a centralization mechanism that would take the content of these files and integrate it into a central management database.
  • Before any change to server audit objects on a production server, we must ensure that this change does what we think it will do. A test script should be built and kept up to date.
Flexibility
  • We can’t alter the columns tracked by SQL Server Audit.
Complexity
  • As we saw when trying to audit all DDL events, we can easily forget one or several action groups. There are plenty of action groups, and it’s not easy to get the right ones on the first try…

Conclusion

In conclusion, we can say that SQL Server Audit is a great feature, delivered starting with Standard Edition, that is scalable enough to ensure professional, enterprise-level auditing of server activity. Its implementation is easy but needs additional tasks to be performed to get close to zero audit data loss.

While it’s not perfect, SQL Server Audit is, to me, the best feature available out of the box to implement security auditing in SQL Server and go a step forward towards “GDPR compliance”.

Previous article in this series: “SQL Server Audit feature – discovery and architecture”

See more

For fault tolerant auditing with centralized storage and reporting, consider ApexSQL Audit, an enterprise level SQL Server auditing and compliance tool.


About Jefferson Elias

Living in Belgium, I obtained a master's degree in Computer Science in 2011 at the University of Liege.

I'm one of the rare guys out there who started to work as a DBA immediately after graduation, and I have worked at the university hospital of Liege since 2011. Initially involved in Oracle database administration (which is still under my charge), I had the opportunity to learn and manage SQL Server instances in 2013. Since then, I've learned a lot about SQL Server administration and development.

I like the job of DBA because you need general knowledge in every field of IT. That's the reason why I won't stop learning (and sharing) the products of my learning.
