Kaloyan Kosev

How-to: SQL Server file-snapshot backups in Azure

September 20, 2016 by Kaloyan Kosev

After receiving new additions to SQL Server's backup and restore capabilities, like file and page restores, I thought that nothing would surprise me anymore, but here it is: SQL Server file-snapshot backups in Azure.

SQL Server 2016 introduces a new type of SQL Server backup for database files stored in Microsoft Azure blob storage.

This new type of backup is called File-Snapshot backup and it leverages the storage snapshot capabilities of Azure blob storage to take nearly instantaneous backups and perform incredibly fast restores.

In this article we will cover the following topics:

  • What file-snapshot backups are and how they work
  • How to configure them and use them in production
  • How to restore a database to a point in time using file-snapshot backups
  • How much faster they actually are than traditional streaming backups

First of all, note that file-snapshots leverage the storage snapshot capabilities within Microsoft Azure. The process is quite different from the standard backup and recovery process, so let's dive in and review the base objects of which a file-snapshot backup consists.

Base files are our standard database files – the data files (mdf and ndf) and the transaction log file (ldf). To take advantage of file-snapshot backups, it is mandatory to store them in Azure blob storage.

Snapshots retain the differential changes between the current state of the database base files and their state at the exact moment we initiated a file-snapshot backup.

The .bak container file stores pointers instead of data pages. The pointers reference the exact snapshots we would need in case of a database recovery.

The first file-snapshot backup performed must be a full one. Upon starting the backup, it creates an Azure snapshot of all the database base files – both data files and transaction logs – then establishes the transaction log backup chain and writes the locations of the file-snapshots into the *.bak container file as pointers.

After the initial full backup that establishes the transaction log chain, you can perform only transaction log backups, because each and every transaction log file-snapshot backup set contains file-snapshots of all the database files – both data and log – and can be used in a RESTORE operation for the whole database or just the log.

This means that a full database file-snapshot backup is required only once, and all other backups can (and should) be transaction log file-snapshot backups.

Of course there are some limitations when using file-snapshot backup:

  • Within the current version of Azure, the backup file itself cannot be stored on Premium Storage – Azure's high-performance storage tier backed by SSD drives
  • The interval between backups cannot be shorter than 10 minutes
  • The maximum number of snapshots (and therefore backups) is 100
  • Every RESTORE operation must use the WITH MOVE clause
  • Both the base blobs and the file-snapshots must use the same storage account

Let's set up the test environment we will be working with:

In Microsoft Azure, as a start, we have created a resource group “fsbdemorg” and, within it, a storage account “fsbdemostorage” through the Azure Portal.

Let's create a standard virtual machine using the wizard; a size of ‘Basic A1’ will be more than enough. (Note the data disk will not be used at the moment, except perhaps for comparing the speed between standard and file-snapshot backups, so make it a bit smaller than 1 TB.)

Once our SQL Server is live and running, we can create a CREDENTIAL for it to access the remote blob storage*:
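The original script is not shown; a minimal sketch, assuming a container named “data” in our demo storage account (the SECRET is a placeholder SAS token – see the SAS link in the notes below for generating a real one):

```sql
-- The credential name must match the container URL exactly.
-- 'data' is an assumed container name; the SECRET is a placeholder SAS token.
CREATE CREDENTIAL [https://fsbdemostorage.blob.core.windows.net/data]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2015-12-11&ss=b&srt=sco&sp=rwdl&sig=<your-sas-token-here>';
```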

We will need a database, so let us create one with its files stored in the blob**:
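Assuming the credential above and the same “data” container, creating the database directly in blob storage could look like this:

```sql
-- Both data and log files must live in Azure blob storage
-- for file-snapshot backups to work.
CREATE DATABASE Database01
ON (NAME = Database01,
    FILENAME = 'https://fsbdemostorage.blob.core.windows.net/data/Database01.mdf')
LOG ON (NAME = Database01_log,
    FILENAME = 'https://fsbdemostorage.blob.core.windows.net/data/Database01_log.ldf');
```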

A few test tables and a couple of million test rows later, we are ready to start with the tests.
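The original test tables are not shown; a hypothetical stand-in that doubles its row count on each pass might look like this:

```sql
-- Hypothetical test table; each repeated batch doubles the row count.
CREATE TABLE dbo.TestData (
    Id      INT IDENTITY(1,1) PRIMARY KEY,
    Payload CHAR(100) NOT NULL DEFAULT 'test row'
);
INSERT INTO dbo.TestData DEFAULT VALUES;
GO
INSERT INTO dbo.TestData (Payload)
SELECT Payload FROM dbo.TestData;
GO 21  -- roughly 2 million rows after 21 doublings
```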

To back up the database using file-snapshot backup we need to use BACKUP … TO URL together with the WITH FILE_SNAPSHOT clause as follows:
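Assuming a “backup” container in the same storage account, the initial full backup would be:

```sql
-- The FILE_SNAPSHOT option triggers an Azure storage snapshot
-- instead of a traditional streaming backup.
BACKUP DATABASE Database01
TO URL = 'https://fsbdemostorage.blob.core.windows.net/backup/Database01.bak'
WITH FILE_SNAPSHOT;
```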

To continue the backup chain we should go with LOG backups:
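The log backup uses the same syntax; only the backup type and the blob name change (the numbered file name is an assumption consistent with the Database08.BAK and Database09.BAK sets used later):

```sql
-- Each log backup needs its own blob name within the container.
BACKUP LOG Database01
TO URL = 'https://fsbdemostorage.blob.core.windows.net/backup/Database02.bak'
WITH FILE_SNAPSHOT;
```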

Note that the names of the backup files must differ, because they coexist in the same blob container:

The restore process is fairly different from that of a traditional streaming backup. File-snapshot backups contain a snapshot of all database files, and the restore process requires at most two adjacent file-snapshot backup sets. Traditional streaming backups would need the entire backup chain, which usually consists of a full, a differential, and numerous transaction log backup files.

To perform a restore to the time a backup was taken, only that specific backup set is required, along with the base blobs. We need to use the WITH MOVE clause even when recovering the original database.
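A sketch of such a restore, creating a copy of the database from a single file-snapshot backup set (blob and container names are assumptions consistent with the earlier examples):

```sql
-- WITH MOVE is mandatory; the files must be restored to new blob locations.
RESTORE DATABASE Database01_Recovery
FROM URL = 'https://fsbdemostorage.blob.core.windows.net/backup/Database02.bak'
WITH MOVE 'Database01'
         TO 'https://fsbdemostorage.blob.core.windows.net/data/Database01_Recovery.mdf',
     MOVE 'Database01_log'
         TO 'https://fsbdemostorage.blob.core.windows.net/data/Database01_Recovery_log.ldf',
     RECOVERY;
```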

Processed 0 pages for database ‘Database01_Recovery’, file ‘Database01’ on file 1.
Processed 0 pages for database ‘Database01_Recovery’, file ‘Database01_log’ on file 1.
RESTORE DATABASE successfully processed 0 pages in 0.439 seconds (0.000 MB/sec).

To perform a restore to a point in time between two backup sets, we need two adjacent backup sets: one from before and one from after the desired time. In our scenario, to perform a recovery at 19:15:00 on 17 September 2016, we would need the files Database08.BAK and Database09.BAK together with the base files.

We will restore the backup set Database08.BAK fully and use the STOPAT clause when restoring the set Database09.BAK:
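Sketched out, assuming the same container layout as in the earlier examples, the two-step restore might look like this:

```sql
-- Step 1: restore the earlier set without recovering,
-- so a further log backup can be applied.
RESTORE DATABASE Database01
FROM URL = 'https://fsbdemostorage.blob.core.windows.net/backup/Database08.bak'
WITH MOVE 'Database01'
         TO 'https://fsbdemostorage.blob.core.windows.net/data/Database01.mdf',
     MOVE 'Database01_log'
         TO 'https://fsbdemostorage.blob.core.windows.net/data/Database01_log.ldf',
     NORECOVERY, REPLACE;

-- Step 2: apply the next log backup, stopping at the desired point in time.
RESTORE LOG Database01
FROM URL = 'https://fsbdemostorage.blob.core.windows.net/backup/Database09.bak'
WITH MOVE 'Database01'
         TO 'https://fsbdemostorage.blob.core.windows.net/data/Database01.mdf',
     MOVE 'Database01_log'
         TO 'https://fsbdemostorage.blob.core.windows.net/data/Database01_log.ldf',
     STOPAT = '2016-09-17 19:15:00', RECOVERY;
```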

Processed 0 pages for database ‘Database01’, file ‘Database01’ on file 1.
Processed 0 pages for database ‘Database01’, file ‘Database01_log’ on file 1.
RESTORE DATABASE successfully processed 0 pages in 1.275 seconds (0.000 MB/sec).
Processed 0 pages for database ‘Database01’, file ‘Database01’ on file 1.
Processed 0 pages for database ‘Database01’, file ‘Database01_log’ on file 1.
RESTORE LOG successfully processed 0 pages in 0.148 seconds (0.000 MB/sec).

The restore completes almost instantly, so now we can start with the speed tests.

How much faster is the file-snapshot backup than a traditional streaming backup? Let us do a simple test with backup and restore operations. We will start with a table having 2,336,000 rows, then double it, and so on.

The file-snapshot backup of the database with 2,336,000 rows took 0.061 seconds.

The same operation but using traditional streaming backup took 1.888 seconds.

After performing similar backup and recovery operations with bigger tables we have the following results:

Based on the tests performed above, file-snapshot backup using the Azure snapshot service is ~64 times faster on average than traditional streaming backup, and file-snapshot restore is ~23 times faster on average.

Additional notes and links:

* Creating SQL Credentials with Shared access signature (SAS)
** SQL Server data files in Microsoft Azure


Kaloyan Kosev

Kaloyan Kosev is a SQL Server DBA who has worked for Hewlett-Packard for the past 5 years. His daily work routine is a mixture of troubleshooting bugs and issues, consulting and advising clients, and presenting to students.