Exploring backup read IO performance

January 24, 2010

I was recently exploring how to increase the backup read throughput on one of our SQL Servers. Below are some of the interesting facts I found.

I would say that one of the most important reminders that came from the exercise is: do not assume that two databases being backed up on the same server using an identical backup command means that the processes are identical under the hood.

  1. Backup read threads are spawned 1 per physical device used by the database. (This is documented in Optimising Backup & Restore Performance in SQL Server)
  2. Multiple database files on 1 disk will not increase throughput because 1 disk = 1 thread and the thread works through the database files 1 at a time.
  3. Backup read buffers are evenly distributed across the number of read threads.
  4. Backups are pure IO operations; they do not read pages from the buffer pool.
  5. When passing in @MaxTransferSize, it appears to be treated as a suggestion rather than a guarantee: SQL Server will assign the requested value if it can, otherwise it picks a lower one.

So, quite a few statements there… Where is the proof? Well, the best find had to be trace flag 3213, which outputs information about the backup decisions made. Below is an extract of this output which I will then talk through.
    2010-01-22 12:00:02.45 spid78      Backup/Restore buffer configuration parameters
    2010-01-22 12:00:02.45 spid78      Memory limit: 32765MB
    2010-01-22 12:00:02.45 spid78      Buffer count:               40
    2010-01-22 12:00:02.45 spid78      Max transfer size:          448 KB
    2010-01-22 12:00:02.45 spid78      Min MaxTransfer size:       64 KB
    2010-01-22 12:00:02.45 spid78      Total buffer space:         17 MB
    2010-01-22 12:00:02.45 spid78      Buffers per read stream:    10
    2010-01-22 12:00:02.45 spid78      Buffers per write stream:   8
    2010-01-22 12:00:02.45 spid78      Tabular data device count:  4
    2010-01-22 12:00:02.45 spid78      FileTree data device count: 0
    2010-01-22 12:00:02.45 spid78      Filesystem i/o alignment:   512
  • Max transfer size: I actually asked for 1 MB but only got 448 KB. Additionally, I noticed that when I kick off multiple backups (each requesting 1 MB) the granted transfer size tends to decrease the more backups you run. So, no two backups are necessarily the same.
  • Buffer distribution: I asked for 40 buffers. The database being backed up has data devices on 4 physical disks, so it gets 4 read threads, and "Buffers per read stream" is 10, which is 40 buffers / 4 threads. Note that the total buffer space also ties up: 40 buffers × 448 KB ≈ 17 MB.
  • Read threads: the database being backed up had data files on 4 physical disks. This is exposed as the "Tabular data device count" and confirms the statement in point 1 that you get 1 read thread per physical device, as documented by MS.
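
For anyone wanting to reproduce this output, here is a sketch using the native BACKUP options (the database name and backup path are placeholders, and trace flag 3213 applies server-wide, so turn it off afterwards):

-- Write the backup/restore buffer configuration decisions to the error log
DBCC TRACEON (3213, -1);

-- Request 40 buffers and a 1 MB transfer size; as noted above, SQL Server may grant less
BACKUP DATABASE YourDatabase
TO DISK = 'X:\Backups\YourDatabase.bak'
WITH BUFFERCOUNT = 40, MAXTRANSFERSIZE = 1048576;

DBCC TRACEOFF (3213, -1);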

So, what about statements 2 & 4? Well, I monitored the reads to the individual files using sys.dm_io_virtual_file_stats and took a number of snapshots whilst performing a backup. There are plenty of scripts you can download to take the snapshots yourself, such as this one; a minimal sketch of the approach also follows the findings below. Once the backup completed I looked at the time slices and you can see the following.

  • Total MB read during the backup = total data held in the file. From this I drew the conclusion that it is not reading any of the data held in the buffer pool, which makes a lot of sense as the backup includes the transaction log.
  • Querying the statistics at different time intervals, you see the first data file’s MBs growing, and the second data file’s MBs don’t start growing until the first is complete; hence it is going through 1 file at a time. However, if you have multiple files on multiple disks you do see 1 file on each disk being read from. I’ve not mentioned increasing the number of backup devices or changing block sizes as my case specifically did not call for it, but you can read about that here.
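
Here is a minimal sketch of the kind of snapshot approach I mean (the temp table, loop and 5-second interval are my own illustration, not from any particular downloadable script):

-- Capture timestamped per-file IO statistics while the backup runs
CREATE TABLE #io_snapshots
(
    capture_time      DATETIME,
    file_id           INT,
    num_of_bytes_read BIGINT
);

DECLARE @i INT;
SET @i = 0;
WHILE @i < 60 -- 60 snapshots, one every 5 seconds
BEGIN
    INSERT INTO #io_snapshots
    SELECT GETDATE(), file_id, num_of_bytes_read
    FROM sys.dm_io_virtual_file_stats(DB_ID('YourDatabase'), NULL);

    WAITFOR DELAY '00:00:05';
    SET @i = @i + 1;
END

-- Review the time slices: the per-file byte counts show which file was being read when
SELECT capture_time, file_id, num_of_bytes_read / 1048576 AS mb_read
FROM #io_snapshots
ORDER BY file_id, capture_time;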

The last thing I want to say, since I have touched on single-threaded backup reads, is that I’m keen not to spawn any new urban legends. Whilst this is true for backups in the context of one read thread per physical disk device, that’s it! It’s worth reading this article about urban legends around SQL Server threads.

Categories: Backup, Performance, SQL Server

Transaction log backup deadlock

February 23, 2008

Recently we started to see deadlock errors when backing up our transaction logs. The "important" part of the error is shown below.

Could not insert a backup or restore history/detail record in the msdb database. This may indicate a problem with the msdb database. The backup/restore operation was still successful.

What this meant was that the transaction log backup was occurring, but the entry in msdb was not being made because it was being chosen as the deadlock victim. We investigated the cause of the problem because we had processes that used this information to copy transaction logs to other servers, and we needed it to be complete.

We used trace flag 1222 to output the deadlock information to the error log and found the culprit to be a Microsoft stored procedure called "sp_delete_backuphistory", which is called by SQL 2005 maintenance plans when you use the "History cleanup task" and tick "Backup and Restore History". Having a look at the stored procedure, it was obvious why it was deadlocking, so we decided to log our findings with Microsoft. Microsoft have confirmed the bug and stated it will be fixed in SQL 2008, but they will not be issuing a KB in the immediate future, which is one of the reasons I decided to blog about it.
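
For reference, capturing the deadlock graph this way is a one-liner (trace flag 1222 applies server-wide, so we disabled it once we had what we needed):

-- Write deadlock graph information to the SQL Server error log
DBCC TRACEON (1222, -1);

-- ...wait for the deadlock to reoccur, then turn it off
DBCC TRACEOFF (1222, -1);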

I have had a look at the latest 2008 CTP and can confirm that Microsoft has updated the stored procedure to avoid the deadlocking, and I noticed they also added a non-clustered index on backup_finish_date in the backupset table (finally). I would also like to point out that the changes made to the stored procedure could easily be ported back to SQL 2005, so I’m a little surprised they have not been.
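
If you wanted to port the index back to SQL 2005 yourself, it would be along these lines (my own sketch; the name and exact definition Microsoft shipped in the 2008 CTP may differ):

-- Supports the date-based deletes performed by sp_delete_backuphistory
CREATE NONCLUSTERED INDEX IX_backupset_backup_finish_date
ON msdb.dbo.backupset (backup_finish_date);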

Workaround

To avoid this specific issue, we took the approach of identifying a generic window in which transaction log backups would not be running on 95% of our server estate and changed the "History cleanup task" to run at that time. For the remaining 5% we worked out per-server windows, and now we do not see the issue on any of our servers.

A brief history of msdb backup history tables

Now, for those of you who want to know more and are wondering why this problem seems only now to be appearing in SQL 2005, here is a brief history lesson.

Prior to SQL 2005, maintenance plans never gave you the ability to delete old backup history, but the procedure "sp_delete_backuphistory" did exist. So, many DBAs would find that their msdb databases were growing rather large, and if they used Enterprise Manager to do a restore it would hang for ages as it read the large backup history tables. People would then find out about "sp_delete_backuphistory" and schedule it as a job, but quite often the first run would take ages (sometimes days) due to poor coding and the volume of data, so people implemented their own more efficient code (Google sp_delete_backuphistory and you will see what I am talking about; for example, see here).
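
For completeness, the procedure takes a single cutoff date, so scheduled regularly with a rolling window the volume per run stays small (the 90-day retention below is illustrative):

-- Delete backup/restore history older than 90 days
DECLARE @oldest_date DATETIME;
SET @oldest_date = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest_date;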

So, what does this lead to? Well, not so many people using the MS stored procedure prior to 2005! But then SQL 2005 rocks up and we have the ability to call the procedure via the GUI! Well, let’s tick that puppy!!! 😀 We do need to keep that msdb trim, after all, and that is how we get to where we are now!

Categories: Backup, Deadlock, Errors, SQL Server

The Job whose owner kept coming back……

December 30, 2007

I thought I would share this little quirk about the SQL Agent jobs for maintenance plans.

One of our members of staff had left and we had the usual case of a few jobs failing with:

"Unable to determine if the owner (DOMAIN\xxx) of job <JOB_Name> has server access (reason: error code 0x534. [SQLSTATE 42000] (Error 15404))."

So, we went around and updated the job owners to one of our appropriate generic admin accounts. A few days later some of the jobs started to fail again with the same error. Since we knew we had performed the update previously, it was time to investigate how the jobs had been set back to the old user account.

It was quickly determined that the only jobs that had reverted to the old owner were the jobs created by maintenance plans, so we focused our attention there. It turns out that when you save a change to a maintenance plan, the job owners are reset to the owner of the maintenance plan, and the owner of the maintenance plan is the account that was used to connect to the server in SSMS when creating the plan.

With this determined, a slight variation of our fix was deployed. First we changed the job owners, and then we updated the owner of the maintenance plan using the script at the end of the post. The script is in two parts: the first part shows you who owns what, and the second updates the owner to an account you specify.
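
Changing the job owners themselves is the standard msdb call (the job and account names are placeholders):

-- Reassign a maintenance plan job to a generic admin account
EXEC msdb.dbo.sp_update_job
    @job_name = N'YourMaintenancePlanJob',
    @owner_login_name = N'YOUR_DOMAIN\APPROPRIATE_ACCOUNT';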

Agent jobs being created with a user account have always been a procedural problem, and this is simply another variation that we need to take into consideration and put a process in place to deal with. The most likely processes are either to only create maintenance plans when logged on with a generic account, or to run the script after creating the maintenance plan.

I am, however, curious why Microsoft have implemented updating the jobs in this manner; it has the potential to cause significant problems in environments that are not monitoring their jobs as closely as is required, with maintenance tasks ending up not running for some time. How to get around this? Well, given the nature of maintenance plans and the fact you must be a sysadmin to see or create them, surely it makes sense to have the owner be the SQL Server service account or a user created by SQL Server for maintenance plans? Someone has posted this feature request to Connect here and I’ve added my two pennies’ worth, so if you feel it should change then have your say too!

--See who owns which packages
SELECT name, description, SUSER_SNAME(ownersid) AS owner_name
FROM msdb.dbo.sysdtspackages90;

--Now we update the owner to an appropriate domain account. Either the service account or a generic admin account is good.
UPDATE msdb.dbo.sysdtspackages90
SET ownersid = SUSER_SID('YOUR_DOMAIN\APPROPRIATE_ACCOUNT')
WHERE ownersid = SUSER_SID('YOUR_DOMAIN\OLD_ACCOUNT');