Archive for the ‘SQL Server’ Category

When is SELECT INTO not minimally logged?

June 20, 2015 3 comments

If you had asked me that question a month ago I would have said "Never, it is always minimally logged and this behaviour is documented in BOL". But now I know better…

Recently I have been working on an application migration from SQL 2008 R2 to SQL 2014, and we found a huge drop in performance for some SELECT INTO statements. I traced the issue down to the fact that the operation was being fully logged, and we then engaged Microsoft to work out why. Read more…
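The detail is behind the link, but if you want to check the behaviour on your own system, one rough way to see how much a SELECT INTO is logging is to look at the transaction log records it generates. A minimal sketch (the table names are made up, and sys.fn_dblog is undocumented, so only use it on a test database):

CHECKPOINT  --in SIMPLE recovery this clears out most of the existing log records

SELECT * INTO dbo.target_copy FROM dbo.source_table

SELECT COUNT(*) AS log_records,
       SUM(CONVERT(BIGINT, [Log Record Length])) AS log_bytes
FROM sys.fn_dblog(NULL, NULL)

A minimally logged SELECT INTO generates a comparatively tiny amount of log for the volume of data copied; a fully logged one generates roughly as much log as data.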

Targeting CHECKDB SQL Agent jobs for resource governing

March 6, 2014 Leave a comment

Some time ago I was looking into the performance of DBCC CHECKDB on one of our servers and came to the conclusion that the best degree of parallelism (DOP) for CHECKDB was not the same as our server-level setting. Since CHECKDB does not accept a DOP hint, the only option for limiting DOP when the server-level setting is not appropriate is to use Resource Governor.

In our environment CHECKDB is scheduled using SQL Agent. I could have chosen to simply target all SQL Agent jobs and have them all assigned to a single pool, but this was not appropriate for all of my SQL Agent jobs, so I needed a solution that targets individual jobs. I thought this would be a relatively straightforward task, but there were some challenges, so I wanted to share them and my solution.
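The solution itself is behind the link, but as a flavour of the approach, a classifier function can route an individual SQL Agent job to its own workload group by matching the job_id that SQL Agent embeds in APP_NAME(). A minimal sketch, in which the pool, group, MAX_DOP value and job_id are all made up for illustration:

USE master
GO
CREATE RESOURCE POOL CheckDbPool
GO
CREATE WORKLOAD GROUP CheckDbGroup WITH (MAX_DOP = 4) USING CheckDbPool
GO
CREATE FUNCTION dbo.CheckDbClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    --SQL Agent job steps connect with an application name of the form
    --'SQLAgent - TSQL JobStep (Job 0x<job_id> : Step n)', so a single job can be
    --matched on the job_id portion. Note the hex string is the binary form of
    --msdb.dbo.sysjobs.job_id, not its textual GUID form.
    IF APP_NAME() LIKE N'SQLAgent - TSQL JobStep (Job 0xB0D1A54F%'  --hypothetical job_id prefix
        RETURN N'CheckDbGroup'
    RETURN N'default'
END
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.CheckDbClassifier)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO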

Read more…

Windows 2008 file transfer performance improvements

October 9, 2012 Leave a comment

I have been reading about Server Message Block (SMB) 3 and thought to myself that it would be good to remind people about SMB 2, as many systems have not yet been upgraded to Windows 2008 to take advantage of it.

For more background on SMB, please see this article.

When Microsoft introduced SMB 2 they pretty much started again. During one of my recent migrations I found the file copy improvements from Windows 2008 R2 to be jaw-dropping; here are the highlights. Read more…

SQL Server 2008 uses half the CPUs

July 6, 2012 Leave a comment

I recently got my hands on a couple of 4-socket servers with Intel E7-4870s (10 cores per CPU), and with hyper-threading enabled that gave me 80 logical CPUs.

The server has Windows 2008 R2 SP1 along with SQL 2008 (currently we cannot deploy SQL 2008 R2 for the application being hosted).

When SQL Server started I noticed that only 2 NUMA nodes and 40 logical cores were configured, where there should have been 4 NUMA nodes and 80 logical cores.
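As an aside, if you want to check the same thing on your own server, a quick way to see how many schedulers SQL Server has brought online per NUMA node is to query sys.dm_os_schedulers; a minimal sketch:

SELECT parent_node_id AS numa_node,
       COUNT(*) AS online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
GROUP BY parent_node_id
ORDER BY parent_node_id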


Read more…

Categories: Configuration, Numa, SQL Server

SQL Agent and SSAS Commands

May 30, 2011 Leave a comment

SQL Agent can be used to execute SSAS commands, such as processing a dimension, and I can see that it is useful when you want to quickly schedule an ad hoc processing task, but there is a major drawback you need to be aware of.

You are not able to specify the database to connect to, and whilst you may think that does not matter because the database is specified in the connection, it really does matter. Below are extracts of a processing task from Profiler which was executed using a SQL Agent task.

[Profiler extract of the SSAS processing command]

So, what is happening here? Read more…

Parallelism, CPU Time & DMVs

May 5, 2011 1 comment

Whilst reviewing the CPU statistics of a system that I knew was CPU bound, I found the numbers were not adding up and I was not seeing the code I expected as a top CPU consumer, so I decided to go digging.

I quickly identified that if a query has gone parallel it:

  • Only shows as one thread in sys.dm_exec_requests, because sys.dm_exec_requests does not show blocked tasks and parallel threads appear self-blocking. If you want to see all active threads, including blocked ones, you should use sys.dm_os_waiting_tasks, but there is no CPU time there…
  • Any CPU time shown is only relevant to the coordinator thread, not the sum of all the related parallel threads (a query sketch illustrating this follows the list).
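To illustrate the first point, you can compare the single row that sys.dm_exec_requests gives you with the number of tasks actually running for the same request; a minimal sketch:

SELECT r.session_id,
       r.cpu_time AS coordinator_cpu_ms,  --only the coordinator thread's CPU
       COUNT(*)   AS active_tasks         --all parallel worker tasks for the request
FROM sys.dm_exec_requests AS r
JOIN sys.dm_os_tasks      AS t
  ON t.session_id = r.session_id
 AND t.request_id = r.request_id
WHERE r.session_id <> @@SPID
GROUP BY r.session_id, r.cpu_time
ORDER BY active_tasks DESC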

I also used my favourite tool, sp_whoisactive, written by Adam Machanic, but it did not help me either, so I e-mailed Adam and had an enlightening mail exchange. The bottom line was that it is not possible to get an accurate value for CPU if a query has gone parallel! Below is an extract from the mail exchange with Adam, reproduced with his permission.

Correct. It is not possible to get an accurate value for CPU if a query has gone parallel. The best you can get is the number of context switches (which Who is Active provides in @get_task_info = 2 mode). This is not the same thing, but it is correlated: a context switch means that the process was using CPU time, and was switched off of the CPU. What you don’t know is whether it used its entire quantum each time, or only 1/2 of the quantum, or whatever. So it’s not exactly a simple translation. But it’s a lot better than nothing.

Adam went on to talk about a potential method of exposing a more accurate CPU value through Who is Active's delta mode, and shortly after he delivered!

Adam announced the accurate CPU time deltas in this post. To get the CPU deltas you need to be running version 11, and the parameters you need are documented here.
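For reference, a call along these lines (parameter values picked arbitrarily) is enough to get the task-level detail and a CPU delta over a sample window:

EXEC dbo.sp_WhoIsActive
    @get_task_info  = 2,  --task-level detail, including context switches
    @delta_interval = 5   --sample twice, 5 seconds apart, and report the deltas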

So, to summarise:

  • It is impossible to get a runtime cumulative value of CPU for a spid that has gone parallel, and it is vital you remember this when you're looking at your DMVs, otherwise you could be barking up the wrong tree.
  • sp_whoisactive can give you a runtime delta of CPU time for a parallel query, which will enable you to spot CPU-sapping parallel queries.

I hope you find this information useful, and I would also like to say a big thank you to Adam Machanic.

Categories: DMV, Performance, SQL Server

Server Side Aliases

November 16, 2010 Leave a comment

Over the years I have come across a few situations where server-side connections to SQL Server fail when you use a DNS alias that points back to the server you are initiating the connection from, yet you can connect remotely.

It's an annoying problem with a very unhelpful error message that changed between versions of SQL Server. In SQL 2000 you are presented with:

"Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.", and in SQL 2005 and SQL 2008 it is "Login failed. The login is from an untrusted domain and cannot be used with Windows authentication".

You will also see event ID 537 in the security logs.

One of the most common reasons a system is set up with an alias pointing back at itself is that a consolidation has taken place and you don't want to change the connection strings. However, some people simply got burnt when Microsoft first released the security patch which introduced this change, and I still find people being burnt today.

Cause

NTLM reflection protection was introduced as part of security fix MS08-068. This causes a local authentication failure when using a DNS alias, which bubbles up and becomes the error described above.

The relevant MS articles are MS08-068 and http://support.microsoft.com/kb/926642; the cause extract is:

This problem occurs because of the way that NT LAN Manager (NTLM) treats different naming conventions as remote entities instead of as local entities. A local authentication failure might occur when the client calculates and caches the correct response to the NTLM challenge that is sent by the server in local "lsass" memory before the response is sent back to the server. When the server code for NTLM finds the received response in the local "lsass" cache, the code does not honour the authentication request and treats it as a replay attack. This behaviour leads to a local authentication failure.

Solution

You either need to use the local name rather than the DNS alias, or follow the steps described in the resolution sections of those articles to disable the protection entirely or for a specific alias.

Exploring backup read IO performance

January 24, 2010 Leave a comment

I was recently exploring how to increase the backup read throughput on one of our SQL Servers. Below are some interesting facts I found.

I would say that one of the most important reminders that came out of the exercise is this: do not assume that two databases being backed up on the same server using an identical backup command means that the processes are identical under the hood.

  1. Backup read threads are spawned 1 per physical device used by the database. (This is documented in Optimising Backup & Restore Performance in SQL Server)
  2. Multiple database files on 1 disk will not increase throughput because 1 disk = 1 thread and the thread works through the database files 1 at a time.
  3. Backup read buffers are evenly distributed across the number of read threads.
  4. Backups are pure IO operations, they do not read pages from the buffer pool.
  5. When passing in @MaxTransferSize, it appears to be treated as a suggestion rather than a guarantee: SQL Server will assign the requested value if it can, otherwise it picks a lower value.
So, quite a few statements there… Where is the proof? Well, the best find had to be trace flag 3213, which outputs information about the backup decisions made. Below is an extract of this output, which I will then talk through (a sketch of the command used to produce it follows the walkthrough).
2010-01-22 12:00:02.45 spid78      Backup/Restore buffer configuration parameters
2010-01-22 12:00:02.45 spid78      Memory limit: 32765MB
2010-01-22 12:00:02.45 spid78      Buffer count:               40
2010-01-22 12:00:02.45 spid78      Max transfer size:          448 KB
2010-01-22 12:00:02.45 spid78      Min MaxTransfer size:       64 KB
2010-01-22 12:00:02.45 spid78      Total buffer space:         17 MB
2010-01-22 12:00:02.45 spid78      Buffers per read stream:    10
2010-01-22 12:00:02.45 spid78      Buffers per write stream:   8
2010-01-22 12:00:02.45 spid78      Tabular data device count:  4
2010-01-22 12:00:02.45 spid78      FileTree data device count: 0
2010-01-22 12:00:02.45 spid78      Filesystem i/o alignment:   512
  • Max transfer size: I actually asked for 1 MB but only got 448 KB. Additionally, I noticed that when I kick off multiple backups (all requesting 1 MB) the transfer size tends to decrease the more backups there are, so no two backups are necessarily the same.
  • Buffer distribution: I asked for 40 buffers. The database being backed up has data devices on 4 physical disks, so it gets 4 read threads, and "Buffers per read stream" is 10 (40 buffers / 4 threads).
  • Read threads: the database backed up had data files on 4 physical disks. This is exposed as the "Tabular data device count" and confirms the statement in point 1 that you get 1 read thread per physical device, as documented by MS.
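For anyone who wants to reproduce this output on a test server, the command used was along these lines (database name, path and sizes are illustrative; 3213 is an undocumented trace flag, and 3605 routes its output to the error log):

DBCC TRACEON (3605, 3213)

BACKUP DATABASE MyDatabase
TO DISK = N'R:\Backups\MyDatabase.bak'
WITH BUFFERCOUNT = 40,
     MAXTRANSFERSIZE = 1048576,  --ask for 1 MB; SQL Server may settle on less
     STATS = 10

DBCC TRACEOFF (3605, 3213)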

So, what about statements 2 and 4? Well, I monitored the reads to the individual files using sys.dm_io_virtual_file_stats and took a number of snapshots whilst performing a backup. There are plenty of scripts you can download to take the snapshots yourself, such as this one (a minimal sketch also follows the list below). Once the backup completed I looked at the time slices and you can see the following.

  • Total MB read during the backup = total data held in the file. From this I drew the conclusion that it is not reading any of the data held in the buffer pool, which makes a lot of sense as the backup includes the transaction log.
  • Querying the statistics at different time intervals, you see the first data file's MB figure growing, and the second data file's figure does not start growing until the first is complete, hence it is going through one file at a time. However, if you have multiple files on multiple disks you do see one file on each disk being read from. I've not mentioned increasing the number of backup devices or changing block sizes, as my case specifically did not call for it, but you can read about that here.
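The per-file snapshots mentioned above can be as simple as the following, captured every few seconds and diffed afterwards (the database name is illustrative):

SELECT GETDATE() AS sample_time,
       DB_NAME(database_id) AS database_name,
       file_id,
       num_of_reads,
       num_of_bytes_read / 1048576 AS mb_read
FROM sys.dm_io_virtual_file_stats(DB_ID(N'MyDatabase'), NULL)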

The last thing I want to say, since I have touched on single-threaded backup reads, is that I'm keen not to spawn any new urban legends. Whilst this is true for backups in the context of one read thread per physical disk device, that's it! It's worth reading this article about urban legends around SQL Server threads.

Categories: Backup, Performance, SQL Server

The evils of implicit conversions

January 18, 2009 Leave a comment

I wanted to put up a brief post showing the impact of an implicit conversion on the performance of a query (or not…). In the example I will show an implicit conversion negatively impacting query performance and an implicit conversion that does not impact performance.

So, we need to set up the test environment using the code below.


SET ANSI_WARNINGS OFF
--********      Create Test Data           ******************
CREATE TABLE #data_test (ukey INT IDENTITY(1,1) PRIMARY KEY, first VARCHAR(200), second VARCHAR(200))

DECLARE @first INT, @second INT

SELECT @first = 1

WHILE @first < 250000
BEGIN
    INSERT #data_test
    SELECT REPLICATE(@first, @first), REPLICATE(@first, @first)

    SELECT @first = @first + 1
END

CREATE NONCLUSTERED INDEX stuf_1 ON #data_test (first)
--********      End of Test Data            ******************

 

Now, with the test data in place we can run the following 2 queries and observe the differences.

/* This uses a variable declared as an NVARCHAR */
EXEC sp_executesql N'SELECT * FROM #data_test WHERE first = (@p0) ', N'@p0 nvarchar(200)',@p0 = N'1'
--Scan count 1, logical reads 3093, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

/* This uses a variable declared as a VARCHAR */
EXEC sp_executesql N'SELECT * FROM #data_test WHERE first = (@p0) ', N'@p0 varchar(200)',@p0 = '1'
--Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Below each statement is the IO incurred, and the difference on this tiny little table is more than 3,000 logical reads just because we used Unicode (nvarchar) instead of non-Unicode (varchar); I'm sure you can imagine that on a larger table this becomes a significant overhead. So, why has this happened? Let's take a look at the plans.

Read more…

Categories: Performance, SQL Server

The overhead of a non-unique clustered index

October 20, 2008 Leave a comment

So, we all know that if we create a clustered index that is not unique we will incur a 4-byte overhead, right? Well, not always, because as usual, it depends…

When you create a non-unique clustered index, SQL Server must maintain uniqueness, so it adds a hidden 4-byte column (the uniqueifier) which is populated for each non-unique row (not every row). What many people may not realise is that this is actually a variable-length column, so if your table has no other variable-length columns you also incur another 4 bytes to maintain the variable-length offset data, giving you a total of 8 bytes per row instead of 4.

A few bytes may not sound much, but when dealing with multi-billion-row tables it soon adds up, so it's important to know how the space consumption breaks down.
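If you want to see the overhead for yourself, one rough way (a sketch; the table and column names are made up) is to load duplicate keys into a table with a non-unique clustered index and no other variable-length columns, then look at the leaf-level record size:

CREATE TABLE dbo.uniqueifier_test (id INT NOT NULL, filler CHAR(100) NOT NULL)
CREATE CLUSTERED INDEX cix_uniqueifier_test ON dbo.uniqueifier_test (id)  --non-unique

INSERT dbo.uniqueifier_test (id, filler)
SELECT 1, 'x' FROM sys.all_objects  --every row gets the same key value

SELECT avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.uniqueifier_test'), NULL, NULL, N'DETAILED')
WHERE index_level = 0  --leaf level; expect roughly 8 bytes more per row than a unique-key equivalent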

Below is an extract from Books Online 2005 "Estimating the size of a clustered index"

"The uniqueifier is a nullable, variable-length column. It will be nonnull and 4 bytes in size in rows that have nonunique key values. This value is part of the index key and is required to make sure that every row has a unique key value."

It's great to see that this hidden column is now documented, but a bit of additional clarity around the storage overhead of its variable-length property would be nice.

Finally, I thought I would visualise this hidden data overhead for you with a screenshot from Danny's awesome Internals Viewer.


Update: Thanks to Christian Bolton for the clarification that the overhead is incurred for each non-unique row, which I have now reflected in the post.

Changing the Data Files Location after Installation

August 10, 2008 Leave a comment

The other day I wanted to change the "Data Files" location for a 2005 database engine installation and a 2005 Analysis Services installation, which you can specify under the advanced options during installation. I quickly found out that there appears to be no documented way to do this other than to uninstall SQL Server and install it again, specifying a new location for the data files. It's also not as simple as moving your system databases, as "Data Files" covers things like the server error logs, SQL Agent logs, the replication default directory and so on. As the uninstall route was not one I was prepared to go down, I sat down and worked out how to do it, and below are the results.

Read more…

MS SQL Server Book of Wisdom

May 21, 2008 Leave a comment

I was chatting with a friend today and he asked, "Have you ever seen those little books of wisdom?". We quickly decided that we could write an MS SQL Book of Wisdom; below is a summary of what ensued, for your amusement. Now, some of the statements are actually based on bad real-life advice and many we just made up. Can you tell which is which? Also, please comment if you have any good entries for the Book…

Categories: SQL Server, Uncategorized

Transaction log backup deadlock

February 23, 2008 1 comment

Recently we started to see deadlock errors when backing up our transaction logs. The "important" part of the error is shown below.

Could not insert a backup or restore history/detail record in the msdb database. This may indicate a problem with the msdb database. The backup/restore operation was still successful.

What this meant was that the transaction log backup was occurring, but the entry in msdb was not being made because it was being chosen as the deadlock victim. We investigated the cause of the problem because we had some processes that used this information to copy transaction logs to other servers, and we needed it to be complete.

We used trace flag 1222 to output the deadlock information to the error log and found the culprit to be a Microsoft stored procedure called "sp_delete_backuphistory", which is called by SQL 2005 maintenance plans when you use the "History cleanup task" and tick "Backup and Restore History". Having a look at the stored procedure, it was obvious why it was deadlocking, so we decided to log our findings with Microsoft. Microsoft have confirmed the bug and stated that it will be fixed in SQL 2008, but they will not be issuing a KB article in the immediate future, which is one of the reasons I decided to blog about it.
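For reference, the trace flag can be enabled server-wide (until the next restart) and disabled again like this:

DBCC TRACEON (1222, -1)   --write deadlock graphs to the error log, globally
DBCC TRACEOFF (1222, -1)  --turn it off once the deadlock has been captured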

I have had a look at the latest SQL 2008 CTP and can confirm that Microsoft has updated the stored procedure to avoid the deadlocking, and I noticed they also added a non-clustered index on backup_finish_date in the backupset table (finally). I would also like to point out that the changes made to the stored procedure could easily be ported back to SQL 2005, so I'm a little surprised they have not been.

Workaround

To avoid this specific issue, we took the approach of identifying a generic window in which transaction log backups would not be running on 95% of our server estate and changed the "History cleanup task" to run at this time. For the remaining 5% we worked out per-server windows, and now we do not see the issue on any of our servers.

A brief history of msdb backup history tables

Now, for those of you who want to know more about this problem and are wondering why it seems to only just be appearing in SQL 2005, here is a brief history lesson.

Prior to SQL 2005, maintenance plans never gave you the ability to delete old backup history, but the procedure "sp_delete_backuphistory" did exist. Many DBAs would find that their msdb databases were growing rather large, and if they used Enterprise Manager to do a restore it would hang for ages as it read the large backup history tables. People would then find out about "sp_delete_backuphistory" and schedule it as a job, but quite often the first time it ran it would take ages (sometimes days) due to poor coding and the volume of data, so people implemented their own more efficient code instead (Google sp_delete_backuphistory and you will see what I am talking about; for example, see here).

So, what does this lead to? Well, not so many people using the MS stored procedure prior to 2005! But then SQL 2005 rocks up and we have the ability to call the procedure via the GUI! Well, let's tick that puppy!!! 😀 We do need to keep that msdb trim after all, and that is how we get to where we are now!

Categories: Backup, Deadlock, Errors, SQL Server

SSMS Log file viewer and Deadlock Graphs

January 23, 2008 1 comment

Firstly, I must say a big thank you to Microsoft for the new deadlock trace flag 1222. Compared to the trace flag 1204 and 1205 output that you had to use in SQL 2000, it's a walk in the park to interpret.

Anyway, back to the post at hand! This is a quick FYI, as I'm not going to go through how to interpret a deadlock graph because Bart Duncan does a fantastic job of it here.

When you enable 1222, it will output the deadlock information to the error log. If you're using the log file viewer and steam on in and do your analysis, you will probably find you get your deadlock victim the wrong way round, like I did in the first cut of my analysis. Fortunately I realised my mistake, which made me look at the output again, and I realised that the output in the log is upside down! This is because the log file viewer sorts the log so that the most recent entry is first and, as such, reverses the deadlock output. I'm not aware of any way to configure the sort order of the log file viewer, and exporting the log exports it in the same order it's displayed…

So, when looking at your deadlock information you have two options.

  • Find the occurrence of the words "deadlock-list" and read upwards.
  • Grab a copy of the error log from the server's log directory and open it in Notepad.

Categories: Deadlock, Errors, SQL Server

The Job whose owner kept coming back…

December 30, 2007 Leave a comment

I thought I would share this little quirk about the SQL Agent jobs for maintenance plans.

One of our members of staff had left and we had the usual case of a few jobs failing with:

"Unable to determine if the owner (DOMAIN\xxx) of job <JOB_Name> has server access (reason: error code 0x534. [SQLSTATE 42000] (Error 15404))."

So, we went around and updated the job owners to one of our appropriate generic admin accounts. A few days later some of the jobs started to fail again with the same error. Since we knew we had performed the update previously, it was time to investigate how the jobs had been set back to the old user account.

It was quickly determined that the only jobs that had reverted to the old owner were the jobs created by maintenance plans, so we focused our attention there. It turns out that when you save a change to a maintenance plan, the job owners are reset to the owner of the maintenance plan, and the owner of the maintenance plan is the account that was used to connect to the server in SSMS when creating the plan.

With this determined, a slight variation of our fix was deployed: first we changed the job owners, and then we updated the owner of the maintenance plan using the script at the end of the post. The script is in two parts; the first part shows you who owns what, and the second updates the owner to an account you specify.

Agent jobs being created with a user account have always been a procedural problem. This is simply another variation on the problem that we need to take into consideration and put a process in place to deal with. The most likely processes are either to only create a maintenance plan when logged on with a generic account, or to run the script after creating the maintenance plan.

I am, however, curious as to why Microsoft have implemented updating the jobs in this manner, and I see it as having the potential to cause significant problems in environments that may not be monitoring their jobs as closely as is required, ending up with maintenance tasks not running for some time. How to get around this? Well, given the nature of maintenance plans and the fact that you must be a sysadmin to see or create them, surely it makes sense to have the owner be the SQL Server service account or a user created by SQL Server for maintenance plans? Someone has posted this as a feature suggestion on Connect here and I have added my two pennies' worth, so if you feel it should change then have your say too!

--See who owns which packages
SELECT name, description, SUSER_SNAME(ownersid)
FROM msdb.dbo.sysdtspackages90

--Now we update the owner to an appropriate domain account. Either the service account or a generic admin account is good.
UPDATE msdb.dbo.sysdtspackages90
SET OWNERSID = SUSER_SID('YOUR_DOMAIN\APPROPRIATE_ACCOUNT')
WHERE OWNERSID = SUSER_SID('YOUR_DOMAIN\OLD_ACCOUNT')
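The first half of the fix, resetting the owner of the Agent jobs themselves, can be scripted too. A minimal sketch, with the job and account names made up for illustration:

--Find jobs still owned by the leaver
SELECT name
FROM msdb.dbo.sysjobs
WHERE owner_sid = SUSER_SID('YOUR_DOMAIN\OLD_ACCOUNT')

--Reassign one of them to the generic admin account
EXEC msdb.dbo.sp_update_job
    @job_name = N'MaintenancePlan.Subplan_1',   --hypothetical job name
    @owner_login_name = N'YOUR_DOMAIN\APPROPRIATE_ACCOUNT'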

My old mate sp_recompile

October 12, 2007 Leave a comment

As soon as I saw the error messages in the logs I thought to myself, "Oh my, that did not happen in testing" (OK, maybe it was more colourful than that).

We were creating a clustered index on a tiny little table and the index went through fine. However, the application started to generate the message "Could not complete cursor operation because the table schema changed after the cursor was declared". My gut reaction was to restart each application server in the cluster, but having restarted the first one it made no difference. It suddenly clicked that SQL Server must be dishing out the cursor plan from cache.

Now, I did not want to restart the SQL servers because only a small part of the application was affected and I did not want a more significant outage. So, how do we get the plan out of cache? The table below details your options with the corresponding impact.

Action: EXEC sp_recompile 'object'
Pros: Minimal impact. When passing a table name, all procedures referencing it will be recompiled. Plans in cache not referencing the table will stay in cache.
Cons: You have to know the name of the object(s) needing to be recompiled.

Action: DBCC FREEPROCCACHE
Pros: Quick and dirty.
Cons: The procedure cache for the server is cleared, so the server will slow down whilst the cache populates again.

Action: Restart SQL
Pros: I suppose you could say you are 100% sure you have nailed the sucker.
Cons: You have a system outage and you have to wait for your procedure and buffer caches to repopulate.

The lesson to take away here is to always use sp_recompile when making any kind of DDL change; I also tend to use it on stored procedures and views too. I normally always have it in my scripts, so believe you me, I gave myself a good talking to about forgetting to put it in this time.
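In practice the habit looks something like the sketch below, run straight after the DDL; the table and index names are made up for illustration:

--The DDL change
CREATE CLUSTERED INDEX cix_orders ON dbo.orders (order_id)
GO
--Mark every plan that references the table for recompilation on next use
EXEC sp_recompile N'dbo.orders'
GO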

And on a related note, have you come across sp_refreshview? No? Well, it's worth knowing about.

Categories: Performance, SQL Server

SSMS Restore backup error

September 14, 2007 Leave a comment

We had a requirement to allow someone to create and restore databases on a test server today and I thought to myself, "That's easy, I'll just grant the 'Create Any Database' right to the appropriate user". That's when the pain began!

The user was using SSMS, connecting with a SQL login, to restore a database, and when they went to specify the backup location they got an error to the effect of "Cannot access the specified path or file on the server". After clicking OK, the tree view in the locate backup dialog was empty, and if you typed in the path and file name manually you still received an error.

So, I dug out Profiler and found that xp_fixeddrives was being called, and decided to check it out. It turns out that when executing xp_fixeddrives using a SQL login it returns no results! Because of this, the error is generated in SSMS and the tree view is not populated.

I spent some time investigating xp_fixeddrives and came to the following conclusions regarding its behaviour.

  • Currently the only way to get xp_fixeddrives to return results when using SQL authentication is to add the login to the sysadmin role.
  • I extensively explored proxy account configurations to get xp_fixeddrives to work under SQL authentication and could find no way to get it working.
  • If you use Windows authentication, xp_fixeddrives works as expected. You do not need sysadmin privileges, nor do you have to do any proxy configuration.
  • Some poking around with Process Monitor showed that when using Windows authentication, or the sysadmin route, SQLSERVR.exe spawns processes to gather the details using the service account. When using SQL authentication with sysadmin, SQLSERVR.exe does not spawn anything or generate any security errors, so whatever is going on is beyond what I can work out (my skills with a debugger are too limited :) ).

At the end of all this I went down the Windows authentication route to work around the issue. For some reason SSMS still generates the error, but the tree view is then populated and you can select the file, so I can live with that, as I suspect exploring why it still generates an error may end up being a bottomless pit of time.

Finally, this only affects SSMS. If the user had done the restore using T-SQL it would have worked (despite a warning about updating the restore history in msdb).

Categories: Configuration, SQL Server

Server level VLF report

July 25, 2007 Leave a comment

I read Tony Rogerson's blog on Virtual Log Files today and it reminded me that I really should knock up a little report to list all the databases on a server and the number of VLFs per database. Since I had been busy writing some other operational reports I was in the right frame of mind, so I knocked up what you see below.

The report code

CREATE TABLE #VLFS (fileid INT, filesize BIGINT, startoffset BIGINT, fseqno BIGINT, status INT, parity INT, createlsn VARCHAR(1000))

CREATE TABLE #Results (SRV_Name NVARCHAR(500), Database_Name NVARCHAR(500), VLFS INT)

EXEC master.dbo.sp_msforeachdb
    @command1 = 'USE ? INSERT INTO #VLFS EXEC(''DBCC LOGINFO WITH TABLERESULTS'')',
    @command2 = 'INSERT INTO #Results SELECT @@SERVERNAME, ''?'', COUNT(*) FROM #VLFS',
    @command3 = 'TRUNCATE TABLE #VLFS'

--For this example I've just returned the results, but you can
--just as easily write the results to a local or central server.
--I write mine to a central server for reporting on.
SELECT * FROM #Results

DROP TABLE #VLFS
DROP TABLE #Results

Example output

[One row per database: SRV_Name, Database_Name, VLFS]

For me the beauty of this little report is that I have set it up so that it runs on all 150 of my servers distributed around the world and logs to my central logging server. From there I get one report that tells me which databases, and the servers they are hosted on, have excessive VLFs. Having said that, in a well-managed environment I should never find any databases with lots of VLFs…

You may also be wondering, "How many VLFs are too many?". Tony did not cover this, but Kimberly Tripp did in a post she put up a few years ago (see point 8), which was when I first learned about VLFs. I don't, however, think this is a hard and fast number, and obviously the performance gains will be smaller if you are not far off this number.

The last thing I am going to mention in this post is part of the code I used in the report: the undocumented procedure sp_msforeachdb. This is a great little procedure that will cycle through your databases executing up to three commands. To use it, where you would have put a database name you put a '?', and it's as simple as that! Incidentally, there is also an sp_msforeachtable.

The joy of template parameters

July 15, 2007 Leave a comment

I thought I would do a quick blog about template parameters, since I have been writing a lot of standard deployment scripts for our 2005 builds and have used them extensively.

So what are they? Well, they are placeholders, and you would use them in the same places that you would probably put a variable. The key difference comes when you assign the values. With variables you work through the code setting the values as appropriate; with template parameters all you do is press CTRL+SHIFT+M and you get the dialogue box shown below. Simply fill in the values, click OK and you are ready to run your script.

The format of a parameter is <Parameter Name, SQL Data Type, Default Value>, so to get the Mirrored DB name shown in the screenshot you would enter <Mirrored_DB_Name, nvarchar(128), My_Mirror> where you wanted the value to appear.
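As a trivial illustration of the format (the database and table names here are made up), paste something like this into a query window and press CTRL+SHIFT+M to be prompted for the values:

USE <Database_Name, sysname, AdventureWorks>
GO
SELECT TOP (<Row_Count, int, 10>) *
FROM <Table_Name, sysname, dbo.SomeTable>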

This feature is available in Management Studio and Query Analyser; if you have ever used one of the standard Microsoft query templates you may have noticed that parameters are used in them as well.

I only realised template parameters existed a few years back because someone pointed them out to me. If this is the first time you have come across them (or you had forgotten about them), I hope you find them as useful as I do.

Categories: SQL Server, SSMS

The x64 experience

July 10, 2007 Leave a comment

The 64-bit beast has been out there for a while now and new servers are generally 64-bit compatible. The first thing that comes to my mind when I think 64-bit is performance and memory, with "compatibility" hot on its heels (and the recollection of a good article by Linchi Shea).

My fears around compatibility have been pretty much put to rest and we now recommend SQL 2005 x64 as our base build. However, it does seem that 64-bit builds need a bit more attention. I say this because recently I have been having to register 64-bit DLLs again, and having mentioned this to a few DBAs, the conversations went a bit like "Really? So was it x.dll or y.dll?". Fortunately the problems seem to only be affecting the tools (touch wood), but I am curious as to what experiences others are having with 64-bit servers, so please do leave some comments.

Now, to help others who run head first into the very misleading errors I experienced, I will detail the errors and the fixes below.

The first problem

This one occurred when we were doing a server build. One of our final tasks was to deploy an SSIS package, so we went to open the package in BIDS and were greeted with the following error when loading the package.

"The package failed to load due to error 0xC0010014 "One or more error occurred. There should be more specific errors preceding this one that explains the details of the errors. This message is used as a return value from functions that encounter errors.". This occurs when CPackage::LoadFromXML fails."

Some research threw up a few red herrings, but the next clue came when I decided to connect to the SSIS instance using SSMS. At this point I got an error stating:

"Error loading type library/DLL"

Further research threw up the following document, which gave me a few options, none of which I liked, and more importantly the top solution was to use the latest service pack, which I was already running. So I decided to go hunting with the Sysinternals Process Monitor to see if I could find the issue. Fifteen minutes later the culprit was found to be DTS.dll, which Process Monitor identified as throwing some unusual errors, so I located the 64-bit version, used REGSVR32 to register DTS.dll again, and this resolved my problems.

The second problem

This one came a few weeks down the line on the same server. I was working in SSMS and connected to SSIS; when expanding the tree view I got the following error.

"Library not registered. (Exception from HRESULT: 0x8002801D (TYPE_E_LIBNOTREGISTERED))"

This time I went straight to my trusty Process Monitor and found that the server was trying to use the 32-bit version of msdtssrvrutil.dll. I located the 64-bit version and registered it and everything worked again. This time I also took the opportunity to check, as best I could, that everything else was working.

It has been a few weeks since the issues now and all is well, so hopefully the server will behave, until the next time…

Categories: SQL Server