Archive

Archive for the ‘SQL Server’ Category

When is SELECT INTO not minimally logged?

June 20, 2015 3 comments

If you had asked me that question a month ago I would have said "Never, it is always minimally logged and this behaviour is documented in BOL". But now I know better…

Recently I have been working on an application migration from SQL 2008 R2 to SQL 2014, and we found a huge drop in performance for some SELECT INTO statements. I traced the issue down to the fact that the operation was being fully logged, and we then engaged Microsoft to work out why. Read more…
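As a rough illustration (not from the original investigation), you can eyeball the logging behaviour of a SELECT INTO yourself by counting log records before and after the load. The database and table names below are placeholders, and the sketch assumes a throwaway database in the SIMPLE recovery model where you are free to issue a CHECKPOINT.

```sql
-- Sketch: compare transaction log activity around a SELECT INTO.
-- Assumes a throwaway database (TestDb) in SIMPLE recovery; names are placeholders.
USE TestDb;
GO
CHECKPOINT;  -- in SIMPLE recovery this clears most log records, giving a clean baseline

SELECT COUNT(*) AS log_records_before
FROM sys.fn_dblog(NULL, NULL);

SELECT *
INTO dbo.Sales_Copy
FROM dbo.Sales;

SELECT COUNT(*) AS log_records_after
FROM sys.fn_dblog(NULL, NULL);
-- A minimally logged SELECT INTO produces comparatively few log records
-- (mostly allocation changes); a fully logged one writes records for every row/page.
```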

Targeting CHECKDB SQL Agent jobs for resource governing

March 6, 2014 Leave a comment

Some time ago I was looking into the performance of DBCC CHECKDB on one of our servers and came to the conclusion that the best degree of parallelism (DOP) for CHECKDB was not the same as our server-level setting. Since CHECKDB does not accept a DOP hint, the only option to limit DOP when the server-level setting is not appropriate is to use Resource Governor.

In our environment CHECKDB is scheduled using SQL Agent. I could have chosen to simply target all SQL Agent jobs and assign them to a single pool, but that was not appropriate for all my SQL Agent jobs, so I needed a solution targeting individual jobs. I thought this would be a relatively straightforward task, but there were some challenges, so I wanted to share them and my solution.
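As a rough sketch of the idea (the pool, group, function name and job id below are all made-up placeholders, not the ones from the post): a classifier function can pick out an individual SQL Agent job because T-SQL job steps connect with a program name of the form 'SQLAgent - TSQL JobStep (Job 0x<job_id> : Step n)'.

```sql
-- Sketch only: route one specific SQL Agent job (e.g. the CHECKDB job) to its
-- own workload group so MAX_DOP can be capped. All names and the job id hex
-- string are placeholders - look the real value up in msdb.dbo.sysjobs first.
USE master;
GO
CREATE RESOURCE POOL CheckDbPool;
CREATE WORKLOAD GROUP CheckDbGroup WITH (MAX_DOP = 4) USING CheckDbPool;
GO
CREATE FUNCTION dbo.rg_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    -- SQL Agent T-SQL job steps report an APP_NAME() such as
    -- 'SQLAgent - TSQL JobStep (Job 0x... : Step 1)', so the embedded job id
    -- identifies the individual job. The hex value below is a placeholder.
    IF APP_NAME() LIKE 'SQLAgent - TSQL JobStep (Job 0xA1B2C3D4E5F60718293A4B5C6D7E8F90%'
        RETURN N'CheckDbGroup';

    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```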

Read more…

Windows 2008 file transfer performance improvements

October 9, 2012 Leave a comment

I have been reading about Server Message Block (SMB) 3 and thought it would be good to remind people about SMB 2, as many systems have not yet been upgraded to Windows 2008 to take advantage of it.

For more background on SMB, please see this article.

When Microsoft introduced SMB 2 they pretty much started again. During one of my recent migrations I found the file copy improvements from Windows 2008 R2 to be jaw-dropping; the highlights follow. Read more…

SQL Server 2008 uses half the CPUs

July 6, 2012 Leave a comment

I recently got my hands on a couple of 4-socket servers with Intel E7-4870s (10 cores per CPU), and with hyper-threading enabled that gave me 80 logical CPUs.

The server has Windows 2008 R2 SP1 along with SQL 2008 (currently we cannot deploy SQL 2008 R2 for the application being hosted).

When SQL Server started, I noticed only 2 NUMA nodes and 40 logical cores were configured, where there should have been 4 NUMA nodes and 80 logical cores (see below).

[Screenshot: SQL Server reporting only 2 NUMA nodes and 40 logical processors]
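A quick way to see the mismatch for yourself (a sketch, not taken from the post) is to compare the CPU count Windows exposes with the schedulers and nodes SQL Server has actually brought online:

```sql
-- Sketch: check how many logical CPUs and schedulers SQL Server can see.
SELECT cpu_count, scheduler_count
FROM sys.dm_os_sys_info;          -- on this box you would expect 80 / 80

-- NUMA nodes SQL Server has online (node_id 64 is the internal DAC node).
SELECT node_id, node_state_desc, online_scheduler_count
FROM sys.dm_os_nodes
WHERE node_state_desc = N'ONLINE';
```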

Read more…

Categories: Configuration, Numa, SQL Server

SQL Agent and SSAS Commands

May 30, 2011 Leave a comment

SQL Agent can be used to execute SSAS commands, such as processing a dimension, and I can see that it is useful when you want to quickly schedule an ad hoc processing task, but there is a major drawback you need to be aware of.

You are not able to specify the database to connect to, and whilst you may think that does not matter because the database is specified in the connection, it really does matter. Below are extracts from a processing task, captured in Profiler, which was executed using a SQL Agent task.

[Profiler trace extract from the SQL Agent processing task]

So, what is happening here? Read more…

Parallelism, CPU Time & DMVs

May 5, 2011 1 comment

Whilst reviewing the CPU statistics of a system that I knew was CPU bound, I found the numbers were not adding up and I was not seeing the code I expected to see as a top CPU consumer, so I decided to go digging.

I quickly identified that if the query has gone parallel it:

  • Only shows as one thread in sys.dm_exec_requests, because sys.dm_exec_requests does not show blocked tasks and parallel threads appear to be self-blocking. If you want to see all active threads, including blocked ones, you should use sys.dm_os_waiting_tasks, but there is no CPU time there…
  • Any CPU time shown relates only to the coordinator thread, not the sum of all the related parallel threads (see the sketch below).
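To illustrate the two points above, here is a rough sketch (not from the original post) that shows, for each running request, the coordinator-only cpu_time from sys.dm_exec_requests alongside the number of tasks actually working on it:

```sql
-- Sketch: for a parallel request, cpu_time below reflects only the coordinator
-- thread, while the task count from sys.dm_os_tasks shows how many worker
-- threads are really involved.
SELECT r.session_id,
       r.cpu_time            AS coordinator_cpu_ms,
       COUNT(t.task_address) AS task_count          -- > 1 => the query has gone parallel
FROM sys.dm_exec_requests AS r
JOIN sys.dm_os_tasks      AS t
      ON t.session_id = r.session_id
     AND t.request_id = r.request_id
WHERE r.session_id <> @@SPID
GROUP BY r.session_id, r.cpu_time;
```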

I also used my favourite tool, sp_whoisactive, written by Adam Machanic, but it did not help me either, so I e-mailed Adam and had an enlightening mail exchange. The bottom line was that it is not possible to get an accurate value for CPU if a query has gone parallel! Below is an extract from the mail exchange with Adam, reproduced with his permission.

Correct. It is not possible to get an accurate value for CPU if a query has gone parallel. The best you can get is the number of context switches (which Who is Active provides in @get_task_info = 2 mode). This is not the same thing, but it is correlated: a context switch means that the process was using CPU time, and was switched off of the CPU. What you don’t know is whether it used its entire quantum each time, or only 1/2 of the quantum, or whatever. So it’s not exactly a simple translation. But it’s a lot better than nothing.

Adam went on to talk about a potential method to expose a more accurate CPU value through Who is Active’s delta mode, and shortly afterwards he delivered!

Adam announced the accurate CPU time deltas in this post. To get the CPU deltas you need to be running version 11, and the parameters you need are documented here.
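For example (a sketch, assuming Who is Active v11 or later is installed), a call along these lines samples over a few seconds and returns the delta columns:

```sql
-- Sketch: ask Who is Active for task-level info plus deltas over a 5 second window;
-- the CPU_delta column then reflects CPU burned across all of a request's threads.
EXEC dbo.sp_WhoIsActive
     @get_task_info  = 2,
     @delta_interval = 5;
```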

So, to summarise:

  • It is impossible to get a runtime cumulative value of CPU for a SPID that has gone parallel, and it is vital you remember this when you’re looking at your DMVs, otherwise you could be barking up the wrong tree.
  • sp_whoisactive can give you a runtime delta of CPU time for a parallel query, which will enable you to spot CPU-sapping parallel queries.

I hope you find this information useful, and I would also like to say a big thank you to Adam Machanic.

Categories: DMV, Performance, SQL Server

Server Side Aliases

November 16, 2010 Leave a comment

Over the years I have come across a few situations where server-side connections to SQL Server fail when you use a DNS alias that points back to the server you are initiating the connection from, yet you can connect remotely.

It’s an annoying problem with a very unhelpful error message that changed between versions of SQL Server. In SQL 2000 you are presented with:

"Login failed for user ‘(null)’. Reason: Not associated with a trusted SQL Server connection."  and in SQL 2005 + SQL 2008 its “Login failed. The login is from an untrusted domain and cannot be used with Windows authentication”.

You will also see event ID 537 in the Security event log.

One of the most common reasons a system is set up with an alias pointing back at itself is that a consolidation has taken place and you don’t want to change the connection strings. However, some people simply got burnt when Microsoft first released the security patch which introduced this change, and I still find people being burnt today.

Cause

NTLM reflection protection was introduced as part of security fix MS08-068. This causes a local authentication failure when using a DNS alias, which bubbles up and becomes the error described above.

The relevant Microsoft articles are MS08-068 and http://support.microsoft.com/kb/926642; the extract describing the cause is:

This problem occurs because of the way that NT LAN Manager (NTLM) treats different naming conventions as remote entities instead of as local entities. A local authentication failure might occur when the client calculates and caches the correct response to the NTLM challenge that is sent by the server in local "lsass" memory before the response is sent back to the server. When the server code for NTLM finds the received response in the local "lsass" cache, the code does not honour the authentication request and treats it as a replay attack. This behaviour leads to a local authentication failure.

Solution

You either need to use the local name rather than the DNS alias, or follow the steps described in the resolution sections of the articles to disable the protection entirely or to exempt a specific alias.
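For reference, here is a sketch of the registry-based resolutions from KB 926642 (the alias name is a placeholder; a restart may be required, and the first option is preferable because the second disables the protection completely):

```
:: Option 1 - exempt specific host names from the loopback check (recommended).
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d "mysqlalias.mydomain.com" /f

:: Option 2 - disable the loopback check entirely (less secure).
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f
```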