Archive

Archive for the ‘Performance’ Category

The anatomy of a process update – Part 2

February 21, 2013

In part 1 of this two-part post I shared an overview of the procedure to optimise process update. I will now walk through the optimisation process on one of our multi-terabyte systems.

Part 1 shows a visualisation of process update. I have captured timings in Profiler for all the stages shown, and my starting point for tuning is presented below.

[Image: Profiler timings for each stage of process update]

We have 3 stages that can be optimised.

  • Dimension build – This comes in at 92 seconds, which is acceptable, so I won't be discussing it further, but you can refer to part 1 for tuning options.
  • Partition index processing – Almost 25 minutes is spent here, so tuning is required.
  • Commits – Almost 7 agonising minutes are spent waiting for the phase 2 commit with the server lock in place.
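If you want to reproduce this kind of breakdown, the stage timings can be pulled out of a Profiler trace that has been saved to a SQL Server table. The query below is only a minimal sketch: dbo.ProcessUpdateTrace is a placeholder name, and it assumes the table has the standard trace columns. It sums the Progress Report End durations per subclass and object to show where the minutes go.

    -- Minimal sketch: assumes the Profiler trace of the ProcessUpdate run was
    -- saved to a SQL table (dbo.ProcessUpdateTrace is a placeholder name).
    -- Progress Report End events carry the elapsed time for each object
    -- processed, so grouping by subclass and object shows where time is spent.
    SELECT  EventSubclass,
            ObjectPath,
            SUM(Duration) / 1000.0 AS TotalSeconds
    FROM    dbo.ProcessUpdateTrace
    WHERE   EventClass = 6          -- Progress Report End
    GROUP BY EventSubclass, ObjectPath
    ORDER BY TotalSeconds DESC;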

Read more…


The Anatomy of a process update – Part 1

February 9, 2013

I have been working on optimising the process update of one of our multi-terabyte systems, and this two-part post will break down the optimisation process and share some very interesting results.

In part 1 I will share an overview of the procedure to optimise process update, and in part 2 I will apply it to our system and share some startling discoveries.

Overview

Internally, process update is a complex operation that can be difficult to describe, so to facilitate explanation I have created the visualisation of the process shown below, which also helps to show the single-threaded and multi-threaded execution paths.

Read more…

SSAS Native v .net Provider

November 30, 2010

Recently I was investigating why a new server, which is in its parallel-running phase, was taking significantly longer to process the daily data than the server it's due to replace.

The server has SQL Server and SSAS installed, so the problem was unlikely to be in the network transfer as the connection uses shared memory. As I dug around the SQL DMVs I noticed in sys.dm_exec_connections that the SSAS connection had a packet size of 8000 bytes instead of the usual 4096 bytes, and from there I found that the datasource had been configured with the wrong provider. What was really interesting, and the point of this blog, is the performance difference, which I have shown below.

Provider                         Rows per second
.Net SqlClient Data Provider              30,000
SQL Server Native Client 10              240,000

That’s right! For a single partition, the native client was able to process 240,000 rows per second, whereas the .Net client maxes out at 30,000. That means the SQL Server Native Client delivers eight times the throughput! I knew that the .Net providers were slower, but I had never gathered any metrics before. If you're looking after an SSAS server I would definitely recommend taking a few minutes to check which provider is configured in the datasource.
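The tell-tale 8000-byte packet size can also be spotted from the SQL Server side with a quick DMV query. This is just a sketch of the sort of check I mean; the join to sys.dm_exec_sessions pulls in the program name so the SSAS connection is easy to pick out.

    -- Sketch of a quick provider check from the SQL Server side. The .Net
    -- SqlClient provider defaults to an 8000-byte packet, whereas the native
    -- OLE DB client normally shows the instance default of 4096 bytes.
    SELECT  c.session_id,
            s.program_name,
            c.net_transport,
            c.net_packet_size
    FROM    sys.dm_exec_connections AS c
    JOIN    sys.dm_exec_sessions    AS s ON s.session_id = c.session_id
    ORDER BY c.net_packet_size DESC;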

Another point to consider is that you may have a custom solution doing your ETL and utilising the .Net providers. This would also be constrained by the .Net provider's throughput limit, and a switch over to SSIS could dramatically improve your ETL.

Analysis Server appears to hang…..

April 5, 2009

We had an ongoing problem whereby users would suddenly start complaining that the Analysis Server was hanging. When investigating the box we could see that there appeared to be no physical constraints: disk activity was a bit above average, CPU was very low and there was plenty of memory.

When looking at the server through the SSAS Activity Viewer (part of the community samples) we could see that users were definitely queuing up, and many queries that should have returned in less than a second were hanging around forever (30 minutes or more). It was as if we were experiencing some form of blocking…

To complement our monitoring we use ASTrace (also part of the community samples) to log what's happening on the SSAS server to a SQL Server in real time. Amongst the capabilities this gives us is the ability to run a little procedure that shows what MDX/XMLA is running, with duration etc. (it's much friendlier than the Activity Viewer). So, when we were experiencing the trouble, our script showed that an MDX query touching multiple partitions totalling hundreds of GBs appeared to be at the head of the chain every time.
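Our procedure isn't reproduced here, but a query along these lines gives a similar view. It is only a sketch under a few assumptions: dbo.ASTraceTable is a placeholder for whatever table ASTrace was configured to write to, and the columns follow the standard trace schema. Sorting the Query End events by duration quickly surfaces the kind of monster query described above.

    -- Rough sketch against the ASTrace output table (dbo.ASTraceTable is a
    -- placeholder; the real name is whatever ASTrace was configured with).
    -- Query End events carry the full MDX/XMLA text and the duration in ms,
    -- so sorting by duration surfaces the heaviest recent queries.
    SELECT TOP (20)
            StartTime,
            NTUserName,
            Duration / 1000.0 AS DurationSeconds,
            TextData
    FROM    dbo.ASTraceTable
    WHERE   EventClass = 10     -- Query End
    ORDER BY Duration DESC;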

Read more…