Recently I needed to investigate why SSAS synchronisation commands were intermittently failing, and the errors pointed to a file lock on an SSAS data file. To identify the file lock I needed to run Process Monitor for as long as it takes to capture the problem, even if that takes weeks or months to occur. This presents the following problems:
- How to prevent the Process Monitor capture from consuming all available disk space.
- How to ensure we do not overwrite the data we are looking for.
- How to ensure the capture stops once we have the data we are looking for.
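The three constraints above amount to a bounded, self-terminating capture. Purely as an illustration of that decision logic (not Process Monitor's actual configuration), here is a minimal Python sketch; the segment size, segment cap, and stop condition are all hypothetical values:

```python
# Hypothetical watchdog logic for a long-running capture, assuming the trace
# is written out in rolling segments. All thresholds are illustrative.

MAX_SEGMENT_BYTES = 1 * 1024**3   # roll to a new segment after ~1 GB
MAX_SEGMENTS = 10                 # cap total disk usage at ~10 GB

def should_rotate(segment_size_bytes: int) -> bool:
    """Bound disk usage: start a new segment once the current one is full."""
    return segment_size_bytes >= MAX_SEGMENT_BYTES

def segment_to_delete(segments: list) -> "str | None":
    """Reclaim space by dropping the oldest segment once the cap is hit.

    The caller must only do this while the sought-after event has not yet
    been captured, otherwise we would overwrite the very data we want.
    """
    if len(segments) >= MAX_SEGMENTS:
        return segments[0]  # oldest segment first
    return None

def error_observed(log_lines: list) -> bool:
    """Stop condition: the failure has shown up, so the capture (and the
    rotation above) should stop. 'lock' is a placeholder for whatever error
    text identifies the real failure."""
    return any("lock" in line.lower() for line in log_lines)
```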
Very recently I encountered a production issue, and to assist me in the root cause analysis I needed information and clues that are not exposed by any SSAS interface, so I created the metadata sniffer.
Some time ago I was looking into the performance of DBCC CHECKDB on one of our servers and came to the conclusion that the best degree of parallelism (DOP) for CHECKDB was not the same as our server-level setting. Since CHECKDB does not accept a DOP hint, the only option to limit DOP when the server-level setting is not appropriate is to use Resource Governor.
In our environment CHECKDB is scheduled using SQL Agent, and I could have chosen to simply target all SQL Agent jobs and assign them to a single pool, but this was not appropriate for all of my SQL Agent jobs, so I needed a solution targeting individual jobs. I thought this would be a relatively straightforward task, but there were some challenges, so I wanted to share them and my solution.
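One of those challenges is worth spelling out. A session started by a SQL Agent job step identifies itself only through its program_name, which looks like 'SQLAgent - TSQL JobStep (Job 0x... : Step 1)': the hex string is the job_id as binary(16), i.e. the GUID with its first three fields byte-swapped. A classifier that targets individual jobs has to decode this. The Python sketch below illustrates the decoding (the function name is mine; a real classifier function would do the equivalent in T-SQL):

```python
import re
import uuid

def job_id_from_program_name(program_name: str):
    """Decode the SQL Agent job_id embedded in a session's program_name.

    Job-step sessions connect as 'SQLAgent - TSQL JobStep (Job 0x... : Step N)'.
    The 32 hex digits are the job_id as binary(16): the GUID's first three
    fields are stored little-endian, so decoding them as such recovers the
    job_id found in msdb.dbo.sysjobs.
    """
    match = re.search(r"Job 0x([0-9A-Fa-f]{32})", program_name)
    if match is None:
        return None  # not a SQL Agent job-step session
    return uuid.UUID(bytes_le=bytes.fromhex(match.group(1)))
```

For example, a program_name containing Job 0x78563412BC9AF0DE123456789ABCDEF0 decodes to the GUID 12345678-9abc-def0-1234-56789abcdef0, which can then be matched to a job in msdb.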
For a while now I have been analysing a situation, and I have got it down to a fine art whereby I do the following:
- Create a memory-intensive situation where I am using 85% of available memory and SSAS is working hard to free memory.
- Process add a dimension and capture statistics.
- Review statistics, contemplate the results and then tweak something.
- Start at step 1 again.
The other day, when I did my process add, I was presented with the following error:
The following exception occurred while an operation was being performed on a data source view: The type initializer for ‘System.Xml.Schema.XmlSchemaComplexType’ threw an exception.;Exception of type ‘System.OutOfMemoryException’
In part 1 of this two-part post I shared an overview of the procedure for optimising process update. I will now walk through the optimisation process on one of our multi-terabyte systems.
Part 1 shows a visualisation of process update. I have captured timings in Profiler for all of the stages shown, and I now present my starting point for tuning below.
We have three stages that can be optimised:
- Dimension build – This comes in at 92 seconds, which is acceptable, so I won't be discussing it further, but you can refer to part 1 for tuning options.
- Partition index processing – Almost 25 minutes is spent here, so tuning is required.
- Commits – Almost 7 agonising minutes are spent waiting for the phase 2 commit with the server lock in place.
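To put those timings in proportion (treating "almost 25 minutes" and "almost 7 minutes" as exactly 25 and 7, so the shares are approximate):

```python
# Approximate stage timings from the breakdown above, in seconds.
stage_seconds = {
    "dimension build": 92,
    "partition index processing": 25 * 60,  # ~25 minutes
    "commit": 7 * 60,                       # ~7 minutes
}

total = sum(stage_seconds.values())  # roughly 33.5 minutes end to end

# Share of the total spent in each stage: partition index processing
# dominates, which is why the tuning effort concentrates there.
shares = {stage: seconds / total for stage, seconds in stage_seconds.items()}
```

Roughly three-quarters of the elapsed time sits in partition index processing, with the commit accounting for about a fifth.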
I have been working on optimising the process update of one of our multi-terabyte systems, and this two-part post will break down the optimisation process and share with you some very interesting results.
In part 1 I will share an overview of the procedure for optimising process update, and in part 2 I will apply it to our system and share some startling discoveries.
Internally, process update is a complex operation that can be difficult to describe, so to facilitate the explanation I have created the visualisation shown below, which also helps to illustrate the single-threaded and multithreaded execution paths.
It's official: 2012 ended on a high for me. I received confirmation that I have passed the Microsoft SSAS Maestro certification!!! I won't repeat the details of what the Maestro certification is, but you can find them here and here.
Before I go on, I want to stop and say I loved the whole Maestro experience; I found it inspirational and invigorating! Thank you for the opportunity.
Now, what is the challenge I'm referring to? Well, I announced to the team of highly skilled BI developers I regularly work with that I was now a Maestro, so they set me my first Maestro task: to further optimise their already highly optimised process update. The results are really, really interesting, and I will be posting a one- or two-part blog entitled "The Anatomy of a process update", which will be coming shortly, so stay tuned!
As a teaser, some of the moments of discovery I had while pulling process update apart went like this:
“That explains why x appeared to yield no benefit….”
“Really!?!?! I never saw that coming”
“Wow! That’s one for the blog!”
“Maestro Moment — Realising the true potential and implications of the discovery…”