Category: Analysis Services

Big SSAS News In SQL Server 2019 CTP 2.3

Chris Webb is excited about what’s in SQL Server 2019 CTP 2.3:

With the release of CTP 2.3 of SQL Server 2019 today, there was big news for Analysis Services Tabular developers: Calculation Groups. You can read all about them in detail in this blog post:

https://blogs.msdn.microsoft.com/analysisservices/2019/03/01/whats-new-for-sql-server-2019-analysis-services-ctp-2-3/

In my opinion this is the most important new feature in DAX since… well, forever. It allows you to create a new type of calculation – in most cases a time intelligence calculation such as year-to-date or previous-period growth – that can be applied to multiple measures; basically the same thing we have been doing in SSAS Multidimensional for years with the time utility/shell/date tool dimension technique. It’s certainly going to solve a lot of problems for a lot of SSAS Tabular implementations, many of which define hundreds or even thousands of measures to cover every combination of base measure and calculation type needed.

Click through for more of Chris’s thoughts and how calculation groups will make your life easier.
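
For a taste of the feature: a calculation group is a set of calculation items, each of which is a DAX expression built around SELECTEDMEASURE() and applied on top of whatever measure is in context. A sketch of three typical items – the 'Date'[Date] column is an assumption about your model:

    -- Calculation item: YTD
    CALCULATE ( SELECTEDMEASURE (), DATESYTD ( 'Date'[Date] ) )

    -- Calculation item: PY (same period last year)
    CALCULATE ( SELECTEDMEASURE (), SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

    -- Calculation item: YoY growth
    SELECTEDMEASURE ()
        - CALCULATE ( SELECTEDMEASURE (), SAMEPERIODLASTYEAR ( 'Date'[Date] ) )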

QueryMemoryLimit In SSAS 2019

Shabnam Watson covers a new setting in Analysis Services 2019:

The purpose of this setting is to limit the amount of memory any single query can take. It is extremely useful when you want to cap memory consumption per query across the board. Before this setting existed, it was possible for an extremely poorly written query to eat up all of a server’s memory and bring all other queries to a halt. You can see an example of such a query, along with SSAS memory settings, in my previous post here.

Read on for details about what it does and what happens when a query reaches the memory limit.
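
The property lives under Memory\QueryMemoryLimit in the server’s advanced properties. If you’d rather script it than click through the SSMS dialog, an AMO sketch along these lines should do it – the server name and value are assumptions, and as I understand the setting, values of 100 or less are read as a percentage of total memory while larger values are bytes:

    // Sketch: set the per-query memory cap via AMO (Microsoft.AnalysisServices).
    using Microsoft.AnalysisServices;

    var server = new Server();
    server.Connect("Data Source=localhost");

    ServerProperty prop = server.ServerProperties["Memory\\QueryMemoryLimit"];
    prop.Value = "20";   // assumption: cap each query at 20% of total memory
    server.Update();     // push the property change back to the server

    server.Disconnect();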

Understanding Analysis Services Memory Behavior

Shabnam Watson walks us through SQL Server Analysis Services memory settings and application behavior under memory stress:

If memory consumption is below the Low limit, everything is fine and it is free to stay in memory. Once consumption passes the Low limit, a cleaner thread wakes up and tries to clean up memory. At this point, the price of memory is no longer zero. It starts at 2 at the Low limit and goes as high as 1000 when memory consumption reaches the Total limit. The higher the memory pressure, the more aggressive the cleaner gets. Once memory consumption reaches the Hard limit, all connections/sessions are closed and queries are cancelled with an out-of-memory error.

This is a thorough explanation with some good demos and terrible queries.  Give it a read.
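
If it helps to picture the cleaner’s “price of memory”: assuming a linear ramp between the two limits (purely an illustration – the post has the actual behavior and charts), the price would look something like

    price(m) = 2 + (1000 - 2) * (m - LowLimit) / (TotalLimit - LowLimit),   for LowLimit <= m <= TotalLimit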

More Tabular Best Practices

Ginger Grant continues her series on Analysis Services Tabular best practices:

Optimize your DAX Code

While it is not easy to performance tune DAX, you can do it by evaluating the DAX query plan and the VertiPaq queries, and with SQLBI’s VertiPaq Analyzer. You can also look to use functions which perform better: for example, COUNTROWS instead of DISTINCTCOUNT, or ADDCOLUMNS instead of SUMMARIZE. Whenever possible, use the CALCULATE function instead of the FILTER function, as CALCULATE filters for context inside the parentheses and is more efficient. All of the iterator functions (SUMX, COUNTX, etc.) should be used sparingly, as the row-by-row operations they perform are less efficient; use them only when SUM or COUNT will not work. When evaluating whether a value is missing, use ISEMPTY instead of ISBLANK if possible, as ISEMPTY looks only for the presence of a row, which is faster than the evaluation performed by ISBLANK.

Read on for several more items in this vein.
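
To make a couple of those swaps concrete, here is what the faster and slower forms look like side by side; the table and column names are invented for illustration:

    -- Counting distinct values: per the tip above, COUNTROWS over VALUES
    -- can outperform DISTINCTCOUNT.
    Customer Count := COUNTROWS ( VALUES ( Sales[CustomerKey] ) )
    Customer Count Slow := DISTINCTCOUNT ( Sales[CustomerKey] )

    -- Filtering: a CALCULATE filter argument beats iterating the table with FILTER.
    Red Sales := CALCULATE ( [Sales Amount], 'Product'[Color] = "Red" )
    Red Sales Slow := CALCULATE ( [Sales Amount], FILTER ( 'Product', 'Product'[Color] = "Red" ) )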

More Tabular Best Practices

Ginger Grant has a few more best practices for working with Analysis Services tabular models:

Modify Timestamps to Split Date and Time

When there is a field where both the date and the time are needed, the values should be separated so that there is both a date field and a time field. Having the date and time in two fields assists dictionary encoding, as the date and time fields can be separately sorted into columns where the values are the same, decreasing the number of dictionary entries. To further improve compression, only include the seconds if absolutely necessary, as decreasing the cardinality will increase compression.

Click through for more tips.
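
If the split happens upstream, in the query or view feeding the model, it can be as simple as this hypothetical T-SQL, which also truncates the time to whole minutes per the advice about seconds:

    -- Split one datetime column into separate date and time columns.
    SELECT
        CAST(OrderTimestamp AS date) AS OrderDate,
        -- truncating to the minute keeps the time column's cardinality low
        CAST(DATEADD(MINUTE, DATEDIFF(MINUTE, 0, OrderTimestamp), 0) AS time(0)) AS OrderTime
    FROM dbo.Orders;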

In Praise Of Tabular Editor

Teo Lachev shares a positive review of Tabular Editor, a community tool for working with Tabular models:

What tool do you use for Analysis Services Tabular development? SSDT, right – what else? Here is a little secret: I almost don’t use SSDT anymore, except for limited tasks such as importing new tables and visualizing relationships. I switched to a great community tool – Tabular Editor – and you should too if you’re frustrated with the SSDT Tabular Designer. Back in 2012, Microsoft ported the Power Pivot designer to SSDT to let BI practitioners implement Tabular models. This is why you still get weird errors that Excel has encountered some error. Microsoft hasn’t made any “professional” optimizations despite all the attention that Tabular gets. As a result, developers face:

  • Performance issues – As your model grows in complexity, it gets progressively slower for even simple changes, such as renaming columns. The problem, of course, is that any change results in a commit operation to the workspace database. SSDT requires a workspace database for the Data View, and that database slows down all tasks even when it holds no data. While the Data View is useful for data analysts, I’d personally rather sacrifice it to gain development speed.

  • The horrible measure grid – Enough said. To Microsoft’s credit, the Tabular Explorer helps somewhat, but it still doesn’t support the equivalent of the SSAS MD script editor.

  • No automation for repetitive tasks – It’s not unusual to create many measure variants, such as YTD and QTD. SSDT doesn’t help much with automating them.

It does look interesting.
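
On the automation point: Tabular Editor’s Advanced Scripting window runs C# snippets against the model, which covers the YTD/QTD scenario nicely. A minimal sketch, where the 'Date'[Date] reference is an assumption about your model:

    // For every measure selected in the explorer tree, generate a YTD variant.
    foreach (var m in Selected.Measures)
    {
        m.Table.AddMeasure(
            m.Name + " YTD",                                       // new measure name
            "TOTALYTD ( " + m.DaxObjectName + ", 'Date'[Date] )",  // DAX expression
            m.DisplayFolder                                        // keep the same folder
        );
    }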

Best Practices For Tabular Models

Ginger Grant has started a new series on best practices for Analysis Services Tabular models:

Data Type Selection

The data type selected will impact the physical storage used, not the compression of the models in memory.  It is important whenever possible to reduce the cardinality of the data in order to be able to sort the data effectively.  When storing decimal numbers, unless you need many significant digits, store the data as Currency as it will take less space in physical storage than decimal.

Click through for additional tips.
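
If you prefer to script that kind of change, the Tabular Object Model exposes the Currency type as DataType.Decimal. A rough C# sketch – server, database, table, and column names are all assumptions:

    // TOM sketch: switch a column from floating point to fixed decimal (Currency).
    using Microsoft.AnalysisServices.Tabular;

    var server = new Server();
    server.Connect("Data Source=localhost");

    var model = server.Databases.GetByName("Sales").Model;
    var column = model.Tables["Fact Sales"].Columns["Unit Price"];
    column.DataType = DataType.Decimal;   // Tabular's Currency / fixed-decimal type
    model.SaveChanges();                  // write the change back to the server

    server.Disconnect();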

Storing Constants For MDX Calculated Measures

Chris Webb walks us through an interesting performance problem when using Analysis Services multidimensional:

All it does is take the value of the Sales Amount measure at the lowest granularities of the Customer, Date and Product dimensions, multiply it by 0.08 to find a tax value, and because [Tax Amount] is a real, non-calculated measure, the result of the calculation aggregates up through the cube. [I know that I don’t have to aggregate the result of this specific calculation but remember that this is a simplified example – in the real case I did have to write the calculation using Scope statements – and anyway the best way of handling a basic multiplication like this would be with a measure expression]

The performance was sub-second for my test query and I was happy, but then I realised that the same tax rate was being used in other calculations and may change in the future, so I thought I would store the value 0.08 in a calculated measure:

Chris walks through several iterations of this before landing on a solution which is both reasonable and fast.
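
For context, the “store the constant in a calculated measure” starting point looks roughly like this in the MDX script – names are assumed, the real version is scoped down to the leaf granularities, and this is the variant that turned out to be slow:

    -- Keep the rate in one place as a hidden calculated measure...
    CREATE MEMBER CURRENTCUBE.[Measures].[Tax Rate] AS 0.08, VISIBLE = 0;

    -- ...and reference it when assigning the real Tax Amount measure.
    SCOPE ([Measures].[Tax Amount]);
        THIS = [Measures].[Sales Amount] * [Measures].[Tax Rate];
    END SCOPE;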

Clearing The SSAS Cache Using C#

Shabnam Watson shows us a small console program to clear the SQL Server Analysis Services cache:

First let me give you a little background on why you would want to clear the SSAS cache from C# code when you can do it using an XMLA command from SSMS.

If you have a slow MDX/DAX SSAS query, you have a couple of options for improving its performance (assuming no hardware changes):

  1. Rewrite the query differently, if you have control over the query. (You will have two queries that you want to compare against the same database.)
  2. Make changes to the SSAS database to follow a better design. (You will have one query to run against two databases.)

Regardless of which route you go, you should compare the performance before and after the changes to see how much you gained from the change.

Click through for more, including the code.
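
Shabnam’s program is presumably built around the standard XMLA ClearCache command; a minimal AMO sketch of that core looks like this, where the server and DatabaseID are assumptions (note that XMLA wants the database ID, which can differ from the display name):

    // Sketch: clear the SSAS cache for one database from C#.
    using Microsoft.AnalysisServices;

    var server = new Server();
    server.Connect("Data Source=localhost");

    string clearCache = @"
    <ClearCache xmlns=""http://schemas.microsoft.com/analysisservices/2003/engine"">
      <Object>
        <DatabaseID>AdventureWorks</DatabaseID>
      </Object>
    </ClearCache>";

    XmlaResultCollection results = server.Execute(clearCache);
    server.Disconnect();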

Changing Connection Strings In VertiPaq Analyzer

Shabnam Watson shows us how to change the connection string in VertiPaq Analyzer, a plugin for Excel:

While trying to set up VertiPaq Analyzer on a new computer, I ran into a problem where Excel was not letting me change the SSAS connection built into the workbook. It turns out I had missed one of the steps in the instructions in the workbook. As a result, when I got to Connection Properties, everything was grayed out and this message was at the bottom:

Some properties cannot be changed because this connection was modified using PowerPivot Add-in.

Read on to see how to fix this.  And check out VertiPaq Analyzer if you’re working heavily with Analysis Services Tabular or Power BI.
