Press "Enter" to skip to content

Category: Analysis Services

Speeding Up Excel Pivot Table Performance

Chris Webb shows how you can improve performance of Excel pivot tables hitting Analysis Services Multidimensional models:

Back in 2016 I wrote the following blog post about changes to the way Excel 365 generated MDX queries for PivotTables connected to Analysis Services, Power Pivot/the Excel Data Model and Power BI datasets:

https://blog.crossjoin.co.uk/2016/07/08/excel-2016-pivottable-mdx-changes-lead-to-big-query-performance-gains/

I know it sounds boring and not something you need to worry about, but trust me, this is important – these changes solved the vast majority of Excel PivotTable performance problems that I encountered when I was a consultant, so you should read the above post before continuing.

Unfortunately, earlier this year these changes had to be partially rolled back because in some rare cases the queries generated returned incorrect results; this means that you may find that values for subtotals and grand totals are again being returned even when they aren’t being displayed. The good news is that you should still be able to get the improved performance with a few minor tweaks.

Read on to see what those tweaks are.

Finding Columns and Measures Used in SSAS Tabular

Kasper de Jonge uses Power BI to figure out which columns are being used in an SSAS Tabular model:

I got an interesting question about being able to figure out which measures and columns are being used for all tabular models in a server, either by known reports or by self service users using Power BI. To solve this I decided to use Power BI :).

To get the required information I decided to capture the queries that are being sent to AAS over a period of time and parse the results in Power BI Desktop. In this post I describe how to do it.

Check it out, and also read the comments for an additional tip from Bill Anton.
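
The heart of the approach is capturing the query text sent to Azure Analysis Services (via a trace or log) and then scanning it for column and measure references. Kasper does the parsing in Power BI Desktop; purely as an illustration of the idea, here is a minimal Python sketch that pulls 'Table'[Column] and bare [Measure] references out of captured query text with regular expressions. The sample queries and names below are invented, and the regexes are only a rough heuristic.

```python
import re
from collections import Counter

# Captured query text as it might appear in a trace
# (the queries below are invented for illustration).
captured_queries = [
    "EVALUATE SUMMARIZECOLUMNS('Date'[Calendar Year], \"Sales\", [Total Sales])",
    "EVALUATE TOPN(10, VALUES('Product'[Product Name]), [Total Sales], DESC)",
]

# 'Table'[Column] style references
column_ref = re.compile(r"'([^']+)'\[([^\]]+)\]")
# Bare [Name] references not preceded by a table -> usually measures
measure_ref = re.compile(r"(?<!['\w\]])\[([^\]]+)\]")

column_counts = Counter()
measure_counts = Counter()

for query in captured_queries:
    for table, column in column_ref.findall(query):
        column_counts[f"{table}[{column}]"] += 1
    for name in measure_ref.findall(query):
        measure_counts[name] += 1

print("Columns referenced:", dict(column_counts))
print("Measures referenced:", dict(measure_counts))
```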

Column Masking with SSAS and Power BI

Dustin Ryan digs deep into the bag of tricks:

Last week I was asked to tackle a requirement by a customer adopting Analysis Services to enable data exploration and ad hoc analysis by their users. One of their requirements was to secure columns based on a grant related to a cost center. For example, a grant has several attributes, with some attributes being considered “sensitive” and other attributes considered “non-sensitive”. Non-sensitive grant attributes would be accessible to all users, while a subset of the attributes in the grant table considered “sensitive” would be accessible only to users related to the corresponding cost center. The challenge here is that while Analysis Services supports column-level security, dynamic column-level security is not supported. So my colleague and friend, the great Steve Pontello, and I put our heads together to address the requirement.

Read on to see how they did it and the resulting solution’s limitations.
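
To make the requirement itself concrete: everyone sees non-sensitive grant attributes, while sensitive attributes are visible only to users tied to the grant’s cost center. The sketch below expresses just that masking rule in plain Python over two hypothetical tables (a grant table and a user-to-cost-center mapping); it is not Dustin and Steve’s Analysis Services solution, only a way to state the rule precisely.

```python
import pandas as pd

# Hypothetical tables - names and values invented for illustration.
grants = pd.DataFrame({
    "GrantID": [1, 2],
    "CostCenter": ["CC100", "CC200"],
    "Title": ["STEM Outreach", "Clean Water"],   # non-sensitive attribute
    "AwardAmount": [250000, 900000],             # sensitive attribute
})
user_cost_centers = pd.DataFrame({
    "UserPrincipalName": ["alice@contoso.com", "bob@contoso.com"],
    "CostCenter": ["CC100", "CC200"],
})

SENSITIVE_COLUMNS = ["AwardAmount"]

def mask_for_user(upn: str) -> pd.DataFrame:
    """Return the grant table with sensitive columns blanked out for
    rows whose cost center the user is not related to."""
    allowed = set(user_cost_centers.loc[
        user_cost_centers["UserPrincipalName"] == upn, "CostCenter"])
    masked = grants.copy()
    not_allowed = ~masked["CostCenter"].isin(allowed)
    masked.loc[not_allowed, SENSITIVE_COLUMNS] = None
    return masked

print(mask_for_user("alice@contoso.com"))
```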

Reviewing Analysis Services DMVs in SSMS

Brett Powell has a project to show Analysis Services DMVs in SQL Server Management Studio:

Just like with registered servers in SSMS, if you’re a BI administrator or developer who regularly needs to access and analyze Analysis Services models, whether that’s Power BI Premium, Azure AS, or SSAS, then quick access to the available DMVs aids your productivity.

For example, you may want to quickly retrieve the DAX measures in a model including their expressions, the columns of a table, the largest columns in terms of memory consumption or cardinality, the row-level security roles and role memberships, etc. Obviously it wouldn’t be efficient to open the Power BI Desktop file or the Visual Studio solution to obtain this information, and a tool like the Tabular Model Schema Reference v2.0 might be more than you need for short-term and light ad hoc scenarios.

Click through for a link to Brett’s project as well as instructions on how to use it and a quick demo.
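
If you only want the raw DMV output, the queries themselves are plain SELECT statements against $SYSTEM views (TMSCHEMA_MEASURES, TMSCHEMA_COLUMNS, TMSCHEMA_ROLES, and so on) that you can run from an MDX query window in SSMS. As a rough sketch of scripting the same thing, here is a Python example that assumes the pyadomd package (which wraps ADOMD.NET, so the ADOMD client libraries must be installed); the server and database names are placeholders.

```python
# Minimal sketch: run a few common Tabular DMVs from Python.
# Assumes pyadomd is installed; connection details below are placeholders.
from pyadomd import Pyadomd

CONN_STR = (
    "Provider=MSOLAP;"
    "Data Source=asazure://westus.asazure.windows.net/yourserver;"
    "Initial Catalog=YourModel;"
)

DMV_QUERIES = {
    "measures":         "SELECT * FROM $SYSTEM.TMSCHEMA_MEASURES",
    "columns":          "SELECT * FROM $SYSTEM.TMSCHEMA_COLUMNS",
    "roles":            "SELECT * FROM $SYSTEM.TMSCHEMA_ROLES",
    "role_memberships": "SELECT * FROM $SYSTEM.TMSCHEMA_ROLE_MEMBERSHIPS",
    "column_segments":  "SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS",
}

with Pyadomd(CONN_STR) as conn:
    for name, query in DMV_QUERIES.items():
        # execute() returns the cursor, which also works as a context manager
        with conn.cursor().execute(query) as cur:
            rows = cur.fetchall()
            print(f"{name}: {len(rows)} rows")
```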

Modeling Semi-Additive Measures

Paul Poco shows a couple techniques for modeling semi-additive measures in Analysis Services and Power BI:

As mentioned earlier, the most commonly encountered approach is Option 2, the snapshot fact table. The main drawback of this approach is that the fact table’s size will grow extremely fast. For example, if you want to calculate the headcount in a company with 10,000 employees on average, and you want 5 years of historical data, you will add 10,000 rows per day to your fact table – that gives you (10,000 * 365 * 5 =) 18,250,000 rows after 5 years.  

If you used the first approach, Option 1, the fact table would be (10,000 * 5 =) 50,000 rows after 5 years, assuming your employees change position or quit the company once a year, on average. 

The snapshot fact table (Option 2) is (18,250,000 / 50,000 =) 365 times bigger. On the bright side, as the data is very repetitive, you might get a very good compression ratio on these tables.  

Check it out. Semi-additive measures are not as common as additive measures, but you’re liable to have a couple of them in your data model.
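
The size comparison in the excerpt is easy to sanity-check; here is the same arithmetic written out as a quick Python calculation (the one-change-per-employee-per-year figure is the assumption Paul uses for Option 1).

```python
# Row-count arithmetic from the excerpt: daily snapshot fact table (Option 2)
# versus transaction-style fact table (Option 1) for headcount.
EMPLOYEES = 10_000                    # average headcount
YEARS = 5
DAYS_PER_YEAR = 365
CHANGES_PER_EMPLOYEE_PER_YEAR = 1     # assumption: one position change/exit per year

snapshot_rows = EMPLOYEES * DAYS_PER_YEAR * YEARS                       # Option 2
transaction_rows = EMPLOYEES * CHANGES_PER_EMPLOYEE_PER_YEAR * YEARS    # Option 1

print(f"Snapshot fact table:    {snapshot_rows:,} rows")     # 18,250,000
print(f"Transaction fact table: {transaction_rows:,} rows")  # 50,000
print(f"Size ratio:             {snapshot_rows // transaction_rows}x")  # 365x
```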

AutoSetDefaultInitialCatalog in Analysis Services

Chris Webb goes on a journey to understand what the AutoSetDefaultInitialCatalog property in Analysis Services does:

In Shabnam Watson’s recent blog post on a bug she found when trying to create a Live connection from Power BI to Analysis Services, she mentioned that the AutoSetDefaultInitialCatalog server property could be used to solve her problem. This piqued my interest because I’d seen this property but had no idea what it did exactly or why it was there. Luckily, now that I work for Microsoft, it’s even easier for me to find out about things like this from the dev team, and Akshai Mirchandani was able to help.

First of all, what does it do? The documentation on this property has just been added here, and this is what it says:

Chris has connections, and we get to benefit from that.

Power BI Desktop to SSAS Error with No Models

Shabnam Watson takes us through various cases where you might hit an error connecting your Power BI Desktop installation to an SSAS server:

I recently ran into an error while connecting live from Power BI Desktop to an SSAS server. Everything was on-premises. There was no cloud component involved. I had full admin rights to the SSAS server and could see all databases and models from SSMS and other tools, but was getting this error from PBI Desktop.

The Server you’re trying to connect to doesn’t have any models or you don’t have permissions to access them.

If I explicitly specified the name of the database I wanted to connect to, then it would connect and show me the database contents, but if I did not specify the database name, I would get this error. The question is why?

Read on for several possible answers.

Connecting to Redshift from Azure Analysis Services

Gilbert Quevauvilliers shows how we can connect to Amazon Redshift from Azure Analysis Services:

I am busy working with a customer and had a challenge when using Azure Analysis Services to connect to Amazon Redshift via an ODBC connection.

The first issue that I encountered was the following error: OLE DB or ODBC error: [Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application; AWS PROD. This led me to a few websites and the one that got me to my solution was Tabular: Error while using ODBC data source for Importing Data

Below are the steps showing how I installed and configured everything and got the connection and refresh working.

Read on for those steps.
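
For background, that “architecture mismatch” message from the ODBC Driver Manager almost always means a 32-bit process is being pointed at a 64-bit DSN/driver or vice versa. As a side note rather than part of Gilbert’s steps, a quick Python check like the sketch below (assuming pyodbc is installed on the machine making the connection, e.g. the gateway box) shows the bitness of the testing process, the drivers registered for that bitness, and whether the DSN from the error message connects; the credentials are placeholders.

```python
# Quick sanity check for ODBC driver/DSN bitness issues.
# Assumes pyodbc is installed; DSN name comes from the error message,
# credentials are placeholders.
import struct
import pyodbc

# Bitness of this Python process: a 64-bit process can only use 64-bit
# DSNs/drivers, a 32-bit process can only use 32-bit ones.
print(f"This process is {struct.calcsize('P') * 8}-bit")

# ODBC drivers registered for this bitness
print("Installed ODBC drivers:", pyodbc.drivers())

# Try the DSN used by the Analysis Services data source ("AWS PROD" in the post)
try:
    cnxn = pyodbc.connect("DSN=AWS PROD;UID=your_user;PWD=your_password", timeout=10)
    print("Connected to:", cnxn.getinfo(pyodbc.SQL_DBMS_NAME))
    cnxn.close()
except pyodbc.Error as err:
    print("Connection failed:", err)
```

The check only tells you about the process that runs it, but it is a fast way to confirm which drivers and DSNs are registered for which architecture.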

Calculated Columns and Memory Usage in Analysis Services

Teo Lachev troubleshoots a customer issue:

Scenario: A client reports a memory spike during processing. They have a Tabular semantic model deployed to Azure Analysis Services. They fully process the model daily. The model normally takes less than 50 GB RAM but during processing, it spikes five times and Azure Analysis Services terminates the processing task complaining that it “reached the maximum allowable memory in our pricing tier”. Normally, fully processing the model should take about twice the memory but five times?

Teo gives us the explanation for this problem, as well as a recommendation on how to fix it. (As for why roughly double is the expected baseline: a full process keeps the old copy of the model online and queryable while the new copy is built.)

Naive Bayes Predictions with Analysis Services

Dinesh Asanka shows how you can use the Naive Bayes algorithm in an Analysis Services data mining project:

Microsoft Naive Bayes is a supervised learning classification algorithm. The data set can be bi-class, which means it has only two classes: whether the patient is suffering from dengue or not, or whether your customers are bike buyers or not, are examples of bi-class data sets. There can be multi-class data sets as well.

Let us take the example we discussed in the previous article, the AdventureWorks bike buyer example. In this example, we will use the vTargetMail view in the AdventureWorksDW database.

In the data mining algorithm wizard, select the Microsoft Naive Bayes algorithm, as shown in the image below.

Of mild interest is that it’s a two-class classifier here, but it’s a multi-class classifier in the (much) later ML.NET.
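
To make the bi-class idea concrete outside of the SSAS wizard, here is a tiny scikit-learn sketch of the same kind of bike-buyer prediction. It illustrates Naive Bayes in general rather than the Microsoft Naive Bayes implementation, and the columns and rows are invented to loosely resemble vTargetMail-style attributes.

```python
# Toy Naive Bayes "bike buyer" classifier - scikit-learn, not Microsoft Naive Bayes.
# Data and feature choices are invented for illustration.
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

# Features: [marital status, gender, home owner]; target: bike buyer (1) or not (0)
X_raw = [
    ["Married", "M", "Yes"],
    ["Single",  "F", "No"],
    ["Single",  "M", "No"],
    ["Married", "F", "Yes"],
    ["Single",  "F", "Yes"],
    ["Married", "M", "No"],
]
y = [0, 1, 1, 0, 1, 0]

# Encode the categorical attributes as ordinal codes for CategoricalNB
encoder = OrdinalEncoder()
X = encoder.fit_transform(X_raw)

model = CategoricalNB()
model.fit(X, y)

new_customer = encoder.transform([["Single", "M", "Yes"]])
print("Predicted bike buyer:", model.predict(new_customer)[0])
print("Class probabilities:", model.predict_proba(new_customer)[0])
```

With scikit-learn’s implementation, passing more than two distinct labels in y makes the same code multi-class without any other change.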
