Unfortunately, in Tabular projects, there’s no direct way to add a description for perspectives – not even with BIDS Helper.
In Tabular projects, there is a (modal) popup window for managing perspectives…
The answer is not a great one, so hopefully the SSAS team picks up on this and improves the Tabular experience.
As I’m currently planning to migrate the entire BI architecture of one of my customers to the cloud, this made me think: can we already ditch SSAS as we know it in favor of Power BI? What are the alternatives?
To study that, I’ve put some diagrams together to show the possibilities of moving BI to the cloud. First, I’ll discuss the possible architectures, then the impossible architecture (but maybe the situation I was looking for).
One man’s opinion: there will be SSAS for Azure. I have no proof of this, and the nice part about having no proof is that I can throw out wild speculation without fear of violating NDA…. But to me, Power BI solves a different problem and acts in conjunction with SSAS rather than as its replacement. I also don’t see any technical reasons why SSAS couldn’t live in the cloud, and so that leads me to believe that it will be there eventually. But hey, it turns out wild speculation is occasionally wrong…
In an enterprise solution, you might have 10+ perspectives – some of which might have similar names – and without a clear description it will be confusing for users (especially new ones) to know which perspective is the correct one.
A better idea is to add a description/annotation property for each perspective where a more helpful text description can be provided indicating the business process, common types of analysis, etc. This would provide a metadata hook for self-service reporting tools (e.g. Excel, Power BI) as well as enterprise data cataloging solutions such as Azure Data Catalog.
Another helpful feature would be the ability to set the visibility of a perspective – or if you’re more familiar with the Tabular vernacular: “hide it from client tools”!
While you’re reading about perspectives, fill out Bill’s SSAS survey.
What’s clear from these examples is that trying to relate what’s going on in the query to what you see in Profiler is quite tricky even for seemingly simple queries; for most real-world queries it would be almost impossible to do so with total confidence. That said, when I’m tuning queries I usually comment out large parts of the code to try to isolate problems, thus creating much simpler queries, and I hope the value of this post will lie in you being able to spot similar patterns in Profiler to the ones I show here when you do the same thing. In part 3 of this series I’ll show you some practical examples of how all this information can help you tune your own queries.
Whenever I read Profiler, my next question is “Is there an extended event which covers this?”
There are various types of auditing in the Microsoft BI stack: SSRS, SharePoint, and SSAS each have their own, and SQL Server has its own auditing as well.
Today I am looking at SSAS auditing – you can find out more about it on TechNet.
Just because it’s in a cube doesn’t mean we shouldn’t be able to audit it.
When developing an SSAS Tabular model, you can choose one of two options for handling the underlying data:
In-Memory Mode (aka Imported Mode). Stores the data in the in-memory model, so all queries are satisfied by the data imported into the Tabular model’s storage engine. This requires the model to be processed for updated data to become available for reporting. This mode is conceptually analogous to MOLAP in SSAS multidimensional models (though the SSAS architecture differs significantly).
DirectQuery Mode. Leaves the data in the source and sends the queries to the underlying database. In this case there’s no processing and SSAS serves as a semantic model to improve the user experience. This mode is conceptually analogous to ROLAP in SSAS multidimensional models (though there are architectural / implementation differences between DirectQuery and ROLAP).
It looks like DirectQuery mode doesn’t fit all circumstances, but there are a few cases in which it makes a lot of sense.
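For compatibility level 1200 and later models, the choice between the two modes boils down to a single model property in the TMSL definition. A minimal sketch, with placeholder names and the tables left empty:

```json
{
  "name": "SalesModel",
  "compatibilityLevel": 1200,
  "model": {
    "defaultMode": "directQuery",
    "tables": [ ]
  }
}
```

Switching `"defaultMode"` to `"import"` gives you the in-memory behavior described above instead.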
Koen Verbeeck shows how to use nested display folders in Analysis Services and get Power BI to use them as well:
On the same day, I also found out it’s possible to nest display folders in SSAS, which was even better, because I have a project with dozens of measures and this functionality really makes a difference. All you have to do is put backslashes in the folder name to indicate where a new child folder begins.
This makes intuitive sense, so good on Microsoft for supporting this.
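In a Tabular 1200+ model, that backslash convention lands in the measure’s display folder property. A hedged sketch with made-up names (note that the backslash has to be escaped in JSON):

```json
{
  "name": "Internet Total Sales",
  "expression": "SUM('Internet Sales'[Sales Amount])",
  "displayFolder": "Sales\\Internet"
}
```

Client tools such as Excel and Power BI would then show this measure in an Internet subfolder nested under a Sales folder.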
Just recently, a reply was made to the Connect item, highlighting the fact that the current values of the Data/Log/Temp and Backup directories – meaning the currently configured values – are exposed through the Server.ServerProperties collection. According to the answer, only public property values are exposed.
Using PowerShell, we can now retrieve the desired information from any given instance of Analysis Services. Doing so would look something like this:
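A hedged sketch using the AMO library – the instance name is a placeholder, so adjust it for your environment:

```powershell
# Load the Analysis Services Management Objects (AMO) assembly
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect('localhost')   # or 'SERVERNAME\INSTANCENAME'

# Only public property values are exposed through this collection
$server.ServerProperties |
    Where-Object { $_.Name -in 'DataDir', 'LogDir', 'TempDir', 'BackupDir' } |
    Select-Object Name, Value

$server.Disconnect()
```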
It’s good to know that this information is available via PowerShell.
So obviously we brought back Internet Sales for all years captured in the cube. Now we need to understand the WHERE clause and how to use it properly in MDX.
The WHERE clause in MDX defines a slicer, limiting the results to the members you specify. The hierarchy used in the WHERE clause cannot be the same one used when defining your rows, which is why I use the date dimension here. The MDX query below depicts the WHERE clause and its syntax:
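Against the Adventure Works sample cube, such a query might look something along these lines (the cube, measure, hierarchy, and member names are assumptions):

```mdx
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    [Product].[Category].[Category].MEMBERS ON ROWS
FROM [Adventure Works]
-- [Date] is not used on rows, so it can slice the result in WHERE
WHERE ( [Date].[Calendar Year].&[2013] )
```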
I liked Dan’s introductory-level presentation on learning MDX, and this blog series is following along those same lines.
My story with this half-baked product (the dashboard you are about to see) is that I needed some way of tracking performance on a couple of Analysis Services (SSAS) query servers. There are a lot of good posts and talks out there about how to collect and store performance counters and SSAS logs, and I suggest you look into this, this or that if you need inspiration.
The current data set is about 200K rows, as I am sampling each server every five minutes.
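For the collection side, a hedged sketch of sampling a couple of counters on that five-minute cadence with PowerShell’s Get-Counter cmdlet – the counter paths, instance name, and output path are all assumptions:

```powershell
# Sample two counters every 5 minutes (300 s); twelve samples = one hour.
# Single quotes keep PowerShell from expanding the $TABULAR part of the path.
Get-Counter -Counter '\MSOLAP$TABULAR:Memory\Memory Usage KB',
                     '\Processor(_Total)\% Processor Time' `
            -SampleInterval 300 -MaxSamples 12 |
    ForEach-Object {
        $_.CounterSamples |
            Select-Object Timestamp, Path, CookedValue |
            Export-Csv -Path 'C:\PerfLogs\ssas_counters.csv' -Append -NoTypeInformation
    }
```

Appending each batch to a CSV keeps the collector simple; loading that file into a model is then a straightforward import.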
Both of these are valuable tools in a Microsoft BI environment.