In both of these scenarios you will not be able to save the package. So what the heck are you supposed to do?! Here’s where my tunnel vision (and panic) sets in. How was I supposed to get my SSAS objects processed?
I could always script out my processing tasks using SSMS and drop them into a SQL Agent job step. But I have multiple environments and multiple cubes, so each one would have to be hard-coded. Not a great idea, so scratch that.
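To illustrate why that gets ugly: a job step of this kind is just an XMLA Process command with the object IDs baked in. A minimal sketch (the database ID `MyOlapDb` is a placeholder, not from the original post) might look like:

```xml
<!-- Hypothetical XMLA ProcessFull command for a SQL Agent job step
     (step type: SQL Server Analysis Services Command).
     MyOlapDb is a placeholder database ID; every environment and every
     cube would need its own hard-coded copy of this step. -->
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Type>ProcessFull</Type>
  <Object>
    <DatabaseID>MyOlapDb</DatabaseID>
  </Object>
</Process>
```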
Click through to learn the best way to fix this.
Last week a new, free tool for analysing the performance of MDX queries on SSAS Multidimensional was released: Analysis Services Query Analyzer. You can get all the details and download it here:
…and here’s a post on LinkedIn by one of the authors, Francesco De Chirico, explaining why he decided to build it:
I’ve played around with it a bit and I’m very impressed – it’s a really sophisticated and powerful tool, and one I’m going to spend some time learning because I’m sure it will be very useful to me.
Read on for the rest of Chris’s review, including product screenshots.
In many cases, SSAS works efficiently with default settings right out of the box. However, when you have large databases, a substantial number of concurrent users, insufficient resources on your server, or best practices not followed during SSAS database design, you can run into performance bottlenecks and problems. In these scenarios, you need to know what to measure and how to measure it, what's normal for your environment (a benchmark), and you need some amount of historical measurements to be able to see the events that led up to a particular point of bad performance or failure. Once you have this data, you can improve your server's performance by addressing the problem(s).
This is a tour de force of an article, absolutely worth reading if you plan on dealing with Analysis Services at some point. Even if you don’t build your own tool, you’ll learn a lot about what drives SSAS performance and what indicates that there might be a problem.
Power BI Report Server
There’s a new release of Power BI Report Server available, and you can read all about it here:
The blog post highlights the fact that you can connect to SSRS shared datasets via OData – which is basically what I was talking about here.
This is the big one for me, but there are a few other product updates that Chris covers as well.
This really didn’t make any sense. However, in one of the Discover Begin/End events, the same number appeared again: 8192 (this time explicitly marked as a locale identifier). Hmmm, I had run into problems with weird locales before. I dug into my system, and yes, the English (Belgium) locale was lingering around. I removed it from my system and, lo and behold, I could log into SSAS with SSMS again. Moral of the story: if you get weird errors, make sure you have a normal locale on your machine, because apparently the SQL Server client tools go bonkers otherwise.
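If you want to check for stray locales on your own machine, the Windows language-list cmdlets are one way to do it. A sketch, assuming the unwanted locale is `en-BE` (English (Belgium)) — the tag here is my assumption, check your own list first:

```powershell
# List the installed user languages and their BCP-47 tags
Get-WinUserLanguageList | ForEach-Object { $_.LanguageTag }

# Rebuild the list without en-BE (English (Belgium)) and apply it
$list = Get-WinUserLanguageList | Where-Object { $_.LanguageTag -ne 'en-BE' }
Set-WinUserLanguageList $list -Force
```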
Worth reading the whole thing. And also maybe just using en-US for all locales; at least that one gets tested…
You can go to this link from Microsoft for more limitations.
There was also a difference in labeling between designing a Tabular Model in Visual Studio (SQL Server Data Tools – SSDT) and making a change to the Model from SQL Server Management Studio (SSMS).
In SSDT, the property for the Model is DirectQuery and the values are On and Off.
It’s certainly not the end of the world, but I can see it being a bit annoying to deal with.
SSAS 2017 introduces the 1400 compatibility level. Here are just some highlights of the new features:
New infrastructure for data connectivity and ingestion into tabular models with support for TOM APIs and TMSL scripting. This enables support for a range of additional data sources, and data transformation and mashup capabilities.
Support for BI tools such as Microsoft Excel enables drill-down to detailed data from an aggregated report. For example, when end-users view total sales for a region and month, they can view the associated order details.
Object-level security to secure table and column names in addition to the data within them.
Enhanced support for ragged hierarchies such as organizational charts and chart of accounts.
Various other improvements for performance, monitoring, and consistency with the Power BI modeling experience.
There’s plenty more where that came from (unless you’re a Multidimensional fan…), so click through for the details.
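For reference, the compatibility level is a property of the database object in TMSL. A minimal, hypothetical sketch (the database name and the near-empty model are placeholders, not from the release notes):

```json
{
  "createOrReplace": {
    "object": { "database": "MyTabularDb" },
    "database": {
      "name": "MyTabularDb",
      "compatibilityLevel": 1400,
      "model": {
        "culture": "en-US",
        "tables": []
      }
    }
  }
}
```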
One such example of an ad-hoc DBA task was when I had to delete about 600 partitions from a measure group that had thousands of partitions. Doing this manually would be ridiculous, so at the time I created a SQL script that used some dynamic T-SQL to create the delete commands in XMLA. XMLA has no “delete if exists” type syntax, so if I needed to run this again, this dynamic SQL output wouldn’t work. And so I decided that if I had to run the same task again I would write a PowerShell script that would run DSC-style and drop the partitions that were no longer required. And funnily enough, that is exactly what I had to do.
I knew I would be able to create a PowerShell script that used AMO to check whether a partition exists and drop it if it did. I also wanted the script to take into account any other partitions in other measure groups that might also need to be dropped. So I made sure the script uses PowerShell switches that can be included when calling the function; if a switch is included, the corresponding partitions in that measure group will be deleted. So you can run the script for one, some, or all of the measure groups in a cube.
Click through for the script.
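As an illustration of the AMO pattern described above (not the author's actual script), here is a minimal "drop if exists" sketch; the server, database, cube, measure-group, and partition names are all placeholders:

```powershell
# Load the AMO library (assembly loading may differ by SQL Server version)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

# Connect and navigate to the target measure group (placeholder names)
$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("localhost")
$db = $server.Databases.FindByName("MyOlapDb")
$mg = $db.Cubes.FindByName("MyCube").MeasureGroups.FindByName("MyMeasureGroup")

foreach ($name in @("Partition_201701", "Partition_201702")) {
    $p = $mg.Partitions.FindByName($name)
    if ($p -ne $null) {
        # "delete if exists": only drop the partition when it is present,
        # so re-running the script is harmless
        $p.Drop()
    }
}
$server.Disconnect()
```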
The best method to implement row-level security in a published Power BI model or SSAS Tabular model consumed from the Power BI service will depend on your data and requirements. The method I demonstrate here is one of the most flexible approaches and one that I commonly use in my projects.
Click through to watch the video.
The previous post looked at how to process SSAS Tabular models with TMSL. Since SQL Server 2016 added new JSON capabilities, let’s look at how to convert TMSL JSON to a table with OPENJSON. OPENJSON is a new function in SQL Server 2016 that, per Microsoft:
OPENJSON is a table-valued function that parses JSON text and returns objects and properties from the JSON input as rows and columns.
In short, OPENJSON converts JSON text to a table. Since TMSL is JSON, this function can convert an SSAS Tabular processing script into a table. This could be useful if you wanted to document an SSAS processing schedule.
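As a sketch of the idea, using a hypothetical TMSL refresh script rather than the one from the post:

```sql
-- A small, made-up TMSL refresh script held in a variable
DECLARE @tmsl nvarchar(max) = N'{
  "refresh": {
    "type": "full",
    "objects": [
      { "database": "MyTabularDb", "table": "Sales" },
      { "database": "MyTabularDb", "table": "Customer" }
    ]
  }
}';

-- Shred the "objects" array: one row per object being processed
SELECT j.[database], j.[table]
FROM OPENJSON(@tmsl, '$.refresh.objects')
     WITH ([database] nvarchar(128) '$.database',
           [table]    nvarchar(128) '$.table') AS j;
```

The `WITH` clause projects each array element's properties into typed columns, which is what turns the processing script into something you can join or store for documentation.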
That’s an interesting use of OPENJSON.