Koen Verbeeck shows how to use nested display folders in Analysis Services and get Power BI to use them as well:
On the same day, I also found out it’s possible to nest display folders in SSAS. Which was even better, because I have a project with dozens of measures, and this functionality really makes a difference. All you have to do is put backslashes in to indicate where a new child folder begins.
This makes intuitive sense, so good on Microsoft for supporting this. Setting a measure’s Display Folder to Sales\Margins, for example, nests a Margins folder under Sales.
In the code below, the first thing we do is enable Ad Hoc Distributed Queries so we can try out the OPENROWSET method. The advantage of this method is not needing a linked server and being able to call it directly from T-SQL. Once we have that enabled, we write our query, and you’ll notice that we are essentially running two queries. The first query is the LDAP query inside the OPENROWSET function. Once those results are returned, we use another query to get what we want from the result set.

Here is where I want you to stop and think about things. If my LDAP query pulls back 50 attributes, or “columns” in SQL terms, and I tell it I only want 10 of them, what did I just do? I brought back a ton of extra data over the wire for no reason, because I’m not planning to use it. What we should see here is that the columns in both SELECT statements are the same. They do not, however, have to be in the same order, because LDAP does not guarantee returning results in the same order every time. The attribute or “column” order in your first SELECT statement determines the order of your final result set, which gives you the opportunity to alias anything if you need to.
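The pattern he describes looks something like this sketch; the domain, attributes, and aliases are placeholders:

    -- Enable ad hoc distributed queries (an advanced option)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
    RECONFIGURE;

    -- The outer SELECT lists the same attributes as the inner LDAP query,
    -- so nothing unused comes back over the wire; aliases go here
    SELECT sAMAccountName AS LoginName,
           displayName    AS FullName,
           mail           AS Email
    FROM OPENROWSET(
        'ADSDSOObject',
        'adsdatasource',
        'SELECT sAMAccountName, displayName, mail
           FROM ''LDAP://DC=contoso,DC=com''
          WHERE objectCategory = ''person'' AND objectClass = ''user''');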
You can query LDAP using SELECT statements, but the syntax isn’t T-SQL, so in my case, it was a bit frustrating getting the data I wanted out of Active Directory because I was used to T-SQL niceties. Nevertheless, this is a good way of pulling down AD data.
On Power BI Desktop, you don’t even have a choice – the only route to connect to data is via the “Get Data/Power Query” interface. Which is A-Okay with me. Even with Excel, I now connect to ANY data using Power Query.
Use Power Query to fill all your Get Data needs
Yes, ANY data. Even if I could connect to those data sources using Power Pivot and did not need any transformation, I still always use Power Query.
Power Query to get data, Power Pivot to model data. Avi then gives a few examples of scenarios, explaining where each fits in.
A non-blocking operator is one that consumes and produces rows at the same time. Nested loop joins are non-blocking operators.
A blocking operator is one that requires that all rows from the input have been consumed before a single row can be produced. Sorts are blocking operators.
Some operators can be somewhere between the two, requiring a group of rows to be consumed before an output row can be produced. Stream aggregates are an example here.
Gail ends by explaining that this is why “Which way do you read an execution plan, right-to-left or left-to-right?” has a correct answer: both ways. This understanding of blocking, non-blocking, and partially-blocking operators will also help you optimize Integration Services data flows by making you think in terms of streams.
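To see the distinction in a plan, compare a pair of queries like these (tables and columns are hypothetical, and the optimizer’s join choice depends on indexes and statistics):

    -- A Nested Loops join can stream: each outer row can produce
    -- output as soon as its inner matches are found
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c
        ON c.CustomerID = o.CustomerID;

    -- The Sort this ORDER BY introduces is blocking: no row is
    -- returned until the operator has consumed its entire input
    SELECT o.OrderID, o.OrderTotal
    FROM dbo.Orders AS o
    ORDER BY o.OrderTotal;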
The configuration of columns is perhaps the most critical part of the entire ETL process, as it helps us build mapping metadata for our ETL. In fact, regardless of whether or not SSIS/SSMS can detect delimiters, if you skip the Column Mappings section, your ETL will fail validation. In order to clarify how Ragged right formatted files work, I have gone a step back and used Figure 4 to display a preview of our fictitious Fruits transaction dataset in Notepad++. It can already be seen from Notepad++ that the file only has a row delimiter, in the form of CRLF.
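For reference, a Ragged right file looks something like this made-up sample: every column is fixed-width except the last one, which simply runs until the CRLF at the end of each row:

    FRT001 Apples     2016-06-01 Fresh fruit, first quality
    FRT002 Bananas    2016-06-01 Slightly overripe
    FRT003 Watermelon 2016-06-02 Seedless, extra large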
Read the whole thing.
Above, the GUI incorrectly bases the restore on a copy-only backup. After using the timeline dialog to point to an earlier point in time, you can see that the GUI has now changed so that it bases the restore on this potentially non-existent copy-only backup. Not a nice situation to be in if the person doing the restore hasn’t practiced using the T-SQL RESTORE commands.
It’s important to be able to write the relevant T-SQL queries to restore your database, just in case you run into one of these issues.
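As a minimal sketch, a hand-written point-in-time restore looks something like this (the database name, file paths, and STOPAT time are all hypothetical):

    -- Restore the full backup, leaving the database able to accept log restores
    RESTORE DATABASE SalesDb
    FROM DISK = N'D:\Backup\SalesDb_full.bak'
    WITH NORECOVERY, REPLACE;

    -- Roll the log forward, stopping just before the point of failure
    RESTORE LOG SalesDb
    FROM DISK = N'D:\Backup\SalesDb_log_1.trn'
    WITH NORECOVERY, STOPAT = N'2016-06-15T08:30:00';

    -- Bring the database online
    RESTORE DATABASE SalesDb WITH RECOVERY;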
What is this all about? It took me a bit of digging, but what it boils down to is that Microsoft made a fundamental change to how things are managed within Azure. You will now find documentation on two different deployment models: Classic Deployments and Resource Manager Deployments. These two different sets of PowerShell cmdlets reflect the two models: anything for Classic Deployments is handled by cmdlets in the Azure and Azure.Storage modules, while all the Resource Manager Deployment stuff is handled by the AzureRM* modules.
This is the first in a series and serves as an introduction to the topic.
Of course, you will need to know which time zone names you are allowed to use. Fortunately for us, this list is stored in the registry of the server. In other words, you can use whatever time zones are installed on the server. For a complete list, you can query the sys.time_zone_info catalog view.
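Something along these lines; the AT TIME ZONE conversion at the end, with its made-up timestamp, is an added illustration:

    -- Complete list of time zones known to this server
    SELECT name, current_utc_offset, is_currently_dst
    FROM sys.time_zone_info;

    -- Converting a UTC value to a named time zone
    SELECT CONVERT(datetime2(0), '2016-06-15 14:00:00')
           AT TIME ZONE 'UTC'
           AT TIME ZONE 'Central European Standard Time' AS LocalTime;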
If you work at a company with international dealings, you probably already have a time zone table somewhere, but this is a nice way of encapsulating possibly-slow time zone conversion and calculation operations.
Whilst down the rabbit hole, I discovered just in passing, via a beanstalk article, that there’s actually been a command line interface for PuTTY called plink. D’oh! This changed the whole direction of the solution to what I present throughout.

Using plink.exe as the command line interface for PuTTY, we can then connect to our remote network using the key pre-authenticated via Pageant. As a consequence, we can now use the shell() command in R to invoke plink. We can then connect to our database using the standard Postgres driver.
PuTTY is a must-have for any Windows box.
Just recently, a reply was made to the Connect item, highlighting the fact that the current values of the Data/Log/Temp and Backup directories – meaning the currently configured values – are exposed through the Server.ServerProperties collection. According to the answer, only public property values are exposed.
Using PowerShell, we can now retrieve the desired information from any given instance of Analysis Services. Doing so would look something like this:
It’s good to know that this information is available via PowerShell.