Pivot is simple to do, but you have to be careful about the nature and quality of the source data set. If it is normal to have a name repeated in the source data, then an aggregation needs to be set properly. If you expect each name to appear only once, then setting it to Do Not Aggregate works better, because you can then use Power Query's error handling mechanisms to deal with any duplicates.
This is a good sight easier than writing a bunch of SUM(CASE) statements or using the PIVOT operator in T-SQL.
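To make that concrete, here is a minimal M sketch of both behaviors using Table.Pivot (the sample table and column names are my own, not from the post):

```
let
    Source = #table(
        {"Name", "Month", "Sales"},
        {{"Alice", "Jan", 10}, {"Alice", "Jan", 5}, {"Bob", "Feb", 7}}
    ),
    // Passing an aggregation function (List.Sum here) handles repeated names;
    // omitting it (the "Do Not Aggregate" option) produces an error value for
    // any duplicate key, which you can then trap with try ... otherwise.
    Pivoted = Table.Pivot(Source, List.Distinct(Source[Month]), "Month", "Sales", List.Sum)
in
    Pivoted
```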
In M, the DateTime.LocalNow() function returns the current system date and time at the time the function is called – which means that you could call it multiple times in the same query and get different results. There’s also the DateTime.FixedLocalNow() function which also returns the system date and time; the difference between the two is, as the documentation says:
This value is fixed and will not change with successive calls, unlike DateTime.LocalNow, which may return different values over the course of execution of an expression.
This is good to know for those query-tuning scenarios.
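A quick sketch of the difference (paste it into a blank query to see for yourself):

```
let
    // Each call to DateTime.LocalNow() may return a different value
    // over the course of a single evaluation...
    Now1 = DateTime.LocalNow(),
    // ...while DateTime.FixedLocalNow() is pinned for the duration
    // of the expression's evaluation.
    Fixed1 = DateTime.FixedLocalNow(),
    Fixed2 = DateTime.FixedLocalNow(),
    FixedValuesMatch = Fixed1 = Fixed2  // true
in
    FixedValuesMatch
```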
Personally, I am still struggling to see where Power BI fits in my organization. I am the only BI professional here, so I have to do every bit of the process, from data modeling to building front-end applications. Right now, my organization has a data warehouse with some processes in the warehouse, a Datazen environment, and an SSRS environment. There are no SSAS cubes, nor any power users using Power Pivot to analyze data.
Data warehouses serve a particular role in an environment: they answer known business questions and give consistent answers across an organization. I see Power BI as a tool with a few separate uses depending upon organizational size and maturity. In shops which are not large enough, well-established enough, or rich enough in non-IT business intelligence expertise, I think its best use is BI developers building beautiful dashboards for business data consumers, feeding from existing systems (including data warehouses). In that sense, it is a complement to a Kimball-style data warehouse.
As a Modern Excel enthusiast (Modern Excel = Excel + Power Pivot + Power Query = Magic!), I found myself hesitant to embrace the new world of Power BI. Many of those inhibitions have fallen away as Microsoft has continued to innovate and deliver an outstanding experience with Power BI. But I could not get over the feeling that going from Excel to Power BI felt like a one-way street.
You could upload/import an Excel Power Pivot model into Power BI Desktop or onto PowerBI.com, but then you could not get it back into Excel. In the first blog post on the site (yes, the very first), Rob called us gridheads, and that we are.
Excel is still the top tool for business users. Anything you can do to make life easier for your Excel users makes life easier for you as well.
I’ve mapped suburbs to County because that was the lowest level I’ve found in the data category list for geographic information (Place and Address cannot be used for Filled Map at the time of writing this post), and I got nothing! Not even a small area on the map. I then tried removing the district and putting in suburb, region, country format with County as the data category, which didn’t help either.
I’ve found that I can map some locations based on Postal Code, as you see below. However, Postal Code is not always a good distinguishing field for a region, as multiple regions might share a postal code.
Filled maps have the potential to be powerful tools, but they aren’t perfect. Check out Reza’s post for the full scoop.
While it’s fairly common to need to load fixed-width files using Power Query or Power BI (and there’s a nice walkthrough of how to do this here), occasionally you might want to use Power Query and Excel to create a fixed-width output for another system, or maybe to create some test data. You might not want to do it often, but I can imagine that when/if Power Query is integrated into SSIS this will be a slightly less obscure requirement; at the very least, this post should show you how to use a couple of M functions that are under-documented.
I don’t see this being a particularly common request, but I guess I can see some scenario in which we’re loading data into a legacy system.
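As a sketch of the idea (Text.PadEnd is my guess at the kind of under-documented function involved, and the sample data is made up):

```
let
    Source = #table({"Name", "Amount"}, {{"Alice", 1200}, {"Bob", 35}}),
    // Pad each field to a fixed width with spaces, then join the fields
    WithLines = Table.AddColumn(
        Source,
        "Line",
        each Text.PadEnd([Name], 10) & Text.PadEnd(Text.From([Amount]), 8)
    ),
    // Combine the rows into a single fixed-width text block
    Output = Text.Combine(WithLines[Line], "#(lf)")
in
    Output
```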
Having recently been rebuilding my machine, I finally decided to automate the process of installing the software I need.
This was a life saver, as I was reinstalling a few times to try to figure out why I wasn’t getting sound on my external monitor. I was gradually uninstalling everything until I found out that it was Hyper-V that was causing the problem.
The outcome meant I was installing Power BI a lot and had to automate it.
This looks like the first step toward a Chocolatey script.
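As a hedged sketch of what such a script could look like (the package IDs are assumptions; check the Chocolatey gallery for the current names):

```shell
# Assumes Chocolatey is already installed (see chocolatey.org/install)
choco install powerbi -y                        # Power BI Desktop
choco install sql-server-management-studio -y   # SSMS, if you need it too
```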
Koen Verbeeck shows how to use nested display folders in Analysis Services and get Power BI to use them as well:
On the same day, I also found out it’s possible to nest display folders in SSAS, which was even better, because I have a project with dozens of measures and this functionality really makes a difference. All you have to do is put backslashes in the folder name to indicate where a new child folder begins.
This makes intuitive sense, so good on Microsoft for supporting this.
On Power BI Desktop, you don’t even have a choice – the only route to connect to data is via the “Get Data/Power Query” interface. Which is A-Okay with me. Even with Excel, I now connect to ANY data using Power Query.
Use Power Query to fill all your Get Data needs
Yes, ANY data. Even if I could connect using Power Pivot to those data sources and did not need any transformation – I still always use Power Query.
Power Query to get data, Power Pivot to model data. Avi then gives a few examples of scenarios, explaining where each fits in.
When comparing the features of the online client with the desktop version of Power BI, there is one very obvious difference: there is no way to create a data model using the online client. The online client is designed to connect to an existing online source such as Salesforce or Azure SQL Database. If you are using an existing model, there is no need to create one. When using the enterprise gateway, which connects to an on-premises database such as SQL Server, SSAS, or HANA, the data model is contained within the database exposed via the gateway, so again no reason exists to create a data model. Report creation can occur using either the online client or the desktop, as there is no compelling technical reason that I am aware of which would determine where the report is created.