Press "Enter" to skip to content

Month: May 2022

Downloading a Report Authored in Browser as PBIX

James Bartlett solves a tricky problem:

Most of you have probably run into a situation where someone in your organization has authored a report in the Power BI web service, and now they want to make changes that can only be done with Power BI Desktop. So, you try to download the PBIX file from the Power BI web service, only to discover that you can’t, because if it wasn’t created as a PBIX, it can’t be downloaded as a PBIX. Infuriating!

Read on for a solution as well as a PowerShell function to make it easier.

Discovering Data Drift with DVC

Milecia McGregor looks at a version control system for ML projects (and data):

What happens when the machine learning model you’ve worked so hard to get to production becomes stale? Machine learning engineers and data scientists face this problem all the time. You usually have to figure out where the data drift started so you can determine what input data has changed. Then you need to retrain the model with this new dataset.

Retraining could involve a number of experiments across multiple datasets, and it would be helpful to be able to keep track of all of them. In this tutorial, we’ll walk through how using DVC, an open source version control system for machine learning projects, can help you keep track of those experiments and how this will speed up the time it takes to get new models out to production, preventing stale ones from lingering too long.

My team is working on integrating DVC. It's a really good project for analytics teams, as it extends the notion of version control to datasets and helps you tie together code (source control), models (tools like MLflow), and data.

Flink 1.15 Released

Joe Moser and Yun Gao announce Apache Flink 1.15:

Thanks to our well-organized and open community, Apache Flink continues to grow as a technology and remain one of the most active projects in the Apache community. With the release of Flink 1.15, we are proud to announce a number of exciting changes.

One of the main concepts that makes Apache Flink stand out is the unification of batch (aka bounded) and stream (aka unbounded) data processing, which helps reduce the complexity of development. A lot of effort went into this unification in the previous releases, and you can expect more efforts in this direction.

Apache Flink is not only growing when it comes to contributions and users, but also beyond its original use cases. We are seeing a trend towards more business/analytics use cases implemented in low-/no-code. Flink SQL is the feature in the Flink ecosystem that enables such use cases and this is why its popularity continues to grow.

Flink SQL is Feasel’s Law in action.
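
To ground that point, here is a minimal sketch of a Flink SQL windowed aggregation. The orders table and its columns are hypothetical, but the windowing table-valued function syntax is how recent Flink releases express this, and the same statement runs over a bounded (batch) or unbounded (streaming) source:

-- Hypothetical source table:
--   orders(order_time TIMESTAMP(3), product STRING, price DECIMAL(10, 2))
-- Revenue per product in ten-minute tumbling windows.
SELECT window_start,
       window_end,
       product,
       SUM(price) AS revenue
FROM TABLE(
    TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '10' MINUTES))
GROUP BY window_start, window_end, product;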

Fun with Natural Full Join

Lukas Eder shows off natural joins:

At first I thought of the UNION CORRESPONDING syntax, which doesn’t really exist in most SQL dialects, even if it’s a standard feature. But then, I remembered that this is again a perfect use case for NATURAL FULL JOIN, this time slightly differently from the above example where two tables are compared for contents. This time, we want to make sure the two joined tables never have matching rows, in order to get the UNION-like behaviour.

I wasn’t familiar with natural joins because they’re not available in SQL Server, though they are in Oracle, Postgres, and MySQL. Fun as Lukas’s blog post is, I could see natural joins going wrong in so many ways.
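
To make the trick concrete, here is a minimal PostgreSQL sketch; the tables t1 and t2 are hypothetical and assumed to share an identical column list. Tagging each side with a different src literal guarantees that no rows can match on all common columns, so the full join returns every row from both sides — the UNION-like behaviour Lukas describes:

-- t1 and t2 are assumed to have the same columns, e.g. (id, val).
-- src differs on each side, so the NATURAL join never matches and
-- every row from both tables comes through exactly once, tagged
-- with the table it came from.
SELECT *
FROM (SELECT 't1' AS src, a.* FROM t1 AS a) AS x
NATURAL FULL JOIN
     (SELECT 't2' AS src, b.* FROM t2 AS b) AS y;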

Finding Azure SQL DB Backup History

Taiob Ali takes us through a new DMV:

There is a new DMV, currently in preview, which returns information about backups of Azure SQL databases, except for the Hyperscale tier. Microsoft’s official documentation is here.

If you run the example query as-is from the above documentation, some of the columns do not make sense.

Taiob includes a better query which provides the type of information you’re used to seeing in on-premises SQL Server.
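
As a rough sketch of the kind of cleanup involved (this is not Taiob’s query), something like the following decodes the single-letter backup_type column. It assumes the DMV is sys.dm_database_backups, the Azure SQL Database backup history DMV in preview at the time of writing; since it is still in preview, verify the column names against the documentation:

-- Assumes the documented preview columns; names may change before GA.
SELECT  logical_database_name,
        CASE backup_type
            WHEN 'D' THEN 'Full'
            WHEN 'I' THEN 'Differential'
            WHEN 'L' THEN 'Transaction log'
        END AS backup_type_desc,
        backup_start_date,
        backup_finish_date,
        in_retention
FROM sys.dm_database_backups
ORDER BY backup_start_date DESC;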

Forced Parameterization and Local Variables

Erik Darling tries something out:

I think it was sometime in the last century that I mentioned I often recommend folks turn on Forced Parameterization in order to deal with poorly formed application queries that send literal rather than parameterized values to SQL Server.

And then, just like magick, I recommended it to someone who also has a lot of problems with Local Variables in their stored procedures.

They were curious whether Forced Parameterization would fix that, and the answer is no.

Shot down by the third paragraph of the intro. That’s rough. Still, click through for the demo.
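
A minimal T-SQL sketch of the distinction, using a hypothetical dbo.Users table: forced parameterization rewrites literals into parameters, but a local variable was never a literal in the first place, so the optimizer still falls back to a density-based estimate instead of sniffing the value:

-- Database-level setting; literals in ad hoc queries become parameters.
ALTER DATABASE CURRENT SET PARAMETERIZATION FORCED;
GO

-- The literal 2 is parameterized, so the plan compiles for a sniffed value.
SELECT COUNT(*) FROM dbo.Users WHERE Reputation = 2;
GO

-- A local variable is not a literal; forced parameterization leaves it
-- alone and the optimizer estimates from the density vector instead.
DECLARE @Reputation int = 2;
SELECT COUNT(*) FROM dbo.Users WHERE Reputation = @Reputation;
GO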

Creating an Info Button in Power BI

Kristyna Hughes shows how to create an info tooltip in Power BI:

The steps below will walk through how to add an information icon to the report, making a tooltip page containing your additional information, and enabling the tooltip to allow users to hover over the icon and see the information.

This can be quite useful, especially as it gets contextual information out of the way once users no longer need it. That’s important for dashboards you expect people to look at regularly.

Comparing Databricks to Synapse Spark Pools

Corrinna Peters makes comparisons:

There are different cases for using both depending on the specific needs and requirements. Synapse and Databricks are similar, but each has its own areas of specialty, or rather areas where it is above the other.

Data Lake – they both allow you to query the data from the data lake: Synapse uses either the SQL on-demand pool or Spark, and Databricks uses the Databricks workspace once you have mounted the data lake. If you are predominantly a SQL user and prefer the code and the BI developer feel, then Synapse would be the correct choice, whereas if you are a Data Scientist and prefer to code in Python or R, then Databricks would feel more at home.

Read on for a nuanced take. My less nuanced take is that Databricks beats the pants off of Synapse Spark pools in terms of performance. Synapse has a much better overall ecosystem, expanding beyond Spark and into T-SQL (in two flavors) and log/event analytics with KQL. If you’re spending 100% of your time in Spark and don’t care about the rest, use Databricks; if Spark is a relatively small part of your warehousing work, use Synapse.
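
As a small illustration of the SQL-first path, querying Parquet files in the lake from a Synapse serverless (on-demand) SQL pool looks something like this; the storage account, container, and path below are placeholders:

-- Placeholder storage URL; point it at your own data lake path.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://yourstorage.dfs.core.windows.net/yourcontainer/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS sales;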

Trying out AutoML in R

JLaw calls a timeout:

In this fourth (and hopefully final) entry in my “Icing the Kicker” series of posts, I’m going to jump back to the first post where I used tidymodels to predict whether or not a kick attempt would be iced. However, this time I see if using the h2o AutoML feature and the SuperLearner package can improve the predictive performance of my initial model.

The results are just about what I would have expected: they provide a good floor but a human with knowledge of the data and skill with techniques can still beat out-of-the-box AutoML processes. Still, knowing what that floor is can help a lot: run some AutoML tool for a few minutes/hours/days and you have an easy way of letting the business side know the expected model quality. If AutoML already exceeds expectations, you’re golden. If AutoML is close to expectations (on either end, just above or just below), you as a skilled human should be able to improve things a bit more, especially once you have a chance to analyze what the AutoML processes did. If AutoML is way below business expectations of quality, perhaps this isn’t the best project to spend time on. H/T R-Bloggers.
