Press "Enter" to skip to content

Author: Kevin Feasel

Qualitative Analysis with Dendrograms

Stephanie Evergreen explains what dendrograms are and when they can be useful visuals:

Dendrograms are not THE most common qualitative visual because they require data generated through a hierarchical cluster analysis. Cluster analysis can be a useful tool in analyzing qualitative data. By clustering groups of participants with similar qualitative codes, you can better understand your findings. According to Henry & team, this analysis can help “reveal things like participant motive and the reasons behind counterintuitive findings.”

Check out Henry’s article to learn more about the analysis. Here, let’s just focus on describing a dendrogram that could display those hierarchical cluster analysis findings. They can be a little confusing at first, especially since the x-axis has 100% closest to the y-axis when we aren’t used to seeing it that way. Walk through this example with us.

Click through for an example. If it’s confusing at first, read to the end, as I think the concrete example helps everything click.


Errors Updating Stats on Columnstore Indexes

Max Vernon walks us through some problems trying to update statistics on columnstore indexes:

The first error above would be seen if you have a SQL Server Agent job that updates statistics. The second error is how it looks in an SSMS Query window.

The error message claims that UPDATE STATISTICS can only be used on a columnstore index with the STATS_STREAM option. However, the Microsoft Docs entry for UPDATE STATISTICS offers very “thin” documentation for the option, with only these two tidbits:

<update_stats_stream_option>
Identified for informational purposes only. Not supported. Future compatibility is not guaranteed.

Columnstore indexes really don’t want their stats updated, apparently, and will fight you tooth and nail to prevent it.
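To make the failure concrete, here is a minimal sketch of the pattern that raises the error; the table and index names are hypothetical:

-- dbo.FactSales is assumed to have a clustered columnstore index named CCI_FactSales.
-- Naming the columnstore index's statistics object directly triggers the error
-- about the STATS_STREAM option quoted above.
UPDATE STATISTICS dbo.FactSales CCI_FactSales;

-- Updating at the table level is the usual fallback, assuming the table's
-- other (rowstore) statistics objects are the real target.
UPDATE STATISTICS dbo.FactSales;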


Long Weekend Learning Items

Kevin Chant has a few things you should check out if you’ve got several days of downtime:

Install and learn Docker
You can take the time over the Easter weekend to download and install Docker to see how you can use it. In addition, there are multiple posts online that you can use as a starting point.
However, to start with, you can read how to download and install it in detail here.

There are some good things on this list. Even if you don’t have a long weekend ahead of you, pick up some of these items gradually.


Quick Thoughts on Dot-Sourcing in Powershell Modules

Cody Konior wants to speed up module loading:

One of the more heated ideas is that you should combine all of your scripts into a massive file and execute it once instead. Currently dbatools uses a mix of these techniques in great detail and to great success.

I don’t want to use a single file because I don’t like the idea of a pre-compile build pipeline for a scripting language, so here’s an alternative, plus benchmarks of how these techniques stack up (taken on Windows 10 PS 5.1; obviously your results may differ).

Check out Cody’s example.


Downgrading a SQL Server Database

Dave Mason goes against the flow:

One of the recurring questions I see on Stack Overflow is “How do I restore a SQL Server backup to a previous version of SQL Server?” The answer, of course, is you don’t. Upgrading a database to a newer (major) version is a one-way ticket–at least as far as the database files and subsequent backups go. I recently found myself in a similar position as all those hapless Stack Overflow questioners. I had a customer that had migrated to a newer version of SQL and they wanted to roll back to the previous version. What to do?

A couple of thoughts immediately came to mind. There’s the SQL Server Import and Export Wizard and the Generate and Publish Scripts Wizard. Neither of these sounded convenient. In particular, generating a script with both schema and 500 GB of data sounded like a fruitless endeavor. Two other options sounded much more appealing. So I focused on those.

Dave has a couple of creative methods for effectively downgrading a database.
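As a side note on why the ticket is one-way: the compatibility level and the internal database (file) version are separate things, so lowering the former does not make the files restorable on an older instance. A minimal illustration, with a hypothetical database name:

-- Lowering the compatibility level does NOT downgrade the internal file version.
ALTER DATABASE CustomerDb SET COMPATIBILITY_LEVEL = 120;

-- The internal version is what blocks a restore to an older instance:
SELECT DATABASEPROPERTYEX('CustomerDb', 'Version');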


Centralized Kubernetes Management with Rancher

Praveen Sripati shows us how we can use Rancher to create a Kubernetes cluster anywhere, using AWS as an example:

In the previous blog we explored setting up a K8S Cluster on the AWS Cloud without using any additional software or tools. The Cloud Providers make it easy to create a K8S Cluster in the Cloud. The tough part is securing, fine-tuning, upgrading, access management, etc. Rancher provides centralized management of different K8S Clusters; these can be in any of the Clouds (AWS, GCP, Azure) or on-prem. More on what Rancher has to offer on top of K8S here. The good thing about Rancher is that it’s 100% Open Source and there is no Vendor Lock-in. We can simply remove Rancher and interact with the K8S Cluster directly.

I like this kind of tooling because it reduces cloud lock-in. For something like Kubernetes, where the whole point is orchestration of ephemeral containers, there’s a lot of benefit in being able to shift between services as needed.


Contrasting Flink with Kafka Streams

Sourabh Verma contrasts Apache Flink with Kafka Streams:

Initially, I would like you all to focus on a few questions before comparing the frameworks:
1. Is there any comparison or similarity between Flink and Kafka?
2. What could be better in Flink over Kafka?
3. Is it the problem or system requirement to use one over the other?

I’m generally happy with both technologies as well as Spark Streaming. But as Sourabh points out, there are differences to keep in mind.


Time Travel in Snowflake

Koen Verbeeck shows an interesting feature in Snowflake:

Time travel in Snowflake is similar to temporal tables in SQL Server: it allows you to query the history rows of a table. If you delete or update some rows, you can retrieve the status of the table at the point in time before you executed that statement. The biggest difference is that time travel is applied by default on all tables in Snowflake, while in SQL Server you have to enable it for each table specifically. Another difference is that Snowflake only keeps history for 1 day by default, configurable up to 90 days. In SQL Server, history is kept forever unless you specify a retention policy.

How does time travel work? Snowflake is built for the cloud and its storage is designed for working with immutable blobs. You can imagine that for every statement you execute on a table, a copy of the file is made. This means you have multiple copies of your table, for different points in time. Retrieving time travel data is then quite easy: the system has only to search for the specific file that was valid for that point in time. Let’s take a look at how it works.

It looks interesting, though the “Snowflake doesn’t have backups like you know them in SQL Server” part gives pause.
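For a feel of the syntax, here is a minimal sketch of time travel queries; the table name and query ID are hypothetical placeholders:

-- The table as it looked five minutes ago:
SELECT * FROM orders AT (OFFSET => -60 * 5);

-- The table as it looked just before a specific statement ran:
SELECT * FROM orders BEFORE (STATEMENT => '<query-id>');

-- Raise the retention window from the 1-day default (configurable up to 90 days):
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 90;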


Emailing Data From Power BI Via PowerApps and Flow

Erik Svensen investigates how to email specific records of data from Power BI:

One of my clients called me the other day and asked whether it was possible to export the order that was selected on the current report page – as she wanted to send the information to another user. I explained the export data feature from the visual action menu, but she didn’t want to download a file, then locate it, then switch to Outlook, click new mail, type the correct e-mail address, and attach the file – that was not very Power-like – too much clicky clicky – because all the data was actually available when she had filtered the report for that particular record – the e-mail address she wanted to send the data to and of course the data she saw on the screen.

Hmm… Let’s see how we can use the PowerPlatform stack to solve this requirement.

Erik got everything working, so check it out.
