Press "Enter" to skip to content

Curated SQL Posts

ChartAccent Power BI Custom Visual

Devin Knight continues his Power BI custom visuals series:

In this module you will learn how to use the ChartAccent LineChart Custom Visual. This visual is a custom line chart that allows you to annotate individual data points, data series and ranges.

This visual runs the risk of getting very “noisy,” but that can be fine if you’re building a presentation and want a somewhat complicated, annotated visual.


Streaming Performance Counters Into Power BI

Chris Koester shows how to load Performance Counters (i.e., what Perfmon displays) into Power BI in near real time:

In the previous post I showed how you can Push Data into Power BI Streaming Datasets with C#. That example used dummy data. In this post I’ll show how to push performance counter data into a Power BI Streaming Dataset as a real world example. This scenario allows for monitoring a computer or application in near real time in the browser.

I won’t go through the steps of creating a Power BI Streaming Dataset. You can reference my previous post if you need instructions. I will note that the value names that you choose in the Streaming Dataset must match the C# property names for the script to work. This is noted in the code comments as well.

Check it out.
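Chris’s example is in C#, but the push pattern itself is small. Here is a rough Python sketch of the same idea; the push URL and the column names (timestamp, cpuPercent) are placeholders that have to match whatever you defined in the streaming dataset, and psutil stands in for the Perfmon counters Chris reads.

import datetime
import time

import psutil
import requests

# Placeholder push URL copied from the streaming dataset you created.
PUSH_URL = "https://api.powerbi.com/beta/<workspace>/datasets/<dataset>/rows?key=<key>"

while True:
    row = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "cpuPercent": psutil.cpu_percent(interval=None),
    }
    # The streaming dataset endpoint expects a JSON array of rows.
    requests.post(PUSH_URL, json=[row])
    time.sleep(1)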


Associative Purchasing Behavior With DAX

Matt Allington solves a particular associative purchasing behavior problem:

The requirement is to be able to select a single customer from a list of customers and see what products they have purchased.  Then the next thing is to be able to compare sales of this same group of products across all customers while ignoring sales of all other products.  Stated another way, the report should show sales from all customers but only for the products purchased by the original selected customer.  Now there is a “cheats” solution to this problem using visual level filters – more on that later.  The purpose of this article is to show a DAX-only solution.

Read Matt’s answer, but also check out the comments for an even better solution if you’re using a proper star schema.
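To make the requirement concrete, here is a small pandas sketch of the same filter logic (this is not Matt’s DAX, just an illustration of what the measure has to do, using made-up column names).

import pandas as pd

# Toy sales table; CustomerKey / ProductKey / Amount are made-up column names.
sales = pd.DataFrame({
    "CustomerKey": [1, 1, 2, 2, 3],
    "ProductKey":  [10, 20, 10, 30, 20],
    "Amount":      [100, 50, 80, 40, 60],
})

selected_customer = 1

# Products the selected customer has purchased.
their_products = set(sales.loc[sales["CustomerKey"] == selected_customer, "ProductKey"])

# Sales from all customers, but only for those products.
result = sales[sales["ProductKey"].isin(their_products)].groupby("ProductKey")["Amount"].sum()
print(result)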


Using Relog To Load Perfmon Counters Into SQL Server

Raul Gonzalez shows how to use the Relog executable to bring Perfmon counters into SQL Server:

Relog is an application and, as such, it is able to connect to a SQL Server. In this case, the way to connect is via an ODBC driver, hence we have to create a new DSN on the computer where we want to run Relog.exe, which does not necessarily have to be the same server where SQL Server is installed.

If we click Start and start typing ODBC, it will show the two versions available in modern OSes, 32-bit and 64-bit; we need to choose the 64-bit version to make it work.

Since relog.exe is a bit old, the only driver that actually works is the oldest one, but it works, so we choose the SQL Server 10.00.xxx one.

He also provides a bonus script at the end, which builds a dynamic pivot table of your counter information.
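If you just want to see the shape of the command, a relog invocation with the SQL output format looks roughly like this; the input file, DSN name, and counter log name are all placeholders:

relog "C:\PerfLogs\MyCounters.blg" -f SQL -o "SQL:MyPerfDSN!MyCounterLog"

Relog then creates the CounterData, CounterDetails, and DisplayToID tables in the target database, which you can query or pivot from there.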


Automatic Tuning In SQL Server

Grant Fritchey is a fan of SQL Server 2017’s automatic tuning feature:

The core of automatic tuning at this point in time (because I’m sure it’s going to evolve) is the ability of the query engine to spot when a query has generated a new plan and that new plan is causing performance to degrade. This is known as a regression in the plan. It comes from bad parameter sniffing, changes in statistics, cumulative updates, or the big notorious one, the cardinality estimator introduced in SQL Server 2014 (it’s been almost four years, I’m not calling it new any more). Spotting a plan regression prior to SQL Server 2016 and the introduction of the Query Store was a major pain. However, now it’s easy. You can spot them by reading the data collected. Further, Microsoft can spot them by reading the data collected, and it does.

If the engine sees that a plan is causing a regression (you have to have Query Store enabled for this), then it will write out a suggestion for fixing it to the new DMV, sys.dm_db_tuning_recommendations. If you further enable automatic tuning on your database, then SQL Server can automatically substitute the last good plan for you, fixing the problem. Further, SQL Server monitors this fix and if, over time, it’s clear that the forced plan is causing problems, it will unforce the plan, again automagically.

Click through for more information, including a query to read from the tuning recommendations DMV.
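If you want to poke at this yourself, here is a minimal sketch (Python with pyodbc; the connection details are placeholders, and it assumes SQL Server 2017 with Query Store already enabled) that turns on automatic plan correction and reads the recommendations DMV.

import pyodbc

# Placeholder connection details.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# Turn on automatic plan correction for the current database.
cur.execute(
    "ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);"
)

# Read whatever the engine is currently recommending.
cur.execute(
    "SELECT name, reason, score, state FROM sys.dm_db_tuning_recommendations;"
)
for row in cur.fetchall():
    print(row.name, row.reason, row.score, row.state)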


Unit Testing Spark Streaming DStreams

Anuj Saxena gives an example of using StreamingSuiteBase to build unit tests for DStreams in Spark Streaming:

So what’s the problem? How to execute streaming logic in a test environment.

We can write Integration test cases and provide the actual environment in the integration test. But for unit testing, we need a testing environment which should not depend on any external application.

Click through for the example.
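Anuj’s example uses the Scala StreamingSuiteBase trait. If you are working in PySpark instead, a rough (and admittedly cruder) equivalent is to feed the DStream from an in-memory queue and collect results on the driver for assertions. A sketch, with word counting standing in for whatever logic you actually want to test:

import time

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "dstream-unit-test")
ssc = StreamingContext(sc, 1)  # 1-second batches

results = []
input_batches = [sc.parallelize(["a a b", "b c"])]

lines = ssc.queueStream(input_batches)
word_counts = (
    lines.flatMap(lambda line: line.split(" "))
         .map(lambda word: (word, 1))
         .reduceByKey(lambda x, y: x + y)
)
# Collect each batch's output on the driver so we can assert on it.
word_counts.foreachRDD(lambda rdd: results.append(sorted(rdd.collect())))

ssc.start()
time.sleep(6)  # give the single queued batch time to run
ssc.stop(stopSparkContext=True, stopGraceFully=False)

first_batch = next((batch for batch in results if batch), None)
assert first_batch == [("a", 2), ("b", 2), ("c", 1)]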


The Need For Multiple Warehouse Architectures

James Serra argues in favor of using both a data lake and a traditional data warehouse:

I think the ultimate question is: Can all the benefits of a traditional relational data warehouse be implemented inside of a Hadoop data lake with interactive querying via Hive LLAP or Spark SQL, or should I use both a data lake and a relational data warehouse in my big data solution?  The short answer is you should use both.  The rest of this post will dig into the reasons why.

I touched on this ultimate question in a blog that is now over a few years old at Hadoop and Data Warehouses, so this is a good time to provide an update.  I also touched on this topic in my blogs Use cases of various products for a big data cloud solution, Data lake details, Why use a data lake?, and What is a data lake?, and my presentation Big data architectures and the data lake.

Read on for James’s argument, which is good.  My argument is summed up as follows:  the purpose of a data warehouse is to solve known business problems—that is, to help build reports that people on the business side need based on established requirements.  The purpose of a data lake is to hold all kinds of data and curate it for when people come looking for something they didn’t know they needed.


Failover Groups In Azure SQL Database

Jim Donahoe shows off Failover Groups in Azure SQL Database.  Part 1 involves setting up a Failover Group:

In my former company, we had 22 web applications that all had connections to various databases.  We had all of our databases configured for Geo-Replication already, but still, if we had to fail over, we had to update each connection string for the web apps along with others, which became a tedious process. In came Failover Groups to the rescue!  With a Failover Group, I was able to create two endpoints that stayed the same no matter which server was primary/secondary.  I liked to think of these as my Availability Group Listeners as they kinda serve the same functionality: route traffic to a node depending on if it’s read-only or not.  Best part?  It’s configured through the Azure Portal SO EASILY!  You can use PowerShell as well, but for this blog post, I will walk through the creation via the Portal.  I will make a separate post or attach a script at some point for the PowerShell deployment.

Before we start the configuration portion of this though, let’s take a look at how Microsoft defines what a Failover Group is.  I found this definition here:  “Azure SQL Database auto-failover groups (in-preview) is a SQL Database feature designed to automatically manage geo-replication relationship, connectivity, and failover at scale.”  Sounds pretty interesting, right? Let’s make one!

In Part 2, Jim shows how to connect to SQL Server using the Failover Group listener:

Well, now that the easy stuff is out of the way, let’s talk about how you connect to these groups via SSMS.  This is where some of the confusion happens.  When I first configured a Failover Group, the first thing I tried to do was connect to the Primary server via SSMS thinking it will work just like an Always On Listener in traditional SQL Server…NEWP!

If you’re running a production database on Azure SQL Database, you might want to look at Failover Groups.
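If you would rather script the connection than use SSMS, a hedged pyodbc sketch of hitting the two listener endpoints might look like this; myfog, mydb, and the credentials are placeholders.

import pyodbc

# Read-write listener: always resolves to the current primary server.
primary = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myfog.database.windows.net;DATABASE=mydb;"
    "UID=appuser;PWD=<password>;"
)

# Read-only listener: resolves to the secondary; ApplicationIntent marks the
# connection as read-only work.
secondary = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myfog.secondary.database.windows.net;DATABASE=mydb;"
    "UID=appuser;PWD=<password>;ApplicationIntent=ReadOnly;"
)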


The Argument For Single-Socket Servers

Joe Chang wants us to think about socket counts:

It might seem that the 2-socket system continues to be a good choice, as two processors with an intermediate number of cores is less expensive than one processor with twice as many cores. An example is the Xeon Gold 6132 14-core versus the Xeon Platinum 8180 28-core processors. In addition, the two-socket system has twice the memory capacity and nominally twice as much memory bandwidth.

So, end of argument, right? Well, no.

Click through for his argument in favor of single-socket machines for OLTP systems.


Power BI Usage Metrics

Gogula Aryalingam shows how to access Power BI usage metrics for a report or dashboard:

Each app workspace gets its own report usage metrics data set; it’s just that you don’t see it when you are in the portal. In order to access it (at least for now) you need to use Power BI Desktop. When you open Power BI Desktop, you need to sign in with the appropriate login, and then choose Power BI service from the Get Data menu. You are then presented with a list of app workspaces; within each you would find a list of all the datasets that were ever published to each of the workspaces. Additionally, Power BI will also give you two more datasets: Report Usage Metrics Model and Dashboard Usage Metrics Model. However, these data models will only show up if you had attempted to view usage metrics at least once on one of the reports of the app workspace.

Read the whole thing.
