Press "Enter" to skip to content

Month: January 2018

Visualizing Progress Using Power BI

Stacia Varga demonstrates methods of visualizing progress toward a goal using Power BI:

Another interesting way to look at goal tracking for a goal in which time is an important element, such as my daily Move goal, is to use a KPI visualization.

Just as many businesses use KPIs, an abbreviation for key performance indicators, I can use a KPI to see my current metric value as of the last date for which I have collected data. In Power BI, not only can I see this value, but I can also see how it compares to the target at a glance, through the use of color. Red is bad and green is good, by default, but I can use the formatting options to change this. And I can see how the value trends over time, much like my current line and clustered column chart does.
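
As a rough illustration, the KPI visual just needs an indicator measure, a date-based trend axis, and a target. In DAX, the backing measures might look like this (the table, column, and goal value are my assumptions, not Stacia's model):

    -- Hypothetical measures feeding a KPI visual
    Total Move Minutes = SUM ( Activity[MoveMinutes] )

    -- Static target; a goal column or measure works as well
    Move Goal = 30

In the visual, Total Move Minutes goes in the Indicator well, the date column in Trend axis, and Move Goal in Target goals.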

Click through for several techniques.


Defining Table-Valued Function Metadata

Bill Fellows has a query which gets input parameters for table-valued functions:

In my ever-expanding quest for getting all the metadata, how could I determine the metadata for all my table-valued functions? No problem, that's what sys.dm_exec_describe_first_result_set is for.

    SELECT *
    FROM sys.dm_exec_describe_first_result_set
    (
        N'SELECT * FROM dbo.foo(@xmlMessage)',
        N'@xmlMessage nvarchar(max)',
        1
    ) AS DEDFRS

Except, I need to know parameters. And I need to know parameter types. And order. Fortunately, sys.parameters and sys.types make this easy. The only ugliness comes from the double invocation of row rollups.
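
In the spirit of Bill's approach, a minimal sketch of joining those catalog views might look like this (the column selection and aliases are mine; click through for his actual script):

    -- Sketch: input parameters for all table-valued functions, with types
    SELECT
        OBJECT_SCHEMA_NAME(P.object_id) AS SchemaName,
        OBJECT_NAME(P.object_id) AS FunctionName,
        P.parameter_id,
        P.name AS ParameterName,
        T.name AS TypeName,
        P.max_length,
        P.precision,
        P.scale
    FROM sys.parameters AS P
        INNER JOIN sys.types AS T
            ON P.user_type_id = T.user_type_id
    WHERE OBJECTPROPERTY(P.object_id, 'IsTableFunction') = 1
        OR OBJECTPROPERTY(P.object_id, 'IsInlineFunction') = 1
    ORDER BY SchemaName, FunctionName, P.parameter_id;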

Click through for the script.


Monitoring Kafka Streaming Pipelines

Randhir Singh shows how to use open-source tools to monitor Kafka streaming pipelines:

The solution uses the following open-source tools. The solution architecture is illustrated in the original post.

  • Apache Kafka Connect is a tool to stream data between Apache Kafka and other components.
  • InfluxDB is a time series database from InfluxData, used here to store time series data from the Kafka input and output topics.
  • The InfluxDB sink connector from Datamountaineer writes events from Kafka to InfluxDB.
  • Chronograf is an open-source monitoring solution from InfluxData.
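
As a rough illustration of the glue, a standalone Kafka Connect properties file for the sink might look something like this. The connect.influx.* property names vary between connector releases, so treat them as assumptions and verify against the Datamountaineer documentation:

    # Hypothetical InfluxDB sink configuration (keys and names are illustrative)
    name=influxdb-sink
    connector.class=com.datamountaineer.streamreactor.connect.influx.InfluxSinkConnector
    tasks.max=1
    topics=metrics-in,metrics-out
    connect.influx.url=http://localhost:8086
    connect.influx.db=kafka_monitoring
    # KCQL maps each topic to an InfluxDB measurement
    connect.influx.kcql=INSERT INTO input_metrics SELECT * FROM metrics-in;INSERT INTO output_metrics SELECT * FROM metrics-out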

Click through for the solution.


Functions In Spark

Fisseha Berhane continues his RDDs vs DataFrames vs SparkSQL series, this time looking at functions:

Let’s use the Spark SQL and DataFrame APIs to retrieve companies ranked by sales totals from the SalesOrderHeader and SalesLTCustomer tables. We will display the first 10 rows of the solution using each method, just to compare our answers and make sure we are doing it right.
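
To give a flavor of the Spark SQL variant, a query along these lines produces the ranking, assuming the two tables are registered as temporary views (the column names are my guesses at the AdventureWorks schema, not Fisseha's exact code):

    -- Top 10 companies by total sales
    SELECT c.CompanyName,
           SUM(soh.TotalDue) AS TotalSales
    FROM SalesOrderHeader AS soh
        INNER JOIN SalesLTCustomer AS c
            ON soh.CustomerID = c.CustomerID
    GROUP BY c.CompanyName
    ORDER BY TotalSales DESC
    LIMIT 10;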

All three approaches give the same results, though the SQL approach seems to me to be the easiest.


Building ML Services Models

I have part two of my three-part series on SQL Server Machine Learning Services modeling:

We used sp_execute_external_script to build a model and generate some number of days worth of predictions.  Now I’d like to break that up into two operations:  training a model and generating predictions.  The biggest reason I might want to do this is if model generation is very expensive relative to prediction generation.  It’s obviously not very expensive to build a random number generator following a Poisson distribution, but imagine we had a complicated neural net that took hours or days to train—we’d want to use that model over and over, as long as the model came close enough to representing the underlying reality.

So now we’re going to create a model.
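
The heart of the pattern is serializing the trained model to VARBINARY(MAX), storing it in a table, and reading it back at scoring time. Here is a stripped-down sketch of the training half, with placeholder table and model names rather than the series' exact code:

    -- Storage for serialized models (placeholder schema)
    CREATE TABLE dbo.Models
    (
        ModelName SYSNAME PRIMARY KEY,
        Model VARBINARY(MAX) NOT NULL
    );

    -- Train once, serialize the model, and save it
    DECLARE @model VARBINARY(MAX);
    EXEC sp_execute_external_script
        @language = N'R',
        @script = N'
            model <- glm(NumVisits ~ 1, family = poisson, data = TrainingData)
            serialized_model <- serialize(model, connection = NULL)',
        @input_data_1 = N'SELECT NumVisits FROM dbo.DailyVisits',
        @input_data_1_name = N'TrainingData',
        @params = N'@serialized_model VARBINARY(MAX) OUTPUT',
        @serialized_model = @model OUTPUT;

    INSERT INTO dbo.Models (ModelName, Model)
    VALUES (N'VisitsPoisson', @model);

Prediction then pulls the stored model back out, deserializes it inside the external script, and scores new data without paying the training cost again.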

Click through to see a more complete example, something closer to production-ready.


Make Your Powershell Cmdlets Hypothetical

Rob Sewell shows you how to implement Confirm, Verbose, and WhatIf parameters in Powershell cmdlets:

I have done this before because, if the file does not exist, Set-Content will create a new file for you, but with this function I can check whether the file exists first with ValidateScript before running the rest of the function.

As you can see, I add variables from my PowerShell code into “The Item” and “The Change”. If I need to add a property of an object, I use $($Item.Property).

So now, if I want to see what my new function would do if I ran it without actually making any changes I have -WhatIf added to my function automagically.
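
For reference, the moving parts fit together roughly like this; the function name, messages, and ConfirmImpact level are illustrative rather than Rob's actual code:

    # A minimal SupportsShouldProcess sketch (hypothetical function)
    function Set-FileItem {
        [CmdletBinding(SupportsShouldProcess = $true, ConfirmImpact = 'Medium')]
        param (
            [Parameter(Mandatory)]
            [ValidateScript({ Test-Path -Path $_ })]  # require that the file already exists
            [string]$Path,

            [Parameter(Mandatory)]
            [string]$Value
        )

        # The first argument is "The Item", the second is "The Change"
        if ($PSCmdlet.ShouldProcess("$Path", "Set content to '$Value'")) {
            Set-Content -Path $Path -Value $Value
        }
    }

    # Dry run: reports what would happen without touching the file
    Set-FileItem -Path C:\temp\app.config -Value 'new setting' -WhatIf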

It’s easy to do and makes your code that much better.


Using Computed Columns For Pattern Matching

Erik Darling shows how you can use a computed column to speed up %something searches:

We were talking about computed columns, and one of our students mentioned that he uses computed columns that run the REVERSE() function on a column for easier back-searching.

What’s back-searching? It’s a word I just made up.

The easiest example to think about and demo is Social Security Numbers.

One security requirement is often to give the last four.

Obviously running this for a search WHERE ssn LIKE '%0000' would perform badly over large data sets.
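
The gist, as a sketch with my own table and index names:

    -- REVERSE() is deterministic, so the computed column can be indexed
    ALTER TABLE dbo.People
        ADD ssn_reversed AS REVERSE(ssn);

    CREATE INDEX ix_People_ssn_reversed
        ON dbo.People (ssn_reversed);

    -- The leading-wildcard search becomes a seekable trailing-wildcard search
    SELECT p.ssn
    FROM dbo.People AS p
    WHERE p.ssn_reversed LIKE REVERSE('0000') + '%';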

One of the trickiest parts of performance tuning is understanding that the way people want to see data is not necessarily the way you should store the data.


SQL Persistent Storage In Azure Container Services

Andrew Pruski shows how to use Kubernetes persistent volumes in Azure Container Services:

I’ve been playing around with SQL Server running in Kubernetes in Azure Container Services (AKS) for a while now and I think that the technology is really cool.

You can get a highly available instance of SQL Server up and running with a few lines of code! Ok, there’s a bit of setup to do but once you get that out of the way, you’re good to go.

One thing that has been missing though, is persistent storage. Any changes made to the SQL instance would have been lost if the pod that it was running in failed and was brought back up.

Until now.
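
The shape of the fix is a PersistentVolumeClaim backed by an Azure disk, mounted at SQL Server's data directory. A hypothetical minimal manifest (the names, size, and storage class are assumptions):

    # Claim an Azure-managed disk for SQL Server's data files
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mssql-data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: managed-premium
      resources:
        requests:
          storage: 8Gi

The pod spec then references the claim in its volumes section and mounts it at /var/opt/mssql, so the databases survive the pod being killed and rescheduled.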

Click through to learn how.  It’s certainly not trivial, but Andrew does a good job showing us the step-by-step.


Cycling The SQL Server Error Log

Kenneth Fisher recommends cycling the SQL Server error log regularly:

You’ll note I have it set to keep 32 files. This way I have at least a month’s worth of information. It’s probably more than I need, but for my personal box, that’s what I chose. For your company, it’s going to depend on your needs. You can keep up to 99 files, but that seems a bit excessive to me. Of course, these files aren’t all that large (usually), so it’s up to you.

I’d go with a bit more than 32—if you have three service restarts in March, you won’t have a full month’s worth of information.  Regardless of the exact number you choose, it’s a smart idea to cycle this regularly.
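
Whatever retention number you land on, the mechanics are simple: schedule the cycling (an Agent job works nicely) and bump the retained-file count. The registry route below is the commonly used but undocumented way to set the count outside SSMS, so treat the path as something to verify on your instance:

    -- Close the current error log and start a new one
    EXEC sp_cycle_errorlog;

    -- Retain 32 error logs (6 to 99 allowed); undocumented registry approach
    EXEC xp_instance_regwrite
        N'HKEY_LOCAL_MACHINE',
        N'Software\Microsoft\MSSQLServer\MSSQLServer',
        N'NumErrorLogs',
        REG_DWORD,
        32;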


Diagnosing TempDB Growth

Ginger Keys has a few tips around managing tempdb:

So what happened to cause this inconsistent growth and how should I fix it?

Tempdb is used (and can grow) when users explicitly create objects like temp tables, table variables, cursors, or stored procedures. Tempdb is also used when the database engine creates work tables for sorts or spools, and for row versioning operations.

It’s often difficult to diagnose what happened on a SQL Server after the fact. There are many resources online to determine what is currently using tempdb but not a lot of information from processes that may have run days ago.
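
For the "what is using tempdb right now" side, one common approach (not necessarily Ginger's exact queries) is the session space usage DMV:

    -- Run in tempdb: pages allocated per session for user objects
    -- (temp tables, table variables) and internal objects (sorts, spools)
    USE tempdb;

    SELECT
        u.session_id,
        s.login_name,
        u.user_objects_alloc_page_count AS UserObjectPages,
        u.internal_objects_alloc_page_count AS InternalObjectPages
    FROM sys.dm_db_session_space_usage AS u
        INNER JOIN sys.dm_exec_sessions AS s
            ON u.session_id = s.session_id
    ORDER BY u.internal_objects_alloc_page_count DESC;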

Read on for some tips to help manage tempdb growth better.
