Month: June 2017

Finding Candidates For Memory-Optimized Tables

Ned Otter points out a very interesting report in SSMS 2016 and 2017, which helps you determine if you should migrate a table to be memory-optimized:

The chart attempts to display both the best candidates and the degree of difficulty for migration. But there are a few problems with the “difficulty” rating of this internal query, and as a result, if we really want to know how easy/difficult the process might be, we’ll have to roll our own query.

Read on for more details, as well as a script Ned has put together to do the same in T-SQL.

Bayesian Average

Jelte Hoekstra has a fun post applying the Bayesian average to board game ratings:

Maybe you want to explore the best boardgames, but instead you find the top 100 filled with 10/10 scores. Experience many such false positives and you will lose faith in the rating system. Let’s be clear: this isn’t exactly incidental either; most games have relatively few votes and suffer from this phenomenon.

The Bayesian average

Fortunately, there are ways to deal with this. BoardGameGeek’s solution is to replace the average with the Bayesian average. In Bayesian statistics we start out with a prior that represents our a priori assumptions. When evidence comes in, we can update this prior, computing a so-called posterior that reflects our updated belief.

Applied to boardgames this means: if we have an unrated game we might as well assume it’s average. If not, the ratings will have to convince us otherwise. This certainly removes outliers as we will see below!

This is a rather interesting article and you can easily apply it to other rating systems as well.
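To make the formula concrete, here is a minimal sketch (in Python rather than the R Jelte uses) of a Bayesian average for ratings: the raw mean is shrunk toward a prior mean, with the prior weighted as if it were a fixed number of extra votes. The prior mean and weight below are made-up illustration values, not BoardGameGeek’s actual parameters.

```python
def bayesian_average(ratings, prior_mean=5.5, prior_weight=100):
    """Shrink the raw mean toward prior_mean, treating the prior as
    prior_weight imaginary votes cast at prior_mean."""
    n = len(ratings)
    if n == 0:
        return prior_mean  # an unrated game is assumed to be average
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

# A game with three perfect scores no longer shoots to the top of the list...
print(round(bayesian_average([10, 10, 10]), 2))    # 5.63
# ...while a heavily rated game keeps roughly its raw average.
print(round(bayesian_average([8.1] * 5000), 2))    # 8.05
```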

Re-Shaping Data Flows

Maneesh Varshney explains some methods to trim the fat out of analytical data flows:

Big data comes in a variety of shapes. The Extract-Transform-Load (ETL) workflows are more or less stripe-shaped (left panel in the figure above) and produce an output of a similar size to the input. Reporting workflows are funnel-shaped (middle panel in the figure above) and progressively reduce the data size by filtering and aggregating.

However, a wide class of problems in analytics, relevance, and graph processing have a rather curious shape of widening in the middle before slimming down (right panel in the figure above). It gets worse before it gets better.

In this article, we take a deeper dive into this exploding middle shape: understanding why it happens, why it’s a problem, and what we can do about it. We share our experiences of real-life workflows from a spectrum of fields, including Analytics (A/B experimentation), Relevance (user-item feature scoring), and Graph (second-degree network/friends-of-friends).

The examples relate directly to Hadoop, but are applicable in other data platforms as well.

Spark Streaming Vs Kafka Streams

Mahesh Chand Kandpal contrasts Kafka Streams with Spark Streaming:

Low latency and easy-to-use event time support also apply to Kafka Streams. It is a rather focused library, and it’s very well-suited for certain types of tasks. That’s also why some of its design can be so optimized for how Kafka works. You don’t need to set up any kind of special Kafka Streams cluster, and there is no cluster manager. And if you need to do a simple Kafka topic-to-topic transformation, count elements by key, enrich a stream with data from another topic, or run an aggregation, and you only need real-time processing, Kafka Streams is for you.

If event time is not relevant and latencies in the seconds range are acceptable, Spark is the first choice. It is stable, and almost any type of system can be easily integrated. In addition, it comes with every Hadoop distribution. Furthermore, the code used for batch applications can also be used for streaming applications, as the API is the same.

Read on for more analysis.
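For a sense of what the “simple topic-to-topic counting” class of job looks like, here is a rough sketch using Spark Structured Streaming’s Python API; the broker address and the page-views topic are placeholders, and the spark-sql-kafka connector package has to be available on the cluster. A Kafka Streams version would be a few lines of the Java DSL instead.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-key-count").getOrCreate()

# Read a stream of records from a Kafka topic (placeholder broker/topic names).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "page-views")
          .load())

# Kafka keys and values arrive as binary; cast the key and count elements by key.
counts = (events.selectExpr("CAST(key AS STRING) AS page")
          .groupBy("page")
          .count())

# Write the running counts to the console for demonstration purposes.
(counts.writeStream
       .outputMode("complete")
       .format("console")
       .start()
       .awaitTermination())
```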

Combinatorics With Joins

Dmitry Zaytsev explains the math behind why query plans can be so inefficient when dealing with a large number of joins:

Let’s talk about the sequence of table joins in detail. It is very important to understand that the possible number of table joins grows exponentially, not linearly. For example, there are only 2 possible ways to join 2 tables, and the number reaches 12 for 3 tables. Different join sequences can have different query costs, and the SQL Server optimizer must select the cheapest one. But when the number of tables is high, this becomes a resource-intensive task. If SQL Server went through all possible variants, such a query might never execute. That is why SQL Server never does this; it looks for a good enough plan, not the best plan. SQL Server always tries to reach a compromise between execution time and plan quality.

There are ways you can help the optimizer, and one of my favorite query tuning books was all about table selection.
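The numbers in the quote match the usual count of possible join trees over n tables: n! orderings of the tables times the (n-1)th Catalan number of binary tree shapes. A quick sketch (assuming no cross-product restrictions or heuristic pruning) shows how fast that search space explodes:

```python
from math import comb, factorial

def join_tree_count(n_tables: int) -> int:
    """Number of distinct join trees over n tables:
    n! leaf orderings times the (n-1)th Catalan number of tree shapes."""
    catalan = comb(2 * (n_tables - 1), n_tables - 1) // n_tables
    return factorial(n_tables) * catalan

for n in (2, 3, 4, 5, 10):
    print(n, join_tree_count(n))
# 2 -> 2, 3 -> 12, 4 -> 120, 5 -> 1680, 10 -> 17,643,225,600
```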

Multi-Instance PowerShell Queries

Jana Sattainathan shows an easy way of performing an operation against a number of SQL Server instances:

Today, I received a really simple request. There was an UPDATE statement that had to be run on quite a few databases in multiple instances. This was for vendor software we use, and the statement was sent to me by a Helpdesk professional who manages the product.

You need to have the SQLPS (older) or the SQLSERVER (newer) PowerShell module installed.

Click through for the code. If you want to extend this further, I’d look at using a Central Management Server to pull the instances, and maybe at parallel processing if you have a large number of servers.
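Jana’s solution is PowerShell, so don’t read this as the code in the post, but just to illustrate the shape of the task, here is a sketch of the same loop in Python with pyodbc. The instance names, database, and UPDATE statement are all placeholders.

```python
import pyodbc

# Placeholder instance list and statement; the post drives this loop from
# PowerShell (SQLPS / SqlServer module) instead of Python.
instances = ["SQL01", "SQL02\\INST2", "SQL03"]
statement = "UPDATE dbo.Settings SET Value = 'new' WHERE Name = 'Flag';"

for instance in instances:
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={instance};DATABASE=VendorDb;Trusted_Connection=yes;")
    try:
        conn.execute(statement)
        conn.commit()
        print(f"{instance}: updated")
    except pyodbc.Error as err:
        conn.rollback()
        print(f"{instance}: failed - {err}")
    finally:
        conn.close()
```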

Rotating Tiles Custom Visual

Devin Knight continues his Power BI custom visuals series:

In this module you will learn how to use the Rotating Tile Custom Visual.  The Rotating Tile gives you the ability to display multiple metrics on a single visual that rotates through each value you wish to display.  This allows you to save valuable space on your reports!

This feels like the type of thing that works on a dashboard but would get frustrating if you used it for time-sensitive data or data which required thoughtful analysis.

Reducing TempDB Usage With Memory-Optimized Objects

Mark Wilkinson shows how to replace temp tables (or table variables) with memory-optimized table variables to reduce tempdb latching:

If all worked, you should now see that we have contention on the sysschobjs table. Earlier we discussed using sp_help to get index details on system tables; if we do that now and look at index 2, we will see that the lead column is nsclass, which is a tinyint field. Using a tinyint as a lead column is typically a terrible idea, since there is little selectivity on such a narrow field, and this is no exception.

This isn’t the only case of contention you might see with system objects related to temporary tables. We ran into a few different contention scenarios with tempdb:

  • Contention on sysschobjs again, but on index 3. This index leads with the name of the temporary table and is fairly narrow so you can fit a lot of records on a single index page. Because of this, if you are running lots of concurrent procedures that create temporary tables with the same or similar names, it creates a hot spot on a single page, leading to more contention.

  • Temporary table auto-stats. Statistics objects for all tables (including temporary tables) are stored in the sys.sysobjvalues table. If you get enough auto-stats generations on temporary tables you can see contention here.

Mark’s post reads like a book chapter and he does a great job of summing up the problem and the solution.
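If you want a quick look at whether your own workload is hitting this kind of tempdb page-latch contention, a rough starting point (not Mark’s diagnostic script) is to watch for PAGELATCH waits whose page lives in database 2, which is tempdb; mapping a hot page back to a specific system table takes a further step, such as sys.dm_db_page_info on SQL Server 2019 and later. The server and driver names below are placeholders.

```python
import pyodbc

# Sessions currently waiting on page latches for pages in tempdb (database_id 2).
QUERY = """
SELECT wt.session_id, wt.wait_type, wt.wait_duration_ms, wt.resource_description
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type LIKE 'PAGELATCH%'
  AND wt.resource_description LIKE '2:%';
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=SQL01;DATABASE=master;Trusted_Connection=yes;")
for session_id, wait_type, wait_ms, resource in conn.execute(QUERY):
    print(session_id, wait_type, wait_ms, resource)
conn.close()
```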

Database Code Analysis

William Brewer has an interesting article on performing code analysis on database objects:

In general, code analysis is not just a help to the individual developer; it can be useful to the entire team. It makes the state and purpose of the code more visible, letting everyone who is responsible for delivery get a better idea of progress, and it can alert them much earlier to potential tasks and issues further down the line. It also makes everyone more aware of whatever coding standards are agreed, and of what operational, security and compliance constraints there are.

Database code analysis is a slightly more complicated topic than static code analysis as used in Agile application development. It is more complicated because you have the extra choice of dynamic code analysis to supplement static code analysis, but also because databases have several different types of code with different conventions and considerations: DML (Data Manipulation Language), DDL (Data Definition Language), DCL (Data Control Language), and TCL (Transaction Control Language). Each requires rather different analysis.

William goes on to include a set of good resources, though I think database code analysis, like database testing, is a difficult job in an under-served area.
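As a toy illustration of the static side of this (not anything from William’s article, and far simpler than real analyzers, which parse the SQL rather than pattern-match), even a handful of rule-based checks over T-SQL scripts counts as code analysis. The rules below are arbitrary examples.

```python
import pathlib
import re
import sys

# A few arbitrary example rules for T-SQL scripts.
RULES = [
    (re.compile(r"\bSELECT\s+\*", re.IGNORECASE),
     "avoid SELECT * outside ad hoc queries"),
    (re.compile(r"\bNOLOCK\b", re.IGNORECASE),
     "NOLOCK can return inconsistent results"),
    (re.compile(r"CREATE\s+PROC(EDURE)?\s+\[?dbo\]?\.\[?sp_", re.IGNORECASE),
     "user procedures named sp_ are resolved against master first"),
]

def lint(path: pathlib.Path) -> int:
    """Print one finding per matched rule per line; return the finding count."""
    findings = 0
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                print(f"{path}:{lineno}: {message}")
                findings += 1
    return findings

if __name__ == "__main__":
    total = sum(lint(pathlib.Path(arg)) for arg in sys.argv[1:])
    sys.exit(1 if total else 0)
```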
