Press "Enter" to skip to content

Day: August 3, 2017

Explaining Singular Value Decomposition

Tim Bock explains how Singular Value Decomposition works:

The table above is a matrix of numbers. I am going to call it Z. The singular value decomposition is computed using the svd function. The following code computes the singular value decomposition of the matrix Z, and assigns it to a new object called SVD, which contains one vector, d, and two matrices, u and v. The vector, d, contains the singular values. The first matrix, u, contains the left singular vectors, and v contains the right singular vectors. The left singular vectors represent the rows of the input table, and the right singular vectors represent their columns.

Tim includes R scripts so you can follow along, and for this topic I definitely recommend doing so.
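
Tim’s table isn’t reproduced above, so here is a minimal sketch of the same idea in R, with a small made-up matrix standing in for Z (the object names d, u, and v match the description in the excerpt):

    # Placeholder matrix standing in for the table Z from the article
    Z <- matrix(c(5, 2, 1,
                  4, 1, 0,
                  3, 3, 2,
                  1, 4, 5), nrow = 4, byrow = TRUE)

    SVD <- svd(Z)   # a list holding d, u and v

    SVD$d   # singular values
    SVD$u   # left singular vectors  (one per row of Z)
    SVD$v   # right singular vectors (one per column of Z)

    # The three components multiply back together to give Z
    Z.rebuilt <- SVD$u %*% diag(SVD$d) %*% t(SVD$v)
    all.equal(Z, Z.rebuilt)   # TRUE, up to floating-point error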

Thoughts On Exactly-Once Processing And First-In First-Out

Kevin Sookocheff looks into Amazon’s Simple Queue Service and explains some concepts of distributed messaging systems in the process:

In an ideal scenario, the five minute window would be a complete non-issue. Unfortunately, if you are relying on SQS’s exactly-once guarantee for critical use cases you will need to account for the possibility of this error and design your application accordingly.

On the message consumer side, FIFO queues do not guarantee exactly once delivery, because in simple fact, exactly once delivery at the transport level is provably impossible. Even if you could ensure exactly-once delivery at the transport level, it probably isn’t what you want anyways — if a subscriber receives a message from the transport, there is still a chance that it can crash before processing it, in which case you definitely want the messaging system to deliver the message again.

Instead, FIFO queues offer exactly-once processing by guaranteeing that once a message has successfully been acknowledged as processed that it won’t be delivered again. To understand more completely how this works, let’s walk through the details of how you go about consuming messages from SQS.

It’s a great read, so check it out.
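
To make the consume-then-acknowledge flow concrete, here is a rough Python/boto3 sketch (not from Kevin’s post; the queue URL and the handler are placeholders). Deleting the message is the acknowledgement: if the consumer crashes before the delete, SQS redelivers the message after the visibility timeout expires, which is why the processing itself needs to be idempotent.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue.fifo"  # placeholder

    def handle(body: str) -> None:
        # Placeholder for real work; must be safe to run twice, since redelivery is possible.
        print("processing", body)

    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,   # long polling
        )
        for msg in resp.get("Messages", []):
            handle(msg["Body"])
            # The delete is the acknowledgement; skip it and the message
            # becomes visible again once the visibility timeout expires.
            sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])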

Measure-Object Differences

Klaas Vandenberghe notes that Measure-Object behaves differently depending on whether input arrives from the pipeline or via the -InputObject parameter:

So, is the -InputObject parameter broken? Not really, but we need to be aware of the logic behind it.
Maybe we are used to working with -ComputerName for a lot of cmdlets and functions, and we rely blindly upon the ability of the command to handle whatever collection we provide. We know this executes the action we chose, like Get-DbaSqlService or Get-DbaOperatingSystem or whatever, separately for every computer in the collection. The collection is ‘folded out’, ‘unpacked’, ‘split’.

-InputObject doesn’t do that! Is that wrong? Not necessarily, it may be a choice of design to have possibilities to enquire the properties of the collection itself and those of the ‘members’ or ‘children’. It’s just a surprise the behaviour differs from pipeline input to parameter input.

Something to keep in mind when writing PowerShell scripts.
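
A trivial way to see the difference Klaas describes (nothing to do with his dbatools examples, just the raw behaviour):

    # Pipeline input: the collection is unrolled and each element is measured
    1..5 | Measure-Object                 # Count = 5

    # Parameter input: the collection itself is treated as one object
    Measure-Object -InputObject (1..5)    # Count = 1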

Inference Attacks

Phil Factor explains that your technique for pseudonymizing data doesn’t necessarily anonymize the data:

It is possible to mine data for hidden gems of information by looking at significant patterns of data. Unfortunately, this sometimes means that published datasets can reveal sensitive data when the publisher didn’t intend it, or even when they tried to prevent it by suppressing any part of the data that could enable individuals to be identified.

Using creative querying, linking tables in ways that weren’t originally envisaged, as well as using well-known and documented analytical techniques, it’s often possible to infer the values of ‘suppressed’ data from the values provided in other, non-suppressed data. One man’s data mining is another man’s data inference attack.

Read the whole thing.  One big problem with trying to anonymize data is that you don’t know how much the attacker already knows.  Especially with outliers or smaller samples, you might be able to glean interesting information with a series of queries.  Even if the application only returns aggregates computed over at least N people, you can often put together a set of queries that slice the population different ways until you tease out hidden details on individuals.  Phil covers these types of inference attacks.
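
As a flavour of the simplest form of this, consider a classic differencing attack (table and column names invented purely for illustration): each query returns only an aggregate, yet the pair of them exposes one person’s value.

    -- Each query on its own looks safely aggregated...
    SELECT SUM(Salary) AS TotalSalary
    FROM   dbo.Employee
    WHERE  Department = 'Finance';

    SELECT SUM(Salary) AS TotalSalary
    FROM   dbo.Employee
    WHERE  Department = 'Finance'
      AND  LastName <> 'Smith';

    -- ...but the difference between the two results is Smith's exact salary.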

Azure SQL Database Deployment Account Errors

Steve Jones troubleshoots an issue with Azure SQL Database:

 I’ve had most builds work really well. I tried a number of things, but kept getting a few items in the build. There were login errors or network errors, both of which bothered me since I could manually log in with SSMS from the same machine as my build agent.

I suspected a few things here, one of which was the use of named pipes for the Shadow database and TCP for Azure SQL Database.

Eventually, I decided to fall back with msbuild, ignoring VSTS, and make sure all my parameters were correct.

Read on for the rest of the story.

Combining Densities

Paul White explains how the SQL Server cardinality estimator will build an estimate involving multiple single-column statistics:

The task of estimating the number of rows produced by a GROUP BY clause is trivial when only a single column is involved (assuming no other predicates). For example, it is easy to see that GROUP BY Shelf will produce 21 rows; GROUP BY Bin will produce 62.

However, it is not immediately clear how SQL Server can estimate the number of distinct (Shelf, Bin) combinations for our GROUP BY Shelf, Bin query. To put the question in a slightly different way: Given 21 shelves and 62 bins, how many unique shelf and bin combinations will there be? Leaving aside physical aspects and other human knowledge of the problem domain, the answer could be anywhere from max(21, 62) = 62 to (21 * 62) = 1,302. Without more information, there is no obvious way to know where to pitch an estimate in that range.

Yet, for our example query, SQL Server estimates 744.312 rows (rounded to 744 in the Plan Explorer view) but on what basis?

Read on for debugger usage, Shannon entropy calculations, and all kinds of other fun stuff.
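
For context, the queries under discussion have this shape (sketched against the AdventureWorks Production.ProductInventory table, which is where Shelf and Bin columns with 21 and 62 distinct values come from, if I recall Paul’s setup correctly):

    -- Single-column grouping: the estimate comes straight from each column's statistics
    SELECT Shelf FROM Production.ProductInventory GROUP BY Shelf;      -- estimate: 21
    SELECT Bin   FROM Production.ProductInventory GROUP BY Bin;        -- estimate: 62

    -- Two-column grouping: the optimizer has to combine the two densities,
    -- which is where the 744.312 estimate Paul dissects comes from
    SELECT Shelf, Bin FROM Production.ProductInventory GROUP BY Shelf, Bin;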

Thinking About Implicit Conversions

Bert Wagner shows how implicit conversions in a predicate can ruin query performance:

Why? Because SQL is performing that implicit conversion to the numeric datatype for every single row in my table. Hence, it can’t seek using the index because it ends up having to scan the whole table to convert every record to a number first.

And this doesn’t only happen with numbers and string conversion. Microsoft has posted an entire chart detailing what types of data type comparisons will force an implicit conversion:

This is one of those things that can easily elude you because the query will often return results in line with what you expect, so until you have a performance problem, you might not even think to check.
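
A sketch of the kind of predicate that bites (invented schema, not Bert’s exact example): because the integer literal outranks VARCHAR in data type precedence, the column side gets the CONVERT_IMPLICIT, and an index on it can no longer be seeked.

    -- PhoneNumber is VARCHAR; every row is converted to match the numeric literal: scan
    SELECT CustomerID
    FROM   dbo.Customer
    WHERE  PhoneNumber = 5551234567;

    -- Match the column's data type and the predicate stays sargable: seek
    SELECT CustomerID
    FROM   dbo.Customer
    WHERE  PhoneNumber = '5551234567';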

Missing Index DMV Limitations

Brent Ozar goes into detail on why you should not blindly trust missing index recommendations in SQL Server:

SQL Server’s telling us that it needs an index to do an equality search on LastAccessDate – because our query says LastAccessDate = ‘2016/11/10’.

But in reality, that’s not how you access datetime fields because it won’t give you everyone who accessed the system on 2016/11/10 – it only gives you 2016/11/10 00:00:00. Instead, you need to see everyone on that day, like this:

Read the whole thing.  The crux of this is that the missing index recommendation process only gets to see what you’re running at the time you run it, so it can’t generalize all that well; that’s your job.
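
The shape of the fix Brent is pointing at looks something like this (sketched against a Users table with a datetime LastAccessDate; only the column name comes from his excerpt): an open-ended range covers the whole day, where the equality only matches midnight.

    -- Only matches rows stamped exactly 2016-11-10 00:00:00
    SELECT *
    FROM   dbo.Users
    WHERE  LastAccessDate = '2016/11/10';

    -- Matches everyone who accessed the system at any point during that day
    SELECT *
    FROM   dbo.Users
    WHERE  LastAccessDate >= '2016/11/10'
      AND  LastAccessDate <  '2016/11/11';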

Selecting Into A Specific Filegroup

Andrew Pruski shows off a new feature in SQL Server 2017:

Now I can run the SELECT…INTO statement using the new ON FILEGROUP option. I’m going to run an example SELECT statement to capture Sales in the UK:

We are one step closer to CTAS on-prem…  Being able to select into a specific filegroup is nice when you want to segregate tables by filegroup to make recovery of the most critical tables faster: for example, a primary filegroup, then a filegroup for your application’s critical tables, followed by the history tables and other large tables that the app doesn’t need immediately.
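
The new syntax looks roughly like this (table, column, and filegroup names invented; the target filegroup has to exist in the database already):

    -- SQL Server 2017+: SELECT ... INTO can now target a specific filegroup
    SELECT *
    INTO   dbo.SalesUK
    ON     [FG_Critical]     -- the new ON <filegroup> clause
    FROM   dbo.Sales
    WHERE  Country = 'UK';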

Isolation Level Basics

Randolph West describes the primary isolation levels in SQL Server:

There are four isolation levels in SQL Server (as quoted from SQL Server Books Online):

  • Read uncommitted (the lowest level where transactions are isolated only enough to ensure that physically corrupt data is not read)
  • Read committed (Database Engine default level)
  • Repeatable read
  • Serializable (the highest level, where transactions are completely isolated from one another)

Read on for a discussion of what these mean, as well as how optimistic versus pessimistic concurrency (in this case, Read Committed Snapshot Isolation versus Read Committed) comes into play.
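
For reference, the level is set per session before the transaction begins; a generic sketch (not taken from Randolph’s post):

    -- Applies to subsequent transactions on this session
    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

    BEGIN TRANSACTION;

        -- Rows read here cannot be modified by other sessions
        -- until this transaction commits or rolls back.
        SELECT TOP (10) name
        FROM   sys.objects;

    COMMIT TRANSACTION;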
