Press "Enter" to skip to content

Month: May 2017

The Value Of Log Shipping

Robert Davis explains that log shipping can be better than mirroring for database migrations:

This topic has come up several times recently, so I feel the need to blog on it. As the person who wrote the book on Database Mirroring, I will probably surprise many of you when I say that I believe log shipping is a much better tool for database migrations than database mirroring.

I’m not just talking about the fact that database mirroring is deprecated (since SQL Server 2012) and log shipping is not. Both are still in SQL Server to this day. Because database mirroring is deprecated, it is no longer receiving bug fixes (except maybe critical security bugs) and no work is being done to make sure that it works with new features in current and future versions. Log shipping is still receiving both of these things. I will lay out the real reasons below.

Robert makes two compelling arguments in favor of log shipping.

The Bayesian Trap

David Smith links to a video describing an application of Bayes’s Theorem and gives the example of medical tests:

If you get a blood test to diagnose a rare disease, and the test (which is very accurate) comes back positive, what’s the chance you have the disease? Well, if "rare" means only 1 in a thousand people have the disease, and "very accurate" means the test returns the correct result 99% of the time, the answer is … just 9%. There’s less than a 1 in 10 chance you actually have the disease (which is why the doctor will likely have you tested a second time).

Now that result might seem surprising, but it makes sense if you apply Bayes’ Theorem. (A simple way to think of it: in a population of 1,000 people, roughly 10 people who don’t have the disease will nonetheless test positive, plus the one person who actually does. Only one in eleven of the positive results, or about 9%, reflects a true case of the disease.)

The 9% figure is the test’s precision (in the medical field, they call it positive predictive value; in the documents world and in the Microsoft ML space, they call it precision):  True positives / (True positives + False positives).  Supposing a million people, 1,000 will have the disease.  Of those 1,000, we expect the test to flag 990 (99%).  Of the 999,000 people who don’t have the disease, we expect the test to produce 9,990 false positives (1%).  990 / (990 + 9,990) ≈ 9%.
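If you want to check the arithmetic, here is a minimal T-SQL sketch of the same Bayes’ Theorem calculation; the variable names are my own, and the prevalence, sensitivity, and false positive rate are the figures from the example above.

DECLARE @Prevalence FLOAT = 0.001;        -- 1 in 1,000 people have the disease
DECLARE @Sensitivity FLOAT = 0.99;        -- P(positive test | disease)
DECLARE @FalsePositiveRate FLOAT = 0.01;  -- P(positive test | no disease)

SELECT (@Sensitivity * @Prevalence)
     / (@Sensitivity * @Prevalence + @FalsePositiveRate * (1.0 - @Prevalence))
     AS ProbabilityOfDiseaseGivenPositiveTest;
-- Returns roughly 0.09, i.e. about a 9% chance of actually having the disease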

SOS_UnfairMutexPair

Ewald Cress talks about SOS_UnfairMutexPair:

The focal point of the mutex’s state – in fact one might say the mutex itself – is the single Spinlock bit within the 32-bit lock member. Anybody who finds it zero, and manages to set it to one atomically, becomes the owner.

Additionally, if you express an interest in acquiring the lock, you need to increment the WaiterCount, whether or not you managed to achieve ownership at the first try. Finally, to release the lock, atomically set the spinlock to zero and decrement the WaiterCount; if the resultant WaiterCount is nonzero, wake up all waiters.

Now one hallmark of a light-footed synchronisation object is that it is very cheap to acquire in the non-contended case, and this class checks that box. If not owned, taking ownership (the method SOS_UnfairMutexPair::AcquirePair()) requires just a handful of instructions, and no looping. The synchronisation doesn’t get in the way until it is needed.

However, if the lock is currently owned, we enter a more complicated world within the SOS_UnfairMutexPair::LongWait() method.

I love the statement that “This is not a very British class at all.”  Read the whole thing.

Selecting Into A Filegroup

Denis Gobo points out a new feature in SQL Server 2017:

With the CTP2 build of SQL Server 2017, you now have the ability to do a SELECT INTO operation into a specific filegroup.

The syntax looks like this:

SELECT * INTO TableName ON FileGroup FROM SomeQuery

What is the use of this, you might ask? Well, maybe you have some ad-hoc queries where you save some data, but you don’t want it sitting on your expensive SAN. Or maybe you populate staging tables on the fly and you want them to end up on a specific SSD because you need the speed of the SSD for these operations.

This might also be helpful in migrating tables to different storage.
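For the sake of illustration, here is a rough end-to-end sketch; the database, filegroup, file path, and table names are all hypothetical, and it assumes a SQL Server 2017 CTP2 (or later) instance with an SSD-backed drive available.

USE StagingDB;
GO
ALTER DATABASE StagingDB ADD FILEGROUP FastStaging;
ALTER DATABASE StagingDB ADD FILE
    (NAME = N'FastStaging1', FILENAME = N'S:\SSDData\FastStaging1.ndf', SIZE = 1GB)
    TO FILEGROUP FastStaging;
GO
-- Materialize the ad-hoc result set directly onto the SSD-backed filegroup
SELECT *
INTO dbo.OrderStaging ON FastStaging
FROM dbo.Orders
WHERE OrderDate >= '20170101';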

Monthly Job Run Time Averages

Tywan Terrell has a script to see how his SQL Agent jobs are performing in terms of monthly average run time:

Sometimes, as an ETL developer or Database Administrator, you will need to gain insight into SQL Agent job execution times. This insight can be used to proactively monitor the processing times of the various jobs running within your data environment.

Information about job execution times is stored in the MSDB database in the sysjobhistory table. This table has the start time and the run duration, which I have used to create a report that shows the average job start and end times by month for all jobs running on an instance of SQL Server.

This is a very useful start.  If I were going to rely on this data, I’d do two things:  first, save it somewhere else permanently (because you want to clear out SQL Agent job history occasionally so the GUI doesn’t choke when you try to view job history); and second, look more at percentiles, particularly the 95th and 99th percentiles for frequently-running jobs.
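Tywan’s post has the full report script; purely as a sketch of the kind of query involved, something like the following averages run duration per job per month out of msdb (the duration math assumes sysjobhistory stores run_duration as an HHMMSS-formatted integer, with step 0 being the overall job outcome row).

SELECT
    j.name AS JobName,
    DATEFROMPARTS(jh.run_date / 10000, (jh.run_date / 100) % 100, 1) AS RunMonth,
    AVG(1.0 * ((jh.run_duration / 10000) * 3600
        + ((jh.run_duration / 100) % 100) * 60
        + (jh.run_duration % 100))) AS AvgDurationInSeconds
FROM msdb.dbo.sysjobhistory jh
    INNER JOIN msdb.dbo.sysjobs j
        ON j.job_id = jh.job_id
WHERE jh.step_id = 0  -- the overall job outcome row
GROUP BY j.name, DATEFROMPARTS(jh.run_date / 10000, (jh.run_date / 100) % 100, 1)
ORDER BY JobName, RunMonth;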

The Continued Importance Of ETL

Andy Leonard explains that good old ETL remains vital to an organization:

A Problem

As Jen points out earlier in her Analytics Market Commoditization and Consolidation post (you should read it all – it’s awesome – like all of Jen’s posts!) many analytics solution providers share the “Same look, same marketing story, same saves time and allows users [to] avoid evil IT.”

I can hear some of you thinking, “Are you telling us analytics doesn’t work, Andy?” Goodness no. I’m telling you hype and sales strategy work in the analytics market as well as anywhere. When asked why a solution may not perform to expectations, the #1 response is “your data is not clean.”

Data engineering (think ETL specifically designed for analytics and “big data”) is the backbone behind data science.  To Andy’s point, the data engineer’s job is to get clean, context-heavy data in front of a data scientist, the same way a “classical” Business Intelligence specialist works with analysts.

More Advice For Data Scientists

Charles Parker provides more Dijkstra-style wisdom for budding data scientists:

Raise your standards as high as you can live with, avoid wasting your time on routine problems, and always try to work as closely as possible at the boundary of your abilities. Do this because it is the only way of discovering how that boundary should be moved forward.

Readers of this blog post are just as likely as anyone to fall victim to the classic maxim, “When all you have is a hammer, everything is a nail.” I remember a job interview where my interrogator appeared disinterested in talking further after I wasn’t able to solve a certain optimization using Lagrange multipliers. The mindset isn’t uncommon: “I have my toolbox.  It’s worked in the past, so everything else must be irrelevant.”

There’s some good advice in here.

Load Testing Kafka

Satish Bhor shows off Pepper-Box, a load generator which can stress test Apache Kafka:

Pepper-Box is a Kafka load generator application that can be used as a JMeter plugin or as a standalone utility. It allows sending plain-text Kafka messages (JSON, XML, CSV, or any other custom format), as well as Java serialized objects. Pepper-Box includes a template engine and a random data generation function, which help you design messages in any format. If we use it with JMeter, then we can use all of JMeter’s features. Pepper-Box is very useful in streaming analytics and data pipeline implementations, where the input data format is tightly coupled with the business problem.

Pepper-Box includes four main components.

I’m going to keep an eye on this tool.

Power BI Premium

James Serra explains the Power BI Premium tier:

For costs, it allows an unlimited number of users since it is priced by aggregate capacity (see Power BI Premium calculator).  Users who need to create content in Power BI will still require a $10/month Power BI Pro seat, but there is no per-seat charge for consumption.

For scale, it runs on dedicated hardware giving capacity exclusively allocated to an organization for increased performance (no noisy neighbors).  Organizations can choose to apply their dedicated capacity broadly, or allocate it to assigned workspaces based on the number of users, workload needs or other factors—and scale up or down as requirements change.

They’re throttling down Power BI Free, making it really just for personal use, but I think the Premium tier will help with pricing for adoption.
