Press "Enter" to skip to content

Curated SQL Posts

Always Encrypted With Secure Enclaves In SQL Server 2019

Jakub Szymaszek walks us through Virtualization Based Security memory enclaves in Windows Server 2019 and SQL Server 2019:

Today, we are super excited to announce that you can now try and evaluate Always Encrypted with secure enclaves in the preview of SQL Server 2019.

Always Encrypted with secure enclaves in SQL Server 2019 preview uses an enclave technology called Virtualization Based Security (VBS) memory enclaves in the upcoming version of Windows (Windows Server 2019 and Windows 10, version 1809), which is currently also in preview. A VBS enclave is an isolated region of memory within the address space of a user-mode process. The isolation of VBS enclaves is provided by the Windows hypervisor, which makes VBS enclaves appear as black boxes, not only to the processes containing them, but also to all other processes and the Windows OS on the machine. Even machine administrators are not able to see the memory of the enclave. The below screenshot shows what an admin would get to see when browsing the enclave memory using a debugger (note the question marks, as opposed to the actual memory content).

The compliance regime is shifting toward preventing high-privilege users (DBAs, sysadmins, etc.) from accidentally or maliciously exposing sensitive information, so it makes sense that this is the primary security push.  I think that these changes are starting to make Always Encrypted a better option than a roll-your-own data encryption model.
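For a sense of what this looks like from the client side, here is a minimal sketch in Python with pyodbc. The table, column, and connection details are hypothetical, and the enclave attestation setup is environment-specific and omitted; the point is that the driver handles the encryption transparently once ColumnEncryption is enabled.

import pyodbc

# Minimal sketch: assumes dbo.Employees has a Salary column encrypted with an
# enclave-enabled column encryption key, and that enclave attestation is already configured.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myserver;Database=HR;Trusted_Connection=yes;"
    "ColumnEncryption=Enabled;"  # driver encrypts parameters and decrypts results transparently
)
cursor = conn.cursor()

# Without enclaves, only equality comparisons work against deterministically encrypted columns.
# With secure enclaves, a range predicate like this can be evaluated server-side inside the enclave;
# the driver encrypts the parameter value before it ever leaves the client.
cursor.execute("SELECT EmployeeID FROM dbo.Employees WHERE Salary > ?;", 50000)
for row in cursor.fetchall():
    print(row.EmployeeID)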

Comments closed

Chicago Parking Ticket Data Set

Bob Pusateri shows us a new data set to mess with:

A few weeks ago I came across this blog post by Matt Chapman. Matt filed FOIA requests with the City of Chicago and, after multiple attempts, was able to get access to over 36 million parking tickets written between 2003 and 2016. Matt goes on to explain Chicago’s parking ticket database, how he processed the data, analyzed it, and in one location got Chicago to put up additional “No Parking” signs to reduce parking tickets in that spot by 50%. That is most definitely using analytics for a great cause!

But let’s get back to that data for a second, that’s what really interests me. Matt shared his raw data for others to analyze, but it was formatted as a PostgreSQL dump. Now PostgreSQL is a great tool with an even greater price, but it’s not always the easiest to use. After spinning up a Linux VM and spending hours setting everything up as best I could, I still couldn’t get the dump to restore properly. Apparently I didn’t have all the exact versions of certain extensions installed, and because of that the tables couldn’t be loaded. Grrrr.

Bob has our backs, though, and has a properly-formatted, normalized parking ticket data set that weighs in at about 500MB.

Comments closed

Running SQL Server 2019 In Docker

Andrew Pruski walks us through setting up SQL Server 2019 CTP 2 on Linux with Docker for Windows:

If you’ve been anywhere near social media this week you may have seen that Microsoft has announced SQL Server 2019.

I love it when a new version of SQL is released. There’s always a whole new bunch of features (and improvements to existing ones) that I want to check out. What I’m not too keen on however is installing a preview version of SQL Server on my local machine. It’s not going to be there permanently and I don’t want the hassle of having to uninstall it.

This is where containers come into their own. We can run a copy of SQL Server without it touching our local machine.

Click through for the step-by-step.
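If you prefer to script the container’s lifecycle rather than type docker commands by hand, here is a rough sketch using the Docker SDK for Python. The image tag and sa password are assumptions; check Microsoft’s container registry for the current CTP tag.

import docker  # pip install docker; assumes Docker for Windows is running Linux containers

client = docker.from_env()

# Assumed tag for the 2019 CTP image; verify the current tag on Microsoft's container registry.
image = "mcr.microsoft.com/mssql/server:2019-CTP2.0-ubuntu"
client.images.pull(image)

container = client.containers.run(
    image,
    name="sql2019ctp",
    detach=True,
    environment={"ACCEPT_EULA": "Y", "SA_PASSWORD": "Testing!12345"},  # throwaway password for a local sandbox
    ports={"1433/tcp": 1433},  # publish SQL Server's default port on the host
)
print(container.name, container.status)

# When you're done evaluating, nothing lingers on the host:
# container.stop(); container.remove()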

Comments closed

SQL Server 2019 Containers Available

The SQL Server team has a getting-started post on pulling down the latest CTP in a container, as well as on some additional container features:

SQL Server 2019 is now available on Red Hat Enterprise Linux as Red Hat Certified Container Images and as Ubuntu-based container images, enabling you to take advantage of the latest SQL Server engine innovations such as new SQL Graph features, and Data Discovery and Classification. We are also making it possible to adopt SQL Server in containers with existing scenarios such as Replication and Distributed Transactions, which are now part of SQL Server 2019 on Linux.

This makes it easier to get started with SQL Server 2019 without potentially messing up your already-working systems.
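Once a container is up, a quick connectivity check from the host is enough to confirm you’re on the new build. This is a minimal sketch assuming a local container listening on the default port with a hypothetical sa password.

import pyodbc

# Hypothetical local container: adjust server, port, and password to match your own setup.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=localhost,1433;UID=sa;PWD=Testing!12345;"
    "TrustServerCertificate=yes;"
)
row = conn.cursor().execute("SELECT @@VERSION;").fetchone()
print(row[0])  # should report a SQL Server 2019 CTP build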

Comments closed

Improvements In Table Variable Performance In SQL Server 2019

Matthew McGiffen tries out SQL Server 2019 to test a scenario where table variables were giving poor estimates in prior versions:

One of the most popular posts on my blog last year was where I pretty much suggested that people not use table variables:

Think twice before using table variables

This wasn’t new information when I wrote it, but bad performance due to the use of table variables remained such a common anti-pattern that I thought it was worth stressing again.

So, when I saw the above 2019 feature I thought I’d better investigate and update what I’m telling people.

TL;DR It looks like table variables are no longer a problem.

Read the whole thing.  This has the potential to change long-standing advice on table variables going back a decade.
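If you want to see the difference yourself, a repro along these lines should do it (sketched here in Python with pyodbc; the Orders table and connection details are hypothetical): run the same batch under compatibility level 140 and 150 and compare the estimated rows for the join against the table variable.

import pyodbc

# Hypothetical connection and table; the point is the shape of the batch, not the schema.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};Server=.;Database=TestDB;Trusted_Connection=yes;"
)

batch = """
SET NOCOUNT ON;

DECLARE @ids TABLE (id INT PRIMARY KEY);

-- Load a non-trivial number of rows into the table variable.
INSERT INTO @ids (id)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

-- Before SQL Server 2019 this join is compiled with a fixed 1-row estimate for @ids;
-- under compatibility level 150, deferred compilation uses the actual row count.
SELECT COUNT(*)
FROM dbo.Orders AS o            -- hypothetical large table
JOIN @ids AS i ON i.id = o.CustomerID;
"""

print(conn.cursor().execute(batch).fetchone()[0])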

Comments closed

PFS Corruption When Moving From SQL Server 2014

Paul Randal notes a bug in SQL Server 2014:

I’m seeing reports from a few people of DBCC CHECKDB reporting PFS corruption after an upgrade from SQL Server 2014 to SQL Server 2016 or later. The symptoms are that you run DBCC CHECKDB after the upgrade and get output similar to this:

Msg 8948, Level 16, State 6, Line 5
Database error: Page (3:3863) is marked with the wrong type in PFS page (1:1). PFS status 0x40 expected 0x60.
Msg 8948, Level 16, State 6, Line 5
Database error: Page (3:3864) is marked with the wrong type in PFS page (1:1). PFS status 0x40 expected 0x60.
CHECKDB found 2 allocation errors and 0 consistency errors not associated with any single object.
CHECKDB found 2 allocation errors and 0 consistency errors in database 'MyProdDB'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (MyProdDB).

I’ve discussed with the SQL Server team and this is a known bug in SQL Server 2014.

Read on for the fix and additional good advice.

Comments closed

Writing To Elasticsearch With Spark Streaming

Anuj Saxena has an example of writing data from a Spark Structured Streaming pipeline out to Elasticsearch:

We have been working with streaming data for a long time, and using Apache Spark for that can be very convenient. Spark provides two APIs for streaming data: Spark Streaming, which is a separate library provided by Spark, and Structured Streaming, which is built upon the Spark SQL library. We will discuss the trade-offs and differences between these two libraries in another blog, but today we’ll focus on saving streaming data to Elasticsearch using Spark Structured Streaming. Elasticsearch added support for Spark Structured Streaming 2.2.0 onwards in version 6.0.0 of the “Elasticsearch For Apache Hadoop” dependency. We will be using these versions or higher to build our sbt-scala project.

Click through for an example.
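The post builds an sbt/Scala project, but the shape of the pipeline looks much the same from PySpark. Here is a rough sketch, assuming the elasticsearch-hadoop connector (6.0+) is on the classpath, a local Elasticsearch node, and a socket source standing in for whatever stream you actually have.

from pyspark.sql import SparkSession

# Assumes the connector jar is available, e.g. started with:
#   --packages org.elasticsearch:elasticsearch-spark-20_2.11:6.4.2
spark = (SparkSession.builder
         .appName("stream-to-elasticsearch")
         .config("spark.es.nodes", "localhost")   # hypothetical local Elasticsearch node
         .config("spark.es.port", "9200")
         .getOrCreate())

# Any streaming source works; a socket source keeps the example self-contained.
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# "es" is the short name the elasticsearch-hadoop connector registers for Structured Streaming sinks.
query = (lines.writeStream
         .format("es")
         .option("checkpointLocation", "/tmp/es-checkpoint")   # required for the es sink
         .start("sparkdemo/doc"))                              # index/type to write to

query.awaitTermination()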

Comments closed

Wasting Money With Data Science

Giovanni Lanzani has a post with the controversial title above:

Some data is gathered, given to data scientists, and — after two weeks — the first demo takes place. The results are promising, but they need a bit more time.

Fine. After all, the data was messy: they had to clean it up and go back to the source a couple of times.

Two weeks pass and the new results are even nicer. With 70% accuracy, they can predict if a patient will go home after their visit to the emergency room.

This is much better than random (50%)! A full-fledged pilot starts.

They are faced with a couple of challenges to go from model to data product:

  • How to send the source data to the model is unclear;

  • Where the model should run;

  • The hospital operations need to change, as the intake happens with pen and paper;

  • They realize that without knowing to which department the patient will go, they won’t add any value;

  • To predict the department, the model needs the diagnosis. But once the diagnosis gets typed into the computer, the patient has already reached their destination: the model is useless!

I think it’s a fair point:  from the standpoint of internal researchers, it’s easy to look for things which they can do, but which don’t have much business value.  The risk on the other side is that you’ll start diving into a high-potential-value problem and then realize that the data isn’t there to draw conclusions or that the relationships you expected simply aren’t there.

Comments closed

Be Careful Of P-Hacking

Vincent Granville discusses the problem of p-hacking:

I read an article this morning about a top Cornell food researcher having 13 studies retracted, see here. It prompted me to write this blog. It is about data science charlatans and unethical researchers in academia, destroying the value of p-values again, using a well-known trick called p-hacking, to get published in top journals and get grant money or tenure. The issue is widespread, not just in academic circles, and makes people question the validity of scientific methods. It fuels the fake “theories” of those who have lost faith in science.

The trick consists of repeating an experiment sufficiently many times, until the conclusions fit with your agenda. Or of cherry-picking the data you use, or even discarding observations deemed to have a negative impact on the conclusions. Sometimes, causation and correlation are mixed up on purpose, or misleading charts are displayed. Sometimes, the author lacks statistical acumen.

Usually, these experiments are not reproducible. Even top journals sometimes accept these articles, due to

  • Poor peer-review process

  • Incentives to publish sensational material

Wansink is a charlatan.  But beyond p-hacking is Andrew Gelman and Eric Loken’s Garden of Forking Paths.  Gelman’s blog, incidentally (example), is where I originally learned about Wansink’s shady behaviors.  Gelman also warns us not to focus on the procedural, but instead on a deeper problem.
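The mechanics behind the trick are easy to demonstrate: test pure noise enough times and something will slip under p < 0.05. A small simulation (Python with numpy and scipy; the sample sizes and number of tries are arbitrary) makes the point.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2018)
n_experiments = 20   # the "repeat until it works" part
n_samples = 30

p_values = []
for _ in range(n_experiments):
    # Both groups come from the same distribution, so the null hypothesis is true by construction.
    a = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    b = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    _, p = stats.ttest_ind(a, b)
    p_values.append(p)

print(f"smallest p-value over {n_experiments} tries: {min(p_values):.4f}")
# Probability of at least one "significant" result at alpha = 0.05 across 20 independent tries:
print(f"expected false-positive chance: {1 - 0.95 ** n_experiments:.2f}")  # about 0.64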

1 Comment

Databricks Delta Now Available On Azure

Cihan Biyikoglu and Singh Garewal announce the availability of Databricks Delta on Azure Databricks:

Using an innovative new table design, Delta supports both batch and streaming use cases with high query performance and strong data reliability while requiring a simpler data pipeline architecture:

Increased query performance – Able to deliver 10 to 100 times faster performance than Apache Spark™ on Parquet through the use of key enablers such as compaction, flexible indexing, multi-dimensional clustering and data caching.

Improved data reliability – By employing ACID (“all or nothing”) transactions, schema validation / enforcement, exactly once semantics, snapshot isolation and support for UPSERTS and DELETES.

Reduced system complexity – Through the unification of batch and streaming in a common pipeline architecture – being able to operate on the same table also means a shorter time from data ingest to query result. Schema evolution provides the ability to infer schema from input data making it easier to deal with changing business needs.

The Azure version of Databricks is quickly reaching parity with the classic AWS-hosted version.
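The batch-and-streaming unification is easiest to see with a tiny example. This sketch (PySpark on a cluster with Delta available; the paths are made up) appends to one Delta table from a batch job and a streaming job, then reads it back with a plain batch query.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # on Databricks a session already exists

delta_path = "/mnt/demo/events_delta"                 # hypothetical storage path
checkpoint = "/mnt/demo/_checkpoints/events_delta"    # hypothetical checkpoint location

# Batch append.
batch_df = spark.range(0, 1000).withColumnRenamed("id", "event_id")
batch_df.write.format("delta").mode("append").save(delta_path)

# Streaming append into the very same table.
stream_df = (spark.readStream
             .format("rate")                 # built-in test source: emits rows with an increasing value
             .option("rowsPerSecond", 10)
             .load()
             .selectExpr("value AS event_id"))

stream_query = (stream_df.writeStream
                .format("delta")
                .option("checkpointLocation", checkpoint)
                .start(delta_path))

# A plain batch read sees a consistent snapshot that includes both writers' data.
print(spark.read.format("delta").load(delta_path).count())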

Comments closed