
December 4, 2017

In-Memory OLTP Isolation Levels

Kendra Little has a webcast + transcript covering isolation levels when dealing with In-Memory OLTP:

Whatever you’re testing, and I really think this is an exciting area of SQL Server to ask, “hey, could I engineer something really fast with this?”, the isolation levels here can be confusing, because this is really different from disk-based rowstore tables in SQL Server, and from disk-based columnstore as well.

It’s not bad, and I think once you start playing with it, it isn’t even hard, in part because there are only three In-Memory OLTP isolation levels! That’s fewer than for disk-based tables, and once you get used to how they work, I think it makes sense.

But it is really different; it’s definitely worth playing around with.

I’d really like to see Read Committed isolation become available for In-Memory OLTP someday, but in the meantime, Kendra does a good job walking through the isolation levels which actually exist.
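For a sense of what this looks like in practice, here is a minimal sketch (the table name is hypothetical): memory-optimized tables support only SNAPSHOT, REPEATABLE READ, and SERIALIZABLE, and within an explicit transaction you typically request the level with a table hint.

BEGIN TRANSACTION;

-- Request snapshot isolation against a memory-optimized table via a table hint
SELECT OrderID, Quantity
FROM dbo.OrdersInMem WITH (SNAPSHOT);

COMMIT;

-- Or let the database promote READ COMMITTED requests to SNAPSHOT automatically
ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;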


Functional Programming And Microservices

Bobby Calderwood might win me over on microservices with talk like this:

This view of microservices shares much in common with object-oriented programming: encapsulated data access and mutable state change are both achieved via synchronous calls, the web of such calls among services forming a graph of dependencies. Programmers can and should enjoy a lively debate about OO’s merits and drawbacks for organizing code within a single memory and process space. However, when the object-oriented analogy is extended to distributed systems, many problems arise: latency which grows with the depth of the dependency graph, temporal liveness coupling, cascading failures, complex and inconsistent read-time orchestration, data storage proliferation and fragmentation, and extreme difficulty in reasoning about the state of the system at any point in time.

Luckily, another programming style analogy better fits the distributed case: functional programming. Functional programming describes behavior not in terms of in-place mutation of objects, but in terms of the immutable input and output values of pure functions. Such functions may be organized to create a dataflow graph such that when the computation pipeline receives a new input value, all downstream intermediate and final values are reactively computed. The introduction of such input values into this reactive dataflow pipeline forms a logical clock that we can use to reason consistently about the state of the system as of a particular input event, especially if the sequence of input, intermediate, and output values is stored on a durable, immutable log.

It’s an interesting analogy.
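One loose way to see the analogy in database terms (this sketch and its schema are mine, not Bobby’s): keep an append-only event log and derive state from it, rather than mutating rows in place.

-- Append-only log; the identity column acts as the logical clock
CREATE TABLE dbo.AccountEvents
(
    EventID   bigint IDENTITY(1,1) PRIMARY KEY,
    AccountID int   NOT NULL,
    Amount    money NOT NULL   -- deposits positive, withdrawals negative
);
GO
-- State is a pure function of the log; "as of event N" is just a WHERE clause
CREATE VIEW dbo.AccountBalances AS
SELECT AccountID, SUM(Amount) AS Balance
FROM dbo.AccountEvents
GROUP BY AccountID;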


Storage For In-Memory OLTP

Ned Otter has started a new series on In-Memory OLTP:

A memory-optimized database must have a special filegroup designated for memory-optimized data, known as a memory-optimized filegroup. This special filegroup is logically associated with one or more “containers”. What the heck is a “container”? Well, it’s just a fancy word for “folder”, nothing more, nothing less. But what is actually stored in those fancy folders?

Containers hold files known as “checkpoint file pairs”, which are also known as “data and delta files”, and these files persist durable memory-optimized data (in this blog post series, I’ll use the terms CFP and data/delta files interchangeably). You’ll note in the image in the original post that it clearly states in bold red letters, “NO MAXSIZE” and “STREAMING”. “NO MAXSIZE” means that you can’t specify how large these files will grow, nor can you specify how large the container that houses them can grow (unless you set a quota, but you should NOT do that). And there’s also no way at the database level to control the size of anything having to do with In-Memory OLTP storage – you simply must have enough available free space for the data and delta files to grow.

This is the first potential resource issue for In-Memory OLTP: certain types of data modifications are no longer allowed if the volume your container resides upon runs out of free space. I’ll cover workload recovery from resource depletion in a future blog post.

Read the whole thing.  I’m looking forward to this series.
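To make the terminology concrete, here is a minimal sketch (database name and path are made up) of adding the memory-optimized filegroup and a single container. Note the absence of any SIZE or MAXSIZE clause:

ALTER DATABASE SalesDB
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

-- The "file" here is really a folder (container) which holds the checkpoint file pairs
ALTER DATABASE SalesDB
    ADD FILE (NAME = N'imoltp_container1', FILENAME = N'D:\IMOLTP\Container1')
    TO FILEGROUP imoltp_fg;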


Protecting RDP With SSH

Chrissy LeMaire has a two-parter on enabling SSH tunneling on Windows 10.  First, if you are using the Fall Creators Update:

Gotta say I’m super thankful for Chris K’s blog post “Enabling the hidden OpenSSH server in Windows 10 Fall Creators Update (1709) — and why it’s great!“, otherwise this would have taken me far longer to figure out.

So next, run PowerShell as Administrator, then generate a key:

cd C:\windows\system32\OpenSSH
ssh-keygen -A    # generates any default host keys that don't already exist

Alternatively, if you are not using the Fall Creators Update:

First, bash for Windows must be set up. This requires Windows 10 or Windows Server 2016.

Note: this was written for Windows 10 pre-1709. Apparently, the new update contains a ton of changes: developer mode is not required and you install your Linux distro from the Windows Store. It seems that it may even include OpenSSH right out of the box. I’ll test on Tuesday and let you all know. Till then, here is how to do it if you’ve got Windows 10 without the Fall Creators Update (FCU).

Doing this limits the ability of an attacker to snoop on your RDP traffic.
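For reference, the tunnel itself is a one-liner (host and user are placeholders): forward a local port through SSH to the remote machine’s RDP port.

ssh -L 13389:localhost:3389 user@remote-host

With the tunnel up, point your RDP client at localhost:13389 and the session rides inside the encrypted SSH connection.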


Updating Statistics On System Tables

Dan Guzman shows that you can update statistics on system tables:

The solution in many cases is simply to update statistics on the underlying system table indexes used by the problem DMV queries. This can be done selectively by identifying the system table indexes referenced in the seek and scan operators of the problem query’s execution plan and then executing UPDATE STATISTICS on each index. However, the task is somewhat tedious.

Alternatively, one can simply update stats on all the system tables. Below is a script that generates and executes DDL to update stats on all non-empty system tables, making quick work of this.

Click through for the script, as well as an interesting note if you try to use constructs like @sql = @sql + N’some string’ in your code.
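As a rough illustration of the approach (my sketch, not Dan’s script; STRING_AGG requires SQL Server 2017 or later), you can build one UPDATE STATISTICS statement per non-empty system base table while sidestepping the concatenation construct that note warns about:

DECLARE @sql nvarchar(max);
DECLARE @nl  nchar(1) = NCHAR(10);

SELECT @sql = STRING_AGG(
           CAST(N'UPDATE STATISTICS sys.' + QUOTENAME(o.name) + N';' AS nvarchar(max)),
           @nl)
FROM sys.objects AS o
WHERE o.type = 'S'   -- system base tables
  AND EXISTS (SELECT 1 FROM sys.partitions AS p
              WHERE p.object_id = o.object_id AND p.rows > 0);

EXEC sys.sp_executesql @sql;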


Reporting Services Project Gymnastics

Nate Johnson had a bad experience with Visual Studio-based SQL Server Reporting Services projects:

So, what have we learned?  Well, for one, this is a crappy situation born of poor in-product support.  I should be able to configure Solution-level shared Data Sources, use them in as many Projects (within said Solution) as I want, and have VS configuration management support them; bonus points for doing so with saved & encrypted credentials.  Ideally, when we check this into source-control, we’d check in the “DEV” environment flavor connection-configs.  Then, when the reports get deployed to the “PROD” SSRS server, the same globally shared Data Sources are already present (and they don’t get over-written, thankfully by default!), configured by the DBA with prod credentials, and nobody in the development pipeline needs to know said credentials.  Yay?

Not exactly a ringing endorsement.


Don’t Forget NOCOUNT

Lonny Niederstadt shows just how expensive printing out result counts can be:

OK.  Now Aaron Bertrand has a post from February 2016…
Performance Surprises and Assumptions : SET NOCOUNT ON
https://sqlperformance.com/2016/02/t-sql-queries/nocount

In that blog post the potential performance benefit of NOCOUNT ON was elusive. And, to be honest, this round of my tests was NOT about NOCOUNT but about other stuff; I just happened to stumble on NOCOUNT while sifting through results after some early testing. But with these results in hand, maybe NOCOUNT has a few more performance surprises for all of us 🙂

For a single run, printing out counts isn’t that expensive, but when doing a lot of work, it can add up.
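As a quick illustration, a chatty loop like this one (the temp table is a throwaway) sends a separate “rows affected” message to the client for every statement unless NOCOUNT is on:

SET NOCOUNT ON;   -- suppresses the DONE_IN_PROC / "N rows affected" messages

CREATE TABLE #t (id int NOT NULL);

DECLARE @i int = 1;
WHILE @i <= 100000
BEGIN
    INSERT INTO #t (id) VALUES (@i);   -- without NOCOUNT: 100,000 tiny messages
    SET @i += 1;
END;

DROP TABLE #t;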
