Press "Enter" to skip to content

Category: T-SQL Tuesday

Compression Delay

Rob Farley digs into Compression Delay as part of real-time operational analytics:

The thing with Operational Analytics is that the analytical data, reporting data, warehouse-style data, is essentially the same data as the transactional data. Now, it doesn’t look quite the same, because it hasn’t been turned into a star schema or had slowly changing dimension considerations applied, but for the purposes of seeing what’s going on, it’s data that’s capable of handling aggregations over large amounts of data. It’s columnstore.

Now, columnstore data isn’t particularly suited to transactional work. Finding an individual row within columnstore data can be tricky; that kind of lookup is much better suited to rowstore. So when data is being manipulated quite a lot, columnstore isn’t necessarily a good fit. Rowstore is simply better for this.

But with SQL Server 2016, we get updateable non-clustered columnstore indexes: data which is a copy of the underlying table (non-clustered data is a copy; clustered data or heap data is the underlying table). This alone presents a useful opportunity, as we can maintain a columnstore copy of the data for analytics while handling individual row updates in the rowstore.
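To make this concrete, here’s a minimal sketch of the kind of index being described, with a hypothetical table and columns; the COMPRESSION_DELAY option holds freshly modified rows in delta rowgroups for a set number of minutes before compressing them into columnstore:

  -- Hypothetical example: analytics copy of a hot transactional table
  CREATE NONCLUSTERED COLUMNSTORE INDEX IX_OrderDetail_Analytics
      ON dbo.OrderDetail (OrderID, ProductID, Quantity, UnitPrice)
      WITH (COMPRESSION_DELAY = 10 MINUTES);  -- let hot rows settle before compression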

Read the whole thing.


T-SQL Tuesday Roundup

Chris Yates provides a round-up for the latest T-SQL Tuesday:

The roundup is finally here, and cheers to all of you who participated. We had a great turnout this month with many returning participants along with some newcomers.

We had a wide range of topics with many great insights from everyone, but don’t take my word for it. Check out the links below and see what your colleagues from around the SQL Community had to say:

Click through for the links.


Desired Enhancements

Jason Brimhall has a list of some nice SQL Server updates and bug fixes, followed by some things he’d like to see:

Digging a little deeper on this one: I would really love to see an enhancement to Resource Governor. Not just any enhancement will do. I need it to be enhanced so it will also affect the Reporting Services engine and the Integration Services engine in addition to the database engine. I want to be able to use RG to prevent certain reports from over-consuming resources within the SSRS engine. Or, for that matter, I want to make sure certain SSIS packages do not consume too much memory. If I could implement constraints on resources for these two engines, it would be a huge improvement.
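For context, here’s a rough sketch of what Resource Governor can already do for the database engine (the pool, group, and login names below are invented for illustration); Jason’s wish is for these same knobs to reach SSRS and SSIS:

  USE master;
  GO
  -- Cap CPU and memory for sessions classified into a reporting pool
  CREATE RESOURCE POOL ReportPool WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 25);
  CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
  GO
  -- Route sessions from a hypothetical reporting login into that group
  CREATE FUNCTION dbo.rgClassifier() RETURNS sysname WITH SCHEMABINDING
  AS
  BEGIN
      RETURN (CASE WHEN SUSER_SNAME() = N'ReportUser' THEN N'ReportGroup' ELSE N'default' END);
  END;
  GO
  ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
  ALTER RESOURCE GOVERNOR RECONFIGURE;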

Check it out.


Changing AG Endpoint Ports

Matt Gordon walks through changing endpoint ports:

Since a communication failure within an AG is usually a “hair on fire” kind of event, a quick fix may be desired. The quickest fix I’ve found is to change the port on which the AG endpoint is listening. While the Microsoft help pages on this contain enough information to put together the right script eventually, the first time this happened to me I really would have liked a blog post specific to this issue that gave me the right script to use.

In keeping with this T-SQL Tuesday’s theme, below is my SQL Server present. Please note that I used 5023 as an example, but your choice can be flexible depending on what else is consuming ports on your machine. Hopefully this helps somebody (or me, if I manage to travel back in time and encounter this same issue).
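Matt’s actual script is in his post; as a rough sketch of the general shape (the endpoint, availability group, and server names here are placeholders), changing the port means altering the endpoint on each replica and pointing the AG’s replica definitions at the new port:

  -- On each replica: move the mirroring endpoint to the new port
  ALTER ENDPOINT [Hadr_endpoint] AS TCP (LISTENER_PORT = 5023);

  -- Update the AG so the replicas know where to find each other
  ALTER AVAILABILITY GROUP [MyAG]
      MODIFY REPLICA ON N'ReplicaServer'
      WITH (ENDPOINT_URL = N'TCP://ReplicaServer.domain.com:5023');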

In an ideal world, you’d want a bit more control over what’s running on various ports to prevent this scenario, but that’s probably wishful thinking.


DBCC CLONEDATABASE

Kennie Pontoppidan talks about a new DBCC command:

This command enables you to copy the schema and the statistics of an existing database into a new database. According to the documentation, DBCC CLONEDATABASE:

  • Creates a new destination database that uses the same file layout as the source, but with default file sizes taken from the model database.

  • Creates an internal snapshot of the source database.

  • Copies the system metadata from the source to the destination database.

  • Copies all schema for all objects from the source to the destination database.

  • Copies statistics for all indexes from the source to the destination database.

I knew there were ways of scripting out statistics, but DBCC CLONEDATABASE looks like a new and interesting beast.
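For reference, the invocation itself is a one-liner (the database names below are just examples):

  -- Creates a schema-and-statistics-only copy; no user data comes across
  DBCC CLONEDATABASE (AdventureWorks2014, AdventureWorks2014_Clone);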


SSIS Deployment Models

Ginger Grant argues in favor of the project deployment model in SSIS:

Prior to SQL Server 2012, there was no project deployment. SSIS code was all deployed as packages. These packages could be stored within MSDB, or they could be stored and run from the file system. In disorganized places like the one where I worked, they were deployed in both. Assuming nothing much changed since I left, they have all versions of SQL Server which were released prior to the day the new IT Director started in 2012. There was DTS on SQL Server 7.0 and 2000, and SSIS on 2005, 2008, and 2008 R2. Their motto was: no reason to upgrade anything which still worked. When space was a problem, one could always go build another server. I think the LAN administrator was happiest when he was able to justify building a new server, as he could spend hours shopping for parts on the internet and building the latest server.

I believe that, in the business, we call this a “nightmare scenario.”  Read on for ways in which a project model would help.


2016 SOS_RWLock

Ewald Cress continues his series on internals, and looks at how SOS_RWLock has changed in SQL Server 2016:

Allow me to call out some layout comparison points against the 2014 version:

  • There is no separate member to track the shared reader count.

  • The four-byte spinlock is gone.

  • The four-byte waiting writer count is gone.

  • The two chunks of four-byte padding (for qword alignment of pointers) are gone.

  • The WaitListEntry structure hasn’t changed at all.

Ewald also covers Compare-And-Swap operations in detail.  Definitely a good read.


T-SQL Tuesday Roundup

Michael Swart rounds up the usual suspects:

There’s always some anxiety when throwing a party. Wondering whether it will be a smash. Well, I had nothing to worry about with the twenty bloggers who participated last week. You guys hit it out of the park!

Michael put a lot of effort into making his round-up look nice and making my life a little easier by exposing me to a couple of blogs I didn’t know about. Great job.


Using Query Store

Justin Goodwin looks at Query Store in SQL Server 2016:

You can see that our problem query is incredibly easy to find in the top left window based on total duration. Also notice that in the top right Plan summary window, there is currently only one available plan for the query (plan_id 49).

We need to figure out how to get our ‘good plan’ (generated using Trace Flag 9481) captured as an available plan that we can force using the Query Store.

The cardinality estimator change in SQL Server 2014 wasn’t perfect, but when you can fix individual plans like this, it makes SQL Server much more powerful.
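As a sketch of the two moving parts here (the query shape and the IDs below are illustrative, not from Justin’s post): re-run the query under the legacy cardinality estimator so Query Store captures the good plan, then pin it:

  -- Re-run under the legacy (pre-2014) cardinality estimator; requires appropriate permissions
  SELECT COUNT(*)
  FROM dbo.SomeBigTable             -- hypothetical table
  WHERE SomeColumn = 42
  OPTION (QUERYTRACEON 9481);       -- legacy cardinality estimator

  -- Force the captured plan; query_id and plan_id come from the Query Store reports
  EXEC sys.sp_query_store_force_plan @query_id = 49, @plan_id = 50;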


Row-Level Security

Steve Jones talks about row-level security:

I’ve wanted this feature to be easy for a long time. In fact, I’ve implemented a similar system a few times in different applications, but it’s been a cumbersome requirement to meet, and each developer needs to understand how the system works for it to work well. Even when we used views to hide our RLS, it was a performance issue.

Microsoft has made things easier with their Row Level Security feature. This was actually released in Azure SQL Database in 2015, but it’s now available in SQL Server 2016 for every on-premises installation as well.

Essentially, for each row, there is some data value that is checked to determine if a user has access. This doesn’t mean a join. This doesn’t mean you write a lot of code. The implementation is simple and straightforward, and I like it.
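Here’s a minimal sketch of the pattern being described (the table, column, and function names are invented for illustration): an inline predicate function, plus a security policy binding it to the table:

  -- Inline table-valued predicate: a row is visible only to its assigned rep
  CREATE FUNCTION dbo.fn_SalesFilter (@SalesRep AS sysname)
  RETURNS TABLE
  WITH SCHEMABINDING
  AS
  RETURN SELECT 1 AS fn_result
         WHERE @SalesRep = USER_NAME();
  GO

  -- Bind the predicate; filtering now happens transparently on every query
  CREATE SECURITY POLICY dbo.SalesPolicy
      ADD FILTER PREDICATE dbo.fn_SalesFilter(SalesRep) ON dbo.Sales
      WITH (STATE = ON);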

The implementation is pretty easy, but I’m concerned about the performance.  At least from my early view, this adds a good bit of performance drag on your queries.  That’s not so bad in a data warehouse environment, but in a busy transactional system, I’m not sure it can keep up.  I’d be interested in hearing other thoughts, though.
