
Curated SQL Posts

Filestream Garbage Collection Bug

Andy Mallon found a bug with Filestream:

I tracked the problem back to this specific scenario: When you have a database that is part of an Availability Group and you drop a table that contains filestream data, the filestream garbage collection does not clean up the data container subdirectory that corresponds to that table. Garbage collection will continue to clean up other items (e.g., deleted rows), but the dropped table never gets cleaned up.

I’ve logged this as a Connect item for your up-voting pleasure.

Go forth and upvote.
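
For anyone who wants to poke at the behavior, here is a minimal sketch, assuming placeholder names (MyAGDatabase, dbo.DocumentStore) and using sys.sp_filestream_force_garbage_collection to request a collection pass on demand.

    -- Placeholder names: MyAGDatabase is a database in an Availability Group and
    -- dbo.DocumentStore is a table with a FILESTREAM column.
    USE MyAGDatabase;
    GO
    DROP TABLE dbo.DocumentStore;
    GO
    -- Filestream garbage collection generally needs the relevant log records to be
    -- backed up/truncated first, so take a log backup before forcing a pass.
    BACKUP LOG MyAGDatabase TO DISK = N'\\backupshare\MyAGDatabase_log.trn';
    EXEC sys.sp_filestream_force_garbage_collection @dbname = N'MyAGDatabase';
    -- Per the bug report, the data container subdirectory for the dropped table
    -- remains on disk even after garbage collection runs.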


In-Memory Analytics

Sunil Agarwal introduces us to In-Memory Analytics, forthcoming in SQL Server 2016:

SQL Server 2016 has significant advancements over SQL Server 2014 for In-Memory analytics. Some highlights are functionality (e.g., the ability to create a traditional nonclustered index to enforce PK/FK), performance (e.g., the addition of new batch mode operators and aggregate pushdown), online index defragmentation, and supportability (e.g., new DMVs, Perfmon counters, and XEvents).

His post talks a little bit about in-memory, but focuses more on clustered columnstore indexes. I like that columnstore indexes are getting V3 improvements, and I think they’ll be even more useful. Whether the “in-memory” part becomes useful is a different question; I personally have seen very limited adoption of In-Memory OLTP (and a few huge bugs for the people brave enough to try it).
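
As a quick illustration of the PK/FK piece, here is a minimal sketch (table and index names are hypothetical) of the combination SQL Server 2016 allows and 2014 did not: a clustered columnstore index alongside a traditional nonclustered primary key.

    -- Hypothetical fact table: the nonclustered PK enforces uniqueness while the
    -- clustered columnstore index provides the columnstore storage for analytics.
    CREATE TABLE dbo.FactSales
    (
        SaleID     BIGINT        NOT NULL,
        CustomerID INT           NOT NULL,
        SaleDate   DATE          NOT NULL,
        Amount     DECIMAL(19,4) NOT NULL,
        CONSTRAINT PK_FactSales PRIMARY KEY NONCLUSTERED (SaleID)
    );

    CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;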


Run DBCC CHECKDB

Andy Galbraith reminds us to check database consistency:

At the end of the day you can’t afford *not* to run CHECKDB – you just can’t. Your database objects are only as reliable as your last clean CHECKDB (meaning your last CHECKALLOC + CHECKCATALOG + individual CHECKTABLEs on all of your tables – but more on that in a moment).

If you are not running regular CHECKDB, you may have endless unknown corruption across your databases, and you won’t find out until someone tries to access the data.

Even if your production instance is running at 100% all the time (which is a good indicator that you need more resources), you can run CHECKDB against backups restored to a different server.
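
A hedged sketch of that offloading approach, with all paths and names as placeholders: restore the latest full backup to a non-production server and run the consistency check there, which validates the data as of that backup without touching production.

    -- Restore a copy of the production backup on a non-production instance.
    RESTORE DATABASE SalesDB_Check
        FROM DISK = N'\\backupshare\SalesDB\SalesDB_Full.bak'
        WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_Check.mdf',
             MOVE N'SalesDB_log' TO N'L:\Log\SalesDB_Check.ldf',
             STATS = 10;

    -- NO_INFOMSGS and ALL_ERRORMSGS limit the output to actual problems.
    DBCC CHECKDB (N'SalesDB_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;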


Entity Framework Performance Problems

Cristian Satnic points out potential performance issues with using Entity Framework:

Entity Framework will not exactly issue SELECT * FROM commands – what it will do, though, is issue explicit SELECT statements that include ALL columns in a particular table. If you see that most SQL queries are selecting all columns this way (especially from large tables when it appears that the UI is not using all that data), then you know that developers got a little sloppy with their code. It’s very easy in Entity Framework to bring back all columns – it takes more work and thought to build a custom LINQ projection query to select only the needed columns.

It is taking all of my strength not to say “A tell-tale sign of an Entity Framework performance problem is seeing Entity Framework in your environment.”

Satnic’s advice isn’t EF-specific, but his link to Microsoft guidance is.
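
To make the pattern easier to spot in a trace, here are two illustrative queries (dbo.Customers and its columns are hypothetical): the first is the shape Entity Framework tends to emit when a whole entity is materialized; the second is what a narrow LINQ projection can produce when the UI only needs a couple of fields.

    -- Full-entity load: every column comes back whether or not the UI uses it.
    SELECT c.CustomerID, c.FirstName, c.LastName, c.Email, c.Phone,
           c.BillingAddress, c.ShippingAddress, c.Notes, c.CreatedDate, c.ModifiedDate
    FROM dbo.Customers AS c
    WHERE c.CustomerID = @p0;

    -- Projection: only the columns the caller actually asked for.
    SELECT c.FirstName, c.LastName
    FROM dbo.Customers AS c
    WHERE c.CustomerID = @p0;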


Master Data Services Bug

Kenneth Nielsen has found a bug in Master Data Services:

Something is not right here; there is some inconsistency in the way MDS behaves with regard to attribute management.

  • Changes to an attribute’s length are not supported in the web interface
  • Changes to an attribute’s length are supported in the Excel Add-In
  • Changes to an attribute’s length are limited to a max of 1000 in the Excel Add-In
  • New attributes support a max length of 4000 in the web interface
  • New attributes support a max length of 1000 in the Excel Add-In
  • It is not at all possible to change an attribute’s length once it is set to 4000

With SQL Server 2016 CTP 3.1, Nielsen found that a couple of these have been corrected, but not all of them.

Update, 2016-01-04:

Kenneth notes that the bug will be fixed in CTP 3.3.  Very nice.


Loading Azure SQL Database Data

Mickey Stuewe digs into Azure SQL Database:

While I waited…and waited for my Utility data to be inserted into my Azure database, I did some poking around to see if it was even possible to restore a local SQL Server 2014 backup to an Azure database. Guess what, I found something (And there was much rejoicing).

On CodePlex, I found a SQL Database Migration Wizard that can be used to restore the database. They even had Migration Wizards for 2008R2, 2012, and 2014 (SQL Database Migration Wizard v3.15.6, v4.15.6, and v5.15.6).

If you have an MSDN license, go play with this.  Even if you don’t, the lowest-tier Azure SQL Database instances cost just a few dollars a month, so there’s little reason not to learn about them.


Power Pivot Compression

There might be a theme to today’s posts…

Matt Allington shows us compression in Power Pivot:

Power Pivot would end up storing a table that looks more like the black table above (rather than the blue one), keeping just the minimum amount of information it needs to rebuild the real table of data on the fly when and if required. If the black RLE table ended up taking more space than the original column of data, then there would be no benefit to RLE and the original column of data would be stored. Power Pivot may use one or more of the other compression techniques as well as, or instead of, RLE – it all depends on the specifics of the actual data.

This is a very interesting look at ways the Power Pivot team optimize data storage.
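
The RLE idea itself is easy to see outside of Power Pivot. Here is a purely illustrative T-SQL sketch (it says nothing about Power Pivot’s internal storage) that collapses a sorted column into value / starting row / run length, the same shape as the black RLE table in the post:

    -- Sample sorted column: nine rows but only three distinct runs.
    DECLARE @Sales TABLE (RowNum INT NOT NULL, SalesQuarter CHAR(2) NOT NULL);
    INSERT INTO @Sales VALUES
        (1,'Q1'),(2,'Q1'),(3,'Q1'),(4,'Q2'),(5,'Q2'),(6,'Q3'),(7,'Q3'),(8,'Q3'),(9,'Q3');

    WITH Runs AS
    (
        SELECT RowNum, SalesQuarter,
               RowNum - ROW_NUMBER() OVER (PARTITION BY SalesQuarter ORDER BY RowNum) AS Grp
        FROM @Sales
    )
    SELECT SalesQuarter,
           MIN(RowNum) AS StartRow,
           COUNT(*)    AS RunLength
    FROM Runs
    GROUP BY SalesQuarter, Grp
    ORDER BY StartRow;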


Shredding SSAS XEs

Bill Anton shows three separate methods for parsing SQL Server Analysis Services Extended Event data:

Warning, this method involves the use of .NET – proceed with caution! That said, please proceed because the results are certainly worth it.

The same 32MB file that took 5 minutes using the first method and 2 minutes using the second method now only took ~3 seconds using this method. Also, the processing time is essentially linear – so even when we bump it up to 4 files (or ~128MB of data), it’s only going to take ~12 seconds. This would have brought us down to under 5 minutes to process all 3GB of data.

Again, I’m not a .NET expert (not really even a novice), so the code I’m sharing is most likely terrible…but it works (at least for me) and it’s fast. So here it is…read through it and feel free to heckle me in the comments 😉

Don’t heckle Bill; thank him.
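
If you would rather stay in T-SQL, here is a hedged sketch of a file-shredding query (the path is a placeholder), likely closer in spirit to the slower methods Bill compares than to his .NET approach; it reads the .xel files with sys.fn_xe_file_target_read_file and pulls fields out of the event XML:

    -- Read Extended Event files and shred the event XML; the wildcard path
    -- below is a placeholder for wherever the SSAS .xel files were written.
    SELECT
        xe.event_data.value('(event/@name)[1]',      'varchar(128)') AS event_name,
        xe.event_data.value('(event/@timestamp)[1]', 'datetime2')    AS event_time,
        xe.event_data.query('event/data')                            AS event_payload
    FROM sys.fn_xe_file_target_read_file(N'C:\Temp\SSAS_Trace*.xel', NULL, NULL, NULL) AS f
    CROSS APPLY (SELECT CAST(f.event_data AS XML) AS event_data) AS xe;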
