SQL Server 2016 has significant advancements over SQL Server 2014 for in-memory analytics. Some highlights are functionality (e.g., the ability to create traditional nonclustered indexes to enforce PK/FK constraints), performance (e.g., new BatchMode operators and aggregate pushdown), online index defragmentation, and supportability (e.g., new DMVs, Perfmon counters, and XEvents).
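As a sketch of that first highlight: in SQL Server 2016 you can combine a clustered columnstore index with a traditional B-tree nonclustered index that enforces a primary key, which 2014 didn't allow. Table and column names here are hypothetical.

```sql
-- SQL Server 2016: a clustered columnstore index plus a traditional
-- nonclustered B-tree index enforcing the primary key on the same table.
CREATE TABLE dbo.Sales
(
    SaleID     INT           NOT NULL,
    CustomerID INT           NOT NULL,
    Amount     DECIMAL(18,2) NOT NULL,
    SaleDate   DATE          NOT NULL,
    INDEX CCI_Sales CLUSTERED COLUMNSTORE,
    CONSTRAINT PK_Sales PRIMARY KEY NONCLUSTERED (SaleID)
);
```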
His post talks a little bit about in-memory, but focuses more on clustered columnstore indexes. I like that columnstore indexes are getting V3 improvements, and I think they’ll be even more useful. Whether the “in-memory” part becomes useful is a different question; I’ve personally seen very limited adoption of In-Memory OLTP (and a few huge bugs for the people brave enough to try it).
At the end of the day, you can’t afford *not* to run CHECKDB – you just can’t. Your database objects are only as reliable as your last clean CHECKDB (meaning your last CHECKALLOC + CHECKCATALOG + individual CHECKTABLEs on all of your tables – but more on that in a moment).
If you’re not running CHECKDB regularly, you may have endless unknown corruption across your databases, and you won’t find out until someone tries to access the data.
Even if your production instance is running at 100% all the time (which is a good indicator that you need more resources), you can run CHECKDB against backups restored to a different server.
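A minimal sketch of that offloading approach – restore the latest full backup on a secondary server, then run the integrity check there. Database names and file paths are hypothetical.

```sql
-- Restore last night's full backup under a new name on a non-production server.
RESTORE DATABASE SalesDB_Check
FROM DISK = N'\\backupshare\SalesDB_Full.bak'
WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_Check.mdf',
     MOVE N'SalesDB_log' TO N'L:\Log\SalesDB_Check.ldf',
     STATS = 10;

-- Run the full integrity check against the restored copy,
-- suppressing informational messages and returning every error.
DBCC CHECKDB (N'SalesDB_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```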
Entity Framework won’t literally issue SELECT * FROM commands – what it will do is issue explicit SELECT statements that list every column in a table. If you see that most SQL queries select all columns this way (especially from large tables, when the UI clearly isn’t using all that data), you know the developers got a little sloppy with their code. It’s very easy in Entity Framework to bring back every column – it takes more work and thought to build a custom LINQ projection query that selects only the needed columns.
It is taking all of my strength not to say “A tell-tale sign of an Entity Framework performance problem is seeing Entity Framework in your environment.”
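To make the contrast concrete, here’s roughly what the two patterns look like on the SQL side. Table and column names are hypothetical, and the exact aliasing varies by EF version.

```sql
-- What EF tends to emit when you materialize full entities:
-- every column, whether the caller needs it or not.
SELECT [c].[CustomerID], [c].[Name], [c].[Email], [c].[Notes],
       [c].[CreatedDate], [c].[ModifiedDate], [c].[LargeBlobColumn]
FROM [dbo].[Customers] AS [c];

-- What a LINQ projection (e.g. select new { c.CustomerID, c.Name })
-- produces instead: only the columns the code actually uses.
SELECT [c].[CustomerID], [c].[Name]
FROM [dbo].[Customers] AS [c];
```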
Satnic’s advice isn’t EF-specific, but his link to Microsoft guidance is.
Something is not right here: there are inconsistencies in the way MDS behaves with regard to attribute management.
- Changing an attribute’s length is not supported in the web interface
- Changing an attribute’s length is supported in the Excel Add-in
- Changes to an attribute’s length in the Excel Add-in are limited to a max of 1000
- New attributes support a max length of 4000 in the web interface
- New attributes support a max length of 1000 in the Excel Add-in
- It is not possible at all to change an attribute’s length once it is set to 4000
In SQL Server 2016 CTP 3.1, Nielsen found that a couple of these have been corrected, but not all of them.
While I waited…and waited for my Utility data to be inserted into my Azure database, I did some poking around to see if it was even possible to restore a local SQL Server 2014 backup to an Azure database. Guess what: I found something. (And there was much rejoicing.)
On CodePlex, I found a SQL Database Migration Wizard that can be used to migrate the database. They even have versions for 2008 R2, 2012, and 2014: SQL Database Migration Wizard v3.15.6, v4.15.6, and v5.15.6.
If you have an MSDN license, go play with this. Even if you don’t, the lowest tier Azure SQL Database instances are free, so there’s no reason not to learn about them.