I’ve been asked a lot of questions about updating statistics in SQL Server over the years. And I’ve asked a lot of questions myself! Here’s a rundown of all the practical questions that I tend to get about how to maintain these in SQL Server.
I don’t dig into the internals of statistics and optimization in this post. If you’re interested in that, head on over and read the fahhhbulous white paper, Statistics Used by the Query Optimizer in SQL Server 2008. Then wait a couple of days and chase it with its charming cousin, Optimizing Your Query Plans with the SQL Server 2014 Cardinality Estimator.
I’m also not talking about statistics for memory optimized tables in this article. If you’re interested in those, head over here.
This is a great distillation of important and oft-misunderstood content.
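For the one-minute practical version, manual statistics maintenance mostly comes down to UPDATE STATISTICS and sp_updatestats. A minimal sketch (dbo.Sales is a hypothetical table name):

```sql
-- Rebuild every statistics object on one table with a full scan.
-- (dbo.Sales is a hypothetical table name.)
UPDATE STATISTICS dbo.Sales WITH FULLSCAN;

-- Or let SQL Server sweep the whole database, updating only the
-- statistics it considers out of date (default sampling, not a full scan).
EXEC sp_updatestats;
```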
When you get to the PolyBase configuration screen, you have the option to run this as a standalone instance, or to use the SQL Server as part of a scale-out group. Personally, I don’t think that anyone should ever choose the standalone instance option. You can always run a SQL Server configured for a scale-out group as a standalone instance, BUT you can’t change (at least not today) a SQL Server configured as a standalone PolyBase instance to run as part of a PolyBase cluster once you have completed the install.
This note sounds like the argument for clustering all SQL Server instances.
SQL Server 2014 shipped with 18 Extended Events for studying columnstore indexes and batch mode processing, and Service Pack 1 added one more, for a total of 19. In SQL Server 2016 that number has grown dramatically: there are a whopping 61 Extended Events that give us important insight into columnstore indexes.
Even more important, Sunil & his team have given these events their own category inside Extended Events, named Columnstore, which eases the search for the basic columnstore events. Be aware, though, that not all Extended Events related to columnstore indexes are included in that category: even including all channels will show you 41 Extended Events while hiding the other 20, which are sometimes not categorized at all and at other times are filed under different categories, such as Execution or Error. I believe the reason for not recategorizing the old Extended Events is quite simple: Microsoft tries to avoid breaking existing applications.
There’s a lot here to digest, so read the whole thing.
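If you want to count those events yourself rather than trusting the UI's category filter, you can query the Extended Events metadata directly. A sketch against the DMVs (the LIKE filter is an assumption about naming, so it will miss events that don't mention columnstore in their name or description):

```sql
-- Enumerate Extended Events whose name or description mentions columnstore.
SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
JOIN sys.dm_xe_packages AS p
    ON o.package_guid = p.guid
WHERE o.object_type = 'event'
  AND (o.name LIKE '%columnstore%' OR o.description LIKE '%columnstore%')
ORDER BY p.name, o.name;
```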
What the what?? I literally JUST ran a query exactly like this, but without the join. I haven’t mixed aggregate and non-aggregate columns in the query without a GROUP BY…the only aggregate is in the subquery, and it’s all by its little lonesome!
It’s funny what one little letter can do to you.
This works for tables, stored procedures, views, etc., but there are a few things to be careful about. It doesn’t change the code behind code-based objects, so you need to modify that as well. And of course, any time you use sp_rename you’ll get the warning:
As Kenneth notes, this does not change any underlying code, so renaming columns can potentially break code.
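As a quick illustration, and a cheap way to find the code this warning is about: after renaming, you can search module definitions for the old name. Object names here are hypothetical:

```sql
-- Rename a table; SQL Server raises the standard 'Caution: Changing any
-- part of an object name could break scripts and stored procedures.' warning.
EXEC sp_rename 'dbo.Customer', 'Customers';

-- sp_rename does not touch the text of views, procedures, functions, or
-- triggers, so search their definitions for lingering references.
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id) AS object_name
FROM sys.sql_modules
WHERE definition LIKE '%Customer%';
```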
So… when did “DatabaseIntegrityCheck – SYSTEM_DATABASES” start? At 1500 – is that 3pm? You may be able to hash out that this translates to 12:15am local time, but what if you want to perform datetime-style math on the RunDate/RunTime? Sure, you can make multiple leaps to say (RunDate > X AND RunDate <= Y) AND (RunTime > A AND RunTime <= B), but then you need to explicitly format your X, Y, A, and B in the appropriate integer-style format. Wouldn’t it be easier to just be able to do datetime math?
The next part is even worse – quick – how long did the first instance of “ServerA_RESTORE_FROM_PROD_V2” run?
4,131 somethings (seconds, ms, etc.), right?
Maybe (maybe!) there was a valid reason for the SQL Agent tables to have such screwy values for date, time, and duration; regardless, this is a sheer pain to deal with today.
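The standard workaround is the undocumented (but long-shipping) msdb.dbo.agent_datetime function for RunDate/RunTime, plus a little HHMMSS arithmetic for run_duration: 4,131 decodes as 41 minutes and 31 seconds, not 4,131 of anything. A sketch:

```sql
-- Convert sysjobhistory's integer date/time/duration columns into types
-- you can actually do math on. agent_datetime is undocumented but has
-- shipped in msdb for many versions.
SELECT j.name AS job_name,
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS run_datetime,
       -- run_duration is an HHMMSS integer, e.g. 4131 = 00:41:31
       (h.run_duration / 10000) * 3600
         + (h.run_duration / 100 % 100) * 60
         + (h.run_duration % 100) AS duration_seconds
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
    ON j.job_id = h.job_id;
```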