Press "Enter" to skip to content

Month: October 2016

Columnstore Improvements

Warner Chaves discusses improvements in columnstore indexes in SQL Server 2016:

SQL Server first introduced Columnar Storage with the SQL 2012 Enterprise release. In that release, Columnstores were read-only indexes, so refreshing one with the latest data required dropping the index, loading the table or partition, and then rebuilding the index.

SQL Server 2014 upgraded Columnstores with full read-write capabilities, allowing the Columnstore to become the ‘clustered’ index for the table and hold all the data instead of just being one more index on top of row-organized data. 2014 also introduced many improvements to batch operations so more pieces of an execution plan could take advantage of this faster processing mode.

Read on to see changes in 2016.
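To make the difference concrete, here is a minimal T-SQL sketch (the table and index names are invented for illustration; the two patterns are alternatives, not steps on the same table):

    -- SQL Server 2012 pattern: read-only nonclustered columnstore over rowstore data
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
        ON dbo.FactSales (SaleDate, ProductID, Quantity, Amount);

    -- To refresh it with new data, the index had to be taken out of the picture first
    ALTER INDEX NCCI_FactSales ON dbo.FactSales DISABLE;
    -- ... bulk load the table or partition here ...
    ALTER INDEX NCCI_FactSales ON dbo.FactSales REBUILD;

    -- SQL Server 2014+ pattern: the columnstore is the table itself, fully read-write
    CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;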


Resource Governor

Mark Wilkinson walks through Resource Governor:

Now that we know which statistics we can gather, we need to actually start gathering them. While the DMVs for Resource Governor are great, they will only give you an aggregate of the usage information since the last time the statistics were reset, or the last time services were restarted.

In most cases it makes sense to store your data in a separate table so you can calculate differentials between two time periods. For our example, we are only going to be interested in request counts and CPU usage. For this, we will create the following table.

Read the whole thing.
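Mark's actual table definition and collection query are in the linked post; as a rough sketch of the snapshot-and-diff approach (all object names below are made up), you periodically copy the cumulative workload group counters from the DMV into a table and then subtract consecutive snapshots:

    -- Table to hold periodic snapshots of cumulative Resource Governor counters
    CREATE TABLE dbo.WorkloadGroupStats
    (
        CollectionTime    datetime2(3) NOT NULL,
        GroupName         sysname      NOT NULL,
        TotalRequestCount bigint       NOT NULL,
        TotalCpuUsageMs   bigint       NOT NULL
    );

    -- Run on a schedule (e.g. every five minutes) to capture a snapshot
    INSERT INTO dbo.WorkloadGroupStats (CollectionTime, GroupName, TotalRequestCount, TotalCpuUsageMs)
    SELECT SYSDATETIME(), wg.name, wg.total_request_count, wg.total_cpu_usage_ms
    FROM sys.dm_resource_governor_workload_groups AS wg;

    -- Differential between consecutive snapshots, per workload group
    SELECT GroupName,
           CollectionTime,
           TotalRequestCount - LAG(TotalRequestCount) OVER
               (PARTITION BY GroupName ORDER BY CollectionTime) AS RequestsInInterval,
           TotalCpuUsageMs - LAG(TotalCpuUsageMs) OVER
               (PARTITION BY GroupName ORDER BY CollectionTime) AS CpuMsInInterval
    FROM dbo.WorkloadGroupStats;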


Bugfixes To SQLCover

Ed Elliott notes that there is a new version of SQLCover out:

There have been a couple of fixes in SQLCover this week, kindly submitted by John Mclusky (https://github.com/jmclusky):

Code coverage not reported correctly for CTEs at the end of a stored procedure if the ‘with’ is immediately preceded with a semicolon

and

DeclareTableVariableStatement statements cannot be covered, so report falsely as missing coverage

Go check out SQLCover.
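For context, the first fix concerns code like the following, where a trailing CTE's WITH keyword is immediately preceded by a statement-terminating semicolon (the procedure and table names here are invented for illustration):

    CREATE PROCEDURE dbo.GetRecentOrders
    AS
    BEGIN
        DECLARE @cutoff datetime = DATEADD(DAY, -7, GETDATE())

        -- The semicolon glued to WITH is the pattern the coverage fix addresses
        ;WITH RecentOrders AS
        (
            SELECT OrderID, CustomerID, OrderDate
            FROM dbo.Orders
            WHERE OrderDate >= @cutoff
        )
        SELECT OrderID, CustomerID, OrderDate
        FROM RecentOrders;
    END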


Data Lake Planning

Melissa Coates discusses some of the planning involved with creating a data lake:

Does a Data Lake Replace a Data Warehouse?

I’m biased here, and a firm believer that modern data warehousing is still very important. Therefore, I believe that a data lake, in and of itself, doesn’t entirely replace the need for a data warehouse (or data marts) which contain cleansed data in a user-friendly format. The data warehouse doesn’t absolutely have to be in a relational database anymore, but it does need a semantic layer that is easy to work with and that most business users can access for the most common reporting needs.

On this question, my answer is “Absolutely not.”  Data warehouses are designed to answer specific, known business questions.  They’re great for regulatory reporting, quarterly reports to shareholders, weekly reports to management, etc.  Data lakes are designed for ad hoc analysis of information.  Read the whole thing.


Datetime Conversion Change

Dan Guzman notes a change in behavior in how datetime fields are upconverted:

SQL Server 2016 and Azure SQL Database V12 use the raw datetime internal value without rounding during conversion to another temporal type. The value is rounded only once during conversion, to the target type precision. The end result will be the same as before SQL Server 2016 when the target type precision is 3 or less. However, the converted value will be different when the target type precision is greater than 3 and the internal time unit interval is not evenly divisible by 3 (i.e. rounded source datetime millisecond value is 3 or 7). Note the non-zero microseconds and nanoseconds in the script results below and that rounding is based on the target type precision rather than the source.

This is a good thing on net, but be aware of this if you try to compare datetime versus datetime2 values.
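A minimal repro of the behavior Dan describes, using a source millisecond value of 7 (the commented results reflect my reading of the post; treat the exact digits as approximate):

    DECLARE @dt datetime = '2016-10-01 12:00:00.007';

    -- datetime stores time in 1/300-second ticks, so .007 is internally .006666...
    SELECT CAST(@dt AS datetime2(3)) AS precision_3,  -- .007 on all versions
           CAST(@dt AS datetime2(7)) AS precision_7;  -- pre-2016: 12:00:00.0070000
                                                      -- 2016+:    12:00:00.0066667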
