Press "Enter" to skip to content

Category: Data Types

Metadata-Only Column Changes with SQL Server 2016

Paul White takes us through several metadata-only changes which SQL Server 2016 introduced:

These changes can be metadata-only because the underlying binary data layout does not change when Column Descriptor row format is used (hence the need for compression). Without compression, row store uses the original FixedVar representation, which cannot accommodate these fixed-length data type changes without rewriting the physical layout.

You may notice that tinyint is omitted from the integer types list. This is because it is unsigned, while the other integer types are all signed, so a metadata-only change is not possible. For example, a value of 255 can fit in one byte for tinyint, but requires two bytes in any of the signed formats. The signed formats can hold -128 to +127 in one byte when compressed.

This is very interesting, but note the long list of requirements for it to work, notably that compression must be enabled on all indexes and partitions.
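As a rough sketch of the idea (the table and column names here are mine, not Paul's), widening a signed integer column can qualify once compression is in place:

    -- Compression is required for the metadata-only path.
    CREATE TABLE dbo.Demo
    (
        SomeValue smallint NOT NULL
    )
    WITH (DATA_COMPRESSION = ROW);

    INSERT dbo.Demo (SomeValue) VALUES (1), (2), (3);

    -- With row (or page) compression on every index and partition,
    -- this widening can be metadata-only on SQL Server 2016 and later;
    -- without compression, it rewrites every row.
    ALTER TABLE dbo.Demo ALTER COLUMN SomeValue int NOT NULL;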


Explaining SQL_VARIANT

Kenneth Fisher explains what the SQL_VARIANT data type is used for:

The SQL_VARIANT data type is an interesting beast. It is a data type that can store most types of data. So a date, decimal, int, varchar, etc. can be stored in this single data type. This sounds great, right? Well there are a few issues here.

I don’t think I’ve ever used SQL_VARIANT data types before. It always struck me as a refuge for not wanting to think about what the proper data type should be. In fairness to it, though, I’ve seen plenty of unthoughtful solutions using NVARCHAR as well.
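For a quick feel for the type, a single sql_variant variable will happily hold different base types, and SQL_VARIANT_PROPERTY reports what is actually stored:

    DECLARE @v sql_variant;

    SET @v = CONVERT(int, 42);
    SELECT SQL_VARIANT_PROPERTY(@v, 'BaseType') AS BaseType;  -- int

    SET @v = CONVERT(date, '2019-07-01');
    SELECT SQL_VARIANT_PROPERTY(@v, 'BaseType') AS BaseType;  -- date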


Illogical Errors and Implicit Conversion

Aaron Bertrand takes us through a problem with seemingly indeterminate query errors:

I’ve talked about illogical errors before. In several answers on Database Administrators (one, two, three), I show how you can use a CASE expression or TRY_CONVERT to work around an error where a non-numeric value, that should have been filtered out by a join or other clause, still leads to a conversion error. Erland Sommarskog raised a Connect item over a decade ago, still unaddressed, called “SQL Server should not raise illogical errors.”

Recently we had a scenario where a query was failing on one server but not another. But this was slightly different; there were no numerics involved. Imagine this scenario: a source table has a column that is varchar(20). A query creates a table variable with a column that is varchar(10), and inserts rows from the source table, with a filter in place that only exposes values that are 10 characters or less.

In a lot of cases, of course, this scenario is perfectly fine, and everything works as expected.

Read the whole thing. There is a method to the madness, and Aaron explains how it can come up in some cases but not others.
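A hypothetical repro of the shape Aaron describes (the names are mine):

    CREATE TABLE dbo.Source (Value varchar(20) NOT NULL);
    INSERT dbo.Source (Value) VALUES ('short'), ('definitely too long');

    DECLARE @t table (Value varchar(10) NOT NULL);

    -- Logically, the WHERE clause removes every value longer than
    -- 10 characters, but the optimizer is free to evaluate the
    -- conversion before the filter, so this can fail with a
    -- truncation error on one server and succeed on another.
    INSERT @t (Value)
    SELECT Value
    FROM dbo.Source
    WHERE LEN(Value) <= 10;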


SQL Server and Bytes Per Character

Solomon Rutzky explains that just because you’ve got a VARCHAR column doesn’t mean you get one byte per character:

For VARCHAR, some of you might be thinking that it was “1” until recently when SQL Server 2019 introduced the “_UTF8” collations. Nope. The last time “1” was correct for VARCHAR was back in SQL Server 7.0, before SQL Server 2000 introduced the Windows collations which offered some Double-Byte Character Sets.

For NVARCHAR, some of you might be thinking that it was “2” until SQL Server 2012 introduced the “_SC” collations that fully support Supplementary Characters (UTF-16). Sorry, still incorrect. “2” was never technically correct for NVARCHAR, it was only temporarily correct for the first few years (until Supplementary Characters were defined in Unicode 3.1, released in March, 2001). Ever since SQL Server 7.0 introduced the NCHAR, NVARCHAR, and NTEXT datatypes, it has been possible to store whatever UTF-16 byte sequences you want, even if they are currently undefined. The older collations do not recognize surrogate pairs / Supplementary Characters, but that’s not related to SQL Server’s ability to store and retrieve any 16-bit code point. As long as you are using a font that supports Supplementary Characters, they should display correctly.

Solomon is one of a handful of people I’ve met who has collations and characters down cold.
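A quick way to see both points for yourself (the UTF-8 collation requires SQL Server 2019):

    -- 'é' stored as VARCHAR under a UTF-8 collation takes 2 bytes.
    SELECT DATALENGTH(CONVERT(varchar(10),
        N'é' COLLATE Latin1_General_100_CI_AS_SC_UTF8)) AS VarcharUtf8Bytes;

    -- An emoji built from a surrogate pair takes 4 bytes in NVARCHAR,
    -- and SQL Server stores it even under older, non-SC collations.
    SELECT DATALENGTH(NCHAR(0xD83D) + NCHAR(0xDE00)) AS NvarcharEmojiBytes;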


PARSE, CAST, and CONVERT

Max Vernon gives us three ways to change data types:

PARSE provides a mechanism to convert a wide variety of character-based dates into a datetime data type. From the Docs:

Returns the result of an expression, translated to the requested data type in SQL Server.

Use PARSE only for converting from string to date/time and number types. For general type conversions, continue to use CAST or CONVERT. Keep in mind that there is a certain performance overhead in parsing the string value.

PARSE has its value, but comes at a cost, as Max shows.
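The three side by side, as a minimal sketch:

    SELECT
        PARSE('Saturday, 13 July 2019' AS datetime2 USING 'en-US') AS ParseResult,
        CAST('2019-07-13' AS datetime2) AS CastResult,
        CONVERT(datetime2, '13/07/2019', 103) AS ConvertResult;  -- style 103 = dd/mm/yyyy

PARSE leans on the .NET CLR for culture-aware parsing, which is where much of that overhead comes from.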


Data Type Conversions in Predicates

Bert Wagner takes us through a troublesome table design:

This table stores data for an application that has many different types of Pages. Each Page stores different types of data, but instead of creating a separate table for each type, we store all the different data in the varchar DataValue column and maintain the original data type in the DataType column.

This structure reduces the complexity required for maintaining our database (compared to creating possibly hundreds of tables, one for each PageName) and makes querying easier (only need to query one table). However, this design could also lead to some unexpected query results.

This is your daily reminder that an attribute should be a thing which describes an entity, not one of multiple things.
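A hypothetical sketch of how that bites (TRY_CAST requires SQL Server 2012 or later):

    CREATE TABLE dbo.PageData
    (
        PageName  varchar(50),
        DataValue varchar(100),
        DataType  varchar(20)
    );

    INSERT dbo.PageData VALUES ('Orders', '123', 'int'), ('Home', 'hello', 'varchar');

    -- Can fail: nothing guarantees the DataType filter runs before
    -- the CAST, so 'hello' may hit the conversion first.
    SELECT PageName FROM dbo.PageData
    WHERE DataType = 'int' AND CAST(DataValue AS int) > 100;

    -- Safer: TRY_CAST returns NULL instead of raising an error.
    SELECT PageName FROM dbo.PageData
    WHERE DataType = 'int' AND TRY_CAST(DataValue AS int) > 100;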


Explaining Implicit Conversion

Monica Rathbun explains to us what implicit conversion is and when it can go wrong:

Another quick post of simple changes you can make to your code to create more optimal execution plans. This one is on implicit conversions. An implicit conversion is when SQL Server must automatically convert a data type from one type to another when comparing values, moving data or combining values with other values. When these values are converted, during the query process, it adds additional overhead and impacts performance.

Read on for more info, including a common scenario where implicit conversion causes performance degradation.
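The classic case is an nvarchar parameter compared against a varchar column; the table and column names below are hypothetical:

    -- nvarchar outranks varchar in type precedence, so the column is
    -- converted for every row (CONVERT_IMPLICIT in the plan) and an
    -- index seek becomes a scan.
    DECLARE @Number nvarchar(20) = N'A12345';
    SELECT AccountID FROM dbo.Accounts WHERE AccountNumber = @Number;

    -- Matching the column's type keeps the predicate sargable.
    DECLARE @Number2 varchar(20) = 'A12345';
    SELECT AccountID FROM dbo.Accounts WHERE AccountNumber = @Number2;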


Using The ROWVERSION Type For ETL

Max Vernon shows us how to use the ROWVERSION data type to tell how much work you have to do to ETL data over from one table to another:

The OLTP table implements a rowversion column that is automatically updated whenever a row is updated or inserted. The rowversion number is unique at the database level, and increments monotonically for all transactions that take place within the context of that database. The dbo.OLTP_Updates table is used to store the minimum row version available inside the transaction used to copy data from the OLTP table into the OLAP table. Each time this code runs it captures incremental changes. This is far more efficient than comparing all the rows in both tables using a hashing function since this method doesn’t require reading any data other than the source data that is either new, or has changed.

I think this is the first time I’ve seen someone use ROWVERSION types successfully.
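A minimal sketch of the watermark pattern (MIN_ACTIVE_ROWVERSION is the real built-in; the table and column names are stand-ins, not Max's exact code):

    -- Lowest rowversion still active anywhere in the database;
    -- everything strictly below it is finished and safe to copy.
    DECLARE @HighWater binary(8) = MIN_ACTIVE_ROWVERSION();
    DECLARE @LastSync  binary(8) = (SELECT LastRowVersion FROM dbo.OLTP_Updates);

    INSERT dbo.OLAP_Table (ID, Payload)
    SELECT s.ID, s.Payload
    FROM dbo.OLTP_Table AS s
    WHERE s.RowVer >= @LastSync
      AND s.RowVer <  @HighWater;

    UPDATE dbo.OLTP_Updates SET LastRowVersion = @HighWater;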
