Press "Enter" to skip to content

Category: Data Types

A Critique of XML

Andy Leonard isn’t XML’s biggest fan:

If you are sending me (or some other hapless victim data engineer) lots of data that resides in a stable schema – one in which the number, order, data type, etc. of the columns never change – using XML, I have a question:

Why?

Why are you using XML to transmit this data?

Read the whole thing. My approximate thoughts (because it is fairly early when I’m writing this, so I might have missed something) are:

  1. XML is most useful with an XSD (XML Schema Definition), a document describing the shape and rules of the XML data. This is a big advantage over CSV, as it helps you retain information on data types, data lengths, and other details which get lost in the comma.
  2. Speaking of which, CSVs run a high risk of the separator character also appearing in the data itself. The problem is that there is no single right way to indicate that “That comma is a separator, but this comma is just a comma.” Different parsers work differently, and one of my lengthy rants about PolyBase is that it lets you declare a quote character, helpfully strips it off, and then still barfs on the commas inside the quotes. There is actually an ANSI-standard separator character (the unit separator, 0x1F) which is not supposed to occur in the wild…but how many people actually use it? Especially considering that most tools don’t interpret it correctly, so you lose some of the readability of CSVs in the process. (See the sketch after this list.)
  3. That said, for stable schemas with a known separator (or at least a known mechanism for differentiating separators from naturally occurring characters), a separated-values format works well.
  4. And that said, Parquet works better, assuming you don’t have a lot of long string columns. If you’re dealing mostly with numeric data, Parquet will compress much better, will retain data types and lengths, and won’t be a repetitious blob of angle brackets. But a lot of tools still don’t support Parquet, which is a downside.
  5. Basically, this is why we can’t have nice things.
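
To make the separator problem concrete, here is a minimal sketch (the file path and staging table are made up) of how BULK INSERT's CSV mode in SQL Server 2017 and later handles quoted separators, something PolyBase still struggles with:

-- Hypothetical file and table names; a sketch, not production code.
-- FORMAT = 'CSV' turns on RFC 4180-style parsing, so a comma inside a
-- FIELDQUOTE-quoted field is treated as data rather than as a separator.
BULK INSERT dbo.StagingOrders
FROM 'C:\import\orders.csv'
WITH (
    FORMAT          = 'CSV',
    FIELDQUOTE      = '"',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '0x0a',
    FIRSTROW        = 2   -- skip the header row
);

In principle you could also point FIELDTERMINATOR at the unit separator instead of the comma and skip quoting altogether, at the cost of the file no longer being pleasant to eyeball.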

Data Types and CONCAT_WS

Koen Verbeeck hits a concatenation issue:

I was writing some dynamic SQL that generates some SQL statements to load my facts and dimensions into a data warehouse. Some of those SQL statements can become very long, for example if a dimension has a lot of columns. When debugging, I noticed a couple of statements failing with various errors. Turns out, they were truncated after 4000 characters. What was going on?

Read on to see what happened.
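
The post has the full explanation. The usual culprit with CONCAT and CONCAT_WS is that when none of the inputs is a (MAX) type, the result is typed as nvarchar(4000) and silently truncated, even when you assign it to an nvarchar(MAX) variable. A quick sketch of that behavior (not necessarily Koen's exact repro):

DECLARE @sql nvarchar(MAX);

-- No input is a MAX type, so the result is typed nvarchar(4000) and truncated.
SET @sql = CONCAT_WS(N', ', REPLICATE(N'a', 3000), REPLICATE(N'b', 3000));
SELECT LEN(@sql);   -- 4000, not the expected 6002

-- Converting any one argument to nvarchar(MAX) promotes the whole result.
SET @sql = CONCAT_WS(CONVERT(nvarchar(MAX), N', '), REPLICATE(N'a', 3000), REPLICATE(N'b', 3000));
SELECT LEN(@sql);   -- 6002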


Comparative Data Type Efficiency in Power BI

Soheil Bakhshi doesn’t want everything in Power BI to be decimals and strings:

In my consulting experience working with customers using Power BI, many challenges that Power BI developers face are due to negligence to data types. Here are some common challenges that are the direct or indirect results of inappropriate data types and data type conversion:

  • Getting incorrect results while all calculations in your data model are correct.
  • Poor performing data model.
  • Bloated model size.
  • Difficulties in configuring user-defined aggregations (agg awareness).
  • Difficulties in setting up incremental data refresh.
  • Getting blank visuals after the first data refresh in Power BI service.

Read on for a dive into these problems.


Another Problem with Nullable Columns

Erik Darling tells the unpleasant truth about NULL:

Table definitions have a similar effect on developers. In today’s post, I’m going to use temp tables as an example, but the same thing can happen with regular tables, too.

The issue isn’t with NULL values themselves, of course. The table definition we’re going to use will allow NULLs, but no NULLs will be present in the data.

The issue is with how you query NULLable columns, even when no NULLs are present.

As a card-carrying member of the League of No-Null Workers, I am happy to tell you all about how much I hate NULL in databases. And this isn’t even the big reason.
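
For one classic illustration of how a NULL-able declaration changes a query even when the data has no NULLs (not necessarily the exact scenario in Erik's post), consider NOT IN against a nullable column:

-- Hypothetical repro: the column allows NULLs, but none are ever inserted.
CREATE TABLE #ids (id int NULL);
INSERT #ids (id) VALUES (1), (2), (3);

-- Because id is declared NULL-able, the optimizer still has to account for NULLs:
-- a single NULL in the subquery would make NOT IN return UNKNOWN for every row,
-- so extra null-handling logic shows up in the plan.
SELECT v.n
FROM (VALUES (1), (4)) AS v(n)
WHERE v.n NOT IN (SELECT i.id FROM #ids AS i);

-- Declaring the column NOT NULL (or filtering on i.id IS NOT NULL) removes that work.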


Dynamic SQL and String Data Types

Erik Darling theorizes about strings:

If you write the good kind of dynamic SQL, that is:

1. Parameterized

2. Executed with sp_executesql

You’ll probably have run into some silly-ish errors in the practice. Namely, that sp_executesql expects your SQL string and your Parameter string to be NVARCHAR(…).

Read on for some thoughts on data types, max-ness, and the like.
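
A minimal sketch of the pattern Erik is describing, with made-up object names: declare both the statement and the parameter-definition strings as nvarchar (nvarchar(MAX) avoids the length surprises), and remember the N prefixes on the literals.

-- Hypothetical table and parameter; a sketch of the parameterized pattern.
DECLARE @sql    nvarchar(MAX) = N'SELECT * FROM dbo.Orders WHERE CustomerID = @CustomerID;';
DECLARE @params nvarchar(MAX) = N'@CustomerID int';

-- sp_executesql raises an error if either string is varchar rather than nvarchar.
EXEC sys.sp_executesql @sql, @params, @CustomerID = 42;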


Conversion Precedence in SQL Server

Kenneth Fisher sorts out data types:

With the expectation that you would get back a decimal. Did you wonder why it converted the integers to decimals rather than the other way around? If not you probably should have. I’ll give you a hint. It’s not because it’s the first value in the equation, although that does make a difference.

Thought about it? In case you didn’t figure it out, T-SQL (and again, I’m going to say all programming languages where implicit conversions are allowed) has a conversion precedence order. In other words, in any calculation where an implicit conversion is required, the datatype with a value higher on the list is converted to the datatype with the lower value on the list.

The parenthetical statement above is correct. This makes things really convenient until you have an implicit conversion you weren’t expecting. There’s an advantage to explicit-only languages, though those do involve more typing.
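
A few quick examples of the precedence rule in action (mine, not Kenneth's):

SELECT 2 + 2.5;   -- 4.5: int ranks below decimal, so the int side is converted
SELECT 2.5 + 2;   -- 4.5: operand order doesn't matter, precedence does
SELECT 1 + '2';   -- 3:   varchar ranks below int, so the string is converted to int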


Data Type Conversions in SQL Server

Kenneth Fisher talks types:

Did you know that 1/2=0? If you didn’t you could be in for some interesting surprises.

1/2 returns 0 because 1 and 2 are both integers. When you divide two integers you get back an integer. Now, we, as human beings, know that what you probably wanted was to get back 0.5. Unfortunately, computers really aren’t all that smart and we have to be very explicit with them. 

Read on for more details, as well as a chart of which conversions may be implicit and which must be explicit.
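
A minimal illustration of both the trap and the fix:

SELECT 1 / 2;                           -- 0: int divided by int stays int
SELECT 1.0 / 2;                         -- 0.500000: one decimal operand promotes the whole expression
SELECT CAST(1 AS decimal(10, 4)) / 2;   -- 0.5...: an explicit conversion does the same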


Unicode Character Generation in Power Query

Meagan Longoria needs more Unicode:

You may have used the UNICHAR() function in DAX to return Unicode characters in DAX measures. If you haven’t yet read Chris Webb’s blog post on the topic, I recommend you do. But did you know there is a Power Query function that can return Unicode characters? This can be useful in cases when you want to assign a Unicode character to a categorical value.

Click through to see how this works.


Resolving Implicit Conversion on a Join

Andrea Allred has a process:

As I was troubleshooting a performance issue, I noticed that there was an implicit conversion (SQL Server automatically converts the data from one data type to another) happening in my join. The join was on a column that was named the same in both tables, but one was datatype INT (integer) and the other was a datatype of VARCHAR(50) (variable character up to 50 places).

Read on for one way to resolve this issue…so long as no other calling code expects that column to remain a string.
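
A sketch of the pattern with hypothetical tables; the fix is simply aligning the types, with the caveat above about anything else that expects the string:

-- o.OrderID is int; n.OrderID is varchar(50). SQL Server implicitly converts the
-- varchar side to int on every row, which can turn index seeks into scans on that side.
SELECT o.OrderID
FROM dbo.Orders AS o
JOIN dbo.OrderNotes AS n
    ON o.OrderID = n.OrderID;

-- One fix: align the types at the source (assuming the column holds no NULLs).
ALTER TABLE dbo.OrderNotes ALTER COLUMN OrderID int NOT NULL;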


Linear Data Structures

Dave Saunders describes the concept of linear data structures:

As we said in the introduction, a data structure is ‘linear’ if the elements form a sequence.

That means that the data structure has a first and last element, and each element is connected to its previous and next element.

– An ‘array’ is a linear data structure; the items are stored sequentially.

– A ‘graph’ is not a linear data structure; any node can be linked to any other node in the graph – there is no fixed ‘sequence’.

Click through for several examples of linear data structures.
