Press "Enter" to skip to content

Category: Normalization

Database Normalization: Abnormal Forms

I draw the logical conclusion: the opposite of normal forms is, of course, abnormal forms:

This video covers a variety of topics, effectively wrapping up the series on normalization. We look at data warehousing, including why the Kimball-style star schema is a really bad design in theory but a perfectly reasonable design in practice. We cover the chimera of “overnormalization” and I throw out a hot take. And we finally slag on denormalization.

Click through for the video.
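For anyone who hasn’t seen a Kimball-style star schema, here is a minimal sketch of the shape: a central fact table surrounded by wide, flattened dimension tables. The table and column names here are hypothetical, just to show the structure.

```sql
-- Hypothetical star schema: one fact table surrounded by flattened dimensions.
CREATE TABLE dbo.DimDate
(
    DateKey INT NOT NULL PRIMARY KEY,   -- e.g., 20250314
    CalendarDate DATE NOT NULL,
    CalendarYear INT NOT NULL,
    CalendarMonth TINYINT NOT NULL
);

CREATE TABLE dbo.DimProduct
(
    ProductKey INT IDENTITY(1,1) PRIMARY KEY,
    ProductName NVARCHAR(100) NOT NULL,
    Category NVARCHAR(50) NOT NULL,     -- category attributes flattened onto the dimension
    Subcategory NVARCHAR(50) NOT NULL
);

CREATE TABLE dbo.FactSales
(
    DateKey INT NOT NULL REFERENCES dbo.DimDate(DateKey),
    ProductKey INT NOT NULL REFERENCES dbo.DimProduct(ProductKey),
    Quantity INT NOT NULL,
    SalesAmount DECIMAL(18,2) NOT NULL
);
```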


The Utility of 6th Normal Form

I have a new video:

In this video, [I] explain what Sixth Normal Form (6NF) is and why it slots in as the third most-important normal form. We look at two separate use cases in which 6NF can make sense and I provide some guidance on when 5NF is good enough versus when 6NF is better.

6th Normal Form doesn’t necessarily make sense all the time, but there are some really good use cases for it.
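As a rough illustration of what a 6NF decomposition looks like (these Employee tables are my own invention, not the examples from the video), each non-key attribute moves into its own table keyed by the entity key, which is what makes per-attribute history tracking or sparse attributes easier to handle:

```sql
-- A 5NF design: one row per employee, all attributes together.
CREATE TABLE dbo.Employee
(
    EmployeeID INT NOT NULL PRIMARY KEY,
    EmployeeName NVARCHAR(100) NOT NULL,
    Salary DECIMAL(18,2) NULL,
    DepartmentID INT NULL
);

-- A 6NF-style decomposition: each non-key attribute gets its own table,
-- so each attribute's history can change independently and rarely-used
-- attributes don't force NULL columns onto every row.
CREATE TABLE dbo.EmployeeNameHistory
(
    EmployeeID INT NOT NULL,
    ValidFrom DATE NOT NULL,
    EmployeeName NVARCHAR(100) NOT NULL,
    PRIMARY KEY (EmployeeID, ValidFrom)
);

CREATE TABLE dbo.EmployeeSalaryHistory
(
    EmployeeID INT NOT NULL,
    ValidFrom DATE NOT NULL,
    Salary DECIMAL(18,2) NOT NULL,
    PRIMARY KEY (EmployeeID, ValidFrom)
);
```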


An Overview of 4th Normal Form

I continue a series on database normalization:

In this video, [I] explain what Fourth Normal Form (4NF) is and why I consider 5NF to be significantly more important. Even so, 4NF does make it easy to explain a certain common class of problem, allowing it to provide some measure of utility.

4th Normal Form is a special case of the much more exciting 5th Normal Form, but I do have a bit of a soft spot for it.
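For a concrete sense of the class of problem 4NF describes, here is the classic multi-valued dependency example, sketched with hypothetical tables of my own:

```sql
-- 4NF violation: Skill and [Language] are independent multi-valued facts
-- about an employee, so storing them together forces every combination
-- of the two to appear as a row.
CREATE TABLE dbo.EmployeeSkillLanguage
(
    EmployeeID INT NOT NULL,
    Skill NVARCHAR(50) NOT NULL,
    [Language] NVARCHAR(50) NOT NULL,
    PRIMARY KEY (EmployeeID, Skill, [Language])
);

-- 4NF decomposition: each independent multi-valued fact gets its own table.
CREATE TABLE dbo.EmployeeSkill
(
    EmployeeID INT NOT NULL,
    Skill NVARCHAR(50) NOT NULL,
    PRIMARY KEY (EmployeeID, Skill)
);

CREATE TABLE dbo.EmployeeLanguage
(
    EmployeeID INT NOT NULL,
    [Language] NVARCHAR(50) NOT NULL,
    PRIMARY KEY (EmployeeID, [Language])
);
```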


Embrace the Power of 5th Normal Form

I have a new video up:

In this video, we drill into the other most important normal form, learning what Fifth Normal Form (5NF) is, why Boyce-Codd Normal Form is not enough, and examples of why 5NF can be such a challenge to implement.

Until I read CJ Date’s Database Design and Relational Theory (2nd edition), my level of appreciation for 5th Normal Form was somewhat limited, but that’s mostly because I didn’t understand it well at all. I liked the connection trap example in this article, but Date’s book was the first really good explanation of 5NF and just how powerful it is. My hope is that I was successfully able to convey that power to audiences.
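To make the join dependency idea concrete, here is a sketch of the classic three-way example (the table names are mine, not Date’s or the video’s):

```sql
-- A single three-way table: which agent sells which product for which company.
CREATE TABLE dbo.AgentCompanyProduct
(
    AgentID INT NOT NULL,
    CompanyID INT NOT NULL,
    ProductID INT NOT NULL,
    PRIMARY KEY (AgentID, CompanyID, ProductID)
);

-- If the business rule is "an agent who represents a company and who sells a
-- product that the company makes must sell that product for that company,"
-- the table above has a nontrivial join dependency and 5NF calls for three
-- binary tables instead.
CREATE TABLE dbo.AgentCompany
(
    AgentID INT NOT NULL,
    CompanyID INT NOT NULL,
    PRIMARY KEY (AgentID, CompanyID)
);

CREATE TABLE dbo.AgentProduct
(
    AgentID INT NOT NULL,
    ProductID INT NOT NULL,
    PRIMARY KEY (AgentID, ProductID)
);

CREATE TABLE dbo.CompanyProduct
(
    CompanyID INT NOT NULL,
    ProductID INT NOT NULL,
    PRIMARY KEY (CompanyID, ProductID)
);
```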


A Primer on Boyce-Codd Normal Form

I have a new video:

In this video, we drill into one of the two most important normal forms, learning what Boyce-Codd Normal Form (BCNF) is, how you can get to BCNF, and a practical example of it. We also learn why I cast so much shade on 2nd and 3rd Normal Forms.

Boyce-Codd Normal Form is one of the two most important normal forms, and I’m pretty happy with the way this video came together to explain how you can get from 1NF into BCNF, as well as the specific benefits this provides.
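As a quick illustration of the kind of dependency BCNF rules out (hypothetical tables, not the example from the video): if each tutor teaches exactly one course, then TutorID determines CourseID, but TutorID is not a candidate key of the combined table.

```sql
-- BCNF violation: the key is (StudentID, CourseID), but TutorID -> CourseID
-- and TutorID is not a candidate key.
CREATE TABLE dbo.StudentCourseTutor
(
    StudentID INT NOT NULL,
    CourseID INT NOT NULL,
    TutorID INT NOT NULL,
    PRIMARY KEY (StudentID, CourseID)
);

-- BCNF decomposition: every determinant is now a key of its own table.
CREATE TABLE dbo.Tutor
(
    TutorID INT NOT NULL PRIMARY KEY,
    CourseID INT NOT NULL
);

CREATE TABLE dbo.StudentTutor
(
    StudentID INT NOT NULL,
    TutorID INT NOT NULL REFERENCES dbo.Tutor(TutorID),
    PRIMARY KEY (StudentID, TutorID)
);
```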


Two Takes on First Normal Form

Joe Celko defends the honor of First Normal Form:

You do not need a complete understanding of regular expressions or ICD codes to follow this article, so don’t worry too much about it. The reason for posting the simplified regular expression was to scare you. My point was that this regular expression would be a pretty impressive CHECK constraint on this column. Shall we be honest? Despite the fact that we know the best programming practice is to detect an error as soon as possible, do you believe that the original poster wrote such a constraint for the concatenated list of ICD codes?

I’m willing to bet that any such validation is being done in an input tier by some poor lonely program, in an application language. Even more likely, it’s not being done at all.

First Normal Form (1NF) says that this concatenated string is a repeated group, and we need to replace it with a proper relational construct.

In the meantime, I’ve continued my series on database normalization and call First Normal Form overrated:

In this video, we start at the ground floor with 1st Normal Form. We’ll learn what people think it is, what it really is, and why it’s not as great as it’s cracked up to be.

I agree with Joe that his ICD-10 code example is a bad database design. The area in which I don’t agree—and for this, I’m leaning heavily on C.J. Date—is that repeating groups actually violate 1NF. My video covers this in a bit more detail and I also include a quotation from Date’s recent book on database design talking about how 1NF has nothing to do with repeating groups or atomicity, and that 1NF could even include relvars inside of relvars (an example Joe shows 1NF preventing).
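Whatever you call the normal form involved, both Joe and I agree on the better design: one ICD code per row in a child table, where a simple shape check or a reference table of valid codes replaces the monster regular expression. A rough sketch with hypothetical table and column names:

```sql
-- Instead of a concatenated string of ICD codes on one row...
-- CREATE TABLE dbo.Encounter (EncounterID INT PRIMARY KEY, ICDCodes VARCHAR(4000));

-- ...store one code per row in a child table.
CREATE TABLE dbo.Encounter
(
    EncounterID INT NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.EncounterDiagnosis
(
    EncounterID INT NOT NULL REFERENCES dbo.Encounter(EncounterID),
    ICDCode VARCHAR(8) NOT NULL,
    CONSTRAINT PK_EncounterDiagnosis PRIMARY KEY (EncounterID, ICDCode),
    -- A loose shape check; a foreign key to a table of valid ICD codes
    -- would be the stricter option.
    CONSTRAINT CK_EncounterDiagnosis_ICDCode CHECK (ICDCode LIKE '[A-Z][0-9]%')
);
```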


Data Modeling with Spark–Breaking Data into Multiple Tables

Landon Robinson tokenizes data:

The result of joining the 2 DataFrames – pets and colors – displays the nickname, color, and age of the pets. We went from a normalized dataset where common & recurring values were substituted for numeric representations to a slightly more denormalized dataset. Let’s keep going!

This is an interesting example of a useful technique but I strongly disagree with Landon about whether this is normalization. Translating a natural key to a surrogate key is not normalizing the data and translating a surrogate key to a natural key (which is what the example does) is not denormalizing the data. A really simplified explanation of the process is that normalization is ensuring that like things are grouped together, not that we build key-value lookup tables for everything. That’s why Landon’s “denormalized” example is just as normalized as the original: each of those attributes describes a unique thing about the pet identified by its (unique) nickname. This would be different if we included things like owner’s name (which could still be on that table), owner’s age, owner’s height, a list of visits to the vet for each pet, when the veterinarians received their licenses, etc.
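To put the point in SQL terms (the original example uses Spark DataFrames; these table definitions are my own sketch of the same shape):

```sql
-- A "pets" table carrying a surrogate ColorID plus a "colors" lookup table.
CREATE TABLE dbo.Colors
(
    ColorID INT NOT NULL PRIMARY KEY,
    ColorName NVARCHAR(50) NOT NULL
);

CREATE TABLE dbo.Pets
(
    Nickname NVARCHAR(50) NOT NULL PRIMARY KEY,
    ColorID INT NOT NULL REFERENCES dbo.Colors(ColorID),
    Age INT NOT NULL
);

-- Joining back to get the color name does not denormalize anything:
-- every attribute in the result still describes the pet keyed by Nickname.
SELECT p.Nickname, c.ColorName, p.Age
FROM dbo.Pets AS p
    INNER JOIN dbo.Colors AS c
        ON p.ColorID = c.ColorID;
```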


Saving Space with 6NF in SQL Server

Aaron Bertrand has a two-parter. Part one sets up the problem:

We often build logging or other insert-only tables where we store large strings like URLs, host names, or error messages. It’s usually not until the table has terabytes of data that we realize there might have been a better way. If we are logging traffic or exceptions for our own application, it’s likely that we record the same URL, host name, or error message on millions of rows. What if we only had to write that URL or host name or message text once, the first time we saw it? In this tip, I want to share one idea for abstracting away recurring values, reducing storage, and making search queries faster (especially those with wildcards) without requiring immediate changes in the application layer.

Part two maximizes the savings:

In my previous tip, I showed how we can make a growing logging table leaner by moving large, repeating strings to their own dimension tables. The solution there involved an AFTER INSERT trigger and assumed that we could change the applications to recognize the new table structure in relatively short order.

Check out both posts for more details. If you’re confused about my calling this 6NF and Aaron mentioning dimension tables, the answer is that he’s talking about the end result and I’m describing the process.
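For a sense of the end state, here is a rough sketch of the general idea, though not Aaron’s exact implementation, which covers details like hashing and the AFTER INSERT trigger:

```sql
-- Store each distinct message or URL once and reference it from the log
-- by a small surrogate key.
CREATE TABLE dbo.LogMessage
(
    MessageID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    MessageText NVARCHAR(400) NOT NULL,   -- kept short here; the tips handle longer strings
    CONSTRAINT UQ_LogMessage_Text UNIQUE (MessageText)
);

CREATE TABLE dbo.EventLog
(
    EventID BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    MessageID INT NOT NULL REFERENCES dbo.LogMessage(MessageID),
    LoggedAt DATETIME2(3) NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Wildcard searches now scan the small lookup table rather than
-- terabytes of repeated strings in the log itself.
SELECT e.EventID, e.LoggedAt, m.MessageText
FROM dbo.EventLog AS e
    INNER JOIN dbo.LogMessage AS m
        ON e.MessageID = m.MessageID
WHERE m.MessageText LIKE N'%timeout%';
```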
