Randolph West has a recommendation:
I’ll also pre-emptively note that if this table was simply an append-only archive table, the row size would not really matter. Unfortunately, this table participates in thousands of transactions per day, and as the original developers used Entity Framework and didn’t think much of using NVARCHAR(MAX), the entire row is coming over the wire into the application each time it is queried.

As I’ve written previously about this kind of thing, this is not a good design pattern. Using the VARBINARY(MAX) data type, with COMPRESS in the INSERT/UPDATE queries and DECOMPRESS in the SELECT queries, is a much better design pattern and dramatically reduces the amount of data transferred over the network. Additionally, SQL Server needs significantly less space to store, maintain, and back up this compressed data.
Read on to see the likely benefits of doing this. I’d say that if your main purpose in storing the JSON is just to pass a blob back and forth, then yes, do compress. If you’re frequently shredding these sorts of large documents within SQL Server…well, it’s probably time for a better data model.
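To make the pattern concrete, here is a minimal T-SQL sketch of what Randolph describes. The dbo.OrderPayload table, its columns, and the sample JSON are hypothetical, but COMPRESS and DECOMPRESS are the built-in SQL Server (2016+) functions; note that DECOMPRESS returns VARBINARY(MAX), so you cast the result back to NVARCHAR(MAX) on the way out.

-- Hypothetical table: store the JSON as a GZIP-compressed binary blob.
CREATE TABLE dbo.OrderPayload
(
    OrderPayloadID bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
    PayloadCompressed varbinary(max) NOT NULL
);

-- INSERT/UPDATE side: compress the JSON text as it goes in.
INSERT INTO dbo.OrderPayload (PayloadCompressed)
VALUES (COMPRESS(N'{"orderId": 42, "status": "shipped", "total": 19.99}'));

-- SELECT side: decompress and cast back to NVARCHAR(MAX) for the application.
SELECT CAST(DECOMPRESS(op.PayloadCompressed) AS nvarchar(max)) AS PayloadJson
FROM dbo.OrderPayload AS op
WHERE op.OrderPayloadID = 1;

The tradeoff is that the compressed column is opaque to in-database querying with JSON_VALUE and friends, which is the point above: this pattern fits documents you only pass back and forth, not ones you shred inside SQL Server.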