Compression Performance

Rolf Tesmer digs into what happens when you compress an index whose leading column has low cardinality:

That first one is a cracker – it hit me once when compressing a SQL Server table (600M+ rows) on a 64-core Enterprise SQL Server.  After benchmarking several other data compression activities I thought I had a basic “rule of thumb” (based on GB data size and number of rows)… which just happened to be coincidence!

This also begs the question of why you would use low-selectivity indexes at all. Well, I can think of a few cases – but the one which stands out the most is identifying a small number of rows within a greater collection – such as an index on TYPE columns (i.e., [ProcessingStatusFlag] CHAR(1) = [P]rocessed, [U]nprocessed, [W]orking, [F]ailed, etc.)
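As a concrete illustration of that pattern, here is a minimal T-SQL sketch; the dbo.WorkQueue table and index names are hypothetical, not taken from Rolf's post:

-- Hypothetical names, for illustration only.
CREATE TABLE dbo.WorkQueue
(
    WorkQueueID          BIGINT IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_WorkQueue PRIMARY KEY CLUSTERED,
    ProcessingStatusFlag CHAR(1) NOT NULL,  -- [P]rocessed, [U]nprocessed, [W]orking, [F]ailed
    Payload              NVARCHAR(400) NULL
);

-- The scenario in question: a page-compressed index whose leading
-- column has very low cardinality (only four distinct values).
CREATE NONCLUSTERED INDEX IX_WorkQueue_ProcessingStatusFlag
    ON dbo.WorkQueue (ProcessingStatusFlag)
    WITH (DATA_COMPRESSION = PAGE);

-- If only the rare values matter (e.g., unprocessed rows), a filtered
-- index sidesteps the low-selectivity concern entirely:
CREATE NONCLUSTERED INDEX IX_WorkQueue_Unprocessed
    ON dbo.WorkQueue (ProcessingStatusFlag)
    WHERE ProcessingStatusFlag = 'U'
    WITH (DATA_COMPRESSION = PAGE);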

… AND SO – let’s do some testing to validate this puppy!

There’s a significant difference here, so check out Rolf’s post for the details.
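If you want to run a similar check against your own tables before rebuilding anything, SQL Server's sp_estimate_data_compression_savings samples the object into tempdb and reports current versus estimated compressed sizes. A minimal sketch, reusing the hypothetical dbo.WorkQueue table from above:

-- Estimate PAGE compression savings for every index on the
-- (hypothetical) dbo.WorkQueue table before actually compressing it.
EXEC sys.sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'WorkQueue',
    @index_id         = NULL,   -- NULL = all indexes on the object
    @partition_number = NULL,   -- NULL = all partitions
    @data_compression = N'PAGE';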
