Compression Performance

Rolf Tesmer digs into a case of compressing an index whose leading column has low cardinality:

That first one is a cracker – it hit me once when compressing a SQL Server table (600M+ rows) on a 64-core Enterprise SQL Server.  After benchmarking several other data compression activities, I thought I had a basic “rule of thumb” (based on GB data size and number of rows)… which just happened to be coincidence!

This also raises the question of why you would use low-selectivity indexes at all. Well, I can think of a few cases – but the one which stands out the most is identifying a small number of rows within a greater collection – such as an index on TYPE columns (i.e., [ProcessingStatusFlag] CHAR(1) = [P]rocessed, [U]nprocessed, [W]orking, [F]ailed, etc.)

… AND SO – let’s do some testing to validate this puppy!
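To make that use case concrete, here is a minimal sketch of the kind of low-selectivity status-flag index Rolf describes (the table, column, and index names are my own invention, not his), plus a filtered variant that only indexes the small set of rows you actually hunt for:

-- Hypothetical queue table with a low-cardinality status flag
CREATE TABLE dbo.ProcessingQueue
(
    QueueID              BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Payload              NVARCHAR(400) NOT NULL,
    ProcessingStatusFlag CHAR(1) NOT NULL  -- [P]rocessed, [U]nprocessed, [W]orking, [F]ailed
);

-- Low-selectivity index: the leading column has only four distinct values
CREATE NONCLUSTERED INDEX IX_ProcessingQueue_StatusFlag
    ON dbo.ProcessingQueue (ProcessingStatusFlag);

-- Filtered alternative: covers only the small set of unprocessed rows
CREATE NONCLUSTERED INDEX IX_ProcessingQueue_Unprocessed
    ON dbo.ProcessingQueue (ProcessingStatusFlag)
    WHERE ProcessingStatusFlag = 'U';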

Rolf found a significant difference in compression performance between the cases he tested, so check out his post for the details.
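If you want to try a similar experiment yourself, a reasonable starting point (again using my hypothetical table above, not Rolf's actual benchmark setup) is to estimate the savings first and then time an index rebuild with page compression:

-- Estimate how much space PAGE compression would save
EXEC sys.sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'ProcessingQueue',
    @index_id         = NULL,   -- NULL = all indexes on the table
    @partition_number = NULL,   -- NULL = all partitions
    @data_compression = 'PAGE';

-- Compress the low-selectivity index and report CPU/elapsed time for the rebuild
SET STATISTICS TIME ON;
ALTER INDEX IX_ProcessingQueue_StatusFlag
    ON dbo.ProcessingQueue
    REBUILD WITH (DATA_COMPRESSION = PAGE);
SET STATISTICS TIME OFF;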
