Causing Error 666 When Loading Into Columnstore Index

Joe Obbish has moved into Erik Darling’s Internet Basement and has a doozy of a first post there:

I need to find a relatively efficient way to advance the CSILOCATOR because I need to do it over 2 billion times, if my theory is correct about the maximum allowed value. Both updating all of the rows in a delta rowgroup and deleting and reinserting advance the CSILOCATOR. I expected that small batch sizes would work best, and they did. For my table’s schema, the sweet spot for updates is about 275 rows and the sweet spot for delete/inserts is about 550 rows. Delete/inserts appeared to be faster than updates for the purpose of constantly reloading the same rows over and over.
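If you want to see the shape of that delete/reinsert loop, here is a minimal T-SQL sketch. It is not Joe's actual harness: the table dbo.CCI_Target, its ID column, and the 550-row batch size are assumptions based on the excerpt above, and it presumes the rows sit in an open delta rowgroup.

```sql
-- Minimal sketch only; dbo.CCI_Target and its ID column are hypothetical.
-- Assumes the target rows live in an open delta rowgroup of a clustered
-- columnstore index, where each delete/reinsert advances the CSILOCATOR.
DECLARE @batch TABLE (ID BIGINT NOT NULL);
DECLARE @i BIGINT = 0;

-- Roughly 2 billion advances at 550 rows per delete/insert pass.
WHILE @i < 2000000000 / 550
BEGIN
    DELETE FROM @batch;

    -- Delete a small batch, capturing the deleted keys...
    DELETE TOP (550) t
    OUTPUT deleted.ID INTO @batch
    FROM dbo.CCI_Target AS t;

    -- ...and insert the same rows straight back.
    INSERT INTO dbo.CCI_Target (ID)
    SELECT ID
    FROM @batch;

    SET @i += 1;
END;
```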

Great post, Brent!
