Optimizing Clustered Columnstore Inserts

Sunil Agarwal talks about clustered columnstore index logging:

There you have it; our recommendation is to choose a batchsize of > 102400 to get benefits of minimal logging with clustered columnstore index. In the next blog, I will discuss parallel bulk import and locking optimizations.

My experience is that you really want to insert in large batches. Once a batch reaches 102,400 rows, the rows compress directly into a columnstore rowgroup instead of accumulating in the delta store, so you get minimal logging and you skip the extra work of having the tuple mover compress those rows later.
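
To make that concrete, here is a minimal T-SQL sketch of a bulk load sized above the threshold. The table, index, and file path (dbo.FactSales, CCI_FactSales, C:\loads\factsales.csv) are hypothetical stand-ins, not anything from Sunil's post:

-- Hypothetical fact table with a clustered columnstore index.
CREATE TABLE dbo.FactSales
(
    SaleID   BIGINT        NOT NULL,
    SaleDate DATE          NOT NULL,
    Amount   DECIMAL(18,2) NOT NULL
);

CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;

-- Bulk load with each batch well above the 102,400-row minimum,
-- so rows compress straight into rowgroups rather than the delta store.
BULK INSERT dbo.FactSales
FROM 'C:\loads\factsales.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 1048576,  -- maximum rowgroup size; anything >= 102400 qualifies
    TABLOCK                     -- table lock, commonly used for bulk load optimizations
);

The same idea applies to INSERT ... SELECT loads: size each insert so it clears the threshold rather than trickling rows in.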

