Thinking About Databases At Scale

Chris Adkin has a great post explaining some of the hardware and query processing principles behind scaling issues:

All execution plan iterators that require memory grants have two fundamental code paths: one path for when the memory grant is blown and memory spills out into tempdb, and one for when the memory grant is correct or over-estimated. Perhaps the database engine team may at some point include a third option, which is for when the grant can be accommodated inside the CPU cache.
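As an aside (this sketch is mine, not from the linked post), you can watch those first two code paths from the engine's side by comparing a query's granted memory with its ideal memory while it runs; a grant noticeably below the ideal figure is a candidate for the tempdb spill path. Assuming a reasonably recent SQL Server:

-- Active memory grants: a granted_memory_kb value well below
-- ideal_memory_kb suggests the operator may take the spill path.
SELECT session_id,
       dop,
       requested_memory_kb,
       granted_memory_kb,
       ideal_memory_kb,
       used_memory_kb
FROM sys.dm_exec_query_memory_grants
ORDER BY granted_memory_kb DESC;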

He also gives a concrete example of the NUMA effects involved:

As an example, if you run a log record generation intensive workload on the same CPU socket as the log writer, usually socket 0, this will run in a shorter time compared to running the exact same workload on a different socket.
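To reason about that kind of socket affinity on your own hardware, a quick look at the scheduler-to-NUMA-node mapping helps. This is my sketch rather than Adkin's; the DMV and its columns are standard, but the interpretation is mine:

-- Map each visible scheduler to its CPU and NUMA node, so you can see
-- whether a workload shares a node (socket) with the log writer.
SELECT scheduler_id,
       cpu_id,
       parent_node_id AS numa_node_id,
       status,
       current_tasks_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
ORDER BY parent_node_id, scheduler_id;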

This is the type of post where I catch just enough of it to know that I need to dig deeper and learn more.
