DBCC Checking Memory-Optimized Tables

Brent Ozar shows us how to run a DBCC check against a memory-optimized table. The answer is not trivial:

THE EASY FIX: RUN FULL NATIVE BACKUPS EVERY DAY, AND FREAK OUT WHEN THEY FAIL.

Backup failures aren’t normally a big deal, but if you use in-memory OLTP on a standalone server or a failover clustered instance, backup failures are all-out emergencies. You need to immediately find out if the backup just ran out of drive space or lost its network connection, or if you have game-over Hekaton corruption.

Note that you can’t use SAN snapshot backups here. SQL Server won’t read the In-Memory OLTP pages during a snapshot backup, which means they can still be totally corrupt.

This works fine for shops with relatively small databases, say under 500GB.
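The "full native backup" in Brent's fix is just a standard BACKUP DATABASE run on a schedule. Here's a minimal sketch of what that daily job might look like; the database name and backup path are placeholders, not anything from the original post:

BACKUP DATABASE [MyInMemoryDB]
TO DISK = N'X:\Backups\MyInMemoryDB_Full.bak'
WITH CHECKSUM,   -- verify page checksums on the on-disk portion as it is read
     INIT,       -- overwrite the previous backup in this file
     STATS = 10; -- progress messages every 10 percent

-- The point of the daily run: a full backup has to read the In-Memory OLTP
-- checkpoint files, so if it fails, rule out disk space or network problems
-- first, and then treat it as possible Hekaton corruption.

The value isn't the backup file so much as the act of reading everything: a failure here is your earliest (and, today, essentially only) warning of in-memory corruption.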

Brent also has a Connect item to fix this. Upvote it if you have durable memory-optimized tables.
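For context, the command the Connect item is about is plain old CHECKDB. Against a database with durable memory-optimized tables it completes happily, but it doesn't validate the in-memory data at all, which is why the backup above has to double as the corruption check. A minimal illustration (placeholder database name):

-- Completes without complaint, but performs no consistency checks
-- on the memory-optimized tables themselves.
DBCC CHECKDB (N'MyInMemoryDB') WITH NO_INFOMSGS, DATA_PURITY;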
