Memory-Optimized Table Maintenance

Ned Otter has a great post looking at what you can and cannot do with memory-optimized tables containing certain types of indexes:

Now, let’s attempt to create a NONCLUSTERED COLUMNSTORE INDEX:

ALTER TABLE dbo.InMemADD
ADD INDEX NCCI_InMem NONCLUSTERED COLUMNSTORE (col1);

Msg 10794, Level 16, State 76, Line 76
The feature ‘NONCLUSTERED COLUMNSTORE’ is not supported with memory optimized tables.

It fails because we can only create a CLUSTERED columnstore index (CCI). For 25 years, Microsoft SQL Server differentiated between indexes that physically ordered data on storage (CLUSTERED) and those that did not (NONCLUSTERED). Unfortunately, they chose to ignore that pattern when creating the syntax for memory-optimized tables; using the word CLUSTERED is required when creating a columnstore index on memory-optimized tables. 

It’s a great post with plenty of trial and error.
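
For contrast, the variant that does succeed is the clustered form; a minimal sketch against the same table (the index name here is mine, not from the post):

-- Memory-optimized tables only accept the CLUSTERED COLUMNSTORE syntax,
-- and the index covers all columns, so there is no column list.
ALTER TABLE dbo.InMemADD
ADD INDEX CCI_InMem CLUSTERED COLUMNSTORE;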

Index Rebuilds Reset DMV Counters

Clive Strong notes that an index rebuild will reset certain DMV counters:

As it transpires, an index rebuild will reset the counters for this index within the sys.dm_db_index_usage_stats DMV, and this is potentially very dangerous unless you are aware of it. Normally, we determine whether or not an index is in use by looking at this information, but if you perform regular maintenance on this table, you could be resetting the data which you rely on for an accurate decision.

Read the whole thing.
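
For context, the usage numbers in question come from a query along these lines; a minimal sketch (not Clive's code):

-- Seek/scan/lookup counters per index in the current database.
-- Per the post, a rebuild can zero these, so snapshot them before maintenance runs.
SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name AS index_name,
       us.user_seeks,
       us.user_scans,
       us.user_lookups,
       us.user_updates,
       us.last_user_seek
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id
 AND i.index_id = us.index_id
WHERE us.database_id = DB_ID()
ORDER BY us.user_seeks + us.user_scans + us.user_lookups;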

Resumable Online Index Rebuild

Arun Sirpal shows off a SQL Server 2017 and Azure SQL Database feature:

Clearly I am in a paused state with 5.48% of the operation complete. In the original query window where I started the index rebuild, I receive:

Msg 1219, Level 16, State 1, Line 4
Your session has been disconnected because of a high priority DDL operation.
Msg 0, Level 20, State 0, Line 3
A severe error occurred on the current command. The results, if any, should be discarded.

As for DTUs: when in a paused state, consumption drops back down to pretty much 0 DTUs (red arrow below).

If you have a hard nightly maintenance window to hit, being able to pause index rebuilds is a pretty nice feature.
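
The syntax itself is straightforward; a minimal sketch (table and index names are hypothetical):

-- Start a resumable online rebuild with a time cap
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause it from another session
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders PAUSE;

-- Check progress, then resume (or ABORT to give up)
SELECT name, state_desc, percent_complete
FROM sys.index_resumable_operations;

ALTER INDEX IX_Orders_OrderDate ON dbo.Orders RESUME;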

Set A Fill Factor

Monica Rathbun wants you to set a better fill factor than the default:

Please, please, please Admins do not leave your default index fill factor at 0. This means you are telling SQL Server to fill the page 100% full when creating indexes. This also means you are forcing it to a new page when additional inserts are done. These are called PAGE SPLITS, which take time to perform and are resource-intensive operations. Having a high fill factor will cause more index fragmentation, decrease performance and increase IO.

If you find that this is how your system is configured, all is not lost. You can correct this by changing the default value so that new indexes will be created with a proper fill factor, and by rebuilding your existing indexes with another fill factor value. I like to use 80 across the board for most; of course, there is always the “it depends” scenario that arises, but 80 is a pretty safe bet. One of those “it depends” cases would be a logging table that has the correct clustering key and never gets updates in between values (make sense?). There I don’t want a fill factor of 80.  I’d want 0/100 to maximize page density, as page splits wouldn’t occur if the clustered key is monotonically increasing.

Monica also has a couple scripts, one for changing the across-the-board default and one for changing a particular index.
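
Those two changes boil down to a server-level sp_configure call and a per-index rebuild; a minimal sketch of each (not Monica's scripts, and the index name is hypothetical):

-- Server-wide default fill factor for newly created or rebuilt indexes
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'fill factor (%)', 80;
RECONFIGURE;  -- the new server default takes effect after a service restart

-- Per-index: rebuild an existing index with an explicit fill factor
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);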

Finding Compressible Indexes

Tracy Boggiano has a script to help you figure out which indexes make sense to compress:

We can write a procedure to check periodically whether or not a table will benefit from compression.  There are a few tricks, though:

  1. If you have sparse columns, you cannot compress the table; we check for that in lines 70-74.
  2. Next, if it is index type 1 (a clustered index), it cannot contain the LOB data types image, ntext, or text.  These days that shouldn’t be a problem; they have, after all, been deprecated, so you aren’t using them.  We check for these in lines 76-84.
  3. Finally, we perform the same test for LOB data on index type 2 (nonclustered indexes) in lines 85-95.

If everything checks out OK, we call our second procedure to calculate the percentage of space we would save, and if we would save more than 25%, we create a script to compress the index.  We also have a @Debug parameter in here so it can just show us the code it would execute, letting us see what it will do.

Click through for the code for the two procedures Tracy has created.
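
The estimation step typically leans on sp_estimate_data_compression_savings; a minimal sketch of the idea (object names are hypothetical and these are not Tracy's procedures):

-- Estimate how much space PAGE compression would save for one index
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'SalesOrderDetail',
     @index_id         = 1,        -- 1 = clustered index
     @partition_number = NULL,
     @data_compression = N'PAGE';

-- If the savings clear your threshold (e.g. 25%), rebuild with compression
ALTER INDEX PK_SalesOrderDetail ON dbo.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION = PAGE);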

Automating Index Maintenance On Azure SQL DB

Arun Sirpal shows how to use Azure Automation to rebuild indexes in an Azure SQL Database:

The answer is via Azure Automation.

At a high level this is what I did.

  • Create an Automation Account.

  • Create a credential.

  • Create a PowerShell Runbook which has the code for index rebuilds.

  • Create a schedule and link it to the above.

  • Configure parameters within the schedule (if any).

  • Configure logging level (if desired).

Click through for the detailed steps.
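
The runbook's actual work is ordinary T-SQL; a minimal sketch of the kind of rebuild-statement generation it might run against the database (not Arun's runbook code):

DECLARE @sql nvarchar(max) = N'';

-- Build ALTER INDEX ... REBUILD statements for indexes above a fragmentation threshold
SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
            + N' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
            + N' REBUILD WITH (ONLINE = ON);' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o ON o.object_id = ps.object_id
WHERE ps.avg_fragmentation_in_percent > 30
  AND ps.index_id > 0
  AND o.is_ms_shipped = 0;

EXEC sys.sp_executesql @sql;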

Index Changes And Recompilation

Kendra Little explains how index modifications affect the plan cache:

Creating an index doesn’t remove a plan from cache

We run our Demo Query five times. Querying sys.dm_exec_query_stats and related DMVs, we see five executions, plan_generation_num = one.

Then we create an index on a table referenced by the query.

Querying the DMVs we still see five executions, plan_generation_num = one, and the query plan in cache. It hasn’t been removed!

Read on for more, including what happens when you run the query again, what happens when you rebuild indexes, and what happens when you drop an index.
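
The DMV check Kendra describes looks something like this; a minimal sketch (the text filter is just a placeholder):

-- Execution count and plan generation number for the cached plan of interest
SELECT qs.execution_count,
       qs.plan_generation_num,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%DemoQuery%';  -- placeholder filter for the query being tested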

Deletes And Indexes

Jeff Schwartz looks at the performance cost of indexes when it comes to deleting rows:

Many articles concerning SQL Server discuss how record insertion overhead increases with each additional index. They discuss b-tree manipulations and page splits in addition to leaf and non-leaf levels. However, few discuss the fact that deletion overhead increases as well, especially when large numbers of records are deleted by individual queries. Recently, I was working with a client who regularly purged large numbers of records from tables that ranged in size from large to gigantic. For example, one table contained over 6.5 billion records. I added an index (4th overall) to one table expressly for the purpose of expediting the large deletion process, and the deletion run ran longer, despite using the new index! To determine how the numbers of indices and records to be deleted interact, I conducted an experiment to test several combinations. The specifics of the tests and their corresponding results are summarized below.

Check it out.  There’s certainly more to the story than “add indexes to improve performance.”

Rebuilding Versus Reorganizing Rowstore Indexes

Paul Randal explains the difference between rebuilding and reorganizing rowstore indexes:

Rebuilding an index requires building a new index before dropping the old index, regardless of the amount of fragmentation present in the old index. This means you need to have enough free space to accommodate the new index.

Reorganizing an index first squishes the index rows together to try to deallocate some index pages, and then shuffles the remaining pages in place to make their physical (allocation) order the same as the logical (key) order. This only requires a single 8-KB page as temporary storage for pages being moved around. So an index reorganize is extremely space efficient, and is one of the reasons I wrote the original DBCC INDEXDEFRAG for SQL Server 2000 (the predecessor of ALTER INDEX … REORGANIZE).

If you have space constraints, and can’t make use of single-partition rebuild, reorganizing is the way to go.

Click through for the rest of the story.
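
For reference, the two operations Paul contrasts; a minimal sketch (table and index names are hypothetical):

-- Rebuild: builds a brand-new copy of the index, so it needs free space for a second copy
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON);

-- Reorganize: compacts and shuffles pages in place, needing only a single spare page
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REORGANIZE;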

Missing Index DMV Limitations

Brent Ozar goes into detail on why you should not blindly trust missing index recommendations in SQL Server:

SQL Server’s telling us that it needs an index to do an equality search on LastAccessDate – because our query says LastAccessDate = ‘2016/11/10’.

But in reality, that’s not how you access datetime fields because it won’t give you everyone who accessed the system on 2016/11/10 – it only gives you 2016/11/10 00:00:00. Instead, you need to see everyone on that day, like this:
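
Brent's corrected query isn't reproduced in the excerpt, but the shape of the fix is a half-open date range; a sketch, assuming the Users table and LastAccessDate column from the quoted predicate:

-- Everyone who accessed the system at any point on 2016-11-10
SELECT *
FROM dbo.Users
WHERE LastAccessDate >= '2016-11-10'
  AND LastAccessDate <  '2016-11-11';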

Read the whole thing.  The crux of this is that the missing index recommendation process only gets to see what you’re running at the time you run it, so it can’t generalize all that well; that’s your job.
