Press "Enter" to skip to content

Month: March 2016

NUMA With Few Cores

Denny Cherry asks and answers the question of how many NUMA nodes we should use on a server with a large amount of RAM but relatively few cores:

For this example, let’s assume that we have a physical server with 512 Gigs of RAM and two physical NUMA nodes (and two CPU sockets). We have a VM running in that machine which has a low CPU requirement, but a large working set. Because of this we have 4 cores and 360 Gigs of RAM presented to the VM.

The answer is not trivial, making this an interesting question.
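If you want to see how SQL Server actually lays out its NUMA nodes and schedulers on a given box, a quick look at the OS-level DMVs will tell you (a minimal sketch; exact columns can vary slightly by version):

-- NUMA nodes as SQL Server sees them, excluding the DAC node.
SELECT node_id, node_state_desc, memory_node_id, online_scheduler_count
FROM sys.dm_os_nodes
WHERE node_state_desc <> N'ONLINE DAC';

-- Memory committed on each memory node.
SELECT memory_node_id, virtual_address_space_committed_kb
FROM sys.dm_os_memory_nodes;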


Database In Recovery

James Anderson had a database which would drop into In Recovery mode a few times throughout the day:

The database in question wasn’t stuck in recovery, it would slip in and out of the status throughout the day. Normally, I would only ever expect to see a database in recovery during a restore or after a service restart. Once recovery is complete, I would not expect to see the database slip into ‘in recovery’ again. I especially wouldn’t expect a database to keep slipping in and out of recovery.

The answer is a true head-slapper.  Whose head, I’ll leave up to you…
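If you run into similar symptoms, a reasonable first diagnostic (a sketch, not necessarily James’s eventual fix) is to watch the database’s state and search the error log for recovery messages:

-- 'RECOVERING' will show up here while the database is in recovery.
SELECT name, state_desc
FROM sys.databases;

-- Search the current error log for databases starting up (and thus recovering).
EXEC sys.sp_readerrorlog 0, 1, N'Starting up database';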


Optimizing OR Clauses

Daniel Hutmacher looks at different ways of optimizing queries with multiple conditionals and different parameters:

The SQL Server query optimizer can find interesting ways to tackle seemingly simple operations that can be hard to optimize. Consider the following query on a table with two indexes, one on (a), the other on (b):

SELECT a, b
FROM #data
WHERE a<=10 OR b<=10000;

The basic problem is that we would really want to use both indexes in a single query.

We get to see a few different versions of the query as well as the execution plans which result.
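To give a flavor of the approach, one classic manual rewrite (not necessarily the version Daniel lands on) splits the OR into two branches so each can seek its own index:

-- Each branch seeks one index; the second branch excludes rows the
-- first already returned, so the union doesn't double-count.
SELECT a, b
FROM #data
WHERE a <= 10

UNION ALL

SELECT a, b
FROM #data
WHERE b <= 10000
  AND (a > 10 OR a IS NULL);  -- keep NULLs the first branch missed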


Using The OUTPUT Clause

Steve Jones shows how to use the OUTPUT clause with an INSERT statement:

I got asked a question about the OUTPUT clause recently and realized I didn’t remember the syntax. I’ve rarely used this, so I had to look it up and thought this would be a good basic post.

The idea with OUTPUT is that the data from the inserted and deleted tables can be output from the INSERT statement, outside of your triggers. This is the same data, but you can access it in the insert.

The format is

INSERT xxx OUTPUT yyyy INTO @zzz VALUES (or SELECT) mmmm

If I had one thing I could change about OUTPUT, I’d like to be able to output directly into variables for those cases in which I know I’m only going to get one result (or maybe I only care about one arbitrary result in a set).
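To make that format concrete, here is a minimal, runnable sketch (the table and column names are hypothetical):

-- Hypothetical target table with an IDENTITY column.
CREATE TABLE #Widgets (Id INT IDENTITY(1, 1), Name VARCHAR(50));

DECLARE @NewRows TABLE (Id INT, Name VARCHAR(50));

-- OUTPUT captures the generated Id values, no trigger required.
INSERT INTO #Widgets (Name)
OUTPUT inserted.Id, inserted.Name INTO @NewRows (Id, Name)
VALUES ('Alpha'), ('Beta');

-- The rows we just inserted, identity values included.
SELECT Id, Name FROM @NewRows;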


Graphing CPU Utilization

Slava Murygin uses spatial data types to graph CPU utilization on a SQL Server instance:

That diagram shows you roughly the last 260 minutes of server CPU usage, measured in percent.

As you can see, my SQL Server is mostly doing nothing, and only during the writing of this blog post in the last half an hour or so has it been heavily running test scripts to over-utilize my CPU, but it still barely goes above 60% CPU (blue line).

The red line represents all processes besides SQL Server, so you can tell whether anything else from outside is impacting your performance.

Combined with Glenn Berry’s diagnostic queries, you could generate some quick analytics.  I’d still use R for anything more than slightly complicated, but this is great for those environments in which you don’t have good alternative tooling.
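For reference, the raw numbers behind a chart like this typically come from the scheduler monitor ring buffer, sampled once a minute; this sketch is in the spirit of Glenn Berry’s well-known version:

DECLARE @ts_now BIGINT =
    (SELECT cpu_ticks / (cpu_ticks / ms_ticks) FROM sys.dm_os_sys_info);

-- Roughly the last 256 minutes of CPU, one row per minute.
SELECT TOP (256)
       DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS EventTime,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SQLServerCPU,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS SystemIdle
FROM (SELECT [timestamp], CONVERT(xml, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE N'%<SystemHealth>%') AS rb
ORDER BY [timestamp] DESC;

-- Everything else (Slava's red line) is 100 - SQLServerCPU - SystemIdle.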


Introduction To Service Broker

Colleen Morrow is starting a series on Service Broker:

So why am I such a big fan of Service Broker and if it’s so great, why isn’t everybody using it? Let me start by telling you why I’m such a fan.

  • Asynchronous – The biggest benefit of Service Broker, or probably any messaging technology, is that it decouples long-running processing from the client application.  A great example of what I mean by this is Amazon.com.  When you place an order at Amazon, a whole series of backend processes are initiated.  Your payment is processed, inventory is verified and updated, fulfillment and shipping are initiated, etc.  All of which ultimately ends with a box arriving on your doorstep.  But the website doesn’t wait for all of that stuff to finish before it thanks you for your order.  Your order information is added to a queue and the page returns.  All that other stuff is handled asynchronously.  And that’s what Service Broker lets you do in your database.

I think the biggest impediment to adoption of Service Broker is that there was never a friendly UI.  The same applied to Extended Events in 2008.  Both involve a non-trivial amount of setup and maintenance, and the tooling just hasn’t been there for Service Broker.  I know they’re still making (minor) improvements to the product, but if they wanted a big improvement, a friendly UI tied into Management Studio would go a long way.
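To give a sense of the setup involved, here is a minimal sketch of the moving parts (all names are hypothetical, and it assumes Service Broker is already enabled on the database):

-- One message type, one contract, and a queue plus service on each end.
CREATE MESSAGE TYPE [//Demo/OrderPlaced] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//Demo/OrderContract] ([//Demo/OrderPlaced] SENT BY INITIATOR);

CREATE QUEUE dbo.OrderInitiatorQueue;
CREATE QUEUE dbo.OrderTargetQueue;
CREATE SERVICE [//Demo/OrderInitiatorService] ON QUEUE dbo.OrderInitiatorQueue;
CREATE SERVICE [//Demo/OrderTargetService]
    ON QUEUE dbo.OrderTargetQueue ([//Demo/OrderContract]);

-- The "place an order" side: send the message and return immediately.
DECLARE @handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [//Demo/OrderInitiatorService]
    TO SERVICE '//Demo/OrderTargetService'
    ON CONTRACT [//Demo/OrderContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @handle
    MESSAGE TYPE [//Demo/OrderPlaced] (N'<Order Id="42" />');

-- The backend dequeues and processes whenever it is ready.
RECEIVE TOP (1) conversation_handle, message_type_name, message_body
FROM dbo.OrderTargetQueue;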


Maintaining Full-Text Indexes

Dave Mason talks about full-text index maintenance:

My first encounter with full text indexes and degraded performance was related to an enhancement I made to an aspx page years ago. I wanted all of the search fields to use an AutoComplete AJAX extender to mimic the behavior you see when you type a few letters into the search field on Google.com or Bing.com. A traditional non-clustered index wasn’t sufficient for the “Location Address” field, so I settled on a full text index–it worked very well.

After some amount of time (I don’t remember how long), performance slowed considerably. I was surprised to find the full text index for “Location Address” had a large number of fragments. I wish I had kept some notes on my findings. I can’t remember how many fragments there were, but I’m thinking it was in the 15-20 range. If memory serves me, Orange Co., FL has about 400,000 physical location addresses. The underlying table had one row per location address. Knowing me, the indexed column was probably VARCHAR(100) or VARCHAR(128). This doesn’t seem like a huge amount of data, so I was surprised the full text searches were slow, even with 15-20 fragments. Reorganizing the related full text catalog made a world of difference. Performance improved drastically.

All indexes need maintenance.  Dave has a script to help with full-text indexes.
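A hedged way to keep an eye on the same symptom is to count fragments per full-text index and reorganize the catalog when the count creeps up (Dave’s script goes further than this sketch, and the catalog name here is hypothetical):

-- Closed fragments currently serving queries, per full-text indexed table.
SELECT OBJECT_NAME(table_id) AS table_name,
       COUNT(*) AS fragment_count,
       SUM(data_size) AS total_bytes
FROM sys.fulltext_index_fragments
WHERE status IN (4, 6)
GROUP BY table_id
ORDER BY fragment_count DESC;

-- Merge the fragments for every full-text index in the catalog.
ALTER FULLTEXT CATALOG MyCatalog REORGANIZE;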


Memory-Optimized Temp Objects

Jos de Bruijn shares a couple scenarios in which In-Memory OLTP can improve performance—using memory-optimized table types and replacing certain types of temp tables with schema-only memory-optimized tables:

Tempdb can be a performance bottleneck for many applications. Workloads that intensively use table-valued parameters (TVPs), table variables and temp tables can cause contention on things like metadata and page allocation, and result in a lot of IO activity that you would rather avoid.

What if TVPs and temp tables could live just in memory, in the memory space of the user database? In-Memory OLTP can help! Memory-optimized table types and SCHEMA_ONLY memory-optimized tables can be used to replace traditional table types and traditional temp tables, bypassing tempdb completely, and providing additional performance improvements through memory-optimized data structures and data access methods.

I’ve used both of these techniques to good effect, but the harsh limitations in 2014 prevented me from doing as much with them as I wanted.
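For a concrete picture, here is a minimal sketch of both replacements (names are hypothetical; it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup):

-- A memory-optimized table type: use it where you'd use a table variable or TVP.
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);

-- A SCHEMA_ONLY table: durable schema, non-durable rows, no tempdb involvement.
-- Unlike a #temp table it is shared across sessions, hence the SessionId column.
CREATE TABLE dbo.SessionScratch
(
    SessionId INT NOT NULL,
    OrderId   INT NOT NULL,
    PRIMARY KEY NONCLUSTERED (SessionId, OrderId)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- Usage: a table variable of the new type lives in the user database's memory.
DECLARE @Ids dbo.OrderIdList;
INSERT INTO @Ids (OrderId) VALUES (1), (2), (3);
SELECT OrderId FROM @Ids;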
