
Category: T-SQL Tuesday

CAP_CPU_PERCENT

Robert Davis looks at the CAP_CPU_PERCENT option in Resource Governor:

The need for this setting came about because MAX_CPU_PERCENT is not applied unless the server is busy. This could lead to a situation where queries in a low priority resource pool start running while the server is idle and are allowed to consume all the CPU they can. Then high priority queries spin up, and they can’t immediately get the CPU they need because the low priority queries aren’t capped. CAP_CPU_PERCENT was designed to set a hard limit that the queries in a pool cannot exceed even if the server is idle. For example, if you cap the CPU at 25%, the queries in the pool will not exceed 25% no matter how idle the server is.

Problem solved, right?

When the end of a section is a yes/no question, the answer is usually “no.”  Read on before this burns you.
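For reference, here is a minimal sketch of such a pool, with hypothetical pool and workload group names (not from the post):

    -- MAX_CPU_PERCENT is a soft limit, enforced only under CPU contention;
    -- CAP_CPU_PERCENT is a hard ceiling, enforced even on an idle server.
    CREATE RESOURCE POOL LowPriorityPool
    WITH (
        MAX_CPU_PERCENT = 25,
        CAP_CPU_PERCENT = 25
    );
    GO

    CREATE WORKLOAD GROUP LowPriorityGroup
    USING LowPriorityPool;
    GO

    ALTER RESOURCE GOVERNOR RECONFIGURE;
    GO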


Nothing New Under The Sun

Kevin Hill reminisces and warns:

Installation defaults that are going to bite you (not version specific, and the installer is getting better):

  • Files all on the C drive

  • One TempDB data file (improved in SQL 2016)

  • Backups on C drive

  • No automated backups

  • Allow SQL to use ALL the memory

  • Allow SQL to use ALL the CPUs

  • Builtin\Administrators group not default*

  • Compressed backup set to OFF

There’s good advice here, so read on.
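A couple of those defaults can be adjusted after installation with sp_configure. This is only a sketch; the memory number is illustrative and should be sized for the server in question:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Leave memory for the OS instead of letting SQL Server take it all
    EXEC sp_configure 'max server memory (MB)', 28672;  -- e.g., 28 GB on a 32 GB box
    RECONFIGURE;

    -- Turn backup compression on by default
    EXEC sp_configure 'backup compression default', 1;
    RECONFIGURE;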


Who Monitors The Monitors?

Dave Mason discusses monitors and what happens when they fail:

I was reminded of this recently in my little SQL Server world. I have a number of garden-variety alerts set up, plus some other more custom monitoring stuff, mostly tied to DDL triggers and event notifications. The one thing all of them have in common is Database Mail. You can probably guess where I’m going with this. Yep, Database Mail stopped working. A couple weeks passed before I realized it. Fortunately, of all the alerts I should have been notified about, none were serious.

How would I prevent this from happening in the future? I guess I could build another system to monitor my monitoring system. Something like System C, which monitors System B, which monitors System A. But where would that end? System D? System E? Where should the line be drawn? I don’t know that there’s a right answer here, although admittedly, the farther into the alphabet you get, the more absurd it sounds.

At some level, process becomes the answer.  In my case, not before I create a few more systems…
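One small mitigation is to have a scheduled job poll Database Mail’s own logging. A minimal sketch, assuming the standard msdb views; the seven-day window is arbitrary:

    -- Recent Database Mail errors
    SELECT log_date, description
    FROM msdb.dbo.sysmail_event_log
    WHERE event_type = 'error'
      AND log_date > DATEADD(DAY, -7, SYSDATETIME())
    ORDER BY log_date DESC;

    -- Mail items that failed to send
    SELECT mailitem_id, subject, send_request_date
    FROM msdb.dbo.sysmail_faileditems
    ORDER BY send_request_date DESC;

Of course, anything this query finds has to surface somewhere other than e-mail, which is exactly the problem Dave describes.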


Observations On Azure SQL Data Warehouse

Jeffrey Verheul is running this month’s T-SQL Tuesday.  Here is his contribution:

A thing that can make migrations to the cloud a bit more difficult is that Azure SQL databases are basically a contained datastore (you would call it a “contained database” when you run it on-premises). This means that you (by default) can’t connect from one database to the other. This could mean that you need to rewrite your applications or stored procedures, or maybe even redesign your entire database/application/domain model.

This also means that running a stored procedure from Ola Hallengren’s maintenance solution can only be done on the specific database, and not from the master database like the on-premises version does. These small challenges can be overcome, but it does mean code duplication in your databases, because the maintenance procedures need to be deployed to every single database.

Read on for more observations regarding Azure SQL Data Warehouse.
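To see the containment issue concretely: a three-part name that works fine on-premises fails outright in Azure SQL Database. The database and table names here are hypothetical:

    -- Works on-premises; on Azure SQL Database this raises error 40515,
    -- because cross-database references are not supported.
    SELECT TOP (10) *
    FROM OtherDatabase.dbo.SomeTable;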


Thinking About Azure SQL Database

Kevin Hill with an introductory-level discussion of Azure SQL Database:

Some basic terminology:

  • Cloud: No such thing.  It is just your stuff on someone else’s machines that they maintain for you.

  • Virtual Machine (VM): A Virtual Server on some physical servers…yours, or someone else’s.

  • Azure: Fancy name for Microsoft’s cloud. As a noun or an adjective it means “blue”.  Or a small butterfly.

  • Azure SQL database: Just a database in Azure on some storage.

  • Azure Virtual Machine: A VM on Microsoft’s Azure servers, where you do not have to maintain the underlying physical infrastructure.

This is a nice, very high-level introduction to why Azure SQL Database exists.


T-SQL Tuesday 081 Roundup

Jason Brimhall has a roundup of T-SQL Tuesday #081:

One of the tricks to becoming and staying a top-tier data talent or professional is a perpetual cycle of learning, adapting, changing, and evolving. We must be in a continual cycle of self-evaluation and self-modification. Let’s call this by something else – we must be agile. There, I said the five-letter word. Think about it in broad strokes with your career – it is a development process with perpetual evaluation, review, and tweaks.

Now think about the invite and see how that fits with what I just said or with the, cough cough, agile flow. You start (albeit very basically) with a need for enhancement, then you plan which pieces of the enhancement you can accomplish, you then do the work (whether successful or not), then after you deliver the work you conclude with a retrospective (what went well and what needs to change). Yes! I do feel rather dirty for sneaking this on everybody like this. That said, when you think about the model and apply it in broad strokes to your career path – it has merit.

Read on to see who participated this month, along with Jason’s thoughts.


Temporal Database Theory

Kennie Pontoppidan reads and reviews a book on temporal database theory:

I have chosen to blog about Richard T. Snodgrass’ book “Developing time-oriented database applications in SQL.” I heard about this book last year around this time, when I started to investigate the new temporal feature “System versioned tables” in SQL Server 2016. I believe it was my old colleague Peter Gram from Miracle who pointed out the book to me, and usually when Peter recommends a book, I buy it and (eventually) read it. It was also about time (no pun intended), since I’m giving a talk on “All things time-related” for two SQL Saturdays during the next few months, and I needed to spice up the presentation with some new material.

In this blog post, I will quickly scratch down a few of the takeaways I have taken from the book already.

Sounds like an interesting read.
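For anyone who hasn’t seen the SQL Server 2016 feature Kennie mentions, here is a minimal sketch of a system-versioned table, with illustrative table and column names:

    CREATE TABLE dbo.Employee
    (
        EmployeeId INT NOT NULL PRIMARY KEY,
        Name       NVARCHAR(100) NOT NULL,
        ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

    -- Ask for the rows as they looked at a point in the past
    SELECT *
    FROM dbo.Employee
    FOR SYSTEM_TIME AS OF '2016-09-01T00:00:00';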


Extended Events Audit

Steve Jones creates an audit with Extended Events:

The third part of the invitation was to write this. I covered what I did, and some of what I learned. I’ll add a bit more here.

I certainly was clumsy working with XE, and despite working my way through the course, I realize I have a lot of learning to do in order to become more familiar with how to use XE. While I got a basic session going, depending on when I started it and what I was experimenting with, I sometimes found myself with events that never went away, such as a commit or rollback with no corresponding opening transaction.

This T-SQL Tuesday was a bit broader in scope, so it has been interesting watching people respond.
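For context, a minimal sketch of the sort of session involved, watching transaction events; the session name and file target are illustrative, not Steve’s actual session:

    CREATE EVENT SESSION TrackTransactions ON SERVER
    ADD EVENT sqlserver.sql_transaction
        (ACTION (sqlserver.session_id, sqlserver.sql_text))
    ADD TARGET package0.event_file (SET filename = N'TrackTransactions');
    GO

    ALTER EVENT SESSION TrackTransactions ON SERVER STATE = START;
    GO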


Compression Delay

Rob Farley digs into Compression Delay as part of real-time operational analytics:

The thing with Operational Analytics is that the analytical data, reporting data, warehouse-style data, is essentially the same data as the transactional data. Now, it doesn’t look quite the same, because it hasn’t been turned into a star schema or had slowly changing dimension considerations applied, but for the purposes of seeing what’s going on, it’s data that’s capable of handling aggregations over large amounts of data. It’s columnstore.

Now, columnstore data isn’t particularly suited to transactional data. Finding an individual row within columnstore data can be tricky, and it’s much more suited to rowstore. So when data is being manipulated quite a lot, it’s not necessarily that good to be using columnstore. Rowstore is simply better for this.

But with SQL 2016, we get updateable non-clustered columnstore indexes. Data which is a copy of the underlying table (non-clustered data is a copy – clustered data or heap data is the underlying table). This alone presents a useful opportunity, as we can be maintaining a columnstore copy of the data for analytics, while handling individual row updates in the rowstore.

Read the whole thing.
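As a concrete sketch of the pattern Rob describes (table and index names are illustrative): an updateable nonclustered columnstore index over a rowstore table, with a compression delay so hot rows linger in the delta rowgroup before being compressed:

    CREATE TABLE dbo.Orders
    (
        OrderId    INT IDENTITY PRIMARY KEY,
        CustomerId INT NOT NULL,
        Amount     DECIMAL(10, 2) NOT NULL,
        OrderDate  DATETIME2 NOT NULL
    );

    -- SQL Server 2016: updateable NCCI; COMPRESSION_DELAY (in minutes) keeps
    -- freshly modified rows out of compressed rowgroups for a while.
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Orders
        ON dbo.Orders (CustomerId, Amount, OrderDate)
        WITH (COMPRESSION_DELAY = 10);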
