Press "Enter" to skip to content

Author: Kevin Feasel

T-SQL Medians

Daniel Hutmacher has a post showing how to calculate medians and percentiles in T-SQL:

Medians as a concept are simple enough. If you have a large number of values, like a range of statistical values, you want to pick the middle one. The median, as opposed to the average, is useful for a number of reasons, one of them being that you can reduce the effect of so-called outlier values.

The fact that SQL Server doesn’t have a fast, built-in median function surprises me, to be honest.  The best alternative I’ve found is a CLR function in SQL#.
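
For reference, the closest thing to a built-in median in T-SQL is PERCENTILE_CONT (SQL Server 2012 and later), which works but is rarely fast on large sets; that performance gap is what Daniel’s post and the SQL# CLR function address. A minimal sketch against a hypothetical dbo.Sales table:

```sql
-- Median sale amount per region using PERCENTILE_CONT (SQL Server 2012+).
-- PERCENTILE_CONT is a window function, so DISTINCT collapses the repeated
-- per-row results down to one row per region.
SELECT DISTINCT
    Region,
    PERCENTILE_CONT(0.5)
        WITHIN GROUP (ORDER BY Amount)
        OVER (PARTITION BY Region) AS MedianAmount
FROM dbo.Sales;
```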


Logical Data Models

Kevin Kline discusses logical data modeling:

In a recent blog post entitled Is Logical Data Modeling Dead?, Karen Lopez (b | t) comments on the trends in the data modeling discipline and shares her own processes and preferences for logical data modeling (LDM). Her key point is that LDMs are on the decline primarily because they (and their creators) have failed to adapt to changing development processes and trends.

I love all things data modeling. I found data models to be a soothing and reassuring roadmap that underpinned the requirements analysis and spec writing of the Dev team, as well as a supremely informative artifact of the Dev process which I would constantly refer to when writing new T-SQL code and performing maintenance. However, as time has passed, I have been surprised by how far it has fallen out of favor.

This is an interesting discussion.  I’m not sure I’ve ever created a true logical data model.  I’ve worked with systems which could potentially take advantage of them, but they never hit the top of the priority list.


Choosing Between Optimistic Concurrency Levels

Kendra Little has a cheat sheet for comparing the two optimistic concurrency levels:

SQL Server offers two flavors of optimistic locking for traditional disk-based tables: Read Committed Snapshot Isolation (RCSI), and Snapshot Isolation. They are each great tools to reduce blocking and make applications faster, particularly in databases that have lots of tiny reads and writes that need to be quick, or mixed-use patterns (lots of little reads and writes + larger reporting queries).

Both isolation levels are available in SQL Server Standard Edition and Web Edition as well as Enterprise. That’s right, they’re cheap and easy. Each of them is controlled by database-level settings, and both default to disabled when you install SQL Server.

Both of these are awesome, both have potential drawbacks, and both need testing.  I’ve had good experiences with RCSI, but even then, maybe about 1% of procedures need specific locking hints (either NOLOCK or an explicit lock) to maintain previous application behavior and to deal with the problem Kendra brought up.  The moral of the story:  test, test, test.
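
For reference, both options are database-level settings; here is a minimal sketch of turning them on, with YourDatabase standing in for a real database name:

```sql
-- RCSI changes the behavior of the default READ COMMITTED isolation level
-- for every session in the database.
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;  -- roll back in-flight transactions so the change can complete

-- Snapshot Isolation is opt-in: enabling it only allows sessions to request it.
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session opting in to Snapshot Isolation afterwards:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```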


Power BI With SSAS

Jens Vestergaard’s T-SQL Tuesday entry involves Power BI feeding from Analysis Services:

My story with this half-baked product (the Dashboard you are about to see), is that I needed some way of tracking performance on a couple of Analysis Services (SSAS) query servers. There are a lot of good posts and talks about how to collect and store performance counters and SSAS logs out there, and I suggest you look into this, this or that, if you need inspiration.

The current data set is about 200K rows, as I am sampling each server every 5th minute.

Both of these are valuable tools in a Microsoft BI environment.


Altering Columns

Kenneth Fisher points out that there are defaults when altering columns:

So here is the thing. When you change one, you change them all. That means if you don’t specify a precision when you could have, then you get the default. That’s not exactly a common problem, though. Usually what you are changing is the precision (or possibly the datatype). What is a common mistake is not specifying the nullability.

When modifying DDL, make sure that you keep it consistent and complete.
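
A minimal sketch of the trap, using a hypothetical table:

```sql
-- Hypothetical table with a NOT NULL column.
CREATE TABLE dbo.AlterTest (SomeValue decimal(10,2) NOT NULL);

-- Intending only to widen the precision, but leaving out the nullability...
ALTER TABLE dbo.AlterTest ALTER COLUMN SomeValue decimal(12,2);
-- ...the column is now nullable, because anything you omit reverts to its default.

-- Spell out the complete definition to keep the column NOT NULL.
ALTER TABLE dbo.AlterTest ALTER COLUMN SomeValue decimal(12,2) NOT NULL;
```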


Concurrency Simulator

Michael J. Swart has come up with a concurrency simulator:

Capacity planning is difficult for DBAs who expect growth. Will there be enough CPU, Memory or I/O to serve the anticipated load? One category falls outside those three: logical contention.

Logical contention is a problem where excessive blocking causes throughput to suffer. It would be great to get advance warning. One essential strategy is to make use of the blocked process report. The problem is that blocked process reports are an alarm metric, not a gauge metric. In other words, the blocked process report can indicate when there is a problem, but it is poor at giving advance notice.

This is a nice visual tool to begin to understand the topic.  The same principles apply to road traffic, water flow, etc.
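
As a reminder, the blocked process report Michael relies on is off by default; a minimal sketch of enabling it and capturing it with Extended Events (the 20-second threshold and session name are arbitrary choices here):

```sql
-- Emit a blocked process report for anything blocked longer than 20 seconds.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 20;
RECONFIGURE;

-- Collect the reports with an Extended Events session.
CREATE EVENT SESSION BlockedProcesses ON SERVER
    ADD EVENT sqlserver.blocked_process_report
    ADD TARGET package0.event_file (SET filename = N'BlockedProcesses');

ALTER EVENT SESSION BlockedProcesses ON SERVER STATE = START;
```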


Conditional Processing

Bill Fellows shows options for handling date-based conditional processing:

Do you see the problem? Really, there are two, but the one I’m focused on is the use of GETDATE to determine which branch of logic is executed. Today is Monday and I need to test the logic that runs on Friday. Yes, I can run these steps in isolation, and given that I’m not updating the logic that fiddles with the branches, my change shouldn’t have an adverse effect, but by golly, that sucks from a testing perspective. It’s also really hard to develop unit tests when your input data is the server date. What are you going to do, allocate 5 to 7 days for testing or change the server clock? I believe the answer is No and OH HELL NAH!

This isn’t just an SSIS thing, either. I’ve seen the above logic in TSQL as well. If you pin your logic to getdate/current_timestamp calls, then your testing is going to be painful.

I liken this to solving dependency injection problems in general:  make the caller define the date or date part.  That way, your test callers can define other dates and the “smarts” around which branch to take move up to a more swappable layer.
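
A minimal T-SQL sketch of that idea, with hypothetical procedure names:

```sql
-- The run date is a parameter, so test callers can inject any date they like.
CREATE PROCEDURE dbo.RunDailyProcess
    @RunDate date = NULL
AS
BEGIN
    SET NOCOUNT ON;

    -- Production callers pass nothing and get today.
    SET @RunDate = ISNULL(@RunDate, CAST(GETDATE() AS date));

    IF DATENAME(WEEKDAY, @RunDate) = N'Friday'
        EXEC dbo.RunFridayLogic @RunDate = @RunDate;   -- hypothetical branch
    ELSE
        EXEC dbo.RunWeekdayLogic @RunDate = @RunDate;  -- hypothetical branch
END;
```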


Getting Running SQL Agent Jobs

Andy Mallon introduces us to xp_sqlagent_enum_jobs, tells us not to use it, and gives us an alternative:

If you landed on this post, I’m guessing you know the answer to this, so I’ll be quick. xp_sqlagent_enum_jobs is an undocumented (and thus, unsupported) extended procedure that gives information on whether a job is currently running, when it last ran, when it runs next, etc.

The supported alternative is to use sp_help_job (which calls xp_sqlagent_enum_jobs), but sp_help_job is difficult to use in automation because trying to do INSERT #temp EXEC sp_help_job will result in an error due to nested INSERT…EXEC statements. As a result, people often turn to calling the unsupported xp directly.

I never actually knew about this procedure; I always just wrote queries against the msdb tables.  Andy’s solution is more elegant than what I normally come up with, though.
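
For the curious, this is the sort of msdb query I mean (not Andy’s solution; see his post for that): a job counts as running if its activity row in the current Agent session has a start time but no stop time.

```sql
-- Jobs currently running, according to the most recent SQL Agent session.
SELECT j.name,
       ja.start_execution_date
FROM msdb.dbo.sysjobactivity AS ja
    JOIN msdb.dbo.sysjobs AS j
        ON j.job_id = ja.job_id
WHERE ja.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)  -- latest Agent start-up
  AND ja.start_execution_date IS NOT NULL
  AND ja.stop_execution_date IS NULL;
```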


Copy-Only Backups

Tibor Karaszi talks about copy-only backups:

If you specify COPY_ONLY for a full backup, it will not affect the following differential backups. I.e., the following differential backups will be based on  the last full backup which was not performed with COPY_ONLY. Another way of looking at this is that a full backup using COPY_ONLY will not reset the DIFF (also known as DCM) page, which is page 6 and repeated approximately every 4 GB in each database file.

If you specify COPY_ONLY for a log backup, it will not affect the following log backups. I.e., the log backup chain will be intact without the one(s) produced using COPY_ONLY. Another way of looking at this is that a log backup using COPY_ONLY does not empty (truncate) the log.

That’s it! COPY_ONLY for a full backup does not, in any way, affect the restoreability for log backups.

Copy-only is a great feature, but understand what it does and how it works.
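
For reference, COPY_ONLY is just another WITH option on an ordinary backup; a minimal sketch with placeholder database and file names:

```sql
-- An out-of-band full backup that does not reset the differential base.
BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backup\YourDatabase_copy.bak'
    WITH COPY_ONLY, COMPRESSION, CHECKSUM;

-- An out-of-band log backup that neither truncates the log
-- nor disturbs the regular log backup chain.
BACKUP LOG YourDatabase
    TO DISK = N'D:\Backup\YourDatabase_log_copy.trn'
    WITH COPY_ONLY;
```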


DR On The Cheap

Derik Hammer’s final Availability Group architecture post covers disaster recovery on the cheap:

This architecture can be used when your organization does not value its secondary data center the same as the primary. It is a best practice to have matched or similar hardware between your primary and disaster recovery sites, but that is not always possible. When costs need to be reduced, it is better to have one failover server that you know can handle the workload rather than two servers which are underpowered. Underpowered hardware can easily become an effective outage if it causes timeouts as soon as a production workload is placed on it.

For many small to medium-sized enterprises, this might be the easiest route to sell: it’s hard to get management to sign off on “redundant” servers which normally don’t get used.
