Press "Enter" to skip to content

Author: Kevin Feasel

Spreading Out Multi-Server Agent Runs

Tracy Boggiano shows how to distribute SQL Agent job runtimes for multi-server jobs using MSX/TSX:

First, you need to decide how many time blocks or hours you want the jobs to run in. Let’s start with scenario one, where you pick, for example, four time blocks. First, you declare a variable for the time block, and we feed in @@SERVERNAME to determine the time block value that server will run in. Then we wrap our code in our time block; in our example, we will run index maintenance across a 12-hour period, spread out in three-hour blocks. Mind you, in my index process (which I should probably blog about as well), I am processing one index at a time and have something that BREAKs out of the procedure when it exceeds its time block. So below, we start the index maintenance job on a server between the hours of 6 PM and 5 AM, based on the time block value we got back.
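For a rough idea of the pattern, here is a minimal sketch (the procedure name and the block math are my own illustration, not Tracy’s exact code):

-- Hash the server name into one of four time blocks (0 through 3).
DECLARE @TimeBlock int = ABS(CHECKSUM(@@SERVERNAME)) % 4;

-- Hours elapsed since 6 PM, wrapping past midnight.
DECLARE @HoursSince6PM int = (DATEPART(HOUR, GETDATE()) + 24 - 18) % 24;

-- Block 0 covers 6-9 PM, block 1 covers 9 PM-midnight, and so on.
IF @HoursSince6PM / 3 = @TimeBlock
    EXEC dbo.IndexMaintenance;  -- hypothetical maintenance procedure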

Click through for a sample.


There’s Only One Way To Order

Matthew McGiffen notes that there is only one way to order, and that is to use the ORDER BY clause:

Everyone, at the beginning of their SQL career, gets told that it is important to include an ORDER BY if they want the results ordered. Otherwise, the order in which they are returned is not guaranteed.

But then you run a lot of queries that don’t need a specific order – and you see that they (at least seem to) come out in the same order every time. You could (almost) be forgiven for thinking you can rely on that.

There was even a question on a Microsoft SQL certification exam a few years ago that asked what the default order was for records returned by a simple SELECT – the answer it was looking for was that it would be according to the order of the clustered index. So you can rely on that – right?

Wrong. The question was a bad question, and the answer was incorrect. Let’s look at this in action.

Order is never guaranteed to be stable unless you specify a unique ordering using ORDER BY.
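A quick illustration, using a hypothetical table with a clustered primary key on OrderID:

-- Without ORDER BY, the order of results is an accident of the plan:
-- parallelism, an allocation-order scan, or a different index can all
-- change what comes back.
SELECT OrderID, CustomerName
FROM dbo.Orders;

-- The only guarantee is an explicit ORDER BY; ordering by a unique
-- column (or column set) makes the order fully deterministic.
SELECT OrderID, CustomerName
FROM dbo.Orders
ORDER BY OrderID;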


Hidden Extended Events: The Debug Events

Jess Pomfret goes looking for Extended Events relating to the transaction log:

I was troubleshooting an issue last week that led me to fire up Extended Events to look at records being written to the transaction log. I typed ‘Transaction’ into the search bar, hoping to find something that would do the trick, and didn’t quite find what I was looking for.

After a few more failed attempts, I headed to the internet and found a post by Paul Randal describing exactly what I needed for this situation: the [sqlserver].[transaction_log] event. Hold on, that’s exactly what I searched for. I ran the T-SQL within his blog post, the event session was successfully created, and it gave me the information I was looking for.
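For reference, an event session along those lines looks something like this (a sketch: the session name, database filter, and target are illustrative, and Paul’s post has the full version):

CREATE EVENT SESSION [TrackLogRecords] ON SERVER
ADD EVENT sqlserver.transaction_log
(
    WHERE sqlserver.database_name = N'YourDatabase'  -- illustrative filter
)
ADD TARGET package0.ring_buffer;  -- in-memory target for quick inspection
GO

ALTER EVENT SESSION [TrackLogRecords] ON SERVER STATE = START;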

But, as Jess points out, you can still get to it from the GUI.  Read on to learn how.


A SQL Client For Apache Flink

Alex Woodie points out that Apache Flink now has a SQL client built in:

Apache Flink has contained SQL functionality since Flink version 1.1, which introduced a SQL API based on Apache Calcite and a table API, too. While the combined SQL and Table API today provides valuable ways for developers to apply well-understood relational data and SQL constructs to the world of stream data processing, its usefulness is somewhat limited.

For starters, only Scala and Java experts can avail themselves of the API, according to the description of the new SQL client, which is codenamed FLIP-24. What’s more, any table program written with the SQL and Table API had to be packaged with Apache Maven, a Java-based project management tool, and submitted to the Flink cluster before running.

With the launch of the SQL CLI Client in Flink version 1.5, the Flink community is taking its support for SQL in a new direction. According to the FLIP-24 project page, providing an interactive shell will not only make Flink accessible to non-programmers, including data scientists, but it will also eliminate the need for a full IDE to program Flink apps. With millions of SQL-loving data analysts out there, the benefits could certainly be vast.
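To give a taste of what that looks like, here is a hypothetical session in the client (started with ./bin/sql-client.sh embedded; the clickstream table is an assumed source, pre-registered in the client’s environment configuration):

-- A continuous query over a stream, written as plain SQL:
SELECT user_id, COUNT(*) AS event_count
FROM clickstream
GROUP BY user_id;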

Good stuff.  Feasel’s Law in action.


Using Temporal Tables For Auditing Changes

Nisarg Upadhyay shows how to use temporal tables for basic data auditing:

To audit the data changes for a specific table, we perform a time-based analysis of the temporal table. To do that, we use the ‘FOR SYSTEM_TIME’ clause with the temporal-specific sub-clauses below to query data across the current and history tables. Let me explain the output of queries using the different sub-clauses. Below is the setup:

  1. I inserted a product named ‘Flat Washer 8’ with a list price of 0.00 into the temporal table at 09:02:25 AM.

  2. I changed the list price to 500.00 at 10:13:56 AM.
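Queries against that setup look roughly like this (the table name, period column names, and dates are illustrative; note too that system-versioned tables record their period columns in UTC):

-- The state of the row at an exact instant (returns the 0.00 version):
SELECT Name, ListPrice
FROM dbo.Product
    FOR SYSTEM_TIME AS OF '2021-06-01 09:30:00'
WHERE Name = 'Flat Washer 8';

-- Every version valid at some point in a window (returns both versions):
SELECT Name, ListPrice, ValidFrom, ValidTo
FROM dbo.Product
    FOR SYSTEM_TIME BETWEEN '2021-06-01 09:00:00' AND '2021-06-01 11:00:00'
WHERE Name = 'Flat Washer 8';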

Temporal tables aren’t going to solve all of your auditing problems, but they can be useful.


Does Your Server Have Too Much Memory?

Brent Ozar has a few tips to see if you have too much memory:

Sounds impossible, right? The saying goes that you can never be too rich or too thin or have too much memory.

However, there is one good indication that your SQL Server is probably overprovisioned, and to explain it, I need to cover 3 metrics.

1. Max Server Memory is set at the instance level: right-click on your SQL Server name in SSMS, click Properties, Memory, and it’s “Maximum server memory.” This is how much memory you’re willing to let the engine use. (The rocket surgeons in the audience are desperate for the chance to raise their hands to point out different things that are or aren’t included in max memory – hold that thought. That’s a different blog post.)
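If you prefer a query to the GUI, the same setting is visible in sys.configurations:

-- The configured value and the value currently in effect, in MB:
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';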

If you have too much memory, I’d happily borrow a cup of RAM.


Why Nobody Is Reading Your Report

Stephanie Evergreen really cuts to the chase:

Here’s the hard truth: Your report probably sucks. Mine sure did. The heart of your content is likely fine, maybe even helpful. But, if you are anything like the hundreds of reports I see every year, the entire set of cultural norms we have somehow developed around reporting is just setting us up for failure, writing a destiny where no one is reading the report.

Why? Let me lay out the most common issues I see and propose some strategic solutions.

There’s an emphasis here on academic papers, but it applies to corporate work too.


Trigger Or Constraint?

Andy Levy points out that you shouldn’t use a trigger when a default constraint will do:

We want to spend our SQL Server licensing dollars wisely, so why ask it to do unnecessary work? More CPU time, more IO, and our client is waiting longer for the process to complete (I don’t like making anyone wait).

There’s a second “gotcha” with the AFTER INSERT method that applies to only some use cases. Let’s say you’re loading some historical data into the table, and that data includes the LastModified date. Or maybe your application has a very specific date that should be inserted into the table for this field.
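To make the comparison concrete, here is a sketch against a hypothetical table (the trigger mirrors the AFTER INSERT pattern; the default constraint is the lighter alternative):

-- The trigger approach: extra work on every insert, and it clobbers any
-- LastModified value the INSERT explicitly supplied.
CREATE TRIGGER trg_Orders_SetLastModified
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE o
    SET LastModified = SYSDATETIME()
    FROM dbo.Orders AS o
    INNER JOIN inserted AS i
        ON o.OrderID = i.OrderID;
END;
GO

-- The default constraint: applied only when the INSERT omits the column,
-- so historical loads carrying their own LastModified values pass through.
ALTER TABLE dbo.Orders
ADD CONSTRAINT DF_Orders_LastModified
    DEFAULT SYSDATETIME() FOR LastModified;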

Andy makes good points.


Alternatives To Temp Tables In SSIS

Tim Mitchell gives us a few methods for avoiding temp tables in SQL Server Integration Services:

While temp tables are a good option for in-flight data transformation, there are some unique challenges that arise when using temp tables in SSIS.

SQL Server Integration Services uses tight metadata binding for data flow operations. This means that when you connect to a relational database, flat file, or other structure in an SSIS data flow, the SSIS design-time and runtime tools will check those data connections to validate that they exist and that the metadata has not changed. This tight binding is by design, to avoid potential runtime issues arising from unexpected changes to the source or destination metadata.

Because of this metadata validation process, temp tables present a challenge to the SSIS data flow. Since temp tables exist only for the duration of the session(s) using them, a temp table created in a previous step of an SSIS package may not be present when validation needs to occur. During the design of the package (or, even worse, when you execute the deployed package in a scheduled process), you could find yourself staring at an “object not found” error message.
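One common workaround (a sketch, and not necessarily the exact approach Tim recommends) is a permanent staging table, which is always there for SSIS’s validation to find:

-- Create the staging table once; column definitions are illustrative.
IF OBJECT_ID(N'dbo.StagingOrders') IS NULL
BEGIN
    CREATE TABLE dbo.StagingOrders
    (
        OrderID   int            NOT NULL,
        OrderDate datetime2(0)   NOT NULL,
        Amount    decimal(18, 2) NOT NULL
    );
END;

-- Truncate at the start of each package run instead of re-creating it.
TRUNCATE TABLE dbo.StagingOrders;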

It’s good to have alternatives, though there are times when you really just need a temp table.
