Press "Enter" to skip to content

Category: Syntax

UUIDs in Postgres

Ryan Lambert is not a fan:

This month’s PgSQLPhriday #015 topic is about UUIDs. Lætitia Avrot is this month’s host, see her post for the full challenge text. The topic is described as a debate between the Database People and Developers. I’m not sure there’s such a clean divide on people’s opinions on the topic, as I know plenty of Database People that have settled on using UUIDs as their default. Similarly, I know even more developer types that have followed the arguably more conventional choice of using an auto-incrementing ID.

Click through for the debate. Most of this is similar to the argument on the SQL Server side, though without the specter of clustered indexes added to the mix.
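For a concrete picture of the two defaults being debated, here's a minimal sketch in Postgres with hypothetical tables (gen_random_uuid() is built in as of Postgres 13; earlier versions need the pgcrypto extension):

-- Conventional auto-incrementing key
CREATE TABLE orders_identity (
    order_id  bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    placed_at timestamptz NOT NULL DEFAULT now()
);

-- UUID key generated server-side
CREATE TABLE orders_uuid (
    order_id  uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    placed_at timestamptz NOT NULL DEFAULT now()
);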

Replacing DISTINCT with EXISTS

Andy Brownsword makes a switch:

The DISTINCT clause in a query can help us quickly remove duplicates from our results. Sometimes it can be beneficial to stop and ask why. Why do we need to use the clause, why are we receiving duplicates from our data?

I see this typically due to a JOIN being used where we don’t really want all of those results. This could be a ‘does something exist’ check such as if a customer has ever ordered before. The issue comes when there are multiple rows returned like a frequent customer in this example.

As an alternative to this, Andy shows how you can use the EXISTS clause to find records matching some criterion.
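As a minimal sketch of the switch (hypothetical Customers and Orders tables, not Andy's exact example):

-- The JOIN returns one row per order, so DISTINCT must collapse the duplicates
SELECT DISTINCT c.CustomerID, c.CustomerName
FROM dbo.Customers c
INNER JOIN dbo.Orders o
    ON o.CustomerID = c.CustomerID;

-- EXISTS asks the actual question ("has this customer ever ordered?")
-- and can stop at the first matching row
SELECT c.CustomerID, c.CustomerName
FROM dbo.Customers c
WHERE EXISTS (SELECT 1 FROM dbo.Orders o WHERE o.CustomerID = c.CustomerID);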

What ANSI_NULLS Does

Chad Callihan answers one of the mysteries of the universe:

Have you ever seen ANSI_NULLS being set to ON or OFF in a script or stored procedure? I’ve seen or heard it mentioned a few times recently and thought it would be worth demonstrating an example of what this setting does.

Read on for the answer, including a truth table. But definitely keep ANSI_NULLS set to ON, as several important bits of functionality in SQL Server expect it that way.
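A quick sketch of the behavioral difference:

-- ANSI_NULLS ON (the default, and what SQL Server expects):
-- comparing anything to NULL yields UNKNOWN
SET ANSI_NULLS ON;
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;  -- 'not equal'

-- ANSI_NULLS OFF: NULL = NULL evaluates to true
SET ANSI_NULLS OFF;
SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END;  -- 'equal'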

A Focus on TRY_PARSE()

Andy Brownsword takes a closer look at TRY_PARSE():

In the previous post we looked at the functions TRY_CAST, TRY_CONVERT, and TRY_PARSE and how they compared. I wrapped up and said that my preference for new developments would be to use TRY_PARSE due to the tighter control which it provides us.

As with everything in SQL Server, however, there is no 'best' approach: it depends. I therefore wanted a separate post to look into the specifics with TRY_PARSE and areas where it may work more or less effectively.

Andy hits both the good and the bad of TRY_PARSE() and I recommend checking out this post. It’s great for parsing one row or a small number of rows (call it 5-10K or so), and really bad at parsing large numbers of rows.
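The locale control is the big draw. A quick illustration with a date that's only valid day-first:

-- USING picks the culture; TRY_PARSE returns NULL instead of raising an error
SELECT TRY_PARSE('13/02/2024' AS date USING 'en-GB');  -- 2024-02-13 (day-first)
SELECT TRY_PARSE('13/02/2024' AS date USING 'en-US');  -- NULL (there is no month 13)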

Dynamic SQL in Stored Procedures

Erik Darling gets serious:

I’ve spent a lot of time on this blog telling you different reasons why you should, when you should, and how you should use dynamic SQL.

In this post, I’m going to go into some of the finer points of how I approach dynamic SQL to avoid issues — not performance issues — more procedural issues.

Things you should use to protect yourself from wonky object names, string truncation, object identification, and more.

Read on for Erik’s tips for handling dynamic SQL.
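As a sketch of the sort of safeguards involved (my own example, not Erik's code): QUOTENAME() protects you from wonky object names, and building the string as nvarchar(max) from the start avoids silent truncation:

DECLARE @SchemaName sysname = N'dbo';           -- hypothetical inputs
DECLARE @TableName  sysname = N'Weird]Name';
DECLARE @sql nvarchar(max) = N'';               -- (max) from the start; no truncation surprises

-- QUOTENAME() brackets the identifiers and escapes embedded ] characters
SET @sql = N'SELECT COUNT(*) FROM '
         + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName) + N';';

EXEC sys.sp_executesql @sql;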

Options for Forcing Parallelism

Chad Callihan looks at a pair of options:

Just because something works doesn’t mean it’s the right thing to do. I had that type of challenge to my database morals recently when facing a query that refused to go parallel.

Read on to learn more. Note that neither of these relates to MAXDOP because that doesn’t determine whether a plan will go parallel (though you can use it to prevent a plan from going parallel).
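For reference, two commonly cited ways to nudge the optimizer toward a parallel plan (both undocumented, so treat this as a sketch and use with care; I won't spoil whether these are Chad's pair):

-- Query-level hint, available in SQL Server 2016 SP1 and later
SELECT COUNT(*)
FROM dbo.BigTable
OPTION (USE HINT('ENABLE_PARALLEL_PLAN_PREFERENCE'));

-- Older trace-flag equivalent (requires sysadmin-level rights)
SELECT COUNT(*)
FROM dbo.BigTable
OPTION (QUERYTRACEON 8649);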

Tracking Progress on Deletion

Kenneth Fisher has a way:

Alternate title: How do I tell how far I am on that command?

This little command (frequently with a WHERE clause) is a great way to tell how far along you are on any given command. Specifically, I’ve been running a bunch of DELETEs recently. I’ve got something like 5-50 million rows of data to delete and I’d like to know how far along I am. A common way to do this would be just a simple count.

Click through for Kenneth’s technique, as well as good information from Jeff Moden in the comments.

Not mentioned in this post is that, hopefully, your massive delete operations are running in batches, as trying to delete 5 million or more rows in a single DELETE command is…resource-intensive.
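Here's a minimal sketch of that combination against a hypothetical table: a batched delete in one session and the simple progress count in another:

-- Session 1: delete in batches until nothing qualifies
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.BigTable
    WHERE CreatedDate < '20200101';

    SET @rows = @@ROWCOUNT;
END;

-- Session 2: a simple count shows how much work remains
SELECT COUNT(*)
FROM dbo.BigTable
WHERE CreatedDate < '20200101';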

TRY: CAST, CONVERT, and PARSE

Andy Brownsword tries and tries and tries again:

In the previous post we looked at how ISNUMERIC and TRY_CAST work and why we may want to utilise the latter when building validation for our data. When SQL Server 2012 rolled around it wasn’t only TRY_CAST which was added, we also had TRY_CONVERT and TRY_PARSE introduced too.

Here we’re going to look at how those function and the differences in the outputs which they can provide. We’ll start with the sample data below as the basis for these demonstrations:

Andy focuses mostly on the behavioral differences, where I like TRY_PARSE() a lot, especially because you can control locale for dates and times. When it comes to performance, I've found TRY_CAST() and TRY_CONVERT() to be essentially the same. Maybe there's a tiny difference between the two, but there's no guarantee of it.

TRY_PARSE(), however, calls into the .NET CLR for each execution and will be considerably slower. If you're parsing a few or a few dozen values, you probably won't notice the difference. If you're parsing tens of thousands of values (or more), I assure you that you'll notice the difference.
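To see the three side by side (sample values of my own, not from Andy's post):

SELECT TRY_CAST('abc' AS int);                       -- NULL rather than an error
SELECT TRY_CONVERT(date, '20240213', 112);           -- style 112 = ISO yyyymmdd
SELECT TRY_PARSE('£345.98' AS money USING 'en-GB');  -- culture-aware, but CLR-backed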
