Press "Enter" to skip to content

Category: Syntax

Built-In SQL Server Functions

Tywan Terrell has a listing of various functions built into SQL Server:

SQL Server, starting with 2012, ships with a robust set of functions that can be used to make code perform faster and with fewer lines of code. The functions can be used in ETL processes to provide better error handling. An example of this would be the TRY_PARSE function, which allows you to check whether a value can be converted.

Another example would be the FIRST_VALUE() and LAST_VALUE() functions, which work against a set of data. These functions are very useful when looking for things like month-over-month averages and when doing calculations. The table below contains a list of functions that are supported starting with 2012, along with some examples of how to use them.

He breaks them down into four categories and provides examples.  Functions can bring their own set of problems with query performance, but most of them can be very useful.
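To give a rough idea of the sort of thing Tywan describes, here is a minimal sketch of my own (the table and column names are invented for illustration): TRY_PARSE returns NULL rather than throwing an error when a conversion fails, and FIRST_VALUE()/LAST_VALUE() pull the first and last values of a window.

-- TRY_PARSE returns NULL instead of erroring when the value cannot be converted.
SELECT TRY_PARSE('123.45' AS decimal(10, 2)) AS GoodValue,        -- 123.45
       TRY_PARSE('not a number' AS decimal(10, 2)) AS BadValue;   -- NULL

-- FIRST_VALUE / LAST_VALUE over a window; dbo.MonthlySales is a made-up table.
SELECT SalesYear,
       SalesMonth,
       MonthlyTotal,
       FIRST_VALUE(MonthlyTotal) OVER (PARTITION BY SalesYear ORDER BY SalesMonth) AS FirstMonth,
       LAST_VALUE(MonthlyTotal) OVER (PARTITION BY SalesYear ORDER BY SalesMonth
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS LastMonth
FROM dbo.MonthlySales;

Note the explicit ROWS clause on LAST_VALUE; without it, the default frame ends at the current row and you tend not to get the value you expect.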


Generating Comma-Delimited Strings

Shane O’Neill has a good reason to upgrade to SQL Server 2017, which is not having to deal with FOR XML PATH hacks anymore:

Commas are all the rage nowadays:

There are a fair number of questions nowadays about returning data from a database in a comma-separated string. Sure, the application should probably do that, but hey, database servers are expensive, so why not get some bang for your bucks!

Protip:  I use the colon as a separator because sometimes commas will get stuck in the Ethernet cable.
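For context, the old hack and the SQL Server 2017 replacement look roughly like this (dbo.Products is a stand-in table of mine, not something from Shane's post):

-- Pre-2017 pattern: FOR XML PATH plus STUFF to trim the leading separator.
SELECT STUFF(
           (SELECT ',' + p.ProductName
            FROM dbo.Products AS p
            ORDER BY p.ProductName
            FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
           1, 1, '') AS ProductList;

-- SQL Server 2017: STRING_AGG does the same thing directly.
SELECT STRING_AGG(p.ProductName, ',') WITHIN GROUP (ORDER BY p.ProductName) AS ProductList
FROM dbo.Products AS p;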


Understanding CROSS APPLY

Andy Levy has a T-SQL programming breakthrough:

Finally, this week I had a breakthrough. I was working on updating a bunch of data, but it was breaking on a small subset of that data. In this case, I was attempting to JOIN two tables on fields that should have been INTs, but in a very small number of cases one side was using a comma-delimited string. The user told me that someone else had done these updates in the past and didn’t encounter the problem I was having (so I knew that it was something I was doing “wrong”), but given that it was only a handful of broken updates, she was OK with manually doing the updates (we were scripting it because we were updating potentially tens of thousands of records).

I am not OK with manually fixing this in the future. I wanted to know how the other DBA had done it before. I dug into some history and found CROSS APPLY. My nemesis. I was determined to figure out how to use it this time.

The APPLY operator is extremely powerful in the right set of circumstances.  Andy shows the “classic” use case, but there are a number of other uses for the operator.
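To make the idea concrete, one way to handle the scenario Andy describes is CROSS APPLY over STRING_SPLIT (SQL Server 2016 and later, compatibility level 130+), which turns the comma-delimited column into rows you can join on. This is an illustrative sketch with invented table and column names, not Andy's actual script:

SELECT o.OrderID, c.CustomerID
FROM dbo.Orders AS o
CROSS APPLY STRING_SPLIT(o.CustomerIDList, ',') AS s    -- one row per value in the delimited string
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = TRY_CONVERT(int, s.value);         -- safely convert each piece back to an INT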


Creating Graph Objects In SQL Server

Steve Jones creates a simple graph relationship in SQL Server 2017:

What does all that mean? No idea. Clearly there is JSON that’s returned here and can be deserialized to gather meanings. Is this useful? I think graphs solve a certain set of problems very well, and more efficiently than relational systems. Certainly I could implement a graph structure relationally, but at scale I’m not sure the queries would be as easy to write or run as quickly.

I don’t know if I’d use a graph structure in any of the problems we try to solve in the SQLServerCentral app, but who knows. Maybe we would if we could.

Steve leaves this with more questions than answers, but he does give a very simple repro script if you want to futz about with graphs.
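If you want the general shape before clicking through, here is a minimal sketch of my own (table and column names are made up, and this is not Steve's script). The $node_id pseudo-column is where the JSON-looking values he mentions come from:

CREATE TABLE dbo.Person (PersonID int PRIMARY KEY, PersonName nvarchar(100)) AS NODE;
CREATE TABLE dbo.Likes AS EDGE;

INSERT INTO dbo.Person (PersonID, PersonName) VALUES (1, N'Anna'), (2, N'Bob');

-- An edge row connects two $node_id values.
INSERT INTO dbo.Likes ($from_id, $to_id)
SELECT p1.$node_id, p2.$node_id
FROM dbo.Person AS p1, dbo.Person AS p2
WHERE p1.PersonID = 1 AND p2.PersonID = 2;

-- $node_id comes back as a JSON string describing the node.
SELECT p.$node_id, p.PersonID, p.PersonName FROM dbo.Person AS p;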


UNION ALL Ordering

Paul White shows how UNION ALL concatenation has changed since SQL Server 2008 R2:

The concatenation of two or more data sets is most commonly expressed in T-SQL using the UNION ALL clause. Given that the SQL Server optimizer can often reorder things like joins and aggregates to improve performance, it is quite reasonable to expect that SQL Server would also consider reordering concatenation inputs, where this would provide an advantage. For example, the optimizer could consider the benefits of rewriting A UNION ALL B as B UNION ALL A.

In fact, the SQL Server optimizer does not do this. More precisely, there was some limited support for concatenation input reordering in SQL Server releases up to 2008 R2, but this was removed in SQL Server 2012, and has not resurfaced since.

It’s an interesting article about an edge case.
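The practical upshot, as I read it: if input order matters for performance (say, under a row goal), write the query with the cheaper or more selective input first yourself, because the optimizer will not swap the inputs for you. A contrived sketch with made-up table names:

-- With TOP providing a row goal, the inputs are read in written order:
-- dbo.HotOrders is scanned first, and dbo.ArchiveOrders is only touched if more rows are needed.
SELECT TOP (10) u.OrderID, u.OrderDate
FROM
(
    SELECT OrderID, OrderDate FROM dbo.HotOrders
    UNION ALL
    SELECT OrderID, OrderDate FROM dbo.ArchiveOrders
) AS u;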


Dealing With NULL

Jeff Mlakar has a pair of comparisons for NULL handling, with ISNULL vs COALESCE and CONCAT vs + for concatenation:

We expect this much from ISNULL. However, COALESCE is a little different: it will take the data type with the highest precedence among the values passed and use that for the table definition. This might not always be what you want, because if you pass bits you might get integers, and if you pass a mix of integers and decimals you will get numeric. Be aware in case this isn’t what you wanted.

Read the whole thing.
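A quick way to see the type difference for yourself (the object names here are mine, not Jeff's):

DECLARE @i int = NULL;

-- ISNULL uses the type of its first argument (int), so 1.5 is converted to int (truncated to 1).
-- COALESCE uses data type precedence across its arguments, so the numeric 1.5 survives.
SELECT ISNULL(@i, 1.5)   AS IsNullResult,
       COALESCE(@i, 1.5) AS CoalesceResult
INTO dbo.NullDemo;

EXEC sys.sp_help 'dbo.NullDemo';   -- compare the two column data types
DROP TABLE dbo.NullDemo;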


Graph Data In SQL Server

Terry McCann has a first look at SQL Server 2017’s graph data capabilities:

SQL Graph is a similar concept to what is described above, but built into the core SQL Server engine. This means two new table types, NODE and EDGE, and a few new T-SQL functions, in particular MATCH(). At the time of writing, SQL Graph is only available in SQL Server 2017 CTP 2.0. You can read more and download CTP 2.0 here: https://blogs.technet.microsoft.com/dataplatforminsider/2017/04/19/sql-server-2017-community-technology-preview-2-0-now-available/. Once CTP 2.0 is installed, there is nothing else you need to do to enable the new graph syntax and storage.

There is an example you can download from Microsoft which is a similar setup to the example in the image above. However, I have used some real data shredded from IMDB, the Internet Movie Database. This data is available to download from Kaggle: https://www.kaggle.com/deepmatrix/imdb-5000-movie-dataset

Click through for a video demonstration as well.
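The MATCH() syntax Terry refers to looks roughly like this; the node and edge tables below are hypothetical stand-ins for the kind of IMDB model he builds, not his actual objects:

-- Assuming node tables dbo.Actor and dbo.Movie and an edge table dbo.ActedIn already exist.
SELECT a.ActorName, m.MovieTitle
FROM dbo.Actor AS a, dbo.ActedIn AS ai, dbo.Movie AS m
WHERE MATCH(a-(ai)->m)
  AND m.MovieTitle = N'Avatar';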


SQL Server Graph Database

The SQL Server team announces graph extensions in SQL Server 2017:

Graph extensions are fully integrated in the SQL Server engine. Node and edge tables are just new types of tables in the database. The same storage engine, metadata, query processor, etc., is used to store and query graph data. All security and compliance features are also supported. Other cutting-edge technologies like columnstore, ML using R Services, HA, and more can also be combined with graph capabilities to achieve more. Since graphs are fully integrated in the engine, users can query across their relational and graph data in a single system.

This is interesting.  One concern I have had with graph databases is that graphs are storing the same information as relations but in a manner which requires two distinct constructs (nodes and edges) versus one (relations).  This seems to be a hybrid approach, where the data is stored as a single construct (relations) but additional syntax elements allow you to query the data in a more graph-friendly manner.  I have to wonder how it will perform in a production scenario compared to Neo4j or Giraph.
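That integration also means graph and relational tables can sit side by side in a single query, something like the following sketch (all object names are hypothetical):

-- dbo.Customer and dbo.Product are node tables, dbo.Bought is an edge table,
-- and dbo.ProductPrice is an ordinary relational table.
SELECT c.CustomerName, p.ProductName, pp.ListPrice
FROM dbo.Customer AS c, dbo.Bought AS b, dbo.Product AS p, dbo.ProductPrice AS pp
WHERE MATCH(c-(b)->p)
  AND pp.ProductID = p.ProductID;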


Using OUTPUT To Get Change Counts

Manoj Pandey shows how to use the OUTPUT clause to determine the number of records inserted, updated, or deleted after a DML statement:

–> Question:

How can I get the number of records affected by the MERGE statement (INSERT, UPDATE, and DELETE separately) and store them in variables so I can use them on the application side?

Thanks !
–> My Answer:

You need to use the OUTPUT clause with the MERGE statement.

Click through for a code sample.  The OUTPUT clause also works for non-MERGE statements like INSERT, UPDATE, and DELETE, though the “get changes by type” problem is really limited to the MERGE statement.
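The core pattern looks roughly like this (table and column names are invented, so adjust to taste): capture $action from the MERGE into a table variable and then count by type.

DECLARE @Changes TABLE (ChangeType nvarchar(10));

MERGE dbo.TargetTable AS t
USING dbo.SourceTable AS s
    ON t.ID = s.ID
WHEN MATCHED THEN
    UPDATE SET t.SomeValue = s.SomeValue
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, SomeValue) VALUES (s.ID, s.SomeValue)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE
OUTPUT $action INTO @Changes (ChangeType);

-- One row per action type, with the number of rows affected by each.
SELECT ChangeType, COUNT(*) AS RowsAffected
FROM @Changes
GROUP BY ChangeType;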
