Press "Enter" to skip to content

Category: JSON

Constructing JSON Objects in SQL Server

Hasan Savran checks out a couple of functions new to SQL Server 2022:

JSON functions were introduced to SQL Server in version 2016. Saving and retrieving JSON documents using JSON functions brings many possibilities to SQL Server. It is great to see that Microsoft continues to add to the original set of JSON functions.

Today, I will explain two new JSON functions that are available in SQL Server 2022 and Azure SQL Database.

Read on to learn more about these functions.
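
In case you can’t click through right away: the two functions in question are presumably the JSON_OBJECT and JSON_ARRAY constructors (also covered in the announcement below). A minimal sketch, with invented keys and values:

```sql
-- Construct a JSON object and a nested JSON array directly from SQL values
-- (SQL Server 2022 and Azure SQL Database).
SELECT JSON_OBJECT('name': 'Hasan', 'skills': JSON_ARRAY('SQL', 'C#'));
-- Returns: {"name":"Hasan","skills":["SQL","C#"]}
```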

JSON Enhancements in Azure SQL DB and SQL Server 2022

Umachandar Jayachandran has an announcement:

Today, we are announcing the public preview of JSON enhancements in Azure SQL Database and SQL Server 2022 CTP 2.0. This preview contains an enhancement to the ISJSON function and three new JSON functions – JSON_PATH_EXISTS, JSON_OBJECT and JSON_ARRAY.

Currently, the ISJSON function allows you to test if a string value contains a valid JSON object or array. The new optional json_type_constraint parameter in the ISJSON function can now be used to test conformance of JSON documents to the IETF RFC 8259 specification. This capability allows you to test for strings that contain a JSON value, scalar, object, or array. This functionality is like the IS JSON predicate in the ANSI SQL standard.

The new JSON_PATH_EXISTS function allows you to test for the existence of a specific SQL/JSON path expression in a JSON document. This functionality is like the JSON_EXISTS predicate in the ANSI SQL standard. The new ANSI SQL compatible JSON value constructors – the JSON_OBJECT and JSON_ARRAY functions – allow you to construct a JSON object or array from SQL data.

Even if you don’t store data in JSON format, there are good reasons why you might need to accept data in JSON format (or emit data in JSON format), especially when working with languages like R and Python.
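
As a quick illustration of the two test functions described above (sample documents invented):

```sql
-- ISJSON with the new optional json_type_constraint
SELECT ISJSON('{"a":1}', OBJECT) AS is_object,  -- 1
       ISJSON('[1,2,3]', ARRAY)  AS is_array,   -- 1
       ISJSON('"hi"',    SCALAR) AS is_scalar;  -- 1

-- JSON_PATH_EXISTS: 1 if the path exists in the document, 0 if not
SELECT JSON_PATH_EXISTS('{"orders":[{"id":1}]}', '$.orders[0].id') AS found,
       JSON_PATH_EXISTS('{"orders":[{"id":1}]}', '$.customer')     AS missing;
```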

Building posexplode() in the Serverless SQL Pool

Jovan Popovic rides to the rescue with JSON:

The array cells are pivoted and returned as simple scalar columns. Now you can simply use WHERE or GROUP BY clauses to filter or summarize information by array element values. Another very useful piece of information might be the index of every element (generated as pos column).

Spark enables you to use the posexplode() function on every array cell. The posexplode() function transforms a single array cell into a set of rows, where each row represents one value in the array together with the index of that element. As a result, one row whose array contains three elements is transformed into three rows containing scalar cells. This flattened/normalized representation is much easier to analyze.

Once the array is flattened and normalized, you can easily analyze the data and find, for example, how many people know SQL or Java.

Read on to see how you can implement the equivalent of POSEXPLODE() using OPENJSON() in the Azure Synapse Analytics serverless SQL pool.
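
The core trick is that OPENJSON with the default schema returns each array element alongside its zero-based index in the key column, which plays the role of Spark’s pos. A minimal sketch of the idea (names and data invented, not Jovan’s exact code):

```sql
-- One row per array element, with its position in the array.
SELECT person.[name],
       CONVERT(int, skill.[key]) AS pos,   -- zero-based index within the array
       skill.[value]             AS skill
FROM (VALUES (N'Jo',  N'["SQL","Java"]'),
             (N'Ann', N'["Python"]')) AS person([name], skills)
CROSS APPLY OPENJSON(person.skills) AS skill;
```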

Ordered String Splitting with OPENJSON

Aaron Bertrand splits and cares about sort order:

Last year, I wrote about replacing all your CLR or custom string splitting functions with native calls to STRING_SPLIT. As I work on a project migrating several Microsoft SQL Server instances to Linux, I am encountering one of the roadblocks I mentioned last time: the need to provide an element in the output to indicate the order of the elements in the input string. This means STRING_SPLIT in its current form is out, because 1) it offers no such column; and, 2) the results are not guaranteed to be returned in any specific order. Are there other ways to achieve this functionality at scale and without CLR?

As Koen mentions in the comments, you can now get STRING_SPLIT with an optional ordinal parameter, but Aaron’s response is also valid: not everybody will have access to that today, so it still makes sense to understand the options.
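
The usual workaround, in the spirit of what Aaron describes, is to turn the delimited string into a JSON array and let OPENJSON’s key column supply the ordinal. A sketch, assuming a comma delimiter and an invented sample string:

```sql
DECLARE @list nvarchar(max) = N'bravo,alpha,charlie';

SELECT CONVERT(int, [key]) + 1 AS ordinal,  -- OPENJSON keys are zero-based
       [value]                 AS element
FROM OPENJSON(N'["' + REPLACE(STRING_ESCAPE(@list, 'json'), N',', N'","') + N'"]')
ORDER BY ordinal;
```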

Compressing JSON in SQL Server

Randolph West has a recommendation:

I’ll also pre-emptively note that if this table was simply an append-only archive table, the row size would not really matter. Unfortunately, this table participates in thousands of transactions per day, and as the original developers used Entity Framework and didn’t think much of using NVARCHAR(MAX), the entire row is coming over the wire into the application each time it is queried.

As I’ve written previously about this kind of thing, this is not a good design pattern. Using the VARBINARY(MAX) data type with COMPRESS in the INSERT/UPDATE queries — and DECOMPRESS in the SELECT queries — is a much better design pattern and dramatically reduces the amount of data transferred over the network. Additionally, SQL Server needs significantly less space to store, maintain, and back up this compressed data.

Read on to see the likely benefits of doing this. I’d say that if your main purpose in storing the JSON is just to pass a blob back and forth, then yes, do compress. If you’re frequently shredding these sorts of large documents within SQL Server…well, it’s probably time for a better data model.
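
For reference, a minimal sketch of the pattern Randolph describes (table and column names invented): COMPRESS gzips the value into a VARBINARY(MAX), and DECOMPRESS reverses it on the way out.

```sql
CREATE TABLE dbo.EventPayload
(
    EventId int IDENTITY PRIMARY KEY,
    Payload varbinary(max) NOT NULL  -- gzip-compressed JSON
);

-- Compress on the way in...
INSERT dbo.EventPayload (Payload)
VALUES (COMPRESS(N'{"customer":{"id":42,"region":"West"}}'));

-- ...and decompress on the way out.
SELECT EventId,
       CAST(DECOMPRESS(Payload) AS nvarchar(max)) AS PayloadJson
FROM dbo.EventPayload;
```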

A Cross-Platform Comparison of JSON in Relational Databases

Lukas Eder gives us some information on SQL/JSON and where different relational database management systems are in their JSON journeys:

Building (dogfooding) on top of our own SQL/JSON API has revealed a lot of caveats of the various SQL/JSON implementations across vendors, and to be frank, it’s been a bit of a sobering experience. Despite there now being the ISO/IEC TR 19075-6 standard (mostly driven by Oracle this time), many vendors have already implemented some kind of JSON support, and it looks different in every dialect – to the extent where writing vendor-agnostic SQL/JSON is almost impossible with hand-written native SQL. You’ll need an API like jOOQ or any other abstraction to standardise the different dialects.

Click through for the survey.
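
To give a flavor of the divergence Lukas describes, here is the same tiny task (building {"a":1}) sketched in three dialects; these are approximate, from memory:

```sql
-- SQL Server (before 2022 added JSON_OBJECT): FOR JSON does the construction
SELECT (SELECT 1 AS a FOR JSON PATH, WITHOUT_ARRAY_WRAPPER);

-- PostgreSQL: a vendor-specific constructor function
SELECT json_build_object('a', 1);

-- Oracle: the standard SQL/JSON constructor syntax
SELECT JSON_OBJECT('a' VALUE 1) FROM dual;
```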

JSON Patching in Cosmos DB

Hasan Savran does some JSON modification:

JSON is quite a common format for data these days. NoSQL databases save data in JSON format. Even relational databases like SQL Server give you the option to save or retrieve data in JSON format. When we need to update a property in a JSON document, we normally need to update the whole document. This can be a problem, especially if the document is large. This is where the JSON Patch feature comes in. JSON Patch is a format that lets you update a JSON document partially.

You can do more than update a property: you can add, remove, replace, move, or test a value by using this format. Let’s look at its details by using the following JSON document as the source.

Read on to see how patching works with Cosmos DB in particular.
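
For flavor, here is a generic RFC 6902-style JSON Patch document (paths and values invented); Cosmos DB’s partial document update API uses a very similar list of operations:

```json
[
  { "op": "replace", "path": "/address/city", "value": "Columbus" },
  { "op": "add",     "path": "/tags/-",       "value": "preferred" },
  { "op": "remove",  "path": "/legacyId" },
  { "op": "test",    "path": "/version",      "value": 3 }
]
```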

JSON Basics with SQL Server

Steve Jones takes us through querying straightforward JSON data in SQL Server:

Recently I saw Jason Horner do a presentation on JSON at a user group meeting. I had only lightly looked at JSON before, and I decided to experiment with this.

All in all, I’ve been pretty happy with the syntax for JSON manipulation in T-SQL. I’m not the biggest user of JSON around, but when I’ve needed to slice or build JSON, even when I needed to build it in a certain way to emulate an old application, it has worked for me.
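
If you want to try the basics yourself, the slicing and building Steve mentions boils down to a handful of functions. A minimal sketch with an invented document:

```sql
DECLARE @doc nvarchar(max) = N'{"name":"Anna","skills":["SQL","PowerShell"]}';

SELECT JSON_VALUE(@doc, '$.name')   AS [name],  -- extract a scalar
       JSON_QUERY(@doc, '$.skills') AS skills;  -- extract an object/array as JSON

-- Going the other way: build JSON from a rowset.
SELECT [name] = N'Anna', skill = N'SQL'
FOR JSON PATH;  -- [{"name":"Anna","skill":"SQL"}]
```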

From JSON to SQL Server

Phil Factor has some helper functions for us when working with JSON data:

If you know the structure and contents of a JSON document, then it is possible to turn this into one or more relational tables, but even then I dare you to claim that it is easy to tap in a good OpenJSON SELECT statement to do it. If you don’t know what’s in that JSON file, then you’re faced with sweating over a text editor trying to work it all out. You long to just get the contents into a relational table and take it on from there. Even then, you’ve got several struggles before that table appears in the result pane. You must get the path to the tabular data correct, you have to work out the SQL datatypes, and you need to list the full panoply of keys. Let’s face it: it is a chore. Hopefully, all that is in the past with these helper functions.

Click through for those functions.
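
Phil’s functions do the heavy lifting, but the underlying primitive is OPENJSON with no explicit schema, which reports every key, value, and type and so lets you probe a document you know nothing about. A minimal sketch (document invented):

```sql
DECLARE @unknown nvarchar(max) = N'{"id":7,"name":"widget","dims":{"w":2,"h":3}}';

-- type codes: 0 = null, 1 = string, 2 = number, 3 = true/false, 4 = array, 5 = object
SELECT [key], [value], [type]
FROM OPENJSON(@unknown);
```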
