Press "Enter" to skip to content

Category: JSON

Using JSON Arrays instead of JSON Objects for Serialization

Lukas Eder makes a recommendation:

Why, yes of course! jOOQ is in full control of your SQL statement and knows exactly what column (and data type) is at which position, because you helped jOOQ construct not only the query object model, but also the result structure. So, a much faster index access is possible, compared to the much slower column name access.

The same is true for ordinary result sets, by the way, where jOOQ always calls JDBC’s ResultSet.getString(int), for example, over ResultSet.getString(String). Not only is it faster, but also more reliable. Think about duplicate column names, e.g. when joining two tables that both contain an ID column. While JSON is not opinionated about duplicate object keys, not all JSON parsers support this, let alone Java Map types.

Read on for some insight into when you might want to choose either of the two approaches, and why Lukas went with JSON arrays instead of JSON objects for object serialization in jOOQ.
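To make the positional-versus-named trade-off concrete, here's a minimal sketch; the BOOK and AUTHOR tables and their columns are hypothetical stand-ins, not taken from the post:

SELECT b.ID, a.ID, b.TITLE
FROM BOOK b
JOIN AUTHOR a ON a.ID = b.AUTHOR_ID;

-- Serialized as a JSON object, the duplicate ID key is ambiguous:
--   {"ID": 1, "ID": 7, "TITLE": "SQL for Smarties"}
-- Serialized as a JSON array, position identifies each column:
--   [1, 7, "SQL for Smarties"]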


JSON Lines Support in Microsoft Fabric

Jovan Popovic makes an announcement:

We’re happy to announce the preview of JSON Lines (JSONL) support in the OPENROWSET(BULK) function for Microsoft Fabric Data Warehouse and SQL endpoints for Lakehouses.

The OPENROWSET(BULK) function allows you to query external data stored in the lake using well-known T-SQL syntax. With this update, you can now also query files in JSON Lines format, expanding the range of supported formats and simplifying access to semi-structured data.

Click through to see it in action.
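As a rough sketch of what a query could look like, assuming the preview exposes a FORMAT = 'jsonl' option and using a made-up file path (check the announcement for the exact parameters):

SELECT *
FROM OPENROWSET(
    BULK 'https://myaccount.blob.core.windows.net/data/events.jsonl',
    FORMAT = 'jsonl'
) AS events;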


JSON Indexes in SQL Server 2025 CTP 2.0

Daniel Hutmacher gives it a try:

Starting today, the public preview of SQL Server 2025 is available to download!

One really interesting new feature that got my attention was the addition of JSON indexes. I’m a big fan of everything that makes working with JSON easier, since JSON blobs are so much easier to work with than table variables when you’re moving data from point A to point B. This is especially true when you’re working with complex, relational data.

Daniel lays out some of the limitations of JSON index creation and also some of the performance gains you might see from it. This will be most helpful in data engineering scenarios, shredding JSON from various services, but the normalization purist in me says that if you’re shredding JSON enough to need indexes, it’s probably time to normalize that data.
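As a sketch of the new syntax, assuming the CTP's CREATE JSON INDEX ... FOR ('$.path') form and a hypothetical Orders table (Daniel's post has the authoritative details and limitations):

CREATE TABLE dbo.Orders
(
    OrderId int NOT NULL PRIMARY KEY,
    Payload json NOT NULL
);

-- Index a specific path inside the JSON payload:
CREATE JSON INDEX ix_orders_payload
ON dbo.Orders (Payload)
FOR ('$.customer.id');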


Working with the JSON Data Type in Azure SQL DB

Dennes Torres tries out the JSON data type in Azure SQL Database:

Before this new field type, JSON data was typically stored in varchar(max) columns. There are many features to use with JSON values stored in varchar(max) columns and variables, but storing JSON as regular strings is still limited.

The built-in JSON type expands the possibilities. Using an actual JSON column, it becomes easier to build constraints related to JSON columns, for example.

Dennes also spends a lot of the article covering the JSON_ARRAYAGG() and JSON_OBJECTAGG() functions.
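As a minimal sketch of the constraint point, using a hypothetical Orders table; JSON_VALUE() is the existing scalar-extraction function and works against the new type:

CREATE TABLE dbo.Orders
(
    OrderId int NOT NULL PRIMARY KEY,
    Payload json NOT NULL,
    -- Reject documents that lack a customerId property:
    CONSTRAINT ck_orders_customer
        CHECK (JSON_VALUE(Payload, '$.customerId') IS NOT NULL)
);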


Working with JSON_OBJECTAGG() and JSON_ARRAYAGG() in Azure SQL

Koen Verbeeck tries out a couple of fairly new functions:

I need to construct JSON from data in our database, but I find the existing FOR JSON PATH limited when the data is not located in one single row but rather scattered over multiple rows. Is there another method on how to handle JSON data in SQL Server? Learn how to use the new SQL Server JSON Functions JSON_OBJECTAGG and JSON_ARRAYAGG in this article.

These aren’t available on-premises yet, though given that there’s a new version of SQL Server coming out in 2025, there’s a good chance we’ll have it by then.


The JSON Data Type in Azure SQL DB

Koen Verbeeck takes a peek at what’s coming in SQL Server 2025:

We have data coming into our database from a REST API endpoint. The data is formatted as JSON documents. Is there an efficient way to handle JSON data within the SQL Server ecosystem? In this article, we look at the new JSON data types for SQL Server.

The end result is not particularly clear-cut here: reads are slightly faster, but writes are considerably slower. There aren't any indexes on the data type at this time, so no opportunity for improvement there.


Storing and Parsing JSON in SQL Server

Ed Pollack talks JSON:

Like XML, JSON is an open standard storage format for data, metadata, parameters, or other unstructured or semi-structured data. Because of its heavy usage in applications today, it inevitably will make its way into databases where it will need to be stored, compressed, modified, searched, and retrieved.

Even though a relational database is not the ideal place to store and manage less structured data, application requirements can oftentimes override an “optimal” database design. There is a convenience in having JSON data close to related relational data, and architecting its storage effectively from the start can save significant time and resources in the future.

Read on for plenty of examples and tips. Ideologically, I have no problem parsing JSON to load data into SQL Server. I have no real problem storing data in JSON if the calling application takes that JSON as-is and does not expect the database to modify or shred that JSON. I have no problem taking relational data and creating JSON structures to send out to calling applications. My problem comes when you store the data as JSON but then expect the database to manage data. Treat the JSON blob as atomic and we’re fine; otherwise, I want to make that data relational, as befits a relational database.
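For the shred-on-load pattern I'm fine with, a small OPENJSON() example; the payload and column names are made up:

DECLARE @payload nvarchar(max) = N'[
    {"id": 1, "name": "Ann"},
    {"id": 2, "name": "Ben"}
]';

-- Shred the array into rows and strongly typed columns:
SELECT j.id, j.name
FROM OPENJSON(@payload)
WITH (
    id   int           '$.id',
    name nvarchar(100) '$.name'
) AS j;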


JSON and JSONB Data Types in Postgres

Andrea Gnemmi covers a pair of data types to manage one thing:

We have all encountered the need to store non-structured or semi-structured data in an RDBMS; XML or JSON data in particular. This can be complicated, especially in the past with limited technical options, and even more complicated if we want to query this data efficiently.

Read on to learn more about the differences between JSON and JSONB, as well as mechanisms you can use to query subsets of the data.
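To make the difference tangible, a short sketch with a hypothetical events table: jsonb stores a parsed binary form that supports GIN indexing and containment operators, while json keeps the original text verbatim.

CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

CREATE INDEX ix_events_payload ON events USING GIN (payload);

-- A containment query that can use the GIN index:
SELECT id
FROM events
WHERE payload @> '{"type": "click"}';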


Emulating the FILTER Clause in Oracle

Lukas Eder notes a bit of ANSI SQL:

The following aggregate function computes the number of rows per group which satisfy the FILTER clause:

SELECT
  COUNT(*) FILTER (WHERE BOOK.TITLE LIKE 'A%'),
  COUNT(*) FILTER (WHERE BOOK.TITLE LIKE 'B%'),
  ...
FROM BOOK

This is useful for pivot style queries, where multiple aggregate values are computed in one go. For most basic types of aggregate function, it can be emulated simply by using CASE expressions, because standard aggregate functions ignore NULL values when aggregating. 

Lukas shows how you can also implement this logic using JSON_TRANSFORM(), though I think I’d just as soon stick with COUNT(CASE WHEN BOOK.TITLE LIKE 'A%' THEN 1 END) and try hard not to think about shredding JSON.
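For comparison, the CASE-based emulation mentioned above, written out in full; COUNT() skips the NULL that CASE returns for non-matching rows:

SELECT
    COUNT(CASE WHEN BOOK.TITLE LIKE 'A%' THEN 1 END) AS a_titles,
    COUNT(CASE WHEN BOOK.TITLE LIKE 'B%' THEN 1 END) AS b_titles
FROM BOOK;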
