
Author: Kevin Feasel

Contained Availability Groups in SQL Server 2022

Sean Gallardy is intrigued:

SQL Server 2022 (SQL Dallas) has added what are called “Contained Availability Groups”, which come as a nice surprise to many DBAs and other admins out there (incidental DBAs, small shops, etc.) and *could* remove a large chunk of administrative overhead and challenges. There’s quite a lot to like about them, but like all things in life, there are also some drawbacks one needs to be aware of. While these are a giant and amazing step forward for AGs, much like any other tool, they have their place and time.

Definitely read the pros and cons of this before giving it a try in the 2022 preview.
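If you’re curious what one looks like in T-SQL, here is a minimal sketch of the syntax as I understand it in the preview – the WITH (CONTAINED) option is the new piece, and the database, replica names, and endpoint URLs below are placeholders:

-- Rough sketch: creating a contained availability group (SQL Server 2022 preview syntax).
-- All names and endpoint URLs are placeholders.
CREATE AVAILABILITY GROUP [ContainedAG]
WITH (CONTAINED)
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://sqlnode1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://sqlnode2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC);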


T-SQL Tuesday 150 Round-Up

Kenneth Fisher finishes a roundup:

This month’s subject for T-SQL Tuesday was “Your first technical job.” I have to say I really, really enjoyed reading everyone’s stories, and in a lot of ways it made me feel closer to all of you. Thanks for participating!

And of course, if I forgot someone please let me know so I can get them added!

Click through to see everybody’s entries.


Azure Synapse Link for SQL

Chuck Heinzelman makes an announcement:

Azure Synapse Link for SQL is an automated system for replicating data from your transactional databases (both SQL Server 2022 and Azure SQL Database) into a dedicated SQL pool in Azure Synapse Analytics. The process of setting up a link from your SQL data to Azure Synapse takes just a few clicks and a matter of minutes rather than hours or days for traditional ETL processes. Once configured, your initial data is replicated into the target dedicated SQL pool. After the initial table seeding, changes made to your source data are replicated in near real-time. 

I’ll be interested in seeing how it performs and how efficient that change feed processor is.


Azure SQL DB REST Endpoint EAP

Davide Mauri opens up a preview:

A new, exciting capability for Azure SQL Database has been announced today at //Build: Azure SQL Database has a new built-in feature that allows native integration with external REST endpoints. This means that integration of Azure SQL Database with Azure Functions, Azure Logic Apps, Cognitive Services, Event Hubs, Event Grid, Azure Containers, API Management and in general any REST or even GraphQL endpoint is just one line of code away. The feature is one of the most requested by developers and MVPs and it enormously reduces the friction to integrate Azure services with Azure SQL Database. 

Hmm, on the one hand, I can see this being useful. On the other hand, ASMX in SQL Server 2005, anybody?
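To give a sense of the “one line of code away” claim, here is a hedged sketch of calling it from T-SQL – the procedure name sp_invoke_external_rest_endpoint comes from the preview announcement, while the Azure Function URL and payload are made-up placeholders, so check the current docs for the full parameter list:

-- Hedged sketch: call an external REST endpoint from Azure SQL Database (preview).
-- The URL and payload are placeholders.
DECLARE @response NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
    @url      = N'https://my-function-app.azurewebsites.net/api/ProcessOrder',
    @method   = N'POST',
    @payload  = N'{"orderId": 42}',
    @response = @response OUTPUT;

SELECT @response AS ResponseJson;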


JSON Enhancements in Azure SQL DB and SQL Server 2022

Umachandar Jayachandran has an announcement:

Today, we are announcing the public preview of JSON enhancements in Azure SQL Database and SQL Server 2022 CTP 2.0. This preview contains an enhancement to the ISJSON function and three new JSON functions – JSON_PATH_EXISTS, JSON_OBJECT and JSON_ARRAY. Currently, the ISJSON function allows you to test whether a string value contains a valid JSON object or array. The new optional json_type_constraint parameter in the ISJSON function can now be used to test conformance of JSON documents to the IETF RFC 8259 specification. This capability allows you to test for strings that contain a JSON value, scalar, object, or array, and is like the IS JSON predicate in the ANSI SQL standard. The new JSON_PATH_EXISTS function allows you to test for the existence of a specific SQL/JSON path expression in a JSON document; this is like the JSON_EXISTS predicate in the ANSI SQL standard. The new ANSI SQL-compatible JSON value constructors – the JSON_OBJECT and JSON_ARRAY functions – allow you to construct a JSON object or array from SQL data.

Even if you don’t store data in JSON format, there are good reasons why you might need to accept data in JSON format (or emit data in JSON format), especially when working with languages like R and Python.
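As a quick taste of the new functions described above (function names per the announcement; the sample document is mine):

-- ISJSON with the new json_type_constraint, plus JSON_PATH_EXISTS.
DECLARE @doc NVARCHAR(MAX) = N'{"name": "Widget", "tags": ["red", "small"]}';

SELECT ISJSON(@doc, OBJECT)              AS IsJsonObject,  -- 1: valid JSON object
       ISJSON(N'[1,2,3]', ARRAY)         AS IsJsonArray,   -- 1: valid JSON array
       JSON_PATH_EXISTS(@doc, '$.tags')  AS HasTags,       -- 1: path exists
       JSON_PATH_EXISTS(@doc, '$.price') AS HasPrice;      -- 0: path does not exist

-- Constructing JSON from SQL values with the new constructors.
SELECT JSON_OBJECT('name': 'Widget', 'tags': JSON_ARRAY('red', 'small')) AS Doc;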


The Future Object in Scala

Gulshan Singh visits from the future:

You have units of work that you want to run asynchronously, so you don’t block while they’re running. A future gives you a simple way to run an algorithm concurrently. A future starts running when you create it and returns a result at some point, well, in the future. In Scala, we say that a future returns eventually.

The Future instance is a handle to an eventually available result. You can continue doing other work until the future completes, either successfully or unsuccessfully.

You may also know of Futures as Promises. They’re quite similar to async calls in .NET as well.
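If you haven’t worked with one before, here is a minimal, self-contained Scala sketch of the idea – the slow computation is just a stand-in for real work:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.{Failure, Success}

// The future starts running as soon as it is created.
def slowSquare(n: Int): Future[Int] = Future {
  Thread.sleep(500) // stand-in for a slow computation
  n * n
}

val result: Future[Int] = slowSquare(6)

// Non-blocking: register a callback that fires when the future completes,
// either successfully or with an exception.
result.onComplete {
  case Success(value) => println(s"Callback saw: $value")
  case Failure(ex)    => println(s"Failed: ${ex.getMessage}")
}

// For the demo only, block until the result is available and print it.
println(s"Awaited: ${Await.result(result, 2.seconds)}")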


Low-Latency Flink

Jun Qin and Nico Kruber have started a series on low-latency streaming in Apache Flink. The first two posts of the series are up, starting with the overview:

Latency can refer to different things. LatencyMarkers in Flink measure the time it takes for the markers to travel from each source operator to each downstream operator. As LatencyMarkers bypass user functions in operators, the measured latencies do not reflect the entire end-to-end latency but only a part of it. Flink also supports tracking the state access latency, which measures the response latency when state is read/written. One can also manually measure the time taken by some operators, or get this data with profilers. However, what users usually care about is the end-to-end latency, including the time spent in user-defined functions, in the stream processing framework, and when state is accessed. End-to-end latency is what we will focus on.

Part 2 discusses direct latency optimization techniques:

When interacting with external systems (e.g., RDBMS, object stores, web services) in a Flink job for data enrichment, the latency in getting responses from external systems often dominates the overall latency of the job. With Flink’s Async I/O API (e.g., AsyncDataStream.unorderedWait() or AsyncDataStream.orderedWait()), a single parallel function instance can handle many requests concurrently and receive responses asynchronously. This reduces latencies because the waiting time for responses is amortized over multiple requests.
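To make the Async I/O pattern concrete, here is a rough Scala sketch in the shape the Flink documentation uses – the external lookup is simulated with a plain Future and all names are mine, so treat it as an illustration rather than a production enrichment job:

import java.util.concurrent.TimeUnit
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.async.{AsyncFunction, ResultFuture}

// Hypothetical enrichment function: the "external service" call is simulated.
class AsyncEnrich extends AsyncFunction[String, (String, String)] {
  implicit lazy val ec: ExecutionContext = ExecutionContext.global

  override def asyncInvoke(key: String, resultFuture: ResultFuture[(String, String)]): Unit = {
    val lookup: Future[String] = Future {
      Thread.sleep(50) // stand-in for external-service latency
      s"value-for-$key"
    }
    lookup.onComplete {
      case Success(v)  => resultFuture.complete(Iterable((key, v)))
      case Failure(ex) => resultFuture.completeExceptionally(ex)
    }
  }
}

object AsyncIoSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val keys: DataStream[String] = env.fromElements("a", "b", "c")

    // Up to 100 in-flight requests per parallel instance, each with a 1-second timeout,
    // so the wait for responses is amortized across many concurrent requests.
    val enriched: DataStream[(String, String)] =
      AsyncDataStream.unorderedWait(keys, new AsyncEnrich, 1000, TimeUnit.MILLISECONDS, 100)

    enriched.print()
    env.execute("async-io-sketch")
  }
}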

Stay tuned for more posts in the series.


Minimum Viable Data Mesh in Azure

Paul Andrew was on a podcast:

For Paul, delivering a single data mesh data product on its own is not all that valuable – if you are going to go to the expense of implementing data mesh, you need to be able to satisfy use cases that cross domains. The greater value is in cross-domain interoperability, getting to a data product that wasn’t possible before. And you need to deliver the data platform alongside those first 2-3 data products; otherwise you create a very hard-to-support data asset, not really a data product.

When thinking about minimum viable data mesh, Paul views an approach leveraging DevOps and, more generally, CI/CD – Continuous Integration/Continuous Delivery – as crucial. You need repeatability/reproducibility to really call something a data product.

Click through for the interview as well as Scott Hirleman’s summary.


Allowing for Cross-Database Access via Module Signing

Tom Zika enables cross-database access:

I’ve recently had to revisit this topic and spent a lot of time recalling the details. So I’m writing this blog post mainly as a reminder for myself.

The most helpful parts will be the diagram detailing all the components and how they relate, plus a comprehensive example anyone can follow.

I’m not going to cover Module Signing in general (I’ll leave that to Solomon Rutzky), nor will I cover other ways to achieve cross-DB access (like Cross DB Ownership chaining), because module signing is superior from a security standpoint.

Click through to see a good way of providing access to cross-database resources without explicitly granting rights to users.
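Tom’s post is the comprehensive walkthrough, but for a rough idea of the pattern, here is a condensed sketch – all of the database, certificate, and object names (plus the password and file path) are placeholders:

-- In the source database: create a certificate, a procedure that reads from the
-- other database, and sign the procedure with the certificate.
USE SourceDB;
GO
CREATE CERTIFICATE CrossDbSigningCert
    ENCRYPTION BY PASSWORD = '<use a strong password>'
    WITH SUBJECT = 'Signs modules that read from TargetDB';
GO
CREATE PROCEDURE dbo.GetTargetData
AS
BEGIN
    SELECT SomeColumn FROM TargetDB.dbo.SomeTable;
END;
GO
ADD SIGNATURE TO dbo.GetTargetData
    BY CERTIFICATE CrossDbSigningCert WITH PASSWORD = '<use a strong password>';
GO
BACKUP CERTIFICATE CrossDbSigningCert TO FILE = 'C:\certs\CrossDbSigningCert.cer';
GO

-- In the target database: re-create the certificate from its public key,
-- map a user to it, and grant that user only the rights the procedure needs.
USE TargetDB;
GO
CREATE CERTIFICATE CrossDbSigningCert FROM FILE = 'C:\certs\CrossDbSigningCert.cer';
GO
CREATE USER CrossDbSigningUser FROM CERTIFICATE CrossDbSigningCert;
GO
GRANT SELECT ON dbo.SomeTable TO CrossDbSigningUser;
GO

The signature travels with the procedure’s execution context, so callers get the certificate user’s rights in the target database only while running the signed module – no TRUSTWORTHY setting or broad user grants required.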
