Press "Enter" to skip to content

Author: Kevin Feasel

Managed Instance Link in Preview

Dani Ljepava announces support for Managed Instance link is now in public preview:

As of today, we are pleased to announce that the link feature for Managed Instance is available in open public preview, in all Azure regions worldwide. It can be used with existing or new managed instances, and with SQL Server 2019 Enterprise or Developer edition, including SQL Server 2022 CTP (available through EAP). We have also released tooling support for the link in the form of automated wizards available in SQL Server Management Studio, starting from SSMS v18.11.1.

With the link, databases replicated from SQL Server to Managed Instance are usable as R/O secondary replicas. While the link is in operation, transactions committed on SQL Server (the primary) are instantaneously committed to Managed Instance (the secondary). This provides an exact replica of your SQL Server database on Managed Instance, synced in near real time. The link was built to be resilient: if the network goes down, SQL Server is rebooted or undergoes maintenance, or some other issue occurs, the link will automatically resume replicating where it left off once the issue is resolved.

Support for 2019 is a shrewd idea, given the SQL Server version adoption curve for companies. This isn’t going to replace having a proper availability group for high availability or even (most) disaster recovery options, though, because the link is currently one-way—though Dani does mention eventual support for bi-directional operation with SQL Server 2022.

Power BI Misconceptions

Reza Rad has a video (and article):

Misconception 1: Power BI is not an enterprise reporting tool, it is only good for self-service.

This is a misconception, and it exists because many people who have heard of Power BI are not aware of its data modeling engine, its data transformation capabilities, and its other main components. Maybe they just know Power BI as a visualization tool.

Power BI came to the market with the promise of bringing data analysis to everyone through extraordinary self-service abilities using Power BI Desktop and the Power BI Service. However, Power BI itself is built on top of the Microsoft enterprise data analysis toolset.

Read on for more information about this, as well as four other misconceptions.

Running MSDTC on Linux Containers in Kubernetes

Amit Khandelwal reminds us that MSDTC exists:

It’s been a while since I’ve had the opportunity to write and share a blog post about SQL Server containers and Linux. Today, I’d like to show you how to set up and use MSDTC (Microsoft Distributed Transaction Coordinator) to execute distributed transactions for SQL Server containers running on a Kubernetes platform.

Please see the following documentation for more information on DTC and SQL Server on Linux: How to configure MSDTC on Linux – SQL Server | Microsoft Docs.

I kid (sort of) but it is good to see as much parity between the Windows and Linux versions of SQL Server as possible.

Dynamic DAGs with Apache Airflow

Bhavya Garg explains how we can create dynamic directed acyclic graphs in Apache Airflow:

Airflow dynamic DAGs can save you a ton of time. As you know, Apache Airflow is written in Python, and DAGs are created via Python scripts. That makes it very flexible and powerful (even complex sometimes). By leveraging Python, you can create DAGs dynamically based on variables, connections, a typical pattern, etc. This very nice way of generating DAGs comes at the price of higher complexity and subtle, tricky things that you must know.

Read on for an example.
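
To make the pattern concrete, here is a minimal sketch of one common way to generate DAGs dynamically: loop over a configuration list and register each generated DAG in the module's global namespace so the scheduler can discover it. The source names and schedule below are made-up placeholders, not Bhavya's example.

```python
# A sketch of one common dynamic-DAG pattern: generate a DAG per entry in a
# config list and register each one in globals() so the scheduler can find it.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

SOURCES = ["sales", "inventory", "customers"]   # hypothetical config driving DAG creation

def build_dag(source: str) -> DAG:
    """Build one DAG per source system from the same template."""
    with DAG(
        dag_id=f"load_{source}",
        start_date=datetime(2022, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(
            task_id=f"extract_{source}",
            python_callable=lambda s=source: print(f"extracting {s}"),
        )
    return dag

# Each generated DAG must end up as a module-level variable to be discovered.
for src in SOURCES:
    globals()[f"load_{src}"] = build_dag(src)
```

The flexibility mentioned above is exactly this: the loop could just as easily read from an Airflow Variable, a connection list, or a YAML file.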

Using the Q&A Visual in Power BI

Gauri Mahajan tries out the Q&A visual:

As the options for data hosting, data processing, and data management keep growing, the options for data consumption have been growing at the same pace. Traditionally, applications and reports were the most common means of consuming data; as data consumption matured over time, chatbots, analytics engines, machine learning and artificial intelligence tools, and many others joined them. The common mechanisms for exploring data have been database query languages, reports prepared by report designers, and self-service data exploration by power users. With the evolution of capabilities like machine learning, artificial intelligence, and natural language processing, some of the popular modern methods of data exploration include natural language-based data analysis, voice-enabled data exploration using smart devices, computer vision-based data analysis, etc. While many of these methods are highly sophisticated and require user training, natural language-based data exploration is one of the most popular, and it is offered out of the box by many reporting tools, including Tableau and Power BI.

The Q&A visual is a really cool concept which works a surprising amount of the time. The problem is that when it doesn’t work, it feels like pushing a string: no matter what you do, it just doesn’t quite do what you need it to.

A Conceptual Discussion of Active Learning

Kevin Jacobs teaches us to learn:

Active Learning is a method in which data is annotated in a smart way. With data annotation, you would normally get to see a randomly selected item which you need to label. This, however, can lead to a lot of repetition of similar items which you have to label. This is a waste of time. A better way would be to use Active Learning. For Active Learning, a batch of random items is selected first. Then, a lightweight classifier is used for evaluating the previously annotated data.

Basically, run your prediction mechanism, find the things about which the mechanism is least certain, and figure those out. Doing this reduces ambiguity and quickly leads to a better model.
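
As a rough illustration of that loop, here is a minimal uncertainty-sampling sketch, assuming scikit-learn and pre-computed feature matrices; the classifier choice and batch size are arbitrary stand-ins, not Kevin Jacobs's implementation.

```python
# A minimal uncertainty-sampling sketch: train a lightweight classifier on the
# items annotated so far, then surface the pool items it is least certain about.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pick_items_to_label(X_labeled, y_labeled, X_pool, batch_size=10):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_labeled, y_labeled)

    proba = clf.predict_proba(X_pool)        # class probabilities per pool item
    uncertainty = 1.0 - proba.max(axis=1)    # low top probability = high uncertainty
    # Indices of the most ambiguous items; send these to the annotator next.
    return np.argsort(uncertainty)[-batch_size:]
```

Swap in whatever model and uncertainty measure (entropy, margin of the top two classes, etc.) fit your problem; the shape of the loop stays the same.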

Dealing with Shift Times

Kenneth Fisher knows what time it is:

One of the more interesting jobs I’ve had over the years was for a company that created emergency room software. It was pretty cool software and I learned a lot, both about writing queries in SQL Server and about how a software company can be run. One of the more interesting things in the various reports we created was the concept of shift calculations. In other words, what happened during a given shift.

I’ve had to do something similar (though it was for nurse scheduling rather than emergency rooms). Things get really tricky when you start dealing with 12-hour and 16-hour shifts, tracking overtime, and the like.
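
For a sense of the basic bucketing problem, here is a small sketch that assigns a timestamp to a shift, assuming three fixed eight-hour shifts; the shift names and boundaries are made up, and real schedules with 12- or 16-hour shifts and overtime rules need considerably more logic.

```python
# A hedged sketch: bucket a timestamp into a shift, including a shift that
# wraps past midnight. Shift definitions here are hypothetical.
from datetime import datetime, time

SHIFTS = [
    ("day",   time(7, 0),  time(15, 0)),
    ("swing", time(15, 0), time(23, 0)),
    ("night", time(23, 0), time(7, 0)),   # crosses midnight
]

def shift_for(ts: datetime) -> str:
    """Return the name of the shift a timestamp falls into."""
    t = ts.time()
    for name, start, end in SHIFTS:
        if start < end:
            if start <= t < end:
                return name
        else:  # shift wraps past midnight
            if t >= start or t < end:
                return name
    raise ValueError("no shift matched")   # unreachable with full 24-hour coverage

print(shift_for(datetime(2022, 3, 15, 2, 30)))   # -> "night"
```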

From Cosmos DB to the Serverless SQL Pool

Jovan Popovic shows off Synapse Link:

The serverless SQL pools enable you to implement near-real-time analytics solutions on top of your Cosmos DB data. Serverless SQL pools with Synapse Link provide a cost-effective solution for analyzing NoSQL data stored in Cosmos DB without affecting or consuming the request units of your Cosmos DB transactional store. You can run heavy analytics on the serverless SQL pools that will not affect the workload or price of the main Cosmos DB transactional store. The serverless SQL pools let you use the T-SQL query language for analytics, which enables you to connect the reporting and analytics tools (such as Power BI and Analysis Services) from the large ecosystem that works with SQL Server or Azure SQL Database.

When you are integrating the serverless SQL pools into your solution, you need to apply some best practices. There are general best practices for the serverless SQL pools in the Synapse Analytics workspace, but some of those settings are not applicable to the Cosmos DB scenario, so you will probably use only a subset of them. In this post, you will find only the best practices that you should apply in the Cosmos DB solution, plus some additional hints that could help you optimize your solution.

Click through to see how the process works and a few recommendations.
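
As a rough sketch of what this looks like from client code, here is a hedged Python example using pyodbc and the serverless SQL pool's OPENROWSET support for Cosmos DB. The workspace name, database, container, columns, and credentials are all placeholders, and the driver and authentication details will vary by environment; this is not Jovan's example.

```python
# A hedged sketch: query the Cosmos DB analytical store through a Synapse
# serverless SQL pool via OPENROWSET. All names and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myworkspace-ondemand.sql.azuresynapse.net;"   # hypothetical workspace
    "Database=master;"
    "UID=sqladminuser;PWD=<password>;"                    # or Azure AD auth instead
)

query = """
SELECT TOP 10 c.id, c.orderTotal
FROM OPENROWSET(
        'CosmosDB',
        'Account=myaccount;Database=Sales;Key=<account-key>',  -- placeholders
        Orders
     ) WITH (id VARCHAR(50), orderTotal FLOAT) AS c;
"""

for row in conn.execute(query):
    print(row.id, row.orderTotal)
```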

PSProjectStatus

Jeffery Hicks wants to check Git status:

I write a lot of PowerShell modules. And probably like you, I am working on more than one project at a time. I was finding it difficult to keep track of what I was working on and what I might be neglecting. So I turned to PowerShell and created a tool that I use to keep on top of my projects. The PowerShell module is called PSProjectStatus and you can install it from the PowerShell Gallery. You can find the project on GitHub, but I thought I’d provide an introduction here.

Read on to see how it works.
