Press "Enter" to skip to content

Month: November 2024

Apache Kafka 3.9 Now Available

Colin McCabe announces Apache Kafka 3.9:

We are proud to announce the release of Apache Kafka 3.9.0. This is a major release, the final one in the 3.x line. This will also be the final major release to feature the deprecated Apache ZooKeeper® mode. Starting in 4.0 and later, Kafka will always run without ZooKeeper.

That’s a pretty big change, but there are also quite a few other significant changes here to check out.


Configuring Azure Database Watcher

Rod Edwards configures Azure Database Watcher to watch databases in Azure:

First off, at the time of writing, this is still in Preview, and is only for Azure SQL PaaS offerings, namely Azure SQL DB and SQL Managed Instance, so you’re out of luck if you’re using SQL on VM. Expect this to be added at some point in the future; it’s number 2 on the published roadmap.

Preview or GA…the long and short of it is that it allows collection of performance AND config data into a central datastore for ALL of your SQL MI and Azure DB estate. With all of the data in one place, dashboards can then be connected to it for easier estate-wide visualisations.

Read on for a step-by-step guide on configuring it. But also pay attention to Rod’s note near the end that troubleshooting setup is a pain—there aren’t many useful logs that show exactly why it isn’t working.


Reviewing Kusto Query History in Microsoft Fabric

Dennes Torres looks over prior commands:

We can consume a Kusto database in Fabric from many different places: notebooks, semantic models, real-time dashboards, and more. Kusto registers all queries sent by the consumers in the query history.

Sometimes, either for logging purposes or to analyze and fix a bug, we need to identify the queries the database is receiving and executing.

Read on to see what you can do with query history in Kusto.
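
If you want a quick look at recent activity yourself, here is a minimal sketch (not from Dennes’s article) that pulls query history with the azure-kusto-data Python package and the standard .show queries management command. The cluster URI and database name are placeholders, and this assumes you already have permission to view query history.

```python
# Minimal sketch: fetch recent query history from a Kusto database.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
from azure.kusto.data.helpers import dataframe_from_result_table

cluster = "https://<your-cluster-or-fabric-query-uri>"  # placeholder URI
database = "MyKustoDatabase"                            # placeholder database name

# Authenticate however you normally do; Azure CLI authentication is one option.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# ".show queries" is a management command, so use execute_mgmt rather than execute_query.
response = client.execute_mgmt(database, ".show queries")

# Recent queries, including text, user, timing, and state, come back in the primary result.
df = dataframe_from_result_table(response.primary_results[0])
print(df.head())
```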


Analyzing Azure Network Security Group Flow Logs

Reitse Eskens says the bits must flow:

I had an interesting question lately where I was requested to show all the network traffic within an Azure landing zone. Specifically source and target IP, protocol and port. From the aspect of Zero Trust, it’s important to show both successful and failed connections in your network. To be able to answer this question, I had prepared myself by enabling the so-called flow logs on the Network Security Groups (NSGs). NSGs are used to control traffic on the IP and port level between resources. There’s no packet inspection, just a check if IP 1 is allowed to connect to IP 2 on port 3. In this specific case, it also had to do with a migration to Azure Firewall where all the NSG rules had to be validated.

But getting the data is one thing; finding out what is in it is something else. In this blog post, I’ll drag you through the steps I took to get the raw JSON data into a SQL table and analyse the data.

Read on for the process and quite a bit of T-SQL code.
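
For comparison, here is a rough Python sketch of the flattening step, assuming the version-2 NSG flow log schema and a log file downloaded locally; the input path and output file are placeholders, and Reitse’s post does this work in T-SQL instead.

```python
# Rough sketch: flatten NSG flow log JSON into per-connection rows.
import csv
import json

FIELDS = ["timestamp", "src_ip", "dst_ip", "src_port", "dst_port",
          "protocol", "direction", "decision", "rule"]

def parse_flow_log(path):
    with open(path) as f:
        doc = json.load(f)

    rows = []
    for record in doc.get("records", []):
        for rule_group in record["properties"]["flows"]:
            rule = rule_group["rule"]
            for flow in rule_group["flows"]:
                for tup in flow["flowTuples"]:
                    # Each tuple is a comma-separated string; the first eight fields are
                    # timestamp, src IP, dst IP, src port, dst port, protocol (T/U),
                    # direction (I/O), and decision (A/D).
                    parts = tup.split(",")
                    rows.append(dict(zip(FIELDS, parts[:8] + [rule])))
    return rows

if __name__ == "__main__":
    rows = parse_flow_log("PT1H.json")  # placeholder path to a downloaded flow log
    with open("flow_tuples.csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```

From there, the CSV (or the rows themselves) can be bulk loaded into a SQL table for the kind of estate-wide analysis Reitse describes.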


Verification of PostgreSQL Checksums with WAL-G

Marat Bogatyrev talks checksums with us:

A key aspect of maintaining backup integrity is understanding data checksums. Without proper checksum validation, detecting data corruption becomes virtually impossible. Therefore, we will start with The Importance of Data Checksums.

Read on to learn more about how PostgreSQL, in particular, deals with checksums. SQL Server has similar functionality for pages and backup files, and it’s worth knowing about and enabling checksum capabilities in both database platforms.
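
As a quick illustration of the PostgreSQL side, here is a small psycopg2 sketch (not from Marat’s post) that checks whether data checksums are enabled and whether any checksum failures have been recorded. The connection string is a placeholder, and the pg_stat_database checksum columns require PostgreSQL 12 or later.

```python
# Small sketch: confirm data checksums are on and look for recorded failures.
import psycopg2

conn = psycopg2.connect("dbname=postgres user=postgres host=localhost")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("SHOW data_checksums;")
    print("data_checksums =", cur.fetchone()[0])

    cur.execute("""
        SELECT datname, checksum_failures, checksum_last_failure
        FROM pg_stat_database
        WHERE checksum_failures > 0;
    """)
    for datname, failures, last_failure in cur.fetchall():
        print(f"{datname}: {failures} checksum failures (last at {last_failure})")
conn.close()
```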


Charting Microsoft Fabric Workspace Activity

Sandeep Pawar creates a chart:

Semantic Link Labs v0.8.3 has a list_activity_events method to get the list of all activities in your Fabric tenant. It uses the same Power BI Admin - Get Activity Events API, but this API now also includes Fabric activities. Note that this is an Admin API, so you need to be a Fabric administrator. Check the API details.

To answer the above question, I will use admin.list_activity_events, loop over the last 30 days, and plot the results by Fabric item type in my personal tenant:

Click through for that code. Though if you’re going to do something similar in your environment, I recommend not using a line chart for this visual, as line charts indicate a flow over time and this is definitely point-in-time categorical data. A bar chart or dot plot would be better in that case.
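
To give a flavor of the approach, here is a rough sketch along those lines using Semantic Link Labs, pandas, and matplotlib. The start_time/end_time parameter names and the “Item Type” column are my assumptions, so check the current Semantic Link Labs signature and Sandeep’s actual code.

```python
# Rough sketch: activity counts by Fabric item type over the last 30 days, as a bar chart.
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt
import sempy_labs.admin as admin

frames = []
today = dt.date.today()
for offset in range(30):
    day = today - dt.timedelta(days=offset)
    # The underlying Activity Events API only accepts a window within a single day,
    # hence the day-by-day loop.
    frames.append(
        admin.list_activity_events(
            start_time=f"{day}T00:00:00",   # parameter names are an assumption
            end_time=f"{day}T23:59:59",
        )
    )

activities = pd.concat(frames, ignore_index=True)

# Aggregate by item type and draw a bar chart (categorical counts, not a time series).
counts = activities["Item Type"].value_counts()  # column name is an assumption
counts.plot(kind="bar")
plt.ylabel("Activity count")
plt.title("Fabric activity by item type, last 30 days")
plt.tight_layout()
plt.show()
```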


Partitioning in Oracle versus PostgreSQL

Umair Shahid continues a series on migrating from Oracle to PostgreSQL:

Table partitioning is a database design technique that divides a large table into smaller, more manageable sub-tables called partitions. Each partition holds a subset of the data based on specific criteria, such as date ranges, categories, or hash values. While partitioning makes it seem like you’re working with a single large table, behind the scenes, queries and operations are distributed across multiple partitions.

Read on to understand the differences between the two platforms. In this case, there are some pretty significant differences.
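
As a tiny taste of the PostgreSQL side, here is a minimal example of declarative range partitioning, run from Python with psycopg2; the table, columns, and connection string are all invented for illustration.

```python
# Minimal sketch: PostgreSQL declarative range partitioning by month.
import psycopg2

DDL = """
CREATE TABLE measurements (
    recorded_at  timestamptz NOT NULL,
    device_id    int         NOT NULL,
    reading      numeric
) PARTITION BY RANGE (recorded_at);

CREATE TABLE measurements_2024_11 PARTITION OF measurements
    FOR VALUES FROM ('2024-11-01') TO ('2024-12-01');

CREATE TABLE measurements_2024_12 PARTITION OF measurements
    FOR VALUES FROM ('2024-12-01') TO ('2025-01-01');
"""

conn = psycopg2.connect("dbname=demo user=postgres host=localhost")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(DDL)
    # Rows route to the matching partition automatically on insert.
    cur.execute(
        "INSERT INTO measurements VALUES (%s, %s, %s)",
        ("2024-11-19 10:00:00+00", 42, 3.14),
    )
conn.close()
```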


Reasons to Migrate from Synapse to Fabric

James Serra has a list:

Many customers ask me about the advantages of moving from Azure Synapse Analytics to Microsoft Fabric. Here’s a breakdown of the standout features that make Fabric an appealing choice:

  • Unified Environment for All Users
    Fabric serves everyone—from report writers and citizen developers to IT engineers—unlike Synapse, which primarily targets IT professionals.
  • Hands-Free Optimization
    Fabric is auto-optimized and fully integrated, allowing most features to perform well without requiring technical adjustments.

I suppose that James is too politic to give what I’d consider the top reason: because there have actually been meaningful updates to Microsoft Fabric in the past year. I’m not sure you can really say the same thing about Azure Synapse Analytics.

The tricky part about this, however, is that, to my knowledge at least, there’s no clean way to migrate dedicated SQL pools.
