Press "Enter" to skip to content

Author: Kevin Feasel

Software-as-a-Service: Single DB or Per-Client DB

Greg Low makes a choice:

On-premises applications are mostly single-tenant: they support a single organization. We do occasionally see multi-tenant databases, which hold the same types of information for many organizations.

But what about SaaS-based applications? By default, you’ll want to store data for many client organizations. Should you create one large database that holds data for everyone? Should you create a separate database for each client? Or should you create something in between?

As with most things in computing, there is no one simple answer to this. Here are the main decision points that I look at:

Click through for Greg’s thoughts on the matter. Most of these factors are also relevant for on-premises SQL Server installations, not just Azure SQL DB/Managed Instance.
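
To make the trade-off concrete, here is a minimal sketch of the two extremes (table, column, and database names are hypothetical). In a single shared database, every tenant-scoped table carries a client key that every query must filter on; in the per-client model, the schema is identical but each organization gets its own physically isolated database:

-- Shared database: one schema, every row tagged with its tenant
CREATE TABLE dbo.Orders
(
    OrderID   INT IDENTITY(1,1) PRIMARY KEY,
    TenantID  INT NOT NULL,   -- client key on every tenant-scoped table
    OrderDate DATE NOT NULL,
    Amount    DECIMAL(19, 4) NOT NULL
);

-- Every query must filter on the tenant key
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
WHERE TenantID = 42;

-- Per-client databases: the same schema deployed once per organization
-- CREATE DATABASE ClientA;
-- CREATE DATABASE ClientB;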


Motion Detecting and Alerting with Kafka and ksqlDB

Wei Rui and Yinsidi Jiao take us through a scenario:

Managing IoT (Internet of Things) devices and their produced data or events can be a challenge. On one hand, IoT devices usually generate massive amounts of data. On the other hand, IoT hardware has many limitations when it comes to processing that data, such as cost, physical size, efficiency, and availability. You need a back-end system with high scalability and availability to process the growing volume of data. Things become more challenging when dealing with numerous devices and events in real time, while meeting the required availability, latency, scalability, and agility for different use cases and scenarios.

For Confluent Hackathon 2022, we built an end-to-end motion detection and alerting system, which currently acts as a home surveillance system, on top of Apache Kafka® and ksqlDB to demonstrate how easy it is to build IoT solutions by leveraging Confluent Cloud.

Read on to see how it works.
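
As a rough sketch of what the ksqlDB side of such a system can look like (stream, topic, and column names here are illustrative, not taken from the project), you register a stream over the raw device topic and derive a continuously updating alert stream from it:

-- Register a stream over the raw IoT topic
CREATE STREAM motion_events (
    device_id   VARCHAR KEY,
    detected_at BIGINT,
    confidence  DOUBLE
) WITH (
    KAFKA_TOPIC  = 'motion-events',
    VALUE_FORMAT = 'JSON'
);

-- Derive an alert stream: only high-confidence detections pass through
CREATE STREAM motion_alerts AS
    SELECT device_id, detected_at, confidence
    FROM motion_events
    WHERE confidence > 0.8
    EMIT CHANGES;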


An Introduction to Event Sourcing

Aasif Ali provides a high-level introduction to the concept of event sourcing:

Event sourcing is a way to store data as events in an append-only log. Unlike the traditional approach, which keeps only the latest version of the entity state, this method stores the state of a database object as a sequence of events: essentially, a new event is recorded each time the object changes state, from the beginning of the object’s existence. An event can be anything generated by a user: a mouse click, a key press on a keyboard, and so on. It is a great way to atomically update state and publish events. Not only can we query these events, but we can also use the event log to reconstruct past states, and as a foundation to automatically adjust the state to cope with retroactive changes.

Events are immutable: they cannot be changed. This well-known rule is often the first defining characteristic of event stores and event sourcing.

Read on to see how this concept works and how products like Apache Kafka make event sourcing viable.
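
A relational sketch makes the idea tangible (a hypothetical schema, not taken from the article): the log is append-only, and current state is derived by replaying events rather than stored as a mutable row:

-- Append-only event log: rows are only ever inserted, never updated
CREATE TABLE AccountEvents
(
    EventID    BIGINT IDENTITY(1,1) PRIMARY KEY,  -- gives a total ordering
    AccountID  INT NOT NULL,
    EventType  VARCHAR(50) NOT NULL,              -- e.g. 'Deposited', 'Withdrew'
    Amount     DECIMAL(19, 4) NOT NULL,
    OccurredAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);

-- Current state is a fold over the events; replaying a prefix of the log
-- reconstructs any past state
SELECT AccountID,
       SUM(CASE EventType WHEN 'Deposited' THEN Amount ELSE -Amount END) AS Balance
FROM AccountEvents
GROUP BY AccountID;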


Apache Flink 1.16 Released

Godfrey He makes an announcement:

To reduce the cost of migrating Hive to Flink, we introduce HiveServer2 Endpoint and Hive Syntax Improvements in this version:

The HiveServer2 Endpoint allows users to interact with the SQL Gateway over Hive JDBC/Beeline and brings Flink into the Hive ecosystem (DBeaver, Apache Superset, Apache DolphinScheduler, and Apache Zeppelin). When users connect to the HiveServer2 endpoint, the SQL Gateway registers the Hive Catalog, switches to the Hive Dialect, and uses batch execution mode to execute jobs. With these steps, users can have the same experience as with HiveServer2.

Read on for a pretty large hit list.
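
In practice, that means an existing Hive client can point at the gateway with little more than a new connection string. A minimal sketch, with placeholder host, port, and table names:

-- Connect an existing Hive client to the SQL Gateway's HiveServer2 endpoint
-- (host and port are placeholders):
--   beeline -u "jdbc:hive2://gateway-host:10000/default"
--
-- The gateway registers the Hive catalog, switches to the Hive dialect, and
-- runs in batch execution mode, so a typical Hive batch query works as-is:
SELECT department, COUNT(*) AS headcount
FROM employees
GROUP BY department;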


OpenSSL Patch Incoming

Steven Vaughan-Nichols has bad news for us:

So we should all be concerned that Mark Cox, a Red Hat Distinguished Software Engineer and the Apache Software Foundation (ASF)’s VP of Security, this week tweeted, “OpenSSL 3.0.7 update to fix Critical CVE out next Tuesday 1300-1700UTC.”

How bad is “Critical”? According to OpenSSL, an issue of critical severity affects common configurations and is also likely exploitable. 

There isn’t enough detail yet to know exactly what the issue is. That detail is forthcoming, however, so it’s time to get those patch windows ready.


Choosing between Synapse Spark Notebooks and Job Definitions

Arun Sethia and Arshad Ali explain when you might use a Spark notebook versus a job definition:

Synapse Spark Notebook is a web-based (HTTP/HTTPS) interactive interface for creating files that contain live code, narrative text, and visualized output, with rich libraries for Spark-based applications. Data engineers can collaborate on, schedule, run, and test their Spark application code using notebooks. Notebooks are a good place to validate ideas and run quick experiments to get insight into the data. You can also integrate a Synapse notebook into a Synapse pipeline.

The notebook allows you to combine programming code with markdown text and perform simple visualizations (using Synapse Notebook chart options and open-source libraries). In addition, running code supplies immediate feedback, output, and progress tracking within the notebook.

Click through for the comparison.


Useful Add-On Packages for Shiny

Mandy Norrbo has a list:

There are a growing number of Shiny users across the world, and with many users comes an increasing number of open-source “add-on” packages that extend the functionality of Shiny, both in terms of the front end and the back end of an app.

This blog will highlight 5 UI add-on packages that can massively improve your user experience and also just add a bit of flair to your app. Each package will have an associated example app (some more inspired than others) that I’ve created where you can actually see the UI component in action. All code for example apps can be found on our GitHub.

Click through for the list, as well as examples of how they work.


Debugging Stream-Table Joins

Philip Schmitt dives in to a problem:

Joining two topics to aggregate the data is one of the fundamental operations in stream processing. But that’s not to say that it’s simple. Let me show you what can go wrong! This article chronicles my journey to join two Apache Kafka topics, stumbling into and out of various pitfalls. I’m going to show you…

– How to debug co-partitioning with kcat (formerly kafkacat)

– How to avoid the number one pitfall of using kcat

– Stream–table join semantics in action

There’s a lot of useful information in this post.
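
For context on the semantics the post digs into, here is a generic ksqlDB stream-table join (all names are illustrative). For the join to produce correct results, the stream and the table must be co-partitioned, meaning keyed the same way across the same number of partitions:

-- Enrich a stream of orders with the current state of a customers table;
-- both sides must be co-partitioned on customer_id
CREATE STREAM enriched_orders AS
    SELECT o.order_id,
           o.amount,
           c.customer_name,
           c.region
    FROM orders_stream o
    JOIN customers_table c
        ON o.customer_id = c.customer_id
    EMIT CHANGES;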


Feature Branching for Database Projects

Olivier Van Steenlandt describes one branching strategy and applies it to database development:

Depending on how you have defined your branching strategy, you will start development differently. Below I’m defining a few different branching strategies:

1. No branching

2. Branching/environment

3. Branching/feature

4. …

In the past, I have used all of the above. I have to tell you that the branching/feature strategy gives me the most flexibility for database development. Why? Let’s dive into this method:

Read on to learn more.


Search Optimization in Snowflake

Arun Sirpal doesn’t have time to create indexes:

I will use a clone of the table to compare against when search optimisation is on. I will make sure no caching is on, which could affect the test.
I activate the feature via:

ALTER TABLE data_staging ADD SEARCH OPTIMIZATION;

This takes time! You can run something like the query below to confirm 100% completion. It takes a while because there is a maintenance service running in the background that is responsible for creating and maintaining the search access path.

Click through to see what happens and the kinds of performance gains Arun realized.
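
The excerpt doesn’t include the progress check itself; one way to watch the background service catch up (a sketch, assuming the data_staging table from the ALTER statement above) is the search_optimization_progress column in the SHOW TABLES output:

-- Progress reaches 100 once the search access path is fully built
SHOW TABLES LIKE 'data_staging';

-- Pull the relevant columns out of the SHOW output
SELECT "name", "search_optimization", "search_optimization_progress"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));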
