Press "Enter" to skip to content

Category: Hadoop

Contrasting Spark and Flink for Streaming Use Cases

Deepthi Mohan and Karthi Thyagarajan contrast two products:

Apache Flink and Apache Spark are both open-source, distributed data processing frameworks used widely for big data processing and analytics. Spark is known for its ease of use, high-level APIs, and the ability to process large amounts of data. Flink shines in its ability to handle processing of data streams in real-time and low-latency stateful computations. Both support a variety of programming languages, scalable solutions for handling large amounts of data, and a wide range of connectors. Historically, Spark started out as a batch-first framework and Flink began as a streaming-first framework.

In this post, we share a comparative study of streaming patterns that are commonly used to build stream processing applications, how they can be solved using Spark (primarily Spark Structured Streaming) and Flink, and the minor variations in their approach. Examples cover code snippets in Python and SQL for both frameworks across three major themes: data preparation, data processing, and data enrichment. If you are a Spark user looking to solve your stream processing use cases using Flink, this post is for you. We do not intend to cover the choice of technology between Spark and Flink because it’s important to evaluate both frameworks for your specific workload and how the choice fits in your architecture; rather, this post highlights key differences for use cases that both these technologies are commonly considered for.

Read on for an analysis of the two products.
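
To give a quick taste of how similar the two APIs can feel, here is a rough sketch, not taken from the post, of the same windowed count over a Kafka topic in Spark Structured Streaming and then in Flink’s Table API. The broker address, topic, and schema are invented, and each job assumes the relevant Kafka connector package is available.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

# Requires the spark-sql-kafka package on the classpath; broker and topic are invented.
spark = SparkSession.builder.appName("orders-stream").getOrCreate()

orders = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "orders")
    .load()
)

# Count records per one-minute window on the Kafka message timestamp
counts = orders.groupBy(window(col("timestamp"), "1 minute")).count()

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```

And roughly the same thing with PyFlink:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Requires the flink-sql-connector-kafka jar; broker, topic, and schema are invented.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE orders (
        order_id STRING,
        ts TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'latest-offset',
        'format' = 'json'
    )
""")

# Windowed count using a tumbling window table-valued function
t_env.execute_sql("""
    SELECT window_start, COUNT(*) AS order_count
    FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(ts), INTERVAL '1' MINUTE))
    GROUP BY window_start, window_end
""").print()
```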


A Primer on Databricks Unity Catalog

Beginner’s Hadoop gives us an overview:

The Databricks Unity Catalog is a feature provided by Databricks Unified Data Analytics Platform that allows you to organize and manage metadata about your data assets, such as tables, databases, and views. It provides a centralized metadata repository that enables users to discover, understand, and collaborate on data assets within a Databricks environment. The Unity Catalog integrates with various data sources and supports different metadata management capabilities.

Read on for an overview of what it does.
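
If you have not seen the three-level namespace before, here is a hedged sketch of what working against that central metadata looks like from a Databricks notebook; the catalog, schema, table, and group names are all invented.

```python
# Run from a Databricks notebook, where `spark` is the pre-created SparkSession.
spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.sales.orders (
        order_id BIGINT,
        amount   DECIMAL(10, 2)
    )
""")

# Discovery happens against the same central metadata
spark.sql("SHOW SCHEMAS IN analytics").show()
spark.sql("SELECT * FROM analytics.sales.orders LIMIT 10").show()

# Governance lives alongside the metadata (`data-analysts` is a made-up account group)
spark.sql("GRANT SELECT ON TABLE analytics.sales.orders TO `data-analysts`")
```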


Events in Apache Kafka

Lucia Cerchie explains the key metaphor in an event-driven system:

We can speak broadly, maybe even a little philosophically, about what events are. Events are “things that happen,” or sometimes, they are otherwise defined as representations of facts. All data is, in a way, a result of humans trying to grok events. At the same time, I honestly don’t find the definition helpful if we leave it at this level. Do we ever design apps around things that don’t happen? (Don’t think about that too hard.) 

Let’s get concrete: events that might affect real-time data pipelines and applications include things like Pinterest saves, USPS address changes, ship coordinate updates, and credit card transactions.

Read on for more about events and how they drive the design of Kafka topics and the applications which use them.
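
To make “a thing that happened” concrete, here is a minimal sketch of producing one such event with the confluent-kafka Python client; the broker address, topic, and payload are invented.

```python
import json
from confluent_kafka import Producer

# An event is an immutable record of something that happened, keyed by the entity it happened to.
producer = Producer({"bootstrap.servers": "localhost:9092"})

event = {
    "event_type": "address_changed",        # what happened
    "customer_id": "cust-1234",             # who it happened to
    "new_zip": "98101",                     # the fact being recorded
    "occurred_at": "2023-06-20T14:03:00Z",  # when it happened
}

# Keying by customer_id keeps every event for a given customer in order on one partition.
producer.produce(
    "customer-address-changes",
    key=event["customer_id"],
    value=json.dumps(event),
)
producer.flush()
```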


Bring Fabric to the Data Lakehouse

Ust Oldfield ties together Databricks and Microsoft Fabric:

We’ve built countless Lakehouses for our customers and influenced the design of many more. With the advent of Fabric, many organisations with existing lakehouse implementations in Azure are wondering what changes Fabric will herald for them. Do they continue with their existing lakehouse implementation and design, or do they migrate entirely to Fabric?

For many, the answer will be to continue as-is. They’ve invested a lot of time and money in establishing a Lakehouse – to migrate now to a slightly different technology stack would be a very costly exercise! There also isn’t a need to migrate from a lakehouse implementation in Databricks to one in Fabric as there aren’t concrete benefits to be realised.

For those using Power BI as their semantic and reporting layers, as well as using Databricks SQL or Synapse Serverless as the serving layer, Fabric provides a perfect opportunity to rationalise the architecture and to bring about substantial performance gains through the Direct Lake connectivity and V-Order compression in Fabric.

Read on to see what Ust means, using a couple of architecture diagrams along the way.
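
For the V-Order piece specifically, here is a rough, hedged sketch of what a write from a Fabric Spark notebook can look like; the session setting name reflects Microsoft’s Fabric documentation at the time of writing (verify it against your runtime), and the data and table name are invented.

```python
# From a Fabric Spark notebook, where `spark` is pre-created.
# Enable V-Order so the written Parquet/Delta files are laid out for Direct Lake reads.
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

# Illustrative data and table name
df = spark.createDataFrame(
    [("2023-06-01", 120.50), ("2023-06-02", 98.25)],
    ["order_date", "revenue"],
)

df.write.format("delta").mode("overwrite").saveAsTable("sales_summary")
```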


Apache Doris and Data Colocation

Frank Z takes us through a use case for Apache Doris:

In data analytics, fast query performance is more of a result than a guarantee. What’s more important than the result itself is the architectural design and mechanism that enables quick performance. This is exactly what this post is about. I will put you into context with a typical use case of Apache Doris, an open-source MPP-based analytic database.

The user, in this case, is an all-category Q&A website. As a billion-dollar listed company, they have their own data management platform. What Doris does is support the data filtering, packaging, analyzing, and monitoring workloads of that platform. Based on their huge data size, the user demands quick data loading and quick response to queries. 

This sounds a lot like sharding of the data, where you segregate data for a particular customer/entity into its own database (and possibly instance), with the exception that queries are expected to go over a number of shards rather than focus on a single one.
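
For the colocation angle in the title, here is a very rough sketch of what a colocated Doris table can look like, created over the MySQL protocol that Doris speaks. The schema, bucket count, colocation group, and connection details are all invented, so treat it as an illustration rather than a recipe.

```python
import pymysql

# Doris speaks the MySQL protocol; the FE query port is typically 9030.
# Connection details, database, schema, bucket count, and group name are all invented.
conn = pymysql.connect(host="doris-fe", port=9030, user="root", password="", database="demo")

ddl = """
CREATE TABLE IF NOT EXISTS qa_events (
    site_id    INT,
    user_id    BIGINT,
    event_time DATETIME,
    event_type VARCHAR(32)
)
DUPLICATE KEY(site_id, user_id)
DISTRIBUTED BY HASH(site_id) BUCKETS 16
PROPERTIES (
    "colocate_with"   = "qa_group",
    "replication_num" = "3"
)
"""

with conn.cursor() as cur:
    cur.execute(ddl)
conn.close()
```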


Read and Write Data with PySpark

Dustin Vannoy has two of the three R’s down:

Every Spark pipeline involves reading data from a data source or table. For data engineers we usually end the pipelines by writing the transformed data. In this tutorial we walk through some of the most common format and cloud storage locations for reading and writing with Spark. We’ll save some of the advanced Delta Lake capabilities for another tutorial.

Click through to see how to read from and write to CSV, JSON, and Parquet formats. Dustin has examples of working with Azure Blob Storage, S3, and Google Cloud Storage, and even some database examples with JDBC.
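
For a quick taste of the pattern before you click through, here is a hedged sketch; the bucket, storage account, paths, and column names are all made up.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-write-demo").getOrCreate()

# Read CSV with a header row and inferred types (bucket and paths are illustrative)
sales = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3a://my-bucket/raw/sales/")
)

# Read newline-delimited JSON from Azure storage (account and container are illustrative)
events = spark.read.json("abfss://raw@mystorageaccount.dfs.core.windows.net/events/")

# Write the result as Parquet, partitioned by an assumed sale_date column
sales.write.mode("overwrite").partitionBy("sale_date").parquet("s3a://my-bucket/curated/sales/")
```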


Apache Kafka 3.5 Released

Mickael Maison has an announcement for us:

We are proud to announce the release of Apache Kafka® 3.5.0. This release contains many new features and improvements. This blog post will highlight some of the more prominent features. For a full list of changes, be sure to check the release notes.

See the Upgrading to 3.5.0 from any version 0.8.x through 3.4.x section in the documentation for the list of notable changes and detailed upgrade steps.

The ability to migrate Kafka clusters from ZK to KRaft mode with no downtime is still an early access feature. It is currently only suitable for testing in non-production environments. See KIP-866 for more details.

Click through for some of the highlights and check out the full release notes as well.


Creating a Spark UDF

The Big Data in Real World team creates a Spark user-defined function in Scala:

In this post we are going to create a Spark UDF which converts temperature from Fahrenheit to Celsius.

Here is our data. We have day and temperature in Fahrenheit.

And, of course, it’s roughly the same in PySpark. Also, note that user-defined functions take a performance hit relative to built-in functions (an answer that has stayed fairly consistent through the years), so save UDFs for when you genuinely need them.
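
For reference, here is a hedged PySpark sketch of the same idea, with sample data invented to match the day-and-Fahrenheit shape the post describes.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.appName("udf-demo").getOrCreate()

# Sample data: day and temperature in Fahrenheit
df = spark.createDataFrame(
    [("2023-06-01", 98.6), ("2023-06-02", 32.0)],
    ["day", "temp_f"],
)

# A UDF that converts Fahrenheit to Celsius
@udf(returnType=DoubleType())
def f_to_c(temp_f):
    return (temp_f - 32.0) * 5.0 / 9.0

df.withColumn("temp_c", f_to_c("temp_f")).show()
```

Where the conversion is this simple, a built-in column expression such as (col("temp_f") - 32) * 5 / 9 does the same job without the serialization overhead.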


Killing Multiple YARN Applications at Once

The Big Data in Real World team doesn’t have time to mess around:

If you work with Apache Hadoop, you may find yourself needing to kill multiple YARN applications at once. While you can kill them one by one using the yarn application -kill command, this can be a tedious and time-consuming process. Fortunately, there is a faster way to kill multiple YARN applications at once using the yarn application command in combination with awk.

Click through to see how. I will say, though, remembering some of these sed+grep+awk solutions I’ve written in the past makes me happy that PowerShell is object-based…
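
If you would rather script it from Python than chain shell tools, a rough equivalent might look like this; it assumes the yarn CLI is on the PATH and kills every RUNNING application, so add your own filtering first.

```python
import subprocess

# List running applications; data rows start with the application ID in the first column.
listing = subprocess.run(
    ["yarn", "application", "-list", "-appStates", "RUNNING"],
    capture_output=True, text=True, check=True,
).stdout

app_ids = [line.split()[0] for line in listing.splitlines() if line.startswith("application_")]

# Kill each matching application in turn
for app_id in app_ids:
    subprocess.run(["yarn", "application", "-kill", app_id], check=True)
```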


Listing Topics in Kafka without Zookeeper

The Big Data in Real World team has a quick one for us:

Kafka uses Zookeeper to manage its internal state. So it is not possible to run Kafka without Zookeeper. Even if you don’t have access to Zookeeper in your organization, there is a Zookeeper cluster running which your Kafka cluster connects to.

So, how to list topics and execute other commands if we don’t have access to Zookeeper?

Eventually, this won’t even be a question, as Kafka already has production versions using KRaft, and by Kafka 4.0, there won’t be a Zookeeper to kick around anymore.
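
In the meantime, one way to list topics without touching Zookeeper at all is to ask the brokers directly, for example via the Python admin client; the broker address below is illustrative.

```python
from confluent_kafka.admin import AdminClient

# Talk to the brokers directly; no Zookeeper address needed.
admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# list_topics fetches cluster metadata, including the topic list, from the brokers.
metadata = admin.list_topics(timeout=10)
for topic in sorted(metadata.topics):
    print(topic)
```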
