Press "Enter" to skip to content

Category: Data Lake

Reading Delta Tables via SQL Code in a Microsoft Fabric Python Notebook

Gilbert Quevauvilliers writes a SQL statement:

I come from a T-SQL background, so using SQL makes it easy for me to work with data.

There are multiple ways to use SQL in a PySpark notebook, and when I started using a Python notebook it was not so straightforward.

In this blog post, I will show you how I use SQL code.

As mentioned previously, I am by no means an expert; I typically find a way that works, is fast, and doesn’t consume a lot of capacity. If that works consistently for me, then that is how I go about it.

Click through for the solution, which uses DuckDB. As such, the SQL syntax isn’t T-SQL; it’s much closer to PostgreSQL’s dialect. But DuckDB does a great job of interacting with Parquet files and Delta tables.
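
To make the idea concrete, here is a minimal sketch (not Gilbert’s exact code) of querying a Delta table with DuckDB’s delta extension from a Python notebook. The OneLake path, table, and column names are assumptions, and authentication setup is omitted:

    import duckdb

    con = duckdb.connect()
    con.sql("INSTALL delta")  # the delta extension provides delta_scan()
    con.sql("LOAD delta")

    # Hypothetical OneLake path to a Delta table; auth configuration omitted
    path = "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/MyLakehouse.Lakehouse/Tables/sales"

    con.sql(f"""
        SELECT region, SUM(amount) AS total_amount
        FROM delta_scan('{path}')
        GROUP BY region
        ORDER BY total_amount DESC
    """).show()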

Two Direct Lakes in Microsoft Fabric

Nikola Ilic does a bit of digging:

Before you proceed, in case you don’t know what Direct Lake is, I’ve got you covered in this article, where you can learn and understand various Direct Lake concepts, as well as in which scenarios you might consider implementing Direct Lake semantic models. Now that you know what Direct Lake is, let’s digest the latest news…

A couple of days ago, I was reading the official blog post about the latest enhancement to the Direct Lake storage mode for semantic models in Microsoft Fabric. The official blog post can be found here.

Click through for that announcement and what it means.

Spring Cleaning for Lakehouse Tables with VACUUM

Chen Hirsh says it’s time to do a bit of cleanup:

Delta tables create new files for every change made to the table (insert, update, delete). You can use the old files to “time travel” – to query or restore older versions of your table. This is a wonderful feature, but over time, these files accumulate in your storage and will increase your storage costs.

Read on for a primer on the VACUUM command, how frequently you might want to run it, and how much history you might want to retain. The example specifically uses Databricks, but the mechanism works the same way in other lakehouse platforms like Microsoft Fabric.
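
As a minimal illustration (assuming a Spark notebook with an ambient spark session and a hypothetical Delta table named sales), a VACUUM run looks like this:

    # Preview which files a real run would delete; nothing is removed
    spark.sql("VACUUM sales RETAIN 168 HOURS DRY RUN").show(truncate=False)

    # Delete files no longer referenced by table versions from the last
    # 7 days (168 hours is Delta Lake's default retention threshold)
    spark.sql("VACUUM sales RETAIN 168 HOURS")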

Comparing Apache Iceberg to Delta Lake

Maria Zakourdaev compares technologies:

Public cloud blob storage has been a standard for data lakes for the last 10 years. Blob storage, at first, came to solve data warehouse storage limitations. It is very cheap and has unlimited capacity. You can store any data format (structured, semi-structured, or unstructured) in the data lake located on a blob storage, and keep any amount of raw data for an unlimited time. When considering Apache Iceberg vs Delta Lake, both can manage data efficiently. Depending on the access frequency, data can be stored on cold or warm types of cloud storage, saving even more costs.

Read on to see how the two techniques compare along several dimensions, as well as some general guidance at the end on which to choose.

Securing Parquet Files

Vamshidhar Morusu writes on vulnerabilities:

Although open-source Java libraries are essential for contemporary software development, they frequently introduce serious security flaws that put systems at risk. The risks are highlighted by recent examples:

  • Deep Java Library (CVE-2025-0851): Attackers can write files outside of designated directories due to a path traversal vulnerability in DJL’s archive extraction tools. Versions 0.1.0 through 0.31.0 are affected by this vulnerability, which may result in data corruption or illegal system access. Version 0.31.1 has a patch for it.
  • Jackson Library (CVE-2022-42003): Unsafe serialization/deserialization configurations in the well-known JSON parser cause a high-severity problem (CVSS 7.5) that could result in denial-of-service attacks.

These illustrations highlight how crucial it is for open-source libraries to have careful dependency management, frequent updates, and security audits. Companies should enforce stringent validation and use automated vulnerability scanning tools.

Click through for a more detailed view of a third CVE, as well as tips to protect your data.
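
To make the path traversal class of bug concrete, here is a small, hypothetical guard in Python (not tied to DJL or any particular library): resolve each archive entry against the destination directory and reject anything that escapes it.

    from pathlib import Path

    def safe_target(dest_dir: str, entry_name: str) -> Path:
        # Resolve an archive entry's path, refusing to escape dest_dir
        dest = Path(dest_dir).resolve()
        target = (dest / entry_name).resolve()
        if not target.is_relative_to(dest):  # Python 3.9+
            raise ValueError(f"Blocked traversal attempt: {entry_name}")
        return target

    safe_target("/tmp/out", "data/part-0001.parquet")  # OK
    safe_target("/tmp/out", "../../etc/passwd")        # raises ValueError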

Iceberg Data Support in OneLake

Matthew Hicks isn’t replicating data anymore:

Microsoft OneLake is the single, unified, logical data lake that allows your entire organization to store, manage, and analyze data in one place. It provides seamless integration with various data sources and engines, making it easier to derive insights and drive innovation.

At the most recent Microsoft Build conference, we announced the integration effort between Snowflake and OneLake, which aims to allow users of both Snowflake and Microsoft Fabric to work on the same Iceberg data in OneLake, with no data duplication/movement needed. More recently, we released the preview of OneLake’s Iceberg table format support, which included the ability for Snowflake to write Iceberg tables directly to OneLake.

Click through for more information about the current status of this feature, as well as what’s coming soon.

Writing Data into a Microsoft Fabric Lakehouse via Notebook

Stepan Resl writes some code:

Since Lakehouse is one of the key items within Microsoft Fabric, it is important to know how to write data into it in various formats and using different tools. One of the most common tools is notebooks, as they provide great flexibility and speed for development and testing with graphical outputs. In this article, I want to focus primarily on the following types of notebooks:

  • PySpark
  • Python

Click through to see how it works in both notebook types.
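
For a rough sense of the difference (a sketch with made-up table and column names, not Stepan’s exact code): a PySpark notebook has an ambient spark session, while a pure Python notebook typically leans on pandas plus the deltalake package and the default Lakehouse mount.

    # PySpark notebook: write a managed Delta table in the attached Lakehouse
    df = spark.createDataFrame([(1, "widget"), (2, "gadget")], ["id", "name"])
    df.write.format("delta").mode("overwrite").saveAsTable("products")

    # Python notebook: no Spark; use pandas and the deltalake writer instead
    import pandas as pd
    from deltalake import write_deltalake

    pdf = pd.DataFrame({"id": [1, 2], "name": ["widget", "gadget"]})
    write_deltalake("/lakehouse/default/Tables/products", pdf, mode="overwrite")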

Table Compaction in Apache Spark

Miles Cole groups things together:

If there is anything that data engineers agree on, it’s that table compaction is important. Often one of the first big lessons folks learn early on is that not compacting tables can cause serious performance issues: your lakehouse pilot gets approved, it runs in production for a couple of months, and you find that both reads and writes are getting slower and slower even though your data volumes have not increased drastically. Guess what: you almost surely have a “small file problem”.

What engineers won’t always sing the same tune on is how and when to perform table compaction.

Read on for a dive into the power of compaction (converting a large number of small files into a small number of large files) and plenty of tips along the way.
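
As a small, hedged example (assuming a Spark notebook and a hypothetical Delta table named events), the compaction itself is nearly a one-liner:

    # Rewrite many small Parquet files into fewer, larger ones
    spark.sql("OPTIMIZE events")

    # Optionally co-locate data on a commonly filtered column while compacting
    spark.sql("OPTIMIZE events ZORDER BY (event_date)")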

Data Retention for Data in the Microsoft Fabric Lakehouse

Kenneth Omorodion clears out some data:

More than ever before, organizations aim for a well-defined approach to managing their data storage effectively. Some reasons for this include operational efficiency, cost management, regulatory compliance, and strategic decision-making. In this article, I will describe an approach to data retention management for Lakehouse files, for when the data exists as files in the Fabric Lakehouse.

There’s nothing built in, but Kenneth makes it easy.
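
For a flavor of what such an approach can look like, here is a minimal sketch (not Kenneth’s exact solution): sweep a Files folder and delete anything older than a retention window. The landing folder and 30-day cutoff are assumptions; /lakehouse/default/ is the default Lakehouse mount point in Fabric notebooks.

    import os
    import time
    from pathlib import Path

    cutoff = time.time() - 30 * 24 * 3600  # keep the last 30 days of files

    for path in Path("/lakehouse/default/Files/landing").rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            os.remove(path)  # permanently deletes the file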

Data Lakes, Warehouses, and Lakehouses

Noa Shavit disambiguates three terms:

A data warehouse is a repository and platform for storing, querying, and manipulating data. Warehouses are particularly suited for structured data used for decision support and business intelligence. Modern data warehouses have become more efficient, flexible, and scalable (particularly in the context of massively parallel processing and distributed computation), but they still bear the mark of their early development in the previous century.

The data warehouse concept dates back to data marts in the 1970s. After a long incubation period, the idea began to bear fruit commercially at IBM in the late 1980s and early 1990s. Data warehousing improved on the inefficiency of data marts, siloed data stores maintained by individual departments. 

Click through to learn more about each of the three concepts and how they relate.
