Press "Enter" to skip to content

Category: Python

A Primer on Object-Oriented Python

Leela Prasad has class:

In Python, you define a class by using the class keyword followed by a name and a colon. Then you use .__init__() to declare which attributes each instance of the class should have:
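
As a quick illustration (my own minimal example, not necessarily the one from the linked article), the pattern looks like this:

class Dog:
    """A minimal class illustrating the pattern described above."""

    def __init__(self, name, age):
        # __init__ runs when a new instance is created and attaches
        # per-instance attributes to self.
        self.name = name
        self.age = age

    def description(self):
        # Instance methods receive the instance as their first argument.
        return f"{self.name} is {self.age} years old"

buddy = Dog("Buddy", 9)
print(buddy.description())  # Buddy is 9 years old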

Click through for an introduction to object-orientation as it exists in Python. I have my strong functional programming biases—which is part of why I don’t particularly love Python as a programming language—but if you are going to get comfortable with Python, you’ll get a lot of value out of learning how classes work.

The Importance of Semantic Link

Nikola Ilic excerpts from a forthcoming book:

Since Microsoft Fabric was publicly unveiled in May 2023, there has been an ocean of announcements around this new platform. In full honesty, plenty of those were just marketing or a rebranding of features and services that already existed before Fabric. Hence, in this ocean of announcements, some features went under the radar, their true power still somewhat hidden behind the glamour of those “noisy neighbors”.

Semantic Link is probably one of the best examples of these hidden Fabric gems. 

Click through to learn more about Semantic Link and check out Nikola and Ben Weissman’s book as well.

Documenting Microsoft Fabric Workspaces via Semantic Link Labs

Prathy Kamasani does a bit of documentation:

Documentation is a critical and tedious part of every project, but it is essential when reviewing existing developments or documenting new ones. When the Power BI API was initially released, I attempted something similar: I wanted to know how to use the API to obtain an inventory of a tenant (Power BI Template – Prathy’s Blog). Now, I believe I am achieving the same goal but using my current favourite functionality, Fabric Notebooks.

In this blog post, I will discuss using Semantic Link and Semantic Link Labs to get an overview of specified workspaces and their contents via a Fabric notebook. This is just one way of doing it; plenty of blogs discuss various things you could do with Semantic Link. Also, I want to use this to document what I have learned. I like how I can generate a Lakehouse and automatically create Delta Tables as needed.

Click through to learn more about how this works.
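
For a rough sense of what this looks like in practice, here is a minimal sketch using sempy (the package behind Semantic Link) to list workspaces and their items from a Fabric notebook. It is my own example rather than Prathy's code, and column names can vary by sempy version:

import sempy.fabric as fabric

# List the workspaces visible to the caller, then the items in each one.
workspaces = fabric.list_workspaces()

for _, ws in workspaces.iterrows():
    print(f"Workspace: {ws['Name']}")
    # list_items returns a pandas DataFrame of the items in a given workspace.
    items = fabric.list_items(workspace=ws["Id"])
    for _, item in items.iterrows():
        print(f"  {item['Type']}: {item['Display Name']}")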

Updating the Default Lakehouse of a Notebook

Sandeep Pawar makes a change:

I have written about the default lakehouse of a Fabric notebook before, here and here. However, unless you used the notebook API, there was no easy or quick way of removing all (or selected) lakehouses or updating the default lakehouse of a notebook. But thanks to a tip from Yi Lin from the Notebooks product team, notebookutils.notebook.updateDefinition has two extra parameters, defaultLakehouse and defaultLakehouseWorkspace, which can be used to update the default lakehouse of a notebook. You can also use it to update the environment attached to a notebook. Below are some scenarios of how it can be used.

Click through for those scenarios.
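
As a hedged sketch of the shape of that call (the function and parameter names come from the quote above; I have not verified the full signature, so treat the other details as assumptions and check the notebookutils documentation):

# notebookutils is available by default in a Fabric notebook.
# Notebook, lakehouse, and workspace names here are hypothetical.
notebookutils.notebook.updateDefinition(
    name="My Notebook",
    defaultLakehouse="SalesLakehouse",
    defaultLakehouseWorkspace="Analytics Workspace",
)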

Domain Lineage in Microsoft Fabric

Sandeep Pawar creates 1000 words of value:

In Fabric, you can use Domains to create a data mesh architecture. They allow you to organize data and items by specific business domains within the organization and make the overall data architecture decentralized. You can create domains within domains and assign workspaces to each domain. As this grows, you may find it challenging to understand how the domains and workspaces have been organized. The code below will help you trace the domains, subdomains, and the workspaces assigned to them.

Click through to see how you can use the graphviz library in Python to generate a simple domain chart.
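
As a trivial sketch of the charting half (hypothetical domain and workspace names; Sandeep's code pulls the real hierarchy from Fabric), the graphviz library renders this kind of tree from nothing more than nodes and edges:

from graphviz import Digraph

# Hypothetical domain -> subdomain -> workspace hierarchy.
hierarchy = {
    "Sales": {"Retail": ["Retail WS 1", "Retail WS 2"], "Online": ["Web Analytics WS"]},
    "Finance": {"Treasury": ["Treasury WS"]},
}

dot = Digraph(comment="Fabric domain lineage")
for domain, subdomains in hierarchy.items():
    dot.node(domain, domain, shape="box")
    for subdomain, workspaces in subdomains.items():
        dot.node(subdomain, subdomain, shape="box")
        dot.edge(domain, subdomain)
        for ws in workspaces:
            dot.node(ws, ws, shape="ellipse")
            dot.edge(subdomain, ws)

dot  # in a notebook, the last expression renders the diagram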

Natural Language Pre-Processing with Python

Harris Amjad does some text cleanup:

Natural Language Processing (NLP) is all the rage in the current machine learning landscape. With technologies like ChatGPT, Gemini, Llama, and so many other state-of-the-art text generators getting popular with the mainstream public, many newcomers are pouring into the field of NLP. Unfortunately, before we delve into how these fancy chatbots work, we must understand how we engineer and treat our data before we feed it to our model. In this tip, we will introduce and implement some basic text preprocessing and cleaning techniques with Python.

Click through for some common operations. Some of these are very important for certain tasks but likely unhelpful for others. That could include things like lower-casing all words or removing stopwords. There are also some operations like spell checking and jargon expansion (or replacement) that you will likely want to include in a real-life project with actual people entering the data, versus a tidy sample dataset.
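
For a flavor of the most basic of those operations, here is a minimal sketch in plain Python with a toy stopword list (not the code from the tip; real projects usually lean on a library such as NLTK or spaCy):

import re
import string

# A toy stopword list; in practice you would use a much fuller one.
STOPWORDS = {"the", "a", "an", "is", "are", "in", "on", "of", "and", "to"}

def preprocess(text):
    # Lower-case everything.
    text = text.lower()
    # Strip punctuation.
    text = text.translate(str.maketrans("", "", string.punctuation))
    # Tokenize naively on whitespace.
    tokens = re.split(r"\s+", text.strip())
    # Remove stopwords.
    return [t for t in tokens if t and t not in STOPWORDS]

print(preprocess("The model is trained on a large corpus of text."))
# ['model', 'trained', 'large', 'corpus', 'text']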

Simple Data Cleanup with Pandas

Ivan Palomares Carrascosa builds a process:

Few data science projects are exempt from the necessity of cleaning data. Data cleaning encompasses the initial steps of preparing data. Its specific purpose is to ensure that only the relevant and useful information underlying the data is retained, be it for subsequent analysis, for use as input to an AI or machine learning model, and so on. Unifying or converting data types, dealing with missing values, eliminating noisy values stemming from erroneous measurements, and removing duplicates are some examples of typical processes within the data cleaning stage.

As you might think, the more complex the data, the more intricate, tedious, and time-consuming the data cleaning can become, especially when implementing it manually.

Ivan handles some of the most common types of data cleanup work and shows a simple way of implementing them.
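
To make that concrete, here is a minimal sketch (a toy DataFrame of my own, not Ivan's example) covering type conversion, duplicates, and missing values with pandas:

import pandas as pd

# A small, deliberately messy dataset.
df = pd.DataFrame({
    "customer_id": ["1", "2", "2", "3"],
    "signup_date": ["2024-01-05", "2024-02-17", "2024-02-17", None],
    "spend": [120.5, None, None, 80.0],
})

# Unify and convert data types.
df["customer_id"] = df["customer_id"].astype(int)
df["signup_date"] = pd.to_datetime(df["signup_date"])

# Remove exact duplicates.
df = df.drop_duplicates()

# Handle missing values: fill spend with the median, drop rows with no date.
df["spend"] = df["spend"].fillna(df["spend"].median())
df = df.dropna(subset=["signup_date"])

print(df)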

Working with Excel Files in Databricks

Chen Hirsh deals with truly big data:

Excel is one of the most common data file formats, and, as data engineers, we are required to read data from it on almost every project. Excel is easy to use, and you can customize it quickly, like adding a column or changing data. But the same things that make it the go-to format for users make it hard for data platforms to read. Adding a column might break a pipeline, and changing data types, for example, adding text to a column that previously held only numeric data, might cause a nasty error downstream.

Working in Databricks, you can read and write Excel files, but you need to pay attention to some pitfalls. So let’s get started, working with Excel files on Databricks!

Click through for a way to do this using PySpark. H/T Madeira Data Solutions blog.
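
As one common pattern (not necessarily the approach in Chen's post, so do click through), you can read the file with pandas and the openpyxl engine, then hand the result to Spark; the path and column name below are hypothetical:

import pandas as pd

# Read the workbook with pandas; openpyxl may need to be installed first
# (%pip install openpyxl in a Databricks notebook).
pdf = pd.read_excel(
    "/dbfs/FileStore/sales.xlsx",
    sheet_name="Sheet1",
    engine="openpyxl",
)

# Be explicit about types so a stray text value doesn't change the schema.
pdf["amount"] = pd.to_numeric(pdf["amount"], errors="coerce")

# `spark` is predefined in a Databricks notebook.
df = spark.createDataFrame(pdf)
display(df)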

Generating a Multi-Aggregate Pivot in Spark

Richard Swinbank troubleshoots an issue:

I’m using a stream watermark to handle late arriving data – basically my watermark enables the stream to accept data arriving up to 10 seconds late …and that’s where the problem shows up.

When I run this streaming query – in Azure Databricks I can do this simply with display(df_pivot) – I receive the error:

AnalysisException: Detected pattern of possible ‘correctness’ issue due to global watermark. The query contains stateful operation which can emit rows older than the current watermark plus allowed late record delay, which are “late rows” in downstream stateful operations and these rows can be discarded. Please refer the programming guide doc for more details. If you understand the possible risk of correctness issue and still need to run the query, you can disable this check by setting the config `spark.sql.streaming.statefulOperator.checkCorrectness.enabled` to false.

Read on to learn more about the scenario, the issue, and the solution.
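
For context, the escape hatch named in the error message itself is a session-level config. The error is explicit that you should only flip it if you understand the correctness risk, and it is not necessarily the solution Richard lands on:

# Disable the global watermark correctness check, as named in the error above.
# Only do this after reasoning about whether late rows can be silently dropped.
spark.conf.set(
    "spark.sql.streaming.statefulOperator.checkCorrectness.enabled",
    "false",
)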
