Press "Enter" to skip to content

Category: Python

Boosting versus Bagging in Tree Models

Vinod Chugani compares two techniques for working with trees:

Ensemble learning techniques primarily fall into two categories: bagging and boosting. Bagging improves stability and accuracy by aggregating independent predictions, whereas boosting sequentially corrects the errors of prior models, improving their performance with each iteration. This post begins our deep dive into boosting, starting with the Gradient Boosting Regressor. Through its application on the Ames Housing Dataset, we will demonstrate how boosting uniquely enhances models, setting the stage for exploring various boosting techniques in upcoming posts.

Read on for more information. The neat part about the “boosting versus bagging” debate is that both techniques are quite useful. Although boosting (via algorithms like XGBoost or LightGBM) is the more popular technique, bagging (random forest) is extremely powerful in its own right.
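
If you want to play along at home, here is a minimal sketch of the technique using scikit-learn's GradientBoostingRegressor. The post works with the Ames Housing Dataset; a synthetic dataset stands in here so the example runs without any downloads:

```python
# A minimal sketch of gradient boosting for regression with scikit-learn.
# Synthetic data stands in for the Ames Housing Dataset the post uses.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each of the 100 shallow trees fits the residual errors of the ensemble
# built so far, which is the "sequentially corrects the errors" part.
model = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=3)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {r2_score(y_test, model.predict(X_test)):.3f}")
```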

Comments closed

Tracking Python Packages in Snowflake

Kevin Wilkie takes a peek:

When working with any of the many modern languages that use libraries, one thing to be aware of as a developer is the version of each library available for your use.

Since there are multiple languages in Snowflake that use libraries, let’s go over how to check out the versions that come installed and how to install one yourself.

Read on for those answers. Well, one answer and one conundrum.
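
For the curious, Snowflake exposes its pre-installed Anaconda packages through the information_schema.packages view. A minimal sketch of querying it via the Snowflake Python connector, with placeholder connection details:

```python
# A sketch of listing available Python package versions in Snowflake by
# querying the information_schema.packages view.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",    # placeholder
    user="your_user",          # placeholder
    password="your_password",  # placeholder
    database="your_db",        # information_schema is scoped to a database
)

cur = conn.cursor()
cur.execute(
    "SELECT package_name, version "
    "FROM information_schema.packages "
    "WHERE language = 'python' AND package_name = 'pandas' "
    "ORDER BY version"
)
for package_name, version in cur.fetchall():
    print(package_name, version)
```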

Comments closed

A Primer on Object-Oriented Python

Leela Prasad has class:

In Python, you define a class by using the class keyword followed by a name and a colon. Then you use .__init__() to declare which attributes each instance of the class should have:

Click through for an introduction to object-orientation as it exists in Python. I have my strong functional programming biases—which is part of why I don’t particularly love Python as a programming language—but if you are going to get comfortable with Python, you’ll get a lot of value out of learning how classes work.
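
As a taste of what the article covers, here is a minimal (hypothetical) class following the pattern in the excerpt:

```python
# A minimal illustration of the pattern described above: the class keyword,
# then .__init__() declaring the attributes each instance should have.
class Dog:
    def __init__(self, name: str, age: int):
        self.name = name  # each instance gets its own name...
        self.age = age    # ...and its own age

    def birthday(self) -> None:
        self.age += 1

buddy = Dog("Buddy", 4)
buddy.birthday()
print(buddy.name, buddy.age)  # Buddy 5
```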

Comments closed

The Importance of Semantic Link

Nikola Ilic excerpts from a forthcoming book:

Since Microsoft Fabric was publicly unveiled in May 2023, there has been an ocean of announcements around this new platform. In full honesty, plenty of those were just marketing or a rebranding of features and services that already existed before Fabric. Hence, in this ocean of announcements, some features went under the radar, their true power still somewhat hidden behind the glamour of those “noisy neighbors”.

Semantic Link is probably one of the best examples of these hidden Fabric gems. 

Click through to learn more about Semantic Link and check out Nikola and Ben Weissman’s book as well.
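
If you have not tried it yet, Semantic Link surfaces in Fabric notebooks as the sempy package. A quick sketch, assuming a Fabric notebook session and a hypothetical semantic model named “Sales Model” with a “DimDate” table:

```python
# A minimal sketch of Semantic Link (the sempy package) in a Fabric notebook.
# The semantic model and table names are hypothetical.
import sempy.fabric as fabric

# List the semantic models visible from the current workspace
models = fabric.list_datasets()
print(models)

# Read a table from a semantic model into a pandas-compatible FabricDataFrame
df = fabric.read_table("Sales Model", "DimDate")
print(df.head())
```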

Comments closed

Documenting Microsoft Fabric Workspaces via Semantic Link Labs

Prathy Kamasani does a bit of documentation:

Documentation is a critical and tedious part of every project. However, it is essential to review existing developments or document new ones. When the Power BI API was initially released, I attempted to do similar things: I wanted to know how to use the API to obtain an inventory of a tenant (Power BI Template – Prathy’s Blog). Now, I believe I am achieving the same goal, but using my current favourite functionality, Fabric Notebooks.

In this blog post, I will discuss using Semantic Link and Semantic Link Labs to get an overview of workspaces and their contents within specified workspaces via a Fabric Notebook. This is just one way of doing it; plenty of blogs discuss various things you could do with Semantic Link. Also, I want to use this to document what I have learned. I like how I can generate a Lakehouse and automatically create Delta Tables as needed.

Click through to learn more about how this works.
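
To give a flavor of the approach, here is a rough sketch of a workspace inventory built on sempy’s fabric module. The column names are assumptions based on what sempy returns, and Prathy’s notebook goes much further, landing the results in Lakehouse Delta tables:

```python
# A rough sketch of inventorying workspaces and their items with sempy.
# Column names ("Id", "Name", "Display Name", "Type") are assumptions.
import pandas as pd
import sempy.fabric as fabric

workspaces = fabric.list_workspaces()  # one row per workspace you can see

inventory = []
for _, ws in workspaces.iterrows():
    items = fabric.list_items(workspace=ws["Id"])  # reports, models, notebooks, ...
    items["Workspace Name"] = ws["Name"]
    inventory.append(items)

inventory_df = pd.concat(inventory, ignore_index=True)
print(inventory_df[["Workspace Name", "Display Name", "Type"]].head())
```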

Comments closed

Updating the Default Lakehouse of a Notebook

Sandeep Pawar makes a change:

I have written about the default lakehouse of a Fabric notebook before, here and here. However, unless you used the Notebook API, there was no easy or quick way of removing all (or selected) lakehouses, or of updating the default lakehouse of a notebook. But thanks to a tip from Yi Lin of the Notebooks product team, notebookutils.notebook.updateDefinition has two extra parameters, defaultLakehouse and defaultLakehouseWorkspace, which can be used to update the default lakehouse of a notebook. You can also use it to update the environment attached to a notebook. Below are some scenarios showing how it can be used.

Click through for those scenarios.
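
For reference, here is a sketch of the call using the two parameters Sandeep mentions. This assumes you are running inside a Fabric notebook, and the notebook and lakehouse names are placeholders:

```python
# A sketch of repointing a notebook's default lakehouse with the parameters
# Sandeep describes; all names below are placeholders.
import notebookutils

notebookutils.notebook.updateDefinition(
    name="My Notebook",                        # placeholder notebook name
    defaultLakehouse="Sales_Lakehouse",        # new default lakehouse
    defaultLakehouseWorkspace="Analytics WS",  # workspace holding that lakehouse
)
```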

Comments closed

Domain Lineage in Microsoft Fabric

Sandeep Pawar creates 1000 words of value:

In Fabric, you can use domains to create a data mesh architecture. They allow you to organize data and items by specific business domains within the organization and make the overall data architecture decentralized. You can create domains within domains and assign workspaces to each domain. As this grows, you may find it challenging to understand how the domains and workspaces have been organized. The code below will help you trace the domains, subdomains, and the workspaces assigned to them.

Click through to see how you can use the graphviz library in Python to generate a simple domain chart.
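
As a sketch of the idea, here is how graphviz can render a domain hierarchy. The domain and workspace data below is hypothetical, standing in for the Fabric admin API calls Sandeep’s code makes:

```python
# The gist of drawing a domain hierarchy with graphviz; the domain and
# workspace data is hypothetical, standing in for Fabric admin API calls.
from graphviz import Digraph

domains = {
    "Finance": {"subdomains": ["Treasury"], "workspaces": ["FinRpt WS"]},
    "Treasury": {"subdomains": [], "workspaces": ["Cash WS", "FX WS"]},
}

dot = Digraph(comment="Fabric domain lineage")
for domain, info in domains.items():
    dot.node(domain, shape="box")
    for sub in info["subdomains"]:
        dot.edge(domain, sub)                 # domain -> subdomain
    for ws in info["workspaces"]:
        dot.node(ws, shape="ellipse")
        dot.edge(domain, ws, style="dashed")  # domain -> workspace

dot.render("domain_lineage", format="png", cleanup=True)
```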

Comments closed

Natural Language Pre-Processing with Python

Harris Amjad does some text cleanup:

Natural Language Processing (NLP) is all the rage in the current machine learning landscape. With technologies like ChatGPT, Gemini, Llama, and so many other state-of-the-art text generators becoming popular with the mainstream public, many newcomers are pouring into the field of NLP. Unfortunately, before we delve into how these fancy chatbots work, we must understand how we engineer and treat our data before we feed it to our model. In this tip, we will introduce and implement some basic text preprocessing and cleaning techniques with Python.

Click through for some common operations. Some of these are very important for certain tasks but likely unhelpful for others. That could include things like lower-casing all words or removing stopwords. There are also some operations like spell checking and jargon expansion (or replacement) that you will likely want to include in a real-life project with actual people entering the data, versus a tidy sample dataset.
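
As a quick taste, here is a minimal sketch of two of those operations, lower-casing and stopword removal, using NLTK:

```python
# A minimal sketch of two common preprocessing steps mentioned above:
# lower-casing and stopword removal, here with NLTK.
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

text = "The Quick Brown Fox JUMPED over the lazy dog."
tokens = word_tokenize(text.lower())  # lower-case, then tokenize
stop_words = set(stopwords.words("english"))
filtered = [t for t in tokens if t.isalpha() and t not in stop_words]
print(filtered)  # ['quick', 'brown', 'fox', 'jumped', 'lazy', 'dog']
```

Whether these steps help depends on the task: lower-casing throws away signal a named-entity recognizer would want, and stopword removal can hurt models that rely on word order.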

Comments closed