Press "Enter" to skip to content

Category: Python

K-Fold Cross-Validation in Python

Shanthababu Pandian gives us a primer on k-fold cross-validation:

In each fold, training and testing are performed exactly once during the entire process. This helps us avoid overfitting. As we know, a model trained using all of the data in a single shot can show the best apparent accuracy; k-fold cross-validation resists this and helps us build a generalized model.

To perform k-fold cross-validation, we have to split the data set into three sets (training, testing, and validation), with the challenge being the volume of available data.

Read on for the explanation and an example.
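
To make the idea concrete, here is a minimal sketch of k-fold cross-validation using scikit-learn (the dataset and model are illustrative choices, not from the linked article):

```python
# A minimal sketch of k-fold cross-validation with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# With 5 splits, each fold serves as the test set exactly once.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=kfold)

print("Per-fold accuracy:", scores)
print("Mean accuracy:", scores.mean())
```

Averaging the per-fold scores gives a less optimistic (and less variable) estimate of generalization performance than a single train/test split.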


Using the Softmax Classifier in PyTorch

Muhammad Asad Iqbal Khan takes us through one of the classifier options available to PyTorch:

While a logistic regression classifier is used for binary classification, the softmax classifier is a supervised learning algorithm which is mostly used when multiple classes are involved.

The softmax classifier works by assigning a probability to each class. The raw scores are normalized so that the probabilities sum to 1, with the largest score receiving the highest probability.

Read on to learn some of the properties of the Softmax classifier, as well as how you can use this for multi-class classification in PyTorch.
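
As a quick illustration, here is a minimal sketch of a softmax-based multi-class classifier in PyTorch (the layer sizes and toy data are assumptions for the example):

```python
# A minimal sketch of a softmax classifier in PyTorch.
import torch
import torch.nn as nn

model = nn.Linear(4, 3)                # 4 input features, 3 classes
criterion = nn.CrossEntropyLoss()      # applies log-softmax internally
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

X = torch.randn(8, 4)                  # a toy batch of 8 samples
y = torch.randint(0, 3, (8,))          # integer class labels

optimizer.zero_grad()
loss = criterion(model(X), y)          # expects raw logits, not probabilities
loss.backward()
optimizer.step()

# At inference time, apply softmax explicitly to get class probabilities.
probs = torch.softmax(model(X), dim=1)
print(probs.sum(dim=1))                # each row sums to 1
```

Note that nn.CrossEntropyLoss expects raw logits, so the explicit softmax only appears at prediction time.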


Trying out FLAML

Gavita Regunath provides an overview of FLAML:

FLAML is short for Fast and Lightweight Automated Machine Learning library. It is an open-source Python library created by Microsoft researchers in 2021 for automated machine learning (AutoML). It is designed to be fast, efficient, and user-friendly, making it ideal for a wide range of applications.

Click through to learn more and to give it a spin with a pair of notebooks.
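
If you want a taste before opening the notebooks, a minimal sketch of FLAML's AutoML interface looks like this (the dataset and time budget are illustrative):

```python
# A minimal sketch of FLAML's AutoML interface (pip install flaml).
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

automl = AutoML()
# FLAML searches over learners and hyperparameters within the time budget.
automl.fit(X_train=X, y_train=y, task="classification", time_budget=30)

print(automl.best_estimator)   # e.g. "lgbm"
print(automl.best_config)
```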


Statistical Analysis in Azure ML

Tomaz Kastrun continues an advent of Azure ML. Day 18 takes us through feature exploration:

Azure Machine Learning is also a great tool for ordinary statistical analysis, graph plotting, and everything that goes along with them.

Let's get an open dataset that is available in the UCI Machine Learning Repository and import it into a pandas dataframe.
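
A minimal sketch of that import step might look like this (the wine-quality dataset URL is an illustrative choice, not necessarily the one Tomaz uses):

```python
# A minimal sketch: pull an open UCI dataset into a pandas dataframe.
import pandas as pd

url = (
    "https://archive.ics.uci.edu/ml/machine-learning-databases/"
    "wine-quality/winequality-red.csv"
)
df = pd.read_csv(url, sep=";")    # this particular file is semicolon-delimited

print(df.describe())              # summary statistics per column
print(df.corr())                  # pairwise correlations
```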

Day 19 picks up with feature engineering:

Yesterday we showed that statistical analysis, with all the bells and whistles, can be done quite simply in Azure Machine Learning. Today we will continue with feature engineering and modelling.

So, what is feature engineering? It is a general process that can involve both feature construction (adding new features from the existing data) and feature selection (choosing only the most important features). Its goals include improving model performance and reducing data dimensionality, and it can cover log transformations, outlier removal, scaling (normalisation, standardisation), imputation, general transformations (polynomial and others), variable creation, variable extraction, and so on.
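
Here is a minimal sketch of a few of those steps with pandas and scikit-learn (the toy dataframe is an assumption for the example):

```python
# A minimal sketch of common feature-engineering steps.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "income": [30000, 45000, 1200000, 52000],      # heavily skewed feature
    "age": [25.0, 38.0, 51.0, np.nan],             # contains a missing value
})

df["log_income"] = np.log1p(df["income"])          # log transformation
df["age"] = df["age"].fillna(df["age"].median())   # simple imputation

scaler = StandardScaler()                          # standardisation
df["age_scaled"] = scaler.fit_transform(df[["age"]]).ravel()

print(df)
```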


Sharing Results between Notebooks with MSSparkUtils

Liliam Leme provides an answer to a common Synapse Spark pool question:

I’ve been reviewing customer questions centered around “Have I tried using MSSparkUtils to solve the problem?”

One of the questions asked was how to share results between notebooks. Every time you hit "run" in a notebook, it starts a new Spark cluster, which means that each notebook would be using a different session, making it impossible to share results between executions of notebooks. MSSparkUtils offers a solution to handle this exact scenario.

Read on to see what MSSparkUtils is and how it helps in this case.
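
For context, a minimal sketch of the pattern uses the notebook utilities to run one notebook from another and capture its exit value (the notebook name is hypothetical, and this only works inside a Synapse Spark session):

```python
# A minimal sketch: pass a result from a child notebook to a parent in Synapse.
from notebookutils import mssparkutils

# --- In the child notebook ---
# exit() ends the notebook run and hands a string back to the caller.
mssparkutils.notebook.exit("42")

# --- In the parent notebook ---
# Run the (hypothetical) child notebook with a 90-second timeout and
# capture whatever it passed to exit().
result = mssparkutils.notebook.run("ChildNotebook", 90)
print(result)   # "42"
```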


Pipelines and Jobs in Azure ML

Tomaz Kastrun continues an advent on Azure ML. Day 11 covers pipelines:

A pipeline is a set of instructions (or a workflow) for executing a particular machine learning task. The idea behind pipelines is that they will help a team of data scientists and machine learning engineers standardize their workflow and incorporate best practices for preparing data, producing training models, executing the models, and deploying them. Pipelines help build workflows efficiently and in such a way that they are reusable.

And the idea behind it is to split a machine learning process into smaller tasks, a multistep workflow, where each step is a separate component that can be developed, upgraded, optimised, configured, automated, and deleted separately. These steps, connected through interfaces, form a workflow.
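
A minimal sketch of that step-based structure with the v1 Python SDK might look like this (the script names and compute target are assumptions; the newer azure.ai.ml SDK v2 expresses the same idea with components):

```python
# A minimal sketch of a two-step Azure ML pipeline (azureml SDK v1).
from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()   # assumes a local config.json for the workspace

# Each step is an independently developed and configured component.
prep = PythonScriptStep(name="prep-data", script_name="prep.py",
                        source_directory="./src", compute_target="cpu-cluster")
train = PythonScriptStep(name="train-model", script_name="train.py",
                         source_directory="./src", compute_target="cpu-cluster")
train.run_after(prep)          # connect the steps into a workflow

pipeline = Pipeline(workspace=ws, steps=[train])
run = Experiment(ws, "demo-pipeline").submit(pipeline)
```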

Day 12 makes us get a job:

An Azure ML job executes a task against a specified compute target. This is also how the job is created. By configuring a new job, you can also scale out model training, since both single-node and distributed training are available.

A simple job command would be to execute a command in a Docker container, and parameter sweeping can also be performed by specifying it in the job itself.
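
A minimal sketch of submitting such a job as a script run with the v1 SDK (the folder, script, and compute names are hypothetical; the v2 SDK uses the command job for the same purpose):

```python
# A minimal sketch: submit a training script as a job/run (azureml SDK v1).
from azureml.core import Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()

src = ScriptRunConfig(
    source_directory="./src",      # hypothetical folder holding train.py
    script="train.py",
    compute_target="cpu-cluster",  # a previously created compute target
)

run = Experiment(ws, "demo-job").submit(src)
run.wait_for_completion(show_output=True)
```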


Running Python Code from R via Reticulate

Rick Pack crosses the streams:

I wanted a REPL (read-evaluate-print-loop) so that I could quickly experiment with Python without, for the moment, leaping over what some consider one of the biggest hurdles to Python usage: Work environment set up.

The reticulate R package by Posit enables the use of Python while working within the RStudio IDE. One can find a Posit tutorial here.

Read on for Rick’s notes.
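
If you want to try the same thing, starting the REPL from R takes one call (library(reticulate); repl_python()), after which you type ordinary Python. A minimal sketch of what that session might look like:

```python
# A minimal sketch of what you might type at reticulate's Python prompt,
# started from R with: library(reticulate); repl_python()
import sys
print(sys.version)   # confirms which Python reticulate bound to

# reticulate exposes the R session through the special `r` object, e.g.
# r.mtcars returns R's mtcars data frame as a pandas DataFrame.
# Typing `exit` (no parentheses) returns you to the R prompt.
```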


Working with the AML Python SDK

Tomaz Kastrun continues a series on Azure Machine Learning. Day 9 takes us through a piece of the Python SDK:

The Python SDK namespace is azureml.core.environment. Environments specify the set of Python packages, environment variables, and software settings around your training and scoring scripts. In addition to Python, you can also configure PySpark, Docker, and R for environments.

You can use the Environment namespace (or a created object/asset) to make deployment and code reusable for training purposes with given Docker images, configurations, and compute types.
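
A minimal sketch of defining an environment through that namespace (the environment name and package pin are illustrative):

```python
# A minimal sketch: define a reusable environment (azureml SDK v1).
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

env = Environment(name="my-training-env")      # hypothetical name

deps = CondaDependencies()
deps.add_pip_package("scikit-learn==1.0.2")    # pin packages for reproducibility
env.python.conda_dependencies = deps

# Register to the workspace so runs and deployments can reuse it.
# env.register(workspace=ws)   # assumes an existing Workspace object `ws`
```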

Day 10 shows us how to work with the Python SDK via VS Code or a local Jupyter notebook:

Let's continue to explore the power of the SDK and its namespaces. We will look into the namespace that will help you connect to Azure ML resources with the Python SDK.
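
The connection step itself is short; a minimal sketch (assuming you have downloaded the workspace's config.json next to your notebook):

```python
# A minimal sketch: connect to an Azure ML workspace locally (azureml SDK v1).
from azureml.core import Workspace

ws = Workspace.from_config()   # reads config.json from the working directory
print(ws.name, ws.resource_group, ws.location)
```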


AML Environments and SDKs

Tomaz Kastrun continues an advent of Azure ML. First up is environments:

We have explored how to create a compute instance and compute target and learned that ML frameworks and scripting packages always come preinstalled.

Choosing the right set of components (CPU, GPU, RAM, Core) and corresponding software (OS, ML Framework, packages) can be time-consuming.

Under Curated environments, you will find predefined environments with settings for running particular frameworks, like PyTorch or TensorFlow.
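
Fetching one of those curated environments from the SDK is a one-liner; a minimal sketch (the environment name is illustrative, as curated names change over time):

```python
# A minimal sketch: look up a curated environment by name (azureml SDK v1).
from azureml.core import Environment, Workspace

ws = Workspace.from_config()
env = Environment.get(workspace=ws, name="AzureML-Minimal")  # illustrative name
print(env.name)
```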

Then an overview of the Azure CLI and Python SDK for AML:

What is the Azure CLI? It is the Azure command line, a great tool for running commands from a terminal. It is cross-platform and can be run from Azure or from the client's machine. It is great for scripting and automating repetitive tasks, or for reducing complex tasks to a few lines of code, especially when it comes to infrastructure, management, provisioning, and monitoring. It can also be run from Azure Cloud Shell. It is native to Azure and can be used across all the services and offerings. Azure CLI commands usually start with "az". On top of that, you can also install the Azure Machine Learning CLI as an extension to the Azure CLI; the AML CLI gives you additional commands to manage resources for machine learning.

The same functionality (to some extent) can be achieved in Azure Machine Learning with the Python SDK. In addition to that, it also offers great ways to create and manage the resources you use for training and deployment of models.

And, so that we can catch up a bit to Tomaz, one more post covering the Python SDK:

Having looked briefly into the Azure CLI and Python SDK, let's explore the power of the SDK and the most important namespaces.
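
As one example of the resource management mentioned above, here is a minimal sketch of provisioning a compute cluster with the v1 SDK (the VM size and cluster name are illustrative):

```python
# A minimal sketch: provision a compute cluster (azureml SDK v1).
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",   # illustrative VM size
    min_nodes=0,                 # scale to zero when idle
    max_nodes=2,
)
cluster = ComputeTarget.create(ws, "cpu-cluster", config)
cluster.wait_for_completion(show_output=True)
```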
