Press "Enter" to skip to content

Category: Machine Learning

MLflow 2.0 Now Available

Mike Cornell announces MLflow 2.0:

Today, we are thrilled to announce the availability of MLflow 2.0. Building upon MLflow’s strong platform foundation, MLflow 2.0 incorporates extensive user feedback to simplify data science workflows and deliver innovative, first-class tools for MLOps. Features and improvements include extensions to MLflow Recipes (formerly MLflow Pipelines) such as AutoML, hyperparameter tuning, and classification support, as well as modernized integrations with the ML ecosystem, a streamlined MLflow Tracking UI, a refresh of core APIs across MLflow’s platform components, and much more.

I like a lot of what MLflow does; it’ll be interesting to see how quickly different products adopt 2.0.
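
If you haven’t tried MLflow yet, the core Tracking workflow is only a few lines of Python. Here’s a minimal sketch with made-up parameter and metric values; these basic calls look the same in 2.0:

```python
# Minimal MLflow Tracking sketch; the parameter and metric values are illustrative.
import mlflow

with mlflow.start_run():
    mlflow.log_param("n_estimators", 200)  # a hyperparameter of some hypothetical model
    mlflow.log_metric("rmse", 0.42)        # an evaluation metric from that hypothetical run
    # mlflow.sklearn.log_model(model, "model")  # log a fitted model artifact, if you have one
```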


Working with Multi-Channel Bots in Azure

Matt Eland creates a mega-bot:

The Azure Bot Service is effectively a registration for a conversational AI application on Azure. This registration allows you to connect a deployed chatbot to a wide range of supported channels that users can use to interact with the bot.

This lets you build one bot that can serve a variety of users across multiple different channels, including both text and voice channels.

Additionally, the Azure Bot Service gives you a centralized place to manage, secure, and monitor your bot, regardless of which channel people use to interact with your app.

Read on for an important caveat, as well as more information on Azure Bot Service.


Pattern Learning in Amazon SageMaker

Vishaal Kapoor, et al, take us through an example of pattern learning in Amazon SageMaker:

Pattern learning automatically analyzes your data and surfaces textual constraints that may apply to your dataset. For the example with phone numbers, pattern learning can analyze the data and identify that the vast majority of phone numbers follow the textual constraint [1-9][0-9]{2}-[0-9]{4}. It can also alert you that there are examples of invalid data so that you can exclude or correct them.

In the following sections, we demonstrate how to use pattern learning in Data Wrangler using a fictional dataset of product categories and SKU (stock keeping unit) codes.

Read on for the scenario.
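
To make the idea concrete, here’s a rough sketch in plain Python (not Data Wrangler itself) of flagging values that fail a learned constraint like the phone number pattern above; the values are made up:

```python
import re

# Hypothetical learned constraint from the example above.
PATTERN = re.compile(r"[1-9][0-9]{2}-[0-9]{4}")

values = ["555-0132", "867-5309", "not-a-number"]
invalid = [v for v in values if not PATTERN.fullmatch(v)]
print(invalid)  # ['not-a-number']
```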


Fine-Tuning Hugging Face for Named Entity Recognition in Japanese

Tsuyoshi Matsuzaki tries out a named entity recognition project with the Hugging Face library:

Now a lot of AI companies (such as OpenAI, NLP Cloud, Google, NVIDIA, etc.) provide pre-trained large language models, along with methods for fine-tuning them. Among these tools and frameworks, Hugging Face is widely used and provides over 20,000 transformer-based models.

In this post, I’ll show you a brief example of fine-tuning a transformer model in Hugging Face to get you started.
In the last part of this post, I’ll also optimize training with DeepSpeed, which is well integrated with Hugging Face transformers.

Click through for the results of this analysis.
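
If you haven’t done this before, the general shape of token-classification fine-tuning with the transformers library looks roughly like the sketch below. This is not Tsuyoshi’s exact code: the model name is just one plausible Japanese checkpoint, the label count is assumed, and train_ds / eval_ds stand in for a tokenized NER dataset you would prepare separately (for example, with the datasets library):

```python
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          TrainingArguments, Trainer)

num_labels = 9  # number of entity tags in your (assumed) label scheme
model_name = "cl-tohoku/bert-base-japanese-whole-word-masking"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=num_labels)

args = TrainingArguments(output_dir="ner-ja", num_train_epochs=3,
                         per_device_train_batch_size=16)

# train_ds and eval_ds: tokenized datasets with input_ids, attention_mask, and labels.
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```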


Working with Transformer Models for Machine Translation

Stefania Cristina continues a series on transformer models. First up is plotting loss curves:

We have previously seen how to train the Transformer model for neural machine translation. Before moving on to inferencing the trained model, let us first explore how to modify the training code slightly, in order to be able to plot the training and validation loss curves that can be generated during the learning process. 

The training and validation loss values provide important pieces of information, because they give us better insight into how the learning performance changes over the epochs, and help us diagnose any problems with learning that can lead to an underfit or an overfit model. They will also inform us about the epoch at which to use the trained model weights at the inferencing stage.

Then we get to try it out:

We have seen how to train the Transformer model on a dataset of English and German sentence pairs, as well as how to plot the training and validation loss curves in order to diagnose the model’s learning performance and decide at which epoch to inference the trained model. We are now ready to inference the trained Transformer model for the purpose of translating an input sentence.

In this tutorial, you will discover how to inference the trained Transformer model for neural machine translation. 

Click through for the results and to see exactly why there’s so much computational effort dumped into high-end trained models.
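
The loss-curve idea itself is easy to reproduce for any model once you’ve recorded per-epoch losses. Here’s a minimal matplotlib sketch with made-up values (not the tutorial’s own code); validation loss turning upward while training loss keeps falling would point to overfitting:

```python
import matplotlib.pyplot as plt

# Hypothetical per-epoch losses; in practice these come from the training loop.
train_loss = [4.2, 3.1, 2.4, 2.0, 1.7]
val_loss   = [4.0, 3.0, 2.6, 2.5, 2.7]

epochs = range(1, len(train_loss) + 1)
plt.plot(epochs, train_loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.show()
```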


Training a Language Transformer Model

Stefania Cristina continues a series on building a language transformer:

We have put together the complete Transformer model, and now we are ready to train it for neural machine translation. We shall be making use of a training dataset for this purpose, which contains short English and German sentence pairs. We will also be revisiting the role of masking in computing the accuracy and loss metrics during the training process. 

In this tutorial, you will discover how to train the Transformer model for neural machine translation. 

Read on for the process, including a lot of code.
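
The masking point is worth pausing on: padded positions in a batch shouldn’t count toward the loss or accuracy. Here’s a toy NumPy sketch of the idea, with made-up per-token losses and an assumed padding id of 0 (a sketch of the concept, not the tutorial’s implementation):

```python
import numpy as np

labels       = np.array([5, 12, 7, 0, 0])            # 0 = padding token id (assumed)
token_losses = np.array([1.2, 0.8, 0.5, 0.9, 1.1])   # per-token cross-entropy (made up)

# Zero out padded positions, then average over real tokens only.
mask = (labels != 0).astype(float)
masked_loss = (token_losses * mask).sum() / mask.sum()
print(masked_loss)  # ~0.83
```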


Lack of Training on the Edge

Pete Warden explains a phenomenon:

One of the most frequent questions I get asked from people exploring machine learning beyond cloud and desktop machines is “What about training?”. If you look around at the popular frameworks and use cases of edge ML, most of them seem focused on inference. It isn’t obvious why this is the case though, so I decided to collect my notes in a post here, so I can have something to refer to when this comes up (and organize my own thoughts too!).

Pete’s reasons make sense. I think the last one is the most important.


Feathr in the Linux Foundation

Hangfei Lin and Jinghui Mo make an announcement:

We’re excited to announce today that Feathr is joining LF AI & Data, the Linux Foundation’s umbrella foundation supporting open source innovation in artificial intelligence (AI) and data. Feathr is a feature store that simplifies machine learning (ML) feature serving and improves developer productivity.

“We’re excited to welcome Feathr to LF AI & Data and for it to be part of our technical project portfolio (41 projects and growing) with a community of over 17K developers,” said Dr. Ibrahim Haddad, Executive Director of LF AI & Data. “We aim to support Feathr to expand its user base, grow its community of developers, become a leader within its own category, and enable collaboration and integration opportunities with other projects. We look forward to the project’s continued growth and success as part of LF AI & Data.”

Alex Woodie has more background:

Feathr was originally developed at LinkedIn to help manage and serve features used in its machine learning applications. Instead of manually working with features as part of an individual data pipeline, Feathr automates and standardizes the interaction with the data type, which is used in both the training and inference stages of machine learning.


Choosing between Neural Network Types

Jason Brownlee takes us through three common classes of neural network and explains when each is useful:

In this post, you will discover the suggested use for the three main classes of artificial neural networks.

After reading this post, you will know:

– Which types of neural networks to focus on when working on a predictive modeling problem.

– When to use, not use, and possibly try using an MLP, CNN, and RNN on a project.

– To consider the use of hybrid models and to have a clear idea of your project goals before selecting a model.

Read the whole thing.


Interpreting Kernel SHAP

Michael Mayer digs into Kernel SHAP:

In their 2017 paper on SHAP, Scott Lundberg and Su-In Lee presented Kernel SHAP, an algorithm to calculate SHAP values for any model with numeric predictions. Compared to Monte-Carlo sampling (e.g. implemented in R package “fastshap”), Kernel SHAP is much more efficient.

I had one problem with Kernel SHAP: I never really understood how it works!

Needless to say, Michael knows Kernel SHAP a lot better now, considering there’s now a kernelshap package for us.
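
If you work in Python rather than R, the shap library ships a KernelExplainer built on the same algorithm. A minimal sketch with a placeholder model and data:

```python
import numpy as np
import shap
from sklearn.linear_model import LinearRegression

# Placeholder data and model, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)
model = LinearRegression().fit(X, y)

# Kernel SHAP: explain a few predictions against a background sample.
explainer = shap.KernelExplainer(model.predict, X[:50])
shap_values = explainer.shap_values(X[:5])
print(shap_values.shape)  # one SHAP value per explained row and feature
```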
