
Category: Machine Learning

Plotting Training and Testing Results with tidyAML

Steven Sanderson builds a plot:

In the realm of machine learning, visualizing model predictions is essential for understanding the performance and behavior of our algorithms. When it comes to regression tasks, plotting predictions alongside actual values provides valuable insights into how well our model is capturing the underlying patterns in the data. With the plot_regression_predictions() function in tidyAML, this process becomes seamless and informative.
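For a rough sense of where the function sits, here is a minimal sketch of the kind of pipeline it slots into. The recipe, the engine choices, and the extract_wflw_pred() step are my assumptions about the usual tidyAML workflow, so treat this as illustrative rather than as Steven's exact code:

    library(tidyAML)
    library(recipes)

    # Illustrative setup: a simple recipe and a couple of engines on mtcars
    rec_obj <- recipe(mpg ~ ., data = mtcars)

    frt_tbl <- fast_regression(
      .data        = mtcars,
      .rec_obj     = rec_obj,
      .parsnip_eng = c("lm", "glm"),
      .parsnip_fns = "linear_reg"
    )

    # Pull the predictions for each fitted workflow, then plot them against actuals
    frt_tbl |>
      extract_wflw_pred(1:nrow(frt_tbl)) |>
      plot_regression_predictions()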

Read on to see how the function works and the kind of result you can expect from it.


tidyAML Updates

Steven Sanderson has been busy. First up, a post on tidyAML updates:

One of the standout features in this release is the addition of extract_regression_residuals(). This function empowers users to delve deeper into regression models, providing a valuable tool for analyzing and understanding residuals. Whether you’re fine-tuning your models or gaining insights into data patterns, this enhancement adds a crucial layer to your analytical arsenal.
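As a hedged sketch of usage (mine, not Steven's), given a model table built with fast_regression(), pulling residuals might look like this:

    library(tidyAML)
    library(recipes)

    # Illustrative model table, as in the sketch above
    rec_obj <- recipe(mpg ~ ., data = mtcars)
    frt_tbl <- fast_regression(.data = mtcars, .rec_obj = rec_obj,
                               .parsnip_eng = c("lm", "glm"), .parsnip_fns = "linear_reg")

    # Residuals for each fitted workflow; inspect the first entry of the result
    resid_tbl <- extract_regression_residuals(frt_tbl)
    resid_tbl[[1]]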

Then, Steven goes into detail on .drop_na:

In the newest release of tidyAML, there has been an addition of a new parameter to the functions fast_classification() and fast_regression(). The parameter is .drop_na and it is a logical value that defaults to TRUE. This parameter is used to determine if the function should drop rows with missing values from the output if a model cannot be built for some reason. Let’s take a look at the function and its arguments.
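To make that concrete, here is a hedged example of toggling the new parameter. The data, recipe, and engine list are placeholders of mine; the point is only the .drop_na argument:

    library(tidyAML)
    library(recipes)

    rec_obj <- recipe(mpg ~ ., data = mtcars)

    # With .drop_na = FALSE, rows for models that fail to build stay in the output
    # (with missing results) instead of being dropped, which keeps failures visible
    frt_tbl <- fast_regression(
      .data        = mtcars,
      .rec_obj     = rec_obj,
      .parsnip_eng = c("lm", "glm", "gee"),  # "gee" may fail if its engine package is absent
      .parsnip_fns = "linear_reg",
      .drop_na     = FALSE
    )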

After that, we get to see an updated function:

In response to user feedback, we’ve enhanced the internal_make_wflw_predictions() function to provide a comprehensive set of predictions. Now, when you make a call to this function, it includes:

  1. The Actual Data: This is the real-world data that your model aims to predict. Having access to this information helps you assess how well your model is performing on unseen instances.
  2. Training Predictions: Predictions made on the training dataset. This is essential for understanding how well your model generalizes to the data it was trained on.
  3. Testing Predictions: Predictions made on the testing dataset. This is crucial for evaluating the model’s performance on data it hasn’t seen during the training phase.
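If you want to poke at those three pieces yourself, my understanding is that extract_wflw_pred() is the user-facing way to reach the predictions that internal_make_wflw_predictions() builds under the hood. A hedged sketch (the exact column layout is an assumption, so glimpse() it first):

    library(tidyAML)
    library(recipes)
    library(dplyr)

    rec_obj <- recipe(mpg ~ ., data = mtcars)
    frt_tbl <- fast_regression(.data = mtcars, .rec_obj = rec_obj,
                               .parsnip_eng = c("lm", "glm"), .parsnip_fns = "linear_reg")

    # Predictions for the first workflow: actual values plus training and
    # testing predictions, per the list above
    preds <- extract_wflw_pred(frt_tbl, 1)
    glimpse(preds)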

You can also check out the package’s GitHub repository and see more.


Exporting to CSV in Azure ML Designer

Tom LaRock saves a file:

The most popular feature in any application is an easy-to-find button saying “Export to CSV.” If this button is not visibly available, a simple right-click of your mouse should present such an option. You really should not be forced to spend any additional time on this Earth looking for a way to export your data to a CSV file.

Well, in Azure ML Studio, exporting to a CSV file should be simple, but is not, unless you already know what you are doing and where to look. I was reminded of this recently, and decided to write a quick post in case a person new to ML Studio was wondering how to export data to a CSV file.

Click through for one false start and then the correct answer.


Sample Data in Azure ML Designer

Tom LaRock shows us where the hidden data is:

Recently I was working inside of Azure ML Studio and wanted to browse the sample datasets provided. Except I could not find them. I *knew* they existed, having used them previously, but could not remember if that was in the original ML Studio (classic) or not.

After some trial and error, I found them and decided to write this post in case anyone else is wondering where to find the sample datasets. You’re welcome, future Tom!

Click through to see where those sample datasets are. And yeah, they don’t get updated that frequently. And that’s probably a good thing, as it means when you run the demo two years after someone created it, you’ll still get predictable results.


Batching Text Analytics with Azure AI Services

Matt Eland tries out the TextAnalytics client:

We’ll talk about each one of these capabilities briefly as we cover the results, but at a high level what we want to do is:

  • Perform sentiment analysis to determine if the text is positive, negative, neutral, or mixed.
  • Summarize the text using abstractive summarization which summarizes the text with new text generated by a large language model (LLM).
  • Summarize the text using extractive summarization which summarizes the text by extracting key sentences or parts of sentences to convey the overall meaning.
  • Extract key phrases of interest from the text document.
  • Perform entity recognition and linked entity recognition to determine the major objects, places, people, and concepts the document discusses.
  • Recognize any personally identifiable information (PII) present in the document for potential redaction.
  • Analyze the text for healthcare-specific topics such as treatment plans or medications.

Read on to see how a certain passage of text fares.


Troubleshooting an Azure ML Deployment Locally

I have a new video:

In this video, I take us through the process of creating a local deployment of an Azure ML managed endpoint. We will cover requirements, why you might want to do this, and common problems you may run into along the way.

This was a fun video to make, especially in anticipating the sorts of problems that come up along the way. I won’t pretend that it’s comprehensive, but it does hit several of the most common problems I see (or cause).


Integrating Azure ML and Power BI

I have a new video:

In this video, I show off how easy it is to integrate Azure ML and Power BI, at least once you get past all of the trouble trying to integrate them.

I expected this to be easy. It turns out that the “make it look easy” part depends on having several things in place already and on using the correct (by which I mean “old”) deployment type.


Getting Started with Semantic Kernel in C#

Matt Eland tries out Semantic Kernel:

Generative AI systems use large language models (LLMs) like OpenAI’s GPT-3.5 Turbo (ChatGPT) or GPT-4 to respond to text prompts from the user. But these systems have serious limitations in that they only include information baked into the model at the time of training. Technologies like retrieval-augmented generation (RAG) help overcome this by pulling in additional information.

AI orchestration frameworks make this possible by tying together LLMs and additional sources of information via RAG. Additionally, AI orchestration systems can provide capabilities to generative AI systems, such as inserting records in a database, sending emails, or calling out to external systems.

In this article, we’ll look at the high-level capabilities for building AI orchestration systems in C# with Semantic Kernel, a rapidly maturing open-source AI orchestration framework.

Click through to see how things work.


Oracle OCI Labeling with Bounding Boxes

Brendan Tierney continues a series on image classification:

In a previous post, I gave examples of how to label data using OCI Data Labeling. It was a simple approach to labeling image data for input to AI Vision. In that post, we just gave a label for the image to indicate if the image contained a Cat or a Dog. Yes, that’s a very simple approach, and we can build image classification models, and use the resulting model to predict a label for new images. These would be labeled as a Cat or a Dog with a degree of certainty. Although this simple approach can give OK-ish results, we typically want a more detailed model and predictions. For a more detailed approach, we can use Object Detection. For this, we need to prepare our data set in a slightly different way, and yes, it does take a bit more time to prepare. Or perhaps it takes a lot more time to prepare the data. But this extra time in preparing the data should (in theory) give us a more accurate model.

This post will focus on creating a new labeled dataset using bounding boxes, and in a later post, we’ll examine the resulting model to see if it gives better or more accurate results.

Read on for the process.
