Press "Enter" to skip to content

Category: Machine Learning

The Rise of Single-Purpose ML Frameworks

Pete Warden describes a phenomenon:

The GGML framework is just over a year old, but it has already changed the whole landscape of machine learning. Before GGML, an engineer wanting to run an existing ML model would start with a general purpose framework like PyTorch, find a data file containing the model architecture and weights, and then figure out the right sequence of calls to load and execute it. Today it’s much more likely that they will pick a model-specific code library like whisper.cpp or llama.cpp, based on GGML.

This isn’t the whole story though, because there are also popular model-specific libraries like llama2.c or llama.rs that don’t use GGML, so this movement clearly isn’t based on the qualities of just one framework. The best term I’ve been able to come up with to describe these libraries is “disposable”. I know that might sound derogatory, but I don’t mean it like that, I actually think it’s the key to all their virtues! They’ve limited their scope to just a few models, focus on inference or fine-tuning rather than training from scratch, and overall try to do a few things very well. They’re not designed to last forever, as models change they’re likely to be replaced by newer versions, but they’re very good at what they do.

Pete calls them disposable ML frameworks, though I’d call them single-purpose frameworks to contrast with general-purpose ML frameworks like PyTorch and TensorFlow.
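
To make the contrast concrete, here is a minimal sketch of the single-purpose workflow using the llama-cpp-python bindings for llama.cpp; the GGUF model path is a placeholder for whatever you have downloaded, but the point is that loading and running a model is two calls, not an exercise in framework plumbing.

    # Minimal sketch of the single-purpose workflow via llama-cpp-python
    # (pip install llama-cpp-python). The model path is a placeholder.
    from llama_cpp import Llama

    # Loading the model is one call; quantization, tokenization, and
    # execution details are all baked into the library.
    llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

    # Inference is one more call, with OpenAI-style output.
    output = llm("Q: What is GGML? A:", max_tokens=64)
    print(output["choices"][0]["text"])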

Creating an Image Classification Model in Oracle OCI Vision

Brendan Tierney separates the cats and the dogs:

In this post, I’ll build on the previous work on preparing data, using this dataset as input to building a Custom AI Vision model. In the previous post, the dataset was labelled into images containing Cats and Dogs. The following steps take you through creating the Custom AI Vision model and testing it using some different images of Cats.

This post is part four of a series (first part, second part, third part) on custom image classification in Oracle.
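
If you are curious what scoring against a custom Vision model looks like from code rather than the OCI console, here is a hypothetical sketch with the OCI Python SDK; the OCIDs are placeholders and Brendan’s walkthrough uses the console UI, so treat this as illustrative only.

    # Hypothetical sketch: scoring one image against a custom OCI Vision
    # model with the OCI Python SDK (pip install oci). OCIDs are placeholders.
    import base64
    import oci

    config = oci.config.from_file()  # reads ~/.oci/config
    client = oci.ai_vision.AIServiceVisionClient(config)

    with open("cat.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    details = oci.ai_vision.models.AnalyzeImageDetails(
        compartment_id="ocid1.compartment.oc1..example",
        image=oci.ai_vision.models.InlineImageDetails(data=image_b64),
        features=[oci.ai_vision.models.ImageClassificationFeature(
            model_id="ocid1.aivisionmodel.oc1..example"  # the custom model
        )],
    )

    response = client.analyze_image(details)
    for label in response.data.labels:
        print(label.name, label.confidence)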

ML with Keras and TensorFlow over Streaming Kafka Data

Paul Brebner gives us a streaming scenario for model training:

One of the goals of incremental learning is to train a model continuously from streaming data. Incremental learning from streaming data means you don’t need all the data in memory at once, and the model is as up-to-date as possible, which can matter for real-time use cases. The third driver for incremental learning that I mentioned in the previous blog is when there is concept drift in the data itself—but we’ll ignore this aspect for the time being. 

In the last blog we demonstrated batch training with TensorFlow, and mentioned that TensorFlow, being a neural network framework, has the potential for incremental learning—just like animals and people do. In this blog, we will set ourselves the task of using TensorFlow to demonstrate incremental learning from the same static drone delivery data set of busy/not busy shops that we used in the last blog. 

Read on to see the code, results, and warnings.
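
Paul shares the full implementation, but the general pattern is compact enough to sketch: consume records from a topic and take a gradient step per mini-batch. The sketch below assumes kafka-python and CSV-encoded messages, neither of which is necessarily what Paul uses.

    # Rough sketch of incremental learning from Kafka (not Paul's exact code).
    # Assumes kafka-python and messages that are CSV rows: f1,...,fN,label
    import numpy as np
    from kafka import KafkaConsumer
    from tensorflow import keras

    model = keras.Sequential([
        keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        keras.layers.Dense(1, activation="sigmoid"),  # busy / not busy
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # The topic name is a placeholder for the drone-delivery shop metrics.
    consumer = KafkaConsumer("shop-metrics", bootstrap_servers="localhost:9092")

    batch_x, batch_y = [], []
    for message in consumer:
        *features, label = message.value.decode().split(",")
        batch_x.append([float(v) for v in features])
        batch_y.append(float(label))
        if len(batch_x) == 32:
            # One gradient step per mini-batch; the model stays current
            # without ever holding the full stream in memory.
            model.train_on_batch(np.array(batch_x), np.array(batch_y))
            batch_x, batch_y = [], []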

A Primer on Vector Search

Phil Booth takes a look at vector search systems:

Recently I built a system that uses vector search to logically truncate long documents and retain the most significant parts according to some search term. I’m a dummy, with no background in machine learning or mathematics, so there were new concepts for me to understand and implementation details to figure out. This post summarises what I learned.

Vector search and vector databases are becoming a fairly hot topic, so this at least grounds you on what they are.
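
The core move Phil describes, scoring chunks against a query embedding and keeping the best ones, fits in a few lines. Here is a minimal sketch assuming the sentence-transformers library; the model choice and paragraph-based chunking are illustrative, not his implementation.

    # Minimal sketch of vector-based truncation: keep only the document
    # chunks most similar to a search term. Assumes sentence-transformers;
    # the model name and naive chunking are illustrative choices.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def truncate(document: str, query: str, keep: int = 3) -> str:
        chunks = [p for p in document.split("\n\n") if p.strip()]
        chunk_vecs = model.encode(chunks)      # one vector per chunk
        query_vec = model.encode(query)
        # Cosine similarity between the query and every chunk
        sims = chunk_vecs @ query_vec / (
            np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec)
        )
        top = np.argsort(sims)[-keep:]         # the most significant chunks
        return "\n\n".join(chunks[i] for i in sorted(top))  # original order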

BotChat BiWeekly

Mala Mahadevan starts a newsletter:

I do my best to find trustworthy sources to learn from, but you know how it is – sometimes it’s tough to tell what’s legit. So, if you ever see me post something that seems a bit off, please cut me some slack. These aren’t necessarily my opinions, just things that caught my eye.

What I learn is just my take on what I heard or read. It might not always jive with what the original speaker or writer means or understands. I don’t use any fancy AI bots like ChatGPT to help me out. I just quote stuff and break it down in my own words.

Mala focuses on a pair of videos. I snuck into the newsletter with a few bomb-throwing statements, particularly around anthropomorphism (the assignment of human or human-like qualities to non-humans). Anthropomorphism is extremely common in language. It’s all well and good as metaphor, but once you start to believe it for real, that’s when you end up in trouble.

Training a Code-First Model in Azure ML

I have a new video:

In this video, we walk through the code in an Azure Machine Learning project and see how the pieces fit together.

There are a few more videos to go in this Azure ML series, and I’d recommend going through them in order to understand how we got to this video, but this one is what I’ve been building toward.
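
For those who haven’t watched yet, the shape of a code-first training job in the v2 Python SDK (azure-ai-ml) looks roughly like the sketch below; the workspace identifiers, compute target, and curated environment are placeholders rather than the ones from the video.

    # Rough sketch of submitting a code-first Azure ML training job with the
    # v2 Python SDK (pip install azure-ai-ml). All names are placeholders.
    from azure.ai.ml import MLClient, command
    from azure.identity import DefaultAzureCredential

    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
        resource_group_name="<resource-group>",
        workspace_name="<workspace>",
    )

    # A command job wraps the training script, its environment, and the
    # compute target it should run on.
    job = command(
        code="./src",  # folder containing train.py
        command="python train.py --epochs 10",
        environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
        compute="cpu-cluster",
        display_name="code-first-training",
    )

    returned_job = ml_client.jobs.create_or_update(job)  # submit the job
    print(returned_job.studio_url)  # follow progress in the studio UI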

SeamlessM4T: Multimodal Speech and Text Translation

Facebook has announced a new library:

Today, we’re introducing SeamlessM4T, the first all-in-one multimodal and multilingual AI translation model that allows people to communicate effortlessly through speech and text across different languages. SeamlessM4T supports:

  • Speech recognition for nearly 100 languages
  • Speech-to-text translation for nearly 100 input and output languages
  • Speech-to-speech translation, supporting nearly 100 input languages and 36 (including English) output languages
  • Text-to-text translation for nearly 100 languages
  • Text-to-speech translation, supporting nearly 100 input languages and 35 (including English) output languages

The open source library is available on GitHub and you can also get the model itself on HuggingFace. The nicest thing about all of this is that, unlike existing translation services, you can run it entirely offline and perform the inference on local compute.
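
If you want to try that local inference, here is a hedged sketch using the Hugging Face transformers port of the model (the seamless_communication repo on GitHub exposes its own Translator class instead); the checkpoint name follows the published model card.

    # Hedged sketch: offline text-to-text translation with the transformers
    # port of SeamlessM4T. The seamless_communication GitHub repo has its
    # own API; checkpoint name per the Hugging Face model card.
    from transformers import AutoProcessor, SeamlessM4TModel

    processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
    model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

    # English in, French out, all on local compute.
    inputs = processor(text="Machine translation, fully offline.",
                       src_lang="eng", return_tensors="pt")
    tokens = model.generate(**inputs, tgt_lang="fra", generate_speech=False)
    print(processor.decode(tokens[0].tolist()[0], skip_special_tokens=True))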

Text-to-Video with Azure Open AI and Semantic Kernel

Sabyasachi Samaddar continues a series on generating video from a series of text prompts:

Welcome back to the second part of our journey into the world of Azure and OpenAI! In the first part, we explored how to transform text into video using Azure’s powerful AI capabilities. This time, we’re taking a step further by orchestrating our application flow with Semantic Kernel.

Semantic Kernel is a powerful tool that allows us to understand and manipulate the meaning of text in a more nuanced way. By using Semantic Kernel, we can create more sophisticated workflows and generate more meaningful results from our text-to-video transformation process.

In this part of the series, we will focus on how Semantic Kernel can enhance our application and provide a smoother, more efficient workflow. We’ll dive deep into its features, explore its benefits, and show you how it can revolutionize your text-to-video transformation process.

Read on for an understanding of how Semantic Kernel fits in and what you can do with it.
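
As a taste of what that orchestration looks like, here is an illustrative sketch with the semantic-kernel Python SDK. The SDK’s surface has changed quickly across releases (this follows the 0.3-era API), and the prompt and deployment names are placeholders, not Sabyasachi’s code.

    # Illustrative sketch of Semantic Kernel orchestration (0.3-era Python
    # API; the SDK has changed quickly, so treat this as a sketch only).
    import semantic_kernel as sk
    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

    kernel = sk.Kernel()
    kernel.add_chat_service(
        "chat",
        AzureChatCompletion("<deployment>", "<endpoint>", "<api-key>"),
    )

    # A semantic function is a templated prompt the kernel can invoke as
    # one step in a pipeline (e.g., script -> scenes -> images -> video).
    scene_writer = kernel.create_semantic_function(
        "Break the following script into numbered visual scenes:\n{{$input}}",
        max_tokens=500,
    )

    print(scene_writer("A short demo video about our new coffee maker."))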

ML Model Interactions and hstats

Michael Mayer has a new R package for us, implementing Friedman and Popescu’s H-statistics for measuring interaction strength:

This post is mainly about the third approach. Its beauty is that we get information about all interactions. The downside: it is as good/bad as partial dependence functions. And: the statistics are computationally very expensive to compute (of order n^2).

Different R packages offer some of these H-statistics, including {iml}, {gbm}, {flashlight}, and {vivid}. They all have their limitations. This is why I wrote the new R package {hstats}:

Click through for an overview of the package and an example of how it works.
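
For intuition on what is being computed, and why it is O(n^2), here is a hedged Python sketch of the pairwise H² statistic built directly from partial dependence functions evaluated at the observed data; {hstats} itself is R and considerably more efficient.

    # Hedged sketch of Friedman & Popescu's pairwise H^2 statistic in Python,
    # to show why the computation is O(n^2). Works on any fitted model with
    # a predict() method and a NumPy feature matrix X.
    import numpy as np

    def centered_pd(model, X, cols):
        """Partial dependence on the feature subset `cols`, evaluated at each
        observation's own values: n predictions over n rows -> O(n^2)."""
        out = np.empty(len(X))
        for i in range(len(X)):
            X_mod = X.copy()
            X_mod[:, cols] = X[i, cols]           # clamp subset to row i
            out[i] = model.predict(X_mod).mean()  # average over the rest
        return out - out.mean()                   # H-statistics center PDs

    def h2_pairwise(model, X, j, k):
        """Share of the joint PD of features (j, k) not explained by the
        two univariate PDs; near 0 means no interaction."""
        pd_j = centered_pd(model, X, [j])
        pd_k = centered_pd(model, X, [k])
        pd_jk = centered_pd(model, X, [j, k])
        return ((pd_jk - pd_j - pd_k) ** 2).sum() / (pd_jk ** 2).sum()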

Creating a Simple Video with Azure Open AI and Cognitive Services

Sabyasachi Samaddar has an interesting project:

In today’s digital age, video content has become a powerful medium for communication and storytelling. Whether it’s for marketing, education, or entertainment purposes, videos can captivate and engage audiences in ways that traditional text-based content often cannot. However, creating compelling videos from scratch can be a time-consuming and resource-intensive process.

Fortunately, with the advancements in artificial intelligence and the availability of cloud-based services like Azure Open AI and Cognitive Services, it is now possible to automate and streamline the process of converting text into videos. These cutting-edge technologies provide developers and content creators with powerful tools and APIs that leverage natural language processing and computer vision to transform plain text into visually appealing and professional-looking videos.

This document serves as a comprehensive guide and a starting point for developers who are eager to explore the exciting realm of Azure Open AI and Cognitive Services for text-to-video conversion. While this guide presents a basic implementation, its purpose is to inspire and motivate developers to delve deeper into the possibilities offered by these powerful technologies.

Click through for a guide on how to do it.
