Press "Enter" to skip to content

Category: Generative AI

Fine-Tuning an Azure AI Model

Tomaz Kastrun updates a generative AI model:

Fine-tuning is the process of optimizing a pretrained model by training it on your specific dataset, which often contains more examples than you can fit in a prompt. Fine-tuning helps you achieve higher-quality results for specific tasks, save on token costs through shorter prompts, and reduce request latency.

Read on to see how you can do this. Note that you’ll need to set up the fine-tuning data in a particular format for whatever model you’re using.
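As a point of reference, Azure OpenAI fine-tuning for chat models expects the training data as JSONL, with one chat example per line. Here is a minimal Python sketch of producing such a file; the file name and example conversations are made up for illustration.

import json

# Each training example is a short chat: system instructions, a user prompt,
# and the assistant reply you want the fine-tuned model to learn to produce.
training_examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful SQL assistant."},
            {"role": "user", "content": "How do I list all tables in a database?"},
            {"role": "assistant", "content": "Query sys.tables, for example: SELECT name FROM sys.tables;"},
        ]
    },
    # ...add more examples, each one becoming a single line in the output file
]

# Write one JSON object per line (JSONL); this is the file you upload when
# creating the fine-tuning job.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")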


Azure AI and Content Safety

Tomaz Kastrun continues a series on Azure AI, this time focusing on content safety functionality. First up is an overview of the product:

The Azure AI Content Safety service detects harmful user-generated and AI-generated content in applications and services. It includes text and image APIs that allow you to detect harmful or inappropriate material. This service, like all the other services, is easy to integrate into your app.

After that is how to access items via the SDK:

The Python SDK contains several functions to analyze text and images, and to manage blocklists for text moderation. With the SDK you can cover the following scenarios:

  • Text moderation: detect hate, sexual, self-harm, and violence content in text.
  • Image moderation: detect hate, sexual, self-harm, and violence content in images.

Coming back to the example we covered yesterday – moderating text content – we can alter the filtering to suit your needs.

Click through to see how it works.
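For a sense of what the SDK calls look like, here is a rough Python sketch using the azure-ai-contentsafety package; the endpoint, key, and sample text are placeholders, and the severity threshold you filter on is up to you.

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for the Content Safety resource.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a piece of user-generated text across the built-in harm categories.
response = client.analyze_text(AnalyzeTextOptions(text="Some user-generated text"))

# Each result carries a category (Hate, Sexual, SelfHarm, Violence) and a
# severity score; filter or block based on the thresholds that suit your app.
for result in response.categories_analysis:
    print(result.category, result.severity)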


Deployment Parameters in Azure AI Foundry

Tomaz Kastrun continues a series on Azure AI:

Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant’s personality, tell it what it should and shouldn’t answer, and tell it how to format responses. There’s no token limit for this section, but it will be included with every API call, so it counts against the overall token limit.

Click through for a description of each part of the deployment parameters section.
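To put those parameters in context, here is a hedged sketch of how the system message, temperature, and response-length settings roughly map onto a chat completions call with the openai Python package against an Azure deployment; the endpoint, key, API version, and deployment name are placeholders.

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment created in Azure AI Foundry
    messages=[
        # The system message carries the behavior and context instructions;
        # it is sent with every call, so it counts against the token limit.
        {"role": "system", "content": "You answer only SQL Server questions, briefly and politely."},
        {"role": "user", "content": "What is a columnstore index?"},
    ],
    temperature=0.7,  # controls randomness of the output
    max_tokens=400,   # caps the length of the generated response
)

print(response.choices[0].message.content)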


Deployment in Azure AI Foundry

Tomaz Kastrun continues a series on Azure AI:

When you are in Azure AI Foundry, on the left navigation bar, select “Model Catalog”.

For this demo, I will select the multimodal model “gpt-4”, which can work with images and text.

Click “> Deploy”, then select the deployment type and customize the deployment details.

Tomaz has some step-by-step instructions, a bit of detail on deployment types, and a bit of info on how to consume the results.
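As a rough illustration of consuming such a deployment, the sketch below sends text plus an image to the deployed multimodal model using the openai Python package; the endpoint, key, deployment name, and image URL are placeholders.

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="<your-gpt-4-deployment>",  # the deployment name chosen during deployment
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/sample.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)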


Creating a Project in Azure AI Foundry

Tomaz Kastrun continues a series on Azure AI:

The Azure AI model inference service provides access to the most powerful models available in the Azure AI model catalog. Coming from key model providers in the industry, including OpenAI, Microsoft, Meta, Mistral, Cohere, G42, and AI21 Labs, these models can be integrated with software solutions to deliver a wide range of tasks, including content generation, summarization, image understanding, semantic search, and code generation.

The Azure AI model inference service provides a way to consume models as APIs without hosting them on your infrastructure. Models are hosted in a Microsoft-managed infrastructure, which enables API-based access to the model provider’s model. API-based access can dramatically reduce the cost of accessing a model and simplify the provisioning experience.

Read on to learn more about what you get when you create a project.
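As a rough sketch of that API-based access, the azure-ai-inference Python package exposes a chat client that works against the project's hosted models; the endpoint, key, and model name below are placeholders.

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder project inference endpoint and key.
client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<your-key>"),
)

response = client.complete(
    model="<deployed-model-name>",  # e.g. a Mistral or Cohere deployment from the catalog
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what the Azure AI model inference service does."),
    ],
)

print(response.choices[0].message.content)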


Features in Azure AI Foundry

Tomaz Kastrun continues a series:

Azure AI Foundry is an all-purpose tool that provides all of the essential ingredients data scientists need to create, develop, and deploy generative AI applications. The platform supports and provides the following services and abilities:

Click through for those features and how you can access the Azure AI Foundry.


A Review of the Azure AI Foundry

Tomaz Kastrun starts a new series:

Microsoft Azure offers multiple services that enable developers to build amazing AI-powered solutions. Azure AI Foundry brings these services together in a single unified experience for AI development on the Azure cloud platform.

Until now, developers needed to work with multiple tools and web portals within a single project. With Azure AI Foundry, these tasks are simplified, and it offers a single environment for better collaboration.

Read on to see more about the Azure AI Foundry.


Generative AI Answers: Do Not Trust, Do Verify

Erik Darling speaks wisdom:

Here’s what I’ve used it for with some success:

  • Creating images for Beer Gut Magazine
  • Summarizing long documents
  • Writing boilerplate stuff that I’m bad at (sales and marketing drivel, abstracts, lists of topics)

But every time I ask it to do that stuff, I really have to pay attention to what it gives me back. It’s often a reasonable starting place, but sometimes it really goes off the rails.

That’s true of technical stuff, too. Here’s where I’ve had a really bad time, and if there’s anything you know deeply and intimately, you’ll find similar problems too.

Click through for Erik’s experience. That’s pretty close to my own, and is a big part of why I refer to generative AI models as being akin to drunken interns: sure, give them assignments, but you’d better double-check every part of it.


Security Risk Profile in AI-Generated Code

Jerome Robert reviews the papers:

As such, nowadays, almost all developers use some form of AI-generated code — and they absolutely should. AI tools make developers’ lives easier by leveraging the knowledge cultivated by the development community over time and across the globe to overcome obstacles that, while potentially new and challenging to them, have long been addressed. They can reasonably trust that code to perform the function they want to achieve — and can test it to be sure.

But can they trust that code to be secure? Absolutely not. For all the time and work spent committing functional code, just as much, if not more, is spent working through the security backlog afterward.

Click through for a summary of two recent academic papers, as well as links to the papers themselves.
