Category: Generative AI

An Overview of Azure OpenAI and the Azure AI Hub

Tomaz Kastrun has a pair of posts. First up, an overview of Azure OpenAI:

Let’s first address the elephant in the room. We have explored Azure AI Foundry, and we also have Azure OpenAI. So what is the core difference? Let’s take a look:

The services in the back:

  • Azure AI Services has much broader AI capabilities and simpler integration into real-world applications. It offers mostly pre-built APIs for all services (face recognition, document recognition, speech recognition, computer vision, image recognition, and more), which allows better interoperability and connection to machine learning services (Azure Machine Learning Service).
  • Azure OpenAI focuses primarily on OpenAI LLM models (Azure AI Services supports many others) and provides great agents for conversations, content tools, RAG, and natural language services.

After that comes an overview of the Azure AI Hub and AI projects:

In AI Foundry portal, hubs provide the environment for a team to collaborate and organize work, and help you as a team lead or IT admin centrally set up security settings and govern usage and spend. You can create and manage a hub from the Azure portal or from the AI Foundry portal, and then your developers can create projects from the hub.

In essence, Hubs are the primary top-level Azure resource for AI Foundry. Their purpose is to govern security, connectivity, and computing resources across playgrounds and projects.

Fine-Tuning an Azure AI Model

Tomaz Kastrun updates a generative AI model:

Fine-tuning is the process of optimizing a pretrained model by training it on your specific dataset, which often contains more examples than you can typically fit in a prompt. Fine-tuning helps you achieve higher quality results for specific tasks, save on token costs with shorter prompts, and improve request latency.

Read on to see how you can do this. Note that you’ll need to set up the fine-tuning data in a particular format for whatever model you’re using.
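The "particular format" mentioned above is, for GPT-family models, a JSONL file where each line is a chat-style training example. A minimal sketch of building and validating such a file (the example conversations and file name are illustrative, not from the original post):

```python
import json

# Hypothetical training examples in the chat-completion JSONL format
# used for fine-tuning GPT-family models: one JSON object per line,
# each containing a "messages" array of system/user/assistant turns.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful SQL assistant."},
        {"role": "user", "content": "How do I list all tables?"},
        {"role": "assistant", "content": "Query sys.tables, e.g. SELECT name FROM sys.tables;"},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a helpful SQL assistant."},
        {"role": "user", "content": "How do I see column names for a table?"},
        {"role": "assistant", "content": "Use sys.columns or INFORMATION_SCHEMA.COLUMNS."},
    ]},
]

# Write one JSON object per line (JSONL).
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Quick sanity check: every line must parse and contain a "messages" list.
with open("training_data.jsonl", encoding="utf-8") as f:
    lines = [json.loads(line) for line in f]
print(len(lines), all("messages" in ex for ex in lines))
```

The uploaded file is then referenced when creating the fine-tuning job in the portal or via the SDK.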

Azure AI and Content Safety

Tomaz Kastrun continues a series on Azure AI, this time focusing on content safety functionality. First up is an overview of the product:

The Content Safety Azure AI service detects harmful user-generated and AI-generated content in applications and services. It includes text and image APIs that allow you to detect harmful or inappropriate material. This service, like all the other services, is easy to integrate into your app.

After that is how to access items via the SDK:

The Python SDK contains several functions to analyze text, images, and manage blocklists in text moderation. With the SDK you can cover the following scenarios:

  • Text moderation: detect hate, sexual, self-harm, and violence content in text.
  • Image moderation: detect hate, sexual, self-harm, and violence content in images.

Coming back to the example we covered yesterday – moderating text content – we can adjust the filtering to suit your needs.

Click through to see how it works.
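The moderation-and-filtering idea above can be sketched without calling the service: build a request body for the four harm categories the service reports (Hate, SelfHarm, Sexual, Violence) and apply an adjustable severity threshold to a response. The endpoint details are omitted and the sample response is simulated, so this is a shape sketch rather than a live call:

```python
# The four harm categories reported by the Content Safety text analysis API.
CATEGORIES = ["Hate", "SelfHarm", "Sexual", "Violence"]

def build_analyze_request(text):
    """Request body asking for severity scores in all four harm categories."""
    return {"text": text, "categories": CATEGORIES}

def is_blocked(analysis, threshold=2):
    """Block when any category's severity meets or exceeds the threshold.

    `analysis` mirrors the shape of the service response, e.g.
    {"categoriesAnalysis": [{"category": "Hate", "severity": 0}, ...]}.
    Raising or lowering `threshold` is how you adjust the filtering.
    """
    return any(item["severity"] >= threshold
               for item in analysis.get("categoriesAnalysis", []))

# Simulated service response for illustration (no service call made here).
sample = {"categoriesAnalysis": [
    {"category": "Hate", "severity": 0},
    {"category": "SelfHarm", "severity": 0},
    {"category": "Sexual", "severity": 0},
    {"category": "Violence", "severity": 4},
]}
print(is_blocked(sample))
```

A stricter application would lower the threshold; a more permissive one would raise it, which is the knob the excerpt refers to when it mentions adjusting the filtering.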

Deployment Parameters in Azure AI Foundry

Tomaz Kastrun continues a series on Azure AI:

Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant’s personality, tell it what it should and shouldn’t answer, and tell it how to format responses. There’s no token limit for this section, but it will be included with every API call, so it counts against the overall token limit.
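The reason the system section counts against the token limit on every call is visible in the request shape: the system message is prepended to the messages array each time. A small sketch (the prompt text is an illustrative placeholder):

```python
# Illustrative system prompt; in the portal this is the "Give the model
# instructions" section of the deployment parameters.
SYSTEM_PROMPT = (
    "You are a concise assistant for SQL Server questions. "
    "Answer in at most three sentences and decline off-topic requests."
)

def build_messages(user_prompt, history=None):
    """Assemble the payload sent on each call: system message first,
    then any conversation history, then the new user turn."""
    return ([{"role": "system", "content": SYSTEM_PROMPT}]
            + (history or [])
            + [{"role": "user", "content": user_prompt}])

first_call = build_messages("What is a clustered index?")
second_call = build_messages(
    "And a nonclustered one?",
    history=first_call[1:] + [{"role": "assistant", "content": "..."}],
)

# The system message appears in both payloads, so its tokens are spent
# on every request, not just the first one.
print(first_call[0]["role"], second_call[0]["role"])
```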

Click through for a description of each part of the deployment parameters section.

Deployment in Azure AI Foundry

Tomaz Kastrun continues a series on Azure AI:

When you are in Azure AI Foundry, on the left navigation bar, select “Model Catalog”.

For this demo, I will select the multimodal model “gpt-4”, which can work with images and text.

Click “> Deploy” and select the deployment type and also customize the deployment details.

Tomaz has some step-by-step instructions, a bit of detail on deployment types, and a bit of info on how to consume the results.
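Consuming a multimodal deployment like the one above means the user message carries a list of typed content parts rather than a plain string: a text part plus an image part, commonly supplied as a base64 data URL. A sketch of that message shape (the image bytes here are a fake stand-in, and no request is sent):

```python
import base64

# Stand-in bytes; in practice you would read a real image file.
fake_png_bytes = b"\x89PNG..."
b64 = base64.b64encode(fake_png_bytes).decode("ascii")

# Multimodal user message: content is a list of typed parts, mixing
# text with an image supplied as a base64 data URL.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is shown in this image?"},
        {"type": "image_url",
         "image_url": {"url": f"data:image/png;base64,{b64}"}},
    ],
}

# This message would be posted to the deployment's chat-completions
# endpoint; here we only confirm its structure.
print([part["type"] for part in message["content"]])
```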

Creating a Project in Azure AI Foundry

Tomaz Kastrun continues a series on Azure AI:

Azure AI models inference service provides access to the most powerful models available in the Azure AI model catalog. Coming from the key model providers in the industry, including OpenAI, Microsoft, Meta, Mistral, Cohere, G42, and AI21 Labs, these models can be integrated with software solutions to perform a wide range of tasks, including content generation, summarization, image understanding, semantic search, and code generation.

The Azure AI model inference service provides a way to consume models as APIs without hosting them on your infrastructure. Models are hosted in a Microsoft-managed infrastructure, which enables API-based access to the model provider’s model. API-based access can dramatically reduce the cost of accessing a model and simplify the provisioning experience.
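The API-based access described above amounts to a plain HTTPS call against a Microsoft-hosted endpoint. A hedged sketch using only the standard library, where the endpoint URL and key are placeholders read from the environment and the request is only sent when they are present:

```python
import json
import os
import urllib.request

def build_request(endpoint, api_key, payload):
    """Construct a POST request for a hosted chat-completions endpoint.
    The endpoint URL and api-key header are placeholders for whatever
    your resource exposes."""
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )

payload = {
    "messages": [{"role": "user", "content": "Summarize what a hub is."}],
    "max_tokens": 128,
}

# Hypothetical environment variables; nothing is sent without them.
endpoint = os.environ.get("AZURE_AI_ENDPOINT")
api_key = os.environ.get("AZURE_AI_API_KEY")

if endpoint and api_key:
    with urllib.request.urlopen(build_request(endpoint, api_key, payload)) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
else:
    print("No credentials set; request not sent.")
```

The point of the sketch is that nothing in it hosts a model: the model lives in Microsoft-managed infrastructure and the application only holds an endpoint and a key.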

Read on to learn more about what you get when you create a project.

A Review of the Azure AI Foundry

Tomaz Kastrun starts a new series:

Microsoft Azure offers multiple services that enable developers to build amazing AI-powered solutions. Azure AI Foundry brings these services together in a single unified experience for AI development on the Azure cloud platform.

Until now, developers needed to work with multiple tools and web portals in a single project. Azure AI Foundry simplifies these tasks and offers a shared environment for better collaboration.

Read on to see more about the Azure AI Foundry.
