Press "Enter" to skip to content

Category: Generative AI

Testing ChatGPT with Bad Advice

Louis Davidson continues a series:

As I noted in part 1 of this series, I have set out to test an LLM’s ability to do technical editing. For my first set of tests, I am using a pair of articles I created, filled with very bad advice. The advice is the same in both articles; what differs is the intro and the conclusion. One says the advice is good, the other says it is bad. It is all very, very bad, including a really terrible SELECT statement versus loop construct that causes an infinite loop inserting into a temporary table.

My goal is to see how much of that advice gets flagged as bad, and whether the LLM says anything nice about the text at all. If you want to see the entire documents, you can get them here in a zip file, in both text and Word document formats.

Starting with an extreme example like this makes sense, I believe. The results were reasonable, though it sounds like Louis won’t be out of a job anytime soon.


Data Conversion via Generative AI

Grant Fritchey rearranges some data:

The DM-32 is a Digital Mobile Radio (DMR) as well as an analog radio. You can follow the link to understand all that DMR represents when talking radios. I want to focus on the fact that you have to program the behaviors into a DMR radio. While the end result is identical for every DMR radio, how you get there, the programming software, is radically different for every single radio (unless you get a radio that supports the open source OpenGD77; yeah, playing radio involves open source as well). Which means, if I have more than one DMR radio (I’m currently at 7, and no, I don’t have a problem, shut up), I have more than one Customer Programming Software (CPS) that is completely different from the other CPS formats.

Now, I like to set up my radios similarly. After all, the local repeaters, my hotspot, and the Talkgroups I want to use are all common. Since every CPS is different, you can’t just export from one and import to the next. However, I had the idea of using AI for data conversion. Let’s see how that works.

Click through for the scenario as well as Grant’s results, which were pretty successful for a data mapping operation, though the choice of model and the simplicity of the input and output examples are important for generating the Python code.
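Under the hood, this is an ordinary data mapping exercise. For a rough idea of what such generated code might look like, here is a minimal sketch in Python; every column name is hypothetical, since the real CPS export formats differ per radio:

    import csv

    # Hypothetical mapping from one CPS export's columns to another's.
    # Actual CPS formats differ per radio; every name here is a placeholder.
    COLUMN_MAP = {
        "Channel Name": "Name",
        "RX Frequency": "Receive Frequency",
        "TX Frequency": "Transmit Frequency",
        "Color Code": "CC",
        "Talk Group": "Contact",
    }

    def convert(source_path, target_path):
        """Read one CPS's CSV export and write rows in another CPS's layout."""
        with open(source_path, newline="") as src, open(target_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=list(COLUMN_MAP.values()))
            writer.writeheader()
            for row in reader:
                writer.writerow({new: row.get(old, "") for old, new in COLUMN_MAP.items()})

    convert("dm32_export.csv", "other_radio_import.csv")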


Feeding Language Models Bad Advice

Louis Davidson begins an experiment:

So, I got this idea to test out a few LLMs, at the very least ChatGPT and the web and Office versions of Copilot, and see how they handle a load of bad advice. So I put out a question on X, asking:

“A request! Send me your most realistic, but worst, SQL Server management advice. I want to test (and write an article) about using AI to fact check writing.”

And if there’s anything this community is good at, it’s providing bad advice for purposes of lampooning.

We are going to need to wait a week to see Louis’s results, but you can check out some of the terrible advice a variety of X users proffered.


Generative AI Assistance in Building Power BI Custom Visuals

Kurt Buhler discusses a process:

In Power BI, advanced report creators often need to use custom visuals to fulfill their requirements or create certain designs. In previous articles that we published at SQLBI, we discussed the options available to make custom visuals, such as SVG visuals that you can make by using DAX. We also gave examples of when you might choose one approach over another, for example, if you want to make a bullet chart. However, creating custom visuals in Power BI is complex, and requires technical skills that most Power BI report creators do not have. In this article, we examine how you can use AI assistance to help you plan and create custom visuals.

A high-level overview of the process we will take and the desired result is below. It is important to emphasize that this article focuses on the general process, and not specific steps to obtain the result.

Click through for a long-form article on the subject. I’m generally fairly sour on relying too much on generative AI solutions for, well, much of anything. That’s part of why you see so few posts on the topic here. My main problem is that it works best in situations where you already know enough to separate wheat from chaff, or good code from broken/insecure/buggy code. I think Kurt strikes a good tone in this article and it’s well worth the read.


Thoughts on LLM Ethics

Eugene Meidinger has some thoughts:

The more I tried to research practical ways to make use of ChatGPT and Power BI, the more pissed I became. Like bitcoin and NFTs before it, this is a world inextricably filled with liars, frauds, and scam artists. Honestly many of those people just frantically erased blockchain from their business cards and scribbled on “AI”.

There are many valid and practical uses of AI; I use it daily. But there are just as many people who want to take advantage of you. It is essential to educate yourself on how LLMs work and what their limitations are.

I am saddened that my rants on the topic didn’t merit Eugene explicitly mentioning me. My natural response will be to rant harder until I receive the attention I desire. In the meantime, read the whole thing.


Azure AI Foundry Notes

Tomaz Kastrun wraps up a series on Azure AI. First up is tracing in Azure AI Foundry:

Tracing is a powerful tool that offers developers an in-depth understanding of the execution process of their generative AI applications. Though still in preview (at the time of writing this post), it provides a detailed view of the execution flow of the application and the essential information for debugging or optimisation.
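Foundry’s tracing is built on OpenTelemetry, so the spans it collects look much like the ones you would emit by hand. Here is a minimal, illustrative sketch using the opentelemetry-sdk Python package, with made-up span names standing in for the steps of a generative AI app:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    # Print spans to the console; Foundry exports them to its own backend instead.
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("genai-app")

    # Each step of the application becomes a span, which is what produces the
    # detailed execution-flow view described above.
    with tracer.start_as_current_span("handle_user_question") as span:
        span.set_attribute("gen_ai.request.model", "gpt-4o")  # illustrative attribute
        with tracer.start_as_current_span("retrieve_context"):
            pass  # e.g., a vector search against your documents
        with tracer.start_as_current_span("call_llm"):
            pass  # e.g., the chat completion request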

After that, we can see how to evaluate model results:

With evaluation, you perform iterative, systematic evaluations with the right evaluators, measuring and addressing potential response quality, safety, or security concerns throughout the AI development lifecycle, from initial model selection through post-production monitoring.

With Evaluation in Azure AI Foundry, you can evaluate the GenAI Ops lifecycle in production. In addition, it gives you the ability to assess the frequency and severity of content risks or undesirable behavior in AI responses.
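To give a flavor of what an evaluator call looks like, here is a minimal sketch using the azure-ai-evaluation Python package; the endpoint, key, and deployment values are placeholders, and the exact signatures may have shifted since the time of writing:

    from azure.ai.evaluation import RelevanceEvaluator

    # Placeholder configuration for the judge model; substitute your own values.
    model_config = {
        "azure_endpoint": "https://<your-resource>.openai.azure.com",
        "api_key": "<your-api-key>",
        "azure_deployment": "gpt-4o",
    }

    relevance = RelevanceEvaluator(model_config)

    # Score a single question/answer pair for response quality (relevance).
    result = relevance(
        query="What is the capital of France?",
        response="Paris is the capital of France.",
    )
    print(result)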

Finally, Tomaz wraps up the series with some notes on documentation:

Documentation and material for Azure AI Foundry are plentiful and growing on a daily basis, since the topic of AI and GenAI is ever more popular.

I appreciate the challenge that Tomaz has of putting together 25 blog posts in a month, especially when they’re all tied to a single theme.


Prompt Flow in Azure AI

Tomaz Kastrun continues a series on Azure AI. First up is an introduction to Prompt Flow:

Prompt flow in Azure AI Foundry is a development tool for designing flows (pipelines) that cover the complete end-to-end development cycle of an LLM-based AI application. You can create, iterate, test, orchestrate, debug, and monitor your flows.

After that, we get a demonstration of Prompt Flow in Python:

Prompty gives you the ability to create an end-to-end solution, like RAG, where you can chat with an LLM over an article or document, or ask it to classify the input data (a list of URLs, …).

A Prompty is a markdown file, structured with YAML front matter that encapsulates a series of metadata fields pivotal for defining the model’s configuration and the inputs. After this front matter comes the prompt template, articulated in the Jinja format.
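To make that structure concrete, here is a minimal, hypothetical .prompty file (every value is a placeholder):

    ---
    name: Summarize Article
    description: Minimal illustrative Prompty file.
    model:
      api: chat
      configuration:
        type: azure_openai
        azure_deployment: gpt-4o
    inputs:
      article:
        type: string
    ---
    system:
    You are a helpful assistant that summarizes articles in two sentences.

    user:
    {{article}}

The YAML front matter carries the model configuration and input definitions, and everything after the second --- is the Jinja prompt template. Assuming the promptflow package, loading and running it from Python is then short:

    from promptflow.core import Prompty

    # Load the file above and call it like a function.
    flow = Prompty.load(source="summarize.prompty")
    print(flow(article="<text to summarize>"))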


Models and Endpoints in Azure AI Foundry

Tomaz Kastrun continues a series on Azure AI:

Models from the model catalog can be deployed using programming languages or using the Foundry studio.

Model deployment has two types: deploy from the base model or deploy from the fine-tuned model. The difference is that a fine-tuned model is a model taken from the model catalog and later tuned on an additional dataset, whereas the base model is the model as it is available in Azure AI Foundry.

Click through for a bit more information on the process.
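Once deployed, base and fine-tuned models are called the same way, because you address the deployment by name rather than the underlying model. Here is a minimal sketch using the openai Python package against an Azure OpenAI-style endpoint; the endpoint, key, API version, and deployment name are all placeholders:

    from openai import AzureOpenAI

    # Placeholder connection details for a deployment created in Azure AI Foundry.
    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",
        api_key="<your-api-key>",
        api_version="2024-06-01",
    )

    response = client.chat.completions.create(
        model="my-deployment",  # the deployment name, not the base model name
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)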


An Overview of Azure OpenAI and the Azure AI Hub

Tomaz Kastrun has a pair of posts. First up, an overview of Azure OpenAI:

Let’s first address the elephant in the room. We have explored Azure AI Foundry, and we also have Azure OpenAI. So what is the core difference? Let’s take a look:

The services behind the scenes:

  • Azure AI Services has much broader AI capabilities and simpler integration into applications and real-world usage, with mostly pre-built APIs for all services (face recognition, document recognition, speech recognition, computer vision, image recognition, and more) that allow better interoperability and connection to machine learning services (Azure Machine Learning Service).
  • Azure OpenAI focuses primarily on OpenAI LLM models (Azure AI Services supports many others) and provides great agents for conversations, content tools, RAG, and natural language services.

After that comes an overview of the Azure AI Hub and AI projects:

In the AI Foundry portal, hubs provide the environment for a team to collaborate and organize work, and help you as a team lead or IT admin centrally set up security settings and govern usage and spend. You can create and manage a hub from the Azure portal or from the AI Foundry portal, and then your developers can create projects from the hub.

In essence, hubs are the primary top-level Azure resource for AI Foundry. Their purpose is to govern security, connectivity, and computing resources across playgrounds and projects.
