Press "Enter" to skip to content

Category: Generative AI

Generative AI Answers: Do Not Trust, Do Verify

Erik Darling speaks wisdom:

Here’s what I’ve used it for with some success:

  • Creating images for Beer Gut Magazine
  • Summarizing long documents
  • Writing boilerplate stuff that I’m bad at (sales and marketing drivel, abstracts, lists of topics)

But every time I ask it to do that stuff, I really have to pay attention to what it gives me back. It’s often a reasonable starting place, but sometimes it really goes off the rails.

That’s true of technical stuff, too. Here’s where I’ve had a really bad time, and if there’s anything you know deeply and intimately, you’ll find similar problems too.

Click through for Erik’s experience. That’s pretty close to my own, and is a big part of why I refer to generative AI models as being akin to drunken interns: sure, give them assignments, but you’d better double-check every part of it.


Security Risk Profile in AI-Generated Code

Jerome Robert reviews the papers:

As such, nowadays, almost all developers use some form of AI-generated code — and they absolutely should. AI tools make developers’ lives easier by leveraging the knowledge cultivated by the development community over time and across the globe to overcome obstacles that, while potentially new and challenging to them, have long been addressed. They can reasonably trust that code to perform the function they want to achieve — and can test it to be sure.

But can they trust that code to be secure? Absolutely not. With all that time and work spent committing functional code, just as much, if not more, is spent navigating the security backlog afterward.

Click through for a summary of two recent academic papers, as well as links to the papers themselves.


Generating Embeddings in Oracle from a Function or Trigger

Brendan Tierney continues a series on generative AI in Oracle:

In my previous post, I gave examples of using Cohere to create vector embeddings using SQL and of using a Trigger to populate a Vector column. This post extends those concepts, and in this post, we will use OpenAI.

Warning: At the time of writing this post, there is a bug in Oracle 23.5 and 23.6 that limits the OpenAI key to a maximum of 130 characters. The newer project-based API keys can be longer than 130 characters, so you might get lucky and receive a key of an appropriate length, or you might have to generate several. An alternative is to create a Legacy (or User) key, but there is no guarantee how long these will remain available.

Assuming you have an OpenAI API key of 130 characters or fewer, you can follow the remaining steps. This is now a known bug in Oracle Database (23.5, 23.6), and it should be fixed in the not-too-distant future. Hopefully!
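If you want to check a key before wiring it into the database, a quick length test saves a confusing failure later. A minimal Python sketch, assuming the key lives in an OPENAI_API_KEY environment variable (the variable name is just illustrative):

```python
import os

# Oracle 23.5/23.6 currently reject OpenAI keys longer than 130 characters.
MAX_KEY_LENGTH = 130

api_key = os.environ.get("OPENAI_API_KEY", "")

if not api_key:
    raise SystemExit("OPENAI_API_KEY is not set.")
if len(api_key) > MAX_KEY_LENGTH:
    raise SystemExit(
        f"Key is {len(api_key)} characters; the Oracle 23.5/23.6 bug caps it at "
        f"{MAX_KEY_LENGTH}. Generate another project key or try a legacy key."
    )

print(f"Key length {len(api_key)} is within the limit; safe to load into the database.")
```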

Read on to learn more.


Myths and Reality of Copilot for Power BI

Kurt Buhler puts together an essay:

However, recent months reveal rising skepticism, concern, and possibly even disillusionment with generative AI tools, both from investors (especially from investors) and from the public. Despite the massive investment, enthusiasm, and promotion, these tools seem to be seeing limited adoption and aren’t yet showing the measurable value that fulfills their promises. And yet, paradoxically, many professionals will agree anecdotally that they use generative AI tools regularly, and that these tools seem to help them be more productive in certain tasks. Furthermore, there are concrete success stories where generative AI is bringing value, such as the latest versions of Alphafold (from Google) and ESMfold (from Meta), models that aid in protein folding and help pharmaceutical companies more effectively find potential new drug candidates. So, who are these tools for, what problems do they solve, and how can we use them effectively? This is too big of a topic for even Bink and Bonk the Data Goblins to solve, so let’s narrow the focus a bit.

This is a must-read, and Kurt even provides a de-goblinified PDF version for management.


Building an App to Use Fabric AI Skills Locally

Sandeep Pawar takes us on-premises:

If you are a regular reader of this blog, you probably know I have been testing Fabric AI Skills extensively. I have written three blogs so far on various ways the AI Skills endpoint can be used. The feature is still in preview but I am excited to see how it can be used to create new solutions as it matures.

I was curious to test whether the AI Skills endpoint could be used locally and in other applications. This would open many opportunities to integrate it in different tools, inside and outside of the Fabric ecosystem. So, I built an app using Gradio to make API calls to the endpoint and show the results in a local browser, along with interactive plots.

Click through for a link to the code and some instructions on how to build it yourself.
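If you just want a feel for the moving parts before clicking through, here is a minimal sketch of the same idea: a Gradio text box that forwards a question to the AI Skill endpoint. The endpoint URL, bearer token, and payload key below are placeholders rather than the actual API contract, so treat Sandeep's post as the authoritative version.

```python
import gradio as gr
import requests

# Placeholders -- the real endpoint URL, token acquisition, and payload shape
# come from the AI Skill's published endpoint, not from this sketch.
AI_SKILL_ENDPOINT = "https://<your-ai-skill-endpoint>"
ACCESS_TOKEN = "<bearer token for the Fabric API>"

def ask_ai_skill(question: str) -> str:
    """Send a natural-language question to the AI Skill endpoint and return the raw response."""
    response = requests.post(
        AI_SKILL_ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"userQuestion": question},  # assumed payload key
        timeout=60,
    )
    response.raise_for_status()
    return response.text

# One text box in, the raw response out -- Sandeep's app layers interactive plots on top of this.
demo = gr.Interface(fn=ask_ai_skill, inputs="text", outputs="text", title="Fabric AI Skill (local)")

if __name__ == "__main__":
    demo.launch()
```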


Building a Microsoft Fabric AI Skill to Generate Data

Sandeep Pawar tries out AI Skills in Microsoft Fabric:

In the last blog I wrote, I showed how to call the AI Skills endpoint in a Fabric notebook. Being able to call the endpoint programmatically creates many opportunities to integrate AI Skills in different applications. One that I thought of was using AI Skills as a function. Function Calling (or Tools) is a specific use case in Gen AI for creating structured output based on a function or behavior instructed by the user. I am not referring to that, as AI Skills can only return a table. Instead, what if you created a number of these AI Skills, grounded in your data, with specific functions built to get the intended output? You could serve or share these with end users, who could call these functions to generate data or results. Think of them as macros in Excel.

Click through for an example of how it works.
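As a rough illustration of the "AI Skill as a function" idea, the sketch below wraps the endpoint call in a Python function that hands back a pandas DataFrame, the tabular result you would share with end users. The payload and response shapes are assumptions made for the sake of the example.

```python
import pandas as pd
import requests

def query_ai_skill(question: str, endpoint: str, token: str) -> pd.DataFrame:
    """Hypothetical 'AI Skill as a function': ask a grounded question, get a table back."""
    response = requests.post(
        endpoint,
        headers={"Authorization": f"Bearer {token}"},
        json={"userQuestion": question},  # assumed payload key
        timeout=60,
    )
    response.raise_for_status()
    payload = response.json()
    # Assumed response shape: a list of column names plus rows of values.
    return pd.DataFrame(payload["data"], columns=payload["columns"])

# End users could then call purpose-built wrappers, much like running a macro:
# top_customers = query_ai_skill("Who were our top ten customers last quarter?", endpoint, token)
```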


Using AI Skills as Cell Magics in Microsoft Fabric Notebooks

Sandeep Pawar takes a look at a new preview capability:

The public preview of AI Skills in Microsoft Fabric was announced yesterday. AI Skills allows Fabric developers to create their own GenAI experience using data in the lakehouse. Unlike Copilot, which is an AI assistant, AI Skills lets users build a validated Q&A application that queries lakehouse data by converting natural language questions into T-SQL queries. It’s only available in paid F64+ SKUs. You can watch the video in the original post for an overview of Copilot, AI Skills, and Gen AI experiences in Fabric.

Read on for more details on how it works.
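The general pattern of a custom cell magic looks something like the sketch below: register a function that treats the cell body as the question and forwards it to the AI Skill endpoint. The endpoint, token handling, and payload key are placeholders, and Sandeep's implementation may differ.

```python
import requests
from IPython.core.magic import register_cell_magic

# Placeholders -- the real endpoint and token handling live in the actual notebook.
AI_SKILL_ENDPOINT = "https://<your-ai-skill-endpoint>"
ACCESS_TOKEN = "<bearer token>"

@register_cell_magic
def aiskill(line, cell):
    """Treat the cell body as a natural-language question for the AI Skill."""
    response = requests.post(
        AI_SKILL_ENDPOINT,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"userQuestion": cell.strip()},  # assumed payload key
        timeout=60,
    )
    response.raise_for_status()
    return response.json()
```

Once registered, a notebook cell that starts with %%aiskill would send the rest of the cell as the question and display the JSON response.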


Chat with Your Own Data in Streamlit and Azure OpenAI

I have a new video:

In this video, I show how we can make a GPT-4 deployment aware of our own custom data, without needing to fine-tune the model. I talk about meta prompts and the Retrieval Augmented Generation (RAG) pattern, and then show how you can set this up using Azure AI Search and Azure OpenAI. Then, I bring it back to Streamlit and give users the choice between chatting with a generic GPT-4 deployment and chatting over custom data.

I try to make my videos 10 minutes in length. They usually end up at 15-18 minutes. This one clocks in at more than 30 minutes and there’s very little fluff.
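If you want the gist of the RAG pattern in code before watching, here is a hand-rolled sketch: retrieve the most relevant chunks from Azure AI Search, place them in the meta prompt, and have the model answer only from that context. The video uses the Azure integrations rather than exactly this, and every endpoint, key, deployment name, and the "content" field below is a placeholder.

```python
import requests
from openai import AzureOpenAI

# All endpoints, keys, and names below are placeholders.
SEARCH_ENDPOINT = "https://<search-service>.search.windows.net"
SEARCH_INDEX = "<index-name>"
SEARCH_KEY = "<search-api-key>"

client = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    api_key="<aoai-api-key>",
    api_version="2024-02-01",
)

def retrieve(question: str, top: int = 3) -> list[str]:
    """Pull the most relevant chunks from Azure AI Search for grounding."""
    resp = requests.post(
        f"{SEARCH_ENDPOINT}/indexes/{SEARCH_INDEX}/docs/search?api-version=2023-11-01",
        headers={"api-key": SEARCH_KEY},
        json={"search": question, "top": top},
        timeout=30,
    )
    resp.raise_for_status()
    return [doc["content"] for doc in resp.json()["value"]]  # assumes a 'content' field

def answer(question: str) -> str:
    """Classic RAG: retrieved chunks go into the meta prompt; the model answers from them."""
    context = "\n\n".join(retrieve(question))
    completion = client.chat.completions.create(
        model="<gpt-4-deployment-name>",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content
```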


Chat with Azure OpenAI in Streamlit

I have a new video:

In this video, I show how we can integrate an Azure OpenAI GPT-4 model into our Streamlit dashboard. Along the way, I also show off how easy it is to create multiple pages and talk a bit about session state and secrets management as well.

The fun part about this is that there’s not even that much code involved. Streamlit handles most of the conversational aspects, and you’re primarily responsible for saving history.
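For a sense of just how little code: a chat page in Streamlit looks roughly like the sketch below, with the history kept in session state because Streamlit reruns the script on every interaction. The resource names and secrets key are placeholders.

```python
import streamlit as st
from openai import AzureOpenAI

# Placeholder resource names; keys belong in Streamlit's secrets management.
client = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    api_key=st.secrets["aoai_key"],
    api_version="2024-02-01",
)

st.title("Chat with Azure OpenAI")

# Streamlit reruns this script on every interaction, so the history lives in session state.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the saved conversation.
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    completion = client.chat.completions.create(
        model="<gpt-4-deployment-name>",
        messages=st.session_state.messages,
    )
    reply = completion.choices[0].message.content
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.markdown(reply)
```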


Thoughts on Copilot in Fabric

Marc Lelijveld notes a pattern:

Lately, it’s copilot this and copilot that in all Microsoft communications and announcements. During the Fabric Community Conference in Las Vegas, where I was last week, it was hard to keep count of the number of times copilot was mentioned. Beyond Microsoft communications, everyday conversations with customers and colleagues include copilots everywhere too. Which is fine, as long as we manage to keep things in perspective.

In this blog, I will elaborate further on my experiences with copilot: where I have found it to be a useful addition to my day-to-day job, and also where we should be careful. And no, this is not a rant on copilot; I’m just sharing my vision and everyday experience from talking about copilot with my customers.

I appreciate Marc’s grounding here. There’s a huge amount of hype around the various copilots, but after the initial demos, there’s a good bit less utility than meets the eye at present.
