Category: Generative AI

Building a Vector Data Demo Database for SQL Server 2025

Andy Yun has a new demo database:

Today, I have the honor and pleasure of debuting a new presentation for MSSQLTips: A Practical Introduction to Vector Search in SQL Server 2025 (you can watch the recording here too). To accompany that new presentation, I opted to create a new demo database instead of retrofitting one of my existing demo databases. And I’m sharing it with you so you don’t have to go through the headache of taking an existing database and creating vector embeddings.

Click through for Andy’s demo database, which is approximately 16 GB in size, so not a tiny one.
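
If you haven't seen the new vector functionality yet, here's a rough idea of what querying a database like Andy's looks like. This is a minimal sketch rather than anything from Andy's demo: the table, column, and connection string are made up, and it assumes the VECTOR data type and VECTOR_DISTANCE function from the SQL Server 2025 public preview.

    # Minimal sketch of a similarity search against a hypothetical dbo.Documents
    # table with a VECTOR column, using SQL Server 2025 preview syntax.
    import json
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
        "DATABASE=VectorDemo;Trusted_Connection=yes;TrustServerCertificate=yes"
    )

    # Placeholder query embedding; real embeddings run to hundreds of dimensions.
    query_embedding = [0.12, -0.03, 0.88]

    sql = """
    SELECT TOP (5) Title,
           VECTOR_DISTANCE('cosine', Embedding, CAST(? AS VECTOR(3))) AS Distance
    FROM dbo.Documents
    ORDER BY Distance;
    """

    cursor = conn.cursor()
    cursor.execute(sql, json.dumps(query_embedding))
    for title, distance in cursor.fetchall():
        print(f"{distance:.4f}  {title}")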

Copilots, MCP Servers, and Connection Strings

Chad Baldwin shares a warning:

Well, a few days ago, I ran into the result of one of those awkward pieces when combining the MSSQL extension for VS Code, the MSSQL MCP Server, and Copilot.

The short of it is…I asked Copilot to change the connection used by the MSSQL extension to use a particular database. I later asked Copilot to describe a table in the database (which uses the MSSQL MCP server), only for it to claim the table didn't exist. I realized right away it was due to competing connections between the MSSQL extension and the MSSQL MCP Server configuration. It was also at that moment that I realized this situation could potentially be SO MUCH worse than simply not finding a table…

So let’s set up a worst case scenario and see what happens.

This is basically the equivalent of “Wait, that SSMS window was production? Uh-oh.” Not that this has ever happened to me, of course. Or any of you. Nope.

EchoLeak: Zero-Click Copilot Vulnerability

Alex Woodie reports on a vulnerability:

The Microsoft Copilot vulnerability, dubbed EchoLeak, was listed as CVE-2025-32711 in NIST's National Vulnerability Database, which gave the flaw a severity score of 9.3. According to Aim Labs, which discovered EchoLeak and shared its research with the world last week, the "zero-click" flaw could "allow attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot context, without the user's awareness, or relying on any specific victim behavior." Microsoft patched the flaw the following day.

The blog post linked above is pretty interesting. Microsoft has patched the vulnerability, so this particular attack vector shouldn't be an issue. But it will certainly open the door to more fun ways of exploiting generative AI-based services.

Trying out Microsoft Fabric Data Agents

Wolfgang Strasser gives a generative AI solution built into Microsoft Fabric a try:

Today, I wanted to give the new Fabric Data Agents a try. According to the documentation, a Fabric Data Agent is defined as follows:

Data agent in Microsoft Fabric is a new Microsoft Fabric feature that allows you to build your own conversational Q&A systems using generative AI. A Fabric data agent makes data insights more accessible and actionable for everyone in your organization. With a Fabric data agent, your team can have conversations, with plain English-language questions, about the data that your organization stores in Fabric OneLake and then receive relevant answers. This way, even people without technical expertise in AI or a deep understanding of the data structure can receive precise and context-rich answers.

Let’s give it a try and build our first Data Agent.

Click through for the pre-requisites, the setup process, and how everything looked for Wolfgang.

Local Vector Search in SQL Server 2025

Andy Yun gives vector search a try:

With the announcement of SQL Server 2025 Public Preview, hopefully you are interested in test-driving Vector Search.

Microsoft has already posted demo code, but it's only for OpenAI on Azure. But many of us are wondering about running things locally. So I thought I'd share a step-by-step of getting Ollama set up and running locally on my laptop. End-to-end, these instructions should take less than 30 minutes to complete.

Andy's process involves downloading and running an embedding model via Ollama, creating an external model in SQL Server that points to it, and using that external model to generate the embeddings.
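
If you want to sanity-check the Ollama side on its own before pointing SQL Server at it, a quick call to the local embeddings endpoint looks something like this. This is my own minimal sketch, not Andy's code: it assumes Ollama's default port and uses nomic-embed-text purely as an example model.

    # Minimal sketch: request an embedding from a locally running Ollama instance.
    # Assumes Ollama is listening on its default port and that an embedding model
    # (nomic-embed-text here, as an example) has already been pulled.
    import requests

    response = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": "Vector search in SQL Server 2025"},
        timeout=60,
    )
    response.raise_for_status()

    embedding = response.json()["embedding"]
    print(f"Received a {len(embedding)}-dimensional vector")
    print(embedding[:5])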

Building a Multi-Agent Orchestrator with Flink and Kafka

Sean Falconer builds an orchestration engine:

Just as some problems are too big for one person to solve, some tasks are too complex for a single artificial intelligence (AI) agent to handle. Instead, the best approach is to decompose problems into smaller, specialized units so that multiple agents can work together as a team.

This is the foundation of a multi-agent system—networks of agents, each with a specific role, collaborating to solve larger problems.

Read on for the overview. There’s also a code repository and a free e-book on the topic.
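
To make the decomposition idea concrete, here's a toy sketch of the routing piece of an orchestrator: each specialized agent listens on its own Kafka topic and the orchestrator publishes tasks to the right one. This is my own illustration rather than anything from Sean's repository, and it assumes a local broker plus the kafka-python package.

    # Toy illustration of multi-agent routing: decompose a request into tasks
    # and publish each task to the topic of the agent specialized for it.
    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # Hypothetical mapping of task types to specialized agent topics.
    AGENT_TOPICS = {
        "research": "agent.research.requests",
        "summarize": "agent.summarize.requests",
        "review": "agent.review.requests",
    }

    def dispatch(task_type: str, payload: dict) -> None:
        """Route a task to the topic of the agent that handles it."""
        producer.send(AGENT_TOPICS[task_type], {"task_type": task_type, **payload})

    dispatch("research", {"question": "What drove churn last quarter?"})
    dispatch("summarize", {"source": "reports/q3-churn.md"})
    producer.flush()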

Model Documentation via Fabric Data Agent

Chris Webb gets some answers:

AI is meant to help us automate boring tasks, and what could be more boring than creating documentation for your Power BI semantic models? It’s such a tedious task that most people don’t bother; there’s also an ecosystem of third party tools that do this job for you, and you can also build your own solution for this using DAX DMVs or the new-ish INFO functions (see here for a good example). That got me wondering: can you use Fabric Data Agents to generate documentation for you? And what’s more, why even generate documentation when you can just ask a Data Agent the questions that you’d need to generate documentation to answer?

For a simple scenario, Chris was able to get pretty solid results. As complexity grows, your mileage may vary.
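
As a point of comparison, the do-it-yourself route Chris mentions can be as short as a couple of DAX INFO queries. Here's a rough sketch of my own (not Chris's code) using Semantic Link from a Fabric notebook; the semantic model name is a placeholder and it assumes the sempy package that ships with Fabric notebooks.

    # Rough sketch of the DIY documentation route: run DAX INFO functions
    # against a semantic model via Semantic Link in a Fabric notebook.
    import sempy.fabric as fabric

    MODEL_NAME = "Sales Model"  # placeholder semantic model name

    tables = fabric.evaluate_dax(MODEL_NAME, "EVALUATE INFO.TABLES()")
    measures = fabric.evaluate_dax(MODEL_NAME, "EVALUATE INFO.MEASURES()")

    # Both come back as DataFrames, ready to dump into documentation.
    print(tables.head())
    print(measures.head())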

Testing ChatGPT with Bad Advice

Louis Davidson continues a series:

As I started in part 1 of this series, I have set out to test an LLM's ability to perform technical editing. For my first set of tests, I am using a pair of articles I created, filled with very bad advice. The advice is the same for both articles, but what differs is the intro and the conclusion. One says the advice is good, the other says it is bad. It is all very, very bad, including a really terrible SELECT statement versus loop construct that causes an infinite loop inserting into a temporary table.

My goal is to see how much of that advice will be noted as bad, and whether it says anything nice at all about the text. If you want to see the complete documents, you can get them here in a zip file, in both text and Word formats.

Starting with an extreme example like this is fine, I believe. The results were reasonable, though it sounds like Louis won't be out of a job anytime soon.

Data Conversion via Generative AI

Grant Fritchey rearranges some data:

The DM-32 is a Digital Mobile Radio (DMR) as well as an analog radio. You can follow the link to understand all that DMR represents when talking radios. I want to focus on the fact that you have to program the behaviors into a DMR radio. While the end result is identical for every DMR radio, how you get there, the programming software, is radically different for every single radio (unless you get a radio that supports the open-source OpenGD77, yeah, playing radio involves open source as well). Which means, if I have more than one DMR radio (I'm currently at 7, and no, I don't have a problem, shut up), I have more than one Customer Programming Software (CPS) that is completely different from other CPS formats. Now, I like to set up my radios similarly. After all, the local repeaters, my hotspot, and the Talkgroups I want to use are all common. Since every CPS is different, you can't just export from one and import to the next. However, I had the idea of using AI for data conversion. Let's see how that works.

Click through for the scenario as well as Grant's results, which were pretty successful for a data mapping operation, though the choice of model and the simplicity of the input and output examples matter when it comes to generating the Python code.
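
The shape of the code Grant is after is essentially a column-mapping script between two CSV exports. As a rough sketch of what that generated Python tends to look like (the column and file names here are invented; real CPS exports differ by vendor):

    # Rough sketch of a CPS-to-CPS conversion: read one vendor's channel export,
    # remap its columns to another vendor's layout, and write the result out.
    # Column and file names are invented for illustration.
    import csv

    COLUMN_MAP = {
        "Channel Name": "Alias",
        "RX Freq": "Receive Frequency",
        "TX Freq": "Transmit Frequency",
        "Color Code": "CC",
        "Talk Group": "Contact",
    }

    with open("source_cps_export.csv", newline="") as src, \
         open("target_cps_import.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=list(COLUMN_MAP.values()))
        writer.writeheader()
        for row in reader:
            writer.writerow({target: row[source] for source, target in COLUMN_MAP.items()})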

Feeding Language Models Bad Advice

Louis Davidson begins an experiment:

So, I got this idea to test out a few LLMs, ChatGPT and the Web and Office versions of Copilot at the very least, and see how they handle a load of bad advice. So I put out a question on X, asking:

“A request! Send me your most realistic, but worst, SQL Server management advice. I want to test (and write an article) about using AI to fact check writing.”

And if there’s anything this community is good at, it’s providing bad advice for purposes of lampooning.

We are going to need to wait a week to see Louis’s results, but you can check out some of the terrible advice a variety of X users proffered.