
Author: Kevin Feasel

Data Exfiltration Protection and Pip

I have a post borne from frustration:

I have an Azure Synapse Analytics workspace which uses a managed virtual network and includes data exfiltration protection. I also have a Spark pool. My goal is to import a few packages and use them in a Spark notebook.

Doing so is pretty easy from the Synapse workspace. I navigate to the Manage hub and then choose Apache Spark pools from the Analytics pools menu, select the ellipsis for my Spark pool, and then choose Packages.

From there, because I plan to update Python packages, I can upload a requirements.txt file and have Pip do its job.
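
For reference, the file uses the standard pip requirements format: one package per line, optionally pinned to a version. A minimal, hypothetical example (the packages here are arbitrary placeholders, not the ones from the post):

    # Hypothetical requirements.txt for a Synapse Spark pool
    requests==2.31.0
    pandas==1.5.3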

But then it doesn’t… Click through to learn why, as well as the workaround. It’s stuff like this that makes me say data exfiltration protection is a feature administrators will (mostly) like and developers will hate, especially because the error message itself gives no obvious indication of why this was happening.


Creating Boilerplate Pester Assertions

Jeffrey Hicks builds a useful snippet:

During this process, I decided I needed to help myself speed up the test writing phase. I have a standard set of tests that I like to use for functions in my module. But copying and pasting code snippets is tedious. I know I could create a set of VS Code snippets, but that feels limiting and I’d have to make sure the snippets are available on all systems where I might be running VS Code. Instead, I wrote a PowerShell function to accelerate developing Pester 5.x tests.

My function takes a module and extracts all of the public exported functions. For each function, it creates a set of standard Pester assertions. These are the baseline or boilerplate tests that I always want to run for each function. Each function is wrapped in a Describe block, although I can opt for a Context block instead. This command will also insert tags. Note that my code for the tag insertion relies on the ternary operator from PowerShell 7.

Click through for the code.


Row Goal Woes with the EXCEPT Operator

Nigel Foulkes-Nock ran into a problem:

In many cases, this works well, but recently I’ve seen examples where it becomes troublesome, specifically when trying to process higher volumes of data.

The same code can behave perfectly on a small dataset, but then cause issues on a larger database built in exactly the same format. This results in queries changing from taking a few seconds to struggling to complete.

Read on to see why, as well as one solution that Nigel details.
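
For context, EXCEPT returns the rows from the first result set that do not appear in the second. A minimal sketch of the general pattern, using hypothetical tables rather than Nigel's actual example:

    -- Rows present in staging but missing from the production table.
    SELECT CustomerID, CustomerName
    FROM dbo.CustomerStaging
    EXCEPT
    SELECT CustomerID, CustomerName
    FROM dbo.Customer;

On a small dataset, the anti-semi-join plan this tends to produce is cheap; the row goal trouble Nigel describes shows up as the tables grow.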


Hierarchical Partition Keys in Cosmos DB

Hasan Savran looks at partition keys:

Selecting a partition key for your Cosmos DB is one of the most important choices you need to make for your Cosmos DB project. You really need to take your time and have a plan for your project. Where will this application be in 1 year? 5 years? How much data are you planning to store? If your application becomes popular and you start to have users all over the country or the world, do you think your partition key can handle growth like that? These are some of the questions you need to ask yourself. Selecting a partition key is like selecting a life partner for your project. You need a good one that will grow together with your project.

Sometimes, it does not matter how much time you spend finding a good partition key. Your document simply does not have a good one. In those cases, usually the best thing you can do is combine multiple properties together and generate a unique custom property called a synthetic key.
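
To make that idea concrete: a synthetic key is typically just a concatenation of properties, set when you write the document. A hypothetical illustration in the Cosmos DB SQL dialect, with invented property names:

    -- The combined value would be stored on the document as its partition key.
    SELECT CONCAT(c.tenantId, '-', c.userId) AS syntheticPartitionKey
    FROM c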

Read on for a better solution to the problem than a synthetic key.


Automatically Stopping Data Explorer Clusters

Gabi Lehner has good news for us:

The Azure Data Explorer team is constantly focused on reducing COGS and making sure users pay only for the value they are getting.

As part of this initiative, we’re now adding a new automatic capability to stop unused clusters.

If you created a cluster and did not ingest any data into it, or if you ingested data but later go days without running any queries or ingesting new data, we will automatically stop that cluster.

I have two thoughts on this. First, good. Frankly, every cloud service should have automatic pausing unless it makes sense not to—that is, pausing should be the default, not a feature you add later. This is especially true for expensive data processing services.

Second, based on the description, I think I’d like a little more control over this, in terms of how long we go before auto-stop kicks in. The default is ten days, which is a reasonably large number, but other numbers could make just as much sense for a given user. I like the idea that we see in Databricks and in Azure Synapse Analytics Spark pools: give me a reasonable default, but let me change it in case the reasonable default can’t cut it for some reason.


Tower of Hanoi in T-SQL

Tomaz Kastrun would like to play a game:

T-SQL code for the popular game of “Tower of Hanoi”, which can be played in Microsoft SQL Server, Azure Data Studio, or any other T-SQL editor with support for query execution.

Given that this is the game you use to teach students recursion, I figured a T-SQL-based solution would be interesting. Well, Tomaz has the solution and the workspace to play it yourself.
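
As a taste of how this sort of thing can look, here is a minimal sketch of my own (not Tomaz's implementation) using the classic binary trick: for @n disks there are POWER(2, @n) - 1 moves, and move i moves the disk whose number is given by the position of the lowest set bit of i.

    -- Minimal sketch, not Tomaz's code. Handles up to 5 disks as written;
    -- extend the VALUES list for more.
    DECLARE @n int = 3;

    WITH Moves AS
    (
        SELECT 1 AS MoveNo
        UNION ALL
        SELECT MoveNo + 1
        FROM Moves
        WHERE MoveNo < POWER(2, @n) - 1
    )
    SELECT
        m.MoveNo,
        d.Disk  -- 1 = smallest disk
    FROM Moves AS m
        JOIN (VALUES (1, 1), (2, 2), (4, 3), (8, 4), (16, 5)) AS d(Bit, Disk)
            ON m.MoveNo & (-m.MoveNo) = d.Bit  -- isolate the lowest set bit
    ORDER BY m.MoveNo
    OPTION (MAXRECURSION 0);

This only identifies which disk moves at each step; mapping each move to its source and target pegs follows a similar parity rule.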


Creating a Synapse Workspace with Data Exfiltration Protection

I have a post on creating a new Azure Synapse Analytics workspace:

As a quick upshot, having a managed VNet set up means that any Spark pools you create will have subnet segregation, meaning that the Spark machines will be in their own subnet, away from everything else. This provides a bit of cross-pool protection for you automatically. It also performs similar network isolation for your Synapse workspace, keeping it separated from other workspaces. The other big thing it does is create managed private endpoints to the serverless and dedicated SQL pools, which means that any network traffic between these pools and resources in the Synapse workspace is guaranteed to transit over Azure networks and not the public internet, at least until the traffic reaches you at the web.azuresynapse.net URL (and there are additional methods to lock down that part of it that we won’t cover today).

By default, the portal will not create a managed virtual network, so you’ll need to enable it at creation time. You cannot enable or disable the managed virtual network setting after a workspace has been created, so if you make a mistake, you’d need to rebuild the workspace, though you can at least use the same storage account.

One last thing that managed virtual networks offer you is the ability to enable data exfiltration protection.

Click through to see how it all works. Data exfiltration protection can limit you a bit, and that can be quite frustrating, but it does what it says…in the same way that Draconis did what he said.


Quantifier Predicates in SQL

Joe Celko takes us through quantifier predicates:

SQL is based on set theory and logic. The only bad news is that many programmers have never had a class on either of those topics. They muddle through using the Boolean operators in their programming language and think that’s all there is to formal logic.

Let’s flash back to the early days of logic and play catch-up. We need to start with syllogisms. Syllogisms are logical forms made up of combinations of two statements about classes of things that lead to a conclusion. They were invented by the Greeks and written up by Aristotle in Prior Analytics. You might have run into them if you had a philosophy class that included lectures on formal fallacies. The three forms of statements allowed are:

Click through to receive a brief primer on formal logic and learn more about how SQL implements these concepts.
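
As a quick taste of where this ends up in T-SQL: the quantified comparison predicates ALL and ANY (with SOME as a synonym for ANY) attach a quantifier to a comparison against a subquery. A small sketch using hypothetical tables:

    -- Products priced above every competitor price.
    SELECT p.ProductName
    FROM dbo.Products AS p
    WHERE p.Price > ALL (SELECT cp.Price FROM dbo.CompetitorPrices AS cp);

    -- Products priced above at least one competitor price.
    SELECT p.ProductName
    FROM dbo.Products AS p
    WHERE p.Price > ANY (SELECT cp.Price FROM dbo.CompetitorPrices AS cp);

One classic gotcha: if the subquery returns no rows, the ALL predicate is vacuously true, which trips up programmers who never took that formal logic class.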


Reference to Database Not Supported in this Version of SQL Server

Diego Nieto-Arroyo troubleshoots an issue:

In this article, I will show how to resolve errors that occur when attempting to query a system table (or any table, for that matter) via linked servers. The error seen below is the result of the issue we encountered.

Msg 40515, Level 16, State 2, Line 1

Reference to database and/or server name in ‘xxxx.sys.sp_tables_info_90_rowset_64’ is not supported in this version of SQL Server.

Read on to see what causes this issue and how you can resolve it.
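
Without spoiling the resolution, this class of error typically appears when a four-part name sends the provider's metadata calls (like sp_tables_info_90_rowset_64 above) to an Azure SQL Database, which does not allow cross-database or cross-server references. One commonly cited workaround is to push the whole statement to the remote side with OPENQUERY, so the remote server resolves its own object names. A hedged sketch with a placeholder linked server name:

    -- [AzureLinkedServer] is hypothetical; substitute your own linked server.
    SELECT *
    FROM OPENQUERY([AzureLinkedServer], N'SELECT name, object_id FROM sys.tables');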
