
December 30, 2021

Data Exfiltration Protection and Pip

I have a post borne from frustration:

I have an Azure Synapse Analytics workspace which uses a managed virtual network and includes data exfiltration protection. I also have a Spark pool. My goal is to import a few packages and use them in a Spark notebook.

Doing so is pretty easy from the Synapse workspace. I navigate to the Manage hub and choose Apache Spark pools from the Analytics pools menu, then select the ellipsis for my Spark pool and choose Packages.

From there, because I plan to update Python packages, I can upload a requirements.txt file and have Pip do its job.

But then it doesn’t… Click through to learn why, as well as the workaround. It’s stuff like this which makes me say data exfiltration protection is a feature administrators will (mostly) like and developers will hate, especially because there’s no obvious indicator in the error message itself as to why this was happening.
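For reference, the file in play here is just a standard pip requirements manifest. A minimal, hypothetical example (the package names and version pins are illustrative, not from the post):

```
# Hypothetical requirements.txt -- pin the packages the notebook needs.
# These names and versions are examples only.
pandas==1.3.5
scikit-learn==1.0.2
```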


Creating Boilerplate Pester Assertions

Jeffrey Hicks builds a useful snippet:

During this process, I decided I needed to help myself speed up the test writing phase. I have a standard set of tests that I like to use for functions in my module. But copying and pasting code snippets is tedious. I know I could create a set of VS Code snippets, but that feels limiting and I’d have to make sure the snippets are available on all systems where I might be running VS Code. Instead, I wrote a PowerShell function to accelerate developing Pester 5.x tests.

My function takes a module and extracts all of the public exported functions. For each function, it creates a set of standard Pester assertions. These are the baseline or boilerplate tests that I always want to run for each function. Each function is wrapped in a Describe block, although I can opt for a Context block instead. This command will also insert tags. Note that my code for the tag insertion relies on the ternary operator from PowerShell 7.

Click through for the code.
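To give a rough sense of the idea, here is a hand-written sketch of the kind of boilerplate Pester 5 assertions such a generator might emit. The function name Get-Widget and the specific assertions are my own illustration, not Jeffrey’s actual output:

```powershell
# Hypothetical boilerplate tests for a single exported function.
Describe 'Get-Widget' -Tag 'unit' {
    It 'should be an exported command' {
        Get-Command -Name Get-Widget -ErrorAction SilentlyContinue |
            Should -Not -BeNullOrEmpty
    }
    It 'should have comment-based help' {
        (Get-Help Get-Widget).Description | Should -Not -BeNullOrEmpty
    }
    It 'should run without throwing' {
        { Get-Widget } | Should -Not -Throw
    }
}
```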


Hierarchical Partition Keys in Cosmos DB

Hasan Savran looks at partition keys:

Selecting a partition key for your Cosmos DB is one of the most important choices you need to make for your Cosmos DB project. You really need to take your time and have a plan for your project. Where will this application be in 1 year? 5 years? How much data are you planning to store? If your application becomes popular and you start to have users all over the country or the world, do you think your partition key can handle growth like that? These are some of the questions you need to ask yourself. Selecting a partition key is like selecting a life partner for your project. You need a good one that will grow together with your project.

Sometimes, it does not matter how much time you spend finding a good partition key. Your document simply does not have a good one. In those cases, usually the best thing you can do is combine multiple properties together and generate a unique custom property, called a synthetic key.
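For the synthetic key approach Hasan describes, a minimal sketch might look like this (the document shape and property names are hypothetical):

```python
# Hypothetical document: no single property distributes the data well.
doc = {"tenantId": "contoso", "userId": "u123", "orderId": "o456"}

# Combine several properties into one synthetic partition key property
# before writing the document to the container.
doc["partitionKey"] = f"{doc['tenantId']}-{doc['userId']}"
```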

Read on for a better solution to the problem than a synthetic key.


Row Goal Woes with the EXCEPT Operator

Nigel Foulkes-Nock ran into a problem:

In many cases, this works well, but recently I’ve seen examples where it becomes troublesome, specifically when trying to process higher volumes of data.

The same code can behave perfectly on a small dataset, but then cause issues on a larger database built in exactly the same format. This results in queries going from taking a few seconds to struggling to complete.

Read on to see why, as well as one solution that Nigel details.
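For context, here is the general shape of the pattern, along with one commonly cited row-goal mitigation. This is an illustration (the table and column names are made up), not necessarily the solution Nigel details:

```sql
-- Find rows in the staging table that aren't already in the target.
SELECT CustomerID, OrderDate, Amount
FROM dbo.StagingOrders
EXCEPT
SELECT CustomerID, OrderDate, Amount
FROM dbo.Orders;

-- The same query, asking the optimizer not to apply row goals.
SELECT CustomerID, OrderDate, Amount
FROM dbo.StagingOrders
EXCEPT
SELECT CustomerID, OrderDate, Amount
FROM dbo.Orders
OPTION (USE HINT ('DISABLE_OPTIMIZER_ROWGOAL'));
```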


Automatically Stopping Data Explorer Clusters

Gabi Lehner has good news for us:

The Azure Data Explorer team is constantly focused on reducing COGS and making sure users pay only for the value they are getting.

As part of this initiative, we’re now adding a new automatic capability to stop unused clusters.

If you created a cluster and did not ingest any data into it, or if you ingested data but then go days without running any queries or ingesting new data, we will automatically stop that cluster.

I have two thoughts on this. First, good. Frankly, every cloud service should have automatic pausing unless it makes sense not to—that is, pausing should be the default, not a feature you add later. This is especially true for expensive data processing services.

Second, based on the description, I think I’d like a little more control over this, in terms of how long we go before auto-stop kicks in. It’s ten days, which is a reasonable and suitably large number, but other numbers could make just as much sense for a given user. I like the idea that we see in Databricks and in Azure Synapse Analytics Spark pools: give me a reasonable default, but let me change it in case the reasonable default can’t cut it for some reason.
