Press "Enter" to skip to content

Month: September 2021

An Overview of Bicep

Steve Jones pumps us up:

Bicep is a transpiler, meaning it takes one language and translates it into another. In this case, the Bicep language will move code into the ARM JSON templates. JSON is really for machines, not humans, so the idea is to give sysadmins and developers an easy way to describe resources they need to deploy into Azure.

The language is new, and it’s on GitHub. This is a DSL (domain-specific language), which means it was designed for a specific purpose. With the 0.3 release, the language is built into the Azure CLI and Azure PoSh utilities, so these will do the transpilation for you. There’s also a decompiler to go from an ARM template back to Bicep. It’s also supported by Microsoft, which is always a plus if you need to call for some issue.

Click through for more information.

Comments closed

Persist Sample Percent Bugfix in SQL Server

John Sterrett has good news for us:

Hi Everyone, this is John Sterrett. I am a SQL Server Consultant in Austin, TX. Last year I blogged about a feature called Persist Sample Percent. It had a nasty bug that could negatively impact performance. I have great news to share. The fix is now rolled into SQL 2016 SP2 CU17 and SQL 2019 CU10. Pedro Lopes let me know that, with the fix now queued for SQL 2017 CU26, this becomes fixed in all versions.

Read on to see what this means and why it’s important.

Comments closed

The Building Blocks of Extended Events

Ed Pollack takes us through the basics of extended events in SQL Server:

Extended Events are an excellent way to collect data about a SQL Server that provides a vast array of events that can be used for performance monitoring, troubleshooting, or auditing a server. In this article, I’ll explain the building blocks of Extended Events data collection.

While using Extended Events is not overly complex, building a reliable system to collect, parse, and store events over time without any data loss can be challenging.

This article walks through the steps to create, configure, and implement Extended Events in SQL Server, providing the prerequisite code and concepts to build an automated collection process.

Read the whole thing.
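
To give a sense of the moving parts, an event session is events plus actions plus a target plus session options. As a rough, hypothetical sketch (not Ed's code), here is a minimal session wrapped in Python with pyodbc, the kind of thing an automated collection script might run; the driver, server, session name, and file path are all assumptions:

```python
# Hypothetical sketch: create and start a minimal Extended Events session from
# Python via pyodbc. Assumes ODBC Driver 17, a local instance, and permission
# to create event sessions; names and paths are made up for illustration.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)

# Events + actions + an event_file target + session options.
create_session = """
CREATE EVENT SESSION [BatchTracking] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION(sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.event_file
    (SET filename = N'BatchTracking.xel', max_file_size = (64))
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS, STARTUP_STATE = ON);
"""

with conn.cursor() as cur:
    cur.execute(create_session)  # errors if a session with this name already exists
    cur.execute("ALTER EVENT SESSION [BatchTracking] ON SERVER STATE = START;")
```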

Comments closed

Connecting to Power BI PPU via SQL Server Profiler

Gilbert Quevauvilliers wants to use SQL Server Profiler for a good reason:

I was recently looking at how to use SQL Profiler to connect to my Premium Per User (PPU) dataset so that I could see where the processing of my incremental partitions was.

When I first tried to connect using the default options, I got the error “Either the trace with the ID of ‘AAA’ does not exist in the server of ID ‘BBB’, or the user does not have permissions to access the object”.

This was rather a confusing error and it took me a little while to figure it out.

Click through to see the solution.

Comments closed

Azure DevOps Templates for Data Platform Deployments

Kevin Chant has some toys for us:

For my T-SQL Tuesday contribution this month, I want to introduce my Azure DevOps templates for Data Platform deployments.

This month’s T-SQL Tuesday is hosted by Frank Geisler. Frank has invited us to write about deploying SQL components through descriptive methods and build some new cool templates for them.

That’s good timing for me, because I co-presented a session on the day this post was published, showing how to use YAML in Azure DevOps for Data Platform deployments at Data Platform Virtual Summit.

Click through to learn more and see Kevin’s repos, as well as more information on the topic.

Comments closed

A Trace Flag (Generally) to Avoid

Erik Darling takes us through trace flag 3608:

According to the docs:

Prevents SQL Server from automatically starting and recovering any database except the master database. If activities that require TempDB are initiated, then model is recovered and TempDB is created. Other databases will be started and recovered when accessed. Some features, such as snapshot isolation and read committed snapshot, might not work. Use for Move System Databases and Move User Databases.

Note: Do not use during normal operation.

Scope: global only

But it turns out it can do quite a bit of harm: many things stop working while it’s in use, including the automatic creation of statistics.

Click through to see what kinds of things fail to work as a result of this trace flag.

Comments closed

Environments in Azure ML

Luis Valencia explains what environments are in Azure ML:

An Environment defines Python packages, environment variables, and Docker settings that are used in machine learning experiments, including in data preparation, training, and deployment to a web service. An Environment is managed and versioned in an Azure Machine Learning Workspace. You can update an existing environment and retrieve a version to reuse. Environments are exclusive to the workspace they are created in and can’t be used across different workspaces.

In basic terms for a developer, it’s basically a Docker Image with all the needed dependencies (conda/pip packages) to run your experiment.

A friendly word of advice from some bad experiences: stick with the curated environments as much as you can. Those are easy and rarely fail. Building your own environments from Conda files is a possibility, but it’s an, err, probabilistic exercise as to whether your compute target will actually work or not.
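
If you haven’t worked with them before, here is a minimal sketch using the v1 azureml-core Python SDK; the curated environment name, conda file, and training script are placeholders rather than anything from Luis’s post:

```python
# Minimal sketch with the azureml-core (v1) SDK; assumes a config.json for the
# workspace and that the named curated environment exists in your SDK version.
from azureml.core import Environment, ScriptRunConfig, Workspace

ws = Workspace.from_config()  # reads config.json to locate the workspace

# Option 1: reuse a curated environment (prebuilt and maintained by Microsoft).
curated = Environment.get(workspace=ws, name="AzureML-Minimal")  # placeholder name

# Option 2: build a custom environment from a conda specification file.
custom = Environment.from_conda_specification(
    name="my-training-env",       # hypothetical name
    file_path="environment.yml",  # hypothetical conda file
)
custom.register(workspace=ws)     # versioned in the workspace for later reuse

# Attach whichever environment you chose to a run configuration.
run_config = ScriptRunConfig(
    source_directory=".",
    script="train.py",            # hypothetical training script
    environment=curated,
)
```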

Comments closed

Testing Stock Market Efficiency with Compression Algorithms

Holger von Jouanne-Diedrich has a clever test:

One of the most fiercely fought debates in quantitative finance is whether the stock market (or financial markets in general) is (are) efficient, i.e. whether you can find patterns in them that can be profitably used.

If you want to learn about an ingenious method (that is already present in anyone’s computer) to approach that question, read on!

As soon as I saw the post, my Eugene Fama senses were tingling. The results are not surprising (at least, to anyone who got my reference in the prior sentence), but I did enjoy the rather clever approach to the question.
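
Holger works through it in R; purely to illustrate the mechanics (simulated returns, zlib as the compressor, so a sketch of the idea rather than his code), the test can be as simple as comparing how well the up/down sequence compresses against a shuffled copy of itself:

```python
# Sketch of the idea: an efficient (memoryless) market's up/down sequence should
# compress no better than a shuffled copy of itself; exploitable structure shows
# up as a noticeably smaller compressed size. Data here is simulated, not real.
import zlib

import numpy as np

rng = np.random.default_rng(42)

def compressed_size(updown: np.ndarray) -> int:
    """Compress the up/down sequence (one byte per day), return size in bytes."""
    return len(zlib.compress(updown.astype(np.uint8).tobytes(), level=9))

n = 10_000
random_walk = rng.standard_normal(n)                           # stand-in for an efficient market
trending = rng.standard_normal(n) + np.sin(np.arange(n) / 50)  # returns with slow-moving structure

for label, returns in [("random walk", random_walk), ("trending", trending)]:
    ups = (returns > 0).astype(np.uint8)  # 1 = up day, 0 = down day
    shuffled = rng.permutation(ups)       # same mix of ups and downs, order destroyed
    print(f"{label}: actual={compressed_size(ups)} bytes, "
          f"shuffled={compressed_size(shuffled)} bytes")
# If "actual" comes in well below "shuffled", the compressor found patterns to exploit.
```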

Comments closed

Pipelined Functions in PowerShell

Robert Cain continues a series on functions in PowerShell:

In my previous post, I covered the use of PowerShell Advanced Functions. I highly suggest you read it if you haven’t, it provides some foundational knowledge that will be important to understand for this post.

In this post, we’ll see how to pipeline-enable your functions. Just like a cmdlet, you’ll be able to take input from the pipeline, work with it, then send the output of your function back into the pipeline.

Making your code pipeline-friendly is especially important if you want others to use your functions, as that’s one of the biggest benefits of PowerShell as a language.

Comments closed

Tools and Tips for Accessibility

Daron Yöndem shares insights:

Last week, as a new employee, I went through Microsoft’s internal employee learning portal and found the Accessibility 101 online course. To my surprise, the course did have a good amount of practical information and connected the concept of accessibility nicely to inclusion and diversity. In this post, I want to share a couple of the practical steps to help you step up your accessibility game. If you are where I was, I’m sure you will love these.

Click through for some easy ways to improve presentations and webpages. Most of this is a few minutes’ worth of effort but can pay dividends. On a side note, congrats to Daron for the Microsoft gig. I enjoyed working with him in the past and know he’ll do great there.

Comments closed