Upon using the Chrome Web Developer Tools to analyze the network calls being made between my browser and the booking service, I stumbled upon an easy-to-use and completely unprotected REST API:
I love the bonus hack at the end.
This is a short post but it can save you some wandering and searching.
Sometimes when you try to find and fix issues with Cloudera Manager you will want to increase the log level to debug so you can see what’s wrong.
The procedure cannot be found in the documentation (or at least cannot be found easily), so here's how it's done:
As you’d expect, going into debug mode generates a lot of data on a real cluster, so use sparingly.
4. A Spark Dataframe is not the same as a Pandas/R Dataframe
Spark Dataframes are specifically designed to use distributed memory to perform operations across a cluster whereas Pandas/R Dataframes can only run on one computer. This means that you need to use a Spark Dataframe to realize the benefits of the cluster when coding in Python or R within Databricks.
This is a nice set of tips for people getting started with Spark and Databricks.
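To illustrate the distinction, here's a rough sketch; it assumes a local pyspark installation, and in a Databricks notebook the `spark` session already exists, so the builder lines are unnecessary:

```python
# Sketch only: assumes pyspark is installed locally.
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()

# A pandas DataFrame lives entirely in one machine's memory.
pdf = pd.DataFrame({"city": ["Oslo", "Lima", "Oslo"], "sales": [100, 250, 75]})

# Converting to a Spark DataFrame distributes the data, so operations
# are planned and executed across the cluster's executors.
sdf = spark.createDataFrame(pdf)
totals = sdf.groupBy("city").sum("sales")
totals.show()

# Collecting results back to the driver returns you to single-machine pandas.
local_df = totals.toPandas()
```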
The following steps show how to create a new column in a table using existing custom function code. This works in Power BI as well as in Power Query in Excel:
The guide is entirely screenshot-driven, so it’s easy to go through.
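For reference, invoking a custom function against each row boils down to a single `Table.AddColumn` call in M. The query, function, and column names below are illustrative:

```
let
    // Illustrative names: a Customers table and a custom function fnCleanPhone
    Source = Excel.CurrentWorkbook(){[Name = "Customers"]}[Content],
    AddedClean = Table.AddColumn(Source, "CleanPhone", each fnCleanPhone([Phone]))
in
    AddedClean
```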
So we have two entries for this stored procedure. I included the statement sql handle to show that each statement handle has its own text. Let’s parse that text to see each statement. I copied the parsing SQL from this Plan Cache article.
This is a good thing to keep in mind if you’re trying to figure out how often a procedure gets called:
SUM on the execution counts grouped only by text might not give you the results you expect.
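As a sketch of the pattern: per-statement rows live in sys.dm_exec_query_stats (one row per statement in the procedure), while procedure-level call counts come from sys.dm_exec_procedure_stats:

```sql
-- One row per statement: summing execution_count over these rows
-- counts statement executions, not procedure calls.
SELECT
    qs.plan_handle,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st;

-- Procedure-level execution counts live in a separate DMV:
SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.execution_count
FROM sys.dm_exec_procedure_stats AS ps;
```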
A great thing about these snippets is that you can add your own and they can be exactly how you want them.
To get started, open the Command Palette with Ctrl+Shift+P and type in ‘snippets’.
Scroll down and find the SQL option. Open it and it will bring you to the SQL.json file in which we’ll be storing our SQL Snippets.
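For reference, an entry in SQL.json follows the standard snippets format; the key, prefix, and body below are whatever you want them to be:

```json
{
    "Select row count": {
        "prefix": "sqlcount",
        "body": [
            "SELECT COUNT(*) AS row_count",
            "FROM ${1:schema}.${2:table};"
        ],
        "description": "Count the rows in a table"
    }
}
```

The `$1`, `$2` placeholders are tab stops; `${1:schema}` pre-fills the stop with default text you can type over.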
I had to migrate a bunch of SSMS snippets to Azure Data Studio and was not that happy with the experience, especially for some of the more complicated snippets.
So, when specifying a “new_reseed_value”, the possible scenarios covered are:
1. Rows exist
2. No rows due to none inserted since the table was created
3. No rows due to all rows having been removed with TRUNCATE TABLE
What’s missing? The following scenario:
No rows due to all rows having been deleted with DELETE
Click through to see how DBCC CHECKIDENT behaves differently depending upon the scenario.
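As a quick sketch of the difference, based on the documented reseed behavior:

```sql
CREATE TABLE dbo.IdentityTest (id INT IDENTITY(1, 1), val CHAR(1));
INSERT INTO dbo.IdentityTest (val) VALUES ('a'), ('b');

DELETE FROM dbo.IdentityTest;                     -- emptied via DELETE
DBCC CHECKIDENT ('dbo.IdentityTest', RESEED, 10);
INSERT INTO dbo.IdentityTest (val) VALUES ('c');  -- new row gets id = 11

TRUNCATE TABLE dbo.IdentityTest;                  -- emptied via TRUNCATE
DBCC CHECKIDENT ('dbo.IdentityTest', RESEED, 10);
INSERT INTO dbo.IdentityTest (val) VALUES ('d');  -- new row gets id = 10

DROP TABLE dbo.IdentityTest;
```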
In a previous post, I gave an overview of integration tests and documenting integration points. In this post, I will give a practical example of developing and performing integration tests with the Pester framework for PowerShell. With a data platform, especially one hosted in Azure, it’s important to test that the Azure resources in your environment have been deployed and configured correctly. After we’ve done this, we can test the integration points on the platform, confident that all the components have been deployed.
The code for performing integration tests is written in PowerShell using the Pester Framework. The tests are run through Azure DevOps pipelines and are designed to test documented integration points. The PowerShell scripts, which contain the mechanism for executing tests, rely upon receiving the actual test definitions from a metadata database.
Click through for the script.
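To give a flavor of what such a test can look like, here's a minimal Pester sketch; the resource group and storage account names are hypothetical, and it assumes the Az module and an authenticated session:

```powershell
# Sketch: verify a deployed storage account matches its documented configuration.
Describe "Storage account deployment" {
    BeforeAll {
        $account = Get-AzStorageAccount -ResourceGroupName "rg-dataplatform" -Name "stdataplatform"
    }

    It "exists in the expected location" {
        $account.Location | Should -Be "westeurope"
    }

    It "enforces HTTPS-only traffic" {
        $account.EnableHttpsTrafficOnly | Should -BeTrue
    }
}
```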
One of the newer features in the Power BI Admin Portal is the ability to view all of a tenant’s Workspaces. As I was browsing through the collection of workspaces, I noticed several marked as Orphaned. What is an orphaned workspace, and how does it occur?
I was expecting orphaned workspaces to be a new thing where you pay for an Azure service using a distributed blockchain technology called Gruel (or maybe Grool).