Press "Enter" to skip to content

Category: Cloud

Reading Azure DevOps Results in PowerShell

Mark Broadbent doesn’t let the lack of an official PowerShell module get in the way:

In my post Using Azure CLI to query Azure DevOps, I explained how you can use the Azure CLI to query Azure DevOps and obtain useful information on builds, releases, and more. The solution required a certain level of skill with JMESPath to manipulate your result sets, which, as explained, can be a little confusing.

However, once you have a bare-bones result set, it is likely that you will want to consume these results in a more user-friendly environment such as PowerShell so that you can build upon these data sets. I thought this would be an easy thing to do, but as you will see below, it was anything but.

Read on for some thoughts and a sample script.
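
Mark’s post does all of this in PowerShell, which is the natural fit. Purely as a sketch of the same general idea in Python (assuming the Azure DevOps CLI extension is installed and you are already signed in; the organization and project values are placeholders), you can shell out to the CLI and parse the JSON yourself:

    # Sketch: pull recent Azure DevOps build results via the Azure CLI and parse the JSON.
    # Assumes the azure-devops CLI extension is installed and you are signed in (az login).
    import json
    import subprocess

    ORG = "https://dev.azure.com/my-org"   # placeholder organization URL
    PROJECT = "MyProject"                  # placeholder project name

    result = subprocess.run(
        ["az", "pipelines", "build", "list",
         "--organization", ORG, "--project", PROJECT, "--output", "json"],
        capture_output=True, text=True, check=True)

    builds = json.loads(result.stdout)
    for build in builds[:10]:
        # Field names follow the Azure DevOps REST API build resource.
        print(build["buildNumber"], build["status"], build.get("result"))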

Storing Power BI Audit Logs in Blob Storage

Gilbert Quevauvilliers works around a built-in constraint with Power BI Audit Logs:

With the new Power BI Get-PowerBIActivityEvent cmdlet, I wanted to find a way to automate the entire process so that it all runs in the cloud.

One of the current challenges with the audit logs is that they are only retained for 90 days, so if you want to do analysis over a longer period, the log files have to be stored somewhere. Why not use Azure Blob Storage?

Whilst these steps might appear to be rather technical, if you follow them and you have access to an Azure subscription, you can do this too.

Gilbert warns us up front that this will be a lengthy post, and that is quite true. But if you need to hold those audit logs for more than 90 days, this is a great way of doing so.
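
Gilbert’s walkthrough does everything with PowerShell and Azure Automation. Just to illustrate the storage half of the idea, here is a hedged Python sketch (it assumes the azure-storage-blob package; the connection string, container, and file names are placeholders) that writes a day’s worth of exported activity events to Blob Storage:

    # Sketch: upload an exported Power BI activity log file to Azure Blob Storage.
    # Assumes the azure-storage-blob package; connection string, container, and
    # file names are placeholders.
    import datetime
    from azure.storage.blob import BlobServiceClient

    conn_str = "<storage-account-connection-string>"
    service = BlobServiceClient.from_connection_string(conn_str)

    container = "powerbi-audit-logs"             # placeholder container name
    day = datetime.date.today().isoformat()
    blob_name = f"activity-events-{day}.json"    # one file per day keeps the history tidy

    blob = service.get_blob_client(container=container, blob=blob_name)
    with open("activity-events.json", "rb") as f:
        blob.upload_blob(f, overwrite=True)      # overwrite if the same day is re-run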

Connecting to Snowflake with Power BI

Gilbert Quevauvilliers shows us how we can connect Power BI to a Snowflake instance using DirectQuery:

The first thing I did was to install the ODBC Drivers.

I installed the 64-bit driver where I had Power BI Desktop installed, and I also installed it on all the servers where I had the on-premises data gateway installed.

Below is the link that I used, which should always point to the latest version:

https://sfc-repo.snowflakecomputing.com/odbc/win64/latest/index.html

One thing to note: all I did was install the ODBC driver. I did not actually do any configuration of the driver, because that is handled in Power BI Desktop.

Read on for the configuration instructions as well as getting past “it works in Power BI Desktop.”
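
If you want to sanity-check the driver installation outside of Power BI, a quick DSN-less connection test is one option. Here is a hedged Python sketch using pyodbc; the driver name and connection attributes are assumptions based on Snowflake’s Windows ODBC driver, and your account, warehouse, and credentials will differ:

    # Sketch: DSN-less connection test against the Snowflake ODBC driver.
    # The driver name, account, warehouse, and credentials are placeholders/assumptions.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={SnowflakeDSIIDriver};"
        "Server=<account>.snowflakecomputing.com;"
        "Database=DEMO_DB;"
        "Warehouse=DEMO_WH;"
        "Uid=<username>;"
        "Pwd=<password>;"
    )

    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION()")   # simple round trip to prove connectivity
    print(cur.fetchone()[0])
    cur.close()
    conn.close()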

Quick Hits on Azure Databricks Performance

Rayis Imayev has a few thoughts on optimizing delta table-based workloads in Azure Databricks:

2) Enable the Delta cache – spark.databricks.io.cache.enabled = true
There is a very good resource available on configuring this Spark config setting: https://docs.microsoft.com/en-us/azure/databricks/delta/optimizations/delta-cache

And this will be very helpful in your Databricks notebook’s queries when you access the same dataset multiple times. Once you read the dataset for the first time, Spark places it into its internal local storage cache, which speeds up further references to it.

Click through for several more along these lines.
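
For reference, flipping that switch from inside a notebook is a one-liner. Here is a minimal PySpark sketch (the table path is made up) showing the pattern of reading the same Delta data twice so that the second pass can be served from the local cache:

    # Sketch: enable the Delta cache for the current session and read the same
    # Delta data twice; the second pass can be served from the local cache.
    # In a Databricks notebook the `spark` session already exists; the path is made up.
    spark.conf.set("spark.databricks.io.cache.enabled", "true")

    df = spark.read.format("delta").load("/mnt/datalake/sales")   # placeholder path

    df.count()   # first pass reads from remote storage and populates the cache
    df.count()   # later passes over the same files hit the local SSD cache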

Fun with Metaphors: Data Lakehouses

Ben Lorica, et al., have a new metaphor to try out:

Over the past few years at Databricks, we’ve seen a new data management paradigm that emerged independently across many customers and use cases: the lakehouse. In this post we describe this new paradigm and its advantages over previous approaches.

The Data Lake’s Aristotelian counterpart is the Data Swamp. I’m working on a similar comp for the Data Lakehouse (Data Swampboat? Data Swamphouse is too easy), but in the meantime, that one person who goes and slaughters your application’s performance by butchering the data in your Data Lakehouse? That’s a Data Jason.

Azure SQL Hyperscale Auto-Scaling

Davide Mauri explains how to scale Azure SQL Hyperscale automatically:

Azure SQL Hyperscale is the latest architectural evolution of Azure SQL, which has been natively designed to take advantage of the cloud. One of the main key features of this new architecture is the complete separation of Compute Nodes and Storage Nodes. This allows for independent scaling of each, making Hyperscale more flexible and elastic.

In this article I will describe how it is possible to implement a solution to automatically scale your Azure SQL Hyperscale database up or down, so that it can dynamically adapt to different workload levels without requiring manual intervention.

Davide has some test measurements of how much downtime you see and gives you a couple of thoughts on how you can track when it’s time to scale up or down.
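
The core operation behind any solution like this is an ALTER DATABASE that changes the service objective. As a hedged sketch (this is not Davide’s implementation; the connection details and target service objective are placeholders), issuing the scale request from Python via pyodbc could look like this:

    # Sketch: request a scale operation on an Azure SQL Hyperscale database by
    # changing its service objective. Connection details and the target SLO are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:<server>.database.windows.net,1433;"
        "Database=master;Uid=<user>;Pwd=<password>;Encrypt=yes;",
        autocommit=True,   # ALTER DATABASE cannot run inside a user transaction
    )

    cur = conn.cursor()
    # Move the Hyperscale database to 8 vCores (Gen5); the operation is asynchronous
    # and the database stays online while the new compute is prepared.
    cur.execute("ALTER DATABASE [MyHyperscaleDb] MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_8');")
    cur.close()
    conn.close()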

Resource Limitations with Azure Data Factory

Paul Andrew has a public service announcement for us:

As far as I can tell, Microsoft do an excellent job of managing data centre capacity, so I completely understand the reason for having limitations on resources in place. There is no such thing as a limitless cloud platform.

Note: in a lot of cases (as you’ll see in the table below for Data Factory) the MAX limitations are only soft restrictions that can easily be lifted via a support ticket. Please check before raising alerts and project risks.

Click through for the limits, and “contact support” definitely is good advice if you’re expecting to push past those limits.

Parameterizing a Data Factory Linked Service to a REST API

Meagan Longoria had to parameterize a linked service connecting to a REST API recently:

In order to pass dynamic values to a linked service, we need to parameterize the linked service, the dataset, and the activity.

I have a pipeline where I log the pipeline start to a database with a stored procedure, lookup a username in Key Vault, copy data from a REST API to data lake storage, and log the end of the pipeline with a stored procedure. My username and password are stored in separate secrets in Key Vault, so I had to do a lookup with a web activity to get the username. The password is retrieved using Key Vault inside the linked service. Data Factory doesn’t currently support retrieving the username from Key Vault so I had to roll my own Key Vault lookup there.

Click through for the instructions.
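
The “roll my own Key Vault lookup” piece boils down to a secret retrieval, which the web activity performs against the Key Vault REST API. Outside of Data Factory the equivalent call is simple; here is a hedged Python sketch using the Key Vault SDK (the vault URL and secret name are placeholders, and this only illustrates what the lookup returns, not Meagan’s pipeline):

    # Sketch: retrieve a username secret from Azure Key Vault, the same lookup the
    # Data Factory web activity performs via the Key Vault REST API.
    # The vault URL and secret name are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    credential = DefaultAzureCredential()   # managed identity, CLI login, etc.
    client = SecretClient(vault_url="https://my-vault.vault.azure.net", credential=credential)

    username = client.get_secret("api-username").value
    print(username)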

Moving Data Around in Azure Synapse Analytics

Niko Neugebauer looks at some techniques for copying data into a table in an Azure Synapse Analytics SQL Pool:

First of all, let us list some of them (and I am not even attempting to provide all of them, of course):
– INSERT INTO … SELECT FROM … (the most well known one)
– SELECT INTO … FROM … (the most well-known to perform well, since it will create a HEAP while copying most of the properties from the original table(s))
– CREATE TABLE … AS SELECT … (the old way, which must be like 10 years old on PDW/APS & Azure SQL DW, but that has never gotten into a Box Product or Azure SQL Database)
– Polybase (that will use the External Tables & externally allocated data to transfer into Azure SQL DW)
– BCP (good old tested friend that will give you a pain in the neck until you dominate it)
– OPENROWSET / BULK INSERT (some very good and very old friends with complicated histories (who remembers all the code pages?), settings, and an uncertain future, mostly because of their original restrictions, I guess)
– COPY INTO … (the brand new command that will allow you under very neat privileges to copy data from the external storage accounts, much like BULK INSERT)

In this blog post I will simply focus on those features that have not been ported (hopefully just yet): CTAS & COPY INTO.

Read on to see how these two work. Also, I too have wanted CTAS in on-premises SQL Server for years.
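
For a flavor of what those two commands look like, here is a hedged sketch (table names, the storage URL, and the distribution choice are placeholders rather than Niko’s examples) that runs a CTAS and a COPY INTO against a Synapse dedicated SQL pool from Python via pyodbc:

    # Sketch: run a CTAS and a COPY INTO against a Synapse dedicated SQL pool.
    # Object names, the storage URL, and the credential choice are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:<workspace>.sql.azuresynapse.net,1433;"
        "Database=<sqlpool>;Uid=<user>;Pwd=<password>;Encrypt=yes;",
        autocommit=True,
    )
    cur = conn.cursor()

    # CTAS: create a new table from a query, choosing distribution and index up front.
    cur.execute("""
        CREATE TABLE dbo.FactSales_Copy
        WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX)
        AS SELECT * FROM dbo.FactSales;
    """)

    # COPY INTO: load external files straight into an existing table.
    cur.execute("""
        COPY INTO dbo.FactSales
        FROM 'https://<account>.blob.core.windows.net/<container>/sales/'
        WITH (FILE_TYPE = 'PARQUET', CREDENTIAL = (IDENTITY = 'Managed Identity'));
    """)

    cur.close()
    conn.close()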

ACID Transactions with Cosmos DB

Hasan Savran shows how you can use the Cosmos DB SDK to create ACID transactions:

What about Azure Cosmos DB? It’s a NoSQL database, so you probably can’t do ACID transactions, right? WRONG! Azure Cosmos DB has been supporting ACID transactions for some time now. We were able to create ACID transactions by using stored procedures in Cosmos DB. Last year (2019) the Cosmos DB team introduced ACID transactions to the Cosmos DB SDK. Now, we can create transactions by using C#, just like writing transactions with the SqlClient class for SQL Server!

To create an ACID transaction in the Cosmos DB SDK, we need to use the TransactionalBatch object. You need to add all operations in the transaction to the TransactionalBatch object, and all the operations attached to it must share the same partition key. In the following example, I created three objects and attached them to a TransactionalBatch object. To start the transaction, I ran the ExecuteAsync() function. This function runs the transaction and returns the responses for each operation.

I’d think you would need to set a strong consistency level as well.
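
Hasan’s example uses the .NET SDK’s TransactionalBatch. Purely for illustration, the Python SDK later gained a similar capability (azure-cosmos 4.5.0 and up, via execute_item_batch); treat the following as a hedged sketch with made-up names rather than a translation of his code:

    # Sketch: a transactional batch with the azure-cosmos Python SDK (4.5.0+).
    # Every operation shares one partition key; if any operation fails, none are applied.
    # The endpoint, key, database, container, and items are placeholders.
    from azure.cosmos import CosmosClient

    client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
    container = client.get_database_client("SalesDb").get_container_client("Orders")

    pk = "customer-42"   # shared partition key value for every operation in the batch
    batch = [
        ("create", ({"id": "order-1", "customerId": pk, "amount": 10},)),
        ("create", ({"id": "order-2", "customerId": pk, "amount": 20},)),
        ("create", ({"id": "order-3", "customerId": pk, "amount": 30},)),
    ]

    # Executes atomically and returns one response per operation, much like ExecuteAsync() in .NET.
    results = container.execute_item_batch(batch_operations=batch, partition_key=pk)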
