Press "Enter" to skip to content

Month: August 2022

Azure Functions and Azure Database Options

Sarah Dutkiewicz continues a series on learning Azure. First up, Azure Functions:

Azure Functions are not something you’ll see rendered on a front-end somewhere. They’re a serverless solution used for doing things in the back-end and the middle tier. 

After that, Sarah touches on database options:

There are many databases on Azure – including relational data in Azure SQL, NoSQL with Azure Cosmos DB, and even some popular databases in the open source realm such as MySQL and PostgreSQL. These are just a few of the data stores available. Check this page of Azure Databases for a matrix of the databases available compared by their features.

Click through for quite a few links and information on when to use what.

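To picture the serverless model Sarah describes, here is a rough sketch of an HTTP-triggered function using the Python v2 programming model; the route name and greeting logic are made up for illustration.

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.route(route="hello")
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # Code runs only when the trigger fires; there is no server to manage.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")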

Neo4j Imports and Case Sensitivity

Steve Jones is getting me in a ranting mood:

I kept editing the file and trying different things. I compared what I had locally with what was on GitHub. Eventually, I realized this is the issue:

{employeeID:row.EmployeeID}

In the GitHub csv, the first row has headers with EmployeeID. In my local file, the header is “employeeID” (lower case). As soon as I edited this, it worked.

Case sensitivity is a big historical mistake.

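For reference, that property map presumably comes from a LOAD CSV import, where column references like row.EmployeeID must match the CSV header's casing exactly. A minimal sketch using the Python driver (connection details, file name, and label are placeholders):

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    # row.EmployeeID only resolves if the CSV header is spelled EmployeeID
    session.run(
        "LOAD CSV WITH HEADERS FROM 'file:///employees.csv' AS row "
        "MERGE (e:Employee {employeeID: row.EmployeeID})"
    )
driver.close()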

Theming and Contrast Adjustments for Diffify

Tim Brock continues a series on theming diffify:

It’s difficult to design a website that is “just right” for everyone. For instance, while reds and greens can be difficult to discern for some dichromats and anomalous trichromats, most trichromats have no such problem (peak daylight sensitivity lies in the yellow part of the spectrum, between red and green). Moreover, these colours also have common cultural semantics (though these do, of course, vary by culture). We also care about aesthetics.

Because of this conflict and more besides, we decided the best approach to making the site more accessible was through “theming”. 

Click through to see what this entails.


Serverless Compute for Databricks SQL

Nikhil Jethava and Shankar Sivadasan make an announcement:

We are excited to announce the preview of Serverless compute for Databricks SQL (DBSQL) on Azure Databricks. DBSQL Serverless makes it easy to get started with data warehousing on the lakehouse. Serverless compute for DBSQL helps address challenges customers face with cluster startup time, capacity management, and infrastructure costs.

Click through for more details and a short video. Azure Synapse Analytics and Databricks are definitely going head-to-head in the modern data warehousing space and I’m fine with that—hopefully it makes both products better as a result.


Multi-Developer Power BI Development

Reza Rad architects a solution for multiple developers working on a Power BI project:

Before I start explaining the architecture, it is important to understand the challenge and think about how to solve it. The default usage of Power BI involves getting data imported into the Power BI data model and then visualizing it. Although there are other modes and other connection types, import is the most popular option. However, there are some challenges with having everything in a single model in one PBIX file. Here are some:

– Multiple developers cannot work on one PBIX file at the same time. Multi-Developer issue.

– Integrating the single PBIX file with another application or dataset would be very hard. High Maintenance issue.

– All data transformations are happening inside the model, and the refresh time would be slower.

– The only way to expand visualization would be by adding pages to the model, and you will end up with hundreds of pages after some time.

– Every change, even a small change in the visualization, means deploying the entire model.

– Creating a separate Power BI file that references parts of this model would not be possible; as a result, you would need to make a lot of duplicates, which brings back the high-maintenance issue.

– If you want to re-use some of the tables and calculations of this file in other files in the future, it won’t be easy to maintain when everything is in one file.

– And many other issues.

After laying out all of the challenges, Reza puts together a plan to resolve them.


Updates to AzureDevOps-AzureSQLDatabase Repo

Kevin Chant updates a repo:

In this post I want to cover some significant updates to an Azure SQL Database repository that I have been doing for one of the public GitHub repositories that I share.

I have updated the AzureDevOps-AzureSQLDatabase repository, which contains an example of a SQL Server database project that you can use to perform CI/CD on an Azure SQL Database using Azure DevOps.

It does this by using the popular state-based migration method of creating a dacpac file based on the contents of a database project. From there, the dacpac file can be used to update one or more databases.

Click through for those updates.

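One way to run that last step outside a pipeline is with SqlPackage; a rough sketch, where the paths, server name, and credentials are hypothetical:

import subprocess

# Push the desired state captured in the dacpac to the target Azure SQL Database
subprocess.run(
    [
        "sqlpackage",
        "/Action:Publish",
        "/SourceFile:bin/Release/MyDatabase.dacpac",
        "/TargetServerName:myserver.database.windows.net",
        "/TargetDatabaseName:MyDatabase",
        "/TargetUser:sqladmin",
        "/TargetPassword:<password>",
    ],
    check=True,
)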

Improvements to GENERATE_SERIES

Erik Darling notes some improvements:

With the release of CTP 2.1, the problems that we saw the first time around are all gone. But there are still a couple small caveats that you should be aware of.

There has also been a change in the way you call the function: you no longer need the START and STOP identifiers.

There are still some limitations, but it does look like the function is considerably better in CTP 2.1.

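As a quick sketch of the syntax change (the connection details are placeholders):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;TrustServerCertificate=yes;"
)
# CTP 2.0 wanted named arguments: GENERATE_SERIES(START = 1, STOP = 10, STEP = 2)
# CTP 2.1 takes plain positional arguments:
rows = conn.execute("SELECT value FROM GENERATE_SERIES(1, 10, 2);").fetchall()
print([r.value for r in rows])  # [1, 3, 5, 7, 9]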

Power BI Enhanced Refresh API and Custom Connectors

Chris Webb starts a new series:

I love the new Power BI Enhanced Refresh API: it allows you to do things like refresh individual tables in your dataset, override incremental refresh policies, control the amount of parallelism, cancel refreshes, and a lot more, while being easier to use than the XMLA Endpoint. However, like the XMLA Endpoint, one problem remains: how can you schedule a dataset refresh using it? One option is to create a custom connector for Power Automate (similar to what I described here for the Export API, before the Power BI export actions for Power Automate had been released): this not only allows you to schedule more complex refreshes but also gives you more flexibility over scheduling, letting you do things like send emails if refreshes fail.

Read on for a link to an in-depth guide on creating a custom connector as well as a few notes on the topic.

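For a sense of what the API expects, a refresh is a POST against the dataset's refreshes endpoint; the group and dataset IDs, table name, and token below are placeholders:

import requests

token = "<Azure AD access token>"  # placeholder
url = (
    "https://api.powerbi.com/v1.0/myorg/groups/<groupId>"
    "/datasets/<datasetId>/refreshes"
)
body = {
    "type": "Full",
    "commitMode": "transactional",
    "maxParallelism": 4,
    "objects": [{"table": "Sales"}],  # refresh just one table
}
resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()  # the service accepts the request and refreshes asynchronously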

Azure Databricks Initialization Scripts

Alex Crampton explains how initialization scripts work in Azure Databricks:

This blog will demonstrate the use of cluster-scoped initialisation scripts for Azure Databricks. An example will run through how to configure an initialisation script to install libraries that are not included in the Azure Databricks runtime environment onto a cluster. It will cover how to do this firstly using the Databricks UI, followed by how to include it in your CI/CD solutions.

Read on for some examples.

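A hedged sketch of the non-UI route, where the workspace URL, token, script path, and pip package are all made up for illustration: upload the script to DBFS and point the cluster spec at it.

import base64
import requests

host = "https://<workspace>.azuredatabricks.net"  # placeholder
token = "<personal access token>"  # placeholder
script = "#!/bin/bash\n/databricks/python/bin/pip install great_expectations\n"
requests.post(
    f"{host}/api/2.0/dbfs/put",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "path": "/databricks/init/install-libs.sh",
        "contents": base64.b64encode(script.encode()).decode(),
        "overwrite": True,
    },
).raise_for_status()
# The cluster spec then references it:
# "init_scripts": [{"dbfs": {"destination": "dbfs:/databricks/init/install-libs.sh"}}]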

Security Practices for Delta Sharing

Andrew Weaver, et al, share some advice:

When you enable Delta Sharing, you configure the token lifetime for recipient credentials. If you set the token lifetime to 0, recipient tokens never expire.

Setting the appropriate token lifetime is critically important from a regulatory, compliance, and reputational standpoint. Having a token that never expires is a huge risk; therefore, using short-lived tokens is the recommended best practice. It is far easier to grant a new token to a recipient whose token has expired than it is to investigate the use of a token whose lifetime has been improperly set.

Click through for eight such tips.

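As a very rough sketch of where that setting lives (the endpoint, field name, host, and one-day value here are my assumptions, not from the post), the recipient token lifetime is configured on the metastore:

import requests

host = "https://<workspace>.azuredatabricks.net"  # placeholder
token = "<personal access token>"  # placeholder
requests.patch(
    f"{host}/api/2.1/unity-catalog/metastores/<metastore-id>",
    headers={"Authorization": f"Bearer {token}"},
    # 86400 seconds = 1 day; anything but 0, which would mean tokens never expire
    json={"delta_sharing_recipient_token_lifetime_in_seconds": 86400},
).raise_for_status()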