Press "Enter" to skip to content

Month: December 2022

Column Exclusion and Rename in Snowflake

Kevin Wilkie plays duck-duck-goose with columns:

With Snowflake, we could do many different things that we’re not used to seeing with a SELECT statement. We’re all used to seeing this – SELECT * and it shows all kinds of columns.

With Snowflake, we can tell Snowflake NOT to show certain columns by using the EXCLUDE operator.

Read on to see how it works and the specific requirements around its operation. In addition, Kevin shows a way to perform aliasing via RENAME.
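
For a sense of the syntax, here is a minimal sketch against a hypothetical table; EXCLUDE and RENAME are the keywords Kevin covers, but the table and column names below are made up:

```sql
-- Hypothetical table and columns, for illustration only.
-- Drop a couple of columns from the star:
SELECT * EXCLUDE (created_at, updated_at)
FROM customers;

-- Exclude one column and alias another in the same star;
-- EXCLUDE comes before RENAME when both are used:
SELECT * EXCLUDE (ssn) RENAME (cust_nm AS customer_name)
FROM customers;
```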

Identifying Rows in sp_wait_for_database_copy_sync

Jose Manuel Jurado Diaz troubleshoots a problem:

As you could see in this public documentation Auto-failover groups overview & best practices – Azure SQL Database | Microsoft Learn about sp_wait_for_database_copy_sync “sp_wait_for_database_copy_sync prevents data loss after geo-failover for specific transactions, but does not guarantee full synchronization for read access. The delay caused by a sp_wait_for_database_copy_sync procedure call can be significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.”

Our customer asked about several scenarios to understand this behaviour and also to verify if it is possible to identify the rows that have not been synced. For this, I developed a POC to test it:

Read on to see what you’d need to do.
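
For reference, the documented call pattern looks like the following sketch; the server and database names are placeholders, and the procedure is run against the primary after the transaction you want protected has committed:

```sql
-- Placeholder names for the geo-secondary server and the database in the failover group.
EXEC sys.sp_wait_for_database_copy_sync
    @target_server   = N'myserver-secondary.database.windows.net',
    @target_database = N'MyDatabase';
```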

FHIR and Azure Health Services

Steve Hughes provides an overview of FHIR and what Azure has to offer:

With the recent updated mandates in the healthcare environment in the United States, Microsoft has continued to expand its capability to support the FHIR standard for integrating healthcare data. While the standard is well documented and Microsoft’s capabilities are expansive, it falls on data professionals to interpret that data and build meaningful reports and produce meaningful insights from the data as it is collected and integrated across environments. This requires a good working knowledge of JSON in SQL to manipulate complex data models. In the session, we did a short review of the FHIR standard and the overall implementation of FHIR in Azure. From there we reviewed the resulting data in the data lake and in Synapse. That was followed up with an overview into the heart of complex SQL using JSON functions in Synapse. Whether or not you are active in healthcare today, this will be an enlightening session on how to use JSON SQL functions within the Azure SQL platforms.

Read on to learn more.
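
To give a flavor of the JSON work involved, here is a hedged sketch of shredding a FHIR Patient resource with T-SQL JSON functions; the table and column names are hypothetical rather than the session’s actual schema:

```sql
-- Hypothetical staging table holding one FHIR Patient resource per row as JSON.
SELECT
    JSON_VALUE(p.resource_json, '$.id')        AS patient_id,
    JSON_VALUE(p.resource_json, '$.birthDate') AS birth_date,
    n.family_name
FROM dbo.PatientResources AS p
CROSS APPLY OPENJSON(p.resource_json, '$.name')
    WITH (family_name nvarchar(200) '$.family') AS n;
```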

Finding “Stolen” CPU on SQL Server

Ajay Dwivedi has a public service announcement for us:

The above panel shows any SQL Server that needs DBA attention. But there is one server highlighted in the above screenshot showing a significant gap between OS CPU and SQL CPU. Any gap of over 15-20% between OS and SQL CPU consumption is something that should be investigated. This was an indicator that some other non-sqlservr process was consuming high CPU on the host.

Friends don’t let friends run anything other than SQL Server on a SQL Server machine.
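
One common way to see that gap from inside SQL Server (not necessarily Ajay’s exact approach) is the scheduler monitor ring buffer, which records SQL Server’s CPU percentage and the system idle percentage roughly once a minute; whatever is left over belongs to other processes on the host:

```sql
-- Recent CPU samples: SQL Server CPU, idle CPU, and the remainder used by other processes.
SELECT TOP (30)
    DATEADD(ms, rb.[timestamp] - si.ms_ticks, SYSDATETIME()) AS sample_time,
    x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
    x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')         AS idle_pct,
    100
      - x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')
      - x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS other_process_cpu_pct
FROM sys.dm_os_ring_buffers AS rb
CROSS JOIN sys.dm_os_sys_info AS si
CROSS APPLY (SELECT CONVERT(xml, rb.record) AS rec) AS x
WHERE rb.ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
ORDER BY rb.[timestamp] DESC;
```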

Pipelines and Jobs in Azure ML

Tomaz Kastrun continues an advent on Azure ML. Day 11 covers pipelines:

A pipeline is a set of instructions (or a workflow) for executing a particular piece of work in a machine learning task. The idea behind pipelines is that they help a team of data scientists and machine learning engineers standardize their workflow and incorporate best practices for preparing data, producing training models, executing the models, and deploying them. Pipelines help build an efficient workflow in such a way that it can be reused.

The idea behind it is to split a machine learning process into smaller tasks: a multistep workflow, where each step is a separate component that can be developed, upgraded, optimised, configured, automated, and deleted separately. These steps, connected through interfaces, form a workflow.

Day 12 makes us get a job:

An Azure ML job executes a task against a specified compute target. This is also how the job is created. By configuring a new job, you can also scale out model training, since both single-node and distributed training are available.

A simple command job would be to execute a command in a Docker container. Further parameter sweeping can be executed by specifying it in the job itself.
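
For a concrete feel for a command job, here is a minimal sketch using the Azure ML Python SDK v2 (azure-ai-ml); the workspace details, compute target, environment, and training script are placeholders rather than anything from Tomaz’s posts:

```python
# Minimal sketch of submitting a command job with the Azure ML SDK v2.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Connect to a workspace (all three identifiers below are placeholders).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Define a job: run a training script in a curated environment on a compute target.
job = command(
    code="./src",                                        # folder containing train.py
    command="python train.py --epochs ${{inputs.epochs}}",
    inputs={"epochs": 10},
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="cpu-cluster",                               # placeholder compute target
    display_name="train-model-job",
)

# Submit the job and print a link to it in the studio.
returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)
```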

DCountIf in KQL

Robert Cain continues a series on KQL:

In the previous post of this series, Fun With KQL – DCount, we saw how to use the dcount function to get an estimated count of rows for an incoming dataset.

It’s common though to want to filter out certain rows from the count. While you could do the filtering before getting to the dcount, there’s an alternative function that allows you to do the filtering right within it: dcountif.

Read on to learn more about how this function works, as well as several useful examples.
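
As a hedged sketch of the shape of the function, here is an example against the Perf table in the Log Analytics demo environment (not necessarily Robert’s exact query): a distinct count of computers, plus a distinct count of only the computers whose CPU counter went above 80 percent.

```kql
Perf
| where CounterName == "% Processor Time"
| summarize DistinctComputers = dcount(Computer),
            BusyComputers     = dcountif(Computer, CounterValue > 80)
         by ObjectName
```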

Sharing Excel Reports Connected to Power BI Datasets

Chris Webb makes us take a step back and think:

I love Excel as an alternative to Power BI reports, especially now that Excel Online allows you to interact with browser-based Excel reports that use Power BI datasets or the Excel Data Model (aka Power Pivot) as a source. After all, why try to recreate a PivotTable in a Power BI report when you can give your users an actual PivotTable? You can publish Excel reports to Power BI and view them there; however, if you want to share these reports using OneDrive for Business, outside Power BI, you’ll need to think carefully about how you do so before you allow other people to view the report.

Click through for more information.

Disabling Inline Database Log Flushes

Bob Dorr talks about a trace flag:

SQL Server optimizes database log file flush requests, performing these flush requests inline on the active worker. Certain patterns of log record activity may encounter increased spinlock contention while performing the log cache block flush activities.

Trace flag -T8904 (a startup-only trace flag) disables inline log flush, limiting the contention possibility from many workers to the subset of background LogWriter workers. When the trace flag is enabled, the worker adding log records marks the log cache block to be flushed, and a background LogWriter worker performs the flush activity.

Read on to learn more, including a call-out to Curated SQL favorite Lonny Niederstadt.
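
Because it is a startup-only trace flag, it cannot simply be switched on at runtime with DBCC TRACEON; a minimal sketch of putting it in place looks like this:

```sql
-- Add -T8904 to the SQL Server service startup parameters
-- (for example, via SQL Server Configuration Manager) and restart the instance.
-- After the restart, confirm the trace flag is active globally:
DBCC TRACESTATUS (8904, -1);
```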

Changes to the Power BI Datamarts UI

Dennes Torres notices the difference between the two pictures:

I wrote an article about Power BI Datamarts which was released together with the feature, at the exact moment it was being announced at a conference. Of course, this feature has been evolving since that time.

I would emphasize the performance: the general UI performance seems way better than when it was first released. Let’s talk about some more news on Power BI Datamarts, especially on the UI.

Read on for an overview of those changes.

Encryption Features in SQL Server

Matthew McGiffen gives us a high-level overview of what’s available in terms of encryption in SQL Server:

SQL Server has had the native ability to encrypt data since SQL Server 2005. This included functionality that could be used to encrypt individual items and columns of data as well as the Transparent Data Encryption (TDE) feature which was available with the enterprise edition of SQL Server and could be used to encrypt all data where it is stored on disk. We then didn’t see significant additions to the features available for encryption until 2016 when Microsoft added Always Encrypted for column encryption. In SQL Server 2019 Microsoft made TDE available in standard edition, and also in SQL Server 2019 the ability to use enclaves was added to Always Encrypted to improve the available functionality for interacting with encrypted data. Finally in SQL Server 2022 further enhancements were made to the set of functionality available when working with Always Encrypted with enclaves.

Click through for a table and some details.
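
As a small taste of one feature from that list, here is a minimal sketch of enabling Transparent Data Encryption on a database; the certificate name, database name, and password are placeholders:

```sql
-- Create a database master key and a certificate in master (placeholder names).
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<use a strong password here>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO

-- Create the database encryption key and turn encryption on.
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO

ALTER DATABASE MyDatabase SET ENCRYPTION ON;
-- In practice, back up the certificate and its private key before relying on TDE.
```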
