Press "Enter" to skip to content

Month: February 2020

Delta Lake and ACID Properties

Kundan Kumarr notes that Spark’s Delta Lake allows for ACID transactions:

DeltaLog is the crux of Delta Lake, ensuring atomicity, consistency, isolation, and durability of user-initiated transactions. DeltaLog is an ordered record of transactions: every transaction performed since the inception of a Delta Lake table has an entry in the DeltaLog (also known as the Delta Lake transaction log). It acts as a single source of truth, giving users access to the latest version of a DeltaTable’s state. It provides serializability, the strongest isolation level. Let’s see how DeltaLog ensures ACID transactions.
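
To make the log tangible, here is a minimal PySpark sketch (my own, not from the post) that commits two writes to a Delta table and then lists the ordered commit files in its _delta_log directory. It assumes the delta-spark package is installed, and the table path is hypothetical:

import os
from pyspark.sql import SparkSession

# Standard configuration for using Delta Lake with open-source Spark.
spark = (
    SparkSession.builder
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

path = "/tmp/demo_delta_table"  # hypothetical local path
spark.range(0, 5).write.format("delta").mode("overwrite").save(path)  # transaction 0
spark.range(5, 10).write.format("delta").mode("append").save(path)    # transaction 1

# Every committed transaction appends an ordered JSON entry to the DeltaLog:
log_dir = os.path.join(path, "_delta_log")
print(sorted(f for f in os.listdir(log_dir) if f.endswith(".json")))
# e.g. ['00000000000000000000.json', '00000000000000000001.json']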

Click through for the explanation.

Check Those R Repos

John Mount has a public service announcement:

In a lot of our R writing we casually say “install from CRAN using install.packages('PKGNAME')” or “update your packages by using update.packages(ask = FALSE, checkBuilt = TRUE) (and answering ‘no’ to all questions about compiling).”

We recently became aware that for some users this isn’t complete advice.

The above depends on your R install pointing to a repository that is in fact up to date. To check what repositories you are using, please use the command options('repos').

The specific example here is around the Microsoft R Archive Network (MRAN), which stays at fixed dates. This is for a good reason: it helps companies standardize on a known set of R package versions by default. That way, you don’t have version 1.8 of a package in dev and then get 1.9 in production, only to find out that something broke between the two versions.

Should DBAs Learn Kubernetes?

Randolph West makes me violate Betteridge’s Law of Headlines:

So this question, whether a SQL Server DBA really needs to know about Kubernetes, is really a question about whether DBAs need to know about the plumbing that runs the infrastructure upon which our databases reside.

In October 2018 I asked, “What is a DBA anyway?” It was a week after another post where I declared the DBA role “history.” My answer is:

Yes! You need to know Kubernetes if you’re a SQL Server DBA.

I agree with Randolph that it’s useful for a DBA to have at least some working understanding of Kubernetes, especially around being able to troubleshoot database issues on the platform. Read on for Randolph’s take on the matter.

RESULT_SCAN() in Snowflake

Koen Verbeeck introduces us to the RESULT_SCAN() function in Snowflake DB:

I’m doing a little series on some of the nice features/capabilities in Snowflake (the cloud data warehouse). In each part, I’ll highlight something that I think is interesting enough to share. It might be some SQL function that I’d really like to have in SQL Server; it might be something else.

This post builds upon part 6 of the series, which dealt with query history and explained how Snowflake caches query results. You can find a query in the history and take a look at what it returned. Using the RESULT_SCAN table function, you can do this with SQL. Let’s take a look at an example.
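
As a rough Python illustration of the same idea (using the snowflake-connector-python package; the connection parameters are placeholders, not from the post), you can grab the query ID of a statement you just ran and hand it to RESULT_SCAN:

import snowflake.connector

# Placeholder credentials; substitute your own account details.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    warehouse="my_wh", database="my_db", schema="public",
)
cur = conn.cursor()

cur.execute("SELECT CURRENT_DATE() AS d, 42 AS answer")
query_id = cur.sfqid  # the ID Snowflake assigned to that statement

# Re-read the cached result set of the earlier query via RESULT_SCAN:
cur.execute(f"SELECT * FROM TABLE(RESULT_SCAN('{query_id}'))")
print(cur.fetchall())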

This is an interesting function. Click through to see it in action.

Warning Signs with Power BI Development and Administration

Brett Powell has a great post warning you of common pitfalls with Power BI implementations:

Overly Broad User Classifications

It might be tempting to classify users in the organization into only two segments or personas, such as ‘end users’ and ‘creators’. You might logically reason that ‘creators’ will be assigned Pro licenses and be trained to develop and publish content, while ‘end users’ will be trained on how to consume and access content.

This simple binary distinction may be appropriate when you’re first getting started with Power BI, but I’d suggest a bit more granularity, reflecting the significantly different skills, features, and complexity associated with developing different kinds of Power BI content. At a minimum, I split the creators into ‘Report Authors’ and ‘Data Modelers’, with the report authors learning to build visually rich and intuitive user experiences on top of the robust, secure, and performant datasets created by the data modelers.

There’s a lot of good reading in here.

Finding SMO Objects with Specific Properties

Sander Stad wants to find a specific subset of SMO objects:

In some situations, I want to search through lots of objects to look for certain properties in SMO (SQL Server Management Objects).

That was the case here: I wanted to know all the different objects that have a property called “Schema”.

But what to do with all those different properties and methods we could look up? I mean, there are hundreds of objects in there, and each of them has many methods and properties.
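
Sander’s script is in PowerShell; purely as an illustration of the underlying reflection idea, here is a hedged Python sketch that walks a module’s classes looking for ones that expose a property with a given name:

import inspect

def classes_with_property(module, prop_name):
    # Return the names of classes in a module exposing the given property.
    matches = []
    for name, cls in inspect.getmembers(module, inspect.isclass):
        if isinstance(getattr(cls, prop_name, None), property):
            matches.append(name)
    return matches

# Usage against any module, e.g. pathlib (whose path classes expose .suffix):
import pathlib
print(classes_with_property(pathlib, "suffix"))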

Click through for the PowerShell script.

Troubleshooting Slow Power BI Report Server Reports

Jamie Wick helps us figure out why that Power BI Report Server report is loading so slowly:

Troubleshooting “slow” reports in Power BI Report Server (or SQL Server Reporting Services) can be an arduous task. End users are often unable to provide detailed (or reliable) data showing that a report took longer to load today than it did the last time it was run. Even if a user states that the report is now taking 10 seconds longer to load, that additional time needs to be attributed to a specific step in the report generation process before it can be improved/fixed.

In the report server database (ReportServer by default) there is a view (ExecutionLog) that can provide detailed statistics about each execution of a report. Note: the ExecutionLog3 view is the newest/current version; the ExecutionLog and ExecutionLog2 views exist for backwards compatibility. By default, execution log entries are retained for 60 days.
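
For instance, here is a hedged pyodbc sketch (my own; the server name is a placeholder) that pulls the timing breakdown from ExecutionLog3, whose separate retrieval, processing, and rendering columns are what let you attribute a slowdown to a specific step:

import pyodbc

# Placeholder server name; ReportServer is the default database name.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myreportserver;DATABASE=ReportServer;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Time spent is broken out per step of the report generation process.
cur.execute("""
    SELECT TOP (20)
           ItemPath, TimeStart,
           TimeDataRetrieval, TimeProcessing, TimeRendering,
           Status
    FROM   dbo.ExecutionLog3
    ORDER  BY TimeStart DESC
""")
for row in cur.fetchall():
    print(row)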

The view that Jamie shows also works for SQL Server Reporting Services reports, so it can help there as well.

Connecting to Snowflake with Power BI

Gilbert Quevauvilliers shows us how we can connect to a Snowflake DB instance from Power BI using DirectQuery:

The first thing I did was to install the ODBC Drivers.

I installed the 64-bit drivers where I had Power BI Desktop installed, and I also installed them on all the servers where I had the on-premises data gateway installed.

Below is the link that I used, which should always point to the latest version:

https://sfc-repo.snowflakecomputing.com/odbc/win64/latest/index.html

One thing to note is that all I did was install the ODBC driver; I did not actually do any configuration of it, because that configuration happens in Power BI Desktop.
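
One quick way to sanity-check the install (my addition, not from Gilbert’s post) is to ask pyodbc which ODBC drivers are registered:

import pyodbc

# After a successful install, the Snowflake driver should appear here
# (typically under a name like 'SnowflakeDSIIDriver').
print([d for d in pyodbc.drivers() if "Snowflake" in d])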

Read on for the configuration instructions, as well as how to get past “it works in Power BI Desktop.”

Fun with Metaphors: Data Lakehouses

Ben Lorica, et al., have a new metaphor to try out:

Over the past few years at Databricks, we’ve seen a new data management paradigm that emerged independently across many customers and use cases: the lakehouse. In this post we describe this new paradigm and its advantages over previous approaches.

The Data Lake’s Aristotelian counterpart is the Data Swamp. I’m working on a similar comp for the Data Lakehouse (Data Swampboat? Data Swamphouse is too easy), but in the meantime, that one person who goes and slaughters your application’s performance by butchering the data in your Data Lakehouse? That’s a Data Jason.

Quick Hits on Azure Databricks Performance

Rayis Imayev has a few thoughts on optimizing delta table-based workloads in Azure Databricks:

2) Enable the Delta cache – spark.databricks.io.cache.enabled = true
There is a very good resource available on configuring this Spark config setting: https://docs.microsoft.com/en-us/azure/databricks/delta/optimizations/delta-cache

And this will be very helpful for your Databricks notebook queries when you access the same dataset multiple times. Once you read the dataset for the first time, Spark places it into the internal local storage cache, which speeds up further references to it.
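
In a Databricks notebook (where spark is the session the notebook provides), the pattern looks roughly like this; the table path is hypothetical:

# Enable the Delta cache for this session; the cache itself lives on
# the workers' local disks.
spark.conf.set("spark.databricks.io.cache.enabled", "true")

df = spark.read.format("delta").load("/mnt/data/my_table")  # hypothetical path
df.count()  # the first read pulls data from remote storage and warms the cache
df.count()  # repeated reads of the same files are served from the local cache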

Click through for several more along these lines.
