Press "Enter" to skip to content

Month: April 2021

Pre-Loading SSAS Databases into Memory Post-Restart

Nigel Foulkes-Nock explains why that first query after restarting SSAS can be slow:

When the SQL Server Analysis Services (SSAS) Tabular service is started, it can take a long time before it is ready to be queried. This can cause delays to the service, not to mention confusion.

This blog post will explain what is happening during this time and a method that can be used to improve it. It’s worth mentioning that the SSAS Tabular databases this has been used on are quite large (>100 GB).

Click through for the answer, as well as a technique to warm up those servers so an end user doesn’t wind up being the one to pay for this wait.

sp_QuickieStore in Action

Erik Darling has a new stored procedure. First up, an introduction:

If you’ve ever tried to write a query to find stuff in Query Store and quickly gotten frustrated, you’re not alone.

Whoever designed that schema hates you. It’s probably the same person who designed Extended Events.

I know, there’s a GUI. The stuff you can find in there is definitely useful, but I often find myself wishing it did a little bit more. Asking for additional features often feels like porridge-begging, and even if something new did get added eventually, it wouldn’t help you in the moment.

With all that in mind, I wrote sp_QuickieStore to try to take some of that pain away. As of this post, it’s V1. There will likely be changes and fixes and all that good stuff, but right now it’s good enough for an introduction.

Erik then shows off the results:

Under the more concise mode, you get one set of results back that combines metrics from query_store_runtime_stats, along with query text, query plan, and context settings. Some previews below. There’s a lot more in the actual results, but the screen caps would get very repetitive.
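
For a sense of what the procedure saves you from, here is a rough sketch of the kind of hand-written join against the Query Store catalog views that it wraps up for you (this is an illustration, not sp_QuickieStore's actual query):

```sql
-- One row per plan per runtime-stats interval: query text, plan,
-- context settings, and a couple of runtime metrics, slowest first.
SELECT TOP (10)
    qsqt.query_sql_text,
    TRY_CONVERT(xml, qsp.query_plan) AS query_plan,
    qsq.context_settings_id,
    qsrs.count_executions,
    qsrs.avg_duration / 1000.0 AS avg_duration_ms
FROM sys.query_store_query AS qsq
JOIN sys.query_store_query_text AS qsqt
    ON qsqt.query_text_id = qsq.query_text_id
JOIN sys.query_store_plan AS qsp
    ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats AS qsrs
    ON qsrs.plan_id = qsp.plan_id
ORDER BY qsrs.avg_duration DESC;
```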

This looks really interesting, so go check it out.

DirectQuery on REST APIs

Chris Webb illuminates us:

One of the most common questions I get asked is “How can I use Power BI in DirectQuery mode on top of a REST API?”. This seems like a reasonable thing to do but almost everyone who tries it will fail, and in this post I will explain why.

To answer this question we first of all have to review the two main ways of working with data in Power BI: Import mode and DirectQuery mode. In Import mode data is cached in Power BI’s own internal database and all the DAX queries that are generated by your reports are answered from there. In DirectQuery mode no data is stored inside Power BI; instead, when a report is run and DAX queries are fired off against your dataset, Power BI in turn generates queries against the data source to get the data needed. Most of the data sources that can be used with DirectQuery mode in Power BI are relational databases and so that means Power BI will generate SQL queries to get data from them, but Power BI can also generate queries in other languages too.

Read on for the bad news, although there are some third-party products which can make it work in specific cases.
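
To make the distinction concrete, a single DirectQuery visual typically gets translated into an ad hoc aggregate query along the lines of the sketch below (table and column names are invented for illustration); a typical REST API has no way to accept and answer arbitrary queries like this on the fly, which is the heart of the problem.

```sql
-- Roughly the shape of SQL Power BI might generate for one visual
-- in DirectQuery mode (schema and column names are hypothetical):
SELECT
    [t0].[ProductCategory],
    SUM([t0].[SalesAmount]) AS [SumSalesAmount]
FROM [dbo].[FactSales] AS [t0]
WHERE [t0].[OrderDate] >= '20210101'
GROUP BY [t0].[ProductCategory];
```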

The Power Automate Custom Visual in Power BI

Imran Burki tries out a new custom visual:

Using the Power Automate Custom Visual in Power BI is the same process as using any custom visual. We’ll use the Defects Dashboard I created in my last blog post as an example. We want to send a Teams message when we notice that defects in a plant require the attention of the plant supervisor. After that, we want to create a meeting in Outlook to discuss findings from our dashboard. Previously, there wasn’t a straightforward way to do this directly in Power BI. However, with the Power Automate Custom Visual, we can create flows directly in Power BI without ever having to leave Power BI! Now that’s cool! Let’s get started.

This is really interesting for setting up rules-based alerting.

Storing dbatools as a Package in Azure DevOps

Kevin Chant has a process for us:

In this post I want to cover how you can store the dbatools PowerShell module as a package in Azure DevOps by using the Azure Artifacts service.

I want to share some knowledge about this because I did a demo of it at Malta Data Saturday. By the end of this post you will have a better understanding of Azure Artifacts and a workaround if you encounter a problem publishing a package.

Read on for the process.

Double-Join Syntax in T-SQL

Greg Dodd unleashes a monster:

We now have all of the data, but the question is, how do we stop the Person’s name coming back if the HireDate is after 1 Jan 2000?

We can’t simply put it in a where clause, because then the Course Title won’t come back. We could put it as part of the join condition to Instructor, but that will only limit the hire date coming back. We could use that to build a case statement around the FirstName and LastName fields, but what other options are there?

What we want to do is limit the join to Person to only return if Instructor returns, and then we could put the join condition on Instructor.

I’ll admit that whenever I see this syntax, I tend to remove it and replace it with a subquery or APPLY operation. I don’t like multi-joins at all, for the reasons Greg mentions at the end: the syntax is uncommon and difficult for a T-SQL developer to parse mentally.
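
For anyone who hasn’t run into the pattern, here is a rough sketch of both forms; the table and column names only approximate the ones in Greg’s example.

```sql
-- Double-join (nested join) form: the INNER JOIN to Person only
-- survives when the Instructor row qualifies, so the name columns
-- come back NULL for instructors hired after 1 Jan 2000 while the
-- course title still appears.
SELECT c.Title, p.FirstName, p.LastName
FROM dbo.Course AS c
LEFT JOIN dbo.Instructor AS i
    INNER JOIN dbo.Person AS p
        ON p.PersonID = i.PersonID
    ON  i.InstructorID = c.InstructorID
    AND i.HireDate <= '20000101';

-- An OUTER APPLY rewrite that does the same job and is often easier
-- to read.
SELECT c.Title, x.FirstName, x.LastName
FROM dbo.Course AS c
OUTER APPLY
(
    SELECT p.FirstName, p.LastName
    FROM dbo.Instructor AS i
    INNER JOIN dbo.Person AS p
        ON p.PersonID = i.PersonID
    WHERE i.InstructorID = c.InstructorID
      AND i.HireDate <= '20000101'
) AS x;
```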

Live Extended Events Data with Azure SQL Database

Grant Fritchey is doing it live:

Once you’ve created an Extended Events Session that is output to Azure Storage, you’ve done most of the work. The trick is really simple. Get the Azure Storage account set up with a Container. Create a Shared Access Signature (SAS) with the right permissions (rwl, read, write, list). Get the token from the SAS (it’s a long string). Use it, along with the path to the container to create a Database Scoped Credential. Create the session using the same path and container that you defined in the Credential. Done. You’ve got an Azure Extended Events session gathering data for you and outputting to a file in Azure Storage.

Now, what I’d like to tell you is that you can open up the Live Data window from SSMS. You can’t.

Grant does give us a workaround which kind of does the trick, but this is an obvious place where some additional developer care would be valuable.
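
For reference, the steps Grant describes translate into T-SQL roughly like this; the storage account, container, and session names below are placeholders, not the ones from his post.

```sql
-- A database master key may be needed first to protect the secret:
-- CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- The credential name must match the container URL; the secret is the
-- SAS token (read, write, list permissions) without the leading '?'.
CREATE DATABASE SCOPED CREDENTIAL [https://mystorageacct.blob.core.windows.net/xevents]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS token>';

-- Create the session against the same container and start it.
CREATE EVENT SESSION [TrackBatches] ON DATABASE
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.event_file
    (SET filename = 'https://mystorageacct.blob.core.windows.net/xevents/TrackBatches.xel');

ALTER EVENT SESSION [TrackBatches] ON DATABASE STATE = START;
```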

Writing Check Constraints in SQL Server Data Tools

Chris Johnson has a how-to guide:

That all looks OK, and everything published fine, and kept on publishing fine, until I happened to check the generated script for some of the publishes (I was in the early stages of development at this point, so deployments were just being done through Visual Studio directly to my machine).

When I did, I noticed that with every deployment I was getting code like this:

That’s dropping all my constraints, recreating them using WITH NOCHECK, and then using WITH CHECK to check the existing data. Obviously all the data will pass those checks, but that’s going to add some time to my deployments, and as the size of the data and the number of checks both increase, these checks will take more and more time.
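
Based on that description, the generated script follows a pattern along these lines (the table and constraint names here are made up for illustration):

```sql
-- Drop and recreate the constraint without checking existing rows...
ALTER TABLE [dbo].[SomeTable] DROP CONSTRAINT [CK_SomeTable_Amount];
GO
ALTER TABLE [dbo].[SomeTable] WITH NOCHECK
    ADD CONSTRAINT [CK_SomeTable_Amount] CHECK ([Amount] >= 0);
GO
-- ...then re-validate all existing data against it.
ALTER TABLE [dbo].[SomeTable] WITH CHECK CHECK CONSTRAINT [CK_SomeTable_Amount];
GO
```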

Read on to understand what’s happening. I’d call this a fairly silly limitation on the part of SSDT.

HDFS Data Encryption at Rest

Arun Kumar Natva takes us through the process of encrypting data at rest in Cloudera Data Platform:

HDFS Encryption delivers transparent end-to-end encryption of data at rest and is an integral part of HDFS. End to end encryption means that the data is only encrypted and decrypted by the client. In other words, data remains encrypted until it reaches the HDFS client.

Each HDFS file is encrypted using an encryption key. To prevent the management of these keys (which can run in the millions) from becoming a performance bottleneck, the encryption key itself is stored in the file metadata. To add another layer of security, the file encryption key is stored in encrypted form, using another “encryption zone key”.

Read on to learn more and to see how it all fits together.

Grafana Changing License

Alex Woodie has some bad news for us:

Grafana is switching licensing of its core products from Apache License 2.0 to the more restrictive Affero General Public License (AGPL) v3. The company made the change in an attempt to balance the value of open source with Grafana’s monetization strategy, CEO Raj Dutt announced yesterday.

Grafana has been considering a license change for some time, Dutt wrote in a blog post on April 20. This week, the company finally felt the time was right to move.

“Oof” was my first response. I know that a pretty large percentage of companies won’t touch AGPL. I don’t know if we’ll see these companies adopt the commercial version of Grafana, see the companies switch over to something else, or see developers fork Grafana and come up with some other product. AGPL is not quite as scary for companies when a product is at the end of the chain, as visualization and dashboarding products tend to be, but for many companies, that doesn’t matter.
