
Author: Kevin Feasel

PASS Summit Q&A: Intelligent Query Processing

Kathi Kellenberger has a follow-up on some questions from a PASS Virtual Summit session:

Last week, I presented a session on Intelligent Query Processing for the first-ever Virtual PASS Summit. This summit had a mix of live and pre-recorded sessions. During the pre-recorded sessions, the speaker could hang out with the attendees in the discussion room and join a virtual chat room at the end. My session was live, so I answered a few questions during the session. There were a couple of questions that I couldn’t answer fully during the session, but all the questions were interesting, so I’ll go through them here.

Click through for the questions and answers.


PASS Summit Q&A: The Curated Data Platform

I answer some questions:

On Thursday, I presented a session at PASS Summit entitled The Curated Data Platform. You can grab slides and links to additional information on my website. Thank you to everyone who attended the session.

During and after the session, I had a few questions come in from the audience, and I wanted to cover them here.

Most of the questions were around document databases, so check them out.


Finding the Physical Path of a SQL Server Backup on a Container

Jack Vamvas is looking for files in all the wrong places:

I’m migrating some SQL Server databases to OpenShift Containers. The SQL Server is set up with a persistent disk, with a dedicated persistent disk partition for the SQL Server default backup directory. I don’t have direct access to the underlying files and can only use the command line. How can I get the physical disk device, which will then allow me to create a RESTORE DATABASE statement pointing to the device?

Read on for the answer, including a T-SQL script to find where these files live.
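As a rough sketch of the idea (not necessarily the script from Jack’s post), the backup history tables in msdb record the physical device name behind every backup, and the undocumented but widely used xp_instance_regread procedure can report the instance’s configured default backup directory:

    -- Recent backups and the physical devices they were written to; the
    -- physical_device_name value is what RESTORE DATABASE ... FROM DISK needs.
    SELECT TOP (10)
        bs.database_name,
        bs.backup_finish_date,
        bmf.physical_device_name
    FROM msdb.dbo.backupset AS bs
        INNER JOIN msdb.dbo.backupmediafamily AS bmf
            ON bs.media_set_id = bmf.media_set_id
    ORDER BY bs.backup_finish_date DESC;

    -- The instance's default backup directory, read from the registry.
    EXEC master.dbo.xp_instance_regread
        N'HKEY_LOCAL_MACHINE',
        N'Software\Microsoft\MSSQLServer\MSSQLServer',
        N'BackupDirectory';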


Finding Unused Columns in Power BI Data Models

Matt Allington wants to trim the fat:

I have a saying in Power BI. Load every column you need, and nothing that you don’t need. The reason for this advice is that columns can make your data model bigger and less performant. You will of course need some columns in your data model for different purposes. Some are used for defining measures and some are used for slicing, dicing and summarising your data in the various visuals. But it is very common for people to load everything from the source, meaning that some of the columns are likely to be loaded but not used. Once the data model is ready and the reporting is done, it can be beneficial to remove the columns that are not being used and are not likely to be used for ad hoc reporting in the near future. The question is – how do you find the columns not being used? This is where Imke’s Power BI Cleaner tool comes in; I will show you how to use it below.

Read on for Seven Minute Abs for your Power BI data model.
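Power BI Cleaner identifies which fields a report actually uses. As a rough complement (my own suggestion, not part of Matt’s post), the Vertipaq DMVs show which columns cost the most memory, which helps prioritize what to investigate; run this from a tool such as DAX Studio connected to the model. Note that it reports size, not usage, so a large column still needs to be checked against the report before removal:

    SELECT DIMENSION_NAME, COLUMN_ID, DICTIONARY_SIZE
    FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS
    WHERE COLUMN_TYPE = 'BASIC_DATA'
    ORDER BY DICTIONARY_SIZE DESC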


The DevOps Learning Curve

Grant Fritchey gives us the low-down on learning about DevOps:

If you’re attempting to implement automation in and around your deployments, you’re going to find there is quite a steep learning curve for DevOps and DevOps-style implementations. Since adopting a DevOps-style release cycle does, at least in theory, speed your ability to deliver better code safely, why would it be hard?

Click through for an idea, including tools to use and some first steps.


Creating Jupyter Books in Azure Data Studio

Drew Skwiers-Koballa takes us through creating and deploying Jupyter Books:

The notebook experience in Azure Data Studio allows users to create and share documents containing live code, execution results, and narrative text. Potential usage includes data cleaning and transformation, statistical modeling, troubleshooting guides, data visualization, and machine learning. Jupyter books compile a collection of notebooks into a richer experience with more structure and a table of contents. In Azure Data Studio we are able not only to use Jupyter books but also to create and share them. Learn the basics of notebooks in Azure Data Studio from the documentation and read on to learn how to leverage a GitHub Action to publish and share remote Jupyter books.

Click through for the process of creating, opening, and distributing Jupyter Books.


Visualizing Analysis Services Tasks with the Job Graph

Chris Webb is excited about the Job Graph event in Analysis Services:

More details about it, and how it can be used, are in the samples here:

https://github.com/microsoft/Analysis-Services/tree/master/ASJobGraphEvents

The data returned by the Job Graph event isn’t intelligible if you look at the text it returns in Profiler. However, if you save a .trc file with Job Graph event data to XML, you can use the Python scripts in the GitHub repo to generate DGML diagrams that can be viewed in Visual Studio, plus Gantt charts embedded in HTML. Of course, to do this you’ll need to have Python installed; you’ll also need to have Visual Studio and its DGML editor installed (see here for details).

Read on to see how it looks and Chris’s thoughts on the matter.


Error Handling in R

Adi Sarid compares a few methods for error handling in R:

Error catching can be hard to catch at times (no pun intended). If you’re not used to error handling, this short post might help you do it elegantly.

There are many posts about error handling in R (and in fact the examples in the purrr package documentation are not bad either). In this sense, this post is not original.

However, I do demonstrate two approaches: both the base-R approach (tryCatch) and the purrr approach (safely and possibly). The post contains a concise summary of the two methods, with a very simple example.

Read the whole thing. H/T R-Bloggers.


Import Files From SharePoint Into Power Query

Imke Feldmann solves a problem:

When you use the UI to import files from SharePoint, you’ll end up with the SharePoint.Files function. This function can become fairly or super slow when you use it on large SharePoint sites. This is due to the fact that it will retrieve metadata for ALL files that lie on the site. Meaning: the root site whose URL you have to enter as the function argument. So I’ve developed a better way for file import from SharePoint.

Click through for the solution and how to use it. Imke reports 2X query performance when reading SharePoint data, so it’s worth checking out.


Using the READPAST Query Hint

Rajendra Gupta looks at a lesser-used query hint:

If we specify the READPAST hint in the SQL queries, the database engine ignores the rows locked by other transactions while reading data. Suppose you have a transaction that locked a few rows in a table for updating the information in those rows. Now, if another user starts a transaction and specifies the READPAST query hint, the query engine ignores these rows and returns the remaining rows satisfying the data requirement of the query. It might return incorrect data as well.

There are some very limited uses for this hint, though they are out there.
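A minimal illustration of the skipping behavior, using a hypothetical dbo.Orders table: lock a row in one session, and a READPAST query in a second session returns immediately without that row rather than blocking on it.

    -- Session 1: update a row and leave the transaction open, holding the lock.
    BEGIN TRANSACTION;
    UPDATE dbo.Orders
    SET Status = 'Processing'
    WHERE OrderID = 42;

    -- Session 2: READPAST skips the locked row instead of waiting for it,
    -- so OrderID 42 is silently missing from the result set.
    SELECT OrderID, Status
    FROM dbo.Orders WITH (READPAST);

    -- Session 1 again: release the lock.
    COMMIT TRANSACTION;

That silent omission is why the quote warns about incorrect data: the query succeeds, but it may not return every row another reader would see.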
