
Day: November 17, 2020

Folders in Azure Synapse Analytics
Wolfgang Strasser checks out a small but helpful addition to Azure Synapse Analytics:

Good morning, day, afternoon or night – wherever and whenever you read this blog post! My day started with a nice surprise when I connected to one of our Azure Synapse workspaces …

Sometimes, it’s those little things that make (development) life easier – you can now add folders to structure the list of development artefacts in Synapse:

Read on to see how, including how you can bring order to the chaos of existing Synapse Analytics workspaces.

Comments closed

Waterfall Visuals

Mike Cisneros takes us through cases when waterfall charts are useful:

In our workshops, we often put a grid of a dozen charts up on the screen, and say to the participants, “Most of the charts you’ll need to communicate effectively in business are right here on the screen. 99% of the time, one of the visuals you see here will get your message across effectively. And as you can see there aren’t any really unusual charts here. You’ve probably seen all of these before.” 

If, at this point, somebody in the room says, “Actually, I’ve never heard of a ______ chart before,” you can almost always fill in the blank with the word “waterfall.”

Waterfall charts are really useful in a few scenarios, but I see them get misused far too frequently.
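As a quick illustration of the mechanics (my own sketch, not from the linked post): each bar in a waterfall chart starts where the running total of the previous bars ended, which is what makes the chart good at showing how increments and decrements add up. A minimal Python sketch with made-up values:

```python
def waterfall_segments(values):
    """Return (bottom, height) pairs for each waterfall bar.

    A positive value draws a bar rising from the running total;
    a negative value draws a bar falling down to the new total.
    """
    segments = []
    running = 0
    for v in values:
        # A falling bar's bottom is the new, lower running total.
        bottom = running if v >= 0 else running + v
        segments.append((bottom, abs(v)))
        running += v
    return segments, running

# Example: start at 100, gain 30, lose 20, gain 50.
segments, total = waterfall_segments([100, 30, -20, 50])
```

Feeding these (bottom, height) pairs to any bar-chart routine (e.g. matplotlib's `Axes.bar` with its `bottom` parameter) produces the classic floating-bar waterfall look.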


PASS Summit Q&A: Intelligent Query Processing

Kathi Kellenberger has a follow-up of some questions after a PASS Virtual Summit session:

Last week, I presented a session on Intelligent Query Processing for the first-ever Virtual PASS Summit. This summit had a mix of live and pre-recorded sessions. During the pre-recorded sessions, the speaker could hang out with the attendees in the discussion room and join a virtual chat room at the end. My session was live, so I answered a few questions during the session. There were a couple of questions that I couldn’t answer fully during the session, but all the questions were interesting, so I’ll go through them here.

Click through for the questions and answers.


PASS Summit Q&A: The Curated Data Platform

I answer some questions:

On Thursday, I presented a session at PASS Summit entitled The Curated Data Platform. You can grab slides and links to additional information on my website. Thank you to everyone who attended the session.

During and after the session, I had a few questions come in from the audience, and I wanted to cover them here.

Most of the questions were around document databases, so check them out.


Finding the Physical Path of a SQL Server Backup on a Container

Jack Vamvas is looking for love files in all the wrong places:

I’m migrating some SQL Server databases to Openshift Containers. The SQL Server is set up with persistent disk, with a dedicated persistent disk partition for the SQL Server default backup directory. I don’t have access to the underlying files and can only use the command line. How can I get the physical disk device, which will then allow me to create a RESTORE DATABASE statement pointing to the device?

Read on for the answer, including a T-SQL script to find where these files live.
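One common technique here (a general sketch of the approach, not necessarily the author's exact script) is to query the backup history that SQL Server records in msdb, where `physical_device_name` holds the full on-disk path of each backup file:

```sql
-- Most recent backups and their physical file locations,
-- from SQL Server's backup history in msdb.
SELECT TOP (10)
    bs.database_name,
    bs.backup_finish_date,
    bmf.physical_device_name   -- full path of the backup file on disk
FROM msdb.dbo.backupset AS bs
    INNER JOIN msdb.dbo.backupmediafamily AS bmf
        ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;
```

Because this runs entirely in T-SQL, it works even when you have no shell access to the container's file system.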


Finding Unused Columns in Power BI Data Models

Matt Allington wants to trim the fat:

I have a saying in Power BI. Load every column you need, and nothing that you don’t need. The reason for this advice is that columns can make your data model bigger and less performant. You will of course need some columns in your data model for different purposes. Some are used for defining measures and some are used for slicing, dicing and summarising your data in the various visuals. But it is very common for people to load everything from the source, meaning that some of the columns are likely to be loaded but not used. Once the data model is ready and the reporting is done, it can be beneficial to remove the columns that are not being used and are not likely to be used for ad hoc reporting in the near future. The question is – how do you find the columns not being used? This is where Imke’s Power BI Cleaner tool comes in; I will show you how to use it below.

Read on for Seven Minute Abs for your Power BI data model.


The DevOps Learning Curve

Grant Fritchey gives us the low-down on learning about DevOps:

If you’re attempting to implement automation in and around your deployments, you’re going to find there is quite a steep learning curve for DevOps and DevOps-style implementations. Since adopting a DevOps-style release cycle does, at least in theory, speed your ability to deliver better code safely, why would it be hard?

Click through for an idea, including tools to use and some first steps.


Creating Jupyter Books in Azure Data Studio

Drew Skwiers-Koballa takes us through creating and deploying Jupyter Books:

The notebook experience in Azure Data Studio allows users to create and share documents containing live code, execution results, and narrative text. Potential usage includes data cleaning and transformation, statistical modeling, troubleshooting guides, data visualization, and machine learning. Jupyter books compile a collection of notebooks into a richer experience with more structure and a table of contents.  In Azure Data Studio we are able not only to use Jupyter books but also create and share them. Learn the basics of notebooks in Azure Data Studio from the documentation and read on to learn how to leverage a GitHub Action to publish and share remote Jupyter books.

Click through for the process of creating, opening, and distributing Jupyter Books.
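For context on what gets shared (an assumption about the on-disk layout based on the legacy Jupyter Book format Azure Data Studio uses, not taken from the linked post): a Jupyter Book is essentially a folder of notebooks plus a table-of-contents file, `_data/toc.yml`, along these lines, with all titles and paths below invented for illustration:

```yaml
# _data/toc.yml -- hypothetical example structure
- title: Overview
  url: /overview
- title: Diagnostics
  url: /diagnostics
  sections:
    - title: Wait Statistics
      url: /diagnostics/wait-stats
```

The table of contents is what turns a loose pile of notebooks into the structured, navigable book experience the excerpt describes.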
