Press "Enter" to skip to content

Author: Kevin Feasel

Power BI Admin Portal

Melissa Coates looks at the Power BI tenant settings in the admin portal:

Keep in mind that these selections apply to all users across the entire tenant. At this time we can’t control them by groups or anything of that nature.

In addition to the above settings for controlling the user experience, the Admin Portal is also the place for viewing usage metrics, which are helpful for determining who runs what, and how often (it’s not everything we could possibly want to know, but it’s a good start). The other two options, manage users and audit logs, redirect you over to the Office 365 Admin Center.

Another week, another few dozen Power BI additions…


Delayed Durability

Chris Taylor gives a use case for Delayed Durability:

During development and initial testing on our own hardware, we had the migration at the time running at ~25 minutes for around 600 packages (i.e., tables) covering (what we termed) RawSource -> Source -> Staging, which was well within the performance requirements for the stage that development was at and for what was initially set out. The rest of this blog post will home in specifically on Source -> Staging only.

However, once we transferred the solution to the client’s development environment, things took a turn for the worse. In our environment we were running VMs with 8 cores, 16GB RAM and utilising SSDs. The client environment was running SQL Server 2016 Enterprise on VMware vSphere 5.5 with 8 vCPUs and 32GB RAM (for Integration; Development was half this), but the infrastructure team had done everything in their power to force all VMs onto the lower tier (i.e., slow disks) of their 3PAR SAN and throttle them in every way possible, just to make things more of a challenge. Even though the VMs themselves were throttled, we were confident that we wouldn’t see too much of a performance impact, especially as this was only a subset of the processing to be done; we needed it to be quick, and it will only ever get longer and longer.

Chris walks through the hallmarks of when Delayed Durability might work, and the big one for me is the way data migration works:  full reloads.  The important thing is to have a durable source and a process to repeat data loads when things get missed; in this case, it’s a full reload, but in other cases it could be watchdog applications which compare data sets on each side.
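For reference, a minimal sketch of how opting in looks, assuming a staging database named StagingDB (the database and table names here are hypothetical):

```sql
-- Allow (but don't force) delayed durability at the database level.
ALTER DATABASE StagingDB SET DELAYED_DURABILITY = ALLOWED;

-- Then opt in per transaction during the bulk load.  If the server crashes,
-- transactions whose log records weren't yet flushed to disk are lost, which
-- is tolerable here precisely because a full reload can regenerate the data.
BEGIN TRANSACTION;

INSERT INTO dbo.StagingCustomer (CustomerID, CustomerName)
SELECT CustomerID, CustomerName
FROM dbo.SourceCustomer;

COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```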


Query Folding In Power Query

Reza Rad discusses the performance implications of query folding in M:

I can’t start talking about the issue without explaining what Query Folding is, so let’s start with that. Query Folding means translating Power Query (M) transformations into the native query language of the data source (for example, T-SQL). In other words, when you run a Power Query script on top of a SQL Server database, query folding will translate the M script into T-SQL statements and fetch the final results.

Click through for more details.  The advice here sounds pretty similar to what we get for optimizing Integration Services:  push as much of the heavy lifting onto well-optimized source queries as possible, particularly when it comes to filtering out rows.
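To make the filtering point concrete: a row-filter step in M, say Table.SelectRows(Source, each [Amount] > 1000), can fold into the source query, so SQL Server does the filtering before anything crosses the wire. The folded statement would look something like this (table and column names are hypothetical):

```sql
-- The kind of native query a folded filter step generates: the predicate
-- runs on the server, so only matching rows are sent to Power Query.
SELECT [OrderID], [OrderDate], [Amount]
FROM [dbo].[Orders]
WHERE [Amount] > 1000;
```

Steps which can’t fold force Power Query to pull the whole table locally first and filter afterward, which is exactly the performance cliff Reza describes.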


Calling Cognitive Services With R

David Smith has written a go-to guide for connecting to Azure Cognitive Services using R:

There’s no official R package (yet!) for calling Cognitive Services APIs. But since every Cognitive Service API is just a standard REST API, we can use the httr package to call the API. Input and output is standard JSON, which we can create and extract using the jsonlite package.

(There’s also an independent R interface to the text APIs. And there are already Python SDKs for many of the services, including the Face API.)

This approach is also useful for other REST APIs, for those times when there isn’t already a pre-built package to do most of the translation work for you.


Saving DBCC Results

Wayne Sheffield shows how to save DBCC CHECKDB results to a table:

Okay, let’s try this out. At the above BOL links, there are two DBCC commands that are documented to use the TABLERESULTS option: OPENTRAN and SHOWCONTIG. Testing all of the other DBCC commands shows that this option can also be used on the CHECKALLOC, CHECKDB, CHECKFILEGROUP, and CHECKTABLE commands. I’m going to continue this post using DBCC CHECKDB; if you check BOL for it, the TABLERESULTS option is not mentioned.

As Wayne notes, you can do this for other DBCC commands as well.
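The general pattern is INSERT … EXEC into a table whose columns match the TABLERESULTS output. A minimal sketch, assuming the SQL Server 2016 column shape (earlier versions return fewer columns, so verify on your build first; the database name is a placeholder):

```sql
-- Capture DBCC CHECKDB output into a temp table for querying or archiving.
-- Column list below matches SQL Server 2016; adjust for other versions.
CREATE TABLE #CheckDbResults
(
    Error int, [Level] int, [State] int,
    MessageText nvarchar(4000), RepairLevel nvarchar(128), [Status] int,
    DbId int, DbFragId int, ObjectId int,
    IndexId int, PartitionId bigint, AllocUnitId bigint,
    RidDbId int, RidPruId int,
    [File] int, Page int, Slot int,
    RefDbId int, RefPruId int, RefFile int, RefPage int, RefSlot int,
    Allocation int
);

INSERT INTO #CheckDbResults
EXEC ('DBCC CHECKDB(''YourDatabase'') WITH TABLERESULTS, NO_INFOMSGS;');

SELECT * FROM #CheckDbResults;
```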


Graphing Row Counts With R

I look at one use of R for DBAs:

I have a client data warehouse which holds daily rollups of revenue and cost for customers.  We’ve had some issues with the warehouse lately where data was not getting loaded due to system errors and timeouts, and our services team gave me a list of some customers who had gaps in their data due to persistent processing failures.  I figured out the root cause behind this (which will show up as tomorrow’s post), but I wanted to make sure that we filled in all of the gaps.

My obvious solution is to write a T-SQL query, getting some basic information by day for each customer.  I could scan through that result set, but the problem is that people aren’t great at reading tables of numbers; they do much better looking at pictures.  This is where R comes into play.

Click through for the code and a walkthrough of what each line is doing.
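The T-SQL half of that is a per-customer, per-day rollup along these lines (the table and column names are hypothetical stand-ins for the warehouse in question):

```sql
-- Revenue and cost by customer and day.  Gaps show up as missing dates,
-- which is far easier to spot once the result set is plotted in R.
SELECT
    r.CustomerID,
    r.RevenueDate,
    SUM(r.Revenue) AS DailyRevenue,
    SUM(r.Cost) AS DailyCost
FROM dbo.DailyCustomerRollup AS r
WHERE r.RevenueDate >= DATEADD(DAY, -90, CAST(GETDATE() AS date))
GROUP BY r.CustomerID, r.RevenueDate
ORDER BY r.CustomerID, r.RevenueDate;
```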


Visualizing SQL Server Agent Jobs

Daniel Hutmacher shows how to visualize SQL Server Agent job runtimes using spatial data types:

If all you have is a hammer, everything will eventually start looking like a nail. This is generally known as Maslow’s hammer and refers to the fact that you use the tools you know to solve any problem, regardless of whether that’s what the problem actually needs. With that said, I frequently need a way to visualize the load distribution of scheduled jobs over a day or week, but I could never be bothered to set up a web server, learn a procedural programming language, or build custom visualizations in Power BI.

So here’s how to do that without leaving Management Studio.

Click through for discussion and link to the code.
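The core idea, if you want a taste before clicking through: build WKT LINESTRINGs from the Agent job history and let the Spatial results tab in SSMS do the drawing. A rough sketch, not Daniel’s actual code (it leans on msdb’s undocumented agent_datetime helper):

```sql
USE msdb;

WITH runs AS
(
    SELECT
        j.name AS job_name,
        DENSE_RANK() OVER (ORDER BY j.name) AS lane,
        msdb.dbo.agent_datetime(h.run_date, h.run_time) AS start_time,
        -- run_duration packs HHMMSS into an int; unpack it to seconds
        (h.run_duration / 10000) * 3600
            + ((h.run_duration / 100) % 100) * 60
            + (h.run_duration % 100) AS duration_s
    FROM msdb.dbo.sysjobhistory AS h
        INNER JOIN msdb.dbo.sysjobs AS j
            ON j.job_id = h.job_id
    WHERE h.step_id = 0  -- job outcome rows, not individual steps
)
SELECT
    job_name,
    -- One horizontal segment per execution: X = seconds since midnight,
    -- Y = one lane per job.  View this column on SSMS's Spatial tab.
    geometry::STGeomFromText(CONCAT(
        'LINESTRING(',
        DATEDIFF(SECOND, CAST(start_time AS date), start_time), ' ', lane, ', ',
        DATEDIFF(SECOND, CAST(start_time AS date), start_time)
            + CASE WHEN duration_s < 1 THEN 1 ELSE duration_s END, ' ', lane,
        ')'), 0) AS job_run
FROM runs;
```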


New SSMS And SSDT

Amid this blast of new things, Andy Leonard notes that new versions of Management Studio and Data Tools are available:

It’s Release Day! 🙂

New versions of SQL Server Data Tools (SSDT) are available here. SSDT 16.5 and 17.0 (RC1) are available. Also available are Data-Tier Application Framework (DacFx) versions 16.5 and 17.0 (RC1).

New versions of SQL Server Management Studio (SSMS) are available here. SSMS 16.5 and 17.0 (RC1) are available.

It’s going to be a busy couple of days for some people…


New Features In 2016 SP1

Niko Neugebauer looks at new functionality released as part of SQL Server 2016 SP1:

CREATE OR ALTER. I almost cried when I found out that it was implemented. I was asking, begging, threatening, crying for years to get this into SQL Server. Now, I can finally have future project deployments for those who are not using SSDT running with far fewer problems.
Now we can modify and deploy objects like Stored Procedures, Triggers, User-Defined Functions, and Views without any fear. Just “Make it so!”

Yeah, they’re not as big as “every Enterprise Edition development feature over the past decade is now available to anybody,” but there are some nice additions here.
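For anybody who hasn’t seen the syntax yet, CREATE OR ALTER replaces the old drop-and-recreate (or IF EXISTS) dance for procedures, views, functions, and triggers. The procedure below is just a hypothetical example:

```sql
-- Creates dbo.GetCustomerOrders if it doesn't exist; alters it in place
-- (preserving permissions) if it does.  Requires SQL Server 2016 SP1+.
CREATE OR ALTER PROCEDURE dbo.GetCustomerOrders
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;

    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
```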
