Press "Enter" to skip to content

Month: May 2017

Reusing Power BI Datasets

Melissa Coates shows a few ways to reduce the number of redundant Power BI datasets in an environment:

To reuse an imported dataset, there are three options I’m aware of:

  1. Report in the Power BI Service. This refers to using the web interface for creating a new report separate from the original PBIX file.

  2. Analyze In Excel. This refers to creating an Excel report and can currently be used by anyone with read or edit access to the dataset. Hence, very useful for self-service BI purposes.

  3. Power BI Service Live Connection. This refers to creating a Power BI Desktop report. This option can currently only be used by people with edit permissions on the dataset (so not appropriate for broad self-service BI reporting at this time).

Click through for a detailed demonstration of each.


Gathering Detailed System Information On A Set Of Servers

Amy Herold has a quick PowerShell script to retrieve detailed system info (via msinfo32.exe) for a set of servers:

With this script you can generate system information files and save them to a specified location. It makes sure a connection can be made to the server first, and then outputs the file. The files are created one at a time, so if you pass in a longer list of servers, you shouldn’t crash your machine. From my testing, this will take some time to run as these files don’t output quickly. Despite that, the output is worth it. This can be modified to pull your list of servers from a file or from a Central Management Server (CMS) instance.

This is a useful script, with the next step being to turn it into a cmdlet that accepts the set of servers from the pipeline.
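For a sense of the flow, here is a minimal Python sketch of the same idea (not Amy’s script), relying on msinfo32.exe’s /computer and /report switches; the server names and output folder below are placeholders:

    import subprocess
    from pathlib import Path

    # Hypothetical server list and output folder; swap in your own.
    servers = ["SQLPROD01", "SQLPROD02"]
    out_dir = Path(r"C:\Temp\SystemInfo")
    out_dir.mkdir(parents=True, exist_ok=True)

    for server in servers:
        # Make sure the server answers before spending time on msinfo32.
        ping = subprocess.run(["ping", "-n", "1", server], capture_output=True)
        if ping.returncode != 0:
            print(f"Skipping {server}: not reachable")
            continue

        report = out_dir / f"{server}_msinfo32.txt"
        # msinfo32 writes one report at a time, so a longer server list simply takes longer.
        subprocess.run(
            ["msinfo32.exe", "/computer", server, "/report", str(report)],
            check=True,
        )
        print(f"Wrote {report}")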


Azure SQL DW Reference Architectures

James Serra shows how Azure SQL Data Warehouse can fit into various warehousing architectures:

Do staging, data refinement and reporting all from SQL DW. You can scale compute power up when needed (i.e. during staging, data refinement, or a large number of users doing reporting) or down to save costs (i.e. nights and weekends when user reporting is low). The pros of this option are that, by reducing the number of technologies, you are building a simpler solution and reducing the number of copies of the data. The cons are that, since everything is done on SQL DW, you can have performance issues (i.e. doing data refinement while users are reporting), can hit the SQL DW concurrent query limit, and can have a higher cost since SQL DW is the highest-cost product, especially if you are not able to pause it. Pausing it reduces your cost to zero for compute, only having to pay for storage (see Azure SQL Data Warehouse pricing), but no one can use SQL DW when paused.

Click through for three other architecture ideas.
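As a rough sketch of the scale-up/scale-down part of that option (not James’s code), here is the T-SQL scaling command wrapped in Python with pyodbc; the server, database, and DW service objectives are placeholders, and pausing itself happens outside T-SQL, through the portal, PowerShell, or the REST API:

    import pyodbc

    # Connect to the logical server's master database; all values are placeholders.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 13 for SQL Server};"
        "SERVER=myserver.database.windows.net;"
        "DATABASE=master;UID=loader;PWD=...;",
        autocommit=True,  # ALTER DATABASE cannot run inside a transaction
    )

    # Scale compute up before a heavy staging window, then back down afterwards.
    conn.execute("ALTER DATABASE MySqlDw MODIFY (SERVICE_OBJECTIVE = 'DW600');")
    # ... run the staging / data refinement workload ...
    conn.execute("ALTER DATABASE MySqlDw MODIFY (SERVICE_OBJECTIVE = 'DW200');")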


Loading Azure SQL DW: Embrace The CTAS

John Hoang, Joe Sack, and Martin Lee explain different loading patterns for loading Azure SQL Data Warehouse:

SQL Data Warehouse supports many loading methods, including SSIS, BCP, the SQLBulkCopy API, and Azure Data Factory (ADF). These methods all share a common pattern for data ingestion. By comparison, the PolyBase technology uses a different approach that provides better performance.

PolyBase is by far the fastest and most scalable SQL Data Warehouse loading method to date, so we recommend it as your default loading mechanism. PolyBase is a scalable, query processing framework compatible with Transact-SQL that can be used to combine and bridge data across relational database management systems, Azure Blob Storage, Azure Data Lake Store and Hadoop database platform ecosystems (APS only).

If you’re curious about Azure SQL Data Warehouse, this is a must-read.
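To make the pattern concrete, here is a hedged sketch of a PolyBase-style load in Python with pyodbc. It assumes an external table (ext.FactSales) over Azure Blob Storage has already been defined; the connection details and object names are hypothetical, not taken from the article:

    import pyodbc

    # All connection values are placeholders.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 13 for SQL Server};"
        "SERVER=myserver.database.windows.net;"
        "DATABASE=MySqlDw;UID=loader;PWD=...;",
        autocommit=True,
    )

    # CTAS reads from the external (PolyBase) table and lands the rows in a
    # distributed internal table in a single, parallel operation.
    conn.execute("""
        CREATE TABLE dbo.FactSales
        WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX)
        AS
        SELECT *
        FROM ext.FactSales;  -- external table over files in Blob Storage
    """)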


More On Adaptive Joins

Erik Darling has a couple more posts on adaptive joins in SQL Server 2017.  First, he wonders what happens when you add scalar functions to the mix:

See, this isn’t SARGable either (and no, SCHEMABINDING doesn’t change this). When a predicate isn’t SARGable, you take away an index seek as an access choice. You don’t see too many Nested Loops with an index scan on the other end, do you?

No.

So there you go. It’s not the function itself that bops our Adaptive Join on the head, but the lack of SARGability.
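As a quick, hypothetical illustration of that SARGability point (the object names below are made up, not Erik’s), here is the same filter written two ways, sketched in Python with pyodbc; only the second form leaves an index seek on the table as an access choice:

    import pyodbc

    # Placeholder local connection; database, table, and function names are invented.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 13 for SQL Server};SERVER=.;"
        "DATABASE=StackOverflow;Trusted_Connection=yes;"
    )

    # Not SARGable: wrapping the column in a scalar function hides it from the
    # optimizer, so an index scan is the only access path available.
    not_sargable = "SELECT u.Id FROM dbo.Users AS u WHERE dbo.CleanName(u.DisplayName) = ?;"

    # SARGable: the bare column can still be seeked, so the Nested Loops (seek)
    # branch, and with it the adaptive join choice, stays on the table.
    sargable = "SELECT u.Id FROM dbo.Users AS u WHERE u.DisplayName = ?;"

    for query in (not_sargable, sargable):
        rows = conn.cursor().execute(query, "Erik Darling").fetchall()
        print(len(rows))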

Erik also looks at using APPLY and EXISTS to make sure you can use adaptive joins with that dynamic duo:

I do have to point out that Cross Apply used to only be implemented as a Nested Loops Join. I learned that many years ago from one of the best articles written about Cross Apply by Paul White. That changed recently — it’s possible to see it implemented with a Hash Join in at least 2016. I’ve seen it crop up in Cross Apply queries without a TOP operator.

The latter results are a bit surprising.


Kinesis Data Generation

Allan MacInnis shows off a new data generation tool for Amazon’s Kinesis:

Amazon Kinesis Streams and Amazon Kinesis Firehose enable you to continuously capture and store terabytes of data per hour from hundreds of thousands of sources. Amazon Kinesis Analytics gives you the ability to use standard SQL to analyze and aggregate this data in real-time. It’s easy to create an Amazon Kinesis stream or Firehose delivery stream with just a few clicks in the AWS Management Console (or a few commands using the AWS CLI or Amazon Kinesis API). However, to generate a continuous stream of test data, you must write a custom process or script that runs continuously, using the AWS SDK or CLI to send test records to Amazon Kinesis. Although this task is necessary to adequately test your solution, it means more complexity and longer development and testing times.

Wouldn’t it be great if there were a user-friendly tool to generate test data and send it to Amazon Kinesis? Well, now there is—the Amazon Kinesis Data Generator (KDG).

Check it out if you’re using Kinesis and need to do some load testing.
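For context, this is roughly the kind of hand-rolled test producer the KDG replaces, sketched in Python with boto3; the region, stream name, and record shape are placeholders:

    import json
    import random
    import time

    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Push a fake sensor reading every tenth of a second until interrupted.
    while True:
        record = {"sensor_id": random.randint(1, 100), "reading": random.random()}
        kinesis.put_record(
            StreamName="test-stream",
            Data=json.dumps(record).encode("utf-8"),
            PartitionKey=str(record["sensor_id"]),
        )
        time.sleep(0.1)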


Getting Execution Plans In Spark

Anubhav Tarar shows how to get an execution plan for a Spark job:

There are three types of logical plans:

  1. Parsed logical plan.
  2. Analyzed logical plan.
  3. Optimized logical plan.

The analyzed logical plan goes through a series of rules to resolve, and then the optimized logical plan is produced. The optimized logical plan is normally where Spark plugs in its set of optimization rules, and you can plug in your own rules for the optimized logical plan as well.

Click through for the details.
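If you want to see those plans for yourself, a short PySpark sketch: explain(True) prints the parsed, analyzed, and optimized logical plans along with the physical plan.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("plan-demo").getOrCreate()

    # A trivial query, just to give the optimizer something to chew on.
    df = spark.range(1, 1000)
    result = df.filter(df.id % 2 == 0).groupBy((df.id % 10).alias("bucket")).count()

    # extended=True prints the parsed, analyzed, and optimized logical plans
    # followed by the physical plan.
    result.explain(True)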


Machine Learning At Build 2017

Adnan Masood looks at some of the new machine learning offerings in Azure:

Language Understanding Intelligent Service (LUIS) is one of the marquee offerings in Cognitive Services, containing an entire suite of NLU / NLP capabilities that teach applications to understand entities, utterances, and general commands from user input. Other language services include the Bing Spell Check API, which detects and corrects spelling mistakes; the Web Language Model API, which helps build knowledge graphs using predictive language models; the Text Analytics API, which performs topic modeling and sentiment analysis; as well as the Translator Text API, which performs automatic text translation. The Linguistic Analysis API is a new addition which parses and provides context around language concepts.

In the knowledge spectrum, there are the Recommendations API to help predict and recommend items, the Knowledge Exploration Service to enable interactive search experiences over structured data via natural language inputs, the Entity Linking Intelligence Service for NER / disambiguation, the Academic Knowledge API (academic content in the Microsoft Academic Graph search), the QnA Maker API, and the newly minted custom Decision Service, which provides a contextual decision-making API with reinforcement learning features. Search APIs include Autosuggest, news, web, image, video, and customized searches.

There are some nice products available on the Azure platform and Adnan does a good job of outlining them.


Building A Spinning Globe With R

James Cheshire shows how to use R to create an image of a spinning globe:

It has been a long-held dream of mine to create a spinning globe using nothing but R (I wish I was joking, but I’m not). Thanks to the brilliant mapmate package created by Matt Leonawicz and shed loads of computing power, today that dream became a reality. The globe below took 19 hours and 30 processors to produce from relatively low-resolution NASA Black Marble data, and so I accept R is not the best software to be using for this – but it’s amazing that you can do this in R at all!

Now all that is missing is a giant TV and an evil lair.
