Press "Enter" to skip to content

Month: April 2017

Managing The Pace Of Change

Kellan Danielson and the rest of the Power Pivot Pro team discuss the pace of change in the data platform:

@djharshany I’ve found Pocket (https://getpocket.com/) really useful for saving items for later. I’m on a schedule as well – I save a lot of articles and then pore through them when I’m on an airplane or waiting in line somewhere. #productivityhack

I think this furious pace of technological development has made me much more aware 1) of the amount of noise out in the world that I’m safe ignoring and 2) of how we need to stay vigilant in producing content that cuts through the noise.

Given that these are people who specialize in the fastest-moving part of the Microsoft data platform, it’s worth getting their thoughts on the rapid pace of change.


Text Normalization With Spark

Engineers at Treselle Systems have put together a two-part series on text normalization using Apache Spark.  First, they walk through normalizing the text:

We have used the Spark shared variable “broadcast” to achieve distributed caching. Broadcast variables are useful when large datasets need to be cached in executors. “stopwords_en.txt” is not a large dataset, but we have used it in our use case to make use of that feature.

What are Broadcast Variables?
Broadcast variables in Apache Spark are a mechanism for sharing variables across executors that are meant to be read-only. Without broadcast variables, these variables would be shipped to each executor for every transformation and action, which can cause network overhead. However, with broadcast variables, they are shipped once to all executors and are cached for future reference.
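
To make the mechanism concrete, here is a minimal PySpark sketch of my own (not the Treselle code; the stopwords_en.txt file name comes from their description, everything else is assumed):

    from pyspark import SparkContext

    sc = SparkContext(appName="broadcast-stopwords-demo")

    # Load the stopword list on the driver and broadcast it once;
    # every executor caches a read-only copy instead of receiving it
    # with each task's closure.
    stopwords = set(sc.textFile("stopwords_en.txt").collect())
    bc_stopwords = sc.broadcast(stopwords)

    lines = sc.textFile("input.txt")
    tokens = lines.flatMap(lambda line: line.lower().split())

    # Executors read the cached broadcast copy via .value.
    filtered = tokens.filter(lambda word: word not in bc_stopwords.value)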

From there, they dig into details on what the Spark engine did and why we see what we do:

Note: Stage 2 has both reduceByKey() and sortByKey() operations, and as indicated in the job summary, the saveAsTextFile() action triggered Job 2. Do you have any guess whether Stage 2 will be further divided into other stages in Job 2? The answer is yes. Job 2 is triggered by the saveAsTextFile() action, and its DAG clearly indicates the list of operations used before saveAsTextFile(). Because Stage 2 has both reduceByKey() and sortByKey() operations, and both operations can shuffle the data, Stage 2 from Job 1 is broken down into Stage 4 and Stage 5 in Job 2. There are three stages in this job, but Stage 3 is skipped; the explanation for the skipped stage is provided below in the “What does ‘Skipped Stages’ mean in Spark?” section.
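
For reference, the kind of pipeline being described looks roughly like this (continuing the hypothetical PySpark sketch above, not the article’s actual code). The two shuffling operations are what split the work into separate stages, and saveAsTextFile() is the action that actually triggers the job:

    counts = (filtered
              .map(lambda word: (word, 1))
              .reduceByKey(lambda a, b: a + b)  # shuffle boundary -> new stage
              .sortByKey())                     # another shuffle -> another stage

    # Transformations above are lazy; this action triggers the job whose DAG
    # the article walks through.
    counts.saveAsTextFile("word_counts_out")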

There’s some good information here if you want to become more familiar with how Spark works.


SQL Server Migration With dbatools

Garry Bargsley gives an example of how quickly you can migrate a SQL Server instance:

So, in just 1 minute and 34 seconds, you have migrated all of the following from one server to another:

  • All SP_Configure settings
  • Any Custom Error Messages
  • Any SQL Credentials
  • All Linked Servers
  • Database Mail along with Configuration and Profiles and Accounts
  • All user objects in System Databases
  • All Backup Devices
  • Any System Triggers
  • All User Databases
  • All Logins
  • Any Data collectors
  • Any Security Audits
  • All Endpoints, Policy Management, Resource Governor, Extended Events
  • And Finally All SQL Server Agent Jobs, Schedules, Operators, Alerts

These are probably very small databases (as it was a test instance), but dbatools is quite impressive.


Tidyverse Updates

Hadley Wickham has two announcements.  First, for a slew of tidyverse packages:

Over the past couple of months there have been a bunch of smaller releases to packages in the tidyverse. This includes:

  • forcats 0.2.0, for working with factors.
  • readr 1.1.0, for reading flat-files from disk.
  • stringr 1.2.0, for manipulating strings.
  • tibble 1.3.0, a modern re-imagining of the data frame.

This blog post summarises the most important new features, and points to the full release notes where you can learn more.

Second, a new version of dplyr is coming:

dplyr 0.6.0 is a major release including over 100 bug fixes and improvements. There are three big changes that I want to touch on here:

  • Databases
  • Improved encoding support (particularly for CJK on Windows)
  • Tidyeval, a new framework for programming with dplyr

You can see a complete list of changes in the draft release notes.

You can already get a tech preview of the new dplyr if you’re interested in trying it out.


Data Classification In Power BI

Steve Hughes describes how Power BI data classification works:

Power BI Privacy Levels “specify an isolation level that defines the degree that one data source will be isolated from other data sources”. After working through some testing scenarios and trying to discover the real impact on data security, I was unable to effectively show how this might have any bearing on data security in Power BI. During only one test was I shown a warning about using data from a website with data I had marked Organizational and Private. In all cases, I was able to merge the data in the query and in the relationships with no warning or filtering. All of the documentation makes the same statement, and most bloggers are restating what is found in the Power BI documentation, which was not helpful. My takeaway after reviewing this for a significant amount of time is to not consider these settings when evaluating data security in Power BI. I welcome comments or additional references which demonstrate how this isolation actually works in practice. In most cases, we are using organizational data within our Power BI solutions, will not be impacted by this setting, and may find improved performance when disabling it.

As Steve notes, this is not really a security feature.  Instead, it’s intended more as a warning to users about which data is confidential and which is publicly sharable.


Cardinality Estimation On COUNT(*)

Paul White digs into how the cardinality estimator works with COUNT aggregations containing HAVING clauses:

The approach SQL Server takes is to assume that each group is most likely to contain the overall mean (average) number of rows. This is simply the cardinality divided by the number of unique values. For example, for 1000 rows with 20 unique values, SQL Server would assume that (1000 / 20) = 50 rows per group is the most likely value.

Turning back to our original example, this means that the computed count column is “most likely” to contain a value around (19614 / 575) ~= 34.1113. Since density is the reciprocal of the number of unique values, we can also express that as cardinality * density = (19614 * 0.00173913), giving a very similar result.
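
Putting the quoted numbers into the formula Paul describes:

    estimated rows per group = cardinality / distinct values
                             = cardinality * density
                             = 19614 / 575
                             = 19614 * 0.00173913
                             ≈ 34.1113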

Definitely worth a careful read.


The Basics Of SparkR

Yanbo Liang has an introductory article on what SparkR is and why you might want to use it:

However, data analysis using R is limited by the amount of memory available on a single machine, and because R is single-threaded, it is often impractical to use R on large datasets. To address R’s scalability issue, the Spark community developed the SparkR package, which is based on a distributed data frame that enables structured data processing with a syntax familiar to R users. Spark provides a distributed processing engine, data sources, and off-memory data structures; R provides a dynamic environment, interactivity, packages, and visualization. SparkR combines the advantages of both Spark and R.

In the following section, we will illustrate how to integrate SparkR with R to solve some typical data science problems from a traditional R user’s perspective.

This is a fairly introductory article, but gives an idea of what SparkR can accomplish.


Be Wary Of Case Sensitivity

Slava Murygin points out that for SQL Server on Linux, case sensitivity matters a lot:

So, in that manner, SQL Server files will be located in the “c:\var\opt\mssql\” directory!
Isn’t it simple?!!!
There will be no effort at all to refurbish old SQL Server code for new Linux platforms!

Here I’ll give you an example of database creation using different addressing methods:
1. I use Windows notation for the data file and Linux notation for the log file.
2. Because Linux systems are case-sensitive, it is very important to keep that in mind when you work with Linux. However, Microsoft allows you to be lax: you can use upper or lower case in whatever order you want.

Click through for an example.
