Press "Enter" to skip to content

Author: Kevin Feasel

Tidyverse Updates

Hadley Wickham has two announcements.  First, for a slew of tidyverse packages:

Over the past couple of months there have been a bunch of smaller releases to packages in the tidyverse. This includes:

  • forcats 0.2.0, for working with factors.
  • readr 1.1.0, for reading flat-files from disk.
  • stringr 1.2.0, for manipulating strings.
  • tibble 1.3.0, a modern re-imagining of the data frame.

This blog post summarises the most important new features, and points to the full release notes where you can learn more.
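
To give a quick taste of a few of these packages, here is a minimal sketch using forcats, stringr, and tibble together (the data is invented purely for illustration):

    library(tibble)
    library(forcats)
    library(stringr)

    # A small, made-up tibble of survey answers
    responses <- tibble(
      answer = c("agree", "strongly agree", "disagree", "agree")
    )

    # forcats: order factor levels by how frequently each appears
    responses$answer <- fct_infreq(factor(responses$answer))
    levels(responses$answer)

    # stringr: flag the "strongly" answers
    str_detect(as.character(responses$answer), "strongly")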

Second, a new version of dplyr is coming:

dplyr 0.6.0 is a major release including over 100 bug fixes and improvements. There are three big changes that I want to touch on here:

  • Databases
  • Improved encoding support (particularly for CJK on Windows)
  • Tidyeval, a new framework for programming with dplyr

You can see a complete list of changes in the draft release notes.

You can already get a tech preview of the new dplyr if you’re interested in trying it out.
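
To give a flavor of the tidyeval piece, here is a hedged sketch of the programming style described in the announcement; since this is still a preview, details such as the enquo() helper may change before the final release:

    library(dplyr)

    # A function that accepts bare column names, tidyeval-style:
    # enquo() captures the expression the caller typed, and !!
    # unquotes it inside the dplyr verbs.
    mean_by <- function(df, group_col, value_col) {
      group_col <- enquo(group_col)
      value_col <- enquo(value_col)
      df %>%
        group_by(!!group_col) %>%
        summarise(avg = mean(!!value_col))
    }

    mean_by(mtcars, cyl, mpg)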

Data Classification In Power BI

Steve Hughes describes how Power BI data classification works:

Power BI Privacy Levels “specify an isolation level that defines the degree that one data source will be isolated from other data sources.” After working through some testing scenarios and trying to discover the real impact on data security, I was unable to effectively show how this setting has any bearing on data security in Power BI. During only one test was I shown a warning about combining data from a website with data I had marked Organizational and Private. In all cases, I was able to merge the data in the query and in the relationships with no warning or filtering. All of the documentation makes the same statement, and most bloggers are simply restating what is found in the Power BI documentation, which was not helpful. My takeaway after reviewing this for a significant amount of time is to not consider these settings when evaluating data security in Power BI. I welcome comments or additional references which demonstrate how this isolation actually works in practice. In most cases, we are using organizational data within our Power BI solutions, will not be impacted by this setting, and may find improved performance when disabling it.

As Steve notes, this is not really a security feature.  Instead, it’s intended more as a warning to users about which data is confidential and which is publicly sharable.

Cardinality Estimation On COUNT(*)

Paul White digs into how the cardinality estimator handles COUNT aggregations filtered by HAVING clauses:

The approach SQL Server takes is to assume that each group is most likely to contain the overall mean (average) number of rows. This is simply the cardinality divided by the number of unique values. For example, for 1000 rows with 20 unique values, SQL Server would assume that (1000 / 20) = 50 rows per group is the most likely value.

Turning back to our original example, this means that the computed count column is “most likely” to contain a value around (19614 / 575) ~= 34.1113. Since density is the reciprocal of the number of unique values, we can also express that as cardinality * density = (19614 * 0.00173913), giving a very similar result.
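
The arithmetic in that paragraph is easy to sanity-check; for example, in R:

    cardinality     <- 19614   # total rows in the table
    distinct_values <- 575     # unique values in the grouping column

    cardinality / distinct_values   # mean rows per group: ~34.1113

    density <- 1 / distinct_values  # density is the reciprocal: ~0.00173913
    cardinality * density           # the same estimate via density: ~34.1113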

Definitely worth a careful read.

The Basics Of SparkR

Yanbo Liang has an introductory article on what SparkR is and why you might want to use it:

However, data analysis using R is limited by the amount of memory available on a single machine; further, since R is single-threaded, it is often impractical to use it on large datasets. To address R’s scalability issue, the Spark community developed the SparkR package, which is based on a distributed data frame that enables structured data processing with a syntax familiar to R users. Spark provides a distributed processing engine, data sources, and off-memory data structures. R provides a dynamic environment, interactivity, packages, and visualization. SparkR combines the advantages of both Spark and R.

In the following section, we will illustrate how to integrate SparkR with R to solve some typical data science problems from a traditional R user’s perspective.
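
To make that concrete, here is a minimal SparkR sketch (assuming a working Spark installation with SPARK_HOME configured; the aggregation mirrors the standard SparkR documentation example):

    library(SparkR)

    # Start a SparkR session (assumes SPARK_HOME points at a Spark install)
    sparkR.session(appName = "SparkRIntro")

    # Distribute a local R data frame as a SparkDataFrame
    df <- as.DataFrame(faithful)

    # Aggregations run as distributed Spark jobs, with R-like syntax
    waiting_counts <- summarize(
      groupBy(df, df$waiting),
      count = n(df$waiting)
    )
    head(waiting_counts)

    sparkR.session.stop()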

This is a fairly introductory article, but gives an idea of what SparkR can accomplish.

Be Wary Of Case Sensitivity

Slava Murygin points out that for SQL Server on Linux, case sensitivity matters a lot:

So, in that manner, SQL Server files will be located in the “c:\var\opt\mssql\” directory!
Isn’t it simple?!!!
There will be no effort at all to refurbish old SQL Server code for new Linux platforms!

Here I’ll give you an example of database creation using different addressing methods:
1. I use Windows notation for the data file and Linux notation for the log file.
2. Because Linux systems are case sensitive, it is very important to keep that in mind when you work with Linux. However, Microsoft allows you to be lax about it: you can use upper or lower case in any order you want.

Click through for an example.

Using h2o.ai On HDInsight

Xiaoyong Zhu shows how to set up h2o.ai on Azure HDInsight:

H2O Flow is an interactive web-based computational user interface where you can combine code execution, text, mathematics, plots and rich media into a single document, much like Jupyter Notebooks. With H2O Flow, you can capture, rerun, annotate, present, and share your workflow. H2O Flow allows you to use H2O interactively to import files, build models, and iteratively improve them. Based on your models, you can make predictions and add rich text to create vignettes of your work – all within Flow’s browser-based environment. In this blog, we will only focus on its visualization part.

The H2O Flow web service lives in the Spark driver and is routed through the HDInsight gateway, so it can only be accessed when the Spark application/notebook is running.

You can click the available link in the Jupyter Notebook, or you can directly access this URL:

https://yourclustername-h2o.apps.azurehdinsight.net/flow/index.html
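
If you would rather drive the same cluster from R instead of the Flow UI, the h2o package can attach to a running instance; this is a hedged sketch with placeholder connection details, not HDInsight-specific guidance:

    library(h2o)

    # Attach to an existing H2O cluster rather than starting a local one.
    # "your-cluster-endpoint" and the port are placeholders; substitute
    # your cluster's actual address.
    h2o.init(ip = "your-cluster-endpoint", port = 54321, startH2O = FALSE)

    # Once connected, data frames and models live on the cluster
    iris_hex <- as.h2o(iris)
    h2o.describe(iris_hex)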

Setup is pretty easy.

Embedded Solr With Scala

Anurag Srivastava shows how to use Embedded Solr, with an example written in Scala:

Embedded Solr has the same interface as Solr without requiring an HTTP connection. When we “embed” Solr into a Java application, it provides the exact same API that you would use if you were connecting to a remote Solr instance. We can use embedded Solr for in-memory testing because our test cases should not depend on any external resources.

Read on for the code sample.

Optimizing For Ad Hoc Workloads

Kendra Little has soured a bit on the Optimize for Adhoc Workloads setting:

Once upon a time, I was really excited about getting this configuration item in SQL Server 2008. Early versions of SQL Server 2005 weren’t all that great at managing the size of the execution plan cache: it could really balloon up and eat away at the buffer pool. But the SQL Server team did a good job at tuning those algorithms in later service packs for 2005 and future versions, and it became much less of an issue.

Personally, I’ve never had a case where enabling ‘Optimize for Adhoc Workloads’ improved performance in a way that I could measure. It may save you a small amount of memory, it may not.

I don’t mean this as a big insult. Trying to save a penny every time you go to the grocery store could add up, if you grocery shop very frequently. But hopefully that’s not one of your major revenue sources over time.

It’s an interesting counter-argument and worth reading.
