
April 13, 2017

Cardinality Estimation On COUNT(*)

Paul White digs into how the cardinality estimator handles HAVING clauses applied to COUNT aggregations:

The approach SQL Server takes is to assume that each group is most likely to contain the overall mean (average) number of rows. This is simply the cardinality divided by the number of unique values. For example, for 1000 rows with 20 unique values, SQL Server would assume that (1000 / 20) = 50 rows per group is the most likely value.

Turning back to our original example, this means that the computed count column is “most likely” to contain a value around (19614 / 575) ~= 34.1113. Since density is the reciprocal of the number of unique values, we can also express that as cardinality * density = (19614 * 0.00173913), giving a very similar result.
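
To make the arithmetic concrete, here is the same calculation as a small T-SQL sketch. The query shape and names are hypothetical; only the row count and density come from the quoted example.

  -- The pattern under discussion: the optimizer has to guess how many
  -- groups will survive a HAVING filter on COUNT(*). (Names hypothetical.)
  SELECT SomeKey, COUNT(*) AS cnt
  FROM dbo.SomeTable
  GROUP BY SomeKey
  HAVING COUNT(*) = 34;  -- 34 is near the mean group size, hence "most likely"

  -- The "most likely" count per group: cardinality / distinct values,
  -- which is the same thing as cardinality * density.
  SELECT 19614.0 / 575      AS MeanGroupSize,  -- ~34.1113
         19614 * 0.00173913 AS ViaDensity;     -- ~34.111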

Definitely worth a careful read.


The Basics Of SparkR

Yanbo Liang has an introductory article on what SparkR is and why you might want to use it:

However, data analysis using R is limited by the amount of memory available on a single machine, and because R is single-threaded, it is often impractical to use it on large datasets. To address R’s scalability issue, the Spark community developed the SparkR package, which is based on a distributed data frame that enables structured data processing with a syntax familiar to R users. Spark provides a distributed processing engine, data sources, and off-memory data structures; R provides a dynamic environment, interactivity, packages, and visualization. SparkR combines the advantages of both Spark and R.

In the following section, we will illustrate how to integrate SparkR with R to solve some typical data science problems from a traditional R user’s perspective.

This is a fairly introductory article, but gives an idea of what SparkR can accomplish.


Be Wary Of Case Sensitivity

Slava Murygin points out that for SQL Server on Linux, case sensitivity matters a lot:

So, in that manner, SQL Server files will be located in the “c:\var\opt\mssql\” directory!
Isn’t it simple?!!!
There will be no effort at all to refurbish old SQL Server code for new Linux platforms!

Here I’ll give you an example of database creation using different addressing methods:
1. I use Windows notation for the data file and Linux notation for the log file.
2. Linux systems are case sensitive, so it is very important to keep that in mind when you work with Linux. However, Microsoft allows you to be lax about it: you can use upper or lower case in any order you want.

Click through for an example.
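
The flavor of it, as a hedged sketch: the database name and file paths below are hypothetical, and per the quote, SQL Server on Linux treats the Windows-style “C:\” prefix as the root of the Linux filesystem, so both FILENAME values point at the same directory.

  -- Mixing Windows and Linux path notation in one CREATE DATABASE:
  CREATE DATABASE CaseSensitivityDemo
  ON PRIMARY (
      NAME = CaseSensitivityDemo_Data,
      FILENAME = 'C:\var\opt\mssql\data\CaseSensitivityDemo.mdf'  -- Windows notation
  )
  LOG ON (
      NAME = CaseSensitivityDemo_Log,
      FILENAME = '/var/opt/mssql/data/CaseSensitivityDemo.ldf'    -- Linux notation
  );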


Embedded Solr With Scala

Anurag Srivastava shows how to use Embedded Solr, with an example written in Scala:

Embedded Solr has the same interface as Solr without requiring an HTTP connection. When we “embed” Solr into a Java application, it provides the exact same API that you would use if you were connecting to a remote Solr instance. We can use embedded Solr for in-memory testing because test cases should not depend on any external resources.

Read on for the code sample.


Using h2o.ai On HDInsight

Xiaoyong Zhu shows how to set up h2o.ai on Azure HDInsight:

H2O Flow is an interactive web-based computational user interface where you can combine code execution, text, mathematics, plots and rich media into a single document, much like Jupyter Notebooks. With H2O Flow, you can capture, rerun, annotate, present, and share your workflow. H2O Flow allows you to use H2O interactively to import files, build models, and iteratively improve them. Based on your models, you can make predictions and add rich text to create vignettes of your work – all within Flow’s browser-based environment. In this blog, we will only focus on its visualization part.

The H2O Flow web service lives in the Spark driver and is routed through the HDInsight gateway, so it can only be accessed while the Spark application/notebook is running.

You can click the available link in the Jupyter Notebook, or you can directly access this URL:

https://yourclustername-h2o.apps.azurehdinsight.net/flow/index.html

Setup is pretty easy.


Optimizing For Ad Hoc Workloads

Kendra Little has soured a bit on the Optimize for Adhoc Workloads setting:

Once upon a time, I was really excited about getting this configuration item in SQL Server 2008. Early versions of SQL Server 2005 weren’t all that great at managing the size of the execution plan cache: it could really balloon up and eat away at the buffer pool. But the SQL Server team did a good job at tuning those algorithms in later service packs for 2005 and future versions, and it became much less of an issue.

Personally, I’ve never had a case where enabling ‘Optimize for Adhoc Workloads’ improved performance in a way that I could measure. It may save you a small amount of memory, or it may not.

I don’t mean this as a big insult. Trying to save a penny every time you go to the grocery store could add up, if you grocery shop very frequently. But hopefully that’s not one of your major revenue sources over time.

It’s an interesting counter-argument and worth reading.
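
If you want to check the claim on your own instance before flipping the switch, here is a quick sketch (mine, not Kendra’s) that gauges how much plan cache memory single-use ad hoc plans are actually holding, followed by the sp_configure call that enables the setting:

  -- How much plan cache memory do single-use ad hoc plans hold right now?
  SELECT COUNT(*) AS single_use_plans,
         SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS total_mb
  FROM sys.dm_exec_cached_plans
  WHERE objtype = 'Adhoc'
    AND usecounts = 1;

  -- Enabling the setting server-wide (requires advanced options):
  EXEC sys.sp_configure 'show advanced options', 1;
  RECONFIGURE;
  EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
  RECONFIGURE;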


SQL Server Backup To Azure Tool Causing Restore Errors

Jack Li diagnoses an issue in which the Microsoft SQL Server Backup to Microsoft Azure Tool causes errors when trying to restore a database on an Azure VM with SQL Server 2008 R2:

I worked on an interesting issue today where a user couldn’t restore a backup. Here is what this customer did:

  1. backed up a database from an on-premises server (2008 R2)
  2. copied the file to an Azure VM
  3. tried to restore the backup on the Azure VM (2008 R2 with exact same build#)

But he got the following error:

Msg 3241, Level 16, State 0, Line 4
The media family on device ‘c:\temp\test.bak’ is incorrectly formed. SQL Server cannot process this media family.
Msg 3013, Level 16, State 1, Line 4
RESTORE HEADERONLY is terminating abnormally.

We verified that he could restore the same backup on the local machine (on-premises). Initially I thought the file must have been corrupted during transfer, so we used a different method to transfer the file and also zipped it, but the behavior was the same. When we backed up a database from the same Azure VM and tried to restore it, that was successful.
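
For reference, the statement that throws the error is just a plain header read; the path below is the one from the quoted error message:

  -- The failing call from the quoted error:
  RESTORE HEADERONLY FROM DISK = N'c:\temp\test.bak';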

Click through for Jack’s findings as well as a couple workarounds.


On-Prem Power BI Gateway

Steve Hughes shows how to set up a data gateway for Power BI:

First, I will not be discussing the personal gateway in this post. If you have chosen to use the personal gateway, you have limited functionality and should consider using the on-premises data gateway for corporate use.

The on-premises data gateway (referred to as gateway throughout this post) “acts as a bridge, providing quick and secure data transfer between on-premises data and the Power BI, Microsoft Flow, Logic Apps, and PowerApps services.” (ref) Much of what is discussed here will apply to all of the services referenced above, but our primary concern is related to Power BI. Please refer to references at the end of this post for details about data sources supported within the gateway.

Click through for more information.


Measuring Correlation In SQL

Phil Factor shows how to calculate Kendall’s Tau and Spearman’s Rho in SQL:

Kendall’s Tau rank correlation is a handy way of determining how correlated two variables are, and whether this is more than chance. If you just want a measure of the correlation, you don’t have to assume very much about the distribution of the variables. Kendall’s Tau is popular for calculating correlations on non-parametric data. Spearman’s Rho is possibly more popular for the purpose, but Kendall’s Tau has a distribution with better statistical properties (the sample estimate is close to a population variance), so confidence levels are more reliable. In general, though, Kendall’s Tau and Spearman’s rank correlation coefficient are very similar. The obvious difference between them is that, for the standard method of calculation, Spearman’s rank correlation requires ranked data as input, whereas the algorithm for Kendall’s Tau does the ranking for you. Kendall’s Tau consumes any non-parametric data with equal relish.

Kendall’s Tau is easy to calculate on paper, and it makes intuitive sense. It deals with the probabilities of observing the agreeable (concordant) and non-agreeable (discordant) pairs of rankings. All observations are paired with each of the others. A concordant pair is one whose members of one observation are both larger than their respective members of the other paired observation, whereas discordant pairs have numbers that differ in opposite directions. Kendall’s Tau-b takes tied rankings into account.
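
To make the pairwise idea concrete, here is a minimal Tau-a sketch (no tie adjustment, so this is an illustration rather than Phil’s Tau-b code; the table dbo.Observations(id, x, y) is hypothetical):

  -- Compare every unordered pair of observations exactly once.
  WITH PairAgreement AS (
      SELECT SIGN(a.x - b.x) * SIGN(a.y - b.y) AS agreement
      FROM dbo.Observations AS a
      JOIN dbo.Observations AS b
        ON a.id < b.id
  )
  SELECT (SUM(CASE WHEN agreement = 1  THEN 1 ELSE 0 END)   -- concordant pairs
        - SUM(CASE WHEN agreement = -1 THEN 1 ELSE 0 END))  -- discordant pairs
        * 1.0 / COUNT(*) AS kendalls_tau_a                  -- (C - D) / (n(n-1)/2)
  FROM PairAgreement;

Tied pairs land in neither bucket, which is exactly what Tau-b corrects for.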

I appreciate Phil putting this series together.  I’d probably stick with R, but it’s good to have options.
