Press "Enter" to skip to content

Month: May 2017

Presto On HDInsight

Ashish Thapliyal shows how to install Presto on an HDInsight cluster:

What is Presto?

Presto is a distributed SQL query engine optimized for ad hoc analysis at interactive speed. It supports standard ANSI SQL, including complex queries, aggregations, joins, and window functions. Presto is becoming a popular interactive SQL query engine, one that has grabbed attention and mind-share in the Big Data community.

What are the key advantages of Presto?

1- It’s very fast – Presto was designed and written from the ground up for interactive analytics and approaches the speed of commercial data warehouses.

2- Presto can query data where it lives – Presto supports many data sources via the number of connectors the community has built. You can query HDFS, Hive, Azure Storage, or data stored in SQL Server, MySQL, Cosmos DB, Cassandra, etc.

You can install Presto in one simple step with the HDInsight Script Action feature.
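To make the second advantage concrete, here is a minimal sketch of a Presto query (the catalog, schema, and table names are all hypothetical) that joins Hive-resident data against a MySQL table and uses a standard window function:

    -- Hypothetical catalogs and tables: hive.web.clicks and mysql.crm.customers
    SELECT
        cu.customer_name,
        COUNT(*) AS clicks,
        RANK() OVER (ORDER BY COUNT(*) DESC) AS click_rank  -- ANSI window function
    FROM hive.web.clicks AS c
    JOIN mysql.crm.customers AS cu
        ON c.customer_id = cu.customer_id
    GROUP BY cu.customer_name;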

Read on for instructions, including how to connect this to other Azure products like Cosmos DB and Azure SQL Database.

Star-Schema Benchmark With Hive + Druid

Carter Shanklin and Slim Bouguerra run a Hadoop OLAP system built on Hive and Druid against the Star-Schema Benchmark battery of queries:

How did we arrive at the query used to build the OLAP index? There is a systematic procedure:

  1. The union of all dimensions used by the SSB queries is included in the index.
  2. The union of all measures is included in the index. Notice that we pre-compute some products in the index.
  3. Druid requires a timestamp, so the date of the transaction is used as the timestamp.

You can see that building the index requires knowledge of the query patterns. Either an expert in the query patterns architects the index, or a tool is needed to analyze queries or to dynamically build indexes on the fly. A lot of time can be spent in this architecture phase, gathering requirements, designing measures and so on, because changing your mind after the fact can be very difficult.
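To make the procedure concrete, here is a rough sketch of what the index-building statement might look like in Hive (the column names are invented and the storage-handler properties vary by version, so treat this as illustrative only):

    -- Hypothetical sketch: build a Druid-backed OLAP index from a flattened SSB table.
    CREATE TABLE ssb_druid
    STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
    TBLPROPERTIES ("druid.segment.granularity" = "MONTH")
    AS
    SELECT
        CAST(d_date AS timestamp) AS `__time`,        -- rule 3: the transaction date is the Druid timestamp
        c_city, c_nation, c_region, p_brand,          -- rule 1: union of all dimensions used by the queries
        lo_revenue,                                   -- rule 2: union of all measures...
        lo_revenue - lo_supplycost AS net_revenue,    -- ...including pre-computed products
        lo_extendedprice * lo_discount AS discounted_price
    FROM ssb_flat;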

One thing I don’t like so much is that they removed the ORDER BY clauses from some of the queries, since that change makes it more difficult to use these results for “it’s totally not a comparison so don’t sue us Oracle” purposes.

SQL In Spaaaaaaacccce!

Drew Furgiuele knows that in space, no one can hear your Sev 18 alerts:

Over the last few months, I’ve had a new itch: I wanted to get into the world of high altitude ballooning. The concept is pretty simple: get a balloon and some helium, tie it to a payload, and let it go. The balloon travels a certain height and distance, then bursts, and your payload falls back to earth. That in itself is pretty interesting to me, and it’s not prohibitively expensive: students have done it for a couple hundred dollars. For a few dollars more, you can put a camera on it and take pictures as it travels.

The thing is, I wanted to do more than that. The maker in me wanted to do something special, something no one (to my knowledge) has done before. I not only wanted to launch a balloon and a camera, I wanted to put SQL Server up there, too. So that’s why we’re announcing the High Altitude SQL Server Project (HASSP).

I love it.

Figuring Out If That PowerShell Variable Exists

Richie Lee has a method for checking whether a variable exists in PowerShell:

Recently I needed to check that a variable exists in the PowerShell session currently running. This is actually far easier than it sounds. So here is a simple demo for how it works. The magic here is the “Test-Path variable:my_variable” on line 4. It tests that a variable of that name exists. If it does, great, let’s print out the value. If not, let’s alert that it doesn’t. The second example of this, on line 11, will do exactly that.
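Richie’s code isn’t reproduced here, but a minimal sketch of the technique (the variable name is invented) looks like this:

    # Hypothetical sketch of the Test-Path check; $my_variable is an invented name.
    $my_variable = 'hello'

    if (Test-Path variable:my_variable) {      # note the variable: drive and the missing $ sigil
        Write-Output "my_variable exists with value: $my_variable"
    }
    else {
        Write-Warning 'my_variable does not exist'
    }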

Read on to see Test-Path in action.

T-SQL FizzBuzz Golf

Kenneth Fisher plays the FizzBuzz game with T-SQL and his chat piles on:

Now normally FizzBuzz is done with a loop, but as Russ said, we are using T-SQL, so batch code is always the goal. That said, what table should I query to get the numbers 1-100? Well, I decided I’d just do something simple here and use a system view that has more than 100 rows, the ROW_NUMBER function, and restrict it using TOP.
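As a rough sketch of that pattern (not Kenneth’s exact code), using sys.all_objects as the row source:

    -- A minimal set-based FizzBuzz sketch: a system view with more than 100 rows,
    -- ROW_NUMBER to generate the integers, and TOP to restrict them to 1-100.
    SELECT TOP (100)
        CASE
            WHEN n % 15 = 0 THEN 'FizzBuzz'
            WHEN n % 3  = 0 THEN 'Fizz'
            WHEN n % 5  = 0 THEN 'Buzz'
            ELSE CAST(n AS varchar(10))
        END AS fizzbuzz
    FROM (
        SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
        FROM sys.all_objects
    ) AS numbers
    ORDER BY n;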

Read on to see several answers to this problem, some better than others.

SQL On Docker Environment Variables

James Anderson lists the current environment variables that you can set for a SQL Server on Linux Docker container:

As of CTP 2.1 for SQL Server 2017, a set of new environment variables is available. These variables allow us to configure each SQL Server container as we spin it up. The first version of SQL Server on Linux came with:

ACCEPT_EULA
SA_PASSWORD

These had to be set for the container to start. The SA_PASSWORD has to be a complex password or the container will not start. CTP 2.1 introduced:

Read on for the new variables and an example on how to use them.
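In the meantime, a minimal sketch of the two original variables in action (the password is a placeholder, and the image name is the 2017-era one):

    # Minimal sketch: start a container with the two originally required variables.
    docker run -d \
        -e 'ACCEPT_EULA=Y' \
        -e 'SA_PASSWORD=YourStrong!Passw0rd' \
        -p 1433:1433 \
        --name sql2017 \
        microsoft/mssql-server-linux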

Three-Way Variance Analysis

Bogdan Anastasiei shows how to perform a three-way variance analysis when the third-order and second-order effects are both statistically significant:

In the formula above, the interaction effect is, of course, dose*gender*type. The ANOVA results can be seen below (we have only kept the line presenting the third-order interaction effect).

                  Df Sum Sq Mean Sq F value   Pr(>F)
dose:gender:type   2    187    93.4  22.367 3.81e-10

The interaction effect is statistically significant: F(2)=22.367, p<0.01. In other words, we do have a third-order interaction effect. In this situation, it is not advisable to report and interpret the second-order interaction effects (they could be misleading). Therefore, we are going to compute the simple second-order interaction effects.
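For context, here is a hedged sketch of the R calls that produce output like the above (the data frame and column names are invented):

    # Hypothetical data frame df with columns score, dose, gender, and type.
    fit <- aov(score ~ dose * gender * type, data = df)
    summary(fit)  # the dose:gender:type row carries the third-order interaction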

This is definitely not a trivial article, but there are useful techniques in it.

Securing Kafka-To-Spark

Mark Grover explains how to secure communications between Apache Kafka and Apache Spark:

However, to read data from secure Kafka in a distributed fashion, we need Hadoop-style delegation tokens in Kafka (KAFKA-1696), support for which doesn’t exist at the time of this writing (Spring 2017).

We considered various ways to solve this problem but ultimately decided that the recommended solution to read data securely from Kafka (at least until Kafka delegation token support is introduced) would be for the Spark application to distribute the user’s keytab so it’s accessible to the executors. The executors then use that shared keytab to authenticate with the Kerberos Key Distribution Center (KDC) and read from the Kafka brokers. The YARN distributed cache is used to ship the keytab from the client (that is, the gateway node) to the driver and executors.
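The mechanics boil down to shipping the keytab and a JAAS configuration alongside the job. A rough sketch of the spark-submit invocation (the file names, class, and paths are placeholders) might look like this:

    # Rough sketch: --files places the keytab and JAAS config in the YARN distributed
    # cache; the JVM options point the driver and executors at the JAAS config.
    spark-submit \
        --files kafka_client_jaas.conf,user.keytab \
        --driver-java-options "-Djava.security.auth.login.config=./kafka_client_jaas.conf" \
        --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./kafka_client_jaas.conf" \
        --class com.example.SecureKafkaApp \
        my-spark-app.jar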

This turns out to be a bit more difficult than I would have anticipated.

Basic Q&A Maker With Non-Default Error Message

Steph Locke shows how to take a Q&A bot and change the default message when there is no relevant match:

Last week, I blogged about building a no-code Q&A bot for your website. One little niggle I had with the bot was the response when it couldn’t match a user input to a Q&A. I wondered how to change “No match found!”.

I looked around the qnamaker.ai site and couldn’t find a place I could change this. I submitted some feedback and the great people at the other end of the Q&A site responded super quickly. I’ve raised a number of feedback points with them and I must say they’re absolutely amazing at responding. But I digress.

Read on for the answer.
