
Month: May 2016

Configure SAP HANA With Impala

Sreedhar Bolneni has a walkthrough on integrating SAP HANA with Impala:

Assuming an existing Cloudera Enterprise cluster with Impala services and HANA instances are running, and that the HANA host has access to the Impala daemons, configuring the integration is fairly straightforward:

  1. Install the Impala ODBC driver on the HANA host.

  2. Configure the Impala data source.

  3. Create remote source and virtual tables using SAP HANA Studio; then test.

There are a lot of screenshots and configuration files to help guide you through.
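
If you want to sanity-check the DSN from step 2 before pointing HANA at it, any ODBC client will do.  Here's a minimal, hypothetical Python/pyodbc check (the DSN name is an assumption; the walkthrough itself does its testing from SAP HANA Studio):

```python
# Hypothetical sanity check of the Impala ODBC DSN from any ODBC client.
import pyodbc

# "ImpalaDSN" is an assumed data source name from step 2; match it to your odbc.ini.
conn = pyodbc.connect("DSN=ImpalaDSN", autocommit=True)
cursor = conn.cursor()

# List the databases the Impala daemon exposes, just to prove connectivity.
for row in cursor.execute("SHOW DATABASES"):
    print(row[0])

conn.close()
```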

Comments closed

SSIS + Google Distance Matrix API

Terry McCann shows how to use the Google distance matrix API to calculate a distance from a starting point to an ending point:

The most basic use is to calculate the distance from A to B. This is what we will look at using SSIS. The API takes in an origin and a destination. These can either be a lat & lng combo or an address, which will be translated into lat & lng when executed. To demonstrate the process, I have created a database and loaded it with some sample customer and store data. We have 2 customers and 4 stores. These customers are linked to different stores. You can download the example files at the bottom of the page.

I like that this shows just how easy it is to hit an API with an SSIS component.  If I had one wish for this article, it would be to use Biml to generate the package rather than talking through the tasks and components.  Regardless, check this out; it’s a great use of the script component.
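
Under the covers, that script component is making a plain HTTP call.  As a rough sketch of the same request outside of SSIS (Python here, with a placeholder API key and made-up origin and destination):

```python
# Rough sketch of the Distance Matrix call the SSIS script component makes.
# YOUR_API_KEY and the origin/destination values are placeholders.
import requests

params = {
    "origins": "51.5074,-0.1278",        # lat,lng combo (or a street address)
    "destinations": "Birmingham, UK",    # addresses are geocoded for you
    "units": "imperial",
    "key": "YOUR_API_KEY",
}

resp = requests.get(
    "https://maps.googleapis.com/maps/api/distancematrix/json",
    params=params,
    timeout=30,
)
element = resp.json()["rows"][0]["elements"][0]
print(element["distance"]["text"], element["duration"]["text"])
```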

Comments closed

“RAID” And Backups

Kenneth Fisher explains that you can set up backup strategies similar to different RAID types:

RAID 0

Splitting the backup data between multiple files.

This is actually a fairly common way to speed up a large backup. IO is one of the slowest parts of the whole process, so by splitting the backup into multiple files we can reduce our backup time by quite a bit. Having multiple files will also speed up your restore, but you do have to make sure that all of the files are available. If you lose one file, then the whole backup is useless.

This is done by listing multiple locations for the backup.
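
For reference, the multi-file technique is just a comma-separated list of DISK targets on BACKUP DATABASE.  Here's a minimal sketch with hypothetical server, database, and file paths, submitted through pyodbc:

```python
# Minimal sketch of striping a backup across multiple files ("RAID 0" style).
# Server, database, and file paths are hypothetical; BACKUP cannot run inside
# a transaction, so the connection uses autocommit.
import pyodbc

sql = """
BACKUP DATABASE [SalesDW]
TO  DISK = N'F:\\Backups\\SalesDW_1.bak',
    DISK = N'G:\\Backups\\SalesDW_2.bak',
    DISK = N'H:\\Backups\\SalesDW_3.bak'
WITH COMPRESSION, STATS = 10;
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=MyServer;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
conn.execute(sql)   # every file listed here is required at restore time
conn.close()
```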

This is an interesting use of RAID as metaphor.

Comments closed

Data Warehouse Design Tips

Dustin Ryan has part one of a two-part series on data warehouse design best practices:

2. Store additive measures in the data warehouse.

The best measures to store in the data warehouse are those that can be fully aggregated. A measure that can be fully aggregated is a measure that can be summarized by any dimension or all dimensions and still remain meaningful. For instance, a Sales Amount measure can be summarized by Product, Date, Geography, etc. and still provide valuable insight for the customer.

Measures that cannot be fully aggregated, such as ratios or other percentage-type calculations, should be handled in the semantic model or the reporting tool. For example, a measure such as Percentage Profit Margin stored in a table cannot be properly aggregated. A better option would be to store the additive measures that are the base for the Percentage Profit Margin, such as Revenue, Cost, Margin, etc. These base measures can be used to calculate the ratio in a query, semantic model, or reporting tool.
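
To see why the additive base measures belong in the warehouse, compare averaging a stored row-level percentage against recomputing the ratio from aggregated Revenue and Margin.  Here's a quick sketch with made-up numbers:

```python
# Why store the additive base measures instead of a ratio: re-deriving the
# percentage from aggregated Revenue and Margin is not the same as averaging
# a row-level percentage. The numbers below are made up for illustration.
import pandas as pd

sales = pd.DataFrame({
    "Product": ["Widget", "Widget", "Gadget"],
    "Revenue": [100.0, 400.0, 250.0],
    "Margin":  [ 20.0,  40.0,  75.0],
})
sales["PctProfitMargin"] = sales["Margin"] / sales["Revenue"]

# Wrong: aggregating the stored ratio directly.
print(sales["PctProfitMargin"].mean())                    # 0.2

# Right: aggregate the additive measures, then compute the ratio.
print(sales["Margin"].sum() / sales["Revenue"].sum())     # 0.18
```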

The first five tips are non-controversial and act as a good baseline for understanding warehousing with SQL Server.  Do check it out.

Comments closed

Principal Component Analysis Using R

Nina Zumel delves into principal component regression using R (via R Bloggers):

Data tends to come from databases that must support many different tasks, so it is exactly the case that there may be columns or variables that are correlated to unknown and unwanted additional processes. The reason PCA can’t filter out these noise variables is that without use of y, standard PCA has no way of knowing what portion of the variation in each variable is important to the problem at hand and should be preserved. This can be fixed through domain knowledge (knowing which variables to use), variable pruning and y-aware scaling. Our next article will discuss these procedures; in this article we will orient ourselves with a demonstration of both what a good analysis and what a bad analysis looks like.

All the variables are also deliberately mis-scaled to model some of the difficulties of working with under-curated real world data.
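
To make the point about noise and mis-scaling concrete, here's a small simulated sketch (not from the article): standard PCA never looks at y, so a deliberately mis-scaled noise column ends up dominating the first component.

```python
# Sketch of the failure mode described above: standard PCA never sees y,
# so a mis-scaled noise column dominates the leading component.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 1000
signal = rng.normal(size=n)              # actually related to y
noise = rng.normal(size=n) * 100.0       # unrelated to y, huge scale
y = 2.0 * signal + rng.normal(size=n)

X = np.column_stack([signal, noise])
pca = PCA(n_components=2).fit(X)

print(pca.explained_variance_ratio_)     # first component carries ~all the variance
print(pca.components_[0])                # ...and it is essentially the noise column
```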

This does read like an academic paper, so it’s pretty heavy reading.  It’s also very good reading from a great writer, so take some time and give it a read if you do data analysis.

Comments closed

Say It With Screenshots

Brent Ozar continues his series on interviewing tactics:

After writing about “For Technical Interviews, Don’t Ask Questions, Show Screenshots”, lots of folks asked what kinds of screenshots I’d show. Here’s this week’s example.

I show each screenshot on a projector (or shared desktop) to the candidate and say:

  1. What’s this screen from?

  2. What does the screen mean?

  3. If it was a server you inherited from someone else, would there be any actions you’d take?

  4. What questions might you want to ask before you take those actions?

  5. Would there be any drawbacks to your actions?

  6. What would be the benefits of your actions?

I have started to use this in interviews and I’m already loving it.  I don’t want people to memorize minutiae (“Name all of the policies available in Policy-Based Management”), but if I show a picture of the different policies, that should jog your memory on when you’ve used PBM to solve interesting problems.

Comments closed

Creating M Functions From Parameterized Queries

Chris Webb shows how to take a parameterized query in Power BI and create an M function from it:

All of these examples involve writing M code manually. The big change in the latest version of Power BI Desktop is that you can do the same thing using just the UI.

Let’s take the classic example of combining data from multiple Excel workbooks and update it to show how things work now.

Say you have a folder containing three Excel workbooks with sales data for January, February and March, and you want to load data from all three into a single table in Power BI. The first thing to do is to create a new parameter in Power BI Desktop that returns the filename, including path, of one of the Excel files. Call it ExcelFilePath and configure it as shown here:

This is pretty cool.
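
The pattern underneath is a "load one workbook" step parameterized on the file path, turned into a function, and applied to every file in the folder.  Chris does it entirely through the Power BI UI; as a rough analogue of the same pattern, here's a pandas sketch with assumed folder and sheet names:

```python
# The pattern behind the Power Query approach, sketched in pandas:
# a "load one workbook" step parameterized on the file path, applied to
# every workbook in the folder. Folder and sheet name are assumptions.
from pathlib import Path
import pandas as pd

def load_sales(excel_file_path: Path) -> pd.DataFrame:
    """The equivalent of the ExcelFilePath-parameterized query turned into a function."""
    df = pd.read_excel(excel_file_path, sheet_name="Sales")
    df["SourceFile"] = excel_file_path.name
    return df

folder = Path(r"C:\Data\MonthlySales")
all_sales = pd.concat(
    (load_sales(f) for f in sorted(folder.glob("*.xlsx"))),
    ignore_index=True,
)
print(all_sales.shape)
```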

Comments closed

Configuring Apache Flink

Awanish at Edureka shows how to install and configure Apache Flink:

Apache Flink is an open source platform for distributed stream and batch data processing. It can run on Windows, Mac OS, and Linux. In this blog post, let’s discuss how to set up a Flink cluster locally. It is similar to Spark in many ways – it has APIs for graph and machine learning processing like Apache Spark – but Apache Flink and Apache Spark are not exactly the same.

To set up a Flink cluster, you must have Java 7.x or higher installed on your system. Since I have Hadoop 2.2.0 installed on CentOS (Linux), I have downloaded the Flink package which is compatible with Hadoop 2.x. Run the command below to download the Flink package.

Flink is another streaming system.  Check out this SlideShare presentation to see the differences between Flink and Spark.

Comments closed

Creating An ETL Process In Powershell

Max Trinidad is building a Powershell-based solution for ETL from scratch:

So after the drive gets mapped to the T: drive, we need to look at and collect the type of logs we want to pull. In my scenario, I’m looking for all logs labeled “*.Events.*.log.*”. One caveat discovered previously: these text log files don’t contain server name information. But no problem! This is another opportunity to be creative with PowerShell.

Here we use the cmdlet “Get-ChildItem” with “Sort-Object” to sort the results by the object property “LastWriteTime“. You will find this property very useful later as you progress in our data collection process. This result set will need to be stored in a PowerShell object.
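
As a rough analogue of that Get-ChildItem / Sort-Object step (in Python rather than PowerShell, with a hypothetical T:\ folder layout), the collection phase looks something like this:

```python
# Rough analogue of the Get-ChildItem | Sort-Object LastWriteTime step:
# collect the event logs from the mapped drive, oldest first, and tag each
# file with a server name taken from its folder, since the logs themselves
# don't contain it. The T:\ layout here is an assumption.
from pathlib import Path

log_root = Path(r"T:\Logs")
log_files = sorted(
    log_root.rglob("*.Events.*.log.*"),
    key=lambda p: p.stat().st_mtime,         # LastWriteTime equivalent
)

collected = [
    {
        "server": path.parent.name,          # assume one folder per server
        "file": str(path),
        "last_write_time": path.stat().st_mtime,
    }
    for path in log_files
]
print(len(collected), "log files queued for processing")
```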

I’m interested in seeing where this goes, especially because my first choice for ETL would be SSIS with Biml.

Comments closed

Power Query For The Rest Of Us

Reza Rad talks about using Power Query in a distinctly non-BI fashion:

As an introduction to this series, I want to take you down the path that led me to use Power Query here. You might be aware that I am teaching Power BI courses, and most of my courses are online and live. This means that courses are not recorded videos; it is me on the other side of the line, with a fully interactive audio and video experience with students through the Go2Meeting application. Students connect to me from other places in the world, so I need an event date/time scheduler with which I can announce the date and time of the event in different time zones.

Fortunately, there is a very good website that helps to find a date/time in different time zones. On this website, I can set my input parameters: the local date/time of my event (in my city), the name of the event, and the duration.

In Part 2, Reza shows grouping and concatenation:

Now that I have values in multiple columns, I can concatenate them all into one string with the Table.ToList function, which converts a table to a list. This function can concatenate all columns of a table into one column (because a list is a single-columned data structure).

The actual concatenation happens through a Combiner function, Combiner.CombineTextByDelimiter(“, “), which concatenates values with a delimiter – in this case, a comma. So here is the expression for my new custom column:
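
(The M expression itself is in Reza’s post.)  What that step boils down to is joining each row’s column values into one delimited string; here’s a rough pandas analogue with assumed column names:

```python
# What the Table.ToList / Combiner.CombineTextByDelimiter(", ") step does,
# sketched in pandas: collapse each row's columns into one delimited string.
# The column names and values here are assumptions.
import pandas as pd

times = pd.DataFrame({
    "City": ["Auckland", "Seattle", "London"],
    "Date": ["2016-05-12", "2016-05-11", "2016-05-11"],
    "Time": ["7:00 AM", "12:00 PM", "8:00 PM"],
})

# Combine all columns of each row into a single comma-delimited value.
times["Combined"] = times.astype(str).agg(", ".join, axis=1)
print(times["Combined"].tolist())
```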

Part 3 is forthcoming and should wrap up this series.

Comments closed