Press "Enter" to skip to content

Author: Kevin Feasel

Installing Power BI Report Server

Adam Saxton has a video showing how to install and configure Power BI Report Server:

In this video, I look at how to install and configure the May 2017 Preview of Power BI Report Server. Power BI Report Server has a new standalone install experience and this product allows for Power BI reports to be rendered in the web portal along with paginated reports.

This will get you started with the new version.

I was really excited about this preview until I realized that, for now, it only works for Analysis Services data sources.


Azure AD On Azure SQL DB

Arun Sirpal shows how to set up Azure SQL Database to use Azure Active Directory accounts:

I think it is important to highlight a couple of points, more specifically around the requirement of ADALSQL.DLL and the proper setup of AD, which I will highlight below along with some reference links. Please do this, as it lays the foundation for you.

ADALSQL.DLL

You need ADALSQL.DLL, which is part of the latest SQL Server Management Studio (SSMS), to test access. It stands for Active Directory Authentication Library for SQL Server.

This goes through some of the issues Arun had setting everything up and provides workarounds and explanations.
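
As a quick point of reference (this is not from Arun's post), once the Azure Active Directory admin has been set for the server, creating a contained database user for an Azure AD account in Azure SQL Database looks something like the following; the account name is hypothetical:

    -- Connect to the target database as the Azure AD admin, then create a
    -- contained database user mapped to a (hypothetical) Azure AD account.
    CREATE USER [someuser@contoso.com] FROM EXTERNAL PROVIDER;

    -- Grant the new user whatever access it needs, e.g. read access.
    ALTER ROLE db_datareader ADD MEMBER [someuser@contoso.com];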


Check Your Transactions

John Morehouse talks about a mistake he made:

The other day I had to update some records, in Production.  I’m a firm believer of using explicit transactions and double checking things before committing a transaction.  This helps ensure things go as expected.  This also allows me a way to rollback the changes if they don’t.  It happens.

However, this means that I have to COMMIT said explicit transaction.  And not go to lunch without doing so.

Can you see my mistake?  I bet you can.

Fortunately, it sounds like it wasn’t a critical problem.  If you want to check for open transactions, Jack Vamvas has a couple of methods.
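
As a generic sketch of that pattern (not John's exact script, and the table is hypothetical), an explicit transaction with a sanity check before committing, plus a couple of ways to spot a transaction you forgot to commit, might look like this:

    BEGIN TRANSACTION;

    UPDATE dbo.Orders            -- hypothetical table
    SET    Status = 'Shipped'
    WHERE  OrderID = 42;

    -- Double-check how many rows were affected before committing.
    SELECT @@ROWCOUNT AS RowsAffected;

    -- If the change looks right:
    COMMIT TRANSACTION;
    -- Otherwise:
    -- ROLLBACK TRANSACTION;

    -- And before heading to lunch, make sure nothing is left open:
    SELECT @@TRANCOUNT AS OpenTransactionCount;
    DBCC OPENTRAN;   -- reports the oldest active transaction in the database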


Building Temporal Tables

Bert Wagner shows how to create a temporal table in SQL Server 2016:

I want to make my life easier by using temporal tables! Take my money and show me how!

I’m flattered by your offer, but since we are good friends I’ll let you in on these secrets for free.

First let’s create a temporal table. I’m thinking about starting up a car rental business, so let’s model it after that:
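
Bert's actual DDL is in the post; as a rough sketch of what a system-versioned temporal table for a car rental scenario might look like (the table and column names here are made up):

    CREATE TABLE dbo.CarInventory
    (
        CarId     INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        Make      NVARCHAR(50) NOT NULL,
        Model     NVARCHAR(50) NOT NULL,
        DailyRate DECIMAL(10,2) NOT NULL,
        -- System-versioning columns; SQL Server maintains these automatically.
        ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CarInventoryHistory));

    -- Once rows have been updated, you can query the table as of a point in time:
    SELECT CarId, Make, Model, DailyRate
    FROM   dbo.CarInventory
    FOR SYSTEM_TIME AS OF '2017-06-01T00:00:00';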

There are some places where temporal tables can get better (particularly around feeling more like a type 2 slowly changing dimension), but I’m pretty happy with this feature.


Finding Partition Boundaries

Kenneth Fisher shows how to find the min and max values for a partition:

So what does it do? Per BOL:

Returns the partition number into which a set of partitioning column values would be mapped for any specified partition function in SQL Server 2016.

So it basically tells us which partition any given row is in. This can be particularly handy at times, such as when you want to know the min and max values of a column per partition.
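
For instance, a query along these lines (the table, column, and partition function names are hypothetical) uses $PARTITION to group rows by partition and report the boundary values:

    -- dbo.Orders is assumed to be partitioned on OrderDate
    -- by the partition function pfOrderDate.
    SELECT
        $PARTITION.pfOrderDate(OrderDate) AS PartitionNumber,
        MIN(OrderDate)                    AS MinOrderDate,
        MAX(OrderDate)                    AS MaxOrderDate,
        COUNT(*)                          AS RowsInPartition
    FROM dbo.Orders
    GROUP BY $PARTITION.pfOrderDate(OrderDate)
    ORDER BY PartitionNumber;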

Read on for a couple of scripts which use $PARTITION.


The Power Of The Stacked Ensemble

Funda Gunes describes the value of ensemble models in data science competitions:

A simple way to enhance diversity is to train models by using different machine learning algorithms. For example, adding a factorization model to a set of tree-based models (such as random forest and gradient boosting) provides a nice diversity because a factorization model is trained very differently than decision tree models are. For the same machine learning algorithm, you can enhance diversity by using different hyperparameter settings and subsets of variables. If you have many features, one efficient method is to choose subsets of the variables by simple random sampling. Choosing subsets of variables could be done in a more principled fashion, based on some computed measure of importance, which introduces the large and difficult problem of feature selection.

In addition to using various machine learning training algorithms and hyperparameter settings, the KDD Cup solution shown above uses seven different feature sets (F1-F7) to further enhance the diversity.  Another simple way to create diversity is to generate various versions of the training data. This can be done by bagging and cross validation.

I think there’s a pretty strong contrast between competitions and general practice: in a competition, you’re doing everything you can to eke out a higher prediction score, but in practice, you’re aiming to balance a “good enough” prediction against hardware/time requirements and code complexity, so the model selection process can be quite different.


Lessons From A Data Analysis Exercise

Bill Schmarzo has an interesting post summarizing the results of an MBA class exercise involving data analysis:

Lesson #2:  Quick and dirty visualizations are critical in understanding what is happening in the data and establishing hypotheses to be tested. For example, the data visualization in Figure 1 quickly highlighted the importance of offensive rebounds and three-point shooting percentage in the Warriors’ overtime losses.

Read the whole thing.


Understanding Bootstrap Aggregating (Bagging)

Gabriel Vasconcelos explains the bagging technique:

The name bagging comes from bootstrap aggregating. It is a machine learning technique proposed by Breiman (1996) to increase stability in potentially unstable estimators. For example, suppose you want to run a regression with a few variables in two steps. First, you run the regression with all the variables in your data and select the significant ones. Second, you run a new regression using only the selected variables and compute the predictions.

This procedure is not wrong if your problem is forecasting. However, this two step estimation may result in highly unstable models. If many variables are important but individually their importance is small, you will probably leave some of them out, and small perturbations on the data may drastically change the results.

Read on to see how bootstrap aggregation works and how it solves this solution instability problem.


Presto On HDInsight

Ashish Thapliyal shows how to install Presto on an HDInsight cluster:

What is Presto?

Presto is a distributed SQL query engine optimized for ad-hoc analysis at interactive speed. It supports standard ANSI SQL, including complex queries, aggregations, joins, and window functions. Presto is becoming a popular interactive SQL query engine that has grabbed attention and mind-share in Big Data communities.

What are the key advantages of Presto?

1- It’s very fast – Presto was designed and written from the ground up for interactive analytics and approaches the speed of commercial data warehouses.

2- Presto can query data where it lives – Presto supports many data sources via the number of connectors that the community has built. You can query HDFS, Hive, Azure Storage, or data stored in SQL Server, MySQL, CosmosDB, Cassandra, etc.

You can install Presto in one simple step with the HDInsight Script Action feature.
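
To give a flavor of the “query data where it lives” point, a federated Presto query joining tables from two different catalogs might look something like this (the catalog, schema, and table names are illustrative):

    -- Join a Hive-backed orders table to a customer table living in MySQL.
    SELECT o.order_id, o.order_total, c.customer_name
    FROM hive.sales.orders AS o
    JOIN mysql.crm.customers AS c
      ON o.customer_id = c.customer_id
    WHERE o.order_date >= DATE '2017-01-01';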

Read on for instructions, including how to connect this to other Azure products like CosmosDB and Azure SQL Database.


Star-Schema Benchmark With Hive + Druid

Carter Shanklin and Slim Bouguerra run the Star-Schema Benchmark battery of queries against a Hadoop OLAP system built on Hive and Druid:

How did we arrive at the query used to build the OLAP index? There is a systematic procedure:

  1. The union of all dimensions used by the SSB queries is included in the index.
  2. The union of all measures is included in the index. Notice that we pre-compute some products in the index.
  3. Druid requires a timestamp, so the date of the transaction is used as the timestamp.

You can see that building the index requires knowledge of the query patterns. Either an expert in the query patterns architects the index, or a tool is needed to analyze queries or to dynamically build indexes on the fly. A lot of time can be spent in this architecture phase, gathering requirements, designing measures and so on, because changing your mind after the fact can be very difficult.
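
A rough sketch of that kind of index-building statement, assuming Hive's Druid storage handler and illustrative table and column names (the post's actual query covers the full union of SSB dimensions and measures and handles date parsing), might look like this:

    CREATE TABLE ssb_druid
    STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
    TBLPROPERTIES ("druid.segment.granularity" = "MONTH")
    AS
    SELECT
        CAST(d.d_date AS TIMESTAMP)           AS `__time`,    -- Druid requires a timestamp
        c.c_city, c.c_nation, c.c_region,                     -- dimensions used by the SSB queries
        s.s_city, s.s_nation, s.s_region,
        p.p_brand1, p.p_category,
        lo.lo_revenue,                                        -- measures
        lo.lo_extendedprice * lo.lo_discount  AS net_revenue  -- a pre-computed product
    FROM lineorder lo
    JOIN dim_date d  ON lo.lo_orderdate = d.d_datekey
    JOIN customer c  ON lo.lo_custkey   = c.c_custkey
    JOIN supplier s  ON lo.lo_suppkey   = s.s_suppkey
    JOIN part p      ON lo.lo_partkey   = p.p_partkey;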

One thing I don’t like so much is that they removed the ORDER BY clauses from some of the queries, as that change makes it more difficult to use these results for “it’s totally not a comparison so don’t sue us Oracle” purposes.
