Press "Enter" to skip to content

Day: October 11, 2017

Predicting Advertising Budgets With Kafka Streams

Boyang Chen explains how Pinterest uses Kafka Streams to reduce advertising overdelivery:

Overdelivery occurs when free ads are shown for out-of-budget advertisers. This reduces opportunities for advertisers with available budget to have their products and services discovered by potential customers.

Overdelivery is a difficult problem to solve for two reasons:

  1. Real-time spend data: Information about ad impressions needs to be fed back into the system within seconds in order to shut down out-of-budget campaigns.

  2. Predictive spend: Fast, historical spend data isn’t enough. The system needs to be able to predict spend that might occur in the future and slow down campaigns close to reaching their budget. That’s because an inserted ad could remain available to be acted on by a user. This makes the spend information difficult to accurately measure in a short timeframe. Such a natural delay is inevitable, and the only thing we can be sure of is the ad insertion event.

This is a very interesting architectural overview.


R And Python: Two Growing Languages

David Smith notes that R is growing just as quickly as Python:

Python has been getting some attention recently for its impressive growth in usage. Since both R and Python are used for data science, I sometimes get asked if R is falling by the wayside, or if R developers should switch course and learn Python. My answer to both questions is no.

First, while Python is an excellent general-purpose data science tool, for applications where comparative inference and robust predictions are the main goal, R will continue to be the prime repository of validated statistical functions and cutting-edge research for a long time to come. Secondly, R and Python are both top-10 programming languages, and while Python has a larger userbase, R and Python are both growing rapidly — and at similar rates.

I had a discussion about this last night.  I like the language diversity:  R is more statistician-oriented, whereas Python is more developer-oriented.  They both can solve the same set of problems, but there are certainly cases where one beats the other.  I think Python will end up being the more popular language for data science because of the number of application developers moving into the space, but for the data analysts and academicians moving to this field, R will likely remain the more interesting language.


Polybase And HDInsight

I have a post up on trying to integrate Polybase with HDInsight:

But now we run into a problem:  there are certain ports which need to be open for Polybase to work.  This includes port 50010 on each of the data nodes against which we want to run MapReduce jobs.  This goes back to the issue we see with spinning up data nodes in Docker:  ports are not available.  If you’ve put your HDInsight cluster into an Azure VNet and monkey around with ports, you might be able to open all of the ports necessary to get this working, but that’s a lot more than I’d want to mess with, as somebody who hasn’t taken the time to learn much about cloud networking.

As I mention in the post, I’d much rather build my own Hadoop cluster; I don’t think you save much maintenance time in the long run going with HDInsight.
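
As a point of reference, the Polybase setup in question is driven by T-SQL along these lines.  This is a minimal sketch rather than the code from the post; the name node address, resource manager address, and table layout are all placeholders:

-- The name node and resource manager addresses below are placeholders.
CREATE EXTERNAL DATA SOURCE HadoopCluster WITH
(
    TYPE = HADOOP,
    LOCATION = 'hdfs://clusternamenode:8020',
    -- Predicate pushdown via MapReduce also needs the resource manager.
    RESOURCE_MANAGER_LOCATION = 'clusternamenode:8050'
);

CREATE EXTERNAL FILE FORMAT CsvFormat WITH
(
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

-- A hypothetical external table pointing at files in the cluster.
CREATE EXTERNAL TABLE dbo.SampleData
(
    Id INT,
    SomeValue VARCHAR(50)
)
WITH
(
    LOCATION = '/data/sample/',
    DATA_SOURCE = HadoopCluster,
    FILE_FORMAT = CsvFormat
);

It's the MapReduce traffic behind that external table which needs port 50010 open on each data node.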


Installing Zeppelin With Spark2 Support On HDP

Paul Hernandez shows how to install Apache Zeppelin 0.7.3 on Hortonworks Data Platform 2.5 in order to gain Spark2 support:

For a recent client requirement, I needed to propose a solution to add Spark2 as an interpreter to Zeppelin in HDP (Hortonworks Data Platform) 2.5.3.
The first hurdle: HDP 2.5.3 comes with Zeppelin 0.6.0, which does not support Spark2 (itself included only as a technical preview). Upgrading the HDP version was not an option due to the effort involved and platform availability. In the end, I found a solution in the HCC (Hortonworks Community Connection) which involves installing a standalone Zeppelin that does not affect the Ambari-managed Zeppelin delivered with HDP 2.5.3.
I want to share with you how I did it.

Read on to see how Paul did it.  It’s not trivial, but Paul lays out the process step by step.


Working With CosmosDB

Derik Hammer has an introductory article showing how to work with CosmosDB to store and use document-style data:

Querying Cosmos DB is more powerful and versatile. The CreateDocumentQuery method is used to create an IQueryable<T> object, a member of System.Linq, which can output the query results. The ToList() method will output a List<T> object from the System.Collections.Generic namespace.

Derik also shows how to import the data into Power BI and visualize it.  It’s a nice article if you’ve never played with CosmosDB before.
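
If you haven’t seen it before, the query text you can pass to methods like CreateDocumentQuery (or type into the portal’s query explorer) uses Cosmos DB’s SQL-like dialect.  A quick sketch, where c is the conventional alias for the collection and the property names are made up for illustration:

-- A sketch of Cosmos DB's SQL dialect; the property names are hypothetical.
SELECT c.id,
       c.customerName,
       c.orderTotal
FROM c
WHERE c.orderTotal > 100
ORDER BY c.orderTotal DESC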


Power BI Conditional Formatting

Matt Allington shows us how to implement conditional formatting in Power BI:

Conditional formatting is one of the easiest ways to turn tables of boring data into a visual that almost makes the numbers jump out at you on a page.  There is nothing worse than looking at pages and pages of numbers and then trying to find insights from those numbers.  Conditional formatting helps you format your tables (and matrices) of data so that the patterns and outliers in the data are easier to spot at a glance.  Take a look at the before and after images below and see how much easier it is to see the variations in performance.

It turns out to be pretty easy, so check it out.


The Database Dialectic

Rob Farley sees a series of database syntheses, and the Big Data movement is a part of that:

When CLR came in, people said it was a T-SQL killer. I remember a colleague of mine telling me that he didn’t need to learn T-SQL, because CLR meant that he would be able to do it all in .Net. Over time, we’ve learned that CLR is excellent for all kinds of things, but it’s by no means a T-SQL killer. It’s excellent for a number of reasons – CLR stored procedures or functions have been great for things like string splitting and regular expressions – and we’ve learned its place now.

I don’t hear people talking about NoSQL like they once did, and it’s been folded somehow into BigData, but even that seems to have lost a little of its lustre from a year or two ago when it felt like it was ‘all the rage’. And yet we still have data which is “Big”. I don’t mean large, necessarily, just data that satisfies one of the three Vs – volume, velocity, variety.

Rob brings an interesting perspective to the topic, particularly as one of the early Parallel Data Warehouse bloggers.


The Market For IoT Analytics

Kennie Nybo Pontoppidan tells a story for T-SQL Tuesday:

At the time in Rehfeld R&D, we experimented with making Effektor a metadata repository for a Hadoop data warehouse, where instead of generating tables and ETL processes in the different data warehouse layers, the synchronization engine in the product would generate the Hive objects on top of Hadoop tables. We never made more than an overall spec and a prototype, but the experiment gave us some insight into the technologies around Hadoop.

Around that time, Philips released the Hue lightbulbs, and our COO bought us two packs to play with. The idea was to create a physical BI dashboard, where each lightbulb would display a KPI, changing color according to its value and the KPI threshold. I still think that was a brilliant idea, and I would love to see more use of consumer electronics in enterprise BI.

His basic maturity model makes sense; as much as I really want to disagree with it, I can’t.  Good read.


Automatic Identity Value Reseeding

Tracy Boggiano shows how to build an automated identity column reseeding solution:

You have tables that use identity values, with a lot of data inserted into and deleted from them, and they run out of integers to use.  I have over 3,000 databases where this can occur, so we have an alert set up that checks the tables, then checks a table to see if each is set up to be auto-reseeded based on whether our database engineers have indicated it is safe to do so.  If it is, that table is automatically reseeded, either to one or to the maximum negative number for the datatype; otherwise, we are alerted and we check with our database engineers on how to handle that table.  Keep in mind we are reseeding tables that have been deemed OK to reseed automatically.

Click through for the code, which includes reseeding logic, a job to run reseed operations, and a whitelist table for the tables which you want to allow automatic reseeding.
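
The core of any solution like this is a check against sys.identity_columns followed by DBCC CHECKIDENT.  Here is a stripped-down sketch of that idea; it is not Tracy’s code (hers adds the whitelist checks, alerting, and job scheduling), and the table name in the reseed command is a hypothetical example:

-- Find int identity columns nearing the 2,147,483,647 ceiling.
SELECT OBJECT_SCHEMA_NAME(ic.object_id) AS SchemaName,
       OBJECT_NAME(ic.object_id) AS TableName,
       ic.name AS ColumnName,
       CAST(ic.last_value AS BIGINT) AS LastValue
FROM sys.identity_columns AS ic
WHERE ic.system_type_id = 56  -- int columns only
  AND CAST(ic.last_value AS BIGINT) > 2000000000;

-- For a whitelisted table, restart the identity at the datatype's minimum.
DBCC CHECKIDENT ('dbo.HighChurnQueueTable', RESEED, -2147483648);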


NOLOCK Doesn’t Mean No Locks

Bert Wagner points out that SELECT queries with NOLOCK can still cause blocking to occur:

This is where my understanding of NOLOCK was wrong: while NOLOCK won’t lock row-level data, it will take out a schema stability lock.

A schema stability (Sch-S) lock prevents the structure of a table from changing while the query is executing. All SELECT statements, including those in the read uncommitted/NOLOCK isolation level, take out a Sch-S lock. This makes sense because we wouldn’t want to start reading data from a table and then have the column structure change halfway through the data retrieval.

However, this also means there might be some operations that get blocked by a Sch-S lock. For example, any command requesting a schema modification (Sch-M) lock gets blocked in this scenario.

Read on to see which types of commands take schema modification locks, and ways to minimize the pain.
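
You can see the blocking for yourself with two sessions; this is a quick sketch against a hypothetical table:

-- Session 1: even with NOLOCK, this scan holds a schema stability (Sch-S)
-- lock for the duration of the query.
SELECT COUNT(*)
FROM dbo.LargeTable WITH (NOLOCK);

-- Session 2: run this while session 1 is still scanning.  The ALTER requests
-- a schema modification (Sch-M) lock and blocks until the Sch-S lock releases.
ALTER TABLE dbo.LargeTable
    ADD NewColumn INT NULL;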
