
October 20, 2017

Page Ranking With Kafka Streams

Hunter Kelly walks through a page ranking algorithm:

Once you have the adjacency matrix, you perform some straightforward matrix calculations to calculate a vector of Hub scores and a vector of Authority scores as follows:

  • Sum across the columns and normalize, this becomes your Hub vector
  • Multiply the Hub vector element-wise across the adjacency matrix
  • Sum down the rows and normalize, this becomes your Authority vector
  • Multiply the Authority vector element-wise down the adjacency matrix
  • Repeat

An important thing to note is that the algorithm is iterative: you perform the steps above until eventually you reach convergence—that is, the vectors stop changing—and you’re done. For our purposes, we just pick a set number of iterations, execute them, and then accept the results from that point. We’re mostly interested in the top entries, and those tend to stabilize pretty quickly.

This is an architectural-level post, so there’s no code but there is a useful discussion of the algorithm.
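
Since the original post stays at the architectural level, here is a minimal sketch of the hub/authority iteration in Python with NumPy (standard HITS-style updates); the toy adjacency matrix and the fixed iteration count are assumptions for illustration, not taken from the post.

import numpy as np

# Toy adjacency matrix: A[i, j] = 1 when page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

hubs = np.ones(A.shape[0])
auths = np.ones(A.shape[0])

# A fixed number of iterations rather than an explicit convergence test,
# mirroring the approach described above.
for _ in range(20):
    auths = A.T @ hubs              # each page's authority: sum of the hub scores pointing at it
    auths /= np.linalg.norm(auths)  # normalize
    hubs = A @ auths                # each page's hub score: sum of the authorities it points at
    hubs /= np.linalg.norm(hubs)    # normalize

print("hubs:       ", hubs.round(3))
print("authorities:", auths.round(3))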


Stateful Processing In Spark Streaming

Bill Chambers and Jules Damji look at a couple of stateful scenarios within Spark Streaming:

No streaming events are free of duplicate entries. Dropping duplicate entries in record-at-a-time systems is imperative—and often a cumbersome operation for a couple of reasons. First, you’ll have to process small or large batches of records at a time to discard them. Second, some events, because of high network latencies, may arrive out-of-order or late, which may force you to reiterate or repeat the process. How do you account for that?

Structured Streaming, which ensures exactly-once semantics, can drop duplicate messages as they come in based on arbitrary keys. To deduplicate data, Spark will maintain a number of user-specified keys and ensure that duplicates, when encountered, are discarded.

Just as other stateful processing APIs in Structured Streaming are bounded by declaring watermarking for late data semantics, so is dropping duplicates. Without watermarking, the maintained state can grow infinitely over the course of your stream.

In this scenario, you would still want some sort of de-duplication code at the far end of your process if you can never allow duplicates across the lifetime of the application. This sounds like it’s more about preventing bursty duplicates from sensors.
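
A rough sketch of the watermark-plus-dropDuplicates pattern in PySpark follows; the built-in rate test source and the deviceId/eventTime column names are assumptions for illustration, not the article’s own example.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("dedup-sketch").getOrCreate()

# The built-in "rate" source stands in for a real stream (Kafka, files, etc.).
events = (spark.readStream
          .format("rate")
          .load()
          .withColumnRenamed("timestamp", "eventTime")
          .withColumn("deviceId", col("value") % 10))

# Watermarking bounds the deduplication state; without it the state could grow without limit.
deduped = (events
           .withWatermark("eventTime", "10 minutes")
           .dropDuplicates(["deviceId", "eventTime"]))

(deduped.writeStream
        .format("console")
        .outputMode("append")
        .start()
        .awaitTermination())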


Data Lake Zones

Shannon Lowder walks us through a multi-zone approach to storing data in a data lake:

Our first zone is the raw zone.  This zone will serve as the landing point for source files.  Like the extract (or stage) schema in our data warehouse, we want these files to match the source system as closely as possible.  In the data lake, we actually go one step beyond saying we want the schema of our raw files to match the source system: we also want these files to be immutable.

Immutable means once they are written to the raw folder we shouldn’t be able to modify or delete them.  That way, we can always reconstruct different states from these files without having to retrieve them from the source system.
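
As one way to picture that write-once behavior at the application level, here is a small Python sketch; the folder layout and file naming are hypothetical, not Shannon’s convention.

from pathlib import Path
from datetime import date

def land_raw_file(root: str, source: str, entity: str, payload: bytes) -> Path:
    """Write a source extract into the raw zone exactly once."""
    target_dir = Path(root) / "raw" / source / entity / date.today().isoformat()
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / f"{entity}.csv"
    # Mode "xb" raises FileExistsError if the file already exists,
    # so a landed file can't be silently modified or overwritten.
    with open(target, "xb") as f:
        f.write(payload)
    return target

print(land_raw_file("./lake", "crm", "customers", b"id,name\n1,Alice\n"))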

Worth reading the whole thing.


Multi-Parameter Website Scraping With Power Query

Callum Green shows how to build up a URL based off of multiple parameters, scraping data from a page for each permutation of parameters:

The sections highlighted in red are the parameters and sit in between some of the hard-coded URL text:

Code Breakdown:

  • Text = http://www.boxofficemojo.com/monthly/?page=
  • Parameter = [Page]
  • Text = &view=calendargross&yr=
  • Parameter = [Year]
  • Text = &month=
  • Parameter = [Month]
  • Text = &p=.htm

This is a rather clever solution, and if your parameters are functionally dependent (unlike this example, where it was a simple cross join of the three domains), you can still use the solution the same way; you just need to populate your parameter combination table differently.
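
If it helps to see the permutation idea outside of Power Query, here is a small Python sketch that builds one URL per parameter combination; the parameter domains are made-up values, and only the URL pieces come from the post.

from itertools import product

# Assumed parameter domains for illustration.
pages = [1, 2]
years = [2015, 2016, 2017]
months = range(1, 13)

base = ("http://www.boxofficemojo.com/monthly/?page={page}"
        "&view=calendargross&yr={year}&month={month}&p=.htm")

# Cross join of the three domains: one URL per permutation.
urls = [base.format(page=p, year=y, month=m)
        for p, y, m in product(pages, years, months)]

print(len(urls), "URLs generated; first one:", urls[0])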


Comparing Ranking Functions

Doug Kline compares three window functions: RANK, DENSE_RANK, and ROW_NUMBER:

-- so let's say that we've created a contest

-- places in the contest (top place, 2nd place, etc.)
-- will be determined by the test score

-- in other words, we're not so concerned with the raw score
-- but rather, we're interested in the *relative* score
-- and the order in which people appear, based on their score

-- we can use the ROW_NUMBER() function to give a
-- 'ranking' to each record, based on Score

Doug’s post is a video and an extended script so you can follow along.
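
As a rough analogue in Python with pandas (a stand-in for illustration, not Doug’s T-SQL), the three ranking behaviors look like this on a small set of tied scores; the names and scores are made up:

import pandas as pd

scores = pd.DataFrame({"name": list("ABCDE"),
                       "score": [95, 95, 90, 85, 85]})
scores = scores.sort_values("score", ascending=False).reset_index(drop=True)

scores["row_number"] = range(1, len(scores) + 1)                                          # ROW_NUMBER: always unique
scores["rank"] = scores["score"].rank(method="min", ascending=False).astype(int)          # RANK: ties share a value, gaps follow
scores["dense_rank"] = scores["score"].rank(method="dense", ascending=False).astype(int)  # DENSE_RANK: ties share a value, no gaps

print(scores)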


Row Goals On Nested Loops

Joe Obbish has performed a very interesting investigation of how row goals work with nested loop joins and the TOP operator:

This does not happen. The cost remains the same as before: 0.294842 units. This is because the scan is costed according to density instead of by looking at the histogram of the outer table. The following query with a local variable repeated five times also has a cost of 0.294842 optimizer units:

DECLARE @var BIGINT = 1;
SELECT *
FROM (
VALUES (@var), (@var), (@var), (@var), (@var)
) s (ID)
WHERE NOT EXISTS
(
    SELECT 1
    FROM dbo.BIG_HEAP b
    WHERE s.ID = b.ID
)
OPTION (NO_PERFORMANCE_SPOOL);

The problem with using density instead of looking at the data in the outer table is mostly apparent when the outer table contains rows without a match in the inner table.

It’s a great bit of investigative legwork and Joe has a Connect item he’d like you to upvote.


Using Service Broker To Queue Up External Script Calls

Arvind Shyamsundar shows how to use Service Broker to run external R or Python scripts based on new data coming into a transactional system:

Here, we will show you how you can use the asynchronous execution mechanism offered by SQL Server Service Broker to ‘queue’ up data inside SQL Server which can then be asynchronously passed to a Python script, and the results of that Python script then stored back into SQL Server.

This is effectively similar to the external message queue pattern but has some key advantages:

  • The solution is integrated within the data store, leading to fewer moving parts and lower complexity
  • Because the solution is in-database, we don’t need to make copies of the data. We just need to know what data has to be processed (effectively a ‘pointer to the data’ is what we need).

Service Broker also offers options to govern the number of readers of the queue, thereby ensuring predictable throughput without affecting core database operations.

There are several interconnected parts here, and Arvind walks through the entire scenario.
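
For a sense of what the Python side can look like, here is a hedged sketch of the kind of script body sp_execute_external_script runs with @language = N'Python'; the column names and the scoring step are placeholders, not Arvind’s actual code.

import pandas as pd

# Inside SQL Server, sp_execute_external_script exposes the @input_data_1 result set
# as the pandas DataFrame InputDataSet and returns whatever is assigned to OutputDataSet.
# The fallback below lets the sketch run standalone outside SQL Server as well.
try:
    df = InputDataSet
except NameError:
    df = pd.DataFrame({"id": [1, 2], "amount": [100.0, 250.0]})

df["score"] = df["amount"] * 0.1   # placeholder for the real model/scoring logic
OutputDataSet = df[["id", "score"]]

print(OutputDataSet)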


Updating Data In Common Table Expressions

Kenneth Fisher shows that you can directly update a table referenced in a common table expression:

CTEs are cool things. You can essentially create one or more in-line view(s) within your query. One thing that isn’t overly well known is that you can actually update the data within the CTE. No, I don’t mean using the UPDATE statement with a CTE, but actually running the update through the CTE.

This is really powerful when combined with window functions, like only updating the first record given a particular partition.  You can also delete, which makes duplicate detection and deletion fairly straightforward.


Using Regular Expressions In Check Constraints

Denis Gobo shows that SQL Server check constraints support limited regular expression capabilities:

While SQL Server does not support a full implementation of regular expressions, you can do what the person asked for without a problem in T-SQL. Here is what the regular expression looks like:

[DMOPT][0-9][0-9]

A constraint like that will allow one of the following alphabetic characters (D, M, O, P, or T) followed by 2 numeric characters. Enough talking, let’s look at some code; first, create this table

Read on to see how this constraint works and for implementation code.
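
For comparison, here is roughly what the same pattern looks like as an anchored regular expression in Python; the sample values are made up:

import re

# One of D, M, O, P, or T followed by exactly two digits.
pattern = re.compile(r"[DMOPT][0-9][0-9]")

for candidate in ["D12", "M07", "Z12", "D1", "DD12"]:
    result = "passes" if pattern.fullmatch(candidate) else "fails"
    print(candidate, result)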
