Press "Enter" to skip to content

Day: April 13, 2018

Spark Architecture: The Spark Streaming Receiver

The key component of Spark streaming application is called Receiver. It is responsible for opening new connections with the sources, listening events from them and aggregating incoming data within the memory. If receiver’s worker node is running out of memory, it starts using disk storage for persistence operations. But this negatively impacts the overall application’s performance.

All incoming data is first aggregated within the receiver into chunks called blocks. After a preconfigured interval of time called the batch interval, Spark performs a logical aggregation of these blocks into another entity called a batch. A batch holds links to all of the blocks formed by the receivers and uses this information to generate an RDD, the main Spark entity the engine uses to operate on the data. Normally an RDD consists of a number of partitions, where each partition references a block generated by a receiver at the start stage. A streaming application can have many receivers located on different physical nodes, so the actual data is distributed across the cluster from the start.

The batch interval is global for the whole application and is defined when the Streaming Context is created. The block generation interval is a per-receiver property which can be set through the spark.streaming.blockInterval configuration property. By default, blocks are generated every 200ms, but you can tune this property according to the nature of your data.

Read the whole thing, which includes some tips on design.


Hadoop 3.1 Released

Wangda Tan and Vinod Kumar Vavilapalli have a post on Hadoop 3.1.0:

This release is *not* yet ready for production use. Critical issues are being ironed out via testing and downstream adoption. Production users should wait for a 3.1.1/3.1.2 release.

The Hadoop community fixed 768 JIRAs (https://s.apache.org/apache-hadoop-3.1.0-all-tickets) in total as part of the 3.1.0 release. Of these fixes:
– 141 in Hadoop Common
– 266 in HDFS
– 329 in YARN
– 32 in MapReduce

Apache Hadoop 3.1.0 contains a number of significant features and enhancements.

YARN supporting GPUs and FPGAs is very interesting.


Deleting Lots Of Data

Kenneth Fisher wants to delete a lot of rows:

I recently had the task of deleting a bit over a billion rows from a table. Now I could have done just this:

DELETE FROM tablename WHERE createdate >= '1/1/2017'

But I have a few problems here. The table has no index on createdate, potentially causing problems with tempdb (the sort on createdate), although in this case tempdb is pretty large because of some large batch work done at various times. I’m also going to be deleting more than a billion rows out of roughly six billion, which is probably going to fill up the database’s log (fortunately the database isn’t in use yet) and end up rolling back my delete anyway. Even if I don’t fill up the log, I’m still going to bloat it pretty badly (autogrowth). And last, and anything but least, this is on a production server. Even if this database were on its own drive (meaning growth of the log can’t cause a problem for any other databases), that tempdb issue (let alone other resource usage) is still going to be a problem.

Read on to see how to delete in batches.  My pattern is to have an explicit transaction within the WHILE loop, opening and closing for each deletion operation.  That has worked pretty well in the past when deleting large numbers of rows from a table.  It might also make sense to put a temporary filtered index on the table, dropping it afterward.
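Here is a minimal sketch of that pattern, using the table and column names from the quoted snippet; the batch size and index name are illustrative, and the temporary filtered index is the optional step mentioned above.

-- Optional: a temporary filtered index to support the purge predicate.
CREATE NONCLUSTERED INDEX IX_tablename_createdate_purge
    ON dbo.tablename (createdate)
    WHERE createdate >= '20170101';

DECLARE @BatchSize INT = 100000;
DECLARE @Rows INT = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Delete one modest batch per transaction so each commit is short
    -- and the log can clear between batches.
    DELETE TOP (@BatchSize)
    FROM dbo.tablename
    WHERE createdate >= '20170101';

    SET @Rows = @@ROWCOUNT;

    COMMIT TRANSACTION;
END;

-- Drop the helper index once the purge is finished.
DROP INDEX IX_tablename_createdate_purge ON dbo.tablename;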


The Shuffling Operator And Azure SQL DW

Arun Sirpal is ready to deal:

For the purposes of this post the T-SQL shown is elementary (don’t be surprised by that); the point is really about SHUFFLE. So, I select the estimated plan for the following code.

SELECT SOD.[SalesOrderID],SOD.[ProductID], SOH.[TotalDue]
FROM [SalesLT].[SalesOrderDetail] SOD
JOIN [SalesLT].[SalesOrderHeader] SOH ON
SOH.[SalesOrderID] = SOD.[SalesOrderID]
WHERE SOH.[TotalDue] > 1000

Shuffle me once, why not shuffle me twice. If you REALLY want to see the EXPLAIN command output, then it looks like this snippet below.

The DSQL operation clearly states SHUFFLE_MOVE. Why am I getting this? What does it mean?

Shuffling data isn’t the worst thing in the world, but it is a fairly expensive operation all things considered.  Ideally, your warehouse architecture limits the number of shuffle operations, but considering that you can only hash on one key, sometimes it’s inevitable.
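One common way to limit shuffling is to distribute both sides of a frequent join on the join key so the join becomes distribution-compatible. A rough sketch against the tables from the example, using CTAS to build hash-distributed copies (the _Hash names are mine, not from the post):

-- Rebuild both tables hash-distributed on the join key.
CREATE TABLE [SalesLT].[SalesOrderDetail_Hash]
WITH (DISTRIBUTION = HASH([SalesOrderID]), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM [SalesLT].[SalesOrderDetail];

CREATE TABLE [SalesLT].[SalesOrderHeader_Hash]
WITH (DISTRIBUTION = HASH([SalesOrderID]), CLUSTERED COLUMNSTORE INDEX)
AS SELECT * FROM [SalesLT].[SalesOrderHeader];

-- The same query against the hash-distributed copies can join locally
-- within each distribution, with no SHUFFLE_MOVE step in the plan.
SELECT SOD.[SalesOrderID], SOD.[ProductID], SOH.[TotalDue]
FROM [SalesLT].[SalesOrderDetail_Hash] SOD
JOIN [SalesLT].[SalesOrderHeader_Hash] SOH
    ON SOH.[SalesOrderID] = SOD.[SalesOrderID]
WHERE SOH.[TotalDue] > 1000;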


The Most Powerful Force In The Universe, In DAX

Matt Allington talks about compound interest and shows how to calculate it in DAX:

Now back to the point of this article. Compounding growth is very easy to do in Excel because you can write individual cell formulas (to do whatever you want), and each new formula can reference the answer from the previous formula as the starting point for the new one. There is no such ability in the DAX language. To solve such problems in DAX, you have to change the way you think and start to think about how to write a single formula that will work over an entire TABLE of data (or columns, or multiple tables) – no cell-by-cell individual formulas are possible.

Below I will step you through the process of finding a solution to this problem.  As I often mention in my articles, it is the process that I believe is most important.  I seldom know how to answer a complex DAX problem when I start out (that’s the very definition of “complex”), and instead I follow a process to help me solve the problem.  Take a careful read below.  If you apply the same process when you write your formulas, you will be well on your way to becoming a DAX Superhero.

It’s an interesting problem when your growth rate is not always the same, but Matt has you covered.
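For reference, the quantity being built is just a running product of growth factors. With a rate that can change from period to period, the balance after n periods is

\[
\text{Balance}_n = \text{Balance}_0 \prod_{i=1}^{n} (1 + r_i),
\]

which collapses to the familiar Balance_0 (1 + r)^n when the rate is constant. Excel computes that product one row at a time; the DAX exercise is expressing the whole product as a single table-level formula.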


Loading CSVs Into Azure Using dbatools

Stuart Moore has a quick Powershell script which loads CSV data into Azure SQL Database using dbatools:

To get some of this data usable for reporting, we’re importing it into Azure SQL Database so people can start working their way through it and we can fix up errors before we push it through into Azure Data Lake for mining. Being a fan of dbatools, I made it my first port of call for automating something like this.

Just to make life interesting, I want to add a time of creation field to the data to make tracking trends easier. As this information doesn’t actually exist in the CSV columns, I’m going to use LastWriteTime as a proxy for the creationtime.

Click through for the script.


Spatial Workaround In Azure SQL Data Warehouse

Rolf Tesmer has you covered if you want to perform spatial queries against data in Azure SQL Data Warehouse:

Recently we had a requirement to perform SQL Spatial functions on data that was stored in Azure SQL DW.  Seems simple enough as spatial has been in SQL for many years, but unfortunately, SQL Spatial functions are not natively supported in Azure SQL DW (yet)!

If interested – this is the link to the Azure Feedback feature request to make this available in Azure SQL DW – https://feedback.azure.com/forums/307516-sql-data-warehouse/suggestions/10508991-support-for-spatial-data-type

AND SO — to use spatial data in Azure SQL DW we need to look at alternative methods.  Luckily a recent new feature in Azure SQL DB  in the form of Elastic Query to Azure SQL DW now gives us the ability to perform these SQL Spatial functions on data within Azure SQL DW via a very simple method!

Check out that Azure Feedback item if you’d like to see native spatial support rather than using elastic query.  In the meantime, click through to see Rolf’s workaround.
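In outline, the elastic query setup looks roughly like the following when run from the Azure SQL Database side. This is a sketch, not Rolf’s exact script: the server, database, credential, table, and column names are placeholders, and the external table definition has to mirror the table in the DW.

-- One-time setup in the Azure SQL Database that will run the spatial queries.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL DwCredential
WITH IDENTITY = '<dw login>', SECRET = '<dw password>';

CREATE EXTERNAL DATA SOURCE DwSource
WITH (
    TYPE = RDBMS,
    LOCATION = '<yourserver>.database.windows.net',
    DATABASE_NAME = '<your DW database>',
    CREDENTIAL = DwCredential
);

-- External table whose schema mirrors the DW table (spatial data kept as lat/long).
CREATE EXTERNAL TABLE dbo.StoreLocations
(
    StoreId   INT,
    Latitude  FLOAT,
    Longitude FLOAT
)
WITH (DATA_SOURCE = DwSource);

-- Spatial functions now run in Azure SQL DB over data that lives in the DW.
SELECT StoreId,
       geography::Point(Latitude, Longitude, 4326).STDistance(geography::Point(51.5074, -0.1278, 4326)) AS MetersFromLondon
FROM dbo.StoreLocations;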


Expanded Tables In DAX

Alberto Ferrari has a great explanation of how the concept of expanded tables works in DAX:

If you are coming from an SQL background, or if you are used to relational databases, you probably think that RELATED follows relationships. Thus, to compute the Month column, you would think that the engine followed a relationship between Sales and Date and obtained the value of the month by performing a lookup on the Date table.

DAX is different. Date[Month] belongs to the expanded version of Sales. There is a value for RELATED(Date[Month]) because Sales was expanded to include Date using a relationship.
RELATED requires a row context to be active. If you remove the row context of the calculated column, then RELATED no longer works.
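For contrast, the lookup a SQL developer would instinctively reach for looks something like this (hypothetical Sales and Date tables, purely for illustration):

-- The explicit join the "SQL mindset" expects RELATED to be doing:
SELECT s.SalesKey,
       s.OrderDateKey,
       d.[Month]
FROM dbo.Sales AS s
JOIN dbo.[Date] AS d
    ON d.DateKey = s.OrderDateKey;

In DAX, that Month value instead comes from the expanded Sales table, which already includes the Date columns by virtue of the relationship.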

This post cleared up a couple of ideas in my head, so check it out.
