Prithviraj Bose explains accumulators in Spark:

However, the logs can be corrupted. For example, the second line is a blank line, the fourth line reports some network issues and finally the last line shows a sales value of zero (which cannot happen!).

We can use accumulators to analyse the transaction log to find out the number of blank logs (blank lines), the number of times the network failed, any product that is missing a category, or even the number of times zero sales were recorded. The full sample log can be found here.
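As a sketch of the idea (the log lines and their comma-separated format here are made up, and the counting is simulated in plain Python rather than on a Spark cluster, where each counter would be an accumulator such as `sc.longAccumulator()`):

```python
# Hypothetical transaction log, format "product,category,sales".
log_lines = [
    "laptop,electronics,899.00",
    "",                              # blank line
    "phone,electronics,499.00",
    "ERROR: network timeout",        # network issue
    "desk,,150.00",                  # product with no category
    "chair,furniture,0.00",          # zero sales (which cannot happen!)
]

# In Spark these would be accumulators updated on the executors;
# plain integers stand in for them in this single-process sketch.
blank_lines = 0
network_errors = 0
missing_category = 0
zero_sales = 0

for line in log_lines:
    if not line.strip():
        blank_lines += 1
    elif line.startswith("ERROR"):
        network_errors += 1
    else:
        product, category, sales = line.split(",")
        if not category:
            missing_category += 1
        if float(sales) == 0.0:
            zero_sales += 1

print(blank_lines, network_errors, missing_category, zero_sales)
```

Each worker would bump its local copy of the counters while processing its partition, and Spark merges the partial counts on the driver, which is exactly why the operation must be commutative and associative.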

Accumulators are applicable to any operation that is:

1. Commutative -> f(x, y) = f(y, x), and

2. Associative -> f(f(x, y), z) = f(x, f(y, z))

For example, sum and max satisfy the above conditions, whereas average does not.
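A quick plain-Python check (not Spark code) makes the distinction concrete: partial sums and maxes can be merged in any order, but merging partial averages gives a different answer depending on grouping:

```python
x, y, z = 3, 7, 5

# sum and max are commutative and associative,
# so the order in which partitions are merged is irrelevant.
assert x + y == y + x                           # commutative
assert (x + y) + z == x + (y + z)               # associative
assert max(max(x, y), z) == max(x, max(y, z))

def avg(a, b):
    """Naive pairwise average of two partial results."""
    return (a + b) / 2

# average is not associative: the grouping changes the result
# (avg(avg(3, 7), 5) = 5.0, but avg(3, avg(7, 5)) = 4.5),
# so it cannot be computed with a bare accumulator.
assert avg(avg(x, y), z) != avg(x, avg(y, z))
```

This is why averaging in Spark is typically carried as a (sum, count) pair, both of which merge associatively, with the division done once at the end.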

Accumulators are an important way of measuring just how messy your semi-structured data is.

Kevin Feasel

2016-05-12

Hadoop, Spark