Press "Enter" to skip to content

Accumulators in Spark

The Hadoop in Real World team explains what accumulators are in Spark:

Accumulators are like global variables in a Spark application. In the real world, accumulators are used as counters to keep track of something at an application level. Accumulators serve a very similar purpose to counters in MapReduce.
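As a quick illustration, here is a minimal sketch (in Scala, against the RDD API) of using a LongAccumulator as an application-level counter; the accumulator name, the sample data, and the bad-record scenario are invented for the example:

```scala
import org.apache.spark.sql.SparkSession

object AccumulatorExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("AccumulatorExample")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // A LongAccumulator acts as an application-level counter,
    // much like a counter in MapReduce.
    val badRecords = sc.longAccumulator("badRecords")

    val lines = sc.parallelize(Seq("1", "2", "oops", "4", "???"))

    // Count unparseable records while converting the rest.
    val numbers = lines.flatMap { line =>
      try {
        Some(line.toInt)
      } catch {
        case _: NumberFormatException =>
          badRecords.add(1) // executors write; only the driver reads
          None
      }
    }

    // Accumulator values are only populated after an action runs.
    println(s"Sum: ${numbers.sum()}")
    println(s"Bad records: ${badRecords.value}")

    spark.stop()
  }
}
```

Note that the accumulator is written to on the executors but read (via `.value`) only on the driver, which is what makes it behave like a write-only global counter from the tasks' point of view.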

Read on for examples, as well as a warning against using them in a map() operation.
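That warning is worth a brief sketch of its own: because transformations such as map() are lazy and may be recomputed, accumulator updates made inside them can be applied more than once. This example (with invented names, reusing `sc` from the sketch above) shows how over-counting can occur:

```scala
// Updates made inside a transformation like map() can be applied
// more than once if the lineage is recomputed.
val counter = sc.longAccumulator("rows")
val data = sc.parallelize(1 to 100)

val mapped = data.map { x =>
  counter.add(1) // runs every time the map() is (re)executed
  x * 2
}

mapped.count()          // first action: counter.value is 100
mapped.collect()        // nothing was cached, so map() runs again...
println(counter.value)  // ...and the counter may now read 200
```

Spark only guarantees exactly-once accumulator updates for code inside actions (such as foreach()); updates inside transformations can be repeated on recomputation or task retry, which is why accumulators used this way should be treated as approximate.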