Dmitry Tolpeko notes a change to Amazon Elastic MapReduce:
So 50 executors were initially requested with the required 22528 MB of memory and 4 vcores as expected, but 9 executors were actually created with 112640 MB of memory and 20 cores, which is 5x larger. It should have created 10 executors, but my cluster does not have the resources to run more containers.
Note: The second log row specifies allocated `vCores:5`; this is because of using `DefaultResourceCalculator` in my YARN cluster, which ignores CPU and uses the memory resource only. Do not pay attention to this: the Spark executor will still use 20 cores, as reported in the third log record above.
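
The arithmetic in the quote checks out; here is a minimal sanity check using only the numbers reported in the excerpt above (nothing EMR-specific is assumed):

```python
# Values taken directly from the quoted Spark/YARN logs.
requested_executors = 50
requested_mem_mb = 22528      # memory requested per executor (MB)
requested_vcores = 4          # vcores requested per executor

allocated_mem_mb = 112640     # memory actually allocated per executor (MB)
allocated_vcores = 20         # cores each executor actually uses

factor = allocated_mem_mb // requested_mem_mb         # 112640 / 22528 = 5
assert allocated_vcores == requested_vcores * factor  # cores scale by the same 5x

# With each executor 5x larger, the 50 requested collapse to 10,
# of which the cluster only had room for 9.
print(requested_executors // factor)  # 10
```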
Click through for the reason.