#95090, asked 1 year ago by Judy

Spark executors are still running on a NodeManager machine even though the machine is in the decommissioned state

We have a Spark cluster of 50 nodes, with YARN as the resource manager.

The cluster is based on HDP version 2.6.4 and is managed by the Ambari platform.

On the YARN NodeManager machine yarn-machine34, we set the machine to the decommissioned state, and we did the same for its DataNode.

The decommissioning completed successfully, and we can also see that the machine is listed in the files:

/etc/hadoop/conf/dfs.exclude
/etc/hadoop/conf/yarn.exclude
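
For reference, listing a host in the exclude files by itself is usually not enough: the ResourceManager and NameNode have to be told to re-read them. A minimal sketch of that refresh step, assuming the standard exclude-file paths shown above are the ones configured in yarn-site.xml (`yarn.resourcemanager.nodes.exclude-path`) and hdfs-site.xml (`dfs.hosts.exclude`):

```shell
# Hedged sketch: after adding a host to the exclude files, the daemons
# must re-read them. Editing the files alone has no effect.

# Tell the YARN ResourceManager to re-read yarn.exclude
yarn rmadmin -refreshNodes

# Tell the HDFS NameNode to re-read dfs.exclude
hdfs dfsadmin -refreshNodes
```

Ambari normally performs this refresh when decommissioning through its UI, so this is mainly relevant if the exclude files were edited by hand.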

But in spite of that, we still see Spark executors running on yarn-machine34.


How can this be?

As I understand it, the decommissioned state should prevent any Spark applications/executors from running on the node.

What else can we do about this?
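
One way to narrow this down is to check what state YARN itself reports for the node, since a node that is still RUNNING (rather than DECOMMISSIONED) will keep receiving containers, including Spark executors. A minimal sketch, where the NodeManager port 45454 is an assumption (use whatever address `yarn node -list` actually shows for the host):

```shell
# Hedged sketch: verify the node state as seen by the ResourceManager.

# List all nodes with their states (RUNNING, DECOMMISSIONED, ...)
yarn node -list -all

# Show details for the specific node; the port here is an assumption,
# take the real NodeManager address from the listing above
yarn node -status yarn-machine34:45454
```

If the node shows as DECOMMISSIONED but executors still appear in the Spark UI, it may also be worth checking whether those executors are stale entries from before the decommission rather than newly launched ones.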

Tags: apache-spark, hadoop-yarn, ambari, hdp, datanode

0 Answers
