SAP Knowledge Base Article - Preview

2819085 - Executing a PySpark job on a Hadoop cluster errors when using Python 3

Symptom

  • A Python 3 script, intended to run on the Hadoop cluster, is submitted as if it were a Spark Scala job with the following command:
    /opt/spark/bin/spark-submit --master yarn --deploy-mode cluster --num-executors 23 --executor-cores 5 --executor-memory 35GB
  • The job is submitted but fails with the following error message:

    ERROR org.apache.spark.deploy.yarn.ApplicationMaster (Logging.scala:logError(91)) - User class threw exception: Cannot run /usr/miniconda3/bin/python - execve failed (ENOENT - No such file or directory) 
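    The ENOENT in the error above means YARN tried to launch an interpreter at /usr/miniconda3/bin/python, a path that does not exist on the node running the application master. A common way to address this class of failure, sketched below (this is not the KBA's confirmed resolution; the interpreter path /usr/bin/python3 and the script name job.py are placeholders), is to point Spark-on-YARN at a Python interpreter that is installed on every cluster node via the PYSPARK_PYTHON environment settings:

    ```shell
    # Submit the PySpark job, explicitly telling YARN which Python interpreter
    # to use for the application master and for the executors.
    # /usr/bin/python3 and job.py are placeholders: substitute a path that
    # exists on every cluster node and your actual script.
    /opt/spark/bin/spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 23 \
      --executor-cores 5 \
      --executor-memory 35GB \
      --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=/usr/bin/python3 \
      --conf spark.executorEnv.PYSPARK_PYTHON=/usr/bin/python3 \
      job.py
    ```

    On Spark 2.1 and later, the single property spark.pyspark.python can achieve the same effect for both driver and executors.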


Environment

  • SAP Cloud Platform Big Data Services (BDS) 4.3
  • SAP Cloud Platform Big Data Services (BDS) 5.0


Product
SAP BusinessObjects Predictive Analytics 3.0


Keywords
Altiscale, Portal, HDFS

Hive, Spark, Pig, Tez, Oozie, Alation, MapReduce, SparkContext, YARN, KBA, BC-NEO-BDS, HCP Big Data Service, Problem

About this page

This is a preview of an SAP Knowledge Base Article. The full version is available on the SAP ONE Support Launchpad (login required).
