A big blog for Big Data.

Why Spark on YARN and not standalone?

By | February 6th, 2017 | Spark

At Altiscale, we run Spark on YARN, which is the configuration we recommend to our users for running Spark. While it is possible to run Spark standalone, and some vendors who provide Spark-only solutions advocate for this, there are a number of reasons why running Spark on YARN provides a more robust, efficient, and resilient experience:

Support multiple versions of Spark simultaneously – Spark has been evolving rapidly, with multiple versions becoming available in rapid succession. Unfortunately, there hasn’t been a lot of attention paid to backward compatibility, so analytics or applications built on a prior version may not work with a later one. At Altiscale, we help our customers get around this issue by supporting multiple versions of Spark simultaneously. This allows you to continue getting value from a prior version, while also using a newer version if you need its more recent capabilities. You can access multiple versions of Spark simply by pointing SPARK_HOME and SPARK_CONF_DIR to a different installation under /opt/alti-spark-x.y.z (where x.y.z is the version number). The versions of Spark that Altiscale currently supports are Spark 1.6.1, 1.6.2, 1.6.3, 2.0.2, and 2.1.0.
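As a minimal sketch of how switching versions works in practice, the session below re-points the two environment variables between two of the versions listed above. The /opt/alti-spark-x.y.z paths follow the layout described in this post; the conf/ subdirectory name is an assumption for illustration.

```shell
# Point the current shell at Spark 1.6.3 (paths follow the
# /opt/alti-spark-x.y.z layout; the conf/ subdirectory is assumed).
export SPARK_HOME=/opt/alti-spark-1.6.3
export SPARK_CONF_DIR=$SPARK_HOME/conf
echo "Now using Spark at $SPARK_HOME"

# Switching the same shell to Spark 2.1.0 is just a matter of
# re-exporting the two variables before launching spark-shell
# or spark-submit:
export SPARK_HOME=/opt/alti-spark-2.1.0
export SPARK_CONF_DIR=$SPARK_HOME/conf
echo "Now using Spark at $SPARK_HOME"
```

Because the data stays in HDFS and only the client-side environment changes, no job or cluster reconfiguration is needed to try a different version.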

Easily upgrade Spark – Since you can run multiple versions of Spark on YARN, engineers can easily transition and upgrade to a newer version of Spark: the data remains available on HDFS, so there is no need to move or duplicate it into a separate test environment.

More effective resource utilization – YARN provides capacity queues and lightweight containers. Altiscale provides compute bursting on top of Hadoop along both dimensions: HDFS storage and task-hours (memory). This means you can run different frameworks on top of YARN, each with its own resources, all sharing the same input data (avoiding redundant copies), while the cluster scales transparently.
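To illustrate what capacity queues look like, the fragment below sketches a capacity-scheduler.xml that splits a cluster between a production queue and an ad-hoc queue. The queue names and percentages are hypothetical, not Altiscale's actual configuration.

```xml
<!-- Hypothetical capacity-scheduler.xml fragment: two queues sharing
     the cluster. Queue names and capacities are illustrative only. -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>production,adhoc</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.production.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.adhoc.capacity</name>
  <value>30</value>
</property>
```

A Spark application can then target a specific queue at submission time, for example with spark-submit --master yarn --queue adhoc, and YARN enforces each queue's share of the cluster.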

Better resource management visibility – The YARN ResourceManager (RM) provides a UI and schedulers to carve out resources per application. A Spark standalone cluster cannot provide that level of granularity or visibility, since cluster management UIs and related features are not its focus and may never be its priority.

Greater resilience – Spark on YARN forces users to write data onto HDFS or another service. It provides visibility into, and awareness of, where logs and data should go on HDFS, and avoids writing to local files that may fill up local disks, causing operational complexity or incidents. YARN also provides resource isolation and caps that protect machines from the memory exhaustion that can lead to kernel panics.

Reduced operational complexity – Spark on YARN isolates the Spark libraries from the underlying filesystem layout. When multiple versions of the Spark framework are supported, users can focus on the Spark JARs that are pre-deployed, or deployed on demand at runtime, without having to know the underlying machines’ file and directory layout.


For more information on YARN and how YARN works, see these two blog posts:

“Untangling Apache Hadoop YARN, Part 1”

“Apache Hadoop YARN Background and an Overview”