Spark on Volcano

Spark introduction

Spark is a fast and general-purpose cluster computing system for big data. It provides high-level APIs in Scala, Java, Python, and R, as well as an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools, including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Spark Streaming for stream processing.

Spark on Volcano

Currently, there are two ways to run Spark on Kubernetes with Volcano:

- Spark on Kubernetes native support (spark-submit): maintained by the Apache Spark community and the Volcano community
- Spark Operator support: maintained by the GoogleCloudPlatform community and the Volcano community

Spark on Kubernetes native support (spark-submit)

Spark on Kubernetes supports Volcano as a custom scheduler since Spark v3.3.0 and Volcano v1.5.1. See the Spark on Kubernetes documentation for more detail.
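With native support, Volcano is selected per job through spark-submit configuration. A minimal sketch is shown below; the API server address and container image are placeholders you must substitute, and the Spark distribution must have been built with the `volcano` profile:

```shell
# Submit SparkPi with Volcano as the scheduler (requires Spark >= 3.3.0 built with -Pvolcano).
# <k8s-apiserver> and <your-spark-image> are placeholders for your cluster and image.
./bin/spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  --conf spark.kubernetes.scheduler.name=volcano \
  --conf spark.kubernetes.driver.pod.featureSteps=org.apache.spark.deploy.k8s.features.VolcanoFeatureStep \
  --conf spark.kubernetes.executor.pod.featureSteps=org.apache.spark.deploy.k8s.features.VolcanoFeatureStep \
  local:///opt/spark/examples/jars/spark-examples_2.12-3.3.0.jar
```

The two `featureSteps` settings attach the Volcano feature step to driver and executor pods, so that a PodGroup is created and the pods are annotated for Volcano scheduling.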

Spark Operator support (spark-operator)

Install Spark-Operator through Helm.

$ helm repo add spark-operator https://googlecloudplatform.github.io/spark-on-k8s-operator

$ helm install my-release spark-operator/spark-operator --namespace spark-operator --create-namespace

To ensure that the Spark-Operator is up and running, check with the following command.

$ kubectl get po -n spark-operator

Here’s the official example, spark-pi.yaml.

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v3.0.0"
  imagePullPolicy: Always
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.0.0.jar"
  sparkVersion: "3.0.0"
  batchScheduler: "volcano"   #Note: the batch scheduler name must be specified with `volcano`
  restartPolicy:
    type: Never
  volumes:
    - name: "test-volume"
      hostPath:
        path: "/tmp"
        type: Directory
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    labels:
      version: 3.0.0
    serviceAccount: spark
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 3.0.0
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
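Beyond `batchScheduler`, later spark-operator releases also expose a `batchSchedulerOptions` field for passing scheduler-specific settings. As a hedged sketch, the fragment below places the application into a Volcano queue; the queue name `spark-queue` is hypothetical and the queue must already exist in Volcano:

```yaml
# Fragment of a SparkApplication spec (not a complete manifest).
spec:
  batchScheduler: "volcano"
  batchSchedulerOptions:
    queue: "spark-queue"   # hypothetical queue name; create the Volcano Queue first
```

Assigning jobs to queues lets Volcano apply capacity and fair-share policies across competing Spark applications.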

Deploy the Spark application and check its status.

$ kubectl apply -f spark-pi.yaml
$ kubectl get SparkApplication
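When Volcano is the batch scheduler, a PodGroup is created for the application and the pods carry `volcano` as their scheduler. A quick way to verify this (pod and namespace names follow the example above; the driver pod is named `<app-name>-driver` by the operator):

```shell
# List the Volcano PodGroups created for the application
kubectl get podgroups -n default

# The driver pod should report volcano as its scheduler
kubectl get pod spark-pi-driver -n default -o jsonpath='{.spec.schedulerName}'
```

If the second command prints `default-scheduler` instead, the `batchScheduler: "volcano"` field was not picked up and the application is not being gang-scheduled.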