【02】Spark Cluster Configuration

1. Configuration files

  /home/scipio/spark-1.0.0-bin-hadoop1/conf/spark-env.sh

  /home/scipio/spark-1.0.0-bin-hadoop1/conf/slaves

  /home/scipio/spark-1.0.0-bin-hadoop1/conf/spark-defaults.conf

  spark-env.sh and spark-defaults.conf can be created by copying the bundled spark-env.sh.template and spark-defaults.conf.template files.
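A minimal sketch of that copy step, assuming the install directory shown above:

  cd /home/scipio/spark-1.0.0-bin-hadoop1/conf
  cp spark-env.sh.template spark-env.sh
  cp spark-defaults.conf.template spark-defaults.conf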


2. Configuration parameters

The following goes into spark-env.sh:

# Note: shell assignments must not have spaces around '='

# JVM used by the Spark daemons
export JAVA_HOME=/usr/lib/jvm/java-7-oracle

# Address and port the standalone master binds to
export SPARK_MASTER_IP=127.0.0.1
export SPARK_MASTER_PORT=8088

# Cores and memory offered by each worker, and worker processes per node
export SPARK_WORKER_CORES=8
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=20g

# GC logging options for spark-submit
export SPARK_SUBMIT_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

#export SPARK_SUBMIT_OPTS="-Dspark.executor.memory=10g"

  Configure slaves (one worker hostname or IP per line; see the ssh note after the list):

192.168.2.241
192.168.2.242
192.168.2.243
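The standalone launch scripts log in to each host listed in conf/slaves over ssh, so passwordless ssh from the master to every worker is normally set up first. A minimal sketch, assuming the cluster user is scipio as in the paths above:

  ssh-keygen -t rsa
  ssh-copy-id scipio@192.168.2.241
  ssh-copy-id scipio@192.168.2.242
  ssh-copy-id scipio@192.168.2.243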

  Configure application parameters (spark-defaults.conf); the template entries are all commented out:

# Example:
# spark.master            spark://master:7077
# spark.eventLog.enabled  true
# spark.eventLog.dir      hdfs://namenode:8021/directory
# spark.serializer        org.apache.spark.serializer.KryoSerializer
# spark.local.dir         /data/tmp_spark_dir/
# spark.executor.memory   10g
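To apply a setting, uncomment it and adjust the value. A minimal sketch that matches the master address from the spark-env.sh above and the 10g executor memory shown in the example (assumed values, adapt to your cluster):

spark.master            spark://127.0.0.1:8088
spark.executor.memory   10g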


3. Start the cluster

/home/scipio/spark-1.0.0-bin-hadoop1/bin$ MASTER=spark://host:port ./spark-shell
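Note that spark-shell only attaches an interactive shell to a cluster that is already running; the standalone master and worker daemons themselves are started with the sbin scripts. A sketch, filling in the master address from the spark-env.sh above:

/home/scipio/spark-1.0.0-bin-hadoop1/sbin$ ./start-all.sh
/home/scipio/spark-1.0.0-bin-hadoop1/bin$ MASTER=spark://127.0.0.1:8088 ./spark-shell

Once the daemons are up, the master web UI (port 8080 by default) should list the registered workers.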
