meikkiie posted on 2017-12-18 10:42:15

Hadoop beginner learning series, part 8

  The following settings go in conf/spark-env.sh:
  export SCALA_HOME=/opt/softwares/scala-2.11.8
  export JAVA_HOME=/opt/softwares/jdk1.7.0_80
  export HADOOP_HOME=/opt/app/hadoop-2.6.5
  export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  SPARK_MASTER_IP=master
  SPARK_LOCAL_DIRS=/opt/app/spark-2.1.1
  SPARK_DRIVER_MEMORY=1G
  Edit the slaves file (vi slaves) and add the slave hostnames:
  slave1
  slave2
  Distribute the configured spark-2.1.1 directory to all slaves:
  scp -r /opt/app/spark-2.1.1 hadoop01@slave1:/opt/app/
  scp -r /opt/app/spark-2.1.1 hadoop01@slave2:/opt/app/
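  With more than a couple of slaves, typing one scp per host gets tedious. A minimal sketch that loops over the conf/slaves file instead; `distribute` is a hypothetical helper, and it only echoes the commands here — drop the `echo` to actually copy (this assumes passwordless ssh is set up for hadoop01):

```shell
# Hypothetical helper: print (or, without "echo", run) one scp per host
# listed in a slaves file.
#   distribute <slaves-file> <src-dir>
distribute() {
  local slaves_file="$1" src="$2" host
  while read -r host; do
    # skip blank lines; echo the scp command for each slave host
    [ -n "$host" ] && echo scp -r "$src" "hadoop01@$host:/opt/app/"
  done < "$slaves_file"
}

# Example: distribute /opt/app/spark-2.1.1/conf/slaves /opt/app/spark-2.1.1
```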
3. Start Spark
  sbin/start-all.sh
4. Verify that Spark installed successfully
  Check with jps. On the master you should see the following processes:
  $ jps
  7949 Jps
  7328 SecondaryNameNode
  7805 Master
  7137 NameNode
  7475 ResourceManager
  On each slave you should see the following processes:
  $ jps
  3132 DataNode
  3759 Worker
  3858 Jps
  3231 NodeManager
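  Eyeballing jps output works, but on several nodes it is easy to miss a dead daemon. A small sketch that automates the check; `check_daemons` is a hypothetical helper that greps the jps output for each process name you expect on that node:

```shell
# Hypothetical helper: verify each expected daemon name appears in the
# given jps output; report the first one missing.
#   check_daemons <jps-output> <name>...
check_daemons() {
  local out="$1" p
  shift
  for p in "$@"; do
    echo "$out" | grep -qw "$p" || { echo "missing: $p"; return 1; }
  done
  echo "all expected daemons running"
}

# Example (on the master):
#   check_daemons "$(jps)" Master NameNode SecondaryNameNode ResourceManager
# Example (on a slave):
#   check_daemons "$(jps)" Worker DataNode NodeManager
```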
  Open the Spark web UI at http://master:8080
5. Run the examples
  # Run in local mode with two threads (options go before the example class)
  ./bin/run-example --master "local[2]" SparkPi 10
  # Run on the Spark Standalone cluster
  bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://master:7077 \
  examples/jars/spark-examples_2.11-2.1.1.jar \
  100
  # Run on the YARN cluster in yarn-cluster mode (yarn-client also works)
  ./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  examples/jars/spark-examples*.jar \
  10
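  The SparkPi example prints a line like "Pi is roughly 3.14...". A sketch for pulling the value out of captured job output, e.g. to sanity-check a run from a script (the sample string below stands in for a real run's stdout):

```shell
# Sample output line standing in for a real SparkPi run's stdout.
output='... Pi is roughly 3.141592 ...'

# Extract just the numeric estimate from the output.
pi=$(printf '%s\n' "$output" | grep -o 'Pi is roughly [0-9.]*' | awk '{print $4}')
echo "$pi"
```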