Spark is a parallel computing framework from the Apache Software Foundation, built on top of the Hadoop Distributed File System (HDFS). Unlike MapReduce, Spark is not limited to writing map and reduce functions; it provides a more powerful in-memory computing model that lets users load data into cluster memory through their programs and query it repeatedly with low latency, which makes it well suited to implementing machine learning algorithms. This article covers deploying Apache Spark 1.1.0 and setting up a development environment for it.
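To make the in-memory model concrete, here is a minimal sketch (not part of the deployment below; it assumes a local master and a hypothetical HDFS input path) of caching a dataset once and then running several queries against the in-memory copy:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// A minimal sketch of in-memory computing: cache a dataset once,
// then run multiple queries against the cached copy.
object CacheSketch {
  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local").setAppName("cache-sketch")
    val sc = new SparkContext(conf)

    // Hypothetical input path; cache() keeps the RDD in cluster memory.
    val lines = sc.textFile("hdfs://localhost:9000/some/input.txt").cache()

    // Both queries reuse the cached data instead of re-reading HDFS.
    val errors = lines.filter(_.contains("ERROR")).count()
    val warnings = lines.filter(_.contains("WARN")).count()
    println("errors=" + errors + ", warnings=" + warnings)

    sc.stop()
  }
}
```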
0. Preparation

For learning purposes, this article deploys Spark in a virtual machine running under VMware Workstation. Inside the virtual machine, the software covered in sections 1 through 5 needs to be installed: the JDK, SSH, Hadoop, Scala, and Spark.

For the Spark development environment, this article uses Windows 7 with IntelliJ IDEA as the IDE. On Windows, the JDK, IntelliJ IDEA, and Maven need to be installed (see section 6).

1. Install the JDK

Extract the JDK archive to /usr/lib:

```bash
sudo cp jdk-7u67-linux-x64.gz /usr/lib
cd /usr/lib
sudo tar -xvzf jdk-7u67-linux-x64.gz
sudo gedit /etc/profile
```

Append the following environment variables to the end of /etc/profile:

```bash
export JAVA_HOME=/usr/lib/jdk1.7.0_67
export JRE_HOME=/usr/lib/jdk1.7.0_67/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
```

Save the file and reload /etc/profile:

```bash
source /etc/profile
```

Check that the JDK was installed successfully:

```bash
java -version
```
![](http://images.cnitblog.com/blog/643832/201409/301821504876208.jpg)
2. Install and configure SSH

```bash
sudo apt-get update
sudo apt-get install openssh-server
sudo /etc/init.d/ssh start
```

Generate an RSA key pair and add the public key to the authorized keys:

```bash
ssh-keygen -t rsa -P ""
cd /home/hduser/.ssh
cat id_rsa.pub >> authorized_keys
```

Log in over ssh:

```bash
ssh localhost
```
![](http://images.cnitblog.com/blog/643832/201409/301828490037734.png)
3. Install Hadoop 2.4.0

Hadoop 2.4.0 is installed in pseudo-distributed mode. Extract hadoop-2.4.0 to /usr/local:

```bash
sudo cp hadoop-2.4.0.tar.gz /usr/local/
cd /usr/local
sudo tar -xzvf hadoop-2.4.0.tar.gz
```

Append the following environment variables to the end of /etc/profile:

```bash
export HADOOP_HOME=/usr/local/hadoop-2.4.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
```

Save the file and reload /etc/profile:

```bash
source /etc/profile
```

Set the JDK path in hadoop-env.sh and yarn-env.sh, both located in /usr/local/hadoop-2.4.0/etc/hadoop:

```bash
cd /usr/local/hadoop-2.4.0/etc/hadoop
sudo gedit hadoop-env.sh
sudo gedit yarn-env.sh
```
hadoop-env.sh:
![](http://images.cnitblog.com/blog/643832/201409/301857537224389.jpg)
yarn-env.sh:
![](http://images.cnitblog.com/blog/643832/201409/301857171281077.jpg)
Modify core-site.xml:

```bash
sudo gedit core-site.xml
```

Add the following between <configuration> and </configuration>:

```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>
```
Modify hdfs-site.xml:

```bash
sudo gedit hdfs-site.xml
```

Add the following between <configuration> and </configuration>:

```xml
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/app/hadoop/dfs/nn</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/app/hadoop/dfs/dn</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```
Modify yarn-site.xml:

```bash
sudo gedit yarn-site.xml
```

Add the following between <configuration> and </configuration>:

```xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
```

Copy mapred-site.xml.template to mapred-site.xml:

```bash
sudo cp mapred-site.xml.template mapred-site.xml
sudo gedit mapred-site.xml
```

Add the following between <configuration> and </configuration>:

```xml
<property>
  <name>mapreduce.jobtracker.address</name>
  <value>hdfs://localhost:9001</value>
</property>
```
Before starting hadoop, create the /app directory and give hduser ownership of it, to avoid problems writing logs and data:

```bash
sudo mkdir /app
sudo chown -R hduser:hduser /app
```

Format the HDFS namenode:

```bash
hadoop namenode -format
```

Start hdfs and yarn. When developing with Spark, only hdfs actually needs to be started:

```bash
cd /usr/local/hadoop-2.4.0
sbin/start-dfs.sh
sbin/start-yarn.sh
```

Open http://localhost:50070/ in a browser to check the hdfs status:
![](http://images.cnitblog.com/blog/643832/201410/011343357375021.png)
4. Install Scala

```bash
sudo cp /home/hduser/Download/scala-2.9.3.tgz /usr/local
cd /usr/local
sudo tar -xvzf scala-2.9.3.tgz
```

Append the following environment variables to the end of /etc/profile:

```bash
export SCALA_HOME=/usr/local/scala-2.9.3
export PATH=$SCALA_HOME/bin:$PATH
```

Save the file and reload /etc/profile:

```bash
source /etc/profile
```

Check that Scala was installed successfully:

```bash
scala -version
```
5. Install Spark

```bash
sudo cp spark-1.1.0-bin-hadoop2.4.tgz /usr/local
cd /usr/local
sudo tar -xvzf spark-1.1.0-bin-hadoop2.4.tgz
```

Append the following environment variables to the end of /etc/profile:

```bash
export SPARK_HOME=/usr/local/spark-1.1.0-bin-hadoop2.4
export PATH=$SPARK_HOME/bin:$PATH
```

Save the file and reload /etc/profile:

```bash
source /etc/profile
```

Copy spark-env.sh.template (in the conf directory) to spark-env.sh:

```bash
cd /usr/local/spark-1.1.0-bin-hadoop2.4/conf
sudo cp spark-env.sh.template spark-env.sh
sudo gedit spark-env.sh
```

Add the following to spark-env.sh:

```bash
export SCALA_HOME=/usr/local/scala-2.9.3
export JAVA_HOME=/usr/lib/jdk1.7.0_67
export SPARK_MASTER_IP=localhost
export SPARK_WORKER_MEMORY=1000m
```

Start Spark:

```bash
cd /usr/local/spark-1.1.0-bin-hadoop2.4
sbin/start-all.sh
```

Check that Spark was installed successfully by running an example:

```bash
cd /usr/local/spark-1.1.0-bin-hadoop2.4
bin/run-example SparkPi
```
![](http://images.cnitblog.com/blog/643832/201409/302038094415391.jpg)
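You can also test the installation interactively: bin/spark-shell starts a Scala REPL with a ready-made SparkContext named sc. A quick smoke test, for example:

```scala
// Typed into bin/spark-shell, where sc is already defined:
val data = sc.parallelize(1 to 10000)
data.filter(_ % 2 == 0).count() // should return 5000
```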
6. Set up the Spark development environment

This article recommends IntelliJ IDEA as the IDE for Spark development, although Eclipse works too. Before using IntelliJ IDEA, the Scala plugin needs to be installed. Click Configure:
![](http://images.cnitblog.com/blog/643832/201410/011350458623327.png)
Click Plugins:
![](http://images.cnitblog.com/blog/643832/201410/011351205347552.png)
Click Browse repositories...:
![](http://images.cnitblog.com/blog/643832/201410/011352166122451.png)
Type scala into the search box and install the Scala plugin. Because the plugin is already installed here, the screenshot below does not show an install option:
![](http://images.cnitblog.com/blog/643832/201410/011354528941097.png)
After installation, IntelliJ IDEA asks to be restarted. After the restart, click Create New Project:
![](http://images.cnitblog.com/blog/643832/201410/011357570341819.png)
For Project SDK, select the JDK installation directory; it is advisable to keep the JDK version in the development environment consistent with the JDK on the Spark cluster. Click Maven on the left, check Create from archetype, and select org.scala-tools.archetypes:scala-archetype-simple:
![](http://images.cnitblog.com/blog/643832/201410/011359472847300.png)
Click Next, then fill in the GroupId, ArtifactId, and Version as needed:
![](http://images.cnitblog.com/blog/643832/201410/011402551918640.png)
Click Next; this step reports an error if Maven is not installed locally, so make sure Maven has been installed beforehand:
![](http://images.cnitblog.com/blog/643832/201410/011410153781155.png)
Click Next, enter the project name, and complete the final step of New Project:
![](http://images.cnitblog.com/blog/643832/201410/011411509723625.png)
After you click Finish, Maven generates pom.xml and downloads the dependencies. We need to change the Scala version in pom.xml so that it matches the Scala version Spark 1.1.0 is built against (spark-core_2.10 expects Scala 2.10):

```xml
<properties>
  <scala.version>2.10.4</scala.version>
</properties>
```
Add the following between <dependencies> and </dependencies>:

```xml
<!-- Spark -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.1.0</version>
</dependency>

<!-- HDFS -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.4.0</version>
</dependency>
```
Add the following between <build><plugins> and </plugins></build>. The maven-shade-plugin packages the project and all of its dependencies into a single executable jar; the META-INF excludes strip jar signature files that would otherwise trigger security errors, and the AppendingTransformer merges Akka's reference.conf files:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.2</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <filters>
          <filter>
            <artifact>*:*</artifact>
            <excludes>
              <exclude>META-INF/*.SF</exclude>
              <exclude>META-INF/*.DSA</exclude>
              <exclude>META-INF/*.RSA</exclude>
            </excludes>
          </filter>
        </filters>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <!-- Remember to change this to your own mainClass -->
            <mainClass>mark.lin.App</mainClass>
          </transformer>
          <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
            <resource>reference.conf</resource>
          </transformer>
        </transformers>
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <shadedClassifierName>executable</shadedClassifierName>
      </configuration>
    </execution>
  </executions>
</plugin>
```
With that, the Spark development environment is complete. One more thing: the wordcount example code:

```scala
package mark.lin // remember to change this to your own package

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._

import scala.collection.mutable.ListBuffer

/**
 * Hello world!
 */
object App {
  def main(args: Array[String]) {
    if (args.length != 1) {
      println("Usage: java -jar code.jar dependencies.jar")
      System.exit(0)
    }
    // Collect the jars to ship to the cluster from the command line.
    val jars = ListBuffer[String]()
    args(0).split(",").map(jars += _)

    val conf = new SparkConf()
    conf.setMaster("spark://localhost:7077").setAppName("wordcount")
      .set("spark.executor.memory", "128m").setJars(jars)

    val sc = new SparkContext(conf)

    val file = sc.textFile("hdfs://localhost:9000/hduser/wordcount/input/input.csv")
    val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    println(count)
    count.saveAsTextFile("hdfs://localhost:9000/hduser/wordcount/output/")
    sc.stop()
  }
}
```
7. Compile & Run

Compile the source code with Maven: open the Maven panel at the lower left, select package on the right, and click the green triangle to start the build.
![](http://images.cnitblog.com/blog/643832/201410/011425437226501.png)
In the target directory you can see the jar packages generated by Maven. Among them, helloworld-1.0-SNAPSHOT-executable.jar is the one we need to run on the Spark cluster.

Before running the jar, make sure hadoop and Spark are running:
![](http://images.cnitblog.com/blog/643832/201410/011432121597903.jpg)
Copy the jar to the Ubuntu local file system, then run it with the following command:

```bash
java -jar helloworld-1.0-SNAPSHOT-executable.jar helloworld-1.0-SNAPSHOT-executable.jar
```

Open http://localhost:8080/ in a browser to watch the job run:
![](http://images.cnitblog.com/blog/643832/201410/011435311127152.png)
8. Q&A
Q: Running the jar on the Spark cluster throws the exception "No FileSystem for scheme: hdfs":
![](http://images.cnitblog.com/blog/643832/201410/011436547061557.jpg)
A: This exception is caused by core-default.xml in hadoop-common-2.4.0.jar missing the configuration property for the hdfs scheme. Find hadoop-common-2.4.0.jar in the local Maven repository and open it as an archive (e.g. with rar):
![](http://images.cnitblog.com/blog/643832/201410/011443020811559.png)
Extract core-default.xml and add the following property:

```xml
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
  <description>The FileSystem for hdfs: uris.</description>
</property>
```

Then put the modified core-default.xml back into hadoop-common-2.4.0.jar, replacing the original, and rebuild your jar package.
Q: Running the jar on the Spark cluster throws the exception "Failed on local exception":
![](http://images.cnitblog.com/blog/643832/201410/011445590505491.jpg)
A: Check your code; this is usually caused by a wrong hdfs path.

Q: Running the jar on the Spark cluster repeatedly prints "Connecting to master spark":
![](http://images.cnitblog.com/blog/643832/201410/011447270193399.jpg)
A: Check your code; this is usually caused by a wrong setMaster address.

Q: Running the jar on the Spark cluster repeatedly prints "Initial job has not accepted any resource; check your cluster UI to ensure that workers are registered and have sufficient memory":
![](http://images.cnitblog.com/blog/643832/201410/011448468006873.jpg)
A: Check your code; this is usually caused by unreasonable memory settings. Also check the worker memory setting in conf/spark-env.sh under the Spark installation directory.
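For example, the memory the application requests must fit within what the worker offers: with SPARK_WORKER_MEMORY set to 1000m as above, a spark.executor.memory request larger than that can never be satisfied and the job waits forever. A setting along these lines (the value is illustrative) should be accepted:

```scala
import org.apache.spark.SparkConf

// Keep the executor memory request below SPARK_WORKER_MEMORY (1000m in this setup):
val conf = new SparkConf()
  .setMaster("spark://localhost:7077")
  .setAppName("wordcount")
  .set("spark.executor.memory", "512m")
```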
Q: Maven reports an error: error: org.specs.Specification does not have a constructor
![](http://images.cnitblog.com/blog/643832/201410/021425586595284.png)
A: Delete the files under the test directory and recompile; the test stubs generated by the archetype depend on an old specs library that does not work with the newer Scala version.