Spark in Production (10): 5-Node Distributed Cluster — Fixing the HistoryServer WebUI Failure "File file:/tmp/spark-events does not exist"

段智华, published 2017-04-27 12:42

This is an old problem I have run into before: starting start-history-server.sh fails with the error below.

Caused by: java.io.FileNotFoundException: Log directory specified does not exist: file:/tmp/spark-events Did you configure the correct one through spark.history.fs.logDirectory?

[root@master sbin]# start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /usr/local/spark-2.1.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out
failed to launch: nice -n 0 /usr/local/spark-2.1.0-bin-hadoop2.6/bin/spark-class org.apache.spark.deploy.history.HistoryServer
        at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$startPolling(FsHistoryProvider.scala:197)
        ... 9 more
full log in /usr/local/spark-2.1.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out
[root@master sbin]# cat /usr/local/spark-2.1.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out
Spark Command: /usr/local/jdk1.8.0_121/bin/java -cp /usr/local/spark-2.1.0-bin-hadoop2.6/conf/:/usr/local/spark-2.1.0-bin-hadoop2.6/jars/*:/usr/local/hadoop-2.6.5/etc/hadoop/ -Xmx1g org.apache.spark.deploy.history.HistoryServer
========================================
17/04/27 12:15:27 INFO history.HistoryServer: Started daemon with process name: 31814@master
17/04/27 12:15:27 INFO util.SignalUtils: Registered signal handler for TERM
17/04/27 12:15:27 INFO util.SignalUtils: Registered signal handler for HUP
17/04/27 12:15:27 INFO util.SignalUtils: Registered signal handler for INT
17/04/27 12:15:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/27 12:15:28 INFO spark.SecurityManager: Changing view acls to: root
17/04/27 12:15:28 INFO spark.SecurityManager: Changing modify acls to: root
17/04/27 12:15:28 INFO spark.SecurityManager: Changing view acls groups to: 
17/04/27 12:15:28 INFO spark.SecurityManager: Changing modify acls groups to: 
17/04/27 12:15:28 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
Exception in thread "main" java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:278)
        at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
Caused by: java.io.FileNotFoundException: Log directory specified does not exist: file:/tmp/spark-events Did you configure the correct one through spark.history.fs.logDirectory?
        at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$startPolling(FsHistoryProvider.scala:207)
        at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:153)
        at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:149)
        at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:77)
        ... 6 more
Caused by: java.io.FileNotFoundException: File file:/tmp/spark-events does not exist
        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:537)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:750)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:527)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
        at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$startPolling(FsHistoryProvider.scala:197)
        ... 9 more
Solution:

1. Edit Spark's configuration file spark-defaults.conf, adding:

spark.history.fs.logDirectory    hdfs://Master:9000/historyserverforSpark
spark.eventLog.enabled           false

[root@master conf]# ls
docker.properties.template  fairscheduler.xml.template  log4j.properties.template  metrics.properties.template  slaves  spark-defaults.conf  spark-env.sh
[root@master conf]# vi  spark-defaults.conf 

# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.

# Example:
# spark.master                     spark://master:7077
 spark.eventLog.enabled           false
# spark.eventLog.dir               hdfs://namenode:8021/directory
# spark.serializer                 org.apache.spark.serializer.KryoSerializer
# spark.driver.memory              5g
# spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"


spark.history.fs.logDirectory    hdfs://Master:9000/historyserverforSpark
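Note that with spark.eventLog.enabled set to false, applications will not write event logs, so the History Server UI will stay empty even once it starts. A sketch of a consistent configuration, assuming the same HDFS path should both receive event logs and be read by the History Server, would be:

```properties
# Sketch only: enable event logging and point both settings at the
# same HDFS directory so finished applications appear in the UI.
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://Master:9000/historyserverforSpark
spark.history.fs.logDirectory    hdfs://Master:9000/historyserverforSpark
```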

2. Distribute the updated spark-defaults.conf (here, the whole conf directory) to every worker node:

[root@master conf]# ssh   root@10.*.*.238  rm -rf /usr/local/spark-2.1.0-bin-hadoop2.6/conf/
[root@master conf]# ssh   root@10.*.*.239  rm -rf /usr/local/spark-2.1.0-bin-hadoop2.6/conf/
[root@master conf]# ssh   root@10.*.*.240  rm -rf /usr/local/spark-2.1.0-bin-hadoop2.6/conf/
[root@master conf]# ssh   root@10.*.*.241  rm -rf /usr/local/spark-2.1.0-bin-hadoop2.6/conf/
[root@master conf]# scp   -rq /usr/local/spark-2.1.0-bin-hadoop2.6/conf root@10.*.*.238:/usr/local/spark-2.1.0-bin-hadoop2.6
[root@master conf]# scp   -rq /usr/local/spark-2.1.0-bin-hadoop2.6/conf root@10.*.*.239:/usr/local/spark-2.1.0-bin-hadoop2.6
[root@master conf]# scp   -rq /usr/local/spark-2.1.0-bin-hadoop2.6/conf root@10.*.*.240:/usr/local/spark-2.1.0-bin-hadoop2.6
[root@master conf]# scp   -rq /usr/local/spark-2.1.0-bin-hadoop2.6/conf root@10.*.*.241:/usr/local/spark-2.1.0-bin-hadoop2.6
[root@master conf]# 
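The per-node rm/scp pairs above can be scripted in a loop. A minimal sketch follows; the worker IPs are placeholders (the originals are masked in this post), and the leading `echo` makes it a dry run — remove it to actually execute the commands.

```shell
#!/bin/sh
# Dry-run sketch: print the distribution commands for each worker.
# Replace the placeholder IPs with your own worker addresses.
SPARK_HOME=/usr/local/spark-2.1.0-bin-hadoop2.6
for w in 10.0.0.238 10.0.0.239 10.0.0.240 10.0.0.241; do
    echo ssh "root@$w" rm -rf "$SPARK_HOME/conf"
    echo scp -rq "$SPARK_HOME/conf" "root@$w:$SPARK_HOME"
done
```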
3. Create the hdfs://Master:9000/historyserverforSpark directory in HDFS:

[root@master sbin]# hdfs dfs -ls /
Found 1 items
drwx-wx-wx   - root supergroup          0 2017-04-25 16:08 /tmp
[root@master sbin]# hdfs dfs  -mkdir historyserverforSpark
mkdir: `historyserverforSpark': No such file or directory
[root@master sbin]# hdfs dfs  -mkdir  hdfs://Master:9000/historyserverforSpark
[root@master sbin]# hdfs dfs -ls /
Found 2 items
drwxr-xr-x   - root supergroup          0 2017-04-27 12:23 /historyserverforSpark
drwx-wx-wx   - root supergroup          0 2017-04-25 16:08 /tmp
[root@master sbin]# jps
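The first mkdir above fails because an unqualified path in `hdfs dfs` is resolved relative to the user's HDFS home directory (/user/&lt;user&gt;), which did not yet exist on this fresh cluster; an absolute path or a fully qualified URI is taken as-is. A simplified sketch of that resolution rule (resolve_hdfs_path is a hypothetical helper, not an HDFS command):

```shell
#!/bin/sh
# Simplified model of how `hdfs dfs` resolves a path argument:
# fully qualified URIs and absolute paths pass through unchanged,
# anything else is prefixed with the user's HDFS home directory.
resolve_hdfs_path() {
    case "$1" in
        hdfs://*|/*) printf '%s\n' "$1" ;;
        *)           printf '/user/%s/%s\n' "$2" "$1" ;;
    esac
}
resolve_hdfs_path historyserverforSpark root   # /user/root/historyserverforSpark (home missing -> mkdir failed)
resolve_hdfs_path /historyserverforSpark root  # /historyserverforSpark
```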
4. Restart start-history-server.sh; this time it starts successfully:

[root@master sbin]# start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to /usr/local/spark-2.1.0-bin-hadoop2.6/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-master.out
[root@master sbin]# 
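To confirm the daemon really came up, a quick check is to look for the process and probe the UI port (the History Server UI defaults to 18080, configurable via spark.history.ui.port). A dry-run sketch, assuming `master` is this cluster's hostname:

```shell
#!/bin/sh
# Print verification commands rather than running them, since they
# only make sense on the cluster itself.
HISTORY_URL="http://master:${SPARK_HISTORY_UI_PORT:-18080}"
echo "Probe the UI with: curl -sf $HISTORY_URL"
echo "Check the daemon with: jps | grep HistoryServer"
```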
5. Check the WebUI page.
