
Setting Up a Flink 1.14.4 Standalone Distributed Cluster on CentOS 7

Bulut0907 · Published 2022-07-11 09:15:17

Contents
  • 1. Installation Plan
  • 2. Download and Extract (on bigdata001)
  • 3. Modify conf/flink-conf.yaml (on bigdata001)
  • 4. Modify conf/masters and conf/workers (on bigdata001)
  • 5. Add Environment Variables (on bigdata001)
  • 6. Start and Verify

1. Installation Plan
  • Set up passwordless SSH login between every pair of servers; make sure authorized_keys has permission 600 (see the sketch below)
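
A minimal sketch of the passwordless SSH setup, assuming RSA keys under the default /root/.ssh path; run the key generation on every node and copy the key to every other node:

# generate a key pair without a passphrase (skip if one already exists)
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# append this node's public key to each server's authorized_keys
ssh-copy-id root@bigdata001
ssh-copy-id root@bigdata002
ssh-copy-id root@bigdata003
# as noted above, authorized_keys must have permission 600
chmod 600 /root/.ssh/authorized_keys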
Service   Install servers   Install guide
java8     bigdata001/2/3    -
hadoop    bigdata001/2/3    Distributed installation of Hadoop 3.3.1 on CentOS 7

2. Download and Extract (on bigdata001)
[root@bigdata001 opt]#
[root@bigdata001 opt]# wget https://dlcdn.apache.org/flink/flink-1.14.4/flink-1.14.4-bin-scala_2.12.tgz
[root@bigdata001 opt]# 
[root@bigdata001 opt]# tar -zxvf flink-1.14.4-bin-scala_2.12.tgz
[root@bigdata001 opt]#
[root@bigdata001 opt]# cd flink-1.14.4
[root@bigdata001 flink-1.14.4]#
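
As an optional sanity check on the downloaded tarball (a sketch, run from /opt; it assumes the .sha512 checksum file is published at the same URL, the usual Apache release convention):

wget https://dlcdn.apache.org/flink/flink-1.14.4/flink-1.14.4-bin-scala_2.12.tgz.sha512
sha512sum -c flink-1.14.4-bin-scala_2.12.tgz.sha512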
3. Modify conf/flink-conf.yaml (on bigdata001)

Create the new directories:

[root@bigdata001 flink-1.14.4]# 
[root@bigdata001 flink-1.14.4]# pwd
/opt/flink-1.14.4
[root@bigdata001 flink-1.14.4]# 
[root@bigdata001 flink-1.14.4]# mkdir web_upload_dir
[root@bigdata001 flink-1.14.4]# 
[root@bigdata001 flink-1.14.4]# mkdir io_tmp_dir
[root@bigdata001 flink-1.14.4]# 

Entries to modify:


jobmanager.rpc.address: bigdata001                 # host that runs the JobManager

jobmanager.bind-host: bigdata001                   # address the JobManager binds to

jobmanager.memory.process.size: 2g                 # total JobManager process memory
taskmanager.memory.process.size: 6g                # total TaskManager process memory

taskmanager.numberOfTaskSlots: 2                   # task slots offered by each TaskManager

state.backend: rocksdb                             # use the RocksDB state backend
state.checkpoints.dir: hdfs://bigdata001:9000/flink/checkpoints/rocksdb    # checkpoint storage on HDFS
state.savepoints.dir: hdfs://bigdata001:9000/flink/savepoints/rocksdb      # savepoint storage on HDFS

rest.bind-address: bigdata001                      # address the REST endpoint / Web UI binds to

io.tmp.dirs: /opt/flink-1.14.4/io_tmp_dir          # local directory for temporary files
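
Since the checkpoint and savepoint paths point at HDFS, the target directories can be pre-created (a sketch; it assumes the HDFS cluster from the Hadoop 3.3.1 install is already running, and Flink will otherwise create the paths on the first checkpoint):

hdfs dfs -mkdir -p /flink/checkpoints/rocksdb
hdfs dfs -mkdir -p /flink/savepoints/rocksdb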

Entries to add:

env.java.home: /opt/jdk1.8.0_201                   # JDK installation directory

execution.checkpointing.interval: 300000           # checkpoint every 300000 ms (5 minutes)

web.upload.dir: /opt/flink-1.14.4/web_upload_dir   # directory for JARs uploaded through the Web UI

4. Modify conf/masters and conf/workers (on bigdata001)

conf/masters:

bigdata001:8081

conf/workers:

bigdata002
bigdata003
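
One way to write the two files (a minimal sketch, run from /opt/flink-1.14.4; editing them with vi works just as well):

echo "bigdata001:8081" > conf/masters
printf "bigdata002\nbigdata003\n" > conf/workers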
5. Add Environment Variables (on bigdata001)

vi /root/.bashrc (also do this on bigdata002/3)

export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`

export HADOOP_CONF_DIR=/opt/hadoop-3.3.1/etc/hadoop

vi /root/.bashrc

export FLINK_HOME=/opt/flink-1.14.4

export PATH=$PATH:$FLINK_HOME/bin
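
After editing, reload the shell environment and spot-check the variables (a sketch):

source /root/.bashrc
echo $FLINK_HOME
echo $HADOOP_CONF_DIR
echo $HADOOP_CLASSPATH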
6. Start and Verify
  1. Distribute the flink-1.14.4 directory (on bigdata001)
[root@bigdata001 opt]# scp -r flink-1.14.4 root@bigdata002:/opt
[root@bigdata001 opt]# scp -r flink-1.14.4 root@bigdata003:/opt
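
A quick check that the directory arrived on both workers (a sketch):

ssh root@bigdata002 "ls -d /opt/flink-1.14.4"
ssh root@bigdata003 "ls -d /opt/flink-1.14.4"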
  2. Start the cluster (on bigdata001)
[root@bigdata001 opt]# start-cluster.sh 
Starting cluster.
Starting standalonesession daemon on host bigdata001.
Starting taskexecutor daemon on host bigdata002.
Starting taskexecutor daemon on host bigdata003.
[root@bigdata001 opt]#
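
Beyond the startup messages, the running daemons can be confirmed with jps (a sketch; in a standalone session cluster the JobManager process typically shows up as StandaloneSessionClusterEntrypoint and each TaskManager as TaskManagerRunner):

jps
ssh root@bigdata002 jps
ssh root@bigdata003 jps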
  3. Visit http://bigdata001:8081 and the Flink Web UI should appear, as shown below: (screenshot: Web UI)

  4. Run a test program (on bigdata001)

[root@bigdata001 opt]# /opt/flink-1.14.4/bin/flink run /opt/flink-1.14.4/examples/streaming/WordCount.jar 
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/flink-1.14.4/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-3.3.1/share/hadoop/common/lib/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Executing WordCount example with default input data set.
Use --input to specify file input.
Printing result to stdout. Use --output to specify output path.
Job has been submitted with JobID c16df30fa6cc2cee2f5523021d08f80b
Program execution finished
Job with JobID c16df30fa6cc2cee2f5523021d08f80b has finished.
Job Runtime: 2555 ms

[root@bigdata001 opt]#
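
Because the result is printed to stdout, the word counts end up in the .out log of whichever TaskManager ran the job, not in the submitting shell. A sketch for checking it (log file names follow the pattern flink-<user>-taskexecutor-*.out):

ssh root@bigdata002 "tail -n 20 /opt/flink-1.14.4/log/flink-root-taskexecutor-*.out"
ssh root@bigdata003 "tail -n 20 /opt/flink-1.14.4/log/flink-root-taskexecutor-*.out"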
  5. Stop the cluster (on bigdata001)
[root@bigdata001 opt]# stop-cluster.sh 
Stopping taskexecutor daemon (pid: 32594) on host bigdata002.
Stopping taskexecutor daemon (pid: 3484) on host bigdata003.
Stopping standalonesession daemon (pid: 1900) on host bigdata001.
[root@bigdata001 opt]#