Flink (Part 2): Cluster Installation

shihongpin / 2024-09-26

Cluster Installation

Standalone Mode

Installation

  • Extract the archive
[user@hadoop102 software]$ tar -zxvf flink-1.10.1-bin-scala_2.12.tgz -C /opt/module/
  • Edit the flink/conf/flink-conf.yaml file
jobmanager.rpc.address: hadoop102
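Besides the JobManager address, a couple of other commonly tuned keys in flink-conf.yaml are worth knowing; the values below are illustrative defaults, not settings from this particular setup:

```yaml
jobmanager.rpc.address: hadoop102   # from the step above
taskmanager.numberOfTaskSlots: 1    # task slots offered by each TaskManager
parallelism.default: 1              # default parallelism for submitted jobs
```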
  • Edit the flink/conf/slaves file
hadoop103
hadoop104
  • Distribute the installation to the other two virtual machines
[user@hadoop102 module]$ xsync flink-1.10.1
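`xsync` here is a custom distribution helper common in this kind of multi-node tutorial, not a standard tool. A minimal sketch of what such a helper might do is below; the host names and destination directory are assumptions taken from this setup, and the rsync command is echoed rather than executed so the sketch is safe to run anywhere:

```shell
#!/bin/bash
# Hypothetical sketch of an xsync-style helper: push the given path to each
# worker host with rsync. A real version would run the command instead of echoing it.
HOSTS="hadoop103 hadoop104"
TARGET="${1:-testword.txt}"     # file or directory to distribute
DEST_DIR="/opt/module"          # assumed destination, matching the layout above
for host in $HOSTS; do
  echo "==== $host ===="
  CMD="rsync -av $TARGET $host:$DEST_DIR/"
  echo "$CMD"                   # echoed instead of executed in this sketch
done
```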
  • Start the cluster. First make sure the Spark and Hadoop clusters are shut down (Spark's standalone worker Web UI also defaults to port 8081, which conflicts with Flink's Web UI)
[user@hadoop102 bin]$ ./start-cluster.sh 
Starting cluster.
Starting standalonesession daemon on host hadoop102.
Starting taskexecutor daemon on host hadoop103.
Starting taskexecutor daemon on host hadoop104.
[user@hadoop102 bin]$ cd 
[user@hadoop102 ~]$ jpsall
=============== hadoop102 ===============
38726 Jps
38649 StandaloneSessionClusterEntrypoint
=============== hadoop103 ===============
6755 Worker
26251 Jps
26174 TaskManagerRunner
=============== hadoop104 ===============
10583 Worker
50444 Jps
50366 TaskManagerRunner
  • Visit the Web UI to monitor and manage the Flink cluster and its jobs
http://hadoop102:8081

Submitting a Job

  • First, create a data file testword.txt under /opt/module/flink-1.10.1
hello flink
hello spark
hello hadoop
hello java
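A word count over this file should yield hello 4 and the other words 1 each; this can be sanity-checked locally with standard coreutils, no Flink needed:

```shell
# Recreate the data file and count word frequencies with standard tools.
printf 'hello flink\nhello spark\nhello hadoop\nhello java\n' > testword.txt
# Split words onto separate lines, then count occurrences of each.
tr ' ' '\n' < testword.txt | sort | uniq -c | sort -rn
```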
  • Distribute testword.txt to the TaskManagerRunner hosts. When a job reads input from a local file, the actual tasks are dispatched to the TaskManager machines, so the data file must be present on each of them
[user@hadoop102 flink-1.10.1]$ xsync testword.txt 
==================== hadoop102 ====================
sending incremental file list

sent 62 bytes  received 12 bytes  148.00 bytes/sec
total size is 48  speedup is 0.65
==================== hadoop103 ====================
sending incremental file list
testword.txt

sent 157 bytes  received 35 bytes  384.00 bytes/sec
total size is 48  speedup is 0.25
==================== hadoop104 ====================
sending incremental file list
testword.txt

sent 157 bytes  received 35 bytes  384.00 bytes/sec
total size is 48  speedup is 0.25
  • Run the program
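The original cuts off before showing the actual command. As a sketch, submitting the batch WordCount example bundled with the Flink 1.10 distribution against this file might look like the following; the example jar path and output location are assumptions based on the directory layout above, and it requires the standalone cluster started earlier to be running:

```shell
# Submit the bundled WordCount example to the standalone cluster (assumed paths).
cd /opt/module/flink-1.10.1
./bin/flink run examples/batch/WordCount.jar \
  --input /opt/module/flink-1.10.1/testword.txt \
  --output /opt/module/flink-1.10.1/wordcount-result.txt
```

The running and finished job can then be inspected in the Web UI at http://hadoop102:8081.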