Monitoring Multiple Appended Files in a Directory in Real Time
Getting-Started Example 4
Objective: Use Flume to monitor an entire directory of files that are appended to in real time, and upload the new content to HDFS.
Analysis: The Exec source suits tailing a single appended file but cannot resume where it left off after a failure, and the Spooldir source suits directories of newly completed files but not real-time appends. The TAILDIR source handles both needs: it tails every file matched by the filegroups patterns and records its read offset for each file in a position file, so it can resume after a restart without missing data.
Steps:
I. Preparation
1. Change to /hadoop/Flume/apache-flume-1.9.0-bin
Command: cd /hadoop/Flume/apache-flume-1.9.0-bin
2. Create a files directory
Command: mkdir files
3. Change into the files directory and create file1.txt and file2.txt
Command: cd files/
touch file1.txt
touch file2.txt
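(Optional) You can verify the setup before moving on; the listing below is illustrative:
Command: ls /hadoop/Flume/apache-flume-1.9.0-bin/files
file1.txt  file2.txt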
This setup continues from the previous experiment.
II. Experiment
1. Change to the /hadoop/Flume/apache-flume-1.9.0-bin/job directory
Command: cd /hadoop/Flume/apache-flume-1.9.0-bin/job
2. Create the configuration file flume-taildir-hdfs.conf
Command: vi flume-taildir-hdfs.conf
Press a or i to enter insert mode
and add the following content:
a4.sources = r4
a4.sinks = k4
a4.channels = c4
# Describe/configure the source
a4.sources.r4.type = TAILDIR
a4.sources.r4.positionFile = /hadoop/Flume/apache-flume-1.9.0-bin/tail_dir.json
a4.sources.r4.filegroups = f1
a4.sources.r4.filegroups.f1 = /hadoop/Flume/apache-flume-1.9.0-bin/files/file.*
# Describe the sink
a4.sinks.k4.type = hdfs
a4.sinks.k4.hdfs.path = hdfs://hadoop102:9000/flume/upload/%Y%m%d/%H
# Prefix for uploaded file names
a4.sinks.k4.hdfs.filePrefix = upload-
# Whether to round down the timestamp when creating time-based directories
a4.sinks.k4.hdfs.round = true
# Number of time units per new directory
a4.sinks.k4.hdfs.roundValue = 1
# Time unit used for rounding
a4.sinks.k4.hdfs.roundUnit = hour
# Whether to use the local timestamp
a4.sinks.k4.hdfs.useLocalTimeStamp = true
# Number of Events to accumulate before flushing to HDFS
a4.sinks.k4.hdfs.batchSize = 100
# File type; compression is supported
a4.sinks.k4.hdfs.fileType = DataStream
# Roll a new file every 60 seconds
a4.sinks.k4.hdfs.rollInterval = 60
# Roll size per file, just under the 128 MB HDFS block size
a4.sinks.k4.hdfs.rollSize = 134217700
# File rolling is independent of the Event count
a4.sinks.k4.hdfs.rollCount = 0
# Use a channel which buffers events in memory
a4.channels.c4.type = memory
a4.channels.c4.capacity = 1000
a4.channels.c4.transactionCapacity = 100
# Bind the source and sink to the channel
a4.sources.r4.channels = c4
a4.sinks.k4.channel = c4
Press Esc to exit insert mode, then type :wq to save and quit.
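Note that the TAILDIR source is not limited to one file group: filegroups accepts a space-separated list, and each group can point at a different directory and pattern. A minimal sketch of a two-group setup (the files2 directory and log.* pattern are hypothetical, not part of this experiment):
a4.sources.r4.filegroups = f1 f2
a4.sources.r4.filegroups.f1 = /hadoop/Flume/apache-flume-1.9.0-bin/files/file.*
a4.sources.r4.filegroups.f2 = /hadoop/Flume/apache-flume-1.9.0-bin/files2/log.*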
3. Return to the parent directory and start the agent to monitor the directory
Command: cd ..
flume-ng agent --conf conf/ --name a4 --conf-file job/flume-taildir-hdfs.conf
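While testing, it can help to see the agent's log in the terminal. The flume-ng launcher accepts Java system properties, so you can optionally append the standard logging override:
flume-ng agent --conf conf/ --name a4 --conf-file job/flume-taildir-hdfs.conf -Dflume.root.logger=INFO,console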
4. Open a new terminal and switch to the hadoop user
Command: su - hadoop    (password: Yhf_1018)
Change to the /hadoop/Flume/apache-flume-1.9.0-bin/files directory
Command: cd /hadoop/Flume/apache-flume-1.9.0-bin/files
Append content to file1.txt and file2.txt
Command: echo hello >> file1.txt
echo songshu >> file2.txt
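To see how TAILDIR supports resuming after a restart, you can inspect the position file configured above; it records the inode, byte offset, and path of each tailed file. The inode and pos values below are illustrative, not what you will see:
Command: cat /hadoop/Flume/apache-flume-1.9.0-bin/tail_dir.json
[{"inode":1054321,"pos":6,"file":"/hadoop/Flume/apache-flume-1.9.0-bin/files/file1.txt"},{"inode":1054322,"pos":8,"file":"/hadoop/Flume/apache-flume-1.9.0-bin/files/file2.txt"}]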
5. View the data on HDFS
Open another terminal as the hadoop user and run:
Command: hdfs dfs -ls /flume/upload
Command: hdfs dfs -ls /flume/upload/20210201
Command: hdfs dfs -ls /flume/upload/20210201/22
Command: hdfs dfs -cat /flume/upload/20210201/22/upload-.1612190485267
(The 20210201/22 directories and the upload-.1612190485267 file name reflect the date, hour, and timestamp when this example was run; substitute the paths shown by your own ls output.)
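If the pipeline worked, the uploaded file should contain the lines appended in step 4 (the relative order of the two lines is not guaranteed):
hello
songshu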