About the Flume ERROR - org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:526) reported at startup

2019-08-08 11:32:19,680 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:526)] Hit max consecutive under-replication rotations (30); will not continue rolling files under this path due to under-replication
2019-08-08 11:32:19,913 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2019-08-08 11:32:19,913 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /home/hyxy/flume/spooldir/1901 to /home/hyxy/flume/spooldir/1901.COMPLETED
2019-08-08 12:32:36,096 (pool-4-thread-1) [WARN - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:239)] The channel is full, and cannot write data now. The source will try again after 250 milliseconds
2019-08-08 12:32:36,348 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:238)] Last read was never committed - resetting mark position.
2019-08-08 12:32:37,715 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2019-08-08 12:32:37,716 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /home/hyxy/flume/spooldir/1901 to /home/hyxy/flume/spooldir/1901.COMPLETED
2019-08-08 12:32:41,856 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:/home/hyxy/apps/flume/conf/spooling-hdfs.conf

Translated, the error means: the maximum number of consecutive under-replication rotations (30) has been hit; because the replica count cannot be met, Flume will no longer roll files under this path.

Once this error appears, the sink can no longer roll new data files to HDFS: every file it writes ends up with fewer replicas than requested.
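For context, this check lives in the HDFS sink's BucketWriter: after each file roll it verifies the block replication of the file it just closed, and after 30 consecutive under-replicated rolls it gives up on that path. A hypothetical sketch of the relevant sink section of a spooling-hdfs.conf like the one in the logs (agent and sink names a1/k1 and the HDFS path are assumptions, not taken from the original post; hdfs.minBlockReplicas is a documented Flume HDFS sink property):

```properties
# Hypothetical sink section; a1, k1, and the path are illustrative only.
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode:9000/flume/events

# By default the sink expects each block to reach the cluster's replication
# factor. If the cluster cannot provide that many replicas, lowering the
# sink-side minimum is one workaround (fixing dfs.replication is the other):
a1.sinks.k1.hdfs.minBlockReplicas = 1
```

Lowering hdfs.minBlockReplicas only changes what Flume is willing to accept; it does not change how HDFS itself replicates blocks.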

1. Check whether any DataNode has gone down, or whether you forgot to start one of the DataNodes.
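The live-DataNode count can be checked from the command line with standard Hadoop tools (no assumptions here beyond the Hadoop binaries being on PATH):

```shell
# Report cluster status; the summary shows how many DataNodes are live
hdfs dfsadmin -report | grep -i "live datanodes"

# On each worker node, confirm the DataNode JVM is actually running
jps | grep DataNode

# If one is missing, start it on that node
hdfs --daemon start datanode        # Hadoop 3.x
# hadoop-daemon.sh start datanode   # Hadoop 2.x
```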

2. Check the replication factor you set in hdfs-site.xml (the dfs.replication property), and whether it is greater than the number of DataNodes you actually have.
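The effective replication factor can be read without opening the file (hdfs getconf is a standard HDFS command; dfs.replication is the standard property name):

```shell
# Print the replication factor the cluster is actually using
hdfs getconf -confKey dfs.replication
```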

My mistake was exactly this: I had set the replication factor to 3, but I only have two DataNodes. Changing the replication factor to 2 fixed the problem.
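The fix is a one-property change in hdfs-site.xml (the property name is part of HDFS itself; the value 2 matches the two DataNodes in my case):

```xml
<!-- hdfs-site.xml: replication must not exceed the number of DataNodes -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
```

Note that dfs.replication only applies to files created after the change (and requires restarting HDFS); existing files keep their old replication factor unless you lower it explicitly, e.g. `hdfs dfs -setrep -R 2 /flume`.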


Reposted from blog.csdn.net/ShengBOOM/article/details/98862336