Hadoop 3 EC Testing

Copyright notice: this is an original article by the blogger, distributed under the CC 4.0 BY-SA license. Please attach the original source link and this notice when reposting.
Original link: https://blog.csdn.net/answer100answer/article/details/94042883

All EC-related subcommands:

 hdfs ec [generic options]
     [-setPolicy -path <path> [-policy <policyName>] [-replicate]]
     [-getPolicy -path <path>]
     [-unsetPolicy -path <path>]
     [-listPolicies]
     [-addPolicies -policyFile <file>]
     [-listCodecs]
     [-enablePolicy -policy <policyName>]
     [-disablePolicy -policy <policyName>]
     [-help [cmd ...]]

Because the encoded data is distributed across multiple datanodes, a policy such as RS-6-3-1024k needs at least 6 + 3 = 9 datanodes, so the cluster should generally have a matching number of DNs.
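As a quick illustration, a minimal Python sketch of that datanode requirement (the k and m values below are simply read off the policy names):

```python
# Trivial sketch: a policy needs at least (data units + parity units) datanodes,
# one per internal block, for full fault tolerance.
POLICIES = {                      # (data units k, parity units m)
    "RS-10-4-1024k": (10, 4),
    "RS-6-3-1024k": (6, 3),
    "RS-3-2-1024k": (3, 2),
    "XOR-2-1-1024k": (2, 1),
}

def min_datanodes(policy):
    k, m = POLICIES[policy]
    return k + m

print(min_datanodes("RS-6-3-1024k"))  # 9
```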

1. List the currently supported erasure coding policies

The command:

hdfs ec -listPolicies
[hadoop@hadoop-master1 shellUtils]$ hdfs ec -listPolicies
2019-06-28 20:10:52,329 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Erasure Coding Policies:
ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5], State=DISABLED
ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2], State=DISABLED
ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1], State=ENABLED
ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=3], State=DISABLED
ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4], State=DISABLED

Five EC policies are supported, and the output shows that RS-6-3-1024k is enabled by default:

  • RS-10-4-1024k: Reed-Solomon coding; every 10 data units (cells) produce 4 parity units, 14 units in total. Any 10 of those 14 units (data or parity, as long as 10 survive) are enough to reconstruct the original data. Each unit is 1024k = 1024 * 1024 = 1048576 bytes.

  • RS-3-2-1024k: Reed-Solomon coding; every 3 data units produce 2 parity units, 5 in total. Any 3 of the 5 units suffice to reconstruct the original data. Unit size is again 1048576 bytes.

  • RS-6-3-1024k: Reed-Solomon coding; every 6 data units produce 3 parity units, 9 in total. Any 6 of the 9 units suffice to reconstruct the original data. Unit size is again 1048576 bytes.

  • RS-LEGACY-6-3-1024k: the same layout as RS-6-3-1024k, but encoded with the rs-legacy codec, presumably the older RS implementation kept for compatibility.

  • XOR-2-1-1024k: XOR coding (faster than RS); every 2 data units produce 1 parity unit, 3 in total. Any 2 of the 3 units suffice to reconstruct the original data. Unit size is again 1048576 bytes.
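To make the recovery property concrete for the simplest case, here is a small Python sketch of XOR-2-1-style encoding: the parity cell is the XOR of the two data cells, and any two surviving cells reconstruct the third:

```python
# Illustration of XOR parity as used by XOR-2-1: losing any one of the three
# units (two data, one parity) still lets you recover the original bytes.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1 = b"hello ", b"world!"      # two equal-sized data cells
parity = xor_bytes(d0, d1)         # the single parity cell

# lose d0: rebuild it from the surviving data cell and the parity cell
recovered = xor_bytes(parity, d1)
assert recovered == d0
print(recovered)  # b'hello '
```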

2. Check the EC policy on a path

hdfs ec -getPolicy -path /user/ec/test

First create the directory /user/ec/rs-3-2, then check whether an erasure coding policy is set on it. The result shows that none is specified (newly created directories get no policy):

[hadoop@hadoop-master1 shellUtils]$ hadoop fs -mkdir -p /user/ec/rs-3-2
2019-06-28 20:42:24,522 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop-master1 shellUtils]$ hdfs ec -getPolicy -path /user/ec/rs-3-2
2019-06-28 20:43:42,593 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
The erasure coding policy of /user/ec/rs-3-2 is unspecified

Next, try to set the RS-3-2-1024k policy on this directory (the policy name comes from the -listPolicies output above). The attempt fails, because RS-3-2-1024k is not an enabled policy; only the default RS-6-3-1024k is enabled.

3. Switching policies

Enable a policy with hdfs ec -enablePolicy -policy <policyName>, and disable one with -disablePolicy. Here we disable the default RS-6-3-1024k and enable RS-3-2-1024k:
[hadoop@hadoop-master1 shellUtils]$ hdfs ec -disablePolicy -policy RS-6-3-1024k
2019-06-28 20:59:00,453 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Erasure coding policy RS-6-3-1024k is disabled
[hadoop@hadoop-master1 shellUtils]$ hdfs ec -enablePolicy -policy RS-3-2-1024k
2019-06-28 20:59:15,340 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Erasure coding policy RS-3-2-1024k is enabled

Now set RS-3-2-1024k on the directory:

[hadoop@hadoop-master1 shellUtils]$ hdfs ec -setPolicy -path /user/ec/rs-3-2 -policy RS-3-2-1024k
2019-06-28 21:00:26,066 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Set RS-3-2-1024k erasure coding policy on /user/ec/rs-3-2

4. Upload a file into the EC directory

[hadoop@hadoop-master1 test]$ hadoop fs -put hello.txt /user/ec/rs-3-2
2019-06-28 21:14:11,947 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-06-28 21:14:13,182 WARN erasurecode.ErasureCodeNative: ISA-L support is not available in your platform... using builtin-java codec where applicable

Note the EC-related warning printed at the end: ISA-L acceleration is unavailable, so the builtin Java codec is used. Now inspect the file's block layout with fsck:

hdfs fsck /user/ec/rs-3-2/hello.txt  -files -blocks -locations
[hadoop@hadoop-master1 test]$ hdfs fsck /user/ec/rs-3-2/hello.txt  -files -blocks -locations
2019-06-28 21:17:25,502 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://hadoop-master1:9870/fsck?ugi=hadoop&files=1&blocks=1&locations=1&path=%2Fuser%2Fec%2Frs-3-2%2Fhello.txt
FSCK started by hadoop (auth:SIMPLE) from /10.179.83.24 for path /user/ec/rs-3-2/hello.txt at Fri Jun 28 21:17:26 CST 2019
/user/ec/rs-3-2/hello.txt 52 bytes, erasure-coded: policy=RS-3-2-1024k, 1 block(s):  OK
0. BP-1486153034-10.179.83.24-1559101838489:blk_-9223372036854775792_1003 len=52 Live_repl=3  
[blk_-9223372036854775792:DatanodeInfoWithStorage[10.179.52.55:9866,DS-be87c547-e130-41e2-8910-09ad4096ef19,DISK], 
blk_-9223372036854775788:DatanodeInfoWithStorage[10.179.131.90:9866,DS-efa0dabb-9912-41a9-8c8a-1f6b5672d928,DISK], 
blk_-9223372036854775789:DatanodeInfoWithStorage[10.179.100.195:9866,DS-0b1470fc-cfac-484a-971c-8aa439528950,DISK]]


Status: HEALTHY
 Number of data-nodes:	6
 Number of racks:		3
 Total dirs:			0
 Total symlinks:		0

Replicated Blocks:
 Total size:	0 B
 Total files:	0
 Total blocks (validated):	0
 Minimally replicated blocks:	0
 Over-replicated blocks:	0
 Under-replicated blocks:	0
 Mis-replicated blocks:		0
 Default replication factor:	2
 Average block replication:	0.0
 Missing blocks:		0
 Corrupt blocks:		0
 Missing replicas:		0

Erasure Coded Block Groups:
 Total size:	52 B
 Total files:	1
 Total block groups (validated):	1 (avg. block group size 52 B)
 Minimally erasure-coded block groups:	1 (100.0 %)
 Over-erasure-coded block groups:	0 (0.0 %)
 Under-erasure-coded block groups:	0 (0.0 %)
 Unsatisfactory placement block groups:	0 (0.0 %)
 Average block group size:	3.0
 Missing block groups:		0
 Corrupt block groups:		0
 Missing internal blocks:	0 (0.0 %)
FSCK ended at Fri Jun 28 21:17:26 CST 2019 in 8 milliseconds


The filesystem under path '/user/ec/rs-3-2/hello.txt' is HEALTHY

The file is 52 bytes, less than 1024k, so it is encoded as a whole without striping. The data fits in a single block, so one datanode is enough for the data.

Live_repl=3 means there are 3 live internal blocks: the data block plus 2 parity blocks.

The block group's details are displayed: 0. BP-1486153034-10.179.83.24-1559101838489:blk_-9223372036854775792_1003 len=52 Live_repl=3, i.e. 52 bytes long with three internal blocks, whose locations follow:
[blk_-9223372036854775792:DatanodeInfoWithStorage[10.179.52.55:9866,DS-be87c547-e130-41e2-8910-09ad4096ef19,DISK],
blk_-9223372036854775788:DatanodeInfoWithStorage[10.179.131.90:9866,DS-efa0dabb-9912-41a9-8c8a-1f6b5672d928,DISK],
blk_-9223372036854775789:DatanodeInfoWithStorage[10.179.100.195:9866,DS-0b1470fc-cfac-484a-971c-8aa439528950,DISK]]
Here blk_-9223372036854775792 is the actual data block and the other two are parity blocks. Log in to the 10.179.52.55 machine and check:

[hadoop@hadoop-slave6 ~]$ ls -l  data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775792
-rw-rw-r-- 1 hadoop hadoop 52 Jun 28 21:14 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775792

The whole block is just 52 bytes. Another block:

[hadoop@hadoop-slave3 ~]$ ll data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775788
-rw-rw-r-- 1 hadoop hadoop 52 Jun 28 21:14 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775788

So there are 3 internal blocks in total and no space is saved: for a file this small, RS-3-2 stores as many bytes as 3x replication would.
Next, look at the case with multiple data blocks by uploading a large file.
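A hedged sketch quantifying that overhead, assuming 1 MB cells and one parity cell per stripe, each sized like the stripe's largest data cell:

```python
# Sketch of on-disk cost under a (k data, m parity) striped EC policy:
# data cells store exactly the file's bytes; every stripe adds m parity cells.
CELL = 1024 * 1024  # 1 MB cell size

def ec_stored_bytes(file_size, k, m, cell=CELL):
    full_stripes, rem = divmod(file_size, k * cell)
    parity = full_stripes * m * cell
    if rem:
        parity += m * min(rem, cell)   # partial stripe: parity matches its largest cell
    return file_size + parity

print(ec_stored_bytes(52, k=3, m=2))                       # 156: same cost as 3x replication
print(ec_stored_bytes(9 * CELL, k=3, m=2) / (9 * CELL))    # 1.666...: the nominal 5/3
```

So a 52-byte file costs three times its size, while files that fill whole stripes approach the nominal 5/3 overhead of RS-3-2.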


Now for a file larger than 1024k: upload a 9.26 MB file and inspect it:

[hadoop@hadoop-master1 ~]$ hdfs fsck /user/ec/rs-3-2/apache-tomcat-8.5.42.tar.gz -files -blocks -locations
2019-06-29 14:13:07,137 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://hadoop-master1:9870/fsck?ugi=hadoop&files=1&blocks=1&locations=1&path=%2Fuser%2Fec%2Frs-3-2%2Fapache-tomcat-8.5.42.tar.gz
FSCK started by hadoop (auth:SIMPLE) from /10.179.83.24 for path /user/ec/rs-3-2/apache-tomcat-8.5.42.tar.gz at Sat Jun 29 14:13:08 CST 2019
/user/ec/rs-3-2/apache-tomcat-8.5.42.tar.gz 9711748 bytes, erasure-coded: policy=RS-3-2-1024k, 1 block(s):  OK
0. BP-1486153034-10.179.83.24-1559101838489:blk_-9223372036854775760_1005 len=9711748 Live_repl=5 
[blk_-9223372036854775760:DatanodeInfoWithStorage[10.179.131.90:9866,DS-efa0dabb-9912-41a9-8c8a-1f6b5672d928,DISK], 
 blk_-9223372036854775759:DatanodeInfoWithStorage[10.179.131.21:9866,DS-5cc43afe-3c9e-400b-93d0-1146c7d1ce9f,DISK], 
 blk_-9223372036854775758:DatanodeInfoWithStorage[10.179.52.182:9866,DS-e91f4a19-3503-4a45-a5ea-208748281dfa,DISK], 
 blk_-9223372036854775757:DatanodeInfoWithStorage[10.179.100.195:9866,DS-0b1470fc-cfac-484a-971c-8aa439528950,DISK], 
 blk_-9223372036854775756:DatanodeInfoWithStorage[10.179.100.210:9866,DS-ef32ee8c-32b8-4d3a-b432-cfbaa3b4ef72,DISK]]

So there are 5 internal blocks in total. The first block:

[hadoop@hadoop-slave3 ~]$ ll -h  data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775760
-rw-rw-r-- 1 hadoop hadoop 3.3M Jun 29 14:12 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775760

The second block:

[hadoop@hadoop-slave4 ~]$ ll -h data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775759
-rw-rw-r-- 1 hadoop hadoop 3.0M Jun 29 14:12 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775759

The third block:

[hadoop@hadoop-slave5 ~]$ ll -h data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775758
-rw-rw-r-- 1 hadoop hadoop 3.0M Jun 29 14:12 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775758

The fourth block:

[hadoop@hadoop-slave2 ~]$ ll -h data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775757
-rw-rw-r-- 1 hadoop hadoop 3.3M Jun 29 14:12 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775757

The fifth block:

[hadoop@hadoop-slave1 ~]$ ll -h data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775756
-rw-rw-r-- 1 hadoop hadoop 3.3M Jun 29 14:12 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775756

Three of the blocks are 3.3M while two are 3.0M. Why is that?
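The uneven sizes fall out of round-robin cell striping. A sketch, assuming 1 MB cells dealt in order across the 3 data blocks, with each parity cell sized like the largest cell of its stripe:

```python
# Sketch of RS-3-2 striping: the file is cut into 1 MB cells, dealt round-robin
# to the k data blocks; each stripe of k cells gets m parity cells, each as
# large as the stripe's biggest data cell.
CELL = 1024 * 1024

def internal_block_sizes(file_size, k, m, cell=CELL):
    cells, remaining = [], file_size
    while remaining > 0:                     # cut the file into cells
        cells.append(min(cell, remaining))
        remaining -= cells[-1]
    data = [0] * k
    parity_total = 0
    for s in range(0, len(cells), k):        # one stripe = k consecutive cells
        stripe = cells[s:s + k]
        for j, size in enumerate(stripe):
            data[j] += size
        parity_total += max(stripe)          # one parity cell per stripe
    return data, [parity_total] * m

print(internal_block_sizes(9711748, k=3, m=2))
# ([3420292, 3145728, 3145728], [3420292, 3420292])
```

3420292 bytes shows up as 3.3M under ls -lh (data block 0 and both parity blocks), and 3145728 bytes as 3.0M (data blocks 1 and 2), matching the five sizes listed above.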


Upload one more test file of 1.79 MB; fsck shows 4 internal blocks:

[hadoop@hadoop-master1 ~]$  hdfs fsck /user/ec/rs-3-2/songxia.pdf -files -blocks -locations
2019-06-29 22:34:28,714 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://hadoop-master1:9870/fsck?ugi=hadoop&files=1&blocks=1&locations=1&path=%2Fuser%2Fec%2Frs-3-2%2Fsongxia.pdf
FSCK started by hadoop (auth:SIMPLE) from /10.179.83.24 for path /user/ec/rs-3-2/songxia.pdf at Sat Jun 29 22:34:30 CST 2019
/user/ec/rs-3-2/songxia.pdf 1881522 bytes, erasure-coded: policy=RS-3-2-1024k, 1 block(s):  OK
0. BP-1486153034-10.179.83.24-1559101838489:blk_-9223372036854775744_1006 len=1881522 Live_repl=4 
[blk_-9223372036854775744:DatanodeInfoWithStorage[10.179.52.182:9866,DS-e91f4a19-3503-4a45-a5ea-208748281dfa,DISK], 
blk_-9223372036854775743:DatanodeInfoWithStorage[10.179.52.55:9866,DS-be87c547-e130-41e2-8910-09ad4096ef19,DISK], 
blk_-9223372036854775741:DatanodeInfoWithStorage[10.179.131.21:9866,DS-5cc43afe-3c9e-400b-93d0-1146c7d1ce9f,DISK], 
blk_-9223372036854775740:DatanodeInfoWithStorage[10.179.100.210:9866,DS-ef32ee8c-32b8-4d3a-b432-cfbaa3b4ef72,DISK]]

The first block is exactly 1 MB, i.e. 1024 KB:

[hadoop@hadoop-slave5 ~]$ ll -h data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775744
-rw-rw-r-- 1 hadoop hadoop 1.0M Jun 29 22:33 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775744

The second block holds the 814 KB remainder left over after the first 1 MB cell:

[hadoop@hadoop-slave6 ~]$ ll -h data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775743
-rw-rw-r-- 1 hadoop hadoop 814K Jun 29 22:33 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775743

The third block (parity):

[hadoop@hadoop-slave4 ~]$ ll -h data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775741
-rw-rw-r-- 1 hadoop hadoop 1.0M Jun 29 22:33 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775741

The fourth block (parity):

[hadoop@hadoop-slave1 ~]$ ll -h data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775740
-rw-rw-r-- 1 hadoop hadoop 1.0M Jun 29 22:33 data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/blk_-9223372036854775740

To summarize: the minimum striping unit is one 1024k (1 MB) cell.

  1. If the file is smaller than 1 MB, it is not split at all: it is stored as a single data block, and each parity block is the same size as that data block.
  2. If the file is big enough to split, it is dealt out in 1 MB cells, round-robin, across the policy's data blocks (3 for RS-3-2), and the final piece smaller than 1 MB goes into a single cell. (A file under 2 MB therefore occupies only 2 data blocks, even though the policy has 3.)
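The rules above can be condensed into a count of live internal blocks; a small sketch, assuming 1 MB cells:

```python
# Sketch: a single-group EC file occupies min(ceil(size / cell), k) data blocks
# (round-robin striping can leave data blocks empty) plus m parity blocks.
CELL = 1024 * 1024

def live_internal_blocks(file_size, k, m, cell=CELL):
    cells = -(-file_size // cell)    # ceiling division: number of cells needed
    return min(cells, k) + m

for size in (52, 1881522, 9711748):  # the three files tested above
    print(size, live_internal_blocks(size, k=3, m=2))
```

These match the observed Live_repl values of 3, 4 and 5.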

Recovery test

The policy above is RS-3-2, so the data can still be read in full after losing any two internal blocks. Stop the datanodes holding the 9.3 MB file's third and fourth blocks, slave2 and slave5:

[hadoop@hadoop-slave2 ~]$ hdfs --daemon stop datanode
[hadoop@hadoop-slave5 ~]$ hdfs --daemon stop datanode

Download the file locally: errors are printed, but the download still succeeds:

[hadoop@hadoop-master1 ~]$ hadoop fs -get /user/ec/rs-3-2/apache-tomcat-8.5.42.tar.gz tmp
2019-06-29 23:51:03,906 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-06-29 23:51:05,167 WARN erasurecode.ErasureCodeNative: ISA-L support is not available in your platform... using builtin-java codec where applicable
2019-06-29 23:51:05,376 WARN impl.BlockReaderFactory: I/O error constructing remote block reader.
java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	......
	2019-06-29 23:51:05,392 WARN hdfs.DFSClient: [DatanodeInfoWithStorage[10.179.100.195:9866,DS-0b1470fc-cfac-484a-971c-8aa439528950,DISK]] are unavailable and all striping blocks on them are lost. IgnoredNodes = null

The local file is intact:

[hadoop@hadoop-master1 ~]$ du -sh tmp/apache-tomcat-8.5.42.tar.gz
9.3M	tmp/apache-tomcat-8.5.42.tar.gz

At this point the web UI still shows all nodes live. Datanode state is refreshed on an interval, roughly 10 minutes by default: only after that long without receiving a heartbeat does the namenode consider a datanode dead.
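That roughly-ten-minute figure matches the commonly cited timeout formula; a sketch, assuming the default heartbeat settings (treat the parameter values as assumptions for this cluster):

```python
# Hedged sketch: with default settings, the namenode declares a datanode dead
# after 2 * dfs.namenode.heartbeat.recheck-interval + 10 * dfs.heartbeat.interval.
# Assumed defaults: 300 s recheck interval, 3 s heartbeat interval.
recheck_s, heartbeat_s = 300, 3
dead_timeout_s = 2 * recheck_s + 10 * heartbeat_s
print(dead_timeout_s)  # 630 seconds, i.e. about 10 minutes
```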

Once that interval elapses, the UI shows 2 dead nodes.

Now look at how the blocks are distributed.
fsck still reports the data healthy:

[hadoop@hadoop-master1 ~]$  hdfs fsck /user/ec/rs-3-2/apache-tomcat-8.5.42.tar.gz -files -blocks -locations
2019-06-30 00:03:52,289 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Connecting to namenode via http://hadoop-master1:9870/fsck?ugi=hadoop&files=1&blocks=1&locations=1&path=%2Fuser%2Fec%2Frs-3-2%2Fapache-tomcat-8.5.42.tar.gz
FSCK started by hadoop (auth:SIMPLE) from /10.179.83.24 for path /user/ec/rs-3-2/apache-tomcat-8.5.42.tar.gz at Sun Jun 30 00:03:53 CST 2019
/user/ec/rs-3-2/apache-tomcat-8.5.42.tar.gz 9711748 bytes, erasure-coded: policy=RS-3-2-1024k, 1 block(s):  Under replicated BP-1486153034-10.179.83.24-1559101838489:blk_-9223372036854775760_1005. 
Target Replicas is 5 but found 4 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
0. BP-1486153034-10.179.83.24-1559101838489:blk_-9223372036854775760_1005 len=9711748 Live_repl=4  
[blk_-9223372036854775760:DatanodeInfoWithStorage[10.179.131.90:9866,DS-efa0dabb-9912-41a9-8c8a-1f6b5672d928,DISK], 
blk_-9223372036854775759:DatanodeInfoWithStorage[10.179.131.21:9866,DS-5cc43afe-3c9e-400b-93d0-1146c7d1ce9f,DISK], 
blk_-9223372036854775758:DatanodeInfoWithStorage[10.179.52.55:9866,DS-be87c547-e130-41e2-8910-09ad4096ef19,DISK], 
blk_-9223372036854775756:DatanodeInfoWithStorage[10.179.100.210:9866,DS-ef32ee8c-32b8-4d3a-b432-cfbaa3b4ef72,DISK]]


Status: HEALTHY
 Number of data-nodes:	4
 Number of racks:		3
 Total dirs:			0
 Total symlinks:		0

Replicated Blocks:
 Total size:	0 B
 Total files:	0
 Total blocks (validated):	0
 Minimally replicated blocks:	0
 Over-replicated blocks:	0
 Under-replicated blocks:	0
 Mis-replicated blocks:		0
 Default replication factor:	2
 Average block replication:	0.0
 Missing blocks:		0
 Corrupt blocks:		0
 Missing replicas:		0

Erasure Coded Block Groups:
 Total size:	9711748 B
 Total files:	1
 Total block groups (validated):	1 (avg. block group size 9711748 B)
 Minimally erasure-coded block groups:	1 (100.0 %)
 Over-erasure-coded block groups:	0 (0.0 %)
 Under-erasure-coded block groups:	1 (100.0 %)
 Unsatisfactory placement block groups:	0 (0.0 %)
 Average block group size:	4.0
 Missing block groups:		0
 Corrupt block groups:		0
 Missing internal blocks:	1 (20.0 %)
FSCK ended at Sun Jun 30 00:03:53 CST 2019 in 1 milliseconds


The filesystem under path '/user/ec/rs-3-2/apache-tomcat-8.5.42.tar.gz' is HEALTHY

Note, however, that only 4 internal blocks remain! Why? "Target Replicas is 5 but found 4 live replica(s)": the target is 5 internal blocks, but with only 4 live datanodes there is nowhere to place a fifth.

Restart the two stopped datanodes:

[hadoop@hadoop-slave2 ~]$ hdfs --daemon start datanode
[hadoop@hadoop-slave5 ~]$ hdfs --daemon start datanode

Then check the block status again:

[hadoop@hadoop-master1 ~]$  hdfs fsck /user/ec/rs-3-2/apache-tomcat-8.5.42.tar.gz -files -blocks -locations
0. BP-1486153034-10.179.83.24-1559101838489:blk_-9223372036854775760_1005 len=9711748 Live_repl=5 
 [blk_-9223372036854775760:DatanodeInfoWithStorage[10.179.131.90:9866,DS-efa0dabb-9912-41a9-8c8a-1f6b5672d928,DISK], 
blk_-9223372036854775759:DatanodeInfoWithStorage[10.179.131.21:9866,DS-5cc43afe-3c9e-400b-93d0-1146c7d1ce9f,DISK], 
blk_-9223372036854775758:DatanodeInfoWithStorage[10.179.52.55:9866,DS-be87c547-e130-41e2-8910-09ad4096ef19,DISK], 
blk_-9223372036854775757:DatanodeInfoWithStorage[10.179.100.195:9866,DS-0b1470fc-cfac-484a-971c-8aa439528950,DISK], 
blk_-9223372036854775756:DatanodeInfoWithStorage[10.179.100.210:9866,DS-ef32ee8c-32b8-4d3a-b432-cfbaa3b4ef72,DISK]]

With the two nodes back up, the file immediately has 5 internal blocks again!
They now reside on:
hadoop-slave3
hadoop-slave4
hadoop-slave6
hadoop-slave2
hadoop-slave1

Previously the blocks sat on nodes 1, 2, 3, 4 and 5; now they sit on 1, 2, 3, 4 and 6. The blocks from the stopped nodes 2 and 5 have ended up on nodes 2 and 6. Checking node 5, its block is gone:

[hadoop@hadoop-slave5 ~]$ ll -h data/dfs/dn/current/BP-1486153034-10.179.83.24-1559101838489/current/finalized/subdir0/subdir0/
total 0

Presumably, after those nodes went down, HDFS started decoding and re-encoding to reconstruct the lost internal blocks on other nodes. In short, whenever data in an encoded stripe is lost, HDFS automatically starts recovery work, and internal blocks that should no longer exist are deleted.
