Setting the HBase trash (recycle bin) cleanup interval to 1 day

1. Environment Check

1.1 Check that the web UIs load normally

# Login pages
master1 ip : 50070  (NameNode web UI)
master2 ip : 50070
master1 ip : 60010  or  master2 ip : 60010  (HBase Master web UI)

Make sure all RegionServers are online.
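The page checks above can also be scripted. A minimal sketch using curl; `MASTER1`/`MASTER2` are placeholders for the real master IPs, and the ports assume the Hadoop 2.x and pre-1.0 HBase defaults used in this document:

```shell
# Probe a web UI and print its HTTP status code.
check_ui() {
  # -s: silent, -o: discard body, -w: print only the status code
  curl -s -o /dev/null --max-time 5 -w '%{http_code}' "$1"
}

# A UI is considered healthy on a 200 (or a 302 redirect to its index page).
is_ok() { [ "$1" = "200" ] || [ "$1" = "302" ]; }

# Usage (replace the placeholders with real IPs):
# for host in MASTER1 MASTER2; do
#   code=$(check_ui "http://${host}:50070")   # NameNode UI
#   is_ok "$code" && echo "$host:50070 OK" || echo "$host:50070 FAILED ($code)"
# done
```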

1.2 Check the files in the trash

# Log in to the hbase master
su - hadoop
# List the files in the trash
hdfs dfs -ls -h /user/hadoop/.Trash
# Show the space the trash occupies
hdfs dfs -du -h /user/hadoop/.Trash
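When the trash holds many entries, totaling the byte counts can be easier than eyeballing `-du` output. A small sketch that sums the first column of `hdfs dfs -du` output; the live command is commented out because it needs a running cluster:

```shell
# Sum the first (byte-count) column of `hdfs dfs -du` output.
sum_bytes() { awk '{s += $1} END {print s + 0}'; }

# Usage on the cluster (as the hadoop user):
# hdfs dfs -du /user/hadoop/.Trash | sum_bytes
```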

1.3 Check the current value of fs.trash.interval

# Show the configured value
cat /home/hadoop/hadoop-current/etc/hadoop/hdfs-site.xml | grep fs.trash.interval -A 2

2. Solution

2.1 Log in to the hbase-master nodes (run this on master1 and master2 only):

# Back up the config file
su - hadoop
cp /home/hadoop/hadoop-current/etc/hadoop/hdfs-site.xml /home/hadoop/hadoop-current/etc/hadoop/hdfs-site.xml.bak
# Set fs.trash.interval to 1440 (minutes, i.e. 1 day):
# vi /home/hadoop/hadoop-current/etc/hadoop/hdfs-site.xml
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>
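After editing, the value can be read back as a sanity check. A crude extractor sketch; it assumes the `<name>` and `<value>` elements sit on adjacent lines, as in the snippet above:

```shell
# Print the <value> on the line following a matching <name> element.
get_conf_value() {
  grep -A1 "<name>$1</name>" "$2" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# Usage:
# get_conf_value fs.trash.interval /home/hadoop/hadoop-current/etc/hadoop/hdfs-site.xml
# expected: 1440
```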

2.2 Restart the Hadoop NameNode services:

# Run all commands as the hadoop user:
su - hadoop
# Get the current state of each NameNode:
/home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn1
/home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn2
==================
## Example:
[hadoop@hbase-master1-1 ~]$ /home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn1
active
[hadoop@hbase-master1-1 ~]$ /home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn2
standby
==================
# On the standby NameNode node (restart the standby first):
## Restart the namenode on the standby node
/home/hadoop/hadoop-current/sbin/hadoop-daemon.sh stop namenode
/home/hadoop/hadoop-current/sbin/hadoop-daemon.sh start namenode
# Confirm safemode is OFF on the restarted node
/home/hadoop/hadoop-current/bin/hdfs dfsadmin -safemode get
# On the active NameNode node:
## Restart the namenode on the active node
/home/hadoop/hadoop-current/sbin/hadoop-daemon.sh stop namenode
/home/hadoop/hadoop-current/sbin/hadoop-daemon.sh start namenode
# Confirm safemode is OFF on the restarted node
/home/hadoop/hadoop-current/bin/hdfs dfsadmin -safemode get
==================
## Example:
$ /home/hadoop/hadoop-current/bin/hdfs dfsadmin -safemode get
19/08/07 11:55:03 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
19/08/07 11:55:03 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
19/08/07 11:55:03 INFO impl.MetricsSystemImpl: DFSClient metrics system started
19/08/07 11:55:03 INFO metrics.DFSClientMetrics: creating dfs client metrics
19/08/07 11:55:03 INFO hdfs.DFSClient: main create dfs client metrics.
Safe mode is OFF in hbase-master1-1/{master1_ip}:8020
Safe mode is OFF in hbase-master2-1/{master2_ip}:8020
==================
# After restarting both NameNodes, check their states; expect one active and one standby
/home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn1
/home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn2
==================
## Example:
[hadoop@hbase-master1-1 ~]$ /home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn1
standby
[hadoop@hbase-master1-1 ~]$ /home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn2
active
==================
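The restart-and-check sequence above can be wrapped in a helper. A sketch assuming the paths used in this document; the cluster calls only run when `restart_namenode` is invoked on a master node:

```shell
HADOOP_HOME=/home/hadoop/hadoop-current

# True when dfsadmin output on stdin reports safemode OFF.
safemode_off() { grep -q 'Safe mode is OFF'; }

# Restart the local namenode and poll until safemode is OFF (up to ~60s).
restart_namenode() {
  "$HADOOP_HOME/sbin/hadoop-daemon.sh" stop namenode
  "$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode
  for _ in 1 2 3 4 5 6 7 8 9 10 11 12; do
    "$HADOOP_HOME/bin/hdfs" dfsadmin -safemode get | safemode_off && return 0
    sleep 5
  done
  echo "safemode still not OFF after restart" >&2
  return 1
}
```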

3. Verification

3.1 Log in to the hbase master container; run all operations as the hadoop user:

1. Confirm the web UIs load normally
master1 ip : 50070
master2 ip : 50070
master1 ip : 60010  or  master2 ip : 60010

3.2 Confirm HDFS is functioning normally

# Log in to hbase-master
su - hadoop
# List the root directory
/home/hadoop/hadoop-current/bin/hdfs dfs -ls /
# Show cluster capacity
/home/hadoop/hadoop-current/bin/hdfs dfsadmin -report
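A quick way to see the new setting in action is to delete a test file and confirm it lands in the trash. The helper below encodes where HDFS moves deleted paths; the filename and `/tmp` location are arbitrary examples:

```shell
# Where HDFS moves a deleted path for a given user.
trash_path() { echo "/user/$1/.Trash/Current$2"; }

# Usage on the cluster (as the hadoop user):
# echo "trash test" | hdfs dfs -put - /tmp/trash_test.txt
# hdfs dfs -rm /tmp/trash_test.txt
# hdfs dfs -ls "$(trash_path hadoop /tmp/trash_test.txt)"
```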

3.3 Confirm HBase is functioning normally

# Log in to hbase-master
hbase shell
# Check the number of regionservers
>status
# Test creating a table
>create 'test','f1'
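The interactive checks above can also be scripted. A sketch that parses the server count out of the `status` output; the exact wording of the status line is an assumption based on typical `hbase shell` output, and the `-n` (non-interactive) flag requires an HBase release that supports it:

```shell
# Extract the live server count from an `hbase shell` status line such as:
#   "1 active master, 0 backup masters, 3 servers, 0 dead, 2.0000 average load"
live_servers() { sed -n 's/.*, \([0-9][0-9]*\) servers.*/\1/p'; }

# Usage (requires an HBase release with non-interactive shell mode):
# echo "status" | hbase shell -n | live_servers
```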

3.4 Confirm the configuration took effect

## Verify the configuration took effect:
/home/hadoop/hadoop-current/bin/hdfs getconf -confKey fs.trash.interval
# Expected result: 1440
/home/hadoop/hadoop-current/bin/hdfs getconf -confKey fs.trash.checkpoint.interval
# Expected result: 0 (0 means the checkpoint interval defaults to fs.trash.interval)
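These expectations can be asserted rather than eyeballed. A small comparison helper; the live `getconf` calls are shown commented out since they need the cluster:

```shell
# Compare an observed config value against the expected one.
check_value() {
  # args: key expected actual
  if [ "$3" = "$2" ]; then
    echo "OK: $1=$3"
  else
    echo "FAIL: $1=$3 (wanted $2)" >&2
    return 1
  fi
}

# Usage on the cluster:
# check_value fs.trash.interval 1440 \
#   "$(/home/hadoop/hadoop-current/bin/hdfs getconf -confKey fs.trash.interval)"
```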

4. Rollback

4.1 Restore the configuration

## To restore the config file, log in to master1 and master2 and run on each:
su - hadoop
mv /home/hadoop/hadoop-current/etc/hadoop/hdfs-site.xml.bak /home/hadoop/hadoop-current/etc/hadoop/hdfs-site.xml

4.2 Restart Hadoop

# Run as the hadoop user
su - hadoop
# Get the current state of each NameNode:
/home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn1
/home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn2
==================
## Example:
[hadoop@hbase-master1-1 ~]$ /home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn1
active
[hadoop@hbase-master1-1 ~]$ /home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn2
standby
==================
# On the standby NameNode node (restart the standby first):
# Restart the namenode on the standby node
/home/hadoop/hadoop-current/sbin/hadoop-daemon.sh stop namenode
/home/hadoop/hadoop-current/sbin/hadoop-daemon.sh start namenode
# Confirm safemode is OFF on the restarted node
/home/hadoop/hadoop-current/bin/hdfs dfsadmin -safemode get
# On the active NameNode node:
# Restart the namenode on the active node
/home/hadoop/hadoop-current/sbin/hadoop-daemon.sh stop namenode
/home/hadoop/hadoop-current/sbin/hadoop-daemon.sh start namenode
# Confirm safemode is OFF on the restarted node
/home/hadoop/hadoop-current/bin/hdfs dfsadmin -safemode get
==================
## Example:
$ /home/hadoop/hadoop-current/bin/hdfs dfsadmin -safemode get
19/08/07 11:55:03 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
19/08/07 11:55:03 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
19/08/07 11:55:03 INFO impl.MetricsSystemImpl: DFSClient metrics system started
19/08/07 11:55:03 INFO metrics.DFSClientMetrics: creating dfs client metrics
19/08/07 11:55:03 INFO hdfs.DFSClient: main create dfs client metrics.
Safe mode is OFF in hbase-master1-1/{master1_ip}:8020
Safe mode is OFF in hbase-master2-1/{master2_ip}:8020
==================
# After restarting both NameNodes, check their states; expect one active and one standby
/home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn1
/home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn2
==================
## Example:
[hadoop@hbase-master1-1 ~]$ /home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn1
standby
[hadoop@hbase-master1-1 ~]$ /home/hadoop/hadoop-current/bin/hdfs haadmin -getServiceState nn2
active
==================
# Verify the configuration was restored:
/home/hadoop/hadoop-current/bin/hdfs getconf -confKey fs.trash.interval
/home/hadoop/hadoop-current/bin/hdfs getconf -confKey fs.trash.checkpoint.interval

4.3 Confirm the web UIs load normally

master1 ip : 50070
master2 ip : 50070
master1 ip : 60010  or  master2 ip : 60010

Reposted from blog.csdn.net/zfw_666666/article/details/128869934