HBase cross-datacenter synchronization

Both clusters run HBase 1.1.4; the goal is to synchronize data between the two HBase clusters.
Inspecting data through hbase shell commands is tedious; a graphical HBase client can be used instead.
References: "HBase备份还原OpenTSDB数据之Snapshot" (backing up and restoring OpenTSDB data in HBase with snapshots) and "Hbase四种数据迁移方案" (four HBase data migration approaches).
1 Create a snapshot

hbase shell
snapshot 'your_table', 'your_table_snapshot'
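
Note that the table name comes first and the snapshot name second. To confirm the snapshot exists before moving on, list_snapshots (a built-in HBase shell command) can be run from the same shell:

hbase shell
list_snapshots
# should list your_table_snapshot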

2 Restore the data from the snapshot on the other cluster
Run the following command to list the snapshots on the source cluster's HDFS:

[root@bwsc65 ~]# hadoop fs -ls hdfs://172.19.123.151:9000/hbase/.hbase-snapshot/
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/application/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/application/hbase-1.1.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Found 2 items
drwxr-xr-x   - root supergroup          0 2019-09-04 14:07 hdfs://172.19.123.151:9000/hbase/.hbase-snapshot/.tmp
drwxr-xr-x   - root supergroup          0 2019-09-04 14:07 hdfs://172.19.123.151:9000/hbase/.hbase-snapshot/my_table_09041407

On the destination HBase cluster, run the following command to pull the snapshot across:

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
-snapshot my_table_09041407 -copy-from hdfs://172.19.123.151:9000/hbase \
-copy-to hdfs://10.101.10.65:9000/hbase -overwrite -mappers 16 -bandwidth 1024
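
For reference, -mappers controls how many parallel copy tasks run, and -bandwidth caps the copy throughput in MB per second. ExportSnapshot only copies the HFiles and snapshot metadata; once it succeeds, the snapshot still has to be materialized into a table on the destination cluster. A minimal sketch using standard HBase shell commands (clone_snapshot creates a new table from the snapshot; restore_snapshot would instead overwrite an existing table of the same name):

hbase shell
list_snapshots
clone_snapshot 'my_table_09041407', 'my_table'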

In my case, though, the export job failed with a Java heap space error:

2019-09-04 14:19:00,993 INFO  [main] mapreduce.Job: Task Id : attempt_1558491925397_0127_m_000013_2, Status : FAILED
Error: Java heap space
2019-09-04 14:19:01,995 INFO  [main] mapreduce.Job:  map 100% reduce 0%
2019-09-04 14:19:02,001 INFO  [main] mapreduce.Job: Job job_1558491925397_0127 failed with state FAILED due to: Task failed task_1558491925397_0127_m_000002
Job failed as tasks failed. failedMaps:1 failedReduces:0

2019-09-04 14:19:02,070 INFO  [main] mapreduce.Job: Counters: 12
        Job Counters
                Failed map tasks=41
                Killed map tasks=15
                Launched map tasks=51
                Other local map tasks=51
                Total time spent by all maps in occupied slots (ms)=474640
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=118660
                Total vcore-seconds taken by all map tasks=118660
                Total megabyte-seconds taken by all map tasks=121507840
        Map-Reduce Framework
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
2019-09-04 14:19:02,072 ERROR [main] snapshot.ExportSnapshot: Snapshot export failed
org.apache.hadoop.hbase.snapshot.ExportSnapshotException: Copy Files Map-Reduce Job failed
        at org.apache.hadoop.hbase.snapshot.ExportSnapshot.runCopyJob(ExportSnapshot.java:804)
        at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:997)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:1071)
        at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1075)

A commonly suggested fix is to raise the map task heap in the Hadoop configuration:

vim mapred-site.xml
# add the following (mapred.child.java.opts is the legacy property name; on Hadoop 2.x, mapreduce.map.java.opts is the preferred equivalent)

<property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx1024m</value>
</property>

stop-hbase.sh
stop-all.sh
start-all.sh
start-hbase.sh
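
Restarting the whole stack is heavy-handed, though. Since ExportSnapshot runs through ToolRunner (visible in the stack trace above), the heap settings should also be passable per job as generic -D options, without touching the cluster config or restarting anything. A sketch, assuming your YARN container limits allow these sizes (2048/1536 MB are guesses, not tuned values):

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
-Dmapreduce.map.memory.mb=2048 \
-Dmapreduce.map.java.opts=-Xmx1536m \
-snapshot my_table_09041407 -copy-from hdfs://172.19.123.151:9000/hbase \
-copy-to hdfs://10.101.10.65:9000/hbase -overwrite -mappers 8 -bandwidth 100

Dropping from 16 mappers to 8 also lowers peak memory pressure on a small cluster.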

But my servers are in the cloud, memory is already tight, and adding more costs money. What to do?
I switched to a different strategy, copying the table's files directly with distcp. It is slower, but at least it does not fail immediately with an exception.

hadoop distcp -Dmapreduce.job.queue.name=queue_0001_01 -update -skipcrccheck -m 100 hdfs://172.19.123.151:9000/hbase/data/default/my_table /hbase/data/default/my_table 

I waited a whole night and it was still stuck. Why? Two plausible culprits: the queue_0001_01 YARN queue may simply have no free resources, and distcp over the live /hbase/data directory is fragile in any case, because compactions on the source can delete or move HFiles while the copy runs.
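
When a distcp job sits at the same progress for hours, a generic first-pass checklist (standard YARN/MapReduce CLI commands; <application_id> and <job_id> are placeholders):

yarn application -list -appStates RUNNING   # is the job actually running, and how many containers does it hold?
yarn application -status <application_id>   # queue, progress, and diagnostics for the stuck job
mapred job -status <job_id>                 # map completion; stuck at 0% usually means no free queue resources

The ResourceManager web UI (port 8088 by default) also shows whether queue_0001_01 has any spare capacity.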



Reposted from blog.csdn.net/warrah/article/details/100533611