HBase Learning --- Main HBase Replication Scripts

Copyright notice: this is an original article by the author; reproduction without permission is prohibited. https://blog.csdn.net/wjandy0211/article/details/90063602

describe 'wj_test'
{NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '1', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}


192.168.145.8       dbp06  dbp06.cloud.9ffox.com
192.168.155.205     dbp07  dbp07.cloud.9ffox.com
192.168.161.205     dbp08  dbp08.cloud.9ffox.com
    
    Call to dbp06.cloud.9ffox.com/221.122.96.180:16020 failed on local exception: org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedNoRouteToHostException: No route to host: dbp06.cloud.9ffox.com/221.122.96.180:16020
    
create 'wj_test11', {NAME => 'f', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '1', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'} 

disable 'wj_test'
alter 'wj_test', {NAME => 'f', REPLICATION_SCOPE => '1'}
enable 'wj_test'

CLUSTER_KEY format: hbase.zookeeper.quorum:hbase.zookeeper.property.clientPort:zookeeper.znode.parent

add_peer '1', CLUSTER_KEY =>"192.168.145.8,192.168.155.205,192.168.161.205:2181:/hbase"
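Once the peer is added, its state can be checked from the HBase shell; `list_peers` and `status 'replication'` are standard shell commands (the exact output format depends on the HBase version):

```shell
# Run inside `hbase shell` on the source cluster
list_peers              # lists each peer id, its cluster key, and ENABLED/DISABLED state
status 'replication'    # per-regionserver replication source/sink metrics
```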

put 'wj_test11','wj_test004','f:name','wj_test003'
put 'wj_test11','wj_test004','f:r1','wj_test_r3'
put 'wj_test11','wj_test004','f:r2','wj_test_r3'


scan 'wj_test', {LIMIT => 10}

hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication [--starttime=timestamp1] [--endtime=timestamp2] [--families=comma separated list of families] <peerId> <tablename>

hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication  1 wj_test
hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication --starttime=1265875194289 --endtime=1265878794289 1 wj_test 
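The --starttime/--endtime values are epoch-millisecond timestamps. One way to build, say, a one-hour verification window ending now (a sketch assuming GNU date, whose %N gives sub-second precision):

```shell
# Epoch milliseconds now, and one hour (3 600 000 ms) earlier
STOP=$(date +%s%3N)
START=$((STOP - 3600000))
echo "--starttime=${START} --endtime=${STOP}"
```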


Table: bd_bangcle_apkinfo

1. Take a snapshot:
   snapshot 'bd_bangcle_apkinfo', 'snapshotName'


2. List all snapshots:
   list_snapshots
   
3. Delete a snapshot:
   delete_snapshot 'snapshotName'


4. Create a new table from a snapshot:
   clone_snapshot 'snapshotName', 'bd_bangcle_apkinfo'


5. Replace the schema/data of the table the snapshot was taken from; the table must be disabled first:
    disable 'bd_bangcle_apkinfo'
    restore_snapshot 'snapshotName'
    enable 'bd_bangcle_apkinfo'


6. Use the ExportSnapshot tool to export an existing snapshot to another cluster:
  hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -overwrite -snapshot snapshot_replication_test_20190510 -mappers 16 -copy-to hdfs://192.168.145.8:8020/hbase -bandwidth 40
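Once the export finishes, the snapshot should be visible on the destination cluster and can be materialized there with clone_snapshot (a sketch; the restored table name is illustrative):

```shell
# Run inside `hbase shell` on the destination cluster
list_snapshots                            # the exported snapshot should appear here
clone_snapshot 'snapshot_replication_test_20190510', 'replication_test_restored'
```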


hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication  1 bd_bangcle_apkinfo

job_1541573865545_0267


 File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=254689
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=153
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=1
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
        Job Counters 
                Launched map tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=3507
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=3507
                Total vcore-milliseconds taken by all map tasks=3507
                Total megabyte-milliseconds taken by all map tasks=3591168
        Map-Reduce Framework
                Map input records=10
                Map output records=0
                Input split bytes=153
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=112
                CPU time spent (ms)=3260
                Physical memory (bytes) snapshot=424742912
                Virtual memory (bytes) snapshot=2717315072
                Total committed heap usage (bytes)=470286336
                Peak Map Physical memory (bytes)=424742912
                Peak Map Virtual memory (bytes)=2717315072
        HBase Counters
                BYTES_IN_REMOTE_RESULTS=0
                BYTES_IN_RESULTS=673
                MILLIS_BETWEEN_NEXTS=557
                NOT_SERVING_REGION_EXCEPTION=0
                NUM_SCANNER_RESTARTS=0
                NUM_SCAN_RESULTS_STALE=0
                REGIONS_SCANNED=1
                REMOTE_RPC_CALLS=0
                REMOTE_RPC_RETRIES=0
                ROWS_FILTERED=0
                ROWS_SCANNED=10
                RPC_CALLS=1
                RPC_RETRIES=0
        org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication$Verifier$Counters
                BADROWS=9
                GOODROWS=1
                ONLY_IN_SOURCE_TABLE_ROWS=9
        File Input Format Counters 
                Bytes Read=0
        File Output Format Counters 
                Bytes Written=0

Reading the Verifier counters above: GOODROWS=1 means one row matched on both clusters, while BADROWS=9 with ONLY_IN_SOURCE_TABLE_ROWS=9 means nine rows exist only in the source cluster, i.e. they were not replicated to the peer.
