Common Commands Cheat Sheet


---------------------Create a topic--------------------------------------

cd /usr/hdp/current/kafka-broker
bin/kafka-topics.sh --create --zookeeper node3.cp.cn:2181 --replication-factor 1 --partitions 3 --topic SysWarning
bin/kafka-topics.sh --create --zookeeper node6.kg.cn:2181 --replication-factor 1 --partitions 3 --topic RADAR_MULTI
bin/kafka-topics.sh --create --zookeeper node6.kg.cn:2181 --replication-factor 1 --partitions 3 --topic TP

List topics
bin/kafka-topics.sh --list --zookeeper node3.cp.cn:2181
bin/kafka-topics.sh --list --zookeeper node6.kg.cn:2181

Delete a topic
./bin/kafka-topics.sh --delete --zookeeper <zookeeper server> --topic <topic name>
./bin/kafka-topics.sh --delete --zookeeper node6.kg.cn:2181 --topic RADAR_MULTI
./bin/kafka-topics.sh --describe --zookeeper node6.kg.cn:2181 --topic RADAR_MULTI

Enter the ZooKeeper client
zookeeper-client
List topics
ls /brokers/topics/
Get a topic's metadata
get /brokers/topics/RADAR_MULTI
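For reference, the value returned is JSON mapping each partition to its replica broker ids; a sketch of what it might look like for this topic (broker ids borrowed from the partition-state examples below, purely illustrative):
{"version":1,"partitions":{"0":[1006],"1":[1007],"2":[1002]}}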

If at this point the topic is only marked for deletion and you want to remove it for good, log into the ZooKeeper client:

Command: ./bin/zookeeper-client

Locate the topic's directory: ls /brokers/topics

Find the topic to delete and run: rmr /brokers/topics/<topic name>. The topic is then completely removed.
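Note: when delete.topic.enable is not set to true on the brokers, --delete only marks the topic for deletion. A fuller manual cleanup sketch (the log directory path is an assumption; check log.dirs in server.properties):
rmr /brokers/topics/RADAR_MULTI
rmr /admin/delete_topics/RADAR_MULTI
then, on each broker, delete the topic's log directories, e.g. rm -rf /kafka-logs/RADAR_MULTI-*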


Console consumer command (use --zookeeper for the old consumer or --bootstrap-server for the new one, not both):
/usr/hdp/2.5.3.0-37/kafka/bin/kafka-console-consumer.sh --bootstrap-server node6.kg.cn:6667 --topic CALLSATURATION
/usr/hdp/2.5.3.0-37/kafka/bin/kafka-console-consumer.sh --zookeeper node4.kg.cn:2181 --topic CALLSATURATION


Recreate a topic's partition leaders (in the ZooKeeper client)
First check the value of the /controller_epoch node:
get /controller_epoch
which returns 1.

Per the partition layout from the first step there are three partitions (0, 1, 2); create a node for each:

create /brokers/topics/RADAR_MULTI/partitions null
create /brokers/topics/RADAR_MULTI/partitions/0 null
create /brokers/topics/RADAR_MULTI/partitions/1 null
create /brokers/topics/RADAR_MULTI/partitions/2 null

Change controller_epoch below to the value 1 obtained above:
create /brokers/topics/RADAR_MULTI/partitions/0/state {"controller_epoch":30,"leader":1006,"version":1,"leader_epoch":2,"isr":[1006]}
create /brokers/topics/RADAR_MULTI/partitions/1/state {"controller_epoch":30,"leader":1007,"version":1,"leader_epoch":3,"isr":[1007]}
create /brokers/topics/RADAR_MULTI/partitions/2/state {"controller_epoch":30,"leader":1002,"version":1,"leader_epoch":4,"isr":[1002]}
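To confirm the nodes were written correctly, read one back in the ZooKeeper client:
get /brokers/topics/RADAR_MULTI/partitions/0/state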


View a topic's partition and replica details:
./kafka-topics.sh --describe --zookeeper node6.kg.cn:2181 --topic SECTOR_OPENCLOSE,DreamTopic

[root@linux-node2 bin]# ./kafka-topics.sh --describe --zookeeper localhost:2181 --topic dream
Topic:dream PartitionCount:5 ReplicationFactor:2 Configs:
Topic: dream Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: dream Partition: 1 Leader: 2 Replicas: 2,3 Isr: 2,3
Topic: dream Partition: 2 Leader: 3 Replicas: 3,1 Isr: 3,1
Topic: dream Partition: 3 Leader: 1 Replicas: 1,3 Isr: 1,3
Topic: dream Partition: 4 Leader: 2 Replicas: 2,1 Isr: 2,1

./kafka-topics.sh --describe --zookeeper node6.sdp.cn:2181 --topic hisdata
Leader: handles all reads and writes for the partition; it is elected from among the replica nodes.

Replicas: lists all replica nodes, whether or not they are in service.

Isr: the replicas currently in service (in-sync replicas).

Check a consumer group's offsets

[root@node10 bin]# ./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper node3.kg.cn:2181 --group Warning --topic WARN_SIMILAR
[2018-07-03 15:53:42,275] WARN WARNING: ConsumerOffsetChecker is deprecated and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. (kafka.tools.ConsumerOffsetChecker$)
[root@node10 bin]# ./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper node3.kg.cn:2181 --group Radar --topic RADAR_MULTI
Group Topic Pid Offset logSize Lag Owner
Radar RADAR_MULTI 0 2223450 2268819 45369 none
Radar RADAR_MULTI 1 2185552 2255608 70056 none
Radar RADAR_MULTI 2 2184745 2254045 69300 none

Modify a group's topic offsets in ZooKeeper

set /consumers/Radar/offsets/RADAR_MULTI/2 2254045
get /consumers/Radar/offsets/RADAR_MULTI/0

bin/kafka-console-producer.sh --broker-list node4.kg.cn:6667 --topic test

{"AREA_SOURCE":"ZSQD","RUNWAY_STATUS":"","SECTOR_NAME":"G,K,E,H,P,S,N","SECTOR_STATUS":"open,open,open,open,open,open,close","SEND_TIME":"20180706064211"}

Consumer:
bin/kafka-console-consumer.sh --zookeeper node4.kg.cn:2181 --topic test
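To replay a topic from the earliest retained offset, the console consumer takes --from-beginning:
bin/kafka-console-consumer.sh --zookeeper node4.kg.cn:2181 --topic test --from-beginning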

------------------2018-03-23 15:43:52
./kafka-reassign-partitions.sh --zookeeper node4.kg.cn:2181 --reassignment-json-file increase-replication-factor.json --execute
The contents of increase-replication-factor.json are given below (a --verify example follows the JSON blocks).
Check the topic's partition/replica layout (note: with a replication factor of 3, one replica is the leader itself and the other two are copies of it):
./kafka-topics.sh --describe --zookeeper node4.kg.cn:2181 --topic RADAR_MULTI
multi:

{"version":1,
"partitions":[
{"topic":"RADAR_MULTI","partition":0,"replicas":[1001,1003]},
{"topic":"RADAR_MULTI","partition":1,"replicas":[1001,1002]},
{"topic":"RADAR_MULTI","partition":2,"replicas":[1002,1003]}
]
}

atc:

{"version":1,
"partitions":[
{"topic":"PLAN_IFPL","partition":0,"replicas":[1001,1002,1003]},
{"topic":"PLAN_IFPL","partition":1,"replicas":[1001,1002,1003]},
{"topic":"PLAN_IFPL","partition":2,"replicas":[1001,1002,1003]}
]
}

aftn:

{"version":1,
"partitions":[
{"topic":"PLAN_QDPLANSTATUS","partition":0,"replicas":[1001,1002,1003]},
{"topic":"PLAN_QDPLANSTATUS","partition":1,"replicas":[1001,1002,1003]},
{"topic":"PLAN_QDPLANSTATUS","partition":2,"replicas":[1001,1002,1003]}
]
}

strip:

{"version":1,
"partitions":[
{"topic":"PLAN_STRIP","partition":0,"replicas":[1001,1002,1003]},
{"topic":"PLAN_STRIP","partition":1,"replicas":[1001,1002,1003]},
{"topic":"PLAN_STRIP","partition":2,"replicas":[1001,1002,1003]}
]
}

cdm:

{"version":1,
"partitions":[
{"topic":"PLAN_CDM","partition":0,"replicas":[1001,1002,1003]},
{"topic":"PLAN_CDM","partition":1,"replicas":[1001,1002,1003]},
{"topic":"PLAN_CDM","partition":2,"replicas":[1001,1002,1003]}
]
}

tjw:

{"version":1,
"partitions":[
{"topic":"PLAN_TJW","partition":0,"replicas":[1001,1002,1003]},
{"topic":"PLAN_TJW","partition":1,"replicas":[1001,1002,1003]},
{"topic":"PLAN_TJW","partition":2,"replicas":[1001,1002,1003]}
]
}
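After --execute completes, the same tool can check reassignment progress with --verify and the same JSON file:
./kafka-reassign-partitions.sh --zookeeper node4.kg.cn:2181 --reassignment-json-file increase-replication-factor.json --verify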

------------------Upload a jar to the Storm cluster------------------------
Log in to 192.168.0.189
cd /opt/software/mh/

storm jar xxxxxx.jar <full class name of main> <topology name>
e.g.: storm jar PlanMessageATC.jar storm.PlanMessageACTMain ACT and press Enter

Command to start the Storm log daemon:
storm logviewer
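Related Storm CLI commands for managing the topologies submitted below (ACT is the topology name from the example above; -w is the number of seconds to wait before killing):
storm list
storm kill ACT -w 30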

====================Flight plan messages
storm jar storm.jar storm.plan.PlanMessageAFTNMain PLANAFTN
storm jar storm.jar storm.plan.PlanMessageATCMain PLANATC
storm jar storm.jar storm.plan.PlanMessageCDMMain PLANCDM
storm jar storm.jar storm.plan.PlanMessageSTRIPMain PLANSTRIP
storm jar storm.jar storm.plan.PlanMessageTJWMain PLANTJW

====================Radar data
storm jar storm.jar storm.radar.MultiRadarDataMain RADARMulti
storm jar storm.jar storm.radar.ACRadarDataMain RADARAC
storm jar storm.jar storm.radar.ADSBRadarDataMain RADARADSB
storm jar storm.jar storm.radar.AleniaRadarDataMain RADARAlenia
storm jar storm.jar storm.radar.CFLRadarDataMain RADARCFL
storm jar storm.jar storm.radar.SRadarDataMain RADARS

====================Workload data
storm jar storm.jar storm.CallSaturationMain CallSaturation
storm jar storm.jar storm.FlightsMain Flights
storm jar storm.jar storm.PersonnelDutyMain PersonnelDuty

====================Alert data
storm jar storm.jar storm.warn.WarningFlightMain WarningFlight
storm jar storm.jar storm.warn.WarningSimilarMain WarningSimilar
storm jar storm.jar storm.warn.WarningThirdMain WarningThird

====================Duty data, sector open/close data, allocation (TP) data
storm jar storm.jar storm.duty.ATCDutyMain DutyATC
storm jar storm.jar storm.duty.TSSDutyMain DutyTSS
storm jar storm.jar storm.duty.QXDutyMain DutyQX
storm jar storm.jar storm.sector.SectorInfoMain SectorOpenClose
storm jar storm.jar storm.TPMain TP

====================Operational hint messages
storm jar storm.jar storm.hint.ATCHintMain HintATC
storm jar storm.jar storm.hint.TSSHintMain HintTSS
storm jar storm.jar storm.hint.QXHintMain HintQX

====================Equipment data
storm jar storm.jar storm.device.ATCDeviceMain DeviceATC
storm jar storm.jar storm.device.VHFDeviceMain DeviceVHF
storm jar storm.jar storm.device.UPSDeviceMain DeviceUPS

====================Weather data
storm jar storm.jar storm.meteor.ALARMMain WeatherALARM
storm jar storm.jar storm.meteor.MESSMain WeatherMESS
storm jar storm.jar storm.meteor.MeteorMain WeatherMeteor

-------------------HBASE commands------------------------------------------
Table management
1) List tables
hbase(main)> list

2) Create a table

Syntax: create '<table name>', {NAME => '<column family>', VERSIONS => <versions>}

Example: create table t1 with two column families, f1 and f2, each keeping 2 versions:

hbase(main)> create 't1',{NAME => 'f1', VERSIONS => 2},{NAME => 'f2', VERSIONS => 2}

3) Delete a table
Two steps: first disable, then drop.
Example: delete table t1

hbase(main)> disable 't1'
hbase(main)> drop 't1'

4) View a table's structure

Syntax: describe '<table name>'

Example: view the structure of table t1

hbase(main)> describe 't1'

5) Alter a table's structure
The table must be disabled first.

Syntax: alter 't1', {NAME => 'f1'}, {NAME => 'f2', METHOD => 'delete'}

Example: set the TTL of table test1's column families to 180 days (15552000 seconds):

hbase(main)> disable 'test1'
hbase(main)> alter 'test1',{NAME=>'body',TTL=>'15552000'},{NAME=>'meta', TTL=>'15552000'}
hbase(main)> enable 'test1'

scan 'Kg_MultiRadarData', {ROWPREFIXFILTER => 'MULTI',LIMIT=>1} ------> prefix scan
scan 'Kg_STRIP', {ROWPREFIXFILTER => 'CBJ5215ZSQDZSHC20170714201707140100',LIMIT=>10}
scan 'Kg_ATC', {ROWPREFIXFILTER => 'CES561QZSPDZYHB20170817201708170100_20170817021801',LIMIT=>100}

scan 'Kg_PlanData', {ROWPREFIXFILTER => 'CDG4697ZSQDZLXY20180117201801162255',LIMIT=>100}


Enter the HBase shell
hbase shell
Delete all data in an HBase table:

truncate 'Kg_MultiRadarData'

----------------------------Connect to Redis--------------------------------
redis-cli -h 192.168.0.192

HBase shell:
truncate '<table name>'
scan 'Kg_FlightState',{FILTER=>"PrefixFilter('CES2463ZSQDZYTN20170718201707180225')"}

Storm logs

/var/log/storm/workers-artifacts

----------------------------Redis operations-------------------------------------
1. Stop
redis-cli -h 192.168.0.191 -p 7000 shutdown
redis-cli -h 192.168.0.191 -p 7001 shutdown
redis-cli -h 192.168.0.191 -p 7002 shutdown

redis-cli -h 192.168.0.192 -p 7003 shutdown
redis-cli -h 192.168.0.192 -p 7004 shutdown
redis-cli -h 192.168.0.190 -p 6379 shutdown

2. Restart
redis-server ../redis_cluster/7000/redis.conf
redis-server ../redis_cluster/7001/redis.conf
redis-server ../redis_cluster/6379/redis.conf

redis-server ../redis_cluster/7003/redis.conf
redis-server ../redis_cluster/7004/redis.conf
redis-server ../redis_cluster/7005/redis.conf

Cluster connection:
redis-cli -h 192.168.0.191 -c -p 7000
Single-node connection:
redis-cli -h 192.168.0.191 -p 7000
redis-cli -h 192.168.0.191 -p 7001
redis-cli -h 192.168.0.191 -p 7002
redis-cli -h 192.168.0.192 -p 7003
redis-cli -h 192.168.0.192 -p 7004
redis-cli -h 192.168.0.192 -p 7005

redis-cli -h 172.168.100.13 -p 6380
Flush all data:
flushall
List all keys:
keys *
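keys * blocks the server while it walks the entire keyspace; on a large instance the cursor-based SCAN command is safer (the pattern here is illustrative):
SCAN 0 MATCH RADAR* COUNT 100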

Flush Redis
cd into the redis directory, then:
redis-cli
flushall
------------------------------Start/stop an Elasticsearch cluster------------------------------------------

If an ES cluster has many nodes, restarting, stopping, or starting them one at a time is tedious. The script-based approach below lets you operate the whole cluster from a single node:

1. Configure passwordless SSH login between the nodes
(any standard SSH key setup guide covers the steps).

2. Create the following scripts:

Cluster start
In the elasticsearch install directory, create elasticstart.sh with this content:
#!/bin/bash

ssh 100.100.37.26 /home/elasticsearch/bin/service/elasticsearch start
ssh 100.100.37.27 /home/elasticsearch/bin/service/elasticsearch start
ssh 100.100.37.28 /home/elasticsearch/bin/service/elasticsearch start
ssh 100.100.37.29 /home/elasticsearch/bin/service/elasticsearch start
ssh 100.100.37.30 /home/elasticsearch/bin/service/elasticsearch start
ssh 100.100.37.31 /home/elasticsearch/bin/service/elasticsearch start
ssh 100.100.37.32 /home/elasticsearch/bin/service/elasticsearch start
ssh 100.100.37.33 /home/elasticsearch/bin/service/elasticsearch start

Notes:
ssh 100.100.37.26 logs in to that server; /home/elasticsearch/bin/service/elasticsearch start starts the ES service on it. Add one line per node in the cluster.

After creating the script, cd to its directory and run elasticstart.sh to start all nodes.
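For completeness, making the script executable and running it is plain shell:
chmod +x elasticstart.sh
./elasticstart.sh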

Cluster restart
Same steps as starting; the restart script's content is:
#!/bin/bash
ssh 100.100.37.26 /home/elasticsearch/bin/service/elasticsearch restart
ssh 100.100.37.27 /home/elasticsearch/bin/service/elasticsearch restart
ssh 100.100.37.28 /home/elasticsearch/bin/service/elasticsearch restart
ssh 100.100.37.29 /home/elasticsearch/bin/service/elasticsearch restart
ssh 100.100.37.30 /home/elasticsearch/bin/service/elasticsearch restart
ssh 100.100.37.31 /home/elasticsearch/bin/service/elasticsearch restart
ssh 100.100.37.32 /home/elasticsearch/bin/service/elasticsearch restart
ssh 100.100.37.33 /home/elasticsearch/bin/service/elasticsearch restart

Cluster shutdown
Same steps as starting; the script content is:
#!/bin/bash

ssh 100.100.37.26 /home/elasticsearch/bin/service/elasticsearch stop
ssh 100.100.37.27 /home/elasticsearch/bin/service/elasticsearch stop
ssh 100.100.37.28 /home/elasticsearch/bin/service/elasticsearch stop
ssh 100.100.37.29 /home/elasticsearch/bin/service/elasticsearch stop
ssh 100.100.37.30 /home/elasticsearch/bin/service/elasticsearch stop
ssh 100.100.37.31 /home/elasticsearch/bin/service/elasticsearch stop
ssh 100.100.37.32 /home/elasticsearch/bin/service/elasticsearch stop
ssh 100.100.37.33 /home/elasticsearch/bin/service/elasticsearch stop

-------------------------ElasticSearch-----------------------------------------------
Access URL:
http://192.168.0.191:9200/_plugin/head/
Cluster (Ambari) URL:
http://192.168.0.189:8080/#/main/services/HIVE/summary
Storm UI URL:
http://192.168.0.190:8744/index.html

List nodes:
curl '172.168.100.14:9200/_cat/nodes?v'
List indices:
curl '172.168.100.14:9200/_cat/indices?v'

-------------------------Run a jar in the background on Linux------------------------------------
Deployed on node: 192.168.0.193
Directory: cd /opt/software

1. Running a jar uses the same command as on Windows: java -jar xxxx.jar.
2. To run the jar in the background and redirect its stdout to the file consoleMsg.log:

nohup java -jar AFTNStatusTimer.jar &
nohup java -jar AFTNStatusTimer.jar >consoleMsg.log 2>&1 &

Here nohup keeps the program running after the SSH connection closes; the > redirection creates consoleMsg.log automatically if it does not exist.
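To follow the redirected log while the jar runs in the background:
tail -f consoleMsg.log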
3. To kill the running jar, first find its process:

ps aux|grep AFTNStatusTimer.jar

You will see the jar's process info (sample output, here for a different jar, getCimiss-surf.jar):
data 5796 0.0 0.0 112656 996 pts/1 S+ 09:11 0:00 grep --color=auto getCimiss-surf.jar
data 30768 6.3 0.4 35468508 576800 ? Sl 09:09 0:08 java -jar getCimiss-surf.jar

where 30768 is the jar's PID; kill it with:
kill -9 30768
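As a shortcut, pkill can match the full command line and skip the manual PID lookup:
pkill -9 -f AFTNStatusTimer.jar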
-----------------------API deployment-------------------------------------------------
Deployed on node: 192.168.0.193
Directory: cd /opt/api
java -jar kg_api.jar
java -jar loades.jar
nohup java -Dfile.encoding=utf-8 -jar kg_api.jar&
ps aux|grep kg_api.jar

----------------Storm UI log viewing------logviewer (frequently used!)----------2017-09-21 14:14:52------------

Start the logviewer daemon. Syntax:
storm logviewer
Note: logviewer provides a web interface for viewing Storm log files. This command should be run under a supervisor such as daemontools or monit.

-------------------------------Ambari monitoring issues (heartbeat check! frequently used!)--------2017-09-21 14:14:52------------

On the Ambari host (172.168.100.18), run the following command:
ambari-agent restart
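If the heartbeat still does not recover, checking the agent status and restarting the server side (on the Ambari server host) are standard next steps:
ambari-agent status
ambari-server restart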

--------------------------------Add a field to an ES mapping------------------------------------------------------------------
PUT /my_index/_mapping/my_type
{
"my_type": {
"properties": {
"english_title": {
"type": "string",
"analyzer": "english"
}
}
}
}
Example:
http://172.168.100.14:9200/es_mixdata/_mapping/esradarmixhistory/
{
"esradarmixhistory": {
"properties": {
"PLAN_SECTOR_NAME": {
"type": "string",
"store": "yes",
"index": "not_analyzed"
}
}
}
}

http://172.168.100.14:9200/es_atcplandata/_mapping/esatcplanhistory/
{
"esatcplanhistory": {
"properties": {
"PLAN_SECTOR_NAME": {
"type": "string",
"store": "yes",
"index": "not_analyzed"
}
}
}
}
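The mapping bodies above are sent as PUT requests; a curl sketch for the first URL (matches the string/not_analyzed mapping syntax of the ES 1.x/2.x era used here):
curl -XPUT 'http://172.168.100.14:9200/es_mixdata/_mapping/esradarmixhistory' -d '{"esradarmixhistory":{"properties":{"PLAN_SECTOR_NAME":{"type":"string","store":"yes","index":"not_analyzed"}}}}'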

Create an ES alias
curl -XPOST 'http://192.168.0.191:9200/_aliases' -d '
{
"actions" : [
{ "add" : { "index" : "es_mixdata", "alias" : "es_mix" } }
]
}'

Delete an ES document by id (here, id 1)
curl -XDELETE 'http://192.168.0.191:9200/es_mixdata/esradarmixhistory/1'


redis-sentinel ../sentinel.conf --sentinel
redis-cli -h 172.168.100.13 -p 26380 info Sentinel
redis-cli -h 172.168.100.13 info Replication

Back up the Redis dump.rdb file:
SAVE
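SAVE blocks the server while the RDB file is written; on a live instance BGSAVE forks and saves in the background instead:
BGSAVE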

---------------------------------------------NTP-----------------------------------------------
service ntpd status

service ntpd stop

ntpdate 192.168.100.241

// disable start at boot
chkconfig ntpd off
---------------------------------HBase repair-------------------------------------------------
The newer hbck can repair a variety of problems. The repair options are:
(1) -fix: kept for backward compatibility; superseded by -fixAssignments
(2) -fixAssignments: repairs region assignment errors
(3) -fixMeta: repairs problems in the meta table, provided the region info on HDFS exists and is correct
(4) -fixHdfsHoles: repairs region holes (key ranges covered by no region)
(5) -fixHdfsOrphans: repairs orphan regions (regions on HDFS with no .regioninfo file)
(6) -fixHdfsOverlaps: repairs region overlaps (overlapping key ranges)
(7) -fixVersionFile: repairs a missing hbase.version file
(8) -maxMerge <n> (default 5): when overlapping regions must be merged, merge at most this many regions at once
(9) -sidelineBigOverlaps: when repairing overlaps, allow the regions that overlap the most others to be sidelined (after the repair, the sidelined data can be bulk-loaded back into the appropriate regions)
(10) -maxOverlapsToSideline <n> (default 2): when repairing overlaps, sideline at most this many regions per group
Because there are many options, two shorthand options exist (see the usage sketch after this list):
(11) -repair: equivalent to -fixAssignments -fixMeta -fixHdfsHoles -fixHdfsOrphans -fixHdfsOverlaps -fixVersionFile -sidelineBigOverlaps
(12) -repairHoles: equivalent to -fixAssignments -fixMeta -fixHdfsHoles -fixHdfsOrphans
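Typical usage is a plain consistency check first, then a repair (assuming the hbase binary is on the PATH):
hbase hbck
hbase hbck -repair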


Check the size of the current directory
du -sh ./

Stop the HBase service

List an HBase table's files on HDFS
hdfs dfs -ls /apps/hbase/data/data/default/Kg_SpeciRadarData

Find which HBase block is missing
hdfs fsck <path> -files -blocks
Example: hdfs fsck /apps/hbase/data/data/default/Kg_SpeciRadarData/eb623ad9816de20c4f2e0814095ed037/RadarHome/06e7f6af535545f6b788ec2e57ca8e0f -files -blocks

Delete the block
hdfs dfs -rm -f /apps/hbase/data/data/default/Kg_SpeciRadarData/eb623ad9816de20c4f2e0814095ed037/RadarHome/06e7f6af535545f6b788ec2e57ca8e0f

Start the HBase service

Time synchronization
1. Sync directly against a (Microsoft) time server
ntpdate time.windows.com
2. Write the time to the BIOS clock
clock -w
3. Add a cron job
crontab -e
0-59/10 * * * * (/usr/sbin/ntpdate time.windows.com;/sbin/clock -w)

--------------------------------------------------------HBase data import/export-------------------------------------------------------------

1) Import: ./hbase org.apache.hadoop.hbase.mapreduce.Driver import <table name> <data file location>
The data file location can be a local directory or an HDFS path.
For a local directory, specify it directly, optionally with the file:/// prefix.
For HDFS, the path must be fully qualified, e.g. hdfs://mymaster:9000/path

2) Export: ./hbase org.apache.hadoop.hbase.mapreduce.Driver export <table name> <data file location>
As above, the data file location can be a local directory or an HDFS path.
The Driver class also provides other tools, such as table-to-table copies and importing tsv files; run it without arguments to list them.
Example: ./hbase org.apache.hadoop.hbase.mapreduce.Driver export Kg_AlarmInfo /HBASE/Kg_AlarmInfo
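A matching import call, mirroring the export example above (restores that table from the HDFS path; the target table must already exist):
./hbase org.apache.hadoop.hbase.mapreduce.Driver import Kg_AlarmInfo /HBASE/Kg_AlarmInfo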
