Pitfalls I hit with my Hadoop cluster start/stop script

1. DOS and Unix line-ending mismatch

This happens when the script was written on a remote (Windows) client or pasted in from elsewhere: the pasted text uses DOS line endings (CRLF) rather than Unix (LF). Fix it by switching the file format to unix in vi:

# press Esc, then type:
:set fileformat=unix
:wq
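If you would rather not open every file in vi, a one-line sed (or the dos2unix tool, if installed) does the same conversion. A runnable sketch; the path /tmp/dosfile.sh is just for illustration:

```shell
# Create a sample script with DOS (CRLF) line endings, as pasting from
# a Windows client typically produces:
printf 'echo hello\r\necho world\r\n' > /tmp/dosfile.sh

cr="$(printf '\r')"

# Detect the problem without opening vi (file(1) would also report
# "CRLF line terminators" here):
grep -q "$cr" /tmp/dosfile.sh && echo "DOS line endings detected"

# Convert in place -- same effect as :set fileformat=unix in vi:
sed -i 's/\r$//' /tmp/dosfile.sh

grep -q "$cr" /tmp/dosfile.sh || echo "now plain Unix (LF) endings"
```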

2. zookeeper or journalnode start from the script with no errors, everything looks normal, but jps shows they never came up

zookeeper and journalnode are a bit special here. The underlying cause is that a command run via ssh executes in a non-interactive, non-login shell, which never reads /etc/profile, so environment variables exported there (such as JAVA_HOME) are missing. The fix is to prefix each remote command with source /etc/profile.

For example:

#!/bin/bash

case $1 in
"start"){
	echo "------------------------$1 zookeeper------------------------"
	for i in hadoop104 hadoop105 hadoop106;
	do
		echo "------------------------$1 $i zookeeper------------------------"
		ssh $i "source /etc/profile;/opt/modules/zookeeper-3.4.5-cdh5.3.6/bin/zkServer.sh start"
	done
};;
"stop"){
	echo "------------------------$1 zookeeper------------------------"
	for i in hadoop104 hadoop105 hadoop106;
	do
		echo "------------------------$1 $i zookeeper------------------------"
		ssh $i "source /etc/profile;/opt/modules/zookeeper-3.4.5-cdh5.3.6/bin/zkServer.sh stop"
	done
};;
esac
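The effect of the missing profile can be reproduced locally without a real ssh session by running a shell with an empty environment (a simulation, not the actual remote invocation):

```shell
# ssh <host> "<cmd>" runs <cmd> in a non-interactive, non-login shell,
# so /etc/profile is never read and JAVA_HOME (exported there) is empty.
# Simulated with a wiped environment instead of ssh:
/usr/bin/env -i /bin/bash -c 'echo "JAVA_HOME=${JAVA_HOME:-unset}"'
# prints: JAVA_HOME=unset

# Prefixing the remote command restores it, which is exactly what the
# script above does:
#   ssh hadoop104 "source /etc/profile; .../zkServer.sh start"
```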

3. HDFS won't start

The NameNode may not have been formatted yet.

Formatting itself can fail with: java.io.IOException: Cannot create directory /opt/module/hadoop-2.7.2/data/tmp/dfs/name/current

That is, the directory cannot be created because the current user lacks write permission on it. Either fix the ownership of the data directory for your normal user, or switch to the root user and run bin/hdfs namenode -format.
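Before formatting as root, it is worth checking who actually owns the directory; formatting as root leaves root-owned metadata that a non-root daemon user cannot write later. A quick check (the path is taken from the error message above; /tmp stands in so the snippet runs anywhere):

```shell
# On the cluster, substitute dir=/opt/module/hadoop-2.7.2/data
dir=/tmp

# Who owns it, and with what mode?
stat -c 'owner=%U mode=%a' "$dir"

# Can the current user write into it?
[ -w "$dir" ] && echo "writable" || echo "not writable -- fix ownership first"

# If not writable, the usual fix is to hand the tree to your user
# (run once, as root or via sudo) and then format as that user:
#   sudo chown -R "$(id -un)":"$(id -gn)" /opt/module/hadoop-2.7.2/data
#   bin/hdfs namenode -format
```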

STARTUP_MSG:   build = http://github.com/cloudera/hadoop -r 6743ef286bfdd317b600adbdb154f982cf2fac7a; compiled by 'jenkins' on 2015-07-28T22:14Z
STARTUP_MSG:   java = 1.8.0_121
************************************************************/
20/10/28 18:25:22 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
20/10/28 18:25:22 INFO namenode.NameNode: createNameNode [-format]
20/10/28 18:25:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-50438531-b2f8-42c2-92dd-f7abb9ee1f4f
20/10/28 18:25:22 INFO namenode.FSNamesystem: No KeyProvider found.
20/10/28 18:25:22 INFO namenode.FSNamesystem: fsLock is fair:true
20/10/28 18:25:22 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
20/10/28 18:25:22 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
20/10/28 18:25:22 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20/10/28 18:25:22 INFO blockmanagement.BlockManager: The block deletion will start around 2020 Oct 28 18:25:22
20/10/28 18:25:22 INFO util.GSet: Computing capacity for map BlocksMap
20/10/28 18:25:22 INFO util.GSet: VM type       = 64-bit
20/10/28 18:25:22 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
20/10/28 18:25:22 INFO util.GSet: capacity      = 2^21 = 2097152 entries
20/10/28 18:25:22 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
20/10/28 18:25:22 INFO blockmanagement.BlockManager: defaultReplication         = 3
20/10/28 18:25:22 INFO blockmanagement.BlockManager: maxReplication             = 512
20/10/28 18:25:22 INFO blockmanagement.BlockManager: minReplication             = 1
20/10/28 18:25:22 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
20/10/28 18:25:22 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
20/10/28 18:25:22 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
20/10/28 18:25:22 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
20/10/28 18:25:22 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
20/10/28 18:25:22 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
20/10/28 18:25:22 INFO namenode.FSNamesystem: supergroup          = supergroup
20/10/28 18:25:22 INFO namenode.FSNamesystem: isPermissionEnabled = true
20/10/28 18:25:22 INFO namenode.FSNamesystem: HA Enabled: false
20/10/28 18:25:22 INFO namenode.FSNamesystem: Append Enabled: true
20/10/28 18:25:22 INFO util.GSet: Computing capacity for map INodeMap
20/10/28 18:25:22 INFO util.GSet: VM type       = 64-bit
20/10/28 18:25:22 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
20/10/28 18:25:22 INFO util.GSet: capacity      = 2^20 = 1048576 entries
20/10/28 18:25:22 INFO namenode.NameNode: Caching file names occuring more than 10 times
20/10/28 18:25:22 INFO util.GSet: Computing capacity for map cachedBlocks
20/10/28 18:25:22 INFO util.GSet: VM type       = 64-bit
20/10/28 18:25:22 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
20/10/28 18:25:22 INFO util.GSet: capacity      = 2^18 = 262144 entries
20/10/28 18:25:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
20/10/28 18:25:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
20/10/28 18:25:22 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
20/10/28 18:25:22 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
20/10/28 18:25:22 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20/10/28 18:25:22 INFO util.GSet: Computing capacity for map NameNodeRetryCache
20/10/28 18:25:22 INFO util.GSet: VM type       = 64-bit
20/10/28 18:25:22 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
20/10/28 18:25:22 INFO util.GSet: capacity      = 2^15 = 32768 entries
20/10/28 18:25:22 INFO namenode.NNConf: ACLs enabled? false
20/10/28 18:25:22 INFO namenode.NNConf: XAttrs enabled? true
20/10/28 18:25:22 INFO namenode.NNConf: Maximum size of an xattr: 16384
20/10/28 18:25:22 INFO namenode.FSImage: Allocated new BlockPoolId: BP-924336988-192.168.1.104-1603880722785
20/10/28 18:25:22 INFO common.Storage: Storage directory /opt/module/hadoop-2.7.2/data/tmp/dfs/name has been successfully formatted.
20/10/28 18:25:22 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/10/28 18:25:22 INFO util.ExitUtil: Exiting with status 0
20/10/28 18:25:22 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop104/192.168.1.104
************************************************************/

The format succeeded — the key lines in the log are "has been successfully formatted" and "Exiting with status 0".

4. HDFS won't start: DataNodes missing after re-formatting

There are two cases: either the NameNode was never formatted (see section 3), or it has been formatted too many times.

After running hadoop namenode -format again, the DataNodes no longer start with the rest of the cluster.

The cause is repeated formatting: each format assigns a new clusterID to the NameNode, which no longer matches the clusterID stored in the DataNodes' data directories. Clear out everything under the data directories on every node in the cluster, then format once more.
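You can confirm the mismatch before wiping anything by comparing the clusterID lines in the NameNode's and a DataNode's VERSION files (on a real cluster these live under .../dfs/name/current/VERSION and .../dfs/data/current/VERSION). Simulated here with fake VERSION files so the check is runnable as-is:

```shell
# Fake VERSION files standing in for the real NameNode/DataNode metadata:
mkdir -p /tmp/dfs/name/current /tmp/dfs/data/current
echo 'clusterID=CID-new-format' > /tmp/dfs/name/current/VERSION
echo 'clusterID=CID-old-format' > /tmp/dfs/data/current/VERSION

nn=$(grep '^clusterID=' /tmp/dfs/name/current/VERSION)
dn=$(grep '^clusterID=' /tmp/dfs/data/current/VERSION)

if [ "$nn" != "$dn" ]; then
    echo "clusterID mismatch -- clear the data dirs on every node, then re-format"
fi
```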

5. Finally, here is my full cluster start/stop script — I hope it helps.

#!/bin/bash

case $1 in
"start"){
	echo "**********$1 all zkServer**********"
	ssh hadoop104 "/opt/modules/zookeeper-3.4.5-cdh5.3.6/bin/allzkServer.sh start";
	
	echo "**********$1 HDFS**********"
	echo "----------$1 hadoop104 namenode----------"
	ssh hadoop104 "/opt/modules/hadoop-2.7.2/sbin/hadoop-daemon.sh start namenode"
	
	echo "----------$1 hadoop106 Secondarynamenode----------"
	ssh hadoop106 "/opt/modules/hadoop-2.7.2/sbin/hadoop-daemon.sh start secondarynamenode"
	
	echo "**********$1 datanode**********"
	for i in hadoop104 hadoop105 hadoop106;
	do
		echo "----------$1 $i datanode----------"
		ssh $i "/opt/modules/hadoop-2.7.2/sbin/hadoop-daemon.sh start datanode"
	done
	
	echo "**********$1 YARN**********"
	echo "----------$1 hadoop105 resourcemanager----------"
	ssh hadoop105 "/opt/modules/hadoop-2.7.2/sbin/yarn-daemon.sh start resourcemanager"
	
	echo "**********$1 nodemanager**********"
	for i in hadoop104 hadoop105 hadoop106;
	do
		echo "----------$1 $i nodemanager----------"
		ssh $i "/opt/modules/hadoop-2.7.2/sbin/yarn-daemon.sh start nodemanager"
	done
	
	echo "**********$1 historyserver**********"
	ssh hadoop106 "/opt/modules/hadoop-2.7.2/sbin/mr-jobhistory-daemon.sh start historyserver"
};;
# -------------------------------------------------------------------------------------------------------------------
# -------------------------------------------------------------------------------------------------------------------
"stop"){
	echo "**********$1 historyserver**********"
	ssh hadoop106 "/opt/modules/hadoop-2.7.2/sbin/mr-jobhistory-daemon.sh stop historyserver"
	
	echo "**********$1 YARN**********"
	echo "----------$1 hadoop105 resourcemanager----------"
	ssh hadoop105 "/opt/modules/hadoop-2.7.2/sbin/yarn-daemon.sh stop resourcemanager"
	
	echo "**********$1 nodemanager**********"
	for i in hadoop104 hadoop105 hadoop106;
	do
		echo "----------$1 $i nodemanager----------"
		ssh $i "/opt/modules/hadoop-2.7.2/sbin/yarn-daemon.sh stop nodemanager"
	done
	echo "**********$1 HDFS**********"
	echo "----------$1 hadoop104 namenode----------"
	ssh hadoop104 "/opt/modules/hadoop-2.7.2/sbin/hadoop-daemon.sh stop namenode"
	
	echo "----------$1 hadoop106 Secondarynamenode----------"
	ssh hadoop106 "/opt/modules/hadoop-2.7.2/sbin/hadoop-daemon.sh stop secondarynamenode"
	
	echo "**********$1 datanode**********"
	for i in hadoop104 hadoop105 hadoop106;
	do
		echo "----------$1 $i datanode----------"
		ssh $i "/opt/modules/hadoop-2.7.2/sbin/hadoop-daemon.sh stop datanode"
	done
	
	echo "**********$1 all zkServer**********"
	ssh hadoop104 "/opt/modules/zookeeper-3.4.5-cdh5.3.6/bin/allzkServer.sh stop";
};;
esac
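One small improvement worth considering: called with no argument or with a typo, the case statement above matches nothing and the script exits silently. A usage guard makes misuse visible; sketched below with a stand-in function where a plain echo replaces the real ssh blocks:

```shell
usage() { echo "Usage: $0 {start|stop}" >&2; }

# Stand-in for the script's case dispatch; "dispatching ..." replaces
# the actual ssh commands.
cluster() {
	case "$1" in
	"start"|"stop") echo "dispatching $1" ;;
	*) usage; return 1 ;;
	esac
}

cluster start                          # prints: dispatching start
cluster restart || echo "rejected unknown action"
```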


Reposted from blog.csdn.net/tyh1579152915/article/details/109334218