(1) Hadoop daemons
[hdfs] start script: start-dfs.sh
NameNode (NN)
DataNode (DN)
SecondaryNameNode (2NN)
[yarn] start script: start-yarn.sh
ResourceManager (RM)
NodeManager (NM)
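The daemon-to-script mapping above can be captured in a small lookup table, handy for ops scripts (a sketch only; the mapping comes from the notes above, and bash 4+ is assumed for associative arrays):

```shell
#!/usr/bin/env bash
# Map each start script to the daemons it launches (from the notes above).
declare -A DAEMONS=(
  ["start-dfs.sh"]="NameNode DataNode SecondaryNameNode"
  ["start-yarn.sh"]="ResourceManager NodeManager"
)

# Print the mapping (iteration order of an associative array is unspecified).
for script in "${!DAEMONS[@]}"; do
  echo "$script -> ${DAEMONS[$script]}"
done
```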
(2) Analysis of sbin/start-all.sh
1. Sources ${HADOOP_HOME}/libexec/hadoop-config.sh
2. Calls start-dfs.sh
3. Calls start-yarn.sh
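The three steps above can be sketched in runnable shell (a paraphrase of the control flow, not the verbatim script; the echo stubs stand in for the real sub-scripts, and the install path is an assumption):

```shell
#!/usr/bin/env bash
# Simplified sketch of sbin/start-all.sh's control flow (paraphrase).
# The stubs only echo, so the call order can be traced without a cluster.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop}   # assumed install location

hadoop_config() { echo "source ${HADOOP_HOME}/libexec/hadoop-config.sh"; }
start_dfs()     { echo "sbin/start-dfs.sh  # NN, DN, 2NN"; }
start_yarn()    { echo "sbin/start-yarn.sh # RM, NM"; }

hadoop_config   # step 1: load the common environment
start_dfs       # step 2: HDFS daemons
start_yarn      # step 3: YARN daemons
```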
(3) Analysis of sbin/start-dfs.sh
1. Sources ${HADOOP_HOME}/libexec/hadoop-config.sh
2. Resolves the NameNode host name(s)
3. Calls hadoop-daemons.sh to start the namenode:

"hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$NAMENODES" \
  --script "$bin/hdfs" start namenode $nameStartOpt

In summary it runs:
sbin/hadoop-daemons.sh --config .. --hostnames .. start namenode ...
sbin/hadoop-daemons.sh --config .. --hostnames .. start datanode ...
sbin/hadoop-daemons.sh --config .. --hostnames .. start secondarynamenode
sbin/hadoop-daemons.sh --config .. --hostnames .. start zkfc    // failover controller (HA)
(4) Analysis of sbin/start-yarn.sh
1. Sources ${HADOOP_HOME}/libexec/yarn-config.sh
2. Calls yarn-daemon.sh (local host) and yarn-daemons.sh (all slaves):

# start resourceManager
"$sbin"/yarn-daemon.sh --config .... start resourcemanager
# start nodeManager
"$sbin"/yarn-daemons.sh --config .... start nodemanager
# start proxyserver
# "$sbin"/yarn-daemon.sh --config .... start proxyserver
(5) Analysis of sbin/hadoop-daemons.sh
1. Sources ${HADOOP_HOME}/libexec/hadoop-config.sh
2. Reads the slaves file
3. Calls hadoop-daemon.sh on each slave host (via slaves.sh over ssh)
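The fan-out in steps 2–3 can be sketched as a runnable loop (echo stands in for the ssh that slaves.sh really performs; the file name and host names are illustrative assumptions):

```shell
#!/usr/bin/env bash
# Sketch of hadoop-daemons.sh's fan-out: read the slaves file and run
# hadoop-daemon.sh on each host. The real script delegates to slaves.sh,
# which uses ssh; echo is substituted here so the loop runs locally.
fan_out() {
  local slaves_file=$1; shift
  local host
  while read -r host; do
    echo "ssh $host hadoop-daemon.sh $*"   # real script: ssh "$host" ...
  done < "$slaves_file"
}

# Demo with a throwaway slaves file (host names are made up).
slaves=$(mktemp)
printf 's101\ns102\ns103\n' > "$slaves"
fan_out "$slaves" start datanode
rm -f "$slaves"
```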
(6) Analysis of sbin/hadoop-daemon.sh
1. Sources ${HADOOP_HOME}/libexec/hadoop-config.sh
2. Execs bin/hdfs ....
(7) Analysis of sbin/yarn-daemons.sh
1. Sources ${HADOOP_HOME}/libexec/yarn-config.sh
2. Runs yarn-daemon.sh on each slave, which execs bin/yarn
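The YARN side mirrors the HDFS call chain; a paraphrased, runnable sketch (the functions are stand-ins for the real scripts, and the host names are made up):

```shell
#!/usr/bin/env bash
# Paraphrased YARN call chain: yarn-daemons.sh loops over the slaves file
# and invokes yarn-daemon.sh per host, which ultimately execs bin/yarn.
yarn_daemon() {                 # stand-in for sbin/yarn-daemon.sh
  echo "bin/yarn $1"
}

yarn_daemons() {                # stand-in for sbin/yarn-daemons.sh
  local slaves_file=$1 daemon=$2 host
  while read -r host; do
    echo "on $host: $(yarn_daemon "$daemon")"   # real script: via ssh
  done < "$slaves_file"
}

# Demo with a throwaway slaves file.
slaves=$(mktemp)
printf 's101\ns102\n' > "$slaves"
yarn_daemons "$slaves" nodemanager
rm -f "$slaves"
```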
Once you understand these scripts, you can start or stop individual daemons at will, for example:
$ hadoop-daemon.sh start namenode            // start the namenode
$ hadoop-daemons.sh start datanode           // start datanodes on all slaves
$ hadoop-daemon.sh start secondarynamenode   // start the 2NN
$ hadoop-daemon.sh stop datanode             // stop a single datanode