Installing and Deploying the Big Data Ecosystem (Hadoop)

Preparing to install Hadoop (on every node)

1. Install JDK 1.8

2. Set up passwordless SSH from the master to the slaves: ssh-keygen, then ssh-copy-id <ip or hostname> (see the consolidated sketch after this list)

3. Disable the firewall: service iptables stop; to keep it off across reboots: chkconfig iptables off

4. Disable SELinux: vim /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled

5. Set the hostname: vim /etc/sysconfig/network

6. Map hostnames to IPs: vim /etc/hosts
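
A consolidated sketch of the preparation steps, assuming three CentOS/RHEL 6-style nodes named node01-node03 with node01 as the master; the IP addresses below are placeholders:

# on node01: generate a key pair and push it to every node (including itself)
ssh-keygen -t rsa
ssh-copy-id node01
ssh-copy-id node02
ssh-copy-id node03

# on every node: stop the firewall now and keep it off across reboots
service iptables stop
chkconfig iptables off

# on every node: disable SELinux (takes effect after a reboot)
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# /etc/hosts on every node (placeholder IPs)
192.168.1.101 node01
192.168.1.102 node02
192.168.1.103 node03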

Installing Hadoop

1. Upload the Hadoop tarball and extract it (a sketch follows)
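
A minimal sketch of this step; the tarball name hadoop-2.6.0-cdh5.14.0.tar.gz is assumed from the version used throughout this post, and /export/servers is the install root used below:

mkdir -p /export/servers
tar -zxvf hadoop-2.6.0-cdh5.14.0.tar.gz -C /export/servers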

2. Configure Hadoop's environment variables (a sketch of the profile script follows)
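
A minimal sketch of /etc/profile.d/hadoop.sh, the file distributed to the other nodes in step 7; the HADOOP_HOME path comes from this post, the PATH entries are the usual convention:

export HADOOP_HOME=/export/servers/hadoop-2.6.0-cdh5.14.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Reload it with source /etc/profile.d/hadoop.sh or open a new shell.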

3. Check which native libraries are supported

From the bin directory of the Hadoop installation, run:

./hadoop checknative

If checknative reports openssl: false, install the OpenSSL development package and check again:

yum -y install openssl-devel

4. Edit Hadoop's core configuration files

From the etc/hadoop directory under the installation, run the following commands:

vim core-site.xml

<!-- Default filesystem URI: the NameNode RPC address -->
<property>
	<name>fs.defaultFS</name>
	<value>hdfs://node01:8020</value>
</property>
<!-- Base directory for Hadoop's local temporary files -->
<property>
	<name>hadoop.tmp.dir</name>
	<value>/export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas</value>
</property>
<!-- I/O buffer size; in production, tune according to server capacity -->
<property>
	<name>io.file.buffer.size</name>
	<value>4096</value>
</property>
<!-- Enable the HDFS trash: deleted data can be recovered from it within this interval, given in minutes (10080 = 7 days) -->
<property>
	<name>fs.trash.interval</name>
	<value>10080</value>
</property>

vim hdfs-site.xml

<!-- Path where the NameNode stores metadata; in production, decide on the disk mount points first, then separate multiple directories with commas -->
<!--   Dynamic commissioning/decommissioning of cluster nodes
<property>
	<name>dfs.hosts</name>
	<value>/export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/accept_host</value>
</property>
<property>
	<name>dfs.hosts.exclude</name>
	<value>/export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/deny_host</value>
</property>
 -->
<!-- SecondaryNameNode HTTP address -->
<property>
	<name>dfs.namenode.secondary.http-address</name>
	<value>node01:50090</value>
</property>
<!-- NameNode web UI address -->
<property>
	<name>dfs.namenode.http-address</name>
	<value>node01:50070</value>
</property>
<property>
	<name>dfs.namenode.name.dir</name>
	<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas</value>
</property>
<!-- Where DataNodes store block data; in production, decide on the disk mount points first, then separate multiple directories with commas -->
<property>
	<name>dfs.datanode.data.dir</name>
	<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas</value>
</property>
<!-- NameNode edit log directory -->
<property>
	<name>dfs.namenode.edits.dir</name>
	<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits</value>
</property>
<!-- SecondaryNameNode checkpoint (fsimage) directory -->
<property>
	<name>dfs.namenode.checkpoint.dir</name>
	<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name</value>
</property>
<!-- SecondaryNameNode checkpoint edits directory -->
<property>
	<name>dfs.namenode.checkpoint.edits.dir</name>
	<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits</value>
</property>
<!-- Number of block replicas -->
<property>
	<name>dfs.replication</name>
	<value>2</value>
</property>
<!-- Disable HDFS permission checking (convenient on a test cluster) -->
<property>
	<name>dfs.permissions</name>
	<value>false</value>
</property>
<!-- Block size: 134217728 bytes = 128 MB -->
<property>
	<name>dfs.blocksize</name>
	<value>134217728</value>
</property>

First copy the template, then edit:

cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml

<!-- Run MapReduce on YARN -->
<property>
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
</property>
<!-- Let small jobs run in a single JVM (uber mode) -->
<property>
	<name>mapreduce.job.ubertask.enable</name>
	<value>true</value>
</property>
<!-- JobHistory server RPC address -->
<property>
	<name>mapreduce.jobhistory.address</name>
	<value>node01:10020</value>
</property>
<!-- JobHistory server web UI -->
<property>
	<name>mapreduce.jobhistory.webapp.address</name>
	<value>node01:19888</value>
</property>

vim yarn-site.xml

<!-- Host running the ResourceManager -->
<property>
	<name>yarn.resourcemanager.hostname</name>
	<value>node01</value>
</property>
<!-- Auxiliary service required for the MapReduce shuffle -->
<property>
	<name>yarn.nodemanager.aux-services</name>
	<value>mapreduce_shuffle</value>
</property>

On the first machine (node01), run:
cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vim hadoop-env.sh

export JAVA_HOME=/export/servers/jdk1.8.0_141

5. List the cluster's worker nodes

Edit the etc/hadoop/slaves file under the installation directory:
node01
node02
node03

On node01, create the following directories:
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas 
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits

6. Distribute to the other nodes. On node01, run the following from /export/servers, so that $PWD resolves to the same path on the target machines:

scp -r hadoop-2.6.0-cdh5.14.0 node02:$PWD

scp -r hadoop-2.6.0-cdh5.14.0 node03:$PWD

7. Configure Hadoop's environment variables on the other nodes

scp /etc/profile.d/hadoop.sh node02:/etc/profile.d/

scp /etc/profile.d/hadoop.sh node03:/etc/profile.d/
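
To confirm the variables took effect, open a new shell on each node and run a quick sanity check (not part of the original post):

hadoop version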

8. Format the cluster

From the bin directory of the installation (or anywhere, once the environment variables are set), run the following on node01 to format the NameNode. Do this only once: re-formatting wipes existing HDFS metadata.

hdfs namenode -format

9. Start the cluster

From the sbin directory of the installation, run the following to start both HDFS and YARN:

./start-all.sh
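
To verify the cluster came up, run jps on each node: node01 should show NameNode, SecondaryNameNode and ResourceManager, and every node listed in slaves should additionally show DataNode and NodeManager. The web UIs are http://node01:50070 for HDFS (configured above) and http://node01:8088 for YARN (the default port, not set explicitly here). Note that start-all.sh does not start the JobHistory server configured in mapred-site.xml; start it separately from sbin:

./mr-jobhistory-daemon.sh start historyserver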

Reposted from blog.csdn.net/weixin_46015057/article/details/106223376