Big Data Environment Setup -- Hadoop -- Ubuntu

I. Install the JDK

  • Disable the firewall: systemctl stop firewalld (on a stock Ubuntu install the firewall is ufw instead: ufw disable)

1. Set the hostnames (run the matching command on each node)

hostnamectl set-hostname master   # on the master node
hostnamectl set-hostname slave1   # on slave1
hostnamectl set-hostname slave2   # on slave2

2. Set up SSH keys for passwordless login

ssh-keygen -t rsa
ssh-copy-id master
ssh-copy-id slave1
ssh-copy-id slave2
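The three ssh-copy-id calls above generalize to a loop. Since actually running them needs the live cluster, this sketch only prints the commands (drop the echo to execute them for real):

```shell
# Dry-run sketch: print the key-distribution command for every node
# (hostnames taken from the steps above). Remove `echo` to run for real.
for host in master slave1 slave2; do
  echo "ssh-copy-id $host"
done
```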

3. Map hostnames to IP addresses

vi /etc/hosts
<master IP> master
<slave1 IP> slave1
<slave2 IP> slave2
scp /etc/hosts root@slave1:/etc/hosts
scp /etc/hosts root@slave2:/etc/hosts
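For concreteness, the three mapping lines might look like this; the 192.168.1.x addresses are placeholders, not from the original, so substitute your nodes' real IPs:

```shell
# Hypothetical /etc/hosts entries -- replace the placeholder IPs
# with the real addresses of master, slave1, and slave2.
cat <<'EOF'
192.168.1.10 master
192.168.1.11 slave1
192.168.1.12 slave2
EOF
```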

Then confirm the configuration on slave1 and slave2.

4. Extract and move

  • tar -zxvf /path/to/package/<package>.tar.gz -C /target/directory

5. Configure environment variables

cd /usr/local/src/jdk1.8.0_162

vi /root/.bash_profile

export JAVA_HOME=/usr/local/src/jdk1.8.0_162
export PATH=$PATH:$JAVA_HOME/bin
source /root/.bash_profile

6. Distribute to every node

scp -r /path root@slave1:/path
scp /root/.bash_profile root@slave1:/root
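The per-node scp calls can be looped over both slaves. This dry-run sketch prints the commands, assuming the JDK path configured in step 5 (drop the echo to execute them):

```shell
# Dry-run sketch: print the distribution commands for both slaves.
# The JDK path is the one configured in step 5; remove `echo` to run.
for host in slave1 slave2; do
  echo "scp -r /usr/local/src/jdk1.8.0_162 root@$host:/usr/local/src/"
  echo "scp /root/.bash_profile root@$host:/root"
done
```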

II. Install Hadoop

1. Extract and move

  • tar -zxvf /path/to/package/<package>.tar.gz -C /target/directory

2. Configure environment variables

cd /usr/local/src/hadoop-2.7.7/etc/hadoop
vi /root/.bash_profile

export HADOOP_HOME=/usr/local/src/hadoop-2.7.7
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source /root/.bash_profile

3. Configure the .sh and .xml files

i. Create the storage directories

cd /usr/local/src/hadoop-2.7.7
mkdir hdfs
cd hdfs
mkdir tmp
mkdir data
mkdir name
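The five commands above collapse into a single mkdir -p. The sketch below uses a scratch directory in place of /usr/local/src/hadoop-2.7.7 so it is self-contained:

```shell
# Create the hdfs/tmp, hdfs/data, hdfs/name layout in one command.
# A scratch directory stands in for /usr/local/src/hadoop-2.7.7 here.
base=$(mktemp -d)
mkdir -p "$base/hdfs/tmp" "$base/hdfs/data" "$base/hdfs/name"
ls "$base/hdfs"
```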

ii. Configure hadoop-env.sh

  • All of the configuration files that follow live in /usr/local/src/hadoop-2.7.7/etc/hadoop

cd /usr/local/src/hadoop-2.7.7/etc/hadoop
vi hadoop-env.sh
export JAVA_HOME=/usr/local/src/jdk1.8.0_162
export HADOOP_CONF_DIR=/usr/local/src/hadoop-2.7.7/etc/hadoop
source hadoop-env.sh

iii. Configure core-site.xml

<property>
  <name>hadoop.tmp.dir</name>
  <!-- path of the tmp directory created in step i -->
  <value>/usr/local/src/hadoop-2.7.7/hdfs/tmp</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>

iv. Configure hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/src/hadoop-2.7.7/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/src/hadoop-2.7.7/hdfs/data</value>
</property>

v. Configure yarn-site.xml

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <!-- the usual value for running MapReduce on YARN -->
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <!-- disable virtual-memory checks so containers are not killed -->
  <value>false</value>
</property>

vi. Configure mapred-site.xml

cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

vii. Configure the slaves file

slave1
slave2
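Equivalently, the slaves file can be written in one command; in this sketch a temp file stands in for the real path so the example is self-contained:

```shell
# Write the worker list in one command. A temp file stands in for
# /usr/local/src/hadoop-2.7.7/etc/hadoop/slaves in this sketch.
f=$(mktemp)
printf 'slave1\nslave2\n' > "$f"
cat "$f"
```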

4. Format the file system (run once, on the master only)

hdfs namenode -format

hadoop namenode -format is the older equivalent command; run one or the other, not both.

5. Start the cluster and check the processes

start-all.sh
jps
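With this configuration, jps should show roughly the following daemons per node (an illustrative listing, not captured from a real run; jps also prints a PID before each name):

```
# on master
NameNode
SecondaryNameNode
ResourceManager
Jps

# on slave1 / slave2
DataNode
NodeManager
Jps
```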

6. Check the cluster in a browser

HDFS (NameNode) status page, default address: http://<namenode IP>:50070
Secondary NameNode status page, default address: http://<namenode IP>:50090

Reposted from blog.csdn.net/m0_62491934/article/details/124340335