Installing Pseudo-Distributed Hadoop on Ubuntu

Hadoop documentation: https://hadoop.apache.org/docs/r3.2.2/

1. Environment Setup

1.1 Java environment

First extract the JDK to the following directory:

/usr/lib/jvm/jdk-15.0.2

Then go to your home directory and open .bashrc:

cd ~

vim .bashrc

Paste the following configuration anywhere in the file:

# Java environment
export JAVA_HOME=/usr/lib/jvm/jdk-15.0.2
export PATH=${JAVA_HOME}/bin:$PATH
# Hadoop environment
export HADOOP_HOME=/usr/local/hadoop-3.2.2
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export CLASSPATH=$($HADOOP_HOME/bin/hadoop classpath):$CLASSPATH
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
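After reloading the file with `source ~/.bashrc`, the variables can be sanity-checked. The sketch below uses the JDK and Hadoop paths assumed throughout this guide; adjust them if you installed elsewhere:

```shell
# Paths assumed by this guide; change them if you installed elsewhere.
export JAVA_HOME=/usr/lib/jvm/jdk-15.0.2
export HADOOP_HOME=/usr/local/hadoop-3.2.2
export PATH=${JAVA_HOME}/bin:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# Confirm the JDK bin directory made it onto PATH.
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "JAVA_HOME on PATH" ;;
  *)                    echo "JAVA_HOME missing from PATH" ;;
esac
```

If `java -version` and `hadoop version` both resolve after this, the environment is set up correctly.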

1.2 Passwordless SSH

sudo apt install ssh
ssh-keygen

Check the generated files and authorize the public key:

ls .ssh
cat .ssh/id_rsa.pub >> .ssh/authorized_keys

Test the login (for a pseudo-distributed setup this is just localhost):

ssh localhost
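The key generation above prompts interactively; the sketch below does the same steps non-interactively (it assumes the openssh-client package is installed, and uses an empty passphrase, which is what makes the login passwordless):

```shell
# Create ~/.ssh with the permissions sshd expects.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate an RSA key with an empty passphrase (-N "") unless one already exists.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Authorize our own public key so "ssh localhost" needs no password.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

If `ssh localhost` still asks for a password, check that the sshd service is running (`sudo service ssh start`).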

2. Hadoop Configuration

2.1 Permissions

Extract Hadoop into the /usr/local/ directory,

and give your user ownership of /usr/local/hadoop-3.2.2 (charles is the example user here):

sudo chown -R charles /usr/local/hadoop-3.2.2

2.2 Edit the configuration files

2.2.1 hadoop-env.sh

vim /usr/local/hadoop-3.2.2/etc/hadoop/hadoop-env.sh

Around line 54, set:

export JAVA_HOME=/usr/lib/jvm/jdk-15.0.2

2.2.2 Core configuration: core-site.xml

The temporary directory does not need to be created by hand; it is generated automatically.

vim /usr/local/hadoop-3.2.2/etc/hadoop/core-site.xml
<configuration>
    <!-- Default HDFS filesystem URI -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <!-- Temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop-3.2.2/tmp</value>
    </property>
</configuration>
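A quick way to confirm the file says what you intend is to grep the value back out of it. The sketch below writes the same snippet to a temporary directory so it is self-contained; point CONF at your real /usr/local/hadoop-3.2.2/etc/hadoop to check the actual file:

```shell
CONF=$(mktemp -d)   # stand-in for /usr/local/hadoop-3.2.2/etc/hadoop
cat > "$CONF/core-site.xml" <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
EOF
# Pull out the filesystem URI that HDFS clients will use.
grep -A1 'fs.defaultFS' "$CONF/core-site.xml" | grep -o 'hdfs://[^<]*'
# → hdfs://localhost:9000
```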

2.2.3 hdfs-site.xml

vim /usr/local/hadoop-3.2.2/etc/hadoop/hdfs-site.xml
<configuration>
    <!-- Replication factor; 1 is enough for pseudo-distributed mode -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>

    <!-- Where the NameNode stores its metadata -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop-3.2.2/hadoop_data/hdfs/namenode</value>
    </property>
    <!-- Where the DataNode stores block data -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop-3.2.2/hadoop_data/hdfs/datanode</value>
    </property>
</configuration>
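It is safest to create the namenode/datanode directories configured above before the daemons first start, so they have somewhere writable to put their data (paths are the ones from this guide; if your user cannot write under /usr/local, prefix the commands with sudo and chown the result to your user):

```shell
# Base directory configured in hdfs-site.xml above.
HDFS_DATA=/usr/local/hadoop-3.2.2/hadoop_data/hdfs
mkdir -p "$HDFS_DATA/namenode" "$HDFS_DATA/datanode"
```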

2.2.4 MapReduce settings: mapred-site.xml

MapReduce splits a computation into a number of tasks and distributes them to the nodes.

vim /usr/local/hadoop-3.2.2/etc/hadoop/mapred-site.xml
<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final> <!-- whether to include this is undecided -->
    </property>

    <!-- Classpath for MapReduce applications -->
    <property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>
</configuration>

2.2.5 YARN settings: yarn-site.xml

YARN runs the resource and node managers, which track whether each node is available.

vim /usr/local/hadoop-3.2.2/etc/hadoop/yarn-site.xml
<configuration>
    <!-- Auxiliary shuffle service for MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <!-- Environment variables that containers may inherit -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>

2.3 Format the NameNode

Next, initialize the distributed file system (on Hadoop 3.x, `hdfs namenode -format` is the preferred spelling of the same command):

hadoop namenode -format

Deleting the directory below would wipe out the distributed file system, so the rm here is normally NOT run:

ls /usr/local/hadoop-3.2.2/hadoop_data/
rm -rf /usr/local/hadoop-3.2.2/hadoop_data/

2.4 Start HDFS

start-dfs.sh

Check the running processes; there should be 4 (NameNode, DataNode, SecondaryNameNode, plus Jps itself):

jps

2.5 Start the resource manager

This starts two more services, the ResourceManager and a NodeManager:

start-yarn.sh

Now five Hadoop daemons are running in the background. Verify the cluster with the bundled MapReduce example, which estimates pi:

hadoop jar /usr/local/hadoop-3.2.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.2.jar pi 5 10
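The pi job above is a Monte Carlo estimate: it throws random points into a unit square and counts how many land inside the quarter circle, so 4 × (inside/total) ≈ π. The awk sketch below reproduces that arithmetic locally (100,000 samples; it illustrates what the job computes, it is not a Hadoop command):

```shell
awk 'BEGIN {
    srand(1); n = 100000; inside = 0
    for (i = 0; i < n; i++) {
        x = rand(); y = rand()
        if (x*x + y*y <= 1) inside++   # point fell inside the quarter circle
    }
    printf "pi is approximately %.2f\n", 4 * inside / n
}'
```

With 100,000 samples the estimate usually lands within about 0.01 of 3.14159; the Hadoop job does the same thing, but spreads the sampling across map tasks.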

3. Ubuntu software installation

Link: Ubuntu software installation


Reprinted from blog.csdn.net/zx77588023/article/details/114923040