Installing Hadoop in Pseudo-Distributed Mode

  1. Grant executable permission to the archive:

chmod u+x hadoop-2.7.1.tar.gz
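The original post does not show the extraction step, but the archive would normally be unpacked next (unpack it wherever you intend to keep Hadoop; the location is up to you):

tar -zxvf hadoop-2.7.1.tar.gz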

 

  2. Configure the environment variables:

vi /etc/profile
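The screenshot with the exact profile entries is not reproduced here. A typical set of entries for this setup would look like the following; the JDK and Hadoop paths are assumptions and must match your own installation:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
export HADOOP_HOME=/usr/local/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin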

  3. Make the changes take effect:

source /etc/profile
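As a quick check that the new variables are in effect (assuming the PATH entries sketched above), the following should print the Hadoop version:

hadoop version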

  4. Modify the four configuration files core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml (plus hadoop-env.sh and yarn-env.sh):

core-site.xml:
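The screenshot of this file is missing from this copy. For a pseudo-distributed setup, the property usually added inside <configuration> is the default filesystem URI (hdfs://localhost:9000 is the conventional single-node choice):

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>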

hdfs-site.xml:
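Again, the original screenshot is missing. A single-node setup typically only needs the replication factor set to 1:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>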

mapred-site.xml:
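This file is usually created by copying mapred-site.xml.template; the property that tells MapReduce to run on YARN is:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>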

yarn-site.xml:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.env-whitelist</name>
  <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>

hadoop-env.sh

yarn-env.sh
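The edits shown in the original are not included here; in both scripts the usual change is simply an explicit JAVA_HOME (the path below is an assumption, use your own JDK location):

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk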

  5. Format the NameNode:

bin/hdfs namenode -format

  6. Start HDFS:

sbin/start-dfs.sh
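As an extra check not in the original post, jps should now list the NameNode, DataNode and SecondaryNameNode processes:

jps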

  7. Start YARN:

sbin/start-yarn.sh
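After this, jps should additionally show ResourceManager and NodeManager. The web UIs below are the Hadoop 2.7.x defaults, assuming the ports were not changed:

http://localhost:50070 (NameNode)
http://localhost:8088 (ResourceManager)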

 

  8. Create two directory levels at once:

bin/hdfs dfs -mkdir -p /user/derek

  9. Upload files to the target directory:

bin/hdfs dfs -put etc/hadoop/*.xml /user/derek
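To verify the upload (an extra step, not in the original), list the target directory:

bin/hdfs dfs -ls /user/derek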

  10. Run the bundled MapReduce grep example against the uploaded files:

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar grep /user/derek /user/output 'dfs[a-z.]+'
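The job writes its results to the /user/output path given above; they can be read straight from HDFS with:

bin/hdfs dfs -cat /user/output/*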

 

  11. Shut down HDFS and YARN:

sbin/stop-dfs.sh

sbin/stop-yarn.sh
