Configuring Hadoop, ZooKeeper, and HBase in a Cluster Environment, Part 1

1. This setup uses two Linux machines with the following IPs:
192.168.56.101
192.168.56.102
On both machines, add the following entries to /etc/hosts:
192.168.56.101 master
192.168.56.102 slave
We use master as the NameNode server and slave as the DataNode server. First install the JDK, configure the environment variables, and set up passwordless SSH (installation steps omitted; see references online).
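Passwordless SSH from master to slave is required so the Hadoop start scripts can launch daemons on the remote node. A minimal sketch, assuming the same user account (here a placeholder `user`) exists on both hosts:

```shell
# On master: generate an RSA key pair with no passphrase (skip if one exists)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

# Copy the public key to slave, and to master itself for localhost logins
ssh-copy-id user@slave
ssh-copy-id user@master

# Verify: this should print the remote hostname without asking for a password
ssh slave hostname
```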
2. Install Hadoop and modify the following configuration files.
The Hadoop installation path must be identical on both machines. Remember this; getting it wrong is a common mistake!
1) Edit core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>
2) Edit hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
    </property>
</configuration>
3) Edit mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
</configuration>
4) Edit masters to contain:
master
5) Edit slaves to contain:
slave
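With the configuration files edited on master, the same Hadoop tree must exist at the same path on slave (as noted above). One way to sync it; the installation path `/opt/hadoop` and the account `user` are placeholders, not from the original:

```shell
# Copy the configured Hadoop directory from master to slave,
# keeping the installation path identical on both machines
scp -r /opt/hadoop user@slave:/opt/
```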
6) Format the Hadoop file system:
hadoop namenode -format
7) Start Hadoop:
start-all.sh
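After start-all.sh finishes, a quick sanity check (not part of the original steps) is to list the running Java daemons with jps on each node. With the masters and slaves files above, the expected processes are:

```shell
# On master: expect NameNode, SecondaryNameNode, and JobTracker
jps

# On slave: expect DataNode and TaskTracker
ssh slave jps
```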

Reprinted from tiandizhiguai.iteye.com/blog/1522752