【Hadoop Environment Setup: SSH Passwordless Login, Advanced】

Imagine this scenario: we build a Hadoop cluster out of 1,000 cheap PCs. Hadoop advertises high availability at low cost, but nobody can guarantee cheap machines won't fail; no computer is failure-proof. So today one machine dies, and tomorrow a new node must be added for capacity. The problem: the public keys used for SSH passwordless login are not shared between machines. When a node joins, its freshly generated id_rsa.pub is absent from every other PC's authorized_keys file, so every PC refuses the newcomer's connections. The administrator now faces a nightmare: appending the new node's id_rsa.pub to authorized_keys on every single machine, one by one...


Passwordless SSH login needs a key pair: the public key is handed out, the private key stays private. A public-key file (authorized_keys) can hold keys from many hosts, while a private-key file holds only this host's key. As far as the ssh software is concerned, both live in the .ssh directory: the public key goes into .ssh on the remote host you want to log in to, and the private key stays in the local .ssh. Because one authorized_keys file can store many public keys while a private key must stay per-machine, we can share the authorized_keys file over NFS, but never the private keys.
So the approach is: each node keeps its own independent, unshared .ssh directory, but the authorized_keys inside it is a soft link pointing at a file in an NFS share created beforehand. Whenever any node appends a public key to .ssh/authorized_keys, every other node sees it immediately.
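The effect is easy to simulate with plain directories standing in for the NFS mount (a sketch with dummy key strings; real nodes would run ssh-keygen and mount the share at /home/exp as shown later in this post):

```shell
# One shared authorized_keys (as if on NFS), three "nodes" that each
# keep their key material locally and symlink authorized_keys to the share.
SHARED=$(mktemp -d)            # stand-in for the NFS export directory
touch "$SHARED/authorized_keys"
for node in node1 node2 node3; do
  sshdir=$(mktemp -d)          # stand-in for this node's private ~/.ssh
  echo "ssh-rsa FAKEKEY-$node hadoop@$node" > "$sshdir/id_rsa.pub"  # dummy key
  ln -s "$SHARED/authorized_keys" "$sshdir/authorized_keys"         # link to the share
  cat "$sshdir/id_rsa.pub" >> "$sshdir/authorized_keys"             # append lands on the share
done
cat "$SHARED/authorized_keys"  # all three keys, visible to every node
```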


A quick tip: changing the hostname
[root@bogon ~]# vi /etc/sysconfig/network

#gaojingsong
NETWORKING=yes
HOSTNAME=node3

1. Configure the NFS share
[root@node1 ~]# ls
anaconda-ks.cfg  install.log         portmap-4.0-7.i386.rpm  rlwrap-0.37
Desktop          install.log.syslog  Public                  rlwrap-0.37.tar.gz
Documents        Music               readline-6.2            Templates
Downloads        Pictures            readline-6.2.tar.gz     Videos
[root@node1 ~]# rpm -ivh portmap-4.0-7.i386.rpm
Preparing...                ########################################### [100%]
   1:portmap                ########################################### [100%]
[root@node1 ~]# service portmap start
Starting portmapper: /bin/bash: line 1:  2489 Segmentation fault      (core dumped) portmap
                                                           [FAILED]
[root@node1 ~]#  rpm -q nfs-utils portmap
nfs-utils-1.2.3-39.el6.i686
portmap-4.0-7.i386
This looks like a version problem; rather than dig into it, I'll just use yum.
[root@node1 ~]# yum install portmap
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.sina.cn
 * updates: centosx4.centos.org
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package portmap.i386 0:4.0-8 will be obsoleted
---> Package rpcbind.i686 0:0.2.0-11.el6 will be updated
---> Package rpcbind.i686 0:0.2.0-11.el6_7 will be obsoleting
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================
 Package            Arch            Version            Repository    Size
==========================================================================
Installing:
 rpcbind            i686            0.2.0-11.el6_7     updates       51 k
     replacing  portmap.i386 4.0-8

Transaction Summary
==========================================================================
Install       1 Package(s)

Total download size: 51 k
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : rpcbind-0.2.0-11.el6_7.i686                                          1/3
  Erasing    : portmap-4.0-8.i386                                                    2/3
Non-fatal POSTUN scriptlet failure in rpm package portmap
  Cleanup    : rpcbind-0.2.0-11.el6.i686                                              3/3
error reading information on service portmap: No such file or directory
warning: %postun(portmap-4.0-8.i386) scriptlet failed, exit status 1
  Verifying  : rpcbind-0.2.0-11.el6_7.i686                                            1/3
  Verifying  : portmap-4.0-8.i386                                                     2/3
  Verifying  : rpcbind-0.2.0-11.el6.i686                                              3/3

Installed:
  rpcbind.i686 0:0.2.0-11.el6_7                                                                                                        

Replaced:
  portmap.i386 0:4.0-8                                                                                                                 

Complete!
[root@node1 ~]# yum install nfs-utils
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirror.neu.edu.cn
 * extras: mirror.neu.edu.cn
 * updates: centosv4.centos.org
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.i686 1:1.2.3-39.el6 will be updated
---> Package nfs-utils.i686 1:1.2.3-64.el6 will be an update
--> Processing Dependency: python-argparse for package: 1:nfs-utils-1.2.3-64.el6.i686
--> Running transaction check
---> Package python-argparse.noarch 0:1.2.1-2.1.el6 will be installed
--> Finished Dependency Resolution

Running Transaction
  Installing : python-argparse-1.2.1-2.1.el6.noarch                     1/3
  Updating   : 1:nfs-utils-1.2.3-64.el6.i686                            2/3
  Cleanup    : 1:nfs-utils-1.2.3-39.el6.i686                            3/3
  Verifying  : 1:nfs-utils-1.2.3-64.el6.i686                            1/3
  Verifying  : python-argparse-1.2.1-2.1.el6.noarch                     2/3
  Verifying  : 1:nfs-utils-1.2.3-39.el6.i686                            3/3

Dependency Installed:
  python-argparse.noarch 0:1.2.1-2.1.el6                                                                                               

Updated:
  nfs-utils.i686 1:1.2.3-64.el6                                                                                                        

Complete!


2. Edit the configuration file
[root@node1 ~]#  vim /etc/exports
/home/hadoop/.ssh   192.168.*(rw,wdelay,root_squash,no_subtree_check,fsid=0)
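For reference, here is what each piece of that exports line means (my annotations; see `man exports` for the authoritative definitions):

```
# <directory>       <clients>(<options>)
/home/hadoop/.ssh   192.168.*(rw,wdelay,root_squash,no_subtree_check,fsid=0)

# rw               - clients may read and write
# wdelay           - briefly delay disk writes so several can be committed together
# root_squash      - requests from remote root are mapped to an unprivileged user
# no_subtree_check - skip per-request subtree checking
# fsid=0           - marks this export as the NFSv4 pseudo-filesystem root
```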
[root@node1 ~]# service rpcbind start
[root@node1 ~]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@node1 ~]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp   3510  status
    100021    1   tcp  55646  nlockmgr
    100021    3   tcp  55646  nlockmgr
    100021    4   tcp  55646  nlockmgr


3. Create a user and generate a key pair for it
[root@node1 ~]# useradd hadoop
[root@node1 ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@node1 ~]# su - hadoop
[hadoop@node1 ~]$ ssh-keygen -t rsa
[hadoop@node1 ~]$ cp .ssh/id_rsa.pub .ssh/authorized_keys
[hadoop@node1 ~]$ ssh 192.168.1.110
The authenticity of host '192.168.1.110 (192.168.1.110)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.110' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.110] failed - POSSIBLE BREAK-IN ATTEMPT!
[hadoop@node1 ~]$ exit
logout
Connection to 192.168.1.110 closed.
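Step 3 in script form, including the permissions sshd expects when StrictModes is on (a sketch; it uses a scratch directory so as not to touch a real ~/.ssh):

```shell
# Generate an RSA key pair without a passphrase and seed authorized_keys,
# with permissions sshd (StrictModes) accepts: 700 on the dir, 600 on the file.
SSHDIR=$(mktemp -d)                      # stand-in for /home/hadoop/.ssh
ssh-keygen -q -t rsa -N '' -f "$SSHDIR/id_rsa"
cp "$SSHDIR/id_rsa.pub" "$SSHDIR/authorized_keys"
chmod 700 "$SSHDIR"                      # sshd rejects group/world-writable dirs
chmod 600 "$SSHDIR/authorized_keys"      # and overly open key files
ls -l "$SSHDIR"
```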


4. Check that the NFS server exports the shared directory
[root@node1 ~]#  service nfs restart
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]
Shutting down RPC idmapd:                                  [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@node1 ~]# exportfs
/home/hadoop/.ssh   192.168.*
If you later edit the exports file again, there is no need to restart NFS; just reload it:
[root@node1 ~]# exportfs -rv
Check the result of the reload:
[root@node1 ~]# exportfs -v
[root@node1 home]# chmod -R  777 hadoop/
(Note: 777 is a blunt instrument. When sshd runs with StrictModes enabled, it refuses keys that live in group- or world-writable directories, so 700 on .ssh and 600 on authorized_keys is the safer setting.)


-------------------------------------------------------------------------------------------------------


1. Mount on the client and create the soft link

Before mounting, make sure both ends have the same software installed:

[root@node3 ~]#  yum install portmap

[root@node3 ~]#  yum install nfs-utils
[root@node3 ~]# chkconfig rpcbind on
[root@node3 ~]# chkconfig nfs on
[root@node3 ~]# service rpcbind start

[root@node3 ~]# useradd hadoop
[root@node3 ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@node3 ~]# su - hadoop
[root@node3 ~]# showmount -e 192.168.1.110
Export list for 192.168.1.110:
/home/hadoop/.ssh 192.168.*
A quick tip: to make the client mount the share automatically at boot, add an entry to /etc/fstab
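A matching /etc/fstab entry would look roughly like this (the options are my example; `_netdev` delays the mount until the network is up):

```
# device                           mount point  type  options           dump pass
192.168.1.110:/home/hadoop/.ssh    /home/exp    nfs   defaults,_netdev  0    0
```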


2. Create a directory to hold the mounted files
[root@node3 ~]# mkdir -p /home/exp
Make sure the permissions on /home/exp allow the mount:
[root@node3 /]# ll /home
drwxr-xr-x 2 root root 4096 Aug 11 01:10 exp

3. Mount the NFS server's exported directory onto the local /home/exp
[root@node3 ~]# mount  192.168.1.110:/home/hadoop/.ssh /home/exp
[root@node3 ~]# cd /home/exp/
[root@node3 exp]# ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
[hadoop@node3 .ssh]$ cat authorized_keys
ssh-rsa AAAABjGPb2zQ== hadoop@node1
[root@node3 ~]# ll /home
total 8
drwxr-xr-x. 2 root   root   4096 Mar 12 04:18 exp
drwx------. 4 hadoop hadoop 4096 Mar 12 04:22 hadoop
[root@node3 ~]# su - hadoop
Soft-link the authorized_keys from the mounted directory into the hadoop user's .ssh directory.
(Note: since the link and its target live in different directories, the target must be given as an absolute path; with a relative path the link can resolve to itself and fail with "Too many levels of symbolic links".)
[hadoop@node3 ~]$ ln -s  /home/exp/authorized_keys /home/hadoop/.ssh/
[hadoop@node3 ~]$ ssh-keygen -t rsa
[hadoop@node3 ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
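The relative-path pitfall noted above is easy to reproduce: `ln -s` stores the target string verbatim, so a bare relative name resolves against the link's own directory and can point at itself (throwaway paths, my example):

```shell
d=$(mktemp -d)
mkdir "$d/exp" "$d/ssh"
echo 'ssh-rsa AAAA... hadoop@node1' > "$d/exp/authorized_keys"
# WRONG: the relative target resolves inside $d/ssh, so the link loops on itself
ln -s authorized_keys "$d/ssh/authorized_keys"
cat "$d/ssh/authorized_keys" || echo "ELOOP"  # Too many levels of symbolic links
# RIGHT: give the target as an absolute path
ln -sf "$d/exp/authorized_keys" "$d/ssh/authorized_keys"
cat "$d/ssh/authorized_keys"
```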

 --------------------------------------------------------------------------------------------------------------------

Verify passwordless login

 [hadoop@node3 .ssh]$ ssh 192.168.1.110
The authenticity of host '192.168.1.110 (192.168.1.110)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.110' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.110] failed - POSSIBLE BREAK-IN ATTEMPT!
[email protected]'s password:
Last login: Sat Mar 12 04:35:39 2016 from 192.168.1.110
[hadoop@node1 ~]$ exit
logout
Connection to 192.168.1.110 closed.
[hadoop@node3 .ssh]$ ssh 192.168.1.104
The authenticity of host '192.168.1.104 (192.168.1.104)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.104' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.104] failed - POSSIBLE BREAK-IN ATTEMPT!
[email protected]'s password:
[hadoop@node3 ~]$ exit
logout
Connection to 192.168.1.104 closed.
[hadoop@node3 .ssh]$

Test from the master node

[hadoop@node1 ~]$ ssh 192.168.1.104
The authenticity of host '192.168.1.104 (192.168.1.104)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.104' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.104] failed - POSSIBLE BREAK-IN ATTEMPT!
[email protected]'s password:
Last login: Sat Mar 12 10:16:51 2016 from 192.168.1.104
[hadoop@node3 ~]$ exit
logout
Connection to 192.168.1.104 closed.
[hadoop@node1 ~]$ ssh 192.168.1.110
reverse mapping checking getaddrinfo for bogon [192.168.1.110] failed - POSSIBLE BREAK-IN ATTEMPT!
[email protected]'s password:
Last login: Sat Mar 12 10:27:44 2016 from 192.168.1.104
[hadoop@node1 ~]$ exit

Test: adding a new node

[root@node2 ~]# yum install nfs-utils
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.163.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.i686 1:1.2.3-39.el6 will be updated
---> Package nfs-utils.i686 1:1.2.3-64.el6 will be an update
--> Processing Dependency: python-argparse for package: 1:nfs-utils-1.2.3-64.el6.i686
--> Running transaction check
---> Package python-argparse.noarch 0:1.2.1-2.1.el6 will be installed
--> Finished Dependency Resolution

Total download size: 380 k
Is this ok [y/N]: y
Downloading Packages:
(1/2): nfs-utils-1.2.3-64.el6.i686.rpm                   | 333 kB     00:00
(2/2): python-argparse-1.2.1-2.1.el6.noarch.rpm          |  48 kB     00:00
--------------------------------------------------------------------------
Total                                           51 kB/s | 380 kB     00:07
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Importing GPG key 0xC105B9DE:
 Userid : CentOS-6 Key (CentOS 6 Official Signing Key) <[email protected]>
 Package: centos-release-6-5.el6.centos.11.1.i686 (@anaconda-CentOS-201311271240.i386/6.5)
 From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Is this ok [y/N]: y
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : python-argparse-1.2.1-2.1.el6.noarch                     1/3
  Updating   : 1:nfs-utils-1.2.3-64.el6.i686                            2/3
  Cleanup    : 1:nfs-utils-1.2.3-39.el6.i686                            3/3
  Verifying  : 1:nfs-utils-1.2.3-64.el6.i686                            1/3
  Verifying  : python-argparse-1.2.1-2.1.el6.noarch                     2/3
  Verifying  : 1:nfs-utils-1.2.3-39.el6.i686                            3/3

Dependency Installed:
  python-argparse.noarch 0:1.2.1-2.1.el6

Updated:
  nfs-utils.i686 1:1.2.3-64.el6

Complete!

[root@node2 ~]# chkconfig rpcbind on
[root@node2 ~]# chkconfig nfs on
[root@node2 ~]# service rpcbind start
[root@node2 ~]# showmount -e 192.168.1.110
Export list for 192.168.1.110:
/home/hadoop/.ssh 192.168.*
[root@node2 ~]# mkdir -p /home/ex
[root@node2 ~]# mkdir -p /home/exp
[root@node2 ~]# rm -rf /home/ex
[root@node2 ~]# mount 192.168.1.110:/home/hadoop/.ssh /home/exp
[root@node2 ~]# cd /home/exp/
[root@node2 exp]# ls
1.txt  authorized_keys  id_rsa  id_rsa.pub  known_hosts
[root@node2 exp]# cat authorized_keys
ssh-rsa AAAb2zQ== hadoop@node1
ssh-rsa AAAAB3NFTZaw== hadoop@node3
[root@node2 exp]# adduser hadoop
[root@node2 exp]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@node2 exp]# su - hadoop
[hadoop@node2 ~]$ ln -s  /home/exp/authorized_keys /home/hadoop/.ssh/
ln: target `/home/hadoop/.ssh/' is not a directory: No such file or directory

[hadoop@node2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
4a:02:e1:c8:45:17:bc:2e:13:aa:88:c9:3b:ab:bc:eb hadoop@node2
The key's randomart image is:
[hadoop@node2 ~]$ ln -s  /home/exp/authorized_keys /home/hadoop/.ssh/
[hadoop@node2 ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[hadoop@node2 ~]$ cat .ssh/authorized_keys
ssh-rsa QAtQnb2zQ== hadoop@node1
ssh-rsa AAAAN5wglnFTZaw== hadoop@node3
ssh-rsa AAAAB3N6iH6BW6p7wFCufQ== hadoop@node2

[hadoop@node2 ~]$ ssh 192.168.1.110
The authenticity of host '192.168.1.110 (192.168.1.110)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.110' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.110] failed - POSSIBLE BREAK-IN ATTEMPT!
[email protected]'s password:
Last login: Sat Mar 12 10:29:01 2016 from 192.168.1.110
[hadoop@node1 ~]$ exit
logout
Connection to 192.168.1.110 closed.
[hadoop@node2 ~]$ ssh 192.168.1.103
The authenticity of host '192.168.1.103 (192.168.1.103)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.103' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.103] failed - POSSIBLE BREAK-IN ATTEMPT!
[email protected]'s password:
Last login: Tue Mar 15 04:48:13 2016 from 192.168.1.104
[hadoop@node2 ~]$ exit
logout
Connection to 192.168.1.103 closed.
[hadoop@node2 ~]$ ssh 192.168.1.104
The authenticity of host '192.168.1.104 (192.168.1.104)' can't be established.
RSA key fingerprint is df:a1:53:ce:e0:59:0f:23:41:cd:04:16:23:e4:8b:e7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.104' (RSA) to the list of known hosts.
reverse mapping checking getaddrinfo for bogon [192.168.1.104] failed - POSSIBLE BREAK-IN ATTEMPT!
[email protected]'s password:
Last login: Sat Mar 12 10:17:38 2016 from 192.168.1.110

 

Note: NFS commands and behavior differ slightly between versions; for reference, see:

Permanent link: http://gaojingsong.iteye.com/blog/2278941

Related article: 【Linux Network File System (NFS) Setup】

Reposted from gaojingsong.iteye.com/blog/2282300