Starting a new series: Hadoop 2.6 with Kerberos enabled (complete walkthrough notes)

Contents:

  1. Overview of the components involved

  2. Installation steps

  3. Checking error logs

  

1. What is Kerberos

As Baidu Baike puts it:

  Kerberos is a network authentication protocol designed to provide strong authentication for client/server applications by means of secret-key cryptography. The authentication does not rely on the host operating system's own authentication, requires no trust based on host addresses, does not assume physical security of every host on the network, and assumes that packets traveling on the network can be read, modified, and injected at will. Under these conditions, Kerberos acts as a trusted third-party authentication service that performs authentication using conventional cryptographic techniques (i.e., shared secret keys).

  The authentication flow works as follows: the client sends a request to the Authentication Server (AS), asking for credentials for some server, and the AS responds with those credentials encrypted under the client's key. The credentials consist of: 1) a server "ticket"; and 2) a temporary encryption key (the "session key"). The client forwards the ticket (containing the client's identity and a copy of the session key, all encrypted under the server's key) to the server. The session key, now shared by client and server, can be used to authenticate the client or the server, to encrypt subsequent communication between the two parties, or to exchange a separate sub-session key for further encryption of the conversation.

  The exchange above needs only read-only access to the Kerberos database. Sometimes, however, database records must be modified, e.g. when adding new principals or changing a principal's key. Such modifications go through a protocol between the client and a third-party Kerberos server (the Kerberos administration server, KADM); that administrative protocol is not covered here. There is also a protocol for maintaining multiple replicas of the Kerberos database; this can be regarded as an implementation detail that keeps evolving to fit different database technologies.

  Hadoop offers two security configurations: simple and kerberos.

  simple merely checks the user name and its group, and is trivially spoofed.

  kerberos relies on the Kerberos service to generate key files shared by all machines; only nodes holding a key file are trusted. This post focuses on the kerberos configuration. Kerberos has its downsides too: the configuration is complex, and changes are painful, since keys must be regenerated and redistributed to every machine.

2. Why is SASL also needed

  SASL is a more general authentication interface, designed so that most mainstream authentication schemes can plug into it. Many projects use the SASL interface and flow for their authentication.

  When actually authenticating with Kerberos, we generally do not call the Kerberos interfaces directly, but instead go through more general standard interfaces such as GSSAPI or SASL, because:

  •   the raw Kerberos interfaces are more fiddly;
  •   SASL and GSSAPI are IETF standards that abstract authentication at a higher level, making it easy to plug in different authentication schemes.

2. Installation steps

  Prerequisite: a working Hadoop cluster is already installed.

  Hosts: 10.1.4.32 (namenode), 10.1.4.34, 10.1.4.36, with hostnames host32, host34, host36 respectively.

  Hadoop version: hadoop-2.6.0-cdh5.7.2

  Linux version: CentOS 7

1. Install Kerberos, with 10.1.4.32 as the KDC

yum install krb5-libs krb5-server krb5-workstation

2. Edit three configuration files:

/etc/krb5.conf

# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
 default_realm = STA.COM # change to your chosen realm
# default_ccache_name = KEYRING:persistent:%{uid} # comment out this line

[realms]
 STA.COM = {
  kdc = host32 # KDC hostname
  admin_server = host32 # same as above
 }

[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM

 Only the commented lines need changing. Note in particular that the line "default_ccache_name = KEYRING:persistent:%{uid}" must be commented out, otherwise later logins may fail to find valid credentials (I have not dug into the exact cause):

hdfs GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

 Also, tickets expire; in this file you can tune the ticket lifetime and the renewal window:

ticket_lifetime = 24h
renew_lifetime = 7d

 2. /var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 STA.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

The defaults are fine here. Note the supported_enctypes setting: if you need different encryption types, this should be the place to change them; I kept the defaults.
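For illustration only (not what I used; I kept the defaults): restricting the KDC to the AES types would shrink that line to:

```
supported_enctypes = aes256-cts:normal aes128-cts:normal
```

Note that a JVM without the unlimited-strength policy files (the JCE step below) cannot handle aes256, which is one reason that step matters.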

One step here is easy to miss: download the JDK crypto policy package and place it under $JAVA_HOME/jre/lib/security on each machine; if Hadoop points at a manually specified JAVA_HOME, copy it into that JDK instead.

Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files

After downloading, copy the two jars, local_policy.jar and US_export_policy.jar, into that directory. This is required on both clients and servers; otherwise you will hit initialization errors, or:

javax.security.auth.login.LoginException: Unable to obtain password from user

3. /var/kerberos/krb5kdc/kadm5.acl

Just change the realm.
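For reference, with the realm changed to this walkthrough's STA.COM, the file reduces to one line (the stock file ships with EXAMPLE.COM); the wildcard grants every */admin principal full administrative rights:

```
*/[email protected]     *
```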

3. Create the Kerberos database

kdb5_util create -r STA.COM -s

If only one realm is defined, the -r option can be omitted.

If you need to rebuild the database, just delete the principal-related files under /var/kerberos/krb5kdc and create it again.

4. Start Kerberos and enable it at boot

chkconfig krb5kdc on
chkconfig kadmin on
service krb5kdc start
service kadmin start
service krb5kdc status

5. Create Kerberos principals

Add principals (addprinc creates a new principal):

kadmin.local -q "addprinc -randkey udap/[email protected]"
kadmin.local -q "addprinc -randkey udap/[email protected]"
kadmin.local -q "addprinc -randkey udap/[email protected]"
 
kadmin.local -q "addprinc -randkey HTTP/[email protected]"
kadmin.local -q "addprinc -randkey HTTP/[email protected]"
kadmin.local -q "addprinc -randkey HTTP/[email protected]"

Generate keytabs (xst exports keys to a keytab):

kadmin.local -q "xst  -k udap-unmerged.keytab  udap/[email protected]"
kadmin.local -q "xst  -k udap-unmerged.keytab  udap/[email protected]"
kadmin.local -q "xst  -k udap-unmerged.keytab  udap/[email protected]"
 
kadmin.local -q "xst  -k HTTP.keytab  HTTP/[email protected]"
kadmin.local -q "xst  -k HTTP.keytab  HTTP/[email protected]"
kadmin.local -q "xst  -k HTTP.keytab  HTTP/[email protected]"

Merge them into a single keytab and ship it to every Hadoop node:

$ ktutil
ktutil: rkt udap-unmerged.keytab
ktutil: rkt HTTP.keytab
ktutil: wkt udap.keytab
To inspect the result:
klist -ket  udap.keytab

rkt reads a keytab into ktutil's current entry list; wkt writes the list out to a keytab.

scp udap.keytab host32:/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop
scp udap.keytab host34:/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop
scp udap.keytab host36:/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop
ssh host32 "chown udap:udap /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab ;chmod 400 /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab"
ssh host34 "chown udap:udap /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab ;chmod 400 /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab"
ssh host36 "chown udap:udap /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab ;chmod 400 /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab"

6. Every Hadoop node now holds this keytab; next, wire it into the XML config files. Getting the configuration right was a slog, so I will just paste the end results:

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://host32</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/udap/app/hadoop-2.6.0-cdh5.7.2/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>

<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>fs.permissions.umask-mode</name>
  <value>027</value>
</property>
</configuration>

Here fs.permissions.umask-mode sets the default permission mask for newly created files and directories, complementing the access control Kerberos provides; a umask of 027 yields 750 permissions on new directories. See my other post on Hadoop file permission management.
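A quick local sanity check of that umask arithmetic (plain shell, nothing Hadoop-specific; HDFS applies the same masking to the paths it creates): a directory's default mode 777 masked by 027 gives 750.

```shell
# Demonstrate that umask 027 yields mode 750 on a new directory:
# 777 (default dir mode) AND NOT 027 = 750 (rwxr-x---)
d=$(mktemp -d)          # scratch area; mktemp itself uses mode 700
(
  umask 027
  mkdir "$d/demo"       # created as 777 & ~027 = 750
)
stat -c '%a' "$d/demo"  # prints 750
rm -rf "$d"
```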

yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->

        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>host32</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>

<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>udap/[email protected]</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>udap/[email protected]</value>
</property></configuration>

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>10.1.4.32:50090</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/home/udap/app/hadoop-2.6.0-cdh5.7.2/tmp/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/home/udap/app/hadoop-2.6.0-cdh5.7.2/tmp/dfs/data</value>
        </property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
<description>max number of file which can be opened in a datanode</description>
</property>

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>udap/[email protected]</value>
</property>
<property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>HTTP/[email protected]</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1034</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1036</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>udap/[email protected]</value>
</property>
<property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>HTTP/[email protected]</value>
</property>


<!-- DataNode SASL configuration -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>

<!-- JournalNode configuration -->
<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>udap/[email protected]</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/[email protected]</value>
</property>

<!--webhdfs-->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/[email protected]</value>
</property>

<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>

<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.nfs.kerberos.principal</name>
  <value>udap/[email protected]</value>
</property>
<property>
  <name>dfs.nfs.keytab.file</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>dfs.secondary.https.address</name>
  <value>host32:50495</value>
</property>
<property>
  <name>dfs.secondary.https.port</name>
  <value>50495</value>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>udap/[email protected]</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.https.principal</name>
  <value>udap/[email protected]</value>
</property>
</configuration>

 7. Now install the Kerberos client on every node

yum install krb5-workstation krb5-libs krb5-auth-dialog

After installing, just make sure /etc/krb5.conf matches the KDC's copy. If hostnames do not resolve between machines, remember to add them to /etc/hosts.
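For this cluster, each node's /etc/hosts would carry the mappings below (IPs and hostnames taken from the setup at the top of this post):

```
10.1.4.32  host32
10.1.4.34  host34
10.1.4.36  host36
```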

8. Set up SASL

I have heard that HDFS versions below 2.6 need jsvc to secure the DataNode, while 2.6 and later do not. I went straight to an OpenSSL-based setup.

On host32, run:

openssl req -new -x509 -keyout test_ca_key -out test_ca_cert -days 9999 -subj '/C=CN/ST=zhejiang/L=hangzhou/O=dtdream/OU=security/CN=zelda.com'

Copy the generated test_ca_key and test_ca_cert to all machines, then on each machine continue:

keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=zelda.com, OU=test, O=test, L=hangzhou, ST=zhejiang, C=cn"
keytool -keystore truststore -alias CARoot -import -file test_ca_cert
keytool -certreq -alias localhost -keystore keystore -file cert
openssl x509 -req -CA test_ca_cert -CAkey test_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial -passin pass:changeit
keytool -keystore keystore -alias CARoot -import -file test_ca_cert
keytool -keystore keystore -alias localhost -import -file cert_signed

This leaves keystore and truststore files in the working directory. Copy them onto each host, then point Hadoop's ssl-client.xml and ssl-server.xml at them.

Copy the corresponding .example files and edit them.
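As a sketch of what the edited file ends up containing, the essential ssl-server.xml properties point at the keystore and truststore generated above. The property names come from the shipped ssl-server.xml.example; the paths and passwords here are placeholders, not values from my setup:

```xml
<configuration>
  <!-- keystore holding this host's signed certificate (placeholder path) -->
  <property>
    <name>ssl.server.keystore.location</name>
    <value>/home/udap/keystore</value>
  </property>
  <property>
    <name>ssl.server.keystore.password</name>
    <value>changeit</value>
  </property>
  <!-- truststore holding the test CA certificate (placeholder path) -->
  <property>
    <name>ssl.server.truststore.location</name>
    <value>/home/udap/truststore</value>
  </property>
  <property>
    <name>ssl.server.truststore.password</name>
    <value>changeit</value>
  </property>
</configuration>
```

ssl-client.xml mirrors this with ssl.client.* property names.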

9. Start HDFS. If there are errors, check the HDFS logs and the KDC logs: /var/log/krb5kdc.log and /var/log/kadmind.log. The web UI is now at https://10.1.4.32:50470

10. Add a principal and access HDFS

kadmin.local
kadmin.local:addprinc [email protected]

Then log in and access HDFS.

If anything misbehaves, you can check whether the current credentials are right:

[udap@host32 hadoop]$ klist
Ticket cache: FILE:/tmp/krb5cc_1005
Default principal: [email protected]

Valid starting       Expires              Service principal
2018-12-14T16:20:55  2018-12-15T16:20:55  krbtgt/[email protected]

Logging in as different principals maps to different users on HDFS:

[udap@host32 bin]$ ./hdfs dfs -ls /
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
18/12/14 16:22:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
drwxr-xr-x   - cgf  cgf                 0 2018-12-14 08:52 /cgf
drwxr-xr-x   - hx   hx                  0 2018-12-14 10:35 /hx
drwxr-xr-x   - udap supergroup          0 2018-12-13 11:33 /udap

11. To add a keytab for passwordless login later, do the following (excerpted from https://my.oschina.net/psuyun/blog/333077):

Run:
ktutil
add_entry -password -p hadoop/[email protected] -k 3 -e aes256-cts-hmac-sha1-96
Explanation: -k is the key version number, -e the encryption type, -password derives the key from a password
Example:
add_entry -password -p host/[email protected] -k 1 -e aes256-cts-hmac-sha1-96

write_kt filename.keytab
Writes the entries out to "filename.keytab"
Example:
write_kt /hadoop-data/etc/hadoop/hadoop.keytab

Passwordless login:
kinit -kt username.keytab username
Example:
 kinit -kt /hadoop-data/etc/hadoop/hadoop.keytab hadoop/admin
as opposed to
kinit -t username.keytab username (this form still prompts for a password)

3. Checking error logs

1. Logging in as different users gives multi-tenant isolation on HDFS.

Error messages come in many shapes; I will not enumerate them here. In any case, the details are all in the logs.

2. There may be things I have forgotten to cover; to be continued...


Reposted from www.cnblogs.com/garfieldcgf/p/10077331.html