Manual ESXi Installation
1. Background
Physical servers are almost always managed through virtualization. The mainstream hypervisors are VMware ESXi, Hyper-V, KVM, Xen, and so on.
Compared with a full VMware vSphere deployment, standalone ESXi is simpler to configure and manage (we will not go into the pros and cons here), so the test environment uses ESXi wherever possible. As for the version: the test servers are a Dell R710 plus machines that are mostly R720/R730 class, so to stay compatible across models and keep VM migration easy we use ESXi 6.7 U3. (Builds below U3 have known bugs, and ESXi 7.x does not support the R710/R720.)
2. Out-of-band iDRAC configuration
Create the RAID array that the system will be installed on.
Go back to the server page and click Launch.
Download the jnlp console and double-click to open it:
This requires JDK 1.7 (add the iDRAC site to the Java whitelist and enable TLS 1.0/1.1/1.2).
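The Java-side changes can also be scripted. This is only a sketch under assumptions: the JDK install path and the iDRAC URL below are placeholders for your environment, and the replacement value for the disabled-algorithms property simply drops the TLS versions that newer JREs blacklist.

```shell
# Sketch only: JAVA_HOME and IDRAC_URL are assumptions; adjust to your site.
JAVA_HOME=/usr/java/jdk1.7.0_80          # wherever JDK 1.7 is installed
IDRAC_URL="https://192.168.1.120"        # your iDRAC address

# 1. Whitelist the iDRAC site so Java Web Start will launch its jnlp.
mkdir -p ~/.java/deployment/security
echo "$IDRAC_URL" >> ~/.java/deployment/security/exception.sites

# 2. Old iDRAC firmware only speaks TLS 1.0/1.1; take those off the
#    JRE's disabled-algorithms list (keep the non-TLS entries).
sed -i 's/^jdk.tls.disabledAlgorithms=.*/jdk.tls.disabledAlgorithms=RC4, MD5withRSA, DH keySize < 768/' \
  "$JAVA_HOME/jre/lib/security/java.security"
```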
3. System installation
4. System configuration
4.1 Hostname/IP configuration
4.2 Configure ansible
Enable ssh
Verify the port with tcping/telnet:
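Either tool verifies that sshd on the ESXi host answers; for example (the second form needs nothing installed beyond bash):

```shell
# Check that port 22 on the ESXi host is reachable from the ansible host.
nc -zv -w 3 192.168.1.28 22

# Or, with no extra tools, using bash's /dev/tcp pseudo-device:
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/192.168.1.28/22' \
  && echo "port 22 open"
```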
Set up ssh key trust from the ansible host
ssh-copy-id 192.168.1.28
[root@gs-ansible-1-118 ~]# ssh-copy-id 192.168.1.28
Password:
Now try logging into the machine, with "ssh '192.168.1.28'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[root@gs-ansible-1-118 ~]# ssh 192.168.1.28
Password:
The time and date of this login have been sent to the system logs.
# Logging in again still prompts for a password; check the configuration
Fix the ssh key login:
[root@gs-esxi-1-28:~] grep AuthorizedKeysFile /etc/ssh/sshd_config
AuthorizedKeysFile /etc/ssh/keys-%u/authorized_keys
[root@gs-esxi-1-28:~] ls -a
. .mtoolsrc bin bootpart4kn.gz lib mbr productLocker store tmp vmfs
.. .ssh bootbank dev lib64 opt sbin tardisks usr vmimages
.ash_history altbootbank bootpart.gz etc locker proc scratch tardisks.noauto var vmupgrade
[root@gs-esxi-1-28:~] cp .ssh/authorized_keys /etc/ssh/keys-root/
[root@gs-esxi-1-28:~] cat /etc/ssh/keys-root/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3s4pbATnrlwJPVrK52QUSg7dtHXmB1mAujNJt336i0O3uwJys8WylSAldNiqreMGKnaT/MJhSBQT1XOKfGHCBU3VAvtX5sbEs0/sPI0C0y/YWkox/NVldz0E2g+L75Ltj76V8Gbixsmbhz2kj6ozcpR6yHMXipwyFd+oljynnio8AOqivxq3m/hIIK+bihJ3rl+7k+tm6kv++om6VRohplgXuzZnyGIYHM/gErim1q/MNJTMLlXDtsQ9a6bLIJDVfcpt04xujJNbdny2W+4oEt0Ch5y69Knd+n0EtSYh6gIvvvuN9J4dAawVNIOLFkLsGSFWr1q6YSKzGyKxwY94fQ== root@gs-ansible-118
[root@gs-ansible-1-118 ~]# ssh 192.168.1.28 uptime
8:52:40 up 00:44:11, load average: 0.00, 0.00, 0.00
Configure the ansible inventory: => known issue, still to be resolved
[root@gs-ansible-1-118 ~]# echo -e "[ESXI]\n192.168.1.28" >>/etc/ansible/hosts && ansible ESXI --list-host
hosts (1):
192.168.1.28
[root@gs-ansible-1-118 ~]# ansible ESXI -m shell -a "uptime"
192.168.1.118 | SUCCESS | rc=0 >>
16:54:31 up 210 days, 7:50, 4 users, load average: 0.08, 0.02, 0.01
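Note on the pending issue: independent of whatever went wrong in this particular run (the reply above came from 192.168.1.118, not the ESXi host), there is a general gotcha when pointing ansible at ESXi. Its BusyBox userland and limited bundled Python often break modules that are pushed to the target, such as shell or copy. A hedged sketch of the usual workarounds:

```shell
# The raw module sends the command over plain ssh and needs nothing
# special on the ESXi side, so it works even where shell/command fail:
ansible ESXI -m raw -a "uptime"

# Alternatively, point ansible at ESXi's own interpreter in the inventory
# (path as shipped on ESXi 6.x; verify on your build):
#   [ESXI:vars]
#   ansible_python_interpreter=/bin/python
```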
4.3 Time zone / NTP configuration
Once the network is configured, the ESXi host can be managed over the web UI and ssh.
Rename the datastores to the standard names: ip-os / ip-data / ip-data1 / ip-ssd-data
UTC is the reference time zone; relative to our UTC+8 zone, every timestamp runs 8 hours behind. At 2022-04-20 16:29:18 local time, the logs are still recorded in UTC, which makes day-to-day troubleshooting inconvenient. It also affects virtual machines: because of vmtools, every reboot reloads the hardware clock and leaves the guest on UTC. Below we switch the host from UTC to CST.
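The 8-hour gap can be seen directly on the ansible host by rendering the same instant in both zones:

```shell
# Same moment, two zones: Asia/Shanghai (CST, UTC+8) is 8 hours ahead of UTC.
date -u '+%F %T UTC'
TZ=Asia/Shanghai date '+%F %T CST'
```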
Principle:
Replace the ESXi time zone file /etc/localtime. ESXi does not ship an Asia/Shanghai zoneinfo file, so we copy one over from a Linux host. However, ESXi rewrites /etc/localtime on every reboot, so to keep the replacement effective we reapply it from a boot-time script.
# Find the datastore path (files on the system partitions are overwritten at reboot; datastore volumes are not)
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 df |grep VMFS
VMFS-6 1790464491520 1537212416 1788927279104 0% /vmfs/volumes/192.168.1.28-os
# Copy the Linux Asia/Shanghai zoneinfo file to the ESXi node
[root@gs-ansible-1-118 ESXI]# scp /usr/share/zoneinfo/Asia/Shanghai 192.168.1.28:/vmfs/volumes/192.168.1.28-os/
Shanghai 100% 388 0.4KB/s 00:00
[root@gs-ansible-1-118 ESXI]# ls
set-esxi-localtime.sh
# Copy the setup script to the ESXi datastore
[root@gs-ansible-1-118 ESXI]# scp set-esxi-localtime.sh 192.168.1.28:/vmfs/volumes/192.168.1.28-os/
set-esxi-localtime.sh 100% 87 0.1KB/s 00:00
# Review the script: it is essentially a single overwrite command
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'cat /vmfs/volumes/192.168.1.28-os/set-esxi-localtime.sh'
#!/bin/sh
base_dir=/vmfs/volumes/192.168.1.28-os
/bin/rm -f /etc/localtime_bak &&/bin/mv /etc/localtime /etc/localtime_bak && cp $base_dir/Shanghai /etc/localtime
# Make it run at boot
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'sed -i "s#exit 0#sh /vmfs/volumes/192.168.1.28-os/set-esxi-localtime.sh#g" /etc/rc.local.d/local.sh'
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'echo "exit 0" >> /etc/rc.local.d/local.sh'
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'cat /etc/rc.local.d/local.sh'
#!/bin/sh
# local configuration options
# Note: modify at your own risk! If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading. Changes are not supported unless under direction of
# VMware support.
# Note: This script will not be run when UEFI secure boot is enabled.
sh /vmfs/volumes/192.168.1.28-os/set-esxi-localtime.sh
exit 0
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'sh /vmfs/volumes/192.168.1.28-os/set-esxi-localtime.sh'
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'date'
Wed Apr 20 17:42:25 CST 2022
4.4 Add the host to VCSA for unified management
4.5 Base software installation and configuration
4.5.1 OMSA (still being refined)
OpenManage Server Administrator (OMSA) is a software agent that provides a comprehensive one-to-one systems-management solution in two ways: through an integrated, web-browser-based graphical user interface (GUI), and through a command-line interface (CLI) exposed by the operating system.
It lets administrators manage systems both locally and remotely over the network.
Managed node: installs the agent and the web component (Windows, Linux).
VIB: the OMSA agent without the web component (VMware).
Since the VMware agent lacks the web component, three packages are involved:
- OMSA iDRAC base module
- OMSA system software
- OMSA Windows agent (manages the VMware host remotely through the agent)
# Fetch the packages
[root@gs-ansible-1-118 ESXI]# ll
total 13888
-rw-r--r-- 1 root root 2679338 Apr 20 11:44 ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip
-rw-r--r-- 1 root root 7113139 Apr 20 11:44 OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip
-rw-r--r-- 1 root root 4419566 Apr 19 16:26 PERCCLI_MRXX5_7.1910.0_A12_VMware.tar.gz
-rwxr-xr-x 1 root root 87 Apr 20 17:20 set-esxi-localtime.sh
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'mkdir /vmfs/volumes/192.168.1.28-os/tools'
[root@gs-ansible-1-118 ESXI]# scp ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip 192.168.1.28:/vmfs/volumes/192.168.1.28-os/tools/
ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip 100% 2617KB 2.6MB/s 00:00
[root@gs-ansible-1-118 ESXI]# scp OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip 192.168.1.28:/vmfs/volumes/192.168.1.28-os/tools/
OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip 100% 6946KB 6.8MB/s 00:00
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'cd /vmfs/volumes/192.168.1.28-os/tools && unzip ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip '
Archive: ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip
inflating: index.xml
inflating: vendor-index.xml
inflating: metadata.zip
inflating: vib20/dcism/Dell_bootbank_dcism_4.2.0.0.ESXi6-2581.vib
ssh 192.168.1.28 'esxcli software vib install -v /vmfs/volumes/192.168.1.28-os/tools/vib20/dcism/Dell_bootbank_dcism_4.2.0.0.ESXi6-2581.vib'
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'cd /vmfs/volumes/192.168.1.28-os/tools && unzip OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip '
Archive: OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip
replace index.xml? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
inflating: index.xml
inflating: vendor-index.xml
inflating: metadata.zip
inflating: vib20/OpenManage/Dell_bootbank_OpenManage_10.1.0.0.ESXi670-4634.vib
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'esxcli software vib install -v /vmfs/volumes/192.168.1.28-os/tools/vib20/OpenManage/Dell_bootbank_OpenManage_10.1.0.0.ESXi670-4634.vib'
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: Dell_bootbank_OpenManage_10.1.0.0.ESXi670-4634
VIBs Removed:
VIBs Skipped:
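A quick check that both Dell VIBs landed on the host, using the stock esxcli software namespace:

```shell
# List installed VIBs on the host and filter for the two Dell packages.
ssh 192.168.1.28 'esxcli software vib list' | grep -i -E 'dcism|OpenManage'
```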
4.5.2 The perccli tool
# Fetch the packages
[root@gs-ansible-1-118 ESXI]# ll
total 13888
-rw-r--r-- 1 root root 2679338 Apr 20 11:44 ISM-Dell-Web-4.2.0.0-2581.VIB-ESX6i-Live_A00.zip
-rw-r--r-- 1 root root 7113139 Apr 20 11:44 OM-SrvAdmin-Dell-Web-10.1.0.0-4634.VIB-ESX67i_A00.zip
-rw-r--r-- 1 root root 4419566 Apr 19 16:26 PERCCLI_MRXX5_7.1910.0_A12_VMware.tar.gz
-rwxr-xr-x 1 root root 87 Apr 20 17:20 set-esxi-localtime.sh
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'mkdir /vmfs/volumes/192.168.1.28-os/tools'
[root@gs-ansible-1-118 ESXI]# scp PERCCLI_MRXX5_7.1910.0_A12_VMware.tar.gz 192.168.1.28:/vmfs/volumes/192.168.1.28-os/tools/
PERCCLI_MRXX5_7.1910.0_A12_VMware.tar.gz 100% 4316KB 4.2MB/s 00:00
# Install
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 'cd /vmfs/volumes/192.168.1.28-os/tools && tar -xf PERCCLI_MRXX5_7.1910.0_A12_VMware.tar.gz && esxcli software vib install -v /vmfs/volumes/192.168.1.28-os/tools/PERCCLI_MRXX5_7.1910.0_A12_VMware/PERCCLI_7.1910_VMware/ESXI\ 6.7/vmware-perccli-007.1910.vib'
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: BCM_bootbank_vmware-perccli_007.1910.0000.0000-01
VIBs Removed:
VIBs Skipped:
# Verify
[root@gs-ansible-1-118 ESXI]# ssh 192.168.1.28 '/opt/lsi/perccli/perccli show'
CLI Version = 007.1910.0000.0000 Oct 08, 2021
Operating system = VMkernel 6.7.0
Status Code = 0
Status = Success
Description = None
Number of Controllers = 1
Host Name = gs-esxi-1-28
Operating System = VMkernel 6.7.0
StoreLib IT Version = 07.2000.0200.0200
StoreLib IR3 Version = 16.14-0
System Overview :
===============
------------------------------------------------------------------------
Ctl Model Ports PDs DGs DNOpt VDs VNOpt BBU sPR DS EHS ASOs Hlth
------------------------------------------------------------------------
0 PERCH730Mini 8 7 2 0 2 0 Opt On 3 N 0 Opt
------------------------------------------------------------------------
Ctl=Controller Index|DGs=Drive groups|VDs=Virtual drives|Fld=Failed
PDs=Physical drives|DNOpt=Array NotOptimal|VNOpt=VD NotOptimal|Opt=Optimal
Msng=Missing|Dgd=Degraded|NdAtn=Need Attention|Unkwn=Unknown
sPR=Scheduled Patrol Read|DS=DimmerSwitch|EHS=Emergency Spare Drive
Y=Yes|N=No|ASOs=Advanced Software Options|BBU=Battery backup unit/CV
Hlth=Health|Safe=Safe-mode boot|CertProv-Certificate Provision mode
Chrg=Charging | MsngCbl=Cable Failure
[root@gs-ansible-1-118 ~]# ssh 192.168.1.28 '/opt/lsi/perccli/perccli /call/eall/sall show'
CLI Version = 007.1910.0000.0000 Oct 08, 2021
Operating system = VMkernel 6.7.0
Controller = 0
Status = Success
Description = Show Drive Information Succeeded.
Drive Information :
=================
----------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
----------------------------------------------------------------------------------
32:0 0 Onln 0 185.750 GB SATA SSD N N 512B INTEL SSDSC2BX200G4R U -
32:1 1 Onln 0 185.750 GB SATA SSD N N 512B INTEL SSDSC2BX200G4R U -
32:2 2 Onln 1 558.375 GB SAS HDD N Y 512B ST600MM0088 U -
32:3 3 Onln 1 558.375 GB SAS HDD N Y 512B ST600MM0088 U -
32:4 4 Onln 1 558.375 GB SAS HDD N Y 512B ST600MM0088 U -
32:5 5 Onln 1 558.375 GB SAS HDD N Y 512B ST600MM0088 U -
32:7 7 Onln 0 185.750 GB SATA SSD N N 512B INTEL SSDSC2BX200G4R U -
----------------------------------------------------------------------------------
EID=Enclosure Device ID|Slt=Slot No|DID=Device ID|DG=DriveGroup
DHS=Dedicated Hot Spare|UGood=Unconfigured Good|GHS=Global Hotspare
UBad=Unconfigured Bad|Sntze=Sanitize|Onln=Online|Offln=Offline|Intf=Interface
Med=Media Type|SED=Self Encryptive Drive|PI=Protection Info
SeSz=Sector Size|Sp=Spun|U=Up|D=Down|T=Transition|F=Foreign
UGUnsp=UGood Unsupported|UGShld=UGood shielded|HSPShld=Hotspare shielded
CFShld=Configured shielded|Cpybck=CopyBack|CBShld=Copyback Shielded
UBUnsp=UBad Unsupported|Rbld=Rebuild
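A few other perccli queries that tend to be useful on these controllers. Same binary path as installed above, controller index 0 as shown in the System Overview; the enclosure/slot in the locate example matches drive 32:2 from the listing but is only illustrative.

```shell
PERCCLI=/opt/lsi/perccli/perccli
ssh 192.168.1.28 "$PERCCLI /c0 show all"            # full controller detail
ssh 192.168.1.28 "$PERCCLI /c0/vall show"           # all virtual drives
ssh 192.168.1.28 "$PERCCLI /c0/bbu show"            # battery health
ssh 192.168.1.28 "$PERCCLI /c0/e32/s2 start locate" # blink the slot-2 LED
ssh 192.168.1.28 "$PERCCLI /c0/e32/s2 stop locate"  # ...and stop it
```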