Oracle 10g RAC - Installing Clusterware

Copyright notice: this is an original post by the author and may not be reproduced without permission. https://blog.csdn.net/xxzhaobb/article/details/86678604

This post covers the Clusterware installation. Once it completes, the version is 10.2.0.1.0; it will be upgraded to 10.2.0.5.0 in a later post.

A problem came up at the step below. The root cause was that SSH user equivalence between the nodes had not been set up properly; once it was fixed, the problem went away.
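A minimal sketch of setting up SSH user equivalence for the oracle user between the two nodes (rac10g01/rac10g02, the hostnames used later in this post); the exact commands the author ran are not shown, so treat this as illustrative only:

# As the oracle user, generate keys on BOTH nodes (accept defaults, empty passphrase)
ssh-keygen -t rsa
ssh-keygen -t dsa

# On node 1: gather the public keys of both nodes into one authorized_keys file,
# then push the same file back to node 2
cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh rac10g02 cat .ssh/id_rsa.pub .ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac10g02:.ssh/
chmod 600 ~/.ssh/authorized_keys

# Verify from each node that both hostnames log in without a password prompt
ssh rac10g01 date
ssh rac10g02 date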

The shared-storage method shown above (in a screenshot not reproduced here) did not work, so in the end I went back to raw devices; for the specific steps, see the previous post.
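For context only: on the RHEL 4-era systems commonly used with 10g RAC, raw device bindings are usually declared in /etc/sysconfig/rawdevices and activated with the rawdevices service. The partition names below are hypothetical placeholders; the author's actual layout is in the previous post.

# Example entries in /etc/sysconfig/rawdevices (format: <raw device> <block device>):
#   /dev/raw/raw1   /dev/sdb1     <- OCR (hypothetical partition)
#   /dev/raw/raw2   /dev/sdc1     <- voting disk (hypothetical partition)

# Activate the bindings and set the ownership/permissions the installer expects
service rawdevices restart
chown root:oinstall   /dev/raw/raw1
chmod 640 /dev/raw/raw1
chown oracle:oinstall /dev/raw/raw2
chmod 660 /dev/raw/raw2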

Running the root.sh script shown in the screenshot above, node 1 completed normally, but node 2 failed.
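For reference, the script in question is root.sh in the CRS home (path inferred from the error output below); it is run as root, on node 1 first and then on node 2:

# run as root on each node in turn, waiting for the first node to finish
/u01/app/oracle/product/10.2.0/crs/root.sh

Node 2's output: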

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
	rac10g01
	rac10g02
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/app/oracle/product/10.2.0/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
[root@rac10g02 crs]# 

The fix: edit the vipca file and add unset LD_ASSUME_KERNEL, then configure the network interface settings with oifcfg, and finally run the vipca script.

[root@rac10g01 bin]# vi vipca   -- add unset LD_ASSUME_KERNEL in the file (see the sketch after this command block)
[root@rac10g01 bin]# ./oifcfg setif -global eth1/192.168.2.0:public
[root@rac10g01 bin]# ./oifcfg setif -global eth2/10.10.10.0:cluster_interconnect
[root@rac10g01 bin]# ./oifcfg getif
eth1  192.168.2.0  global  public
eth2  10.10.10.0  global  cluster_interconnect
[root@rac10g01 bin]# ./vipca
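The screenshot of the edited vipca file is not reproduced here; the fragment below is a sketch of what the relevant part of $CRS_HOME/bin/vipca typically looks like after the change (exact wording and line positions vary by version and platform):

#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
  LD_ASSUME_KERNEL=2.4.19
  export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL      # added line: undo the setting so the bundled JRE can load libpthread

The same LD_ASSUME_KERNEL workaround is commonly applied to the srvctl script as well, since it sets the variable in the same way.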

After vipca finishes, go back to the earlier root.sh dialog in the installer and click OK; once confirmed, the installation completes.

Check the Clusterware version; it is 10.2.0.1.0:

[oracle@rac10g02 ~]$ crsctl query crs softwareversion
CRS software version on node [rac10g02] is [10.2.0.1.0]
[oracle@rac10g02 ~]$ 	
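Besides softwareversion, crsctl can also report the active cluster version, which the later upgrade to 10.2.0.5.0 will bump as well; the command is shown here without output since it was not captured in the original post:

crsctl query crs activeversion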

end
