Database - Oracle - Installing Oracle 11g RAC on Linux

In this lab environment, roughly 3 GB of RAM per machine is recommended,
with a dynamically allocated 100 GB virtual disk.
Two NICs per node: a public interface and a private interface, on two separate subnets.
NIC 1: host-only; NIC 2: internal network

1. Create the virtual machines

Names: node1_RAC_11gR2_rhel6u5_x64 and node2_RAC_11gR2_rhel6u5_x64:
2.5-4 GB RAM; boot order: hard disk + CD-ROM; NIC 1: host-only, NIC 2: internal network
Hostnames: node1.test.com and node2.test.com
Network:
Rename the first NIC to eth0 and the second to eth1, and check "connect automatically" on both.
eth0: static IP 192.168.0.1/24, gateway 192.168.0.254, DNS: 192.168.0.1, 192.168.0.2
(on node2: static IP 192.168.0.2/24, gateway 192.168.0.254, DNS: 192.168.0.1, 192.168.0.2)
eth1: static IP 192.168.1.1/24
(on node2: static IP 192.168.1.2/24)
Time zone: Asia/Shanghai
Storage: Use All Space, Review; delete /home, give swap 4096 MB, and allocate everything else to /
Package selection: Desktop

2. Adjust the system

Disable iptables/ip6tables/SELinux.
Stop the firewall:
#service iptables stop
#service ip6tables stop
#chkconfig iptables off
#chkconfig ip6tables off
Disable SELinux:
#vi /etc/selinux/config
SELINUX=disabled
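SELINUX=disabled only takes effect after a reboot; to turn enforcement off immediately for the current session as well:
#setenforce 0
#getenforce    (should report Permissive now, and Disabled after a reboot)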

Configure yum:
#rm -f /etc/yum.repos.d/*
#vi /etc/yum.repos.d/rhel6.repo
[Server]
name=Server
baseurl=file:///media/"RHEL_6.5 x86_64 Disc 1"/Server
enabled=1
gpgcheck=0
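The baseurl above assumes the RHEL 6.5 installation DVD is mounted at the GNOME automount path shown; adjust the path if you mount it elsewhere. A quick sanity check:
#mount | grep media    (confirm the DVD is mounted and note the exact path)
#yum clean all
#yum repolist    (the Server repository should show a non-zero package count)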
Install the VirtualBox Guest Additions:
#yum -y install gcc kernel-devel
#ln -s /usr/src/kernels/2.6.32-431.el6.x86_64/ /usr/src/linux
Right-click the CD icon and eject the installation disc
Devices -> Install Guest Additions
Right-click the CD icon and eject the disc again

3. Hardware requirements:

Memory / swap / /tmp / shared memory
#vi /etc/fstab    (permanent change)
tmpfs /dev/shm tmpfs defaults,size=3G 0 0
#mount -o remount /dev/shm
Temporary change:
#mount -t tmpfs shmfs -o size=3g /dev/shm
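To confirm the change took effect:
#df -h /dev/shm    (Size should now show 3.0G)
#free -m    (sanity-check total memory and swap)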

4. Create users and directories:

Users: grid, oracle
Groups: oinstall, asmadmin (ASM storage administrators), asmdba, asmoper, dba, oper
For separation of duties, create six groups and two users:
groupadd -g 1000 oinstall
groupadd -g 1001 dba
groupadd -g 1002 oper
groupadd -g 1003 asmadmin
groupadd -g 1004 asmdba
groupadd -g 1005 asmoper
useradd -u 1000 -g oinstall -G dba,oper,asmdba oracle
useradd -u 1001 -g oinstall -G asmadmin,asmdba,asmoper grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01

passwd grid
passwd oracle
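A quick check that the users, groups and directories came out as intended (the uid/gid numbers must match on both nodes):
#id grid    (uid 1001, primary group oinstall, plus asmadmin,asmdba,asmoper)
#id oracle    (uid 1000, primary group oinstall, plus dba,oper,asmdba)
#ls -ld /u01/app/grid /u01/app/11.2.0/grid /u01/app/oracle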

5. Set up the user profile files:

ORACLE_SID matches the instance name; TNS_ADMIN tells the system where the network/listener configuration directory is; EDITOR sets the preferred editor.

#vi ~grid/.bash_profile
export ORACLE_SID=+ASM1    # the ASM instances are named +ASM*; change this to +ASM2 on node2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export EDITOR=vi
export LANG=C
export NLS_LANG=american_america.AL32UTF8
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
umask 022

#vi ~grid/.bashrc    (optional)
alias sqlplus='rlwrap sqlplus'
alias asmcmd='rlwrap asmcmd'

#vi ~oracle/.bash_profile
export ORACLE_SID=orcl1    # change to orcl2 on node2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_HOSTNAME=node1.test.com    # change to node2.test.com on node2
export ORACLE_UNQNAME=orcl
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export NLS_LANG=american_america.AL32UTF8
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
export EDITOR=vi
export LANG=C
umask 022

#vi ~oracle/.bashrc    (optional)
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
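After editing the profiles, reload them and spot-check a few variables on each node, for example as grid:
$ source ~/.bash_profile
$ echo $ORACLE_SID $ORACLE_HOME    (+ASM1 and /u01/app/11.2.0/grid on node1)
$ which sqlplus    (will only resolve once the software is installed)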

6. Adjust resource limits:

#vi /etc/security/limits.conf
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768

oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
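The limits apply to new login sessions only; to verify them, open a fresh session as each user, for example:
#su - grid
$ ulimit -Sn; ulimit -Hn    (expect 1024 and 65536)
$ ulimit -Su; ulimit -Hu    (expect 2047 and 16384)
$ exit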

7. Adjust kernel parameters:

#vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
# shmmax should be roughly half of physical RAM; use 1051963392 for a ~2 GB VM
kernel.shmmax = 2076053504
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
#sysctl -p
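sysctl -p applies the settings immediately; to spot-check a few of them afterwards:
#sysctl fs.file-max kernel.shmmax kernel.sem net.ipv4.ip_local_port_range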

8. Install the required packages:

binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
elfutils-libelf-devel

rpm -q binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel make sysstat elfutils-libelf-devel
yum install -y binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel make sysstat elfutils-libelf-devel
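A convenient way to show only what is still missing, and to pull in the 32-bit (.i686) packages from the list above (the plain x86_64 names alone do not install them):
#rpm -q binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ glibc glibc-devel ksh libgcc libstdc++ libstdc++-devel libaio libaio-devel make sysstat elfutils-libelf-devel | grep "not installed"
#yum install -y glibc.i686 glibc-devel.i686 libgcc.i686 libstdc++.i686 libstdc++-devel.i686 compat-libstdc++-33.i686 libaio.i686 libaio-devel.i686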

Install rlwrap and bind (optional helpers: rlwrap gives arrow-key command history, bind provides the DNS server used below)

yum install -y bind

/installation/grid/rpm/cvuqdisk-1.0.9-1.rpm    (copy it to node2 with scp)
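cvuqdisk has to be installed as root on both nodes; the usual pattern once the grid media is accessible (the CVUQDISK_GRP variable tells the package which group owns the disks, oinstall here):
#export CVUQDISK_GRP=oinstall
#rpm -ivh /installation/grid/rpm/cvuqdisk-1.0.9-1.rpm
#scp /installation/grid/rpm/cvuqdisk-1.0.9-1.rpm node2:/tmp/    (then repeat the install on node2)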

9. Configure the network:

Normally clients connect to an instance's VIP; when a node fails, its VIP fails over to one of the surviving nodes. VIPs are used because this failover is fast, so clients get a quick response.
SCAN (Single Client Access Name) presents all the nodes behind a single name, so clients do not need to know every node's VIP.
When a client connects to port 1521 through a SCAN VIP, the SCAN listener (which runs on one of the nodes) forwards the connection request to a node's local listener and VIP.
To avoid connection congestion, one SCAN name resolves to at most three SCAN VIPs, and the corresponding three SCAN listeners are spread across the nodes to distribute the load.
The whole setup needs 9 IP addresses: 6 are declared in the hosts file and the 3 SCAN VIPs are resolved through DNS. A client connection example is shown after the address plan below.

node1:
public (eth0): 192.168.0.1/24, gateway 192.168.0.254
private (eth1): 192.168.1.1/24
node1 virtual IP: 192.168.0.11
node2:
public (eth0): 192.168.0.2/24, gateway 192.168.0.254
private (eth1): 192.168.1.2/24
node2 virtual IP: 192.168.0.12
SCAN name and SCAN VIPs: scan.test.com 192.168.0.101/102/103
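Once the cluster and database are up, clients connect through the SCAN rather than to an individual node. A minimal EZConnect example, assuming the orcl database created later in this guide and the default SCAN port 1521 (replace system with any database user):
$ sqlplus system@//scan.test.com:1521/orcl    (prompts for the password)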

#vi /etc/hosts    (add both node blocks below on both nodes)
#node1
192.168.0.1 node1.test.com node1 #public ip
192.168.1.1 node1-priv.test.com node1-priv #private ip
192.168.0.11 node1-vip.test.com node1-vip #node1 vip
#node2
192.168.0.2 node2.test.com node2 #public ip
192.168.1.2 node2-priv.test.com node2-priv #private ip
192.168.0.12 node2-vip.test.com node2-vip #node2 vip
That covers 6 of the 9 addresses; the 3 SCAN VIPs are added through DNS below.

Configure the primary DNS on node1 (in real production you normally do not set up DNS yourself; the network team does):
#vi /etc/named.conf
listen-on port 53 { any; };
listen-on-v6 port 53 { any; };
allow-query { any; };
dnssec-enable no;
dnssec-validation no;

#vi /etc/named.rfc1912.zones
zone "test.com" IN {
type master;
file "test.com.hosts";
};

zone "0.168.192.in-addr.arpa" IN {
type master;
file "192.168.0.rev";
};

#vi /var/named/test.com.hosts
$TTL 1D
@ IN SOA node1.test.com. root.node1.test.com. (
2016031601
3h
1h
1w
1h )
IN NS node1.test.com.
IN NS node2.test.com.
node1 IN A 192.168.0.1
node2 IN A 192.168.0.2
scan IN A 192.168.0.101
scan IN A 192.168.0.102
scan IN A 192.168.0.103

#vi /var/named/192.168.0.rev
$TTL 1D
@ IN SOA node1.test.com. root.node1.test.com. (
1
3h
1h
1w
1h )
IN NS node1.test.com.
IN NS node2.test.com.
1 IN PTR node1.test.com.
2 IN PTR node2.test.com.
101 IN PTR scan.test.com.
102 IN PTR scan.test.com.
103 IN PTR scan.test.com.

#service named start    (if it fails to start, run: rndc-confgen -r /dev/urandom -a and try again)
#chkconfig --level 35 named on
#nslookup
Test localhost/127.0.0.1, node1/192.168.0.1, node2/192.168.0.2, and scan/192.168.0.101 (102, 103).
Make sure these lookups succeed before configuring the secondary DNS.
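For example (run on node1; the SCAN name should return all three addresses):
#nslookup node1.test.com
#nslookup 192.168.0.2
#nslookup scan.test.com
#nslookup 192.168.0.101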

Configure the secondary (slave) DNS on node2:

#vi /etc/named.conf
listen-on port 53 { any; };
listen-on-v6 port 53 { any; };
allow-query { any; };
dnssec-enable no;
dnssec-validation no;

#vi /etc/named.rfc1912.zones    (append)
zone "test.com" IN {
type slave;
file "slaves/test.com.hosts";
masters { 192.168.0.1; };
};

zone "0.168.192.in-addr.arpa" IN {
type slave;
file "slaves/192.168.0.rev";
masters { 192.168.0.1; };
};

ls /var/named/slaves/
#service named start
ls /var/named/slaves/
chkconfig --level 35 named on
#nslookup - 192.168.0.2
Test localhost/127.0.0.1, node1/192.168.0.1, node2/192.168.0.2, and scan/192.168.0.101 (102, 103).

10. NTP (disable time synchronization)

#service ntpd stop
#chkconfig ntpd off
#mv /etc/ntp.conf /etc/ntp.conf.bak
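With ntpd removed, Oracle Clusterware's Cluster Time Synchronization Service (CTSS) runs in active mode and keeps the nodes' clocks in sync. After GI is installed (steps 12-13) this can be confirmed, for example:
#su - grid
$ crsctl check ctss    (should report that CTSS is in active mode)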

11. Configure shared storage

First disk group: OCR / voting disks (critical cluster configuration information), at least three disks
Second disk group: database data
Third disk group: FRA (backups)
SAN/NAS
OCR/voting disks: 3 x 1 GB (+CRS)
data: 2 x 10 GB (+DATA)
FRA: 1 x 10 GB (+FRA)

Create a subdirectory for the shared disks: mkdir -p "/root/virtualbox vms/shared_disk"    (not needed with external/attached storage)
Shut down node1 and node2

Add 6 fixed-size disks to node1 (done on the host; in real production three 1 GB disks plus one disk of 50-60 GB are enough):
"/root/virtualbox vms/shared_disk/asmdisk1.vdi" (and so on through asmdisk6.vdi)
In VirtualBox, mark all 6 disks as shareable
Attach the same 6 shared disks to node2 (a VBoxManage sketch follows below)
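A sketch of the host-side VirtualBox commands for the first 1 GB disk (repeat for asmdisk2..6 with the appropriate sizes; the controller name "SATA" and the port numbers are assumptions, check your VMs with VBoxManage showvminfo):
VBoxManage createhd --filename "/root/virtualbox vms/shared_disk/asmdisk1.vdi" --size 1024 --format VDI --variant Fixed
VBoxManage modifyhd "/root/virtualbox vms/shared_disk/asmdisk1.vdi" --type shareable
VBoxManage storageattach node1_RAC_11gR2_rhel6u5_x64 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "/root/virtualbox vms/shared_disk/asmdisk1.vdi" --mtype shareable
VBoxManage storageattach node2_RAC_11gR2_rhel6u5_x64 --storagectl "SATA" --port 1 --device 0 --type hdd --medium "/root/virtualbox vms/shared_disk/asmdisk1.vdi" --mtype shareable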
#ll /dev/sd*
Use fdisk -l to check that the new disks are visible
Run the following to generate udev rules that bind the disks; without this binding the asm device names disappear after a reboot:
#for i in b c d e f g ;    (list one letter per shared disk)
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
#start_udev; ls /dev/asm*    (confirm the asm-disk devices were created)

Set up temporary shared folders on node1/node2:
#mkdir /installation
#mount -t vboxsf installation /installation
#mkdir /software
#mount -t vboxsf software /software
On the host, unzip the three installation archives; then, inside the VM, install cvuqdisk from the directory extracted from the third (grid) archive: yum -y install cvuqdisk-1.0.9-1.rpm
[root@node1 installation]# yum -y install rlwrap-0.42-1.el6.x86_64.rpm    (for arrow-key command history)

12. Install Grid Infrastructure (GI) on node1

service NetworkManager stop
chkconfig NetworkManager off
#xhost +
#su - grid
$cd /installation/grid/
$ ./runInstaller

(Unzip the installation media first, then run runInstaller as the grid user.)
Choose the advanced installation; cluster name: test-cluster; SCAN name: scan.test.com; SCAN port: 1521
Do not configure GNS; add node2; set up SSH connectivity
The following three prerequisite warnings can be ignored:
pdksh - the installer expects its own version, but ksh is already installed, so this is fine
asm - the check behaves slightly differently in a virtual machine, so this is fine
dns - not a problem

Manually set up passwordless SSH trust between the nodes
# run on all nodes
rm -rf ~/.ssh    
mkdir ~/.ssh
chmod 700 ~/.ssh
/usr/bin/ssh-keygen -t rsa

# run on node1 only
ssh node1 cat ~/.ssh/id_rsa.pub>>~/.ssh/authorized_keys  
ssh node2 cat ~/.ssh/id_rsa.pub>>~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys node2:.ssh/authorized_keys

# run on node2 only
$ chmod 600 ~/.ssh/authorized_keys

Test the trust relationship:
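Each of these should print the remote date without a password prompt (run them on both nodes; the very first connection to each name asks you to accept the host key):
$ ssh node1 date
$ ssh node2 date
$ ssh node1-priv date
$ ssh node2-priv date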

ASM: disk group name CRS, redundancy: normal, change discovery path to /dev/asm*, use the three 1 GB disks (b, c, d)
Select "Use same password for these accounts" and enter a password
While the installer copies files near the end, you can watch the amount and speed of the copy in the target directory: du -sm *

When prompted, run the two scripts as root, first on node1 (wait for it to finish successfully), then on node2:
#/u01/app/oraInventory/orainstRoot.sh
#/u01/app/11.2.0/grid/root.sh

13. Test GI:

#su - grid
$ crsctl check crs
$ crsctl stat res -t
$ srvctl status asm
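A few additional checks that are useful at this point (run as grid; all are standard 11gR2 commands):
$ olsnodes -n    (both nodes should be listed)
$ srvctl status scan_listener
$ srvctl config scan    (should show scan.test.com and the three SCAN VIPs)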

14. Create the ASM disk groups on node1:

The ASM disk groups must exist before the database can be created.
Log in to the desktop as grid and run asmca to create the disk groups:
DATA: 2 x 10 GB (normal redundancy)
FRA: 1 x 10 GB (external redundancy)

15. Install the database software on node1:

Log in to the desktop as oracle and run runInstaller:
cd /installation/database
./runInstaller

Choose "Install database software only", the RAC installation type (the second option), select all nodes, and set up SSH connectivity.

16. Create the database as the oracle user on node1:
dbca, RAC database, database name orcl, nodes node1/node2, storage ASM, DATA disk group for data files, +FRA disk group for the fast recovery area, sample schemas, 800 MB memory, character set AL32UTF8

Troubleshooting notes:
Check the database character set:
SQL> select * from v$nls_parameters;
Clear the header of an ASM disk:
#dd if=/dev/zero of=/dev/sdb bs=1M count=1
If the grid installation fails and has to be redone, remove the files left over from the previous attempt:
delete everything under /tmp
delete everything under /u01/app/oraInventory
delete everything under /u01/app/11.2.0/grid
then re-check the ownership and permissions of these directories
Manually establish the SSH trust:
on node1/node2:
#su - grid
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh

$ ssh-keygen -t rsa
$ ssh-copy-id 192.168.0.2    (on node1)
$ ssh-copy-id 192.168.0.1    (on node2)

ssh node1 date
ssh node2 date
ssh node1-priv date
ssh node2-priv date

Alternatively, if the installer reports INS-06002 during the SSH setup step:
ssh-keygen -t rsa    (accept the default path; no passphrase)
ssh-keygen -t dsa
Do both of the above on both nodes
cd /home/grid/.ssh/

ssh rac1 cat /home/grid/.ssh/id_rsa.pub >>authorized_keys
ssh rac1 cat /home/grid/.ssh/id_dsa.pub >>authorized_keys

ssh rac2 cat /home/grid/.ssh/id_rsa.pub >>authorized_keys
ssh rac2 cat /home/grid/.ssh/id_dsa.pub >>authorized_keys

Disable auto-mounting to avoid desktop crashes:
#chmod -x /usr/libexec/gvfs-gdu-volume-monitor
init 3, then init 5
Stop unneeded services: bluetooth, cups, kdump, rhnsd

When copying the virtual machines from a Linux host to Windows, first switch eth0 to bridged mode (and switch it back afterwards); after copying to Windows, open node1's VM configuration file in a text editor and update the locations of the six shared disks wherever the old paths appear.


Reposted from blog.csdn.net/adson1987/article/details/90171305