I. Overview
1. Software Used
Product | Version | Architecture | Download Site |
---|---|---|---|
VirtualBox | 5.0 or later | Matches the host environment | https://www.virtualbox.org |
Oracle Enterprise Linux | 4 (Update 6 or later recommended) | x86 32-bit | |
Clusterware, Database | 11.1 (11.1.0.6 or later recommended) | x86 32-bit | https://support.oracle.com (My Oracle Support access required) |
ASMLib | 2.0 | x86 (Intel IA32) | http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html |

Oracle Database 11.2.0.1 can be downloaded from http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html.
2. Configuration Plan
A) Servers
VM | Host Name | Memory | Net Adapter 1 | Net Adapter 2 | Setup Method | Notes |
---|---|---|---|---|---|---|
RAC1 | rac1 | 2048MB | Bridged Adapter | Host-only Adapter | Fresh Linux install | |
RAC2 | rac2 | 2048MB | Bridged Adapter | Host-only Adapter | Clone of RAC1 | |
The cluster is built as a two-node RAC on Linux, so at least two Virtual Machines (VMs) are required.
B) Storage
- Per-node storage
File Name | Size | Type | Purpose | Notes |
---|---|---|---|---|
RAC1.vdi | 30GB | Dynamic | System area for node 1 | |
RAC2.vdi | 30GB | Dynamic | System area for node 2 | Created by cloning RAC1. |
- Shared storage
File Name | Size | Type | Purpose | Notes |
---|---|---|---|---|
OCR1.vdi | 300MB | Fixed / Shareable | OCR storage area | Created at a fixed size, which shareable disks require. |
OCR2.vdi | 300MB | Fixed / Shareable | | |
VOTE1.vdi | 300MB | Fixed / Shareable | Voting disk area | |
VOTE2.vdi | 300MB | Fixed / Shareable | | |
VOTE3.vdi | 300MB | Fixed / Shareable | | |
DATA1.vdi | 10GB | Fixed / Shareable | Data storage area | |
DATA2.vdi | 10GB | Fixed / Shareable | | |
DATA3.vdi | 10GB | Fixed / Shareable | | |
FRA1.vdi | 10GB | Fixed / Shareable | Fast Recovery Area | |
The Data area and the Fast Recovery Area are managed with Automatic Storage Management (ASM).
To save disk space, all installation files are unzipped to a single location on the host and accessed from the VMs through VirtualBox Guest Additions (shared folders).
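As a rough sketch of that shared-folder setup (the VM name `RAC1`, the share name, and the paths are placeholders, not values from this guide):

```bash
# On the host: share the unzipped installer directory with the VM
vboxmanage sharedfolder add RAC1 --name oracle_install \
    --hostpath /path/to/oracle_install --automount

# Inside the guest (as root), once Guest Additions are installed:
mkdir -p /mnt/oracle_install
mount -t vboxsf oracle_install /mnt/oracle_install
```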
C) Network
VM | Public IP | Private IP | Virtual IP | Netmask | Gateway | SCAN IP |
---|---|---|---|---|---|---|
RAC1 | 10.0.1.101 | 10.0.5.101 | 10.0.1.111 | 255.255.255.0 | 10.0.1.1 | 10.0.1.110 |
RAC2 | 10.0.1.102 | 10.0.5.102 | 10.0.1.112 | 255.255.255.0 | 10.0.1.1 | |
Both the public and private networks use a 24-bit netmask (255.255.255.0).
Set the Public and Virtual IPs to match the address range of your router or gateway.
3. Host Environment
For smooth hands-on practice, a 64-bit operating system with at least 8GB of memory is recommended.
To minimize delays from disk I/O, use an SSD or a dedicated internal disk separate from the one holding the host OS.
A fast external disk (eSATA or USB 3.0 or later) is also an option, as long as the connection stays uninterrupted.
II. VirtualBox Configuration
III. Linux Installation
IV. Operating System Configuration
Turn off services that are not needed for this setup:

```
chkconfig --level 123456 xinetd off
chkconfig --level 123456 sendmail off
chkconfig --level 123456 cups off
chkconfig --level 123456 cups-config-daemon off
chkconfig --level 123456 smartd off
chkconfig --level 123456 isdn off
chkconfig --level 123456 pcmcia off
chkconfig --level 123456 iptables off
```
```
[root@rac1 ~]# chkconfig --level 123456 xinetd off
[root@rac1 ~]# chkconfig --level 123456 sendmail off
[root@rac1 ~]# chkconfig --level 123456 cups off
[root@rac1 ~]# chkconfig --level 123456 cups-config-daemon off
[root@rac1 ~]# chkconfig --level 123456 smartd off
[root@rac1 ~]# chkconfig --level 123456 isdn off
[root@rac1 ~]# chkconfig --level 123456 pcmcia off
[root@rac1 ~]# chkconfig --level 123456 iptables off
```
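The eight commands above can also be collapsed into a loop; this is just a convenience form, not an extra step:

```bash
# Disable the same services in one pass
for svc in xinetd sendmail cups cups-config-daemon smartd isdn pcmcia iptables; do
    chkconfig --level 123456 "$svc" off
done
```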
Check that the required packages are installed:

```
rpm -q binutils-*
rpm -q compat-db-4*
rpm -q control-center-2*
rpm -q gcc-3*
rpm -q gcc-c++-3*
rpm -q glibc-2*
rpm -q glibc-common-2*
rpm -q gnome-libs-1*
rpm -q libstdc++-3*
rpm -q libstdc++-devel-3*
rpm -q make-3*
```
```
[root@rac1 ~]# rpm -q binutils-*
binutils-2.15.92.0.2-25
[root@rac1 ~]# rpm -q compat-db-4*
compat-db-4.1.25-9
[root@rac1 ~]# rpm -q control-center-2*
control-center-2.8.0-12.rhel4.5
[root@rac1 ~]# rpm -q gcc-3*
gcc-3.4.6-11.0.1
[root@rac1 ~]# rpm -q gcc-c++-3*
gcc-c++-3.4.6-11.0.1
[root@rac1 ~]# rpm -q glibc-2*
glibc-2.3.4-2.43
[root@rac1 ~]# rpm -q glibc-common-2*
glibc-common-2.3.4-2.43
[root@rac1 ~]# rpm -q gnome-libs-1*
gnome-libs-1.4.1.2.90-44.2
[root@rac1 ~]# rpm -q libstdc++-3*
libstdc++-3.4.6-11.0.1
[root@rac1 ~]# rpm -q libstdc++-devel-3*
libstdc++-devel-3.4.6-11.0.1
[root@rac1 ~]# rpm -q make-3*
make-3.80-7.EL4
```
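To spot a missing prerequisite at a glance, a small check loop like this can be used (package names as in the list above):

```bash
# Print only the packages that are NOT installed
for pkg in binutils compat-db control-center gcc gcc-c++ glibc glibc-common \
           gnome-libs libstdc++ libstdc++-devel make; do
    rpm -q "$pkg" > /dev/null 2>&1 || echo "MISSING: $pkg"
done
```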
Cluster host entries for /etc/hosts:

```
### Public
10.0.1.101      rac1.localdomain        rac1
10.0.1.102      rac2.localdomain        rac2
### Private
10.0.5.101      rac1-priv.localdomain   rac1-priv
10.0.5.102      rac2-priv.localdomain   rac2-priv
### Virtual
10.0.1.111      rac1-vip.localdomain    rac1-vip
10.0.1.112      rac2-vip.localdomain    rac2-vip
```
```
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain   localhost

### Public
10.0.1.101      rac1.localdomain        rac1
10.0.1.102      rac2.localdomain        rac2
### Private
10.0.5.101      rac1-priv.localdomain   rac1-priv
10.0.5.102      rac2-priv.localdomain   rac2-priv
### Virtual
10.0.1.111      rac1-vip.localdomain    rac1-vip
10.0.1.112      rac2-vip.localdomain    rac2-vip
```
Kernel parameters to append to /etc/sysctl.conf:

```
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

fs.file-max = 327679
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65536
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4294967295
net.core.rmem_default = 262144
# For 11g recommended value for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# For 10g uncomment the following line, comment other entries for this parameter and re-run sysctl -p
# net.core.rmem_max=2097152
net.core.wmem_default = 262144
net.core.wmem_max = 262144
fs.aio-max-nr = 3145728
net.ipv4.ip_local_port_range = 1024 65000
vm.lower_zone_protection = 100
```
```
[root@rac1 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 1
kernel.core_uses_pid = 1
fs.file-max = 327679
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65536
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4294967295
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
fs.aio-max-nr = 3145728
net.ipv4.ip_local_port_range = 1024 65000
vm.lower_zone_protection = 100
```
Resource limits for the oracle user, added to /etc/security/limits.conf:

```
oracle  soft  nofile   131072
oracle  hard  nofile   131072
oracle  soft  nproc    131072
oracle  hard  nproc    131072
oracle  soft  core     unlimited
oracle  hard  core     unlimited
oracle  soft  memlock  3500000
oracle  hard  memlock  3500000
```
Add the following line to /etc/pam.d/login:

```
session required pam_limits.so
```
Append to /etc/profile:

```
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
```
Parameters for the hangcheck-timer kernel module, added to /etc/modprobe.conf:

```
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
```
```
/sbin/modprobe hangcheck-timer
```
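To verify the module is loaded, and to reload it on every boot, something along these lines works (appending to rc.local is one common approach, not the only one):

```bash
# Check that the module is loaded
lsmod | grep hangcheck

# Load it automatically at boot
echo "/sbin/modprobe hangcheck-timer" >> /etc/rc.d/rc.local
```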
Create the oper group and put the oracle user into the required groups. The commented commands are only needed if the groups or user were not already created during the Linux installation:

```
# groupadd oinstall
# groupadd dba
groupadd oper
# useradd -g oinstall -G dba,oper,vboxsf oracle
usermod -g oinstall -G dba,oper,vboxsf oracle
passwd oracle
```
```
[root@rac1 ~]# groupadd oper
[root@rac1 ~]# usermod -g oinstall -G dba,oper,vboxsf oracle
[root@rac1 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
```
```
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01
```
```
[root@rac1 ~]# mkdir -p /u01/app/oracle
[root@rac1 ~]# chown -R oracle:oinstall /u01
```
```
PATH=$PATH:$HOME/bin:/u01/app/11.1.0/crs/bin
```
~/.bash_profile for the oracle user (the values in comments apply to node 2):

```
export TMP=/tmp
export TMPDIR=$TMP
export EDITOR=vi
export ORACLE_HOSTNAME=rac1        # rac2 on node 2
export ORACLE_UNQNAME=racdb
export ORACLE_BASE=/u01/app/oracle
export CRS_HOME=/u01/app/11.1.0/crs
export DB_HOME=$ORACLE_BASE/product/11.1.0/db_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=racdb1           # racdb2 on node 2
export ORACLE_TERM=xterm
export BASE_PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$CRS_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export NLS_LANG=AMERICAN_KOREA.AL32UTF8

if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
```
~/.crs_env:

```
export ORACLE_SID=+ASM1            # +ASM2 on node 2
export ORACLE_HOME=$CRS_HOME
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$CRS_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
```
~/.db_env:

```
export ORACLE_SID=racdb1           # racdb2 on node 2
export ORACLE_HOME=$DB_HOME
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$CRS_HOME/bin:$BASE_PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
```
```
alias crs_env='. ~/.crs_env'
alias db_env='. ~/.db_env'
alias ss='sqlplus / as sysdba'
alias ltr='ls -ltr'
```
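For example, the aliases switch between the ASM and database environments within one shell session:

```bash
# Load the Clusterware/ASM environment, then switch back to the DB environment
crs_env
echo $ORACLE_SID     # +ASM1
db_env
echo $ORACLE_SID     # racdb1
```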
V. Shared Storage Configuration
```
vboxmanage createmedium --filename OCR1.vdi --size 300 --format VDI --variant Fixed
vboxmanage createmedium --filename OCR2.vdi --size 300 --format VDI --variant Fixed
vboxmanage createmedium --filename VOTE1.vdi --size 300 --format VDI --variant Fixed
vboxmanage createmedium --filename VOTE2.vdi --size 300 --format VDI --variant Fixed
vboxmanage createmedium --filename VOTE3.vdi --size 300 --format VDI --variant Fixed
vboxmanage createmedium --filename DATA1.vdi --size 10240 --format VDI --variant Fixed
vboxmanage createmedium --filename DATA2.vdi --size 10240 --format VDI --variant Fixed
vboxmanage createmedium --filename DATA3.vdi --size 10240 --format VDI --variant Fixed
vboxmanage createmedium --filename FRA1.vdi --size 10240 --format VDI --variant Fixed
```
```
> vboxmanage createmedium --filename OCR1.vdi --size 300 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: f874820f-3f90-44b2-bc79-09b288c07bb5

> vboxmanage createmedium --filename OCR2.vdi --size 300 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 58144599-f14b-4319-ae7f-7b7df7dfb329

> vboxmanage createmedium --filename VOTE1.vdi --size 300 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: b9899f79-68b6-40f4-b718-c73635688550

> vboxmanage createmedium --filename VOTE2.vdi --size 300 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: e2863fe6-60a5-45c7-b69c-e8673f0f22b9

> vboxmanage createmedium --filename VOTE3.vdi --size 300 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 42ee91fa-1126-4aa8-95d5-45db095b6cb9

> vboxmanage createmedium --filename DATA1.vdi --size 10240 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: bdc67b80-d2ab-466a-80c7-90a1c08b3cdd

> vboxmanage createmedium --filename DATA2.vdi --size 10240 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 4aeabd65-1ba8-40b9-965d-8298b15c6bc0

> vboxmanage createmedium --filename DATA3.vdi --size 10240 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 12e9ff14-0712-4ace-bb8b-02c53238f22b

> vboxmanage createmedium --filename FRA1.vdi --size 10240 --format VDI --variant Fixed
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Medium created. UUID: 561a9964-9a00-493d-b2d5-6ff628bce3be
```
```
vboxmanage modifymedium OCR1.vdi --type shareable
vboxmanage modifymedium OCR2.vdi --type shareable
vboxmanage modifymedium VOTE1.vdi --type shareable
vboxmanage modifymedium VOTE2.vdi --type shareable
vboxmanage modifymedium VOTE3.vdi --type shareable
vboxmanage modifymedium DATA1.vdi --type shareable
vboxmanage modifymedium DATA2.vdi --type shareable
vboxmanage modifymedium DATA3.vdi --type shareable
vboxmanage modifymedium FRA1.vdi --type shareable
```
```
> vboxmanage modifymedium OCR1.vdi --type shareable
> vboxmanage modifymedium OCR2.vdi --type shareable
> vboxmanage modifymedium VOTE1.vdi --type shareable
> vboxmanage modifymedium VOTE2.vdi --type shareable
> vboxmanage modifymedium VOTE3.vdi --type shareable
> vboxmanage modifymedium DATA1.vdi --type shareable
> vboxmanage modifymedium DATA2.vdi --type shareable
> vboxmanage modifymedium DATA3.vdi --type shareable
> vboxmanage modifymedium FRA1.vdi --type shareable
```
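Each shared disk must then be attached to both VMs. A sketch for the first disk is shown below; the controller name "SATA" and the port numbers are assumptions that depend on how the VMs were created:

```bash
# Attach OCR1.vdi to both VMs as a shareable disk (controller name and port are examples)
vboxmanage storageattach RAC1 --storagectl "SATA" --port 1 --device 0 \
    --type hdd --medium OCR1.vdi --mtype shareable
vboxmanage storageattach RAC2 --storagectl "SATA" --port 1 --device 0 \
    --type hdd --medium OCR1.vdi --mtype shareable
```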
```
ls /dev/sd*
cat /proc/partitions
```
```
[root@rac1 ~]# ls /dev/sd*
/dev/sda   /dev/sda2  /dev/sdb  /dev/sdd  /dev/sdf  /dev/sdh  /dev/sdj
/dev/sda1  /dev/sda3  /dev/sdc  /dev/sde  /dev/sdg  /dev/sdi
[root@rac1 ~]# cat /proc/partitions
major minor  #blocks  name

   8     0   31457280 sda
   8     1     104391 sda1
   8     2    4192965 sda2
   8     3   27157882 sda3
   8    16     307200 sdb
   8    32     307200 sdc
   8    48     307200 sdd
   8    64     307200 sde
   8    80     307200 sdf
   8    96   10485760 sdg
   8   112   10485760 sdh
   8   128   10485760 sdi
   8   144   10485760 sdj
```
```
fdisk /dev/sdb
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
fdisk /dev/sdf
fdisk /dev/sdg
fdisk /dev/sdh
fdisk /dev/sdi
fdisk /dev/sdj
```
```
[root@rac1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-300, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-300, default 300):
Using default value 300

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-300, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-300, default 300):
Using default value 300

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-300, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-300, default 300):
Using default value 300

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk /dev/sde
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-300, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-300, default 300):
Using default value 300

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-300, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-300, default 300):
Using default value 300

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk /dev/sdg
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk /dev/sdh
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk /dev/sdi
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]# fdisk /dev/sdj
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
```
```
ls /dev/sd*
cat /proc/partitions
```
```
[root@rac1 ~]# cat /proc/partitions
major minor  #blocks  name

   8     0   31457280 sda
   8     1     104391 sda1
   8     2    4192965 sda2
   8     3   27157882 sda3
   8    16     307200 sdb
   8    17     307184 sdb1
   8    32     307200 sdc
   8    33     307184 sdc1
   8    48     307200 sdd
   8    49     307184 sdd1
   8    64     307200 sde
   8    65     307184 sde1
   8    80     307200 sdf
   8    81     307184 sdf1
   8    96   10485760 sdg
   8    97   10482381 sdg1
   8   112   10485760 sdh
   8   113   10482381 sdh1
   8   128   10485760 sdi
   8   129   10482381 sdi1
   8   144   10485760 sdj
   8   145   10482381 sdj1
```
Raw device bindings in /etc/sysconfig/rawdevices:

```
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
/dev/raw/raw3 /dev/sdd1
/dev/raw/raw4 /dev/sde1
/dev/raw/raw5 /dev/sdf1
/dev/raw/raw6 /dev/sdg1
/dev/raw/raw7 /dev/sdh1
/dev/raw/raw8 /dev/sdi1
/dev/raw/raw9 /dev/sdj1
```
```
service rawdevices restart
```
```
[root@rac1 ~]# service rawdevices restart
Assigning devices:
           /dev/raw/raw1  -->   /dev/sdb1
/dev/raw/raw1:  bound to major 8, minor 17
           /dev/raw/raw2  -->   /dev/sdc1
/dev/raw/raw2:  bound to major 8, minor 33
           /dev/raw/raw3  -->   /dev/sdd1
/dev/raw/raw3:  bound to major 8, minor 49
           /dev/raw/raw4  -->   /dev/sde1
/dev/raw/raw4:  bound to major 8, minor 65
           /dev/raw/raw5  -->   /dev/sdf1
/dev/raw/raw5:  bound to major 8, minor 81
           /dev/raw/raw6  -->   /dev/sdg1
/dev/raw/raw6:  bound to major 8, minor 97
           /dev/raw/raw7  -->   /dev/sdh1
/dev/raw/raw7:  bound to major 8, minor 113
           /dev/raw/raw8  -->   /dev/sdi1
/dev/raw/raw8:  bound to major 8, minor 129
           /dev/raw/raw9  -->   /dev/sdj1
/dev/raw/raw9:  bound to major 8, minor 145
done
```
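The active bindings can also be queried directly at any time:

```bash
# List every current raw device binding
raw -qa
```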
Edit the udev permissions file (on Enterprise Linux 4 this is typically /etc/udev/permissions.d/50-udev.permissions) so the raw devices are owned by oracle:dba:

```
:113                      => jump to line 113 (in vi)
#raw/*:root:disk:0660     => comment out the existing entry
raw/*:oracle:dba:0660     => add this entry on the line below
```
```
#raw/*:root:disk:0660
raw/*:oracle:dba:0660
```
```
cd /dev/raw
ls -ltra
```
```
[root@rac1 ~]# cd /dev/raw
[root@rac1 raw]# ls -ltra
total 0
crw-rw----  1 oracle dba 162, 1 Aug 22 11:52 raw1
crw-rw----  1 oracle dba 162, 2 Aug 22 11:52 raw2
crw-rw----  1 oracle dba 162, 3 Aug 22 11:52 raw3
crw-rw----  1 oracle dba 162, 4 Aug 22 11:52 raw4
crw-rw----  1 oracle dba 162, 5 Aug 22 11:52 raw5
crw-rw----  1 oracle dba 162, 6 Aug 22 11:52 raw6
crw-rw----  1 oracle dba 162, 7 Aug 22 11:52 raw7
crw-rw----  1 oracle dba 162, 8 Aug 22 11:52 raw8
crw-rw----  1 oracle dba 162, 9 Aug 22 11:52 raw9
drwxr-xr-x  2 root   root    220 Aug 22 11:52 .
drwxr-xr-x 10 root   root   6200 Aug 22 11:52 ..
```
VI. RAC2 VM Setup
1. Cloning the VM for Node 2
```
vboxmanage clonemedium rac1.vdi ..\rac2.vdi --format VDI
```
```
> vboxmanage clonemedium rac1.vdi ..\rac2.vdi --format VDI
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone medium created in format 'VDI'. UUID: b7d052cb-63c9-4f92-b81c-b429163680d4
```
2. Reboot and Network Reconfiguration
On rac2, update ~/.bash_profile:

```
...
export ORACLE_HOSTNAME=rac2
...
export ORACLE_SID=racdb2
...
```

~/.crs_env:

```
export ORACLE_SID=+ASM2
...
```

~/.db_env:

```
export ORACLE_SID=racdb2
...
```
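The cloned node's host name and IP addresses must also be changed. A sketch of the files involved on Enterprise Linux 4 (the device names eth0/eth1 are assumptions):

```bash
# /etc/sysconfig/network                     -> HOSTNAME=rac2.localdomain
# /etc/sysconfig/network-scripts/ifcfg-eth0  -> IPADDR=10.0.1.102  (public)
# /etc/sysconfig/network-scripts/ifcfg-eth1  -> IPADDR=10.0.5.102  (private)
service network restart
```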
3. Passwordless SSH Setup
```
mkdir .ssh
```
Node 1
```
[oracle@rac1 ~]$ mkdir .ssh
```
Node 2
```
[oracle@rac2 ~]$ mkdir .ssh
```
Node 1:

```
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
cd .ssh/
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
scp authorized_keys rac2:/home/oracle/.ssh/
```
Node 2:

```
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
cd .ssh/
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
scp authorized_keys rac1:/home/oracle/.ssh/
```
```
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
```
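Running these from both nodes also seeds known_hosts with every host key, which the installer requires. A loop form of the same check:

```bash
# Confirm passwordless SSH to every address (answer 'yes' to first-time prompts)
for host in rac1 rac2 rac1-priv rac2-priv; do
    ssh "$host" date
done
```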
Node 1
Node 2
```
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
```
Node 1
```
[oracle@rac1 ~]$ exec /usr/bin/ssh-agent $SHELL
[oracle@rac1 ~]$ /usr/bin/ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
```
Node 2
```
[oracle@rac2 ~]$ exec /usr/bin/ssh-agent $SHELL
[oracle@rac2 ~]$ /usr/bin/ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
```
```
cd <clusterware unzip path>/clusterware/cluvfy
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
```
VII. Clusterware Installation
```
/u01/app/oraInventory/orainstRoot.sh
```
```
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete
```
```
[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete
```
```
/u01/app/11.1.0/crs/root.sh
```
```
[root@rac1 ~]# /u01/app/11.1.0/crs/root.sh
WARNING: directory '/u01/app/11.1.0' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/u01/app/11.1.0' is not owned by root. Changing owner to root
The directory '/u01/app' is not owned by root. Changing owner to root
The directory '/u01' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        rac1
Cluster Synchronization Services is inactive on these nodes.
        rac2
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
```
```
[root@rac2 ~]# /u01/app/11.1.0/crs/root.sh
WARNING: directory '/u01/app/11.1.0' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/u01/app/11.1.0' is not owned by root. Changing owner to root
The directory '/u01/app' is not owned by root. Changing owner to root
The directory '/u01' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        rac1
        rac2
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
```
```
vipca
```
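Once vipca finishes, the resource states can be confirmed with the 11.1 crs_stat utility:

```bash
# Show all CRS resources with their target and current states
crs_stat -t
```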
VIII. Database Installation
```
/u01/app/oracle/product/11.1.0/db_1/root.sh
```
```
[root@rac1 ~]# /u01/app/oracle/product/11.1.0/db_1/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.1.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
```
```
[root@rac2 ~]# /u01/app/oracle/product/11.1.0/db_1/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.1.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
```
IX. Clusterware Patching
```
/u01/app/11.1.0/crs/bin/crsctl stop crs
/u01/app/11.1.0/crs/install/root111.sh
```
```
[root@rac1 ~]# /u01/app/11.1.0/crs/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped Oracle Clusterware resources
Stopping Cluster Synchronization Services.
Shutting down the Cluster Synchronization Services daemon.
Shutdown request successfully issued.
[root@rac1 ~]# /u01/app/11.1.0/crs/install/root111.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/11.1.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
11107 patch successfully applied.
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/11.1.0/crs/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/11.1.0/crs/install/paramfile.crs
Setting cluster unique identifier
Restarting Oracle clusterware
Stopping Oracle clusterware
Stopping resources. This could take several minutes.
Successfully stopped Oracle Clusterware resources
Stopping Cluster Synchronization Services.
Shutting down the Cluster Synchronization Services daemon.
Shutdown request successfully issued.
Waiting for Cluster Synchronization Services daemon to stop
Cluster Synchronization Services daemon has stopped
Starting Oracle clusterware
Attempting to start Oracle Clusterware stack
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Cluster Synchronization Services daemon has started
Event Manager daemon has started
Cluster Ready Services daemon has started
```
```
[root@rac2 ~]# /u01/app/11.1.0/crs/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped Oracle Clusterware resources
Stopping Cluster Synchronization Services.
Shutting down the Cluster Synchronization Services daemon.
Shutdown request successfully issued.
[root@rac2 ~]# /u01/app/11.1.0/crs/install/root111.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/11.1.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
11107 patch successfully applied.
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/app/11.1.0/crs/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/app/11.1.0/crs/install/paramfile.crs
```
X. Database Patching
```
/u01/app/oracle/product/11.1.0/db_1/root.sh
```
```
[root@rac1 ~]# /u01/app/oracle/product/11.1.0/db_1/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.1.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
```
```
[root@rac2 ~]# /u01/app/oracle/product/11.1.0/db_1/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.1.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
```
XI. Listener Creation
XII. ASM Instance Creation
XIII. Database Creation
```
dbca
```