How To

Install Oracle RAC 10g R2 on SUSE Linux 9

This page is a fast and easy (almost trivial) guide to installing Oracle RAC 10g R2 on two nodes running a freshly installed SLES 9 (SUSE Linux Enterprise Server).

Prerequisites:

>= 2 servers
1GB RAM, 2*RAM swap
2 NIC for each node with the SAME interface name
4GB of disk for the software, 2GB of local disk for the DB
400MB on /tmp (or wherever the TEMP/TMPDIR variables point)
Shared Storage between all nodes
Oracle 10g R2 (10.2.0.1)
root access to the system, X11 working!
ssh/rsh, jdk > 1.4.2
OS SUSE ES 9 SP 2 (2.6.5-7.97)
Packages: Default Installation +
 gcc-3.3.3 (use YaST /sbin/yast2 to install "C/C++ Compiler&Tools")
 orarun-1.8 (YaST->Software->Productivity->Databases)
No Oracle SW already installed or running on the system (otherwise this easy installation procedure does not apply)
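The RAM/swap prerequisite above can be checked with a short script. This is a minimal sketch: the "swap >= 2*RAM" rule comes from the list above, and the /proc/meminfo field names are the standard Linux 2.6 ones.

```shell
#!/bin/sh
# Check the "swap >= 2 * RAM" prerequisite; values are in kB.
# Usage: check_swap RAM_KB SWAP_KB  -> prints OK or TOO SMALL.
check_swap() {
    ram_kb=$1
    swap_kb=$2
    required=$((ram_kb * 2))
    if [ "$swap_kb" -ge "$required" ]; then
        echo "swap OK (${swap_kb} kB >= ${required} kB)"
    else
        echo "swap TOO SMALL (${swap_kb} kB < ${required} kB)"
    fi
}

# On a live node the inputs come from /proc/meminfo:
#   check_swap "$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)" \
#              "$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)"
check_swap 1048576 2097152
```

Run it on every node before starting: a too-small swap is one of the checks the Oracle installer will otherwise fail on later.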


Preparation (as root)

Define FSs and paths for Oracle: stage, ORACLE_BASE, ORACLE_HOME, ...

Prepare FSs with the required size (eg. 3x10GB (SW, stage, backup)).

Download all the required CDs, put them on the system (mount /dev/cdrom; cd /media/cdrecorder; cp ... )
Decompress the files (unzip)

Edit /etc/profile.d/oracle.sh and set appropriate values (eg. ORACLE_SID)

Run the script /usr/sbin/rcoracle start to set up the kernel parameters
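To verify what the script actually applied, the relevant kernel parameters can be read back directly from /proc/sys. These are the usual Linux 2.6 names Oracle 10g cares about; the values rcoracle sets come from /etc/sysconfig/oracle.

```shell
#!/bin/sh
# Read back the IPC/file-handle parameters that rcoracle tunes,
# straight from /proc/sys (no sysctl binary needed).
for p in kernel/shmmax kernel/shmmni kernel/shmall \
         kernel/sem fs/file-max; do
    printf '%-15s %s\n' "${p#*/}" "$(cat /proc/sys/$p)"
done
```

If a value looks wrong, adjust it in /etc/sysconfig/oracle and run rcoracle again rather than editing /proc/sys by hand.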

Enable the oracle user (set /bin/bash as its shell in /etc/passwd)
 All nodes must use the same UID, GID, ... and should use the same FS layout, environment, ...

Configure public and private IPs, Virtual IPs, ... in /etc/hosts
 (put the FQDN first, and use NIC interfaces with the SAME name on all nodes: it's important!) no firewall between the nodes
 Typical files/commands to be configured:
 /etc/hosts (must contain all the NICs (eg. node01), PIPs (eg. node01-priv), VIPs (eg. node01-vip), aliases)
 /etc/sysconfig/network
 hostname (check it with uname -n)
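A minimal /etc/hosts for a two-node cluster might look like this. The names and addresses are placeholders; what matters is the pattern of public, private, and virtual entries with the FQDN first:

```
# Public network (FQDN first, then the short alias)
192.168.1.101   node01.example.com        node01
192.168.1.102   node02.example.com        node02
# Private interconnect
10.0.0.1        node01-priv.example.com   node01-priv
10.0.0.2        node02-priv.example.com   node02-priv
# Virtual IPs (used by the Oracle listeners after failover)
192.168.1.111   node01-vip.example.com    node01-vip
192.168.1.112   node02-vip.example.com    node02-vip
```

Keep the file identical on all nodes.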

Configure ~oracle/.rhosts and check rsh between all nodes/interfaces (ssh is better but a bit more complex...)

Set identical date/time on all nodes
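A quick way to spot clock drift before installing is to compare epoch seconds between nodes. The helper below takes the two values as arguments; on a live system you would feed it `date +%s` locally and via rsh/ssh (node names are placeholders).

```shell
#!/bin/sh
# clock_drift LOCAL_EPOCH REMOTE_EPOCH MAX_SECONDS
# Prints OK if the difference is within MAX_SECONDS.
clock_drift() {
    local_s=$1; remote_s=$2; max=$3
    diff=$((local_s - remote_s))
    [ "$diff" -lt 0 ] && diff=$((-diff))
    if [ "$diff" -le "$max" ]; then
        echo "OK (drift ${diff}s)"
    else
        echo "DRIFT TOO LARGE (${diff}s > ${max}s)"
    fi
}

# Example against node02 (placeholder hostname):
#   clock_drift "$(date +%s)" "$(ssh node02 date +%s)" 5
clock_drift 1000000010 1000000007 5
```

See also "Use NTP!" in the More info section: keeping the clocks synchronized permanently is better than a one-off alignment.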

Configure shared raw devices and make them available to Oracle
 The devices must have the same name on all nodes
 (to check the partitioning: fdisk -l; to check the binding: raw -qa;
  edit /etc/raw with entries like raw1:sda1 (or hda1), set the correct ownership on the /dev/raw/rawX files,
  and bind them with the /etc/init.d/raw start command)
For Oracle Clusterware:
 2  x 100MB (1 if external redundancy is provided) OCR (Oracle Cluster Registry)
 3  x  20MB (1 if external redundancy is provided) Voting Disk
For Oracle Database Files:
 1  x 500MB SYSTEM
 1  x 1GB (+250MB for nodes exceeding 2) SYSAUX
 n  x 500MB (n=number of nodes) UNDOTBSn
 1  x 500MB TEMP
 1  x 160MB EXAMPLE
 1  x 120MB USERS
 2n x 150MB (n=number of nodes) Redo Log Files
 2  x 110MB Control Files
 1  x   5MB SPFILE
 1  x   5MB Password File
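Putting the binding and the sizing together, an /etc/raw covering the Clusterware files might start like this (the partition names are examples; adapt them to your shared storage layout):

```
# /etc/raw -- SLES syntax is rawN:blockdevice
raw1:sdc1    # OCR (100MB)
raw2:sdd1    # OCR mirror (100MB)
raw3:sdc2    # Voting Disk (20MB)
raw4:sdd2    # Voting Disk (20MB)
raw5:sde1    # Voting Disk (20MB)
```

After /etc/init.d/raw start, verify the result with raw -qa and give the /dev/raw/rawX files to the oracle user, on every node.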

Edit the Database File to raw devices mapping file (it is not needed for the Clusterware installation, only for RAC/DBCA, so it can be done later...):
 # $ORACLE_BASE/oradata/dbname/dbname_raw.conf
 system=/dev/raw/rawX
 sysaux=/dev/raw/rawX
 example=/dev/raw/rawX
 users=/dev/raw/rawX
 temp=/dev/raw/rawX
 undotbs1=/dev/raw/rawX
 undotbs2=/dev/raw/rawX
 redo1_1=/dev/raw/rawX
 redo1_2=/dev/raw/rawX
 redo2_1=/dev/raw/rawX
 redo2_2=/dev/raw/rawX
 control1=/dev/raw/rawX
 control2=/dev/raw/rawX
 spfile=/dev/raw/rawX
 pwdfile=/dev/raw/rawX

Set the variable DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf
 dbca will use this setting
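Before running dbca it is worth sanity-checking the mapping file. This sketch only verifies the line format and that each mapped raw device node exists; the key names are the ones from the example file above.

```shell
#!/bin/sh
# check_raw_conf FILE -- verify name=/dev/raw/rawN lines and that
# each mapped device node actually exists on this host.
check_raw_conf() {
    rc=0
    while IFS='=' read -r name dev; do
        case "$name" in ''|\#*) continue ;; esac    # skip blanks/comments
        case "$dev" in
            /dev/raw/raw*) ;;
            *) echo "bad mapping: $name=$dev"; rc=1; continue ;;
        esac
        [ -e "$dev" ] || { echo "missing device: $dev ($name)"; rc=1; }
    done < "$1"
    return $rc
}

# Usage: check_raw_conf $ORACLE_BASE/oradata/dbname/dbname_raw.conf
```

Run it on every node: the devices must be visible under the same names everywhere, or dbca will fail on the node where they are not.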

Check configuration prerequisites:
 /mountpoint/clusterware/cluvfy/runcluvfy.sh comp nodecon -n node01,node02 -verbose
 /mountpoint/clusterware/cluvfy/runcluvfy.sh stage -post hwos -n node01,node02 -verbose
 /mountpoint/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n node01,node02 -verbose
Check failures and fix them (some errors/warnings can be misleading...)

Oracle Clusterware Installation (as oracle)

Start the installation (xhost +; set DISPLAY=[hostname]:0.0, then ./runInstaller)
 It'll complain about the missing inventory location (oraInst.loc) and inventory, suggesting to create them... Yes, of course!
 There are some scripts to be run as root...
 It's important to answer correctly all the questions about host names and networking
 During the installation the most important settings are the OCR and the Voting Disk

Finish the Clusterware installation on all the nodes before starting the RAC installation


Oracle RAC Installation (as oracle)

Check configuration prerequisites:
 /mountpoint/clusterware/cluvfy/runcluvfy.sh stage -pre dbinst -n node01,node02 -verbose
 RAC's ORACLE_HOME *must* be different from Clusterware's ORACLE_HOME

Uncompress the Oracle SW if You have not done it previously (unzip)

Start installation (xhost +; set DISPLAY then ./runInstaller)
 Choose the Enterprise installation type
 There are some scripts to be run as root...
 You are prompted to define the SYS/SYSTEM/... password. Do not forget it!
 You must run the vipca wizard to configure Virtual IP for server and services
 At the end of the installation You must create a DB. Choose the Advanced configuration
 to create a custom database (using raw devices)

You can also create the DB later. To check DB creation prerequisites:
 /mountpoint/clusterware/cluvfy/runcluvfy.sh stage -pre dbcfg -n node01,node02 -d oracle_home -verbose
To create new DBs use dbca and choose RAC instance instead of Standalone instance


Post Installation

Check that Oracle is working (sqlplus, or Enterprise Manager at http://hostname:1158/em)

Configure startup (edit /etc/sysconfig/oracle, check /etc/oratab)

Backup Voting Disk

Install Patches

Buon divertimento! Have a lot of Fun! Que te diviertas! Diverte-te!


More info...

Defining shared DB storage is quite complex... There are several possibilities for the Oracle Clusterware files and for the Oracle database files (Database, Recovery, Voting Disk, Oracle Cluster Registry):

The following table shows all the alternatives:

 Storage Option      OCR, Voting Disk  Clusterware  Database  Recovery
 ASM                 No                No           Yes       Yes
 OCFS                Yes               No           Yes       Yes
 Local Storage       No                Yes          No        No
 NFS (on NAS)        Yes               Yes          Yes       Yes
 Shared Raw Devices  Yes               No           Yes       No

Configuring DB storage requires a specific design; ASM is the Oracle-suggested (and the best) configuration. In this sample configuration we used raw devices (and local storage for the SW) because they are easily defined and cause few problems.

Use NTP!

SSH is far better than rcp and rsh
    Configure ssh for oracle (do this as oracle):
	mkdir ~/.ssh; chmod 700 ~/.ssh
	/usr/bin/ssh-keygen -t rsa; /usr/bin/ssh-keygen -t dsa; cd ~/.ssh
	repeat (all nodes, all keys)
	    cat id_rsa.pub >> authorized_keys (and the same for id_dsa.pub)
	chmod 600 authorized_keys
	repeat (all nodes, public/private names)
	    test the connection (eg. ssh node01 uname -a) and accept the fingerprint
	Use a null passphrase (easier), otherwise You have to
	    exec /usr/bin/ssh-agent $SHELL
	    /usr/bin/ssh-add
	    vi ~/.ssh/config #### Host * ForwardX11 no ####
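The key-generation step above can be sketched as a small helper (shown for the RSA key only; the node names in the comment are placeholders, and a passphrase-less key is assumed as in the easy variant above):

```shell
#!/bin/sh
# setup_ssh_dir DIR -- prepare an .ssh directory with a
# passphrase-less RSA key and seed authorized_keys with it.
setup_ssh_dir() {
    dir=$1
    mkdir -p "$dir" && chmod 700 "$dir"
    [ -f "$dir/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$dir/id_rsa"
    cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
    chmod 600 "$dir/authorized_keys"
}

# On each node, as oracle:  setup_ssh_dir ~/.ssh
# Then append every OTHER node's id_rsa.pub to authorized_keys and
# run "ssh <name> uname -a" once per public/private name to accept
# the host fingerprints before launching the installer.
```

The installer runs ssh non-interactively, so every node-name variant (node01, node01-priv, ...) must connect without any prompt.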

OS packages ocfs-support ocfs-tools ocfs-kernel (if You want OCFS)

OS packages oracleasm-support oracleasm oracleasmlib (if You want ASMlib)
If using ASM:
 /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
 ...

If You prefer the Italian version of this document...


Version: 1.0.4
Author: mail@meo.bogliolo.name