Installing a Virtualized Oracle 12cR1 RAC Cluster using Oracle Linux 6.4 Virtual Machines on VMware ESXi 5
Last updated 27-Sep-2013

A new release of Oracle means it’s time for a new walkthrough. In this fourth “RAC on ESX” walkthrough, I’ll go over the process of building an Oracle 12c RAC cluster on VMware ESXi 5 from start to finish.  My goal in this walkthrough is to have you up and running with a virtualized Oracle cluster with minimal hassle. Since this guide is step by step, you don't need to be an expert to follow along, but the more experience you have the better.

The following diagram will give you a conceptual idea of the cluster.

As I’ve mentioned in previous walkthroughs, this configuration is meant only for testing, and to give you a way to learn RAC without buying the expensive hardware a traditional RAC cluster entails. If you’re building a production RAC cluster, I suggest you read the Grid Infrastructure and RAC installation guides, and the RAC Administration and Deployment Guide. The MOS (My Oracle Support) note listed in the References section will also provide guidance (accessing it requires an Oracle support subscription).

Sections

Prerequisites
Hypervisor Configuration and Virtual Machine Creation
Oracle Linux Installation
Pre-installation Tasks
Configuring Shared Storage
Installing Oracle Grid Infrastructure
Installing Oracle Database 12c
Creating a RAC Database
Post-installation Tasks
Verification
Miscellaneous Notes
References

Prerequisites

The following is required in order to succeed in following this guide:

Hardware Requirements

Software Requirements

Network Requirements

We will need 9 IP addresses for the RAC cluster: 2 public IP addresses, 2 virtual IPs (VIPs), 2 Interconnect IP addresses, and 3 SCAN (Single Client Access Name) addresses. The public IP addresses, SCAN addresses, and VIP addresses need to be on the same segment, while the private Interconnect addresses need to be on their own segment. This is how it looks in my network:

Node Hostname Public IP Interconnect IP VIP
node1.example.com 192.168.2.220 10.0.0.1 192.168.2.222
node2.example.com 192.168.2.221 10.0.0.2 192.168.2.223

Lastly, our SCAN addresses will be 192.168.2.117, 192.168.2.118, and 192.168.2.119. The SCAN addresses should be configured in DNS as 3 round-robin A records rather than in the hosts file. If you don't have DNS configured, or are unable to configure it, you can place the SCAN addresses in the hosts file instead, although doing so is against best practice. I gave the SCAN the name clus-scan. Make sure these addresses are resolvable from both nodes.
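
Once bind-utils is installed on the nodes (it's included in the package list later in this guide), you can spot-check the round-robin resolution. A quick sketch, assuming the SCAN is registered as clus-scan.example.com in your DNS zone:

nslookup clus-scan.example.com

All three A records (192.168.2.117, 192.168.2.118, and 192.168.2.119) should come back, with the order rotating between queries.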

Your networking environment is probably different from mine. Feel free to configure the IPs to be on whatever network segment you use, just make sure that the Public IPs, SCAN addresses, and VIPs are on the same segment. If you do use different addresses, make sure to use them during the OS install and update /etc/hosts appropriately.

Hypervisor Configuration and Virtual Machine Creation

Each RAC node requires two network connections: one for public communication, and the other for the Interconnect. In order to isolate Interconnect traffic, we will create a virtual switch as shown below. The RAC Interconnect would likely work without this step; however, I haven't tested that, and in the real world Interconnect traffic is supposed to be isolated on its own VLAN or switch anyway.

Virtual Switch Creation

Log into the ESXi host using the vSphere client, select the host, and click the "Configuration" tab.

Click "Networking" in the "Hardware" box.

Click "Add Networking..." in the upper right corner of the pane.

Select "Virtual Machine" as the connection type and click the "Next" button.

Make sure "Create a vSphere standard switch" is selected, and click the "Next" button.

I used "RAC Interconnect" as my network label. Feel free to use any label you want to, and click the "Next" button.

Click "Finish" to create the virtual switch.

Virtual Machine Creation

Right click on the ESXi host, and click "New Virtual Machine..." to begin the process.


Select the "Custom" configuration option.

Set "node1.example.com" as the virtual machine name.

Select the storage location for the virtual machine files.

Select "Virtual Machine Version: 8."

Select "Linux" as the guest operating system, and then select "Oracle Linux 4/5/6 (64-bit)" from the version dropdown.

I just left this screen set to the defaults, but you can change them if needed.

Set the memory size to 4236MB. You'll notice this is slightly more than the required 4GB. I am doing this because the virtual machine reserves a small amount of memory that isn't visible to the guest operating system. Setting this amount of memory allows the guest to have a full 4GB available. The Cluster Verification Utility memory check will fail otherwise.

Configure the networking as shown below. Make sure that this setting is consistent across all RAC nodes.

I left the controller setting at the default, but you can change it if you have a specific reason to do so.

Select "Create a new virtual disk."

Set the disk size to 30GB, and select "Thick Provision Eager Zeroed."

I left these settings at their defaults.

Click the "Finish" button to create the virtual machine. This may take some time to complete.

Next, the second cluster node virtual machine will be created. Repeat the same process you used for node1, but use "node2.example.com" as the name of the virtual machine instead.

Oracle Linux Installation

As listed in the prerequisites section, you'll need the Oracle Linux 6.4 installation ISO to follow this guide. You can mount it in the virtual machine by selecting the virtual machine, then mounting the ISO as shown below. The screenshot below shows the ISO being mounted from the datastore, but you can also mount an ISO stored on your local machine (in which case the installation data will be streamed over the network). On my hypervisor that option is grayed out until the virtual machine is powered on.

Select the newly created virtual machine, and click the play button to start it.


Once the virtual machine is started, you can view its console by right clicking on it and then clicking "Open Console."


From the console of the virtual machine, you can mount the ISO.


From the virtual machine's console, I clicked "Send Ctrl+Alt+Del" in order to restart it and boot from the installation ISO.


The virtual machine will boot from the ISO. Press enter to proceed with the default installation option.


I selected "Skip." Feel free to test the installation media if you want to.


The graphical installation will commence.


Select your desired language.


Select your desired keyboard.


Select "Basic Storage Devices."


You may see a warning pop up, in which case you can click "Yes, discard any data."


Type in "node1.example.com," or a different hostname if you prefer. Click "Configure Network."


Select "System eth0," and then click "Edit."


Configure the network settings for the public interface as required. Make sure that "Connect automatically" is checked. I left IPv6 turned off, which is the default. Click "Apply" when you're finished.


Select "System eth1," and then click "Edit."


Configure the network settings for the private interface as required. Make sure that "Connect automatically" is checked. I left IPv6 turned off, which is the default. Click "Apply" when you're finished, and then click the "Close" button when the "Network Connections" screen pops up.


Select your desired time zone.


Enter a password for the root user.


Select "Use All Space." Click "Write changes to disk" when the warning pops up.


Change the installation type from "Basic Server" to "Minimal."


The dependency check will run and then the installation process will begin.


The installation is finished.


The Oracle Linux installation is now complete on node1. Repeat the process on node2, but make sure to use the correct hostname and IP address. The fully-qualified hostname for node2 is node2.example.com. Use the same root password on both nodes.

Pre-installation Tasks

All steps are run on both nodes as the root user, unless specified otherwise.

Disable SELinux

{
echo \# This file controls the state of SELinux on the system.
echo \# SELINUX= can take one of these three values:
echo \# enforcing - SELinux security policy is enforced.
echo \# permissive - SELinux prints warnings instead of enforcing.
echo \# disabled - No SELinux policy is loaded.
echo SELINUX=disabled
echo \# SELINUXTYPE= can take one of these two values:
echo \# targeted - Targeted processes are protected,
echo \# mls - Multi Level Security protection.
echo SELINUXTYPE=targeted

} > /etc/selinux/config

The node must be rebooted in order for the change to take effect (reboot or shutdown -r now). You can verify the change by running getenforce. The output should be "Disabled."
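
For example, after the reboot node1 reports:

[root@node1 ~]# getenforce
Disabled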

Install Required OS Packages

yum install compat-libcap1.x86_64 compat-libstdc++-33.x86_64 gcc.x86_64 gcc-c++.x86_64 \
glibc-devel.x86_64 ksh.x86_64 libstdc++-devel.x86_64 libaio-devel.x86_64 libXmu.x86_64 \
libXxf86dga.x86_64 libXxf86misc.x86_64 libdmx.x86_64 make.x86_64 nfs-utils.x86_64 \
sysstat.x86_64 mlocate.x86_64 compat-libstdc++-33.i686 glibc-devel.i686 libstdc++.i686 \
libstdc++-devel.i686 libaio-devel.i686 glibc.i686 libgcc.i686 libXext.i686 libXtst.i686 \
libX11.i686 libXau.i686 libxcb.i686 libXi.i686 xorg-x11-twm.x86_64 \
xorg-x11-server-utils.x86_64 xorg-x11-utils.x86_64 xorg-x11-xauth.x86_64 \
oracleasm-support.x86_64 tigervnc-server.x86_64 xterm.x86_64 ntp.x86_64 nscd.x86_64 \
openssh-clients.x86_64 unzip.x86_64 smartmontools.x86_64 parted.x86_64 wget.x86_64 \
bind-utils.x86_64 -y

The ASMlib tools package (oracleasmlib-2.0.4-1.el6) is not available on the public YUM repository, so we will download it and install it manually.

wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.4-1.el6.x86_64.rpm

rpm -iv oracleasmlib-2.0.4-1.el6.x86_64.rpm

Configure System Services

chkconfig iptables off && /etc/init.d/iptables stop && \
chkconfig ip6tables off && /etc/init.d/ip6tables stop && \
chkconfig nscd on && /etc/init.d/nscd start && \
chkconfig ntpd on

Configure the Network Time Protocol Daemon

{
echo OPTIONS=\"-x -u ntp:ntp -p /var/run/ntpd.pid\"
} > /etc/sysconfig/ntpd

I changed the following lines in /etc/ntp.conf:

server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org

To the following:

server nist1-ny.ustiming.org
server nist.time.nosc.us
server nist1-la.ustiming.org

Feel free to use any NTP servers you want, or leave it at the defaults. I changed the NTP servers in my configuration in order to avoid the "PRVF-5408 : NTP Time Server is common only to the following node" errors that occur when the Cluster Verification Utility runs.

Synchronize time on our nodes using ntpdate (use any NTP server you want):

ntpdate nist1-ny.ustiming.org

Start ntpd:

/etc/init.d/ntpd start
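
After ntpd has been running for a minute or two, you can confirm that it is actually talking to your time servers. A quick check with the ntpq client that ships with the ntp package:

ntpq -p

The peers ntpd is polling will be listed; an asterisk marks the server currently selected for synchronization.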

Configure /etc/hosts

{
echo \#Do not remove the following line, or various programs
echo \# that require network functionality will fail.
echo 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
echo ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
echo
echo \#public
echo 192.168.2.220 node1 node1.example.com
echo 192.168.2.221 node2 node2.example.com
echo
echo \#private
echo 10.0.0.1 node1-priv node1-priv.example.com
echo 10.0.0.2 node2-priv node2-priv.example.com
echo
echo \#vip
echo 192.168.2.222 node1-vip node1-vip.example.com
echo 192.168.2.223 node2-vip node2-vip.example.com
} > /etc/hosts

Configure the Shared Memory Filesystem

Open /etc/fstab and change the following line:

tmpfs /dev/shm tmpfs defaults 0 0

To the following:

tmpfs /dev/shm tmpfs rw,exec,size=4G 0 0

Then remount the file system:

mount -o remount /dev/shm
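
You can confirm the new size took effect:

df -h /dev/shm

The size column should now show 4.0G.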

Set Linux Kernel Parameters

{
echo
echo \# BEGIN ORACLE RAC KERNEL PARAMETERS
echo \# kernel.shmall = 1/2 of physical memory in pages
echo \# See MOS note 301830.1
echo kernel.shmall = 2097152
echo \# kernel.shmmax = 1/2 of physical memory in bytes
echo \# See MOS note 567506.1
echo kernel.shmmax = 2148726784
echo kernel.shmmni = 4096
echo kernel.sem = 250 32000 100 128
echo fs.file-max = 6815744
echo fs.aio-max-nr = 1048576
echo net.ipv4.ip_local_port_range = 9000 65500
echo net.core.rmem_default = 262144
echo net.core.rmem_max = 4194304
echo net.core.wmem_default = 262144
echo net.core.wmem_max = 1048576
echo \# END ORACLE RAC KERNEL PARAMETERS
} >> /etc/sysctl.conf
/sbin/sysctl -p

Create and Configure OS Groups, Users, Directories, and Permissions

groupadd -g 1000 oinstall
groupadd -g 1100 asmadmin
groupadd -g 1200 dba
groupadd -g 1201 oper
groupadd -g 1202 backupdba
groupadd -g 1203 dgdba
groupadd -g 1204 kmdba
groupadd -g 1300 asmdba
groupadd -g 1301 asmoper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba,backupdba,dgdba,kmdba oracle
mkdir -p /u01/app/grid
mkdir -p /u01/app/12.1.0/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01

After the oracle and grid users have been created, set passwords for the accounts with "passwd oracle" and "passwd grid."

Set Shell Limits for the Oracle and Grid Users

{
echo
echo "if [ \$USER = "oracle" ] || [ \$USER = "grid" ]"
echo "then"
echo " if [ \$SHELL = "/bin/ksh" ]"
echo " then"
echo " ulimit -p 16384"
echo " ulimit -n 65536"
echo " else"
echo " ulimit -u 16384 -n 65536"
echo " fi"
echo " umask 022"
echo "fi"
} >> /etc/profile

{
echo
echo "oracle soft nproc 2047"
echo "oracle hard nproc 16384"
echo "oracle soft nofile 1024"
echo "oracle hard nofile 65536"
echo "oracle soft stack 10240"
echo "oracle hard stack 10240"
echo "grid soft nproc 2047"
echo "grid hard nproc 16384"
echo "grid soft nofile 1024"
echo "grid hard nofile 65536"
echo "grid soft stack 10240"
echo "grid hard stack 10240"
} >> /etc/security/limits.conf

{
echo
echo \# Default limit for number of user\'s processes to prevent
echo \# accidental fork bombs.
echo \# See rhbz \#432903 for reasoning.
echo \# Update: this limit has been amended for Oracle RAC
echo \# See MOS note 1487773.1
echo
echo \# \* soft nproc 1024
echo \* - nproc 1024
echo root soft nproc unlimited
} > /etc/security/limits.d/90-nproc.conf

{
echo
echo session required pam_limits.so
} >> /etc/pam.d/login

Configure the Bash Profiles for the Oracle and Grid Users

Run the following on node1:

{
echo
echo "export EDITOR=vi"
echo "export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1"
echo "export ORACLE_BASE=/u01/app/oracle"
echo "export PATH=\$PATH:\$ORACLE_HOME/bin:\$ORACLE_HOME/OPatch"
echo "export ORACLE_SID=ORCL1"
echo "export ORACLE_UNQNAME=ORCL"
echo "export LD_LIBRARY_PATH=\$ORACLE_HOME/lib"
echo
} >> /home/oracle/.bash_profile

{
echo
echo "export EDITOR=vi"
echo "export ORACLE_HOME=/u01/app/12.1.0/grid"
echo "export ORACLE_BASE=/u01/app/grid"
echo "export PATH=\$PATH:\$ORACLE_HOME/bin:\$ORACLE_HOME/OPatch"
echo "export ORACLE_SID=+ASM1"
echo "export LD_LIBRARY_PATH=\$ORACLE_HOME/lib"
echo
} >> /home/grid/.bash_profile

Run the following on node2:

{
echo
echo "export EDITOR=vi"
echo "export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1"
echo "export ORACLE_BASE=/u01/app/oracle"
echo "export PATH=\$PATH:\$ORACLE_HOME/bin:\$ORACLE_HOME/OPatch"
echo "export ORACLE_SID=ORCL2"
echo "export ORACLE_UNQNAME=ORCL"
echo "export LD_LIBRARY_PATH=\$ORACLE_HOME/lib"
echo
} >> /home/oracle/.bash_profile

{
echo
echo "export EDITOR=vi"
echo "export ORACLE_HOME=/u01/app/12.1.0/grid"
echo "export ORACLE_BASE=/u01/app/grid"
echo "export PATH=\$PATH:\$ORACLE_HOME/bin:\$ORACLE_HOME/OPatch"
echo "export ORACLE_SID=+ASM2"
echo "export LD_LIBRARY_PATH=\$ORACLE_HOME/lib"
echo
} >> /home/grid/.bash_profile

Configure SSH Equivalency for the Oracle and Grid Users

We will start with the Oracle user. Start this process as the root user. On each node run the following to generate the RSA keys. Use the default key location, and leave the passphrase blank.

su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa

The following commands will append the public keys to the authorized_keys file. Run the following on node1:

touch ~/.ssh/authorized_keys
cd ~/.ssh

Now run the following on node1 to authorize the RSA key on node1:

cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys

Next run the following on node1 to authorize the RSA key on node2:

ssh node2 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys

Our keys have been appended to the authorized_keys file, and now we must copy this file to the second node. Run the following on node1:

scp authorized_keys node2:/home/oracle/.ssh

Run the following on each node:

chmod 600 /home/oracle/.ssh/authorized_keys

Next we will test equivalency from each node. The first time you connect to a node, you will be asked if you wish to continue connecting; type yes. You should not see this warning again for that node. Your output should look like the following:

[oracle@node1 ~]$ ssh node1 date
Thu Sep 19 08:54:09 CDT 2013
[oracle@node1 ~]$ ssh node2 date
Thu Sep 19 08:54:10 CDT 2013

Now we will configure equivalency for the grid user. On each node run the following to generate the RSA keys. Use the default key location, and leave the passphrase blank.

su - grid
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa

The following commands will append the public keys to the authorized_keys file. Run the following on node1:

touch ~/.ssh/authorized_keys
cd ~/.ssh

Now run the following on node1 to authorize the RSA key on node1:

cat /home/grid/.ssh/id_rsa.pub >> authorized_keys

Next run the following on node1 to authorize the RSA key on node2:

ssh node2 cat /home/grid/.ssh/id_rsa.pub >> authorized_keys

Our keys have been appended to the authorized_keys file, and now we must copy this file to the second node.

scp authorized_keys node2:/home/grid/.ssh

Run the following on each node:

chmod 600 /home/grid/.ssh/authorized_keys

Next we will test equivalency from each node. The first time you connect to a node, you will be asked if you wish to continue connecting; type yes. You should not see this warning again for that node. Your output should look like the following:

[grid@node1 .ssh]$ ssh node1 date
Fri Jan 28 01:40:30 CST 2011
[grid@node1 .ssh]$ ssh node2 date
Fri Jan 28 01:40:32 CST 2011

Installation Media Preparation

I transferred the compressed grid installation media to the grid user's home directory. I transferred it to the node from a Windows system with WinSCP, which can be downloaded for free from http://winscp.net/eng/download.php. When I used WinSCP, I connected to the node as the grid user; this ensures that the grid user owns the compressed media. I did the same thing for the database installation media, except I connected as the oracle user.
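
If you're transferring the media from a Linux or Mac machine instead, scp does the same job. A sketch using node1's public IP and the media file names shown below; connect as grid for the grid media and as oracle for the database media so the ownership works out the same way:

scp V38501-01_1of2.zip V38501-01_2of2.zip grid@192.168.2.220:~
scp V38500-01_1of2.zip V38500-01_2of2.zip oracle@192.168.2.220:~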

[grid@node1 ~]$ unzip V38501-01_1of2.zip && unzip V38501-01_2of2.zip

[grid@node1 ~]$ ls -l
total 1906444
drwxr-xr-x 7 grid oinstall 4096 Jun 10 07:15 grid
-rw-r--r--. 1 grid oinstall 1750478910 Jun 26 04:36 V38501-01_1of2.zip
-rw-r--r--. 1 grid oinstall 201673595 Jun 26 03:44 V38501-01_2of2.zip

[oracle@node1 ~]$ unzip V38500-01_1of2.zip && unzip V38500-01_2of2.zip

[oracle@node1 ~]$ ls -l
total 2419504
drwxr-xr-x 7 oracle oinstall 4096 Jun 10 07:14 database
-rw-r--r--. 1 oracle oinstall 1361028723 Jun 26 04:29 V38500-01_1of2.zip
-rw-r--r--. 1 oracle oinstall 1116527103 Jun 26 04:23 V38500-01_2of2.zip

You may need to free up some space by removing the compressed media files after you unzip them. I did this by running the following as root:

rm -rf /home/oracle/V38500-01* /home/grid/V38501-01*

Installing CVUQDISK

CVUQDISK is required by the cluster verification utility. Below are the steps I took to install it. I unzipped the grid installation media to /home/grid. If you placed it elsewhere, you'll need to adjust the commands below.

This is how I copied the package to node2:

su - grid
scp ~/grid/rpm/cvuqdisk-1.0.9-1.rpm node2:/home/grid

Install the package on node1, as root:

cd /home/grid/grid/rpm/
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
rpm -iv cvuqdisk-1.0.9-1.rpm

Install the package on node2, as root:

cd /home/grid
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
rpm -iv cvuqdisk-1.0.9-1.rpm

Configuring Shared Storage

We will be creating three shared disk files, with their purposes and sizes as follows:

Name Size Disk Group Contents
crs.vmdk 5GB +CRS Grid Infrastructure Management Repository, voting files, and the Oracle Cluster Registry (OCR)
data.vmdk 5GB +DATA Database files
fra.vmdk 10GB +FRA Fast Recovery Area

Creating shared disks on the VMware ESXi host

You'll need to install VCLI in order to configure the shared disks. VCLI can be downloaded for free from https://my.vmware.com/web/vmware/downloads.

Once you have VCLI installed, cd to the following directory:

C:\Program Files (x86)\VMware vSphere CLI\bin

Run the following commands to create each disk file on the ESX host. You'll need to substitute the IP after -server with the IP of your ESX host, and use the correct path for your data store. I placed my shared disks in a folder called 12cR1RAC. If you want to create your own folder you can do so by using the Datastore Browser. This will take a while.

vmkfstools.pl -server 192.168.2.253 -c 5G -d eagerzeroedthick -a lsilogic /vmfs/volumes/datastore1/12cR1RAC/crs.vmdk
vmkfstools.pl -server 192.168.2.253 -c 5G -d eagerzeroedthick -a lsilogic /vmfs/volumes/datastore1/12cR1RAC/data.vmdk
vmkfstools.pl -server 192.168.2.253 -c 10G -d eagerzeroedthick -a lsilogic /vmfs/volumes/datastore1/12cR1RAC/fra.vmdk

Now that we've created our shared disks, we'll be adding them to our virtual machines.

Shut down the virtual machines by executing shutdown -h now on each one. Once they are powered off, right-click node1 and select "Edit Settings..."

Click the "Add...."

Select "Hard Disk."

Select "Use an existing virtual disk."

Click "Browse..." and select the crs.vmdk file you just created.

Under "Virtual Device Node," select "SCSI (1:0)". This will create a new disk controller. Select "Independent" and "Persistent."

Review "Ready to Complete" and click "Finish." Don't click the "OK" button yet.

There are two more drives to add. Repeat the drive-adding process for each, using the next virtual device nodes on the new controller (1:1 and 1:2). Then select the new SCSI controller and set its SCSI Bus Sharing to "Physical." Now, go ahead and click "OK" to have the changes take effect.

Repeat this process on node2. Make sure that the drives have matching virtual device node IDs on each RAC node. After adding crs.vmdk, I added data.vmdk and finally fra.vmdk. Do this in the same order on both nodes.

Start the nodes. They should now have the drives attached to them. A listing of the devices should be similar to the following.

[root@node1 ~]# fdisk -l|egrep sd[bcd]
Disk /dev/sdb: 5368 MB, 5368709120 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes

[root@node2 ~]# fdisk -l|egrep sd[bcd]
Disk /dev/sdb: 5368 MB, 5368709120 bytes
Disk /dev/sdc: 5368 MB, 5368709120 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes

Configure Storage Devices

Next, we'll partition the disks we added to the virtual machines. These disks will be used by ASM. Run the following on node1:

fdisk /dev/sdb

Create a new partition with n, then select primary, partition number 1, and use the defaults for the starting and ending cylinder. Type w to write changes.

[root@node1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x61f217af.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-522, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-522, default 522):
Using default value 522

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Do the same thing for /dev/sdc, and /dev/sdd. The partitions are now visible on node1.

[root@node1 ~]# ls -l /dev/sd[bcd]1
brw-rw---- 1 root disk 8, 17 Sep 19 17:21 /dev/sdb1
brw-rw---- 1 root disk 8, 33 Sep 19 17:22 /dev/sdc1
brw-rw---- 1 root disk 8, 49 Sep 19 17:22 /dev/sdd1

On node2, run partprobe. Node2 should now be able to see the new partitions.

[root@node2 ~]# partprobe
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.

[root@node2 ~]# ls -l /dev/sd[bcd]1
brw-rw---- 1 root disk 8, 17 Sep 19 17:29 /dev/sdb1
brw-rw---- 1 root disk 8, 33 Sep 19 17:29 /dev/sdc1
brw-rw---- 1 root disk 8, 49 Sep 19 17:29 /dev/sdd1

Configure ASMlib

This step will be done on both nodes. Run /etc/init.d/oracleasm configure, and use the following parameters:

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y

Now, we will configure our 3 shared disks to use ASMlib. This needs to be done on node1.

# /etc/init.d/oracleasm createdisk CRS01 /dev/sdb1
Marking disk "CRS01" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk DATA01 /dev/sdc1
Marking disk "DATA01" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk FRA01 /dev/sdd1
Marking disk "FRA01" as an ASM disk: [ OK ]

On node2, run the following so that the disks are available on node2 as well:

/etc/init.d/oracleasm scandisks

Once this is complete, oracleasm listdisks should show the newly created ASMlib disks on both nodes:

[root@node1 rpm]# /etc/init.d/oracleasm listdisks
CRS01
DATA01
FRA01

[root@node2 grid]# /etc/init.d/oracleasm listdisks
CRS01
DATA01
FRA01

Pre-installation Cluster Verification

We should now be ready to install Grid Infrastructure. You can use the Cluster Verification Utility to make sure there are no major underlying problems with the node configuration. I ran the following command as grid on node1.

~/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose > cluvfy_results

The only check that failed was the membership check for the grid user in the dba group. Since this is intentional, you can ignore this. If you'd like to see what my output looked like, you can download it here: cluvfy_results.txt
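
Since the output is long, a quick way to home in on problems is to filter it:

grep -i failed cluvfy_results

Based on the results above, only the grid user's dba group membership check should appear.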

Installing Oracle Grid Infrastructure

Start a VNC session on node1 as the grid user by typing:

vncserver

Then, enter a password of your choice.

Now, connect to the session with a client such as TightVNC, using the syntax <ip address>:1.
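If you're connecting from a Linux machine that has a command-line VNC viewer installed, the equivalent would look something like this (substitute node1's public IP):

vncviewer 192.168.2.220:1

Once connected, start the installer.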

cd grid
./runInstaller

Click the "Add" button and fill in the appropriate node name and node VIP name.

The installation runs. Click "Yes" when the configuration scripts popup appears.

I skipped software updates, but feel free to try this if you want to.


Select "Install and Configure Oracle Grid Infrastructure for a Cluster."


Select "Configure a Standard cluster."


Select "Advanced Installation."


Select your desired product language.


As previously mentioned, I configured SCAN records in DNS, so I'm not going to configure GNS.


Click the "Add" button to add the additional node, and then click "Next."


For the eth1 interface, select "Private" from the "Use for" dropdown.


Select "Yes."


Select "Use Standard ASM for storage."


This disk group will be used by the clusterware. Configure it as shown below.


This cluster is just for testing purposes, so I used a single password.


Select "Do not use Intelligent Platform Management Interface (IPMI)."


By default the correct operating system groups should be selected.


By default the correct software locations should be filled in.


By default the correct inventory location should be filled in.


You can fill this in if you want to have the installer execute the root scripts for you.


The prerequisite checks will now run. There shouldn't be any failures or errors.


Review the summary screen to make sure everything is correct, and then click "Install."


This will generally take a while, depending on your hardware. If you opted to automate the root script execution, you'll see a popup requesting additional approval before the installer actually runs the scripts.


If you see the following screen, it means there were no issues with the installation. You're ready to install the database software.

Installing Oracle Database 12c

Run the following as the oracle user to start a vncserver session.

vncserver

Connect to the session with your VNC client as we did before. Because there may now be two VNC sessions running, you may need to connect by typing <ip address>:2 to reach the second session.

From the VNC session as the oracle user, run the installer.

cd database
./runInstaller

Uncheck "I wish to receive security updates via My Oracle Support." Click "Yes" when the popup appears.


Select "Skip software updates."


Select "Install database software only."


Select "Oracle Real Application Clusters database installation."


Make sure both nodes are selected.


Select your desired language.


Select "Enterprise Edition" in order to be able to test the full feature set of Oracle.


By default the correct locations should be filled in.


By default the correct OS groups should be selected.


The prerequisite checks will run. There should be no warnings or errors, and the installer should automatically go to the next screen.


Review the summary screen to make sure everything is correct, and then click "Install."


The installation runs.


Run the root script on each node.


You should now see the following screen. Click "Close."

Creating a RAC Database

We're just about done. Now that the software is installed, let's create a RAC database!

We still have two ASM disks that we need to create disk groups with. We'll be doing this from a VNC session as the grid user. Start the ASM Configuration Assistant by running asmca.

Click the "Disk Groups" tab and click "Create."


Configure the "DATA" disk group as shown below, and click "OK" to create the disk group.


Configure the "FRA" disk group as shown below, and click "OK" to create the disk group.


The disk groups should be listed as shown below.


Feel free to click the "ASM Instances" tab to verify that ASM is running on both nodes. Click "Exit."

We will now use the Database Configuration Assistant to create a RAC database. From a VNC session running as the oracle user, run dbca. "Create Database" should be selected.


Select "Advanced Mode."


Select "Admin-Managed" as the configuration type.


Type "ORCL" as the "Global Database Name," which will cause the SID prefix to automatically be filled in.


Add node2 to the "Selected" list.


I left this screen at its defaults.


Since this is for testing, I used the same password for the administrative accounts.


Select the "+DATA" disk group as the common location for all database files. I specified 10,000MB as the size of my FRA, and enabled archiving.


I added the sample schemas, and left the other settings at their defaults.


You can leave this at the defaults. The only thing I changed was enabling Automatic Memory Management.


"Create Database" should be checked by default.


The prerequisite checks will run. There should be no warnings or errors, and the installer should automatically go to the next screen.


Review the summary, then click "Finish" to have DBCA create the database.


The creation process runs.


The following popup should eventually appear, indicating that the database was successfully created. Click "Exit."


You should see the following screen. Click "Close."

If you've made it this far, you've successfully completed the installation and created a functional RAC database!

Post-installation Tasks

Clear Temp Files

The various installers used /tmp for their storage location. To free up some space you can run the following to clean this location out. Always double check what you've typed before pressing enter when using rm -rf. Run the following as root on each node.

rm -rf /tmp/*

Edit /etc/oratab

Add a new line to the bottom of /etc/oratab on node1 and node2, so that ORCL1 and ORCL2 are the SID values, respectively. The entry for node1 follows, with node2's shown after it.

ORCL1:/u01/app/oracle/product/12.1.0/dbhome_1:N:
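
And on node2:

ORCL2:/u01/app/oracle/product/12.1.0/dbhome_1:N: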

Verification

Verify that CRS is running:

[grid@node1 grid]$ crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Check the status of the SCAN:

[grid@node1 grid]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node node2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node node1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node node1

Check the status of the ASM instances:

[grid@node1 grid]$ srvctl status asm
ASM is running on node1,node2

Check the status of the database instances:

[grid@node1 ~]$ srvctl status database -d ORCL
Instance ORCL1 is running on node node1
Instance ORCL2 is running on node node2

Check the node apps:

[grid@node1 grid]$ srvctl status nodeapps
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
VIP node2-vip is enabled
VIP node2-vip is running on node: node2
Network is enabled
Network is running on node: node1
Network is running on node: node2
ONS is enabled
ONS daemon is running on node: node1
ONS daemon is running on node: node2

Check the SCAN config:

[grid@node1 grid]$ srvctl config scan
SCAN name: clus-scan, Network: 1
Subnet IPv4: 192.168.2.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 192.168.2.117
SCAN name: clus-scan, Network: 1
Subnet IPv4: 192.168.2.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 1 IPv4 VIP: 192.168.2.119
SCAN name: clus-scan, Network: 1
Subnet IPv4: 192.168.2.0/255.255.255.0/eth0
Subnet IPv6:
SCAN 2 IPv4 VIP: 192.168.2.118

Check the database config:

[grid@node1 grid]$ srvctl config database -d ORCL
Database unique name: ORCL
Database name: ORCL
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/ORCL/spfileORCL.ora
Password file: +DATA/ORCL/orapworcl
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL
Database instances: ORCL1,ORCL2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed

Verify that you can connect to the database:

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Sep 24 02:03:35 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL>

Verify the status for our instances:

SQL> select instance_name, status, startup_time from gv$instance;

INSTANCE_NAME    STATUS       STARTUP_T
---------------- ------------ ---------
ORCL1            OPEN         24-SEP-13
ORCL2            OPEN         24-SEP-13

And that's it! If you've made it this far, you've finished the install and verified that RAC is up and running. You'll probably want to study the documentation in more detail at this point to get a better understanding of RAC concepts and administration. I truly hope this article has been of use to you!

Miscellaneous Notes

Oracle RDBMS Pre-Install RPM

You may have noticed that I didn't use the oracle-rdbms-server-12cR1-preinstall.x86_64 RPM. I actually did initially, but it left so many things out that I decided not to bother with it and just configure everything myself. The side benefit of this is that the installation process I documented will more closely align with installations on other distributions, such as RHEL, that do not have the pre-install RPM.
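
If you'd like to try it yourself on Oracle Linux, installing it is a single command (assuming the node can reach the Oracle public yum repository):

yum install oracle-rdbms-server-12cR1-preinstall -y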

The Disk I/O Scheduler

I didn't need to configure the Deadline I/O scheduler manually because the UEK kernel uses it by default.
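
If you want to confirm which scheduler is active for a given disk, check sysfs; the scheduler shown in square brackets is the one in use:

cat /sys/block/sdb/queue/scheduler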

References

Requirements for Installing Oracle Database 12.1 on RHEL6 or OL6 64-bit (x86-64) (Doc ID 1529864.1)
Oracle® Database Installation Guide 12c Release 1 (12.1) for Linux
Oracle® Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux
Oracle® Real Application Clusters Installation Guide 12c Release 1 (12.1) for Linux and UNIX



Questions? Comments? Email me at jmoracle1@gmail.com.