Cleaning up a failed Oracle 11g Grid Infrastructure install.

Unfortunately, the Oracle 11g Grid Infrastructure (Clusterware) installer, or more specifically the root script it runs, can be very flaky.


If it fails and you need to fix something or rerun it for any reason, the next attempt will also fail unless you have cleaned up the install by deconfiguring CRS. I also like to wipe the installation off altogether and restart from a clean base. Here are the steps.

As root, run the deconfig script on every node except the last:

/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -verbose -force

Then on the last node:

/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -verbose -force -lastnode

At this point, you could rerun the installer (after you have fixed the problem), but if you have closed the installer, or just want to restart from a clean base, continue with the steps below.
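Before going any further, it is worth a quick sanity check that the clusterware daemons really are gone. A hedged sketch — the process names below are the usual 11gR2 suspects, adjust to taste:

```shell
# Check that no Grid Infrastructure daemons survived the deconfig;
# ohasd, crsd and cssd are the usual 11gR2 stack processes.
ps -ef | grep -E 'ohasd|crsd|cssd' | grep -v grep || echo "no CRS processes running"
```

If anything still shows up here, the deconfig did not finish cleanly and the rest of the cleanup will not help.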


Next, find the Oracle home from the central inventory, then delete both the home and the inventory.

 cat /etc/oraInst.loc

cd /u01/app/oraInventory/ContentsXML/

cat inventory.xml

Find the Oracle home
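If you are not sure what to look for, each registered home appears as a HOME element in inventory.xml. A small illustration using a made-up sample file (the home name and path here are typical 11gR2 values, not necessarily yours):

```shell
# Write a sample inventory.xml (contents are illustrative only), then
# grep out the registered homes the same way you would on the real
# file under /u01/app/oraInventory/ContentsXML/.
cat > /tmp/inventory.xml <<'EOF'
<INVENTORY>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true"/>
</HOME_LIST>
</INVENTORY>
EOF
grep 'HOME NAME' /tmp/inventory.xml
```

The LOC attribute is the path you will be deleting in the next step.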

Remove it on all nodes

 rm -R /u01/app/11.2.0/

Also remove the inventory

rm -R /u01/app/oraInventory/

rm -R /etc/oracle

rm /etc/oraInst.loc

rm /etc/oratab

rm /usr/local/bin/dbhome

rm /usr/local/bin/oraenv

rm /usr/local/bin/coraenv

Then change the ownership of the /u01/app directory:

chown oracle:dba /u01/app


You could stop there, but if you really want to wipe the slate clean, you can also delete your ASM disks and recreate them fresh before the next install.


Delete your ASM disks on node 1

 oracleasm deletedisk DISK1

oracleasm deletedisk DISK2

oracleasm deletedisk DISK3

oracleasm deletedisk DISK4





On all nodes

oracleasm scandisks

/usr/sbin/oracleasm exit


Now give your nodes a reboot and you should have a clean base from which to start another install.


Using files on a shared drive as disks for ASM (on RedHat Linux)

First, you need to create files padded out to the required disk size. So, if you want four 1 GB files, you could execute the commands below on the shared storage:

dd if=/dev/zero of=/SHAREDDISK/asmDisk1-1 bs=1024k count=1000

dd if=/dev/zero of=/SHAREDDISK/asmDisk1-2 bs=1024k count=1000

dd if=/dev/zero of=/SHAREDDISK/asmDisk1-3 bs=1024k count=1000

dd if=/dev/zero of=/SHAREDDISK/asmDisk1-4 bs=1024k count=1000

dd creates the file: 'if=/dev/zero' reads null bytes to populate it with, and 'of=' names the file to create.

'bs=1024k count=1000' sets the file size: 1000 blocks of 1024 KB each, which is roughly 1 GB.
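To sanity-check that arithmetic, here is the same command at a smaller scale, writing a throwaway demo file under /tmp rather than one of the real ASM files:

```shell
# 10 blocks of 1024k = 10 * 1048576 = 10485760 bytes; the real
# commands above use count=1000 for roughly 1 GB per file.
dd if=/dev/zero of=/tmp/asmDiskDemo bs=1024k count=10
stat -c %s /tmp/asmDiskDemo    # prints 10485760
```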


Next, you need your nodes to recognize each new file as a block device, and you do this by attaching it to a loop device:

On BOTH nodes

Create a loop device to represent each disk:

/sbin/losetup /dev/loop1 /SHAREDDISK/asmDisk1-1

/sbin/losetup /dev/loop2 /SHAREDDISK/asmDisk1-2

/sbin/losetup /dev/loop3 /SHAREDDISK/asmDisk1-3

/sbin/losetup /dev/loop4 /SHAREDDISK/asmDisk1-4

If you get a device busy message

ioctl: LOOP_SET_FD: Device or resource busy

detach the current device with:

/sbin/losetup -d /dev/loop1

And try again.


The next step is to create the actual ASM disks. This assumes that you have already installed the appropriate Oracle ASMLib packages and kernel module.

On one node

Create your ASM disks:

 oracleasm createdisk DISK1 /dev/loop1

oracleasm createdisk DISK2 /dev/loop2

oracleasm createdisk DISK3 /dev/loop3

oracleasm createdisk DISK4 /dev/loop4

On both nodes:

oracleasm scandisks

oracleasm listdisks


Lastly, put the following lines into /etc/rc.local on both nodes so that the disks come back after a reboot. Note that only the losetup commands and a rescan are needed at boot time: the ASM label written by createdisk lives inside each file, so it survives a reboot, and re-running createdisk would just complain that the device is already labeled.

/sbin/losetup /dev/loop1 /SHAREDDISK/asmDisk1-1

/sbin/losetup /dev/loop2 /SHAREDDISK/asmDisk1-2

/sbin/losetup /dev/loop3 /SHAREDDISK/asmDisk1-3

/sbin/losetup /dev/loop4 /SHAREDDISK/asmDisk1-4

/usr/sbin/oracleasm scandisks


This is needed because, at the point during boot when scandisks normally runs, the loop devices have not yet been attached, so the scan has to be forced again afterwards.

Your new ASM disks should now be visible on all of your nodes.