Using files on a shared drive as disks for ASM (on RedHat Linux)

First, you need to create files padded to the required disk size. For example, if you want four 1 GB files, you could execute the commands below against the shared storage:

dd if=/dev/zero of=/SHAREDDISK/asmDisk1-1 bs=1024k count=1000

dd if=/dev/zero of=/SHAREDDISK/asmDisk1-2 bs=1024k count=1000

dd if=/dev/zero of=/SHAREDDISK/asmDisk1-3 bs=1024k count=1000

dd if=/dev/zero of=/SHAREDDISK/asmDisk1-4 bs=1024k count=1000

dd creates the file: ‘if=/dev/zero’ reads null bytes to populate it with, and ‘of=’ names the file to create.

‘bs=1024k count=1000’ specifies the file size: 1000 blocks of 1024 KB each, i.e. roughly 1 GB.
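The four dd commands above can be collapsed into a loop. This sketch is scaled down to 1 MB per file (count=1) so it can run anywhere; for the 1 GB disks in the text, use count=1000 and point DISKDIR at the shared storage (the DISKDIR and COUNT variables are illustrative, not part of dd):

```shell
# Sketch: create the backing files in a loop, scaled down for testing.
DISKDIR=${DISKDIR:-/tmp/asmdemo}    # /SHAREDDISK in the text
COUNT=${COUNT:-1}                   # use 1000 for ~1 GB files
mkdir -p "$DISKDIR"
for i in 1 2 3 4; do
  # each file is COUNT blocks of 1024 KB, filled with null bytes
  dd if=/dev/zero of="$DISKDIR/asmDisk1-$i" bs=1024k count="$COUNT" 2>/dev/null
done
ls -l "$DISKDIR"
```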


Next, you need your nodes to recognize the new files as block devices, which you do by attaching each file to a loop device:

On BOTH nodes

Create a loop device to represent each disk:

/sbin/losetup /dev/loop1 /SHAREDDISK/asmDisk1-1

/sbin/losetup /dev/loop2 /SHAREDDISK/asmDisk1-2

/sbin/losetup /dev/loop3 /SHAREDDISK/asmDisk1-3

/sbin/losetup /dev/loop4 /SHAREDDISK/asmDisk1-4

If you get a device-busy message:

ioctl: LOOP_SET_FD: Device or resource busy

detach the existing association with:

/sbin/losetup -d /dev/loop1

And try again.
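The detach-and-retry handling can be wrapped in a small helper. This is a sketch: attach_disk and the LOSETUP override are illustrative names (the override just makes the logic exercisable without root), not part of losetup itself:

```shell
# Sketch: attach a backing file to a loop device, detaching any stale
# association and retrying once if the device is busy.
LOSETUP=${LOSETUP:-/sbin/losetup}   # overridable for testing without root

attach_disk() {   # usage: attach_disk /dev/loopN /path/to/backing-file
  if ! "$LOSETUP" "$1" "$2" 2>/dev/null; then
    "$LOSETUP" -d "$1" 2>/dev/null  # LOOP_SET_FD busy: detach the device
    "$LOSETUP" "$1" "$2"            # and try again
  fi
}

# On BOTH nodes you would then run:
# for i in 1 2 3 4; do attach_disk /dev/loop$i /SHAREDDISK/asmDisk1-$i; done
```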


The next step is to create the actual ASM disks. This assumes that you have already installed the appropriate Oracle ASM packages (ASMLib) and kernel module.

On one node

Create your ASM disks:

oracleasm createdisk DISK1 /dev/loop1

oracleasm createdisk DISK2 /dev/loop2

oracleasm createdisk DISK3 /dev/loop3

oracleasm createdisk DISK4 /dev/loop4

On both nodes:

oracleasm scandisks

oracleasm listdisks


Lastly, put the following lines into /etc/rc.local on both nodes so that the disks come back after a reboot.

/sbin/losetup /dev/loop1 /SHAREDDISK/asmDisk1-1

/sbin/losetup /dev/loop2 /SHAREDDISK/asmDisk1-2

/sbin/losetup /dev/loop3 /SHAREDDISK/asmDisk1-3

/sbin/losetup /dev/loop4 /SHAREDDISK/asmDisk1-4

oracleasm createdisk DISK1 /dev/loop1

oracleasm createdisk DISK2 /dev/loop2

oracleasm createdisk DISK3 /dev/loop3

oracleasm createdisk DISK4 /dev/loop4

/usr/sbin/oracleasm scandisks


This is needed because, at the point in the boot sequence when scandisks normally runs, the loop devices have not yet been attached, so you have to force another scan later in the boot.

Your new ASM disks should now be visible on all of your nodes.

Manually upgrading to 11g

You can upgrade an Oracle database directly to 11g, provided it is at a supported minimum version or higher.

Upgrade Process.

Open database in Upgrade mode:

     startup upgrade

Pre-Upgrade information tool:

  • Precursor to the upgrade.
  • Generates a report on required and recommended changes to make.
  • These are generally increasing tablespace sizes or removing obsolete parameters.

Upgrade script:

  • Makes actual changes to the database.
  • If it is stopped or fails, it can be rerun.
  • Shuts down the database on completion.

Restart the database in normal mode.

Upgrade status script:

  • Verifies that all components have been successfully upgraded.
  • If any components have failed, rerun the catupgrd.sql script.

Post Upgrade actions script:

  • New in 11g.
  • Performs upgrade actions that don’t require upgrade mode.
  • Can be run at the same time as catuppst.sql.
  • It recompiles INVALID objects.

catdwgrd.sql carries out the downgrade to the previous version if you need it.
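Putting the steps above together, a manual upgrade session looks roughly like the sketch below. The pre-upgrade, status, and recompile script names (utlu111i.sql, utlu111s.sql, utlrp.sql) are the usual 11gR1 names and are assumptions here; check them against your release.

```sql
-- Run as SYSDBA from $ORACLE_HOME/rdbms/admin of the new 11g home.
@utlu111i.sql     -- pre-upgrade information tool (run against the old database first)
STARTUP UPGRADE   -- open the database in upgrade mode
@catupgrd.sql     -- upgrade script; rerunnable, shuts the database down on completion
STARTUP           -- restart in normal mode
@utlu111s.sql     -- upgrade status: verify all components upgraded
@catuppst.sql     -- post-upgrade actions
@utlrp.sql        -- recompile INVALID objects
```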

The Database Upgrade Assistant (DBUA)

  • Can upgrade database and ASM instances simultaneously.
  • Faster at the end, as it uses parallel compilation on multi-CPU systems.
  • Allows you to upgrade from XE to 11g.
  • You can move datafiles around as part of the upgrade.
  • The DBUA asks you to supply the ORACLE_BASE parameter, which it uses to derive default database locations as well as the DIAGNOSTIC_DEST parameter.
  • If you specify AUTOEXTEND on the command line, Oracle will allow tablespaces to autoextend during the upgrade, then set them back to their original settings afterwards.

The compatible parameter

The COMPATIBLE parameter controls which database features are available: features that require a higher compatibility level than the one you have set are disabled.
The default value is 11.1.0 (11.2.0 in 11gR2) and the minimum allowed value is 10.0.0.
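For example, to raise the setting (a sketch; note that COMPATIBLE can be raised but never lowered once the database has run at the higher level, so take a backup first):

```sql
-- From SQL*Plus as SYSDBA; COMPATIBLE is not dynamic, so SCOPE=SPFILE
-- is required and the change takes effect at the next restart.
ALTER SYSTEM SET COMPATIBLE = '11.1.0' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```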