Firstly, you need to create files padded to the required disk size. So if you want four 1GB files, you could execute the commands below on shared storage:
dd if=/dev/zero of=/SHAREDDISK/asmDisk1-1 bs=1024k count=1000
dd if=/dev/zero of=/SHAREDDISK/asmDisk1-2 bs=1024k count=1000
dd if=/dev/zero of=/SHAREDDISK/asmDisk1-3 bs=1024k count=1000
dd if=/dev/zero of=/SHAREDDISK/asmDisk1-4 bs=1024k count=1000
dd creates your file: ‘if=/dev/zero’ reads null bytes to populate the file with, and ‘of=’ names the file to create.
‘bs=1024k count=1000’ specifies the file size: 1000 blocks of 1024KB each, which is roughly 1GB.
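As a quick sanity check, you can confirm the files came out at the expected size (assuming the paths above):
ls -lh /SHAREDDISK/asmDisk1-*
Each file should show as roughly 1000M.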
Next, you need your nodes to recognize your new files as block devices, and you do this by attaching each file to a loop device:
On BOTH nodes, create a loop device to represent each disk:
/sbin/losetup /dev/loop1 /SHAREDDISK/asmDisk1-1
/sbin/losetup /dev/loop2 /SHAREDDISK/asmDisk1-2
/sbin/losetup /dev/loop3 /SHAREDDISK/asmDisk1-3
/sbin/losetup /dev/loop4 /SHAREDDISK/asmDisk1-4
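To confirm the attachments took, losetup -a lists every active loop device along with its backing file:
/sbin/losetup -a
You should see each /dev/loopN mapped to its /SHAREDDISK/asmDisk1-N file.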
If you get a device busy message:
ioctl: LOOP_SET_FD: Device or resource busy
detach the existing device with:
/sbin/losetup -d /dev/loop1
and try again.
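If you are not sure which loop devices are free, losetup -f prints the first unused one, so you can pick device numbers that will not collide:
/sbin/losetup -f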
The next step is to create the actual ASM disks. This assumes that you have already installed the appropriate Oracle ASMLib packages and kernel module.
On one node
Create your ASM disks:
oracleasm createdisk DISK1 /dev/loop1
oracleasm createdisk DISK2 /dev/loop2
oracleasm createdisk DISK3 /dev/loop3
oracleasm createdisk DISK4 /dev/loop4
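To verify that each disk was stamped correctly, you can use ASMLib's querydisk command (a quick check, assuming the disk names above):
oracleasm querydisk DISK1
It should report that DISK1 is a valid ASM disk.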
On both nodes:
oracleasm scandisks
oracleasm listdisks
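scandisks picks up the newly stamped devices and listdisks prints their names. Assuming all four disks were created successfully, you should see output along the lines of:
DISK1
DISK2
DISK3
DISK4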
Lastly, put the following lines into /etc/rc.local on both nodes so that the disks come back after a reboot.
/sbin/losetup /dev/loop1 /SHAREDDISK/asmDisk1-1
/sbin/losetup /dev/loop2 /SHAREDDISK/asmDisk1-2
/sbin/losetup /dev/loop3 /SHAREDDISK/asmDisk1-3
/sbin/losetup /dev/loop4 /SHAREDDISK/asmDisk1-4
oracleasm createdisk DISK1 /dev/loop1
oracleasm createdisk DISK2 /dev/loop2
oracleasm createdisk DISK3 /dev/loop3
oracleasm createdisk DISK4 /dev/loop4
/usr/sbin/oracleasm scandisks
This is because, at the point when scandisks is run during a normal boot, the loop devices are not yet attached, so the scan has to be forced again afterwards.
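If you prefer to keep /etc/rc.local compact, the same block can be written as a loop. This is just a sketch assuming the file and disk names used above; note that on subsequent boots createdisk will complain that each device is already labeled for ASM, which should be harmless since the forced scandisks still picks the disks up:
# equivalent rc.local block, assuming four files named asmDisk1-1 to asmDisk1-4
for i in 1 2 3 4; do
    /sbin/losetup /dev/loop${i} /SHAREDDISK/asmDisk1-${i}
    /usr/sbin/oracleasm createdisk DISK${i} /dev/loop${i}
done
/usr/sbin/oracleasm scandisks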
Your new ASM disks should now be visible on all of your nodes.