Oracle Direct NFS

A feature embedded in the Oracle kernel.

Improves performance and manageability for NFS-mounted storage.


The NFS file system must already be mounted and available; Direct NFS does not mount drives itself.
The Direct NFS client uses either:
     a configuration file called ‘oranfstab’
     the mount tab file – /etc/mtab

Oracle first looks for $ORACLE_HOME/dbs/oranfstab, which specifies the Direct NFS Client settings for a single database.

It then looks for /etc/oranfstab, which specifies the NFS mounts available to all Oracle databases on that host.

Finally, Oracle reads the mount tab file (/etc/mtab on Linux) to identify available NFS mounts.
If there are duplicate entries, Direct NFS uses the first entry found.
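As an illustration, a minimal oranfstab entry might look like the following sketch. The server, path, export, and mount values are hypothetical; the attribute names follow Oracle's documented oranfstab format.

```
server: nfsserver01
path: 192.168.1.10
export: /vol/oradata  mount: /u02/oradata
```

Here `server` is the NFS server's name, `path` is a network address to reach it on, and the `export`/`mount` pair maps the exported volume to its local mount point.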
You need to replace the standard Oracle Disk Manager (ODM) library with one that supports Direct NFS.
Enabling the Direct NFS Client ODM Library
  • $ cd $ORACLE_HOME/lib
  • $ cp libodm11.so libodm11.so_stub
  • $ ln -s libnfsodm11.so libodm11.so

Oracle ASM

A brief discussion on Oracle ASM and some of its advantages over other file systems.




A database-centric file system, used only for managing database files.
High performance, though it requires a certain amount of tuning.
Not suitable for all applications.
The current recommendation is to let ASM handle RAID mirroring rather than the hardware, though you probably shouldn’t replace hardware RAID with it entirely.
ASM is now a mature technology as of 11g and 12c.
Unofficial benchmarks suggest that ASM is very fast, especially once tuned.
Requires very little day-to-day interaction from DBAs.
From within the RDBMS there is practically no difference at all.
ASM replaces the physical storage level for ‘database files’:
     datafiles, tempfiles, online redo logs, controlfiles
     recovery-related files and the FRA
     spfile and password file (12c)
Non-database files remain on conventional storage:
     ADR files
The problem with OS-managed file systems tends to be wasted I/O.
ASM reduces fragmentation and lets you optimize allocation unit sizes, thereby reducing I/O wastage.
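For example, a disk group with ASM-managed (normal redundancy) mirroring, as recommended above, can be created with SQL along these lines. The disk group, failgroup, and disk path names are hypothetical.

```sql
-- Run in the ASM instance; names and paths are illustrative.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/oracleasm/disk1'
  FAILGROUP fg2 DISK '/dev/oracleasm/disk2';

-- A database can then place new files in the disk group:
-- ALTER SYSTEM SET db_create_file_dest = '+DATA';
```

With NORMAL REDUNDANCY, ASM mirrors each extent across the two failure groups, which is the software-mirroring approach the note above describes.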

Manually upgrading to 11g

You can upgrade an Oracle database directly to 11g, provided the source database is at the minimum supported release or higher.

Upgrade Process.

Open database in Upgrade mode:

     startup upgrade

Pre-Upgrade information tool:

  • Run as a precursor to the upgrade.
  • Generates a report on required and recommended changes to make.
  • Typically increasing tablespace sizes or removing obsolete parameters.

Upgrade script:

  • Makes the actual changes to the database.
  • If it is stopped or fails, it can be rerun.
  • Shuts down the database on completion.
Restart the database in normal mode afterwards.

Upgrade status script:

  • Verifies that all components have been successfully upgraded.
  • If any components have failed, rerun the catupgrd.sql script.

Post-Upgrade actions script (catuppst.sql):

  • New in 11g.
  • Performs upgrade actions that don’t require upgrade mode.
  • utlrp.sql, which recompiles INVALID objects, can be run at the same time as catuppst.sql.
catdwgrd.sql would carry out the downgrade to the previous version if you needed it.
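Putting the steps above together, a manual upgrade session follows roughly this sequence. The script names shown are those shipped in $ORACLE_HOME/rdbms/admin for 11g Release 2; verify them against your own release before use.

```sql
-- From the new 11g home, connected as SYSDBA:
SQL> @?/rdbms/admin/utlu112i.sql   -- Pre-Upgrade Information Tool (run against the old database first)
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/catupgrd.sql   -- upgrade script; shuts the database down on completion
SQL> STARTUP                       -- restart in normal mode
SQL> @?/rdbms/admin/utlu112s.sql   -- upgrade status tool
SQL> @?/rdbms/admin/catuppst.sql   -- post-upgrade actions
SQL> @?/rdbms/admin/utlrp.sql      -- recompile INVALID objects
```

If catupgrd.sql stops or fails part-way, it can simply be rerun from the same point in the sequence.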

The Database Upgrade Assistant (DBUA)

Can upgrade database and ASM instances simultaneously.
Faster at the end, as it uses parallel compilation on multi-CPU systems.
Allows you to upgrade from XE to 11g.
You can move datafiles around as part of the upgrade.
The DBUA asks you to supply ORACLE_BASE, which it uses to derive default database locations as well as the DIAGNOSTIC_DEST parameter.
If you specify AUTOEXTEND on the command line, Oracle allows tablespaces to AUTOEXTEND during the upgrade, then sets them back to their original settings afterwards.

The compatible parameter

The compatible parameter controls which database features are available.
The default value is 11.1.0 (11.2.0 in 11g R2) and the minimum allowed value is 10.0.0.
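You can check the current setting, and raise it, with SQL such as the following. Note that once raised, compatible cannot simply be lowered again.

```sql
-- Check the current setting:
SELECT value FROM v$parameter WHERE name = 'compatible';

-- Raise it (takes effect on the next restart):
ALTER SYSTEM SET compatible = '11.2.0' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
```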

Some notes on Oracle Indexes

Oracle Indexes

The degree to which indexes help performance depends partly on the selectivity of the data:
     High selectivity – few rows match the index value
     Low selectivity – e.g. a country-name column, where many rows match each value

If the data is selective but the matching rows are not stored near each other in the table, many blocks may have to be read, which reduces the index’s effectiveness.
If a high percentage of blocks needs to be read, a full table scan may be quicker: Oracle uses multiblock reads on full table scans, whereas index block reads are single-block reads.

The goal of an index should be to reduce the number of single blocks that must be read to satisfy the query.

Indexes generally benefit performance on:

  • WHERE clause of UPDATE
  • WHERE clause of DELETE (where few rows are selected)

Indexes generally decrease performance for:

  • INSERT statements
  • DELETES (of large numbers of rows)

SQL for indexes:

List all indexed columns:

select * from dba_ind_columns;

Other tables:

  • all_ind_columns (all you have access to)
  • user_ind_columns
  • dba_indexes
  • user_indexes
  • all_indexes
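As a more practical example, the following query lists the indexes and indexed columns on one of your own tables (the table name EMPLOYEES is hypothetical):

```sql
SELECT index_name, column_name, column_position
FROM   user_ind_columns
WHERE  table_name = 'EMPLOYEES'
ORDER  BY index_name, column_position;
```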

Oracle Index Types.

Btree indexes

  • The default and most common index type.
  • Balanced tree.
  • Good with high-cardinality data.
  • NULL values are not indexed.
  • Each value should take approximately the same time to return.

Btree cluster indexes

  • These are B*tree indexes defined for clusters. Clusters are two or more tables with one or more common columns, usually accessed together (via a join).
  • CREATE INDEX product_orders_ix ON CLUSTER product_orders;

Hash cluster indexes

Reverse Key indexes

Bitmap Indexes

Partitioned Indexes

Function based Indexes

Index Organised Tables

Domain Indexes
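A minimal sketch of the creation syntax for several of the index types listed above. Table and column names are hypothetical; the syntax itself is standard Oracle DDL.

```sql
CREATE INDEX emp_name_ix  ON emp (last_name);           -- B*tree (the default)
CREATE INDEX emp_id_rix   ON emp (emp_id) REVERSE;      -- reverse key
CREATE BITMAP INDEX emp_gender_bix ON emp (gender);     -- bitmap (low cardinality)
CREATE INDEX emp_upper_ix ON emp (UPPER(last_name));    -- function-based

-- Index-organized table: the table data itself is stored in B*tree order.
CREATE TABLE emp_iot (
  emp_id NUMBER PRIMARY KEY,
  name   VARCHAR2(50)
) ORGANIZATION INDEX;
```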

A simple guide to Oracle Database Architecture

If you are having trouble getting your head around the basics of Oracle’s architecture, this video does a good job of simplifying things and making them easy to remember by breaking them down and describing them via everyday examples. You can skip to around 2:30 for the real start of the video.


Datafiles – on disk

read into memory:
     data – buffer cache
     SQL goes to the shared pool – library cache
Redo for executed SQL goes to the redo log buffer.
Operations on the data always happen in memory (the database buffer cache).
Parsed queries are stored in the library cache.
Subsequent (repeat) queries can be satisfied directly from the library cache and the buffer cache.
If each change had to wait until the data was written to the datafile, it would be inefficient, as writing data out is slow.
The faster way is to use a scrap of paper to quickly jot it down, then write it into the ledger when there is time.
So data changes go to memory, then to the redo log buffer; on a checkpoint, they are written to disk.
Once a change is in the redo log, it is considered safe.
Before data is modified, the old values are recorded in the rollback segment.
When a transaction is cancelled, it is rolled back by retrieving the old values from the rollback segment and putting them back.