When data in an Oracle database needs to be changed, it has to happen quickly. Accessing the data files on disk is slow, so all changes in the Oracle database are carried out in the buffer cache and are only written to the data files at a later point by the DBWR (Database Writer).
The buffer cache is of finite size, so it will fill up after a while. As more data needs to be brought into the cache, old data must be written to disk and cleared out. A Least Recently Used (LRU) algorithm is used to determine which data to write to disk and remove from the buffer cache.
Popular data will constantly move to the top of the queue and may never reach the bottom of the LRU queue, where it would be written to disk.
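The LRU behaviour described above can be sketched in a few lines of Python. This is a toy model (the class name, capacity, and eviction shape are assumptions for illustration, not Oracle's actual buffer management): recently accessed blocks move to the back of the queue, and when the cache is full the least recently used block is "written out" and dropped.

```python
from collections import OrderedDict

class LRUBufferCache:
    """Toy sketch of LRU eviction in a buffer cache (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()  # block_id -> data; least recently used first

    def access(self, block_id, data):
        if block_id in self.buffers:
            # Popular data moves back to the "recently used" end of the queue...
            self.buffers.move_to_end(block_id)
        self.buffers[block_id] = data
        evicted = []
        while len(self.buffers) > self.capacity:
            # ...while the least recently used block is written to disk and cleared out.
            evicted.append(self.buffers.popitem(last=False))
        return evicted
```

Accessing block 1 again before inserting block 3 makes block 2 the least recently used entry, so it is the one evicted.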
This would be a problem, so Oracle uses checkpoints to ensure that all changed data is eventually written out to disk, regardless of how often it is accessed in memory.
Checkpoints are signaled by either the LGWR (Log Writer) process or the CKPT (checkpoint) background process.
There are two types of checkpoints:
- Normal checkpoints
  - The DBWR writes a few extra buffers each time it runs.
  - They take much longer to complete but have little effect on system performance while they run.
- Fast checkpoints
  - At each checkpoint, the DBWR writes a large number of buffers.
  - They complete much more quickly but have a large impact on the system while running.
  - They are more efficient overall in terms of I/O.
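The trade-off between the two checkpoint styles can be shown with some simple arithmetic (purely illustrative; the function name and buffer counts are made up for this sketch): writing a few buffers per DBWR cycle spreads the work over many cycles, while writing a large batch finishes in a couple of cycles at a higher cost per cycle.

```python
def checkpoint_cycles(dirty_blocks, buffers_per_cycle):
    """Toy comparison: how many DBWR cycles it takes to flush all dirty
    blocks at a given write rate per cycle. Illustrative arithmetic only."""
    cycles = 0
    remaining = dirty_blocks
    while remaining > 0:
        remaining -= buffers_per_cycle
        cycles += 1
    return cycles

# With 1000 dirty blocks to flush:
normal = checkpoint_cycles(1000, 10)   # a few extra buffers each cycle -> many cycles, low impact each
fast = checkpoint_cycles(1000, 500)    # a large batch each cycle -> few cycles, high impact each
```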
A checkpoint is automatically triggered on a log switch.
You can reduce the recovery time of the database by generating checkpoints more frequently.
Oracle is a multi-user system, so multiple processes need to access the database at the same time to perform both reads and writes. Each one needs a consistent view of the data – one that does not change during a transaction.
Oracle ensures that the data that a process gets is consistent as of the time that the statement began.
Even if another process modifies data that is being read while an earlier process is reading it, the earlier process will not see the change.
Before modification, the data is written to the undo records, and this record is kept until the modification is committed and all processes that might need the data have finished. This means that each process can have its own copy of the data it is working with and will use this copy as opposed to the data stored in the data files.
This can lead to multiple versions of the same data in the undo records.
In Oracle, the undo records allow data that is being modified to still be read.
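The read-consistency idea above can be sketched as a simple multi-version row. This is a simplification, not Oracle's actual undo implementation: the `scn` name mirrors Oracle's system change number, but the mechanics here are assumptions for illustration. A reader sees the value as of the point its query began, even if a writer changes the row afterwards.

```python
class VersionedRow:
    """Toy sketch of read consistency via undo records (illustrative only)."""

    def __init__(self, value, scn=0):
        self.current = (scn, value)
        self.undo = []  # older versions, oldest first: list of (scn, value)

    def update(self, value, scn):
        # Before modification, the old version is preserved in the undo records.
        self.undo.append(self.current)
        self.current = (scn, value)

    def read_as_of(self, query_scn):
        # Return the newest version that existed at or before query_scn,
        # scanning from the current version back through the undo records.
        for scn, value in [self.current] + list(reversed(self.undo)):
            if scn <= query_scn:
                return value
        raise LookupError("snapshot too old")  # undo no longer available
```

A query that began at SCN 3 still sees the old value even after an update at SCN 5, which is the "multiple versions of the same data in the undo records" mentioned above.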
Oracle uses row level locking to prevent two processes from trying to update the same data at the same time.
Only the rows being modified are locked.
This means that the rest of the table can still be modified while a row is being changed. It also means that the whole of the table, including the data that is currently being modified, can still be read.
This is covered elsewhere but the key is that the data will always be consistent with its state at the time that an operation began.
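Row-level locking can be sketched as one lock per row, so two writers only block each other when they target the same row. This is a toy model of the idea (class and method names are made up), not Oracle's lock manager; note that reads here take no lock at all, since consistency comes from the undo mechanism described above.

```python
import threading

class RowLockedTable:
    """Toy sketch of row-level locking (illustrative only)."""

    def __init__(self, rows):
        self.rows = dict(rows)
        # One lock per row, rather than one lock for the whole table.
        self.row_locks = {key: threading.Lock() for key in self.rows}

    def update(self, key, value):
        # Only the row being modified is locked; other rows stay writable.
        with self.row_locks[key]:
            self.rows[key] = value

    def read(self, key):
        # Reads are not blocked by writers.
        return self.rows[key]
```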
The user's server process is the process that handles the user's requests to the Oracle database.
When the user requests an insert, update, or delete of data in the database, the server process modifies the data in the buffer cache, not in the actual data files.
The DBWR (Database Writer) process is the only process that writes to the data files.
Here is the process by which data is modified in the Oracle database:
If the operation is an insert:
- Space is found in a block in the buffer cache.
- The data is inserted into that block buffer.
If the operation is an update or a delete:
- The data is read, either from the buffer cache or from the data files; if it comes from the data files, it is placed in the buffer cache.
- The data is deleted or modified in the buffer cache.
- A copy of the original data is written to the UNDO tablespace in case it is needed for a rollback or for read consistency.
Once the data is modified in the buffer cache, a record of the change is written to the redo log.
Once the redo log is written out, the change is considered complete.
Later, the DBWR writes the changed blocks to the data files.
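The modification flow above can be sketched end to end. Everything here is a toy model under stated assumptions (the class, attribute names, and dict-of-blocks layout are invented for illustration, not Oracle internals): the change lands in the buffer cache, the original value goes to undo, a redo record is written, and only a later DBWR-like flush touches the "data files".

```python
class WritePath:
    """Toy sketch of the Oracle modification flow (illustrative only)."""

    def __init__(self):
        self.data_files = {}    # block_id -> value (on "disk")
        self.buffer_cache = {}  # block_id -> value (in memory)
        self.undo = []          # (block_id, original_value) records
        self.redo_log = []      # change records

    def update(self, block_id, new_value):
        # 1. Read the block into the buffer cache if it is not already there.
        if block_id not in self.buffer_cache:
            self.buffer_cache[block_id] = self.data_files.get(block_id)
        # 2. Copy the original data to undo for rollback / read consistency.
        self.undo.append((block_id, self.buffer_cache[block_id]))
        # 3. Modify the block in the buffer cache, not the data file.
        self.buffer_cache[block_id] = new_value
        # 4. Write the change to the redo log; it is now considered complete.
        self.redo_log.append((block_id, new_value))

    def dbwr_flush(self):
        # Later, a DBWR-like step writes the changed blocks to the data files.
        self.data_files.update(self.buffer_cache)
```

Note that after `update()` the data file still holds the old value; only `dbwr_flush()` brings the "disk" up to date, mirroring the point that the DBWR writes later.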
The DBWR is the only process that can write to the data files, but the server processes can read from them.
Reading from the Oracle RDBMS is done by the user's server process.
Once a query is submitted, the user's server process first determines whether the data is in the buffer cache of the SGA (System Global Area) memory area.
If the data is not already in the buffer cache, Oracle reads the data from the data files on the disk and writes it to the SGA.
The data is returned to the user or manipulated in memory from there, not from disk.
This means that commonly used data can be accessed from the buffer cache, which is in RAM and very fast, rather than from the disk, which is slow.
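The read path described above is essentially a read-through cache, which can be sketched as follows (the function and parameter names are assumptions for illustration, not Oracle's API): check the buffer cache first, and only on a miss read from the data files and cache the block for later.

```python
def read_block(block_id, buffer_cache, data_files):
    """Toy sketch of the read path: buffer cache first, disk on a miss.
    Returns the value and where it came from. Illustrative only."""
    if block_id in buffer_cache:
        return buffer_cache[block_id], "cache"  # fast: served from RAM
    value = data_files[block_id]                # slow: read from the data file
    buffer_cache[block_id] = value              # cache the block in the SGA
    return value, "disk"
```

The first read of a block comes from "disk" and every subsequent read of the same block is served from the cache, which is why commonly used data stays fast.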
This is a very scalable architecture.