One of the most important guarantees Oracle makes is that your data is safe: it is protected from corruption and from system crashes.
One of the main mechanisms behind this guarantee is the redo log.
Every time data is changed in the database, a redo log record is written describing the change, so that the system can be recovered after a failure.
If the system suffers a crash such as a power failure, all data in the buffers (memory) is lost.
The redo logs can then be used to re-apply all changes made since the data files were last written to.
On recovery, all committed transactions are re-applied (essentially, their changes are re-run), and all uncommitted transactions are rolled back.
Without redo logs, you cannot recover after a system failure.
Note: you should not use disk caching on disks that hold redo logs unless the caching controller is battery-backed. If the power failed, Oracle would believe the data had been written to disk when it might still be sitting in the cache, where it would be lost along with the power.
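Roll-forward and roll-back during recovery can be sketched as follows. This is a purely illustrative Python model (the record format and function names are invented, not Oracle internals): every logged change is re-applied in order, then changes belonging to transactions with no commit record are undone.

```python
# Hypothetical sketch of crash recovery from a redo log: roll forward
# every logged change, then roll back transactions that never committed.

def recover(redo_log):
    data = {}                      # simulated data blocks after the crash
    committed = {rec[1] for rec in redo_log if rec[0] == "COMMIT"}
    undo = []                      # before-images for uncommitted work
    # Roll forward: re-apply every change in log order.
    for rec in redo_log:
        if rec[0] == "CHANGE":
            _, txid, key, old, new = rec
            data[key] = new
            if txid not in committed:
                undo.append((key, old))
    # Roll back: restore before-images of uncommitted transactions.
    for key, old in reversed(undo):
        data[key] = old
    return data

log = [
    ("CHANGE", "tx1", "acct_a", 100, 50),
    ("COMMIT", "tx1"),
    ("CHANGE", "tx2", "acct_b", 200, 0),   # tx2 never committed
]
print(recover(log))   # acct_a keeps the committed change; acct_b is rolled back
```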
Process of modifying data in the Oracle database:
- A change is made to the buffer cache in the database.
- The transaction completes and a commit operation is triggered.
- Redo data is written to the redo log buffers.
- The LGWR (Log Writer) process writes to redo log files on disk.
- Commit operation is complete once the redo log file has been written.
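The five steps above can be sketched as a small model. This is an illustrative sketch, not Oracle's implementation, and the class and method names are invented. The key property it demonstrates is write-ahead logging: the commit only completes once the redo has reached the log file on disk, while the modified data stays in the buffer cache for now.

```python
# Illustrative sketch of the commit sequence: changes go to an in-memory
# buffer cache, redo goes to a log buffer, and a commit only returns
# after the "LGWR" step has flushed the redo to the simulated log file.

class Database:
    def __init__(self):
        self.buffer_cache = {}     # in-memory copies of data blocks
        self.redo_buffer = []      # redo records not yet on disk
        self.redo_on_disk = []     # simulated redo log file

    def change(self, key, value):
        self.buffer_cache[key] = value          # 1. modify the buffer cache
        self.redo_buffer.append((key, value))   # 2-3. queue a redo record

    def commit(self):
        # 4. LGWR flushes the redo buffer to the log file on disk.
        self.redo_on_disk.extend(self.redo_buffer)
        self.redo_buffer.clear()
        # 5. Only now is the commit complete; data files may still be stale.
        return "committed"

db = Database()
db.change("x", 1)
assert db.redo_on_disk == []       # nothing is durable before the commit
db.commit()
assert db.redo_on_disk == [("x", 1)]
```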
How the Redo logs are structured.
The redo logs are made up of two or more log files or log groups.
A log group is treated the same as a log file but adds redundancy: it is simply multiple copies of the same file.
- The two or more files are used in an alternating fashion: when one log file fills up, the next one starts being written to.
- If ARCHIVELOG mode is enabled, a redo log file is archived once it fills up.
- If the first redo log file has not finished being archived by the time the last one fills up, the database will wait, and will not execute any more transactions, until that first redo log file has been fully archived.
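The alternating behaviour, including the stall when archiving falls behind, can be modelled as a small ring of log files. All names here are invented for illustration; in a real instance the ARCH background process does the archiving.

```python
# A sketch of redo log files used in rotation. In ARCHIVELOG mode a file
# must be archived before it can be overwritten; if it has not been, the
# database must wait (transactions stall) until archiving catches up.

class RedoLogRing:
    def __init__(self, n_files):
        self.files = [{"records": [], "archived": True} for _ in range(n_files)]
        self.current = 0

    def write(self, record):
        self.files[self.current]["records"].append(record)

    def switch(self):
        nxt = (self.current + 1) % len(self.files)
        if not self.files[nxt]["archived"]:
            return "WAIT: next log not yet archived"   # transactions stall here
        self.files[self.current]["archived"] = False   # leaving it: needs archiving
        self.files[nxt]["records"].clear()             # safe to reuse
        self.current = nxt
        return "switched"

    def archive(self, i):
        self.files[i]["archived"] = True               # the ARCH process caught up

ring = RedoLogRing(2)
ring.write("change 1")
print(ring.switch())      # switched
ring.write("change 2")
print(ring.switch())      # WAIT: next log not yet archived
ring.archive(0)           # archiving finishes for the first file
print(ring.switch())      # switched
```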
When data in an Oracle database needs to be changed, it has to happen quickly. Accessing the data files on disk is slow, so all changes are carried out in the buffer cache and are only written to the data files later by the DBWR.
The buffer cache is of finite size, so it fills up over time. When more data needs to come into the cache, old data must be written to disk and cleared out. A Least Recently Used (LRU) algorithm is used to determine which data to write to disk and remove from the buffer cache.
Popular data will constantly be moving to the top of the queue and may never reach the bottom of the LRU queue where it would be written to disk.
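A minimal sketch of such an LRU buffer cache, using Python's OrderedDict to track recency. This is illustrative only; Oracle's real cache management is far more elaborate.

```python
# A minimal LRU buffer cache sketch: when the cache is full, the least
# recently used block is written back to "disk" and evicted.
from collections import OrderedDict

class BufferCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # ordered oldest -> most recently used
        self.disk = {}               # simulated data files

    def access(self, block, value=None):
        if block in self.cache:
            self.cache.move_to_end(block)        # mark as recently used
            if value is not None:
                self.cache[block] = value
            return self.cache[block]
        if value is None:
            value = self.disk.get(block)         # read miss: load from disk
        if len(self.cache) >= self.capacity:
            old, old_val = self.cache.popitem(last=False)  # evict LRU block
            self.disk[old] = old_val             # write it to disk first
        self.cache[block] = value
        return value

cache = BufferCache(capacity=2)
cache.access("a", 1)
cache.access("b", 2)
cache.access("a")            # "a" is now the most recently used
cache.access("c", 3)         # evicts "b", the least recently used
print(list(cache.cache))     # ['a', 'c']
```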
This would be a problem and so Oracle uses something called checkpoints to ensure that all data is eventually written out to disk, regardless of how often it is accessed in memory.
Checkpoints are signaled by either the LGWR (Log Writer) process or the CKPT (checkpoint) background process.
There are two types of checkpoints: normal checkpoints and fast checkpoints.
Normal checkpoints:
- The DBWR writes a few extra buffers each time it runs.
- They take much longer to complete but have little effect on system performance while they run.
Fast checkpoints:
- At each checkpoint, the DBWR writes a large number of buffers.
- This type of checkpoint completes much more quickly, but has a large impact on the system while it is running.
- Fast checkpoints are more efficient overall in terms of I/O.
A checkpoint is automatically triggered on a log switch.
You can reduce the recovery time of the database by generating more frequent checkpoints.
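The relationship between checkpoint frequency and recovery time can be sketched as follows: a checkpoint records a position in the redo stream, and recovery only has to replay redo generated after the last checkpoint. The class and method names here are invented for illustration.

```python
# Sketch of why frequent checkpoints shorten recovery: once a checkpoint
# completes, redo generated before it no longer needs to be replayed.

class Instance:
    def __init__(self):
        self.redo = []            # all redo generated since startup
        self.checkpoint_pos = 0   # redo position of the last checkpoint

    def change(self, rec):
        self.redo.append(rec)

    def checkpoint(self):
        # DBWR flushes the dirty buffers; recovery can now start here.
        self.checkpoint_pos = len(self.redo)

    def redo_to_replay(self):
        return self.redo[self.checkpoint_pos:]

inst = Instance()
for i in range(5):
    inst.change(f"change {i}")
inst.checkpoint()
inst.change("change 5")
print(inst.redo_to_replay())   # only ['change 5'] must be replayed
```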
Oracle is a multi-user system, and as such multiple processes need to access the database at the same time to perform both reads and writes. Each one needs a consistent view of the data, one that does not change during a transaction.
Oracle ensures that the data that a process gets is consistent as of the time that the statement began.
Even if another process modifies data that is being read while an earlier process is reading it, the earlier process will not see the change.
Before data is modified, the original values are written to the undo records, and these are kept until the modification is committed and all processes that might need the old data have finished. This means that each process can have its own copy of the data it is working with and will use this copy instead of the data stored in the data files.
This can lead to multiple versions of the same data in the undo records.
In Oracle, it is the undo records that allow data that is being modified to still be read.
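The behaviour described above can be sketched as a simple multi-version store: writes save the before-image to an undo list, and a reader asks for the value as of the moment its query began. The SCN counter mimics Oracle's system change numbering, but everything else here is an invented illustration, not Oracle's internals.

```python
# An illustrative multi-version sketch: before-images kept in "undo"
# let a reader see data as of the time its query began, even while a
# writer changes that same data underneath it.

class VersionedTable:
    def __init__(self):
        self.scn = 0       # simulated system change number
        self.rows = {}     # key -> (current value, scn when written)
        self.undo = {}     # key -> list of (old value, scn when written)

    def write(self, key, value):
        self.scn += 1
        if key in self.rows:
            self.undo.setdefault(key, []).append(self.rows[key])
        self.rows[key] = (value, self.scn)

    def read_as_of(self, key, query_scn):
        value, scn = self.rows[key]
        if scn <= query_scn:
            return value
        # Too new: walk back through undo for an old-enough version.
        for old_value, old_scn in reversed(self.undo.get(key, [])):
            if old_scn <= query_scn:
                return old_value
        return None

t = VersionedTable()
t.write("row1", "original")             # scn 1
snapshot = t.scn                        # a query begins here
t.write("row1", "modified")             # scn 2, after the query began
print(t.read_as_of("row1", snapshot))   # the reader still sees 'original'
```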
Oracle uses row level locking to prevent two processes from trying to update the same data at the same time.
Only the rows being modified are locked.
This means that the rest of the table can still be modified while a row is being changed. It also means that the whole of the table including the data that is currently being modified can still be read.
This is covered elsewhere, but the key point is that the data will always be consistent with its state at the time the operation began.
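Row-level locking can be sketched as a per-row lock table: two sessions conflict only when they target the same row. This is an invented illustration of the idea, not Oracle's actual enqueue mechanism.

```python
# Sketch of row-level locking: each row carries its own lock, so updates
# to different rows of the same table never block one another.

class Table:
    def __init__(self):
        self.locks = {}   # row key -> session currently holding the lock

    def lock_row(self, key, session):
        holder = self.locks.get(key)
        if holder is not None and holder != session:
            return False          # another session is updating this row
        self.locks[key] = session
        return True

    def unlock_row(self, key, session):
        if self.locks.get(key) == session:
            del self.locks[key]

t = Table()
assert t.lock_row("row1", "session_a")
assert not t.lock_row("row1", "session_b")   # blocked: same row
assert t.lock_row("row2", "session_b")       # fine: different row
```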
The user's server process is the process that handles the user's requests to the Oracle database.
When the user requests an insert, update, or delete of data in the database, the server process modifies the data in the buffer cache, not in the actual data files.
The DBWR (Database Writer) process is the only process that writes to the data files.
Here is the process by which data is modified in the Oracle database:
If the operation is an Insert:
- Space is found in a block in the buffer cache.
- The data is inserted into that block.
If the operation is an Update or a Delete:
- The data is read, either from the buffer cache or from the data files; if it comes from the data files, it is placed in the buffer cache first.
- The data is deleted or modified in the buffer cache.
- A copy of the original data is written to the UNDO tablespace in case it is needed for a rollback or for read consistency.
In either case:
- Once the data has been modified in the buffer cache, a redo record is written to the redo log.
- Once the redo log has been written, the change is considered complete.
- Later, the DBWR writes the changed blocks to the data files.
The DBWR is the only process that can write to the data files, but the server processes can read from them.
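The update path above can be pulled together into one sketch: the before-image goes to undo, the change happens in the buffer cache, the redo record makes it durable, and only DBWR later writes the data file. All function names here are invented for illustration.

```python
# The update steps, as a sketch: read into the buffer cache, save a
# before-image to undo, modify the buffer, write redo; the data file
# itself is only updated later, by the DBWR.

def update(datafiles, buffer_cache, undo, redo, key, new_value):
    if key not in buffer_cache:                # read from disk if needed
        buffer_cache[key] = datafiles[key]
    undo.append((key, buffer_cache[key]))      # before-image for rollback
    buffer_cache[key] = new_value              # change the buffer, not disk
    redo.append((key, new_value))              # durable once this is logged

def dbwr_flush(datafiles, buffer_cache):
    datafiles.update(buffer_cache)             # only DBWR writes data files

datafiles = {"salary": 100}
buffer_cache, undo, redo = {}, [], []
update(datafiles, buffer_cache, undo, redo, "salary", 120)
assert datafiles["salary"] == 100    # the data file is still stale
dbwr_flush(datafiles, buffer_cache)
assert datafiles["salary"] == 120    # now on disk
```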