Data, the Cloud and other thoughts.
Great webinar on Oracle 12c pluggable databases.
Skip to 8 minutes for the start of the actual presentation.
There is no function in Oracle (up to and including 11g) that will allow you to shrink the undo tablespace.
Sometimes you will need to free up space, and the best way to reclaim it is to reduce the size of the undo tablespace.
The way to do this is to create a temporary undo tablespace, drop the original one, and recreate it with less space.
Here are the exact steps:
1. First, create a script to recreate the undo tablespace exactly as it was (you will need to tweak it to use less space).
In SQL*Plus:
You can find the location of the undo tablespace datafile with:
select * from dba_data_files;
SET LONG 20000 (to make sure that the output isn't truncated).
Create the SQL to recreate the tablespace later:
SELECT DBMS_METADATA.GET_DDL('TABLESPACE', 'UNDOTBS1') FROM dual;
2. Create a second undo tablespace to use temporarily while you remove the first.
CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '[FILE_LOCATION/FILE_NAME]' SIZE 50M AUTOEXTEND OFF;
3. You need to set the new undo tablespace as the default, otherwise you won't be able to drop the first:
ALTER SYSTEM SET undo_tablespace=UNDOTBS2;
4. Make sure that the default has been set correctly:
SHOW PARAMETER undo_tablespace;
5. Now drop the original tablespace:
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
6. Use your generated SQL to recreate the original undo tablespace. Make sure that you change the script to set the size correctly.
7. Next, you need to set the new tablespace as the default:
ALTER SYSTEM SET undo_tablespace=UNDOTBS1;
8. Make sure that the new tablespace is now the default:
SHOW PARAMETER undo_tablespace;
9. Drop the second undo tablespace.
DROP TABLESPACE undotbs2 INCLUDING CONTENTS AND DATAFILES;
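Putting the steps above together, the whole procedure might look like the following sketch. The datafile paths and sizes are illustrative placeholders, not values from any real system; adapt them to your own environment.

```sql
-- Sketch of the full undo-resize procedure; paths and sizes are placeholders.

-- 1. Capture the DDL for the existing undo tablespace.
SET LONG 20000
SELECT DBMS_METADATA.GET_DDL('TABLESPACE', 'UNDOTBS1') FROM dual;

-- 2. Create a temporary undo tablespace.
CREATE UNDO TABLESPACE UNDOTBS2
  DATAFILE '/u01/oradata/ORCL/undotbs02.dbf' SIZE 50M AUTOEXTEND OFF;

-- 3. Switch to it and confirm.
ALTER SYSTEM SET undo_tablespace = UNDOTBS2;
SHOW PARAMETER undo_tablespace

-- 4. Drop the original.
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;

-- 5. Recreate UNDOTBS1 smaller (adapted from the DDL captured in step 1).
CREATE UNDO TABLESPACE UNDOTBS1
  DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' SIZE 500M AUTOEXTEND OFF;

-- 6. Switch back, confirm, and clean up.
ALTER SYSTEM SET undo_tablespace = UNDOTBS1;
SHOW PARAMETER undo_tablespace
DROP TABLESPACE undotbs2 INCLUDING CONTENTS AND DATAFILES;
```

Note that the DROP can fail while active transactions still hold undo in the old tablespace; if that happens, wait for those transactions to complete before retrying.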
One of the most important ideas in Oracle is that your data is safe: it is protected from corruption and also from system crashes.
One of the main mechanisms for this is the redo logs.
Every time data is changed in the database, a redo log record is written describing the change.
This is what allows the database to be recovered after a system failure.
If the system suffers a crash like a power failure, all data in the buffers (memory) will be lost.
The redo logs can be used to re-apply all changes that have been made since the data files were last written.
On recovery, all committed transactions will be re-applied (the recorded changes are replayed against the data blocks). All uncommitted transactions will be rolled back.
Without redo logs, you cannot recover after a system failure.
Note: you should not use disk caching on disks holding redo logs unless the caching controller is battery-backed. If the power failed, Oracle would think that the data had been written to disk, but it might still be sitting in the cache and would be lost along with the power.
Process of modifying data in the Oracle database:
The redo logs are made up of two or more log files or log groups.
A log group is treated the same as a single log file, but it provides redundancy: it is simply multiple identical copies (members) of the same file.
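You can inspect the log groups and their member files on your own instance with the standard V$LOG and V$LOGFILE views:

```sql
-- List the redo log groups, their sizes, member counts, and current status.
SELECT group#, thread#, bytes, members, status FROM v$log;

-- List the member files that make up each group.
SELECT group#, member FROM v$logfile ORDER BY group#;
```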
When data in an Oracle database needs to be changed, it has to be done quickly. Accessing the data files on disk is slow, so all changes are carried out in the buffer cache and are only written to the files at a later point by the DBWR.
The buffer cache is of finite size and so will fill up after a while. As more data needs to come into the buffer, old data must be written to disk and cleared out. A Least Recently Used (LRU) algorithm is used to determine which data to write to disk and remove from the buffer cache.
Popular data will constantly be moving to the top of the queue and may never reach the bottom of the LRU queue, where it would be written to disk.
This would be a problem and so Oracle uses something called checkpoints to ensure that all data is eventually written out to disk, regardless of how often it is accessed in memory.
Checkpoints are signaled by either the LGWR (Log Writer) process or the CKPT (checkpoint) background process.
There are two types of checkpoints:
Normal Checkpoints.
Fast Checkpoints.
A checkpoint is automatically triggered on a log switch.
You can reduce the recovery time of the database by generating more frequent checkpoints.
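One way to influence checkpoint frequency is the FAST_START_MTTR_TARGET initialization parameter, which sets a target crash-recovery time in seconds; the database checkpoints often enough to meet it. The value below is only an illustration, not a recommendation.

```sql
-- Aim for roughly 60 seconds of crash-recovery time; Oracle will
-- checkpoint (advance the checkpoint position) often enough to meet it.
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60;

-- Check the current setting.
SHOW PARAMETER fast_start_mttr_target
```

The trade-off is that more frequent checkpoints mean more DBWR writes during normal operation in exchange for a shorter recovery time.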
The user's server process is the process that handles the user's requests to the Oracle database.
When the user requests an insert, update, or delete of data in the database, the server process modifies the data in the buffer cache, not in the actual data files.
The DBWR (Database Writer) process is the only process that writes to the data files.
Here is the process by which data is modified in the Oracle database:
If the operation is an Insert:
Space found in a block in buffer cache.
Data inserted into that block buffer.
If the operation is an Update or a Delete:
The data is read, either from the buffer cache or from the data files; if it comes from the data files, it is first placed in the buffer cache.
The data is deleted or modified in the buffer cache.
A copy of the original data is written to the UNDO tablespace in case it is needed for a rollback or for read consistency.
Once the data is modified in the buffer cache, a redo record is written to the redo log.
Once the redo record is written out, the change is considered complete.
Later, the DBWR writes the changed blocks to the data files.
The DBWR is the only process that can write to the data files, but the server processes can read from them.
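You can watch this activity on a running instance through the standard V$SYSSTAT view; for example, the 'redo size' and 'db block changes' statistics grow as changes flow through the buffer cache and redo log:

```sql
-- Cumulative redo generated and data-block changes since instance startup.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo size', 'db block changes');
```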