Examples: Building a Repair Table or Orphan Key Table
The ADMIN_TABLES procedure is used to create, purge, or drop a repair table or an orphan key table.
A repair table provides information about the corruptions that were found by the CHECK_OBJECT procedure and how these will be addressed if the FIX_CORRUPT_BLOCKS procedure is run. Further, it is used to drive the execution of the FIX_CORRUPT_BLOCKS procedure.
An orphan key table is used when the DUMP_ORPHAN_KEYS procedure is executed and it discovers index entries that point to corrupt rows. The DUMP_ORPHAN_KEYS procedure populates the orphan key table by logging its activity and providing the index information in a usable manner.
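Besides creation (shown below), the same procedure purges rows for dropped objects or removes the table entirely; a minimal sketch using the package's PURGE_ACTION and DROP_ACTION constants:
BEGIN
-- Remove rows that refer to objects that no longer exist, keeping the table.
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'REPAIR_TABLE',
TABLE_TYPE => dbms_repair.repair_table,
ACTION => dbms_repair.purge_action);
-- Drop the repair table entirely.
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'REPAIR_TABLE',
TABLE_TYPE => dbms_repair.repair_table,
ACTION => dbms_repair.drop_action);
END;
/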
Example: Creating a Repair Table
The following example creates a repair table for the users tablespace.
BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'REPAIR_TABLE',
TABLE_TYPE => dbms_repair.repair_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'USERS');
END;
/
For each repair or orphan key table, a view is also created that eliminates any rows that pertain to objects that no longer exist. The name of the view corresponds to the name of the repair or orphan key table and is prefixed by DBA_ (for example, DBA_REPAIR_TABLE or DBA_ORPHAN_KEY_TABLE).
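Because the view filters out rows for dropped objects, it is usually the more convenient source when reviewing results; a minimal query sketch (columns as described below):
SELECT object_name, block_id, marked_corrupt
FROM dba_repair_table;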
The following query describes the repair table that was created for the users tablespace.
DESC REPAIR_TABLE
Name Null? Type
---------------------------- -------- --------------
OBJECT_ID NOT NULL NUMBER
TABLESPACE_ID NOT NULL NUMBER
RELATIVE_FILE_ID NOT NULL NUMBER
BLOCK_ID NOT NULL NUMBER
CORRUPT_TYPE NOT NULL NUMBER
SCHEMA_NAME NOT NULL VARCHAR2(30)
OBJECT_NAME NOT NULL VARCHAR2(30)
BASEOBJECT_NAME VARCHAR2(30)
PARTITION_NAME VARCHAR2(30)
CORRUPT_DESCRIPTION VARCHAR2(2000)
REPAIR_DESCRIPTION VARCHAR2(200)
MARKED_CORRUPT NOT NULL VARCHAR2(10)
CHECK_TIMESTAMP NOT NULL DATE
FIX_TIMESTAMP DATE
REFORMAT_TIMESTAMP DATE
Example: Creating an Orphan Key Table
This example illustrates the creation of an orphan key table for the users tablespace.
BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'ORPHAN_KEY_TABLE',
TABLE_TYPE => dbms_repair.orphan_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'USERS');
END;
/
The following query describes the orphan key table:
DESC ORPHAN_KEY_TABLE
Name Null? Type
---------------------------- -------- -----------------
SCHEMA_NAME NOT NULL VARCHAR2(30)
INDEX_NAME NOT NULL VARCHAR2(30)
IPART_NAME VARCHAR2(30)
INDEX_ID NOT NULL NUMBER
TABLE_NAME NOT NULL VARCHAR2(30)
PART_NAME VARCHAR2(30)
TABLE_ID NOT NULL NUMBER
KEYROWID NOT NULL ROWID
KEY NOT NULL ROWID
DUMP_TIMESTAMP NOT NULL DATE
Example: Detecting Corruption
The CHECK_OBJECT procedure checks the specified object, and populates the repair table with information about corruptions and repair directives. You can optionally specify a range, partition name, or subpartition name when you want to check a portion of an object.
Validation consists of checking all blocks in the object that have not previously been marked corrupt. For each block, the transaction and data layer portions are checked for self consistency. During CHECK_OBJECT, if a block is encountered that has a corrupt buffer cache header, then that block is skipped.
The following is an example of executing the CHECK_OBJECT procedure for the scott.dept table.
SET SERVEROUTPUT ON
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
DBMS_REPAIR.CHECK_OBJECT (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'DEPT',
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
CORRUPT_COUNT => num_corrupt);
DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR (num_corrupt));
END;
/
SQL*Plus outputs the following line, indicating one corruption:
number corrupt: 1
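The same check can also be limited to a portion of the object; a sketch using the optional block-range parameters of CHECK_OBJECT (the range shown is illustrative):
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
DBMS_REPAIR.CHECK_OBJECT (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'DEPT',
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
BLOCK_START => 1,
BLOCK_END => 10,
CORRUPT_COUNT => num_corrupt);
END;
/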
Querying the repair table produces information describing the corruption and suggesting a repair action.
SELECT OBJECT_NAME, BLOCK_ID, CORRUPT_TYPE, MARKED_CORRUPT,
CORRUPT_DESCRIPTION, REPAIR_DESCRIPTION
FROM REPAIR_TABLE;
OBJECT_NAME BLOCK_ID CORRUPT_TYPE MARKED_COR
------------------------------ ---------- ------------ ----------
CORRUPT_DESCRIPTION
------------------------------------------------------------------------------
REPAIR_DESCRIPTION
------------------------------------------------------------------------------
DEPT 3 1 FALSE
kdbchk: row locked by non-existent transaction
table=0 slot=0
lockid=32 ktbbhitc=1
mark block software corrupt
The corrupted block has not yet been marked corrupt, so this is the time to extract any meaningful data. After the block is marked corrupt, the entire block must be skipped.
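For example, rows in the unaffected blocks can still be salvaged into a new table before FIX_CORRUPT_BLOCKS is run; a sketch, where dept_salvage is a hypothetical name and block 3 comes from the repair table output above:
CREATE TABLE scott.dept_salvage AS
SELECT *
FROM scott.dept
WHERE DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) <> 3; -- skip the corrupt block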
Example: Fixing Corrupt Blocks
Use the FIX_CORRUPT_BLOCKS procedure to fix the corrupt blocks in specified objects based on information in the repair table that was generated by the CHECK_OBJECT procedure. Before changing a block, the block is checked to ensure that the block is still corrupt. Corrupt blocks are repaired by marking the block software corrupt. When a repair is performed, the associated row in the repair table is updated with a timestamp.
This example fixes the corrupt block in table scott.dept that was reported by the CHECK_OBJECT procedure.
SET SERVEROUTPUT ON
DECLARE num_fix INT;
BEGIN
num_fix := 0;
DBMS_REPAIR.FIX_CORRUPT_BLOCKS (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME=> 'DEPT',
OBJECT_TYPE => dbms_repair.table_object,
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
FIX_COUNT=> num_fix);
DBMS_OUTPUT.PUT_LINE('num fix: ' || TO_CHAR(num_fix));
END;
/
SQL*Plus outputs the following line:
num fix: 1
The following query confirms that the repair was done.
SELECT OBJECT_NAME, BLOCK_ID, MARKED_CORRUPT
FROM REPAIR_TABLE;
OBJECT_NAME BLOCK_ID MARKED_COR
------------------------------ ---------- ----------
DEPT 3 TRUE
Example: Finding Index Entries Pointing to Corrupt Data Blocks
The DUMP_ORPHAN_KEYS procedure reports on index entries that point to rows in corrupt data blocks. For each index entry, a row is inserted into the specified orphan key table. The orphan key table must have been previously created.
This information can be useful for rebuilding lost rows in the table and for diagnostic purposes.
Note:
This should be run for every index associated with a table identified in the repair table.
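The indexes to process can be listed by joining the repair table to DBA_INDEXES; a minimal sketch:
SELECT i.owner, i.index_name
FROM dba_indexes i
WHERE (i.table_owner, i.table_name) IN
(SELECT schema_name, object_name FROM repair_table);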
In this example, pk_dept is an index on the scott.dept table. It is scanned to determine if there are any index entries pointing to rows in the corrupt data block.
SET SERVEROUTPUT ON
DECLARE num_orphans INT;
BEGIN
num_orphans := 0;
DBMS_REPAIR.DUMP_ORPHAN_KEYS (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'PK_DEPT',
OBJECT_TYPE => dbms_repair.index_object,
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
ORPHAN_TABLE_NAME=> 'ORPHAN_KEY_TABLE',
KEY_COUNT => num_orphans);
DBMS_OUTPUT.PUT_LINE('orphan key count: ' || TO_CHAR(num_orphans));
END;
/
The following output indicates that there are three orphan keys:
orphan key count: 3
The presence of index entries in the orphan key table implies that the index should be rebuilt. This guarantees that a table probe and an index probe return the same result set.
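A sketch of the rebuild; if the table still contains blocks marked corrupt, skipping (shown in the next example) typically needs to be enabled first so the rebuild can read past them:
ALTER INDEX scott.pk_dept REBUILD;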
Example: Skipping Corrupt Blocks
The SKIP_CORRUPT_BLOCKS procedure enables or disables the skipping of corrupt blocks during index and table scans of the specified object. When the object is a table, skipping applies to the table and its indexes. When the object is a cluster, it applies to all of the tables in the cluster, and their respective indexes.
The following example enables the skipping of software corrupt blocks for the scott.dept table:
BEGIN
DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'DEPT',
OBJECT_TYPE => dbms_repair.table_object,
FLAGS => dbms_repair.skip_flag);
END;
/
Querying scott's tables using the DBA_TABLES view shows that SKIP_CORRUPT is enabled for table scott.dept.
SELECT OWNER, TABLE_NAME, SKIP_CORRUPT FROM DBA_TABLES
WHERE OWNER = 'SCOTT';
OWNER TABLE_NAME SKIP_COR
------------------------------ ------------------------------ --------
SCOTT ACCOUNT DISABLED
SCOTT BONUS DISABLED
SCOTT DEPT ENABLED
SCOTT DOCINDEX DISABLED
SCOTT EMP DISABLED
SCOTT RECEIPT DISABLED
SCOTT SALGRADE DISABLED
SCOTT SCOTT_EMP DISABLED
SCOTT SYS_IOT_OVER_12255 DISABLED
SCOTT WORK_AREA DISABLED
10 rows selected.
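When repairs are complete, skipping can be disabled again; a minimal sketch using the package's NOSKIP_FLAG constant:
BEGIN
DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'DEPT',
OBJECT_TYPE => dbms_repair.table_object,
FLAGS => dbms_repair.noskip_flag);
END;
/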
Monday, December 21, 2009
Tuesday, December 15, 2009
RMAN Backup to Tape with Veritas NetBackup
BACKUP FULL
connect catalog rman/password@cat10g;
connect target rman/password@letodb;
resync catalog;
run {
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch01
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch02
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch03
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
BACKUP DATABASE PLUS ARCHIVELOG FORMAT 'bk_u%u_s%s_p%p_t%t';
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
}
BACKUP INCREMENTAL LEVEL 0
connect catalog rman/password@cat10g;
connect target rman/password@letodb;
resync catalog;
run {
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch01
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch02
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch03
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
BACKUP
INCREMENTAL LEVEL=0
FORMAT 'bk_u%u_s%s_p%p_t%t'
DATABASE;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
}
BACKUP FULL CLUSTER VERITAS
connect rcvcat rman/password@cat9i
connect target /
RUN {
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64";
SEND 'NB_ORA_CLIENT=telbebdsdp1,NB_ORA_POLICY=BD_Online_ALTAMIRA_SDPTE01';
BACKUP
INCREMENTAL LEVEL=0
FORMAT 'db_%s_%p_%t'
TAG 'BD_SDPTE01_ALTAMIRA'
DATABASE;
RELEASE CHANNEL ch00;
# Backup Archived Logs
sql 'alter system archive log current';
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC0;
ALLOCATE CHANNEL ch01
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC1;
ALLOCATE CHANNEL ch02
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC2;
ALLOCATE CHANNEL ch03
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC3;
BACKUP
FORMAT 'arch_%s_%p_%t'
ARCHIVELOG
ALL
DELETE INPUT;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
}
BACKUP ARCHIVELOG CLUSTER - VERITAS
connect rcvcat rman/password@cat9i
connect target /
RUN {
# Backup Archived Logs
sql 'alter system archive log current';
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC0;
ALLOCATE CHANNEL ch01
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC1;
ALLOCATE CHANNEL ch02
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC2;
ALLOCATE CHANNEL ch03
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC3;
BACKUP
FORMAT 'arch_%s_%p_%t'
ARCHIVELOG
ALL
DELETE INPUT;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
# Control file backup
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64";
BACKUP
FORMAT 'ctrl_%s_%p_%t'
CURRENT CONTROLFILE;
RELEASE CHANNEL ch00;
}
BACKUP ARCHIVELOG
connect catalog rman/rman@cat10g;
connect target rman/rman@letodb;
resync catalog;
run {
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' parms 'ENV=(SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64,NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=CE_SEN_ct1bosunbd-spe_DB_letodb_FF_SD,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch01
TYPE 'SBT_TAPE' parms 'ENV=(SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64,NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=CE_SEN_ct1bosunbd-spe_DB_letodb_FF_SD,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch02
TYPE 'SBT_TAPE' parms 'ENV=(SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64,NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=CE_SEN_ct1bosunbd-spe_DB_letodb_FF_SD,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
# ALLOCATE CHANNEL ch03
# TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=CE_SEN_ct1bosunbd-spe_DB_letodb_FF_SD,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
backup archivelog all delete input TAG 'BACKUP ARCHIVELOG CL LETODB';
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
#RELEASE CHANNEL ch03;
}
Thursday, December 10, 2009
Oracle Full Text Indexing using Oracle Text
Full Text Indexing using Oracle Text
Oracle Text, previously known as interMedia Text and ConText, is an extensive full text indexing technology allowing you to efficiently query free text and produce document classification applications. In this article I'll only scratch the surface of this very complex feature.
CONTEXT Indexes
CTXCAT Indexes
CTXRULE Indexes
Index Maintenance
The examples in this article require access to the CTX_DDL package, which is granted as follows:
GRANT EXECUTE ON CTX_DDL TO <username>;
CONTEXT Indexes
The CONTEXT index type is used to index large amounts of text such as Word, PDF, XML, HTML or plain text documents. In this example we will store the data in a BLOB column, which allows us to store binary documents like Word and PDF as well as plain text. Using a CLOB is preferable if only plain text documents are being used.
First we build a sample schema to hold our data:
DROP TABLE my_docs;
DROP SEQUENCE my_docs_seq;
DROP PROCEDURE load_file_to_my_docs;
CREATE TABLE my_docs (
id NUMBER(10) NOT NULL,
name VARCHAR2(200) NOT NULL,
doc BLOB NOT NULL
)
/
ALTER TABLE my_docs ADD (
CONSTRAINT my_docs_pk PRIMARY KEY (id)
)
/
CREATE SEQUENCE my_docs_seq;
CREATE OR REPLACE DIRECTORY documents AS 'C:\work';
Next we load several files as follows:
CREATE OR REPLACE PROCEDURE load_file_to_my_docs (p_file_name IN my_docs.name%TYPE) AS
v_bfile BFILE;
v_blob BLOB;
BEGIN
INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, p_file_name, empty_blob())
RETURN doc INTO v_blob;
v_bfile := BFILENAME('DOCUMENTS', p_file_name);
Dbms_Lob.Fileopen(v_bfile, Dbms_Lob.File_Readonly);
Dbms_Lob.Loadfromfile(v_blob, v_bfile, Dbms_Lob.Getlength(v_bfile));
Dbms_Lob.Fileclose(v_bfile);
COMMIT;
END;
/
EXEC load_file_to_my_docs('FullTextIndexingUsingOracleText9i.doc');
EXEC load_file_to_my_docs('FullTextIndexingUsingOracleText9i.asp');
EXEC load_file_to_my_docs('XMLOverHTTP9i.asp');
EXEC load_file_to_my_docs('UNIXForDBAs.asp');
EXEC load_file_to_my_docs('emp_ws_access.sql');
EXEC load_file_to_my_docs('emp_ws_test.html');
EXEC load_file_to_my_docs('9ivsSS2000forPerformanceV22.pdf');
Next we create a CONTEXT type index on the doc column and gather table statistics:
CREATE INDEX my_docs_doc_idx ON my_docs(doc) INDEXTYPE IS CTXSYS.CONTEXT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_DOCS', cascade=>TRUE);
Finally we query the table looking for documents with specific content:
SELECT SCORE(1) score, id, name
FROM my_docs
WHERE CONTAINS(doc, 'SQL Server', 1) > 0
ORDER BY SCORE(1) DESC;
SCORE ID NAME
---------- ---------- ------------------------------------------------
100 127 9ivsSS2000forPerformanceV22.pdf
1 row selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=2 Bytes=58)
1 0 SORT (ORDER BY) (Cost=4 Card=2 Bytes=58)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'MY_DOCS' (Cost=2 Card=2 Bytes=58)
3 2 DOMAIN INDEX OF 'MY_DOCS_DOC_IDX' (Cost=0)
SELECT SCORE(1) score, id, name
FROM my_docs
WHERE CONTAINS(doc, 'XML', 1) > 0
ORDER BY SCORE(1) DESC;
SCORE ID NAME
---------- ---------- ------------------------------------------------
74 123 XMLOverHTTP9i.asp
9 125 emp_ws_access.sql
2 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=2 Bytes=58)
1 0 SORT (ORDER BY) (Cost=4 Card=2 Bytes=58)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'MY_DOCS' (Cost=2 Card=2 Bytes=58)
3 2 DOMAIN INDEX OF 'MY_DOCS_DOC_IDX' (Cost=0)
CTXCAT Indexes
The CTXCAT index type is best suited to smaller text fragments that must be indexed along with other relational data. In this example the data will be stored in a VARCHAR2 column.
First we create a schema to hold the data:
DROP TABLE my_items;
DROP SEQUENCE my_items_seq;
EXEC CTX_DDL.DROP_INDEX_SET('my_items_iset');
CREATE TABLE my_items (
id NUMBER(10) NOT NULL,
name VARCHAR2(200) NOT NULL,
description VARCHAR2(4000) NOT NULL,
price NUMBER(7,2) NOT NULL
)
/
ALTER TABLE my_items ADD (
CONSTRAINT my_items_pk PRIMARY KEY (id)
)
/
CREATE SEQUENCE my_items_seq;
Next we populate the schema with some dummy data:
BEGIN
FOR i IN 1 .. 1000 LOOP
INSERT INTO my_items (id, name, description, price)
VALUES (my_items_seq.NEXTVAL, 'Bike: '||i, 'Bike Description ('||i||')', i);
END LOOP;
FOR i IN 1 .. 1000 LOOP
INSERT INTO my_items (id, name, description, price)
VALUES (my_items_seq.NEXTVAL, 'Car: '||i, 'Car Description ('||i||')', i);
END LOOP;
FOR i IN 1 .. 1000 LOOP
INSERT INTO my_items (id, name, description, price)
VALUES (my_items_seq.NEXTVAL, 'House: '||i, 'House Description ('||i||')', i);
END LOOP;
COMMIT;
END;
/
Next we create a CTXCAT index on the DESCRIPTION and PRICE columns and gather table statistics. In order to create the index we must create an index-set with a sub-index for each column referenced by the CATSEARCH function:
EXEC CTX_DDL.CREATE_INDEX_SET('my_items_iset');
EXEC CTX_DDL.ADD_INDEX('my_items_iset','price');
CREATE INDEX my_items_name_idx ON my_items(description) INDEXTYPE IS CTXSYS.CTXCAT
PARAMETERS ('index set my_items_iset');
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_ITEMS', cascade=>TRUE);
Finally we query the table looking for items with a description that contains our specified words and an appropriate price:
SELECT id, price, name
FROM my_items
WHERE CATSEARCH(description, 'Bike', 'price BETWEEN 1 AND 5')> 0;
ID PRICE NAME
---------- ---------- ------------------------------------------------
1 1 Bike: 1
2 2 Bike: 2
3 3 Bike: 3
4 4 Bike: 4
5 5 Bike: 5
5 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=150 Bytes=6000)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'MY_ITEMS' (Cost=2 Card=150 Bytes=6000)
2 1 DOMAIN INDEX OF 'MY_ITEMS_NAME_IDX'
SELECT id, price, name
FROM my_items
WHERE CATSEARCH(description, 'Car', 'price BETWEEN 101 AND 105 ORDER BY price DESC')> 0;
ID PRICE NAME
---------- ---------- ------------------------------------------------
1105 105 Car: 105
1104 104 Car: 104
1103 103 Car: 103
1102 102 Car: 102
1101 101 Car: 101
5 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=150 Bytes=6000)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'MY_ITEMS' (Cost=2 Card=150 Bytes=6000)
2 1 DOMAIN INDEX OF 'MY_ITEMS_NAME_IDX'
Every column used to restrict the selection or order the output in the CATSEARCH function should have a sub-index within the index-set. The CTXCAT index type is transactional, so there is no need to synchronize the index.
CTXRULE Indexes
The CTXRULE index type can be used to build document classification applications.
First we must define our document categories and store them, along with a suitable query for the MATCHES function:
DROP TABLE my_doc_categories;
DROP TABLE my_categories;
DROP SEQUENCE my_categories_seq;
DROP TABLE my_docs;
DROP SEQUENCE my_docs_seq;
CREATE TABLE my_categories (
id NUMBER(10) NOT NULL,
category VARCHAR2(30) NOT NULL,
query VARCHAR2(2000) NOT NULL
)
/
ALTER TABLE my_categories ADD (
CONSTRAINT my_categories_pk PRIMARY KEY (id)
)
/
CREATE SEQUENCE my_categories_seq;
INSERT INTO my_categories VALUES(my_categories_seq.NEXTVAL, 'Oracle', 'ABOUT(Oracle)');
INSERT INTO my_categories VALUES(my_categories_seq.NEXTVAL, 'SQL Server', 'ABOUT(SQL Server)');
INSERT INTO my_categories VALUES(my_categories_seq.NEXTVAL, 'UNIX', 'ABOUT(UNIX)');
Next we create a table to hold our documents:
CREATE TABLE my_docs (
id NUMBER(10) NOT NULL,
name VARCHAR2(200) NOT NULL,
doc CLOB NOT NULL
)
/
ALTER TABLE my_docs ADD (
CONSTRAINT my_docs_pk PRIMARY KEY (id)
)
/
CREATE SEQUENCE my_docs_seq;
Then we create an intersection table to resolve the many-to-many relationship between documents and categories:
CREATE TABLE my_doc_categories (
my_doc_id NUMBER(10) NOT NULL,
my_category_id NUMBER(10) NOT NULL
)
/
ALTER TABLE my_doc_categories ADD (
CONSTRAINT my_doc_categories_pk PRIMARY KEY (my_doc_id, my_category_id)
)
/
Next we create a BEFORE INSERT trigger on the MY_DOCS table to automatically assign the documents to the relevant categories as they are being inserted. The MATCHES function is used to decide if the document matches any of our category queries. The resulting cursor is used to insert the matches into the intersection table:
CREATE OR REPLACE TRIGGER my_docs_trg
BEFORE INSERT ON my_docs
FOR EACH ROW
BEGIN
FOR c1 IN (SELECT id
FROM my_categories
WHERE MATCHES(query, :new.doc)>0)
LOOP
BEGIN
INSERT INTO my_doc_categories(my_doc_id, my_category_id)
VALUES (:new.id, c1.id);
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
END LOOP;
END;
/
Next we create the CTXRULE index to support the trigger. For completeness we also create a CONTEXT index on the document itself, although this is not involved in the category assignment process:
CREATE INDEX my_categories_query_idx ON my_categories(query) INDEXTYPE IS CTXSYS.CTXRULE;
CREATE INDEX my_docs_doc_idx ON my_docs(doc) INDEXTYPE IS CTXSYS.CONTEXT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_CATEGORIES', cascade=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_DOCS', cascade=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_DOC_CATEGORIES', cascade=>TRUE);
Finally we test the mechanism by inserting some rows and checking the classification:
INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'Oracle Document', 'This document contains the word Oracle!');
INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'SQL Server Document', 'This document contains the words SQL Server!');
INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'UNIX Document', 'This document contains the word UNIX!');
INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'Oracle UNIX Document', 'This document contains the word UNIX and the word Oracle!');
COLUMN name FORMAT A30;
SELECT a.name, b.category
FROM my_docs a,
my_categories b,
my_doc_categories c
WHERE c.my_doc_id = a.id
AND c.my_category_id = b.id;
NAME CATEGORY
------------------------------ ------------------------------
Oracle Document Oracle
SQL Server Document SQL Server
UNIX Document UNIX
Oracle UNIX Document UNIX
Oracle UNIX Document Oracle
5 rows selected.
The output shows that the documents have been assigned to the correct categories. Note that the "Oracle UNIX Document" document has been assigned to both the "Oracle" and "UNIX" categories.
Index Maintenance
Not all Oracle Text indexes are automatically updated as records are added or deleted. To synchronize the index with the table you must call:
SQL> EXEC CTX_DDL.SYNC_INDEX('my_docs_doc_idx');
Regular synchronizations of the index can be automated using the DBMS_JOB package. The following script is provided to make this task easier:
$ORACLE_HOME/ctx/sample/script/drjobdml.sql
It can be called from SQL*Plus whilst logged on as the index owner as follows:
SQL> @drjobdml.sql index-name interval-mins
SQL> @drjobdml.sql my_docs_doc_idx 60
Regular synchronization of text indexes can cause fragmentation, which affects query performance. To correct this situation the index can be rebuilt or optimized. Index optimization can be performed in three basic modes (FAST, FULL or TOKEN). The FAST mode compacts fragmented rows but does not remove old data:
BEGIN
CTX_DDL.OPTIMIZE_INDEX('my_docs_doc_idx','FAST');
END;
/
The FULL mode optimizes either the entire index or a portion of it, with old data removed:
BEGIN
CTX_DDL.OPTIMIZE_INDEX('my_docs_doc_idx','FULL');
END;
/
The TOKEN mode performs a full optimization for a specific token:
BEGIN
CTX_DDL.OPTIMIZE_INDEX('my_docs_doc_idx','TOKEN', token=>'Oracle');
END;
/
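Alternatively, if the drjobdml.sql script is not to hand, a similar periodic synchronization can be scheduled directly with DBMS_JOB; a minimal sketch (the job text and hourly interval are illustrative):
DECLARE
l_job BINARY_INTEGER;
BEGIN
DBMS_JOB.SUBMIT (
job => l_job,
what => 'CTX_DDL.SYNC_INDEX(''my_docs_doc_idx'');',
next_date => SYSDATE,
interval => 'SYSDATE + 1/24'); -- run hourly
COMMIT;
END;
/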
Wednesday, December 2, 2009
Oracle Database Secrets
SQL*Plus: Release 10.2.0.4.0 - Production on Wed Dec 2 15:34:49 2009
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> SHOW PARAMETER audit
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest string /u01/app/oracle/admin/POWER/ad
ump
audit_syslog_level string
audit_sys_operations boolean FALSE
audit_trail string NONE
SQL> SELECT value FROM v$parameter WHERE name='audit_syslog_level';
no rows selected
SQL>
Yet, when executing CONNECT / AS SYSDBA, the facility and level logged in /var/adm/messages
on Solaris is “user.notice”:
Feb 21 11:45:52 dbserver Oracle Audit[27742]: [ID 441842 user.notice]
ACTION : 'CONNECT'
Feb 21 11:45:52 dbserver DATABASE USER: '/'
Feb 21 11:45:52 dbserver PRIVILEGE : SYSDBA
Feb 21 11:45:52 dbserver CLIENT USER: oracle
Feb 21 11:45:52 dbserver CLIENT TERMINAL: pts/3
Feb 21 11:45:52 dbserver STATUS: 0
If an SPFILE is used, the full setting is available by querying V$SPPARAMETER:
SQL> SELECT value FROM v$spparameter WHERE name='audit_syslog_level';
VALUE
-----------
user.notice
Auditing Non-Privileged Users
Of course, you may also direct audit records pertaining to non-privileged users to the system log by setting AUDIT_TRAIL=OS in addition to AUDIT_SYSLOG_LEVEL. Non-privileged users cannot delete audit trails logging their actions. The search for perpetrators with queries against auditing views, such as DBA_AUDIT_STATEMENT or DBA_AUDIT_OBJECT, is easier than searching the system log. For these reasons, keeping the audit trails of non-privileged users inside the database with AUDIT_TRAIL=DB is preferred. With the latter setting, audit trails are written to the table SYS.AUD$ and may be queried through the aforementioned data dictionary views. Setting AUDIT_TRAIL=NONE switches off auditing of actions by non-privileged users.
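These audit parameters are static, so changing them means updating the SPFILE and restarting the instance; a minimal sketch, assuming an SPFILE is in use (values are illustrative):
ALTER SYSTEM SET audit_syslog_level='AUTH.INFO' SCOPE=SPFILE;
ALTER SYSTEM SET audit_sys_operations=TRUE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP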
SQL> AUDIT CONNECT BY appuser /* audit_trail=os set */;
Entries similar to the following are then written to the syslog facility (example from Solaris):
Feb 21 11:41:14 dbserver Oracle Audit[27684]: [ID 930208 user.notice]
SESSIONID: "15" ENTRYID: "1" STATEMENT: "1" USERID: "APPUSER"
USERHOST: "dbserver" TERMINAL: "pts/3" ACTION: "100" RETURNCODE: "0"
COMMENT$TEXT: "Authenticated by: DATABASE" OS$USERID: "oracle"
PRIV$USED: 5
Another entry is added to /var/adm/messages when a database session ends:
Feb 21 11:44:41 dbserver Oracle Audit[27684]: [ID 162490 user.notice]
SESSIONID: "15" ENTRYID: "1" ACTION: "101" RETURNCODE: "0"
LOGOFF$PREAD: "1" LOGOFF$LREAD: "17" LOGOFF$LWRITE: "0" LOGOFF$DEAD:
"0" SESSIONCPU: "2"
Note that additional data provided on the actions LOGON (100) and LOGOFF (101) conforms to the columns of the view DBA_AUDIT_SESSION. Translation from action numbers to action names is done via the view AUDIT_ACTIONS, as in this example:
SQL> SELECT action, name FROM audit_actions WHERE action IN (100,101);
ACTION NAME
------ ------
100 LOGON
101 LOGOFF
When AUDIT_SYSLOG_LEVEL=AUTH.INFO, AUDIT_SYS_OPERATIONS=FALSE, and AUDIT_TRAIL=NONE,
CONNECT, STARTUP, and SHUTDOWN are logged via syslog. With these settings, an instance shutdown on Solaris writes entries similar to the following to /var/adm/messages:
Feb 21 14:40:01 dbserver Oracle Audit[29036]:[ID 63719 auth.info] ACTION:'SHUTDOWN'
Feb 21 14:40:01 dbserver DATABASE USER: '/'
Feb 21 14:40:01 dbserver PRIVILEGE : SYSDBA
Feb 21 14:40:01 dbserver CLIENT USER: oracle
Feb 21 14:40:01 dbserver CLIENT TERMINAL: pts/3
Feb 21 14:40:01 dbserver STATUS: 0
When AUDIT_SYSLOG_LEVEL=AUTH.INFO, AUDIT_SYS_OPERATIONS=TRUE, and AUDIT_TRAIL=NONE,
SQL and PL/SQL statements executed with SYSDBA or SYSOPER privileges are also logged via syslog. Dropping a user after connecting with / AS SYSDBA results in a syslog entry similar to the one shown here:
Feb 21 14:46:53 dbserver Oracle Audit[29170]: [ID 853627 auth.info]
ACTION : 'drop user appuser'
Feb 21 14:46:53 dbserver DATABASE USER: '/'
Feb 21 14:46:53 dbserver PRIVILEGE : SYSDBA
Feb 21 14:46:53 dbserver CLIENT USER: oracle
Feb 21 14:46:53 dbserver CLIENT TERMINAL: pts/3
Feb 21 14:46:53 dbserver STATUS: 0
Wednesday, November 25, 2009
Oracle SQL*Plus to Unix Shell (Flat Files)
How to pass values from SQL*Plus to the Unix shell
Viewing the instance name from Unix alone:
ps -fea | grep pmon | grep -v grep | awk '{print $9}' | cut -c 10-100
Sometimes, inside a shell script, we want to query the database and use the result to drive the rest of the script's logic. How do we get that result back?
There are two approaches to capturing values once the SQL*Plus execution finishes: through files and through variables.
The first is basically to write the query output to a file and then read it from the shell. This case is ideal when the query returns several rows, since the file can feed a Unix command that processes it line by line. It barely needs an example: simply use the SQL*Plus SPOOL command to write the output to a text file. It's not the tidiest approach, but it works.
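For completeness, a minimal sketch of the SPOOL approach (the file name and query are illustrative):
SET HEAD OFF
SET FEEDBACK OFF
SPOOL /tmp/resultado.txt
SELECT ename FROM emp;
SPOOL OFF
EXIT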
Unix makes the second approach easy: we run the query in a sqlplus session and the output is captured in the 'resultado' variable. To keep this example simple I haven't checked for query errors, but logic can be added to handle error messages inside the assigned variable.
#!/bin/ksh
resultado=`sqlplus -s 'scott/tiger' << EOF
set serveroutput on
set feedback off
set head off
select * from dual;
exit;
EOF`
echo "El resultado es: $resultado"
Several values in one line
What if we need to return more than one value? We still return a single line, but separate the values inside the SQL with a character that does not occur within any result, for example a semicolon. In this example we fetch the connected user, the current date and the value of the DUAL column, all at once.
#!/bin/ksh
resultado=`sqlplus -s 'scott/tiger' << EOF
set serveroutput on
set feedback off
set head off
select user||';'||sysdate||';'||dummy from dual;
exit;
EOF`
echo "The values are: $resultado"
The value returned is: SCOTT;05-FEB-09;X
With the cut command we can split the values easily:
echo $resultado | cut -d';' -f1
SCOTT
echo $resultado | cut -d';' -f2
05-FEB-09
echo $resultado | cut -d';' -f3
X
Multi-line results
If we want to retrieve several lines instead of just one, the same technique works. As in the previous case, when there are multiple values per line it is advisable to use separators, since spaces are unreliable for delimiting strings that themselves contain spaces. Setting a suitable line size also avoids awkward padding between lines.
#!/bin/ksh
resultado=`sqlplus -s 'scott/tiger' << EOF
set serveroutput on
set feedback off
set head off
set linesize 131
set pagesize 9999
select empno||';'||ename||';'||job||';'||mgr||';'||deptno from emp;
exit;
EOF`
echo "El resultado es: $resultado"
La salida en este caso es:
7369;SMITH;CLERK;7902;20
7499;ALLEN;SALESMAN;7698;30
7521;WARD;SALESMAN;7698;30
7566;JONES;MANAGER;7839;20
7654;MARTIN;SALESMAN;7698;30
7698;BLAKE;MANAGER;7839;30
7782;CLARK;MANAGER;7839;10
7788;SCOTT;ANALYST;7566;20
7839;KING;PRESIDENT;;10
7844;TURNER;SALESMAN;7698;30
7876;ADAMS;CLERK;7788;20
7900;JAMES;CLERK;7698;30
7902;FORD;ANALYST;7566;20
7934;MILLER;CLERK;7782;10
Now we can use any command to process the lines. One of my favourites is awk, since it provides many functions for handling each type.
For awk to consume each line of the variable as if it were a file, we must include double quotes; otherwise it treats the content as one single long line.
In the following example, awk takes each line and prints some accompanying text.
echo "$resultado" | awk -F";" 'BEGIN {$cnt=1}
{print "Linea "$cnt, $0; $cnt=$cnt+1;}'
The output generated by awk is:
Linea 1 7369;SMITH;CLERK;7902;20
Linea 2 7499;ALLEN;SALESMAN;7698;30
Linea 3 7521;WARD;SALESMAN;7698;30
Linea 4 7566;JONES;MANAGER;7839;20
Linea 5 7654;MARTIN;SALESMAN;7698;30
Linea 6 7698;BLAKE;MANAGER;7839;30
Linea 7 7782;CLARK;MANAGER;7839;10
Linea 8 7788;SCOTT;ANALYST;7566;20
Linea 9 7839;KING;PRESIDENT;;10
Linea 10 7844;TURNER;SALESMAN;7698;30
Linea 11 7876;ADAMS;CLERK;7788;20
Linea 12 7900;JAMES;CLERK;7698;30
Linea 13 7902;FORD;ANALYST;7566;20
Linea 14 7934;MILLER;CLERK;7782;10
Monday, November 9, 2009
Oracle 11g Database Replay
Database Replay in Oracle Database 11g Release 1
The Database Replay functionality of Oracle 11g allows you to capture workloads on a production system and replay them exactly as they happened on a test system. This provides an accurate method to test the impact of a variety of system changes including:
Database upgrades.
Operating system upgrades or migrations.
Configuration changes, such as changes to initialization parameters or conversion from a single node to a RAC environment.
Hardware changes or migrations.
The capture and replay processes can be configured and initiated using PL/SQL APIs, or Enterprise Manager, both of which are demonstrated in this article. To keep things simple, the examples presented here are performed against two servers (prod-11g and test-11g), both of which run an identical database with a SID of DB11G.
Capture using the DBMS_WORKLOAD_CAPTURE Package
Replay using the DBMS_WORKLOAD_REPLAY Package
Capture using Enterprise Manager
Replay using Enterprise Manager
Capture using the DBMS_WORKLOAD_CAPTURE Package
The DBMS_WORKLOAD_CAPTURE package provides a set of procedures and functions to control the capture process. Before we can initiate the capture process we need an empty directory on the "prod-11g" database server to hold the capture logs.
mkdir /u01/app/oracle/db_replay_capture
Next, we create a directory object pointing to the new directory.
CONN sys/password@prod AS SYSDBA
CREATE OR REPLACE DIRECTORY db_replay_capture_dir AS '/u01/app/oracle/db_replay_capture/';
-- Make sure existing processes are complete.
SHUTDOWN IMMEDIATE
STARTUP
Notice the inclusion of a shutdown and startup of the database. This is not necessary, but Oracle suggests it as a good way to make sure any outstanding processes are complete before starting the capture process.
The combination of the ADD_FILTER procedure and the DEFAULT_ACTION parameter of the START_CAPTURE procedure allows the workload to be refined by including or excluding specific work based on the following attributes (a filter sketch follows the list):
INSTANCE_NUMBER
USER
MODULE
ACTION
PROGRAM
SERVICE
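For example, a sketch of excluding one user's activity before starting the capture (the filter name and excluded user are illustrative, not from the original article):
BEGIN
  -- Exclude all work performed by the SYSTEM user from the capture.
  DBMS_WORKLOAD_CAPTURE.add_filter (
    fname      => 'exclude_system',
    fattribute => 'USER',
    fvalue     => 'SYSTEM');
END;
/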
For simplicity, let's assume we want to capture everything, so we can ignore filtering and jump straight to the START_CAPTURE procedure. This procedure allows us to name a capture run, specify the directory the capture files should be placed in, and specify the length of time the capture process should run for. If the duration is set to NULL, the capture runs until it is manually turned off using the FINISH_CAPTURE procedure.
BEGIN
DBMS_WORKLOAD_CAPTURE.start_capture (name => 'test_capture_1',
dir => 'DB_REPLAY_CAPTURE_DIR',
duration => NULL);
END;
/
Now, we need to do some work to capture. First, we create a test user.
CREATE USER db_replay_test IDENTIFIED BY db_replay_test
QUOTA UNLIMITED ON users;
GRANT CONNECT, CREATE TABLE TO db_replay_test;
Next, we create a table and populate it with some data.
CONN db_replay_test/db_replay_test@prod
CREATE TABLE db_replay_test_tab (
id NUMBER,
description VARCHAR2(50),
CONSTRAINT db_replay_test_tab_pk PRIMARY KEY (id)
);
BEGIN
FOR i IN 1 .. 500000 LOOP
INSERT INTO db_replay_test_tab (id, description)
VALUES (i, 'Description for ' || i);
END LOOP;
COMMIT;
END;
/
Once the work is complete we can stop the capture using the FINISH_CAPTURE procedure.
CONN sys/password@prod AS SYSDBA
BEGIN
DBMS_WORKLOAD_CAPTURE.finish_capture;
END;
/
If we check out the capture directory, we can see that some files have been generated there.
$ cd /u01/app/oracle/db_replay_capture
$ ls
wcr_4f9rtgw00238y.rec wcr_cr.html wcr_scapture.wmd
wcr_4f9rtjw002397.rec wcr_cr.text
wcr_4f9rtyw00239h.rec wcr_fcapture.wmd
$
We can retrieve the ID of the capture run by passing the directory object name to the GET_CAPTURE_INFO function, or by querying the DBA_WORKLOAD_CAPTURES view.
SELECT DBMS_WORKLOAD_CAPTURE.get_capture_info('DB_REPLAY_CAPTURE_DIR')
FROM dual;
DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO('DB_REPLAY_CAPTURE_DIR')
---------------------------------------------------------------
21
1 row selected.
SQL>
COLUMN name FORMAT A30
SELECT id, name FROM dba_workload_captures;
ID NAME
---------- ------------------------------
21 test_capture_1
1 row selected.
SQL>
The DBA_WORKLOAD_CAPTURES view contains information about the capture process. This can be queried directly, or a report can be generated in text or HTML format using the REPORT function.
DECLARE
l_report CLOB;
BEGIN
l_report := DBMS_WORKLOAD_CAPTURE.report(capture_id => 21,
format => DBMS_WORKLOAD_CAPTURE.TYPE_HTML);
END;
/
The capture ID can be used to export the AWR snapshots associated with the specific capture run.
BEGIN
DBMS_WORKLOAD_CAPTURE.export_awr (capture_id => 21);
END;
/
A quick look at the capture directory shows a dump file and associated log file have been produced.
$ cd /u01/app/oracle/db_replay_capture
$ ls
wcr_4f9rtgw00238y.rec wcr_ca.dmp wcr_cr.text
wcr_4f9rtjw002397.rec wcr_ca.log wcr_fcapture.wmd
wcr_4f9rtyw00239h.rec wcr_cr.html wcr_scapture.wmd
$
Replay using the DBMS_WORKLOAD_REPLAY Package
The DBMS_WORKLOAD_REPLAY package provides a set of procedures and functions to control the replay process. In order to replay the logs captured on the "prod-11g" system, we need to transfer the capture files to our test system. Before we can do this, we need to create a directory on the "test-11g" system to put them in. For simplicity we will keep the name the same.
mkdir /u01/app/oracle/db_replay_capture
Transfer the files from the production server to the test server.
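One way to copy them, assuming SSH connectivity between the two servers (host name and paths as used elsewhere in this article), is scp:
scp /u01/app/oracle/db_replay_capture/* oracle@test-11g:/u01/app/oracle/db_replay_capture/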
Next, we create a directory object pointing to the new directory.
CONN sys/password@test AS SYSDBA
CREATE OR REPLACE DIRECTORY db_replay_capture_dir AS '/u01/app/oracle/db_replay_capture/';
It is a good idea to adjust the test system time to match the time when the capture process was started. This way any time-based processing will react in the same way. For this test I have ignored this step.
We can now prepare to replay the existing capture logs using the PROCESS_CAPTURE, INITIALIZE_REPLAY and PREPARE_REPLAY procedures. I've named the replay with the same name as the capture process (test_capture_1), but this is not necessary.
BEGIN
DBMS_WORKLOAD_REPLAY.process_capture('DB_REPLAY_CAPTURE_DIR');
DBMS_WORKLOAD_REPLAY.initialize_replay (replay_name => 'test_capture_1',
replay_dir => 'DB_REPLAY_CAPTURE_DIR');
DBMS_WORKLOAD_REPLAY.prepare_replay (synchronization => TRUE);
END;
/
Before we can start the replay, we need to calibrate and start a replay client using the "wrc" utility. The calibration step tells us the number of replay clients and hosts necessary to faithfully replay the workload.
$ wrc mode=calibrate replaydir=/u01/app/oracle/db_replay_capture
Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:33:42 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Report for Workload in: /u01/app/oracle/db_replay_capture
-----------------------
Recommendation:
Consider using at least 1 clients divided among 1 CPU(s).
Workload Characteristics:
- max concurrency: 1 sessions
- total number of sessions: 3
Assumptions:
- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE
$
The calibration step suggests a single client on a single CPU is enough, so we only need to start a single replay client, which is shown below.
$ wrc system/password@test mode=replay replaydir=/u01/app/oracle/db_replay_capture
Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:34:14 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Wait for the replay to start (09:34:14)
The replay client pauses waiting for replay to start. We initiate replay with the following command.
BEGIN
DBMS_WORKLOAD_REPLAY.start_replay;
END;
/
If you need to stop the replay before it is complete, call the CANCEL_REPLAY procedure.
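A minimal sketch of that call:
BEGIN
  DBMS_WORKLOAD_REPLAY.cancel_replay;
END;
/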
The output from the replay client includes the start and finish time of the replay operation.
$ wrc system/password@test mode=replay replaydir=/u01/app/oracle/db_replay_capture
Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:34:14 2007
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Wait for the replay to start (09:34:14)
Replay started (09:34:44)
Replay finished (09:39:15)
$
Once complete, we can see the DB_REPLAY_TEST_TAB table has been created and populated in the DB_REPLAY_TEST schema.
SQL> CONN sys/password@test AS SYSDBA
Connected.
SQL> SELECT table_name FROM dba_tables WHERE owner = 'DB_REPLAY_TEST';
TABLE_NAME
------------------------------
DB_REPLAY_TEST_TAB
SQL> SELECT COUNT(*) FROM db_replay_test.db_replay_test_tab;
COUNT(*)
----------
500000
SQL>
Information about the replay processing is available from the DBA_WORKLOAD_REPLAYS view.
COLUMN name FORMAT A30
SELECT id, name FROM dba_workload_replays;
ID NAME
---------- ------------------------------
11 test_capture_1
1 row selected.
SQL>
In addition, a report can be generated in text or HTML format using the REPORT function.
DECLARE
l_report CLOB;
BEGIN
l_report := DBMS_WORKLOAD_REPLAY.report(replay_id => 11,
format => DBMS_WORKLOAD_REPLAY.TYPE_HTML);
END;
/
Oracle 11g Query Result Cache
Query Result Cache in Oracle Database 11g Release 1
Oracle 11g allows the results of SQL queries to be cached in the SGA and reused to improve performance. Set up the following schema objects to see how the SQL query cache works.
CREATE TABLE qrc_tab (
id NUMBER
);
INSERT INTO qrc_tab VALUES (1);
INSERT INTO qrc_tab VALUES (2);
INSERT INTO qrc_tab VALUES (3);
INSERT INTO qrc_tab VALUES (4);
INSERT INTO qrc_tab VALUES (5);
CREATE OR REPLACE FUNCTION slow_function(p_id IN qrc_tab.id%TYPE)
RETURN qrc_tab.id%TYPE DETERMINISTIC AS
BEGIN
DBMS_LOCK.sleep(1);
RETURN p_id;
END;
/
SET TIMING ON
The function contains a one-second sleep so we can easily detect whether it has been executed by checking the elapsed time of the query.
Next, we query the test table using the slow function and check the elapsed time. Each run takes approximately five seconds: a one-second sleep for each row queried.
SELECT slow_function(id) FROM qrc_tab;
SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5
5 rows selected.
Elapsed: 00:00:05.15
SQL>
Adding the RESULT_CACHE hint to the query tells the server to attempt to retrieve the information from the result cache. If the information is not present, it will cache the results of the query provided there is enough room in the result cache. Since we have no cached results, we would expect the first run to take approximately five seconds, but subsequent runs to be much quicker.
SELECT /*+ result_cache */ slow_function(id) FROM qrc_tab;
SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5
5 rows selected.
Elapsed: 00:00:05.20
SELECT /*+ result_cache */ slow_function(id) FROM qrc_tab;
SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5
5 rows selected.
Elapsed: 00:00:00.15
SQL>
The default action of the result cache is controlled by the RESULT_CACHE_MODE parameter. When it is set to MANUAL, the RESULT_CACHE hint must be used for a query to access the result cache.
SHOW PARAMETER RESULT_CACHE_MODE
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
result_cache_mode string MANUAL
SQL>
If we set the RESULT_CACHE_MODE parameter to FORCE, the result cache is used by default, but we can bypass it using the NO_RESULT_CACHE hint.
ALTER SESSION SET RESULT_CACHE_MODE=FORCE;
SELECT slow_function(id) FROM qrc_tab;
SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5
5 rows selected.
Elapsed: 00:00:00.14
SELECT /*+ no_result_cache */ slow_function(id) FROM qrc_tab;
SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5
5 rows selected.
Elapsed: 00:00:05.14
SQL>
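As a quick side check not shown in the original article, the V$RESULT_CACHE_STATISTICS view reports cache activity, so you can confirm that the fast runs were served from the cache:
-- Look for statistics such as "Create Count Success" and "Find Count".
SELECT name, value
FROM   v$result_cache_statistics;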
Oracle 11g Flashback Data Archive (Oracle Total Recall)
Flashback Data Archive (Oracle Total Recall)
Most flashback features work at the logical level, in that they don't directly allow you to recover an object. They simply give you a view of how the data was in the past, which in turn can be used to recover from logical corruption, such as accidental deletions. Typically, this view of the past is constructed using undo segments, which are retained for a period of time indicated by the UNDO_RETENTION parameter. Once the undo information is lost, the view of the past is lost also. At least that's how it was before Oracle 11g introduced the Flashback Data Archive functionality.
A flashback data archive is essentially an extended store of undo information, allowing some logical flashback operations to extend way back into the past with no difference in how they are coded. An individual flashback archive consists of one or more tablespaces, or parts of tablespaces. Each flashback archive has a name, a retention period and a quota on each associated tablespace. The database can have multiple flashback data archives, but only a single default archive. When a DML transaction commits an operation on a flashback archive-enabled table, the Flashback Data Archiver (FBDA) process stores the pre-image of the rows into a flashback archive, along with metadata about the current rows. The FBDA process is also responsible for managing the data within the flashback archives, such as purging data beyond the retention period.
The following script creates a new tablespace, then creates two flashback data archives using the CREATE FLASHBACK ARCHIVE command. The first is limited in size to 10G with a retention period of 1 year, while the second has an unlimited quota and a retention period of 2 years.
CREATE TABLESPACE fda_ts
DATAFILE '/u01/app/oracle/oradata/DB11G/fda1_01.dbf'
SIZE 1M AUTOEXTEND ON NEXT 1M;
CREATE FLASHBACK ARCHIVE DEFAULT fda_1year TABLESPACE fda_ts
QUOTA 10G RETENTION 1 YEAR;
CREATE FLASHBACK ARCHIVE fda_2year TABLESPACE fda_ts
RETENTION 2 YEAR;
Management of flashback archives falls into three distinct categories:
Tablespace management.
-- Set as default FBA
ALTER FLASHBACK ARCHIVE fba_name SET DEFAULT;
-- Add up to 10G of the specified tablespace to the specified flashback archive.
ALTER FLASHBACK ARCHIVE fba_name ADD TABLESPACE ts_name QUOTA 10G;
-- Add an unlimited quota of the specified tablespace to the specified flashback archive.
ALTER FLASHBACK ARCHIVE fba_name ADD TABLESPACE ts_name;
-- Change the tablespace quota to 20G.
ALTER FLASHBACK ARCHIVE fba_name MODIFY TABLESPACE ts_name QUOTA 20G;
-- Change the tablespace quota to unlimited.
ALTER FLASHBACK ARCHIVE fba_name MODIFY TABLESPACE ts_name;
-- Remove the specified tablespace from the archive.
ALTER FLASHBACK ARCHIVE fba_name REMOVE TABLESPACE ts_name;
Modifying the retention period.
ALTER FLASHBACK ARCHIVE fba_name MODIFY RETENTION 2 YEAR;
Purging data.
-- Remove all historical data.
ALTER FLASHBACK ARCHIVE fba_name PURGE ALL;
-- Remove all data before the specified time.
ALTER FLASHBACK ARCHIVE fba_name PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);
-- Remove all data before the specified SCN.
ALTER FLASHBACK ARCHIVE fba_name PURGE BEFORE SCN 728969;
Archives are removed using the DROP FLASHBACK ARCHIVE command, which drops the archive and all its historical data, but doesn't drop the associated tablespace.
DROP FLASHBACK ARCHIVE fba_name;
To enable flashback archiving on a specific table the user must have the FLASHBACK ARCHIVE object privilege on the specific flashback archive used. To try this out create a test user and grant the object privilege on the first flashback archive created earlier.
CONN sys/password AS SYSDBA
CREATE USER fda_test_user IDENTIFIED BY fda_test_user QUOTA UNLIMITED ON users;
GRANT CONNECT, CREATE TABLE TO fda_test_user;
GRANT FLASHBACK ARCHIVE ON fda_1year TO fda_test_user;
If we connect to the test user we are able to create a table and associate it with the default flashback archive as follows.
CONN fda_test_user/fda_test_user
CREATE TABLE test_tab_1 (
id NUMBER,
description VARCHAR2(50),
CONSTRAINT test_tab_1_pk PRIMARY KEY (id)
)
FLASHBACK ARCHIVE;
If we try to create a similar table, but point it at the second archive, it fails, as we have no privileges on it.
CONN fda_test_user/fda_test_user
CREATE TABLE test_tab_2 (
id NUMBER,
description VARCHAR2(50),
CONSTRAINT test_tab_2_pk PRIMARY KEY (id)
)
FLASHBACK ARCHIVE fda_2year;
CREATE TABLE test_tab_2 (
*
ERROR at line 1:
ORA-55620: No privilege to use Flashback Archive
SQL>
The ALTER TABLE command allows existing tables to have flashback archiving switched on or off.
-- Enable using the default FBDA.
ALTER TABLE table_name FLASHBACK ARCHIVE;
-- Enable using specific FBDA.
ALTER TABLE table_name FLASHBACK ARCHIVE fda_name;
-- Disable flashback archiving.
ALTER TABLE table_name NO FLASHBACK ARCHIVE;
As with the CREATE TABLE statement, the FLASHBACK ARCHIVE object privilege must be granted on the flashback archive being used.
Once flashback archiving is enabled the table is protected, so it cannot be switched off unless you have the FLASHBACK ARCHIVE ADMINISTER system privilege, or are logged on as SYSDBA.
SQL> ALTER TABLE test_tab_1 NO FLASHBACK ARCHIVE;
ALTER TABLE test_tab_1 NO FLASHBACK ARCHIVE
*
ERROR at line 1:
ORA-55620: No privilege to use Flashback Archive
SQL>
In addition, there are certain DDL restrictions associated with having flashback archiving enabled. The following operations result in an ORA-55610 error:
ALTER TABLE statements that drop, rename or modify columns.
ALTER TABLE statements that perform partition or subpartition operations.
ALTER TABLE statements that convert a LONG column to a LOB column.
ALTER TABLE statements that include an UPGRADE TABLE clause, with or without an INCLUDING DATA clause.
DROP TABLE statements.
RENAME TABLE statements.
TRUNCATE TABLE statements.
Information about flashback data archives is displayed using the %_FLASHBACK_ARCHIVE view.
CONN sys/password AS SYSDBA
COLUMN flashback_archive_name FORMAT A20
SELECT flashback_archive_name, retention_in_days, status
FROM dba_flashback_archive;
FLASHBACK_ARCHIVE_NA RETENTION_IN_DAYS STATUS
-------------------- ----------------- -------
FDA_2YEAR 730
FDA_1YEAR 365 DEFAULT
2 rows selected.
SQL>
The %_FLASHBACK_ARCHIVE_TS view displays the tablespaces and quotas associated with each flashback archive.
COLUMN flashback_archive_name FORMAT A20
COLUMN quota_in_mb FORMAT A10
SELECT flashback_archive_name, tablespace_name, quota_in_mb
FROM dba_flashback_archive_ts;
FLASHBACK_ARCHIVE_NA TABLESPACE_NAME QUOTA_IN_M
-------------------- ------------------------------ ----------
FDA_2YEAR FDA_TS
FDA_1YEAR FDA_TS 10240
2 rows selected.
SQL>
The %_FLASHBACK_ARCHIVE_TABLES view displays tables associated with each flashback archive, along with the name of the table holding the historical information.
COLUMN table_name FORMAT A15
COLUMN owner_name FORMAT A15
COLUMN flashback_archive_name FORMAT A20
COLUMN archive_table_name FORMAT A20
SELECT table_name, owner_name, flashback_archive_name, archive_table_name
FROM dba_flashback_archive_tables;
TABLE_NAME OWNER_NAME FLASHBACK_ARCHIVE_NA ARCHIVE_TABLE_NAME
--------------- --------------- -------------------- --------------------
TEST_TAB_1 FDA_TEST_USER FDA_1YEAR SYS_FBA_HIST_72023
1 row selected.
SQL>
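The article stops short of querying the archived history itself. As a hedged sketch (assuming some DML against TEST_TAB_1 has since been committed and archived by FBDA), an ordinary flashback query works unchanged and is transparently served from the archive once the undo data has aged out:
SELECT *
FROM   fda_test_user.test_tab_1
  AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);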
Wednesday, November 4, 2009
Oracle 10g Copying a Windows Database
Copying a database.
The exact request was to create a database identical to another one, but with a different name. In some cases the goal was to replicate the production environment for use as a test system. In others it was simply to verify that the backup could restore the database onto another machine, leaving that database operational as an auxiliary environment.
When the database can keep the same name and the same SID, it is as easy as copying the datafiles, redo logs, controlfiles and ORA files to the same original directories on a machine with an Oracle server installed*.
* Of course, same O.S. platform and same Oracle version and release.
However, in cases where the name must be different, you have to be very precise about the steps to follow. I include two preliminary steps as optional, in case they are useful.
This example assumes we are able to take a cold backup of the database.
Preliminary steps (recommended):
- Cold backup, hot backup, export, transportable tablespaces, a backup of the backup, a backup of everything possible.
- Reducing the size of the datafiles. (Tom Kyte has a superb script for this.) And then back up again.
Steps to follow:
1.- Cold backup of the original database
2.- Generate the pfile for the new database
3.- Edit the pfile to replace: CONTROL_FILES, DB_NAME and the _DUMP_DEST folders, ...
4.- Set the new ORACLE_SID
5.- Create the new service.
6.- Create the password file (only if the pfile has REMOTE_LOGIN_PASSWORDFILE=exclusive)
7.- Connect to sqlplus as SYSDBA
8.- Start the instance and create the SPFILE parameter file
9.- Back up the original database's controlfile to trace
10.- Recreate the controlfile with the SET DATABASE clause.
11.- Open the database with OPEN RESETLOGS.
1.- Cold backup of the original database.
Restore the backup to the new location.
-- Run the output of the following statement as a script.
-- NOTE: beware of duplicate file names.
-------------------------------------------------------------
select 'shutdown immediate;' from dual
union all
select 'host copy '||name||' &&directorio_destino' from v$controlfile
union all
select 'host copy '||member||' &directorio_destino' from v$logfile
union all
select 'host copy '||name||' &directorio_destino' from v$datafile
union all
select 'startup' from dual;
-- copy the files to the destination directories
---------------------------------------------------
2.- Generate the pfile for the new database
SQL> create pfile='?\admin\sid\pfile\inittest.ora' from spfile;
File created.
3.- Edit the pfile to replace: CONTROL_FILES, DB_NAME and the _DUMP_DEST folders, ...
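As a sketch of what the edited entries in inittest.ora might look like (the paths below are hypothetical; adjust them to your environment):
*.db_name='TEST'
*.control_files='C:\DATA\TEST\CONTROL\CONTROL01.CTL','C:\DATA\TEST\CONTROL\CONTROL02.CTL'
*.background_dump_dest='C:\oracle\admin\test\bdump'
*.user_dump_dest='C:\oracle\admin\test\udump'
*.core_dump_dest='C:\oracle\admin\test\cdump'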
4.- Set the new ORACLE_SID
c:\>set ORACLE_SID=test
5.- Create the new service.
c:\>oradim -NEW -SRVC OracleServicetest -startmode auto
6.- Create the password file
c:\>orapwd file=C:\orant\ora92\database\PWDtest.ora password=xxxxxxxxx
7.- Connect to sqlplus as SYSDBA
C:\>sqlplus
SQL*Plus: Release 10.2.0.4 - Production on Fri Jul 29 16:41:22 2005
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Enter user-name: sys/xxxxxxxxxx as sysdba
Connected to an idle instance.
8.- Start the instance and create the SPFILE parameter file
SQL> STARTUP NOMOUNT PFILE='C:\DIRECTORIO_DESTINO\inittest.ora'
ORACLE instance started.
Total System Global Area 1074866044 bytes
Fixed Size 456572 bytes
Variable Size 905969664 bytes
Database Buffers 167772160 bytes
Redo Buffers 667648 bytes
SQL> create spfile from pfile='C:\DIRECTORIO_DESTINO\inittest.ora';
File created.
9.- Back up the PROD controlfile to trace.
SQL> alter database backup controlfile to trace;
Database altered.
10.- Starting from the controlfile trace.
Recreate the controlfile with SET DATABASE and the new name.
CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS NOARCHIVELOG REUSE
MAXLOGFILES 50
MAXLOGMEMBERS 5
MAXDATAFILES 133
MAXINSTANCES 1
MAXLOGHISTORY 453
LOGFILE
GROUP 2 'C:\DATA\TEST\REDO\REDO02.LOG' SIZE 100M,
GROUP 3 'C:\DATA\TEST\REDO\REDO03.LOG' SIZE 100M,
GROUP 4 'C:\DATA\TEST\REDO\REDO04.LOG' SIZE 150M,
GROUP 5 'C:\DATA\TEST\REDO\REDO05.LOG' SIZE 150M,
GROUP 6 'C:\DATA\TEST\REDO\REDO06.LOG' SIZE 200M
-- STANDBY LOGFILE
DATAFILE
(... files ...)
'C:\DATA\TEST\SYSTEM\SYSTEM01.DBF',
'C:\DATA\TEST\DATA\DATA01.DBF'
CHARACTER SET WE8MSWIN1252
;
Control file created.
11.- Open the database with OPEN RESETLOGS.
SQL> alter database open resetlogs;
Database altered.
12.- Verification.
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
test
SQL> select name from v$database;
NAME
---------
TEST
SQL> select status from v$thread;
STATUS
------
OPEN
13.- Shutdown and startup of the database.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1074866044 bytes
Fixed Size 456572 bytes
Variable Size 905969664 bytes
Database Buffers 167772160 bytes
Redo Buffers 667648 bytes
Database mounted.
Database opened.
Oracle Using LogMiner
EXAMPLE 1
alter session set nls_date_format = 'dd-mm-yyyy hh24:mi:ss' ;
alter session set nls_language = american ;
set lines 4000
set trimspool on
set feed off
col session_info format a40 word_wrap
col sql_redo format a90 word_wrap
col sql_undo format a90 word_wrap
begin
sys.dbms_logmnr.add_logfile ('/SCEL/data14/archives/hagc/arch_1_189771.arc',sys.dbms_logmnr.new) ;
sys.dbms_logmnr.add_logfile ('/SCEL/data14/archives/hagc/arch_1_189772.arc',sys.dbms_logmnr.addfile) ;
sys.dbms_logmnr.add_logfile ('/SCEL/data14/archives/hagc/arch_1_189773.arc',sys.dbms_logmnr.addfile) ;
end ;
/
-- To track DML statements
--
exec sys.dbms_logmnr.start_logmnr (options => sys.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + sys.DBMS_LOGMNR.COMMITTED_DATA_ONLY -
+ SYS.DBMS_LOGMNR.PRINT_PRETTY_SQL );
--
-- To track DDL statements
--
--exec sys.dbms_logmnr.start_logmnr (options => sys.DBMS_LOGMNR.DDL_DICT_TRACKING)
spool /tmp/invest.lst
set feed on
set pages 200
set lines 4000
set trimspool on
set term off
select
username,
timestamp,
session_info,
sql_redo ,
sql_undo ,
operation
from
v$logmnr_contents t
WHERE
upper (seg_owner) = 'SISCEL'
and upper (seg_name) = 'GA_SERVSUPLABO'
--or lower (seg_name) = 'bpd_planes'
-- upper (username) in ('DYRUIZGO')
-- UPPER (session_info) like '%ELIPROGS%'
--operation = 'DDL'
;
spool off
EXEC sys.DBMS_LOGMNR.END_LOGMNR
EXAMPLE 2
SQL> connect / as sysdba
Connected.
SQL> show parameters utl
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
utl_file_dir string c:\oraclefiles
SQL> exec DBMS_LOGMNR_D.BUILD( DICTIONARY_FILENAME =>'dictionary.ora', DICTIONARY_LOCATION => 'c:\oraclefiles');
PL/SQL procedure successfully completed.
SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------------------
C:\ORACLE\ORADATA\ORCL\REDO04A.LOG
C:\ORACLE\ORADATA\ORCL\REDO04B.LOG
C:\ORACLE\ORADATA\ORCL\REDO05A.LOG
C:\ORACLE\ORADATA\ORCL\REDO05B.LOG
C:\ORACLE\ORADATA\ORCL\REDO06A.LOG
C:\ORACLE\ORADATA\ORCL\REDO06B.LOG
6 rows selected.
SQL> exec DBMS_LOGMNR.add_logfile('C:\ORACLE\ORADATA\ORCL\REDO04A.LOG');
PL/SQL procedure successfully completed.
SQL> exec DBMS_LOGMNR.add_logfile('C:\ORACLE\ORADATA\ORCL\REDO05A.LOG');
PL/SQL procedure successfully completed.
SQL> exec DBMS_LOGMNR.add_logfile('C:\ORACLE\ORADATA\ORCL\REDO06A.LOG');
PL/SQL procedure successfully completed.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME =>'c:\oraclefiles\dictionary.ora');
PL/SQL procedure successfully completed.
SQL> set pages 100
SQL> set lines 120
SQL> column sql_redo format a50
SQL> column sql_undo format a50
SQL> select sql_redo, sql_undo from v$logmnr_contents where rownum<10;
SQL_REDO SQL_UNDO
-------------------------------------------------- --------------------------------------------------
ORACLE: Finding the User Who Executed a SQL Statement from the SQL_ID
Sometimes we want to know who it was that executed a given statement in the database, and we don't know how to find out.
This is a simple example of how to do it.
Note: every time an instance is restarted, the information stored in the shared pool is lost; therefore, if you have a SQL_ID from days ago and the database has been restarted since, this procedure will be of no use to you.
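As a side note not in the original: if the statement was captured by an AWR snapshot, its text may still be retrievable from the workload repository even after a restart, for example:
SELECT sql_id, sql_text
FROM   dba_hist_sqltext
WHERE  sql_id = 'ay9t40xq6c00t';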
Let's take as an example the SQL_ID of some statement executed in the database. We obtain it by querying the V_$SQLTEXT view:
SQL> select sql_id, sql_text from sys.V_$SQLTEXT where rownum < 2;
SQL_ID SQL_TEXT
------------- -------------------------------------------------------
ay9t40xq6c00t ODEGA" = :2 AND "COD_UBICACION" = :3 AND "NO_CIA" = :1
Now, querying the V_$SQLAREA view, we can obtain the date on which it was first executed:
SQL> select USERS_OPENING, FIRST_LOAD_TIME, USERS_EXECUTING, SQL_ID, ELAPSED_TIME,
2 LAST_LOAD_TIME, PROGRAM_ID from sys.V_$SQLAREA
3 where sql_id='ay9t40xq6c00t';
USERS_OPENING FIRST_LOAD_TIME USERS_EXECUTING SQL_ID ELAPSED_TIME LAST_LOAD PROGRAM_ID
------------- ------------------- --------------- ------------- ------------ --------- ----------
0 2009-10-09/16:14:22 0 ay9t40xq6c00t 0 09-OCT-09 0
Modifying the previous query, we can also obtain from the same view the ID of the user who executed the statement:
SQL> select FIRST_LOAD_TIME,SQL_ID,PARSING_USER_ID from sys.V_$SQLAREA
2 where sql_id='ay9t40xq6c00t';
FIRST_LOAD_TIME SQL_ID PARSING_USER_ID
------------------- ------------- ---------------
2009-10-09/16:14:22 ay9t40xq6c00t 175
Finally, we query the DBA_USERS view using the user ID and obtain the username of the user who executed the statement.
SQL> select username, user_id, created from dba_users where user_id='175';
USERNAME USER_ID CREATED
------------------------------ ---------- ---------
LABORATORIO 175 06-APR-09
ORACLE Recompiling Invalid Schema Objects
Recompiling Invalid Schema Objects
Operations such as upgrades, patches and DDL changes can invalidate schema objects. Provided these changes don't cause compilation failures, the objects will be revalidated by on-demand automatic recompilation, but this can take an unacceptable time to complete, especially where complex dependencies are present. For this reason it makes sense to recompile invalid objects in advance of user calls. It also allows you to identify whether any changes have broken your code base. This article presents several methods for recompiling invalid schema objects.
Identifying Invalid Objects
The Manual Approach
Custom Script
DBMS_UTILITY.compile_schema
UTL_RECOMP
utlrp.sql and utlprp.sql
Identifying Invalid Objects
The DBA_OBJECTS view can be used to identify invalid objects using the following query:
COLUMN object_name FORMAT A30
SELECT owner,
object_type,
object_name,
status
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner, object_type, object_name;
With this information you can decide which of the following recompilation methods is suitable for you.
The Manual Approach
For small numbers of objects you may decide that a manual recompilation is sufficient. The following example shows the compile syntax for several object types:
ALTER PACKAGE my_package COMPILE;
ALTER PACKAGE my_package COMPILE BODY;
ALTER PROCEDURE my_procedure COMPILE;
ALTER FUNCTION my_function COMPILE;
ALTER TRIGGER my_trigger COMPILE;
ALTER VIEW my_view COMPILE;
Notice that the package body is compiled in the same way as the package specification, with the addition of the word "BODY" at the end of the command.
An alternative approach is to use the DBMS_DDL package to perform the recompilations:
EXEC DBMS_DDL.alter_compile('PACKAGE', 'MY_SCHEMA', 'MY_PACKAGE');
EXEC DBMS_DDL.alter_compile('PACKAGE BODY', 'MY_SCHEMA', 'MY_PACKAGE');
EXEC DBMS_DDL.alter_compile('PROCEDURE', 'MY_SCHEMA', 'MY_PROCEDURE');
EXEC DBMS_DDL.alter_compile('FUNCTION', 'MY_SCHEMA', 'MY_FUNCTION');
EXEC DBMS_DDL.alter_compile('TRIGGER', 'MY_SCHEMA', 'MY_TRIGGER');
This method is limited to PL/SQL objects, so it is not applicable for views.
Homegrown Script
set heading off
set feedback off
spool /export/home/oracle/kdbin/recompile.lst
SELECT 'ALTER ' || OBJECT_TYPE || ' ' || OWNER || '.' || OBJECT_NAME || ' COMPILE; ' FROM DBA_OBJECTS WHERE OBJECT_TYPE NOT IN ('PACKAGE BODY') AND OWNER = 'SPE' AND STATUS = 'INVALID';
spool off
@/export/home/oracle/kdbin/recompile.lst
spool /export/home/oracle/kdbin/recompile.lst
SELECT 'ALTER PACKAGE ' || OWNER || '.' || OBJECT_NAME || ' COMPILE BODY; ' FROM DBA_OBJECTS WHERE OBJECT_TYPE IN ('PACKAGE BODY') AND OWNER = 'SPE' AND STATUS = 'INVALID';
spool off
spool /export/home/oracle/kdbin/recompile.lst
SELECT 'ALTER ' || OBJECT_TYPE || ' ' || OWNER || '.' || OBJECT_NAME || ' COMPILE; ' FROM DBA_OBJECTS WHERE OBJECT_TYPE NOT IN ('PACKAGE BODY') AND STATUS = 'INVALID';
spool off
@/export/home/oracle/kdbin/recompile.lst
spool /export/home/oracle/kdbin/recompile.lst
SELECT 'ALTER PACKAGE ' || OWNER || '.' || OBJECT_NAME || ' COMPILE BODY; ' FROM DBA_OBJECTS WHERE OBJECT_TYPE IN ('PACKAGE BODY') AND STATUS = 'INVALID';
spool off
@/export/home/oracle/kdbin/recompile.lst
set heading on
set feedback on
exit
Custom Script
In some situations you may have to compile many invalid objects in one go. One approach is to write a custom script to identify and compile the invalid objects. The following example identifies and recompiles invalid packages and package bodies.
SET SERVEROUTPUT ON SIZE 1000000
BEGIN
FOR cur_rec IN (SELECT owner,
object_name,
object_type,
DECODE(object_type, 'PACKAGE', 1,
'PACKAGE BODY', 2, 2) AS recompile_order
FROM dba_objects
WHERE object_type IN ('PACKAGE', 'PACKAGE BODY')
AND status != 'VALID'
ORDER BY 4)
LOOP
BEGIN
IF cur_rec.object_type = 'PACKAGE' THEN
EXECUTE IMMEDIATE 'ALTER ' || cur_rec.object_type ||
' "' || cur_rec.owner || '"."' || cur_rec.object_name || '" COMPILE';
ELSE
EXECUTE IMMEDIATE 'ALTER PACKAGE "' || cur_rec.owner ||
'"."' || cur_rec.object_name || '" COMPILE BODY';
END IF;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.put_line(cur_rec.object_type || ' : ' || cur_rec.owner ||
' : ' || cur_rec.object_name);
END;
END LOOP;
END;
/
This approach is fine if you have a specific task in mind, but be aware that you may end up compiling some objects multiple times, depending on the order in which they are compiled. It is probably a better idea to use one of the methods provided by Oracle, since they take the code dependencies into account.
DBMS_UTILITY.compile_schema
The COMPILE_SCHEMA procedure in the DBMS_UTILITY package compiles all procedures, functions, packages, and triggers in the specified schema. The example below shows how it is called from SQL*Plus:
EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');
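By default COMPILE_SCHEMA recompiles every object in the schema, valid or not. If you only want the invalid objects touched, the compile_all parameter can be set to FALSE, as in this short sketch:
-- recompile only the invalid objects in the SCOTT schema
EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT', compile_all => FALSE);
UTL_RECOMP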
The UTL_RECOMP package contains two procedures used to recompile invalid objects. As the names suggest, the RECOMP_SERIAL procedure recompiles all the invalid objects one at a time, while the RECOMP_PARALLEL procedure performs the same task in parallel using the specified number of threads. Their definitions are listed below:
PROCEDURE RECOMP_SERIAL(
schema IN VARCHAR2 DEFAULT NULL,
flags IN PLS_INTEGER DEFAULT 0);
PROCEDURE RECOMP_PARALLEL(
threads IN PLS_INTEGER DEFAULT NULL,
schema IN VARCHAR2 DEFAULT NULL,
flags IN PLS_INTEGER DEFAULT 0);
The usage notes for the parameters are listed below:
schema - The schema whose invalid objects are to be recompiled. If NULL all invalid objects in the database are recompiled.
threads - The number of threads used in a parallel operation. If NULL the value of the "job_queue_processes" parameter is used. Matching the number of available CPUs is generally a good starting point for this value.
flags - Used for internal diagnostics and testing only.
The following examples show how these procedures are used:
-- Schema level.
EXEC UTL_RECOMP.recomp_serial('SCOTT');
EXEC UTL_RECOMP.recomp_parallel(4, 'SCOTT');
-- Database level.
EXEC UTL_RECOMP.recomp_serial();
EXEC UTL_RECOMP.recomp_parallel(4);
-- Using job_queue_processes value.
EXEC UTL_RECOMP.recomp_parallel();
EXEC UTL_RECOMP.recomp_parallel(NULL, 'SCOTT');
There are a number of restrictions associated with the use of this package, including:
Parallel execution is performed using the job queue. All existing jobs are marked as disabled until the operation is complete.
The package must be run from SQL*Plus as the SYS user, or another user with the SYSDBA privilege.
The package expects the STANDARD, DBMS_STANDARD, DBMS_JOB and DBMS_RANDOM packages to be present and valid.
Running DDL operations at the same time as this package may result in deadlocks.
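Whichever method you choose, a quick count against DBA_OBJECTS afterwards confirms whether anything was left invalid (a simple verification query, not part of the original article):
SELECT COUNT(*) AS invalid_count
FROM dba_objects
WHERE status = 'INVALID';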
utlrp.sql and utlprp.sql
The utlrp.sql and utlprp.sql scripts are provided by Oracle to recompile all invalid objects in the database. They are typically run after major database changes such as upgrades or patches. They are located in the $ORACLE_HOME/rdbms/admin directory and provide a wrapper around the UTL_RECOMP package. The utlrp.sql script simply calls the utlprp.sql script with a command line parameter of "0". The utlprp.sql script accepts a single integer parameter that indicates the level of parallelism, as follows:
0 - The level of parallelism is derived based on the CPU_COUNT parameter.
1 - The recompilation is run serially, one object at a time.
N - The recompilation is run in parallel with "N" threads.
Both scripts must be run as the SYS user, or another user with SYSDBA, to work correctly.
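A typical invocation looks like the following, where the "?" is SQL*Plus shorthand for $ORACLE_HOME:
sqlplus / as sysdba
SQL> @?/rdbms/admin/utlrp.sql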
For further information see:
DBMS_UTILITY.compile_schema
UTL_RECOMP
ORACLE Renaming or Moving Oracle Files
Controlfiles
Logfiles
Datafiles
Recreating the Controlfile
Controlfiles
The current location of the controlfiles can be queried from the V$CONTROLFILE view, as shown below.
SQL> select name from v$controlfile;
NAME
-------------------------------------------------------------
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL01.CTL
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL02.CTL
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL03.CTL
3 rows selected.
SQL>
In order to rename or move these files we must alter the value of the control_files instance parameter.
SQL> show parameter control_files
NAME TYPE VALUE
------------------------------------ -------------------------------- ------------------------------
control_files string C:\ORACLE\ORADATA\DB10G\CONTRO
L01.CTL, C:\ORACLE\ORADATA\DB1
0G\CONTROL02.CTL, C:\ORACLE\OR
ADATA\DB10G\CONTROL03.CTL
SQL>
To move or rename a controlfile do the following:
Alter the control_files parameter using the ALTER SYSTEM command.
Shutdown the database.
Rename the physical file on the OS.
Start the database.
The following SQL*Plus output shows how this is done for an instance using an spfile. For instances using a pfile replace the spfile manipulation steps with an amendment of the parameter in the init.ora file.
SQL> ALTER SYSTEM SET control_files='C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\RENAME_CONTROL01.CTL', -
> 'C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL02.CTL', -
> 'C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL03.CTL' SCOPE=SPFILE;
System altered.
SQL> SHUTDOWN IMMEDIATE
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> HOST MOVE C:\ORACLE\ORADATA\DB10G\CONTROL01.CTL C:\ORACLE\ORADATA\DB10G\RENAME_CONTROL01.CTL
SQL> STARTUP
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 787968 bytes
Variable Size 61864448 bytes
Database Buffers 104857600 bytes
Redo Buffers 262144 bytes
Database mounted.
SQL>
Repeating the initial query shows that the controlfile has been renamed in the data dictionary.
SQL> select name from v$controlfile;
NAME
-------------------------------------------------------------
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\RENAME_CONTROL01.CTL
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL02.CTL
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\CONTROL03.CTL
3 rows selected.
SQL>
Logfiles
The current location of the logfiles can be queried from the V$LOGFILE view, as shown below.
SQL> SELECT member FROM v$logfile;
MEMBER
-------------------------------------------------
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\REDO03.LOG
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\REDO02.LOG
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\REDO01.LOG
3 rows selected.
SQL>
To move or rename a logfile do the following:
Shutdown the database.
Rename the physical file on the OS.
Start the database in mount mode.
Issue the ALTER DATABASE RENAME FILE command to rename the file within the Oracle dictionary.
Open the database.
The following SQL*Plus output shows how this is done.
SQL> SHUTDOWN IMMEDIATE
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> HOST MOVE C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\REDO01.LOG C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\RENAME_REDO01.LOG
SQL> STARTUP MOUNT
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 787968 bytes
Variable Size 61864448 bytes
Database Buffers 104857600 bytes
Redo Buffers 262144 bytes
Database mounted.
SQL> ALTER DATABASE RENAME FILE 'C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\REDO01.LOG' -
> TO 'C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\RENAME_REDO01.LOG';
Database altered.
SQL> ALTER DATABASE OPEN;
Database altered.
SQL>
Repeating the initial query shows that the logfile has been renamed in the data dictionary.
SQL> SELECT member FROM v$logfile;
MEMBER
-------------------------------------------------
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\REDO03.LOG
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\REDO02.LOG
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\RENAME_REDO01.LOG
3 rows selected.
SQL>
Datafiles
The process for renaming a datafile is the same as renaming a logfile, but for the sake of clarity it is repeated below. The current location of the datafiles can be queried from the V$DATAFILE view, as shown below.
SQL> SELECT name FROM v$datafile;
NAME
---------------------------------------------------------
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\SYSTEM01.DBF
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\UNDOTBS01.DBF
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\SYSAUX01.DBF
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\USERS01.DBF
4 rows selected.
SQL>
To move or rename a datafile do the following:
Shutdown the database.
Rename the physical file on the OS.
Start the database in mount mode.
Issue the ALTER DATABASE RENAME FILE command to rename the file within the Oracle dictionary.
Open the database.
The following SQL*Plus output shows how this is done.
SQL> SHUTDOWN IMMEDIATE
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> HOST MOVE C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\USERS01.DBF C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\RENAME_USERS01.DBF
SQL> STARTUP MOUNT
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 787968 bytes
Variable Size 61864448 bytes
Database Buffers 104857600 bytes
Redo Buffers 262144 bytes
Database mounted.
SQL> ALTER DATABASE RENAME FILE 'C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\USERS01.DBF' -
> TO 'C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\RENAME_USERS01.DBF';
Database altered.
SQL> ALTER DATABASE OPEN;
Database altered.
SQL>
Repeating the initial query shows that the datafile has been renamed in the data dictionary.
SQL> SELECT name FROM v$datafile;
NAME
---------------------------------------------------------
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\SYSTEM01.DBF
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\UNDOTBS01.DBF
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\SYSAUX01.DBF
C:\ORACLE\PRODUCT\10.1.0\ORADATA\DB10G\RENAME_USERS01.DBF
4 rows selected.
SQL>
Recreating the Controlfile
For large-scale rearrangements it may be easier to manipulate the controlfile contents manually by backing up the controlfile to trace.
SQL> CONN sys/password AS SYSDBA
Connected.
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
Database altered.
SQL>
The resulting trace file in the user_dump_dest directory contains commands and instructions for recreating the controlfile. The paths in the CREATE CONTROLFILE command can be edited to rename all datafiles and logfiles in one step.
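As a minimal sketch only (the database name, paths, and the logfile/datafile lists below are illustrative, not taken from an actual trace file), the edited command might look like this:
CREATE CONTROLFILE REUSE DATABASE "DB10G" NORESETLOGS NOARCHIVELOG
LOGFILE
GROUP 1 'C:\NEW_PATH\REDO01.LOG' SIZE 10M,
GROUP 2 'C:\NEW_PATH\REDO02.LOG' SIZE 10M,
GROUP 3 'C:\NEW_PATH\REDO03.LOG' SIZE 10M
DATAFILE
'C:\NEW_PATH\SYSTEM01.DBF',
'C:\NEW_PATH\UNDOTBS01.DBF',
'C:\NEW_PATH\SYSAUX01.DBF',
'C:\NEW_PATH\USERS01.DBF'
CHARACTER SET WE8MSWIN1252;
After the controlfile is recreated, the database can be opened with ALTER DATABASE OPEN (or OPEN RESETLOGS if RESETLOGS was used).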
Thursday, October 29, 2009
RMAN Drop Database
To drop a database with RMAN, first mount it in exclusive, restricted mode, then issue the DROP DATABASE command from an RMAN session:
SQL> startup mount exclusive;
ORACLE instance started.
Total System Global Area 1610612736 bytes
Fixed Size 2139256 bytes
Variable Size 396188552 bytes
Database Buffers 1207959552 bytes
Redo Buffers 4325376 bytes
Database mounted.
SQL> alter system enable restricted session;
System altered.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[afrodita_POWER ]# rman target /
Recovery Manager: Release 10.2.0.4.0 - Production on Thu Oct 29 11:21:59 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: COPIA (DBID=1588920959, not open)
RMAN> drop database;
database name is "COPIA" and DBID is 1588920959
Do you really want to drop the database (enter YES or NO)? YES
database dropped
RMAN>
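DROP DATABASE deletes the datafiles, online redo logs, controlfiles and server parameter file, but leaves any RMAN backups behind. To have RMAN delete its backups and copies as well, and to skip the confirmation prompt, the command can be extended as follows:
RMAN> drop database including backups noprompt;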
Monday, October 19, 2009
Oracle 11g New Features
■ utlu111i.sql This is the Pre-Upgrade Information Tool script. It warns you if your database doesn't satisfy the requirements for upgrade to Oracle Database 11g.
■ catupgrd.sql This is the script that performs the actual upgrading of the
database to the Oracle Database 11g release and it now supports parallel
upgrades of the database.
■ utlu111s.sql This is the Upgrade Status Utility script which lets you
check the status of the upgrade—that is, whether the upgraded database’s
components have a valid status.
■ catuppst.sql This is the script you run to perform post-upgrade actions. This
is new in Oracle Database 11g Release 1.
■ utlrp.sql This script recompiles and revalidates any remaining application
objects.
Copy the utlu111i.sql file from the $ORACLE_HOME/rdbms/admin directory to a staging directory such as /u01/app/oracle/upgrade. Log in as the owner of the Oracle home directory of the older release and run the utlu111i.sql script (from the /u01/app/oracle/upgrade directory). Spool the results so you can review the output. Here's an example showing the output of an execution of the utlu111i.sql script on my system:
SQL> spool upgrade.log
SQL> @utlu111i.sql
Oracle Database 11.1 Pre-Upgrade Information Tool
01-30-2008 05:33:22
***********************************************************
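Once the pre-upgrade checks come back clean, the upgrade itself is driven from SQL*Plus in the new Oracle home. The following is only a rough sketch of the sequence using the scripts listed above (the spool file name is arbitrary):
STARTUP UPGRADE
SPOOL catupgrd.log
@?/rdbms/admin/catupgrd.sql
SPOOL OFF
-- restart the instance, then check the status and run the post-upgrade steps
SHUTDOWN IMMEDIATE
STARTUP
@?/rdbms/admin/utlu111s.sql
@?/rdbms/admin/catuppst.sql
@?/rdbms/admin/utlrp.sql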
How Is Oracle Managing My Memory?
You can monitor how Oracle is managing your memory by reviewing the
V$MEMORY_RESIZE_OPS view. This view contains a list of the last 800 SGA resize
requests handled by Oracle. Here is an example:
select parameter, initial_size, target_size, start_time
from v$memory_resize_ops
where initial_size > 0 and final_size > 0
order by parameter, start_time;
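Related to this, the V$MEMORY_TARGET_ADVICE view estimates how total DB time would change at other memory_target sizes. A quick sketch (the view is populated when Automatic Memory Management is enabled):
select memory_size, memory_size_factor, estd_db_time
from v$memory_target_advice
order by memory_size_factor;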
DDL WAIT Option Now Default
In Oracle Database 10g, by default DDL commands would not wait if the object was locked. Instead an error would be generated and the attempted change would fail with an ORA-00054, indicating the resource was locked. Now, in Oracle Database 11g, in many cases Oracle will not only not return an error, but will execute the change without requiring a wait. For example, you can now do the following in Oracle Database 11g:
Session 1:
SQL>insert into test values (1);
1 row created.
Session 2:
SQL>alter table test add (id2 number);
Table altered.
In earlier releases, retrying the statement was not just an inconvenient approach but the only feasible one. In Oracle 11g, you do not need to do much. In the session in which you want to issue a DDL statement, issue this SQL first:
alter session set ddl_lock_timeout = 10;
Duplicating a Database
RMAN> duplicate database to dupdb
2> from active database
3> db_file_name_convert '/u01/app/oracle','/u05/app/oracle'
4> spfile
5> parameter_value_convert '/u01/app/oracle','/u05/app/oracle'
6> set log_file_name_convert '/u01/app/oracle',
'/u05/app/oracle'
7> set sga_max_size '3000m'
8> set sga_target '2000m';
Restoring an Archival Backup
You can issue the duplicate database command to restore an archival backup.
Here are the steps to restore and recover the database using an archival backup:
1. Create an auxiliary instance after creating the usual password file and the
parameter files. Connect to the auxiliary instance and start it.
2. Connect to the recovery catalog, the target, and the auxiliary instances, as
shown here:
RMAN> connect target sys/@prod1
RMAN> connect catalog rman/rman@catdb
RMAN> connect auxiliary /
3. Issue the list restore point all command to find out the name of
the restore points that are available.
RMAN> list restore point all;
SCN RSP Time Type Time Name
------- ------------- ---------- ------------
3074299 30-DEC-07 FIRSTQUART07
RMAN>
4. Issue the duplicate database command, making sure you specify the
correct restore point name to restore the database to the point in time the
restore point stands for.
RMAN> duplicate database
2> to newdb
3> until restore point firstquart07
4> db_file_name_convert='/u01/prod1/dbfiles/',
5>'/u01/newdb/dbfiles'
6> pfile = '/u01/newdb/admin/init.ora';
Using the Recover…Block Command
RMAN> recover datafile 2 block 24
datafile 4 block 10;
You can specify the exact backup from which you want RMAN to recover the
corrupt data blocks by specifying the backup tag with the recover . . . block
command, as shown here:
RMAN> recover datafile 2 block 24
datafile 4 block 10
from tag = <your_backup_tag>;
The following example demonstrates how to create a flashback data archive in
the FLASH_TBS1 tablespace:
SQL> create flashback data archive flash1
tablespace flash_tbs1
retention 4 year;
Flashback Data Archive created.
SQL>
The clause retention 4 year specifies that the database must retain the
data in the new flashback data archive flash1 for four years before purging it. The
absence of the quota clause means the flash1 archive can occupy the entire
tablespace FLASH_TBS1. If you want to limit the archive to only a part of the
tablespace, specify the quota clause, as shown here:
SQL> create flashback data archive flash2
tablespace flash_tbs1
quota 2000m
retention 4 year;
Flashback Data Archive created.
SQL> drop flashback archive flash1;
SQL> alter flashback archive flash1
set default;               # makes flash1 the default archive
SQL> alter flashback archive flash1
add tablespace flash_tbs1; # adds space to the flashback archive
SQL> alter flashback archive flash1
modify tablespace
flash_tbs1 quota 10G;      # changes the quota for the archive
SQL> alter flashback archive flash1
modify retention
2 year;                    # changes the archive retention time
SQL> alter flashback archive flash1
add tablespace flash_tbs2; # adds a tablespace to an archive
SQL> alter flashback archive flash1
remove tablespace
flash_tbs2;                # removes a tablespace from an archive
SQL> alter flashback archive flash1
purge all;                 # purges all archived data
SQL> alter flashback archive flash1
purge before timestamp (systimestamp - interval '1' day); # purges data older than one day (supply your own point in time)
SQL> alter table employees
flashback archive;
SQL> alter table employees
flashback archive flash2;
SQL> alter table employees
no flashback archive;
Monitoring Flashback Data Archives
SQL> select flashback_archive_name,retention_in_days
from dba_flashback_archive;
FLASHBACK_ARCHIVE_NAME RETENTION_IN_DAYS
---------------------- -----------------
FLASH1 365
Accessing Older Data
SQL> select transaction_number, doctor_name, count
from patient_info as of
timestamp to_timestamp ('2007-01-01 00:00:00',
'YYYY-MM-DD HH24:MI:SS');
Case-Sensitive Passwords
What can you do, as a DBA, if you want to avoid one of those upgrade-broke-my-system moments? You can revert to case-insensitive logons:
alter system set sec_case_sensitive_logon = false;
Default Password Use
Oracle Database 11g provides a new view, DBA_USERS_WITH_DEFPWD, that you can query to determine whether a given user account is using one of the well-known default passwords.
SQL> SELECT * FROM DBA_USERS_WITH_DEFPWD;
USERNAME
------------------------------
DIP
MDSYS
WK_TEST
CTXSYS
OLAPSYS
OUTLN
EXFSYS
SCOTT
Preparing the Database for Tablespace Encryption
Configuring the Compatible Parameter Correctly
Here is an example of changing the compatible parameter (as you might do after
a database upgrade from Oracle Database 10g to Oracle Database 11g):
Alter system set compatible='11.1.0.0.0' scope=spfile;
Configure the Sqlnet.ora File (Optional)
Oracle will, by default, create the wallet in the directory $ORACLE_BASE/admin/$ORACLE_SID/wallet (you will need to create this directory if it does not already exist). If you wish to use a location other than the default, configure the encryption_wallet_location parameter in the sqlnet.ora file of your database. Here is an example of such an entry:
Encryption_wallet_location=
(source=(method=file)
(method_data=(directory=/mywallet)))
Open the Wallet and Create the Master Encryption Key
Alter system set encryption key authenticated by "robert";
The wallet is closed each time the database is shut down. You will need to
reopen the wallet each time you cycle the database, and the database will not open
until you open the wallet (which implies you have to nomount or mount the
database, open the wallet, and then open the database). The command is slightly
different to open an existing wallet:
Alter system set encryption wallet open authenticated by "robert";
While it might be an unusual operation, you can also close the wallet with the
alter system command as seen here:
Alter system set encryption wallet close;
Creating Encrypted Tablespaces
Create tablespace my_secure_tbs
datafile '/oracle01/oradata/orcl/my_secure_tbs_01.dbf' size 100m
encryption using '3DES168' default storage (encrypt);
You can also encrypt a tablespace using the default encryption algorithm by just
using the encryption keyword as seen here:
Create tablespace my_second_secure_tbs
datafile '/oracle01/oradata/orcl/my_second_secure_tbs_01.dbf' size 100m
encryption default storage (encrypt);
SECUREFILE LOBS must be created in tablespaces using Automatic Segment Space
Management (ASSM), and you must have your wallet open in order to use SECUREFILE.
Here is an example of the creation of a table with a SECUREFILE LOB:
Example:
create table notes (note_doc clob encrypt using 'AES128')
lob(note_doc) store as securefile
(cache nologging);
Here is an example of using the alter table command to compress the contents of a table; all future contents will be compressed as well:
Alter table emp move compress;
select table_name, partition_name, compression
from dba_tab_partitions
where table_name='COMPRESS_DEMO'
order by 1,2;
TABLE_NAME PARTITION_NAME COMPRESS
------------------------------ ------------------------------ --------
COMPRESS_DEMO CLOSE_BUT_NOT_YET DISABLED
COMPRESS_DEMO LONG_AGO ENABLED
COMPRESS_DEMO NOT_SO_LONG_AGO DISABLED
COMPRESS_DEMO NOW_OR_FUTURE DISABLED
New lock table Parameter
Lock table my_tab in exclusive mode wait 60;
Lock table my_tab in share mode nowait;
You can make an index invisible when you create it with the create index
command by using the invisible keyword as seen in this example:
Create index ix_test on test(id) invisible;
Alter index ix_test visible;
alter index ix_test invisible;
Read-Only Tables
The alter table command can now be used to make a table read-only. This allows
the DBA to make a table read-only across the database, including the owner of the
table. The following examples demonstrate the use of the alter table command along
with the read only keywords to make a table read-only, and then the use of the read
write keywords to make the table read-write:
Alter table my_table read only;
SQL> delete from my_table;
delete from my_table
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "SCOTT"."MY_TABLE"
alter table my_table read write;
Using the Result Cache
You can tell whether you are using the result cache by looking at the execution plan of your SQL statement. Here is an example of a SQL statement that uses the result_cache hint (the associated execution plan is not reproduced here):
select /*+ result_cache */ sum(sal) sum from emp;
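One way to display the plan is with SQL*Plus autotrace, as sketched below; when the hint takes effect, the plan includes a RESULT CACHE operation:
SQL> set autotrace traceonly explain
SQL> select /*+ result_cache */ sum(sal) sum from emp;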
You can generate a report on the result cache using the Oracle PL/SQL
procedure dbms_result_cache.memory_report as seen in this example:
SQL> set serveroutput on
SQL> exec dbms_result_cache.memory_report
R e s u l t C a c h e M e m o r y R e p o r t
[Parameters]
Block Size = 1K bytes
Maximum Cache Size = 864K bytes (864 blocks)
Maximum Result Size = 43K bytes (43 blocks)
[Memory]
Total Memory = 103528 bytes [0.077% of the Shared Pool]
... Fixed Memory = 5132 bytes [0.004% of the Shared Pool]
... Dynamic Memory = 98396 bytes [0.073% of the Shared Pool]
....... Overhead = 65628 bytes
....... Cache Memory = 32K bytes (32 blocks)
........... Unused Memory = 30 blocks
........... Used Memory = 2 blocks
............... Dependencies = 1 blocks (1 count)
............... Results = 1 blocks
................... SQL = 1 blocks (1 count)
PL/SQL procedure successfully completed.
Create Triggers as Enabled or Disabled
Oracle Database 11g allows you to create a trigger either as enabled (the default) or disabled, using the new enable or disable clause. If you want to create a trigger as disabled, you would use the disable clause as seen in this example:
Create or replace trigger trigger_two
before insert on test
disable
begin
null;
end;
/
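A trigger created disabled can later be switched on (or off again) with the alter trigger command:
Alter trigger trigger_two enable;
Alter trigger trigger_two disable;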
Performance Tuning
For example, I could run the following query to find all cumulative waits of more than one second that occurred on foreground/background processes:
select event, total_waits_fg twg, total_timeouts_fg ttf,
time_waited_fg twf, average_wait_fg awf,
time_waited_micro_fg twmf
from v$system_event
where time_waited_micro_fg > 1000000
and wait_class !='Idle';