Examples: Building a Repair Table or Orphan Key Table
The ADMIN_TABLES procedure is used to create, purge, or drop a repair table or an orphan key table.
A repair table provides information about the corruptions that were found by the CHECK_OBJECT procedure and describes how they will be addressed if the FIX_CORRUPT_BLOCKS procedure is run. It also drives the execution of the FIX_CORRUPT_BLOCKS procedure.
An orphan key table is used when the DUMP_ORPHAN_KEYS procedure is executed and it discovers index entries that point to corrupt rows. The DUMP_ORPHAN_KEYS procedure populates the orphan key table by logging its activity and providing the index information in a usable manner.
Example: Creating a Repair Table
The following example creates a repair table for the users tablespace.
BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'REPAIR_TABLE',
TABLE_TYPE => dbms_repair.repair_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'USERS');
END;
/
For each repair or orphan key table, a view is also created that eliminates any rows that pertain to objects that no longer exist. The name of the view corresponds to the name of the repair or orphan key table and is prefixed by DBA_ (for example, DBA_REPAIR_TABLE or DBA_ORPHAN_KEY_TABLE).
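For example, once corruptions have been logged, the view can be queried just like the base table (a minimal sketch using the view name given above):
SELECT object_name, block_id, corrupt_type, marked_corrupt
FROM   dba_repair_table;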
The following query describes the repair table that was created for the users tablespace.
DESC REPAIR_TABLE
Name Null? Type
---------------------------- -------- --------------
OBJECT_ID NOT NULL NUMBER
TABLESPACE_ID NOT NULL NUMBER
RELATIVE_FILE_ID NOT NULL NUMBER
BLOCK_ID NOT NULL NUMBER
CORRUPT_TYPE NOT NULL NUMBER
SCHEMA_NAME NOT NULL VARCHAR2(30)
OBJECT_NAME NOT NULL VARCHAR2(30)
BASEOBJECT_NAME VARCHAR2(30)
PARTITION_NAME VARCHAR2(30)
CORRUPT_DESCRIPTION VARCHAR2(2000)
REPAIR_DESCRIPTION VARCHAR2(200)
MARKED_CORRUPT NOT NULL VARCHAR2(10)
CHECK_TIMESTAMP NOT NULL DATE
FIX_TIMESTAMP DATE
REFORMAT_TIMESTAMP DATE
Example: Creating an Orphan Key Table
This example illustrates the creation of an orphan key table for the users tablespace.
BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'ORPHAN_KEY_TABLE',
TABLE_TYPE => dbms_repair.orphan_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'USERS');
END;
/
The following query describes the orphan key table:
DESC ORPHAN_KEY_TABLE
Name Null? Type
---------------------------- -------- -----------------
SCHEMA_NAME NOT NULL VARCHAR2(30)
INDEX_NAME NOT NULL VARCHAR2(30)
IPART_NAME VARCHAR2(30)
INDEX_ID NOT NULL NUMBER
TABLE_NAME NOT NULL VARCHAR2(30)
PART_NAME VARCHAR2(30)
TABLE_ID NOT NULL NUMBER
KEYROWID NOT NULL ROWID
KEY NOT NULL ROWID
DUMP_TIMESTAMP NOT NULL DATE
Example: Detecting Corruption
The CHECK_OBJECT procedure checks the specified object, and populates the repair table with information about corruptions and repair directives. You can optionally specify a range, partition name, or subpartition name when you want to check a portion of an object.
Validation consists of checking all blocks in the object that have not previously been marked corrupt. For each block, the transaction and data layer portions are checked for self consistency. During CHECK_OBJECT, if a block is encountered that has a corrupt buffer cache header, then that block is skipped.
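For instance, a single partition can be checked by passing the optional PARTITION_NAME parameter. This is a sketch only (SALES and SALES_Q1 are hypothetical names); the full scott.dept example follows below.
SET SERVEROUTPUT ON
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
-- Check only partition SALES_Q1 of the hypothetical table SCOTT.SALES
DBMS_REPAIR.CHECK_OBJECT (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'SALES',
PARTITION_NAME => 'SALES_Q1',
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
CORRUPT_COUNT => num_corrupt);
DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR(num_corrupt));
END;
/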
The following is an example of executing the CHECK_OBJECT procedure for the scott.dept table.
SET SERVEROUTPUT ON
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
DBMS_REPAIR.CHECK_OBJECT (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'DEPT',
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
CORRUPT_COUNT => num_corrupt);
DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR (num_corrupt));
END;
/
SQL*Plus outputs the following line, indicating one corruption:
number corrupt: 1
Querying the repair table produces information describing the corruption and suggesting a repair action.
SELECT OBJECT_NAME, BLOCK_ID, CORRUPT_TYPE, MARKED_CORRUPT,
CORRUPT_DESCRIPTION, REPAIR_DESCRIPTION
FROM REPAIR_TABLE;
OBJECT_NAME BLOCK_ID CORRUPT_TYPE MARKED_COR
------------------------------ ---------- ------------ ----------
CORRUPT_DESCRIPTION
------------------------------------------------------------------------------
REPAIR_DESCRIPTION
------------------------------------------------------------------------------
DEPT 3 1 FALSE
kdbchk: row locked by non-existent transaction
table=0 slot=0
lockid=32 ktbbhitc=1
mark block software corrupt
The corrupted block has not yet been marked corrupt, so this is the time to extract any meaningful data. After the block is marked corrupt, the entire block must be skipped.
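One way to do that (a sketch only; DEPT_SALVAGE is a hypothetical table name, and the copy may itself fail if the corruption prevents the rows from being read) is to copy the still-readable rows aside before running FIX_CORRUPT_BLOCKS:
-- Salvage whatever is still readable before the block is marked corrupt
CREATE TABLE scott.dept_salvage AS
SELECT * FROM scott.dept;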
Example: Fixing Corrupt Blocks
Use the FIX_CORRUPT_BLOCKS procedure to fix the corrupt blocks in specified objects based on information in the repair table that was generated by the CHECK_OBJECT procedure. Before changing a block, the block is checked to ensure that the block is still corrupt. Corrupt blocks are repaired by marking the block software corrupt. When a repair is performed, the associated row in the repair table is updated with a timestamp.
This example fixes the corrupt block in table scott.dept that was reported by the CHECK_OBJECT procedure.
SET SERVEROUTPUT ON
DECLARE num_fix INT;
BEGIN
num_fix := 0;
DBMS_REPAIR.FIX_CORRUPT_BLOCKS (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME=> 'DEPT',
OBJECT_TYPE => dbms_repair.table_object,
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
FIX_COUNT=> num_fix);
DBMS_OUTPUT.PUT_LINE('num fix: ' || TO_CHAR(num_fix));
END;
/
SQL*Plus outputs the following line:
num fix: 1
The following query confirms that the repair was done.
SELECT OBJECT_NAME, BLOCK_ID, MARKED_CORRUPT
FROM REPAIR_TABLE;
OBJECT_NAME BLOCK_ID MARKED_COR
------------------------------ ---------- ----------
DEPT 3 TRUE
Example: Finding Index Entries Pointing to Corrupt Data Blocks
The DUMP_ORPHAN_KEYS procedure reports on index entries that point to rows in corrupt data blocks. For each index entry, a row is inserted into the specified orphan key table. The orphan key table must have been previously created.
This information can be useful for rebuilding lost rows in the table and for diagnostic purposes.
Note:
This should be run for every index associated with a table identified in the repair table.
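One way to cover all indexes (a sketch only, assuming the repair and orphan key tables created earlier and that the affected table is SCOTT.DEPT) is to loop over DBA_INDEXES; the single-index case for pk_dept is walked through next.
SET SERVEROUTPUT ON
DECLARE
num_orphans INT;
BEGIN
-- Run DUMP_ORPHAN_KEYS for every index on the affected table
FOR idx IN (SELECT owner, index_name
FROM dba_indexes
WHERE table_owner = 'SCOTT'
AND table_name = 'DEPT')
LOOP
num_orphans := 0;
DBMS_REPAIR.DUMP_ORPHAN_KEYS (
SCHEMA_NAME => idx.owner,
OBJECT_NAME => idx.index_name,
OBJECT_TYPE => dbms_repair.index_object,
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
ORPHAN_TABLE_NAME => 'ORPHAN_KEY_TABLE',
KEY_COUNT => num_orphans);
DBMS_OUTPUT.PUT_LINE(idx.index_name || ' orphan key count: ' || TO_CHAR(num_orphans));
END LOOP;
END;
/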
In this example, pk_dept is an index on the scott.dept table. It is scanned to determine if there are any index entries pointing to rows in the corrupt data block.
SET SERVEROUTPUT ON
DECLARE num_orphans INT;
BEGIN
num_orphans := 0;
DBMS_REPAIR.DUMP_ORPHAN_KEYS (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'PK_DEPT',
OBJECT_TYPE => dbms_repair.index_object,
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
ORPHAN_TABLE_NAME=> 'ORPHAN_KEY_TABLE',
KEY_COUNT => num_orphans);
DBMS_OUTPUT.PUT_LINE('orphan key count: ' || TO_CHAR(num_orphans));
END;
/
The following output indicates that there are three orphan keys:
orphan key count: 3
Entries in the orphan key table imply that the index should be rebuilt. Rebuilding guarantees that a table probe and an index probe return the same result set.
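For example, the index examined above could be rebuilt as follows (a sketch; rebuilding online or recreating the index may be more appropriate depending on availability requirements):
ALTER INDEX scott.pk_dept REBUILD;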
Example: Skipping Corrupt Blocks
The SKIP_CORRUPT_BLOCKS procedure enables or disables the skipping of corrupt blocks during index and table scans of the specified object. When the object is a table, skipping applies to the table and its indexes. When the object is a cluster, it applies to all of the tables in the cluster, and their respective indexes.
The following example enables the skipping of software corrupt blocks for the scott.dept table:
BEGIN
DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'DEPT',
OBJECT_TYPE => dbms_repair.table_object,
FLAGS => dbms_repair.skip_flag);
END;
/
Querying scott's tables using the DBA_TABLES view shows that SKIP_CORRUPT is enabled for table scott.dept.
SELECT OWNER, TABLE_NAME, SKIP_CORRUPT FROM DBA_TABLES
WHERE OWNER = 'SCOTT';
OWNER TABLE_NAME SKIP_COR
------------------------------ ------------------------------ --------
SCOTT ACCOUNT DISABLED
SCOTT BONUS DISABLED
SCOTT DEPT ENABLED
SCOTT DOCINDEX DISABLED
SCOTT EMP DISABLED
SCOTT RECEIPT DISABLED
SCOTT SALGRADE DISABLED
SCOTT SCOTT_EMP DISABLED
SCOTT SYS_IOT_OVER_12255 DISABLED
SCOTT WORK_AREA DISABLED
10 rows selected.
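Once the corruption has been dealt with, skipping can be switched off again with the NOSKIP_FLAG constant (a sketch mirroring the call above):
BEGIN
DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
SCHEMA_NAME => 'SCOTT',
OBJECT_NAME => 'DEPT',
OBJECT_TYPE => dbms_repair.table_object,
FLAGS => dbms_repair.noskip_flag);
END;
/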
Monday, December 21, 2009
Tuesday, December 15, 2009
Rman Backup Tape Veritas
BACKUP FULL
connect catalog rman/password@cat10g;
connect target rman/password@letodb;
resync catalog;
run {
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch01
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch02
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch03
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
BACKUP DATABASE PLUS ARCHIVELOG FORMAT 'bk_u%u_s%s_p%p_t%t';
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
}
BACKUP INCREMENTAL LEVEL 0
connect catalog rman/password@cat10g;
connect target rman/password@letodb;
resync catalog;
run {
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch01
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch02
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch03
TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=SEN_CT1BOSUNBD-SPE_BD_letodb_F_DS,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
BACKUP
INCREMENTAL LEVEL=0
FORMAT 'bk_u%u_s%s_p%p_t%t'
DATABASE;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
}
BACKUP FULL CLUSTER VERITAS
connect rcvcat rman/password@cat9i
connect target /
RUN {
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64";
SEND 'NB_ORA_CLIENT=telbebdsdp1,NB_ORA_POLICY=BD_Online_ALTAMIRA_SDPTE01';
BACKUP
INCREMENTAL LEVEL=0
FORMAT 'db_%s_%p_%t'
TAG 'BD_SDPTE01_ALTAMIRA'
DATABASE;
RELEASE CHANNEL ch00;
# Backup Archived Logs
sql 'alter system archive log current';
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC0;
ALLOCATE CHANNEL ch01
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC1;
ALLOCATE CHANNEL ch03
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC2;
ALLOCATE CHANNEL ch04
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC3;
BACKUP
FORMAT 'arch_%s_%p_%t'
ARCHIVELOG
ALL
DELETE INPUT;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
}
BACKUP ARCHIVELOG CLUSTER - VERITAS
connect rcvcat rman/password@cat9i
connect target /
RUN {
# Backup Archived Logs
sql 'alter system archive log current';
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC0;
ALLOCATE CHANNEL ch01
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC1;
ALLOCATE CHANNEL ch03
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC2;
ALLOCATE CHANNEL ch04
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64" connect rman/password@RAC3;
BACKUP
FORMAT 'arch_%s_%p_%t'
ARCHIVELOG
ALL
DELETE INPUT;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
# Control file backup
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64";
BACKUP
FORMAT 'ctrl_%s_%p_%t'
CURRENT CONTROLFILE;
RELEASE CHANNEL ch00;
}
BACKUP
connect catalog rman/rman@cat10g;
connect target rman/rman@letodb;
resync catalog;
run {
ALLOCATE CHANNEL ch00
TYPE 'SBT_TAPE' parms 'ENV=(SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64,NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=CE_SEN_ct1bosunbd-spe_DB_letodb_FF_SD,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch01
TYPE 'SBT_TAPE' parms 'ENV=(SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64,NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=CE_SEN_ct1bosunbd-spe_DB_letodb_FF_SD,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
ALLOCATE CHANNEL ch02
TYPE 'SBT_TAPE' parms 'ENV=(SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64,NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=CE_SEN_ct1bosunbd-spe_DB_letodb_FF_SD,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
# ALLOCATE CHANNEL ch03
# TYPE 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=CT1BOSUNBD-SPE1-bck,NB_ORA_POLICY=CE_SEN_ct1bosunbd-spe_DB_letodb_FF_SD,NB_ORA_SERV=bksrv01,NB_ORA_SCHED=Default-Application-Backup)';
backup archivelog all delete input TAG 'BACKUP ARCHIVELOG CL LETODB';
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
#RELEASE CHANNEL ch03;
}
Thursday, December 10, 2009
Oracle Full Text Indexing using Oracle Text
Oracle Text, previously known as interMedia Text and ConText, is an extensive full text indexing technology that allows you to efficiently query free text and build document classification applications. In this article I'll only scratch the surface of this very complex feature.
CONTEXT Indexes
CTXCAT Indexes
CTXRULE Indexes
Index Maintenance
The examples in this article require access to the CTX_DDL package, which is granted as follows:
GRANT EXECUTE ON CTX_DDL TO <username>;
CONTEXT Indexes
The CONTEXT index type is used to index large amounts of text such as Word, PDF, XML, HTML or plain text documents. In this example we will store the data in a BLOB column, which allows us to store binary documents like Word and PDF as well as plain text. Using a CLOB is preferable if only plain text documents are being used.
First we build a sample schema to hold our data:
DROP TABLE my_docs;
DROP SEQUENCE my_docs_seq;
DROP PROCEDURE load_file_to_my_docs;
CREATE TABLE my_docs (
id NUMBER(10) NOT NULL,
name VARCHAR2(200) NOT NULL,
doc BLOB NOT NULL
)
/
ALTER TABLE my_docs ADD (
CONSTRAINT my_docs_pk PRIMARY KEY (id)
)
/
CREATE SEQUENCE my_docs_seq;
CREATE OR REPLACE DIRECTORY documents AS 'C:\work';
Next we load several files as follows:
CREATE OR REPLACE PROCEDURE load_file_to_my_docs (p_file_name IN my_docs.name%TYPE) AS
v_bfile BFILE;
v_blob BLOB;
BEGIN
INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, p_file_name, empty_blob())
RETURN doc INTO v_blob;
v_bfile := BFILENAME('DOCUMENTS', p_file_name);
Dbms_Lob.Fileopen(v_bfile, Dbms_Lob.File_Readonly);
Dbms_Lob.Loadfromfile(v_blob, v_bfile, Dbms_Lob.Getlength(v_bfile));
Dbms_Lob.Fileclose(v_bfile);
COMMIT;
END;
/
EXEC load_file_to_my_docs('FullTextIndexingUsingOracleText9i.doc');
EXEC load_file_to_my_docs('FullTextIndexingUsingOracleText9i.asp');
EXEC load_file_to_my_docs('XMLOverHTTP9i.asp');
EXEC load_file_to_my_docs('UNIXForDBAs.asp');
EXEC load_file_to_my_docs('emp_ws_access.sql');
EXEC load_file_to_my_docs('emp_ws_test.html');
EXEC load_file_to_my_docs('9ivsSS2000forPerformanceV22.pdf');
Next we create a CONTEXT type index on the doc column and gather table statistics:
CREATE INDEX my_docs_doc_idx ON my_docs(doc) INDEXTYPE IS CTXSYS.CONTEXT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_DOCS', cascade=>TRUE);
Finally we query the table looking for documents with specific content:
SELECT SCORE(1) score, id, name
FROM my_docs
WHERE CONTAINS(doc, 'SQL Server', 1) > 0
ORDER BY SCORE(1) DESC;
SCORE ID NAME
---------- ---------- ------------------------------------------------
100 127 9ivsSS2000forPerformanceV22.pdf
1 row selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=2 Bytes=58)
1 0 SORT (ORDER BY) (Cost=4 Card=2 Bytes=58)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'MY_DOCS' (Cost=2 Card=2 Bytes=58)
3 2 DOMAIN INDEX OF 'MY_DOCS_DOC_IDX' (Cost=0)
SELECT SCORE(1) score, id, name
FROM my_docs
WHERE CONTAINS(doc, 'XML', 1) > 0
ORDER BY SCORE(1) DESC;
SCORE ID NAME
---------- ---------- ------------------------------------------------
74 123 XMLOverHTTP9i.asp
9 125 emp_ws_access.sql
2 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=2 Bytes=58)
1 0 SORT (ORDER BY) (Cost=4 Card=2 Bytes=58)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'MY_DOCS' (Cost=2 Card=2 Bytes=58)
3 2 DOMAIN INDEX OF 'MY_DOCS_DOC_IDX' (Cost=0)
CTXCAT Indexes
The CTXCAT index type is best suited to smaller text fragments that must be indexed along with other relational data. In this example the data will be stored in a VARCHAR2 column.
First we create a schema to hold the data:
DROP TABLE my_items;
DROP SEQUENCE my_items_seq;
EXEC CTX_DDL.DROP_INDEX_SET('my_items_iset');
CREATE TABLE my_items (
id NUMBER(10) NOT NULL,
name VARCHAR2(200) NOT NULL,
description VARCHAR2(4000) NOT NULL,
price NUMBER(7,2) NOT NULL
)
/
ALTER TABLE my_items ADD (
CONSTRAINT my_items_pk PRIMARY KEY (id)
)
/
CREATE SEQUENCE my_items_seq;
Next we populate the schema with some dummy data:
BEGIN
FOR i IN 1 .. 1000 LOOP
INSERT INTO my_items (id, name, description, price)
VALUES (my_items_seq.NEXTVAL, 'Bike: '||i, 'Bike Description ('||i||')', i);
END LOOP;
FOR i IN 1 .. 1000 LOOP
INSERT INTO my_items (id, name, description, price)
VALUES (my_items_seq.NEXTVAL, 'Car: '||i, 'Car Description ('||i||')', i);
END LOOP;
FOR i IN 1 .. 1000 LOOP
INSERT INTO my_items (id, name, description, price)
VALUES (my_items_seq.NEXTVAL, 'House: '||i, 'House Description ('||i||')', i);
END LOOP;
COMMIT;
END;
/
Next we create a CTXCAT index on the DESCRIPTION and PRICE columns and gather table statistics. In order to create the index we must create an index-set with a sub-index for each column referenced by the CATSEARCH function:
EXEC CTX_DDL.CREATE_INDEX_SET('my_items_iset');
EXEC CTX_DDL.ADD_INDEX('my_items_iset','price');
CREATE INDEX my_items_name_idx ON my_items(description) INDEXTYPE IS CTXSYS.CTXCAT
PARAMETERS ('index set my_items_iset');
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_ITEMS', cascade=>TRUE);
Finally we query the table looking for items with a description that contains our specified words and an appropriate price:
SELECT id, price, name
FROM my_items
WHERE CATSEARCH(description, 'Bike', 'price BETWEEN 1 AND 5')> 0;
ID PRICE NAME
---------- ---------- ------------------------------------------------
1 1 Bike: 1
2 2 Bike: 2
3 3 Bike: 3
4 4 Bike: 4
5 5 Bike: 5
5 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=150 Bytes=6000)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'MY_ITEMS' (Cost=2 Card=150 Bytes=6000)
2 1 DOMAIN INDEX OF 'MY_ITEMS_NAME_IDX'
SELECT id, price, name
FROM my_items
WHERE CATSEARCH(description, 'Car', 'price BETWEEN 101 AND 105 ORDER BY price DESC')> 0;
ID PRICE NAME
---------- ---------- ------------------------------------------------
1105 105 Car: 105
1104 104 Car: 104
1103 103 Car: 103
1102 102 Car: 102
1101 101 Car: 101
5 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=150 Bytes=6000)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'MY_ITEMS' (Cost=2 Card=150 Bytes=6000)
2 1 DOMAIN INDEX OF 'MY_ITEMS_NAME_IDX'
Every column used to restrict the selection or order the output in the CATSEARCH function should have a sub-index within the index-set. The CTXCAT index type is transactional so there is no need to synchronize the index.
CTXRULE Indexes
The CTXRULE index type can be used to build document classification applications.
First we must define our document categories and store them, along with a suitable query for the MATCHES function:
DROP TABLE my_doc_categories;
DROP TABLE my_categories;
DROP SEQUENCE my_categories_seq;
DROP TABLE my_docs;
DROP SEQUENCE my_docs_seq;
CREATE TABLE my_categories (
id NUMBER(10) NOT NULL,
category VARCHAR2(30) NOT NULL,
query VARCHAR2(2000) NOT NULL
)
/
ALTER TABLE my_categories ADD (
CONSTRAINT my_categories_pk PRIMARY KEY (id)
)
/
CREATE SEQUENCE my_categories_seq;
INSERT INTO my_categories VALUES(my_categories_seq.NEXTVAL, 'Oracle', 'ABOUT(Oracle)');
INSERT INTO my_categories VALUES(my_categories_seq.NEXTVAL, 'SQL Server', 'ABOUT(SQL Server)');
INSERT INTO my_categories VALUES(my_categories_seq.NEXTVAL, 'UNIX', 'ABOUT(UNIX)');
Next we create a table to hold our documents:
CREATE TABLE my_docs (
id NUMBER(10) NOT NULL,
name VARCHAR2(200) NOT NULL,
doc CLOB NOT NULL
)
/
ALTER TABLE my_docs ADD (
CONSTRAINT my_docs_pk PRIMARY KEY (id)
)
/
CREATE SEQUENCE my_docs_seq;
Then we create an intersection table to resolve the many-to-many relationship between documents and categories:
CREATE TABLE my_doc_categories (
my_doc_id NUMBER(10) NOT NULL,
my_category_id NUMBER(10) NOT NULL
)
/
ALTER TABLE my_doc_categories ADD (
CONSTRAINT my_doc_categories_pk PRIMARY KEY (my_doc_id, my_category_id)
)
/
Next we create a BEFORE INSERT trigger on the MY_DOCS table to automatically assign the documents to the relevant categories as they are being inserted. The MATCHES function is used to decide if the document matches any of our category queries. The resulting cursor is used to insert the matches into the intersection table:
CREATE OR REPLACE TRIGGER my_docs_trg
BEFORE INSERT ON my_docs
FOR EACH ROW
BEGIN
FOR c1 IN (SELECT id
FROM my_categories
WHERE MATCHES(query, :new.doc)>0)
LOOP
BEGIN
INSERT INTO my_doc_categories(my_doc_id, my_category_id)
VALUES (:new.id, c1.id);
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
END LOOP;
END;
/
Next we create the CTXRULE index to support the trigger. For completeness we also create a CONTEXT index on the document itself, although this is not involved in the category assignment process:
CREATE INDEX my_categories_query_idx ON my_categories(query) INDEXTYPE IS CTXSYS.CTXRULE;
CREATE INDEX my_docs_doc_idx ON my_docs(doc) INDEXTYPE IS CTXSYS.CONTEXT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_CATEGORIES', cascade=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_DOCS', cascade=>TRUE);
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_DOC_CATEGORIES', cascade=>TRUE);
Finally we test the mechanism by inserting some rows and checking the classification:
INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'Oracle Document', 'This document contains the word Oracle!');
INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'SQL Server Document', 'This document contains the words SQL Server!');
INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'UNIX Document', 'This document contains the word UNIX!');
INSERT INTO my_docs (id, name, doc)
VALUES (my_docs_seq.NEXTVAL, 'Oracle UNIX Document', 'This document contains the word UNIX and the word Oracle!');
COLUMN name FORMAT A30;
SELECT a.name, b.category
FROM my_docs a,
my_categories b,
my_doc_categories c
WHERE c.my_doc_id = a.id
AND c.my_category_id = b.id;
NAME CATEGORY
------------------------------ ------------------------------
Oracle Document Oracle
SQL Server Document SQL Server
UNIX Document UNIX
Oracle UNIX Document UNIX
Oracle UNIX Document Oracle
5 rows selected.
The output shows that the documents have been assigned to the correct categories. Note that the "Oracle UNIX Document" document has been assigned to both the "Oracle" and "UNIX" categories.
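The MATCHES function can also be used in an ad hoc query to see which categories a piece of text would be assigned to, without inserting a document (a small sketch against the tables above):
SELECT b.category
FROM   my_categories b
WHERE  MATCHES(b.query, 'A short note about Oracle running on UNIX') > 0;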
Index Maintenance
Not all Oracle Text indexes are automatically updated as records are added or deleted. To synchronize the index with the table you must call:
SQL> EXEC CTX_DDL.SYNC_INDEX('my_docs_doc_idx');
Regular synchronization of the index can be automated using the DBMS_JOB package. The following script is provided to make this task easier:
$ORACLE_HOME/ctx/sample/script/drjobdml.sql
It can be called from SQL*Plus whilst logged on as the index owner as follows:
SQL> @drjobdml.sql index-name interval-mins
SQL> @drjobdml.sql my_docs_doc_idx 60
Regular synchronization of text indexes can cause fragmentation, which affects query performance. To correct this situation the index can be rebuilt or optimized. Index optimization can be performed in three basic modes (FAST, FULL or TOKEN). The FAST mode compacts fragmented rows but does not remove old data:
BEGIN
CTX_DDL.OPTIMIZE_INDEX('my_docs_doc_idx','FAST');
END;
/
The FULL mode optimizes either the entire index or a portion of it, with old data removed:
BEGIN
CTX_DDL.OPTIMIZE_INDEX('my_docs_doc_idx','FULL');
END;
/
The TOKEN mode performs a full optimization for a specific token:
BEGIN
CTX_DDL.OPTIMIZE_INDEX('my_docs_doc_idx','TOKEN', token=>'Oracle');
END;
/
Wednesday, December 2, 2009
Oracle Secrets Database
SQL*Plus: Release 10.2.0.4.0 - Production on Wed Dec 2 15:34:49 2009
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> SHOW PARAMETER audit
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest string /u01/app/oracle/admin/POWER/ad
ump
audit_syslog_level string
audit_sys_operations boolean FALSE
audit_trail string NONE
SQL> SELECT value FROM v$parameter WHERE name='audit_syslog_level';
no rows selected
SQL>
Yet, when executing CONNECT / AS SYSDBA, the facility and level logged in /var/adm/messages
on Solaris is “user.notice”:
Feb 21 11:45:52 dbserver Oracle Audit[27742]: [ID 441842 user.notice]
ACTION : 'CONNECT'
Feb 21 11:45:52 dbserver DATABASE USER: '/'
Feb 21 11:45:52 dbserver PRIVILEGE : SYSDBA
Feb 21 11:45:52 dbserver CLIENT USER: oracle
Feb 21 11:45:52 dbserver CLIENT TERMINAL: pts/3
Feb 21 11:45:52 dbserver STATUS: 0
If an SPFILE is used, the full setting is available by querying V$SPPARAMETER:
SQL> SELECT value FROM v$spparameter WHERE name='audit_syslog_level';
VALUE
-----------
user.notice
Auditing Non-Privileged Users
Of course, you may also direct audit records pertaining to non-privileged users to the system log by setting AUDIT_TRAIL=OS in addition to AUDIT_SYSLOG_LEVEL. Non-privileged users cannot delete audit trails that log their actions. However, searching for perpetrators with queries against auditing views, such as DBA_AUDIT_STATEMENT or DBA_AUDIT_OBJECT, is easier than searching the system log, so keeping the audit trails of non-privileged users inside the database with AUDIT_TRAIL=DB is preferred. With the latter setting, audit trails are written to the table SYS.AUD$ and may be queried through the aforementioned data dictionary views. Setting AUDIT_TRAIL=NONE switches off auditing of actions by non-privileged users.
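For reference, the parameters discussed here could be set as follows (a sketch; all three are static parameters, so SCOPE=SPFILE and an instance restart are required):
ALTER SYSTEM SET audit_syslog_level   = 'AUTH.INFO' SCOPE=SPFILE;
ALTER SYSTEM SET audit_sys_operations = TRUE        SCOPE=SPFILE;
ALTER SYSTEM SET audit_trail          = DB          SCOPE=SPFILE;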
SQL> AUDIT CONNECT BY appuser /* AUDIT_TRAIL=OS is set */;
When APPUSER then connects, entries similar to the following are written to the syslog facility (example from Solaris):
Feb 21 11:41:14 dbserver Oracle Audit[27684]: [ID 930208 user.notice]
SESSIONID: "15" ENTRYID: "1" STATEMENT: "1" USERID: "APPUSER"
USERHOST: "dbserver" TERMINAL: "pts/3" ACTION: "100" RETURNCODE: "0"
COMMENT$TEXT: "Authenticated by: DATABASE" OS$USERID: "oracle"
PRIV$USED: 5
Another entry is added to /var/adm/messages when a database session ends:
Feb 21 11:44:41 dbserver Oracle Audit[27684]: [ID 162490 user.notice]
SESSIONID: "15" ENTRYID: "1" ACTION: "101" RETURNCODE: "0"
LOGOFF$PREAD: "1" LOGOFF$LREAD: "17" LOGOFF$LWRITE: "0" LOGOFF$DEAD:
"0" SESSIONCPU: "2"
Note that the additional data provided for the LOGON (100) and LOGOFF (101) actions conforms to the columns of the view DBA_AUDIT_SESSION. Translation from action numbers to action names is done via the view AUDIT_ACTIONS, as in this example:
SQL> SELECT action, name FROM audit_actions WHERE action IN (100,101);
ACTION NAME
------ ------
100 LOGON
101 LOGOFF
When AUDIT_SYSLOG_LEVEL=AUTH.INFO, AUDIT_SYS_OPERATIONS=FALSE, and AUDIT_TRAIL=NONE, CONNECT, STARTUP, and SHUTDOWN are logged via syslog. With these settings, an instance shutdown on Solaris writes entries similar to the following to /var/adm/messages:
Feb 21 14:40:01 dbserver Oracle Audit[29036]:[ID 63719 auth.info] ACTION:'SHUTDOWN'
Feb 21 14:40:01 dbserver DATABASE USER: '/'
Feb 21 14:40:01 dbserver PRIVILEGE : SYSDBA
Feb 21 14:40:01 dbserver CLIENT USER: oracle
Feb 21 14:40:01 dbserver CLIENT TERMINAL: pts/3
Feb 21 14:40:01 dbserver STATUS: 0
When AUDIT_SYSLOG_LEVEL=AUTH.INFO, AUDIT_SYS_OPERATIONS=TRUE, and AUDIT_TRAIL=NONE, SQL and PL/SQL statements executed with SYSDBA or SYSOPER privileges are also logged via syslog. Dropping a user after connecting with / AS SYSDBA results in a syslog entry similar to the one shown here:
Feb 21 14:46:53 dbserver Oracle Audit[29170]: [ID 853627 auth.info]
ACTION : 'drop user appuser'
Feb 21 14:46:53 dbserver DATABASE USER: '/'
Feb 21 14:46:53 dbserver PRIVILEGE : SYSDBA
Feb 21 14:46:53 dbserver CLIENT USER: oracle
Feb 21 14:46:53 dbserver CLIENT TERMINAL: pts/3
Feb 21 14:46:53 dbserver STATUS: 0