http://husnusensoy.wordpress.com/2009/10/30/how-to-install-oracle-11g-release-2-on-oel-5-4-on-virtualbox-installing-grid-infrastructure/
alter system flush buffer_cache;
alter system flush shared_pool;
http://www.red-partner.com/pdf/UsoDeBitmapJoinIndexes.pdf
http://www.oracle-base.com/articles/10g/TablespaceManagement10g.php
v$session_wait
IOT Tables (Index-Organized Tables)
http://plsqlhowtos.blogspot.com/2008/10/notes-on-index-organized-tables.html
Materialized Views
Materialized View Logs
ALTER SESSION ENABLE RESUMABLE
AFTER SUSPEND TRIGGER
http://jjmora.es/Rendimiento_Tuning_Solaris_Memoria_Swap/
Tuesday, April 27, 2010
Monday, April 19, 2010
Oracle Policy Violations
ORACLE 10G POLICY VIOLATIONS
SQL> revoke execute on utl_file from public;
SQL> revoke execute on DBMS_RANDOM from public;
SQL> revoke execute on UTL_HTTP from public;
SQL> revoke execute on UTL_SMTP from public;
SQL> revoke execute on UTL_TCP from public;
SQL> ALTER PROFILE DEFAULT LIMIT FAILED_LOGIN_ATTEMPTS 5;
SQL> revoke sysdba from rman;
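After these revokes, a quick dictionary check (a hedged example, assuming DBA access) confirms that no PUBLIC execute grants remain on the revoked packages:
SQL> select table_name, privilege from dba_tab_privs
     where grantee = 'PUBLIC'
     and table_name in ('UTL_FILE','DBMS_RANDOM','UTL_HTTP','UTL_SMTP','UTL_TCP');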
Oracle Online Table Redefinition Examples
Online Table Redefinition Examples
For the following examples, see Oracle Database PL/SQL Packages and Types Reference for descriptions of all DBMS_REDEFINITION subprograms.
Example Description
Example 1 Redefines a table by adding new columns and adding partitioning.
Example 2 Demonstrates redefinition with object datatypes.
Example 3 Demonstrates redefinition with manually registered dependent objects.
Example 4 Redefines a single table partition, moving it to a different tablespace.
Example 1
This example illustrates online redefinition of the previously created table hr.admin_emp, which at this point contains only the columns empno, ename, job, and deptno. The table is redefined as follows:
•New columns mgr, hiredate, sal, and bonus are added. (These existed in the original table but were dropped in previous examples.)
•The new column bonus is initialized to 0.
•The column deptno has its value increased by 10.
•The redefined table is partitioned by range on empno.
The steps in this redefinition are illustrated below.
1.Verify that the table is a candidate for online redefinition. In this case you specify that the redefinition is to be done using primary keys or pseudo-primary keys.
BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE('hr','admin_emp',
DBMS_REDEFINITION.CONS_USE_PK);
END;
/
2.Create an interim table hr.int_admin_emp.
CREATE TABLE hr.int_admin_emp
(empno NUMBER(5) PRIMARY KEY,
ename VARCHAR2(15) NOT NULL,
job VARCHAR2(10),
mgr NUMBER(5),
hiredate DATE DEFAULT (sysdate),
sal NUMBER(7,2),
deptno NUMBER(3) NOT NULL,
bonus NUMBER (7,2) DEFAULT(1000))
PARTITION BY RANGE(empno)
(PARTITION emp1000 VALUES LESS THAN (1000) TABLESPACE admin_tbs,
PARTITION emp2000 VALUES LESS THAN (2000) TABLESPACE admin_tbs2);
3.Start the redefinition process.
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE('hr', 'admin_emp','int_admin_emp',
'empno empno, ename ename, job job, deptno+10 deptno, 0 bonus',
dbms_redefinition.cons_use_pk);
END;
/
4.Copy dependent objects. (Automatically create any triggers, indexes, grants, and constraints on hr.int_admin_emp.)
DECLARE
num_errors PLS_INTEGER;
BEGIN
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('hr', 'admin_emp','int_admin_emp',
DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors);
END;
Note that the ignore_errors argument is set to TRUE for this call. The reason is that the interim table was created with a primary key constraint, and when COPY_TABLE_DEPENDENTS attempts to copy the primary key constraint and index from the original table, errors occur. You can ignore these errors, but you must run the query shown in the next step to see if there are other errors.
5.Query the DBA_REDEFINITION_ERRORS view to check for errors.
SQL> select object_name, base_table_name, ddl_txt from
DBA_REDEFINITION_ERRORS;
OBJECT_NAME BASE_TABLE_NAME DDL_TXT
------------- ---------------- ------------------------------
SYS_C005836 ADMIN_EMP CREATE UNIQUE INDEX "HR"."TMP$
$_SYS_C0058360" ON "HR"."INT_A
DMIN_EMP" ("EMPNO")
SYS_C005836 ADMIN_EMP ALTER TABLE "HR"."INT_ADMIN_EM
P" ADD CONSTRAINT "TMP$$_SYS_C
0058360" PRIMARY KEY
These errors are caused by the existing primary key constraint on the interim table and can be ignored. Note that with this approach, the names of the primary key constraint and index on the post-redefined table are changed. An alternate approach, one that avoids errors and name changes, would be to define the interim table without a primary key constraint. In this case, the primary key constraint and index are copied from the original table.
Note:
The best approach is to define the interim table with a primary key constraint, use REGISTER_DEPENDENT_OBJECT to register the primary key constraint and index, and then copy the remaining dependent objects with COPY_TABLE_DEPENDENTS. This approach avoids errors and ensures that the redefined table always has a primary key and that the dependent object names do not change.
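A minimal sketch of that recommended approach for this example. The constraint and index names PK_ADMIN_EMP and PK_INT_ADMIN_EMP are illustrative assumptions, not names from the example above; COPY_TABLE_DEPENDENTS then skips the registered objects and copies the rest.
BEGIN
  -- Register the interim primary key index against the original one.
  -- (PK_ADMIN_EMP / PK_INT_ADMIN_EMP are illustrative names.)
  DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT(
    uname         => 'hr',
    orig_table    => 'admin_emp',
    int_table     => 'int_admin_emp',
    dep_type      => DBMS_REDEFINITION.CONS_INDEX,
    dep_owner     => 'hr',
    dep_orig_name => 'PK_ADMIN_EMP',
    dep_int_name  => 'PK_INT_ADMIN_EMP');
  -- Register the primary key constraint itself.
  DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT(
    uname         => 'hr',
    orig_table    => 'admin_emp',
    int_table     => 'int_admin_emp',
    dep_type      => DBMS_REDEFINITION.CONS_CONSTRAINT,
    dep_owner     => 'hr',
    dep_orig_name => 'PK_ADMIN_EMP',
    dep_int_name  => 'PK_INT_ADMIN_EMP');
END;
/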
6.Optionally, synchronize the interim table hr.int_admin_emp.
BEGIN
DBMS_REDEFINITION.SYNC_INTERIM_TABLE('hr', 'admin_emp', 'int_admin_emp');
END;
/
7.Complete the redefinition.
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE('hr', 'admin_emp', 'int_admin_emp');
END;
/
The table hr.admin_emp is locked in exclusive mode only for a small window toward the end of this step. After this call, the table hr.admin_emp is redefined such that it has all the attributes of the hr.int_admin_emp table.
8.Drop the interim table.
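For step 8, the drop is an ordinary statement:
DROP TABLE hr.int_admin_emp;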
Example 2
This example redefines a table to change columns into object attributes. The redefined table gets a new column that is an object type.
The original table, named CUSTOMER, is defined as follows:
Name Type
------------ -------------
CID NUMBER <- Primary key
NAME VARCHAR2(30)
STREET VARCHAR2(100)
CITY VARCHAR2(30)
STATE VARCHAR2(2)
ZIP NUMBER(5)
The type definition for the new object is:
CREATE TYPE ADDR_T AS OBJECT (
street VARCHAR2(100),
city VARCHAR2(30),
state VARCHAR2(2),
zip NUMBER(5, 0) );
Here are the steps for this redefinition:
1.Verify that the table is a candidate for online redefinition. Specify that the redefinition is to be done using primary keys or pseudo-primary keys.
BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE('STEVE','CUSTOMER',
DBMS_REDEFINITION.CONS_USE_PK);
END;
/
2.Create the interim table int_customer.
CREATE TABLE INT_CUSTOMER(
CID NUMBER,
NAME VARCHAR2(30),
ADDR ADDR_T);
Note that no primary key is defined on the interim table. When dependent objects are copied in step 5, the primary key constraint and index are copied.
3.Because CUSTOMER is a very large table, specify parallel operations for the next step.
alter session force parallel dml parallel 4;
alter session force parallel query parallel 4;
4.Start the redefinition process using primary keys.
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE(
uname => 'STEVE',
orig_table => 'CUSTOMER',
int_table => 'INT_CUSTOMER',
col_mapping => 'cid cid, name name,
addr_t(street, city, state, zip) addr');
END;
/
Note that addr_t(street, city, state, zip) is a call to the object constructor.
5.Copy dependent objects.
DECLARE
num_errors PLS_INTEGER;
BEGIN
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
'STEVE','CUSTOMER','INT_CUSTOMER',DBMS_REDEFINITION.CONS_ORIG_PARAMS,
TRUE, TRUE, TRUE, FALSE, num_errors, TRUE);
END;
/
Note that for this call, the final argument indicates that table statistics are to be copied to the interim table.
6.Optionally synchronize the interim table.
BEGIN
DBMS_REDEFINITION.SYNC_INTERIM_TABLE('STEVE', 'CUSTOMER', 'INT_CUSTOMER');
END;
/
7.Complete the redefinition.
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE('STEVE', 'CUSTOMER', 'INT_CUSTOMER');
END;
/
8.Drop the interim table.
Example 3
This example addresses the situation where a dependent object must be manually created and registered.
Consider the case where a table T1 has a column named C1, and where this column becomes C2 after the redefinition. Assume that there is an index Index1 on C1. In this case, COPY_TABLE_DEPENDENTS tries to create an index on the interim table corresponding to Index1, and tries to create it on a column C1, which does not exist on the interim table. This results in an error. You must therefore manually create the index on column C2 and register it. Here are the steps:
1.Create the interim table INT_T1 and create an index Int_Index1 on column C2.
2.Ensure that T1 is a candidate for online redefinition with CAN_REDEF_TABLE, and then begin the redefinition process with START_REDEF_TABLE.
3.Register the original (Index1) and interim (Int_Index1) dependent objects.
BEGIN
DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT(
uname => 'STEVE',
orig_table => 'T1',
int_table => 'INT_T1',
dep_type => DBMS_REDEFINITION.CONS_INDEX,
dep_owner => 'STEVE',
dep_orig_name => 'Index1',
dep_int_name => 'Int_Index1');
END;
/
4.Use COPY_TABLE_DEPENDENTS to copy the remaining dependent objects.
5.Optionally synchronize the interim table.
6.Complete the redefinition and drop the interim table, as sketched below.
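A hedged sketch of steps 4 through 6, using the schema and table names from this example and the same call pattern as the earlier examples:
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  -- Step 4: copy the remaining dependent objects; Index1/Int_Index1 are skipped
  -- because they were registered in step 3.
  -- In practice, check num_errors before continuing.
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    'STEVE', 'T1', 'INT_T1',
    DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, FALSE, num_errors);
  -- Step 5: optionally synchronize the interim table.
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('STEVE', 'T1', 'INT_T1');
  -- Step 6: complete the redefinition.
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('STEVE', 'T1', 'INT_T1');
END;
/
DROP TABLE STEVE.INT_T1;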
Example 4
This example demonstrates redefining a single partition. It moves the oldest partition of a range-partitioned sales table to a tablespace named TBS_LOW_FREQ. The table containing the partition to be redefined is defined as follows:
CREATE TABLE salestable
(s_productid NUMBER,
s_saledate DATE,
s_custid NUMBER,
s_totalprice NUMBER)
TABLESPACE users
PARTITION BY RANGE(s_saledate)
(PARTITION sal03q1 VALUES LESS THAN (TO_DATE('01-APR-2003', 'DD-MON-YYYY')),
PARTITION sal03q2 VALUES LESS THAN (TO_DATE('01-JUL-2003', 'DD-MON-YYYY')),
PARTITION sal03q3 VALUES LESS THAN (TO_DATE('01-OCT-2003', 'DD-MON-YYYY')),
PARTITION sal03q4 VALUES LESS THAN (TO_DATE('01-JAN-2004', 'DD-MON-YYYY')));
The table has a local partitioned index that is defined as follows:
CREATE INDEX sales_index ON salestable
(s_saledate, s_productid, s_custid) LOCAL;
Here are the steps. In the following procedure calls, note the extra argument: partition name (part_name).
1.Ensure that salestable is a candidate for redefinition.
BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE(
uname => 'STEVE',
tname => 'SALESTABLE',
options_flag => DBMS_REDEFINITION.CONS_USE_ROWID,
part_name => 'sal03q1');
END;
/
2.Create the interim table in the TBS_LOW_FREQ tablespace. Because this is a redefinition of a range partition, the interim table is non-partitioned.
CREATE TABLE int_salestable
(s_productid NUMBER,
s_saledate DATE,
s_custid NUMBER,
s_totalprice NUMBER)
TABLESPACE tbs_low_freq;
3.Start the redefinition process using rowid.
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE(
uname => 'STEVE',
orig_table => 'salestable',
int_table => 'int_salestable',
col_mapping => NULL,
options_flag => DBMS_REDEFINITION.CONS_USE_ROWID,
part_name => 'sal03q1');
END;
/
4.Manually create any local indexes on the interim table.
CREATE INDEX int_sales_index ON int_salestable
(s_saledate, s_productid, s_custid)
TABLESPACE tbs_low_freq;
5.Optionally synchronize the interim table.
BEGIN
DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
uname => 'STEVE',
orig_table => 'salestable',
int_table => 'int_salestable',
part_name => 'sal03q1');
END;
/
6.Complete the redefinition.
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE(
uname => 'STEVE',
orig_table => 'salestable',
int_table => 'int_salestable',
part_name => 'sal03q1');
END;
/
7.Drop the interim table.
The following query shows that the oldest partition has been moved to the new tablespace:
select partition_name, tablespace_name from user_tab_partitions
where table_name = 'SALESTABLE';
PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------
SAL03Q1 TBS_LOW_FREQ
SAL03Q2 USERS
SAL03Q3 USERS
SAL03Q4 USERS
4 rows selected.
Privileges Required for the DBMS_REDEFINITION Package
Execute privileges on the DBMS_REDEFINITION package are granted to EXECUTE_CATALOG_ROLE. In addition to having execute privileges on this package, you must be granted the following privileges:
•CREATE ANY TABLE
•ALTER ANY TABLE
•DROP ANY TABLE
•LOCK ANY TABLE
•SELECT ANY TABLE
The following additional privileges are required to execute COPY_TABLE_DEPENDENTS:
•CREATE ANY TRIGGER
•CREATE ANY INDEX
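A hedged sketch of granting these privileges to a hypothetical user appdba (the user name is an assumption for illustration):
GRANT EXECUTE_CATALOG_ROLE TO appdba;
GRANT CREATE ANY TABLE, ALTER ANY TABLE, DROP ANY TABLE,
      LOCK ANY TABLE, SELECT ANY TABLE TO appdba;
-- Additionally required for COPY_TABLE_DEPENDENTS:
GRANT CREATE ANY TRIGGER, CREATE ANY INDEX TO appdba;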
77 DBMS_REDEFINITION
The DBMS_REDEFINITION package provides an interface to perform an online redefinition of tables.
See Also:
Oracle Database Administrator's Guide for more information about online redefinition of tables
This chapter contains the following topics:
•Using DBMS_REDEFINITION
◦Overview
◦Constants
◦Operational Notes
•Summary of DBMS_REDEFINITION Subprograms
--------------------------------------------------------------------------------
Using DBMS_REDEFINITION
•Overview
•Constants
•Operational Notes
--------------------------------------------------------------------------------
Overview
To achieve online redefinition, incrementally maintainable local materialized views are used. The materialized view logs created for the redefinition keep track of the changes to the master tables and are used by the materialized views during refresh synchronization.
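For instance, while a redefinition is in progress (between START_REDEF_TABLE and FINISH_REDEF_TABLE), the materialized view log created on the original table can be seen in the data dictionary. A hedged example, assuming the hr.admin_emp redefinition shown earlier is still in progress:
select log_owner, master, log_table from dba_mview_logs
where master = 'ADMIN_EMP';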
--------------------------------------------------------------------------------
Constants
The DBMS_REDEFINITION package uses the constants shown in Table 77-1, "DBMS_REDEFINITION Constants":
Table 77-1 DBMS_REDEFINITION Constants
Constant Type Value Description
CONS_CONSTRAINT PLS_INTEGER 3 Used to specify that the dependent object type is a constraint
CONS_INDEX PLS_INTEGER 2 Used to specify that the dependent object type is an index
CONS_ORIG_PARAMS PLS_INTEGER 1 Used to specify that indexes should be cloned with their original storage parameters
CONS_TRIGGER PLS_INTEGER 4 Used to specify that the dependent object type is a trigger
CONS_USE_PK BINARY_INTEGER 1 Used to indicate that the redefinition should be done using primary keys or pseudo-primary keys (unique keys with all component columns having NOT NULL constraints)
CONS_USE_ROWID BINARY_INTEGER 2 Used to indicate that the redefinition should be done using rowids
--------------------------------------------------------------------------------
Operational Notes
•CONS_USE_PK and CONS_USE_ROWID are constants used as input to the "options_flag" parameter in both the START_REDEF_TABLE Procedure and CAN_REDEF_TABLE Procedure. CONS_USE_ROWID is used to indicate that the redefinition should be done using rowids while CONS_USE_PK implies that the redefinition should be done using primary keys or pseudo-primary keys (which are unique keys with all component columns having NOT NULL constraints).
•CONS_INDEX, CONS_TRIGGER and CONS_CONSTRAINT are used to specify the type of the dependent object being (un)registered in REGISTER_DEPENDENT_OBJECT Procedure and UNREGISTER_DEPENDENT_OBJECT Procedure (parameter "dep_type").
CONS_INDEX ==> dependent object is of type INDEX
CONS_TRIGGER ==> dependent object is of type TRIGGER
CONS_CONSTRAINT ==> dependent object is of type CONSTRAINT
•CONS_ORIG_PARAMS is used as input to the "copy_indexes" parameter in the COPY_TABLE_DEPENDENTS Procedure. Using this constant means that the indexes on the original table are copied onto the interim table with the same storage parameters as the original indexes.
--------------------------------------------------------------------------------
Rules and Limits
For information about various rules and limits that apply to implementation of this package, see the Oracle Database Administrator's Guide.
--------------------------------------------------------------------------------
Summary of DBMS_REDEFINITION Subprograms
Table 77-2 DBMS_REDEFINITION Package Subprograms
Subprogram Description
ABORT_REDEF_TABLE Procedure
Cleans up errors that occur during the redefinition process and removes all temporary objects created by the reorganization process
CAN_REDEF_TABLE Procedure
Determines if a given table can be redefined online
COPY_TABLE_DEPENDENTS Procedure
Copies the dependent objects of the original table onto the interim table
FINISH_REDEF_TABLE Procedure
Completes the redefinition process.
REGISTER_DEPENDENT_OBJECT Procedure
Registers a dependent object (index, trigger or constraint) on the table being redefined and the corresponding dependent object on the interim table
START_REDEF_TABLE Procedure
Initiates the redefinition process
SYNC_INTERIM_TABLE Procedure
Keeps the interim table synchronized with the original table
UNREGISTER_DEPENDENT_OBJECT Procedure
Unregisters a dependent object (index, trigger or constraint) on the table being redefined and the corresponding dependent object on the interim table
--------------------------------------------------------------------------------
ABORT_REDEF_TABLE Procedure
This procedure cleans up errors that occur during the redefinition process. It can also be used to terminate the redefinition process any time after the START_REDEF_TABLE Procedure has been called and before the FINISH_REDEF_TABLE Procedure is called. It removes the temporary objects that are created by the redefinition process, such as materialized view logs.
Syntax
DBMS_REDEFINITION.ABORT_REDEF_TABLE (
uname IN VARCHAR2,
orig_table IN VARCHAR2,
int_table IN VARCHAR2,
part_name IN VARCHAR2 := NULL);
Parameters
Table 77-3 ABORT_REDEF_TABLE Procedure Parameters
Parameter Description
uname
The schema name of the tables.
orig_table
The name of the table to be redefined.
int_table
The name of the interim table.
part_name
The name of the partition being redefined. If redefining only a single partition of a table, specify the partition name in this parameter. NULL implies the entire table is being redefined.
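For example, if the hr.admin_emp redefinition from the earlier example had to be abandoned before FINISH_REDEF_TABLE, a call such as the following cleans up the temporary objects:
BEGIN
  DBMS_REDEFINITION.ABORT_REDEF_TABLE('hr', 'admin_emp', 'int_admin_emp');
END;
/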
--------------------------------------------------------------------------------
CAN_REDEF_TABLE Procedure
This procedure determines if a given table can be redefined online. This is the first step of the online redefinition process. If the table is not a candidate for online redefinition, an error message is raised.
Syntax
DBMS_REDEFINITION.CAN_REDEF_TABLE (
uname IN VARCHAR2,
tname IN VARCHAR2,
options_flag IN PLS_INTEGER := 1,
part_name IN VARCHAR2 := NULL);
Parameters
Table 77-4 CAN_REDEF_TABLE Procedure Parameters
Parameter Description
uname
The schema name of the table
tname
The name of the table to be re-organized
options_flag
Indicates the type of redefinition method to use.
•If dbms_redefinition.cons_use_pk, the redefinition is done using primary keys or pseudo-primary keys (unique keys with all component columns having NOT NULL constraints). The default method of redefinition is using primary keys.
•If dbms_redefinition.cons_use_rowid, the redefinition is done using rowids.
part_name
The name of the partition being redefined. If redefining only a single partition of a table, specify the partition name in this parameter. NULL implies the entire table is being redefined.
Exceptions
If the table is not a candidate for online redefinition, an error message is raised.
--------------------------------------------------------------------------------
COPY_TABLE_DEPENDENTS Procedure
This procedure clones the dependent objects of the table being redefined onto the interim table and registers the dependent objects. This procedure does not clone the already registered dependent objects.
This subprogram is used to clone the dependent objects like grants, triggers, constraints and privileges from the table being redefined to the interim table (which represents the post-redefinition table).
Syntax
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
uname IN VARCHAR2,
orig_table IN VARCHAR2,
int_table IN VARCHAR2,
copy_indexes IN PLS_INTEGER := 1,
copy_triggers IN BOOLEAN := TRUE,
copy_constraints IN BOOLEAN := TRUE,
copy_privileges IN BOOLEAN := TRUE,
ignore_errors IN BOOLEAN := FALSE,
num_errors OUT PLS_INTEGER,
copy_statistics IN BOOLEAN := FALSE);
Parameters
Table 77-5 COPY_TABLE_DEPENDENTS Procedure Parameters
Parameter Description
uname
The schema name of the tables.
orig_table
The name of the table being redefined.
int_table
The name of the interim table.
copy_indexes
A flag indicating whether to copy the indexes
•0 - do not copy any index
•dbms_redefinition.cons_orig_params – copy the indexes using the physical parameters of the source indexes
copy_triggers
TRUE = clone triggers, FALSE = do nothing
copy_constraints
TRUE = clone constraints, FALSE = do nothing. If the compatibility setting is 10.2 or higher, CHECK and NOT NULL constraints are also cloned.
copy_privileges
TRUE = clone privileges, FALSE = do nothing
ignore_errors
TRUE = if an error occurs while cloning a particular dependent object, skip that object and continue cloning the other dependent objects. FALSE = stop the cloning process upon encountering an error.
num_errors
The number of errors that occurred while cloning dependent objects
copy_statistics
TRUE = copy statistics, FALSE = do nothing
Usage Notes
•The user must check the num_errors output parameter before proceeding to ensure that no errors occurred during the cloning of the objects (see the sketch after these notes).
•In case of an error, the user should fix the cause of the error and call the COPY_TABLE_DEPENDENTS Procedure again to clone the dependent object. Alternatively the user can manually clone the dependent object and then register the manually cloned dependent object using the REGISTER_DEPENDENT_OBJECT Procedure.
•All cloned referential constraints involving the interim tables will be created disabled (they will be automatically enabled after the redefinition), and all triggers on interim tables will not fire until the redefinition is completed. After the redefinition is complete, the cloned objects will be renamed to the corresponding pre-redefinition names of the objects from which they were cloned.
•It is the user's responsibility to ensure that the cloned dependent objects, including all cloned triggers, are unaffected by the redefinition.
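A minimal sketch of the first usage note, reporting the num_errors output after the call (table names taken from the hr.admin_emp example above):
SET SERVEROUTPUT ON
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    'hr', 'admin_emp', 'int_admin_emp',
    DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors);
  -- Check the OUT parameter before proceeding with the redefinition.
  DBMS_OUTPUT.PUT_LINE('Errors while cloning dependents: ' || num_errors);
END;
/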
--------------------------------------------------------------------------------
FINISH_REDEF_TABLE Procedure
This procedure completes the redefinition process. Before this step, you can create new indexes, triggers, grants, and constraints on the interim table. The referential constraints involving the interim table must be disabled. After completing this step, the original table is redefined with the attributes and data of the interim table. The original table is locked briefly during this procedure.
Syntax
DBMS_REDEFINITION.FINISH_REDEF_TABLE (
uname IN VARCHAR2,
orig_table IN VARCHAR2,
int_table IN VARCHAR2,
part_name IN VARCHAR2 := NULL);
Parameters
Table 77-6 FINISH_REDEF_TABLE Procedure Parameters
Parameters Description
uname
The schema name of the tables.
orig_table
The name of the table to be redefined.
int_table
The name of the interim table.
part_name
The name of the partition being redefined. If redefining only a single partition of a table, specify the partition name in this parameter. NULL implies the entire table is being redefined.
--------------------------------------------------------------------------------
REGISTER_DEPENDENT_OBJECT Procedure
This procedure registers a dependent object (index, trigger or constraint) on the table being redefined and the corresponding dependent object on the interim table.
This can be used to have the same object on each table but with different attributes. For example, for an index, the storage and tablespace attributes could be different, but the indexed columns remain the same.
Syntax
DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT(
uname IN VARCHAR2,
orig_table IN VARCHAR2,
int_table IN VARCHAR2,
dep_type IN PLS_INTEGER,
dep_owner IN VARCHAR2,
dep_orig_name IN VARCHAR2,
dep_int_name IN VARCHAR2);
Parameters
Table 77-7 REGISTER_DEPENDENT_OBJECT Procedure Parameters
Parameters Description
uname
The schema name of the tables.
orig_table
The name of the table to be redefined.
int_table
The name of the interim table.
dep_type
The type of the dependent object.
dep_owner
The owner of the dependent object.
dep_orig_name
The name of the original dependent object.
dep_int_name
The name of the interim dependent object.
Usage Notes
•Attempting to register an already registered object will raise an error.
•Registering a dependent object will automatically remove that object from DBA_REDEFINITION_ERRORS if an entry exists for that object.
--------------------------------------------------------------------------------
START_REDEF_TABLE Procedure
Prior to calling this procedure, you must manually create an empty interim table (in the same schema as the table to be redefined) with the desired attributes of the post-redefinition table, and then call this procedure to initiate the redefinition.
Syntax
DBMS_REDEFINITION.START_REDEF_TABLE (
uname IN VARCHAR2,
orig_table IN VARCHAR2,
int_table IN VARCHAR2,
col_mapping IN VARCHAR2 := NULL,
options_flag IN BINARY_INTEGER := 1,
orderby_cols IN VARCHAR2 := NULL,
part_name IN VARCHAR2 := NULL);
Parameters
Table 77-8 START_REDEF_TABLE Procedure Parameters
Parameter Description
uname
The schema name of the tables.
orig_table
The name of the table to be redefined.
int_table
The name of the interim table.
col_mapping
The mapping information from the columns in the original table to the columns in the interim table. (This is similar to the column list on the SELECT clause of a query.) If NULL, all the columns in the original table are selected and have the same name after redefinition.
options_flag
Indicates the type of redefinition method to use.
•If dbms_redefinition.cons_use_pk, the redefinition is done using primary keys or pseudo-primary keys (unique keys with all component columns having NOT NULL constraints). The default method of redefinition is using primary keys.
•If dbms_redefinition.cons_use_rowid, the redefinition is done using rowids.
orderby_cols
This optional parameter accepts a list of columns (each optionally followed by the keyword ascending or descending) by which to order the rows during the initial instantiation of the interim table. The ordering is applied only to the initial instantiation, not to subsequent synchronizations.
part_name
The name of the partition being redefined. If redefining only a single partition of a table, specify the partition name in this parameter. NULL implies the entire table is being redefined.
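A hedged example of the orderby_cols parameter, reusing the hr.admin_emp call from the first example; the only addition is ordering the initial copy of rows by empno:
BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname        => 'hr',
    orig_table   => 'admin_emp',
    int_table    => 'int_admin_emp',
    col_mapping  => 'empno empno, ename ename, job job, deptno+10 deptno, 0 bonus',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK,
    orderby_cols => 'empno ascending');
END;
/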
--------------------------------------------------------------------------------
SYNC_INTERIM_TABLE Procedure
This procedure keeps the interim table synchronized with the original table.
Syntax
DBMS_REDEFINITION.SYNC_INTERIM_TABLE (
uname IN VARCHAR2,
orig_table IN VARCHAR2,
int_table IN VARCHAR2,
part_name IN VARCHAR2 := NULL);
Parameters
Table 77-9 SYNC_INTERIM_TABLE Procedure Parameters
Parameter Description
uname
The schema name of the table.
orig_table
The name of the table to be redefined.
int_table
The name of the interim table.
part_name
The name of the partition being redefined. If redefining only a single partition of a table, specify the partition name in this parameter. NULL implies the entire table is being redefined.
Usage Notes
•This step is useful in minimizing the amount of synchronization that the FINISH_REDEF_TABLE Procedure must perform before completing the online redefinition.
•This procedure can be called between long running operations (such as CREATE INDEX) on the interim table to sync it up with the data in the original table and speed up subsequent operations.
--------------------------------------------------------------------------------
UNREGISTER_DEPENDENT_OBJECT Procedure
This procedure unregisters a dependent object (index, trigger or constraint) on the table being redefined and the corresponding dependent object on the interim table.
Syntax
DBMS_REDEFINITION.UNREGISTER_DEPENDENT_OBJECT(
uname IN VARCHAR2,
orig_table IN VARCHAR2,
int_table IN VARCHAR2,
dep_type IN PLS_INTEGER,
dep_owner IN VARCHAR2,
dep_orig_name IN VARCHAR2,
dep_int_name IN VARCHAR2);
Parameters
Table 77-10 UNREGISTER_DEPENDENT_OBJECT Procedure Parameters
Parameters Description
uname
The schema name of the tables.
orig_table
The name of the table to be redefined.
int_table
The name of the interim table.
dep_type
The type of the dependent object.
dep_owner
The owner of the dependent object.
dep_orig_name
The name of the original dependent object.
dep_int_name
The name of the interim dependent object.
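A hedged sketch of unregistering the index pair that Example 3 registered (same schema and object names as in that example):
BEGIN
  DBMS_REDEFINITION.UNREGISTER_DEPENDENT_OBJECT(
    uname         => 'STEVE',
    orig_table    => 'T1',
    int_table     => 'INT_T1',
    dep_type      => DBMS_REDEFINITION.CONS_INDEX,
    dep_owner     => 'STEVE',
    dep_orig_name => 'Index1',
    dep_int_name  => 'Int_Index1');
END;
/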
Wednesday, April 14, 2010
What Are the Basic Steps in Using the Data Pump API?
DATAPUMP FULL
expdp dbo/dbo@POWER full=y directory=DB_EXPORTS dumpfile=data.dmp logfile=log.log
DATAPUMP SCHEMA
expdp dbo/dbo@POWER schemas=DBO directory=DB_EXPORTS dumpfile=data.dmp logfile=log.log
DATAPUMP TABLE
expdp dbo/dbo@POWER tables=SCOTT.DEPT directory=DB_EXPORTS dumpfile=data.dmp logfile=log.log
expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR parallel=4 dumpfile=SCOTT_%U.dmp logfile=expdpSCOTT.log
The DBA_DATAPUMP_JOBS view can be used to monitor the current jobs.
system@db10g> select * from dba_datapump_jobs;
OWNER_NAME JOB_NAME OPERATION
------------------------------ ------------------------------ ------------------------------
JOB_MODE STATE DEGREE ATTACHED_SESSIONS
------------------------------ ------------------------------ ---------- -----------------
SYSTEM SYS_EXPORT_FULL_01 EXPORT
FULL EXECUTING 1 1
The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export. When the EXCLUDE parameter is used, all objects except those specified by it will be included in the export.
expdp scott/tiger@db10g schemas=SCOTT include=TABLE:"IN ('EMP', 'DEPT')" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
expdp scott/tiger@db10g schemas=SCOTT exclude=TABLE:"= 'BONUS'" directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log
Data Pump API
Along with the Data Pump utilities, Oracle provides a PL/SQL API. The following is an example of how this API can be used to perform a schema export.
SET SERVEROUTPUT ON SIZE 1000000
DECLARE
l_dp_handle NUMBER;
l_last_job_state VARCHAR2(30) := 'UNDEFINED';
l_job_state VARCHAR2(30) := 'UNDEFINED';
l_sts KU$_STATUS;
BEGIN
l_dp_handle := DBMS_DATAPUMP.open(
operation => 'EXPORT',
job_mode => 'SCHEMA',
remote_link => NULL,
job_name => 'EMP_EXPORT',
version => 'LATEST');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'SCOTT.dmp',
directory => 'TEST_DIR');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'SCOTT.log',
directory => 'TEST_DIR',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
DBMS_DATAPUMP.metadata_filter(
handle => l_dp_handle,
name => 'SCHEMA_EXPR',
value => '= ''SCOTT''');
DBMS_DATAPUMP.start_job(l_dp_handle);
DBMS_DATAPUMP.detach(l_dp_handle);
END;
/
Once the job has started, its status can be checked as follows:
system@db10g> select * from dba_datapump_jobs;
What Are the Basic Steps in Using the Data Pump API?
To use the Data Pump API, you use the procedures provided in the DBMS_DATAPUMP package. The following steps list the basic activities involved in using the Data Pump API. The steps are presented in the order in which the activities would generally be performed:
1.Execute the DBMS_DATAPUMP.OPEN procedure to create a Data Pump job and its infrastructure.
2.Define any parameters for the job.
3.Start the job.
4.Optionally, monitor the job until it completes.
5.Optionally, detach from the job and reattach at a later time.
6.Optionally, stop the job.
7.Optionally, restart the job, if desired.
These concepts are illustrated in the examples provided in the next section.
See Also:
Oracle Database PL/SQL Packages and Types Reference for a complete description of the DBMS_DATAPUMP package
Examples of Using the Data Pump API
This section provides the following examples to help you get started using the Data Pump API:
•Example 5-1, "Performing a Simple Schema Export"
•Example 5-2, "Importing a Dump File and Remapping All Schema Objects"
•Example 5-3, "Using Exception Handling During a Simple Schema Export"
The examples are in the form of PL/SQL scripts. If you choose to copy these scripts and run them, you must first do the following, using SQL*Plus:
•Create a directory object and grant READ and WRITE access to it. For example, to create a directory object named dmpdir to which you have access, do the following. Replace user with your username.
SQL> CREATE DIRECTORY dmpdir AS '/rdbms/work';
SQL> GRANT READ, WRITE ON DIRECTORY dmpdir TO user;
•Ensure that you have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. To see a list of all roles assigned to you within your security domain, do the following:
SQL> SELECT * FROM SESSION_ROLES;
If you do not have the necessary roles assigned to you, contact your system administrator for help.
•Turn on server output if it is not already on. This is done as follows:
SQL> SET SERVEROUTPUT ON
If you do not do this, you will not see any output to your screen. You must do this in the same session in which you invoke the example. Also, if you exit SQL*Plus, this setting is lost and must be reset when you begin a new session.
Example 5-1 Performing a Simple Schema Export
The PL/SQL script provided in this example shows how to use the Data Pump API to perform a simple schema export of the HR schema. It shows how to create a job, start it, and monitor it. Additional information about the example is contained in the comments within the script. To keep the example simple, exceptions from any of the API calls will not be trapped. However, in a production environment, Oracle recommends that you define exception handlers and call GET_STATUS to retrieve more detailed error information when a failure occurs.
CONNECT SYSTEM/password
DECLARE
ind NUMBER; -- Loop index
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
BEGIN
-- Create a (user-named) Data Pump job to do a schema export.
h1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'EXAMPLE1','LATEST');
-- Specify a single dump file for the job (using the handle just returned)
-- and a directory object, which must already be defined and accessible
-- to the user running this procedure.
DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DMPDIR');
-- A metadata filter is used to specify the schema that will be exported.
DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR','IN (''HR'')');
-- Start the job. An exception will be generated if something is not set up
-- properly.
DBMS_DATAPUMP.START_JOB(h1);
-- The export job should now be running. In the following loop, the job
-- is monitored until it completes. In the meantime, progress information is
-- displayed.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
END;
/
Example 5-2 Importing a Dump File and Remapping All Schema Objects
This example imports the dump file created in Example 5-1 (an export of the hr schema). All schema objects are remapped from the hr schema to the blake schema. To keep the example simple, exceptions from any of the API calls will not be trapped. However, in a production environment, Oracle recommends that you define exception handlers and call GET_STATUS to retrieve more detailed error information when a failure occurs.
CONNECT SYSTEM/password
DECLARE
ind NUMBER; -- Loop index
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
BEGIN
-- Create a (user-named) Data Pump job to do a "full" import (everything
-- in the dump file without filtering).
h1 := DBMS_DATAPUMP.OPEN('IMPORT','FULL',NULL,'EXAMPLE2');
-- Specify the single dump file for the job (using the handle just returned)
-- and directory object, which must already be defined and accessible
-- to the user running this procedure. This is the dump file created by
-- the export operation in the first example.
DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DMPDIR');
-- A metadata remap will map all schema objects from HR to BLAKE.
DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','HR','BLAKE');
-- If a table already exists in the destination schema, skip it (leave
-- the preexisting table alone). This is the default, but it does not hurt
-- to specify it explicitly.
DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');
-- Start the job. An exception is returned if something is not set up properly.
DBMS_DATAPUMP.START_JOB(h1);
-- The import job should now be running. In the following loop, the job is
-- monitored until it completes. In the meantime, progress information is
-- displayed. Note: this is identical to the export example.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or Error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
END;
/
Example 5-3 Using Exception Handling During a Simple Schema Export
This example shows a simple schema export using the Data Pump API. It extends Example 5-1 to show how to use exception handling to catch the SUCCESS_WITH_INFO case, and how to use the GET_STATUS procedure to retrieve additional information about errors. If you want to get status up to the current point, but a handle has not yet been obtained, you can use NULL for DBMS_DATAPUMP.GET_STATUS.
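For example, a minimal sketch of such a call with a NULL handle (the error-only mask and the zero timeout are chosen here purely for illustration):
DECLARE
job_state VARCHAR2(30);
sts ku$_Status;
BEGIN
dbms_datapump.get_status(NULL,
dbms_datapump.ku$_status_job_error, 0,
job_state, sts);
END;
/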
CONNECT SYSTEM/password
DECLARE
ind NUMBER; -- Loop index
spos NUMBER; -- String starting position
slen NUMBER; -- String length for output
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
BEGIN
-- Create a (user-named) Data Pump job to do a schema export.
h1 := dbms_datapump.open('EXPORT','SCHEMA',NULL,'EXAMPLE3','LATEST');
-- Specify a single dump file for the job (using the handle just returned)
-- and a directory object, which must already be defined and accessible
-- to the user running this procedure.
dbms_datapump.add_file(h1,'example3.dmp','DMPDIR');
-- A metadata filter is used to specify the schema that will be exported.
dbms_datapump.metadata_filter(h1,'SCHEMA_EXPR','IN (''HR'')');
-- Start the job. An exception will be returned if something is not set up
-- properly. One possible exception that will be handled differently is the
-- success_with_info exception. success_with_info means the job started
-- successfully, but more information is available through get_status about
-- conditions around the start_job that the user might want to be aware of.
begin
dbms_datapump.start_job(h1);
dbms_output.put_line('Data Pump job started successfully');
exception
when others then
if sqlcode = dbms_datapump.success_with_info_num
then
dbms_output.put_line('Data Pump job started with info available:');
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error,0,
job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end if;
else
raise;
end if;
end;
-- The export job should now be running. In the following loop, we will monitor
-- the job until it completes. In the meantime, progress information is
-- displayed.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- Display any work-in-progress (WIP) or error messages that were received for
-- the job.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
-- Any exceptions that propagated to this point will be captured. The
-- details will be retrieved from get_status and displayed.
exception
when others then
dbms_output.put_line('Exception in Data Pump job');
dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
spos := 1;
slen := length(le(ind).LogText);
if slen > 255
then
slen := 255;
end if;
while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText,spos,slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;
ind := le.NEXT(ind);
end loop;
end if;
end if;
END;
/
Examples of Using Data Pump Export
Example 2-1 Performing a Table-Mode Export
expdp hr/hr TABLES=employees,jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=y
Because user hr is exporting tables in his own schema, it is not necessary to specify the schema name for the tables. The NOLOGFILE=y parameter indicates that an Export log file of the operation will not be generated.
Data-Only Unload of Selected Tables and Rows
Example 2-2 shows the contents of a parameter file (exp.par) that you could use to perform a data-only unload of all tables in the human resources (hr) schema except for the tables countries and regions. Rows in the employees table are unloaded that have a department_id other than 50. The rows are ordered by employee_id.
Example 2-2 Data-Only Unload of Selected Tables and Rows
DIRECTORY=dpump_dir1
DUMPFILE=dataonly.dmp
CONTENT=DATA_ONLY
EXCLUDE=TABLE:"IN ('COUNTRIES', 'REGIONS')"
QUERY=employees:"WHERE department_id !=50 ORDER BY employee_id"
You can issue the following command to execute the exp.par parameter file:
> expdp hr/hr PARFILE=exp.par
Estimating Disk Space Needed in a Table-Mode Export
Example 2-3 shows the use of the ESTIMATE_ONLY parameter to estimate the space that would be consumed in a table-mode export, without actually performing the export operation. Issue the following command to use the BLOCKS method to estimate the number of bytes required to export the data in the following three tables located in the human resource (hr) schema: employees, departments, and locations.
Example 2-3 Estimating Disk Space Needed in a Table-Mode Export
> expdp hr/hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=y TABLES=employees,
departments, locations LOGFILE=estimate.log
The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.
Performing a Schema-Mode Export
Example 2-4 shows a schema-mode export of the hr schema. In a schema-mode export, only objects belonging to the corresponding schemas are unloaded. Because schema mode is the default mode, it is not necessary to specify the SCHEMAS parameter on the command line, unless you are specifying more than one schema or a schema other than your own.
Example 2-4 Performing a Schema Mode Export
> expdp hr/hr DUMPFILE=dpump_dir1:expschema.dmp LOGFILE=dpump_dir1:expschema.log
Performing a Parallel Full Database Export
Example 2-5 shows a full database Export that will have 3 parallel worker processes.
Example 2-5 Parallel Full Export
> expdp hr/hr FULL=y DUMPFILE=dpump_dir1:full1%U.dmp, dpump_dir2:full2%U.dmp
FILESIZE=2G PARALLEL=3 LOGFILE=dpump_dir1:expfull.log JOB_NAME=expfull
Because this is a full database export, all data and metadata in the database will be exported. Dump files full101.dmp, full201.dmp, full102.dmp, and so on will be created in a round-robin fashion in the directories pointed to by the dpump_dir1 and dpump_dir2 directory objects. For best performance, these should be on separate I/O channels. Each file will be up to 2 gigabytes in size, as necessary. Initially, up to three files will be created. More files will be created, if needed. The job and master table will have a name of expfull. The log file will be written to expfull.log in the dpump_dir1 directory.
Friday, April 9, 2010
AFTER SUSPEND Trigger - DBA_RESUMABLE VIEW
DBA_RESUMABLE VIEW
SQL> show parameter resuma;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
resumable_timeout integer 0
SQL>
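Resumable space allocation must be enabled before the trigger below can fire, either instance-wide through the resumable_timeout parameter or per session; suspended statements are then visible in DBA_RESUMABLE. A minimal sketch (the 7200-second timeout and the statement name are arbitrary examples):
SQL> ALTER SYSTEM SET resumable_timeout = 7200;
SQL> ALTER SESSION ENABLE RESUMABLE TIMEOUT 7200 NAME 'nightly_load';
SQL> SELECT session_id, status, name, error_msg FROM dba_resumable;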
CREATE OR REPLACE TRIGGER resumable_default
AFTER SUSPEND
ON DATABASE
DECLARE
/* declare transaction in this trigger is autonomous */
/* this is not required because transactions within a trigger
are always autonomous */
PRAGMA AUTONOMOUS_TRANSACTION;
cur_sid NUMBER;
cur_inst NUMBER;
errno NUMBER;
err_type VARCHAR2(64);
object_owner VARCHAR2(64);
object_type VARCHAR2(64);
table_space_name VARCHAR2(64);
object_name VARCHAR2(128);
sub_object_name VARCHAR2(128);
error_txt VARCHAR2(4000);
msg_body VARCHAR2(4000);
ret_value BOOLEAN;
mail_conn UTL_SMTP.CONNECTION;
BEGIN
-- Get session ID
SELECT DISTINCT(SID) INTO cur_SID FROM V$MYSTAT;
-- Get instance number
cur_inst := userenv('instance');
-- Get space error information
ret_value :=
DBMS_RESUMABLE.SPACE_ERROR_INFO(err_type,object_type,object_owner,
table_space_name,object_name, sub_object_name);
/*
-- If the error is related to undo segments, log error, send email
-- to DBA, and abort the statement. Otherwise, set timeout to 8 hours.
--
-- sys.rbs_error is a table which is to be
-- created by a DBA manually and defined as
-- (sql_text VARCHAR2(1000), error_msg VARCHAR2(4000),
-- suspend_time DATE)
*/
IF OBJECT_TYPE = 'UNDO SEGMENT' THEN
/* LOG ERROR */
INSERT INTO sys.rbs_error (
SELECT SQL_TEXT, ERROR_MSG, SUSPEND_TIME
FROM DBA_RESUMABLE
WHERE SESSION_ID = cur_sid AND INSTANCE_ID = cur_inst
);
SELECT ERROR_MSG INTO error_txt FROM DBA_RESUMABLE
WHERE SESSION_ID = cur_sid and INSTANCE_ID = cur_inst;
-- Send email to recipient via UTL_SMTP package
msg_body:='Subject: Space Error Occurred
Space limit reached for undo segment ' || object_name ||
' on ' || TO_CHAR(SYSDATE, 'Month dd, YYYY, HH:MIam') ||
'. Error message was ' || error_txt;
mail_conn := UTL_SMTP.OPEN_CONNECTION('localhost', 25);
UTL_SMTP.HELO(mail_conn, 'localhost');
UTL_SMTP.MAIL(mail_conn, 'sender@localhost');
UTL_SMTP.RCPT(mail_conn, 'recipient@localhost');
UTL_SMTP.DATA(mail_conn, msg_body);
UTL_SMTP.QUIT(mail_conn);
-- Abort the statement
DBMS_RESUMABLE.ABORT(cur_sid);
ELSE
-- Set timeout to 8 hours
DBMS_RESUMABLE.SET_TIMEOUT(28800);
END IF;
/* commit autonomous transaction */
COMMIT;
END;
/
Thursday, April 8, 2010
Running the DBVERIFY Utility
The DBVERIFY program is an external command-line utility that performs a physical data structure integrity check on an offline datafile. Use DBVERIFY primarily when you need to ensure that a user-managed backup of a datafile is valid before it is restored or as a diagnostic aid when you have encountered data corruption problems.
The name and location of DBVERIFY is dependent on your operating system. For example, to perform an integrity check on datafile tbs_52.f on UNIX, you can run the dbv command as follows:
% dbv file=tbs_52.f
Sample dbv output follows:
DBVERIFY - Verification starting : FILE = tbs_52.f
DBVERIFY - Verification complete
Total Pages Examined : 250
Total Pages Processed (Data) : 1
Total Pages Failing (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing (Index): 0
Total Pages Processed (Other): 2
Total Pages Processed (Seg) : 0
Total Pages Failing (Seg) : 0
Total Pages Empty : 247
Total Pages Marked Corrupt : 0
Total Pages Influx : 0
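dbv also accepts optional parameters such as BLOCKSIZE, LOGFILE and FEEDBACK; a hedged example run (the block size, log file name and feedback interval shown are illustrative only):
% dbv file=tbs_52.f blocksize=8192 logfile=dbv_tbs_52.log feedback=100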
Making User-Managed Backups of Offline Tablespaces and Datafiles
Note the following guidelines when backing up offline tablespaces:
•You cannot offline the SYSTEM tablespace or a tablespace with active rollback segments. The following procedure cannot be used for such tablespaces.
•Assume that a table is in tablespace Primary and its index is in tablespace Index. Taking tablespace Index offline while leaving tablespace Primary online can cause errors when DML is issued against the indexed tables located in Primary. The problem only manifests when the access method chosen by the optimizer needs to access the indexes in the Index tablespace.
To back up offline tablespaces:
Before beginning a backup of a tablespace, identify the tablespace's datafiles by querying the DBA_DATA_FILES view. For example, assume that you want to back up the users tablespace. Enter the following in SQL*Plus:
SELECT TABLESPACE_NAME, FILE_NAME
FROM SYS.DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'USERS';
TABLESPACE_NAME FILE_NAME
------------------------------- --------------------------------
USERS /oracle/oradata/trgt/users01.dbf
In this example, /oracle/oradata/trgt/users01.dbf is a fully specified filename corresponding to the datafile in the users tablespace.
Take the tablespace offline using normal priority if possible because it guarantees that you can subsequently bring the tablespace online without having to recover it. For example:
SQL> ALTER TABLESPACE users OFFLINE NORMAL;
Back up the offline datafiles. For example:
% cp /oracle/oradata/trgt/users01.dbf /d2/users01_`date "+%m_%d_%y"`.dbf
Bring the tablespace online. For example:
ALTER TABLESPACE users ONLINE;
Note:
If you took the tablespace offline using temporary or immediate priority, then you cannot bring the tablespace online unless you perform tablespace recovery.
Archive the unarchived redo logs so that the redo required to recover the tablespace backup is archived. For example, enter:
ALTER SYSTEM ARCHIVE LOG CURRENT;
Making User-Managed Backups of Online Tablespaces and Datafiles
You can back up all or only specific datafiles of an online tablespace while the database is open. The procedure differs depending on whether the online tablespace is read/write or read-only.
Note:
You should not back up temporary tablespaces.
Making User-Managed Backups of Online Read/Write Tablespaces
You must put a read/write tablespace in backup mode to make user-managed datafile backups when the tablespace is online and the database is open. The ALTER TABLESPACE ... BEGIN BACKUP statement places a tablespace in backup mode. In backup mode, the database copies whole changed data blocks into the redo stream. After you take the tablespace out of backup mode with the ALTER TABLESPACE ... END BACKUP or ALTER DATABASE END BACKUP statement, the database advances the datafile header to the current database checkpoint.
When restoring a datafile backed up in this way, the database asks for the appropriate set of redo log files to apply should recovery be needed. The redo logs contain all changes required to recover the datafiles and make them consistent.
To back up online read/write tablespaces in an open database:
Before beginning a backup of a tablespace, identify all of the datafiles in the tablespace with the DBA_DATA_FILES data dictionary view. For example, assume that you want to back up the users tablespace. Enter the following:
SELECT TABLESPACE_NAME, FILE_NAME
FROM SYS.DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'USERS';
TABLESPACE_NAME FILE_NAME
------------------------------- --------------------
USERS /oracle/oradata/trgt/users01.dbf
USERS /oracle/oradata/trgt/users02.dbf
Mark the beginning of the online tablespace backup. For example, the following statement marks the start of an online backup for the tablespace users:
SQL> ALTER TABLESPACE users BEGIN BACKUP;
Caution:
If you do not use BEGIN BACKUP to mark the beginning of an online tablespace backup, and wait for that statement to complete before starting your copies of online tablespaces, then the datafile copies produced are not usable for subsequent recovery operations. Attempting to recover such a backup is risky and can return errors that result in inconsistent data. For example, the attempted recovery operation can issue a "fuzzy files" warning, and can lead to an inconsistent database that you cannot open.
Back up the online datafiles of the online tablespace with operating system commands. For example, UNIX users might enter:
% cp /oracle/oradata/trgt/users01.dbf /d2/users01_`date "+%m_%d_%y"`.dbf
% cp /oracle/oradata/trgt/users02.dbf /d2/users02_`date "+%m_%d_%y"`.dbf
After backing up the datafiles of the online tablespace, run the SQL statement ALTER TABLESPACE with the END BACKUP option. For example, the following statement ends the online backup of the tablespace users:
SQL> ALTER TABLESPACE users END BACKUP;
Archive the unarchived redo logs so that the redo required to recover the tablespace backup is archived. For example, enter:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Caution:
If you fail to take the tablespace out of backup mode, then Oracle continues to write copies of data blocks in this tablespace to the online logs, causing performance problems. Also, you will receive an ORA-01149 error if you try to shut down the database with the tablespaces still in backup mode.
Making Multiple User-Managed Backups of Online Read/Write Tablespaces
When backing up several online tablespaces, you can back them up either serially or in parallel. Use either of the following procedures depending on your needs.
Backing Up Online Tablespaces in Parallel
You can simultaneously create datafile copies of multiple tablespaces requiring backups in backup mode. Note, however, that by putting all tablespaces in backup mode at once, you can generate large redo logs if there is heavy update activity on the affected tablespaces, because the redo must contain a copy of each changed data block in each changed datafile. Be sure to consider the size of the likely redo before using the procedure outlined here.
To back up online tablespaces in parallel:
Prepare all online tablespaces for backup by issuing all necessary ALTER TABLESPACE statements at once. For example, put tablespaces users, tools, and indx in backup mode as follows:
SQL> ALTER TABLESPACE users BEGIN BACKUP;
SQL> ALTER TABLESPACE tools BEGIN BACKUP;
SQL> ALTER TABLESPACE indx BEGIN BACKUP;
If you are backing up all tablespaces, you might want to use this command:
SQL> ALTER DATABASE BEGIN BACKUP;
Back up all files of the online tablespaces. For example, a UNIX user might back up datafiles with the *.dbf suffix as follows:
% cp $ORACLE_HOME/oradata/trgt/*.dbf /disk2/backup/
Take the tablespaces out of backup mode as in the following example:
SQL> ALTER TABLESPACE users END BACKUP;
SQL> ALTER TABLESPACE tools END BACKUP;
SQL> ALTER TABLESPACE indx END BACKUP;
Again, if you are handling all datafiles at once you can use the ALTER DATABASE command instead of ALTER TABLESPACE:
SQL> ALTER DATABASE END BACKUP;
Archive the online redo logs so that the redo required to recover the tablespace backups will be available for later media recovery. For example, enter:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Backing Up Online Tablespaces Serially
You can place all tablespaces requiring online backups in backup mode one at a time. Oracle recommends the serial backup option because it minimizes the time between ALTER TABLESPACE ... BEGIN/END BACKUP statements. During online backups, more redo information is generated for the tablespace because whole data blocks are copied into the redo log.
To back up online tablespaces serially:
Prepare a tablespace for online backup. For example, to put tablespace users in backup mode enter the following:
SQL> ALTER TABLESPACE users BEGIN BACKUP;
In this case you probably do not want to use ALTER DATABASE BEGIN BACKUP to put all tablespaces in backup mode simultaneously, because of the unnecessary volume of redo log information generated for tablespaces in backup mode.
Back up the datafiles in the tablespace. For example, enter:
% cp /oracle/oradata/trgt/users01.dbf /d2/users01_`date "+%m_%d_%y"`.dbf
Take the tablespace out of backup mode. For example, enter:
SQL> ALTER TABLESPACE users END BACKUP;
Repeat this procedure for each remaining tablespace.
Archive the unarchived redo logs so that the redo required to recover the tablespace backups is archived. For example, enter:
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
Backing Up in NOARCHIVELOG Mode: Example
This script puts the database into the correct mode for a consistent, whole database backup and then backs up the database. The script performs a shutdown, startup, shutdown, and then startup again before creating multiple copies of the backup:
# Shut down database cleanly with immediate option. This type of shutdown lets
# current calls to the database complete, but prevents further logons or calls.
# If the database is not up, you receive a message saying so but RMAN will not
# treat this situation as an error.
SHUTDOWN IMMEDIATE;
# Start up the database in case it suffered instance failure or was
# closed with SHUTDOWN ABORT before starting this script.
# The script performs instance recovery if
# needed. Oracle uses the default init.ora file. Alternatively, use this form:
# STARTUP FORCE DBA pfile=filename.
# Use the DBA option because you are going to shut down again
# and do not want to let users in during the short interval. Use the FORCE
# option because it cannot hurt and might help in certain situations.
STARTUP FORCE DBA;
SHUTDOWN IMMEDIATE;
# The database is cleanly closed and ready for a consistent backup. RMAN
# requires that the database be started and mounted to perform a backup.
STARTUP MOUNT;
# this example uses automatic channels to make the backup
BACKUP
COPIES 2
INCREMENTAL LEVEL 0
MAXSETSIZE 10M
DATABASE
INCLUDE CURRENT CONTROLFILE
TAG 'BACKUP_1'
;
# Now that the backup is complete, open the database.
ALTER DATABASE OPEN;
Note the inclusion of the current control file with the backup, and the use of the tag to identify the backup. To use this backup of the database, the control file must be restored from the same backup as the rest of the database. Adding INCLUDE CURRENT CONTROLFILE ensures that a usable backup of the control file is included with the backup and tagged in order to simplify restoring the control file with the rest of the database.
You can skip tablespaces, such as read-only tablespaces, but any skipped tablespace that has not been offline or read-only since its last backup is lost if the database has to be restored from a backup.
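If you want RMAN itself to skip such tablespaces, the SKIP clause expresses this directly. The command below is only an illustrative sketch:
RMAN> BACKUP DATABASE SKIP READONLY SKIP OFFLINE;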
DETERMINE OPTIMAL SIZE FOR REDO LOGS
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmtunin.htm
You can use the V$INSTANCE_RECOVERY view column OPTIMAL_LOGFILE_SIZE to determine the size of your online redo logs. This field shows the redo log file size in megabytes that is considered optimal based on the current setting of FAST_START_MTTR_TARGET. If this field consistently shows a value greater than the size of your smallest online log, then you should configure all your online logs to be at least this size.
Oracle V$INSTANCE_RECOVERY View:
To use the Redo Logfile Size Advisor, query the V$INSTANCE_RECOVERY view. V$INSTANCE_RECOVERY monitors the mechanisms available to users to limit recovery I/O. These include:
•Setting LOG_CHECKPOINT_TIMEOUT parameter.
•Setting LOG_CHECKPOINT_INTERVAL parameter.
•Setting FAST_START_MTTR_TARGET parameter.
•Setting the size of the smallest redo log.
Note, however, that the redo log file size affects the MTTR. In some cases, you may be able to refine your choice of the optimal FAST_START_MTTR_TARGET value by re-running the MTTR Advisor with your suggested optimal log file size.
SQL> select OPTIMAL_LOGFILE_SIZE from V$INSTANCE_RECOVERY;
OPTIMAL_LOGFILE_SIZE
--------------------
92
SQL> show parameter mttr;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fast_start_mttr_target               integer     180

SQL> SELECT TARGET_MTTR, ESTIMATED_MTTR, CKPT_BLOCK_WRITES FROM V$INSTANCE_RECOVERY;
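If OPTIMAL_LOGFILE_SIZE consistently reports a value larger than your smallest online log, you can recreate the log groups at a larger size. The statements below are only a sketch; the group numbers, file path, and 128M size are assumptions to adapt to your environment:
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/trgt/redo04.log') SIZE 128M;
ALTER SYSTEM SWITCH LOGFILE;
-- drop an undersized group (assumed group number) only once it is INACTIVE and already archived
ALTER DATABASE DROP LOGFILE GROUP 1;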
Tuning RMAN Backup Performance: Procedure
Step 1: Remove RATE Parameters from Configured and Allocated Channels
The RATE parameter on a channel is intended to reduce, rather than increase, backup throughput, so that more disk bandwidth is available for other database operations.
If your backup is not streaming to tape, then make sure that the RATE parameter is not set on the ALLOCATE CHANNEL or CONFIGURE CHANNEL commands.
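To review the current channel settings and remove a RATE limit, something like the following can be used. This is a sketch; note that CLEAR resets every option configured for that channel, not only RATE:
RMAN> SHOW CHANNEL;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK CLEAR;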
Step 2: If You Use Synchronous Disk I/O, Set DBWR_IO_SLAVES
If and only if your disk does not support asynchronous I/O, then try setting the DBWR_IO_SLAVES initialization parameter to a nonzero value. Any nonzero value for DBWR_IO_SLAVES causes a fixed number (four) of disk I/O slaves to be used for backup and restore, which simulates asynchronous I/O. If I/O slaves are used, I/O buffers are obtained from the SGA. The large pool is used, if configured. Otherwise, the shared pool is used.
Note: By setting DBWR_IO_SLAVES, the database writer processes will use slaves as well. You may need to increase the value of the PROCESSES initialization parameter.
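A minimal sketch of these parameter changes follows; the PROCESSES value is an assumption, and both parameters require an instance restart to take effect:
SQL> ALTER SYSTEM SET DBWR_IO_SLAVES = 4 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET PROCESSES = 300 SCOPE=SPFILE;  -- assumed value, size to your workload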
Step 3: If You Fail to Allocate Shared Memory, Set LARGE_POOL_SIZE
Set this initialization parameter if the database reports an error in the alert.log stating that it does not have enough memory and that it will not start I/O slaves. The message should resemble the following:
ksfqxcre: failure to allocate shared memory means sync I/O will be used whenever async I/O to file not supported natively
When attempting to get shared buffers for I/O slaves, the database does the following:
•If LARGE_POOL_SIZE is set, then the database attempts to get memory from the large pool. If this value is not large enough, then an error is recorded in the alert log, the database does not try to get buffers from the shared pool, and asynchronous I/O is not used.
•If LARGE_POOL_SIZE is not set, then the database attempts to get memory from the shared pool.
•If the database cannot get enough memory, then it obtains I/O buffer memory from the PGA and writes a message to the alert.log file indicating that synchronous I/O is used for this backup.
The memory from the large pool is used for many features, including the shared server (formerly called multi-threaded server), parallel query, and RMAN I/O slave buffers. Configuring the large pool prevents RMAN from competing with other subsystems for the same memory.
Requests for contiguous memory allocations from the shared pool are usually small (under 5 KB) in size. However, it is possible that a request for a large contiguous memory allocation can either fail or require significant memory housekeeping to release the required amount of contiguous memory. Although the shared pool may be unable to satisfy this memory request, the large pool is able to do so. The large pool does not have a least recently used (LRU) list; the database does not attempt to age memory out of the large pool.
Use the LARGE_POOL_SIZE initialization parameter to configure the large pool. To see in which pool (shared pool or large pool) the memory for an object resides, query V$SGASTAT.POOL.
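For example, to see what is currently allocated in the large pool (a minimal sketch):
SELECT pool, name, bytes
FROM v$sgastat
WHERE pool = 'large pool'
ORDER BY bytes DESC;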
The formula for setting LARGE_POOL_SIZE is as follows:
LARGE_POOL_SIZE = number_of_allocated_channels *
(16 MB + ( 4 * size_of_tape_buffer ) )
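As an illustration only, with 4 allocated channels and a 256 KB tape buffer (both assumed values), the formula gives 4 * (16 MB + 4 * 256 KB) = 68 MB:
SQL> ALTER SYSTEM SET LARGE_POOL_SIZE = 68M SCOPE=SPFILE;  -- value derived from the assumed inputs above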
Step 4: Tune RMAN Tape Streaming Performance Bottlenecks
There are several tasks you can perform to identify and remedy bottlenecks that affect RMAN's performance on tape backups:
Using BACKUP... VALIDATE To Distinguish Between Tape and Disk Bottlenecks
One reliable way to determine whether the tape streaming or disk I/O is the bottleneck in a given backup job is to compare the time required to run backup tasks with the time required to run BACKUP VALIDATE of the same tasks. BACKUP VALIDATE of a backup to tape performs the same disk reads as a real backup but performs no tape I/O. If the time required for the BACKUP VALIDATE to tape is significantly less than the time required for a real backup to tape, then writing to tape is the likely bottleneck.
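For example, the elapsed times of the following two runs can be compared (an illustrative sketch, written in the same script style as the example above):
# same disk reads as the real backup, but no tape output
BACKUP VALIDATE DATABASE;
# the real backup; if it takes much longer, writing to tape is the likely bottleneck
BACKUP DATABASE;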
Using Multiplexing to Improve Tape Streaming with Disk Bottlenecks
In some situations when performing a backup to tape, RMAN may not be able to send data blocks to the tape drive fast enough to support streaming. For example, during an incremental backup, RMAN only backs up blocks changed since a previous datafile backup as part of the same strategy. If you do not turn on change tracking, RMAN must scan entire datafiles for changed blocks, and fill output buffers as it finds such blocks. If there are not many changed blocks, RMAN may not fill output buffers fast enough to keep the tape drive streaming.
You can improve performance by increasing the degree of multiplexing used for backing up. This increases the rate at which RMAN fills tape buffers, which makes it more likely that buffers are sent to the media manager fast enough to maintain streaming.
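The degree of multiplexing is the lesser of the MAXOPENFILES channel setting and the FILESPERSET backup option. The commands below are a sketch with assumed values:
# allow each channel to read up to 12 input files at once (assumed value)
CONFIGURE CHANNEL DEVICE TYPE sbt MAXOPENFILES 12;
BACKUP DATABASE FILESPERSET 12;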
Using Incremental Backups to Improve Backup Performance With Tape Bottlenecks
If writing to tape is the source of a bottleneck for your backups, consider using incremental backups as part of your backup strategy. Incremental level 1 backups write only the changed blocks from datafiles to tape, so that any bottleneck on writing to tape has less impact on your overall backup strategy. In particular, if tape drives are not locally attached to the node running the database being backed up, then incremental backups can be faster.
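A sketch of such a strategy follows; the change tracking file path is an assumption, and a level 0 backup must exist before level 1 backups are useful:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/oradata/trgt/bct.f';
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;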
Step 5: Query V$ Views to Identify Bottlenecks
If none of the previous steps improves backup performance, then try to determine the exact source of the bottleneck. Use the V$BACKUP_SYNC_IO and V$BACKUP_ASYNC_IO views to determine the source of backup or restore bottlenecks and to see detailed progress of backup jobs.
V$BACKUP_SYNC_IO contains rows when the I/O is synchronous to the process (or thread on some platforms) performing the backup. V$BACKUP_ASYNC_IO contains rows when the I/O is asynchronous. Asynchronous I/O is obtained either with I/O processes or because it is supported by the underlying operating system.
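Files with a high ratio of LONG_WAITS to IO_COUNT are the most likely bottlenecks. A minimal sketch of such a query:
SELECT filename, long_waits, io_count
FROM v$backup_async_io
WHERE io_count > 0
ORDER BY long_waits / io_count DESC;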