MERCADOS FINANCIEROS

Tuesday, April 26, 2016

BACKUP RECOVERY AREA

# Assumes $archivolog (log directory) and $fecha (date stamp) are set by the caller.
rman target / log=$archivolog/log_rman_recovery_area_${ORACLE_SID}_${fecha}.log <<EOF
run {
allocate channel ch1 device type disk;
BACKUP RECOVERY AREA TO DESTINATION '/u02/backups/rman/$ORACLE_SID/$fecha' TAG 'BACKUP_RECOVERY_AREA_FS';
}
EOF


    RMAN> backup recovery area to destination '/u01/app/oracle/backup';

    Starting backup at 01-SEP-18
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=8 RECID=4 STAMP=985557093
    input archived log thread=1 sequence=9 RECID=5 STAMP=985558812
    input archived log thread=1 sequence=10 RECID=6 STAMP=985558989
    input archived log thread=1 sequence=11 RECID=7 STAMP=985576474
    input archived log thread=1 sequence=12 RECID=8 STAMP=985611613
    input archived log thread=1 sequence=13 RECID=9 STAMP=985644352
    input archived log thread=1 sequence=14 RECID=10 STAMP=985672841
    input archived log thread=1 sequence=15 RECID=11 STAMP=985683660
    input archived log thread=1 sequence=16 RECID=12 STAMP=985686332
    input archived log thread=1 sequence=17 RECID=13 STAMP=985686505
    input archived log thread=1 sequence=18 RECID=14 STAMP=985690934
    input archived log thread=1 sequence=19 RECID=15 STAMP=985690996
    input archived log thread=1 sequence=20 RECID=16 STAMP=985691388
    channel ORA_DISK_1: starting piece 1 at 01-SEP-18

Database Smart Flash Cache in Oracle Database 12c Release 1

The database Smart Flash Cache was originally released as an Exadata feature. From 11.2.0.2 it was available for use with any flash disks, not just Exadata storage. In Oracle Database 12c the Smart Flash Cache can now use multiple flash disks without needing a volume manager. I'm not sure why this makes it worthy of inclusion in the 12c OCP DBA exam, since it seems like a very minor change.

Introduction

The Smart Flash Cache feature is available on Solaris and Oracle Linux platforms. It allows you to define a second tier of buffer cache on flash disks to improve performance. Normally, when items are aged out of the buffer cache they must be reloaded from disk the next time they are required. With Smart Flash Cache, the aged out items are placed in the Smart Flash Cache, so if they are needed again they are reloaded more quickly into the buffer cache, rather than having to come from slow disks.
The documentation suggests that optimum performance is seen when the Smart Flash Cache is 2-10 times the size of the buffer cache. In this example I have a buffer cache of approximately 1G, so I need a Smart Flash Cache of 2G or more.
I don't have any flash storage to play with on this machine, so I'm going to fake two disks to allow me to use the feature. Doing this is a really stupid idea on a real system!
# dd if=/dev/zero of=/mnt/vdisk1 bs=1024 count=1024000
# dd if=/dev/zero of=/mnt/vdisk2 bs=1024 count=1024000
# chown oracle:oinstall /mnt/vdisk1 /mnt/vdisk2
So, let's assume I had real flash disks, or partitions on real flash disks. In this case my fake disks are named "/mnt/vdisk1" and "/mnt/vdisk2".
Metadata about the contents of the Smart Flash Cache is maintained in the buffer cache, so if you don't take this into account, your actual buffer cache size is effectively reduced by using the Smart Flash Cache. To counter this, increase the buffer cache by approximately 100 bytes multiplied by the number of blocks you expect to cache in the Smart Flash Cache. In RAC installations you also need to consider the size of the shared pool. Approximately 208 bytes should be allocated for each GCS resource.
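As a rough worked example of that overhead (a sketch assuming this article's 2G flash cache and an 8K database block size):

-- 2G flash cache / 8K block size = 262,144 cacheable blocks.
-- 262,144 blocks * ~100 bytes of metadata = ~25 MB of extra buffer cache.
SELECT ROUND(2*1024*1024*1024 / 8192) AS blocks_cached,
       ROUND((2*1024*1024*1024 / 8192) * 100 / 1024 / 1024) AS extra_buffer_cache_mb
FROM   dual;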

Enable Smart Flash Cache

The DB_FLASH_CACHE_FILE parameter is used to identify the flash disks to be used by the Smart Flash Cache.
DB_FLASH_CACHE_FILE = /mnt/vdisk1, /mnt/vdisk2
The size of the individual disks is specified by the DB_FLASH_CACHE_SIZE parameter.
DB_FLASH_CACHE_SIZE = 1G, 1G
The documentation suggests these parameters can be set using the ALTER SYSTEM command, but although the ALTER SYSTEM command itself works, I could never get an acceptable value into the SPFILE. Each startup resulted in the following message.
SQL> startup
ORA-12427: invalid input value for db_flash_cache_file parameter
SQL>
Instead, I edited the SPFILE manually as follows.
Create a PFILE from the SPFILE.
SQL> CREATE PFILE='/tmp/pfile.txt' FROM SPFILE;
Edit the "/tmp/pfile.txt" file manually, adding the following entries.
*.db_flash_cache_file=/mnt/vdisk1, /mnt/vdisk2
*.db_flash_cache_size=1073741824,1073741824
Recreate the SPFILE from the PFILE.
SQL> CREATE SPFILE FROM PFILE='/tmp/pfile.txt';
Restart the database.
SHUTDOWN IMMEDIATE;
STARTUP;
Once started, you can see the parameters are set correctly.
SQL> SHOW PARAMETER db_flash_cache

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flash_cache_file                  string      /mnt/vdisk1, /mnt/vdisk2
db_flash_cache_size                  big integer 1G, 1G
SQL>
Information about Smart Flash Cache usage is displayed using the V$FLASHFILESTAT view.
SET LINESIZE 100
COLUMN name FORMAT A20

SELECT * FROM v$flashfilestat;

FLASHFILE# NAME                      BYTES    ENABLED SINGLEBLKRDS SINGLEBLKRDTIM_MICRO     CON_ID
---------- -------------------- ---------- ---------- ------------ -------------------- ----------
         1 /mnt/vdisk1          1073741824          1            0                    0          0
         2 /mnt/vdisk2          1073741824          1            0                    0          0

SQL>

Disable Smart Flash Cache

Resetting the initialization parameters disables the Smart Flash Cache.
ALTER SYSTEM RESET DB_FLASH_CACHE_FILE SCOPE=SPFILE;
ALTER SYSTEM RESET DB_FLASH_CACHE_SIZE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;

SELECT * FROM v$flashfilestat;

no rows selected

SQL>

Modifying Table Usage

The default action is for blocks to be aged out of the buffer cache and into the Smart Flash Cache. The FLASH_CACHE clause of the ALTER TABLE command allows additional control.
-- Prevent EMP from using the Smart Flash Cache.
ALTER TABLE scott.emp STORAGE (FLASH_CACHE NONE);

-- Force EMP to remain in the Smart Flash Cache, space permitting.
ALTER TABLE scott.emp STORAGE (FLASH_CACHE KEEP);

-- Reset EMP to default use of Smart Flash Cache.
ALTER TABLE scott.emp STORAGE (FLASH_CACHE);
ALTER TABLE scott.emp STORAGE (FLASH_CACHE DEFAULT);
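
To confirm how a table is currently flagged, a quick check against the FLASH_CACHE column of DBA_TABLES might look like this.

COLUMN flash_cache FORMAT A12

SELECT table_name, flash_cache
FROM   dba_tables
WHERE  owner = 'SCOTT'
AND    table_name = 'EMP';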

Performance

As always, you need to be careful about testing the performance impact of using the Smart Flash Cache. Several people have reported better performance when using flash disks for actual data files, rather than using them for the Smart Flash Cache. Your results may vary depending on the hardware you are using and the type of processing done by your database.

Monday, April 25, 2016

Recovery Manager (RMAN) Table Point In Time Recovery (PITR) in Oracle Database 12c Release 1 (12.1)

In previous releases point in time recovery of a table or table partition was only possible by manually creating a point in time clone of the database, retrieving the table using data pump, then removing the clone. Oracle 12c includes a new RMAN feature which performs all these steps, initiated from a single command.

Setup

To demonstrate this, we need to create a table to do a PITR on. This example assumes you are running in archivelog mode and have adequate backups in place to allow a recovery via a point in time clone. For such a recent modification, using a flashback query would be more appropriate, but this serves the purpose for this test.
CONN / AS SYSDBA

CREATE USER test IDENTIFIED BY test
  QUOTA UNLIMITED ON users;

GRANT CREATE SESSION, CREATE TABLE TO test;

CONN test/test

CREATE TABLE t1 (id NUMBER);
INSERT INTO t1 VALUES (1);
COMMIT;
Check the current SCN.
CONN / AS SYSDBA

SELECT DBMS_FLASHBACK.get_system_change_number FROM dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
                 1853267

SQL>
Add some more data since the SCN was checked.
CONN test/test

INSERT INTO t1 VALUES (2);
COMMIT;

SELECT * FROM t1;

        ID
----------
         1
         2

SQL> 
Exit from SQL*Plus.

Table Point In Time Recovery (PITR)

Log in to RMAN as a user with SYSDBA or SYSBACKUP privilege.
$ rman target=/
Issue the RECOVER TABLE command, giving a suitable AUXILIARY DESTINATION location for the auxiliary database. In the following example the REMAP TABLE clause is used to give the recovered table a new name.
RECOVER TABLE 'TEST'.'T1'
  UNTIL SCN 1853267
  AUXILIARY DESTINATION '/u01/aux'  
  REMAP TABLE 'TEST'.'T1':'T1_PREV';
The output from this command is rather long, but it clearly shows the creation of the clone and the data pump export and import operations. Once the operation is complete, we can see the T1_PREV table has been created and contains the data as it was when the SCN was captured.
sqlplus test/test

SELECT * FROM t1_prev;

 ID
----------
  1

SQL> 

Table Point In Time Recovery (PITR) to Dump File

Rather than completing the whole recovery, you can just stop at the point where the recovered table is in a data pump dump file, which you can import manually at a later time. The following example uses the DATAPUMP DESTINATION, DUMP FILE and NOTABLEIMPORT clauses to achieve this.
RECOVER TABLE 'TEST'.'T1'
  UNTIL SCN 1853267
  AUXILIARY DESTINATION '/u01/aux'
  DATAPUMP DESTINATION '/u01/export'
  DUMP FILE 'test_t1_prev.dmp'
  NOTABLEIMPORT;
Once the operation is complete, we can see the resulting dump file in the specified directory.
$ ls -al /u01/export
total 120
drwxr-xr-x. 2 oracle oinstall   4096 Dec 26 17:33 .
drwxrwxr-x. 5 oracle oinstall   4096 Dec 26 12:30 ..
-rw-r-----. 1 oracle oinstall 114688 Dec 26 17:34 test_t1_prev.dmp
$
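
With the dump file on disk, you can import the table manually whenever needed. A minimal sketch, assuming a directory object is created over /u01/export (the name recover_dir is hypothetical):

-- As a privileged user in SQL*Plus:
CREATE OR REPLACE DIRECTORY recover_dir AS '/u01/export';
GRANT READ, WRITE ON DIRECTORY recover_dir TO test;

-- Then from the OS prompt:
--   impdp test/test directory=recover_dir dumpfile=test_t1_prev.dmp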

Table Point In Time Recovery (PITR) in a Pluggable Database (PDB)

The process for performing a point in time recovery of a table in a PDB is similar to that of a non-CDB database.

Thursday, April 21, 2016

Cluster Oracle RAC TEST ssh

ssh 10.1.1.10 date
ssh USBLDDB1010  date
ssh USBLDDB1010.usb.com date
ssh 10.1.1.11  date
ssh USBLDDB2011  date
ssh USBLDDB2011.usb.com date

ssh 192.168.2.10  date
ssh USBLDDB1010-priv  date
ssh USBLDDB1010-priv.usb.com date
ssh 192.168.2.11 date
ssh USBLDDB2011-priv  date
ssh USBLDDB2011-priv.usb.com date

Wednesday, April 13, 2016

"Upgrade" de bases de datos Oracle 10g(R1/R2) a 11gR2 ( Parte I )


"Upgrade" de bases de datos Oracle 10g(R1/R2) a 11gR2 ( Parte I )

By Joel Pérez
Published October 2012
Greetings, dear Oracle technologists. In this article we will have the opportunity to take a closer look at upgrading 10g(R1/R2) databases to 11gR2.

Scenario and Method Used in This Article:

  • Tool(s) or utility(ies): DBUA (Database Upgrade Assistant)
  • Scenario: a database migration from 10g R2 (Single Instance) to 11g R2 (Single Instance) using DBUA on source and target.
  • The source and target servers are different machines.
Of all the tasks we perform as DBAs (Database Administrators), upgrades are personally among my favorites. The conditions, attributes, business criticality, time windows, and so on are generally different from one case to the next.
Performing an upgrade is like tailoring a suit: there are many possible approaches, but the goal is always to choose the most suitable option for the case, with the least possible downtime.
We can classify upgrade methods as follows:
Logical upgrades: an upgrade is performed through logical means when we use utilities and solutions such as:
  • export/import
  • Data Pump export/import
  • Data Guard: SQL Apply
The logical options internally execute DML (Data Manipulation Language) and DDL (Data Definition Language) statements.
Physical upgrades: an upgrade is performed physically when we move our databases block by block, exactly as they are stored. In this category we have several options:
  • RMAN Restore & Recover
  • Manual Standby
  • Data Guard
  • Convert Database
  • DBUA (Database Upgrade Assistant)
Now, the question is when to use one option or another... that is where the tailoring begins. To choose the method, the following conditions are analyzed first:
  • Can the business tolerate long, moderate, or near-zero downtime?
  • The size of the database (e.g. 30GB, 200GB, 5TB, etc.)
  • The endian format of the source and target operating systems
Each option has advantages and disadvantages relative to the others.
Let's analyze some possible scenarios to get an idea of how choosing the right option plays out:
Case 1: a 30GB database must be migrated from a server running 64-bit Linux to a server running 64-bit HP-UX 11.31.
Elements to consider for case 1:
  • The database can be considered small.
  • The source and target operating systems do not share the same endian format, so the first choice is a logical method, which operates the same way regardless of the source or target operating system.
  • If the source database is 9i or earlier, we must use Export/Import, which existed in those versions and has newer counterparts in 10g(R1/R2) and 11g(R1/R2). If the source database is 10g(R1) or later, Data Pump Export/Import can be used. Note: the Data Pump Export/Import utilities did not exist in 9i or earlier versions.
Conclusion for case 1: the Export/Import utilities would be used, because the database is not large and the source and target operating systems have different endian formats. It is assumed that the downtime needed to carry out the task is acceptable, which we could classify as low downtime.
Case 2: a 5TB database must be migrated from a server running 64-bit Linux to a server running 64-bit HP-UX 11.31.
Elements to consider for case 2:
  • This time the database is large. export/import would no longer be advisable, because the downtime would probably be quite long.
  • The source and target operating systems do not share the same endian format, which rules out physical transport for the reason mentioned above. In this case the possible migration paths include the following tools and/or solutions:
    • Oracle Streams
    • Oracle GoldenGate
    • RMAN: Transportable Tablespaces
Each of these tools has its own usage modes, limitations, advantages, and disadvantages depending on the case. The best path can only be chosen with knowledge of how versatile each one is and of what each of these migration solutions (utilities and/or methods) involves.
Case 3: a 3TB database must be migrated from a server running 64-bit Linux to a server running 64-bit Linux.
Elements to consider for case 3:
  • The database is large, but we have the best possible condition for an upgrade: a migration between operating systems with the same endian format. Within this classification there are two further sub-cases:
    • Same endian format and identical source and target operating systems: here we can use RMAN Restore & Recover, Data Guard, DBUA, etc.
    • Same endian format but different source and target operating systems: here we must use "RMAN: Convert Database".
From the analysis of just three cases we can see how complex choosing the ideal migration method can become. The goal of this series of articles, "Upgrading 10g(R1/R2) databases to 11gR2" (Part I/II/III, etc.), is to provide the criteria and keys needed to choose among the various methods, tools, and/or solutions to perform a successful upgrade.
The following diagram illustrates, in a more organized way, the decision-making involved in an upgrade task. As it shows, the main deciding factor when choosing the technique is the operating system version and its endian format. If we stay on the same operating system, the next decision point is the allowed downtime. If it can exceed 30 minutes, we can use techniques that involve rebuilding the catalog, such as RMAN Restore & Recover plus the catupgrd script, or DBUA, which is the focus of this article.
If the source and target are not on the same operating system, the options include:
  • Data Guard (SQL Apply)
  • Oracle Streams
  • Transportable Database/Tablespaces
  • Oracle GoldenGate
  • Export/Import

[Diagram: upgrade decision path]
With this introduction complete, let's begin the sequence of steps to carry out the migration based on the scenario originally described.

Scenario and Method Used in This Article

  • Tool(s) or utility(ies): DBUA (Database Upgrade Assistant)
  • Scenario: a database migration from 10g R2 (Single Instance) to 11g R2 (Single Instance) using DBUA on source and target.
  • The source and target servers are different machines.
Source database: "DB10TO11G"
Source operating system: Linux x86 64-bit, "MyjpServer1"
Oracle Server version: 10.2.0.4

Running the Pre-Upgrade Tool

While connected to the source database, we must run the "utlu112i.sql" script. This script establishes the conditions required to later open the database in upgrade mode. It must be taken from the Oracle Home of the target software; in this case the target release is 11.2.0.3, as can be seen in the feedback from its execution.
Internally, the script adds a column to the "registry$database" table and updates its value. It then recompiles the "SYS.DBMS_REGISTRY" package body and the "SYS.DBA_REGISTRY_DATABASE" view.
SQL> ALTER TABLE registry$database ADD (tz_version NUMBER);
SQL> UPDATE registry$database set tz_version =4;
SQL> ALTER  PACKAGE "SYS"."DBMS_REGISTRY"  COMPILE BODY;
SQL> ALTER VIEW "SYS"."DBA_REGISTRY_DATABASE"  COMPILE;

Note: it is important to review the recommendations included in the script's output (minimum tablespace sizes, the "Flashback" setting, feedback on the components installed in the database, recycle bin, EM, statistics, parameters, etc.).
[oracle@MyjpServer1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Thu Aug 9 15:04:08 2012

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> start utlu112i.sql
Oracle Database 11.2 Pre-Upgrade Information Tool 08-09-2012 15:04:26
Script Version: 11.2.0.3.0 Build: 001
.
**********************************************************************
Database:
**********************************************************************
--> name:          DB10TO11
--> version:       10.2.0.4.0
--> compatible:    10.2.0.3.0
--> blocksize:     8192
--> platform:      Linux x86 64-bit
--> timezone file: V4
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 719 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 400 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 444 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 60 MB
.
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
Note: Pre-upgrade tool was run on a lower version 64-bit database.
**********************************************************************
--> If Target Oracle is 32-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
.

--> If Target Oracle is 64-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
--> background_dump_dest         11.1       DEPRECATED   replaced by  "diagnostic_dest"
--> user_dump_dest               11.1       DEPRECATED   replaced by  "diagnostic_dest"
.

**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
--> JServer JAVA Virtual Machine [upgrade]  VALID
--> Oracle XDK for Java          [upgrade]  VALID
--> Oracle Workspace Manager     [upgrade]  VALID
--> OLAP Analytic Workspace      [upgrade]  VALID
--> OLAP Catalog                 [upgrade]  VALID
--> EM Repository                [upgrade]  VALID
--> Oracle Text                  [upgrade]  VALID
--> Oracle XML Database          [upgrade]  VALID
--> Oracle Java Packages         [upgrade]  VALID
--> Oracle interMedia            [upgrade]  VALID
--> Spatial                      [upgrade]  VALID
--> Data Mining                  [upgrade]  VALID
--> Expression Filter            [upgrade]  VALID
--> Rule Manager                 [upgrade]  VALID
--> Oracle OLAP API              [upgrade]  VALID
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> Database is using a timezone file older than version 14.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the 10.2.0.4.0 database timezone version
.... to the latest version which comes with the new release.
WARNING: --> EM Database Control Repository exists in the database.
.... Direct downgrade of EM Database Control is not supported. Refer to the
.... Upgrade Guide for instructions to save the EM data prior to upgrade.
WARNING: --> Your recycle bin is turned on and currently contains no objects.
.... Because it is REQUIRED that the recycle bin be empty prior to upgrading
.... and your recycle bin is turned on, you may need to execute the command:
        PURGE DBA_RECYCLEBIN
.... prior to executing your upgrade to confirm the recycle bin is empty.
.
**********************************************************************
Recommendations
**********************************************************************
Oracle recommends gathering dictionary statistics prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:

    EXECUTE dbms_stats.gather_dictionary_stats;

**********************************************************************
Oracle recommends reviewing any defined events prior to upgrading.

To view existing non-default events execute the following commands
while connected AS SYSDBA:
  Events:
    SELECT (translate(value,chr(13)||chr(10),' ')) FROM sys.v$parameter2
      WHERE  UPPER(name) ='EVENT' AND  isdefault='FALSE'

  Trace Events:
    SELECT (translate(value,chr(13)||chr(10),' ')) from sys.v$parameter2
      WHERE UPPER(name) = '_TRACE_EVENTS' AND isdefault='FALSE'

Changes will need to be made in the init.ora or spfile.

**********************************************************************
SQL>

Purging the Recycle Bin

Following the recommendations given by the script above, it is advisable to purge the recycle bin.
SQL> purge DBA_RECYCLEBIN;

Creating a Template with Data from the Source Database

Once the Pre-Upgrade Tool script has been run, we generate a template containing the data of the source database "DB10TO11G" using DBCA. While generating the template, DBCA places the database in mount mode in order to produce a compressed cold backup as part of the template. Compressed backups normally take 2 to 3 times longer than uncompressed ones, and restoring one also takes roughly 3 times longer than a regular restore. This factor is important when calculating part of the downtime the activity will involve.
[oracle@MyjpServer1 ~]$ dbca

The DBCA initial screen is displayed.
Choose the option to work with templates.
Choose the option to create a template that includes data.
Select the database on which the template will be based.
Set a name for the template and accept its default path.
Select the "convert the file locations to use OFA structure" option so that we have the flexibility to choose the file paths of the database that will be created from this template (OFA: Optimal Flexible Architecture).
Confirm the template generation, noting the name and path that will be used for it.
Once the template has been generated, we can see the 3 files created in the previously established path ($ORACLE_HOME/assistants/dbca/templates):
[oracle@MyjpServer1 templates]$ pwd
/u01/app/oracle/product/10.2.0/db_1/assistants/dbca/templates
[oracle@MyjpServer1 templates]$
[oracle@MyjpServer1 templates]$ ls -lt *DB10TO11G*
-rw-r-----  1 oracle oinstall     5236 Aug  9 13:49 Template_for_DB10TO11G.dbc
-rw-r-----  1 oracle oinstall  7061504 Aug  9 13:49 Template_for_DB10TO11G.ctl
-rw-r-----  1 oracle oinstall 98500608 Aug  9 13:49 Template_for_DB10TO11G.dfb
[oracle@MyjpServer1 templates]$

The template files will be transferred to the target server, where an Oracle Server 11gR2 installation is in place. The target path for the template files is $ORACLE_HOME/assistants/dbca/templates; this is the default path where the 11gR2 DBCA looks for existing templates for the running Oracle Server.
[oracle@MyjpServer1 templates]$ scp *DB10TO11G* 
MyjpServer2:/u01/app/oracle/product/11.2.0/dbhome_1/assistants/dbca/templates
The authenticity of host '16.0.0.118 (16.0.0.118)' can't be established.
RSA key fingerprint is 76:74:5e:63:0e:34:c6:36:d5:0c:f0:30:b0:60:3d:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '16.0.0.118' (RSA) to the list of known hosts.
oracle@16.0.0.118's password:
Template_for_DB10TO11G.ctl                    100% 6896KB   6.7MB/s   00:00
Template_for_DB10TO11G.dbc                    100% 5236     5.1KB/s   00:00
Template_for_DB10TO11G.dfb                    100%   94MB  10.4MB/s   00:09
[oracle@MyjpServer1 templates]$

The transferred files can be seen on the target server. It is important to transfer the template files to the indicated path ($ORACLE_HOME/assistants/dbca/templates) so that the 11g DBCA can recognize them automatically.
Target database to be built: "DB11GR2"
Target operating system: Linux x86 64-bit, "MyjpServer2"
Oracle Server version: 11.2.0.3
[oracle@MyjpServer2 templates]$ pwd
/u01/app/oracle/product/11.2.0/dbhome_1/assistants/dbca/templates
[oracle@MyjpServer2 templates]$
[oracle@MyjpServer2 templates]$ ls -lt *DB10TO11G*
-rw-r----- 1 oracle oinstall 98500608 Aug  9 08:13 Template_for_DB10TO11G.dfb
-rw-r----- 1 oracle oinstall  7061504 Aug  9 08:13 Template_for_DB10TO11G.ctl
-rw-r----- 1 oracle oinstall     5236 Aug  9 08:13 Template_for_DB10TO11G.dbc
[oracle@MyjpServer2 templates]$

Once the template files are on the target server in the indicated path, we start DBCA.
[oracle@MyjpServer2 ~]$ dbca

Select the option to create a new database.
There we will find the transferred template, "Template for DB10TO11G", available.
For this case we will create a database called "DB11GR2".
On this screen the options to create the database and to generate the scripts are both selected. In this case DBCA will not be able to create the database successfully through the wizard, for a reason we will see in the following steps. The procedure we will follow is based solely on generating the scripts, so select only script generation.
The scripts are generated successfully on disk.
If database creation was also selected, the assistant starts that activity. Internally, when it tries to open the database, it runs into the problem that it is attempting to restore a 10g database into an Oracle Server 11g home, so the only possible open mode is upgrade.
As a result, the assistant will not finish its work successfully. The scripts, however, have already been created, and we will work with them to complete the task.
The scripts generated by DBCA are the following:
[oracle@MyjpServer2 scripts]$ pwd
/u01/app/oracle/admin/DB11GR2/scripts
[oracle@MyjpServer2 scripts]$
[oracle@MyjpServer2 scripts]$ ls -lt
total 27644
-rw-r----- 1 oracle oinstall     2182 Aug  9 09:46 cloneDBCreation.sql
-rw-r----- 1 oracle oinstall      736 Aug  9 09:41 postDBCreation.sql
-rw-r----- 1 oracle oinstall      507 Aug  9 09:41 lockAccount.sql
-rwxr-xr-x 1 oracle oinstall      550 Aug  9 09:41 DB11GR2.sql
-rw-r----- 1 oracle oinstall      667 Aug  9 09:41 postScripts.sql
-rw-r----- 1 oracle oinstall     2524 Aug  9 09:41 initDB11GR2Temp.ora
-rw-r----- 1 oracle oinstall      277 Aug  9 09:41 CloneRmanRestore.sql
-rw-r----- 1 oracle oinstall     1361 Aug  9 09:41 rmanRestoreDatafiles.sql
-rw-r----- 1 oracle oinstall     2511 Aug  9 09:41 init.ora
-rwxr-xr-x 1 oracle oinstall      647 Aug  9 09:41 DB11GR2.sh
[oracle@MyjpServer2 scripts]$
[oracle@MyjpServer2 scripts]$

We will create the database using these scripts. For the creation to succeed in this scenario, we only need to modify one line of one script and add another line.
Script to modify: /u01/app/oracle/admin/DB11GR2/scripts/cloneDBCreation.sql
Line to modify: alter database open resetlogs;
Original value: alter database "DB11GR2" open resetlogs;
Value after the modification: alter database "DB11GR2" open resetlogs upgrade;

The generated scripts originally open the database with resetlogs so that the redo log groups are created automatically after the control files are recreated. The modification adds the "upgrade" clause so that the same operation is performed in upgrade mode. Once the database is open in upgrade mode, the catalog must be rebuilt, which not only refreshes the catalog views but also upgrades the options installed in the database (Oracle Text, Oracle XML Database, etc.). Running this catalog script is the most time-consuming part of the upgrade in this scenario, typically 30 to 90 minutes. Gathering data dictionary statistics beforehand (EXECUTE dbms_stats.gather_dictionary_stats;) speeds up this phase and the later compilation of invalid objects.
After the open line, add the following line to rebuild the catalog in upgrade mode:
start /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/catupgrd.sql

In short, the two lines end up as follows:
alter database "DB11GR2" open resetlogs upgrade;
start /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/catupgrd.sql

The complete content of the modified script is as follows:
[oracle@MyjpServer2 scripts]$ cat /u01/app/oracle/admin/DB11GR2/  scripts/cloneDBCreation.sql
SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /u01/app/oracle/admin/DB11GR2/scripts/cloneDBCreation.log append
Create controlfile reuse set database "DB11GR2"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
Datafile
'/u01/app/oracle/oradata/DB11GR2/system01.dbf',
'/u01/app/oracle/oradata/DB11GR2/undotbs01.dbf',
'/u01/app/oracle/oradata/DB11GR2/sysaux01.dbf',
'/u01/app/oracle/oradata/DB11GR2/users01.dbf'
LOGFILE GROUP 1 ('/u01/app/oracle/oradata/DB11GR2/redo01.log') SIZE 51200K,
GROUP 2 ('/u01/app/oracle/oradata/DB11GR2/redo02.log') SIZE 51200K,
GROUP 3 ('/u01/app/oracle/oradata/DB11GR2/redo03.log') SIZE 51200K RESETLOGS;
exec dbms_backup_restore.zerodbid(0);
shutdown immediate;
startup nomount pfile="/u01/app/oracle/admin/DB11GR2/scripts/initDB11GR2Temp.ora";
Create controlfile reuse set database "DB11GR2"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
Datafile
'/u01/app/oracle/oradata/DB11GR2/system01.dbf',
'/u01/app/oracle/oradata/DB11GR2/undotbs01.dbf',
'/u01/app/oracle/oradata/DB11GR2/sysaux01.dbf',
'/u01/app/oracle/oradata/DB11GR2/users01.dbf'
LOGFILE GROUP 1 ('/u01/app/oracle/oradata/DB11GR2/redo01.log') SIZE 51200K,
GROUP 2 ('/u01/app/oracle/oradata/DB11GR2/redo02.log') SIZE 51200K,
GROUP 3 ('/u01/app/oracle/oradata/DB11GR2/redo03.log') SIZE 51200K RESETLOGS;
alter system enable restricted session;
alter database "DB11GR2" open resetlogs upgrade;
start /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/catupgrd.sql
exec dbms_service.delete_service('DB10TO11');
exec dbms_service.delete_service('DB10TO11XDB');
alter database rename global_name to "DB11GR2";
ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/app/oracle/oradata/DB11GR2/temp01.dbf' 
  SIZE 20480K REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED;
select tablespace_name from dba_tablespaces where tablespace_name='USERS';
select sid, program, serial#, username from v$session;
alter database character set INTERNAL_CONVERT WE8MSWIN1252;
alter database national character set INTERNAL_CONVERT AL16UTF16;
alter user sys account unlock identified by "&&sysPassword";
alter user system account unlock identified by "&&systemPassword";
alter system disable restricted session;
[oracle@MyjpServer2 scripts]$

Once the change has been made in the script (/u01/app/oracle/admin/DB11GR2/scripts/cloneDBCreation.sql), the formal creation of the database can be started by running:
[oracle@MyjpServer2 scripts]$ /u01/app/oracle/admin/DB11GR2/scripts/DB11GR2.sh

After the database build script finishes, it is advisable to add a reference to the new database to the /etc/oratab file, as follows:
DB11GR2:/u01/app/oracle/product/11.2.0/dbhome_1:Y
This is done so that DBCA knows the newly created database exists for future maintenance tasks related to it.
After the build script finishes, we can also check whether the database's internal components and/or options were updated correctly with the following script:
SQL> start $ORACLE_HOME/rdbms/admin/utlu112s.sql

It is advisable to check for invalid objects and recompile them. The following query lists the invalid objects:
SQL> SELECT UNIQUE object_name, object_type, owner FROM dba_objects WHERE status='INVALID';

Make sure that no invalid objects remain in the SYS and SYSTEM schemas.
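For example, a quick check limited to those two schemas:
SQL> SELECT owner, object_name, object_type FROM dba_objects WHERE status='INVALID' AND owner IN ('SYS','SYSTEM');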
You can recompile invalid objects with the following script:
SQL> start $ORACLE_HOME/rdbms/admin/utlrp.sql
From release 11.1.0.7 onward, you can compare invalid objects before and after the upgrade through the following views:
  • registry$sys_inv_objs
  • registry$nonsys_inv_objs
Or run the following script for the same purpose:
SQL> start $ORACLE_HOME/rdbms/admin/utluiobj.sql

The dba_invalid_objs view can be queried to determine which objects remain invalid after the upgrade.

Conclusion

The technique shown here suits migrations with a "no zero downtime" profile. The downtime consists of the original cold backup, the transfer of the template files, the restore of the database, the rebuild of the catalog, and the final follow-up tasks. Depending on the size of the database, this can be an excellent option for its simplicity, effectiveness, and reliability. In later articles we will cover further scenarios: upgrades with DBUA on the same host, rolling upgrades that keep downtime short for very large databases, and other possible cases.

Monday, April 11, 2016

Managing Audit Trails

TECHNOLOGY: Security
As published in Oracle Magazine, November/December 2010



Managing Audit Trails
By Arup Nanda


Relocate the audit trail to a different tablespace and set up an automatic purge process to keep its size under control.

One of the most significant aspects of database security involves setting up auditing to record user activities. The very knowledge that a user’s actions are being recorded can act as a significant deterrent to prevent wrongdoers from committing malicious acts.

When auditing is enabled, the audit output is recorded in an audit trail, which is usually stored in the database in a table under the SYS schema called AUD$. It can also reside as files in the file system, and the files can optionally be stored in XML format. For more-precise control, the Fine Grained Auditing feature of Oracle Database 11g provides granular control of what to audit, based on a more detailed set of policies. Fine Grained Auditing audits are usually stored in another table, FGA_LOG$, under the SYS schema.
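
To see where these database-resident trails currently live and how much space they occupy, a quick check against DBA_SEGMENTS might look like this:

SELECT segment_name, tablespace_name, ROUND(bytes/1024/1024) AS mb
FROM   dba_segments
WHERE  segment_name IN ('AUD$', 'FGA_LOG$');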

These various audit trails can quickly grow out of control when database activity increases. As audit trails grow, two main challenges must be addressed:

1. Trails need to be kept to a manageable size (and old records purged) if they are to be used effectively in forensic analysis.
2. Because database-resident trails are typically stored in the SYSTEM tablespace, they can potentially fill it up—bringing the database to a halt.


Fortunately, the new auditing features in Oracle Database 11g Release 2 can help address these challenges. These capabilities, implemented in a package called DBMS_AUDIT_MGMT, enable you to move audit trails from the SYSTEM tablespace to one of your choice.

The new auditing features also let you set up one-time and automated purge processes for each of your audit trail types. Historically, to purge an audit trail, you were generally forced to stop auditing (which may have required bouncing the database), truncate, and then restart auditing (and bouncing the database again).

In this article, you will learn how to use the new features in Oracle Database 11g Release 2 to manage your audit trails.

Relocating the Audit Trail Tables
Let’s first examine how to relocate an audit trail from the default SYSTEM tablespace to a new one. In case you don’t already have a suitable target tablespace, the code below shows how to create one:


create tablespace audit_trail_ts
datafile '+DATA'
size 500M
segment space management auto
/
For moving an audit trail to the new tablespace, Oracle Database 11g Release 2 provides a procedure in DBMS_AUDIT_MGMT called SET_AUDIT_TRAIL_LOCATION. Listing 1 shows how to move a “standard” audit trail, which is the Oracle Database audit recorded in the AUD$ table.

Code Listing 1: Relocating a standard audit trail

begin
dbms_audit_mgmt.set_audit_trail_location(
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
audit_trail_location_value => 'AUDIT_TRAIL_TS');
end;
/
This move operation can be performed even when the database is up and an audit trail is being written. The target tablespace (AUDIT_TRAIL_TS in this case) must be available and online. If the tablespace is not available, auditing will stop, also stopping the database in the process. You should therefore be very careful about where you create the tablespace. The location should be permanent (and not on a temporary file system such as /tmp), and the underlying hardware should be resilient against failures (using RAID-1, for example).

The procedure can also be used for Fine Grained Auditing audit trails. To move a Fine Grained Auditing audit trail, simply replace the value of the audit_trail_type parameter in Listing 1 with dbms_audit_mgmt.audit_trail_fga_std. If you want to move both the standard and Fine Grained Auditing audit trails to the new tablespace, use the dbms_audit_mgmt.audit_trail_db_std value as the audit_trail_type parameter.
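For instance, moving just the Fine Grained Auditing trail is the same call as Listing 1 with the trail type swapped:

begin
dbms_audit_mgmt.set_audit_trail_location(
audit_trail_type => dbms_audit_mgmt.audit_trail_fga_std,
audit_trail_location_value => 'AUDIT_TRAIL_TS');
end;
/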

Purging Old Data
Next, let’s examine how to purge audit trails. The audit management package includes a procedure that automatically performs the purge for you. But before you can actually use it, you must call a one-time initialization procedure—INIT_CLEANUP—to set up the audit management infrastructure. Listing 2 shows how to perform the initialization.


Code Listing 2: Initializing cleanup of audit entries

begin
dbms_audit_mgmt.init_cleanup(
audit_trail_type => dbms_audit_mgmt.audit_trail_db_std,
default_cleanup_interval => 24 );
end;
The INIT_CLEANUP procedure takes two parameters, neither of which takes a default value:

audit_trail_type—designates the type of audit trail being initialized. For instance, audit_trail_aud_std indicates the standard database audit trail (the AUD$ table). Table 1 lists the possible values for this parameter and the audit trail types they represent.
default_cleanup_interval—designates the default interval in hours between executions of automatic purge jobs (to be discussed later in this article).



Parameter            Description
-------------------  ---------------------------------------------
audit_trail_aud_std  The standard AUD$ audit trail in the database
audit_trail_fga_std  The FGA_LOG$ table, for Fine Grained Auditing
audit_trail_db_std   Both standard and FGA audit trails
audit_trail_os       The OS audit trail
audit_trail_xml      The XML audit trail
audit_trail_files    Both OS and XML audit trails
audit_trail_all      All of the above


Table 1: Types of audit trails for audit_trail_type



In addition to setting the default cleanup frequency, the INIT_CLEANUP procedure moves the audit trail out of the SYSTEM tablespace. If the FGA_LOG$ and AUD$ tables are in the SYSTEM tablespace, the procedure will move them to the SYSAUX tablespace. Needless to say, you should ensure that the SYSAUX tablespace has sufficient space to hold both of these tables. The process of moving data from one tablespace to the other can have an impact on performance, so you should avoid calling the procedure during peak hours.

If you have already relocated these two tables to another tablespace (as described in the previous section), they will stay in the new location and the procedure will execute much more quickly.

After calling the initialization procedure, you can perform the actual audit trail cleanup, but you likely wouldn’t just remove an audit trail blindly. In most cases, you would archive the trail first before performing a permanent purge. When doing so, you can call another procedure—SET_LAST_ARCHIVE_TIMESTAMP—to let the purge process know the time stamp up to which an audit trail has been archived. This procedure accepts three parameters:

audit_trail_type—the type of audit trail you are about to purge.
last_archive_time—the last time the audit trail was archived for this type.
rac_instance_number—with an Oracle Real Application Clusters (Oracle RAC) database, OS audit trail files exist on more than one server. It’s possible to archive these files at different times, so this parameter tells the purge process the archive time of each node (or instance number) of the cluster. This parameter is applicable to Oracle RAC databases only; it has no significance for single-instance databases. Furthermore, this parameter is irrelevant for database audit trails, because they are common to all Oracle RAC instances.


After you set the archive time stamp, you can check its value from a data dictionary view, DBA_AUDIT_MGMT_LAST_ARCH_TS. Listing 3 shows how to set the cutoff time stamp to September 30, 2009 at 10 a.m. and subsequently check its value from the view.

Code Listing 3: Setting the last archived time

begin
dbms_audit_mgmt.set_last_archive_timestamp(
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
last_archive_time =>
to_timestamp('2009-09-30 10:00:00','YYYY-MM-DD HH24:MI:SS'),
rac_instance_number => null
);
end;
/

SQL> select * from DBA_AUDIT_MGMT_LAST_ARCH_TS;

AUDIT_TRAIL RAC_INSTANCE
——————————————————————————————————
LAST_ARCHIVE_TS
——————————————————————————————————
STANDARD AUDIT TRAIL 0
30-SEP-09 10.00.00.000000 AM +00:00
Now you can execute the purge. To do so, run the code shown in Listing 4. The CLEAN_AUDIT_TRAIL procedure in the listing accepts two parameters. The first one is audit_trail_type. The second parameter—use_last_arch_timestamp—specifies whether the purge should be performed, depending on the last archive time stamp. If the parameter is set to TRUE (the default), the purge will delete the records generated before the time stamp (September 30, 2009 at 10 a.m. in this case). If it is set to FALSE, all audit trail records will be deleted.

Code Listing 4: Purging a standard database audit trail

begin
dbms_audit_mgmt.clean_audit_trail(
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
use_last_arch_timestamp => TRUE
);
end;
/
This same procedure is also used to purge file-based audit trails such as OS file audit trails and XML trails. To purge those trails, just specify the appropriate value for the audit_trail_type parameter (as shown in Table 1). However, note that for file-based audit trails, only the files in the current audit directory (as specified by the audit_file_dest initialization parameter) will be deleted. If you have audit trail files in a different directory from the one specified in audit_file_dest, those files will not be deleted.
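
For example, purging archived OS audit trail files uses the same procedure with the OS trail type:

begin
dbms_audit_mgmt.clean_audit_trail(
audit_trail_type => dbms_audit_mgmt.audit_trail_os,
use_last_arch_timestamp => TRUE
);
end;
/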

Note that in Microsoft Windows, audit trails are entries in Windows Event Viewer and not actual OS files. So purging OS-based audit trails on that platform will not delete the trails.

Setting Up Automatic Purge
The foregoing process is good for a one-time purge of audit trails. To ensure that audit trails do not overwhelm their tablespace, you may want to institute an automatic purge mechanism. The DBMS_AUDIT_MGMT package has another procedure—CREATE_PURGE_JOB—to do just that. This procedure takes four parameters:


audit_trail_type—the type of the audit trail
audit_trail_purge_interval—the duration, in hours, between executions of the purge process
audit_trail_purge_name—the name you assign to this job
use_last_arch_timestamp—an indication of whether the job should delete audit trail records marked as archived. The default is TRUE. If the parameter is set to FALSE, the procedure will delete the entire trail.


Listing 5 shows how to create a purge job that deletes standard audit trail records every 24 hours. As with one-time purges, you can create different jobs for each type of trail—such as standard, Fine Grained Auditing, OS files, and XML—simply by specifying different values for audit_trail_type when calling CREATE_PURGE_JOB. You can even set different purge intervals for each audit trail type to suit your archival needs. For instance, you can use a simple database-link-based script to pull database audit trail records to a different database while using a third-party tool to pull the OS audit trails. The execution time of each approach may be different, causing the database records to be pulled every day while the OS files are being pulled every hour. As a result, you might schedule purge jobs with an interval of 24 hours for database-based trails and with an interval of one hour for OS-file-based trails.

Code Listing 5: Creating a purge job for a standard audit trail

begin
dbms_audit_mgmt.create_purge_job (
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
audit_trail_purge_interval => 24,
audit_trail_purge_name => 'std_audit_trail_purge_job',
use_last_arch_timestamp => TRUE
);
end;
/
You can view information about automatic purge jobs by accessing the DBA_AUDIT_MGMT_CLEANUP_JOBS data dictionary view. It shows all the important attributes of the job, such as the name, the type of audit trail being cleaned, and the frequency.
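
For example, a quick look at the job just created (a sketch; the columns shown are from that data dictionary view):

SELECT job_name, job_status, audit_trail, job_frequency
FROM   dba_audit_mgmt_cleanup_jobs;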

Setting Audit Trail Properties

When setting up a purge job, you should always remember one very important fact. It performs a DELETE operation—not TRUNCATE—on database-based trails, so the purge operation generates redo and undo records, which may be quite significant, depending on the number of trail records deleted. A large deletion can potentially fill up the undo tablespace. To reduce the redo size of a transaction, the purge job deletes in batches of 1,000 and performs commits between them. If the database is very large, it may be able to handle much more redo easily. You can change the delete batch size by using the SET_AUDIT_TRAIL_PROPERTY procedure. Listing 6 shows how to set the delete batch size to 100,000.


Code Listing 6: Setting the deletion batch size

begin
dbms_audit_mgmt.set_audit_trail_property(
audit_trail_type => dbms_audit_mgmt.audit_trail_aud_std,
audit_trail_property => dbms_audit_mgmt.db_delete_batch_size,
audit_trail_property_value => 100000);
end;
/

In addition to the db_delete_batch_size property referenced in Listing 6, you can use SET_AUDIT_TRAIL_PROPERTY to set several other important properties. They include the following:

file_delete_batch_size specifies how many OS audit trail files will be deleted by the purge job in one batch.
cleanup_interval specifies the default interval, in hours, between executions of a purge job.
os_file_max_age specifies how many days an OS file or an XML file can be left open before a new file is created.
os_file_max_size specifies the maximum size of an audit trail file (in kilobytes).


To find the current value of a property, you can check the data dictionary view DBA_AUDIT_MGMT_CONFIG_PARAMS.
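
For example, to cap each OS audit file at 10 MB (10,240 KB is an arbitrary illustrative value) and then confirm the configuration:

begin
dbms_audit_mgmt.set_audit_trail_property(
audit_trail_type => dbms_audit_mgmt.audit_trail_os,
audit_trail_property => dbms_audit_mgmt.os_file_max_size,
audit_trail_property_value => 10240);
end;
/

SELECT parameter_name, parameter_value, audit_trail
FROM   dba_audit_mgmt_config_params;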

Friday, April 8, 2016

wrap

wrap iname=input_file [ oname=output_file ]
input_file is the name of a file containing SQL statements that you typically run using SQL*Plus. If you omit the file extension, an extension of .sql is assumed. For example, the following commands are equivalent:
wrap iname=/mydir/myfile
wrap iname=/mydir/myfile.sql
You can also specify a different file extension:
wrap iname=/mydir/myfile.src
output_file is the name of the wrapped file that is created. It defaults to the name of the input file, with an extension of .plb. For example, the following commands are equivalent:
wrap iname=/mydir/myfile
wrap iname=/mydir/myfile.sql oname=/mydir/myfile.plb
You can use the oname option to specify a different file name and extension:
wrap iname=/mydir/myfile oname=/yourdir/yourfile.out
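
As a minimal end-to-end sketch (the file names here are hypothetical), you could wrap a one-procedure source file and then load the wrapped version through SQL*Plus:

-- Contents of the hypothetical input file /tmp/hello.sql:
CREATE OR REPLACE PROCEDURE hello AS
BEGIN
  NULL;
END;
/

-- After running "wrap iname=/tmp/hello.sql oname=/tmp/hello.plb" at the OS
-- prompt, load the wrapped source and confirm the body is hidden:
-- SQL> @/tmp/hello.plb
-- SQL> SELECT text FROM user_source WHERE name = 'HELLO';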

Wednesday, April 6, 2016

Administering a CDB with SQL*Plus

Multitenant : Connecting to Container Databases (CDB) and Pluggable Databases (PDB) in Oracle Database 12c Release 1 (12.1)



https://oracle-base.com/articles/12c/multitenant-overview-container-database-cdb-12cr1

The multitenant option introduced in Oracle Database 12c allows a single container database (CDB) to host multiple separate pluggable databases (PDB). This article describes how to connect to container databases (CDB) and pluggable databases (PDB).

Connecting to a Container Database (CDB)

Connecting to the root of a container database is the same as connecting to any previous database instance. On the database server you can use OS authentication.
$ export ORACLE_SID=cdb1
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Aug 26 15:29:49 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>
You can connect to other common users in a similar way.
SQL> CONN system/password
Connected.
SQL>
The V$SERVICES view can be used to display the available services from the database.
COLUMN name FORMAT A30

SELECT name, pdb
FROM   v$services
ORDER BY name;

NAME          PDB
------------------------------ ------------------------------
SYS$BACKGROUND                 CDB$ROOT
SYS$USERS                      CDB$ROOT
cdb1                           CDB$ROOT
cdb1XDB                        CDB$ROOT
pdb1                           PDB1
pdb2                           PDB2

6 rows selected.

SQL>
The lsnrctl utility allows you to display the available services from the command line.
$ lsnrctl service

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 20-MAY-2014 09:01:34

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
Services Summary...
Service "cdb1" has 1 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "cdb1XDB" has 1 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "D000" established:0 refused:0 current:0 max:1022 state:ready
         DISPATCHER 
         (ADDRESS=(PROTOCOL=tcp)(HOST=ol6-121.localdomain)(PORT=21196))
Service "pdb1" has 1 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
Service "pdb2" has 1 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
The command completed successfully
$
Connections using services are unchanged from previous versions.
SQL> -- EZCONNECT
SQL> CONN system/password@//localhost:1521/cdb1
Connected.
SQL>

SQL> -- tnsnames.ora
SQL> CONN system/password@cdb1
Connected.
SQL>
The connection using a TNS alias requires an entry in the "$ORACLE_HOME/network/admin/tnsnames.ora" file, such as the one shown below.
CDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ol6-121.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cdb1)
    )
  )

Displaying the Current Container

The SHOW CON_NAME and SHOW CON_ID commands in SQL*Plus display the current container name and ID respectively.
SQL> SHOW CON_NAME

CON_NAME
------------------------------
CDB$ROOT
SQL>

SQL> SHOW CON_ID

CON_ID
------------------------------
1
SQL>
They can also be retrieved using the SYS_CONTEXT function.
SELECT SYS_CONTEXT('USERENV', 'CON_NAME')
FROM   dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT

SQL>


SELECT SYS_CONTEXT('USERENV', 'CON_ID')
FROM   dual;

SYS_CONTEXT('USERENV','CON_ID')
--------------------------------------------------------------------------------
1

SQL>

Switching Between Containers

When logged in to the CDB as an appropriately privileged user, the ALTER SESSION command can be used to switch between containers within the container database.
SQL> ALTER SESSION SET container = pdb1;

Session altered.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
PDB1
SQL> ALTER SESSION SET container = cdb$root;

Session altered.

SQL> SHOW CON_NAME

CON_NAME
------------------------------
CDB$ROOT
SQL>

Connecting to a Pluggable Database (PDB)

Direct connections to pluggable databases must be made using a service. Each pluggable database automatically registers a service with the listener. This is how any application will connect to a pluggable database, as well as administrative connections.
SQL> -- EZCONNECT
SQL> CONN system/password@//localhost:1521/pdb1
Connected.
SQL>

SQL> -- tnsnames.ora
SQL> CONN system/password@pdb1
Connected.
SQL>
The connection using a TNS alias requires an entry in the "$ORACLE_HOME/network/admin/tnsnames.ora" file, such as the one shown below.
PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ol6-121.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1)
    )
  )
PDB users with the SYSDBA, SYSOPER, SYSBACKUP, or SYSDG privilege can connect to a closed PDB. All other PDB users can only connect when the PDB is open. As with regular databases, PDB users require the CREATE SESSION privilege to enable connections.
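For example, to allow a local user to connect to PDB1 (test_user is a hypothetical user that already exists in the PDB):

ALTER SESSION SET container = pdb1;

GRANT CREATE SESSION TO test_user;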

JDBC Connections to PDBs

It has already been mentioned that you must connect to a PDB using a service. This means that by default many JDBC connect strings will be broken. Valid JDBC connect strings for Oracle use the following format.
# Syntax
jdbc:oracle:thin:@[HOST][:PORT]:SID
jdbc:oracle:thin:@[HOST][:PORT]/SERVICE

# Example
jdbc:oracle:thin:@ol6-121:1521:pdb1
jdbc:oracle:thin:@ol6-121:1521/pdb1
When attempting to connect to a PDB using the SID format, you will receive the following error.
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
Ideally, you would correct the connect string to use services instead of SIDs, but if that is a problem the USE_SID_AS_SERVICE_listener_name listener parameter can be used.
Edit the "$ORACLE_HOME/network/admin/listener.ora" file, adding the following entry, with the "listener" name matching that used by your listener.
USE_SID_AS_SERVICE_listener=on
Reload or restart the listener.
$ lsnrctl reload
Now both of the following connection attempts will be successful as any SIDs will be treated as services.
jdbc:oracle:thin:@ol6-121:1521:pdb1
jdbc:oracle:thin:@ol6-121:1521/pdb1

db_keep_cache_size example

#!/bin/bash

#-- Create a KEEP pool, set table and index properties to make use of it,
#-- and run SELECT statements to populate the KEEP cache.

sqlplus -S /nolog >/dev/null <<EOF
set termout off
set feedback off
connect / as sysdba
ALTER SYSTEM SET db_keep_cache_size = 4M;
exit
EOF

sqlplus -S /nolog >/dev/null <<EOF
set termout off
set feedback off
connect hr/hr
ALTER TABLE EMPLOYEES STORAGE(BUFFER_POOL KEEP);
ALTER TABLE COUNTRIES STORAGE(BUFFER_POOL KEEP);
ALTER TABLE DEPARTMENTS STORAGE(BUFFER_POOL KEEP);
ALTER TABLE LOCATIONS STORAGE(BUFFER_POOL KEEP);
ALTER TABLE REGIONS STORAGE(BUFFER_POOL KEEP);
ALTER TABLE JOB_HISTORY STORAGE(BUFFER_POOL KEEP);
select * from employees;
select * from departments;
select * from countries;
select * from locations;
select * from regions;
select * from job_history;
exit
EOF

sqlplus -S /nolog >/dev/null <<EOF
set termout off
set feedback off
connect oe/oe
ALTER TABLE orders STORAGE(BUFFER_POOL KEEP);
ALTER TABLE order_items STORAGE(BUFFER_POOL KEEP);
ALTER TABLE inventories STORAGE(BUFFER_POOL KEEP);
ALTER TABLE customers STORAGE(BUFFER_POOL KEEP);
ALTER TABLE product_information STORAGE(BUFFER_POOL KEEP);
ALTER INDEX INV_PRODUCT_IX STORAGE(BUFFER_POOL KEEP);
ALTER INDEX PRODUCT_INFORMATION_PK STORAGE(BUFFER_POOL KEEP);
select * from orders;
select * from order_items;
select * from inventories;
select * from customers;
select * from product_information;
exit
EOF

sqlplus -S /nolog >/dev/null <<EOF
set termout off
set feedback off
connect sh/sh
ALTER TABLE SALES MODIFY PARTITION SALES_Q3_1999 STORAGE(BUFFER_POOL KEEP);
ALTER TABLE SALES MODIFY PARTITION SALES_Q1_1999 STORAGE(BUFFER_POOL KEEP);
select * from sales;
exit
EOF