MERCADOS FINANCIEROS

Thursday, August 31, 2017

RESULT CACHE PL/SQL


Suppose I am on a team that is building a human resources application. The employees table is one of the key structures, holding all the data for every employee. Hundreds of users run numerous programs in the application that read from this table, and they read from it very frequently. The table, however, changes relatively infrequently, perhaps once or twice an hour. As a result, the application code repeatedly retrieves what is mostly static data from the block buffer cache, incurring the overhead of checking whether the particular query has already been parsed, finding the data in the buffer, and returning it.

The team needs to improve the performance of querying data from the employees table. Currently, we use the following function to return a row from the employees table:

FUNCTION one_employee (employee_id_in IN employees.employee_id%TYPE)
   RETURN employees%ROWTYPE
IS
   l_employee   employees%ROWTYPE;
BEGIN
   SELECT *
     INTO l_employee
     FROM employees
    WHERE employee_id = employee_id_in;

   RETURN l_employee;
EXCEPTION
   WHEN NO_DATA_FOUND
   THEN
      /* Return an empty record. */
      RETURN l_employee;
END one_employee;


In Oracle Database 11g, however, we can add a line to the header of this function as follows:
FUNCTION one_employee (employee_id_in IN employees.employee_id%TYPE)
   RETURN employees%ROWTYPE
   RESULT_CACHE RELIES_ON (employees)
IS
   l_employee   employees%ROWTYPE;
BEGIN

This RESULT_CACHE clause tells Oracle Database that it should remember (store in a special in-memory result cache) each record retrieved for a specific employee ID number. When a session executes this function and passes in an employee ID that was previously stored, the PL/SQL runtime engine will not execute the body of the function, which includes that query. Instead, it will simply retrieve the record from the cache and return the data immediately. The result is much faster retrieval.

In addition, by specifying RELIES_ON (employees), we inform Oracle Database that if any session commits changes to that table, all result-cache data drawn from the table is to be invalidated. The next call to the one_employee function would then have to execute the query and fetch fresh data from the table.

Because the cache is part of the System Global Area (SGA), its contents are available to all sessions connected to the instance. Furthermore, Oracle Database applies its least-recently-used algorithm to the cache, ensuring that the most recently accessed data is what is kept in the cache.

Before Oracle Database 11g, a similar kind of caching was possible with package-level collections, but that cache was session-specific and located in the Program Global Area (PGA). This means that if I have 1,000 different sessions running the application, they could consume an enormous amount of memory in addition to that consumed by the SGA.

The PL/SQL function result cache minimizes the amount of memory needed to cache and share this data across all sessions. This low memory profile, plus the automatic purging of cached results whenever changes are committed, makes this Oracle Database 11g feature very practical for optimizing performance in PL/SQL applications.
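To see the cache at work, we can simply call the function twice with the same ID. The following is a minimal sketch, assuming one_employee has been compiled as a standalone function against the standard HR employees table (with a last_name column) and that an employee with ID 100 exists; both are illustrative assumptions, not part of the original example:

DECLARE
   l_emp   employees%ROWTYPE;
BEGIN
   /* First call for this ID: the function body runs, the query executes,
      and the returned row is stored in the shared result cache. */
   l_emp := one_employee (100);

   /* Second call with the same ID, from this or any other session:
      the body (and its query) is skipped and the row comes straight
      from the cache. */
   l_emp := one_employee (100);

   DBMS_OUTPUT.put_line (l_emp.last_name);
END;

Timing the two calls, for example with DBMS_UTILITY.GET_TIME, makes the difference easy to see.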

Tuesday, August 22, 2017

Predefined PL/SQL Exceptions


An internal exception is raised implicitly whenever your PL/SQL program violates an Oracle rule or exceeds a system-dependent limit. Every Oracle error has a number, but exceptions must be handled by name. So, PL/SQL predefines some common Oracle errors as exceptions. For example, PL/SQL raises the predefined exception NO_DATA_FOUND if a SELECT INTO statement returns no rows.
To handle other Oracle errors, you can use the OTHERS handler. The functions SQLCODE and SQLERRM are especially useful in the OTHERS handler because they return the Oracle error code and message text. Alternatively, you can use the pragma EXCEPTION_INIT to associate exception names with Oracle error codes.
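For example, here is a minimal sketch that names an otherwise anonymous Oracle error with EXCEPTION_INIT and falls back to SQLCODE and SQLERRM in an OTHERS handler. The lock attempt on the employees table, the exception name e_resource_busy, and the choice of ORA-00054 (resource busy with NOWAIT) are all illustrative:

DECLARE
   /* ORA-00054: resource busy and acquire with NOWAIT specified. */
   e_resource_busy   EXCEPTION;
   PRAGMA EXCEPTION_INIT (e_resource_busy, -54);
BEGIN
   LOCK TABLE employees IN EXCLUSIVE MODE NOWAIT;
EXCEPTION
   WHEN e_resource_busy
   THEN
      DBMS_OUTPUT.put_line ('employees is locked by another session.');
   WHEN OTHERS
   THEN
      /* SQLCODE and SQLERRM report the most recent Oracle error. */
      DBMS_OUTPUT.put_line ('Error ' || SQLCODE || ': ' || SQLERRM);
END;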
PL/SQL declares predefined exceptions globally in package STANDARD, which defines the PL/SQL environment. So, you need not declare them yourself. You can write handlers for predefined exceptions using the names in the following list:
Exception                   Oracle Error    SQLCODE Value
ACCESS_INTO_NULL            ORA-06530       -6530
CASE_NOT_FOUND              ORA-06592       -6592
COLLECTION_IS_NULL          ORA-06531       -6531
CURSOR_ALREADY_OPEN         ORA-06511       -6511
DUP_VAL_ON_INDEX            ORA-00001       -1
INVALID_CURSOR              ORA-01001       -1001
INVALID_NUMBER              ORA-01722       -1722
LOGIN_DENIED                ORA-01017       -1017
NO_DATA_FOUND               ORA-01403       +100
NOT_LOGGED_ON               ORA-01012       -1012
PROGRAM_ERROR               ORA-06501       -6501
ROWTYPE_MISMATCH            ORA-06504       -6504
SELF_IS_NULL                ORA-30625       -30625
STORAGE_ERROR               ORA-06500       -6500
SUBSCRIPT_BEYOND_COUNT      ORA-06533       -6533
SUBSCRIPT_OUTSIDE_LIMIT     ORA-06532       -6532
SYS_INVALID_ROWID           ORA-01410       -1410
TIMEOUT_ON_RESOURCE         ORA-00051       -51
TOO_MANY_ROWS               ORA-01422       -1422
VALUE_ERROR                 ORA-06502       -6502
ZERO_DIVIDE                 ORA-01476       -1476
Brief descriptions of the predefined exceptions follow (a short example of handling two of them by name appears after the list):

Exception: Raised when ...
ACCESS_INTO_NULL
Your program attempts to assign values to the attributes of an uninitialized (atomically null) object.
CASE_NOT_FOUND
None of the choices in the WHEN clauses of a CASE statement is selected, and there is no ELSE clause.
COLLECTION_IS_NULL
Your program attempts to apply collection methods other than EXISTS to an uninitialized (atomically null) nested table or varray, or the program attempts to assign values to the elements of an uninitialized nested table or varray.
CURSOR_ALREADY_OPEN
Your program attempts to open an already open cursor. A cursor must be closed before it can be reopened. A cursor FOR loop automatically opens the cursor to which it refers. So, your program cannot open that cursor inside the loop.
DUP_VAL_ON_INDEX
Your program attempts to store duplicate values in a database column that is constrained by a unique index.
INVALID_CURSOR
Your program attempts an illegal cursor operation such as closing an unopened cursor.
INVALID_NUMBER
In a SQL statement, the conversion of a character string into a number fails because the string does not represent a valid number. (In procedural statements, VALUE_ERROR is raised.) This exception is also raised when the LIMIT-clause expression in a bulk FETCH statement does not evaluate to a positive number.
LOGIN_DENIED
Your program attempts to log on to Oracle with an invalid username and/or password.
NO_DATA_FOUND
A SELECT INTO statement returns no rows, or your program references a deleted element in a nested table or an uninitialized element in an index-by table. SQL aggregate functions such as AVG and SUM always return a value or a null. So, a SELECT INTO statement that calls an aggregate function never raises NO_DATA_FOUND. The FETCH statement is expected to return no rows eventually, so when that happens, no exception is raised.
NOT_LOGGED_ON
Your program issues a database call without being connected to Oracle.
PROGRAM_ERROR
PL/SQL has an internal problem.
ROWTYPE_MISMATCH
The host cursor variable and PL/SQL cursor variable involved in an assignment have incompatible return types. For example, when an open host cursor variable is passed to a stored subprogram, the return types of the actual and formal parameters must be compatible.
SELF_IS_NULL
Your program attempts to call a MEMBER method on a null instance. That is, the built-in parameter SELF (which is always the first parameter passed to a MEMBER method) is null.
STORAGE_ERROR
PL/SQL runs out of memory or memory has been corrupted.
SUBSCRIPT_BEYOND_COUNT
Your program references a nested table or varray element using an index number larger than the number of elements in the collection.
SUBSCRIPT_OUTSIDE_LIMIT
Your program references a nested table or varray element using an index number (-1 for example) that is outside the legal range.
SYS_INVALID_ROWID
The conversion of a character string into a universal rowid fails because the character string does not represent a valid rowid.
TIMEOUT_ON_RESOURCE
A time-out occurs while Oracle is waiting for a resource.
TOO_MANY_ROWS
A SELECT INTO statement returns more than one row.
VALUE_ERROR
An arithmetic, conversion, truncation, or size-constraint error occurs. For example, when your program selects a column value into a character variable, if the value is longer than the declared length of the variable, PL/SQL aborts the assignment and raises VALUE_ERROR. In procedural statements, VALUE_ERROR is raised if the conversion of a character string into a number fails. (In SQL statements, INVALID_NUMBER is raised.)
ZERO_DIVIDE
Your program attempts to divide a number by zero.
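As a short illustration of handling a couple of these by name, here is a sketch against the employees table used elsewhere on this blog; the variable l_name and the department_id = 50 filter are purely illustrative:

DECLARE
   l_name   employees.last_name%TYPE;
BEGIN
   /* NO_DATA_FOUND if no employee matches;
      TOO_MANY_ROWS if more than one does. */
   SELECT last_name
     INTO l_name
     FROM employees
    WHERE department_id = 50;

   DBMS_OUTPUT.put_line ('Found: ' || l_name);
EXCEPTION
   WHEN NO_DATA_FOUND
   THEN
      DBMS_OUTPUT.put_line ('No matching employee.');
   WHEN TOO_MANY_ROWS
   THEN
      DBMS_OUTPUT.put_line ('More than one matching employee.');
END;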

Thursday, August 17, 2017

BULK COLLECT

http://www.oracle.com/technetwork/issue-archive/2008/08-mar/o28plsql-095155.html

DEVELOPER: PL/SQL Practices


On BULK COLLECT

By Steven Feuerstein, Oracle ACE Director

Best practices for knowing your LIMIT and kicking %NOTFOUND
I have started using BULK COLLECT whenever I need to fetch large volumes of data. This has caused me some trouble with my DBA, however. He is complaining that although my programs might be running much faster, they are also consuming way too much memory. He refuses to approve them for a production rollout. What's a programmer to do?
The most important thing to remember when you learn about and start to take advantage of features such as BULK COLLECT is that there is no free lunch. There is almost always a trade-off to be made somewhere. The tradeoff with BULK COLLECT, like so many other performance-enhancing features, is "run faster but consume more memory."
Specifically, memory for collections is stored in the program global area (PGA), not the system global area (SGA). SGA memory is shared by all sessions connected to Oracle Database, but PGA memory is allocated for each session. Thus, if a program requires 5MB of memory to populate a collection and there are 100 simultaneous connections, that program causes the consumption of 500MB of PGA memory, in addition to the memory allocated to the SGA.
Fortunately, PL/SQL makes it easy for developers to control the amount of memory used in a BULK COLLECT operation by using the LIMIT clause.
Suppose I need to retrieve all the rows from the employees table and then perform some compensation analysis on each row. I can use BULK COLLECT as follows:

PROCEDURE process_all_rows
IS
   TYPE employees_aat IS TABLE OF employees%ROWTYPE
      INDEX BY PLS_INTEGER;

   l_employees   employees_aat;
BEGIN
   SELECT *
     BULK COLLECT INTO l_employees
     FROM employees;

   FOR indx IN 1 .. l_employees.COUNT
   LOOP
      analyze_compensation (l_employees (indx));
   END LOOP;
END process_all_rows;


Very concise, elegant, and efficient code. If, however, my employees table contains tens of thousands of rows, each of which contains hundreds of columns, this program can cause excessive PGA memory consumption.
Consequently, you should avoid this sort of "unlimited" use of BULK COLLECT. Instead, move the SELECT statement into an explicit cursor declaration and then use a simple loop to fetch many, but not all, rows from the table with each execution of the loop body, as shown in Listing 1.
Code Listing 1: Using BULK COLLECT with LIMIT clause

PROCEDURE process_all_rows (limit_in IN PLS_INTEGER DEFAULT 100)
IS
    CURSOR employees_cur 
    IS 
        SELECT * FROM employees;

    TYPE employees_aat IS TABLE OF employees_cur%ROWTYPE
        INDEX BY PLS_INTEGER;

    l_employees employees_aat;
BEGIN   
    OPEN employees_cur;
    LOOP
        FETCH employees_cur 
            BULK COLLECT INTO l_employees LIMIT limit_in;

        FOR indx IN 1 .. l_employees.COUNT 
        LOOP
            analyze_compensation (l_employees(indx));
        END LOOP;

        EXIT WHEN l_employees.COUNT < limit_in;

   END LOOP;

   CLOSE employees_cur;
END process_all_rows;


The process_all_rows procedure in Listing 1 requests that up to the value of limit_in rows be fetched at a time. PL/SQL will reuse the same limit_in elements in the collection each time the data is fetched and thus also reuse the same memory. Even if my table grows in size, the PGA consumption will remain stable.
How do you decide what number to use in the LIMIT clause? Theoretically, you will want to figure out how much memory you can afford to consume in the PGA and then adjust the limit to be as close to that amount as possible.
From tests I (and others) have performed, however, it appears that you will see roughly the same performance no matter what value you choose for the limit, as long as it is at least 25. The test_diff_limits.sql script, included with the sample code for this column, demonstrates this behavior, using the ALL_SOURCE data dictionary view on an Oracle Database 11g instance. Here are the results I saw (in hundredths of seconds) when fetching all the rows (a total of 470,000):

Elapsed CPU time for limit of 1 = 1839
Elapsed CPU time for limit of 5 = 716
Elapsed CPU time for limit of 25 = 539
Elapsed CPU time for limit of 50 = 545
Elapsed CPU time for limit of 75 = 489
Elapsed CPU time for limit of 100 = 490
Elapsed CPU time for limit of 1000 = 501
Elapsed CPU time for limit of 10000 = 478
Elapsed CPU time for limit of 100000 = 527 


Kicking the %NOTFOUND Habit

I was very happy to learn that Oracle Database 10g will automatically optimize my cursor FOR loops to perform at speeds comparable to BULK COLLECT. Unfortunately, my company is still running on Oracle9i Database, so I have started converting my cursor FOR loops to BULK COLLECTs. I have run into a problem: I am using a LIMIT of 100, and my query retrieves a total of 227 rows, but my program processes only 200 of them. [The query is shown in Listing 2.] What am I doing wrong?
Code Listing 2: BULK COLLECT, %NOTFOUND, and missing rows

PROCEDURE process_all_rows
IS
   CURSOR table_with_227_rows_cur 
   IS 
      SELECT * FROM table_with_227_rows;

   TYPE table_with_227_rows_aat IS 
      TABLE OF table_with_227_rows_cur%ROWTYPE
      INDEX BY PLS_INTEGER;

   l_table_with_227_rows table_with_227_rows_aat;
BEGIN   
   OPEN table_with_227_rows_cur;
   LOOP
      FETCH table_with_227_rows_cur 
         BULK COLLECT INTO l_table_with_227_rows LIMIT 100;

         EXIT WHEN table_with_227_rows_cur%NOTFOUND;     /* cause of missing rows */

      FOR indx IN 1 .. l_table_with_227_rows.COUNT 
      LOOP
         analyze_compensation (l_table_with_227_rows(indx));
      END LOOP;
   END LOOP;

   CLOSE table_with_227_rows_cur;
END process_all_rows;


You came so close to a completely correct conversion from your cursor FOR loop to BULK COLLECT! Your only mistake was that you didn't give up the habit of using the %NOTFOUND cursor attribute in your EXIT WHEN clause.
The statement

EXIT WHEN 
table_with_227_rows_cur%NOTFOUND;


makes perfect sense when you are fetching your data one row at a time. With BULK COLLECT, however, that line of code can result in incomplete data processing, precisely as you described.
Let's examine what is happening when you run your program and why those last 27 rows are left out. After opening the cursor and entering the loop, here is what occurs:
1. The fetch statement retrieves rows 1 through 100. 
2. table_with_227_rows_cur%NOTFOUND evaluates to FALSE, and the rows are processed. 
3. The fetch statement retrieves rows 101 through 200. 
4. table_with_227_rows_cur%NOTFOUND evaluates to FALSE, and the rows are processed. 
5. The fetch statement retrieves rows 201 through 227. 
6. table_with_227_rows_cur%NOTFOUND evaluates to TRUE, and the loop is terminated, with 27 rows left to process!


When you are using BULK COLLECT and collections to fetch data from your cursor, you should never rely on the cursor attributes to decide whether to terminate your loop and data processing. 

So, to make sure that your query processes all 227 rows, replace this statement:

EXIT WHEN 
table_with_227_rows_cur%NOTFOUND; 

with

EXIT WHEN 
l_table_with_227_rows.COUNT = 0; 


Generally, you should keep all of the following in mind when working with BULK COLLECT (a consolidated sketch follows the list):

  • The collection is always filled sequentially, starting from index value 1.
  • It is always safe (that is, you will never raise a NO_DATA_FOUND exception) to iterate through a collection from 1 to collection.COUNT when it has been filled with BULK COLLECT.
  • The collection is empty when no rows are fetched.
  • Always check the contents of the collection (with the COUNT method) to see if there are more rows to process.
  • Ignore the values returned by the cursor attributes, especially %NOTFOUND.
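Putting those rules together, here is the loop shape I recommend. It is simply Listing 1 with the COUNT-based exit from the fix above moved to just after the fetch; analyze_compensation remains the hypothetical processing procedure used throughout this column:

PROCEDURE process_all_rows (limit_in IN PLS_INTEGER DEFAULT 100)
IS
   CURSOR employees_cur IS SELECT * FROM employees;

   TYPE employees_aat IS TABLE OF employees_cur%ROWTYPE
      INDEX BY PLS_INTEGER;

   l_employees   employees_aat;
BEGIN
   OPEN employees_cur;

   LOOP
      FETCH employees_cur BULK COLLECT INTO l_employees LIMIT limit_in;

      /* Decide on the collection contents, never on %NOTFOUND. */
      EXIT WHEN l_employees.COUNT = 0;

      FOR indx IN 1 .. l_employees.COUNT
      LOOP
         analyze_compensation (l_employees (indx));
      END LOOP;
   END LOOP;

   CLOSE employees_cur;
END process_all_rows;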