MERCADOS FINANCIEROS

Friday, April 15, 2011

11g RAC Administration and Maintenance Tasks and Utilities

Task List:

Checking CRS Status
Viewing Name Of the Cluster
Viewing Nodes Configuration
Checking Votedisk Information
Checking OCR Disk information
Timeout Settings in Cluster
ADD/Remove OCR files
ADD/Remove Votedisk
Backing Up OCR
Backing Up Votedisk
Restoring OCR Devices
Restoring Voting Disk Devices
Changing Public IPs as well as Virtual IPs

Checking CRS Status:


The below two commands are generally used to check the status of CRS. The first lists the status of CRS
on the local node, whereas the second shows the CRS status across all the nodes in the cluster.

crsctl check crs <<-- for the local node
crsctl check cluster <<-- for remote nodes in the cluster

[root@node1-pub ~]# crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
[root@node1-pub ~]#

Checking Viability of CSS across nodes:

crsctl check cluster

For this command to run, CSS needs to be running on the local node. An "ONLINE" status for a remote node means that CSS is running on that node;
when CSS is down on a remote node, "OFFLINE" is displayed for it.

[root@node1-pub ~]# crsctl check cluster
node1-pub ONLINE
node2-pub ONLINE


Viewing Cluster name:


I use the below command to get the name of the cluster. You can also dump the OCR and view the name in the dump file.

ocrdump -stdout -keyname SYSTEM | grep -A 1 clustername | grep ORATEXT | awk '{print $3}'

[root@node1-pub ~]# ocrdump -stdout -keyname SYSTEM | grep -A 1 clustername | grep ORATEXT | awk '{print $3}'
test-crs
[root@node1-pub ~]#

OR

ocrconfig -export /tmp/ocr_exp.dat -s online
for i in `strings /tmp/ocr_exp.dat | grep -A 1 clustername` ; do if [ $i != 'SYSTEM.css.clustername' ]; then echo $i; fi; done


[root@node1-pub ~]# ocrconfig -export /tmp/ocr_exp.dat -s online
[root@node1-pub ~]# for i in `strings /tmp/ocr_exp.dat | grep -A 1 clustername` ; do if [ $i != 'SYSTEM.css.clustername' ]; then echo $i; fi; done
test-crs
[root@node1-pub ~]#

OR

Oracle creates a directory with the same name as the cluster under $ORA_CRS_HOME/cdata, so you can get the cluster name from this directory as well.

[root@node1-pub ~]# ls /u01/app/crs/cdata
localhost test-crs


Viewing No. Of Nodes configured in Cluster:


The below command can be used to find out the number of nodes registered in the cluster.
It also displays each node's public name, private name and virtual name along with its node number.

olsnodes -n -p -i

[root@node1-pub ~]# olsnodes -n -p -i
node1-pub 1 node1-prv node1-vip
node2-pub 2 node2-prv node2-vip


Viewing Votedisk Information:


The below command is used to view the no. of Votedisks configured in the Cluster.

crsctl query css votedisk

[root@node1-pub ~]# crsctl query css votedisk
0. 0 /u02/ocfs2/vote/VDFile_0
1. 0 /u02/ocfs2/vote/VDFile_1
2. 0 /u02/ocfs2/vote/VDFile_2
Located 3 voting disk(s).
[root@node1-pub ~]#



Viewing OCR Disk Information:


The below command is used to view the number of OCR files configured in the cluster. It also displays the OCR version
as well as storage space information. You can have at most 2 OCR files (the OCR and its mirror).

ocrcheck

[root@node1-pub ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 3848
Available space (kbytes) : 258272
ID : 744414276
Device/File Name : /u02/ocfs2/ocr/OCRfile_0
Device/File integrity check succeeded
Device/File Name : /u02/ocfs2/ocr/OCRfile_1
Device/File integrity check succeeded

Cluster registry integrity check succeeded


Various Timeout Settings in Cluster:


Disktimeout:
Maximum disk I/O latency in seconds from node to votedisk. Default value is 200 seconds. (Disk I/O)
Misscount:
Maximum network latency in seconds from node to node (interconnect). Default value is 60 seconds on Linux and 30 seconds on other Unix platforms. (Network I/O)
Misscount < Disktimeout

NOTE: Do not change these without contacting Oracle Support; doing so may cause logical corruption of the data.

IF
(Disk IO Time > Disktimeout) OR (Network IO time > Misscount)
THEN
REBOOT NODE
ELSE
DO NOT REBOOT
END IF;

crsctl get css disktimeout
crsctl get css misscount
crsctl get css reboottime

[root@node1-pub ~]# crsctl get css disktimeout
200

[root@node1-pub ~]# crsctl get css misscount
Configuration parameter misscount is not defined. <<-- This message indicates that misscount is not set manually and is at its
default value, which on Linux is 60 seconds. If you want to change it, you can do so as below. (Not recommended)

[root@node1-pub ~]# crsctl set css misscount 100
Configuration parameter misscount is now set to 100.
[root@node1-pub ~]# crsctl get css misscount
100

The below command sets the value of misscount back to its Default values:

crsctl unset css misscount

[root@node1-pub ~]# crsctl unset css misscount

[root@node1-pub ~]# crsctl get css reboottime
3


Add/Remove OCR file in Cluster:


Removing OCR File

(1) Get the Existing OCR file information by running ocrcheck utility.

[root@node1-pub ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 3852
Available space (kbytes) : 258268
ID : 744414276
Device/File Name : /u02/ocfs2/ocr/OCRfile_0 <-- OCR
Device/File integrity check succeeded
Device/File Name : /u02/ocfs2/ocr/OCRfile_1 <-- OCR Mirror
Device/File integrity check succeeded

Cluster registry integrity check succeeded

(2) The first command removes the OCR mirror (/u02/ocfs2/ocr/OCRfile_1). If you want to remove the OCR
file itself (/u02/ocfs2/ocr/OCRfile_0), run the second command.

ocrconfig -replace ocrmirror
ocrconfig -replace ocr

[root@node1-pub ~]# ocrconfig -replace ocrmirror
[root@node1-pub ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 3852
Available space (kbytes) : 258268
ID : 744414276
Device/File Name : /u02/ocfs2/ocr/OCRfile_0 <<-- OCR File
Device/File integrity check succeeded

Device/File not configured <-- OCR Mirror no longer exists

Cluster registry integrity check succeeded

Adding OCR

You need to add an OCR or OCR Mirror file in cases where you want to move the existing OCR file location to different devices.
The below command adds the OCR mirror file if an OCR file already exists.

(1) Get the Current status of OCR:

[root@node1-pub ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 3852
Available space (kbytes) : 258268
ID : 744414276
Device/File Name : /u02/ocfs2/ocr/OCRfile_0 <<-- OCR File
Device/File integrity check succeeded

Device/File not configured <-- OCR Mirror does not exist

Cluster registry integrity check succeeded

As you can see, I only have one OCR file and no OCR mirror.
So, I can add the second OCR (the OCR mirror) with the below command.

ocrconfig -replace ocrmirror /u02/ocfs2/ocr/OCRfile_1

[root@node1-pub ~]# ocrconfig -replace ocrmirror /u02/ocfs2/ocr/OCRfile_1
[root@node1-pub ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 262120
Used space (kbytes) : 3852
Available space (kbytes) : 258268
ID : 744414276
Device/File Name : /u02/ocfs2/ocr/OCRfile_0
Device/File integrity check succeeded
Device/File Name : /u02/ocfs2/ocr/OCRfile_1
Device/File integrity check succeeded

Cluster registry integrity check succeeded

You can have at most 2 OCR devices (the OCR itself and its single mirror) in a cluster. Adding an extra mirror gives you the below error message:

[root@node1-pub ~]# ocrconfig -replace ocrmirror /u02/ocfs2/ocr/OCRfile_2
PROT-21: Invalid parameter
[root@node1-pub ~]#


Add/Remove Votedisk file in Cluster:


Adding Votedisk:

Get the existing votedisks associated with the cluster. To be safe, bring the CRS stack down on all the nodes
except the one from which you are going to add the votedisk.

(1) Stop CRS on all the nodes in cluster but one.

[root@node2-pub ~]# crsctl stop crs

(2) Get the list of Existing Vote Disks

crsctl query css votedisk

[root@node1-pub ~]# crsctl query css votedisk
0. 0 /u02/ocfs2/vote/VDFile_0
1. 0 /u02/ocfs2/vote/VDFile_1
2. 0 /u02/ocfs2/vote/VDFile_2
Located 3 voting disk(s).

(3) Backup the votedisk files

Back up the existing votedisks as below, as the oracle user:

dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/vd/VDFile_0

[root@node1-pub ~]# su - oracle
[oracle@node1-pub ~]$ dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/vd/VDFile_0
41024+0 records in
41024+0 records out
[oracle@node1-pub ~]$

(4) Add an Extra Votedisk into the Cluster:

If it is on OCFS, then touch the file as oracle. On raw devices, initialize the raw device using the "dd" command (see the sketch after the transcript below).

touch /u02/ocfs2/vote/VDFile_3 <<-- as oracle
crsctl add css votedisk /u02/ocfs2/vote/VDFile_3 <<-- as oracle
crsctl query css votedisk

[root@node1-pub ~]# su - oracle
[oracle@node1-pub ~]$ touch /u02/ocfs2/vote/VDFile_3
[oracle@node1-pub ~]$ crsctl add css votedisk /u02/ocfs2/vote/VDFile_3
Now formatting voting disk: /u02/ocfs2/vote/VDFile_3.
Successful addition of voting disk /u02/ocfs2/vote/VDFile_3.
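
For the raw-device case mentioned above, a minimal sketch (the device path /dev/raw/raw4 is hypothetical, and the size should match your existing votedisks; the listing below shows them at roughly 20 MB):

dd if=/dev/zero of=/dev/raw/raw4 bs=1M count=20 <<-- zero out the raw device before adding it as a votedisk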

(5) Confirm that the file has been added successfully:

[root@node1-pub ~]# ls -l /u02/ocfs2/vote/VDFile_3
-rw-r----- 1 oracle oinstall 21004288 Oct 6 16:31 /u02/ocfs2/vote/VDFile_3
[root@node1-pub ~]# crsctl query css votedisks
Unknown parameter: votedisks
[root@node1-pub ~]# crsctl query css votedisk
0. 0 /u02/ocfs2/vote/VDFile_0
1. 0 /u02/ocfs2/vote/VDFile_1
2. 0 /u02/ocfs2/vote/VDFile_2
3. 0 /u02/ocfs2/vote/VDFile_3
Located 4 voting disk(s).
[root@node1-pub ~]#

Removing Votedisk:

Removing a votedisk from the cluster is very simple. The below command removes the given votedisk from the cluster configuration.

crsctl delete css votedisk /u02/ocfs2/vote/VDFile_3

[root@node1-pub ~]# crsctl delete css votedisk /u02/ocfs2/vote/VDFile_3
Successful deletion of voting disk /u02/ocfs2/vote/VDFile_3.
[root@node1-pub ~]#

[root@node1-pub ~]# crsctl query css votedisk
0. 0 /u02/ocfs2/vote/VDFile_0
1. 0 /u02/ocfs2/vote/VDFile_1
2. 0 /u02/ocfs2/vote/VDFile_2
Located 3 voting disk(s).
[root@node1-pub ~]#


Backing Up OCR


Oracle performs a physical backup of the OCR devices every 4 hours under the default backup directory $ORA_CRS_HOME/cdata/<cluster_name>
and then rolls that forward to daily and weekly backups. You can get the backup information by executing the below command.

ocrconfig -showbackup

[root@node1-pub ~]# ocrconfig -showbackup

node2-pub 2007/09/03 17:46:47 /u01/app/crs/cdata/test-crs/backup00.ocr

node2-pub 2007/09/03 13:46:45 /u01/app/crs/cdata/test-crs/backup01.ocr

node2-pub 2007/09/03 09:46:44 /u01/app/crs/cdata/test-crs/backup02.ocr

node2-pub 2007/09/03 01:46:39 /u01/app/crs/cdata/test-crs/day.ocr

node2-pub 2007/09/03 01:46:39 /u01/app/crs/cdata/test-crs/week.ocr
[root@node1-pub ~]#

Manually backing up the OCR

ocrconfig -manualbackup <<--Physical Backup of OCR

The above command backs up the OCR under the default backup directory. You can export the contents of the OCR using the below command (logical backup).

ocrconfig -export /tmp/ocr_exp.dat -s online <<-- Logical Backup of OCR


Restoring OCR


The below command is used to restore the OCR from a physical backup. Shut down CRS on all nodes first.

ocrconfig -restore

Locate the available backups

[root@node1-pub ~]# ocrconfig -showbackup

node2-pub 2007/09/03 17:46:47 /u01/app/crs/cdata/test-crs/backup00.ocr

node2-pub 2007/09/03 13:46:45 /u01/app/crs/cdata/test-crs/backup01.ocr

node2-pub 2007/09/03 09:46:44 /u01/app/crs/cdata/test-crs/backup02.ocr

node2-pub 2007/09/03 01:46:39 /u01/app/crs/cdata/test-crs/day.ocr

node2-pub 2007/09/03 01:46:39 /u01/app/crs/cdata/test-crs/week.ocr

node1-pub 2007/10/07 13:50:41 /u01/app/crs/cdata/test-crs/backup_20071007_135041.ocr

Perform Restore from previous Backup

[root@node2-pub ~]# ocrconfig -restore /u01/app/crs/cdata/test-crs/week.ocr

The above command restores the OCR from the week-old backup.
If you have a logical backup of the OCR (taken using the export option), then you can import it with the below command.

ocrconfig -import /tmp/ocr_exp.dat
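
Putting the pieces together, a minimal restore sequence might look like this (run as root, using a backup name from the -showbackup listing above):

crsctl stop crs <<-- on every node
ocrconfig -restore /u01/app/crs/cdata/test-crs/backup00.ocr <<-- on one node
crsctl start crs <<-- on every node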


Restoring Votedisks



(1) Shut down CRS on all the nodes in the cluster.
(2) Locate the current location of the votedisks.
(3) Restore each votedisk using the "dd" command from a previous good backup of the votedisk taken with the same "dd" command.
(4) Start CRS on all the nodes.

crsctl stop crs
crsctl query css votedisk
dd if= of= <<-- do this for all the votedisks
crsctl start crs
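
As an illustration of the dd step, using the backup location from the earlier votedisk backup example:

dd if=$ORACLE_BASE/bkp/vd/VDFile_0 of=/u02/ocfs2/vote/VDFile_0 <<-- repeat for each votedisk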


Changing Public and Virtual IP Address:



                    Current Config           Changed To

Node 1:

Public IP:          216.160.37.154           192.168.10.11
VIP:                216.160.37.153           192.168.10.111
Subnet:             216.160.37.159           192.168.10.0
Netmask:            255.255.255.248          255.255.255.0
Interface used:     eth0                     eth0
Hostname:           node1-pub.hingu.net      node1-pub.hingu.net

Node 2:

Public IP:          216.160.37.156           192.168.10.22
VIP:                216.160.37.157           192.168.10.222
Subnet:             216.160.37.159           192.168.10.0
Netmask:            255.255.255.248          255.255.255.0
Interface used:     eth0                     eth0
Hostname:           node2-pub.hingu.net      node2-pub.hingu.net

=======================================================================
(A)

Take the services, database, ASM instances and nodeapps down on both the nodes in the cluster. Also disable the nodeapps, ASM and database instances to prevent them from restarting in case a node gets rebooted during this process.


srvctl stop service -d test
srvctl stop database -d test
srvctl stop asm -n node1-pub
srvctl stop asm -n node2-pub
srvctl stop nodeapps -n node1-pub
srvctl stop nodeapps -n node2-pub
srvctl disable instance -d test -i test1,test2
srvctl disable asm -n node1-pub
srvctl disable asm -n node2-pub
srvctl disable nodeapps -n node1-pub
srvctl disable nodeapps -n node2-pub


(B)
Modify /etc/hosts and/or DNS, and ifcfg-eth0 (on the local node), with the new IP values,
on all the nodes.
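
As an example, the /etc/hosts entries for the public and virtual names after the change would look roughly like this (the -vip hostnames are assumed from the olsnodes output earlier in this post):

192.168.10.11    node1-pub.hingu.net   node1-pub
192.168.10.111   node1-vip.hingu.net   node1-vip
192.168.10.22    node2-pub.hingu.net   node2-pub
192.168.10.222   node2-vip.hingu.net   node2-vip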

(C)
Restart the specific network interface in order to use the new IP.


ifconfig eth0 down
ifconfig eth0 up


Or, you can restart the network.
CAUTION: on NAS, restarting the entire network may cause the node to be rebooted.
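
If you do restart the whole network stack, on a RHEL-style system that would be:

service network restart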

(D)
Update the OCR with the New Public IP.
In case of public IP, you have to delete the interface first and then add it back with the new IP address.

As the oracle user, issue the below commands:


oifcfg delif -global eth0
oifcfg setif -global eth0/192.168.10.0:public


(E)
Update the OCR with the New Virtual IP.
Virtual IP is part of the nodeapps and so you can modify the nodeapps to update the Virtual IP information.

As the privileged user (root), issue the below commands:


srvctl modify nodeapps -n node1-pub -A 192.168.10.111/255.255.255.0/eth0 <-- for Node 1
srvctl modify nodeapps -n node2-pub -A 192.168.10.222/255.255.255.0/eth0 <-- for Node 2
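
To verify the change, you can print the nodeapps VIP configuration (a quick check, assuming the 10g-style srvctl syntax used throughout this post; -a shows the VIP settings):

srvctl config nodeapps -n node1-pub -a
srvctl config nodeapps -n node2-pub -a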


(F)
Enable the nodeapps, ASM, database Instances for all the Nodes.


srvctl enable instance -d test -i test1,test2
srvctl enable asm -n node1-pub
srvctl enable asm -n node2-pub
srvctl enable nodeapps -n node1-pub
srvctl enable nodeapps -n node2-pub


(G)
Update the listener.ora file on each node with the correct IP addresses in case it uses IP addresses instead of hostnames.

(H)
Restart the nodeapps, ASM and database instances


srvctl start nodeapps -n node1-pub
srvctl start nodeapps -n node2-pub
srvctl start asm -n node1-pub
srvctl start asm -n node2-pub
srvctl start database -d test
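
Finally, confirm that everything is back online:

srvctl status database -d test
crs_stat -t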

Thursday, April 14, 2011

SQLSERVER CREATE TRIGGER

1> create table employee(
2> ID int,
3> name nvarchar (10),
4> salary int,
5> start_date datetime,
6> city nvarchar (10),
7> region char (1))
8> GO
1>
2> insert into employee (ID, name, salary, start_date, city, region)
3> values (1, 'Jason', 40420, '02/01/94', 'New York', 'W')
4> GO

(1 rows affected)
1> insert into employee (ID, name, salary, start_date, city, region)
2> values (2, 'Robert',14420, '01/02/95', 'Vancouver','N')
3> GO

(1 rows affected)
1> insert into employee (ID, name, salary, start_date, city, region)
2> values (3, 'Celia', 24020, '12/03/96', 'Toronto', 'W')
3> GO

(1 rows affected)
1> insert into employee (ID, name, salary, start_date, city, region)
2> values (4, 'Linda', 40620, '11/04/97', 'New York', 'N')
3> GO

(1 rows affected)
1> insert into employee (ID, name, salary, start_date, city, region)
2> values (5, 'David', 80026, '10/05/98', 'Vancouver','W')
3> GO

(1 rows affected)
1> insert into employee (ID, name, salary, start_date, city, region)
2> values (6, 'James', 70060, '09/06/99', 'Toronto', 'N')
3> GO

(1 rows affected)
1> insert into employee (ID, name, salary, start_date, city, region)
2> values (7, 'Alison',90620, '08/07/00', 'New York', 'W')
3> GO

(1 rows affected)
1> insert into employee (ID, name, salary, start_date, city, region)
2> values (8, 'Chris', 26020, '07/08/01', 'Vancouver','N')
3> GO

(1 rows affected)
1> insert into employee (ID, name, salary, start_date, city, region)
2> values (9, 'Mary', 60020, '06/09/02', 'Toronto', 'W')
3> GO

(1 rows affected)
1>
2> select * from employee
3> GO
ID name salary start_date city region
----------- ---------- ----------- ----------------------- ---------- ------
1 Jason 40420 1994-02-01 00:00:00.000 New York W
2 Robert 14420 1995-01-02 00:00:00.000 Vancouver N
3 Celia 24020 1996-12-03 00:00:00.000 Toronto W
4 Linda 40620 1997-11-04 00:00:00.000 New York N
5 David 80026 1998-10-05 00:00:00.000 Vancouver W
6 James 70060 1999-09-06 00:00:00.000 Toronto N
7 Alison 90620 2000-08-07 00:00:00.000 New York W
8 Chris 26020 2001-07-08 00:00:00.000 Vancouver N
9 Mary 60020 2002-06-09 00:00:00.000 Toronto W

(9 rows affected)
1>
2> CREATE TABLE myArchive (
3> AID int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
4> type nvarchar(6) NOT NULL,
5> whenchanged smalldatetime NOT NULL DEFAULT Getdate(),
6> ID int,
7> newName nvarchar(30),
8> oldName nvarchar(50)
9> )
10> GO
1>
2>
3> CREATE TRIGGER myTriggerINSERT
4> ON Employee
5> FOR INSERT
6> AS
7> DECLARE @ID int, @Name nvarchar(30)
8>
9> SET @ID = (SELECT ID FROM inserted)
10> SET @Name = (SELECT Name FROM inserted)
11>
12> INSERT myArchive (type, ID, newName) VALUES('INSERT', @ID, @Name)
13> GO
1>
2>
3> CREATE TRIGGER myTriggerDELETE
4> ON Employee
5> FOR DELETE
6> AS
7> DECLARE @ID int, @Name nvarchar(30)
8>
9> SET @ID = (SELECT ID FROM deleted)
10> SET @Name = (SELECT Name FROM deleted)
11>
12> INSERT myArchive (type, ID, oldName ) VALUES('DELETE', @ID, @Name)
13> GO
1>
2> CREATE TRIGGER myTriggerUPDATE
3> ON Employee
4> INSTEAD OF UPDATE
5> AS
6>
7> DECLARE @ID int, @newName nvarchar(30), @oldName nvarchar(30)
8>
9> IF (SELECT ID FROM inserted) <> (SELECT ID FROM deleted)
10> RAISERROR ('You are not allowed to change ID.', 10,1)
11> ELSE
12> BEGIN
13>
14> --set local variables
15> SET @ID = (SELECT ID FROM inserted)
16> SET @newName = (SELECT Name FROM inserted)
17> SET @oldName = (SELECT Name FROM deleted)
18>
19> --write to table
20> UPDATE Employee SET Name = @newName WHERE ID = @ID
21> -- write to archive
22> INSERT myArchive (type, ID, newName, oldName) VALUES('UPDATE', @ID, @newName, @oldName)
23> END
24> GO
1>
2> INSERT Employee (id, name) VALUES (13, 'Rickie')
3> GO

(1 rows affected)
1>
2> SELECT * FROM myArchive
3> GO
AID type whenchanged ID newName oldName
----------- ------ -------------------- ----------- ------------------------------ ------------------
1 INSERT 2006-10-10 20:21:00 13 Rickie NULL

(1 rows affected)
1>
2> UPDATE Employee
3> SET Name = 'Rick'
4> WHERE ID = 3
5> GO

(1 rows affected)

(1 rows affected)
1>
2> SELECT * FROM myArchive
3> GO
AID type whenchanged ID newName oldName
----------- ------ -------------------- ----------- ------------------------------ ------------------
1 INSERT 2006-10-10 20:21:00 13 Rickie NULL
2 UPDATE 2006-10-10 20:21:00 3 Rick Celia

(2 rows affected)
1>
2>
3> drop table myArchive
4> drop table employee
5> GO
1>
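
One caveat worth noting: the triggers above read scalar variables from the inserted/deleted tables, so they behave correctly only for single-row DML. A set-based version of the INSERT trigger (the name myTriggerINSERT2 is just for illustration) avoids that limitation:

CREATE TRIGGER myTriggerINSERT2
ON Employee
FOR INSERT
AS
-- archive every inserted row, not just one
INSERT myArchive (type, ID, newName)
SELECT 'INSERT', ID, name FROM inserted
GO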

Wednesday, April 13, 2011

DBMS_APPLICATION_INFO PL/SQL

EXAMPLE

CREATE OR REPLACE PROCEDURE add_employee(
  name       VARCHAR2,
  salary     NUMBER,
  manager    NUMBER,
  title      VARCHAR2,
  commission NUMBER,
  department NUMBER) AS
BEGIN
  -- register the module/action so the work is visible in v$session
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'add_employee',
    action_name => 'insert into emp');
  INSERT INTO emp
    (ename, empno, sal, mgr, job, hiredate, comm, deptno)
  VALUES (name, emp_seq.nextval, salary, manager, title, SYSDATE,
          commission, department);
  -- clear the module/action when done
  DBMS_APPLICATION_INFO.SET_MODULE(null, null);
END;
/


set serveroutput on

DECLARE
mod_in VARCHAR2(64);
act_in VARCHAR2(64);

mod_out VARCHAR2(64);
act_out VARCHAR2(64);

display_str VARCHAR2(200);
BEGIN
mod_in := 'Test Module';
act_in := 'Test Action';
dbms_application_info.set_module(mod_in, act_in);

dbms_lock.sleep(5);

dbms_application_info.read_module(mod_out, act_out);

display_str := 'Module Is '||mod_out||' and Action is '||act_out;

dbms_output.put_line(display_str);
END;
/
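
While the instrumented code is running, the module and action are visible to other sessions, for example:

SELECT sid, module, action
FROM v$session
WHERE module = 'add_employee';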



--------------------------------------------------------------------------------

Monday, April 11, 2011

Oracle RAC Course Notes

select * from gv$logfile

select * from v$cluster_interconnects

PACKAGE TO CONFIGURE MODULE AND ACTION (DBMS_APPLICATION_INFO)

CREATE SYMBOLIC LINK FOR

crsctl query css votedisk

OCR

/var/opt/oracle/ocr.loc

Oracle: Creating SQL Statement Traces

SQL> ALTER SYSTEM SET trace_enabled = FALSE;

SQL> ALTER SYSTEM SET trace_enabled = TRUE;

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background
processes

sql_trace = TRUE

to disable trace:

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run:

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;


2. Enable trace at session level

to start trace:

ALTER SESSION SET sql_trace = TRUE;

to stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;


3. Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT sid, serial# FROM v$session WHERE osuser = '<OSUSER>';

to start trace:

EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

- or -

-- Oracle 10.1
EXECUTE DBMS_SYSTEM.set_sql_trace_in_session(sid=>SID, serial#=>SERIAL, sql_trace=>TRUE);
EXECUTE DBMS_SYSTEM.set_sql_trace_in_session(sid=>SID, serial#=>SERIAL, sql_trace=>FALSE);
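
Once tracing is enabled, the trace files are written to the directory pointed to by user_dump_dest, which you can look up with:

SELECT value FROM v$parameter WHERE name = 'user_dump_dest';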



Thursday, April 7, 2011

Oracle Clusterware R2 Scan DNS

So how does it work:

First, my (named, Linux) DNS is running on 10.161.102.40.
This DNS does the naming for cluster01.nl.oracle.com and pts.local.
For cluster01.nl.oracle.com a "delegation" is made, so that every request to a machine in the domain .cluster01.nl.oracle.com is delegated to the GNS (at the GNS VIP).

In DNS:

cluster01.nl.oracle.com.      NS   gns.cluster01.nl.oracle.com.
gns.cluster01.nl.oracle.com.  A    10.161.102.55
So, once the cluster installation is done, the GNS in the cluster will be started and requests for scan.cluster01.nl.oracle.com will be forwarded to it. The GNS will then take care of the request and answer with the three addresses in the cluster that serve as SCAN listeners:

[root@gridnode01pts05 ~]# nslookup scan.cluster01.nl.oracle.com
Server: 10.161.102.40
Address: 10.161.102.40#53
Non-authoritative answer:
Name: scan.cluster01.nl.oracle.com
Address: 10.161.102.78
Name: scan.cluster01.nl.oracle.com
Address: 10.161.102.79
Name: scan.cluster01.nl.oracle.com
Address: 10.161.102.77
Also, with dig, you can see all information coming from GNS:

[root@dns-dhcp ~]# dig scan.cluster01.nl.oracle.com
; <<>> DiG 9.3.4-P1 <<>> scan.cluster01.nl.oracle.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46016
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 10, ADDITIONAL: 10
;; QUESTION SECTION:
;scan.cluster01.nl.oracle.com. IN A
;; ANSWER SECTION:
scan.cluster01.nl.oracle.com. 6 IN A 10.161.102.78
scan.cluster01.nl.oracle.com. 6 IN A 10.161.102.79
scan.cluster01.nl.oracle.com. 6 IN A 10.161.102.77
;; AUTHORITY SECTION:
oracle.com. 10732 IN NS dns2.us.oracle.com.
oracle.com. 10732 IN NS dns3.us.oracle.com.
oracle.com. 10732 IN NS dns4.us.oracle.com.
oracle.com. 10732 IN NS dns1-us.us.oracle.com.
oracle.com. 10732 IN NS dnsmaster1.oracle.com.
oracle.com. 10732 IN NS dnsmaster2.oracle.com.
oracle.com. 10732 IN NS dnsmaster3.oracle.com.
oracle.com. 10732 IN NS dnsmaster4.oracle.com.
oracle.com. 10732 IN NS dnsmaster5.oracle.com.
oracle.com. 10732 IN NS dnsmaster6.oracle.com.
;; ADDITIONAL SECTION:
dns2.us.oracle.com. 3984 IN A 130.35.249.52
dns3.us.oracle.com. 3984 IN A 144.20.190.70
dns4.us.oracle.com. 3984 IN A 138.2.202.15
dns1-us.us.oracle.com. 3984 IN A 130.35.249.41
dnsmaster1.oracle.com. 1060 IN A 192.135.82.4
dnsmaster2.oracle.com. 1060 IN A 192.135.82.20
dnsmaster3.oracle.com. 1060 IN A 192.135.82.36
dnsmaster4.oracle.com. 1060 IN A 192.135.82.52
dnsmaster5.oracle.com. 1060 IN A 192.135.82.70
dnsmaster6.oracle.com. 1060 IN A 192.135.82.84
;; Query time: 0 msec
;; SERVER: 10.161.102.40#53(10.161.102.40)
;; WHEN: Sat Sep 19 17:15:47 2009
;; MSG SIZE rcvd: 486

Later, when the database is installed, you can use the SCAN with SQLNet EZ connect to connect to the database. Can't wait, I just have to demo it now:

[oracle@gridnode01pts05 ~]$ sqlplus system/oracle@scan.cluster01.nl.oracle.com:1521/dbpts05.pts.local
SQL*Plus: Release 11.2.0.1.0 Production on Sat Sep 19 17:11:32 2009
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL>

Wednesday, April 6, 2011

Single Instance TO RAC

EXAMPLE 1

How to Convert 10g Single-Instance database to 10g RAC using Manual Conversion procedure [ID 747457.1]


Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 and later [Release: 10.1 and later ]
Information in this document applies to any platform.
This note applies to all Unix platforms.
Goal
This article provides a method to convert a Single Instance 10g database to a RAC 10g database. It can be used for 10gR2 or 11gR1, too.
Solution
Following are the steps:


1. On the first node


Make a full database backup of the single-instance database before you change anything.

1) Install Oracle Clusterware on all nodes you intend to have as cluster members, following the directions in Chapter 4 and Chapter 5 (Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide).
http://www.oracle.com/pls/db102/portal.portal_db?selected=3#index-ORA

If necessary, patch the clusterware software (the clusterware version needs to be equal to or higher than the rdbms version) as well as the rdbms software.

During the "Install Oracle Database 10g Software with Real Application Clusters", make sure that you select a new Oracle home other than the one from which the single-instance database was running in case the single-instance database is running on one of the systems. Upgrade the RAC RDBMS software to the same version as the original single-instance database.

2) Configure the cluster listener, i.e. configure the listener through netca from the new cluster RDBMS home. If necessary, stop the single-instance listeners when they are running on one of the clustered nodes, in case they are using the same listener ports. Ideally, use 'LISTENER' as the name for that listener.

3) Restore the backup of datafiles,redo logs,control file to a shared location on the cluster
(If you are using ASM ,then please follow Note 452758.1 How to Convert a Single-Instance ASM to Cluster ASM)

4) Take a backup of the original single-instance pfile to e.g. /tmp/initORCL.ora and add the following entries to the pfile, e.g. for a two-node RAC cluster:


*.cluster_database = TRUE
*.cluster_database_instances = 2
*.undo_management=AUTO
<SID1>.undo_tablespace=undotbs (undo tablespace which already exists)
<SID1>.instance_name=<SID1>
<SID1>.instance_number=1
<SID1>.thread=1
<SID1>.local_listener=listener_<node1>
<SID2>.instance_name=<SID2>
<SID2>.instance_number=2
<SID2>.local_listener=listener_<node2>
<SID2>.thread=2
<SID2>.undo_tablespace=UNDOTBS2

<SID1> is the instance whose instance_number is "1" and <SID2> the one whose instance_number is "2", e.g. ORCL1 and ORCL2.


5) Change the location of the control files in the parameter file, from the local drive to the shared cluster file system location, i.e. from

control_files='<local_path>/control01.ctl'

to

control_files='<shared_path>/control01.ctl'

6) Create an spfile from the pfile (the spfile should be stored on the shared device)

export ORACLE_SID=ORCL1
sqlplus "/ as sysdba"
create spfile='<shared_path>/spfileORCL.ora' from pfile='/tmp/initORCL.ora';
exit

7) Create the $ORACLE_HOME/dbs/init<SID>.ora file, e.g. initORCL1.ora, that contains the following entry

spfile='spfile_path_name'

spfile_path_name is the complete path name of the SPFILE.

example :-

spfile='/cfs/spfile/spfileORCL1.ora'

8) create new password file for ORCL1 instance.

orapwd file=orapwORCL1 password=oracle

9) Start the database in mount stage

10) Rename the datafiles and redo logs to the new shared device

alter database rename file '<old_path>' to '<new_path>';

11) Add redo logs for the second instance (or more when multiple instances will be started)

alter database
add logfile thread 2
group 3 ('<shared_path>/redo2_01.log') size <size>,
group 4 ('<shared_path>/redo2_02.log') size <size>;

alter database enable public thread 2;


12) Create the undo tablespace for the second (or more) instance from the existing instance

Path and file name will differ for your environment


CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE
'/dev/RAC/undotbs_02_210.dbf' SIZE 200M ;


13) Open your database (i.e. alter database open;) and run $ORACLE_HOME/rdbms/admin/catclust.sql to create cluster database specific views within the existing instance


2. On the second node and other nodes

14) Set ORACLE_SID and ORACLE_HOME environment variables on the second node

15) Create the $ORACLE_HOME/dbs/init<SID>.ora file, e.g. initORCL2.ora, for the second node the same way as in point 7.

16) Create a new password file for the second instance ORCL2, as in point 8

orapwd file=orapwORCL2 password=oracle

17) Start the second Instance

3. On one of the nodes

18) After configuring the listener, you have to add the database to the cluster as below:


srvctl add database -d <db_name> -o <oracle_home> -p <spfile_path>

srvctl add instance -d <db_name> -i <instance1_name> -n <node1_name>

srvctl add instance -d <db_name> -i <instance2_name> -n <node2_name>

19) In case ASM is used, add the rdbms instance / ASM dependency, e.g.

srvctl modify instance -d <db_name> -i <instance_name> -s <+ASM1>
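
After registering the database and instances, a quick sanity check (using the same utilities shown elsewhere in this post):

srvctl status database -d <db_name>
crs_stat -t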
References
NOTE:208375.1 - How To Convert A Single Instance Database To RAC In A Cluster File System Configuration
NOTE:452758.1 - How to Convert a Single-Instance ASM to Cluster ASM

EXAMPLE 2

Convert a single instance database to RAC
There are different ways to convert a single instance database to RAC.

1) Manual Method

2) Using rconfig

3) Using DBCA

4) Using grid control

Manual Method

Following are the single instance details

ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1

ORACLE_SID=prod

Datafile location = /u03/oradata/prod — /u03 is an OCFS shared file system, so we are going to keep the datafiles at the same location while converting to RAC.

In case your files are on some other disk and not on a shared file system, you need to copy them and then rename the files when you mount the instance.

Database version = 10g R2 (10.2.0.1.0)

Steps to convert single instance to RAC

Step 1) Install clusterware on the nodes on which you want to setup RAC

You can refer to the post on my website. Basically you need to set up the IP addresses and other OS-related files and variables before you can install the clusterware.

Your clusterware version must be greater than or equal to the single-instance RDBMS version. Make sure you do this step correctly.

Following are cluster installation details

Cluster name : crs

Cluster install location : /u01/app/oracle/product/10.2.0/crs

OCR File location : /u03/oracrs/ocr.ora

Voting disk location : /u03/oracrs/vote.crs

Step 2) Install Oracle Database 10g Real Application Cluster software

Just install the software. The RDBMS software version must be the same as your single-instance RDBMS software version.

Following are the details of installation

RAC ORACLE_HOME=/u01/app/oracle/product/10.2.0/db

Number of instances = 2

Node names for 2 instances = ocvmrh2103, ocvmrh2190

/u01 is an individual filesystem and is not mounted commonly on both nodes. This is a separate-ORACLE_HOME architecture.

Step 3) Take the backup of single instance database and restore the same to the shared file system location.

This step is not required in my case as I created the database on a shared filesystem only. But this is only for demo purposes. In a real scenario, you need to copy the datafiles to the shared filesystem.

Step 4) Copy init.ora file of single instance and add following parameters

bash-3.00$ cp initprod.ora /tmp/initprod.ora
In my case the database name is “prod” and I am converting this single instance database to a 2 node RAC. So in my case instance 1 name becomes prod1 and instance 2 name becomes prod2

So add following parameters to /tmp/initprod.ora file

*.cluster_database = TRUE
*.cluster_database_instances = 2
*.undo_management=AUTO
prod1.undo_tablespace=UNDOTBS1
prod1.instance_name=prod1
prod1.instance_number=1
prod1.thread=1
prod1.local_listener=listener_ocvmrh2103
prod2.instance_name=prod2
prod2.instance_number=2
prod2.local_listener=listener_ocvmrh2190
prod2.thread=2
prod2.undo_tablespace=UNDOTBS2
Step 5) Change the location of the controlfiles in the above /tmp/initprod.ora file

In my case the controlfiles are present in the /u03 location, which is an OCFS shared filesystem, so I don't have to change the location of the controlfiles in my init.ora.

In case you have moved the controlfiles along with the datafiles to the shared filesystem location, you need to change the controlfile path in the above init.ora file copied to the /tmp location.

Step 6) Create SPFILE from PFILE

SQL> select name from v$database;

NAME
---------
PROD

SQL> create spfile='/u03/oradata/prod/spfileprod.ora' from pfile='/tmp/initprod.ora';

File created.

SQL>
Step 7) Copy spfile to the RAC ORACLE_HOME/dbs location of instance 1 and create pfile

bash-3.00$ cp spfileprod.ora /u01/app/oracle/product/10.2.0/db/dbs/spfileprod.ora
bash-3.00$ pwd
/u01/app/oracle/product/10.2.0/db/dbs
bash-3.00$ cat initprod1.ora
spfile='/u01/app/oracle/product/10.2.0/db/dbs/spfileprod.ora'
Step 8) Create a new password file for the prod1 instance under the RAC Oracle home

bash-3.00$ orapwd file=orapwprod1 password=welcome1
Step 9) Start database in mount stage and rename datafiles and redo log files to new shared location

In my case, since the datafiles and online redo logs are already placed at the shared location, I don't need to do this step. However, in a real scenario this step is required.

Make sure that your ORACLE_HOME variable is set to the RAC ORACLE_HOME.

bash-3.00$ echo $ORACLE_HOME
/u01/app/oracle/product/10.2.0/db
bash-3.00$ echo $ORACLE_SID
prod1
SQL> startup mount pfile=initprod1.ora
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 1222168 bytes
Variable Size 213912040 bytes
Database Buffers 620756992 bytes
Redo Buffers 2969600 bytes
Database mounted.
SQL>
Step 10) Add second thread to database which will be for instance 2

SQL> alter database add logfile thread 2 group 4 ('/u03/oradata/prod/redo2_01.dbf') size 50M, group 5 ('/u03/oradata/prod/redo2_02.dbf') size 50M, group 6 ('/u03/oradata/prod/redo2_03.dbf') size 50M;

Database altered.

SQL> ALTER DATABASE OPEN;

Database altered.

SQL> alter database enable public thread 2;

Database altered.
Step 11) Create undo tablespace for instance 2

The name of the undo tablespace should be the same as you specified in the init.ora file in step 4 above.

SQL> CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '/u03/oradata/prod/undotbs2_01.dbf' size 25M;

Tablespace created.
Step 12) Run $ORACLE_HOME/rdbms/admin/catclust.sql to create cluster database specific views within the existing instance 1

SQL> @?/rdbms/admin/catclust.sql
Step 13) On the second node, set ORACLE_HOME and SID for instance 2

bash-3.00$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db
bash-3.00$ export ORACLE_SID=prod2
Create initprod2.ora on the second node similar to node 1. In this case you have to copy the spfile to the second node as well. You can also keep the spfile in a shared location (/u03 in my case) and put the same path in initprod2.ora.

bash-3.00$ pwd
/u01/app/oracle/product/10.2.0/db/dbs
bash-3.00$ ls -lrt spfileprod.ora
-rw-r----- 1 oracle oinstall 3584 Feb 19 12:36 spfileprod.ora
bash-3.00$ cat initprod2.ora
spfile='/u01/app/oracle/product/10.2.0/db/dbs/spfileprod.ora'
Step 14) Create new password file for instance 2

bash-3.00$ orapwd file=orapwprod2 password=welcome1
Step 15) Start the second instance

SQL> startup pfile=initprod2.ora
ORACLE instance started.

Total System Global Area 838860800 bytes
Fixed Size 1222168 bytes
Variable Size 213912040 bytes
Database Buffers 620756992 bytes
Redo Buffers 2969600 bytes
Database mounted.
Database opened.
You might face some issues while starting the second instance, as the bdump, udump and cdump directory locations will be those of the single-instance ORACLE_HOME, which are not present on node2.

Also, you might hit the following error:

SQL> startup
ORA-09925: Unable to create audit trail file
Linux Error: 2: No such file or directory
Additional information: 9925
SQL> Disconnected
Make sure you alter the following parameters to a valid location and copy spfileprod.ora again to node2 (see the sketch after this list):

audit_file_dest
background_dump_dest
user_dump_dest
core_dump_dest
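
A minimal sketch of the fix, assuming a directory that exists on both nodes (the paths below are hypothetical):

SQL> alter system set audit_file_dest='/u01/app/oracle/admin/prod/adump' scope=spfile sid='*';
SQL> alter system set background_dump_dest='/u01/app/oracle/admin/prod/bdump' scope=spfile sid='*';
SQL> alter system set user_dump_dest='/u01/app/oracle/admin/prod/udump' scope=spfile sid='*';
SQL> alter system set core_dump_dest='/u01/app/oracle/admin/prod/cdump' scope=spfile sid='*';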
Step 16) Add the converted database to the cluster

Move the spfile to a common location such as /u03/oradata/prod and modify both pfiles so that they refer to the same spfile and there are no two copies.

bash-3.00$ srvctl add database -d prod -o /u01/app/oracle/product/10.2.0/db -p /u03/oradata/prod/spfileprod.ora
bash-3.00$ srvctl add instance -d prod -i prod1 -n OCVMRH2103
bash-3.00$ srvctl add instance -d prod -i prod2 -n OCVMRH2190


**************************************************************************************************

Rconfig Method

Pre-requisites:

1. Configure Shared Storage setup ASM, NFS (NAS) or clustered storage.

2. A clustered Grid Infrastructure install with at least one Scan listener address.

3. rconfig imposes a restriction on the choice of listener. The listener must be the default listener, and it must run from the Grid Infrastructure home.

srvctl add listener -p 1522


After conversion, you can reconfigure the listener as required.

4. Install the Clustered Oracle Database Software as per the documentation; this can be done by choosing the right configuration option. Refer to:

http://download.oracle.com/docs/cd/E11882_01/install.112/e10813/racinstl.htm#BABJGBHB

I’ve installed the new 11gR2 clustered ORACLE_HOME at

/u01/app/oracle/product/11.2.0/db_2

on both the nodes orarac01 and orarac02

Converting Single Instance Database using rconfig

1. As an “oracle” OS user navigate to

$ORACLE_HOME/assistants/rconfig/sampleXMLs

2. Open the sample file ConvertToRAC_AdminManaged.xml using a text editor such as vi. This XML sample file contains comment lines that provide instructions on how to edit the file to suit your site’s specific needs.

3. Ensure you edit the xml with convert verify="ONLY"
The following are the sample entries:

(The sample XML listing did not survive the page formatting. The key values it sets are: the source Oracle home /u01/app/oracle/product/11.2.0/db_1, the target clustered Oracle home /u01/app/oracle/product/11.2.0/db_2, the SYS credentials (sys/sys, role sysdba), and the target shared disk group +DATA.)


4. Move the spfile to the shared location. In this case the single-instance database was hosted on a file system; in this process we will move the datafiles from file system storage to ASM.

So create the spfile in the shared disk location:

SQL> create spfile='+DATA/TEST/spfiletest.ora' from pfile;


You can check that the file was created through “asmcmd”.

5. Take a backup of existing $SOURCE_ORACLE_HOME/dbs/initTEST.ora, and create a new $SOURCE_ORACLE_HOME/dbs/initTEST.ora with the following parameter:

spfile='+DATA/TEST/spfiletest.ora'


6. Restart the Database

7. Now let's test if “rconfig” is ready for conversion; navigate to $ORACLE_HOME/bin and issue the following command:


$ ./rconfig $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml


The above command validates (as we’ve set convert=”ONLY”) whether rconfig is ready for conversion. If the output throws any error, diagnose and troubleshoot to fix the issue. Refer to the following output for a successful validation:

...
Operation Succeeded
...
There is no return value for this step
...


8. Now we are ready for conversion; edit the xml file “ConvertToRAC_AdminManaged.xml” and change:

from:

convert verify="ONLY"

to:

convert verify="YES"

9. Perform the conversion:


$ ./rconfig $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml


The conversion will take some time to complete. The progress can be monitored from the logs located at $ORACLE_BASE/cfgtoollogs/rconfig

10. Once the conversion is complete, you’d get a message similar to the one in step 7.

11. Perform sanity checks and tweak the listener to suit your needs.





EXAMPLE 3


1. Convert the oracle home on each Node

(Note 211177.1 on Metalink)

a. Log in as the Oracle software owner and make sure any databases running out of this Oracle home are down

b. cd $ORACLE_HOME/rdbms/lib

c. make -f ins_rdbms.mk rac_on

If this step did not fail with fatal errors, then proceed to step d.

d. make -f ins_rdbms.mk ioracle

2. Each instance requires its own redo thread. So add a new logfile thread for each additional instance.

alter database add logfile thread 2
  group 4 ('/oradata1/ORA2/ORA2_t2g4_m1.rdo', '/oradata1/ORA2/ORA2_t2g4_m2.rdo') size 50m,
  group 5 ('/oradata1/ORA2/ORA2_t2g5_m1.rdo', '/oradata1/ORA2/ORA2_t2g5_m2.rdo') size 50m,
  group 6 ('/oradata1/ORA2/ORA2_t2g6_m1.rdo', '/oradata1/ORA2/ORA2_t2g6_m2.rdo') size 50m
/
ALTER DATABASE ENABLE PUBLIC THREAD 2
/

3. Each instance requires its own undo tablespace. So add an undo tablespace for each additional instance

create undo tablespace undo02 datafile '/oradata1/ORA2/ORA2_undo02_01.dbf' size 4001m
/

4. Create the cluster views needed for RAC

SQL> @?/rdbms/admin/catclust

5. If you are using an spfile, create an init.ora from it.

SQL> create pfile='/tmp/initORA.ora' from spfile
/

6. Edit the init.ora to include the cluster parameters

*.cluster_database_instances=2
*.cluster_database=TRUE
ORA1.instance_name='ORA1'
ORA2.instance_name='ORA2'
ORA1.instance_number=1
ORA2.instance_number=2
ORA1.thread=1
ORA2.thread=2
ORA1.undo_tablespace='UNDO01'
ORA2.undo_tablespace='UNDO02'
ORA1.local_listener='LISTENER_ORA1'
ORA2.local_listener='LISTENER_ORA2'
ORA1.remote_listener='LISTENER_ORA2'
ORA2.remote_listener='LISTENER_ORA1'

7. Shutdown and startup using the edited init.ora

SQL> connect / as sysdba
SQL> shutdown immediate
SQL> startup pfile='/tmp/initORA.ora'

8. If the db starts up ok using this init.ora, create your spfile in a shared location

SQL> create spfile='/sharedlocation/spfileORA.ora' from pfile='/tmp/initORA.ora';

9. On each node create a link in $ORACLE_HOME/dbs to the shared spfile, ie

cd $ORACLE_HOME/dbs
ln -s /sharedlocation/spfileORA.ora spfileORA1.ora

10. Add the database and instances to the cluster registry

srvctl add database -d ORA -o $ORACLE_HOME
srvctl add instance -d ORA -i ORA1 -n oraserv1
srvctl add instance -d ORA -i ORA2 -n oraserv2

11. Start the db through server control

srvctl start database -d ORA

NB: If you still have one instance up from step 7 you will get an error, but this is nothing to worry about, as the node that is down should still start.

12. Create services as needed.

This can be done through the dbca under Service management or manually as follows:

srvctl add service -d ORA -s ORA_TAF -r ORA1,ORA2

Clusterware

Applying the 11.1.0.7 patchset

$ crsctl query crs softwareversion host01

$ crsctl query crs activeversion

$ srvctl stop nodeapps -n host01 -r    -- brings down the nodeapps (gsd, ons)
$ crs_stat -t

./runInstaller

Node 1

#/u01/app/crs/bin/crsctl stop crs -wait

#/u01/app/crs/install/root111.sh

$ crsctl query crs softwareversion host01
$ crsctl query crs softwareversion host02


$ srvctl stop nodeapps -n host02 -r    -- brings down the nodeapps (gsd, ons)
$ crs_stat -t

Node 2
#/u01/app/crs/bin/crsctl stop crs -wait

crsctl query crs softwareversion host02






Backup ASM using RMAN

$ /u01/app/crs/bin/srvctl stop database -d ORA11G

rman target sys

rman> startup force;

rman> shutdown immediate;

rman> startup mount;

rman> backup as backupset database tag 'DB_BACKUPSET';

rman> list backup;

rman> list backup summary;

rman> backup backupset 3,4 format '/home/oracle/backups/%U';


$ ls -l /home/oracle/backups/