How to Convert 10g Single-Instance database to 10g RAC using Manual Conversion procedure [ID 747457.1]
--------------------------------------------------------------------------------
Modified 04-AUG-2010 Type HOWTO Status ARCHIVED
In this Document
Goal
Solution
References
--------------------------------------------------------------------------------
Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 and later [Release: 10.1 and later ]
Information in this document applies to any platform.
This note applies to all Unix platforms.
Goal
This article provides a method to convert a Single Instance 10g database to a RAC 10g database. It can be used for 10gR2 or 11gR1, too.
Solution
Following are the steps:
1. On the first node
Make a full database backup of the single-instance database before you change anything.
1) Install Oracle Clusterware on all nodes you intend to have as cluster members, following the directions in Chapter 4 and Chapter 5 of the Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide:
http://www.oracle.com/pls/db102/portal.portal_db?selected=3#index-ORA
If necessary, patch the Clusterware software (the Clusterware version needs to be equal to or higher than the RDBMS version) as well as the RDBMS software.
During the "Install Oracle Database 10g Software with Real Application Clusters" step, if the single-instance database is running on one of the cluster nodes, make sure you select a new Oracle home different from the one the single-instance database runs from. Patch the RAC RDBMS software to the same version as the original single-instance database.
2) Configure the cluster listener, i.e. configure the listener through netca from the new cluster RDBMS home. If the single-instance listeners run on one of the clustered nodes and use the same ports, stop them first. Ideally, use 'LISTENER' as the name for the cluster listener.
3) Restore the backup of the datafiles, redo logs and control file to a shared location on the cluster.
(If you are using ASM ,then please follow Note 452758.1 How to Convert a Single-Instance ASM to Cluster ASM)
4) Take a backup of the original single-instance pfile to e.g. /tmp/initorcl.ora and add the following entries to the pfile, e.g. for a two-node RAC cluster:
*.cluster_database = TRUE
*.cluster_database_instances = 2
*.undo_management=AUTO
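In addition to the cluster-wide parameters above, each instance needs its own instance-specific entries. The following is a sketch assuming the instances are named ORCL1 and ORCL2 and the undo tablespaces UNDOTBS1/UNDOTBS2 (adapt the names to your environment; per-instance local_listener entries can also be added here, as shown in Example 2 below):
ORCL1.instance_name=ORCL1
ORCL1.instance_number=1
ORCL1.thread=1
ORCL1.undo_tablespace=UNDOTBS1
ORCL2.instance_name=ORCL2
ORCL2.instance_number=2
ORCL2.thread=2
ORCL2.undo_tablespace=UNDOTBS2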
5) Change the location of the control files in the parameter file from the local drive to the shared cluster file system location, i.e.
control_files='<old local path>/control01.ctl'
becomes
control_files='<new shared path>/control01.ctl'
6) Create an spfile from the pfile (the spfile should be stored on the shared device):
export ORACLE_SID=ORCL1
sqlplus "/ as sysdba"
create spfile='<shared location>/spfileORCL1.ora' from pfile='/tmp/initorcl.ora';
exit
7) Create the $ORACLE_HOME/dbs/initORCL1.ora file containing only the following entry:
spfile='spfile_path_name'
spfile_path_name is the complete path name of the SPFILE.
Example:
spfile='/cfs/spfile/spfileORCL1.ora'
8) Create a new password file for the ORCL1 instance:
orapwd file=orapwORCL1 password=oracle
9) Start the database in mount stage.
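For example (a minimal sketch, assuming the initORCL1.ora created in step 7 is in place in $ORACLE_HOME/dbs):
export ORACLE_SID=ORCL1
sqlplus "/ as sysdba"
SQL> startup mount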
10) Rename the datafiles and redo logs to the new shared device:
alter database rename file '<old local path>/<file name>' to '<new shared path>/<file name>';
(repeat for each datafile and redo log file)
11) Add redo logs for the second instance (or more when multiple instances will be started); group numbers, sizes and paths depend on your environment:
alter database
add logfile thread 2
group 3 ('<shared path>/redo2_01.log') size 50M,
group 4 ('<shared path>/redo2_02.log') size 50M;
alter database enable public thread 2;
12) Create the undo tablespace for the second (and any additional) instance from the existing instance. The path and file name will be different for your environment:
CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE
'/dev/RAC/undotbs_02_210.dbf' SIZE 200M ;
13) Open your database (i.e. alter database open;) and run $ORACLE_HOME/rdbms/admin/catclust.sql to create cluster database specific views within the existing instance
2. On the second node and other nodes
14) Set ORACLE_SID and ORACLE_HOME environment variables on the second node
15) Create the $ORACLE_HOME/dbs/initORCL2.ora file pointing to the shared spfile, as in step 7.
16) Create a new password file for the second instance (ORCL2), as in step 8:
orapwd file=orapwORCL2 password=oracle
17) Start the second instance.
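Roughly, steps 14 to 17 on the second node look like this (a sketch, assuming the same RAC Oracle home path as on node 1 and that initORCL2.ora from step 15 is in place):
export ORACLE_HOME=<RAC ORACLE_HOME>
export ORACLE_SID=ORCL2
cd $ORACLE_HOME/dbs
orapwd file=orapwORCL2 password=oracle
sqlplus "/ as sysdba"
SQL> startup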
3. On one of the nodes
18) After configuring the listener, register the database and its instances with the cluster as below (substitute your own database name, Oracle home, spfile path, instance and node names):
srvctl add database -d <db_name> -o <RAC ORACLE_HOME> -p <spfile path>
srvctl add instance -d <db_name> -i <instance1_name> -n <node1>
srvctl add instance -d <db_name> -i <instance2_name> -n <node2>
19) In case ASM is used, add the RDBMS instance / ASM instance dependency, e.g.
srvctl modify instance -d <db_name> -i <instance_name> -s <ASM instance name>
References
NOTE:208375.1 - How To Convert A Single Instance Database To RAC In A Cluster File System Configuration
NOTE:452758.1 - How to Convert a Single-Instance ASM to Cluster ASM
EXAMPLE 2
Convert a single instance database to RAC
There are different ways to convert a single instance database to RAC.
1) Manual Method
2) Using rconfig
3) Using DBCA
4) Using grid control
Manual Method
Following are the single instance details
ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
ORACLE_SID=prod
Datafile location = /u03/oradata/prod (/u03 is an OCFS shared file system), so we are going to keep the datafiles at the same location while converting to RAC.
If your files are on some other disk that is not a shared file system, you need to copy them to shared storage and then rename the files when you mount the instance.
Database version = 10g R2 (10.2.0.1.0)
Steps to convert single instance to RAC
Step 1) Install clusterware on the nodes on which you want to setup RAC
You can refer to the clusterware installation post on my website. Basically you need to set up the IP addresses and other OS related files and variables before you can install the clusterware.
Your clusterware version must be greater than or equal to the single-instance RDBMS version. Make sure you get this step right.
Following are cluster installation details
Cluster name : crs
Cluster install location : /u01/app/oracle/product/10.2.0/crs
OCR File location : /u03/oracrs/ocr.ora
Voting disk location : /u03/oracrs/vote.crs
Step 2) Install Oracle Database 10g Real Application Cluster software
Just install the software. The RDBMS software version must be the same as your single-instance RDBMS software version.
Following are the details of installation
RAC ORACLE_HOME=/u01/app/oracle/product/10.2.0/db
Number of instances = 2
Node names for 2 instances = ocvmrh2103, ocvmrh2190
/u01 is an individual filesystem and is not mounted commonly on both nodes; this is a separate (non-shared) ORACLE_HOME architecture.
Step 3) Take the backup of single instance database and restore the same to the shared file system location.
This step is not required in my case as I created the database on a shared filesystem, but that is only for demo purposes. In a real scenario, you need to copy the datafiles to the shared filesystem.
Step 4) Copy init.ora file of single instance and add following parameters
bash-3.00$ cp initprod.ora /tmp/initprod.ora
In my case the database name is “prod” and I am converting this single instance database to a 2 node RAC. So in my case instance 1 name becomes prod1 and instance 2 name becomes prod2
So add following parameters to /tmp/initprod.ora file
*.cluster_database = TRUE
*.cluster_database_instances = 2
*.undo_management=AUTO
prod1.undo_tablespace=UNDOTBS1
prod1.instance_name=prod1
prod1.instance_number=1
prod1.thread=1
prod1.local_listener=listener_ocvmrh2103
prod2.instance_name=prod2
prod2.instance_number=2
prod2.local_listener=listener_ocvmrh2190
prod2.thread=2
prod2.undo_tablespace=UNDOTBS2
Step 5) change the location of controlfile in the above /tmp/initprod.ora file
In my case the controlfiles are present in /u03, which is an OCFS shared filesystem, so I don't have to change the location of the controlfiles in my init.ora.
In case you have moved the controlfiles along with the datafiles to the shared filesystem location, you need to change the path of the controlfiles in the init.ora file copied to /tmp.
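For reference, the entry to adjust is the control_files parameter. A sketch with hypothetical file names, showing the value before and after moving the controlfiles to the shared /u03 location:
# before (local storage)
*.control_files='/u01/oradata/prod/control01.ctl','/u01/oradata/prod/control02.ctl'
# after (shared OCFS location)
*.control_files='/u03/oradata/prod/control01.ctl','/u03/oradata/prod/control02.ctl'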
Step 6) Create SPFILE from PFILE
SQL> select name from v$database;
NAME
---------
PROD
SQL> create spfile='/u03/oradata/prod/spfileprod.ora' from pfile='/tmp/initprod.ora';
File created.
SQL>
Step 7) Copy spfile to the RAC ORACLE_HOME/dbs location of instance 1 and create pfile
bash-3.00$ cp spfileprod.ora /u01/app/oracle/product/10.2.0/db/dbs/spfileprod.ora
bash-3.00$ pwd
/u01/app/oracle/product/10.2.0/db/dbs
bash-3.00$ cat initprod1.ora
spfile='/u01/app/oracle/product/10.2.0/db/dbs/spfileprod.ora'
Step 8)Create new password file for prod1 instance under RAC oracle home
bash-3.00$ orapwd file=orapwprod1 password=welcome1
Step 9) Start database in mount stage and rename datafiles and redo log files to new shared location
In my case, since the datafiles and online redo logs are already at the shared location, I don't need to do this step. However, in a real scenario this step is required.
Make sure that your ORACLE_HOME variable is set to the RAC ORACLE_HOME.
bash-3.00$ echo $ORACLE_HOME
/u01/app/oracle/product/10.2.0/db
bash-3.00$ echo $ORACLE_SID
prod1
SQL> startup mount pfile=initprod1.ora
ORACLE instance started.
Total System Global Area 838860800 bytes
Fixed Size 1222168 bytes
Variable Size 213912040 bytes
Database Buffers 620756992 bytes
Redo Buffers 2969600 bytes
Database mounted.
SQL>
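Had the files been restored to a different shared path, the renames would be done now, while the database is mounted. A sketch with hypothetical file names:
SQL> alter database rename file '/u02/oradata/prod/system01.dbf' to '/u03/oradata/prod/system01.dbf';
SQL> alter database rename file '/u02/oradata/prod/redo01.log' to '/u03/oradata/prod/redo01.log';
(repeat for every datafile and online redo log)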
Step 10) Add a second redo thread to the database, which will be used by instance 2
SQL> alter database add logfile thread 2 group 4 ('/u03/oradata/prod/redo2_01.dbf') size 50M, group 5 ('/u03/oradata/prod/redo2_02.dbf') size 50M, group 6 ('/u03/oradata/prod/redo2_03.dbf') size 50M;
Database altered.
SQL> ALTER DATABASE OPEN;
Database altered.
SQL> alter database enable public thread 2;
Database altered.
Step 11) Create undo tablespace for instance 2
The name of the undo tablespace should be the same as the one you specified in the init.ora file in step 4 above.
SQL> CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '/u03/oradata/prod/undotbs2_01.dbf' size 25M;
Tablespace created.
Step 12) Run $ORACLE_HOME/rdbms/admin/catclust.sql to create cluster database specific views within the existing instance 1
SQL> @?/rdbms/admin/catclust.sql
Step 13) On the second node, set ORACLE_HOME and SID for instance 2
bash-3.00$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db
bash-3.00$ export ORACLE_SID=prod2
Create initprod2.ora on the second node similar to node 1. In this case you have to copy the spfile to the second node as well. You can also keep the spfile in a shared location (/u03 in my case) and put the same path in initprod2.ora.
bash-3.00$ pwd
/u01/app/oracle/product/10.2.0/db/dbs
bash-3.00$ ls -lrt spfileprod.ora
-rw-r----- 1 oracle oinstall 3584 Feb 19 12:36 spfileprod.ora
bash-3.00$ cat initprod2.ora
spfile='/u01/app/oracle/product/10.2.0/db/dbs/spfileprod.ora'
Step 14) Create new password file for instance 2
bash-3.00$ orapwd file=orapwprod2 password=welcome1
Step 15) Start the second instance
SQL> startup pfile=initprod2.ora
ORACLE instance started.
Total System Global Area 838860800 bytes
Fixed Size 1222168 bytes
Variable Size 213912040 bytes
Database Buffers 620756992 bytes
Redo Buffers 2969600 bytes
Database mounted.
Database opened.
You might face some issues while starting the second instance, as the bdump, udump and cdump directory locations will still point under the single-instance ORACLE_HOME, which is not present on node2.
You might also hit the following error:
SQL> startup
ORA-09925: Unable to create audit trail file
Linux Error: 2: No such file or directory
Additional information: 9925
SQL> Disconnected
Make sure you alter the following parameters to a valid location and copy spfileprod.ora again to node2:
audit_file_dest
background_dump_dest
user_dump_dest
core_dump_dest
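A sketch of the fix, assuming a directory tree such as /u01/app/oracle/admin/prod exists on both nodes (the adump/bdump/udump/cdump subdirectories must be created first; the paths here are hypothetical):
SQL> alter system set audit_file_dest='/u01/app/oracle/admin/prod/adump' scope=spfile sid='*';
SQL> alter system set background_dump_dest='/u01/app/oracle/admin/prod/bdump' scope=spfile sid='*';
SQL> alter system set user_dump_dest='/u01/app/oracle/admin/prod/udump' scope=spfile sid='*';
SQL> alter system set core_dump_dest='/u01/app/oracle/admin/prod/cdump' scope=spfile sid='*';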
Step 16) Add the converted database to cluster
Move the spfile to a common location such as /u03/oradata/prod and modify both pfiles so that they refer to the same spfile and there are not two copies.
bash-3.00$ srvctl add database -d prod -o /u01/app/oracle/product/10.2.0/db -p /u03/oradata/prod/spfileprod.ora
bash-3.00$ srvctl add instance -d prod -i prod1 -n OCVMRH2103
bash-3.00$ srvctl add instance -d prod -i prod2 -n OCVMRH2190
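Once registered, the database can be managed through srvctl from any node, for example:
bash-3.00$ srvctl start database -d prod
bash-3.00$ srvctl status database -d prod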
**************************************************************************************************
Rconfig Method
Pre-requisites:
1. Configure shared storage: set up ASM, NFS (NAS) or clustered storage.
2. A clustered Grid Infrastructure install with at least one SCAN listener address.
3. rconfig imposes a restriction on the choice of listener. The listener must be the default listener, and it must run from the Grid Infrastructure home.
srvctl add listener -p 1522
After conversion, you can reconfigure the listener as required.
4. Install the clustered Oracle Database software as per the documentation; this can be done by choosing the right configuration option. Refer to:
http://download.oracle.com/docs/cd/E11882_01/install.112/e10813/racinstl.htm#BABJGBHB
I’ve installed the new 11gR2 clustered ORACLE_HOME at
/u01/app/oracle/product/11.2.0/db_2
on both the nodes orarac01 and orarac02
Converting Single Instance Database using rconfig
1. As an “oracle” OS user navigate to
$ORACLE_HOME/assistants/rconfig/sampleXMLs
2. Open the sample file ConvertToRAC_AdminManaged.xml using a text editor such as vi. This XML sample file contains comment lines that provide instructions on how to edit the file to suit your site’s specific needs.
3. Ensure that you edit the xml with the Convert element set to verify="ONLY" for this first, validation-only run.
The following are the sample entries:
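The sample listing itself was not captured above; the following is an approximate sketch of the key elements of ConvertToRAC_AdminManaged.xml. Element names and layout can differ between releases, so always edit the copy shipped under $ORACLE_HOME/assistants/rconfig/sampleXMLs rather than retyping this. The source Oracle home path, SYS password, instance prefix and Flash Recovery Area disk group below are placeholders; the target home, SID, node names and +DATA disk group are taken from the earlier steps:
<n:RConfig xmlns:n="http://www.oracle.com/rconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.oracle.com/rconfig rconfig.xsd">
  <n:ConvertToRAC>
    <!-- verify="ONLY" runs the prerequisite checks without converting -->
    <n:Convert verify="ONLY">
      <n:SourceDBHome>/u01/app/oracle/product/11.2.0/db_1</n:SourceDBHome>
      <n:TargetDBHome>/u01/app/oracle/product/11.2.0/db_2</n:TargetDBHome>
      <n:SourceDBInfo SID="TEST">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <n:NodeList>
        <n:Node name="orarac01"/>
        <n:Node name="orarac02"/>
      </n:NodeList>
      <n:InstancePrefix>TEST</n:InstancePrefix>
      <n:SharedStorage type="ASM">
        <n:TargetDatabaseArea>+DATA</n:TargetDatabaseArea>
        <n:TargetFlashRecoveryArea>+FRA</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>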
4. Move the spfile to the shared location. In this case the single-instance database was hosted on a file system, and during the conversion we will move the datafiles from file system storage to ASM.
So create the spfile in the shared disk location:
SQL> create spfile='+DATA/TEST/spfiletest.ora' from pfile;
You can check if the file is created through “asmcmd”
5. Take a backup of existing $SOURCE_ORACLE_HOME/dbs/initTEST.ora, and create a new $SOURCE_ORACLE_HOME/dbs/initTEST.ora with the following parameter:
spfile='+DATA/TEST/spfiletest.ora'
6. Restart the Database
7. Now let's test whether "rconfig" is ready for conversion. Navigate to $ORACLE_HOME/bin and issue the following command:
$./rconfig $ORACLE_HOME/assistants/rconfig/sampleXMLs
/ConvertToRAC_AdminManaged.xml
The above command only validates (as we have set verify="ONLY") whether rconfig is ready for conversion. If the output throws any error, diagnose and troubleshoot to fix the issue. Refer to the following output of a successful validation:
(rconfig prints an XML response; on a successful validation the Result element contains "Operation Succeeded" and the ReturnValue element reports "There is no return value for this step".)
8. Now we are ready for the actual conversion. Edit the xml file "ConvertToRAC_AdminManaged.xml" again and change the Convert element from verify="ONLY" to verify="YES".
9. Perform the conversion
$./rconfig $ORACLE_HOME/assistants/rconfig/sampleXMLs
/ConvertToRAC_AdminManaged.xml
The conversion will take some time to complete. The progress can be monitored from the logs located at $ORACLE_BASE/cfgtoollogs/rconfig
10. Once the conversion is complete you get a message similar to the one shown in step 7.
11. Perform sanity checks and tweak the listener to suit your needs.
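For example, a minimal sanity check that all instances are up (the database name TEST is assumed from the earlier steps):
SQL> select inst_id, instance_name, host_name, status from gv$instance;
$ srvctl status database -d TEST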
EXAMPLE 3
1. Convert (relink) the Oracle home on each node to enable the RAC option
(Note 211177.1 on Metalink)
a. Login as the Oracle software owner and make sure any databases running out of this oracle home are down
b. cd $ORACLE_HOME/rdbms/lib
c. make -f ins_rdbms.mk rac_on
If this step did not fail with fatal errors, proceed to step d.
d. make -f ins_rdbms.mk ioracle
2. Each instance requires its own redo thread, so add a new logfile thread for each additional instance.
alter database add logfile thread 2
group 4 ('/oradata1/ORA2/ORA2_t2g4_m1.rdo', '/oradata1/ORA2/ORA2_t2g4_m2.rdo') size 50m,
group 5 ('/oradata1/ORA2/ORA2_t2g5_m1.rdo', '/oradata1/ORA2/ORA2_t2g5_m2.rdo') size 50m,
group 6 ('/oradata1/ORA2/ORA2_t2g6_m1.rdo', '/oradata1/ORA2/ORA2_t2g6_m2.rdo') size 50m
/
ALTER DATABASE ENABLE PUBLIC THREAD 2
/
3. Each instance requires its own undo tablespace, so add an undo tablespace for each additional instance.
create undo tablespace undo02 datafile '/oradata1/ORA2/ORA2_undo02_01.dbf' size 4001m
/
4. Create the cluster views needed for RAC
SQL> @?/rdbms/admin/catclust
5. If you are using an spfile, create an init.ora from it.
SQL> create pfile='/tmp/initORA.ora' from spfile
/
6. Edit the init.ora to include the cluster parameters
*.cluster_database_instances=2
*.cluster_database=TRUE
ORA1.instance_name='ORA1'
ORA2.instance_name='ORA2'
ORA1.instance_number=1
ORA2.instance_number=2
ORA1.thread=1
ORA2.thread=2
ORA1.undo_tablespace='UNDO01'
ORA2.undo_tablespace='UNDO02'
ORA1.local_listener='LISTENER_ORA1'
ORA2.local_listener='LISTENER_ORA2'
ORA1.remote_listener='LISTENER_ORA2'
ORA2.remote_listener='LISTENER_ORA1'
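Note that the LISTENER_ORA1 / LISTENER_ORA2 aliases referenced by local_listener and remote_listener must resolve through tnsnames.ora on each node. A sketch, with hypothetical VIP host names and the default port:
LISTENER_ORA1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = oraserv1-vip)(PORT = 1521))
LISTENER_ORA2 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = oraserv2-vip)(PORT = 1521))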
7. Shutdown and startup using the edited init.ora
SQL> connect / as sysdba
SQL> shutdown immediate
SQL> startup pfile='/tmp/initORA.ora'
8. If the db starts up ok using this init.ora, create your spfile in a shared location
SQL> create spfile='/sharedlocation/spfileORA.ora' from pfile='/tmp/initORA.ora';
9. On each node create a link in $ORACLE_HOME/dbs to the shared spfile, ie
cd $ORACLE_HOME/dbs
ln -s /sharedlocation/spfileORA.ora spfileORA1.ora
(on node 2 the link would be named spfileORA2.ora)
10. Add the database and instances to the cluster registry
srvctl add database -d ORA -o $ORACLE_HOME
srvctl add instance -d ORA -i ORA1 -n oraserv1
srvctl add instance -d ORA -i ORA2 -n oraserv2
11. Start the db through server control
srvctl start database -d ORA
NB: If you still have one instance up from step 7 you will get an error, but this is nothing to worry about, as the instance that was down will still be started.
12. Create services as needed.
This can be done through the dbca under Service management or manually as follows:
srvctl add service -d ORA -s ORA_TAF -r ORA1,ORA2