Note: node1, node2, and node3 are the existing nodes; node4 will be the new node.

Add instance to the cluster database on primary
1. On node4 do the following:
cd /oracle
mkdir -p admin oper backup log    (log already exists)
cd admin && mkdir oppr && cd oppr
mkdir -p adump dpdump hdump pfile wallet
Copy the contents of /oracle/admin/oppr/wallet from node1 to node4.
Copy setdb_crs, setdb_+ASM, and setdb_oppr from /oracle on node1 to /oracle on node4.
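The directory setup in step 1 can be scripted; the sketch below uses a BASE variable so it can be tried outside /oracle (on the real node BASE would be /oracle — the layout itself is taken from the steps above):

```shell
# Sketch of the step-1 directory layout for the oppr instance on node4.
# BASE defaults to a throwaway path for illustration; on the real node it is /oracle.
BASE=${BASE:-/tmp/oracle_demo}

# Top-level directories (log may already exist; mkdir -p is idempotent).
mkdir -p "$BASE/admin" "$BASE/oper" "$BASE/backup" "$BASE/log"

# Per-database admin tree, matching the /oracle/admin/oppr/wallet path used later.
mkdir -p "$BASE/admin/oppr/adump" "$BASE/admin/oppr/dpdump" \
         "$BASE/admin/oppr/hdump" "$BASE/admin/oppr/pfile" \
         "$BASE/admin/oppr/wallet"
```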
2. Make a copy of the tnsnames.ora on node1 and edit it to add the local listener entry for node4, for example:
LISTENER_OPPR4 =
(ADDRESS = (PROTOCOL = TCP)(HOST = node4_vip)(PORT = 1521))
LISTENERS_OPPR =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1_vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2_vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node3_vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node4_vip)(PORT = 1521))
)
Make sure you are using VIPs, then verify the entries with tnsping:
$ tnsping LISTENERS_OPPR
$ tnsping LISTENER_OPPR4
3. Create the undo tablespace for the 4th instance. Do this from node1:
create undo tablespace undotbs4 datafile '+DATA1' size 2g autoextend on maxsize 32767m;
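To confirm the new undo tablespace and its datafile landed on the shared storage, a quick dictionary-view check (a routine sanity check, not part of the original runbook) is:

```sql
-- Run from any instance; shows the datafile(s) backing the new undo tablespace
select tablespace_name, file_name, bytes/1024/1024 mb
from dba_data_files
where tablespace_name = 'UNDOTBS4';
```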
4. From node1, sqlplus into oppr (instance 1). Add redo log groups for thread 4:
alter database add logfile thread 4 group 7 size 100m;
alter database add logfile thread 4 group 8 size 100m;
Note: when the redo logfiles are created on the primary, they should also get created on the standby, since the broker is enabled (dg_broker_start=TRUE).
5. From node1, sqlplus into oppr (instance 1). Add standby logfiles on the primary:
alter database add standby logfile thread 4 group 31 size 100m;
alter database add standby logfile thread 4 group 32 size 100m;
alter database add standby logfile thread 4 group 33 size 100m;
6. Make sure the primary and standby are in sync: switch logfiles, check that they are applied, and confirm that recovery is running on the standby.
Verify that the online and standby redo logs for thread 4 were created on DR; if they were not, create them as described below.
To verify:
select group#, thread#, bytes/1024/1024 from v$log order by thread#;
select group#, thread#, bytes/1024/1024 from v$standby_log order by thread#;
To add them on the standby, do the following:
Disable recovery:
alter database recover managed standby database cancel;
alter system set dg_broker_start=false scope=both sid='*';
alter system set standby_file_management=manual scope=both sid='*';
Add logfiles:
alter database add logfile thread 4 group 401 size 50m;
alter database add logfile thread 4 group 402 size 50m;
alter database add logfile thread 4 group 403 size 50m;
alter database add standby logfile thread 4 group 411 size 500m;
alter database add standby logfile thread 4 group 412 size 500m;
alter database add standby logfile thread 4 group 413 size 500m;
Enable recovery:
alter system set standby_file_management=auto scope=both sid='*';
alter system set dg_broker_start=true scope=both sid='*';
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
7. Again make sure the primary and standby are in sync (switch logfiles, check that they are applied, and confirm recovery is running on the standby); only then issue the following on the primary:
alter database enable thread 4;
Then switch logfiles:
alter system archive log current;
alter system checkpoint global;
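One common way to perform the sync check in the step above is to compare archived log sequences per thread (standard v$ views; run the first query on the primary and the second on the standby):

```sql
-- On the primary: highest log sequence generated per thread
select thread#, max(sequence#) from v$archived_log group by thread#;

-- On the standby: highest sequence applied per thread; should closely trail the primary
select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;
```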
8. Set instance-specific parameters in the spfile:
alter system set cluster_database_instances=4 scope=spfile;
alter system set undo_tablespace=UNDOTBS4 scope=spfile sid='oppr4';
alter system set instance_name=oppr4 scope=spfile sid='oppr4';
alter system set instance_number=4 scope=spfile sid='oppr4';
alter system set thread=4 scope=spfile sid='oppr4';
alter system set local_listener=LISTENER_OPPR4 scope=spfile sid='oppr4';
alter system set remote_listener=LISTENERS_OPPR scope=spfile sid='*';
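Before the restart, the per-instance settings can be double-checked in v$spparameter, which shows the SID each value is bound to (a routine sanity check, not part of the original runbook):

```sql
-- Verify the sid='oppr4' entries landed in the spfile as intended
select sid, name, value
from v$spparameter
where name in ('undo_tablespace', 'instance_number', 'thread', 'local_listener')
order by name, sid;
```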
9. On node4, cd /oracle/product/11.1.0/network/admin and create a symlink for tnsnames.ora pointing to the tnsnames.ora in the ASM home.
10. Now shut down and restart the database. Before this, make sure setdb_oppr points to oppr4 and that the Oracle password file and init file exist for oppr4.
srvctl stop database -d oppr
srvctl add instance -d oppr -i oppr4 -n node4
Now start the database:
srvctl start database -d oppr
Modify the services to be preferred on the 3 existing nodes and available on the 4th node. Change the reporting service, if any, to run only on the 4th node.
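The service change above is done with srvctl modify service. The sketch below only prints the commands for review rather than executing them; the service names oppr_svc and oppr_rpt and the instance names are assumptions — substitute your own:

```shell
# Dry-run sketch: prints the srvctl invocations instead of running them.
# "oppr_svc" and "oppr_rpt" are hypothetical service names.
DB=oppr

# Main service: preferred on the 3 existing instances, available on the new one.
echo "srvctl modify service -d $DB -s oppr_svc -n -i oppr1,oppr2,oppr3 -a oppr4"

# Reporting service: run only on the new 4th instance.
echo "srvctl modify service -d $DB -s oppr_rpt -n -i oppr4"
```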
Now, as root, run the following command:
srvctl setenv nodeapps -n <new nodename> -t ORACLE_BASE=/oracle
Then recycle the instance and the listener on the new node only.
Add instance to the cluster database on DR
1. On node4 do the following:
cd /oracle
mkdir -p admin oper backup log
cd admin && mkdir opdr && cd opdr
mkdir -p adump dpdump hdump pfile wallet
Copy the contents of /oracle/admin/opdr/wallet from node1 to node4.
Copy setdb_crs, setdb_+ASM, and setdb_opdr from /oracle on node1 to /oracle on node4.
2. Make a copy of the tnsnames.ora on node1 and edit it to add the local listener entry for node4, for example:
LISTENER_OPDR4 =
(ADDRESS = (PROTOCOL = TCP)(HOST = node4_vip)(PORT = 1521))
LISTENERS_OPDR =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1_vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node2_vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node3_vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = node4_vip)(PORT = 1521))
)
Make sure you are using VIPs, then verify the entries with tnsping:
$ tnsping LISTENERS_OPDR
$ tnsping LISTENER_OPDR4
3. Set instance-specific parameters in the spfile:
alter system set cluster_database_instances=4 scope=spfile;
alter system set undo_tablespace=UNDOTBS4 scope=spfile sid='opdr4';
alter system set instance_name=opdr4 scope=spfile sid='opdr4';
alter system set instance_number=4 scope=spfile sid='opdr4';
alter system set thread=4 scope=spfile sid='opdr4';
alter system set local_listener=LISTENER_OPDR4 scope=spfile sid='opdr4';
alter system set remote_listener=LISTENERS_OPDR scope=spfile sid='*';
4. On node4, cd /oracle/product/11.1.0/network/admin and create a symlink for tnsnames.ora pointing to the tnsnames.ora in the ASM home.
5. Now shut down and restart the database. Before this, make sure setdb_opdr points to opdr4 and that the Oracle password file and init file exist for opdr4.
srvctl stop database -d opdr
srvctl add instance -d opdr -i opdr4 -n node4
Now start the database:
srvctl start database -d opdr
Modify the services to be preferred on the 3 existing nodes and available on the 4th node. Change the reporting service, if any, to run only on the 4th node.
Now, as root, run the following command:
srvctl setenv nodeapps -n <new nodename> -t ORACLE_BASE=/oracle
Then recycle the instance and the listener on the new node only.