Tuesday, October 29, 2013

Resizing Oracle online redo logs with Oracle Data Guard in place.

Step1

ON PRIMARY

check primary

select status, instance_name, database_role from v$database, v$instance;

check current size of the online redo logs

select thread#, group#, bytes/1024/1024 from v$log;

check current size of the standby redo logs

select thread#, group#, bytes/1024/1024 from v$standby_log;

Step2

ON STANDBY


check standby

select status, instance_name, database_role from v$database, v$instance;

check current size of the online redo logs

select thread#, group#, bytes/1024/1024 from v$log;

check standby redo log size

select thread#, group#, bytes/1024/1024 from v$standby_log;


Step3

ON STANDBY

Set standby_file_management to MANUAL.

show parameter standby_file_management

alter system set standby_file_management=manual sid='*' scope=both;

show parameter standby_file_management



Step4

ON PRIMARY


On the primary database, check the status of the online redo logs, then resize them by adding new groups at the larger size and dropping the old groups once they are INACTIVE.
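
A quick way to see which of the old groups can be dropped (a minimal check; only INACTIVE groups can be dropped):

select group#, thread#, status from v$log order by thread#, group#;

-- if an old group is still CURRENT or ACTIVE, cycle and checkpoint until it goes INACTIVE:
alter system switch logfile;
alter system checkpoint global;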

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 50 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 51 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 52 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 53 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;


ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 54 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 55 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 56 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 57 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;


ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 58 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 59 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 60 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 61 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

alter database drop logfile group 1;

alter database drop logfile group 2;

alter database drop logfile group 3;

alter database drop logfile group 4;

alter database drop logfile group 5;

alter database drop logfile group 6;

alter database drop logfile group 7;


Step5

ON STANDBY

cancel recovery


Set dg_broker_start to false before cancelling recovery:

alter system set dg_broker_start=false scope=both sid='*';

alter database recover managed standby database cancel;
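
To confirm the managed recovery process (MRP) has actually stopped before adding logfiles, a minimal check:

select process, status from v$managed_standby;

(No MRP0 row should remain once recovery is cancelled.)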



select thread#, group#, bytes/1024/1024 from v$log;



ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 50 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 51 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 52 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 53 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;




ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 54 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 55 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 56 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 57 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;


ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 58 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 59 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 60 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;

ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 61 ('+DATA1(ONLINELOG)', '+FRA(ONLINELOG)') SIZE 500M;


alter database drop logfile group 1;

(Repeat for the remaining old groups on the standby, as was done on the primary.)
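If a group on the standby cannot be dropped because it was never archived or cleared, a common workaround (an assumption here, not part of the original notes) is to clear the group first and then retry the drop:

alter database clear logfile group 1;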


Step6

Verify the online redo log sizes on the primary and the DR, then do the following on the DR:

alter system set standby_file_management=auto sid='*' scope=both;




ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;


Set the broker back to true:

alter system set dg_broker_start=true scope=both sid='*';
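
To verify that redo apply has resumed on the standby, a minimal check:

select process, status, thread#, sequence# from v$managed_standby where process like 'MRP%';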

Data Guard test using flashback/restore point.

Pre-checks


 

1. Verify FRA space availability at the primary and standby end. (Start at 06:00 PM IST, offshore.)

    SQL> SELECT file_type, sum(percent_space_used) used_space,
                100 - sum(percent_space_used) free_space,
                sum(percent_space_reclaimable) reclaimable_space,
                (100 - sum(percent_space_used)) + sum(percent_space_reclaimable) available_space
         FROM v$flash_recovery_area_usage
         GROUP BY file_type;


2. Run the FRA_CLEANUP job to clear FRA space.

3. Check the FRA allocation:

    SQL> show parameter db_recovery_file_dest

4. Change the FRA allocation (if necessary):

    SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=<nn>G;

    SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='<absolute Path>';
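
To confirm the resulting allocation and usage in one view, a minimal sketch using the standard v$recovery_file_dest view:

SQL> SELECT name, space_limit/1024/1024/1024 limit_gb,
            space_used/1024/1024/1024 used_gb,
            space_reclaimable/1024/1024/1024 reclaimable_gb
     FROM v$recovery_file_dest;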


 

Activating the Standby as Primary


 

Issue the following command on the primary to make the standby current

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

 

Check the apply Instance on the Standby

$ dgmgrl /

DGMGRL> show database <standby database>; (Check for apply Instance)

DGMGRL> exit

 

Stop Data Guard Broker on the primary.

SQL> alter system set dg_broker_start=false scope=both sid='*';

Stop Data Guard Broker on the standby.

SQL> alter system set dg_broker_start=false scope=both sid='*';

 

Stop the redo apply on the standby (on the apply instance observed above).

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

 

Create the Guaranteed Restore point on the Standby Database.

SQL> CREATE RESTORE POINT BEFORE_DR_TEST GUARANTEE FLASHBACK DATABASE;
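
To confirm the guaranteed restore point exists before proceeding, a minimal check:

SQL> SELECT name, scn, time, guarantee_flashback_database FROM v$restore_point;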

 

Issue the following command on the primary to make the standby current

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

 

Disable the redo transport to the standby.

i. Find the LOG_ARCHIVE_DEST_n where the service is configured for redo transport:

SQL> show parameters dest;

ii. Disable the redo transport:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER scope=both sid='*';

 

Stop the Standby database

$ srvctl stop database -d <db_unique_name>

 

Disable all Services on Standby except the one identified for the DR

$ srvctl disable service -d <db_unique_name> -s <service_name>

 

Start only first instance of Standby in mount state

$ srvctl start instance -d <db_unique_name> -i <Instance_name> -o mount

 

Activate the first instance of standby.

SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
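
A quick sanity check that the role has changed (the database is still mounted at this point):

SQL> SELECT database_role, open_mode FROM v$database;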

Remount the first Instance of standby.

SQL> STARTUP MOUNT FORCE;

 

Set the protection mode for the first instance of standby to 'Maximum Performance'.

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
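
To confirm the protection mode took effect, a minimal check:

SQL> SELECT protection_mode, protection_level FROM v$database;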

 

Open the first instance of Standby

SQL> ALTER DATABASE OPEN;

Start the service identified for the DR on the Standby

$ srvctl start service -d <unique_db_name> -s <service_name>

 

 

Converting the Activated Standby Back to a Physical Standby


Shutdown all instances of Standby

$ srvctl stop database -d <db_unique_name>

 

Startup the first Instance of Standby in mount State

$ srvctl start instance -d <db_unique_name> -i <Instance_name> -o mount

 

Restore the activated standby back to its previous physical standby state.

SQL> FLASHBACK DATABASE TO RESTORE POINT BEFORE_DR_TEST;

 

Convert the Activated Standby to Physical Standby

SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

 

Mount the Instance and Verify the Database Role.

SQL> STARTUP MOUNT FORCE;

SQL> SELECT database_role FROM V$DATABASE;

 

Enable all the disabled Services on Standby.

$ srvctl enable service -d <db_unique_name> -s <service_name>

 

Shutdown and Restart all Instances of the Database.

$ srvctl stop database -d <db_unique_name>

$ srvctl start database -d <db_unique_name>

 

Start the redo apply on the preferred instance of the standby database.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

 

Start the Data guard Broker on the standby.

SQL> alter system set dg_broker_start=true scope=both sid='*';

 

Enable the redo transport on the Primary.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n=ENABLE;

 

Start the Data guard Broker on the Primary

SQL> alter system set dg_broker_start=true scope=both sid='*';

 

Verify the primary and standby are current.

On the Primary

select thread#, max(sequence#) "Last Seq Generated"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
group by thread# order by 1;

 

On the Standby

select thread#, max(sequence#) "Last Standby Seq Received"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
group by thread# order by 1;

 

select thread#, max(sequence#) "Last Standby Seq Applied"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
and val.applied='YES' group by thread# order by 1;

 

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;  (Repeat the queries above to verify sync-up.)

 

 

Drop the Guaranteed Restore Point on the standby.

SQL> drop restore point BEFORE_DR_TEST;
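
Confirm it is gone (the query should return no rows):

SQL> SELECT name FROM v$restore_point;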

 

Verify no services are up and running on the standby.

$ srvctl status service -d <unique_db_name>

 

Verify all Services up and running on the Primary.

$ srvctl status service -d <unique_db_name>

 

Restore FRA Changes on both Primary and Standby

 

Remove the standby target blackout from Grid. 

Extend Oracle 11g database to 4th node.



  Note: node1, node2 and node3 are existing nodes and node4 will be the new node



Add instance to the cluster database on primary

1. On node4 do the following:

cd /oracle
mkdir -p admin oper backup log    (already exists)
mkdir oppr
mkdir -p adump dpdump hdump pfile wallet
Copy the contents of /oracle/admin/oppr/wallet from node1 to node4
Copy setdb_crs, setdb_+ASM, setdb_oppr from /oracle on node1 to /oracle on node4

2. Make a copy of the tnsnames.ora on node1. Edit the tnsnames.ora to add the local listener for node4.
For example:

LISTENER_OPPR4 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = node4_vip)(PORT = 1521))

LISTENERS_OPPR =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1_vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2_vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node3_vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node4_vip)(PORT = 1521))
  )

Make sure you are using the VIPs and try doing a tnsping:

$ tnsping LISTENERS_OPPR
$ tnsping LISTENER_OPPR4

3. Create the undo tablespace for the 4th instance. Do this from node1.

create undo tablespace undotbs4 datafile '+DATA1' size 2g autoextend on maxsize 32767m;
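
To confirm the tablespace was created, a minimal check:

select tablespace_name, status, contents from dba_tablespaces where tablespace_name = 'UNDOTBS4';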

4. From node1, sqlplus into oppr (instance 1). Add redo log groups for thread 4 on the primary.

alter database add logfile thread 4 group 7 size 100m;
alter database add logfile thread 4 group 8 size 100m;


Note: while creating the redo logfiles on the primary, they should also get created on the standby, as the broker is set to true (verify in step 6).
5. From node1, sqlplus into oppr (instance 1). Add standby logfiles for thread 4 on the primary.
alter database add standby logfile thread 4 group 31 size 100m;
alter database add standby logfile thread 4 group 32 size 100m;
alter database add standby logfile thread 4 group 33 size 100m;
6. Make sure the primary and standby are in sync by switching logfiles and checking that they are applied, then make sure recovery is running on the standby.
Verify that the online/standby redo logs for thread 4 were created on the DR; if they were not, do the following to create them.
To verify:
select group#, thread#, bytes/1024/1024 from v$log order by thread#;
select group#, thread#, bytes/1024/1024 from v$standby_log order by thread#;

To add them on the standby, do the following:
Disable recovery:
alter database recover managed standby database cancel;
alter system set dg_broker_start=false scope=both sid='*';
alter system set standby_file_management=manual scope=both sid='*';
Add logfiles:
alter database add logfile thread 4 group 401 size 50m;
alter database add logfile thread 4 group 402 size 50m;
alter database add logfile thread 4 group 403 size 50m;
alter database add standby logfile thread 4 group 411 size 500m;
alter database add standby logfile thread 4 group 412 size 500m;
alter database add standby logfile thread 4 group 413 size 500m;
Enable recovery:
alter system set standby_file_management=auto scope=both sid='*';
alter system set dg_broker_start=true scope=both sid='*';
alter database recover managed standby database using current logfile disconnect;

7. Make sure the primary and standby are in sync by switching logfiles and checking that they are applied, and that recovery is running on the standby; only then issue the following command on the primary.
On primary:
alter database enable thread 4;
Then switch logfiles:
alter system archive log current;
alter system checkpoint global;
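
To confirm thread 4 is now enabled, a minimal check:

select thread#, status, enabled from v$thread order by thread#;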

8. Set instance-specific parameters in the spfile.
alter system set cluster_database_instances=4 scope=spfile;
alter system set undo_tablespace=UNDOTBS4 scope=spfile sid='oppr4';
alter system set instance_name=oppr4 scope=spfile sid='oppr4';
alter system set instance_number=4 scope=spfile sid='oppr4';
alter system set thread=4 scope=spfile sid='oppr4';
alter system set local_listener=LISTENER_OPPR4 scope=spfile sid='oppr4';
alter system set remote_listener=LISTENERS_OPPR scope=spfile sid='*';
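
To confirm the per-instance entries landed in the spfile, a minimal check:

select sid, name, value from v$spparameter where sid = 'oppr4';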

9. On node4, cd to /oracle/product/11.1.0/network/admin.
Create a link for tnsnames.ora to point to the tnsnames.ora in the ASM home.

10. Now shut down and restart the database.
Before this, make sure setdb_oppr points to oppr4 and that the Oracle password file and init file exist for oppr4.
srvctl stop database -d oppr
srvctl add instance -d oppr -i oppr4 -n node4
Now start the database:
srvctl start database -d oppr
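
To verify all four instances are up, a minimal check from any node:

select inst_id, instance_name, status from gv$instance order by inst_id;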

Modify the services to be preferred on three nodes and available on the 4th node. Change the reporting service, if any, to run only on the 4th node.


Now, as root, run the following command:
srvctl setenv nodeapps -n <new nodename> -t ORACLE_BASE=/oracle

Recycle the instance and the listener just on the new node.





Add instance to the cluster database on DR

1. On node4 do the following:

cd /oracle
mkdir -p admin oper backup log
mkdir opdr
mkdir -p adump dpdump hdump pfile wallet
Copy the contents of /oracle/admin/opdr/wallet from node1 to node4
Copy setdb_crs, setdb_+ASM, setdb_opdr from /oracle on node1 to /oracle on node4

2. Make a copy of the tnsnames.ora on node1. Edit the tnsnames.ora to add the local listener for node4.
For example:

LISTENER_OPDR4 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = node4_vip)(PORT = 1521))

LISTENERS_OPDR =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1_vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2_vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node3_vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node4_vip)(PORT = 1521))
  )

Make sure you are using the VIPs and try doing a tnsping:

$ tnsping LISTENERS_OPDR
$ tnsping LISTENER_OPDR4

Steps 3 through 7 need not be performed on the standby: those objects were already created there when the steps were run on the primary, because the broker was set to true.

8. Set instance-specific parameters in the spfile.
alter system set cluster_database_instances=4 scope=spfile;
alter system set undo_tablespace=UNDOTBS4 scope=spfile sid='opdr4';
alter system set instance_name=opdr4 scope=spfile sid='opdr4';
alter system set instance_number=4 scope=spfile sid='opdr4';
alter system set thread=4 scope=spfile sid='opdr4';
alter system set local_listener=LISTENER_OPDR4 scope=spfile sid='opdr4';
alter system set remote_listener=LISTENERS_OPDR scope=spfile sid='*';

9. On node4, cd to /oracle/product/11.1.0/network/admin.
Create a link for tnsnames.ora to point to the tnsnames.ora in the ASM home.

10. Now shut down and restart the database.
Before this, make sure setdb_opdr points to opdr4 and that the Oracle password file and init file exist for opdr4.
srvctl stop database -d opdr
srvctl add instance -d opdr -i opdr4 -n node4
Now start the database:
srvctl start database -d opdr

Modify the services to be preferred on three nodes and available on the 4th node. Change the reporting service, if any, to run only on the 4th node.


Now, as root, run the following command:
srvctl setenv nodeapps -n <new nodename> -t ORACLE_BASE=/oracle

Recycle the instance and the listener just on the new node.




Migrate/replace disks in OCR, voting, and ASM.

Step1. Identify existing disk devices.

OCR disks:

$ ocrcheck

Voting disks:

$ crsctl query css votedisk


Step2. Make sure the permissions on the new devices are the same as on the old disk devices. (Unix check)

Step3. Back up the OCR and voting devices.

OCR backup (all disks):

$ ocrconfig -showbackup

$ ocrconfig -export <backup_file_name>

Voting disk backup (all disks):

$ crsctl query css votedisk

$ dd if=/u02/oradata/racdb/CSSFile of=/home/oracle/VotingDiskBackup.dmp bs=4k




Step4. Replace the OCR and voting disks.

OCR disk replace:

$ ocrconfig -replace ocrmirror <disk>

$ ocrconfig -replace ocr <disk>

Voting disk replace:

Get the list first:

$ crsctl query css votedisk

Then add the new disks as follows:

$ crsctl add css votedisk /dev/raw/raw3 -force

Then remove the old voting disks as follows:

$ crsctl delete css votedisk <path>
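
Confirm the final voting disk configuration with the same query used above:

$ crsctl query css votedisk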



ASM disk replace.

Step1: Identify the disks of the disk group.

Connect to ASM:

SQL> select path, header_status, mode_status from v$asm_disk;
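
A quick look at disk group capacity before the swap, a minimal sketch:

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;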



Step2: Make sure the new disks are shared across all cluster nodes and have the same ownership and privileges as the old disks. (Unix confirmation checklist)


Step3: Add the new disks.

SQL> alter diskgroup DATA add disk '<path>';

SQL> alter diskgroup FRA add disk '<path>';

Step4: Drop the old disks (use the disk names from the query in Step1).

SQL> alter diskgroup DATA drop disk DATA_0000, DATA_0001;


Step5: Check the rebalance and ASM operations; increase the rebalance power if needed.

SQL> alter diskgroup DATA rebalance power 5;

SQL> alter diskgroup FRA rebalance power 5;

SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;


Step6: Once the rebalance is complete, check and verify the new disk paths.
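
For the final check, the disk query from Step1 can be reused; the old disks should no longer appear and the new paths should show as members:

SQL> select path, header_status, mode_status from v$asm_disk;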