Thursday, August 19, 2010
I have a RAC database named “TEST”, with instances TEST1 and TEST2. The default entry in the oratab looks something like this:
TEST:/u01/app/oracle/product/11.2.0/dbhome_1:N
[oracle@oscqadb02 ~]$ . oraenv
ORACLE_SID = [QAERIE1] ? TEST --- sets the *home* correctly, but does not set the instance.
I have to set the instance manually:
$ export ORACLE_SID=TEST1
The alias below does not work either, because the admin directory is named after the database, not the instance:
alias alog='/bin/vi $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log'
Here is the workaround:
Log in as root.
Copy the existing oraenv file to oraenvrac (or whatever name you prefer) and change the owner of the new file.
For example:
[root@oscqadb02 oracle]# cp /usr/local/bin/oraenv /usr/local/bin/oraenvrac
[root@oscqadb02 oracle]# chown oracle /usr/local/bin/oraenvrac
[oracle@oscqadb02 ~]$ ll /usr/local/bin/oraenvrac
-rwxr-xr-x 1 oracle root 3094 Aug 18 14:14 /usr/local/bin/oraenvrac
Add the following lines at the bottom of the oraenvrac file:
# To get the database name (oraenv has just set ORACLE_SID to the oratab entry):
export DB_NAME=$ORACLE_SID
# To get the instance number (the last character of the hostname):
len=`hostname | wc -c`
len1=`expr $len - 1`
nodenum=`hostname | cut -c$len1-$len`
# To get the instance name:
export ORACLE_SID=$ORACLE_SID$nodenum
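Note that this trick assumes each node's hostname ends in its instance number. Where that does not hold, one hypothetical alternative (assuming the database is registered with Clusterware, srvctl is on the PATH, and the Clusterware node name matches `hostname -s`) is to ask srvctl which instance runs on the local node:
# Parse srvctl output of the form "Instance TEST1 is running on node oscqadb02"
host=`hostname -s`
export ORACLE_SID=`srvctl status database -d $DB_NAME | awk -v h=$host '/is running on node/ && $NF == h {print $2}'`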
You can use the following aliases to view and tail the alert.log file (note that on 11g the text alert log lives under $ORACLE_BASE/diag/rdbms/<dbname>/<instance>/trace/ rather than bdump, so adjust the path for your release):
alias alog='/bin/vi $ORACLE_BASE/admin/$DB_NAME/bdump/alert_$ORACLE_SID.log'
alias alogt='/bin/tail -f $ORACLE_BASE/admin/$DB_NAME/bdump/alert_$ORACLE_SID.log'
[oracle@oscqadb02 ~]$ . oraenvrac
ORACLE_SID = [ABC2] ? TEST
[oracle@oscqadb02 ~]$ echo $ORACLE_SID
TEST2
[oracle@oscqadb02 ~]$
You are all set now.
Sunday, August 8, 2010
Manually upgrading an Oracle Database 10gR2 (10.2.0.4) to Oracle Database 11gR2 on Red Hat Linux 5
1. Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.
SQL> @utlrp.sql
2. Check dba_registry:
column comp_name format a40
set pagesize 50
set linesize 2000
select trim(comp_name) comp_name, version, status from dba_registry;
3. Verify that all expected packages and classes are valid:
SQL> select count(*) from dba_objects where status <> 'VALID';
SQL> select owner, object_type, count(*) from dba_objects where status <> 'VALID' group by owner, object_type order by 1, 2;
4. Back up the database.
5. Analyze the existing instance with the Pre-Upgrade Information Tool, run from the new home:
SQL> @$ORACLE_11G_HOME/rdbms/admin/utlu112i.sql
6. Shut down the instance:
SQL> SHUTDOWN IMMEDIATE
7. If your operating system is Linux or UNIX, then make the following checks:
a. The oratab file points to your Oracle Database 11g Release 2 Oracle home.
b. Your ORACLE_SID is set correctly.
c. The following environment variables point to the Oracle Database 11gR2 directories (see the example after this list):
– ORACLE_HOME
– PATH
d. Any scripts that clients use to set the ORACLE_HOME value must point to the new Oracle home.
Note: If you are upgrading a cluster database, then perform these checks on all nodes in which this cluster database has instances configured.
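For example, on Linux the new environment could be set like this (the home path is illustrative):
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export ORACLE_SID=TEST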
8. Log in to the system as the owner of the Oracle Database 11g Release 2 (11.2)
Oracle home directory. At a system prompt, change to the $ORACLE_HOME/rdbms/admin directory.
9. Copy the init.ora or spfile to the 11g ORACLE_HOME.
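For example, assuming the SID is TEST and the old 10g home is still in place (both paths illustrative):
$ cp /u01/app/oracle/product/10.2.0/db_1/dbs/spfileTEST.ora $ORACLE_HOME/dbs/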
10. Connect to the database instance as a user with SYSDBA privileges.
Start the instance by issuing the following command:
SQL> STARTUP UPGRADE
11. Set the system to spool results to a log file for later verification of success:
SQL> SPOOL upgrade.log
12. Run the catupgrd.sql script:
SQL> @catupgrd.sql
This script runs the required upgrade scripts and shuts down the database when it finishes.
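It is also worth turning spooling off at this point, so the log from step 11 contains just the upgrade output:
SQL> SPOOL OFF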
13. Restart the instance to reinitialize the system parameters for normal operation.
SQL> STARTUP
14. Run utlu112s.sql to display the results of the upgrade:
SQL> @$ORACLE_HOME/rdbms/admin/utlu112s.sql
15. (Optional) Run catuppst.sql, located in the ORACLE_HOME/rdbms/admin directory, to perform upgrade actions that do not require the database to be in UPGRADE mode:
SQL> @rdbms/admin/catuppst.sql
16. Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.
SQL> @utlrp.sql
17. Verify that all expected packages and classes are valid (if the DBA_INVALID_OBJECTS view does not exist in your release, use the DBA_OBJECTS queries from step 3 instead):
SQL> SELECT count(*) FROM dba_invalid_objects;
SQL> SELECT DISTINCT object_name FROM dba_invalid_objects;
18. Exit SQL*Plus.
Wednesday, July 28, 2010
Recovering a standby database from missing archive logs
Recover gaps in a standby database from the primary using incremental backups
A physical standby database stays in sync with the primary by continuously applying archive logs from it. If an archive log goes missing or is corrupt, you would normally have to rebuild the standby database from scratch. In 10g and later, however, an incremental backup created with BACKUP INCREMENTAL ... FROM SCN can refresh the standby with all changes made at the primary since the standby's last SCN, after which managed recovery can resume. This compensates for the missing archive logs.
Step 1: On the standby database, find out the current SCN:
STDBY> select current_scn from v$database;
CURRENT_SCN
-----------
2757142742
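Some notes also recommend taking the lower of CURRENT_SCN and the minimum datafile checkpoint SCN as the starting point for the incremental backup, in case any datafile lags behind the controlfile:
STDBY> select min(checkpoint_change#) from v$datafile_header;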
Step 2: On the primary database, create an incremental backup starting from the SCN found above:
$ rman target /
RMAN> BACKUP as compressed backupset device type disk INCREMENTAL FROM SCN 2757142742 DATABASE FORMAT '/oracle/backups/test1/rmanbackup/manual/forstandby_%U' tag 'FOR STANDBY';
Step 3: Cancel managed recovery at the standby database:
STDBY> alter database recover managed standby database cancel;
Media recovery complete.
Then scp the backup files to the standby server, into the /oracle/backups/priv2/rmanbackup/manual/ folder.
Step 4: Catalog the incremental backup files at the standby database:
$ rman target /
RMAN> CATALOG START WITH '/oracle/backups/priv1/rmanbackup/manual/';
searching for all files that match the pattern /oracle/backups/priv1/rmanbackup/manual/
List of Files Unknown to the Database
=====================================
.....
Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done
Step 5: Apply the Incremental Backup to the Standby Database:
RMAN> recover database noredo;
Step 6: Put the standby database back in managed recovery mode:
STDBY> recover managed standby database disconnect;
Media recovery complete.
In the alert.log you will notice that the standby database is still looking for the old log files:
*************************************************
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence ....
DBID 23623369570 branch 704397469
FAL[client]: All defined FAL servers have been attempted.
**************************************************
This is because the standby controlfile has not been updated, so it has to be recreated.
Step 7: At the primary, create a current controlfile backup for the standby database:
RMAN> backup device type disk current controlfile for standby format '/oracle/backups/test1/rmanbackup/manual/forstandbyctrl.bck';
Step 8: At the standby, create a backup (text copy) of the controlfile, to find out the actual locations of the data files:
SQL> alter database backup controlfile to trace;
Step 9: At the standby, restore the new standby controlfile:
SQL> shutdown immediate;
SQL> startup nomount;
RMAN> restore standby controlfile from '/oracle/backups/test1/rmanbackup/manual/forstandbyctrl.bck';
Step 10: At the standby, create standby redo log files.
For example (check your own locations, group numbers, and sizes):
SQL> alter database add standby logfile thread 1 group <n> '<redo log path>' size <bytes>;
SQL> alter database add standby logfile thread 1 group 5 '/oracle/oradata01/test1/redo05.log' size 52428800;
SQL> alter database add standby logfile thread 1 group 6 '/oracle/oradata01/test1/redo06.log' size 52428800;
Step 11: At the standby, if the data file locations differ between the primary and the standby, the controlfile needs to be updated. Follow these steps:
SQL> alter system set standby_file_management=manual;
SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate;
SQL> startup mount
SQL> alter database rename file '<primary datafile path>' to '<standby datafile path>';
For example:
SQL> alter database rename file '/oracle/oradata02/test2/data/app_data05.dbf' to '/oracle/oradata/test1/data/lob/app_data05.dbf';
SQL> alter system set standby_file_management=auto;
SQL> shutdown immediate
SQL> startup mount
SQL> recover managed standby database disconnect;
Step 12: Verify the standby alert log file after a couple of log switches at the primary:
SQL> alter system switch logfile;
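To confirm that managed recovery has caught up, the same check used in the archive-gap post below applies here:
SQL> select process, sequence#, status from v$managed_standby;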
Thursday, July 1, 2010
ASM Configuration problem for Oracle 11g
If you encounter the following error message when configuring ASM:
Initializing the Oracle ASMLib driver: [FAILED ]
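A common first check when ASMLib fails to initialize (causes vary; this is only a general sketch) is whether the installed oracleasm kernel module matches the running kernel:
# uname -r
# rpm -qa | grep oracleasm
# /usr/sbin/oracleasm status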
Sunday, April 25, 2010
Precautions before shutting down a database
Before shutting down a production or any other important database, take some basic precautions besides the backup.
Create an init parameter file from the current spfile (if the database is using an spfile), in case the database does not mount or come up and you need to disable a parameter or database feature at startup:
SQL> create pfile from spfile;
Create a backup of the controlfile in text format, to know the locations and names of the data files in case the database does not mount or open after the shutdown:
SQL> alter database backup controlfile to trace;
Database altered.
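As an aside, the trace can also be written straight to a named file instead of hunting through udump (the path here is illustrative):
SQL> alter database backup controlfile to trace as '/tmp/ctlfile_xxx1.sql';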
Go to the udump folder and check the latest trace file to get the controlfile definition:
[oracle@xxdb1 udump]$ ls -ltr | tail -2
-rw-r----- 1 oracle oinstall 743 Apr 24 00:15 xxx1_ora_9000.trc
-rw-r----- 1 oracle oinstall 18004 Apr 24 07:23 xxx1_ora_30518.trc
[oracle@xxxdb1 udump]$ vi xxx1_ora_30518.trc
Check the status of the data files, to confirm that all of them are available and none are offline:
SQL> select TABLESPACE_NAME, ONLINE_STATUS from dba_data_files;
TABLESPACE_NAME ONLINE_
------------------------------ -------
SYSTEM SYSTEM
UNDOTBS1 ONLINE
UNDOTBS1 ONLINE
SYSAUX ONLINE
.....
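A complementary check is V$RECOVER_FILE, which lists datafiles that need media recovery and should return no rows here:
SQL> select * from v$recover_file;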
Review the last 200 lines of the alert log file for any possible error messages or problems:
[oracle@xxxdb1 bdump]$ view alert_db_name.log
Keep the alert log open with tail -f; make it a practice to watch what is happening in a database while you are performing major changes or activities on it:
[oracle@xxxdb1 ~]$ tail -f /oracle/admin/xxx1/bdump/alert_xxx1.log
In a Data Guard / standby setup:
Before shutting down and after opening the database, keep tail -f running on the alert log at both the primary and the standby database:
[oracle@xxxdb1 ~]$ tail -f /oracle/admin/xxx1/bdump/alert_xxx1.log
[oracle@xxxdb2 ~]$ tail -f /oracle/admin/xxx2/bdump/alert_xxx2.log
Switch a logfile manually and check that the archive has shipped and that the media recovery process is applying it:
SQL> alter system switch logfile;
---content from Standby database alert_db_name.log --------
Fri Apr 23 23:41:47 2010
Redo Shipping Client Connected as PUBLIC
-- Connected User is Valid
RFS[24]: Assigned to RFS process 14252
RFS[24]: Identified database type as 'physical standby'
Fri Apr 23 23:43:07 2010
Primary database is in MAXIMUM PERFORMANCE mode
RFS[23]: Successfully opened standby log 8: '/oracle/oradata/xxx2/redo/stby_redo08a.log'
Fri Apr 23 23:43:12 2010
Media Recovery Log /oracle/backups/xxx2/stby_archive/1_12529_704597469.arc
Media Recovery Waiting for thread 1 sequence 12530 (in transit)
Fri Apr 23 23:43:19 2010
Primary database is in MAXIMUM PERFORMANCE mode
RFS[23]: Successfully opened standby log 8: '/oracle/oradata/xxx2/redo/stby_redo08a.log'
Fri Apr 23 23:43:23 2010
Media Recovery Log /oracle/backups/xxx2/stby_archive/1_12530_704597469.arc
Media Recovery Waiting for thread 1 sequence 12531 (in transit)
Saturday, April 24, 2010
Managing Archive Gaps Manually on Standby Database Setup
To determine whether there is an archive gap on your physical standby database:
SQL> SELECT * FROM V$ARCHIVE_GAP;
THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
----------- ------------- --------------
1 12499 12511
In this case, the standby database is waiting for archive log sequence 12499:
SQL> select process,sequence#,status from v$managed_standby;
PROCESS SEQUENCE# STATUS
--------- ---------- ------------
ARCH 12509 CLOSING
ARCH 12510 CLOSING
ARCH 12504 CLOSING
ARCH 12508 CLOSING
MRP0 12499 WAIT_FOR_GAP
RFS 12511 IDLE
6 rows selected.
But if you check on disk, the archives have already been shipped (and even if you copy them onto the standby server manually, MRP keeps waiting for 12499):
[oracle@xxxdb2 ~]$ cd /oracle/backups/xxx2/stby_archive
[oracle@xxxdb2 stby_archive]$ ll -l 1_12499*
-rw-r----- 1 oracle oinstall 85923328 Apr 23 19:01 1_12499_704597469.arc
[oracle@xxxdb2 stby_archive]$ ll -l 1_1250* | head -5
-rw-r----- 1 oracle oinstall 85897728 Apr 23 19:38 1_12500_704597469.arc
-rw-r----- 1 oracle oinstall 83760640 Apr 23 19:39 1_12501_704597469.arc
-rw-r----- 1 oracle oinstall 1121792 Apr 23 19:39 1_12502_704597469.arc
-rw-r----- 1 oracle oinstall 21140480 Apr 23 19:39 1_12503_704597469.arc
-rw-r----- 1 oracle oinstall 246272 Apr 23 19:31 1_12504_704597469.arc
[oracle@xxxdb2 stby_archive]$
The SQL below confirms the same:
SQL> column dest format a20
SQL> select substr(DEST_NAME,1,20) dest ,archived_thread#,archived_seq#,applied_thread#,applied_seq# from v$archive_dest_status;
DEST ARCHIVED_THREAD# ARCHIVED_SEQ# APPLIED_THREAD# APPLIED_SEQ#
----------------------- ---------------- ------------- --------------- ------------
LOG_ARCHIVE_DEST_1 0 0 0 0
LOG_ARCHIVE_DEST_2 1 12509 0 0
….
LOG_ARCHIVE_DEST_10 0 0 0 0
STANDBY_ARCHIVE_DEST 1 12508 1 12498
11 rows selected.
You may find entries or errors like these in the standby database's alert log file:
Media Recovery Waiting for thread 1 sequence 12499
Fetching gap sequence in thread 1, gap sequence 12499-12503
Fri Apr 23 21:56:44 2010
FAL[client]: Failed to request gap sequence
GAP - thread 1 sequence 12499-12503
DBID 2467369570 branch 704597469
FAL[client]: All defined FAL servers have been attempted.
Sometimes cancelling the recovery process at the standby causes a partial apply of an archive log. To fix this problem, first find the archive logs on the primary database server:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN 12499 AND 12503;
Copy the missing archives to the standby server (optional).
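For example (the source path and hostname here are illustrative):
$ scp /oracle/archive/1_12499_704597469.arc oracle@xxxdb2:/oracle/backups/xxx2/stby_archive/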
Then register them on your physical standby database using the ALTER DATABASE REGISTER LOGFILE SQL, so they can be re-applied:
SQL> ALTER DATABASE REGISTER LOGFILE '/oracle/backups/xxx2/stby_archive/1_12499_704597469.arc';
Database altered.
SQL> ALTER DATABASE REGISTER LOGFILE '/oracle/backups/xxx2/stby_archive/1_12500_704597469.arc';
Database altered.
SQL> ALTER DATABASE REGISTER LOGFILE '/oracle/backups/xxx2/stby_archive/1_12501_704597469.arc';
Database altered.
SQL> ALTER DATABASE REGISTER LOGFILE '/oracle/backups/xxx2/stby_archive/1_12502_704597469.arc';
Database altered.
SQL> ALTER DATABASE REGISTER LOGFILE '/oracle/backups/xxx2/stby_archive/1_12503_704597469.arc';
Database altered.
MRP starts automatically and applies the archives one by one.
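Once the gap is cleared, re-running the gap query should return no rows:
SQL> SELECT * FROM V$ARCHIVE_GAP;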
Wednesday, March 31, 2010
Encountered file error when copying listeners from home
If you encounter this or a similar error message during Oracle database creation in a RAC environment:
“Encountered file error when copying listeners from home=/u01/app/11.2.0/grid”
Follow the steps below to fix the problem.
$ echo $TNS_ADMIN
If TNS_ADMIN is set, unset it and run dbua again:
$ unset TNS_ADMIN
$ echo $TNS_ADMIN
$ dbua
Check out Metalink note 763578.1 for details.