Category Archives: Oracle

How to Audit User Activity in Oracle Database

Step 1. Set the audit parameters (these are static parameters, so scope=spfile and a restart are required).

SQL> alter system set audit_trail=db scope=spfile;
System altered.

SQL> alter system set audit_sys_operations=true scope=spfile;

Step 2. Stop the database.

Step 3. Start the database.
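For example, from SQL*Plus connected as SYSDBA:

SQL> shutdown immediate
SQL> startup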

Step 4. Check the audit parameters:

SQL> show parameter audit_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/db/product/10.2.0/rdbms/audit
audit_sys_operations                 boolean     TRUE
audit_syslog_level                   string
audit_trail                          string      DB

Step 5. Log in as sysdba.

SQL> audit all by apps by access;

Or audit only specific statements, for example delete, drop and alter operations by the APPS user:

SQL> AUDIT DELETE ANY TABLE, DROP ANY INDEX, DROP ANY PROCEDURE, DROP ANY TABLE, DROP ANY VIEW, ALTER ANY PROCEDURE, ALTER ANY INDEX BY apps BY ACCESS WHENEVER SUCCESSFUL;

Step 6. Check the audited operations (log in as sysdba):

SQL> SELECT username, extended_timestamp, owner, obj_name, action_name
     FROM dba_audit_trail
     WHERE owner = 'APPS'
       AND extended_timestamp > sysdate - 1
     ORDER BY timestamp;

SQL> SELECT count(1)
     FROM dba_audit_trail
     WHERE owner = 'APPS'
       AND extended_timestamp > sysdate - 1;

SQL> SELECT count(1) FROM dba_audit_trail WHERE owner = 'APPS';

Step 7. Check how many records are in the underlying audit table:

SQL> select count(1) from sys.aud$;
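As an additional check (not part of the original steps, just a sketch), audited activity can be summarised per user and action from DBA_AUDIT_TRAIL:

SQL> SELECT username, action_name, count(*)
     FROM dba_audit_trail
     GROUP BY username, action_name
     ORDER BY 3 DESC;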

Some basic education on how to read an Oracle error stack

The errors are given in descending order – the topmost error is typically the application error. The errors beneath it go into more and more technical and explicit detail of what went wrong, usually down to the operating system level.

Thus the cause of the (application) exception at the top of the error stack is the (system) exception at the bottom of the stack. If we only look at the top and bottom errors, then:

ORA-00204: error in reading (block 3, # blocks 8) of control file

Oracle error, saying that it failed attempting to read a file. Why? The bottom error says:

O/S-Error: (OS 23) Data error (cyclic redundancy check).

O/S error obviously means that this is from the operating system. It tried to perform the file I/O requested by Oracle (application layer) and it failed. It reports a CRC data error on the file.

So is this an Oracle problem? No, the root error is from the o/s and it reports a CRC error on a file.

So should you even have asked for assistance here? Or should your first stop not have been going on Google (or Bing/Yahoo/whatever) and searching "+windows+" (the o/s in question) and "+cyclic redundancy check+" (the error reported by the o/s)? Perhaps even throw "+ntfs+" into the search too (assuming that this is the file system being used).

One of the most important skills for a software engineer is the ability to solve problems – not knowing technical details (like what a CRC error is). The technical stuff you can look up in the reference docs and material. But if you do not try to understand a simple error stack and analyse the problem, you will never acquire the skill to solve problems, and you will always be reliant on others to spoon-feed you and tell you what the problem is.

Primary and Standby out of sync issue.

Errors Observed

Thread 1 advanced to log sequence 42642 (LGWR switch)

Current log# 7 seq# 42642 mem# 0: +REDO_DG01/rp27db1_ttc/onlinelog/group_7.288.823921543

Current log# 7 seq# 42642 mem# 1: +REDO_DG02/rp27db1_ttc/onlinelog/group_7.288.823921547

LNS: Standby redo logfile selected for thread 1 sequence 42642 for destination LOG_ARCHIVE_DEST_2
Tue Nov 26 12:10:35 2013
Archived Log entry 40018 added for thread 1 sequence 42641 ID 0xfc7bdacf dest 1:
Tue Nov 26 12:10:36 2013
ARC2: Standby redo logfile selected for thread 1 sequence 42641 for destination LOG_ARCHIVE_DEST_2
Tue Nov 26 12:11:52 2013
ALTER SYSTEM SET log_archive_dest_state_2='ENABLE' SCOPE=BOTH;
Tue Nov 26 12:21:36 2013
LNS: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3113)
LNS: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
Errors in file /opt/oracle/diag/rdbms/rp27db1_ttc/RP27DB11/trace/RP27DB11_lns1_16642.trc:
ORA-03113: end-of-file on communication channel
LGWR: I/O error 3113 archiving log 7 to 'rp27db1_ttce'
Errors in file /opt/oracle/diag/rdbms/rp27db1_ttc/RP27DB11/trace/RP27DB11_lns1_16642.trc:
ORA-03113: end-of-file on communication channel
Errors in file /opt/oracle/diag/rdbms/rp27db1_ttc/RP27DB11/trace/RP27DB11_lns1_16642.trc:
ORA-03113: end-of-file on communication channel
LNS: Failed to archive log 7 thread 1 sequence 42642 (3113)

SOLUTION:
——–

On primary database use the following command:

SQL> select dest_id,error from v$archive_dest;

DEST_ID ERROR
------- -----------------------------------------------------------------
      1
      2 ORA-03113: end-of-file on communication channel
      3
      4

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

Take an incremental backup of the primary from the SCN where the standby is lagging behind and apply it on the standby server
*******************************************************************************************************************
>>>>>> On STANDBY database query the v$database view and record the current SCN of the standby database:

SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN
-----------
 1.3945E+10

SQL> SELECT to_char(CURRENT_SCN) FROM V$DATABASE;

TO_CHAR(CURRENT_SCN)
----------------------------------------
13945141914

>>>>>> Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
*
ERROR at line 1:
ORA-16136: Managed Standby Recovery not active

If you see the above error, it means Managed Recovery is already off

>>>>>> You can also confirm from the view v$managed_standby to see if the MRP is running or not

SQL> SELECT PROCESS, STATUS FROM V$MANAGED_STANDBY;

>>>>>> Connect to the primary database as the RMAN target and create an incremental backup from the current SCN of the standby database:

RMAN> BACKUP INCREMENTAL FROM SCN 13945141914 DATABASE FORMAT '/tmp/ForStandby_%U' TAG 'FOR STANDBY';

>>>>>> Do a recovery of the standby database using the incremental backup of primary taken above:

On the standby server, without connecting to the recovery catalog, catalog the backup piece of the incremental backup taken above. Before this, of course, you need to copy the backup piece to a location accessible to the standby server.

$ rman nocatalog target /

RMAN> CATALOG BACKUPPIECE '/dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1';

>>>>>> Now in the same session, start the recovery

RMAN> RECOVER DATABASE NOREDO;

>>>>>> You should see something like:

Starting recover at 2013-11-17 04:59:57
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=309 devtype=DISK
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
...
channel ORA_DISK_1: reading from backup piece /dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1 tag=FOR STANDBY
channel ORA_DISK_1: restore complete, elapsed time: 01:53:08
Finished recover at 2013-11-25 05:20:3

>>>>>> Delete the backup set from standby:

RMAN> DELETE BACKUP TAG 'FOR STANDBY';

using channel ORA_DISK_1

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
17713   17713   1   1   AVAILABLE   DISK        /dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1

Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/dump/proddb/inc_bkup/ForStandby_1qjm8jn2_1_1 recid=17713 stamp=660972421
Deleted 1 objects

>>>>>> Try to start the managed recovery.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

If you get an error here saying recovery is already active, first run ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL; and then retry.

If there is no error, then using the view v$managed_standby, verify that the MRP process has started and has the status APPLYING_LOG.
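A quick way to do that (a minimal sketch using the same view):

SQL> SELECT process, status, thread#, sequence# FROM v$managed_standby WHERE process LIKE 'MRP%';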

>>>>>> After this, check whether the logs are being applied on the standby or not:

SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG;

>>>>>> Note:

After doing a recovery using the incremental backup, you will not see the sequence#s which were visible earlier with APPLIED=NO, because they have been absorbed into the incremental backup and applied on the standby during recovery. The APPLIED column starts showing YES for the logs that are being transmitted now, which means logs are being applied.

Check the status of the MRP process in the view v$managed_standby. The status should be APPLYING_LOG for the duration that available logs are being applied, and once all available logs have been applied the status should be WAIT_FOR_LOG.

>>>>>> Another check to verify that primary and standby are in sync. Run the following query on both standby and primary:

SQL> select max(sequence#) from v$log_history;

The output should be the same on both databases.

*******************************************************************************************

>>>>>> If you still find that logs are being transmitted but not applied on the standby:

Register the missing archived logs with the standby database (the file name below is a placeholder):

SQL> ALTER DATABASE REGISTER LOGFILE '<full_path_to_archived_log>';
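If you are unsure which sequences are missing on the standby, a gap check (a minimal sketch) can help identify what needs to be registered:

SQL> SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;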

Changing “NLS_CHARACTERSET” in Oracle

NLS_CHARACTERSET  change
========================
spool charfix.log
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
ALTER SYSTEM SET AQ_TM_PROCESSES=0;
ALTER DATABASE OPEN;
COL VALUE NEW_VALUE CHARSET
--SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER='NLS_CHARACTERSET';
SELECT 'AL32UTF8' AS VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER='NLS_CHARACTERSET';
COL VALUE NEW_VALUE NCHARSET
SELECT VALUE FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER='NLS_NCHAR_CHARACTERSET';
ALTER DATABASE CHARACTER SET INTERNAL_USE &CHARSET;
ALTER DATABASE NATIONAL CHARACTER SET INTERNAL_USE &NCHARSET;
SHUTDOWN IMMEDIATE;
STARTUP;
-- yes, 2 times startup/shutdown. This is not a typo.
SHUTDOWN IMMEDIATE;
STARTUP;
spool off
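After the final restart, a quick verification (a minimal check, not part of the original script) of the new character sets:

SQL> SELECT parameter, value
     FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');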

 


RMAN-03002,RMAN-06004,RMAN-20001

RMAN-03002: failure of configure command at 08/14/2013 19:00:13
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20001:
target database not found in recovery catalog
====================================================================

Problem: The target database has a different DBID than the one registered in the catalog, as shown below:

Connect to the catalog database:
--------------------------------
SQL> connect rman/"************"@pcatdb
SQL> select name, dbid from rc_database
     where name = 'COSPSWDM';

NAME                     DBID
------------------------ ----------
COSPSWDM                  371053283

Connect to the target database:
-------------------------------

SQL> select name, dbid from v$database;

NAME      DBID
--------- ----------
COSPSWDM   373182751

Solution: Register COSPSWDM (DBID=373182751) in the recovery catalog before taking an RMAN backup connected to the catalog.

Connect to the catalog database to unregister the old entry:
-------------------------------------------------------------

SQL> connect rman/"*********"@pcatdb

Connected.

If you want to purge the COSPSWDM entry currently in the catalog, run the following metaquery against the catalog:

SQL> select 'EXEC DBMS_RCVCAT.UNREGISTERDATABASE('||DB_KEY||','||DBID||');'||' '||resetlogs_time
     from rc_database
     where dbid = 371053283;

'EXECDBMS_RCVCAT.UNREGISTERDATABASE('||DB_KEY||','||DBID||');'||''||RESETLOGS_TI
--------------------------------------------------------------------------------
EXEC DBMS_RCVCAT.UNREGISTERDATABASE(6064586,371053283); 24-JUL-13

SQL> EXEC DBMS_RCVCAT.UNREGISTERDATABASE(6064586,371053283);

PL/SQL procedure successfully completed.

Register database:
——————
uspuh23p2:COSPSWDM> rman target /
Recovery Manager: Release 11.2.0.3.0 – Production on Wed Aug 14 19:32:21 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: COSPSWDM (DBID=373182751)

RMAN> connect catalog rman/"********"@pcatdb

connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
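To confirm the registration, the same rc_database query from above can be run against the catalog; the new DBID should now be listed:

SQL> select name, dbid from rc_database where name = 'COSPSWDM';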

ORA-31626: job does not exist and How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS
=============================================================================================

expdp system/xxxxxxx full=y directory=EXPORT_DUMP_MYTEST dumpfile=${DUMPFILE} logfile=${LOGFILE}

Export: Release 10.2.0.4.0 – 64bit Production on Monday, 23 May, 2013 11:06:13

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 – 64bit Production
With the Partitioning, Data Mining and Real Application Testing options

ORA-31626: job does not exist.
ORA-31650: timeout waiting for master process response.

Solution:
=========

We need to rerun catproc.sql and delete the orphaned jobs.

First of all, the CATPROC component should not be INVALID in the database. If it is, rerun the catproc.sql script, which can be found in $ORACLE_HOME/rdbms/admin.

Verify the dictionary objects are valid per the dba_registry view:
——————————————————————

set pages 1000
set lines 120
col comp_id format a20
col comp_name format a40
col version format a10
col status format a15
select comp_id,comp_name,version,status from dba_registry order by 1;

Note:
—–

If any components show as INVALID, run the utlrp.sql script again to try to validate them.
The utlrp script can be run at any time, independently of the catalog and catproc scripts.
There should be no other DDL occurring on the database while the utlrp script is running;
not following this recommendation may lead to deadlocks.
It is recommended to run the utlrp script while the database is in restricted mode.

ACTION PLAN
============
1. startup restrict
2. spool catproc.log
3. @?/rdbms/admin/catproc.sql
4. spool off

How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS:
===========================================================

Step 1. Determine in SQL*Plus which Data Pump jobs exist in the database:


SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state

— locate Data Pump jobs:

SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;

OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE ATTACHED
---------- ------------------- --------- --------- ----------- --------
USER1 EXPDP_20051121 EXPORT SCHEMA EXECUTING 1
USER1 EXP_EXPORT_TABLE_01 EXPORT TABLE NOT RUNNING 0
USER1 EXP_EXPORT_TABLE_02 EXPORT TABLE NOT RUNNING 0
SYSTEM EXP_EXPORT_FULL_01 EXPORT FULL NOT RUNNING 0

Step 2. Ensure that the listed jobs in dba_datapump_jobs are not export/import Data Pump jobs that are active: the status should be 'NOT RUNNING'.

Step 3. Check with the job owner that the job with status 'NOT RUNNING' in dba_datapump_jobs is not an export/import Data Pump job that has been temporarily stopped, but is actually a job that failed. (E.g. the full database export job by SYSTEM is not a job that failed, but was deliberately paused with STOP_JOB.)
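As a convenience (a sketch, not part of the original note), the usual cleanup candidates can be listed directly; jobs that show NOT RUNNING with no attached sessions are the ones to review with the job owner:

SELECT owner_name, job_name, state, attached_sessions
FROM dba_datapump_jobs
WHERE state = 'NOT RUNNING'
AND attached_sessions = 0
AND job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;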

Step 4. Determine in SQL*Plus the related master tables:

— locate Data Pump master tables:

SELECT o.status, o.object_id, o.object_type,
o.owner||'.'||object_name "OWNER.OBJECT"
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner=j.owner_name AND o.object_name=j.job_name
AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;

STATUS OBJECT_ID OBJECT_TYPE OWNER.OBJECT
------- ---------- ------------ -------------------------
VALID 85283 TABLE USER1.EXPDP_20051121
VALID 85215 TABLE USER1.SYS_EXPORT_TABLE_02
VALID 85162 TABLE SYSTEM.SYS_EXPORT_FULL_01

Step 5. For jobs that were stopped in the past and won't be restarted anymore,
delete the master table. E.g.:

DROP TABLE USER1.sys_export_table_02;

-- For systems with recycle bin additionally run:

purge dba_recyclebin;

Step 6. Re-run the query on dba_datapump_jobs and dba_objects (steps 1 and 4). If there are still jobs listed in dba_datapump_jobs and these jobs no longer have a master table, clean up the job while connected as the job owner. E.g.:

CONNECT USER1/tiger
SET serveroutput on
SET lines 100
DECLARE
h1 NUMBER;
BEGIN
h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_TABLE_01','USER1');
DBMS_DATAPUMP.STOP_JOB (h1);
END;
/

Note that after the call to the STOP_JOB procedure, it may take some time for the job to be removed. Query the view user_datapump_jobs to check whether the job has been removed:

SELECT * FROM user_datapump_jobs;

Step 7. Confirm that the job has been removed:

CONNECT / as sysdba
SET lines 200
COL owner_name FORMAT a10;
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state

-- locate Data Pump jobs:

SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;

OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE ATTACHED
---------- ------------------- --------- --------- ----------- --------
USER1 EXPDP_20051121 EXPORT SCHEMA EXECUTING 1
SYSTEM SYS_EXPORT_FULL_01 EXPORT FULL NOT RUNNING 0

-- locate Data Pump master tables:

SELECT o.status, o.object_id, o.object_type,
o.owner||'.'||object_name "OWNER.OBJECT"
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner=j.owner_name AND o.object_name=j.job_name
AND j.job_name NOT LIKE 'BIN$%' ORDER BY 4,2;

STATUS OBJECT_ID OBJECT_TYPE OWNER.OBJECT
------- ---------- ------------ -------------------------
VALID 85283 TABLE USER1.EXPDP_20051121
VALID 85162 TABLE SYSTEM.SYS_EXPORT_FULL_01

——————————————————————–
The drop of the master table itself does not lead to any data dictionary corruption. If you keep the master table after the job completes (using the undocumented parameter KEEP_MASTER=Y), then a drop of the master table afterwards will not cause any corruption.

ORA-00600: How to recover and open the database if the archive log required for recovery is missing.

ORA-00600: internal error code, arguments: [4194], [17], [9], [], [], [], [], []
Tue OCT 16 12:45:55 2008
Errors in file /u01/XSCLFYDB/admin/XSCLFY/bdump/xsclfy_smon_24975.trc:
ORA-00600: internal error code, arguments: [4193], [53085], [50433], [], [], [], [], []
Doing block recovery for file 433 block 13525
Block recovery from logseq 2, block 31 to scn 9867098416340
——————————————————————————————

To resolve the undo corruption issue, change undo_management to "Manual" in the init.ora. The database then opens successfully.

Once the database is up and running, create a new undo tablespace, drop the old corrupted undo tablespace, and change undo_management back to "Auto" and undo_tablespace to the new undo tablespace.

1. Create a pfile from the spfile.
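For example (the pfile path here is just a placeholder):

SQL> CREATE PFILE='/tmp/init_recover.ora' FROM SPFILE;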

2. Change undo_management from AUTO to MANUAL in the pfile:

UNDO_MANAGEMENT = MANUAL

3. Start the database with the pfile.
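For example (using the placeholder pfile from step 1):

SQL> STARTUP PFILE='/tmp/init_recover.ora';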

Note: due to the undo segment corruption we need to create a new undo tablespace.

4. Create a new undo tablespace with the same specification as the old one:

CREATE UNDO TABLESPACE undotbs_02
DATAFILE '/u01/oracle/rbdb1/undo0201.dbf' SIZE 2M;

5. Change undo_management back from MANUAL to AUTO:

ALTER SYSTEM SET UNDO_MANAGEMENT=AUTO SCOPE=BOTH;

Note: if that does not work, change the undo_management parameter in the pfile instead:

UNDO_MANAGEMENT = AUTO

6. Switch to the new undo tablespace:

ALTER SYSTEM SET UNDO_TABLESPACE = '';
ALTER SYSTEM SET UNDO_TABLESPACE = undotbs_02;

7. Create an spfile from the pfile.
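For example (again using the placeholder pfile path):

SQL> CREATE SPFILE FROM PFILE='/tmp/init_recover.ora';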

8. Restart the database.

——————————————
Monitoring:
———-

Undo information can be queried using the following views.

V$UNDOSTAT
V$ROLLSTAT
V$TRANSACTION
DBA_UNDO_EXTENTS

-- Display the rollback segments

select segment_name
, status
from dba_rollback_segs
/

-- Is anything rolling back at the moment?

Look for the used_ublk value decreasing. If it is, the session connected with it is rolling back.
When it reaches zero, rollback is complete.

set lines 100 pages 999
col username format a15
col command format a20
select ses.username
, substr(ses.program, 1, 19) command
, tra.used_ublk
from v$session ses
, v$transaction tra
where ses.saddr = tra.ses_addr
/

—————————————
Maintenance:
————

-- Add a datafile.
ALTER TABLESPACE undotbs_01
ADD DATAFILE 'C:\Oracle\Ordata\TSH1\undo0102.dbf'
AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;

-- Resize an undo datafile.
ALTER DATABASE DATAFILE 'C:\Oracle\Ordata\TSH1\undo0102.dbf'
RESIZE 10M;

-- Perform backup operations.
ALTER TABLESPACE undotbs_01 BEGIN BACKUP;
ALTER TABLESPACE undotbs_01 END BACKUP;

-- Drop an undo tablespace.
DROP TABLESPACE undotbs_01;

Oracle Database Standby Database Failover for Oracle 10g

Step 1. If the standby database is in managed recovery mode, it will need to be cancelled and the instance shutdown using the IMMEDIATE option:

SQL> recover managed standby database cancel;
Media recovery complete.

SQL> shutdown immediate
======================================================

Step 2. Startup the instance and then MOUNT the DB in standby mode:

SQL> connect / as sysdba
Connected to an idle instance.

SQL> startup nomount

SQL> alter database mount standby database;

Database altered.

====================================================

Step 3. Apply the primary database's archived logs by recovering the standby database:

SQL> recover standby database until cancel;

A CANCEL will need to be performed as this is Incomplete Recovery.

=====================================================
Step 4. Activate the standby database.

In issuing the ALTER DATABASE ACTIVATE STANDBY DATABASE command, the standby bit in the control file is reset. This process takes the database back to nomount mode:

SQL> alter database activate standby database;

Database altered.

NOTE: This process can take several minutes to complete so be patient!

============================================================

Step 5. Shut down and restart the new primary database. This step ensures that all file headers are reset and clears all buffers.

SQL> shutdown immediate

SQL> startup open

SQL> select name, open_mode, database_role from v$database;

Step 6. Check the listener.
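For example (assuming the default listener; run from the OS as the oracle user):

$ lsnrctl status
$ lsnrctl services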

Step 7. Try to connect: "sqlplus username/passwd@pr1"

ORA-04031,ORA-06508,RMAN-00554,RMAN-06429


ORACLE error from target database:
ORA-04031: unable to allocate 41152 bytes of shared memory ("shared pool","DBMS_RCVMAN","PL/SQL MPCODE","BAMIMA: Bam Buffer")
ORA-06508: PL/SQL: could not find program unit being called: "SYS.DBMS_RCVMAN"

error executing package DBMS_RCVMAN in TARGET database
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-06429: TARGET database is not compatible with this version of RMAN

SQL> show parameter shared_pool

SQL> show parameter SGA

SQL> select dbms_rcvman.getpackageversion from dual;
If this returns an error, try flushing the shared pool and then check how large the shared_pool is:

SQL> alter system flush shared_pool;

System altered.

SQL> select dbms_rcvman.getpackageversion from dual;
select dbms_rcvman.getpackageversion from dual
*
ERROR at line 1:
ORA-04031: unable to allocate 41152 bytes of shared memory ("shared pool","DBMS_RCVMAN","PL/SQL MPCODE","BAMIMA: Bam Buffer")

SQL> show parameter shared_pool

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
shared_pool_reserved_size            big integer 10M
shared_pool_size                     big integer 200M

The shared_pool_size was small, so I increased it, and that fixed the problem:

SQL> alter system set shared_pool_size=300m;

System altered.

SQL> select dbms_rcvman.getpackageversion from dual;

GETPACKAGEVERSION
——————————————————————————–
08.00.04.00

The backup now runs OK.