Category Archives: Oracle

Clone Oracle Database Using Hot Backup

1. First, get the details of the datafiles and the archivelog location in the original database using the commands below.

SQL> Select name from v$datafile;
SQL> archive log list;

2. Get the latest SCN using the command below.

SQL> select max(first_change#) chng
2 from v$archived_log
3 /

CHNG
----------
424485

3. Put the database in begin backup mode.

SQL> Alter database begin backup;

Database altered.

Note :- Make sure that once you start begin backup mode, no RMAN backup runs on the database until you end the backup mode. Up to 10g, if begin backup mode and an RMAN backup run at the same time, there is a chance of undo segment corruption; it is a bug reported by Oracle Support, and I have faced the same problem myself. Resolving it needs downtime, so be cautious.

4. Check whether the datafiles are in backup mode.

SQL> select * from v$backup;

FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 ACTIVE 424935 22-FEB-12
2 ACTIVE 424935 22-FEB-12

5. Now copy the datafiles physically from the original location to the destination.
Example :
SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u04/app/oracle/product/10.2.0/oradata/test/system01.dbf
/u04/app/oracle/product/10.2.0/oradata/test/undotbs01.dbf
SQL>exit
$ cd /u04/app/oracle/product/10.2.0/oradata/test/
$ ls -lrt
-rw-r----- 1 oracle staff 209723392 Feb 22 12:29 undotbs01.dbf
-rw-r----- 1 oracle staff 429924352 Feb 22 12:29 system01.dbf
$ cp *.dbf /u03/oradata/clonedb/data

Note :- If you want to clone the database to a different server, refer to the SCP (secure copy) section at the end of this post.

6. After copying all the datafiles to the destination location, end the backup mode in the original database.

SQL> alter database end backup;

Database altered.

7. Check the status of the datafiles.

SQL> select * from v$backup;

FILE# STATUS CHANGE# TIME
---------- ------------------ ---------- ---------
1 NOT ACTIVE 424935 22-FEB-12
2 NOT ACTIVE 424935 22-FEB-12

8. Archive the current logfile.

SQL> alter system archive log current;

System altered.

9. Get the details of the archivelogs that will be needed for recovery while bringing up the clone database.

SQL> select name
2 from v$archived_log
3 where first_change# >= &change_no
4 order by name
5 /

Enter value for change_no: 424485
old 3: where first_change# >= &change_no
new 3: where first_change# >= 424485 (enter the SCN we noted in step 2)

NAME
-------------------------------------------------------------------------------
/u04/app/oracle/product/10.2.0/oradata/test/arch/1_26_775911007.dbf
/u04/app/oracle/product/10.2.0/oradata/test/arch/1_27_775911007.dbf

Copy the archivelog files listed above to the clone database archivelog location. (These logs are needed for point-in-time recovery, since we are cloning the database using hot backup.)

10. Create a pfile from the spfile.

SQL> show parameter spfile

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string /u04/app/oracle/product/10.2.0
/dbs/spfileprim.ora
SQL> create pfile from spfile;

File created.

11. In the original database, generate the CREATE CONTROLFILE statement with the following command.

SQL>alter database backup controlfile to trace;
This will create a trace file containing the "CREATE CONTROLFILE" command to recreate the controlfile in text form.

12. Create the necessary directories on the clone database (destination database) server in your desired location.

Example :- mkdir udump adump cdump bdump arch

udump – user dump destination
bdump – background dump destination
adump – audit dump destination
cdump – core dump destination
arch – Archive log destination

13. Now, go to the USER_DUMP_DEST directory on the original database server and open the latest trace file. The trace file name will typically have the form "<SID>_ora_NNNN.trc", with NNNN being a number. This file contains the steps as well as the CREATE CONTROLFILE statement. Copy the CREATE CONTROLFILE statement and paste it into a text editor.
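If several trace files are present, the newest one is easy to spot from the shell. The udump path below is just an assumed example; use whatever USER_DUMP_DEST reports on your system:

SQL> show parameter user_dump_dest
SQL> exit
$ cd /u04/app/oracle/product/10.2.0/admin/test/udump
$ ls -lrt *.trc | tail -1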

14. Edit the file

FROM: CREATE CONTROLFILE REUSE DATABASE "olddbname" RESETLOGS ...
TO: CREATE CONTROLFILE SET DATABASE "newdbname" RESETLOGS ...

Change the word 'REUSE' to 'SET' and 'olddbname' to 'newdbname'. Also change the datafile locations to the clone database locations.
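After the edit, the statement should look roughly like the sketch below. The datafile names come from this example's clone layout; the log file names, sizes and limits are placeholders, not values captured from a real trace:

CREATE CONTROLFILE SET DATABASE "clone" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXDATAFILES 100
LOGFILE
GROUP 1 '/u03/oradata/clone/redo01.log' SIZE 50M,
GROUP 2 '/u03/oradata/clone/redo02.log' SIZE 50M
DATAFILE
'/u03/oradata/clone/data/system01.dbf',
'/u03/oradata/clone/data/undotbs01.dbf',
'/u03/oradata/clone/data/sysaux01.dbf',
'/u03/oradata/clone/data/users01.dbf'
CHARACTER SET US7ASCII;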

15. Now copy the pfile from the original database server to the clone database server and place it under the $ORACLE_HOME/dbs location. Then open the parameter file on the clone server and point the following parameters at their new locations (see the sketch after this list).

CONTROL_FILES
BACKGROUND_DUMP_DEST
USER_DUMP_DEST
CORE_DUMP_DEST
LOG_ARCHIVE_DEST_1
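A minimal sketch of the edited entries, using the clone layout from this example (the control file names are assumed):

*.control_files='/u03/oradata/clone/control01.ctl','/u03/oradata/clone/control02.ctl'
*.background_dump_dest='/u03/oradata/clone/bdump'
*.user_dump_dest='/u03/oradata/clone/udump'
*.core_dump_dest='/u03/oradata/clone/cdump'
*.log_archive_dest_1='LOCATION=/u03/oradata/clone/arch'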

16. On the clone database server, export the ORACLE_SID environment variable and start the instance

$ export ORACLE_SID=<clone database name>
$ sqlplus
Enter user-name: / as sysdba
SQL> startup nomount pfile='init.clonedb.ora';

17. Run the create controlfile script to create the controlfile

SQL>@createcontrolfile.sql

Control file created.

18. Check the status of the database.

SQL> select name,open_mode from v$database;

NAME OPEN_MODE
--------- ----------
CLONE MOUNTED

19. Also check whether the clone database is pointing to its own datafiles and dump locations (just for verification)

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u03/oradata/clone/data/system01.dbf
/u03/oradata/clone/data/undotbs01.dbf
/u03/oradata/clone/data/sysaux01.dbf
/u03/oradata/clone/data/users01.dbf

SQL> show parameter dump

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
background_core_dump string partial
background_dump_dest string /u03/oradata/clone/bdump
core_dump_dest string /u03/oradata/clone/cdump
max_dump_file_size string UNLIMITED
shadow_core_dump string partial
user_dump_dest string /u03/oradata/clone/udump

20. Media recovery is now needed because we are cloning from the hot backup. Follow the steps below.

SQL> recover database using BACKUP CONTROLFILE until cancel;

ORA-00279: change 424935 generated at 02/22/2012 12:29:27 needed for thread 1
ORA-00289: suggestion : /u03/oradata/clone/arch/1_27_775911007.dbf
ORA-00280: change 424935 for thread 1 is in sequence #27

21. Now we need to apply the necessary archivelog files. (Refer to step 9 and apply all the archivelog files copied there.)

Specify log: {=suggested | filename | AUTO | CANCEL}
/u03/oradata/clone/arch/1_27_775911007.dbf (Give the archive location and file name)
ORA-00279: change 425120 generated at 02/22/2012 12:37:58 needed for thread 1
ORA-00289: suggestion : /u03/oradata/clone/arch/1_28_775911007.dbf
ORA-00280: change 425120 for thread 1 is in sequence #28
ORA-00278: log file '/u03/oradata/clone/arch/1_27_775911007.dbf' no longer
needed for this recovery

22. Once all the necessary archivelog files are applied, enter CANCEL.

Specify log: {=suggested | filename | AUTO | CANCEL}
CANCEL
Media recovery cancelled.

23. Now open the database with RESETLOGS.

SQL> alter database open resetlogs;

Database altered.

SQL> select name,open_mode from v$database;

NAME OPEN_MODE
--------- ----------
CLONE READ WRITE

24. Get the latest SCN in the clone database

SQL> select max(first_change#) chng
2 from v$archived_log
3 /

CHNG
----------
424488 (compare this with the SCN we noted in step 2)

Command for secure copy (SCP)

If you want to clone the database on a different server, use the SCP command to copy the files from the source to the destination.

Syntax:

$ scp foobar.txt your_username@remotehost.edu:/some/remote/directory

Where, your_username is your clone DB server login user id
remotehost.edu is the clone DB host name
foobar.txt is your filename
/some/remote/directory is your destination location on the clone server
Example :
Consider you are in the source server location (i.e., the original DB server, IP address 10.250.27.234)

$ pwd

/u04/app/oracle/product/10.2.0/oradata/test/

$ ls -lrt

-rw-r----- 1 oracle staff 209723392 Feb 22 12:29 undotbs01.dbf
-rw-r----- 1 oracle staff 429924352 Feb 22 12:29 system01.dbf

$ scp undotbs01.dbf oracle@10.251.55.123:/u03/oradata/clone/data/

Where, oracle is your clone DB server login user id
10.251.55.123 is the clone DB host name
undotbs01.dbf is your filename
/u03/oradata/clone/data is your destination location on the clone server

CLONE ORACLE DATABASE WITH COLD BACKUP:

1. Identify and copy the database files

With the source database started, identify all of the database's files. The following query will display all datafiles, tempfiles and redo logs:

set lines 100 pages 999
col name format a50
select name, bytes
from (select name, bytes
      from v$datafile
      union all
      select name, bytes
      from v$tempfile
      union all
      select lf.member "name", l.bytes
      from v$logfile lf, v$log l
      where lf.group# = l.group#) used
   , (select sum(bytes) as poo
      from dba_free_space) free
/

OR

SQL>Select name from v$datafile;

SQL>Select member from v$logfile;

Make sure that the clone database's file system is large enough and has all the necessary directories.

If the source database has a complex file structure, you might want to consider modifying the above SQL to produce a file copy script; a sketch follows.
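For example, the following SQL*Plus sketch spools a ready-made copy script; the destination directory /u03/oradata/clonedb/data is an assumed target, change it to suit:

set pages 0 lines 200 feedback off trimspool on
spool copy_files.sh
select 'cp ' || name   || ' /u03/oradata/clonedb/data/' from v$datafile
union all
select 'cp ' || name   || ' /u03/oradata/clonedb/data/' from v$tempfile
union all
select 'cp ' || member || ' /u03/oradata/clonedb/data/' from v$logfile;
spool off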

Stop the source database with:

shutdown immediate

Copy, scp or ftp the files from the source database/machine to the target. Do not copy the control files across. Make sure that the files have the correct permissions and ownership.

Start the source database up again:

startup

2. Produce a pfile for the new database

This step assumes that you are using a spfile. If you are not, just copy the existing pfile.

From sqlplus:

create pfile='init.ora' from spfile;

This will create a new pfile in the $ORACLE_HOME/dbs directory.

Once created, the new pfile will need to be edited. If the cloned database is to have a new name, this will need to be changed, as will any paths. Review the contents of the file and make alterations as necessary.

Also think about adjusting memory parameters. If you are cloning a production database onto a slower development machine you might want to consider reducing some values.

Now open the parameter file for the clone database and point the following parameters at their new locations:

CONTROL_FILES
BACKGROUND_DUMP_DEST
USER_DUMP_DEST
CORE_DUMP_DEST
LOG_ARCHIVE_DEST_1

Then place the BST4 pfile (BST4 is the clone database name used in this example) in $ORACLE_HOME/dbs.

Note: Pay particular attention to the control file locations.

3. Create the clone controlfile

Create a control file for the new database. To do this, connect to the source database and request a dump of the current control file. From sqlplus:

alter database backup controlfile to trace as '/home/oracle/cr_.sql'

/

4. Edit the file

FROM: CREATE CONTROLFILE REUSE DATABASE "BST2" RESETLOGS ...

TO: CREATE CONTROLFILE SET DATABASE "BST4" RESETLOGS ...

Change the word 'REUSE' to 'SET' and 'BST2' to 'BST4'. Also change the datafile locations to the BST4 database locations, and update the following parameters in the pfile:

CONTROL_FILES
BACKGROUND_DUMP_DEST
USER_DUMP_DEST
CORE_DUMP_DEST
LOG_ARCHIVE_DEST_1

Then place the BST4 pfile in /DV1_u31/oraBST2/db/tech_st/10.2.0/dbs.

5. On the clone database server, export the ORACLE_SID environment variable and start the instance

$ export ORACLE_SID=bst4

$ sqlplus

Enter user-name: / as sysdba

SQL> startup nomount pfile='initBST4.ora';

6. Run the create controlfile script to create the controlfile

SQL>@createcontrolfile.sql

Troubleshooting:

It is quite common to run into problems at this stage. Here are a couple of common errors and solutions:

ORA-01113: file 1 needs media recovery

You probably forgot to stop the source database before copying the files. Go back to step 1 and recopy the files.

ORA-01503: CREATE CONTROLFILE failed

ORA-00200: controlfile could not be created

ORA-00202: controlfile: '/u03/oradata/dg9a/control01.ctl'

ORA-27038: skgfrcre: file exists

Double check the pfile created in step 2. Make sure the control_files setting is pointing at the correct location. If the control_files setting is ok, make sure that the control files were not copied with the rest of the database files. If they were, delete or rename them.

7. Open the database

SQL>alter database open;

8. Perform a few checks

If the last step went smoothly, the database should be open.

It is advisable to perform a few checks at this point:

Check that the database has opened with:

select status from v$instance;

The status should be 'OPEN'

Make sure that the datafiles are all ok:

select distinct status from v$datafile;

It should return only ONLINE and SYSTEM.

Take a quick look at the alert log too.

9. Set the database's global name

The new database will still have the source database's global name. Run the following to reset it (substitute your new global name):

alter database rename global_name to <new_name>
/

10. Create a spfile

From sqlplus:

create spfile from pfile;

11. Change the database ID

If RMAN is going to be used to back up the database, the database ID must be changed.

If RMAN isn't going to be used, there is no harm in changing the ID anyway, and it's a good practice to do so.

From sqlplus:

shutdown immediate

startup mount

exit

From unix:

nid target=/

NID will ask if you want to change the ID. Respond with 'Y'. Once it has finished, start the database up again in sqlplus:

shutdown immediate

startup mount

alter database open resetlogs

/
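To verify that NID really changed the identifier, you can note the DBID before the run and compare it afterwards:

select dbid, name from v$database;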

12. Configure TNS

Add entries for new database in the listener.ora and tnsnames.ora as necessary.

13. Finished

That’s it!

Temporarily disabling the log shipping to standby database


$ sqlplus

SQL*Plus: Release 10.2.0.3.0 - Production on Wed Jul 25 21:07:57 2012

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- ----------
AIRMAN    READ WRITE

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8941

SQL> alter system switch logfile;

System altered.

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8942

SQL> select status, DEST_NAME, DESTINATION from v$archive_dest where status = 'VALID';
SQL> show parameter LOG_ARCHIVE_DEST_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      SERVICE=airman_sj LGWR ASYNC V
ALID_FOR=(ONLINE_LOGFILE, PRIM
ARY_ROLE) db_unique_name=airma
n_sj

SQL> show parameter log_archive

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_config                   string      DG_CONFIG=(airman_kj,airman_sj
)
log_archive_dest                     string
log_archive_dest_1                   string      LOCATION=/u01/oradata/airman/a
rchive VALID_FOR=(ALL_LOGFILES
, ALL_ROLES) db_unique_name=ai
rman_kj
log_archive_dest_10                  string
log_archive_dest_2                   string      SERVICE=airman_sj LGWR ASYNC V
ALID_FOR=(ONLINE_LOGFILE, PRIM
ARY_ROLE) db_unique_name=airma

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
n_sj
log_archive_dest_3                   string
log_archive_dest_4                   string
log_archive_dest_5                   string
log_archive_dest_6                   string
log_archive_dest_7                   string
log_archive_dest_8                   string
log_archive_dest_9                   string
log_archive_dest_state_1             string      enable
log_archive_dest_state_10            string      enable
log_archive_dest_state_2             string      ENABLE
log_archive_dest_state_3             string      enable
log_archive_dest_state_4             string      enable
log_archive_dest_state_5             string      enable
log_archive_dest_state_6             string      enable
log_archive_dest_state_7             string      enable
log_archive_dest_state_8             string      enable
log_archive_dest_state_9             string      enable
log_archive_duplex_dest              string
log_archive_format                   string      %t_%s_%r.arc
log_archive_local_first              boolean     TRUE
log_archive_max_processes            integer     2
log_archive_min_succeed_dest         integer     1
log_archive_start                    boolean     FALSE
log_archive_trace                    integer     0
SQL> alter system set log_archive_dest_state_2=defer scope=both;

System altered.

SQL> show parameter log_archive_dest_state_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
log_archive_dest_state_2             string      DEFER

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8942

SQL> alter system switch logfile;

System altered.

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8943

SQL>  alter system switch logfile;

System altered.

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8944

SQL>  alter system switch logfile;

System altered.

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8945
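Once the standby maintenance is over, re-enable the destination the same way and force a log switch so that shipping resumes:

SQL> alter system set log_archive_dest_state_2=enable scope=both;

System altered.

SQL> alter system switch logfile;

System altered.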

How to switch the database to a new UNDO tablespace and drop the old one

STEP -1
=======

$ sqlplus / as sysdba

SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS1
SQL>

The current undo tablespace, as shown by the initialization parameter undo_tablespace, is UNDOTBS1.
Leave this sysdba session as is, open another console, log in as user SCOTT and initiate a transaction (see the example below).
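For example, a single uncommitted update is enough to hold an undo segment (this assumes the classic SCOTT/TIGER demo schema is present):

$ sqlplus scott/tiger

SQL> update emp set sal = sal where empno = 7369;

1 row updated.

Do not COMMIT or ROLLBACK; leave this session open.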

STEP -2
=======

-- Create a new undo tablespace

CREATE UNDO TABLESPACE undotbs2
DATAFILE '/d01/apps/oradata/oraxpo/undotbs201.dbf'
SIZE 50M AUTOEXTEND ON NEXT 5M;

Tablespace created.

STEP -3
=======

-- Switch the database to the new UNDO tablespace.

ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS2 SCOPE=BOTH;

System altered.

STEP -4
=======

-- Try to drop the old tablespace; it fails.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

With alter system set undo_tablespace=UNDOTBS2, the database UNDO tablespace is changed and any
new transaction's undo data will go to the new tablespace, i.e. UNDOTBS2.
But the undo data for an already pending transaction (e.g. the one initiated by SCOTT before the
switch) is still in the old tablespace with a status of PENDING OFFLINE. As long as it
is there, you cannot drop the old tablespace.

STEP -5
=======

-- The query below shows the name of the UNDO segment in the UNDOTBS1 tablespace and its status.
Now let's see which users/sessions are running this pending transaction.

set lines 10000
column name format a10

SELECT a.name,b.status
FROM   v$rollname a,v$rollstat b
WHERE  a.usn = b.usn
AND    a.name IN (
SELECT segment_name
FROM dba_segments
WHERE tablespace_name = 'UNDOTBS1'
);

NAME       STATUS
---------- ---------------
_SYSSMU8$  PENDING OFFLINE

column username format a6

SELECT a.name,b.status , d.username , d.sid , d.serial#
FROM   v$rollname a,v$rollstat b, v$transaction c , v$session d
WHERE  a.usn = b.usn
AND    a.usn = c.xidusn
AND    c.ses_addr = d.saddr
AND    a.name IN (
SELECT segment_name
FROM dba_segments
WHERE tablespace_name = 'UNDOTBS1'
);

NAME       STATUS          USERNA        SID    SERIAL#
---------- --------------- ------ ---------- ----------
_SYSSMU8$  PENDING OFFLINE SCOTT         147          4

So this is SCOTT, with SID=147 and SERIAL#=4. Since we now know the user, we can go to him/her and
request that the transaction be ended gracefully, i.e. issue a ROLLBACK or COMMIT. However,
if this is not possible (say the user initiated the transaction and left for annual leave 🙂
and trust me, this happens) you may go ahead and kill the session to release the undo
segments in the UNDOTBS1 tablespace.

SQL> alter system kill session '147,4';

System altered.

SELECT a.name,b.status , d.username , d.sid , d.serial#
FROM   v$rollname a,v$rollstat b, v$transaction c , v$session d
WHERE  a.usn = b.usn
AND    a.usn = c.xidusn
AND    c.ses_addr = d.saddr
AND    a.name IN (
SELECT segment_name
FROM dba_segments
WHERE tablespace_name = 'UNDOTBS1'
);

no rows selected

As we can see, once the session is killed we no longer see any segments occupied in the UNDOTBS1 tablespace.
Let's drop UNDOTBS1.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

If you are retaining undo data, you still won't be able to drop the tablespace immediately because it is still in use for undo retention.
Let the UNDO_RETENTION time pass and then try to drop the tablespace. In my case it is 900 seconds, i.e. 15 minutes.

-- After 15 minutes.

SQL> DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

Oracle best practice: Primary and Standby archive crosscheck

Primary:

STEP -1
========

LOGIN TO THE PRIMARY SERVER

$ su - orapr1
Password:

STEP -2
=======

GET THE SEQUENCE MAX FROM V$LOG_HISTORY

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
76968

SQL> alter system switch logfile;

System altered.

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
76969

SQL> exit

=========================================

ON STANDBY SERVER:

$ ps -ef|grep pmon
oratst  2978     1   0   Sep 08 ?         147:34 ora_pmon_amantst
oracle  3039     1   0   Sep 08 ?         137:34 ora_pmon_airman
e460269 16109 16104   0 18:54:44 pts/1       0:00 grep pmon

$ su - oracle

Password:
mesg: cannot change mode

$ sqlplus

SQL*Plus: Release 10.2.0.3.0 - Production on Thu May 17 18:55:10 2012

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8548

SQL> select max(sequence#) from v$log_history;

MAX(SEQUENCE#)
--------------
8549

SQL> select * from v$archive_gap;
SQL> select sequence#, archived, applied, status from v$archived_log;
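As an additional check on the standby, confirm that managed recovery is running and see which sequence it is working on (v$managed_standby is the 10g view for this):

SQL> select process, status, thread#, sequence# from v$managed_standby;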

Oracle exp and imp: oracle9i to 11g

Step 1
======
Set the same character set on the source server:
================================================

In the source database:
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
Do the actual export (exp parfile=exp.par); a sample parfile is sketched below.
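For reference, a minimal exp.par carrying the same options as the inline command in Step 2 (values are from this example; adjust for your site):

file=NAS30.dmp
log=NAS30.log
owner=NAS30
buffer=100000000
grants=y
rows=y
statistics=none
direct=y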

Step 2
======
Exporting the data from the source database:
============================================

exp system/system123 file=NAS30.dmp log=NAS30.log owner=NAS30 buffer=100000000 grants=y rows=y statistics=none direct=y

Step 3
======
Extract the tablespace and user creation scripts from the source database with the help of the scripts below:
==============================================================================================================

For tablespace:
===============

select 'create tablespace ' || df.tablespace_name || chr(10)
|| ' datafile ''' || df.file_name || ''' size ' || df.bytes
|| decode(autoextensible,'N',null, chr(10) || ' autoextend on maxsize '
|| maxbytes)
|| chr(10)
|| 'default storage ( initial ' || initial_extent
|| decode (next_extent, null, null, ' next ' || next_extent )
|| ' minextents ' || min_extents
|| ' maxextents ' ||  decode(max_extents,'2147483645','unlimited',max_extents)
|| ') ;'
from dba_data_files df, dba_tablespaces t
where df.tablespace_name=t.tablespace_name
/

For the Users:
==============

set pagesize 0
set escape on
select 'create user ' || U.username || ' identified ' ||
DECODE(password,
NULL, 'EXTERNALLY',
' by values ' || '''' || password || ''''
)
|| chr(10) ||
'default tablespace ' || default_tablespace || chr(10) ||
'temporary tablespace ' || temporary_Tablespace || chr(10) ||
' profile ' || profile || chr(10) ||
'quota ' ||
decode ( Q.max_bytes, -1, 'UNLIMITED', NULL, 'UNLIMITED', Q.max_bytes) ||
' on ' || default_tablespace ||
decode (account_status,'LOCKED', ' account lock',
'EXPIRED', ' password expire',
'EXPIRED \& LOCKED', ' account lock password expire',
null)
||
';'
from dba_users U, dba_ts_quotas Q
-- Comment this clause out to include system & default users
where U.username not in ('SYS','SYSTEM')
and U.username=Q.username(+) and U.default_tablespace=Q.tablespace_name(+)
;
set pagesize 100
set escape off

Step 4
======

FTP all the dumps and the tablespace/user scripts to the destination server:
=============================================================================

FTP:-
======
1. Copy the file to the /tmp location.

# cp filename /tmp

2. Change the file permissions in the /tmp location.

# chmod 755 filename

3. Open the command prompt.

In Run, type CMD
> ftp <source/destination ip address>
> username
> password
> cd /tmp (file location)
> ls -lrt (to see all the files)
> lcd "location" (location is where you want to copy the file)
> bin (binary mode transfer)
> get filename (if connected to the source ip address)
> put filename (if connected to the destination ip address)
If you want to copy many files, use mget/mput with a file extension pattern.

Step 5
======

1) Install the Oracle 11g binaries on the destination server
=============================================================

2) Create the database on the installed 11g home
=================================================

3) Run the scripts for creating the tablespaces and users
==========================================================

Step 6
======
Set the same character set on the destination server:
======================================================

In the destination database:
export NLS_LANG=AMERICAN_AMERICA.UTF8
Do the actual import (imp parfile=imp.par)

Step 7
======

Importing the dump file on the destination server
==================================================

imp nas30/nas30 file=NAS30.dmp log=imp_nas30.log fromuser=nas30 touser=imp_test buffer=100000000 grants=y ignore=y rows=y commit=y statistics=none
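Once the import finishes, a quick sanity check is to compare object counts in the target schema against the source (IMP_TEST is the touser schema from the command above):

SQL> select object_type, count(*)
2 from dba_objects
3 where owner = 'IMP_TEST'
4 group by object_type
5 order by object_type;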

Oracle Security Alert for CVE-2012-1675 “TNS Listener Poison Attack”

Oracle Critical Patch Updates (CPU)

The Oracle database server has a separate network connection process
that usually operates on TCP port 1521. The database registers as a
listener with this process and the process forwards the client requests
on to the actual database system that handles the requested
database instance. Since version 8i, these network connection
processes can register additional listeners. Such a listener can
even be registered for an existing database instance. The active listener
interprets this as a new Oracle Real Application Clusters (RAC) node and uses
the new listener to implement load balancing. In other words: every second
database connection will be routed via the new listener.

This security hole is particularly serious "because it allows remote
and unauthenticated attackers to redirect the database's network
traffic on the database server to an arbitrary server and then
intercept it. All they need to know is the Oracle SID
or Oracle service name."

Affected Products and Versions:

Oracle Database 11g Release 2, versions 11.2.0.2, 11.2.0.3
Oracle Database 11g Release 1, version 11.1.0.7
Oracle Database 10g Release 2, versions 10.2.0.3, 10.2.0.4, 10.2.0.5

Immediate solution for a non-clustered environment:

dynamic_registration_<listener> = off

For Example:

Step 1
======
LSNRCTL> show dynamic_registration

Connecting to (ADDRESS=(PROTOCOL=IPC)(KEY=XS2.WORLD))

LISTENER parameter "dynamic_registration" set to ON

The command completed successfully

Step 2
======

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
)

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0.2)
(PROGRAM = extproc)
)
(SID_DESC =
(global_dbname = ORCL.hostname)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0.2)
(sid_name = hostname)
)
)

ADR_BASE_LISTENER = /u01/app/oracle

INBOUND_CONNECT_TIMEOUT_LISTENER = 120

DYNAMIC_REGISTRATION_LISTENER = off
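After saving listener.ora, bounce the listener and repeat the check from Step 1 to confirm the new setting has taken effect:

$ lsnrctl stop
$ lsnrctl start
$ lsnrctl
LSNRCTL> show dynamic_registration

The parameter should now be reported as OFF.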

Conclusion:

The attack is on the Listener itself – so if we want to prevent this attack,
we need to secure that Listener, irrespective of its location.

Note: Mandatory if we expose our Listener to an unsecured or public network (e.g. internet).

As for Listeners running on our internal network –
Internal network already needs to be compromised in order for the attack to occur.

Optimizing the Oracle 11g Result Cache

The Oracle 11g Result Set Cache stores entire result sets in shared memory.
If a SQL query is executed and its result set is in the cache, then almost the entire overhead of the
SQL execution is avoided: this includes parse time, logical reads, physical reads and any cache contention
overhead (latches, for instance) that might normally be incurred.

Adverse effects:

1) Multiple SQLs that have overlapping data will store that data redundantly in the cache. So the result set that contains
all customers from 'X' will duplicate some of the data in the cached result set for all customers from 'Y'.
Therefore, the result set cache is not always as memory efficient as the buffer cache.

2) Any change to a dependent object – to any table referenced in the query – will invalidate the entire cached result set.
So the result set cache is most suitable for tables that are read only or nearly read only.

3) Really big result sets will either be too big to fit in the result set cache, or will
force most of the existing entries in the cache out.

4) Rapid concurrent creation of result sets in the cache will result in latch contention.

Note: The result set cache is part of the shared pool. By default it is sized at only 1% of the shared pool,
so it is usually pretty small. I increased the size of my result set cache to 10MB using the RESULT_CACHE_MAX_SIZE parameter, as shown below.
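For instance, the resize from the note above is a one-liner (10MB is simply the value I chose):

SQL> alter system set result_cache_max_size = 10M scope=both;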

************** Not suitable for all SQLs ************************

Result set caching makes sense only if the following are true (a usage sketch follows the list):

1) The result set is small
2) The SQL statement is expensive
3) The SQL statement does not experience high rates of concurrent execution
4) The SQL does not generate a large number of result sets
5) The SQL is against relatively static tables
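When a statement does fit those criteria, opting in is just a hint away. The table and column names below are illustrative only; the views and the procedure are the standard 11g ones:

SQL> select /*+ RESULT_CACHE */ region, count(*)
2 from sales_history
3 group by region;

Then check cache effectiveness and memory use:

SQL> select name, value from v$result_cache_statistics;
SQL> set serveroutput on
SQL> exec dbms_result_cache.memory_report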

Conclusion:
The result set cache best suits small result sets from expensive queries on tables that are infrequently updated.
Blindly applying the result set cache to all SQLs is likely to be ineffective and can lead to significant latch contention.
The application team has to drill down into the SQLs and provide more information before going for a specific implementation.

Oracle Books Recommended by Jonathan Lewis

By Jonathan Lewis

When I was in Salt Lake City a few months ago, Barbara Matthews (one of the organisers of the SLC Oracle User Group) asked me for my “Top 10” books about Oracle.

I don’t normally do lists – and I don’t read many Oracle books – but Iggy Fernandez put the same question, slightly differently phrased, as part of a brief interview for the Northern California Oracle User Group magazine a few days later. So I thought I’d try to come up with a useful answer. This is Iggy’s phrasing of the question:

Let’s say that I’m a junior Oracle DBA and my manager offered to buy ten books on Oracle for me. Which books would you recommend?

Ten is a pretty big number, and I think you’d have to consider getting a couple of books that were tailored to the parts of Oracle used by the business (such as RAC, streams, and so on) but I’ll restrict myself to a fairly generic response – to the extent that some of the books aren’t even specific to Oracle.

For my first choice I’d try to claim a Kindle or Sony e-Reader as the first “book”. I haven’t used one in anger, and the Kindle isn’t even available in the UK, but the brief play I had with the e-reader left me keen to buy one. (Unfortunately the demo model in the shop was sealed in a tamper-proof frame and had no real books on it because someone had accidentally deleted all (100) books from its memory – so maybe I should be a little cautious about recommending it.)

The second choice(s) ought to be some Oracle manuals. I know you can get them in html or pdf form online (I prefer the pdfs with Acrobat indexing); but for simply reading (rather than reference) there’s nothing quite like a real book – and I don’t mean a stack of paper from a laser printer. My manuals of choice would be the Concepts Guide, the Database Admin Guide (Fundamentals) and the Performance Tuning Guide. I might pick the 9i versions (especially the tuning guide), rather than the 10g versions because some of the 10g manuals are getting to the stage where they lean too heavily on “how to use the OEM GUI”.

But if you’ve got a Kindle or eReader, you can probably load the entire set of pdf files for the last four versions of Oracle, and carry them all around at once ! So I’m not going to count any of the Oracle manuals towards my list of 10.

In no particular order, then, here are the other nine:

2) Tapio Lahdenmaki and Mike Leach: Relational Database Index Design and the Optimizers. The authors are better known in the DB2 world than the Oracle world, but the concepts and understanding needed to design good indexes are the same even when there are variations in vendor implementation. (For more details, check my review on Amazon)

3) Dan Tow: SQL Tuning. This is the book that does for SQL Tuning what Tapio Lahdenmaki and Mike Leach do for understanding indexes. There is a way to think through the problems of data access – that method is going to help you find the best execution plan and work out what indexes your system needs.

4) Toon Koppelaars and Lex de Haan: Applied Mathematics for Database Professionals. This is a book that many might think too theoretical for a practising DBA – but if you don’t have some idea of why the relational model is a “good thing” then you can’t do your job properly. This book may not be a direct help to you in your work, but it may help you to appreciate what you can achieve with the Oracle software. (For more details, check my review on Amazon)

5) Tom Kyte: Expert Oracle Database Architecture. For a wealth of information about a wide range of the Oracle technology and how best to use it. This book tells you a lot about what Oracle can do, but also shows you, by the way it’s written, how to extend your knowledge into new areas. If I were allowed a longer list, I would also include Tom’s Expert One-on-one Oracle.

6) Jonathan Lewis: Cost Based Oracle – Fundamentals. I know that it’s my own, but it is the best book you’re going to find about how the optimizer works, and if you understand the core features of the optimizer you’ll find it much easier to solve problems with slow SQL. (And if I were allowed a longer list I probably would include Practical Oracle 8i)

7) Christian Antognini : Troubleshooting Oracle Performance. A recent release that contains a huge amount of information about how to identify and address performance problems. I think I’d probably rate this as the top Oracle tuning book on the market at present (UK link) (US link).

8) Cary Millsap and Jeff Holt: Optimizing Oracle Performance. There are a couple of key points about Oracle performance that are rarely stated as clearly as they are in this book. One is the focus on addressing the users’ complaints, the other is the impact of concurrency. It won’t be casual reading for everyone – but the chapter on queueing theory is a must-read for all DBAs.

9) Recovery. If you’re a DBA, then the most important job you’ll ever do is to make sure that you can recover the database quickly after a disaster – and if you’re lucky you’ll never have to do it. There’s probably a good book out there somewhere, but I’ve never had any need to go looking for it. If there is one, then every DBA should have a copy and be made to practise. The difficulty with recovery is that the bits you need to cover well are site dependent, and no one book will tell you all you really need to know about the way you end up doing things for your site.

10) “X”: One of my pet theories is that people learn more by seeing things go wrong, rather than being told what things look like when they are going well. So all DBAs should be given at least one really bad book about Oracle and told to compare the clarity of writing, quality of explanation, and method of teaching with a good book so that they can learn to recognise suspect material. There are plenty of bad books on the market – fortunately I haven’t seen enough of them to be able to give you a “bottom 10” list.


Steps for performing database cloning using hot backup

The steps below help you perform database cloning using a hot backup.

Assumptions:

1. The directory structure is different on the source and target servers
2. Oracle version: 10.2.0.5
3. OS version: Linux 5
4. The target database name is the same as the source database name
5. The database should be in ARCHIVELOG mode

step 1 :  Take a hot backup of the source database

If the source database is not in ARCHIVELOG mode, alter the database to ARCHIVELOG mode.

SQL> STARTUP MOUNT;

SQL> ALTER DATABASE ARCHIVELOG;

SQL> ALTER DATABASE OPEN;
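A quick check that the switch took effect before taking the backup; the first lines of the output should read:

SQL> archive log list;

Database log mode              Archive Mode
Automatic archival             Enabled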

sql> alter database begin backup;

$ copy datafiles to backup location

sql> alter database end backup;

step 2 : Take the controlfile trace and the pfile or spfile (whichever was being used by the source database)

step 3 : Install the Oracle software on the other machine (choose the "Install only" option in OUI). Don't create any database

step 4 : Copy all the files (including the trace file and the pfile or spfile) from the source server to the target server using either FTP or rcp
Note: It is not necessary to copy the control files and redo log files

step 5 : Place pfile or spfile in “dbs” directory on target

step 6 : Copy the remaining files to their respective locations (if any directories are missing, create them)

step 7 : Connect as sysdba and start the database in NOMOUNT stage

step 8 : Edit the trace file (that was copied) and generate a create controlfile script from it.
Modify the script and specify the new locations of  the files.

step 9 : Execute the controlfile script, which will create controlfiles in the locations specified by the CONTROL_FILES parameter.
Once the control files are created, the database will be brought to MOUNT state.

sql> @create_controlfile.sql

step 10 : Finally, open the database with the RESETLOGS option

sql> startup nomount

sql> alter database mount;

sql> recover database using backup controlfile until cancel;

sql> alter database open resetlogs;