The Classic Case of an OFFLINE DROP in an EMC BCV with a Transportable Tablespace

Introduction:

Recently our team found itself working a lot with a technology inside the EMC storage environment called BCV (Business Continuance Volume). This technology makes it possible to clone a (production) Oracle database to another server. Another method would be to use Snaps in the EMC, but that is out of scope for this blog (though I work with those as well, and to an Oracle DBA the end result is the same, since both technologies offer replication, i.e. cloning).

In this setup there are two ways to replicate the data in the EMC environment:

  1. Either you put the database in hot backup mode with an alter database begin backup, do the split in the EMC, and then issue an alter database end backup on the source side (see the sketch after this list). After that, on the Unix side the disks need to be made visible and mounted (when using file systems). If you are using ASM, the Linux administrators need to make sure that all the disks needed for the database are present in the multipath configuration. Once that Unix/Linux part is done you can mount the database (when using a file system structure), or mount the disk groups in ASM and then mount the database for backups or open it.
  2. Or you do the split in the EMC without the begin and end backup on the source environment. The rest of the scenario is similar to 1: make the disks visible, then mount the database, or mount the disk groups and then mount the database. When you open the database on the target side in this scenario, the instance performs crash recovery.
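A minimal sketch of the source-side commands for scenario 1 (the split itself happens in the EMC storage layer, not in SQL):

alter database begin backup;
-- perform the BCV split in the EMC at this point
alter database end backup;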

In this blog we will follow scenario 2: once the file systems are mounted on the target Unix machine, the database is opened. The purpose of all this was to create a migration database in which the project members could consolidate data in an extra, locally created tablespace as part of a migration strategy.

Details

After the file systems had been mounted by the Unix admins, some extra local mount points had been set up at the DBA's request, on top of the mount points that came across with the BCV. Once the database was opened for the first time, an extra tablespace of 1.1 TB was created on those file systems, which were present only locally on the target machine. Together with the project it was agreed that this tablespace would become a transportable tablespace, since recreating it after each BCV refresh would take some 6 to 7 hours. Making it a transportable tablespace offered the option of unplugging it right before a BCV refresh and plugging it back in afterwards, thus saving time.
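The unplug/plug commands themselves are not shown here; as a minimal sketch, assuming the tablespace is called CVN_TBS (as the datafile names further down suggest) and the metadata is moved with Data Pump, the cycle looks roughly like this:

-- right before the BCV refresh: make the TTS read only and export its metadata
alter tablespace cvn_tbs read only;
-- expdp ... transport_tablespaces=CVN_TBS dumpfile=cvn_tbs.dmp
-- after the refresh: plug the metadata and the datafiles back in
-- impdp ... transport_datafiles='/db/MYDB1/cvn_tbs/CVN_TBS_001.dbf',... dumpfile=cvn_tbs.dmp
alter tablespace cvn_tbs read write;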

This scenario worked well many times, but as always, one time things did not work out well. After a refresh, and after the plug-in of the tablespace, there was a technical issue on the local file systems. The Unix admins had run repair jobs on the file system where the datafiles of the transportable tablespace (TTS) were located, and in the end had even cleaned out that file system. So there I was with a refreshed database that had the 1.1 TB transportable tablespace plugged in; the database had crashed and would not open (since the datafiles for the TTS were missing). A fresh copy from production would mean 3 to 4 hours of extra downtime, and I would still have to recreate the tablespace by hand after that anyhow.

Then I recalled a scenario from the old days: a crashed database holding a newly created tablespace for which no backup existed yet. Once again Google was my friend, and after some reading I confirmed that this could be my way out and set up the scenario below:

## First of all I needed to find all the datafiles that were supposed to be part of this TTS:

select FILE#, substr(NAME,1,80)
from v$datafile
where name like '%CVN_TBS%'
order by 1;

SQL> /

FILE# SUBSTR(NAME,1,80)
---------- --------------------------------------------------------------------------------
1063 /db/MYDB1/cvn_tbs/CVN_TBS_050.dbf
...
1113 /db/MYDB1/cvn_tbs/CVN_TBS_005.dbf
1114 /db/MYDB1/cvn_tbs/CVN_TBS_004.dbf
1115 /db/MYDB1/cvn_tbs/CVN_TBS_003.dbf
1116 /db/MYDB1/cvn_tbs/CVN_TBS_002.dbf
1117 /db/MYDB1/cvn_tbs/CVN_TBS_001.dbf

55 rows selected.

## The next step was to offline drop them, since there was no backup present:

select 'alter database datafile '||file#||' offline drop;'
from v$datafile
where name like '%CVN_TBS%'
order by 1;

## I spooled this to a file and ran the generated script, which dropped the datafiles.
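For completeness, a sketch of how that spool step could look in SQL*Plus (the file name offline_drop.sql is my own choice here):

set heading off feedback off pagesize 0
spool offline_drop.sql
select 'alter database datafile '||file#||' offline drop;'
from v$datafile
where name like '%CVN_TBS%'
order by 1;
spool off
@offline_drop.sql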

## In the next step the database could be opened:
alter database open;
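One detail not shown in my notes: before the tablespace can be recreated under the same name, the old, now fileless tablespace definition still has to be dropped; that would be something along the lines of:

drop tablespace cvn_tbs including contents;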

## Recreating the 1.1 TB tablespace would take long hours, so I first created the tablespace with one small datafile, then put the alter tablespace ... add datafile statements into scripts (each covering a range of datafiles) and ran those in the background:

cat step6_all.ksh

#!/bin/ksh
#
# start all steps
#
# step6_1.ksh recreates the tablespace with its first, small datafile;
# the scripts after it each add a range of datafiles in the background.
nohup /opt/oracle/SRCENS1/admin/create/step6_1.ksh &
sleep 60
nohup /opt/oracle/SRCENS1/admin/create/step6_11.ksh &   # adding datafile #2 - 10
nohup /opt/oracle/SRCENS1/admin/create/step6_21.ksh &   # adding datafile #11 - 20
nohup /opt/oracle/SRCENS1/admin/create/step6_31.ksh &
nohup /opt/oracle/SRCENS1/admin/create/step6_41.ksh &
nohup /opt/oracle/SRCENS1/admin/create/step6_51.ksh &
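The individual step6_*.ksh scripts are not shown here; in essence each one would run a batch of statements like the ones below (the 20g size is an assumption derived from 1.1 TB spread over 55 datafiles; the names follow the listing above):

alter tablespace cvn_tbs add datafile '/db/MYDB1/cvn_tbs/CVN_TBS_002.dbf' size 20g;
alter tablespace cvn_tbs add datafile '/db/MYDB1/cvn_tbs/CVN_TBS_003.dbf' size 20g;
...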

Since I knew the project would not need the full 1.1 TB from the first minute, this scenario minimized my downtime. And indeed, at the next normal refresh I first unplugged the TTS again, got a newly refreshed BCV, and plugged the TTS back in, thus making everybody happy again.

As always, happy reading, and till we meet again,

Mathijs