Introduction:
For one of our projects the question came in to investigate and set up an 11.2.0.4 Real Application Clusters database, with the extra challenge that the migration had to be done cross-platform: from Oracle 10.2.0.3 on Solaris to 11.2.0.4.0 on Linux. The application provider suggested investigating a backup/restore scenario with an upgrade on the new (Linux) server. Since the source environment was 10.2.0.3 on Solaris (big endian) and we were heading towards a RAC cluster environment on Linux (little endian), that suggestion was the first to be sent to the dustbin.
A normal export/import was the second scenario explored. This is of course a valid scenario, but given that the database was more than 1.x TB, it was not exactly the favorite way to bring the data across. Still, with scripting, multiple parfiles, and/or moving partitioned data across in waves, it would make a fair plan B.
After further reading, I put my mind to the use of transportable tablespaces as the way forward with this challenging question.
Requirements:
As preparation for the job I requested an NFS filesystem mounted on both the source server (MySunServer), holding the 10g database, and the target server (MyLinuxcluster). This NFS filesystem would hold the Data Pump dumps to be created, as well as the scripts and parfiles / config files, as suggested in MOS Note 1389592.1. The NFS filesystem was read-writable from both servers. The Perl scripts that come with the note support the transport of the tablespaces, help with the conversion from big endian to little endian, and as a bonus in my case also do the copy into ASM.
Due to the layout of the database in the source environment, RMAN was chosen as the best way forward for this scenario.
As a preparation, an 11.2.0.4 RAC database was set up on the target cluster. This database holds only the normal tablespaces and a small temporary tablespace for the users. (In a TTS solution, the names of the data tablespaces that come across may not already exist in the new environment.) All data / application users were pre-created in the new environment with a new default user tablespace.
Details & Comments
Configuration file for the Perl scripts:
This file is part of the zip file that accompanies the MOS note. It needs to be set up to match your specific needs. I will only show the settings I used, with their comments:
xtt.properties:
## Reduce Transportable Tablespace Downtime using Incremental Backups
## (Doc ID 1389592.1)
## Properties file for xttdriver.pl
## See documentation below and My Oracle Support Note 1389592.1 for details.
## Tablespaces to transport
## Specify tablespace names in CAPITAL letters.
tablespaces=MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,MYDB_INDEX,MYTS,USERS
##tablespaces=MYCOMPTTS
## Source database platform ID
## platformid
## Source database platform id, obtained from V$DATABASE.PLATFORM_ID
platformid=2
## srclink
## Database link in the destination database that refers to the source
## database. Datafiles will be transferred over this database link using
## dbms_file_transfer.
srclink=TTSLINK
## Location where datafile copies are created during the "-p prepare" step.
## This location must have sufficient free space to hold copies of all
## datafiles being transported.
dfcopydir=/mycomp_mig_db_2_linux/mybox/rman
## backupformat
## Location where incremental backups are created.
backupformat=/mycomp_mig_db_2_linux/mybox/rman
## Destination system file locations
## stageondest
## Location where datafile copies are placed by the user when they are
## transferred manually from the source system. This location must have
## sufficient free space to hold copies of all datafiles being transported.
stageondest=/mycomp_mig_db_2_linux/mybox/rman
## storageondest
## This parameter is used only when Prepare phase method is RMAN backup.
## Location where the converted datafile copies will be written during the
## "-c conversion of datafiles" step. This is the final location of the
## datafiles where they will be used by the destination database.
storageondest=+MYDBP_DATA01/mydbp/datafile
## backupondest
## Location where converted incremental backups on the destination system
## will be written during the "-r roll forward datafiles" step.
## NOTE: If this is set to an ASM location then define properties
## asm_home and asm_sid below. If this is set to a file system
## location, then comment out asm_home and asm_sid below
backupondest=+MYDBP_DATA01/mydbp/datafile
## asm_home, asm_sid
## Grid home and SID for the ASM instance that runs on the destination
asm_home=/opt/crs/product/11204/crs
asm_sid=+ASM1
## Parallel parameters
parallel=8
## rollparallel
## Defines the level of parallelism for the -r roll forward operation.
## If undefined, default value is 0 (serial roll forward).
rollparallel=2
## getfileparallel
## Defines the level of parallelism for the -G operation
getfileparallel=4
## desttmpdir
## This should be defined to same directory as TMPDIR for getting the
## temporary files. The incremental backups will be copied to directory pointed
## by stageondest parameter.
desttmpdir=/mycomp_mig_db_2_linux/MYDBP/scripts
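Two of the settings above (platformid and srclink) can be verified and prepared with a little SQL. The sketch below assumes SQL*Plus access on both sides; the link name TTSLINK matches this migration, while the user, password, and TNS alias in the CREATE DATABASE LINK are placeholders you must replace:

```sql
-- On the source database: confirm the value to use for platformid=
SELECT platform_id, platform_name FROM v$database;

-- Cross-check the endian formats of source and destination platforms
SELECT platform_id, platform_name, endian_format
  FROM v$transportable_platform
 WHERE platform_name IN ('Solaris[tm] OE (64-bit)', 'Linux x86 64-bit');

-- On the destination database: create the link referenced by srclink=
-- (system / "password" / SOURCEDB are placeholders)
CREATE DATABASE LINK ttslink
  CONNECT TO system IDENTIFIED BY "password"
  USING 'SOURCEDB';

-- Verify that the link works before running the Perl scripts
SELECT name FROM v$database@ttslink;
```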
Below you will find the steps performed, with comments. The step prefixes are:
- I for Initial steps / activities
- P for Preparation
- R for Roll Forward activities
- T for Transport activities
The Server indication shows where each action needs to be done.
I1.3 (Source): Identify the tablespace(s) in the source database that will be transported (the application owner needs to assist with schema owner information):
tablespaces=MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,MYDB_INDEX,MYTS,USERS

I1.5 (Source + Target): In my case the project offered an NFS filesystem which I could use: /mycomp_mig_db_2_linux

I1.6 (Source): Together with the MOS note came a zip file: unzip rman-xttconvert.zip.

I1.7 (Source): Tailor the extracted xtt.properties file on the source system to match your environment.

I1.8 (Target): As the Oracle software owner, copy all xttconvert scripts and the modified xtt.properties file to the destination system. (This was not needed in our case since we used the shared NFS filesystem.)
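Before going further it is worth confirming that the chosen tablespace set is self-contained. A minimal check on the source database, using the tablespace list from step I1.3 (DBMS_TTS is a standard package; incl_constraints => TRUE also checks referential integrity constraints):

```sql
-- Flag any objects that reference segments outside the tablespace set
BEGIN
  DBMS_TTS.TRANSPORT_SET_CHECK(
    ts_list          => 'MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,MYDB_INDEX,MYTS,USERS',
    incl_constraints => TRUE);
END;
/

-- Any rows returned here are violations that must be resolved first
SELECT * FROM transport_set_violations;
```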
P1.9 (Source + Target): On both environments set up:
export TMPDIR=/mycomp_mig_db_2_linux/MYDBP/scripts

P2B.1 (Source): perl xttdriver.pl -p
Note: do not use $ORACLE_HOME/perl/bin/perl; this did not work in our case.

P2B.2 (Source): Copy files to the destination. N/A since we use NFS.

P2B.3 (Target): On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the rmanconvert.cmd file created in step 2B.1 from the source system and run the convert datafiles step:
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/rmanconvert.cmd /home/oracle/xtt   (N/A since we use NFS)
perl xttdriver.pl -c
R3.1 (Source): On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step:
perl xttdriver.pl -i

R3.3 (Target): Since we are using the NFS shared filesystem there is no need to copy files with scp between source and target; run the roll forward step:
perl xttdriver.pl -r

R3.4 (Source): On the source system, with the environment pointing to the source database, run the determine new FROM_SCN step:
perl xttdriver.pl -s

R3.5 (Source):
1. If you need to bring the files at the destination database closer in sync with the production system, repeat the Roll Forward phase, starting with step 3.1.
2. If the files at the destination database are as close as desired to the source database, proceed to the Transport phase.
T4.0 (Source): As found in MOS Note 554832.1 (Alter Tablespace Read Only Hanging When There Are Active TX In Any Tablespace): a restart of the database may be required to ensure there are no active transactions; alternatively, perform this step during off hours. During a first test with one dedicated tablespace containing only one object, the ALTER TABLESPACE ... READ ONLY took more than 7 hours: Oracle waits for ALL active transactions, not only the ones that would impact the objects in the tablespace being transported.
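To see up front whether active transactions would make the READ ONLY statements hang, a quick check on the source database can help. This is a sketch using the standard v$ views (the taddr/addr join is the usual way to map transactions to sessions):

```sql
-- Any rows here represent active transactions that
-- ALTER TABLESPACE ... READ ONLY will wait for,
-- regardless of which tablespace the transaction touches
SELECT s.sid, s.serial#, s.username, t.start_time
  FROM v$transaction t
  JOIN v$session s ON s.taddr = t.addr;
```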
T4.1 (Source): On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, make the tablespaces being transported READ ONLY:
alter tablespace MYDB_DATA read only;
alter tablespace MYDB_EUC_DATA read only;
alter tablespace MYDB_EUC_INDEX read only;
alter tablespace MYDB_INDEX read only;
alter tablespace MYTS read only;
alter tablespace USERS read only;
T4.2 (Source): Repeat steps 3.1 through 3.3 one last time to create, transfer, convert, and apply the final incremental backup to the destination datafiles:
perl xttdriver.pl -i

T4.2 (Target):
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt
perl xttdriver.pl -r
T4.3 (Target): On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the generate Data Pump TTS command step:
perl xttdriver.pl -e
This step creates a sample Data Pump network_link transportable import command in the file xttplugin.txt. It lists all the tablespaces you have configured and all their transport_datafiles in detail.
Example of that generated file (cat xttplugin.txt):
impdp directory=MYDB_XTT_DIR logfile=tts_imp.log \
network_link=TTSLINK.PROD.NL transport_full_check=no \
transport_tablespaces=MYCOMPTTS,A,B,C \
transport_datafiles='+MYDBP_DATA01/mycomptts_152.xtf'
Note: in our case, once edited, we chmodded xttplugin.txt to 744 and ran it as a script.
T4.3 (Source): After the object metadata being transported has been extracted from the source database, the tablespaces in the source database may be made READ WRITE again, if desired.

T4.4 (Target): At this step, the transported data is READ ONLY in the destination database. Perform application-specific validation to verify the transported data. Also run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE:
In RMAN:
validate tablespace MYDB_DATA, MYDB_EUC_DATA, MYDB_EUC_INDEX, MYDB_INDEX, MYTS, USERS check logical;
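Alongside the RMAN validation, a quick dictionary check can confirm that the tablespaces were plugged in and are still READ ONLY at this point. A sketch using this migration's tablespace names:

```sql
-- Expect STATUS = 'READ ONLY' and PLUGGED_IN = 'YES' for transported tablespaces
SELECT tablespace_name, status, plugged_in
  FROM dba_tablespaces
 WHERE tablespace_name IN
       ('MYDB_DATA','MYDB_EUC_DATA','MYDB_EUC_INDEX','MYDB_INDEX','MYTS','USERS');
```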
T4.5 (Target): Make the transported tablespaces READ WRITE in the destination database:
alter tablespace MYDB_DATA read write;
alter tablespace MYDB_EUC_DATA read write;
alter tablespace MYDB_EUC_INDEX read write;
alter tablespace MYDB_INDEX read write;
alter tablespace MYTS read write;
alter tablespace USERS read write;
T5 (Source + Target): Clean up the NFS filesystem. Put the source DB in restricted mode as a fallback for a couple of days after go-live, then back it up to tape and decommission it.
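Putting the source database in restricted mode, so that only DBA sessions (users with the RESTRICTED SESSION privilege) can connect, can be done as sketched below; both the online variant and the restart variant are standard commands:

```sql
-- On a running instance: block new connections from regular users
ALTER SYSTEM ENABLE RESTRICTED SESSION;

-- Or via a restart:
SHUTDOWN IMMEDIATE;
STARTUP RESTRICT;
```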