Upgrade to 12C GridInfra lessons learned

Introduction:

Not sure if it came from a wise DBA or just from a fortune cookie (it might even have been a Pink Panther movie), but the saying goes: always expect the unexpected, and as an add-on, success just loves preparation.

This week one of my tasks was to upgrade a 4-node Oracle RAC cluster from 11.2.0.4 to 12c (12.1.0.2.0) Grid Infrastructure. Even though I came well prepared (see my detailed blog on the preparations: https://mathijsbruggink.com/2017/05/01/upgrading-11g-gridinfra-to-12c-in-linux/), several small surprises occurred which will serve as lessons learned for upcoming upgrades of the Grid Infrastructure. I would also like to offer a timeline showing how long the upgrade process really took.

Lessons learned:

  • During the preparations I needed to order extra disks for ASM storage for the Grid Infrastructure Management Repository (GIMR). When I started the runInstaller as a first check that all was well prepared, I noticed that the installer software is most likely looking for a diskgroup called +OCR or +VOTING. This could be a trap if you had not extended one of those (but instead a +GRID diskgroup). So when preparing, look for either OCR or VOTING (best both, if present) to add extra disks to (and keep some disks spare).
  • At the start of the maintenance window the Linux colleague mentioned that he would have to stop the Hyperion services. This activity took some 45 minutes of the change window. I will have to find out whether this was a justified claim to stop those services, and I will need to add an extra step to the pre-checks to find out about other services and daemons running on the cluster that might be impacted when doing an upgrade.
  • Purpose of rootupgrade.sh: after the installation part via the runInstaller completes and as the upgrade part via the runInstaller commences, rootupgrade.sh performs the actual ASM upgrade and configures the OLR (local registry), amongst other things. A minimal sketch of the manual run order follows below.
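To make that step a bit more tangible, here is a minimal sketch of the manual rootupgrade.sh run order I used (run as root; the 12c grid home path and the node names are the ones used in this post and will differ in your environment):

## as root on the first node (MYSRVR09hr):
/app/grid/product/12102/grid/rootupgrade.sh
## once the first node completes, in parallel on the intermediate nodes (MYSRVR10hr, MYSRVR11hr):
/app/grid/product/12102/grid/rootupgrade.sh
## and only after those have finished, on the last node (MYSRVR12hr):
/app/grid/product/12102/grid/rootupgrade.sh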

Timelines:

Every change on a test or production environment has to come with a plan for the estimated time the change will take. First and most important, of course, is choosing the strategy: will a rolling window be used (thus minimizing impact, since at least one node will stay up)? Think of a kind of batch approach where the first batch holds the first node, the second batch holds nodes 2 and 3 in my 4-node example, and the last batch holds the last node.

Start of change window: 20:00 CET (18:00 UTC).
According to the Linux expert the Hyperion services needed to be stopped before we could continue.

Start of installation: 20:45 CET.
Started the runInstaller on the first node. The software was deployed to the first node and to all other nodes in the cluster (4-node RAC).

Upgrade part of the existing 11.2.0.4 Grid Infrastructure:
21:30 – 21:58: on the first node (MYSRVR09hr) the rootupgrade.sh was started. (I used the manual upgrade, still a bit unwilling to leave it all to the automated option; this means setting up a root session on the first node and running: /app/grid/product/12102/grid/rootupgrade.sh).

In the runInstaller it was offered to automate and to run the rootupgrade.sh in parallel on nodes 2 and 3. It felt better to me to open a separate root terminal session in parallel and run the script on each server myself.

22:06 – 22:13 on MYSRVR10hr: /app/grid/product/12102/grid/rootupgrade.sh
22:06 – 22:24 on MYSRVR11hr: /app/grid/product/12102/grid/rootupgrade.sh

On the last node MYSRVR12hr:
22:28 – 22:48: /app/grid/product/12102/grid/rootupgrade.sh

After that the installation continued with the Grid Infrastructure Management Repository (GIMR) database, and once that completed I ran a number of sanity checks in the cluster (a sketch of those checks follows below):

22:50 – 23:55
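The sanity checks were roughly along these lines (a sketch, run as the grid owner from the new 12c home; resource and database names will of course differ per cluster):

## cluster health, versions and resource state
/app/grid/product/12102/grid/bin/crsctl check cluster -all
/app/grid/product/12102/grid/bin/crsctl query crs activeversion
/app/grid/product/12102/grid/bin/crsctl status resource -t
## databases and ASM as seen by the clusterware
/app/grid/product/12102/grid/bin/srvctl config database
/app/grid/product/12102/grid/bin/srvctl status asm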

At 23:59:59 Reported mission completed.

Happy reading and till next time,

Mathijs

 

 

Upgrading 11G GridInfra to 12C in Linux

Introduction:

With spring 2017 around, new initiatives are being developed. As a preparation for doing database upgrades to 12c, it is a mandatory step to upgrade the clusterware (Grid Infrastructure) first, before doing the database part. So in this case: very happy me that the time has finally come that one of the customers requests an upgrade of a number of clusters to 12c Grid Infrastructure. In this document I will share my thoughts and my plan to tackle this interesting puzzle. Since the first cluster upgrade will happen pretty soon (this week), the document might evolve with the lessons learned from that first upgrade. Happy reading in advance.

Preparations:

It could be the text of a fortune cookie, but every success just loves preparation, and in this case that is no different. The first thing to do was to identify the scope of clusters that had to be upgraded. Together with the customer an inventory list was created, and in the end 10 clusters were defined as in scope for this action: 8 test clusters and 2 production environments. An interesting detail is that all clusters have been patched fairly recently and all hold 11.2.0.4 Grid Infrastructure, with the extra challenge that the underlying operating system comes in two flavors: Red Hat Linux Server release 5.11 (Tikanga) and 6.5 (Santiago). I am curious in advance to see whether these different Red Hat versions will influence the steps to be performed. Below you will find more details on the preparations and the actions of the upgrade.

Operating System:

One of the first steps to investigate is of course whether the operating system versions at hand are supported for the upgrade. Oracle Support confirmed that, even though it would be recommended to upgrade the 5.11 Red Hat version to Red Hat 7 first, the upgrade should work on the 5.11 version at hand. The 6.5 OS version was fine anyhow. The project decided, however, that an OS upgrade of the 5.11 boxes would delay things, so upgrading the OS will be done in a separate project.

Storage:

Before even considering running the upgrade of the Grid Infrastructure, some extra time needs to be spent investigating the storage in place in the cluster for such an upgrade. Often the Oracle software is set up locally on each box on volume group VG0, but with the out-of-place installation these days that can become a challenge if there is not enough local storage left on the box; due to standards those root disks are nearly untouchable. For my project the storage requirement below has been defined as an absolute minimum, which means there will be a need for extra local storage per node, or even SAN storage per node, presented to me as the required mount points. If such storage is not (or no longer) present locally, I have to request and receive additional storage for it.

/app/grid         50 GB
/app/oracle       70 GB
/var/opt/oracle   32 MB
/tmp              1 GB
SAN               4 LVM mount points for the databases (see below)

A short explanation of this:

/app/grid: where the 12c Grid Infra software will be installed.
/app/oracle: for the 12c database software.
/var/opt/oracle and /tmp: required minimum space.
SAN 4 LVM dbs: will be set up as 4 GB mount points, one for each instance on the local node, in order to hold logfiles.

When migrating to 12c coming from 11g, please be aware that you might need extra storage in your OCR/VOTING diskgroup due to a new feature as well. A new repository database has to be implemented during the upgrade: the Grid Infrastructure Management Repository (GIMR) database, which has become mandatory in Oracle GI 12.1.0.2. Its data files will be created in the same diskgroup as OCR or voting. (Average growth is approximately 750 MB per day per node, so a 4-node cluster at the default retention of 3 days leads to roughly 9 GB of storage required in the OCR or VOTING diskgroup.) Fortunately the retention can be changed. In my case this means that more ASM disks will need to be added to that specific diskgroup, since at work most OCR and VOTING diskgroups are set up as a bare minimum (normal redundancy with three disks of about 4 GB each). (Extra info on this topic: https://blogs.oracle.com/UPGRADE/entry/grid_infrastructure_management_repository_gimr)
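To get a quick idea of whether the diskgroup holding OCR/voting can absorb the GIMR, a simple check of total and free space per diskgroup in ASM helps. A minimal sketch (run as the grid owner with the environment pointing to the ASM instance; diskgroup names will differ per site):

## as sysasm against the ASM instance
sqlplus -S / as sysasm <<'EOF'
set lines 200
col name for a20
select name, state, type, total_mb, free_mb, usable_file_mb
from   v$asm_diskgroup
order  by name;
EOF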

Detailed preparations and health checks.

One of the quotes in IT is that you should not touch a well-running system. Well, in this case I would like to add: but if you do, come well prepared. I have put the focus on the three tools below to prove that the current system is in good shape to run the upgrade, which can also be regarded as a health check of the environment. These preparations are based on the MOS note (1579762.1) and on reading Chapter 13 in the great book “Expert Oracle RAC 12c” by Syed Jaffar Hussain, Tariq Farooq, Riyaj Shamsudeen and Kai Yu (ISBN-13 (electronic): 978-1-4302-5045-6).

  • Opatch
  • RACcheck: Orachk
  • Runcluvfy

Opatch

Use opatch to make sure that the oraInventory is in good shape on all nodes in the cluster. The command issued investigates the current Grid Infrastructure home:

opatch lsinventory -oh /opt/crs/product/11204/crs -detail

-oh points to the specific ORACLE_HOME.

-detail shows all details.
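Since the inventory needs to be consistent on every node, I tend to run the same check on all of them and keep the output per node. A minimal sketch (node names are examples from this post; user equivalence for the oracle user and opatch living in the OPatch directory of the existing grid home are assumptions):

for NODE in mysrvr23hr mysrvr24hr
do
  echo "### checking ${NODE}"
  ssh ${NODE} /opt/crs/product/11204/crs/OPatch/opatch lsinventory \
      -oh /opt/crs/product/11204/crs -detail > opatch_${NODE}.lst 2>&1
done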

RACcheck: Orachk

I looked on Metalink, downloaded the tool and installed it on the cluster nodes.

orachk Version
12.2.0.1.2_20161215

Following Quick start guide for this tool:

http://docs.oracle.com/cd/E68491_01/OEXUG/quick-start-guide.htm#OEXUG-GUID-CB4224DA-F389-4E9C-AB6A-C57F46A80C61

Clear information can be found in MOS:

ORAchk Upgrade Readiness Assessment (Doc ID 1457357.1)

With the tool downloaded, the steps below were performed:

According to the documentation the tool needs to be copied, unpacked (and installed) in the suptools subdirectory of the cluster software installation.

scp orachk.zip oracle@mysrvr23hr:/opt/crs/product/11204/crs/suptools
scp orachk.zip oracle@mysrvr24hr:/opt/crs/product/11204/crs/suptools

Once unzipped, the tool can run in two modes: a pre-upgrade mode and a post-upgrade mode:

./orachk -u -o pre |tee Orachk_pre_20170124.log
./orachk -u -o post |tee Orachk_post_20170124.log

Note: the tee command will also create a log file holding all the steps and progress information during run time.
Note: /opt/oracle/.orachk should be empty before the start, otherwise you will get a message like: ‘Another instance of orachk is running on: …’.
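A small pre-flight check along those lines can save a failed run; a sketch (the .orachk location is the one mentioned above, adjust if yours differs):

## refuse to start when a previous orachk run left its working directory behind
if [ -d /opt/oracle/.orachk ] && [ -n "$(ls -A /opt/oracle/.orachk 2>/dev/null)" ]
then
  echo "/opt/oracle/.orachk is not empty - clean it up before starting orachk"
else
  ./orachk -u -o pre | tee Orachk_pre_$(date +%Y%m%d).log
fi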

Runcluvfy

Working with runcluvfy is like meeting an old friend again. Yet each time it is a bit of a struggle to find the optimal syntax and parameters for your setup.

#Wrong setup was
./runcluvfy.sh stage -pre crsinst -upgrade -n mysrvr23hr,mysrvr24hr -rolling -fixup -src_crshome /opt/crs/product/11204/crs -dest_home /app/grid/product/12102/grid -dest_version 12.1.0 -verbose
## working version
./runcluvfy.sh stage -pre crsinst -n mysrvr23hr,mysrvr24hr -verbose|tee runcluvfy_20170130_pre.lst
Or
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /opt/crs/product/11204/crs -dest_crshome /app/grid/product/12102/grid -dest_version 12.1.0.2.0 -verbose|tee runcluvfy_20170130_preUpgrade.lst

Upgrade steps:

Now, with the confidence built during the preparation, it is time to plan and set up the upgrade steps. Multiple approaches are possible for the upgrade, but my goal is plain and simple: minimum impact on the cluster and on the databases hosted on it, so I will be aiming for this scenario: rolling upgrade of ASM + clusterware. A baseline for this is the URL below:

https://docs.oracle.com/database/121/CWLIN/procstop.htm#CWLIN10001

Working according to company standards requires the following specific settings: an $ORACLE_BASE, an $ORACLE_HOME for the GI installation and a different $ORACLE_HOME for the database software.

oracle@mysrvrhr:/home/oracle [CRS]# echo $ORACLE_BASE
/app/oracle
oracle@mysrvrhr:/home/oracle [CRS]# echo $ORACLE_HOME
/app/grid/product/12102/grid

oracle@mysrvrhr:/home/oracle [MYDB1]# echo $ORACLE_HOME
/app/oracle/product/12102/db

The bullets below go through the steps, with comments where needed.

  • Due to the Grid Infrastructure Management Repository (GIMR) database I had to add larger disks to the VOTING diskgroup to have enough storage in place (the steps on how to add the new disks and drop the old ones are too detailed for this blog; after all it is a blog and not a book 🙂 so I will cover that in a separate post).
  • Check /tmp, because the upgrade requires at least 1 GB free in /tmp. Either clean up or have /tmp extended (use the ls -lSh command to spot the largest files).
  • Check OCR integrity with:
cluvfy comp ocr -n all -verbose
  • Check the backups of the OCR and voting disk in the cluster:
    ocrconfig -showbackup

Note: this command can be performed as the oracle user and shows information similar to the output below. An interesting aspect here was that I issued the command on the first node, but the automated backups are all on node 11hr.

oracle@mysrvr09hr:/opt/oracle [CRS]# ocrconfig -showbackup
mysrvr11hr 2017/04/21 05:20:36 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup00.ocr
mysrvr11hr 2017/04/21 01:20:29 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup01.ocr
mysrvr11hr 2017/04/20 21:20:07 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup02.ocr
mysrvr11hr 2017/04/20 01:19:42 /opt/crs/product/11204/crs/cdata/mysrvr03cl/day.ocr
mysrvr11hr 2017/04/12 17:16:11 /opt/crs/product/11204/crs/cdata/mysrvr03cl/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
  • As the root user, run a manual backup of the OCR information. Run the ocrconfig -manualbackup command on a node where the Oracle Clusterware stack is up and running to force Oracle Clusterware to perform a backup of the OCR at any time, rather than wait for the automatic backup. Note: the -manualbackup option is especially useful when you want to obtain a binary backup on demand, such as before you make changes to the OCR. The OLR only supports manual backups. NOTE: in 11gR2 the voting files are backed up automatically as part of the OCR. Oracle recommends NOT using the dd command to back up or restore voting disks, as this can lead to loss of the voting disk.
mysrvr09hr:root:/root # cd /opt/crs/product/11204/crs/bin/
mysrvr09hr:root:/opt/crs/product/11204/crs/bin # ./ocrconfig -manualbackup
mysrvr11hr 2017/04/21 09:12:40 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup_20170421_091240.ocr

## Checking a second time now also shows the manual backup in place:
mysrvr09hr:root:/opt/crs/product/11204/crs/bin # ./ocrconfig -showbackup
mysrvr11hr 2017/04/21 05:20:36 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup00.ocr
mysrvr11hr 2017/04/21 01:20:29 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup01.ocr
mysrvr11hr 2017/04/20 21:20:07 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup02.ocr
mysrvr11hr 2017/04/20 01:19:42 /opt/crs/product/11204/crs/cdata/mysrvr03cl/day.ocr
mysrvr11hr 2017/04/12 17:16:11 /opt/crs/product/11204/crs/cdata/mysrvr03cl/week.ocr
mysrvr11hr 2017/04/21 09:12:40 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup_20170421_091240.ocr

The last line now shows the manual backup
(it has the format backup_yyyymmdd_hhmmss.ocr).
  • Check the location of the OCR and voting disk (they need to be in a diskgroup):
##How:
cat /etc/oracle/ocr.loc
## Shows output similar to this
## (if the OCR is already mirrored in another diskgroup with normal redundancy)
#Device/file getting replaced by device +OCR
ocrconfig_loc=+VOTE
ocrmirrorconfig_loc=+OCR

 

 

##How: 
crsctl query css votedisk

## Will show 3 voting disks in diskgroup VOTE due to normal redundancy (and 3 disks)
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 36b26f862b9a4f54bfba3096e3d50afa (/dev/mapper/asm-vote01) [VOTE]
 2. ONLINE 9d45d791c1124febbf0a093d5a185c13 (/dev/mapper/asm-vote02) [VOTE]
 3. ONLINE 1b7e510a302e4f03bfdea942d55d7067 (/dev/mapper/asm-vote03) [VOTE]
Located 3 voting disk(s).
## check in ASM:
select a.name           dg_name,
       a.group_number   dg_number,
       a.state          dg_state,
       b.disk_number    d_number,
       b.name           d_name,
       b.mount_status   d_mount_status,
       b.header_status  d_header_status,
       b.mode_status    d_mode_status,
       b.state          d_state,
       b.failgroup      d_failgroup,
       b.path           d_path
from   v$asm_diskgroup a,
       v$asm_disk      b
where  a.group_number(+) = b.group_number
order  by 2, 4;
  • Unset environment Variables:
unset ORACLE_BASE 
unset ORACLE_HOME 
unset GI_HOME 
unset ORA_CRS_HOME 
unset TNS_ADMIN
unset ORACLE_SID
unset ORA_NLS10
  • Check active crs version and software version:
## using the current CRS home to document the current active version and software version
/opt/crs/product/11204/crs/bin/crsctl query crs activeversion
/opt/crs/product/11204/crs/bin/crsctl query crs softwareversion
  • Performing a Standard Upgrade from an Earlier Release
## Use the following procedure to upgrade the cluster from an earlier release:
Start the installer, and select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation.
On the node selection page, select all nodes.
Select installation options as prompted. 
Note: Oracle recommends that you configure root script automation,
so that the rootupgrade.sh script can be run automatically during the upgrade.
Run the root scripts, either automatically or manually:

Running root scripts automatically:
TIP: If you have configured root script automation,
then use the pause between batches to relocate services from the nodes running the previous release to the new release.
Comment Mathijs: I have not decided yet on this automation step.
In the documentation read as preparation for the upgrade you see the option to create multiple batches:
batch 1 the starting node,
batch 2 all but the last node,
batch 3 the last node.
I will use the automated way on one cluster and then the manual (old school) method mentioned below on another cluster.

Running root scripts manually:
If you have not configured root script automation, then when prompted, 
run the rootupgrade.sh script on each node in the cluster that you want to upgrade.

If you run root scripts manually, then run the script on the local node first. 
The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.
After the script completes successfully, you can run the script in parallel on all nodes except for one, which you select as the last node. 
When the script is run successfully on all the nodes except the last node, run the script on the last node.
After running the rootupgrade.sh script on the last node in the cluster, if you are upgrading from a release earlier than Oracle Grid Infrastructure 11g Release 2 (11.2.0.2)
and left the checkbox labeled ASMCA checked (which is the default), then the Oracle Automatic Storage Management Configuration Assistant (ASMCA) runs automatically,
and the Oracle Grid Infrastructure upgrade is complete.
If you unchecked the box during the interview stage of the upgrade, then ASMCA is not run automatically.

If an earlier release of Oracle Automatic Storage Management (Oracle ASM) is installed, then the installer starts ASMCA to upgrade Oracle ASM to 12c Release 1 (12.1). 
You can choose to upgrade Oracle ASM at this time, or upgrade it later.
Oracle recommends that you upgrade Oracle ASM at the same time that you upgrade Oracle Clusterware. 
Until Oracle ASM is upgraded, Oracle Databases that use Oracle ASM cannot be created and the Oracle ASM management tools in the Oracle Grid Infrastructure 12c Release 1 (12.1) home (for example, srvctl) do not work.

Note: 
Because the Oracle Grid Infrastructure home is in a different location than the former Oracle Clusterware and Oracle ASM homes, 
update any scripts or applications that use utilities, libraries, or other files that reside in the Oracle Clusterware and Oracle ASM homes.
  • Check active crs version and software version:
/opt/crs/product/11204/crs/bin/crsctl query crs activeversion
/opt/crs/product/11204/crs/bin/crsctl query crs softwareversion
  • Post upgrade checks:
ps -ef | grep d.bin should show the daemons started from the 12c home; a sketch of these checks follows below.
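As a sketch, this is roughly what I verify per node right after the upgrade (the new grid home is the one used in this post; adjust to your own):

## active and software version as seen by the new home
/app/grid/product/12102/grid/bin/crsctl query crs activeversion
/app/grid/product/12102/grid/bin/crsctl query crs softwareversion
## all clusterware daemons should now run from the 12c home
ps -ef | grep d.bin | grep -v grep
## overall resource state
/app/grid/product/12102/grid/bin/crsctl status resource -t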

Thoughts on Rollback:

Of course each migration will only be as good as its preparation. But still, your plan should at least hold the steps for a rollback in case you do not make it to a successfully completed task. Below you will find the steps in general terms.

On all remote nodes, use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade to stop the 12c Release 1 (12.1) stack.
On the local node use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade -lastnode
On any of the cluster member nodes where the rootupgrade.sh script has run successfully:

cd /u01/app/12.1.0/grid/oui/bin
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/u01/app/12.1.0/grid

On any of the cluster member nodes where the rootupgrade script has run successfully:
In the old ORACLE_HOME (the earlier Oracle Clusterware installation):
$ cd /opt/crs/product/11204/crs/oui/bin/
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/crs

Start the Oracle Clusterware stack manually:
On each node, start Oracle Clusterware from the earlier release Oracle Clusterware home:
/opt/crs/product/11204/crs/bin/crsctl start crs

As always thank you for taking an interest in my blog. Happy reading and till the next time.

Mathijs

 

 

Transport Tablespace as Mig to 11.2.0.4 with rman.

Introduction:

For one of the projects the question came in to investigate and set up an 11.2.0.4 Real Application Clusters database, with the extra challenge that the migration had to be done cross-platform from Oracle 10.2.0.3 on the Solaris platform to 11.2.0.4.0 on Linux. The application provider suggested investigating a backup/restore scenario with an upgrade on the new (Linux) server. Due to the fact that the source environment was 10.2.0.3 on Solaris and that we were heading towards a RAC cluster environment on Linux, that suggestion was the first to be sent to the dustbin.

A normal export/import was the second scenario that was explored. Of course this is a valid scenario, but given that the database was more than 1.x TB it was not exactly the most favorite way to bring the data across. Still, with scripting, multiple parfiles and/or moving the partitioned data across in waves it would be a fair plan B.

From reading up on the subject I had put my mind to the use of transportable tablespaces as the way forward for this challenging question.

Requirements:

As preparation for the job I requested an NFS filesystem mounted between the source server (MySunServer), holding the 10g database, and the target server (MyLinuxcluster). This NFS filesystem would hold the datapumps to be created, as well as the scripts and parfiles / config files, as suggested in MOS note 1389592.1. The NFS filesystem was read-writable from both servers. The Perl scripts that come with the note support the transport of the tablespaces, but also help with the conversion from big endian to little endian, and as a bonus in my case they do the copy into ASM.
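Since the endian conversion is the whole reason for the RMAN convert step, it is worth checking the platform IDs and endian formats up front. A minimal sketch (run in sqlplus on both source and target; the platform id found on the source is what later goes into the platformid setting of xtt.properties):

sqlplus -S / as sysdba <<'EOF'
-- endian format of every platform Oracle knows about
select platform_id, platform_name, endian_format
from   v$transportable_platform
order  by platform_id;
-- platform id and name of the current database
select platform_id, platform_name from v$database;
EOF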

Due to the layout of the database in the source environment, RMAN was chosen as the best way forward for this scenario.

As a preparation, an 11.2.0.4 RAC database was set up on the target cluster. This database only holds the normal tablespaces and a small temporary tablespace for the users. (In a TTS solution the names of the data tablespaces that come across to the new environment may not already exist there.) All data / application users have been pre-created in the new environment with a new default user tablespace.

Details & Comments

Configuration file for the Perl scripts:

This is a file that is part of the zip file from the MOS note. It needs to be set up to match your specific needs. I will only show the settings I have used, with their comments:

xtt.properties:
## Reduce Transportable Tablespace Downtime using Incremental Backups
## (Doc ID 1389592.1)

## Properties file for xttdriver.pl

## See documentation below and My Oracle Support Note 1389592.1 for details.
## Tablespaces to transport

## Specify tablespace names in CAPITAL letters.

tablespaces=MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,MYDB_INDEX,MYTS,USERS
##tablespaces=MYCOMPTTS
## Source database platform ID

## platformid

## Source database platform id, obtained from V$DATABASE.PLATFORM_ID

platformid=2

## srclink

## Database link in the destination database that refers to the source

## database. Datafiles will be transferred over this database link using
## dbms_file_transfer.
srclink=TTSLINK

## Location where datafile copies are created during the “-p prepare” step.

## This location must have sufficient free space to hold copies of all
## datafiles being transported.

dfcopydir=/mycomp_mig_db_2_linux/mybox/rman

## backupformat

## Location where incremental backups are created.

backupformat=/mycomp_mig_db_2_linux/mybox/rman

## Destination system file locations

## stageondest

## Location where datafile copies are placed by the user when they are

## transferred manually from the source system. This location must have
## sufficient free space to hold copies of all datafiles being transported.

stageondest=/mycomp_mig_db_2_linux/mybox/rman

# storageondest

## This parameter is used only when Prepare phase method is RMAN backup.

## Location where the converted datafile copies will be written during the

## "-c conversion of datafiles" step. This is the final location of the
## datafiles where they will be used by the destination database.
storageondest=+MYDBP_DATA01/mydbp/datafile
## backupondest

## Location where converted incremental backups on the destination system

## will be written during the "-r roll forward datafiles" step.

## NOTE: If this is set to an ASM location then define properties

##      asm_home and asm_sid below. If this is set to a file system
##       location, then comment out asm_home and asm_sid below
backupondest=+MYDBP_DATA01/mydbp/datafile

## asm_home, asm_sid

## Grid home and SID for the ASM instance that runs on the destination

asm_home=/opt/crs/product/11204/crs

asm_sid=+ASM1
## Parallel parameters

parallel=8

## rollparallel

## Defines the level of parallelism for the -r roll forward operation.

## If undefined, default value is 0 (serial roll forward).

rollparallel=2
## getfileparallel

## Defines the level of parallelism for the -G operation

getfileparallel=4

## desttmpdir

## This should be defined to same directory as TMPDIR for getting the

## temporary files. The incremental backups will be copied to directory pointed
## by stageondest parameter.
desttmpdir=/mycomp_mig_db_2_linux/MYDBP/scripts

 

Below, in table format, you will see the steps performed, with comments.

The step prefixes qualify as:

  • I for Initial steps – activities
  • P for Preparation
  • R for Roll Forward activities
  • T for Transport activities

Server column shows where the action needs to be done.

Step Server What needs to be done
I1.3 Source Identify the tablespace(s) in the source database that will be transported (the application owner needs to support this with schema owner information):
tablespaces=MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,

MYDB_INDEX,MYTS,USERS

I1.5 Source + Target In my case the project offered an NFS filesystem which I could use: /mycomp_mig_db_2_linux
I1.6 Source Together with the MOS note came this zip file: unzip rman-xttconvert.zip.
I1.7 Source Tailor the extracted xtt.properties file on the source system to match your environment.
I1.8 Target As the oracle software owner, copy all xttconvert scripts and the modified xtt.properties file to the destination system. This was not needed since we used the NFS filesystem.
P1.9 Source + Target On both environments set up this:

export TMPDIR=/mycomp_mig_db_2_linux/MYDBP/scripts

P2B.1 Source perl xttdriver.pl -p
Note: do not use $ORACLE_HOME/perl/bin/perl; this did not work.
P2B.2 Source Copy files to destination. N/A since we use NFS
P2B3 Target On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the rmanconvert.cmd file created in step 2B.1 from the source system and run the convert datafiles step as follows:
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/rmanconvert.cmd /home/oracle/xtt N/A since we use NFS.
perl xttdriver.pl -c
R3.1 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step as follows:
perl xttdriver.pl -i
R3.3 Target [oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt
Since we are using the NFS shared filesystem there is no need to copy with scp between source and target.
perl xttdriver.pl -r
R3.4 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the determine new FROM_SCN step as follows:
perl xttdriver.pl -s
R3.5 Source 1.     If you need to bring the files at the destination database closer in sync with the production system, then repeat the Roll Forward phase, starting with step 3.1.
2.     If the files at the destination database are as close as desired to the source database, then proceed to the Transport phase.
T4.0 Source As found in the note: Alter Tablespace Read Only Hanging When There Are Active TX In Any Tablespace (Doc ID 554832.1). A restart of the database is required to have no active transactions; alternatively do this during off-hours. During a first test with one dedicated tablespace holding only one object it took more than 7 hrs: Oracle seems to look at and wait for ALL active transactions, not only the ones that would impact the object in the test tablespace I worked with.
T4.1 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, make the tablespaces being transported READ ONLY.
alter tablespace MYDB_DATA read only;
alter tablespace MYDB_EUC_DATA read only;
alter tablespace MYDB_EUC_INDEX read only;
alter tablespace MYDB_INDEX read only;
alter tablespace MYTS read only;
alter tablespace USERS read only;
T4.2 Source Repeat steps 3.1 through 3.3 one last time to create, transfer, convert, and apply the final incremental backup to the destination datafiles.
perl xttdriver.pl -i
T4.2 Target [oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt
perl xttdriver.pl -r
T4.3 Target On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the generate Data Pump TTS command step as follows:
perl xttdriver.pl -e
The generate Data Pump TTS command step creates a sample Data Pump network_link transportable import command in the file xttplugin.txt. It holds a list of all the tablespaces you have configured and all their transport_datafiles in detail.
Example of that generated file : cat xttplugin.txt
impdp directory=MYDB_XTT_DIR logfile=tts_imp.log \
network_link=TTSLINK.PROD.NL transport_full_check=no \
transport_tablespaces=MYCOMPTTS,A,B,C \
transport_datafiles='+MYDBP_DATA01/mycomptts_152.xtf'
Note: in our example, once edited we chmodded xttplugin.txt to 744 and ran it as a script.
T4.3 Source After the object metadata being transported has been extracted from the source database, the tablespaces in the source database may be made READ WRITE again, if desired.
T4.4 Target At this step, the transported data is READ ONLY in the destination database.  Perform application specific validation to verify the transported data.
Also, run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE as follows:
In rman:
validate tablespace MYDB_DATA, MYDB_EUC_DATA, MYDB_EUC_INDEX, MYDB_INDEX, MYTS, USERS check logical;
T4.5 Target alter tablespace MYDB_DATA read write;
alter tablespace MYDB_EUC_DATA read write;
alter tablespace MYDB_EUC_INDEX read write;
alter tablespace MYDB_INDEX read write;
alter tablespace MYTS read write;
alter tablespace USERS read write;
T5 Source + Target Cleanup of NFS filesystem.
Put the source DB in restricted mode as a fallback for a couple of days after go-live, then back it up to tape and decommission it.

Adding Vip Address and Goldengate to Grid Infra structure 11.2.0.3

Introduction:

Earlier this week preparations were started to add the GoldenGate software to the 11.2.0.3 Grid Infrastructure of the billing environment in production. As part of that scenario I also had to add a VIP address that is to be used by the GoldenGate software as part of high(er) availability. In my concept the GoldenGate daemons run on one node only by default. During a node crash (of course neither wanted nor desired), or as a way to balance work on the cluster, the VIP address and the GoldenGate software need to stop and restart on the other node. Below you will find a working example from the preparations I have performed, with some comments added to the specific steps.

Commands will be typed in italic in this blog.

Details

## The first step is adding the VIP address 10.97.242.40 (oradb-gg-localp1.dc-lasvegas.us) to the Grid Infra (GI). Note: the IP address and the name have been defined in DNS. Once I got feedback that the address was added I was able to perform an nslookup. Of course it was not yet possible to ping the IP, because we first have to add it to the cluster, as is done here.

## As root:
/opt/crs/product/11203/crs/bin/appvipcfg create -network=1 -ip=10.97.242.40 -vipname=oradb-gg-localp1.dc-lasvegas.us -user=root

## Once that is in place, grant permissions to the oracle user to work with the VIP address:
## (As root, allow the Oracle Grid Infrastructure software owner (e.g. oracle) to run the script to start the VIP.)

/opt/crs/product/11203/crs/bin/crsctl setperm resource oradb-gg-localp1.dc-lasvegas.us -u user:oracle:r-x

## Now it is time to start  the Vip:
## As Oracle, start the VIP:
/opt/crs/product/11203/crs/bin/crsctl start resource oradb-gg-localp1.dc-lasvegas.us

##Check our activities:
## As Oracle:

/opt/crs/product/11203/crs/bin/crsctl status resource oradb-gg-localp1.dc-lasvegas.us -p

## In my setup GoldenGate is defined to be able to run on either node one (usapb1hr) or node 2 (usapb2hr) of my four-node cluster. And since I want to make sure it only runs on those two servers, I set the hosting members and change the placement to restricted.
## As root:

/opt/crs/product/11203/crs/bin/crsctl modify resource oradb-gg-localp1.dc-lasvegas.us -attr "HOSTING_MEMBERS=usapb1hr usapb2hr"
/opt/crs/product/11203/crs/bin/crsctl modify resource oradb-gg-localp1.dc-lasvegas.us -attr "PLACEMENT=restricted"

## As always the taste of the creme brulee is in the details, so let's check:
## As Oracle:
/opt/crs/product/11203/crs/bin/crsctl status resource oradb-gg-localp1.dc-lasvegas.us -p

## Great, that worked. Now let's relocate the VIP to the other node as a test:
## As Oracle:

/opt/crs/product/11203/crs/bin/crsctl relocate resource oradb-gg-localp1.dc-lasvegas.us

## Completed the action with a smile, because it worked as planned.

## As always the taste of the creme brulee is in the details, so let's check:
## As Oracle:
/opt/crs/product/11203/crs/bin/crsctl status resource oradb-gg-localp1.dc-lasvegas.us -p

## As part of making sure that the setup from scratch was the same on all machines (we had the same solution in the pre-production environment), let us first remove the existing resource for GoldenGate and then add it to the GI again.

/opt/crs/product/11203/crs/bin/crsctl delete resource myGoldengate

## As oracle (the white paper was very specific about that; I performed it as root the first time and ended up with the wrong primary group in the ACL, which I checked in the end). So stick to the plan and do this as oracle. Add the resource to the GI, put it in a relationship to the VIP address created in the GI earlier, AND inform the cluster about the action script that is to be used during a relocate, a server boot or a node crash. (This script is in my case a shell script holding conditions like stop, start, status etc. and the corresponding GoldenGate commands to be used by the GI; a minimal sketch of such a script follows after the crsctl command below.)

/opt/crs/product/11203/crs/bin/crsctl add resource myGoldengate -type cluster_resource -attr "ACTION_SCRIPT=/opt/crs/product/11203/crs/crs/public/gg.local.active, CHECK_INTERVAL=30, START_DEPENDENCIES='hard(oradb-gg-localp1.dc-lasvegas.us) pullup(oradb-gg-localp1.dc-lasvegas.us)', STOP_DEPENDENCIES='hard(oradb-gg-localp1.dc-lasvegas.us)'"
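For reference, below is a minimal sketch of what such an action script (gg.local.active in my case) could look like. The GoldenGate home, the rdbms home and the grep pattern are assumptions for illustration only; the real script does more (checking extracts/replicats, logging, etc.):

#!/bin/sh
## minimal GI action script sketch for GoldenGate: start / stop / check / clean
GG_HOME=/opt/goldengate                                  ## assumed GoldenGate installation directory
export ORACLE_HOME=/opt/oracle/product/11203_ee_64/db    ## assumed rdbms home used by the GoldenGate binaries
export LD_LIBRARY_PATH=${ORACLE_HOME}/lib

case "$1" in
start)
  cd ${GG_HOME} && echo "start mgr" | ./ggsci
  exit 0
  ;;
stop|clean)
  cd ${GG_HOME} && echo "stop mgr!" | ./ggsci
  exit 0
  ;;
check)
  ## return 0 when the manager reports itself as running, 1 otherwise
  cd ${GG_HOME}
  if echo "info mgr" | ./ggsci | grep -qi "Manager is running"
  then
    exit 0
  else
    exit 1
  fi
  ;;
esac
exit 0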

## Altering hosting members and placement again (by default only one node is part of HOSTING_MEMBERS, and PLACEMENT=balanced).

## As root:
/opt/crs/product/11203/crs/bin/crsctl modify resource myGoldengate -attr "HOSTING_MEMBERS=usapb1hr usapb2hr"

/opt/crs/product/11203/crs/bin/crsctl modify resource myGoldengate -attr "PLACEMENT=restricted"

## so in the end you should check it with this:

/opt/crs/product/11203/crs/crs/public [CRS]# crsctl status resource myGoldengate -p

## Time to set the permissions on myGoldengate (altering ownership to the myGoldengate user, which is my OS user for this).
### As root:
/opt/crs/product/11203/crs/bin/crsctl setperm resource myGoldengate -o myGoldengate

## Needed and sometimes forgotten: make sure that the oracle user (who is also the owner of the Grid Infra software on these boxes) may run the action script.
### As root, allow oracle to run the script to start the goldengate application.
/opt/crs/product/11203/crs/bin/crsctl setperm resource myGoldengate -u user:oracle:r-x

 

Wrap-up:

All preparations are now in place. During an already scheduled maintenance window the following steps will be performed to bring this scenario to an HA solution for GoldenGate (a command sketch follows after the list).

  • Stop the GoldenGate software daemons (at the moment stopped and started by hand).
  • Start the GoldenGate resource via the Grid Infra (giving it control of status and activities).
  • Perform checks that GoldenGate is starting its activities.
  • Perform a relocate of the GoldenGate resource via the Grid Infra to the other node.
  • Perform checks that GoldenGate is starting its activities.
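A sketch of the commands planned for that window (resource name as used above; run as oracle):

## hand control of GoldenGate to the clusterware and verify
/opt/crs/product/11203/crs/bin/crsctl start resource myGoldengate
/opt/crs/product/11203/crs/bin/crsctl status resource myGoldengate
## test the failover path by relocating to the other hosting member and verify again
/opt/crs/product/11203/crs/bin/crsctl relocate resource myGoldengate
/opt/crs/product/11203/crs/bin/crsctl status resource myGoldengate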

As an old quote states: success just loves preparation. With these preparations in place I feel confident about the maintenance window in which this solution will go live.

 

As always, happy reading and till next time,

 

Mathijs

When PRCD-1027 and PRCD-1229 are spoiling your rainy day

Introduction:

More than one year ago I had set up an Oracle Restart environment with Grid Infra, ASM and databases all in 11.2.0.2.0, since that was the vendor's requirement at first. Once the server had been handed over to production I got the request that it should also host EMC-based clones, and those clones were 11.2.0.3.0. That meant I had to upgrade both the Grid Infrastructure and the database software, and of course the databases as well.

So I geared up, did an upgrade of the GI and the RDBMS software and of course of the local databases in place. After that the EMC clones were added and everything looked fine.

Until ……….

Error Messages after Server reboot:

Well, until the server got rebooted. After that server reboot, a first sign that things were wrong was that the databases did not start via the Grid Infrastructure, which was not expected!

So there I was again, ready to solve another puzzle, and of course people were waiting for the DBs to come online so they could work.

## First clue:

I checked the resource (the database) in the cluster with: crsctl status resource …. -p

Much to my surprise that showed the wrong Oracle home (it was 11.2.0.2.0, the initial Oracle home from before the upgrade). But I was so sure that I had upgraded the database... What did I miss? Even more strange was that the cluster agent kept altering my oratab entry for that specific database back to the old Oracle home (and it would almost stick out its tongue at me, adding # line has been added by agent).

## Second clue

When I altered the oratab to show the correct Oracle home I could start the database via sqlplus, which was indeed my second clue.

After a big face-palm it became clear to me that the clusterware did not hold the correct information about that Oracle home.

## Will srvctl modify do the job?

srvctl modify database -d mydb -o /opt/oracle/product/11203_ee_64/db

##output:

PRCD-1027 : Failed to retrieve database mydb

PRCD-1229 : An attempt to access configuration of database mydb was rejected because its version 11.2.0.2.0 differs from the program version 11.2.0.3.0. Instead run the program from /opt/oracle/product/11202_ee_64/db

Well, that was not expected, especially since the other clue was that the DB could be started as an 11.2.0.3 database when the Oracle environment was set properly in the oratab.

##Solution:

First I tried :

srvctl modify database -d mydb -o /opt/oracle/product/11203_ee_64/db

but that is wrong as we already saw in this post.

Hmm, then I thought of a German expression: once you start doing it the right way, things will start to work for you.

Plain and simple, this is what I had to do to make things right again:

srvctl upgrade database -d mydb -o /opt/oracle/product/11203_ee_64/db

After that I started mydb via the cluster, and she is happy now.
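A quick way to verify the result (a sketch, using the database name from this post):

## the clusterware should now report the 11.2.0.3 home for the database
srvctl config database -d mydb | grep -i "oracle home"
## and starting / checking it through the clusterware should work again
srvctl start database -d mydb
srvctl status database -d mydb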

## Bottom line (aka lesson learned).

If you upgrade your databases in an Oracle Restart / RAC cluster environment, make it part of your upgrade plan to also upgrade the information in the cluster layer for each specific database (srvctl upgrade database).

As always,

Happy Reading and till we meet again.

Mathijs

When your Restart / Rac Node screams : [ohasd(1749)]CRS-0004:logging terminated

Introduction:

During one of those Mondays when a hotline (ticket) service was provided, the team received a ticket that all instances of an Oracle Restart (11.2.0.3) environment were down. I assisted in the troubleshooting, which is always fun because it gives a bit of a Sherlock Holmes feeling, investigating the issue at hand. After logging on to the box I saw that the ASM instance was up, but a "crsctl check has" showed that no communication with the HAS daemon could be established. So the puzzle became a bit more interesting at that moment.

Details:

  • Environment is an Oracle restart environment on Linux.
  • 11.2.0.3 Grid Infra with 11.2.0.3 Databases running on it
  • Grid infra has been installed in: /opt/crs/product/11203/crs/
  • Environment went down during the weekend ( fortunately it was a Test environment  with a standard office time SLA)

It was time to take a peek into the alert log of the cluster on that node. Since my installation was done in /opt/crs/product/11203/crs/, I had to look for the log file in the subdirectory /opt/crs/product/11203/crs/log/mysrvr01/ (where mysrvr01 is the name of the cluster node I was working on) to see the alert file. In there I found this, showing what was going on:

2014-10-18 10:47:37.060:
[ohasd(1749)]CRS-0004:logging terminated for the process. log file: “/opt/crs/product/11203/crs/log/mysrvr01/ohasd/ohasd.log”
2014-10-18 10:47:37.128:
[ohasd(1749)]CRS-10000:CLSU-00100: Operating System function: mkdir failed with error data: 28
CLSU-00101: Operating System error message: No space left on device
CLSU-00103: error location: authprep6
CLSU-00104: additional error information: failed to make dir /opt/crs/product/11203/crs/auth/ohasd/mysrvr01/A4107671
2014-10-18 10:47:37.146:
[ohasd(1749)]CRS-10000:CLSU-00100: Operating System function: mkdir failed with error data: 28
CLSU-00101: Operating System error message: No space left on device
CLSU-00103: error location: authprep6
CLSU-00104: additional error information: failed to make dir /opt/crs/product/11203/crs/auth/ohasd/mysrvr01/A5139099

With this as a clue, dear Watson, it was a lot easier to proceed. I started hunting for large files in the filesystem holding /opt/crs/product/11203/crs/ and found a flood of XML files that had not been erased for ages. That made me realize that a cleanup job in cron would come in handy for this as well.
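As a sketch of such a cleanup job (the path below the GI home and the 14-day retention are assumptions for illustration; point it at wherever the XML reports pile up in your environment and test the find command without -delete first):

## crontab entry: every night at 01:00 remove XML reports older than 14 days
0 1 * * * /usr/bin/find /opt/crs/product/11203/crs/log -type f -name "*.xml" -mtime +14 -delete >/dev/null 2>&1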

After removing the XML files and checking the mount point again, I logged in as root and first issued a crsctl start has.

After some time I noticed in the alert log of the node:
2014-10-20 08:26:50.847:
[/opt/crs/product/11203/crs/bin/oraagent.bin(32461)]CRS-5818:Aborted command ‘check’ for resource ‘ora.DATA1.dg’. Details at (:CRSAGF00113:) {0:0:2} in /opt/crs/product/11203/crs/log/mysrvr01/agent/ohasd/oraagent_oracle/oraagent_oracle.log.
2014-10-20 08:26:50.847:
[/opt/crs/product/11203/crs/bin/oraagent.bin(32461)]CRS-5818:Aborted command ‘check’ for resource ‘ora.DUMMY_DATA.dg’. Details at (:CRSAGF00113:) {0:0:2} in /opt/crs/product/11203/crs/log/mysrvr01/agent/ohasd/oraagent_oracle/oraagent_oracle.log.
2014-10-20 08:26:50.847:
[/opt/crs/product/11203/crs/bin/oraagent.bin(32461)]CRS-5818:Aborted command ‘check’ for resource ‘ora.FRA1.dg’. Details at (:CRSAGF00113:) {0:0:2} in /opt/crs/product/11203/crs/log/mysrvr01/agent/ohasd/oraagent_oracle/oraagent_oracle.log.
2014-10-20 08:27:52.189:
[evmd(4105)]CRS-1401:EVMD started on node mysrvr01.
2014-10-20 08:27:53.685:
[cssd(4193)]CRS-1713:CSSD daemon is started in local-only mode
2014-10-20 08:27:55.581:
[ohasd(32421)]CRS-2767:Resource state recovery not attempted for ‘ora.diskmon’ as its target state is OFFLINE
2014-10-20 08:28:02.233:
[cssd(4193)]CRS-1601:CSSD Reconfiguration complete. Active nodes are mysrvr01 .

When I checked again 🙂 ah, good news: not only was the ASM instance running, but also all instances present on the box. Another puzzle solved.

Happy reading and till next Time ,

Mathijs

PRVF-4557 During installation of Oracle Binaries ( 11.2.0.3)

Introduction:

During an Oracle software install for a new RAC environment I was once again surprised by Oracle. I had installed Grid Infrastructure (GI) 11.2.0.3 on Red Hat Linux with success, and I was about to install the Oracle binaries, which should be a piece of cake with the installed GI up and running properly. According to the old quote "never stop learning (or being surprised)", during the preparations the runInstaller managed to give me a surprise and I was unable to continue with the installation. So obviously this surprise needed fixing first.

 

Details:

When running the runInstaller, after choosing the option to install the Oracle binaries for a Real Application Clusters environment and after a couple of screens, I received a pop-up telling me:

PRVF-4557 : Node application “ora.svrb1hr.vip” is offline on node

I was unable to go ahead and had to investigate. As always, Google and the Oracle Community were my brothers in Oracle arms for this, and I came across this scenario:

First I checked my hosts file to make sure the information was present on both nodes. That showed the following details, which looked OK:

oracle@svrb1hr:/opt/oracle [CRM]# grep  vip /etc/hosts 
10.97.242.32 svrb1hr-vip.myenv.dc-world.de svrb1hr-vip
10.97.242.33 svrb2hr-vip.myenv.dc-world.de svrb2hr-vip

The next step was checking my cluster resources. As mentioned before, the GI install had finished properly the day before, so this really made me wonder:

oracle@svrb1hr:/opt/oracle [CRM]# crsctl status resource -t 
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CLUSTERDATA.dg
               ONLINE  ONLINE       svrb1hr                                     
               ONLINE  ONLINE       svrb2hr                                     
ora.LISTENER.lsnr
               ONLINE  OFFLINE      svrb1hr                                     
               ONLINE  ONLINE       svrb2hr                                     
ora.asm
               ONLINE  ONLINE       svrb1hr                 Started             
               ONLINE  ONLINE       svrb2hr                 Started             
ora.gsd
               OFFLINE OFFLINE      svrb1hr                                     
               OFFLINE OFFLINE      svrb2hr                                     
ora.net1.network
               ONLINE  ONLINE       svrb1hr                                     
               ONLINE  ONLINE       svrb2hr                                     
ora.ons
               ONLINE  ONLINE       svrb1hr                                     
               ONLINE  ONLINE       svrb2hr                                     
ora.registry.acfs
               ONLINE  ONLINE       svrb1hr                                     
               ONLINE  ONLINE       svrb2hr                                     
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       svrb1hr                                     
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       svrb2hr                                     
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       svrb1hr                                     
ora.cvu
      1        ONLINE  ONLINE       svrb2hr                                     
ora.svrb1hr.vip
      1        ONLINE  INTERMEDIATE svrb2hr                 FAILED OVER         
ora.svrb2hr.vip
      1        ONLINE  ONLINE       svrb2hr                                     
ora.oc4j
      1        ONLINE  ONLINE       svrb1hr                                     
ora.scan1.vip
      1        ONLINE  ONLINE       svrb1hr                                     
ora.scan2.vip
      1        ONLINE  ONLINE       svrb2hr                                     
ora.scan3.vip
      1        ONLINE  ONLINE       svrb1hr                                     

Well, that was unexpected, because the expectation was that svrb1hr.vip should be running on the first node. I still don't understand what happened to cause this (and frankly, if you have suggestions as to what happened, please let me know). But I did know I needed to bring the VIP address back to the first server.

My first attempt was to issue the required command (crs_relocate) on the node where I was already working (node 1).

oracle@svrb1hr:/opt/oracle [CRM]# which crs_relocate
/opt/crs/product/11203/crs/bin/crs_relocate

oracle@svrb1hr:/opt/oracle [CRM]# crs_relocate svrb1hr.vip
CRS-0210: Could not find resource ‘svrb1hr.vip’.
### activities needed to be done from the second node

Grumbling about this, but at least it told me what to do next. (Looking back, the CRS-0210 most likely simply means the resource name was given without its ora. prefix; on the second node I used the full name ora.svrb1hr.vip.)

So I opened a session on the second node and made sure my ORACLE_HOME was pointing to the GI.

oracle@svrb1hr:/opt/oracle [CRM]# ssh svrb2hr

On the second box  the command crs_relocate was entered:

                     
oracle@svrb2hr:/opt/oracle [CRS]# crs_relocate ora.svrb1hr.vip
Attempting to stop `ora.svrb1hr.vip` on member `svrb2hr`
Stop of `ora.svrb1hr.vip` on member `svrb2hr` succeeded.
Attempting to start `ora.svrb1hr.vip` on member `svrb1hr`
Start of `ora.svrb1hr.vip` on member `svrb1hr` succeeded.
Attempting to start `ora.LISTENER.lsnr` on member `svrb1hr`
Start of `ora.LISTENER.lsnr` on member `svrb1hr` succeeded.

Well, that looked promising, so let's check one more time:

 

oracle@svrb2hr:/opt/oracle [CRS]#  crsctl status resource -t 
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CLUSTERDATA.dg
               ONLINE  ONLINE       svrb1hr                                     
               ONLINE  ONLINE       svrb2hr                                     
ora.LISTENER.lsnr
               ONLINE  ONLINE       svrb1hr                                     
               ONLINE  ONLINE       svrb2hr                                     
ora.asm
               ONLINE  ONLINE       svrb1hr                 Started             
               ONLINE  ONLINE       svrb2hr                 Started             
ora.gsd
               OFFLINE OFFLINE      svrb1hr                                     
               OFFLINE OFFLINE      svrb2hr                                     
ora.net1.network
               ONLINE  ONLINE       svrb1hr                                     
               ONLINE  ONLINE       svrb2hr                                     
ora.ons
               ONLINE  ONLINE       svrb1hr                                     
               ONLINE  ONLINE       svrb2hr                                     
ora.registry.acfs
               ONLINE  ONLINE       svrb1hr                                     
               ONLINE  ONLINE       svrb2hr                                     
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       svrb1hr                                     
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       svrb2hr                                     
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       svrb1hr                                     
ora.cvu
      1        ONLINE  ONLINE       svrb2hr                                     
ora.svrb1hr.vip
      1        ONLINE  ONLINE       svrb1hr                                     
ora.svrb2hr.vip
      1        ONLINE  ONLINE       svrb2hr                                     
ora.oc4j
      1        ONLINE  ONLINE       svrb1hr                                     
ora.scan1.vip
      1        ONLINE  ONLINE       svrb1hr                                     
ora.scan2.vip
      1        ONLINE  ONLINE       svrb2hr                                     
ora.scan3.vip
      1        ONLINE  ONLINE       svrb1hr

 

Much better! After this I restarted the runInstaller from the first node, and indeed the installation of the Oracle binaries went flawlessly.

 

As always, happy reading,

 

Mathijs