Migration to 11gR2 Clusterware (Moving from 11.1.0.7 CRS to 11.2.0.3.0 Grid Infrastructure)

Introduction:

Patching a Real Application Clusters environment is always a challenge. My assignment and goal here is to bring Oracle Clusterware 11.1.0.7.0 (without any PSU patches installed) to 11.2.0.3.0 Grid Infrastructure. This will be performed in two steps, in two maintenance windows, in order to minimize the chance of breaching the SLA of the maintenance windows.

Happy reading ,

Mathijs

Summary of Patches in Scope

1. 11.1.0.7.7 for CRS Patch Set Update (PSU April 2011 for CRS): patch 11724953, together with the appropriate OPatch version (11.1.0.8.2 or higher): patch 6880880.

   Not applicable for this migration, because the databases will not be upgraded (this is a CRS upgrade only): Oracle Database (includes Oracle Database and Oracle RAC), p10404530_112030_platform_1of7.zip and p10404530_112030_platform_2of7.zip.

2. Oracle Grid Infrastructure (includes Oracle ASM, Oracle Clusterware, and Oracle Restart): p10404530_112030_platform_3of7.zip.

3. PSU Jan 2012 (11.2.0.3.0) CRS: patch 13348650 (no time left for this during the first maintenance window).

4. PSU Jan 2012 (11.1.0.7.0) RDBMS: patch 13343461 (optional; will not be performed during this CRS upgrade).

Summary of approach

On the preproduction environment the following scenario was carried out as a so-called rolling upgrade. Each SINGLE node was patched in turn with the required patch for the 11.1.0.7.0 CRS environment (11724953), so that two nodes were up and running at all times. After I finished patching the first node successfully, the second one was patched, and so on. After this first step all three nodes were prepared for the Grid Infrastructure upgrade (10404530). That patch itself is rolling as well, which means that while one node is being patched all the other nodes remain available (so two, or n-1, instances stay up and running).

So the following steps were performed on a per-node basis:

  1. Shut down the old Clusterware.
  2. Install 11724953 on the existing Clusterware.
  3. Then install the 11.2.0.3.0 Grid Infrastructure (patch 2 in the summary above).

Step 1 Check that level backups and archive backups completed successfully

RMAN is in use, so check that level backups and archive backups are scheduled. Take a moment to verify that they have been running successfully over the last couple of days. A quick check is sketched below.
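Since RMAN records its jobs in the controlfile, this check can be scripted. A minimal sketch, assuming you can connect to one of the instances as sysdba (the three-day window is my own choice, not from any runbook):

# List RMAN backup jobs of the last three days and their status
sqlplus -s / as sysdba <<'EOF'
set lines 120 pages 100
col input_type format a15
col status format a25
select to_char(start_time,'YYYY-MM-DD HH24:MI') as started, input_type, status
from   v$rman_backup_job_details
where  start_time > sysdate - 3
order  by start_time;
EOF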

Step 2 Make a backup of the ASM metadata:

DATUM=`date +"%Y%m%d_%H%M%S"`

echo $DATUM

mkdir -p  /opt/oracle/${ORACLE_SID}/admin/backup

export BACKDIR=/opt/oracle/${ORACLE_SID}/admin/backup

cd $BACKDIR

SQL> select NAME from v$asm_diskgroup;

NAME
------------------------------
DATA1
FRA1

So, once per diskgroup:

export DG=DATA1    # then repeat with: export DG=FRA1

$ORACLE_HOME/bin/asmcmd md_backup -b ${BACKDIR}/ASM_${DG}_${DATUM}.bck -g $DG
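With only two diskgroups the exports above are quickly done by hand; with more, a loop saves typing. A minimal sketch (my own wrapper, not part of the original steps), assuming the environment points at the local ASM instance (e.g. ORACLE_SID=+ASM1) so sqlplus and asmcmd reach ASM:

# Back up the metadata of every diskgroup ASM knows about
for DG in $(sqlplus -s / as sysdba <<'EOF'
set heading off feedback off pages 0
select name from v$asm_diskgroup;
EOF
); do
  $ORACLE_HOME/bin/asmcmd md_backup -b ${BACKDIR}/ASM_${DG}_${DATUM}.bck -g $DG
done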

Step 3 Install the required patch (PSU) in the existing Clusterware

Patch 11724953 – 11.1.0.7.7 for CRS Patch Set Update

Released: April 19, 2011

1 Patch Information

PSU 11.1.0.7.7 for CRS contains the bug fixes listed in Section 5, “Bugs Fixed by This Patch”.

2 Patch Installation and De-installation

2.1 OPatch Utility Information

You must use the OPatch utility version 11.1.0.6.7 or later to apply this patch. Oracle recommends that you use the latest released OPatch 11.1, which is available for download from My Oracle Support patch 6880880 by selecting the 11.1.0.0.0 release.

## Mathijs: the right version is in place.

2.2 Additional Requirements

Your system configuration (Oracle Server version and patch level, and operating system version) must exactly match those in the bug database entry. Any one-off patches that were installed before applying PSU 11.1.0.7.7 for CRS will need to be removed; otherwise, they will be superseded by this patch.

2.3 Patch Installation and De-installation

This section contains instructions for the following:

2.3.1 Patch Pre-Installation Instructions

Before you install PSU 11.1.0.7.7 for CRS, perform the following actions to check the environment and to detect and resolve any one-off patch conflicts.
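For the conflict detection, the OPatch of that era ships a prereq check. A sketch, run from the directory where the patch was unzipped; treat the exact flags as an assumption to verify against your OPatch version:

# Check the CRS home for one-off patches conflicting with PSU 11724953
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./11724953 -oh /opt/crs/product/111_ee_64/crs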

2.3.1.1 Environment Checks
  1. Ensure that the $PATH definition includes the directory containing the opatch script.

To check the location of this directory, use the following command:

export OP=/opt/oracle/product/111_ee_64/db/OPatch/opatch 
$OP lsinventory
## Mathijs: this will show your inventory, the version of the OPatch tool, and the actual patches in place.
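Alternatively, put the OPatch directory itself on the PATH so a plain opatch works everywhere; a small sketch using the paths from this document:

export PATH=/opt/oracle/product/111_ee_64/db/OPatch:$PATH
which opatch        # should resolve to the OPatch directory above
opatch version      # should report 11.1.0.8.2 or higher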

Ensure that the following environment variables are correctly defined:

  • CRS_HOME = the full path to the CRS home
  • RDBMS_HOME = the full path to the Oracle Database server home

export CRS_HOME=/opt/crs/product/111_ee_64/crs

echo $CRS_HOME

If the owners of these homes are different, ensure that you perform the installation steps as the correct owner in the correct environment.

  1. Ensure that all instances running under the ORACLE_HOME being patched are cleanly shut down before installing this patch, and ensure that the tool used to terminate the instance or instances has exited cleanly.
  2. If the CRS home is shared, plan for a full cluster outage. The patch will update the shared copy of the binaries, and no daemons can be online while the binaries are modified.
  3. If the CRS home is not shared (that is, if each node of the cluster has its own CRS home), apply the patch as a rolling upgrade. Perform all the installation steps for each node, and do not patch two or more nodes at the same time.

## Mathijs: case 3 (CRS home not shared, rolling upgrade) applies to us here.

2.3.2 Patch Installation

To install the patch, follow these steps:

  1. Verify that the Oracle Inventory is properly configured by entering the following commands:

     a. As the Oracle Clusterware (CRS) software owner:

        $OP lsinventory -detail -oh /opt/crs/product/111_ee_64/crs

     b. As the RDBMS (Oracle Database) server owner:

        $OP lsinventory -detail -oh /opt/oracle/product/111_ee_64/db

     These commands should list the components for the list of nodes. If the Oracle inventory is not set up correctly, one or both of these commands will fail.

  2. Unzip the PSE container file:

     unzip p11724953_<sver>_<os>.zip
     cd 11724953

  3. If the CRS home is not shared, shut down the RDBMS and ASM instances, listeners, nodeapps, and CRS daemons on each local node, as follows.

     ### Mathijs NOTE: it has turned out to be fastest to simply stop the Clusterware, so that is the preferred way rather than following the detailed instructions below (you can proceed straight to step d).

     a. To shut down the RDBMS instances, enter the following commands:

        srvctl stop instance -d MYDBA -i MYDBA1
        srvctl stop instance -d MYDBB -i MYDBB1
        srvctl stop instance -d MYDBC -i MYDBC1

     b. To shut down the ASM instances, enter the following commands (one per node):

        srvctl stop asm -n MYSERVERd64r -i +ASM1
        srvctl stop asm -n MYSERVERd65r -i +ASM2
        srvctl stop asm -n MYSERVERd66r -i +ASM3

     c. To shut down the listeners, enter the following commands (shown for all three nodes):

        srvctl stop listener -n MYSERVERd64r -l listener_MYDBA1
        srvctl stop listener -n MYSERVERd64r -l listener_MYDBB1
        srvctl stop listener -n MYSERVERd64r -l listener_MYDBC1
        srvctl stop listener -n MYSERVERd64r -l listener_+ASM1

        $ORACLE_HOME/bin/srvctl stop listener -n MYSERVERd65r -l listener_MYDBA2
        $ORACLE_HOME/bin/srvctl stop listener -n MYSERVERd65r -l listener_MYDBB2
        $ORACLE_HOME/bin/srvctl stop listener -n MYSERVERd65r -l listener_MYDBC2
        $ORACLE_HOME/bin/srvctl stop listener -n MYSERVERd65r -l listener_+ASM2

        $ORACLE_HOME/bin/srvctl stop listener -n MYSERVERd66r -l listener_MYDBA3
        $ORACLE_HOME/bin/srvctl stop listener -n MYSERVERd66r -l listener_MYDBB3
        $ORACLE_HOME/bin/srvctl stop listener -n MYSERVERd66r -l listener_MYDBC3
        $ORACLE_HOME/bin/srvctl stop listener -n MYSERVERd66r -l listener_+ASM3

     There are NO nodeapps in this setup; for reference, the commands would be:

        $ORACLE_HOME/bin/srvctl stop nodeapps -n MYSERVERd64r
        $ORACLE_HOME/bin/srvctl stop nodeapps -n MYSERVERd65r
        $ORACLE_HOME/bin/srvctl stop nodeapps -n MYSERVERd66r

     d. To shut down the CRS daemons, enter the following command as root:

        cd /opt/crs/product/111_ee_64/crs/bin
        ./crsctl stop crs
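Before touching any binaries I like to double-check that the stack is really down on the node; a quick sketch (my own habit, not part of the README):

/opt/crs/product/111_ee_64/crs/bin/crsctl check crs   # daemons should be reported as not running
ps -ef | egrep 'ocssd|crsd|evmd' | grep -v grep       # should return no daemon processes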

Step 4 Make backup of OCR / Voting disks:

OCR

As part of the upgrade there is a requirement to back up the current OCR.

So let’s find it:

oracle@MYSERVERd64r:/dev/mapper [MYDBB1]# ocrcheck
Status of Oracle Cluster Registry is as follows:
         Version                  :          2
         Total space (kbytes)     :     505776
         Used space (kbytes)      :       9760
         Available space (kbytes) :     496016
         ID                       :  723446143
         Device/File Name         : /dev/mapper/asm-ocr1p1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/mapper/asm-ocr2p1
                                    Device/File integrity check succeeded
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user

Making a backup on each node:

Shut down Oracle Clusterware on all nodes in the cluster, and then back up the OCR using the dd command:

dd if=/dev/mapper/asm-ocr1p1 of=/tmp/asm-ocr1p1 bs=1M count=256

dd if=/dev/mapper/asm-ocr2p1 of=/tmp/asm-ocr2p1 bs=1M count=256
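On top of the dd images, a logical export of the OCR can be taken with ocrconfig as root while the cluster is still up; a sketch using the 11.1 CRS home from this document:

/opt/crs/product/111_ee_64/crs/bin/ocrconfig -export /tmp/ocr_export_$(date +%Y%m%d).dmp
/opt/crs/product/111_ee_64/crs/bin/ocrconfig -showbackup    # lists the automatic backups CRS keeps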

Voting Disks

oracle@MYSERVERd64r:/opt/oracle []# ls -l /dev/mapper/* | grep vote | grep p1
brw-rw---- 1 oracle dba 253, 61 Mar 15 09:00 /dev/mapper/asm-vote1p1
brw-rw---- 1 oracle dba 253, 59 Mar 15 09:00 /dev/mapper/asm-vote2p1
brw-rw---- 1 oracle dba 253, 58 Mar 15 09:00 /dev/mapper/asm-vote3p1

dd if=/dev/mapper/asm-vote1p1 of=/opt/oracle/asm-vote1p1.dmp

dd if=/dev/mapper/asm-vote2p1 of=/opt/oracle/asm-vote2p1.dmp

dd if=/dev/mapper/asm-vote3p1 of=/opt/oracle/asm-vote3p1.dmp
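The voting disks can also be confirmed with crsctl, and the three dd commands wrapped in a loop; a small sketch using the device names found above:

crsctl query css votedisk    # should list the three voting devices
for v in asm-vote1p1 asm-vote2p1 asm-vote3p1; do
  dd if=/dev/mapper/$v of=/opt/oracle/${v}.dmp
done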

## Then proceed with the following steps.

  4. Run the following script (on each node) as root to unlock protected files:

     ./custom/scripts/prerootpatch.sh -crshome /opt/crs/product/111_ee_64/crs -crsuser oracle

  5. Run the following script (on only one node if the CRS home is shared, but on each node if the CRS home is not shared) as the CRS software installer/owner (oracle) to save important configuration settings:

     ./custom/scripts/prepatch.sh -crshome /opt/crs/product/111_ee_64/crs

Note: The RDBMS portion can only be applied to an RDBMS home that has been upgraded to 11.1.0.7.0.

Also, if the CRS version and RDBMS version are the same, enter the following command as the RDBMS software owner:

custom/server/11724953/custom/scripts/prepatch.sh -dbhome <RDBMS_HOME>

## Mathijs, Dbs will not be touched so this script will be skipped.
  6. Enter the following command as the owner of the CRS home (oracle) (on each node if the CRS home is not shared) to patch the CRS home files:

     $OP napply -local -oh /opt/crs/product/111_ee_64/crs -id 11724953

     Note: the next step can be applied only to an RDBMS home that has been upgraded to 11.1.0.7.0. For more information, see My Oracle Support Note 363254.1, Applying one-off Oracle Clusterware patches in a mixed version home environment.

Enter the following command as the RDBMS home owner (on only one node if the CRS home is shared, but on each node if the CRS home is not shared) to patch the RDBMS home files:

On Linux32 and Linux64 systems only:

% opatch napply custom/server/ -local -oh <RDBMS_HOME> -id 11724953,7388579

On all other systems:

% opatch napply custom/server/ -local -oh <RDBMS_HOME> -id 11724953

If there are multiple Oracle Database (RDBMS) homes in your configuration, then apply the patch to each home before continuing.

Note: If you later install an additional Oracle Database (RDBMS) home, or if you skip an Oracle Database home, then applying the patch to additional homes is easier, and it can be done while CRS is active:

  a. Shut down all instances using that Oracle Database home.
  b. Invoke opatch as described in the RDBMS patching step above.
  c. Restart the instances in that Oracle Database home.

  7. Configure the CRS home, because after the opatch command finishes, some configuration settings need to be applied to the patched files. As the Oracle Clusterware (CRS) software owner (oracle), enter the following command (on only one node if the CRS home is shared, but on each node if the CRS home is not shared):

     ./custom/scripts/postpatch.sh -crshome /opt/crs/product/111_ee_64/crs

     Note: the RDBMS step below can be applied only to an RDBMS home that has been upgraded to 11.1.0.7.0. For more information, see My Oracle Support Note 363254.1, Applying one-off Oracle Clusterware patches in a mixed version home environment.

Configure the RDBMS home, because after the opatch command finishes, some configuration settings need to be applied to the patched files. As the Oracle Database owner, enter the following command (on only one node if the CRS home is shared, but on each node if the CRS home is not shared):

% custom/server/11724953/custom/scripts/postpatch.sh -dbhome <RDBMS_HOME>
  8. Restore security settings to the CRS home and restart the CRS daemons by running the following script (on all nodes) as root:

     ./custom/scripts/postrootpatch.sh -crshome /opt/crs/product/111_ee_64/crs

If the CRS home is shared, run this script on each node, one node at a time. Do not run this script in parallel on two or more nodes.

Do not run this script in any other context than as part of the patching process.
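Once postrootpatch.sh has restarted the stack on a node, a quick health check before moving on to the next node (my own habit, not from the README):

/opt/crs/product/111_ee_64/crs/bin/crsctl check crs   # CSS, CRS and EVM daemons should be healthy
/opt/crs/product/111_ee_64/crs/bin/crs_stat -t        # 11.1-style overview of all resource states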

  9. You can determine whether the patch has been successfully installed by entering the following commands:

     a. As the Oracle Clusterware (CRS) software owner:

        $OP lsinventory -detail -oh /opt/crs/product/111_ee_64/crs

     b. As the RDBMS (Oracle Database) server owner:

        % opatch lsinventory -detail -oh <RDBMS_HOME>

     These commands should list the components for the list of nodes.
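A one-liner I use to confirm the PSU really is in the inventory (sketch):

$OP lsinventory -oh /opt/crs/product/111_ee_64/crs | grep 11724953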

  10. If there are errors, refer to Section 3, "Known Issues".

2.3.3 Patch De-installation

To roll back the patch, follow these steps:

  1. Verify that the Oracle Inventory is properly configured by entering the following commands:

     a. As the Oracle Clusterware (CRS) software owner:

        % opatch lsinventory -detail -oh <CRS_HOME>

     b. As the RDBMS (Oracle Database) server owner:

        % opatch lsinventory -detail -oh <RDBMS_HOME>

     These commands should list the components for the list of nodes. If the Oracle inventory is not set up correctly, one or both of these commands will fail.

  2. Unzip the PSE container file:

     % unzip 11724953.zip
     % cd 11724953

  3. If the CRS home is shared, shut down the RDBMS and ASM instances, listeners, and nodeapps on all nodes before shutting down the CRS daemons on all nodes, as follows:

     a. To shut down the RDBMS instances on all nodes:

        % $ORACLE_HOME/bin/srvctl stop database -d <dbname>

     b. To shut down ASM instances, on each node:

        % $ORACLE_HOME/bin/srvctl stop asm -n <node_name>

     c. To shut down listeners, on each node:

        % $ORACLE_HOME/bin/srvctl stop listener -n <node_name>

     d. To shut down nodeapps, on each node:

        % $ORACLE_HOME/bin/srvctl stop nodeapps -n <node_name>

     e. To shut down CRS daemons on each node, as root:

        # crsctl stop crs

  4. If the CRS home is not shared, shut down the RDBMS and ASM instances, listeners, nodeapps, and CRS daemons on each local node, as follows:

     a. To shut down the RDBMS instance:

        % $ORACLE_HOME/bin/srvctl stop instance -d <dbname> -i <instance_name>

     b. To shut down ASM instances:

        % $ORACLE_HOME/bin/srvctl stop asm -n <node_name>

     c. To shut down listeners:

        % $ORACLE_HOME/bin/srvctl stop listener -n <node_name>

     d. To shut down nodeapps:

        % $ORACLE_HOME/bin/srvctl stop nodeapps -n <node_name>

     e. To shut down CRS daemons, as root:

        # crsctl stop crs

  5. Run the following script (on only one node if the CRS home is shared, but on each node if the CRS home is not shared) as root to unlock protected files:

     % custom/scripts/prerootpatch.sh -crshome <CRS_HOME> -crsuser <username>

where <username> is the software installer/owner for the CRS home.

  6. Run the following script (on only one node if the CRS home is shared, but on each node if the CRS home is not shared) as the CRS software installer/owner to save important configuration settings:

     % custom/scripts/prepatch.sh -crshome <CRS_HOME>

Note: The RDBMS portion can only be applied to an RDBMS home that has been upgraded to 11.1.0.7.0.

Also, if the CRS version and RDBMS version are the same, enter the following command as the RDBMS software owner:

% custom/server/11724953/custom/scripts/prepatch.sh -dbhome <RDBMS_HOME>
  7. Depending on your type of system, enter commands as follows:

  • On Linux32 and Linux64 systems only:

    Enter the following command as the CRS home owner (on only one node if the CRS home is shared, but on each node if the CRS home is not shared) to roll back the patch on all homes:

    % opatch rollback -id 11724953 -local -oh <CRS_HOME>

    Enter the following command as the RDBMS home owner (on only one node if the CRS home is shared, but on each node if the CRS home is not shared) to roll back the patch on all homes:

    % opatch rollback -id 11724953,7388579 -local -oh <RDBMS_HOME>

  • On all other systems:

    Enter the following command as the CRS home owner (on only one node if the CRS home is shared, but on each node if the CRS home is not shared) to roll back the patch on all homes:

    % opatch rollback -id 11724953 -local -oh <CRS_HOME>

    Enter the following command as the RDBMS home owner (on only one node if the CRS home is shared, but on each node if the CRS home is not shared) to roll back the patch on all homes:

    % opatch rollback -id 11724953 -local -oh <RDBMS_HOME>
  8. Configure the CRS home, because after the opatch command finishes, some configuration settings need to be applied to the patched files. As the Oracle Clusterware (CRS) software owner, enter the following command (on only one node if the CRS home is shared, but on each node if the CRS home is not shared):

     % custom/scripts/postpatch.sh -crshome <CRS_HOME>

     Note: the RDBMS step below can be applied only to an RDBMS home that has been upgraded to 11.1.0.7.0. For more information, see My Oracle Support Note 363254.1, Applying one-off Oracle Clusterware patches in a mixed version home environment.

     Configure the RDBMS home, because after the opatch command finishes, some configuration settings need to be applied to the patched files. As the Oracle RDBMS software owner, enter the following command (on only one node if the CRS home is shared, but on each node if the CRS home is not shared):

     % custom/server/11724953/custom/scripts/postpatch.sh -dbhome <RDBMS_HOME>

  9. Restore security settings to the CRS home and restart the CRS daemons by running the following script (on all nodes) as root:

     % custom/scripts/postrootpatch.sh -crshome <CRS_HOME>

If the CRS home is shared, run this script on each node, one node at a time. Do not run this script in parallel on two or more nodes.

Do not run this script in any other context than as part of the patching process.

  10. You can determine whether the patch has been successfully rolled back by entering the following commands:

      a. As the Oracle Clusterware (CRS) software owner:

         % opatch lsinventory -detail -oh <CRS_HOME>

      b. As the RDBMS (Oracle Database) server owner:

         % opatch lsinventory -detail -oh <RDBMS_HOME>

      These commands should list the components for the list of nodes.

  11. If there are errors, refer to Section 3, "Known Issues".

3 Known Issues

For information about OPatch issues, see My Oracle Support Note 293369.1 OPatch documentation list.

For issues for PSUs 11.1.0.7.n, see My Oracle Support Note 810663.1 11.1.0.X CRS Bundle Patch Information.

Step 5 Install the Grid Infrastructure:

### Mathijs note: this is a full installation into a new home.

  1. Unset the relevant environment variables:

     unset ORACLE_HOME
     unset ORACLE_SID
     unset ORACLE_BASE
     unset ORA_CRS_HOME

  2. As root, in /opt on all nodes:

     chmod -R g+w crs

  3. Make sure subdirectories can be created:

     ORACLE_BASE: /opt/crs/grid
     ORACLE_HOME: /opt/crs/product/112_ee_64/crs

  4. On the first screen of runInstaller I selected "Upgrade Grid Infrastructure".

### Mathijs Note:

PS: We had issues with SSH during this install. I contacted Oracle about it, and indeed for SSH it is best practice to verify, before and during runInstaller, that passwordless access to all nodes is possible. So in the appropriate runInstaller screen, make sure you TEST, TEST, TEST connectivity. A quick manual pre-check is sketched below.
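A minimal manual pre-check, using the node names of this cluster (BatchMode makes ssh fail instead of prompting when the key setup is broken):

# Passwordless SSH must work from the installing node to every node, including itself
for n in MYSERVERd64r MYSERVERd65r MYSERVERd66r; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 $n hostname || echo "fix SSH equivalence to $n"
done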

PS2: Even more important, make sure your ORACLE_BASE is set to the right subdirectory. It would not be the first time that using /opt/oracle as the base got its privileges changed to root:dba, and that will hurt!

………………   End of Line ………………………
