Return of the ACFS, October 2012 PSU Patch (14275572) for GI and RDBMS

Introduction:

This week I patched a preproduction environment with the latest PSU patch available at this moment (October 2012); next week I will do the same on the production machines. In itself this is not a big issue when applying it to the Grid Infrastructure and RDBMS homes, but ACFS is in use on those boxes. Since I hope this makes for an interesting note, I have gathered my steps and want to share them with you.

Preparations:

As general information: on these specific boxes ACFS has been implemented for two reasons. First, to have a shared mountpoint on a per-database basis where the instances can write their diagnostics information (ADR). Second, as a shared location for the logging of the local listener, which is also created on a per-database basis.
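Just as an illustration of that first point (the mountpoint name below is made up, not the real one on these boxes), the instances simply have their diagnostic destination pointed at the ACFS mountpoint:

SQL> ALTER SYSTEM SET diagnostic_dest='/u01/acfs/MYDBA' SCOPE=BOTH SID='*';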

Note that the patching is done PER node, so the following steps will be performed on every node.

The Grid Infrastructure is under control of the root user, so you will have to get access to that account before you start patching.

### As the root user you will perform the following steps:

## setting up the paths:

export PATH=/opt/crs/product/112_ee_64/crs/bin:$PATH

## Run the following check for the ACFS file systems that are present on the box.

crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
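On my boxes the grep returned lines roughly like the one below (trimmed; your volume names will differ), and that is where the device name used in the next steps comes from:

VOLUME_DEVICE=/dev/asm/MYDBA_acfs1-367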

## With the information above you can stop the ACFS resource:

srvctl stop filesystem -d /dev/asm/MYDBA_acfs1-367 -n MYSRVR3hr

## Well, that is not working, because I also have a local listener defined on those boxes and that local listener is writing its log file onto the ACFS file system as well, so the device will still be busy. So first go and find the local listener:

oracle@MYSRVR3hr:/opt/oracle [+ASM3]# ps -ef|grep inherit

This showed:

oracle   21803     1  0 Oct22 ?        00:00:00 /opt/crs/product/112_ee_64/crs/bin/tnslsnr LISTENER -inherit

oracle   21844     1  0 Oct22 ?        00:00:01 /opt/crs/product/112_ee_64/crs/bin/tnslsnr LISTENER_SCAN1 -inherit

oracle   21962     1  0 Oct22 ?        00:00:01 /opt/crs/product/112_ee_64/crs/bin/tnslsnr LISTENER_MYDBA -inherit

## So my next step is to stop the local listener:

srvctl stop listener -n MYSRVR3hr -l LISTENER_MYDBA
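Optionally, a quick check that the listener really went down on this node (it should be reported as not running):

srvctl status listener -n MYSRVR3hr -l LISTENER_MYDBA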

### After stopping the local listener, let's check again for open files (as root user), since I know that only this database is writing into the ACFS mountpoint:

lsof |grep -i MYDBA

## I got my prompt back without any process showing activity on the ACFS mountpoint, so I can go ahead.

## stopping the ACFS

srvctl stop filesystem -d /dev/asm/MYDBA_acfs1-367 -n MYSRVR3hr
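Before letting OPatch loose I like to double check that the file system is really unmounted on this node (the exact wording of the output may differ, but it should report the file system as not mounted):

srvctl status filesystem -d /dev/asm/MYDBA_acfs1-367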

In earlier PSU patches I have struggled with the auto option of the OPatch tool, but this week I felt brave, so I performed the following steps.

## setting up the environment for the root user:

export PATH=/opt/crs/product/112_ee_64/crs/OPatch:$PATH

Running a quick check that opatch is picked up correctly from my path:

which opatch
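The readme of the PSU also lists a minimum required OPatch version, so while we are at it, it does not hurt to verify the installed version against that:

opatch version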

I downloaded the specific PSU patch from MOS and uploaded it to all three nodes in the cluster. The patch has been unpacked in the directory /opt/oracle/stage.

cd /opt/oracle/stage

## From the readme of this PSU patch I have taken the part which applies to me (no shared Oracle software, ACFS in use): patch the GI home and all Oracle RAC database homes of the same version:

opatch auto /opt/oracle/stage -ocmrf /opt/crs/product/112_ee_64/crs/OPatch/ocm/bin/ocm.rsp
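A small note here: the ocm.rsp response file passed with -ocmrf has to exist before opatch auto is run. If it is not there yet, it can be generated beforehand with emocmrsp (run it as the Grid home owner, not as root; it prompts for a My Oracle Support e-mail address, which can be left empty, and writes ocm.rsp into the current directory):

cd /opt/crs/product/112_ee_64/crs/OPatch/ocm/bin
./emocmrsp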

## I followed the log file with a tail and had a second window open on the cluster agent as well. This all looked great, so this time the auto patch worked!
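As a last sanity check I like to confirm the PSU is now listed among the interim patches in both homes. A minimal sketch; the RDBMS home path below is made up, use your own, and run each opatch as the owner of that home:

## GI home:
/opt/crs/product/112_ee_64/crs/OPatch/opatch lsinventory
## RDBMS home (path is just an example):
/opt/oracle/product/112_ee_64/db/OPatch/opatch lsinventory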

 

Follow-up February 2013:

One of the post viewers pointed out to me that my blog did not include the steps that need to be done on the database(s) that are in place, and basically that is a very correct observation. When I wrote this note I was setting up a new cluster, so I did not include the necessary database steps simply because there were none at the time of the build.

However, had I had databases on there, I would have followed these steps:

Non-Shared Oracle RAC Database Home

Execute the following command on EACH node of the cluster.
As ROOT user execute:
# opatch auto <unzipped_patch_location> -oh <rac_database_home> -ocmrf <ocm_response_file>
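Filled in with the staging directory and response file from this post, and a made-up RDBMS home path just for illustration, that would have looked roughly like:

# opatch auto /opt/oracle/stage -oh /opt/oracle/product/112_ee_64/db -ocmrf /opt/crs/product/112_ee_64/crs/OPatch/ocm/bin/ocm.rsp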

Once that had finished successfully, the following steps would need to be done for each database running out of that specific Oracle home:
Loading Modified SQL Files into the Database
The following steps load modified SQL files into the database. For an Oracle RAC environment, perform these steps on only one node.
For each database instance running on the Oracle home being patched, connect to the database using SQL*Plus. Connect as SYSDBA and run the catbundle.sql script as follows:
1. cd $ORACLE_HOME/rdbms/admin
2. sqlplus /nolog
3. SQL> CONNECT / AS SYSDBA
4. SQL> STARTUP
5. SQL> @catbundle.sql psu apply
6. SQL> QUIT
The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series PSU (see the query sketch after these steps).
For information about the catbundle.sql script, see My Oracle Support Note 605795.1 Introduction to Oracle Database catbundle.sql.

7. Check the following log files in $ORACLE_BASE/cfgtoollogs/catbundle for any errors:
   catbundle_PSU_<database SID>_APPLY_<TIMESTAMP>.log
   catbundle_PSU_<database SID>_GENERATE_<TIMESTAMP>.log

where TIMESTAMP is of the form YYYYMMMDD_HH_MM_SS. If there are errors, refer to Section 3, "Known Issues".
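And the quick check against dba_registry_history mentioned above, as a sketch (the comments column shows which bundle was applied):

SQL> SELECT action_time, action, version, bundle_series, comments FROM dba_registry_history ORDER BY action_time;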

Happy reading,

Mathijs

 

 

2 thoughts on “Return of the ACFS, October 2012 PSU Patch (14275572) for GI and RDBMS”

  1. Hi,
    where is the DB part of the patch? Have you tried it in a failover environment? I was hoping to make sure that the ACFS would be made available again before failing over.

    • Hi, I have updated the post; your observation was correct. At the time I did this it was a fresh server with no databases on it, so I did not run the DB part. And as the readme tells you, if you install databases after installing the patch, no activity is needed on those database(s).

      This ACFS is in use in a full-blown RAC environment, and it has become shared storage for the diag destination of the instances, available on all nodes. So in case of a server crash, the trace files would still be accessible from the second node.
