Thoughts on Adding a Single Instance To Oracle Clusterware (Grid Infra).

General: Configuring Active/Passive Clustering for Oracle Database (Single Instance in CRS)

The Oracle database can easily be configured to use the Clusterware framework for high availability.

Using Grid Infrastructure to protect a database resource is a very cost-effective way of setting up an active/passive cluster. As an added advantage, using only one vendor's software stack to implement the cluster can make troubleshooting easier. Staff already familiar with RAC will easily be able to set up and run this configuration because it uses the same software stack: all commands and log files are in familiar locations, and troubleshooting does not rely on input from other teams.

To set up an active/passive cluster with 11g Release 2, you initially need to install Grid Infrastructure on all nodes of the cluster. Grid Infrastructure provides a cluster logical volume manager: ASM. If for some reason another file system is required, you can choose one of the supported cluster file systems, including ACFS. Using a cluster file system that is mounted concurrently on both cluster nodes offers the advantage of not having to remount the database files and binaries in case of a node failure. Some configurations we saw suffered from extended failover periods caused by the file system checks required before the file system could be remounted.

On top of the Grid Infrastructure build, you perform a local installation of the RDBMS. It is important that you do not choose a cluster installation when prompted; otherwise, you risk violating your license agreement with Oracle.

After the binaries are installed and patched according to your standards, you need to create an ASM disk group or an OCFS2/GFS mount point to store the database files. Next, start the Database Configuration Assistant from the first node to create a database. Please make sure that you store all the data files in ASM or on the clustered file system. The same applies to the Fast Recovery Area: it should be on shared storage as well.

TIP: After the database is created by dbca, it is automatically registered in the OLR. The profile of this resource is a good reference for the resource profile to be created in the next step, so it is a good idea to save it in a safe place. You can extract the profile with the crsctl status resource ora.databaseName.db -p command. This creates a text file containing valuable information needed for setting up the Clusterware configuration, which will have to be tailored to your specific needs.
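
The saved profile is plain text made up of ATTRIBUTE=value lines, so you can pull out the attributes you will typically reuse with a simple grep. A minimal sketch; the helper function name is my own, and the profile file is assumed to have been created first with the crsctl command above:

```shell
# Sketch: extract the attributes worth copying from a saved resource profile.
# Create the profile first, e.g.:
#   crsctl status resource ora.TEST.db -p > ora.TEST.db.profile
profile_keys() {
  # keep only the attributes usually carried over into a custom resource profile
  grep -E '^(NAME|ACL|START_DEPENDENCIES|STOP_DEPENDENCIES|HOSTING_MEMBERS|PLACEMENT)=' "$1"
}
```

Usage would be, for example, `profile_keys ora.TEST.db.profile`, keeping the dependency and placement lines and dropping the dozens of attributes you will not change.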

This note was written for a situation where a full RAC environment was in place that needed to be replaced by the solution we are discussing here, and we were not allowed to use RAC One Node at the time. Today I would recommend checking whether RAC One Node is an option for you (check your licenses!) before continuing.

Summary of things needed:

In order to make things work, you might need to take care of this shopping list before starting:

  • Root access: In this solution we also automate the failover of the HP OpenView monitoring. Stopping and starting that agent needs root access, and I suggest making it passwordless.
  • Cooperation with an HP OpenView expert: If they are not involved, OpenView will still look at the wrong node after a failover. They can help automate the failover of the monitoring!
  • Extra IP addresses: If a node dies, or has maintenance, ALL resources will fail over. This means that schedules in Maestro could go wrong, connecting to the wrong node. I have requested extra IPs per database for maximum flexibility.
  • Action script: To be implemented in the same place on each server participating in this concept.
  • Identical infrastructure: All links/subdirectories in place, in the same location, on all nodes that a specific database could fail over to; tnsnames equal.
  • RAC Clusterware 11.2: This solution has been implemented and tested in an 11gR2 environment. If you need a setup under 11.1 CRS, you will have to use a different approach, as documented in SingleInstance DB Fileover in 11gr1 Clsuterware.pdf.

Which Method to use?

Frankly, there are three scenarios that can be followed, and there is no single solution that fits all situations.

  1. Remove the database from the Grid Infrastructure and add application resources (the database being one of them) back to the cluster. Pro: a clear naming convention for the resources. Con: more maintenance is needed, because resources such as the database need to be removed before they can be added again.
  2. Add the VIP address and a dedicated listener to the Clusterware in an 11.2 environment (aka "the Italian way", because that is how it is performed there). Pro: you only register new resources (a VIP per database and a dedicated local listener) and alter the database resource. Con: due to the naming convention you still use the ora.* resource for the database, which might be confusing because it is not RAC.
  3. Add the VIP address and a dedicated listener to the Clusterware in 11.1, creating five resources (among them listener, VIP, database, start and stop resources) and using the crs_register command to add them to the cluster. Pro: this has already been done on several clusters and works under both CRS 11.1 and 11.2.

Note: When I first implemented this, I used the method "removing databases from the Grid Infrastructure", and that worked well, given that it was an 11.2 environment with a SCAN listener in place. However, that method was not complete (since it did not put a failover VIP address and a local listener in place).

Happy reading,

Mathijs

Details:

1.       Removing Databases from the Grid Infrastructure (Clusterware)

If the database(s) in scope are currently part of a RAC, remove the database resource (called TEST in this document) from the OLR, as shown in the following example:

[oracle@london1 ~]$ srvctl remove database -d TEST

Setting up an action script

Action Script location:

Next, we need to agree on where to put the mandatory action script and, of course, on what should be part of it. As a default on all boxes, I have used: oracle@MYSRVR04hr:/opt/crs/product/112_ee_64/crs/crs/public. Indeed, buried deep in the heart of the Oracle Clusterware software.

Action Script contents:

As a mandatory component of the solution, we need an action script that allows the framework to start, stop, and check the database resource. Since every database becomes an "application resource", every database will have an action script of its own!

If the Clusterware relocates a database (performs a failover), it internally runs this script with a parameter:

  • stop: the action script then performs the following steps:
    • sudo su - -c "/var/opt/OV/bin/instrumentation/dbspicol OFF $ORACLE_SID; rm /var/opt/OV/log/OpC/PMON_oracle_${ORACLE_SID}.flag" Note: as root, this stops the OpenView agent and removes an agreed touch file that is used to show that the database is active on this node.
    • Then, using sqlplus, it stops the database.
  • start: the action script then performs the following steps:
    • mv /opt/oracle/MYDB1/diag/rdbms/mydb1/MYDB1/trace/alert_$ORACLE_SID.log /opt/oracle/$ORACLE_SID/admin/Arch/alert_${ORACLE_SID}.log.${CURRENT_TIMESTAMP} Note: this moves the old alert file away on the node we are failing over to (in other words, on the server where the database was NOT active).
    • $ORACLE_HOME/bin/sqlplus /nolog Note: sqlplus is used to start the database on the new node (server).
    • sudo su - -c "/var/opt/OV/bin/instrumentation/dbspicol ON $ORACLE_SID; touch /var/opt/OV/log/OpC/PMON_oracle_${ORACLE_SID}.flag" Note: as root, this starts the OpenView agent and creates a touch file as proof that this is the node where the database is alive.

Note: Before adding this action script to the Clusterware, you could (and should) test it. Simply run it with <scriptname> stop or <scriptname> start.
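
Since the framework only looks at the exit code of each entry point (0 means success), a tiny harness makes that manual test explicit. This is just a sketch; the function name is my own, and the path in the usage comment is an example:

```shell
# Sketch: exercise an action script the way the Clusterware agent will - call one
# entry point and report the exit code (0 means success to the framework).
test_action() {
  local script=$1 entry=$2
  "$script" "$entry"
  local rc=$?
  echo "entry '$entry' -> exit code $rc"
  return $rc
}
# Usage sketch (hypothetical path):
#   test_action /opt/crs/product/112_ee_64/crs/crs/public/ora.mydb1.active check
```

Running it once per entry point (start, check, stop, clean) before registering the resource catches most scripting mistakes early.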

In my project we made the script (one per database) look very much like the following. Every action script is named ora.<dbname>.active:

#!/bin/bash
export ORACLE_SID=MYDB1
export ORACLE_HOME=/opt/oracle/product/112_ee_64/db
export PATH=/usr/local/bin:$ORACLE_HOME/bin:$PATH
export ORACLE_OWNER=oracle
export CURRENT_TIMESTAMP=`date +%Y%m%d_%H%M%S`   # Format: YYYYMMDD_HHMISS   e.g.: 20110907_150455
case $1 in
'start')
mv /opt/oracle/MYDB1/diag/rdbms/mydb1/MYDB1/trace/alert_$ORACLE_SID.log /opt/oracle/$ORACLE_SID/admin/Arch/alert_${ORACLE_SID}.log.${CURRENT_TIMESTAMP}
$ORACLE_HOME/bin/sqlplus /nolog<<EOF
conn / as sysdba
startup
exit
EOF
sudo su - -c "/var/opt/OV/bin/instrumentation/dbspicol ON $ORACLE_SID;touch /var/opt/OV/log/OpC/PMON_oracle_${ORACLE_SID}.flag"
RET=0
;;
'stop')
sudo su - -c "/var/opt/OV/bin/instrumentation/dbspicol OFF $ORACLE_SID; rm /var/opt/OV/log/OpC/PMON_oracle_${ORACLE_SID}.flag"
$ORACLE_HOME/bin/sqlplus /nolog<<EOF
conn / as sysdba
shutdown immediate
exit
EOF
RET=0
;;
'clean')
$ORACLE_HOME/bin/sqlplus /nolog <<EOF
conn / as sysdba
shutdown abort
exit
EOF
RET=0
;;
'check')
# check for the existence of the smon process for $ORACLE_SID
# this check could be improved, but was kept short on purpose
found=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`
if [ $found = 0 ]; then
RET=1
else
RET=0
fi
;;
*)
RET=0
;;
esac
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi

Note: This action script defines environment variables in the header section, setting the Oracle owner, the Oracle SID, and the Oracle home. It then implements the required start, stop, clean, and check entry points. The check could be more elaborate—for example, it could check for a hung instance—however, this example was kept short and simple for the sake of clarity.
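
One way to harden the check, as hinted above, is to use pgrep with an exact process-name match instead of the ps | grep pipeline; that avoids matching the grep process itself and substring hits between similar SIDs (e.g. "DB1" matching "DB11"). A hedged sketch, with a function name of my own choosing:

```shell
# Sketch: a slightly more robust 'check' using pgrep instead of ps|grep.
# pgrep -x matches the exact process name ora_smon_<SID>, so it cannot match
# the grep process itself or another SID that merely contains this one.
check_instance() {
  local sid=$1
  if pgrep -x "ora_smon_${sid}" >/dev/null 2>&1; then
    return 0   # instance process found
  else
    return 1   # instance not running
  fi
}
```

The return codes mirror what the action script's check branch reports back to the Clusterware.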

The action script needs to be deployed to the other cluster node as well, and it must be made executable.

Whenever there is a change to the script, the action script needs to be synchronized with the other cluster nodes!
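
One cheap way to verify that all copies really are identical after a change is to compare checksums. The sketch below assumes you have fetched each node's copy locally first (for example with scp); the function name is illustrative:

```shell
# Sketch: given local copies of the action script from each node, report whether
# they are all identical by counting distinct md5 checksums (1 means in sync).
all_in_sync() {
  local distinct
  distinct=$(md5sum "$@" | awk '{print $1}' | sort -u | wc -l)
  [ "$distinct" -eq 1 ]
}
# Usage sketch: scp each node's copy locally first, then:
#   all_in_sync copy_node1.active copy_node2.active && echo "in sync"
```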

Defining the Cluster Resource:

After you have finished defining the mandatory action script, you need to create a new type of cluster resource, which will be added to the Clusterware (or Grid Infrastructure, if you like). Securing an Oracle database instance with Clusterware is simplified in this case by the availability of the SCAN listener: users of the database do not need to worry about which node the database is currently started on, because the SCAN abstracts this information from them. The communication of the SCAN with the local listener also makes a floating virtual IP (which other cluster stacks require) unnecessary.

Using some values from the resource profile saved earlier (remember, you can extract the profile with the crsctl status resource ora.databaseName.db -p command), you now need to configure the resource profile for the cluster resource. Note: it is easier to use a configuration file than to supply all the resource parameters on the command line as name-value pairs. To recreate the database cluster resource, you could use the following configuration file, saved as TEST.db.config:

Note:  As a naming convention we named the mandatory configuration files: ora.<dbname>.config.  
So as an example:  cat ora.mydb1.config
NAME=app.mydb1.db
TYPE=cluster_resource
ACL=owner:oracle:rwx,pgrp:dba:rwx,other::r--
ACTION_SCRIPT=/opt/crs/product/112_ee_64/crs/crs/public/ora.mydb1.active
ACTIVE_PLACEMENT=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=10
DEGREE=1
DESCRIPTION=Resource MYDB1 DB
ENABLED=1
HOSTING_MEMBERS=MYSRVR05HR MYSRVR04HR
LOAD=1
LOGGING_LEVEL=1
PLACEMENT=restricted
RESTART_ATTEMPTS=1
START_DEPENDENCIES=hard(ora.MYDB1_DATA.dg,ora.MYDB1_FRA.dg,ora.MYDB1_REDO.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.MYDB1_DATA.dg,ora.MYDB1_FRA.dg,ora.MYDB1_REDO.dg)
START_TIMEOUT=600
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.MYDB1_DATA.dg,shutdown:ora.MYDB1_FRA.dg,shutdown:ora.MYDB1_REDO.dg)
STOP_TIMEOUT=600
UPTIME_THRESHOLD=1h

The preceding configuration file can be read as follows. Placement and hosting members go hand-in-hand; the restricted policy only allows executing the resource on the hosting members MYSRVR05HR and MYSRVR04HR. The check interval of 10 seconds determines the frequency of checks, and setting ACTIVE_PLACEMENT to 0 prevents Oracle from relocating the resource back to a previously failed node once it rejoins the cluster; such a node could suffer a second outage, and it is better to let the DBAs perform the switch back to the primary node. The cardinality specifies that there will always be exactly one instance of this resource in the cluster (never more or fewer); similarly, the degree of 1 indicates that there cannot be more than one instance of the resource on the same node. The parameters RESTART_ATTEMPTS and ACTION_SCRIPT are self-explanatory in this context. Please note that the directory /opt/crs/product/112_ee_64/crs/crs/public/ was used to store the action script.

Note: ACL=owner:oracle:rwx,pgrp:dba:rwx,other::r-- sets the privileges to make sure that the oracle user is able to perform actions (such as relocating) on the database.

Adding the resource to the cluster:

Next, use the following command to register the resource in Grid Infrastructure:

$ crsctl add resource app.mydb1.db -type cluster_resource -file /opt/crs/product/112_ee_64/crs/crs/public/ora.mydb1.config

If you get a CRS-2518 (invalid directory path) error while executing this command, you most likely forgot to deploy the action script to the other node.

Note: You might be tempted to use the ora.database.type resource type here. Unfortunately, using this resource type repeatedly caused core dumps of the agent process monitoring the resource. These were in $GRID_HOME/log/hostname/agent/crsd/oraagent_oracle.

Tip: The permissions on the resource might be too strict at this point. In our configuration we took care of that, but if you forget to, it could happen that only root can effectively modify the resource. Trying to start the resource as the oracle account then results in a failure, as in this example:

[oracle@london1 ~]$ crsctl start resource TEST.db

CRS-0245: User doesn't have enough privilege to perform the operation

CRS-4000: Command Start failed, or completed with errors.

You can confirm the cause of this failure by checking the permissions:

[root@london1 ~]# crsctl getperm resource TEST.db

Name: TEST.db

owner:root:rwx,pgrp:root:r-x,other::r--

You would like the oracle user to also be able to start and stop the resource; you can enable this level of permission using the crsctl setperm command, as in the following example:

[root@london1 ~]# crsctl setperm resource TEST.db -o oracle

[root@london1 ~]# crsctl getperm resource TEST.db

Name: TEST.db

owner:oracle:rwx,pgrp:root:r-x,other::r--

The preceding snippet allows users logging in as oracle (or using sudo su - oracle) to start and stop the database, effectively transferring ownership of the resource to the oracle account. You need to ensure that the oracle account can execute the action script; otherwise, you will get an error when trying to start the resource. Members of the oinstall group have the same rights. All other users defined at the operating system level with privileges to execute the binaries in $GRID_HOME can only read the resource status.

Note: All the resource attributes, especially the placement options and start/stop dependencies, are documented in Appendix B of the "Oracle Clusterware Administration and Deployment Guide 11g Release 2 (11.2)" (http://docs.oracle.com/cd/E11882_01/rac.112/e16794/resatt.htm).

Final Tips:

TIP: The final preparation steps are to copy the password file and a pfile pointing to the ASM spfile into the passive node's $ORACLE_HOME/dbs directory. You are on 11.2, so the ADR will take care of all your diagnostic files. You do need to create the directory for your audit files, however, which is normally $ORACLE_BASE/admin/$ORACLE_SID/adump. Make sure links, directories, and tnsnames.ora are also the same on all nodes.

BIGGEST POSSIBLE TIP: You need to keep the action scripts in sync, with the same privileges, on all nodes you want to fail over to. So changing the action script means: TEST it, and if it works, copy it across to the other environment(s).

Once you have implemented the Solution:

Relocating a resource:

From this point on, you need to use crsctl {start|stop|relocate} to manipulate the database.

Tip:

  1. Always set your environment to the Grid Infrastructure before starting to work!
  2. Always check the naming convention of the specific resource you want to work with:
  • crsctl status resource -t
  3. When you have identified the resource (for example mydb1): app.mydb1.db
  4. Check the resource:
  • crsctl status resource app.mydb1.db
  5. Then you can relocate mydb1 with:
  • crsctl relocate resource app.mydb1.db
  6. And check it with:
  • crsctl status resource app.mydb1.db
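
The sequence above can be wrapped in a small helper. The sketch below is deliberately a dry run: it only prints the crsctl commands for review rather than executing them; the function name is my own, and the resource name follows the app.<dbname>.db convention used in this note:

```shell
# Sketch: print the crsctl command sequence for relocating a database resource.
# Dry run only - it echoes the commands so you can review them before running.
relocate_plan() {
  local res=$1
  echo "crsctl status resource ${res}"     # check before
  echo "crsctl relocate resource ${res}"   # the relocate itself
  echo "crsctl status resource ${res}"     # check after
}
```

For example, `relocate_plan app.mydb1.db` prints the three commands; once they look right, you can run them for real with your Grid Infrastructure environment set.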

2.       Add the VIP address and a dedicated listener to the Clusterware (11.2)

In this scenario most of the resources in the Clusterware (GI) remain in place; only new resources are added for a dedicated listener and a VIP address. As mentioned in the requirements, you will need a dedicated VIP address for the database, and that VIP needs to be used in the listener configuration. SSH sessions for backups or scheduled jobs should also use the VIP!

Below is a working example, tailored for one of the databases:
1)      get new VIP name / IP address for each database in scope and write them into /etc/hosts of all servers:
 For example:
 Database: MYDB1, new VIP name: mydb1dbprod, IP address: 195.233.102.120
2)      copy the act_listener.pl script (see attachment) into the $GRID_INFRASTRUCTURE_HOME/crs/public directory;
 in your case: /opt/crs/product/112_ee_64/crs/crs/public
3)      create new crs resource for  VIP name as root user ($GRID_INFRASTRUCTURE_HOME setting):
 appvipcfg create -network=1 -ip=195.233.102.120 -vipname=mydb1dbprod -user=root
crsctl setperm resource mydb1dbprod -u user:oracle:r-x
4)      create new crs type (I named it as custom_listener) as cluster_resource type as oracle user with $GRID_INFRASTRUCTURE_HOME setting:
 crsctl add type custom_listener -basetype cluster_resource -attr "ATTRIBUTE=ORACLE_HOME,TYPE=string"
crsctl modify type custom_listener -attr "ATTRIBUTE=ACTION_SCRIPT,TYPE=string,DEFAULT_VALUE=/opt/crs/product/112_ee_64/crs/crs/public/act_listener.pl"
crsctl modify type custom_listener -attr "ATTRIBUTE=ORA_LISTENER_NAME,TYPE=string,DEFAULT_VALUE=NULL"
5)      insert into /opt/crs/product/112_ee_64/crs/network/admin/listener.ora file the information about VIP Local listener dedicated for MYDB1 database (I named it as LISTENER_MYDB1) on all cluster node:
LISTENER_MYDB1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = mydb1dbprod)(PORT = 1521))
    )
  )
              Change the port if necessary.
6)      Insert into /opt/oracle/product/112_ee_64/db/network/admin/tnsnames.ora file the information about VIP Local listener:
LISTENER_MYDB1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = mydb1dbprod)(PORT = 1521))
7)      Create new crs resource of custom_listener type for LISTENER_MYDB1 listener as oracle user with $GRID_INFRASTRUCTURE_HOME setting:
crsctl add resource LISTENER_MYDB1 -type custom_listener \
-attr "PLACEMENT=favored, \
HOSTING_MEMBERS='MYSRVR01hr MYSRVR02hr MYSRVR03hr MYSRVR04hr',CHECK_INTERVAL=30,RESTART_ATTEMPTS=2, \
START_DEPENDENCIES=hard(mydb1dbprod),STOP_DEPENDENCIES=hard(mydb1dbprod), \
ORACLE_HOME=/opt/crs/product/112_ee_64/crs,ORA_LISTENER_NAME=LISTENER_MYDB1"
8)      Change instance’s parameters related to listeners:
 alter system set local_listener='LISTENER_MYDB1' scope=both sid='*' ;
alter system set remote_listener='' scope=both sid='*' ;
alter system register ;
9)      Change entry in /var/opt/oracle/tnsnames.ora file:
 MYDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mydb1dbprod)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MYDB1_TAF.prod.nl)
   )
)
10)   Change attribute of ora.mydb1.db database crs resource (already reconfigured as Single instance database):
                crsctl modify res ora.mydb1.db -attr "PLACEMENT=favored,HOSTING_MEMBERS='MYSRVR01hr MYSRVR02hr MYSRVR03hr MYSRVR04hr',SERVER_POOLS=''"
       crsctl modify res ora.mydb1.db -attr "START_DEPENDENCIES='hard(ora.MYDB1_DATA.dg,ora.MYDB1_FRA.dg,ora.MYDB1_REDO.dg,mydb1dbprod) pullup(ora.MYDB1_DATA.dg,ora.MYDB1_FRA.dg,ora.MYDB1_REDO.dg)'"
crsctl modify res ora.mydb1.db -attr "STOP_DEPENDENCIES='hard(intermediate:ora.asm,shutdown:ora.MYDB1_DATA.dg,shutdown:ora.MYDB1_FRA.dg,shutdown:ora.MYDB1_REDO.dg,shutdown:mydb1dbprod)'"
Congratulations! When all that is finished, you are ready to play with the new VIP/local listener/single-instance database resources using the crsctl command:
crsctl relocate res [RESOURCE_NAME] -n [NODE_NAME] -f
where
[RESOURCE_NAME] could be mydb1dbprod, LISTENER_MYDB1 or  ora.mydb1.db
[NODE_NAME] is any of the node names you put in the HOSTING_MEMBERS attribute.
act_listener.pl
#!/usr/bin/perl
#    NOTES
#      Edit the perl installation directory as appropriate.
#
#      Place this file in <CRS_HOME>/crs/public/
$ORACLE_HOME = "$ENV{_CRS_ORACLE_HOME}";
$ORA_LISTENER_NAME = "$ENV{_CRS_ORA_LISTENER_NAME}";
if ($#ARGV != 0 ) {
        print "usage: start stop check required \n";
exit;
}
$command = $ARGV[0];
# start listener
if ($command eq "start") {
        system ("
        ORACLE_HOME=$ORACLE_HOME
        export ORACLE_HOME
        ORA_LISTENER_NAME=$ORA_LISTENER_NAME
        export ORA_LISTENER_NAME
#       export TNS_ADMIN=$ORACLE_HOME/network/admin  # optionally set TNS_ADMIN here
        $ORACLE_HOME/bin/lsnrctl start $ORA_LISTENER_NAME");
        }
# stop listener
if ($command eq "stop") {
        system ("
        ORACLE_HOME=$ORACLE_HOME
        export ORACLE_HOME
        ORA_LISTENER_NAME=$ORA_LISTENER_NAME
        export ORA_LISTENER_NAME
#       export TNS_ADMIN=$ORACLE_HOME/network/admin  # optionally set TNS_ADMIN here
        $ORACLE_HOME/bin/lsnrctl stop $ORA_LISTENER_NAME");
        }
# check listener
if ($command eq "check") {
        check_listener();
        }
sub check_listener {
        my($check_proc_listener,$process_listener) = @_;
        $process_listener = "$ORACLE_HOME/bin/tnslsnr $ORA_LISTENER_NAME -inherit";
        $check_proc_listener = qx(ps -ae -o args | grep -w "tnslsnr $ORA_LISTENER_NAME" | grep -v grep | head -n 1 );
        chomp($check_proc_listener);
        if ($process_listener eq $check_proc_listener) {
                exit 0;
        } else {
                exit 1;
                }
        }
# clean listener
if ($command eq "clean") {
        my $kill_proc = qx(ps -aef | grep -w "tnslsnr $ORA_LISTENER_NAME" | grep -v grep | head -n 1 | awk '{print \$2}' | xargs kill -9);
        exit 0;
}

3.       Add the vip address and a dedicated listener to the clusterware in 11.1

This scenario has already been implemented on various servers, and it works under both 11.1 CRS and 11.2 GI, which makes it a good candidate as a default. It consists of five .cap files that contain settings for the cluster. In those five cap files, three .pl scripts are used. All activities needed to register them have been copied to a .txt file, which is included here.

mysrvrMYDB4 will be used as a working example:
Overview and activities In the cluster: mysrvrMYDB4.txt
export CRS_HOME=/opt/crs/product/111_ee_64/crs
# register
/opt/crs/product/111_ee_64/crs/bin/crs_register -dir /opt/crs/product/111_ee_64/crs/crs/public/mysrvrMYDB4 mysrvrMYDB4.stop
/opt/crs/product/111_ee_64/crs/bin/crs_register -dir /opt/crs/product/111_ee_64/crs/crs/public/mysrvrMYDB4 mysrvrMYDB4.vip
### vip activities need to be done as root ! 
su -- /opt/crs/product/111_ee_64/crs/bin/crs_setperm mysrvrMYDB4.vip -o root
su -- /opt/crs/product/111_ee_64/crs/bin/crs_setperm mysrvrMYDB4.vip -u user:oracle:r-x
/opt/crs/product/111_ee_64/crs/bin/crs_register -dir /opt/crs/product/111_ee_64/crs/crs/public/mysrvrMYDB4 mysrvrMYDB4.lsnrMYDB4
/opt/crs/product/111_ee_64/crs/bin/crs_register -dir /opt/crs/product/111_ee_64/crs/crs/public/mysrvrMYDB4 mysrvrMYDB4.dbMYDB4
/opt/crs/product/111_ee_64/crs/bin/crs_register -dir /opt/crs/product/111_ee_64/crs/crs/public/mysrvrMYDB4 mysrvrMYDB4.start
# start all (will start all resources using the .start resource)
  /opt/crs/product/111_ee_64/crs/bin/crs_start mysrvrMYDB4.start
# start single
/opt/crs/product/111_ee_64/crs/bin/crs_start mysrvrMYDB4.vip
/opt/crs/product/111_ee_64/crs/bin/crs_start mysrvrMYDB4.lsnrMYDB4
/opt/crs/product/111_ee_64/crs/bin/crs_start mysrvrMYDB4.dbMYDB4
# stop  all
 /opt/crs/product/111_ee_64/crs/bin/crs_stop mysrvrMYDB4.stop -f
# unregister 
/opt/crs/product/111_ee_64/crs/bin/crs_unregister mysrvrMYDB4.start
/opt/crs/product/111_ee_64/crs/bin/crs_unregister mysrvrMYDB4.dbMYDB4
/opt/crs/product/111_ee_64/crs/bin/crs_unregister mysrvrMYDB4.lsnrMYDB4
su -- /opt/crs/product/111_ee_64/crs/bin/crs_unregister mysrvrMYDB4.vip
/opt/crs/product/111_ee_64/crs/bin/crs_unregister mysrvrMYDB4.stop

 
act_listenerMYDB4.pl
#!/usr/bin/perl
#
$ORACLE_HOME="/opt/oracle/product/111_ee_64/db";
$ORA_LISTENER_NAME="LISTENER_MYDB4";
$_USR_ORA_LANG="/opt/oracle/product/111_ee_64/db";
$_USR_ORA_SRV="LISTENER_MYDB4";
$LD_LIBRARY_PATH="/opt/oracle/product/111_ee_64/db/lib";
$LD_LIBRARY_PATH_64="";
#  print "$ORACLE_HOME\n";
#  print "$ORA_LISTENER_NAME\n";
if ($#ARGV != 0 ) {
        print "usage: start stop check required \n";
exit;
}
$command = $ARGV[0];
# start listener
if ($command eq "start") {
        system ("
        ORACLE_HOME=$ORACLE_HOME
        export ORACLE_HOME
        ORA_LISTENER_NAME=$ORA_LISTENER_NAME
        export ORA_LISTENER_NAME
        LD_LIBRARY_PATH=$ORACLE_HOME/lib
        export LD_LIBRARY_PATH
        LD_LIBRARY_PATH_64=
        export LD_LIBRARY_PATH_64
        TNS_ADMIN=/var/opt/oracle
        export TNS_ADMIN
        $ORACLE_HOME/bin/lsnrctl start $ORA_LISTENER_NAME");
}
# stop listener
if ($command eq "stop") {
        system ("
        ORACLE_HOME=$ORACLE_HOME
        export ORACLE_HOME
        ORA_LISTENER_NAME=$ORA_LISTENER_NAME
        export ORA_LISTENER_NAME
        LD_LIBRARY_PATH=$ORACLE_HOME/lib
        export LD_LIBRARY_PATH
        LD_LIBRARY_PATH_64=
        export LD_LIBRARY_PATH_64
        TNS_ADMIN=/var/opt/oracle
        export TNS_ADMIN
        $ORACLE_HOME/bin/lsnrctl stop $ORA_LISTENER_NAME");
}
# check listener
if ($command eq "check") {
        check_listener();
        }
sub check_listener {
        my($check_proc_listener,$process_listener) = @_;
        $process_listener = "$ORACLE_HOME/bin/tnslsnr $ORA_LISTENER_NAME -inherit";
        $check_proc_listener = qx(ps -ef | grep -i  "tnslsnr $ORA_LISTENER_NAME" | grep -v grep | /bin/awk '{ print \$8 " " \$9  " -inherit" }' );
 print "SOLL $process_listener \n";
 print "IST  $check_proc_listener \n";
        chomp($check_proc_listener);
        if ($process_listener eq $check_proc_listener) {
                exit 0;
        } else {
                exit 1;
        }
}

act_resgroup_mysrvrMYDB4.pl
#!/usr/bin/perl
#
# act_resgroup.pl
# NOTES
# Edit the perl installation directory as appropriate.
#
# Place this file in <CLUSTERWARE_HOME>/crs/basicpub/
#
#
exit 0;

##act_dbMYDB4.pl
This perl script is the most important one, because in it we alter the HP OpenView environment. It prevents wrong monitoring (monitoring on a node that no longer holds the database because it failed over to a different node) and moves the alert file of the Oracle database to a different location, so that HP OpenView cannot keep reading old lines in the alert file.

#!/usr/bin/perl
# NAME
# act_db.pl - <one-line expansion of the name>
#
# DESCRIPTION
# This perl script is the action script for start / stop / check
# the Oracle Instance in a cold failover configuration.
#
# Place this file in <CLUSTERWARE_HOME>/crs/basicpub/
#
# NOTES
# Edit the perl installation directory as appropriate.
#
## Creating an array in Perl to hold the dateTime for a filename
my @now = localtime(time);
# rearrange the following to suit your stamping needs.
# it currently generates YYYYMMDDhhmmss
my $CURRENT_TIMESTAMP = sprintf("%04d%02d%02d%02d%02d%02d",
                        $now[5]+1900, $now[4]+1, $now[3],
                        $now[2],      $now[1],   $now[0]);
$ORACLE_HOME="/opt/oracle/product/111_ee_64/db";
$ORACLE_SID="MYDB4";
$LOWER_ORACLE_SID="MYDB4";
$USES_ASM="YES";
if ($#ARGV != 0 ) {
        print "usage: start stop check required \n";
exit;
}
$command = $ARGV[0];
# Database start stop check
# Start database
if ($command eq "start" ) {
        $MYRETASM = checkASM();
        if ($MYRETASM eq 1) {
                exit 1;
        }
        system ("
        ORACLE_SID=$ORACLE_SID
        export ORACLE_SID
        LOWER_ORACLE_SID=$LOWER_ORACLE_SID
        export LOWER_ORACLE_SID
        ORACLE_HOME=$ORACLE_HOME
        export ORACLE_HOME
        ORA_NLS10=$ORACLE_HOME/nls/data
        export ORA_NLS10
        export TNS_ADMIN=/var/opt/oracle
        export CURRENT_TIMESTAMP=$CURRENT_TIMESTAMP
        ### Move old alertfile away to prevent old errors seen by Openview
        mv /opt/oracle/$ORACLE_SID/diag/rdbms/$LOWER_ORACLE_SID/$ORACLE_SID/trace/alert_$ORACLE_SID.log /opt/oracle/$ORACLE_SID/admin/Arch/alert_${ORACLE_SID}.log.${CURRENT_TIMESTAMP}
        $ORACLE_HOME/bin/sqlplus /nolog <<EOF
        connect / as sysdba
        startup
        quit
        EOF" );
        ### DBSPI start collecting data and present current location ( no shared filesystems in place )
        system ("
        sudo su - <<EOF
        /var/opt/OV/bin/instrumentation/dbspicol ON ${ORACLE_SID}
        /bin/touch /var/opt/OV/log/OpC/PMON_oracle_${ORACLE_SID}.flag
EOF" );
#### Since we are in a new shell as ROOT, EOF needs to be in column 1.
        ###
        $MYRET = check();
        exit $MYRET;
        }
if ($command eq "stop" ) {
        ### DBSPI
        system ("
        sudo su - <<EOF
        /var/opt/OV/bin/instrumentation/dbspicol OFF $ORACLE_SID
        rm /var/opt/OV/log/OpC/PMON_oracle_${ORACLE_SID}.flag
EOF" );
        ###
        system ("
        sleep 5
        ORACLE_SID=$ORACLE_SID
        export ORACLE_SID
        ORACLE_HOME=$ORACLE_HOME
        export ORACLE_HOME
        ORA_NLS10=$ORACLE_HOME/nls/data
        export ORA_NLS10
        export TNS_ADMIN=/var/opt/oracle
        $ORACLE_HOME/bin/sqlplus /nolog <<EOF
        connect / as sysdba
        shutdown immediate
        quit
        EOF" );
        $MYRET = check();
        if ($MYRET eq 1) {
                exit 0;
                }
        else {
                exit 1;
                }
        }
# Clean database
if ($command eq "clean") {
        my $kill_proc = qx(ps -aef | grep -w ora_pmon_$ORACLE_SID | grep -v grep | awk '{print \$2}' | xargs kill -9);
        exit 0;
}
# Check database
if ($command eq "check" ) {
        $MYRET = check();
        exit $MYRET;
}
sub check {
        my($check_proc,$process) = @_;
        $process = "ora_pmon_$ORACLE_SID";
        $check_proc = qx(ps -ef | grep -w ora_pmon_$ORACLE_SID | grep -v grep | /bin/awk '{ print \$8 }' );
        chomp($check_proc);
        if ($process eq $check_proc) {
                $RET=0;
        } else {
                $RET=1;
        }
        return $RET;
}
sub checkASM {
        my($check_proc,$process) = @_;
        $process = "asm_pmon_+ASM";
        $check_proc = qx(ps -ef | grep asm_pmon_+ASM | grep -v grep | /bin/awk '{ print \$8 }' | sed "s/ASM1/ASM/g" | sed "s/ASM2/ASM/"  | sed "s/ASM3/ASM/g" );
        chomp($check_proc);
        if ($process eq $check_proc) {
                $RET=0;
        } else {
                print "ASM not present, sleeping 120 secs ..... \n";
                sleep 120;
                $check_proc = qx(ps -ef | grep asm_pmon_+ASM | grep -v grep | /bin/awk '{ print \$8 }'  | sed "s/ASM1/ASM/g" | sed "s/ASM2/ASM/" | sed "s/ASM3/ASM/g" );
                chomp($check_proc);
                if ($process eq $check_proc) {
                   $RET=0;
                } else {
                   $RET=1;
                }
        }
        return $RET;
}
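Both check subroutines reduce to the question "does a pmon process with the expected name exist?". A hedged shell equivalent, assuming procps pgrep (the SID default is a placeholder):

```shell
# Hedged sketch of the Perl check(): exit status 0 only when a pmon
# background process for the given SID is running.
check_pmon() {
    pgrep -x "ora_pmon_$1" >/dev/null 2>&1
}

if check_pmon "${ORACLE_SID:-MYDB4}"; then
    echo "RUNNING"
else
    echo "NOT RUNNING"
fi
```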

## cat mysrvrMYDB4.start.cap
NAME=mysrvrMYDB4.start
TYPE=application
ACTION_SCRIPT=/opt/crs/product/111_ee_64/crs/crs/public/mysrvrMYDB4/act_resgroup_mysrvrMYDB4.pl
ACTIVE_PLACEMENT=1
AUTO_START=restore
CHECK_INTERVAL=600
DESCRIPTION=mysrvrMYDB4.start
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=mysrvrar mysrvrbr mysrvrcr
OPTIONAL_RESOURCES=
PLACEMENT=favored
REQUIRED_RESOURCES=mysrvrMYDB4.lsnrMYDB4 mysrvrMYDB4.dbMYDB4
RESTART_ATTEMPTS=1
SCRIPT_TIMEOUT=60
START_TIMEOUT=0
STOP_TIMEOUT=0
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
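For reference, an 11.1-style profile like this is registered and started with the crs_* tools. A sketch of the workflow, not runnable without a live CRS stack (crs_register picks the .cap file up from the profile directory):

```shell
# Sketch only -- needs a live 11.1 Clusterware stack.
crs_register mysrvrMYDB4.start
crs_start mysrvrMYDB4.start
crs_stat -t mysrvrMYDB4.start
```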

## mysrvrMYDB4.stop.cap
NAME=mysrvrMYDB4.stop
TYPE=application
ACTION_SCRIPT=/opt/crs/product/111_ee_64/crs/crs/public/mysrvrMYDB4/act_resgroup_mysrvrMYDB4.pl
ACTIVE_PLACEMENT=1
AUTO_START=restore
CHECK_INTERVAL=60
DESCRIPTION=mysrvrMYDB4.stop
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=mysrvrar mysrvrbr mysrvrcr
OPTIONAL_RESOURCES=
PLACEMENT=favored
REQUIRED_RESOURCES=
RESTART_ATTEMPTS=1
SCRIPT_TIMEOUT=60
START_TIMEOUT=0
STOP_TIMEOUT=0
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=

##mysrvrMYDB4.vip.cap
Being able to fail over an IP address via the Clusterware is key to this setup: scheduled jobs such as backups connect over SSH to this address, so it must follow the resources to the active node.
NAME=mysrvrMYDB4.vip
TYPE=application
ACTION_SCRIPT=/opt/crs/product/111_ee_64/crs/bin/usrvip
ACTIVE_PLACEMENT=1
AUTO_START=restore
CHECK_INTERVAL=60
DESCRIPTION=mysrvrMYDB4.vip
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=mysrvrar mysrvrbr mysrvrcr
OPTIONAL_RESOURCES=
PLACEMENT=favored
REQUIRED_RESOURCES=mysrvrMYDB4.stop
RESTART_ATTEMPTS=1
SCRIPT_TIMEOUT=60
START_TIMEOUT=0
STOP_TIMEOUT=0
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=bond0
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=255.255.255.128
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=195.233.202.20
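Rather than hand-editing the .cap file, an application VIP like this can be generated with crs_profile and the shipped usrvip action script, then registered and given the right ownership (root owns the VIP; oracle may run it). A hedged sketch against a live 10.2/11.1 stack, using the interface, address, and netmask from the profile above:

```shell
# Sketch only -- run as root against a live CRS stack.
crs_profile -create mysrvrMYDB4.vip -t application \
    -a /opt/crs/product/111_ee_64/crs/bin/usrvip \
    -o oi=bond0,ov=195.233.202.20,on=255.255.255.128
crs_register mysrvrMYDB4.vip
crs_setperm mysrvrMYDB4.vip -o root
crs_setperm mysrvrMYDB4.vip -u user:oracle:r-x
```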

##mysrvrMYDB4.lsnrMYDB4.cap
NAME=mysrvrMYDB4.lsnrMYDB4
TYPE=application
ACTION_SCRIPT=/opt/crs/product/111_ee_64/crs/crs/public/mysrvrMYDB4/act_listenerMYDB4.pl
ACTIVE_PLACEMENT=1
AUTO_START=restore
CHECK_INTERVAL=20
DESCRIPTION=mysrvrMYDB4.lsnrMYDB4
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=mysrvrar mysrvrbr mysrvrcr
OPTIONAL_RESOURCES=
PLACEMENT=favored
REQUIRED_RESOURCES=mysrvrMYDB4.vip
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
START_TIMEOUT=0
STOP_TIMEOUT=0
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=/opt/oracle/product/111_ee_64/db
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=LISTENER_mysrvrMYDB4
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=

##mysrvrMYDB4.dbMYDB4.cap
NAME=mysrvrMYDB4.dbMYDB4
TYPE=application
ACTION_SCRIPT=/opt/crs/product/111_ee_64/crs/crs/public/mysrvrMYDB4/act_dbMYDB4.pl
ACTIVE_PLACEMENT=1
AUTO_START=restore
CHECK_INTERVAL=20
DESCRIPTION=mysrvrMYDB4.dbMYDB4
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=mysrvrar mysrvrbr mysrvrcr
OPTIONAL_RESOURCES=
PLACEMENT=favored
REQUIRED_RESOURCES=mysrvrMYDB4.stop
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=60
START_TIMEOUT=600
STOP_TIMEOUT=0
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=0
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=/opt/oracle/product/111_ee_64/db
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=mysrvrMYDB4
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=

Relocate resources:

crs_relocate MYDB6.start -f
or
crs_relocate MYDB7.start -f -c mysrvr22r
Note:  resources need to be ONLINE to do a successful relocate.
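Since only ONLINE resources relocate cleanly, a small wrapper can check the state first. A hedged sketch (the resource name is a placeholder; crs_stat and crs_relocate need a live CRS stack):

```shell
# Sketch only -- relocate the resource only when it is ONLINE.
RES=MYDB6.start
STATE=$(crs_stat "$RES" | awk -F= '/^STATE=/ {print $2}' | awk '{print $1}')
if [ "$STATE" = "ONLINE" ]; then
    crs_relocate "$RES" -f
else
    echo "$RES is in state '$STATE', not relocating"
fi
```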
 

Checking resources:

oracle@mysrvrcr:/opt/oracle/admin/tools [CRS]# crsctl status resource -t|grep MYDB4  -C 5

###  This should show the five application resources:
mysrvrMYDB4.dbMYDB4                 0/5    0    ONLINE       ONLINE on mysrvrbr
mysrvrMYDB4.lsnrMYDB4               0/5    0    ONLINE       ONLINE on mysrvrbr
mysrvrMYDB4.start                      0/1    0    ONLINE       ONLINE on mysrvrbr
mysrvrMYDB4.stop                       0/1    0    ONLINE       ONLINE on mysrvrbr
mysrvrMYDB4.vip                        0/1    0    ONLINE       ONLINE on mysrvrbr

###

Start all resources (on current node):

##                /opt/crs/product/11.2.0.2_a/crs/bin/crs_start MYDB6.start

Stop all resources (on the current node):

##                /opt/crs/product/11.2.0.2_a/crs/bin/crs_stop MYDB6.stop -f

Status UNKNOWN

### If a resource ends up with status UNKNOWN in the cluster, stop it with force, e.g.:

crs_stop mysrvrMYDB4.dbMYDB4 -f


Appendix. Working solution for Scenario 1: remove the resource and re-add it to CRS.
Tuesday, feeling brave; this is how I proceeded after reading a great blog post:
https://blogs.oracle.com/xpsoluxdb/entry/clusterware_11gr2_setting_up_an_activepassive_failover_configuration
###   So now the config looks like this:
### cat ora.mydb1.config
NAME=apcrs.mydb1.db
TYPE=cluster_resource
ACL=owner:oracle:rwx,pgrp:dba:rwx,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=/opt/crs/product/112_ee_64/crs/crs/public/haclusterMYDB1.sh
ACTIVE_PLACEMENT=1
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=10
DEGREE=1
DESCRIPTION=Active Passive resource for MYDB1 Db
ENABLED=1
HOSTING_MEMBERS=MYSRVR01HR MYSRVR02HR
LOAD=1
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=2
SCRIPT_TIMEOUT=60
START_DEPENDENCIES=hard(ora.MYDB1_DATA.dg,ora.MYDB1_FRA.dg,ora.MYDB1_REDO.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.MYDB1_DATA.dg,ora.MYDB1_FRA.dg,ora.MYDB1_REDO.dg)
START_TIMEOUT=600
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.MYDB1_DATA.dg,shutdown:ora.MYDB1_FRA.dg,shutdown:ora.MYDB1_REDO.dg)
STOP_TIMEOUT=600
UPTIME_THRESHOLD=1h
### Third day, feeling brave again.
I noticed that in the example the Oracle home (the CRS home) was available, so let's try that.
##  I also changed the script: removed the su to oracle, and removed setting the ORACLE_SID and environment separately in every case (it is exported once at the top).

#!/bin/bash
export ORACLE_SID=MYDB11
export ORACLE_HOME=/opt/oracle/product/112_ee_64/db
export PATH=/usr/local/bin:$ORACLE_HOME/bin:$PATH
export ORACLE_OWNER=oracle
case $1 in
'start')
$ORACLE_HOME/bin/sqlplus /nolog<<EOF
conn / as sysdba
startup
exit
EOF
RET=0
;;
'stop')
$ORACLE_HOME/bin/sqlplus /nolog<<EOF
conn / as sysdba
shutdown immediate
exit
EOF
RET=0
;;
'clean')
$ORACLE_HOME/bin/sqlplus /nolog <<EOF
conn / as sysdba
shutdown abort
exit
EOF
RET=0
;;
'check')
# check for the existence of the smon process for $ORACLE_SID
# this check could be improved, but was kept short on purpose
found=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`
if [ $found = 0 ]; then
RET=1
else
RET=0
fi
;;
*)
RET=0
;;
esac
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi
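Clusterware only evaluates the exit code of an action script: 0 means success, anything else failure. The dispatch above can be smoke-tested without a database by stubbing out the sqlplus calls; a hedged sketch:

```shell
# Stub of the action-script dispatch, illustrating the exit-code
# contract CRS relies on (0 = OK, non-zero = failure). No real sqlplus.
action() {
    case "$1" in
        start|stop|clean) echo "would run sqlplus for: $1"; return 0 ;;
        check) pgrep -f "smon_${ORACLE_SID:-MYDB11}" >/dev/null 2>&1 ;;
        *) return 0 ;;
    esac
}

action start && echo "start reported OK"
```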
###   Now I remove the configuration and try again:
crsctl stop resource app.mydb1.db
and removed it:
crsctl delete resource app.mydb1.db
###  status check.
crsctl status resource -t
##   as root
cd /opt/crs/product/112_ee_64/crs/bin
crsctl add resource app.mydb1.db -type cluster_resource -file /opt/crs/product/112_ee_64/crs/crs/public/ora.mydb1.config
## pffff
MYSRVR01hr:root:/opt/crs/product/112_ee_64/crs/bin # crsctl add resource app.mydb1.db -type cluster_resource -file /opt/crs/product/112_ee_64/crs/crs/public/ora.mydb1.config
CRS-0160: The attribute 'ORACLE_HOME' is not supported in this resource type.
CRS-4000: Command Add failed, or completed with errors.
##  One more try after removing and re-adding the resource:
crsctl start resource app.mydb1.db
and it worked. Happy me!
#
# now as oracle:
crsctl relocate  resource app.mydb1.db
oracle@MYSRVR01hr:/opt/crs/product/112_ee_64/crs/bin [CRS]# crsctl relocate  resource app.mydb1.db
CRS-2673: Attempting to stop 'app.mydb1.db' on 'MYSRVR02hr'
CRS-2677: Stop of 'app.mydb1.db' on 'MYSRVR02hr' succeeded
CRS-2672: Attempting to start 'app.mydb1.db' on 'MYSRVR01hr'
CRS-2676: Start of 'app.mydb1.db' on 'MYSRVR01hr' succeeded
Awesome, happy DBA. :)
###  Wrap-up: these settings did the magic.
Configuration looks like this:
cat ora.mydb1.config
NAME=app.mydb1.db
TYPE=cluster_resource
ACL=owner:oracle:rwx,pgrp:dba:rwx,other::r--
ACTION_SCRIPT=/opt/crs/product/112_ee_64/crs/crs/public/haclusterMYDB1.sh
ACTIVE_PLACEMENT=0
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=10
DEGREE=1
DESCRIPTION=Resource MYDB1 Db
ENABLED=1
HOSTING_MEMBERS=MYSRVR01HR MYSRVR02HR
LOAD=1
LOGGING_LEVEL=1
PLACEMENT=restricted
RESTART_ATTEMPTS=1
START_DEPENDENCIES=hard(ora.MYDB1_DATA.dg,ora.MYDB1_FRA.dg,ora.MYDB1_REDO.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.MYDB1_DATA.dg,ora.MYDB1_FRA.dg,ora.MYDB1_REDO.dg)
START_TIMEOUT=600
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.MYDB1_DATA.dg,shutdown:ora.MYDB1_FRA.dg,shutdown:ora.MYDB1_REDO.dg)
STOP_TIMEOUT=600
UPTIME_THRESHOLD=1h
### And the action script looks like this.
### After removing the su - oracle that I originally had in there, it worked!
cat haclusterMYDB1.sh
#!/bin/bash
export ORACLE_SID=MYDB11
export ORACLE_HOME=/opt/oracle/product/112_ee_64/db
export PATH=/usr/local/bin:$ORACLE_HOME/bin:$PATH
export ORACLE_OWNER=oracle
case $1 in
'start')
$ORACLE_HOME/bin/sqlplus /nolog<<EOF
conn / as sysdba
startup
exit
EOF
RET=0
;;
'stop')
$ORACLE_HOME/bin/sqlplus /nolog<<EOF
conn / as sysdba
shutdown immediate
exit
EOF
RET=0
;;
'clean')
$ORACLE_HOME/bin/sqlplus /nolog <<EOF
conn / as sysdba
shutdown abort
exit
EOF
RET=0
;;
'check')
# check for the existence of the smon process for $ORACLE_SID
# this check could be improved, but was kept short on purpose
found=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`
if [ $found = 0 ]; then
RET=1
else
RET=0
fi
;;
*)
RET=0
;;
esac
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi
