Character-set change in an Oracle Database

Introduction

Recently a request came in to investigate options to change the character set for a number of core databases. The goal would be to move from we8iso8859p15 to al32utf8. Below are some thoughts and things to remember if you are really heading in that direction.

A management summary of this article would be: yes, it is technically possible to move the character set of a database from we8iso8859p15 to al32utf8. But this can only be achieved after careful checking first, and if those checks show that actions are needed, you will need a team with application and data-model knowledge to look into the reports generated during the inventory part of the scenario. With the knowledge gained from those reports, these application and data-model aware people will then have to fix any issues in the source database, after which another analysis (scan) is run before doing the conversion. Oh, and of course: test, test, test, and have a backup in place before doing this on your live production boxes. The Oracle software tries to protect you by allowing a maximum of 7 days between analysis and conversion, and that makes sense given the fact that the data will of course continue to change during and after the first analysis.

Details

When setting up a new database there is of course a lot to be considered, such as sizing of the database (sizing at the start and, say, after 12 months) and the users and their privileges, but there is one extra very important topic to think about well (thinking about it twice would be even better), and that is the character set. Plain and simple: if the character set is not chosen correctly, there might be an issue converting it (if that is possible at all) to the one desired at a later point in time. As a rule of thumb one could even say that if the character set is wrong, the data present in the database needs to be extracted (with tools like Data Pump), the database needs to be recreated with the new character set, and the data can then be loaded (inserted) again. Of course this is only to be done after testing such a scenario on a pre-production environment and only after careful analysis of the results in the reports to check the data before moving the scenario to production.

There is one exception to the above scenario in which you always have to recreate the database to get to the desired (new) character set. If the new character set is a superset of the old one (it holds the same characters, and at the same positions in the character table), then an in-place conversion is possible. You still need to do a proper analysis of the data present in the database before starting such a conversion: plan a scenario in pre-production to do the conversion, analyse the results thoroughly, and then run the same scenario on production (run the inventory and do an impact analysis first, make a backup, do the conversion, and do a lot of post checks once the conversion has completed).
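Before anything else it helps to confirm what the source database currently uses; a minimal check against the standard dictionary views could look like this:

sqlplus / as sysdba
select parameter, value
from nls_database_parameters
where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');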

Before Oracle 12c two tools were needed to do the conversion. The csscan tool does the inventory work and presents the results in three files describing the consequences of such a conversion. If data would be lost, you have to fix those issues manually and rerun the csscan tool. Once that has been done, the csalter tool needs to run against a restricted database (only one instance up in case of RAC (all others need to be down), only one session logged in as sys, and the database in restricted mode). Of course this means that the csalter step will require downtime of your environment.
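Purely as an illustration of that restricted-mode requirement (and not a substitute for the MOS notes referenced further down), the csalter step roughly looks like this once csscan reports no remaining issues:

sqlplus / as sysdba
shutdown immediate;
startup restrict;
@?/rdbms/admin/csalter.plb
shutdown immediate;
startup;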

In Oracle 12c csscan and csalter are no longer supported and you will have to use the DMU tool (Database Migration Assistant for Unicode) instead.

Csscan and Csalter

Preparations:

### These MOS notes have been used as a baseline for the csscan:

Installing and configuring Csscan in 10g and 11g (Database Character Set Scanner) (Doc ID 745809.1)

Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) in 8i, 9i , 10g and 11g (Doc ID 260192.1)

Csscan Output Explained (Doc ID 444701.1)

As a best practice, make a copy of the script before altering it.

### In the existing (for example 11g) ORACLE_HOME:
cp $ORACLE_HOME/rdbms/admin/csminst.sql $ORACLE_HOME/rdbms/admin/csminst.sql.old

### As a best practice, alter the script so that the csmig user points to an existing tablespace in the database with enough room. (Hint: you might even consider putting the designated tablespace on autoextend until the analyses have completed; a sketch of that follows below.)

alter user csmig default tablespace nomonitor quota unlimited on nomonitor;
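If you follow that autoextend hint, note that autoextend is a datafile attribute; the datafile name and maxsize below are purely hypothetical and only show the idea:

## hypothetical datafile of the designated tablespace
alter database datafile '/oradata/MYDB/nomonitor01.dbf' autoextend on maxsize 32g;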

### Then run the adapted script:

sqlplus /nolog
conn / as sysdba
set TERMOUT ON
set ECHO ON
spool csminst.log
@?/rdbms/admin/csminst.sql

### Make sure you perform the following changes to the user:

sqlplus /nolog
conn / as sysdba;
alter user csmig identified by ces10;
alter user csmig account unlock ;
grant create session to CSMIG;
exit;

### All set for the analysis, so at the operating system level run:

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
csscan SYSTEM FULL=y TOCHAR=AL32UTF8 ARRAY=2048000 PROCESS=32 CAPTURE=N LOG=$ORACLE_SID.log
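Once the scan completes it writes three result files based on the LOG parameter (a .out file with the screen output, a .txt file with the summary report and a .err file with the individual exceptions, as explained in Doc ID 444701.1), so a quick way to start the review is:

ls -l $ORACLE_SID.log.*
## summary report
view $ORACLE_SID.log.txt
## individual exceptions (lossy / truncation / convertible data)
view $ORACLE_SID.log.err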

 

### Important to  know about csscan and csalter is that they are desupported in 12c.

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=549806046730391&id=1418321.1&_afrWindowMode=0&_adf.ctrl-state=sm7b61jd7_375

For a scenario in 11G ( or before ) :

http://dbaonline.org/newsletter/Newsletter_200908.pdf

If your source database is big (one of the environments where I had to run it was approximately 8TB), the csscan (11g environment) will take time; in my case it ran for some 13 hours. If you would like to keep an eye on it, v$session_longops will be your friend:

sqlplus / as sysdba
SET pages 1000
SELECT target,
TO_CHAR(start_time,'HH24:MI:SS - DD-MM-YY'),
time_remaining,
sofar,
totalwork,
sid,
serial#,
opname
FROM v$session_longops
WHERE sid IN
(SELECT sid FROM v$session WHERE upper(program) LIKE 'CSSCAN%')
AND sofar < totalwork
ORDER BY start_time
/

DMU tool

 

### Dmu tool :

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=549848776806800&id=1272374.1&_afrWindowMode=0&_adf.ctrl-state=sm7b61jd7_429

## Known issues with dmu:

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=549881125542061&id=2018250.1&_afrWindowMode=0&_adf.ctrl-state=sm7b61jd7_483

### Requirements for the DMU tool in Oracle (DMU can be used as of 11.2.0.3 and up)

### You need to run this package in the database you want to analyse:

@?/rdbms/admin/prvtdumi.plb

### Make sure you have a password file in place. My first attempt failed: when trying to connect to the database in the DMU tool I kept getting "invalid username or password". After some time I realized that a password file might be mandatory. If a password file is not present, this is the way to create one:

orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=manager entries=16 force=y

### If the password file seems to be in place, check in the database with the query below (you should at least see SYS listed):

SQL> select * FROM v$pwfile_users;

### The DMU tool will need a user with sysdba privileges.

### Before starting, the environment needs to be set up:

export JAVA_HOME=$ORACLE_HOME/jdk
$ORACLE_HOME/dmu/dmu.sh

 

As always, a pleasure to share thoughts and approaches. Happy reading,

Mathijs.

To Tfa or not to Tfa

Introduction

Writing a few notes on the TFA tool on Oracle Real Application Clusters to share my current experiences on this topic. When you create a Tar (service request) at Oracle for an issue on one of your clusters, it might very well be that Support asks you to upload TFA reports. In such a case question number one would be: do I have that tool installed in the software stack of the clusterware? Well, a first comfort is that as of Oracle 11.2.0.4 this tool is present as part of the standard cluster stack. But... (sometimes I wonder why there always has to be a but). In my specific case Grid Infrastructure is 11.2.0.3, so first I had to check whether the tool was installed as an add-on to the cluster, and second of course whether it was (and is) up and running, whether it was able to survive a node reboot, and so on.

Below you will find some notes on the how and why of  this tool.

On the why you should use it part:

Oracle Trace File Analyzer White Paper

Okay, that does make a lot of sense, since the tool not only helps us gather various kinds of information from the log files on all the cluster nodes, but is also able to filter that information to hold only data from, let's say, 4 hours before the incident till 1 hour after the incident.

On the how to part:

As mentioned, since the cluster is using Grid Infrastructure 11.2.0.3 it was necessary to check whether the tool was installed and running. The details below focus on exactly that check. Should you need more on the installation process: Oracle Grid Infrastructure Trace File Analyzer Installation.

TFA summary of commands, which all have to run as the mighty ROOT:

## checking actions:
cd /opt/crs/product/tfa/bin/
tfactl print actions
## When all is running and you need to collect data as ROOT:
./tfactl diagcollect -all -node all
## which hosts are being monitored:
./tfactl print hosts
## Information with regard to the configuration:
ls -ltr /etc/init.d/init.tfa

-rwxr-xr-x 1 root root 11724 Aug 9 2016 /etc/init.d/init.tfa
## Starting the Tfa Daemon(s):
MYSRVR01hr:root:/root # /etc/init.d/init.tfa start
or 
MYSRVR01hr:root:/opt/crs/product/tfa/bin # ./tfactl start
## enable – disable autostart at node reboot: 
MYSRVR01hr:root:/opt/crs/product/tfa/bin # ./tfactl enable
## show status: 
MYSRVR01hr:root:/opt/crs/product/tfa/bin # ./tfactl print status
## show config:
MYSRVR01hr:root:/opt/crs/product/tfa/bin # ./tfactl print config
##show directories:
MYSRVR01hr:root:/opt/crs/product/tfa/bin # ./tfactl print directories
## collect data for example on all nodes:
# ./tfactl diagcollect -all -node all
or
./tfactl diagcollect -from "MMM/dd/yyyy hh:mm:ss" -to "MMM/dd/yyyy hh:mm:ss"
./tfactl diagcollect -from "Jun/30/2017 09:15:00" -to "Jun/30/2017 12:00:00"
### Examples starting tfa 
MYSRVR01hr:root:/root # /etc/init.d/init.tfa start
Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . . 
. . . . . 
Successfully started TFA Process..
. . . . . 
TFA Started and listening for commands
## Example enabling the start of the daemons at server / node reboot:
MYSRVR01hr:root:/opt/crs/product/tfa/bin # ./tfactl enable
TFA autostart has been enabled ..
## Example status
MYSRVR01hr:root:/opt/crs/product/tfa/bin # ./tfactl print status

.------------+---------------+-------+------+------------+----------------------+------------------.
| Host       | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+------------+---------------+-------+------+------------+----------------------+------------------+
| MYSRVR01hr | RUNNING       | 3608  | 5000 | 12.1.2.5.0 | 12125020150615061618 | COMPLETE         |
| MYSRVR02hr | RUNNING       | 20057 | 5000 | 12.1.2.5.0 | 12125020150615061618 | COMPLETE         |
'------------+---------------+-------+------+------------+----------------------+------------------'
## Example showing current configuration:

MYSRVR01hr:root:/opt/crs/product/tfa/bin # ./tfactl print config
.----------------------------------------------------------.
| MYSRVR02hr                                               |
+---------------------------------------------+------------+
| Configuration Parameter | Value |
+---------------------------------------------+------------+
| TFA version | 12.1.2.5.0 |
| Automatic diagnostic collection | OFF |
| Trimming of files during diagcollection | ON |
| Repository current size (MB) | 466 |
| Repository maximum size (MB) | 1271 |
| Inventory Trace level | 1 |
| Collection Trace level | 1 |
| Scan Trace level | 1 |
| Other Trace level | 1 |
| Max Size of TFA Log (MB) | 50 |
| Max Number of TFA Logs | 10 |
| Max Size of Core File (MB) | 20 |
| Max Collection Size of Core Files (MB) | 200 |
| Automatic Purging | ON |
| Minimum Age of Collections to Purge (Hours) | 12 |
'---------------------------------------------+------------'

.----------------------------------------------------------.
| MYSRVR01hr |
+---------------------------------------------+------------+
| Configuration Parameter | Value |
+---------------------------------------------+------------+
| TFA version | 12.1.2.5.0 |
| Automatic diagnostic collection | OFF |
| Trimming of files during diagcollection | ON |
| Repository current size (MB) | 473 |
| Repository maximum size (MB) | 1170 |
| Inventory Trace level | 1 |
| Collection Trace level | 1 |
| Scan Trace level | 1 |
| Other Trace level | 1 |
| Max Size of TFA Log (MB) | 50 |
| Max Number of TFA Logs | 10 |
| Max Size of Core File (MB) | 20 |
| Max Collection Size of Core Files (MB) | 200 |
| Automatic Purging | ON |
| Minimum Age of Collections to Purge (Hours) | 12 |
'---------------------------------------------+------------'

Happy reading,

Mathijs

 

Altering the Hearbeat in Oracle Rac environment.

Introduction

When being asked for part of the job description of a DBA I would like to take a moment, smile and reply: to serve and to protect the cluster, the databases and of course the data. In the matter at hand this means that this mission/assignment is all about increasing high availability and improving the functionality of the cluster interconnect between nodes, and all that for one price: doubling your cluster interconnect IPs on the various layers.

And to be honest this assignment is even more interesting because, after all, how often is there a need or a challenge to alter IP addresses once you have set up the cluster? The only valid occasions I could think of would be an action like lifting and shifting the servers to other rooms, or, as in this case, because we simply want to improve availability.

Some days before the action was scheduled, I looked into the matter of the required dedicated IPs together with colleagues from the team. In /etc/hosts, being the source for that, we looked for IPs that had hb in the alias, following the naming convention that is standard on OS level. This information was used to set up the IPs on the OS level. Once that task was completed, a first and very important test had to be: can you ping these new and dedicated IPs from every node that is part of the cluster (see the ping loops further down)? And since this is a happy-flow scenario, that was of course what happened. During the maintenance window itself the detailed steps seen below have been performed to make these addresses known / usable on the cluster layer in the Grid Infrastructure.

Detailed Plan

In this scenario we are using a 4-node RAC cluster on Red Hat with 11.2.0.4 Grid Infrastructure. Looking at the OS level, this is what we found already present in the hosts file. And even better, these IPs were all available and not in use.

grep -i hb /etc/hosts
10.124.10.75 mysrvrahr-hb1.dc.nl mysrvrahr-hb1
10.124.11.75 mysrvrahr-hb2.dc.nl mysrvrahr-hb2
10.124.10.76 mysrvrbhr-hb1.dc.nl mysrvrbhr-hb1
10.124.11.76 mysrvrbhr-hb2.dc.nl mysrvrbhr-hb2
10.124.10.77 mysrvrchr-hb1.dc.nl mysrvrchr-hb1
10.124.11.77 mysrvrchr-hb2.dc.nl mysrvrchr-hb2
10.124.10.78 mysrvrdhr-hb1.dc.nl mysrvrdhr-hb1
10.124.11.78 mysrvrdhr-hb2.dc.nl mysrvrdhr-hb2

The steps below, based on a great MOS note, have been followed to complete the tasks needed to make the Grid Infrastructure (cluster) aware of the new IPs. The scenario runs through a number of steps to be well prepared, but of course also to be on the safe side before and during the changes on the cluster layer. It is like, hmm, paying respect and being brave but cautious.

Preparation steps:
As of 11.2 Grid Infrastructure, the private network configuration is not only stored in OCR but also in the gpnp profile. Documentation was very clear on this: If the private network is not available or its definition is incorrect, the CRSD process will not start and any subsequent changes to the OCR will be impossible.

  • Therefore care needs to be taken when making modifications to the configuration of the private network.
  • It is important to perform the changes in the correct order.

Note that manual modification of gpnp profile is not supported so it is best to stick to proper actions and not go into hacking mode!

So let’s take a backup of profile.xml on all cluster nodes before proceeding:

As the grid user (in my case the oracle user), move to the correct directory (cd $GRID_HOME/gpnp/<hostname>/profiles/peer/).

cd /app/oracle/product/11.2.0/grid/gpnp/mysrvrahr/profiles/peer 
cd /app/oracle/product/11.2.0/grid/gpnp/mysrvrbhr/profiles/peer 
cd /app/oracle/product/11.2.0/grid/gpnp/mysrvrchr/profiles/peer 
cd /app/oracle/product/11.2.0/grid/gpnp/mysrvrdhr/profiles/peer 

During startup of the cluster Oracle relies on this very important xml file
for specific data like the spfile, diskgroups and of course the IPs.

cp -p profile.xml profile.xml.bk
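Since the backup only protects the file on disk, it can be reassuring to also dump the profile the cluster is actually using; gpnptool offers a read-only view of it (do not edit anything, as stated above):

## read-only dump of the gpnp profile currently in use
/app/oracle/product/11.2.0/grid/bin/gpnptool get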

Ensure Oracle Clusterware is running on ALL nodes in the cluster and save the current status of the resources. (Better safe than sorry, and make sure you know about the health of the cluster and its resources as they have been defined.) So do check the cluster and save the current status of the resources in a file as a pre-change image.

/app/oracle/product/11.2.0/grid/bin/crsctl check cluster -all 
/app/oracle/product/11.2.0/grid/bin/crsctl status resource -t>/tmp/beforeNewIps.lst

As the grid user (in my case the oracle user): get the existing information.

showing which interfaces are defined in the cluster.
##below you will see that the current (single) cluster interconnect is set up at 64.18.112.208

/app/oracle/product/11.2.0/grid/bin/oifcfg getif
bond1 64.18.112.208  global  cluster_interconnect
bond0  195.233.190.64  global  public

The iflist command shows the network information known at the OS level, listing all (or specific) interfaces and their subnets. The interfaces / subnet addresses can be identified, for the eth devices specifically, with:

/app/oracle/product/11.2.0/grid/bin/oifcfg iflist|grep -i eth|sort 
64.18.32.0 eth0
10.124.10.0 eth2
10.124.11.0 eth6

or

## check  interfaces / subnets in general:
 /app/oracle/product/11.2.0/grid/bin/oifcfg iflist|sort

Since we now have a good picture of the status of the cluster, and since we know more about the IPs being used (oifcfg getif) and the IPs present on the system (oifcfg iflist), everything is set to add the new cluster_interconnect information. As you can see, this defines both the eth2 and the eth6 address, and with the -global parameter the information is shared across the complete cluster on all nodes.

/app/oracle/product/11.2.0/grid/bin/oifcfg setif -global eth2/10.124.10.0:cluster_interconnect 
/app/oracle/product/11.2.0/grid/bin/oifcfg setif -global eth6/10.124.11.0:cluster_interconnect

Of course there cannot be a change without verifying it, so I checked on all nodes with the command below.

/app/oracle/product/11.2.0/grid/bin/oifcfg getif  

Since we are using 11gR2 Grid Infrastructure, the steps below are to be followed now: shut down Oracle Clusterware on all nodes and disable Oracle Clusterware as root.

Action is to be performed as the root user: 
sudo su -  
/app/oracle/product/11.2.0/grid/bin/crsctl stop crs 
/app/oracle/product/11.2.0/grid/bin/crsctl disable crs

In this specific scenario my Linux brothers in arms had already made the required network configuration change at OS level, and that great job showed up in the oifcfg iflist output. They made sure that the new interfaces were available on all nodes after their change.

## Check: ping the interfaces on all nodes (script kindly provided by a Linux team member).
for x in 10 11;do for xx in 75 76 77 78;do ping -c2 10.124.${x}.${xx}|egrep 'icmp_seq|transmitted';done;echo;done 
for x in a b c d; do for xx in 1 2;do ping -c2 mysrvr${x}hr-hb$xx|egrep 'icmp_seq|transmitted';done;echo;done 

Well, all went well and has been checked, so it is time to restart Oracle Clusterware and, once completed, enable Oracle Clusterware again.

On all nodes in the cluster:

## as root user: 
sudo su -  
/app/oracle/product/11.2.0/grid/bin/crsctl start crs

Seeing = believing in this matter, so after some time check:

/app/oracle/product/11.2.0/grid/bin/crsctl check cluster -all 

In the step below we are checking the status of the resources in the cluster again and adding that information to  a file. This “post” operation file is then being used to compare the status of the cluster resources before and after.

/app/oracle/product/11.2.0/grid/bin/crsctl status resource -t>/tmp/afterNewIps.lst 
sdiff /tmp/afterNewIps.lst /tmp/beforeNewIps.lst

This compare showed me that a 10g RAC database resource and its services needed my attention, so via the cluster commands I checked and observed their status after starting them with the srvctl command as the oracle user. Once completed I ran another check as described, and, happy me, all resources in the post-status file were in a similar status (online online) as in the pre-status file.
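For completeness, checking and starting such a database resource and its services was nothing more than standard srvctl calls along these lines (the database name MYDB10G is just a placeholder):

## as the oracle user
srvctl status database -d MYDB10G -v
srvctl start database -d MYDB10G
srvctl start service -d MYDB10G
srvctl status database -d MYDB10G -v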

as root user: 
sudo su -  
/app/oracle/product/11.2.0/grid/bin/crsctl enable crs

Time to wrap up this scenario. As part of housekeeping remove the old interface:

/app/oracle/product/11.2.0/grid/bin/oifcfg delif -global bond1/64.18.112.208:cluster_interconnect

Verified the environment one more time.

/app/oracle/product/11.2.0/grid/bin/oifcfg getif

The clusterware had of course already proven itself, but I checked the databases and listeners as a last sanity check, and that completed the tasks for this time.

Time to inform Apps team that they can perform their sanity checks and start the applications again.

Happy reading and till next Time.

Mathijs

Upgrade to 12C GridInfra lessons learned

Introduction:

Not sure if it was a word from a wise DBA or just from a fortune cookie (it might even have been from a Pink Panther movie), but it said: always expect the unexpected, and as an add-on, success just loves preparation.

This week one of my tasks was to upgrade a 4-node Oracle RAC cluster from 11.2.0.4 to 12c (12.1.0.2.0) Grid Infrastructure. And even though I came well prepared (see also the detailed other blog for that: https://mathijsbruggink.com/2017/05/01/upgrading-11g-gridinfra-to-12c-in-linux/), several small surprises occurred, which will be used as lessons learned in upcoming upgrades of the Grid Infrastructure. I would also like to offer some timeline with regard to how long the upgrade process really took.

Lessons learned:

  • During the preparations I needed to order extra disks for ASM storage for the Grid Infrastructure management repository (GIMR). When I started the runInstaller as a first check whether all was well prepared, I noticed that the installer software is indeed most likely looking for a diskgroup called +OCR or +VOTING. This could be a trap if you had not extended one of them (but instead a +GRID diskgroup). So when preparing, look for either OCR or VOTING (best both if present) to add extra disks to (and have some disks spare).
  • During the start of the maintenance window the Linux colleague mentioned that he would have to stop the Hyperion services. This activity took some 45 minutes of the change window. I will have to find out whether this was a justified claim to stop those services, and will need to add an extra step to the pre-checks to identify other services and daemons running on the cluster that might be impacted when doing an upgrade.
  • Purpose of rootupgrade.sh: after the installation part via the runInstaller completes, the upgrade part commences. rootupgrade.sh will perform the actual ASM upgrade and will configure the OLR (local registry), amongst other things.

Timelines:

Every change on a test or production environment has to come with a plan that includes an estimate of how long the change will take. First and most important, of course, choose the strategy: will a rolling window be used (thus minimizing impact since at least one node will stay up), thinking of a kind of batch approach where the first batch holds the first node, the second batch holds nodes 2 and 3 in my 4-node example, and the last batch holds the last node.

Start of Change Window : 20:00 CET ( 6:00 UTC ) .
According to Linux expert Hyperion services needed to be stopped before we could continue.

Start of installation: 20:45 CET.
Started the runInstaller on the first node. Software was deployed to first node and all the nodes in the cluster (4 Node Rac).

Upgrade part of the existing 11.2.0.4 GridInfra structure:
21:30 – 21:58: on the first node (MYSRVR09hr) the rootupgrade.sh was started. (I used the manual upgrade, still a bit, hmm, unwilling to leave it all to the automated option; this means setting up a root session on the first node and running: /app/grid/product/12102/grid/rootupgrade.sh).

In the runInstaller it was offered to automate this and run the rootupgrade.sh in parallel on node number 2 and 3. But to me it felt better to open a terminal session as root on each server and run the script in those separate windows myself.

22:06 – 22:13 on MYSRVR10hr : /app/grid/product/12102/grid/rootupgrade.sh
22:06 – 22:24 on MYSRVR11hr : /app/grid/product/12102/grid/rootupgrade.sh

On the last node MYSRVR12hr:
22:28 – 22:48 /app/grid/product/12102/grid/rootupgrade.sh

After that the installer continued with the Grid Infrastructure management repository (GIMR) database, and once that completed I ran a number of sanity checks in the cluster:

22:50 – 23:55
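The sanity checks themselves were nothing fancy; roughly along these lines, using the new 12c grid home mentioned elsewhere in this post:

/app/grid/product/12102/grid/bin/crsctl check cluster -all
/app/grid/product/12102/grid/bin/crsctl query crs activeversion
/app/grid/product/12102/grid/bin/crsctl status resource -t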

At 23:59:59 I reported: mission completed.

Happy reading and till next time,

Mathijs

 

 

Gimr expanding ocr / voting Diskgroup

Introduction:

For a 12c upgrade project I have been asked to run all the preparations and investigate the requirements to move from 11.2 Grid Infrastructure to 12c. While reading and preparing it became clear that the Grid Infrastructure management repository (GIMR) database has become mandatory in Oracle GI 12.1.0.2. According to documentation and information from various colleagues on the web, its data files will be created in the same diskgroup as OCR or voting. (Average growth per day per node = approximately 750MB, so a 4-node cluster with the default retention of 3 days would lead to approximately 9GB. Note that the retention can be changed.)

Until Oracle 12c all clusters here have been set up with a minimal OCR / voting diskgroup, for the mere reason that they were only supposed to hold the Oracle Cluster Registry and the voting disk(s). In this case that means that the +GRID diskgroup is only 2GB in size. But as always, with changing releases come changing times (or was that the other way around). Below you will find the steps I followed to expand the OCR / voting diskgroup (in my scenario called +GRID) in order to be able to host the GIMR.
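A quick way to see that the existing diskgroup could not host that is to check its size and free space in ASM:

sqlplus / as sysasm
select name, type, total_mb, free_mb
from v$asm_diskgroup
where name = 'GRID';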

Claiming extra ASM storage

As a first step to be able to host the GIMR it was necessary to get extra ASM storage from the storage / Linux colleagues. For ASM I requested 6 * 8 GB shared ASM disks (meaning visible on all nodes of the cluster, mysrvr09hr – 12hr in my case) to be added to the existing +GRID diskgroup. The +GRID diskgroup was created in the past with normal redundancy, with three failgroups on 3 disks. For now I only used 3 of the new disks (thus saving something for a rainy day in case more storage is needed).

/dev/mapper/asm-vote04 GRID_0014 8 MEMBER ONLINE CACHED NORMAL
/dev/mapper/asm-vote05 GRID_0015 8 MEMBER ONLINE CACHED NORMAL
/dev/mapper/asm-vote06 GRID_0016 8 MEMBER ONLINE CACHED NORMAL

Details:

When adding disks to the +GRID diskgroup, which in my case is holding the OCR and voting information, some extra preparation is needed because the goal is to add the new disks to the existing failgroups.

conn / as sysasm
set lines 300
--
alter system set asm_power_limit = 0;
-- Note: use the same failgroups as the existing disks here! And add them in one
-- command.
alter DISKGROUP GRID add 
 FAILGROUP GRID_0000 disk '/dev/mapper/asm-vote04' name GRID_0014
 FAILGROUP GRID_0001 disk '/dev/mapper/asm-vote05' name GRID_0015
 FAILGROUP GRID_0002 disk '/dev/mapper/asm-vote06' name GRID_0016
;
--
alter system set asm_power_limit = 1;
alter DISKGROUP GRID rebalance power 3;
--
select * from gv$asm_operation order by 1,2,3;
--
col name format a15;
col path format a25;
col failgroup format a20;
--
select dg.name, d.path, d.failgroup, d.failgroup_type,dg.VOTING_FILES
from v$asm_diskgroup dg, v$asm_disk d
where dg.group_number = d.group_number and dg.name = 'GRID'
order by dg.name, d.path, d.failgroup;

Once the add step had completed, the information below was displayed, with 6 disks which were all holding OCR / voting info after the rebalance:

NAME PATH FAILGROUP FAILGROUP_TYPE VOTING_FILES 
--------------- ------------------------- -------------------- ------- - 
GRID /dev/mapper/asm-vote01 GRID_0000 REGULAR Y 
GRID /dev/mapper/asm-vote02 GRID_0001 REGULAR Y 
GRID /dev/mapper/asm-vote03 GRID_0002 REGULAR Y 
GRID /dev/mapper/asm-vote04 GRID_0000 REGULAR Y 
GRID /dev/mapper/asm-vote05 GRID_0001 REGULAR Y 
GRID /dev/mapper/asm-vote06 GRID_0002 REGULAR Y

As always seeing is believing in Oracle so Let’s check this:

crsctl query css votedisk

## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 19e8fcfc4ab24f73bf301473eda5b98e (/dev/mapper/asm-vote01) [GRID]
 2. ONLINE 2da0b18a78cb4f1dbf0596f9fdbd13e2 (/dev/mapper/asm-vote02) [GRID]
 3. ONLINE 6d670887aa874f6cbf8b177acae2ea29 (/dev/mapper/asm-vote03) [GRID]

Hmm, okay, so this means that the cluster still sees the three old voting disks on the first three original disks. So let's make time for part 2: dropping the old disks.

# 
conn / as sysasm
set lines 300
--
alter system set asm_power_limit = 0;
-- Dropping the previous (Old disks)
alter DISKGROUP GRID drop DISK
 GRID_0000
,GRID_0001
,GRID_0002
;
--
alter system set asm_power_limit = 1;
alter DISKGROUP GRID rebalance power 3;
--
select * from gv$asm_operation order by 1,2,3;
--
select dg.name, d.path, d.failgroup, d.failgroup_type,dg.VOTING_FILES
from v$asm_diskgroup dg, v$asm_disk d
where dg.group_number = d.group_number and dg.name = 'GRID'
order by dg.name, d.path, d.failgroup;

this now shows:

NAME PATH FAILGROUP FAILGROUP_TYPE VOTING_FILES
------------------------------ ------------------------------ ------- -
GRID /dev/mapper/asm-vote04 GRID_0000 REGULAR Y
GRID /dev/mapper/asm-vote05 GRID_0001 REGULAR Y
GRID /dev/mapper/asm-vote06 GRID_0002 REGULAR Y

And

crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 0915739721ca4f55bf0f9ea53d58ecee (/dev/mapper/asm-vote04) [GRID]
 2. ONLINE 15507ed4dd964f26bfc497f4913034db (/dev/mapper/asm-vote05) [GRID]
 3. ONLINE 219ed4aef2d74fcdbfa9c9e8b81dbbde (/dev/mapper/asm-vote06) [GRID]
Located 3 voting disk(s).

Okay, that looks much better. This completed the action of expanding the +GRID diskgroup as a preparation for the Grid Infrastructure management repository (GIMR) database for my 12c Grid Infrastructure upgrade.

Happy Reading,

Mathijs.

Upgrading 11G GridInfra to 12C in Linux

Introduction:

With spring 2017 around, new initiatives are being developed. As a preparation for doing database upgrades to 12c it is a mandatory step to upgrade the clusterware (Grid Infrastructure) first before doing the database part. So in this case, very happy me that the time has finally come that one of the customers requests an upgrade of a number of clusters to 12c Grid Infrastructure. In this document I will share thoughts and my plan to tackle this interesting puzzle. Since the first cluster upgrade will happen pretty soon (this week), the document might evolve with the lessons learned of that first upgrade. Happy reading in advance.

Preparations:

It could be the text of a fortune cookie, but every success just loves preparation, and in this case that is no different. The first thing to do was to identify the scope of clusters that had to be upgraded. Together with the customer an inventory list was created, and in the end 10 clusters have been defined as in scope for this action: 8 test clusters and 2 production environments. An interesting detail is that all clusters have been patched pretty recently and all hold 11.2.0.4 Grid Infrastructure, with the extra challenge that the underlying operating system comes in two flavors (Red Hat Linux server release 5.11 (Tikanga) and 6.5 (Santiago)). I am curious in advance to see whether these different versions of Red Hat will influence the steps to be performed. In the details below you will find more on the preparations and actions of the upgrade.

Operating System:

One of the first steps is of course to find out whether the operating system versions at hand are supported for the upgrade. Oracle Support confirmed that, even though it would be recommended to first upgrade the 5.11 Red Hat version to Red Hat 7, the upgrade should work with the 5.11 version at hand. The 6.5 OS version was okay anyhow. The project decided however that an OS upgrade of the 5.11 boxes would delay things, so upgrading the OS will be done in a different project.

Storage:

Before even considering running the upgrade of the Grid Infrastructure, some extra time needs to be spent investigating the storage in place in the cluster for such an upgrade. Often the Oracle software is first set up locally on each box on volume group VG0, but with the out-of-place installation these days that might become a challenge if there is not enough local storage present anymore in the box. Due to standards those root disks have become nearly untouchable. For my project this storage requirement has been defined as an absolute minimum, which means there will be a need for extra local storage per node, or even for SAN storage per node, which will be presented to me as the required mount points. If such storage is not (or no longer) present locally, I have to request and receive additional storage for it.

/app/grid        50GB
/app/oracle      70GB
/var/opt/oracle  32M
/tmp             1GB
San 4 lvm dbs    4GB each (see below)

Short explain for this:

/app/grid : 12C Grid-Infra software will be installed.
/app/oracle: For the 12C Database software.
/var/opt/oracle and /tmp: required minimum space.
San 4 lvm dbs:  will be setup for 4GB mountpoints, for each Instance on the local node in order to hold logfiles.
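Before the installer is started it is worth verifying that those mount points exist with the required room on every node; nothing fancier than a plain df:

## run on every node of the cluster
df -h /app/grid /app/oracle /var/opt/oracle /tmp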

When migrating to 12C and coming  from 11G  please be informed that you might need extra storage in your OCR – VOTING disk group due to a new feature as well. This new repository database will have to be implemented during the upgrade. This Grid Infrastructure management repository (GIMR) database has become mandatory in Oracle GI 12.1.0.2. Data files associated with it will be created in same diskgroup as OCR or voting.  (Average growth per day per node = app 750 MB so a 4 node cluster would lead at default retention of 3 days to app 9 GB storage requirement in OCR  or VOTING diskgroup).  A fortunate Note is that retention can be changed. Well in my case this means that more ASM disks will need to be added  to the specific disk group. At work most OCR and VOTING diskgroups are set up as bare minimum ( in normal redundancy with three disks each like 4 GB each). ( extra info on this topic: https://blogs.oracle.com/UPGRADE/entry/grid_infrastructure_management_repository_gimr)

Detailed preparations and health checks.

One of the quotes in IT sometimes is that you should not touch a well-running system. Well, in this case I would like to add: but if you do, come well prepared. Here I have put the focus on the three tools below to prove that the current system is in good shape to run the upgrade, which can also be regarded as a health check of the environment. These preps are based on the MOS note (1579762.1) and on reading Chapter 13 of the great book “Expert Oracle RAC 12c” by Syed Jaffar Hussain, Tariq Farooq, Riyaj Shamsudeen and Kai Yu (ISBN-13 (electronic): 978-1-4302-5045-6).

  • Opatch
  • RACcheck: Orachk
  • Runcluvfy

Opatch

Using opatch to make sure that the oraInventory is in good shape on all nodes in the cluster. The command issued investigates the current Grid Infrastructure home:

opatch lsinventory -oh /opt/crs/product/11204/crs -detail

-oh means for the specific ORACLE_HOME.

-detail shows all details.

RACcheck: Orachk

I looked on Metalink (MOS), then downloaded and installed this tool on the cluster nodes.

orachk Version
12.2.0.1.2_20161215

Following Quick start guide for this tool:

http://docs.oracle.com/cd/E68491_01/OEXUG/quick-start-guide.htm#OEXUG-GUID-CB4224DA-F389-4E9C-AB6A-C57F46A80C61

Clear information to be found In mos :

ORAchk Upgrade Readiness Assessment (Doc ID 1457357.1)

With the tool downloaded, the steps below have been performed:

According to the documentation the tool needs to be copied, unpacked (and installed) in the suptools subdirectory of the cluster software installation.

scp orachk.zip oracle@mysrvr23hr:/opt/crs/product/11204/crs/suptools
scp orachk.zip oracle@mysrvr24hr:/opt/crs/product/11204/crs/suptools

Once unzipped the tool can run in two modes, a pre upgrade mode and a post upgrade mode:

./orachk -u -o pre |tee Orachk_pre_20170124.log
./orachk -u -o post |tee Orachk_post_20170124.log

Note: the tee command will also create a log file holding all the steps and progress information during run time.
Note: /opt/oracle/.orachk should be empty before the start, otherwise you get the message: 'Another instance of orachk is running on: ...'.

Runcluvfy

Working with runcluvfy is like meeting an old friend again. Yet each time it is a bit of a struggle to find the optimal syntax and parameters to be used for your setup.

#Wrong setup was
./runcluvfy.sh stage -pre crsinst -upgrade -n mysrvr23hr,mysrvr24hr -rolling -fixup -src_crshome /opt/crs/product/11204/crs -dest_home /app/grid/product/12102/grid -dest_version 12.1.0 -verbose
## working version
./runcluvfy.sh stage -pre crsinst -n mysrvr23hr,mysrvr24hr -verbose|tee runcluvfy_20170130_pre.lst
Or
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /opt/crs/product/11204/crs -dest_crshome /app/grid/product/12102/grid -dest_version 12.1.0.2.0 -verbose|tee runcluvfy_20170130_preUpgrade.lst

Upgrade steps:

Now, with the confidence built during the preparation, it is time to plan and set up your upgrade steps. Multiple approaches are possible for the upgrade, but my goal in this is plain and simple: minimum impact on the cluster and on the databases hosted on it, so I will be aiming for this scenario: a rolling upgrade of ASM + Clusterware. A baseline for this is the URL below:

https://docs.oracle.com/database/121/CWLIN/procstop.htm#CWLIN10001

Working according to company standards requires using the following specific settings for $ORACLE_BASE and $ORACLE_HOME for the GI installation, and a different $ORACLE_HOME for the database software.

oracle@mysrvrhr:/home/oracle [CRS]# echo $ORACLE_BASE
/app/oracle
oracle@mysrvrhr:/home/oracle [CRS]# echo $ORACLE_HOME
/app/grid/product/12102/grid

oracle@mysrvrhr:/home/oracle [MYDB1]# echo $ORACLE_HOME
/app/oracle/product/12102/db

The bullets below go through the steps, with comments where needed.

  • Due to the Grid Infrastructure management repository (GIMR) database I had to add larger disks to the VOTING diskgroup to have enough storage in place (the steps on how to add the new disks and drop the old ones are too detailed for this blog; after all it is a blog and not a book 🙂 so I will blog about that separately).
  • Check /tmp because the upgrade requires at least 1GB free in /tmp. Either clean up or have /tmp extended (use the ls -lSh command to spot the largest files).
  • Check OCR integrity with:
cluvfy comp ocr -n all -verbose
  • Check backup of ocr and voting disk in the cluster:
    ocrconfig -showbackup

Note: this command can be performed as the oracle user and will show info similar to the information below. An interesting aspect here was that I issued the command on the first node (but the automated backups are all on node 11hr).

oracle@mysrvr09hr:/opt/oracle [CRS]# ocrconfig -showbackup
mysrvr11hr 2017/04/21 05:20:36 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup00.ocr
mysrvr11hr 2017/04/21 01:20:29 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup01.ocr
mysrvr11hr 2017/04/20 21:20:07 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup02.ocr
mysrvr11hr 2017/04/20 01:19:42 /opt/crs/product/11204/crs/cdata/mysrvr03cl/day.ocr
mysrvr11hr 2017/04/12 17:16:11 /opt/crs/product/11204/crs/cdata/mysrvr03cl/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
  • As the root user, run a manual backup of the OCR information. Run the ocrconfig -manualbackup command on a node where the Oracle Clusterware stack is up and running to force Oracle Clusterware to perform a backup of the OCR at any time, rather than waiting for the automatic backup. Note: the -manualbackup option is especially useful when you want to obtain a binary backup on demand, such as before you make changes to the OCR. The OLR only supports manual backups. NOTE: in 11gR2, the voting files are backed up automatically as part of the OCR. Oracle recommends NOT to use the dd command to backup or restore, as this can lead to loss of the voting disk.
mysrvr09hr:root:/root # cd /opt/crs/product/11204/crs/bin/
mysrvr09hr:root:/opt/crs/product/11204/crs/bin # ./ocrconfig -manualbackup
mysrvr11hr 2017/04/21 09:12:40 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup_20170421_091240.ocr

## Checking a second time will now also show a manual backup 2 b in place:
mysrvr09hr:root:/opt/crs/product/11204/crs/bin # ./ocrconfig -showbackup
mysrvr11hr 2017/04/21 05:20:36 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup00.ocr
mysrvr11hr 2017/04/21 01:20:29 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup01.ocr
mysrvr11hr 2017/04/20 21:20:07 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup02.ocr
mysrvr11hr 2017/04/20 01:19:42 /opt/crs/product/11204/crs/cdata/mysrvr03cl/day.ocr
mysrvr11hr 2017/04/12 17:16:11 /opt/crs/product/11204/crs/cdata/mysrvr03cl/week.ocr
mysrvr11hr 2017/04/21 09:12:40 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup_20170421_091240.ocr

The last line now shows the manual backup
(since it is in the format backup_yyyymmdd_hhmmss.ocr).
  • Check Location of OCR and Voting Disk (need to be in a diskgroup )
##How:
cat /etc/oracle/ocr.loc
## Shows output similar to this
## (if ocr is already mirrored in other Diskgroup with normal Redundancy)
#Device/file getting replaced by device +OCR
ocrconfig_loc=+VOTE
ocrmirrorconfig_loc=+OCR

 

 

##How: 
crsctl query css votedisk

## Will show 3 voting disks in Disk group Vote due to Normal redundancy (and 3 Disk)
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 36b26f862b9a4f54bfba3096e3d50afa (/dev/mapper/asm-vote01) [VOTE]
 2. ONLINE 9d45d791c1124febbf0a093d5a185c13 (/dev/mapper/asm-vote02) [VOTE]
 3. ONLINE 1b7e510a302e4f03bfdea942d55d7067 (/dev/mapper/asm-vote03) [VOTE]
Located 3 voting disk(s).
## check in ASM:
select a.name dg_name,
       a.group_number dg_number,
       a.state dg_state,
       b.disk_number d_number,
       b.name d_name,
       b.mount_status d_mount_status,
       b.header_status d_header_status,
       b.mode_status d_mode_status,
       b.state d_state,
       b.failgroup d_failgroup,
       b.path d_path
from v$asm_diskgroup a,
     v$asm_disk b
where a.group_number(+) = b.group_number
order by 2, 4;
  • Unset environment Variables:
unset ORACLE_BASE 
unset ORACLE_HOME 
unset GI_HOME 
unset ORA_CRS_HOME 
unset TNS_ADMIN
unset ORACLE_SID
unset ORA_NLS10
  • Check active crs version and software version:
## using the current CRS to document current active - and software version
/opt/crs/product/11204/crs/bin/crsctl query crs activeversion
/opt/crs/product/11204/crs/bin/crsctl query crs softwareversion
  • Performing a Standard Upgrade from an Earlier Release
## Use the following procedure to upgrade the cluster from an earlier release:
Start the installer, and select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation.
On the node selection page, select all nodes.
Select installation options as prompted. 
Note: Oracle recommends that you configure root script automation,
so that the rootupgrade.sh script can be run automatically during the upgrade.
Run root scripts, using either automatically or manually:

Running root scripts automatically:
TIP: If you have configured root script automation, 
then use the pause between batches to relocate services from the nodes running the previous release to the new release.
Comment Mathijs: I have not decided yet on this automation step. 
In the documentation read as prep for the upgrade you see the option to create multiple batches:
like batch 1 starting node, 
batch 2 all but last node,
batch 3 last node. 
I will use both the automated way for one cluster and then use the below manual (old school method mentioned below) on another cluster.

Running root scripts manually:
If you have not configured root script automation, then when prompted, 
run the rootupgrade.sh script on each node in the cluster that you want to upgrade.

If you run root scripts manually, then run the script on the local node first. 
The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.
After the script completes successfully, you can run the script in parallel on all nodes except for one, which you select as the last node. 
When the script is run successfully on all the nodes except the last node, run the script on the last node.
After running the rootupgrade.sh script on the last node in the cluster, if you are upgrading from a release earlier than Oracle Grid Infrastructure 11g Release 2 (11.2.0.2), 
and left the check box labeled ASMCA checked, which is the default, then Oracle Automatic Storage Management Configuration Assistant ASMCA runs automatically, 
and the Oracle Grid Infrastructure upgrade is complete. 
If you unchecked the box during the interview stage of the upgrade, then ASMCA is not run automatically.

If an earlier release of Oracle Automatic Storage Management (Oracle ASM) is installed, then the installer starts ASMCA to upgrade Oracle ASM to 12c Release 1 (12.1). 
You can choose to upgrade Oracle ASM at this time, or upgrade it later.
Oracle recommends that you upgrade Oracle ASM at the same time that you upgrade Oracle Clusterware. 
Until Oracle ASM is upgraded, Oracle Databases that use Oracle ASM cannot be created and the Oracle ASM management tools in the Oracle Grid Infrastructure 12c Release 1 (12.1) home (for example, srvctl) do not work.

Note: 
Because the Oracle Grid Infrastructure home is in a different location than the former Oracle Clusterware and Oracle ASM homes, 
update any scripts or applications that use utilities, libraries, or other files that reside in the Oracle Clusterware and Oracle ASM homes.
  • Check active crs version and software version:
/opt/crs/product/11204/crs/bin/crsctl query crs activeversion
/opt/crs/product/11204/crs/bin/crsctl query crs softwareversion
  • Post upgrade checks:
ps -ef | grep d.bin should show daemons started from the 12c home.
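A minimal sketch of those post-upgrade checks, assuming the new 12c grid home used elsewhere in this post:

ps -ef | grep d.bin | grep -v grep
/app/grid/product/12102/grid/bin/crsctl query crs activeversion
/app/grid/product/12102/grid/bin/crsctl query crs softwareversion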

Thoughts on Rollback:

Of course each migration will only be as good as its preparation. But still, your plan should at least hold the steps for a rollback in case you do not make it to a successfully completed task. Below you will find the general steps.

On all remote nodes, use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade to stop the 12c Release 1 (12.1).
On the local node use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade -lastnode
On any of the cluster member nodes where the rootupgrade.sh script has run successfully:

cd /u01/app/12.1.0/grid/oui/bin
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList
-silent CRS=false ORACLE_HOME=/u01/app/12.1.0/grid

On any of the cluster member nodes where the rootupgrade script has run successfully:
In the old ORACLE_HOME (the earlier Oracle Clusterware installation):
$ cd /opt/crs/product/11204/crs/oui/bin/
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/crs

Start the Oracle Clusterware stack manually:
On each node, start Oracle Clusterware from the earlier release Oracle Clusterware home:
/opt/crs/product/11204/crs/bin/crsctl start crs

As always thank you for taking an interest in my blog. Happy reading and till the next time.

Mathijs

 

 

Relink Oracle software after Upgrading Red Hat release

Introduction:

Soon in a theater near you, as a main event: the patching of a lot of Linux (database) servers. So it is once more time to dust off the documentation and refresh my memory, because in the past there has always been a lot of debate between DBAs: should we or should we not re-link the Oracle software once Linux patching has been done? The motivation would be that folks state: "but we have a running version now, and we don't know what misery a recompile will bring". Plain and simple, and in line with the Oracle Support recommendation, the answer to that question should always be YES. If required libraries in Linux have changed, it is best to find out during that one maintenance window instead of weeks or months later, when most of us have forgotten about the patch activities.

The steps below are a bit specific to my environments, where I have set up ASM and Oracle Restart on various single servers. Yet I hope this note will be of benefit as a general overview.

Stopping the databases under control in an easy way:

 

Below you will find a nice feature of the srvctl status / stop / start home commands that makes it easy to capture, stop and start all the databases that are part of a specific ORACLE_HOME. (Commands and examples come from the Oracle documentation: https://docs.oracle.com/html/E25494_01/lot.htm)

srvctl status home -o oracle_home -s state_file

srvctl status home Options

Option  Description
-o      Complete path of the Oracle home
-s      Complete path of the state file

srvctl stop home -o oracle_home -s state_file [-t stop_options] [-f]

srvctl stop home Options

Option           Description
-o               Complete path of the Oracle home
-s               Complete path of the state file
-t stop_options  SHUTDOWN command options for the database (for example: NORMAL, TRANSACTIONAL, IMMEDIATE, or ABORT). Default is IMMEDIATE.
-f               Force stop each component

srvctl start home -o oracle_home -s state_file

srvctl start home Options

Option  Description
-o      Complete path of the Oracle home
-s      Complete path of the state file. The state file contains the current state information for the components in the Oracle home and is created when the srvctl stop home command or the srvctl status home command is run.

 

As prep for the relinking of the software I performed the following steps:

srvctl status home -o /opt/oracle/product/112_ee_64/db -s /var/tmp/state_file.status

srvctl stop home -o /opt/oracle/product/112_ee_64/db -s /var/tmp/state_file.dmp

Note: the stop home command creates a file at the given location in which Oracle records all instances/databases that are part of that specific home. The nice part is that this .dmp file can be used in one command to start all databases again once things are done.

In Order to relink the Rdbms software:

After shutting down the databases (see above):

Have the ORACLE_HOME set properly, then run:

$ORACLE_HOME/bin/relink all

Note: writing relink log to: /opt/oracle/product/112_ee_64/db/install/relink.log
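A quick way to confirm the relink went clean is to scan that log for errors before moving on (any hits deserve a closer look):

grep -iE 'error|fail' /opt/oracle/product/112_ee_64/db/install/relink.log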

In Order to relink the Oracle Restart software:

Prepare the Oracle Grid Infrastructure for a Standalone Server home for modification using the following procedure:

  1. Log in as the Oracle Grid Infrastructure software owner user and change the directory to the path Grid_home/bin, where Grid_home is the path to the Oracle Grid Infrastructure home:
cd /opt/crs/product/112_ee_64/crs/bin
  2. Shut down the Oracle Restart stack using the following command:
crsctl stop has -f

This will show:

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'myhost01hr'

CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'myhost01hr'

CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'myhost01hr' succeeded

CRS-2673: Attempting to stop 'ora.evmd' on 'myhost01hr'

CRS-2677: Stop of 'ora.evmd' on 'myhost01hr' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'myhost01hr' has completed

CRS-4133: Oracle High Availability Services has been stopped.

oracle@myhost01hr:/opt/crs/product/112_ee_64/crs/bin [+ASM]#

Then:

Relink Oracle Grid Infrastructure:

  1. Log in as root:
    
    cd /opt/crs/product/112_ee_64/crs/crs/install
    
    perl roothas.pl -unlock
    
  2. Log in as the Oracle Grid Infrastructure for a Standalone Server owner:
    
    $ export ORACLE_HOME=/opt/crs/product/112_ee_64/crs
    
    $ $ORACLE_HOME/bin/relink

Note: This will show: oracle@myhost01hr:/opt/oracle [+ASM]# $ORACLE_HOME/bin/relink
writing relink log to: /opt/crs/product/112_ee_64/crs/install/relink.log

  3. Log in as root again:
    
    cd /opt/crs/product/112_ee_64/crs/rdbms/install/
    
    ./rootadd_rdbms.sh 
    
    Note. Rootadd_rdbms came back very fast without any output.
    
    cd /opt/crs/product/112_ee_64/crs/crs/install
    
    perl roothas.pl -patch

Checked with:

  1. export ORACLE_HOME=/opt/crs/product/112_ee_64/crs
  2. $ORACLE_HOME/bin/crsctl check has

This showed: CRS-4638: Oracle High Availability Services is online

Starting the databases under control in an easy way:

When the srvctl stop home command was issued, a file was created in /var/tmp as state_file.dmp. Now we can start all databases again with one command:

srvctl start home -o /opt/oracle/product/112_ee_64/db -s /var/tmp/state_file.dmp
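After the start it does not hurt to verify that everything that was captured is indeed back; the status variant writes to a fresh state file (the file name below is just an example):

srvctl status home -o /opt/oracle/product/112_ee_64/db -s /var/tmp/state_file_after.status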

As always, happy reading and happy DBA'ing.