Installing 12.2 GI on Red Hat Linux 7.4

Introduction.

In one of the Oracle new features books (I believe it was the 11g one) it was stated that the only constant is change. With my recent experience installing 12.2 Grid Infrastructure as a fine example, I could not agree more. But as always, if only everything with regard to Oracle were easy…

For a project, new installations need to be done. In this specific case that means I will work with Red Hat Linux 7.4 together with Oracle 12.2, both Grid Infrastructure and the 12.2 Oracle RDBMS, including the latest PSU (at the moment January 2018). As mentioned in my other post, the setup for Grid Infrastructure has changed: there is no runInstaller anymore, and you unzip the Oracle-provided zip directly into the destination that will hold your software (so no more staging area).
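
In outline the new, image-based flow looks like this (just a minimal sketch; the paths are the ones I use later in this post and are of course site-specific):

unzip linuxx64_12201_grid_home.zip -d /app/grid/product/12.2.0.1/grid   # unzip straight into the future grid home, no staging area
cd /app/grid/product/12.2.0.1/grid
./gridSetup.sh                                                          # replaces the old runInstaller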

For one of the platforms it was mandatory to set up 12.2 GI (and database) on a single server, which is also referred to by Oracle as Oracle Restart.

Details.

Now this is where the challenging, even scary, part comes in. Once the installation was on its way, at approximately 80% of the progress the famous second window popped up asking me to start a second session, become root and run the root.sh script. Well, this is what happened on the way to that theater:

CRS-4664: Node mysrvr successfully pinned.

2018/01/30 09:19:28 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

2018/01/30 09:21:03 CLSRSC-400: A system reboot is required to continue installing.

The command '/app/grid/product/12.2.0.1/grid/perl/bin/perl -I/app/grid/product/12.2.0.1/grid/perl/lib -I/app/grid/product/12.2.0.1/grid/crs/install /app/grid/product/12.2.0.1/grid/crs/install/roothas.pl' execution failed

I looked around on the web and it was almost a relief that other bloggers had already come across this phenomenon. As always when reading the scenarios that others followed to solve the issue, the question remains whether they apply to your specific situation. It was interesting to identify that this issue is caused by the ACFS drivers when the kernel used in Red Hat 7.4 (and also in 7.3) is higher than the expected (default) one, thus causing root.sh to fail. And as a spoiler alert: the suggested reboot did not help in my specific case. I consulted with The Oracle and this came back:

==============================
>  ACFS-9154: Loading 'oracleoks.ko' driver.
>  modprobe: ERROR: could not insert 'oracleoks': Unknown symbol in module, or unknown parameter (see dmesg)
>  ACFS-9109: oracleoks.ko driver failed to load.
>  ACFS-9178: Return code = USM_FAIL
>  ACFS-9177: Return from 'ld usm drvs'
>  ACFS-9428: Failed to load ADVM/ACFS drivers. A system reboot is recommended.
==============================
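
If you want to check upfront whether your box is affected, you can compare the running kernel with what the ADVM/ACFS drivers in the grid home support (a small sketch; acfsdriverstate ships with the Grid Infrastructure software, so it is available as soon as the zip has been unpacked):

# show the running kernel; newer RHEL 7.3/7.4 kernels are the trigger here
uname -r
# ask the GI software itself whether ADVM/ACFS is supported on this kernel
/app/grid/product/12.2.0.1/grid/bin/acfsdriverstate supported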

Workaround / Solution:

Either apply a one-off patch:

25078431 for RHEL 7.3.
26247490 for RHEL 7.4.

or

Even better, and also a new approach: apply the January 2018 PSU before the installation.

Both scenarios could be interesting, but given the fact that I need the January 2018 PSU as a baseline anyhow, this is how the scenario worked:

1 The installation did not complete, so I did not perform a normal de-install; I simply emptied the grid home:

oracle@mysrvr:/app/grid/product/12.2.0.1 []# cd grid
oracle@mysrvr:/app/grid/product/12.2.0.1/grid []# rm -rf *

2 Unzipped the GI image into /app/grid/product/12.2.0.1/grid:

unzip /app/grid/product/12.2.0.1/linuxx64_12201_grid_home.zip -d /app/grid/product/12.2.0.1/grid

3 The latest version of OPatch is needed; make sure you download it and have it in place:

unzip /app/grid/product/12.2.0.1/p6880880_122010_Linux-x86-64.zip -d /app/grid/product/12.2.0.1/grid
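
Note that the grid home already contains an OPatch directory at this point. If you prefer to replace it without unzip prompting for every single file, something along these lines works (a sketch; moving the old directory aside first is just my own habit):

# move the shipped OPatch aside and unzip the new one without overwrite prompts
mv /app/grid/product/12.2.0.1/grid/OPatch /app/grid/product/12.2.0.1/grid/OPatch.orig
unzip -o /app/grid/product/12.2.0.1/p6880880_122010_Linux-x86-64.zip -d /app/grid/product/12.2.0.1/grid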

4 This is the January 2018 PSU for GI, p27100009_122010_Linux-x86-64.zip; unzip it into a separate patch directory:

unzip /app/grid/product/12.2.0.1/p27100009_122010_Linux-x86-64.zip -d /app/grid/product/12.2.0.1/patch

5 Check for the correct OPatch version:

oracle@mysrvr:/app/grid/product/12.2.0.1/grid []# cd OPatch
oracle@mysrvr:/app/grid/product/12.2.0.1/grid/OPatch []# ./opatch version
OPatch Version: 12.2.0.1.11
OPatch succeeded.

6 Time to run this command; it will first patch the software tree and then start the setup:

cd ..
./gridSetup.sh -applyPSU /app/grid/product/12.2.0.1/patch/27100009
After some minutes, right before the gridSetup started, this showed: Successfully applied the patch.
The log can be found at: /app/oraInventory/logs/GridSetupActions2018-01-30_02-34-09PM/installerPatchActions_2018-01-30_02-34-09PM.log

Then the grid installer started ….
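
At this point the PSU should already be present in the software tree. If you want to double-check before continuing in the installer, OPatch can list what has been applied to the home (just a sanity check under the assumption that the home's inventory is readable at this stage; the exact patch numbers shown will depend on the PSU):

export ORACLE_HOME=/app/grid/product/12.2.0.1/grid
$ORACLE_HOME/OPatch/opatch lspatches   # lists the patches applied to this grid home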

7 And in a popup screen I was asked to have root run the script:

Time to run root.sh:
mysrvr:root:/app/grid/product/12.2.0.1/grid $ ./root.sh
This time a much more positive output came in:

Performing root user operation.
 
The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME= /app/grid/product/12.2.0.1/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/grid/product/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
 /app/oracle/crsdata/mysrvr/crsconfig/roothas_2018-01-30_03-03-44PM.log
 
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'dba'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node mysrvr successfully pinned.
2018/01/30 15:04:19 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'mysrvr'
CRS-2673: Attempting to stop 'ora.evmd' on 'mysrvr'
CRS-2677: Stop of 'ora.evmd' on 'mysrvr' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'mysrvr' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
 
mysrvr 2018/01/30 15:06:13 /app/grid/product/12.2.0.1/grid/cdata/mysrvr/backup_20180130_150613.olr 2960767134
2018/01/30 15:06:14 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
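
To convince myself that the Oracle Restart stack really was healthy after this, a quick check with crsctl does the trick (two standard commands, shown here purely as a sanity check):

/app/grid/product/12.2.0.1/grid/bin/crsctl check has    # should report that Oracle High Availability Services is online
/app/grid/product/12.2.0.1/grid/bin/crsctl stat res -t  # lists the registered resources and their state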

And once again a happy me.

Dedication: I would like to dedicate this post to the colleagues from Oracle ACS support for their great and swift help. Always a pleasure to work together.

As always, happy reading and till we meet again,

Mathijs