Some things happened (while installing 12.2 GI in a RAC cluster).

Summary:

One of the fine people in the Oracle community once shared a thought about repairing a bike and reusing the same plan of approach N times. The lesson from that story is that it is best to treat each bike as a new challenge that deserves a fresh approach. In this blog I will describe a number of things I came across when setting up Grid Infrastructure 12.2 with the January 2018 PSU on a multi-node cluster.

Details – Things to look out for:

  • Locating the log files of the installation can make a world of difference. Make sure you know where they are and keep them tailed during the entire install. In my case the logs were written to timestamped subdirectories, for example /app/oraInventory/logs/GridSetupActions2018-04-26_09-39-53AM (see the tailing sketch after this list).
  • In the past you always unzipped your software ZIP to a staging destination, and during the installation runInstaller would ask for an installation location. With 12.2 (and in Oracle 18 Grid Infrastructure) that is no longer the case. As a first step, create the directory where the software is supposed to be installed and unzip your files there (see the combined sketch after this list).
  • runInstaller is no more. In order to start the installation process you now run this command: ./gridSetup.sh
  • When installing, as in my case, on Red Hat Linux 7.4 with a patched kernel, you might come across errors like these:

    ACFS-9154: Loading ‘oracleoks.ko’ driver.
    modprobe: ERROR: could not insert ‘oracleoks’: Unknown symbol in module, or unknown parameter (see dmesg)
    ACFS-9109: oracleoks.ko driver failed to load.
    ACFS-9178: Return code = USM_FAIL
    ACFS-9177: Return from ‘ld usm drvs’
    ACFS-9428: Failed to load ADVM/ACFS drivers. A system reboot is recommended.

    You can solve this by running gridSetup.sh with a parameter that installs the patch(es) first: ./gridSetup.sh -applyPSU /app/grid/product/12201/grid/27100009. In other words, the PSU patch needs to be applied first, and only then can gridSetup start its setup.
  • The images during setup have changed. In my case I selected the one shown below, which also brought me Flex ASM, the default in 12.2.
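
A minimal sketch for following the installer logs (the timestamped directory is the example from this run, and the log file name pattern inside it is an assumption; check what the directory actually contains):

    # Find the GridSetupActions directory of the current run
    ls -ldrt /app/oraInventory/logs/GridSetupActions*

    # Tail the main log for the duration of the install
    tail -f /app/oraInventory/logs/GridSetupActions2018-04-26_09-39-53AM/gridSetupActions*.log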
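
Putting the staging and patching steps together, the order of events looks roughly like this (the grid home path and patch number are the ones from this post; the ZIP file names are assumptions, check your actual downloads):

    # 1. Create the final grid home and unzip the software straight into it
    mkdir -p /app/grid/product/12201/grid
    cd /app/grid/product/12201/grid
    unzip -q /stage/linuxx64_12201_grid_home.zip

    # 2. Unzip the January 2018 PSU (27100009) so the installer can find it
    unzip -q /stage/p27100009_122010_Linux-x86-64.zip -d /app/grid/product/12201/grid

    # 3. Apply the PSU first; only then does the interactive setup start
    ./gridSetup.sh -applyPSU /app/grid/product/12201/grid/27100009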

 

[Screenshot: 2018-05-06_074150]

When installing GI as a standalone cluster, in the follow-up screens you are asked to add the nodes of your cluster either as a Hub or as a Leaf node, thus differentiating which nodes get a dedicated ASM instance (Hub) and which nodes communicate remotely with one of the Hub ASM instances. After the install I learned that in 12.2, by default, three ASM instances are created no matter how many nodes there are in your cluster. A quick check is sketched below.
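
If you want to verify this after the install, a minimal sketch using srvctl (run as the grid owner from the grid home; the comments describe what I would expect, not a verbatim capture):

    # Shows the configured ASM cardinality (three instances by default in 12.2 Flex ASM)
    srvctl config asm

    # Shows on which cluster nodes those ASM instances are actually running
    srvctl status asm -detail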

  • SCAN listener: as a preparation, make sure the colleagues from the Linux team have added the three IPs for your cluster to the DNS, and try an nslookup before installing. During the installation, when you have to add the cluster name (here presented as mycluster), the installer will also show a SCAN name, and most likely you will have to alter it to match what is in DNS. In my case mycluster-scan.prod.nl needed to be present in DNS with three IP addresses (a check is sketched below).
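
A quick pre-install check could look like this (mycluster-scan.prod.nl is the example name from this post; substitute your own SCAN name):

    # The SCAN name must resolve to three IP addresses,
    # which DNS should hand out in round-robin order
    nslookup mycluster-scan.prod.nl

    # The same check in a more script-friendly form
    dig +short mycluster-scan.prod.nl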

[Screenshot: 2018-05-06_075326]

  • In the Cluster Node Information screen you add all the nodes in your cluster. In this case I intended to set up each node as a Hub node, thus expecting that there would be eight ASM instances in place too (which was not the case, as explained above: 12.2 creates three ASM instances by default).
  • On this screen you add the nodes using the Add button.
  • On this screen you can also set up SSH connectivity between all the nodes. Neither the various blogs on the web nor the documentation made it clear to me what the preferred way to do this is. I had the tool set up SSH connectivity between all nodes and was happy with the result (see the verification sketch after this list).
  • Once completed, press Next and the tool will show something like “validating node readiness”.
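
To verify the SSH equivalence the tool generated, a small loop like this (node names are placeholders for your own) should print every hostname without prompting for a password:

    # BatchMode makes ssh fail instead of asking for a password,
    # so a broken equivalence shows up immediately
    for h in node1 node2 node3; do
      ssh -o BatchMode=yes "$h" hostname
    done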

[Screenshot: 2018-05-06_080405]

  • In the Specify Network Interface Usage screen:
  • Best practice / lesson learned: make sure you have consulted the Linux team about the interfaces. In my specific case, eth0 is the admin LAN, so it should be set to Do Not Use. Eth2 and eth7 are the private interconnects. Make sure that only one of them has the option Private, ASM. (In a Flex ASM cluster, ASM needs a way to communicate via its dedicated listener. Since by default there is only one ASM listener, make sure only one of the private interconnects carries the combination of Private and ASM.) A verification sketch follows below.
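
After the installation you can cross-check how the interfaces were registered; oifcfg ships with the grid home (the interface names below are the ones from this post, the subnets are made up for illustration):

    # Lists every stored interface with its subnet and role
    # (public, cluster_interconnect, asm, or a combination)
    oifcfg getif

    # Expected outcome in this setup (illustrative):
    #   eth2  192.168.10.0  global  cluster_interconnect,asm
    #   eth7  192.168.20.0  global  cluster_interconnect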

[Screenshot: 2018-05-06_082605]

Note: This installation was implemented on new hardware from Dell. During the install we found out that, in the original setup, the Dell systems used a range of IPs (169.254.*) which is also used by the HAIPs (highly available IPs) of Oracle. Even if the colleagues from the Linux team might grumble, it is mandatory that the 169.254.* range is not in use! In the first setup the Dell systems had 169.254.* addresses enabled for their iDRAC interface; these IPs have been disabled. A quick check is sketched below.
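
A simple pre-install check for that range (run it on every node; any hit on an interface before GI is installed means trouble):

    # Oracle's HAIPs live in the 169.254.0.0/16 link-local range;
    # nothing else on the cluster nodes should be using it
    ip -4 addr | grep 169.254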

  • For the setup of the two diskgroups (one for the OCR and voting disks, one for the GIMR, the Grid Infrastructure Management Repository), make sure the Linux admins have delivered the ASM disks. In my case I got two sets of three disks, so I could set up normal-redundancy diskgroups for both (a check is sketched after this list).
  • On the Summary screen, pay extra attention and make sure that all the cluster nodes you intend to include in your soon-to-be cluster are showing (Hub nodes: this should list all the nodes). If this is not the case, you can select Edit, which will rerun all steps as of the Cluster Node Information screen.
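
Once ASM is up, a quick look at the diskgroups confirms the redundancy (diskgroup names depend on what you entered during setup):

    # lsdg lists each diskgroup with its redundancy type (Type column),
    # state and usable free space
    asmcmd lsdg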

Well, that is all for now… To be continued in a galaxy near you…

As always, happy reading, and till we meet again.

Mathijs