Summary.
In the 12.2 Grid Infrastructure release, Oracle has made Flex ASM the default ASM configuration. This blog focuses on reconfiguring the network components of an Oracle RAC 12.2 Grid Infrastructure, such as the interconnect and the public interface, or setting an interface to do-not-use, whenever that applies or improves the situation at hand. Read this post carefully in full before performing these steps on one of your clusters. The baseline for this action is a document on My Oracle Support: How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1).
Details:
As with any change that follows the input – processing – output pattern, it is important to have a clear picture of the situation as-is. So a first and mandatory step is to check how things look before starting the changes, using the oifcfg getif command.
When entering the command, information about the known network interfaces in the RAC cluster, similar to the output below, should be shown:
oracle@mysrvr1dr:/app/oracle/stage/27468969 [+ASM1]# oifcfg getif
bond0 198.19.11.0 global public
eth0 10.217.210.0 global cluster_interconnect,asm
eth2 192.168.10.0 global cluster_interconnect
eth7 192.168.11.0 global cluster_interconnect
Here bond0 is used as public, eth0 currently carries both the cluster interconnect and ASM traffic, and eth2 and eth7 are dedicated to the interconnect. Eth0 is also defined as the admin LAN for various activities. In this setup the cluster is unstable and nodes are being evicted, so it is time to perform steps to stabilize it.
From the MOS note we follow Case IV: Changing private network interface name, subnet or netmask, for 12c Oracle Clusterware with Flex ASM.
Precaution: take a backup of profile.xml on each node.
Take a backup of profile.xml on all cluster nodes before proceeding, as the grid user. In this specific case that is the user that installed the Grid Infrastructure (in this scenario the oracle user):
Command:
$ cd $GRID_HOME/gpnp/<hostname>/profiles/peer/
$ cp -p profile.xml profile.xml.bk
cd /app/grid/product/12201/grid/gpnp/mysrvr1dr/profiles/peer
cp -p profile.xml profile.xml.bk
cd /app/grid/product/12201/grid/gpnp/mysrvr2dr/profiles/peer
cp -p profile.xml profile.xml.bk
cd /app/grid/product/12201/grid/gpnp/mysrvr3dr/profiles/peer
cp -p profile.xml profile.xml.bk
cd /app/grid/product/12201/grid/gpnp/mysrvr4dr/profiles/peer
cp -p profile.xml profile.xml.bk
cd /app/grid/product/12201/grid/gpnp/mysrvr5dr/profiles/peer
cp -p profile.xml profile.xml.bk
cd /app/grid/product/12201/grid/gpnp/mysrvr6dr/profiles/peer
cp -p profile.xml profile.xml.bk
cd /app/grid/product/12201/grid/gpnp/mysrvr7dr/profiles/peer
cp -p profile.xml profile.xml.bk
cd /app/grid/product/12201/grid/gpnp/mysrvr8dr/profiles/peer
cp -p profile.xml profile.xml.bk
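Since all eight nodes share the same Grid home path, the per-node backup commands can also be generated with a small loop. This is only a sketch: it assumes the Grid home and node names used in this post, and it merely prints the commands so you can review them before running each one on the respective node:

```shell
# Sketch: print the profile.xml backup command for every node.
# Assumes GRID_HOME and hostnames as used in this cluster; adjust as needed.
GRID_HOME=/app/grid/product/12201/grid
for n in 1 2 3 4 5 6 7 8; do
  host="mysrvr${n}dr"
  echo "cd $GRID_HOME/gpnp/$host/profiles/peer && cp -p profile.xml profile.xml.bk"
done
```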
Altering the interconnect:
One of the interconnects should be altered to make sure that the ASM listener is able to communicate over that interface too. In this scenario eth2 was used. When doing this, take note of the subnet, since it will be needed to configure a new ASM listener.
oifcfg setif -global eth2/192.168.10.0:cluster_interconnect,asm
oifcfg setif -global eth7/192.168.11.0:cluster_interconnect
Now eth2 shows that it is set up for both the interconnect and ASM (only one interface should be set up to combine cluster_interconnect and asm).
peer [+ASM1]# oifcfg getif
bond0 198.19.11.0 global public
eth0 10.217.210.0 global cluster_interconnect,asm
eth2 192.168.10.0 global cluster_interconnect,asm
eth7 192.168.11.0 global cluster_interconnect
With this information checked and in place, it is time to set up a new listener for ASM, since the original ASM listener was created during installation on eth0, and eth0 will be removed from the cluster configuration in the steps below.
The existing listener ASMNET1LSNR will be replaced by a new one, ASMNET122LSNR:
srvctl add listener -asmlistener -l ASMNET122LSNR -subnet 192.168.10.0
(as mentioned, 192.168.10.0 is the subnet of the eth2 interface that we are going to use).
As always, seeing is believing: use crsctl status resource -t to see details similar to those below. The new ASM listener is created as a resource and has status OFFLINE on all nodes in the cluster at this point in time:
--------------------------------------------------------------------------
Name                        Target   State    Server      State details
--------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------
ora.ASMNET122LSNR_ASM.lsnr
                            OFFLINE  OFFLINE  mysrvr1dr   STABLE
                            OFFLINE  OFFLINE  mysrvr2dr   STABLE
                            OFFLINE  OFFLINE  mysrvr3dr   STABLE
                            OFFLINE  OFFLINE  mysrvr4dr   STABLE
                            OFFLINE  OFFLINE  mysrvr5dr   STABLE
                            OFFLINE  OFFLINE  mysrvr6dr   STABLE
                            OFFLINE  OFFLINE  mysrvr7dr   STABLE
                            OFFLINE  OFFLINE  mysrvr8dr   STABLE
In the next step we remove the old ASM listener, using the -force option to suppress errors about dependencies:
srvctl update listener -listener ASMNET1LSNR_ASM -asm -remove -force
I checked again with crsctl status resource -t to make sure the old resource was now gone.
Stopping the old ASM listener
The MOS note contains a small inconsistency here, because it states that as a next step the old ASM listener should be stopped. I could still see the listener process at OS level on the machines (ps -ef | grep -i inherit), but I was not able to stop it, since the cluster resource was already gone and lsnrctl did not work. Solution: when I skipped this step and then stopped and started the cluster, which is mandatory in this scenario anyway, the listener process was gone on all nodes.
According to the note this command should have been given, but it does NOT work here:
lsnrctl stop ASMNET1LSNR_ASM
Check configuration before restarting GI:
First command:
srvctl config listener -asmlistener
Name: ASMNET122LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.10.0
Home: <CRS home>
End points: TCP:1527
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

Second command:
srvctl config asm
ASM home: <CRS home>
Password file: +VOTE/orapwASM
Backup of Password file:
ASM listener: LISTENER
ASM instance count: ALL
Cluster ASM listener: ASMNET122LSNR_ASM
Both results look good, so it is time to move to the next step: restarting the Grid Infrastructure on all nodes.
Restarting the Grid Infrastructure on all nodes:
For the next steps you have to become root (or use sudo su -). First, and importantly, make sure that the Grid Infrastructure will not restart automatically should a cluster node reboot (disable crs), then stop the Grid Infrastructure software:
As root:
/app/grid/product/12201/grid/bin/crsctl disable crs
/app/grid/product/12201/grid/bin/crsctl stop crs
To be done on: mysrvr[1-8]dr
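The disable/stop sequence has to be executed as root on every node. A sketch that prints the sequence per node (hostnames and the crsctl path as used in this post; it deliberately only echoes the commands, so nothing is executed):

```shell
# Sketch: print the CRS disable/stop sequence for each node.
# Run the printed commands as root on the node in question.
CRSCTL=/app/grid/product/12201/grid/bin/crsctl
for n in 1 2 3 4 5 6 7 8; do
  printf '%s: %s disable crs && %s stop crs\n' "mysrvr${n}dr" "$CRSCTL" "$CRSCTL"
done
```

The same loop with enable/start applies when bringing the cluster back up later.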
Checking network configuration on all nodes.
mysrvr1dr:root:/root $ ifconfig -a
Starting the cluster again:
As root:
/app/grid/product/12201/grid/bin/crsctl enable crs
/app/grid/product/12201/grid/bin/crsctl start crs
To be done on: mysrvr[1-8]dr
Final checks:
oifcfg getif
bond0 198.19.11.0 global public
eth0 10.217.210.0 global cluster_interconnect,asm
eth2 192.168.10.0 global cluster_interconnect,asm
eth7 192.168.11.0 global cluster_interconnect
Time to delete eth0
Since eth0 is the admin LAN and, after our reconfiguration steps, is no longer needed by the cluster, it is time to get rid of eth0 (remove it from the Grid Infrastructure configuration):
oifcfg delif -global eth0/10.217.210.0
And a last check again:
oifcfg getif
bond0 198.19.11.0 global public
eth2 192.168.10.0 global cluster_interconnect,asm
eth7 192.168.11.0 global cluster_interconnect
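As an extra sanity check, the getif output can be verified to contain exactly one interface tagged cluster_interconnect,asm, which is the intended end state of this reconfiguration. A minimal sketch (the sample output is embedded here as a stand-in; on a live cluster you would pipe oifcfg getif instead):

```shell
# Sketch: count interfaces carrying both cluster_interconnect and asm.
# After this reconfiguration, exactly one such interface (eth2) is expected.
getif_output='bond0 198.19.11.0 global public
eth2 192.168.10.0 global cluster_interconnect,asm
eth7 192.168.11.0 global cluster_interconnect'
asm_count=$(printf '%s\n' "$getif_output" | grep -c 'cluster_interconnect,asm')
echo "$asm_count"
```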
Happy reading, and till we meet again,
Mathijs.