Reconfiguring the RAC 12.2 Grid Infrastructure network

Summary:

In 12.2 Grid Infrastructure Oracle has made Flex ASM the default. This blog focuses on reconfiguring the network components of an Oracle RAC 12.2 Grid Infrastructure, such as the interconnect and the public network, or changing an interface to do-not-use, whenever that applies or improves the situation at hand. Read this in full carefully before performing it on one of your clusters. The baseline for this action is a document on MOS: How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1).

Details:

As with any change, when going through input – processing – output, it is important to have a clear picture of the situation as-is. So a first and mandatory step is to check with the oifcfg getif command how things are before starting the changes:

When entering the command, information about the known network interfaces in the RAC cluster should show, similar to below:

oracle@mysrvr1dr:/app/oracle/stage/27468969 [+ASM1]# oifcfg getif

bond0  198.19.11.0  global  public
eth0  10.217.210.0  global  cluster_interconnect,asm
eth2  192.168.10.0  global  cluster_interconnect
eth7  192.168.11.0  global  cluster_interconnect

Here bond0 is used as public, eth0 at the moment carries both the cluster interconnect and ASM traffic, and eth2 and eth7 are dedicated to the interconnect. eth0 is defined as the admin LAN for various activities. In this setup the cluster is unstable and nodes are being evicted, so it is time to perform steps to stabilize it.

From the MOS note we follow Case IV: Changing private network interface name, subnet or netmask, for 12c Oracle Clusterware with Flex ASM.

Precaution: taking a backup of profile.xml on each node.

Take a backup of profile.xml on all cluster nodes before proceeding, as the grid user. In this specific case that is the user that installed the Grid Infrastructure (in this scenario the oracle user):
Command:

$ cd $GRID_HOME/gpnp/<hostname>/profiles/peer/
$ cp -p profile.xml profile.xml.bk
cd /app/grid/product/12201/grid/gpnp/mysrvr1dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr2dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr3dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr4dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr5dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr6dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr7dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr8dr/profiles/peer
cp -p profile.xml profile.xml.bk
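The eight cd/cp pairs above follow one pattern per node, so they can also be generated with a small loop (a dry-run sketch that only prints the commands; the mysrvr[1-8]dr hostnames and the Grid home path are the example values from this cluster):

```shell
# Print the profile.xml backup command for each of the 8 cluster nodes.
# Dry run: the commands are echoed, not executed.
GRID_HOME=/app/grid/product/12201/grid
for n in $(seq 1 8); do
  host="mysrvr${n}dr"
  echo "cd ${GRID_HOME}/gpnp/${host}/profiles/peer && cp -p profile.xml profile.xml.bk"
done
```

Printing first makes it easy to verify the paths; piping the output to a shell (or wrapping each command in ssh per node) would then execute it.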

Altering the interconnect:

One of the interconnects should be altered to make sure that the ASM listener is able to communicate over that interface too. In this scenario eth2 was used to do so. When doing this, take note of the subnet, since it will be needed to configure the new ASM listener.

oifcfg setif -global eth2/192.168.10.0:cluster_interconnect,asm
oifcfg setif -global eth7/192.168.11.0:cluster_interconnect

Now eth2 shows that it is set up for interconnect and ASM (only one interconnect should be set up to combine cluster_interconnect and asm).

peer [+ASM1]# oifcfg getif

bond0  198.19.11.0  global  public
eth0  10.217.210.0  global  cluster_interconnect,asm
eth2  192.168.10.0  global  cluster_interconnect,asm
eth7  192.168.11.0  global  cluster_interconnect

With this information checked and in place, it is time to set up a new listener for ASM, since the original ASM listener created during the installation used eth0, and eth0 will be dropped – removed from the cluster configuration in the steps below:

The existing listener ASMNET1LSNR will be replaced by a new one, ASMNET122LSNR.

srvctl add listener -asmlistener -l ASMNET122LSNR -subnet 192.168.10.0
(as mentioned, 192.168.10.0 is the subnet of the eth2 interface that we are going to use).

As always, seeing is believing: use crsctl status resource -t to see details similar to below. The new ASM listener is created as a resource, and at this point in time it is OFFLINE OFFLINE on all nodes in the cluster:

--------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------
ora.ASMNET122LSNR_ASM.lsnr
               OFFLINE OFFLINE      mysrvr1dr                 STABLE
               OFFLINE OFFLINE      mysrvr2dr                 STABLE
               OFFLINE OFFLINE      mysrvr3dr                 STABLE
               OFFLINE OFFLINE      mysrvr4dr                 STABLE
               OFFLINE OFFLINE      mysrvr5dr                 STABLE
               OFFLINE OFFLINE      mysrvr6dr                 STABLE
               OFFLINE OFFLINE      mysrvr7dr                 STABLE
               OFFLINE OFFLINE      mysrvr8dr                 STABLE

In the next step we will remove the old ASM listener, using the -force option to prevent errors – messages with regard to dependencies.

srvctl update listener -listener ASMNET1LSNR_ASM -asm -remove -force

I have checked again with crsctl status resource -t to make sure the old resource is gone now.
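That check can also be scripted by grepping the resource list for both listener names (a sketch against a small captured output sample; on a live cluster the input would come from crsctl status resource -t, and the resource names are the ones used above):

```shell
# Verify the old ASM listener resource is gone and the new one exists.
# sample_output stands in for the output of:
#   /app/grid/product/12201/grid/bin/crsctl status resource -t
sample_output="ora.ASMNET122LSNR_ASM.lsnr
ora.asm
ora.net1.network"

if echo "$sample_output" | grep -q "ora.ASMNET1LSNR_ASM.lsnr"; then
  echo "old listener still present"
else
  echo "old listener removed"
fi
echo "$sample_output" | grep -q "ora.ASMNET122LSNR_ASM.lsnr" && echo "new listener present"
```

Note that the grep for ASMNET1LSNR does not accidentally match ASMNET122LSNR, since the characters after ASMNET1 differ.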

Stopping the old ASM listener

In the MOS note there is a little inconsistency, because it claims that as a next step the old ASM listener should be stopped. I could still see the listener at OS level on the machine(s) (ps -ef | grep -i inherit), but I was not able to stop it, since the cluster resource was already gone and lsnrctl did not work. Solution: I noticed that when I skipped this step and stopped and started the cluster, which is mandatory in this scenario anyway, the listener was gone on all nodes.

According to the note this command should be given, but that is NOT working:
lsnrctl stop ASMNET1LSNR_ASM

Check configuration before restarting GI:

First command:
srvctl config listener -asmlistener

Name: ASMNET122LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.10.0
Home: <CRS home>
End points: TCP:1527
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

Second Command:
srvctl config asm

ASM home: <CRS home>
Password file: +VOTE/orapwASM
Backup of Password file:
ASM listener: LISTENER
ASM instance count: ALL
Cluster ASM listener: ASMNET122LSNR_ASM

Both results look good, so time to move to the next step: restarting the Grid Infrastructure on all nodes.

Restarting Grid infrastructure on all Nodes:

For this next step you have to become root (or sudo su -). First, and importantly, make sure that the Grid Infrastructure does not restart automatically should a cluster node perform a reboot (disable crs), then stop the Grid Infrastructure software:

As root

/app/grid/product/12201/grid/bin/crsctl disable crs
/app/grid/product/12201/grid/bin/crsctl stop crs
To be done on: mysrvr[1-8]dr

Checking network configuration on all nodes.

mysrvr1dr:root:/root $ ifconfig -a

Starting cluster again:

As root

/app/grid/product/12201/grid/bin/crsctl enable crs
/app/grid/product/12201/grid/bin/crsctl start crs

To be done on: mysrvr[1-8]dr
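The disable/stop and enable/start sequence on mysrvr[1-8]dr can be sketched as one dry-run loop (it only prints the per-node commands; running them via ssh as root is an assumption that must match your environment):

```shell
# Print the crsctl sequence for every node (dry run: nothing is executed).
CRSCTL=/app/grid/product/12201/grid/bin/crsctl
for n in $(seq 1 8); do
  echo "ssh root@mysrvr${n}dr ${CRSCTL} disable crs && ssh root@mysrvr${n}dr ${CRSCTL} stop crs"
done
# ... check the network configuration on all nodes here (ifconfig -a) ...
for n in $(seq 1 8); do
  echo "ssh root@mysrvr${n}dr ${CRSCTL} enable crs && ssh root@mysrvr${n}dr ${CRSCTL} start crs"
done
```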

Final checks:

oifcfg getif

bond0  198.19.11.0  global  public
eth0  10.217.210.0  global  cluster_interconnect,asm
eth2  192.168.10.0  global  cluster_interconnect,asm
eth7  192.168.11.0  global  cluster_interconnect

Time to delete eth0

Since eth0 is the admin LAN and, after our reconfiguration steps, no longer needed by the cluster, it is time to get rid of eth0 (remove it from the Grid Infrastructure).

oifcfg delif -global eth0/10.217.210.0 

And a last check again:

oifcfg getif

bond0  198.19.11.0  global  public
eth2  192.168.10.0  global  cluster_interconnect,asm
eth7  192.168.11.0  global  cluster_interconnect

Happy reading, and till we meet again,

Mathijs.

Importing Data via Network

Introduction:

For two projects there has been an assignment to upgrade to Oracle 11.2.0.4. One environment was already on 11.2.0.3 with the same cluster stack below it, and one environment will come from 10.2.0.4 on Solaris. For both projects an 11.2.0.4 cluster stack plus database version has been set up on one of the newer shared Linux clusters. Both environments will be migrated using the export – import method, since they are relatively small (approx. 400-500 GB), and of course since one of them is being migrated cross-platform (from Solaris to Linux) you do not have that much choice anyway.

In other projects I have had good experience with NFS filesystems between source and target servers, and at first I was aiming to use them again during these migrations. However, since not every project is able to meet its timelines (we would have to wait at least 2 more weeks to get the NFS mounts), other creativity was required. In this specific case we will work with Data Pump via the network.

When looking into this I came across two scenarios. The first, covered by a fellow blogger, is interesting since it offers the option to export directly into an ASM disk group. In that scenario an extra step is needed: an impdp with a directory pointing to the same ASM diskgroup/subdirectory. The second scenario, explained in more detail here, goes one step further. It is simple: use impdp via a db link directly into the database. There is not even a need to park a dumpfile somewhere on a filesystem or in a diskgroup first and then run the import. Nope, just another impdp and you are there!

1. Setting up a tnsnames entry on the target (receiving) side.

In order to make this scenario work, you will have to make sure that there is no firewall in place blocking access to the source database you will pull the data from when you create the tnsnames.ora entry on the target side.

In my case:

I always try a: telnet <ip> <port>

telnet 666.233.103.203 33012

If you see something like 'trying …' and nothing else happens, well, this was not your lucky day and a firewall is blocking you from making this a happy scenario. If you see something like the following, lucky you:

Escape character is '^]'.

The recommendation when you get stuck at 'trying …' is to make sure that the firewall is opened. In my case the host was a VIP address for a RAC database, and port 33012 had been assigned to the local listener of that database.
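As an alternative to telnet, the same reachability test can be scripted with bash's built-in /dev/tcp device (a sketch; the host and port are the example values from above, and timeout is the coreutils command):

```shell
# Print "open" if a TCP connection succeeds within 3 seconds,
# "blocked/closed" otherwise (firewall drop or connection refused).
check_port() {
  host="$1"; port="$2"
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} blocked/closed"
  fi
}

check_port 666.233.103.203 33012
```

A 'blocked/closed' result corresponds to the endless 'trying …' of telnet; in that case the firewall still needs to be opened.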

## Let's set up the tnsnames entry. NOTE: the firewall needs to be opened before proceeding with tnsping etc.:

MBMYDB =
  (DESCRIPTION=
    (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=TCP)(HOST=666.233.103.203)(PORT=33012))
    )
    (CONNECT_DATA=
      (SERVICE_NAME=MYDB.test.nl)
    )
  )

One interesting part is that the service_name I wanted to use in the tnsnames entry was not present as a service in the database, so I had to extend the existing service (which was not the default service, since it was without a domain).

## On the source side, in the database I want to take the data from, I added the service:

alter system set service_names = 'MYDB','MYDB.test.nl' scope = both;

SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      MYDB, MYDB.test.nl

So now we have two services in place which we can use in the tnsnames.ora.

2. Time to set up a public dblink.

## Reading articles by fellow bloggers, they recommend creating a PUBLIC db link (this seems mandatory). Since in my case I would do the import as system, a normal db link would be okay too. But for the scenario's sake a public database link is fine.

drop public DATABASE LINK old_MYDB;

## worked with this one

CREATE public DATABASE LINK old_MYDB CONNECT TO system IDENTIFIED BY xxxxxxx USING 'mbMYDB';

3. Seeing is believing: test the db link.

## performed select

select 'x' from dual@old_MYDB;

4. Next stop: creating a directory for the logfile of the impdp.

Yes, that is correct: only a directory for the log file, not for the dump itself :) – that is why I liked this scenario so much.

## created directory for the logfile

create directory acinu_imp as '/opt/oracle/MYDB/admin/create';

grant read,write on directory acinu_imp to system;

5. Time to perform the import.

Over the years I have used expdp and impdp a lot, but most of the time as an almost 1:1 clone of exp/imp. Since Google is your friend when looking for scenarios, it was great to explore the powerful exclude= parameter. As you will see, this creates an import of the full database while excluding the schemas I don't care about.

Since I was, hmm, energy efficient, I wanted to type the full statement on the Linux command line, but was punished for having double quotes in my command. Had I used a parfile, things would have been easier :). But since I wanted to stick to the scenario, I found that at OS level a \ is mandatory before each inner double quote, like below:

## performed the import with success with the command below

impdp system full=yes "EXCLUDE=SCHEMA:\"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')\"" network_link=old_MYDB directory=acinu_imp logfile=AcinupImport.log parallel=2 job_name=MYDB_DMP_FULL
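For comparison, the same import expressed as a parfile avoids the backslash escaping entirely, since inside a parfile the double quotes need no escaping (a sketch; the file name imp_full.par is my own choice, the parameter values are the ones used in this scenario):

```
# imp_full.par -- no shell escaping needed inside a parfile
full=yes
EXCLUDE=SCHEMA:"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')"
network_link=old_MYDB
directory=acinu_imp
logfile=AcinupImport.log
parallel=2
job_name=MYDB_DMP_FULL
```

It would then be invoked as: impdp system parfile=imp_full.par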

## Note

At first all my attempts failed with the error below:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39200: Link name "OLD_ACINUP" is invalid.
ORA-02019: connection description for remote database not found

This made me check the services in the database and the entry in the tnsnames, and test it all again. After that, as the A-Team's Hannibal would say, 'I love it when a plan comes together': it worked!

Happy reading ,

And always don’t believe it just because it is printed.

Mathijs Bruggink