Installing GoldenGate 12.3 in a Different Group

Summary

Do you know the feeling that sometimes it is easier to send a message to yourself (no message to Rudy) than to take notes? Well, this is another fine example of that. For a project I had to install Oracle GoldenGate in a cluster, and below are the steps that were followed to do that. An extra flavor to this assignment was the fact that the team that will use GoldenGate in BAU (business as usual) is not the same team I am part of, so in this case it was more or less "thou shall not install as oracle:dba".

Details:

  • As a starting point I created a disk group in ASM and set up an ACFS filesystem on it. By doing so this mount point (and also the software) becomes shared and usable on all cluster nodes.
  • Location of the software: oracle@myStartingHost:/opt/oracle/Odrive/depot/software/oracle/GoldenGate/Linux_64/123012_fbo_ggs_Linux_x64_shiphome.zip

On the target cluster I checked the current groups and requested that the Linux team add the group adba to the oracle account I am using (in this scenario the oracle account is required to have adba among its groups).

oracle@mysrvr:/app/oracle/GG12/# groups oracle
  • Switch to the new group so that the GoldenGate tool is installed under the other team's group (a quick check of the effective group follows below). On Linux run this command:
newgrp adba
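
A quick way to confirm that the new shell really runs with adba as its effective group before starting the installer (a small check of my own, not part of the original notes):

## should now print adba instead of the old primary group
id -gn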
  • unzip the file
  • The directory oracle@mysrvr:/app/oracle/GG12/fbo_ggs_Linux_x64_shiphome/Disk1 now shows:
oracle@mysrvr:/app/oracle/GG12/fbo_ggs_Linux_x64_shiphome/Disk1 [MIGABP]# ls -lisa
total 24

3276804 4 drwxr-xr-x.  5 oracle adba 4096 Dec  9 15:02 .
3276803 4 drwxr-xr-x.  3 oracle adba 4096 Dec  9 15:02 ..
3276808 4 drwxr-xr-x.  4 oracle adba 4096 Dec  9 15:02 install
3276805 4 drwxrwxr-x.  2 oracle adba 4096 Dec  9 15:02 response
3276807 4 -rwxr-xr-x.  1 oracle adba  918 Dec  9 15:02 runInstaller
3276830 4 drwxr-xr-x. 11 oracle adba 4096 Dec  9 15:02 stage

 

  • Ran ./runInstaller

## Issue I came across: the port that GoldenGate likes to claim (7809) was already in use:

As root, run netstat -tulpn | grep 7809 to find the process holding the port and kill it before reinstalling.
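
A minimal sketch of that cleanup (the PID and process name in the sample output are just an example):

## as root: find which process is holding port 7809
netstat -tulpn | grep 7809
## tcp   0   0 0.0.0.0:7809   0.0.0.0:*   LISTEN   12345/mgr
## stop that process so the new GoldenGate manager can claim the port
kill 12345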

 

Setting up GoldenGate in this way creates a proper entry in the oraInventory.

The next challenge, but that is for a different post, is to add GoldenGate to the Oracle Grid Infrastructure as a resource, so the clusterware will monitor and respond to this resource should the node restart, or relocate the resource when that would help lift some burden off that server. Stay tuned.

Happy reading ,

Mathijs

 

 

Reconfigure RAC 12.2 Grid Infrastructure network

Summary.

In 12.2 Grid Infrastructure Oracle has made Flex ASM the default ASM concept. This blog focuses on re-configuring the Oracle RAC 12.2 Grid Infrastructure network components, like the interconnect and the public interface, or changing an interface to do-not-use, whenever that applies or is an improvement to the situation at hand. Read it carefully and in full before performing it on one of your clusters. Baseline for this action is a document on MOS: How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1).

Details:

As with any change, when going through input – processing – output it is important to have a clear picture of the situation as-is. So a first and mandatory step is to check with the oifcfg getif command how things look before starting the changes:

Entering the command should show information about the network interfaces known to the RAC cluster, similar to the output below:

oracle@mysrvr1dr:/app/oracle/stage/27468969 [+ASM1]# oifcfg getif

bond0  198.19.11.0  global  public
eth0  10.217.210.0  global  cluster_interconnect,asm
eth2  192.168.10.0  global  cluster_interconnect
eth7  192.168.11.0  global  cluster_interconnect

Here bond0 is used as public, eth0 currently carries both the cluster interconnect and ASM traffic, and eth2 and eth7 are dedicated to the interconnect. Eth0 is actually defined as the admin LAN for various activities. In this setup the cluster is unstable and nodes are being evicted, so it is time to perform steps to stabilize it.

From the MOS note we follow Case IV: Changing private network interface name, subnet or netmask, for 12c Oracle Clusterware with Flex ASM.

Precaution: take a backup of profile.xml on each node.

Take a backup of profile.xml on all cluster nodes before proceeding, as the grid user. In this specific case that is the user that installed the Grid Infrastructure (in this scenario the oracle user):
Command:

$ cd $GRID_HOME/gpnp/<hostname>/profiles/peer/
 $ cp -p profile.xml profile.xml.bk
cd /app/grid/product/12201/grid/gpnp/mysrvr1dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr2dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr3dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr4dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr5dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr6dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr7dr/profiles/peer
cp -p profile.xml profile.xml.bk

cd /app/grid/product/12201/grid/gpnp/mysrvr8dr/profiles/peer
cp -p profile.xml profile.xml.bk
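
The same backup can also be scripted instead of stepping through each node by hand; a minimal sketch, assuming password-less ssh as the grid owner (oracle) between the cluster nodes:

## back up profile.xml on all nodes in one go
for node in mysrvr1dr mysrvr2dr mysrvr3dr mysrvr4dr mysrvr5dr mysrvr6dr mysrvr7dr mysrvr8dr
do
  ssh ${node} "cd /app/grid/product/12201/grid/gpnp/${node}/profiles/peer && cp -p profile.xml profile.xml.bk"
done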

Altering the interconnect:

One of the interconnects has to be altered to make sure that the ASM listener is able to communicate over that interface too. In this scenario eth2 was used for that. When doing this, take note of the subnet since it will be needed to configure the new ASM listener.

oifcfg setif -global eth2/192.168.10.0:cluster_interconnect,asm
oifcfg setif -global eth7/192.168.11.0:cluster_interconnect

Now eth2 shows that it is set up for both interconnect and ASM (only one interconnect should be set up with the combination cluster_interconnect,asm).

peer [+ASM1]# oifcfg getif

bond0  198.19.11.0  global  public
eth0  10.217.210.0  global  cluster_interconnect,asm
eth2  192.168.10.0  global  cluster_interconnect,asm
eth7  192.168.11.0  global  cluster_interconnect

With this information checked and in place, it is time to set up a new listener for ASM, since the original ASM listener created during the installation used eth0, and eth0 will be dropped – removed from the cluster configuration in the steps below:

The existing listener ASMNET1LSNR will be replaced by the new one, ASMNET122LSNR.

srvctl add listener -asmlistener -l ASMNET122LSNR -subnet 192.168.10.0
(as mentioned, this is the eth2 subnet that we are going to use).

As always, seeing is believing: use crsctl status resource -t to see details similar to below. The new ASM listener is created as a resource and at this point in time it is OFFLINE/OFFLINE on all nodes in the cluster:

--------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------
ora.ASMNET122LSNR_ASM.lsnr
               OFFLINE OFFLINE      mysrvr1dr                 STABLE
               OFFLINE OFFLINE      mysrvr2dr                 STABLE
               OFFLINE OFFLINE      mysrvr3dr                 STABLE
               OFFLINE OFFLINE      mysrvr4dr                 STABLE
               OFFLINE OFFLINE      mysrvr5dr                 STABLE
               OFFLINE OFFLINE      mysrvr6dr                 STABLE
               OFFLINE OFFLINE      mysrvr7dr                 STABLE
               OFFLINE OFFLINE      mysrvr8dr                 STABLE

In the next step we remove the old ASM listener, using the -force option to suppress errors and messages with regard to dependencies.

srvctl update listener -listener ASMNET1LSNR_ASM -asm -remove -force

I have checked again with crsctl status resource -t to make sure the old resource is gone now.

Removing the old ASM listener

In the MOS note there is a little inconsistency, because it claims that as a next step the old ASM listener should be stopped. I was able to grep for the listener (ps -ef | grep -i inherit) and I saw it on OS level on the machine(s), but I was not able to stop that listener, since the cluster resource was already gone and lsnrctl did not work. Solution: I noticed that when I skipped this step and stopped and started the cluster, which is mandatory in this scenario anyway, the listener was gone on all nodes.

According to the note this command should have been given, but it is NOT working:
lsnrctl stop ASMNET1LSNR_ASM

Check configuration before restarting GI:

First command:
srvctl config listener -asmlistener

Name: ASMNET122LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.10.0
Home: <CRS home>
End points: TCP:1527
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

Second Command:
srvctl config asm

ASM home: <CRS home>
Password file: +VOTE/orapwASM
Backup of Password file:
ASM listener: LISTENER
ASM instance count: ALL
Cluster ASM listener: ASMNET122LSNR_ASM

Both results look good, so time to move to the next step: restarting the Grid Infrastructure on all nodes.

Restarting Grid infrastructure on all Nodes:

For this next step you have to become root (or sudo su -). First, and importantly, make sure that the Grid Infrastructure does not restart automatically should a cluster node reboot (disable crs), then stop the Grid Infrastructure software:

As root

/app/grid/product/12201/grid/bin/crsctl disable crs
/app/grid/product/12201/grid/bin/crsctl stop crs
To be done on: mysrvr[1-8]dr

Checking network configuration on all nodes.

mysrvr1dr:root:/root $ ifconfig -a

Starting cluster again:

As root

/app/grid/product/12201/grid/bin/crsctl enable crs
/app/grid/product/12201/grid/bin/crsctl start crs

To be done on: mysrvr[1-8]dr

Final checks:

oifcfg getif

bond0  198.19.11.0  global  public
eth0  10.217.210.0  global  cluster_interconnect,asm
eth2  192.168.10.0  global  cluster_interconnect,asm
eth7  192.168.11.0  global  cluster_interconnect

Time to delete eth0

Since eth0 is the admin LAN, after our reconfiguration steps it is time to get rid of eth0 (remove it from the Grid Infrastructure):

oifcfg delif -global eth0/10.217.210.0 

And a last check again:

oifcfg getif

bond0  198.19.11.0  global  public
eth2  192.168.10.0  global  cluster_interconnect,asm
eth7  192.168.11.0  global  cluster_interconnect

Happy reading, and till we meet again,

Mathijs.

 

 

Some things happened (while installing 12.2 GI in a Rac cluster).

Summary:

One of the fine people in the Oracle community once shared a thought about repairing a bike once and then repeating that plan of approach N times. The lesson learned from that scenario is that it is best to see each bike as a new challenge which deserves a fresh approach. In this blog I will describe a number of things I came across when setting up Grid Infrastructure 12.2 with the January 2018 PSU on a multi-node cluster.

Details – Things to look after:

  • Locating the log files of the installation can make a world of difference. Make sure you know where they are and have them tailed during the whole install (see the sketch right after this list). In my case the subdirectories and log files were under /app/oraInventory/logs/GridSetupActions2018-04-26_09-39-53AM.
  • In the past you always had one destination to unzip your software zip, and during installation the runInstaller would ask for an installation location. With 12.2 (and in Oracle 18 Grid Infrastructure) that is no longer the case: create the subdirectory where the software is supposed to be installed and unzip your files there as a first step.
  • runInstaller is no more… In order to start the installation process you will have to run this command: ./gridSetup.sh
  • When installing, as in my case, on Red Hat Linux 7.4 with a patched kernel, you might come across: ACFS-9154: Loading 'oracleoks.ko' driver. > modprobe: ERROR: could not insert 'oracleoks': Unknown symbol in module, or unknown parameter (see dmesg) > ACFS-9109: oracleoks.ko driver failed to load. > ACFS-9178: Return code = USM_FAIL > ACFS-9177: Return from 'ld usm drvs' > ACFS-9428: Failed to load ADVM/ACFS drivers. A system reboot is recommended. You can solve that by running gridSetup.sh with a parameter that applies the patch(es) first: ./gridSetup.sh -applyPSU /app/grid/product/12201/grid/27100009. Translated, this means that the PSU patch needs to be applied first, and only then can gridSetup start its setup.
  • The screens during setup have changed. In my case I selected the option shown in the screenshot below, which also brought me Flex ASM, the default in 12.2.
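
A minimal sketch of keeping an eye on the installer logs while gridSetup.sh runs (the directory name is the one from my run; yours will carry a different timestamp):

## follow all log files the installer writes for this session
tail -f /app/oraInventory/logs/GridSetupActions2018-04-26_09-39-53AM/*.log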

 

[Screenshot: selected cluster configuration option]

When installing GI as a standalone cluster, in the follow-up screens you are asked to add the nodes of your cluster either as a Hub or as a Leaf node, thus differentiating which nodes get a dedicated ASM instance (Hub) and which nodes will communicate remotely with one of the Hub ASM instances. After the install I learned that in 12.2, by default, only 3 ASM instances are created, no matter how many nodes there are in your cluster.

  • SCAN listener: as a preparation, make sure the colleagues from the Linux team have added the 3 IPs for your cluster to the DNS, and try an nslookup first before installing. During installation, when you have to add the cluster name (here presented as mycluster), the installer will also show a SCAN name, and most likely you will have to alter it to match the information in DNS needed for the setup (in DNS, mycluster-scan.prod.nl needs to be present with 3 IP addresses).

[Screenshot: cluster name and SCAN name screen]

  • In the cluster node screen you add all the nodes of your cluster. In this case I intended to set up each node as a Hub (thus expecting that there would be 8 ASM instances in place too, which was not the case, but that is elaborated on in the other topic).
  • On this screen you add the nodes using the add button.
  • On this screen you can also set up SSH connectivity between all the nodes. From various blogs and from the documentation it was not clear to me what the preferred way to do this is; I had the tool set up SSH connectivity between all nodes and was happy with the result.
  • Once completed press next and the tool will show something like “validating node readiness”.

[Screenshot: cluster node information / node readiness validation]

  • In the specify Network Interface usage screen:
  • Best practice / lesson learned: make sure you have consulted the Linux team about the interfaces. In my specific case ETH0 is the admin LAN, so you should set it to Do Not Use. Eth2 and eth7 are the private interconnects. Make sure that only one of them has the option Private, ASM. (In a Flex ASM cluster ASM needs a way to communicate via its dedicated listener; since by default you will have only one ASM listener, make sure only one of the private interconnects carries the combination of Private and ASM.)

[Screenshot: specify network interface usage screen]

Note: this installation was implemented on new hardware from Dell. During the install we found out that in the original setup the Dell systems use a range of IPs (169.254.*) which is also used by the HAIPs of Oracle. Even if the Linux colleagues might grumble, it is mandatory that the 169.254.* range is not in use! In the first setup the Dell systems had 169.254.* enabled for their iDRAC interface; these IPs have been disabled.

  • For the setup of two disk groups (one for the OCR and voting disks and one for the GIMR, the Grid Infrastructure Management Repository), make sure that the Linux admins have delivered the ASM disks. In my case I got 2 times 3 disks, so I could set up normal-redundancy disk groups for both.
  • On the Summary screen, pay extra attention to make sure that all the cluster nodes that you intend to have in your soon-to-be cluster are showing (Hub nodes: this should show all the nodes). If this is not the case you can select Edit, which will rerun all steps as of Cluster Node Information.

Well that is all for now .. To be continued in a galaxy near you …

As always happy reading and till we meet again.

Mathijs

 

 

Install & Patch 12.2 GI in Oracle Restart

Introduction:

Where would we be without challenges. I have become a member of a project team for a new billing environment, and this team is aiming to use (and go live with) Oracle 12.2 Grid Infrastructure and Database. The information in this article will become a baseline for the installation of several Oracle environments on Linux. Oracle refers to this setup as Oracle Restart. Next in line after that (and I love it) will be Real Application Clusters.

General Preparations 12.2 Grid Kata:

## Identifying ORACLE_BASE and layout of Grid Infrastructure.

echo $ORACLE_BASE
/app/oracle

echo $ORACLE_HOME
/app/grid/product/12.2.0.1/grid

## Identifying ORACLE_BASE  and Db software
echo $ORACLE_BASE
/app/oracle

echo $ORACLE_HOME
/app/oracle/product/12.2.0.1/db

## So for the 12.2 layout which is in scope for the actions on a Restart or Rac environment:

+ASM1     /app/grid/product/12.2.0.1/grid          
CRS       /app/grid/product/12.2.0.1/grid           
-MGMTDB   /app/grid/product/12.2.0.1/grid          
MYDB     /app/oracle/product/12.2.0.1/db

## Checking Red Hat release:

oracle@mysrvr1hr:/dev/mapper []# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.9 (Santiago)

## Oracle restart installation for 12.2  instructions to be found:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ladbi/installing-oracle-grid-infrastructure-for-a-standalone-server-with-a-new-database-installation.html#GUID-0B1CEE8C-C893-46AA-8A6A-7B5FAAEC72B3

An interesting point is that in 12.2 the famous, well-known runInstaller is replaced by ./gridSetup.sh (when opening runInstaller I even ran into error OUI-10133 when running runInstaller in 12.2). A second point of interest is that you have to pre-create the directory where the software will be installed.

[Screenshot: OUI-10133 error when running runInstaller]

## Preparations  for Installation:

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ladbi/installing-oracle-grid-infrastructure-for-a-standalone-server-with-a-new-database-installation.html#GUID-0B1CEE8C-C893-46AA-8A6A-7B5FAAEC72B3

  1. On the server where you will install the Grid Infrastructure, create the directory where you want to install the software (the location you will later on call your ORACLE_HOME). In my specific case that meant: mkdir -p /app/grid/product/12.2.0.1/grid on the installation server.
  2. From Solltau: oracle@myhost:/opt/oracle/Odrive/depot/software/oracle/12c/GI []# scp linuxx64_12201_grid_home.zip oracle@mysrvr1hr:/app/grid/product/12.2.0.1/grid
  3. UNSET your environment variables if any on the installation box:
unset ORACLE_BASE 
unset ORACLE_HOME 
unset GI_HOME 
unset ORA_CRS_HOME 
unset TNS_ADMIN
unset ORACLE_SID
unset ORA_NLS10

echo $ORACLE_BASE etc.

  4. ## Check the zip file in the destination that will also become your ORACLE_HOME for this install.
oracle@mysrvr1hr:/app/grid/product/12.2.0.1/grid []# ls -ltr
total 2924504
-rw-r--r--. 1 oracle dba 2994687209 Jan 3 16:28 linuxx64_12201_grid_home.zip

Make sure you use the future ORACLE_HOME destination to unzip the file. This is mandatory because, compared to previous installations where you could alter the software installation directory, during the 12.2 installation there will NOT be an option to choose the destination of the installation. ## Make sure you are in the directory /app/grid/product/12.2.0.1/grid and extract the zip!!
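
A minimal sketch of that extraction step (-q only keeps the unzip output quiet):

## unzip directly inside the future ORACLE_HOME
cd /app/grid/product/12.2.0.1/grid
unzip -q linuxx64_12201_grid_home.zip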

  5. ## Once the zip file is extracted, fire the script below:
./gridSetup.sh

In one of the following detailed screens, make sure you use “Change Discovery Path” to discover the disks that you will be using for this installation. In my case this meant that the Linux admin colleague had set up and labelled dedicated LUNs (disks) as a preparation for my actions.

root # ls -lH /dev/mapper/ASM_*
brw-rw----. 1 oracle dba 253,  6 Dec 22 16:01 /dev/mapper/ASM_ACFS_035_001
brw-rw----. 1 oracle dba 253, 33 Dec 22 16:01 /dev/mapper/ASM_OCRM_008_001
brw-rw----. 1 oracle dba 253, 34 Dec 22 16:01 /dev/mapper/ASM_OCRM_008_002

brw-rw----. 1 oracle dba 253, 25 Dec 22 16:01 /dev/mapper/ASM_VOTE_008_001
brw-rw----. 1 oracle dba 253, 26 Dec 22 16:01 /dev/mapper/ASM_VOTE_008_002
brw-rw----. 1 oracle dba 253, 30 Dec 22 16:01 /dev/mapper/ASM_VOTE_008_003

 

## Once you have made all the selections needed below screen will appear:

[Screenshot: Grid Infrastructure installation summary screen]

Once you select Install, you will be updated by the next progress screen:

[Screenshot: Grid Infrastructure installation progress screen]

 

## In a separate screen, run ./root.sh as the root user, which will show:

mysrvr1hr:root:/app/grid/product/12.2.0.1/grid # ./root.sh

Performing root user operation.
The following environment variables are set as:

ORACLE_OWNER= oracle
ORACLE_HOME=  /app/grid/product/12.2.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/grid/product/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
/app/oracle/crsdata/mysrvr1hr/crsconfig/roothas_2018-01-03_05-02-27PM.log

## logging details of root.sh :
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'dba'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.

CRS-4664: Node mysrvr1hr successfully pinned.
2018/01/03 17:02:50 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'mysrvr1hr'
CRS-2673: Attempting to stop 'ora.evmd' on 'mysrvr1hr'
CRS-2677: Stop of 'ora.evmd' on 'mysrvr1hr' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'mysrvr1hr' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.

mysrvr1hr     2018/01/03 17:03:44     /app/grid/product/12.2.0.1/grid/cdata/mysrvr1hr/backup_20180103_170344.olr     0
2018/01/03 17:03:49 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
mysrvr1hr:root:/app/grid/product/12.2.0.1/grid #

 

Resolving possible issues:  12.2 GI standalone : [INS-20802] Automatic Storage Management Configuration Assistant failed (Doc ID 2277224.1)

[Screenshot: Grid Infrastructure configuration completing]

 

## The installation will create the ASM instance with a default spfile. Due to company standards and best practice (knowing that size does matter and that default settings will not do well in a heavily used environment), connect to the ASM instance and alter the values below:

## Specific setup for asm Instance

ALTER SYSTEM SET memory_max_target=4096M SCOPE=SPFILE;
ALTER SYSTEM SET memory_target=1536M SCOPE=SPFILE;
ALTER SYSTEM SET large_pool_size=100M SCOPE=SPFILE;
ALTER SYSTEM SET shared_pool_size=512M SCOPE=BOTH;
ALTER SYSTEM SET shared_pool_reserved_size=100M SCOPE=SPFILE;

## Nothing to do with performance but mandatory due to Standards.
ALTER SYSTEM SET audit_file_dest='/app/oracle/+ASM/admin/adump' SCOPE=SPFILE;
ALTER SYSTEM SET background_dump_dest='/app/oracle/diag/asm/+asm/+ASM/trace' SCOPE=BOTH;

 

## Company standards with regard to the listener:

  • Log destination: /app/oracle/diag/tnslsnr/mysrvr1hr/listener
  • One listener per vip

## So I have added a listener with the netca tool, running it from the Grid Infrastructure home.

## /app/oracle/diag/tnslsnr/<servername>/<listenername>/trace

oracle@mysrvr1hr:/app/grid/product/12.2.0.1/grid/network/admin [+ASM]# lsnrctl status LISTENER_MYSRVR1HR

## Deinstallation, when needed. As always you might need a way out (back again).

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ladbi/deinstalling-previous-release-grid-home.html#GUID-D84F94CE-251C-4D3B-9D6E-F584FB30F5B3

Note: For upgrades from previous releases, if you want to uninstall the previous release Grid home, then perform the following steps:

  1. Log in as the root user.
  2. Manually change the permissions of the previous release Grid home (see below).
  3. Run the /app/grid/product/12.2.0.1/grid/deinstall/deinstall command (as oracle User).
For example, on Grid Infrastructure for a standalone server:

# chown -R oracle:dba /app/grid/product/12.2.0.1
# chmod -R 775 /app/grid/product/12.2.0.1

In this example:
/u01/app/oracle/product/11.2.0/grid is the previous release Oracle Grid Infrastructure for a standalone server home

oracle is the Oracle Grid Infrastructure installation owner user
dba is the name of the Oracle Inventory group (OINSTALL group)

For example, on Oracle Database:

# chown -R oracle:dba /app/oracle/product/12.2.0.1
# chmod -R 775 /app/oracle/product/12.2.0.1

 

If all is well, it is time to start patching the environment!

## patching : GI : p26737266_122010_Linux-x86-64.zip

## oracle@soltau2:/opt/oracle/Odrive/depot/software/oracle/patches/Linuxx86 []# scp p26737266_122010_Linux-x86-64.zip oracle@mysrvr1hr:/app/grid/product/12.2.0.1/stage

## check current situation with opatch before patching.

opatch lsinventory -detail -oh /app/grid/product/12.2.0.1/grid

This shows:

Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

Oracle Home       : /app/grid/product/12.2.0.1/grid
Central Inventory : /app/oraInventory
from           : /app/grid/product/12.2.0.1/grid/oraInst.loc

OPatch version    : 12.2.0.1.6
OUI version       : 12.2.0.1.4

Log file location : /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatch/opatch2018-01-05_14-39-27PM_1.log
Lsinventory Output file location : /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-01-05_14-39-27PM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: mysrvr1hr.mydomain 

ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c                                       12.2.0.1.0
There are 1 products installed in this Oracle Home.

Installed Products (99):
etc.
.
.
There are 99 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
--------------------------------------------------------------------------------
OPatch succeeded.

## Use opatch to  check for conflicts:

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26710464
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26925644
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26737232
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26839277
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26928563

## This did not show any conflicts

## Next step: use opatch to check the space requirements (you would not want to end up with the installation failing due to lack of storage):

For Grid Infrastructure Home, as home user:

Create a file /tmp/patch_list_gihome.txt with the following content:

cat /tmp/patch_list_gihome.txt
/app/grid/product/12.2.0.1/stage/26737266/26928563
/app/grid/product/12.2.0.1/stage/26737266/26839277
/app/grid/product/12.2.0.1/stage/26737266/26737232
/app/grid/product/12.2.0.1/stage/26737266/26925644
/app/grid/product/12.2.0.1/stage/26737266/26710464
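
One way to create that file in one go, a minimal sketch using a here-document:

cat > /tmp/patch_list_gihome.txt <<EOF
/app/grid/product/12.2.0.1/stage/26737266/26928563
/app/grid/product/12.2.0.1/stage/26737266/26839277
/app/grid/product/12.2.0.1/stage/26737266/26737232
/app/grid/product/12.2.0.1/stage/26737266/26925644
/app/grid/product/12.2.0.1/stage/26737266/26710464
EOF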

 

Run the opatch command to check if enough free space is available in the Grid Infrastructure Home:

$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt

## this shows

oracle@mysrvr1hr:/app/grid/product/12.2.0.1/stage [+ASM]# $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /app/grid/product/12.2.0.1/grid
Central Inventory : /app/oraInventory
from           : /app/grid/product/12.2.0.1/grid/oraInst.loc

OPatch version    : 12.2.0.1.6
OUI version       : 12.2.0.1.4

Log file location : /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatch/opatch2018-01-05_14-55-06PM_1.log
Invoking prereq "checksystemspace"

Prereq "checkSystemSpace" passed.
OPatch succeeded.


 

## To patch only the GI home:

# opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid

## failed with:

OPATCHAUTO-72046: Invalid wallet parameters.
OPATCHAUTO-72046: The wallet path or wallet password provided is not valid.
OPATCHAUTO-72046: Please provide valid wallet information.

opatchauto bootstrapping failed with error code 46.

## Thank you Mos for elaborating.

 

OPATCHAUTO-72046: Invalid wallet parameters (Doc ID 2150070.1)

 

opatchauto command is not being run as root user.  Opatchauto for Grid PSUs should always be run as root user.

## So as the root user :

/app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid

## and it failed again !!!

mysrvr1hr:root:/root # /app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid

System initialization log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2018-01-05_03-09-09PM.log.
Session log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2018-01-05_03-09-12PM.log

The id for this session is 5LQ1
[init:init] Executing OPatchAutoBinaryAction action on home /app/grid/product/12.2.0.1/grid
Executing OPatch prereq operations to verify patch applicability on SIHA Home........
[init:init] OPatchAutoBinaryAction action completed on home /app/grid/product/12.2.0.1/grid with failure
Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : mysrvr1hr->/app/grid/product/12.2.0.1/grid Type[siha]

Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /app/grid/product/12.2.0.1/grid, host: mysrvr1hr.
Command failed:  /app/grid/product/12.2.0.1/grid/OPatch/opatchauto  apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid -target_type has -binary -invPtrLoc /app/grid/product/12.2.0.1/grid/oraInst.loc -persistresult /app/grid/product/12.2.0.1/grid/OPatch/auto/dbsessioninfo/sessionresult_analyze_mysrvr1hr_siha.ser -analyze -online
Command failure output:
==Following patches FAILED in analysis for apply:

Patch: /app/grid/product/12.2.0.1/stage/26737266/26925644
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-09-16PM_1.log

Reason: Failed during Analysis: CheckNApplyReport Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
Prerequisite check "CheckMinimumOPatchVersion" failed.]
Failed during Analysis: CheckMinimumOPatchVersion Failed, [ Prerequisite Status: FAILED, Prerequisite output:

The details are:
The OPatch being used has version 12.2.0.1.6 while the following patch(es) require higher versions: 

Patch 26710464 requires OPatch version 12.2.0.1.7.

Please download latest OPatch from My Orac ...

etc.
.
.

OPatchAuto failed.

opatchauto failed with error code 42

mysrvr1hr:root:/root #

 

## So I downloaded the latest OPatch version, parked the zip on the targeted server and unzipped it into the Grid home:

unzip p6880880_122011_Linux-x86-64.zip -d /app/grid/product/12.2.0.1/grid

## now  Opatch shows:

oracle@mysrvr1hr:/app/grid/product/12.2.0.1/opatch [+ASM]# opatch version
OPatch Version: 12.2.0.1.11

OPatch succeeded.

## Sometimes you just have to be patient to hear the lambs being silent:

## Next run as the root user :

/app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid

 

## logfiles:

oracle@mysrvr1hr:/app/oracle/crsdata/mysrvr1hr/crsconfig

-rw-rw----. 1 oracle dba  17364 Jan  5 15:35 hapatch_2018-01-05_03-34-42PM.log

-rw-rw----. 1 oracle dba  23725 Jan  5 15:42 hapatch_2018-01-05_03-42-41PM.log

##showed
mysrvr1hr:root:/root # /app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid
OPatchauto session is initiated at Fri Jan  5 15:33:54 2018
System initialization log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2018-01-05_03-33-58PM.log.
Session log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2018-01-05_03-34-02PM.log

The id for this session is XLE2
Executing OPatch prereq operations to verify patch applicability on home /app/grid/product/12.2.0.1/grid
Patch applicability verified successfully on home /app/grid/product/12.2.0.1/grid

Bringing down CRS service on home /app/grid/product/12.2.0.1/grid
Prepatch operation log file location: /app/oracle/crsdata/mysrvr1hr/crsconfig/hapatch_2018-01-05_03-34-42PM.log
CRS service brought down successfully on home /app/grid/product/12.2.0.1/grid

Start applying binary patch on home /app/grid/product/12.2.0.1/grid
Binary patch applied successfully on home /app/grid/product/12.2.0.1/grid

Starting CRS service on home /app/grid/product/12.2.0.1/grid
Postpatch operation log file location: /app/oracle/crsdata/mysrvr1hr/crsconfig/hapatch_2018-01-05_03-42-41PM.log
CRS service started successfully on home /app/grid/product/12.2.0.1/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:mysrvr1hr
SIHA Home:/app/grid/product/12.2.0.1/grid

Summary:
==Following patches were SUCCESSFULLY applied:
Patch: /app/grid/product/12.2.0.1/stage/26737266/26710464
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log

Patch: /app/grid/product/12.2.0.1/stage/26737266/26737232
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log

Patch: /app/grid/product/12.2.0.1/stage/26737266/26839277
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log

Patch: /app/grid/product/12.2.0.1/stage/26737266/26925644
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log

Patch: /app/grid/product/12.2.0.1/stage/26737266/26928563
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log

OPatchauto session completed at Fri Jan  5 15:43:05 2018
Time taken to complete the session 9 minutes, 11 seconds


 

Happy DBA: installed 12.2 GI and patched it with the October 2017 RU.

Thanks for reading and till we meet again,

Mathijs.

Altering the Heartbeat in an Oracle RAC environment.

Introduction

When being asked about part of the job description of a DBA, I like to take a moment, smile, and reply: to serve and to protect the cluster, the databases and of course the data. In the matter at hand this means that the mission/assignment is all about increasing high availability and improving the functionality of the cluster interconnect between nodes, and all that for one price: doubling your cluster interconnect IPs on the various layers.

And to be honest, this assignment is even more interesting because, after all, how often is there a need or a challenge to alter IP addresses once you have set up the cluster? The only valid options I could think of would be an action like lifting and shifting the servers to other rooms, or, as in this case, because we simply want to improve availability.

Some days before the action was scheduled, I looked into the matter of the required dedicated IPs together with the colleagues from the team. In /etc/hosts, being the source for that, we looked for IPs which had hb in the alias of the naming convention, which is the standard on OS level. This information was used to set up the IPs on the OS level. Once that task was completed, a first and very important test had to be: can you ping these new and dedicated IPs from every node which is part of the cluster? And since this is a happy-flow scenario, that was of course what happened. During the maintenance window itself the detailed steps below have been performed to make these addresses known and usable on the cluster layer in the Grid Infrastructure.

Detailed Plan

In this scenario we are using a 4-node RAC cluster on Red Hat with 11.2.0.4 Grid Infrastructure. When looking on the OS level this is what we found already present in the hosts file. And even better, these IPs were all available and not in use.

grep -i hb /etc/hosts
 • 10.124.10.75 mysrvrahr-hb1.dc.nl mysrvrahr-hb1
 • 10.124.11.75 mysrvrahr-hb2.dc.nl mysrvrahr-hb2
 • 10.124.10.76 mysrvrbhr-hb1.dc.nl mysrvrbhr-hb1
 • 10.124.11.76 mysrvrbhr-hb2.dc.nl mysrvrbhr-hb2
 • 10.124.10.77 mysrvrchr-hb1.dc.nl mysrvrchr-hb1
 • 10.124.11.77 mysrvrchr-hb2.dc.nl mysrvrchr-hb2
 • 10.124.10.78 mysrvrdhr-hb1.dc.nl mysrvrdhr-hb1
 • 10.124.11.78 mysrvrdhr-hb2.dc.nl mysrvrdhr-hb2

The steps below have been followed, based on a great MOS note, in order to complete the tasks needed to make the Grid Infrastructure (cluster) aware of the new IPs. The scenario runs through a number of steps in order to be well prepared, but also of course to be on the safe side before and during the changes on the cluster layer. It is like, hmm, paying respect and being brave but cautious.

Preparation steps:
As of 11.2 Grid Infrastructure, the private network configuration is stored not only in the OCR but also in the gpnp profile. The documentation is very clear on this: if the private network is not available or its definition is incorrect, the CRSD process will not start and any subsequent changes to the OCR will be impossible.

  • Therefore care needs to be taken when making modifications to the configuration of the private network.
  • It is important to perform the changes in the correct order.

Note that manual modification of the gpnp profile is not supported, so it is best to stick to proper actions and not go into hacking mode!

So let’s take a backup of profile.xml on all cluster nodes before proceeding:

As the grid user (in my case the oracle user), move to the correct directory (cd $GRID_HOME/gpnp/<hostname>/profiles/peer/):

cd /app/oracle/product/11.2.0/grid/gpnp/mysrvrahr/profiles/peer 
cd /app/oracle/product/11.2.0/grid/gpnp/mysrvrbhr/profiles/peer 
cd /app/oracle/product/11.2.0/grid/gpnp/mysrvrchr/profiles/peer 
cd /app/oracle/product/11.2.0/grid/gpnp/mysrvrdhr/profiles/peer 

During startup of the cluster, Oracle relies on this very important xml file for specific data like the spfile, the disk groups and of course the IPs.

cp -p profile.xml profile.xml.bk

Ensure Oracle Clusterware is running on ALL cluster nodes in the cluster and save the current status of the resources. (Better safe than sorry: make sure you know about the health of the cluster and its resources as they have been defined.) So do check the cluster and save the current status of the resources in a file as a pre-change image.

/app/oracle/product/11.2.0/grid/bin/crsctl check cluster -all 
/app/oracle/product/11.2.0/grid/bin/crsctl status resource -t>/tmp/beforeNewIps.lst

As the grid user (in my case the oracle user): get the existing information.

The command below shows which interfaces are defined in the cluster.
##below you will see that the current (single) cluster interconnect is set up at 64.18.112.208

/app/oracle/product/11.2.0/grid/bin/oifcfg getif
bond1 64.18.112.208  global  cluster_interconnect
bond0  195.233.190.64  global  public

The command iflist will show you the network information known at OS level, listing all (or specific) IPs. The interfaces / subnet addresses can be identified with the command below, here for eth specifically:

/app/oracle/product/11.2.0/grid/bin/oifcfg iflist|grep -i eth|sort 
64.18.32.0 eth0
10.124.10.0 eth2
10.124.11.0 eth6

or

## check  interfaces / subnets in general:
 /app/oracle/product/11.2.0/grid/bin/oifcfg iflist|sort

Since we now have a good picture of the status of the cluster, and since we know more about the IPs being used (oifcfg getif) and the IPs present on the system (oifcfg iflist), all is set to add the new cluster_interconnect information. As you can see, this defines both the eth2 address and eth6, and with the -global parameter the information is shared across the complete cluster on all nodes.

/app/oracle/product/11.2.0/grid/bin/oifcfg setif -global eth2/10.124.10.0:cluster_interconnect 
/app/oracle/product/11.2.0/grid/bin/oifcfg setif -global eth6/10.124.11.0:cluster_interconnect

Of course there cannot be a change without verifying it, so I checked on all nodes with the command below.

/app/oracle/product/11.2.0/grid/bin/oifcfg getif  

Since we are using 11gR2 Grid Infrastructure, the steps below are to be followed now: shut down Oracle Clusterware on all nodes and disable Oracle Clusterware, as root.

Action is to be performed as the root user:
sudo su -
/app/oracle/product/11.2.0/grid/bin/crsctl stop crs
/app/oracle/product/11.2.0/grid/bin/crsctl disable crs

In this specific scenario my Linux brothers in arms had already made the required network configuration change at OS level, and that great job was visible in the oifcfg iflist command. They made sure that the new interfaces were available on all nodes after their change.

(check to ping the interfaces on all nodes with script kindly provided by Linux team member). 
for x in 10 11;do for xx in 75 76 77 78;do ping -c2 10.124.${x}.${xx}|egrep 'icmp_seq|transmitted';done;echo;done 
for x in a b c d; do for xx in 1 2;do ping -c2 mysrvr${x}hr-hb$xx|egrep 'icmp_seq|transmitted';done;echo;done 

Well, all went well and has been checked, so it is time to restart Oracle Clusterware and, once completed, enable Oracle Clusterware again.

On all nodes in the cluster:

## as root user: 
sudo su -  
/app/oracle/product/11.2.0/grid/bin/crsctl start crs

Seeing = believing in this matter, so after some time check:

/app/oracle/product/11.2.0/grid/bin/crsctl check cluster -all 

In the step below we check the status of the resources in the cluster again and write that information to a file. This “post” file is then used to compare the status of the cluster resources before and after.

/app/oracle/product/11.2.0/grid/bin/crsctl status resource -t>/tmp/afterNewIps.lst 
sdiff /tmp/afterNewIps.lst /tmp/beforeNewIps.lst

This compare showed me that a 10g RAC database resource and its services needed my attention, so via the cluster commands I checked and observed their status after starting them with the srvctl command as the oracle user. Once completed I ran another check as described, and, happy me, all resources in the post-status file were in a similar status (ONLINE ONLINE) as in the pre-status file.
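
A minimal sketch of that restart with srvctl, run as the oracle user (the database and service names below are placeholders, not the real ones from this cluster):

srvctl start database -d MYDB10
srvctl start service -d MYDB10 -s MYSERVICE
## verify afterwards
srvctl status database -d MYDB10 -v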

as root user: 
sudo su -  
/app/oracle/product/11.2.0/grid/bin/crsctl enable crs

Time to wrap up this scenario. As part of housekeeping remove the old interface:

/app/oracle/product/11.2.0/grid/bin/oifcfg delif -global bond1/64.18.112.208:cluster_interconnect

Verified the environment one more time.

/app/oracle/product/11.2.0/grid/bin/oifcfg getif

The clusterware had already proven itself of course, but I checked the databases and listeners as a last sanity check, and that completed the tasks for this time.

Time to inform Apps team that they can perform their sanity checks and start the applications again.

Happy reading and till next Time.

Mathijs

Sql report in html & sending mail in linux

Introduction

Being part of the Oracle community on the web is always a great experience to me. Great, because it is inspiring: a lot of the stories, blogs and tweets shared by colleagues in the field are top notch. This week I came across a great tweet from one of my favorite bloggers, Uwe Hesse (https://uhesse.com/2011/06/30/sqlplus-output-in-nice-html-format/), in which he shared how to create sql output in html format. If you do not follow his blog yet, please do, because it is great. So of course I had to test it myself, since it is oh so familiar that getting the data is one thing, but presenting it in a nice way is a different story (with all the set lines 999, col x format y etc.).

Details:

The scenario for this is great:

  1. Run your report( sql) as you are used to
  2. run this html part after.
  3. Either open Firefox with the html file to show and share the info (or, as in my aim, find a way to send it from the Linux box as an attachment).

In the sql script used in his post, Uwe showed that you run this kind of work from a client (being a PC or a (L)Unix box), because once the job is done the results are loaded into Firefox.

set termout off

set markup HTML ON HEAD "" -
BODY "" -
TABLE "border='1' align='center' summary='Script output'" -
SPOOL ON ENTMAP ON PREFORMAT OFF

spool myoutput.html

l
/

spool off
set markup html off spool off
host firefox myoutput.html
set termout on;
quit;

On the Linux boxes where I work, Firefox is mostly not present. So instead of host firefox I am aiming at sending the output as a mail attachment. So far I did a lot of reading and tried various suggestions: I should use mailx -a, but that is not a valid parameter in this Red Hat 5.x release; another option offered on the web was using uuencode, which I also cannot use because it is not in place; and last but not least another great suggestion, to use mutt, failed me too (also not in place). After the weekend I will talk to the Linux colleagues at work to see if there is another option, so that matter is still to be continued. And of course if there is another solution it will be shared.
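
One possible fallback, a minimal sketch assuming /usr/sbin/sendmail and the coreutils base64 binary are available on the box (I have not verified either on this particular release), is to build the MIME message by hand and pipe it to sendmail:

(
  echo "To: dba-team@example.com"
  echo "Subject: SQL report in HTML"
  echo "MIME-Version: 1.0"
  echo "Content-Type: multipart/mixed; boundary=\"REPORTBOUNDARY\""
  echo
  echo "--REPORTBOUNDARY"
  echo "Content-Type: text/plain"
  echo
  echo "Report attached."
  echo "--REPORTBOUNDARY"
  echo "Content-Type: text/html; name=\"myoutput.html\""
  echo "Content-Transfer-Encoding: base64"
  echo "Content-Disposition: attachment; filename=\"myoutput.html\""
  echo
  base64 myoutput.html
  echo "--REPORTBOUNDARY--"
) | /usr/sbin/sendmail -t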

But when looking at this as an example , the work will be worth the effort! Thank you Uwe for this great tweet and Blog post (https://uhesse.com/2011/06/30/sqlplus-output-in-nice-html-format/).

Archivelogs per hour – day.

[Screenshot: the archivelog-per-hour sql script]

[Screenshot: the report in the nice html format]

To be continued…

As always,

Happy reading,

Mathijs.

Importing Data via Network

Introduction:

For two projects there has been an assignment to upgrade to Oracle 11.2.0.4. One environment was already on 11.2.0.3 with the same cluster stack below it, and one environment will come from 10.2.0.4 on Solaris. For both projects an 11.2.0.4 cluster stack plus database version has been set up on one of the newer shared Linux clusters. Both environments will be migrated using the export – import method (since they are relatively small, app. 400-500 GB), and of course since one of them is being migrated cross-platform (from Solaris to Linux) you do not have that much choice anyway.

In other projects I had good experience with nfs filesystems between source and target servers, and at first I was aiming to use them again during these migrations. However, since not every project is able to make it to the timelines (we would have to wait at least 2 more weeks to get the nfs mounts), other creativity is required. In this specific case I will work with datapump via the network.

When looking into this I came across two scenarios. The first scenario, covered by a fellow blogger, is interesting since it offers the option to export directly into an ASM disk group; in that scenario an extra step would be needed, using impdp with a directory pointing to the same asmdiskgroup/subdirectory. The second scenario, which is explained in more detail here, goes even one step beyond: simply using impdp via a dblink directly into the database (not even a need to park a dumpfile somewhere on a filesystem or in a disk group first and then run the import). Nope, just another impdp and you are there!

 

 

1.     Setting up the tnsnames entry on the target (receiving) side.

 

In order to make this scenario work, you will have to make sure that there is no firewall blocking access to the source database you will pull the data from, before you create the tnsnames.ora entry on the target side.

In my case:

 

I always try a: telnet <ip> <port>

telnet  666.233.103.203  33012

 

If you see something like trying… and nothing else happens, well, this was not your lucky day and a firewall is blocking you from making this a happy scenario. If you see something like the line below, lucky you:

Escape character is '^]'.

Recommendation: when you get stuck with trying…, make sure that the firewall is opened. In my case the host was a VIP address of a RAC database and port 33012 had been assigned to the local listener of that database.

 

## Let's set up the tnsnames entry. NOTE: the firewall needs to be opened before you can proceed with tnsping etc.:

MBMYDB =
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=666.233.103.203)(PORT=33012))
)
(CONNECT_DATA=
(SERVICE_NAME=MYDB.test.nl)
)
)
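
With the entry in place and the firewall open, a quick check from the target side (this only proves name resolution and listener reachability, not that the import will work):

## resolve the alias and reach the listener
tnsping MBMYDB
## optionally test a real connection (prompts for the system password)
sqlplus system@MBMYDB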

One interesting part is that the service name I wanted to use in the tnsnames entry was not present as a service in the database, so I had to extend the existing service_names setting (the existing service did not include the domain, so it was not the default service).

 

## On the source side, in the database I want to take the data from, I added the service:

 

alter system set service_names = 'MYDB','MYDB.test.nl' scope = both ;

 

SQL> show parameter service

 

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      MYDB, MYDB.test.nl

 

So now we have two services in place which we can use in the tnsnames.ora.

 

2.     Time to set up a public dblink

 

## Reading articles by fellow bloggers, they recommended creating a PUBLIC db link (this seems mandatory). Since in my case I would do the import as system, a normal db link would be okay too, but for the scenario's sake a public database link is fine.

 

drop public DATABASE LINK old_MYDB;

## worked with this one

CREATE public DATABASE LINK old_MYDB CONNECT TO system IDENTIFIED BY xxxxxxx USING 'mbMYDB';

3.     Seeing is believing , test the db link.

 

## performed select

select 'x' from dual@old_MYDB;

4.     Next stop, creating a directory for the logfile of the impdp.

 

Yes, that is correct: only a directory for the log file, not for the dump itself :) That is why I liked this scenario so much.

 

## created directory for the logfile

create directory acinu_imp as '/opt/oracle/MYDB/admin/create' ;

grant read,write on directory acinu_imp to system;

 

 

 

5.     Time to perform the import.

 

Over the years I have used expdp and impdp a lot, but most of the time as an almost 1:1 clone of exp/imp. Since Google is your friend when looking for scenarios, it was great to explore the powerful exclude= parameter. As you will see, I am creating an import of the full database but excluding the schemas I don't care about.

 

Since I was, hmm, energy efficient, I wanted to type the full statement directly on the Linux command line, but was punished by having double quotes in my command. Had I used a parfile, things would have been easier (see the sketch after the import command). But since I wanted to stick to the scenario, I found that on OS level a backslash before each double quote is mandatory, like below:

 

## performed the import successfully with the command below

 

impdp system full= yes "EXCLUDE=SCHEMA:\"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')\"" network_link=old_MYDB directory=acinu_imp logfile=AcinupImport.log parallel=2 job_name=MYDB_DMP_FULL
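
As mentioned, a parfile avoids all the shell escaping; a minimal sketch of the same import as a parfile (the file name is just an example):

cat > /tmp/impfull.par <<EOF
FULL=YES
EXCLUDE=SCHEMA:"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')"
NETWORK_LINK=old_MYDB
DIRECTORY=acinu_imp
LOGFILE=AcinupImport.log
PARALLEL=2
JOB_NAME=MYDB_DMP_FULL
EOF

impdp system parfile=/tmp/impfull.par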

 

 

## Note

At first all my scenarios had the error below:

 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39200: Link name "OLD_ACINUP" is invalid.
ORA-02019: connection description for remote database not found

 

This made me check the services in the database and the entry in the tnsnames, and test it all again. After that, as the A-Team's Hannibal would say, I love it when a plan comes together: it worked!

 

Happy reading ,

 

And always don’t believe it just because it is printed.

 

Mathijs Bruggink

 

 

Cloning your 11.2 Oracle home from an existing server to a new server

Introduction:

At the moment I am reading a great book on 12c (Pro Oracle Database 12c Administration) by Darl Kuhn (http://www.apress.com/9781430257288), and I really like his style and his elaborations on topics. While I was reading, it was like being shown a mirror: indeed, most times I would also use the runInstaller interactively to run the installation of the Oracle binaries, and yes, I would even frown if I was not allowed to use an xterm to do installations. Well Darl, you gave me inspiration for the setup of an environment I need shortly. So this is a big applause for him already!

Details:

For a data-migration environment I have been asked to set up Oracle on a Linux server identical to the production servers that are in the scope of the project. After the installation I will be working with BCVs to bring copies of a number of databases to this box. The production servers are 11.2.0.2 with one-off patches and a PSU patch. As an important piece of additional information: I have already installed the Grid Infrastructure software on the new box, so I have Oracle Restart with an ASM instance running (that also means I have an oraInst.loc and an existing Oracle inventory in place too). So which way would I head?

  1. Install Oracle binaries from scratch , and apply all the patches
  2. Clone  the Rdbms software from a production box.

Since I had time, I investigated both options, but in the end I loved scenario 2 below, because I had always done the normal interactive installation:

First $ORACLE_HOME was checked:

$ echo $ORACLE_HOME
/opt/oracle/product/112_ee_64/db

Next step was to move to the ORACLE_HOME

$ cd $ORACLE_HOME

Almost there. Moved back (up) one level (so I was in the /opt/oracle/product/112_ee_64 directory).

$ cd ..

I was aiming to use the tar utility to copy everything in the subdirectory db (and below) of /opt/oracle/product/112_ee_64/.

Note: if you get error messages about files not being included (happened to me too), make sure you run something like nohup tar -cvf myorahome.tar db > Mylogfile.lst. Most likely you either ran out of space or did not have enough privileges to tar all files (so I ran the tar as root in the end).

$ tar -cvf myorahome.tar db
  • tar -cvf is the command  to run with the options -cvf
  • myorahome.tar will be holding all the files in all the subdirectories below the /opt/oracle/product/112_ee_64/
  • db is the subdirectory ( part of my ORACLE_HOME) which I need to be added to the tar file.

That created, for a full-sized RDBMS installation with all patches included, a tar file of about 5G. After that the myorahome.tar file was copied to the new server with scp.
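
A minimal sketch of that copy (newsrvr is a placeholder for the new machine's host name):

scp myorahome.tar oracle@newsrvr:/opt/oracle/product/112_ee_64/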

The tar file was copied into the already existing /opt/oracle/product/112_ee_64 on the new machine. There the tar file was extracted, and the extraction created the subdirectory db again (and everything below it):

$ cd /opt/oracle/product/112_ee_64
Note: make sure you have plenty of space on the new server (better check that before extraction).
Next, extract the files:
$ tar -xvf myorahome.tar

The next step was needed to include and register this cloned installation in the Oracle inventory. In my case I used the runInstaller inside the new Oracle home: first to make sure that the new Oracle home was not known in the inventory, by detaching it:

./runInstaller -detachHome ORACLE_HOME="/opt/oracle/product/112_ee_64/db"

Then I added the cloned software to the Inventory:

./runInstaller -silent -ignoreSysPrereqs -attachHome ORACLE_HOME="/opt/oracle/product/112_ee_64/db" ORACLE_HOME_NAME="OraDb11g_home1" LOCAL_NODE="" CLUSTER_NODES=""

But of course the proof of the pudding is in the eating, so once this had finished:

So I checked with OPatch whether the inventory was now aware of all the installed patches.

export PATH=/opt/oracle/product/112_ee_64/db/OPatch:$PATH

I checked that opatch was the correct one:

which opatch

Then I checked the patches with the command below:

opatch lsinventory -oh /opt/oracle/product/112_ee_64/db

And it showed all the correct patches, so indeed I had cloned the Oracle software to this machine the correct way. Yay! And a big thank you to Darl for a great book that pointed me in the right direction.
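If you want a more systematic comparison against the source home, one approach is to dump the patch list on both servers and diff them (a sketch; host names and file names are placeholders):

$ opatch lsinventory -oh /opt/oracle/product/112_ee_64/db > /tmp/patches_new.lst
$ scp oracle@myprodsrvr:/tmp/patches_prod.lst /tmp/           ## the same opatch output, generated on the production box
$ diff /tmp/patches_prod.lst /tmp/patches_new.lst             ## ideally only host-specific lines differ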

Happy reading,

Mathijs

Moving Your 11.2 Database to a new Diskgroup in ASM

Introduction:

I have been assigned a new challenge with Oracle 11.2 on Red Hat Linux with Grid Infrastructure and ASM. Let's start with some background information. When I started building Real Application Clusters with ASM, it was a standard to have merely one disk group for data (often +DATA01) and one disk group for the flash recovery area (+FRA01). Over time that standard was altered: still one data disk group, but at least a dedicated disk group for the flash recovery area. When adding more databases to the same data disk group, one fast-growing database can claim all the space and cause bottlenecks. Nowadays the standard requires a dedicated disk group for both data and FRA. For one of the production databases I have been asked to deal with a scenario where there will be a new dedicated data disk group (the database already had a dedicated FRA disk group).

Below you will find the steps I performed in a sandbox as a scenario for upcoming change:

Details:

Working with the Spfile:

## First we start with finding location of the spfile that is being used.

SQL> show parameter spfile
NAME                                                      TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile                                                string       +DATA01/ASMdummy/spfileASMdummy.ora

## Next step is creating a pfile from this current spfile which will be used to recreate the spfile in the new disk group:

SQL> create pfile='/opt/oracle/ASMDUMMY/admin/pfile/initASMDUMMY.ora' from spfile ;
File created.

## Then create an spfile in the new disk group:

create spfile='+DATA02' from pfile='/opt/oracle/ASMDUMMY/admin/pfile/initASMDUMMY.ora';

## note alertlog shows:

Tue Nov 12 15:43:51 2013
SUCCESS: disk group DATA02 was mounted
Tue Nov 12 15:43:51 2013

NOTE: dependency between database ASMDUMMY and disk group resource ora.DATA02.dg is established

## Shutdown the database:

SQL> shutdown immediate;

## In a second screen, make sure your environment is pointing to the ASM instance, start asmcmd and change to the new disk group:

asmcmd -p

## it shows:

ASMCMD [+DATA02/ASMDUMMY/PARAMETERFILE] > ls -ltr
WARNING:option 'r' is deprecated for 'ls'
please use 'reverse'
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   NOV 12 15:00:00  Y    spfile.256.831311031

## Copy the file spfile.256.831311031 to an alias in the new disk group, which gives it a more human-readable file name:

cp +DATA02/ASMDUMMY/PARAMETERFILE/spfile.256.831311031 +DATA02/ASMDUMMY/spfileASMdummy.ora

### After doing that, on Linux level alter the location of the spfile in the init.ora in $ORACLE_HOME/dbs:

cd /opt/oracle/product/11203_ee_64_a/db/dbs [ASMDUMMY]# ls -ltr  initASMDUMMY.ora
-rw-r----- 1 oracle dba       45 Nov 12 14:07 initASMDUMMY.ora
### current content: initASMDUMMY.ora
SPFILE='+DATA01/ASMDUMMY/spfileASMDUMMY.ora'

## After changing the disk group my new init.ora looks like this:

SPFILE='+DATA02/ASMDUMMY/spfileASMDUMMY.ora'

## Start the database to confirm that it picks up the new spfile:

SQL> startup

Working with the control files and Perform Backup:

SQL> show parameter control_files
NAME                                                      TYPE VALUE
---------------------------- ----------- ------------------------------
control_files                  string  +DATA01/ASMdummy/control01.ctl,+FRA01/ASMdummy/control02.ctl

## Set new location of controlfile in SPFILE:

SQL> alter system set control_files='+DATA02/ASMdummy/control01.ctl' scope=spfile sid='*';

## If you work with only one control file, you have to perform an extra step at the end to get back to two control files (I have documented that step below), but when re-reading this note I think it would have been better to do it like this straight away:

## Set new location of controlfile in SPFILE:

SQL> alter system set control_files='+DATA02/ASMdummy/control01.ctl', '+FRA01/ASMdummy/control02.ctl' scope=spfile sid='*';

## Shutdown your database

SQL> shutdown;

## Open ASMcmd again with the environment pointing to +ASM instance:

ASMCMD> cp +DATA01/ASMdummy/control01.ctl +DATA02/ASMdummy/control01.ctl

## check it:

ls -l  +DATA02/ASMdummy/control01.ctl
ASMCMD [+] > ls -l  +DATA02/ASMdummy/control01.ctl
Type         Redund  Striped  Time             Sys  Name
N    control01.ctl => +DATA02/ASM/CONTROLFILE/control01.ctl.258.831312753

##  Start your database with startup nomount

SQL> startup nomount;

## Start an RMAN session ("rman target /") and restore from the old controlfile; mounting and opening the database follow below:

rman target /
restore controlfile to '+DATA02/ASMdummy/control01.ctl' from '+DATA01/ASMdummy/control01.ctl';

##This will show

Starting restore at 12.11.2013 16:15:54
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
channel ORA_DISK_1: copied control file copy
Finished restore at 12.11.2013 16:15:55

##In Rman you should mount the database:

RMAN> sql 'alter database mount';

## this shows :

sql statement: alter database mount
released channel: ORA_DISK_1
## and in the alert:
Tue Nov 12 16:16:51 2013
SUCCESS: disk group DATA02 was mounted
Tue Nov 12 16:16:51 2013
NOTE: dependency between database ASMDUMMY and disk group resource ora.DATA02.dg is established
Tue Nov 12 16:16:55 2013
Successful mount of redo thread 1, with mount id 2267329971
Database mounted in Exclusive Mode
Lost write protection disabled
Completed: alter database mount

## Now it is time to make a backup of the database into the new disk group (DATA02). If you are in a RAC environment, make sure all other instances are down. Issue the following command in RMAN; it creates a one-to-one copy of the database in the new disk group:

backup as copy database format '+DATA02';
##This will show:
Starting backup at 12.11.2013 16:18:54
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+DATA01/ASMdummy/datafile/system.256.831304997
output file name=+DATA02/ASMdummy/datafile/system.259.831313135 tag=TAG20131112T161854 RECID=2 STAMP=831313144
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=+DATA01/ASMdummy/datafile/sysaux.257.831304997
output file name=+DATA02/ASMdummy/datafile/sysaux.260.831313151 tag=TAG20131112T161854 RECID=3 STAMP=831313157
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+DATA01/ASMdummy/datafile/example.262.831305069
output file name=+DATA02/ASMdummy/datafile/example.261.831313165 tag=TAG20131112T161854 RECID=4 STAMP=831313170
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATA01/ASMdummy/datafile/undotbs1.258.831304997
output file name=+DATA02/ASMdummy/datafile/undotbs1.262.831313173 tag=TAG20131112T161854 RECID=5 STAMP=831313173
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+DATA02/ASMdummy/controlfile/backup.263.831313173 tag=TAG20131112T161854 RECID=6 STAMP=831313175
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+DATA01/ASMdummy/datafile/users.259.831304997
output file name=+DATA02/ASMdummy/datafile/users.264.831313177 tag=TAG20131112T161854 RECID=7 STAMP=831313177
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 12.11.2013 16:19:38
channel ORA_DISK_1: finished piece 1 at 12.11.2013 16:19:39
piece handle=+DATA02/ASMdummy/backupset/2013_11_12/nnsnf0_tag20131112t161854_0.265.831313179 tag=TAG20131112T161854 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 12.11.2013 16:19:39

## Once that has finished, issue the following command in RMAN (note: this 'switch database to copy' command performs the 'set newname' for you):

switch database to copy;
##This will show you:
datafile 1 switched to datafile copy "+DATA02/ASMdummy/datafile/system.259.831313135"
datafile 2 switched to datafile copy "+DATA02/ASMdummy/datafile/sysaux.260.831313151"
datafile 3 switched to datafile copy "+DATA02/ASMdummy/datafile/undotbs1.262.831313173"
datafile 4 switched to datafile copy "+DATA02/ASMdummy/datafile/users.264.831313177"
datafile 5 switched to datafile copy "+DATA02/ASMdummy/datafile/example.261.831313165"

##When that is finished, issue the following command in RMAN:

alter database open;

##Your alert file has been updated with following information:

Tue Nov 12 16:21:17 2013
Switch of datafile 1 complete to datafile copy
checkpoint is 1040434
Switch of datafile 2 complete to datafile copy
checkpoint is 1040434
Switch of datafile 3 complete to datafile copy
checkpoint is 1040434
Switch of datafile 4 complete to datafile copy
checkpoint is 1040434
Switch of datafile 5 complete to datafile copy
checkpoint is 1040434
Tue Nov 12 16:22:00 2013
alter database open
Tue Nov 12 16:22:00 2013
Thread 1 opened at log sequence 10
Current log# 1 seq# 10 mem# 0: +FRA01/ASMdummy/onlinelog/group_1.257.831305063
Current log# 1 seq# 10 mem# 1: +DATA01/ASMdummy/onlinelog/group_1.264.831305765
Successful open of redo thread 1
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Tue Nov 12 16:22:00 2013
SMON: enabling cache recovery
[29312] Successfully onlined Undo Tablespace 2.
Undo initialization finished serial:0 start:3669618038 end:3669618518 diff:480 (4 seconds)
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is WE8MSWIN1252
No Resource Manager plan active
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Tue Nov 12 16:22:03 2013
QMNC started with pid=33, OS id=8677
Tue Nov 12 16:22:06 2013
db_recovery_file_dest_size of 4122 MB is 0.00% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM disk group.
Completed: alter database open
Tue Nov 12 16:22:06 2013
Starting background process CJQ0
Tue Nov 12 16:22:06 2013
CJQ0 started with pid=34, OS id=8763

## At that moment it feels as if we are almost there ..

Create Destination(s)

## Open a Sqlplus session and check for the %create% parameter:

show parameter create
##This shows:
NAME                                                      TYPE VALUE
------------------------------------ ----------- ------------------------------
create_bitmap_area_size                   integer            8388608
create_stored_outlines                       string
db_create_file_dest                             string               +DATA01
db_create_online_log_dest_1           string               +FRA01
alter system set db_create_file_dest='DATA02' sid='*';
alter system set db_create_file_dest='DATA02' sid='*'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-01261: Parameter db_create_file_dest destination string cannot be translated
ORA-01262: Stat failed on a file destination directory
Linux-x86_64 Error: 2: No such file or directory
## Pff, a typo; of course you should add the '+' in front of the disk group name:
SQL> alter system set db_create_file_dest='+DATA02' sid='*';

Working with the temp files:

##Now it is time to work with the temp files. You will have to create a new temp tablespace in the new disk group, make it the default one and drop the old one:

SQL> select FILE_NAME from dba_temp_files;

## This shows:

FILE_NAME
--------------------------------------------------------------------------------
+DATA01/ASMdummy/tempfile/temp.261.831305069

##This will create a new temp tablespace in the new disk group, make it the default temporary tablespace and drop the old one:

create temporary tablespace TEMP02 tempfile size 100m;
alter database default temporary tablespace TEMP02;
drop tablespace TEMP including contents;
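If the drop of the old TEMP complains that it is still in use, a quick look at who is still holding temp segments usually explains it (a sketch against v$tempseg_usage):

SQL> select username, session_num, tablespace
     from v$tempseg_usage
     where tablespace = 'TEMP';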

##Check it again:

SQL> select FILE_NAME from dba_temp_files;
FILE_NAME
--------------------------------------------------------------------------------
+DATA02/ASMdummy/tempfile/temp02.266.831313635

Working with the redo logs:

In this step we first have to add new members to each redo log group (for each thread, in a RAC). After that, and after switching the log files, you can delete the members in the old disk group.

##First check the environment:

Set lines 2000
select * from v$log;
SQL>
GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE        MEMBERS ARC STATUS              FIRST_CHANGE# FIRST_TIME    NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------------- ------------ -------------------
1         1        10   52428800    512       2 YES INACTIVE                  1035793 12.11.2013 14:17:13      1067305 13.11.2013 06:00:48
2         1        11   52428800    512       2 NO  CURRENT            1067305 13.11.2013 06:00:48   2.8147E+14
3         1         9   52428800    512       2 YES INACTIVE                  1035790 12.11.2013 14:17:11      1035793 12.11.2013 14:17:13
## and
SQL> select GROUP#,MEMBER from v$logfile order by 1;
GROUP# MEMBER
---------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1 +FRA01/ASMdummy/onlinelog/group_1.257.831305063
1 +DATA01/ASMdummy/onlinelog/group_1.264.831305765
2 +DATA01/ASMdummy/onlinelog/group_2.265.831305777
2 +FRA01/ASMdummy/onlinelog/group_2.258.831305065
3 +DATA01/ASMdummy/onlinelog/group_3.266.831305783
3 +FRA01/ASMdummy/onlinelog/group_3.259.831305065

##First we add new members in the new (correct) disk group:

alter database add logfile member '+DATA02' to group 1;
alter database add logfile member '+DATA02' to group 2;
alter database add logfile member '+DATA02' to group 3;
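In a RAC the same per-group syntax applies; you only have to make sure that the groups of every thread get a member in the new disk group (a sketch; the group number below is an example, check v$log for your own layout):

SQL> select thread#, group# from v$log order by thread#, group#;
SQL> alter database add logfile member '+DATA02' to group 4;   -- assuming group 4 belongs to thread 2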

###Check again:

select GROUP#,MEMBER from v$logfile order by 1;

## Perform several log switches to make sure the new members have been used:

SQL> alter system switch logfile;

##Alert showed:

Wed Nov 13 07:54:28 2013
Thread 1 advanced to log sequence 12 (LGWR switch)
Current log# 3 seq# 12 mem# 0: +FRA01/ASMdummy/onlinelog/group_3.259.831305065
Current log# 3 seq# 12 mem# 1: +DATA01/ASMdummy/onlinelog/group_3.266.831305783
Current log# 3 seq# 12 mem# 2: +DATA02/ASMdummy/onlinelog/group_3.269.831369169
Wed Nov 13 07:54:28 2013
Errors in file /opt/oracle/diag/rdbms/ASMdummy/ASMDUMMY/trace/ASMDUMMY_arc0_2197.trc:
ORA-00313: open failed for members of log group 2 of thread 1
Errors in file /opt/oracle/diag/rdbms/ASMdummy/ASMDUMMY/trace/ASMDUMMY_arc0_2197.trc:
ORA-00313: open failed for members of log group 2 of thread 1
Wed Nov 13 07:54:29 2013
Errors in file /opt/oracle/diag/rdbms/ASMdummy/ASMDUMMY/trace/ASMDUMMY_m000_22231.trc:
Errors in file /opt/oracle/diag/rdbms/ASMdummy/ASMDUMMY/trace/ASMDUMMY_m000_22231.trc:
Archived Log entry 8 added for thread 1 sequence 11 ID 0x8724b1a2 dest 1:
Wed Nov 13 07:55:38 2013
Thread 1 advanced to log sequence 13 (LGWR switch)
Current log# 1 seq# 13 mem# 0: +FRA01/ASMdummy/onlinelog/group_1.257.831305063
Current log# 1 seq# 13 mem# 1: +DATA01/ASMdummy/onlinelog/group_1.264.831305765
Current log# 1 seq# 13 mem# 2: +DATA02/ASMdummy/onlinelog/group_1.267.831369167
Wed Nov 13 07:55:38 2013
Archived Log entry 9 added for thread 1 sequence 12 ID 0x8724b1a2 dest 1:
Thread 1 cannot allocate new log, sequence 14
Checkpoint not complete
Current log# 1 seq# 13 mem# 0: +FRA01/ASMdummy/onlinelog/group_1.257.831305063
Current log# 1 seq# 13 mem# 1: +DATA01/ASMdummy/onlinelog/group_1.264.831305765
Current log# 1 seq# 13 mem# 2: +DATA02/ASMdummy/onlinelog/group_1.267.831369167
Thread 1 advanced to log sequence 14 (LGWR switch)
Current log# 2 seq# 14 mem# 0: +FRA01/ASMdummy/onlinelog/group_2.258.831305065
Current log# 2 seq# 14 mem# 1: +DATA01/ASMdummy/onlinelog/group_2.265.831305777
Current log# 2 seq# 14 mem# 2: +DATA02/ASMdummy/onlinelog/group_2.268.831369169
Wed Nov 13 07:55:45 2013
Archived Log entry 10 added for thread 1 sequence 13 ID 0x8724b1a2 dest 1:
Wed Nov 13 07:55:59 2013
Thread 1 advanced to log sequence 15 (LGWR switch)
Current log# 3 seq# 15 mem# 0: +FRA01/ASMdummy/onlinelog/group_3.259.831305065
Current log# 3 seq# 15 mem# 1: +DATA01/ASMdummy/onlinelog/group_3.266.831305783
Current log# 3 seq# 15 mem# 2: +DATA02/ASMdummy/onlinelog/group_3.269.831369169
Wed Nov 13 07:55:59 2013
Archived Log entry 11 added for thread 1 sequence 14 ID 0x8724b1a2 dest 1:
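A side note for when you do this on a RAC: instead of switching per instance, 'alter system archive log current' forces a log switch and archiving for all enabled threads in one go (a suggestion to verify in your own setup):

SQL> alter system archive log current;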

##It is time to drop the members from the old disk group (DATA01):

select GROUP#,MEMBER from v$logfile order by 1;
GROUP# MEMBER
---------- -------------------------------------------------------------------------------- -------
1 +DATA01/ASMdummy/onlinelog/group_1.264.831305765
1 +DATA02/ASMdummy/onlinelog/group_1.267.831369167
1 +FRA01/ASMdummy/onlinelog/group_1.257.831305063
2 +DATA02/ASMdummy/onlinelog/group_2.268.831369169
2 +DATA01/ASMdummy/onlinelog/group_2.265.831305777
2 +FRA01/ASMdummy/onlinelog/group_2.258.831305065
3 +DATA02/ASMdummy/onlinelog/group_3.269.831369169
3 +DATA01/ASMdummy/onlinelog/group_3.266.831305783
3 +FRA01/ASMdummy/onlinelog/group_3.259.831305065
9 rows selected.

##So we have to drop the redo members that point to the old ( DATA01 ) Disk group:

alter database drop logfile member '+DATA01/ASMdummy/onlinelog/group_1.264.831305765';
alter database drop logfile member '+DATA01/ASMdummy/onlinelog/group_2.265.831305777';
alter database drop logfile member  '+DATA01/ASMdummy/onlinelog/group_3.266.831305783';
##When I did this, the following error occurred:
SQL> alter database drop logfile member  '+DATA01/ASMdummy/onlinelog/group_3.266.831305783'
*
ERROR at line 1:
ORA-01609: log 3 is the current log for thread 1 - cannot drop members
ORA-00312: online log 3 thread 1: '+FRA01/ASMdummy/onlinelog/group_3.259.831305065'
ORA-00312: online log 3 thread 1: '+DATA01/ASMdummy/onlinelog/group_3.266.831305783'
ORA-00312: online log 3 thread 1: '+DATA02/ASMdummy/onlinelog/group_3.269.831369169'
##So I needed another log switch since group 3 was current.
SQL> alter system switch logfile;
System altered.
SQL> alter database drop logfile member  '+DATA01/ASMdummy/onlinelog/group_3.266.831305783';
Database altered.

##Checked again

select GROUP#,MEMBER from v$logfile order by 1;
GROUP# MEMBER
---------- --------------------------------------------------------------------------------
1 +FRA01/ASMdummy/onlinelog/group_1.257.831305063
1 +DATA02/ASMdummy/onlinelog/group_1.267.831369167
2 +DATA02/ASMdummy/onlinelog/group_2.268.831369169
2 +FRA01/ASMdummy/onlinelog/group_2.258.831305065
3 +DATA02/ASMdummy/onlinelog/group_3.269.831369169
3 +FRA01/ASMdummy/onlinelog/group_3.259.831305065
6 rows selected.

Working in the clusterware:

After these activities I tried stopping and starting via srvctl (this is an Oracle Restart environment), and the start failed. The alert log showed error messages, and I noticed that the instance was using the old spfile in +DATA01 again. I checked the init.ora, which was indeed wrong again:

cd :/opt/oracle/product/11203_ee_64_a/db/dbs [ASMDUMMY]# cat initASMDUMMY.ora
SPFILE='+DATA01/ASMDUMMY/spfileASMDUMMY.ora'                        # line added by Agent

## So the cluster agent had overwritten my change.

## Altered the init.ora again and started the database; that worked.

##In sqlplus:
select name from v$controlfile
union
select name from v$datafile
union
select name from v$tempfile
union
select member from v$logfile
union
select filename from v$block_change_tracking
union
select name from v$flashback_database_logfile;
## This shows:
NAME
--------------------------------------------------------------------------------
+DATA02/ASMdummy/control01.ctl
+DATA02/ASMdummy/datafile/example.261.831313165
+DATA02/ASMdummy/datafile/sysaux.260.831313151
+DATA02/ASMdummy/datafile/system.259.831313135
+DATA02/ASMdummy/datafile/undotbs1.262.831313173
+DATA02/ASMdummy/datafile/users.264.831313177
+DATA02/ASMdummy/onlinelog/group_1.267.831369167
+DATA02/ASMdummy/onlinelog/group_2.268.831369169
+DATA02/ASMdummy/onlinelog/group_3.269.831369169
+DATA02/ASMdummy/tempfile/temp02.266.831313635
+FRA01/ASMdummy/onlinelog/group_1.257.831305063
+FRA01/ASMdummy/onlinelog/group_2.258.831305065
+FRA01/ASMdummy/onlinelog/group_3.259.831305065
14 rows selected.

## In an Oracle Restart or RAC environment you now need to check the clusterware setup, since it holds knowledge about the spfile, the disk groups being used, etc.

##First check the configuration in the Clusterware for the database:

srvctl config database -d ASMDUMMY

##This shows:

Database unique name: ASMDUMMY
Database name: ASMDUMMY
Oracle home: /opt/oracle/product/11203_ee_64_a/db
Oracle user: oracle
Spfile: +DATA01/ASMDUMMY/spfileASMDUMMY.ora <<- OLD DISK GROUP
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: ASMDUMMY
Disk Groups: DATA01,FRA01,DATA02 <<- OLD DISK GROUP
Services:

##So we have to perform two action points:

  • Make the spfile point to the correct disk group (our new +DATA02).
  • The Disk Groups attribute still knows about the DATA01 disk group (and it should not).

##First modification will be to inform the Clusterware which spfile to use:

srvctl modify database -d ASMdummy -p '+DATA02/ASMDUMMY/spfileASMDUMMY.ora'

##After that similar action for the disk groups:

srvctl modify database -d ASMdummy -a 'DATA02,FRA01'

## Don't believe it? Check it:
srvctl config database -d ASMdummy
Database unique name: ASMDUMMY
Database name: ASMDUMMY
Oracle home: /opt/oracle/product/11203_ee_64_a/db
Oracle user: oracle
Spfile: +DATA02/ASMDUMMY/spfileASMDUMMY.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: ASMDUMMY
Disk Groups: DATA02,FRA01

##This looks better so now let’s do a stop & start with srvctl

srvctl stop database -d ASMdummy
srvctl start database -d ASMdummy

##That worked! Happy DBA.

##Since I started the scenario with one control file I needed to go back to two control files:

##Set new location of both ctlfile in SPFILE:

alter system set control_files='+DATA02/ASMdummy/control01.ctl','+FRA01/ASMdummy/control02.ctl' scope=spfile sid='*';

##Perform a shutdown

SQL> shutdown;

## Open ASMcmd (make sure you point to the ASM instance) and copy the controlfile to the Fra01 Disk group to get them in sync again:

ASMCMD> cp +DATA02/ASMdummy/control01.ctl +FRA01/ASMdummy/control02.ctl

##Check it:

ls -l  +FRA01/ASMdummy/control02.ctl

##In Sqlplus startup nomount

SQL> startup nomount;

## Start "rman target /", restore from the old controlfile, then mount and open the database:

RMAN> restore controlfile to '+FRA01/ASMdummy/control02.ctl' from '+DATA02/ASMdummy/control01.ctl';
sql 'alter database mount';
sql 'alter database open';

##In a SQL*Plus session, the final check:

SQL> show parameter control
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      +DATA02/ASMdummy/control01.ctl, +FRA01/ASMdummy/control02.ctl

As always Happy reading,

Mathijs.

Sending Mail from an 11G Oracle Database (UTL_SMTP)

Introduction

I have been asked to document the steps to send mail from inside an Oracle 11g database. There are a lot of good notes about this on the web already, which I used to get the picture and to get things done. Still, to me this post offers a nice overview.

On the Linux level:

## I found smtp.mb-nl.internal.mycompany.com as the mail server.

## By checking /etc/mail/sendmail.cf
## I found:
#DS[mi.mycompany.com]
DS[smtp.mb-nl.internal.mycompany.com]

## So that is my first clue
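Before touching the database, it does not hurt to verify that this host actually accepts SMTP connections (a sketch of a manual session; the client host and addresses are placeholders, see also the MOS note at the end of this post):

$ telnet smtp.mb-nl.internal.mycompany.com 25
## then type the SMTP conversation by hand, for example:
## HELO myclient.mycompany.com
## MAIL FROM:<oracle@myclient.mycompany.com>
## RCPT TO:<mathijs.bruggink@mycompany.com>
## DATA
## Subject: manual smtp test
##
## test body
## .
## QUIT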

Below are the steps you will need to do in the database.

## In order to send mails in Oracle 11g you will need to take care of several steps.

## The UTL_MAIL package is not installed by default in the database (11g); you need to install it yourself:

sqlplus /nolog
connect / as sysdba
@?/rdbms/admin/utlmail.sql
@?/rdbms/admin/prvtmail.plb
grant execute on utl_mail to public;

## Next, add the address & port of the e-mail server (smtp server).

## Hint: If you do not do this, you will receive “ORA-06502: PL/SQL: numeric or value error” when you try to use the UTL_MAIL package.

## Execute the following with user SYS as SYSDBA:

alter system set smtp_out_server = 'smtp.mb-nl.internal.mycompany.com:25' scope=both;

## check what is out there in the database.

select * from dba_network_acls;

## Drop a same-named access control list if it already exists.

BEGIN
   DBMS_NETWORK_ACL_ADMIN.DROP_ACL(
      acl => 'mail_access.xml');
END;
/

## Create an access control list:

BEGIN
   DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl          => 'mail_access.xml',
    description  => 'Permissions to access e-mail server.',
    principal    => 'PUBLIC',
    is_grant     => TRUE,
    privilege    => 'connect');
   COMMIT;
END;
/

## Assign the list to the smtp ( mail server ).

## Note Default port is 25!

BEGIN
   DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl          => 'mail_access.xml',
    host         => 'smtp.mb-nl.internal.mycompany.com',
    lower_port   => 25,
    upper_port   => 25
    );
   COMMIT;
END;
/

## This is an example to check if it is working:

CREATE OR REPLACE PROCEDURE send_mail (p_to        IN VARCHAR2,
                                       p_from      IN VARCHAR2,
                                       p_message   IN VARCHAR2,
                                       p_smtp_host IN VARCHAR2,
                                       p_smtp_port IN NUMBER DEFAULT 25)
AS
  l_mail_conn   UTL_SMTP.connection;
BEGIN
  l_mail_conn := UTL_SMTP.open_connection(p_smtp_host, p_smtp_port);
  UTL_SMTP.helo(l_mail_conn, p_smtp_host);
  UTL_SMTP.mail(l_mail_conn, p_from);
  UTL_SMTP.rcpt(l_mail_conn, p_to);
  UTL_SMTP.data(l_mail_conn, p_message || UTL_TCP.crlf || UTL_TCP.crlf);
  UTL_SMTP.quit(l_mail_conn);
END;

/

The code below shows how the procedure is called:

 BEGIN
  send_mail(p_to        => 'mathijs.bruggink@mycompany.com',
            p_from      => 'oracle@mbnld33r.com',
            p_message   => 'This is a test message.',
            p_smtp_host => 'smtp.mb-nl.internal.mycompany.com');
END;
/

## After feedback that PUBLIC should indeed be avoided in these matters, this is the adapted example:

This is how to set it up not for PUBLIC but for dedicated users.

-- M Bruggink
-- Enabling  Mail functionality in Oracle
-- 20131024

spool 20131024.lst
connect / as sysdba;

@?/rdbms/admin/utlmail.sql
@?/rdbms/admin/prvtmail.plb
grant execute on utl_mail to public;

alter system set smtp_out_server = 'smtp.MB-nl.internal.MYCOMPANY.com:25' scope=both;

BEGIN
   DBMS_NETWORK_ACL_ADMIN.DROP_ACL(
      acl => 'mail_access.xml');
END;
/

--Create an access control list:
BEGIN
   DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl          => 'mail_access.xml',
    description  => 'Permissions to access e-mail server.',
    principal    => 'SYS',
    is_grant     => TRUE,
    privilege    => 'connect');
   COMMIT;
 END;
/

-- Assign the list to the smtp ( mail server ):
-- Note Default port is 25!

BEGIN
   DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl          => 'mail_access.xml',
    host         => 'smtp.MB-nl.internal.MYCOMPANY.com',
    lower_port   => 25,
    upper_port   => 25
    );
   COMMIT;
 END;
/
BEGIN
   DBMS_NETWORK_ACL_ADMIN.add_privilege (
    acl          => 'mail_access.xml',
    principal    => 'SYSTEM',
    is_grant     => TRUE,
    privilege    => 'connect');
   COMMIT;
 END;
/

BEGIN
   DBMS_NETWORK_ACL_ADMIN.add_privilege (
    acl          => 'mail_access.xml',
    principal    => 'SCOTT',
    is_grant     => TRUE,
    privilege    => 'connect');
   COMMIT;
 END;
/

BEGIN
   DBMS_NETWORK_ACL_ADMIN.add_privilege (
    acl          => 'mail_access.xml',
    principal    => 'ARNOLD',
    is_grant     => TRUE,
    privilege    => 'connect');
   COMMIT;
 END;
/
-- check the setup
COLUMN host FORMAT A30
COLUMN acl FORMAT A30

SELECT host, lower_port, upper_port, acl
FROM   dba_network_acls;

COLUMN acl FORMAT A30
COLUMN principal FORMAT A30
set lines 200

SELECT acl,
       principal,
       privilege,
       is_grant,
       TO_CHAR(start_date, 'DD-MON-YYYY') AS start_date,
       TO_CHAR(end_date, 'DD-MON-YYYY') AS end_date
FROM   dba_network_acl_privileges;
spool off
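And as a quick functional test, send a mail as one of the granted users (a sketch; SCOTT's password and the addresses are placeholders):

connect scott/tiger
BEGIN
  UTL_MAIL.SEND(sender     => 'oracle@mbnld33r.com',
                recipients => 'mathijs.bruggink@mycompany.com',
                subject    => 'ACL test',
                message    => 'Mail sent as SCOTT through the new ACL.');
END;
/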

Useful notes on MOS:

NOTE:74269.1 – How to Test an SMTP Mail Gateway From a Command Line Interface

NOTE:130785.1 – How To Send E-mail Using UTL_SMTP

Happy reading,

Mathijs