1. As of 11.2 Grid Infrastructure, the private network configuration is stored not only in the OCR but also in the gpnp profile. If the private network is not available or its definition is incorrect, the CRSD process will not start and any subsequent changes to the OCR will be impossible. Therefore care needs to be taken when making modifications to the configuration of the private network, and it is important to perform the changes in the correct order. Please also note that manual modification of the gpnp profile is not supported.
Please take a backup of profile.xml on all cluster nodes before proceeding, as the grid user:
cd $GRID_HOME/gpnp/<hostname>/profiles/peer/
cd /app/oracle/product/12.x.x/grid/gpnp/mysrvrahr/profiles/peer
cd /app/oracle/product/12.x.x/grid/gpnp/mysrvrbhr/profiles/peer
cp -p profile.xml profile.xml.bk
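If you prefer to script this step, a minimal sketch (assuming passwordless ssh between the nodes, and using the example hostnames and the 12.x.x placeholder path from above):
## back up profile.xml on both cluster nodes in one go
for node in mysrvrahr mysrvrbhr; do
  ssh "${node}" "cd /app/oracle/product/12.x.x/grid/gpnp/${node}/profiles/peer && cp -p profile.xml profile.xml.bk"
done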
2. Ensure Oracle Clusterware is running on ALL nodes in the cluster and save the current status of the resources.
/app/oracle/product/12.x.x/grid/bin/crsctl check cluster -all
/app/oracle/product/12.x.x/grid/bin/crsctl status resource -t > /tmp/beforeNewIps.lst
3. As the grid user (always curious who that might be; in my case it was the oracle user, by the way):
Get the existing information. Here you will see only one interconnect in place, for example:
/app/oracle/product/12.x.x/grid/bin/oifcfg getif
bond1 172.18.112.208 global cluster_interconnect
bond0 195.233.190.64 global public
##
The interfaces / subnet addresses can be identified with this command, for eth interfaces specifically:
/app/oracle/product/12.x.x/grid/bin/oifcfg iflist|grep -i eth|sort
eth0 172.18.32.0
eth2 192.168.10.0
eth6 192.168.11.0
Or check the interfaces / subnets in general on the OS with oifcfg:
/app/oracle/product/12.x.x/grid/bin/oifcfg iflist|sort
4. Add the new cluster_interconnect information:
/app/oracle/product/12.x.x/grid/bin/oifcfg setif -global eth2/192.168.10.0:cluster_interconnect,asm
/app/oracle/product/12.x.x/grid/bin/oifcfg setif -global eth6/192.168.11.0:cluster_interconnect,asm
5. Verify the change:
/app/oracle/product/12.x.x/grid/bin/oifcfg getif
With this information checked and in place, it is time to set up new listeners for ASM, since the original ASM listener from the installation used eth0 and that eth0 will be dropped (removed from the cluster configuration) in the steps below. The existing listener ASMNET1LSNR will be replaced by the new ASMNET122LSNR:
srvctl add listener -asmlistener -l ASMNET1221LSNR -subnet 192.168.10.0 (as mentioned, this is the eth2 interface that we are going to use)
srvctl add listener -asmlistener -l ASMNET1222LSNR -subnet 192.168.11.0 (as mentioned, this is the eth6 interface that we are going to use)
As always, seeing is believing: use crsctl status resource -t to see the details. Note: each new ASM listener is created as a resource and is in status OFFLINE OFFLINE on all nodes in the cluster at this point in time.
In the next step we remove the old ASM listener, using the force option to prevent errors/messages with regard to dependencies:
srvctl update listener -listener ASMNET1LSNR_ASM -asm -remove -force
I checked again with crsctl status resource -t to make sure the old resource is gone now.
Removing the old ASM listener: in the MOS note there is a little inconsistency, because it claims that as a next step the old ASM listener should be stopped. I was able to grep for the listener (ps -ef|grep -i inherit) and saw it at OS level on the machine(s), but I was not able to stop that listener, since the cluster resource was already gone and lsnrctl did not work. Solution: I noticed that when I skipped this step and stopped and started the cluster (which is mandatory in this scenario anyway), the listener was gone on all nodes. This is the command I should have been able to use, but it is NOT working:
lsnrctl stop ASMNET1LSNR_ASM
Check the configuration before restarting GI.
First command:
srvctl config listener -asmlistener
Name: ASMNET122LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.10.0
Home: <CRS home>
End points: TCP:1527
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
Second command:
srvctl config asm
ASM home: <CRS home>
Password file: +VOTE/orapwASM
Backup of Password file:
ASM listener: LISTENER
ASM instance count: ALL
Cluster ASM listener: ASMNET122LSNR_ASM
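A quick way to confirm that the new ASM listener resources exist and to see their state (a minimal sketch; the listener names are the ones added above, adjust them to your own naming):
srvctl status listener -l ASMNET1221LSNR
srvctl status listener -l ASMNET1222LSNR
## or filter the full resource listing
crsctl status resource -t | grep -i -A3 asmnet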
6. In Grid Infrastructure: shut down Oracle Clusterware on all nodes and disable Oracle Clusterware, as the root user (in my example I was allowed to sudo):
sudo su -
/app/oracle/product/12.x.x/grid/bin/crsctl stop crs
/app/oracle/product/12.x.x/grid/bin/crsctl disable crs
7. Make the network configuration change at OS level as required and ensure the new interface is available on all nodes after the change (check by pinging the interfaces on all nodes).
for x in 10 11;do for xx in 75 76 77 78;do ping -c2 192.168.${x}.${xx}|egrep 'icmp_seq|transmitted';done;echo;done
for x in a b c d; do for xx in 1 2;do ping -c2 mysrvr${x}hr-hb$xx|egrep 'icmp_seq|transmitted';done;echo;done
8. Restart Oracle Clusterware on all nodes as the root user:
sudo su -
/app/oracle/product/12.x.x/grid/bin/crsctl start crs
9. Check:
/app/oracle/product/12.x.x/grid/bin/crsctl check cluster -all
/app/oracle/product/12.x.x/grid/bin/crsctl status resource -t > /tmp/afterNewIps.lst
sdiff /tmp/afterNewIps.lst /tmp/beforeNewIps.lst
Enable Oracle Clusterware on all nodes as the root user:
/app/oracle/product/12.x.x/grid/bin/crsctl enable crs
10. Remove the old interface if required:
/app/oracle/product/12.x.x/grid/bin/oifcfg delif -global bond1/172.18.112.208:cluster_interconnect
11. Verify the removal:
/app/oracle/product/12.x.x/grid/bin/oifcfg getif |
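Optionally you can also confirm that the gpnp profile reflects the new interconnect definition. A minimal sketch (run as the grid user; the grep just extracts the <gpnp:Network .../> elements from the single-line profile output):
/app/oracle/product/12.x.x/grid/bin/gpnptool get 2>/dev/null | grep -o '<gpnp:Network [^/]*/>'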
ASM
Make an Oracle export to asm diskgroup
Introduction
Good day to you all, a quick blog on how to make an export in Oracle with Datapump to an ASM diskgroup. The background of such an action could be a scenario where your database server does not provide a "normal" filesystem.
- In an ASM diskgroup: I have created (with asmcmd) +DATA02/MYDEV/DPDUMP.
- In the Oracle database:
create or replace directory ASM_DUMP as '+DATA02/MYDEV/DPDUMP';
grant read,write on directory asm_dump to system;
- On Linux create this subdirectory (note: it is mandatory that the expdp logfile is written to a non-ASM filesystem):
/opt/oracle/TOSCAOSV1/admin/restore/DPLOG
- In the Oracle database create a directory which will hold the logfile of the expdp:
create or replace directory ASMDP_LOGDIR as '/opt/oracle/MYDB1/admin/restore/DPLOG';
grant read,write on directory ASMDP_LOGDIR to system;
- Seeing is believing, so let's check:
Set lines 2000
select * from dba_directories;
SYS ASM_DUMP
+DATA02/TOSCAOSV/DPDUMP
SYS ASMDP_LOGDIR
/opt/oracle/TOSCAOSV2/admin/restore/DPLOG
- Create a Parfile for the expdp
PARALLEL=1
userid=system
directory=ASM_DUMP
logfile=ASMDP_LOGDIR:myMyDevDp.log
DUMPFILE=myMyDev_%U.dmp
FILESIZE=4G
COMPRESSION=ALL
JOB_NAME=backup_MyDev_schemas
SCHEMAS=TOSCAADMIN
oracle@mysrvr1:/opt/oracle/MYDB1/admin/DP [MYDB1]# ls -ltr
total 4
-rw-r--r--. 1 oracle dba 189 Jun 15 13:02 expdp.par
- Run the datapump: expdp parfile=/opt/oracle/MYDB1/admin/DP/expdp.par
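To verify the result, a quick look at the dump files inside ASM (a minimal sketch, using asmcmd against the directory created earlier):
asmcmd ls -ls +DATA02/MYDEV/DPDUMP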
Missing or Corrupted Spfile in ASM instance in Rac
Introduction:
An old saying goes: always expect the unexpected, and this time was another proof of that. During patching of an 8-node cluster we came across two issues on the first node, both requiring a workaround. Issue one was that after applying the January 2019 patches the cluster did not start (that workaround is not part of this note, by the way). The second issue was that once the first workaround was in place, the ASM instance on the first node would not start. This note explains the steps followed to create a new spfile for the ASM instance(s) in a RAC cluster.
General information:
This was the scenario we found ourselves in: rolling patching with the January 2019 patches had started on the first node of a 12.2 Oracle environment (GI and RDBMS). The first node was patched, but CRS would not start (and, to be honest, I never liked opatchauto a lot). Together with Oracle Support a workaround was provided, but after that the ASM instance still would not start. A comparison of environments showed one very significant memory setting on this cluster (possibly a MEMORY_* parameter, while we are using hugepages on that cluster) that prevented ASM from starting once the patching on the first node completed.
As a work around we created a pfile and once the cluster on node one was up we started the asm instance with the pfile ( which was altered by me).
However, we could no longer update the spfile for the other ASM instances, since Oracle told us that in rolling upgrade mode you cannot make changes to the spfile. That meant we performed the patching on all 8 nodes, and once CRS was up we had to start ASM with a copy of the init.ora used on the first node. In our case that meant 8 nodes with a local copy of the init.ora, which did not make us happy at all. It did, however, bring us to the scenario below, where you need to bring ASM back to using an spfile.
Important note: since Oracle 11.2 the GPnP profile is the key to such a change!
From the old days I came up with the scenario below: create a pfile, alter that file to meet your needs, and bring it back as an spfile for the ASM instance:
Scenario
- This can be used with a missing or corrupted spfile.
- It can also be used with an existing spfile holding wrong settings that you can no longer alter because patching has already started (spfile updates are prohibited in a rolling upgrade scenario), so maybe best practice is to analyse the spfile beforehand.
- Important: this scenario requires that the FULL cluster stack is down and that you work on ONE node only!
#### With spget you can check current location of spfile in asmcmd.
ASMCMD [+] > spget
+VOTE/mysrvr18cl/ASMPARAMETERFILE/registry.253.978015605
#### created and altered the pfile on the first node. ( and copied it to all other nodes during the workaround).
oracle@mysrvr1dr:/app/grid/product/12201/grid/dbs []# cd /app/oracle/admin/+ASM1/pfile
oracle@mysrvr1dr:/app/oracle/admin/+ASM1/pfile []# ls -ltr
total 4
-rw-r--r--. 1 oracle dba 2433 Feb 8 09:23 initASM.ora
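For completeness: in case you still have to generate such a pfile, a minimal sketch (assuming an ASM instance can still be reached somewhere, connected as sysasm):
sqlplus / as sysasm
SQL> create pfile='/app/oracle/admin/+ASM1/pfile/initASM.ora' from spfile;
-- or, when the spfile itself is the problem, dump the in-memory settings instead:
SQL> create pfile='/app/oracle/admin/+ASM1/pfile/initASM.ora' from memory;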
##### Starting the cluster, first attempt (I recalled that the cluster needed to be in some kind of restricted mode for this, so the whole cluster was stopped and the command below was issued). Oracle showed mercy and pointed out the correct syntax:
mysrvr1dr:root:/app/grid/product/12201/grid/bin $ ./crsctl start crs restrict
Parse error:
'restrict' is an invalid argument
Usage:
crsctl start crs [-excl [-nocrs | -cssonly]] | [-wait | -waithas | -nowait] | [-noautostart]
Start OHAS on this server
where
-excl Start Oracle Clusterware in exclusive mode
-nocrs Start Oracle Clusterware in exclusive mode without starting CRS
-nowait Do not wait for OHAS to start
-wait Wait until startup is complete and display all progress and status messages
-waithas Wait until startup is complete and display OHASD progress and status messages
-cssonly Start only CSS
-noautostart Start only OHAS
## Then started the cluster in exclusive mode; that failed too, by the way, since it tried to start the ASM instance, which was still using the original spfile with the incorrect information.
mysrvr1dr:root:/app/grid/product/12201/grid/bin $ ./crsctl start crs -excl
- CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘mysrvr1dr’
- CRS-2676: Start of ‘ora.cssdmonitor’ on ‘mysrvr1dr’ succeeded
- CRS-2672: Attempting to start ‘ora.cssd’ on ‘mysrvr1dr’
- CRS-2672: Attempting to start ‘ora.diskmon’ on ‘mysrvr1dr’
- CRS-2676: Start of ‘ora.diskmon’ on ‘mysrvr1dr’ succeeded
- CRS-2676: Start of ‘ora.cssd’ on ‘mysrvr1dr’ succeeded
- CRS-2672: Attempting to start ‘ora.ctssd’ on ‘mysrvr1dr’
- CRS-2672: Attempting to start ‘ora.cluster_interconnect.haip’ on ‘mysrvr1dr’
- CRS-2676: Start of ‘ora.ctssd’ on ‘mysrvr1dr’ succeeded
- CRS-2676: Start of ‘ora.cluster_interconnect.haip’ on ‘mysrvr1dr’ succeeded
- CRS-2672: Attempting to start ‘ora.asm’ on ‘mysrvr1dr’
- CRS-2674: Start of ‘ora.asm’ on ‘mysrvr1dr’ failed
- CRS-2672: Attempting to start ‘ora.storage’ on ‘mysrvr1dr’
- ORA-15077: could not locate ASM instance serving a required diskgroup
- CRS-2674: Start of ‘ora.storage’ on ‘mysrvr1dr’ failed
- CRS-2679: Attempting to clean ‘ora.storage’ on ‘mysrvr1dr’
- CRS-2681: Clean of ‘ora.storage’ on ‘mysrvr1dr’ succeeded
- CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘mysrvr1dr’
- CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘mysrvr1dr’ succeeded
- CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘mysrvr1dr’
- CRS-2677: Stop of ‘ora.ctssd’ on ‘mysrvr1dr’ succeeded
- CRS-4000: Command Start failed, or completed with errors.
## Next attempt: the whole cluster was down; on the first node I performed:
- mysrvr1dr:root:/app/grid/product/12201/grid/bin $ ./crsctl start crs -excl -nocrs
- ## In this -nocrs mode we were now able to start the +ASM1 instance manually.
- SQL> STARTUP PFILE='/app/oracle/admin/+ASM1/pfile/initASM.ora';
- ## Once ASM had started, create the new SPFILE:
- create spfile='+VOTE' from pfile='/app/oracle/admin/+ASM1/pfile/initASM.ora';
- ## In the alert log I noticed the following, which is good since it shows the gpnp profile had been updated accordingly:
- 2019-02-08T16:31:32.839547+01:00
- NOTE: updated gpnp profile ASM SPFILE to
- NOTE: header on disk 0 advanced to format #2 using fcn 0.0
- NOTE: header on disk 2 advanced to format #2 using fcn 0.0
- NOTE: updated gpnp profile ASM diskstring: /dev/mapper/ASM_*
- NOTE: updated gpnp profile ASM diskstring: /dev/mapper/ASM_*
- 2019-02-08T16:31:34.381619+01:00
- NOTE: updated gpnp profile ASM SPFILE to +VOTE/mysrvr18cl/ASMPARAMETERFILE/registry.253.999707493
##### Checked our activities in ASMCMD:
- oracle@mysrvr1dr:/app/oracle/admin/+ASM1/pfile [+ASM1]# asmcmd
- [Option -p will be used ]
- ASMCMD [+] > spget
- +VOTE/mysrvr18cl/ASMPARAMETERFILE/registry.253.999707493
### Checked our activities in gpnptool
oracle@mysrvr1dr:/app/oracle/admin/+ASM1/pfile [+ASM1]# gpnptool get
Warning: some command line parameters were defaulted. Resulting command line:
/app/grid/product/12201/grid/bin/gpnptool.bin get -o-
<?xml version="1.0" encoding="UTF-8"?>
<gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="7" ClusterUId="afc024ecfd5ffff8ffbeda0a212bebe1" ClusterName="mysrvr18cl" PALocation="">
<gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*">
<gpnp:Network id="net1" IP="198.19.11.0" Adapter="bond0" Use="public"/>
<gpnp:Network id="net2" IP="192.168.10.0" Adapter="eth3" Use="asm,cluster_interconnect"/>
<gpnp:Network id="net3" IP="192.168.11.0" Adapter="eth5" Use="cluster_interconnect"/>
</gpnp:HostNetwork></gpnp:Network-Profile>
<orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>
<orcl:ASM-Profile id="asm" DiscoveryString="/dev/mapper/ASM_*" SPFile="+VOTE/mysrvr18cl/ASMPARAMETERFILE/registry.253.999707493" Mode="remote" Extended="false"/>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>QH9UPO559zhufkrc7tFxQts6oF0=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>aL2hOnxyLt5YwMcPjGg8LUDx2KD97Y75eLv+yqvcfQ5O705K8ceQPCnwnsTs4Wn5E1jNeYCEzXnrVp5zM3hMbz9LdEEP2GKk9XJInQprWc39z7JKxm4uEwNX3Ocs54FqxP1JdBX7PRiMh/ePd8CoJIVtIaVMD29giX078uGwXcQ=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
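If you only want the spfile location instead of the whole profile, gpnptool can extract a single value as well (a sketch; verify the exact option name with gpnptool help getpval on your version):
gpnptool getpval -asm_spf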
### Since we started the cluster with -excl -nocrs, it is time to stop the cluster and start it normally.
mysrvr1dr:root:/app/grid/product/12201/grid/bin $ ./crsctl stop crs
- CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.crsd’ on ‘mysrvr1dr’
- CRS-2677: Stop of ‘ora.crsd’ on ‘mysrvr1dr’ succeeded
- CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.crf’ on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘mysrvr1dr’
- CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘mysrvr1dr’ succeeded
- CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘mysrvr1dr’ succeeded
- CRS-2677: Stop of ‘ora.crf’ on ‘mysrvr1dr’ succeeded
- CRS-2677: Stop of ‘ora.gpnpd’ on ‘mysrvr1dr’ succeeded
- CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.storage’ on ‘mysrvr1dr’
- CRS-2677: Stop of ‘ora.storage’ on ‘mysrvr1dr’ succeeded
- CRS-2677: Stop of ‘ora.mdnsd’ on ‘mysrvr1dr’ succeeded
- CRS-2677: Stop of ‘ora.ctssd’ on ‘mysrvr1dr’ succeeded
- CRS-2673: Attempting to stop ‘ora.cssd’ on ‘mysrvr1dr’
### starting cluster normally on first node in normal mode
mysrvr1dr:root:/app/grid/product/12201/grid/bin $ ./crsctl start crs
### Had a small issue, so decided to stop the cluster on node 1 with the force option
mysrvr1dr:root:/app/grid/product/12201/grid/bin $ ./crsctl stop crs -f
- CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘mysrvr1dr’
- CRS-2677: Stop of ‘ora.mdnsd’ on ‘mysrvr1dr’ succeeded
- CRS-2677: Stop of ‘ora.gpnpd’ on ‘mysrvr1dr’ succeeded
- CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.evmd’ on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.asm’ on ‘mysrvr1dr’
- CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘mysrvr1dr’
- CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘mysrvr1dr’ succeeded
- CRS-2677: Stop of ‘ora.ctssd’ on ‘mysrvr1dr’ succeeded
- CRS-2677: Stop of ‘ora.evmd’ on ‘mysrvr1dr’ succeeded
- CRS-2677: Stop of ‘ora.asm’ on ‘mysrvr1dr’ succeeded
- CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘mysrvr1dr’
- CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘mysrvr1dr’ succeeded
- CRS-2673: Attempting to stop ‘ora.cssd’ on ‘mysrvr1dr’
- CRS-2677: Stop of ‘ora.cssd’ on ‘mysrvr1dr’ succeeded
- CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘mysrvr1dr’
- CRS-2677: Stop of ‘ora.gipcd’ on ‘mysrvr1dr’ succeeded
- CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘mysrvr1dr’ has completed
- CRS-4133: Oracle High Availability Services has been stopped.
#### Time to start the cluster in normal mode for all nodes
mysrvr1dr:root:/root $ cd /app/grid/product/12201/grid/bin
mysrvr1dr:root:/app/grid/product/12201/grid/bin $ ./crsctl start cluster -all
- CRS-2672: Attempting to start ‘ora.evmd’ on ‘mysrvr6dr’
- CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘mysrvr6dr’
- CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘mysrvr4dr’
- CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘mysrvr2dr’
- CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘mysrvr3dr’
- CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘mysrvr8dr’
- CRS-2672: Attempting to start ‘ora.evmd’ on ‘mysrvr2dr’
- CRS-2672: Attempting to start ‘ora.evmd’ on ‘mysrvr4dr’
- CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘mysrvr7dr’
- CRS-2672: Attempting to start ‘ora.evmd’ on ‘mysrvr3dr’
- CRS-2672: Attempting to start ‘ora.evmd’ on ‘mysrvr8dr’
- CRS-2672: Attempting to start ‘ora.evmd’ on ‘mysrvr7dr’
- CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘mysrvr5dr’
- CRS-2672: Attempting to start ‘ora.evmd’ on ‘mysrvr5dr’
- CRS-2676: Start of ‘ora.cssdmonitor’ on ‘mysrvr4dr’ succeeded
- CRS-2676: Start of ‘ora.cssdmonitor’ on ‘mysrvr8dr’ succeeded
- CRS-2672: Attempting to start ‘ora.cssd’ on ‘mysrvr4dr’
- CRS-2672: Attempting to start ‘ora.diskmon’ on ‘mysrvr4dr’
- CRS-2672: Attempting to start ‘ora.cssd’ on ‘mysrvr8dr’
- CRS-2676: Start of ‘ora.cssdmonitor’ on ‘mysrvr2dr’ succeeded
- CRS-2672: Attempting to start ‘ora.diskmon’ on ‘mysrvr8dr’
- CRS-2676: Start of ‘ora.cssdmonitor’ on ‘mysrvr3dr’ succeeded
- CRS-2676: Start of ‘ora.cssdmonitor’ on ‘mysrvr6dr’ succeeded
- CRS-2676: Start of ‘ora.cssdmonitor’ on ‘mysrvr7dr’ succeeded
- CRS-2672: Attempting to start ‘ora.cssd’ on ‘mysrvr2dr’
- CRS-2672: Attempting to start ‘ora.cssd’ on ‘mysrvr6dr’
- CRS-2672: Attempting to start ‘ora.diskmon’ on ‘mysrvr2dr’
- CRS-2672: Attempting to start ‘ora.cssd’ on ‘mysrvr3dr’
- CRS-2672: Attempting to start ‘ora.diskmon’ on ‘mysrvr6dr’
- CRS-2672: Attempting to start ‘ora.cssd’ on ‘mysrvr7dr’
- CRS-2672: Attempting to start ‘ora.diskmon’ on ‘mysrvr3dr’
- CRS-2676: Start of ‘ora.diskmon’ on ‘mysrvr4dr’ succeeded
- CRS-2672: Attempting to start ‘ora.diskmon’ on ‘mysrvr7dr’
- CRS-2676: Start of ‘ora.diskmon’ on ‘mysrvr8dr’ succeeded
- CRS-2676: Start of ‘ora.diskmon’ on ‘mysrvr2dr’ succeeded
- CRS-2676: Start of ‘ora.diskmon’ on ‘mysrvr6dr’ succeeded
- CRS-2676: Start of ‘ora.diskmon’ on ‘mysrvr3dr’ succeeded
- CRS-2676: Start of ‘ora.diskmon’ on ‘mysrvr7dr’ succeeded
- CRS-2676: Start of ‘ora.cssdmonitor’ on ‘mysrvr5dr’ succeeded
- CRS-2672: Attempting to start ‘ora.cssd’ on ‘mysrvr5dr’
- CRS-2672: Attempting to start ‘ora.diskmon’ on ‘mysrvr5dr’
- CRS-2676: Start of ‘ora.diskmon’ on ‘mysrvr5dr’ succeeded
- CRS-2676: Start of ‘ora.evmd’ on ‘mysrvr6dr’ succeeded
- CRS-2676: Start of ‘ora.evmd’ on ‘mysrvr2dr’ succeeded
- CRS-2676: Start of ‘ora.evmd’ on ‘mysrvr4dr’ succeeded
- CRS-2676: Start of ‘ora.evmd’ on ‘mysrvr8dr’ succeeded
- CRS-2676: Start of ‘ora.evmd’ on ‘mysrvr3dr’ succeeded
- CRS-2676: Start of ‘ora.evmd’ on ‘mysrvr7dr’ succeeded
- CRS-2676: Start of ‘ora.evmd’ on ‘mysrvr5dr’ succeeded
- CRS-2676: Start of ‘ora.cssd’ on ‘mysrvr8dr’ succeeded
- CRS-2672: Attempting to start ‘ora.ctssd’ on ‘mysrvr8dr’
- CRS-2672: Attempting to start ‘ora.cluster_interconnect.haip’ on ‘mysrvr8dr’
- CRS-2676: Start of ‘ora.cssd’ on ‘mysrvr2dr’ succeeded
- CRS-2672: Attempting to start ‘ora.ctssd’ on ‘mysrvr2dr’
- CRS-2672: Attempting to start ‘ora.cluster_interconnect.haip’ on ‘mysrvr2dr’
- CRS-2676: Start of ‘ora.cssd’ on ‘mysrvr5dr’ succeeded
- CRS-2672: Attempting to start ‘ora.ctssd’ on ‘mysrvr5dr’
- CRS-2672: Attempting to start ‘ora.cluster_interconnect.haip’ on ‘mysrvr5dr’
- CRS-2676: Start of ‘ora.ctssd’ on ‘mysrvr8dr’ succeeded
- CRS-2676: Start of ‘ora.ctssd’ on ‘mysrvr2dr’ succeeded
- CRS-2676: Start of ‘ora.cssd’ on ‘mysrvr7dr’ succeeded
- CRS-2672: Attempting to start ‘ora.ctssd’ on ‘mysrvr7dr’
- CRS-2672: Attempting to start ‘ora.cluster_interconnect.haip’ on ‘mysrvr7dr’
- CRS-2676: Start of ‘ora.ctssd’ on ‘mysrvr5dr’ succeeded
- CRS-2676: Start of ‘ora.cssd’ on ‘mysrvr4dr’ succeeded
- CRS-2672: Attempting to start ‘ora.ctssd’ on ‘mysrvr4dr’
- CRS-2672: Attempting to start ‘ora.cluster_interconnect.haip’ on ‘mysrvr4dr’
- CRS-2676: Start of ‘ora.cssd’ on ‘mysrvr3dr’ succeeded
- CRS-2676: Start of ‘ora.cssd’ on ‘mysrvr6dr’ succeeded
- CRS-2672: Attempting to start ‘ora.ctssd’ on ‘mysrvr3dr’
- CRS-2672: Attempting to start ‘ora.cluster_interconnect.haip’ on ‘mysrvr3dr’
- CRS-2672: Attempting to start ‘ora.ctssd’ on ‘mysrvr6dr’
- CRS-2672: Attempting to start ‘ora.cluster_interconnect.haip’ on ‘mysrvr6dr’
- CRS-2676: Start of ‘ora.ctssd’ on ‘mysrvr7dr’ succeeded
- CRS-2676: Start of ‘ora.cluster_interconnect.haip’ on ‘mysrvr8dr’ succeeded
- CRS-2672: Attempting to start ‘ora.asm’ on ‘mysrvr8dr’
- CRS-2676: Start of ‘ora.asm’ on ‘mysrvr8dr’ succeeded
- CRS-2672: Attempting to start ‘ora.storage’ on ‘mysrvr8dr’
- CRS-2676: Start of ‘ora.ctssd’ on ‘mysrvr4dr’ succeeded
- CRS-2676: Start of ‘ora.ctssd’ on ‘mysrvr3dr’ succeeded
- CRS-2676: Start of ‘ora.ctssd’ on ‘mysrvr6dr’ succeeded
- CRS-2676: Start of ‘ora.cluster_interconnect.haip’ on ‘mysrvr2dr’ succeeded
- CRS-2672: Attempting to start ‘ora.asm’ on ‘mysrvr2dr’
- CRS-2676: Start of ‘ora.asm’ on ‘mysrvr2dr’ succeeded
- CRS-2672: Attempting to start ‘ora.storage’ on ‘mysrvr2dr’
- CRS-2676: Start of ‘ora.storage’ on ‘mysrvr8dr’ succeeded
- CRS-2672: Attempting to start ‘ora.crsd’ on ‘mysrvr8dr’
- CRS-2676: Start of ‘ora.crsd’ on ‘mysrvr8dr’ succeeded
- CRS-2676: Start of ‘ora.storage’ on ‘mysrvr2dr’ succeeded
- CRS-2672: Attempting to start ‘ora.crsd’ on ‘mysrvr2dr’
- CRS-2676: Start of ‘ora.cluster_interconnect.haip’ on ‘mysrvr5dr’ succeeded
- CRS-2672: Attempting to start ‘ora.asm’ on ‘mysrvr5dr’
- CRS-2676: Start of ‘ora.asm’ on ‘mysrvr5dr’ succeeded
- CRS-2672: Attempting to start ‘ora.storage’ on ‘mysrvr5dr’
- CRS-2676: Start of ‘ora.crsd’ on ‘mysrvr2dr’ succeeded
- CRS-2676: Start of ‘ora.storage’ on ‘mysrvr5dr’ succeeded
- CRS-2672: Attempting to start ‘ora.crsd’ on ‘mysrvr5dr’
- CRS-2676: Start of ‘ora.cluster_interconnect.haip’ on ‘mysrvr7dr’ succeeded
- CRS-2672: Attempting to start ‘ora.asm’ on ‘mysrvr7dr’
- CRS-2676: Start of ‘ora.asm’ on ‘mysrvr7dr’ succeeded
- CRS-2672: Attempting to start ‘ora.storage’ on ‘mysrvr7dr’
- CRS-2676: Start of ‘ora.crsd’ on ‘mysrvr5dr’ succeeded
- CRS-2676: Start of ‘ora.storage’ on ‘mysrvr7dr’ succeeded
- CRS-2672: Attempting to start ‘ora.crsd’ on ‘mysrvr7dr’
- CRS-2676: Start of ‘ora.cluster_interconnect.haip’ on ‘mysrvr6dr’ succeeded
- CRS-2672: Attempting to start ‘ora.asm’ on ‘mysrvr6dr’
- CRS-2676: Start of ‘ora.asm’ on ‘mysrvr6dr’ succeeded
- CRS-2672: Attempting to start ‘ora.storage’ on ‘mysrvr6dr’
- CRS-2676: Start of ‘ora.cluster_interconnect.haip’ on ‘mysrvr4dr’ succeeded
- CRS-2672: Attempting to start ‘ora.asm’ on ‘mysrvr4dr’
- CRS-2676: Start of ‘ora.asm’ on ‘mysrvr4dr’ succeeded
- CRS-2672: Attempting to start ‘ora.storage’ on ‘mysrvr4dr’
- CRS-2676: Start of ‘ora.cluster_interconnect.haip’ on ‘mysrvr3dr’ succeeded
- CRS-2672: Attempting to start ‘ora.asm’ on ‘mysrvr3dr’
- CRS-2676: Start of ‘ora.asm’ on ‘mysrvr3dr’ succeeded
- CRS-2672: Attempting to start ‘ora.storage’ on ‘mysrvr3dr’
- CRS-2676: Start of ‘ora.crsd’ on ‘mysrvr7dr’ succeeded
- CRS-2676: Start of ‘ora.storage’ on ‘mysrvr6dr’ succeeded
- CRS-2672: Attempting to start ‘ora.crsd’ on ‘mysrvr6dr’
- CRS-2676: Start of ‘ora.storage’ on ‘mysrvr4dr’ succeeded
- CRS-2672: Attempting to start ‘ora.crsd’ on ‘mysrvr4dr’
- CRS-2676: Start of ‘ora.storage’ on ‘mysrvr3dr’ succeeded
- CRS-2672: Attempting to start ‘ora.crsd’ on ‘mysrvr3dr’
- CRS-2676: Start of ‘ora.crsd’ on ‘mysrvr6dr’ succeeded
- CRS-2676: Start of ‘ora.crsd’ on ‘mysrvr4dr’ succeeded
- CRS-2676: Start of ‘ora.crsd’ on ‘mysrvr3dr’ succeeded
- CRS-4690: Oracle Clusterware is already running on 'mysrvr1dr' -> fine, since we kept the cluster running on node 1
CRS-4000: Command Start failed, or completed with errors.
### checks performed :
On each node: ps -ef|grep d.bin
On each node: crsctl stat res -t -init
On a node: crsctl check cluster -all
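A minimal sketch to run the same checks on all nodes from one session (assuming passwordless ssh and the example hostnames; adjust the node list and GI home to your environment):
for n in mysrvr1dr mysrvr2dr mysrvr3dr mysrvr4dr mysrvr5dr mysrvr6dr mysrvr7dr mysrvr8dr; do
  echo "### ${n}"
  ssh "${n}" "ps -ef | grep [d].bin | wc -l"   ## rough count of running clusterware daemons
done
/app/grid/product/12201/grid/bin/crsctl check cluster -all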
Happy reading,
And till next time
Mathijs
Install & Patch 12.2 GI in Oracle Restart
Introduction:
Where would we be without challenges? I have become a member of a project team for a new billing environment, and this team is aiming to use (and go live with) the Oracle 12.2 Grid Infrastructure and Database versions. The information in this article will become a baseline for the installation of several Oracle environments on Linux, in the single-instance setup Oracle refers to as Oracle Restart. Next in line after that (and I love it) will be Real Application Clusters to be set up.
General Preparations 12.2 Grid Kata:
## Identifying ORACLE_BASE and layout of Grid Infrastructure.
echo $ORACLE_BASE
/app/oracle
echo $ORACLE_HOME
/app/grid/product/12.2.0.1/grid
## Identifying ORACLE_BASE and DB software
echo $ORACLE_BASE
/app/oracle
echo $ORACLE_HOME
/app/oracle/product/12.2.0.1/db
## So for the 12.2 layout which is in scope for the actions on a Restart or Rac environment:
+ASM1   /app/grid/product/12.2.0.1/grid
CRS     /app/grid/product/12.2.0.1/grid
-MGMTDB /app/grid/product/12.2.0.1/grid
MYDB    /app/oracle/product/12.2.0.1/db
## Checking Red Hat release:
oracle@mysrvr1hr:/dev/mapper []# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.9 (Santiago)
## Oracle restart installation for 12.2 instructions to be found:
An interesting point is that in 12.2 the famous, well-known runInstaller is replaced by ./gridSetup.sh (when opening runInstaller I even ran into errors: OUI-10133 when running runInstaller in 12.2). A second point of interest is that you have to pre-create the directory where the software will be installed.
## Preparations for Installation:
- On the server where you will install the Grid Infrastructure, create the directory where you want to install the software (the location you will later call your ORACLE_HOME). In my specific case that meant: mkdir -p /app/grid/product/12.2.0.1/grid on the installation server.
- From Solltau: oracle@myhost:/opt/oracle/Odrive/depot/software/oracle/12c/GI []# scp linuxx64_12201_grid_home.zip oracle@mysrvr1hr:/app/grid/product/12.2.0.1/grid
- UNSET your environment variables if any on the installation box:
unset ORACLE_BASE
unset ORACLE_HOME
unset GI_HOME
unset ORA_CRS_HOME
unset TNS_ADMIN
unset ORACLE_SID
unset ORA_NLS10
echo $ORACLE_BASE etc.
- ## Check zip file in the destination that will also become your ORACLE_HOME for this install.
oracle@mysrvr1hr:/app/grid/product/12.2.0.1/grid []# ls -ltr
total 2924504
-rw-r--r--. 1 oracle dba 2994687209 Jan 3 16:28 linuxx64_12201_grid_home.zip
Make sure you unzip the file in the future ORACLE_HOME destination. This is mandatory because, unlike previous installations where you could change the software installation directory, during the 12.2 installation there will NOT be an option to choose the destination of the installation. ## Make sure you are in the directory /app/grid/product/12.2.0.1/grid and extract the zip there!
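A minimal sketch of that unzip step (as the software owner; file name as staged above):
cd /app/grid/product/12.2.0.1/grid
unzip -q linuxx64_12201_grid_home.zip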
- ##Once the Zip file is extracted fire below script:
./gridSetup.sh
In one of the following detailed screens, make sure you use "Change Discovery Path" so the installer discovers the disks that you will be using for this installation. In my case the Linux admin colleague had set up and labelled dedicated LUNs (disks) as a preparation for my actions.
root # ls -lH /dev/mapper/ASM_*
brw-rw----. 1 oracle dba 253,  6 Dec 22 16:01 /dev/mapper/ASM_ACFS_035_001
brw-rw----. 1 oracle dba 253, 33 Dec 22 16:01 /dev/mapper/ASM_OCRM_008_001
brw-rw----. 1 oracle dba 253, 34 Dec 22 16:01 /dev/mapper/ASM_OCRM_008_002
brw-rw----. 1 oracle dba 253, 25 Dec 22 16:01 /dev/mapper/ASM_VOTE_008_001
brw-rw----. 1 oracle dba 253, 26 Dec 22 16:01 /dev/mapper/ASM_VOTE_008_002
brw-rw----. 1 oracle dba 253, 30 Dec 22 16:01 /dev/mapper/ASM_VOTE_008_003
## Once you have made all the needed selections, the installer shows a summary screen. Once you select Install, a progress screen keeps you updated.
## In a separate session, ./root.sh has to be run as the root user, which will show:
mysrvr1hr:root:/app/grid/product/12.2.0.1/grid # ./root.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /app/grid/product/12.2.0.1/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/grid/product/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at: /app/oracle/crsdata/mysrvr1hr/crsconfig/roothas_2018-01-03_05-02-27PM.log
## logging details of root.sh:
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'dba'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node mysrvr1hr successfully pinned.
2018/01/03 17:02:50 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'mysrvr1hr'
CRS-2673: Attempting to stop 'ora.evmd' on 'mysrvr1hr'
CRS-2677: Stop of 'ora.evmd' on 'mysrvr1hr' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'mysrvr1hr' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
mysrvr1hr 2018/01/03 17:03:44 /app/grid/product/12.2.0.1/grid/cdata/mysrvr1hr/backup_20180103_170344.olr 0
2018/01/03 17:03:49 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
mysrvr1hr:root:/app/grid/product/12.2.0.1/grid #
Resolving possible issues: 12.2 GI standalone : [INS-20802] Automatic Storage Management Configuration Assistant failed (Doc ID 2277224.1)
## The Installation will create the ASM instance with a default spfile. Due to company standards and due to best practice (knowing that size does matter and default settings will not do well in a heavily used environment) you should connect to the ASM instance and alter below values:
## Specific setup for asm Instance
ALTER SYSTEM SET memory_max_target=4096M SCOPE=SPFILE;
ALTER SYSTEM SET memory_target=1536M SCOPE=SPFILE;
ALTER SYSTEM SET large_pool_size=100M SCOPE=SPFILE;
ALTER SYSTEM SET shared_pool_size=512M SCOPE=BOTH;
ALTER SYSTEM SET shared_pool_reserved_size=100M SCOPE=SPFILE;
## Nothing to do with performance, but mandatory due to standards.
ALTER SYSTEM SET audit_file_dest='/app/oracle/+ASM/admin/adump' SCOPE=SPFILE;
ALTER SYSTEM SET background_dump_dest='/app/oracle/diag/asm/+asm/+ASM/trace' SCOPE=BOTH;
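For completeness, the way I would connect to the ASM instance to run the ALTER SYSTEM commands above (a sketch; SID and home as used in this installation):
export ORACLE_HOME=/app/grid/product/12.2.0.1/grid
export ORACLE_SID=+ASM
$ORACLE_HOME/bin/sqlplus / as sysasm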
## Company Standards as with regard to listener:
- Log destination: /app/oracle/diag/tnslsnr/mysrvr1hr/listener
- One listener per vip
## So I have added a listener with the netca tool running from the GridInfrastructurehome.
## /app/oracle/diag/tnslsnr/<servername>/<listenername>/trace
oracle@mysrvr1hr:/app/grid/product/12.2.0.1/grid/network/admin [+ASM]# lsnrctl status LISTENER_MYSRVR1HR
## Deinstallation when needed. As always you might need a way out (back again) .
Note: For upgrades from previous releases, if you want to uninstall the previous release Grid home, then perform the following steps:
- Log in as the root user.
- Manually change the permissions of the previous release Grid home (see below).
- Run the /app/grid/product/12.2.0.1/grid/deinstall/deinstall command (as oracle User).
For example, on Grid Infrastructure for a standalone server:
# chown -R oracle:dba /app/grid/product/12.2.0.1
# chmod -R 775 /app/grid/product/12.2.0.1
In this example:
- /u01/app/oracle/product/11.2.0/grid is the previous release Oracle Grid Infrastructure for a standalone server home
- oracle is the Oracle Grid Infrastructure installation owner user
- dba is the name of the Oracle Inventory group (OINSTALL group)
For example, on Oracle Database:
# chown -R oracle:dba /app/oracle/product/12.2.0.1
# chmod -R 775 /app/oracle/product/12.2.0.1
If all is well Time to start Patching the Environment !
## patching : GI : p26737266_122010_Linux-x86-64.zip
## oracle@soltau2:/opt/oracle/Odrive/depot/software/oracle/patches/Linuxx86 []# scp p26737266_122010_Linux-x86-64.zip oracle@mysrvr1hr:/app/grid/product/12.2.0.1/stage
## check current situation with opatch before patching.
opatch lsinventory -detail -oh /app/grid/product/12.2.0.1/grid
This shows:
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation. All rights reserved.
Oracle Home       : /app/grid/product/12.2.0.1/grid
Central Inventory : /app/oraInventory
   from           : /app/grid/product/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.6
OUI version       : 12.2.0.1.4
Log file location : /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatch/opatch2018-01-05_14-39-27PM_1.log
Lsinventory Output file location : /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-01-05_14-39-27PM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: mysrvr1hr.mydomain
ARU platform id: 226
ARU platform description:: Linux x86-64
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c  12.2.0.1.0
There are 1 products installed in this Oracle Home.
Installed Products (99):
etc.
There are 99 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
--------------------------------------------------------------------------------
OPatch succeeded.
## Use opatch to check for conflicts:
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26710464
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26925644
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26737232
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26839277
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26928563
## This did not show any conflicts
## Next step will be : use opatch check for space requirements (you would not want to end up installation failing due to lacking storage:
For Grid Infrastructure Home, as home user:
Create file /tmp/patch_list_gihome.txt with the following content:
cat /tmp/patch_list_gihome.txt
/app/grid/product/12.2.0.1/stage/26737266/26928563
/app/grid/product/12.2.0.1/stage/26737266/26839277
/app/grid/product/12.2.0.1/stage/26737266/26737232
/app/grid/product/12.2.0.1/stage/26737266/26925644
/app/grid/product/12.2.0.1/stage/26737266/26710464
Run the opatch command to check if enough free space is available in the Grid Infrastructure Home:
$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
## this shows
oracle@mysrvr1hr:/app/grid/product/12.2.0.1/stage [+ASM]# $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home       : /app/grid/product/12.2.0.1/grid
Central Inventory : /app/oraInventory
   from           : /app/grid/product/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.6
OUI version       : 12.2.0.1.4
Log file location : /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatch/opatch2018-01-05_14-55-06PM_1.log
Invoking prereq "checksystemspace"
Prereq "checkSystemSpace" passed.
OPatch succeeded.
## To patch only the GI home:
# opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid
## failed with:
OPATCHAUTO-72046: Invalid wallet parameters.
OPATCHAUTO-72046: The wallet path or wallet password provided is not valid.
OPATCHAUTO-72046: Please provide valid wallet information.
opatchauto bootstrapping failed with error code 46.
## Thank you Mos for elaborating.
OPATCHAUTO-72046: Invalid wallet parameters (Doc ID 2150070.1)
The opatchauto command was not being run as the root user; opatchauto for Grid PSUs should always be run as the root user.
## So as the root user :
/app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid
## and it failed again !!!
mysrvr1hr:root:/root # /app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid
System initialization log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2018-01-05_03-09-09PM.log.
Session log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2018-01-05_03-09-12PM.log
The id for this session is 5LQ1
[init:init] Executing OPatchAutoBinaryAction action on home /app/grid/product/12.2.0.1/grid
Executing OPatch prereq operations to verify patch applicability on SIHA Home........
[init:init] OPatchAutoBinaryAction action completed on home /app/grid/product/12.2.0.1/grid with failure
Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : mysrvr1hr->/app/grid/product/12.2.0.1/grid Type[siha]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /app/grid/product/12.2.0.1/grid, host: mysrvr1hr.
Command failed: /app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid -target_type has -binary -invPtrLoc /app/grid/product/12.2.0.1/grid/oraInst.loc -persistresult /app/grid/product/12.2.0.1/grid/OPatch/auto/dbsessioninfo/sessionresult_analyze_mysrvr1hr_siha.ser -analyze -online
Command failure output:
==Following patches FAILED in analysis for apply:
Patch: /app/grid/product/12.2.0.1/stage/26737266/26925644
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-09-16PM_1.log
Reason: Failed during Analysis: CheckNApplyReport Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
Prerequisite check "CheckMinimumOPatchVersion" failed.]
Failed during Analysis: CheckMinimumOPatchVersion Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
The OPatch being used has version 12.2.0.1.6 while the following patch(es) require higher versions:
Patch 26710464 requires OPatch version 12.2.0.1.7.
Please download latest OPatch from My Orac ...
etc.
OPatchAuto failed.
opatchauto failed with error code 42
mysrvr1hr:root:/root #
## So I downloaded latest opatch version and parked it in a temporary directory on that targeted server:
unzip p6880880_122011_Linux-x86-64.zip -d /app/grid/product/12.2.0.1/grid
## now Opatch shows:
oracle@mysrvr1hr:/app/grid/product/12.2.0.1/opatch [+ASM]# opatch version
OPatch Version: 12.2.0.1.11
OPatch succeeded.
## Sometimes you just have to be patient to hear the lambs being silent:
## Next run as the root user :
/app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid
## logfiles:
oracle@mysrvr1hr:/app/oracle/crsdata/mysrvr1hr/crsconfig
-rw-rw----. 1 oracle dba 17364 Jan 5 15:35 hapatch_2018-01-05_03-34-42PM.log
-rw-rw----. 1 oracle dba 23725 Jan 5 15:42 hapatch_2018-01-05_03-42-41PM.log
## showed
mysrvr1hr:root:/root # /app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid
OPatchauto session is initiated at Fri Jan 5 15:33:54 2018
System initialization log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2018-01-05_03-33-58PM.log.
Session log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2018-01-05_03-34-02PM.log
The id for this session is XLE2
Executing OPatch prereq operations to verify patch applicability on home /app/grid/product/12.2.0.1/grid
Patch applicability verified successfully on home /app/grid/product/12.2.0.1/grid
Bringing down CRS service on home /app/grid/product/12.2.0.1/grid
Prepatch operation log file location: /app/oracle/crsdata/mysrvr1hr/crsconfig/hapatch_2018-01-05_03-34-42PM.log
CRS service brought down successfully on home /app/grid/product/12.2.0.1/grid
Start applying binary patch on home /app/grid/product/12.2.0.1/grid
Binary patch applied successfully on home /app/grid/product/12.2.0.1/grid
Starting CRS service on home /app/grid/product/12.2.0.1/grid
Postpatch operation log file location: /app/oracle/crsdata/mysrvr1hr/crsconfig/hapatch_2018-01-05_03-42-41PM.log
CRS service started successfully on home /app/grid/product/12.2.0.1/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:mysrvr1hr
SIHA Home:/app/grid/product/12.2.0.1/grid
Summary:
==Following patches were SUCCESSFULLY applied:
Patch: /app/grid/product/12.2.0.1/stage/26737266/26710464
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log
Patch: /app/grid/product/12.2.0.1/stage/26737266/26737232
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log
Patch: /app/grid/product/12.2.0.1/stage/26737266/26839277
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log
Patch: /app/grid/product/12.2.0.1/stage/26737266/26925644
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log
Patch: /app/grid/product/12.2.0.1/stage/26737266/26928563
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log
OPatchauto session completed at Fri Jan 5 15:43:05 2018
Time taken to complete the session 9 minutes, 11 seconds
Happy Dba , Installed 12.2 GI and Patched it with RU October 2017.
Thanks for reading and till we meet again,
Mathijs.
Upgrade to 12C GridInfra lessons learned
Introduction:
Not sure if it was a word from a wise DBA or just from a fortune cookie (it might even have been from a Pink Panther movie): always expect the unexpected, and, as an add-on, success just loves preparation.
This week one of my tasks was to upgrade a 4-node Oracle RAC cluster from 11.2.0.4 to 12c (12.1.0.2.0) Grid Infrastructure. Even though I came well prepared (see also the detailed other blog for that: https://mathijsbruggink.com/2017/05/01/upgrading-11g-gridinfra-to-12c-in-linux/), several small surprises occurred, which will be used as lessons learned in upcoming upgrades of the Grid Infrastructure. I would also like to offer a timeline showing how long the upgrade process really took.
Lessons learned:
- During the preparations I needed to order extra disks for ASM storage for the Grid Infrastructure management repository (GIMR). When I started the runInstaller as a first check of whether all was well prepared, I noticed that the installer software is most likely looking for a diskgroup called +OCR or +VOTING. This could be a trap if you had not extended one of them (but instead, for example, a +GRID diskgroup). So when preparing, look for either OCR or VOTING (best both, if present) to add extra disks to (and keep some disks spare).
- At the start of the maintenance window the Linux colleague mentioned that he would have to stop the Hyperion services. This activity took some 45 minutes of the change window. I will have to find out whether stopping those services was really required, and will need to add an extra step to the pre-checks to identify other services and daemons running on the cluster that might be impacted by an upgrade.
- Purpose of rootupgrade.sh: after the installation part via the runInstaller completes and the upgrade part commences, rootupgrade.sh performs the actual ASM upgrade and configures the OLR (local registry), amongst other things.
Timelines:
Every change on a test or production environment has to come with a plan that includes an estimate of how long the change will take. First and most important, of course, is choosing the strategy: will a rolling window be used (thus minimizing impact, since at least one node will be up), thinking of a kind of batch approach where the first batch holds the first node, the second batch holds nodes 2 and 3 in my 4-node example, and the last batch holds the last node?
Start of Change Window : 20:00 CET ( 6:00 UTC ) .
According to Linux expert Hyperion services needed to be stopped before we could continue.
Start of installation: 20:45 CET.
Started the runInstaller on the first node. Software was deployed to first node and all the nodes in the cluster (4 Node Rac).
Upgrade part of the existing 11.2.0.4 GridInfra structure:
21:30 – 21:58: on the first node (MYSRVR09hr) the rootupgrade.sh was started. (I used the manual upgrade, still a bit unwilling to leave it all to the automated option; this means setting up a root session on the first node and running: ./app/grid/product/`1102/grid/rootupgrade.sh.)
In the runInstaller it was offered to automate this and to run rootupgrade.sh in parallel on nodes 2 and 3. To me it felt better to open a terminal session as root on each server and run the script in parallel from separate windows.
22:06 – 22:13 on MYSRVR10hr : ./app/grid/product/`1102/grid/rootupgrade.sh
22:06 – 22:24 on MYSRVR11hr : ./app/grid/product/`1102/grid/rootupgrade.sh
On the last node MYSRVR12hr:
22:28 – 22:48 ./app/grid/product/`1102/grid/rootupgrade.sh
After that the install continued with the Grid Infrastructure management repository (GIMR) database, and once it completed I ran a number of sanity checks in the cluster:
22:50 – 23:55
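The individual sanity checks are not listed here; a typical set could look like this (a sketch, run from the new grid home):
crsctl check cluster -all
crsctl query crs activeversion
crsctl query crs softwareversion
crsctl status resource -t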
At 23:59:59 Reported mission completed.
Happy reading and till next time,
Mathijs
Gimr expanding ocr / voting Diskgroup
Introduction:
For a 12c upgrade project I have been asked to run all the preparations and investigate the requirements to move from 11.2 Grid Infrastructure to 12c. While reading and preparing, it became clear that the Grid Infrastructure management repository (GIMR) database has become mandatory in Oracle GI 12.1.0.2. According to the documentation and to information from various colleagues on the web, the data files associated with it will be created in the same diskgroup as OCR or voting (average growth per day per node is approximately 750 MB, so a 4-node cluster at the default retention of 3 days leads to approximately 9 GB; note that the retention can be changed).
Until Oracle 12c all clusters here had been set up with a minimal OCR / voting diskgroup, for the mere reason that they were only supposed to hold the Oracle Cluster Registry and the voting disk(s). In this case that means the +GRID diskgroup is only 2 GB in size. But as always, with changing releases come changing times (or was that the other way around?). Below you will find the steps I followed to expand the OCR / voting diskgroup (in my scenario called +GRID) in order to be able to host the GIMR.
Claiming extra ASM storage
As a first step to be able to host the GIMR it was necessary to get extra ASM storage from the storage / Linux colleagues. For ASM I requested 6 * 8 GB shared (meaning visible on all nodes of the cluster) ASM disks to be added to an existing diskgroup (in my case on mysrvr09hr – 12hr), diskgroup +GRID. The +GRID diskgroup was created in the past with normal redundancy and three failgroups on 3 disks. For now I only used 3 of the new disks (thus saving something for a rainy day in case more storage is needed):
/dev/mapper/asm-vote04 GRID_0014 8 MEMBER ONLINE CACHED NORMAL
/dev/mapper/asm-vote05 GRID_0015 8 MEMBER ONLINE CACHED NORMAL
/dev/mapper/asm-vote06 GRID_0016 8 MEMBER ONLINE CACHED NORMAL
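The listing above can be produced with a query along these lines (a sketch, connected as sysasm; column formatting trimmed):
select path, name, os_mb/1024 as size_gb, header_status, mode_status, mount_status, state
from   v$asm_disk
where  path like '/dev/mapper/asm-vote%'
order  by path;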
Details:
When adding disks to the +GRID diskgroup, which in my case holds OCR and voting information, some extra preparation is needed because the goal is to add the new disks to the existing failgroups.
conn / as sysasm
set lines 300
--
alter system set asm_power_limit = 0;
--
-- Note: use the same failgroups as the existing disks here, and add the disks in one command.
alter DISKGROUP GRID add
FAILGROUP GRID_0000 disk '/dev/mapper/asm-vote04' name GRID_0014
FAILGROUP GRID_0001 disk '/dev/mapper/asm-vote05' name GRID_0015
FAILGROUP GRID_0002 disk '/dev/mapper/asm-vote06' name GRID_0016
;
--
alter system set asm_power_limit = 1;
alter DISKGROUP GRID rebalance power 3;
--
select * from gv$asm_operation order by 1,2,3;
--
col name format a15;
col path format a25;
col failgroup format a20;
--
select dg.name, d.path, d.failgroup, d.failgroup_type, dg.VOTING_FILES
from   v$asm_diskgroup dg, v$asm_disk d
where  dg.group_number = d.group_number
and    dg.name = 'GRID'
order  by dg.name, d.path, d.failgroup;
Once the add step completed, the information below was displayed, with 6 disks that were all holding OCR / voting info after the rebalance:
NAME            PATH                      FAILGROUP            FAILGROUP_TYPE VOTING_FILES
--------------- ------------------------- -------------------- -------------- ------------
GRID            /dev/mapper/asm-vote01    GRID_0000            REGULAR        Y
GRID            /dev/mapper/asm-vote02    GRID_0001            REGULAR        Y
GRID            /dev/mapper/asm-vote03    GRID_0002            REGULAR        Y
GRID            /dev/mapper/asm-vote04    GRID_0000            REGULAR        Y
GRID            /dev/mapper/asm-vote05    GRID_0001            REGULAR        Y
GRID            /dev/mapper/asm-vote06    GRID_0002            REGULAR        Y
As always, seeing is believing in Oracle, so let's check this:
crsctl query css votedisk
##  STATE    File Universal Id                File Name                Disk group
--  -----    -----------------                ---------                ----------
 1. ONLINE   19e8fcfc4ab24f73bf301473eda5b98e (/dev/mapper/asm-vote01) [GRID]
 2. ONLINE   2da0b18a78cb4f1dbf0596f9fdbd13e2 (/dev/mapper/asm-vote02) [GRID]
 3. ONLINE   6d670887aa874f6cbf8b177acae2ea29 (/dev/mapper/asm-vote03) [GRID]
Hmm, okay, so this means that the cluster still sees the three old voting disks on the first three original disks. So let's make time for part 2: dropping the old disks.
conn / as sysasm
set lines 300
--
alter system set asm_power_limit = 0;
--
-- Dropping the previous (old) disks
alter DISKGROUP GRID drop DISK GRID_0000, GRID_0001, GRID_0002;
--
alter system set asm_power_limit = 1;
alter DISKGROUP GRID rebalance power 3;
--
select * from gv$asm_operation order by 1,2,3;
--
select dg.name, d.path, d.failgroup, d.failgroup_type, dg.VOTING_FILES
from   v$asm_diskgroup dg, v$asm_disk d
where  dg.group_number = d.group_number
and    dg.name = 'GRID'
order  by dg.name, d.path, d.failgroup;
This now shows:
NAME            PATH                      FAILGROUP            FAILGROUP_TYPE VOTING_FILES
--------------- ------------------------- -------------------- -------------- ------------
GRID            /dev/mapper/asm-vote04    GRID_0000            REGULAR        Y
GRID            /dev/mapper/asm-vote05    GRID_0001            REGULAR        Y
GRID            /dev/mapper/asm-vote06    GRID_0002            REGULAR        Y
And
crsctl query css votedisk
##  STATE    File Universal Id                File Name                Disk group
--  -----    -----------------                ---------                ----------
 1. ONLINE   0915739721ca4f55bf0f9ea53d58ecee (/dev/mapper/asm-vote04) [GRID]
 2. ONLINE   15507ed4dd964f26bfc497f4913034db (/dev/mapper/asm-vote05) [GRID]
 3. ONLINE   219ed4aef2d74fcdbfa9c9e8b81dbbde (/dev/mapper/asm-vote06) [GRID]
Located 3 voting disk(s).
Okay, that looks much better. This completed the action of expanding the +GRID diskgroup as a preparation for the Grid Infrastructure management repository (GIMR) database for my 12c Grid Infrastructure upgrade.
Happy Reading,
Mathijs.
Upgrading 11G GridInfra to 12C in Linux
Introduction:
With spring 2017 around, new initiatives are being developed. As a preparation for database upgrades to 12c, it is a mandatory step to upgrade the clusterware (Grid Infrastructure) first, before doing the database part. So in this case, happy me: the time has finally come that one of the customers requests an upgrade of a number of clusters to 12c Grid Infrastructure. In this document I will share my thoughts and my plan to tackle this interesting puzzle. Since the first cluster upgrade will happen pretty soon (this week), the document might evolve with the lessons learned from that first upgrade. Happy reading in advance.
Preparations:
It could be the text of a fortune cookie, but every success just loves preparation, and in this case that is no different. The first thing to do was to identify the scope of clusters that had to be upgraded. Together with the customer an inventory list was created, and in the end 10 clusters were defined as in scope for this action: 8 test clusters and 2 production environments. An interesting detail is that all clusters have been patched pretty recently and all hold 11.2.0.4 Grid Infrastructure, with the extra challenge that the underlying operating system comes in two flavors (Red Hat Linux Server release 5.11 (Tikanga) and 6.5 (Santiago)). I am curious in advance to see whether these different versions of Red Hat will influence the steps to be performed. Below you will find more details on the preparations and actions of the upgrade.
Operating System:
One of the first steps to investigate is, of course, whether the operating system versions at hand are supported for the upgrade. Oracle Support confirmed that even though it would be recommended to first upgrade the 5.11 Red Hat version to Red Hat 7, the upgrade should work with the 5.11 version at hand. The 6.5 OS version was okay anyhow. The project decided, however, that an OS upgrade of the 5.11 boxes would delay things, so upgrading the OS will be done in a different project.
Storage:
Before even considering running the upgrade of the Grid Infrastructure, some extra time needs to be spent on the storage that is in place in the cluster for such an upgrade. Often the Oracle software is installed locally on each box on volume group VG0, but with the out-of-place installations of these days that can become a challenge if there is not enough local storage left in the box; due to standards those root disks are nearly untouchable. For my project the storage requirement below has been defined as an absolute minimum, which means there will be a need for extra local storage per node, or even for SAN storage per node, presented to me as the required mount points. If such storage is not (or no longer) present locally, I have to request and receive additional storage for it.
/app/grid | /app/oracle | /var/opt/oracle | /tmp | San 4 lvm dbs |
50GB | 70GB | 32M | 1GB | 4GB each |
A short explanation of this:
/app/grid: where the 12C Grid Infrastructure software will be installed.
/app/oracle: for the 12C database software.
/var/opt/oracle and /tmp: required minimum free space.
San 4 lvm dbs: will be set up as 4GB mount points, one for each instance on the local node, in order to hold logfiles.
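Once the storage has been presented, a quick generic check of the mount points and their sizes (plain Linux, paths as listed above) could be:
df -h /app/grid /app/oracle /var/opt/oracle /tmp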
When migrating to 12C coming from 11G, please be aware that you might also need extra storage in your OCR/VOTING diskgroup due to a new feature: the Grid Infrastructure management repository (GIMR) database, which has become mandatory in Oracle GI 12.1.0.2 and has to be implemented during the upgrade. Its data files will be created in the same diskgroup as the OCR or voting files. (Average growth is approximately 750 MB per node per day, so a 4-node cluster at the default retention of 3 days would lead to roughly 9 GB of storage requirement in the OCR or VOTING diskgroup.) Fortunately the retention can be changed. In my case this means that more ASM disks will need to be added to that specific diskgroup, since at work most OCR and VOTING diskgroups are set up as a bare minimum (normal redundancy with three disks of about 4 GB each). (Extra info on this topic: https://blogs.oracle.com/UPGRADE/entry/grid_infrastructure_management_repository_gimr)
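For reference, once 12c is in place the size (and therefore the retention) of the CHM data held in the GIMR can be inspected and, if needed, increased with oclumon; a small sketch (the size value is an example in MB, not a recommendation):
## show the current repository size / retention
oclumon manage -get repsize
## grow the repository, which also extends the retention
oclumon manage -repos changerepossize 4096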
Detailed preparations and health checks.
One of the quotes in IT is that you should not touch a well running system. Well, in this case I would like to add: but if you do, come well prepared. I have put the focus on the three tools below to prove that the current system is in good enough shape to run the upgrade, which can also be regarded as a health check of the environment. These preparations are based on MOS note 1579762.1 and on reading Chapter 13 of the great book “Expert Oracle RAC 12c” by Syed Jaffar Hussain, Tariq Farooq, Riyaj Shamsudeen and Kai Yu (ISBN-13 (electronic): 978-1-4302-5045-6).
- Opatch
- RACcheck: Orachk
- Runcluvfy
Opatch
Use opatch to make sure that the Oracle inventory is in good shape on all nodes in the cluster. The command issued below investigates the current Grid Infrastructure home:
opatch lsinventory -oh /opt/crs/product/11204/crs -detail
-oh points the command at the specific ORACLE_HOME.
-detail shows all details.
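To compare the inventory of both nodes in one go, a small loop like the one below can help (the node names and paths are the ones used elsewhere in this post; the loop and the grep pattern are just my sketch to keep the output short):
for node in mysrvr23hr mysrvr24hr
do
  echo "### ${node}"
  ssh ${node} "/opt/crs/product/11204/crs/OPatch/opatch lsinventory -oh /opt/crs/product/11204/crs" | grep -i "^Patch "
done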
RACcheck: Orachk
I looked this tool up on Metalink, downloaded it and installed it on the cluster nodes.
orachk version used: 12.2.0.1.2_20161215
Following Quick start guide for this tool:
http://docs.oracle.com/cd/E68491_01/OEXUG/quick-start-guide.htm#OEXUG-GUID-CB4224DA-F389-4E9C-AB6A-C57F46A80C61
Clear information can be found in MOS:
ORAchk Upgrade Readiness Assessment (Doc ID 1457357.1)
With the tool downloaded, the steps below were performed.
According to the documentation the tool needs to be copied and unpacked (installed) into the suptools subdirectory of the cluster software installation.
scp orachk.zip oracle@mysrvr23hr:/opt/crs/product/11204/crs/suptools
scp orachk.zip oracle@mysrvr24hr:/opt/crs/product/11204/crs/suptools
Once unzipped the tool can run in two modes, a pre upgrade mode and a post upgrade mode:
./orachk -u -o pre | tee Orachk_pre_20170124.log
./orachk -u -o post | tee Orachk_post_20170124.log
Note: the tee command will also create a log file holding all the progress information written during run time.
Note: /opt/oracle/.orachk should be empty before the start, otherwise you will get a message like: 'Another instance of orachk is running on ...'.
Runcluvfy
Working with runcluvfy is like meeting an old friend again. Yet each time it is a bit of a struggle to find the optimal syntax and parameters for your setup.
## Wrong setup was:
./runcluvfy.sh stage -pre crsinst -upgrade -n mysrvr23hr,mysrvr24hr -rolling -fixup -src_crshome /opt/crs/product/11204/crs -dest_home /app/grid/product/12102/grid -dest_version 12.1.0 -verbose
## Working version:
./runcluvfy.sh stage -pre crsinst -n mysrvr23hr,mysrvr24hr -verbose | tee runcluvfy_20170130_pre.lst
## or
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /opt/crs/product/11204/crs -dest_crshome /app/grid/product/12102/grid -dest_version 12.1.0.2.0 -verbose | tee runcluvfy_20170130_preUpgrade.lst
Upgrade steps:
Now it is time to plan and set up your upgrade steps, building on the confidence gained during the preparation. Multiple approaches are possible for the upgrade, but my goal is plain and simple: minimum impact on the cluster and on the databases hosted on it, so I will be aiming for this scenario: a rolling upgrade of ASM + Clusterware. A baseline for this is the URL below:
https://docs.oracle.com/database/121/CWLIN/procstop.htm#CWLIN10001
Working according to company standards requires the following specific settings for $ORACLE_BASE and $ORACLE_HOME for the GI installation, and a different $ORACLE_HOME for the database software.
oracle@mysrvrhr:/home/oracle [CRS]# echo $ORACLE_BASE
/app/oracle
oracle@mysrvrhr:/home/oracle [CRS]# echo $ORACLE_HOME
/app/grid/product/12102/grid
oracle@mysrvrhr:/home/oracle [MYDB1]# echo $ORACLE_HOME
/app/oracle/product/12102/db
The bullets below go through the steps, with comments where needed.
- Due to the Grid Infrastructure management repository (GIMR) database I had to add larger disks to the VOTING diskgroup to have enough storage in place (the steps on how to add the new disks and drop the old ones are too detailed for this blog; after all it is a blog and not a book 🙂 so I will have to cover that in a separate post).
- Check /tmp because the upgrade requires at least 1GB of free space in /tmp. Either clean up (the ls -lSh command helps to spot the biggest files) or have /tmp extended; a quick check is shown below.
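For example (plain Linux commands, nothing upgrade-specific):
df -h /tmp
ls -lSh /tmp | head -20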
- Check OCR integrity with:
cluvfy comp ocr -n all -verbose
- Check the backups of OCR and voting disk in the cluster:
ocrconfig -showbackup
Note: this command can be performed as the oracle user and will show information similar to the output below. An interesting aspect here was that I issued the command on the first node, but the automated backups all turn out to be on node 11hr.
oracle@mysrvr09hr:/opt/oracle [CRS]# ocrconfig -showbackup
mysrvr11hr 2017/04/21 05:20:36 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup00.ocr
mysrvr11hr 2017/04/21 01:20:29 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup01.ocr
mysrvr11hr 2017/04/20 21:20:07 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup02.ocr
mysrvr11hr 2017/04/20 01:19:42 /opt/crs/product/11204/crs/cdata/mysrvr03cl/day.ocr
mysrvr11hr 2017/04/12 17:16:11 /opt/crs/product/11204/crs/cdata/mysrvr03cl/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
- As the root user, run a manual backup of the OCR information. Run the ocrconfig -manualbackup command on a node where the Oracle Clusterware stack is up and running to force Oracle Clusterware to perform a backup of the OCR at any time, rather than wait for the automatic backup. Note: the -manualbackup option is especially useful when you want to obtain a binary backup on demand, such as before you make changes to the OCR. The OLR only supports manual backups. NOTE: in 11gR2, the voting files are backed up automatically as part of the OCR. Oracle recommends NOT using the dd command to back up or restore voting disks, as this can lead to the loss of the voting disk.
mysrvr09hr:root:/root # cd /opt/crs/product/11204/crs/bin/
mysrvr09hr:root:/opt/crs/product/11204/crs/bin # ./ocrconfig -manualbackup
mysrvr11hr 2017/04/21 09:12:40 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup_20170421_091240.ocr
## Checking a second time will now also show a manual backup to be in place:
mysrvr09hr:root:/opt/crs/product/11204/crs/bin # ./ocrconfig -showbackup
mysrvr11hr 2017/04/21 05:20:36 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup00.ocr
mysrvr11hr 2017/04/21 01:20:29 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup01.ocr
mysrvr11hr 2017/04/20 21:20:07 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup02.ocr
mysrvr11hr 2017/04/20 01:19:42 /opt/crs/product/11204/crs/cdata/mysrvr03cl/day.ocr
mysrvr11hr 2017/04/12 17:16:11 /opt/crs/product/11204/crs/cdata/mysrvr03cl/week.ocr
mysrvr11hr 2017/04/21 09:12:40 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup_20170421_091240.ocr
The last line now shows the manual backup (recognizable by the backup_yyyymmdd_hhmmss.ocr format).
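Since the OLR only supports manual backups (see the note above), it may be worth taking one of those at the same time; a small sketch, to be run as root from the same directory:
./ocrconfig -local -manualbackup
./ocrconfig -local -showbackup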
- Check the location of OCR and voting disk (they need to be in a diskgroup):
##How: cat /etc/oracle/ocr.loc
## Shows output similar to this
## (if the OCR is already mirrored in another diskgroup with normal redundancy)
#Device/file getting replaced by device +OCR
ocrconfig_loc=+VOTE
ocrmirrorconfig_loc=+OCR
##How:
crsctl query css votedisk
## Will show 3 voting disks in diskgroup VOTE due to normal redundancy (and 3 disks)
## STATE    File Universal Id                  File Name                  Disk group
-- -----    -----------------                  ---------                  ---------
 1. ONLINE   36b26f862b9a4f54bfba3096e3d50afa  (/dev/mapper/asm-vote01)   [VOTE]
 2. ONLINE   9d45d791c1124febbf0a093d5a185c13  (/dev/mapper/asm-vote02)   [VOTE]
 3. ONLINE   1b7e510a302e4f03bfdea942d55d7067  (/dev/mapper/asm-vote03)   [VOTE]
Located 3 voting disk(s).
## check in ASM:
select a.name dg_name,
a.GROUP_NUMBER dg_number,
a.state dg_state,
b.DISK_NUMBER d_number,
b.name d_name,
b.mount_status d_mount_status,
b.header_status d_header_status,
b.mode_status d_mode_status,
b.state d_state,
b.FAILGROUP d_failgroup,
b.path d_path
from v$asm_diskgroup a, v$asm_disk b
where a.GROUP_NUMBER(+) = b.GROUP_NUMBER
order by 2,4;
- Unset environment Variables:
unset ORACLE_BASE
unset ORACLE_HOME
unset GI_HOME
unset ORA_CRS_HOME
unset TNS_ADMIN
unset ORACLE_SID
unset ORA_NLS10
- Check active crs version and software version:
## using the current CRS home to document the current active version and software version
/opt/crs/product/11204/crs/bin/crsctl query crs activeversion
/opt/crs/product/11204/crs/bin/crsctl query crs softwareversion
- Performing a Standard Upgrade from an Earlier Release
## Use the following procedure to upgrade the cluster from an earlier release:
Start the installer and select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation. On the node selection page, select all nodes, and select the installation options as prompted. Note: Oracle recommends that you configure root script automation, so that the rootupgrade.sh script can be run automatically during the upgrade. Then run the root scripts, either automatically or manually.

Running root scripts automatically: TIP: if you have configured root script automation, use the pause between batches to relocate services from the nodes running the previous release to the new release. Comment Mathijs: I have not decided yet on this automation step. In the documentation read as preparation for the upgrade you see the option to create multiple batches: for example batch 1 the starting node, batch 2 all but the last node, batch 3 the last node. I will use the automated way on one cluster and then use the manual (old school) method mentioned below on another cluster.

Running root scripts manually: if you have not configured root script automation, then, when prompted, run the rootupgrade.sh script on each node in the cluster that you want to upgrade, starting with the local node. The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation. After the script completes successfully, you can run it in parallel on all nodes except one, which you select as the last node. When the script has run successfully on all nodes except the last node, run it on the last node.

After running the rootupgrade.sh script on the last node in the cluster: if you are upgrading from a release earlier than Oracle Grid Infrastructure 11g Release 2 (11.2.0.2) and you left the check box labeled ASMCA checked (which is the default), then Oracle Automatic Storage Management Configuration Assistant (ASMCA) runs automatically and the Oracle Grid Infrastructure upgrade is complete. If you unchecked the box during the interview stage of the upgrade, then ASMCA is not run automatically. If an earlier release of Oracle Automatic Storage Management (Oracle ASM) is installed, then the installer starts ASMCA to upgrade Oracle ASM to 12c Release 1 (12.1). You can choose to upgrade Oracle ASM at this time, or upgrade it later. Oracle recommends that you upgrade Oracle ASM at the same time that you upgrade Oracle Clusterware. Until Oracle ASM is upgraded, Oracle databases that use Oracle ASM cannot be created, and the Oracle ASM management tools in the Oracle Grid Infrastructure 12c Release 1 (12.1) home (for example, srvctl) do not work.

Note: because the Oracle Grid Infrastructure home is in a different location than the former Oracle Clusterware and Oracle ASM homes, update any scripts or applications that use utilities, libraries, or other files that reside in the Oracle Clusterware and Oracle ASM homes.
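For the manual (old school) route, the run order described above boils down to something like this sketch (run as root; the 12c grid home is the one used in this post, the node roles are just an illustration):
## 1. on the local node first:
/app/grid/product/12102/grid/rootupgrade.sh
## 2. then on all remaining nodes except the one chosen as the last node (these can run in parallel)
## 3. finally on the last node:
/app/grid/product/12102/grid/rootupgrade.sh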
- Check active crs version and software version:
/opt/crs/product/11204/crs/bin/crsctl query crs activeversion
/opt/crs/product/11204/crs/bin/crsctl query crs softwareversion
- Post upgrade checks:
ps -ef|grep d.bin should show daemons started from 12C.
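A few more checks that fit here, now pointing at the new 12c home (a sketch, paths as used elsewhere in this post):
ps -ef | grep d.bin
/app/grid/product/12102/grid/bin/crsctl check cluster -all
/app/grid/product/12102/grid/bin/crsctl query crs activeversion
/app/grid/product/12102/grid/bin/crsctl query crs softwareversion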
Thoughts on Rollback:
Of course each migration will only be as good as its preparation. But still, your plan should at least hold the steps for a rollback in case you do not make it to a successfully completed task. Below you will find those steps in general.
- On all remote nodes, use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade to stop the 12c Release 1 (12.1) stack.
- On the local node, use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade -lastnode.
- On any of the cluster member nodes where the rootupgrade.sh script has run successfully, go to the OUI directory of the new (12.1) grid home:
cd /u01/app/12.1.0/grid/oui/bin
- On any of the cluster member nodes where the rootupgrade.sh script has run successfully, update the node list in the old ORACLE_HOME (the earlier Oracle Clusterware installation):
$ cd /opt/crs/product/11204/crs/oui/bin/
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/crs
- Start the Oracle Clusterware stack manually.
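That last step ("start the Oracle Clusterware stack manually") is not spelled out in my notes; presumably it is the usual command, run as root on each node from the old (again active) 11.2 home:
/opt/crs/product/11204/crs/bin/crsctl start crs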
As always thank you for taking an interest in my blog. Happy reading and till the next time.
Mathijs
Asm compatible.rdbms set correct (or not)
General:
Almost wanted to start with once upon a time... but what good is that kind of start for a blog about ASM compatible.rdbms, right? So better stick to the facts. On one of the test environments of the billing environment I set up some 35 databases in the past and registered them all in Oracle Restart 11.2.0.3. Since this was already several years ago, according to the standards back then only two diskgroups existed in ASM: +DATA and +FRA. All the databases created back then had their archive destination set to +FRA. This worked okay until recently, when the archiving behavior of a specific database started changing (a high increase in the number of archives). That is when it became time to set up a dedicated diskgroup for each of the databases in that specific scope.
Details:
As part of the protocol, I dropped a number of disks from the +FRA diskgroup and performed a rebalance action on that diskgroup in ASM, after which I was good to go for my fresh diskgroup. Since all databases on that server were 11.2.0.3, this is the way the diskgroup was set up:
CREATE DISKGROUP MYDB01_FRA01 EXTERNAL REDUNDANCY
DISK '/dev/mapper/asm-redo145p1' NAME MYDB01_FRA01_0001
ATTRIBUTE 'compatible.asm'='11.2', 'compatible.rdbms'='11.2';
Looks perfectly normal, right, and all according to the book. I checked that the diskgroup was mounted (which it was), so the next step was to work with one of the databases in scope, point its archive destination to the newly created diskgroup, and, as a check, perform a log switch in the database afterwards.
## In the alert log of the database this is what was noticed:
ALTER SYSTEM SET log_archive_dest_1='LOCATION=+MYDB01_FRA01' SCOPE=BOTH;
Tue Mar 14 15:59:46 2017
Thread 1 advanced to log sequence 15417 (LGWR switch)
Current log# 3 seq# 15417 mem# 0: +DATA/MYDB01/onlinelog/group_3.285.765120841
Current log# 3 seq# 15417 mem# 1: +FRA/MYDB01/onlinelog/group_3.271.765120841
Tue Mar 14 15:59:47 2017
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance MYDB01 - Archival Error
ORA-16038: log 2 sequence# 15416 cannot be archived
ORA-00254: error in archive control string ''
ORA-00312: online log 2 thread 1: '+DATA/MYDB01/onlinelog/group_2.286.765120839'
ORA-00312: online log 2 thread 1: '+FRA/MYDB01/onlinelog/group_2.272.765120841'
ORA-15001: diskgroup "MYDB01_FRA01" does not exist or is not mounted
Hm, that was unexpected because, as said, I did check that the diskgroup was mounted in ASM before starting the steps. So, time to look in the ASM alert log.
### In the ASM alert log
Tue Mar 14 15:59:46 2017
NOTE: Rejecting a request to use grp 'MYDB01_FRA01' from 'MYDB01:MYDB01'.
NOTE: Its version '11.1.0.0.0' is lesser than the minimum required version '11.2.0.0.0' for a database to use this group
Tue Mar 14 16:05:27 2017
NOTE: Rejecting a request to use grp 'MYDB01_FRA01' from 'MYDB01:MYDB01'.
NOTE: Its version '11.1.0.0.0' is lesser than the minimum required version '11.2.0.0.0' for a database to use this group
A clear clue, but very much unexpected to be honest, because all databases are 11.2.0.3; I set them up personally. So one more detail was missing in this puzzle. Let's look at the specific database:
SQL> show parameter compatible

NAME          TYPE      VALUE
------------- --------- ---------
compatible    string    11.1.0
For whatever reason, back when the database was set up, the compatible parameter of the database was set to 11.1. Since the database was already in a hung state, time was ticking. I tried to alter the archive destination again, which was not allowed (error messages). So, ready for plan B:
Dropped diskgroup MYDB01_FRA01 (no archive was written in this group, of course I checked).
## Then in ASM I performed:
CREATE DISKGROUP MYDB01_FRA01 EXTERNAL REDUNDANCY
DISK '/dev/mapper/asm-redo145p1' NAME MYDB01_FRA01_0001
ATTRIBUTE 'compatible.asm'='11.2', 'compatible.rdbms'='11.1';
After that the database was happy (and so was this DBA), and I had re-learned a lesson: always expect the unexpected. Or, more generally: before switching your log_archive_dest to another diskgroup in ASM, make sure the compatible.rdbms attribute of that diskgroup is set properly.
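A quick way to check this up front is to query the diskgroup attributes (the columns below exist in 11.2 as well). Keep in mind that these compatibility attributes can only be advanced, never lowered, which is why dropping and recreating the diskgroup was the way out here.
select name, compatibility, database_compatibility
from v$asm_diskgroup
order by name;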
As always Happy reading,
Mathijs
Renaming datafiles in Oracle 11GR2 Asm
Introduction.
As part of their housekeeping job, DBAs of all times have been creating tablespaces for users and applications. During the life cycle of an application, objects will be created, deleted, copied into temporary copies, etc. In the end the storage usage of a tablespace might look like a Swiss cheese. Even though Oracle is smart enough to reuse the holes, sometimes it is good housekeeping to reorganize a tablespace by moving (recreating) its objects into a new tablespace. Once such a scenario has completed, the old tablespace (aka the Swiss cheese) will be empty, so it could be tossed away. Or not, if the tablespace name is mandatory because vendor scripts use that hardcoded tablespace name. Even though the DBA might not like such 'fixed' names, it is good to know that Oracle 11g (my version is 11.2.0.3 on Linux) offers the option to rename a tablespace. But what will happen to the datafiles that are part of that tablespace?
The scenario below is based on a great post I found on the web by Frits Hoogland: https://fritshoogland.wordpress.com/2012/07/23/rename-oracle-managed-file-omf-datafiles-in-asm/ .
In his post Frits offers two scenarios to do this. Personally I favor the first one, since it holds less risk: Oracle keeps track of the copied file and the old file and drops the old file during the alter database rename file ... action, whereas the second scenario means you will have to do some after-work in asmcmd, since the original (old) file remains in place (in RMAN a copy and a switch to the copy is done).
And as you can see, after all these years (his post was written in 2012) it is still great input for the current scenario. Good posts are like good wine; they get better as they age.
Scenario:
It is often stated on the web that you should not believe something just because it is printed somewhere. So I did not. To get a look and feel of the scenario, let's run a test first.
## Creating a tablespace in an Asm diskgroup ( which will create the datafile as OMF).
SQL> create tablespace test1 datafile '+DATA' size 100m;
Let's see what the datafile for this tablespace in the DATA diskgroup looks like:
SQL> select * from dba_data_files where tablespace_name = 'TEST1';
FILE_NAME                                  FILE_ID TABLESPACE_NAME  BYTES      BLOCKS STATUS    RELATIVE_FNO AUT MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS ONLINE_
------------------------------------------ ------- ---------------- ---------- ------ --------- ------------ --- -------- --------- ------------ ---------- ----------- -------
+DATA/mydb1/datafile/test1.736.935381317         5 TEST1             104857600  12800 AVAILABLE            5 NO         0         0            0  103809024       12672 ONLINE
Ok so we see that the datafile is +DATA/mydb1/datafile/test1.736.935381317 at the moment.
In the next step we will rename the tablespace:
SQL> alter tablespace test1 rename to test2;
Time to check whether something has changed under the hood now that we have renamed the tablespace:
SQL> select * from dba_data_files where tablespace_name = 'TEST2';
FILE_NAME                                  FILE_ID TABLESPACE_NAME  BYTES      BLOCKS STATUS    RELATIVE_FNO AUT MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS ONLINE_
------------------------------------------ ------- ---------------- ---------- ------ --------- ------------ --- -------- --------- ------------ ---------- ----------- -------
+DATA/mydb1/datafile/test1.736.935381317         5 TEST2             104857600  12800 AVAILABLE            5 NO         0         0            0  103809024       12672 ONLINE
Not much of a surprise: the datafile is still +DATA/mydb1/datafile/test1.736.935381317, so if we feel the datafile names should match the tablespace name, we need to go the extra mile to make it right.
In my first sqlplus session I issued this command:
SQL> alter tablespace TEST2 offline;
I am used to working in multiple sessions on my Linux boxes, so I opened a second window, started RMAN and gave this command to copy the old file within the diskgroup. This will create another OMF file for this datafile in the diskgroup +DATA.
rman target /
RMAN> copy datafile '+DATA/mydb1/datafile/test1.736.935381317' to '+DATA';
Output of this action looked like this:
Starting backup at 08.02.2017 04:13:58
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=923 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+DATA/mydb1/datafile/test1.736.935381317
output file name=+DATA/mydb1/datafile/test2.733.935381641 tag=TAG20170208T041400 RECID=3 STAMP=935381644
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
Finished backup at 08.02.2017 04:14:07
Starting Control File and SPFILE Autobackup at 08.02.2017 04:14:08
piece handle=/opt/oracle/product/11203_ee_64/db/dbs/MYDB1_c-2786503255-20170208-00 comment=NONE
Finished Control File and SPFILE Autobackup at 08.02.2017 04:14:15
With that output file name (the newly created file) +DATA/mydb1/datafile/test2.733.935381641,
I moved back to my first screen and issued this command in sqlplus:
SQL> alter database rename file '+DATA/mydb1/datafile/test1.736.935381317' to '+DATA/mydb1/datafile/test2.733.935381641';
After that, the last step is to bring the tablespace online again:
SQL> alter tablespace test2 online;
Let’s check the datafile for the tablespace again now.
SQL> select * from dba_data_files where tablespace_name = 'TEST2';
FILE_NAME                                  FILE_ID TABLESPACE_NAME  BYTES      BLOCKS STATUS    RELATIVE_FNO AUT MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS ONLINE_
------------------------------------------ ------- ---------------- ---------- ------ --------- ------------ --- -------- --------- ------------ ---------- ----------- -------
+DATA/mydb1/datafile/test2.733.935381641         5 TEST2             104857600  12800 AVAILABLE            5 NO         0         0            0  103809024       12672 ONLINE
Now it is time for the real deal.
First let's find out which datafiles are part of our tablespace POOL_DATA (which was renamed from POOL_DATA_REORG):
SQL> select file_id,file_name from dba_data_files where tablespace_name = 'POOL_DATA';
FILE_ID FILE_NAME
------- ---------------------------------------------------
     41 +DATA/mydb1/datafile/pool_data_reorg.780.934704787
     42 +DATA/mydb1/datafile/pool_data_reorg.783.934705039
     43 +DATA/mydb1/datafile/pool_data_reorg.658.934705043
     44 +DATA/mydb1/datafile/pool_data_reorg.735.934705271
     45 +DATA/mydb1/datafile/pool_data_reorg.734.934705525
     46 +DATA/mydb1/datafile/pool_data_reorg.732.934705785
     47 +DATA/mydb1/datafile/pool_data_reorg.731.934706037
     48 +DATA/mydb1/datafile/pool_data_reorg.730.934706289
     49 +DATA/mydb1/datafile/pool_data_reorg.729.934706505
Time to take the tablespace offline, after agreeing with the vendor DBA on a maintenance window to do these actions:
SQL> alter tablespace POOL_DATA offline;
Opened a second screen and started an rman session.
rman target /
RMAN> copy datafile '+DATA/mydb1/datafile/pool_data_reorg.780.934704787' to '+DATA';
This showed the following output in RMAN:
Starting backup at 08.02.2017 04:35:57
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00041 name=+DATA/mydb1/datafile/pool_data_reorg.780.934704787
output file name=+DATA/mydb1/datafile/pool_data.736.935382959 tag=TAG20170208T043558 RECID=4 STAMP=935383461
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:08:25
Finished backup at 08.02.2017 04:44:23
Starting Control File and SPFILE Autobackup at 08.02.2017 04:44:24
piece handle=/opt/oracle/product/11203_ee_64/db/dbs/MYDB1_c-2786503255-20170208-03 comment=NONE
Finished Control File and SPFILE Autobackup at 08.02.2017 04:44:31
In sqlplus I issued the rename action using the above information:
SQL> alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.780.934704787' to '+DATA/mydb1/datafile/pool_data.736.935382959';
And since seeing is believing, I checked the alert log and saw this entry:
alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.780.934704787' to '+DATA/mydb1/datafile/pool_data.736.935382959'
Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.780.934704787
Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.780.934704787' to '+DATA/mydb1/datafile/pool_data.736.935382959'
I like what I see! Working this way, the rename automatically deletes the old file in ASM for us!
If this were the only datafile to be moved, the next step would be to take the tablespace online again (in this case also as an extra check that we did not break anything).
SQL> alter tablespace POOL_DATA online;
Now let us check the datafile(s) for the tablespace again:
SQL> select file_id,file_name from dba_data_files where tablespace_name = 'POOL_DATA';
FILE_ID FILE_NAME
------- ---------------------------------------------------
     41 +DATA/mydb1/datafile/pool_data.736.935382959
     42 +DATA/mydb1/datafile/pool_data_reorg.783.934705039
     43 +DATA/mydb1/datafile/pool_data_reorg.658.934705043
     44 +DATA/mydb1/datafile/pool_data_reorg.735.934705271
     45 +DATA/mydb1/datafile/pool_data_reorg.734.934705525
     46 +DATA/mydb1/datafile/pool_data_reorg.732.934705785
     47 +DATA/mydb1/datafile/pool_data_reorg.731.934706037
     48 +DATA/mydb1/datafile/pool_data_reorg.730.934706289
     49 +DATA/mydb1/datafile/pool_data_reorg.729.934706505
Ah, that looks like one down (one correct, 8 more to go).
alter tablespace POOL_DATA offline;
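To save some typing for the remaining files, the RMAN copy commands can be generated from the dictionary first (a small sketch; review the generated statements before running them):
select 'copy datafile '''||file_name||''' to ''+DATA'';' as rman_cmd
from dba_data_files
where tablespace_name = 'POOL_DATA'
and file_name like '%pool_data_reorg%';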
Proceeded with the remaining 8 datafiles, and when doing the rename steps it was good to see the information below in the alert log:
Wed Feb 08 06:00:34 2017
alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.783.934705039' to '+DATA/mydb1/datafile/pool_data.780.935384353'
Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.783.934705039
Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.783.934705039' to '+DATA/mydb1/datafile/pool_data.780.935384353'
alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.658.934705043' to '+DATA/mydb1/datafile/pool_data.755.935385209'
Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.658.934705043
Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.658.934705043' to '+DATA/mydb1/datafile/pool_data.755.935385209'
alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.735.934705271' to '+DATA/mydb1/datafile/pool_data.778.935384371'
Wed Feb 08 06:00:48 2017
Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.735.934705271
Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.735.934705271' to '+DATA/mydb1/datafile/pool_data.778.935384371'
alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.734.934705525' to '+DATA/mydb1/datafile/pool_data.781.935385215'
Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.734.934705525
Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.734.934705525' to '+DATA/mydb1/datafile/pool_data.781.935385215'
alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.730.934706289' to '+DATA/mydb1/datafile/pool_data.762.935386865'
Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.730.934706289
Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.730.934706289' to '+DATA/mydb1/datafile/pool_data.762.935386865'
alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.729.934706505' to '+DATA/mydb1/datafile/pool_data.772.935386887'
Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.729.934706505
Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.729.934706505' to '+DATA/mydb1/datafile/pool_data.772.935386887'
Wed Feb 08 06:01:30 2017
alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.732.934705785' to '+DATA/mydb1/datafile/pool_data.744.935386243'
Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.732.934705785
Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.732.934705785' to '+DATA/mydb1/datafile/pool_data.744.935386243'
All looked well so time to wrap up.
SQL> alter tablespace POOL_DATA online;
SQL> alter database backup controlfile to trace;
SQL> select file_id,file_name from dba_data_files where tablespace_name = 'POOL_DATA';
FILE_ID FILE_NAME
------- ---------------------------------------------------
     41 +DATA/mydb1/datafile/pool_data.736.935382959
     42 +DATA/mydb1/datafile/pool_data.780.935384353
     43 +DATA/mydb1/datafile/pool_data.755.935385209
     44 +DATA/mydb1/datafile/pool_data.778.935384371
     45 +DATA/mydb1/datafile/pool_data.781.935385215
     46 +DATA/mydb1/datafile/pool_data.744.935386243
     47 +DATA/mydb1/datafile/pool_data.732.935388425
     48 +DATA/mydb1/datafile/pool_data.762.935386865
     49 +DATA/mydb1/datafile/pool_data.772.935386887
Final check:
SQL> select TABLESPACE_NAME,status from dba_tablespaces order by 1 ;
TABLESPACE_NAME                STATUS
------------------------------ ---------
POOL_DATA                      ONLINE
POOL_IX                        ONLINE
SYSAUX                         ONLINE
SYSTEM                         ONLINE
TEMP                           ONLINE
TEST2                          ONLINE
TOOLS                          ONLINE
UNDOTBS1                       ONLINE
As always, I hope you had some happy reading going through this procedure.
Mathijs.