Introduction:
Where would we be without challenges? I have become a member of a project team for a new billing environment, and this team is aiming to use (and go live with) Oracle 12.2 (Grid Infrastructure and database). The information in this article will serve as a baseline for the installation of several Oracle environments on Linux. Oracle refers to this setup as Oracle Restart. Next in line after that (and I love it) will be setting up Real Application Clusters.
General Preparations 12.2 Grid Kata:
## Identifying ORACLE_BASE and layout of Grid Infrastructure.
echo $ORACLE_BASE
/app/oracle
echo $ORACLE_HOME
/app/grid/product/12.2.0.1/grid

## Identifying ORACLE_BASE and DB software.
echo $ORACLE_BASE
/app/oracle
echo $ORACLE_HOME
/app/oracle/product/12.2.0.1/db
## So this is the 12.2 layout that is in scope for the actions on a Restart or RAC environment:
+ASM1   /app/grid/product/12.2.0.1/grid
CRS     /app/grid/product/12.2.0.1/grid
-MGMTDB /app/grid/product/12.2.0.1/grid
MYDB    /app/oracle/product/12.2.0.1/db
## Checking Red Hat release:
oracle@mysrvr1hr:/dev/mapper []# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.9 (Santiago)
## Oracle restart installation for 12.2 instructions to be found:
An interesting point is that in 12.2 the famous, well-known runInstaller has been replaced by ./gridSetup.sh (I even ran into errors (OUI-10133) when running runInstaller in 12.2). A second point of interest is that you have to pre-create the directory where the software will be installed.
## Preparations for Installation:
- On the server where you will install the Grid Infrastructure, create the directory where you want to install the software (the location you will later on call your ORACLE_HOME). In my specific case that meant I had to run mkdir -p /app/grid/product/12.2.0.1/grid on that server.
- From Solltau:
oracle@myhost:/opt/oracle/Odrive/depot/software/oracle/12c/GI []# scp linuxx64_12201_grid_home.zip oracle@mysrvr1hr:/app/grid/product/12.2.0.1/grid
- UNSET your environment variables if any on the installation box:
unset ORACLE_BASE
unset ORACLE_HOME
unset GI_HOME
unset ORA_CRS_HOME
unset TNS_ADMIN
unset ORACLE_SID
unset ORA_NLS10
echo $ORACLE_BASE etc.
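## A quick way to double-check that nothing Oracle-related is still set in the shell (a minimal sketch; the variable names are the ones unset above):
# Should return nothing if the environment is clean
env | grep -iE 'ORACLE_|ORA_|GI_HOME|TNS_ADMIN' || echo "environment is clean"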
- ## Check the zip file in the destination that will also become your ORACLE_HOME for this install.
oracle@mysrvr1hr:/app/grid/product/12.2.0.1/grid []# ls -ltr
total 2924504
-rw-r--r--. 1 oracle dba 2994687209 Jan  3 16:28 linuxx64_12201_grid_home.zip
Make sure you unzip the file in the future ORACLE_HOME destination. This is mandatory because, unlike previous installations where you could alter the software installation directory, during the 12.2 installation there will NOT be an option to choose the destination of the installation. ## Make sure you are in the directory /app/grid/product/12.2.0.1/grid and extract the zip there!
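## A minimal sketch of that unzip step (file name as transferred above):
cd /app/grid/product/12.2.0.1/grid
unzip linuxx64_12201_grid_home.zip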
- ## Once the zip file is extracted, fire the script below:
./gridSetup.sh
In one of the following detailed screens, make sure you change the “change directory path” setting so the installer discovers the disks that you will be using for this installation. In my case this meant that my Linux admin colleague had set up and labelled dedicated LUNs (disks) in preparation for my actions.
root # ls -lH /dev/mapper/ASM_*
brw-rw----. 1 oracle dba 253,  6 Dec 22 16:01 /dev/mapper/ASM_ACFS_035_001
brw-rw----. 1 oracle dba 253, 33 Dec 22 16:01 /dev/mapper/ASM_OCRM_008_001
brw-rw----. 1 oracle dba 253, 34 Dec 22 16:01 /dev/mapper/ASM_OCRM_008_002
brw-rw----. 1 oracle dba 253, 25 Dec 22 16:01 /dev/mapper/ASM_VOTE_008_001
brw-rw----. 1 oracle dba 253, 26 Dec 22 16:01 /dev/mapper/ASM_VOTE_008_002
brw-rw----. 1 oracle dba 253, 30 Dec 22 16:01 /dev/mapper/ASM_VOTE_008_003
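## Later, once the ASM instance is running, you can cross-check which of these labelled devices ASM actually discovered. A minimal sketch (run in a sysasm session; the discovery string /dev/mapper/ASM_* is an assumption based on the labels above):
SHOW PARAMETER asm_diskstring
SELECT path, header_status, os_mb FROM v$asm_disk ORDER BY path;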
## Once you have made all the selections needed, the next screen will appear:
Once you have selected Install, you will be updated by the next progress screen:
## In a separate screen, ./root.sh has to be run as the root user, which will show:
mysrvr1hr:root:/app/grid/product/12.2.0.1/grid # ./root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /app/grid/product/12.2.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/grid/product/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at: /app/oracle/crsdata/mysrvr1hr/crsconfig/roothas_2018-01-03_05-02-27PM.log

## logging details of root.sh:
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'dba'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node mysrvr1hr successfully pinned.
2018/01/03 17:02:50 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'mysrvr1hr'
CRS-2673: Attempting to stop 'ora.evmd' on 'mysrvr1hr'
CRS-2677: Stop of 'ora.evmd' on 'mysrvr1hr' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'mysrvr1hr' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
mysrvr1hr 2018/01/03 17:03:44 /app/grid/product/12.2.0.1/grid/cdata/mysrvr1hr/backup_20180103_170344.olr 0
2018/01/03 17:03:49 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
mysrvr1hr:root:/app/grid/product/12.2.0.1/grid #
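## After root.sh has finished, it does not hurt to verify that the High Availability Services stack is really up. A minimal sketch (run as the grid software owner):
/app/grid/product/12.2.0.1/grid/bin/crsctl check has
/app/grid/product/12.2.0.1/grid/bin/crsctl stat res -t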
Resolving possible issues: 12.2 GI standalone : [INS-20802] Automatic Storage Management Configuration Assistant failed (Doc ID 2277224.1)
## The installation will create the ASM instance with a default spfile. Due to company standards and best practice (knowing that size does matter and that default settings will not do well in a heavily used environment), you should connect to the ASM instance and alter the values below:
## Specific setup for the ASM instance
ALTER SYSTEM SET memory_max_target=4096M SCOPE=SPFILE;
ALTER SYSTEM SET memory_target=1536M SCOPE=SPFILE;
ALTER SYSTEM SET large_pool_size=100M SCOPE=SPFILE;
ALTER SYSTEM SET shared_pool_size=512M SCOPE=BOTH;
ALTER SYSTEM SET shared_pool_reserved_size=100M SCOPE=SPFILE;

## Nothing to do with performance, but mandatory due to standards.
ALTER SYSTEM SET audit_file_dest='/app/oracle/+ASM/admin/adump' SCOPE=SPFILE;
ALTER SYSTEM SET background_dump_dest='/app/oracle/diag/asm/+asm/+ASM/trace' SCOPE=BOTH;
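## A minimal sketch of how these statements could be applied (assuming the oracle user's environment points at the grid home and the +ASM SID; the SCOPE=SPFILE parameters only take effect after the ASM instance is restarted):
export ORACLE_HOME=/app/grid/product/12.2.0.1/grid
export ORACLE_SID=+ASM
# run the ALTER SYSTEM statements above in a sysasm session
$ORACLE_HOME/bin/sqlplus / as sysasm
# restart ASM so the SCOPE=SPFILE settings become active
$ORACLE_HOME/bin/srvctl stop asm
$ORACLE_HOME/bin/srvctl start asm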
## Company standards with regard to the listener:
- Log destination: /app/oracle/diag/tnslsnr/mysrvr1hr/listener
- One listener per VIP
## So I added a listener with the netca tool, running it from the Grid Infrastructure home.
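## A minimal sketch of that step (netca itself walks you through the screens; the listener name used further below is LISTENER_MYSRVR1HR):
# run netca from the Grid Infrastructure home so the listener is created there
export ORACLE_HOME=/app/grid/product/12.2.0.1/grid
$ORACLE_HOME/bin/netca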
## /app/oracle/diag/tnslsnr/<servername>/<listenername>/trace
oracle@mysrvr1hr:/app/grid/product/12.2.0.1/grid/network/admin [+ASM]# lsnrctl status LISTENER_MYSRVR1HR
## Deinstallation, when needed. As always, you might need a way out (back again).
Note: For upgrades from previous releases, if you want to uninstall the previous release Grid home, then perform the following steps:
- Log in as the root user.
- Manually change the permissions of the previous release Grid home (see below).
- Run the /app/grid/product/12.2.0.1/grid/deinstall/deinstall command (as the oracle user).
For example, on Grid Infrastructure for a standalone server:
# chown -R oracle:dba /app/grid/product/12.2.0.1
# chmod -R 775 /app/grid/product/12.2.0.1

In this example:
- /u01/app/oracle/product/11.2.0/grid is the previous release Oracle Grid Infrastructure for a standalone server home
- oracle is the Oracle Grid Infrastructure installation owner user
- dba is the name of the Oracle Inventory group (OINSTALL group)

For example, on Oracle Database:
# chown -R oracle:dba /app/oracle/product/12.2.0.1
# chmod -R 775 /app/oracle/product/12.2.0.1
If all is well, it is time to start patching the environment!
## Patching GI: p26737266_122010_Linux-x86-64.zip
## oracle@soltau2:/opt/oracle/Odrive/depot/software/oracle/patches/Linuxx86 []# scp p26737266_122010_Linux-x86-64.zip oracle@mysrvr1hr:/app/grid/product/12.2.0.1/stage
## Check the current situation with opatch before patching.
opatch lsinventory -detail -oh /app/grid/product/12.2.0.1/grid
This shows:
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

Oracle Home       : /app/grid/product/12.2.0.1/grid
Central Inventory : /app/oraInventory
   from           : /app/grid/product/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.6
OUI version       : 12.2.0.1.4
Log file location : /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatch/opatch2018-01-05_14-39-27PM_1.log
Lsinventory Output file location : /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-01-05_14-39-27PM.txt
--------------------------------------------------------------------------------
Local Machine Information::
Hostname: mysrvr1hr.mydomain
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1):
Oracle Grid Infrastructure 12c           12.2.0.1.0
There are 1 products installed in this Oracle Home.

Installed Products (99):
etc. . .
There are 99 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.
--------------------------------------------------------------------------------
OPatch succeeded.
## Use opatch to check for conflicts:
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26710464
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26925644
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26737232
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26839277
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /app/grid/product/12.2.0.1/stage/26737266/26928563
## This did not show any conflicts
## Next step: use opatch to check the space requirements (you would not want the installation to fail due to a lack of storage):
For Grid Infrastructure Home, as home user:
Create file /tmp/patch_list_gihome.txt with the following content:

cat /tmp/patch_list_gihome.txt
/app/grid/product/12.2.0.1/stage/26737266/26928563
/app/grid/product/12.2.0.1/stage/26737266/26839277
/app/grid/product/12.2.0.1/stage/26737266/26737232
/app/grid/product/12.2.0.1/stage/26737266/26925644
/app/grid/product/12.2.0.1/stage/26737266/26710464
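## A minimal sketch of creating that file in one go (the patch directories are the same ones used in the conflict checks above):
cat > /tmp/patch_list_gihome.txt <<EOF
/app/grid/product/12.2.0.1/stage/26737266/26928563
/app/grid/product/12.2.0.1/stage/26737266/26839277
/app/grid/product/12.2.0.1/stage/26737266/26737232
/app/grid/product/12.2.0.1/stage/26737266/26925644
/app/grid/product/12.2.0.1/stage/26737266/26710464
EOF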
Run the opatch command to check if enough free space is available in the Grid Infrastructure Home:
$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
## This shows:
oracle@mysrvr1hr:/app/grid/product/12.2.0.1/stage [+ASM]# $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /app/grid/product/12.2.0.1/grid
Central Inventory : /app/oraInventory
   from           : /app/grid/product/12.2.0.1/grid/oraInst.loc
OPatch version    : 12.2.0.1.6
OUI version       : 12.2.0.1.4
Log file location : /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatch/opatch2018-01-05_14-55-06PM_1.log

Invoking prereq "checksystemspace"
Prereq "checkSystemSpace" passed.
OPatch succeeded.
## To patch only the GI home:
# opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid
## failed with:
OPATCHAUTO-72046: Invalid wallet parameters.
OPATCHAUTO-72046: The wallet path or wallet password provided is not valid.
OPATCHAUTO-72046: Please provide valid wallet information.

opatchauto bootstrapping failed with error code 46.
## Thank you, MOS, for elaborating:
OPATCHAUTO-72046: Invalid wallet parameters (Doc ID 2150070.1)
The opatchauto command is not being run as the root user. opatchauto for Grid PSUs should always be run as the root user.
## So, as the root user:
/app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid
## And it failed again!
mysrvr1hr:root:/root # /app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid

System initialization log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2018-01-05_03-09-09PM.log.
Session log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2018-01-05_03-09-12PM.log
The id for this session is 5LQ1

[init:init] Executing OPatchAutoBinaryAction action on home /app/grid/product/12.2.0.1/grid
Executing OPatch prereq operations to verify patch applicability on SIHA Home........
[init:init] OPatchAutoBinaryAction action completed on home /app/grid/product/12.2.0.1/grid with failure
Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : mysrvr1hr->/app/grid/product/12.2.0.1/grid Type[siha]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /app/grid/product/12.2.0.1/grid, host: mysrvr1hr.
Command failed: /app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid -target_type has -binary -invPtrLoc /app/grid/product/12.2.0.1/grid/oraInst.loc -persistresult /app/grid/product/12.2.0.1/grid/OPatch/auto/dbsessioninfo/sessionresult_analyze_mysrvr1hr_siha.ser -analyze -online
Command failure output:
==Following patches FAILED in analysis for apply:

Patch: /app/grid/product/12.2.0.1/stage/26737266/26925644
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-09-16PM_1.log
Reason: Failed during Analysis: CheckNApplyReport Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
Prerequisite check "CheckMinimumOPatchVersion" failed.]
Failed during Analysis: CheckMinimumOPatchVersion Failed, [ Prerequisite Status: FAILED, Prerequisite output:
The details are:
The OPatch being used has version 12.2.0.1.6 while the following patch(es) require higher versions:
Patch 26710464 requires OPatch version 12.2.0.1.7.
Please download latest OPatch from My Orac ...
etc. . .

OPatchAuto failed.
opatchauto failed with error code 42
mysrvr1hr:root:/root #
## So I downloaded the latest OPatch version and parked it in a temporary directory on the target server:
unzip p6880880_122011_Linux-x86-64.zip -d /app/grid/product/12.2.0.1/grid
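## If you want to keep the existing OPatch around before overwriting it, a minimal sketch (the backup directory name is my own choice):
# preserve the old OPatch, then extract the new one into the grid home
mv /app/grid/product/12.2.0.1/grid/OPatch /app/grid/product/12.2.0.1/grid/OPatch.old
unzip p6880880_122011_Linux-x86-64.zip -d /app/grid/product/12.2.0.1/grid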
## Now OPatch shows:
oracle@mysrvr1hr:/app/grid/product/12.2.0.1/opatch [+ASM]# opatch version
OPatch Version: 12.2.0.1.11

OPatch succeeded.
## Sometimes you just have to be patient to hear the lambs being silent:
## Next, run as the root user:
/app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid
## logfiles:
oracle@mysrvr1hr:/app/oracle/crsdata/mysrvr1hr/crsconfig
-rw-rw----. 1 oracle dba 17364 Jan  5 15:35 hapatch_2018-01-05_03-34-42PM.log
-rw-rw----. 1 oracle dba 23725 Jan  5 15:42 hapatch_2018-01-05_03-42-41PM.log

## showed
mysrvr1hr:root:/root # /app/grid/product/12.2.0.1/grid/OPatch/opatchauto apply /app/grid/product/12.2.0.1/stage/26737266 -oh /app/grid/product/12.2.0.1/grid

OPatchauto session is initiated at Fri Jan  5 15:33:54 2018
System initialization log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2018-01-05_03-33-58PM.log.
Session log file is /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2018-01-05_03-34-02PM.log
The id for this session is XLE2

Executing OPatch prereq operations to verify patch applicability on home /app/grid/product/12.2.0.1/grid
Patch applicability verified successfully on home /app/grid/product/12.2.0.1/grid

Bringing down CRS service on home /app/grid/product/12.2.0.1/grid
Prepatch operation log file location: /app/oracle/crsdata/mysrvr1hr/crsconfig/hapatch_2018-01-05_03-34-42PM.log
CRS service brought down successfully on home /app/grid/product/12.2.0.1/grid

Start applying binary patch on home /app/grid/product/12.2.0.1/grid
Binary patch applied successfully on home /app/grid/product/12.2.0.1/grid

Starting CRS service on home /app/grid/product/12.2.0.1/grid
Postpatch operation log file location: /app/oracle/crsdata/mysrvr1hr/crsconfig/hapatch_2018-01-05_03-42-41PM.log
CRS service started successfully on home /app/grid/product/12.2.0.1/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:

Host: mysrvr1hr
SIHA Home: /app/grid/product/12.2.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /app/grid/product/12.2.0.1/stage/26737266/26710464
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log

Patch: /app/grid/product/12.2.0.1/stage/26737266/26737232
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log

Patch: /app/grid/product/12.2.0.1/stage/26737266/26839277
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log

Patch: /app/grid/product/12.2.0.1/stage/26737266/26925644
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log

Patch: /app/grid/product/12.2.0.1/stage/26737266/26928563
Log: /app/grid/product/12.2.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-01-05_15-35-04PM_1.log

OPatchauto session completed at Fri Jan  5 15:43:05 2018
Time taken to complete the session 9 minutes, 11 seconds
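## To confirm the patches really landed in the home and the stack is back, a quick check (a minimal sketch; opatch lspatches lists the applied patch IDs):
/app/grid/product/12.2.0.1/grid/OPatch/opatch lspatches
/app/grid/product/12.2.0.1/grid/bin/crsctl check has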
Happy DBA: installed 12.2 GI and patched it with the October 2017 RU.
Thanks for reading and till we meet again,
Mathijs.