Cloning a PDB with a Different Character Set into your CDB

Introduction.

The following scenario will be discussed in this post: there is a container database that has been created with a UTF8 character set (the character set recommended in OCI). This container database (and its only PDB, of course also in UTF8) already has an active Data Guard in place in OCI.

Scenario requested: can we add (create) a PDB with a different character set into this CDB?

Short story: no, you cannot create a pluggable database with a different character set in the CDB. Plan your CDB with care.

Long story: you cannot, but there is a workaround, provided your target CDB (the database to which you would like to add the PDB with that different character set) is UTF8.

AND

You need a second container database, with that different character set, in OCI.

Recommendations / Setup / Tested practice:

In OCI, before implementing a Data Guard on your CDB, it is recommended to create all the needed PDBs in the CDB first (best practice: via the OCI console). Which means: if you really, really, really want to do the scenario below, do it before setting up the Data Guard.

Second recommendation: if you need to add PDBs to the container database (with UTF8), it is still recommended to add or clone all the PDBs needed in that CDB before starting to work on the (active) Data Guard.

Should you need a container database with a different character set, and need to load data into it that also holds that different character set, then set up this environment and perform the remote clone steps as described below, and make sure the PDB is in a good state on the primary side before continuing with the cloning scenario.

Please be aware that a lot of things happen for a reason. If you really, really, really (did I mention really?) have to have environments with another character set (different from UTF8), do consider implementing a separate CDB infrastructure with that different character set!

And most importantly, the order of tools to use should be: the UI first, then the database CLI, and only as a last resort manual intervention using ssh, sqlplus, etc.

Requirements / best practices to check before cloning:

  • My source CDB (in UTF8) is holding a PDB, and I want to add a PDB in a different character set to my primary side in that existing CDB.
  • The source (remote pluggable database), in a CDB with the different character set, is set to read only.
  • Is there a Data Guard setup in place in OCI? If your answer to that question is yes: check with dgmgrl that the Data Guard is healthy.
  • Do you have enough storage in the disk groups to hold a transient PDB and a clone of that transient PDB in the same container database?
  • Is there a database link in place for the remote clone?
  • Check the source (in the container database with the different character set) as a preparation for the first clone:

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ISOP15_PDB1                    READ WRITE NO
         4 TEST                           READ ONLY  NO
         5 WILLIE                         READ WRITE NO

  • Check the character set in that source PDB:

SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

VALUE
----------------------------------------------------------------
WE8ISO8859P15

  • Check the Data Guard (if one is in place) and verify that it is healthy.

Note: by default in OCI the broker is running on the primary side.

dgmgrl

DGMGRL> connect sys / as sysdba
Password:
Connected to "DBUTF8_fra2ps"
Connected as SYSDBA.

DGMGRL> show configuration

Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7

  Protection Mode: MaxPerformance
  Members:
  DBUTF8_fra2ps - Primary database
    DBUTF8_fra1h7 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 59 seconds ago)

  • Check for storage:

Specific script for an ASM environment:

SET LINESIZE 150
SET PAGESIZE 9999
SET VERIFY off
COLUMN group_name FORMAT a25          HEAD 'DISKGROUP_NAME'
COLUMN state      FORMAT a11          HEAD 'STATE'
COLUMN type       FORMAT a6           HEAD 'TYPE'
COLUMN total_gb   FORMAT 999,999,999  HEAD 'TOTAL SIZE (GB)'
COLUMN free_gb    FORMAT 999,999,999  HEAD 'FREE SIZE (GB)'
COLUMN used_gb    FORMAT 999,999,999  HEAD 'USED SIZE (GB)'
COLUMN pct_used   FORMAT 999.99       HEAD 'PERCENTAGE USED'

SELECT DISTINCT name group_name, state, type,
       round(total_mb/1024)                   total_gb,
       round(free_mb/1024)                    free_gb,
       round((total_mb - free_mb)/1024)       used_gb,
       round((1 - (free_mb/total_mb))*100, 2) pct_used
FROM   v$asm_diskgroup
-- WHERE round((1 - (free_mb/total_mb))*100, 2) > 90
ORDER  BY name;

  • On the primary side, check for the specific database link:

set lines 300
select owner, db_link from dba_db_links order by 1;

OWNER    DB_LINK
-------- -----------------------------------------------------
PUBLIC   dblink_to_other_CDB.SUB10200805271.MBK1.ORACLEVCN.COM
SYS      SYS_HUB

  • If the database link is not in place, it needs to be created (as you can see, I decided to create a public db link and also decided to put all connection details in the db link instead of using a tnsnames entry):

CREATE public DATABASE LINK dblink_to_other_CDB
CONNECT TO <C##POWERUSER> IDENTIFIED BY <USER_PASSWORD>
USING '(DESCRIPTION=(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.179)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=test.sub10200805271.mbk1.oraclevcn.com)))';

  • On the primary side, first check the GLOBAL* parameters; if you do not really need it, set global_names to FALSE!

SQL> show parameter global

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
allow_global_dblinks                 boolean     FALSE
global_names                         boolean     FALSE <<<----
global_txn_processes                 integer     1

  • If global_names is TRUE, change it to FALSE:

alter system set global_names = FALSE;

  • After that, check that the db link is working on both the primary side and the standby side:

SQL> select sysdate from dual@dblink_to_other_CDB;

SYSDATE
---------
19-NOV-21

  • On the primary side, create an intermediate PDB (a clone from the remote container holding the PDB in the different character set). This step will create a pluggable database in our CDB with the character set from the other CDB (if that CDB was WE8ISO8859P15, its PDB will be WE8ISO8859P15 too). PDBVOID will be a pluggable database with character set WE8ISO8859P15 that is part of the CDB whose character set is UTF8.

create pluggable database PDBVOID from test@dblink_to_other_CDB keystore identified by "<wallet_password>" standbys=none;

Explanation:

test@dblink_to_other_CDB means: test is the PDB name to clone from, and dblink_to_other_CDB is the db link we created earlier to the CDB holding the PDB (in a different character set).

standbys=none means: do not protect this PDB in the Data Guard.
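To verify on the standby side that this transient PDB is indeed excluded from recovery, a small hedged check (the RECOVERY_STATUS column of v$pdbs should show DISABLED for PDBVOID on the standby; the column is available in 12c and later):

select name, open_mode, recovery_status from v$pdbs;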

  • The next step is to create a self-referencing db link on the primary (here named PRIMSIDE), so basically a db link that points to the database in which it is created.
  • Note that it will be replicated on the standby, pointing to the primary too, and I DO recommend you test that db link on both sides (primary and standby) to confirm it works well.

CREATE public DATABASE LINK PRIMSIDE
   CONNECT TO C##SYSTEM IDENTIFIED BY HAS_BEEN_CHANGED_##12
   USING '(DESCRIPTION=(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.39)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=DBUTF8_fra2ps.sub10200805271.mbk1.oraclevcn.com)))';

  • The next step happens on the standby side: you will have to set (update) the parameter STANDBY_PDB_SOURCE_FILE_DBLINK:

ALTER SYSTEM SET STANDBY_PDB_SOURCE_FILE_DBLINK='PRIMSIDE';

  • Next step, on the primary side, prepare for the next clone (inside the container); that clone of the clone will be there to stay.

SQL> alter pluggable database PDBVOID open instances = all;

SQL> alter pluggable database PDBVOID  close instances = all;

SQL> alter pluggable database PDBVOID open read only instances = all;

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDBDEC                         READ WRITE NO
         5 PDB2                           READ WRITE NO
         6 PDB010                         READ WRITE NO
         7 PDBP15                         READ WRITE NO
         8 PDBVOID                        READ ONLY  NO

  • Now create a local clone from the transient no-standby PDB with STANDBYS=ALL  (which means the Data Guard should protect this New PDB).

create pluggable database PDBP15C from PDBVOID keystore identified by "WElcome##12" STANDBYS=ALL;

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PDBDEC                         READ WRITE NO
         5 PDB2                           READ WRITE NO
         6 PDB010                         READ WRITE NO
         7 PDBP15                         READ WRITE NO
         8 PDBVOID                        READ ONLY  NO
         9 PDBP15C                        MOUNTED

  • Open the PDB on primary side:

SQL> alter pluggable database PDBP15C  open instances = all;

  • Time to check wallet on prim side:

col wrl_parameter format a60

set lines 300

select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

  • Create a new TDE master encryption key on the primary side.

alter session set container=PDBP15C;

administer key management set key force keystore identified by "WElcome##12" with backup;

  • Note: the Data Guard broker will not be happy; it will show an error due to the new master key implemented on the primary side:

DGMGRL> show configuration

Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7

  Protection Mode: MaxPerformance
  Members:
  DBUTF8_fra2ps - Primary database
    DBUTF8_fra1h7 - Physical standby database
      Error: ORA-16766: Redo Apply is stopped

Fast-Start Failover:  Disabled

Configuration Status:
ERROR   (status updated 60 seconds ago)

  • On the primary side, start the preparations for the scp of the wallet to the standby machine(s):

cd /opt/oracle/dcs/commonstore/wallets/tde/DBUTF8_fra2ps/

[oracle@clutf81 DBUTF8_fra2ps]$ cp ewallet.p12 cwallet.sso /tmp
[oracle@clutf81 DBUTF8_fra2ps]$ cd /tmp
[oracle@clutf81 tmp]$ ls -ltr ewallet.p12 cwallet.sso
-rw------- 1 oracle oinstall 14091 Nov 19 12:33 ewallet.p12
-rw------- 1 oracle oinstall 14136 Nov 19 12:33 cwallet.sso

## Permissions needed for OPC:
chmod o+rx ewallet.p12 cwallet.sso

[opc@clutf81 tmp]$ scp ewallet.p12 cwallet.sso 10.0.1.62:/tmp
ewallet.p12                                100%   14KB  10.4MB/s   00:00
cwallet.sso

  • On the standby side, start your preparations by saving the current wallet, then copy the wallet from the primary side to the wallet location on the standby side.

cd /opt/oracle/dcs/commonstore/wallets/tde/*/

[oracle@mysbfra1 DBUTF8_fra1h7]$ cp ewallet.p12 ewallet.p12.save

[oracle@mysbfra1 DBUTF8_fra1h7]$ cp cwallet.sso cwallet.sso.save

[oracle@mysbfra1 DBUTF8_fra1h7]$ cp /tmp/ewallet.p12 .

[oracle@mysbfra1 DBUTF8_fra1h7]$ cp /tmp/cwallet.sso .

  • On the standby side, close the wallet (it will reopen with the query in the next bullet):

SQL> alter session set container = CDB$ROOT;

SQL> administer key management set keystore close container=ALL;

  • On the standby side, check (this will also open the keystore again):

col wrl_parameter format a60

set lines 300

select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

  • Now it is time to check status in broker again.

DGMGRL> show configuration

Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7

  Protection Mode: MaxPerformance
  Members:
  DBUTF8_fra2ps - Primary database
    DBUTF8_fra1h7 - Physical standby database
      Error: ORA-16766: Redo Apply is stopped

Fast-Start Failover:  Disabled

Configuration Status:
ERROR   (status updated 60 seconds ago)

DGMGRL> show configuration

Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7

  Protection Mode: MaxPerformance
  Members:
  DBUTF8_fra2ps - Primary database
    DBUTF8_fra1h7 - Physical standby database
      Error: ORA-16810: multiple errors or warnings detected for the member

Fast-Start Failover:  Disabled

Configuration Status:
ERROR   (status updated 51 seconds ago)

  • In the broker, enable apply again and give it some time (in my case several minutes):

edit database DBUTF8_fra1h7 set state='apply-on';

## check broker:

DGMGRL> show configuration
DGMGRL> show database DBUTF8_fra1h7

Database - DBUTF8_fra1h7

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 3.88 MByte/s
  Real Time Query:    ON
  Instance(s):
    DBUTF81
    DBUTF82 (apply instance)

Database Status:
SUCCESS

DGMGRL> show configuration

Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7

  Protection Mode: MaxPerformance
  Members:
  DBUTF8_fra2ps - Primary database
    DBUTF8_fra1h7 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 66 seconds ago)

Well, that is a relief. The Data Guard configuration and broker are happy again (which also means so is this DBA).
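As an extra confirmation you can also let the broker run its own health checks; a short hedged example (VALIDATE DATABASE is available in DGMGRL 12.1 and later):

DGMGRL> validate database DBUTF8_fra1h7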

  • This can also be checked in sqlplus on the standby side:

SQL> set lines 300
SQL> col value format a25
SQL> select * from v$dataguard_stats;

SOURCE_DBID SOURCE_DB_UNIQUE_NAME NAME                   VALUE            UNIT                          TIME_COMPUTED       DATUM_TIME              CON_ID
----------- --------------------- ---------------------- ---------------- ----------------------------- ------------------- ------------------- ----------
          0                       transport lag          +00 00:00:00     day(2) to second(0) interval  11/19/2021 12:50:42 11/19/2021 12:50:40          0
          0                       apply lag              +00 00:00:00     day(2) to second(0) interval  11/19/2021 12:50:42 11/19/2021 12:50:40          0
          0                       apply finish time      +00 00:00:00.000 day(2) to second(3) interval  11/19/2021 12:50:42                              0
          0                       estimated startup time 24               second                        11/19/2021 12:50:42                              0

  • And good things always come in pairs (another check on the standby side):

SQL> select status, blocks, delay_mins, known_agents from gv$managed_standby where process like 'MRP%';

STATUS           BLOCKS DELAY_MINS KNOWN_AGENTS
------------ ---------- ---------- ------------
APPLYING_LOG    2097152          0            3

  • Since we are running an active Data Guard, almost the last step in this scenario is to bring the newborn PDB to read only mode.

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

## Last Checks

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ ONLY  NO
         4 PDBDEC                         READ ONLY  NO
         5 PDB2                           READ ONLY  NO
         6 PDB010                         READ ONLY  NO
         7 PDBP15                         READ ONLY  NO
         8 PDBVOID                        MOUNTED
         9 PDBP15C                        MOUNTED

SQL> alter pluggable database PDBP15C open read only instances = all;

Pluggable database altered.

## Check

SQL> alter session set container = PDBP15C;

Session altered.

SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

VALUE
-------------------------
WE8ISO8859P15

################################  The End  ##############################

PS: of course it is completely up to you whether you want to drop PDBVOID now, or keep it as a template for future scenarios.
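Should you decide to drop the transient PDB, a minimal hedged sketch (close it on all instances first, then drop it including its datafiles):

alter pluggable database PDBVOID close immediate instances = all;
drop pluggable database PDBVOID including datafiles;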

Happy reading and till we meet again. And of course: always test this scenario in a TEST environment before you run it on your favorite production environment.

Mathijs

TDE SETUP in a rac environment:

Summary:

This document shares the steps to implement TDE (Transparent Data Encryption) for a database in a cluster. Required steps to implement it:

  • An ACFS file system needs to be in place with the proper setup (see below).
  • An adapted sqlnet.ora needs to be in a centralized location.

Details:

  1. Setup will require preparations. In our case, the first database was MYDB. Steps below will be identical for each database in scope. (Of course you need to change the database name according to the database in your scope).

Mysrv[3-4]dr is holding the MYDB database.

  • Create a disk group in normal redundancy and call it TDE_KEYS.
  • Prepare the ACFS mount point by creating a volume TDE_VOL.
  • Mount TDE_VOL as /app/oracle/admin/WALLET.
  • On the ACFS mount (/app/oracle/admin/WALLET):
  • cd /app/oracle/admin/WALLET

mkdir MYDB
ln -s MYDB MYDB1
ln -s MYDB MYDB2

This ACFS filesystem is shared between the cluster nodes.

With the mkdir you create a directory for the database you are about to encrypt; with the two links (MYDB1, MYDB2) you point both instances to that database directory, to keep everything in a central place.

  • Make sure you have a centralized sqlnet.ora and adapt it. As you can see, $ORACLE_SID (the instance name) offers flexible input this way:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /app/oracle/admin/WALLET/$ORACLE_SID/)
    )
  )

  • In the database first check:

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER                    STATUS         WALLET_TYPE  WALLET_OR FULLY_BAC  CON_ID
--------  -------------------------------  -------------  -----------  --------- ---------  ------
FILE      /app/oracle/admin/WALLET/MYDB1/  NOT_AVAILABLE  UNKNOWN      SINGLE    UNDEFINED       0

  • With the command below you will create the keystore and give the keystore a password:

SQL> administer key management create keystore '/app/oracle/admin/WALLET/MYDB/' identified by "mypwd19!";

## Once that command is given Sqlplus  will report:

keystore altered.

## TIP Be sure the correct sqlnet is read (links?)

  • run below command to open your keystore:

SQL> administer key management set keystore open identified by "mypwd19!";

## this will report in sqlplus:

keystore altered.

  2. Now is a good time to check the status of your wallet:

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER                    STATUS              WALLET_TYPE  WALLET_OR FULLY_BAC  CON_ID
--------  -------------------------------  ------------------  -----------  --------- ---------  ------
FILE      /app/oracle/admin/WALLET/MYDB1/  OPEN_NO_MASTER_KEY  PASSWORD     SINGLE    UNDEFINED       0

  3. Check your encryption keys:

SQL> select key_id, activation_time from v$encryption_keys;

## sql will report the first time:
no rows selected

## On the OS you should already see the wallet, similar to below:

SQL> host
oracle@Mysrv3dr:/app/oracle/admin/WALLET/MYDB []# ls -lisa
77 4 -rw-------. 1 oracle dba 2555 Sep 13 05:51 ewallet.p12

  4. In sqlplus it is now time to create the key:

SQL> administer key management create key identified by "mypwd19!" with backup;

  5. When you check your encryption keys again now:

SQL> select key_id, activation_time from v$encryption_keys;

KEY_ID
------------------------------------------------------------------------------
ACTIVATION_TIME
---------------------------------------------------------------------------
NcC1701D+mbkK+6v92xdM/qIxcXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

  6. In sqlplus give the command below:

SQL> administer key management use key 'NcC1701D+mbkK+6v92xdM/qIxcXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' identified by "mypwd19!" with backup;

## sqlplus will report:
keystore altered.

  7. Check your encryption keys again to get something similar to below:

SQL> select key_id, activation_time from v$encryption_keys;

## now you will see an activation_time too:

KEY_ID
------------------------------------------------------------------------------
ACTIVATION_TIME
---------------------------------------------------------------------------
NcC1701D+mbkK+6v92xdM/qIxcXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
13-SEP-19 06.01.49.995963 AM +02:00

  8. As a test, create a small tablespace:

SQL> create tablespace ENC datafile '+MYDB_DATA' size 10M encryption using 'AES256' default storage(encrypt);

## Check this with:
SQL> select tablespace_name, encrypted from dba_tablespaces order by 1;

TABLESPACE_NAME                ENC
------------------------------ ---
ENC                            YES

Now you are all set to create all required application tablespaces with the extra clause: encryption using 'AES256' default storage(encrypt).

Automatically open the wallet

Note: basically the wallet needs to be open before you can access the encrypted data in the database. This can and should be improved: opening your database should automatically open the wallet:

  1. In sqlplus check:

SQL> select * from v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER                    STATUS  WALLET_TYPE  WALLET_OR FULLY_BAC  CON_ID
--------  -------------------------------  ------  -----------  --------- ---------  ------
FILE      /app/oracle/admin/WALLET/MYDB1/  OPEN    PASSWORD     SINGLE    NO              0

  • Give this command:

SQL> administer key management create auto_login keystore from keystore '/app/oracle/admin/WALLET/MYDB/' identified by "mypwd19!";

## sqlplus will report:
keystore altered.

## On the OS you will see:

oracle@Mysrv3dr:/app/oracle/admin/WALLET/MYDB [MYDB1]# ls -lisa
total 48
74  4 drwxr-xr-x. 2 oracle dba 4096 Sep 13 06:37 .
 2  4 drwxrwxr-x. 5 oracle dba 4096 Sep 13 05:44 ..
82 12 -rw-------. 1 oracle dba 5304 Sep 13 06:37 cwallet.sso
78  4 -rw-------. 1 oracle dba 2555 Sep 13 05:55 ewallet_2019091303553484.p12
80  4 -rw-------. 1 oracle dba 3803 Sep 13 05:57 ewallet_2019091303572233.p12
81  8 -rw-------. 1 oracle dba 5067 Sep 13 06:01 ewallet_2019091304014993.p12
77 12 -rw-------. 1 oracle dba 5259 Sep 13 06:01 ewallet.p12

  • And in sqlplus too you will see a change:

SQL> SELECT * FROM v$encryption_wallet;

WRL_TYPE  WRL_PARAMETER                    STATUS  WALLET_TYPE  WALLET_OR FULLY_BAC  CON_ID
--------  -------------------------------  ------  -----------  --------- ---------  ------
FILE      /app/oracle/admin/WALLET/MYDB1/  OPEN    AUTOLOGIN    SINGLE    NO              0

Appendix 1: Create an ACFS file system

The idea was to set up ACFS via asmca. This however did not work in the first implementation, due to an error because ora.proxy_advm was offline on some nodes.

It should be:

crsctl stat res ora.proxy_advm -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.proxy_advm
               ONLINE  ONLINE       Mysrv3dr                 STABLE
               ONLINE  ONLINE       Mysrv4dr                 STABLE

If the status is offline, check if the ACFS modules are loaded: lsmod | grep acfs

oracleacfs           4840664  2

oracleoks             663240  2 oracleacfs,oracleadvm

If they are not loaded on the local node, log in as root and perform:

$GRID_HOME/bin/crsctl stop crs
$GRID_HOME/bin/acfsroot install
$GRID_HOME/bin/crsctl start crs

The modules should be loaded now. The volume can then be created in asmca (high protection, TDE_VOL1, 1GB).

The volume will show up under /dev/asm (/dev/asm/tde_vol1-301 in this case).
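As an alternative to asmca, a hedged sketch using asmcmd from the grid environment (volume name and size as used above; redundancy options can be added as needed, and the generated -30x suffix of the device should be verified with volinfo):

asmcmd volcreate -G TDE_KEYS -s 1G TDE_VOL1
asmcmd volinfo -G TDE_KEYS TDE_VOL1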

mkdir -p /app/oracle/admin/WALLET
cd /app/oracle
chown -R oracle:dba admin
/sbin/mkfs -t acfs /dev/asm/tde_vol1-302

Register it in the clusterware:

/sbin/acfsutil registry -a /dev/asm/tde_vol1-302 /app/oracle/admin/WALLET

Upgrading 11G GridInfra to 12C in Linux

Introduction:

With spring 2017 around, new initiatives are being developed. As a preparation for starting database upgrades to 12C, it is a mandatory step to upgrade the Clusterware (Grid Infrastructure) first, before doing the database part. So in this case: very happy me, that finally the time has come that one of the customers requests to upgrade a number of clusters to 12C Grid Infrastructure. In this document I will share thoughts and my plan to tackle this interesting puzzle. Since the first cluster upgrade will happen pretty soon (this week), the document might evolve with the lessons learned from that first upgrade. Happy reading in advance.

Preparations:

It could be the text of a fortune cookie, but every success just loves preparation, so in this case that will not be any different. The first thing to do was to identify a scope of clusters that had to be upgraded. Together with the customer an inventory list had to be created, and in the end 10 clusters have been defined as in scope for this action: 8 test clusters and 2 production environments. An interesting detail is that all clusters have been patched pretty recently, all holding 11.2.0.4 Grid Infrastructure, with the extra challenge that the underlying operating system comes in two flavors (Red Hat Linux Server release 5.11 (Tikanga) and 6.5 (Santiago)). Curious in advance already to see if these different versions of Red Hat will have an influence on the steps to be performed. In the details below you will find more on the detailed preparations and actions of the upgrade.

Operating System:

One of the first steps to investigate is of course whether the operating system versions at hand are supported for the upgrade. Oracle Support confirmed that, even though it would be recommended to upgrade the 5.11 Red Hat version first to Red Hat 7, it should work with the 5.11 version at hand. The 6.5 OS version was okay anyhow. The project decided however that an OS upgrade of the 5.11 boxes would delay things, so upgrading the OS will be done in a different project.

Storage:

Before even considering to run the upgrade of the Grid Infrastructure, some extra time needs to be spent investigating the storage in place in the cluster for such an upgrade. Often the Oracle software is first set up locally on each box on volume group VG0, but with the out-of-place installation these days that might become a challenge if there is not enough local storage present anymore in the box. Due to standards those root disks have become nearly untouchable. For my project this storage requirement has been defined as an absolute minimum, which means there will be a need for extra local storage per node, or even for SAN storage per node which will be presented as the required mount points to me. If such storage is not (or no longer) present locally, I have to request and receive additional storage for it.

/app/grid        50GB
/app/oracle      70GB
/var/opt/oracle  32M
/tmp             1GB
San 4 lvm dbs    4GB each (see the explanation below)

A short explanation of this:

/app/grid: where the 12C Grid Infrastructure software will be installed.
/app/oracle: for the 12C database software.
/var/opt/oracle and /tmp: required minimum space.
San 4 lvm dbs: will be set up as 4GB mount points, one for each instance on the local node, in order to hold logfiles.

When migrating to 12C coming from 11G, please be informed that you might need extra storage in your OCR / VOTING disk group as well, due to a new feature. A new repository database will have to be implemented during the upgrade. This Grid Infrastructure Management Repository (GIMR) database has become mandatory in Oracle GI 12.1.0.2. The data files associated with it will be created in the same diskgroup as OCR or voting. (Average growth per day per node is approximately 750 MB, so a 4-node cluster with the default retention of 3 days would lead to roughly 9 GB of storage requirement in the OCR or VOTING diskgroup.) A fortunate note is that the retention can be changed. Well, in my case this means that more ASM disks will need to be added to the specific disk group. At work most OCR and VOTING diskgroups are set up as a bare minimum (normal redundancy with three disks of about 4 GB each). (Extra info on this topic: https://blogs.oracle.com/UPGRADE/entry/grid_infrastructure_management_repository_gimr)
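A hedged sketch for checking and resizing the GIMR/CHM repository (assuming the 12c oclumon syntax; the size argument is in MB, so verify the exact options against the documentation of your release first):

oclumon manage -get repsize
oclumon manage -repos changerepossize 4096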

Detailed preparations and health checks.

One of the quotes in IT sometimes is that you should not touch a well running system. Well, in this case I would like to add: but if you do, come well prepared. In this case I have put the focus on the three tools below to prove that the current system is in good enough shape to run the upgrade, which can also be regarded as a health check of the environment. These preps are based on the MOS note (1579762.1) and on reading chapter 13 of the great book "Expert Oracle RAC 12c" by Syed Jaffar Hussain, Tariq Farooq, Riyaj Shamsudeen and Kai Yu (ISBN-13 (electronic): 978-1-4302-5045-6).

  • Opatch
  • RACcheck: Orachk
  • Runcluvfy

Opatch

Use opatch in order to make sure that the Oracle inventory is in good shape on all nodes in the cluster. The command issued investigates the current Grid Infrastructure home:

opatch lsinventory -oh /opt/crs/product/11204/crs -detail

-oh means for the specific ORACLE_HOME.

-detail shows all details.

RACcheck: Orachk

I have looked on Metalink and Downloaded and installed this tool on the cluster (nodes).

orachk Version
12.2.0.1.2_20161215

Following Quick start guide for this tool:

http://docs.oracle.com/cd/E68491_01/OEXUG/quick-start-guide.htm#OEXUG-GUID-CB4224DA-F389-4E9C-AB6A-C57F46A80C61

Clear information to be found In mos :

ORAchk Upgrade Readiness Assessment (Doc ID 1457357.1)

With the  tool downloaded below steps have been performed:

According to the documentation the tool needs to be copied, unpacked (and installed) into the suptools subdirectory of the cluster software installation.

scp orachk.zip oracle@mysrvr23hr:/opt/crs/product/11204/crs/suptools
scp orachk.zip oracle@mysrvr24hr:/opt/crs/product/11204/crs/suptools

Once unzipped the tool can run in two modes, a pre upgrade mode and a post upgrade mode:

./orachk -u -o pre  | tee Orachk_pre_20170124.log
./orachk -u -o post | tee Orachk_post_20170124.log

Note: the tee command will also create a log file holding all the steps – progress information during run time.
Note: /opt/oracle/.orachk should be empty before the start, otherwise you will get the message: 'Another instance of orachk is running on ...'.

Runcluvfy

Working with runcluvfy is like meeting an old friend again. Yet each time it is a bit of a struggle to find the optimal syntax and parameters for your setup.

#Wrong setup was
./runcluvfy.sh stage -pre crsinst -upgrade -n mysrvr23hr,mysrvr24hr -rolling -fixup -src_crshome /opt/crs/product/11204/crs -dest_home /app/grid/product/12102/grid -dest_version 12.1.0 -verbose
## working version
./runcluvfy.sh stage -pre crsinst -n mysrvr23hr,mysrvr24hr -verbose|tee runcluvfy_20170130_pre.lst
Or
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /opt/crs/product/11204/crs -dest_crshome /app/grid/product/12102/grid -dest_version 12.1.0.2.0 -verbose|tee runcluvfy_20170130_preUpgrade.lst

Upgrade steps:

Now it is time to plan and set up your upgrade steps, building on the confidence gained during the preparation. Multiple approaches are possible for the upgrade, but my goal is plain and simple: minimum impact on the cluster and on the databases hosted on it. So I will be aiming for this scenario: a rolling upgrade of ASM + Clusterware. A baseline for this is the URL below:

https://docs.oracle.com/database/121/CWLIN/procstop.htm#CWLIN10001

Working according to company standards requires the following specific settings for $ORACLE_BASE and $ORACLE_HOME for the GI installation, and a different $ORACLE_HOME for the database software.

oracle@mysrvrhr:/home/oracle [CRS]# echo $ORACLE_BASE
/app/oracle
oracle@mysrvrhr:/home/oracle [CRS]# echo $ORACLE_HOME
/app/grid/product/12102/grid

oracle@mysrvrhr:/home/oracle [MYDB1]# echo $ORACLE_HOME
/app/oracle/product/12102/db

Below in the bullets will go through the steps and comment where needed.

  • Due to Grid Infrastructure management repository (GIMR) database I had to add larger disks to VOTING diskgroup to have enough storage in place (the steps on how to add the new disks and drop the old ones are too detailed for this blog (after all it is a blog and not a book 🙂 so I will have to blog about that in a separate blog).
  • Check /tmp because upgrade requires at least 1GB present in /tmp. Either clean up or have  /tmp extended. (use ls -lSh  command).
  •  check ocr integrity by :
cluvfy comp ocr -n all -verbose
  • Check backup of ocr and voting disk in the cluster:
    ocrconfig -showbackup

Note: this command can be performed as the oracle user and will show info similar to the information below. An interesting aspect here was that I issued the command on the first node (but the automated backups all reside on node 11hr).

oracle@mysrvr09hr:/opt/oracle [CRS]# ocrconfig -showbackup
mysrvr11hr 2017/04/21 05:20:36 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup00.ocr
mysrvr11hr 2017/04/21 01:20:29 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup01.ocr
mysrvr11hr 2017/04/20 21:20:07 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup02.ocr
mysrvr11hr 2017/04/20 01:19:42 /opt/crs/product/11204/crs/cdata/mysrvr03cl/day.ocr
mysrvr11hr 2017/04/12 17:16:11 /opt/crs/product/11204/crs/cdata/mysrvr03cl/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
  • As the root user, run a manual backup of the OCR information. Run the ocrconfig -manualbackup command on a node where the Oracle Clusterware stack is up and running to force Oracle Clusterware to perform a backup of the OCR at any time, rather than wait for the automatic backup. Note: the -manualbackup option is especially useful when you want to obtain a binary backup on demand, such as before you make changes to the OCR. The OLR only supports manual backups. NOTE: in 11gR2, the voting files are backed up automatically as part of the OCR. Oracle recommends NOT using the dd command to back up or restore, as this can lead to loss of the voting disk.
mysrvr09hr:root:/root # cd /opt/crs/product/11204/crs/bin/
mysrvr09hr:root:/opt/crs/product/11204/crs/bin # ./ocrconfig -manualbackup
mysrvr11hr 2017/04/21 09:12:40 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup_20170421_091240.ocr

## Checking a second time will now also show a manual backup 2 b in place:
mysrvr09hr:root:/opt/crs/product/11204/crs/bin # ./ocrconfig -showbackup
mysrvr11hr 2017/04/21 05:20:36 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup00.ocr
mysrvr11hr 2017/04/21 01:20:29 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup01.ocr
mysrvr11hr 2017/04/20 21:20:07 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup02.ocr
mysrvr11hr 2017/04/20 01:19:42 /opt/crs/product/11204/crs/cdata/mysrvr03cl/day.ocr
mysrvr11hr 2017/04/12 17:16:11 /opt/crs/product/11204/crs/cdata/mysrvr03cl/week.ocr
mysrvr11hr 2017/04/21 09:12:40 /opt/crs/product/11204/crs/cdata/mysrvr03cl/backup_20170421_091240.ocr

Last line is now showing the manual backup
(since it is showing the format (backup_yyyymmdd_hhmmss.ocr)
  • Check Location of OCR and Voting Disk (need to be in a diskgroup )
##How:
cat /etc/oracle/ocr.loc
## Shows output similar to this
## (if ocr is already mirrored in other Diskgroup with normal Redundancy)
#Device/file getting replaced by device +OCR
ocrconfig_loc=+VOTE
ocrmirrorconfig_loc=+OCR

##How: 
crsctl query css votedisk

## Will show 3 voting disks in Disk group Vote due to Normal redundancy (and 3 Disk)
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 36b26f862b9a4f54bfba3096e3d50afa (/dev/mapper/asm-vote01) [VOTE]
 2. ONLINE 9d45d791c1124febbf0a093d5a185c13 (/dev/mapper/asm-vote02) [VOTE]
 3. ONLINE 1b7e510a302e4f03bfdea942d55d7067 (/dev/mapper/asm-vote03) [VOTE]
Located 3 voting disk(s).
## check in ASM:
select a.name dg_name,
       a.group_number dg_number,
       a.state dg_state,
       b.disk_number d_number,
       b.name d_name,
       b.mount_status d_mount_status,
       b.header_status d_header_status,
       b.mode_status d_mode_status,
       b.state d_state,
       b.failgroup d_failgroup,
       b.path d_path
from   v$asm_diskgroup a,
       v$asm_disk b
where  a.group_number(+) = b.group_number
order  by 2, 4;
  • Unset environment Variables:
unset ORACLE_BASE 
unset ORACLE_HOME 
unset GI_HOME 
unset ORA_CRS_HOME 
unset TNS_ADMIN
unset ORACLE_SID
unset ORA_NLS10
  • Check active crs version and software version:
## using the current CRS to document current active - and software version
/opt/crs/product/11204/crs/bin/crsctl query crs activeversion
/opt/crs/product/11204/crs/bin/crsctl query crs softwareversion
  • Performing a Standard Upgrade from an Earlier Release
## Use the following procedure to upgrade the cluster from an earlier release:
Start the installer, and select the option to upgrade an existing Oracle Clusterware and Oracle ASM installation.
On the node selection page, select all nodes.
Select installation options as prompted. 
Note: Oracle recommends that you configure root script automation,
so that the rootupgrade.sh script can be run automatically during the upgrade.
Run the root scripts either automatically or manually:

Running root scripts automatically:
TIP: If you have configured root script automation, 
then use the pause between batches to relocate services from the nodes running the previous release to the new release.
Comment Mathijs: I have not decided yet on this automation step. 
In the documentation read as prep for the upgrade you see the option to create multiple batches:
like batch 1 starting node, 
batch 2 all but last node,
batch 3 last node. 
I will use both the automated way for one cluster and then use the below manual (old school method mentioned below) on another cluster.

Running root scripts manually:
If you have not configured root script automation, then when prompted, 
run the rootupgrade.sh script on each node in the cluster that you want to upgrade.

If you run root scripts manually, then run the script on the local node first. 
The script shuts down the earlier release installation, replaces it with the new Oracle Clusterware release, and starts the new Oracle Clusterware installation.
After the script completes successfully, you can run the script in parallel on all nodes except for one, which you select as the last node. 
When the script is run successfully on all the nodes except the last node, run the script on the last node.
After running the rootupgrade.sh script on the last node in the cluster, if you are upgrading from a release earlier than Oracle Grid Infrastructure 11g Release 2 (11.2.0.2), 
and left the check box labeled ASMCA checked, which is the default, then Oracle Automatic Storage Management Configuration Assistant ASMCA runs automatically, 
and the Oracle Grid Infrastructure upgrade is complete. 
If you unchecked the box during the interview stage of the upgrade, then ASMCA is not run automatically.

If an earlier release of Oracle Automatic Storage Management (Oracle ASM) is installed, then the installer starts ASMCA to upgrade Oracle ASM to 12c Release 1 (12.1). 
You can choose to upgrade Oracle ASM at this time, or upgrade it later.
Oracle recommends that you upgrade Oracle ASM at the same time that you upgrade Oracle Clusterware. 
Until Oracle ASM is upgraded, Oracle Databases that use Oracle ASM cannot be created and the Oracle ASM management tools in the Oracle Grid Infrastructure 12c Release 1 (12.1) home (for example, srvctl) do not work.

Note: 
Because the Oracle Grid Infrastructure home is in a different location than the former Oracle Clusterware and Oracle ASM homes, 
update any scripts or applications that use utilities, libraries, or other files that reside in the Oracle Clusterware and Oracle ASM homes.
  • Check active crs version and software version:
/opt/crs/product/11204/crs/bin/crsctl query crs activeversion
/opt/crs/product/11204/crs/bin/crsctl query crs softwareversion
  • Post upgrade checks:
 ps -ef|grep d.bin should show daemons started from 12C.

Thoughts on Rollback:

Of course each migration will only be as good as its preparation. Still, your plan should at least hold the steps for a rollback, in case you do not make it to a successfully completed task. Below you will find those steps in general.

On all remote nodes, use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade to stop the 12c Release 1 (12.1).
On the local node use the command syntax Grid_home/crs/install/rootcrs.sh -downgrade -lastnode
On any of the cluster member nodes where the rootupgrade.sh script has run successfully:

cd /u01/app/12.1.0/grid/oui/bin
./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList
-silent CRS=false ORACLE_HOME=/u01/app/12.1.0/grid

On any of the cluster member nodes where the rootupgrade script has run successfully:
In Old ORACLE_HOME (the earlier Oracle Clusterware installation).$ cd /opt/crs/product/11204/crs/oui/bin/
$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/u01/app/crs

Start the Oracle Clusterware stack manually:
On each node, start Oracle Clusterware from the earlier release Oracle Clusterware home:
/opt/crs/product/11204/crs/bin/crsctl start crs

As always thank you for taking an interest in my blog. Happy reading and till the next time.

Mathijs

Sql report in html & sending mail in linux

Introduction

Being part of the Oracle community on the web is always a great experience to me. Great, because it is inspiring: a lot of the stories, blogs and tweets shared by the colleagues in the field are top notch. This week I came across a great tweet from one of my favorite bloggers, Uwe Hesse (https://uhesse.com/2011/06/30/sqlplus-output-in-nice-html-format/), in which he shared how to create sql output in html format. If you do not follow his blogs yet, please do, because they are great. So of course I had to test it myself, since it is oh so familiar that getting the data is one thing, but presenting it in a nice way is a different story (with all the set lines 999, col x format y, etc.).

Details:

The scenario for this is great:

  1. Run your report (sql) as you are used to.
  2. Run this html part after it.
  3. Either open Firefox with the html file to show and share the info (or, as in my aim, find a way to send it from the Linux box as an attachment).

In the sql script used in his post, Uwe showed that you typically run this kind of work from a client (a PC or a (L)Unix box), because once the job is done the results are loaded into Firefox.

set termout off

set markup HTML ON HEAD "" -
BODY "" -
TABLE "border='1' align='center' summary='Script output'" -
SPOOL ON ENTMAP ON PREFORMAT OFF

spool myoutput.html

l
/

spool off
set markup html off spool off
host firefox myoutput.html
set termout on;
quit;

On most of the Linux boxes where I work, Firefox is not present. So instead of host firefox I am aiming at sending the output as a mail attachment. So far I did a lot of reading and tried various suggestions: I should use mailx -a, but that is not a valid parameter in this Linux Red Hat 5.x release. Another option offered on the web was using uuencode, which I also cannot use because it is not in place. And last but not least, another great suggestion, to use mutt, failed me too (also not in place). After the weekend I will talk to the Linux colleagues at work to see if there is another option. So that matter is still to be continued, and of course if there is a solution it will be shared.

But when looking at this as an example, the work will be worth the effort! Thank you Uwe for this great tweet and blog post (https://uhesse.com/2011/06/30/sqlplus-output-in-nice-html-format/).

Archivelogs per hour – day.

archives-script

report-in-cool-htm-format

To be continued…

As always,

Happy reading,

Mathijs.

Importing Data via Network

Introduction:

For two projects there has been an assignment to upgrade to Oracle 11.2.0.4. One environment was already on 11.2.0.3 with the same cluster stack below it, and one environment will come from 10.2.0.4 on Solaris. For both projects an 11.2.0.4 cluster stack plus database version has been set up on one of the newer shared Linux clusters. Both environments will be migrated using the export / import method (since they are relatively small, approximately 400-500 GB), and of course since one of them is being migrated cross-platform (from Solaris to Linux) you do not have that much choice anyway.

In other projects I had good experience with nfs filesystems between source and target servers, and at first I was aiming to use them again during these migrations. However, since not every project is able to make it to the timelines (we would have to wait at least 2 more weeks to get the nfs mounts), other creativity is required. In this specific case we will work with datapump via the network.

When looking into this I came across two scenarios. The first scenario is covered by a fellow blogger and is interesting since it offers the option to export directly into an ASM disk group. In that scenario an extra step would be needed, using impdp with a directory pointing to the same asmdiskgroup/subdirectory. The second scenario, which is explained in more detail here, goes even one step further. The scenario is simple: use impdp via a dblink directly into the database (no need to park a dumpfile somewhere on a filesystem or in a diskgroup first and then run the import). Nope, just another impdp and you are there!

1.     Setting up  tnsnames entry on the target ( receiving ) side.

 

In order to make this scenario work you will have to make sure that there is no firewall blocking the source database you will pull the data from, before you create the tnsnames.ora entry on the target side.

In my case:

 

I always try a: telnet <ip> <port>

telnet  666.233.103.203  33012

 

If you see something like 'trying ...' and nothing else happens, well, this was not your lucky day and a firewall is blocking you from making this a happy scenario. If you see something like the below, lucky you:

Escape character is '^]'.

Recommendation: when you get stuck with 'trying ...', make sure that the firewall is opened. In my case the host was a VIP address for a RAC database, and port 33012 had been assigned to the local listener of that database.

 

## Let's set up the tnsnames entry. NOTE: the firewall needs to be opened before you proceed with tnsping etc.:

MBMYDB =
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=666.233.103.203)(PORT=33012))
)
(CONNECT_DATA=
(SERVICE_NAME=MYDB.test.nl)
)
)
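A quick sanity check of the new entry from the target side (assuming the MBMYDB alias above and an opened firewall):

tnsping MBMYDB
sqlplus system@MBMYDB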

One interesting part is that the service_name I wanted to use in the tnsnames entry was not present as a service in the source database, so I had to extend the existing services (the present service was not the default one, since it was without a domain).

 

## On the source side, in the database I want to take the data from, I added the service:

alter system set service_names = 'MYDB','MYDB.test.nl' scope = both;

 

SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      MYDB, MYDB.test.nl

So now we have two services in place which we can use in the tnsnames.ora.
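To double-check that the newly added service is actually started (and thus registered with the listener), a small hedged query on the source side:

select name from v$active_services order by 1;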

 

2.     Time to set up a public dblink

 

## Reading articles by fellow bloggers, they recommended creating a PUBLIC db link (this seems mandatory). Since in my case I would do the import as system, a normal db link would be okay too. But for the scenario's sake a public database link is fine.

drop public DATABASE LINK old_MYDB;

## worked with this one:

CREATE public DATABASE LINK old_MYDB CONNECT TO system IDENTIFIED BY xxxxxxx USING 'mbMYDB';

3.     Seeing is believing , test the db link.

 

## performed select

select 'x' from dual@old_MYDB;

4.     Next stop, creating a directory for the logfile of the impdp.

 

Yes, that is correct: only a directory for the log file, not for the dump itself :) That is why I liked this scenario so much.

 

## created directory for the logfile

create directory acinu_imp as '/opt/oracle/MYDB/admin/create';
grant read, write on directory acinu_imp to system;

5.     Time to perform the import.

 

Over the years I have used expdp and impdp a lot, but most of the time as an almost 1:1 clone of exp/imp. Since Google is your friend when looking for scenarios, it was great to explore the powerful exclude= parameter. As you will see, I am creating an import of the full database but excluding the schemas I don't care about.

 

Since I was, hmm, energy efficient, I wanted to type the full statement on the Linux command line, but was punished for having double quotes in my command. Had I used a parfile, things would have been easier :) (see the parfile sketch after the command below). But since I wanted to stick to the scenario, I found that whenever you use double quotes at OS level, a backslash (\) is mandatory, as shown below:

 

## performed import with success with the command below

impdp system full=yes "EXCLUDE=SCHEMA:\"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')\"" network_link=old_MYDB directory=acinu_imp logfile=AcinupImport.log parallel=2 job_name=MYDB_DMP_FULL
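As mentioned above, a parfile would have avoided all the shell escaping. A hedged sketch of the equivalent parfile (the file name impdp_full.par is hypothetical; the values are the same as in the command above):

## impdp_full.par
full=yes
exclude=SCHEMA:"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')"
network_link=old_MYDB
directory=acinu_imp
logfile=AcinupImport.log
parallel=2
job_name=MYDB_DMP_FULL

## and then simply:
impdp system parfile=impdp_full.par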

 

 

## Note

At first, all my scenarios gave the error below:

 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39200: Link name "OLD_ACINUP" is invalid.
ORA-02019: connection description for remote database not found

 

This made me check the services in the database and the entry in the tnsnames, and test it all again. After that, as the A-Team's Hannibal would say: I love it when a plan comes together. It worked!

 

Happy reading ,

 

And as always: don't believe it just because it is printed.

 

Mathijs Bruggink

Transportable Tablespaces as Migration to 11.2.0.4 with RMAN

Introduction:

For one of the projects the question came in to investigate and set up an 11.2.0.4 Real Application Clusters database, with the extra challenge that the migration had to be done cross-platform, from Oracle 10.2.0.3 on the Solaris platform to 11.2.0.4.0 on Linux. From the application provider came the suggestion to investigate a backup/restore scenario with an upgrade on the new (Linux) server. Due to the fact that the source environment was 10.2.0.3 on Solaris, and due to the fact that we were heading towards a RAC cluster environment on Linux, that suggestion was the first to be sent to the dustbin.

Normal export / import was the second scenario that was explored. Of course this is a valid scenario, but given the fact that the database was more than 1.x TB it is not exactly the most favorite way to bring the data across. But with scripting, using multiple parfiles, and/or moving the partitioned data across in waves, it would be a fair plan B.

From reading, though, I had put my mind to the use of transportable tablespaces as the way forward with this challenging question.

Requirements:

As preparation for the job I requested to have an NFS filesystem mounted between the source server (MySunServer), holding the 10G database, and the target server (MyLinuxcluster). This NFS filesystem would hold the datapumps to be created, and the scripts and parfiles / config files, as suggested by MOS note 1389592.1. The NFS filesystem was read-writable from both servers. The perl scripts that come with the note support the transport of the tablespaces, but also help with the conversion from big endian to little endian, and as a bonus, in my case, will do the copy into ASM.

Due to the layout of the database in the source environment  Rman was chosen as the best way forward with the scenario.

As a preparation, an 11.2.0.4 RAC database was set up on the target cluster. This database only holds the normal tablespaces and a small temporary tablespace for the users. (In a TTS solution, the names of the data tablespaces that come across to the new environment may not already exist in the new environment.) All data / application users have been pre-created in the new environment with a new default user tablespace.
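A small hedged check that is useful at this point: confirm the source platform id and endian format that go into the xtt.properties file below (platformid=2 in that file corresponds to Solaris[tm] OE (64-bit), which is big endian):

-- run on the source database
select d.platform_id, d.platform_name, tp.endian_format
from   v$database d
join   v$transportable_platform tp on tp.platform_id = d.platform_id;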

Details & Comments

Configuration file for the Perl scripts:

This is a file that is part of the zip file from the MOS note. It needs to be set up to match your specific needs. I will only show the settings I have used, with their comments:

xtt.properties:
## Reduce Transportable Tablespace Downtime using Incremental Backups
## (Doc ID 1389592.1)

## Properties file for xttdriver.pl

## See documentation below and My Oracle Support Note 1389592.1 for details.
## Tablespaces to transport

## Specify tablespace names in CAPITAL letters.

tablespaces=MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,MYDB_INDEX,MYTS,USERS
##tablespaces=MYCOMPTTS
## Source database platform ID

## platformid

## Source database platform id, obtained from V$DATABASE.PLATFORM_ID

platformid=2

## srclink

## Database link in the destination database that refers to the source

## database. Datafiles will be transferred over this database link using
## dbms_file_transfer.
srclink=TTSLINK

## Location where datafile copies are created during the "-p prepare" step.

## This location must have sufficient free space to hold copies of all
## datafiles being transported.

dfcopydir=/mycomp_mig_db_2_linux/mybox/rman

## backupformat

## Location where incremental backups are created.

backupformat=/mycomp_mig_db_2_linux/mybox/rman

## Destination system file locations

## stageondest

## Location where datafile copies are placed by the user when they are

## transferred manually from the souce system. This location must have
## sufficient free space to hold copies of all datafiles being transported.

stageondest=/mycomp_mig_db_2_linux/mybox/rman

# storageondest

## This parameter is used only when Prepare phase method is RMAN backup.

## Location where the converted datafile copies will be written during the

## "-c conversion of datafiles" step. This is the final location of the
## datafiles where they will be used by the destination database.
storageondest=+MYDBP_DATA01/mydbp/datafile
## backupondest

## Location where converted incremental backups on the destination system

## will be written during the "-r roll forward datafiles" step.

## NOTE: If this is set to an ASM location then define properties

##      asm_home and asm_sid below. If this is set to a file system
##       location, then comment out asm_home and asm_sid below
backupondest=+MYDBP_DATA01/mydbp/datafile

## asm_home, asm_sid

## Grid home and SID for the ASM instance that runs on the destination

asm_home=/opt/crs/product/11204/crs

asm_sid=+ASM1
## Parallel parameters

parallel=8

## rollparallel

## Defines the level of parallelism for the -r roll forward operation.

## If undefined, default value is 0 (serial roll forward).

rollparallel=2
## getfileparallel

## Defines the level of parallelism for the -G operation

getfileparallel=4

## desttmpdir

## This should be defined to same directory as TMPDIR for getting the

## temporary files. The incremental backups will be copied to directory pointed
## by stageondest parameter.
desttmpdir=/mycomp_mig_db_2_linux/MYDBP/scripts
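Side note: the platformid value above can be double-checked on the source database, and the endian format is what drives the conversion need. A quick query for this (standard views, available in 10g and later):

SELECT d.platform_id, d.platform_name, tp.endian_format
FROM   v$database d, v$transportable_platform tp
WHERE  tp.platform_id = d.platform_id;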

 

Below in a Table format you will see the steps performed with comments.

Each step is qualified as:

  • I for Initial steps – activities
  • P for Preparation
  • R for Roll Forward activities
  • T for Transport activities

Server column shows where the action needs to be done.

Step Server What needs to be done
I1.3 Source Identify the tablespace(s) in the source database that will be transported (the application owner needs to assist with schema owner information; a self-containment check is sketched right after this step table):
tablespaces=MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,

MYDB_INDEX,MYTS,USERS

I1.5 Source + Target In my case the project offered an NFS filesystem which I could use: /mycomp_mig_db_2_linux
I1.6 Source Together with the MOS note comes this zip file: unzip rman-xttconvert.zip.
I1.7 Source Tailor the extracted xtt.properties file on the source system to match your environment.
I1.8 Target As the oracle software owner, copy all xttconvert scripts and the modified xtt.properties file to the destination system. This was not needed in our case since we used the NAS filesystem.
P1.9 Source + Target On both environments set up this:

export TMPDIR=/mycomp_mig_db_2_linux/MYDBP/scripts

P2B.1 Source perl xttdriver.pl -p
Note: do not use $ORACLE_HOME/perl/bin/perl for this; in my case that did not work.
P2B.2 Source Copy files to destination. N/A since we use NFS
P2B3 Target On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the rmanconvert.cmd file created in step 2B.1 from the source system and run the convert datafiles step as follows:
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/rmanconvert.cmd /home/oracle/xtt N/A since we use NFS.
perl xttdriver.pl -c
R3.1 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step as follows:
perl xttdriver.pl -i
R3.3 Target [oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt
 Since we are using Nas shared filesystem no need to copy with scp  between source and target.
perl xttdriver.pl -r
R3.4 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the determine new FROM_SCN step as follows:
perl xttdriver.pl -s
R3.5 Source 1.     If you need to bring the files at the destination database closer in sync with the production system, then repeat the Roll Forward phase, starting with step 3.1.
2.     If the files at the destination database are as close as desired to the source database, then proceed to the Transport phase.
T4.0 Source As described in the note Alter Tablespace Read Only Hanging When There Are Active TX In Any Tablespace (Doc ID 554832.1), a restart of the database is required to make sure there are no active transactions (alternatively do this during off hours). During a first test with one dedicated tablespace holding only one object this step took more than 7 hrs: Oracle waits for ALL active transactions, not only the ones that would impact the objects in the tablespace I worked with. A query to spot such transactions up front is sketched after this step table.
T4.1 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, make the tablespaces being transported READ ONLY.
alter tablespace MYDB_DATA read only;
alter tablespace MYDB_EUC_DATA read only;
alter tablespace MYDB_EUC_INDEX read only;
alter tablespace MYDB_INDEX read only;
alter tablespace MYTS read only;
alter tablespace USERS read only;
T4.2 Source Repeat steps 3.1 through 3.3 one last time to create, transfer, convert, and apply the final incremental backup to the destination datafiles.
perl xttdriver.pl -i
T4.2 Target [oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt
perl xttdriver.pl -r
T4.3 Target On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the generate Data Pump TTS command step as follows:
perl xttdriver.pl -e
The generate Data Pump TTS command step creates a sample Data Pump network_link transportable import command in the file xttplugin.txt. It will hold the list of all the tablespaces you have configured and all their transport_datafiles in detail.
Example of that generated file (cat xttplugin.txt):
impdp directory=MYDB_XTT_DIR logfile=tts_imp.log \
network_link=TTSLINK.PROD.NL transport_full_check=no \
transport_tablespaces=MYCOMPTTS,A,B,C \
transport_datafiles='+MYDBP_DATA01/mycomptts_152.xtf'
Note: in our example, once edited, we chmodded xttplugin.txt to 744 and ran it as a script.
T4.3 Source After the object metadata being transported has been extracted from the source database, the tablespaces in the source database may be made READ WRITE again, if desired.
T4.4 Target At this step, the transported data is READ ONLY in the destination database.  Perform application specific validation to verify the transported data.
Also, run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE as follows:
In rman:
validate tablespace MYDB_DATA, MYDB_EUC_DATA, MYDB_EUC_INDEX, MYDB_INDEX, MYTS, USERS check logical;
T4.5 Target alter tablespace MYDB_DATA read write;
alter tablespace MYDB_EUC_DATA read write;
alter tablespace MYDB_EUC_INDEX read write;
alter tablespace MYDB_INDEX read write;
alter tablespace MYTS read write;
alter tablespace USERS read write;
T5 Source + Target Cleanup of NFS filesystem.
Keep the source DB in restricted mode as a fallback for a couple of days after go-live, then back it up to tape and decommission it.
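Two helper checks that go with the step table above. First, for step I1.3, a sketch of verifying that the chosen tablespace set is self-contained (DBMS_TTS is the documented package for this; adjust the list to your own tablespaces):

EXEC DBMS_TTS.TRANSPORT_SET_CHECK('MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,MYDB_INDEX,MYTS,USERS', TRUE);
SELECT * FROM transport_set_violations;

Second, for step T4.0, a sketch of spotting the active transactions that keep ALTER TABLESPACE ... READ ONLY waiting (standard v$ views):

SELECT s.sid, s.serial#, s.username, t.start_time
FROM   v$transaction t, v$session s
WHERE  t.ses_addr = s.saddr;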

The dreaded ORA-12154: TNS:could not resolve the connect identifier specified.

Introduction

Actually I wanted to start this one with "... and a funny thing happened on the way to the circus", but let's save that one for another occasion, okay? Last week I got a mail from one of the users who tried to connect via ezconnect to an existing database from a new client that I had set up for him. As always it is a challenge to see what is going on, and of course it takes time to find out the real deal.
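For reference only (this is not taken from the incident log): the ezconnect syntax the client uses boils down to user/password@host:port/service_name, for example with hypothetical credentials:

sqlplus SCOTT/tiger@MYSRVR1:1521/MYDB1.test.nl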

Details:

## This case occurred in an 11.2 environment on one of the test boxes. They were trying to connect via ezconnect to one of the existing databases, which failed:

 [10:30:23] [ INFO] SQL Runner: Starting runing script on database SCOTT/SCOTT@MYSRVR1:1521/MYDB1
[10:30:26] [ INFO] INPUT> ERROR:
[10:30:26] [ INFO] INPUT> ORA-12154: TNS:could not resolve the connect identifier specified
[10:30:26] [ INFO] INPUT>
[10:30:26] [ INFO] INPUT>
[10:30:26] [ INFO] INPUT> SP2-0306: Invalid option.
[10:30:26] [ INFO] INPUT> Usage: CONN[ECT] [{logon|/|proxy} [AS {SYSDBA|SYSOPER|SYSASM}] [edition=value]]
[10:30:26] [ INFO] INPUT> where  ::= [/][@<connect_identifier>]
[10:30:26] [ INFO] INPUT>        ::= [][/][@<connect_identifier>]
[10:30:26] [ INFO] INPUT> ERROR:
[10:30:26] [ INFO] INPUT> ORA-12162: TNS:net service name is incorrectly specified
[10:30:26] [ INFO] INPUT>
[10:30:26] [ INFO] INPUT>

## However a tnsping was working correctly:

tnsping MYDB1
TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 10-FEB-2014 10:31:14
 Copyright (c) 1997, 2011, Oracle.  All rights reserved.
 Used parameter files:
/opt/SP/STORAGE/TNS_ADMIN/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION=(ENABLE=BROKEN)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=MYSRVR1.vfnl.dc-ratingen.de)(PORT = 1521)))(CONNECT_DATA=(SERVICE_NAME=MYDB1.test.nl)))
OK (130 msec)

## Since this was an 11.2 test environment I was able to play a bit with it. What puzzled me most was that a different database showed both the short service name and the fully qualified service name (with the domain name in it) in the listener. And when I added the short service name MYDB1 next to the qualified service name MYDB1.test.nl, the listener would still not pick up both services, even though pmon is supposed to register the services with the listener automatically at frequent intervals (every 60 seconds).
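Side note: instead of waiting for the pmon registration interval, registration can be forced from within the database; this is a standard command and is shown here only as a convenience:

SQL> alter system register;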

I added some entries to my tnsnames.ora and started testing. And indeed the fully qualified service name worked and the short service name refused to! I also bounced the database (again, this was a test environment so not much harm done) with no better effect. Even a restart of the listener did not bring the solution.

So it was clear that  I needed to see what was different between the two environments  since I had one other database automatically registered in the listener  with both the services I was looking for.

Bottom line after the investigation: it works now, after a restart of the database and after setting up some things differently. 🙂 Let's check.

Oh, and I performed three actions in the database to make it work. And yes, in the database and not in the listener, because an 11.2 environment lets the database register its services automatically with the listener (well, as long as one plays by the rules):

## First I added the short service name to the database (this is not a RAC environment so I did not set up a service in the clusterware using srvctl; and ok, I admit I tried, and the clusterware replied that you cannot add a service with the same name as the database).
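A sketch of how the short service name was added, assuming the values shown in the output below; service_names can be changed dynamically, so no restart is required for this part:

SQL> alter system set service_names='MYDB1.test.nl,MYDB1' scope=both sid='*';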

This is how my services look now:

SQL> show parameter  service

NAME                                                      TYPE VALUE
------------------------------------ ----------- ------------------------------
service_names                                      string               MYDB1.test.nl, MYDB1

## Made sure that if local_listener is set, it points to the correct listener(s).

## Just wanted to make sure that the correct listener would be used, so in this test I added both my listeners:

SQL>show parameter  listener
NAME                                                      TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks                                string
local_listener                                         string               (DESCRIPTION=(ADDRESS_LIST=(AD
                                                                                          DRESS=(PROTOCOL=TCP)(HOST=195.
                                                                                          233.124.139)(PORT=1522))(ADDRE
                                                                                          SS=(PROTOCOL=TCP)(HOST=195.233
                                                                                          .124.139)(PORT=1521))))
remote_listener                                    string

## And yes the  Domain was set:

SQL> show parameter domain
 NAME                                                      TYPE VALUE
------------------------------------ ----------- ------------------------------
db_domain                                             string               test.nl

## The environment that was working did not have the db_domain parameter set, so in this test I removed it too. It now shows:

SQL> show parameter domain
 NAME                                                      TYPE VALUE
------------------------------------ ----------- ------------------------------
db_domain                                             string
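For completeness, a sketch of how db_domain can be cleared; the parameter is not modifiable at runtime, so this assumes an spfile and a bounce of the instance afterwards:

SQL> alter system set db_domain='' scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup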

Now the listener is showing both services:

 Service "MYDB1" has 1 instance(s).
  Instance "MYDB1", status READY, has 6 handler(s) for this service...
Service "MYDB1.test.nl" has 1 instance(s).
  Instance "MYDB1", status READY, has 6 handler(s) for this service...

I tested it myself with two entries in tnsnames.ora (a short name and a long one) and both work:

MYDB1MBK1 =
   (DESCRIPTION =
     (ADDRESS_LIST =
       (ADDRESS = (PROTOCOL = TCP)(HOST = MYSRVR1.vfnl.dc-ratingen.de)(PORT = 1522))
     )
     (CONNECT_DATA =
       (SERVICE_NAME = MYDB1.test.nl)
     )
   )

 MYDB1MBK2 =
   (DESCRIPTION =
     (ADDRESS_LIST =
       (ADDRESS = (PROTOCOL = TCP)(HOST = MYSRVR1.vfnl.dc-ratingen.de)(PORT = 1522))
     )
     (CONNECT_DATA =
       (SERVICE_NAME = MYDB1)
     )
   )

This workaround worked, so I informed the customer to test it. I will still have to test on another environment what the effect is if I keep db_domain (and of course db_unique_name) set and leave service_names blank.

Happy reading ,

Mathijs

Move Oracle Rac Database to a new diskgroup in Asm (A real life scenario)

Introduction:

Below you will find detailed steps to move a RAC database (Oracle 11.2.0.3) to a different diskgroup, using a full maintenance scenario where the database is not available to the application during the maintenance window.

Happy reading,

Mathijs

Detailed scenario of a diskgroup move of a RAC database

## create fresh pfile  to be used  as a basis for a new spfile in the new diskgroup
create pfile='/opt/oracle/MYDB1/admin/pfile/initMYDB1.ora.20140111_1800' from spfile;
## create new spfile in the new diskgroup
create spfile='+MYDB_DATA01' from pfile='/opt/oracle/MYDB1/admin/pfile/initMYDB1.ora.20140111_1800';

## Shutdown the database via the cluster:
## all actions after this shutdown are performed using sqlplus until further notice!
srvctl stop database  -d MYDB

## In a second screen make sure your environment is pointing to the ASM instance, start asmcmd and go to the new disk group to find the new spfile name

asmcmd -p
cd MYDB_DATA01/MYDB/PARAMETERFILE

## it shows:
ls -ltr
spfile.256.836591237

## Copy the file spfile.256.836591237 within the new disk group to a more human readable file name (an alias):

cp +MYDB_DATA01/MYDB/PARAMETERFILE/spfile.256.836591237  +MYDB_DATA01/MYDB/spfileMYDB.ora

### After doing that on Linux level alter the location of the spfile in the init.ora in the $ORACLE_HOME/dbs  ON ALL NODES  ( mysrvr25r / mysrvr26r)

cd /opt/oracle/product/11203_ee_64/db/dbs

ls -ltr  initMYDB*

### current content of initMYDB1.ora (check and adapt on the second node as well):
spfile='+DATA1/MYDB/spfileMYDB.ora'
## After changing the disk group my new init.ora looks like this:

SPFILE='+MYDB_DATA01/MYDB/spfileMYDB.ora'

## Start the database to verify that the new spfile location is being used:

SQL> startup

## Working with the control files and Perform Backup:
## shows we have three controlfiles in place
SQL> show parameter control_files

NAME                                                      TYPE VALUE
—————————- ———– ——————————
+DATA1/MYDB/control01.ctl,+DATA1/MYDB/control02.ctl,+DATA1/MYDB/control03.ctl

## Set new location of controlfile in SPFILE:
alter system set control_files='+MYDB_DATA01/MYDB/control01.ctl', '+MYDB_FRA1/MYDB/control02.ctl', '+MYDB_DATA01/MYDB/control03.ctl' scope=spfile sid='*';
alter system set cluster_database=false scope=spfile;
## Shutdown your database
SQL> shutdown;

## Open Asmcmd again with the environment pointing to +ASM instance:
## Copy the current control file from +DATA1 to the correct diskgroups; this copy keeps them in sync:
ASMCMD
cp +DATA1/MYDB/control01.ctl  +MYDB_DATA01/MYDB/control01.ctl
cp +DATA1/MYDB/control01.ctl  +MYDB_FRA1/MYDB/control02.ctl
cp +DATA1/MYDB/control01.ctl  +MYDB_DATA01/MYDB/control03.ctl
## check it:
ls -l +MYDB_DATA01/MYDB/control01.ctl
ls -l +MYDB_FRA1/MYDB/control02.ctl
ls -l +MYDB_DATA01/MYDB/control03.ctl

##  Start your database with startup nomount

SQL> startup nomount;

## Start an RMAN session: open "rman target /", restore from the old controlfile and then mount + open the database:
## Not 100% sure if this step was needed since we copied the file in asmcmd already, but it won't hurt and takes little time.
rman target /
restore controlfile to '+MYDB_DATA01/MYDB/control01.ctl' from '+DATA1/MYDB/control01.ctl';
restore controlfile to '+MYDB_FRA1/MYDB/control02.ctl'   from '+DATA1/MYDB/control01.ctl';
restore controlfile to '+MYDB_DATA01/MYDB/control03.ctl' from '+DATA1/MYDB/control01.ctl';

##This will show
rman target /

Recovery Manager: Release 11.2.0.3.0 – Production on Sat Jan 11 18:53:29 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: MYDB (not mounted)

RMAN> restore controlfile to '+MYDB_DATA01/MYDB/control01.ctl' from '+DATA1/MYDB/control01.ctl';

Starting restore at 11.01.2014 18:54:00
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1010 device type=DISK

channel ORA_DISK_1: copied control file copy
Finished restore at 11.01.2014 18:54:02

RMAN> restore controlfile to '+MYDB_FRA1/MYDB/control02.ctl'   from '+DATA1/MYDB/control01.ctl';

Starting restore at 11.01.2014 18:54:33
using channel ORA_DISK_1

channel ORA_DISK_1: copied control file copy
Finished restore at 11.01.2014 18:54:34

RMAN> restore controlfile to '+MYDB_DATA01/MYDB/control03.ctl' from '+DATA1/MYDB/control01.ctl';

Starting restore at 11.01.2014 18:54:51
using channel ORA_DISK_1

channel ORA_DISK_1: copied control file copy
Finished restore at 11.01.2014 18:54:52

## Mount the database via Rman

RMAN> sql 'alter database mount';

##This will show (in the alert log):

List of instances:
1 (myinst: 1)
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Post SMON to start 1st pass IR
Submitted all GCS remote-cache requests
Post SMON to start 1st pass IR
Fix write in gcs resources
Reconfiguration complete
Sat Jan 11 18:39:32 2014
LCK0 started with pid=31, OS id=4011
Starting background process RSMN
Sat Jan 11 18:39:33 2014
RSMN started with pid=32, OS id=4015
ORACLE_BASE from environment = /opt/oracle
Sat Jan 11 18:39:33 2014
ALTER DATABASE   MOUNT
This instance was first to mount
NOTE: Loaded library: System
SUCCESS: diskgroup DATA1 was mounted
NOTE: dependency between database MYDB and diskgroup resource ora.DATA1.dg is established
Successful mount of redo thread 1, with mount id 2306013605
Lost write protection disabled
Completed: ALTER DATABASE   MOUNT

## Now it is time to make a backup of the database into the new Disk group (+MYDB_DATA01). If you are in a rac environment make sure all other instances are down.

## Issue the following command in RMAN; it will create a one-to-one copy of the database in the new diskgroup:

RMAN>backup as copy database format '+MYDB_DATA01';

##This will show:

RMAN> backup as copy database format '+MYDB_DATA01';

Starting backup at 11.01.2014 18:58:11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1766 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=+DATA1/MYDB/datafile/gemprod.275.786977627
output file name=+MYDB_DATA01/MYDB/datafile/gemprod.260.836593093 tag=TAG20140111T185812 RECID=15 STAMP=836593134
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=+DATA1/MYDB/datafile/sysaux.262.786977877
output file name=+MYDB_DATA01/MYDB/datafile/sysaux.261.836593139 tag=TAG20140111T185812 RECID=16 STAMP=836593155
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=+DATA1/MYDB/datafile/system.269.786977709
output file name=+MYDB_DATA01/MYDB/datafile/system.262.836593163 tag=TAG20140111T185812 RECID=17 STAMP=836593175
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATA1/MYDB/datafile/undotbs1.266.786977783
output file name=+MYDB_DATA01/MYDB/datafile/undotbs1.263.836593179 tag=TAG20140111T185812 RECID=18 STAMP=836593187
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=+DATA1/MYDB/datafile/undotbs2.267.786977883
output file name=+MYDB_DATA01/MYDB/datafile/undotbs2.264.836593195 tag=TAG20140111T185812 RECID=19 STAMP=836593202
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=+DATA1/MYDB/datafile/users.259.786977783
output file name=+MYDB_DATA01/MYDB/datafile/users.265.836593209 tag=TAG20140111T185812 RECID=20 STAMP=836593211
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=+DATA1/MYDB/datafile/tools.263.786977709
output file name=+MYDB_DATA01/MYDB/datafile/tools.266.836593213 tag=TAG20140111T185812 RECID=21 STAMP=836593215
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+MYDB_DATA01/MYDB/controlfile/backup.267.836593215 tag=TAG20140111T185812 RECID=22 STAMP=836593217
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 11.01.2014 19:00:18
channel ORA_DISK_1: finished piece 1 at 11.01.2014 19:00:19
piece handle=+MYDB_DATA01/MYDB/backupset/2014_01_11/nnsnf0_tag20140111t185812_0.268.836593219 tag=TAG20140111T185812 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 11.01.2014 19:00:19

## Once that has finished, issue the following command in RMAN (note: this switch-to-copy command performs the 'set newname' for you, which is great):

RMAN>switch database to copy;

##This will show you:

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+MYDB_DATA01/MYDB/datafile/system.262.836593163"
datafile 2 switched to datafile copy "+MYDB_DATA01/MYDB/datafile/sysaux.261.836593139"
datafile 3 switched to datafile copy "+MYDB_DATA01/MYDB/datafile/undotbs1.263.836593179"
datafile 4 switched to datafile copy "+MYDB_DATA01/MYDB/datafile/users.265.836593209"
datafile 5 switched to datafile copy "+MYDB_DATA01/MYDB/datafile/tools.266.836593213"
datafile 6 switched to datafile copy "+MYDB_DATA01/MYDB/datafile/gemprod.260.836593093"
datafile 7 switched to datafile copy "+MYDB_DATA01/MYDB/datafile/undotbs2.264.836593195"

##When that is finished issue the following command in RMAN:
RMAN> sql 'alter database open';

## Set cluster_database back to true, then start / open the second instance as well via sqlplus as a check on the second box.
alter system set cluster_database=true scope=spfile;
startup

##Your alert log has been updated with the following information:
Sat Jan 11 19:02:35 2014
WARNING: cataloging database area datafile
+DATA1/MYDB/datafile/system.269.786977709 as recovery area datafilecopy.
This datafilecopy is accounted into used space. Consider incrementing
db_recovery_file_dest_size parameter value by size of datafile.
Switch of datafile 1 complete to datafile copy
checkpoint is 13560558508638
WARNING: cataloging database area datafile
+DATA1/MYDB/datafile/sysaux.262.786977877 as recovery area datafilecopy.
This datafilecopy is accounted into used space. Consider incrementing
db_recovery_file_dest_size parameter value by size of datafile.
Switch of datafile 2 complete to datafile copy
checkpoint is 13560558508638
WARNING: cataloging database area datafile
+DATA1/MYDB/datafile/undotbs1.266.786977783 as recovery area datafilecopy.
This datafilecopy is accounted into used space. Consider incrementing
db_recovery_file_dest_size parameter value by size of datafile.
Switch of datafile 3 complete to datafile copy
checkpoint is 13560558508638
WARNING: cataloging database area datafile
+DATA1/MYDB/datafile/users.259.786977783 as recovery area datafilecopy.
This datafilecopy is accounted into used space. Consider incrementing
db_recovery_file_dest_size parameter value by size of datafile.
Switch of datafile 4 complete to datafile copy
checkpoint is 13560558508638
WARNING: cataloging database area datafile
+DATA1/MYDB/datafile/tools.263.786977709 as recovery area datafilecopy.
This datafilecopy is accounted into used space. Consider incrementing
db_recovery_file_dest_size parameter value by size of datafile.
Switch of datafile 5 complete to datafile copy
checkpoint is 13560558508638
WARNING: cataloging database area datafile
+DATA1/MYDB/datafile/gemprod.275.786977627 as recovery area datafilecopy.
This datafilecopy is accounted into used space. Consider incrementing
db_recovery_file_dest_size parameter value by size of datafile.
Switch of datafile 6 complete to datafile copy
checkpoint is 13560558508638
WARNING: cataloging database area datafile
+DATA1/MYDB/datafile/undotbs2.267.786977883 as recovery area datafilecopy.
This datafilecopy is accounted into used space. Consider incrementing
db_recovery_file_dest_size parameter value by size of datafile.
Switch of datafile 7 complete to datafile copy
checkpoint is 13560558508638
Sat Jan 11 19:04:21 2014
alter database open

## Open a Sqlplus session and check for the %create% parameter:

show parameter create
SQL> show parameter db_create

NAME                                 TYPE        VALUE
———————————— ———– ——————————
db_create_file_dest                  string
db_create_online_log_dest_1          string
db_create_online_log_dest_2          string
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string

alter system set db_create_file_dest='+MYDB_DATA01' sid='*';
alter system set db_create_online_log_dest_1='+MYDB_FRA1' sid='*';

## check again
show parameter db_create
show parameter cluster

##Now it is time to work with the temp files. You will have to create a new temp tablespace in the new diskgroup, make that the default one and drop the old one:
set lines 200
col tablespace_name format a40
col file_name format a80
select tablespace_name,FILE_NAME, bytes/1024/1024 MB from dba_temp_files;

## This shows:
TABLESPACE_NAME                          FILE_NAME                                                                                MB
—————————————- ——————————————————————————– ———-
TEMP                                     +DATA1/MYDB/tempfile/temp.268.774880131                                             8192

##This will create a small interim temp tablespace in the new diskgroup, make it the default, drop the old TEMP, recreate TEMP in the new diskgroup at full size, make that the default again and finally drop the interim one:

create temporary tablespace TEMP02 tempfile size 1024m;
alter database default temporary tablespace TEMP02;
drop tablespace TEMP ;
create temporary tablespace TEMP tempfile size 8192m;
alter database default temporary tablespace TEMP;
drop tablespace TEMP02;

##Check it again:
set lines 200
col tablespace_name format a40
col file_name format a80
select tablespace_name,FILE_NAME, bytes/1024/1024 MB from dba_temp_files;

## Working with the redo logs:
## In this step we first have to add new members to each group (to each thread in a RAC). After that, and after switching the log files, we can delete the members in the old diskgroup.
## First check the environment:

Set lines 2000
select * from v$log;
SQL>
GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME          NEXT_CHANGE# NEXT_TIME
———- ———- ———- ———- ———- ———- — —————- ————- ——————- ———— ——————-
1          1      28288  209715200        512          2 YES INACTIVE            1.3559E+13 08.01.2014 14:54:59   1.3559E+13 08.01.2014 15:24:59
2          1      28289  209715200        512          2 YES INACTIVE            1.3559E+13 08.01.2014 15:24:59   1.3559E+13 08.01.2014 15:54:59
3          1      28290  209715200        512          2 NO  CURRENT             1.3559E+13 08.01.2014 15:54:59   2.8147E+14
5          2      28097  209715200        512          2 YES INACTIVE            1.3559E+13 08.01.2014 15:24:53   1.3559E+13 08.01.2014 15:54:52
6          2      28098  209715200        512          2 NO  CURRENT             1.3559E+13 08.01.2014 15:54:52   2.8147E+14
7          2      28095  209715200        512          2 YES INACTIVE            1.3559E+13 08.01.2014 14:24:51   1.3559E+13 08.01.2014 14:54:53
8          2      28096  209715200        512          2 YES INACTIVE            1.3559E+13 08.01.2014 14:54:53   1.3559E+13 08.01.2014 15:24:53

## and
col member format a80
set pagesize 33
select GROUP#,MEMBER from v$logfile order by 1;
GROUP# MEMBER
———- ——————————————————————————–
1 +DATA1/MYDB/onlinelog/group_1.260.774880079
1 +MYDB_FRA1/MYDB/onlinelog/group_1.258.774880081
2 +DATA1/MYDB/onlinelog/group_2.264.774880083
2 +MYDB_FRA1/MYDB/onlinelog/group_2.257.774880085
3 +DATA1/MYDB/onlinelog/group_3.261.774880087
3 +MYDB_FRA1/MYDB/onlinelog/group_3.256.774880089
5 +DATA1/MYDB/onlinelog/group_5.271.774880749
5 +MYDB_FRA1/MYDB/onlinelog/group_5.259.774880751
6 +DATA1/MYDB/onlinelog/group_6.272.774880753
6 +MYDB_FRA1/MYDB/onlinelog/group_6.260.774880755
7 +DATA1/MYDB/onlinelog/group_7.273.774880757
7 +MYDB_FRA1/MYDB/onlinelog/group_7.261.774880759
8 +DATA1/MYDB/onlinelog/group_8.274.774880763
8 +MYDB_FRA1/MYDB/onlinelog/group_8.262.774880765

##First we add new members to the correct , new disk group:
alter database add logfile member '+MYDB_DATA01' to group 1;
alter database add logfile member '+MYDB_DATA01' to group 2;
alter database add logfile member '+MYDB_DATA01' to group 3;
alter database add logfile member '+MYDB_DATA01' to group 5;
alter database add logfile member '+MYDB_DATA01' to group 6;
alter database add logfile member '+MYDB_DATA01' to group 7;
alter database add logfile member '+MYDB_DATA01' to group 8;

###Check again:
select GROUP#,MEMBER from v$logfile order by 1;

## Perform some log switches to make sure the new members have been in use (archive log current performs a log switch in all threads of the database):

alter system archive log current;

alter system archive log current;

alter system archive log current;

alter system archive log current;

alter system archive log current;

alter system archive log current;

alter system archive log current;

alter system archive log current;

## First check the environment again:
Set lines 2000
select * from v$log;

##It is time to drop the members from the old ( data1 )
select GROUP#,MEMBER from v$logfile order by 1;

GROUP# MEMBER
———- ——————————————————————————–
1 +DATA1/MYDB/onlinelog/group_1.260.774880079
1 +MYDB_FRA1/MYDB/onlinelog/group_1.258.774880081
2 +DATA1/MYDB/onlinelog/group_2.264.774880083
2 +MYDB_FRA1/MYDB/onlinelog/group_2.257.774880085
3 +DATA1/MYDB/onlinelog/group_3.261.774880087
3 +MYDB_FRA1/MYDB/onlinelog/group_3.256.774880089
5 +DATA1/MYDB/onlinelog/group_5.271.774880749
5 +MYDB_FRA1/MYDB/onlinelog/group_5.259.774880751
6 +DATA1/MYDB/onlinelog/group_6.272.774880753
6 +MYDB_FRA1/MYDB/onlinelog/group_6.260.774880755
7 +DATA1/MYDB/onlinelog/group_7.273.774880757
7 +MYDB_FRA1/MYDB/onlinelog/group_7.261.774880759
8 +DATA1/MYDB/onlinelog/group_8.274.774880763
8 +MYDB_FRA1/MYDB/onlinelog/group_8.262.774880765

##So we have to drop the redo members that point to the old (DATA1) diskgroup (but the group cannot be the current one!!):

GROUP# MEMBER
———- ——————————————————————————–
1 +DATA1/MYDB/onlinelog/group_1.260.774880079
2 +DATA1/MYDB/onlinelog/group_2.264.774880083
3 +DATA1/MYDB/onlinelog/group_3.261.774880087
5 +DATA1/MYDB/onlinelog/group_5.271.774880749
6 +DATA1/MYDB/onlinelog/group_6.272.774880753
7 +DATA1/MYDB/onlinelog/group_7.273.774880757
8 +DATA1/MYDB/onlinelog/group_8.274.774880763

alter database drop logfile member '+DATA1/MYDB/onlinelog/group_1.260.774880079';
alter database drop logfile member '+DATA1/MYDB/onlinelog/group_2.264.774880083';
alter database drop logfile member '+DATA1/MYDB/onlinelog/group_3.261.774880087';
alter database drop logfile member '+DATA1/MYDB/onlinelog/group_5.271.774880749';
alter database drop logfile member '+DATA1/MYDB/onlinelog/group_6.272.774880753';
alter database drop logfile member '+DATA1/MYDB/onlinelog/group_7.273.774880757';
alter database drop logfile member '+DATA1/MYDB/onlinelog/group_8.274.774880763';

##Checked again
select GROUP#,MEMBER from v$logfile order by 1;

##Working in the clusterware:

After these activities I tried stopping and starting via srvctl, and the start failed: the alert log showed error messages. I noticed that the environment was using the old spfile in +DATA1 again, and when I checked, the spfile pointer in the init.ora was indeed wrong again.

cd  $ORACLE_HOME/db/dbs

cat initMYDB1.ora
spfile=’+DATA1/MYDB/spfileMYDB.ora’

## so the cluster agent had overwritten my changes
## I altered the init.ora again and started the instance; that worked.
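A quick way to confirm which spfile a (re)started instance is actually using; this is a standard parameter query, added here as a convenience:

SQL> show parameter spfile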

##In sqlplus:

select name from v$controlfile
union
select name from v$datafile
union
select name from v$tempfile
union
select member from v$logfile
union
select filename from v$block_change_tracking
union
select name from v$flashback_database_logfile;

## This shows:

NAME
——————————————————————————–
+DATA1/MYDB/changetracking/ctf.265.774880707
+DATA1/MYDB/control01.ctl
+DATA1/MYDB/control02.ctl
+DATA1/MYDB/control03.ctl
+DATA1/MYDB/datafile/gemprod.275.786977627
+DATA1/MYDB/datafile/sysaux.262.786977877
+DATA1/MYDB/datafile/system.269.786977709
+DATA1/MYDB/datafile/tools.263.786977709
+DATA1/MYDB/datafile/undotbs1.266.786977783
+DATA1/MYDB/datafile/undotbs2.267.786977883
+DATA1/MYDB/datafile/users.259.786977783
+DATA1/MYDB/onlinelog/group_1.260.774880079
+DATA1/MYDB/onlinelog/group_2.264.774880083
+DATA1/MYDB/onlinelog/group_3.261.774880087
+DATA1/MYDB/onlinelog/group_5.271.774880749
+DATA1/MYDB/onlinelog/group_6.272.774880753
+DATA1/MYDB/onlinelog/group_7.273.774880757
+DATA1/MYDB/onlinelog/group_8.274.774880763
+DATA1/MYDB/tempfile/temp.268.774880131
+MYDB_FRA1/MYDB/onlinelog/group_1.258.774880081
+MYDB_FRA1/MYDB/onlinelog/group_2.257.774880085
+MYDB_FRA1/MYDB/onlinelog/group_3.256.774880089
+MYDB_FRA1/MYDB/onlinelog/group_5.259.774880751
+MYDB_FRA1/MYDB/onlinelog/group_6.260.774880755
+MYDB_FRA1/MYDB/onlinelog/group_7.261.774880759
+MYDB_FRA1/MYDB/onlinelog/group_8.262.774880765

## which means another action point with regard to block change tracking:
SELECT filename, status, bytes
FROM v$block_change_tracking;
FILENAME                                                                         STATUS          BYTES
——————————————————————————– ———- ———-
+DATA1/MYDB/changetracking/ctf.265.774880707                                  ENABLED      11599872

## Disable it and enable it on the new location ( the new disk group )
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+MYDB_DATA01';

## In an Oracle restart or Rac Environment you need to check the Clusterware setup now since it has knowledge about spfile, disk groups being used etc.
## First check the configuration in the Clusterware for the database:
srvctl config database -d MYDB
Database unique name: MYDB
Database name:
Oracle home: /opt/oracle/product/11203_ee_64/db
Oracle user: oracle
Spfile: +DATA1/MYDB/spfileMYDB.ora
Domain: prod.vis
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MYDB
Database instances: MYDB1,MYDB2
Disk Groups: DATA1,MYDB_FRA1
Mount point paths:
Services: MYDB_TAF.prod.vis
Type: RAC
Database is administrator managed

##So we have to perform two action points:
  • Make the spfile point to the correct diskgroup (our new +MYDB_DATA01)
  • The Disk Groups attribute still knows about the DATA1 diskgroup (and it should not)

##First modification will be to inform the Clusterware which spfile to use:
srvctl modify database -d MYDB -p '+MYDB_DATA01/MYDB/spfileMYDB.ora'

##After that similar action for the disk groups:
srvctl modify database -d MYDB -a 'MYDB_DATA01,MYDB_FRA1'

## don't believe it? check it:
srvctl config database -d MYDB

##This looks better, so now let's do a stop & start with srvctl:
srvctl stop database -d MYDB
srvctl start database -d MYDB

##That worked! Happy DBA.

ORA-01110: data file 2504 Errors occurred during index rebuild

This morning I came across this:

ORA-01110: data file 2504 Errors occurred during index rebuild

I examined it and followed the steps below; after that the index rebuild was successful again.

## First checked the database properties since this is a 10g environment:
COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A30
COLUMN description FORMAT A50
SET LINESIZE 200

SELECT *
FROM   database_properties
WHERE  property_name like '%TABLESPACE';

PROPERTY_NAME                  PROPERTY_VALUE                 DESCRIPTION
—————————— —————————— ————————————————–
DEFAULT_TEMP_TABLESPACE        TEMP                           Name of default temporary tablespace
DEFAULT_PERMANENT_TABLESPACE   SYSTEM                         Name of default permanent tablespace

##Created a new temporary tablespace:
CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/db/MYDB/temp/temp_99.dbf' size 1024M;

##Made this the new default temporary tablespace
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;

## Checked if the temp tablespace was in use (those sessions would have to be killed first if so; see the sketch below):
SELECT USERNAME, SESSION_NUM, SESSION_ADDR FROM V$SORT_USAGE;
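A sketch of how such sessions could be identified (and, only if really needed, killed); v$sort_usage is joined to v$session on the session address, and the kill statement needs the real sid and serial# values:

SELECT s.sid, s.serial#, s.username
FROM   v$session s, v$sort_usage u
WHERE  s.saddr = u.session_addr;
-- only if unavoidable:
-- ALTER SYSTEM KILL SESSION '<sid>,<serial#>' IMMEDIATE;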

##Dropped the old tablespace.
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;

## Removed the file on Unix. I know I might have settled for a REUSE, but given the possibility of corruption I thought this was better.
rm temp_01.dbf

## Created  new temp tablespace.
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/db/MYDB/temp/temp_01.dbf' size 2000M;

## Removed all temp files on the os
oracle@mysrvr1:/db/MYDB/temp [OPTMYDB]# rm  temp_02.dbf
oracle@mysrvr1:/db/MYDB/temp [OPTMYDB]# rm  temp_03.dbf
oracle@mysrvr1:/db/MYDB/temp [OPTMYDB]# rm  temp_04.dbf
oracle@mysrvr1:/db/MYDB/temp [OPTMYDB]# rm  temp_05.dbf
oracle@mysrvr1:/db/MYDB/temp [OPTMYDB]# rm  temp_06.dbf
oracle@mysrvr1:/db/MYDB/temp [OPTMYDB]# rm  temp_07.dbf

## Additional temp files added:
ALTER TABLESPACE temp ADD TEMPFILE '/db/MYDB/temp/temp_02.dbf' size 2000m;
ALTER TABLESPACE temp ADD TEMPFILE '/db/MYDB/temp/temp_03.dbf' size 2000m;
ALTER TABLESPACE temp ADD TEMPFILE '/db/MYDB/temp/temp_04.dbf' size 2000m;
ALTER TABLESPACE temp ADD TEMPFILE '/db/MYDB/temp/temp_05.dbf' size 2000m;
ALTER TABLESPACE temp ADD TEMPFILE '/db/MYDB/temp/temp_06.dbf' size 2000m;
ALTER TABLESPACE temp ADD TEMPFILE '/db/MYDB/temp/temp_07.dbf' size 2000m;

## Default tablespace was put back to temp again
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

##  Temp2 tablespace dropped
DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;

## This tablespace, also a temporary one, had issues in the same way:
DROP TABLESPACE tools_temp  INCLUDING CONTENTS AND DATAFILES;

rm  tools_temp_01.dbf

## tools_temp was also broken, so it was dropped and recreated as well.
CREATE TEMPORARY TABLESPACE TOOLS_TEMP TEMPFILE '/db/MYDB/temp/tools_temp_01.dbf' size 1024M;

## All back to normal again .

Happy reading,

Mathijs

When truncating a table does not work ORA-01426

Introduction.

This morning's incident brought a big smile to my face, so I thought it was high time to share it with the community again. I got a call from a colleague who takes care of an application, telling me he had issues with the 11.1 Oracle environment. He explained that he tried to truncate a table and that instead of being rewarded with an empty table he got punished with an ORA-01426 (numeric overflow). As so often, the web was a DBA's best friend again, so the puzzle got solved.

Details:

This incident involved two staging tables. Both of them held 1,000,000,000 rows (that is right, 1,000 million rows, this is not a typo), and Oracle would not allow the application / user to empty such a staging table in one blow with a truncate, because it raised ORA-01426, the horror! Hmm, do I sound sarcastic yet? Because frankly I was shaking my head when hearing these details.

Anyhow, as always the Internet (Metalink too) is your friend, so in the end we had two options:

either patch it

Apply Patch 8226471

OR

a. Flush in-memory monitoring information for all tables in the dictionary.

exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO()

b. For the problematic tables set a small value for NUMROWS using:
## for both tables in scope
exec DBMS_STATS.SET_TABLE_STATS( 'MYUSER','TABLE_UPLD2',NUMROWS=> 10000 )
exec DBMS_STATS.SET_TABLE_STATS( 'MYUSER','TABLE_UPLD1',NUMROWS=> 10000 )

c. Issue truncate/exchange partition statement.
exec DBMS_UTILITY.EXEC_DDL_STATEMENT('TRUNCATE TABLE TABLE_UPLD2');
exec DBMS_UTILITY.EXEC_DDL_STATEMENT('TRUNCATE TABLE TABLE_UPLD1');
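One addition from my side, not part of the MOS workaround itself: after the truncate it may be wise to gather fresh statistics so the dictionary no longer carries the artificially small row count; a sketch assuming the same owner and table names as above:

exec DBMS_STATS.GATHER_TABLE_STATS('MYUSER','TABLE_UPLD1');
exec DBMS_STATS.GATHER_TABLE_STATS('MYUSER','TABLE_UPLD2');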

Given the fact that this was a production issue I performed the workaround and recommended the customer to check with his software provider whether this was an out-of-control cleanup of the staging tables or simply a bug in the application software.

As always happy reading

Mathijs