Cloning a PDB with a Different Character Set into your CDB

Introduction.

This post discusses the following scenario: there is a container database that has been created with a UTF8 character set (and this character set comes recommended in OCI). This container database (and its only PDB, of course also in UTF8) already has an active Data Guard in place in OCI.

Scenario requested: can we add (create) a PDB with a different character set into this CDB?

Short story: no, you cannot create a pluggable database with a different character set inside the CDB. Plan your CDB with care.

Long story: you cannot, but there is a workaround. Your target CDB (the database where you would like to add the PDB with that different character set) needs to be UTF8 to make this work,

AND

you need a second container database, created with that different character set, in OCI.
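As a quick sanity check (a minimal sketch, assuming you are connected as a privileged user to CDB$ROOT of each CDB), you can confirm the character set of the containers involved before planning the clone:

SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

On the target CDB this should return the UTF8 character set; on the second CDB it should return the other character set (WE8ISO8859P15 in this post).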

Recommendations / Setup / Tested practice:

In OCI: it is recommended to create all the needed PDBs in the CDB (best practice: via the OCI console) before setting up a Data Guard. Which means: IF you really really really want to do the scenario below, do it before setting up the Data Guard.

Second recommendation: if you need to add PDBs to the container database (with UTF8), again add or clone all the PDBs needed in that CDB first, before starting to work on the (active) Data Guard.

Should you need a container database with a different character set, and need to load data into it that also holds that different character set, then set up the environment described here and perform the remote clone steps below. Make sure the PDB is in a good state on the primary side before continuing with the cloning scenario.

Please be aware that a lot of things happen for a reason. If you really really really (did I mention really) have to have environments with another character set (different from UTF8), do consider implementing a separate CDB infrastructure with that different character set!

And most importantly, the order of the tools to use should be: the OCI console (UI) first, then the database CLI, and only as a last resort manual intervention via ssh / sqlplus etc.

Requirements / best practices that need checking before cloning:

  • My source CDB (in UTF8) is holding a PDB, and I want to add a PDB with a different character set to my primary side in that existing CDB.
  • The source (remote pluggable database), in a CDB with the different character set, is set to read only.
  • Is there a Data Guard setup in place in OCI? If the answer to that question is yes: check with dgmgrl that the Data Guard is happy.
  • Do you have enough storage in the disk groups to hold a transient PDB, plus a clone of that transient PDB, in the same container database?
  • Is there a database link in place for the remote clone?
  • Check the source (in the container database with the different character set) as a preparation for the first clone:

SQL> show PDBs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       READ ONLY  NO

         3 ISOP15_PDB1                    READ WRITE NO

         4 TEST                           READ ONLY  NO

         5 WILLIE                         READ WRITE NO

  • Check character set in that Source PDB:

SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

VALUE

----------------------------------------------------------------

WE8ISO8859P15

  • Check that the Data Guard (if one is in place) is happy.

Note: by default in OCI the broker is running on the primary side.
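A quick way to confirm the broker is indeed running (a minimal sketch, run as sysdba on the primary; the parameter should show TRUE):

SQL> show parameter dg_broker_start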

dgmgrl

DGMGRL> connect sys / as sysdba                                                                                                  

Password:                                                                                                                         

Connected to "DBUTF8_fra2ps"

Connected as SYSDBA.

DGMGRL> show configuration

Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7

  Protection Mode: MaxPerformance                                                                                                

  Members:                                                                                                                        

  DBUTF8_fra2ps – Primary database                                                                                               

    DBUTF8_fra1h7 – Physical standby database                                                                                     

Fast-Start Failover:  Disabled                                                                                                    

Configuration Status:                                                                                                             

SUCCESS   (status updated 59 seconds ago) 

  • Checked for storage:

Specific script for an ASM environment:

SET LINESIZE 150
SET PAGESIZE 9999
SET VERIFY off
COLUMN group_name FORMAT a25 HEAD 'DISKGROUP_NAME'
COLUMN state FORMAT a11 HEAD 'STATE'
COLUMN type FORMAT a6 HEAD 'TYPE'
COLUMN total_gb FORMAT 999,999,999 HEAD 'TOTAL SIZE (GB)'
COLUMN free_gb FORMAT 999,999,999 HEAD 'FREE SIZE (GB)'
COLUMN used_gb FORMAT 999,999,999 HEAD 'USED SIZE (GB)'
COLUMN pct_used FORMAT 999.99 HEAD 'PERCENTAGE USED'
SELECT DISTINCT name group_name, state state, type type,
       round(total_mb/1024) total_gb, round(free_mb/1024) free_gb,
       round((total_mb - free_mb)/1024) used_gb,
       round((1 - (free_mb/total_mb))*100, 2) pct_used
FROM   v$asm_diskgroup
-- WHERE round((1 - (free_mb/total_mb))*100, 2) > 90
ORDER BY name;

  • On the primary side, check for the specific database link:

set lines 300

select owner,DB_LINK from dba_db_links order by 1 ;

OWNER                                                                                                                            DB_LINK

-------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------

PUBLIC                                                                                                                           dblink_to_other_CDB.SUB10200805271.MBK1.ORACLEVCN.COM

SYS                                                                                                                              SYS_HUB

  • If the database link is not in place it needs to be created (as you can see I decided to create a public db link, and also decided to put all the connect details in the db link rather than using a tnsnames entry):

CREATE public DATABASE LINK dblink_to_other_CDB

CONNECT TO <C##POWERUSER> IDENTIFIED BY <USER_PASSWORD>

USING '(DESCRIPTION=(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.179)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=test.sub10200805271.mbk1.oraclevcn.com)))';

  • On the primary side, check the GLOBAL* parameters first; if you do not really need global_names, set it to FALSE!

SQL> show parameter global

NAME                                 TYPE        VALUE

------------------------------------ ----------- ------------------------------

allow_global_dblinks                 boolean     FALSE

global_names                         boolean     FALSE <<<----

global_txn_processes                 integer     1

  • If global_names is TRUE, change it to FALSE:

alter system set  global_names = FALSE ;

  • After that, check that the db link is working on the primary side and on the standby side:

SQL> select sysdate from dual@dblink_to_other_CDB;

SYSDATE

---------

19-NOV-21

  • On the primary side create an intermediate PDB (a clone from the remote container with the PDB in the different character set). This step creates a pluggable database in our CDB with the character set of the other CDB (if that CDB is WE8ISO8859P15, its PDBs are WE8ISO8859P15 too). PDBVOID will thus be a pluggable database with character set WE8ISO8859P15, living inside the CDB whose character set is UTF8.

create pluggable database PDBVOID from test@dblink_to_other_CDB keystore identified by "<wallet_password>" standbys=none;

Explanation:

test@dblink_to_other_CDB means: TEST is the PDB to clone from, and dblink_to_other_CDB is the db link we created earlier to the CDB holding that PDB (in the different character set).

standbys=none means: do not protect this PDB in the Data Guard.
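Once the clone has completed, a quick check on the standby side (a minimal sketch, run as sysdba in CDB$ROOT of the standby) should show that this transient PDB is excluded from recovery:

SQL> select name, open_mode, recovery_status from v$pdbs;

For PDBVOID the RECOVERY_STATUS should show DISABLED, while the PDBs protected by the Data Guard show ENABLED.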

  • The next step will be to create a self-referencing db link on the primary (here named PRIMSIDE), so basically a db link that points to the very database in which it is created.
  • Note that it will be replicated on the standby, pointing to the primary too, and I DO recommend that you test that db link on both sides (primary and standby); a test query is shown right after the CREATE statement below.

CREATE public DATABASE LINK PRIMSIDE

   CONNECT TO C##SYSTEM IDENTIFIED BY HAS_BEEN_CHANGED_##12

   USING '(DESCRIPTION=(CONNECT_TIMEOUT=5)(TRANSPORT_CONNECT_TIMEOUT=3)(RETRY_COUNT=3)(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=10.0.1.39)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=DBUTF8_fra2ps.sub10200805271.mbk1.oraclevcn.com)))';
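Testing that self-referencing link works the same way as before (a minimal sketch; on the standby this works because the database is open read only in this Active Data Guard setup):

SQL> select sysdate from dual@PRIMSIDE;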

  • The next step happens on the standby side: you will have to set / update the parameter STANDBY_PDB_SOURCE_FILE_DBLINK (a quick verification is shown after the statement):

ALTER SYSTEM SET STANDBY_PDB_SOURCE_FILE_DBLINK='PRIMSIDE';
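A quick way to confirm the parameter took effect on the standby (a minimal sketch):

SQL> show parameter standby_pdb_source_file_dblink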

  • Next step, on the primary side: prepare for the next clone (inside the same container); that clone of the clone will be the one that stays.

SQL> alter pluggable database PDBVOID open instances = all;

SQL> alter pluggable database PDBVOID  close instances = all;

SQL> alter pluggable database PDBVOID open read only instances = all;

SQL> show PDBs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       READ ONLY  NO

         3 PDB1                           READ WRITE NO

         4 PDBDEC                         READ WRITE NO

         5 PDB2                           READ WRITE NO

         6 PDB010                         READ WRITE NO

         7 PDBP15                         READ WRITE NO

         8 PDBVOID                        READ ONLY  NO

  • Now create a local clone from the transient no-standby PDB with STANDBYS=ALL  (which means the Data Guard should protect this New PDB).

create pluggable database PDBP15C from PDBVOID keystore identified by "WElcome##12" STANDBYS=ALL;

SQL> show PDBs                       

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       READ ONLY  NO

         3 PDB1                           READ WRITE NO

         4 PDBDEC                         READ WRITE NO

         5 PDB2                           READ WRITE NO

         6 PDB010                         READ WRITE NO

         7 PDBP15                         READ WRITE NO

         8 PDBVOID                        READ ONLY  NO

         9 PDBP15C                        MOUNTED

  • Open the PDB on primary side:

SQL> alter pluggable database PDBP15C  open instances = all;

  • Time to check wallet on prim side:

col wrl_parameter format a60

set lines 300

select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

  • Create a new TDE master encryption key on the primary side (a quick verification is shown after the statements).

alter session set container=PDBP15C;

administer key management set key force keystore identified by "WElcome##12" with backup;
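A quick way to verify that the new master key is there (a minimal sketch, still connected to the PDBP15C container; v$encryption_keys lists the keys with their creation time):

SQL> select key_id, creation_time from v$encryption_keys order by creation_time;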

  • Note: the Data Guard broker will not be happy; it will show an error due to the new master key implemented on the PRIM SIDE:

DGMGRL> show configuration

Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7

  Protection Mode: MaxPerformance

  Members:

  DBUTF8_fra2ps – Primary database

    DBUTF8_fra1h7 – Physical standby database

      Error: ORA-16766: Redo Apply is stopped

Fast-Start Failover:  Disabled

Configuration Status:

ERROR   (status updated 60 seconds ago)

  • On the primary side, start the preparations for the scp of the wallet to the standby machine(s).

cd /opt/oracle/dcs/commonstore/wallets/tde/DBUTF8_fra2ps/

[oracle@clutf81 DBUTF8_fra2ps]$ cp ewallet.p12 cwallet.sso /tmp

[oracle@clutf81 DBUTF8_fra2ps]$ cd /tmp

[oracle@clutf81 tmp]$ ls -ltr ewallet.p12 cwallet.sso

-rw------- 1 oracle oinstall 14091 Nov 19 12:33 ewallet.p12

-rw------- 1 oracle oinstall 14136 Nov 19 12:33 cwallet.sso

## Permissions needed for OPC:

chmod o+rx  ewallet.p12  cwallet.sso

[opc@clutf81 tmp]$ scp ewallet.p12  cwallet.sso 10.0.1.62:/tmp

ewallet.p12                                                                                                                                              100%   14KB  10.4MB/s   00:00   

cwallet.sso                                                                                                          

  • On the standby side, start your preparations by saving the existing wallet, then copy the wallet from the primary side into the wallet location on the standby side.

cd /opt/oracle/dcs/commonstore/wallets/tde/*/

[oracle@mysbfra1 DBUTF8_fra1h7]$ cp ewallet.p12 ewallet.p12.save

[oracle@mysbfra1 DBUTF8_fra1h7]$ cp cwallet.sso cwallet.sso.save

[oracle@mysbfra1 DBUTF8_fra1h7]$ cp /tmp/ewallet.p12 .

[oracle@mysbfra1 DBUTF8_fra1h7]$ cp /tmp/cwallet.sso .

  • On the standby side, close the wallet (it will reopen with the query in the next bullet):

SQL> alter session set container = CDB$ROOT;

SQL> administer key management set keystore close container=ALL;

  • On the standby side, check the wallet (this will also open the keystore again):

col wrl_parameter format a60

set lines 300

select con_id, wrl_parameter, status, wallet_type, keystore_mode from v$encryption_wallet;

  • Now it is time to check status in broker again.

DGMGRL> show configuration

Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7

  Protection Mode: MaxPerformance

  Members:

  DBUTF8_fra2ps – Primary database

    DBUTF8_fra1h7 – Physical standby database

      Error: ORA-16766: Redo Apply is stopped

Fast-Start Failover:  Disabled

Configuration Status:

ERROR   (status updated 60 seconds ago)

DGMGRL> show configuration

Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7

  Protection Mode: MaxPerformance

  Members:

  DBUTF8_fra2ps – Primary database

    DBUTF8_fra1h7 – Physical standby database

      Error: ORA-16810: multiple errors or warnings detected for the member

Fast-Start Failover:  Disabled

Configuration Status:

ERROR   (status updated 51 seconds ago)

  • In the broker, enable apply again and give it some time!!! (in my case several minutes).

edit database DBUTF8_fra1h7 set state='apply-on';

## check broker:

DGMGRL> show configuration

DGMGRL> show database DBUTF8_fra1h7

Database - DBUTF8_fra1h7

  Role:               PHYSICAL STANDBY

  Intended State:     APPLY-ON

  Transport Lag:      0 seconds (computed 1 second ago)

  Apply Lag:          0 seconds (computed 1 second ago)

  Average Apply Rate: 3.88 MByte/s

  Real Time Query:    ON

  Instance(s):

    DBUTF81

    DBUTF82 (apply instance)

Database Status:

SUCCESS

DGMGRL> show configuration

Configuration - DBUTF8_fra2ps_DBUTF8_fra1h7

  Protection Mode: MaxPerformance

  Members:

  DBUTF8_fra2ps – Primary database

    DBUTF8_fra1h7 – Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:

SUCCESS   (status updated 66 seconds ago)

Well, that is a relief. The Data Guard configuration and broker are happy again (which also means so is this DBA).

  • This can also be checked in sqlplus on the standby side:

SQL> set lines 300

col value format a25

select * from v$dataguard_stats;

SOURCE_DBID SOURCE_DB_UNIQUE_NAME            NAME                                               VALUE                     UNIT                           TIME_COMPUTED                 DATUM_TIME                          CON_ID

----------- -------------------------------- -------------------------------------------------- ------------------------- ------------------------------ ------------------------------ ------------------------------ ----------

          0                                  transport lag                                      +00 00:00:00              day(2) to second(0) interval   11/19/2021 12:50:42           11/19/2021 12:50:40                      0

          0                                  apply lag                                          +00 00:00:00              day(2) to second(0) interval   11/19/2021 12:50:42           11/19/2021 12:50:40                      0

          0                                  apply finish time                                  +00 00:00:00.000          day(2) to second(3) interval   11/19/2021 12:50:42           0

          0                                  estimated startup time                             24                        second                         11/19/2021 12:50:42           0

  • And good things always come in pairs (another check on the standby side):

SQL> select status, blocks, delay_mins, known_agents from gv$managed_standby where process like 'MRP%';

STATUS           BLOCKS DELAY_MINS KNOWN_AGENTS

------------ ---------- ---------- ------------

APPLYING_LOG    2097152          0            3

  • Since we are in Active Data Guard, the almost-last step in this scenario is to bring the newborn PDB to read only mode.

SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

## Last Checks

SQL> show PDBs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       READ ONLY  NO

         3 PDB1                           READ ONLY  NO

         4 PDBDEC                         READ ONLY  NO

         5 PDB2                           READ ONLY  NO

         6 PDB010                         READ ONLY  NO

         7 PDBP15                         READ ONLY  NO

         8 PDBVOID                        MOUNTED

         9 PDBP15C                        MOUNTED

SQL> alter pluggable database PDBP15C open read only instances = all;

Pluggable database altered.

## Check

SQL> alter session set container = PDBP15C;

Session altered.

SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

VALUE

-------------------------

WE8ISO8859P15

################################  The End  ##############################

PS Of course it is completely up to you whether you want to drop PDBVOID now, or keep it as a kind of template for future scenarios.
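If you decide to drop it, a minimal sketch (assuming PDBVOID is closed first and you no longer need its datafiles) would be:

SQL> alter pluggable database PDBVOID close immediate instances = all;

SQL> drop pluggable database PDBVOID including datafiles;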

Happy reading and till we meet again. And of course: always test, test, test this scenario in a TEST environment before you run it on your favorite production environment.

Mathijs

Character-set change in an Oracle Database

Introduction

Recently a request came in to investigate options to change the character set of a number of core databases. The goal would be to move from WE8ISO8859P15 to AL32UTF8. Below are some thoughts and things to remember if you are really heading in that direction.

A management summary of this article would be: yes, it is technically possible to move the character set of a database from WE8ISO8859P15 to AL32UTF8. But this can only be achieved after careful checking first, and if the checks show that actions are needed, you will need a team with application and data-model knowledge to look into the reports generated during the inventory part of the scenario. With the knowledge gained from those reports, those application / data-model aware people will have to fix any issues in the source database before running another analysis (scan) and before doing the conversion. Oh, and of course (test, test, test, test) have a backup in place before doing this on your live production boxes. The Oracle software tries to protect you by allowing a maximum of 7 days between analysis and conversion, which of course makes sense given that the data will continue to grow during and after the first analysis.

Details

When setting up a new database there is of course a lot to consider, such as the sizing of the database (sizing at the start and, let's say, after 12 months) and the users and their privileges, but there is one extra, very important topic to think about well (think about it twice, even) and that is the character set. Plain and simple: if the character set is not chosen correctly, there might be an issue converting it later (if that is possible at all) to the one you actually need. As a rule of thumb one could even say that if the character set is wrong, the data present in the database needs to be extracted (with tools like Data Pump), the database needs to be recreated with the new character set, and the data can then be loaded (imported) again. Of course this is only to be done after testing such a scenario on a pre-production environment and after careful analysis of the resulting reports to check the data, before moving the scenario to production.
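As an illustration of that export / recreate / import route, a minimal sketch (the connect strings, directory, dump file name and the choice for a full export are all hypothetical; adapt them to your environment):

expdp system@SOURCEDB full=y directory=DATA_PUMP_DIR dumpfile=full_sourcedb.dmp logfile=full_sourcedb_exp.log
### recreate the database with the new character set (AL32UTF8), then:
impdp system@NEWDB full=y directory=DATA_PUMP_DIR dumpfile=full_sourcedb.dmp logfile=full_sourcedb_imp.log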

There is one exception to the scenario above in which you always have to recreate the database to get to the desired (new) character set: if the new character set is a superset (if it holds the same characters, at the same positions in the character table), then an in-place conversion is possible. Of course you still need a proper analysis of the data present in the database before starting such a conversion: plan a scenario in pre-production to do the conversion, analyse the results thoroughly, then run the same scenario on production (run the inventory / impact analysis first, make a backup, do the conversion, and do a lot of post-checks once the conversion has completed).

Before Oracle 12 two tools were needed to do the conversion. The csscan tool does the inventory work and presents the consequences of such a conversion in three report files. If data would be lost, you have to fix those issues manually and rerun csscan. Once that is done, the csalter tool needs to run against a restricted database (only one instance up in case of RAC (all others down), only one session logged in as sys, and the database in restricted mode). Of course this means that the csalter step requires downtime of your environment.
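A minimal sketch of that csalter step (based on the MOS notes referenced further down; the cluster_database change only applies to RAC, and the exact sequence should of course be taken from the note that matches your version):

sqlplus / as sysdba
-- RAC only: run on one instance, all other instances down
alter system set cluster_database=false scope=spfile;
shutdown immediate
startup restricted
spool csalter.log
@?/rdbms/admin/csalter.plb
shutdown immediate
startup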

In Oracle 12c, csscan and csalter are no longer supported and you will have to look at the DMU tool (Database Migration Assistant for Unicode).

Csscan and Csalter

Preparations:

### These MOS notes have been used as a baseline for the csscan:

Installing and configuring Csscan in 10g and 11g (Database Character Set Scanner) (Doc ID 745809.1)

Changing the NLS_CHARACTERSET to AL32UTF8 / UTF8 (Unicode) in 8i, 9i , 10g and 11g (Doc ID 260192.1)

Csscan Output Explained (Doc ID 444701.1)

As a best practice it is mentioned there to alter the script, after making a copy of it first.

### In the existing (e.g. 11g) ORACLE_HOME:
cp $ORACLE_HOME/rdbms/admin/csminst.sql $ORACLE_HOME/rdbms/admin/csminst.sql.old

### As a best practice, alter the script by pointing the csmig user to an existing tablespace in the database with enough room. (Hint: you might even consider putting the designated tablespace on autoextend until the analysis has completed; a sketch follows the statement below.)

alter user csmig default tablespace nomonitor quota unlimited on nomonitor;
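### A minimal sketch of that autoextend hint (the datafile path and sizes are hypothetical, adapt them to your own tablespace):

alter database datafile '/u01/app/oracle/oradata/MYDB/nomonitor01.dbf' autoextend on next 256M maxsize unlimited;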

### Then run the adapted script:

sqlplus /nolog
conn / as sysdba
set TERMOUT ON
set ECHO ON
spool csminst.log
@?/rdbms/admin/csminst.sql

### Make sure you perform the changes below to the user:

sqlplus /nolog
conn / as sysdba;
alter user csmig identified by ces10;
alter user csmig account unlock ;
grant create session to CSMIG;
exit;

### All set for the analysis, so at the operating system level run:

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
csscan SYSTEM FULL=y TOCHAR=AL32UTF8 ARRAY=2048000 PROCESS=32 CAPTURE=N LOG=$ORACLE_SID.log

 

### Important to  know about csscan and csalter is that they are desupported in 12c.

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=549806046730391&id=1418321.1&_afrWindowMode=0&_adf.ctrl-state=sm7b61jd7_375

For a scenario in 11G ( or before ) :

http://dbaonline.org/newsletter/Newsletter_200908.pdf

If your source database is big (one of the environments where I had to run it was approximately 8 TB), the csscan (11g environment) will take time (in my case it ran for some 13 hours). If you would like to keep an eye on it, v$session_longops will be your friend:

sqlplus / as sysdba
SET pages 1000
SELECT target,
TO_CHAR(start_time,'HH24:MI:SS - DD-MM-YY'),
time_remaining,
sofar,
totalwork,
sid,
serial#,
opname
FROM v$session_longops
WHERE sid IN
(SELECT sid FROM v$session WHERE upper(program) LIKE 'CSSCAN%')
AND sofar < totalwork
ORDER BY start_time
/

DMU tool

 

### Dmu tool :

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=549848776806800&id=1272374.1&_afrWindowMode=0&_adf.ctrl-state=sm7b61jd7_429

## Known issues with dmu:

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=549881125542061&id=2018250.1&_afrWindowMode=0&_adf.ctrl-state=sm7b61jd7_483

### Requirements for the DMU tool in Oracle (DMU can be used as of 11.2.0.3 and up)

### You need to run this package in the database you want to analyse:

@?/rdbms/admin/prvtdumi.plb

### Make sure you have a password file in place. My first attempt to connect to the database in the DMU tool failed; I kept getting invalid username or password ... After some time I realized that a password file is mandatory. If a password file is not present, this is the way to create one:

orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=manager entries=16 force=y

### If the password file seems to be in place, check in the database with the query below (you should then at least see SYS listed):

SQL> select * FROM v$pwfile_users;

### The DMU tool will need a user with sysdba privileges (a sketch for creating one is shown below).
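### A minimal sketch for such a user (the username and password are hypothetical; granting sysdba requires the password file from the previous step):

create user dmu_admin identified by "a_strong_password";
grant create session, sysdba to dmu_admin;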

### Before starting, the environment needs to be set up:

export JAVA_HOME=$ORACLE_HOME/jdk
$ORACLE_HOME/dmu/dmu.sh

 

As always, a pleasure to share thoughts and approaches. Happy reading,

Mathijs.