ASM compatible.rdbms set correct (or not)

General:

I almost wanted to start with "once upon a time", but what good is that kind of opening for a blog about ASM compatible.rdbms, right? So better stick to the facts. On one of the test environments of the billing landscape I set up some 35 databases a few years ago and registered them all in Oracle Restart 11.2.0.3. According to the standards of that time only two disk groups existed in ASM, +DATA and +FRA, and all databases created back then had their archive destination set to +FRA. This worked fine until recently, when the archiving behavior of a specific database started changing (a sharp increase in the number of archives). That was when it became time to set up a dedicated diskgroup for each of the databases in that specific scope.

Details:

As part of the protocol I dropped a number of disks from the +FRA diskgroup and performed a rebalance on that diskgroup in ASM, after which I was good to go for my fresh diskgroup. Since all the databases on that server were 11.2.0.3, this is how the diskgroup was set up:

CREATE DISKGROUP MYDB01_FRA01 EXTERNAL REDUNDANCY
 DISK '/dev/mapper/asm-redo145p1' name MYDB01_FRA01_0001
ATTRIBUTE 'compatible.asm'='11.2', 'compatible.rdbms'='11.2'
;

Looks perfectly normal, right, and all according to the book. I checked that the diskgroup was mounted (which it was), so the next step was to work with one of the databases in scope, point its archive destination to the newly created diskgroup, and as a check perform a log switch in the database.
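That mount check is nothing fancier than a quick query in the ASM instance, along these lines:

select name, state from v$asm_diskgroup where name = 'MYDB01_FRA01';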

## In the alert log of the database this is what was noticed:

ALTER SYSTEM SET log_archive_dest_1='LOCATION=+MYDB01_FRA01' SCOPE=BOTH;
Tue Mar 14 15:59:46 2017
Thread 1 advanced to log sequence 15417 (LGWR switch)
 Current log# 3 seq# 15417 mem# 0: +DATA/MYDB01/onlinelog/group_3.285.765120841
 Current log# 3 seq# 15417 mem# 1: +FRA/MYDB01/onlinelog/group_3.271.765120841
Tue Mar 14 15:59:47 2017
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance MYDB01 - Archival Error
ORA-16038: log 2 sequence# 15416 cannot be archived
ORA-00254: error in archive control string ''
ORA-00312: online log 2 thread 1: '+DATA/MYDB01/onlinelog/group_2.286.765120839'
ORA-00312: online log 2 thread 1: '+FRA/MYDB01/onlinelog/group_2.272.765120841'
ORA-15001: diskgroup "MYDB01_FRA01" does not exist or is not mounted

 

Hm, that was unexpected, because as said I did check that the diskgroup was mounted in ASM before starting. So time to look in the alert log of ASM.

### In asm logfile
Tue Mar 14 15:59:46 2017
NOTE: Rejecting a request to use grp 'MYDB01_FRA01' from 'MYDB01:MYDB01'.
NOTE: Its version '11.1.0.0.0' is lesser than the minimum required version '11.2.0.0.0' for a database to use this group
Tue Mar 14 16:05:27 2017
NOTE: Rejecting a request to use grp 'MYDB01_FRA01' from 'MYDB01:MYDB01'.
NOTE: Its version '11.1.0.0.0' is lesser than the minimum required version '11.2.0.0.0' for a database to use this group

A clear clue, but very unexpected to be honest, because all databases on that server are 11.2.0.3; I set them up personally. So one more detail was missing in this puzzle. Let's look at the specific database:

SQL> show parameter compatible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
compatible                           string      11.1.0

 

For whatever reason, when the database was set up in those days the compatible parameter was set to 11.1. Since the database was already hanging on the archiver error, time was ticking. I tried to alter the archive destination again, which was not allowed (more error messages). So, ready for plan B:

Dropped diskgroup MYDB01_FRA01 (no archive had been written to this group yet; of course I checked).

## Then in ASM I performed:
 
CREATE DISKGROUP MYDB01_FRA01 EXTERNAL REDUNDANCY
 DISK '/dev/mapper/asm-redo145p1' name MYDB01_FRA01_0001
ATTRIBUTE 'compatible.asm'='11.2', 'compatible.rdbms'='11.1'
;

After that the database was happy (and so was this DBA) and I had re-learned a lesson: always expect the unexpected. Or more generally, before switching your log_archive_dest to another diskgroup in ASM, make sure the compatible.rdbms of that diskgroup is not set higher than the compatible parameter of the databases that will use it.
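To make that check explicit, these are the queries I will run up front from now on (a small sketch; DATABASE_COMPATIBILITY is what compatible.rdbms ends up as, and it must not be higher than the compatible parameter of any database that will use the group):

-- in the ASM instance
select name, compatibility, database_compatibility from v$asm_diskgroup where name = 'MYDB01_FRA01';

-- in each database that will archive into the diskgroup
show parameter compatible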

As always Happy reading,

Mathijs

 

 

 

Renaming datafiles in Oracle 11gR2 ASM

Introduction.

As part of their housekeeping, DBAs have always been creating tablespaces for users and applications. During the lifecycle of an application, objects are created, deleted, copied into temporary copies, and so on. In the end the storage of a tablespace can look like a Swiss cheese. Even though Oracle is smart enough to reuse the holes, it is sometimes good housekeeping to reorganize a tablespace by moving and recreating its objects in a new tablespace. Once such a scenario has completed, the old tablespace (aka the Swiss cheese) will be empty, so it could be tossed away. Or not, if the tablespace name is mandatory because vendor scripts use a hardcoded tablespace name. Even though the DBA might not like such 'fixed' names, it is good to know that Oracle 11g (my version is 11.2.0.3 on Linux) offers the option to rename a tablespace. But what happens to the datafiles that are part of that tablespace?

The scenario below is based on a great post I found on the web by Frits Hoogland: https://fritshoogland.wordpress.com/2012/07/23/rename-oracle-managed-file-omf-datafiles-in-asm/.

In his post Frits offers two scenarios for this action. Personally I favor the first one since it holds less risk: Oracle keeps track of the copied file and the old file and drops the old file during the alter database rename file ... action, whereas the second scenario means you will have to do some afterwork in asmcmd, since the original (old) file remains in place (in RMAN a copy and a switch to copy are done).

And as you can see, after all these years (his post was written in 2012) it is still great input for the current scenario. Good posts are like good wine: they get better as they age.

Scenario:

It is often said on the web that you should not believe something just because it is printed, and I stick to that. To get a look and feel of the scenario, let's run a test first.

## Creating a tablespace in an ASM diskgroup (which will create the datafile as an OMF file).

SQL> create tablespace test1 datafile '+DATA' size 100m;

Let's see what the datafile for this tablespace looks like in the DATA diskgroup:

SQL> select * from dba_data_files where tablespace_name = 'TEST1';

FILE_NAME       : +DATA/mydb1/datafile/test1.736.935381317
FILE_ID         : 5
TABLESPACE_NAME : TEST1
BYTES           : 104857600
BLOCKS          : 12800
STATUS          : AVAILABLE
RELATIVE_FNO    : 5
AUTOEXTENSIBLE  : NO
MAXBYTES        : 0
MAXBLOCKS       : 0
INCREMENT_BY    : 0
USER_BYTES      : 103809024
USER_BLOCKS     : 12672
ONLINE_STATUS   : ONLINE

 

Ok so we see that the datafile  is +DATA/mydb1/datafile/test1.736.935381317  at the moment.

In  the next step we will rename the tablespace:

SQL> alter tablespace test1 rename to test2;

Time to check whether something has changed under the hood now that we have renamed the tablespace:

SQL> select * from dba_data_files where tablespace_name = 'TEST2';

FILE_NAME       : +DATA/mydb1/datafile/test1.736.935381317
FILE_ID         : 5
TABLESPACE_NAME : TEST2
BYTES           : 104857600
BLOCKS          : 12800
STATUS          : AVAILABLE
RELATIVE_FNO    : 5
AUTOEXTENSIBLE  : NO
MAXBYTES        : 0
MAXBLOCKS       : 0
INCREMENT_BY    : 0
USER_BYTES      : 103809024
USER_BLOCKS     : 12672
ONLINE_STATUS   : ONLINE

Not much of a surprise that the datafile is still +DATA/mydb1/datafile/test1.736.935381317, so if we feel there should be a 1:1 relation between tablespace name and datafile name, we need to go the extra mile to make it right.

In my first sqlplus session I issued this command:

SQL> alter tablespace TEST2 offline;

I am used to working in multiple sessions on Linux, so I opened a second window, started RMAN and gave this command to copy the old file within the diskgroup. This will create another OMF file for this datafile in diskgroup +DATA.

rman target /

RMAN> copy datafile '+DATA/mydb1/datafile/test1.736.935381317' to '+DATA';

Output of this action looked like this:

Starting backup at 08.02.2017 04:13:58

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=923 device type=DISK

channel ORA_DISK_1: starting datafile copy

input datafile file number=00005 name=+DATA/mydb1/datafile/test1.736.935381317

output file name=+DATA/mydb1/datafile/test2.733.935381641 tag=TAG20170208T041400 RECID=3 STAMP=935381644

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07

Finished backup at 08.02.2017 04:14:07

Starting Control File and SPFILE Autobackup at 08.02.2017 04:14:08

piece handle=/opt/oracle/product/11203_ee_64/db/dbs/MYDB1_c-2786503255-20170208-00 comment=NONE

Finished Control File and SPFILE Autobackup at 08.02.2017 04:14:15

With the output file name (the newly created file) +DATA/mydb1/datafile/test2.733.935381641

I moved back to my first screen and issued this command in sqlplus:

SQL> alter database rename file '+DATA/mydb1/datafile/test1.736.935381317' to '+DATA/mydb1/datafile/test2.733.935381641';

After that, the last step is to take the tablespace online again:

SQL> alter tablespace test2 online;

Let’s check the datafile for the tablespace again now.

SQL> select * from dba_data_files where tablespace_name = 'TEST2';

FILE_NAME       : +DATA/mydb1/datafile/test2.733.935381641
FILE_ID         : 5
TABLESPACE_NAME : TEST2
BYTES           : 104857600
BLOCKS          : 12800
STATUS          : AVAILABLE
RELATIVE_FNO    : 5
AUTOEXTENSIBLE  : NO
MAXBYTES        : 0
MAXBLOCKS       : 0
INCREMENT_BY    : 0
USER_BYTES      : 103809024
USER_BLOCKS     : 12672
ONLINE_STATUS   : ONLINE

Now it is time for the real deal.

First let's find out which datafiles are part of our tablespace POOL_DATA (which was renamed from POOL_DATA_REORG).

SQL> select file_id,file_name from dba_data_files where tablespace_name = 'POOL_DATA';

FILE_ID FILE_NAME

---------------------------------------------

       41 +DATA/mydb1/datafile/pool_data_reorg.780.934704787

       42 +DATA/mydb1/datafile/pool_data_reorg.783.934705039

       43 +DATA/mydb1/datafile/pool_data_reorg.658.934705043

       44 +DATA/mydb1/datafile/pool_data_reorg.735.934705271

       45 +DATA/mydb1/datafile/pool_data_reorg.734.934705525

       46 +DATA/mydb1/datafile/pool_data_reorg.732.934705785

       47 +DATA/mydb1/datafile/pool_data_reorg.731.934706037

       48 +DATA/mydb1/datafile/pool_data_reorg.730.934706289

       49 +DATA/mydb1/datafile/pool_data_reorg.729.934706505


Time to take the tablespace offline, after agreeing with the vendor DBA on a maintenance window to do these actions:

SQL> alter tablespace POOL_DATA offline;

Opened a second screen  and started an rman session.

rman target /

RMAN> copy datafile '+DATA/mydb1/datafile/pool_data_reorg.780.934704787' to '+DATA';

This showed the following output in RMAN:

Starting backup at 08.02.2017 04:35:57

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile copy

input datafile file number=00041 name=+DATA/mydb1/datafile/pool_data_reorg.780.934704787

output file name=+DATA/mydb1/datafile/pool_data.736.935382959 tag=TAG20170208T043558 RECID=4 STAMP=935383461

channel ORA_DISK_1: datafile copy complete, elapsed time: 00:08:25

Finished backup at 08.02.2017 04:44:23

Starting Control File and SPFILE Autobackup at 08.02.2017 04:44:24

piece handle=/opt/oracle/product/11203_ee_64/db/dbs/MYDB1_c-2786503255-20170208-03 comment=NONE

Finished Control File and SPFILE Autobackup at 08.02.2017 04:44:31

In sqlplus I issued the rename action using the above information:

SQL> alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.780.934704787' to '+DATA/mydb1/datafile/pool_data.736.935382959';

And since seeing is believing, in the alert log I saw this entry:

Alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.780.934704787' to '+DATA/mydb1/datafile/pool_data.736.935382959'

Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.780.934704787

Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.780.934704787' to '+DATA/mydb1/datafile/pool_data.736.935382959'

I like what I see! Working this way, the rename automatically deletes the old file in ASM for us!

If this were the only datafile to be moved, the next step would be to take the tablespace online again (in this case also as an extra check that we did not break anything).

SQL> alter tablespace POOL_DATA online;

Now let us check  the datafile(s) for the tablespace again:

SQL> select file_id,file_name from dba_data_files where tablespace_name = 'POOL_DATA';

   FILE_ID FILE_NAME

---------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

       41 +DATA/mydb1/datafile/pool_data.736.935382959

       42 +DATA/mydb1/datafile/pool_data_reorg.783.934705039

       43 +DATA/mydb1/datafile/pool_data_reorg.658.934705043

       44 +DATA/mydb1/datafile/pool_data_reorg.735.934705271

       45 +DATA/mydb1/datafile/pool_data_reorg.734.934705525

       46 +DATA/mydb1/datafile/pool_data_reorg.732.934705785

       47 +DATA/mydb1/datafile/pool_data_reorg.731.934706037

       48 +DATA/mydb1/datafile/pool_data_reorg.730.934706289

       49 +DATA/mydb1/datafile/pool_data_reorg.729.934706505

Ah, that looks like one down (one correct, eight more to go).

alter tablespace POOL_DATA offline;
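Rather than typing the remaining RMAN copy commands by hand, the dictionary can generate them. This small helper is my own sketch and not part of the original procedure; the rename statements can be built the same way afterwards from the RMAN output file names:

set pages 0 lines 200
select 'copy datafile ''' || file_name || ''' to ''+DATA'';'
  from dba_data_files
 where tablespace_name = 'POOL_DATA'
   and file_name like '%pool_data_reorg%';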

I proceeded with the remaining eight datafiles, and when doing the rename steps it was good to see the information below in the alert log:

Wed Feb 08 06:00:34 2017

alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.783.934705039' to '+DATA/mydb1/datafile/pool_data.780.935384353'

Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.783.934705039

Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.783.934705039' to '+DATA/mydb1/datafile/pool_data.780.935384353'

alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.658.934705043' to '+DATA/mydb1/datafile/pool_data.755.935385209'

Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.658.934705043

Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.658.934705043' to '+DATA/mydb1/datafile/pool_data.755.935385209'

alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.735.934705271' to '+DATA/mydb1/datafile/pool_data.778.935384371'

Wed Feb 08 06:00:48 2017

Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.735.934705271

Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.735.934705271' to '+DATA/mydb1/datafile/pool_data.778.935384371'

alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.734.934705525' to '+DATA/mydb1/datafile/pool_data.781.935385215'

Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.734.934705525

Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.734.934705525' to '+DATA/mydb1/datafile/pool_data.781.935385215'

alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.730.934706289' to '+DATA/mydb1/datafile/pool_data.762.935386865'

Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.730.934706289

Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.730.934706289' to '+DATA/mydb1/datafile/pool_data.762.935386865'

alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.729.934706505' to '+DATA/mydb1/datafile/pool_data.772.935386887'

Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.729.934706505

Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.729.934706505' to '+DATA/mydb1/datafile/pool_data.772.935386887'

Wed Feb 08 06:01:30 2017

alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.732.934705785' to '+DATA/mydb1/datafile/pool_data.744.935386243'

Deleted Oracle managed file +DATA/mydb1/datafile/pool_data_reorg.732.934705785

Completed: alter database rename file '+DATA/mydb1/datafile/pool_data_reorg.732.934705785' to '+DATA/mydb1/datafile/pool_data.744.935386243'

All looked well so time to wrap up.

SQL> alter tablespace POOL_DATA online;

SQL> alter database backup controlfile to trace;

SQL> select file_id,file_name from dba_data_files where tablespace_name = 'POOL_DATA';

   FILE_ID FILE_NAME

----------------

                41 +DATA/mydb1/datafile/pool_data.736.935382959

                42 +DATA/mydb1/datafile/pool_data.780.935384353

                43 +DATA/mydb1/datafile/pool_data.755.935385209

                44 +DATA/mydb1/datafile/pool_data.778.935384371

                45 +DATA/mydb1/datafile/pool_data.781.935385215

                46 +DATA/mydb1/datafile/pool_data.744.935386243

                47 +DATA/mydb1/datafile/pool_data.732.935388425

                48 +DATA/mydb1/datafile/pool_data.762.935386865

                49 +DATA/mydb1/datafile/pool_data.772.935386887

 

Final check:

SQL> select TABLESPACE_NAME,status from dba_tablespaces order by 1 ;

TABLESPACE_NAME              STATUS

------------------------------ ---------

POOL_DATA                   ONLINE

POOL_IX                     ONLINE

SYSAUX                      ONLINE

SYSTEM                      ONLINE

TEMP                        ONLINE

TEST2                       ONLINE

TOOLS                       ONLINE

UNDOTBS1                    ONLINE





As always, I hope you had some happy reading with this procedure.

 

Mathijs.

 

 

 

SQL report in HTML & sending mail in Linux

Introduction

Being part of the Oracle community on the web is always a great experience for me, because a lot of the stories, blogs and tweets shared by colleagues in the field are top notch. This week I came across a great tweet from one of my favorite bloggers, Uwe Hesse (https://uhesse.com/2011/06/30/sqlplus-output-in-nice-html-format/), in which he shared how to create SQL output in HTML format. If you do not follow his blog yet, please do, because it is great. Of course I had to test it myself, since it is oh so familiar that getting the data is one thing but presenting it nicely is a different story (with all the set lines 999, col x format y, etc.).

Details:

The scenario for this is great:

  1. Run your report (SQL) as you are used to.
  2. Run the HTML part after it.
  3. Either open the HTML file in Firefox to show and share the info, or (as is my aim) find a way to send it from the Linux box as a mail attachment.

In the SQL script used in his post, Uwe shows that you typically run this kind of work from a client (a PC or a (L)Unix box), because once the job is done the resulting report is loaded into Firefox.

set termout off

set markup HTML ON HEAD "" -
BODY "" -
TABLE "border='1' align='center' summary='Script output'" -
SPOOL ON ENTMAP ON PREFORMAT OFF

spool myoutput.html

l
/

spool off
set markup html off spool off
host firefox myoutput.html
set termout on
quit

On most of the Linux boxes where I work, Firefox is not present. So instead of host firefox I am aiming at sending the output as a mail attachment. So far I did a lot of reading and tried various suggestions: mailx -a, but that is not a valid parameter in this Red Hat 5.x release; uuencode, which I also cannot use because it is not installed; and last but not least the suggestion to use mutt failed me too (also not installed). After the weekend I will ask the Linux colleagues at work if there is another option, so that matter is still to be continued. And of course if there is another solution it will be shared.
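One route I still want to try, purely as a sketch, is to build the MIME message by hand and pipe it to sendmail, base64-encoding the HTML file as an attachment. This assumes /usr/sbin/sendmail and openssl are present on the box, and the recipient address is a made-up example:

#!/bin/bash
## sketch: mail myoutput.html as an attachment via sendmail (no mailx -a, uuencode or mutt needed)
TO="dba-team@example.com"        ## hypothetical recipient
SUBJECT="SQL report in HTML"
FILE="myoutput.html"
BOUNDARY="==report_$$=="
{
  echo "To: ${TO}"
  echo "Subject: ${SUBJECT}"
  echo "MIME-Version: 1.0"
  echo "Content-Type: multipart/mixed; boundary=\"${BOUNDARY}\""
  echo
  echo "--${BOUNDARY}"
  echo "Content-Type: text/plain"
  echo
  echo "Report attached."
  echo "--${BOUNDARY}"
  echo "Content-Type: text/html; name=\"${FILE}\""
  echo "Content-Transfer-Encoding: base64"
  echo "Content-Disposition: attachment; filename=\"${FILE}\""
  echo
  openssl base64 -in "${FILE}"
  echo "--${BOUNDARY}--"
} | /usr/sbin/sendmail -t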

But looking at this as an example, the work will be worth the effort! Thank you Uwe for this great tweet and blog post (https://uhesse.com/2011/06/30/sqlplus-output-in-nice-html-format/).

Archivelogs per hour / day (archives script and report in cool HTML format).

To be continued…

As always,

Happy reading,

Mathijs.

Importing Data via Network

Introduction:

For two projects there has been an assignment to upgrade to Oracle 11.2.0.4. One environment was already 11.2.0.3 with the same cluster stack below it, and one environment will come from 10.2.0.4 on Solaris. For both projects an 11.2.0.4 cluster stack plus database has been set up on one of the newer shared Linux clusters. Both environments will be migrated using the export/import method, since they are relatively small (approximately 400-500 GB) and, of course, since one of them is being migrated cross-platform (from Solaris to Linux) you do not have that much choice.

In other projects I have had good experience with NFS filesystems between source and target servers, and at first I was aiming to use them again during these migrations. However, since not every request makes it within the timelines (I would have to wait at least two more weeks for the NFS mounts), other creativity is required. In this specific case I will work with Data Pump via the network.

When looking into this I came across two scenarios. The first is covered by a fellow blogger and is interesting since it offers the option to export directly into an ASM disk group; in that scenario an extra step is needed, an impdp with a directory pointing to the same ASM diskgroup/subdirectory. The second scenario, explained in more detail here, goes one step further. It is simple: use impdp via a database link directly into the database, with no need to park a dumpfile on a filesystem or in a diskgroup first and then run the import. Nope, just another impdp and you are there!

 

 

1. Setting up the tnsnames entry on the target (receiving) side.

 

In order to make this scenario work, you will have to make sure that no firewall is blocking the connection to the source database you will pull the data from when you create the tnsnames.ora entry on the target side.

In my case:

 

I always try a: telnet <ip> <port>

telnet  666.233.103.203  33012

 

If you see something like 'trying ...' and nothing else happens, well, this was not your lucky day and a firewall is blocking you from making this a happy scenario. If you see something like the following, lucky you:

Escape character is '^]'.

The recommendation when you get stuck at 'trying ...' is to have the firewall opened. In my case the host was a VIP address of a RAC database, and port 33012 had been assigned to the local listener of that database.

 

## Let's set up the tnsnames entry. NOTE: the firewall needs to be opened before proceeding with tnsping etc.:

MBMYDB =
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=666.233.103.203)(PORT=33012))
)
(CONNECT_DATA=
(SERVICE_NAME=MYDB.test.nl)
)
)

One interesting part is that the service_name I wanted to use in the tnsnames entry was not present as a service in the database, so I had to extend the existing service (which was not the default service, since it was defined without a domain).

 

## On the source side, in the database I want to take the data from, I added the service:

 

alter system set service_names = 'MYDB','MYDB.test.nl' scope = both;

 

SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      MYDB, MYDB.test.nl

 

So now we have two services in place which we can use in the tnsnames.ora.
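Before building the database link it is worth a quick resolution and connect test from the target host, along these lines (sqlplus will prompt for the system password):

tnsping MBMYDB
sqlplus system@MBMYDB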

 

2. Time to set up a public database link

 

## Fellow bloggers recommended creating a PUBLIC database link (this seems mandatory). Since in my case I would do the import as system, a normal database link would be okay too, but for the scenario's sake a public database link is fine.

 

drop public DATABASE LINK old_MYDB;

## worked with this one

CREATE public DATABASE LINK old_MYDB CONNECT TO system IDENTIFIED BY xxxxxxx USING 'mbMYDB';

3. Seeing is believing: test the database link.

 

## performed select

select 'x' from dual@old_MYDB;

4. Next stop: creating a directory for the logfile of the impdp.

 

Yes, that is correct: only a directory for the log file, not for the dump itself. That is why I liked this scenario so much.

 

## created directory for the logfile

create directory acinu_imp as '/opt/oracle/MYDB/admin/create';

grant read,write on directory acinu_imp to system;

 

 

 

5. Time to perform the import.

 

Over the years I have used expdp and impdp a lot, but mostly as an almost 1:1 clone of exp/imp. Since Google is your friend when looking for scenarios, it was great to explore the powerful exclude= parameter. As you will see, this creates an import of the full database while excluding the schemas I don't care about.

 

Since I was, hmm, energy efficient, I wanted to type the full statement on the Linux command line, but I was punished by having double quotes in my command. Had I used a parfile, things would have been easier. But since I wanted to stick to the scenario, I found that on OS level every embedded double quote needs a backslash, like below:

 

## performed import  with success with  command below

 

impdp system full=yes "EXCLUDE=SCHEMA:\"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')\"" network_link=old_MYDB directory=acinu_imp logfile=AcinupImport.log parallel=2 job_name=MYDB_DMP_FULL
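For comparison, the same call driven by a parfile needs no backslash escaping at all. This is only a sketch; the file name import_full.par is just an example:

## import_full.par
full=yes
network_link=old_MYDB
directory=acinu_imp
logfile=AcinupImport.log
parallel=2
job_name=MYDB_DMP_FULL
EXCLUDE=SCHEMA:"IN('ADBM','DBSNMP','PERFSTAT','UPDOWN','ORACLE_OCM','OUTLN','SYS','SYSTEM')"

## then run:
impdp system parfile=import_full.par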

 

 

## Note

At first all my attempts ended with the error below:

 

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39200: Link name “OLD_ACINUP” is invalid.
ORA-02019: connection description for remote database not found

 

This made me check the services in the database and the entry in the tnsnames, and test it all again. After that, as Hannibal of the A-Team would say, I love it when a plan comes together: it worked!

 

Happy reading ,

 

And as always: don't believe it just because it is printed.

 

Mathijs Bruggink

 

 

(De)install Apex in 11gR2 challenge

Introduction

Recently two newly set up databases were checked by the QA process and they failed that check because Apex had been installed in the SYSAUX tablespace. Of course it is important to follow the standards, so some extra work was needed to pass the checks: house rules state that an Apex installation needs a dedicated tablespace. Time to play and make things right, with two scenarios. Scenario 1 is the faulty installation, with a deinstall and a fresh install afterwards. Scenario 2 is a plain install (during which I noticed that Oracle Text is a mandatory component that needs to be in place before installing Apex). Below are the notes I used to investigate and fix the challenge. As always, Google is your friend when exploring such solutions.

Being curious by nature I of course had to explore two approaches. Primarily, adding a new component to an already existing database can be done with DBCA. As another option it is still a good challenge to see if you can do it yourself on the command line. Hmm, does it still show that I grew up with Oracle 7.3.4 as a starter kit?

Details

First let's explore whether there is a current installation of Apex in place.

col  comp_name format a60
set lines 144
select COMP_NAME,status from dba_registry order by 1;

or

SELECT VERSION FROM DBA_REGISTRY
WHERE COMP_ID = 'APEX';

VERSION
------------------------------
3.2.1.00.12

Deinstall

As part of the activities I looked on the web and found the blog below. A big thank you to the author Matthias Hoys for documenting it:

De-installing Apex in 11gr2

So in my case: set the environment to the Oracle 11gR2 home and move to the apex directory before starting sqlplus.

cd /opt/oracle/product/11203_ee_64/db/apex/
sqlplus /nolog
connect / as sysdba
@/opt/oracle/product/11203_ee_64/db/apex/apxremov.sql

Once the script has completed, check again:

SELECT STATUS FROM DBA_REGISTRY
WHERE COMP_ID = 'APEX';

no rows selected

Now let's do it properly and create a dedicated tablespace for the Apex installation first:

create tablespace APEX datafile '+CRMMST_DATA01' size 4096M autoextend off
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M SEGMENT SPACE MANAGEMENT AUTO;

Note: if you settle for a uniform size tablespace, make sure your extent size is big enough. I had the issue that the install fails if the extent size is too small (the Apex install fails with ORA-60019). Error detail:

create table wwv_mig_forms (
*
ERROR at line 1:
ORA-60019: Creating initial extent of size 14 in tablespace of extent size 8

ORA-60019: Creating initial extent of size string in tablespace of extent size string
Cause: Creation of SECUREFILE segment failed due to small tablespace extent size.
Action: Create tablespace with larger extent size and reissue command.
## My tablespace had uniform size 64K when I created the dedicated tablespace. Recreated it with uniform size 1M.

Install Apex manually  (with Oracle Text in place):

## make sure you are in this directory:
cd /opt/oracle/product/11203_ee_64/db/apex 
@/opt/oracle/product/11203_ee_64/db/apex/apexins apex apex temp /i/

Since more than one script will run, I made sure to move to the apex directory of my Oracle installation before running the starting script in sqlplus. My newly created dedicated tablespace apex is passed in the parameters (@apexins.sql tablespace_apex tablespace_files tablespace_temp images).

 

Install Apex manually (with Oracle Text  not in place):

In case Oracle Text is not installed in the database, the Apex install will complain about it. In that case you will have to install Oracle Text first.
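A quick way to check whether Oracle Text is already there, same idea as the Apex check above:

SELECT comp_id, status FROM dba_registry WHERE comp_id = 'CONTEXT';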

Oracle Text:

cd /opt/oracle/product/11203_ee_64/db/ctx/admin

Start your sqlplus session and run:

spool /tmp/catctx.log
@catctx.sql CTXSYS SYSAUX TEMP NOLOCK;

The parameters are: username, tablespace to create the objects in, temporary tablespace, and account status after the install.

After that installed Apex:

## make sure you are in this directory:
cd /opt/oracle/product/11203_ee_64/db/apex 
@/opt/oracle/product/11203_ee_64/db/apex/apexins apex apex temp /i/

## check status :

SQL> SELECT STATUS FROM DBA_REGISTRY
  2  WHERE COMP_ID = 'APEX';

STATUS
-----------
VALID

## Another issue, and blushing, when in the end it turned out I did NOT have the proper tablespace in place: I ran the script with the parameters, but the tablespace had not been created:

 

APPLICATION 4411 - APEX - System Messages Set Credentials...
begin
*
ERROR at line 1:
ORA-04063: package body "APEX_030200.WWV_FLOW_API" has errors
ORA-06508: PL/SQL: could not find program unit being called: "APEX_030200.WWV_FLOW_API"
ORA-06512: at line 4

## I noticed this package remained invalid even after the remove part of Apex.
## Solution: drop package htmldb_system; drop public synonym htmldb_system; then retry.

Mandatory aftercare (last man standing): check for invalid objects and recompile them.

select owner, object_name, object_type from dba_objects where status <> 'VALID' order by 1,2;

@?/rdbms/admin/utlrp

and check again .

 

Happy reading,

 

Mathijs

 

Btw. Loved this post by one of the great bloggers in general and on this Apex  in particular :

Oracle-Base oracle-application-express-apex-4-2-installation

Transport Tablespace as migration to 11.2.0.4 with RMAN.

Introduction:

For one of the projects the question came in to investigate and set up an 11.2.0.4 Real Application Clusters database, with the extra challenge that the migration had to be done cross-platform, from Oracle 10.2.0.3 on the Solaris platform to 11.2.0.4 on Linux. From the application provider came the suggestion to investigate a backup/restore scenario with an upgrade on the new (Linux) server. Given that the source environment was 10.2.0.3 on Solaris and that we were heading towards a RAC cluster environment on Linux, that suggestion was the first to be sent to the dustbin.

Normal export/import was the second scenario explored. Of course this is a valid scenario, but given that the database was more than 1 TB it is not exactly the most favorite way to bring the data across. Still, with scripting, multiple parfiles, and/or moving partitioned data across in waves, it would be a fair plan B.

From my reading, though, I had put my mind to the use of transportable tablespaces as the way forward with this challenging question.

Requirements:

As preparation for the job I requested an NFS filesystem mounted between the source server (MySunServer), holding the 10g database, and the target server (MyLinuxcluster). This NFS filesystem would hold the Data Pump dumps to be created, as well as the scripts and parfiles/config files, as suggested in MOS note 1389592.1. The NFS share was readable and writable from both servers. The Perl scripts that come with the note support the transport of the tablespaces, but also help with the conversion from big endian to little endian and, as a bonus in my case, do the copy into ASM.

Due to the layout of the database in the source environment  Rman was chosen as the best way forward with the scenario.

As a preparation an 11.2.0.4 RAC database was set up on the target cluster. This database only holds the standard tablespaces and a small default tablespace for the users (in a TTS solution, the names of the data tablespaces that come across may not already exist in the new environment). All data/application users have been pre-created in the new environment with a new default tablespace.
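Two small pieces of preparation that go with this setup are worth sketching here (this is my own illustration, not taken verbatim from the note; the connect string MYSUNSERVER_MYDB and the credentials are placeholders): the database link referenced by srclink=TTSLINK in the config below, and the query that gives the platform id for platformid=.

-- on the destination database: the link the xtt scripts use to pull datafiles across
CREATE PUBLIC DATABASE LINK TTSLINK CONNECT TO system IDENTIFIED BY xxxxxxx USING 'MYSUNSERVER_MYDB';

-- on the source database: platform id and endian format (platform_id 2 is Solaris 64-bit, big endian)
select d.platform_id, tp.platform_name, tp.endian_format
  from v$database d, v$transportable_platform tp
 where tp.platform_id = d.platform_id;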

Details & Comments

Configuration file for the Perl scripts:

This file is part of the zip file that comes with the MOS note. It needs to be set up to match your specific needs. I will only show the settings I have used, with their comments:

xtt.properties:
## Reduce Transportable Tablespace Downtime using Incremental Backups
## (Doc ID 1389592.1)

## Properties file for xttdriver.pl

## See documentation below and My Oracle Support Note 1389592.1 for details.
## Tablespaces to transport

## Specify tablespace names in CAPITAL letters.

tablespaces=MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,MYDB_INDEX,MYTS,USERS
##tablespaces=MYCOMPTTS
## Source database platform ID

## platformid

## Source database platform id, obtained from V$DATABASE.PLATFORM_ID

platformid=2

## srclink

## Database link in the destination database that refers to the source

## database. Datafiles will be transferred over this database link using
## dbms_file_transfer.
srclink=TTSLINK

## Location where datafile copies are created during the "-p prepare" step.

## This location must have sufficient free space to hold copies of all
## datafiles being transported.

dfcopydir=/mycomp_mig_db_2_linux/mybox/rman

## backupformat

## Location where incremental backups are created.

backupformat=/mycomp_mig_db_2_linux/mybox/rman

## Destination system file locations

## stageondest

## Location where datafile copies are placed by the user when they are

## transferred manually from the souce system. This location must have
## sufficient free space to hold copies of all datafiles being transported.

stageondest=/mycomp_mig_db_2_linux/mybox/rman

# storageondest

## This parameter is used only when Prepare phase method is RMAN backup.

## Location where the converted datafile copies will be written during the

## "-c conversion of datafiles" step. This is the final location of the
## datafiles where they will be used by the destination database.
storageondest=+MYDBP_DATA01/mydbp/datafile
## backupondest

## Location where converted incremental backups on the destination system

## will be written during the "-r roll forward datafiles" step.

## NOTE: If this is set to an ASM location then define properties

##      asm_home and asm_sid below. If this is set to a file system
##       location, then comment out asm_home and asm_sid below
backupondest=+MYDBP_DATA01/mydbp/datafile

## asm_home, asm_sid

## Grid home and SID for the ASM instance that runs on the destination

asm_home=/opt/crs/product/11204/crs

asm_sid=+ASM1
## Parallel parameters

parallel=8

## rollparallel

## Defines the level of parallelism for the -r roll forward operation.

## If undefined, default value is 0 (serial roll forward).

rollparallel=2
## getfileparallel

## Defines the level of parallelism for the -G operation

getfileparallel=4

## desttmpdir

## This should be defined to same directory as TMPDIR for getting the

## temporary files. The incremental backups will be copied to directory pointed
## by stageondest parameter.
desttmpdir=/mycomp_mig_db_2_linux/MYDBP/scripts

 

Below in a Table format you will see the steps performed with comments.

Step prefixes qualify as follows:

  • I for Initial steps – activities
  • P for Preparation
  • R for Roll Forward activities
  • T for Transport activities

Server column shows where the action needs to be done.

Step  Server  What needs to be done
I1.3 Source Identify the tablespace(s) in the source database that will be transported ( Application owner needs to support with schema owner information) :
tablespaces=MYDB_DATA,MYDB_EUC_DATA,MYDB_EUC_INDEX,

MYDB_INDEX,MYTS,USERS

I1.5 Source + Target In my case project offered an nfs filesystem which i could use : Nfs filesystem : /mycomp_mig_db_2_linux
I1.6 Source Together with the MOS note came this zip file: unzip rman-xttconvert.zip.
I1.7 Source Tailor the extracted file xtt.properties file on the source system to match your environment.
I1.8 Target As the oracle software owner copy all xttconvert scripts and the modified xtt.properties file to the destination system. This was not needed since we used the nas filesystem.
P1.9 Source + Target On both environments set up this:

export TMPDIR=/mycomp_mig_db_2_linux/MYDBP/scripts

P2B.1 Source perl xttdriver.pl -p
Note: do not use $ORACLE_HOME/perl/bin/perl; this did not work.
P2B.2 Source Copy files to destination. N/A since we use NFS
P2B3 Target On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the rmanconvert.cmd file created in step 2B.1 from the source system and run the convert datafiles step as follows:
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/rmanconvert.cmd /home/oracle/xtt (N/A since we use NFS)
perl xttdriver.pl -c
R3.1 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step as follows:
perl xttdriver.pl -i
R3.3 Target [oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt
 Since we are using Nas shared filesystem no need to copy with scp  between source and target.
perl xttdriver.pl -r
R3.4 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the determine new FROM_SCN step as follows:
perl xttdriver.pl -s
R3.5 Source 1.     If you need to bring the files at the destination database closer in sync with the production system, then repeat the Roll Forward phase, starting with step 3.1.
2.     If the files at the destination database are as close as desired to the source database, then proceed to the Transport phase.
T4.0 Source As found in the note Alter Tablespace Read Only Hanging When There Are Active TX In Any Tablespace (Doc ID 554832.1), a restart of the database may be required to have no active transactions; alternatively do this during off-hours. During a first test with one dedicated tablespace holding only one object it took more than 7 hrs: Oracle seems to wait for ALL active transactions, not only the ones that would impact the object in the test tablespace I worked with.
T4.1 Source On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, make the tablespaces being transported READ ONLY.
alter tablespace MYDB_DATA read only;
alter tablespace MYDB_EUC_DATA read only;
alter tablespace MYDB_EUC_INDEX read only;
alter tablespace MYDB_INDEX read only;
alter tablespace MYTS read only;
alter tablespace USERS read only;
T4.2 Source Repeat steps 3.1 through 3.3 one last time to create, transfer, convert, and apply the final incremental backup to the destination datafiles.
perl xttdriver.pl -i
T4.2 Target [oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt
perl xttdriver.pl -r
T4.3 Target On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the generate Data Pump TTS command step as follows:
perl xttdriver.pl -e
The generate Data Pump TTS command step creates a sample Data Pump network_link transportable import command in the file xttplugin.txt. It will hold list of all the TTS you have configured and all its transport_datafiles in details.
Example of that generated file : cat xttplugin.txt
impdp directory=MYDB_XTT_DIR logfile=tts_imp.log \
network_link=TTSLINK.PROD.NL transport_full_check=no \
transport_tablespaces=MYCOMPTTS,A,B,C \
transport_datafiles='+MYDBP_DATA01/mycomptts_152.xtf'
Note: in our example, once edited, we chmodded xttplugin.txt to 744 and ran it as a script.
T4.3 Source After the object metadata being transported has been extracted from the source database, the tablespaces in the source database may be made READ WRITE again, if desired.
T4.4 Target At this step, the transported data is READ ONLY in the destination database.  Perform application specific validation to verify the transported data.
Also, run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE as follows:
In rman:
validate tablespace MYDB_DATA, MYDB_EUC_DATA, MYDB_EUC_INDEX, MYDB_INDEX, MYTS, USERS check logical;
T4.5 Target alter tablespace MYDB_DATA read write;
alter tablespace MYDB_EUC_DATA read write;
alter tablespace MYDB_EUC_INDEX read write;
alter tablespace MYDB_INDEX read write;
alter tablespace MYTS read write;
alter tablespace USERS read write;
T5 Source + Target Cleanup of NFS filesystem.
Put the source DB in restricted mode as a fallback for a couple of days after go-live, then back it up to tape and decommission it.

Adding VIP address and Goldengate to Grid Infrastructure 11.2.0.3

Introduction:

Earlier this week preparations were started to add the Goldengate software to the 11.2.0.3 Grid Infrastructure of the billing environment in production. As part of that scenario I also had to add a VIP address that is to be used by the Goldengate software as part of high(er) availability. In my concept the Goldengate daemons run on one node only by default. During a node crash (of course not wanted nor desired), or as a way to load-balance work on the cluster, the VIP address and the Goldengate software need to stop and restart on the other node. Below you will find a working example as part of the preparations I performed, with some comments added to the specific steps.

Commands will be typed in italic in this blog.

Details

## First step will be adding the VIP address 10.97.242.40 oradb-gg-localp1.dc-lasvegas.us to the Grid Infrastructure (GI). Note that the IP address and the name have been defined in the DNS. Once I got feedback that the address had been added, I was able to perform an nslookup. Of course it was not yet possible to ping the IP, because we first have to add it to the cluster, as is done here.

## As root:
/opt/crs/product/11203/crs/bin/appvipcfg create -network=1 -ip=10.97.242.40 -vipname=oradb-gg-localp1.dc-lasvegas.us -user=root

## Once that is in place , grant permissions to Oracle user to work with the vip address:
(As root, allow the Oracle Grid infrastructure software owner (e.g. Oracle) to run the script to start the VIP.)

/opt/crs/product/11203/crs/bin/crsctl setperm resource oradb-gg-localp1.dc-lasvegas.us -u user:oracle:r-x

## Now it is time to start  the Vip:
## As Oracle, start the VIP:
/opt/crs/product/11203/crs/bin/crsctl start resource oradb-gg-localp1.dc-lasvegas.us

##Check our activities:
## As Oracle:

/opt/crs/product/11203/crs/bin/crsctl status resource oradb-gg-localp1.dc-lasvegas.us -p

## In my setup Goldengate is defined to run on either node 1 (usapb1hr) or node 2 (usapb2hr) of my four-node cluster. And since I want to make sure it only runs on those two servers, I set the placement to restricted.
## As root:

/opt/crs/product/11203/crs/bin/crsctl modify resource oradb-gg-localp1.dc-lasvegas.us -attr "HOSTING_MEMBERS=usapb1hr usapb2hr"
/opt/crs/product/11203/crs/bin/crsctl modify resource oradb-gg-localp1.dc-lasvegas.us -attr "PLACEMENT=restricted"

## As always the taste of the crème brûlée is in the details, so let's check:
## As Oracle:
/opt/crs/product/11203/crs/bin/crsctl status resource oradb-gg-localp1.dc-lasvegas.us -p

## Great, that worked. Now let's relocate the VIP to the other node as a test:
## As Oracle:

/opt/crs/product/11203/crs/bin/crsctl relocate resource oradb-gg-localp1.dc-lasvegas.us

## Completed the action with a smile, because it worked as planned.

## As always the taste of the crème brûlée is in the details, so let's check:
## As Oracle:
/opt/crs/product/11203/crs/bin/crsctl status resource oradb-gg-localp1.dc-lasvegas.us -p

## As part of making sure that the setup from scratch is the same on all machines (we had the same solution in the pre-prod environment), let us first remove the existing resource for Goldengate and then add it to the GI again.

/opt/crs/product/11203/crs/bin/crsctl delete resource myGoldengate

## As oracle (the white paper was very specific about that; I performed it as root the first time and ended up with the wrong primary group in the ACL, which I checked in the end). So stick to the plan and do this as ORACLE. Add the resource to the GI, put it in a relationship to the VIP address that was created in the GI earlier, AND inform the cluster about the action script that is to be used during a relocate, server boot or node crash. (This script is in my case a shell script holding conditions like start, stop, check, etc. and the corresponding Goldengate commands to be used by the GI:

/opt/crs/product/11203/crs/bin/crsctl add resource myGoldengate -type cluster_resource -attr "ACTION_SCRIPT=/opt/crs/product/11203/crs/crs/public/gg.local.active, CHECK_INTERVAL=30, START_DEPENDENCIES='hard(oradb-gg-localp1.dc-lasvegas.us) pullup(oradb-gg-localp1.dc-lasvegas.us)', STOP_DEPENDENCIES='hard(oradb-gg-localp1.dc-lasvegas.us)'"
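## For completeness, a bare-bones sketch of what such an action script can look like. This is NOT the actual gg.local.active used on this cluster; GG_HOME and the manager check are assumptions for illustration only.

#!/bin/bash
## minimal action script sketch for the myGoldengate cluster resource
GG_HOME=/opt/goldengate                       ## assumption: Goldengate installation directory
case "$1" in
  start)
    cd ${GG_HOME} && echo "start mgr" | ./ggsci
    ;;
  stop|clean|abort)
    cd ${GG_HOME} && echo "stop mgr!" | ./ggsci
    ;;
  check)
    ## return 0 when the manager process is running, 1 otherwise
    if ps -ef | grep "${GG_HOME}/dirprm/mgr.prm" | grep -v grep > /dev/null; then
      exit 0
    else
      exit 1
    fi
    ;;
esac
exit 0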

## Altering hosting members and placement again (by default only one node is part of HOSTING_MEMBERS and PLACEMENT=balanced).

## As root:
/opt/crs/product/11203/crs/bin/crsctl modify resource myGoldengate -attr "HOSTING_MEMBERS=usapb1hr usapb2hr"

/opt/crs/product/11203/crs/bin/crsctl modify resource myGoldengate -attr "PLACEMENT=restricted"

## so in the end you should check it with this:

/opt/crs/product/11203/crs/crs/public [CRS]# crsctl status resource myGoldengate -p

## Time to set permissions on myGoldengate (altering ownership to the myGoldengate user, which is my OS user for this).
### As root:
/opt/crs/product/11203/crs/bin/crsctl setperm resource myGoldengate -o myGoldengate

## Needed and sometimes forgotten, since the oracle user is also the owner of the Grid Infrastructure software on these boxes.
### As root, allow oracle to run the script to start the Goldengate application.
/opt/crs/product/11203/crs/bin/crsctl setperm resource myGoldengate -u user:oracle:r-x

 

Wrap-up:

All preparations are now in place. During an already scheduled maintenance window the following steps will be performed to bring this scenario to an HA solution for Goldengate (the matching crsctl calls are sketched after the list).

  • Stop the Goldengate software daemons (at the moment stopped and started by hand).
  • Start the Goldengate resource via the Grid Infra (giving it control of status and activities).
  • Check that Goldengate is starting its activities.
  • Perform a relocate of the Goldengate resource via the Grid Infra to the other node.
  • Check again that Goldengate is starting its activities.
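Roughly the commands that go with those steps (a sketch, run as oracle; the -f on the relocate also moves the dependent VIP):

/opt/crs/product/11203/crs/bin/crsctl start resource myGoldengate
/opt/crs/product/11203/crs/bin/crsctl status resource myGoldengate
/opt/crs/product/11203/crs/bin/crsctl relocate resource myGoldengate -f
/opt/crs/product/11203/crs/bin/crsctl status resource myGoldengate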

As an old quote states: success just loves preparation. With these preparations in place I feel confident about the maintenance window to put this solution live.

 

As always, happy reading and till next time,

 

Mathijs