
How To Partition an existing table using Oracle Datapump

An existing non-partitioned table may need to be partitioned to improve its performance and manageability.


Need for Table Partitioning:

1. Improve maintainability (for example, dropping or truncating a single partition instead of deleting rows).
2. Improve the performance of both DML and SELECT operations through partition pruning.


Methods for Partitioning an existing table:

There are various methods for partitioning an existing table:
1. DBMS_REDEFINITION (online redefinition)
2. Export/Import (Data Pump)

This post covers the Export/Import method.


Export and Import is one of the oldest techniques for partitioning a table. The steps are simple; let's walk through them with an example.

Situation:

There is a huge table, BIG_TABLE, that receives frequent updates, inserts, and direct reads.
As the table grows, performance degrades, because every update or query has to scan the whole table even when only a small subset of the data is needed.

Partitioning breaks the huge table into chunks, so queries can read only the relevant chunks instead of the whole table:

Step 1: Export the huge table (let's say it is about 30GB):
           expdp apps/apps directory=EXP_IMP dumpfile=big_table.dmp logfile=big_table_exp.log tables=big_table
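Note that the DIRECTORY parameter refers to a database directory object, not an OS path. If EXP_IMP does not already exist, it can be created along these lines (the filesystem path here is an example; adjust it for your system):

```sql
-- Create the directory object referenced by the expdp/impdp commands.
-- /u01/exp_imp is an example path; it must exist and be writable by the Oracle OS user.
CREATE OR REPLACE DIRECTORY exp_imp AS '/u01/exp_imp';
GRANT READ, WRITE ON DIRECTORY exp_imp TO apps;
```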


Step 2: Make a note of BIG_TABLE's structure, as we need to recreate it.
          The structure includes the constraints, grants, indexes built upon it, etc.
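One convenient way to capture the structure is DBMS_METADATA, which generates the DDL for the table and its dependent objects:

```sql
-- Capture the DDL for the table, its indexes, and its grants before dropping it.
SET LONG 100000 PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('TABLE', 'BIG_TABLE', 'APPS') FROM dual;
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('INDEX', 'BIG_TABLE', 'APPS') FROM dual;
SELECT DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT', 'BIG_TABLE', 'APPS') FROM dual;
```

Save this output; you will need it in steps 4 and 5.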

Step 3: Drop the table BIG_TABLE:
            drop table apps.big_table cascade constraints;

Step 4: Recreate the table with the required partitioning strategy (single-level or composite partitioning).
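For example, with a hypothetical column list (the real DDL comes from step 2), a range-partitioned version of BIG_TABLE might look like:

```sql
-- Hypothetical structure: BIG_TABLE range-partitioned on a date column.
-- Use the actual columns and constraints captured in step 2.
CREATE TABLE apps.big_table (
  id          NUMBER,
  created_on  DATE,
  payload     VARCHAR2(4000)
)
PARTITION BY RANGE (created_on) (
  PARTITION p2022 VALUES LESS THAN (DATE '2023-01-01'),
  PARTITION p2023 VALUES LESS THAN (DATE '2024-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```

Choose the partition key based on how the table is actually queried; pruning only helps queries that filter on that key.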

Step 5: Recreate all the indexes that were built on top of BIG_TABLE.
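When recreating the indexes, consider making them LOCAL so that each index partition aligns with a table partition (index name and column here are illustrative):

```sql
-- A local index is equipartitioned with the table, which simplifies
-- partition maintenance (dropping a partition drops its index partition too).
CREATE INDEX apps.big_table_created_idx ON apps.big_table (created_on) LOCAL;
```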

Step 6: Import the table BIG_TABLE from the dump:
             impdp apps/apps directory=EXP_IMP dumpfile=big_table.dmp logfile=big_table_imp.log tables=big_table table_exists_action=APPEND
   
 The import will complete with a warning stating that the table already exists. This can be ignored, since we created the table deliberately in step 4.

Note:
If TABLE_EXISTS_ACTION=APPEND is not given, Data Pump falls back to the default TABLE_EXISTS_ACTION=SKIP: the existing table is skipped and no rows are loaded.
APPEND instructs Data Pump to insert the data even though the object already exists.
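After the import, it is worth verifying that the rows landed and seeing how they are spread across the partitions:

```sql
-- Gather optimizer statistics, then check row counts per partition.
EXEC DBMS_STATS.GATHER_TABLE_STATS('APPS', 'BIG_TABLE');

SELECT partition_name, num_rows
FROM   dba_tab_partitions
WHERE  table_owner = 'APPS'
AND    table_name  = 'BIG_TABLE'
ORDER  BY partition_position;
```

The total of NUM_ROWS across partitions should match the row count reported in the import log.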

For more information on the Export/Import utility, see:

http://www.oracle-base.com/articles/10g/OracleDataPump10g.php

After partitioning BIG_TABLE you should see a significant performance improvement for both DML and read operations, especially for statements that filter on the partition key and can therefore prune to a few partitions.

Best practices:
1. Never run this procedure directly in your production environment.
2. Test in a DEV/TEST environment with roughly the same volume of data.
3. Note down the time taken; this helps in planning the change window for production.
4. Measure performance before and after partitioning.
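To check that partition pruning is actually happening, examine the execution plan of a typical query (dates and columns here follow the hypothetical example above):

```sql
-- A query that filters on the partition key should scan a single partition.
EXPLAIN PLAN FOR
SELECT COUNT(*)
FROM   apps.big_table
WHERE  created_on >= DATE '2023-06-01'
AND    created_on <  DATE '2023-07-01';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- The plan should show a PARTITION RANGE SINGLE step rather than a scan of all partitions.
```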





