Tuning
Tuning impdp, generate less redo with transform=disable_archive_logging:Y

Every single DBA in this world knows how boring it is to load huge amounts of data into databases using dump files; it is more boring than watching grass grow. A huge part of that processing time is due to redo/archivelog generation.
 
In Oracle 12cR1, Oracle introduced this very nice feature that allows DBAs to import data bypassing redo log generation, i.e. in NOLOGGING mode. Note that if your database is running with the force logging option, the transform will not take effect, and it is strongly recommended to take a physical backup as soon as the import of the database is finished.
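Before relying on this transform, it is worth confirming up front whether force logging is enabled; a minimal check against the standard v$database view:

SQL> select force_logging from v$database;

If this returns YES, the transform shown below will be silently ignored.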
 
Let's set up our test environment. In this case I use 18c, but you can use 12cR1 as well to execute this procedure.
SQL> create directory home as '/home/oracle' ;

Directory created.

SQL> create user c##mufalani identified by welcome1;

User created.

SQL> grant dba to c##mufalani;

Grant succeeded.
Now, log in as this user, create a table, and load some test data into it.
-- Create some data in one table
SQL> conn c##mufalani/welcome1
Connected.
SQL>
SQL>
SQL> create table test as select * from cdb_objects;

Table created.

SQL> insert into test (select * from test);

72897 rows created.

SQL> /

145794 rows created.

SQL> /

291588 rows created.

SQL> /

583176 rows created.

SQL> /

1166352 rows created.

SQL> commit;

Commit complete.

SQL>
-- Let's check the size of the table (segment)
SQL> col SEGMENT_NAME form a12
SQL> select bytes/1024/1024 mb, segment_name, segment_type from user_segments;

        MB SEGMENT_NAME SEGMENT_TYPE
---------- ------------ ------------------
       375 TEST         TABLE
-- Now, we generate the dump file that we will load back in order to measure redo generation
[oracle@ora18c3 ~]$ expdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp logfile=mufalani.log schemas="C##MUFALANI"

Export: Release 18.0.0.0.0 - Production on Sun Jul 7 11:52:38 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Starting "SYS"."SYS_EXPORT_SCHEMA_01": "/******** AS SYSDBA" directory=home dumpfile=mufalani.dmp logfile=mufalani.log schemas=C##MUFALANI
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "C##MUFALANI"."TEST" 323.9 MB 2332704 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
/home/oracle/mufalani.dmp
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Sun Jul 7 11:53:32 2019 elapsed 0 00:00:49
-- Verifying the size of the dumpfile
[oracle@ora18c3 ~]$ ls -ltrh mufalani.*
-rw-r-----. 1 oracle dba 325M Jul 7 11:53 mufalani.dmp
-rw-r--r--. 1 oracle dba 1.3K Jul 7 11:53 mufalani.log
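As an aside, if you want to inspect exactly what a dump file contains before importing it, Data Pump can write the DDL to a script instead of executing it, via the SQLFILE parameter (ddl_preview.sql below is just an illustrative file name):

[oracle@ora18c3 ~]$ impdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp sqlfile=ddl_preview.sql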
-- Now we check the current value of 'redo size' in v$sysstat before the import, to compare it after the import
SQL> select name, value/1024/1024 mb from v$sysstat where name ='redo size' ;

NAME                     MB
---------------- ----------
redo size            837.06
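On a busy system, other sessions also contribute to v$sysstat, so this delta is approximate. If you want to script the comparison, one convenient trick is SQL*Plus's COLUMN ... NEW_VALUE, which captures the baseline into a substitution variable; a minimal sketch (redo_before is just a variable name I chose):

SQL> column mb new_value redo_before
SQL> select value/1024/1024 mb from v$sysstat where name = 'redo size';
-- ... run the impdp job ...
SQL> select value/1024/1024 - &redo_before delta_mb from v$sysstat where name = 'redo size';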
-- Then run the import in the regular fashion and see how much redo is generated
[oracle@ora18c3 ~]$ impdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas="C##MUFALANI"

Import: Release 18.0.0.0.0 - Production on Sun Jul 7 12:01:12 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Master table "SYS"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_SCHEMA_01": "/******** AS SYSDBA" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas=C##MUFALANI
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "C##MUFALANI"."TEST" 323.9 MB 2332704 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_SCHEMA_01" successfully completed at Sun Jul 7 12:01:49 2019 elapsed 0 00:00:36
-- Check the redo generated again
SQL> select name, value/1024/1024 mb from v$sysstat where name ='redo size' ;

NAME                     MB
---------------- ----------
redo size           1215.03
As we can see, importing this table in regular mode made Oracle generate about 378 MB of redo (1215.03 - 837.06 = 377.97 MB). Let's drop the table and repeat the import using the feature to cut redo generation.
SQL> drop table c##mufalani.test purge;

Table dropped.
 
Then I will run the import again using the parameter transform=disable_archive_logging:Y and check the redo generation afterwards.
[oracle@ora18c3 ~]$ impdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas="C##MUFALANI" transform=disable_archive_logging:Y

Import: Release 18.0.0.0.0 - Production on Sun Jul 7 12:14:56 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Master table "SYS"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_SCHEMA_01": "/******** AS SYSDBA" directory=home dumpfile=mufalani.dmp logfile=imp_no_transform.log schemas=C##MUFALANI transform=disable_archive_logging:Y
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "C##MUFALANI"."TEST" 323.9 MB 2332704 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_SCHEMA_01" successfully completed at Sun Jul 7 12:15:06 2019 elapsed 0 00:00:09
-- Check the redo size again
SQL> select name, value/1024/1024 mb from v$sysstat where name ='redo size' ;

NAME                     MB
---------------- ----------
redo size           1218.01
As you can verify, the import time dropped from 36s to 9s, and the redo generated was only about 3 MB (1218.01 - 1215.03 = 2.98 MB); very cool. Just to reinforce the idea: my database is in archivelog mode, but not in force logging. This reduction in redo log generation happens because Oracle executes the import in NOLOGGING mode; under force logging, Oracle will not bypass redo log generation, so the time and effort to execute the import will increase.
SQL> select log_mode, FORCE_LOGGING, name from v$database;

LOG_MODE     FORCE_LOGGING                           NAME
------------ --------------------------------------- ---------
ARCHIVELOG   NO                                      DB01
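If FORCE_LOGGING had returned YES here, the transform would have been ignored. In an environment where it is acceptable (for example, no standby database depends on the redo stream), force logging could be switched off just for the duration of the load; a sketch:

SQL> alter database no force logging;
-- ... run the impdp with transform=disable_archive_logging:Y ...
SQL> alter database force logging;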
-- Let's check the logging mode of the table after the import is finished. As you can see, it is LOGGING, the normal mode (generates redo).
SQL> COL OWNER FORM A12
SQL> COL TABLE_NAME FORM A12
SQL> COL LOGGING FORM A10
SQL> select owner, table_name, logging from dba_tables where owner='C##MUFALANI' ;

OWNER        TABLE_NAME   LOGGING
------------ ------------ ----------
C##MUFALANI  TEST         YES
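By the way, the transform accepts an optional object-type qualifier, so the NOLOGGING behaviour can be restricted, for example, to table data only (per the Data Pump documentation, TABLE and INDEX are the accepted object types):

[oracle@ora18c3 ~]$ impdp \"/as sysdba\" directory=home dumpfile=mufalani.dmp schemas="C##MUFALANI" transform=disable_archive_logging:Y:TABLE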
This is a very useful feature for loading huge amounts of data into the database. However, importing data with this transform parameter leads to unrecoverable transactions, and you must execute a full backup right after the import finishes to avoid corrupt blocks in case of a recovery (I will discuss this situation in another article, soon).
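On the backup point, a minimal follow-up with RMAN right after the job completes could be a backup of the whole database (or at least of the tablespaces that received the data), for example:

[oracle@ora18c3 ~]$ rman target /
RMAN> backup database plus archivelog;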
I hope this feature, introduced in 12.1, can help you speed up your imports.
All the best,
Rodrigo Mufalani