

ZDLRA, Real-Time REDO and RPO ZERO
The idea of Real-Time Redo is to reach zero RPO for every kind of database, with or without Data Guard (DG). As you can see in my last post, where I showed how to configure Real-Time Redo for one database, a few small steps need to be executed, and they are pretty similar to configuring a remote archivelog destination for DG.
But if you noticed, the remote destination was defined as ASYNC, and that is what the ZDLRA docs hint at (“Protection of Ongoing Transactions” and “How Real-Time Redo Transport Works”). In the same post I called this “controversial”, because ASYNC does not guarantee zero RPO.
You can see more in the Data Guard docs (Oracle Data Guard Protection Modes and Oracle Data Guard Concepts and Administration), but the summary is (an illustrative sketch follows the list):
ASYNC: The primary database does not wait for the response from the remote destination.
SYNC/NOAFFIRM: The primary database holds the commit until the remote destination reports that it received the redo data. It does not wait until the remote site reports that the data was written to disk.
SYNC/AFFIRM: The primary database holds the commit until the remote destination reports that it received the redo data and wrote it to disk.
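For illustration, the three modes differ only in the transport attributes of the LOG_ARCHIVE_DEST_n string. A minimal sketch (the service name here is a placeholder, not from this environment):
-- ASYNC: commit does not wait for the remote destination
LOG_ARCHIVE_DEST_2='SERVICE="myra-scan:1521/myra" ASYNC NOAFFIRM ...'
-- SYNC/NOAFFIRM: commit waits for the remote destination to receive the redo (in memory)
LOG_ARCHIVE_DEST_2='SERVICE="myra-scan:1521/myra" SYNC NOAFFIRM ...'
-- SYNC/AFFIRM: commit waits for the redo to be received AND written to disk remotely
LOG_ARCHIVE_DEST_2='SERVICE="myra-scan:1521/myra" SYNC AFFIRM ...'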
You can read about the differences in more detail here: Best Practices for Synchronous Redo Transport and Best Practices for Asynchronous Redo Transport.
The idea is simple: if you use ASYNC, there is no guarantee of zero data loss between the primary database and the remote destination.
ZDLRA Protection
The configuration to reach zero RPO with ZDLRA is simple: you just need to put the archive destination in SYNC mode. Simple as that. But before putting the archivelog destination in SYNC, I will do an example using ASYNC to show some features of real-time redo.
Saving crashes
One interesting point is that ZDLRA generates a new archivelog for you in case of a primary database crash. As the documentation says:
If the redo stream terminates unexpectedly, then the Recovery Appliance can close the incoming redo stream and create a partial archived redo log file backup, thereby protecting transactions up to the last change that the appliance received.
So, even without DG or SYNC, you are protected up to the last transaction. In real life it looks like this (I will simulate a catastrophic failure: check the current archivelog, kill the instance processes, and at the end check how far the protection went):
Current archivelog:
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 29
Next log sequence to archive 31
Current log sequence 31
SQL>
SQL> ALTER SYSTEM ARCHIVE LOG current;
System altered.
SQL>
SQL> show parameter dest_2;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_online_log_dest_2 string
log_archive_dest_2 string SERVICE="zeros1ig-scan.simo.fz
net.org:1521/zeros1:dedicated"
ASYNC NOAFFIRM delay=0 option
al compression=disable max_fai
lure=1 max_connections=1 reope
n=10 DB_UNIQUE_NAME=zeros1 net
_timeout=8 group=1 priority=1
VALID_FOR=(ONLINE_LOGFILE,ALL_
ROLES)
log_archive_dest_20 string
…
…
Backups of archivelog (look at the “Next Time”):
RMAN> list backup of archivelog sequence 31;
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
50958158 1.12M SBT_TAPE 00:00:00 2019-10-18_19-28-21
BP Key: 50958159 Status: AVAILABLE Compressed: YES Tag: TAG20191018T222821
Handle: $RSCN_1_RTIM_1022003674_THRD_1_SEQ_31_CTKEY_50958151_BACKUP Media:
List of Archived Logs in backup set 50958158
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 31 1025023 2019-10-18_19-27-40 1025761 2019-10-18_19-28-20
RMAN> list backup of archivelog sequence 32;
specification does not match any backup in the repository
RMAN>
Killing the instance:
[oracle@exac1vm01-ORAD18]$ ps -ef |grep pmon_ORAD18
oracle 31646 1 0 22:27 ? 00:00:00 ora_pmon_ORAD18
oracle 37116 396809 0 22:29 pts/2 00:00:00 grep pmon_ORAD18
[oracle@exac1vm01-ORAD18]$ ps -ef |grep smon_ORAD18
oracle 31731 1 0 22:27 ? 00:00:00 ora_smon_ORAD18
oracle 37213 396809 0 22:29 pts/2 00:00:00 grep smon_ORAD18
[oracle@exac1vm01-ORAD18]$ ps -ef |grep lgwr_ORAD18
oracle 31724 1 0 22:27 ? 00:00:00 ora_lgwr_ORAD18
oracle 37245 396809 0 22:29 pts/2 00:00:00 grep lgwr_ORAD18
[oracle@exac1vm01-ORAD18]$ ps -ef |grep dbw0_ORAD18
oracle 31722 1 0 22:27 ? 00:00:00 ora_dbw0_ORAD18
oracle 40104 396809 0 22:29 pts/2 00:00:00 grep dbw0_ORAD18
[oracle@exac1vm01-ORAD18]$ ps -ef |grep arc* |grep ORAD18
oracle 31805 1 0 22:27 ? 00:00:00 ora_mark_ORAD18
oracle 32021 1 0 22:27 ? 00:00:00 ora_arc0_ORAD18
oracle 32042 1 0 22:27 ? 00:00:00 ora_arc1_ORAD18
oracle 32050 1 0 22:27 ? 00:00:00 ora_arc2_ORAD18
oracle 32057 1 0 22:27 ? 00:00:00 ora_arc3_ORAD18
[oracle@exac1vm01-ORAD18]$
[oracle@exac1vm01-ORAD18]$
[oracle@exac1vm01-ORAD18]$
[oracle@exac1vm01-ORAD18]$ date
Fri Oct 18 22:30:32 CEST 2019
[oracle@exac1vm01-ORAD18]$ kill -9 31646 31731 31724 31722 32021 32042 32050 32057
[oracle@exac1vm01-ORAD18]$
In the alert log:
2019-10-18T22:30:33.160912+02:00
RMS0 (ospid: 31718): terminating the instance due to ORA error 472
Cause - 'Instance is being terminated due to fatal process PMON being dead.'
2019-10-18T22:30:33.290189+02:00
System state dump requested by (instance=1, osid=31718 (RMS0)), summary=[abnormal instance termination]. error - 'Instance is terminating.
'
System State dumped to trace file /u01/app/oracle/diag/rdbms/orad18/ORAD18/trace/ORAD18_diag_31692_20191018223033.trc
2019-10-18T22:30:34.378356+02:00
Dumping diagnostic data in directory=[cdmp_20191018223033], requested by (instance=1, osid=31718 (RMS0)), summary=[abnormal instance termination].
2019-10-18T22:30:35.642670+02:00
Instance terminated by RMS0, pid = 31718
Note above the time at which I killed the instance.
How Far It Went
[oracle@exac1vm01-ORAD18]$ export ORACLE_SID=SIMON
[oracle@exac1vm01-SIMON]$ rman target / catalog vpcusr/welcome1@zeros1ig-scan:1521/zeros1:dedicated
Recovery Manager: Release 18.0.0.0.0 - Production on Fri Oct 18 22:31:39 2019
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
connected to target database (not started)
connected to recovery catalog database
RMAN> startup force nomount;
startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/18.0.0/dbhome_1/dbs/initSIMON.ora'
starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started
Total System Global Area 1073740616 bytes
Fixed Size 8665928 bytes
Variable Size 281018368 bytes
Database Buffers 775946240 bytes
Redo Buffers 8110080 bytes
RMAN> set dbid 2464352672;
executing command: SET DBID
database name is "ORAD18" and DBID is 2464352672
RMAN> list backup of archivelog sequence 32;
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
50958220 72.50K SBT_TAPE 00:00:00 2019-10-18_22-30-42
BP Key: 50958221 Status: AVAILABLE Compressed: YES Tag: TAG20191018T223042
Handle: $RSCN_1_RTIM_1022003674_THRD_1_SEQ_32_CTKEY_50958156_BACKUP Media:
List of Archived Logs in backup set 50958220
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 32 1025761 2019-10-18_22-28-20 1026104 2019-10-18_22-30-32
RMAN>
Above you can see that I now have in the catalog one backup of archivelog sequence 32, and its “Next Time” is 22:30:32, the same moment that I killed the instance. So, ZDLRA protected the database up to the last point before the crash (even in ASYNC mode).
One little explanation: as you saw, I opened a new instance (called SIMON) to check the catalog. I wanted to avoid opening the same instance (ORAD18), because the partial archivelog generated by ZDLRA would be replaced once it saw that the database was back. This is explained in the doc:
If the protected database crashes, redo data received from the current redo log group until the time of the crash is backed up at the Recovery Appliance as a “partial” archived redo log. If the protected database is reopened, crash recovery of the protected database will complete the current redo log group at the time of the crash, and the completed redo log will be re-shipped to the Recovery Appliance through the automatic Data Guard Gap fetching feature. The “complete” archived redo log will be used in any future restore/recover operations instead of the previously backed up “partial” archived redo log.
Saving crashes, now with load
Using the example above, you could argue that ZDLRA reached RPO zero only because there was no load on the database. And, worse, that without SYNC you can’t guarantee zero data loss. Yes, I agree with you on both points (I already exposed this in my last post and in the introduction of this post). So now I will show you how to use ZDLRA with SYNC for the archivelog destination.
Sync mode
Putting the destination in SYNC mode; I just changed it to SYNC NOAFFIRM:
[oracle@exac1vm01-ORAD18]$ sqlplus / as sysdba
SQL*Plus: Release 18.0.0.0.0 - Production on Fri Oct 18 23:04:25 2019
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE="zeros1ig-scan.simo.fznet.org:1521/zeros1:dedicated" SYNC NOAFFIRM delay=0 optional compression=disable max_failure=1 max_connections=1 reopen=10 DB_UNIQUE_NAME=zeros1 net_timeout=8 group=1 priority=1 VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES)' scope = spfile sid = '*';
System altered.
SQL>
Current archivelog:
After bouncing the instance, I check the current archivelog and create a new one:
SQL> alter system archive log current;
System altered.
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 3
Next log sequence to archive 5
Current log sequence 5
SQL>
Backups of archivelog (look at the “Next Time”):
Check that no backup exists yet for the archivelog of the current redo log:
[oracle@exac1vm01-ORAD18]$ rman target / catalog vpcusr/welcome1@zeros1ig-scan:1521/zeros1:dedicated
Recovery Manager: Release 18.0.0.0.0 - Production on Fri Oct 18 23:05:35 2019
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORAD18 (DBID=2464352672)
connected to recovery catalog database
RMAN> list backup of archivelog sequence 4;
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
50958759 41.00K SBT_TAPE 00:00:00 2019-10-18_23-05-23
BP Key: 50958760 Status: AVAILABLE Compressed: YES Tag: TAG20191018T230523
Handle: $RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_4_CTKEY_50958752_BACKUP Media:
List of Archived Logs in backup set 50958759
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 4 1131626 2019-10-18_23-04-53 1131667 2019-10-18_23-05-09
RMAN> list backup of archivelog sequence 5;
specification does not match any backup in the repository
RMAN>
To simulate load, I created a table with 3 columns and started two sessions loading data and generating a huge number of commits. After every insert, I commit the data and show the current time. Look at the sessions:
[oracle@exac1vm01-ORAD18]$ for i in {1..100000}
> do
> echo "Insert Data $i - `date +%d-%m-%Y-%H%M%S`"
> sqlplus -s / as sysdba<<EOF
> set heading on feedback on;
> insert into testIns(c1, c2, c3) values ($i, sysdate, 'Loop');
> commit;
> EOF
> done
Insert Data 1 - 18-10-2019-230723
1 row created.
Commit complete.
Insert Data 2 - 18-10-2019-230723
1 row created.
Commit complete.
Insert Data 3 - 18-10-2019-230723
1 row created.
Commit complete.
Insert Data 4 - 18-10-2019-230724
1 row created.
Commit complete.
....
....
[oracle@exac1vm01-ORAD18]$
[oracle@exac1vm01-ORAD18]$ for i in {1..100000}
> do
> echo "Insert Data $i - `date +%d-%m-%Y-%H%M%S`"
> sqlplus -s / as sysdba<<EOF
> set heading on feedback on;
> insert into testIns(c1, c2, c3) values ($i, sysdate, 'Loop2');
> commit;
> EOF
> done
Insert Data 1 - 18-10-2019-230816
1 row created.
Commit complete.
Insert Data 2 - 18-10-2019-230816
1 row created.
Commit complete.
Insert Data 3 - 18-10-2019-230816
1 row created.
Commit complete.
...
...
Killing the instance
[oracle@exac1vm01-]$
[oracle@exac1vm01-]$
[oracle@exac1vm01-]$ ps -ef |grep pmon_ORAD18
oracle 93700 1 0 22:51 ? 00:00:00 ora_pmon_ORAD18
oracle 140193 357155 0 23:09 pts/1 00:00:00 grep pmon_ORAD18
[oracle@exac1vm01-]$ ps -ef |grep smon_ORAD18
oracle 93786 1 0 22:51 ? 00:00:00 ora_smon_ORAD18
oracle 140394 357155 0 23:09 pts/1 00:00:00 grep smon_ORAD18
[oracle@exac1vm01-]$ ps -ef |grep lgwr_ORAD18
oracle 93780 1 0 22:51 ? 00:00:00 ora_lgwr_ORAD18
oracle 140561 357155 0 23:09 pts/1 00:00:00 grep lgwr_ORAD18
[oracle@exac1vm01-]$ ps -ef |grep dbw0_ORAD18
oracle 93776 1 0 22:51 ? 00:00:00 ora_dbw0_ORAD18
oracle 140738 357155 0 23:09 pts/1 00:00:00 grep dbw0_ORAD18
[oracle@exac1vm01-]$ ps -ef |grep arc* |grep ORAD18
oracle 93835 1 0 22:51 ? 00:00:00 ora_mark_ORAD18
oracle 103814 1 0 22:55 ? 00:00:00 ora_arc0_ORAD18
oracle 103816 1 0 22:55 ? 00:00:00 ora_arc1_ORAD18
oracle 103818 1 0 22:55 ? 00:00:00 ora_arc2_ORAD18
oracle 103820 1 0 22:55 ? 00:00:00 ora_arc3_ORAD18
[oracle@exac1vm01-]$
[oracle@exac1vm01-]$ date
Fri Oct 18 23:09:44 CEST 2019
[oracle@exac1vm01-]$ kill -9 93700 93786 93780 93776 103814 103816 103818 103820
[oracle@exac1vm01-]$
Failed load
As expected, the two sessions failed:
...
...
Insert Data 1016 - 18-10-2019-230944
1 row created.
Commit complete.
Insert Data 1017 - 18-10-2019-230944
1 row created.
commit
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 142277
Session ID: 53 Serial number: 30197
...
...
Insert Data 646 - 18-10-2019-230944
1 row created.
Commit complete.
Insert Data 647 - 18-10-2019-230944
1 row created.
commit
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 142274
Session ID: 41 Serial number: 3186
Losing everything
So, the session loading the string “Loop” committed successfully up to value 1016 (the commit for 1017 failed), and the second session, loading the string “Loop2”, committed up to value 646 (the commit for 647 failed). These values are the marks we want to reach after recovery.
But to make it more precise and catastrophic, I removed the data from the filesystem/ASM too:
ASMCMD> cd +DATAC1/ORAD18/
ASMCMD> rm -rf CONTROLFILE/
ASMCMD> rm -rf DATAFILE/
ASMCMD> rm -rf ONLINELOG/
ASMCMD> rm -rf TEMPFILE/
ASMCMD> cd +RECOC1/ORAD18/
ASMCMD> rm -rf ARCHIVELOG/
ASMCMD> rm -rf AUTOBACKUP/
ASMCMD> rm -rf CONTROLFILE/
ASMCMD> rm -rf ONLINELOG/
ASMCMD>
Now there is no escape: I need to completely recover the database to retrieve it up to the last transaction. Let’s see if ZDLRA works 😉
Recovering Everything – ZERO RPO
Below I restore the database up to the last transaction. The major steps follow the same path as any full disaster recovery (condensed into a sketch right after this list):
Restore Controlfile
Discover the last backup of archivelog
Restore and recover the database up to the SCN found
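Condensed into a single hedged RMAN sketch (the tag and SCN values come from my run below; adjust them for your case):
# rman target / catalog vpcusr/<password>@<catalog-service>
startup nomount;
restore controlfile from tag = TAG20191018T230314;    # step 1
alter database mount;
# step 2: list the archivelog backups and note the highest "Next SCN"
list backup of archivelog all completed after "sysdate - 15/1440";
run {
  set until scn 1135762;                              # step 3
  restore database;
  recover database;
}
alter database open resetlogs;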
Here is how it went:
[oracle@exac1vm01-ORAD18]$ rman target / catalog vpcusr/welcome1@zeros1ig-scan:1521/zeros1:dedicated
Recovery Manager: Release 18.0.0.0.0 - Production on Fri Oct 18 23:12:19 2019
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
connected to target database (not started)
connected to recovery catalog database
RMAN> startup nomount;
Oracle instance started
Total System Global Area 2147480376 bytes
Fixed Size 8659768 bytes
Variable Size 671088640 bytes
Database Buffers 1459617792 bytes
Redo Buffers 8114176 bytes
RMAN> list backup of controlfile completed after "sysdate - 10/1440";
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
50958685 Full 50.25M SBT_TAPE 00:00:01 2019-10-18_23-03-15
BP Key: 50958686 Status: AVAILABLE Compressed: NO Tag: TAG20191018T230314
Handle: c-2464352672-20191018-0b Media: Recovery Appliance (ZDLRAK)
Control File Included: Ckp SCN: 1131375 Ckp time: 2019-10-18_23-03-14
RMAN> restore controlfile from tag = TAG20191018T230314;
Starting restore at 2019-10-18_23-13-24
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=30 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=414 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: RA Library (ZDLRAK) SID=95344A4C604F5404E0538C43B20A52E2
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: restoring control file
channel ORA_SBT_TAPE_1: reading from backup piece c-2464352672-20191018-0b
channel ORA_SBT_TAPE_1: piece handle=c-2464352672-20191018-0b tag=TAG20191018T230314
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:02
output file name=+DATAC1/ORAD18/CONTROLFILE/current.741.1022012007
output file name=+RECOC1/ORAD18/CONTROLFILE/current.968.1022012009
Finished restore at 2019-10-18_23-13-28
RMAN> list backup of database completed after "sysdate - 15/1440";
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
50958689 Incr 1 40.00K SBT_TAPE 00:00:01 2019-10-18_23-03-11
BP Key: 50958690 Status: AVAILABLE Compressed: YES Tag: BKP-LEVEL1
Handle: VB$_1965521110_50958684I Media:
List of Datafiles in backup set 50958689
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- ------------------- ----------- ------ ----
1 1 Incr 1131350 2019-10-18_23-03-10 NO +DATAC1/ORAD18/DATAFILE/system.894.1022011045
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
50958693 Incr 0 249.44M SBT_TAPE 00:00:01 2019-10-18_23-03-11
BP Key: 50958694 Status: AVAILABLE Compressed: YES Tag: BKP-LEVEL1
Handle: VB$_1965521110_50958684_1 Media:
List of Datafiles in backup set 50958693
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- ------------------- ----------- ------ ----
1 0 Incr 1131350 2019-10-18_23-03-10 NO +DATAC1/ORAD18/DATAFILE/system.894.1022011045
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
50958710 Incr 1 40.00K SBT_TAPE 00:00:01 2019-10-18_23-03-10
BP Key: 50958711 Status: AVAILABLE Compressed: YES Tag: BKP-LEVEL1
Handle: VB$_1965521110_50958707I Media:
List of Datafiles in backup set 50958710
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- ------------------- ----------- ------ ----
3 1 Incr 1131348 2019-10-18_23-03-09 NO +DATAC1/ORAD18/DATAFILE/undotbs1.496.1022011015
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
50958723 Incr 0 10.68M SBT_TAPE 00:00:01 2019-10-18_23-03-10
BP Key: 50958724 Status: AVAILABLE Compressed: YES Tag: BKP-LEVEL1
Handle: VB$_1965521110_50958707_3 Media:
List of Datafiles in backup set 50958723
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- ------------------- ----------- ------ ----
3 0 Incr 1131348 2019-10-18_23-03-09 NO +DATAC1/ORAD18/DATAFILE/undotbs1.496.1022011015
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
50958727 Incr 1 40.00K SBT_TAPE 00:00:01 2019-10-18_23-03-13
BP Key: 50958728 Status: AVAILABLE Compressed: YES Tag: BKP-LEVEL1
Handle: VB$_1965521110_50958714I Media:
List of Datafiles in backup set 50958727
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- ------------------- ----------- ------ ----
4 1 Incr 1131357 2019-10-18_23-03-12 NO +DATAC1/ORAD18/DATAFILE/users.353.1022011029
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
50958731 Incr 0 40.00K SBT_TAPE 00:00:01 2019-10-18_23-03-13
BP Key: 50958732 Status: AVAILABLE Compressed: YES Tag: BKP-LEVEL1
Handle: VB$_1965521110_50958714_4 Media:
List of Datafiles in backup set 50958731
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- ------------------- ----------- ------ ----
4 0 Incr 1131357 2019-10-18_23-03-12 NO +DATAC1/ORAD18/DATAFILE/users.353.1022011029
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
50958736 Incr 1 40.00K SBT_TAPE 00:00:01 2019-10-18_23-03-12
BP Key: 50958737 Status: AVAILABLE Compressed: YES Tag: BKP-LEVEL1
Handle: VB$_1965521110_50958735I Media:
List of Datafiles in backup set 50958736
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- ------------------- ----------- ------ ----
2 1 Incr 1131353 2019-10-18_23-03-11 NO +DATAC1/ORAD18/DATAFILE/sysaux.921.1022010999
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
50958740 Incr 0 93.84M SBT_TAPE 00:00:01 2019-10-18_23-03-12
BP Key: 50958741 Status: AVAILABLE Compressed: YES Tag: BKP-LEVEL1
Handle: VB$_1965521110_50958735_2 Media:
List of Datafiles in backup set 50958740
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- ------------------- ----------- ------ ----
2 0 Incr 1131353 2019-10-18_23-03-11 NO +DATAC1/ORAD18/DATAFILE/sysaux.921.1022010999
RMAN> list backup of archivelog all completed after "sysdate - 15/1440";
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
50958604 2.00M SBT_TAPE 00:00:00 2019-10-18_23-02-52
BP Key: 50958605 Status: AVAILABLE Compressed: NO Tag: BKP-ARCH
Handle: 14uel9vc_1_1 Media: Recovery Appliance (ZDLRAK)
List of Archived Logs in backup set 50958604
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 2 1129806 2019-10-18_22-58-29 1131219 2019-10-18_23-02-50
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
50958618 1.15M SBT_TAPE 00:00:00 2019-10-18_23-02-54
BP Key: 50958619 Status: AVAILABLE Compressed: YES Tag: TAG20191018T230254
Handle: $RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_2_CTKEY_50958445_BACKUP Media:
List of Archived Logs in backup set 50958618
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 2 1129806 2019-10-18_22-58-29 1131219 2019-10-18_23-02-50
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
50958754 67.50K SBT_TAPE 00:00:00 2019-10-18_23-05-00
BP Key: 50958755 Status: AVAILABLE Compressed: YES Tag: TAG20191018T230500
Handle: $RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_3_CTKEY_50958595_BACKUP Media:
List of Archived Logs in backup set 50958754
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 3 1131219 2019-10-18_23-02-50 1131626 2019-10-18_23-04-53
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
50958759 41.00K SBT_TAPE 00:00:00 2019-10-18_23-05-23
BP Key: 50958760 Status: AVAILABLE Compressed: YES Tag: TAG20191018T230523
Handle: $RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_4_CTKEY_50958752_BACKUP Media:
List of Archived Logs in backup set 50958759
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 4 1131626 2019-10-18_23-04-53 1131667 2019-10-18_23-05-09
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
50958809 2.05M SBT_TAPE 00:00:00 2019-10-18_23-10-12
BP Key: 50958810 Status: AVAILABLE Compressed: YES Tag: TAG20191018T231012
Handle: $RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_5_CTKEY_50958757_BACKUP Media:
List of Archived Logs in backup set 50958809
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 5 1131667 2019-10-18_23-05-09 1135762 2019-10-18_23-09-44
RMAN>
RMAN> alter database mount;
released channel: ORA_DISK_1
released channel: ORA_SBT_TAPE_1
Statement processed
RMAN>
RMAN> run{
2> set until scn 1135762;
3> restore database;
4> recover database;
5> }
executing command: SET until clause
Starting restore at 2019-10-18_23-16-04
Starting implicit crosscheck backup at 2019-10-18_23-16-04
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
Crosschecked 1 objects
Finished implicit crosscheck backup at 2019-10-18_23-16-05
Starting implicit crosscheck copy at 2019-10-18_23-16-05
using channel ORA_DISK_1
Finished implicit crosscheck copy at 2019-10-18_23-16-05
searching for all files in the recovery area
cataloging files...
no files cataloged
using channel ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=419 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: RA Library (ZDLRAK) SID=953453F453376CA6E0538C43B20A62A0
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00001 to +DATAC1/ORAD18/DATAFILE/system.894.1022011045
channel ORA_SBT_TAPE_1: reading from backup piece VB$_1965521110_50958684_1
channel ORA_SBT_TAPE_1: piece handle=VB$_1965521110_50958684_1 tag=BKP-LEVEL1
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:15
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00003 to +DATAC1/ORAD18/DATAFILE/undotbs1.496.1022011015
channel ORA_SBT_TAPE_1: reading from backup piece VB$_1965521110_50958707_3
channel ORA_SBT_TAPE_1: piece handle=VB$_1965521110_50958707_3 tag=BKP-LEVEL1
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:15
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00004 to +DATAC1/ORAD18/DATAFILE/users.353.1022011029
channel ORA_SBT_TAPE_1: reading from backup piece VB$_1965521110_50958714_4
channel ORA_SBT_TAPE_1: piece handle=VB$_1965521110_50958714_4 tag=BKP-LEVEL1
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:15
channel ORA_SBT_TAPE_1: starting datafile backup set restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
channel ORA_SBT_TAPE_1: restoring datafile 00002 to +DATAC1/ORAD18/DATAFILE/sysaux.921.1022010999
channel ORA_SBT_TAPE_1: reading from backup piece VB$_1965521110_50958735_2
channel ORA_SBT_TAPE_1: piece handle=VB$_1965521110_50958735_2 tag=BKP-LEVEL1
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:15
Finished restore at 2019-10-18_23-17-11
Starting recover at 2019-10-18_23-17-12
using channel ORA_DISK_1
using channel ORA_SBT_TAPE_1
starting media recovery
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=3
channel ORA_SBT_TAPE_1: reading from backup piece $RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_3_CTKEY_50958595_BACKUP
channel ORA_SBT_TAPE_1: piece handle=$RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_3_CTKEY_50958595_BACKUP tag=TAG20191018T230500
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
archived log file name=+RECOC1/ORAD18/ARCHIVELOG/2019_10_18/thread_1_seq_3.967.1022012235 thread=1 sequence=3
channel default: deleting archived log(s)
archived log file name=+RECOC1/ORAD18/ARCHIVELOG/2019_10_18/thread_1_seq_3.967.1022012235 RECID=33 STAMP=1022012236
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=4
channel ORA_SBT_TAPE_1: reading from backup piece $RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_4_CTKEY_50958752_BACKUP
channel ORA_SBT_TAPE_1: piece handle=$RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_4_CTKEY_50958752_BACKUP tag=TAG20191018T230523
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:02
archived log file name=+RECOC1/ORAD18/ARCHIVELOG/2019_10_18/thread_1_seq_4.967.1022012237 thread=1 sequence=4
channel default: deleting archived log(s)
archived log file name=+RECOC1/ORAD18/ARCHIVELOG/2019_10_18/thread_1_seq_4.967.1022012237 RECID=34 STAMP=1022012237
channel ORA_SBT_TAPE_1: starting archived log restore to default destination
channel ORA_SBT_TAPE_1: restoring archived log
archived log thread=1 sequence=5
channel ORA_SBT_TAPE_1: reading from backup piece $RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_5_CTKEY_50958757_BACKUP
channel ORA_SBT_TAPE_1: piece handle=$RSCN_1129803_RTIM_1022011106_THRD_1_SEQ_5_CTKEY_50958757_BACKUP tag=TAG20191018T231012
channel ORA_SBT_TAPE_1: restored backup piece 1
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:00:01
archived log file name=+RECOC1/ORAD18/ARCHIVELOG/2019_10_18/thread_1_seq_5.967.1022012239 thread=1 sequence=5
channel default: deleting archived log(s)
archived log file name=+RECOC1/ORAD18/ARCHIVELOG/2019_10_18/thread_1_seq_5.967.1022012239 RECID=35 STAMP=1022012240
media recovery complete, elapsed time: 00:00:01
Finished recover at 2019-10-18_23-17-21
RMAN> alter database open resetlogs;
Statement processed
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
RMAN> exit
Recovery Manager complete.
[oracle@exac1vm01-ORAD18]$
Some details in this log (a query sketch follows the list):
The archivelog sequence 5 was generated by ZDLRA. Compare with the previous output: it was not there when I listed the archivelog backups before the crash.
The archivelog “Next Time” was 23:09:44, the same moment the instance crashed.
I used the SCN to pick up exactly the last transaction covered by the redo.
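If you prefer not to read the LIST output by eye, the recovery catalog can return the until SCN directly. A sketch using the classic RC_BACKUP_REDOLOG catalog view (an alternative I did not use in the run above):
select max(next_change#) as until_scn
  from rc_backup_redolog
 where db_key = (select db_key from rc_database where name = 'ORAD18');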
Checking data
[oracle@exac1vm01-ORAD18]$ sqlplus / as sysdba
SQL*Plus: Release 18.0.0.0.0 - Production on Fri Oct 18 23:18:37 2019
Version 18.3.0.0.0
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
SQL>
SQL>
SQL> select count(*) from testIns group by c3;
COUNT(*)
----------
646
1016
SQL>
Look above: I have exactly the 646 and 1016 rows that I was inserting. Everything was protected and I had RPO zero.
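If you also want to confirm the last value committed by each session (a suggestion beyond the original check), include the loop tag and the maximum counter:
select c3, max(c1) as last_value, count(*) as rows_loaded
  from testIns
 group by c3;
For this run, it should show 1016 rows for “Loop” and 646 for “Loop2”, matching the marks from the failed sessions.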
Even in ASYNC
Besides the SYNC mode, I ran the same test in ASYNC mode, and you can see that even in ASYNC mode we reached RPO zero (with load running during the test). You can see in the txt file that all the transactions were protected.
ZDLRA and ZERO RPO
As you saw in the examples above, you can reach RPO zero with ZDLRA. As already told in my previous post (and in the Oracle docs), real-time redo is based on the DG remote redo transport.
But with ZDLRA you can have zero RPO even for databases without DG configured. You saw that ZDLRA protected the database even in the case of a catastrophic failure (losing all datafiles and redo). Everything was protected.
Think about how you would survive this failure in a traditional backup environment. Up to the last backup of archivelog? Up to the last sync of the storage that holds the archivelogs/redo? How much data can your company afford to lose?
Using SYNC with ZDLRA is fine, even though the docs hint at ASYNC to avoid overloading the primary database. I understand that they don’t want to overload the primary and make it wait for ZDLRA to write to disk. The usage of SYNC with NOAFFIRM was proven to work here. I don’t think SYNC/AFFIRM can improve the protection much further, because ZDLRA is not a full DG remote destination (there are no standby redo logs), so the write to disk can’t be synchronous anyway. The impact of SYNC/NOAFFIRM is almost zero, since it involves only a memory-to-memory copy (redo log buffers from the source to the remote destination).
Besides that, think about the kind of environment where you usually have ZDLRA: critical databases, right? And even the name of the product says ZERO DATA LOSS. So SYNC is the only way to truly guarantee that.
Now it is easy to understand why real-time redo is important. You can protect your databases (from the small ones to the critical ones) and reach RPO zero. And if you go further and follow the MAA best practices, you can reach both zero RPO and zero RTO.
Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community.”


ZDLRA, Real-Time Redo
Real-time redo transport is the feature that allows you to reduce the RPO (Recovery Point Objective) of your database to zero. Check below how to configure real-time redo: the steps, parameters, and other details that need to be modified to enable it.
The idea behind real-time redo transport is simple: basically, ZDLRA acts as a remote destination for the redo log buffers/archivelogs of your database. It is really, really similar to what happens in Data Guard configurations (but here you don’t need to maintain a standby copy of all the datafiles, for example). It is not exactly the same either, because ZDLRA can detect if the database stops/crashes and will generate the archivelog (on the ZDLRA side) with all the redo received, and this can be used to restore with zero (or sub-second) data loss.
Using real-time redo is the only way to reach RPO zero. With the other features of ZDLRA you can have a better backup window (but just that) using incremental backups; only with real-time redo do you reach zero RPO, and this directly impacts how you configure for MAA compliance. There are a lot of options and levels of protection for MAA that you can check at “Maximum Availability Architecture (MAA) – On-Premises HA Reference Architectures 2019”, “Maximum Availability Architecture Best Practices for Oracle Cloud”, “Oracle MAA Reference Architectures”, and “Maximum Availability Architecture – Best Practices for Oracle Database 19c”.
This post starts from an environment where you have already enrolled the database at ZDLRA. I already wrote about how to do that; you can check my previous post. This is the first post about real-time redo; here you will see how to configure it and verify that it is working.

Pre-Check
The way I configure real-time redo for a database is a little different from what is in the official docs. I added some pre-checks and reordered some steps, but the result is the same. If you want the official steps, check the Zero Data Loss Recovery Appliance Protected Database Configuration Guide or the Zero Data Loss Recovery Appliance Administrator’s Guide.
Just for context, the environment:
ZDLRA database name: zdlras1
Database name: orcl19
VPC user: cat_zdlra
Check Configuration
The first step is to verify the current configuration, mainly the preexistence of the wallet:
[oracle@orcloel7 ~]$ mkstore -wrl /u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/ra_wallet -listCredential
Oracle Secret Store Tool Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.
List credential (index: connect_string username)
1: zdlras1-scan:1521/zdlras1:CAT_ZDLRA cat_zdlra
[oracle@orcloel7 ~]$
[oracle@orcloel7 ~]$ cat $ORACLE_HOME/dbs/raORCL19.ora
RA_WALLET='LOCATION=file:/u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/ra_wallet CREDENTIAL_ALIAS=zdlras1-scan:1521/zdlras1:CAT_ZDLRA'
[oracle@orcloel7 ~]$
The most important thing here is the credential name (zdlras1-scan:1521/zdlras1:CAT_ZDLRA in this case), because it needs to match what the database config file that points to RA_WALLET uses. The config file exists per database/instance, has the name ra<DBNAME>.ora, and was already configured in the previous post mentioned before.
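For reference, if the wallet or the credential did not exist yet, they would be created roughly like this (a sketch; the password is a placeholder, and these steps were already done during the enrollment from the previous post):
mkstore -wrl /u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/ra_wallet -createALO
mkstore -wrl /u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/ra_wallet \
        -createCredential zdlras1-scan:1521/zdlras1:CAT_ZDLRA cat_zdlra <password>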
After that, I check that sqlnet.ora and tnsnames.ora have the wallet information and the TNS entry, respectively:
[oracle@orcloel7 ~]$ cat $ORACLE_HOME/network/admin/sqlnet.ora
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
SQLNET.WALLET_OVERRIDE = true
WALLET_LOCATION =
(
SOURCE =
(METHOD = FILE)
(METHOD_DATA =
(DIRECTORY = /u01/app/oracle/product/19.3.0.0/dbhome_1/dbs/ra_wallet)
)
)
[oracle@orcloel7 ~]$
[oracle@orcloel7 ~]$ tnsping zdlras1
TNS Ping Utility for Linux: Version 19.0.0.0.0 - Production on 01-OCT-2019 23:30:36
Copyright (c) 1997, 2019, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/19.3.0.0/dbhome_1/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = zdlras1-scan)(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = zdlras1)))
OK (0 msec)
[oracle@orcloel7 ~]$
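For reference, the tnsnames.ora entry behind that alias is just the description printed by tnsping above (reconstructed here, not copied from the file):
ZDLRAS1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = zdlras1-scan)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = zdlras1))
  )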
Check Database Configuration
After checking the config outside the database, it is important to check some database requirements. The most important one is verifying that REMOTE_LOGIN_PASSWORDFILE is set to “exclusive” or “shared”:
[oracle@orcloel7 ~]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Tue Oct 1 23:32:09 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL>
SQL> show parameter REMOTE_LOGIN_PASSWORDFILE;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile string EXCLUSIVE
SQL>
SQL>
Other parameters to verify are log_archive_config and db_unique_name. They are important to check the current configuration, for example in databases already using DG.
Configuring Real-Time Redo
After these pre-checks, we can start the configuration. Here I set everything in the spfile so that, after configuring everything, I can bounce the database and start with everything up and running.
REDO_TRANSPORT_USER
This is the parameter that defines the user that the database uses to send redo to another database. This user needs to exist in both databases (for DG), and because of that it needs to exist in ZDLRA. When using ZDLRA and real-time redo, it needs to be the same as the VPC user. So, the same VPC user needs to exist on the database side and be set in this parameter:
SQL> show parameter redo_transport_user;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
redo_transport_user string
SQL>
SQL> alter system set redo_transport_user='CAT_ZDLRA' scope = spfile SID='*';
System altered.
SQL>
SQL> create user cat_zdlra identified by s3nhazdlra;
User created.
SQL> grant sysoper to cat_zdlra;
Grant succeeded.
SQL>
As you can see, the user CAT_ZDLRA was created inside the database. The user needs at least the SYSOPER grant to work properly.
LOG_ARCHIVE_CONFIG
This parameter works the same as in DG environments, and since ZDLRA real-time redo is a remote archive destination, we do the same. Here we set it specifying the database unique name of the ZDLRA (zdlras1 in my current environment):
SQL> show parameter log_archive_config;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_config string
SQL>
SQL> show parameter db_unique_name;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string ORCL19
SQL>
SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(orcl19,zdlras1)' SCOPE=SPFILE SID='*';
System altered.
SQL>
If you already have DG configured, you just need to add the entry for the ZDLRA database to this parameter, as shown below.
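For example, with a hypothetical standby with db_unique_name orcl19stb (a name invented just for illustration), the entry would simply grow:
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(orcl19,orcl19stb,zdlras1)' SCOPE=SPFILE SID='*';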
LOG_ARCHIVE_DEST_n
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE="zdlras1-scan:1521/zdlras1:CAT_ZDLRA" ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=1 max_connections=1 reopen=10 DB_UNIQUE_NAME=zdlras1 net_timeout=8 group=1 priority=1 VALID_FOR=(ONLINE_LOGFILE,ALL_ROLES)' scope = spfile sid = '*';
System altered.
SQL>
SQL> alter system set log_archive_dest_state_2=DEFER scope = spfile sid = '*';
System altered.
SQL>
The key attributes are:
SERVICE: the connection to ZDLRA. It can be a TNS service name or an EZConnect string. As you saw, my credential name is the same as the EZConnect string. I recommend doing it this way, so you know what the credential is for. If you use a TNS entry for SERVICE, I recommend having a credential with that same name.
ASYNC NOAFFIRM: this is the controversial part (I will explain later); it defines the mode in which the redo is shipped to the remote destination.
DB_UNIQUE_NAME: Database unique name of recovery appliance database.
All the others are the normal attributes for a DG config, and you can check them at LOG_ARCHIVE_DEST_n Parameter Attributes.
As I told before, ASYNC NOAFFIRM is controversial: the destination is defined as ASYNC, and the ZDLRA docs do the same. You can read this at “Protection of Ongoing Transactions” (or at “How Real-Time Redo Transport Works”):
“Redo data from the protected database is written asynchronously to the Recovery Appliance as it is generated. Load on production database servers is minimized because redo data is shipped directly from the memory to the Recovery Appliance without the involvement of disk I/O”
“To avoid degrading the performance of the protected database, protected databases transfer redo asynchronously to the Recovery Appliance. If a protected database is lost, zero to subsecond data loss is expected in most cases.”
The controversy is that the appliance is named ZERO DATA LOSS, yet ASYNC does not guarantee zero data loss. So, is it a scam, and do we need to rename it the Non-Zero Data Loss Recovery Appliance? No; they are just trying to remind you that, to minimize the impact on the database, it was configured in async mode. You can configure SYNC mode, it is allowed, but remember that this can/will impact the database.
If you want to understand the impact that SYNC/ASYNC/AFFIRM/NOAFFIRM have on the database, read the “Best Practices for Synchronous Redo Transport Data Guard and Active Data Guard” doc. It is fundamental.
I will cover this in the next post about real-time redo.
Using Real-Time Redo
After the pre-checks and configuring all the parameters above, we can start to use it. If some parts were already configured (like the redo transport user and the wallet), you can configure it without restarting the database. But here I restarted the database (and enabled the log_archive_dest_state_n, as sketched below).
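The enable step after the bounce is just this (a minimal sketch, assuming the spfile changes shown earlier, where the destination was left in DEFER):
alter system set log_archive_dest_state_2=ENABLE scope = both sid = '*';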
The best way to show/check real-time redo working is through RMAN, where I can list the archivelogs and the backups that exist for each one:
[oracle@orcloel7 ~]$ rman target=/ catalog=cat_zdlra/s3nhazdlra@zdlras1
Recovery Manager: Release 19.0.0.0.0 - Production on Tue Oct 1 23:57:42 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL19 (DBID=310627084)
connected to recovery catalog database
RMAN> list copy of archivelog all;
specification does not match any archived log in the repository
RMAN> alter system archive log current;
Statement processed
RMAN> list copy of archivelog all;
List of Archived Log Copies for database with db_unique_name ORCL19
=====================================================================
Key Thrd Seq S Low Time
------- ---- ------- - -------------------
4329 1 47 A 02-10-2019_00:09:32
Name: /u01/app/oracle/oradata/ORCL19/archivelog/2019_10_02/o1_mf_1_47_gs7mzgkn_.arc
RMAN> list backup of archivelog all completed after "sysdate - 5/1440";
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
4365 595.50K SBT_TAPE 00:00:00 02-10-2019_00:15:34
BP Key: 4366 Status: AVAILABLE Compressed: YES Tag: TAG20191002T001534
Handle: $RSCN_1920977_RTIM_1009298641_THRD_1_SEQ_47_CTKEY_4222_BACKUP Media:
List of Archived Logs in backup set 4365
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 47 2370920 02-10-2019_00:09:32 2372594 02-10-2019_00:15:10
RMAN>
Look at the example above: I started with no local copy of archivelogs. Then I did a log switch and sequence 47 was generated. Instantly, if I list the backups of archivelogs, you can see that one backup of the same sequence already exists.
You can identify that this backup came from real-time redo, and not from a backup of the local copy, by the handle: it has a specific format starting with $RSCN.
If I do it one more time, the same behavior occurs:
RMAN> list copy of archivelog all;
List of Archived Log Copies for database with db_unique_name ORCL19
=====================================================================
Key Thrd Seq S Low Time
------- ---- ------- - -------------------
4329 1 47 A 02-10-2019_00:09:32
Name: /u01/app/oracle/oradata/ORCL19/archivelog/2019_10_02/o1_mf_1_47_gs7mzgkn_.arc
RMAN> alter system archive log current;
Statement processed
RMAN> list copy of archivelog all;
List of Archived Log Copies for database with db_unique_name ORCL19
=====================================================================
Key Thrd Seq S Low Time
------- ---- ------- - -------------------
4329 1 47 A 02-10-2019_00:09:32
Name: /u01/app/oracle/oradata/ORCL19/archivelog/2019_10_02/o1_mf_1_47_gs7mzgkn_.arc
4406 1 48 A 02-10-2019_00:15:10
Name: /u01/app/oracle/oradata/ORCL19/archivelog/2019_10_02/o1_mf_1_48_gs7n1ck8_.arc
RMAN> list backup of archivelog all completed after "sysdate - 5/1440";
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
4365 595.50K SBT_TAPE 00:00:00 02-10-2019_00:15:34
BP Key: 4366 Status: AVAILABLE Compressed: YES Tag: TAG20191002T001534
Handle: $RSCN_1920977_RTIM_1009298641_THRD_1_SEQ_47_CTKEY_4222_BACKUP Media:
List of Archived Logs in backup set 4365
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 47 2370920 02-10-2019_00:09:32 2372594 02-10-2019_00:15:10
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
4445 41.00K SBT_TAPE 00:00:00 02-10-2019_00:16:32
BP Key: 4446 Status: AVAILABLE Compressed: YES Tag: TAG20191002T001632
Handle: $RSCN_1920977_RTIM_1009298641_THRD_1_SEQ_48_CTKEY_4294_BACKUP Media:
List of Archived Logs in backup set 4445
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- ------------------- ---------- ---------
1 48 2372594 02-10-2019_00:15:10 2372793 02-10-2019_00:16:11
RMAN>
Another point: if you try to back up these archivelogs, you will see that they were already backed up:
RMAN> BACKUP DEVICE TYPE SBT FILESPERSET 100 FORMAT '%U' ARCHIVELOG ALL NOT BACKED UP TAG 'BKP-ARCH';
Starting backup at 02-10-2019_00:18:30
current log archived
released channel: ORA_DISK_1
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=86 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: RA Library (ZDLRAS1) SID=93E18A9FC482277FE053010310AC3886
skipping archived logs of thread 1 from sequence 47 to 48; already backed up
Real-Time Redo… for real
The real-time redo transport walkthrough I showed here just covered the configuration part. You can use it as a guide to understand what you need to configure and check, but this post does not cover all the details.
In the next post about real-time redo, I will explore the controversy about the SYNC/ASYNC transport modes. I will test and simulate other scenarios, like a complete crash of the environment, and verify that everything stays safe and we reach zero RPO.
But in the end, we have the full “Delta Push” for our database: the incremental forever strategy and real-time redo enabled.


ZDLRA Internals, Virtual Full Backup
Virtual full backup is probably the best-known feature of the Oracle Zero Data Loss Recovery Appliance (ZDLRA), and you can check here how it works. In this post I will show how virtual full backup works internally and how the INDEX_BACKUP task integrates with tables like PLANS, PLANS_DETAILS, CHUNKS, and BLOCKS.
About the internal tables, you can check my previous post “ZDLRA Internals, Tables and Storage”, where I explained the details. To understand the INDEX_BACKUP task, check my post “ZDLRA Internals, INDEX_BACKUP task in details”. And if you know nothing and want to start reading about ZDLRA, you can check my post “Understanding ZDLRA” for all the features and details.
The base for this article is the virtual full backup and the incremental forever strategy. I explained both at “ZDLRA, Virtual Full Backup and Incremental Forever”, including how they work integrated with RMAN backup. Basically, after an initial level 0 backup, you execute just level 1 backups and ZDLRA generates a virtual level 0 backup. Here, in this post, I will show how that works in some internal detail.
Database Environment
In this post I used a new 1MB datafile (block size of 8K) and created a small table to load some data. After every load, I took a level 1 backup of the datafile. The idea is to show how ZDLRA indexes the backups and generates and stores the virtual full backup internally. The database here runs 19c, and the ZDLRA is running 19c too, but this works the same for every supported database version and every ZDLRA version.
Creating the tablespace, checking the datafile, and creating the table:
SQL> create tablespace simon datafile '/u01/app/oracle/oradata/ORCL19/simon01.dbf' size 1m autoextend on next 1m maxsize 10m;
Tablespace created.
SQL>
SQL> select file_id from dba_data_files where tablespace_name = 'SIMON';
FILE_ID
----------
5
SQL>
SQL> create table test(c1 decimal(1,0), c2 varchar2(128)) tablespace simon;
Table created.
SQL> insert into test (c1, c2) values (0, 'SIMON');
1 row created.
SQL> commit;
Commit complete.
SQL>
And after that, taking the level 0 backup of the datafile:
RMAN> list backup of datafile 5;
specification does not match any backup in the repository
RMAN> BACKUP INCREMENTAL LEVEL 0 DEVICE TYPE SBT FILESPERSET 1 DATAFILE 5;
Starting backup at 22-09-2019_17:54:27
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=75 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: RA Library (ZDLRAS1) SID=9327516A92A43E0DE053010310ACCB56
channel ORA_SBT_TAPE_1: starting incremental level 0 datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/ORCL19/simon01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 22-09-2019_17:54:28
channel ORA_SBT_TAPE_1: finished piece 1 at 22-09-2019_17:54:29
piece handle=ORCL19_2aucdsak_1_1 tag=TAG20190922T175427 comment=API Version 2.0,MMS Version 12.2.0.2
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:01
Finished backup at 22-09-2019_17:54:29
Starting Control File and SPFILE Autobackup at 22-09-2019_17:54:29
piece handle=c-310627084-20190922-05 comment=API Version 2.0,MMS Version 12.2.0.2
Finished Control File and SPFILE Autobackup at 22-09-2019_17:54:30
RMAN>
RMAN> list backup of datafile 5;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
2729 Incr 0 40.00K SBT_TAPE 00:00:02 22-09-2019_17:54:30
BP Key: 2730 Status: AVAILABLE Compressed: YES Tag: TAG20190922T175427
Handle: VB$_1887643964_2728I Media:
List of Datafiles in backup set 2729
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- ------------------- ----------- ------ ----
5 0 Incr 2320763 22-09-2019_17:54:28 NO /u01/app/oracle/oradata/ORCL19/simon01.dbf
RMAN>
So, as you can see, the tablespace SIMON was created (datafile #5). After that, the table TEST was created and some data was loaded. At the end, the first backup of the datafile was taken, and it was indexed by ZDLRA.
Basic information for the database
For this database (database name ORCL19), inside the internal tables of ZDLRA, we can see that it has DB_KEY 2202 and DBINC_KEY 2203. The DF_KEY for the datafile is 2689. This info is important to correctly identify the backups.
[oracle@zdlras1n1 ~]$ sqlplus rasys/change^Me2
SQL*Plus: Release 19.0.0.0.0 - Production on Sun Sep 22 17:53:21 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Last Successful login time: Sun Sep 22 2019 17:39:03 +02:00
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> set linesize 250
SQL> select db_key, dbinc_key from rc_database where name = 'ORCL19';
DB_KEY DBINC_KEY
---------- ----------
2202 2203
SQL>
SQL> select df_key, file#, ts#, create_scn, create_time, block_size, blocks from df where dbinc_key = 2203 and file# = 5;
DF_KEY FILE# TS# CREATE_SCN CREATE_TI BLOCK_SIZE BLOCKS
---------- ---------- ---------- ---------- --------- ---------- ----------
2689 5 6 2319183 22-SEP-19 8192 128
SQL>
Virtual Full Backup and PLANS
After the backup was ingested and the INDEX_BACKUP task finished at ZDLRA, we have one virtual full backup linked with the datafile. This information comes from the VBDF and PLANS tables:
SQL> select vb_key, ckp_scn, vcbp_key, srcbp_key, blocks, chunks_used from vbdf where db_key = 2202 and df_key = 2689 order by vb_key asc;
VB_KEY CKP_SCN VCBP_KEY SRCBP_KEY BLOCKS CHUNKS_USED
---------- ---------- ---------- ---------- ---------- -----------
2728 2320763 2730 2701 128 1
SQL>
SQL> select * from plans where db_key = 2202 and df_key = 2689 order by vb_key asc;
TYPE DB_KEY VB_KEY DF_KEY TASK_ID OLD BLKSREAD MAXRANK NUMCHUNKS READSIZE NEEDCHUNK FREECHUNK
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
8 2202 2728 2689 19 1 1
SQL>
Above we can see that this virtual full backup (VB_KEY 2728) has 128 blocks (128 * 8K = 1M) and one plan. This virtual full backup/plan reads just 19 blocks (BLKSREAD) and uses 1 chunk to store them (NUMCHUNKS).
But this does not show how ZDLRA sees the backup; the virtual full backup stores its information in the PLANS_DETAILS table. Reading this table, we can see the foundation of the virtual full backup:
SQL> select * from plans_details where df_key = 2689 order by vb_key asc, blockno asc;
DF_KEY TYPE VB_KEY BLKRANK BLOCKNO CHUNKNO NUMBLKS COFFSET NUMBYTES
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
2689 8 2728 1 0 1 1 8192 24576
2689 8 2728 1 2 1 17 32788 2167
2689 8 2728 1 4294967295 1 1 34955 294
SQL>
Let’s explain what this means (a query sketch follows the list):
Datafile blocks from block 0 (column BLOCKNO) up to block 1 (BLOCKNO+NUMBLKS) are stored at chunk 1.
Datafile blocks from block 2 up to block 19 (BLOCKNO+NUMBLKS) are stored at chunk 1.
Datafile block 4294967295 (the marker for the last block of the datafile) is stored at chunk 1.
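The same ranges can be derived directly from the table with a small query (a sketch built on the columns shown above; the last block of each range is BLOCKNO+NUMBLKS-1):
select blockno as first_block,
       blockno + numblks - 1 as last_block,
       chunkno,
       numbytes
  from plans_details
 where df_key = 2689
   and vb_key = 2728
 order by blockno;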
To show this I created the diagram below. It represents the first 13 blocks of the virtual full backup and will help to visualize it. So, the virtual full backup with VB_KEY 2728 can be reconstructed by following PLANS_DETAILS as:

Subsequent backups
If we continue to modify data in this tablespace/datafile and execute a new incremental level 1 backup, we can see more details. So first, add more data and take a backup:
SQL> BEGIN
2 FOR i IN 1 .. 300 LOOP
3 insert into test (c1, c2) values (2, DBMS_RANDOM.STRING('P', 128));
4
5 if (MOD(i, 100) = 0) then
6 commit;
7 end if;
8
9 END LOOP;
10
11 commit;
12 END;
13 /
PL/SQL procedure successfully completed.
SQL>
RMAN> BACKUP INCREMENTAL LEVEL 1 DEVICE TYPE SBT FILESPERSET 1 DATAFILE 5;
Starting backup at 22-09-2019_18:34:30
using channel ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: starting incremental level 1 datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/ORCL19/simon01.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 22-09-2019_18:34:31
channel ORA_SBT_TAPE_1: finished piece 1 at 22-09-2019_18:34:32
piece handle=ORCL19_2cucduln_1_1 tag=TAG20190922T183430 comment=API Version 2.0,MMS Version 12.2.0.2
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:01
Finished backup at 22-09-2019_18:34:32
Starting Control File and SPFILE Autobackup at 22-09-2019_18:34:32
piece handle=c-310627084-20190922-06 comment=API Version 2.0,MMS Version 12.2.0.2
Finished Control File and SPFILE Autobackup at 22-09-2019_18:34:36
RMAN>
And now, inside ZDLRA, we have two virtual full backups and two plans (in PLANS, look at the number of blocks read; the increase is expected since I added more data above):
SQL> select vb_key, ckp_scn, vcbp_key, srcbp_key, blocks, chunks_used from vbdf where db_key = 2202 and df_key = 2689 order by vb_key asc;
VB_KEY CKP_SCN VCBP_KEY SRCBP_KEY BLOCKS CHUNKS_USED
---------- ---------- ---------- ---------- ---------- -----------
2728 2320763 2730 2701 128 1
2768 2322525 2770 2735 128 1
SQL>
SQL> select * from plans where db_key = 2202 and df_key = 2689 order by vb_key asc;
TYPE DB_KEY VB_KEY DF_KEY TASK_ID OLD BLKSREAD MAXRANK NUMCHUNKS READSIZE NEEDCHUNK FREECHUNK
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
8 2202 2728 2689 19 1 1
1 2202 2768 2689 27 1 2
SQL>
The virtual full backups have these PLANS_DETAILS entries:
SQL> select * from plans_details where df_key = 2689 order by vb_key asc, blockno asc;
DF_KEY TYPE VB_KEY BLKRANK BLOCKNO CHUNKNO NUMBLKS COFFSET NUMBYTES
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
2689 8 2728 1 0 1 1 8192 24576
2689 8 2728 1 2 1 17 32788 2167
2689 8 2728 1 4294967295 1 1 34955 294
2689 1 2768 1 0 1025 1 8192 24576
2689 1 2768 1 2 1025 2 32788 252
2689 1 2768 1 4 1 4 33038 408
2689 1 2768 1 8 1025 16 33040 45339
2689 1 2768 1 71 1 3 34703 252
2689 1 2768 1 4294967295 1025 1 78379 293
9 rows selected.
SQL>
Checking the last virtual full backup, VB_KEY 2768, in detail, we can see interesting things and start to understand how it works. So, to “mount” the virtual full backup 2768 we have:
Datafile block 0 (column BLOCKNO) up to block 1 (BLOCKNO+NUMBLKS) is stored in chunk 1025.
Datafile blocks 2 (column BLOCKNO) up to block 4 (BLOCKNO+NUMBLKS) are stored in chunk 1025.
Datafile blocks 4 (column BLOCKNO) up to block 8 (BLOCKNO+NUMBLKS) are stored in chunk 1 (and come from the previous virtual full backup).
Datafile blocks 8 (column BLOCKNO) up to block 24 (BLOCKNO+NUMBLKS) are stored in chunk 1025.
So, the virtual full backup with VB_KEY 2768 for DF_KEY 2689 can be reconstructed by following PLANS_DETAILS. The image below represents the Delta Store and the current virtual full backups:

Notice that ZDLRA started to save space when storing the backup, being “smart” and indexing all the blocks needed to create the virtual full backup. The important point here is that ZDLRA understood and indexed every datafile block that was inside the backup. The virtual full backup “does not exist”; it is basically an index over each version of each block.
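One way to see this sharing is to count how many virtual full backups of the datafile point at each chunk. Again, this is only a sketch over the PLANS_DETAILS table queried above; a chunk referenced by more than one VB_KEY holds blocks reused between backups:
-- Chunks referenced by more than one virtual full backup are shared:
-- the virtual full backup is an index of pointers, not a physical copy.
select chunkno,
       count(distinct vb_key) as backups_using_chunk
  from plans_details
 where df_key = 2689
 group by chunkno
 order by chunkno;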
If I continue to insert/delete/update rows in this table, I will modify blocks (including blocks already backed up), and after some subsequent backups I have these plans:
SQL> select vb_key, ckp_scn, vcbp_key, srcbp_key, blocks, chunks_used from vbdf where db_key = 2202 and df_key = 2689 order by vb_key asc;
VB_KEY CKP_SCN VCBP_KEY SRCBP_KEY BLOCKS CHUNKS_USED
---------- ---------- ---------- ---------- ---------- -----------
2728 2320763 2730 2701 128 1
2768 2322525 2770 2735 128 1
2818 2323607 2820 2779 128 1
2874 2324062 2876 2829 128 1
2936 2324792 2938 2885 128 1
SQL> select * from plans where db_key = 2202 and df_key = 2689 order by vb_key asc;
TYPE DB_KEY VB_KEY DF_KEY TASK_ID OLD BLKSREAD MAXRANK NUMCHUNKS READSIZE NEEDCHUNK FREECHUNK
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
8 2202 2728 2689 19 1 1
1 2202 2768 2689 27 1 2
1 2202 2818 2689 27 1 3
1 2202 2874 2689 27 1 3
1 2202 2936 2689 27 1 4
SQL> select * from plans_details where df_key = 2689 order by vb_key asc, blockno asc;
DF_KEY TYPE VB_KEY BLKRANK BLOCKNO CHUNKNO NUMBLKS COFFSET NUMBYTES
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
2689 8 2728 1 0 1 1 8192 24576
2689 8 2728 1 2 1 17 32788 2167
2689 8 2728 1 4294967295 1 1 34955 294
2689 1 2768 1 0 1025 1 8192 24576
2689 1 2768 1 2 1025 2 32788 252
2689 1 2768 1 4 1 4 33038 408
2689 1 2768 1 8 1025 16 33040 45339
2689 1 2768 1 71 1 3 34703 252
2689 1 2768 1 4294967295 1025 1 78379 293
2689 1 2818 1 0 2049 1 8192 24576
2689 1 2818 1 2 1025 2 32788 252
2689 1 2818 1 4 1 4 33038 408
2689 1 2818 1 8 2049 1 32788 257
2689 1 2818 1 9 1025 7 33293 36549
2689 1 2818 1 16 2049 3 33045 17304
2689 1 2818 1 19 1025 1 76821 1026
2689 1 2818 1 20 2049 4 50349 29052
2689 1 2818 1 71 1 3 34703 252
2689 1 2818 1 4294967295 2049 1 79401 301
2689 1 2874 1 0 3073 1 8192 24576
2689 1 2874 1 2 1025 2 32788 252
2689 1 2874 1 4 1 4 33038 408
2689 1 2874 1 8 3073 1 32788 262
2689 1 2874 1 9 1025 2 33293 347
2689 1 2874 1 11 3073 13 33050 94555
2689 1 2874 1 71 1 3 34703 252
2689 1 2874 1 4294967295 3073 1 127605 296
2689 1 2936 1 0 4097 1 8192 24576
2689 1 2936 1 2 1025 2 32788 252
2689 1 2936 1 4 1 4 33038 408
2689 1 2936 1 8 3073 1 32788 262
2689 1 2936 1 9 1025 2 33293 347
2689 1 2936 1 11 3073 13 33050 94555
2689 1 2936 1 71 1 3 34703 252
2689 1 2936 1 4294967295 4097 1 32788 278
35 rows selected.
SQL>
As you can see above, now I have 5 virtual full backups (2728, 2768, 2818, 2874, and 2936) stored in 5 chunks (1, 1025, 2049, 3073, 4097). In this case, if I need to read the virtual full backup 2936, I read PLANS_DETAILS and check that:
Datafile block 0 up to block 1 is stored in chunk 4097.
Datafile blocks 2 up to block 4 are stored in chunk 1025.
Datafile blocks 4 up to block 8 are stored in chunk 1.
Datafile block 8 is stored in chunk 3073.
Datafile blocks 9 up to block 11 are stored in chunk 1025.
Datafile blocks 11 up to block 24 are stored in chunk 3073.
The image below represents this:

Automated Delta Pool Space Management
Since ZDLRA contains a “self-managed” RMAN catalog, the backups are managed automatically. This means that unnecessary data is deleted from time to time to avoid redundancy and to be more space-efficient. For ZDLRA this is called “automated delta pool space management”, and the part that optimizes the delta pool is the “delta pool optimization”.
Internally, this means that ZDLRA constantly checks the ingested backups and tries to optimize the Delta Store. If I continue to change data and take more backups, we can see this in action:
SQL> select * from plans_details where df_key = 2689 order by vb_key asc, blockno asc;
DF_KEY TYPE VB_KEY BLKRANK BLOCKNO CHUNKNO NUMBLKS COFFSET NUMBYTES
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
2689 1 2768 1 0 1025 1 8192 24576
2689 1 2768 1 2 1025 2 32788 252
2689 1 2768 1 4 1 4 33038 408
2689 1 2768 1 8 1025 16 33040 45339
2689 1 2768 1 71 1 3 34703 252
2689 1 2768 1 4294967295 1025 1 78379 293
2689 1 2818 1 0 2049 1 8192 24576
2689 1 2818 1 2 1025 2 32788 252
2689 1 2818 1 4 1 4 33038 408
2689 1 2818 1 8 2049 1 32788 257
2689 1 2818 1 9 1025 7 33293 36549
2689 1 2818 1 16 2049 3 33045 17304
2689 1 2818 1 19 1025 1 76821 1026
2689 1 2818 1 20 2049 4 50349 29052
2689 1 2818 1 71 1 3 34703 252
2689 1 2818 1 4294967295 2049 1 79401 301
2689 1 2874 1 0 3073 1 8192 24576
2689 1 2874 1 2 1025 2 32788 252
2689 1 2874 1 4 1 4 33038 408
2689 1 2874 1 8 3073 1 32788 262
2689 1 2874 1 9 1025 2 33293 347
2689 1 2874 1 11 3073 13 33050 94555
2689 1 2874 1 71 1 3 34703 252
2689 1 2874 1 4294967295 3073 1 127605 296
2689 1 2936 1 0 4097 1 8192 24576
2689 1 2936 1 2 1025 2 32788 252
2689 1 2936 1 4 1 4 33038 408
2689 1 2936 1 8 3073 1 32788 262
2689 1 2936 1 9 1025 2 33293 347
2689 1 2936 1 11 3073 13 33050 94555
2689 1 2936 1 71 1 3 34703 252
2689 1 2936 1 4294967295 4097 1 32788 278
2689 1 3002 1 0 5121 1 8192 24576
2689 1 3002 1 2 5121 2 32788 255
2689 1 3002 1 4 1 4 33038 408
2689 1 3002 1 8 3073 1 32788 262
2689 1 3002 1 9 1025 1 33293 128
2689 1 3002 1 10 5121 1 33043 215
2689 1 3002 1 11 3073 13 33050 94555
2689 1 3002 1 24 5121 28 33258 4986
2689 1 3002 1 52 1 1 34703 84
2689 1 3002 1 56 5121 8 38244 1168
2689 1 3002 1 72 1 1 34787 84
2689 1 3002 1 128 5121 843 39412 125460
2689 1 3002 1 4294967295 5121 1 164872 301
45 rows selected.
SQL>
Here, look at the plan for virtual full backup 2728 (one of the first, already consolidated): its entries changed and were consolidated. Block 0, which was stored in chunk 1, is now served by chunk 1025, and the same occurred for other blocks (blocks 8-13, for example). Look at the plans right now:

The red arrows represent the change that we can see in the PLANS_DETAILS table for VB_KEY 2728. The black arrows represent the blocks needed (from previous backups) if I want to read the virtual full backup 3002.
Finally, if I take another backup, we can see the space management evolve further:
SQL> select * from plans_details where df_key = 2689 order by vb_key asc, blockno asc;
DF_KEY TYPE VB_KEY BLKRANK BLOCKNO CHUNKNO NUMBLKS COFFSET NUMBYTES
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
2689 1 2818 1 0 2049 1 8192 24576
2689 1 2818 1 2 1025 2 32788 252
2689 1 2818 1 4 1 4 33038 408
2689 1 2818 1 8 2049 1 32788 257
2689 1 2818 1 9 1025 7 33293 36549
2689 1 2818 1 16 2049 3 33045 17304
2689 1 2818 1 19 1025 1 76821 1026
2689 1 2818 1 20 2049 4 50349 29052
2689 1 2818 1 71 1 3 34703 252
2689 1 2818 1 4294967295 2049 1 79401 301
2689 1 2874 1 0 3073 1 8192 24576
2689 1 2874 1 2 1025 2 32788 252
2689 1 2874 1 4 1 4 33038 408
2689 1 2874 1 8 3073 1 32788 262
2689 1 2874 1 9 1025 2 33293 347
2689 1 2874 1 11 3073 13 33050 94555
2689 1 2874 1 71 1 3 34703 252
2689 1 2874 1 4294967295 3073 1 127605 296
2689 1 2936 1 0 4097 1 8192 24576
2689 1 2936 1 2 1025 2 32788 252
2689 1 2936 1 4 1 4 33038 408
2689 1 2936 1 8 3073 1 32788 262
2689 1 2936 1 9 1025 2 33293 347
2689 1 2936 1 11 3073 13 33050 94555
2689 1 2936 1 71 1 3 34703 252
2689 1 2936 1 4294967295 4097 1 32788 278
2689 1 3002 1 0 5121 1 8192 24576
2689 1 3002 1 2 5121 2 32788 255
2689 1 3002 1 4 1 4 33038 408
2689 1 3002 1 8 3073 1 32788 262
2689 1 3002 1 9 1025 1 33293 128
2689 1 3002 1 10 5121 1 33043 215
2689 1 3002 1 11 3073 13 33050 94555
2689 1 3002 1 24 5121 28 33258 4986
2689 1 3002 1 52 1 1 34703 84
2689 1 3002 1 56 5121 8 38244 1168
2689 1 3002 1 72 1 1 34787 84
2689 1 3002 1 128 5121 843 39412 125460
2689 1 3002 1 4294967295 5121 1 164872 301
2689 1 3074 1 0 6145 1 8192 24576
2689 1 3074 1 2 5121 2 32788 255
2689 1 3074 1 4 1 4 33038 408
2689 1 3074 1 8 3073 1 32788 262
2689 1 3074 1 9 1025 1 33293 128
2689 1 3074 1 10 5121 1 33043 215
2689 1 3074 1 11 3073 13 33050 94555
2689 1 3074 1 24 5121 28 33258 4986
2689 1 3074 1 52 1 1 34703 84
2689 1 3074 1 56 5121 8 38244 1168
2689 1 3074 1 72 1 1 34787 84
2689 1 3074 1 128 5121 843 39412 125460
2689 1 3074 1 4294967295 6145 1 32788 281
52 rows selected.
SQL>
The image that represents the current state is:

If you check the table and the image, you can see that ZDLRA checks block by block: block #8 for VB_KEY 2728 now comes from chunk 2049 (written by virtual full backup 2818). If you compare with the previous report, you can see that block #8 was split out from the previous chunk and this information was added for VB_KEY 2728.
In the end, this means that ZDLRA optimized the delta store too, perhaps allowing chunk 1 to be deleted in the future. Probably, during the INDEX_BACKUP/PLAN_DF tasks, ZDLRA detected that the block is the same as the one that exists in other chunks and marked the existing version in chunk number 1 as obsolete.
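To follow this optimization yourself, a simple sketch is to list the chunks still referenced by any plan of the datafile (again using only the PLANS_DETAILS table shown above). A chunk that disappears from this list between two runs is a candidate the delta pool optimization can reclaim:
-- Chunks still needed by at least one virtual full backup of this datafile.
select distinct chunkno
  from plans_details
 where df_key = 2689
 order by chunkno;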
Virtual Full Backup
This is how the virtual full backup works. Every datafile block that enters ZDLRA from an RMAN backup is indexed and stored to create the virtual full backup. As you saw above, the idea is to create a representation of every virtual full backup of the datafile (VBDF table), linking it with one plan (PLANS and PLANS_DETAILS tables).
Going further, you can see that there is no 1-to-1 relation between the backup that was ingested and the virtual full backup. It is just a matrix of pointers to blocks inside chunks. This is the reason ZDLRA is different from other backup appliances: it can analyze block by block and index it efficiently.
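To summarize the relationship, the sketch below joins the two listings used throughout this post, giving one row per virtual full backup with its plan summary. The join keys are an assumption based on the columns shown in the outputs above:
-- One row per virtual full backup of the datafile, with its plan summary.
select v.vb_key,
       v.ckp_scn,
       p.blksread,
       p.numchunks
  from vbdf v, plans p
 where v.db_key = 2202
   and v.df_key = 2689
   and p.vb_key = v.vb_key
   and p.df_key = v.df_key
 order by v.vb_key;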
Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed to make it suitable for a generic audience and useful for the community.”


LUXOUG - 2020 - Update - COVID-19 !!
We would like to express our sincerest apologies to you.
In light of the recent events and news about the COVID-19 pandemic, attendees and sponsors suggested that we postpone our LuxOUG 2020 event.
The Luxembourg Oracle User Group is very sad about the current situation, but your health and welfare are important to us.
We will keep everyone informed when we have a new date for the event.
Sincerely
The LuxOUG – Board
First LUXOUG event for the Oracle community.
We will hold our first event in Luxembourg, covering various technologies such as DevOps, Engineered Systems, Middleware, Cloud and others.
Location: Luxembourg
Date: 28/3/2020
Event Schedule: 9:00 – 17:00 (CEST)
Address: 13 Rue de l’Industrie, 8399 Windhof – 4th floor.
See you at the event
Event Schedule:

Our Speakers

Erik van Roon – In 1995, Erik van Roon switched careers for medical reasons from Microbiology/Biochemistry to being an Oracle Developer. Since then he has worked for several consulting companies in the Netherlands. Since 2009 he has been self-employed; his company is called EvROCS, short for Erik van Roon Oracle Consulting Services. He has worked on major projects for several clients in industries like entertainment, banking and energy, working with the ‘classic’ Oracle tools Designer, Forms, Reports, and of course SQL and PL/SQL. He has been the technical lead of multiple successful, high-impact data-migration projects moving and transforming large amounts of data. In 2012 he started speaking at international conferences. He was nominated and selected to be a finalist in the SQL category of the OTN Developers Choice Awards in 2015.
site: https://www.evrocs.nl/?lang=en
twitter: https://twitter.com/evrocs_nl
linkedin: https://www.linkedin.com/in/erikvanroon/

Kamran Agayev – Kamran Agayev Agamehdi is an Oracle Certified Professional DBA (9i, 10g) with over 7 years of experience with UNIX systems and Oracle databases. During his 7 years working in the IT industry, Kamran has been exposed to a wide range of technologies and has developed a broad set of technical skills, with a very strong background in Oracle databases, UNIX and Windows systems. Kamran has worked in production environments as an Oracle DBA and in development environments as an Oracle DBA/Developer, and adapts well to both. His background, communication and documentation skills help in bridging the gap between business people, database administrators and UNIX system administrators. He is the author of the book “Oracle Backup & Recovery: Expert secrets for using RMAN and Data Pump”. He is also pursuing a PhD in Computer Science at the Azerbaijan Oil Academy, teaches “Oracle Database Administration” and presents seminars at Qafqaz University. At the same time, he is an active member of the OTN Forums and has published many “Step by Step” articles and Oracle video tutorials on his blog http://kamranagayev.wordpress.com covering the main Oracle features. Currently he is working as a DBA Expert at “Azercell Telecom”.
site: http://kamranagayev.wordpress.com/
twitter: https://www.twitter.com/KamranAgayev
linkedin: https://www.linkedin.com/in/KamranAgayev

Toon Koppelaars – Toon has been part of the Oracle ecosystem since 1987 and is currently a member of Oracle’s Real-World Performance (RWP) team. The RWP team troubleshoots application performance issues in and around the DBMS; the way applications currently use (or rather, abuse) the DBMS is often at the root of these issues. Prior to joining the RWP team, Toon was mainly involved in database application development. Real-World Performance @oracle – database designer – SQL+PL/SQL – OakTable – author of Applied Math for DB Pros – SQL Assertions – #SmartDB – opinions his own.
site: www.oracle.com
twitter: https://twitter.com/toonkoppelaars
linkedin: N/A

Bruno Reis – Bruno Reis da Silva is an Oracle Senior Cloud Database Administrator Consultant and professionally certified Oracle Database Administrator who has worked on the South American continent and is now working on the European continent. His interests are RDBMS (mainly Oracle), operating systems (Linux, AIX, HP-UX and Solaris) and High Availability solutions implementations. In his free time he enjoys playing sports, going to the gym and traveling. His blog is dedicated to his sister Laura Vitoria, and the main reason for blogging is that he believes in the quote “giving back to get back”. He also enjoys continuous learning and the interaction with others who share his interests.
site: http://www.techdatabasket.com/
twitter: https://twitter.com/brunorsdba
linkedin: https://br.linkedin.com/in/bruno-reis-81498191

Piet de Visser – Piet is an experienced database consultant and DBA. He has been in IT since 1987 and has specialized in Oracle and database-related technology since 1994. Piet combines technical database knowledge with extensive experience in global organizations. His energetic personality and language skills make him ideal for pioneering and ground-breaking projects in a multi-national environment. Piet also invests a lot of effort in maintaining a network of like-minded Oracle specialists worldwide and is a regular speaker at international user-group events and symposia. Piet likes to travel, speaks various languages more or less fluently, and is always interested in learning how to say “Hello” and “Thank You” in a few more languages, provided he can learn it in situ. Piet specializes in Oracle databases and high availability, deployment and performance. His philosophy is that simple systems are easier to maintain and to run, and therefore offer better availability.
site: http://pdvfirstblog.blogspot.com/
twitter: https://twitter.com/pdevisser
linkedin: https://nl.linkedin.com/in/pietdevisser

Robert Marz – Robert Marz is a member of its-people GmbH, a Germany-based professional services company. At customer sites, he works as a Senior Technical Architect with a database-centric view of the world. Robert is Portfolio Manager Database Technologies at its-people. At DOAG (the German Oracle user group), he is in charge of the Cloud topics inside the database community. Robert is a member of ODTUG, IOUG and RMOUG. More than twenty years of experience in custom development and data warehouse projects make him a specialist in Oracle database development and administration. Additionally, he is skilled in the fields of scripting languages, operating systems, virtualization and cloud technologies. Robert publishes articles and papers in various publications, maintains the its-people portfolio blog and speaks frequently at mostly German conferences.
Personal Interests
Robert is a passionate amateur photographer; his main models are his wife and his little daughter. He loves to travel with his family, and vacations always include extended hiking. Robert has a well-equipped home cinema and owns more than two thousand movies on Blu-ray and DVD. Prior to his daughter’s birth, he went to the cinema with his wife more than sixty times a year. He is famous for identifying movies while walking past a TV after watching only a couple of seconds.
site: https://robbie.databee.org/
twitter: https://twitter.com/RobbieDatabee
linkedin: https://www.linkedin.com/in/robbieddatabee

Nikitas Xenakis – Nikitas is a Platform Specialist-Architect with over 17 years of experience in the architecture, design and implementation of enterprise-scale, business-critical solutions, underpinned by Oracle Database and Middleware technologies in a 24x7x365 real-time, multi-platform, multi-site/cloud landscape. He is an active member of the Global Leaders for TP and CAB programs and a frequent speaker and presenter at OOW, UKOUG and DOAG.
site: http://nikitasxenakis.wordpress.com/
twitter: https://www.twitter.com/Nikitas_Xenakis
linkedin: https://www.linkedin.com/in/nikitasxenakis
Sponsor



Converting Dictionary Managed Tablespaces (DMT) to Locally Managed Tablespaces (LMT)
Today we found some really old databases using a really old-fashioned way of allocating tablespaces. In another life, when I began with the Oracle database and we still called MOSC “Metalink” (@ludodba), extents were managed through the data dictionary, in what is called DICTIONARY MANAGED TABLESPACES (DMT). Mostly because of performance issues, Oracle decided to change this: instead of extent management residing in the data dictionary (the SYSTEM tablespace), it is stored in bitmaps inside the tablespace itself, which is why it is called a LOCALLY MANAGED TABLESPACE (LMT). This feature arrived with Oracle 8i and became the default in 9i, around 2001, almost 20 years ago, and we still see this old stuff running around.
The steps we will perform are very fast and simple, but since we are changing some core things in the database, please make sure to have a full and reliable backup before starting.
Here is an important note, a little bit confusing, so please read it before proceeding. In summary, after converting SYSTEM, you can no longer put a dictionary-managed tablespace in read/write mode.
********* IMPORTANT NOTE **********
Note: After the SYSTEM tablespace is migrated to locally managed, any dictionary-managed tablespaces in the database cannot be made read/write.
If you want to use the dictionary-managed tablespaces in read/write mode, then Oracle recommends
that you first migrate these tablespaces to locally managed before migrating the SYSTEM tablespace.
***********************************
1) Put DATABASE in restricted session
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
System altered.
SELECT logins FROM v$instance;
LOGINS
---------
RESTRICTED
2) Define a default temporary tablespace on the database
SQL> col PROPERTY_NAME form a30
SQL> col PROPERTY_VALUE form a30
SQL> col DESCRIPTION form a50
SQL> select * from database_properties where PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';
PROPERTY_NAME PROPERTY_VALUE DESCRIPTION
------------------------------ ------------------------------ --------------------------------------------------
DEFAULT_TEMP_TABLESPACE SYSTEM ID of default temporary tablespace
Obs.: Really bad: using the SYSTEM tablespace to store temporary segments. Some DBA here didn’t do his homework… in fact, his job.
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMPTBS01;
Database altered.
Obs.: Check if TEMP is dictionary managed; if so, recreate it before proceeding with the others (see the sketch after the check below).
SQL> select * from database_properties where PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';
PROPERTY_NAME PROPERTY_VALUE DESCRIPTION
------------------------------ ------------------------------ --------------------------------------------------
DEFAULT_TEMP_TABLESPACE TEMPTBS01 ID of default temporary tablespace
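If the check shows a dictionary-managed temporary tablespace, the sketch below is one way to replace it. The tablespace name, tempfile path and sizes are placeholders, so adjust them to your environment:
-- Check how the temporary tablespace is managed.
select tablespace_name, contents, extent_management
  from dba_tablespaces
 where contents = 'TEMPORARY';
-- If it is DICTIONARY managed, create a locally managed one, switch the
-- default and drop the old one (names and paths below are placeholders).
CREATE TEMPORARY TABLESPACE TEMPTBS02
  TEMPFILE '/u01/oradata/DB/temptbs02.dbf' SIZE 2G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMPTBS02;
DROP TABLESPACE TEMPTBS01 INCLUDING CONTENTS AND DATAFILES;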
3) Identify all other DICTIONARY MANAGED tablespaces, which must be migrated before SYSTEM
select tablespace_name, status, extent_management
from dba_tablespaces
where extent_management = 'DICTIONARY';
4) Execute the migration
select 'execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('''||tablespace_name||''');' cmd
from dba_tablespaces where extent_management = 'DICTIONARY'
and tablespace_name <> 'SYSTEM';
CMD
----------------------------------------------------------------------------------------
execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('INDX');
execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('TOOLS');
execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('TESTTBS1');
execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('TESTTBS2');
execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('USERS');
SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('INDX');
PL/SQL procedure successfully completed.
SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('TOOLS');
PL/SQL procedure successfully completed.
SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('TESTTBS1');
PL/SQL procedure successfully completed.
SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('TESTTBS2');
PL/SQL procedure successfully completed.
SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('USERS');
PL/SQL procedure successfully completed.
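As an alternative to generating and pasting the commands, the migration can also be done in a single anonymous block. This is just a sketch using the same filter as the query above:
-- Migrate every dictionary-managed tablespace except SYSTEM.
BEGIN
  FOR t IN (SELECT tablespace_name
              FROM dba_tablespaces
             WHERE extent_management = 'DICTIONARY'
               AND tablespace_name <> 'SYSTEM') LOOP
    DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL(t.tablespace_name);
  END LOOP;
END;
/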
5) Make sure that all tablespaces other than SYSTEM were migrated to LMT before migrating SYSTEM itself. If you leave any dictionary-managed tablespace (DMT) behind, you will not be able to put it in read/write mode anymore.
select tablespace_name, status, extent_management
from dba_tablespaces
where extent_management = 'DICTIONARY'
and tablespace_name <> 'SYSTEM';
no rows selected
6) Put all tablespaces other than SYSTEM, SYSAUX, UNDO and TEMP in read-only mode.
select 'ALTER TABLESPACE '||tablespace_name||' READ ONLY;' CMD
from dba_tablespaces
where tablespace_name NOT in ('SYSTEM','SYSAUX')
and contents not in ('UNDO','TEMPORARY');
CMD
----------------------------------------------------------
ALTER TABLESPACE INDX READ ONLY;
ALTER TABLESPACE TOOLS READ ONLY;
ALTER TABLESPACE TESTTBS1 READ ONLY;
ALTER TABLESPACE TESTTBS2 READ ONLY;
ALTER TABLESPACE USERS READ ONLY;
7) Check the status of the tablespaces
SQL> select tablespace_name, status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
INDX READ ONLY
TOOLS READ ONLY
TESTTBS1 READ ONLY
TESTTBS2 READ ONLY
USERS READ ONLY
UNDOTBS01 ONLINE
TEMPTBS01 ONLINE
SYSAUX ONLINE
8) Put SYSAUX tablespace offline
SQL> alter tablespace SYSAUX offline;
Tablespace altered.
9) Check the status of the tablespaces again; only UNDO, TEMP and SYSTEM must be online.
SQL> select tablespace_name, status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
INDX READ ONLY
TOOLS READ ONLY
TESTTBS1 READ ONLY
TESTTBS2 READ ONLY
USERS READ ONLY
UNDOTBS01 ONLINE
TEMPTBS01 ONLINE
SYSAUX OFFLINE
10) Now we can migrate the SYSTEM tablespace
SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('SYSTEM');
PL/SQL procedure successfully completed.
11) Put the tablespaces back in read/write mode and online
select 'ALTER TABLESPACE '||tablespace_name||' READ WRITE;' CMD
from dba_tablespaces
where tablespace_name NOT in ('SYSTEM','SYSAUX')
and contents not in ('UNDO','TEMPORARY');
CMD
-----------------------------------------------------------
ALTER TABLESPACE INDX READ WRITE;
ALTER TABLESPACE TOOLS READ WRITE;
ALTER TABLESPACE TESTTBS1 READ WRITE;
ALTER TABLESPACE TESTTBS2 READ WRITE;
ALTER TABLESPACE USERS READ WRITE;
SQL> ALTER TABLESPACE INDX READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE TOOLS READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE TESTTBS1 READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE TESTTBS2 READ WRITE;
Tablespace altered.
SQL> ALTER TABLESPACE USERS READ WRITE;
Tablespace altered.
12) Put SYSAUX online again
SQL> ALTER TABLESPACE SYSAUX ONLINE;
Tablespace altered.
13) Check the status of the tablespaces one more time
SQL> select tablespace_name, status, extent_management from dba_tablespaces;
TABLESPACE_NAME STATUS EXTENT_MAN
------------------------------ --------- ----------
SYSTEM ONLINE LOCAL
INDX ONLINE LOCAL
TOOLS ONLINE LOCAL
TESTTBS1 ONLINE LOCAL
TESTTBS2 ONLINE LOCAL
USERS ONLINE LOCAL
UNDOTBS01 ONLINE LOCAL
TEMPTBS01 ONLINE LOCAL
SYSAUX ONLINE LOCAL
14) Disable restricted session on the database
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;
System altered.
SELECT logins FROM v$instance;
LOGINS
---------
ALLOWED
If you don’t follow all the steps carefully, you can hit issues like the ones below:
SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('SYSTEM');
ERROR at line 1:
ORA-10644: SYSTEM tablespace cannot be default temporary tablespace
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 227
ORA-06512: at line 1
SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('SYSTEM');
ERROR at line 1:
ORA-10647: Tablespace other than SYSTEM, UNDO, TEMP not
found in read only mode
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 227
ORA-06512: at line 1
SQL> select tablespace_name from dba_tablespaces;
TABLESPACE_NAME
------------------------------
SYSTEM
TEMP
USERS
IDX
SYSAUX
UNDO
SQL> alter tablespace USERS read only;
SQL> alter tablespace IDX read only;
SQL> execute DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL ('SYSTEM');
ERROR at line 1:
ORA-10648: Tablespace SYSAUX is not offline
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 227
ORA-06512: at line 1
I hope this helps you!!!
Rodrigo Mufalani and Andre Luiz Dutra Ontalba
Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed to make it suitable for a generic audience and useful for the community.”


Internal Exception: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
TIPS: Change the Database Hostname After the OAM Security Store Is Configured
Issue:
The database hostname needs to be changed after the OAM security store is configured.
Info: Data source is: opss-DBDS
[EL Severe]: 2017-03-14 20:39:37.575--ServerSession(1547285287)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.1.v20111018-r10243): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
Error Code: 17002
JAN 07, 2020 8:39:37 PM oracle.security.jps.internal.common.config.AbstractSecurityStore getSecurityStoreVersion
WARNING: Unable to get the Version from Store returning the default. Reason: java.net.ConnectException: Connection refused.
[EL Severe]: 2020-01-20 20:39:37.978--ServerSession(1619843188)--Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.1.v20111018-r10243): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
Error Code: 17002
JAN 07, 2020 8:39:37 PM oracle.security.jps.internal.credstore.ldap.LdapCredentialStore init
WARNING: Could not create credential store instance. Reason oracle.security.jps.service.policystore.PolicyStoreConnectivityException: JPS-00027: There was an internal error: java.net.ConnectException: Connection refused
JPS-01055: Could not create credential store instance. Reason oracle.security.jps.service.policystore.PolicyStoreConnectivityException: JPS-00027: There was an internal error: java.net.ConnectException: Connection refused
Error: Diagnostics data was not saved to the credential store.
Error: Validate operation has failed.
Need to do the security configuration first!
Changes:
Log in to the WebLogic console and modify the connection pools below:
Navigate to “Services > Datasources > oamDS > Connection Pool” and modify the connection details.
Navigate to “Services > Datasources > opss-DBDS > Connection Pool” and modify the connection details.
If the WebLogic console is not accessible, modify the two files oam-db-jdbc.xml and opss-jdbc.xml under /user_projects/domains/base_domain/config/jdbc.
Log in to the server and navigate to the location below:
/user_projects/domains/base_domain/config/fmwconfig/
Modify the jps-config-jse.xml, jps-config-migration.xml and jps-config.xml files.
Modify the “jdbc.url” property and update it with the new hostname (see the sketch after these steps).
Restart the Admin server and the managed servers.
Repeat step 2 on all your cluster nodes.
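For reference, the property to update in each of those files looks roughly like the line below; the hostname, port and service name are placeholder values for illustration:
<!-- Placeholders: replace host, port and service with your new values. -->
<property name="jdbc.url" value="jdbc:oracle:thin:@//new-db-host.example.com:1521/OAMDB"/>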
Expected error if the jps-config-jse.xml, jps-config-migration.xml and jps-config.xml files are not modified.
CHEERS !
Cassio Mariani
Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed to make it suitable for a generic audience and useful for the community.”