New Features - OCI
Today I decided to write a post about the new features that have been released in OCI in recent months.
I will write detailed articles on each of these features as soon as possible.
Let's go now with the new features.
𝐎𝐒 𝐌𝐚𝐧𝐚𝐠𝐞𝐦𝐞𝐧𝐭 𝐒𝐞𝐫𝐯𝐢𝐜𝐞
– Service Available from OCI Console
– Managed instances can be managed individually or in groups.
– Easy instance grouping, by OS or by purpose of the instances, enabling easy bulk operations.
– Select or create custom software sources.
– Immediate execution as a work request, or scheduled jobs, for OS package management.
– Checks instances for known Common Vulnerabilities and Exposures (CVEs).
– Agent-based service for Linux 6, 7, or 8 instances.
– OS Management is enabled by default; only the agent install is needed, via yum.
– For the service to operate, the agent (osms-agent) must be installed.
– Compute + OS Management = Managed Instance
– Enables Package Management – Monitor, Search, Add, Remove, Update
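To give an idea of how the service can be driven programmatically, here is a minimal sketch using the OCI Python SDK to list the managed instances of a compartment. The compartment OCID is a placeholder, and I am assuming the SDK's OS Management client (oci.os_management) is available in your SDK version.

# Minimal sketch: list OS Management managed instances with the OCI Python SDK.
# The compartment OCID is a placeholder; replace it with one from your tenancy.
import oci

config = oci.config.from_file()                      # reads ~/.oci/config by default
osms = oci.os_management.OsManagementClient(config)

compartment_id = "ocid1.compartment.oc1..example"    # placeholder
for mi in osms.list_managed_instances(compartment_id=compartment_id).data:
    print(mi.display_name, mi.id)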
𝐁𝐥𝐨𝐜𝐤 𝐕𝐨𝐥𝐮𝐦𝐞 𝐒𝐡𝐚𝐫𝐢𝐧𝐠 𝐑𝐞𝐚𝐝-𝐖𝐫𝐢𝐭𝐞 𝐌𝐨𝐝𝐞
– Two options were introduced: read/write shareable and read-only shareable.
– Read/write sharing needs a clusterware solution on top of the shared block volume.
– Read-only attachments are available in shareable mode to multiple instances.
– No concurrency access control or clusterware solution is needed for read-only attachments.
– Volume groups, disk-to-disk clones, and automated backups are available for shared block volumes.
– The default attachment from Compute is non-shareable and exclusive read/write.
– Available in all regions via the OCI Console, CLI, SDK, and Terraform.
– Other vendors provide only read-only shared block volumes and have limitations on read/write shareable attachments.
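As an illustration, the sketch below uses the OCI Python SDK to attach a block volume in shareable read/write mode; a paravirtualized attachment is assumed and the OCIDs are placeholders. Repeating the same call for a second instance with the same volume gives both instances read/write access, but a clusterware or cluster-aware file system is still required to coordinate the writes.

# Minimal sketch: attach a block volume in shareable read/write mode.
# Instance and volume OCIDs are placeholders; a paravirtualized attachment is assumed.
import oci

config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

details = oci.core.models.AttachParavirtualizedVolumeDetails(
    instance_id="ocid1.instance.oc1..example",   # placeholder
    volume_id="ocid1.volume.oc1..example",       # placeholder
    is_shareable=True,                           # shareable attachment
    is_read_only=False                           # set to True for read-only sharing
)
attachment = compute.attach_volume(details).data
print(attachment.id, attachment.lifecycle_state)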
Get Back the Deleted Compartment
– Once a compartment is deleted, its name gets a random string appended to it.
– Use the deleted-compartments filter to get the list.
– The audit retention period is 90 to 365 days.
– The Restore Compartment option allows you to restore the compartment with the same appended-string name.
– It will then be listed again in the compartment list.
– All policies that referred to the deleted compartment will now point to the restored compartment name.
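As a sketch of how this could be done outside the console, the snippet below uses the OCI Python SDK to list compartments in the DELETED state and restore one of them; the OCIDs are placeholders, and the lifecycle_state filter is how I would expect the deleted compartments to be found.

# Minimal sketch: find deleted compartments and restore one with the OCI Python SDK.
# The compartment OCID is a placeholder; replace it with a value from your tenancy.
import oci

config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)

# Compartments in the DELETED state still appear here, with the random string
# appended to their names.
deleted = identity.list_compartments(
    compartment_id=config["tenancy"],
    lifecycle_state="DELETED"
).data
for c in deleted:
    print(c.name, c.id)

# Restore one of them; it keeps the appended-string name and shows up again
# in the compartment list.
identity.recover_compartment(compartment_id="ocid1.compartment.oc1..example")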
I hope this helps you!!!
Andre Luiz Dutra Ontalba
Disclaimer: “The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies, or opinions. The information here was edited to be useful for general purposes; specific data and identifiers were removed to allow it to reach a generic audience and to be useful.”
ZDLRA Internals, Tables and Storage
ZDLRA tables are owned by the rasys user and are an extension of the RMAN recovery catalog views. The internal tables that were added are just a few, but they are important to understand how tasks work (mainly INDEX_BACKUP). In this post, I will show tables like PLANS, PLANS_DETAILS, BLOCKS, and CHUNKS, and besides that, give a little glimpse of how the files are stored.
Extra tables
If you check, officially, the RMAN recovery catalog already includes some ZDLRA columns; the column VB_KEY, which identifies the virtual backup key inside ZDLRA, exists in RC_BACKUP_PIECE as an example. The extra tables that were added are linked with SBT tasks (clone to tape or cloud), replication (to another ZDLRA), storage (chunks), and virtual full backups (VBDF and PLANS).
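Just to illustrate the VB_KEY column mentioned above, here is a minimal sketch (python-oracledb, connecting to the ZDLRA catalog) that lists the backup pieces of one database together with their virtual backup keys; the connection details and the db_name value are placeholders.

# Minimal sketch: show the VB_KEY column that ZDLRA adds to the recovery catalog
# views. Connection details and the database name are placeholders.
import oracledb

conn = oracledb.connect(user="rasys", password="change_me", dsn="zdlra-scan/zdlra")
with conn.cursor() as cur:
    cur.execute("""
        SELECT bp_key, vb_key, handle
          FROM rc_backup_piece
         WHERE db_name = :db_name
    """, db_name="ORCL")
    for bp_key, vb_key, handle in cur:
        print(bp_key, vb_key, handle)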
The most important tables are listed below.
RASYS.VBDF: This table stores the reference for the virtual full backups created for each datafile. It has the information about the backup piece used as the source (base) for the virtual full backup and the reference to the new backup piece that was generated at virtual full creation. The most important columns are:
VB_KEY: The unique identifier of the virtual backup.
DF_KEY: The reference to the datafile key for the virtual full backup.
CKP_ID and CKP_SCN: Checkpoint information (id/scn) for this virtual full backup.
SRCBP_KEY: The reference to the original backup piece (source) that was used to generate the virtual full backup. It links to the table BP.
VCBP_KEY: The reference to the derived backup piece that was created during the indexing of the original (source) backup piece.
RASYS.PLANS: This table stores the plans that link the datafile and the virtual full backup. Basically, it is the reference "index" and links the virtual backup, the datafile, and the number of chunks that store it. Important columns:
VB_KEY: The unique identifier of the virtual backup.
DF_KEY: The reference to the datafile key for the virtual full backup.
NUMCHUNKS: The number of chunks that store (or will store) this virtual full backup.
RASYS.PLANS_DETAILS: IOT table that stores the detailed information of the plan, the "index" itself. It is partitioned over the DF_KEY column and links the virtual full backup, the datafile, the block number, and the chunk identification. It tells you in which chunk each block that is part of the virtual full backup is stored. Important columns:
VB_KEY: The unique identifier of the virtual backup.
DF_KEY: The reference to the datafile key for the virtual full backup.
BLOCKNO: The block number within the datafile.
NUMBLKS: The number of sequential blocks, starting from BLOCKNO, that are stored together.
CHUNKNO: The chunk id that stores the blocks from BLOCKNO to BLOCKNO + NUMBLKS.
RASYS.BLOCKS: IOT table that stores the info for each block that belongs to the datafile. It is partitioned over the DF_KEY column. There is no link to any backup, but it stores each version of a single block that is needed to restore any backup (the block number needed can be found in PLANS_DETAILS, and the SCN comes from other tables). Important columns:
DF_KEY: The reference to the datafile key for the virtual full backup.
BLOCKNO: The block number within the datafile.
SCN and CKP_ID: Checkpoint and SCN for this specific version of the block.
RASYS.CHUNK: IOT table that has the information about the chunks that were used to store the backups for the datafile. Important columns:
DF_KEY: The reference to the datafile key for the virtual full backup.
CHUNKNO: The chunk id.
FILESIZE: The size of this chunk.
FILENAME: The derived file name that stores the chunk (it does not represent the actual file inside ASM).
These are the most important tables for the internal part of ZDLRA. They are not needed for day-to-day activities, but I think they are useful if you want to understand how things work internally. The list above is just a glimpse of the internal tables, but based on it you can easily identify which datafile blocks belong to a specific virtual backup, where the blocks are stored (which chunk), and how the virtual full backup was built (the plan that contains all the blocks).
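To make that concrete, here is a minimal sketch (python-oracledb, connecting as rasys) that joins PLANS_DETAILS and CHUNK to show, for one virtual full backup, which chunk file stores each block range. The connection details and the VB_KEY/DF_KEY values are placeholders, and the column list simply follows the descriptions above.

# Minimal sketch: for one virtual full backup, map block ranges to the chunk files
# that store them, joining PLANS_DETAILS and CHUNK. Connection details, VB_KEY and
# DF_KEY are placeholders; the columns used follow the descriptions in this post.
import oracledb

conn = oracledb.connect(user="rasys", password="change_me", dsn="zdlra-scan/zdlra")
sql = """
    SELECT pd.blockno,
           pd.numblks,
           pd.chunkno,
           c.filename,
           c.filesize
      FROM rasys.plans_details pd
      JOIN rasys.chunk c
        ON c.df_key  = pd.df_key
       AND c.chunkno = pd.chunkno
     WHERE pd.vb_key = :vb_key
       AND pd.df_key = :df_key
     ORDER BY pd.blockno
"""
with conn.cursor() as cur:
    cur.execute(sql, vb_key=12345, df_key=678)   # placeholder keys
    for blockno, numblks, chunkno, filename, filesize in cur:
        print(f"blocks {blockno}-{blockno + numblks - 1} -> chunk {chunkno} "
              f"({filename}, {filesize} bytes)")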
Storage
As you saw above, the blocks are stored in chunks, and in the CHUNK table we have the filename information. But this is not where the data is really stored; other structures are used too. All of these tables reside in the RA_POOL tablespace, but they do not store the backup data itself. The structure that does this is the "CONTAINER", which represents files inside the ASM diskgroup.
In the ZDLRA documentation this is called the delta pool (containers and chunks), and it is stored inside the "delta store" (an ASM diskgroup). For the ASM diskgroups you can see CATALOG and DELTA. CATALOG, as I wrote before, stores the metadata about the backups (mostly the RMAN catalog views), and DELTA (the delta store) holds the backup data itself.
For tablespaces, there are two important ones: RA_POOL and RA_CATALOG. The tables listed before (in the "Extra tables" topic) reside in the RA_POOL tablespace. But the backup data itself is stored in CONTAINER files in the DELTA diskgroup, and you can check them in the table SYS.AMCONT$. Another option is to go to the ASM diskgroup and check the list of CONTAINER files under DELTA (each file has a default size of 2 TB):
ASMCMD> pwd
+DELTA/ZDLRA/CONTAINER
ASMCMD> ls -s
Block_Size Blocks Bytes Space Name
512 4294959104 2199019061248 4398059094016 con.561.935405781
512 4294959104 2199019061248 4398059094016 con.562.935405789
512 4294959104 2199019061248 4398059094016 con.563.935405797
512 4294959104 2199019061248 4398059094016 con.564.935405805
…
…
512 4294959104 2199019061248 4398059094016 con.656.935406489
512 308051968 157722607616 315692679168 con.657.935406495
ASMCMD>
You can see more information by checking the SYS.AM* tables and views (SYS.AMAU$, SYS.AMCONT$, and SYS.AM$CONTAINER are examples). The container files are not even listed in the ZDLRA database as datafiles (in dba_data_files); they are created directly by the ZDLRA library.
You can even test the performance of creating these files (before starting to use the ZDLRA) with rastat.pl: https://docs.oracle.com/cd/E88198_01/AMAGD/amagd_monitor.htm#AMAGD1503. And this is the only information in the docs about CONTAINER files.
One last piece of information about the ZDLRA database and storage: the database operates in NOARCHIVELOG mode. Its backups are daily exports of the metadata, and in case of corruption or loss, the information can be reconstructed by reading the con (container) files.
Conclusion
Understanding the tables involved in ZDLRA is important to see what is happening internally, how the index is stored, and how the virtual full backup works. The tables listed above are not used on a daily basis for ZDLRA; you usually go to the RMAN catalog views (RC_*), and everything is there.
But PLANS, PLANS_DETAILS, and VCBP are the base for a task like INDEX_BACKUP and for sustaining virtual full backups. I needed to go over these tables before explaining in more detail the virtual full creation and the INDEX_BACKUP task.
The storage part, the containers and the ASM diskgroups, is needed just to help identify a bottleneck or problem. You don't need to use it, but I think it is important to show you, at least superficially, how it works.
Disclaimer: “The postings on this site are my own and don't necessarily represent my actual employer's positions, strategies, or opinions. The information here was edited to be useful for general purposes; specific data and identifiers were removed to allow it to reach a generic audience and to be useful for the community.”