Applying ExaCC DomU patch
Category: Engineer System Author: César Carvalho Date: 2 years ago Comments: 0


Always read Oracle technical notes before applying any patches.

 

Technical References
DomU upgrades and cloud tooling

 

#### We will apply the patch in a RAC environment. ####
#### Performing Oracle Home backup of RAC nodes. ####
[oracle@srv01 ~]$ echo $ORACLE_HOME
/u02/app/oracle/product/11.2.0/dbhome_2

#### Node 1 ####
[root@srv01 ~]$ cd /u02/app/oracle/product/11.2.0
[root@srv01 ~]$ tar -pcvf /backup/cesar_update/oracle_home_srv01.tar dbhome_2
[root@srv01 ~]$ cd /u01/app/
[root@srv01 ~]$ tar -pcvf /backup/cesar_update/srv01_oraInventory.tar oraInventory

#### Node 2 ####
[root@srv02 ~]$ cd /u02/app/oracle/product/11.2.0
[root@srv02 ~]$ tar -pcvf /backup/cesar_update/oracle_home_srv02.tar dbhome_2
[root@srv02 ~]$ cd /u01/app/
[root@srv02 ~]$ tar -pcvf /backup/cesar_update/srv02_oraInventory.tar oraInventory
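
Before moving on, it is worth confirming the archives are readable; a quick sanity check (same paths as above, not part of the original run):

#### Verifying the backup archives ####
[root@srv01 ~]$ tar -tvf /backup/cesar_update/oracle_home_srv01.tar | head -5
[root@srv02 ~]$ tar -tvf /backup/cesar_update/oracle_home_srv02.tar | head -5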

#### Checking RAC nodes Status ####
[oracle@srv01 ~]$ srvctl status database -d DBPROD
Instance DBPROD1 is running on node srv01
Instance DBPROD2 is running on node srv02

#### Checking version of dbaastools installed ####
[root@srv01 11.2.0]# rpm -qa|grep -i dbaastools
dbaastools_exa-1.0-1+19.1.1.1.0_211221.1316.x86_64

[root@srv01 11.2.0]# dbaascli patch tools list
DBAAS CLI version 19.1.1.1.0
Executing command patch tools list

[root@srv02 11.2.0]# rpm -qa|grep -i dbaastools
dbaastools_exa-1.0-1+19.1.1.1.0_211221.1316.x86_64

[root@srv02 11.2.0]# dbaascli patch tools list
DBAAS CLI version 19.1.1.1.0
Executing command patch tools list

#### Check if the patch download url is the same on all nodes ####
[root@srv01 exapatch]# cat /var/opt/oracle/exapatch/exadbcpatch.cfg |grep oss_container_url
[root@srv02 exapatch]# cat /var/opt/oracle/exapatch/exadbcpatch.cfg |grep oss_container_url
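
If root SSH equivalence between the nodes is configured (an assumption, not shown here), the two values can also be compared in a single step; an empty diff means they match:

[root@srv01 exapatch]# diff <(grep oss_container_url /var/opt/oracle/exapatch/exadbcpatch.cfg) <(ssh srv02 grep oss_container_url /var/opt/oracle/exapatch/exadbcpatch.cfg)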

#### Check if the url is accessible ####
[root@srv01 exapatch]# curl -v -O "<URL obtained above>"
[root@srv02 exapatch]# curl -v -O "<URL obtained above>"
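
To test reachability without downloading the whole file, a lighter check is to ask curl for just the HTTP status code (anything in the 2xx/3xx range means the URL is reachable):

[root@srv01 exapatch]# curl -s -o /dev/null -w "%{http_code}\n" "<URL obtained above>"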

#### Export environment variables and check available patches to be applied ####
#### Node 1 ####
export ORACLE_BASE=/u02/app/oracle
export ORACLE_HOME=/u02/app/oracle/product/11.2.0/dbhome_2
export PATH=${ORACLE_HOME}/bin:$PATH
export ORACLE_SID=DBPROD1
echo $ORACLE_SID

#### Node 2 ####
export ORACLE_BASE=/u02/app/oracle
export ORACLE_HOME=/u02/app/oracle/product/11.2.0/dbhome_2
export PATH=${ORACLE_HOME}/bin:$PATH
export ORACLE_SID=DBPROD2
echo $ORACLE_SID

[root@srv02 ~]# dbaascli patch db list --oh srv02:/u02/app/oracle/product/11.2.0/dbhome_2
DBAAS CLI version 19.1.1.1.0
Executing command patch db list --oh srv02:/u02/app/oracle/product/11.2.0/dbhome_2
INFO : EXACS patching

Available Patches
patchid :26610265 (DB 11.2.0.4.170814 QUARTERLY DATABASE PATCH FOR EXADATA - Aug 2017)
patchid :26635694 (DB 11.2.0.4.171017 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2017)
patchid :27011043 (DB 11.2.0.4.180116 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2018)
patchid :27475722 (DB 11.2.0.4.180417 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2018)
patchid :27980213 (DB 11.2.0.4.180717 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2018)
patchid :28462975 (DB 11.2.0.4.181016 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2018)
patchid :28833571 (DB 11.2.0.4.190115 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2019)
patchid :29257245 (DB 11.2.0.4.190416 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2019)
patchid :29698813 (DB 11.2.0.4.190716 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2019)
patchid :30070157 (DB 11.2.0.4.191015 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2019)
patchid :30501894 (DB 11.2.0.4.200114 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2020)
patchid :30805507 (DB 11.2.0.4.200414 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2020)
patchid :31220011 (DB 11.2.0.4.200714 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2020)
patchid :31718644 (DB 11.2.0.4.201020 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2020)
patchid :32131241 (DB 11.2.0.4.210119 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2021)
patchid :32537095 (DB 11.2.0.4.210420 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2021)
patchid :32917411 (DB 11.2.0.4.210720 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2021)
patchid :33248386 (DB 11.2.0.4.211019 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2021)
patchid :33575241 (DB 11.2.0.4.220118 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2022)
Install database patch using
dbaascli patch db apply --patchid 33575241 --dbnames <>

[root@srv01 ~]# dbaascli patch db list --oh srv01:/u02/app/oracle/product/11.2.0/dbhome_2
DBAAS CLI version 19.1.1.1.0
Executing command patch db list --oh srv01:/u02/app/oracle/product/11.2.0/dbhome_2
INFO : EXACS patching

Available Patches
patchid :26610265 (DB 11.2.0.4.170814 QUARTERLY DATABASE PATCH FOR EXADATA - Aug 2017)
patchid :26635694 (DB 11.2.0.4.171017 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2017)
patchid :27011043 (DB 11.2.0.4.180116 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2018)
patchid :27475722 (DB 11.2.0.4.180417 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2018)
patchid :27980213 (DB 11.2.0.4.180717 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2018)
patchid :28462975 (DB 11.2.0.4.181016 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2018)
patchid :28833571 (DB 11.2.0.4.190115 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2019)
patchid :29257245 (DB 11.2.0.4.190416 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2019)
patchid :29698813 (DB 11.2.0.4.190716 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2019)
patchid :30070157 (DB 11.2.0.4.191015 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2019)
patchid :30501894 (DB 11.2.0.4.200114 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2020)
patchid :30805507 (DB 11.2.0.4.200414 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2020)
patchid :31220011 (DB 11.2.0.4.200714 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2020)
patchid :31718644 (DB 11.2.0.4.201020 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2020)
patchid :32131241 (DB 11.2.0.4.210119 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2021)
patchid :32537095 (DB 11.2.0.4.210420 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2021)
patchid :32917411 (DB 11.2.0.4.210720 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2021)
patchid :33248386 (DB 11.2.0.4.211019 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2021)
patchid :33575241 (DB 11.2.0.4.220118 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2022)
Install database patch using
dbaascli patch db apply --patchid 33575241 --dbnames <>

#### Precheck all nodes before patchid 33575241 ####
[root@srv01 ~]$ dbaascli patch db prereq --patchid 33575241 --instance1 srv01:/u02/app/oracle/product/11.2.0/dbhome_2 --dbnames DBPROD -alldbs
[root@srv02 ~]$ dbaascli patch db prereq --patchid 33575241 --instance1 srv02:/u02/app/oracle/product/11.2.0/dbhome_2 --dbnames DBPROD -alldbs

#### Last lines of the log showing the precheck completed successfully ####
INFO: status of slave txn###: Precheck completed on srv01
INFO: -precheck_async completed on srv01:/u02/app/oracle/product/11.2.0/dbhome_2
INFO: Successfully released ohome lock. Proceeding to release local provisioning lock
INFO: Successfully released local provisioning lock
INFO: -precheck_async completed on all nodes

INFO: status of slave txn###: Precheck completed on srv02
INFO: -precheck_async completed on srv02:/u02/app/oracle/product/11.2.0/dbhome_2
INFO: Successfully released ohome lock. Proceeding to release local provisioning lock
INFO: Successfully released local provisioning lock
INFO: -precheck_async completed on all nodes
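
Before moving on to the apply step, it is also worth scanning the log for anything the precheck flagged (assuming the precheck writes to the same exadbcpatch.log used below):

[root@srv01 ~]# grep -iE "warn|error|fail" /var/opt/oracle/log/exadbcpatch/exadbcpatch.log | tail -20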

#### Apply patchid 33575241 on node srv02 ####
With --run_datasql 0 the post-patch SQL step is deferred; it is executed only once, on the last node, with --run_datasql 1.
[root@srv02 ~]$ nohup dbaascli patch db apply --patchid 33575241 --instance1 srv02:/u02/app/oracle/product/11.2.0/dbhome_2 --dbnames DBPROD --run_datasql 0 &

#### Tracking patch application logs ####
[root@srv02 ~]$ tail -f /var/opt/oracle/log/exadbcpatch/exadbcpatch.log
2022-05-19 18:07:00.950618 - Instance check cleared for node srv02 w.r.t nodelist
2022-05-19 18:07:00.950760 - INFO: deleting patching_progress, patched_ohome, patched_ohome_name from creg
2022-05-19 18:07:00.958084 - INFO: deleted patching_progress, patched_ohome, patched_ohome_name from creg
2022-05-19 18:07:00.958209 -
INFO: initpatch being run for post ecs patching
2022-05-19 18:07:00.958467 - Output from cmd /var/opt/oracle/misc/initpatch.pl ecsbppost run on localhost is:
INFO : No patch needed
2022-05-19 18:07:01.510654 - cmd took 0.551481008529663 seconds
2022-05-19 18:07:01.510964 - ##### INFO: Exadbcpatch completed successfully #####
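
With the apply finished on srv02, you can confirm the bundle patch is registered in the Oracle Home inventory (opatch lspatches is available in recent OPatch releases; otherwise use opatch lsinventory):

[oracle@srv02 ~]$ $ORACLE_HOME/OPatch/opatch lspatches | grep 33575241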

#### Apply patchid 33575241 on node srv01 ####
[root@srv02 ~]$ nohup dbaascli patch db apply --patchid 33575241 --instance1 srv01:/u02/app/oracle/product/11.2.0/dbhome_2 --dbnames DBPROD --run_datasql 1 &

#### Tracking patch application logs ####
[root@srv01 ~]$ tail -f /var/opt/oracle/log/exadbcpatch/exadbcpatch.log

2022-05-20 09:47:27.879789 - Instance check cleared for node srv01 w.r.t nodelist
2022-05-20 09:47:27.880011 - INFO: deleting patching_progress, patched_ohome, patched_ohome_name from creg
2022-05-20 09:47:27.891809 - INFO: deleted patching_progress, patched_ohome, patched_ohome_name from creg
2022-05-20 09:47:27.892030 -
INFO: initpatch being run for post ecs patching
2022-05-20 09:47:27.893285 - Output from cmd /var/opt/oracle/misc/initpatch.pl ecsbppost run on localhost is:
INFO : No patch needed
2022-05-20 09:47:28.447038 - cmd took 0.553269147872925 seconds
2022-05-20 09:47:28.447205 - ##### INFO: Exadbcpatch completed successfully #####
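
Because --run_datasql 1 executed the post-patch SQL on this node, the bundle should now also be registered inside the database; in 11.2, bundle patches are recorded in registry$history. A quick check (not part of the original run):

[oracle@srv01 ~]$ sqlplus -S / as sysdba <<'EOF'
set linesize 200
select action_time, action, version, comments from registry$history order by action_time;
EOF
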
César Carvalho – DBA
Contact: https://twitter.com/Cesar_DBA
https://sgbdbrasil.wordpress.com/

Applying ExaCC DomU patch dataguard
Category: Engineer System Author: César Carvalho Date: 2 years ago Comments: 0


 
 
Always read Oracle technical notes before applying any patches.
 
Technical References
DomU upgrades and cloud tooling
#### We will apply the patch in a RAC environment. ####
#### Performing Oracle Home backup of RAC nodes. ####
[oracle@srv01 ~]$ echo $ORACLE_HOME
/u02/app/oracle/product/11.2.0/dbhome_2

#### Node 1 ####
[root@srv01 ~]$ cd /u02/app/oracle/product/11.2.0
[root@srv01 ~]$ tar -pcvf /backup/cesar_update/oracle_home_srv01.tar dbhome_2
[root@srv01 ~]$ cd /u01/app/
[root@srv01 ~]$ tar -pcvf /backup/cesar_update/srv01_oraInventory.tar oraInventory

#### Node 2 ####
[root@srv02 ~]$ cd /u02/app/oracle/product/11.2.0
[root@srv02 ~]$ tar -pcvf /backup/cesar_update/oracle_home_srv02.tar dbhome_2
[root@srv02 ~]$ cd /u01/app/
[root@srv02 ~]$ tar -pcvf /backup/cesar_update/srv02_oraInventory.tar oraInventory

#### Checking RAC nodes Status ####
[oracle@srv01 ~]$ srvctl status database -d DBPROD
Instance DBPROD1 is running on node srv01
Instance DBPROD2 is running on node srv02

#### Checking version of dbaastools installed ####
[root@srv01 11.2.0]# rpm -qa|grep -i dbaastools
dbaastools_exa-1.0-1+19.1.1.1.0_211221.1316.x86_64

[root@srv01 11.2.0]# dbaascli patch tools list
DBAAS CLI version 19.1.1.1.0
Executing command patch tools list

[root@srv02 11.2.0]# rpm -qa|grep -i dbaastools
dbaastools_exa-1.0-1+19.1.1.1.0_211221.1316.x86_64

[root@srv02 11.2.0]# dbaascli patch tools list
DBAAS CLI version 19.1.1.1.0
Executing command patch tools list

#### Check if the patch download url is the same on all nodes ####
[root@srv01 exapatch]# cat /var/opt/oracle/exapatch/exadbcpatch.cfg |grep oss_container_url
[root@srv02 exapatch]# cat /var/opt/oracle/exapatch/exadbcpatch.cfg |grep oss_container_url

#### Check if the url is accessible ####
[root@srv01 exapatch]# curl -v -O "<URL obtained above>"
[root@srv02 exapatch]# curl -v -O "<URL obtained above>"

#### Export environment variables and check available patches to be applied ####
#### Node 1 ####
export ORACLE_BASE=/u02/app/oracle
export ORACLE_HOME=/u02/app/oracle/product/11.2.0/dbhome_2
export PATH=${ORACLE_HOME}/bin:$PATH
export ORACLE_SID=DBPROD1
echo $ORACLE_SID

#### Node 2 ####
export ORACLE_BASE=/u02/app/oracle
export ORACLE_HOME=/u02/app/oracle/product/11.2.0/dbhome_2
export PATH=${ORACLE_HOME}/bin:$PATH
export ORACLE_SID=DBPROD2
echo $ORACLE_SID

[root@srv02 ~]# dbaascli patch db list --oh srv02:/u02/app/oracle/product/11.2.0/dbhome_2
DBAAS CLI version 19.1.1.1.0
Executing command patch db list --oh srv02:/u02/app/oracle/product/11.2.0/dbhome_2
INFO : EXACS patching

Available Patches
patchid :26610265 (DB 11.2.0.4.170814 QUARTERLY DATABASE PATCH FOR EXADATA - Aug 2017)
patchid :26635694 (DB 11.2.0.4.171017 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2017)
patchid :27011043 (DB 11.2.0.4.180116 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2018)
patchid :27475722 (DB 11.2.0.4.180417 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2018)
patchid :27980213 (DB 11.2.0.4.180717 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2018)
patchid :28462975 (DB 11.2.0.4.181016 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2018)
patchid :28833571 (DB 11.2.0.4.190115 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2019)
patchid :29257245 (DB 11.2.0.4.190416 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2019)
patchid :29698813 (DB 11.2.0.4.190716 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2019)
patchid :30070157 (DB 11.2.0.4.191015 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2019)
patchid :30501894 (DB 11.2.0.4.200114 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2020)
patchid :30805507 (DB 11.2.0.4.200414 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2020)
patchid :31220011 (DB 11.2.0.4.200714 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2020)
patchid :31718644 (DB 11.2.0.4.201020 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2020)
patchid :32131241 (DB 11.2.0.4.210119 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2021)
patchid :32537095 (DB 11.2.0.4.210420 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2021)
patchid :32917411 (DB 11.2.0.4.210720 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2021)
patchid :33248386 (DB 11.2.0.4.211019 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2021)
patchid :33575241 (DB 11.2.0.4.220118 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2022)
Install database patch using
dbaascli patch db apply --patchid 33575241 --dbnames <>

[root@srv01 ~]# dbaascli patch db list --oh srv01:/u02/app/oracle/product/11.2.0/dbhome_2
DBAAS CLI version 19.1.1.1.0
Executing command patch db list --oh srv01:/u02/app/oracle/product/11.2.0/dbhome_2
INFO : EXACS patching

Available Patches
patchid :26610265 (DB 11.2.0.4.170814 QUARTERLY DATABASE PATCH FOR EXADATA - Aug 2017)
patchid :26635694 (DB 11.2.0.4.171017 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2017)
patchid :27011043 (DB 11.2.0.4.180116 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2018)
patchid :27475722 (DB 11.2.0.4.180417 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2018)
patchid :27980213 (DB 11.2.0.4.180717 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2018)
patchid :28462975 (DB 11.2.0.4.181016 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2018)
patchid :28833571 (DB 11.2.0.4.190115 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2019)
patchid :29257245 (DB 11.2.0.4.190416 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2019)
patchid :29698813 (DB 11.2.0.4.190716 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2019)
patchid :30070157 (DB 11.2.0.4.191015 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2019)
patchid :30501894 (DB 11.2.0.4.200114 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2020)
patchid :30805507 (DB 11.2.0.4.200414 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2020)
patchid :31220011 (DB 11.2.0.4.200714 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2020)
patchid :31718644 (DB 11.2.0.4.201020 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2020)
patchid :32131241 (DB 11.2.0.4.210119 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2021)
patchid :32537095 (DB 11.2.0.4.210420 QUARTERLY DATABASE PATCH FOR EXADATA - Apr 2021)
patchid :32917411 (DB 11.2.0.4.210720 QUARTERLY DATABASE PATCH FOR EXADATA - Jul 2021)
patchid :33248386 (DB 11.2.0.4.211019 QUARTERLY DATABASE PATCH FOR EXADATA - Oct 2021)
patchid :33575241 (DB 11.2.0.4.220118 QUARTERLY DATABASE PATCH FOR EXADATA - Jan 2022)
Install database patch using
dbaascli patch db apply --patchid 33575241 --dbnames <>

#### Precheck all nodes before patchid 33575241 ####
[root@srv01 ~]$ dbaascli patch db prereq --patchid 33575241 --instance1 srv01:/u02/app/oracle/product/11.2.0/dbhome_2 --dbnames DBPROD -alldbs
[root@srv02 ~]$ dbaascli patch db prereq --patchid 33575241 --instance1 srv02:/u02/app/oracle/product/11.2.0/dbhome_2 --dbnames DBPROD -alldbs

#### Last lines of the log showing the precheck completed successfully ####
INFO: status of slave txn###: Precheck completed on srv01
INFO: -precheck_async completed on srv01:/u02/app/oracle/product/11.2.0/dbhome_2
INFO: Successfully released ohome lock. Proceeding to release local provisioning lock
INFO: Successfully released local provisioning lock
INFO: -precheck_async completed on all nodes

INFO: status of slave txn###: Precheck completed on srv02
INFO: -precheck_async completed on srv02:/u02/app/oracle/product/11.2.0/dbhome_2
INFO: Successfully released ohome lock. Proceeding to release local provisioning lock
INFO: Successfully released local provisioning lock
INFO: -precheck_async completed on all nodes

################### DATAGUARD ###################
#### Apply patchid 33575241 on node srv02 (standby) ####
On a Data Guard standby, both nodes are patched with --run_datasql 0: the post-patch SQL is run on the primary database and reaches the standby through redo apply.
[root@srv02 ~]$ nohup dbaascli patch db apply --patchid 33575241 --instance1 srv02:/u02/app/oracle/product/11.2.0/dbhome_2 --dbnames DBPROD --run_datasql 0 &

#### Tracking patch application logs ####
[root@srv02 ~]$ tail -f /var/opt/oracle/log/exadbcpatch/exadbcpatch.log
2022-05-19 18:07:00.950618 - Instance check cleared for node srv02 w.r.t nodelist
2022-05-19 18:07:00.950760 - INFO: deleting patching_progress, patched_ohome, patched_ohome_name from creg
2022-05-19 18:07:00.958084 - INFO: deleted patching_progress, patched_ohome, patched_ohome_name from creg
2022-05-19 18:07:00.958209 -
INFO: initpatch being run for post ecs patching
2022-05-19 18:07:00.958467 - Output from cmd /var/opt/oracle/misc/initpatch.pl ecsbppost run on localhost is:
INFO : No patch needed
2022-05-19 18:07:01.510654 - cmd took 0.551481008529663 seconds
2022-05-19 18:07:01.510964 - ##### INFO: Exadbcpatch completed successfully #####

#### Apply patchid 33575241 on node srv01 (standby) ####
[root@srv02 ~]$ nohup dbaascli patch db apply --patchid 33575241 --instance1 srv01:/u02/app/oracle/product/11.2.0/dbhome_2 --dbnames DBPROD --run_datasql 0 &

#### Tracking patch application logs ####
[root@srv01 ~]$ tail -f /var/opt/oracle/log/exadbcpatch/exadbcpatch.log

2022-05-20 09:47:27.879789 - Instance check cleared for node srv01 w.r.t nodelist
2022-05-20 09:47:27.880011 - INFO: deleting patching_progress, patched_ohome, patched_ohome_name from creg
2022-05-20 09:47:27.891809 - INFO: deleted patching_progress, patched_ohome, patched_ohome_name from creg
2022-05-20 09:47:27.892030 -
INFO: initpatch being run for post ecs patching
2022-05-20 09:47:27.893285 - Output from cmd /var/opt/oracle/misc/initpatch.pl ecsbppost run on localhost is:
INFO : No patch needed
2022-05-20 09:47:28.447038 - cmd took 0.553269147872925 seconds
2022-05-20 09:47:28.447205 - ##### INFO: Exadbcpatch completed successfully #####
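
After both standby nodes are patched, it is worth confirming that the standby is still applying redo; a minimal check from sqlplus on the standby (standard 11.2 views, not part of the original run):

[oracle@srv01 ~]$ sqlplus -S / as sysdba <<'EOF'
select database_role, open_mode from v$database;
select process, status, sequence# from v$managed_standby;
EOF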

César Carvalho – DBA
Contact: https://twitter.com/Cesar_DBA
https://sgbdbrasil.wordpress.com/

ODA UPGRADE 19.x to 19.10
Category: Engineer System Author: Andre Luiz Dutra Ontalba (Board Member) Date: 3 years ago Comments: 0


Today I’m bringing an article with the steps to upgrade the ODA version to 19.10.
 
Before starting, we must check the minimum release so that we can perform the upgrade successfully.
https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/19.10/cmtrn/oda-patches.html#GUID-220DA05B-0F52-4EDA-84C9-BFD15F43802D
 
 
Before starting the patches, ensure that you have enough space on the /, /u01, and /opt filesystems.
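
A quick way to check (exact minimums depend on the release; see the patching documentation):

root@oda-duts-01 / # df -h / /u01 /opt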
 

1. Backup snapshot from ODA

odabr creates LVM snapshots of the root, /opt, and /u01 filesystems; the -osize, -rsize, and -usize flags set the snapshot sizes in GiB for /opt, root, and /u01 respectively, as the infosnap output below confirms.

root@oda-duts-01 / # /opt/odabr/odabr backup -snap -osize 35 -rsize 20 -usize 60

INFO: 2021-07-01 10:58:35: Please check the logfile '/opt/odabr/out/log/odabr_53053.log' for more details

│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│

 odabr - ODA node Backup Restore - Version: 2.0.1-58

 Copyright Oracle, Inc. 2013, 2020

 --------------------------------------------------------

 Author: Ruggero Citton <[email protected]>

 RAC Pack, Cloud Innovation and Solution Engineering Team

│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│

INFO: 2021-07-01 10:58:35: Checking superuser

INFO: 2021-07-01 10:58:35: Checking Bare Metal

INFO: 2021-07-01 10:58:35: Removing existing LVM snapshots

WARNING: 2021-07-01 10:58:36: LVM snapshot for 'opt' does not exist

WARNING: 2021-07-01 10:58:36: LVM snapshot for 'u01' does not exist

WARNING: 2021-07-01 10:58:36: LVM snapshot for 'root' does not exist

INFO: 2021-07-01 10:58:36: Checking LVM size

INFO: 2021-07-01 10:58:36: Boot device backup

INFO: 2021-07-01 10:58:36: Getting EFI device

INFO: 2021-07-01 10:58:36: ...step1 - unmounting EFI

INFO: 2021-07-01 10:58:36: ...step2 - making efi device backup

SUCCESS: 2021-07-01 10:58:40: ...EFI device backup saved as '/opt/odabr/out/hbi/efi.img'

INFO: 2021-07-01 10:58:40: ...step3 - checking EFI device backup

INFO: 2021-07-01 10:58:40: Getting boot device

INFO: 2021-07-01 10:58:40: ...step1 - making boot device backup using tar

SUCCESS: 2021-07-01 10:58:45: ...boot content saved as '/opt/odabr/out/hbi/boot.tar.gz'

INFO: 2021-07-01 10:58:45: ...step2 - unmounting boot

INFO: 2021-07-01 10:58:45: ...step3 - making boot device backup using dd

SUCCESS: 2021-07-01 10:58:50: ...boot device backup saved as '/opt/odabr/out/hbi/boot.img'

INFO: 2021-07-01 10:58:50: ...step4 - mounting boot

INFO: 2021-07-01 10:58:50: ...step5 - mounting EFI

INFO: 2021-07-01 10:58:50: ...step6 - checking boot device backup

INFO: 2021-07-01 10:58:51: OCR backup

INFO: 2021-07-01 10:58:53: ...ocr backup saved as '/opt/odabr/out/hbi/ocrbackup_53053.bck'

INFO: 2021-07-01 10:58:53: Making LVM snapshot backup

SUCCESS: 2021-07-01 10:58:53: ...snapshot backup for 'opt' created successfully

SUCCESS: 2021-07-01 10:58:55: ...snapshot backup for 'u01' created successfully

SUCCESS: 2021-07-01 10:58:55: ...snapshot backup for 'root' created successfully

SUCCESS: 2021-07-01 10:58:55: LVM snapshots backup done successfully

root@oda-duts-01 / # /opt/odabr/odabr infosnap

│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│

 odabr - ODA node Backup Restore - Version: 2.0.1-58

 Copyright Oracle, Inc. 2013, 2020

 --------------------------------------------------------

 Author: Ruggero Citton <[email protected]>

 RAC Pack, Cloud Innovation and Solution Engineering Team

│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│

LVM snap name         Status                COW Size              Data%

-------------         ----------            ----------            ------

root_snap             active                20.00 GiB             0.01%

opt_snap              active                35.00 GiB             0.01%

u01_snap              active                60.00 GiB             0.01%

 

2.  Unzip and update the repository

For the server patch, you will need the following files (to be downloaded from support.oracle.com):

-p32351355_1910000_Linux-x86-64_1of2.zip
-p32351355_1910000_Linux-x86-64_2of2.zip
 
The steps are as follows; all actions can be found in the documentation.

 

2.1 Copy the two files to oda-duts-01 and then unpack them

root@oda-duts-01 /backups # cd patches/

root@oda-duts-01 /backups/patches # ll

total 23037662

-rw-r--r-- 1 root root  3035715218 May 25 10:07 p23494997_1910000_Linux-x86-64.zip

-rw-r--r-- 1 root root 11467982793 May 25 11:18 p32351355_1910000_Linux-x86-64_1of2.zip

-rw-r--r-- 1 root root  9086867187 May 25 09:53 p32351355_1910000_Linux-x86-64_2of2.zip

root@oda-duts-01 /backups/patches # unzip p32351355_1910000_Linux-x86-64_1of2.zip

Archive:  p32351355_1910000_Linux-x86-64_1of2.zip

 extracting: oda-sm-19.10.0.0.0-210222.4-server1of2.zip

  inflating: README.txt

root@oda-duts-01 /backups/patches # unzip p32351355_1910000_Linux-x86-64_2of2.zip

Archive:  p32351355_1910000_Linux-x86-64_2of2.zip

 extracting: oda-sm-19.10.0.0.0-210222.4-server2of2.zip

root@oda-duts-01 /backups/patches # ll

total 43110758

-rw-r--r-- 1 root root 11467982139 Mar 20 22:40 oda-sm-19.10.0.0.0-210222.4-server1of2.zip

-rw-r--r-- 1 root root  9086866837 Mar 20 23:02 oda-sm-19.10.0.0.0-210222.4-server2of2.zip

-rw-r--r-- 1 root root  3035715218 May 25 10:07 p23494997_1910000_Linux-x86-64.zip

-rw-r--r-- 1 root root 11467982793 May 25 11:18 p32351355_1910000_Linux-x86-64_1of2.zip

-rw-r--r-- 1 root root  9086867187 May 25 09:53 p32351355_1910000_Linux-x86-64_2of2.zip

-rwxr-xr-x 1 root root         191 Feb 24 02:17 README.txt

You have new mail in /var/spool/mail/root

root@oda-duts-01 /backups/patches # ll -ltr

total 43110758

-rwxr-xr-x 1 root root         191 Feb 24 02:17 README.txt

-rw-r--r-- 1 root root 11467982139 Mar 20 22:40 oda-sm-19.10.0.0.0-210222.4-server1of2.zip

-rw-r--r-- 1 root root  9086866837 Mar 20 23:02 oda-sm-19.10.0.0.0-210222.4-server2of2.zip

-rw-r--r-- 1 root root  9086867187 May 25 09:53 p32351355_1910000_Linux-x86-64_2of2.zip

-rw-r--r-- 1 root root  3035715218 May 25 10:07 p23494997_1910000_Linux-x86-64.zip

-rw-r--r-- 1 root root 11467982793 May 25 11:18 p32351355_1910000_Linux-x86-64_1of2.zip

You have new mail in /var/spool/mail/root
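
Before registering them, you can also confirm the extracted server zips are intact; unzip -t runs a CRC check without extracting (an extra step, not part of the original run):

root@oda-duts-01 /backups/patches # unzip -t oda-sm-19.10.0.0.0-210222.4-server1of2.zip | tail -1
root@oda-duts-01 /backups/patches # unzip -t oda-sm-19.10.0.0.0-210222.4-server2of2.zip | tail -1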

2.2  Update the repository with the 2 files

root@oda-duts-01 /backups/patches # /opt/oracle/dcs/bin/odacli update-repository -f /backups/patches/oda-sm-19.10.0.0.0-210222.4-server1of2.zip

{

  "jobId" : "ff6d6ad4-de02-432b-b62e-5986146a8eee",

  "status" : "Created",

  "message" : "/backups/patches/oda-sm-19.10.0.0.0-210222.4-server1of2.zip",

  "reports" : [ ],

  "createTimestamp" : "Jul 01, 2021 11:55:02 AM CEST",

  "resourceList" : [ ],

  "description" : "Repository Update",

  "updatedTime" : "Jul 01, 2021 11:55:02 AM CEST"

}

root@oda-duts-01 /backups/patches # odacli describe-job -i ff6d6ad4-de02-432b-b62e-5986146a8eee

Job details

----------------------------------------------------------------

                     ID:  ff6d6ad4-de02-432b-b62e-5986146a8eee

            Description:  Repository Update

                 Status:  Running

                Created:  Jul 01, 2021 11:55:02 AM CEST

                Message:  /backups/patches/oda-sm-19.10.0.0.0-210222.4-server1of2.zip

Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Job details

----------------------------------------------------------------

                     ID:  ff6d6ad4-de02-432b-b62e-5986146a8eee

            Description:  Repository Update

                 Status:  Success

                Created:  Jul 01, 2021 11:55:02 AM CEST

                Message:  /backups/patches/oda-sm-19.10.0.0.0-210222.4-server1of2.zip

Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

root@oda-duts-01 /backups/patches #

root@oda-duts-01 /backups/patches # /opt/oracle/dcs/bin/odacli update-repository -f /backups/patches/oda-sm-19.10.0.0.0-210222.4-server2of2.zip

{

  "jobId" : "e07fb8f3-e937-44b9-8934-7c1343b1b3ef",

  "status" : "Created",

  "message" : "/backups/patches/oda-sm-19.10.0.0.0-210222.4-server2of2.zip",

  "reports" : [ ],

  "createTimestamp" : "Jul 01, 2021 11:59:48 AM CEST",

  "resourceList" : [ ],

  "description" : "Repository Update",

  "updatedTime" : "Jul 01, 2021 11:59:48 AM CEST"

}

root@oda-duts-01 /backups/patches # odacli describe-job -i e07fb8f3-e937-44b9-8934-7c1343b1b3ef

Job details

----------------------------------------------------------------

                     ID:  e07fb8f3-e937-44b9-8934-7c1343b1b3ef

            Description:  Repository Update

                 Status:  Running

                Created:  Jul 01, 2021 11:59:48 AM CEST

                Message:  /backups/patches/oda-sm-19.10.0.0.0-210222.4-server2of2.zip

Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

root@oda-duts-01 /backups/patches # odacli describe-job -i e07fb8f3-e937-44b9-8934-7c1343b1b3ef

Job details

----------------------------------------------------------------

                     ID:  e07fb8f3-e937-44b9-8934-7c1343b1b3ef

            Description:  Repository Update

                 Status:  Success

                Created:  Jul 01, 2021 11:59:48 AM CEST

                Message:  /backups/patches/oda-sm-19.10.0.0.0-210222.4-server2of2.zip

Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

 

After the update of the repository, you can remove the two files to save space.
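
Repository updates, like most odacli operations, run as asynchronous jobs, so instead of re-running describe-job by hand you can poll until the status changes; a small sketch using the job ID returned by update-repository:

JOB=e07fb8f3-e937-44b9-8934-7c1343b1b3ef   # job ID from the update-repository output above
while odacli describe-job -i "$JOB" | grep -q Running; do
  sleep 30                                 # re-check every 30 seconds
done
odacli describe-job -i "$JOB"              # show the final job status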

 

3. Update  the DCS admin

 

root@oda-duts-01 /backups/patches # /opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.10.0.0.0

{

  "jobId" : "03c4fb7f-5de6-425b-bb63-0663180ae488",

  "status" : "Created",

  "message" : null,

  "reports" : [ ],

  "createTimestamp" : "Jul 01, 2021 12:04:50 PM CEST",

  "resourceList" : [ ],

  "description" : "DcsAdmin patching",

  "updatedTime" : "Jul 01, 2021 12:04:50 PM CEST"

}

root@oda-duts-01 /backups/patches # odacli describe-job -i 03c4fb7f-5de6-425b-bb63-0663180ae488




Job details

----------------------------------------------------------------

                     ID:  03c4fb7f-5de6-425b-bb63-0663180ae488

            Description:  DcsAdmin patching

                 Status:  Success

                Created:  Jul 01, 2021 12:04:50 PM CEST

                Message:




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Patch location validation                Jul 01, 2021 12:04:51 PM CEST       Jul 01, 2021 12:04:51 PM CEST       Success

dcsadmin upgrade                         Jul 01, 2021 12:04:51 PM CEST       Jul 01, 2021 12:04:52 PM CEST       Success

Update System version                    Jul 01, 2021 12:04:57 PM CEST       Jul 01, 2021 12:04:57 PM CEST       Success

 

 

4. Update the DCS components

root@oda-duts-01 /backups/patches # /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.10.0.0.0

{

  "jobId" : "672e3572-db10-48ea-b737-ae075793c3bb",

  "status" : "Success",

  "message" : "Update-dcscomponents is successful on all the node(s):DCS-Agent shutdown is successful. MySQL upgrade is successful. Timezone is set successfully. Metadata migration is successful. Agent rpm upgrade is successful. DCS-CLI rpm upgrade is successful. DCS-C",

  "reports" : null,

  "createTimestamp" : "Jul 01, 2021 12:07:17 PM CEST",

  "description" : "Update-dcscomponents job completed and is not part of Agent job list",

  "updatedTime" : "Jul 01, 2021 12:09:59 PM CEST"

}

root@oda-duts-01 /backups/patches #


 

5.  Update the DCS agent

The command below may take some time, as specified in the documentation, so be patient and wait. Note that the heavier work of updating Zookeeper, installing MySQL, and migrating metadata from Derby to MySQL belongs to the update-dcscomponents step above (as its output message confirms); this step brings the DCS Agent itself up to 19.10.
root@oda-duts-01 /backups/patches # /opt/oracle/dcs/bin/odacli update-dcsagent -v 19.10.0.0.0

{

  "jobId" : "71637667-3dea-4ac4-a2dd-5d9e7726874a",

  "status" : "Created",

  "message": "Dcs agent will be restarted after the update. Please wait for 2-3 mins before executing the other commands",

  "reports" : [ ],

  "createTimestamp" : "Jul 01, 2021 12:12:01 PM CEST",

  "resourceList" : [ ],

  "description" : "DcsAgent patching",

  "updatedTime" : "Jul 01, 2021 12:12:01 PM CEST"

}

root@oda-duts-01 /backups/patches # odacli describe-job -i "71637667-3dea-4ac4-a2dd-5d9e7726874a"

root@oda-duts-01 /backups/patches # odacli describe-job -i "71637667-3dea-4ac4-a2dd-5d9e7726874a"

Job details

----------------------------------------------------------------

                     ID:  71637667-3dea-4ac4-a2dd-5d9e7726874a

            Description:  DcsAgent patching

                 Status:  Running

                Created:  Jul 01, 2021 12:12:01 PM CEST

                Message:

Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

dcs-agent upgrade  to version 19.10.0.0.0 Jul 01, 2021 12:12:01 PM CEST       Jul 01, 2021 12:12:01 PM CEST       Running

root@oda-duts-01 /backups/patches # odacli describe-job -i "71637667-3dea-4ac4-a2dd-5d9e7726874a"

Job details

----------------------------------------------------------------

                     ID:  71637667-3dea-4ac4-a2dd-5d9e7726874a

            Description:  DcsAgent patching

                 Status:  Success

                Created:  Jul 01, 2021 12:12:01 PM CEST

                Message:

Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

dcs-agent upgrade  to version 19.10.0.0.0 Jul 01, 2021 12:12:01 PM CEST       Jul 01, 2021 12:13:36 PM CEST       Success

Update System version                    Jul 01, 2021 12:13:36 PM CEST       Jul 01, 2021 12:13:36 PM CEST       Success
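
Since the agent restarts itself during this update, confirm it is back before running further odacli commands; on ODA the DCS agent runs as the initdcsagent service (an assumption about the service name; a hedged check, not part of the original run):

root@oda-duts-01 /backups/patches # systemctl status initdcsagent | head -3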

 

6.  Create a precheck report

Before updating the server, we run the prechecks
root@oda-duts-01 /backups/patches # /opt/oracle/dcs/bin/odacli create-prepatchreport -s -v 19.10.0.0.0

Job details

----------------------------------------------------------------

                     ID:  a3569c92-beef-46a6-b36d-d7d8c7fb6066

            Description:  Patch pre-checks for [OS, ILOM, GI, ORACHKSERVER]

                 Status:  Created

                Created:  Jul 01, 2021 12:15:39 PM CEST

                Message:  Use 'odacli describe-prepatchreport -i a3569c92-beef-46a6-b36d-d7d8c7fb6066' to check details of results

Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

(Note: the report below comes from a subsequent precheck run, hence the different job ID.)

root@oda-duts-01 /backups/patches # odacli describe-prepatchreport -i 9d93bb8b-9ede-4deb-9f2d-7c16a2d5abfa

Patch pre-check report

------------------------------------------------------------------------

                 Job ID:  9d93bb8b-9ede-4deb-9f2d-7c16a2d5abfa

            Description:  Patch pre-checks for [OS, ILOM, GI, ORACHKSERVER]

                 Status:  FAILED

                Created:  Jul 01, 2021 1:11:54 PM CEST

                 Result:  One or more pre-checks failed for [ORACHK]

Node Name

---------------

oda-duts-01

Pre-Check                      Status   Comments

------------------------------ -------- --------------------------------------

__OS__

Validate supported versions     Success   Validated minimum supported versions.

Validate patching tag           Success   Validated patching tag: 19.10.0.0.0.

Is patch location available     Success   Patch location is available.

Verify OS patch                 Success   Verified OS patch

Validate command execution      Success   Validated command execution

__ILOM__

Validate supported versions     Success   Validated minimum supported versions.

Validate patching tag           Success   Validated patching tag: 19.10.0.0.0.

Is patch location available     Success   Patch location is available.

Checking Ilom patch Version     Success   Successfully verified the versions

Patch location validation       Success   Successfully validated location

Validate command execution      Success   Validated command execution

__GI__

Validate supported GI versions  Success   Validated minimum supported versions.

Validate available space        Success   Validated free space under /u01

Is clusterware running          Success   Clusterware is running

Validate patching tag           Success   Validated patching tag: 19.10.0.0.0.

Is system provisioned           Success   Verified system is provisioned

Validate ASM in online          Success   ASM is online

Validate minimum agent version  Success   GI patching enabled in current

                                          DCSAGENT version

Validate GI patch metadata      Success   Validated patching tag: 19.10.0.0.0.

Is patch location available     Success   Patch location is available.

Patch location validation       Success   Successfully validated location

Patch verification              Success   Patches 32218454 not applied on GI

                                          home /u01/app/19.0.0.0/grid on node

                                          oda-duts-01

Validate Opatch update          Success   Successfully updated the opatch in

                                          GiHome /u01/app/19.0.0.0/grid on node

                                          oda-duts-01

Patch conflict check            Success   No patch conflicts found on GiHome

                                          /u01/app/19.0.0.0/grid on node

                                          oda-duts-01

Validate command execution      Success   Validated command execution

__ORACHK__

Running orachk                  Failed    Orachk validation failed: .

Validate command execution      Success   Validated command execution

Verify the vm.min_free_kbytes   Failed    AHF-4819: The vm.min_free_kbytes

configuration                             configuration is not set as

                                          recommended

Software home                   Failed    Software home check failed
In the results, we can see some failures reported by orachk. According to the following document:
https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/19.10/cmtrn/issues-with-oda-odacli.html#GUID-F2B10F21-3D1E-4328-8E9B-D75AD38D26A1
Oracle states that these errors can be ignored, and we can continue with updating the server.

 

 

7.  Apply the server update

 
Since we are ignoring the orachk errors, we pass the -sko (skip orachk) flag to the patching command.

 

root@oda-duts-01 /backups/patches # /opt/oracle/dcs/bin/odacli update-server -v 19.10.0.0.0 -sko

{

  "jobId" : "59bbb6d2-bf6e-451a-8a4e-1de86c490d26",

  "status" : "Created",

  "message" : "Success of server update will trigger reboot of the node after 4-5 minutes. Please wait until the node reboots.",

  "reports" : [ ],

  "createTimestamp" : "Jul 01, 2021 20:23:54 PM CEST",

  "resourceList" : [ ],

  "description" : "Server Patching",

  "updatedTime" : "Jul 01, 2021 20:23:54 PM CEST"

}




root@oda-duts-01 ~ # odacli describe-job -i "59bbb6d2-bf6e-451a-8a4e-1de86c490d26"




Job details

----------------------------------------------------------------

                     ID:  59bbb6d2-bf6e-451a-8a4e-1de86c490d26

            Description:  Server Patching

                 Status:  Success

                Created:  Jul 01, 2021 8:24:34 PM CEST

                Message:




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Patch location validation                Jul 01, 2021 8:24:41 PM CEST        Jul 01, 2021 8:24:41 PM CEST        Success

dcs-controller upgrade                   Jul 01, 2021 8:24:42 PM CEST        Jul 01, 2021 8:24:42 PM CEST        Success

Creating repositories using yum          Jul 01, 2021 8:24:43 PM CEST        Jul 01, 2021 8:24:46 PM CEST        Success

Updating YumPluginVersionLock rpm        Jul 01, 2021 8:24:46 PM CEST        Jul 01, 2021 8:24:46 PM CEST        Success

Applying OS Patches                      Jul 01, 2021 8:24:46 PM CEST        Jul 01, 2021 8:24:47 PM CEST        Success

Creating repositories using yum          Jul 01, 2021 8:24:47 PM CEST        Jul 01, 2021 8:24:47 PM CEST        Success

Applying HMP Patches                     Jul 01, 2021 8:24:47 PM CEST        Jul 01, 2021 8:24:48 PM CEST        Success

Client root Set up                       Jul 01, 2021 8:24:48 PM CEST        Jul 01, 2021 8:24:48 PM CEST        Success

Client grid Set up                       Jul 01, 2021 8:24:48 PM CEST        Jul 01, 2021 8:24:48 PM CEST        Success

Patch location validation                Jul 01, 2021 8:24:48 PM CEST        Jul 01, 2021 8:24:48 PM CEST        Success

oda-hw-mgmt upgrade                      Jul 01, 2021 8:24:49 PM CEST        Jul 01, 2021 8:24:49 PM CEST        Success

OSS Patching                             Jul 01, 2021 8:24:50 PM CEST        Jul 01, 2021 8:24:50 PM CEST        Success

Applying Firmware Disk Patches           Jul 01, 2021 8:24:50 PM CEST        Jul 01, 2021 8:24:53 PM CEST        Success

Applying Firmware Controller Patches     Jul 01, 2021 8:24:53 PM CEST        Jul 01, 2021 8:24:55 PM CEST        Success

Checking Ilom patch Version              Jul 01, 2021 8:24:55 PM CEST        Jul 01, 2021 8:24:55 PM CEST        Success

Patch location validation                Jul 01, 2021 8:24:55 PM CEST        Jul 01, 2021 8:24:56 PM CEST        Success

Save password in Wallet                  Jul 01, 2021 8:24:56 PM CEST        Jul 01, 2021 8:24:56 PM CEST        Success

Apply Ilom patch                         Jul 01, 2021 8:24:56 PM CEST        Jul 01, 2021 8:24:56 PM CEST        Success

Copying Flash Bios to Temp location      Jul 01, 2021 8:24:56 PM CEST        Jul 01, 2021 8:24:56 PM CEST        Success

Starting the clusterware                 Jul 01, 2021 8:24:59 PM CEST        Jul 01, 2021 8:24:59 PM CEST        Success

clusterware patch verification           Jul 01, 2021 8:24:59 PM CEST        Jul 01, 2021 8:25:01 PM CEST        Success

Patch location validation                Jul 01, 2021 8:25:01 PM CEST        Jul 01, 2021 8:25:01 PM CEST        Success

Opatch update                            Jul 01, 2021 8:26:32 PM CEST        Jul 01, 2021 8:26:36 PM CEST        Success

Patch conflict check                     Jul 01, 2021 8:26:36 PM CEST        Jul 01, 2021 8:27:29 PM CEST        Success

clusterware upgrade                      Jul 01, 2021 8:27:34 PM CEST        Jul 01, 2021 9:04:18 PM CEST        Success

Updating GiHome version                  Jul 01, 2021 9:04:18 PM CEST        Jul 01, 2021 9:04:21 PM CEST        Success

Starting the clusterware                 Jul 01, 2021 9:05:25 PM CEST        Jul 01, 2021 9:05:25 PM CEST        Success

remove network public interface          Jul 01, 2021 9:05:25 PM CEST        Jul 01, 2021 9:05:28 PM CEST        Success

create bridge network                    Jul 01, 2021 9:05:28 PM CEST        Jul 01, 2021 9:05:33 PM CEST        Success

modify network public interface          Jul 01, 2021 9:05:33 PM CEST        Jul 01, 2021 9:05:34 PM CEST        Success

Update System version                    Jul 01, 2021 9:05:35 PM CEST        Jul 01, 2021 9:05:35 PM CEST        Success

Cleanup JRE Home                         Jul 01, 2021 9:05:35 PM CEST        Jul 01, 2021 9:05:35 PM CEST        Success

Add SYSNAME in Env                       Jul 01, 2021 9:05:35 PM CEST        Jul 01, 2021 9:05:35 PM CEST        Success

Setting ACL for disk groups              Jul 01, 2021 9:05:35 PM CEST        Jul 01, 2021 9:05:38 PM CEST        Success

preRebootNode Actions                    Jul 01, 2021 9:05:44 PM CEST        Jul 01, 2021 9:06:30 PM CEST        Success

Reboot Ilom                              Jul 01, 2021 9:06:30 PM CEST        Jul 01, 2021 9:06:30 PM CEST        Success

 

We can confirm this by running the following command

 

root@oda-duts-01 ~ # odacli describe-component

System Version

---------------

19.10.0.0.0




System node Name

---------------

oda-duts-01




Local System Version

---------------

19.10.0.0.0




Component                                Installed Version    Available Version

---------------------------------------- -------------------- --------------------

OAK                                       19.10.0.0.0           up-to-date




GI                                        19.10.0.0.210119      up-to-date




DB                                        11.2.0.4.200414       11.2.0.4.210119




DCSAGENT                                  19.10.0.0.0           up-to-date




OS                                        7.9                   up-to-date




ILOM                                      5.0.1.21.a.r138015    up-to-date




BIOS                                      41080800              up-to-date




FIRMWARECONTROLLER                        QDV1RF30              up-to-date




FIRMWAREDISK                              0121                  up-to-date




HMP                                       2.4.7.0.1             up-to-date

 

8. Update the existing dbhomes

 

root@oda-duts-01 ~ # odacli list-dbhomes




ID                                       Name                 DB Version                               Home Location                                 Status

---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------

7c724f57-a495-4db9-a88a-48323f3632c6     OraDB11204_home1     11.2.0.4.200414                          /u01/app/oracle/product/11.2.0.4/dbhome_1     CONFIGURED







root@oda-duts-01 ~ # odacli create-prepatchreport --dbhome --dbhomeid 7c724f57-a495-4db9-a88a-48323f3632c6 -v 19.10.0.0.0

Check the result with 'odacli describe-prepatchreport -i <job id>' and proceed only once it reports success.

root@oda-duts-01 ~ # odacli update-dbhome --dbhomeid 7c724f57-a495-4db9-a88a-48323f3632c6 -v 19.10.0.0.0 -sko

{

  "jobId" : "cf0d4064-1eb2-4b9a-8653-41fe1f06f98a",

  "status" : "Created",

  "message" : null,

  "reports" : [ ],

  "createTimestamp" : "Jul 01, 2021 21:26:05 PM CEST",

  "resourceList" : [ ],

  "description" : "DB Home Patching: Home Id is 7c724f57-a495-4db9-a88a-48323f3632c6",

  "updatedTime" : "Jul 01, 2021 21:26:05 PM CEST"

}

root@oda-duts-01 ~ #

root@oda-duts-01 ~ #




root@oda-duts-01 ~ # odacli describe-job -i "cf0d4064-1eb2-4b9a-8653-41fe1f06f98a"




Job details

----------------------------------------------------------------

                     ID:  cf0d4064-1eb2-4b9a-8653-41fe1f06f98a

            Description:  DB Home Patching: Home Id is 7c724f57-a495-4db9-a88a-48323f3632c6

                 Status:  Success

                Created:  Jul 01, 2021 9:26:05 PM CEST

                Message:  Success




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

clusterware patch verification           Jul 01, 2021 9:26:19 PM CEST        Jul 01, 2021 9:26:21 PM CEST        Success

Patch conflict check                     Jul 01, 2021 9:26:21 PM CEST        Jul 01, 2021 9:26:21 PM CEST        Success

Patch location validation                Jul 01, 2021 9:26:21 PM CEST        Jul 01, 2021 9:26:21 PM CEST        Success

Opatch update                            Jul 01, 2021 9:27:13 PM CEST        Jul 01, 2021 9:27:17 PM CEST        Success

Patch conflict check                     Jul 01, 2021 9:27:17 PM CEST        Jul 01, 2021 9:27:30 PM CEST        Success

Creating wallet for DB Client            Jul 01, 2021 9:27:58 PM CEST        Jul 01, 2021 9:28:03 PM CEST        Success

db upgrade                               Jul 01, 2021 9:28:03 PM CEST        Jul 01, 2021 9:30:06 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:06 PM CEST        Jul 01, 2021 9:30:07 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:07 PM CEST        Jul 01, 2021 9:30:08 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:08 PM CEST        Jul 01, 2021 9:30:09 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:09 PM CEST        Jul 01, 2021 9:30:10 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:10 PM CEST        Jul 01, 2021 9:30:11 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:11 PM CEST        Jul 01, 2021 9:30:12 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:12 PM CEST        Jul 01, 2021 9:30:13 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:13 PM CEST        Jul 01, 2021 9:30:14 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:14 PM CEST        Jul 01, 2021 9:30:15 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:15 PM CEST        Jul 01, 2021 9:30:16 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:17 PM CEST        Jul 01, 2021 9:30:18 PM CEST        Success

SqlPatch upgrade                         Jul 01, 2021 9:30:18 PM CEST        Jul 01, 2021 9:30:19 PM CEST        Success

Update System version                    Jul 01, 2021 9:30:19 PM CEST        Jul 01, 2021 9:30:19 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:21 PM CEST        Jul 01, 2021 9:30:22 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:22 PM CEST        Jul 01, 2021 9:30:24 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:24 PM CEST        Jul 01, 2021 9:30:26 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:26 PM CEST        Jul 01, 2021 9:30:28 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:28 PM CEST        Jul 01, 2021 9:30:30 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:30 PM CEST        Jul 01, 2021 9:30:32 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:32 PM CEST        Jul 01, 2021 9:30:34 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:34 PM CEST        Jul 01, 2021 9:30:36 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:36 PM CEST        Jul 01, 2021 9:30:38 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:38 PM CEST        Jul 01, 2021 9:30:40 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:40 PM CEST        Jul 01, 2021 9:30:41 PM CEST        Success

updating the Database version            Jul 01, 2021 9:30:41 PM CEST        Jul 01, 2021 9:30:43 PM CEST        Success




You have new mail in /var/spool/mail/root
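
To confirm the home was actually patched, list the DB homes again; the DB Version column should now show 11.2.0.4.210119, the version offered in the describe-component output above:

root@oda-duts-01 ~ # odacli list-dbhomes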

 

Now that the patch was successful, we can delete the snapshots we took with odabr

 

root@oda-duts-01 ~ # /opt/odabr/odabr delsnap

INFO: 2021-07-01 21:33:02: Please check the logfile '/opt/odabr/out/log/odabr_48818.log' for more details




INFO: 2021-07-01 21:33:02: Removing LVM snapshots

INFO: 2021-07-01 21:33:02: ...removing LVM snapshot for 'opt'

SUCCESS: 2021-07-01 21:33:03: ...snapshot for 'opt' removed successfully

INFO: 2021-07-01 21:33:03: ...removing LVM snapshot for 'u01'

SUCCESS: 2021-07-01 21:33:03: ...snapshot for 'u01' removed successfully

INFO: 2021-07-01 21:33:03: ...removing LVM snapshot for 'root'

SUCCESS: 2021-07-01 21:33:03: ...snapshot for 'root' removed successfully

SUCCESS: 2021-07-01 21:33:03: Remove LVM snapshots done successfully

root@oda-duts-01 ~ #

 

9.  Update the repository with the RDBMS clone 19.10

 

The last step is to update the repository with the 19.10 RDBMS clone. Download the patch p23494997_1910000_Linux-x86-64.zip, unpack it, and run the command below.

 

root@oda-duts-01 /backups/patches # odacli update-repository -f /backups/patches/odacli-dcs-19.10.0.0.0-210115-DB-11.2.0.4.zip

{

  "jobId" : "9e5124ef-670a-41a5-9135-4f1688cad2b7",

  "status" : "Created",

  "message" : "/backups/patches/odacli-dcs-19.10.0.0.0-210115-DB-11.2.0.4.zip",

  "reports" : [ ],

  "createTimestamp" : "Jul 01, 2021 21:38:53 PM CEST",

  "resourceList" : [ ],

  "description" : "Repository Update",

  "updatedTime" : "Jul 01, 2021 21:38:53 PM CEST"

}

root@oda-duts-01 /backups/patches # odacli describe-job -i "9e5124ef-670a-41a5-9135-4f1688cad2b7"




Job details

----------------------------------------------------------------

                     ID:  9e5124ef-670a-41a5-9135-4f1688cad2b7

            Description:  Repository Update

                 Status:  Running

                Created:  Jul 01, 2021 9:38:53 PM CEST

                Message:  /backups/patches/odacli-dcs-19.10.0.0.0-210115-DB-11.2.0.4.zip




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Unzip bundle                             Jul 01, 2021 9:38:53 PM CEST        Jul 01, 2021 9:38:53 PM CEST        Running




root@oda-duts-01 /backups/patches # odacli describe-job -i "9e5124ef-670a-41a5-9135-4f1688cad2b7"




Job details

----------------------------------------------------------------

                     ID:  9e5124ef-670a-41a5-9135-4f1688cad2b7

            Description:  Repository Update

                 Status:  Success

                Created:  Jul 01, 2021 9:38:53 PM CEST

                Message:  /backups/patches/odacli-dcs-19.10.0.0.0-210115-DB-11.2.0.4.zip




Task Name                                Start Time                          End Time                            Status

---------------------------------------- ----------------------------------- ----------------------------------- ----------

Unzip bundle                             Jul 01, 2021 9:38:53 PM CEST        Jul 01, 2021 9:39:36 PM CEST        Success

 

I hope this helps you!!!

 

Stay tuned, and follow me on Twitter @aontalba and on LinkedIn.

 

André Ontalba

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so the content can reach a generic audience and remain useful.”

 


opatch fails with Error: ‘Archive Action: Source file “$ORACLE_HOME/.patch_storage/…” does not exist.’
Category: Database,Engineer System Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0


 
Another quick article about a problem I had yesterday while applying an update patch on an ODA: I hit an error while patching the Oracle binaries.

 

[Sep 14, 2020 2:40:58 PM] [INFO] add CopyAction for olsrelod.sql
[Sep 14, 2020 2:40:58 PM] [INFO] OPatchSessionHelper::sortOnOverlay() Given list - 25897615 25034396 26477255 20370037 21688501 18430870 27435440 24425998
[Sep 14, 2020 2:40:58 PM] [INFO] size of PatchObject list: 8
[Sep 14, 2020 2:40:59 PM] [INFO] Patch 24425998:
Achive Action: Directory "/u01/app/oracle/product/12.1.0.2/dbhome_3/.patch_storage/24425998_Sep_28_2016_12_31_24" does not exists or is not readable.
'oracle.rdbms, 12.1.0.2.0': Cannot update file '/u01/app/oracle/product/12.1.0.2/dbhome_3/lib/libserver12.a' with '/ksfd.o'
[Sep 14, 2020 2:40:59 PM] [INFO] Prerequisite check "CheckRollbackable" on auto-rollback patches failed.
The details are:

Patch 24425998:
Achive Action: Directory "/u01/app/oracle/product/12.1.0.2/dbhome_3/.patch_storage/24425998_Sep_28_2016_12_31_24" does not exists or is not readable.
'oracle.rdbms, 12.1.0.2.0': Cannot update file '/u01/app/oracle/product/12.1.0.2/dbhome_3/lib/libserver12.a' with '/ksfd.o'
[Sep 14, 2020 2:40:59 PM] [SEVERE] OUI-67073:UtilSession failed: Prerequisite check "CheckRollbackable" on auto-rollback patches failed.
[Sep 14, 2020 2:40:59 PM] [INFO] --------------------------------------------------------------------------------
[Sep 14, 2020 2:40:59 PM] [INFO] The following warnings have occurred during OPatch execution:
[Sep 14, 2020 2:40:59 PM] [INFO] 1) OUI-67303:
Patches [ 25897615 25034396 26477255 20370037 21688501 18430870 27435440 24425998 ] will be rolled back.
[Sep 14, 2020 2:40:59 PM] [INFO] 2) OUI-67303:
Patches [ 25897615 25034396 26477255 20370037 21688501 18430870 27435440 24425998 ] will be rolled back.
[Sep 14, 2020 2:40:59 PM] [INFO] --------------------------------------------------------------------------------
[Sep 14, 2020 2:40:59 PM] [INFO] Finishing UtilSession at Mon Sep 14 14:40:59 CEST 2020
[Sep 14, 2020 2:40:59 PM] [INFO] Log file location: /u01/app/oracle/product/12.1.0.2/dbhome_3/cfgtoollogs/opatchauto/core/opatch/opatch2020-09-14_14-39-20PM_1.log

 

According to the MOS note opatch fails with Error: ‘Archive Action: Source file “$ORACLE_HOME/.patch_storage/…” does not exist.’ or ‘Achive Action: Directory “$ORACLE_HOME/.patch_storage/…” does not exist or is not readable’ (Doc ID 1244414.1), the reason for this is:
Files needed to roll back the existing subset patch(es) are missing from $ORACLE_HOME/.patch_storage.

BACKGROUND
==============
When an Oracle software patch is installed, the first step is to place an unmodified copy of each affected $ORACLE_HOME file into a directory under $ORACLE_HOME/.patch_storage. These copies are used if the patch is ever rolled back, whether manually or automatically.
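
As a quick sanity check before patching, you can verify up front that the rollback files of the installed one-off patches are still present. A minimal sketch (the patch ids are the ones from my log; take yours from opatch lspatches):

# List the one-off patches installed in this home
$ORACLE_HOME/OPatch/opatch lspatches

# For each installed patch id, check that its rollback directory still exists
for p in 25897615 25034396 26477255 20370037 21688501 18430870 27435440 24425998; do
  ls -d $ORACLE_HOME/.patch_storage/${p}_* > /dev/null 2>&1 \
    && echo "patch $p: rollback files present" \
    || echo "patch $p: rollback files MISSING"
done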

 

After I saw this error, I was sure that nothing had been removed from the home before the patch.
 
The only supported solutions are:

  1. Restore the missing directories and files from a backup of the ORACLE_HOME.
  2. If no backup exists, re-install the $ORACLE_HOME.
  3. Clone from another ORACLE_HOME of a like installation.
 

 

In my case I always keep a backup of two directories: $ORACLE_HOME/inventory/oneoffs and $ORACLE_HOME/.patch_storage.
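
As a sketch of that backup, and of restoring a single missing rollback directory from it (/backup is just an illustrative destination):

# Before patching: save the rollback metadata and the one-off inventory
cd $ORACLE_HOME
tar -pcvf /backup/patch_storage_$(hostname).tar .patch_storage inventory/oneoffs

# Later, restore only the missing rollback directory
cd $ORACLE_HOME
tar -pxvf /backup/patch_storage_$(hostname).tar .patch_storage/24425998_Sep_28_2016_12_31_24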
 
There I found the folder .patch_storage/24425998_Sep_28_2016_12_31_24 that was needed to perform the rollback; after restoring it, the patch was applied successfully.
 
2020-09-15 07:47:04 Patch 29972716 is successfully already applied on the Home: /u01/app/oracle/product/12.1.0.2/db_home3
2020-09-15 07:47:04 SUCCESS: Successfully applied the patch on the Home : /u01/app/oracle/product/12.1.0.2/db_home4, /u01/app/oracle/product/12.1.0.2/db_home1, /u01/app/oracle/product/12.1.0.2/db_home3.

 

I hope this helps you!!!
 
Stay tuned, and follow me on Twitter @aontalba and on LinkedIn

 

Andre Luiz Dutra Ontalba

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes. Specific data and identifications were removed to allow it to reach a generic audience and be useful.”


rootupgrade.sh Fails with CRS-1136: Rejecting the rolling upgrade mode change because the cluster is being patched
Category: Database,Engineer System Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0

rootupgrade.sh Fails with CRS-1136: Rejecting the rolling upgrade mode change because the cluster is being patched

Yesterday, during an ODA upgrade, I came across the error below during the cluster upgrade process.

 

.
.
2020/03/03 14:34:00 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2020/03/03 14:34:04 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2020/03/03 14:34:32 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl start rollingupgrade 18.0.0.0.0'
CRS-1136: Rejecting the rolling upgrade mode change because the cluster is being patched.
CRS-4000: Command Start failed, or completed with errors.
2020/03/03 14:34:32 CLSRSC-511: failed to set Oracle Clusterware and ASM to rolling migration mode
Died at /u01/app/18.0.0.0/grid/crs/install/oraasm.pm line 1455.

 

Following Oracle’s note rootupgrade.sh Fails with CRS-1136: Rejecting the rolling upgrade mode change because the cluster is being patched (Doc ID 2494827.1), I found the solution to the problem with the steps below.
 
First, run the commands below to check the crs, releasepatch and softwarepatch levels and see if there are any differences.

 

bash-4.3# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [2660242823].
bash-4.3#

bash-4.3# crsctl query crs releasepatch
Oracle Clusterware release patch level is [1953265745] and the complete list of patches [23600818 26839277 27001739 27105253 27128906 27144050 27335416 ] have been applied on the local node.
bash-4.3#

bash-4.3# crsctl query crs softwarepatch
Oracle Clusterware patch level on node odatest1 is [1953265745]

 

We can see that the crs active patch level differs from the releasepatch and softwarepatch levels.
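
If you prefer not to compare the levels by eye, here is a small sketch that extracts the bracketed patch levels from the same commands (GRID_HOME here is an assumption; adjust it to your environment):

GRID_HOME=/u01/app/12.1.0.2/grid
ACTIVE=$($GRID_HOME/bin/crsctl query crs activeversion -f | grep -o 'patch level is \[[0-9]*\]')
SW=$($GRID_HOME/bin/crsctl query crs softwarepatch | grep -o '\[[0-9]*\]')
REL=$($GRID_HOME/bin/crsctl query crs releasepatch | grep -o '\[[0-9]*\]' | head -1)
echo "active: $ACTIVE | software: $SW | release: $REL"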
 
With that confirmed, let’s fix the problem.
 
1 – Run stop rollingpatch as the root user; this updates the OCR with the correct values:
<GRID_HOME>/bin/crsctl stop rollingpatch  

 

root@odatest1:~# /u01/app/12.1.0.2/grid/bin/crsctl stop rollingpatch
CRS-1161: The cluster was successfully patched to patch level [1953265745].
root@odatest1:~# 

 

2 – Verify software/release patch levels and retry rootupgrade.sh.

 

bash-4.3# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [1953265745].
bash-4.3#

bash-4.3# crsctl query crs releasepatch
Oracle Clusterware release patch level is [1953265745] and the complete list of patches [23600818 26839277 27001739 27105253 27128906 27144050 27335416 ] have been applied on the local node.
bash-4.3#

bash-4.3# crsctl query crs softwarepatch
Oracle Clusterware patch level on node odatest1 is [1953265745]

 

root@odatest1:~# /u01/app/18.0.0.0/grid/rootupgrade.sh







.
.
2020/03/03 15:34:00 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2020/03/03 15:34:04 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2020/03/03 15:34:32 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl start rollingupgrade 18.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2020/03/03 15:35:10 CLSRSC-482: Running command: '/u01/app/18.0.0.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.1.0.2/grid -oldCRSVersion 12.1.0.2.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2020/03/03 15:34:20 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

.
.

2020/03/03 15:54:00 CLSRSC-595: Executing upgrade step 8 of 19: 'UpgradeNode'.

2020/03/03 15:54:04 CLSRSC-474: Initiating upgrade of resource types

2020/03/03 15:56:20 CLSRSC-475: Upgrade of resource types successfully initiated.

2020/03/03 15:56:44 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.

2020/03/03 15:57:05 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

 

 

I hope this helps you!!!

 

Stay tuned, and follow me on Twitter @aontalba and on LinkedIn
 
Andre Luiz Dutra Ontalba

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes. Specific data and identifications were removed to allow it to reach a generic audience and be useful.”


HOW TO REMOVE HAIP ON ODA 18.8.0.0.0
Category: Engineer System Author: Andre Luiz Dutra Ontalba (Board Member) Date: 4 years ago Comments: 0

HOW TO REMOVE HAIP ON ODA 18.8.0.0.0

I needed to remove HAIP from ODA after migrating to version 18.8.0.0.0 and decided to prepare this procedure.
 
This action plan should require only one clusterware restart, versus patching, which can result in two or three clusterware restarts.
 
Let’s go to the procedure.
1. Backup gpnp profile

 

[grid@testoda1 peer]$ cd /u01/app/18.0.0.0/grid/gpnp/`hostname`/profiles/peer
[grid@testoda1 peer]$ cp -p profile.xml profile.xml.bkp
[grid@testoda2 peer]$ cd /u01/app/18.0.0.0/grid/gpnp/`hostname`/profiles/peer
[grid@testoda2 peer]$ cp -p profile.xml profile.xml.bkp
2. Get the cluster_interconnect interfaces (only on node0)
[grid@testoda1 ~]$ /u01/app/18.0.0.0/grid/bin/oifcfg getif

btbond1 10.32.16.0 global public
p1p1 192.168.16.0 global cluster_interconnect,asm
p1p2 192.168.17.0 global cluster_interconnect,asm

Please note: the private interface names might be different depending on the model and/or the ODA version that was used to deploy the machine.

For the rest of this note, we use p1p1/p1p2 as an example in the steps below.
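
If you are not sure which interface names your machine uses, oifcfg can list the interfaces, subnets, and types it sees (run as the grid user; the output varies per model):

[grid@testoda1 ~]$ /u01/app/18.0.0.0/grid/bin/oifcfg iflist -p -n

Here -p prints the subnet type (PUBLIC/PRIVATE/UNKNOWN) and -n the netmask, which helps map each name to the private subnets.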

 
3. Backup existing ifcfg- files
[root@testoda1 ~]# cd /etc/sysconfig/network-scripts
[root@testoda1 network-scripts]# cp ifcfg-p1p1 backupifcfgFiles/ifcfg-p1p1.bak
[root@testoda1 network-scripts]# cp ifcfg-p1p2 backupifcfgFiles/ifcfg-p1p2.bak
[root@testoda2 ~]# cd /etc/sysconfig/network-scripts
[root@testoda2 network-scripts]# cp ifcfg-p1p1 backupifcfgFiles/ifcfg-p1p1.bak
[root@testoda2 network-scripts]# cp ifcfg-p1p2 backupifcfgFiles/ifcfg-p1p2.bak
4. Create ifcfg-icbond0 and modify ifcfg- files
[root@testoda1 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-icbond0

# This file is automatically created by the ODA software.

DEVICE=icbond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
MTU=9000
BONDING_OPTS="mode=active-backup miimon=100 primary=p1p1"
IPADDR=192.168.16.24
NETMASK=255.255.255.0

[root@testoda1 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p1

# This file is automatically created by the ODA software.

DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

[root@testoda1 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p2


# This file is automatically created by the ODA software.

DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

[root@testoda2 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-icbond0

# This file is automatically created by the ODA software.

DEVICE=icbond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
MTU=9000
BONDING_OPTS="mode=active-backup miimon=100 primary=p1p1"
IPADDR=192.168.16.25
NETMASK=255.255.255.0

[root@testoda2 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p1

# This file is automatically created by the ODA software.

DEVICE=p1p1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

[root@testoda2 network-scripts]# vi /etc/sysconfig/network-scripts/ifcfg-p1p2

# This file is automatically created by the ODA software.

DEVICE=p1p2
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
USERCTL=no

# disable generic and large receive offloads on all interfaces,
# to prevent known problems, specifically in bridge configurations.

ETHTOOL_OFFLOAD_OPTS="lro off gro off"
IPV6INIT=no
PEERDNS=no
BOOTPROTO=none
MASTER=icbond0
SLAVE=yes
MTU=9000

 
5. Creating/replacing init.ora-s for APX instances
[grid@testoda1]$ echo "+APX1.cluster_interconnects='192.168.16.24'" > $ORACLE_HOME/dbs/init+APX1.ora

[grid@testoda2]$ echo "+APX2.cluster_interconnects='192.168.16.25'" > $ORACLE_HOME/dbs/init+APX2.ora

6. Stop the Clusterware on node2
[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl stop crs -f
7. Set the new, bonded cluster_interconnect interface and remove p1p1/p1p2 interfaces from the configuration (only on node0)
[grid@testoda1 ~]$ oifcfg setif -global icbond0/192.168.16.0:cluster_interconnect,asm

[grid@testoda1 ~]$ oifcfg getif

btbond1 10.209.244.0 global public
p1p1 192.168.16.0 global cluster_interconnect,asm
p1p2 192.168.17.0 global cluster_interconnect,asm
icbond0 192.168.16.0 global cluster_interconnect,asm

[grid@testoda1 ~]$ oifcfg delif -global p1p1/192.168.16.0

[grid@testoda1 ~]$ oifcfg delif -global p1p2/192.168.17.0

[grid@testoda1 ~]$ oifcfg getif

btbond1 10.209.244.0 global public
icbond0 192.168.16.0 global cluster_interconnect,asm

8. Remove HAIP dependency in ora.asm
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.cluster_interconnect.haip -attr ENABLED=0 -init

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.cluster_interconnect.haip -attr ENABLED=0 -init

[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.asm -attr "START_DEPENDENCIES='hard(ora.cssd,ora.ctssd) pullup(ora.cssd,ora.ctssd) weak(ora.drivers.acfs)', STOP_DEPENDENCIES='hard(intermediate:ora.cssd)'" -init

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl modify res ora.asm -attr "START_DEPENDENCIES='hard(ora.cssd,ora.ctssd) pullup(ora.cssd,ora.ctssd) weak(ora.drivers.acfs)', STOP_DEPENDENCIES='hard(intermediate:ora.cssd)'" -init

9. Remove the ora.cluster_interconnect.haip resource
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl delete resource ora.cluster_interconnect.haip -init -f

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl delete resource ora.cluster_interconnect.haip -init -f

10. Stop the Clusterware on node1
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl stop crs -f
11. Restart the network
[root@testoda1 network-scripts]# service network restart

[root@testoda1 network-scripts]# ifconfig -a

[root@testoda1 network-scripts]# cat /proc/net/bonding/icbond0

[root@testoda2 network-scripts]# service network restart

[root@testoda2 network-scripts]# ifconfig -a

[root@testoda2 network-scripts]# cat /proc/net/bonding/icbond0
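
To confirm that the bond came up in active-backup mode with p1p1 as primary, you can filter the standard kernel bonding report, for example:

[root@testoda1 network-scripts]# grep -E 'Bonding Mode|Currently Active Slave|MII Status' /proc/net/bonding/icbond0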

 
12. Restart the Clusterware
[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl start crs

[root@testoda2 ~]# /u01/app/18.0.0.0/grid/bin/crsctl start crs

13. Restart dcs-agent to rediscover the interfaces automatically
[root@testoda1 ~]# /opt/oracle/dcs/bin/restartagent.sh

[root@testoda2 ~]# /opt/oracle/dcs/bin/restartagent.sh

14. Check the cluster status after removing the HAIP resource

[root@testoda1 ~]# /u01/app/18.0.0.0/grid/bin/crsctl check cluster -all
**************************************************************
testoda1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
testoda2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
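
As an optional check, you can confirm from inside ASM that the interconnect now runs over the bonded interface, using the standard v$cluster_interconnects view:

[grid@testoda1 ~]$ sqlplus / as sysasm

SQL> SELECT name, ip_address, is_public, source FROM v$cluster_interconnects;

After the HAIP removal, the output should show icbond0 with its 192.168.16.x address instead of the 169.254.x.x HAIP addresses.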
I hope I helped with this procedure!
Andre Luiz Dutra Ontalba
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes. Specific data and identifications were removed to allow it to reach a generic audience and be useful.”


ZDLRA, Creating the Replication Server
Category: Engineer System Author: Fernando Simon (Board Member) Date: 4 years ago Comments: 0

ZDLRA, Creating the Replication Server

The replication for ZDLRA can operate in several ways, from a single upstream/downstream config to a multiple-replication config, but all of them are set up using the same procedure. The process is not complicated, but there are some details you need to be aware of to avoid having to reconstruct (or even lose) replicated data. In this post, I will show the details of creating the replication config.
I wrote about the basics of how replication works for ZDLRA in this post, and about how to configure the replication network in this other post. The network configuration is only needed when you are adding replication after the ZDLRA has been deployed; if you deployed with replication enabled, it is not needed. The official documentation about replication can be found here.

 

 

Replication Topology

The topology for ZDLRA replication can vary, but the basic options are:

 

In summary:

 

1. One-Way: The data flows in one direction only; a single ZDLRA forwards the backups.

 

2. Bi-Directional: Both ZDLRAs send backups to each other. In this case, the protected databases of each ZDLRA (usually one in each separate datacenter) are replicated between them, since both operate as upstream and downstream.
 
3. Hub-Spoke: One ZDLRA receives backups from several ZDLRAs, and this “hub” ZDLRA is responsible for archiving to tape.

 

Whichever type of replication you use, two roles exist:

 

1. Upstream: the ZDLRA that receives the backup and forwards it to another ZDLRA.

 

2. Downstream: the ZDLRA that receives the backup from another ZDLRA.

 

Scenario

 

In this post (and in others where I use replication) I will use the One-Way config, where I have one upstream and one downstream. If you have one of the other types, you just need to follow the same procedure and take care of details like users, wallets, and credentials.

 

 

 

It will be:

 

1. Upstream: ZDLRAS1.
2. Downstream: ZDLRAS2.

 

Creating the Replication

 

The replication for ZDLRA operates differently from Oracle Data Guard: it is native replication, using a procedure similar to the one used to ingest backups at the ZDLRA. I already wrote about this in my previous post (Replication and Index topic).
To configure the replication we use the procedure DBMS_RA.CREATE_REPLICATION_SERVER, but first we need to check some details. Replication is done per protection policy, so all the databases linked to that policy will have their backups replicated. I will write about that in another post; here I will show how to create the replication config.
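
To see which protection policies exist at the upstream (and therefore which groups of databases a replication config would cover), you can query the catalog. A minimal sketch, assuming the documented RASYS views:

SQL> SELECT policy_name FROM rasys.ra_protection_policy ORDER BY policy_name;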

 

A user at downstream to receive replication

 

The ZDLRA replication requires a dedicated user to send the backups from upstream to downstream. This user is created only on the downstream ZDLRA and never needs to be used to connect with RMAN.
The best practice is to name the user REPUSER_FROM_[ZDLRA_UPSTREAM_DB_NAME]; this way you know the source of the connection (useful when your downstream receives backups from more than one upstream).
So, the first step is to create the user at the downstream:

 

[root@zdlras2n1 ~]# /opt/oracle.RecoveryAppliance/bin/racli add vpc_user --user_name=repusr_from_zdlras1

[repusr_from_zdlras1] New Password:

Mon Nov 25 23:34:50 2019: Start: Add vpc user repusr_from_zdlras1.

Mon Nov 25 23:34:51 2019:        Add vpc user repusr_from_zdlras1 successfully.

Mon Nov 25 23:34:51 2019: End:   Add vpc user repusr_from_zdlras1.

[root@zdlras2n1 ~]#

 

Wallet at upstream

 

To allow the upstream to connect to the downstream and send the backups, you need to create a wallet at the upstream ZDLRA with the credentials of the repuser created in the first step. The wallet can be stored on a shared filesystem so both cluster nodes can access it, or each node can store it in a local folder (but the path needs to be the same on both).
The wallet needs to be ALO (auto-login) and can be a shared one (if you already have it). To create the wallet at the upstream:

 

[root@zdlras1n1 ~]# su - oracle

Last login: Mon Nov 25 23:43:26 CET 2019 on pts/3

[oracle@zdlras1n1 ~]$ mkdir /radump/wallrep

[oracle@zdlras1n1 ~]$

[oracle@zdlras1n1 ~]$ mkstore -wrl /radump/wallrep -createALO

Oracle Secret Store Tool Release 19.0.0.0.0 - Production

Version 19.3.0.0.0

Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.




[oracle@zdlras1n1 ~]$
 
After that, we create the credential with the username and password that were created at the downstream:

 

[oracle@zdlras1n1 ~]$ mkstore -wrl /radump/wallrep -createCredential zdlras2-rep.oralocal:1522/zdlras2 repusr_from_zdlras1 repuser

Oracle Secret Store Tool Release 19.0.0.0.0 - Production

Version 19.3.0.0.0

Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.




[oracle@zdlras1n1 ~]$

[oracle@zdlras1n1 ~]$ mkstore -wrl /radump/wallrep -listCredential

Oracle Secret Store Tool Release 19.0.0.0.0 - Production

Version 19.3.0.0.0

Copyright (c) 2004, 2019, Oracle and/or its affiliates. All rights reserved.




List credential (index: connect_string username)

1: zdlras2-rep.oralocal:1522/zdlras2 repusr_from_zdlras1

[oracle@zdlras1n1 ~]$

 

You can define the credential name freely, but I usually use the same pattern as the EZCONNECT string. This way, I immediately know where the credential points.
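
Before creating the replication server, it is worth testing that the wallet really authenticates against the downstream. A minimal sketch using a dedicated TNS_ADMIN with wallet override (the /radump/wallrep_tns directory is just an example):

[oracle@zdlras1n1 ~]$ mkdir /radump/wallrep_tns
[oracle@zdlras1n1 ~]$ cat > /radump/wallrep_tns/sqlnet.ora <<EOF
WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/radump/wallrep)))
SQLNET.WALLET_OVERRIDE=TRUE
EOF
[oracle@zdlras1n1 ~]$ TNS_ADMIN=/radump/wallrep_tns sqlplus /@zdlras2-rep.oralocal:1522/zdlras2

If the credential is correct, sqlplus logs in as repusr_from_zdlras1 without prompting for a password.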

 

DBMS_RA.CREATE_REPLICATION_SERVER

 

The third and last step is to call the procedure that creates the configuration at the upstream. This is done only at the upstream, and it uses the wallet created in step two.
So, we call DBMS_RA.CREATE_REPLICATION_SERVER with these parameters:

 

. replication_server_name: Name for the downstream server. You can define the name that you want.

. sbt_so_name: It will always be “libra.so”.

. catalog_user_name: The user that will connect using the wallet. Always RASYS.

. wallet_alias: The credential name that you defined in the wallet.

. wallet_path: Where the wallet is located.

. max_streams: Max number of concurrent replication streams. The default value is 4.

 

The replication information can be checked in the RASYS.RA_REPLICATION_SERVER table, which stores all the information about the replication servers configured at your upstream.

 

So, to create the replication configuration:

 

[oracle@zdlras1n1 ~]$ sqlplus rasys/change^Me2




SQL*Plus: Release 19.0.0.0.0 - Production on Sun Dec 22 20:46:51 2019

Version 19.3.0.0.0




Copyright (c) 1982, 2019, Oracle.  All rights reserved.




Last Successful login time: Sun Dec 22 2019 20:33:15 +01:00




Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.3.0.0.0




SQL> SELECT COUNT(*)  FROM RA_REPLICATION_SERVER;




  COUNT(*)

----------

         0




SQL>

SQL> BEGIN

  2  DBMS_RA.CREATE_REPLICATION_SERVER (

  3      replication_server_name => 'zdlras2_rep',

  4      sbt_so_name      => 'libra.so',

  5      catalog_user_name       => 'RASYS',

  6      wallet_alias            => 'zdlras2-rep.oralocal:1522/zdlras2',

  7      wallet_path             => 'file:/radump/wallrep');

  8  END;

  9  /




PL/SQL procedure successfully completed.




SQL> SELECT COUNT(*)  FROM RA_REPLICATION_SERVER;




  COUNT(*)

----------

         1




SQL>

 

One important point here is the max_streams parameter. It needs to be tuned: if you replicate many databases, it may be good to increase this value. You can check the queue by selecting from the rasys.ra_task table and verifying whether there are replication tasks waiting. Of course, this also depends on the size of your files.
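
A sketch of that check (the exact task_type values are release-dependent, so the LIKE filter here is an assumption):

SQL> SELECT task_type, state, COUNT(*) FROM rasys.ra_task WHERE task_type LIKE '%REPL%' GROUP BY task_type, state;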

 

Replication

 

The steps described here are just a small part of the replication setup. We created the replication server config at the upstream (wallet and configuration) and at the downstream (username), but we have still not finished the configuration of the replication workflow:

 

 

If you check the workflow for manual config, some steps are still missing. The missing part is the “logical” definition: the policies that will be replicated and the databases linked to those policies. The basic configuration (the replication server config) was done in this post and in previous posts.
In the next post I will show how to configure the backup policies and the details you need to take care of to define them correctly. If you want to understand more about protection policies, you can check the post I wrote about them.
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies or opinions. The information here was edited to be useful for general purposes. Specific data and identifications were removed to allow it to reach a generic audience and be useful for the community.”

