Observer, Quorum
Category: Database Author: Fernando Simon (Board Member) Date: 6 years ago Comments: 0


This article closes the series about DG and Fast-Start Failover, where I covered in more detail how a case of isolation can lead to the shutdown of your healthy/running primary database with “ORA-16830: primary isolated from fast-start failover partners”.
In the first article, I wrote about how one simple detail dramatically impacts the reliability of your MAA environment. Where you put your Observer in a DG environment (when Fast-Start Failover is in use) plays a core role in case of outages, and you can face primary isolation and shutdown. Besides that, there is no clear documentation about the pros and cons to help you define the correct place for the Observer. You can read more in my article here.
In the second article, I wrote about a new feature that helps you protect more scenarios for Fast-Start Failover/DG. Using multiple observers you can remove the single point of failure and put one Observer on each side of your environment (primary, standby, and a third site). You can read more in my article here.
In this last article I discuss how, even using all these features, there is no perfect solution. Another point discussed here is how (maybe) Oracle could improve that. Below I show in detail how, even with multiple observers, a healthy primary database can still shut down. Unfortunately, it involves a lot of technical info and long log output, but you can jump directly to the end to see the discussion about how this could be improved.

 

Fast-Start Failover and Multiple Observers

Because of the design of Fast-Start Failover, Broker, and DG, even when using multiple observers the decision (to fail over the database or not) is still based on just one observer's report. The others are just backups for the master, and what they see does not count when a failover scenario hits the environment. Even if the primary database can receive connections from the other two observers, if it cannot reach the master observer (and the standby) it decides to shut down because it is “isolated”.
Look at the example below, where I have three observers (one in each site and a third one in the “cloud”):

 

 

The image above translates to this Broker configuration, where the master observer is “dbobss” (which resides in the standby datacenter):

 

DGMGRL> show fast_start failover;




Fast-Start Failover: ENABLED




  Threshold:          240 seconds

  Target:             orcls

  Observers:      (*) dbobss

                      dbobsp

                      dbobst

  Lag Limit:          30 seconds (not in use)

  Shutdown Primary:   TRUE

  Auto-reinstate:     TRUE

  Observer Reconnect: 10 seconds

  Observer Override:  FALSE




Configurable Failover Conditions

  Health Conditions:

    Corrupted Controlfile          YES

    Corrupted Dictionary           YES

    Inaccessible Logfile            NO

    Stuck Archiver                  NO

    Datafile Write Errors          YES




  Oracle Error Conditions:

    (none)




DGMGRL>

 

Note that in this case I set the fast-start failover threshold to 240 seconds just to have more time to show the logs. But be aware that this parameter defines how long your system waits/freezes before proceeding with the failover in case of system isolation or loss of the primary.
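For reference, the threshold is an ordinary Broker property, so raising it is a one-line DGMGRL edit (the value here matches this test):

DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 240;
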
Here, to simulate side isolation and the same behavior as in the first article, I shut down the network on the standby (the part that talks with the primary database), on the master observer (just the network that talks with the primary database), and, for the other observers, the network for standby communication. The image below reflects this:

 

 

After that, the Broker log on the primary starts to report a lot of information, which I discuss below. You can click here to check the full text. From this log you can see that the primary detected the loss of communication with the standby at 20:14:06.504 and in 4 minutes (240 seconds) would trigger the failover. The most important part is marked below:

 

2019-05-05 20:14:36.506                      LGWR: still awaiting FSFO ack after 30 seconds

2019-05-05 20:14:41.968                      DMON: Creating process FSFP

2019-05-05 20:14:44.976                      FSFP: Process started

2019-05-05 20:14:44.976                      FSFP: The ping from current master observer (No. 1) to primary database timed out.

2019-05-05 20:14:44.976                      FSFP: Preparing to switch master observer from observer 1 to observer 2

2019-05-05 20:14:44.976                      FSFP: FSFO SetState(st=43 "SET SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:14:44.976                      DMON: FSFP successfully started

2019-05-05 20:14:44.977                      FSFP: persisting FSFO state flags=0x10040001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

2019-05-05 20:14:44.979                      FSFP: Data Guard Broker initiated a master observer switch since the current master observer cannot reach the primary database

2019-05-05 20:14:44.980                      FSFP: NET Alias translated: (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbstb_dg)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcls) (UR=A)))

2019-05-05 20:14:44.980                      FSFP: Net link using RFIUPI_DB_CDESC, site=2, inst=0

2019-05-05 20:14:44.981                      FSFP: Connect to member orcls using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbstb_dg)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(UR=A)(SERVICE_NAME=orcls_DGB)))

2019-05-05 20:14:44.981                      FSFP: 0 x 10 seconds network retry READY

2019-05-05 20:14:51.506                      LGWR: still awaiting FSFO ack after 45 seconds

2019-05-05 20:14:55.120                      FSFP: Failed to connect to remote database orcls. Error is ORA-12543

2019-05-05 20:14:55.120                      FSFP: FSFO SetState(st=44 "CLR SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:14:55.120                      FSFP: persisting FSFO state flags=0x40001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

2019-05-05 20:15:06.507                      LGWR: still awaiting FSFO ack after 60 seconds

2019-05-05 20:15:10.124                      FSFP: The ping from current master observer (No. 1) to primary database timed out.

2019-05-05 20:15:10.124                      FSFP: Preparing to switch master observer from observer 1 to observer 2

2019-05-05 20:15:10.124                      FSFP: FSFO SetState(st=43 "SET SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:15:10.124                      FSFP: persisting FSFO state flags=0x10040001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

2019-05-05 20:15:10.128                      FSFP: Data Guard Broker initiated a master observer switch since the current master observer cannot reach the primary database

2019-05-05 20:15:10.128                      FSFP: NET Alias translated: (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbstb_dg)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcls) (UR=A)))

2019-05-05 20:15:10.128                      FSFP: Net link using RFIUPI_DB_CDESC, site=2, inst=0

2019-05-05 20:15:10.128                      FSFP: Connect to member orcls using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbstb_dg)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(UR=A)(SERVICE_NAME=orcls_DGB)))

2019-05-05 20:15:10.128                      FSFP: 0 x 10 seconds network retry READY

2019-05-05 20:15:14.979                      DMON: A Fast-Start Failover target switch is necessary because the primary cannot reach the Fast-Start Failover target standby database

2019-05-05 20:15:14.980                      DMON: A target switch was not attempted because the observer has not pinging primary recently.

2019-05-05 20:15:21.246                      FSFP: Failed to connect to remote database orcls. Error is ORA-12543

2019-05-05 20:15:21.247                      FSFP: FSFO SetState(st=44 "CLR SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

2019-05-05 20:15:21.247                      FSFP: persisting FSFO state flags=0x40001, version=128, enver=123,target=2, lad=, obid=0x32cc2adc (852241116), threshold=240, laglim=30, obslim=30

 

Above you can see that the Broker on the primary detected that the master observer was down and tried to switch to another observer. Unfortunately, since it was not possible to connect to the standby database, the switch could not complete. Note that the event “SET SWOB INPRG” was triggered, but it ended in “CLR SWOB INPRG” without success because of “Failed to connect to remote database orcls. Error is ORA-12543”. If you compare with the log from the previous article (when I changed the master observer manually), here we do not see the “SET OBID” and “OBSERVED” events.
So, basically, because the standby database was unreachable, the primary database could not swap the master observer (even though it still received connections from the other observers). This behavior does not change even if you set large values for the “FastStartFailoverThreshold”, “OperationTimeout”, and “CommunicationTimeout” properties.
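All three are regular Broker configuration properties, so testing different values is a one-line change each (the values below are just illustrative; none of them avoids the shutdown in this scenario):

DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = 240;
DGMGRL> EDIT CONFIGURATION SET PROPERTY CommunicationTimeout = 300;
DGMGRL> EDIT CONFIGURATION SET PROPERTY OperationTimeout = 120;
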

Quorum

In a DG scenario with Fast-Start Failover enabled, you can hit a shutdown of your healthy primary database because it thinks it is isolated when it loses communication with the standby and the observer. Even when you add multiple observers, the behavior does not change.
By the current design of DG this is 100% correct; it is impossible for each side to know whether it is isolated or not. The primary, when it loses the connection to the master observer and the standby, shuts down because it cannot guarantee the transactions. And the standby (if alive and connected to the master observer) fails over to become the next primary. The problem is that even using multiple observers, where you could have an odd number of votes, you still face isolation if the minimal part (standby + master observer) votes for itself (even if it is the isolated side). As discussed in the first article, where you put your observer is very important, and you need to check the pros and cons of your decision.
Going deeper, when you use Fast-Start Failover your DG starts to operate in sync (even in Max Availability), and at the first sign of communication failure the primary database freezes and does not accept more transactions. No records are stored, and this is why, even with multiple observers, the primary cannot switch to another one: the database itself is blocked from storing this change.
One option that could close this gap is for the Broker to use a quorum (or voting system) to control whether it is OK to proceed with the failover or not. In the scenario that I showed before, where the primary still receives connections from the other two observers, the shutdown of the healthy primary would not occur, because it has more votes/connections from observers (2) compared with the standby (which has just one). Unfortunately, there is no perfect solution: if an outage occurs on the primary side and you lose connection with most of the observers, the standby cannot fail over because it does not know whether it has the votes to do so.
This could be a future, additional option for Fast-Start Failover environments. But the key and fundamental part is still there: even a quorum will not work correctly if you put all the observers on just one side. And this becomes even more critical now that you can have hybrid clouds, with one for databases and another for applications.
The idea behind writing these articles was to show the problem and to try to fill the gap about where to place your observer. There is no clear doc listing the “pros and cons” of each placement option. Unfortunately, even with the new features, a gap still exists that needs to be covered to improve reliability in some scenarios. There is no perfect solution, but it is important to know what you can do to reach MAA at the highest level.
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies, or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it can reach a generic audience and be useful for the community.”

 


Migrate On-Premises Database to Autonomous Database Cloud
Category: Cloud Author: Andre Luiz Dutra Ontalba (Board Member) Date: 6 years ago Comments: 0


Today we will learn how to migrate an Oracle database (On-Premises) to Autonomous Database Cloud.
 
In this environment we will show the usage of a new tool called MV2ADB. After you complete its configuration file, this tool can execute all the steps to export, transport, and import a database into the Oracle Autonomous Cloud.

 

INSTALL THE MV2ADB SCRIPT

 

The script that allows easy migration to ADB can be downloaded from My Oracle Support through note 2463574.1.
 
INSTALL THE MV2ADB TOOL AS ROOT USER

 

[root@hol /]# cd binaries/

[root@hol binaries]# ll

total 52532

-rw-r--r--. 1 oracle dba    31216 Jul 13 19:08 mv2adb-2.0.1-40.noarch.rpm

-rw-r--r--. 1 oracle dba 53758240 Jul 13 19:08 oracle-instantclient19.3-basic-19.3.0.0.0-1.x86_64.rpm

[root@hol binaries]# rpm -ivh mv2adb-2.0.1-40.noarch.rpm

warning: mv2adb-2.0.1-40.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID e7004b4d: NOKEY

Preparing...                          ################################# [100%]

Updating / installing...

   1:mv2adb-2.0.1-40                  ################################# [100%]




MV2ADB has been installed on /opt/mv2adb succesfully!




[root@hol binaries]#

 

Please note that the install script shows the location where the tool has been installed, in this case /opt/mv2adb. We need this later.

 

INSTALL THE ORACLE INSTANT CLIENT

 

We have already downloaded and unzipped the required files for the Oracle Instant Client. Under /opt we have unzipped the base Instant Client, the SQL*Plus zipfile, and the Tools zipfile (creating /opt/instantclient_19_3). All have been downloaded from OTN.
https://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html
instantclient-basic-linux.x64-19.3.0.0.0dbru.zip
instantclient-tools-linux.x64-19.3.0.0.0dbru.zip
instantclient-sqlplus-linux.x64-19.3.0.0.0dbru.zip
 

 

[root@hol opt]# unzip /binaries/instantclient-basic-linux.x64-19.3.0.0.0dbru.zip
Archive:  /binaries/instantclient-basic-linux.x64-19.3.0.0.0dbru.zip

  inflating: instantclient_19_3/adrci

  inflating: instantclient_19_3/BASIC_LICENSE

  inflating: instantclient_19_3/BASIC_README

  inflating: instantclient_19_3/genezi

  inflating: instantclient_19_3/libclntshcore.so.19.1

    linking: instantclient_19_3/libclntsh.so  -> libclntsh.so.19.1

    linking: instantclient_19_3/libclntsh.so.10.1  -> libclntsh.so.19.1

    linking: instantclient_19_3/libclntsh.so.11.1  -> libclntsh.so.19.1

    linking: instantclient_19_3/libclntsh.so.12.1  -> libclntsh.so.19.1

    linking: instantclient_19_3/libclntsh.so.18.1  -> libclntsh.so.19.1

  inflating: instantclient_19_3/libclntsh.so.19.1

  inflating: instantclient_19_3/libipc1.so

  inflating: instantclient_19_3/libmql1.so

  inflating: instantclient_19_3/libnnz19.so

    linking: instantclient_19_3/libocci.so  -> libocci.so.19.1

    linking: instantclient_19_3/libocci.so.10.1  -> libocci.so.19.1

    linking: instantclient_19_3/libocci.so.11.1  -> libocci.so.19.1

    linking: instantclient_19_3/libocci.so.12.1  -> libocci.so.19.1

    linking: instantclient_19_3/libocci.so.18.1  -> libocci.so.19.1

  inflating: instantclient_19_3/libocci.so.19.1

  inflating: instantclient_19_3/libociei.so

  inflating: instantclient_19_3/libocijdbc19.so

  inflating: instantclient_19_3/liboramysql19.so

   creating: instantclient_19_3/network/

  inflating: instantclient_19_3/ojdbc8.jar

  inflating: instantclient_19_3/ucp.jar

  inflating: instantclient_19_3/uidrvci

  inflating: instantclient_19_3/xstreams.jar

   creating: instantclient_19_3/network/admin/

  inflating: instantclient_19_3/network/admin/README

finishing deferred symbolic links:

  instantclient_19_3/libclntsh.so -> libclntsh.so.19.1

  instantclient_19_3/libclntsh.so.10.1 -> libclntsh.so.19.1

  instantclient_19_3/libclntsh.so.11.1 -> libclntsh.so.19.1

  instantclient_19_3/libclntsh.so.12.1 -> libclntsh.so.19.1

  instantclient_19_3/libclntsh.so.18.1 -> libclntsh.so.19.1

  instantclient_19_3/libocci.so -> libocci.so.19.1

  instantclient_19_3/libocci.so.10.1 -> libocci.so.19.1

  instantclient_19_3/libocci.so.11.1 -> libocci.so.19.1

  instantclient_19_3/libocci.so.12.1 -> libocci.so.19.1

  instantclient_19_3/libocci.so.18.1 -> libocci.so.19.1

[root@hol opt]# unzip /binaries/instantclient-tools-linux.x64-19.3.0.0.0dbru.zip

Archive:  /binaries/instantclient-tools-linux.x64-19.3.0.0.0dbru.zip

  inflating: instantclient_19_3/exp

  inflating: instantclient_19_3/expdp

  inflating: instantclient_19_3/imp

  inflating: instantclient_19_3/impdp

  inflating: instantclient_19_3/libnfsodm19.so

  inflating: instantclient_19_3/sqlldr

  inflating: instantclient_19_3/TOOLS_LICENSE

  inflating: instantclient_19_3/TOOLS_README

  inflating: instantclient_19_3/wrc

[root@hol opt]# unzip /binaries/instantclient-sqlplus-linux.x64-19.3.0.0.0dbru.zip

Archive:  /binaries/instantclient-sqlplus-linux.x64-19.3.0.0.0dbru.zip

  inflating: instantclient_19_3/glogin.sql

  inflating: instantclient_19_3/libsqlplusic.so

  inflating: instantclient_19_3/libsqlplus.so

  inflating: instantclient_19_3/sqlplus

  inflating: instantclient_19_3/SQLPLUS_LICENSE

  inflating: instantclient_19_3/SQLPLUS_README

[root@hol opt]#
Change the owner and group of the instantclient folder:
[root@hol opt]# chown oracle:dba instantclient_19_3/
[root@hol opt]# ll -l instantclient_19_3/

total 240144

-rwxr-xr-x. 1 root root     41840 Apr 17 08:38 adrci

-r-xr-xr-x. 1 root root      5780 Apr 17 08:38 BASIC_LICENSE

-rw-r--r--. 1 root root      1632 Apr 17 08:38 BASIC_README

-rwxr-xr-x. 1 root root   1018968 Apr 17 08:39 exp

-rwxr-xr-x. 1 root root    220640 Apr 17 08:39 expdp

-rwxr-xr-x. 1 root root     59296 Apr 17 08:38 genezi

-rw-r--r--. 1 root root       342 Apr 17 08:39 glogin.sql

-rwxr-xr-x. 1 root root    502536 Apr 17 08:39 imp

-rwxr-xr-x. 1 root root    232768 Apr 17 08:39 impdp

-rwxr-xr-x. 1 root root   8041608 Apr 17 08:38 libclntshcore.so.19.1

lrwxrwxrwx. 1 root root        17 Jul 13 21:53 libclntsh.so -> libclntsh.so.19.1

lrwxrwxrwx. 1 root root        17 Jul 13 21:53 libclntsh.so.10.1 -> libclntsh.so.19.1

lrwxrwxrwx. 1 root root        17 Jul 13 21:53 libclntsh.so.11.1 -> libclntsh.so.19.1

lrwxrwxrwx. 1 root root        17 Jul 13 21:53 libclntsh.so.12.1 -> libclntsh.so.19.1

lrwxrwxrwx. 1 root root        17 Jul 13 21:53 libclntsh.so.18.1 -> libclntsh.so.19.1

-rwxr-xr-x. 1 root root  79961792 Apr 17 08:38 libclntsh.so.19.1

-r-xr-xr-x. 1 root root   3609536 Apr 17 08:38 libipc1.so

-r-xr-xr-x. 1 root root    478432 Apr 17 08:38 libmql1.so

-rwxr-xr-x. 1 root root     60024 Apr 17 08:39 libnfsodm19.so

-rwxr-xr-x. 1 root root   6587832 Apr 17 08:38 libnnz19.so

lrwxrwxrwx. 1 root root        15 Jul 13 21:53 libocci.so -> libocci.so.19.1

lrwxrwxrwx. 1 root root        15 Jul 13 21:53 libocci.so.10.1 -> libocci.so.19.1

lrwxrwxrwx. 1 root root        15 Jul 13 21:53 libocci.so.11.1 -> libocci.so.19.1

lrwxrwxrwx. 1 root root        15 Jul 13 21:53 libocci.so.12.1 -> libocci.so.19.1

lrwxrwxrwx. 1 root root        15 Jul 13 21:53 libocci.so.18.1 -> libocci.so.19.1

-rwxr-xr-x. 1 root root   2339896 Apr 17 08:38 libocci.so.19.1

-rwxr-xr-x. 1 root root 130515320 Apr 17 08:38 libociei.so

-r-xr-xr-x. 1 root root    153624 Apr 17 08:38 libocijdbc19.so

-rwxr-xr-x. 1 root root    115976 Apr 17 08:38 liboramysql19.so

-rwxr-xr-x. 1 root root   1660752 Apr 17 08:39 libsqlplusic.so

-rwxr-xr-x. 1 root root   1573416 Apr 17 08:39 libsqlplus.so

drwxr-xr-x. 3 root root        19 Apr 17 08:38 network

-rw-r--r--. 1 root root   4210517 Apr 17 08:38 ojdbc8.jar

-rwxr-xr-x. 1 root root   1660560 Apr 17 08:39 sqlldr

-rwxr-xr-x. 1 root root     24792 Apr 17 08:39 sqlplus

-r-xr-xr-x. 1 root root      5780 Apr 17 08:39 SQLPLUS_LICENSE

-rw-r--r--. 1 root root      1640 Apr 17 08:39 SQLPLUS_README

-r-xr-xr-x. 1 root root      5780 Apr 17 08:39 TOOLS_LICENSE

-rw-r--r--. 1 root root      1632 Apr 17 08:39 TOOLS_README

-rw-r--r--. 1 root root   1680074 Apr 17 08:38 ucp.jar

-rwxr-xr-x. 1 root root    236960 Apr 17 08:38 uidrvci

-rwxr-xr-x. 1 root root    751072 Apr 17 08:39 wrc

-rw-r--r--. 1 root root     74263 Apr 17 08:38 xstreams.jar

[root@hol opt]#
CREATING A NEW AUTONOMOUS ATP ENVIRONMENT
Log into Oracle Cloud and click the “Create Instance” link.
Click on the “Create” button in the Autonomous Transaction Processing section.

 

 

Select the compartment you want to build the service in, then click the “Create Autonomous Database” button.

 

Enter the details of the service you want to create. The default sizes are 1 CPU core and 1TB of storage. Remember to select the appropriate licensing model. Click the “Create Autonomous Database” button.
 
Compartment : <keep value>

Display Name : MIGRATE_DBON_DBOCI

Database Name : MIGRATE

Workload Type : Transaction Processing

Choose a deployment type:  Serverless

CPU Core Count : 1

Storage (TB) : 1

Password : OracleMIG#2019

License Type : My Organization Already owns Oracle Database (etc..)

 

 

This process will take between 5 and 10 minutes to complete.
CREATE AN OBJECT STORE BUCKET

 

As we need to upload the export to the OCI environment, we need to create a location for it. The MV2ADB script could create a new location, but this would require the setup of the OCI command-line tools. Since this environment uses a more generic approach, we need to pre-create the directory (called a Bucket).

 

NAVIGATE TO OBJECT STORAGE IN THE OCI CONSOLE

 

 

Select the compartment you want to build the service in, then click the “Create Bucket” button.
Enter the details of the service you want to create.  Click the “Create Bucket” button.
Write down the name of the bucket as we will need it in our configuration file. The name of the bucket in the configuration file is case-sensitive.
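For reference, if you did have the OCI CLI configured, the bucket could also be created from the command line instead of the console. A hedged sketch (BC_MIGRATE matches the bucket name used later in this lab; the compartment OCID is a placeholder):

oci os bucket create --name BC_MIGRATE --compartment-id <your-compartment-ocid>
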

 

 

CHECK SOURCE SCHEMAS FOR COMPATIBILITY

 

 

Not everything is supported in the Autonomous Database Cloud environment. To make sure you do not run into any issues, a tool called ADB Schema Advisor has been created. This PL/SQL package generates a report showing the issues you might encounter before you actually execute the migration.

 

PACKAGE SOURCE ALREADY DOWNLOADED

 

The ADB Schema advisor can be downloaded from MOS note 2462677.1 (Oracle Autonomous Database Schema Advisor).

 

We have already downloaded the latest version to the /source directory in your client image.
 
Please note: in a regular environment, this package does not require the SYS or SYSTEM user to be installed. When installing it into a non-SYS/non-SYSTEM user, please check the manual for the exact installation steps. Very important are the GRANTs needed to give the package access to the information it needs.

 

[oracle@hol ~]$ . oraenv

ORACLE_SID = [oracle] ? UPGR

ORACLE_BASE environment variable is not being set since this

information is not available for the current user ID oracle.

You can set ORACLE_BASE manually if it is required.

Resetting ORACLE_BASE to its previous value or ORACLE_HOME

The Oracle base has been set to /u01/app/oracle/product/11.2.0.4

[oracle@hol ~]$ sqlplus




SQL*Plus: Release 11.2.0.4.0 Production on Sat Jul 13 23:17:26 2019




Copyright (c) 1982, 2013, Oracle.  All rights reserved.




Enter user-name: / as sysdba




Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options




SQL> @/home/oracle/adb_advisor.plb




Package created.







Package body created.




SQL>

SQL>
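
The invocation of the advisor report is not shown above (the screenshots were omitted). A hedged sketch of a typical call, assuming the REPORT procedure documented in the MOS note (the exact signature may vary between advisor versions), for the schema exported in this lab:

SQL> SET SERVEROUTPUT ON FORMAT WRAPPED
SQL> EXEC ADB_ADVISOR.REPORT(SCHEMAS=>'CARLOCA', ADB_TYPE=>'ATP');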

 

 

 

As you can see, there are some directory objects that cannot be migrated, as ADB does not support access to the local filesystem (besides the DATA_PUMP_DIR directory).
 
A second issue is a table that apparently needs changes before it can be imported. A little further down in the report, the issues are explained:
 

 

  • NOLOGGING options will be automatically changed to LOGGING options
  • Index Organized Tables (IOT) are not supported. You need a special option for IMPDP to change this during import.
GATHERING REQUIRED DETAILS

 

The configuration file for MV2ADB needs a series of parameters for the export, the upload of the dumpfile, and the import of the dumpfile. A full file with hints can be found in the /opt/mv2adb/conf directory. For this lab we will use only the parameters needed for a simple migration.

 

CREATE A NEW FILE FOR THE CONFIGURATION

 

sudo vi /opt/mv2adb/conf/ATP.mv2adb.conf

 

Cut and paste the parameters below into this new document so that we can start entering the required data. For now, only copy and paste; we will change the values in the following sections.

 

# DB Parameters




DB_CONSTRIG=//hol:1521/UPGR

SYSTEM_DB_PASSWORD=D5E036E7E7A09843561DCB5AEA05BCD2

SCHEMAS=CARLOCA

DUMPFILES=/tmp/DB112-UPGR.dmp

OHOME=/u01/app/oracle/product/11.2.0.4

ICHOME=/opt/instantclient_19_3

# Expdp/Impdp Parameters

ENC_PASSWORD=D5E036E7E7A09843561DCB5AEA05BCD2

ENC_TYPE=AES256

# Object Store Properties

BMC_HOST=

BMC_TENNANT=

BMC_BUCKET=

BMC_ID=

BMC_PASSWORD=

# ADB Parameters

ADB_NAME=

ADB_PASSWORD=

CFILE=

 

GATHERING (SOURCE) DB PARAMETERS

 

The initial section is about the source database. Please enter the following information for the source environment. Since this is a lab environment, we have pre-entered most of the DB parameters for you. Here is some information on where you can find the details:

 

DB_CONSTRIG

Connection string from the local system (where mv2adb is running) to the database instance that needs to be migrated

SYSTEM_DB_PASSWORD

Password for SYSTEM user for this source database

SCHEMAS

Schemas to be exported; only transport the schemas that you need. Do not include any default schemas like HR, OE, SYSTEM, SYS, etc., as they already exist in ADB and might result in errors (or get ignored)

DUMPFILES

File system location of the dumpfiles. If you want parallelism during export and import, specify as many files as you want the parallelism to be (see the example after this list). Make sure the file names are unique in the source but also in the destination (ATP) environment.

OHOME

The source database Oracle Home

ICHOME

The installed Oracle Instant Client home (basic, SQL*Plus and Tools unzipped)
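
As an illustration of the DUMPFILES parallelism note above, a hedged example assuming a comma-separated list (check the full sample config in /opt/mv2adb/conf for the exact syntax; the paths are illustrative):

DUMPFILES=/tmp/DB112-UPGR_01.dmp,/tmp/DB112-UPGR_02.dmp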

 

GATHERING EXPDP/IMPDP PARAMETERS

 

In this section you specify the encryption password and the encryption type for your export. To make sure your data cannot be compromised when exporting and uploading your data, the script requires a password.

 

ENC_PASSWORD

A password that will encrypt your Data Pump exports. It has nothing to do with any existing user or password. Please note that this password cannot be in plain text; it needs to be encrypted using the mv2adb binaries on your system

ENC_TYPE

Type of encryption for your data. The higher the number, the stronger the encryption but also the slower the export/import. Options are AES128, AES192, and AES256

 

 

ENCRYPT THE ENCRYPTION PASSWORD AND PUT IT IN THE FILE

 

The password we will use for this environment is oracle

 

[oracle@hol ~]$ /opt/mv2adb/mv2adb encpass

Please enter the password :  oracle

Please re-enter the password : oracle

D5E036E7E7A09843561DCB5AEA05BCD2

 

Make sure your encrypted password is entered in your ATP.mv2adb.conf config file.

 

 

GATHERING OBJECT STORE PROPERTIES

 

The Autonomous Database can only use dumpfiles uploaded to Swift-compatible storage. The following parameters specify where the dumpfiles should be uploaded after the export. This is also the location where the logfiles will be stored. Instead of the Swift details below, you can also choose to locally install the OCI client and use that setup. See the example config for more information.

 

 

 

BMC_HOST

This is the Swift object storage URL for your environment.

BMC_TENNANT

Name of your tenancy. Be aware that for Swift we use lower case only

BMC_BUCKET

The name of a new bucket that will be created for this migration.

BMC_ID

Your username in OCI

BMC_PASSWORD

The Swift Authentication Token encrypted using the mv2adb password encoder.

 

 

GENERATE TOKEN FOR SWIFT API
 
Click on Identity and select Users.

 

 

Select the user for creating the authentication token.
Click on Generate Token.
Enter a description for this token and click Generate Token.
Copy this token; it will be used to connect the mv2adb tool to Object Storage.

 

LOCATE THE SWIFT AUTHENTICATION PASSWORD AND ENCRYPT USING THE MV2ADB TOOL

 

 

[oracle@hol ~]$ cat /home/oracle/auth_token.txt

cn#Wi]6xv4hO(:j0l0SX

[oracle@hol ~]$ /opt/mv2adb/mv2adb encpass

Please enter the password : <cut-and-paste auth_key>

Please re-enter the password : <cut-and-paste auth_key>

E54C941DA0DBA8EB467DCC7F0C04(...)ED747D3AF6B6184BC173B78DE426CEBE4

FILL IN ALL OF THE DETAILS FOR THE OBJECT STORE SETTINGS

 

# Object Store Properties

BMC_HOST=https://swiftobjectstorage.eu-frankfurt-1.oraclecloud.com

BMC_TENNANT=andreontalba

BMC_BUCKET=BC_MIGRATE

BMC_ID=andre.ontalba

BMC_PASSWORD=E54C941DA0DBA8EB467DCC7F0C04(...)ED747D3AF6B6184BC173B78DE426CEBE4


GATHERING ADB PARAMETERS

 

During the gathering of the other parameters, your ADB environment should have been created. As a last step, we will now gather the information needed for the final section.

 

ADB_NAME

Name of your ADB instance

ADB_PASSWORD

Database Admin password

CFILE

Zipfile containing database credentials

 

The first parameter requires the name of your created ADB environment. Navigate to the ADB console and find the name of your database. My database is MIGRATE.

 

 

 

 

 

The second parameter is the password you entered while creating the Autonomous environment. If you used the suggested password, it is OracleMIG#2019. If you chose another password, you need to remember it.
 
ENCRYPT YOUR DATABASE PASSWORD USING THE MV2ADB ENCRYPT OPTION

 

[oracle@hol ~]$ /opt/mv2adb/mv2adb encpass 
Please enter the password :  

Please re-enter the password :

DE3D105A8E6F6A4D5E86EXSW6BC1D3BA


For the third parameter we need to download something from the OCI console: the so-called wallet file.


CLICK ON THE BUTTON 'DB CONNECTION'



The following screen will be displayed:

CLICK ON THE BUTTON 'DOWNLOAD' TO DOWNLOAD THE WALLET ZIP

In the following screen a password is requested. This is the password that protects the keystore inside the zipfile. For this exercise we will not be using this keystore, so enter any random password twice.

ENTER PASSWORD AND PRESS ‘DOWNLOAD’

 

Your zipfile will be downloaded to the default location /home/oracle/Downloads. Please note the name of the wallet zip and enter it in your parameters.

 

# ADB Parameters

ADB_NAME=MIGRATE

ADB_PASSWORD=DE3D105A8E6F6A4D5E8XXXSW6BC1D3BA

CFILE=/home/oracle/Downloads/wallet.zip

 

MAKE SURE ALL PARAMETERS ARE ENTERED AND SAVE THE FILE TO /opt/mv2adb/conf/ATP.mv2adb.conf

 

 

START THE MIGRATION USING THE MV2ADB SCRIPT

 

START THE MV2ADB SCRIPT USING THE CONFIGURATION FILE YOU JUST CREATED.

 

[oracle@hol ~]$ sudo /opt/mv2adb/mv2adb auto -conf /opt/mv2adb/conf/ATP.mv2adb.conf 

INFO: 2019-08-31 14:08:27: Please check the logfile '/opt/mv2adb/out/log/mv2adb_12765.log' for more details

--------------------------------------------------------

mv2adb - Move data to Oracle Autonomous Database

Author: Ruggero Citton <[email protected]>

RAC Pack, Cloud Innovation and Solution Engineering Team

Copyright (c) 1982-2019 Oracle and/or its affiliates.




Version: 2.0.1-29

--------------------------------------------------------

INFO: 2019-08-31 14:08:27: Reading the configuration file '/opt/mv2adb/conf/ATP.mv2adb.conf'

INFO: 2019-08-31 14:08:28: Checking schemas on source DB

...

INFO: 2019-03-31 14:08:54: ...loading '/tmp/DB112-UPGR.dmp' into bucket 'UPGRADEBUCKET-RP'

SUCCESS: 2019-03-31 14:09:27: ...file '/tmp/DB112-UPGR.dmp' uploaded on 'UPGRADEBUCKET-RP' successfully

SUCCESS: 2019-03-31 14:09:27: Upload of '1' dumps over Oracle Object Store complete successfully

INFO: 2019-03-31 14:09:27: Performing impdp into ADB...

INFO: 2019-03-31 14:09:27: Step1 - ...drop Object Store Credential

INFO: 2019-03-31 14:09:29: Step2 - ...creating Object Store Credential

INFO: 2019-03-31 14:09:36: Step3 - ...executing import datapump to ADB

INFO: 2019-03-31 14:12:42: Moving impdp log 'mv2adb_impdp_20190831-140936.log' to Object Store

SUCCESS: 2019-03-31 14:12:43: Impdp to ADB 'MIGRATE' executed successfully


After about 10 minutes, all the steps should have been executed successfully. If you encounter any error, please check the logfile that was displayed immediately after you started the script. It contains all of the individual steps, the commands used, and the output of those commands.

 

LOGIN AND CHECK THE MIGRATED DATABASE
 
USE SQL*DEVELOPER TO CHECK IF THE CARLOCA USER HAS BEEN MIGRATED TO ATP

 

On your Desktop you can see SQL*Developer. Start this application.

 

CREATE A NEW CONNECTION TO ATP BY CLICKING ON THE GREEN + SIGN IN THE CONNECTIONS PANE
 
Connection Name : MYATP 

Username : admin

Password : OracleMIG#2019 (or any other password you have used)

Connection Type : Cloud Wallet

Configuration File :

<select the wallet you downloaded in /home/oracle/Downloads>



Service : migrate_tp
ENTER THE REQUIRED DETAILS AND PRESS CONNECT  


After connecting, a new SQL Window will be displayed. Here you can execute queries on the ATP environment.
ENTER THE QUERY AND PRESS THE GREEN ARROW TO EXECUTE IT  
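
The query text itself is not shown in this lab; any query against the migrated schema will do. For example, to verify the objects of the CARLOCA schema that was exported earlier:

select object_type, count(*)
  from all_objects
 where owner = 'CARLOCA'
 group by object_type;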


In the Query Result window, the result of the query will be displayed:
I hope this helps you!!!
 
This article was written by Andre Ontalba and Rodrigo Mufalani.
 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies, or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it can reach a generic audience and be useful for the community.”


Oracle Autonomous Database and DevOps: A Simple Approach to Utilization – Part 5
Category: Cloud Author: BRUNO REIS Date: 6 years ago Comments: 0


This article is the fifth and final part of the Oracle Autonomous Database and DevOps article series. For a better understanding, I recommend reading the articles “Oracle Autonomous Database and DevOps: A Simple Approach to Use – Part 1”, “Part 2”, “Part 3”, and “Part 4” before proceeding with the steps in this article.

 

As the previous article dealt with the creation of the Linux application server within Oracle Cloud, in this article we will proceed with the creation of microservices using Docker and connect a Java application to the cloud database. To start, let's review some concepts:

 

 

What are microservices?

 

According to Chris Richardson, the microservice architecture “is a style of architecture that structures an application as a collection of services that are:

 

  • Highly maintainable and testable
  • Loosely coupled
  • Independently deployable
  • Organized around business capabilities.”
 
What is node.js?

 

According to the project's official website, “Node.js is an open-source, cross-platform JavaScript runtime environment.”

 

PS: This article will not show the coding of the application, only how to load and run it.

 

Now that we have a brief description of microservices and Node.js, let's proceed with the creation of our microservices:
 
 
Creating Node.js microservices: make sure Docker is running on your machine (for more information on Docker, see the first article of this series):

 

[root@techdatatechdatabasketblogblog /]# systemctl status  docker
  • docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)

   Active: active (running) since Wed 2019-04-24 20:03:28 CEST; 7s ago

     Docs: http://docs.docker.com

 Main PID: 27946 (dockerd-current)

    Tasks: 20

   CGroup: /system.slice/docker.service

           ├─27946 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=...

           └─27952 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/co...




Apr 24 20:03:27 techdatatechdatabasketblogblog.tech.com dockerd-current[27946]: time="2019-04-24T20:03:27.250219328+02:00" level=warning msg="failed to cleanup ipc mounts:\nfailed to umount /var/lib/docker/co...id argument"

Apr 24 20:03:27 techdatatechdatabasketblogblog.tech.com dockerd-current[27946]: time="2019-04-24T20:03:27.250885514+02:00" level=warning msg="8172d6ebc0303d372883143207467260cd2ea708b300ba23c736b36777be0d3b c...id argument"

Apr 24 20:03:27 techdatatechdatabasketblogblog.tech.com dockerd-current[27946]: time="2019-04-24T20:03:27.350320351+02:00" level=info msg="Firewalld running: true"

Apr 24 20:03:28 techdatatechdatabasketblogblog.tech.com dockerd-current[27946]: time="2019-04-24T20:03:28.063094521+02:00" level=info msg="Removing stale sandbox f98e8fe2671acc61090bbed5f985266ded27adf2e19e27...6777be0d3b)"

Apr 24 20:03:28 techdatatechdatabasketblogblog.tech.com dockerd-current[27946]: time="2019-04-24T20:03:28.466076919+02:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16... IP address"

Apr 24 20:03:28 techdatatechdatabasketblogblog.tech.com dockerd-current[27946]: time="2019-04-24T20:03:28.627644418+02:00" level=info msg="Loading containers: done."

Apr 24 20:03:28 techdatatechdatabasketblogblog.tech.com dockerd-current[27946]: time="2019-04-24T20:03:28.687620173+02:00" level=info msg="Daemon has completed initialization"

Apr 24 20:03:28 techdatatechdatabasketblogblog.tech.com dockerd-current[27946]: time="2019-04-24T20:03:28.687645914+02:00" level=info msg="Docker daemon" commit="8633870/1.13.1" graphdriver=overlay2 version=1.13.1

Apr 24 20:03:28 techdatatechdatabasketblogblog.tech.com dockerd-current[27946]: time="2019-04-24T20:03:28.691636427+02:00" level=info msg="API listen on /var/run/docker.sock"

Apr 24 20:03:28 techdatatechdatabasketblogblog.tech.com systemd[1]: Started Docker Application Container Engine.

 

 

At this moment you can pull your app image from https://github.com, as in the example below:







[root@oc8226237722 /]# git clone https://github.com/brunors/appte.git  (link will not be available after this article is published)




Cloning into 'appte'...

remote: Enumerating objects: 1045, done.

remote: Total 1045 (delta 0), reused 0 (delta 0), pack-reused 1045

Receiving objects: 100% (1045/1045), 3.49 MiB | 1.95 MiB/s, done.

Resolving deltas: 100% (173/173), done.




Open the port on your computer for the application to start. As we chose in the previous article, it will be port 3055:




[root@techdatabasketblog /]# firewall-cmd --zone=public --permanent --add-port=3055/tcp

success

 

 

Install the Oracle Client on your computer; you can download it from the OTN website:

 

 

Copy the wallet used in previous articles to create the secure connection to a directory that will be used for the container image:
 
[root@techdatabasketblog wallet_APPtechdatabasket]# unzip /tmp/wallet/Wallet_techdatabasket.zip -d /wallet_APPtechdatabasket/

Archive:  /tmp/wallet/Wallet_techdatabasket.zip

  inflating: /wallet_APPtechdatabasket/cwallet.sso 

  inflating: /wallet_APPtechdatabasket/tnsnames.ora 

  inflating: /wallet_APPtechdatabasket/truststore.jks 

  inflating: /wallet_APPtechdatabasket/ojdbc.properties 

  inflating: /wallet_APPtechdatabasket/sqlnet.ora 

  inflating: /wallet_APPtechdatabasket/ewallet.p12 

  inflating: /wallet_APPtechdatabasket/keystore.jks 

[root@techdatabasketblog wallet_APPtechdatabasket]#


Change the sqlnet.ora values to:

 

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA =

(DIRECTORY=$TNS_ADMIN)))

 

With the wallet and the Oracle Client in the build directory, you can build the Docker image (in this article the image name is appte).
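
The Dockerfile itself is not listed in this article; the sketch below is a minimal, hypothetical example of what it could look like for a Node.js app listening on port 3055 (the real image would also need to bundle the Oracle client libraries and the wallet directory copied above):

# Hypothetical sketch, not the article's actual Dockerfile
FROM node:10
WORKDIR /app
# application sources, wallet and Oracle client files
COPY . .
RUN npm install
EXPOSE 3055
CMD ["node", "server.js"]

With a Dockerfile in place, run the build: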

 

$ docker build -t appte .




Check the image:

 

[root@techdatabasketblog]# docker images -a

REPOSITORY                                      TAG                 IMAGE ID            CREATED             SIZE

<none>                                          <none>              d9369bacadb8        14 minutes ago      562 MB

<none>                                          <none>              d9d8bb9b0d49        14 minutes ago      562 MB

<none>                                          <none>              7e7286d50f76        14 minutes ago      550 MB

<none>                                          <none>              6746bf7618ac        14 minutes ago      173 MB

<none>                                          <none>              6c6022e9f288        14 minutes ago      110 MB

<none>                                          <none>              05c85eda1ac8        14 minutes ago      110 MB

<none>                                          <none>              6c628092a5a8        14 minutes ago      110 MB

<none>                                          <none>              f24dfd98c67a        15 minutes ago      14.9 MB

appte                                           latest              65463ysdbe26       3 weeks ago         171 MB

 

 

Start the image on port 3055 as we chose:

 

docker run -i -p 3055:3055 -t appte  sh

/ #

 

Change your user credentials, password and connection string in the application.
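
For reference, in a node-oracledb application those settings usually live in a connection call like this (a hedged sketch; the user, password, and the TNS alias from the wallet's tnsnames.ora are illustrative placeholders):

const oracledb = require('oracledb');

async function getConnection() {
  // placeholders: replace with your own credentials and wallet alias
  return oracledb.getConnection({
    user: 'admin',
    password: 'YourPassword',
    connectString: 'techdatabasket_tp'
  });
}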

 

Run the server.js script:

 

node server.js &

 

The results will be something like this:

 

appte listening on port 3055




You can see your application in a web browser at http://{YOUR_IP}:3055

 

 

Now that we have created the microservices, let's connect a Java application to the ATP:

 

Connect to the provisioned server and, as root, install Java:

 

[root@techdatabasketblog ATPDocker]# ssh [email protected]

Enter passphrase for key '/root/.ssh/id_rsa':

Please login as the user "opc" rather than the user "root".




Connection to 129.146.128.2 closed.

[root@techdatabasketblog ATPDocker]# ssh [email protected]

Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

[root@techdatabasketblog ATPDocker]# ssh [email protected]

Enter passphrase for key '/root/.ssh/id_rsa':

Enter passphrase for key '/root/.ssh/id_rsa':

Last login: Wed Apr 24 19:27:11 2019 from ua-85-226-141-154.bbcust.telenor.se

[opc@vmlinuxmachine ~]$ su - root

[root@vmlinuxmachine ~]#yum install java




 

Loaded plugins: langpacks, ulninfo

ol7_UEKR5                                                                                                                                                                            | 2.5 kB  00:00:00    

ol7_addons                                                                                                                                                                           | 2.5 kB  00:00:00    

ol7_developer                                                                                                                                                                        | 2.5 kB  00:00:00    

ol7_developer_EPEL                                                                                                                                                                   | 2.5 kB  00:00:00    

ol7_ksplice                                                                                                                                                                          | 2.5 kB  00:00:00    

ol7_latest                                                                                                                                                                           | 3.0 kB  00:00:00    

ol7_optional_latest                                                                                                                                                                  | 2.5 kB  00:00:00    

ol7_software_collections                                                                                                                                                             | 2.5 kB  00:00:00    

(1/6): ol7_ksplice/primary_db                                                                                                                                                        | 131 kB  00:00:00    

(2/6): ol7_optional_latest/x86_64/updateinfo                                                                                                                                         | 688 kB  00:00:00    

(3/6): ol7_latest/x86_64/updateinfo                                                                                                                                                  | 899 kB  00:00:00    

(4/6): ol7_developer_EPEL/x86_64/primary_db                                                                                                                                          |  10 MB  00:00:00    

(5/6): ol7_latest/x86_64/primary_db                                                                                                                                                  |  16 MB  00:00:00    

(6/6): ol7_optional_latest/x86_64/primary_db                                                                                                                                         | 3.5 MB  00:00:00    

Resolving Dependencies

--> Running transaction check












Installed:

  java-1.8.0-openjdk.x86_64 1:1.8.0.212.b04-0.el7_6                                                                                                                                                         




Dependency Installed:

  alsa-lib.x86_64 0:1.1.6-2.el7                 atk.x86_64 0:2.28.1-1.el7                         avahi-libs.x86_64 0:0.6.31-19.el7                             cairo.x86_64 0:1.15.12-3.el7               

  copy-jdk-configs.noarch 0:3.3-10.el7_5        cups-libs.x86_64 1:1.6.3-35.el7                   dejavu-fonts-common.noarch 0:2.33-6.el7                       dejavu-sans-fonts.noarch 0:2.33-6.el7      

  fontconfig.x86_64 0:2.13.0-4.3.el7            fribidi.x86_64 0:1.0.2-1.el7                      gdk-pixbuf2.x86_64 0:2.36.12-3.el7                            giflib.x86_64 0:4.1.6-9.el7                

  graphite2.x86_64 0:1.3.10-1.el7_3             gtk-update-icon-cache.x86_64 0:3.22.30-3.el7      gtk2.x86_64 0:2.24.31-1.el7                                   harfbuzz.x86_64 0:1.7.5-2.el7              

  hicolor-icon-theme.noarch 0:0.12-7.el7        jasper-libs.x86_64 0:1.900.1-33.el7               java-1.8.0-openjdk-headless.x86_64 1:1.8.0.212.b04-0.el7_6    javapackages-tools.noarch 0:3.4.1-11.el7   

  jbigkit-libs.x86_64 0:2.0-11.el7              libICE.x86_64 0:1.0.9-9.el7                       libSM.x86_64 0:1.2.2-2.el7                                    libX11.x86_64 0:1.6.5-2.el7                

  libX11-common.noarch 0:1.6.5-2.el7            libXau.x86_64 0:1.0.8-2.1.el7                     libXcomposite.x86_64 0:0.4.4-4.1.el7                          libXcursor.x86_64 0:1.1.15-1.el7           

  libXdamage.x86_64 0:1.1.4-4.1.el7             libXext.x86_64 0:1.3.3-3.el7                      libXfixes.x86_64 0:5.0.3-1.el7                                libXft.x86_64 0:2.3.2-2.el7                

  libXi.x86_64 0:1.7.9-1.el7                    libXinerama.x86_64 0:1.1.3-2.1.el7                libXrandr.x86_64 0:1.5.1-2.el7                                libXrender.x86_64 0:0.9.10-1.el7           

  libXtst.x86_64 0:1.2.3-1.el7                  libXxf86vm.x86_64 0:1.1.4-1.el7                   libdrm.x86_64 0:2.4.91-3.el7                                  libfontenc.x86_64 0:1.1.3-3.el7            

  libglvnd.x86_64 1:1.0.1-0.8.git5baa1e5.el7    libglvnd-egl.x86_64 1:1.0.1-0.8.git5baa1e5.el7    libglvnd-glx.x86_64 1:1.0.1-0.8.git5baa1e5.el7                libjpeg-turbo.x86_64 0:1.2.90-6.el7        

  libpciaccess.x86_64 0:0.14-1.el7              libthai.x86_64 0:0.1.14-9.el7                     libtiff.x86_64 0:4.0.3-27.el7_3                               libwayland-client.x86_64 0:1.15.0-1.el7    

  libwayland-server.x86_64 0:1.15.0-1.el7       libxcb.x86_64 0:1.13-1.el7                        libxshmfence.x86_64 0:1.2-1.el7                               libxslt.x86_64 0:1.1.28-5.0.1.el7          

  lksctp-tools.x86_64 0:1.0.17-2.el7            mesa-libEGL.x86_64 0:18.0.5-4.el7_6               mesa-libGL.x86_64 0:18.0.5-4.el7_6                            mesa-libgbm.x86_64 0:18.0.5-4.el7_6        

  mesa-libglapi.x86_64 0:18.0.5-4.el7_6         pango.x86_64 0:1.42.4-2.el7_6                     pcsc-lite-libs.x86_64 0:1.8.8-8.el7                           python-javapackages.noarch 0:3.4.1-11.el7  

  python-lxml.x86_64 0:3.2.1-4.el7              ttmkfdir.x86_64 0:3.0.9-42.el7                    tzdata-java.noarch 0:2019a-1.el7                              xorg-x11-font-utils.x86_64 1:7.5-21.el7    

  xorg-x11-fonts-Type1.noarch 0:7.5-9.el7     




Complete!

 

 

Create a directory and copy the application:

 

cd ~




[root@vmlinuxmachine opc]# mkdir javaBLOG




[root@vmlinuxmachine opc]# git clone  https://github.com/brunors/appte.git/javaBLOG.git





 

 

Download and install the JDBC drivers in your provisioned environment:

 

[root@vmlinuxmachine lib]# tar xzfv ojdbc8-full.tar.gz

ojdbc8-full/

ojdbc8-full/ojdbc8.jar

ojdbc8-full/oraclepki.jar

ojdbc8-full/osdt_core.jar

ojdbc8-full/xdb6.jar

ojdbc8-full/ons.jar

ojdbc8-full/ojdbc.policy

ojdbc8-full/orai18n.jar

ojdbc8-full/osdt_cert.jar

ojdbc8-full/README.txt

ojdbc8-full/ucp.jar

ojdbc8-full/simplefan.jar

 

 

 

[root@techdatabasketblog]# sudo ssh -i /path_to/sshkeys/id_rsa [email protected]

Enter passphrase for key '/root/.ssh/id_rsa':

[opc@vmlinuxmachine ~]$ cd javaBLOG

[opc@vmlinuxmachine javaBLOG]$

 
Copy and unzip your database wallet to the application directory on the server:
 

 

[opc@vmlinuxmachine ~]$unzip /tmp/wallet/Wallet_techdatabasket.zip -d /wallet_APPtechdatabasket/

Archive:  /tmp/wallet/Wallet_techdatabasket.zip

  inflating: /wallet_APPtechdatabasket/cwallet.sso 

  inflating: /wallet_APPtechdatabasket/tnsnames.ora 

  inflating: /wallet_APPtechdatabasket/truststore.jks 

  inflating: /wallet_APPtechdatabasket/ojdbc.properties 

  inflating: /wallet_APPtechdatabasket/sqlnet.ora 

  inflating: /wallet_APPtechdatabasket/ewallet.p12 

  inflating: /wallet_APPtechdatabasket/keystore.jks 

[root@techdatabasketblog wallet_APPtechdatabasket]#

 
Change the sqlnet.ora file values to the following:
 
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY=$TNS_ADMIN)))

SSL_SERVER_DN_MATCH=yes

 
Set the TNS_ADMIN variable to the wallet path:
 
export TNS_ADMIN=/home/opc/javaBLOG

 
 
Compile the Java application:

 

cd /home/opc/javaBLOG/src
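
The compile command itself is not shown in the original; assuming the source file is javaAPP.java (matching the class name executed below), it would be along these lines:

javac -cp .:/home/opc/javaBLOG/lib/ojdbc8-full/ojdbc8.jar javaAPP.java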

 

Execute the Java application and connect to the ATP database:

 

java -cp .:/home/opc/javaBLOG/lib/ojdbc8-full/ojdbc8.jar javaAPP

 

This finishes the series of articles on a simple approach to Oracle Autonomous Database and DevOps. The idea for this series came from my participation in Oracle Code Rome 2019 in Italy, where I gained a lot of knowledge that made me explore the DevOps field further with the ATP database. As mentioned in the first article, we started with the Docker setup, continued in the following articles with Oracle Cloud database and server provisioning, and we end with a brief demonstration of microservices and Java with the ATP database. However, there are many features that can still be explored within Oracle Cloud, so create your account and explore the Oracle Cloud world.

 

 

References:
 
https://www.oracle.com/technetwork/database/availability/trn5515-microserviceswithoracle-5187372.pdf
 
https://microservices.io/
https://nodejs.dev/
 
https://blogs.oracle.com/dbcs/building-microservices-on-oracle-autonomous-transaction-processing-service-v2
 
https://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html
 
https://github.com/cloudsolutionhubs
 
https://www.oracle.com/technetwork/database/application-development/jdbc/documentation/atp-5073445.html
 
https://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/apaas/java/getting-started-java-accs/getting-started-java-accs.html

 

 

Bruno Reis da Silva is a Database Cloud Support Engineer and professionally certified Oracle Database Administrator who has worked on the South American continent and is now working on the European continent. He was awarded the first Oracle ACE Associate of Hungary in 2017. His interests are in RDBMS, mainly Oracle, operating systems (Linux, AIX, HP-UX and Solaris) and High Availability Solutions implementations. In his free time he enjoys playing sports, going to the gym and traveling. His blog www.techdatabasket.com is dedicated to his sister Laura Vitoria, and the main reason for blogging is because he believes in the quote “giving back to get back”. He also enjoys continuous learning and the interaction with others who share his interest.

 

Carlos Magno de Andrade Júnior is a Database Architect at eProseed Europe, with more than 15 years of experience with Oracle databases on complex projects in countries such as Brazil, India, the Netherlands, Luxembourg, France and Italy, having worked for companies such as Oracle Corporation, IBM and HSBC, among others. He also shares information on his blog ezdba.wordpress.com. Certifications: OCM 12c, OCP 10g, 11g and 12c, OCE RAC, Exadata, ITIL and OEM 12c Cloud Control Implementation Specialist.


OBSERVERS, MORE THAN ONE
Category: Database Author: Fernando Simon (Board Member) Date: 6 years ago Comments: 0


Recently I made a post about a little issue that I hit with Oracle Data Guard. In that scenario, because of an outage in the standby datacenter, a healthy primary database shut down with the error “ORA-16830: primary isolated…”. Just to remember: the database was running with Maximum Availability, Fast-Start Failover enabled and (the most important detail) the Observer was running in the standby datacenter too.
The point of my post (which you can read here) was to show that there is no doc that provides full details about the “pros” and “cons” of where to put your observer. Whatever the place, in the primary datacenter or in the standby, there are little details to check. Even the best (ideal) scenario, with a third datacenter, can be tough to sustain.
Here I will try to show one option that can help you improve the reliability of your environment. At least you will have more options to decide how to protect your database. Below I show some details about how to configure and use multiple observers, but if you want to see a little concern about this approach you can go directly to the end of the post.

 

 

More than one

Basically, you can add more than one observer to protect your DG environment. It is simple to configure; this is available since 12.2, and you can have up to three observers. To configure one, you just need to (in the simplest way):
  1. Install the default Oracle Client infrastructure.
  2. Add TNS entries for both sides (see the sketch after this list).
  3. Open the DGMGRL.
  4. Call “start observer” command.
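For step 2, a hedged tnsnames.ora sketch using the names from this lab (the hostnames dbprim_dg and dbstb_dg are illustrative):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbprim_dg)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl)))

ORCLS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbstb_dg)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcls)))
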
Check how easy it is:

 

[oracle@dbobss ~]$ dgmgrl sys/oracle@orcls

DGMGRL for Linux: Release 12.2.0.1.0 - Production on Sun May 5 16:30:58 2019




Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.




Welcome to DGMGRL, type "help" for information.

Connected to "orcls"

Connected as SYSDBA.

DGMGRL> start observer

[W000 05/05 16:31:40.34] FSFO target standby is orcls

[W000 05/05 16:31:42.53] Observer trace level is set to USER

[W000 05/05 16:31:42.53] Try to connect to the primary.

[W000 05/05 16:31:42.53] Try to connect to the primary orcl.

[W000 05/05 16:31:42.54] The standby orcls is ready to be a FSFO target

[W000 05/05 16:31:42.54] Reconnect interval expired, create new connection to primary database.

[W000 05/05 16:31:42.54] Try to connect to the primary.

[W000 05/05 16:31:43.68] Connection to the primary restored!

[W000 05/05 16:31:44.68] Disconnecting from database orcl.

 

When using multiple observers you can have up to three observers at the same time. Only one master observer exists, and it is responsible for fast-start failover and for protecting the system. If you lose the master observer, the Broker/primary/standby decide which one will be the next master observer. Up to and including 19c, they do not work in quorum (or anything like it, such as a voting system to decide the role switch) to protect the DG.
The interesting part about multiple observers is that they give you another way to customize your environment. Remember, in my first post I reported the complexity (based on pros and cons) of choosing the best place to put the observer. Now, with multiple observers, you can put one in each datacenter and switch between them when you want to protect one side or the other.
Now, my example environment is two databases and three observers:

 

Note that I have one in each datacenter and one in an external place. And inside the Broker you can see:

 

DGMGRL> show observer




Configuration - dgconfig




  Primary:            orcl

  Target:             orcls




Observer "dbobss" - Master




  Host Name:                    dbobss

  Last Ping to Primary:         1 second ago

  Last Ping to Target:          1 second ago




Observer "dbobsp" - Backup




  Host Name:                    dbobsp

  Last Ping to Primary:         1 second ago

  Last Ping to Target:          0 seconds ago




Observer "dbobst" - Backup




  Host Name:                    dbobst

  Last Ping to Primary:         2 seconds ago

  Last Ping to Target:          2 seconds ago


 

In case of failure, the Broker/primary/standby decide which one will be the next master observer. The decision occurs after 30 seconds and needs to be coordinated/communicated and agreed by both primary and standby. Unfortunately, there is no way to reduce this 30-second check.
In my environment, I shut down the machine running the master observer (dbobss), and the Broker log (on the primary) shows:

 

05/05/2019 17:15:34

FSFP: FSFO SetState(st=43 "SET SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

Data Guard Broker initiated a master observer switch since the current master observer cannot reach the primary database

FSFP: FSFO SetState(st=12 "SET OBID", fl=0x0 "", ob=0x32cc2ad6, tgt=0, v=0)

Succeeded in switching master observer from observer 'dbobss' to 'dbobsp'

FSFP: FSFO SetState(st=44 "CLR SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

FSFP: FSFO SetState(st=16 "UNOBSERVED", fl=0x0 "", ob=0x0, tgt=0, v=0)

Master observer begins pinging this instance

Fore: FSFO SetState(st=15 "OBSERVED", fl=0x0 "", ob=0x0, tgt=0, v=0)

 

And in the broker log for standby:

 

05/05/2019 17:15:34

drcx: FSFO SetState(st=16 "UNOBSERVED", fl=0x0 "", ob=0x0, tgt=0, v=0)

drcx: FSFO SetState(st=43 "SET SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

05/05/2019 17:15:37

drcx: FSFO SetState(st=15 "OBSERVED", fl=0x0 "", ob=0x0, tgt=0, v=0)

drcx: FSFO SetState(st=44 "CLR SWOB INPRG", fl=0x0 "", ob=0x0, tgt=0, v=0)

drcx: FSFO SetState(st=12 "SET OBID", fl=0x0 "", ob=0x32cc2ad6, tgt=0, v=0)

05/05/2019 17:15:39

Master observer begins pinging this instance

 

Look in the logs: both primary and standby agreed on the change. After the failure you see the events SET SWOB INPRG (switch observer in progress), SET OBID (set observer ID) and CLR SWOB INPRG (clear switch observer in progress), confirming that the UNOBSERVED state was detected. You can see here the output when you use the support trace level for the Broker. It is interesting to note that inside the Broker the faulty observer does not disappear after the failure:

 

DGMGRL> show observer




Configuration - dgconfig




  Primary:            orcl

  Target:             orcls




Observer "dbobsp" - Master




  Host Name:                    dbobsp

  Last Ping to Primary:         1 second ago

  Last Ping to Target:          3 seconds ago




Observer "dbobss" - Backup




  Host Name:                    dbobss

  Last Ping to Primary:         256 seconds ago

  Last Ping to Target:          221 seconds ago




Observer "dbobst" - Backup




  Host Name:                    dbobst

  Last Ping to Primary:         1 second ago

  Last Ping to Target:          5 seconds ago




DGMGRL>

 

After you reinstate your observer and it comes back, you can simply set the master observer to the desired one:

 

DGMGRL> set masterobserver to dbobss;

Sent the proposed master observer to the data guard broker configuration. Please run SHOW OBSERVER to see if master observer switch actually happens.

DGMGRL>

 

Hierarchy

When you use multiple observers you have more control over how to protect your DG: you can have one observer in each site and choose the side that you want to protect. You can write a script that checks the database role on the observer side and changes the master to protect the desired database role.
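A hedged sketch of such a script (not from the original post; the credentials and aliases are the lab values used in this series):

#!/bin/bash
# If the database reachable through alias 'orcl' is no longer the primary,
# propose moving the master observer to the one on the other side (dbobsp).
ROLE=$(dgmgrl -silent sys/oracle@orcl "show database orcl" | awk '/Role:/ {print $2}')
if [ "$ROLE" != "PRIMARY" ]; then
  dgmgrl -silent sys/oracle@orcls "set masterobserver to dbobsp"
fi
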
Remember my previous post. If you choose to protect the primary (with the observer in the same datacenter) and your entire datacenter fails, FSFO does not occur because the standby does not decide alone. If you choose to protect the standby (with the observer in the same datacenter), a datacenter/network failure on the standby side can lead to a complete shutdown of a healthy primary database because it becomes “isolated”.
Since multiple observers continue to use a hierarchical decision, the decision remains with only one observer. Even if you have multiple observers, say three with one on each side, if you put the master observer in the same site as the standby and they become isolated, they still decide alone, and because of FSFO the primary still shuts down because it thinks it is isolated. Even if it continues to receive connections from the other two observers.
Because of the current design, even if you set “FastStartFailoverThreshold” to 240, the automatic switch of the master observer does not occur, because the standby side cannot be reached to confirm the change. Maybe in future versions (20, 21…) we will see a change in this design, with a voting/quorum method used to decide the role change for FSFO when you use multiple observers. Of course, even a quorum approach can lead to problems if you put two observers in the same datacenter, but it can mitigate problems in some cases.
In my next post I will dig more into this, with some examples and log/trace analyses. You will see what happens when the standby is isolated and you use multiple observers.

 

 

Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer’s positions, strategies, or opinions. The information here was edited to be useful for general purposes; specific data and identifications were removed so it can reach a generic audience and be useful for the community.”

