Monday, April 25, 2016


Oracle GoldenGate Tutorial 5 – configuring online change synchronization

In our earlier tutorial, we examined how to create a GoldenGate environment for initial data capture and load.
In this tutorial, we will see how, using GoldenGate change synchronization, changes that occur on the source (Oracle 11g on Linux) are applied in near real time on the target (Oracle 11g on AIX). The source table is the EMP table in the SCOTT schema, which is replicated to the EMP table in the SH schema of the target database.
These are the steps that we will take:
Create a GoldenGate Checkpoint table
Create an Extract group
Create a parameter file for the online Extract group
Create a Trail
Create a Replicat group
Create a parameter file for the online Replicat group
Create the GoldenGate Checkpoint table
GoldenGate maintains its own checkpoints, which record a known position in the trail file from which the Replicat process will resume processing after any kind of error or shutdown; this ensures data integrity. A record of these checkpoints is kept either in files stored on disk or in a table in the database, which is the preferred option.
We can also create a single Checkpoint table which can be used by all Replicat groups from one or many GoldenGate instances.
In one of the earlier tutorials we created the GLOBALS file. We now need to edit that GLOBALS file via the EDIT PARAMS command and add a CHECKPOINTTABLE entry specifying the checkpoint table name, which will then be available to all Replicat processes.
GGSCI (devu007) 2> EDIT PARAMS ./GLOBALS
GGSCHEMA GGS_OWNER
CHECKPOINTTABLE GGS_OWNER.CHKPTAB
GGSCI (devu007) 4> DBLOGIN USERID ggs_owner, PASSWORD ggs_owner
Successfully logged into database.
GGSCI (devu007) 6> ADD CHECKPOINTTABLE GGS_OWNER.CHKPTAB
Successfully created checkpoint table GGS_OWNER.CHKPTAB.
apex:/u01/oracle/software/goldengate> sqlplus ggs_owner/ggs_owner

SQL*Plus: Release 11.1.0.6.0 - Production on Mon Feb 8 09:02:19 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> desc chkptab

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NAME                                NOT NULL VARCHAR2(8)
 GROUP_KEY                                 NOT NULL NUMBER(19)
 SEQNO                                              NUMBER(10)
 RBA                                       NOT NULL NUMBER(19)
 AUDIT_TS                                           VARCHAR2(29)
 CREATE_TS                                 NOT NULL DATE
 LAST_UPDATE_TS                            NOT NULL DATE
 CURRENT_DIR                               NOT NULL VARCHAR2(255)
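If you script your environments, the two GGSCI steps above can be captured in files rather than typed interactively. Below is a minimal sketch; the scratch GG_HOME default and the OBEY file name are illustrative assumptions, not part of the tutorial.

```shell
#!/bin/sh
# Sketch: write the GLOBALS file and an OBEY script that registers the
# checkpoint table, instead of typing the commands in GGSCI.
# GG_HOME defaults to a scratch directory here for illustration; point it
# at your real GoldenGate home before using it.
GG_HOME="${GG_HOME:-./gg_home}"
mkdir -p "$GG_HOME/dirprm"

# GLOBALS lives in the GoldenGate home and has no file extension.
cat > "$GG_HOME/GLOBALS" <<'EOF'
GGSCHEMA GGS_OWNER
CHECKPOINTTABLE GGS_OWNER.CHKPTAB
EOF

# An OBEY file replays the same GGSCI commands non-interactively.
cat > "$GG_HOME/dirprm/add_chkpt.oby" <<'EOF'
DBLOGIN USERID ggs_owner, PASSWORD ggs_owner
ADD CHECKPOINTTABLE GGS_OWNER.CHKPTAB
EOF

echo "wrote $GG_HOME/GLOBALS and $GG_HOME/dirprm/add_chkpt.oby"
# To apply on a real installation:
#   cd $GG_HOME && echo "OBEY dirprm/add_chkpt.oby" | ./ggsci
```

Running the OBEY file through ggsci produces the same "Successfully created checkpoint table" result shown above.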
Create the Online Extract Group
GGSCI (redhat346.localdomain) 1> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
EXTRACT added.
Create the Trail
We now create a trail. Note that this path pertains to the GoldenGate software location on the target system; this is where the trail files, with the prefix 'rt', will be created. These files will also be used by the Replicat process running on the target system.
GGSCI (redhat346.localdomain) 2> ADD RMTTRAIL /u01/oracle/software/goldengate/dirdat/rt, EXTRACT ext1
RMTTRAIL added.
Create a parameter file for the online Extract group ext1
GGSCI (redhat346.localdomain) 3> EDIT PARAMS ext1
EXTRACT ext1
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST devu007, MGRPORT 7809
RMTTRAIL /u01/oracle/software/goldengate/dirdat/rt
TABLE scott.emp;
ON TARGET SYSTEM
Create the online Replicat group
GGSCI (devu007) 7> ADD REPLICAT rep1, EXTTRAIL /u01/oracle/software/goldengate/dirdat/rt
REPLICAT added.
Note that the EXTTRAIL location, which is on the target's local filesystem, matches the RMTTRAIL parameter we used when we created the parameter file for the Extract process on the source system.
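A mismatch between these two paths is a common cause of a Replicat that starts but never finds data. A quick sketch of a consistency check follows; the parameter file is recreated inline purely for illustration, whereas in practice you would read the real ext1.prm from the source system's dirprm directory.

```shell
#!/bin/sh
# Sketch: check that the Extract's RMTTRAIL and the Replicat's EXTTRAIL
# point at the same trail prefix.
cat > ext1.prm <<'EOF'
EXTRACT ext1
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST devu007, MGRPORT 7809
RMTTRAIL /u01/oracle/software/goldengate/dirdat/rt
TABLE scott.emp;
EOF

# Trail prefix the Replicat was registered with (ADD REPLICAT ... EXTTRAIL ...).
exttrail=/u01/oracle/software/goldengate/dirdat/rt

# Pull the RMTTRAIL value out of the Extract parameter file.
rmttrail=$(awk '/^RMTTRAIL/ {print $2}' ext1.prm)

if [ "$rmttrail" = "$exttrail" ]; then
  echo "trail prefixes match: $rmttrail"
else
  echo "MISMATCH: extract writes $rmttrail, replicat reads $exttrail" >&2
  exit 1
fi
```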
Create a parameter file for the online Replicat group, rep1
GGSCI (devu007) 8> EDIT PARAMS rep1
REPLICAT rep1
ASSUMETARGETDEFS
USERID ggs_owner, PASSWORD ggs_owner
MAP scott.emp, TARGET sh.emp;
ON SOURCE
Start the Extract process
GGSCI (redhat346.localdomain) 16> START EXTRACT ext1
Sending START request to MANAGER ...
EXTRACT EXT1 starting
GGSCI (redhat346.localdomain) 17> STATUS EXTRACT ext1
EXTRACT EXT1: RUNNING
GGSCI (redhat346.localdomain) 16> INFO EXTRACT ext1
EXTRACT EXT1 Last Started 2010-02-08 14:27 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:09 ago)
Log Read Checkpoint Oracle Redo Logs
2010-02-08 14:27:48 Seqno 145, RBA 724480
ON TARGET
Start the Replicat process
GGSCI (devu007) 1> START REPLICAT rep1
Sending START request to MANAGER ...
REPLICAT REP1 starting
GGSCI (devu007) 2> INFO REPLICAT rep1
REPLICAT REP1 Last Started 2010-02-08 14:55 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:01 ago)
Log Read Checkpoint File /u01/oracle/software/goldengate/dirdat/rt000001
2010-02-08 14:27:57.600425 RBA 1045
Note: the trail file has the prefix 'rt', which we defined earlier.
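To make the naming convention concrete: trail files in this setup are named with the two-character prefix plus a six-digit sequence number, so the 'rt' trail rolls through rt000000, rt000001, rt000002, and so on. The loop below just prints the first few names.

```shell
#!/bin/sh
# Sketch: print the first few trail file names generated from a
# two-character prefix and a six-digit, zero-padded sequence number.
prefix=rt
i=0
while [ $i -lt 3 ]; do
  printf '%s%06d\n' "$prefix" "$i"   # e.g. rt000000
  i=$((i + 1))
done
```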
LET US NOW TEST …
ON SOURCE
SQL> conn scott/tiger
Connected.
SQL> UPDATE emp SET sal=9999 WHERE ename='KING';
1 row updated.
SQL> COMMIT;
Commit complete.
ON TARGET
SQL> SELECT SAL FROM emp WHERE ename='KING';

       SAL
----------
      9999
Coming Next! – configuring GoldenGate Data Pump ...

Saturday, April 23, 2016

Creating bidirectional replication

Creating bidirectional replication (Simple)

We'll pick up from the previous recipe and configure the target host to also capture and deliver changes to the same set of tables on the source host.

Getting ready

Repeat steps 1 to 14 followed by steps 1 to 7 from the recipe Creating One-Way Replication (Simple). The rest of the steps prepare the target host to capture changes and deliver them to the applier on the source host.

How to do it...

The steps for bidirectional replication are as follows:
  1. Enable supplemental logging on the target database in order to capture appropriate database changes.
    In our previous recipe, we didn't have to enable supplemental logging on the target because it only applied changes; it did not capture them. In two-way replication, however, we propagate changes in both directions, as follows:
    SQL> select supplemental_log_data_min from v$database;
    
    We will get the following output:
    SUPPLEME
    -----------------
    NO
    
    The next set of commands to be executed is as follows:
    SQL> alter database add supplemental log data;
    SQL> select supplemental_log_data_min from v$database;
    
    We will get the following output:
    SUPPLEME
    -----------------
    YES
    
  2. Create a TNS entry in the database home so that the extract can connect to the ASM instance using the following command:
    $ cd $ORACLE_HOME/network/admin
    $ vi tnsnames.ora
    
    Add the following TNS entry:
    ASMGG =
      (DESCRIPTION =
         (ADDRESS =
            (PROTOCOL = IPC)
            (key=EXTPROC1521) 
         )
         (CONNECT_DATA=
           (SID=+ASM)
         )
       )
    Save the file and exit.
  3. Create a user asmgg with the sysdba role in the ASM instance using the following command:
    $ sqlplus sys/<password>@asmgg as sysasm
    
    The output for the preceding command will be as follows:
    SQL*Plus: Release 11.2.0.3.0 Production on Thu Nov 15 14:24:20 2012
    Copyright (c) 1982, 2011, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Automatic Storage Management option
    
    Then we execute the following two commands:
    SQL> create user asmgg identified by asmgg ;
    
    The output for the preceding command will be as follows:
    User created.
    
    and
    SQL> grant sysdba to asmgg ;
    
    The output for the preceding command will be as follows:
    Grant succeeded.
    
  4. Let's add supplemental logging to the tables using the following commands:
    $ cd /u01/app/oracle/goldengate
    $ ./ggsci
    GGSCI> add trandata scott.tcustmer
    
    The output for the preceding commands will be as follows:
    Logging of supplemental redo data enabled for table SCOTT.TCUSTMER.
    
    GGSCI> add trandata scott.tcustord
    
    The output for the preceding command is as follows:
    Logging of supplemental redo data enabled for table SCOTT.TCUSTORD.
    
    GGSCI> info trandata scott.tcustmer
    
    The output for the preceding command is as follows:
    Logging of supplemental redo log data is enabled for table SCOTT.TCUSTMER.
    
    GGSCI> info trandata scott.tcustord
    
  5. Create the extract parameter file for data capture using the following command:
    $ cd /u01/app/oracle/goldengate/dirprm
    $ vi ex01sand.prm
    
    Add the following lines to the file:
    EXTRACT ex01sand
    
    SETENV (ORACLE_SID="TGT101")
    SETENV (ORACLE_HOME="/u01/app/oracle/product/11.2.0/db_1")
    SETENV (NLS_LANG="AMERICAN_AMERICA.AL32UTF8")
    
    USERID ogg, PASSWORD ogg
    
    TRANLOGOPTIONS EXCLUDEUSER ogg
    TRANLOGOPTIONS ASMUSER asmgg@ASMGG ASMPASSWORD asmgg
    
    -- Trail File location locally
    
    EXTTRAIL /u01/app/oracle/goldengate/dirdat/pr
    
    DISCARDFILE /u01/app/oracle/goldengate/dirrpt/ex01sand.dsc, PURGE
    DISCARDROLLOVER AT 01:00 ON SUNDAY
    
    TABLE   SCOTT.TCUSTMER ;
    TABLE   SCOTT.TCUSTORD ;
    Save the file and exit.
  6. Let's add the Extract process and start it by using the following commands:
    $ cd /u01/app/oracle/goldengate
    $ ./ggsci
    GGSCI> add extract ex01sand tranlog begin now
    
    The output for the preceding command is as follows:
    EXTRACT added.
    
    GGSCI> add exttrail /u01/app/oracle/goldengate/dirdat/pr extract ex01sand megabytes 2
    
    The output for the preceding command is as follows:
    EXTTRAIL added.
    
    GGSCI> start ex01sand
    
    The output for the preceding command is as follows:
    Sending START request to MANAGER ...
    EXTRACT EX01SAND starting
    
    GGSCI> info all
    
    The output for the preceding command is as follows:
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt
    MANAGER     RUNNING
    EXTRACT     RUNNING     EX01SAND    00:00:00      00:00:06
    REPLICAT    RUNNING     RE01SAND    00:00:00      00:00:07
    
  7. Next we'll create the data pump parameter file using the following commands:
    $ cd /u01/app/oracle/goldengate/dirprm
    $ vi pp01sand.prm
      
    
    Add the following lines to the file:
    EXTRACT pp01sand
    
    PASSTHRU
    
    RMTHOST hosta MGRPORT 7809
    RMTTRAIL /u01/app/oracle/gg/dirdat/pa
    
    DISCARDFILE /u01/app/oracle/goldengate/dirrpt/pp01sand.dsc, PURGE
    
    -- Tables for transport
    
    TABLE   SCOTT.TCUSTMER ;
    TABLE   SCOTT.TCUSTORD ;
    
    Save the file and exit.
  8. Add the data pump process and final configuration on the target host using the following commands:
    GGSCI> add extract pp01sand exttrailsource /u01/app/oracle/goldengate/dirdat/pr
    
    The output for the preceding command is as follows:
    EXTRACT added.
    
    GGSCI> add rmttrail /u01/app/oracle/gg/dirdat/pa extract pp01sand megabytes 2
    
    The output for the preceding command is as follows:
    RMTTRAIL added.
    
    GGSCI> start pp01sand
    
    The output for the preceding command is as follows:
    Sending START request to MANAGER ...
    EXTRACT PP01SAND starting
    
    GGSCI> info all
    
    The output for the preceding command is as follows:
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt
    MANAGER     RUNNING
    EXTRACT     RUNNING     EX01SAND    00:00:00      00:00:06
    EXTRACT     RUNNING     PP01SAND    00:00:00      00:00:02
    REPLICAT    RUNNING     RE01SAND    00:00:00      00:00:07
    
  9. Next, we'll move on to the source server and create the REPLICAT parameter file:
    $ cd /u01/app/oracle/gg/dirprm
    $ vi re01sand.prm
    
    Add the following lines to the file:
    REPLICAT re01sand
    
    SETENV (ORACLE_SID="SRC100")
    SETENV (ORACLE_HOME="/u01/app/oracle/product/11.1.0/db_1")
    SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
    
    USERID ogg PASSWORD ogg
    
    DISCARDFILE /u01/app/oracle/gg/dirrpt/re01sand.dsc, APPEND
    DISCARDROLLOVER at 01:00
    
    ReportCount Every 30 Minutes, Rate
    REPORTROLLOVER at 01:30
    
    DBOPTIONS SUPPRESSTRIGGERS DEFERREFCONST
    ASSUMETARGETDEFS
    
    MAP SCOTT.TCUSTMER , TARGET SCOTT.TCUSTMER ;
    MAP SCOTT.TCUSTORD,  TARGET SCOTT.TCUSTORD ;
    
    Save the file and exit.
  10. Now we're ready to complete our two-way replication by adding the Replicat process to apply the incoming changes.
    Add and start the Replicat using the following commands:
    $ cd ..
    $ ./ggsci
    GGSCI> add replicat re01sand exttrail /u01/app/oracle/gg/dirdat/pa checkpointtable ogg.chkpt
    
    GGSCI> start re01sand
    
    The output for the preceding command is as follows:
    Sending START request to MANAGER ...
    REPLICAT RE01SAND starting
    
    GGSCI> info all
    
    The output for the preceding command is as follows:
    Program     Status      Group       Lag at Chkpt  Time Since Chkpt
    MANAGER     RUNNING
    EXTRACT     RUNNING     EX01SAND    00:00:00      00:00:07
    EXTRACT     RUNNING     PP01SAND    00:00:00      00:00:03
    REPLICAT    RUNNING     RE01SAND    00:00:00      00:00:06
    
  11. Next let's validate that an insert will propagate from hostb to hosta.
    Perform the following actions on hostb:
    SQL> insert into scott.tcustmer values ('Tony','Ontario Inc','Toronto','ON') ;
    SQL> commit ;
    SQL> select * from scott.tcustmer;
    
    The output for the preceding command is as follows:
    CUST        NAME                  CITY          ST
    --------    ------------------    ----------    ----
    WILL        BG SOFTWARE CO.       SEATTLE       WA
    JANE        ROCKY FLYER INC.      DENVER        CO
    Tony        Ontario Inc           Toronto       ON
    
  12. Moving on to hosta, we execute the following command to ensure our changes have been received and applied to the database:
    SQL> select * from scott.tcustmer ;
    
    The output for the preceding command is as follows:
    CUST        NAME                  CITY           ST
    --------    ------------------    ----------     ----
    WILL        BG SOFTWARE CO.       SEATTLE        WA
    JANE        ROCKY FLYER INC.      DENVER         CO
    Tony        Ontario Inc           Toronto        ON
    
    Hence, we conclude that the insert was received and applied.

How it works...

Once again, we needed to add supplemental logging at the target host to both tables, scott.tcustmer and scott.tcustord, in order to add additional data to the redo stream. This was not necessary when these tables were subject to delivery only.
At hostb we already had a Manager and a Replicat process. We needed to configure an Extract for data capture, to start scanning the online redo logs and writing committed transactions out to the trail files. We've kept the same two-letter trail prefix, pr, as we used on hosta; remember that the prefix can be any two letters. The name of the data-capture Extract is also the same as on hosta. I did this just for simplicity's sake; it does not have to be the same. You ought to come up with a naming standard in your own organization for naming Extracts and Replicats.
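One line in the capture Extract deserves a call-out in a bidirectional setup: TRANLOGOPTIONS EXCLUDEUSER ogg stops the Extract from recapturing rows that the local Replicat (which also connects as ogg) has just applied, which would otherwise loop changes back and forth between the sites. A small sanity check is sketched below; the inline parameter file and the PRM default path are illustrative, so point PRM at your real dirprm/ex01sand.prm instead.

```shell
#!/bin/sh
# Sketch: verify that a capture Extract in a bidirectional pair excludes
# the GoldenGate apply user, so locally replicated changes are not
# captured again and looped back to the other site.
PRM="${PRM:-./ex01sand.prm}"
cat > "$PRM" <<'EOF'
EXTRACT ex01sand
USERID ogg, PASSWORD ogg
TRANLOGOPTIONS EXCLUDEUSER ogg
EXTTRAIL /u01/app/oracle/goldengate/dirdat/pr
TABLE SCOTT.TCUSTMER ;
EOF

if grep -qi 'TRANLOGOPTIONS.*EXCLUDEUSER' "$PRM"; then
  echo "OK: $PRM excludes the replication user"
else
  echo "WARNING: $PRM has no EXCLUDEUSER - risk of a replication loop" >&2
  exit 1
fi
```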
Now that we've started capturing data, we need a pump to ship it to hosta. Again, I've chosen the same pump name for illustration purposes. Here we need to be a little more careful in choosing the remote trail name prefix as follows:
add rmttrail /u01/app/oracle/gg/dirdat/pa extract pp01sand megabytes 2
I've chosen the prefix "pa". You must ensure that you don't clobber any files on the remote host by reusing a prefix that is already in use, as this may corrupt the trail files there. The point is: always choose unique prefixes for pumps, especially when multiple pumps ship trail files to the same directory location.
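A cheap precaution is to look in the remote dirdat directory for the prefix before running ADD RMTTRAIL. The sketch below stages a fake dirdat with a 'pr' trail already present; DIRDAT and the sample files are stand-ins for the real remote host.

```shell
#!/bin/sh
# Sketch: before adding a new RMTTRAIL, check that its two-character
# prefix is not already in use in the remote dirdat directory.
DIRDAT="${DIRDAT:-./dirdat}"
mkdir -p "$DIRDAT"
touch "$DIRDAT/pr000000" "$DIRDAT/pr000001"   # pretend 'pr' is already taken

new_prefix=pa
if ls "$DIRDAT/${new_prefix}"* >/dev/null 2>&1; then
  echo "prefix '$new_prefix' already in use in $DIRDAT - pick another" >&2
  exit 1
else
  echo "prefix '$new_prefix' is free"
fi
```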
Finally, back on hosta, the only process missing is the replicat which completes the multi-master implementation.
In this multi-master implementation you need to consider the possibility of collisions, such as the same record being inserted or deleted simultaneously at both sites, which may violate constraints. In a real-world deployment you might generate the primary key from one sequence on the source and a different sequence on the target so that collisions are minimized. Another alternative is to partition a numeric key value by range, using different ranges on the source and target, to avoid collisions. Finally, if you know the application code can enforce a business rule that guarantees the source and target never collide, the implementation is trivial, as the application will decide and manage the conflicts. Another important design factor is to define Primary Key or Unique Key constraints on all objects being replicated; otherwise OGG will use all table columns to determine uniqueness.
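The interleaved-sequence idea can be made concrete with odd values on one site and even values on the other. The sketch below writes the illustrative DDL for both sites; the sequence name and the choice of table are assumptions, not part of the recipe.

```shell
#!/bin/sh
# Sketch: generate per-site sequence DDL so primary-key values never
# overlap in a two-site multi-master setup - odd keys on hosta, even
# keys on hostb, both stepping by 2.
cat > seq_hosta.sql <<'EOF'
-- hosta: odd primary-key values (1, 3, 5, ...)
CREATE SEQUENCE scott.tcustord_seq START WITH 1 INCREMENT BY 2;
EOF

cat > seq_hostb.sql <<'EOF'
-- hostb: even primary-key values (2, 4, 6, ...)
CREATE SEQUENCE scott.tcustord_seq START WITH 2 INCREMENT BY 2;
EOF

echo "wrote seq_hosta.sql and seq_hostb.sql"
```

Run each file in sqlplus on its own site; as long as every insert takes its key from the local sequence, the two sites can never generate the same value.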