
17 Monitoring Oracle GoldenGate Processing

This chapter describes the monitoring of Oracle GoldenGate processing.

Topics:

17.1 Using the Information Commands in GGSCI

The primary way to view processing information is through GGSCI. For more information about these commands, see Reference for Oracle GoldenGate.

Table 17-1 Commands to View Process Information

Command What it shows

INFO {EXTRACT | REPLICAT} group [DETAIL]

Run status, checkpoints, approximate lag, and environmental information.

INFO MANAGER

Run status and port number

INFO ALL

INFO output for all Oracle GoldenGate processes on the system

STATS {EXTRACT | REPLICAT} group

Statistics on processing volume, such as number of operations performed.

STATUS {EXTRACT | REPLICAT} group

Run status (starting, running, stopped, abended)

STATUS MANAGER

Run status

LAG {EXTRACT | REPLICAT} group

Latency between last record processed and timestamp in the data source

INFO {EXTTRAIL | RMTTRAIL} trail

Name of associated process, position of last data processed, maximum file size

SEND MANAGER

Run status, information about child processes, port information, trail purge settings

SEND {EXTRACT | REPLICAT} group

Depending on the process and selected options, returns information about memory pool, lag, TCP statistics, long-running transactions, process status, recovery progress, and more.

VIEW REPORT group

Contents of the discard file or process report

VIEW GGSEVT

Contents of the Oracle GoldenGate error log

COMMAND ER wildcard

Information dependent on the COMMAND type:

INFO

LAG

SEND

STATS

STATUS

wildcard is a wildcard specification for the process groups to be affected, for example:

INFO ER ext*
STATS ER *

INFO PARAM

Queries for and displays static information.

GETPARAMINFO

Displays currently running parameter values.

17.2 Monitoring an Extract Recovery

If Extract abends when a long-running transaction is open, it can seem to take a long time to recover when it is started again. To recover its processing state, Extract must search back through the online and archived logs (if necessary) to find the first log record for that long-running transaction. The farther back in time that the transaction started, the longer the recovery takes, in general, and Extract can appear to be stalled.

To confirm that Extract is recovering properly, use the SEND EXTRACT command with the STATUS option. One of the following status notations appears, and you can follow the progress as Extract changes its log read position over the course of the recovery.

  • In recovery[1] – Extract is recovering to its checkpoint in the transaction log. This means it is reading from either:

    a) the BR checkpoint files and then the archived/online logs,

    or

    b) the recovery checkpoint in the archived/online logs.

  • In recovery[2] – Extract is recovering from its checkpoint to the end of the trail. If the last transaction was not completely written, a recovery marker is appended to the output trail and the transaction is rewritten.

  • Recovery complete – The recovery is finished, and normal processing resumes.
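The status notations above can be checked programmatically. The following is a minimal sketch that classifies the text returned by SEND EXTRACT ... STATUS; it assumes only that one of the three notations listed above appears somewhere in the output (the surrounding format varies by release):

```python
# Classify Extract recovery state from SEND EXTRACT ... STATUS output.
# Sketch only: we match the three status notations documented above and
# ignore the rest of the (release-dependent) output.

def recovery_phase(status_text: str) -> str:
    """Map SEND EXTRACT STATUS text to a human-readable recovery phase."""
    if "In recovery[1]" in status_text:
        return "recovering to checkpoint in the transaction log"
    if "In recovery[2]" in status_text:
        return "recovering from checkpoint to the end of the trail"
    if "Recovery complete" in status_text:
        return "recovery finished; normal processing resumes"
    return "not in recovery"

print(recovery_phase("EXTRACT EXTA ... In recovery[1] ..."))
```

Polling this classification over time lets you confirm that Extract is changing its log read position rather than stalling.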

17.3 Monitoring Lag

Lag statistics show you how well the Oracle GoldenGate processes are keeping pace with the amount of data that is being generated by the business applications. With this information, you can diagnose suspected problems and tune the performance of the Oracle GoldenGate processes to minimize the latency between the source and target databases. See Tuning the Performance of Oracle GoldenGate for help with tuning Oracle GoldenGate to minimize lag.

17.3.1 About Lag

For Extract, lag is the difference, in seconds, between the time that a record was processed by Extract (based on the system clock) and the timestamp of that record in the data source.

For Replicat, lag is the difference, in seconds, between the time that the last record was processed by Replicat (based on the system clock) and the timestamp of the record in the trail.

To view lag statistics, use either the LAG or SEND command in GGSCI. For more information, see Reference for Oracle GoldenGate.

Note:

The INFO command also returns a lag statistic, but this statistic is taken from the last record that was checkpointed, not the current record that is being processed. It is less accurate than the statistic returned by LAG or SEND.
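The lag definitions above amount to a simple subtraction. The sketch below illustrates the calculation with hypothetical timestamps (the function name and values are for illustration only):

```python
from datetime import datetime, timezone

def lag_seconds(processed_at: datetime, record_ts: datetime) -> float:
    """Lag = time the record was processed (system clock) minus the
    timestamp carried by the record in the data source or trail."""
    return (processed_at - record_ts).total_seconds()

# Hypothetical example: Extract processes at 12:00:05 a record whose
# source timestamp is 12:00:00 -> 5 seconds of lag.
record_ts = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
processed = datetime(2024, 1, 1, 12, 0, 5, tzinfo=timezone.utc)
print(lag_seconds(processed, record_ts))  # -> 5.0
```

Note that the result can be negative if the clocks of the two systems disagree, which is the same effect discussed for heartbeat lag later in this chapter.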

17.3.2 Controlling How Lag is Reported

Use the LAGREPORTMINUTES or LAGREPORTHOURS parameter to specify the interval at which Manager checks for Extract and Replicat lag. See Reference for Oracle GoldenGate.

Use the LAGCRITICALSECONDS, LAGCRITICALMINUTES, or LAGCRITICALHOURS parameter to specify a lag threshold that is considered critical, and to force a warning message to the error log when the threshold is reached. This parameter affects Extract and Replicat processes on the local system. See Reference for Oracle GoldenGate.

Use the LAGINFOSECONDS, LAGINFOMINUTES, or LAGINFOHOURS parameter to specify a lag threshold; if lag exceeds the specified value, Oracle GoldenGate reports lag information to the error log. If the lag exceeds the value specified with the LAGCRITICAL parameter, Manager reports the lag as critical; otherwise, it reports the lag as an informational message. A value of zero (0) forces a message at the frequency specified with the LAGREPORTMINUTES or LAGREPORTHOURS parameter. See Reference for Oracle GoldenGate.
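As an illustration of these parameters working together, here is a hypothetical Manager parameter file fragment (the values are examples only, not recommendations):

```
LAGREPORTMINUTES 5
LAGINFOSECONDS 30
LAGCRITICALMINUTES 2
```

With these settings, Manager checks lag every 5 minutes, reports lag over 30 seconds as an informational message, and reports lag over 2 minutes as critical.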

17.4 Using Automatic Heartbeat Tables to Monitor

You can use the default automatic heartbeat table functionality to monitor end-to-end replication lag. Automatic heartbeats are sent from each source database into the replication streams by updating the records in a heartbeat seed table and a heartbeat table, and by constructing a heartbeat history table. Each of the replication processes in the replication path processes these heartbeat records and updates the information in them. These heartbeat records are inserted or updated in the heartbeat table at the target databases.

The heartbeat tables contain the following information:

  • Source database

  • Destination database

  • Information about the outgoing replication streams:

    • Names of the Extract, pump/Distribution Server, and Replicat processes in the path

    • Timestamps when heartbeat records were processed by the replication processes.

  • Information about the incoming replication streams:

    • Names of the Extract, pump/Distribution Server, and Replicat processes in the path

    • Timestamps when heartbeat records were processed by the replication processes.

Using the information in the heartbeat table and the heartbeat history table, the current and historical lags in each of the replication paths can be computed.

In a bidirectional Oracle GoldenGate configuration, the heartbeat table has as many entries as there are replication paths to neighboring databases; in a unidirectional setup, the table at the source is empty. The outgoing columns have the timestamps and the outgoing path of the local Extract and the downstream GoldenGate processes. The incoming columns have the timestamps and path of the upstream GoldenGate processes and the local Replicat.

In a unidirectional configuration, the target database will populate only the incoming columns in the heartbeat table.

Note:

The Automatic Heartbeat functionality is not supported on MySQL version 5.5.

17.4.1 Understanding Heartbeat Table End-To-End Replication Flow

The flow for end-to-end replication, as it relates to heartbeat tables, relies on the Oracle GoldenGate 12.2.0.1 trail format and is as follows:

Ensure that the Self-Describing Trail Files functionality is enabled, see Using Self-Describing Trail Files.

Enable the heartbeat functionality with the ENABLE_HEARTBEAT_TABLE parameter. This is the default.

Add a heartbeat table to each of your databases with the ADD HEARTBEATTABLE command. Add the heartbeat table to all source and target instances, and then restart existing Oracle GoldenGate processes (not necessary for processes running against HP-OSS for MX) to enable heartbeat functionality. Depending on your specific database system, you may or may not be required to create or enable a job to populate heartbeat table data.

For Oracle Database, ensure that the Oracle DBMS_SCHEDULER is operating correctly, because the heartbeat update relies on it. You can query DBMS_SCHEDULER by issuing:

SELECT start_date, last_start_date, next_run_date
FROM dba_scheduler_jobs
WHERE job_name = 'GG_UPDATE_HEARTBEATS';

Then look for valid entries for NEXT_RUN_DATE, which is the next time the scheduler will run. If this is a timestamp in the past, then no job will run and you must correct it.

A common reason for the scheduler not working is when the parameter job_queue_processes is set too low (typically zero). Increase the number of job_queue_processes configured in the database with the ALTER SYSTEM SET JOB_QUEUE_PROCESSES = ##; command where ## is the number of job queue processes.

Run Extract, which, on receiving the logical change records (LCRs), checks the value in the OUTGOING_EXTRACT column.

  • If the Extract name matches this value, the OUTGOING_EXTRACT_TS column is updated and the record is entered in the trail.

  • If the Extract name does not match, the LCR is discarded.

  • If the OUTGOING_EXTRACT value is NULL, it is populated along with OUTGOING_EXTRACT_TS and the record is entered in the trail.

The pump or Distribution Server, on reading the record, checks the value in the OUTGOING_ROUTING_PATH column. This column holds a list of distribution paths.

If the value is NULL, the column is updated with the current group name (and path, if this is a Distribution Server) followed by "*", the OUTGOING_ROUTING_TS column is updated, and the record is written to the target trail file.

If the value contains a "*" in the list, the "*" is replaced with group name[:pathname],"*", the OUTGOING_ROUTING_TS column is updated, and the record is written to the target trail file. When the value does not contain an asterisk (*) and the pump name is in the list, the record is sent to the path specified in the relevant group name[:pathname] pair in the list. If the pump name is not in the list, the record is discarded.
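The routing rules above can be sketched as a small function. This is an illustration of the documented decision logic, not GoldenGate's implementation; it assumes the column holds a comma-separated list of names ending in a "*" placeholder:

```python
from typing import Optional, Tuple

# Sketch of the OUTGOING_ROUTING_PATH handling described above.
# "forward" means the record is written to the target trail file;
# "discard" means it is dropped.

def route(routing_path: Optional[str], pump_name: str) -> Tuple[Optional[str], str]:
    if routing_path is None:
        # NULL column: record this pump and leave "*" for the next hop.
        return f'{pump_name},"*"', "forward"
    tokens = routing_path.split(",")
    if '"*"' in tokens:
        # Replace the "*" placeholder with this pump's name plus a new "*".
        i = tokens.index('"*"')
        tokens[i:i + 1] = [pump_name, '"*"']
        return ",".join(tokens), "forward"
    if pump_name in tokens:
        return routing_path, "forward"   # send to the listed path
    return routing_path, "discard"       # pump not in the list

print(route(None, "PMP1"))        # -> ('PMP1,"*"', 'forward')
print(route('PMP1,"*"', "PMP2"))  # -> ('PMP1,PMP2,"*"', 'forward')
```

Each hop therefore appends its own name before the placeholder, so the final column records the full path the heartbeat took.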

Run Replicat, which, on receiving the record, checks the value in the OUTGOING_REPLICAT column.

  • If the Replicat name matches the value, the row in the heartbeat table is updated and the record is inserted into the history table.

  • If the Replicat name does not match, the record is discarded.

  • If the value is NULL, the rows in the heartbeat and heartbeat history tables are updated with an implicit invocation of the Replicat column mapping.

    Automatic Replicat Column Mapping:

    REMOTE_DATABASE        = LOCAL_DATABASE
    INCOMING_EXTRACT       = OUTGOING_EXTRACT
    INCOMING_ROUTING_PATH  = OUTGOING_ROUTING_PATH with "*" removed
    INCOMING_REPLICAT      = @GETENV ("GGENVIRONMENT", "GROUPNAME")
    INCOMING_HEARTBEAT_TS  = HEARTBEAT_TIMESTAMP
    INCOMING_EXTRACT_TS    = OUTGOING_EXTRACT_TS
    INCOMING_ROUTING_TS    = OUTGOING_ROUTING_TS
    INCOMING_REPLICAT_TS   = @DATE ('UYYYY-MM-DD HH:MI:SS.FFFFFF','JTSLCT',@GETENV ('JULIANTIMESTAMP'))
    LOCAL_DATABASE         = REMOTE_DATABASE
    OUTGOING_EXTRACT       = INCOMING_EXTRACT
    OUTGOING_ROUTING_PATH  = INCOMING_ROUTING_PATH
    OUTGOING_HEARTBEAT_TS  = INCOMING_HEARTBEAT_TS
    OUTGOING_REPLICAT      = INCOMING_REPLICAT

There is only one OUTGOING_ROUTING_TS column. If a record passes through multiple pumps before being applied by a Replicat, each pump overwrites the OUTGOING_ROUTING_TS column, so the calculated pump lag is not specific to a single pump; it is the lag across all of the pumps in the routing path.

Additional Considerations:

Computing lags as the heartbeat flows through the system relies on the clocks of the source and target systems being set correctly. The lag can be negative if the target system is ahead of the source system. The lag is shown as a negative number so that you are aware of the clock discrepancy and can take action to fix it.

The timestamp that flows through the system is in UTC. There is no time zone associated with the timestamp so when viewing the heartbeat tables, the lag can be viewed quickly even if different components are in different time zones. You can write any view you want on top of the underlying tables; UTC is recommended.

All the heartbeat entries are written to the trail in UTF-8.

The outgoing and incoming paths together uniquely determine a row. That is, two rows with the same outgoing path but different incoming paths are considered two unique entries.

Heartbeat Table Details

The GG_HEARTBEAT table displays timestamp information for the end-to-end replication time and the timing information at the different components: the primary and secondary Extract, and Replicat.

In a unidirectional environment, only the target database contains information about the replication lag, that is, the time between when a record is generated at the source database and when it becomes visible to clients at the target database.

Note:

The automatic heartbeat tables do not populate the OUTGOING_% columns with data when both the source and remote databases have the same name. To change a database name, use the DBNEWID utility. For details, see the DBNEWID Utility.

Table 17-2 GG_HEARTBEAT Table

Column Data Type Description

LOCAL_DATABASE

VARCHAR2

Local database where the replication time from the remote database is measured.

HEARTBEAT_TIMESTAMP

TIMESTAMP(6)

The point in time when a timestamp is generated at the remote database.

REMOTE_DATABASE

VARCHAR2

Remote database where the timestamp is generated

INCOMING_EXTRACT

VARCHAR2

Name of the primary Extract (capture) at the remote database

INCOMING_ROUTING_PATH

VARCHAR2

Name of the secondary Extract (pump) at the remote database

INCOMING_REPLICAT

VARCHAR2

Name of the Replicat on the local database.

INCOMING_HEARTBEAT_TS

TIMESTAMP(6)

Final timestamp when the information is inserted into the GG_HEARTBEAT table at the local database.

INCOMING_EXTRACT_TS

TIMESTAMP(6)

Timestamp when the generated timestamp is processed by the primary Extract at the remote database.

INCOMING_ROUTING_TS

TIMESTAMP(6)

Timestamp when the generated timestamp is processed by the secondary Extract at the remote database.

INCOMING_REPLICAT_TS

TIMESTAMP(6)

Timestamp when the generated timestamp is processed by Replicat at the local database.

OUTGOING_EXTRACT

VARCHAR2

Bidirectional/N-way replication: Name of the primary Extract on the local database.

OUTGOING_ROUTING_PATH

VARCHAR2

Bidirectional/N-way replication: Name of the secondary Extract on the local database.

OUTGOING_REPLICAT

VARCHAR2

Bidirectional/N-way replication: Name of the Replicat on the remote database.

OUTGOING_HEARTBEAT_TS

TIMESTAMP(6)

Bidirectional/N-way replication: Final timestamp when the information is inserted into the table at the remote database.

OUTGOING_EXTRACT_TS

TIMESTAMP(6)

Bidirectional/N-way replication: Timestamp when the generated timestamp is processed by the primary Extract on the local database.

OUTGOING_ROUTING_TS

TIMESTAMP(6)

Bidirectional/N-way replication: Timestamp when the generated timestamp is processed by the secondary Extract on the local database.

OUTGOING_REPLICAT_TS

TIMESTAMP(6)

Bidirectional/N-way replication: Timestamp when the generated timestamp is processed by Replicat on the remote database.

The GG_HEARTBEAT_HISTORY table displays historical timestamp information for the end-to-end replication time and the timing information at the different components: the primary and secondary Extract, and Replicat.

In a unidirectional environment, only the destination database contains information about the replication lag.

Timestamps are managed in the UTC time zone. The lag is the time between when a record is generated at the source database and when it becomes visible to clients at the target database.

Table 17-3 GG_HEARTBEAT_HISTORY Table

Column Data Type Description

LOCAL_DATABASE

VARCHAR2

Local database where the end-to-end lag is measured.

HEARTBEAT_RECEIVED_TS

TIMESTAMP(6)

Point in time when a timestamp from the remote database is received at the local database.

REMOTE_DATABASE

VARCHAR2

Remote database where the timestamp is generated.

INCOMING_EXTRACT

VARCHAR2

Name of the primary Extract on the remote database.

INCOMING_ROUTING_PATH

VARCHAR2

Name of the secondary Extract of the remote database.

INCOMING_REPLICAT

VARCHAR2

Name of the Replicat on the local database.

INCOMING_HEARTBEAT_TS

TIMESTAMP(6)

Final timestamp when the information is inserted into the GG_HEARTBEAT_HISTORY table on the local database.

INCOMING_EXTRACT_TS

TIMESTAMP(6)

Timestamp when the generated timestamp is processed by the primary Extract on the remote database.

INCOMING_ROUTING_TS

TIMESTAMP(6)

Timestamp when the generated timestamp is processed by the secondary Extract on the remote database.

INCOMING_REPLICAT_TS

TIMESTAMP(6)

Timestamp when the generated timestamp is processed by Replicat on the local database.

OUTGOING_EXTRACT

VARCHAR2

Bidirectional/N-way replication: Name of the primary Extract from the local database.

OUTGOING_ROUTING_PATH

VARCHAR2

Bidirectional/N-way replication: Name of the secondary Extract from the local database.

OUTGOING_REPLICAT

VARCHAR2

Bidirectional/N-way replication: Name of the Replicat on the remote database.

OUTGOING_HEARTBEAT_TS

TIMESTAMP(6)

Bidirectional/N-way replication: Final timestamp when the information is persistently inserted into the table of the remote database.

OUTGOING_EXTRACT_TS

TIMESTAMP(6)

Bidirectional/N-way replication: Timestamp when the generated timestamp is processed by the primary Extract on the local database.

OUTGOING_ROUTING_TS

TIMESTAMP(6)

Bidirectional/N-way replication: Timestamp when the generated timestamp is processed by the secondary Extract on the local database.

OUTGOING_REPLICAT_TS

TIMESTAMP(6)

Bidirectional/N-way replication: Timestamp when the generated timestamp is processed by Replicat on the remote database.

The GG_LAG view displays information about the replication lag between the local and remote databases.

In a unidirectional environment, only the destination database contains information about the replication lag. The lag is measured in seconds.

Table 17-4 GG_LAG View

Column Data Type Description

LOCAL_DATABASE

VARCHAR2

Local database where the end-to-end replication lag from the remote database is measured.

CURRENT_LOCAL_TS

TIMESTAMP(6)

Current timestamp of the local database.

REMOTE_DATABASE

VARCHAR2

Remote database where the timestamp is generated.

INCOMING_HEARTBEAT_AGE

NUMBER

The age of the most recent heartbeat received from the remote database.

INCOMING_PATH

VARCHAR2

Replication path from the remote database to the local database with Extract and Replicat components.

INCOMING_LAG

NUMBER

Replication lag from the remote database to the local database. This is the time when the heartbeat was generated at the remote database subtracted from the time when the information was persistently inserted into the table at the local database.

OUTGOING_HEARTBEAT_AGE

NUMBER

The age of the most recent heartbeat from the local database to the remote database.

OUTGOING_PATH

VARCHAR2

Replication path from the local database to the remote database with Extract and Replicat components.

OUTGOING_LAG

NUMBER

Replication lag from the local database to the remote database. This is the time when the heartbeat was generated at the local database subtracted from the time when the information was persistently inserted into the table at the remote database.
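To show how the view's columns fit together, here is a minimal sketch of querying lag figures. It is illustrative only: sqlite3 stands in for the real database, and the mock gg_lag table below mimics a few columns of the GG_LAG view; in a real deployment you would run the SELECT against the target database.

```python
import sqlite3

# Build a mock of a few GG_LAG columns in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE gg_lag (
    local_database TEXT, remote_database TEXT,
    incoming_lag REAL, outgoing_lag REAL)""")
# Unidirectional setup: only the incoming columns are populated.
conn.execute("INSERT INTO gg_lag VALUES ('TGT', 'SRC', 2.5, NULL)")

row = conn.execute("""SELECT local_database, remote_database, incoming_lag
                      FROM gg_lag""").fetchone()
print(row)  # -> ('TGT', 'SRC', 2.5)
```

A monitoring script can poll such a query periodically and alert when incoming_lag exceeds a chosen threshold.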

The GG_LAG_HISTORY view displays the history information about the replication lag history between the local and remote databases.

In a unidirectional environment, only the destination database contains information about the replication lag.

The lag is measured in seconds.

Table 17-5 GG_LAG_HISTORY View

Column Data Type Description

LOCAL_DATABASE

VARCHAR2

Local database where the end-to-end replication lag from the remote database is measured.

HEARTBEAT_RECEIVED_TS

TIMESTAMP(6)

Point in time when a timestamp from the remote database is received at the local database.

REMOTE_DATABASE

VARCHAR2

Remote database where the timestamp is generated.

INCOMING_HEARTBEAT_AGE

NUMBER

The age of the most recent heartbeat received from the remote database.

INCOMING_PATH

VARCHAR2

Replication path from the remote database to the local database with Extract and Replicat components.

INCOMING_LAG

NUMBER

Replication lag from the remote database to the local database. This is the time when the heartbeat was generated at the remote database subtracted from the time when the information was persistently inserted into the table on the local database.

OUTGOING_HEARTBEAT_AGE

NUMBER

The age of the most recent heartbeat sent from the local database to the remote database.

OUTGOING_PATH

VARCHAR2

Replication path from the local database to the remote database with Extract and Replicat components.

OUTGOING_LAG

NUMBER

Replication lag from the local database to the remote database. This is the time when the heartbeat was generated at the local database subtracted from the time when the information was persistently inserted into the table on the remote database.

17.4.2 Updating Heartbeat Tables

The HEARTBEAT_TIMESTAMP column in the heartbeat seed table must be updated periodically by a database job. The default heartbeat interval is 1 minute and this interval can be specified or overridden using a GGSCI or administration server command. For Oracle Database, the database job is created automatically; for all other supported databases, you must create background jobs to update the heartbeat timestamp using the database specific scheduler functionality.

17.4.3 Purging the Heartbeat History Tables

The heartbeat history table is purged periodically using a job. The default interval is 30 days and this interval can be specified or overridden using a GGSCI or administration server command. For Oracle Database, the database job is created automatically; for all other supported databases, you must create background jobs to purge the heartbeat history table using the database specific scheduler functionality.
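For example, a hypothetical GGSCI session that creates the heartbeat objects with an explicit update frequency, history retention, and purge schedule might look like this (the ggadmin alias is hypothetical, and the FREQUENCY, RETENTION_TIME, and PURGE_FREQUENCY options should be verified against Reference for Oracle GoldenGate for your release):

```
DBLOGIN USERIDALIAS ggadmin
ADD HEARTBEATTABLE, FREQUENCY 60, RETENTION_TIME 30, PURGE_FREQUENCY 1
```

Here FREQUENCY is the heartbeat update interval in seconds, RETENTION_TIME is the history retention in days, and PURGE_FREQUENCY is how often, in days, the purge job runs.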

17.4.4 Best Practice

Oracle recommends that you:

  • Use the same heartbeat frequency on all the databases to make diagnosis easier.

  • Adjust the retention period if space is an issue.

  • Retain the default heartbeat table frequency; a frequency of 30 to 60 seconds gives the best results for most workloads.

  • Use lag history statistics to collect lag and age information.

17.4.5 Using the Automatic Heartbeat Commands

You can use the heartbeat table commands to control the Oracle GoldenGate automatic heartbeat functionality as follows.

Table 17-6 Heartbeat Table Commands

Command Description

ADD HEARTBEATTABLE

Creates the objects required for automatic heartbeat functionality.

ALTER HEARTBEATTABLE

Alters existing heartbeat objects.

DELETE HEARTBEATTABLE

Deletes existing heartbeat objects.

DELETE HEARTBEATENTRY

Deletes entries in the heartbeat table.

INFO HEARTBEATTABLE

Displays heartbeat table information.

For more information, see the Reference for Oracle GoldenGate for Windows and UNIX.

17.5 Monitoring Processing Volume

The STATS commands in GGSCI show you the amount of data that is being processed by an Oracle GoldenGate process, and how fast it is being moved through the Oracle GoldenGate system. With this information, you can diagnose suspected problems and tune the performance of the Oracle GoldenGate processes. These commands provide a variety of options to select and filter the output.

The STATS commands are STATS EXTRACT, STATS REPLICAT, and STATS ER.

You can send interim statistics to the report file at any time with the SEND EXTRACT or SEND REPLICAT command with the REPORT option.

17.6 Using the Error Log

Use the Oracle GoldenGate error log to view:

  • a history of GGSCI commands

  • Oracle GoldenGate processes that started and stopped

  • processing that was performed

  • errors that occurred

  • informational and warning messages

Because the error log shows events as they occurred in sequence, it is a good tool for detecting the cause (or causes) of an error. For example, you might discover that:

  • someone stopped a process

  • a process failed to make a TCP/IP or database connection

  • a process could not open a file

To view the error log, use any of the following:

  • Standard shell command to view the ggserr.log file within the root Oracle GoldenGate directory

  • Oracle GoldenGate Director or Oracle GoldenGate Monitor

  • VIEW GGSEVT command in GGSCI.

17.7 Using the Process Report

Use the process report to view (depending on the process):

  • parameters in use

  • table and column mapping

  • database information

  • runtime messages and errors

  • runtime statistics for the number of operations processed

Every Extract, Replicat, and Manager process generates a report file. The report can help you diagnose problems that occurred during the run, such as invalid mapping syntax, SQL errors, and connection errors.

To view a process report, use any of the following:

  • standard shell command for viewing a text file

  • Oracle GoldenGate Monitor

  • VIEW REPORT command in GGSCI.

  • To view information if a process abends without generating a report, use the following command to run the process from the command shell of the operating system (not GGSCI) to send the information to the terminal.

    process paramfile path.prm
    

    Where:

    • The value for process is either extract or replicat.

    • The value for path.prm is the fully qualified name of the parameter file, for example:

      replicat paramfile /ogg/dirdat/repora.prm
      

By default, reports have a file extension of .rpt, for example EXTORA.rpt. The default location is the dirrpt sub-directory of the Oracle GoldenGate directory. However, these properties can be changed when the group is created. Once created, a report file must remain in its original location for Oracle GoldenGate to operate properly after processing has started.

To determine the name and location of a process report, use the INFO EXTRACT, INFO REPLICAT, or INFO MANAGER command in GGSCI.

17.7.1 Scheduling Runtime Statistics in the Process Report

By default, runtime statistics are written to the report once, at the end of each run. For long or continuous runs, you can use optional parameters to view these statistics on a regular basis, without waiting for the end of the run.

To set a schedule for reporting runtime statistics, use the REPORT parameter in the Extract or Replicat parameter file to specify a day and time to generate runtime statistics in the report. See REPORT.

To send runtime statistics to the report on demand, use the SEND EXTRACT or SEND REPLICAT command with the REPORT option to view current runtime statistics when needed.

17.7.2 Viewing Record Counts in the Process Report

Use the REPORTCOUNT parameter to report a count of transaction records that Extract or Replicat processed since startup. Each transaction record represents a logical database operation that was performed within a transaction that was captured by Oracle GoldenGate. The record count is printed to the report file and to the screen. For more information, see Reference for Oracle GoldenGate.

17.7.3 Preventing SQL Errors from Filling the Replicat Report File

Use the WARNRATE parameter to set a threshold for the number of SQL errors that can be tolerated on any target table before being reported to the process report and to the error log. The errors are reported as a warning. If your environment can tolerate a large number of these errors, increasing WARNRATE helps to minimize the size of those files. For more information, see Reference for Oracle GoldenGate.

17.8 Using the Discard File

By default, a discard file is generated whenever a process is started with the START command through GGSCI. The discard file captures information about Oracle GoldenGate operations that failed. This information can help you resolve data errors, such as those that involve invalid column mapping.

The discard file reports such information as:

  • The database error message

  • The sequence number of the data source or trail file

  • The relative byte address of the record in the data source or trail file

  • The details of the discarded operation, such as column values of a DML statement or the text of a DDL statement.

To view the discard file, use a text editor or use the VIEW REPORT command in GGSCI. See Reference for Oracle GoldenGate.

The default discard file has the following properties:

  • The file is named after the process that creates it, with a default extension of .dsc. Example: finance.dsc.

  • The file is created in the dirrpt sub-directory of the Oracle GoldenGate installation directory.

  • The maximum file size is 50 megabytes.

  • At startup, if a discard file exists, it is purged before new data is written.

You can change these properties by using the DISCARDFILE parameter. You can disable the use of a discard file by using the NODISCARDFILE parameter. See Reference for Oracle GoldenGate.

If a process is started from the command line of the operating system, it does not generate a discard file by default. You can use the DISCARDFILE parameter to specify the use of a discard file and its properties.

Once created, a discard file must remain in its original location for Oracle GoldenGate to operate properly after processing has started.

17.9 Maintaining the Discard and Report Files

By default, discard files and report files are aged the same way. A new discard or report file is created at the start of a new process run. Old files are aged by appending a sequence number from 0 (the most recent) to 9 (the oldest) to their names.

If the active report or discard file reaches its maximum file size before the end of a run (or over a continuous run), the process abends unless there is an aging schedule in effect. Use the DISCARDROLLOVER and REPORTROLLOVER parameters to set aging schedules for the discard and report files respectively. These parameters set instructions for rolling over the files at regular intervals, in addition to when the process starts. Not only does this control the size of the files and prevent process outages, but it also provides a predictable set of archives that can be included in your archiving routine. For more information, see the following documentation:

  • DISCARDROLLOVER

  • REPORTROLLOVER
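As an illustration, a rollover schedule in an Extract or Replicat parameter file might look like the following sketch (the weekly schedule is an example; see the parameter references above for the full syntax):

```
REPORTROLLOVER AT 01:00 ON SUNDAY
DISCARDROLLOVER AT 01:00 ON SUNDAY
```

Both files would then roll over every Sunday at 1:00 a.m., in addition to when the process starts.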

No process ever has more than ten aged reports or discard files and one active report or discard file. After the tenth aged file, the oldest is deleted when a new report is created. It is recommended that you establish an archiving schedule for aged reports and discard files in case they are needed to resolve a service request.

Table 17-7 Current Extract and Manager Reports Plus Aged Reports

    -rw-rw-rw- 1 ggs ggs 1193 Oct 11 14:59 MGR.rpt
    -rw-rw-rw- 1 ggs ggs 3996 Oct  5 14:02 MGR0.rpt
    -rw-rw-rw- 1 ggs ggs 4384 Oct  5 14:02 TCUST.rpt
    -rw-rw-rw- 1 ggs ggs 1011 Sep 27 14:10 TCUST0.rpt
    -rw-rw-rw- 1 ggs ggs 3184 Sep 27 14:10 TCUST1.rpt
    -rw-rw-rw- 1 ggs ggs 2655 Sep 27 14:06 TCUST2.rpt
    -rw-rw-rw- 1 ggs ggs 2655 Sep 27 14:04 TCUST3.rpt
    -rw-rw-rw- 1 ggs ggs 2744 Sep 27 13:56 TCUST4.rpt
    -rw-rw-rw- 1 ggs ggs 3571 Aug 29 14:27 TCUST5.rpt

17.10 Reconciling Time Differences

To account for time differences between source and target systems, use the TCPSOURCETIMER parameter in the Extract parameter file. This parameter adjusts the timestamps of replicated records for reporting purposes, making it easier to interpret synchronization lag. For more information, see Reference for Oracle GoldenGate.

17.11 Getting Help with Performance Tuning

See Tuning the Performance of Oracle GoldenGate for help with tuning the performance of Oracle GoldenGate.
