In This Document
1 Purpose

The purpose of the E-Business Suite Performance Tuning Guide is to give an overview of:
2 Introduction

This is the first of a series of several documents. This document gives an overview of how to approach an E-Business Suite performance issue: where to start and how to proceed. It references other documents that give the detail on each of the main diagnostic tools, the key actions that can be taken to resolve issues and the key areas for performance tuning. Where there is already comprehensive and up-to-date content elsewhere (e.g. My Oracle Support, Documentation Library or Blogs) this is referenced rather than repeating the detail of that content in these documents. The aim is to give the user:
without repeating significant content.

3 Solution Flow Diagram

The following diagram is a very simplified view of the process. There are many other paths and actions that can be taken, depending on the information received and the analyst's level of experience. There are also other diagnostics that can be obtained. However, this diagram covers the most likely paths.
4 Where to Start

The usual place to start is the SQL Trace and TKPROF. However, this is not always the best place. If the performance issue is not (yet) linked to a specific module, or is system wide, then an AWR report would be the best starting point. If the specific SQLs with the performance issues are known, the SQL is still in memory (the cursor is still loaded in the cursor cache) and the "statistics_level" initialization parameter is set to ALL (or the initialization parameter "_rowsource_execution_statistics" is set to TRUE) when the SQLs are executed, then the Display Cursor or SQL Monitor report can be a good place to start.

5 Reducing Iterations

5.1 Sufficient Information

It is important to get the balance right between getting all necessary information on the first pass and asking for too much information. Round trips between parties should be minimized: they can cause significant delay, especially across different time zones, and can create a lot more work. However, gathering too much data can:
So, if the module or program is known, then typically ask for the AWR report and the SQL Trace/TKPROF. However, many of the other diagnostics should not be requested until later.
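If the database session can be identified, the SQL Trace can also be enabled directly from SQL. The following is a minimal sketch (the tracefile_identifier value and the TKPROF file names are illustrative placeholders):

```sql
-- Enable a level 8 (with waits) SQL Trace for the current session.
-- The tracefile_identifier value is illustrative; it simply tags the
-- trace file name so the file is easy to find.
ALTER SESSION SET tracefile_identifier = 'perf_issue';
ALTER SESSION SET timed_statistics = TRUE;   -- needed for CPU times / STAT lines
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... run the module / SQL being investigated ...

ALTER SESSION SET events '10046 trace name context off';

-- The raw trace file can then be formatted on the database server, e.g.:
--   tkprof <trace file>.trc <output file>.prf sort=prsela,exeela,fchela
```

Within E-Business Suite itself, tracing is more usually enabled through the standard profile options and concurrent program options described in Document 1674024.1.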
5.2 Correct and Complete Diagnostics

To avoid repeating requests for the same information:
For example:
6 Solution Process Overview

6.1 SQL Trace / TKPROF

Around 90% of performance issues on Oracle E-Business Suite are SQL related. So, if the module/program is known, then start with the SQL Trace (Level 8 – with waits) and TKPROF. The timed_statistics initialization parameter must be set to TRUE, so that the SQL Trace contains row source statistics (STAT lines) and CPU times. In most circumstances this will show the actual runtime execution plan with row source statistics, so it should be possible to identify any inefficient execution plan steps from the TKPROF. However, it does not show which predicates are applied on each execution plan line. Row source statistics are the actual (not estimated) statistics (elapsed time, physical reads/writes, consistent reads) for each execution plan line. It will also identify any time-consuming wait events.

For more information on obtaining and interpreting SQL Trace and TKPROF, see My Oracle Support document "Oracle E-Business SQL Trace and TKPROF Guide (Document 1674024.1)". For more information on identifying SQLs with performance issues from the SQL Trace / TKPROF, see the "What is the challenging SQL? > 1 From Trace" section in My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)".

6.2 AWR

The AWR report should normally be obtained on the first pass as well. This should be for the period (between snapshots) that the module / program was running. This can help identify expensive SQLs, time-consuming wait events, system resource issues or DB configuration issues. It does not report execution plans (runtime or explain plan). Even if the module/program or SQLs with the performance issue are known:
In some cases Active Session History (ASH) reports can be used to give more detailed information on the SQLs, wait events, objects and row sources in which the performance issue occurs.

For more information on obtaining and interpreting AWR reports, see My Oracle Support document "Performance Diagnosis with Automatic Workload Repository (AWR) (Document 1674086.1)". For more information on identifying SQLs with performance issues from the AWR, see the "What is the challenging SQL? > 2 From Automatic Workload Repository (AWR)" section in My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)".

6.3 Display Cursor / SQL Monitor Report

If specific SQLs with the potential performance issues are known, the SQL is still in memory (the cursor is still loaded in the cursor cache) and "statistics_level" is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) when the SQL is executed, then the Display Cursor or SQL Monitor report can be a good place to start. These reports can be produced whilst a SQL is still executing. If it is not possible to obtain a complete SQL trace (or one that contains row source statistics (STAT) lines) then these reports can also be useful in providing the actual runtime execution plan and row source statistics. See the "Runtime Execution Plans / Row Source Operation Counts", "Incomplete Trace File" and "11g and Extended SQL Trace" sections in the My Oracle Support document "Oracle E-Business SQL Trace and TKPROF Guide (Document 1674024.1)".

To produce these reports it will normally be necessary to know the sql_id of the SQLs with potential performance issues. See here for more information. The Display Cursor report provides the actual runtime execution plan and row source statistics along with the predicates applied on each execution plan line.
Row source statistics are the actual (not estimated) statistics (starts, number of rows, elapsed time, physical reads, buffer gets, memory used (in PGA)) for each execution plan line. The Display Cursor report will only provide actual row source statistics if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed.

The SQL Monitor report also provides the actual runtime execution plan. It shows row source statistics (executions, rows, memory usage, I/O requests, CPU activity, wait activity) for each runtime execution plan line, but there are differences with those provided on the Display Cursor report. It will show executions, rows, memory usage and I/O requests even if the "statistics_level" initialization parameter is not set to ALL (or "_rowsource_execution_statistics" is not set to TRUE) at the time the SQL is executed. However, it will only show Active Period, CPU Activity and Wait Activity for execution plan lines if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed. So these parameter values are preferred for SQL Monitor reports. It also gives a good view of how parallel SQL/DML performs across stages of the plan and parallel slaves. However, it is only available in Oracle Database 11g.

The Display Cursor report is preferred if the SQL/DML is non-parallel. It is also preferred prior to 11g. The SQL Monitor report is preferred if the SQL/DML is parallel. The advantage of these diagnostics is that they can be obtained quickly and after the event. There is often no need for another iteration to obtain the SQL Trace / TKPROF. See here for more information on the Display Cursor report. See here for more information on the SQL Monitor report.
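To locate the sql_id of a cursor still in the cursor cache, one option is to query V$SQL; this is a sketch, and the sql_text filter is an illustrative placeholder to be adjusted to the statement being investigated:

```sql
-- Find candidate sql_ids for a statement still in the cursor cache.
-- The sql_text LIKE predicate below is an illustrative placeholder.
SELECT sql_id, child_number, plan_hash_value, executions,
       ROUND(elapsed_time / 1e6) AS elapsed_secs,
       buffer_gets, disk_reads
FROM   v$sql
WHERE  sql_text LIKE 'INSERT INTO HZ_CUST_SITE_USES_ALL%'
ORDER  BY elapsed_time DESC;
```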
It is also possible to get information directly from the V$SQL_PLAN_STATISTICS or V$SQL_PLAN_MONITOR views that are used by the Display Cursor and SQL Monitor reports. For more information see the section "How to get the necessary information? > 6. Steps to get SQL execution plan with V$SQL_PLAN_MONITOR or V$SQL_PLAN_STATISTICS" in the My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)".

6.4 Most of the Time Spent in the Application Tier

High levels of "SQL*Net message from client" waits can indicate a large amount of time spent in the client application or on the application tier. This will not just be a high total time, but a high average wait time as well. Normally the "SQL*Net message from client" waits will average around 1ms or less. However, it really depends on the hardware/network configuration and bandwidth between tiers. Note that:
Network Bandwidth or Latency Issues

The following My Oracle Support documents could assist with identifying and resolving network issues:
6.5 Applications Tier Diagnostics

If, after analyzing the SQL Trace / TKPROF (or even the AWR), the majority of time is not spent in the database, then diagnostics from the application tier may be needed. Even if the application tier is on the same physical host as the database tier, diagnostics will still be needed. These include:
6.6 Most Time Spent in SQL Execution

Typically, the majority of time will be spent parsing, executing or fetching in SQL. However, if
then the issue could be in the server-side PL/SQL.

6.7 PL/SQL Profiler

The PL/SQL Profiler and PL/SQL Hierarchical Profiler can be used to analyze which steps in the server-side PL/SQL are consuming the time. See the PL/SQL Profiler section below for more detail.

6.8 SQLHC or SQLT

There are occasions when SQLHC should be used rather than SQLT:
There are situations where SQLT is preferred over SQLHC:
6.9 SQLHC

When SQLs with possible sub-optimal execution plans have been identified, then use SQLHC (sqlhc.sql). This provides html reports, which contain most of the data required to identify the reasons why the CBO chose a sub-optimal execution plan, e.g. Health Check (Observations), CBO statistics, execution plans, runtime statistics, system/instance (initialization) parameters and SQL Profiles/Baselines associated with the SQL. It saves a huge amount of time by providing all information in one action (and in one zip file sqlhc_YYYYMMDD_HHMISS_SQLID.zip). It does not require any installation on the environment (other than the script file itself) and has a negligible performance impact. It is particularly useful if engineers/analysts do not have access to the instance.

Note that SQLHC will only provide actual row source statistics if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed. It will require the SQL ID. See here for more information on SQLHC.

6.10 SQLT with XTRACT Method

When SQLs with possible sub-optimal execution plans have been identified, then obtain the SQLT with XTRACT method for those SQLs. This provides an html report (sqlt_snnnnn_main.html), which contains most of the data required to identify the reasons why the CBO chose the execution plan. It contains execution plans, runtime statistics, CBO statistics, DB parameters, CBO parameters, performance statistics, SQL Profiles/Baselines and metadata (schema object definitions etc.) associated with the SQL. It saves a huge amount of time by providing all information in one action (and in one zip file sqlt_snnnnn_method_*.zip). It is particularly useful if engineers/analysts do not have access to the instance. The XTRACT method is recommended as it gives actual SQL execution runtime data (if it is available).
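As a sketch of typical SQLHC usage (the sql_id shown is illustrative), the script is run from SQL*Plus as a DBA user, passing the licensed management pack level and the SQL ID:

```sql
-- Run SQLHC for one SQL ID. The first parameter declares which
-- management pack is licensed (T = Tuning, D = Diagnostics, N = none);
-- the second is the sql_id of the statement being investigated.
START sqlhc.sql T 8d9ny4cf9qmvm
```

The resulting zip file can then be reviewed offline, which is why the tool suits environments the analyst cannot access directly.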
Note that SQLT (with XTRACT and XECUTE methods) will only provide actual row source statistics if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed. The XTRACT method will require the SQL ID or SQL hash value. The XPLAIN and XECUTE methods will require a file containing the SQL (and, for XECUTE, application context / bind variables declared and populated). Sometimes it is not possible to use SQLT on an environment. For example, a customer may object to installing SQLT on their production environment, or they may be concerned about the performance impact on a production system. In that case SQLHC could be used instead to provide some of the information. See here for more information on SQLT.

6.11 Operating System Statistics

While AWR and trace data give a good initial picture of whether there are issues with I/O, CPU or memory, on Linux/UNIX environments there are a number of individual operating system commands that can be used to monitor overall resource usage, or the resource usage for individual processes, e.g. "sar", "ps", "top", "ifconfig", "mpstat", "iostat", "netstat", "vmstat" etc. The OS Watcher and LTOM tools provide automated ways of running some of these commands and then collating and presenting the results. These diagnostics will not be needed if the performance issue is due to sub-optimal SQL execution plans, DB configuration, initialization parameters, incorrect/missing CBO statistics, high numbers of executions etc.

6.12 SQL Tuning

There are many ways to fix an inefficient execution plan. These are covered in the "How to tune the issue SQL ?" section in the My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)", which describes how a more efficient execution plan could be obtained by:
The My Oracle Support document "Interpreting Explain Plan (10g and Above) (Document 1616894.1)" explains how to interpret execution / explain plans. The My Oracle Support document "White Papers and Blog Entries for Oracle Optimizer (Document 1337116.1)" lists all the whitepapers and blogs for the Cost Based Optimizer.

In some cases the execution plan could be the most efficient, but there are still high levels of buffer gets (logical reads) or disk reads (physical reads) for certain execution plan lines. In this case the time taken to read rows for specific objects (tables / indexes) could be reduced by:
6.13 Performance Tuning

The issue could be a system or application module wide performance issue caused by:
The "How to tune the issue SQL ?" section in the My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)" outlines the following methods for resolving general performance issues:
6.13.1 Exadata

The following My Oracle Support documents contain some useful advice for configuring Exadata on E-Business Suite:
6.13.2 RAC

If the following places in the AWR show high levels of RAC wait events ("gc" wait events), then the RAC configuration may need to be changed.
See the following My Oracle Support documents for more information:
Also see Chapter 13 Monitoring Performance in the "Real Application Clusters Administration and Deployment Guide".

6.13.3 Concurrent Manager / Processing

See the following My Oracle Support documents for more information:
6.13.4 AQ / Streams

See the following My Oracle Support documents for more information:
Also see Chapter 5 Oracle Database Advanced Queuing Performance and Scalability in the "Database Advanced Queuing User's Guide".

7 Display Cursor Report

This displays the runtime execution plan of any cursor loaded in the cursor cache. At a basic level, it shows the runtime execution plan. However, the format ALL also includes extra information such as pruning, parallel execution, predicate, projection, alias and remote SQL information. It can be produced by running the dbms_xplan.display_cursor function with the format option "ALL +ALLSTATS". The +ALLSTATS option will include actual row source statistics for each execution plan step. ALLSTATS includes IOSTATS and MEMSTATS, e.g.:
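For example, a report for a single cursor can be produced from SQL*Plus as follows (the sql_id matches the sample output later in this section; adjust it to the cursor being investigated):

```sql
-- serveroutput must be off for dbms_xplan to work
SET SERVEROUTPUT OFF
SET LINESIZE 250 PAGESIZE 0

-- NULL for the child number reports all child cursors; the LAST keyword
-- restricts the row source statistics to the most recent execution.
SELECT plan_table_output
FROM   TABLE(dbms_xplan.display_cursor('8d9ny4cf9qmvm', NULL, 'ALL +ALLSTATS LAST'));
```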
However, this additional information is only provided if STATISTICS_LEVEL = ALL when the SQL is executed. Setting the initialization parameter "statistics_level" to ALL can be an overhead, so may not be advisable on a production system. Alternatively the initialization parameter "_rowsource_execution_statistics" can be set to TRUE, which gives a lower overhead. These parameters can be set for a specific session (e.g. using the profile option "Initialization SQL Statement - Custom"). If testing from SQL*Plus, then the hint /*+ gather_plan_statistics */ can be used to collect row source statistics for a specific SQL (even if "statistics_level" is not ALL and "_rowsource_execution_statistics" is FALSE). Note that SQLT with XTRACT method will also report actual row source statistics in the same circumstances.

The Display Cursor report should be run as soon as possible. If it is delayed, the cursor may have been flushed from memory or invalidated, and no data will be available. The SQL ID will be required to run the Display Cursor report. See here for information on how to obtain it. See here for instructions on how to run the Display Cursor report. By default, the plan statistics are shown for all executions of the cursor; however, the keyword LAST can be added to the options to see the statistics for the last execution only. Here is an example of the output from a display cursor report.
SQL_ID  8d9ny4cf9qmvm, child number 1
-------------------------------------
INSERT INTO HZ_CUST_SITE_USES_ALL ( SITE_USE_ID , CUST_ACCT_SITE_ID,
SITE_USE_CODE , PRIMARY_FLAG , STATUS , LOCATION , ORG_ID ,
OBJECT_VERSION_NUMBER, CREATED_BY_MODULE, LAST_UPDATE_DATE ,
CREATION_DATE , LAST_UPDATED_BY , CREATED_BY , LAST_UPDATE_LOGIN)
SELECT HZ_CUST_SITE_USES_S.NEXTVAL, L.CUST_ACCT_SITE_ID, 'LATE_CHARGE',
'Y', 'A', TO_CHAR(HZ_CUST_SITE_USES_S.CURRVAL), L.ORG_ID, 1,
'AR_LATE_CHARGE_UPG', :B3 , :B3 , -1551, -1551, -1551 FROM (SELECT
CAS.CUST_ACCT_SITE_ID, CAS.ORG_ID FROM HZ_CUST_ACCT_SITES_ALL CAS,
HZ_CUST_SITE_USES_ALL CSU WHERE CAS.ROWID >= :B2 AND CAS.ROWID <= :B1
AND CAS.STATUS = 'A' AND CAS.CUST_ACCT_SITE_ID = CSU.CUST_ACCT_SITE_ID
AND CSU.STATUS = 'A' AND (( CSU.SITE_USE_CODE = 'STMTS' AND NOT EXISTS
(SELECT NULL FROM HZ_CUST_SITE_USES_ALL B WHERE B.CUST_ACCT_SITE_ID =
CAS.CUST_ACCT_SITE_ID AND B.SITE_USE_CODE = 'DUN' AND B.STATUS = 'A'))
OR CSU.SITE_USE_CODE = 'DUN')) L

Plan hash value: 1449143904

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                        | Name                   | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time    | Buffers | Reads | OMem | 1Mem |  O/1/M  |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  0 | INSERT STATEMENT                 |                        |    655 |        |       | 52008 (100)|          |      0 | 15:49:30.79 |    147M |  101K |      |      |         |
|  1 |  LOAD TABLE CONVENTIONAL         |                        |    655 |        |       |            |          |      0 | 15:49:30.79 |    147M |  101K |      |      |         |
|  2 |   SEQUENCE                       | HZ_CUST_SITE_USES_S    |    655 |        |       |            |          |   109K | 15:20:52.78 |    145M | 98825 |      |      |         |
|* 3 |    FILTER                        |                        |    655 |        |       |            |          |   109K | 15:19:30.01 |    145M | 98825 |      |      |         |
|* 4 |     FILTER                       |                        |    655 |        |       |            |          |  5920K | 15:02:20.32 |    144M | 80273 |      |      |         |
|* 5 |      HASH JOIN                   |                        |    655 |  42541 | 1744K |  52008 (1) | 00:18:18 |  5920K | 15:01:52.29 |    144M | 80273 | 963K | 963K | 655/0/0 |
|* 6 |       TABLE ACCESS BY ROWID RANGE| HZ_CUST_ACCT_SITES_ALL |    655 |  30763 |  751K |  14546 (1) | 00:05:07 |  3507K | 00:02:21.51 |   93535 | 80273 |      |      |         |
|* 7 |       TABLE ACCESS FULL          | HZ_CUST_SITE_USES_ALL  |    655 |  3356K |   54M |  37454 (1) | 00:13:11 |  4050M | 06:58:51.26 |    144M |     0 |      |      |         |
|* 8 |      TABLE ACCESS BY INDEX ROWID | HZ_CUST_SITE_USES_ALL  |   109K |      1 |    17 |      4 (0) | 00:00:01 |      0 | 00:20:29.26 |    328K | 18552 |      |      |         |
|* 9 |       INDEX RANGE SCAN           | HZ_CUST_SITE_USES_N1   |   109K |      1 |       |      3 (0) | 00:00:01 |      0 | 00:20:28.15 |    328K | 18552 |      |      |         |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$F5BB74E1
   6 - SEL$F5BB74E1 / CAS@SEL$2
   7 - SEL$F5BB74E1 / CSU@SEL$2
   8 - SEL$3 / B@SEL$3
   9 - SEL$3 / B@SEL$3

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter((("CSU"."SITE_USE_CODE"='STMTS' AND  IS NULL) OR
              "CSU"."SITE_USE_CODE"='DUN'))
   4 - filter(CHARTOROWID(:B2)<=CHARTOROWID(:B1))
   5 - access("CAS"."CUST_ACCT_SITE_ID"="CSU"."CUST_ACCT_SITE_ID")
   6 - access("CAS".ROWID>=CHARTOROWID(:B2) AND "CAS".ROWID<=CHARTOROWID(:B1))
       filter("CAS"."STATUS"='A')
   7 - filter("CSU"."STATUS"='A')
   8 - filter("B"."STATUS"='A')
   9 - access("B"."CUST_ACCT_SITE_ID"=:B1 AND "B"."SITE_USE_CODE"='DUN')

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   2 - "CAS"."CUST_ACCT_SITE_ID"[NUMBER,22], "CSU"."CUST_ACCT_SITE_ID"[NUMBER,22],
       "CAS".ROWID[ROWID,10], "CAS"."ORG_ID"[NUMBER,22], "CAS"."STATUS"[VARCHAR2,1],
       "CSU"."STATUS"[VARCHAR2,1], "CSU"."SITE_USE_CODE"[VARCHAR2,30]
   3 - "CAS"."CUST_ACCT_SITE_ID"[NUMBER,22], "CSU"."CUST_ACCT_SITE_ID"[NUMBER,22],
       "CAS".ROWID[ROWID,10], "CAS"."ORG_ID"[NUMBER,22], "CAS"."STATUS"[VARCHAR2,1],
       "CSU"."STATUS"[VARCHAR2,1], "CSU"."SITE_USE_CODE"[VARCHAR2,30]
   4 - "CAS"."CUST_ACCT_SITE_ID"[NUMBER,22], "CSU"."CUST_ACCT_SITE_ID"[NUMBER,22],
       "CAS".ROWID[ROWID,10], "CAS"."ORG_ID"[NUMBER,22], "CAS"."STATUS"[VARCHAR2,1],
       "CSU"."STATUS"[VARCHAR2,1], "CSU"."SITE_USE_CODE"[VARCHAR2,30]
   5 - (#keys=1) "CAS"."CUST_ACCT_SITE_ID"[NUMBER,22],
       "CSU"."CUST_ACCT_SITE_ID"[NUMBER,22], "CAS".ROWID[ROWID,10],
       "CAS"."ORG_ID"[NUMBER,22], "CAS"."STATUS"[VARCHAR2,1],
       "CSU"."STATUS"[VARCHAR2,1], "CSU"."SITE_USE_CODE"[VARCHAR2,30]
   6 - "CAS".ROWID[ROWID,10], "CAS"."CUST_ACCT_SITE_ID"[NUMBER,22],
       "CAS"."STATUS"[VARCHAR2,1], "CAS"."ORG_ID"[NUMBER,22]
   7 - "CSU"."CUST_ACCT_SITE_ID"[NUMBER,22], "CSU"."SITE_USE_CODE"[VARCHAR2,30],
       "CSU"."STATUS"[VARCHAR2,1]
   8 - "B".ROWID[ROWID,10], "B"."CUST_ACCT_SITE_ID"[NUMBER,22],
       "B"."SITE_USE_CODE"[VARCHAR2,30], "B"."STATUS"[VARCHAR2,1]
   9 - "B".ROWID[ROWID,10], "B"."CUST_ACCT_SITE_ID"[NUMBER,22],
       "B"."SITE_USE_CODE"[VARCHAR2,30]

Note
-----
   - dynamic sampling used for this statement (level=7)

150 rows selected.

In the above example, it can clearly be seen that the performance issue is related to a full table scan of HZ_CUST_SITE_USES_ALL (alias CSU) and the subsequent hash join. The columns Starts, A-Rows, A-Time, Buffers, Reads, OMem, 1Mem and O/1/M are actual statistics. The columns E-Rows, E-Bytes, Cost (%CPU) and E-Time are estimates (calculated by the Cost Based Optimizer).

Most of the information is also included in SQLT with XTRACT method. However, display_cursor provides a simpler view of the information and can be obtained very quickly with little overhead. The runtime execution plan and row source statistics are also available in views V$SQL_PLAN and V$SQL_PLAN_STATISTICS. For more information see the section "How to get the necessary information? > 6.
Steps to get SQL execution plan with V$SQL_PLAN_MONITOR or V$SQL_PLAN_STATISTICS" in the My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)". Note: serveroutput must be set to "off" for dbms_xplan to work.

8 SQL Monitor Report

The main advantage of this report is that it gives a good view of how parallel SQL/DML performs across stages of the plan and parallel slaves. It can also give a good idea of the actual executions and row counts for each execution plan line even if the "statistics_level" initialization parameter is not set to ALL (or "_rowsource_execution_statistics" is not set to TRUE) at the time the SQL is executed. However, it is only available in Oracle Database 11g. For more information see the Oracle Database Performance Tuning Guide and My Oracle Support document "Monitoring SQL statements with Real-Time SQL Monitoring (Document 1380492.1)".

The SQL Monitor report will only show the Active Period, CPU Activity and Wait Activity for execution plan lines if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed. This report uses data from several fixed views (e.g. GV$SQL_MONITOR, GV$SQL_PLAN_MONITOR, GV$SQL_PLAN, GV$ACTIVE_SESSION_HISTORY, GV$SESSION_LONGOPS, GV$SQL), so the database user must have the privilege to select from these (e.g. the SELECT_CATALOG role). It can be produced by running the dbms_sqltune.report_sql_monitor function with report_level => 'ALL'. The SQL ID will be required to run the SQL Monitor report. See here for information on how to obtain it. See here for instructions on how to run the SQL Monitor report.

By default the SQL Monitor report shows the last (monitored) execution of the SQL. The sql_exec_start and sql_exec_id parameters can be used to display other executions. See the Oracle Database PL/SQL Packages and Types Reference for more information.
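A common way to capture the report from SQL*Plus is to spool it to an HTML file; this is a sketch, and the sql_id and spool file name are illustrative:

```sql
-- Spool a SQL Monitor report to an HTML file.
SET LONG 1000000 LONGCHUNKSIZE 1000000 LINESIZE 1000 PAGESIZE 0
SET TRIMSPOOL ON ECHO OFF FEEDBACK OFF
SPOOL sql_monitor_report.html
SELECT dbms_sqltune.report_sql_monitor(
         sql_id       => '8d9ny4cf9qmvm',   -- illustrative sql_id
         type         => 'HTML',            -- 'TEXT' is also available
         report_level => 'ALL')
FROM   dual;
SPOOL OFF
```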
SQL Monitor reports are also available from Oracle Enterprise Manager. Here is an example of the output from a SQL Monitor report:
The Estimated Rows and Cost columns in the Monitoring Details section contain estimates; the remaining columns contain actual statistics. Note that pointing to any of the colored bars on the HTML report will display more information about that statistic, e.g. for Wait Activity it displays the names of the wait events.

Real-Time SQL Monitoring requires the Oracle Tuning Pack. The Oracle Diagnostics Pack is a prerequisite for the Oracle Tuning Pack, so both the Oracle Tuning Pack and the Oracle Diagnostics Pack must be licensed in order to use Real-Time SQL Monitoring. They can only be purchased with the Enterprise Edition of the Oracle Database. See My Oracle Support document "Monitoring SQL statements with Real-Time SQL Monitoring (Document 1380492.1)" or Oracle Database Licensing Information in the documentation library.

SQL monitoring is enabled when the initialization parameter "statistics_level" is set to "ALL" or "TYPICAL" (the default). The "control_management_pack_access" parameter should also be set to "DIAGNOSTIC+TUNING" (SQL monitoring is part of the Oracle Database Tuning Pack). SQLs are monitored if they consume more than 5 seconds of CPU or I/O time in a single execution or if they use parallel execution (SQL, DML or DDL).

The runtime execution plan and row source statistics reported in the SQL Monitor report are also available in the view V$SQL_PLAN_MONITOR. For more information see the section "How to get the necessary information? > 6. Steps to get SQL execution plan with V$SQL_PLAN_MONITOR or V$SQL_PLAN_STATISTICS" in the My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)".

9 Forms Trace and Forms Runtime Diagnostics

Forms Trace was introduced with Forms 9i. It can be used with Oracle E-Business Suite R12 to provide information on events in Forms and when they occurred (relative to each other).
It can be used to diagnose and investigate Forms performance issues, when it is clear that the time is being consumed in the Form itself and not in the database. The output can be provided in HTML or XML format. It can record events such as:
Forms Runtime Diagnostics (FRD) is the primary diagnostic tool used in Forms 6i (and hence Oracle E-Business Suite 11i). It also reports on events in Forms and when they occur. The events include:
For R12.0 see My Oracle Support document "How To Collect And Use Forms Trace (FRD) in Oracle Applications Release 12 (Document 373548.1)". Also see "R12: Forms Runtime Diagnostics (FRD), Tracing And Logging For Forms In Oracle Applications (Document 438652.1)" for more information. For Release 11i, see My Oracle Support document "11i/R12: Obtaining Forms Runtime Diagnostics (FRD) In Oracle Applications 11i and Release 12 (Document 150168.1)".

Note that the location in the FORMS_TRACE_DIR variable can also be obtained through Help in Oracle Forms (go to Help > About Oracle Applications, section "Forms Server Environment Variables"). If the value is not visible, then it may be necessary to set the profile option "FND: Diagnostics" to "Yes" and bounce the middle tier.

9.1 Using Oracle Applications Manager

From R12 onwards, Forms Runtime Diagnostics (FRD) can be obtained for another Forms session using the Monitoring feature within Oracle Applications Manager (Monitoring > Forms Sessions (under Current Activity)). Click on the Session Details button or the AUDSID to view database information for the selected Forms session. The Diagnostics button can then be clicked to enable Forms Runtime Diagnostics (FRD) for that Forms session. The View Diagnostics button can be used to view the Forms Runtime Diagnostics (FRD) log file. When the relevant actions are completed, remember to switch off diagnostics using the Diagnostics button.

Note that the profile option "Sign-On:Audit Level" must be set to "Form" to be able to use this functionality. If this button is disabled, then check that the Forms patchset level is 12 or higher (i.e. 6.0.8.20 or higher) and set the environment variable FORMS60_OAM_FRD for the Forms Listener process. See the Oracle E-Business Suite Maintenance Guide (R12.2 and onwards) or System Administrator's Guide – Maintenance (prior to R12.2) for more information.

10 Reports Client Process Trace

Reports can be traced in three different ways:
These produce identical output. See here for instructions on how to run them. There are a number of trace options. None of them are ideal. For detail reports, TRACE_ALL can give significantly distorted timings for the formatting part of the report (Reports Time), as well as produce a very large Reports Trace log file. TRACE_PRF gives an overview of where time is spent, either in formatting the report or retrieving data from the database (UPI or SQL), but no detail.

So, for detail reports it is advisable to use the "Log report performance statistics" (TRACE_PRF) option. If this indicates that the issue is in "Reports Time" or "UPI" time, then run the report again with the "Log all possible trace information" (TRACE_ALL) option (which includes TRACE_APP, TRACE_PLS, TRACE_TMS etc.). For summary reports, use the "Log all possible trace information" (TRACE_ALL) option.

Choosing Reports Trace Options (Debug Option Values)

The following trace options (TRACEOPTS) and Debug Option Values are available:
Combining options, e.g. TRACEOPTS=(TRACE_APP,TRACE_PLS,TRACE_TMS), will include multiple sources of information. Note that there are no spaces before or after the comma delimiters. The most useful options are TRACE_APP, TRACE_PLS and TRACE_PRF. However, for reports where the output is large compared to the database activity (e.g. detail reports), TRACE_APP will give significantly distorted timings for the formatting part of the report (Reports Time), as well as produce a very large Reports Trace log file. So, it is advisable to start with TRACE_PRF. If TRACE_PRF indicates that the issue is in "Reports Time" or "UPI" time, then run the report again with the TRACE_ALL option (which includes TRACE_APP, TRACE_PLS, TRACE_TMS etc.).

Note that the times on the TRACE_ALL for report objects (e.g. boilerplate, fields, columns - TRACE_APP) will contain an overhead for each line; however, an analysis of any large time gaps within this section (or elsewhere in the trace) should reveal where time is being consumed (when the report is not being traced). The difference in performance statistics ("Reports Time") between the TRACE_ALL and TRACE_PRF traces should give an indication of the overhead of the TRACE_ALL option.

11 Application Tier Operating System Statistics

If a significant amount of time has been spent in the application server, then Operating System Statistics should be provided for the application tier. See Operating System Statistics for more information on how to gather and present these using Oracle's OS Watcher and Lite Onboard Monitor (LTOM) tools. For Forms, the "Forms Process ID" can be obtained through Help in Oracle Forms (go to Help > About Oracle Applications, section "Forms Server"). If the value is not visible, then it may be necessary to set the profile option "FND: Diagnostics" to "Yes" and bounce the middle tier. For Self Service (OAF), the process can be obtained through the "About this Page" link at the bottom left hand side of the page.
Go to the "Java System Properties" tab and get the value for the CLIENT_PROCESSID. Note that the "About this Page" link is only available when profile FND_DIAGNOSTICS is set to YES.

12 PL/SQL Profiler
The PL/SQL Profiler can be used to analyze which steps in the server-side PL/SQL are consuming the time. The output is an html report with summary and detail sections, listing the program units and lines with the time consumed. There are three ways of producing a PL/SQL Profiler report in Oracle E-Business Suite, depending on the module being profiled.
See here for more information on how to use these methods.

12.1 Enable PL/SQL Profiler manually in PL/SQL
PL/SQL Profiler data can be collected manually by enabling and disabling the profiler directly in PL/SQL. This can be useful if testing directly from SQL*Plus / SQL Developer (i.e. outside of E-Business Suite). It requires updating the PL/SQL code and re-compiling. Care is also needed to ensure that the commands to enable and disable are placed at the correct points (where they will definitely be executed and capture all the necessary actions). Enable the PL/SQL Profiler using dbms_profiler.start_profiler (passing an identifying run_comment) and disable it using dbms_profiler.stop_profiler. Note that profiler data is stored in memory and is flushed to the profiler tables when the profiler is stopped. It can be advisable to call dbms_profiler.flush_data at intervals to flush data to the tables and free up memory. See the following for more information:
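As a sketch, the enable/flush/disable steps described above look like the following when wrapped around the code to be profiled (the run_comment value is simply a label of your own choosing):

```sql
DECLARE
  l_run_number BINARY_INTEGER;
BEGIN
  -- run_comment is a free-text label used to identify this run later
  DBMS_PROFILER.start_profiler(run_comment => 'order import test run',
                               run_number  => l_run_number);

  -- ... execute the PL/SQL to be profiled here ...

  -- optionally flush collected data to the profiler tables mid-run
  DBMS_PROFILER.flush_data;

  -- stopping the profiler also flushes remaining data to the tables
  DBMS_PROFILER.stop_profiler;
END;
/
```

The run_number returned by start_profiler identifies the run in the plsql_profiler tables when reporting.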
12.2 Other reporting options
Additionally, the script profiler.sql can be run as follows to list the top resource-consuming PL/SQL lines and the code lines around them: START profiler.sql This can be downloaded from My Oracle Support document "Script to produce HTML report with top consumers out of PL/SQL Profiler DBMS_PROFILER data (Document 243755.1)". Sample output can also be downloaded. The reporting scripts profsum.sql and profrep.sql are also available on older database versions.

12.3 Interpretation
The standard E-Business Suite "PL/SQL Profiler Report" contents are self-explanatory. It contains the total elapsed time for the run (run ID). If the run is part of a related run then the grand total for the related run is also reported. The "Execution Summary By Program Units" section reports the total elapsed time for each program unit (e.g. Package Body) where the total time is greater than 1% of the total. The "Execution Details For Program Units" sections give a breakdown of the total elapsed time and executions for each source code line within the program units. This is sorted by line number and not by total time. There is an example in My Oracle Support document "How to generate pl/sql profiler output for the concurrent program under EBS? (Document 808005.1)".

12.4 Technical detail
The PL/SQL Profiler implemented in E-Business Suite comprises Oracle Database functionality, augmented by E-Business Suite functionality. There is also a reporting script provided by Oracle Server Technologies Center of Expertise. Here is a description of the components:
E-Business Suite Forms, Self Service (OAF) and the Debug Rules/Options functionality available from Oracle Applications Manager and Concurrent Request submission all utilise the database functionality (the dbms_profiler package and plsql_profiler tables) to collect PL/SQL profiler data. See Oracle Database PL/SQL Packages and Types Reference for more information on the database tables and package.

12.5 Installing PL/SQL Profiler package
To install the PL/SQL Profiler tables, run the following as the APPS user: $ORACLE_HOME/rdbms/admin/proftab.sql Note that if the tables and sequence (in this file) are created in a different schema then synonyms will be needed in the APPS schema. To install the dbms_profiler package, run the following (as the user running the application, e.g. APPS): $ORACLE_HOME/rdbms/admin/profload.sql See My Oracle Support document "Using DBMS_PROFILER (Document 97270.1)".

12.6 Common issues
If the PL/SQL Profiler report(s) do not return any detailed information then it could be because the PL/SQL is compiled in NATIVE mode. See My Oracle Support document "DBMS_PROFILER does not Return Detailed Information (Document 602711.1)" and Oracle Database PL/SQL Packages and Types Reference for more information. Note that if the PL/SQL profiler objects (installed with proftab.sql and profload.sql) are created in a different schema than APPS, but are accessed by other database users, then it will be necessary to add privileges (grant SELECT/INSERT/UPDATE) and synonyms for the tables and sequences. See Oracle Database PL/SQL Packages and Types Reference for more information.

12.7 PL/SQL Hierarchical Profiler (HPROF)
The PL/SQL hierarchical profiler is available from 11g Release 1 onwards. It has a better hierarchical breakdown of the time spent in each program unit, showing how much time was consumed in the actual program unit (Function), the number of calls and how much time was consumed in its sub-programs (Descendants).
Time spent in SQL and PL/SQL is accounted for separately. However, it does not show the time spent at line level and it is not integrated into Oracle E-Business Suite functionality like the PL/SQL Profiler. It does not need to be inserted into source code, but it can only be enabled for the current session. So for Oracle E-Business Suite the only way it can be enabled (without modifying code) is to use the profile option "Initialization SQL Statement - Custom" (internal name FND_INIT_SQL), adding a PL/SQL block that calls dbms_hprof.start_profiling. The results are stored in a trace file, but can be reported in html output or written to database tables for further analysis. See here for instructions on how to obtain and report PL/SQL Hierarchical Profiler information. For more information see:
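As a sketch, the kind of block supplied to start the hierarchical profiler could look like the following. The directory object name and file name here are illustrative assumptions; the directory object must already exist and be writable by the database. Only the start_profiling block would go in the profile option; stop_profiling is shown for completeness when testing a session manually:

```sql
-- Start the hierarchical profiler for the current session.
-- PLSHPROF_DIR is an assumed directory object; test.trc an assumed file name.
BEGIN
  DBMS_HPROF.start_profiling(location => 'PLSHPROF_DIR',
                             filename => 'test.trc');
END;
/

-- ... execute the PL/SQL to be profiled ...

-- Stop profiling and close the trace file.
BEGIN
  DBMS_HPROF.stop_profiling;
END;
/
```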
13 SQLHC
When SQLs with possible sub-optimal execution plans have been identified, use SQLHC (sqlhc.sql).
The html reports are as follows:
If the SQL was executed when "statistics_level" was set to ALL (or "_rowsource_execution_statistics" was set to TRUE) then the current execution plans will show the actual row source statistics from memory (if still available). So for maximum benefit it should be run soon after the SQL has executed. Note that SQLHC may report misleading CBO statistics and other metadata (i.e. values current at the time the report is run, which may differ from those at the time the program/SQL was run). See more. The section "How to interpret the output? > 4. Understanding sqltxtract output or sqlhc output" in My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)" has some useful examples of how information from SQLT or SQLHC can be used to resolve issues.

13.1 Latest Version
The SQLHC tool will be updated from time to time; always make sure that the latest version is installed. The latest version can be downloaded from My Oracle Support document "SQL Tuning Health-Check Script (SQLHC) (Document 1366133.1)". See My Oracle Support document "SQLT Changes (Document 1614201.1)" for information on the changes in each version.

13.2 More Information
The main source of information is My Oracle Support document "SQL Tuning Health-Check Script (SQLHC) (Document 1366133.1)". The SQLHC script (zip file) can be downloaded from this document. It also contains an excellent presentation "How_to_improve_SQL_Performance_with_new_SQL_Health_Check_Tool.pptx" and example output. The following My Oracle Support documents are also useful:
13.3 Running
See the "Instructions" section of My Oracle Support document "SQL Tuning Health-Check Script (SQLHC) (Document 1366133.1)" for more information. The sqlhc.sql script requires two parameters:
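A typical invocation (per Document 1366133.1) from SQL*Plus, connected as a DBA user, passes the Oracle Pack license level and the SQL_ID; the SQL_ID below is an illustrative placeholder:

```sql
-- Parameter 1: Oracle Pack license level (T = Tuning, D = Diagnostics, N = None)
-- Parameter 2: the SQL_ID of the statement to check (illustrative value)
@sqlhc.sql T f995z9antmhxn
```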
13.4 Running Issues - Plan Table
SQLHC requires the plan table. On most databases, the plan table is pre-created centrally and referenced by a public synonym, using the catplan.sql script from $ORACLE_HOME/rdbms/admin. If it is not there or not accessible then check the installation logs for issues.

14 SQLT with XTRACT Method
This provides an html report (sqlt_snnnnn_main.html), which contains most of the data required to identify the reasons why the CBO chose the execution plan.
Note that SQLT requires its own source code and objects on the database, so it needs to be installed. See My Oracle Support document "All About the SQLT Diagnostic Tool (Document 215187.1)" for more information. The section "How to interpret the output? > 4. Understanding sqltxtract output or sqlhc output" in My Oracle Support document "Oracle E-Business Suite SQL Tuning Tips for Customers (Document 2098522.1)" has some useful examples of how information from SQLT or SQLHC can be used to resolve issues. It has three main methods (XTRACT, XPLAIN and XECUTE).
The XTRACT method is recommended because it gives the runtime execution plan (provided the runtime execution plan is available in memory or AWR). If the runtime execution plan has been obtained from another source (e.g. SQL Trace, Display Cursor or SQL Monitor) then XPLAIN will suffice.

14.1 Methods
There are three main methods:

XTRACT
This does not actually execute the SQL, so a script is not needed and time is saved. It requires the SQL ID or SQL Hash Value. It obtains the runtime execution plan from memory or AWR and row source statistics (if available) from memory. It is RAC aware. If the SQL was executed when "statistics_level" was set to ALL (or "_rowsource_execution_statistics" was set to TRUE) then it will contain actual row source statistics on the execution plan. It also uses SQL Tuning Advisor and Test Case Builder. If the SQL is no longer in memory or AWR then this method errors. This is the preferred option. However, for maximum benefit it should be run soon after the SQL has executed.

XPLAIN
This requires a file containing the SQL. The application context may also need to be set. It may report a different execution plan than the one used at runtime. It is based on the EXPLAIN PLAN FOR statement, so it does not do bind peeking. It has less information than XTRACT or XECUTE. However, if the runtime execution plan has already been obtained from another source (e.g. SQL Trace, Display Cursor or SQL Monitor) and the SQL/execution plan is no longer in memory or AWR then this is the preferred option. The majority of SQLs in E-Business Suite use either Multi-org views (11.5.10) or Synonyms (R12). These have database policies and contexts applied (virtual private database). In addition there are multi-language views/tables, and HR uses security groups. These views and policies rely on the application context (e.g. the user, responsibility, application and security group), which adds additional predicates to the SQL. Without the application context being set, the execution plan could be inaccurate.

XECUTE
This requires a SQL script containing the SQL to be executed, with the application context set and bind variables declared and populated. Although this provides all the information, bear in mind that it may use a different execution plan than the one used at runtime. It may be difficult to provide a SQL script that reproduces the conditions (data state, sequence of actions etc) of the program. The data may not be in a similar state to when the SQL was originally executed, including any temporary (e.g. Global Temporary Tables) and transitory tables (e.g. interface tables). So, it is preferable to run the program again and use the XTRACT method. The majority of SQLs in E-Business Suite use either Multi-org views (11.5.10) or Synonyms (R12). These have database policies and contexts applied (virtual private database). In addition there are multi-language views/tables, and HR uses security groups. These views and policies rely on the application context (e.g. the user, responsibility, application and security group) to determine the data returned by views or synonyms. Without the application context being set, these views/synonyms will return no rows.

Preference
So the preference is:
14.2 Row Source Statistics / Statistics_Level = ALL
SQLT (with the XTRACT or XECUTE method) and SQLHC will also report the row source statistics if they are available. Row source statistics are the actual (not estimated) statistics (starts, number of rows, elapsed time, physical reads, buffer gets, memory used (in PGA)) for each execution plan line. They are available if the "statistics_level" initialization parameter is set to ALL (or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed. If testing from SQL*Plus then the hint /*+ gather_plan_statistics */ can be used to collect row source statistics for a specific SQL (even if "statistics_level" is not ALL and "_rowsource_execution_statistics" is FALSE). This could also be used in the SQL script if using the XECUTE method.

14.3 Use the XTRACT method as soon as possible after a SQL statement has executed
This also applies to SQLHC. There are three reasons for this:
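The gather_plan_statistics approach mentioned in section 14.2 can be sketched from SQL*Plus as follows (the query itself is illustrative; any statement can be used):

```sql
-- Capture row source statistics for this statement only
SELECT /*+ gather_plan_statistics */ COUNT(*) FROM dual;

-- Display the last cursor's plan with actual statistics (A-Rows, A-Time, Buffers)
SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```

Passing NULL for the sql_id and child number makes DBMS_XPLAN report on the last statement executed by the session; a specific sql_id can be passed instead.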
14.4 Latest Version
The SQLT tool will be updated from time to time. Always make sure that the latest version (for the database version) is installed. The latest versions can be downloaded from My Oracle Support document "All About the SQLT Diagnostic Tool (Document 215187.1)". There are different versions of SQLT for each database version (from 9.2 through to 12c). Always download and install the version specific to the database version being used. See My Oracle Support document "SQLT Changes (Document 1614201.1)" for information on the changes in each version.

14.5 More Information
See My Oracle Support document "All About the SQLT Diagnostic Tool (Document 215187.1)". This document contains:
The installation zip files contain useful information in the following files:
14.6 Installing and Running
See the sqlt_instructions.html file that is delivered with the install zip for instructions on installing and running SQLT. These are also in My Oracle Support document "SQLT Usage Instructions (Document 1614107.1)". For the XTRACT method the SQL ID or Hash Value is required. For the XECUTE method a text file containing the SQL, application context and declared/initialized bind variables (if required) will be needed. For the XPLAIN method a text file containing the SQL text is required. If the SQL contains bind variables then they can be left in the SQL or replaced with literals of the same datatype. However, remember to use TO_DATE conversions on literals for date columns. Be aware that XPLAIN is blind to bind variable values used in the SQL statement.

14.7 Installation or Running Issues
General
Most errors that occur are due to incorrect installation. Obtain the latest version from My Oracle Support document "All About the SQLT Diagnostic Tool (Document 215187.1)". Refer to the instructions in sqlt_instructions.html (also in My Oracle Support document "SQLT Usage Instructions (Document 1614107.1)"). Check the install log files (*.log), and re-install exactly as indicated in the instructions, using default names for the directories. See My Oracle Support document "Troubleshooting SQLT Issues (Document 1521607.1)" for links to some useful My Oracle Support Community (MOSC) discussions on SQLT issues.
Issues writing to files
By default, SQLT will pick up the stage directory location from the database parameter "user_dump_dest". In database version 11.2 the install script can fail with errors (ORA-29283: invalid file operation, ORA-20102: SQLT$STAGE: not capable of reading or writing) because the directory location in initialization parameter "user_dump_dest" is a symbolically linked directory.
See My Oracle Support document "Installing SQLT fails with ORA-29283: invalid file operation (Document 1362270.1)" for more information and a solution.

14.8 Obtaining SQL ID and Hash Value
The XTRACT method requires the SQL ID or the SQL hash value (not to be confused with the plan hash value). See here for more information on obtaining the SQL ID. For 10g and before, if only the SQL Trace/TKPROF is available then the hash value can be obtained from the SQL Trace (in the "hv=" token in the PARSING IN CURSOR line above the SQL text). On 11g the TKPROF and SQL Trace also contain the plan hash (identified by the plh= token in the SQL Trace). This is not to be confused with the SQL hash value.

14.9 Improving Performance of SQLT
Note that SQLT runs AWR and ASH reports. Some dictionary objects (particularly WRH$_LATCH_CHILDREN) will have grown significantly during the upgrade. So, fixed object and dictionary statistics should be gathered before running SQLT, especially if statistics_level is set to ALL (it is normally set to TYPICAL or BASIC). SQLT can take quite a while to run. To reduce the workload, it is recommended that the following is run (from SQL*Plus) before running sqltxtract.sql: To disable Test Case Builder (TCB) and/or SQL Tuning Advisor:
To disable the automatic export of a test case repository:
If SQLT still takes a long time, and the schema objects used by the SQL contain a large number of sub-partitions, the granularity of the data collected can be reduced as follows:
Note that these commands can all be run as APPS; they do not need to be run as user SQLTXPLAIN. The values are stored in the table SQLTXPLAIN.SQLI$_PARAMETER; once set, they do not need to be re-set for each execution of SQLT, and the current values can be checked by querying that table. To reduce the time further, the counting of rows on tables can be disabled by running the following. However, information on the actual number of rows in each table will be lost.
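For reference, the commands referred to in this section take the following form in recent SQLT versions. The parameter names below are as documented in Document 215187.1; verify them against the installed SQLT version before use:

```sql
-- Disable SQL Tuning Advisor and Test Case Builder
EXEC SQLTXPLAIN.SQLT$A.SET_PARAM('sql_tuning_advisor', 'N');
EXEC SQLTXPLAIN.SQLT$A.SET_PARAM('test_case_builder', 'N');

-- Reduce the granularity of data collected for (sub)partitioned objects
EXEC SQLTXPLAIN.SQLT$A.SET_PARAM('c_gran_segm', 'PARTITION');
EXEC SQLTXPLAIN.SQLT$A.SET_PARAM('c_gran_cols', 'PARTITION');
EXEC SQLTXPLAIN.SQLT$A.SET_PARAM('c_gran_hgrm', 'PARTITION');

-- Disable counting of rows on application tables
EXEC SQLTXPLAIN.SQLT$A.SET_PARAM('count_star_threshold', '0');
```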
All of this assumes that a SQLT version greater than 1.4.4.4 (April 2, 2012) is being used.

14.10 Misleading CBO Statistics on SQLT and SQLHC
Note that SQLT and SQLHC report CBO statistics and other metadata as at the time they are run, not the point at which the program/SQL was run. It is possible that these could have changed since the program (with the performance issue) was run (especially the CBO statistics).
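One way to sanity-check this is to compare the statistics-gathering times with the time the program ran; a sketch (the schema name is an illustrative assumption):

```sql
-- Compare when statistics were last gathered against the time the program ran
SELECT owner, table_name, num_rows, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'AR'           -- illustrative schema
ORDER BY last_analyzed DESC;
```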
15 Operating System Statistics
On linux/unix environments there are a number of individual operating system commands that can be used to monitor overall resource usage, or the resource usage for individual processes (e.g. "sar", "ps", "top", "ifconfig", "mpstat", "iostat", "netstat", "vmstat" etc.). However, the OS Watcher and LTOM tools provide automated ways of running some of these commands and then collating and presenting the results.

15.1 OS Watcher
Oracle's OS Watcher tool (oswbb) runs shell scripts that run the operating system tools (ps, top, ifconfig, mpstat, iostat, netstat, traceroute, vmstat, meminfo (Linux only), slabinfo (Linux only)) to collect CPU, memory, I/O and network usage and collate the results. OSWatcher Analyzer (oswbba) presents these results graphically. oswbb and oswbba require installation; however, this is relatively simple and there is little overhead. By default the diagnostics are only retained for 48 hours, but this can be configured. Note that it is a unix/linux based utility. For more information see My Oracle Support documents:
15.2 LTOM
The Lite Onboard Monitor (LTOM) is a java program that runs on unix/linux (e.g. Solaris, Linux, HP-UX, AIX, Tru64). The LTOM System Profiler uses information from operating system commands (such as "top", "vmstat" and "iostat") and Oracle database views (v$session, v$process, v$session_wait, v$system_event and v$system_statistics) to profile the system usage. LTOM also includes:
See the My Oracle Support documents
Diagnosing High CPU Usage in Modules
My Oracle Support document "How to Diagnose High CPU Usage Problems to the Module Level (Document 352648.1)" gives some tools and techniques for investigating processes that consume a large amount of CPU where there is no specific activity (e.g. a very high number of logical reads) or wait event that explains this.

16 Locking
Sometimes a problem that appears to be a performance issue could actually be due to locking, particularly if high occurrences of the following wait events are seen in the AWR or SQL Trace/TKPROF:
Be aware that the wait event "PL/SQL lock timer" is not caused by locks between sessions but by the USER_LOCK.SLEEP or DBMS_LOCK.SLEEP procedures in application code. The best explanation of the Oracle Database locking mechanism is in the Oracle Database Concepts Guide, Chapter 9 - Data Concurrency and Consistency, Overview of the Oracle Database Locking Mechanism. This also covers latches, mutexes and internal locks. My Oracle Support document "FAQ: Detecting and Resolving Locking Conflicts and Ora-00060 errors (Document 15476.1)" also gives a good explanation of how Oracle handles locking. There are three main types of lock that are likely to be encountered: TX, TM and UL. Here is a brief explanation of all three.

16.1 TX (Row Locks)
A TX enqueue is created as soon as a transaction is started. It is uniquely identified by the rollback segment number, the slot number (in the rollback segment's transaction table) and the slot number's sequence number. The wait event for this lock is normally "enq: TX - row lock contention". A session carrying out DML on a row (or SELECT ... FOR UPDATE) can be waiting on a TX for the following main reasons:
My Oracle Support document "Waits for 'Enq: TX - ...' Type Events - Transaction (TX) Lock Example Scenarios (Document 62354.1)" explains some scenarios for TX locks, including:
16.2 TM (Table Locks)
This lock is acquired by a transaction when using DML (or SELECT ... FOR UPDATE) or by the LOCK TABLE statement. DML operations require table locks to reserve DML access to the table on behalf of a transaction and to prevent DDL operations that would conflict with the transaction. The wait for this lock is "enq: TM - contention".

16.3 UL (User Lock)
These are user-specified locks. They can be requested and released using the DBMS_LOCK package procedures REQUEST and RELEASE. The wait for this lock is "enq: UL - contention".

16.4 Identifying where locks are being held
The following will identify the locks that have been held for the longest time, along with the blocking/blocked sessions:

SELECT lr.sid blocker_sid,
       ld.sid blocked_sid,
       lr.type,  -- typically TM, TX or UL; otherwise it is a system lock type
       lt.name,
       lr.id1,   -- the value contained varies: for TM it is the object_id, for TX the rollback entry
       lr.id2,   -- the value contained varies: for TX it is the transaction entry
       decode(lr.lmode, 0,'none(0)', 1,'null(NULL)(1)', 2,'row-S(SS)(2)', 3,'row-X(SX)(3)',
                        4,'Share(S)(4)', 5,'S/Row-X(SSX)(5)', 6,'exclusive(X)(6)',
                        LTRIM(TO_CHAR(lr.lmode,'990'))) lock_hold_mode,
       decode(lr.request, 0,'none(0)', 1,'null(NULL)(1)', 2,'row-S(SS)(2)', 3,'row-X(SX)(3)',
                          4,'Share(S)(4)', 5,'S/Row-X(SSX)(5)', 6,'exclusive(X)(6)',
                          LTRIM(TO_CHAR(lr.request,'990'))) lock_request_mode,
       lr.ctime time_blocker_held,
       ld.ctime time_blocked_waiting
FROM   v$lock lr, v$lock ld, v$lock_type lt
WHERE  lt.type (+) = lr.type
AND    ld.id1 = lr.id1   -- rollback entries match
AND    ld.id2 = lr.id2   -- transaction entries match
AND    lr.block = 1      -- blocker
AND    ld.block = 0      -- blocked
ORDER BY lr.ctime DESC;

The following will give the chains of locks, with the longest first. The key blocking sessions should be identifiable from this.
Note that if there are blocker sessions blocking multiple blocked sessions then parts of the chains will be reported multiple times (there will be one row for each combination of chain start and end).

SELECT *
FROM   (SELECT level chainlevel,
               blocked_sid,
               CONNECT_BY_ROOT blocker_sid ultimate_blocker_sid,
               sys_connect_by_path(blocker_sid,'/') || '/' || blocked_sid blocking_path,
               time_blocked_waiting
        FROM   (SELECT lr.sid blocker_sid, ld.sid blocked_sid, lr.id1, lr.id2,
                       ld.ctime time_blocked_waiting
                FROM   v$lock lr, v$lock ld
                WHERE  ld.id1 = lr.id1
                AND    ld.id2 = lr.id2
                AND    lr.block = 1
                AND    ld.block = 0) linked_locks
        CONNECT BY NOCYCLE PRIOR blocked_sid = blocker_sid) chains
-- blocked end of chain is not a blocker session
WHERE NOT EXISTS (SELECT 'exists'
                  FROM   v$lock lr, v$lock ld
                  WHERE  ld.id1 = lr.id1
                  AND    ld.id2 = lr.id2
                  AND    lr.block = 1
                  AND    ld.block = 0
                  AND    lr.sid = chains.blocked_sid)
-- blocker end of chain is not a blocked session
AND NOT EXISTS (SELECT 'exists'
                FROM   v$lock lr, v$lock ld
                WHERE  ld.id1 = lr.id1
                AND    ld.id2 = lr.id2
                AND    lr.block = 1
                AND    ld.block = 0
                AND    ld.sid = chains.ultimate_blocker_sid)
ORDER BY chainlevel DESC;

The following will report any cycles/deadlocks (although automatic deadlock detection should prevent these from occurring):
SELECT *
FROM   (SELECT DECODE(CONNECT_BY_ISCYCLE,1,'YES','NO') is_a_cycle,
               level,
               blocked_sid,
               CONNECT_BY_ROOT blocker_sid ultimate_blocker_sid,
               sys_connect_by_path(blocker_sid,'/') || '/' || blocked_sid blocking_path,
               time_blocked_waiting
        FROM   (SELECT lr.sid blocker_sid, ld.sid blocked_sid, lr.id1, lr.id2,
                       ld.ctime time_blocked_waiting
                FROM   v$lock lr, v$lock ld
                WHERE  ld.id1 = lr.id1
                AND    ld.id2 = lr.id2
                AND    lr.block = 1
                AND    ld.block = 0) linked_locks
        CONNECT BY NOCYCLE PRIOR blocked_sid = blocker_sid)
WHERE is_a_cycle = 'YES';

To get more information for any session ids (sid), with the longest running locks first:

SELECT l.sid,
       l.block,
       DECODE(l.block,1,'Blocker','Blocked'),
       s.blocking_session,
       l.type, lt.name, l.id1, l.id2,
       decode(l.lmode, 0,'none(0)', 1,'null(NULL)(1)', 2,'row-S(SS)(2)', 3,'row-X(SX)(3)',
                       4,'Share(S)(4)', 5,'S/Row-X(SSX)(5)', 6,'exclusive(X)(6)',
                       LTRIM(TO_CHAR(l.lmode,'990'))) lock_hold_mode,
       decode(l.request, 0,'none(0)', 1,'null(NULL)(1)', 2,'row-S(SS)(2)', 3,'row-X(SX)(3)',
                         4,'Share(S)(4)', 5,'S/Row-X(SSX)(5)', 6,'exclusive(X)(6)',
                         LTRIM(TO_CHAR(l.request,'990'))) lock_request_mode,
       l.ctime time_held_s,
       s.seconds_in_wait, s.wait_class, s.event,
       s.p1text, s.p1, s.p1raw, s.p2text, s.p2, s.p2raw, s.p3text, s.p3, s.p3raw,
       s.serial#, s.program, s.client_identifier, s.terminal,
       s.command, ct.command_name, s.service_name, s.module, s.action,
       s.username, s.machine,
       DECODE(l.type, 'TM', o.object_name) object_name,
       s.sql_id,
       st.sql_text,  -- just the first 64 characters of the SQL text
       st.hash_value,
       s.row_wait_obj#, s.row_wait_file#, s.row_wait_block#, s.row_wait_row#
FROM   v$session s, v$lock l, v$sqltext st, v$lock_type lt, v$sqlcommand ct, dba_objects o
WHERE  s.sid (+) = l.sid
AND    (   (l.request != 0)  -- either requesting a lock
        OR (l.lmode != 0     -- or holding a lock
            AND EXISTS (SELECT 'exists'
                        FROM   v$lock ld
                        WHERE  ld.request != 0
                        AND    ld.id1 = l.id1
                        AND    ld.id2 = l.id2)  -- and blocking a blocked session
           )
       )
AND    st.sql_id (+) = s.sql_id
AND    st.piece (+) = 0
AND    o.object_id (+) = l.id1
AND    lt.type (+) = l.type
AND    ct.command_type (+) = s.command
-- AND l.sid =

The object and rowid for a particular blocked session can be determined by using the following SQL. This is primarily for row locks (TX), but it may also show some relevant information for TM locks, depending on the mode in which the table was locked and the activity taking place on the locked session.

SELECT o.object_name,
       s.row_wait_obj#, s.row_wait_file#, s.row_wait_block#, s.row_wait_row#,
       dbms_rowid.rowid_create(0, s.row_wait_obj#, s.row_wait_file#,
                               s.row_wait_block#, s.row_wait_row#) restricted_rowid,
       dbms_rowid.rowid_create(1, s.row_wait_obj#, s.row_wait_file#,
                               s.row_wait_block#, s.row_wait_row#) extended_rowid
FROM   v$session s, dba_objects o
WHERE  s.row_wait_obj# = o.object_id (+)
AND    s.sid =

The rowid could be used to query the contents of the locked row on the object (table). The following SQL can be run for particular sql ids to get more SQL text:

SELECT sql_text  -- first 1000 characters
FROM   v$sql
WHERE  sql_id =

or:

SELECT sql_text
FROM   v$sqltext
WHERE  sql_id =

16.5 Identifying where locks were held
The information in the views v$lock and v$session is for current sessions only. However, it is possible that the Active Session History (ASH) may have captured information on locks held for more than a few seconds. The following query will give information on all sessions from the Active Session History (on DBA_HIST_ACTIVE_SESS_HISTORY) that were blocked due to a row, table or user lock. A similar query using view V$ACTIVE_SESSION_HISTORY could be used to check the very recent history (not yet flushed to DBA_HIST_ACTIVE_SESS_HISTORY).
SELECT s.session_id,
       s.blocking_session,
       s.session_serial#,
       s.sql_id,
       s.wait_class, s.event,
       s.p1text, s.p1, s.p2text, s.p2, s.p3text, s.p3,
       o.object_type, o.object_name,
       s.current_obj#, s.current_file#, s.current_block#, s.current_row#,
       s.program, s.module, s.action, s.client_id, s.machine,
       COUNT(*)*10 approx_wait_secs,  -- note that if 10 seconds is reported then it could be a lot less
       MIN(s.sample_time) start_sample_time,
       MAX(s.sample_time) end_sample_time
FROM   dba_hist_active_sess_history s, dba_hist_seg_stat_obj o
WHERE  s.dbid =

The following query will give information on chains of locks (due to table, row or user locks) that have occurred for more than a few seconds, with the longest chains first:

SELECT level,
       sample_time,
       session_id blocked_sid,
       CONNECT_BY_ROOT blocking_session ultimate_blocker_sid,
       sys_connect_by_path(blocking_session,'/') || '/' || session_id blocking_path
FROM (-- Blocked sessions
      SELECT s.session_id, s.blocking_session, s.sample_time
      FROM   dba_hist_active_sess_history s
      WHERE  s.dbid =

16.6 "PL/SQL lock timer" waits
In this case there is no lock or interaction between sessions. A session simply sleeps for a pre-determined length of time specified in a call to the USER_LOCK.SLEEP or DBMS_LOCK.SLEEP procedures. However, it can appear that a session is locked or that there is a performance issue.
The details of sessions with "PL/SQL lock timer" waits can be queried using the following SQL:

SELECT s.sid,
       s.seconds_in_wait,
       s.wait_class, s.event,
       s.p1text, s.p1, s.p1raw,
       s.serial#, s.program, s.client_identifier, s.terminal,
       s.command, ct.command_name, s.service_name, s.module, s.action,
       s.username, s.machine,
       s.sql_id,
       st.sql_text,  -- just the first 64 characters of the SQL text
       st.hash_value
FROM   v$session s, v$sqltext st, v$sqlcommand ct
WHERE  st.sql_id (+) = s.sql_id
AND    st.piece (+) = 0
AND    ct.command_type (+) = s.command
AND    s.event = 'PL/SQL lock timer'
ORDER BY s.seconds_in_wait DESC;

16.7 Getting the mode for a TX lock from SQL Trace or Active Session History (ASH)
If a TX lock is encountered in a trace or Active Session History (ASH) then it is possible to determine the mode of the TX lock as follows. Convert the p1 value (e.g. 1415053318) to hex (e.g. 54580006). This can be done using:

SELECT TRIM(TO_CHAR(1415053318,'XXXXXXXXX')) FROM dual;

Note that the column V$SESSION.p1raw will contain the hex value anyway. The first four digits give the lock type: hex 54 = T in ASCII, 58 = X. The last four digits give the lock mode (which is also available from V$LOCK for the holding session). See My Oracle Support document "How to Determine The Lock Type and Mode from an Enqueue Wait (Document 413934.1)".

17 Action Plan Templates
17.1 Display Cursor Report
Please produce a Display Cursor Report for the SQL ID in question. For the Display Cursor Report to be useful (i.e. contain actual row source statistics as well as the runtime execution plan), the "statistics_level" initialization parameter must be set to ALL (or "_rowsource_execution_statistics" set to TRUE) when the SQL is executed.
The report can be produced by running:

SET pages 0
SET lines 300
SET LONG 10000
SET LONGCHUNKSIZE 10000
SPOOL

See here for how to obtain the SQL ID.

17.2 SQL Monitor Report
Please produce a SQL Monitor report for the SQL ID in question. The SQL Monitor report can only be used if:
Note that the SQL Monitor report will only show the Active Period, CPU Activity and Wait Activity for execution plan lines if the "statistics_level" initialization parameter is set to ALL ( or "_rowsource_execution_statistics" is set to TRUE) at the time the SQL is executed. The report can be produced by running the following. The HTML format is preferred: set trimspool on set trim on set pages 0 set long 10000000 set longchunksize 10000000 set linesize 200 set termout off spool sql_monitor_for_ SQL Monitor reports are also available from Oracle Enterprise Manager. See here for how to obtain the SQL ID. 17.3 Reports Client Process TracePlease provide a Reports Client Process Trace. For summary reports use the "Log all possible trace information" (TRACE_ALL) option. For detail reports use the "Log report performance statistics" (TRACE_PRF) option. If this indicates that the issue is in "Reports Time" or "UPI" time then run the report again with the "Log all possible trace information" (TRACE_ALL) option. Add Options to the Concurrent Program DefinitionThis method is available in both 11i and R12. To produce a Reports Client Process Trace at Concurrent Program Level: 1. Log in to the environment that will be traced. 2. Choose the relevant responsibility for defining concurrent programs. 3. Navigate to the Concurrent > Program > Define window. 4. Query for the concurrent program to be traced. 5. Turn on Reports Trace for the concurrent program by entering TRACEOPTS=TRACE_ALL and TRACEFILE= Options must contain TRACEFILE along with TRACEOPTS, otherwise the request will error with "REP-0715: Trace file must be specified when tracing." 6. Navigate to the Submit Request window (e.g. Request > Run). 7. Submit a concurrent request for the concurrent program. 8. Make a note of the request_id of the concurrent request. 9. Check the progress of the concurrent request in the requests window. 10. 
When the request has completed, navigate to the Concurrent > Program > Define window and disable the Reports Trace by clearing the Options field.
11. Obtain the trace file from the location specified in TRACEFILE above.

Other values for TRACEOPTS are available. See here for a full list. The TRACEMODE parameter can also be used to specify whether trace information is appended to the file (TRACE_APPEND) or overwrites it (TRACE_REPLACE).

Specify Debug Options when submitting a Concurrent Request

This method is available in R12 and 11.5.10 (with latest ATG patch levels). To produce a Reports Client Process Trace at Concurrent Request Level:

1. Log in to the environment that will be traced.
2. Set the profile option "Concurrent: Allow Debugging" (internal name FND_CONC_ALLOW_DEBUG) user-level value to "Yes".
3. Navigate to the Submit Request window (e.g. Request > Run).
4. Select the Concurrent Request to be run.
5. Click on the "Debug Options" button.
6. Enable Reports Trace by checking the "Reports Trace" Debug Option and then selecting the Debug Option Value "Log all possible trace information". The Debug Option Value "Log report performance statistics" can also be used. See here.
7. Click the "OK" button to confirm the selection. A "Create Debug Rule : Debug Option" page is displayed. Click on the "OK" button again.
8. Submit the concurrent request.
9. Make a note of the request_id of the concurrent request.
10. Check the progress of the concurrent request in the requests window.
11. When the request has completed, the trace file name and location (full path) can be obtained from the concurrent program log file.

Other levels are available in the Debug Option Value. See here for a full list.

Using Debug Workbench in Oracle Applications Manager

In both 11.5.10 (with latest ATG patch levels) and R12 it is also possible to specify Debug Options in the Debug Workbench in Oracle Applications Manager by creating a Debug Rule.
Site Map > Diagnostics and Repair (tab) > Diagnostics (heading) > Debug Workbench (link).

Create a Debug Rule from the Debug Workbench, select the "Reports Trace" Debug Option and then select the Debug Option Value "Log all possible trace information". The Debug Option Value "Log report performance statistics" can also be used. See here.

The Debug Rule also specifies the context (either the specific User, Responsibility or Any (All)) and the number of repetitions or time span for which the debug/trace will be gathered.

By default the Debug Options are disabled. They can be enabled by setting the profile option "Concurrent: Allow Debugging" (internal name FND_CONC_ALLOW_DEBUG) user-level value to "Yes".

When the relevant actions are completed, remember to delete or end date the Debug Rule.

See the Oracle E-Business Suite Maintenance Guide (R12.2 and onwards) or System Administrator's Guide – Maintenance (prior to R12.2) for more information.

17.4 PL/SQL Profiler Report

Please provide a PL/SQL Profiler Report.

Obtain PL/SQL Profiler Report from Forms

To obtain a PL/SQL Profiler report for Oracle Forms:

1. Log in to the environment that will be traced.
2. Set the profile option "Utilities:Diagnostics" (internal name DIAGNOSTICS) user-level value to "Yes".
3. Choose the relevant responsibility for the form and navigate to the form that will be traced.
4. Enable profiling using the menu option Help > Diagnostics > Trace and checking the "PL/SQL Profiling" box.
5. Execute the required actions on the form.
6. When complete, disable profiling using the menu option Help > Diagnostics > Trace and unchecking the "PL/SQL Profiling" box.
7. If the "PL/SQL Profiler Report" concurrent request is not started automatically then do the following:
8. When the "PL/SQL Profiler Report" concurrent request is complete, click on the "View Output" button to see the PL/SQL Profiler report.

Obtain PL/SQL Profiler Report from Self Service

This is also sometimes referred to as OAF (Oracle Applications Framework), after the technology framework used in these applications. To obtain a PL/SQL Profiler report for Self Service (OAF) pages:

1. Log in to the environment that will be traced.
2. Set the profile option "FND: Diagnostics" (internal name FND_DIAGNOSTICS) user-level value to "Yes".
3. Choose the relevant responsibility for the HTML-based application.
4. To enable profiling, click the "Diagnostics" icon at the top right of the page > Set Trace Level > Go > Enable PL/SQL Profiler > Save.
5. Select "Home" and then execute the required actions.
6. When complete, disable profiling by clicking on the "Diagnostics" icon > Set Trace Level > Go > Disable PL/SQL Profiler > Save.
7. The "Trace Ids" will be shown on the left-hand side of the screen. These are relevant for SQL Traces, not the PL/SQL Profiler.
8. Exit the HTML application and log out.
9. If the "PL/SQL Profiler Report" concurrent request is not started automatically then do the following:
10. When the "PL/SQL Profiler Report" concurrent request is complete, click on the "View Output" button to see the PL/SQL Profiler report.

Obtain PL/SQL Profiler Report for a Concurrent Request

Note that this method is only available in R12 and 11.5.10 (with latest ATG patch levels). To obtain a PL/SQL Profiler report for a Concurrent Request:

1. Log in to the environment that will be traced.
2. Set the profile option "Concurrent: Allow Debugging" (internal name FND_CONC_ALLOW_DEBUG) user-level value to "Yes".
3. Navigate to the Submit Request window (e.g. Request > Run).
4. Select the Concurrent Request to be run.
5. Click on the "Debug Options" button.
6. Enable the profiler by checking the "PL/SQL Profiler" Debug Option. There is only one Debug Option Value, "PL/SQL Profiler", which is already selected.
7. Click the "OK" button to confirm the selection. A "Create Debug Rule : Debug Option" page is displayed. Click on the "OK" button again.
8. Submit the concurrent request.
9. Make a note of the request_id of the concurrent request.
10. Check the progress of the concurrent request in the requests window.
11. When the request has completed, the "PL/SQL Profiler Report" concurrent program will be submitted.
12. When the "PL/SQL Profiler Report" request has completed, select the "View Output" button (of the "PL/SQL Profiler Report") to view the output.

See My Oracle Support document "How to generate pl/sql profiler output for the concurrent program under EBS? (Document 808005.1)" for more information.

17.5 PL/SQL Hierarchical Profiler Report

Please provide a PL/SQL Hierarchical Profiler report. To enable the PL/SQL Hierarchical Profiler for an E-Business Suite session using the Profile Option method:

1. Log in to the environment that will be traced.
2. If the PL/SQL Hierarchical Profiler is being enabled for the current (logged in) user account then navigate to the Profile > Personal window, otherwise navigate to the Profile > System window.
3.
Query the profile "Initialization SQL Statement - Custom" (internal name FND_INIT_SQL).
4. Enter the following as the profile value and save:
BEGIN dbms_hprof.start_profiling('
Note: if there is existing SQL in the profile then the above should be merged with the existing SQL.
5. Log in (again) to the application using the user account being traced. This is necessary because the Initialization SQL Statement is only executed at login.
6. Execute the actions to be profiled.
7. Exit the application and log out.
8. Navigate to the profile form again, remove the SQL above (ensuring any original SQL remains) from the "Initialization SQL Statement - Custom" profile and save.
9. Obtain the raw output file from the file location specified in dbms_hprof.start_profiling above.
10. Generate PL/SQL Hierarchical Profiler HTML reports from the raw output file using:
plshprof -output
Where
11. This will create a set of HTML reports, which can be browsed starting from the root HTML file. Please provide all these files.

17.6 SQLHC

Please provide the output from SQLHC for
SQLHC should be run on the same environment on which the performance issue has been observed, and as soon after the observed program/process as possible.

17.7 SQLT

Please provide SQLT output using the XTRACT/XPLAIN/XECUTE method for

[XTRACT Method only, and runtime execution plan already known] If the SQL is no longer in memory (or AWR) then provide a SQLT with the XPLAIN method instead.

For installation and running instructions read the file sqlt_instructions.html (included in the SQLT installation zip file). SQLT should be run on the same environment on which the performance issue has been observed, and as soon after the observed program/process as possible.

[XTRACT Method only] The XTRACT method requires the SQL ID or the SQL hash value (not to be confused with the plan hash value).

[XTRACT Method only] See here for how to obtain the SQL ID.

[For 10g and before, if only the SQL Trace/TKPROF is available] The hash value can be obtained from the SQL Trace (in the "hv=" token in the PARSING IN CURSOR line above the SQL text).

[XTRACT Method and program rerun only] Set the "statistics_level" initialization parameter to ALL (or "_rowsource_execution_statistics" to TRUE) before re-running the program (which runs the SQL in question) and running SQLT with XTRACT.

[XECUTE Method only] Please use the attached file. This file contains the application context, bind variable declaration/initialization and the SQL text.

[XECUTE Method only] If the SQL contains context sensitive views/synonyms/policies (e.g. Multi-Language or Multi-Organization in Oracle E-Business Suite) then the application context (language, user, responsibility, security group) will also need to be set in the SQL script. The "Obtaining Traces (TKPROF) in E-Business Suite - From SQL*Plus" section in the My Oracle Support document "Oracle E-Business SQL Trace and TKPROF Guide (Document 1674024.1)" has useful instructions on doing this.
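The "hv=" extraction described above can be scripted rather than done by eye. The following is a minimal sketch (Python is used here purely for illustration) that pulls the hash value and, where present, the sql_id out of a PARSING IN CURSOR line; the sample trace line is fabricated, but its token layout follows the raw SQL trace format.

```python
import re

def parse_cursor_line(line):
    """Extract the hash value (hv=) and sql_id from a
    'PARSING IN CURSOR' line of a raw SQL trace file."""
    hv = re.search(r"\bhv=(\d+)", line)
    sqlid = re.search(r"sqlid='([^']+)'", line)
    return {
        "hash_value": int(hv.group(1)) if hv else None,
        "sql_id": sqlid.group(1) if sqlid else None,
    }

# Fabricated example line, for illustration only.
line = ("PARSING IN CURSOR #4573 len=52 dep=0 uid=44 oct=3 lid=44 "
        "tim=1448538566 hv=3164292706 ad='bd6b1d38' sqlid='1cn0b2yyxz2t2'")
print(parse_cursor_line(line))
```

Note that in older (10g and before) traces the sqlid token may be absent, in which case only the hash value is returned.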
[XPLAIN Method only] If the SQL contains context sensitive views/synonyms/policies (e.g. Multi-Language or Multi-Organization in Oracle E-Business Suite) then the application context (language, user, responsibility, security group) will also need to be set before running SQLT with the XPLAIN method. The "Obtaining Traces (TKPROF) in E-Business Suite - From SQL*Plus" section in the My Oracle Support document "Oracle E-Business SQL Trace and TKPROF Guide (Document 1674024.1)" has useful instructions on doing this.

[XPLAIN Method only] Please use the attached file containing the SQL text.

17.8 Obtaining SQL ID

The sql_id can be obtained from:
If querying v$sql, use the following query:

SELECT sql_id, hash_value, SUBSTR(sql_text, 1, 80)
FROM v$sql
WHERE sql_fulltext LIKE '%

Alternatively, the sql_text column can be used to search on the first 1000 characters. The UPPER function could also be used on the sql_fulltext or sql_text column if the exact case of some of the SQL is unknown.

See My Oracle Support document "How to Determine the SQL_ID for a SQL Statement (Document 1627387.1)".

Oracle Database Documentation Library Links

The documentation libraries are available here.

Concepts - 12c Release 1
Performance Tuning Guide - 12c Release 1
Real Application Clusters Administration and Deployment Guide - 12c Release 1
Database Advanced Queuing User's Guide - 12c Release 1
PL/SQL Packages and Types Reference - 12c Release 1
Advanced Application Developer's Guide - 12c Release 1 (Development Guide)
Licensing Information - 12c Release 1

Change Log
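For illustration, a completed form of the v$sql query above might look as follows. The literal XLA_AE_LINES is purely a placeholder, standing in for a distinctive fragment of the SQL statement being searched for:

```sql
-- The literal between the % wildcards is a placeholder: substitute a
-- distinctive fragment of the SQL statement being searched for.
SELECT sql_id, hash_value, SUBSTR(sql_text, 1, 80)
FROM   v$sql
WHERE  sql_fulltext LIKE '%XLA_AE_LINES%';
```

Bear in mind that sql_fulltext is a CLOB, so this scan can be relatively expensive on a busy system; a more distinctive search string reduces false matches.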