Higher 'direct path read' Waits in 11g when Compared to 10g (Doc ID 793845.1)

In this Document

  Symptoms
  Changes
  Cause
  Solution
  References

APPLIES TO:

Oracle Database - Enterprise Edition - Version 11.1.0.6 and later
Information in this document applies to any platform.

SYMPTOMS

  • An intermittent but noticeable increase in 'direct path read' waits is observed in 11g when compared to Oracle 10g. For example, the top 5 timed events:

    Top 5 Timed Foreground Events  
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  
                                                               Avg  
                                                              wait   % DB  
    Event                                 Waits     Time(s)   (ms)   time Wait Class  
    ------------------------------ ------------ ----------- ------ ------ ----------  
    DB CPU                                           13,916          42.1  
    direct path read                  1,637,344      13,359      8   40.4 User I/O  
    db file sequential read              47,132       1,111     24    3.4 User I/O  
    DFS lock handle                     301,278       1,028      3    3.1 Other  
    db file parallel read                14,724         554     38    1.7 User I/O 
    Typically these waits are higher at times when many serial full table scans of tables occur (a query to confirm the cumulative wait totals is sketched after this list).
  • Using Automatic Shared Memory Management (ASMM)
  • Database upgraded from 10g to 11g
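
The following query is not part of the original note; it is a minimal sketch that can be used to confirm the cumulative totals for these wait events since instance startup:

    -- Cumulative system-wide wait totals for the relevant I/O events
    SELECT event,
           total_waits,
           ROUND(time_waited_micro / 1e6)                               AS time_waited_s,
           ROUND(time_waited_micro / 1000 / NULLIF(total_waits, 0), 1) AS avg_wait_ms
    FROM   v$system_event
    WHERE  event IN ('direct path read', 'db file sequential read', 'db file scattered read');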

CHANGES

Database upgraded from 10g to 11g

CAUSE

In 10g, serial table scans for "large" tables go through the buffer cache (by default). 

In 11g, there has been a change in the rules that decide between using 'direct path reads' and reads through the buffer cache for serial (i.e. non-parallel) table scans. The decision is based on the size of the table, the size of the buffer cache and various other statistics. Because direct path reads are faster than scattered reads and have less impact on other processes (they avoid latches), they are more likely to be chosen for such scans in 11g and above.
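
One way to see which path a serial full table scan took is to compare session statistics before and after running the scan. The sketch below is illustrative only and is not taken from the note; if 'physical reads direct' increases roughly in line with 'physical reads', the scan bypassed the buffer cache:

    -- Run in the test session before and after the full table scan, then compare the values
    SELECT sn.name, st.value
    FROM   v$mystat st
           JOIN v$statname sn ON sn.statistic# = st.statistic#
    WHERE  sn.name IN ('physical reads', 'physical reads direct');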

The choice can vary over time for the same tables. For example, when using Automatic Shared Memory Management (ASMM) with the buffer cache low limit set low compared to the normal workload requirements (and usually just after startup), 11g might choose to do serial direct path read scans for large tables that do not fit in the SGA. When ASMM later increases the buffer cache due to increased demand, 11g might then change to reading these same large tables through the buffer cache.
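
As an illustration (not from the note itself), the current buffer cache size under ASMM can be compared with the size of a candidate table; the owner and table name below are hypothetical placeholders:

    -- Compare the current (ASMM-managed) buffer cache size with a table's segment size
    SELECT c.current_size / 1024 / 1024 AS buffer_cache_mb,
           s.bytes / 1024 / 1024        AS table_mb
    FROM   v$sga_dynamic_components c,
           dba_segments s
    WHERE  c.component    = 'DEFAULT buffer cache'
    AND    s.owner        = 'SCOTT'      -- hypothetical owner
    AND    s.segment_name = 'BIG_TABLE'  -- hypothetical table name
    AND    s.segment_type = 'TABLE';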

SOLUTION

If you feel the waits are too high and queries perform better when reads go through the buffer cache, you can note the buffer cache and shared pool requirements for a normal workload and set the low limits of the buffer cache and shared pool in the spfile/pfile close to these normal workload values, using the db_cache_size and shared_pool_size parameters. The danger with this approach is that you may end up with suboptimal component sizes in general.
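
A minimal sketch of such a change is shown below; the sizes are hypothetical and should be derived from the observed normal-workload values (for example from AWR or V$SGA_DYNAMIC_COMPONENTS):

    -- With ASMM (SGA_TARGET > 0), these parameters act as lower limits for the components;
    -- the values below are placeholders only
    ALTER SYSTEM SET db_cache_size    = 1200M SCOPE=SPFILE;
    ALTER SYSTEM SET shared_pool_size = 800M  SCOPE=SPFILE;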

Another option is to look at the performance of the queries waiting on the 'direct path read' event. Perhaps they can be tuned, or they may need to be executed in parallel. You can use ASH reports to identify the SQL that is waiting and then use the tuning advisors or manual methods to improve the performance of those queries.
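
For example, a quick illustrative way (not from the note) to list the SQL statements most often sampled waiting on 'direct path read' during the last hour of in-memory ASH data (use of ASH requires the Diagnostics Pack licence):

    -- Recent SQL most frequently sampled waiting on 'direct path read'
    SELECT sql_id, COUNT(*) AS samples
    FROM   v$active_session_history
    WHERE  event = 'direct path read'
    AND    sample_time > SYSDATE - 1/24
    GROUP  BY sql_id
    ORDER  BY samples DESC;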

REFERENCES

Document 745216.1 - Query Performance Degradation - Upgrade Related - Recommended Actions
