Statistics Package (STATSPACK)

7. Event Timings
-----------------
If timings are available, the Statspack report will order wait events by time
(in the Top-5 and background and foreground Wait Events sections).

If timed_statistics is false for the instance, but a subset of users or
programs set timed_statistics to true dynamically, the Statspack report
output may look inconsistent: some events have timings (those waited for
by the individual programs/users), while the remaining events do not.
The Top-5 section will also look unusual in this situation.

Optimally, timed_statistics should be set to true at the instance level for
ease of diagnosing performance problems.
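
For example, timed statistics can be enabled dynamically for the whole
instance (a minimal sketch, assuming the ALTER SYSTEM privilege; to make
the setting persist across instance restarts, also set the parameter in
the init.ora file or spfile):

SQL> alter system set timed_statistics = true;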

8. Managing and Sharing performance data
-----------------------------------------

8.1. Baselining performance data

It is possible to identify snapshot data worthy of keeping, which will not
be purged by the Statspack purge. This is called baselining. Once you have
determined which snap Ids or times of day most represent a particular
workload whose performance data you would like to keep, you can mark the
data representing those times as baselines. Baselined snapshots will not
be purged by the Statspack purge.

If you later decide you no longer want to keep previously baselined
snapshots, you can clear the baseline (clearing the baseline does not
remove the data, it just identifies the data as candidates for purging).

NOTE: Statspack baseline does not perform any consistency checks on the
snapshots requested to be baselined (e.g. it does not check whether
the specified baselines span an instance shutdown). Instead, the
baseline feature merely marks Snapshot rows as worthy of keeping,
while other data can be purged.

New procedures and functions have been added to the Statspack package to
make and clear baselines: MAKE_BASELINE, and CLEAR_BASELINE. Both of these
are able to accept varying parameters (e.g. snap Ids, or dates, etc), and
can be called either as a procedure, or as a function (the function returns
the number of rows operated on, whereas the procedure does not).

Snap Ids or Begin/End dates
---------------------------
The Statspack MAKE_BASELINE procedures and functions provide flexibility in
the manner baselines are made or cleared. These can take various input
parameters:

- Begin Snap Id and End Snap Id

A begin and end snap Id pair can be specified. In this case, you choose
either to baseline the range of snapshots between the begin and end
snapshot pair, or just the two snapshots. The default is to baseline
the entire range of snapshots.

- Begin Date and End Date

A begin and end date pair can be specified. All snapshots which fall in
the date range specified will be marked as baseline data.

Similarly to the MAKE_BASELINE procedures and functions, the CLEAR_BASELINE
procedures and functions accept the same arguments.

Procedure or Function
---------------------
It is possible to call either the MAKE_BASELINE procedure, or the
MAKE_BASELINE function. The only difference is the MAKE_BASELINE function
returns the number of snapshots baselined, whereas the MAKE_BASELINE
procedure does not.
Similarly, the CLEAR_BASELINE procedure performs the same task as the
CLEAR_BASELINE function, however the function returns the number of
baselined snapshots which were cleared (i.e. no longer identified as
baselines).

8.1.1. Input Parameters for the MAKE_BASELINE and CLEAR_BASELINE
procedure and function which accept Begin and End Snap Ids

This section describes the input parameters for the MAKE_BASELINE and
CLEAR_BASELINE procedure and function which accept Snap Ids. The input
parameters for both MAKE and CLEAR baseline are identical. The
procedures/functions will either baseline (or clear the baseline for) the
range of snapshots between the begin and end snap Ids identified (the
default), or if i_snap_range parameter is FALSE, will only operate on
the two snapshots specified.
If the function is called, it will return the number of snapshots
operated on.

                    Range of          Default
Parameter Name      Valid Values      Value   Meaning
------------------  ----------------- ------- -------------------------------
i_begin_snap        Any Valid Snap Id    -    SnapId to start the baseline at
i_end_snap          Any valid Snap Id    -    SnapId to end the baseline at
i_snap_range        TRUE/FALSE         TRUE   Should the range of snapshots
                                              between the begin and end snap
                                              be included?
i_dbid            | Any valid DBId/   Current Caters for RAC databases
i_instance_number | inst number       DBId/   where you may wish to baseline
                    combination       Inst #  snapshots on one instance
                    in this                   which were physically taken
                    Statspack                 on another instance
                    schema

Example 1:
To make a baseline of snaps 45 through 50, including the range of
snapshots in between, without needing to know the number of snapshots
baselined, call the MAKE_BASELINE procedure. Log into the PERFSTAT
user in SQL*Plus, and:

SQL> exec statspack.make_baseline -
(i_begin_snap => 45, -
i_end_snap => 50);

Or without specifying the parameter names:

SQL> exec statspack.make_baseline(45, 50);

Example 2:
To make a baseline of snaps 1237 and 1241 (including the range of
snapshots in between), and be informed of the number of snapshots
baselined (by calling the function), log into the PERFSTAT
user in SQL*Plus, and:

SQL> variable num_snaps number;
SQL> begin
SQL> :num_snaps := statspack.make_baseline(1237, 1241);
SQL> end;
SQL> /
SQL> print num_snaps

Example 3:
To make a baseline of only snapshots 5 and 12 (excluding the
snapshots in between), log into the PERFSTAT user in SQL*Plus,
and:

SQL> exec statspack.make_baseline(5, 12, false);

All of the prior examples apply equally to CLEAR_BASELINE.
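
For example, the baseline made over snaps 45 to 50 in Example 1 could
later be cleared (marking those snapshots as candidates for purging
again) with:

SQL> exec statspack.clear_baseline(45, 50);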


8.1.2. Input Parameters for the MAKE_BASELINE and CLEAR_BASELINE
procedure and function which accept Begin and End Dates

The input parameters for the MAKE_BASELINE and CLEAR_BASELINE procedure and
function which accept begin and end dates are identical. The procedures/
functions will baseline (or clear the baseline for) all snapshots
which were taken between the begin and end dates identified.

                    Range of          Default
Parameter Name      Valid Values      Value   Meaning
------------------  ----------------- ------- -------------------------------
i_begin_date        Any valid date       -    Date to start the baseline at
i_end_date          Any valid date >     -    Date to end baseline at
                    begin date
i_dbid            | Any valid DBId/   Current Caters for RAC databases
i_instance_number | inst number       DBId/   where you may wish to baseline
                    combination       Inst #  snapshots on one instance
                    in this                   which were physically taken
                    Statspack                 on another instance
                    schema

Example 1:
To make a baseline of snapshots taken between 12-Feb-2003 at 9am, and
12-Feb-2003 at 12 midday (and be informed of the number of snapshots
affected), call the MAKE_BASELINE function. Log into the PERFSTAT
user in SQL*Plus, and:

SQL> variable num_snaps number;
SQL> begin
SQL> :num_snaps := statspack.make_baseline
(to_date('12-FEB-2003 09:00','DD-MON-YYYY HH24:MI'),
to_date('12-FEB-2003 12:00','DD-MON-YYYY HH24:MI'));
SQL> end;
SQL> /
SQL> print num_snaps

Example 2:
To clear an existing baseline which covers the times 13-Dec-2002 at
11pm and 14-Dec-2002 at 2am (without wanting to know how many
snapshots were affected), log into the PERFSTAT user in SQL*Plus, and:

SQL> exec statspack.clear_baseline -
(to_date('13-DEC-2002 23:00','DD-MON-YYYY HH24:MI'), -
to_date('14-DEC-2002 02:00','DD-MON-YYYY HH24:MI'));


8.2. Purging/removing unnecessary data

It is possible to purge unnecessary data from the PERFSTAT schema using the
PURGE procedures/functions. Any Baselined snapshots will not be purged.

NOTE:
o It is good practice to ensure you have sufficient baselined snapshots
before purging data.
o It is recommended you export the schema as a backup before purging data,
either using your own export parameters, or those provided in
spuexp.par.
o WARNING: It is no longer possible to rollback a requested purge operation.
o The functionality which was in the sppurge.sql SQL script has been moved
into the STATSPACK package. Moving the purge functionality into the
STATSPACK package has allowed significantly more flexibility in how
the data to be purged can be specified by the performance engineer.


Purge Criteria for the STATSPACK PURGE procedures and functions
---------------------------------------------------------------
Data to be purged can either be specified by:

- Begin Snap Id and End Snap Id

A begin and end snap Id pair can be specified. In this case, you choose
either to purge the range of snapshots between the begin and end
snapshot pair (inclusive, which is the default), or just the two
snapshots specified.
The preexisting Statspack sppurge.sql SQL script has been modified to
use this PURGE procedure (which purges by begin/end snap Id range).

- Begin Date and End Date

A begin and end date pair can be specified. All snapshots which were
taken between the begin and end date will be purged.

- Purge before date

All snapshots which were taken before the specified date will be purged.

- Number of days (N)

All snapshots which were taken N or more days prior to the current date
and time (i.e. SYSDATE) will be purged.

Extended Purge
--------------
In prior releases, Statspack identifier tables which contained SQL Text,
SQL Execution plans, and Segment identifiers were not purged.

It is now possible to purge the unreferenced data in these tables. This is
done by requesting the 'extended purge' be performed at the same time as
the normal purge. Requesting the extended purge be performed along with a
normal purge is simply a matter of setting the input parameter
i_extended_purge to TRUE when calling the regular purge.

Purging this data may be resource intensive, so you may choose to perform
an extended purge less frequently than the normal purge.
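
As a sketch only (the 31-day retention and the daily/weekly intervals are
illustrative assumptions, as is the use of dbms_job for the scheduling), a
site could schedule a frequent normal purge alongside a less frequent
extended purge from the PERFSTAT user:

SQL> variable jobno number;
SQL> begin
SQL>   -- daily job: purge snapshots older than 31 days
SQL>   dbms_job.submit(:jobno, 'statspack.purge(i_num_days => 31);',
                       sysdate, 'SYSDATE+1');
SQL>   -- weekly job: the same purge, plus the extended purge of unused
SQL>   -- SQL Text, SQL Plans and Segment Identifiers
SQL>   dbms_job.submit(:jobno,
                       'statspack.purge(i_num_days => 31, i_extended_purge => TRUE);',
                       sysdate, 'SYSDATE+7');
SQL>   commit;
SQL> end;
SQL> /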

Procedure or Function
---------------------
Each of the purge procedures has a corresponding function. The function
performs the same task as the procedure, but returns the number of
Snapshot rows purged (whereas the procedure does not).


8.2.1. Input Parameters for the PURGE procedure and function
which accept Begin Snap Id and End Snap Id

This section describes the input parameters for the PURGE procedure and
function which accept Snap Ids. The input parameters for both procedure
and function are identical. The procedure/function will purge all
snapshots between the begin and end snap Ids identified (inclusive, which
is the default), or if i_snap_range parameter is FALSE, will only purge
the two snapshots specified. If i_extended_purge is TRUE, an extended purge
is also performed.
If the function is called, it will return the number of snapshots purged.

                    Range of          Default
Parameter Name      Valid Values      Value   Meaning
------------------  ----------------- ------- -------------------------------
i_begin_snap        Any Valid Snap Id    -    SnapId to start purging from
i_end_snap          Any valid Snap Id    -    SnapId to end purging at
i_snap_range        TRUE/FALSE         TRUE   Should the range of snapshots
                                              between the begin and end snap
                                              be included?
i_extended_purge    TRUE/FALSE         FALSE  Determines whether unused
                                              SQL Text, SQL Plans and
                                              Segment Identifiers will be
                                              purged in addition to the
                                              normal data purged
i_dbid            | Any valid DBId/   Current Caters for RAC databases
i_instance_number | inst number       DBId/   where you may wish to purge
                    combination       Inst #  snapshots on one instance
                    in this                   which were physically taken
                    Statspack                 on another instance
                    schema

Example 1:
Purge all snapshots between the specified begin and end snap ids. Also
purge unused SQL Text, SQL Plans and Segment Identifiers, and
return the number of snapshots purged. Log into the PERFSTAT user
in SQL*Plus, and:

SQL> variable num_snaps number;
SQL> begin
SQL> :num_snaps := statspack.purge
( i_begin_snap=>1237, i_end_snap=>1241
, i_extended_purge=>TRUE);
SQL> end;
SQL> /
SQL> print num_snaps


8.2.2. Input Parameters for the PURGE procedures and functions
which accept Begin Date and End Date

This section describes the input parameters for the PURGE procedure and
function which accept a begin date and an end date. The procedure/
function will purge all snapshots taken between the specified begin and
end dates. The input parameters for both procedure and function are
identical. If i_extended_purge is TRUE, an extended purge is also performed.
If the function is called, it will return the number of snapshots purged.

                    Range of          Default
Parameter Name      Valid Values      Value   Meaning
------------------  ----------------- ------- -------------------------------
i_begin_date        Date                 -    Date to start purging from
i_end_date          End date > begin     -    Date to end purging at
                    date
i_extended_purge    TRUE/FALSE         FALSE  Determines whether unused
                                              SQL Text, SQL Plans and
                                              Segment Identifiers will be
                                              purged in addition to the
                                              normal data purged
i_dbid            | Any valid DBId/   Current Caters for RAC databases
i_instance_number | inst number       DBId/   where you may wish to purge
                    combination       Inst #  snapshots on one instance
                    in this                   which were physically taken
                    Statspack                 on another instance
                    schema

Example 1:
Purge all snapshots which fall between 01-Jan-2003 and 02-Jan-2003.
Also perform an extended purge. Log into the PERFSTAT user in
SQL*Plus, and:

SQL> exec statspack.purge -
(i_begin_date=>to_date('01-JAN-2003', 'DD-MON-YYYY'), -
i_end_date =>to_date('02-JAN-2003', 'DD-MON-YYYY'), -
i_extended_purge=>TRUE);


8.2.3. Input Parameters for the PURGE procedure and function
which accept a single Purge Before Date

This section describes the input parameters for the PURGE procedure and
function which accept a single date. The procedure/function will purge
all snapshots older than the date specified. If i_extended_purge is TRUE,
an extended purge is also performed. The input parameters for both
procedure and function are identical.
If the function is called, it will return the number of snapshots purged.

                    Range of          Default
Parameter Name      Valid Values      Value   Meaning
------------------  ----------------- ------- -------------------------------
i_purge_before_date Date                 -    Snapshots older than this date
                                              will be purged
i_extended_purge    TRUE/FALSE         FALSE  Determines whether unused
                                              SQL Text, SQL Plans and
                                              Segment Identifiers will be
                                              purged in addition to the
                                              normal data purged.
i_dbid            | Any valid DBId/   Current Caters for RAC databases
i_instance_number | inst number       DBId/   where you may wish to purge
                    combination       Inst #  snapshots on one instance
                    in this                   which were physically taken
                    Statspack                 on another instance
                    schema

Example 1:
To purge data older than a specified date, without wanting to know the
number of snapshots purged, log into the PERFSTAT user in SQL*Plus,
and:

SQL> exec statspack.purge(to_date('31-OCT-2002','DD-MON-YYYY'));


8.2.4. Input Parameters for the PURGE procedure and function
which accept the Number of Days of data to keep

This section describes the input parameters for the PURGE procedure and
function which accept the number of days of snapshots to keep. All data
older than the specified number of days will be purged. The input
parameters for both procedure and function are identical. If
i_extended_purge is TRUE, an extended purge is also performed.
If the function is called, it will return the number of snapshots purged.

                    Range of          Default
Parameter Name      Valid Values      Value   Meaning
------------------  ----------------- ------- -------------------------------
i_num_days          Number > 0           -    Snapshots older than this
                                              number of days will be purged
i_extended_purge    TRUE/FALSE         FALSE  Determines whether unused
                                              SQL Text, SQL Plans and
                                              Segment Identifiers will be
                                              purged in addition to the
                                              normal data purged
i_dbid            | Any valid DBId/   Current Caters for RAC databases
i_instance_number | inst number       DBId/   where you may wish to purge
                    combination       Inst #  snapshots on one instance
                    in this                   which were physically taken
                    Statspack                 on another instance
                    schema

Example 1:
To purge data older than 31 days, without wanting to know the number
of snapshots operated on, log into the PERFSTAT user in SQL*Plus, and:

SQL> exec statspack.purge(31);


8.2.5. Using sppurge.sql

When sppurge is run, the instance currently connected to, and the
available snapshots are displayed. The DBA is then prompted for the
low Snap Id and high Snap Id. All snapshots which fall within this
range will be purged.

WARNING: sppurge.sql has been modified to use the new Purge functionality
in the STATSPACK package, therefore it is no longer possible to
rollback a requested purge operation - the purge is automatically
committed.

e.g. Purging data - connect to PERFSTAT using SQL*Plus, then run the
sppurge.sql script - sample output appears below.

SQL> connect perfstat/perfstat_password
SQL> set transaction use rollback segment rbig;
SQL> @sppurge

Database Instance currently connected to
========================================

                                Instance
      DB Id DB Name    Inst Num Name
----------- ---------- -------- ----------
  720559826 PERF              1 perf


Snapshots for this database instance
====================================

                               Base- Snap
 Snap Id Snapshot Started      line? Level Host            Comment
-------- --------------------- ----- ----- --------------- --------------------
       1 28 Feb 2000 10:00:01            6 perfhost
       2 28 Feb 2000 12:00:06  Y         6 perfhost
       3 01 Mar 2000 02:00:01  Y         6 perfhost
       4 01 Mar 2000 06:00:01            6 perfhost

WARNING
~~~~~~~
sppurge.sql deletes all snapshots ranging between the lower and
upper bound Snapshot Id's specified, for the database instance
you are connected to. Snapshots identified as Baseline snapshots
which lie within the snapshot range will not be purged.

It is NOT possible to rollback changes once the purge begins.

You may wish to export this data before continuing.

Specify the Lo Snap Id and Hi Snap Id range to purge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for losnapid: 1
Using 1 for lower bound.

Enter value for hisnapid: 2
Using 2 for upper bound.

Deleting snapshots 1 - 2

Purge of specified Snapshot range complete.

SQL> -- end of example output


Batch mode purging
------------------
To purge in batch mode, you must assign values to the SQL*Plus
variables which specify the low and high snapshot Ids to purge.

The variables are:
losnapid -> Begin Snapshot Id
hisnapid -> End Snapshot Id

e.g.
SQL> connect perfstat/perfstat_password
SQL> define losnapid=1
SQL> define hisnapid=2
SQL> @sppurge

sppurge will no longer prompt for the above information.


8.3. Removing all data

If you wish to truncate all performance data indiscriminately, it is
possible to do this using sptrunc.sql. This script truncates all
statistics data gathered, including snapshots marked as baselines.

NOTE:
It is recommended you export the schema as a backup before running this
script, either using your own export parameters, or those provided in
spuexp.par.

If you run sptrunc.sql in error, the script allows you to exit before
beginning the truncate operation (you do this at the 'begin_or_exit'
prompt by typing in 'exit').

To truncate all data, connect to the PERFSTAT user using SQL*Plus,
and run the script - sample output which truncates data is below:

SQL> connect perfstat/perfstat_password
SQL> @sptrunc

Warning
~~~~~~~
Running sptrunc.sql removes ALL data from Statspack tables. You may
wish to export the data before continuing.


About to Truncate Statspack Tables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you would like to exit WITHOUT truncating the tables, enter any text at the
begin_or_exit prompt (e.g. 'exit'), otherwise if you would like to begin
the truncate operation, press <return>.


Enter value for begin_or_exit:
<return> entered at the 'begin_or_exit' prompt

... Starting truncate operation

Table truncated.
Table truncated.

Commit complete.

Package altered.

... Truncate operation complete


8.4. Sharing data via export

If you wish to share data with other sites (for example if Oracle
Support requires the raw statistics), it is possible to export
the PERFSTAT user.

An export parameter file (spuexp.par) has been supplied for this
purpose. To use this file, supply the export command with the
userid parameter, along with the export parameter file name.
e.g.
exp userid=perfstat/perfstat_password parfile=spuexp.par

This will create a file called spuexp.dmp and the log file spuexp.log

If you wish to load the data into another database, use the import
command. For more information on using export and import, please
see the Oracle Utilities manual.
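
As a minimal sketch (spuimp.log is an assumed log file name, and the
target database is assumed to already have the Statspack schema
installed), the dump file could be imported into the target PERFSTAT
schema with:

imp userid=perfstat/perfstat_password file=spuexp.dmp log=spuimp.log ignore=y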

9. New and Changed Features
----------------------------

9.1. Changes between 10.2 and 11.1

Changes on the Summary Page of the Instance Report
o Host
- Platform name has been added to the Host information.
- The number of CPU cores and sockets is displayed, where available.
- Physical Memory is now shown in GB rather than MB.
o Snapshot information
- The DB time and DB CPU, in seconds, are now printed close to the
snapshot Elapsed time.
o Load Profile
- DB time and DB CPU have been added to the Load Profile. Units are
Per Second, Per Transaction, Per Execute and Per Call.
The addition of this normalized data assists when examining two
reports to see whether the load is comparable.
- The number of 'Sorts' has been replaced with 'W/A MB processed'.
Displaying workarea statistics more accurately reflects not
only sorts, but also other workarea operations such as hash
joins. Additionally, using MB processed rather than the number
of workarea operations indicates the quantity of workarea work
performed.
- The following statistics have been removed from the front page,
as they are no longer considered as important as they once were:
% Blocks changed per Read
Recursive Call %
Rollback per transaction %
Rows per Sort

o Instance Efficiency
- This section has been renamed from 'Instance Efficiency Percentages'
to 'Instance Efficiency Indicators', as this more accurately
represents that these values are simply indicators of possible
areas to consider, rather than conclusive evidence.
- 'In-memory Sort %' has been replaced with 'Optimal W/A Exec %'
as the old statistic only showed sorts, and not all workarea
operations.


Modified sections of the Instance Report
o Wait Events and Background Wait Events
The % of Total Call Time has been added to these sections. Rows which
have null for the % of Call Time are Idle Events, and so do not
contribute to the % Total Call time.

o The PGA Memory Advisory now shows the Estimated Time to process the
data.

o The init.ora parameters section now shows the entire parameter value
rather than truncating the output

o SQL sections
SQL statements which have not completed execution (e.g. have a 0
execution count as the statement was still running when the End
snapshot was taken) can now appear in the SQL reports.


New Data captured/reported on - Level 1
o Foreground Wait Events
A new wait event section has been added. This shows wait events
for foreground processes only.

o OS Statistics - detail (from v$osstat)
This is a new section which shows the Load, %Busy, %User, %System
%WIO and %WCPU for each snapshot between the Begin and End snapshot
Ids specified.

o IO Stat by Function - summary (from v$iostat_function)
This section shows IO statistics broken down by Oracle function.
A by-function breakdown can help identify where the IO bandwidth
is being consumed (e.g. by RMAN, DBWR, recovery, direct IO).
The summary section summarizes IO requests and IO rates.

o IO Stat by Function - detail (from v$iostat_function)
Similar to the above, however this section shows how the
IO requests and IO rates break down into small requests and
large requests (requests > 128k).

o Memory Target Advice (from v$memory_target_advice)
Shows potential values for the memory_target parameter, and
their estimated cost, in DB time.

o Memory Dynamic Components
Shows information about the last resize operation for each
Memory Component.

o Memory Resize Operations
Shows all known Memory resize operations which occurred between
the Begin and End snapshots.


Obsoleted data
o v$temp_histogram view and the corresponding Statspack table have
been dropped.

9.2. Changes between 10.1 and 10.2

Changes on the Summary Page of the Instance Report
o The front summary page of the instance report has been modified to show
- Host CPU and Memory configuration
- begin/end buffer cache and shared pool sizes (end values are only
shown if they differ from the begin values)
- Ave Wait (ms) for the Top-5 Timed Events section


Continuation of Summary Page on Page 2
o Page 2 of the Statspack report should be considered a continuation of
the front-page summary of the Statspack report. This page includes:
- Memory and CPU statistics captured by Oracle in the v$osstat view
- ratios derived from v$osstat and the Time model data
(v$sys_time_model)
- the raw Time-model system statistics data

These statistics should be consulted in conjunction with page 1 data
during the initial performance analysis stage, when formulating the
list of potential drill-down data to examine:
o The Operating System statistics data should be used to identify
whether the host is CPU bound, and if so, how much this Oracle
instance is contributing to the CPU usage.
o The Memory usage statistics show how much of physical memory is
consumed, and how much physical memory is used by the SGA and
PGA for this instance.
Please note that not all of the OS statistics are available on all
platforms.


Sections moved in the Instance Report
o The Time Model System Stats section has moved to page 2 of the report
(see Continuation of Summary Page on Page 2, above).

o The OS Stats section has been moved to follow the System Statistics
sections.


Modified sections of the Instance Report
o The Wait Events and Background Wait Events section of the report have
been modified to only show events with a total wait time of > .001s
to filter out unimportant events.

o The Timeouts column in the System Event and Background Event sections has
changed to be %Timeouts (as compared to Waits). Note that to avoid
loss of data, a %Timeouts value of 0 indicates timeouts occurred in < .5%
of waits. A value of null indicates 0 timeouts.

o The SGA regions section of the report now shows the Begin and End sizes
of the various regions (the end sizes are only shown if different to
the begin sizes).

o The File IO Histogram section has been modified to include a new
bucket (<=2ms).

o The Buffer Pool Statistics section now shows the number of buffers in
K, M or G (where K is 1000 buffers, M is 1000000 buffers and G
is 1000000000 buffers)

o Omitting sections from the Statspack report
- The Rollstat sections of the report are omitted from the output when
Automatic Undo Management is used. If you still wish to see these
sections when using AUM, modify the display_rollstat parameter in
the sprepcon.sql file.
- It is also possible to avoid including the following sections in the
Instance report. However modifying these default settings is not
recommended, as valuable data may be missing during performance
diagnosis.
o Undostat (Automatic Undo data)
o File IO details

o Undo Segment Summary section now also shows the Min and Max values for
Tuned Undo Retention.


Changes in Data captured/reported on - Level 1
o The v$sgastat view has been modified in 10gR2 to show separate
rows for all memory allocations, without the summary 'Miscellaneous'
row. This now results in over 500 individual rows.
To avoid capturing excessive and unneeded data, Statspack has been
optimized to capture only that data which will be useful when
investigating memory usage. It is expected Statspack will capture
on the order of 50 rows per snapshot.
To avoid showing all of these rows in the report, only the top rows
are shown (by default 35 rows are shown, although this can be
increased if needed by modifying the sgastat_top_n in sprepcon.sql).
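
For example (a sketch only: the value 50 is arbitrary, and it is assumed
the setting is exposed as a SQL*Plus define in the configurable settings
section of sprepcon.sql, as the other report settings are):

define sgastat_top_n = 50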


New sections of the Instance Report
o Some SGA resizes can be detected by Statspack, when the individual
cache sizes are different at the time the snapshot is taken.
Any changes in cache sizes visible at snapshot time are shown
in the Cache Size Changes section of the report.

o Two new SQL sections have been added - SQL ordered by CPU and
SQL ordered by Elapsed time. These are now the first two
SQL sections (i.e. they appear before 'SQL ordered by Gets')
New thresholds for CPU and Elapsed time were not added to the data
capture, as it is believed that the top SQL in these categories
is already being captured by the existing thresholds.

o Two new process (PGA) memory sections have been added to the
Statspack Instance report:
o Process Memory Summary Stats, which shows a summary
of process memory allocation and usage for both begin and
end snapshots
o Top Process Memory (by component), which shows process
information for the processes which have the most memory
allocated, broken down by component (for the begin and
end snapshots).

o The SQL Memory Statistics section has been added. This section
displays a summary of memory usage statistics for cursors.


New Data captured/reported on - Level 1
SGA Target Advisory (from v$sga_target_advice)
Streams Pool Advisory (from v$streams_pool_advice)
PGA Memory usage (from v$process, and v$process_memory)
(see New sections of the instance report above for more
information)

Real Application Cluster Features
o The 'RAC Statistics' page now computes the estimated interconnect
traffic in KB/sec (Estd Interconnect traffic (KB/s)) in the Global
Cache Load Profile section.
Note your database's standard block size is used to compute this
statistic, so if your database uses multiple block sizes, this
statistic may have an estimation error. This statistic also does
not account for PQ messages.

o v$class_cache_transfer is no longer captured (and the corresponding
Statspack table has been dropped). Instead Statspack now captures
v$instance_cache_transfer.
o A Dynamic Remastering Statistics section has been added.


SQL Report (sprepsql.sql)
o The time a plan was last active is shown in the SQL report for each
known plan.


Obsoleted data
o The sleeps 1-3 columns have been obsoleted in this release,
therefore Statspack no longer captures, nor reports on this
data (for v$latch_parent, v$latch_children, v$latch)

9.3. Changes between 9.2 and 10.1

Baseline
It is now possible to identify snapshots which you wish to keep. These
snapshots are termed baselined snapshots. Baselined snapshots will not
be purged by the Statspack purge. For more information, see section
'Baselining performance data'


Purge
The purge code has been moved from sppurge.sql into the STATSPACK
package, and has been significantly enhanced. Having the purge
functionality in the Statspack package allows greater flexibility in
specifying which data to purge (e.g. by date range, or by purging
snapshots older than N days, etc). For more information, see section
'Purging/removing unnecessary data'.


Streams
Performance data for Streams is now captured. See section 'New Data
captured/reported - Level 1', below for more details.


V$SQL.HASH_VALUE and V$SQL.OLD_HASH_VALUE columns
The algorithm used to calculate the hash_value column in the V$SQL view
(and V$SQL* related views) has been modified in 10g. In other words, the
hash_value for a statement in Oracle 10g will not be the same as the
hash_value for that same statement in prior releases.

To allow for backward compatibility, and comparison of the performance of
SQL statements in releases prior to 10g, Statspack continues to use the
old hash value as one of the columns comprising the primary key for the
Statspack SQL related tables. The old-format hash value is visible
in the v$sql.old_hash_value column (this has been added to the V$SQL and
related views for backward compatibility).

In the Statspack reports, Statspack continues to display the old hash
value for backward compatibility (this column is clearly identified as
Old Hash Value).

For more information on the hash_value change, see 'Data Compatibility -
Changing SQL Hash Value, and new SQL Id' below.


Running the Report

o Number of Days of Snapshots to List
It is now possible to influence certain aspects of what appears in the
Instance report, including the number of days of snapshots to list when
choosing the begin and end snapshots.
The configuration is performed by modifying the 'Customer Configurable
Report Settings' section of the file sprepcon.sql.
For more information see 'Configuring the Report' section of this document.

o Error Reporting
Error reporting has been modified, so that an input error made when
running the report now results in the report terminating with the
error shown, and the session being disconnected from SQL*Plus.
The error messages have also been modified to show the actual values
which caused the errors - this makes it easier to identify why the
report has been terminated, and so how the error can be avoided.
e.g. When running spreport, if you accidentally enter a snapshot id which
does not exist, an error is reported, and the report exits.
declare
*
ERROR at line 1:
ORA-20200: Begin Snapshot Id 3469 does not exist for this database/instance
ORA-06512: at line 25


SQL sections of the Instance Report
o The SQL ordered by Gets, Reads and Parse Calls sections have all been
modified to:

- only show rows which exceed 1% of the total resources
used for the entire interval. This reduces the number of rows which are
candidates for printing. This is identified in the title of the section.

e.g. For SQL ordered by Parse Calls, only those rows which exceed
1% of the total parse calls will be candidates for displaying
in this section.

-> SQL reported below exceeded 1% of total Parse Calls

This line in the title of the Parse Calls section identifies that
only SQL statements which exceeded 1% of the total parse calls
incurred in the interval (specified by the begin and end snapshots)
will be included.

Note that not all of the SQL statements that exceeded the 1% threshold are
printed in the report, just the highest-load ones.

- The total number of resources used by captured statements is compared
to the total number of resources used over the entire interval (as
specified by the begin and end snapshots). This comparison helps
identify how much of the total load can be accounted for in the
high-load SQL captured.
e.g. In the title for the SQL ordered by Gets section of the report,
a line similar to the following will appear

-> Captured SQL accounts for 74.8% of total Buffer Gets

This identifies that 74.8% of the total Buffer gets incurred during
the interval is attributable to the high-load SQL captured by Statspack
(Note that not all captured statements are displayed in the report, only
those which are the highest load).

o New SQL report 'SQL ordered by Cluster Wait Time'
There is a new SQL report added to the SQL reports section. This report
lists the top-SQL ordered by Cluster Wait Time. This report may be useful
in Real Application Cluster databases.


Derived Statistics
There is one new statistic in the Instance Activity Sections which
does not come from V$SYSSTAT: 'log switches (derived)'.
This statistic is derived from the v$thread view which Statspack now
captures. This statistic is shown in a new Instance Activity Stats sections
of the instance report, as described below.


Two new Instance Activity Stats sections
There are two new Instance Activity Stats sections in the instance report.

The first shows the begin and end absolute values of statistics which
should not be diffed (typically performing a diff is incorrect, because
the statistics show current values, rather than cumulative values).
These statistics come from v$sysstat (as do the other Instance Activity
statistics).

Instance Activity Stats DB/Inst: MAINDB/maindb Snaps: 22-23
-> Statistics with absolute values (should not be diffed)
-> Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic                             Begin Value       End Value
--------------------------------- --------------- ---------------
logons current                                 10              10
opened cursors current                         41              49
session cursor cache count                     24              36

The second shows the number of log switches, which is derived from the
v$thread view.

Instance Activity Stats DB/Inst: MAINDB/maindb Snaps: 22-23

Statistic                                          Total  per Hour
--------------------------------- ------------------ ---------
log switches (derived)                             0       .00


New Scripts
o sprsqins.sql - Reports on a single SQL statement (i.e. hash_value),
including the SQL statistics for the snapshot, the
complete SQL text and optimizer execution plan information.
This report differs from sprepsql.sql, in that it
can report on a SQL statement for any instance which
the PERFSTAT schema contains, whereas sprepsql.sql defaults
the dbid and instance number to the instance you are
currently connected to, thus restricting reporting of
SQL statements to those related to that instance only.
sprsqins.sql will prompt for a dbid, instance_number,
begin and end snap id's and the hash value of the SQL
statement to report on.
This report can be used when importing data from another
instance, or in a Real Application Clusters environment
to report on an instance which you are not directly
connected to.
o sprepcon.sql - This file contains SQL*Plus parameters which determine
some aspects of what is printed out in the Statspack
Instance report spreport.sql For more details on what
is configurable, see the sprepcon.sql file itself.


New Data captured/reported on - Level 1
Time Model data (from v$sys_time_model and v$sess_time_model)
Operating System statistics (from v$osstat)
Streams statistics (from
Streams Capture - v$streams_capture
Streams Apply - v$streams_apply_coordinator/reader/server
Propagation Sender - v$propagation_sender, dba_queue_schedules
Propagation Receiver - v$propagation_receiver
Buffered Queues - v$buffered_queues
Buffered Queue Subscribers - v$buffered_subscribers
Rule Sets - v$rule_set
Additional RAC Sections (from v$cr_block_server, v$current_block_server,
v$class_cache_transfer)
Enqueue Statistics (from v$enqueue_statistics, rather than v$enqueue_stat)
Java Pool Advisory (from v$java_pool_advice)
Thread information (from v$thread)


New Data captured, optionally reported on - Level 1
Event Histogram Statistics (from v$event_histogram)
(only displayed if SQL*Plus variable event_histogram = Y)
File Histogram Statistics (from v$datafile_histogram and
v$tempfile_histogram)
(only displayed if SQL*Plus variable file_histogram = Y)


New columns added to
o stats$shared_pool_advice
estd_lc_load_time, estd_lc_load_time_factor
o stats$sql_plan
sql_id, projection, time, object_alias, object_type, qblock_name,
remarks
o stats$sql_summary
sql_id, direct_writes, application_wait_time, concurrency_wait_time,
cluster_wait_time, user_io_wait_time, plsql_exec_time, java_exec_time,
sql_profile, program_id, program_line#, end_of_fetch_count
o stats$sql_text
sql_id
o stats$undostat
maxqueryhash, maxqueryid, activeblks, unexpiredblks, expiredblks,
tuned_undoretention


Cluster Features
o Real Application Clusters Statistics page (page 2 of a clustered
database report) has been modified to add new ratios and remove ratios
considered less useful.

o The Global Enqueue Statistics section, previously on page 3 of a RAC
instance report, has been moved to behind the Library Cache Activity
statistics.

o Statistics for CR and CURRENT blocks served, and for INSTANCE CACHE
TRANSFER, have been added after Global Enqueue Statistics page.

o New SQL report 'SQL ordered by Cluster Wait Time' has been added.


9.4. Changes between 9.0 and 9.2

Changes on the Summary Page of the Instance Report (spreport.sql)

o The Top 5 Wait Events has been changed to be the Top 5 Timed Events.

What was previously the Top 5 Wait Events has been expanded to give the
Top 5 timed events within the instance: i.e. in addition to including
Wait events, this section can now include the CPU time as reported in the
'CPU used by this session' statistic. This statistic will appear in the
Top 5 only if its value is one of the Top 5 users of time for the
snapshot interval.

Note that the name of the statistic 'CPU used by this session' will
actually appear in the Top 5 section as 'CPU Time'. The statistic
name is masked in the Top 5 to avoid the confusion of the suffix
'by this session'.
The statistic will continue to appear in the System Statistics
(SYSSTAT) section of the report as 'CPU used by this session'.

Additionally, instead of the percentage calculation being the % Total
Wait Time (which is time for each wait event divided by the total wait
time), the percentage calculation is now the % Total Call Time.

Call Time is the total time spent in database calls (i.e. the total
non-idle time spent within the database either on the CPU, or actively
waiting).

We compute 'Call Time' by adding the time spent on the CPU ('CPU used by
this session' statistic) to the time used by all non-idle wait events.
i.e.
total call time = total CPU time + total wait time for non-idle events

The % Total Call Time shown in the 'Top 5' heading on the summary page
of the report, is the time for each timed event divided by the total call
time (i.e. non-idle time).
i.e.
previously the calculation was:
time for each wait event / total wait time for all events
now the calculation is:
time for each timed event / total call time
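
e.g. using illustrative figures only: if an interval accumulated 300s of
CPU time and 700s of non-idle wait time, the total call time is 1,000s;
a timed event which accounted for 250s of that interval would be shown
as 25% of Total Call Time.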


Purpose
~~~~~~~
The purpose for including CPU time with wait events:

When tuning a system, the first step is to identify where most of the
time is spent, in order to identify where the most productive tuning
effort should be concentrated.

The majority of time could be spent in waiting for events to complete
(and so be identifiable in the wait event data), or the system could be
consuming much CPU (for which Operating System statistics, and the Oracle
CPU statistic 'CPU used by this session' in SYSSTAT are examined).
Having the CPU Time co-located with the wait events in the Top 5 section
of the instance report makes it easier to compare the relative values
and to identify whether the most productive investigation would occur
by drilling down the wait events, or in reducing Oracle CPU usage
(e.g. by tuning SQL).

Changes on the Top SQL sections of the Report (spreport.sql)

o When specified by the application, the MODULE information is reported
just before the SQL statement itself.
This information is preceded by the label "Module: "

New columns added to
- stats$db_cache_advice
size_factor: compares the estimated cache size with the current cache size
- stats$sql_plan
search_columns: the number of index columns with matching predicates.
access_predicates: predicates used to locate rows in an access structure.
For example, start and/or stop predicates for an index range scan.
filter_predicates: predicates used to filter rows before producing them.
- stats$sql_summary
child_latch: the library cache child latch number which protects this
SQL statement (join to v$latch_children.child#). A parent SQL
statement, and all its children are protected by the same library
cache child latch.
fetches: the number of fetches performed for this SQL statement

New Scripts
o spup90.sql - Upgrades a 9.0 Statspack schema to the 9.2 format

New Data captured/reported on - Level 1
- Shared Pool Advisory
- PGA statistics including PGA Advisory, PGA Histogram usage

New Data captured/reported on - Level 7
- Segment level Statistics

Cluster Features
o Real Application Clusters Statistics page (page 2 of a clustered database
report) has been significantly modified to add new ratios and remove
ratios deemed less useful.
o RAC specific segment level statistics are captured with level 7

SQL Plan Usage capture changed
o The logic for capturing SQL Plan Usage data (level 6) has been modified
significantly. Instead of capturing a Plan's Usage once the first time
the plan is used and never again thereafter, the algorithm now captures
the plans used each snapshot. This allows tracking whether multiple
plans are in use concurrently, or whether a plan has reverted back to
an older plan.
Note that plan usage data is only captured for high-load SQL (this is
unchanged between 9.0 and 9.2).

Due to the significant change in data capture, it is not possible to
convert existing data. Instead, any pre-existing data will be
archived into the table STATS$SQL_PLAN_USAGE_90 (this allows querying
the archived data, should this be necessary).


sprepsql.sql
o 'All Optimizer Plan(s) for this Hash Value' change:
Instead of showing the first time a plan was seen for a specific hash
value, this section now shows each time the Optimizer Plan
changed since the SQL statement was first seen e.g. if the SQL statement
had the following plan changes:
snap ids        plan hash value
--------        ---------------
  1 ->  12      AAAAAAA
 13 -> 134      BBBBBBB
145 -> 299      CCCCCCC
300 -> 410      AAAAAAA

Then this section of the report will now show:
snap id         plan hash value
--------        ---------------
       1        AAAAAAA
      13        BBBBBBB
     145        CCCCCCC
     300        AAAAAAA

Previously, only the rows with snap_id's 1, 13 and 145 would have been
displayed, as these were the first snap Id's at which these plans were found.
However this data could not show that plan AAAAAAA was found again in
snap_id 300.

The new output format makes it easier to see when an older plan is again
in use. This is possible due to the change in the SQL Plan Usage
capture (described above).


9.5. Changes between 8.1.7 and 9.0

Timing data
o columns with cumulative times are now displayed in seconds.

Changes on the Summary Page
o All cache sizes are now reported in M or K

New Statistics on the Summary page
o open cursors per session values for the begin and end snapshot
o comments specified when taking a snapshot are displayed for the
begin and end snapshots

Latches
o The Latch Activity, Child and Parent Latch sections have the following
additional column:
- wait_time: cumulative time spent waiting for the latch

New Scripts
o spup817.sql - Upgrades an 8.1.7 Statspack schema to the 9.0 format
o sprepsql.sql - Reports on a single hash_value, including
the SQL statistics for the snapshot, the complete SQL
text and optimizer execution plan information.
o sprepins.sql - A report which can be run to query performance data
for any instance which the PERFSTAT schema contains.
The report will prompt for a dbid, instance_number and
begin and end snap id's.
This report can be used when importing data from another
instance, or in a Real Application Clusters environment
to report on an instance which you are not directly
connected to.

New Data captured/reported on - Level 1
- Data from v$resource_limit
- If the instance is a Cluster instance, v$dlm_misc data
- Additional columns are now captured in stats$enqueue_stat
- Automatic Undo Management statistics
- Buffer Cache advisory data
- New Auto-PGA memory management data
- Support for multiple block-size buffer pools
- Support for resizable buffer pool and shared pool
- Data from v$instance_recovery

New Snapshot Level - Level 6
- New SQL plans and SQL Plan usage information for high-load SQL
statements are captured.

Cluster Features
o There is additional derived data and statistics which are now included
in the Statspack report for a clustered database. For more information,
see the 'Cluster Specific Data' section of this document.

New SNAP function
o the call to take a snapshot can also be a PL/SQL function call which
returns the snapshot Id of the snapshot taken. Using the function rather
than the procedure is useful in situations where you wish to know the
snap_id immediately, such as when running Statspack reports in batch
mode, or during benchmark runs.
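
For example (a minimal sketch), the function form can be called from
SQL*Plus, capturing the returned snap Id in a bind variable:

SQL> variable snap_id number;
SQL> begin
SQL>   :snap_id := statspack.snap;
SQL> end;
SQL> /
SQL> print snap_id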

Installation
o The installation script will no longer accept the SYSTEM tablespace for
the PERFSTAT user's DEFAULT or TEMPORARY tablespace. If SYSTEM is
specified, the installation will error.

SQL
o Each SQL report has two new columns CPU Time and Elapsed Time. These
show the cumulative CPU time and Elapsed time for all executions of
that SQL statement for the snapshot period. If cumulative CPU and
Elapsed times are not shown, the CPU and Elapsed times per execute
are shown.

Changed
o The SGA Breakdown difference section of the Statspack report now
shows the difference between begin and end values as a percentage
of the begin value, rather than in bytes.
o The data in the Dictionary Cache Stats and Library Cache Activity
sections are only printed if the number of gets is greater than zero.


9.6. Changes between 8.1.6 and 8.1.7

New Statistics on the Summary page
o connections at the begin snapshot and connections at the end snapshot

Load Profile
o executes per transaction and per second
o logons per transaction and per second

Instance Efficiency
o % Non-Parse CPU: the percentage of CPU time not spent parsing,
i.e. 1 - (parse time CPU / CPU used by this session)
o Parse CPU to Parse Elapsd%: which is the parse time CPU / parse time
elapsed
o Execute to Parse %: The ratio of executions to parses

Instance Efficiency - Shared Pool Statistics are shown for the begin and
end snapshots.
o Memory Usage %: The percentage of the shared pool which is used.
o % SQL with executions>1: The percentage of reused SQL (i.e. the
percentage of SQL statements with more than one execution).
o % Memory for SQL w/exec>1: The percentage of memory used for SQL
statements with more than one execution.
This data is newly gathered by the 8.1.7 Statspack for level 5 snapshots
and above, and so will not be evident if the report is run against older
data captured using the 8.1.6 Statspack.

Tablespace and File IO
o Tempfile statistics are now captured. The statistics for tempfiles are
shown in the same sections with statistics for datafiles and tablespaces.
o The tablespace and File IO reports have been modified to include reads/s
and writes/s.

Latches
o The report has been modified to include parent and child latch
sections, which only appear in the report when a level 10 snapshot
is taken.

New Scripts
o sppurge.sql - Purges a range of Snapshot Ids
o sptrunc.sql - Deletes all data
o spup816.sql - Upgrades an 8.1.6 Statspack to the 8.1.7 schema

Batch Mode execution
o The installation, reporting and purge scripts (spcreate.sql, spreport.sql
and sppurge.sql) have been modified so they can be run in batch mode, if
the appropriate SQL*Plus variables are defined before the scripts are run.

SQL
o Two new SQL thresholds (and sections in the report) have been added:
sharable_mem and version_count
o The report which was previously ordered by rows processed has been
changed to be ordered by executions
o The full text of a SQL statement is now captured (previously only the
first 1000 bytes of the text was captured); the text is captured once
only. Previously, Statspack gathered all SQL related information,
including all the SQL text for each snapshot. The new strategy will
result in less space usage.
o The first 5 lines of a SQL statement are shown in each SQL report
(rather than the first line)

File Rename
o The Statspack files have been renamed, with all files now beginning
with the prefix sp.
The new and old file names are given below. For more information on
the purpose of each file, please see the Supplied Scripts Overview
section.

New Name      Old Name
------------  -------------
spdoc.txt     statspack.doc
spcreate.sql  statscre.sql
spreport.sql  statsrep.sql
spauto.sql    statsauto.sql
spuexp.par    statsuexp.par
sppurge.sql   - new file -
sptrunc.sql   - new file -
spup816.sql   - new file -
spdrop.sql    statsdrp.sql
spcpkg.sql    statspack.sql
spctab.sql    statsctab.sql
spcusr.sql    statscusr.sql
spdtab.sql    statsdtab.sql
spdusr.sql    statsdusr.sql

o The default Statspack report output file name prefix has been modified
to sp_ (was st_) to be consistent with the new script names.

10. Compatibility and Upgrading from previous releases
-------------------------------------------------------

10.1 Compatibility Matrix

Database     ---- Statspack Release ----
Release      11.1  10.2  10.1  9.2   9.0   8.1.7  8.1.6
--------     ----  ----  ----  ---   ----  -----  -----
11.1          Y     -     -     -     -      -      -
10.2          -     Y     -     -     -      -      -
10.1          -     -     Y     -     -      -      -
9.2           -     -     -     Y     -      -      -
9.0           -     -     -     -     Y      -      -
8.1.7         -     -     -     -     -      Y      -
8.1.6         -     -     -     -     -      -      Y

In summary, it is best to use the Statspack release shipped with
the version of the database you are using.

If you are already using an earlier release of Statspack and must use
a newer Statspack release (e.g. because you are upgrading the database),
it is possible to upgrade an existing Statspack schema, and so
keep previously captured data. See the 'Upgrading an existing Statspack
schema to a newer release' section of this document.


10.1.1. Using Statspack shipped with 11.1

The Statspack scripts shipped with 11.1 can not be used with any release
earlier than 11.1, as Statspack uses new v$views (and new columns added to
existing v$views) introduced in this server release.


10.1.2. Using Statspack shipped with 10.2

The Statspack scripts shipped with 10.2 can not be used with any release
earlier than 10.2, as Statspack uses new v$views (and new columns added to
existing v$views) introduced in this server release.


10.1.3. Using Statspack shipped with 10.1

The Statspack scripts shipped with 10.1 can not be used with any release
earlier than 10.1, as Statspack uses new v$views (and new columns added to
existing v$views) introduced in this server release.


10.1.4. Using Statspack shipped with 9.2

The Statspack scripts shipped with 9.2 can not be used with any release
earlier than 9.2, as Statspack uses new v$views (and new columns added to
existing v$views) introduced in this server release.


10.1.5. Using Statspack shipped with 9.0

The Statspack scripts shipped with 9.0 can not be used with any release
earlier than 9.0, as the 9.0 release uses new v$views (and new columns added
to existing v$views) introduced in this server release.


10.1.6. Using Statspack shipped with 8.1.7 on 9i releases

It is not possible to use the Statspack shipped with 8.1.7 with any 9i
instance, because the definition of an undocumented view used by the 8i
Statspack changed between Oracle8i and Oracle9i. Attempting to use 8.1 Statspack
on an instance running 9i will result in package compilation errors.


10.2. Upgrading an existing Statspack schema to a newer release

Scripts are provided which convert performance data in an existing
Statspack schema running an older Statspack release, to the newer schema
format.

Although data conversion is not a supported activity, these scripts have been
provided as a convenient way of keeping previously captured Statspack data.

Due to the differences in schema layout, minor irregularities may appear
in statistics captured before conversion. An example of this is the
Enqueue statistics data migration: do not compare Enqueue statistics data
collected pre-10.1 to the Enqueue statistics data captured in 10.1 (for more
details, see section 'Upgrading the Statspack schema from 9.2 to 10.1').


Backups
~~~~~~~
Note: There is no downgrade script. Backup the PERFSTAT schema using
export BEFORE attempting the upgrade, in case the upgrade fails.
The only method of downgrading, or re-running the upgrade is to
de-install Statspack, and import a previously made export.

Before running the upgrade script, export the Statspack schema (for a
backup), then disable any scripts which use Statspack, as these will
interfere with the upgrade. For example, if you use a dbms_job to
gather statistics, disable this job for the duration of the upgrade.
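
For example (a sketch only; the job number 42 is a placeholder for the
site-specific job, which can be identified by querying USER_JOBS while
connected as the job owner), an existing dbms_job can be suspended with:

SQL> exec dbms_job.broken(42, true);

and re-enabled once the upgrade has completed:

SQL> exec dbms_job.broken(42, false);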


Data Volumes
~~~~~~~~~~~~
If there is a large volume of data in the Statspack schema (i.e. a large
number of snapshots), to avoid a long upgrade time or avoid an unsuccessful
upgrade:
- ensure there is enough free space in PERFSTAT's default tablespace
before starting the upgrade (each individual upgrade section will
describe how to estimate the required disk space)
- if you do not use Automatic Undo Management, ensure you specify a large
rollback segment, if prompted
- if you do not use Automatic Memory Management, ensure you specify a large
sort_area_size (e.g. 1048576), if prompted

Rollback segment errors during upgrade
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If it is required, the upgrade script will prompt you for the rollback segment
and sort_area_size to be used on your site. If you do not need to specify a
rollback segment or sort_area_size (e.g. because you use Automatic Undo
Management and PGA Aggregate Target) simply press return, and ignore the
following errors appearing in the upgrade log file:

alter session set sort_area_size =
*
ERROR at line 1:
ORA-02017: integer value required


set transaction use rollback segment
*
ERROR at line 1:
ORA-02245: invalid ROLLBACK SEGMENT name

Package Compilation errors during upgrade over multiple releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Errors in compiling the STATSPACK *package body* *while in the process* of
running multiple Statspack upgrade scripts consecutively (e.g. when
upgrading multiple releases) should be ignored.

If your site is upgrading from (for example) 9.0 to 10.1 and has the 10.1
software installed, then to upgrade the Statspack schema from 9.0 to 10.1,
spup90.sql followed by spup92.sql must be run.

The Statspack package compilation which is a part of the first upgrade
script (spup90.sql) will fail with errors; this is expected, as the schema
is in a partially upgraded state, and will not be fully upgraded to 10.1 until
spup92.sql is also run.

The final package compilation which is run as a part of the last upgrade
script (in this case spup92.sql), must complete successfully.

Note: The above example is not specific to the 9.0 to 10.1 upgrade;
it applies equally when upgrading Statspack through multiple
releases, whichever releases are involved.


10.2.1. Upgrading the Statspack schema from 10.2 to 11.1

Follow the general instructions in section 10.2. 'Upgrading an existing
Statspack schema to a newer release' above.

To upgrade:
- ensure you have sufficient free space in the tablespace
- disable any programs which use Statspack
- backup the Statspack schema (e.g. using export)
- run the upgrade by connecting as a user with SYSDBA privilege:
SQL> connect / as sysdba
SQL> @spup102

Once the upgrade script completes, check the log files (spup102a.lis and
spup102b.lis) for errors. If errors are evident, determine and rectify the
cause. If no errors are evident, re-enable any Statspack data
collection or reporting scripts which were previously disabled.


10.2.2. Upgrading the Statspack schema from 10.1 to 10.2

Follow the general instructions in section 10.2. 'Upgrading an existing
Statspack schema to a newer release' above.

To upgrade:
- ensure you have sufficient free space in the tablespace
- disable any programs which use Statspack
- backup the Statspack schema (e.g. using export)
- run the upgrade by connecting as a user with SYSDBA privilege:
SQL> connect / as sysdba
SQL> @spup101

Once the upgrade script completes, check the log files (spup101a.lis and
spup101b.lis) for errors. If errors are evident, determine and rectify the
cause. If no errors are evident, re-enable any Statspack data
collection or reporting scripts which were previously disabled.


10.2.3. Upgrading the Statspack schema from 9.2 to 10.1

Follow the general instructions in section 10.2. 'Upgrading an existing
Statspack schema to a newer release' above.

This release creates new tables and indexes, and requires approx.
20 extra MB.

To upgrade:
- ensure you have sufficient free space in the tablespace
- disable any programs which use Statspack
- backup the Statspack schema (e.g. using export)
- run the upgrade by connecting as a user with SYSDBA privilege:
SQL> connect / as sysdba
SQL> @spup92

Once the upgrade script completes, check the log files (spup92a.lis and
spup92b.lis) for errors. If errors are evident, determine and rectify the
cause. If no errors are evident, re-enable any Statspack data
collection or reporting scripts which were previously disabled.


Data Compatibility - 'enqueue' wait event
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note that in 10.1, each enqueue has its own distinct wait event, and the
general 'enqueue' wait event will no longer be used. Instead of seeing
'enqueue' as a wait event, you will now see events of the form
'enqueue: <enqueue name> - <request reason>'
e.g.
enqueue: Transaction - row lock contention
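
As a quick way to list the per-enqueue wait events on a 10.1 instance, the
system-wide wait events can be queried from v$system_event. This is a
sketch only, assuming the per-enqueue event names begin with the prefix
'enq':

select event, total_waits
  from v$system_event
 where event like 'enq%'
 order by event;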


Data Compatibility - 'latch free' wait event
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note that in 10.1, many latches each have their own distinct wait event. The
general 'latch free' wait event is still used, but only represents data
for those latches which do not have their own event. So it is now possible
to see both 'latch free' and 'latch: <latch name>' in the list of
wait events
e.g.
latch: cache buffers chains
latch free
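
Similarly, both forms of latch-related wait events can be listed from
v$system_event (a sketch only):

select event, total_waits
  from v$system_event
 where event = 'latch free'
    or event like 'latch:%'
 order by event;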


Data Compatibility - Enqueue Statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A new v$view has been added in 10.1 - v$enqueue_statistics. This view
differs from the existing v$enqueue_stat view, as in addition to breaking
down enqueue activity by enqueue Type, it also breaks down enqueue requests
by Request Reason. So for enqueues which can be requested for multiple
purposes, the data is broken down by reason.

e.g. TX enqueue (transaction enqueue) can be requested for multiple reasons.

In 10.1 the data may look like:
Enqueue Type (Request Reason) Requests
----------------------------------- -------------
TX-Transaction (row lock contention) 55
TX-Transaction (allocate ITL entry) 1

Whereas in 9.2 the data would look like:
Enqueue Type Requests
----------------------------------- -------------
TX 56

Statspack has been enhanced to use the new v$enqueue_statistics view, rather
than continue using v$enqueue_stat.
The Statspack upgrade script spup92.sql migrates the data captured from
prior releases into the new format, in order to avoid losing historical data.

Note that, for the reasons illustrated in the example above, you must sum
the enqueue statistics by Type in a 10.1 Statspack report to make an
equivalent comparison with the data shown in a 9.2 report.
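
For example, a sum by enqueue Type can be obtained directly from the 10.1
view (a sketch only; eq_type, total_req# and total_wait# are columns of
v$enqueue_statistics):

select eq_type,
       sum(total_req#)  total_requests,
       sum(total_wait#) total_waits
  from v$enqueue_statistics
 group by eq_type
 order by eq_type;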


Data Compatibility - Changing of RAC Statistics and Event Names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Real Application Clusters Event Names and Statistics have been changed
from 'global cache xxx' to 'gc xxx'. Historical performance data stored
in the Statspack schema has not been modified to reflect the new names,
so when comparing a Statspack report from a pre-10g system with a 10g
report, be aware that the statistic and event names may have changed.


Data Compatibility - Changing SQL Hash Value, and new SQL Id
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The computed value of the Hash Value column in the V$SQL family of tables
(v$sql, v$sqlarea, v$sqltext etc) has changed in release 10g. This means the
same SQL statement will have a different hash_value in 10g than in prior
releases. This change has been made as a consequence of introducing the
new SQL Id column. SQL Id can be considered a 'more unique' hash_value.
The new SQL Id has been introduced to further reduce the probability of a
'hash collision', where two distinct SQL statements hash to the same
hash_value.
Statspack captures SQL Id, but does not use it as the unique identifier.
Instead, Statspack continues to use the hash_value and first 31 bytes of the
SQL text to uniquely identify a SQL statement (AWR uses SQL Id).
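
To see both identifiers side by side for statements currently in the shared
pool, a query such as the following can be used (illustrative only):

select sql_id, hash_value, substr(sql_text, 1, 31) sql_text_fragment
  from v$sql
 where rownum <= 10;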

10.2.4. Upgrading the Statspack schema from 9.0 to 9.2

Follow the general instructions in section 10.2. 'Upgrading an existing
Statspack schema to a newer release' above.

This release creates new tables and indexes, and requires approx.
20 extra MB.

To upgrade:
- ensure you have sufficient free space in the tablespace
- disable any programs which use Statspack
- backup the Statspack schema (e.g. using export)
- run the upgrade by connecting as a user with SYSDBA privilege:
SQL> connect / as sysdba
SQL> @spup90

Once the upgrade script completes, check the log files (spup90a.lis and
spup90b.lis) for errors. If errors are evident, determine and rectify the
cause. If no errors are evident, re-enable any Statspack data
collection or reporting scripts which were previously disabled.


SQL Plan Usage Data Upgrade note:
If there is more than one database in a single Statspack schema (i.e.
there are multiple distinct dbid's), AND if Level 6 snapshots have
been taken using the 9.0 release Statspack, then the SQL plan usage
data will be saved, but will not be queried by the sprepsql.sql
SQL report (this is because during the data conversion, it will not
be possible to identify which database first identified a plan
usage).
For more details see 'SQL Plan Usage capture changed' in the 'Changes
between 9.0 and 9.2' section of this document.


10.2.5. Upgrading the Statspack schema from 8.1.7 to 9.0

Follow the general instructions in section 10.2. 'Upgrading an existing
Statspack schema to a newer release' above.

Then, to estimate whether you have sufficient free space to run this
upgrade, execute the following SQL statement while connected as PERFSTAT in
SQL*Plus:

select 10 + (2*sum(bytes)/1024/1024) est_space_mb
from dba_segments
where segment_name in ('STATS$ENQUEUESTAT');

The est_space_mb column will give you a guesstimate as to the required
free space, in megabytes.

To upgrade:
- ensure you have sufficient free space in the tablespace
- disable any programs which use Statspack
- backup the Statspack schema (e.g. using export)
- run the upgrade by connecting as a user with SYSDBA privilege:
SQL> connect / as sysdba
SQL> @spup817

Once the upgrade script completes, check the log files (spup817a.lis and
spup817b.lis) for errors. If errors are evident, determine and rectify
the cause before proceeding. If no errors are evident, and you are upgrading
to 9.2, you may proceed with the upgrade.


Data Compatibility
~~~~~~~~~~~~~~~~~~
Prior to release 9.0, the STATS$ENQUEUESTAT table gathered data based on
an X$ table, rather than a V$view. In 9.0, the column data within the
underlying X$ table has been considerably improved, and the data
externalised via the V$ENQUEUE_STAT view.

The Statspack upgrade script spup817.sql migrates the data captured from
prior releases into the new format, in order to avoid losing historical data.

Note, however, that the column names and the data contained within the
columns have changed considerably between the two releases: the
STATS$ENQUEUE_STAT columns in 9.0 capture different data from the columns
which existed in the STATS$ENQUEUESTAT table in the 8.1 Statspack releases.

The column data migration performed by spup817.sql is as follows:

8.1 STATS$ENQUEUESTAT 9.0 STATS$ENQUEUE_STAT
--------------------- ----------------------
GETS TOTAL_REQ#
WAITS TOTAL_WAIT#


To further emphasise the difference, the column definitions appear below:

STATS$ENQUEUESTAT.GETS - 8.1
Reflected the number of enqueue gets, excluding enqueue conversions.
This statistic was incremented at the end of a get.

STATS$ENQUEUE_STAT.TOTAL_REQ# - 9.0
Is the total number of requests for an enqueue + the number of
enqueue conversions. This statistic is incremented at the beginning
of a get request.

STATS$ENQUEUESTAT.WAITS - 8.1
Reflected the number of times a session waited for at least 3
seconds for an enqueue operation (get or convert). The statistic
was incremented at the end of the wait (either if the enqueue was
successfully gotten or if the request timed out). If a session waited
for less than 3 seconds, this statistic was not incremented.

STATS$ENQUEUE_STAT.TOTAL_WAIT# - 9.0
Is the total number of times a session waited for any enqueue operation.
This statistic is incremented at the beginning of the wait.

For these reasons it is not valid to compare Enqueue statistics data
collected pre-9.0, to Enqueue statistics data captured in Oracle9i.


10.2.6. Upgrading the Statspack schema from 8.1.6 to 8.1.7

Follow the general instructions in section 10.2. 'Upgrading an existing
Statspack schema to a newer release' above.

Then, to estimate whether you have sufficient free space to run this
upgrade, execute the following SQL statement while connected as PERFSTAT in
SQL*Plus:

select 1.3*sum(bytes)/1024/1024 est_space_mb
from dba_segments
where segment_name in ('STATS$SQL_SUMMARY','STATS$SQL_SUMMARY_PK');

The est_space_mb column will give you a guesstimate as to the required
free space, in megabytes.
The larger the SQL statements in the sql_summary table, the more space will
be released after the upgrade is complete.

To upgrade:
- ensure you have sufficient free space in the tablespace
- disable any programs which use Statspack
- backup the Statspack schema (e.g. using export)
- run the upgrade by connecting as a user with SYSDBA privilege:
SQL> connect / as sysdba
SQL> @spup816

Once the upgrade script completes, check the log files (spup816a.lis and
spup816b.lis) for errors. If errors are evident, determine and rectify
the cause before proceeding. If no errors are evident, and you are upgrading
to 9.0, you may proceed with the upgrade.


10.2.7. Upgrading the Statspack schema from 8.1.6 to 9.2

If you are running 8.1.6 Statspack and wish to upgrade to 9.2 Statspack, you
must follow the upgrade steps - in the following order:
- 10.2.6. Upgrading the Statspack schema from 8.1.6 to 8.1.7
- 10.2.5. Upgrading the Statspack schema from 8.1.7 to 9.0
- 10.2.4. Upgrading the Statspack schema from 9.0 to 9.2


10.2.8. Upgrading the Statspack schema from 8.1.6 to 9.0

If you are running 8.1.6 Statspack and wish to upgrade to 9.0 Statspack, you
must follow the upgrade steps - in the following order:
- 10.2.6. Upgrading the Statspack schema from 8.1.6 to 8.1.7
- 10.2.5. Upgrading the Statspack schema from 8.1.7 to 9.0


10.2.9. Upgrading the Statspack schema from 8.1.7 to 9.2

If you are running 8.1.7 Statspack and wish to upgrade to 9.2 Statspack, you
must follow the upgrade steps - in the following order:
- 10.2.5. Upgrading the Statspack schema from 8.1.7 to 9.0
- 10.2.4. Upgrading the Statspack schema from 9.0 to 9.2

11. Oracle Real Application Clusters specific considerations
------------------------------------------------------------

11.1. Changing Instance Numbers

The unique identifier for a database instance used by Statspack is the
dbid and the instance_number. When in a Real Application Clusters environment,
it is possible the instance_number may change between startups (either
because the instance_number initialization parameter is set, or
because the instances are started in a different order).

Because Statspack uses the instance_number and the dbid to identify an
instance's snapshot preferences, a change in instance_number may
inadvertently result in a different set of levels or thresholds being
used when snapshotting an instance.

There are three conditions which must be met for this to occur:
- the instance numbers must have switched between startups
- the DBA must have modified the default Statspack parameters used for
at least one of the instances
- the parameters used (e.g. thresholds and snapshot level) must not be
the same on all instances

Note that the only way the parameters will differ is if the parameters
have been explicitly modified by the DBA after installation, either by
saving the specified values or by using the modify_statspack_parameter
procedure.
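
For example, a DBA may have changed the default snapshot level for the
locally connected instance with a call such as the following, run as
PERFSTAT (a sketch only; i_snap_level is one of the parameters the
procedure accepts):

SQL> execute statspack.modify_statspack_parameter(i_snap_level => 6);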

It is easy to check whether any of the Statspack snapshot parameters are
different for the instances by querying the STATS$STATSPACK_PARAMETER table.
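
For example, connected as PERFSTAT:

SQL> select * from stats$statspack_parameter order by instance_number;

If the snapshot levels or thresholds differ between instance numbers, the
situation described above can apply.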

NOTE:
If you have changed the default Statspack parameters, you can avoid
this problem by hard-coding the instance_number initialization
parameter for each instance of the clustered database.
For recommendations and issues with setting the instance_number
initialization parameter, please see the Real Application Clusters
documentation.


11.2. Real Application Clusters Specific Reports

sprepins.sql
sprepins.sql can be run to query performance data for any instance whose
data the PERFSTAT schema contains. The report will prompt for a dbid, an
instance_number, and begin and end snap Ids.

This report can be used when importing data from another instance, or in a
Real Application Clusters environment to report on an instance which you are
not connected to.

For more information on sprepins.sql, see the 'Running the instance report
when there are multiple instances' section of this document.

sprsqins.sql
sprsqins.sql can be run to query SQL performance data for any instance
whose data the PERFSTAT schema contains. The report will prompt for a dbid,
an instance_number, begin and end snap Ids, and a hash value.

This report can be used when importing data from another instance, or in a
Real Application Clusters environment to report on an instance which you
are not connected to.

For more information on sprsqins.sql, see the 'Running the SQL report
when there are multiple instances' section of this document.


11.3 Real Application Clusters Specific Data

New Real Application Clusters specific data displayed in Statspack instance
report:

- Page 2 of the Statspack report for a RAC instance displays RAC specific
derived statistics.
- RAC segment statistics
- RAC-specific data for Library Cache and Dictionary Cache
- Global Enqueue Statistics from v$ges_statistics
- Global CR Served Statistics
- Global CURRENT Served Statistics
- Global Cache Transfer Statistics


12. Conflicts and differences compared to UTLBSTAT/UTLESTAT
------------------------------------------------------------

12.1. Running BSTAT/ESTAT in conjunction to Statspack

If you choose to run BSTAT/ESTAT in conjunction with Statspack, do not run
both as the same user, as there is a table name conflict: the conflicting
table is stats$waitstat.


12.2. Differences between Statspack and BSTAT/ESTAT

Statspack considers a transaction to either finish with a commit or a
rollback, and so calculates the number of transactions thus:
'user commits' + 'user rollbacks'

BSTAT/ESTAT considers a transaction to complete with a commit only, and
so assumes that transactions = 'user commits'

For this reason, comparing per transaction statistics between Statspack and
BSTAT/ESTAT may result in significantly different per transaction ratios.
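
As an illustration of the Statspack definition, the cumulative number of
transactions since instance startup can be computed from v$sysstat (a
sketch only; Statspack itself derives the per-snapshot figures from the
statistics it captures):

select sum(value) transactions
  from v$sysstat
 where name in ('user commits', 'user rollbacks');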

13. Removing the package
-------------------------

To deinstall the package, connect as a user with SYSDBA privilege and run
the following script from SQL*Plus: spdrop
e.g.
SQL> connect / as sysdba
SQL> @spdrop

This script calls two other scripts:
1. spdtab -> Drops tables and public synonyms
2. spdusr -> Drops the user

Check each of the two output files produced (spdtab.lis, spdusr.lis)
to ensure the package was completely deinstalled.

14. Supplied Scripts Overview
------------------------------

Installation

Must be run as a user with SYSDBA privilege
spcreate.sql -> Creates entire Statspack environment (calls
spcusr.sql, spctab.sql, spcpkg.sql)
spdrop.sql -> Drops entire Statspack environment (calls
spdtab.sql, spdusr.sql)

Are run as a user with SYSDBA priv by the calling scripts (above)
spdtab.sql -> Drops Statspack tables
spdusr.sql -> Drops the Statspack user (PERFSTAT)

Are run as PERFSTAT by the calling scripts (above)
spcusr.sql -> Creates the Statspack user (PERFSTAT)
spctab.sql -> Creates Statspack tables
spcpkg.sql -> Creates the Statspack package


Reporting and Automation

Must be run as PERFSTAT
spreport.sql -> Generates a Statspack Instance report
sprepins.sql -> Generates a Statspack Instance report for the
database and instance specified
sprepsql.sql -> Generates a Statspack SQL report for the
SQL Hash Value specified
sprsqins.sql -> Generates a Statspack SQL report for the
SQL Hash Value specified, for the database and
instance specified
spauto.sql -> Automates Statspack statistics collection
(using dbms_job)
sprepcon.sql -> Script which configures SQL*Plus variables which
affect certain aspects of the Statspack instance
report (spreport.sql). This script is automatically
called as a part of the Statspack instance report.

Upgrading

Must be run as SYSDBA
spup92.sql -> Converts data from the 9.2 schema to the
newer 10.1 schema. Backup the existing schema
before running the upgrade. If upgrading from
Statspack 8.1.6, spup816.sql must be run, then
spup817.sql, then spup90.sql, then spup92.sql
spup90.sql -> Converts data from the 9.0 schema to the
newer 9.2 schema. Backup the existing schema
before running the upgrade. If upgrading from
Statspack 8.1.6, spup816.sql must be run, then
spup817.sql, then spup90.sql
spup817.sql -> Converts data from the 8.1.7 schema to the
newer 9.0 schema. Backup the existing schema
before running the upgrade. If upgrading from
Statspack 8.1.6, spup816.sql must be run, then
spup817.sql
spup816.sql -> Converts data from the 8.1.6 schema to the
8.1.7 schema. Backup the existing schema
before running the upgrade


Performance Data Maintenance

Must be run as PERFSTAT
sppurge.sql -> Purges a limited range of Snapshot Id's for
a given database instance
sptrunc.sql -> Truncates all Performance data in Statspack tables
WARNING - Do not use unless you wish to remove
all data in the schema you are using.
You may choose to export the data
as a backup before using this script
spuexp.par -> An export parameter file supplied for exporting
the whole PERFSTAT user


Documentation

Should be read by the DBA running the scripts
spdoc.txt -> This file contains instructions and
documentation on the STATSPACK package

15. Limitations and Modifications
----------------------------------

15.1. Limitations

As the Statspack schema is updated to reflect the features in the
latest Oracle releases, the schema may change; backward compatibility
is not guaranteed.


15.2. Modifications

All Statspack code is Oracle proprietary and must not be modified. Any
modifications made to Statspack software will render the code and
data captured thereafter unsupported; unsupported changes may result in
errors in data capture or reporting. Instead, please request enhancements.


-------------------------------------------------------------------------
