1. Log in as sysdba and check the data file paths.
C:\Documents and Settings\Administrator>sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Apr 14 10:51:41 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
sys@AAA>show user;
USER is "SYS"
sys@AAA>select file_name from dba_data_files;
FILE_NAME
--------------------------------------------------
D:\ORACLE\PRODUCT\10.2.0\ORADATA\AAA\USERS01.DBF
D:\ORACLE\PRODUCT\10.2.0\ORADATA\AAA\SYSAUX01.DBF
D:\ORACLE\PRODUCT\10.2.0\ORADATA\AAA\UNDOTBS01.DBF
D:\ORACLE\PRODUCT\10.2.0\ORADATA\AAA\SYSTEM01.DBF
D:\ORACLE\PRODUCT\10.2.0\ORADATA\AAA\BBB.DBF
D:\ORACLE\PRODUCT\10.2.0\ORADATA\AAA\PERFSTAT.DBF
6 rows selected.
2. Create a tablespace to hold the statspack data. (Note: statspack tends to generate a large volume of analysis data, so err on the large side.)
create tablespace perfstat datafile 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\AAA\PERFSTAT.DBF' size 2G;
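If you would rather not pre-allocate the full 2G up front, an autoextending datafile is a common alternative (a sketch only; the starting size and growth increment here are arbitrary examples):

-- start smaller and let the file grow on demand, capped at 2G
create tablespace perfstat
datafile 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\AAA\PERFSTAT.DBF' size 500M
autoextend on next 100M maxsize 2G;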
3. Run the statspack installation script. By default it lives at $ORACLE_HOME\rdbms\admin\spcreate.sql.
sys@AAA> @D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\spcreate.sql
...................
Enter value for perfstat_password: perfstat
...
Enter value for default_tablespace: perfstat
..........
..........
Enter value for temporary_tablespace: temp
..........
..........
After the installation the session is automatically switched to the PERFSTAT user:
PERFSTAT@AAA> show user;
USER is "PERFSTAT"
Installation complete!
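Before scheduling anything, it is worth taking one snapshot by hand to confirm the installation works (the explicit snapshot level is optional; 7 below is only an example, the default is 5):

exec statspack.snap;                      -- snapshot at the default level
exec statspack.snap(i_snap_level => 7);   -- or ask for a more detailed level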
4. Now for the sampling. Set up a job that takes a snapshot once an hour.
First check whether the DB already has any scheduled jobs:
perfstat@AAA>select job,schema_user,next_date,interval,what from user_jobs;
no rows selected
Create the statspack sampling job, one snapshot every hour:
perfstat@AAA>variable job number;
perfstat@AAA>begin
2 dbms_job.submit(:job,'statspack.snap;',trunc(sysdate+1/24,'hh24'),'trunc(sysdate+1/24,''hh24'')');
3 commit;
4 end;
5 /
PL/SQL procedure successfully completed.
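As an aside, Oracle ships a ready-made script for exactly this, spauto.sql (in the same rdbms\admin directory as spcreate.sql); it submits an equivalent hourly statspack.snap job through dbms_job:

@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\spauto.sql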
Now which jobs are scheduled?
perfstat@AAA>select job as j,schema_user,next_date,interval,what from user_jobs;
         J SCHEMA_USER                    NEXT_DATE      INTERVAL                    WHAT
---------- ------------------------------ -------------- --------------------------- ---------------
         1 PERFSTAT                       14-APR-10      trunc(sysdate+1/24,'hh24')  statspack.snap;
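Should one snapshot per hour prove too coarse or too fine, the interval can be changed in place with dbms_job.change (a sketch; job 1 is the number from the listing above, and the 30-minute interval is just an example):

begin
  -- null for WHAT and NEXT_DATE leaves them unchanged; only the interval is replaced
  dbms_job.change(1, null, null, 'sysdate+30/1440');
  commit;
end;
/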
5. Statspack collection and analysis does a great deal of work inside the DB and produces a large volume of data, so frequent sampling will eat into system performance, especially on a production database. Once you have created the hourly job above, be sure to stop it when it is no longer needed. Otherwise the oversight can be costly: the statspack job will fire every hour and keep running forever, which is the last thing you want on a production system.
Tomorrow before dawn the system will be fairly idle and sampling will be of little value (the whole point of the sampling is to analyze the bottlenecks during peak hours), so schedule the job to stop then.
perfstat@AAA>variable job number;
perfstat@AAA>begin
2 dbms_job.submit(:job,'dbms_job.broken(1,true);',trunc(sysdate+1),'null');
3 commit;
4 end;
5 /
PL/SQL procedure successfully completed.
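If you do not want to wait until tomorrow, the same thing can be done on the spot (again, 1 is the job number from the listing above):

exec dbms_job.broken(1, true);   -- suspend the job; dbms_job.broken(1, false) re-enables it
exec dbms_job.remove(1);         -- or remove it from the job queue altogether
commit;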
6. A few hours later, see which snapshots have been generated.
perfstat@AAA>select snap_id,snap_time,startup_time from stats$snapshot;
SNAP_ID SNAP_TIME STARTUP_TIME
---------- -------------- --------------
         1 14-APR-10      14-APR-10
         2 14-APR-10      14-APR-10
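The default date format hides the time of day; to see exactly when each snapshot was taken, format the columns explicitly:

select snap_id,
       to_char(snap_time, 'YYYY-MM-DD HH24:MI:SS') snap_time,
       to_char(startup_time, 'YYYY-MM-DD HH24:MI:SS') startup_time
  from stats$snapshot
 order by snap_id;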
7. Pick any two snapshots and generate a performance report for the interval between them (this time run the spreport script, found in the same path as the previous one).
perfstat@AAA>@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\spreport.sql
Current Instance
~~~~~~~~~~~~~~~~
DB Id DB Name Inst Num Instance
----------- ------------ -------- ------------
1858440386 AAA 1 aaa
Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
----------- -------- ------------ ------------ ------------
 1858440386        1 AAA          aaa          6979580041BD490
Using 1858440386 for database Id
Using 1 for instance number
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Listing all Completed Snapshots
Snap
Instance DB Name Snap Id Snap Started Level Comment
------------ ------------ --------- ----------------- ----- -------------
aaa          AAA                1 14 Apr 2010 09:52     5
                                2 14 Apr 2010 09:53     5
                                3 14 Apr 2010 11:00     5
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
Begin Snapshot Id specified: 1
Enter value for end_snap: 2
End Snapshot Id specified: 2
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is sp_1_2. To use this name,
press <return> to continue, otherwise enter an alternative.
Enter value for report_name: d:\myreport.txt
... press <return>
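Incidentally, spreport.sql reads its parameters from SQL*Plus substitution variables, so the report can also be produced without any prompting by defining them beforehand (a sketch; the snapshot ids and file name are just examples):

define begin_snap=1
define end_snap=2
define report_name=d:\myreport.txt
@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\spreport.sql
undefine begin_snap end_snap report_name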
8. When it finishes, an analysis report is produced (d:\myreport.txt).
Appendix: an excerpt from the report:
STATSPACK report for
Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
          1858440386 aaa                 1 14-Apr-10 09:22 10.2.0.1.0  NO
Host Name: 6979580041BD490 Num CPUs: 2 Phys Memory (MB): 3,326
~~~~
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- -------------------
Begin Snap: 1 14-Apr-10 09:52:22 15 4.3
End Snap: 2 14-Apr-10 09:53:20 15 5.8
Elapsed: 0.97 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 184M Std Block Size: 8K
Shared Pool Size: 380M Log Buffer: 6,860K
Load Profile Per Second Per Transaction
~~~~~~~~~~~~ --------------- ---------------
Redo size: 10,075.66 584,388.00
Logical reads: 58.41 3,388.00
Block changes: 18.81 1,091.00
Physical reads: 0.22 13.00
Physical writes: 0.00 0.00
User calls: 0.52 30.00
Parses: 2.83 164.00
Hard parses: 0.72 42.00
Sorts: 1.76 102.00
Logons: 0.02 1.00
Executes: 10.88 631.00
Transactions: 0.02
% Blocks changed per Read: 32.20 Recursive Call %: 99.69
Rollback per transaction %: 0.00 Rows per Sort: 70.69
Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.62 In-memory Sort %: 100.00
Library Hit %: 90.06 Soft Parse %: 74.39
Execute to Parse %: 74.01 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 14.29 % Non-Parse CPU: 98.41
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 21.05 20.98
% SQL with executions>1: 54.05 60.06
% Memory for SQL w/exec>1: 80.51 83.00
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
CPU time 1 70.7
control file sequential read 189 0 1 23.6
db file sequential read 8 0 3 2.5
control file parallel write 27 0 1 1.9
log file sync 1 0 5 .6
.....................
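If a particular statement stands out in the report's SQL sections, statspack can drill into it: sprepsql.sql (same directory) prompts for a begin/end snapshot pair plus the statement's hash value and produces a statement-level report:

@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\sprepsql.sql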
9. To delete a specific snapshot, specify its snap_id and delete it directly (the statspack detail tables reference stats$snapshot with ON DELETE CASCADE, so the related rows go with it):
delete from stats$snapshot where snap_id=1;
commit;
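To purge a whole range of snapshots in one go, the supplied sppurge.sql script (also under rdbms/admin) prompts for a low and a high snap id and deletes everything in between:

@D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\sppurge.sql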
To delete all snapshots while keeping the statspack structure, run @sptrunc (the script is also under rdbms/admin). And if you want to get rid of statspack entirely, you can: run @spdrop.
From here on, you too can use statspack to get a picture of how your database is running.