LogMiner: Usage Introduction and Notes
Please credit the source when reposting: http://blog.csdn.net/xiaofan23z
Environment: Linux + Oracle 10g two-node RAC
I. Install the LogMiner tool. Run the following two scripts as SYSDBA:
SQL> @$ORACLE_HOME/rdbms/admin/dbmslm.sql;
Package created.
Grant succeeded.
## Creates the DBMS_LOGMNR package, which is used to analyze log files.
SQL> @$ORACLE_HOME/rdbms/admin/dbmslmd.sql;
Package created.
## Creates the DBMS_LOGMNR_D package, which is used to create the data dictionary file.
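An optional sanity check: you can confirm that both packages exist and are valid by querying DBA_OBJECTS from a DBA account, for example:
SQL> select object_name, object_type, status from dba_objects where owner = 'SYS' and object_name in ('DBMS_LOGMNR', 'DBMS_LOGMNR_D');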
II. Use the LogMiner tool
The following describes how to use LogMiner in detail.
1. Create the data dictionary file
1). First, specify the location of the data dictionary file in the init.ora initialization parameter file by adding the UTL_FILE_DIR parameter, whose value is the directory on the server where the dictionary file will be placed, e.g. UTL_FILE_DIR = $ORACLE_HOME/logs. Then restart the database so the new parameter takes effect:
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shutdown.
SQL> startup mount
ORACLE instance started.
Total System Global Area 922746880 bytes
Fixed Size 1222624 bytes
Variable Size 209717280 bytes
Database Buffers 704643072 bytes
Redo Buffers 7163904 bytes
Database mounted.
SQL> alter system set utl_file_dir='/u01/app/oracle/product/10.2.0/db_1/log' scope=spfile;
System altered.
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shutdown.
SQL>
SQL> startup mount
ORACLE instance started.
Total System Global Area 922746880 bytes
Fixed Size 1222624 bytes
Variable Size 209717280 bytes
Database Buffers 704643072 bytes
Redo Buffers 7163904 bytes
Database mounted.
SQL>
SQL> show parameter utl
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
create_stored_outlines               string
utl_file_dir                         string      /u01/app/oracle/product/10.2.0
                                                 /db_1/log
SQL>
SQL> alter database open;
Database altered.
SQL>
PS: LogMiner uses a dictionary file, which is a special file that
indicates the database that created it as well as the time the file was
created. The dictionary file is not required, but is recommended. Without a
dictionary file, the equivalent SQL statements will use Oracle internal object
IDs for the object name and present column values as hex data.
You can run LogMiner without a data dictionary, but Oracle recommends using one.
From: How to Setup LogMiner [ID 111886.1]
2). Then create the data dictionary file:
SQL> execute dbms_logmnr_d.build(dictionary_filename => 'dict.ora', dictionary_location => '/u01/app/oracle/product/10.2.0/db_1/log');
PL/SQL procedure successfully completed.
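To confirm the dictionary file was actually written to the UTL_FILE_DIR directory, a quick check from SQL*Plus (using the path configured above):
SQL> host ls -l /u01/app/oracle/product/10.2.0/db_1/log/dict.ora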
2. Create the list of log files to analyze
1). Create the analysis list, i.e. the logs to be analyzed.
SQL> execute dbms_logmnr.add_logfile(LogFileName => '+DATA/ldbrac/onlinelog/group_3.266.732154759', Options => dbms_logmnr.new); ## Choose the logs to analyze by time point, or by querying v$log and v$archived_log (see the query sketch below).
PL/SQL procedure successfully completed.
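As mentioned in the comment above, candidate log files can be located in v$log/v$logfile (online logs) or v$archived_log (archived logs); a sketch of such queries, using standard columns of those views:
SQL> select l.group#, l.thread#, l.sequence#, l.status, f.member
  2    from v$log l, v$logfile f
  3   where l.group# = f.group#;
SQL> select name, thread#, sequence#, first_time, next_time
  2    from v$archived_log
  3   where first_time >= to_date('2012-04-21 10:00:00', 'yyyy-mm-dd hh24:mi:ss');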
### To verify the effect, drop a table and then switch the log file:
drop table askey_id;
SQL> alter system switch logfile;
System altered.
Add the switched log to the analysis list:
SQL> execute dbms_logmnr.add_logfile(LogFileName => '+DATA/ldbrac/onlinelog/group_2.262.732154617', Options => dbms_logmnr.ADDFILE);
PL/SQL procedure successfully completed.
The alert log will show messages like the following:
Sat Apr 21 11:35:19 2012
LOGMINER: Begin mining logfile: +DATA/ldbrac/onlinelog/group_2.262.732154617
Sat Apr 21 11:35:19 2012
LOGMINER: Begin mining logfile: +DATA/ldbrac/onlinelog/group_3.266.732154759
3. Analyze the logs with LogMiner
1). Without restrictions, i.e. analyze the entire content of the listed log files using the data dictionary file:
SQL> execute dbms_logmnr.start_logmnr(DictFileName => '/u01/app/oracle/product/10.2.0/db_1/log/dict.ora');
PL/SQL procedure successfully completed.
2). With restrictions: the analysis can be limited by SCN or by time, and the two can be combined (an SCN-bounded sketch follows the procedure prototype below).
-- Analyze the content of the listed logs from 10:00 to 13:00 on 2012-04-21
SQL> execute dbms_logmnr.start_logmnr(startTime => to_date('20120421100000','yyyymmddhh24miss'), endTime => to_date('20120421130000','yyyymmddhh24miss'), DictFileName => '/u01/app/oracle/product/10.2.0/db_1/log/dict.ora');
PL/SQL procedure successfully completed
The prototype of the dbms_logmnr.start_logmnr procedure is:
PROCEDURE start_logmnr(
    startScn     IN NUMBER default 0,
    endScn       IN NUMBER default 0,
    startTime    IN DATE default '',
    endTime      IN DATE default '',
    DictFileName IN VARCHAR2 default '',
    Options      IN BINARY_INTEGER default 0);
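As noted above, the analysis window can also be bounded by SCN rather than time; a sketch with the same dictionary file (the SCN values here are placeholders):
SQL> execute dbms_logmnr.start_logmnr(startScn => 100, endScn => 150, DictFileName => '/u01/app/oracle/product/10.2.0/db_1/log/dict.ora');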
4. Analyze the data
V$LOGMNR_LOGS is the view that lists the log files being analyzed.
The analysis results are in the V$LOGMNR_CONTENTS view (GV$LOGMNR_CONTENTS across RAC instances).
Query the analysis results with a filter:
SQL> select * from V$LOGMNR_CONTENTS where sql_redo like 'drop%';
(The SELECT * output is very wide and wraps in SQL*Plus; its column headers include SCN, CSCN, TIMESTAMP, COMMIT_TIMESTAMP, THREAD#, LOG_ID, XIDUSN, XIDSLT, XIDSQN, PXIDUSN, PXIDSLT, PXIDSQN, RBASQN, RBABLK, RBABYTE, UBAFIL, UBABLK, UBAREC, UBASQN, ABS_FILE#, REL_FILE#, and more; output truncated.)
Queries against v$logmnr_contents must be run in the same session that started LogMiner, because the analyzed
information is stored in that session's PGA; it cannot be seen from any other session.
If you try to query this view from another session, you get the following error:
SQL> select * from V$LOGMNR_CONTENTS;
select * from V$LOGMNR_CONTENTS
ERROR at line 1:
ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from
v$logmnr_contents
The analysis results in v$logmnr_contents exist only for the lifetime of the session that ran the procedure
dbms_logmnr.start_logmnr. This is because all LogMiner storage lives in PGA memory, so no other process can
see it, and the results disappear when the process ends. Finally, calling the procedure DBMS_LOGMNR.END_LOGMNR
terminates the log analysis, at which point the PGA area is cleared and the results are gone.
You can work around this by materializing the results into a table with CTAS:
SQL> create table logmnr_tab1 as select * from V$LOGMNR_CONTENTS;
Table created.
SQL>
Then run the analysis against that table:
select scn, timestamp, log_id, seg_owner, seg_name, table_name, seg_type_name, operation, sql_redo
from sys.logmnr_tab1 where sql_redo like 'drop%';
You can also query for specific DDL/DML operations with a filter, as in the sketch below.
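For example, sketches that pull only DDL issued by one schema, or DML against one table, from the materialized copy ('SCOTT' and 'EMP' are placeholder names; OPERATION, SEG_OWNER, TABLE_NAME and SQL_UNDO are standard V$LOGMNR_CONTENTS columns carried over by the CTAS):
select scn, timestamp, seg_owner, table_name, operation, sql_redo
from sys.logmnr_tab1
where operation = 'DDL' and seg_owner = 'SCOTT';
select scn, timestamp, operation, sql_redo, sql_undo
from sys.logmnr_tab1
where seg_owner = 'SCOTT' and table_name = 'EMP'
and operation in ('INSERT', 'UPDATE', 'DELETE');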
5. Release memory after analysis
SQL> execute dbms_logmnr.end_logmnr;
PL/SQL procedure successfully completed
6. Miscellaneous
1). Remove a log file from the analysis list
exec dbms_logmnr.add_logfile('+DATA/ldbrac/onlinelog/group_2.262.732154617', dbms_logmnr.removefile);
Note:
BEGIN
  DBMS_LOGMNR.REMOVE_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f');
END;
/
Note: If a logfile is removed from the session, you must call the
DBMS_LOGMNR.START_LOGMNR procedure again before accessing v$logmnr_contents.
If you do not, you will receive an error:
SQL> select count(*) from v$logmnr_contents;
select count(*) from v$logmnr_contents
*
ERROR at line 1:
ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from
v$logmnr_contents
In other words, after you remove a redo log from the list, you must restart LogMiner; otherwise you will also hit the ORA-01306 error.
Appendix: the notes above are not exhaustive; for more detail, see the official Oracle documentation.
How to Setup LogMiner [ID 111886.1]
Modified 31-JAN-2012  Type BULLETIN  Status PUBLISHED
Oracle Server - Enterprise Edition - Version: 8.1.5.0 to 11.2.0.0 - Release: 8.1.5 to 11.2
Oracle Server - Standard Edition - Version: 8.1.5.0 to 11.2.0.2.0 [Release: 8.1.5 to 11.2]
Information in this document applies to any platform.
Checked for relevance on 24-Jun-2010
To provide the steps for setting up LogMiner on the database.
This is intended to help users set up LogMiner.
RELATED DOCUMENTS
Oracle8i Administrator's Guide, Release 8.1.6
Oracle8i Administrator's Guide, Release 8.1.7
Oracle9i Database Administrator's Guide Release 2 (9.2)
Oracle® Database Utilities 10g Release 1 (10.1)
Oracle® Database Utilities 10g Release 2 (10.2)
Oracle® Database Utilities 11g Release 1 (11.1)
Oracle® Database Utilities 11g Release 2 (11.2)
Introduction:
=============
LogMiner runs in an Oracle instance with the database either mounted or
unmounted. LogMiner uses a dictionary file, which is a special file that
indicates the database that created it as well as the time the file was
created. The dictionary file is not required, but is recommended. Without a
dictionary file, the equivalent SQL statements will use Oracle internal object
IDs for the object name and present column values as hex data.
For example, instead of the SQL statement:
INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);
LogMiner will display:
insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
hextoraw('c306'));
Create a dictionary file by mounting a database and then extracting dictionary
information into an external file.
You must create the dictionary file from the same database that generated the
log files you want to analyze. Once created, you can use the dictionary file
to analyze redo logs.
When creating the dictionary, specify the following:
* DICTIONARY_FILENAME to name the dictionary file.
* DICTIONARY_LOCATION to specify the location of the file.
LogMiner analyzes redo log files from any version 8.0.x and later Oracle
database that uses the same database character set and is running on the same
hardware as the analyzing instance.
Note: The LogMiner packages are owned by the SYS schema. Therefore, if you
are not connected as user SYS, you must include SYS in your call. For
example:
EXECUTE SYS.DBMS_LOGMNR_D.BUILD
To Create a Dictionary File on an Oracle8 Database:
===================================================
Although LogMiner only runs on databases of release 8.1 or higher, you can
use it to analyze redo logs from release 8.0 databases.
1. Use an O/S command to copy the dbmslmd.sql script, which is contained in the
$ORACLE_HOME/rdbms/admin directory on the Oracle8i database, to the same
directory in the Oracle8 database.
For example, enter:
% cp /8.1/oracle/rdbms/admin/dbmslmd.sql /8.0/oracle/rdbms/admin/dbmslmd.sql
Note: In 8.1.5 the script is dbmslogmnrd.sql.
In 8.1.6 the script is dbmslmd.sql.
2. Use SQL*Plus to mount and then open the database whose files you want to
analyze. For example, enter:
STARTUP
3. Execute the copied dbmslmd.sql script on the 8.0 database to create the
DBMS_LOGMNR_D package.
For example, enter: @dbmslmd.sql
4. Make sure to specify an existing directory that Oracle has permissions
to write to by the PL/SQL procedure by setting the initialization
parameter UTL_FILE_DIR in the init.ora.
For example, set the following to use /8.0/oracle/logs:
UTL_FILE_DIR = /8.0/oracle/logs
Be sure to shutdown and restart the instance after adding UTL_FILE_DIR
to the init.ora.
If you do not reference this parameter, the procedure will fail.
5. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a file name
for the dictionary and a directory pathname for the file. This procedure
creates the dictionary file, which you should use to analyze log files.
For example, enter the following to create file dictionary.ora in
/8.0/oracle/logs:
(REMEMBER TO INCLUDE THE DASH '-' CONTINUATION CHARACTER AT THE END OF
EACH LINE WHEN ENTERING A MULTI-LINE PL/SQL COMMAND IN SQL*PLUS)
EXECUTE DBMS_LOGMNR_D.BUILD(-
DICTIONARY_FILENAME =>'dictionary.ora',-
DICTIONARY_LOCATION => '/8.0/oracle/logs');
After creating the dictionary file on the Oracle 8.0.x instance, the
dictionary file and any archived logs to be mined must be moved to the
server running the 8.1.x database on which LogMiner will be run if it is
different from the server which generated the archived logs.
To Create a Dictionary File on an Oracle8i Database:
====================================================
1. Make sure to specify an existing directory that Oracle has permissions
to write to by the PL/SQL procedure by setting the initialization
parameter UTL_FILE_DIR in the init.ora.
For example, set the following to use /oracle/logs:
UTL_FILE_DIR = /oracle/logs
Be sure to shutdown and restart the instance after adding UTL_FILE_DIR
to the init.ora.
If you do not reference this parameter, the procedure will fail.
2. Use SQL*Plus to mount and then open the database whose files you want to
analyze. For example, enter:
STARTUP
3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a file name
for the dictionary and a directory pathname for the file. This procedure
creates the dictionary file, which you should use to analyze log files.
For example, enter the following to create file dictionary.ora in
/oracle/logs:
(REMEMBER TO INCLUDE THE DASH '-' CONTINUATION CHARACTER AT THE END OF
EACH LINE WHEN ENTERING A MULTI-LINE PL/SQL COMMAND IN SQL*PLUS)
EXECUTE DBMS_LOGMNR_D.BUILD( -
DICTIONARY_FILENAME =>'dictionary.ora', -
DICTIONARY_LOCATION => '/oracle/logs');
To Create a Dictionary on the Oracle Database (9i and later)
====================================================
In the 9i and later releases, the ability to extract the dictionary to a flat file as well as creating a dictionary with the redo logs is available.
For example, enter the following to create the file dictionary.ora in /oracle/database:
1. Make sure to specify an existing directory that Oracle has permissions
to write to by the PL/SQL procedure by setting the initialization
parameter UTL_FILE_DIR in the init.ora.
For example, set the following to use /oracle/database:
UTL_FILE_DIR = /oracle/database
Be sure to shutdown and restart the instance after adding UTL_FILE_DIR to the init or spfile.
If you do not reference this parameter, the procedure will fail.
2. Use SQL*Plus to mount and then open the database whose files you want to
analyze. For example, enter:
STARTUP
3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a file name
for the dictionary and a directory pathname for the file. This procedure
creates the dictionary file, which you should use to analyze log files.
For example, enter the following to create file dictionary.ora in
/oracle/database:
Example:
-------------
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
2 '/oracle/database/', -
3 OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
If extracting the database dictionary information to the redo logs, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option and do not specify a filename or location.
Example:
-------------
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ( -
2 OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
Please note that to extract a dictionary to the redo logs, the database must be open and in ARCHIVELOG mode, and archiving must be enabled.
Also, to make sure that the redo logs contain information that will provide the most value to you, you should enable at least minimal supplemental logging:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
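To verify that minimal supplemental logging is in effect, you can check the SUPPLEMENTAL_LOG_DATA_MIN column of V$DATABASE, for example:
SQL> SELECT supplemental_log_data_min FROM v$database;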
Specifying Redo Logs for Analysis
=================================
Once you have created a dictionary file, you can begin analyzing redo logs.
Your first step is to specify the log files that you want to analyze using
the ADD_LOGFILE procedure. Use the following constants:
* NEW to create a new list.
* ADDFILE to add redo logs to a list.
* REMOVEFILE to remove redo logs from the list.
To Use LogMiner:
1. Use SQL*Plus to start an Oracle instance, with the database either mounted
or unmounted.
For example, enter:
STARTUP
2. Create a list of logs by specifying the NEW option when executing the
DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to
specify /oracle/logs/log1.f:
(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/logs/log1.f', -
OPTIONS => dbms_logmnr.NEW);
3. If desired, add more logs by specifying the ADDFILE option.
For example, enter the following to add /oracle/logs/log2.f:
(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/logs/log2.f', -
OPTIONS => dbms_logmnr.ADDFILE);
4. If desired, remove logs by specifying the REMOVEFILE option.
For example, enter the following to remove /oracle/logs/log2.f:
(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/logs/log2.f', -
OPTIONS => dbms_logmnr.REMOVEFILE);
Using LogMiner:
===============
Once you have created a dictionary file and specified which logs to analyze,
you can start LogMiner and begin your analysis. Use the following options to
narrow the range of your search at start time:
This option Specifies
=========== =========
STARTSCN The beginning of an SCN range.
ENDSCN The termination of an SCN range.
STARTTIME The beginning of a time interval.
ENDTIME The end of a time interval.
DICTFILENAME The name of the dictionary file.
Once you have started LogMiner, you can make use of the following data
dictionary views for analysis:
This view Displays information about
=================== ==================================================
V$LOGMNR_DICTIONARY The dictionary file in use.
V$LOGMNR_PARAMETERS Current parameter settings for LogMiner.
V$LOGMNR_LOGS Which redo log files are being analyzed.
V$LOGMNR_CONTENTS The contents of the redo log files being analyzed.
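For instance, after adding log files and starting LogMiner, a quick check of what is loaded and which settings are in effect might look like this (a sketch using standard columns of these views):
SQL> SELECT log_id, filename, low_time, high_time FROM v$logmnr_logs;
SQL> SELECT * FROM v$logmnr_parameters;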
To Use LogMiner:
================
1. Issue the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner utility.
For example, if using the online catalog as your dictionary source, issue:
(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
   OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
If using a dictionary file (e.g. /oracle/dictionary.ora), you would issue:
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
DICTFILENAME =>'/oracle/dictionary.ora');
Optionally, set the STARTTIME and ENDTIME parameters to filter data by time.
Note that the procedure expects date values: use the TO_DATE function to
specify date and time, as in this example:
(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
DICTFILENAME => '/oracle/dictionary.ora', -
STARTTIME => to_date('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
ENDTIME => to_date('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
Use the STARTSCN and ENDSCN parameters to filter data by SCN, as in this
example:
(INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
DICTFILENAME => '/oracle/dictionary.ora', -
STARTSCN => 100, -
ENDSCN => 150);
2. View the output via the V$LOGMNR_CONTENTS table. LogMiner returns all rows
in SCN order, which is the same order applied in media recovery.
For example, the following query lists information about operations:
SELECT operation, sql_redo FROM v$logmnr_contents;
OPERATION SQL_REDO
--------- ----------------------------------------------------------
INTERNAL
INTERNAL
START set transaction read write;
UPDATE update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
COMMIT commit;
START set transaction read write;
UPDATE update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
COMMIT commit;
START set transaction read write;
UPDATE update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
COMMIT commit;
11 rows selected.
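A more targeted query is often more useful than SELECT *; the sketch below filters by schema and operation and also returns SQL_UNDO, which holds the statement that would reverse each change ('SCOTT' is a placeholder schema name):
SELECT timestamp, operation, sql_redo, sql_undo
FROM v$logmnr_contents
WHERE seg_owner = 'SCOTT'
AND operation IN ('INSERT', 'UPDATE', 'DELETE');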
Analyzing Archived Redo Log Files from Other Databases:
=======================================================
You can run LogMiner on an instance of a database while analyzing redo log
files from a different database. To analyze archived redo log files from other
databases,
LogMiner must:
* Access a dictionary file that is both created from the same database as the
redo log files and created with the same database character set.
* Run on the same hardware platform that generated the log files, although it
does not need to be on the same system.
* Use redo log files that can be applied for recovery from Oracle version 8.0
and later.
REFERENCES:
==========
<148616.1> - Oracle9i LogMiner New Features
<249001.1> - Oracle 10g New Features of LogMiner
<174504.1> - LogMiner - Frequently Asked Questions (FAQ)