How to Use LogMiner in Oracle

This article describes how to use LogMiner in Oracle. The content is fairly detailed; readers who are interested can use it as a reference, and hopefully it will be helpful.

Introduction to LogMiner

LogMiner is a very useful analysis tool that Oracle has shipped since release 8i. With it you can easily read the actual contents of Oracle redo log files (archived log files). LogMiner itself consists of a set of PL/SQL packages and several dynamic views; it is distributed as part of the Oracle database and is a completely free tool provided by Oracle.
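
As a quick orientation before the step-by-step walk-through below, a minimal LogMiner session generally follows the pattern sketched here (the log file path is just a placeholder; the concrete steps are shown in detail in the rest of this article):

--Minimal LogMiner workflow sketch (the log file path is a placeholder)
begin
 dbms_logmnr.add_logfile('/path/to/archived_log');                         --register one or more redo/archived log files
 dbms_logmnr.start_logmnr(Options=>dbms_logmnr.dict_from_online_catalog);  --use the online catalog as the dictionary
end;
/
select scn, timestamp, sql_redo from v$logmnr_contents;                    --mined records are exposed through this view
exec dbms_logmnr.end_logmnr                                                --close the session when finished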

Environment: Oracle 11.2.0.4 RAC

1. Query the current log groups

Query the database's current log groups as the sys user:

--1.current log
SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME   NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------ ------------ ------------
         1          1         29   52428800        512          2 YES INACTIVE               1547838 25-JUN-17         1547840 25-JUN-17
         2          1         30   52428800        512          2 NO  CURRENT                1567897 27-JUN-17      2.8147E+14 27-JUN-17
         3          2         25   52428800        512          2 NO  CURRENT                1567902 27-JUN-17      2.8147E+14
         4          2         24   52428800        512          2 YES INACTIVE               1567900 27-JUN-17         1567902 27-JUN-17

The current logs here are:

thread 1 sequence 30

thread 2 sequence 25
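
If only the current thread/sequence pairs are needed (rather than the full v$log listing above), a shorter query like the following sketch should also work:

--List only the current log of each thread (sketch)
select thread#, sequence# from v$log where status = 'CURRENT' order by thread#;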

2. Business user insert operation

Simulate the business user jingyu inserting data into table T2:

--2. Business user insert operation
sqlplus jingyu/jingyu@jyzhao

SQL> select count(1) from t2;

  COUNT(1)
----------
         0

SQL> insert into t2 select rownum, rownum, rownum, dbms_random.string('b',50) from dual connect by level <= 100000 order by dbms_random.random;

100000 rows created.

SQL> commit;

Commit complete.

SQL> select count(1) from t2;

  COUNT(1)
----------
    100000

3. Archive log switch

To separate the different operations into different logs, the archived logs are switched manually here, simulating the log switches that happen in a real system.

--3. Simulate an archive log switch
SQL> alter system archive log current;

System altered.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME   NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------ ------------ ------------
         1          1         31   52428800        512          2 NO  CURRENT                1572517 27-JUN-17      2.8147E+14
         2          1         30   52428800        512          2 YES ACTIVE                 1567897 27-JUN-17         1572517 27-JUN-17
         3          2         25   52428800        512          2 YES ACTIVE                 1567902 27-JUN-17         1572521 27-JUN-17
         4          2         26   52428800        512          2 NO  CURRENT                1572521 27-JUN-17      2.8147E+14

4. Business user delete operation

Simulate the business user jingyu deleting part of the data in table T2:

--4. Business user delete operation
SQL> delete from t2 where id < 10000;

9999 rows deleted.

SQL> commit;

Commit complete.

SQL> select count(1) from t2;

  COUNT(1)
----------
     90001

5. Archive log switch

To separate the different operations into different logs, the archived logs are switched manually again, simulating the log switches that happen in a real system.

--5. Simulate an archive log switch
SQL> alter system archive log current;

System altered.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME   NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------ ------------ ------------
         1          1         31   52428800        512          2 YES ACTIVE                 1572517 27-JUN-17         1574293 27-JUN-17
         2          1         32   52428800        512          2 NO  CURRENT                1574293 27-JUN-17      2.8147E+14
         3          2         27   52428800        512          2 NO  CURRENT                1574296 27-JUN-17      2.8147E+14
         4          2         26   52428800        512          2 YES ACTIVE                 1572521 27-JUN-17         1574296 27-JUN-17

6. Business user update operation

Simulate the business user jingyu updating part of the data in table T2:

--6. Business user update operation
SQL> update T2 SET contents = 'xxx' where id > 99998;

2 rows updated.

SQL> commit;

Commit complete.

7. Archive log switch

To separate the different operations into different logs, the archived logs are switched manually one more time, simulating the log switches that happen in a real system.

--7. Simulate an archive log switch
SQL> alter system archive log current;

System altered.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME   NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------ ------------ ------------
         1          1         33   52428800        512          2 NO  CURRENT                1575480 27-JUN-17      2.8147E+14
         2          1         32   52428800        512          2 YES ACTIVE                 1574293 27-JUN-17         1575480 27-JUN-17
         3          2         27   52428800        512          2 YES ACTIVE                 1574296 27-JUN-17         1575458 27-JUN-17
         4          2         28   52428800        512          2 NO  CURRENT                1575458 27-JUN-17      2.8147E+14

8. Confirm the logs to analyze

Confirm which logs need to be analyzed with LogMiner:

--8. Confirm the logs that need to be analyzed
thread# 1 sequence# 30
thread# 2 sequence# 25
These logs should contain the insert operations.

thread# 1 sequence# 31
thread# 2 sequence# 26
These logs should contain the delete operations.

thread# 1 sequence# 32
thread# 2 sequence# 27
These logs should contain the update operations.
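
If these archived logs are still registered in the control file, their file names can also be looked up directly from v$archived_log instead of browsing the archive destination; a sketch for the sequences above:

--Look up the archived log file names for the sequences above (sketch)
select thread#, sequence#, name
  from v$archived_log
 where (thread# = 1 and sequence# between 30 and 32)
    or (thread# = 2 and sequence# between 25 and 27)
 order by thread#, sequence#;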

9. Back up the archived logs

Copy all the relevant archived logs out as backups:

--9. Copy out the relevant archived logs as backups
RUN {
allocate channel dev1 device type disk format '/tmp/backup/arc_%h_%e_%t';
backup as copy archivelog sequence 30 thread 1;
backup as copy archivelog sequence 31 thread 1;
backup as copy archivelog sequence 32 thread 1;
backup as copy archivelog sequence 25 thread 2;
backup as copy archivelog sequence 26 thread 2;
backup as copy archivelog sequence 27 thread 2;
release channel dev1;
}

The backed-up archived log files look like this:

[oracle@jyrac1 backup]$ ls -lrth
total 17M
-rw-r----- 1 oracle asmadmin 2.3M Jun 27 21:50 arc_1_30_947800247
-rw-r----- 1 oracle asmadmin 591K Jun 27 21:50 arc_1_31_947800249
-rw-r----- 1 oracle asmadmin 143K Jun 27 21:50 arc_1_32_947800250
-rw-r----- 1 oracle asmadmin 9.5M Jun 27 21:50 arc_2_25_947800251
-rw-r----- 1 oracle asmadmin 3.6M Jun 27 21:50 arc_2_26_947800253
-rw-r----- 1 oracle asmadmin  77K Jun 27 21:50 arc_2_27_947800254

10. Analyze with LogMiner

Use LogMiner to analyze the archived logs:

--Use LogMiner to analyze the archived logs
--Logs that should contain the insert operations
begin
 dbms_logmnr.add_logfile('/tmp/backup/arc_1_30_947800247');
 dbms_logmnr.add_logfile('/tmp/backup/arc_2_25_947800251');
 dbms_logmnr.start_logmnr(Options=>dbms_logmnr.dict_from_online_catalog);
end;
/

--Logs that should contain the delete operations
begin
 dbms_logmnr.add_logfile('/tmp/backup/arc_1_31_947800249');
 dbms_logmnr.add_logfile('/tmp/backup/arc_2_26_947800253');
 dbms_logmnr.start_logmnr(Options=>dbms_logmnr.dict_from_online_catalog);
end;
/

--Logs that should contain the update operations
begin
 dbms_logmnr.add_logfile('/tmp/backup/arc_1_32_947800250');
 dbms_logmnr.add_logfile('/tmp/backup/arc_2_27_947800254');
 dbms_logmnr.start_logmnr(Options=>dbms_logmnr.dict_from_online_catalog);
end;
/
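
Note that log files added with dbms_logmnr.add_logfile accumulate within one LogMiner session, so if the three analyses above are run one after another in the same session, it is safer to close the previous session before adding the next set of logs, for example:

--Close the previous LogMiner session before starting the next analysis (sketch)
exec dbms_logmnr.end_logmnr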

Query v$logmnr_contents

set lines 180 pages 500
col username format a8
col sql_redo format a50

select username,scn,timestamp,sql_redo from v$logmnr_contents where table_name='T2';
select username,scn,timestamp,sql_redo from v$logmnr_contents where username='JINGYU';
select username,scn,timestamp,sql_redo from v$logmnr_contents where sql_redo like '%JINGYU%';
select username,scn,timestamp,sql_redo from v$logmnr_contents where sql_redo like 'insert%JINGYU%';
select username,scn,timestamp,sql_redo from v$logmnr_contents where sql_redo like 'delete%JINGYU%';
select username,scn,timestamp,sql_redo from v$logmnr_contents where sql_redo like 'update%JINGYU%';

The experiment showed that filtering on username returned no rows; it turned out that username was UNKNOWN rather than jingyu, the business user that actually executed the statements.
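
This is the expected behavior when supplemental logging is not enabled: without at least minimal supplemental logging, LogMiner cannot always resolve session information such as USERNAME. A quick way to check (the same query also appears in the command reference at the end of this article):

--Check whether supplemental logging is enabled (sketch)
select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;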

The sql_redo column of the mined records, however, contains the complete SQL, so it can be queried with LIKE. For example, analyzing the logs of the update operation returns results like the following:

SQL> --Logs that should contain the update operations
SQL> begin
  2   dbms_logmnr.add_logfile('/tmp/backup/arc_1_32_947800250');
  3   dbms_logmnr.add_logfile('/tmp/backup/arc_2_27_947800254');
  4   dbms_logmnr.start_logmnr(Options=>dbms_logmnr.dict_from_online_catalog);
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL> select count(1) from v$logmnr_contents;

  COUNT(1)
----------
       388

SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents where username='JINGYU';

no rows selected

SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents where sql_redo like '%JINGYU%';

USERNAME        SCN TIMESTAMP
------------------------------ ---------- ------------
SQL_REDO
--------------------------------------------------------------------------------
UNKNOWN     1575420 27-JUN-17
update "JINGYU"."T2" set "CONTENTS" = 'xxx' where "CONTENTS" = 'YSWGNNLCLMYWPSLQETVLGQJRKQIEAMOEYUFNRUQULVFRVPEDRV' and ROWID = 'AAAVWVAAGAAAAHnABj';

UNKNOWN     1575420 27-JUN-17
update "JINGYU"."T2" set "CONTENTS" = 'xxx' where "CONTENTS" = 'WHCWFOZVLJWHFWLJDNVSMQTORGJFFXYADIOJZWJCDDOYXAOQJG' and ROWID = 'AAAVWVAAGAAAAOYAAE';

SQL>

This completes the basic LogMiner experiment.

Appendix: some reference commands related to LogMiner:

conn / as sysdba

--Install LogMiner
@$ORACLE_HOME/rdbms/admin/dbmslmd.sql;
@$ORACLE_HOME/rdbms/admin/dbmslm.sql;
@$ORACLE_HOME/rdbms/admin/dbmslms.sql;
@$ORACLE_HOME/rdbms/admin/prvtlm.plb;

--Stop LogMiner
exec dbms_logmnr.end_logmnr

--Check whether supplemental logging is enabled:
select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;

--Enable supplemental logging
alter database add supplemental log data;

--Drop supplemental logging
alter database drop supplemental log data (primary key) columns;
alter database drop supplemental log data (unique) columns;
alter database drop supplemental log data;

--The last row is the newest archived log
select name,dest_id,thread#,sequence# from v$archived_log;

Finally, it was confirmed that once supplemental logging is enabled, the correct value of username can be captured:

SQL> set lines 180
SQL> /

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME   NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ------------ ------------ ------------
         1          1         35   52428800        512          2 YES INACTIVE               1590589 27-JUN-17         1591935 27-JUN-17
         2          1         36   52428800        512          2 NO  CURRENT                1591935 27-JUN-17      2.8147E+14
         3          2         29   52428800        512          2 YES INACTIVE               1590594 27-JUN-17         1591938 27-JUN-17
         4          2         30   52428800        512          2 NO  CURRENT                1591938 27-JUN-17      2.8147E+14

--The current logs are now thread 1 sequence 36 and thread 2 sequence 30.

SQL> update t2 set contents =
  2  'aaa' where id = 44449;

1 row updated.

SQL> commit;

Commit complete.

RUN {
allocate channel dev1 device type disk format '/tmp/backup/arc_%h_%e_%t';
backup as copy archivelog sequence 36 thread 1;
backup as copy archivelog sequence 30 thread 2;
release channel dev1;
}

begin
 dbms_logmnr.add_logfile('/tmp/backup/arc_1_36_947808116');
 dbms_logmnr.add_logfile('/tmp/backup/arc_2_30_947808118');
 dbms_logmnr.start_logmnr(Options=>dbms_logmnr.dict_from_online_catalog);
end;
/

SQL> select username,scn,timestamp,sql_redo from v$logmnr_contents where username='JINGYU';

USERNAME        SCN TIMESTAMP
------------------------------ ---------- ------------
SQL_REDO
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
JINGYU      1593448 27-JUN-17
set transaction read write;

JINGYU      1593448 27-JUN-17
update "JINGYU"."T2" set "CONTENTS" = 'aaa' where "CONTENTS" = 'WZTSQZWYOCNDFKSMNJQLOLFUBRDOHCBMKXBHAPJSHCMWBYZJVH' and ROWID = 'AAAVWVAAGAAAACLAAL';

JINGYU      1593450 27-JUN-17
commit;

That's all for how to use LogMiner in Oracle. Hopefully the content above is helpful and you can learn something more from it. If you think the article is good, feel free to share it so more people can see it.
