
Tuesday, November 29, 2016

A lot of log.* files in dirbdb directory

After I added ENABLEMONITORING to the GLOBALS file, I noticed that the $OGG_HOME/dirbdb directory started growing.
That was OGG version 12.2.0.1.160517.
The size of this directory was 23G in the test environment, and the ENABLEMONITORING option had been set only 3 weeks earlier:
-bash-4.1$ du -sh dirbdb
 23.8G   dirbdb
Of that 23.8G, 23.5G was occupied by log.* files such as these:
-rw-r-----   1 oracle dba      10485760 Nov 16 18:16 log.0000015971
-rw-r-----   1 oracle dba      10485760 Nov 16 18:28 log.0000015972
-rw-r-----   1 oracle dba      10485760 Nov 16 18:41 log.0000015973
-rw-r-----   1 oracle dba      10485760 Nov 16 18:55 log.0000015974
-rw-r-----   1 oracle dba      10485760 Nov 16 19:08 log.0000015975
-rw-r-----   1 oracle dba      10485760 Nov 16 19:20 log.0000015976
-rw-r-----   1 oracle dba      10485760 Nov 16 19:32 log.0000015977
-rw-r-----   1 oracle dba      10485760 Nov 16 19:43 log.0000015978
-rw-r-----   1 oracle dba      10485760 Nov 16 19:55 log.0000015979
-rw-r-----   1 oracle dba      10485760 Nov 16 20:07 log.0000015980
-rw-r-----   1 oracle dba      10485760 Nov 16 20:19 log.0000015981
-rw-r-----   1 oracle dba      10485760 Nov 16 20:31 log.0000015982
These files were generated approximately every ten minutes, and each was 10M in size, which added up to roughly 1.4G daily.
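That daily figure is simple arithmetic, and it can be sanity-checked with a back-of-the-envelope sketch (nothing OGG-specific here):

```shell
# One 10 MiB log roughly every ten minutes:
files_per_hour=6
files_per_day=$((files_per_hour * 24))   # 144 files per day
mib_per_day=$((files_per_day * 10))      # 1440 MiB, i.e. ~1.4 GiB
echo "${files_per_day} files/day, ${mib_per_day} MiB/day"
```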
Having searched through MOS and Internet search engines, I stumbled across this MOSC Thread: Monitoring - Berkeley Database.
So I created a new DB_CONFIG file within the dirbdb directory and put this line into it:
-bash-4.1$ cat dirbdb/DB_CONFIG 
set_flags DB_LOG_AUTOREMOVE 
Then I restarted my OGG processes and discovered that these log files were gone.
Now only two log files are kept at any given time:
[oracle@host ggs]$ ls -ltr dirbdb/log*
-rw-r-----   1 oracle   dba      10485760 Nov 28 13:31 dirbdb/log.0000017532
-rw-r-----   1 oracle   dba      10485760 Nov 28 13:41 dirbdb/log.0000017533
[oracle@host ggs]$ ls -ltr dirbdb/log*
-rw-r-----   1 oracle   dba      10485760 Nov 28 13:42 dirbdb/log.0000017533
-rw-r-----   1 oracle   dba      10485760 Nov 28 13:51 dirbdb/log.0000017534
[oracle@host ggs]$ ls -ltr dirbdb/log*
-rw-r-----   1 oracle   dba      10485760 Nov 29 07:21 dirbdb/log.0000017625
-rw-r-----   1 oracle   dba      10485760 Nov 29 07:22 dirbdb/log.0000017626
It looks like another option that should be added in environments where ENABLEMONITORING is set.
Yet MOS returns no hits apart from the aforementioned MOSC thread.
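For reference, the whole fix can be scripted. This is a sketch in which the OGG home is a temporary stand-in directory created on the fly; in a real environment, point it at your actual $OGG_HOME and restart the OGG processes afterwards:

```shell
# Stand-in for the real OGG home; replace with your actual install path.
OGG_HOME="$(mktemp -d)"
mkdir -p "$OGG_HOME/dirbdb"

# Tell Berkeley DB to remove log files as soon as they are no longer needed.
printf 'set_flags DB_LOG_AUTOREMOVE\n' > "$OGG_HOME/dirbdb/DB_CONFIG"
cat "$OGG_HOME/dirbdb/DB_CONFIG"
# The setting takes effect only after the OGG processes are restarted.
```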

Monday, October 31, 2016

Excluding DDL in IE (Integrated Extract) using DBMS_STREAMS_ADM.SET_TAG

A couple of months ago I had to configure OGG IE (Oracle GoldenGate Integrated Extract) to exclude specific DDL from replication.
Having worked a lot with Oracle Streams, I decided to utilize DBMS_STREAMS_ADM.SET_TAG for that task.
I found a relevant MOS document How to exclude ddl in IE (integrated extract) issued from a specific user? (Doc ID 2107293.1) pretty fast.
The document suggested putting TRANLOGOPTIONS EXCLUDETAG into the extract parameter file, so that any DDL statement preceded by a matching DBMS_STREAMS_ADM.SET_TAG call would not be replicated.
Unfortunately the provided solution did not work for me.
I used OGG Server 12.2.0.1.160823 on Linux x86-64 (which means I had applied the latest bundle patch available at the time).
The issue can be easily reproduced.
The database version used in this test was 12.1.0.2.160719.
Here are my extract and replicat parameter files:
GGSCI (misha2 as tc_ogg_replicat@db2) 112> view params etag

EXTRACT ETAG

USERIDALIAS db1_tc_ogg_extract
TRANLOGOPTIONS EXCLUDETAG 34
LOGALLSUPCOLS
EXTTRAIL ./dirdat/tc
DDL INCLUDE ALL

TABLE TC.*;

GGSCI (misha2 as tc_ogg_replicat@db2) 113> view params rtag

REPLICAT RTAG

USERIDALIAS db2_tc_ogg_replicat

MAP TC.*, TARGET TC.*;
The test schema TC will be used for the demonstration, and I set up the extract to exclude any LCRs with tag 34.
Next, I create and start the OGG processes with the following script:
GGSCI (misha2 as tc_ogg_replicat@db2) 138> alter credentialstore add user tc_ogg_extract@db1 password tc_ogg_extract alias db1_tc_ogg_extract


Credential store in ./dircrd/ altered.

GGSCI (misha2 as tc_ogg_replicat@db2) 139> alter credentialstore add user tc_ogg_replicat@db2 password tc_ogg_replicat alias db2_tc_ogg_replicat


Credential store in ./dircrd/ altered.

GGSCI (misha2 as tc_ogg_replicat@db2) 140> 

GGSCI (misha2 as tc_ogg_replicat@db2) 140> dblogin useridalias db1_tc_ogg_extract

Successfully logged into database.

GGSCI (misha2 as tc_ogg_extract@db1) 141> add extract etag integrated tranlog begin now

EXTRACT (Integrated) added.


GGSCI (misha2 as tc_ogg_extract@db1) 142> add exttrail ./dirdat/tc extract etag

EXTTRAIL added.

GGSCI (misha2 as tc_ogg_extract@db1) 143> register extract etag database


2016-10-31 12:46:49  INFO    OGG-02003  Extract ETAG successfully registered with database at SCN 39029510.

GGSCI (misha2 as tc_ogg_extract@db1) 144> 

GGSCI (misha2 as tc_ogg_extract@db1) 146> dblogin useridalias db2_tc_ogg_replicat

Successfully logged into database.

GGSCI (misha2 as tc_ogg_replicat@db2) 147> add replicat rtag , integrated, exttrail ./dirdat/tc

REPLICAT (Integrated) added.


GGSCI (misha2 as tc_ogg_replicat@db2) 148> start extract etag

Sending START request to MANAGER ...
EXTRACT ETAG starting


GGSCI (misha2 as tc_ogg_replicat@db2) 149> start replicat rtag

Sending START request to MANAGER ...
REPLICAT RTAG starting
I used two databases in my setup, db1 and db2, which I call SOURCE and TARGET in the rest of this blog post for the sake of clarity.
The extract is configured to capture changes from the db1 database and write them to trail files.
The replicat reads the trail files and applies the changes to the db2 database.
Next, I run the following code in the source database from which the extract captures changes:
TC@SOURCE> create sequence t_seq;

Sequence created.

TC@SOURCE> 
TC@SOURCE> create table t(
  2    id  int default t_seq.nextval,
  3    msg varchar2(10))
  4  /

Table created.

TC@SOURCE> 
TC@SOURCE> insert into t(msg) values ('NO TAG');

1 row created.

TC@SOURCE> commit;

Commit complete.

TC@SOURCE> 
TC@SOURCE> exec dbms_streams_adm.set_tag( hextoraw('12'))

PL/SQL procedure successfully completed.

TC@SOURCE> 
TC@SOURCE> insert into t(msg) values ('TAG '||rawtohex(dbms_streams_adm.get_tag));

1 row created.

TC@SOURCE> commit;

Commit complete.

TC@SOURCE> 
TC@SOURCE> exec dbms_streams_adm.set_tag( hextoraw('34'))

PL/SQL procedure successfully completed.

TC@SOURCE> 
TC@SOURCE> insert into t(msg) values ('TAG '||rawtohex(dbms_streams_adm.get_tag));

1 row created.

TC@SOURCE> commit;

Commit complete.

TC@SOURCE> 
TC@SOURCE> exec dbms_streams_adm.set_tag( hextoraw('56'))

PL/SQL procedure successfully completed.

TC@SOURCE> 
TC@SOURCE> insert into t(msg) values ('TAG '||rawtohex(dbms_streams_adm.get_tag));

1 row created.

TC@SOURCE> commit;

Commit complete.
Having done that, I got the following results in the SOURCE database:
TC@SOURCE> select * from t;

        ID MSG
---------- ----------
         1 NO TAG
         2 TAG 12
         3 TAG 34
         4 TAG 56
And these in TARGET:
TC@TARGET> select * from t order by id;

        ID MSG
---------- ----------
         1 NO TAG
         2 TAG 12
         4 TAG 56
Notice that the insert of the record with ID=3, MSG="TAG 34" was not replicated because we filtered that record out on the extract.
This works flawlessly for DML, but it does not work for DDL:
TC@SOURCE> exec dbms_streams_adm.set_tag( hextoraw('12'))

PL/SQL procedure successfully completed.

TC@SOURCE> 
TC@SOURCE> alter table t add tag12 int;

Table altered.

TC@SOURCE> 
TC@SOURCE> exec dbms_streams_adm.set_tag( hextoraw('34'))

PL/SQL procedure successfully completed.

TC@SOURCE> 
TC@SOURCE> alter table t add tag34 int;

Table altered.

TC@SOURCE> describe t
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER(38)
 MSG                                                VARCHAR2(10)
 TAG12                                              NUMBER(38)
 TAG34                                              NUMBER(38)
I added two columns to the table; the second one, TAG34, should not have been replicated.
But in fact, both commands were replicated:
TC@TARGET> describe t
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER(38)
 MSG                                                VARCHAR2(10)
 TAG12                                              NUMBER(38)
 TAG34                                              NUMBER(38)
This means that the aforementioned MOS document How to exclude ddl in IE (integrated extract) issued from a specific user? (Doc ID 2107293.1) does not work in this configuration which, by the way, I think is quite typical.
Luckily, Oracle Support published another note after we concluded our work on the SR: EXCLUDETAG Issue With Integrated Extract While Excluding DDL For Specific User (Doc ID 2185538.1).
We need to put the following line into the extract parameter file to get it working: TRANLOGOPTIONS _dbfilterddl
GGSCI (misha2 as tc_ogg_replicat@db2) 168> view params etag

EXTRACT ETAG

USERIDALIAS db1_tc_ogg_extract
TRANLOGOPTIONS EXCLUDETAG 34
TRANLOGOPTIONS _dbfilterddl
LOGALLSUPCOLS
EXTTRAIL ./dirdat/tc
DDL INCLUDE ALL

TABLE TC.*;
Having added that, the same code that adds two columns worked as it should:
TC@SOURCE> describe t
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER(38)
 MSG                                                VARCHAR2(10)
 TAG12                                              NUMBER(38)
 TAG34                                              NUMBER(38)

TC@TARGET> describe t
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER(38)
 MSG                                                VARCHAR2(10)
 TAG12                                              NUMBER(38)
This means that the TAG34 column, added with tag 34, was not replicated.
There is also a new line in the ggserr.log file that was not present when I started the extract without _dbfilterddl:
Logmining server DDL filtering enabled.
2016-10-31 13:54:33  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host [172.16.113.245]:57437 (START EXTRACT ETAG ).
2016-10-31 13:54:33  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #6).
2016-10-31 13:54:34  INFO    OGG-00992  Oracle GoldenGate Capture for Oracle, etag.prm:  EXTRACT ETAG starting.
2016-10-31 13:54:34  INFO    OGG-03059  Oracle GoldenGate Capture for Oracle, etag.prm:  Operating system character set identified as UTF-8.
2016-10-31 13:54:34  INFO    OGG-02695  Oracle GoldenGate Capture for Oracle, etag.prm:  ANSI SQL parameter syntax is used for parameter parsing.
2016-10-31 13:54:38  INFO    OGG-03522  Oracle GoldenGate Capture for Oracle, etag.prm:  Setting session time zone to source database time zone 'GMT'.
2016-10-31 13:54:38  WARNING OGG-04033  Oracle GoldenGate Capture for Oracle, etag.prm:   LOGALLSUPCOLS has set the NOCOMPRESSDELETES and GETUPDATEBEFORES parameters on.
2016-10-31 13:54:38  INFO    OGG-01815  Oracle GoldenGate Capture for Oracle, etag.prm:  Virtual Memory Facilities for: BR
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /u01/app/oracle/12.2.0.1/ggs/BR/ETAG.
2016-10-31 13:54:38  INFO    OGG-01851  Oracle GoldenGate Capture for Oracle, etag.prm:  filecaching started: thread ID: 140594853938944.
2016-10-31 13:54:38  INFO    OGG-00975  Oracle GoldenGate Manager for Oracle, mgr.prm:  EXTRACT ETAG starting.
2016-10-31 13:54:38  INFO    OGG-01815  Oracle GoldenGate Capture for Oracle, etag.prm:  Virtual Memory Facilities for: COM
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /u01/app/oracle/12.2.0.1/ggs/dirtmp.
2016-10-31 13:54:58  WARNING OGG-02045  Oracle GoldenGate Capture for Oracle, etag.prm:  Database does not have streams_pool_size initialization parameter configured.
2016-10-31 13:54:59  INFO    OGG-02248  Oracle GoldenGate Capture for Oracle, etag.prm:  Logmining server DDL filtering enabled.
2016-10-31 13:55:08  INFO    OGG-02068  Oracle GoldenGate Capture for Oracle, etag.prm:  Integrated capture successfully attached to logmining server OGG$CAP_ETAG using OGGCapture API.
2016-10-31 13:55:08  INFO    OGG-02089  Oracle GoldenGate Capture for Oracle, etag.prm:  Source redo compatibility version is: 12.1.0.2.0.
2016-10-31 13:55:08  INFO    OGG-02086  Oracle GoldenGate Capture for Oracle, etag.prm:  Integrated Dictionary will be used.
2016-10-31 13:55:09  WARNING OGG-02905  Oracle GoldenGate Capture for Oracle, etag.prm:  Replication of OID column in object tables may diverge.
2016-10-31 13:55:09  INFO    OGG-00993  Oracle GoldenGate Capture for Oracle, etag.prm:  EXTRACT ETAG started.

tl;dr

The EXCLUDETAG parameter does not fully work when a tag is set using DBMS_STREAMS_ADM.SET_TAG in OGG 12.2.0.1.160823: it prevents only DML commands from being replicated and does not restrict DDL commands.
We can use the underscore parameter _dbfilterddl in the extract parameter file, as in the following line:
TRANLOGOPTIONS _dbfilterddl
This way we can restrict both DDL and DML commands from being replicated when the appropriate tag is set (or any tag, in case we use "EXCLUDETAG +" in the extract parameter file).
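As an operational aside, before restarting the extract it is worth verifying that the parameter file carries both TRANLOGOPTIONS lines. A minimal sketch, in which the file below is a temporary stand-in for the real dirprm/etag.prm:

```shell
# Temporary stand-in for the real dirprm/etag.prm.
prm="$(mktemp)"
cat > "$prm" <<'EOF'
EXTRACT ETAG
TRANLOGOPTIONS EXCLUDETAG 34
TRANLOGOPTIONS _dbfilterddl
EOF

# Both TRANLOGOPTIONS lines must be present for tagged DDL to be excluded.
grep -c '^TRANLOGOPTIONS' "$prm"
```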

Thursday, September 1, 2016

Incorrect handling of unused columns during GoldenGate initial load using Direct Bulk Load to SQL*Loader

Today I faced an interesting issue while performing a GoldenGate initial load with Direct Bulk Load to SQL*Loader.
The issue is related to incorrect handling of unused columns during such loads and can cause incorrect data to be loaded into a database under certain conditions.
I tried to reproduce the issue using one database in my setup but wasn't able to, so I settled on two databases.
I used Oracle Database 12.1.0.2 with DBBP 12.1.0.2.160419 applied and GoldenGate 12.2.0.1.160419 Patch Set.
I'm going to use two databases in my test case; let's call them SOURCE and TARGET. First, let's create a test schema with a single table and populate it with some data:
DBA@SOURCE> grant create session, create table, unlimited tablespace to tc identified by tc;

Grant succeeded.

DBA@SOURCE> conn tc/tc@source
Connected.
TC@SOURCE> 
SQL> create table direct_load(
  2    column1 int,
  3    unused1 int,
  4    column2 int,
  5    column3 int);

Table created.

TC@SOURCE> 
TC@SOURCE> insert into direct_load values (1, 0, 2, 3);

1 row created.

TC@SOURCE> commit;

Commit complete.

TC@SOURCE> select *
  2    from direct_load;

   COLUMN1    UNUSED1    COLUMN2    COLUMN3
---------- ---------- ---------- ----------
         1          0          2          3
Second, let's mark the UNUSED1 column as unused:
TC@SOURCE> alter table direct_load set unused column unused1;

Table altered.
Now I will execute the same DDL commands in the TARGET database, which is what the GoldenGate server would do if we had set up DDL replication:
DBA@TARGET> grant create session, create table, unlimited tablespace to tc identified by tc;

Grant succeeded.

DBA@TARGET> conn tc/tc@target
Connected.
TC@TARGET> create table direct_load(
  2    column1 int,
  3    unused1 int,
  4    column2 int,
  5    column3 int);

Table created.

TC@TARGET> 
TC@TARGET> insert into direct_load values (1, 0, 2, 3);

1 row created.

TC@TARGET> commit;

Commit complete.

TC@TARGET> select *
  2    from direct_load;

   COLUMN1    UNUSED1    COLUMN2    COLUMN3
---------- ---------- ---------- ----------
         1          0          2          3

TC@TARGET>
TC@TARGET>  alter table direct_load set unused column unused1;
Now consider the scenario where we decide to resynchronize the data in the TARGET database from the SOURCE database.
I'm going to remove all rows from the table in the TARGET database first:
TC@TARGET> truncate table direct_load;

Table truncated.
Let's proceed with the GoldenGate setup. I want to use the GoldenGate Direct Bulk Load to SQL*Loader capability to move the data from the SOURCE database to the TARGET database.
Let's use IRTC as the name of the initial-load replicat and IETC as the name of the initial-load extract:
-- setup initial load replicat
GGSCI (ogg-test) 2> edit params irtc
-- UserIdAlias may be customized accordingly to your environment or simply use a username/password pair
Replicat irtc
AssumeTargetDefs
UserIdAlias target_ogg_replicat
DiscardFile ./dirrpt/irtc.dsc, Purge
-- I want to use Direct Bulk Load
BulkLoad
-- I used PDB "orcl" for my experiments:
Map orcl.tc.*, target orcl.tc.*;

-- setup initial load extract
GGSCI (ogg-test) 149> edit params ietc
Extract ietc
RmtHost ogg-test, MgrPort 7809
RmtTask Replicat, Group irtc
UserIdAlias source_ogg_extract
Table orcl.tc.direct_load;

-- let's create GoldenGate processes
GGSCI (ogg-test) 116> add replicat irtc SpecialRun

REPLICAT added.

GGSCI (ogg-test) 117> add extract ietc SourceIsTable

EXTRACT added.
The setup is complete. Now we can try to perform the initial load:
GGSCI (ogg-test) 15> info i*tc, tasks

EXTRACT    IETC      Initialized   2016-09-01 08:54   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record         Record 0
Task                 SOURCEISTABLE

REPLICAT   IRTC      Initialized   2016-09-01 08:54   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:28 ago)
Log Read Checkpoint  Not Available
Task                 SPECIALRUN

GGSCI (ogg-test) 16> start extract ietc

Sending START request to MANAGER ...
EXTRACT IETC starting
The load completed fairly quickly:
GGSCI (ogg-test) 17> info i*tc, tasks

EXTRACT    IETC      Last Started 2016-09-01 08:55   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table ORCL.TC.DIRECT_LOAD
                     2016-09-01 08:55:27  Record 1
Task                 SOURCEISTABLE

REPLICAT   IRTC      Initialized   2016-09-01 08:54   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:02:09 ago)
Log Read Checkpoint  Not Available
Task                 SPECIALRUN
The ggserr.log file doesn't raise any obvious concerns:
2016-09-01 08:55:11  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (velikikh): start extract ietc.
2016-09-01 08:55:11  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host [127.0.0.1]:54562 (START EXTRACT IETC ).
2016-09-01 08:55:11  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #8).
2016-09-01 08:55:11  INFO    OGG-01017  Oracle GoldenGate Capture for Oracle, ietc.prm:  Wildcard resolution set to IMMEDIATE because SOURCEISTABLE is used.
2016-09-01 08:55:11  INFO    OGG-00992  Oracle GoldenGate Capture for Oracle, ietc.prm:  EXTRACT IETC starting.
2016-09-01 08:55:11  INFO    OGG-03059  Oracle GoldenGate Capture for Oracle, ietc.prm:  Operating system character set identified as ISO-8859-1.
2016-09-01 08:55:11  INFO    OGG-02695  Oracle GoldenGate Capture for Oracle, ietc.prm:  ANSI SQL parameter syntax is used for parameter parsing.
2016-09-01 08:55:12  INFO    OGG-03522  Oracle GoldenGate Capture for Oracle, ietc.prm:  Setting session time zone to source database time zone 'GMT'.
2016-09-01 08:55:12  WARNING OGG-00752  Oracle GoldenGate Capture for Oracle, ietc.prm:  Failed to validate table ORCL.TC.DIRECT_LOAD. Likely due to existence of unused column. Please make sure you use sourcedefs in downstream Replicat, or the target table has exactly the same unused columns when using ASSUMETARGETDEFS or DDL replication.
2016-09-01 08:55:12  WARNING OGG-06439  Oracle GoldenGate Capture for Oracle, ietc.prm:  No unique key is defined for table DIRECT_LOAD. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
2016-09-01 08:55:12  INFO    OGG-06509  Oracle GoldenGate Capture for Oracle, ietc.prm:  Using the following key columns for source table ORCL.TC.DIRECT_LOAD: COLUMN1, COLUMN2, COLUMN3.
2016-09-01 08:55:12  INFO    OGG-01851  Oracle GoldenGate Capture for Oracle, ietc.prm:  filecaching started: thread ID: 7.
2016-09-01 08:55:12  INFO    OGG-01815  Oracle GoldenGate Capture for Oracle, ietc.prm:  Virtual Memory Facilities for: COM
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /export/home/velikikh/oracle/12.2.0.1/ggs/dirtmp.
2016-09-01 08:55:16  INFO    OGG-00975  Oracle GoldenGate Manager for Oracle, mgr.prm:  EXTRACT IETC starting.
2016-09-01 08:55:16  INFO    OGG-00993  Oracle GoldenGate Capture for Oracle, ietc.prm:  EXTRACT IETC started.
2016-09-01 08:55:16  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from EXTRACT on host [127.0.0.1]:58728 (START REPLICAT IRTC CPU -1 PRI -1 PARAMS ).
2016-09-01 08:55:16  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #8).
2016-09-01 08:55:16  INFO    OGG-01025  Oracle GoldenGate Delivery for Oracle:  REPLICAT task started by manager (port 7810).
2016-09-01 08:55:21  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from RMTTASK on host [::1]:59974 (REPORT 3871 7810).
2016-09-01 08:55:21  INFO    OGG-00960  Oracle GoldenGate Manager for Oracle, mgr.prm:  Access granted (rule #1).
2016-09-01 08:55:21  INFO    OGG-00973  Oracle GoldenGate Manager for Oracle, mgr.prm:  Manager started replicat task process (Port 7810).
2016-09-01 08:55:26  INFO    OGG-01229  Oracle GoldenGate Delivery for Oracle:  Connected to ogg-test:50783.
2016-09-01 08:55:26  INFO    OGG-00995  Oracle GoldenGate Delivery for Oracle, irtc.prm:  REPLICAT IRTC starting.
2016-09-01 08:55:26  INFO    OGG-03059  Oracle GoldenGate Delivery for Oracle, irtc.prm:  Operating system character set identified as ISO-8859-1.
2016-09-01 08:55:26  INFO    OGG-02695  Oracle GoldenGate Delivery for Oracle, irtc.prm:  ANSI SQL parameter syntax is used for parameter parsing.
2016-09-01 08:55:27  INFO    OGG-02679  Oracle GoldenGate Delivery for Oracle, irtc.prm:  The Replicat process logged on to database ORCL and can only apply to that database.
2016-09-01 08:55:27  INFO    OGG-06451  Oracle GoldenGate Delivery for Oracle, irtc.prm:  Triggers will be suppressed by default.
2016-09-01 08:55:27  INFO    OGG-01815  Oracle GoldenGate Delivery for Oracle, irtc.prm:  Virtual Memory Facilities for: COM
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /export/home/velikikh/oracle/12.2.0.1/ggs/dirtmp.
2016-09-01 08:55:27  INFO    OGG-00996  Oracle GoldenGate Delivery for Oracle, irtc.prm:  REPLICAT IRTC started.
2016-09-01 08:55:27  INFO    OGG-03522  Oracle GoldenGate Delivery for Oracle, irtc.prm:  Setting session time zone to source database time zone 'GMT'.
2016-09-01 08:55:27  INFO    OGG-02911  Oracle GoldenGate Capture for Oracle, ietc.prm:  Processing table ORCL.TC.DIRECT_LOAD.
2016-09-01 08:55:27  INFO    OGG-06495  Oracle GoldenGate Capture for Oracle, ietc.prm:  OGG created a session pool with SESSIONPOOLMAX 10, SESSIONPOOLMIN 2 and SESSIONPOOLINCR 2.
2016-09-01 08:55:27  WARNING OGG-02760  Oracle GoldenGate Delivery for Oracle, irtc.prm:  ASSUMETARGETDEFS is ignored because trail file  contains table definitions.
2016-09-01 08:55:27  INFO    OGG-06506  Oracle GoldenGate Delivery for Oracle, irtc.prm:  Wildcard MAP resolved (entry orcl.tc.*): Map "ORCL"."TC"."DIRECT_LOAD", target orcl.tc."DIRECT_LOAD".
2016-09-01 08:55:28  WARNING OGG-06439  Oracle GoldenGate Delivery for Oracle, irtc.prm:  No unique key is defined for table DIRECT_LOAD. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
2016-09-01 08:55:28  INFO    OGG-02756  Oracle GoldenGate Delivery for Oracle, irtc.prm:  The definition for table ORCL.TC.DIRECT_LOAD is obtained from the trail file.
2016-09-01 08:55:28  INFO    OGG-06511  Oracle GoldenGate Delivery for Oracle, irtc.prm:  Using following columns in default map by name: COLUMN1, COLUMN2, COLUMN3.
2016-09-01 08:55:28  INFO    OGG-06510  Oracle GoldenGate Delivery for Oracle, irtc.prm:  Using the following key columns for target table ORCL.TC.DIRECT_LOAD: COLUMN1, COLUMN2, COLUMN3.
2016-09-01 08:55:28  INFO    OGG-00178  Oracle GoldenGate Delivery for Oracle, irtc.prm:  owner = "TC", table = "DIRECT_LOAD".
2016-09-01 08:55:28  INFO    OGG-00991  Oracle GoldenGate Capture for Oracle, ietc.prm:  EXTRACT IETC stopped normally.
2016-09-01 08:55:33  INFO    OGG-00994  Oracle GoldenGate Delivery for Oracle, irtc.prm:  REPLICAT IRTC stopped normally.
Yes, I had a warning like this:
2016-09-01 08:55:12  WARNING OGG-00752  Oracle GoldenGate Capture for Oracle, ietc.prm:  Failed to validate table ORCL.TC.DIRECT_LOAD. Likely due to existence of unused column. Please make sure you use sourcedefs in downstream Replicat, or the target table has exactly the same unused columns when using ASSUMETARGETDEFS or DDL replication.
But I clearly didn't violate that requirement, as the source and target tables have exactly the same unused columns.
Let's see what was loaded into the TARGET database:
TC@TARGET> select * from direct_load;

   COLUMN1    COLUMN2    COLUMN3
---------- ---------- ----------
         1                     2
That's not what I wanted to load, and it differs from the original table in the SOURCE database:
TC@SOURCE> select * from direct_load;

   COLUMN1    COLUMN2    COLUMN3
---------- ---------- ----------
         1          2          3
Clearly, the data from COLUMN2 was loaded into COLUMN3, and I didn't get any errors afterwards, which left my database with "wrong" data.
That's the worst-case scenario. Originally I faced the issue while moving a bunch of data from one database to another using GoldenGate.
That load abended with ORA-01840 because in my case COLUMN3 had a DATE datatype, which can be easily reproduced with the following code:
SQL> select to_date('2', 'yyyy-mm-dd hh24:mi:ss') from dual;
select to_date('2', 'yyyy-mm-dd hh24:mi:ss') from dual
               *
ERROR at line 1:
ORA-01840: input value not long enough for date format
Of course, if you are lucky, you may get some other datatype conversion error. If you are not, you may end up with wrong data in your database, and it may take a while before you find the problem.
A few words about workarounds.
In this simple scenario, written just for demonstration purposes, it was enough to drop the unused columns in the TARGET database:
TC@TARGET> alter table direct_load drop unused columns;

Table altered.
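Since the OGG-00752 warning above was the only visible symptom, it is worth scanning the GoldenGate logs for it before trusting an initial load. A minimal sketch, in which the log file is a stand-in seeded with the warning text; in a real environment, point it at $OGG_HOME/ggserr.log or the extract report file:

```shell
# Stand-in log seeded with the warning; use $OGG_HOME/ggserr.log in real life.
log="$(mktemp)"
echo '2016-09-01 08:55:12  WARNING OGG-00752  Failed to validate table ORCL.TC.DIRECT_LOAD.' > "$log"

# OGG-00752 signals an unused-column mismatch risk during the initial load.
if grep -q 'OGG-00752' "$log"; then
  echo 'OGG-00752 found: verify column mapping and row data before go-live'
fi
```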

Thursday, June 9, 2016

opatch and libpicl on Solaris SPARC

This morning, while applying a recent DBBP (Database Bundle Patch), the datapatch invocation finished with an error:
SQL Patching tool version 12.1.0.2.0 on Thu Jun  2 10:58:41 2016
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /export/home/velikikh/oracle/cfgtoollogs/sqlpatch/sqlpatch_12392_2016_06_02_10_58_41/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done

Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
and/or check the invocation log
/export/home/velikikh/oracle/cfgtoollogs/sqlpatch/sqlpatch_12392_2016_06_02_10_58_41/sqlpatch_invocation.log
for the complete error.
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
/export/home/velikikh/oracle/cfgtoollogs/sqlpatch/sqlpatch_12392_2016_06_02_10_58_41/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Thu Jun  2 10:58:50 2016
Here are the contents of sqlpatch_invocation.log file:
SQL Patching tool version 12.1.0.2.0 on Thu Jun  2 10:58:41 2016
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /export/home/velikikh/oracle/cfgtoollogs/sqlpatch/sqlpatch_12392_2016_06_02_10_58_41/sqlpatch_invocation.log

SQL Patching arguments:
  verbose: 1
  force: 0
  prereq: 0
  upgrade_mode_only: 
  oh: 
  bundle_series: 
  ignorable_errors: 
  bootstrap: 
  skip_upgrade_check: 
  userid: 
  pdbs: 

Connecting to database...OK
catcon: ALL catcon-related output will be written to /export/home/velikikh/oracle/cfgtoollogs/sqlpatch/sqlpatch_12392_2016_06_02_10_58_41/sqlpatch_catcon__catcon_12392.lst
catcon: See /export/home/velikikh/oracle/cfgtoollogs/sqlpatch/sqlpatch_12392_2016_06_02_10_58_41/sqlpatch_catcon_*.log files for output generated by scripts
catcon: See /export/home/velikikh/oracle/cfgtoollogs/sqlpatch/sqlpatch_12392_2016_06_02_10_58_41/sqlpatch_catcon__*.lst files for spool files, if any
Bootstrapping registry and package to current versions...done
verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table

Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
and/or check the invocation log
/export/home/velikikh/oracle/cfgtoollogs/sqlpatch/sqlpatch_12392_2016_06_02_10_58_41/sqlpatch_invocation.log
for the complete error.
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
/export/home/velikikh/oracle/cfgtoollogs/sqlpatch/sqlpatch_12392_2016_06_02_10_58_41/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Thu Jun  2 10:58:50 2016
The famous "ORA-20001: Latest xml inventory is not loaded into table" error even has a dedicated article on MOS: Queryable Patch Inventory - Issues/Solutions for ORA-20001: Latest xml inventory is not loaded into table (Doc ID 1602089.1).
I tried all of the provided solutions, but none of them helped.
As the sqlpatch_invocation.log suggested, I executed the dbms_sqlpatch.verify_queryable_inventory function:
SQL>select dbms_sqlpatch.verify_queryable_inventory from dual;

VERIFY_QUERYABLE_INVENTORY
--------------------------------------------------------------------------------
ORA-20001: Latest xml inventory is not loaded into table

Well, that was nothing new. Then I queried the OPATCH_XML_INV external table, because it is used internally by the aforementioned function:
SQL>select * from OPATCH_XML_INV ;
ERROR:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04004: error while reading file
/export/home/velikikh/oracle/12.1.0.2/home1/QOpatch/qopiprep.bat



no rows selected

OK, at least I got something to investigate further. The qopiprep.bat script simply executes the opatch command with some additional processing:
$ORACLE_HOME/OPatch/opatch lsinventory -xml  $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt
I decided to try executing it manually:
SQL>!opatch lsinv
Java HotSpot(TM) 64-Bit Server VM warning: PICL (libpicl.so.1) is missing. Performance will not be optimal.
Java HotSpot(TM) 64-Bit Server VM warning: PICL (libpicl.so.1) is missing. Performance will not be optimal.
Oracle Interim Patch Installer version 12.1.0.1.12
Copyright (c) 2016, Oracle Corporation.  All rights reserved.
..skipped...
These JVM warning lines were unusual to me, and I did not know why they occurred.
As I had a feeling that they were related to my problem, I searched through the MOS knowledge base and found an article: VM Warning "PICL (libpicl.so.1) is missing" After Upgrading to Java SE 8u40+ or 7u80+ (Doc ID 2093623.1).
That note referenced Poonam Bajaj's blog:
 Please note that this warning just indicates that the JVM could not load the PICL library so it won't be able to turn on some of the JIT Compiler optimizations that require the cache line data size information. 
It does not stop the Java process, or impact the process in any other way. 
Also, note that this is not a Java bug. The warning message is just intended to inform the user that the platform it is running on does not have the PICL library.
Nothing critical in those messages, then, and yet Queryable Opatch Inventory (hereinafter QOpatch) was not working.
To verify my assumption about the negative effect of the libpicl message on QOpatch, I made a quick-and-dirty correction to the qopiprep.bat file - I redirected stderr to /dev/null :):
$ORACLE_HOME/OPatch/opatch lsinventory -xml  $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt 2>/dev/null
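The effect of that redirect can be sketched outside Oracle. This is a pure illustration: the shell command below is a hypothetical stand-in for the opatch invocation, which writes JVM warnings to stderr and the real payload to stdout.

```python
import subprocess

# Hypothetical stand-in for the opatch call inside qopiprep.bat: it emits a
# JVM-style warning on stderr and the real payload on stdout.
cmd = 'echo "VM warning: PICL missing" >&2; echo "<inventory/>"'

# Merging stderr into stdout pollutes the captured data with the warning...
mixed = subprocess.run(['sh', '-c', cmd], stdout=subprocess.PIPE,
                       stderr=subprocess.STDOUT, text=True)
print('VM warning' in mixed.stdout)   # True: the warning is mixed in

# ...while discarding stderr, as the 2>/dev/null fix does, keeps only the payload.
clean = subprocess.run(['sh', '-c', cmd], stdout=subprocess.PIPE,
                       stderr=subprocess.DEVNULL, text=True)
print(clean.stdout.strip())           # <inventory/>
```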
Having done that, I executed a select against the OPATCH_XML_INV table:
SQL>select * from opatch_xml_inv;

XML_INVENTORY
--------------------------------------------------------------------------------
  <ora
All right, at least it started working. I didn't try executing other commands, but instead asked the SA (System Administrator) team to install the libpicl.so library.
They installed pkg:/system/libpicl and I finally got datapatch working:
[velikikh@gsmcat OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Thu Jun  2 13:26:15 2016
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /export/home/velikikh/oracle/cfgtoollogs/sqlpatch/sqlpatch_6653_2016_06_02_13_26_15/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Bundle series DBBP:
  ID 160419 in the binary registry and not installed in the SQL registry

Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    22806133 (DATABASE BUNDLE PATCH: 12.1.0.2.160419 (22806133))

Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 22806133 apply: SUCCESS
  logfile: /export/home/velikikh/oracle/cfgtoollogs/sqlpatch/22806133/20067756/22806133_apply_GSMCAT_2016Jun02_13_28_31.log (no errors)
SQL Patching tool complete on Thu Jun  2 13:32:19 2016
So the conclusion is: when using Oracle Database 12c on Solaris 11 SPARC64, you will definitely need to install libpicl. This requirement is not mentioned either in the Oracle Database documentation or in MOS articles (or at least I have missed it), but it exists nonetheless.
From Oracle Database Installation Guide for Oracle Solaris:
The following packages must be installed:
    pkg://solaris/system/dtrace
    pkg://solaris/developer/assembler
    pkg://solaris/developer/build/make
    pkg://solaris/system/xopen/xcu4 (if not already installed as part of standard Oracle Solaris 11 installation)
    pkg://solaris/x11/diagnostic/x11-info-clients
    pkg://solaris/compress/unzip
Although I had these packages installed, datapatch kept failing until I installed system/picl:
-bash-4.1$ pkg list dtrace assembler make xcu4 x11-info-clients unzip
NAME (PUBLISHER)                                  VERSION                    IFO
compress/unzip                                    6.0-0.175.2.10.0.4.0       i--
developer/assembler                               0.5.11-0.175.2.13.0.2.0    i--
developer/build/make                              0.5.11-0.175.2.0.0.34.0    i--
system/dtrace                                     0.5.11-0.175.2.12.0.4.2    i--
system/xopen/xcu4                                 0.5.11-0.175.2.13.0.1.2    i--
x11/diagnostic/x11-info-clients                   7.7-0.175.2.0.0.42.1406    i--
-bash-4.1$ pkg list picl
NAME (PUBLISHER)                                  VERSION                    IFO
system/picl                                       0.5.11-0.175.2.9.0.3.2     i--

Saturday, May 14, 2016

Asynchronous Global Index Maintenance effects

I think Asynchronous Global Index Maintenance is one of the most exciting features in Oracle Database 12c.
I would like to blog about one particular case in which that feature can cause performance problems.
Let's create some test data:
SQL> create table t
  2  partition by range(owner)
  3  (
  4    partition values less than ('SYS'),
  5    partition values less than (maxvalue)
  6  )
  7  as
  8  select *
  9    from dba_objects;
SQL> 
SQL> create index t_name_i on t(object_name);
Consider the following query:
SQL> explain plan for
  2  select max(object_name)
  3    from t;
SQL> select *
  2    from table(dbms_xplan.display);


PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 2886567490

---------------------------------------------------------------------------------------
| Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |          |     1 |    25 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE            |          |     1 |    25 |            |          |
|   2 |   INDEX FULL SCAN (MIN/MAX)| T_NAME_I |     1 |    25 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------
It's quite simple, and queries like that are common in my environment. For instance, they are used to monitor application activity: "select max(date_column) from some_table" and the like.
Now I'm going to drop a partition of that table:
SQL> alter table t drop partition for ('X') update indexes;
The index is marked as having orphaned entries after that:
SQL> select index_name, orphaned_entries
  2    from ind
  3   where table_name='T';


INDEX_NAME                     ORPHANED_
------------------------------ ---------
T_NAME_I                       YES
This causes some important changes in the plan:
SQL> explain plan for
  2  select max(object_name)
  3    from t;
SQL> select *
  2    from table(dbms_xplan.display);


PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
Plan hash value: 3249307143

----------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |          |     1 |    25 |   148   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE       |          |     1 |    25 |            |          |
|*  2 |   INDEX FAST FULL SCAN| T_NAME_I | 96566 |  2357K|   148   (1)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TBL$OR$IDX$PART$NUM("T",0,8,0,"T".ROWID)=1)
The filter clearly appeared because of the orphaned index entries, which is confirmed by the CBO trace (event 10053):
Added Filter for Orphaned Entries of Index T_NAME_I:
TBL$OR$IDX$PART$NUM("T",0,8,0,"T".ROWID)=1
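Conceptually, the mechanism works like the toy sketch below (an illustration only, not Oracle internals): the orphaned entries stay in the index after the partition drop, so every scan has to test each entry against the set of live partitions.

```python
# Toy model of delayed global index maintenance: each index entry keeps
# (key, partition_id); dropping a partition leaves its entries orphaned.
index_entries = [("A", 1), ("B", 2), ("C", 1), ("D", 2)]
live_partitions = {1, 2}

live_partitions.discard(2)  # "alter table ... drop partition ... update indexes"

# Orphaned entries remain physically present, so a scan must test every
# entry - this is what the TBL$OR$IDX$PART$NUM() filter stands for, and why
# the cheap MIN/MAX access path gives way to an INDEX FAST FULL SCAN.
visible = [key for key, part in index_entries if part in live_partitions]
print(visible)  # ['A', 'C']
```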
The IFFS (INDEX FAST FULL SCAN) causes a huge amount of I/O against large indexes.
For instance, some queries in my database started scanning multi-gigabyte indexes after I had dropped a single small partition during periodic maintenance.
In such a case, global index maintenance cannot be delayed, and the index cleanup must be executed ASAP:
SQL> alter index t_name_i coalesce cleanup;
SQL> select index_name, orphaned_entries
  2    from ind
  3   where table_name='T';


INDEX_NAME                     ORPHANED_
------------------------------ ---------
T_NAME_I                       NO
SQL> explain plan for
  2  select max(object_name)
  3    from t;
SQL> select * from table(dbms_xplan.display);


PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 2886567490

---------------------------------------------------------------------------------------
| Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |          |     1 |    25 |     3   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE            |          |     1 |    25 |            |          |
|   2 |   INDEX FULL SCAN (MIN/MAX)| T_NAME_I |     1 |    25 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Wednesday, December 30, 2015

ORA-8102 on ALTER TABLE SHRINK SPACE with default column optimization

We have been using a centralized AWR repository since 2013.
I developed some PL/SQL procedures for it, using the original awrextr.sql/awrload.sql scripts as a template.
That solution has some drawbacks; for example, the AWR retention for foreign databases is always set to 40150 days (roughly 110 years) after each load:
SQL> select dbid, retention from dba_hist_wr_control;

      DBID RETENTION
---------- --------------------
1110059808 +00366 00:00:00.0
1917063347 +40150 00:00:00.0
1996649024 +40150 00:00:00.0
We could change the retention, but it would always be reset on the next AWR load.
To prevent excessive AWR data growth, I executed the DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE procedure once a year.
The year 2015 was no exception.
After the DROP_SNAPSHOT_RANGE calls, I also executed the ALTER TABLE SHRINK SPACE command.
That command usually takes a while, because some of the AWR tables are more than 50G in size - for example WRH$_ACTIVE_SESSION_HISTORY, the base table of the DBA_HIST_ACTIVE_SESS_HISTORY view, which holds a huge amount of ASH data.
This year things went wrong:
SQL> alter table SYS.WRH$_ACTIVE_SESSION_HISTORY shrink space;
-- AFTER 4-8 hours
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 635
Session ID: 281 Serial number: 54363
alert.log was not very helpful:
Mon Dec 21 14:37:31 2015
Process 0x3cd728e40 appears to be hung while dumping
Current time = 1689592362, process death time = 1689530965 interval = 60000
Called from location UNKNOWN:UNKNOWN
Attempting to kill process 0x3cd728e40 with OS pid = 17740
OSD kill succeeded for process 3cd728e40
Mon Dec 21 14:39:46 2015
Errors in file /oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_25447.trc:
But the relevant trace file (more than 4G in size) shed some light on the problem:
*** ACTION NAME:() 2015-12-21 14:39:46.890
 
oer 8102.2 - obj# 1021709, rdba: 0x086ad43c(afn 33, blk# 2806844)
kdk key 8102.2:
  ncol: 6, len: 26
  key: (26): 
 06 c5 0c 0b 06 63 09 04 c3 08 0e 3e 02 c1 02 05 c4 62 46 05 3b 03 c2 02 3d
 ff
  mask: (4096): 
 81 90 20 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
The string "oer 8102.2" is a sign of the famous ORA-8102 error:
[oracle@localhost ~]$ oerr ora 8102
08102, 00000, "index key not found, obj# %s, file %s, block %s (%s)"
// *Cause:  Internal error: possible inconsistency in index
// *Action:  Send trace file to your customer support representative, along
//      with information on reproducing the error
The obj# 1021709 from the trace file easily shows where the problem is:
SQL> select xmltype(cursor(select * from dba_objects where object_id=1021709)) xml_rec from dual;

XML_REC
----------------------------------------------------------------------------------------------------------
<?xml version="1.0"?>
<ROWSET>
 <ROW>
  <OWNER>SYS</OWNER>
  <OBJECT_NAME>WRH$_ACTIVE_SESSION_HISTORY_PK</OBJECT_NAME>
  <SUBOBJECT_NAME>WRH$_ACTIVE_1110059808_66853</SUBOBJECT_NAME>
  <OBJECT_ID>1021709</OBJECT_ID>
  <DATA_OBJECT_ID>1080082</DATA_OBJECT_ID>
  <OBJECT_TYPE>INDEX PARTITION</OBJECT_TYPE>
  <CREATED>27.10.2015 01:28:05</CREATED>
  <LAST_DDL_TIME>22.12.2015 09:11:22</LAST_DDL_TIME>
  <TIMESTAMP>2015-10-27:01:28:05</TIMESTAMP>
  <STATUS>VALID</STATUS>
  <TEMPORARY>N</TEMPORARY>
  <GENERATED>N</GENERATED>
  <SECONDARY>N</SECONDARY>
  <NAMESPACE>4</NAMESPACE>
  <SHARING>NONE</SHARING>
  <ORACLE_MAINTAINED>Y</ORACLE_MAINTAINED>
 </ROW>
</ROWSET>
The WRH$_ACTIVE_SESSION_HISTORY_PK index is a locally partitioned index on the following columns:
CREATE UNIQUE INDEX "SYS"."WRH$_ACTIVE_SESSION_HISTORY_PK" ON "SYS"."WRH$_ACTIVE_SESSION_HISTORY" ("DBID", "SNAP_ID", "INSTANCE_NUMBER", "SAMPLE_ID", "SESSION_ID", "CON_DBID")
All of these columns are numbers.
The index key:
  key: (26): 
 06 c5 0c 0b 06 63 09 04 c3 08 0e 3e 02 c1 02 05 c4 62 46 05 3b 03 c2 02 3d
 ff
can be decoded easily:
SQL> with input as (
  2    -- index key
  3    select '06 c5 0c 0b 06 63 09 04 c3 08 0e 3e 02 c1 02 05 c4 62 46 05 3b 03 c2 02 3d ff' c from dual),
  4    t(pos, num,c) as (
  5    -- traverse the index key recursively
  6    select 0 pos, 0, replace(c, ' ') c from input union all
  7    select pos+1,
  8           utl_raw.cast_to_number(hextoraw(substr(c, 3, to_number(substr(c,1,2), 'xx')*2))),
  9           substr(c, (to_number(substr(c, 1, 2), 'xx')+1)*2+1)
 10      from t
 11     where c<>'ff')
 12  select pos, num
 13    from t
 14   where pos>0
 15   order by pos
 16  /

       POS        NUM
---------- ----------
         1 1110059808
         2      71361
         3          1
         4   97690458
         5        160
OK, now we know: dbid=1110059808, snap_id=71361, instance_number=1, sample_id=97690458, session_id=160.
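The same decoding can be sketched outside the database. Here is a minimal Python sketch of Oracle's internal NUMBER format for positive integers: each key column is a length byte, an exponent byte (193 plus the base-100 exponent), then mantissa digits stored as digit+1.

```python
def decode_positive_number(b):
    """Decode a positive Oracle NUMBER from its internal bytes."""
    exp = b[0] - 193                       # base-100 exponent
    return sum((d - 1) * 100 ** (exp - i)  # mantissa digits are stored as d+1
               for i, d in enumerate(b[1:]))

def decode_key(hex_key):
    """Split a concatenated index key into columns and decode each one."""
    data = bytes.fromhex(hex_key.replace(' ', ''))
    cols, i = [], 0
    while i < len(data) and data[i] != 0xFF:   # stop at the trailing 0xFF marker
        length = data[i]
        cols.append(decode_positive_number(data[i + 1:i + 1 + length]))
        i += 1 + length
    return cols

key = ('06 c5 0c 0b 06 63 09 04 c3 08 0e 3e 02 c1 02 '
       '05 c4 62 46 05 3b 03 c2 02 3d ff')
print(decode_key(key))  # [1110059808, 71361, 1, 97690458, 160]
```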
But the index was built on six columns, while the problem key contains only five.
What is the CON_DBID of the problem key? That's where the trouble lies!
CON_DBID was introduced in Oracle 12.1 and added as a "DEFAULT NOT NULL" column, i.e. the column addition was a metadata-only operation that did not update the table blocks (unless _add_col_optim_enabled was set to FALSE).
SQL> select to_char(property, 'fm0xxxxxxx')
  2    from sys.col$
  3   where obj# = 22800
  4     and name = 'CON_DBID';

TO_CHAR(PROPERTY,'FM0XXXXXX
---------------------------
40000000
Notice that the property is 0x40000000 - I speculated that this property is set on columns added with the default column optimization.
Some MOS notes confirm that, e.g. Table Columns Have Wrong Default Value After Transporting Tablespaces (Doc ID 1602994.1).
I dumped the relevant table block and found that the CON_DBID column was not present in it.
The pieces of the puzzle started to fit together.
I constructed a simple test case that reproduces the ORA-8102 error:
SQL> create table t(x int, pad varchar2(100)) enable row movement;
SQL> insert /*+ append*/
  2    into t
  3  select level, lpad('x', 100, 'x')
  4    from dual
  5    connect by level<=1e5;
SQL> commit;
SQL> 
SQL> alter table t add y int default 10 not null;
SQL> 
SQL> create index t_xy_i on t(x,y);
SQL> 
SQL> delete t where x<=1e5/2;
SQL> commit;
SQL> 
SQL> alter table t shrink space;
alter table t shrink space
*
ERROR at line 1:
ORA-08102: index key not found, obj# 91957, file 10, block 3990 (2)
The problem is present in Oracle 11.2.0.4 and later versions; I did not test earlier 11g patchsets.
Oracle Support raised a new bug: ORA-8102 ON ALTER TABLE SHRINK SPACE WITH ADD COL OPTIMIZATION (unpublished), which is still under investigation.
Tables susceptible to the ORA-8102 error can be identified easily by bitand(col$.property, 1073741824)=1073741824 (0x40000000 in hex):
SQL> select o.object_name
  2    from sys.col$ c,
  3         dba_objects o
  4   where bitand(c.property, 1073741824)=1073741824
  5     and o.object_id=c.obj#
  6     and o.owner='SYS'
  7   order by o.object_name;
I restricted the query above to the SYS schema. Below is its output in one of the databases that was upgraded from 11.2.0.4 to 12.1.0.2 in 2015:
SQL> select o.object_name
  2    from sys.col$ c,
  3         dba_objects o
  4   where bitand(c.property, 1073741824)=1073741824
  5     and o.object_id=c.obj#
  6     and o.owner='SYS'
  7   order by o.object_name;

OBJECT_NAME
-------------------------------------------------------------------
CDB_LOCAL_ADMINAUTH$
HISTGRM$
PROFNAME$
WRH$_ACTIVE_SESSION_HISTORY
WRH$_BG_EVENT_SUMMARY
WRH$_BUFFERED_QUEUES
WRH$_BUFFERED_SUBSCRIBERS
WRH$_BUFFER_POOL_STATISTICS
WRH$_CLUSTER_INTERCON
WRH$_COMP_IOSTAT
WRH$_CR_BLOCK_SERVER
WRH$_CURRENT_BLOCK_SERVER
WRH$_DATAFILE
WRH$_DB_CACHE_ADVICE
WRH$_DISPATCHER
WRH$_DLM_MISC
WRH$_DYN_REMASTER_STATS
WRH$_DYN_REMASTER_STATS
WRH$_ENQUEUE_STAT
WRH$_EVENT_HISTOGRAM
WRH$_EVENT_NAME
WRH$_FILEMETRIC_HISTORY
WRH$_FILESTATXS
WRH$_IC_CLIENT_STATS
WRH$_IC_DEVICE_STATS
WRH$_INSTANCE_RECOVERY
WRH$_INST_CACHE_TRANSFER
WRH$_INTERCONNECT_PINGS
WRH$_IOSTAT_DETAIL
WRH$_IOSTAT_FILETYPE
WRH$_IOSTAT_FILETYPE_NAME
WRH$_IOSTAT_FUNCTION
WRH$_IOSTAT_FUNCTION_NAME
WRH$_JAVA_POOL_ADVICE
WRH$_LATCH
WRH$_LATCH_CHILDREN
WRH$_LATCH_MISSES_SUMMARY
WRH$_LATCH_NAME
WRH$_LATCH_PARENT
WRH$_LIBRARYCACHE
WRH$_LOG
WRH$_MEMORY_RESIZE_OPS
WRH$_MEMORY_TARGET_ADVICE
WRH$_MEM_DYNAMIC_COMP
WRH$_METRIC_NAME
WRH$_MTTR_TARGET_ADVICE
WRH$_MUTEX_SLEEP
WRH$_MVPARAMETER
WRH$_OPTIMIZER_ENV
WRH$_OSSTAT
WRH$_OSSTAT_NAME
WRH$_PARAMETER
WRH$_PARAMETER_NAME
WRH$_PERSISTENT_QMN_CACHE
WRH$_PERSISTENT_QUEUES
WRH$_PERSISTENT_SUBSCRIBERS
WRH$_PGASTAT
WRH$_PGA_TARGET_ADVICE
WRH$_PLAN_OPERATION_NAME
WRH$_PLAN_OPTION_NAME
WRH$_PROCESS_MEMORY_SUMMARY
WRH$_RESOURCE_LIMIT
WRH$_ROWCACHE_SUMMARY
WRH$_RSRC_CONSUMER_GROUP
WRH$_RSRC_PLAN
WRH$_RULE_SET
WRH$_SEG_STAT
WRH$_SEG_STAT_OBJ
WRH$_SERVICE_NAME
WRH$_SERVICE_STAT
WRH$_SERVICE_WAIT_CLASS
WRH$_SESSMETRIC_HISTORY
WRH$_SESS_TIME_STATS
WRH$_SESS_TIME_STATS
WRH$_SGA
WRH$_SGASTAT
WRH$_SGA_TARGET_ADVICE
WRH$_SHARED_POOL_ADVICE
WRH$_SHARED_SERVER_SUMMARY
WRH$_SQLCOMMAND_NAME
WRH$_SQLSTAT
WRH$_SQLTEXT
WRH$_SQL_BIND_METADATA
WRH$_SQL_PLAN
WRH$_SQL_SUMMARY
WRH$_SQL_WORKAREA_HISTOGRAM
WRH$_STAT_NAME
WRH$_STREAMS_APPLY_SUM
WRH$_STREAMS_APPLY_SUM
WRH$_STREAMS_CAPTURE
WRH$_STREAMS_CAPTURE
WRH$_STREAMS_POOL_ADVICE
WRH$_SYSMETRIC_HISTORY
WRH$_SYSMETRIC_SUMMARY
WRH$_SYSSTAT
WRH$_SYSTEM_EVENT
WRH$_SYS_TIME_MODEL
WRH$_TABLESPACE
WRH$_TABLESPACE_SPACE_USAGE
WRH$_TABLESPACE_STAT
WRH$_TEMPFILE
WRH$_TEMPSTATXS
WRH$_THREAD
WRH$_TOPLEVELCALL_NAME
WRH$_UNDOSTAT
WRH$_WAITCLASSMETRIC_HISTORY
WRH$_WAITSTAT
WRI$_ADV_SQLT_PLAN_HASH
WRI$_OPTSTAT_HISTGRM_HISTORY
WRI$_SQLSET_BINDS
WRI$_SQLSET_DEFINITIONS
WRI$_SQLSET_MASK
WRI$_SQLSET_PLANS
WRI$_SQLSET_PLAN_LINES
WRI$_SQLSET_STATEMENTS
WRI$_SQLSET_STATISTICS
WRI$_SQLTEXT_REFCOUNT

117 rows selected.
You can see that mostly AWR tables are affected (WRH$%, WRI$%). A brand-new 12c database's AWR tables will not have the property set to 1073741824 (0x40000000), unless you add such columns yourself.
If you set the _add_col_optim_enabled parameter to FALSE, you will be safe.
If you face the same issue one day, the possible workaround is to rebuild the table in any way that updates the problem column (move the table, redefine it online, or set the column value equal to itself).

Monday, November 30, 2015

V$SQL_PLAN_MONITOR.STARTS is higher than expected due to NLJ batching/Prefetching

Recently one of the developers asked me to explain why V$SQL_PLAN_MONITOR.STARTS is higher than expected for one particular query.
Here is the problem query (some columns are hidden for readability):
SQL Plan Monitoring Details (Plan Hash Value=40624586)
================================================================================================================================================
| Id |                Operation                |        Name        | Execs |   Rows   | Read  | Read  | Activity |      Activity Detail       |
|    |                                         |                    |       | (Actual) | Reqs  | Bytes |   (%)    |        (# samples)         |
================================================================================================================================================
|  0 | SELECT STATEMENT                        |                    |     1 |        0 |       |       |          |                            |
|  1 |   NESTED LOOPS                          |                    |     1 |        0 |       |       |          |                            |
|  2 |    NESTED LOOPS                         |                    |     1 |    62594 |       |       |          |                            |
|  3 |     TABLE ACCESS BY INDEX ROWID BATCHED | MAIN_TABLE         |     1 |    62594 | 22777 | 178MB |    25.89 | Cpu (1)                    |
|    |                                         |                    |       |          |       |       |          | db file parallel read (28) |
|  4 |      INDEX RANGE SCAN                   | MAIN_TABLE_I       |     1 |    62594 |    25 | 200KB |          |                            |
|  5 |     INDEX UNIQUE SCAN                   | CHILD_TABLE_PK     |  109K |    62594 | 63798 | 498MB |    71.43 | Cpu (2)                    |
|    |                                         |                    |       |          |       |       |          | db file parallel read (78) |
|  6 |    TABLE ACCESS BY INDEX ROWID          | CHILD_TABLE        |  105K |        0 |  1402 |  11MB |     1.79 | db file parallel read (2)  |
================================================================================================================================================
You can see that we obtained 62K rows at step 3, but notice the number of "Execs" at step 5: 109K - almost twice as high as 62K.
The number of rows ("Rows (Actual)") is correct, though.
I checked the relevant columns of the V$SQL_PLAN_MONITOR view (STARTS/OUTPUT_ROWS) and verified that there is no contradiction between V$SQL_PLAN_MONITOR and the DBMS_SQLTUNE.REPORT_SQL_MONITOR output.
I investigated further and constructed a simple test case that reproduces the issue.
SQL> create table fact
  2  as
  3  select date '2015-08-01' + trunc(level/4)/86400 fact_date,
  4         lpad('x', 240, 'x') padding,
  5         mod(level, 100000) dim_id
  6    from dual
  7    connect by level<=4*86400;

Table created.

SQL> 
SQL> exec dbms_stats.gather_table_stats( '', 'fact')

PL/SQL procedure successfully completed.

SQL> 
SQL> create table dim
  2  as
  3  select trunc(dbms_random.value(1,100000)) id,
  4         lpad('x', 340, 'x') padding
  5    from dual
  6    connect by level<=2*86400;

Table created.

SQL> 
SQL> create index dim_i on dim(id);

Index created.

SQL> 
SQL> exec dbms_stats.gather_table_stats( '', 'dim')

PL/SQL procedure successfully completed.
I created two tables, FACT and DIM, populated them with data, and gathered statistics.
Now I will flush the buffer cache and execute a test query:
SQL> alter system flush buffer_cache;

System altered.

SQL> 
SQL> select /*+ monitor leading(f) use_nl(d) full(f)*/
  2         count(f.padding),
  3         count(d.padding)
  4    from fact f,
  5         dim d
  6   where f.fact_date between to_date('01.08.2015 12:00', 'dd.mm.yyyy hh24:mi') and to_date('01.08.2015 12:10', 'dd.mm.yyyy hh24:mi')
  7     and d.id = f.dim_id;


COUNT(F.PADDING) COUNT(D.PADDING)
---------------- ----------------
            4214             4214

1 row selected.

SQL> 
Let's see the DBMS_SQLTUNE report for the last query:
SQL> select dbms_sqltune.report_sql_monitor from dual;


REPORT_SQL_MONITOR
-----------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report

SQL Text
------------------------------
select /*+ monitor leading(f) use_nl(d) full(f)*/ count(f.padding), count(d.padding) from fact f, dim d where f.fact_date between to_date('01.08.2015 12:00

.. skip..

SQL Plan Monitoring Details (Plan Hash Value=85884857)
===========================================================================================================================================================
| Id |            Operation            | Name  |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |   Activity Detail    |
|    |                                 |       | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |     (# samples)      |
===========================================================================================================================================================
|  0 | SELECT STATEMENT                |       |         |       |        10 |    +14 |     1 |        1 |      |       |          |                      |
|  1 |   SORT AGGREGATE                |       |       1 |       |        10 |    +14 |     1 |        1 |      |       |          |                      |
|  2 |    NESTED LOOPS                 |       |         |       |        10 |    +14 |     1 |     4214 |      |       |          |                      |
|  3 |     NESTED LOOPS                |       |    5025 | 11890 |        10 |    +14 |     1 |     4214 |      |       |          |                      |
|  4 |      TABLE ACCESS FULL          | FACT  |    2408 |  3535 |        23 |     +1 |     1 |     2404 |  109 | 100MB |     8.70 | Cpu (1)              |
|    |                                 |       |         |       |           |        |       |          |      |       |          | direct path read (1) |
|  5 |      INDEX RANGE SCAN           | DIM_I |       2 |     1 |        10 |    +14 |  2539 |     4214 |  562 |   4MB |          |                      |
|  6 |     TABLE ACCESS BY INDEX ROWID | DIM   |       2 |     4 |        22 |     +2 |  7582 |     4214 | 2810 |  22MB |    91.30 | Cpu (21)             |
===========================================================================================================================================================
OK, we have just reproduced the issue: row source 4 of the plan produced 2404 rows, but the number of "Execs" at line 5 is 2539, which is slightly greater.
It was no accident that I flushed the buffer cache before executing the query.
If I execute the query once again, the dbms_sqltune.report_sql_monitor report shows the number of "Execs" at line 5 equal to the number of rows at line 4:
SQL Plan Monitoring Details (Plan Hash Value=85884857)
======================================================================================================================================================
| Id |            Operation            | Name  |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity | Activity Detail |
|    |                                 |       | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |   (# samples)   |
======================================================================================================================================================
|  0 | SELECT STATEMENT                |       |         |       |         1 |     +0 |     1 |        1 |      |       |          |                 |
|  1 |   SORT AGGREGATE                |       |       1 |       |         1 |     +0 |     1 |        1 |      |       |          |                 |
|  2 |    NESTED LOOPS                 |       |         |       |         1 |     +0 |     1 |     4214 |      |       |          |                 |
|  3 |     NESTED LOOPS                |       |    5025 | 11890 |         1 |     +0 |     1 |     4214 |      |       |          |                 |
|  4 |      TABLE ACCESS FULL          | FACT  |    2408 |  3535 |         1 |     +0 |     1 |     2404 |  108 | 100MB |          |                 |
|  5 |      INDEX RANGE SCAN           | DIM_I |       2 |     1 |         1 |     +0 |  2404 |     4214 |      |       |          |                 |
|  6 |     TABLE ACCESS BY INDEX ROWID | DIM   |       2 |     4 |         1 |     +0 |  4214 |     4214 |      |       |          |                 |
======================================================================================================================================================
I believe the difference between execs and rows is caused by NLJ batching/table prefetching.
How can we prove it?
First, I will show how the "physical reads cache prefetch" statistic changes before and after query execution.
SQL> select n.name, s.value
  2    from v$statname n, v$mystat s
  3   where n.name like '%prefetch%'
  4     and s.statistic#=n.statistic#
  5     and s.value>0
  6   order by n.name;


NAME                                VALUE
------------------------------ ----------
physical reads cache prefetch      127987

1 row selected.

SQL> 
SQL> select /*+ monitor leading(f) use_nl(d) full(f)*/
  2         count(f.padding),
  3         count(d.padding)
  4    from fact f,
  5         dim d
  6   where f.fact_date between to_date('01.08.2015 12:00', 'dd.mm.yyyy hh24:mi') and to_date('01.08.2015 12:10', 'dd.mm.yyyy hh24:mi')
  7     and d.id = f.dim_id;


COUNT(F.PADDING) COUNT(D.PADDING)
---------------- ----------------
            4214             4214

1 row selected.

SQL> 
SQL> select n.name, s.value
  2    from v$statname n, v$mystat s
  3   where n.name like '%prefetch%'
  4     and s.statistic#=n.statistic#
  5     and s.value>0
  6   order by n.name;


NAME                                VALUE
------------------------------ ----------
physical reads cache prefetch      131321

1 row selected.
The statistic increased by 131321-127987=3334 blocks. Second, when I disable NLJ batching, there is no difference between execs and rows:
SQL> alter system flush buffer_cache;

System altered.

SQL> 
SQL> select n.name, s.value
  2    from v$statname n, v$mystat s
  3   where n.name like '%prefetch%'
  4     and s.statistic#=n.statistic#
  5     and s.value>0
  6   order by n.name;


NAME                                VALUE
------------------------------ ----------
physical reads cache prefetch      137989

1 row selected.

SQL> 
SQL> select /*+ monitor leading(f) use_nl(d) full(f) opt_param('_nlj_batching_enabled' 0)*/
  2         count(f.padding),
  3         count(d.padding)
  4    from fact f,
  5         dim d
  6   where f.fact_date between to_date('01.08.2015 12:00', 'dd.mm.yyyy hh24:mi') and to_date('01.08.2015 12:10', 'dd.mm.yyyy hh24:mi')
  7     and d.id = f.dim_id;


COUNT(F.PADDING) COUNT(D.PADDING)
---------------- ----------------
            4214             4214

1 row selected.

SQL> 
SQL> select n.name, s.value
  2    from v$statname n, v$mystat s
  3   where n.name like '%prefetch%'
  4     and s.statistic#=n.statistic#
  5     and s.value>0
  6   order by n.name;


NAME                                VALUE
------------------------------ ----------
physical reads cache prefetch      137989

1 row selected.

SQL> 
SQL> select dbms_sqltune.report_sql_monitor from dual;


REPORT_SQL_MONITOR
-----------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report

SQL Text
------------------------------
select /*+ monitor leading(f) use_nl(d) full(f) opt_param('_nlj_batching_enabled' 0)*/ count(f.padding), count(d.padding) from fact f, dim d where f.fact_date be
 and d.id = f.dim_id

.. skip ..

SQL Plan Monitoring Details (Plan Hash Value=1381503666)
=================================================================================================================================================================
| Id |           Operation            | Name  |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |       Activity Detail       |
|    |                                |       | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |         (# samples)         |
=================================================================================================================================================================
|  0 | SELECT STATEMENT               |       |         |       |         1 |     +2 |     1 |        1 |      |       |          |                             |
|  1 |   SORT AGGREGATE               |       |       1 |       |         1 |     +2 |     1 |        1 |      |       |          |                             |
|  2 |    TABLE ACCESS BY INDEX ROWID | DIM   |       2 |     4 |         2 |     +1 |     1 |     4214 | 3360 |  26MB |   100.00 | db file sequential read (2) |
|  3 |     NESTED LOOPS               |       |    5025 | 11890 |         1 |     +2 |     1 |     6619 |      |       |          |                             |
|  4 |      TABLE ACCESS FULL         | FACT  |    2408 |  3535 |         1 |     +2 |     1 |     2404 |  109 | 100MB |          |                             |
|  5 |      INDEX RANGE SCAN          | DIM_I |       2 |     1 |         1 |     +2 |  2404 |     4214 |   12 | 98304 |          |                             |
=================================================================================================================================================================


1 row selected.
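As an aside, the same NLJ-batching switch that the opt_param hint flips above can also be set at the session level. This is a sketch only: underscore parameters are unsupported and should normally be changed only under Oracle Support guidance.

```sql
-- Sketch: session-level equivalent of opt_param('_nlj_batching_enabled' 0).
-- The double quotes are required because of the leading underscore.
alter session set "_nlj_batching_enabled" = 0;
```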
Or, if we want to obtain a more "classic" NLJ plan shape, we can add the no_nlj_prefetch(d) hint:
SQL> alter system flush buffer_cache;

System altered.

SQL> 
SQL> select n.name, s.value
  2    from v$statname n, v$mystat s
  3   where n.name like '%prefetch%'
  4     and s.statistic#=n.statistic#
  5     and s.value>0
  6   order by n.name;


NAME                                VALUE
------------------------------ ----------
physical reads cache prefetch      137989

1 row selected.

SQL> 
SQL> select /*+ monitor leading(f) use_nl(d) full(f) opt_param('_nlj_batching_enabled' 0) no_nlj_prefetch(d)*/
  2         count(f.padding),
  3         count(d.padding)
  4    from fact f,
  5         dim d
  6   where f.fact_date between to_date('01.08.2015 12:00', 'dd.mm.yyyy hh24:mi') and to_date('01.08.2015 12:10', 'dd.mm.yyyy hh24:mi')
  7     and d.id = f.dim_id;


COUNT(F.PADDING) COUNT(D.PADDING)
---------------- ----------------
            4214             4214

1 row selected.

SQL> 
SQL> select n.name, s.value
  2    from v$statname n, v$mystat s
  3   where n.name like '%prefetch%'
  4     and s.statistic#=n.statistic#
  5     and s.value>0
  6   order by n.name;


NAME                                VALUE
------------------------------ ----------
physical reads cache prefetch      137989

1 row selected.

SQL> 
SQL> select dbms_sqltune.report_sql_monitor from dual;


REPORT_SQL_MONITOR
------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report

SQL Text
------------------------------
select /*+ monitor leading(f) use_nl(d) full(f) opt_param('_nlj_batching_enabled' 0) no_nlj_prefetch(d)*/ count(f.padding), count(d.padding) from fact f, dim d wh
d.mm.yyyy hh24:mi') and d.id = f.dim_id

.. skip ..

SQL Plan Monitoring Details (Plan Hash Value=676372893)
==================================================================================================================================================================
| Id |            Operation            | Name  |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |       Activity Detail       |
|    |                                 |       | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |         (# samples)         |
==================================================================================================================================================================
|  0 | SELECT STATEMENT                |       |         |       |         1 |     +2 |     1 |        1 |      |       |          |                             |
|  1 |   SORT AGGREGATE                |       |       1 |       |         1 |     +2 |     1 |        1 |      |       |          |                             |
|  2 |    NESTED LOOPS                 |       |    5025 | 11890 |         1 |     +2 |     1 |     4214 |      |       |          |                             |
|  3 |     TABLE ACCESS FULL           | FACT  |    2408 |  3535 |         1 |     +2 |     1 |     2404 |  109 | 100MB |          |                             |
|  4 |     TABLE ACCESS BY INDEX ROWID | DIM   |       2 |     4 |         2 |     +2 |  2404 |     4214 | 3360 |  26MB |    66.67 | Cpu (1)                     |
|    |                                 |       |         |       |           |        |       |          |      |       |          | db file sequential read (1) |
|  5 |      INDEX RANGE SCAN           | DIM_I |       2 |     1 |         1 |     +2 |  2404 |     4214 |   12 | 98304 |          |                             |
==================================================================================================================================================================
I searched MOS for similar issues and found Bug 13634445 : V$SQL_PLAN_MONITOR AND NL BATCHING IN 11G, which has status 92, "Closed, Not a Bug".
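For reference, the current value of _nlj_batching_enabled can be inspected through the x$ksppi / x$ksppcv fixed tables. This is a common, though undocumented, technique and requires access to the X$ tables (i.e. a SYS connection):

```sql
-- Sketch: show the current value of the hidden parameter
-- _nlj_batching_enabled and whether it is still at its default.
select p.ksppinm  as name,
       v.ksppstvl as value,
       v.ksppstdf as is_default
  from x$ksppi  p,
       x$ksppcv v
 where p.indx = v.indx
   and p.ksppinm = '_nlj_batching_enabled';
```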
Another interesting observation: adding the gather_plan_statistics hint disables prefetching in this case and fixes the V$SQL_PLAN_MONITOR "discrepancy":
SQL> alter system flush buffer_cache;

System altered.

SQL> 
SQL> select /*+ monitor leading(f) use_nl(d) full(f) gather_plan_statistics*/
  2         count(f.padding),
  3         count(d.padding)
  4    from fact f,
  5         dim d
  6   where f.fact_date between to_date('01.08.2015 12:00', 'dd.mm.yyyy hh24:mi') and to_date('01.08.2015 12:10', 'dd.mm.yyyy hh24:mi')
  7     and d.id = f.dim_id;


COUNT(F.PADDING) COUNT(D.PADDING)
---------------- ----------------
            4214             4214

1 row selected.

SQL> 
SQL> select dbms_sqltune.report_sql_monitor from dual;


REPORT_SQL_MONITOR
------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL Monitoring Report

SQL Text
------------------------------
select /*+ monitor leading(f) use_nl(d) full(f) gather_plan_statistics*/ count(f.padding), count(d.padding) from fact f, dim d where f.fact_date between to_date('
dim_id

..skip..

SQL Plan Monitoring Details (Plan Hash Value=85884857)
==================================================================================================================================================================
| Id |            Operation            | Name  |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |       Activity Detail       |
|    |                                 |       | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |         (# samples)         |
==================================================================================================================================================================
|  0 | SELECT STATEMENT                |       |         |       |         8 |     +2 |     1 |        1 |      |       |          |                             |
|  1 |   SORT AGGREGATE                |       |       1 |       |         8 |     +2 |     1 |        1 |      |       |          |                             |
|  2 |    NESTED LOOPS                 |       |         |       |         8 |     +2 |     1 |     4214 |      |       |          |                             |
|  3 |     NESTED LOOPS                |       |    5025 | 11890 |         8 |     +2 |     1 |     4214 |      |       |          |                             |
|  4 |      TABLE ACCESS FULL          | FACT  |    2408 |  3535 |         9 |     +1 |     1 |     2404 |  109 | 100MB |    11.11 | direct path read (1)        |
|  5 |      INDEX RANGE SCAN           | DIM_I |       2 |     1 |         8 |     +2 |  2404 |     4214 |   12 | 98304 |          |                             |
|  6 |     TABLE ACCESS BY INDEX ROWID | DIM   |       2 |     4 |         8 |     +2 |  4214 |     4214 | 3360 |  26MB |    88.89 | db file sequential read (8) |
==================================================================================================================================================================
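Since the last run was executed with gather_plan_statistics, the per-line actuals can also be cross-checked with DBMS_XPLAN. A sketch: passing NULL for sql_id and cursor_child_no picks up the last statement executed in the current session.

```sql
-- Sketch: report actual rows and buffer gets per plan line for the
-- last statement of this session (needs gather_plan_statistics or
-- statistics_level = ALL to have been in effect).
select *
  from table(dbms_xplan.display_cursor(sql_id          => null,
                                       cursor_child_no => null,
                                       format          => 'ALLSTATS LAST'));
```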