Jonathan Lewis


drop partition

Fri, 2022-08-05 14:24

This note is about some testing I did on the consequences of the (new in 12c) “deferred global index maintenance” feature that Oracle introduced as a strategy to reduce the impact of dropping partitions from a partitioned table.

Looking at my notes I see that I created my first test in August 2013 on Oracle 12.1.0.1 – probably after reading Richard Foote’s series on the topic.

At the time I didn’t turn my notes into a blog post, but a recent request on the MOS Community Forum (needs a MOS account) prompted me to revisit and extend the tests using 19c.

  1. The Request
  2. The Background
  3. The Model
  4. Tests and Results
  5. Deep Dive
    1. dbms_space
    2. Tree dumps
    3. Dumping Redo
    4. Transactions and locking
  6. Summary
  7. Footnote

The Request

The database is 19.3, two-node RAC with a standby (type and function not specified). There is a table range-partitioned by month holding nearly three years of data. The table size is about 250GB and the OP wants to drop the partitions older than one year.

There had been an issue doing this in some other environment: when the daily maintenance window was running “it consumes a lot of CPU but I could not find a link between both activities”.

Is there anything that could be done to avoid any db impact, especially as this is a “production 24” environment?

There was no comment about how many indexes there were (and that’s an important detail), nor how many were global, globally partitioned, or local (also important details), but there was a comment that the pmo_deferred_gidx_main_job was run immediately after the drop, generated a lot of redo, and was still running after 10 hours – so it’s reasonably safe to assume that global indexes had a big impact since that’s the “partition maintenance operations deferred global index maintenance” job.

From the comment about the system being “production 24” I assume that the target is to come up with a strategy that doesn’t deny access to the users for a few hours, has the least possible impact on what they normally do, and doesn’t require the standby database to be (partially) rebuilt / unavailable.

Since this is Oracle 19c and the OP wants to drop nearly two-thirds of the data (i.e. significantly more than he’s keeping) the “obvious” strategy to investigate is dropping the partitions (online, with update indexes) then rebuilding the global (or globally partitioned) indexes (online).

At a minimum it would be sensible to do some modelling to get some idea of why the other system spent so much time in pmo_deferred_gidx_main_job as this might allow you to work out either that it wouldn’t be a problem in this system, or that there was a variation on the method that would be better, or that you just don’t want to use the job because you’ve got a good idea of just how nasty it would be.

The Background

Deferred index maintenance means that global index maintenance does not take place when you drop a partition. Historically Oracle would, as part of the drop, delete every single index entry for the dropped partition from every single global index – doing a lot of work and taking a lot of time at the moment of the drop. Deferred maintenance means Oracle simply notes which object_ids no longer exist and when reading through a global index ignores index entries where the rowid includes the object_id of a dropped object.

Note: the rowid stored in a global (B-tree) index entry is made up of 4 components that require 10 bytes of storage. In order these are: (object_id of table partition, file_id, block_id, row number within block). For a local index, or an index on a simple heap table, the object_id of the table can be inferred from the identity of the index, so it is not stored and the rowid needs only 6 bytes of storage.
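As a quick illustration, here’s a sketch (using the documented dbms_rowid package; the column aliases are mine) that pulls those four components out of an extended rowid – run it against any partitioned table, e.g. the t1 created below:

select
        dbms_rowid.rowid_object(rowid)          data_object_id,
        dbms_rowid.rowid_relative_fno(rowid)    file_id,
        dbms_rowid.rowid_block_number(rowid)    block_id,
        dbms_rowid.rowid_row_number(rowid)      row_number
from
        t1
where
        rownum = 1
;

For a row in a partitioned table the data_object_id reported is that of the table partition holding the row – which is why an orphaned entry in a global index can be recognised from its rowid alone.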

So the benefit of deferred maintenance is that dropping a partition takes virtually no time at all, but (a) Oracle has to clean up the garbage at some point and (b) until the garbage has been cleaned up it has to be read before it can be ignored.

A thought about the second point – if Oracle can check for dropped object_ids very efficiently then it doesn’t necessarily matter that you haven’t cleaned out the garbage. Your application won’t instantly run more slowly, it’s just that you won’t have achieved the (possible) benefit of a smaller index that might allow the application to run a little faster.

So here’s a hypothesis to explain why the OP’s previous experience of deferred maintenance was very slow : if you update global indexes in real time Oracle does that job as efficiently as possible because it can use key values and rowids from the table segment that it’s dropping to create a “delete array” for the index; if you defer the maintenance Oracle has to walk through the index one entry at a time and work out whether or not to delete it – and we all know that single-row processing is more expensive than array-processing. It’s worth noting that there are notes on MOS to support this hypothesis, e.g. Bug 27468233 : ALTER INDEX COALESCE CLEANUP IS GENERATING HUGE AMOUNT OF REDO reports an example of generating 23GB of redo while cleaning up an index of only 1.8GB. (Version 12.2.0.1)

So let’s build a model and do some simple tests.

The Model

I’m going to build a table with 6 (range) partitions and two global indexes. I’ll set up two very different patterns of data for the two indexes to see how much impact the data pattern might have.

I’ll drop the bottom three partitions, then clean up the mess in a variety of ways. There’s the call to the pmo_deferred_gidx_main_job, which normally runs at 2:00 am daily but can be initiated by a call to dbms_scheduler.run_job; then there’s the dbms_part.cleanup_gidx() procedure that has a couple of options; then there’s a simple call to “alter index coalesce cleanup [only] [parallel N]” (which needs improved documentation) and finally, of course, “alter index rebuild [online]”.
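As a side note – if you want a quick check of which indexes are still waiting for cleanup at any point, the orphaned_entries column of the xxx_indexes views (new in 12c, and reported in the build script below) is the thing to query; a minimal sketch:

select  index_name, status, orphaned_entries
from    user_indexes
where   table_name = 'T1'
and     orphaned_entries = 'YES'
;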

For at least some runs of the tests it will be worth enabling SQL_trace to see what happens behind the scenes; and it’s always worth checking the session activity stats – and maybe some activity from some other dynamic performance views as well.

So here’s the code to create the data set:

rem
rem     Script:         12c_global_index_maintenance_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2022
rem
rem     Last tested:
rem             19.11.0.0
rem

create table t1 
partition by range(id) (
        partition p09000 values less than ( 9000),
        partition p18000 values less than (18000),
        partition p27000 values less than (27000),
        partition p36000 values less than (36000),
        partition p45000 values less than (45000),
        partition p54000 values less than (54000)
)
as
select
        rownum - 1                      id,
        trunc(dbms_random.value(0,600)) n1,
        rpad('x',100)                   padding
from
        all_objects ao
where
        rownum <= 54000
;

create index t1_n1 on t1(n1) pctfree 0;
create index t1_id on t1(id) pctfree 0;

select 
        index_name, num_rows, s.blocks, leaf_blocks, status, orphaned_entries 
from 
        user_indexes i, user_segments s 
where 
        i.index_name = s.segment_name 
and     i.table_name='T1' 
and     partitioned = 'NO'
;

alter table t1 drop partition p09000, p18000, p27000 update global indexes; 

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

select 
        index_name, num_rows, s.blocks, leaf_blocks, status, orphaned_entries 
from 
        user_indexes i, user_segments s 
where 
        i.index_name = s.segment_name 
and     i.table_name='T1' 
and     partitioned = 'NO'
;


For testing purposes I’ve set the index pctfree to 0; and I’ve reported some of the index stats before and after dropping the three partitions so that we can see what the optimizer thinks the indexes look like:

Index size information before drop
==================================
INDEX_NAME             NUM_ROWS     BLOCKS LEAF_BLOCKS STATUS   ORP
-------------------- ---------- ---------- ----------- -------- ---
T1_ID                     54000        256         134 VALID    NO
T1_N1                     54000        256         128 VALID    NO


Index size information after drop
=================================
INDEX_NAME             NUM_ROWS     BLOCKS LEAF_BLOCKS STATUS   ORP
-------------------- ---------- ---------- ----------- -------- ---
T1_ID                     27000        256          68 VALID    YES
T1_N1                     27000        256         128 VALID    YES

Both indexes are valid (which is good for the application) and their segment sizing has not changed. The number of rows has halved in both indexes but the number of (populated) leaf blocks has remained unchanged for one index even though it has halved for the other.

If you dumped a few index leaf blocks the explanation for the changes (and the difference in the changes) would become clear. The number of (non-deleted) index entries in the two indexes has not changed, but Oracle is (almost literally) ignoring half of them – the ones that include the object_ids for the original first three table partitions.

The t1_id index is on the (sequential) id, and the table is partitioned by id; we have dropped the partitions that hold (nothing but) the ids less than 27,000. In earlier versions of Oracle this would have immediately deleted all the index entries from the first half of the index, leaving all the leaf blocks in the 2nd half of the index full. Although the index entries are still in the blocks of the first half, Oracle behaves as if they don’t exist, which means it treats those blocks as empty when calculating the leaf_blocks statistic.

The t1_n1 index is on integer values from 0 to 599, randomly distributed across the full range of ids, so by dropping the partitions for ids less than 27,000 we (ought to) have deleted the first half of the index entries for n1 = 0, the first half for n1 = 1, and so on – leaving every index leaf block approximately half empty and still available for inclusion in the leaf_blocks count.

How, then, does Oracle manage to “ignore” the rows that would have been deleted in older versions? We can always enable SQL tracing when gathering stats, run tkprof against the trace file, and look for the SQL that Oracle used – and if that doesn’t reveal all, use the sql_id of the relevant statements to pull their plans from memory. Here’s the query (reformatted) and plan for one of the index stats gathering queries that I pulled from memory after finding it and its sql_id in the tkprof output:

SQL> select * from table(dbms_xplan.display_cursor('gtnd3aphdkp3k'));

SQL_ID  gtnd3aphdkp3k, child number 0
-------------------------------------
select /*+ 
                opt_param('_optimizer_use_auto_indexes' 'on')
                no_parallel_index(t, "T1_ID")  
                dbms_stats  cursor_sharing_exact  use_weak_name_resl 
                dynamic_sampling(0)  no_monitoring  xmlindex_sel_idx_tbl 
                opt_param('optimizer_inmemory_aware' 'false')
                no_substrb_pad  no_expand index(t, "T1_ID") 
        */ 
        count(*)                                          as nrw,
        count(distinct sys_op_lbid(130418, 'L', t.rowid)) as nlb,
        null                                              as ndk,
        sys_op_countchg(substrb(t.rowid, 1, 15), 1)       as clf 
from
        "TEST_USER"."T1" t where "ID" is not null

Plan hash value: 4265068335

--------------------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT |       |       |       |   136 (100)|          |
|   1 |  SORT GROUP BY   |       |     1 |    17 |            |          |
|*  2 |   INDEX FULL SCAN| T1_ID | 27000 |   448K|   136   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((TBL$OR$IDX$PART$NUM(<?>,0,8,0,"T".ROWID)=1 AND "ID" IS NOT NULL))

If you’ve read Cost Based Oracle – Fundamentals you’ll recognise the SQL as typical of the pattern Oracle uses to gather stats on an index, with a couple of sys_op() function calls that dissect rowids to allow Oracle to calculate the number of leaf_blocks in, and clustering_factor of, the index. What’s new, though, is the filter() predicate on the index scan that (presumably) is checking that the rowid belongs to a table partition that still exists. (In other circumstances the “<?>” would be the table name; possibly it’s a printing error here. The value 8 for the third parameter also appears in queries involving table-expansion with partial indexing.)

Unsurprisingly, if you execute a simple query driven through one of the indexes after dropping partitions you’ll see exactly the same filter() predicate generated for the execution plan for the range scan operation.

SQL_ID  822pfkz83jzhz, child number 0
-------------------------------------
select  /*+ index(t1(n1)) */  id from t1 where n1 = 300

Plan hash value: 2152633691

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |      1 |        |    44 (100)|     40 |00:00:00.01 |      41 |
|   1 |  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED| T1    |      1 |     45 |    44   (0)|     40 |00:00:00.01 |      41 |
|*  2 |   INDEX RANGE SCAN                         | T1_N1 |      1 |     45 |     1   (0)|     40 |00:00:00.01 |       3 |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("N1"=300)
       filter(TBL$OR$IDX$PART$NUM(<?>,0,8,0,"T1".ROWID)=1)

The tbl$or$idx$part$num() function is an important item to consider at this point. How much impact will it have on your processing? It’s hard to give a generic answer since it may depend on exactly what your data looks like and whether or not the function can cache its result effectively. It’s also possible that the performance of the function is related to either the number of partitions dropped or the number of partitions still in existence – so that’s a detail that probably has to be tested at the correct scale before you go into production.

More significant, perhaps, is how long that impact is going to be relevant and what savings it has to be balanced against. My thought at this point is that if you drop a partition but don’t clean up the index you reduce your workload by not visiting (possibly cached) table blocks, but you pay for the benefit by calling the function for every index entry you visit (whether or not it would have required you to visit the table prior to dropping the partition). Maybe it’s okay to leave the index uncleaned for a few days or even a few weeks before you take any steps to clean up the mess; if that’s the case then maybe you can spread a relatively large number of clean-up jobs over a long enough period of time that their impact doesn’t become visible to the users.

Tests and Results

Test number 1:

What does pmo_deferred_gidx_main_job do?

I had to connect as a dba and grant alter on this job to my normal test user to be able to do the following test. The default role for my test user had also been granted create job, so that might have been needed as well; and you’ll see that I’ve included a number of my typical “v$ snapshot” procedures to measure different aspects of the workload, and I’ve enabled the 10046 trace to see what Oracle does behind the scenes.

alter system flush buffer_cache;

execute snap_enqueues.start_snap
execute snap_events.start_snap
execute snap_my_stats.start_snap
execute snap_redo.start_snap
execute snap_rollstats.start_snap

alter session set events '10046 trace name context forever, level 8';

execute dbms_scheduler.run_job('SYS.PMO_DEFERRED_GIDX_MAINT_JOB',true)

alter session set events '10046 trace name context off';

execute snap_rollstats.end_snap
execute snap_redo.end_snap
execute snap_my_stats.end_snap
execute snap_events.end_snap
execute snap_enqueues.end_snap

The most significant discovery in this test was that the package (ultimately) executed two SQL statements:

ALTER INDEX "TEST_USER"."T1_ID" COALESCE CLEANUP PARALLEL 1
ALTER INDEX "TEST_USER"."T1_N1" COALESCE CLEANUP PARALLEL 1

So really all I needed from that point onwards was to worry about testing and/or optimising variations of the “alter index coalesce” command.

Test number 2:

What does the procedure dbms_part.cleanup_gidx() do?

This procedure takes a schema name and table name as its first inputs with defaults null and has two other parameters called parallel (default 0) and options (default ‘CLEANUP_ORPHANS’); the only other value you can supply for options is ‘COALESCE’. Again the 10046 trace of my wrapper was very useful, as this showed the following SQL when I specified just the schema and table name for the call:

        ALTER INDEX "TEST_USER"."T1_ID" COALESCE CLEANUP ONLY
        ALTER INDEX "TEST_USER"."T1_N1" COALESCE CLEANUP ONLY

The 19c manual does mention the “ONLY” keyword, but doesn’t explain its significance (but I will in a moment). If I re-ran the test with options set to ‘COALESCE’ the SQL statements changed to:

        ALTER INDEX "TEST_USER"."T1_ID" COALESCE CLEANUP
        ALTER INDEX "TEST_USER"."T1_N1" COALESCE CLEANUP

This did more work than the “cleanup only” run (figures in a moment). When I re-ran the tests setting the parallel option to 2 the same SQL statement appeared with “PARALLEL 2”.
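For reference, the shape of the call for that last variant would be as follows – a sketch in which I’ve passed the schema and table name positionally (I haven’t quoted their formal parameter names), while parallel and options are the two parameters described above:

begin
        dbms_part.cleanup_gidx(
                user,                   -- schema name, default null
                'T1',                   -- table name, default null
                parallel => 2,
                options  => 'COALESCE'
        );
end;
/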

So here are some of the most important numbers for the calls to dbms_part.cleanup_gidx(). They are the headline undo and redo figures from the session statistics:

Default: (cleanup_orphans)
==========================
Name                                       Value
----                                       -----
redo entries                              54,283
redo size                             12,412,092
undo change vector size                4,762,432

options=>"Coalesce"
====================
Name                                       Value
----                                       -----
redo entries                              57,427
redo size                             15,518,660
undo change vector size                6,466,264

As you consider those figures, let me remind you that the indexes started out at roughly 1MB each, and we dropped 27,000 rows in three partitions. A quick check of the “cleanup orphans” arithmetic shows (approx) 54,000 redo entries = (two indexes * 27,000 rows each), and 12.4M redo size / 54K redo entries = 230 bytes per index entry. That last result is a number you can use as a baseline to estimate the redo that will be generated by cleaning up global indexes: how many rows will be dropped from the table, how many global indexes have you got on the table – multiply the two together and multiply by 230.
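To put that formula into context, here’s a sketch of the arithmetic for an invented case – say the partitions you want to drop hold 50 million rows and the table has 4 global indexes, each indexing every row:

select
        round(50e6 * 4 * 230 / power(1024,3), 1)       estimated_redo_gb
from
        dual
;

-- estimated_redo_gb = 42.8 (before counting any compaction redo)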

Of course, that’s just for “cleanup only” and it assumes that every row appears in every index (which, in a well engineered system, won’t be true). Where does the extra 3MB of redo come from? Let’s drop down one more level in the processing, and run explicit “alter index” statements through the test harness.

Test number 3:

What does alter index xxx coalesce cleanup [only] do?

Here are the redo and undo summaries from two sets of tests – “coalesce cleanup only”, and “coalesce cleanup”:

Index t1_n1                         Cleanup only            Cleanup
-----------                         -------------------------------
redo entries                              27,139             28,176
redo size                              6,209,140          8,930,900
undo change vector size                2,382,564          3,999,900

Index t1_id                         Cleanup only            Cleanup
-----------                         -------------------------------
redo entries                              27,144             28,966
redo size                              6,202,436          6,547,460
undo change vector size                2,379,868          2,461,908

For “coalesce cleanup only” the workload for the two indexes is (effectively) identical – it’s basically the undo and redo from marking 27,000 index entries as deleted and doing nothing else. The blocks have not been cleaned up in any way; that is left to future sessions that need to insert an entry into a block and find that it is full but has lots of space that can be reclaimed from deleted entries.

When we use the “coalesce cleanup” (i.e. without “only”) Oracle does some extra work, but the amount of work varies significantly depending on the nature of the index: t1_n1 generates an extra 2.7MB of redo, index t1_id generates only another 345KB. That may be a little surprising, but we’ve already had a clue that something like this might happen, and since every other strategy for “cleaning” the indexes comes down to running these variations of the coalesce command we should look a little further into what they do and how they work.

To get a complete picture we’ll have to do some work with the dbms_space package, the index treedump command, and dumping redo; we also ought to take a look at v$rollstat and v$enqueue_stat. We’ll pursue all of that in the Deep Dive section.

Test number 4:

What does alter index rebuild online do?

There’s a very important point to check in this test – if your database is in noarchivelog mode the rebuild will be nologging, and you’ll be fooled by the apparent efficiency of the mechanism right up to the point where you go to production and find that you’re generating a huge amount of redo. For the record, my indexes were roughly 64 blocks (512KB) each when rebuilt and produced the following redo figures (and virtually no undo):

Index t1_n1
-----------
redo entries                                 343
redo size                                594,984
redo size for direct writes              527,148
undo change vector size                   18,528
sorts (rows)                              27,058

Index t1_id
-----------
redo entries                                 345
redo size                                625,956
redo size for direct writes              560,092
undo change vector size                   18,096
sorts (rows)                              27,022

I’ve included the sorts statistic as a reminder that there are other potentially nasty overheads to consider. When you do an online rebuild Oracle will have to lock the table briefly, create a journal table (effectively a materialized view log) to capture the changes that go on while the rebuild is running, then apply those changes when the rebuild is nearly complete; and the rebuild has to be based on a tablescan and sort.

Depending on what fraction of the partitions you are dropping, though, this does look like a very promising option – especially when you have to cater for the problem of shifting the redo to a remote site.
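Given the warning about noarchivelog mode, it’s worth confirming the logging situation on the test system before trusting (or trying to reproduce) any rebuild figures – a minimal sketch:

select  log_mode, force_logging
from    v$database
;

select  tablespace_name, logging, force_logging
from    user_tablespaces
;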

Deep Dive

We have seen how much redo was generated for both “coalesce cleanup” and “coalesce cleanup only” and have an idea that we know what’s happening, so we will be taking a look at some redo dumps to see if that confirms our suspicion. Before we get to that extreme, though, it’s worth taking a couple of simpler steps.

The dbms_space package

We can use the dbms_space.space_usage() procedure to see the state of blocks in the two index segments before and after the attempts to cleanup. It’s important to remember that for index segments the procedure uses the “FS2” state to report blocks that are “free”, i.e. formatted but contain no data, aren’t in the index structure and can be linked in to the index when an existing block needs to be split.
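For reference, this is roughly the call I wrapped in a simple reporting script – a sketch in which the parameter list matches the documented dbms_space.space_usage() signature for ASSM segments, printing just the two rows that matter here:

set serveroutput on

declare
        m_unf_blocks    number;        m_unf_bytes     number;
        m_fs1_blocks    number;        m_fs1_bytes     number;
        m_fs2_blocks    number;        m_fs2_bytes     number;
        m_fs3_blocks    number;        m_fs3_bytes     number;
        m_fs4_blocks    number;        m_fs4_bytes     number;
        m_full_blocks   number;        m_full_bytes    number;
begin
        dbms_space.space_usage(
                segment_owner           => user,
                segment_name            => 'T1_N1',
                segment_type            => 'INDEX',
                unformatted_blocks      => m_unf_blocks,
                unformatted_bytes       => m_unf_bytes,
                fs1_blocks              => m_fs1_blocks,
                fs1_bytes               => m_fs1_bytes,
                fs2_blocks              => m_fs2_blocks,
                fs2_bytes               => m_fs2_bytes,
                fs3_blocks              => m_fs3_blocks,
                fs3_bytes               => m_fs3_bytes,
                fs4_blocks              => m_fs4_blocks,
                fs4_bytes               => m_fs4_bytes,
                full_blocks             => m_full_blocks,
                full_bytes              => m_full_bytes
        );
        dbms_output.put_line('Freespace 2 (25 - 50% free): ' || m_fs2_blocks  || ' / ' || m_fs2_bytes);
        dbms_output.put_line('Full                       : ' || m_full_blocks || ' / ' || m_full_bytes);
end;
/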

Here are three sets of results from calls to the procedure showing the state immediately before (and after) the partitions were dropped, then after a “coalesce cleanup only” test, and then after a “coalesce cleanup” test.

Index T1_N1                   Blocks       Bytes    |  After "Cleanup Only" |   After "Cleanup" 
----------------------------------------------------|-----------------------|-------------------
Unformatted                   :    0 /         0    |      0 /         0    |      0 /         0
Freespace 1 (  0 -  25% free) :    0 /         0    |      0 /         0    |      0 /         0
Freespace 2 ( 25 -  50% free) :    1 /     8,192    |      1 /     8,192    |     67 /   548,864
Freespace 3 ( 50 -  75% free) :    0 /         0    |      0 /         0    |      0 /         0
Freespace 4 ( 75 - 100% free) :    0 /         0    |      0 /         0    |      0 /         0
Full                          :  134 / 1,097,728    |    134 / 1,097,728    |     68 /   557,056


Index T1_ID                   Blocks       Bytes    |  After "Cleanup Only" |  After "Cleanup"
----------------------------------------------------|-----------------------|-------------------
Unformatted                   :    0 /         0    |      0 /         0    |      0 /         0
Freespace 1 (  0 -  25% free) :    0 /         0    |      0 /         0    |      0 /         0
Freespace 2 ( 25 -  50% free) :    1 /     8,192    |      1 /     8,192    |     65 /   532,480
Freespace 3 ( 50 -  75% free) :    0 /         0    |      0 /         0    |      0 /         0
Freespace 4 ( 75 - 100% free) :    0 /         0    |      0 /         0    |      0 /         0
Full                          :  128 / 1,048,576    |    128 / 1,048,576    |     64 /   524,288


There are two key points in these figures:

  • cleanup only doesn’t change the state of any space usage information.
  • cleanup doesn’t do anything to release space back to the tablespace; it simply compacts the data into a smaller number of blocks and tags empty blocks as “free” (and takes them out of the index structure, but that’s not visible in the summary from dbms_space.space_usage).

Index treedumps

If we want to find out how many of the FS2 blocks are in the index structure and how many have been taken out of the tree then we’ll have to do a tree dump – get the object_id of the index and issue:

alter session set events 'immediate trace name treedump level {object id}';
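For example, a sketch of the complete sequence for the t1_n1 index (the trace file appears in the usual diagnostic destination for the session):

column object_id new_value m_object_id

select  object_id
from    user_objects
where   object_name = 'T1_N1'
;

alter session set events 'immediate trace name treedump level &m_object_id';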

Here are the first few lines of the tree dump for the t1_n1 index in the same three states, created, after cleanup only, after cleanup:

Immediately after drop
branch: 0x9000684 150996612 (0: nrow: 128, level: 1)
   leaf: 0x9000685 150996613 (-1: row:449.449 avs:8)
   leaf: 0x9000686 150996614 (0: row:444.444 avs:4)
   leaf: 0x9000687 150996615 (1: row:444.444 avs:4)
   leaf: 0x9000688 150996616 (2: row:444.444 avs:4)
   leaf: 0x9000689 150996617 (3: row:444.444 avs:4)
   leaf: 0x900068a 150996618 (4: row:444.444 avs:4)
   leaf: 0x900068b 150996619 (5: row:444.444 avs:4)

After "Cleanup only"
branch: 0x9000684 150996612 (0: nrow: 128, level: 1)
   leaf: 0x9000685 150996613 (-1: row:449.214 avs:8)
   leaf: 0x9000686 150996614 (0: row:444.216 avs:4)
   leaf: 0x9000687 150996615 (1: row:444.216 avs:4)
   leaf: 0x9000688 150996616 (2: row:444.226 avs:4)
   leaf: 0x9000689 150996617 (3: row:444.216 avs:4)
   leaf: 0x900068a 150996618 (4: row:444.206 avs:4)
   leaf: 0x900068b 150996619 (5: row:444.231 avs:4)
   leaf: 0x900068c 150996620 (6: row:444.201 avs:4)
 
After "Cleanup"
branch: 0x9000684 150996612 (0: nrow: 64, level: 1)
   leaf: 0x9000685 150996613 (-1: row:446.446 avs:16)
   leaf: 0x9000688 150996616 (0: row:444.444 avs:4)
   leaf: 0x900068a 150996618 (1: row:444.444 avs:4)
   leaf: 0x900068c 150996620 (2: row:444.444 avs:4)
   leaf: 0x900068e 150996622 (3: row:444.444 avs:4)
   leaf: 0x9000690 150996624 (4: row:444.444 avs:4)
   leaf: 0x9000692 150996626 (5: row:444.444 avs:4)
   leaf: 0x9000694 150996628 (6: row:444.444 avs:4)
   leaf: 0x9000696 150996630 (7: row:444.444 avs:4)
   leaf: 0x9000698 150996632 (8: row:444.444 avs:4)
   leaf: 0x900069a 150996634 (9: row:421.421 avs:12)

This is the index where the values 0 to 599 have been spread randomly across the 54,000 different id values, so each n1 value appears in roughly 90 rows – of which we’ve deleted about 45 by dropping the bottom 3 partitions of 6. For leaf blocks “row:X.Y” means there are X rows in the block directory of which Y would remain after the rows marked as deleted had been removed; “avs:N” shows N bytes of free space in the block. The t1_n1 index can fit 444 index entries per leaf block and we can see that immediately after the “drop partition” none of them has been marked as deleted. After a “cleanup only”, however, we can see that (as expected) roughly half the rows in every leaf block have been marked as deleted, with half remaining. After a “cleanup” we can see that we’re back to 444 rows per leaf block with no deletions and virtually no free space.

Notice, however, the way that the leaf block addresses have changed during the cleanup. If we examine just the last 3 digits of the decimal version of the leaf block addresses we start with:

613, 614, 615, 616, 617, 618, 619, 620

but we end with:

613, 616, 618, 620 ..

Effectively, Oracle has “copied back” index entries from block 614 to block 613, copied the remaining rows forward from 614 and all the rows from 615 to 616, copied the “overflow” rows from 616 and all the rows from 617 into 618, and so on, detaching empty blocks from the index structure as they appear. (It’s not likely that Oracle thinks in terms of “copying forward/back”; it’s more likely that Oracle simply reads through the index in order, constructing new leaf blocks in private memory, with a simple algorithm for deciding which block to replace, e.g. “the most recent leaf block that does not have any overflow to use up”.)

If we now examine the tree dumps from index t1_id we can see why the volume of redo differs on the final “cleanup” phase of the code.

Immediately after drop
branch: 0x9000784 150996868 (0: nrow: 134, level: 1)
   leaf: 0x9000785 150996869 (-1: row:426.426 avs:7)
   leaf: 0x9000786 150996870 (0: row:421.421 avs:1)
   leaf: 0x9000787 150996871 (1: row:421.421 avs:1)
   leaf: 0x9000788 150996872 (2: row:421.421 avs:1)
   leaf: 0x9000789 150996873 (3: row:421.421 avs:2)
   leaf: 0x900078a 150996874 (4: row:421.421 avs:1)
   leaf: 0x900078b 150996875 (5: row:421.421 avs:1)
...
   leaf: 0x90007c6 150996934 (64: row:400.400 avs:0)
   leaf: 0x90007c7 150996935 (65: row:400.400 avs:0)
   leaf: 0x90007c8 150996936 (66: row:400.400 avs:0)
   leaf: 0x90007c9 150996937 (67: row:400.400 avs:0)
   leaf: 0x90007ca 150996938 (68: row:400.400 avs:0)
   leaf: 0x90007cb 150996939 (69: row:400.400 avs:0)

After "Cleanup only"
branch: 0x9000784 150996868 (0: nrow: 134, level: 1)
   leaf: 0x9000785 150996869 (-1: row:426.0 avs:7)
   leaf: 0x9000786 150996870 (0: row:421.0 avs:1)
   leaf: 0x9000787 150996871 (1: row:421.0 avs:1)
   leaf: 0x9000788 150996872 (2: row:421.0 avs:1)
   leaf: 0x9000789 150996873 (3: row:421.0 avs:2)
   leaf: 0x900078a 150996874 (4: row:421.0 avs:1)
   leaf: 0x900078b 150996875 (5: row:421.0 avs:1)
...
   leaf: 0x90007c6 150996934 (64: row:400.0 avs:0)
   leaf: 0x90007c7 150996935 (65: row:400.303 avs:0)
   leaf: 0x90007c8 150996936 (66: row:400.400 avs:0)
   leaf: 0x90007c9 150996937 (67: row:400.400 avs:0)

After "Cleanup"
branch: 0x9000784 150996868 (0: nrow: 68, level: 1)
   leaf: 0x90007c7 150996935 (-1: row:303.303 avs:1940)
   leaf: 0x90007c8 150996936 (0: row:400.400 avs:0)
   leaf: 0x90007c9 150996937 (1: row:400.400 avs:0)
   leaf: 0x90007ca 150996938 (2: row:400.400 avs:0)
   leaf: 0x90007cb 150996939 (3: row:400.400 avs:0)
   leaf: 0x90007cc 150996940 (4: row:400.400 avs:0)
   leaf: 0x90007cd 150996941 (5: row:400.400 avs:0)

I’ve shown two sections of the treedump for the first two extracts: the start of the index, and the start of the “2nd half” of the index where the id values are in the partitions that we kept. You’ll notice that the number of index entries per leaf block drops from 426 to 400 as we move through the index; that’s just the effect of a sequential id generally getting bigger (42 takes 2 bytes, 42,000 takes 3 bytes, 42,042 takes 4 bytes).

After “cleanup only” all the leaf blocks in the first half of the index show “no rows remaining of 4xx”, while all the leaf blocks in the 2nd half of the index show “400 rows remaining of 400” – with the exception of the leaf block numbered 65 at address 0x90007c7, which shows 303 rows remaining of 400; that must be the block that held the highest few rows from partition p27000.

After the final cleanup we can see that this “mid-point” leaf block has become the “low value” leaf block, and the rest of the index leaf blocks look as if they are completely unchanged. (I think we can assume that the “copy forward/back” code caters for a few boundary conditions that (e.g.) stop Oracle from doing something silly with leaf blocks that are already completely full.)

In this case, then, we can guess that Oracle has simply removed 66 leaf blocks (blocks -1 to 64) from the index structure and reconnected block number 65 as the starting block. In the previous case the “cleanup” redo disconnected a similar number of leaf blocks, but also rewrote all the blocks that were kept.

Dumping Redo

The information we examined so far prompts us to ask two significant questions:

  • Moving from the “cleanup only” to the “cleanup” increased the redo by 345KB in the case of the sequential t1_id index, but by 2.7MB in the case of the “random arrival” t1_n1 index. In both cases the number of leaf blocks to be relinked is very similar, so the difference seems to be due to the work done in compacting roughly 128 leaf blocks down to 64 leaf blocks – how does that generate (2700KB – 345KB)/64 = 36KB per “new” block? We might also wonder why “just” relinking 66 blocks appears to take 345K/66 = 5K per block anyway – that seems a little surprising.
  • For the full cleanup operation does Oracle delete the entries from a few consecutive blocks, compact them and write them and then carry on with the next few blocks; or does it walk the entire index deleting entries and then walk the index again to tidy up and compact (logically) adjacent blocks. If the latter then for a very large index (rather than my tiny test) that means we could see two consecutive full scans physically reading the whole index, and possibly generating more redo thanks to delayed block cleanout.

If we dump the redo log we can answer these questions by extracting some fairly simple details – although the resulting trace file is going to be rather large. For example we could code:

column current_scn new_value start_scn
select to_char(current_scn,'9999999999999999') current_scn from v$database;

alter index t1_n1 coalesce cleanup;

column current_scn new_value end_scn
select to_char(current_scn,'9999999999999999') current_scn from v$database;

alter session set tracefile_identifier='n1_index';
alter system dump redo scn min &start_scn scn max &end_scn /* layer 10 */;
alter session set tracefile_identifier='';


I’ve included a commented out “layer 10” in the dump command to show how you can be selective in what redo you dump. Layer 10 is the set of redo op codes for index-related change vectors. You will find other op codes (in particular 5.1) being dumped as well because Oracle dumps the whole of any redo record containing a change vector of the specified type.

When I dumped the redo for a single index cleanup the trace file was about 45MB – so not something you would want to read in detail – but you could start with a few simple searches, for example:

grep -n "OP:10"       or19_ora_13388_n1_index.trc >temp_n1_op10.txt
grep -n "REDO RECORD" or19_ora_13388_n1_index.trc >temp_n1_record.txt

grep "OP:"  or19_ora_13388_n1_index.trc | sed "s/^.*OP:/OP:/" | sed "s/ .*$//" | sort | uniq -c | sort -n

The first grep example simply extracts all the index-related op-codes (with line numbers from the trace file) to make it easier to spot patterns. The second grep does the same for the start of each redo record because those lines also report the length of each record, which may make it possible to find out more about the surprising amount of redo generated by the compacting and relinking. The third example is just a bit of showing off: it extracts all the op-code lines, cuts the actual OP:nn.nn bit out, then sorts and counts the number of appearances of each op-code. Here’s the result of that last example from one of the tests:

      1 OP:13.24
      1 OP:17.28
      1 OP:24.1
      2 OP:11.5
      3 OP:10.11
      3 OP:11.3
     15 OP:10.14
     18 OP:14.4
     18 OP:22.5
     28 OP:10.7
     29 OP:5.11
     36 OP:22.2
     64 OP:13.22
     68 OP:10.34
     82 OP:10.39
    138 OP:10.8
    205 OP:5.4
    219 OP:10.6
    259 OP:4.1
    324 OP:10.5
    347 OP:5.6
    689 OP:5.2
  27327 OP:10.4
  27833 OP:5.1

It’s an obvious guess that the 27,000 OP:10.4 at the bottom are “delete index entry” and most of the OP:5.1 are their corresponding undo change vectors. The OP:10.4 count is a little high, but checking the session activity stats I found that they showed: rollback changes – undo records applied = 376, so some internal transaction rollback took place, and the session stats are a reasonable match for the “excess” OP:10.4, suggesting that at some point a batch of deletes rolled back and restarted. (Note – something to investigate further: this suggests that the entire operation executes as a number of relatively small transactions and could safely be interrupted, a hypothesis supported by the 205 OP:5.4 (commit/rollback change vectors).) The “excess” OP:10.4 are also closely matched by the OP:10.5 / OP:5.6 figures (restore leaf during rollback / mark user undo record applied).

Before chasing any other details let’s answer the question about whether the compacting takes place during the delete phase or starts after the deletes are all done. If we jump to the end of the “OP:10” file we can check to see whether there are OP:10.4 all the way through the file with small patches of other stuff scattered through, or whether it’s non-stop OP:10.4 with a load of “something else” appearing only in the last couple of thousand lines. The answer is that we get regular repeats of a pattern like the following:

OP:10.4 x ~400
OP:10.6 
OP:10.6 
OP:10.6 
OP:10.39 
OP:10.8 
OP:10.34 
OP:10.8 

For example (after lots of 10.4 on blocks 0x9000695 and 0x9000696) we see:

119184:CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000696 OBJ:131045 SCN:0x0000000002165409 SEQ:1 OP:10.6 ENC:0 RBL:0 FLG:0x0000
119210:CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000695 OBJ:131045 SCN:0x0000000002165407 SEQ:1 OP:10.6 ENC:0 RBL:0 FLG:0x0000
119236:CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000694 OBJ:131045 SCN:0x0000000002165405 SEQ:1 OP:10.6 ENC:0 RBL:0 FLG:0x0000
119272:CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000684 OBJ:131045 SCN:0x0000000002165409 SEQ:1 OP:10.39 ENC:0 RBL:0 FLG:0x0000
119625:CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000694 OBJ:131045 SCN:0x0000000002165409 SEQ:1 OP:10.8 ENC:0 RBL:0 FLG:0x0000
120499:CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000695 OBJ:131045 SCN:0x0000000002165409 SEQ:1 OP:10.34 ENC:0 RBL:0 FLG:0x0000
120862:CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000696 OBJ:131045 SCN:0x0000000002165409 SEQ:2 OP:10.8 ENC:0 RBL:0 FLG:0x0000

This translates to:

10.6:  lock block 0x09000696
10.6:  lock block 0x09000695
10.6:  lock block 0x09000694
10.39: update branch block 0x09000684
10.8:  new block 0x09000694
10.34: empty block 0x09000695
10.8:  new block 0x09000696

The pattern then repeats starting with deletes from block 0x09000697 and 0x09000698, and then producing an interesting detail:

DBA:0x09000698 OBJ:131045 SCN:0x000000000216540f SEQ:1 OP:10.6 
DBA:0x09000697 OBJ:131045 SCN:0x000000000216540d SEQ:1 OP:10.6 
DBA:0x09000696 OBJ:131045 SCN:0x000000000216540b SEQ:1 OP:10.6 
DBA:0x09000684 OBJ:131045 SCN:0x000000000216540f SEQ:1 OP:10.39 
DBA:0x09000696 OBJ:131045 SCN:0x000000000216540f SEQ:1 OP:10.8 
DBA:0x09000697 OBJ:131045 SCN:0x000000000216540f SEQ:1 OP:10.34 
DBA:0x09000698 OBJ:131045 SCN:0x000000000216540f SEQ:2 OP:10.8 

Check line 5 above – it’s another OP:10.8 creating another version of block 0x09000696. And that’s one of the reasons why the compaction creates more redo than expected. Oracle may recreate a block several times before the block gets to its final compacted state.

Looking at the detail of the trace file, in particular the pattern of deletes followed by “new block”, it looks as if Oracle deletes all the rows from just two adjacent blocks (perhaps to minimise block-level locking) and then does the best it can with the rows that are left; this may mean (as it does in our fragment) writing one full block and one partial block, though for a sparsely populated index it might mean writing just a single partial block.

To show the total effect on redo generation I’ve extracted the redo records for a complete cycle of the pattern (excluding several hundred deletes, which generate 228 bytes each). I’ve used egrep to pick out 3 patterns: “OP:”, “REDO RECORD” and (using hindsight) “new block has”:

132650:REDO RECORD - Thread:1 RBA: 0x0002c8.00046e90.0134 LEN: 0x0058 VLD: 0x01 CON_UID: 3792595
132652:CHANGE #1 CON_ID:3 TYP:2 CLS:1 AFN:36 DBA:0x09000698 OBJ:131045 SCN:0x000000000216540e SEQ:1 OP:4.1 ENC:0 RBL:0 FLG:0x0000

132656:REDO RECORD - Thread:1 RBA: 0x0002c8.00046e90.018c LEN: 0x0164 VLD: 0x01 CON_UID: 3792595
132658:CHANGE #1 CON_ID:3 TYP:0 CLS:17 AFN:17 DBA:0x044001c0 OBJ:4294967295 SCN:0x00000000021653f8 SEQ:1 OP:5.2 ENC:0 RBL:0 FLG:0x0000
132661:CHANGE #2 CON_ID:3 TYP:1 CLS:18 AFN:17 DBA:0x04406fdf OBJ:4294967295 SCN:0x000000000216540f SEQ:1 OP:5.1 ENC:0 RBL:0 FLG:0x0000
132682:CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000698 OBJ:131045 SCN:0x000000000216540f SEQ:1 OP:10.6 ENC:0 RBL:0 FLG:0x0000

132691:REDO RECORD - Thread:1 RBA: 0x0002c8.00046e91.0100 LEN: 0x00e4 VLD: 0x01 CON_UID: 3792595
132693:CHANGE #1 CON_ID:3 TYP:0 CLS:18 AFN:17 DBA:0x04406fdf OBJ:4294967295 SCN:0x000000000216540f SEQ:2 OP:5.1 ENC:0 RBL:0 FLG:0x0000
132708:CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000697 OBJ:131045 SCN:0x000000000216540d SEQ:1 OP:10.6 ENC:0 RBL:0 FLG:0x0000

132717:REDO RECORD - Thread:1 RBA: 0x0002c8.00046e91.01e4 LEN: 0x00e4 VLD: 0x01 CON_UID: 3792595
132719:CHANGE #1 CON_ID:3 TYP:0 CLS:18 AFN:17 DBA:0x04406fdf OBJ:4294967295 SCN:0x000000000216540f SEQ:3 OP:5.1 ENC:0 RBL:0 FLG:0x0000
132734:CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000696 OBJ:131045 SCN:0x000000000216540b SEQ:1 OP:10.6 ENC:0 RBL:0 FLG:0x0000

132743:REDO RECORD - Thread:1 RBA: 0x0002c8.00046e92.00d8 LEN: 0x0058 VLD: 0x01 CON_UID: 3792595
132745:CHANGE #1 CON_ID:3 TYP:2 CLS:1 AFN:36 DBA:0x09000684 OBJ:131045 SCN:0x000000000216540a SEQ:1 OP:4.1 ENC:0 RBL:0 FLG:0x0000

132749:REDO RECORD - Thread:1 RBA: 0x0002c8.00046e92.0130 LEN: 0x0128 VLD: 0x01 CON_UID: 3792595
132751:CHANGE #1 CON_ID:3 TYP:0 CLS:18 AFN:17 DBA:0x04406fdf OBJ:4294967295 SCN:0x000000000216540f SEQ:4 OP:5.1 ENC:0 RBL:0 FLG:0x0000
132770:CHANGE #2 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000684 OBJ:131045 SCN:0x000000000216540f SEQ:1 OP:10.39 ENC:0 RBL:0 FLG:0x0000

132779:REDO RECORD - Thread:1 RBA: 0x0002c8.00046e93.0068 LEN: 0x4038 VLD: 0x01 CON_UID: 3792595
132781:CHANGE #1 CON_ID:3 TYP:0 CLS:17 AFN:17 DBA:0x044001c0 OBJ:4294967295 SCN:0x000000000216540f SEQ:1 OP:5.2 ENC:0 RBL:0 FLG:0x0000
132784:CHANGE #2 CON_ID:3 TYP:1 CLS:18 AFN:17 DBA:0x04406fe0 OBJ:4294967295 SCN:0x000000000216540f SEQ:1 OP:5.1 ENC:0 RBL:0 FLG:0x0000
133123:CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000696 OBJ:131045 SCN:0x000000000216540f SEQ:1 OP:10.8 ENC:0 RBL:0 FLG:0x0000
133139:new block has 444 rows

133653:REDO RECORD - Thread:1 RBA: 0x0002c8.00046eb5.0010 LEN: 0x20e8 VLD: 0x05 CON_UID: 3792595
133656:CHANGE #1 CON_ID:3 TYP:0 CLS:17 AFN:17 DBA:0x044001c0 OBJ:4294967295 SCN:0x000000000216540f SEQ:2 OP:5.2 ENC:0 RBL:0 FLG:0x0000
133659:CHANGE #2 CON_ID:3 TYP:1 CLS:18 AFN:17 DBA:0x04406fe1 OBJ:4294967295 SCN:0x0000000002165410 SEQ:1 OP:5.1 ENC:0 RBL:0 FLG:0x0000
133998:CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000697 OBJ:131045 SCN:0x000000000216540f SEQ:1 OP:10.34 ENC:0 RBL:0 FLG:0x0000

134007:REDO RECORD - Thread:1 RBA: 0x0002c8.00046ec6.0010 LEN: 0x0064 VLD: 0x01 CON_UID: 3792595
134009:CHANGE #1 CON_ID:3 TYP:0 CLS:8 AFN:36 DBA:0x09000680 OBJ:131045 SCN:0x0000000002165409 SEQ:1 OP:13.22 ENC:0 RBL:0 FLG:0x0000

134017:REDO RECORD - Thread:1 RBA: 0x0002c8.00046ec6.0074 LEN: 0x3a74 VLD: 0x01 CON_UID: 3792595
134019:CHANGE #1 CON_ID:3 TYP:0 CLS:17 AFN:17 DBA:0x044001c0 OBJ:4294967295 SCN:0x0000000002165410 SEQ:1 OP:5.2 ENC:0 RBL:0 FLG:0x0000
134022:CHANGE #2 CON_ID:3 TYP:1 CLS:18 AFN:17 DBA:0x04406fe2 OBJ:4294967295 SCN:0x0000000002165410 SEQ:1 OP:5.1 ENC:0 RBL:0 FLG:0x0000
134361:CHANGE #3 CON_ID:3 TYP:0 CLS:1 AFN:36 DBA:0x09000698 OBJ:131045 SCN:0x000000000216540f SEQ:2 OP:10.8 ENC:0 RBL:0 FLG:0x0000
134377:new block has 362 rows

134799:REDO RECORD - Thread:1 RBA: 0x0002c8.00046ee4.00c8 LEN: 0x0058 VLD: 0x01 CON_UID: 3792595
134801:CHANGE #1 CON_ID:3 TYP:0 CLS:17 AFN:17 DBA:0x044001c0 OBJ:4294967295 SCN:0x0000000002165410 SEQ:2 OP:5.4 ENC:0 RBL:0 FLG:0x0000

This shows a complete transaction (the previous redo record is the commit (OP:5.4) for the preceding few hundred deletes). I’ve highlighted five lines in the output – the three redo record headers that report the most space, and the two lines that show you how many rows are in the “newed” blocks generated by the op code OP:10.8:

  • Line 24 – LEN = 0x4038 = 16,440 bytes. That’s basically the size of two blocks: the OP:5.1 is the old image (8KB) of block 0x9000696, the OP:10.8 is the new image (also 8KB) of the block
  • Line 30 – LEN = 0x20e8 = 8,424. That’s basically one block image. The OP:10.34 is “clear block 0x09000697”, which requires very little redo, but the OP:5.1 is the old image (8KB) of the block.
  • Line 38 – LEN = 0x3a74 = 14,964 bytes. Again that’s basically the size of two blocks: the OP:5.1 is the old image (8KB) of block 0x9000698, the OP:10.8 is the new image of the block, but it has only 362 rows (of a final 444 rows) in it, so the dump is only about 6.5KB

All the other bits add up to about 1,200 bytes of redo, so in each cycle the compacting activity generates about 40KB of redo in total. Since we end up with 68 filled blocks the total “extra” redo we got as we switched from “coalesce cleanup only” to “coalesce cleanup” should be around 68 * 40KB = 2.65MB, which is pretty close to what we actually saw for the t1_n1 index.

Transactions and Locking

I commented on the presence of the session statistics reporting “rollback changes – undo records applied”, and mentioned the presence of the OP:5.4 records in the redo. These are all pointers to the coalesce command operating as a series of smaller transactions rather than one large transaction, and even before I had started to dump and examine the redo I had added v$rollstat and v$enqueue_stat to my usual snapshot of dynamic performance views. Here are the results from a typical “coalesce cleanup” test for the t1_n1 index.

---------------------------------
Rollback stats
---------------------------------
USN   Ex Size K  HWM K  Opt K      Writes     Gets  Waits Shr Grow Shr K  Act K
----  -- ------  -----  -----      ------     ----  ----- --- ---- ----- ------
   0   0      0      0      0           0        2      0   0    0     0      0
   1   0      0      0      0      382184       85      0   0    0     0      0
   2   0      0      0      0      412930       91      0   0    0     0     61
   3   0      0      0      0      404260       90      0   0    0     0      0
   4   0      0      0      0      413412       91      0   0    0     0   -106
   5 -13  -1792      0      0      341294       99      0   2    0    24    -71
   6   0      0      0      0      355986       81      0   0    0     0      0
   7   0      0      0      0      444054       97      0   0    0     0      0
   8 -12  -1728      0      0      418474      114      1   2    0    24    -45
   9   0      0      0      0      422776       92      0   0    0     0     38
  10   0      0      0      0      426976       94      0   0    0     0      0


----------------------------------
System enqueues
----------------------------------
Type    Requests       Waits     Success      Failed    Wait m/s Reason
----    --------       -----     -------      ------    -------- ------
KI             3           0           3           0           0 contention
CR           250          24         250           0          12 block range reuse ckpt
IS            25           0          25           0           0 contention
TM             8           0           8           0           0 contention
TA             2           0           2           0           0 contention
TX           198           0         198           0           0 contention
US             8           0           8           0           0 contention
HW             8           0           8           0           0 contention
TT             4           0           4           0           0 contention
SJ             2           0           2           0           0 Slave Task Cancel
CU             2           0           2           0           0 contention
OD             1           0           1           0           0 Serializing DDLs

-------------------------------------
System REDO stats
-------------------------------------
Name                                       Value
----                                       -----
redo entries                              27,977
redo size                              8,853,604
undo change vector size                3,967,100

I have 10 undo segments online, and each has received about 400KB of undo, for a total of 4MB – a very good match for the “undo change vector size” reported for the session. (I didn’t get any system generated rollbacks in this run.)

We can also see 198 TX enqueues – which is a very good match for the 196 OP:5.4 that I found in the session’s redo dump for this test.

As I commented in the section on redo generation, to handle the “coalesce cleanup” the session walks the index in leaf block order and (with variations dependent on situations like finding leaf blocks with nothing to delete, or finding leaf blocks that become empty after all the deletes have been done):

  • locks two leaf blocks
  • deletes the dropped rows
  • commits
  • packs the outstanding rows downwards
  • relinks leaf blocks and adjusts branch blocks as necessary
  • commits
  • repeats until end of index

(Note – “cleanup only” seems to lock, delete from, and commit one block at a time, and doesn’t do any of the packing or relinking.)

This strategy has two side-effects. First, though I don’t think it’s documented, it looks as if you could simply kill the process if it started putting too much stress on your system. All that could happen is that the current (small) transaction would be cleaned up, but the rest of the work done up to that moment would persist. I have actually tested this, managing to kill sessions while they were in the middle of deleting a batch (though I’ve never managed to catch an example where a session was in the middle of relinking). On restarting the coalesce, Oracle would simply pick up where it had left off after the previous (small) rollback.

The second side effect is another possible overhead. Partitioned tables tend to be big, and each index clean-up is likely to be a fairly big job generating a lot of redo and undo. How much impact could the undo have on your system? This depends in part on what your undo retention looks like, how long the clean-up takes, and the risk of other sessions running into ORA-01555 errors. In particular there’s an odd problem of undo segment extension causing updates to restart – if an update statement (or delete, or merge with update clause) causes an undo segment to extend then the statement will roll back and restart using a different locking strategy.

I don’t know if something of this sort would happen with the deletes from an index coalesce – and each delete transaction is so small that it might not matter anyway – but the coalesce executes a large number of transactions in a relatively short time period, rotating around and gradually filling every available undo segment, which means other large updates are more likely to need to extend an undo segment.

It’s also worth noting that overlapping jobs will sometimes need to do a lot of work to check read (and write) consistency, and to find “upper bound commit” SCNs: if you have a process executing a large number of transactions in a short period of time it can be very expensive for another process to find an “upper bound commit” and you may see it doing a lot of reads of undo segments, reporting a number of “transaction tables consistent read rollbacks” and a large number of “transaction tables consistent reads – undo records applied” in the session stats. (Worst case scenario – the number of transactions could be similar to the number of (used) leaf blocks in the index).
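The statistics to watch in a session you suspect is paying that price would be the following – a sketch against v$mystat for the current session (substitute v$sesstat and a sid to check another session):

select
        sn.name, ms.value
from
        v$statname      sn,
        v$mystat        ms
where
        ms.statistic# = sn.statistic#
and     sn.name like 'transaction tables consistent%'
;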

One final point to consider – the report from the (system level) Enqueue stats showed a handful of TM locks, so I enabled lock tracing for the session to see if any of them came from my session and whether they were likely to be a concurrency threat.

alter session set events 'trace[ksq] disk=medium';

Most of the TM enqueues were from my session, but they appeared only after the coalesce was complete, and they were all related to sys-owned (dictionary) tables:

2022-08-05 19:49:27.902*:ksq.c@9175:ksqgtlctx(): *** TM-00000140-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-08-05 19:49:27.902*:ksq.c@9175:ksqgtlctx(): *** TM-00000061-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-08-05 19:49:27.902*:ksq.c@9175:ksqgtlctx(): *** TM-00000049-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-08-05 19:49:27.902*:ksq.c@9175:ksqgtlctx(): *** TM-00000004-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-08-05 19:49:27.903*:ksq.c@9175:ksqgtlctx(): *** TM-0000004B-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-08-05 19:49:27.903*:ksq.c@9175:ksqgtlctx(): *** TM-00000013-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***

In case you’re wondering, the first TM lock (object 0x140) is the index_orphaned_entry$ table, and Oracle had to lock it at the end of the “coalesce clean up” to delete the rows that linked the three dropped table partitions with the one global index that I had cleaned.

Summary

Deferred global index maintenance means you can drop partitions very quickly with virtually no interruption to service and then have Oracle clean up the related index entries at a later point in time. The drawback to deferring the cleanup is that Oracle will use a row-by-row mechanism to mark all the index entries for the dropped data as deleted – at a cost of a full index scan and about 230 bytes of redo per dropped row per index – compared to a bulk-processing mechanism that can be applied if the index entries are dropped as part of the partition drop processing.

There are basically two mechanisms you can use to clean up the entries that are awaiting deletion:

  • rebuild the index (online, probably)
  • use one of the special coalesce calls on the index

You can best spread the workload and minimise interference with your normal work by micro-managing the clean up, explicitly issuing whichever “alter index coalesce” or “alter index rebuild” command you prefer for each index in turn that needs to be cleaned up.

None of the coalesce command variants returns space to the tablespace; nor do they even drop the highwater mark on the index segment. If you want to return space to the tablespace you will need to execute a “shrink space” on the index after the coalesce is complete. All the coalesce options generate a very large amount of redo (230 bytes per index entry for each delete, plus a volume that may be several times larger than the actual volume of data in the index to compact the remaining rows into the smallest number of blocks).
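So if returning space to the tablespace matters, the sequence would be something like the following sketch – note that “shrink space compact” is the variant that compacts without dropping the highwater mark, while a plain “shrink space” does both:

alter index t1_n1 coalesce cleanup;
alter index t1_n1 shrink space;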

The rebuild (online) will generate a much smaller volume of redo – but the penalties include the problem of journalling and applying the changes that took place as the rebuild was running; plus the cost of scanning the table and sorting the data to produce the index.

Footnote

My tests are basically the worst case scenario (every leaf block has some entries to be deleted) and very close to best case scenario (leaf blocks will either become completely empty on deletion, or will have no rows deleted).

Hinting

Tue, 2022-07-26 07:05

This is just a lightweight note on the risks of hinting (which might also apply occasionally to SQL Plan Baselines). I’ve just rediscovered a little script I wrote (or possibly last tested/edited) in 2007 with a solution to the problem of how to structure a query to use an index fast full scan (index_ffs) followed by a “table access by rowid” – a path which is not available to the optimizer for select statements (even when hinted, though it became available for deletes and updates in 12c).

It’s possible that this method was something I designed for a client using 9i, but the code still behaves as expected in 11.1.0.7. Here’s the setup and query:

rem
rem     Script:         wildcard.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Nov 2007
rem
rem     Last tested
rem             11.1.0.7
rem

create table t1
as
select
        cast(dbms_random.string('a',8) as varchar2(8))  str,
        rpad('x',100)                                   padding
from
        all_objects
where
        rownum <= 10000
;

alter table t1 modify str not null;
create index t1_i1 on t1(str);

begin
        dbms_stats.gather_table_stats(
                user, 't1', 
                cascade => true,
                method_opt => 'for all columns size 1'
        );
end;
/

explain plan for
select  
        /*+ 
                qb_name(main) 
                unnest(@subq1)
                leading(@sel$2fc9d0fe t1@subq1 t1@main)
                index_ffs(@sel$2fc9d0fe t1@subq1(t1.str))
                use_nl(@sel$2fc9d0fe t1@main)
                rowid(@sel$2fc9d0fe t1@main)
        */
        * 
from    t1 
where   rowid in (
                select  /*+ qb_name(subq1) */
                        rowid 
                from    t1 
                where   upper(str) like '%CHD%'
)
;

select * from table(dbms_xplan.display(format=>'outline alias'));

As you can see, I’ve got an IN subquery (query block subq1) to generate a list of rowids from the table for the rows that match my predicate and then my main query (query block main) selects the rows identified by that list.

I’ve added hints to the main query block to unnest the subquery (which will result in a new query block appearing) then do a nested loop from the t1 referenced in subq1 (t1@subq1) to the t1 referenced in main (t1@main), starting with an index fast full scan of t1@subq1 and accessing t1@main by rowid.

The unnest hint was actually redundant – unnesting happened automatically and uncosted. You’ll notice all the other hints are directed at a query block called sel$2fc9d0fe which is the resulting query block name when subq1 is unnested into main.

Here’s the resulting execution plan showing, amongst other details in the Outline Data, that this really was running on 11.1.0.7

Plan hash value: 1953350015

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |   500 | 65500 |   509   (0)| 00:00:07 |
|   1 |  NESTED LOOPS               |       |   500 | 65500 |   509   (0)| 00:00:07 |
|*  2 |   INDEX FAST FULL SCAN      | T1_I1 |   500 | 10500 |     9   (0)| 00:00:01 |
|   3 |   TABLE ACCESS BY USER ROWID| T1    |     1 |   110 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$2FC9D0FE
   2 - SEL$2FC9D0FE / T1@SUBQ1
   3 - SEL$2FC9D0FE / T1@MAIN

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      USE_NL(@"SEL$2FC9D0FE" "T1"@"MAIN")
      LEADING(@"SEL$2FC9D0FE" "T1"@"SUBQ1" "T1"@"MAIN")
      ROWID(@"SEL$2FC9D0FE" "T1"@"MAIN")
      INDEX_FFS(@"SEL$2FC9D0FE" "T1"@"SUBQ1" ("T1"."STR"))
      OUTLINE(@"SUBQ1")
      OUTLINE(@"MAIN")
      UNNEST(@"SUBQ1")
      OUTLINE_LEAF(@"SEL$2FC9D0FE")
      ALL_ROWS
      DB_VERSION('11.1.0.7')
      OPTIMIZER_FEATURES_ENABLE('11.1.0.7')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(UPPER("STR") LIKE '%CHD%')

Running the test under 19.11.0.0 (and adding the hint_report option to the dbms_xplan format) this is the resulting plan:

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   500 | 55000 |    47   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |   500 | 55000 |    47   (0)| 00:00:01 |
--------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$48592A03 / T1@MAIN

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      FULL(@"SEL$48592A03" "T1"@"MAIN")
      OUTLINE(@"SUBQ1")
      OUTLINE(@"MAIN")
      ELIMINATE_SQ(@"SUBQ1")
      OUTLINE_LEAF(@"SEL$48592A03")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(UPPER("T1"."STR") LIKE '%CHD%')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 5 (U - Unused (1), N - Unresolved (4))
---------------------------------------------------------------------------
   0 -  SEL$2FC9D0FE
         N -  index_ffs(@sel$2fc9d0fe t1@subq1(t1.str))
         N -  leading(@sel$2fc9d0fe t1@subq1 t1@main)
         N -  rowid(@sel$2fc9d0fe t1@main)
         N -  use_nl(@sel$2fc9d0fe t1@main)

   0 -  SUBQ1
         U -  unnest(@subq1)

Clearly the plan has changed – but the hint report says that Oracle has NOT ignored my hints; instead it tells us that they cannot be resolved. If we check the Query Block Name / Object Alias list and the Outline Data we see why: there is no query block named @sel$2fc9d0fe and the reason it doesn’t exist is that the optimizer has applied a previously non-existent transformation ‘eliminate_sq’ (which appeared in 12c) to subq1.

So, on the upgrade from 11.1.0.7 to 19.11.0.0 an SQL Plan Baseline that forced the path we wanted would no longer work (though it might be reported as “applied”) because there is a new transformation that we had (necessarily) not been blocking.

The solution is easy: add the hint no_eliminate_sq(@subq1) to our query and try again.

We still get the full tablescan even though the hint report tells us that the optimizer used the new hint. Here’s the new Outline Data, and the Hint Report showing that the hint was used.

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      FULL(@"SEL$8C456B9A" "T1"@"SUBQ1")
      OUTLINE(@"SUBQ1")
      OUTLINE(@"MAIN")
      UNNEST(@"SUBQ1")
      OUTLINE(@"SEL$2FC9D0FE")
      ELIMINATE_JOIN(@"SEL$2FC9D0FE" "T1"@"MAIN")
      OUTLINE_LEAF(@"SEL$8C456B9A")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(UPPER("STR") LIKE '%CHD%')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 7 (U - Unused (4))
---------------------------------------------------------------------------
   0 -  SUBQ1
           -  no_eliminate_sq(@subq1)
           -  qb_name(subq1)

   1 -  SEL$8C456B9A
         U -  leading(@sel$2fc9d0fe t1@subq1 t1@main)
           -  qb_name(main)

   1 -  SEL$8C456B9A / T1@MAIN
         U -  rowid(@sel$2fc9d0fe t1@main)
         U -  use_nl(@sel$2fc9d0fe t1@main)

   1 -  SEL$8C456B9A / T1@SUBQ1
         U -  index_ffs(@sel$2fc9d0fe t1@subq1(t1.str))

But now the Outline Data is showing us a new hint – eliminate_join(@sel$2fc9d0fe t1@main). So we’re not losing the subquery, but we’ve lost the join thanks to a transformation that was actually available in 10.2 but presumably couldn’t be applied to our code pattern until at least 12.1. So let’s try again, adding in another blocking hint: no_eliminate_join(@sel$2fc9d0fe t1@main).

We still get the full tablescan – and this time the Outline Data tells us that the problem hint is now eliminate_join(@sel$2fc9d0fe t1@subq1) – which we might have anticipated, and now address by adding no_eliminate_join(@sel$2fc9d0fe t1@subq1) to the query and having one more go. This finally gets us back to the path that we had previously seen in 11.1.0.7.
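For reference, assembling all the steps, the hint set that finally reproduced the 11.1.0.7 path for me looked like this:

select
        /*+
                qb_name(main)
                no_eliminate_sq(@subq1)
                unnest(@subq1)
                no_eliminate_join(@sel$2fc9d0fe t1@main)
                no_eliminate_join(@sel$2fc9d0fe t1@subq1)
                leading(@sel$2fc9d0fe t1@subq1 t1@main)
                index_ffs(@sel$2fc9d0fe t1@subq1(t1.str))
                use_nl(@sel$2fc9d0fe t1@main)
                rowid(@sel$2fc9d0fe t1@main)
        */
        *
from    t1
where   rowid in (
                select  /*+ qb_name(subq1) */
                        rowid
                from    t1
                where   upper(str) like '%CHD%'
        )
;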

Summary

This note is just another simple demonstration that hints do not guarantee plan stability across upgrades – and then showing that it can take a few experimental steps to discover what’s new in the optimizer that is making your previous set of hints ineffective.

Typically the problem will be the availability of new transformations (or enhancements to existing transformations) which manage to invalidate the old hints before the optimizer has had a chance to consider them.

Trim CPU

Fri, 2022-07-22 00:56

Prompted by an unexpectedly high CPU usage on a hash join of two large data sets, Stefan Koehler posted a poll on twitter recently asking for opinions on the following code fragment:

FROM
    TAB1
INNER JOIN TAB2 ON
    TAB1.COL1 = TAB2.COL1
AND TRIM(TAB1.COL3) > TRIM(TAB2.COL3)

While I struggle to imagine a realistic business requirement for the second predicate and think it’s indicative of a bad data model, I think it is nevertheless quite instructive to use the example to show how a hash join can use a lot of CPU if the join includes a predicate that isn’t on equality.

Trivia

Before examining the potential for wasting CPU, I’ll just point out two problems with using the trim() function in this way – because (while I hope that col3 is a character string in both tables) I’ve seen code that uses “to_date(to_char(date_column))” instead of trunc(date_column):

Cut-n-paste from SQL*Plus:

SQL> select 1 from dual where trim(100) > trim(20);

no rows selected

==================================================================

SQL> alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';

SQL> select d1, d2 from t2 where trim(d1) > trim(d2);

20-jul-2022 15:24:46 19-aug-2022 15:26:44

1 row selected.

SQL> alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

SQL> select d1, d2 from t2 where trim(d1) > trim(d2);

no rows selected

The trim() function converts numerics and dates to strings using the default format for the session before the comparison takes place, so not only can you get unexpected (i.e. wrong) results, two users can get contradictory results from the same data at the same time because they’ve specified different session defaults!

The CPU issue

The critical point that everyone should remember is this: hash joins can only operate on equality (though, to avoid ambiguity, one should point out that “equality” does also mean “not equals”, which is why hash anti-joins can be efficient).

This means that even though the clause “where tab1.col1 = tab2.col1 and tab1.col3 > tab2.col3” might specify the matching rows for an individual tab1 row with high precision and great efficiency for a nested loop join with the right index, a hash join has a completely different workload. Every tab1 row has to have its col3 compared with every tab2 row that matches on col1. The secondary tests multiply up to “n-squared”, and if any col1 value is highly repetitive then the work done on checking col3 becomes excessive.

It’s easier to see this in a worked example, so here’s some sample data:

rem
rem     Script:         trim_cost.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2022
rem
rem     Last tested 
rem             21.3.0.0
rem             19.11.0.0
rem

create table tab1 as select * from all_Objects where owner != 'PUBLIC' and object_type != 'SYNONYM' and rownum <= 200;

create table tab2 as select * from all_Objects where owner != 'PUBLIC' and object_type != 'SYNONYM';

On a new pdb in 19.11 and 21.3 the second statement gave me roughly 46,000 rows. Checking owners and row counts I got the following results:

SQL> select owner, count(*) from tab1 group by owner;

OWNER                      COUNT(*)
------------------------ ----------
SYS                             128
SYSTEM                           65
OUTLN                             7

SQL> select owner, count(*) from tab2 group by owner;

OWNER                      COUNT(*)
------------------------ ----------
SYS                           40104
SYSTEM                          417
OUTLN                             7

... plus about 17 rows aggregating 6,000 rows

And here’s the query (indicating 4 variations) that I’m going to use to demonstrate the CPU issue, followed by its execution plan and rowsource_execution_statistics:

set serveroutput off
alter session set statistics_level = all;

select
        count(*)
from
        tab1
inner join 
        tab2 
on
        tab1.owner = tab2.owner
-- and  trim(tab1.object_name)  > trim(tab2.object_name)
-- and  rtrim(tab1.object_name) > rtrim(tab2.object_name)
-- and  ltrim(tab1.object_name) > ltrim(tab2.object_name)
and     tab1.object_name > tab2.object_name
;

select * from table(dbms_xplan.display_cursor(format=>'projection allstats last'));

SQL_ID  74m49y5av3mpg, child number 0
-------------------------------------
select  count(*) from  tab1 inner join  tab2 on  tab1.owner =
tab2.owner -- and trim(tab1.object_name)  > trim(tab2.object_name) -- and rtrim(tab1.object_name) > rtrim(tab2.object_name) 
-- and ltrim(tab1.object_name) > ltrim(tab2.object_name) and tab1.object_name > tab2.object_name

Plan hash value: 2043035240

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.39 |     942 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.39 |     942 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    101K|    329K|00:00:00.39 |     942 |  1335K|  1335K|  814K (0)|
|   3 |    TABLE ACCESS FULL| TAB1 |      1 |    200 |    200 |00:00:00.01 |       5 |       |       |          |
|   4 |    TABLE ACCESS FULL| TAB2 |      1 |  46014 |  46014 |00:00:00.01 |     937 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("TAB1"."OWNER"="TAB2"."OWNER")
       filter("TAB1"."OBJECT_NAME">"TAB2"."OBJECT_NAME")

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=0) COUNT(*)[22]
   2 - (#keys=1; rowset=407)
   3 - (rowset=256) "TAB1"."OWNER"[VARCHAR2,128], "TAB1"."OBJECT_NAME"[VARCHAR2,128]
   4 - (rowset=256) "TAB2"."OWNER"[VARCHAR2,128], "TAB2"."OBJECT_NAME"[VARCHAR2,128]

Comparing the basic columns, the CPU time recorded at the Hash Join operation was 0.39 seconds, of which only a tiny amount was in the feeding tablescans. There are two things to note from the plan.

First is confirmation of my comments about the join having to be an equality and the inequality being applied later. You can see this in the Predicate Information in the way the user’s predicate list has been split at operation 2 into an access() predicate and a filter() predicate. The access predicate finds the right hash bucket and row(s) within bucket – the filter predicate is applied as a secondary test.

The second point to note is that the Column Projection Information shows us that the basic column values are passed up to the Hash Join, which tells us that the hash join operation has to do the trimming. The big question at that point is – how many times does the same value from the same incoming row get trimmed.

Remember that there are 128 rows in tab1 where owner = ‘SYS’, so when a ‘SYS’ row arrives from tab2 the hash join has to find the right bucket then walk through the rows in that bucket (which will probably be nothing but those SYS rows). So how many times does Oracle evaluate trim(‘SYS’)? Arguably it needs to do so once for each tab1 row in the bucket (though the hash table might have been built to include the trimmed value) but clearly it ought not to re-evaluate it 128 times for the column in the single tab2 row – and we’ll come back to that point later.

Let’s go back to the 3 variants on the first test. We were interested in comparing trim() with trim(), but since trim() is equivalent to ltrim(rtrim()) I wondered if ltrim (left trim) and rtrim (right trim) took different amounts of time, and whether the trim() time would be close to the sum of ltrim() time and rtrim() time.

Without showing the plans etc. here are the times reported at the hash join operation in my 19.11.0.0 test (the 21.3 times were very similar):

  • no trim – 0.39 seconds
  • ltrim() – 1.02 seconds
  • rtrim() – 2.66 seconds
  • trim() – 2.70 seconds.

Clearly that’s a lot of extra CPU on top of the base CPU cost. This is not entirely surprising since string operations tend to be expensive; nevertheless the differences are large enough to be more than random fluctuations and operational error.

Remember that this is just two tables of 200 and 46,000 rows respectively. It turned out that the rowsources that Stefan was using were in the order of 800K and 2M rows – with CPU time increasing from 1,100 seconds to 2,970 seconds because of the trim().

So how many times was the trim() function called in total?

Faking it.

If we assume that the trim() built-in SQL function behaves in the same way as a deterministic PL/SQL function we can at least count the number of calls that take place by writing a deterministic function to put into the SQL. Something like:

create or replace package p1 as
        n1 number;
        function f1(v1 in varchar2) return varchar2 deterministic;
end;
/

create or replace package body p1 as 

        function f1 (v1 in varchar2)
        return varchar2 
        deterministic
        is
        begin
                p1.n1 := p1.n1 + 1;
                return trim(v1);
        end;

end;
/

set serveroutput off
alter session set statistics_level = all;

exec p1.n1 := 0

select
        count(*)
from
    tab1
inner join tab2 on
    tab1.owner = tab2.owner
and     p1.f1(tab1.object_name) > p1.f1(tab2.object_name)
-- and  p1.f1(tab1.object_name) > trim(tab2.object_name)
-- and  trim(tab1.object_name)  > p1.f1(tab2.object_name)
;

select * from table(dbms_xplan.display_cursor(format=>'projection allstats last'));

set serveroutput on
execute dbms_output.put_line(p1.n1);

I’ve created a package with a public variable n1 so that I can set it and read it from “outside”, then I’ve created (and lied about) a function that increments that variable and returns its input, claiming that it’s deterministic. Once I’ve got the package in place I’ve:

  • set the variable to zero
  • run a query that does one of
    • use my function twice
    • use my function once – on the build table
    • use my function once – on the probe table
  • report the execution plan with stats
  • print the value of the variable

The timings are not really important, but here’s the execution plan when I used the function on both sides of the inequality:

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:21.15 |    1513 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:21.15 |    1513 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |  23007 |    329K|00:00:21.13 |    1513 |  1335K|  1335K|  860K (0)|
|   3 |    TABLE ACCESS FULL| TAB1 |      1 |    200 |    200 |00:00:00.01 |       5 |       |       |          |
|   4 |    TABLE ACCESS FULL| TAB2 |      1 |  46014 |  46014 |00:00:00.02 |     937 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("TAB1"."OWNER"="TAB2"."OWNER")
       filter("P1"."F1"("TAB1"."OBJECT_NAME")>"P1"."F1"("TAB2"."OBJECT_NAME"))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=0) COUNT(*)[22]
   2 - (#keys=1)
   3 - (rowset=256) "TAB1"."OWNER"[VARCHAR2,128], "TAB1"."OBJECT_NAME"[VARCHAR2,128]
   4 - "TAB2"."OWNER"[VARCHAR2,128], "TAB2"."OBJECT_NAME"[VARCHAR2,128]

Apart from the change of function name the plan is the same – although it now takes over 21 CPU seconds to complete, of which most of the time is probably spent building and tearing down the PL/SQL stack. The important figure, though, is the number of function calls I saw recorded in p1.n1: it was a little over 10 million calls to generate the 329 thousand rows (A-Rows for the hash join).

When I ran the code with only one call to my deterministic function it was called 5 million times regardless of whether it was used for the build or probe table. Oracle did nothing to minimise the number of times the function was called.

Predictive Mode

Near the start of this note I showed you a little query to aggregate the rows of the two tables by owner; with a little enhancement I can reuse that code to show you exactly how many times the deterministic function was called:

select
        v1.owner, ct1, ct2, ct1 * ct2, sum(ct1 * ct2) over() tot_ct
from
        (select owner, count(object_name) ct1 from tab1 group by owner) v1,
        (select owner, count(object_name) ct2 from tab2 group by owner) v2
where
        v2.owner = v1.owner
/

OWNER                  CT1        CT2    CT1*CT2     TOT_CT
--------------- ---------- ---------- ---------- ----------
SYS                    128      40104    5133312    5160466
SYSTEM                  65        417      27105    5160466
OUTLN                    7          7         49    5160466

3 rows selected.

The number of comparisons done by the filter() predicate was 5,160,466: double it to get the number of function calls. For every single one of the 40,104 SYS rows in tab2 the function was called for every single one of the SYS rows in tab1, for both sides of the inequality.

It’s a shame that Oracle doesn’t calculate and project the “virtual columns” that will be used in the join predicates, because in my case that would have reduced the number of calls from 10 million to 40,232 – a factor of roughly 250. That would probably be worth a lot of CPU to Stefan.

Damage Limitation

For my silly little query that went from 0.39 seconds to 2.70 seconds you might decide there’s no point in trying to improve things – in fact many of the sites I’ve visited probably wouldn’t even notice the CPU wastage (on one call); but when the query runs for 2,970 seconds and a little fiddling around shows that it could run in 1,100 seconds you might be inclined to see if there’s something you could do to improve things.

Andrew Sayer suggested the possibility of rewriting the query with a pair of CTEs (“with” subqueries) that were forced to materialize the trim() in the CTE. The cost of physically creating the two large GTTs might well be much less than the CPU spent on the trim()ed join.
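I don’t have Stefan’s table definitions, but using the column names from the original fragment a sketch of that rewrite might look like this (the materialize hint is undocumented, but widely used to force the CTEs to be instantiated as temporary segments):

with trimmed_tab1 as (
        select  /*+ materialize */
                col1, trim(col3) t_col3
        from    tab1
),
trimmed_tab2 as (
        select  /*+ materialize */
                col1, trim(col3) t_col3
        from    tab2
)
select
        count(*)
from
        trimmed_tab1    tab1
inner join
        trimmed_tab2    tab2
on
        tab1.col1   = tab2.col1
and     tab1.t_col3 > tab2.t_col3
;

With this shape each trim() is called once per row of each table instead of once per comparison made by the hash join.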

Alternatively – and dependent on the ownership and quality of the application – you could write a check constraint on each table to ensure that the column value was always equal to the trim() of the column value.
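For example (constraint names invented for the illustration – and you’d have to be sure the existing data passes the check before you could add them):

alter table tab1 add constraint tab1_col3_trimmed check (col3 = trim(col3));
alter table tab2 add constraint tab2_col3_trimmed check (col3 = trim(col3));

With the constraints in place the trim() calls could simply be removed from the join predicate.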

A similar option would be to add an (invisible) column to each table and use a trigger to populate the column with the trimmed value and then use the trimmed column in the query.
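Again as a sketch only – I’ve had to invent the column and trigger names, and guess at varchar2(32) for the column type:

alter table tab1 add (col3_trimmed varchar2(32) invisible);

create or replace trigger tab1_briu
before insert or update of col3 on tab1
for each row
begin
        :new.col3_trimmed := trim(:new.col3);
end;
/

The query would then compare col3_trimmed values and never need to call trim() at all.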

Conclusion

I don’t think that anything I’ve done or described in this note could be called rocket science (or telescope science as, perhaps, it should be in honour of Webb); but it has shown how much insight you can gain into what Oracle is doing and how you may be able to pin-point excess work using a few simple mechanisms that have been around for more than 15 years.

Pagination cost

Thu, 2022-07-21 09:56

There’s a thread on the MOSC database tuning forum (needs an account) at the moment asking why a “fetch first N” query fetching the next 41 rows with an offset of 8602 rows takes longer to run than the same query when the offset is zero rows. Here’s a possible answer with a little lesson in thinking about what’s going on.

Apart from gremlins in the system there are two possible reasons

  • nothing has changed, but it takes longer to fetch 8643 rows in order and discard 8602 of them than it takes to fetch 41 rows in order and discard none
  • the optimizer has worked out that if it has to fetch 8643 rows then it ought to use a different plan but (as often happens) it was a bad idea to change the plan.

Here’s a little script to build some demo data.

rem
rem     Script:         fetch_first_offset.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2022
rem
rem     Last tested 
rem             21.3.0.0
rem             19.11.0.0
rem

create table t1 
as
select 
        * 
from 
        all_objects
where   rownum <= 50000
order by 
        dbms_random.value
/

create index t1_i1 on t1(object_name);

alter session set statistics_level = all;
set serveroutput off
set feedback off

column owner format a20
column object_type format a12
column object_name format a32

All I’ve done is create a table of 50,000 rows with an order by clause that maximises the randomness of the data pattern so that the index on object_name will have a very high clustering_factor.

Here’s the first query I’m going to run, followed by the execution plan, pulled from memory with rowsource execution stats enabled. I’ve queried for the first 20 rows (offset 0 next 20) ordered by object_name:

select
        owner, object_type, object_name
from
        t1
order by
        object_name
offset 
        0 rows
fetch next 
        20 rows only
/

select * from table(dbms_xplan.display_cursor(format=>'+cost allstats last'));


SQL_ID  fmdb8vuxwkp99, child number 0
-------------------------------------
select  owner, object_type, object_name from  t1 order by  object_name
offset  0 rows fetch next  20 rows only

Plan hash value: 3254925009

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |    23 (100)|     20 |00:00:00.01 |      25 |      2 |
|*  1 |  VIEW                         |       |      1 |     20 |    23   (0)|     20 |00:00:00.01 |      25 |      2 |
|*  2 |   WINDOW NOSORT STOPKEY       |       |      1 |     20 |    23   (0)|     20 |00:00:00.01 |      25 |      2 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1    |      1 |  50000 |    23   (0)|     20 |00:00:00.01 |      25 |      2 |
|   4 |     INDEX FULL SCAN           | T1_I1 |      1 |     20 |     3   (0)|     20 |00:00:00.01 |       5 |      2 |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=20 AND
              "from$_subquery$_002"."rowlimit_$$_rownumber">0))
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "OBJECT_NAME")<=20)

As you can see from the execution plan Oracle has used an index full scan (because that will access the data in exactly the right order and allows the “nosort stopkey” on the “window (no)sort” operation). It has fetched (A-Rows) 20 rows and reported a cost of 23 – which basically corresponds to 3 block visits for the index and one block visit for each row from the table. In passing you’ll notice from the Predicate Information at operation 2 that Oracle has transformed our “fetch first” into an analytic query using row_number() over(). The phrase “syntactic sugar” seems appropriate.
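Based on that predicate information, the transformed statement is equivalent to something like the following sketch – I’ve used an ordinary inline view and column alias in place of Oracle’s generated “from$_subquery$_002” and “rowlimit_$$_rownumber”:

select
        owner, object_type, object_name
from    (
        select
                owner, object_type, object_name,
                row_number() over (order by object_name) rn
        from
                t1
        )
where
        rn <= 20
and     rn >  0
order by
        object_name
;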

How do things change if we ask for the 2nd 20 rows – (offset 20, next 20). I’ll show only the output from dbms_xplan, including its slightly mangled SQL statement but dropping the Predicate Information:

select  owner, object_type, object_name from  t1 order by  object_name
offset  20 rows fetch next  20 rows only

Plan hash value: 3254925009

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |    43 (100)|     20 |00:00:00.01 |      45 |
|*  1 |  VIEW                         |       |      1 |     40 |    43   (0)|     20 |00:00:00.01 |      45 |
|*  2 |   WINDOW NOSORT STOPKEY       |       |      1 |     40 |    43   (0)|     40 |00:00:00.01 |      45 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1    |      1 |  49999 |    43   (0)|     40 |00:00:00.01 |      45 |
|   4 |     INDEX FULL SCAN           | T1_I1 |      1 |     40 |     3   (0)|     40 |00:00:00.01 |       5 |
--------------------------------------------------------------------------------------------------------------

As you can see, the optimizer has still decided to use the index full scan, and this time has fetched 40 rows and passed them up the plan until at operation 1 it discards the first 20 rows. The cost (43) is again related to 3 blocks for the index, 40 blocks for 40 rows from the (randomly distributed) table.

What would we see if we added a /*+ full(t1) */ hint to the query to force a tablescan to get the 2nd 20 rows?

select /*+ full(t1) */  owner, object_type, object_name from  t1 order
by  object_name offset  20 rows fetch next  20 rows only

Plan hash value: 2433988517

-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |      1 |        |   947 (100)|     20 |00:00:00.01 |     996 |       |       |          |
|*  1 |  VIEW                    |      |      1 |     40 |   947   (1)|     20 |00:00:00.01 |     996 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK|      |      1 |  50000 |   947   (1)|     40 |00:00:00.01 |     996 | 15360 | 15360 |14336  (0)|
|   3 |    TABLE ACCESS FULL     | T1   |      1 |  50000 |   278   (1)|  50000 |00:00:00.01 |     996 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------

Oracle has obeyed the hint, and the tablescan has fetched all 50,000 rows from the table and sorted them. Fortunately the optimizer knows that it needs only the top 40 rows so it has been discarding rows as it sorts, hence the appearance of the “pushed rank” in the “window sort” at operation 2; we haven’t had to create a sorted list of all 50,000 rows before picking the top 40. Again, once we’ve got the top 40 we discard the top 20 to allow for the offset.

We note that the cost of the tablescan was 278 but the cost of the sort was really rather large, taking the total cost of this path to 947. So here’s a thought experiment – what’s likely to happen if we ask for an offset of 940 and next 20? Given the costs we’ve seen for the indexed access path the optimizer will calculate a cost of 3 (plus a bit, maybe) for the index and a cost of 960 for visiting the table giving a total cost of about 963 – which should make the tablescan strategy the lower cost.

select  owner, object_type, object_name from  t1 order by  object_name
offset  940 rows fetch next  20 rows only

Plan hash value: 2433988517

-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |      1 |        |   947 (100)|     20 |00:00:00.02 |     996 |       |       |          |
|*  1 |  VIEW                    |      |      1 |    960 |   947   (1)|     20 |00:00:00.02 |     996 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK|      |      1 |  50000 |   947   (1)|    960 |00:00:00.02 |     996 |   267K|   267K|  237K (0)|
|   3 |    TABLE ACCESS FULL     | T1   |      1 |  50000 |   278   (1)|  50000 |00:00:00.01 |     996 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------

My guesswork about the cost seems to have been nearly right. Unhinted, with an offset of 940 (which you can see as the 960 rows fetched) the optimizer has decided that the tablescan path has a lower cost than the indexed access.

Of course we ought to check this by hinting the indexed access path and seeing what its cost is:

select  /*+ index(t1) */  owner, object_type, object_name from  t1
order by  object_name offset  940 rows fetch next  20 rows only

Plan hash value: 3254925009

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |   969 (100)|     20 |00:00:00.01 |     968 |      4 |
|*  1 |  VIEW                         |       |      1 |    960 |   969   (1)|     20 |00:00:00.01 |     968 |      4 |
|*  2 |   WINDOW NOSORT STOPKEY       |       |      1 |    960 |   969   (1)|    960 |00:00:00.01 |     968 |      4 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T1    |      1 |    960 |   969   (1)|    960 |00:00:00.01 |     968 |      4 |
|   4 |     INDEX FULL SCAN           | T1_I1 |      1 |  50000 |     9   (0)|    960 |00:00:00.01 |       9 |      4 |
-----------------------------------------------------------------------------------------------------------------------

The cost of the indexed access path is 969 – that’s 960 for the randomly scattered table rows we need plus 9 for the index scan (because at 960 index entries we’re going to visit a few more index leaf blocks than the original 3).

Summing Up

I’ve demonstrated with a simple query using “offset N rows fetch M rows only” that the optimizer will calculate the cost of fetching “N + M” rows using whatever paths are available, then pick the lowest cost path.

As you might expect, the presence of a suitable index will encourage the optimizer to walk the index in order jumping randomly around the table to avoid the cost of acquiring all the relevant data and sorting it. So for “small” values of “offset + next” Oracle might choose an indexed access path with “window nosort stopkey”, but for “large” values of “offset + next” it might choose to do a full tablescan with “window sort pushed rank”.

The consequence of this is that – in the same way we see the optimizer switching between nested loops and hash joins at the wrong moment – we may see the optimizer switch from an indexed access path to a full tablescan either too soon, or too late.

Answering the question

Why did the query with an offset of 8602 take so much longer than the query with an offset of zero when the next was only 41 rows?

It may be that the optimizer stuck with an indexed access path and had to do physical reads of 8,643 blocks when it should have switched to a tablescan.

It may be that the optimizer switched to a tablescan and sort when it should have stuck with using an index on well-clustered, well-cached, data.

As so often happens, the first step to answering an SQL performance question is to look at the actual execution plans.

LOB space

Wed, 2022-07-20 11:07

There’s a thread on the Oracle developer forum at present asking why calls to dbms_lob.getlength() and calls to dbms_space.space_usage() produce such different results for the storage used by a LOB column.

It’s a really good question to demonstrate two points. First that it’s hard to supply all the details that are needed when you ask a question; second that it’s hard to write well about even such a tiny topic. The source of the second problem answers the first problem – there’s a fairly large decision tree to consider:

  1. Is the LOB defined as enable or disable storage in row?
  2. Is it a BLOB or CLOB?
  3. Is compression enabled ?
  4. Is deduplication enabled ?
  5. Is it a basicfile or securefile LOB?
    1. Is there a chunk declaration
    2. Is the LOB undo limit set by pctversion or retention
    3. Have multiple freepools been declared

The answer to question 1 means the difference between a tiny LOB taking a few bytes in the table segment (enable) or a whole chunk in the LOB segment (disable).

The answer to question 2 is important for two points – (1) CLOBs are stored using a two-byte fixed width character set, which means they may take much more space than you might be expecting, and (2) the getlength() function reports characters not bytes.

Questions 3 and 4 result in similar space estimation errors: getlength() will report the decompressed length for every (logical) copy of a LOB value, so summing it across the table could be over-reporting quite dramatically.

Question 5.1 and 5.2 ought to start with a check of whether the LOB has been stored in a tablespace using (“legacy”) freelist management or (“new”) automatic segment space management because that affects whether or not it’s even possible to use securefiles, and it affects whether or not basicfiles can use the retention option.

Question 5.1: This applies only to basicfile LOBs because securefile LOBs ignore the chunk parameter. The dbms_space.space_usage() procedures have many “out” parameters with the word “blocks” in their names – but the code uses these parameters to report chunks, not blocks. So if you’ve set the LOB chunk size to something other than default the result will need to be scaled up to get the block count (or you could just use the “bytes” values and divide by the LOB segment’s block size).

Question 5.2: This gets messy, because pctversion is silently ignored by securefiles but used by basicfiles. On the other hand retention can be used for securefiles or basicfiles (assuming you have automatic undo management enabled), but its usage with basicfiles doesn’t match its usage with securefiles. Moreover if you include both retention and pctversion in your table declaration then, for both basicfiles and securefiles, Oracle raises the error “ORA-32600: RETENTION and PCTVERSION cannot be used together” (so Oracle is not quite ignoring pctversion for securefiles). (It may be worth mentioning that if you use export/import to upgrade your database you may find that this “spontaneously” changes a basicfile lob to a securefile lob.)

Question 5.3: freepools is another parameter that is silently ignored for securefiles but can have a significant impact on the way that basicfiles reuse space from old copies of LOB values hence on the amount of allocated but unused space.

Underneath all these, of course, is the question “which version of Oracle”, because that affects the default for the choice between securefile or basicfile (though no-one ought to be using a version of Oracle that still has basicfile as the default). There’s also a significant variation with version of when a blob goes “out of row”. And then there’s the question of which overloads of dbms_space.space_usage() are available and which is the correct one to be used.

I think you can appreciate from the above how easy it might be to answer a “simple” question like “why do these two numbers differ” if the question included all the relevant details, but very hard to give a useful answer if the question simply quoted the results of unspecified calls to a couple of procedures or queries.

Answering the question

This user is running 11.2.0.4, has a basicfile BLOB, stored in an ASSM tablespace, no compression, not deduplicated, with the default 8KB chunk size, and with enable storage in row. So:

  • it’s a BLOB so getlength() will report the byte count and we don’t have to cater for 2-byte characters
  • it’s set to enable storage in row, so we have to ignore BLOBs with lengths up to 3,898 bytes (for 11.2.0.4 – it would be 3,964 for basicfile blobs in 19c.)
  • it’s basicfile so will store 8,132 bytes per lob segment block. (securefile would use 8,060 – these figures don’t seem to have changed between 11g and 19c).
  • it’s stored under ASSM so there will be an overhead of just under 1% for segment space metadata when the segment is very large (one bitmap block out of each 128).
  • we can infer that the chunk size is 8KB because we have a report that shows “Full blocks” x 8KB = “Full bytes” – so we could be “losing” an average of 4KB per LOB. (Note, all blocks in a LOB segment are Full or Unformatted, and the metadata blocks aren’t reported by the dbms_space.space_usage() procedures)
  • we might guess that the LOB is using the default pctversion 10, but if it’s using the retention then the undo_retention time can make a huge difference to the amount of space holding “deleted but retained” lobs.

So, for the OP, a first stab at the code to estimate the expected number of blocks in the LOB segment would be:

with blob_lengths as (
        select  dbms_lob.getlength(b1) blob_len
        from    t1
),
blobs_ool as (
        select  blob_len
        from    blob_lengths
        where   blob_len > 3898  -- 11g boundary
)
select
        sum(ceil(blob_len/8132)) blob_blocks
from
        blobs_ool
/

You’ll notice I’ve deliberately used the label “blob” everywhere in the hope that this will be a reminder that the code won’t apply to CLOB columns with their 2-byte fixed-width character set.

Starting with this figure we then need to consider the “undo” for LOBs so (assuming the OP has created a basicfile with the default pctversion (i.e. 10) allowing 10% of the total space to be old copies of the BLOB values) the number of blocks has to be multiplied by 10/9; then, since the metadata for the segment will take about 1% of the segment’s blocks, multiply by 101/100. So scale the query result up by 101/90 to get the “expected” storage requirement.
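Folding both scaling factors into the original query gives a sketch of the final estimate:

with blob_lengths as (
        select  dbms_lob.getlength(b1) blob_len
        from    t1
),
blobs_ool as (
        select  blob_len
        from    blob_lengths
        where   blob_len > 3898  -- 11g boundary
)
select
        round(
                sum(ceil(blob_len/8132)) * (10/9) * (101/100)
        )       estimated_blocks
from
        blobs_ool
/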

To compare this estimate with results from dbms_space.space_usage() we need to pick the right overload of procedures; two of them are for securefile lob segments only, the third (initially the only option and the one we want now) was generically for ASSM segments.

I’ve published a sample of one of the securefile options here, and the generic ASSM code that we need is part of the script here. Our estimate (excluding the 1% metadata) should be a reasonable match for the “Full blocks” reported by this procedure call.
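For completeness, here’s a sketch of a call to that generic ASSM overload – remember to “set serveroutput on” first, and note that the segment name below is a placeholder you’d have to look up in user_lobs.segment_name:

declare
        v_unformatted_blocks    number;
        v_unformatted_bytes     number;
        v_fs1_blocks            number;
        v_fs1_bytes             number;
        v_fs2_blocks            number;
        v_fs2_bytes             number;
        v_fs3_blocks            number;
        v_fs3_bytes             number;
        v_fs4_blocks            number;
        v_fs4_bytes             number;
        v_full_blocks           number;
        v_full_bytes            number;
begin
        dbms_space.space_usage(
                segment_owner           => user,
                segment_name            => 'SYS_LOB0000012345C00002$$',  -- placeholder
                segment_type            => 'LOB',
                unformatted_blocks      => v_unformatted_blocks,
                unformatted_bytes       => v_unformatted_bytes,
                fs1_blocks              => v_fs1_blocks,
                fs1_bytes               => v_fs1_bytes,
                fs2_blocks              => v_fs2_blocks,
                fs2_bytes               => v_fs2_bytes,
                fs3_blocks              => v_fs3_blocks,
                fs3_bytes               => v_fs3_bytes,
                fs4_blocks              => v_fs4_blocks,
                fs4_bytes               => v_fs4_bytes,
                full_blocks             => v_full_blocks,
                full_bytes              => v_full_bytes
        );
        -- for a LOB segment the "blocks" values are reporting chunks
        dbms_output.put_line('Unformatted (chunks): ' || v_unformatted_blocks);
        dbms_output.put_line('Full (chunks)       : ' || v_full_blocks);
end;
/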

Since this is a basicfile LOB (and especially since it’s on 11.2.0.4) we may find a massive discrepancy due to some serious defects in the internal code which can be triggered by the way the LOB is used – in which case it’s worth reading the series I wrote 5 years ago modelling a problem (mostly about time spent, in fact) a client had with basicfile LOB deletion.

Summary Points

You shouldn’t be using basicfile LOBs on any recent version of Oracle

Check exactly when your type of LOB goes “out of row”, and remember the difference between Blobs and Clobs when it comes to using length() or getlength() on them.

When summing lengths remember to ignore LOBs that will (probably) be stored “in row”, and don’t forget that even a couple of bytes stored “out of row” require a full “chunk” (not “block”).

The “undo data” for a LOB value is simply the previous copy of the value, left in place. The total volume of “undo” can be limited by pctversion for basicfile lobs (but is silently ignored by securefiles). Both types of lob will obey retention but the only (implicit) option for basicfiles is “auto” which means “the same as undo_retention” and requires automatic undo management to be enabled. This setting for retention can result in a very large amount of old lob data being kept.

If you’re using basicfile LOBs (especially with older versions of Oracle) the mechanism for re-using expired LOB values has some defects that can result in catastrophic behaviour, introducing performance and space problems.

drop t/s bug

Mon, 2022-07-18 03:17

A recent thread on the MOS database admin forum (needs an account) demonstrated a number of little issues with bugs, debugging, complex syntax, and the never-ending list of “not quite complete” code that shows up when features (in this case LOBs and partitioning) collide.

It’s a silly little thing, but one to remind you that you always have to ask “What have I forgotten?”, “Does my test suite include every case I need to test?”

In this case we had been given a model where, after creating a composite partitioned table with a LOB column using a statement that referenced 3 different tablespaces, the OP had

  1. moved one component of this complex structure to a fourth tablespace
  2. dropped the tablespace that had been defined as the holder of the moved component
  3. renamed the fourth tablespace to match the name of the dropped tablespace
  4. called dbms_metadata.get_ddl() to generate a new table definition.

The result of this was that the generated statement included a reference to a tablespace with the unexpected name of “_$deleted$51$0”.

Sidenote: The name is an example of the value stored in ts$.name when you drop a tablespace. The row in ts$ is not deleted (or marked for deletion), instead ts$.online$ is set to 3 and ts$.name is set to reflect the tablespace number with a name of the format “_$deleted${ts$.ts#}$0”.
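Connected as SYS you can see the effect with a query like this sketch (the escape clause is there only because the underscore is a single-character wildcard in “like”):

select  ts#, name, online$
from    ts$
where   name like '\_$deleted%' escape '\'
or      online$ = 3
;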

Here, with some cosmetic changes to the “create table” statement and with a preamble to create enough tablespaces and quotas, is the model supplied by the OP (if you’re thinking of running it make sure you read it carefully first):

rem
rem     Script:         drop_ts_pt_bug.sql
rem     Author:         Jean-François56 / Jonathan Lewis
rem     Dated:          July 2022
rem
rem     Last tested 
rem             19.11.0.0
rem             11.2.0.4
rem 

connect sys/sys as sysdba

prompt  =================================================
prompt  Clearing the stage - some steps may report errors
prompt  =================================================

drop table test_user.test purge;

drop tablespace jf1 including contents and datafiles;
drop tablespace jf2 including contents and datafiles;
drop tablespace jf3 including contents and datafiles;
drop tablespace jf4 including contents and datafiles;

create tablespace jf1 datafile size 100m;
create tablespace jf2 datafile size 100m;
create tablespace jf3 datafile size 100m;
create tablespace jf4 datafile size 100m;

alter user test_user quota unlimited on jf1;
alter user test_user quota unlimited on jf2;
alter user test_user quota unlimited on jf3;
alter user test_user quota unlimited on jf4;

prompt  =======================
prompt  Connecting to test user
prompt  =======================

connect test_user/test

drop table test purge;

create table test(
       idarchive                number(10,0),
       data                     blob,
       partition_date           date,
       customer                 number(10,0),
       prefix_archive_key       varchar2(5)
)
partition by range (partition_date)
subpartition by list (customer)
(
partition p1 
        values less than (to_date('2008-06-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian'))
        tablespace jf1
        lob (data) store as basicfile (tablespace jf2 chunk 16384)
        (
        subpartition sp values (1) 
                tablespace jf3
                lob (data) store as basicfile (tablespace jf2)
        )
);

alter table test move subpartition sp lob(data) store as (tablespace jf4);


connect sys/sys as sysdba

drop tablespace jf2 including contents and datafiles;
alter tablespace jf4 rename to jf2;


connect test_user/test

set long 20000
set longchunksize 20000

set linesize 132
column text_line format a128

begin
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'PRETTY',             true);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'TABLESPACE',         true);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SEGMENT_ATTRIBUTES', true);
        dbms_metadata.set_transform_param(dbms_metadata.session_transform,'STORAGE',            false);
end;
/

select dbms_metadata.get_ddl('TABLE','TEST') text_line from dual
/


The first thing I did after reading the posting was a quick search on MOS, using the four search terms: drop tablespace rename deleted. It was a lucky choice because on the first page of results from the Knowledge Base I found:

Renaming a Tablespace to An Already Dropped one Changes The Tablespace Name To "_$DELETED" (Doc ID 1937848.1)

Conveniently the notes in this document said: “This is caused by bug 18136584”, and that bug (and the problem) is labelled as “The bug is fixed in 12.2”. Unfortunately the OP was running 11.2.0.4, but that’s okay because the note also said: “Backport is feasible. However, A simple workaround is available.” So I thought I’d create the test case and check the workaround – which is why I’ve got the script.

It just so happened that I prepared, debugged and ran the script (without the workaround) on 19.11 before bothering to start up a VM with 11.2.0.4 – and I got the following output from my call to dbms_metadata.get_ddl():

  CREATE TABLE "TEST_USER"."TEST"
   (	"IDARCHIVE" NUMBER(10,0),
	"DATA" BLOB,
	"PARTITION_DATE" DATE,
	"CUSTOMER" NUMBER(10,0),
	"PREFIX_ARCHIVE_KEY" VARCHAR2(5)
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  TABLESPACE "TEST_8K_ASSM"
 LOB ("DATA") STORE AS SECUREFILE (
  ENABLE STORAGE IN ROW CHUNK 8192
  NOCACHE LOGGING  NOCOMPRESS  KEEP_DUPLICATES )
  PARTITION BY RANGE ("PARTITION_DATE")
  SUBPARTITION BY LIST ("CUSTOMER")
 (PARTITION "P1"  VALUES LESS THAN (TO_DATE(' 2008-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  TABLESPACE "JF1"
 LOB ("DATA") STORE AS BASICFILE (
  TABLESPACE "_$deleted$36$0" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
  NOCACHE LOGGING )
 ( SUBPARTITION "SP"  VALUES (1) SEGMENT CREATION IMMEDIATE
 LOB ("DATA") STORE AS BASICFILE (
  TABLESPACE "JF2" )
  TABLESPACE "JF3"
 NOCOMPRESS ) )

If you check line 18 (highlighted) you’ll see that even in 19.11 you can end up with generated statement that references “deleted” tablespaces – and here’s a funny little side effect (cut-n-paste from SQL*Plus – but your deletion number will probably be different):

SQL> drop table test;
drop table test
           *
ERROR at line 1:
ORA-00959: tablespace '_$deleted$36$0' does not exist

SQL> drop table test purge;

Table dropped.

So (part of) the bug is still present in 19.11; and on the plus side that means I can examine the workaround to see what it is and how it works. This is what the note says: “Execute below command once you finished with dropping and renaming the tablespace”:

alter table <table_name> modify default attributes tablespace <old_tablespace_name>;

That’s not actually going to do anything for us – but it’s an important clue to how we might be able to fix things; it also suggests why the bug “fixed” in 12.2 isn’t quite fixed in 19c – someone missed a bit of the code path: maybe the bit about LOBs, or maybe the bit about composite partitioned tables, or maybe (very precisely) the bit about partition-level default values for LOBs in composite partitioned tables. (And who knows what might be missing if we start looking at index-organized tables, nested table, and other complicated structures.)

Taking the clue from the suggested workaround, here are three possible fixes to try:

alter table test modify default attributes tablespace jf2;

alter table test modify default attributes                  lob(data) (tablespace jf2);

alter table test modify default attributes for partition p1 lob(data) (tablespace jf2);

The third of these options is the one that “works” – and the word is in quote marks because all I mean is that the generated SQL uses JF2 as the tablespace name rather than the “deleted” tablespace name – I make no guarantee about how future behaviour might vary from past behaviour after this change, and it’s likely to depend on exactly how you’re expecting to add and move partitions and subpartitions anyway.

Follow-up

The problem (which means, possibly, the omitted code path) comes from the need for handling default storage clauses. When we’re handling composite partitioning the only segments that come into existence are subpartition segments – but you can still specify physical (and logical) storage information at the table and partition level with the inference (possibly not stated explicitly) that any table-level metadata should be the default metadata for the partition level and any partition-level metadata should be the default metadata for subpartitions. What does this mean in terms of our original table creation script and the subsequent call to dbms_metadata?

You’ll notice that I’ve highlighted lines 8 – 11 in the output above from dbms_metadata.

  • Line 8 references tablespace test_8k_assm: I didn’t include a default tablespace at the table level for the table segments, but that’s the tablespace that happened to be my default tablespace when I ran the script.
  • Lines 9 – 11 define a default LOB storage clause with no specified tablespace and using securefiles (which is the default LOB storage for 19c). Again I didn’t specify anything about a table-level LOB in my original definition.
  • The rest of the generated definition has, apart from the “deleted” tablespace, reproduced my original definition – including the 16KB declaration of chunk size for the partition and the lack of specified chunksize for the subpartition.

So questions to think about:

  • what chunk size is / would be used in the subpartition – is it silently picking up the value specified for the partition, or is it silently picking up the default for the table, or is it simply using the “absolute” default of 1 block?
  • what happens if I execute a simple “add subpartition” on the existing p1 partition? Where will the subpartition be stored and what will its storage details look like?
  • what will I see if I execute a simple “add partition” to add a new partition to the table? Will I also get a physical subpartition and, if so, where will it be and what will its storage clause look like?
  • What would the dbms_metadata output have looked like if I had had a table-level LOB definition that specified tablespace jf2?
  • What side effects might appear if I extended the definition to interval partitioning, with automatic list subpartitions, and inserted a row that needed a new partition and subpartition?!

Underlying all these detailed questions, of course, is the specification for the maintenance work that the DBA is expected to handle, viz:

  • what is the defined strategy for adding new partitions and subpartitions to the table?
  • what is the strategy for aging out old partitions and subpartitions?
  • are there any plans for grouping partitions into “age-related” tablespaces?
  • are tablespaces going to be renamed to transport them to another database?

It’s possible that the anomaly in this note only showed up because the OP was experimenting with options, and maybe the ultimate production code will be based on a strategy that means the anomaly will never appear. It’s possible that the anomaly is already in the production system but only became visible when someone decided to think about archiving out old partitions and the archival code started raising errors. Playing around with models to discover what happens is time well spent; and modelling the full production life cycle before going live is a critical activity.

Some answers

To find out what we’ve actually got from the original create table statement we can query the views (user/all/dba/cdb):

USER_TABLES
USER_PART_TABLES
USER_TAB_PARTITIONS
USER_TAB_SUBPARTITIONS

USER_LOBS
USER_PART_LOBS
USER_LOB_PARTITIONS
USER_LOB_SUBPARTITIONS

USER_SEGMENTS

For the chunk sizes we find: user_lobs.chunk = 8192, user_lob_partitions.chunk = 2 (blocks?) and user_lob_subpartitions.chunk = 16,384 (bytes?). We also have user_part_lobs.def_chunk = 1 (block?). So the explicit p1 partition chunk size has cascaded down to its subpartition.

Side note: who needs consistency! You might also want to remember that when you use dbms_lob.get_length, the “blocks” parameter is actually counting chunks and (for CLOBs) the bytes parameter is actually counting characters.

For the tablespaces we find: user_lobs.tablespace_name = TEST_8K_ASSM, user_lob_partitions.tablespace_name = _$deleted$36$0 and user_lob_subpartitions.tablespace_name = JF2. We also see that user_part_lobs.def_tablespace_name is null.
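
For reference, here’s a sketch of the sort of query that pulls those values together in a single pass (assuming the table name TEST from the original script, and remembering that the units reported in the chunk columns are inconsistent between views):

select  'table default' what, def_chunk chunk, def_tablespace_name tablespace_name
from    user_part_lobs
where   table_name = 'TEST'
union all
select  'lob level', chunk, tablespace_name
from    user_lobs
where   table_name = 'TEST'
union all
select  'partition level', chunk, tablespace_name
from    user_lob_partitions
where   table_name = 'TEST'
union all
select  'subpartition level', chunk, tablespace_name
from    user_lob_subpartitions
where   table_name = 'TEST'
;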

What happens if I try to add a subpartition to the existing p1 partition without first applying the fix:

SQL> alter table test modify partition p1 add subpartition sp2 values(2);
alter table test modify partition p1 add subpartition sp2 values(2)
            *
ERROR at line 1:
ORA-00959: tablespace '_$deleted$36$0' does not exist

Now try again but apply the fix before adding the subpartition:

SQL> alter table test modify default attributes for partition p1 lob(data) (tablespace jf2);

Table altered.

SQL> alter table test modify partition p1 add subpartition sp2 values(2);

Table altered.

Checking the data dictionary for the effects of changing the default attribute we find that user_lob_partitions.tablespace_name is now JF2, which has then been taken on by the new subpartition.

What about adding a new partition:

SQL> alter table test add partition p2 values less than (to_date('01-Jan-2010'));

Table altered.

SQL> select partition_name, chunk , tablespace_name from user_lob_partitions order by partition_name;

PARTITION_NAME              CHUNK TABLESPACE_NAME
---------------------- ---------- ------------------------------
P1                              2 JF2
P2                              1

SQL> select lob_partition_name, subpartition_name, chunk , tablespace_name from user_lob_subpartitions order by 1,2;

LOB_PARTITION_NAME   SUBPARTITION_NAME           CHUNK TABLESPACE_NAME
-------------------- ---------------------- ---------- ------------------------------
SYS_LOB_P20609       SP                          16384 JF2
SYS_LOB_P20609       SP2                         16384 JF2
SYS_LOB_P20622       SYS_SUBP20621                8192 TEST_8K_ASSM

The new partition has no tablespace_name, but it has automatically generated a subpartition (values (default)), which has climbed the tree to the table-level to set the tablespace for the LOB, and that had defaulted to the tablespace of the table itself, which was the user default tablespace of TEST_8K_ASSM. Maybe we should have modified the “default attributes lob(data)” at some point so that the user_part_lobs.def_tablespace_name was not null.

I’ll finish with just one more comment – you’ve seen how messy things can get and how much detail could be overlooked when handling a marginally complex composite partitioned table. Do you really think that interval partitioning and automatic list partitioning are going to mean you don’t have to worry about partition maintenance code? Possibly. If you plan to have one huge tablespace for all the bits and never have to worry about backing up and restoring that tablespace you will be able to forget about all the housekeeping code, but realistically you’ll need to know how to check and change the metadata and rename, move or otherwise manipulate segments, so make sure you know what’s going to happen so that you don’t have to work it out when everyone’s running around in panic mode.

Index Wildcard

Fri, 2022-07-15 05:51

This is a little case study on working around a problem caused by a 3rd-party application that couldn’t be changed quickly and easily. There’s a touch of irony in this case as the problem I’ll address is a side effect of the best possible workaround to a more generic design flaw in the application; moreover, the side effect appears because of a special case optimisation mechanism introduced in 12.2.

A recent post on the MOS database forum (needs an account) came up with a slightly surprising problem – a query with a predicate “columnX like ‘%XYZ%’” would perform reasonably well if hinted to use a B-tree index on columnX (doing an index full scan) but performed badly when cursor_sharing was set to force because it switched to an index range scan which did a lot more work.

Modelling

Here’s a little script to build some data to demonstrate the issue:

rem
rem     Script:         index_wildcard.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2022
rem     Purpose:        
rem
rem     Last tested 
rem             19.11.0.0

create table t1
as
select  obj.*
from
        (select * from all_objects where rownum <= 50000) obj,
        (select rownum from dual connect by level <= 5) gen
/

create index t1_id on t1(object_name);

alter session set cursor_sharing = force;
alter session set statistics_level = all;
alter session set events '10053 trace name context forever';

set serveroutput off

All I’ve done is create a simple heap table with 250,000 rows and an index (which happens to be on a column declared as NOT NULL, but that detail is not important). Here’s my query:

select  /*+ index(t1(object_name)) cursor_sharing_exact */
        t1.*
from
        t1 t1
where
        object_name like '%XYZ%'
/

select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

In normal circumstances the leading wildcard would mean that the optimizer would not be able to use an index driven plan (unless the query satisfied the requirements to allow an index fast full scan), so the default plan would have been a tablescan with a selectivity of 5% (estimated rows = 12,500).

Because I’ve hinted the use of an index I get the following plan from a call to dbms_xplan.display_cursor():

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------
SQL_ID  1q1p6fyubhafu, child number 0
-------------------------------------
select /*+ index(t1(object_name)) cursor_sharing_exact */  t1.* from
t1 t1 where  object_name like '%XYZ%'

Plan hash value: 2798063786

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |      0 |00:00:00.08 |    1718 |   1717 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |  12500 |      0 |00:00:00.08 |    1718 |   1717 |
|*  2 |   INDEX FULL SCAN                   | T1_ID |      1 |  12500 |      0 |00:00:00.08 |    1718 |   1717 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("OBJECT_NAME" LIKE '%XYZ%')

Because I’ve set cursor_sharing to force I’ve included the hint /*+ cursor_sharing_exact */ in the query to make sure that Oracle sees the query with a literal rather than a system generated bind variable. The plan tells us that the optimizer has picked an index full scan, and the Predicate Information for operation 2 tells us that Oracle has checked our predicate for every single entry in the index, and the A-rows column tells us that it didn’t find any matching index entries. So the total work done was 1718 buffer gets and 250,000 index entries tested. (You’ll notice that the E-rows is still the standard 5% guess.)

Now let’s try again, allowing enforced cursor sharing to be applied (i.e. taking out the cursor_sharing_exact hint):

SQL_ID  cm4tjwuqxgbqg, child number 0
-------------------------------------
select /*+ index(t1(object_name)) */  t1.* from  t1 t1 where
object_name like :"SYS_B_0"

Plan hash value: 539998951

-------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |      0 |00:00:00.26 |     241K|
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |  12500 |      0 |00:00:00.26 |     241K|
|*  2 |   INDEX RANGE SCAN                  | T1_ID |      1 |  12500 |    250K|00:00:00.04 |    1718 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("OBJECT_NAME" LIKE :SYS_B_0)
   2 - access("OBJECT_NAME" LIKE :SYS_B_0)

The SQL text shows us that the literal has been replaced by the bind variable :SYS_B_0. Operation 2 has changed from an index full scan to an index range scan – which “feels” as if “the range” should be the whole index and the query shouldn’t, therefore, do any more work than the index full scan plan. But the Buffers column now shows 241K buffer visits at operation 1, the table access! What’s gone wrong?

In the case of the OP the unexpected visit to the table blocks didn’t do just a few hundred thousand “unnecessary” buffer visits, it did 4 million buffer gets and 127,000 disk reads.

Look at the Predicate Information (always): the filter() that matched our original predicate is no longer being applied to the index, it’s only applied at the table. The only predicate applied to the index is an access() predicate – so Oracle takes the bind value and works out that the range scan should start at the beginning of the index and stop at the end of the index – that’s what an access() predicate does, it doesn’t check the actual values in the index entries once it has derived the start and stop locations.

Trouble-shooting

You will have noticed in the initial code to set up the data I had a call to set event 10053 (the CBO trace event) – a fairly quick and easy option to look at in this case since I’m looking at such a simple query. The critical details that showed up in the trace files were:

  • in both cases the optimizer reported “User hint to use this index” in the Base Statistical Information.
  • in the case of the “literal string” SQL the optimizer did not consider an index full scan
  • in the case of the “forced bind” SQL the optimizer did not consider an index range scan

This is worth knowing as it tells us that the switch between paths was not an “accident” of circumstances or statistics, it was a deliberate coding strategy; so we need to discover whether there is something we can do to change the strategy.

Time to check the hidden parameters and fix controls, and this is what I found in a listing of the 19.11.0.0 system fix controls (listed as appearing in 12.2.x.x):

     BUGNO OPTIMIZE SQL_FEATURE                        DESCRIPTION
---------- -------- ---------------------------------- ----------------------------------------------------------------
  20289688 12.2.0.1 QKSFM_ACCESS_PATH_20289688         check for leading wildcard in LIKE with bind

Doesn’t that sound a little bit promising? Let’s turn the fix off by adding the hint /*+ opt_param(‘_fix_control’ ‘20289688:0’) */ to the problem query and see what we get:

SQL_ID  75cxtymdrh8r2, child number 0
-------------------------------------
select /*+   index(t1(object_name))   opt_param('_fix_control'
'20289688:0')  */  t1.* from  t1 t1 where  object_name like :"SYS_B_0"

Plan hash value: 539998951

-------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |       |      1 |        |      0 |00:00:00.08 |    1718 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |  12500 |      0 |00:00:00.08 |    1718 |
|*  2 |   INDEX RANGE SCAN                  | T1_ID |      1 |  12500 |      0 |00:00:00.08 |    1718 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_NAME" LIKE :SYS_B_0)
       filter("OBJECT_NAME" LIKE :SYS_B_0)

We still have an index range scan (with a range that is obviously the whole index), but our predicate is now being applied as a filter() in the index with a total of only 1,718 buffer gets even though the literal has been replaced by a system-generated bind variable.

Summary

The application is accepting a search value from the user interface and constructing “literal string” SQL that adds a leading and trailing wildcard to the value. As a result of this the optimizer will not consider using a particular index unless hinted, and does a very expensive tablescan.

The optimizer will accept and use an index() hint to produce a plan that is much more efficient than the tablescan even though it has to use an index full scan and apply a filter predicate on every index entry.

Unfortunately cursor_sharing has been set to force to deal with the general parsing overheads of applications that construct “literal string” SQL, and when the index() hint is applied to a bind-variable version of this query the optimizer applies an index range scan and postpones the filter() until it has acquired rows from the table.

A check of the system fix controls identified a fix that looked as if it might have some relevance to our situation, “leading wild card, bind variable, LIKE clause”, and a test with an opt_param() hint to switch off that optimizer “fix” had the effect we wanted – the optimizer still reported an index range scan but it also used our initial predicate as a filter() predicate on the index.

Technically, since the fix_control was introduced in 12.2.0.1 we could have got the same effect by setting the optimizer_features_enable back to 12.1.0.2 – but that’s not a good idea when the production version is so much newer, although it might be an acceptable strategy when implemented as a hint for a few special case queries.
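
For the record, the hinted version of that fallback would look something like the following sketch (using the same test query; the optimizer_features_enable hint takes a quoted version string):

select  /*+
                index(t1(object_name))
                optimizer_features_enable('12.1.0.2')
        */
        t1.*
from    t1 t1
where   object_name like '%XYZ%'
/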

After showing that the fix control works by hinting, the next step should be to create an SQL Patch (using dbms_sqldiag.create_sql_patch) attached to the (forced) bind-variable version of the query.
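
I haven’t shown the patch here, but a minimal sketch would look like the following – the patch name is arbitrary and the SQL_ID is the one reported above for the bind-variable version of the query:

declare
        v_patch varchar2(128);
begin
        v_patch := dbms_sqldiag.create_sql_patch(
                sql_id    => 'cm4tjwuqxgbqg',
                hint_text => q'{opt_param('_fix_control' '20289688:0')}',
                name      => 'wildcard_fix_patch'
        );
        dbms_output.put_line('Created patch: ' || v_patch);
end;
/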

Update (a few hours later)

It didn’t take long to find that the fix_control and the optimizer_features_enable were not the only workarounds. There’s a hidden parameter “_optim_peek_user_binds”, with the description “enable peeking of user binds” and a default value of “true”, so if you set this to false the optimizer can’t notice that the critical bind value starts with a wildcard and behaves the way the OP wants. As with the other options you could set this as a system parameter or a session parameter, or as a hint (SQL patch etc.) /*+ opt_param(‘_optim_peek_user_binds’ ‘false’) */ – the last being the kindest for the system.

ORA-00054 pt.2

Fri, 2022-07-08 05:03

This is the follow-up to an initial post that covered some details of using the errorstack and ksq traces as an aid to finding the cause of an intermittent ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired. We were (hypothetically) looking at a scenario where a batch-like process would occasionally fail raising this error, leaving us to deal with an error that we could not reproduce on demand.

Recapping the previous article, we saw that we could set a system-wide call to dump an errorstack at level 1 whenever the error occurred, producing a trace file containing the statement that had raised the error along with its SQL ID and a call stack that would allow us to find the object_id of the specific object that had caused the lock conflict. Once we had the SQL ID we then had the option to set a system-wide call to dump the ksq (Kernel Service Enqueues) trace whenever that statement was executed [but see footnote 1]. Whenever the statement succeeded this would give us a complete listing of the locks (enqueues) needed by the statement, and when the statement failed (due to ORA-00054) we would be able to see very clearly where the breakdown had occurred.

alter system set events '54 trace name errorstack level 1'; 
alter system set events 'trace[ksq][SQL:3afvh3rtqqwyg] disk=highest';

Neither option, however, would tell us anything about the competing session or about the SQL that caused the competing lock to come into existence; all we could hope for was some hint about why the ORA-00054 appeared, with little clue about the part of the application that was causing the conflict.

Using the SystemState

The basic systemstate dump tends to be rather large – for starters, it’s going to include a lot of information about currently open cursors for every (user) session – so it’s not something you really want to make frequent use of, and you don’t want it to be triggered frequently. But if you have an occasional (and critical) batch failure due to an intermittent locking problem then you can issue a call like:

alter system set events '00054 trace name systemstate level 2, lifetime 1';

The “lifetime 1” ensures that a session will only dump a systemstate once in its lifetime – which may be necessary to ensure the system isn’t overloaded by large numbers of systemstate dumps being generated in a very short time interval. You may need to allow for more than just 1 dump per session, though.

In fact, since I want to dump both the errorstack and the systemstate when the ORA-00054 occurs, the critical three lines in my model were as follows:

alter system set events '54 trace name errorstack level 1; name systemstate level 2, lifetime 1';

alter table child enable novalidate constraint chi_fk_par;

alter system set events '54 trace name systemstate off; name errorstack off';

So what do you get if you make this call and then try to re-enable a foreign key constraint when the parent table is locked? In my very small system, with just a couple of live sessions, and shortly after instance startup, my tracefile was about 1.5MB and 16,000 lines in size, so not something to read through without a little filtering.

From the Call Stack Trace produced by the errorstack dump I could see that the first argument to ktaiam (and the associated function calls) was 00001EA61. This told me that I would find at least one session holding a lock identified as TM-0001EA61, so that’s the text I searched for next. Note the little trap: the value reported in the call stack has an extra leading zero. I found the TM enqueue 11,000 lines further down the file in a “State Object (SO:)” of type “DML Lock”:

        SO: 0x9cf0e708, type: DML lock (83), map: 0x9bac7b98
            state: LIVE (0x4532), flags: 0x0
            owner: 0x9cf648d8, proc: 0xa0eed9f0
            link: 0x9cf0e728[0x9cf64948, 0x9cf64948]
            conid: 3, conuid: 3792595, SGA version=(1,0), pg: 0
        SOC: 0x9bac7b98, type: DML lock (83), map: 0x9cf0e708
             state: LIVE (0x99fc), flags: INIT (0x1)
        DML LOCK: tab=125537 flg=11 chi=0
                  his[0]: mod=6 spn=348
2022-07-06 23:34:49.285*:ksq.c@10787:ksqdmc(): Enqueue Dump        (enqueue) TM-0001EA61-00000000-0039DED3-00000000  DID: ksqlkdid: 0001-0029-0000001F

        lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x16
        mode: X, lock_flag: 0x20, lock: 0x9bac7bc0, res: 0x9e7ec5e8
        own: 0xa086dc70, sess: 0xa086dc70, proc: 0xa05617d0, prv: 0x9e7ec5f8
        SGA version=(1,0)

In my case there was only one holder for this lock, but in a live system the same object could be locked by many users.

Having found a state object for the lock I had to identify the process holding this lock – which means searching backwards to find the parent of this state object, then its parent, and so on. Here, in the order I found them (which is the reverse order they appear in the file) are the lines I found:

        SO: 0x9cf0e708, type: DML lock (83), map: 0x9bac7b98
      SO: 0x9cf648d8, type: transaction (85), map: 0x9bbebdc8
      SO: 0x8df2ac88, type: LIBRARY OBJECT LOCK (118), map: 0x63693c60
      SO: 0x8df4fa78, type: LIBRARY OBJECT LOCK (118), map: 0x81af83e8
    SO: 0xa0efc020, type: session (4), map: 0xa086dc70
  SO: 0xa0eed9f0, type: process (2), map: 0xa05617d0

So the DML lock is owned by a transaction, which is owned by a session (with a couple of “library object lock” state objects “in the way”), which is owned by a process. You may find other “extraneous” lines on the way but the key detail to note is the hierarchical pattern of state objects – keep going until you’ve reached the session and process state objects.

Once we’ve got this information we need to search the systemstate dump for any cursors that this session/process has open to see if we can find something that looks like a statement that could have created the lock, so we need to search for state objects of type “LIBRARY OBJECT LOCK” with the correct owner information.

Technically they would have to appear in the trace file between the process state object we’ve found and the next process state object listed in the file, but it would be a little tedious to do this search with a text editor so I switched from using vi to using grep – and here’s a search condition that will identify and print part of each state object owned by this session and process:

grep -B+2 -A+13 "owner: 0xa0efc020, proc: 0xa0eed9f0" or19_ora_19369.trc >temp.txt

The hexadecimal value following “owner: “ is from the session state object, the value following “proc: “ is from the process state object. or19_ora_19369.trc is my trace file, and each time I’ve found a matching line I’ll write the 2 lines before it, the line itself, and 13 lines after it to the file temp.txt.

In my example I found 29 state objects owned by the process, of which 25 were of type “LIBRARY OBJECT LOCK” – and I’ve reported two of them below:

      SO: 0x8df36d38, type: LIBRARY OBJECT LOCK (118), map: 0x7b54afa0
          state: LIVE (0x4532), flags: 0x1
          owner: 0xa0efc020, proc: 0xa0eed9f0
          link: 0x8df36d58[0x8df496c8, 0x8df31608]
          child list count: 0, link: 0x8df36da8[0x8df36da8, 0x8df36da8]
          conid: 3, conuid: 3792595, SGA version=(1,0), pg: 0
      SOC: 0x7b54afa0, type: LIBRARY OBJECT LOCK (118), map: 0x8df36d38
           state: LIVE (0x99fc), flags: INIT (0x1)

      LibraryObjectLock:  Address=0x7b54afa0 Handle=0x82967ac0 Mode=N
        CanBeBrokenCount=1 Incarnation=1 ExecutionCount=1

        User=0xa086dc70 Session=0xa086dc70 ReferenceCount=1
        Flags=CNB/[0001] SavepointNum=155 Time=07/06/2022 23:29:53
      LibraryHandle:  Address=0x82967ac0 Hash=5ce74124 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
        ObjectName:  Name=lock table parent in exclusive mode


      SO: 0x8defb228, type: LIBRARY OBJECT LOCK (118), map: 0x944f8880
          state: LIVE (0x4532), flags: 0x1
          owner: 0xa0efc020, proc: 0xa0eed9f0
          link: 0x8defb248[0x96fbe5e8, 0x8df496c8]
          child list count: 0, link: 0x8defb298[0x8defb298, 0x8defb298]
          conid: 3, conuid: 3792595, SGA version=(1,0), pg: 0
      SOC: 0x944f8880, type: LIBRARY OBJECT LOCK (118), map: 0x8defb228
           state: LIVE (0x99fc), flags: INIT (0x1)

      LibraryObjectLock:  Address=0x944f8880 Handle=0x896af550 Mode=N
        CanBeBrokenCount=1 Incarnation=1 ExecutionCount=0
        Context=0x7f125ecf34b8
        User=0xa086dc70 Session=0xa08664b8 ReferenceCount=1
        Flags=[0000] SavepointNum=0 Time=07/06/2022 23:29:53
      LibraryHandle:  Address=0x896af550 Hash=0 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD
        Name:  Namespace=SQL AREA(00) Type=CURSOR(00) ContainerId=3

Details to note:

  • The last line I’ve selected from the first state object looks like a good candidate SQL statement for creating the blocking lock. The line above it, showing “Hash=5ce74124”, gives us the hexadecimal equivalent of the v$sql.hash_value for this statement (there’s a query to do the conversion after this list).
  • I believe the last line of the second state object is telling us that the associated statement has been flushed from the library cache but I’m not sure that I’m interpreting that correctly. You’ll notice though that the line does give us a suitable namespace and type for something to do with a SQL or PL/SQL cursor (and a hash value of zero – so if it is/was a (PL/)SQL statement that’s the clue that it’s not in memory any more).
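
Converting that hexadecimal hash to something you can use against v$sql is a one-liner – a sketch, using the hash value from the trace, that will find the statement if it is still in the library cache:

select  sql_id, child_number, sql_text
from    v$sql
where   hash_value = to_number('5ce74124','xxxxxxxx')
;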

Comparing these two state objects, the things I want to find with minimal hassle are (I hope) the lines that start with the text “ObjectName” that appear one line after a line holding the text “Hash=” followed by anything but a zero, from state objects labelled “LIBRARY OBJECT LOCK”. Here’s a one-line (wrapped) grep command to do that, followed by the (slightly re-formatted) results I got from my trace file:

grep -B+2 -A+13 "owner: 0xa0efc020, proc: 0xa0eed9f0" or19_ora_19369.trc |
                grep -A+15 "SO:.*LIBRARY OBJECT LOCK" |
                grep -A+1  "Hash=[^0]" |
                grep "ObjectName"


        ObjectName:  Name=lock table parent in exclusive mode 
        ObjectName:  Name=select pctfree_stg, pctused_stg, size_stg,initial_stg, next_stg, minext_stg, maxext_stg,
                     maxsiz_stg, lobret_stg,mintim_stg, pctinc_stg, initra_stg, maxtra_stg, optimal_stg,
                     maxins_stg,frlins_stg, flags_stg, bfp_stg, enc_stg, cmpflag_stg, cmplvl_stg,imcflag_stg, 
                     ccflag_stg, flags2_stg from deferred_stg$  where obj# =:1 

It looks like I’ve got the information I need (or, at least, a good clue) about why my batch session raised an ORA-00054; and, in a real system, the other open cursors reported for this session might give me enough information to work out where the problem is coming from.

Warnings

The first warning is just a reminder that there may have been multiple sessions/processes holding locks on the table, so don’t stop after finding the first occurrence of the TM-xxxxxxxx lock; check to see if there are any more and repeat the search for each one’s owning process and owned Library Object Locks.

The second warning is that all this work may not give you an answer. A session may have locked a table ages ago and still have an active transaction open; if you’re unlucky the statement that produced the lock may have been flushed from the library cache. A comment I made in 2009 about finding the locking SQL is just as relevant here for the systemstate dump immediately after the ORA-00054 as it was when I first wrote about querying v$sql all those years ago. You may get lucky, and this prompt dumping of the systemstate may make you luckier, but there’s no guarantee you’ll find the guilty statement.

Furthermore, the state objects that I’ve been looking at are “LIBRARY OBJECT LOCK” state objects – these are the things that link to a cursor that’s held open by the session (i.e. things you’d see in v$open_cursor) so if a session introduced a table lock then closed the cursor (and hasn’t committed) the table will still be locked but the systemstate won’t have a state object for the statement that locked the table. For example, when I created and executed the following procedure to lock the table using an “execute immediate” I found a state object for the procedure call, but I didn’t find a state object for the “lock table” statement:

create or replace procedure lock_p as
begin
        execute immediate 'lock table PARENT in exclusive mode';
end;
/

On the other hand when I created the procedure with embedded SQL I found state objects for both the procedure call and the SQL statement.

create or replace procedure lock_p as
begin
        LOCK TABLE test_user.PARENT in exclusive mode;
end;
/

In passing, the text of the “ObjectName:” you find for the procedure call varies depending on whether you “execute lock_p” or “call lock_p()” from SQL*Plus. The former shows up as “BEGIN lock_p; END;” and the latter as “call lock_p()”.

Conclusion

If you need to track down the cause of an intermittent locking problem that results in an Oracle error ORA-00054 then enabling a system-wide dump of the systemstate (level 2 is sufficient) on error 54 may allow you to find out what everyone else was doing around the time of the problem.

If you don’t already know which object is the locked object that’s the direct cause of the ORA-00054 then enabling the errorstack trace at the same time will allow you to find the object_id of the object, so that you can then find the processes/sessions that are holding a DML (TM-) lock with the correct id.

For each State Object for the relevant DML lock you can track backwards up the trace file, and then use the address of each pair of session and process state objects to find all of their “open cursor” / “library object lock” state objects, and check the “ObjectName” of each to see the SQL or PL/SQL text. This may give you the information you need to identify where/how the application is going wrong.

Cursors close, and cursors that are still open (but not pinned) can be flushed from the library cache, and a lock may have been placed by a cursor whose text is no longer available, or not part of the systemstate dump, so this method is not perfect – however, since the systemstate dump takes place the instant the error occurs it does improve your chances that the problem statement is still available and reported.

Footnote 1

Trouble-shooting when the problem is not reproducible on demand often puts you in a position where you have to make a trade-off between information gained and overheads required. Dumping an errorstack for every occurrence of an ORA-00054 is probably a small overhead since you (hopefully) don’t generate thousands of locking problems per hour – in the unlikely case that the errors occur very frequently to every session that connects you might be able to limit the overhead by adding the “lifetime” clause to the call, e.g:

alter system set events '00054 trace name errorstack level 1, lifetime 5';

This would result in every session being able to dump the trace only on the first 5 occasions it triggered the error.

On the other hand, I can think of no effective way of limiting the ksq trace (beyond restricting its action to a specific SQL_ID). If the problem statement executes a couple of times in each batch run, and there are only a few batch runs per day, then the overhead will be small when you’re trying to find the details of a problem that happens once a week. But if the problem statement runs thousands of times in each batch run then it would probably be very expensive to enable the ksq trace to catch an intermittent error.

ORA-00054 pt.1

Wed, 2022-07-06 02:20

If you don’t recognise the number the relevant extract from the oraus.msg file is:

00054, 00000, "resource busy and acquire with NOWAIT specified or timeout expired"
// *Cause:  Interested resource is busy.
// *Action: Retry if necessary or increase timeout.

The error is typically the result of application code trying to do some DDL on an object that is locked in an incompatible mode by some other session, and when it shows up in the log from some batch process it can be difficult to find out what was going on at the same time to cause the problem because the error message tells you nothing about the blocker.

Nenad Noveljic has just published a note discussing what you can do to trouble-shoot this type of problem, but I thought I’d write up a note on one of the ways I’d address the problem, in this case starting from a specific question on the Oracle-L list-server.

Statement of problem

A piece of application code disables the foreign key constraints on a table, inserts (using “insert as select” with the /*+ append */ hint) a very large volume of data (tens to hundreds of millions of rows), then executes a pl/sql loop to re-enable, in novalidate mode, all the foreign key constraints on that table.

****Sample code *****
INSERT /*+Append*/ INTO TAB1 (c1,c2.....)
        SELECT (ROWNUM + sqq_key) ,col1, col2, col3..... from....tab2,
tab3, tab4.....;
COMMIT;

FOR I IN (
        SELECT TABLE_NAME, CONSTRAINT_NAME 
        FROM ALL_CONSTRAINTS 
        WHERE TABLE_NAME = v_table_nm 
        AND CONSTRAINT_TYPE = 'R' 
        AND STATUS = 'DISABLED'
) LOOP
   EXECUTE IMMEDIATE ('ALTER TABLE ' || v_table_nm || ' ENABLE NOVALIDATE CONSTRAINT '|| I.CONSTRAINT_NAME);
 END LOOP I;

From time to time one of the calls to re-enable a constraint fails raising ORA-00054, so the OP had set event 54 to do a systemstate dump to see if that would help identify the cause of the error:

alter system set events '54 trace name systemstate level 266, lifetime 1';

Picking through the resulting trace file, though, the OP got the impression that the session was blocking itself, leading to a worry that somehow the “commit;” wasn’t releasing locks properly so that the lock due to the insert was blocking the lock needed for the “enable constraint”.

Trouble-shooting

Reading the key question “is the commit not working properly?” my first thought was “it’s almost guaranteeable that the commit is doing what it’s supposed to do”; and I had no intention of reading a systemstate dump (or reading the bits that had been extracted by someone who had (almost guaranteeably) misinterpreted it).

Where you start trouble-shooting does depend to a degree on how much you already know about what’s going on. The OP, for example, already knew that the error appeared in response to one of the “alter table” commands and was also able to identify which constraint had caused the error to appear – but if you don’t even have that information how do you begin?

Since Oracle is raising an error (and one that probably doesn’t occur very frequently) you could just set the system to dump an errorstack every time the error occurred. (For a repeatable test you might use “alter session”, for a randomly occurring event you might have to “alter system” unless you were able to modify the batch code itself to issue its own “alter session” at the right point.) To minimise the size of the trace file level 1 should suffice, at least to begin with:

alter system set events '54 trace name errorstack level 1';

Here’s the start of the trace information that I produced by setting this event and trying to enable a constraint when I knew the call would be blocked by a competing lock:

dbkedDefDump(): Starting a non-incident diagnostic dump (flags=0x0, level=1, mask=0x0)
----- Error Stack Dump -----
<error barrier> at 0x7ffcdfb0dd20 placed dbkda.c@296
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
----- Current SQL Statement for this session (sql_id=3afvh3rtqqwyg) -----
alter table child enable novalidate constraint chi_fk_par

----- Call Stack Trace -----
calling              call     entry                argument values in hex
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
ksedst1()+95         call     kgdsdst()            7FFCDFB0D180 000000002
                                                   7FFCDFB074B0 ? 7FFCDFB075C8 ?
                                                   000000000 000000082 ?

Key points: the report starts with the SQL that triggered the error and gives us its SQL_ID. Before doing anything else, then, let’s consider what locks might be necessary for the constraint to be enabled (noting, particularly, that in this case the constraint is being enabled with the “novalidate” option). The OP suspects that the problem appears because the session can’t acquire a lock on the child table – but maybe there are other locks involved.

Let’s model the scenario with a parent/child table and referential integrity constraint and see what locks appear as we try to enable the constraint with “novalidate”. But we want to find out what locking goes on when there are no problems with competing sessions and no errors raised, so you’ll note that the errorstack lines in the following script are commented out:

rem
rem     Script:         errorstack.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2022
rem

create table parent (
        id      number(4),
        name    varchar2(10),
        constraint par_pk primary key (id)
)
;

create table child(
        id_p    number(4)       constraint chi_fk_par references parent,
        id      number(4),
        name    varchar2(10),
        constraint chi_pk primary key (id_p, id) 
)
;

alter table child disable constraint chi_fk_par;

pause Press return

-- alter system set events '54 trace name errorstack level 1'; 
alter system set events 'trace[ksq][SQL:3afvh3rtqqwyg] disk=highest';

alter table child enable novalidate constraint chi_fk_par;

alter system set events 'trace[ksq][SQL:3afvh3rtqqwyg] off';
-- alter system set events '54 trace name errorstack off'; 

This is using the “new” trace mechanism, tracing the “ksq” (Kernel Service Enqueues) component of the RDBMS library (“oradebug doc component rdbms”), restricted to tracing only when the current SQL statement has a specific SQL_ID.

If you examine the trace file you’ll find lots of lines referencing the source file ksq.c, with call names like ksqgtlctx (get lock context ?) and ksqcli (clear lock information?). I’m just going to grep out the lines that contain the text “mode=”.
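
The extraction is a one-liner at the command line – the trace file name below is just a stand-in for whatever name your session generated:

grep "mode=" or19_ora_nnnnn.trc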

2022-07-05 10:44:13.642*:ksq.c@9175:ksqgtlctx(): *** TM-0001E8C8-00000000-0039DED3-00000000 mode=4 flags=0x401 why=173 timeout=0 ***
2022-07-05 10:44:13.642*:ksq.c@9175:ksqgtlctx(): *** TM-0001E8C6-00000000-0039DED3-00000000 mode=4 flags=0x401 why=173 timeout=0 ***
2022-07-05 10:44:13.644*:ksq.c@9175:ksqgtlctx(): *** ZH-0001E8C8-00000005-0039DED3-00000000 mode=6 flags=0x10021 why=225 timeout=0 ***
2022-07-05 10:44:13.644*:ksq.c@9175:ksqgtlctx(): *** ZH-0001E8C8-00000006-0039DED3-00000000 mode=6 flags=0x10021 why=225 timeout=0 ***
2022-07-05 10:44:13.646*:ksq.c@9175:ksqgtlctx(): *** ZH-0001E8C8-00000007-0039DED3-00000000 mode=6 flags=0x10021 why=225 timeout=0 ***
2022-07-05 10:44:13.646*:ksq.c@9175:ksqgtlctx(): *** ZH-0001E8C8-00000008-0039DED3-00000000 mode=6 flags=0x10021 why=225 timeout=0 ***
2022-07-05 10:44:13.646*:ksq.c@9175:ksqgtlctx(): *** ZH-0001E8C8-00000009-0039DED3-00000000 mode=6 flags=0x10021 why=225 timeout=0 ***
2022-07-05 10:44:13.647*:ksq.c@9175:ksqgtlctx(): *** ZH-0001E8C8-0000000A-0039DED3-00000000 mode=6 flags=0x10021 why=225 timeout=0 ***
2022-07-05 10:44:13.647*:ksq.c@9175:ksqgtlctx(): *** ZH-0001E8C8-0000000B-0039DED3-00000000 mode=6 flags=0x10021 why=225 timeout=0 ***
2022-07-05 10:44:13.648*:ksq.c@9175:ksqgtlctx(): *** ZH-0001E8C8-0000000C-0039DED3-00000000 mode=6 flags=0x10021 why=225 timeout=0 ***
2022-07-05 10:44:13.649*:ksq.c@9175:ksqgtlctx(): *** TX-00090014-000022C2-0039DED3-00000000 mode=6 flags=0x401 why=176 timeout=0 ***
2022-07-05 10:44:13.649*:ksq.c@9175:ksqgtlctx(): *** TM-00000061-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-07-05 10:44:13.651*:ksq.c@9175:ksqgtlctx(): *** TM-00000049-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-07-05 10:44:13.651*:ksq.c@9175:ksqgtlctx(): *** TM-00000004-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-07-05 10:44:13.652*:ksq.c@9175:ksqgtlctx(): *** TM-0000001F-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-07-05 10:44:13.653*:ksq.c@9175:ksqgtlctx(): *** TM-00000012-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-07-05 10:44:13.654*:ksq.c@9175:ksqgtlctx(): *** RC-00000002-00000012-0039DED3-00000000 mode=4 flags=0x10401 why=294 timeout=5 ***
2022-07-05 10:44:13.654*:ksq.c@9175:ksqgtlctx(): *** RC-00000002-0000001F-0039DED3-00000000 mode=4 flags=0x10401 why=294 timeout=5 ***
2022-07-05 10:44:13.657*:ksq.c@9175:ksqgtlctx(): *** TM-00004887-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-07-05 10:44:13.657*:ksq.c@9175:ksqgtlctx(): *** TM-0001E643-00000000-0039DED3-00000000 mode=3 flags=0x401 why=173 timeout=21474836 ***
2022-07-05 10:44:13.657*:ksq.c@9175:ksqgtlctx(): *** TX-0003001C-0000223A-0039DED3-00000000 mode=6 flags=0x401 why=176 timeout=0 ***

  • The first two locks are table locks in mode 4 (share) on the child and parent tables respectively (The values 0x0001E8C8 and 0x0001E8C6 are the hexadecimal equivalents of the object_ids).
  • The next 8 locks in mode 6 (exclusive) are related to the child table (the same object_id appearing), and a check in v$lock_type tells us they’re something to do with compression.
  • Then we see a TX (transaction) lock in mode 6, 5 TM locks on very low (data dictionary) object_ids in mode 3, two RC (result cache) locks in mode 4.
  • Finally there are two TM locks and a TX lock – and the table locks are for the aud$unified table and one of its partitions.

It’s probably safe to ignore the locking related to the recursive transactions (especially since the locks show non-zero timeouts); I don’t know what the ZH locks are about but the increasing nature of the second component of the lock id suggests that they’re not likely to be a problem (even though they have to be acquired without a timeout).
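
If you want to check what an unfamiliar lock type means, v$lock_type is the place to look – a sketch (the description text varies a little with version):

select  type, name, description
from    v$lock_type
where   type in ('TM','TX','ZH','RC')
order by
        type
;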

The thing that catches my eye is that we have to lock both the child and the parent – and until I did this test I wasn’t certain that for a “novalidate” constraint there would be any need for a data lock on the parent – though a rowcache lock to check for the legality of the constraint definition would make sense.

So maybe the problem isn’t about the child table, possibly it’s about the parent. I’m going to rerun the whole test, enabling the ksq trace and the errorstack, and in the pause that’s in the script I’m going to lock the parent table from another session before enabling the constraint. From the new trace file I’m going to show you more lines about some of the TM locks (which will be for new object_ids since the code drops and recreates tables) and then a few more lines from the error stack.
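
For completeness, the only thing the second session needed to do during the pause was something like:

lock table parent in exclusive mode;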

First the locking information:

2022-07-05 11:20:54.686*:ksq.c@9100:ksqgtlctx(): ksqtgtlctx: PDB mode 
2022-07-05 11:20:54.688*:ksq.c@9175:ksqgtlctx(): *** TM-0001E8D4-00000000-0039DED3-00000000 mode=4 flags=0x401 why=173 timeout=0 ***
...
2022-07-05 11:20:54.688*:ksq.c@9851:ksqgtlctx(): ksqgtlctx: updated lock mode, mode:4 req:0
2022-07-05 11:20:54.688*:ksq.c@9960:ksqgtlctx(): SUCCESS

2022-07-05 11:20:54.689*:ksq.c@9100:ksqgtlctx(): ksqtgtlctx: PDB mode 
2022-07-05 11:20:54.689*:ksq.c@9175:ksqgtlctx(): *** TM-0001E8D2-00000000-0039DED3-00000000 mode=4 flags=0x401 why=173 timeout=0 ***
...
2022-07-05 11:20:54.689*:ksq.c@9001:ksqcmi(): returns 51
2022-07-05 11:20:54.689*:ksq.c@9948:ksqgtlctx(): FAILURE: returns 51

Note how we can see here that it’s the attempt to lock the parent (the lower object_id: 0x0001E8D2) that makes Oracle raise the error. Notice, by the way, that internally it’s raising error 51 (which is “timeout occurred while waiting for a resource”) not error 54.

And here’s a section of the error stack – quite a long way down:

----- Call Stack Trace -----
calling              call     entry                argument values in hex
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
...
ktagetg0()+929       call     ktaiam()             00001E8D2 000000004 000000000
                                                   7FFF26A8BC38 ? 000000000 ?
                                                   000000000 ?
ktagetp_internal()+  call     ktagetg0()           00001E8D2 ? 000000004 ?
141                                                000000004 ? 7FFF26A8BC38 ?
                                                   000000000 ? 000000000 ?
ktagetg_ddlX()+323   call     ktagetp_internal()   00001E8D2 000000000 000000004
                                                   000000000 ? 000000000
                                                   F7A4F57D00000000
ktagetg_ddl()+30     call     ktagetg_ddlX()       00001E8D2 ? 000000000 ?
                                                   000000004 ? 000000000 ?
                                                   000000000 ? 000000000
kkdllk0()+1551       call     ktagetg_ddl()        00001E8D2 ? 000000000 ?
                                                   000000004 ? 000000000 ?
                                                   000000000 ? 000000000 ?

You’ll notice in the list of arguments for these calls (which relate to getting locks for DDL) that the value 00001E8D2 (the object_id) and 000000004 (requested lock mode) keep appearing. So simply setting errorstack to level 1 will give you the SQL statement that caused the ORA-00054, and you will be able to find the object_id that Oracle was unable to lock and the attempted lock mode.
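
Translating the hexadecimal object_id back to an object name is then a simple dictionary query – a sketch using the value from the call stack:

select  owner, object_name, object_type
from    dba_objects
where   object_id = to_number('1E8D2','xxxxxxxx')
;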

Conclusion

It can be a little tricky to track down the source of Oracle error ORA-00054 when it appears unpredictably and cannot be reproduced on demand, but there are two options that help you to get started.

The error is about attempting to lock something – so if you set the system to dump an error stack (even at only level 1) when the error occurs you should be able to find the object_id of the object that is the source of the problem and the lock mode being requested. It’s probably the case that this will be a small overhead when run at the system level since you probably don’t generate lock timeouts very often.

If you are lucky enough to know the SQL ID of the statement that runs into the problem you can enable ksq (lock) tracing for that specific SQL statement and that will make it a lot easier to see exactly which lock attempt failed. If you have no idea of the SQL_ID, then ksq tracing for the whole system will probably be too much of an overhead to leave in place. The benefit of the ksq trace is that if you don’t know what locking your application code needs you will be able to see all the locks involved, and simply knowing what locks are involved may be enough to point you in the right direction.

Note: (which I haven’t tested) if the guilty SQL is called from inside a package, then using the SQL_ID of the package call may result in ksq tracing for every statement call inside the package call, and that might be a bearable overhead.

Footnote 1:

Although this allows us to discover where the locking conflict appeared, it doesn’t tell us what the blocking session did to get in our way. In the next installment I’ll describe how we can drill through the systemstate dump to find out (if the information is still there) what the other session was doing to cause the problem.

Footnote 2:

It’s worth mentioning that in some cases of locking it can be a good idea to use the “wait N” (for a small value of N) option in your code as a wait of a few seconds may allow you to find some clues about blockers in the ASH (v$active_session_history / dba_hist_active_sess_history) information when a timeout occurs. In this specific case, though, I don’t think there’s a variant of the syntax that would allow you to do something like “alter table modify constraint …. wait 5”.

ORA-29857

Mon, 2022-06-20 09:30

This is another little “case study” on solving a problem. It’s based on a question on the MOSC DBA Forum (needs an account) about trying to drop a tablespace and seeing the error “ORA-29857: domain indexes and/or secondary objects exist in the tablespace” when (apparently) there are no domain indexes or secondary objects in the tablespace according to the queries that have been used to try and locate the problem objects.

Not being completely familiar with all the possibilities that might be relevant to that particular error, but having a vague memory of seeing an oddity once before, I thought I’d do a quick check on a couple of possible causes.

  • Case 1, which I assumed ought to raise the error because the tablespace held some objects that met the requirements for the error: I created a table in a tablespace called users and, for the domain indexes, a couple of Context (Oracle Text) indexes in a tablespace called test_8k_assm. Then I tried to drop the tablespace holding the context indexes. Here’s the relevant cut-n-paste (19.11.0.0):
SQL> drop tablespace test_8k_assm including contents and datafiles;
drop tablespace test_8k_assm including contents and datafiles
*
ERROR at line 1:
ORA-29857: domain indexes and/or secondary objects exist in the tablespace

  • Case 2, which I thought would drop the tablespace because the model didn’t meet the requirement of the error message: I swapped the locations of the table and its indexes, creating the table in the tablespace called test_8k_assm and the Context indexes in the tablespace called users, then I tried to drop the tablespace holding only the table. Here’s the relevant cut-n-paste (19.11.0.0):
SQL> drop tablespace test_8k_assm including contents and datafiles;
drop tablespace test_8k_assm including contents and datafiles
*
ERROR at line 1:
ORA-29857: domain indexes and/or secondary objects exist in the tablespace

So, the error message is misleading and, possibly, this might explain why the queries the OP had posted weren’t reporting any guilty objects. Whatever the error message says, the domain indexes that you need to deal with may not be in the tablespace you want to drop.

I also have a question-mark over “secondary objects” – I don’t know what objects (other than the bits and pieces that make up domain indexes) are deemed to be secondary.

Investigating

It’s at this point that the purpose of the blog note starts to appear – I don’t know the answer to the OP’s question and I’ve discovered a gap in my knowledge so I’m willing to spend a little time extending my knowledge and trying to give a little help to the OP.

We can start with the view dba_secondary_objects (based on secobj$ from script dcore.bsq) which tells us that there are two possible sources of secondary objects – index types and statistics types:

SQL> select dbms_metadata.get_ddl('VIEW','DBA_SECONDARY_OBJECTS') view_text from dual;

-- with a little cosmetic tidying:

VIEW_TEXT
-------------------------------------------------------------------------------------------
CREATE OR REPLACE FORCE NONEDITIONABLE VIEW "SYS"."DBA_SECONDARY_OBJECTS" ("INDEX_OWNER", "INDEX_NAME", "SECONDARY_OBJECT_OWNER", "SECONDARY_OBJECT_NAME", "SECONDARY_OBJDATA_TYPE")
AS
select u.name, o.name, u1.name, o1.name, 
       decode(s.spare1, 0, 'FROM INDEXTYPE',1, 'FROM STATISTICS TYPE')
from   sys.user$ u, sys.obj$ o, sys.user$ u1, sys.obj$ o1, sys.secobj$ s
where  s.obj# = o.obj# and o.owner# = u.user# and
       s.secobj# = o1.obj#  and  o1.owner# = u1.user#;

With this view definition I’d be tempted to do a little drilling down, with queries that return increasing amounts of data, and then pick subsets to expand to higher precision, e.g:

select  secondary_objdata_type, count(*)
from    dba_secondary_objects
group by
        secondary_objdata_type
/

Are there any secondary objects that aren’t domain indexes – if there are a few then query the view for just those to find out what they are and who owns them.

select  *
from    dba_secondary_objects
where   secondary_objdata_type = 'FROM STATISTICS TYPE'
/

In fact if the total number of secondary objects was small I might report the details about all of them and try to track them down, or if there was only one schema using that tablespace I might restrict my queries to the subset of secondary objects owned by that schema:

select  distinct index_owner, index_name
from    dba_secondary_objects
where   secondary_object_owner = 'TEST_USER'
;

The OP had a rather larger task than my little model – there were 14,499 segments in the tablespace, and a query for (segment_owner, segment_type, count(*)) returned 332 rows – so had taken a similar approach to mine to try and minimise the amount of work needed, starting with the following queries (which I’ve enhanced slightly and given a cosmetic makeover):

prompt  ==================================
prompt  Domain indexes (in the tablespace)
prompt  ==================================

select  owner, index_name, table_owner, table_name, tablespace_name
from    dba_indexes 
where   index_type = 'DOMAIN'
-- and  tablespace_name = 'TEST_8K_ASSM'        -- error: domain indexes don't have tablespaces
;

prompt  ==========================================
prompt  Domain indexes on tables in the tablespace
prompt  ==========================================

select  t.table_name, t.tablespace_name, i.index_name, i.tablespace_name
from    dba_indexes i, dba_tables t
where   t.table_name = i.table_name
and     t.owner = i.owner
and     i.index_type = 'DOMAIN'
and     t.tablespace_name = 'TEST_8K_ASSM'
;

prompt  ===================================================
prompt  Secondary objects that are tables in the tablespace
prompt  ===================================================

select  index_owner, index_name, secondary_object_owner, secondary_object_name, secondary_objdata_type
from    dba_secondary_objects
where   secondary_object_name in (
                select table_name 
                from dba_tables 
                where tablespace_name = 'TEST_8K_ASSM'
        )
;

prompt  ======================================================
prompt  Indexes in the tablespace that are secondary objects ?
prompt  ======================================================

select  index_owner, index_name, secondary_object_owner, secondary_object_name, secondary_objdata_type
from    dba_secondary_objects
where   index_name in (
                select index_name 
                from dba_indexes 
                where tablespace_name = 'TEST_8K_ASSM'
        )
;

The first query had an error – domain indexes do not report a tablespace_name in dba_indexes, so this query inevitably returned no rows.

The second query would report any domain indexes on tables that were in the given tablespace – which would be useful for the simple case where everything was created in the user’s default tablespace.

The third query reports any tables in the given tablespace that are the physical tables instantiating the “logical” domain index – which is particularly important for my example of the indexes being in the given tablespace when the table itself was in some other tablespace.

The final query is trying to report domain indexes that have corresponding physical objects in the given tablespace – but this isn't going to help (at least with my Context indexes) because the index_name in dba_secondary_objects will be a name that appears in dba_indexes with index_type = 'DOMAIN' and won't have a value stored in the tablespace_name column.

Here’s the output I get (preceded by a full report of the view dba_secondary_objects in my otherwise empty little database) when I had the table in test_8k_assm and the indexes in tablespace users.

SQL> select * from dba_secondary_objects;

INDEX_OWNER     INDEX_NAME           SECONDARY_OBJECT_OWNER           SECONDARY_OBJECT_NAME            SECONDARY_OBJDATA_TY
--------------- -------------------- -------------------------------- -------------------------------- --------------------
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$I                  FROM INDEXTYPE
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$K                  FROM INDEXTYPE
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$N                  FROM INDEXTYPE
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$U                  FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$I                FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$K                FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$N                FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$U                FROM INDEXTYPE

8 rows selected.

==================================
Domain indexes (in the tablespace)
==================================

OWNER           INDEX_NAME           TABLE_OWNER     TABLE_NAME                TABLESPACE_NAME
--------------- -------------------- --------------- ------------------------- ------------------------------
TEST_USER       SH_IX_NAME           TEST_USER       SCRIPT_HANDLES
TEST_USER       SH_IX_HANDLE         TEST_USER       SCRIPT_HANDLES

2 rows selected.

==========================================
Domain indexes on tables in the tablespace
==========================================

TABLE_NAME                TABLESPACE_NAME                INDEX_NAME           TABLESPACE_NAME
------------------------- ------------------------------ -------------------- ------------------------------
SCRIPT_HANDLES            TEST_8K_ASSM                   SH_IX_NAME
SCRIPT_HANDLES            TEST_8K_ASSM                   SH_IX_HANDLE

2 rows selected.

===================================================
Secondary objects that are tables in the tablespace
===================================================

no rows selected

=====================================================
Indexes in the tablespace that have secondary objects
=====================================================

no rows selected

As you can see I have two Context indexes, each represented by 4 secondary tables (and that’s not the complete story because those tables have their own indexes and a couple of them have LOB columns, and one of the secondary tables is actually an index-organized table).
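A quick (hedged) way of seeing that bigger picture for one index is to search dba_objects for the Text DR$ prefix – though note that the LOB indexes and IOT top indexes pick up SYS_ names that won't match the pattern:

select  object_type, object_name, secondary
from    dba_objects
where   owner = 'TEST_USER'
and     object_name like 'DR$SH_IX_NAME%'
order by
        object_type, object_name
/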

And this is the result when the base table is in tablespace users and the indexes are in test_8k_assm:

==================================
Domain indexes (in the tablespace)
==================================

OWNER           INDEX_NAME           TABLE_OWNER     TABLE_NAME                TABLESPACE_NAME
--------------- -------------------- --------------- ------------------------- ------------------------------
TEST_USER       SH_IX_NAME           TEST_USER       SCRIPT_HANDLES
TEST_USER       SH_IX_HANDLE         TEST_USER       SCRIPT_HANDLES

2 rows selected

==========================================
Domain indexes on tables in the tablespace
==========================================

no rows selected

===================================================
Secondary objects that are tables in the tablespace
===================================================

INDEX_OWNER     INDEX_NAME           SECONDARY_OBJECT_OWNER           SECONDARY_OBJECT_NAME            SECONDARY_OBJDATA_TY
--------------- -------------------- -------------------------------- -------------------------------- --------------------
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$I                  FROM INDEXTYPE
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$K                  FROM INDEXTYPE
TEST_USER       SH_IX_NAME           TEST_USER                        DR$SH_IX_NAME$U                  FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$I                FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$K                FROM INDEXTYPE
TEST_USER       SH_IX_HANDLE         TEST_USER                        DR$SH_IX_HANDLE$U                FROM INDEXTYPE

6 rows selected

=====================================================
Indexes in the tablespace that have secondary objects
=====================================================

no rows selected

So one of the middle two queries ought to return some rows if there are any Context indexes in the tablespace or on tables in the tablespace (and for the final option – i.e. the table and its Context indexes both in the tablespace – both queries will return rows). The OP got no rows returned from all four queries.

What next?

It looks like the OP must have some secondary objects that aren't playing by the rules if they're "from index types", or there are some secondary objects that are "from statistics types" and don't get picked up by the queries. There is a view dba_ustats for "user-defined statistics" which has some interesting column names, but at this point I decided to follow a strategy that I've used many times in the past when Oracle raises an unexpected error on a repeatable process: repeat the process with SQL trace enabled and see what Oracle was doing just before it raised the error.
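In outline the steps are trivial – this is just a sketch, using dbms_monitor (setting binds=>true means the trace file will show the value bound to :1 in the recursive SQL):

alter session set tracefile_identifier = 'drop_ts';

execute dbms_monitor.session_trace_enable(binds=>true)

drop tablespace test_8k_assm including contents and datafiles;
-- fails with ORA-29857: that's the moment we want captured in the trace

execute dbms_monitor.session_trace_disable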

Here's the critical line in the trace file I produced, with a couple of the preceding lines:

CLOSE #140708355472384:c=1860,e=1512,dep=1,type=0,tim=31701605233
EXEC #140708354845632:c=68018,e=68006,p=0,cr=1276,cu=0,mis=0,r=0,dep=0,og=1,plh=0,tim=31701605472
ERROR #140708354845632:err=29857 tim=31701605485

A search backwards up the trace file showed that cursor #140708354845632 was for the "drop tablespace" statement. Just above the ERROR and EXEC lines we see cursor #140708355472384 closing. It's reasonably likely (though not 100% certain) that this is the cursor representing the last SQL statement that Oracle executed before raising the error, so searching backwards for "PARSING IN CURSOR #140708355472384" was my next step (and if that statement hadn't looked promising I'd have searched backwards from the ERROR line to find the cursor number in the most recent "EXEC" line). In this case (with my data) the search took me to the following (cosmetically cleaned) SQL:

select null from (
              select o.obj# from obj$ o, tab$ t
              where  t.ts#=:1   and t.obj#=o.obj#    and (decode(bitand(o.flags, 16), 16, 1, 0)=1)
union all     select o.obj# from obj$ o, tabpart$ tp
              where  tp.ts#=:1  and tp.obj#=o.obj#   and (decode(bitand(o.flags, 16), 16, 1, 0)=1)
union all     select o.obj# from obj$ o, tabsubpart$ tsp
              where  tsp.ts#=:1 and tsp.obj#=o.obj#  and (decode(bitand(o.flags, 16), 16, 1, 0)=1)
union all     select o.obj# from obj$ o, ind$ i
              where  i.ts#=:1   and i.obj#=o.obj#    and (decode(bitand(o.flags, 16), 16, 1, 0)=1)
union all     select o.obj# from obj$ o, indpart$ ip
              where  ip.ts#=:1  and ip.obj#=o.obj#   and (decode(bitand(o.flags, 16), 16, 1, 0)=1)
union all     select o.obj# from obj$ o, indsubpart$ isp
              where  isp.ts#=:1 and isp.obj#=o.obj#  and (decode(bitand(o.flags, 16), 16, 1, 0)=1)
union all     select o.obj# from obj$ o, clu$ c
              where  c.ts#=:1   and c.obj#=o.obj#    and (decode(bitand(o.flags, 16), 16, 1, 0)=1)
union all     select i.obj# from ind$ i, tab$ t, tabpart$ tp
              where  tp.ts#=:1  and tp.bo# = t.obj#  and t.obj#=i.bo# and i.type#=9
union all     select i.obj# from ind$ i, tab$ t, tabcompart$ tcp, tabsubpart$ tsp
              where  tsp.ts#=:1 and tsp.pobj#=tcp.obj# and tcp.bo#=t.obj# and t.obj#=i.bo# and i.type#=9
union all     select i.obj# from ind$ i, tab$ t
              where  t.ts#=:1   and t.obj#=i.bo#     and i.type#=9
)

Every single line of the UNION ALL is addressing the same tablespace by number (:1) looking for objects where bit 5 is set in the obj$.flags column (i.e. secondary object) or indexes where the ind$.type# is 9 (“cooperative index method”).

That looks like a good bet for being the code that Oracle uses to check if there are any objects that should block the drop, so I changed the bind variable to the tablespace number (picked up from v$tablespace – see the sketch after the query below), changed the "select null" to "select obj#", and stuck the resulting query block into an IN subquery against dba_objects (this may not be the most efficient of queries):

select 
        secondary, owner, object_name, object_type, object_id
from    dba_objects
where   object_id in (
                select obj# from  (
                                  select o.obj# from obj$ o, tab$ t where t.ts#=6 and t.obj#=o.obj# and (decode(bitand(o.flags, 16), 16, 1, 0)=1)
                        union all ...
                        union all select i.obj# from ind$ i, tab$ t where t.ts#=6 and t.obj#=i.bo# and i.type#=9
                )
        )
order by secondary, owner, object_name
/
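The literal 6 in my version is the ts# of the target tablespace, which came from a query like:

select ts#, name from v$tablespace where name = 'TEST_8K_ASSM';

And here's the output: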

S OWNER           OBJECT_NAME                      OBJECT_TYPE              OBJECT_ID
- --------------- -------------------------------- ----------------------- ----------
N TEST_USER       SH_IX_HANDLE                     INDEX                       123956
N TEST_USER       SH_IX_NAME                       INDEX                       123944
Y TEST_USER       DR$SH_IX_HANDLE$I                TABLE                       123957
Y TEST_USER       DR$SH_IX_HANDLE$K                TABLE                       123960
Y TEST_USER       DR$SH_IX_HANDLE$KD               INDEX                       123966
Y TEST_USER       DR$SH_IX_HANDLE$KR               INDEX                       123967
Y TEST_USER       DR$SH_IX_HANDLE$U                TABLE                       123963
Y TEST_USER       DR$SH_IX_HANDLE$X                INDEX                       123965
Y TEST_USER       DR$SH_IX_NAME$I                  TABLE                       123945
Y TEST_USER       DR$SH_IX_NAME$K                  TABLE                       123948
Y TEST_USER       DR$SH_IX_NAME$KD                 INDEX                       123954
Y TEST_USER       DR$SH_IX_NAME$KR                 INDEX                       123955
Y TEST_USER       DR$SH_IX_NAME$U                  TABLE                       123951
Y TEST_USER       DR$SH_IX_NAME$X                  INDEX                       123953
Y TEST_USER       SYS_C0016528                     INDEX                       123952
Y TEST_USER       SYS_C0016536                     INDEX                       123964
Y TEST_USER       SYS_IL0000123945C00006$$         INDEX                       123947
Y TEST_USER       SYS_IL0000123957C00006$$         INDEX                       123959
Y TEST_USER       SYS_IOT_TOP_123949               INDEX                       123950
Y TEST_USER       SYS_IOT_TOP_123961               INDEX                       123962

20 rows selected.

This is the list I got when the base table and the Context indexes were all created in the tablespace I wanted to drop – notice that the “logical” indexes (the first two items, flagged as secondary=’N’) are reported along with all the physical objects relating to the context indexes.

When I created the indexes in the target tablespace but the base table in an alternative tablespace the query reported only the physical objects making up the Context indexes; when I created the base table in the target tablespace and the Context indexes in an alternative tablespace the query reported only the “logical” indexes.

Conclusion

As a last resort, when the simple queries you find in the manuals or on MOS (or elsewhere on the internet) don't seem to supply all the answers, the strategy of finding and hacking the query that Oracle runs just before raising the error is a good generic method for getting a better handle on what's causing the problem.

In this example of Oracle error ORA-29857 you can get a list of the objects (logical or physical or both) that are blocking a "drop tablespace" command, and that makes it easier to find out what else you need to drop before you can drop the tablespace.

Bear in mind that I’ve just demonstrated the approach while experimenting with a couple of Context indexes, where the information I needed was available through a couple of ordinary data dictionary views – I only dived down into the internals because the data dictionary views weren’t giving the OP a complete answer.

I don't know what was in the OP's database to make this problem happen (it may simply be that the database was version 11.2.0.4 while mine was 19.11.0.0 and there's a bug that's been fixed since 11g), so the OP may find that after they've cleared all the domain indexes indicated by this query they still see ORA-29857, and that there's another query that Oracle uses to check for other conditions (maybe those "from statistics type" secondary objects) that also needs to be hijacked and subverted to complete the job. I know that when I traced a successful "drop tablespace" there were 13 more statements after the one I've shown that looked as if they might be checking for the (non-)existence of other types of object before the tablespace could be dropped.

Case Study

Fri, 2022-06-17 07:25

The question “How do you trouble-shoot a performance problem” came up in an online session I did for the AIOUG (All-India OUG) today. It’s a very broad question and the only possible answers are either extremely generic, or very specific – so here’s a specific example that I talked about to give some idea of the processes I go through. It’s an example from the Oracle-l list server asking the (paraphrased) question:

I've got a parallel query that takes 24 seconds to return 2,500 rows for 2018. The same query for 2019 should return a similar amount of data but consumes a lot of TEMP space before failing; it takes 45 minutes to complete if I remove the parallel hint. The only difference between the two queries is a change to a single predicate: "opclf.year_number = to_number('YYYY')" and the statistics are up to date – what's going on?

The ease of diagnosing a problem depends on the ease of access to all the right information, and you don't always know initially what the right information might be. In this case the OP had started by posting the SQL Monitor reports for the two queries to github; these had been generated automatically since the queries were using parallel execution.
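If you ever need to pull such a report yourself the call is dbms_sqltune.report_sql_monitor() – a sketch (the sql_id is the one that appears in the report below, and the function needs the tuning pack licence):

set long 1000000
set longchunksize 1000000
set linesize 250
set pagesize 0

select  dbms_sqltune.report_sql_monitor(
                sql_id => '3cjg20q2vw41f',
                type   => 'TEXT'
        )
from    dual
/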

Here are the first few sections of the 2019 output:
SQL Text
------------------------------
SELECT /*+ PARALLEL(8) */ DISTINCT rd.document_id AS doc_id ,'firm_id' AS criteria_key ,opf.ultimate_parent_firm_id AS series_id ,period_number FROM ( SELECT /*+ PARALLEL(8) */ DISTINCT rd.document_id ,rd.client_role_id ,( CASE WHEN MONTHS_BETWEEN(TO_DATE('04/28/2022', 'MM/DD/YYYY'), TO_DATE('04/01/2017', 'MM/DD/YYYY')) > 12 THEN TRUNC(dc.date_value, 'YEAR') ELSE TRUNC(dc.date_value, 'MONTH') END ) period_number FROM REPORT_ENGINE.date_code dc ,REPORT_ENGINE.lit_fact_bd rd INNER JOIN
report_engine.firm FIRM ON rd.firm_id = FIRM.firm_id WHERE dc.date_value BETWEEN TO_DATE('04/01/2017', 'MM/DD/YYYY') AND TO_DATE('04/28/2022', 'MM/DD/YYYY') AND rd.publication_date_id = dc.date_id AND rd.year_number = to_number('2019') AND (FIRM.ultimate_parent_firm_id IN (to_number('252094'))) ) rd INNER JOIN report_engine.opposing_counsel op ON rd.client_role_id = op.legal_role_id INNER JOIN report_engine.lit_fact_bd opclf ON opclf.document_id = rd.document_id AND op.opposing_counsel_role_id
= opclf.client_role_id AND opclf.year_number = to_number('2019') INNER JOIN report_engine.firm opf ON opclf.firm_id = opf.firm_id AND opf.firm_id >= 1000 WHERE 1 = 1

Global Information
------------------------------
 Status              :  EXECUTING
 Instance ID         :  1
 Session             :  SYS (647:28741)
 SQL ID              :  3cjg20q2vw41f
 SQL Execution ID    :  16777216
 Execution Started   :  06/09/2022 05:08:24
 First Refresh Time  :  06/09/2022 05:08:25
 Last Refresh Time   :  06/09/2022 05:08:27
 Duration            :  221s
 Module/Action       :  sqlplus@c111dhw (TNS V1-V3)/-
 Service             :  SYS$USERS
 Program             :  sqlplus@c111dhw (TNS V1-V3)

Global Stats
====================================================================
| Elapsed |   Cpu   |    IO    |  Other   | Buffer | Write | Write |
| Time(s) | Time(s) | Waits(s) | Waits(s) |  Gets  | Reqs  | Bytes |
====================================================================
|     222 |     179 |       28 |       15 |    49M | 11624 |   2GB |
====================================================================

Parallel Execution Details (DOP=8 , Servers Allocated=16)
=====================================================================================================================================
|      Name      | Type  | Server# | Elapsed |   Cpu   |    IO    |  Other   | Buffer | Write | Write |         Wait Events         |
|                |       |         | Time(s) | Time(s) | Waits(s) | Waits(s) |  Gets  | Reqs  | Bytes |         (sample #)          |
=====================================================================================================================================
| PX Coordinator | QC    |         |    0.26 |    0.26 |          |     0.00 |  12144 |       |     . |                             |
| p000           | Set 1 |       1 |         |         |          |          |        |       |     . |                             |
| p001           | Set 1 |       2 |         |         |          |          |        |       |     . |                             |
| p002           | Set 1 |       3 |         |         |          |          |        |       |     . |                             |
| p003           | Set 1 |       4 |         |         |          |          |        |       |     . |                             |
| p004           | Set 1 |       5 |         |         |          |          |        |       |     . |                             |
| p005           | Set 1 |       6 |         |         |          |          |        |       |     . |                             |
| p006           | Set 1 |       7 |         |         |          |          |        |       |     . |                             |
| p007           | Set 1 |       8 |         |         |          |          |        |       |     . |                             |
| p008           | Set 2 |       1 |     220 |     177 |       28 |       15 |    48M | 11624 |   2GB | direct path write temp (28) |
| p009           | Set 2 |       2 |         |         |          |          |        |       |     . |                             |
| p010           | Set 2 |       3 |         |         |          |          |        |       |     . |                             |
| p011           | Set 2 |       4 |    1.71 |    1.70 |          |     0.01 |   595K |       |     . |                             |
| p012           | Set 2 |       5 |         |         |          |          |        |       |     . |                             |
| p013           | Set 2 |       6 |         |         |          |          |        |       |     . |                             |
| p014           | Set 2 |       7 |         |         |          |          |        |       |     . |                             |
| p015           | Set 2 |       8 |         |         |          |          |        |       |     . |                             |
=====================================================================================================================================

You'll note that I haven't got as far as the body of the execution plan yet – the line to pick out is in the middle of the summary of activity for the parallel execution processes: the row for server p008. There are 8 servers in each of two sets (we're running parallel 8) and p008 is the first server of the second set. The reason it stands out is that it's the one server that's doing almost all the work – effectively the query (at some point in the plan) is running serially.

So, a first thought: maybe we've just been unlucky, and running parallel 7 (say) would result in a better distribution of data across parallel servers, allowing each of 7 processes to do whatever they had to do to a much smaller amount of data. Maybe a change to the distribution method (pq_distribute() hint) would spread the workload more evenly. In either case the N smaller workloads might still use a lot of TEMP in total, but possibly no individual process would fail, and the job would complete nearly N times as quickly.
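For reference, a distribution hint takes the form in the following sketch – the tables t1 and t2 are hypothetical, and hash/hash is just one of the possible distribution pairs, not a recommendation for this query:

select
        /*+
                parallel(8)
                pq_distribute(t2 hash hash)     -- hash-distribute both inputs of the join to t2
        */
        t1.id, t2.padding
from
        t1, t2
where
        t2.id = t1.id
/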

Note: the 2018 Monitor report showed an equivalent skew in the data distribution, but the amount of work needed was much smaller with a read/write load of only 143MB compared to 2GB for the 2019 report. The OP did try running parallel 7, but with no change to the overall effect.

Let's take a quick glance at the plan body:
SQL Plan Monitoring Details (Plan Hash Value=1862006233)
=========================================================================================================================================================================================================================
| Id    |                     Operation                      |           Name            |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Write | Write | Mem | Temp | Activity |        Activity Detail       |
|       |                                                    |                           | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs  | Bytes |     |      |   (%)     |         (# samples)         |
=========================================================================================================================================================================================================================
|     0 | SELECT STATEMENT                                   |                           |         |      |           |        |     3 |          |       |       |     |      |   |                             |
|     1 |   PX COORDINATOR                                   |                           |         |      |           |        |     3 |          |       |       |     |      |   |                             |
|     2 |    PX SEND QC (RANDOM)                             | :TQ10003                  |     781 |  153 |           |        |       |          |       |       |     |      |   |                             |
|     3 |     HASH UNIQUE                                    |                           |     781 |  153 |           |        |       |          |       |       |     |      |   |                             |
|     4 |      PX RECEIVE                                    |                           |     781 |  151 |           |        |       |          |       |       |     |      |   |                             |
|     5 |       PX SEND HASH                                 | :TQ10002                  |     781 |  151 |           |        |     2 |          |       |       |     |      |   |                             |
|  -> 6 |        BUFFER SORT                                 |                           |     781 |  153 |       219 |     +3 |     2 |        0 | 11624 |   2GB | 61M |   2G |    26.21 | Cpu (26)                     |
|       |                                                    |                           |         |      |           |        |       |          |       |       |     |      |   | direct path write temp (28) |
|  -> 7 |         NESTED LOOPS                               |                           |     781 |  151 |       219 |     +3 |     2 |      39M |       |       |     |      |   |                             |
|  -> 8 |          NESTED LOOPS                              |                           |     781 |  151 |       219 |     +3 |     2 |      39M |       |       |     |      |     0.49 | Cpu (1)                      |
|  -> 9 |           NESTED LOOPS                             |                           |     777 |   44 |       219 |     +3 |     2 |      39M |       |       |     |      |   |                             |
| -> 10 |            NESTED LOOPS                            |                           |      41 |   26 |       219 |     +3 |     2 |     6463 |       |       |     |      |   |                             |
| -> 11 |             HASH JOIN                              |                           |      41 |   21 |       219 |     +3 |     2 |     6463 |       |       |  6M |      |   |                             |
|    12 |              BUFFER SORT                           |                           |         |      |         1 |     +3 |     2 |    36855 |       |       |     |      |   |                             |
|    13 |               PX RECEIVE                           |                           |      87 |   19 |         1 |     +3 |     2 |    36855 |       |       |     |      |   |                             |
|    14 |                PX SEND HASH                        | :TQ10000                  |      87 |   19 |         1 |     +3 |     1 |    38694 |       |       |     |      |   |                             |
|    15 |                 NESTED LOOPS                       |                           |      87 |   19 |         1 |     +3 |     1 |    38694 |       |       |     |      |   |                             |
|    16 |                  NESTED LOOPS                      |                           |      87 |   19 |         1 |     +3 |     1 |    38694 |       |       |     |      |   |                             |
|    17 |                   TABLE ACCESS BY INDEX ROWID      | FIRM                      |       1 |    2 |         1 |     +3 |     1 |       43 |       |       |     |      |   |                             |
|    18 |                    INDEX RANGE SCAN                | FIRM_ULT_PARENT_FIRM_IDX1 |       1 |    1 |         1 |     +3 |     1 |       43 |       |       |     |      |   |                             |
|    19 |                   PARTITION RANGE SINGLE           |                           |         |      |         1 |     +3 |    43 |    38694 |       |       |     |      |   |                             |
|    20 |                    BITMAP CONVERSION TO ROWIDS     |                           |         |      |         1 |     +3 |    43 |    38694 |       |       |     |      |   |                             |
|    21 |                     BITMAP INDEX SINGLE VALUE      | LIT_FACT_BD_IDX09         |         |      |         1 |     +3 |    43 |       49 |       |       |     |      |   |                             |
|    22 |                  TABLE ACCESS BY LOCAL INDEX ROWID | LIT_FACT_BD               |      63 |   19 |         3 |     +1 | 38694 |    38694 |       |       |     |      |     0.49 | Cpu (1)                      |
|    23 |              PX RECEIVE                            |                           |      20 |    2 |         1 |     +3 |     2 |        2 |       |       |     |      |   |                             |
|    24 |               PX SEND HASH                         | :TQ10001                  |      20 |    2 |           |        |       |          |       |       |     |      |   |                             |
|    25 |                PX BLOCK ITERATOR                   |                           |      20 |    2 |           |        |       |          |       |       |     |      |   |                             |
|    26 |                 TABLE ACCESS FULL                  | OPPOSING_COUNSEL          |      20 |    2 |           |        |       |          |       |       |     |      |   |                             |
| -> 27 |             TABLE ACCESS BY INDEX ROWID            | DATE_CODE                 |       1 |      |       219 |     +3 |  6465 |     6463 |       |       |     |      |   |                             |
| -> 28 |              INDEX UNIQUE SCAN                     | PK_DATE_CODE              |       1 |      |       219 |     +3 |  6465 |     6465 |       |       |     |      |   |                             |
| -> 29 |            PARTITION RANGE SINGLE                  |                           |      19 |      |       219 |     +3 |  6465 |      39M |       |       |     |      |   |                             |
| -> 30 |             TABLE ACCESS BY LOCAL INDEX ROWID      | LIT_FACT_BD               |      19 |      |       220 |     +2 |  6465 |      39M |       |       |     |      |    35.92 | Cpu (74)                     |
| -> 31 |              INDEX RANGE SCAN                      | LIT_FACT_BD_IDX20         |       1 |      |       219 |     +3 |  6465 |      39M |       |       |     |      |     9.22 | Cpu (19)                     |
| -> 32 |           INDEX UNIQUE SCAN                        | PK_FIRM                   |       1 |      |       219 |     +3 |   39M |      39M |       |       |     |      |    10.68 | Cpu (22)                     |
| -> 33 |          TABLE ACCESS BY INDEX ROWID               | FIRM                      |       1 |      |       219 |     +3 |   39M |      39M |       |       |     |      |    16.99 | Cpu (35)                     |
===================================================================================================================================================================

You can see from the “->” symbols at the left hand side of the plan that this report was generated while the plan was still running. The thing that leaps out as you glance down the page is the value in the “Rows (Actual)” column at operations 7-9 (which show the rowsources generated by some nested loop joins) and operations 29, 32 and 33 of the plan that tell us something about how those rowsources were generated.

Operation 29 has executed (Execs) 6,465 times so far, producing a total of 39M rows, and operations 32 and 33 have each executed 39M times, producing 39M rows apiece by index unique scan and table access by rowid respectively.

The plan for the 2018 data was similar though the join order for DATE_CODE, LIT_FACT_BD and FIRM was different (and it was the join to LIT_FACT_BD that increased the row count dramatically – so hinting it to be the last table in the join might help a bit), but the largest rowcount for the 2018 query was only 3M rows, not the 39M that had appeared after only 6,465 rows of a possible driving 39,855 in the 2019 query.

So it's not surprising that the query could take much longer for 2019: it's not the volume of output that matters, it's the volume of input – or, more accurately, intermediate – data that has to be handled.

Let’s think about that volume, though: the 2018 plan generated 3M rows and then crunched them down to 2,500 rows and the 2019 plan was supposed to produce a similar sized output (from 39M+ rows). Could we rewrite the query in some way that made it do some intermediate aggregation so that the volume of data to be aggregated was never enormous?

Let's take a look at the plan from the 2018 Monitor report:
SQL Plan Monitoring Details (Plan Hash Value=472871521)
=======================================================================================================================================================================================================================================
| Id |                     Operation                      |           Name            |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  | Write | Write |  Mem  | Temp  | Activity |       Activity Detail       |
|    |                                                    |                           | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Reqs  | Bytes | (Max) | (Max) |   (%)     |        (# samples)         |
=======================================================================================================================================================================================================================================
|  0 | SELECT STATEMENT                                   |                           |         |      |         1 |    +24 |    17 |     2613 |      |       |       |       |       |       |   |                            |
|  1 |   PX COORDINATOR                                   |                           |         |      |         1 |    +24 |    17 |     2613 |      |       |       |       |       |       |   |                            |
|  2 |    PX SEND QC (RANDOM)                             | :TQ10003                  |       1 |   39 |         1 |    +24 |     8 |     2613 |      |       |       |       |       |       |   |                            |
|  3 |     HASH UNIQUE                                    |                           |       1 |   39 |         9 |    +16 |     8 |     2613 |      |       |       |       |    9M |       |     6.90 | Cpu (2)                     |
|  4 |      PX RECEIVE                                    |                           |       1 |   38 |         9 |    +16 |     8 |       3M |      |       |       |       |       |       |   |                            |
|  5 |       PX SEND HASH                                 | :TQ10002                  |       1 |   38 |        12 |    +14 |     8 |       3M |      |       |       |       |       |       |     3.45 | Cpu (1)                     |
|  6 |        BUFFER SORT                                 |                           |       1 |   39 |        23 |     +2 |     8 |       3M | 4584 | 143MB |   703 | 143MB |  151M |  151M |    34.48 | Cpu (2)                     |
|    |                                                    |                           |         |      |           |        |       |          |      |       |       |       |       |       |   | direct path read temp (6)  |
|    |                                                    |                           |         |      |           |        |       |          |      |       |       |       |       |       |   | direct path write temp (2) |
|  7 |         NESTED LOOPS                               |                           |       1 |   38 |        15 |     +2 |     8 |       3M |      |       |       |       |       |       |   |                            |
|  8 |          NESTED LOOPS                              |                           |       1 |   38 |        15 |     +2 |     8 |       3M |      |       |       |       |       |       |   |                            |
|  9 |           NESTED LOOPS                             |                           |       1 |   38 |        15 |     +2 |     8 |       3M |      |       |       |       |       |       |   |                            |
| 10 |            NESTED LOOPS                            |                           |       1 |   38 |        15 |     +2 |     8 |       3M |      |       |       |       |       |       |   |                            |
| 11 |             HASH JOIN                              |                           |      41 |   21 |        15 |     +2 |     8 |    19334 |      |       |       |       |    7M |       |   |                            |
| 12 |              BUFFER SORT                           |                           |         |      |        13 |     +2 |     8 |    19233 |      |       |       |       |    1M |       |   |                            |
| 13 |               PX RECEIVE                           |                           |      89 |   19 |        13 |     +2 |     8 |    19233 |      |       |       |       |       |       |   |                            |
| 14 |                PX SEND HASH                        | :TQ10000                  |      89 |   19 |         1 |     +1 |     1 |    19233 |      |       |       |       |       |       |   |                            |
| 15 |                 NESTED LOOPS                       |                           |      89 |   19 |         1 |     +1 |     1 |    19233 |      |       |       |       |       |       |   |                            |
| 16 |                  NESTED LOOPS                      |                           |      89 |   19 |         1 |     +1 |     1 |    19233 |      |       |       |       |       |       |   |                            |
| 17 |                   TABLE ACCESS BY INDEX ROWID      | FIRM                      |       1 |    2 |         1 |     +1 |     1 |       43 |      |       |       |       |       |       |   |                            |
| 18 |                    INDEX RANGE SCAN                | FIRM_ULT_PARENT_FIRM_IDX1 |       1 |    1 |         1 |     +1 |     1 |       43 |      |       |       |       |       |       |   |                            |
| 19 |                   PARTITION RANGE SINGLE           |                           |         |      |         1 |     +1 |    43 |    19233 |      |       |       |       |       |       |   |                            |
| 20 |                    BITMAP CONVERSION TO ROWIDS     |                           |         |      |         1 |     +1 |    43 |    19233 |      |       |       |       |       |       |   |                            |
| 21 |                     BITMAP INDEX SINGLE VALUE      | LIT_FACT_BD_IDX09         |         |      |         1 |     +1 |    43 |       51 |      |       |       |       |       |       |   |                            |
| 22 |                  TABLE ACCESS BY LOCAL INDEX ROWID | LIT_FACT_BD               |      64 |   19 |         1 |     +1 | 19233 |    19233 |      |       |       |       |       |       |   |                            |
| 23 |              PX RECEIVE                            |                           |      20 |    2 |        15 |     +2 |     8 |       20 |      |       |       |       |       |       |   |                            |
| 24 |               PX SEND HASH                         | :TQ10001                  |      20 |    2 |         1 |    +14 |     8 |       20 |      |       |       |       |       |       |   |                            |
| 25 |                PX BLOCK ITERATOR                   |                           |      20 |    2 |         1 |    +14 |     8 |       20 |      |       |       |       |       |       |   |                            |
| 26 |                 TABLE ACCESS FULL                  | OPPOSING_COUNSEL          |      20 |    2 |         1 |    +14 |     3 |       20 |      |       |       |       |       |       |   |                            |
| 27 |             PARTITION RANGE SINGLE                 |                           |       1 |      |        15 |     +2 | 19334 |       3M |      |       |       |       |       |       |   |                            |
| 28 |              TABLE ACCESS BY LOCAL INDEX ROWID     | LIT_FACT_BD               |       1 |      |        16 |     +1 | 19334 |       3M |      |       |       |       |       |       |    17.24 | Cpu (5)                     |
| 29 |               INDEX RANGE SCAN                     | LIT_FACT_BD_IDX20         |       1 |      |        15 |     +2 | 19334 |       3M |      |       |       |       |       |       |   |                            |
| 30 |            TABLE ACCESS BY INDEX ROWID             | DATE_CODE                 |       1 |      |        15 |     +2 |    3M |       3M |      |       |       |       |       |       |    10.34 | Cpu (3)                     |
| 31 |             INDEX UNIQUE SCAN                      | PK_DATE_CODE              |       1 |      |        16 |     +1 |    3M |       3M |      |       |       |       |       |       |     6.90 | Cpu (2)                     |
| 32 |           INDEX UNIQUE SCAN                        | PK_FIRM                   |       1 |      |        23 |     +2 |    3M |       3M |      |       |       |       |       |       |     6.90 | Cpu (2)                     |
| 33 |          TABLE ACCESS BY INDEX ROWID               | FIRM                      |       1 |      |        16 |     +1 |    3M |       3M |      |       |       |       |       |       |    13.79 | Cpu (4)                     |
=======================================================================================================================================================================================================================================

We see from operations 3 – 7 that the 3M rows generated from the nested loop joins pass up through a buffer sort operation before being crunched down to 2,613 rows. It’s probably the buffer sort (which is buffering but not sorting) that has mostly passed through a single server and spilled to disc in the 2019 report. We just don’t want that 39M+ rows ever to exist.

So how easy will it be to change the SQL? Here's the query:
SELECT
        /*+ PARALLEL(8) */
        DISTINCT rd.document_id AS doc_id
        ,'firm_id' AS criteria_key
        ,opf.ultimate_parent_firm_id AS series_id
        ,period_number
FROM (
        SELECT
                /*+ PARALLEL(8) */
                DISTINCT rd.document_id
                ,rd.client_role_id
                ,(
                        CASE 
                                WHEN MONTHS_BETWEEN(TO_DATE('04/28/2022', 'MM/DD/YYYY'), TO_DATE('04/01/2017', 'MM/DD/YYYY')) > 12
                                        THEN TRUNC(dc.date_value, 'YEAR')
                                ELSE TRUNC(dc.date_value, 'MONTH')
                                END
                        ) period_number
        FROM REPORT_ENGINE.date_code dc
                ,REPORT_ENGINE.lit_fact_bd rd
        INNER JOIN report_engine.firm FIRM ON rd.firm_id = FIRM.firm_id
        WHERE dc.date_value BETWEEN TO_DATE('04/01/2017', 'MM/DD/YYYY')
                        AND TO_DATE('04/28/2022', 'MM/DD/YYYY')
                AND rd.publication_date_id = dc.date_id
                AND rd.year_number = to_number('2019')
                AND (FIRM.ultimate_parent_firm_id IN (to_number('252094')))
        ) rd
INNER JOIN report_engine.opposing_counsel op ON rd.client_role_id = op.legal_role_id
INNER JOIN report_engine.lit_fact_bd opclf ON opclf.document_id = rd.document_id
        AND op.opposing_counsel_role_id = opclf.client_role_id
        AND opclf.year_number = to_number('2019')
INNER JOIN report_engine.firm opf ON opclf.firm_id = opf.firm_id
        AND opf.firm_id >= 1000
WHERE 1 = 1;

Lines 7-10 and 27 of the SQL tell us we already have an inline view where we're doing a "select distinct" and, unwinding the mix of "Oracle" and "ANSI" syntax, we can see that it joins DATE_CODE, LIT_FACT_BD and FIRM, and we know that one of those tables explodes the intermediate data size to something enormous. So it looks like the original author of this code had already worked out that the query needed to aggregate early.

Checking back to the original plans we note that there’s only one “hash unique” operation, and there’s no sign of a “view” operation, so maybe the performance problem is a result of the optimizer suddenly deciding it can do complex view merging with this inline view, and perhaps all we need to do is add the hint /*+ no_merge */ to the inline view and see what happens.
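Schematically the change is just the one extra hint inside the inline view (everything else in the query stays the same):

FROM (
        SELECT
                /*+ no_merge PARALLEL(8) */
                DISTINCT rd.document_id
        ...
        ) rd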

Here's the plan after adding the hint:
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                            | Name                      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -----
|   0 | SELECT STATEMENT                                     |                           |       |       |   314 (100)|          |       |       |        |      | |
|   1 |  PX COORDINATOR                                      |                           |       |       |            |          |       |       |        |      | |
|   2 |   PX SEND QC (RANDOM)                                | :TQ10005                  |    23 |  2599 |   314   (1)| 00:00:06 |       |       |  Q1,05 | P->S | QC (RAND)  |
|   3 |    HASH UNIQUE                                       |                           |    23 |  2599 |   314   (1)| 00:00:06 |       |       |  Q1,05 | PCWP | |
|   4 |     PX RECEIVE                                       |                           |    23 |  2599 |   314   (1)| 00:00:06 |       |       |  Q1,05 | PCWP | |
|   5 |      PX SEND HASH                                    | :TQ10004                  |    23 |  2599 |   314   (1)| 00:00:06 |       |       |  Q1,04 | P->P | HASH |
|   6 |       HASH UNIQUE                                    |                           |    23 |  2599 |   314   (1)| 00:00:06 |       |       |  Q1,04 | PCWP | |
|   7 |        NESTED LOOPS                                  |                           |    23 |  2599 |   313   (1)| 00:00:06 |       |       |  Q1,04 | PCWP | |
|   8 |         NESTED LOOPS                                 |                           |    23 |  2599 |   313   (1)| 00:00:06 |       |       |  Q1,04 | PCWP | |
|   9 |          NESTED LOOPS                                |                           |    23 |  2323 |   310   (1)| 00:00:06 |       |       |  Q1,04 | PCWP | |
|* 10 |           HASH JOIN                                  |                           |   388 | 21340 |   148   (1)| 00:00:03 |       |       |  Q1,04 | PCWP | |
|  11 |            PX RECEIVE                                |                           |    20 |   160 |     2   (0)| 00:00:01 |       |       |  Q1,04 | PCWP | |
|  12 |             PX SEND BROADCAST                        | :TQ10002                  |    20 |   160 |     2   (0)| 00:00:01 |       |       |  Q1,02 | P->P | BROADCAST  |
|  13 |              PX BLOCK ITERATOR                       |                           |    20 |   160 |     2   (0)| 00:00:01 |       |       |  Q1,02 | PCWC | |
|* 14 |               TABLE ACCESS FULL                      | OPPOSING_COUNSEL          |    20 |   160 |     2   (0)| 00:00:01 |       |       |  Q1,02 | PCWP | |
|  15 |            VIEW                                      |                           |   835 | 39245 |   146   (1)| 00:00:03 |       |       |  Q1,04 | PCWP | |
|  16 |             HASH UNIQUE                              |                           |   835 | 63460 |   146   (1)| 00:00:03 |       |       |  Q1,04 | PCWP | |
|  17 |              PX RECEIVE                              |                           |   835 | 63460 |   145   (0)| 00:00:03 |       |       |  Q1,04 | PCWP | |
|  18 |               PX SEND HASH                           | :TQ10003                  |   835 | 63460 |   145   (0)| 00:00:03 |       |       |  Q1,03 | P->P | HASH |
|* 19 |                HASH JOIN BUFFERED                    |                           |   835 | 63460 |   145   (0)| 00:00:03 |       |       |  Q1,03 | PCWP | |
|  20 |                 BUFFER SORT                          |                           |       |       |            |          |       |       |  Q1,03 | PCWC | |
|  21 |                  PX RECEIVE                          |                           |   835 | 52605 |   136   (0)| 00:00:03 |       |       |  Q1,03 | PCWP | |
|  22 |                   PX SEND HASH                       | :TQ10000                  |   835 | 52605 |   136   (0)| 00:00:03 |       |       |        | S->P | HASH |
|  23 |                    NESTED LOOPS                      |                           |   835 | 52605 |   136   (0)| 00:00:03 |       |       |        |      | |
|  24 |                     NESTED LOOPS                     |                           |   835 | 52605 |   136   (0)| 00:00:03 |       |       |        |      | |
|  25 |                      TABLE ACCESS BY INDEX ROWID     | FIRM                      |     1 |    12 |     2   (0)| 00:00:01 |       |       |        |      | |
|* 26 |                       INDEX RANGE SCAN               | FIRM_ULT_PARENT_FIRM_IDX1 |     1 |       |     1   (0)| 00:00:01 |       |       |        |      | |
|  27 |                      PARTITION RANGE SINGLE          |                           |       |       |            |          |    30 |    30 |        |      | |
|  28 |                       BITMAP CONVERSION TO ROWIDS    |                           |       |       |            |          |       |       |        |      | |
|* 29 |                        BITMAP INDEX SINGLE VALUE     | LIT_FACT_BD_IDX09         |       |       |            |          |    30 |    30 |        |      | |
|* 30 |                     TABLE ACCESS BY LOCAL INDEX ROWID| LIT_FACT_BD               |   598 | 30498 |   136   (0)| 00:00:03 |    30 |    30 |        |      | |
|  31 |                 PX RECEIVE                           |                           |  1854 | 24102 |     9   (0)| 00:00:01 |       |       |  Q1,03 | PCWP | |
|  32 |                  PX SEND HASH                        | :TQ10001                  |  1854 | 24102 |     9   (0)| 00:00:01 |       |       |  Q1,01 | P->P | HASH |
|  33 |                   PX BLOCK ITERATOR                  |                           |  1854 | 24102 |     9   (0)| 00:00:01 |       |       |  Q1,01 | PCWC | |
|* 34 |                    TABLE ACCESS FULL                 | DATE_CODE                 |  1854 | 24102 |     9   (0)| 00:00:01 |       |       |  Q1,01 | PCWP | |
|  35 |           PARTITION RANGE SINGLE                     |                           |     1 |    46 |     0   (0)|          |    30 |    30 |  Q1,04 | PCWP | |
|* 36 |            TABLE ACCESS BY LOCAL INDEX ROWID         | LIT_FACT_BD               |     1 |    46 |     0   (0)|          |    30 |    30 |  Q1,04 | PCWP | |
|* 37 |             INDEX RANGE SCAN                         | LIT_FACT_BD_IDX20         |     1 |       |     0   (0)|          |    30 |    30 |  Q1,04 | PCWP | |
|* 38 |          INDEX UNIQUE SCAN                           | PK_FIRM                   |     1 |       |     0   (0)|          |       |       |  Q1,04 | PCWP | |
|  39 |         TABLE ACCESS BY INDEX ROWID                  | FIRM                      |     1 |    12 |     0   (0)|          |       |       |  Q1,04 | PCWP | |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Note particularly that operations 15 and 16 tell us that we’ve forced the optimizer into treating the inline view as a separate query block and we now have two aggregation steps, one inside the view, and another after joining FIRM (again) and LIT_FACT_BD (again) to the inline view.

Unfortunately the plan shown here is pulled from memory using dbms_xplan.display_cursor() after execution, so it includes the various parallel execution columns (TQ, IN-OUT, PQ Distrib) but doesn't have the rowsource execution stats enabled, so we can't see the actual workload and data volume; in the words of the OP, though: "adding no_merge hint did the trick and now the SQL is just executing fine".

Summary

The steps for solving the performance problems of a specific SQL statement are very fluid. For a long-running or parallel statement the SQL Monitor report will automatically be created (though there are limits on the size of the plan that may disable the feature) and this is the easiest source of useful information, though you might also need to pull the execution plan from v$sql_plan to get details about parallel execution and partitioning at the same time.
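A sketch of the dbms_xplan call for that pull – the sql_id is the one from the Monitor report, and the default 'typical' format is enough to include the parallel and partitioning columns:

select  *
from    table(dbms_xplan.display_cursor('3cjg20q2vw41f', null, 'typical'))
/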

If you're not licensed for the diagnostic and performance packs, though, enabling SQL Trace to get the plan and waits gets you a lot of information, and querying (g)v$pq_tqstat immediately after running the query can fill in the parallel traffic details.
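A sketch of the sort of query I mean – remember that v$pq_tqstat is populated only for the session that ran the parallel statement, so it has to be queried immediately afterwards from that same session:

select
        dfo_number, tq_id, server_type, instance, process, num_rows, bytes
from
        v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, instance, process
/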

In this example the SQL Monitor report showed a highly skewed distribution, which might have been fixable by changing the PQ distribution, or even by simply changing the degree of parallelism.

Further examination of the report showed that the query generated an enormous rowsource which it then crunched down to a small result set. Comparing the 2018 and 2019 plans (which were not quite identical but were sufficiently similar to justify comparison) the same skew and explosion of rowsource were visible in both, though the data size involved in the 2018 plan made it seem that the plan was a "good" one when really it wasn't.

The obvious target for tuning was to see if the explosion in volume could be reduced or eliminated by writing the query with some intermediate non-mergeable view(s), and it turned out that the query had been written with that intent in its original form but without a hint to block complex view merging. After adding the hint the performance was acceptable.

Join View delete

Tue, 2022-05-31 11:39

I've written a couple of notes about deleting from join views and (ultimately) the fact that if you have multiple key-preserved tables in a join view then the first key-preserved table in the from clause is the one where the delete is applied. The topic came up on the Oracle developer forum again today, and closed with a funny little quirk.

Here’s a little data model followed by one query and two deletes that I’ve just run on Oracle 19.11.0.0 and 21.3.0.0. The query and first delete feature a join using “ANSI” syntax, the second delete translates the join into “traditional” Oracle syntax.

rem
rem     Script:         delete_join_2.sql       
rem     Dated:          May 2022
rem     Author:         J P Lewis
rem
rem     Last tested
rem             21.3.0.0
rem             19.11.0.0
rem

create table t_parent (
        id      number 
                        constraint t_parent_pk primary key,
        val     varchar2(10)
);

create table t_child (
        id      number 
                        constraint t_child_fk references t_parent(id),
        val     varchar2(10)
);

alter table t_child add constraint t_child_pk primary key(id);

insert into t_parent values(1,rpad('parent',10));
insert into t_child  values(1,rpad('child', 10));

commit;

execute dbms_stats.gather_table_stats(user,'t_parent',cascade=>true)
execute dbms_stats.gather_table_stats(user,'t_child', cascade=>true)

set autotrace on explain

select t_child.* from t_parent join t_child on t_child.id = t_parent.id;

delete
        (select t_child.* from t_parent join t_child on t_child.id = t_parent.id);

delete (select t_child.* from t_parent, t_child where t_child.id = t_parent.id);

set autotrace off

The table definitions and content (with a little modification) come from the forum posting, and I have to admit that the “one-to-one” referential integrity from parent to child is a little unusual (though occasionally a very effective strategy). I’ve also enabled autotrace to get the (probable) execution plans for the three statements, but in the script on my test machine I’ve also enabled the CBO (10053) trace to pick up a few extra details.
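For reference, switching the 10053 trace on and off around the statements is just the usual event syntax – a minimal sketch:

alter session set events '10053 trace name context forever';

-- run the statement(s) to be traced

alter session set events '10053 trace name context off';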

The query is there to show us that the join will return a single row, and that the row content will be from the t_child table. What happens, though, when we try to use that query as an inline view for a delete?

It is an updateable join view – and both tables are, in fact, key-preserved thanks to the primary key constraints at both ends of the join. Here’s what we get (with the execution plans removed from the output):

        ID VAL
---------- ----------
         1 child

1 row selected.



delete
*
ERROR at line 1:
ORA-02292: integrity constraint (TEST_USER.T_CHILD_FK) violated - child record found


1 row deleted.

Line 5 tells us that we selected one row – as expected.

Line 11 tells us that we’ve raised an error because we’re trying to break the foreign key constraint on the t_child table – and that error shouldn’t be a surprise because the t_parent table appears first in the from clause, so that’s the table the optimizer wants to delete rows from.

However, line 15 tells us that if we change to “traditional” Oracle syntax the delete works – so which of the two delete statements is displaying buggy behaviour?

Execution plans

There’s a clue in the execution plans (when we get them – autotrace doesn’t give a plan for the delete that raised the error). So here are the plans for the select and the “traditional” delete in that order:

-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |    14 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T_CHILD |     1 |    14 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------



-------------------------------------------------------------------------------
| Id  | Operation        | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | DELETE STATEMENT |            |     1 |     3 |     1   (0)| 00:00:01 |
|   1 |  DELETE          | T_CHILD    |       |       |            |          |
|   2 |   INDEX FULL SCAN| T_CHILD_PK |     1 |     3 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------

With assistance from the foreign key constraint, Join Elimination has occurred in both cases leaving t_child as the candidate for deletion. You’ll also note that the optimizer has recognised that while the view’s select list is “all columns” it only needs to optimize for “select rowid” to drive the delete, so it produces an index-only access path to get the data it needs.

If we use explain plan, or check the 10053 trace file, for the ANSI delete that failed we’ll find the following:

-----------------------------------------------------------------------------------
| Id  | Operation           | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT    |             |     1 |     6 |     1   (0)| 00:00:01 |
|   1 |  DELETE             | T_PARENT    |       |       |            |          |
|   2 |   NESTED LOOPS      |             |     1 |     6 |     1   (0)| 00:00:01 |
|   3 |    INDEX FULL SCAN  | T_CHILD_PK  |     1 |     3 |     1   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN| T_PARENT_PK |     1 |     3 |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$4396EC5C
   3 - SEL$4396EC5C / T_CHILD@SEL$1
   4 - SEL$4396EC5C / T_PARENT@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("T_CHILD"."ID"="T_PARENT"."ID")

There’s no Join Elimination, and Oracle is following the rule that “the first key preserved table in the from clause should be the target of the delete”.

Where’s the bug?

This is a question that doesn’t have an obvious answer.

The traditional syntax gives you the result you probably wanted, but it’s deleting from the second table in the from clause of the join view – so is that a documentation bug or a coding bug? If the documentation is correct the statement should have raised the error that the ANSI syntax gave.

Conversely, is there an optimizer bug (or limitation) because table elimination could have been used in the ANSI case but wasn’t? You could equally argue that the ANSI code is doing the right thing (following the documentation) by raising an error and that the traditional syntax is doing the wrong thing.

If we search the CBO (10053) traces of the select and the ANSI join, looking for sections where the optimizer considers Join Elimination, the results are helpful:

grep -nT "^JE" or19_ora_30861_*.trc >temp.txt

With a little cosmetic editing here’s the output from the trace of the select statement:

or19_ora_30861_select.trc:  78  JE - Join Elimination
or19_ora_30861_select.trc: 807  JE:[V2]   Considering Join Elimination on query block SEL$1 (#0)
or19_ora_30861_select.trc: 813  JE:[V2] Query block (0x7fd3a152f5c0) before join elimination:
or19_ora_30861_select.trc: 816  JE:[V2]: Try to eliminate T_CHILD by ref. join elim using NTSID
or19_ora_30861_select.trc: 817  JE:[V2]: Cannot eliminate T_CHILD by ref. join elim - no constr. based join pred(s)
or19_ora_30861_select.trc: 818  JE:[V2]: Try to eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
or19_ora_30861_select.trc: 819  JE:[V2]: Cannot eliminate T_PARENT by ref. join elim - predicate column refs.
or19_ora_30861_select.trc: 820  JE:[V2]: Try to eliminate T_PARENT by ref. join elim using NTSID
or19_ora_30861_select.trc: 821  JE:[V2]: Cannot eliminate T_PARENT by ref. join elim - no constr. based join pred(s)
or19_ora_30861_select.trc: 822  JE:[V2] Query block (0x7fd3a152f5c0) after join elimination:

or19_ora_30861_select.trc: 844  JE:[V2]   Considering Join Elimination on query block SEL$58A6D7F6 (#0)
or19_ora_30861_select.trc: 850  JE:[V2] Query block (0x7fd39c09ee60) before join elimination:
or19_ora_30861_select.trc: 853  JE:[V2]: Try to eliminate T_CHILD by ref. join elim using NTSID
or19_ora_30861_select.trc: 854  JE:[V2]: Cannot eliminate T_CHILD by ref. join elim - no constr. based join pred(s)
or19_ora_30861_select.trc: 855  JE:[V2]: Try to eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
or19_ora_30861_select.trc: 856  JE:[V2]: Can eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
or19_ora_30861_select.trc: 857  JE:[V2] Eliminate table: T_PARENT (T_PARENT)
or19_ora_30861_select.trc: 858  JE:[V2] Query block (0x7fd39c09ee60) after join elimination:

We can see that the optimizer has considered join elimination twice (lines 2 and 12), once for each of two query blocks (sel$1 – a query block embedded in the text – and sel$58a6d7f6, a query block name generated during some query transformation).

At lines 5 and 6 we can see that Oracle failed to take advantage of the referential integrity constraint to eliminate t_parent with the explanation “predicate column refs”; but at lines 16 and 17 we see that there’s a change of heart and the optimizer does take advantage of the referential integrity to eliminate t_parent. So let’s look at the text of the “query block before join elimination” in both cases and see what’s changed:

First pass
----------
SELECT 
        "T_PARENT"."ID"  "QCSJ_C000000000300000",
        "T_PARENT"."VAL" "QCSJ_C000000000300002",
        "T_CHILD"."ID"   "QCSJ_C000000000300001",
        "T_CHILD"."VAL"  "QCSJ_C000000000300003" 
FROM 
        "TEST_USER"."T_PARENT" "T_PARENT",
        "TEST_USER"."T_CHILD" "T_CHILD" 
WHERE 
        "T_CHILD"."ID"="T_PARENT"."ID"


Second Pass
-----------
SELECT 
        "T_CHILD"."ID" "ID",
        "T_CHILD"."VAL" "VAL" 
FROM 
        "TEST_USER"."T_PARENT" "T_PARENT",
        "TEST_USER"."T_CHILD" "T_CHILD" 
WHERE 
        "T_CHILD"."ID"="T_PARENT"."ID"

The first pass seems to have expanded the two tables in the initial JOIN, ignoring the limited select list, so join elimination is blocked by the presence of the (non-PK) column t_parent.val (aliased as qcsj_c000000000300002); but when we get to the 2nd pass the select list has been limited to the minimum necessary set of columns that would be needed by a pure select statement and join elimination is possible.

The clue to how/why this has happened is in the few lines between the two passes; the trace file also reports: “CVM: Merging SPJ view SEL$1 (#0) into SEL$2 (#0)” and when we look for the point where the optimizer registered query block sel$2 it turns out to have one item in its from clause, an object called from$_subquery$_003 (which tells us where the 3 came from in all those column aliases QCSJ_C0000000003nnnnn). In effect it looks like the optimizer started by rewriting the ANSI select in Oracle syntax as (with a little poetic licence for clarity):

select  child_id, child_val, parent_id
from    (
         select  t_child.*, t_parent.*
         from    t_child, t_parent
         where   t_child.id = t_parent.id
        ) from$_subquery$_003
/ 

So the first pass was the optimizer trying to do join elimination on the inner query block, and the second pass happened after the inner block was merged with the outer block, at which point the resulting query block (called sel$58a6d7f6) allowed the join elimination to take place.

So what happened with the delete statement? Here’s the result of the call to grep:

or19_ora_30861_delete.trc:  90  JE - Join Elimination
or19_ora_30861_delete.trc: 837  JE:[V2]   Considering Join Elimination on query block SEL$1 (#0)
or19_ora_30861_delete.trc: 843  JE:[V2] Query block (0x7fd3a152f758) before join elimination:
or19_ora_30861_delete.trc: 846  JE:[V2]: Try to eliminate T_CHILD by ref. join elim using NTSID
or19_ora_30861_delete.trc: 847  JE:[V2]: Cannot eliminate T_CHILD by ref. join elim - no constr. based join pred(s)
or19_ora_30861_delete.trc: 848  JE:[V2]: Try to eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
or19_ora_30861_delete.trc: 849  JE:[V2]: Cannot eliminate T_PARENT by ref. join elim - predicate column refs.
or19_ora_30861_delete.trc: 850  JE:[V2]: Try to eliminate T_PARENT by ref. join elim using NTSID
or19_ora_30861_delete.trc: 851  JE:[V2]: Cannot eliminate T_PARENT by ref. join elim - no constr. based join pred(s)
or19_ora_30861_delete.trc: 852  JE:[V2] Query block (0x7fd3a152f758) after join elimination:

That’s the lot – there is no attempt to do join elimination on a query block called sel$58a6d7f6, though a few lines after this extract the trace file reports:

CVM:   Merging SPJ view SEL$1 (#0) into SEL$2 (#0)
Registered qb: SEL$58A6D7F6 0x9c0a3038 (VIEW MERGE SEL$2; SEL$1; SEL$2)

That’s exactly the same view merging that was present in the select – but the optimizer doesn’t attempt join elimination before moving on to report, a couple of lines later:

CVM:   Merging SPJ view SEL$58A6D7F6 (#0) into DEL$1 (#0)
Registered qb: SEL$4396EC5C 0x9c09dde0 (VIEW MERGE DEL$1; SEL$58A6D7F6; DEL$1)

It looks like this is a case of the optimizer missing a cycle in its transform/optimize loopback. The delete went through exactly the same process as the select, but failed to optimize the intermediate query block (which happened to be called from$_subquery$_004 in this case, being one layer deeper thanks to the extra level introduced by the delete).

Workaround

I tried two or three things to get the ANSI-style code to work without changing the table order in the from clause – extra layers of subqueries, explicit aliases – but Oracle kept coming back to a point where it had the full projection of all the columns in the two tables. (This reminded me of an old bug in ANSI select expansion that was fixed in 12cR2. It made me wonder if this was a code path where the same bug had been overlooked – it also made me wonder if there would be any problems if the two tables in the join totalled more than 1,000 columns – but that’s a test for another day.)

I did eventually find a fix (that didn’t involve switching the table order in the inline view):

delete
        (select t_child.id from t_parent join t_child on t_child.id = t_parent.id);

Note that I now have only the primary key of t_child in the select list.

The section in the CBO trace file for Join Elimination looked like this:

JE:[V2] Query block (0x7f93996231f8) before join elimination:
SQL:******* UNPARSED QUERY IS *******
SELECT "T_PARENT"."ID" "QCSJ_C000000000400000","T_CHILD"."ID" "QCSJ_C000000000400001" FROM "TEST_USER"."T_PARENT" "T_PARENT","TEST_USER"."T_CHILD" "T_CHILD" WHERE "T_CHILD"."ID"="T_PARENT"."ID"
JE:[V2]: Try to eliminate T_CHILD by ref. join elim using NTSID
JE:[V2]: Cannot eliminate T_CHILD by ref. join elim - no constr. based join pred(s)
JE:[V2]: Try to eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
JE:[V2]: Can eliminate T_PARENT by ref. join elim using PRIMARY(ID) <- FOREIGN(ID)
JE:[V2] Eliminate table: T_PARENT (T_PARENT)
JE:[V2] Replaced column: T_PARENT.ID with column: T_CHILD.ID
JE:[V2] Query block (0x7f93996231f8) after join elimination:
SQL:******* UNPARSED QUERY IS *******
SELECT "T_CHILD"."ID" "QCSJ_C000000000400000","T_CHILD"."ID" "QCSJ_C000000000400001" FROM "TEST_USER"."T_CHILD" "T_CHILD"
Registered qb: SEL$E703A888 0x996231f8 (JOIN REMOVED FROM QUERY BLOCK SEL$1; SEL$1; "T_PARENT"@"SEL$1")

You may decide that this is an obvious workaround, but it’s not particularly helpful. In the general case I would expect to see a stored view that joined the two tables, with the expectation that this type of delete was just one of many operations the view was used for. Rewriting a view to use the increasingly common ANSI syntax shouldn’t result in some statements crashing “for no apparent reason”.
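
To make that concrete, here’s a hypothetical sketch (the view name is invented) of the pattern I have in mind – a view that someone might innocently rewrite from traditional to ANSI join syntax:

create or replace view v_child_details as
select
        t_child.id, t_child.val
from
        t_parent, t_child
where
        t_child.id = t_parent.id
/

delete from v_child_details where id = 1;

Based on the behaviour above, the delete against the traditional form targets t_child and succeeds; an ANSI rewrite of the view that ended up with the full projection of both tables could make the optimizer target t_parent instead, and the same delete would then crash with ORA-02292.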

The last laugh

Given that the select and delete both included a merge of sel$1 into sel$2 to produce a query block called sel$58a6d7f6 I wondered what would happen if I blocked the merge:

SQL>  delete  /*+  no_merge (@sel$1) */
  2          (select t_child.id from t_parent join t_child on t_child.id = t_parent.id);
	(select t_child.id from t_parent join t_child on t_child.id = t_parent.id)
        *
ERROR at line 2:
ORA-01752: cannot delete from view without exactly one key-preserved table

Isn’t that the error message that should have been coming out the entire time – according to the original documentation for updateable join views?

Quiz Night

Fri, 2022-05-20 12:09

It’s a long time since I’ve done a quiz night – but here’s something that fooled me (briefly) when it appeared on the Oracle Developers’ Forum. Here’s a table definition – and I’m not going to make it easy by giving you a “create table” statement, but it’s just a simple heap table:

sql> desc interr_skuplannparam
 name                             null?    type
 -------------------------------- -------- -------------------
 atpdur                                    number(38)
 depdmdopt                                 number(38)
 externalskusw                             number(1)
 firstreplendate                           date
 lastfrzstart                              date
 lastplanstart                             date
 plandur                                   number(38)
 planleadtime                              number(38)
 planleadtimerule                          number(38)
 planshipfrzdur                            number(38)
 restrictdur                               number(38)
 allocbatchsw                              number(1)
 cmpfirmdur                                number(38)
 custservicelevel                          float(126)
 maxchangefactor                           float(126)
 mfgleadtime                               number(38)
 recschedrcptsdur                          number(38)
 cpppriority                               number(38)
 cpplocksw                                 number(1)
 criticalmaterialsw                        number(1)
 aggexcesssupplyrule                       number(38)
 aggundersupplyrule                        number(38)
 bufferleadtime                            number(38)
 maxoh                                     float(126)
 maxcovdur                                 number(38)
 drpcovdur                                 number(38)
 drpfrzdur                                 number(38)
 drprule                                   number(38)
 drptimefencedate                          date
 drptimefencedur                           number(38)
 incdrpqty                                 float(126)
 mindrpqty                                 float(126)
 mpscovdur                                 number(38)
 mfgfrzdur                                 number(38)
 mpsrule                                   number(38)
 mpstimefencedate                          date
 mpstimefencedur                           number(38)
 incmpsqty                                 float(126)
 minmpsqty                                 float(126)
 shrinkagefactor                           number(38)
 item                                      varchar2(50 char)
 loc                                       varchar2(50 char)
 expdate                                   date
 atprule                                   number(38)
 prodcal                                   varchar2(50 char)
 prodstartdate                             date
 prodstopdate                              date
 orderingcost                              float(126)
 holdingcost                               float(126)
 eoq                                       float(126)
 ff_trigger_control                        number(38)
 workingcal                                varchar2(50 char)
 lookaheaddur                              number
 orderpointrule                            number
 orderskudetailsw                          number(1)
 supsdmindmdcovdur                         number(38)
 orderpointminrule                         number(38)
 orderpointminqty                          float(126)
 orderpointmindur                          number(38)
 orderuptolevelmaxrule                     number(38)
 orderuptolevelmaxqty                      float(126)
 orderuptolevelmaxdur                      number(38)
 aggskurule                                number(38)
 fwdbuymaxdur                              number(38)
 costuom                                   number(38)
 cumleadtimedur                            number(38)
 cumleadtimeadjdur                         number(38)
 cumleadtimerule                           number(38)
 roundingfactor                            float(126)
 limitplanarrivpublishsw                   number(1)
 limitplanarrivpublishdur                  number
 maxohrule                                 number(1)
 integration_stamp                         date
 integration_jobid                not null varchar2(32 char)
 error_str                                 varchar2(2000 char)
 error_stamp                               date

The column integration_jobid (the single “not null” column) has been defined with the default value of “INT_JOB”, which takes 7 bytes to store. What’s the result of the query at the end of this little script:

truncate table interr_skuplannparam;

insert into interr_skuplannparam (atpdur) 
select   0 
from     all_objects 
where    rownum <= 10000
/

commit;

execute dbms_stats.gather_table_stats(user,'interr_skuplannparam')

select avg_row_len from user_tables;

Hint: the value zero is represented internally as a single byte holding the value 0x80 (decimal 128).

Lag/Lead slow

Thu, 2022-05-05 04:05

This note is about a surprising performance difference between the lead() and lag() analytic functions (which turns out to be due to the behaviour of the nth_value() function) when the option to “ignore nulls” is included in their use (jump to conclusion). The detail I’ll be writing about was highlighted in a thread on the Oracle developer forum about a requirement to add a number of analytic columns to a dataset of 156 million rows using a statement of the following shape:

create table tb_target_ignore
as
select
        pat_id,
        visit_date_dt,
        ed_ucc_dt,
        lag (ed_ucc_dt ignore nulls, 1) over (partition by pat_id order by visit_date_dt) as prev_ed_ucc_dt,
        lead(ed_ucc_dt ignore nulls, 1) over (partition by pat_id order by visit_date_dt) as next_ed_ucc_dt,
        row_number() over (partition by pat_id order by visit_date_dt) as row_num
from
        tb_source
;

You’ll notice that I’ve introduced a row_number(), and both a lead() and a lag() of a column called ed_ucc_dt. All three analytic columns use the same partitioning and ordering, though, so Oracle will only be doing one “window sort” in the execution plan. Part of the performance problem, of course, was that with 156M rows of a couple of dozen existing columns and adding a dozen new columns, the workload due to sorting was bound to be very large, so there were various suggestions of how to minimise that part of the workload.

However Solomon Yakobson pointed out that the code was using the “ignore nulls” option and there was a bug in 11g that made lead() and lag() very slow when this option was used. He subsequently reported that the defect was still present in 19c, narrowed it down to just the lag() function, and cited a MOS document ID referencing the problem: LAG Function is slow when using ignore nulls (Doc ID 2811596.1). The implication of the MOS note is that we shouldn’t expect this to change.

A follow-up posting by User_H3J7U gave us a reason for the slowness of the lag() function by running a sample query through dbms_utility.expand_sql(). Oracle rewrites the query to use variants of the nth_value() function when you use “ignore nulls”, but rewrites it to use variants of first_value() when you aren’t using the “ignore nulls” option. This isn’t a complete explanation of why lag() should be slow while lead() is not – but it’s a significant pointer towards a possible implementation issue and is a good clue about working around the problem. So let’s build a model of the situation.
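
(If you want to reproduce the expansion step before looking at the model, here’s a minimal sketch of the call – the documented procedure in recent versions is dbms_utility.expand_sql_text(), which takes the statement as an input CLOB and returns the expanded text through an output CLOB; the query references the tb_source table created below:)

set serveroutput on

declare
        m_out   clob;
begin
        dbms_utility.expand_sql_text(
                input_sql_text  => q'[
                        select
                                lag(ed_ucc_dt ignore nulls, 1)
                                over (partition by pat_id order by visit_date_dt) prev_ed_ucc_dt
                        from    tb_source]',
                output_sql_text => m_out
        );
        -- print the first 4,000 characters of the expanded statement
        dbms_output.put_line(substr(m_out, 1, 4000));
end;
/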

The basic model
rem
rem     Script:         ignore_nulls.sql
rem     Author:         Jonathan Lewis / Sam P
rem     Dated:          May 2022
rem     Purpose:     
rem
rem     Last tested
rem             19.11.0.0
rem

create table tb_source (
        pat_id,
        visit_date_dt,
        ed_ucc_dt
)
as
with generator as (
        select rownum id
        from    dual
        connect by
                level <= 1e4    --> comment to avoid wordpress format issue
)
select
        g1.id,
        to_date('01-Apr-2022') + dbms_random.value(0,100),
        to_date('01-Apr-2022') + dbms_random.value(5,105)
--      to_date(null)
from
        generator g1,
        generator g2
where
        g2.id <= 20     --> comment to avoid wordpress format issue
order by
        dbms_random.value
/


spool ignore_nulls.lst

set serveroutput off
set timing on

prompt  ======================
prompt  Without "ignore nulls"
prompt  (My time 0.61 seconds)
prompt  ======================

create table tb_target_no_ignore
as
select
        pat_id,
        visit_date_dt,
        ed_ucc_dt,
        lag (ed_ucc_dt, 1) over (partition by pat_id order by visit_date_dt) as prev_ed_ucc_dt,
        lead(ed_ucc_dt, 1) over (partition by pat_id order by visit_date_dt) as next_ed_ucc_dt,
        row_number() over (partition by pat_id order by visit_date_dt) as row_num
from
        tb_source
;

prompt  ======================
prompt  With "ignore nulls"
prompt  (My time 0.88 seconds)
prompt  ======================

create table tb_target_ignore
as
select
        pat_id,
        visit_date_dt,
        ed_ucc_dt,
        lag (ed_ucc_dt ignore nulls, 1) over (partition by pat_id order by visit_date_dt) as prev_ed_ucc_dt,
        lead(ed_ucc_dt ignore nulls, 1) over (partition by pat_id order by visit_date_dt) as next_ed_ucc_dt,
        row_number() over (partition by pat_id order by visit_date_dt) as row_num
from
        tb_source
;

I’ve created a source table with 200,000 rows, consisting of 10,000 pat_id values, and 20 rows per pat_id. The 20 rows for a pat_id (probably) each have a different visit_date_dt and a different ed_ucc_dt.

After creating the data set I’ve created two more tables using the lead() and lag() functions to generate a previous (lag) and next (lead) ed_ucc_dt for each row, partitioning by pat_id, ordering by visit_date_dt. One statement includes the “ignore nulls” option the other doesn’t and, as you can see, the time to create the “no ignore” table was 0.61 seconds while the time to create the “ignore null” table was 0.88 seconds.

The variation isn’t dramatic – but this is just 200,000 rows, in memory, with only a few columns and only two columns added through lead and lag.

After the baseline test I tweaked the statement that created the table with the “ignore nulls” option to get three more times.

  • With neither lead() nor lag() the time was 0.29 seconds
  • With just the lead() column the time was 0.46 seconds – an increase of 0.17 seconds
  • With just the lag() column the time was 0.71 seconds – an increase of 0.42 seconds

You might note that 0.29 + 0.17 + 0.42 = 0.88 (the time I got for adding both columns) – it’s a little lucky that it looks like a perfect match, but even matching within a couple of hundredths of a second would have been a nice detail. It certainly seems that lag() – with my test data – consumes more resources than lead() for a pair of operations that look as if they should produce the same workloads.

Internal Rewrite

The next step was to check what the internal rewrite of the code looked like, so I passed the select part of the statements (the procedure won’t accept “create as select”) through dbms_utility.expand_sql() and reformatted the results. Here are the two rewritten statements – first without “ignore nulls”:

select
        a1.pat_id pat_id,
        a1.visit_date_dt visit_date_dt,
        a1.ed_ucc_dt ed_ucc_dt,
        decode(
                count(*) over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between 1 preceding and 1 preceding
                        ),
                 1,     first_value(a1.ed_ucc_dt) over (
                                partition by a1.pat_id order by a1.visit_date_dt
                                rows between 1 preceding and 1 preceding
                        ),
                        null
        ) prev_ed_ucc_dt,
        decode(
                count(*) over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between 1 following and 1 following
                        ),
                 1,     first_value(a1.ed_ucc_dt) over (
                                partition by a1.pat_id order by a1.visit_date_dt
                                rows between 1 following and 1 following
                        ),
                        null
        ) next_ed_ucc_dt,
        row_number() over (partition by a1.pat_id order by a1.visit_date_dt) row_num
from
        test_user.tb_source a1


The code looks a little long and messy, but that’s mainly because just about everything it does happens twice. The lag() function (the prev column) turns into a first_value() function that looks at the row preceding the current row in the partition (rows between 1 preceding and 1 preceding). However it first has to count over the same clause to see if a row exists, and then either report its value or report a null – hence the structure decode(count(), 1, first_value(), null).

Note: the full lag() function call is: “lag(expression, offset, default)” where the offset (number of rows to lag) defaults to 1 and the default is the value you want reported when there is no matching row, and defaults to null.
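
For example (a hypothetical variant of the calls in the test, not something I timed), the following asks for the value two rows back and supplies a default to be reported at the start of each partition:

select
        pat_id, visit_date_dt, ed_ucc_dt,
        lag(ed_ucc_dt, 2, date '1900-01-01')
                over (partition by pat_id order by visit_date_dt) prev2_ed_ucc_dt
from
        tb_source
/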

The call to lead() basically does the same thing, but uses (rows between 1 following and 1 following) to access the next row in the partition.

On the other hand this is the SQL that Oracle generates when we include the “ignore nulls” clause (which means Oracle can’t restrict the row range to just one preceding or following row):

select
        a1.pat_id pat_id,
        a1.visit_date_dt visit_date_dt,
        a1.ed_ucc_dt ed_ucc_dt,
        nvl(
                nth_value(a1.ed_ucc_dt, 1) from last ignore nulls over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between unbounded preceding and 1 preceding
                ),
                 null
        ) prev_ed_ucc_dt,
        nvl(
                nth_value(a1.ed_ucc_dt, 1)           ignore nulls over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between 1 following and unbounded following
                ),
                null
        ) next_ed_ucc_dt,
        row_number() over (partition by a1.pat_id order by a1.visit_date_dt) row_num
from
        test_user.tb_source a1

Both lag() and lead() turn into nth_value() with a second parameter of 1 (i.e. nth == 1st … which makes you wonder why Oracle isn’t using first_value()), and we can also see the “ignore nulls” still being used.

The lag() call now uses the range (rows between unbounded preceding and 1 preceding) i.e. everything from the start of partition to the previous row, while the lead() call uses the range (rows between 1 following and unbounded following) i.e. from the next row to the end of partition.

The other important detail to note is that the translation of the lag() call also includes the clause “from last” – in other words we want the first row when reading the partition in reverse order, and that might have something to do with the extra time it takes to operate the (translated) lag() function.

Workaround

Oracle is using a generic nth_value() to translate a generic lead()/lag(), but we’re in the special case where we know n = 1, which means we (and Oracle) could use first_value()/last_value(). It’s perfectly reasonable for Oracle’s internal code to avoid special cases if it makes no difference to performance, of course, but maybe in this case we could imitate Oracle’s rewrite to get some benefit.

  • Step 1 – change nth_value() to the appropriate first/last.
  • Step 2 – get rid of the “from last” which won’t be needed with last_value()
  • Step 3 – move the “ignore nulls” to the spot that Oracle wants to see it with first/last

Here’s the resulting SQL – I’ve left the nvl(expression, null) wrapper in place, but since the original lead()/lag() calls defaulted to returning null anyway you could simplify the code a little further.

create table tb_target 
as
select
        a1.pat_id pat_id,
        a1.visit_date_dt visit_date_dt,
        a1.ed_ucc_dt ed_ucc_dt,
        nvl(
                last_value(a1.ed_ucc_dt ignore nulls) over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between unbounded preceding and 1 preceding
                ),
                 null
        ) prev_ed_ucc_dt,
        nvl(
                first_value(a1.ed_ucc_dt ignore nulls) over (
                        partition by a1.pat_id order by a1.visit_date_dt
                        rows between 1 following and unbounded following
                ),
                null
        ) next_ed_ucc_dt,
        row_number() over (partition by a1.pat_id order by a1.visit_date_dt) row_num
from
        test_user.tb_source a1
/

  • Run-time: 0.61 seconds.
  • Run-time with just last_value() / preceding: 0.47 seconds
  • Run time with just first_value() / following: 0.43 seconds

There still seems to be a little discrepancy between accessing the preceding data and accessing the following data, but there’s a much better balance than before.

One more edit – taking out the nvl() construct because the original lead()/lag() calls didn’t have a non-null default supplied:

create table tb_target 
as
select
        a1.pat_id pat_id,
        a1.visit_date_dt visit_date_dt,
        a1.ed_ucc_dt ed_ucc_dt,
--
        last_value(a1.ed_ucc_dt ignore nulls) over (
                partition by a1.pat_id order by a1.visit_date_dt
                rows between unbounded preceding and 1 preceding
        ) prev_ed_ucc_dt,
--
        first_value(a1.ed_ucc_dt ignore nulls) over (
                partition by a1.pat_id order by a1.visit_date_dt
                rows between 1 following and unbounded following
        ) next_ed_ucc_dt,
--
        row_number() over (partition by a1.pat_id order by a1.visit_date_dt) row_num
from
        test_user.tb_source a1
/

With this simplification the time dropped by a further couple of hundredths of a second, hovering between 0.57 and 0.58 seconds.

There was one last detail about the test code that should be mentioned – do the changes in code still produce the same results? As a quick and dirty check I ran the following query after each variant of creating the tb_target table (tb_target_ignore is the table created using the original lead()/lag() code with “ignore nulls”):

select  *
from   (
        select * from tb_target minus select * from tb_target_ignore
        union all
        select * from tb_target_ignore minus select * from tb_target
        )
/

The result was always zero rows.

Hypothesis

I raised the idea that the difference in timing for the lead() and lag() functions might have something to do with the volume of data that Oracle could be processing to find the one row it needed.

My thinking was that the lead() rewrite – the call to nth_value(ed_ucc_dt,1) – would simply be looking at the next row in the partition (if it existed) because my data has no nulls that need to be ignored, while the rewrite of the lag() function with its “from last” requirement could be making Oracle re-read the entire preceding section of the partition before starting to process it backwards.

As a possible check to see if this was a viable hypothesis I ran one more test – visible in the initial declaration of tb_source – I created the data with ed_ucc_dt set to null in every single row, so that Oracle would be forced to process from the current position to whichever end of the partition was relevant regardless of whether it was calling lead() or lag().

With this change in place the timing for the lead() only and lag() only statements were nearly identical – which is a weak support for the hypothesis.

And once I’d done that test the next obvious test was to see what happened if I increased the size of each partition (using non-null values for ed_ucc_dt) to see if larger partitions would increase the difference between the forward and backward tests. To do this I changed the script to create the tb_source table to produce 5,000 pat_id values with 40 rows per pat_id by changing the where clause to:

where
        g2.id <= 40     --> comment to avoid wordpress format issue
and     g1.id <= 5e3    --> comment to avoid wordpress format issue

With this change in place the timings for the original form of the lead()/lag() statement were:

  • With both lead() and lag() in place the time was 1.05 seconds
  • With neither lead() nor lag() the time was 0.25 seconds
  • With just the lead() column the time was 0.41 seconds – an increase of 0.16 seconds
  • With just the lag() column the time was 0.98 seconds – an increase of 0.73 seconds

So the lag() time (ballpark figures) nearly doubles as the partition size doubles but the lead() time stays pretty much the same.

The results of these two tests do tend to suggest that the generic nth_value() implementation can do some short-circuiting when working “forwards”, using a mechanism that isn’t available when the “from last” clause requires it to work “backwards”.

Writing the previous paragraph prompted me to do one last test – it wouldn’t produce the same results, of course, but I ought to check the performance when I moved the “from last” clause out of the “prev”/lag() column expression into the “next”/lead() column expression in Oracle’s original translation to confirm that the problem was associated with the “from last” and not with the choice of “preceding” or “following” in the row range section of the over() clause. (It was the “from last” that made the difference.)

tl;dr

If you’re using the lag() or lead() functions with “ignore nulls” on a very large dataset you may find that you can rewrite the code with first_value() or last_value() calls that use less CPU. It’s probably only significant on fairly large data sets, and may be particularly noticeable for cases where the same over() clause is used many times.

The potential for excess CPU usage comes from the effect of a generic internal rewrite using the nth_value() function with the “from last” clause when your lead()/lag() use the special case of n = 1.

To get the correct rewrite you can use dbms_utility.expand_sql() to generate a suitable statement from which you can extract and edit the relevant pieces of text.

redefinition error

Wed, 2022-05-04 06:22

Here’s a note about a data error generated by using (possibly mis-using) the dbms_redefinition package. The original code to demonstrate the problem comes from a note on the Oracle Developer forum, and was brought to my attention by a tweet from Daniel Stein.

The critical feature of the demo is that we can end up with a column containing nulls despite being declared NOT NULL (and I don’t mean by that a simple “is not null” check constraint – which is not quite the same as a NOT NULL declaration).

Here’s the first part of a script, mostly copied from the forum post, that I’ve been running on 19.11.0.0:

rem
rem     Script:         redef_bug.sql
rem     Author:         Jonathan Lewis / Sebastian (User_6AT2M)
rem     Dated:          May 2022
rem
rem     Last tested 
rem             19.11.0.0
rem

create table test_nulls (
        id      number (8)      constraint tn_pk primary key,
        text    varchar2 (25)   constraint tn_nn_tx not null
)
/

create unique index i_test_nulls on test_nulls(text);

insert into test_nulls (id, text) 
select rownum, 'some text '||rownum from dual connect by level <= 50;

-- create an empty copy, without the constraints or indexes

create table test_nulls_interim (
        id      number (8),
        text    varchar2 (25)
);

begin
        dbms_redefinition.start_redef_table(
                uname           => user,
                orig_table      => 'test_nulls',
                int_table       => 'test_nulls_interim',
                col_mapping     => 'id id, substr(text, id, 2) text'
        );
end;
/

The script creates a table with a primary key declared on an id column, a not null declaration on a text column and a unique index on the text column, then populates the table with 50 rows that have the form (N, “some text N”) where N is a number between 1 and 50.

After creating an empty copy of the table with no constraints or indexes I start an online redefinition – modifying the content of the text column as part of the redefinition. If you check the col_mapping carefully you will realise that when id reaches 13 the result from the substr() function becomes null.
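
You can check that claim in isolation with a quick throwaway query (not part of the original script):

select
        rownum                                   id,
        'some text '||rownum                     text,
        substr('some text '||rownum, rownum, 2)  mapped_text
from
        dual
connect by
        level <= 15
/

For ids 13 to 15 the mapped_text column is null because the source string is only 12 characters long, so the substr() starting position falls past the end of the string.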

This is where the trouble starts. If I now call dbms_redefinition.copy_table_dependents() to add the original constraints and indexes to the interim table what’s Oracle going to do about the not null declaration on the text column?

declare
        error_ct pls_integer;
begin
        dbms_redefinition.copy_table_dependents(
                uname           => user,
                orig_table      => 'test_nulls',
                int_table       => 'test_nulls_interim',
                num_errors      => error_ct,
                copy_indexes    => 1,
                copy_constraints=> true,
                ignore_errors   => false,
--
                copy_triggers   => false,
                copy_privileges => false,
                copy_statistics => false,
                copy_mvlog      => false
        );  

        dbms_output.put_line('error count: '||error_ct);
end;
/

begin
        dbms_redefinition.finish_redef_table(user, 'test_nulls', 'test_nulls_interim');
end;
/

drop table TEST_NULLS_INTERIM purge;

I’ve exposed all the parameters to the copy_table_dependents() procedure call in my code, and you can see that I’ve chosen to copy only the constraints and indexes, and I don’t want to ignore errors.

The PL/SQL anonymous block terminates successfully, doesn’t report any errors, and outputs an error count of zero. So let’s see what we’ve got as the final result of the redefinition.

column search_condition_vc format a20
break on table_name skip 1 on index_name
set echo on

select
         table_name, index_name, column_name 
from 
        user_ind_columns 
order by 
        1,2,column_position
/

desc test_nulls

select count(*) from test_nulls;
select /*+ full(test_nulls) */ count(*) from test_nulls;

select
        constraint_name,
        constraint_type,
        search_condition_vc,
        status,
        deferrable,
        deferred,
        validated
from
        user_constraints
where
        table_name = 'TEST_NULLS'
/

set echo off

Here are the results – my login.sql has a lot of column format commands so your results may look a lot messier if you run this bit of code. Here’s the echoed output:

SQL> select
  2           table_name, index_name, column_name
  3  from
  4          user_ind_columns
  5  order by
  6          1,2,column_position
  7  /

TABLE_NAME                INDEX_NAME           COLUMN_NAME
------------------------- -------------------- --------------------
TEST_NULLS                I_TEST_NULLS         TEXT
                          TN_PK                ID


2 rows selected.

SQL> 
SQL> desc test_nulls
 Name                                                                     Null?    Type
 ------------------------------------------------------------------------ -------- -------------------------------------------------
 ID                                                                       NOT NULL NUMBER(8)
 TEXT                                                                     NOT NULL VARCHAR2(25)

SQL> 
SQL> select count(*) from test_nulls;

  COUNT(*)
----------
        12

1 row selected.

SQL> select /*+ full(test_nulls) */ count(*) from test_nulls;

  COUNT(*)
----------
        50

1 row selected.

SQL> 
SQL> select
  2          constraint_name,
  3          constraint_type,
  4          search_condition_vc,
  5          status,
  6          deferrable,
  7          deferred,
  8          validated
  9  from
 10          user_constraints
 11  where
 12          table_name = 'TEST_NULLS'
 13  /

CONSTRAINT_NAME      C SEARCH_CONDITION_VC  STATUS   DEFERRABLE     DEFERRED  VALIDATED
-------------------- - -------------------- -------- -------------- --------- -------------
TN_NN_TX             C "TEXT" IS NOT NULL   ENABLED  NOT DEFERRABLE IMMEDIATE VALIDATED
TN_PK                P                      ENABLED  NOT DEFERRABLE IMMEDIATE VALIDATED

Both indexes have arrived; both columns have NOT NULL declarations.

When you count the number of rows in the table it’s only 12 – unless you force a full tablescan in which case it’s 50. This happens because the NOT NULL declaration of column text allows the optimizer to use an index-only plan on the index i_test_nulls and there are 38 nulls in the table that don’t appear in the index.
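
If you want to confirm the access paths, a sketch along the following lines (calling dbms_xplan.display_cursor() after each count) will show which plan each query used:

set serveroutput off

select count(*) from test_nulls;
select * from table(dbms_xplan.display_cursor(format => 'basic'));

select /*+ full(test_nulls) */ count(*) from test_nulls;
select * from table(dbms_xplan.display_cursor(format => 'basic'));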

The check on user_constraints shows that both the primary key constraint and the “is not null” check constraint are enabled and validated.

Conclusion

There is code in the copy_table_dependents() procedure that is defective, deficient and dangerous. Clearly I’ve done something that I probably shouldn’t have done (and, historically, I wouldn’t have done) but the code should still have protected me from an error that leaves the database in a state that is not internally consistent and can produce incorrect results.

Notes

If you want to repeat this test and try some variations on the theme you will need a few non-trivial privileges (which can be granted through a role). For the basic dbms_redefinition package you will need: execute on dbms_redefinition, select any table, create any table, alter any table, lock any table, drop any table; and to execute the copy_table_dependents() procedure you will also need create any index, create any trigger.
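
As a sketch (the role name is invented, and the grant on the package has to come from a suitably privileged account):

create role redef_admin;

grant execute on dbms_redefinition to redef_admin;

grant
        select any table, create any table, alter any table,
        lock any table,   drop any table,
        create any index, create any trigger
to
        redef_admin
/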

The copy_table_dependents() procedure appeared (I think) in the 10g time-line; prior to that you had to sort out all the constraints and dependencies “by hand” – which you would probably manage online through the sync_interim_table() procedure (viz: “create an index, sync interim table, create next index, sync, etc.”) before calling the finish_redef_table() procedure. That being the case, my immediate response to this issue was that if you don’t want the not null declaration on text then you have to exclude the copy_constraints option when copying the table dependents; if you did want the not null declaration then you should have included it in the initial definition of the interim table, in which case the start_redef_table() call would have failed, raising:

ORA-12008: error in materialized view or zonemap refresh path
ORA-01400: cannot insert NULL into ("{schema}"."TEST_NULLS_INTERIM"."TEXT")
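
In other words, if the not null declaration was wanted, the interim table should have been declared along these lines (sketch), at which point the problem would have surfaced at the start of the redefinition instead of slipping through silently:

create table test_nulls_interim (
        id      number (8),
        text    varchar2 (25)   not null
);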

Adaptive Joins

Wed, 2022-04-13 07:53

There’s a question on the Oracle Forums at the moment about a query that’s taking a long time to parse. Even after being reduced to one prebuilt (currently remote) table with two non-correlated outer joins to it the parse time is several hundred seconds. This seems fairly bizarre – I have seen some very long parse times from Oracle, but 10 minutes for 3 tables is well over the top; it did remind me, though, of a note I started a few years ago about a 4-table join taking 4 seconds to parse, so I thought I’d present the query, the plan, and a little chat on debugging. Here’s the query:

select
        /*+ parallel(t4,3) */
        t1.small_vc,
        t2.small_vc,
        t3.small_vc,
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
and     t1.small_vc in (1,2,3)
and     t2.small_vc in (1,2,3,4)
and     t3.small_vc in (1,2,3,4,5)
group by
        t1.small_vc,
        t2.small_vc,
        t3.small_vc
;

I’m expecting a simple cascade of hash joins, with t1, t2 and t3 – the “small” tables – turning into “build” tables, then t4 – the “large” table – passing through each of them in turn until the penultimate rowsource is aggregated.

Here’s the execution plan, which looks pretty much as I expected it to, but there’s something wrong about it that isn’t visible in the output: why is the query (plan) saying it took 0.07 seconds to complete (A-time), returning only 60 rows, when my SQL*Plus session didn’t return any data for 4 seconds?

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name     | Starts | E-Rows | Cost (%CPU)| E-Time   |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |          |      1 |        |   300 (100)|          |        |      |            |     60 |00:00:00.07 |       5 |      0 |       |       |          |
|   1 |  PX COORDINATOR                |          |      1 |        |            |          |        |      |            |     60 |00:00:00.07 |       5 |      0 | 73728 | 73728 |          |
|   2 |   PX SEND QC (RANDOM)          | :TQ10004 |      0 |      1 |   300   (4)| 00:00:01 |  Q1,04 | P->S | QC (RAND)  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   3 |    HASH GROUP BY               |          |      3 |      1 |   300   (4)| 00:00:01 |  Q1,04 | PCWP |            |     60 |00:00:00.01 |       0 |      0 |  1394K|  1394K|     3/0/0|
|   4 |     PX RECEIVE                 |          |      3 |      1 |   300   (4)| 00:00:01 |  Q1,04 | PCWP |            |    180 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      PX SEND HASH              | :TQ10003 |      0 |      1 |   300   (4)| 00:00:01 |  Q1,03 | P->P | HASH       |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|   6 |       HASH GROUP BY            |          |      3 |      1 |   300   (4)| 00:00:01 |  Q1,03 | PCWP |            |    180 |00:00:00.14 |    6114 |   6018 |  1394K|  1394K|     3/0/0|
|*  7 |        HASH JOIN               |          |      3 |   8460 |   299   (4)| 00:00:01 |  Q1,03 | PCWP |            |  12600 |00:00:00.14 |    6114 |   6018 |  2171K|  2171K|     3/0/0|
|   8 |         JOIN FILTER CREATE     | :BF0000  |      3 |     22 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     75 |00:00:00.02 |       0 |      0 |       |       |          |
|   9 |          PX RECEIVE            |          |      3 |     22 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     75 |00:00:00.02 |       0 |      0 |       |       |          |
|  10 |           PX SEND BROADCAST    | :TQ10000 |      0 |     22 |     2   (0)| 00:00:01 |  Q1,00 | S->P | BROADCAST  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  11 |            PX SELECTOR         |          |      3 |        |            |          |  Q1,00 | SCWC |            |     25 |00:00:00.01 |       3 |      0 |       |       |          |
|* 12 |             TABLE ACCESS FULL  | T3       |      1 |     22 |     2   (0)| 00:00:01 |  Q1,00 | SCWP |            |     25 |00:00:00.01 |       3 |      0 |       |       |          |
|* 13 |         HASH JOIN              |          |      3 |  27322 |   297   (4)| 00:00:01 |  Q1,03 | PCWP |            |  12600 |00:00:00.10 |    6114 |   6018 |  2171K|  2171K|     3/0/0|
|  14 |          JOIN FILTER CREATE    | :BF0001  |      3 |     21 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     72 |00:00:00.01 |       0 |      0 |       |       |          |
|  15 |           PX RECEIVE           |          |      3 |     21 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     72 |00:00:00.01 |       0 |      0 |       |       |          |
|  16 |            PX SEND BROADCAST   | :TQ10001 |      0 |     21 |     2   (0)| 00:00:01 |  Q1,01 | S->P | BROADCAST  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  17 |             PX SELECTOR        |          |      3 |        |            |          |  Q1,01 | SCWC |            |     24 |00:00:00.01 |       3 |      0 |       |       |          |
|* 18 |              TABLE ACCESS FULL | T2       |      1 |     21 |     2   (0)| 00:00:01 |  Q1,01 | SCWP |            |     24 |00:00:00.01 |       3 |      0 |       |       |          |
|* 19 |          HASH JOIN             |          |      3 |  92953 |   294   (3)| 00:00:01 |  Q1,03 | PCWP |            |  12600 |00:00:00.10 |    6114 |   6018 |  2171K|  2171K|     3/0/0|
|  20 |           JOIN FILTER CREATE   | :BF0002  |      3 |     19 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     63 |00:00:00.01 |       0 |      0 |       |       |          |
|  21 |            PX RECEIVE          |          |      3 |     19 |     2   (0)| 00:00:01 |  Q1,03 | PCWP |            |     63 |00:00:00.01 |       0 |      0 |       |       |          |
|  22 |             PX SEND BROADCAST  | :TQ10002 |      0 |     19 |     2   (0)| 00:00:01 |  Q1,02 | S->P | BROADCAST  |      0 |00:00:00.01 |       0 |      0 |       |       |          |
|  23 |              PX SELECTOR       |          |      3 |        |            |          |  Q1,02 | SCWC |            |     21 |00:00:00.01 |       3 |      0 |       |       |          |
|* 24 |               TABLE ACCESS FULL| T1       |      1 |     19 |     2   (0)| 00:00:01 |  Q1,02 | SCWP |            |     21 |00:00:00.01 |       3 |      0 |       |       |          |
|  25 |           JOIN FILTER USE      | :BF0000  |      3 |    343K|   289   (2)| 00:00:01 |  Q1,03 | PCWP |            |  19517 |00:00:00.09 |    6114 |   6018 |       |       |          |
|  26 |            JOIN FILTER USE     | :BF0001  |      3 |    343K|   289   (2)| 00:00:01 |  Q1,03 | PCWP |            |  19517 |00:00:00.09 |    6114 |   6018 |       |       |          |
|  27 |             JOIN FILTER USE    | :BF0002  |      3 |    343K|   289   (2)| 00:00:01 |  Q1,03 | PCWP |            |  19517 |00:00:00.09 |    6114 |   6018 |       |       |          |
|  28 |              PX BLOCK ITERATOR |          |      3 |    343K|   289   (2)| 00:00:01 |  Q1,03 | PCWC |            |  19517 |00:00:00.09 |    6114 |   6018 |       |       |          |
|* 29 |               TABLE ACCESS FULL| T4       |     48 |    343K|   289   (2)| 00:00:01 |  Q1,03 | PCWP |            |  19517 |00:00:00.06 |    6114 |   6018 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$1
  12 - SEL$1 / T3@SEL$1
  18 - SEL$1 / T2@SEL$1
  24 - SEL$1 / T1@SEL$1
  29 - SEL$1 / T4@SEL$1

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      DB_VERSION('19.1.0')
      OPT_PARAM('_fix_control' '16923858:5')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      FULL(@"SEL$1" "T1"@"SEL$1")
      FULL(@"SEL$1" "T4"@"SEL$1")
      FULL(@"SEL$1" "T2"@"SEL$1")
      FULL(@"SEL$1" "T3"@"SEL$1")
      LEADING(@"SEL$1" "T1"@"SEL$1" "T4"@"SEL$1" "T2"@"SEL$1" "T3"@"SEL$1")
      USE_HASH(@"SEL$1" "T4"@"SEL$1")
      USE_HASH(@"SEL$1" "T2"@"SEL$1")
      USE_HASH(@"SEL$1" "T3"@"SEL$1")
      PQ_DISTRIBUTE(@"SEL$1" "T4"@"SEL$1" BROADCAST NONE)
      PX_JOIN_FILTER(@"SEL$1" "T4"@"SEL$1")
      PQ_DISTRIBUTE(@"SEL$1" "T2"@"SEL$1" NONE BROADCAST)
      PX_JOIN_FILTER(@"SEL$1" "T2"@"SEL$1")
      PQ_DISTRIBUTE(@"SEL$1" "T3"@"SEL$1" NONE BROADCAST)
      PX_JOIN_FILTER(@"SEL$1" "T3"@"SEL$1")
      SWAP_JOIN_INPUTS(@"SEL$1" "T2"@"SEL$1")
      SWAP_JOIN_INPUTS(@"SEL$1" "T3"@"SEL$1")
      GBY_PUSHDOWN(@"SEL$1")
      USE_HASH_AGGREGATION(@"SEL$1")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   7 - access("T3"."ID"="T4"."ID3")
  12 - filter((TO_NUMBER("T3"."SMALL_VC")=1 OR TO_NUMBER("T3"."SMALL_VC")=2 OR TO_NUMBER("T3"."SMALL_VC")=3 OR TO_NUMBER("T3"."SMALL_VC")=4 OR TO_NUMBER("T3"."SMALL_VC")=5))
  13 - access("T2"."ID"="T4"."ID2")
  18 - filter((TO_NUMBER("T2"."SMALL_VC")=1 OR TO_NUMBER("T2"."SMALL_VC")=2 OR TO_NUMBER("T2"."SMALL_VC")=3 OR TO_NUMBER("T2"."SMALL_VC")=4))
  19 - access("T1"."ID"="T4"."ID1")
  24 - filter((TO_NUMBER("T1"."SMALL_VC")=1 OR TO_NUMBER("T1"."SMALL_VC")=2 OR TO_NUMBER("T1"."SMALL_VC")=3))
  29 - access(:Z>=:Z AND :Z<=:Z)
       filter(SYS_OP_BLOOM_FILTER_LIST(SYS_OP_BLOOM_FILTER(:BF0002,"T4"."ID1"),SYS_OP_BLOOM_FILTER(:BF0001,"T4"."ID2"),SYS_OP_BLOOM_FILTER(:BF0000,"T4"."ID3")))


One possible trouble-shooting step is simply to re-run the query, taking a snapshot of the session activity stats (v$mystat) and the session events (v$session_event) to see what they tell you (if anything) – here’s the critical extract from the session stats:

Name                                      Value
----                                      -----
CPU used when call started                  423
CPU used by this session                    429
parse time cpu                              411
parse time elapsed                          419

Most of the time is parse time, spent on the CPU. (If this had been a much larger scale problem and had occurred in the recent past I might have looked at ASH (v$active_session_history) for any samples for the relevant SQL_ID, and seen the problem revealed in the in_parse and in_hard_parse columns.)
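
For reference, figures like these come from a simple join between v$statname and v$mystat (a sketch – take a snapshot before and after the call and report the difference):

select
        sn.name, ms.value
from
        v$statname      sn,
        v$mystat        ms
where
        ms.statistic# = sn.statistic#
and     (   sn.name like 'parse time%'
         or sn.name like 'CPU used%'
        )
order by
        sn.name
/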

So let’s enable event 10053 and run the query again – but since it’s “only” 4 seconds, let’s tweak the timer option to report any step that took longer than 0.1 seconds. The default timer setting is a whole second (10^6 microseconds), so we set the fix-control to 5 to get 0.1 seconds (10^5 microseconds).

alter session set "_fix_control"='16923858:5';
alter session set events '10053 trace name context forever';

-- run the query, find the trace file

 grep TIMER or19_ora_23370.trc 

Here’s the output from the call to grep: it looks like group by placement (GBP) is causing a problem.

TIMER:  GBP: costing SEL$1 cpu: 0.303 sec elapsed: 0.309 sec
TIMER: GBP: iteration (#1) SEL$1 cpu: 0.303 sec elapsed: 0.309 sec
TIMER:  GBP: costing SEL$1565E019 cpu: 0.293 sec elapsed: 0.298 sec
TIMER: GBP: iteration (#2) SEL$1565E019 cpu: 0.294 sec elapsed: 0.299 sec
TIMER:  GBP: costing SEL$23EAFE84 cpu: 0.528 sec elapsed: 0.533 sec
TIMER: GBP: iteration (#3) SEL$23EAFE84 cpu: 0.528 sec elapsed: 0.533 sec
TIMER:  GBP: costing SEL$B5D97CA0 cpu: 0.533 sec elapsed: 0.540 sec
TIMER: GBP: iteration (#4) SEL$B5D97CA0 cpu: 0.534 sec elapsed: 0.540 sec
TIMER:  GBP: costing SEL$6C9B46B6 cpu: 0.531 sec elapsed: 0.531 sec
TIMER: GBP: iteration (#5) SEL$6C9B46B6 cpu: 0.531 sec elapsed: 0.532 sec
TIMER:  GBP: costing SEL$ED1298E3 cpu: 0.522 sec elapsed: 0.523 sec
TIMER: GBP: iteration (#8) SEL$ED1298E3 cpu: 0.523 sec elapsed: 0.524 sec
TIMER:  GBP: costing SEL$5968095A cpu: 0.523 sec elapsed: 0.523 sec
TIMER: GBP: iteration (#9) SEL$5968095A cpu: 0.524 sec elapsed: 0.523 sec
TIMER:  GBP: costing SEL$4CA81688 cpu: 0.525 sec elapsed: 0.527 sec
TIMER: GBP: iteration (#12) SEL$4CA81688 cpu: 0.526 sec elapsed: 0.528 sec
TIMER: Group-By Placement SEL$1 cpu: 3.766 sec elapsed: 3.793 sec
TIMER: Cost-Based Transformations (Overall) SEL$1 cpu: 3.769 sec elapsed: 3.795 sec
TIMER: Access Path Analysis (Final) SEL$1 cpu: 0.288 sec elapsed: 0.289 sec
TIMER: SQL Optimization (Overall) SEL$1 cpu: 4.072 sec elapsed: 4.108 sec

If you check further up the page, though, you'll see in the Outline Information that Oracle has not used group by placement (it has done a "group by pushdown", but that's different – it relates to aggregation in parallel execution). So one quick hack we could try is to add the hint /*+ no_place_group_by(@sel$1) */ to the query just to see what happens – and here's the effect on the parse time:

Name                                      Value
----                                      -----
parse time cpu                               33
parse time elapsed                           34

Problem solved – provided we can get the hint into the code (by hand, or by SQL patch, etc. – there's a sketch of the SQL patch approach a little further down). But the question still remains: where did the time go? The trace file was fairly long (375,000 lines for the original, compared to 32,000 for the hinted) but a rapid scan seemed in order – and something very quickly caught my attention. It was pretty easy to spot because something big and nasty had happened 8 times.

The answer was in “Adaptive Plans”, which (mostly) get flagged with the label “AP:” in the 10053 trace file, for example:

AP: Computing costs for inflection point at min value 0.00
AP: Using binary search for inflection point search
AP: Costing Join for ADM inflection point at card 0.00
AP: Costing Join for ADM inflection point at card 0.00

You can see here that the optimizer is searching for an "inflection point" – which means it's "thinking about" an adaptive join, and searching for the number of rows where a switch between a nested loop join and a hash join makes sense.

Notice particularly the comment about “using binary search”. After calculating the cost of the “best” join using the current estimates of cardinality for the incoming rowsource the optimizer starts calculating the possible costs (nested loop or hash, it ignores merge) for a set of alternative cardinalities until it finds the cardinality where the nested loop join and hash join have the same cost. That’s the number of rows that Oracle will use at run time to decide whether it should switch from its initial selection of join method to the alternative.

If the initial join selection was a hash join (i.e. the large volume strategy) Oracle will use an alternative cardinality of 1 to start its list of estimates – on the assumption that that would produce a nested loop plan – and then keep doubling the estimate until the resulting plan switches to a hash join, then close in on the break point by halving and doubling the remaining gap between the NL estimate and the HJ estimate.

If the initial join selection was a nested loop join Oracle will use a "worst case scenario" for the incoming estimate (acting, broadly speaking, as if every filter at that point had had a selectivity of 1 – i.e. 100% of the available data), and start by halving the estimate. This is fine when the starting estimate is (for example) for the first table in the join order and the estimate is just a few million rows.
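
As a purely illustrative sketch of the first strategy (doubling until the plan switches, then bisecting the remaining gap) here are a few lines of PL/SQL – the crossover cardinality and the cost test are, of course, invented for the demonstration:

declare
        -- invented for illustration: the cardinality at which the
        -- hash join becomes cheaper than the nested loop
        crossover       constant number := 12345;

        lo              number := 1;    -- highest cardinality known to give a NL plan
        hi              number := 1;    -- current probe value
        mid             number;
        n_costings      pls_integer := 0;

        function hj_cheaper(card in number) return boolean is
        begin
                return card >= crossover;
        end;
begin
        -- phase 1: keep doubling until the preferred plan switches to a hash join
        while not hj_cheaper(hi) loop
                lo := hi;
                hi := hi * 2;
                n_costings := n_costings + 1;
        end loop;

        -- phase 2: binary chop between the last NL estimate and the first HJ estimate
        while hi - lo > 1 loop
                mid := trunc((lo + hi) / 2);
                if hj_cheaper(mid) then
                        hi := mid;
                else
                        lo := mid;
                end if;
                n_costings := n_costings + 1;
        end loop;

        dbms_output.put_line('Inflection point: ' || hi || ' after ' || n_costings || ' costings');
end;
/

With serveroutput on this reports an inflection point of 12,345 after 27 costings – and you can imagine how the second strategy behaves when it has to halve its way down from an estimate of 1.79e308.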

Here’s the first estimate in my case (you’ll need a wide screen, or very small print):

6772:AP: Computing costs for inflection point at max value 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00 

After you’ve halved that number about 1,000 times you’re down to values in the region of a few thousand. And when you have to cost a couple of joins every time you halve, and when you’ve gone through the whole process 8 times that’s a lot of CPU.

In fact, with the no_place_group_by hint there was still one point where the optimizer did this adaptive join work – which probably accounts for most of the final 0.33 CPU seconds – but it didn’t show up in any separately reported timer events.

Of course the final little test of this parse time hypothesis is to add the hint /*+ no_adaptive_plan */ – so I did that, and the parse time was still about 0.3 seconds! Investigation of the 10053 trace showed that even with the hint in place the optimizer still went through that one huge binary chop – but when it had worked out the inflection point it printed the message:

AP: Adaptive plans bypassed for query block SEL$1 due to disabled by hint (no_adaptive_plan)
AP: Adaptive joins bypassed for query block SEL$1 due to adaptive plans disabled

According to the hint report the hint was valid, however, so that behaviour looks a little buggy. It then occurred to me that maybe I could have hinted /*+ no_adaptive_plan(@sel$1) */ – and that worked properly with the trace reporting:

AP: Checking validity for query block SEL$1, sqlid=7fjtvwktcmsgq
AP: Adaptive plans bypassed for query block SEL$1 due to disabled by hint (no_adaptive_plan)
AP: Adaptive joins bypassed for query block SEL$1 due to adaptive plans disabled

Had I not realised that this was possible I would have fallen back on the hint /*+ opt_param() */ to set the parameter optimizer_adaptive_plans to false for the query, in which case the following message (and no other "AP:" message) appeared near the start of the trace:

AP: Adaptive joins bypassed for query block SEL$1 due to disabled by adaptive join parameter
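
As an aside – if you can't easily edit the hint into the source code, a SQL patch is one way of attaching it externally. Here's a sketch, assuming 12.2 or later where dbms_sqldiag.create_sql_patch is a documented function returning the patch name; the sql_id is the one reported in the trace extract above, and the patch name is hypothetical:

declare
        v_patch varchar2(128);
begin
        v_patch := dbms_sqldiag.create_sql_patch(
                sql_id          => '7fjtvwktcmsgq',
                hint_text       => 'no_place_group_by(@sel$1)',
                name            => 'gbp_bypass_patch',
                description     => 'bypass group by placement for this query'
        );
        dbms_output.put_line('Created patch: ' || v_patch);
end;
/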

If you want to experiment on your own system here’s the script to create the data – the script name reflects the fact that I found this example by accident while working on something completely different:

rem
rem     Script:         12c_vector_transform_c.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2015
rem     Purpose:        
rem
rem     Last tested 
rem             19.11.0.0
rem             12.2.0.1
rem             12.1.0.2

drop table t1;
drop table t2;
drop table t3;
drop table t4;
purge recyclebin;

create table t1 
as
select
        rownum          id,
        to_char(mod(rownum,10)) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t1 
        add constraint t1_pk primary key(id)
;

create table t2
as
select
        rownum          id,
        to_char(mod(rownum,12)) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t2
        add constraint t2_pk primary key(id)
;

create table t3
as
select
        rownum          id,
        to_char(mod(rownum,14)) small_vc,
        rpad('x',100)   padding
from
        all_objects
where
        rownum <= 70
;

alter table t3
        add constraint t3_pk primary key(id)
;

create table t4
nologging
as
select
        t1.id                   id1,
        t2.id                   id2,
        t3.id                   id3,
        rpad(rownum,10)         small_vc,
        rpad('x',100)           padding
from
        t1, t2, t3
;

spool 12c_vector_transform_c

set timing on
set linesize 255
set trimspool on
set pagesize 90
set arraysize 100
set serveroutput off

execute snap_my_stats.start_snap

alter session set statistics_level = all;

alter session set "_fix_control"='16923858:5';
alter session set events '10053 trace name context forever';

select
        /*+ 
--              opt_param('optimizer_adaptive_plans' 'false')
--              no_adaptive_plan
--              no_adaptive_plan(@sel$1)
--              no_place_group_by(@sel$1)
                parallel(t4,3)
        */
        t1.small_vc,
        t2.small_vc,
        t3.small_vc,
        count(t4.small_vc)
from
        t4,     
        t1,     
        t2,     
        t3
where
        t1.id = t4.id1
and     t2.id = t4.id2
and     t3.id = t4.id3
and     t1.small_vc in (1,2,3)
and     t2.small_vc in (1,2,3,4)
and     t3.small_vc in (1,2,3,4,5)
group by
        t1.small_vc,
        t2.small_vc,
        t3.small_vc
;

select * from table(dbms_xplan.display_cursor(null,null,'cost outline allstats all note -bytes'));

alter session set events '10053 trace name context off';

set serveroutput on
execute snap_my_stats.end_snap

spool off

Index Upgrade

Mon, 2022-04-11 10:55

Sometimes wishes come true and in 19c – with fix_control QKSFM_DBMS_STATS_27268249 – one of mine did. The description of this fix (which is enabled by default) is: “use approximate ndv for computing leaf blocks and distinct keys”.

Here’s a key item in the output file from running tkprof against the trace file generated by a simple call to:

execute dbms_stats.gather_index_stats(user,'t1_i2')

The index is a two-column index on t1(x1, x2) with a size of roughly 16,000 blocks on a table of approximately 6 million rows.

select /*+ opt_param('_optimizer_use_auto_indexes' 'on')
  no_parallel_index(t, "T1_I2")  dbms_stats cursor_sharing_exact
  use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl
  opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad  no_expand
  index_ffs(t,"T1_I2") */ count(*) as nrw,
  approx_count_distinct(sys_op_lbid(106818,'L',t.rowid)) as nlb,
  approx_count_distinct(sys_op_combined_hash("X1","X2")) as ndk,
  sys_op_countchg(substrb(t.rowid,1,15),1) as clf
from
 "TEST_USER"."T1" t where "X1" is not null or "X2" is not null

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT AGGREGATE APPROX (cr=15821 pr=0 pw=0 time=2812116 us starts=1)
   6018750    6018750    6018750   INDEX FAST FULL SCAN T1_I2 (cr=15821 pr=0 pw=0 time=894658 us starts=1 cost=2117 size=192000000 card=6000000)(object id 106818)


The first point of interest is the appearance of the approx_count_distinct() function calls used for the nlb (number of leaf blocks) and ndk (number of distinct keys) columns. It's also worth noting that the ndk value is derived from a call to sys_op_combined_hash() applied to the two base columns, which means the number of distinct keys for a multi-column index is calculated in exactly the same way as the number of distinct values for a column group.
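
As a quick cross-check of that claim you could create the matching column group and compare its num_distinct with the index's distinct_keys – a sketch, assuming the t1(x1, x2) index from this test:

declare
        cg_name varchar2(128);
begin
        -- create the (x1,x2) column group; gathering stats on it uses
        -- the same sys_op_combined_hash() mechanism as the index ndk
        cg_name := dbms_stats.create_extended_stats(user, 't1', '(x1,x2)');
        dbms_output.put_line('Column group: ' || cg_name);

        dbms_stats.gather_table_stats(
                ownname         => user,
                tabname         => 't1',
                method_opt      => 'for columns (x1,x2) size 1'
        );
end;
/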

There are two more important details though: first, that the mechanism uses a fast full scan of the whole index; secondly, that the size of this index is about 16,000 blocks.

A final (unrelated) point is the little reminder in the hints that 19c includes an automatic indexing mechanism. It’s easy to forget such things when your overnight batch job takes longer than usual.

For comparison purposes, the following shows the effect of disabling the feature:

alter session set "_fix_control"='27268249:0';


select /*+ opt_param('_optimizer_use_auto_indexes' 'on')
  no_parallel_index(t, "T1_I2")  dbms_stats cursor_sharing_exact
  use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl
  opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad  no_expand
  index_ffs(t,"T1_I2") */ count(*) as nrw,count(distinct sys_op_lbid(106818,
  'L',t.rowid)) as nlb,count(distinct hextoraw(sys_op_descend("X1")
  ||sys_op_descend("X2"))) as ndk,sys_op_countchg(substrb(t.rowid,1,15),1) as
  clf
from
 "TEST_USER"."T1" sample block (  7.0114135742,1)  t where "X1" is not null
  or "X2" is not null

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  SORT GROUP BY (cr=1132 pr=0 pw=0 time=460459 us starts=1)
    421761     421761     421761   INDEX SAMPLE FAST FULL SCAN T1_I2 (cr=1132 pr=0 pw=0 time=67203 us starts=1 cost=150 size=8413700 card=420685)(object id 106818)

The calculations for nlb and ndk are simple count()s, and the thing that ndk counts is a messy concatenation of the columns – hextoraw(sys_op_descend("X1") || sys_op_descend("X2")) – that Oracle has used to ensure that counts for values like 'AB'||'CD' and 'ABC'||'D' don't get combined.

Perhaps most significantly for some people is that the execution plan shows us that the old mechanism used an index SAMPLE fast full scan, analyzing (a fairly typical) 1,132 blocks out of 16,000 and roughly 420,000 rows out of 6 million. The new full scan looks a bit of a threat by comparison, of course; but there may be a few critical indexes where this extra workload will stop random variations in execution plans when it really matters.

As with so many details of Oracle there are likely to be cases where the new method is hugely beneficial, and some where it’s a nuisance, so it’s good to know that you can be a little selective about when it gets used.

Footnote

Don't forget that it's a good idea to think about setting the table preference "table_cached_blocks" to allow Oracle to produce a better value for the clustering_factor. This is another mechanism that increases the CPU required to gather index stats.
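
Setting the preference is a one-liner – a sketch with a hypothetical value of 16 (the "right" value depends on your system):

begin
        dbms_stats.set_table_prefs(
                ownname => user,
                tabname => 'T1',
                pname   => 'table_cached_blocks',
                pvalue  => '16'         -- hypothetical: table blocks assumed cached
        );
end;
/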

It's an odd little detail that the fix control appeared in 19.3.0.0 according to my records of v$system_fix_control, and certainly wasn't in 18.3.0.0 – but the entry in the view thinks that it was available from Oracle 8.0.0.0.
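
If you want to check the state of the fix control on your own system, something like the following will do – the optimizer_feature_enable column is where the view reports that "available from" version:

select
        bugno, value, optimizer_feature_enable, description
from
        v$system_fix_control
where
        bugno = 27268249
;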

Parallel Inactivity

Thu, 2022-04-07 11:08

This is a simple script that I wrote more than 20 years ago (for 8.1.7.4 apparently) to get the session wait event figures from parallel query slaves as they were running. I'm posting it now because there's a question on the Oracle Database Forum where it would be useful as a way of checking for performance problems in a long-running CTAS.

Oracle gives us the view v$px_session to link together the query co-ordinator with the parallel query slaves, and it gives us v$px_sesstat to report the linked session activity stats, but it doesn't give us a view to link together the session events for the multiple sessions, so this query simply joins v$px_session to v$session_event to produce that result:

rem
rem     Script:         px_waits.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2001
rem     Purpose:        Report events for PX execution
rem
rem     Last tested:
rem             19.3.0.0
rem
rem     Notes:
rem     Note that the co-ordinator can be on a different instance
rem     in parallel server systems.  To cater for an oddity of the
rem     qcinst report, we need the current instance number in case
rem     the QC is on the current machine.
rem
rem     If the co-ordinator is present, then it appears last on the list
rem     for a server group.
rem

column instance_number new_value m_inst
select instance_number from v$instance;

set linesize 120
set pagesize 60
set trimspool on
set tab off
set verify off

spool px_waits

break -
        on qcsid skip 1 -
        on server_group -
        on degree -
        on server_set -
        on sid -
        on server# skip 1

column event format a32

column  qcsid           format a6       heading 'Coord'
column  server_group    format 999      heading 'Grp'
column  degree          format a5       heading 'Deg'   noprint
column  server_set      format 999      heading 'Set'   noprint
column  server#         format 999      heading 'Sno'
column  sid             format 9999     heading 'SID'
column  name            format a32      heading 'Statistic'
column  value           format 99,999,999       heading 'value'

select
        ss.qcsid || '/' || nvl(ss.qcinst_id,&m_inst)            qcsid,
        ss.server_group,
        decode(degree,
                null,null,
                ss.degree || '/' || ss.req_degree
        )       degree,
        ss.server_set,
        ss.server#,
        ss.sid,
        se.event,
        se.total_waits,
        se.time_waited
from
        v$px_session            ss,
        v$session_event         se
where
        se.sid = ss.sid
order by
        ss.qcsid,
        ss.server_group,
        ss.server_set,
        ss.server#,
        se.event
;

clear columns
clear breaks
set verify on

spool off

See also: Parallel Activity for the session activity stats report.

Parallel Activity

Thu, 2022-04-07 10:49

This is a simple script that I wrote more than 20 years ago (for 8.1.7.4 apparently) to get the session activity from parallel query slaves as they were running. I'm posting it now because there's a question on the Oracle Database Forum where it would be useful as a way of checking for a skewed data distribution in a long-running query.

Oracle gives us the view v$px_sesstat, which connects PX slaves with their query co-ordinator (QC) and reports the session activity for all of them. The view only reports statistic numbers, though, so this little query joins the view to v$statname to report the names. It eliminates stats where the value is zero, and orders by QC (though you could add a line to restrict the query to a single QC – see the note after the script), parallel server group, then process, then statistic number.

rem
rem     Script:         px_stats.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2001
rem     Purpose:        Report statistics for PX slaves on the fly
rem
rem     Last tested
rem             19.3.0.0
rem     Notes:
rem     Note that the co-ordinator can be on a different instance
rem     in parallel server systems.  To cater for an oddity of the
rem     qcinst report, we need the current instance number in case
rem     the QC is on the current machine.
rem
rem     If the co-ordinator is present, then it appears last on the list
rem     for a server group.
rem
rem     This code really needs to be enhanced to do a proper job on OPS/RAC
rem     by accessing gv$px_sesstat, and handling sids and instances correctly
rem
rem     The user running this query has to have SELECT privileges 
rem     on the views v$instance, v$statname and v$px_sesstat
rem

column instance_number new_value m_inst
select instance_number from v$instance;

spool px_stats

set tab off
set pagesize 60
set linesize 156
set trimspool on
set verify off

break -
        on qcsid skip 1 -
        on server_group -
        on degree -
        on server_set -
        on sid -
        on server# skip 1


column  qcsid           format a6               heading 'Coord'
column  server_group    format 999              heading 'Grp'
column  degree          format a5               heading 'Deg'
column  server_set      format 999              heading 'Set'
column  server#         format 999              heading 'Sno'
column  sid             format 9999             heading 'SID'
column  name            format a52              heading 'Statistic'
column  value           format 99,999,999,999   heading 'value'


select
        st.qcsid || '/' || nvl(st.qcinst_id,&m_inst)            qcsid,
        st.server_group,
        decode(degree,
                null,null,
                st.degree || '/' || st.req_degree
        )       degree,
        st.server_set,
        st.server#,
        st.sid,
        sn.name,
        st.value
from
        v$px_sesstat    st,
        v$statname      sn
where
        sn.statistic# = st.statistic#
and     st.value != 0
order by
        st.qcsid,
        st.server_group,
        st.server_set,
        st.server#,
        st.statistic#
;

clear columns
clear breaks
set verify on

spool off
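
A note on restricting the report to a single query co-ordinator, as mentioned above: one substitution variable and one extra predicate are all it takes. A hypothetical fragment, not part of the original script:

define m_qcsid = 1234           -- hypothetical: the SID of the query co-ordinator

rem     then add one line to the where clause:
rem     and     st.qcsid = &m_qcsid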

See also: Parallel Inactivity for a simple script reporting wait events.

PLSQL_WARNINGS

Mon, 2022-04-04 04:45

I don't suppose many people edit and compile PL/SQL code from the SQL*Plus command line any more, but the following reminder about avoiding mistakes is probably valid in other development environments, even though it may be enabled through a different mechanism.

You can set the level of warning that you get from the PL/SQL compiler – and the default value isn’t the best value to use if you want your PL/SQL to be efficient. Here’s a demonstration based on a common, trivial, but annoying error – it starts with an indexed table:

rem
rem     Script:         plsql_warning_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2017
rem
rem     Last tested 
rem             19.11.0.0
rem             11.2.0.4
rem

create table t1(
        id      varchar2(10),
        v1      varchar2(32)
);

create index t1_i1 on t1(id);

insert into t1 
select  rownum, object_name 
from    all_objects
where   rownum <= 1000
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

Note that although I’ve declared the id column as a varchar2() type I’m populating it with a number – that will, of course, be coerced into a character format. This is an example of something I’ve seen quite often in production systems – a character column storing something that (always) looks like a number – and it leads to the following type of coding defect:

create or replace function f1 (i_in number)
return varchar2
authid definer
as
        m_v1 t1.v1%type;
begin
        select  t1.v1
        into    f1.m_v1
        from    t1
        where   t1.id = f1.i_in ;

        return  f1.m_v1;

end;
/

show errors

The error should be quite easy to see in this example – but in a more complex example with a longer piece of code it might not be so visible. Still, I’ve got the “show errors” call immediately after my “create or replace” so that should help. Here’s the output from running the above:

Function created.

No errors.

There is an error in the code – but it's not one that leads to a PL/SQL compilation error. My incoming parameter is numeric, and I'm using it in the predicate checking t1.id – which is an indexed character column – but that means the CBO will modify the predicate internally to: to_number(t1.id) = :b1, disabling the index. The function will work (provided the full tablescan doesn't find a value in the table that raises a conversion error), but the performance probably won't be what I'd hoped for.
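
For the record, one way of correcting the defect – keeping the numeric input but doing the conversion explicitly on the input side so that the index on t1.id remains usable – would be:

create or replace function f1 (i_in number)
return varchar2
authid definer
as
        m_v1 t1.v1%type;
begin
        select  t1.v1
        into    f1.m_v1
        from    t1
        where   t1.id = to_char(f1.i_in) ;      -- convert the input, not the column

        return  f1.m_v1;

end;
/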

Let’s recompile the procedure, but precede it with an alter session statement:

alter session set plsql_warnings = 'enable:all';
alter function f1 compile;
show errors

And here’s the resulting output:

Session altered.

SP2-0807: Function altered with compilation warnings

Errors for FUNCTION F1:

LINE/COL ERROR
-------- -----------------------------------------------------------------
10/8     PLW-07204: conversion away from column type may result in
         sub-optimal query plan

The compile succeeded (“function altered …warnings”), so the function will still execute and return the correct result, but we’ve had an explicit warning of exactly what we’ve done wrong and the effect it will have. But you can go one better – if you know your PLSQL error numbers:

alter session set plsql_warnings = 'enable:all','error:7204';
alter function f1 compile;
show errors

In this example I’ve added an extra option to the plsql_warnings parameter – I’ve told it to treat PLW-7204 as an error, not just as a warning, so this is what we see:

Session altered.


Warning: Function altered with compilation errors.

Errors for FUNCTION F1:

LINE/COL ERROR
-------- -----------------------------------------------------------------
10/8     PLS-07204: conversion away from column type may result in
         sub-optimal query plan

The difference doesn't leap out and hit you in the eye, but instead of "with compilation warnings" the error message says "with compilation errors". (It's also lost the SP2-0807 prefix, but that's not important.)

Since PLW-07204 is now an error the function state is invalid, and if you try using it in a query the query will raise Oracle error ORA-06575: Package or function F1 is in an invalid state.
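
A quick demonstration of the failure:

select f1(1) from dual;
-- ORA-06575: Package or function F1 is in an invalid state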

If you want to see how your packages, procedures and functions were last compiled you can query one of two views:

column param_name  format a32
column param_value format a32

select  param_name, param_value
from    user_stored_settings
where   object_name = 'F1'
and     object_type = 'FUNCTION'
/

PARAM_NAME                       PARAM_VALUE
-------------------------------- --------------------------------
plsql_optimize_level             2
plsql_code_type                  INTERPRETED
plsql_debug                      FALSE
nls_length_semantics             BYTE
plsql_warnings                   ENABLE:ALL,ERROR:  7204
plsql_ccflags
plscope_settings                 IDENTIFIERS:NONE
plsql_compiler_flags             INTERPRETED,NON_DEBUG

8 rows selected.

(The other view is a denormalized/pivoted version of this view, giving you one row and 8 columns for each object.)
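
That other view is user_plsql_object_settings – a quick sanity check of the same information:

select  name, type, plsql_warnings
from    user_plsql_object_settings
where   name = 'F1'
;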

If you want to make sure that you recompile the function with the same settings that you used previously you can add the clause "reuse settings" to the "alter function f1 compile" call; if you don't do this the function will compile with whatever your current session settings happen to be (which may have been dictated by the system settings).
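
For example:

alter function f1 compile reuse settings;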

There are variations on this theme – if you check the “alter compile” syntax you’ll see that you can include “parameter = value” clauses in the call to compile so, for example, I could start a new session and issue:

alter function f1 compile plsql_warnings='error:7204';

This would give me exactly the same warning, and the same output on a subsequent “show errors” – though in this case the output from user_stored_settings would be: “DISABLE:ALL,ERROR: 7204”.

If you want a complete list of all the possible pl/sql warnings you can find them in $ORACLE_HOME/plsql/mesg/plwus.msg. The warnings fall into three categories: Severe, Informational, and Performance, and there is a short note in the message file giving the ranges:

/   SEVERE -- For this category the warning number should be between
/             5000 and 5999.
/   INFORMATIONAL - For this category the warning number should be between
/                   6000 and 6249.
/   PERFORMANCE   - For this category the warning number should be between
/                   7000 and 7249.

It's worth taking a look at the possible errors – even if you don't take advantage of the feature. There aren't very many, but one I particularly like is Informational 6002: "Unreachable code", which tells you exactly which bits of your PL/SQL are never going to run. (I wonder how many people would find code that failed to recompile if they set plsql_warnings to 'error:6002'.)
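
If you want to try it, a minimal (hypothetical) demonstration might look like this – the statement after the unconditional return can never run:

alter session set plsql_warnings = 'enable:all','error:6002';

create or replace function f2 return number
authid definer
as
begin
        return 1;
        dbms_output.put_line('never reached');  -- flagged as unreachable code
end;
/

show errors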

Summary

It’s worth knowing about the option to set the parameter plsql_warnings as it may avoid accidental inefficiency in SQL that’s embedded in PL/SQL, and it may highlight coding errors (like “unreachable code”) that otherwise manage to slip past test suites.

Personally I'd be inclined to set it to something quite aggressive on development systems to help developers spot errors as early and as easily as possible.
