performance - Index is not being used in Oracle query with collection in WHERE condition


I have an event table named mytable, which has a mytableid column on which I have created an index. The table has 70 million rows. I have created a purge proc that purges events based on a table type collection, mytablecollection. The collection has a 100-row limit, so the proc can purge 100 rows at a time. When I ran the query below in the proc, it got stuck for 40 minutes.

delete from mytable
where  mytableid in (select mytableid from table(mytablecollection))
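For context, the collection type and the proc are not shown in the question; a minimal sketch of what they might look like (every name here other than mytable, mytableid and mytablecollection is an assumption) is:

-- Assumed sketch only: an object type exposing mytableid, a 100-element collection
-- type of it, and a purge proc that takes the collection as a parameter.
create or replace type mytable_id_obj as object (mytableid number);
/
create or replace type mytablecollection_t as varray(100) of mytable_id_obj;
/
create or replace procedure purge_mytable (mytablecollection in mytablecollection_t) as
begin
    -- the collection carries at most 100 ids per call
    delete from mytable
    where  mytableid in (select mytableid from table(mytablecollection));
end purge_mytable;
/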

When I ran the analyzer on the query with hard-coded values, it showed an index range scan:

delete from mytable
where  mytableid in (10,20,30)

Is the collection in the query, behind the scenes, playing a role in the index not being used? I am thinking Oracle might be confused about the number of rows being fetched from the collection. Is that correct? Is there a solution?

P.S.: I am thinking of implementing a FORALL to delete the rows.
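A minimal sketch of that FORALL idea (assuming the ids can be copied out of mytablecollection into a plain collection of numbers) might be:

declare
    type id_tab is table of mytable.mytableid%type;
    ids id_tab := id_tab(10, 20, 30);   -- in the real proc these would come from mytablecollection
begin
    forall i in 1 .. ids.count
        delete from mytable
        where  mytableid = ids(i);
end;
/

Each delete then filters on a single mytableid value, so the index should be picked up without any cardinality guessing.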

When you execute ...

where   mytableid in (10,20,30) 

... the optimizer is clever enough to know it's going to hit 3 rows. But if you use ...

where   mytableid in (select mytableid from table(mytablecollection))

... then it has no idea how many rows are in the collection. It assumes there are 8192 rows, and chooses an execution plan accordingly.

But you can tell the optimizer how many rows are in the collection, using the CARDINALITY hint:

where   mytableid in (select /*+ cardinality(mytablecollection 100) */
                             mytableid
                      from   table(mytablecollection))
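Plugged into the delete from the question, that would look something like:

delete from mytable
where  mytableid in (select /*+ cardinality(mytablecollection 100) */
                            mytableid
                     from   table(mytablecollection))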

"this table having 70 million rows"

By the way, it is worth checking the freshness of the table's statistics, because I would still expect the optimizer to consider the index for 8192 out of 70,000,000 rows.
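For example, statistics on the table (and, with cascade, its indexes) can be refreshed with DBMS_STATS; run it as, or with the name of, the owning schema:

begin
    dbms_stats.gather_table_stats(
        ownname => user,        -- or the schema that owns mytable
        tabname => 'MYTABLE',
        cascade => true         -- also gather statistics on the indexes
    );
end;
/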

