I'll emulate the problem discussed here by creating a table T populated from ALL_OBJECTS. Assume we will browse table T page by page, sorted by object_id, and that the rows returned must meet the condition object_name < 'Z', which forces us to retrieve most of the rows in T:
rudolf@TEST920.WORLD>create table t pctfree 0 nologging
2 as select * from all_objects;
Table created.
rudolf@TEST920.WORLD>insert /*+append*/ into t select * from t;
6619 rows created.
rudolf@TEST920.WORLD>commit;
Commit complete.
I repeated the last two statements several times so that table T becomes large enough:
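(For reference, the repetition could be scripted with a small PL/SQL block like the one below. This is my own sketch, not part of the original session; the loop count is just what it takes to reach the size shown next.)
begin
  for i in 1 .. 2 loop   -- two more doublings: 13238 -> 26476 -> 52952 rows
    insert /*+ append */ into t select * from t;
    commit;              -- commit so the next pass can read T after the direct-path insert
  end loop;
end;
/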
rudolf@TEST920.WORLD> select count(*) from t;
COUNT(*)
----------
52952
Now I'll create an index on column object_id and run a query asking for the page covering rows 20 through 30:
rudolf@TEST920.WORLD>create index t_oid_ind on t ( object_id ) pctfree 0 nologging;
Index created
rudolf@TEST920.WORLD>col object_name format a30
rudolf@TEST920.WORLD>col owner format a30
rudolf@TEST920.WORLD>set autot on
rudolf@TEST920.WORLD>analyze table t compute statistics for table for all indexes for all indexed columns;
Table analyzed.
rudolf@TEST920.WORLD>l
1 select owner,object_name,object_id
2 from ( select rownum rno,a.*
3 from (select /*+first_rows*/ * from t where object_name < 'Z' order by object_id ) a
4 where rownum <= 30
5 )
6* where rno >= 20
rudolf@TEST920.WORLD> /
OWNER OBJECT_NAME OBJECT_ID
------------------------------ ------------------------------ ----------
SYS CLU$ 5
SYS CLU$ 5
SYS CLU$ 5
SYS CLU$ 5
SYS CLU$ 5
SYS I_TS# 7
SYS I_TS# 7
SYS I_TS# 7
SYS I_TS# 7
SYS I_TS# 7
SYS I_TS# 7
11 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=53053 Card=30 Bytes=1800)
1 0 VIEW (Cost=53053 Card=30 Bytes=1800)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=53053 Card=52483 Bytes=2466701)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=53053 Card=52483 Bytes=4513538)
5 4 INDEX (FULL SCAN) OF 'T_OID_IND' (NON-UNIQUE) (Cost=101 Card=52952)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
33 consistent gets ***********************
0 physical reads
0 redo size
587 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
11 rows processed
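(For clarity, the general shape of this COUNT(STOPKEY) pagination query, with the hard-coded 20/30 replaced by bind variables, would look like the sketch below. The bind names :lo and :hi are my own, not from the trace above.)
select owner, object_name, object_id
  from ( select rownum rno, a.*
           from ( select /*+ first_rows */ *
                    from t
                   where object_name < 'Z'
                   order by object_id ) a
          where rownum <= :hi )   -- COUNT (STOPKEY): stop fetching after :hi rows
 where rno >= :lo;                -- discard the rows before the requested page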
Please notice the line I marked with '**********'. Now let's create an index on (object_name, object_id) and run the query your way:
rudolf@TEST920.WORLD> analyze table t delete statistics;
Table analyzed.
rudolf@TEST920.WORLD>l
1* create index t_oname_oid_ind on t (object_name,object_id) nologging pctfree 0
rudolf@TEST920.WORLD>/
Index created.
rudolf@TEST920.WORLD>l
1 select owner,object_name,object_id
2 from t a,
3 ( select rid from
4 ( select rownum rno,rowid rid from
5 ( select rowid from t
6 where object_name < 'Z'
7 order by object_id
8 )
9 ) where rNo between 20 and 30
10 ) b
11 where a.rowid=b.rid
12* order by a.object_id
rudolf@TEST920.WORLD> /
OWNER OBJECT_NAME OBJECT_ID
------------------------------ ------------------------------ ----------
SYS CLU$ 5
SYS CLU$ 5
SYS CLU$ 5
SYS CLU$ 5
SYS CLU$ 5
SYS I_TS# 7
SYS I_TS# 7
SYS I_TS# 7
SYS I_TS# 7
SYS I_TS# 7
SYS I_TS# 7
11 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (ORDER BY)
2 1 NESTED LOOPS
3 2 VIEW
4 3 COUNT **************
5 4 VIEW
6 5 SORT (ORDER BY)
7 6 INDEX (RANGE SCAN) OF 'T_ONAME_OID_IND' (NON-UNIQUE)
8 2 TABLE ACCESS (BY USER ROWID) OF 'T'
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
220 consistent gets *****************
208 physical reads *****************
0 redo size
587 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
11 rows processed
So it's quite clear that COUNT (STOPKEY) (STOPKEY is an operation introduced in 8i and later versions, implemented mostly for fetching TOP-N records) is more efficient in this circumstance. As you browse deeper page by page it becomes less efficient, but it still won't be slower than the plain COUNT operation. As Bitty said, few people will browse the whole result set page by page. If most of the browsing focuses on the first 500 rows, use COUNT(STOPKEY); if most of it focuses on the last 500 rows, I would still suggest COUNT(STOPKEY), but with the key columns indexed in descending order (see the sketch below). If the users' paging is random, Jimmy's approach may serve the purpose. However, anyone using it must also consider the CBO's negative effect, which the trace below demonstrates:
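(First, a minimal sketch of the descending-index variant suggested above. The index name and the bind variables :lo/:hi are my own, not taken from the traces in this thread.)
create index t_oid_desc_ind on t ( object_id desc ) pctfree 0 nologging;

select owner, object_name, object_id
  from ( select rownum rno, a.*
           from ( select /*+ first_rows */ *
                    from t
                   where object_name < 'Z'
                   order by object_id desc ) a   -- descending order matches the new index
          where rownum <= :hi )
 where rno >= :lo;
A page counted from the end of the ascending order becomes a cheap early page of this descending query, so COUNT (STOPKEY) kicks in after only a few rows.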
rudolf@TEST920.WORLD> set autot traceo
rudolf@TEST920.WORLD> analyze table t compute statistics for table for all indexes for all indexed columns;
Table analyzed.
rudolf@TEST920.WORLD>
rudolf@TEST920.WORLD>
rudolf@TEST920.WORLD> 1 select owner,object_name,object_id
2 from t a,
3 ( select rid from
4 ( select rownum rno,rowid rid from
5 ( select rowid from t
6 where object_name < 'Z'
7 order by object_id
8 )
9 ) where rNo between 20 and 30
10 ) b
11 where a.rowid=b.rid
12* order by a.object_id
/
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=959 Card=52483 Bytes=3306429)
1 0 SORT (ORDER BY) (Cost=959 Card=52483 Bytes=3306429)
2 1 HASH JOIN (Cost=416 Card=52483 Bytes=3306429)
3 2 VIEW (Cost=287 Card=52483 Bytes=1049660)
4 3 COUNT
5 4 VIEW (Cost=287 Card=52483 Bytes=367381)
6 5 SORT (ORDER BY) (Cost=287 Card=52483 Bytes=1364558)
7 6 INDEX (FAST FULL SCAN) OF 'T_ONAME_OID_IND' (NON-UNIQUE) (Cost=22 Card=52483 bytes=1364558)
8 2 TABLE ACCESS (FULL) OF 'T' (Cost=58 Card=52952 Bytes=2276936)
Statistics
----------------------------------------------------------
42 recursive calls
0 db block gets
805 consistent gets ****************
0 physical reads
0 redo size
587 bytes sent via SQL*Net to client
503 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
11 rows processed
In fact, table T occupies only 583 blocks in total, so the 805 consistent gets above cost more than simply full-scanning the whole table. One more thing: this method also loses its competitiveness when the avg_row_len is small. So please remember to include the /*+ rule */ hint in your statement if you want to use this method.
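(In other words, something like the following: the same ROWID-join query with the RULE hint added in the outermost query block, a sketch based on the statement traced above. The intent is to keep the rule-based plan from the unanalyzed run, with the index range scan and TABLE ACCESS BY USER ROWID, instead of the CBO's hash join and full scans.)
select /*+ rule */ owner, object_name, object_id
  from t a,
       ( select rid
           from ( select rownum rno, rowid rid
                    from ( select rowid
                             from t
                            where object_name < 'Z'
                            order by object_id ) )
          where rno between 20 and 30 ) b
 where a.rowid = b.rid
 order by a.object_id;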