An article I came across on AskTom; I am not sure whether it is relevant to the topic under discussion.
Tom:
create table test( a int);

begin
  for i in 1..10000 loop
    insert into test values (i);
  end loop;
end;
/

set autotrace on
select count(0) from test;
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'TEST'
Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
20 consistent gets
0 physical reads
0 redo size
369 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
select * from test where a=10;
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'TEST'
Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
21 consistent gets
0 physical reads
0 redo size
360 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
select * from test;
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'TEST'
Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
686 consistent gets
0 physical reads
0 redo size
185864 bytes sent via SQL*Net to client
74351 bytes received via SQL*Net from client
668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10001 rows processed
question:
1.
When we query count(0), what exactly does Oracle do? Since we have no index, did
Oracle go to every block and count the rows?
If so, why is the consistent gets much lower than for select * from test?
Both are full table scans.
2.
Why does select * from test have much higher consistent gets than
select * from test where a=10?
Since there is no index, Oracle needs to go to every block to get the values, so the
consistent gets should be the same; the first query returns more rows, but that
shouldn't matter.
Regards,
--------------------------------------------------------------------------------
and we said...
q1) select count(0) is just like

select count(*)
  from ( select 0 from t )
/

Yes, Oracle went to each block to find the rows so it could give you a zero.
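A minimal SQL*Plus sketch, assuming the same TEST table created above, of confirming that the two forms produce the same plan (TRACEONLY EXPLAIN shows only the plan, without fetching the rows):

set autotrace traceonly explain

select count(0) from test;
select count(*) from test;
-- both should show: SORT (AGGREGATE) over TABLE ACCESS (FULL) OF 'TEST'

set autotrace off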
q2) It's a side effect of your arraysize. You must have an 8k block size, because I
reproduced this exactly.
We were expecting about 20 consistent gets, right? Well, the default array size
in SQL*Plus is 15 rows per fetch. 10000/15 = 666.67, and 666 + 20 = 686 --
whoa there -- 686 is our consistent gets!
That's what happened. When you fetched 15 rows, Oracle paused and gave you the
data. When you went back to get the next 15, it got the buffer again to resume
your query.
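A small sketch of sanity-checking the "about 20 consistent gets" expectation by counting the blocks the rows actually occupy (DBMS_ROWID is a standard package; the resulting block count is an estimate that will vary with block size and PCTFREE):

select count(distinct dbms_rowid.rowid_block_number(rowid)) as used_blocks
  from test;
-- with an 8k block size this should come out in the neighborhood of 20 blocks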
Watch what happens with different array sizes, starting with 15:
ops$tkyte@8i> select * from test;
10000 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
686 consistent gets
0 physical reads
0 redo size
108813 bytes sent via SQL*Net to client
46265 bytes received via SQL*Net from client
668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10000 rows processed
ops$tkyte@8i> set arraysize 1000
ops$tkyte@8i> select * from test;
10000 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
30 consistent gets
0 physical reads
0 redo size
86266 bytes sent via SQL*Net to client
942 bytes received via SQL*Net from client
11 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10000 rows processed
ops$tkyte@8i> set arraysize 5000
ops$tkyte@8i> select * from test;
10000 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
22 consistent gets
0 physical reads
0 redo size
149793 bytes sent via SQL*Net to client
382 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10000 rows processed
Another interesting thing to note is that as the array size gets too large, the
amount of data transferred goes back up (compare the bytes sent at arraysize 5000
with arraysize 1000). There is a diminishing marginal return on the array size, so
don't go OVERBOARD.
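A hedged sketch of checking a middle-ground setting for yourself, reading your own session's consistent gets from V$MYSTAT instead of relying on autotrace (assumes SELECT privilege on V$MYSTAT and V$STATNAME; the value 100 is only an illustrative choice, not a figure from the runs above):

set arraysize 100

select n.name, s.value
  from v$statname n, v$mystat s
 where n.statistic# = s.statistic#
   and n.name = 'consistent gets';        -- note the value before

select * from test;

select n.name, s.value
  from v$statname n, v$mystat s
 where n.statistic# = s.statistic#
   and n.name = 'consistent gets';        -- the delta is the gets done by the fetch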
Followup to comment one:
Set the arraysize to some constant value. It does not matter for single-row fetches
if it is too big, but having it too small when fetching lots of rows (NOT just full
scans -- lots of rows) does impact you.
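One way to follow that advice is to pin the value once per session, for example in a SQL*Plus login.sql (login.sql is run automatically by SQL*Plus at startup; the value 100 is just an illustrative assumption):

-- login.sql
set arraysize 100    -- one constant value: harmless for single-row fetches,
                     -- avoids the extra consistent gets and roundtrips on big fetches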