Originally posted by 玉面飞龙:
[B]
1) Disk sequential read is slightly faster under direct I/O than under buffered I/O: 55s vs. 75s.
2) Under direct I/O, usr is much higher than sy; under buffered I/O, sy is higher than usr. Good or bad?
3) in (interrupts), sy (system calls), and cs (context switches) are all higher under direct I/O than under buffered I/O. Good or bad?
4) Under buffered I/O, the vmstat CPU run queue is consistently high while the I/O wait queue is low; under direct I/O it is the opposite: the run queue is moderate but the I/O wait queue is very high.
5) The iostat results are about the same.
6) Buffered I/O clearly shows a much higher page-in rate.
[/B]
The more I think of it, the more difficult a good performance comparison looks. Buffered I/O has two effects that direct I/O does not: filesystem caching and read prefetch. To keep the filesystem cache from skewing reproducibility, you have to test, unmount the filesystem, and test a second time. This becomes unnecessary, or partially so, if very_big_table is much bigger than the filesystem cache, or, when Oracle reads the table in noparallel mode, bigger than that cache plus the Oracle buffer cache. (Speaking of this, ignore my last message asking you to repeat the test without the parallel hint; I now understand you ran the test in parallel precisely to avoid Oracle's caching!) You see there are a lot of parameters here. Your result (6) implies that either you are bringing the file data in for the first time, or very_big_table is large relative to the filesystem cache.
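As an aside, if your test box runs a 2.6.16 or newer Linux kernel (an assumption about your platform; older kernels don't have this knob), you can flush the page cache between runs without unmounting. A minimal sketch:

[code]
/* Minimal sketch: flush the Linux page cache between test runs,
 * as an alternative to unmounting the filesystem. Needs root and
 * a 2.6.16+ kernel (an assumption about the test platform). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    sync();  /* write out dirty pages so clean pages can be dropped */

    FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
    if (f == NULL) {
        perror("fopen /proc/sys/vm/drop_caches");
        return 1;
    }
    fputs("3\n", f);  /* 3 = drop page cache plus dentries and inodes */
    fclose(f);
    return 0;
}
[/code]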
Your result (1) implies the same, because the first time file data is brought into the filesystem page cache, it takes a while. But if the cache is smaller than all the table blocks, even subsequent reads remain slow. Since your test is a sequential read, I think this also leads to the conclusion that read prefetch is not nearly as important as caching: even if buffered I/O prefetches, it is still slow simply because the data has to be copied twice (from disk into the kernel buffer, i.e. the page cache, and from there into user space).
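To make the double-copy point concrete, here is a minimal sketch of the direct I/O side on Linux: with O_DIRECT the kernel DMAs disk blocks straight into the user buffer, skipping the page cache. The file path and buffer size are made up for illustration, and O_DIRECT needs an aligned buffer, hence posix_memalign:

[code]
/* Minimal sketch: sequential read with O_DIRECT, bypassing the
 * page cache. Path and buffer size are illustrative only. */
#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t bufsz = 1024 * 1024;   /* 1 MB, a multiple of the block size */
    void *buf;

    /* O_DIRECT requires the buffer to be aligned; 4096 is safe on most disks */
    if (posix_memalign(&buf, 4096, bufsz) != 0) {
        perror("posix_memalign");
        return 1;
    }

    /* hypothetical datafile path, not from the original test */
    int fd = open("/u01/oradata/very_big_table.dbf", O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ssize_t n;
    while ((n = read(fd, buf, bufsz)) > 0)
        ;                                /* consume the data here */
    if (n < 0)
        perror("read");

    close(fd);
    free(buf);
    return 0;
}
[/code]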
I don't have good explanations for the other results. It's known that direct I/O uses less sys CPU time than buffered I/O, but that doesn't explain any of (2) to (5).
Yong Huang