==> I am not sure about the implementation on Solaris; the points below refer to Linux. Correct me if anything here is wrong.
1. AIO is supported by default on Solaris platforms. If you use direct I/O, the system will become unstable.
==> AIO and DIO can be used at the same time; in fact, they should be. Buffered I/O leads to a less stable system, because you are depending on the OS to manage memory for you (e.g. another application flushing the buffer cache can cause a lot of paging).
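For an Oracle instance on Linux, a minimal sketch of the init.ora settings that request both at once might look like this (assuming 9i or later; verify that your filesystem and platform actually support asynchronous and direct I/O before relying on it):

    # request both direct and asynchronous I/O on filesystem datafiles
    filesystemio_options = setall
    # database-level asynchronous I/O (normally the default anyway)
    disk_asynch_io = true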
2. You can run dd under buffered I/O and under direct I/O, then issue iostat -xntc 3 to check the I/O stats. The block size (bs) for dd can be 8k (the same as db_block_size) or 2M.
3. You can also use mkfile, then watch iostat -xntc 3.
4. You can also use Iometer, a tool provided by Sun and available at www.sun.com, to check I/O stats under buffered I/O and direct I/O.
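==> I have not verified this on Solaris myself, but the test described in points 2 and 3 would look roughly like the following (the file path is only a placeholder):

    # create a 2 GB test file, then read it back with the database block size
    mkfile 2g /u01/iotest/testfile
    dd if=/u01/iotest/testfile of=/dev/null bs=8k
    # in another window, watch the extended device statistics
    iostat -xntc 3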
==> iostat on Linux can only monitor physical I/O, not logical I/O. Which metric to look at depends on what kind of I/O pattern you have. If you do a lot of full table scans and hash joins, bandwidth is your key metric: check how many KB/sec you reach. Default buffered I/O is good here because the OS will read ahead on disk for you; however, I still prefer direct I/O combined with raising Oracle's multiblock read parameter (db_file_multiblock_read_count). At the same time, AIO will help you if you have a RAID configuration (a gain of more than 5%, but make sure you do not have a bottleneck on the SCSI/FC bus or in the number of disks).
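On Linux, a rough way to compare buffered and direct sequential reads is with GNU dd (the path is a placeholder, and iflag=direct needs a reasonably recent coreutils on a 2.6 kernel):

    # buffered sequential read, 8k blocks to match db_block_size
    dd if=/u01/iotest/testfile of=/dev/null bs=8k count=262144
    # the same read bypassing the page cache
    dd if=/u01/iotest/testfile of=/dev/null bs=8k count=262144 iflag=direct
    # watch per-device throughput every 3 seconds while dd runs
    iostat -x 3

Use a test file larger than RAM, otherwise the buffered run mostly measures the page cache rather than the disks.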
If you do a lot of random I/O (OLTP), turning off the OS buffer cache will help you (or at least you should turn off the OS read-ahead feature). Look at the await column in iostat, which is the real latency you get at the OS level; it should be less than 10 ms. Turning on AIO can also help a lot here, or you can even consider vectored I/O (the 2.6 kernel supports it well, but I am not sure about Oracle). If you have a large write cache on your RAID controller, whether to turn it on or off depends on your I/O workload: you should definitely put the redo logs on a dedicated bay and turn the cache on there; for data, maybe or maybe not.
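A sketch of the random-I/O checks on Linux (the device name is only an example; adjust it for your disks):

    # turn off OS read-ahead on the data device
    blockdev --setra 0 /dev/sdb
    # confirm the setting
    blockdev --getra /dev/sdb
    # watch the await column (queue plus service time in ms); aim for under 10 ms
    iostat -x 3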