I would like to clear up a few things here:
1. I am really sorry for confusing many of you with the init.ora file I posted earlier last month.
2. There is no standard init.ora file that will work everywhere.
3. Configuring an init.ora file that works comes from understanding your application,
not from a DBA training book.
4. Database tuning is not an exact science or precise mathematics, but it is a scientific methodology.
Before I answer some of your questions, I need to describe what kind of application I was working on.
It is 70% OLTP and 30% large queries. Here are some of the numbers I collected to support my init.ora configuration:
Redo log size at a 15-minute switch interval: 2.8 GB (29,000 named users, 66% active)
Archive stream during peak (10 am - 2 pm): 41.3 GB
Redo I/O (29,000 named users, 66% active): 229.4 I/O per second (= 6.6 MB/sec)
Database I/O: 3,599 I/O per second, with a data rate of 59.75 MB/sec
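If you want to collect similar numbers on your own system, here is a minimal SQL*Plus sketch (the statistic names assume a reasonably recent Oracle release; v$sysstat values are cumulative since instance startup, so sample twice over a known interval and divide the deltas by the interval in seconds to get per-second rates):

REM Cumulative redo volume and redo write counts since instance startup.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo size', 'redo writes');

REM Log switches per hour, to compare against the 15-minute switch interval above.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;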
For my first database stress test I set log_buffer = 4096000; after tuning I set it to 8192000.
Most books will say that values "above 1MB are unlikely to yield significant benefit" for log_buffer.
But a larger redo buffer helps absorb processing spikes. The memory-to-memory transfer (SGA/PGA to the redo buffer via the server process) is much faster than the memory-to-disk transfer (redo buffer to the redo log via
LGWR). So if a process is making a lot of changes, the redo it generates is written to a memory buffer. As the buffer fills up, the output process (LGWR) is awakened to empty it. LGWR needs some lead time, since a sufficiently large transaction can generate redo faster than LGWR can write it to disk. If you have small transactions, each COMMIT causes redo to be flushed to disk before the COMMIT returns control to the user.
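One rough way to check whether the log buffer itself is a bottleneck (again just a sketch, using standard v$sysstat statistics) is to watch how fast these two counters grow under load; if 'redo buffer allocation retries' or 'redo log space requests' keep climbing, sessions are waiting for space in the redo buffer and a larger log_buffer may help:

REM Sessions waiting on redo buffer space show up in these counters.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo buffer allocation retries', 'redo log space requests');

REM init.ora -- the value from this post, not a universal recommendation.
log_buffer = 8192000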
Sorry again that I can't answer all the questions in this thread, due to time limitations.
Chao_Ping understands most of the configuration ideas; he may be willing to help with your questions, as before.
Happy holidays!