Originally posted by diablo2 on 2008-10-27 11:58
why not try altering PCTFREE of the table?
The downside is wasted storage.
It's a good idea, but the problem is that when a record is first inserted it is very small, and the application later updates it with a big VARCHAR value. That means even if we set PCTFREE to 50%, there's still a high chance of fragmentation. It may not be as bad as before, but we would probably still need frequent REORGs.
Still, it's a good suggestion; I really hadn't thought of it.
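For reference, the PCTFREE change being discussed would look roughly like the sketch below. The table name is hypothetical; note that in DB2, PCTFREE only affects pages built by LOAD or REORG, so a REORG is needed for existing data to benefit.

```sql
-- Hypothetical table name. PCTFREE reserves space on each data page
-- so rows have room to grow when the application widens the VARCHAR.
ALTER TABLE myschema.orders PCTFREE 50;

-- Existing pages are not rebuilt until a reorg:
REORG TABLE myschema.orders;
```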
Originally posted by unixnewbie on 2008-10-27 19:57
If it's just the overflow problem, I would change those two columns to CLOB and then CREATE TABLE .... LONG IN a separate 32K SMS tablespace.
CLOB and LF columns are large objects, and no prefetching or page cleaning happens for large objects.
In that case, every query reading these two columns requires direct I/O, and every INSERT/UPDATE/DELETE requires direct I/O as well. That will certainly hurt performance; we just don't know how badly it will affect the system without running a benchmark.
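For concreteness, the "CLOB plus LONG IN a separate tablespace" approach suggested above might be sketched as follows. All names (tablespace, path, bufferpool, table, columns) are hypothetical, and the exact CLOB sizes would depend on the real data:

```sql
-- Hypothetical 32K SMS tablespace to hold the long data.
CREATE BUFFERPOOL bp32k SIZE 1000 PAGESIZE 32K;

CREATE TABLESPACE lob_tbs32k
  PAGESIZE 32K
  MANAGED BY SYSTEM USING ('/db2/lobdata')
  BUFFERPOOL bp32k;

-- The two problem columns become CLOBs whose data lives
-- in the separate tablespace via the LONG IN clause.
CREATE TABLE myschema.orders (
  order_id INTEGER NOT NULL,
  note1    CLOB(1M),
  note2    CLOB(1M)
) IN data_tbs LONG IN lob_tbs32k;
```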
Yes, but that's where FILE SYSTEM CACHING comes into play.
Hmm... that's a really interesting idea, using the FS cache to handle large objects...
I'd never thought about that. Have you used this approach in any project before?
Using FS caching for LOBs is recommended in the Information Center. You may need to tune AIX's minclient/maxclient as well. Use 'vmstat -v' or 'nmon' to track how much memory the system is using for FS caching.
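A minimal sketch of enabling this for the LOB tablespace, assuming a hypothetical tablespace name. DB2 lets you choose buffered vs. non-buffered I/O per tablespace with the FILE SYSTEM CACHING clause, so the OS file cache can absorb the direct I/O that LOB columns would otherwise do:

```sql
-- Hypothetical tablespace name. FILE SYSTEM CACHING turns on
-- buffered I/O, letting the AIX client-page cache serve LOB reads.
ALTER TABLESPACE lob_tbs32k FILE SYSTEM CACHING;
```

On the AIX side, the minclient%/maxclient% tunables that bound the client-page cache are set with `vmo`, and `vmstat -v` shows the current client-page percentages so you can see how much memory the FS cache is actually consuming.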