NODE 1
Some information collected today. Business volume seems a bit low, probably because of the rain.
I have only just finished collecting it and have not had time to go through it thoroughly, so I am posting it first.
NODE2's information is posted in a separate thread.
NODE 1 accesses NODE2 through a DB LINK.
# sysconfig -q ipc
ipc:
msg_max = 8192
msg_mnb = 16384
msg_mni = 64
msg_tql = 40
shm_max = 6442450944
shm_min = 1
shm_mni = 128
shm_seg = 32
sem_mni = 16
sem_msl = 25
sem_opm = 10
sem_ume = 10
sem_vmx = 32767
sem_aem = 16384
max_kernel_ports = 34400
ssm_threshold = 8388608
ssm_enable_core_dump = 1
shm_allocate_striped = 1
shm_enable_core_dump = 1
Because shm_max = 6442450944 (6 GB), the originally planned 3.5 GB db_block_buffers was scaled down accordingly:
Parameter                   Previous value   Current value
db_block_buffers            307200           368000
shared_pool_size            828375040        1053741824
log_buffer                  1048576          4194304
fast_start_io_target        307200           0
processes                   600              650
db_block_max_dirty_target   307200           0
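As a sanity check, the whole SGA has to fit within shm_max (6 GB here) to stay in a single shared memory segment. A rough sketch of that arithmetic, assuming db_block_size = 8192 (an assumption; the block size is not shown in this listing):

```python
# Rough SGA-vs-shm_max check for the current values above.
# db_block_size = 8192 is an ASSUMPTION; it does not appear in the listing.
db_block_size = 8192
db_block_buffers = 368000          # current value from the table
shared_pool_size = 1053741824      # bytes
log_buffer = 4194304               # bytes
shm_max = 6442450944               # from sysconfig -q ipc

buffer_cache = db_block_buffers * db_block_size
sga_estimate = buffer_cache + shared_pool_size + log_buffer

print(buffer_cache)              # 3014656000 (~2.8 GB)
print(sga_estimate < shm_max)    # True -> the estimated SGA fits in one segment
```

This ignores smaller SGA components (fixed size, java pool, etc.), so it is only a lower bound on the real footprint.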
orabus@Ahyz1> vmstat 1 20
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
26 810 288 937K 949K 168K 4G 397M 1G 28624 764M 0 5K 23K 22K 19 10 71
26 808 290 935K 950K 168K 1063 52 222 0 157 0 7K 19K 27K 40 14 46
26 810 289 935K 950K 168K 1859 120 449 0 332 0 6K 16K 23K 30 12 58
26 808 290 935K 950K 168K 1904 46 518 0 221 0 6K 21K 25K 39 14 47
23 803 295 935K 951K 168K 794 57 379 0 173 0 7K 22K 27K 41 15 44
33 801 290 935K 950K 168K 2136 55 219 0 163 0 6K 23K 28K 42 16 42
31 802 290 935K 950K 168K 2136 120 684 0 412 0 7K 26K 29K 48 19 33
35 795 291 935K 950K 168K 2171 165 604 0 437 0 7K 26K 32K 42 19 39
23 800 297 935K 950K 169K 466 127 288 0 217 0 6K 20K 26K 35 14 50
23 808 290 936K 950K 169K 2038 16 41 0 15 0 6K 21K 25K 35 13 51
26 808 291 936K 949K 169K 3510 203 1056 0 637 0 6K 24K 26K 34 14 52
26 794 304 935K 950K 169K 1071 75 482 0 252 0 6K 20K 24K 35 12 53
22 807 293 934K 951K 169K 763 71 328 0 223 0 6K 21K 26K 35 14 51
30 804 290 933K 952K 169K 2866 134 433 0 326 0 6K 23K 24K 36 14 50
25 806 292 933K 951K 169K 3235 101 1083 0 413 0 6K 22K 26K 37 15 47
23 803 295 933K 952K 169K 1395 103 894 0 304 0 6K 22K 27K 41 14 45
18 810 294 933K 951K 169K 1748 72 381 0 249 0 6K 20K 26K 32 14 54
27 804 290 933K 951K 169K 3334 112 844 0 422 0 7K 20K 25K 41 16 43
32 801 288 934K 950K 169K 799 68 216 0 153 0 8K 19K 24K 41 17 43
32 804 286 935K 950K 169K 1914 112 559 0 414 0 8K 23K 25K 47 18 35
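The pout column staying at 0 across every sample is the key sign that the box is not paging despite the load. A hypothetical helper to scan vmstat samples for page-out activity (column positions taken from the header above; this is a sketch, not a Tru64 tool):

```python
# Hypothetical check: flag any vmstat sample line with non-zero page-outs.
# Column layout assumed from the header printed above:
#   r w u act free wire fault cow zero react pin pout in sy cs us sy id
def has_pageouts(vmstat_lines):
    flagged = []
    for line in vmstat_lines:
        fields = line.split()
        if len(fields) < 18:
            continue                     # skip banner/header lines
        pout = int(fields[11])           # 12th column is pout
        if pout > 0:
            flagged.append(line)
    return flagged

sample = "26 808 290 935K 950K 168K 1063 52 222 0 157 0 7K 19K 27K 40 14 46"
print(has_pageouts([sample]))            # [] -> no paging in this sample
```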
orabus@Ahyz1> top
load averages: 8.54, 9.61, 9.79 16:12:02
549 processes: 10 running, 53 waiting, 184 sleeping, 296 idle, 6 zombie
CPU states: % user, % nice, % system, % idle
Memory: Real: 6030M/16G act/tot Virtual: 18727M use/tot Free: 7434M
PID USERNAME PRI NICE SIZE RES STATE TIME CPU COMMAND
524288 root 0 0 20G 605M run 704.3H 79.50% kernel idle
982231 orabus 48 0 4080M 1736K run 0:01 39.60% oracle
938207 orabus 42 0 4080M 1318K WAIT 1:33 28.20% oracle
896416 orabus 45 0 4080M 2048K run 0:00 21.60% oracle
735791 orabus 42 0 4080M 2269K sleep 0:11 21.10% oracle
556965 orabus 42 0 4080M 1515K run 2:34 19.20% oracle
997546 orabus 42 0 4080M 1589K WAIT 2:26 16.50% oracle
604921 orabus 42 0 4080M 1654K WAIT 9:58 16.30% oracle
758860 orabus 42 0 4080M 1400K WAIT 0:35 14.90% oracle
939957 orabus 45 0 4080M 1998K sleep 0:00 14.40% oracle
891920 orabus 42 0 4080M 1409K WAIT 1:45 13.30% oracle
1045569 orabus 45 0 4090M 12M sleep 0:35 12.80% oracle
857115 orabus 42 0 4080M 1982K WAIT 0:00 11.90% oracle
697597 root 42 0 0K 0K WAIT 2:38 11.70% icssvr_daemon_
577683 orabus 42 0 4084M 5275K WAIT 51:56 11.00% oracle
orabus@Ahyz1> vmstat -P
Total Physical Memory = 16384.00 M
= 2097152 pages
Physical Memory Clusters:
start_pfn end_pfn type size_pages / size_bytes
0 504 pal 504 / 3.94M
504 524271 os 523767 / 4091.93M
524271 524288 pal 17 / 136.00k
8388608 8912872 os 524264 / 4095.81M
8912872 8912896 pal 24 / 192.00k
16777216 17301480 os 524264 / 4095.81M
17301480 17301504 pal 24 / 192.00k
25165824 25690088 os 524264 / 4095.81M
25690088 25690112 pal 24 / 192.00k
Physical Memory Use:
start_pfn end_pfn type size_pages / size_bytes
504 1032 scavenge 528 / 4.12M
1032 1963 text 931 / 7.27M
1963 2048 scavenge 85 / 680.00k
2048 2278 data 230 / 1.80M
2278 2756 bss 478 / 3.73M
2756 3007 kdebug 251 / 1.96M
3007 3014 cfgmgmt 7 / 56.00k
3014 3016 locks 2 / 16.00k
3016 3032 pmap 16 / 128.00k
3032 6695 unixtable 3663 / 28.62M
6695 6701 logs 6 / 48.00k
6701 15673 vmtables 8972 / 70.09M
15673 524271 managed 508598 / 3973.42M
524271 8388608 hole 7864337 / 61440.13M
8388608 8388609 unixtable 1 / 8.00k
8388609 8388612 pmap 3 / 24.00k
8388612 8389128 scavenge 516 / 4.03M
8389128 8390059 text 931 / 7.27M
8390059 8398104 vmtables 8045 / 62.85M
8398104 8912872 managed 514768 / 4021.62M
8912872 16777216 hole 7864344 / 61440.19M
16777216 16777217 unixtable 1 / 8.00k
16777217 16777220 pmap 3 / 24.00k
16777220 16777736 scavenge 516 / 4.03M
16777736 16778667 text 931 / 7.27M
16778667 16786712 vmtables 8045 / 62.85M
16786712 17301480 managed 514768 / 4021.62M
17301480 25165824 hole 7864344 / 61440.19M
25165824 25165825 unixtable 1 / 8.00k
25165825 25165828 pmap 3 / 24.00k
25165828 25166344 scavenge 516 / 4.03M
25166344 25167275 text 931 / 7.27M
25167275 25175320 vmtables 8045 / 62.85M
25175320 25690088 managed 514768 / 4021.62M
============================
Total Physical Memory Use: 2096559 / 16379.37M
Managed Pages Break Down:
free pages = 944225
active pages = 636421
inactive pages = 162278
wired pages = 169853
ubc pages = 142286
==================
Total = 2055063
WIRED Pages Break Down:
vm wired pages = 13441
ubc wired pages = 0
meta data pages = 62820
malloc pages = 82847
contig pages = 1242
user ptepages = 2149
kernel ptepages = 492
free ptepages = 15
==================
Total = 163006
orabus@Ahyz1> sar -u 1 20
OSF1 Ahyz1 V5.1 732 alpha 12May2003
16:13:20 %usr %sys %wio %idle
16:13:21 27 12 47 15
16:13:22 26 12 48 15
16:13:23 46 17 30 8
16:13:24 36 12 38 14
16:13:25 37 16 34 12
16:13:26 44 20 24 13
16:13:27 49 19 24 8
16:13:28 46 20 27 7
16:13:29 41 17 33 9
16:13:30 43 15 33 9
16:13:31 36 13 35 16
16:13:32 38 18 28 15
16:13:33 42 20 25 13
16:13:34 42 21 27 10
16:13:35 49 23 21 7
16:13:36 44 21 25 10
16:13:37 44 20 24 11
16:13:38 43 19 26 12
16:13:39 32 18 33 17
16:13:40 41 21 28 10
Average 40 18 31 12
SQL/Business>select count(*),status from v$session group by status ;
COUNT(*) STATUS
---------- --------
42 ACTIVE
290 INACTIVE
1 KILLED
Elapsed: 00:00:00.06
SQL/Business>/
COUNT(*) STATUS
---------- --------
41 ACTIVE
292 INACTIVE
1 KILLED
Elapsed: 00:00:00.00
SQL/Business>/
COUNT(*) STATUS
---------- --------
37 ACTIVE
292 INACTIVE
1 KILLED
Elapsed: 00:00:00.00
SQL/Business>/
COUNT(*) STATUS
---------- --------
41 ACTIVE
290 INACTIVE
1 KILLED
Elapsed: 00:00:00.00
SQL/Business>/
COUNT(*) STATUS
---------- --------
40 ACTIVE
290 INACTIVE
1 KILLED
Elapsed: 00:00:00.00
SQL/Business>