SarCheck®: Automated Analysis of HP-UX sar and ps data

(English text version 7.01.06)


This is an analysis of the data contained in the file sar26. The data was collected on 2010/01/26 (yyyy/mm/dd), from 00:00:00 to 23:50:00, from the system 'signa2A'. There were 143 data records used to produce this analysis. The operating system used to produce the sar report was HP-UX Release B.11.11. The system has 12 processors and 64 gigabytes of memory.

Data collected by the ps -elf command on 2010/01/26 from 13:40:00 to 16:10:00, and stored in the file 20100126, will also be analyzed.

Table of Contents

  SUMMARY
  RECOMMENDATIONS SECTION
  RESOURCE ANALYSIS SECTION
  CAPACITY PLANNING SECTION
  CUSTOM SETTINGS SECTION

SUMMARY

A change to at least one tunable parameter has been recommended. When the data was collected, no CPU bottleneck could be detected. No significant I/O bottleneck was seen. The system showed no signs of a memory bottleneck.

At least one possible memory leak has been detected. At least one possible runaway process has been detected. See the Resource Analysis section for details.

Limits to future growth have been noted in the Capacity Planning section.

RECOMMENDATIONS SECTION

All recommendations contained in this report are based solely on the conditions which were present when the performance data was collected. It is possible that conditions which were not present at that time may cause some of these recommendations to result in worse performance. To minimize this risk, analyze data from several different days, implement only regularly occurring recommendations, and implement them one at a time.

Change the value of 'dbc_max_pct' from 12 to 14. This recommendation has been made because the buffer cache statistics indicate that a larger buffer cache might improve performance.

Change the value of 'dbc_min_pct' from 5 to 6. This recommendation has been made because the buffer cache statistics indicate that a larger buffer cache might improve performance.

Change the value of 'nproc' from 14432 to 11000. The parameter 'nproc' is used to set the maximum number of processes which may run on the system simultaneously. At its peak, only 1827 entries were present in the process table. This recommendation is being made because the table was grossly oversized and was large enough to degrade performance.

Use the System Administration Manager (SAM) to change the values of tunable parameters. More information on the SAM utility and relinking the kernel is available in the System Administration Tasks manual.

RESOURCE ANALYSIS SECTION

Average CPU utilization was only 19.9 percent. This indicates that spare CPU capacity exists. If any performance problems were seen during the entire monitoring period, they were not caused by a lack of CPU power. User CPU as measured by the %usr column in the sar -u data averaged 13.6 percent and system CPU (%sys) averaged 6.2 percent. The sys/usr ratio averaged 0.46 : 1. The ratio of %sys to %usr activity did not indicate any excessive overhead caused by system calls. CPU utilization peaked at 51 percent from 17:20:00 to 17:30:01.
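
The averages above can be reproduced directly from the sar -u output. The following sketch is a simplified example and not SarCheck's own code; it assumes the HP-UX sar -u columns (time, %usr, %sys, %wio, %idle) have been saved to a plain text file whose name, sar_u.txt, is hypothetical.

    # cpu_avg.py -- average the %usr, %sys and %wio columns of sar -u output.
    rows = []
    with open("sar_u.txt") as f:                  # hypothetical file name
        for line in f:
            parts = line.split()
            if len(parts) == 5 and ":" in parts[0]:
                try:
                    rows.append([float(x) for x in parts[1:]])
                except ValueError:
                    continue                      # skips the '%usr %sys %wio %idle' header line

    usr = sum(r[0] for r in rows) / len(rows)
    sys_ = sum(r[1] for r in rows) / len(rows)
    wio = sum(r[2] for r in rows) / len(rows)
    print(f"%usr {usr:.1f}  %sys {sys_:.1f}  %wio {wio:.1f}  sys/usr {sys_/usr:.2f} : 1")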

The CPU was waiting for I/O an average of 7.4 percent of the time. This suggests that the system may have been somewhat I/O bound. The time that the system was waiting for I/O peaked at 49 percent from 03:00:01 to 03:10:00. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst during the period when the system was waiting for I/O, then a performance bottleneck may be caused by processes waiting for I/O. Disk statistics indicate that some intermittent bottlenecks may have been present.

Graph of CPU utilization

The CPU was idle (neither busy nor waiting for I/O) and had nothing to do an average of 72.8 percent of the time. If overall performance was good, this means that on average, the CPU was lightly loaded. If performance was generally unacceptable, the bottleneck may have been caused by remote file I/O which cannot be directly measured with sar and therefore cannot be considered by SarCheck.

The average context switching rate was 2076.7 per second. This works out to an average of one context switch per processor every 5.78 milliseconds. No recommendations have been made to the timeslice parameter because no problems were seen with the context switching rate.
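
The 5.78 millisecond figure follows from spreading the context switch rate across the 12 processors; a quick worked check:

    # Worked check of the context-switch interval quoted above.
    rate_per_sec = 2076.7               # average context switches per second (sar's pswch/s)
    cpus = 12
    per_cpu_rate = rate_per_sec / cpus  # about 173 switches per second on each processor
    interval_ms = 1000.0 / per_cpu_rate
    print(f"{interval_ms:.2f} ms between context switches per processor")   # prints 5.78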

No unusual configurable parameter values were seen in those parameters which relate to the process accounting system. The current values of acctsuspend and acctresume are unlikely to have an impact on system performance.

Large systems, such as this one, may experience system panics due to a lack of 'equivalently mapped memory'. The current value of eqmemsize, the parameter which controls the size of equivalently mapped memory, is 15 and should be increased if system failures occur. There is not enough information to recommend a specific value for eqmemsize.

The average rate of System V semaphore calls was 216.8 per second. System V semaphore activity peaked at a rate of 6512.64 per second from 01:50:01 to 02:00:00. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst during the period of peak semaphore activity, then that activity may be a performance bottleneck and application or database activity related to semaphore usage should be looked at more closely. No problems have been seen, and no changes have been recommended for System V semaphore parameters. Note that SarCheck only checks these parameters' relationships to each other since semaphore usage data is not available. Algorithms used by SarCheck to check these relationships are available in the help text of SAM.

No System V message activity was seen. No problems have been seen, and no changes have been recommended for System V message parameters. Note that SarCheck only checks these parameters' relationships to each other since message usage data is not available. Algorithms used by SarCheck to check these relationships are available in the help text of SAM, and in the file /usr/include/sys/msg.h.

The ratio of exec to fork system calls was 0.92. This indicates that PATH variables are efficient.

The syncer daemon used 0.19 percent of the CPU during the monitoring period. The syncer is responsible for writing data from the buffer cache to disk. Its level of activity is not high enough to cause a problem.

This system's buffer cache is dynamic, meaning that its size is determined by the amount of free memory on the system. The average cache hit ratio of logical reads was only 67.0 percent. Based on the current values of dbc_min_pct and dbc_max_pct, the buffer cache can range in size from 3275 to 7860 megabytes of memory. The actual size of the dynamic buffer cache ranged from 3276.8 to 7573.7 megabytes of memory. After implementation of the changes described in the Recommendations Section, the size of the dynamic buffer cache will range from 3930.1 to 9170.1 megabytes of memory. The following graph shows that the actual size of the dynamic buffer cache moved around quite a bit. This is normal and indicates that there is a healthy balance between memory pressure and the need to buffer I/O.

Graph of dynamic buffer cache utilization
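
The cache-size limits quoted above are simply the dbc_min_pct and dbc_max_pct percentages applied to physical memory. The sketch below treats 64 gigabytes as exactly 65536 megabytes, so its results differ by a few megabytes from the report's figures, which are based on the exact page count.

    # Dynamic buffer cache bounds as a percentage of physical memory.
    phys_mb = 64 * 1024                          # 64 GB of RAM, taken as 65536 MB

    def cache_range_mb(min_pct, max_pct, mem_mb=phys_mb):
        return mem_mb * min_pct / 100.0, mem_mb * max_pct / 100.0

    print("current (5/12 pct)    :", cache_range_mb(5, 12))   # roughly (3276.8, 7864.3) MB
    print("recommended (6/14 pct):", cache_range_mb(6, 14))   # roughly (3932.2, 9175.0) MB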

Total swap space including pseudo-swap reached a peak of 36.56 percent full at 15:40:00.

Graph of swap space usage

Total size of paging space known to the swapinfo utility as type "dev" was 61444.00 megabytes. This space was not used during the monitoring period. The priority of the "dev" paging space ranged from 0 to 1.

Total size of paging space known to the swapinfo utility as type "reserved" peaked at 22465.43 megabytes at 15:40:00.

The swap out rate was always 0.00 per second. This indicates that no serious memory pressure was seen.

Data collected with ps -elf shows that the swapper daemon used an average of 0.011 percent of one CPU. This does not indicate a memory shortage.

At least 8158876 pages of memory were always free. The value of lotsfree was 253952 pages and the value of gpgslim never changed, indicating a lack of memory pressure. The value of desfree was 62464 pages, and the fact that gpgslim always equalled desfree does not indicate a memory-poor condition. The following graph illustrates the fact that freemem was always greater than lotsfree, indicating that this system had more than enough memory to support the load present. Since the minimum number of free pages was significantly greater than the value of lotsfree, this system appears to have a surplus of physical memory. If this is always the case, up to 5928693 pages, or 23159.0 megabytes of memory, might be more effectively used as a buffer cache.

Graph of free memory
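
The page counts above convert to megabytes at 4 kilobytes per page (the HP-UX base page size), which is how the 23159.0 megabyte figure was obtained. A small check:

    # Convert the free-memory page counts quoted above into megabytes.
    PAGE_KB = 4                                  # HP-UX base page size

    def pages_to_mb(pages):
        return pages * PAGE_KB / 1024.0

    print("minimum freemem:", pages_to_mb(8158876), "MB")   # about 31870 MB always free
    print("lotsfree       :", pages_to_mb(253952), "MB")    # 992 MB paging threshold
    print("surplus        :", pages_to_mb(5928693), "MB")   # about 23159 MB, as reported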

The inode cache did not fill up during the monitoring period. This indicates that the cache is larger than necessary. Peak inode cache usage statistics (max used/cache size) as reported by sar: 5912/20570.

Graph of inode cache usage

The process and open file tables were less than 80.0 percent full. Peak table usage statistics (max used/final table size) as reported by sar: Process table: 1827/14432. Open file table: 271134/567150.

Graph of open file table usage
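
The percent-full figures for these tables, which appear again in the summary table at the end of this report, follow directly from the peak counts above:

    # Peak kernel table occupancy as a percentage of the configured table size.
    tables = {
        "process table (nproc)":   (1827, 14432),      # (peak entries used, table size)
        "open file table (nfile)": (271134, 567150),
    }
    for name, (used, size) in tables.items():
        print(f"{name}: {100.0 * used / size:.1f}% full")
    # -> process table (nproc): 12.7% full
    # -> open file table (nfile): 47.8% full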

The process table, controlled by the nproc parameter, was grossly oversized. Process table sizes in excess of 4000 can cause performance degradation and the table should only be larger than this if there is a good reason. Specific recommendations for changing the size of this table have been made in the recommendations section.

The fs_async flag is set to 0. This may result in reduced disk performance, but keeps filesystem data structures consistent in the event of a system crash. This option is currently in the state recommended for production systems.

There were 18 volume groups seen and the maxvgs parameter was set to 36. This leaves plenty of room for growth and no changes to maxvgs have been recommended.

Volume Group Statistics
VG Name     Current PVs  Active PVs  Current LVs  Open LVs  Total Size  Free Space
/dev/vg00   2            2           10           10        136.69 GB   69.39 GB
/dev/vg01   2            2           3            3         136.71 GB   48.71 GB
/dev/vg04   2            2           1            1         249.95 GB   0.00 GB
/dev/vg21   1            1           1            1         149.98 GB   0.01 GB
/dev/vg06   1            1           1            1         79.98 GB    0.00 GB
/dev/vg07   1            1           1            1         149.98 GB   0.00 GB
/dev/vg09   2            2           1            1         19.99 GB    0.01 GB
/dev/vg12   1            1           1            1         10.00 GB    0.00 GB
/dev/vg25   1            1           1            1         20.00 GB    0.00 GB
/dev/vg15   1            1           1            1         39.99 GB    0.00 GB
/dev/vg19   1            1           2            2         6.00 GB     0.00 GB
/dev/vg05   1            1           1            1         20.00 GB    0.00 GB
/dev/vg22   1            1           1            1         59.99 GB    0.00 GB
/dev/vg23   1            1           1            1         159.98 GB   0.01 GB
/dev/vg20   1            1           1            1         399.94 GB   0.02 GB
/dev/vg24   1            1           1            1         20.00 GB    0.00 GB
/dev/vg26   1            1           1            1         99.98 GB    0.01 GB
/dev/vg27   1            1           1            1         49.99 GB    0.01 GB

The volume group /dev/vg00 contained 2 physical volumes and 10 logical volumes. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 136.69 gigabytes, of which 49.23 percent was allocated and 50.77 percent was free.

The volume group /dev/vg01 contained 2 physical volumes and 3 logical volumes. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 136.71 gigabytes, of which 64.37 percent was allocated and 35.63 percent was free.

The volume group /dev/vg04 contained 2 physical volumes and 1 logical volume. All of the physical volumes were active. The size of the group was 249.95 gigabytes, of which 100.00 percent was allocated.

The volume group /dev/vg21 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 149.98 gigabytes, of which 99.99 percent was allocated and 0.01 percent was free.

The volume group /dev/vg06 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 79.98 gigabytes, of which 100.00 percent was allocated and 0.00 percent was free.

The volume group /dev/vg07 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 149.98 gigabytes, of which 100.00 percent was allocated.

The volume group /dev/vg09 contained 2 physical volumes and 1 logical volume. All of the physical volumes were active. The size of the group was 19.99 gigabytes, of which 99.94 percent was allocated and 0.06 percent was free.

The volume group /dev/vg12 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 10.00 gigabytes, of which 100.00 percent was allocated.

The volume group /dev/vg25 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 20.00 gigabytes, of which 99.98 percent was allocated and 0.02 percent was free.

The volume group /dev/vg15 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 39.99 gigabytes, of which 99.99 percent was allocated and 0.01 percent was free.

The volume group /dev/vg19 contained 1 physical volume and 2 logical volumes. All of the logical volumes were open. The size of the group was 6.00 gigabytes, of which 99.93 percent was allocated and 0.07 percent was free.

The volume group /dev/vg05 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 20.00 gigabytes, of which 99.98 percent was allocated and 0.02 percent was free.

The volume group /dev/vg22 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 59.99 gigabytes, of which 99.99 percent was allocated and 0.01 percent was free.

The volume group /dev/vg23 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 159.98 gigabytes, of which 100.00 percent was allocated and 0.00 percent was free.

The volume group /dev/vg20 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 399.94 gigabytes, of which 100.00 percent was allocated and 0.00 percent was free.

The volume group /dev/vg24 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 20.00 gigabytes, of which 99.98 percent was allocated and 0.02 percent was free.

The volume group /dev/vg26 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 99.98 gigabytes, of which 99.99 percent was allocated and 0.01 percent was free.

The volume group /dev/vg27 contained 1 physical volume and 1 logical volume. All of the physical volumes were active and all of the logical volumes were open. The size of the group was 49.99 gigabytes, of which 99.98 percent was allocated and 0.02 percent was free.
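
The allocation percentages in the paragraphs above can be reproduced from the Total Size and Free Space columns of the Volume Group Statistics table. Because the table rounds sizes to hundredths of a gigabyte, the computed values may differ from the narrative by a hundredth of a percent. For example:

    # Allocated/free percentages derived from the volume group table above.
    vgs = {                                  # a few rows, as (total GB, free GB)
        "/dev/vg00": (136.69, 69.39),
        "/dev/vg01": (136.71, 48.71),
        "/dev/vg21": (149.98, 0.01),
    }
    for name, (total, free) in vgs.items():
        pct_free = 100.0 * free / total
        print(f"{name}: {100.0 - pct_free:.2f}% allocated, {pct_free:.2f}% free")
    # /dev/vg01 -> 64.37% allocated, 35.63% free, matching the narrative above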

The average system-wide local I/O rate as measured by the r+w/s column in sar -d was 517.6 per second. This I/O rate peaked at 2660 from 00:50:00 to 01:00:00.

Graph of total disk I/O rate

The following graph shows the average and peak percent busy and service time for 5 disks, sorted by percent busy.

Graph of up to 5 busiest disks

Note: 22 disks were present. By default, the presence of more than 12 disks causes SarCheck to only report on the busiest disks. This is meant to control the verbosity of this report. To see all disks included in the report, use the -d option.

The -dbusy switch has been used to sort the disk analysis by the average percent of time the disk was busy. The -dtoo switch has been used to format statistics into the following table.

Disk Device Statistics
Results sorted by Average Percent Busy

Disk Device  Vol Group  Avg Pct Busy  Peak Pct Busy  Avg Queue Depth  Avg Svc Time  Disk Size                 Pct Allocated  Fragmented LVs?
c17t0d2      /dev/vg04  30.74 %       99.48 %        0.6              3.7 ms        199.97 GB (25596 blocks)  100.00 %       No
c0t6d0       /dev/vg00  6.72 %        95.64 %        2.4              7.2 ms        68.34 GB (4374 blocks)    98.47 %        No
c17t0d6      /dev/vg07  5.46 %        99.06 %        13.9             2.0 ms        149.98 GB (19197 blocks)  100.00 %       No
c17t2d2      /dev/vg15  3.88 %        85.70 %        0.6              3.1 ms        39.99 GB (10238 blocks)   99.99 %        No
c17t3d4      /dev/vg20  3.15 %        99.88 %        7.9              15.3 ms       399.94 GB (25596 blocks)  100.00 %       No

The disk device c17t0d2 was busy an average of 30.74 percent of the time and had an average queue depth of 0.6 (when occupied). This disk device was occasionally more than 50.0 percent busy, which indicates the possibility of an intermittent disk I/O bottleneck that may cause periods of performance degradation. During the peak interval from 00:10:00 to 00:20:00, the disk was 99.48 percent busy. Peak disk busy statistics can be used to help understand performance problems. If performance was worst when the disk was busiest, then that disk may be the performance bottleneck. The average service time reported for this device and its accompanying disk subsystem was 3.7 milliseconds. This is indicative of a very fast disk or a disk controller with cache. Service time is the delay between the time a request was sent to a device and the time that the device signaled completion of the request. The disk device c17t0d2 was reported by pvdisplay as being a 199.97 gigabyte disk. All of the space on this disk device has been allocated. This disk device was a part of volume group /dev/vg04 and contained 1 logical volume.
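
The "occasionally more than 50.0 percent busy" remark is based on a per-interval test of the %busy column reported by sar -d. A minimal sketch of that idea, using made-up sample data shaped like the c17t0d2 figures quoted above:

    # Flag sar -d intervals in which a device exceeded a %busy threshold --
    # the heuristic behind the "intermittent disk I/O bottleneck" remark.
    THRESHOLD = 50.0

    def busy_intervals(samples, threshold=THRESHOLD):
        """samples: list of (interval start time, percent busy) for one device."""
        return [(t, b) for t, b in samples if b > threshold]

    samples = [("00:10:00", 99.48), ("00:20:00", 35.1), ("17:20:00", 51.2)]   # hypothetical
    for t, b in busy_intervals(samples):
        print(f"{t}: {b:.2f}% busy")    # intervals worth correlating with slow periods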

The disk device c0t6d0 was busy an average of 6.72 percent of the time and had an average queue depth of 2.4 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 01:40:00 to 01:50:01, the disk was 95.64 percent busy. The average service time reported for this device and its accompanying disk subsystem was 7.2 milliseconds. This is relatively fast. Queue depth on this device peaked at an unlikely 83.4. This data is surprising and may indicate a problem with the sar -d statistics. The disk device c0t6d0 was reported by pvdisplay as being a 68.34 gigabyte disk. 1072 megabytes of space was reported as being free and 68912 megabytes have been allocated. This disk device was a part of volume group /dev/vg00 and contained 10 logical volumes. Each logical volume on this disk resided in a contiguous block of physical extents. This is the most efficient layout for a logical volume.

The disk device c17t0d6 was busy an average of 5.46 percent of the time and had an average queue depth of 13.9 (when occupied). This usage pattern is typical of that generated by sync activity. Sync activity refers to efforts made by the sync process to transfer data from the system buffer cache to disk. During the peak interval from 00:50:00 to 01:00:00, the disk was 99.06 percent busy. The average service time reported for this device and its accompanying disk subsystem was 2.0 milliseconds. This is indicative of a very fast disk or a disk controller with cache. Queue depth on this device peaked at an unlikely 77.0. This data is surprising and may indicate a problem with the sar -d statistics. The disk device c17t0d6 was reported by pvdisplay as being a 149.98 gigabyte disk. All of the space on this disk device has been allocated. This disk device was a part of volume group /dev/vg07 and contained 1 logical volume.

The disk device c17t2d2 was busy an average of 3.88 percent of the time and had an average queue depth of 0.6 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 02:10:01 to 02:20:01, the disk was 85.70 percent busy. The average service time reported for this device and its accompanying disk subsystem was 3.1 milliseconds. This is indicative of a very fast disk or a disk controller with cache. The disk device c17t2d2 was reported by pvdisplay as being a 39.99 gigabyte disk. 4 megabytes of space was reported as being free and 40948 megabytes have been allocated. This disk device was a part of volume group /dev/vg15 and contained 1 logical volume.

The disk device c17t3d4 was busy an average of 3.15 percent of the time and had an average queue depth of 7.9 (when occupied). This usage pattern is typical of that generated by sync activity. During the peak interval from 03:10:00 to 03:20:00, the disk was 99.88 percent busy. The average service time reported for this device and its accompanying disk subsystem was 15.3 milliseconds. This service time is acceptable. The disk device c17t3d4 was reported by pvdisplay as being a 399.94 gigabyte disk. 16 megabytes of space was reported as being free and 409520 megabytes have been allocated. This disk device was a part of volume group /dev/vg20 and contained 1 logical volume.

At 15:00:00 ps -elf data indicated that there were 1790 processes present. This was the largest number of processes seen with ps -elf but it is not likely to be the absolute peak because the operating system does not store the true "high-water mark" for this statistic. There were an average of 1720.4 processes present.

Graph of the number of processes present

A possible memory leak was seen in /usr/opt/dlc/v101b/bin/_progres, owned by rmilbur, pid 26084. Between 13:50:00 and 14:20:00, this process grew from 8401 to 9169 pages. Memory usage grew at an average rate of 1536.0 pages/hr during that interval.

A possible memory leak was seen in /usr/opt/dlc/v101b/bin/_progres, owned by tpierce, pid 8340. Between 14:40:00 and 15:20:00, this process grew from 7120 to 8144 pages. Memory usage grew at an average rate of 1536.0 pages/hr during that interval.

A possible memory leak was seen in /usr/opt/dlc/v101b/bin/_progres, owned by mmcphail, pid 20292. Between 13:40:00 and 14:20:00, this process grew from 6864 to 7888 pages. Memory usage grew at an average rate of 1536.0 pages/hr during that interval.

A possible memory leak was seen in /usr/opt/dlc/v101b/bin/_progres, owned by jfurlon, pid 4998. Between 14:20:00 and 14:50:00, this process grew from 7376 to 8401 pages. Memory usage grew at an average rate of 2050.0 pages/hr during that interval.

A possible memory leak was seen in /usr/opt/dlc/v101b/bin/_progres, owned by ckneafs, pid 16628. Between 14:00:00 and 14:30:00, this process grew from 6352 to 7888 pages. Memory usage grew at an average rate of 3072.0 pages/hr during that interval.

A possible memory leak was seen in /usr/opt/dlc/v101b/bin/_progres, owned by jomoore, pid 5012. Between 15:00:00 and 15:30:00, this process grew from 7120 to 7633 pages. Memory usage grew at an average rate of 1026.0 pages/hr during that interval.

A possible memory leak was seen in /usr/opt/dlc/v101b/bin/_progres, owned by nbalfou, pid 15302. Between 13:40:00 and 14:08:19, this process grew from 6608 to 7376 pages. Memory usage grew at an average rate of 1627.3 pages/hr during that interval.

A possible memory leak was seen in /usr/opt/dlc/v101b/bin/_progres, owned by msmith1, pid 6605. Between 13:40:00 and 14:50:00, this process grew from 7120 to 9680 pages. Memory usage grew at an average rate of 2194.3 pages/hr during that interval.

A possible memory leak was seen in /usr/opt/dlc/v101b/bin/_progres, owned by nlester, pid 14469. Between 14:20:00 and 14:50:00, this process grew from 5840 to 7120 pages. Memory usage grew at an average rate of 2560.0 pages/hr during that interval.

A possible memory leak was seen in /usr/opt/dlc/v101b/bin/_progres, owned by vtindam, pid 3691. Between 14:30:00 and 15:00:00, this process grew from 7632 to 8144 pages. Memory usage grew at an average rate of 1024.0 pages/hr during that interval.

The reporting of processes with possible memory leaks has been suppressed because the threshold of 10 processes was reached. To report them all, use the -dcml switch.
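
The pages-per-hour rates above come from the change in a process's size between two ps -elf samples (the SZ column, reported in pages) divided by the elapsed time. A small check using the first entry's figures:

    # Memory growth rate, in pages per hour, between two ps -elf samples.
    from datetime import datetime

    def growth_pages_per_hr(t1, sz1, t2, sz2):
        fmt = "%H:%M:%S"
        hours = (datetime.strptime(t2, fmt) - datetime.strptime(t1, fmt)).total_seconds() / 3600.0
        return (sz2 - sz1) / hours

    # pid 26084 above: 8401 -> 9169 pages between 13:50:00 and 14:20:00
    print(growth_pages_per_hr("13:50:00", 8401, "14:20:00", 9169))   # 1536.0 pages/hr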

Heavy CPU usage was seen in /usr/opt/dlc/v101b/bin/_progres, owned by llovell, pid 1180. Between 13:40:00 and 16:00:00, 2607 seconds of CPU time were used. CPU utilization by this process averaged 31.04 percent of a single processor during that interval.

Heavy CPU usage was seen in /usr/opt/dlc/v101b/bin/_progres, owned by jjyoung, pid 1101. Between 15:10:00 and 15:30:00, 258 seconds of CPU time were used. CPU utilization by this process averaged 21.50 percent of a single processor during that interval.
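
The per-process utilization figures are accumulated CPU seconds (the ps -elf TIME column) divided by the wall-clock length of the interval, expressed as a percentage of one processor:

    # Average utilization of a single CPU by one process over an interval.
    def pct_of_one_cpu(cpu_seconds, start_hms, end_hms):
        to_s = lambda hms: sum(int(x) * m for x, m in zip(hms.split(":"), (3600, 60, 1)))
        return 100.0 * cpu_seconds / (to_s(end_hms) - to_s(start_hms))

    # pid 1180 above: 2607 CPU seconds between 13:40:00 and 16:00:00
    print(f"{pct_of_one_cpu(2607, '13:40:00', '16:00:00'):.2f}%")   # prints 31.04%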

The following is a list of the top CPU using processes.

  • Process 1180, command /usr/opt/dlc/v101b/bin/_progres, used 2607 seconds of CPU time.
  • Process 25675, command /usr/opt/dlc/v101b/bin/_progres, used 748 seconds of CPU time.
  • Process 19704, command /opt/perf/bin/midaemon, used 568 seconds of CPU time.
  • Process 27331, command /usr/opt/dlc/v101b/bin/_progres, used 488 seconds of CPU time.
  • Process 5557, command /usr/opt/dlc/v101b/bin/_progres, used 469 seconds of CPU time.
  • Process 8082, command /usr/opt/dlc/v101b/bin/_progres, used 458 seconds of CPU time.
  • Process 29731, command /usr/opt/dlc/v101b/bin/_progres, used 291 seconds of CPU time.
  • Process 4998, command /usr/opt/dlc/v101b/bin/_progres, used 283 seconds of CPU time.
  • Process 5081, command /usr/opt/dlc/v101b/bin/_progres, used 232 seconds of CPU time.
  • Process 7430, command /usr/opt/dlc/v101b/bin/_progres, used 220 seconds of CPU time.
CAPACITY PLANNING SECTION

This section is designed to provide the user with a rudimentary linear capacity planning model and should be used for rough approximations only. These estimates assume that an increase in workload will affect the usage of all resources equally. These estimates should be used on days when the load is heaviest to determine approximately how much spare capacity remains at peak times.

Based on the limited data available in this single sar report, the system cannot support an increase in workload at peak times without some loss of performance or reliability, and the bottleneck is likely to be disk I/O. Implementation of some of the suggestions in the recommendations section may help to increase the system's capacity.

Graph of remaining room for growth

The CPU can support an increase in workload of approximately 76 percent at peak times. The amount of memory present should be able to support a greater load. The busiest disk can support a workload increase of approximately 0 percent at peak times. For more information on peak CPU and disk utilization, refer to the Resource Analysis section of this report.

The process table, controlled by the parameter 'nproc', can support at least a 100 percent increase in the number of entries. The file table, controlled by the parameter 'nfile', can support approximately a 67 percent increase in the number of entries.
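
These headroom estimates are consistent with a simple linear model: compare each resource's peak utilization with a target ceiling and see how much the load could grow before the ceiling is reached. The ceilings used below (roughly 90 percent for CPU and disk, 80 percent for the kernel tables) are assumptions chosen because they reproduce this report's figures, not documented SarCheck settings.

    # Rudimentary linear capacity-planning arithmetic using the peaks
    # reported earlier. The ceiling values are assumptions, not SarCheck's own.
    def headroom_pct(peak, ceiling):
        return max(0.0, 100.0 * (ceiling / peak - 1.0))

    print("CPU           :", round(headroom_pct(51.0, 90.0)), "%")            # about 76
    print("busiest disk  :", round(headroom_pct(99.88, 90.0)), "%")           # 0
    print("process table :", round(headroom_pct(1827, 0.8 * 14432)), "%")     # well over 100
    print("file table    :", round(headroom_pct(271134, 0.8 * 567150)), "%")  # about 67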

CUSTOM SETTINGS SECTION

The default MLTIME threshold was changed in the sarcheck_parms file from 7195 to 1500 seconds. This value is likely to compromise the accuracy of the analysis.

The PSELFDIR keyword was used to change the ps directory to /tmp.

The date format used in this report is yyyy/mm/dd. It was set in the sarcheck_parms file.

Please note: In no event can Aptitune Corporation be held responsible for any damages, including incidental or consequent damages, in connection with or arising out of the use or inability to use this software. All trademarks belong to their respective owners. Evaluation copy for: Your Company. This software expires on 2010/07/02 (yyyy/mm/dd). SC9000 Code version: 7.01.06. Serial number: 00074231.

Thank you for trying this evaluation copy of SarCheck. To order a licensed version of this software, just type 'analyze9000 -o' at the prompt to produce the order form, and follow the instructions.

(c) copyright 1995-2010 by Aptitune Corporation, Portsmouth NH, USA, All Rights Reserved. http://www.sarcheck.com

Statistics for system signa2A
(peak values show the start and end of the peak interval and its date)

System model number: "9000/800/rp8420"
Statistics collected on: 2010/01/26
Average CPU utilization: 19.9%
Peak CPU utilization: 51% (17:20:00 to 17:30:01, 2010/01/26)
Average user CPU utilization: 13.6%
Average sys CPU utilization: 6.2%
Average waiting for I/O: 7.4%
Average run queue depth: 1.0
Peak run queue depth: 2.0 (20:50:00 to 21:00:00, 2010/01/26)
Average swap queue occupancy: 0.0%
Average swap out rate: 0.00 / sec
Peak pct swap space used: 36.6% (15:40:00, 2010/01/26)
Peak MB swap space used: 22465.43 (15:40:00, 2010/01/26)
Average cache read hit ratio: 67.0%
Average cache write hit ratio: 87.1%
Disk device w/highest peak: c17t3d4
Avg pct busy for that disk: 3.15%
Peak pct busy for that disk: 99.88% (03:10:00 to 03:20:00, 2010/01/26)
Avg number of processes seen by ps: 1720.4
Max number of processes seen by ps: 1790 (15:00:00, 2010/01/26)
Percent of process tbl used: 12.7%
Process table overflows: No
Percent of file table used: 47.8%
File table overflows: No
Inode cache pct of time full: 0.0%
Inode cache overflows: No
Approx CPU capacity remaining: 76.5%
Approx I/O bandwidth remaining: 0.0%
Remaining process tbl capacity: 100%+
Remaining file table capacity: 100%+
Can memory support add'l load: Yes