SarCheck®: Automated Analysis of AIX sar and ps data

(English text version 7.01.18)


NOTE: This software is scheduled to expire on 07/13/2010 and has not yet been tied to your system's Machine ID. To permanently activate SarCheck, please run /opt/sarcheck/bin/analyze -o and send the output to us so that we can generate an activation key for you.

This is an analysis of the data contained in the file aixsar14. The data was collected on 05/14/2010, from 09:00:00 to 17:00:01, from the system 'aixsysx1'. There were 48 data records used to produce this analysis. The operating system used to produce the sar report was Release 5.3 of AIX. The system configuration data in the sar report indicated that 4.0 processors were configured. 2048 megabytes of memory were reported by the rmss utility.

Data collected by the ps -elf command on 05/14/2010 from 09:00:01 to 17:00:00, and stored in the file aixps0514, will also be analyzed. This program will attempt to match the starting and ending times of the ps -elf data with those of the sar report file named aixsar14.

Data collected by the lspv and lslv commands and stored in the file aixfs0514 will also be analyzed.
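
For reference, data of this kind is typically collected with standard AIX utilities. The following is an illustrative sketch only; the intervals, counts, and file paths shown are assumptions, and the actual collection scripts used on 'aixsysx1' may differ.

    # Collect 48 ten-minute sar samples (roughly 09:00 to 17:00)
    sar -A 600 48 > /tmp/aixsar14

    # Snapshot process data once per interval over the same window
    i=0
    while [ $i -lt 48 ]; do
        date >> /tmp/aixps0514
        ps -elf >> /tmp/aixps0514
        sleep 600
        i=$((i + 1))
    done

    # Capture physical and logical volume configuration
    lspv >> /tmp/aixfs0514
    lspv -l hdisk0 >> /tmp/aixfs0514      # logical volumes on one disk
    lslv hd2 >> /tmp/aixfs0514            # attributes of one logical volume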

Table of Contents

SUMMARY
RECOMMENDATIONS SECTION
RESOURCE ANALYSIS SECTION
CAPACITY PLANNING SECTION
CUSTOM SETTINGS SECTION

SUMMARY

When the data was collected, no CPU bottleneck could be detected. A memory bottleneck was seen. No significant I/O bottleneck was seen. A change to at least one tunable parameter has been recommended. Limits to future growth have been noted in the Capacity Planning section.

NOTE: The file /opt/sarcheck/etc/sarcheck_parms was seen but no changes have been made to the thresholds used by SarCheck's rules and algorithms. This does not indicate a problem and the file is probably being used to control SarCheck's menu defaults.

RECOMMENDATIONS SECTION

All recommendations contained in this report are based on the conditions which were present when the performance data was collected. It is possible that conditions which were not present at that time may cause some of these recommendations to result in worse performance. To minimize this risk, analyze data from several different days, implement only regularly occurring recommendations, and implement them one at a time or as groups of related parameters.

Additional memory may improve performance. The recommendation to add memory was triggered by the following conditions: the peak rate of page-outs to the paging spaces, and the rate at which the page stealer was scanning memory. If possible, borrow some memory for test purposes, and monitor system performance and resource utilization before and after its installation.
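
One way to follow that advice is to compare a few memory indicators before and after the change. The commands below are a generic sketch for that purpose and are not produced or required by SarCheck.

    # Paging space usage and cumulative paging activity
    lsps -s
    vmstat -s | grep -E "paging space page outs|revolutions"

    # Overall memory picture (size, free frames, pinned memory)
    svmon -G

    # Interval view of paging while the workload runs
    vmstat 60 10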

NOTE: The following 3 vmo changes should be made all at once.

Change the value of the lru_file_repage parameter from 1 to 0 with the command 'vmo -o lru_file_repage=0'. The -o flag changes the value of a parameter only until the next reboot. To make the change permanent, use the command 'vmo -p -o lru_file_repage=0'. The lru_file_repage parameter is used to change the algorithms used by the LRUD (page stealing daemon).

Change the value of the maxclient% parameter from 30 to 90 with the command 'vmo -o maxclient%=90'. The -o flag changes the value of a parameter only until the next reboot. To make the change permanent, use the command 'vmo -p -o maxclient%=90'.

Change the value of the maxperm% parameter from 30 to 90 with the command 'vmo -o maxperm%=90'. The -o flag changes the value of a parameter only until the next reboot. To make the change permanent, use the command 'vmo -p -o maxperm%=90'.

This is the end of this set of vmo parameter changes that should be implemented together.
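
If it is more convenient to apply the set in a single step, vmo can generally be given several -o options in one invocation. The command below is a sketch of that approach using the values recommended above; verify the syntax on the target system before relying on it.

    # Apply all three recommended changes at once and make them persistent
    vmo -p -o lru_file_repage=0 -o maxclient%=90 -o maxperm%=90

    # Confirm the new settings
    vmo -a | grep -E "lru_file_repage|maxclient%|maxperm%"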

Change the value of maxfree from 1344 to 1136 with the command 'vmo -o maxfree=1136'. The -o flag changes the value of a parameter only until the next reboot. To make the change permanent, use the command 'vmo -p -o maxfree=1136'. The value for minfree was 960 and the value for maxpgahead was 8. The j2_maxPageReadAhead value used was 128. The value of lcpu reported by sar was 4.0. The number of memory pools seen was 3. The delta between minfree and maxfree was increased to make it a multiple of 16.
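
For readers who want to see where 1136 comes from: the figures above are consistent with the common guideline that maxfree should exceed minfree by roughly the larger of maxpgahead and j2_maxPageReadAhead, multiplied by the number of logical CPUs and divided by the number of memory pools, rounded here up to a multiple of 16. The arithmetic below is an illustrative reconstruction under that assumption, not SarCheck's published formula.

    minfree=960; readahead=128; lcpu=4; mempools=3
    delta=$(( (readahead * lcpu + mempools - 1) / mempools ))   # 512 / 3, rounded up = 171
    delta=$(( ((delta + 15) / 16) * 16 ))                       # next multiple of 16 = 176
    echo $(( minfree + delta ))                                 # 960 + 176 = 1136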

Change the value of the j2_nPagesPerWriteBehindCluster parameter from 64 to 128 with the command 'ioo -o j2_nPagesPerWriteBehindCluster=128'. The -o flag changes the value of a parameter only until the next reboot. To make the change permanent, use the command 'ioo -p -o j2_nPagesPerWriteBehindCluster=128'. This recommendation will increase the number of pages per cluster processed by the write behind algorithm. Since sequential JFS2 write activity can be inferred from the statistics, keeping a few additional pages in memory is likely to improve performance.
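
Before and after making the change, the current value of the tunable can be displayed as shown below. This is a generic sketch rather than SarCheck output.

    # Display the current value
    ioo -o j2_nPagesPerWriteBehindCluster

    # Apply the change temporarily, then verify it took effect
    ioo -o j2_nPagesPerWriteBehindCluster=128
    ioo -o j2_nPagesPerWriteBehindCluster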

RESOURCE ANALYSIS SECTION

An average of 25.45 percent of this partition's entitled CPU capacity (%entc) was used during the monitoring period. The percentage peaked at 39.40 from 10:00:01 to 10:10:01. There were 0.46 physical processors in use when the percentage of entitled CPU capacity was at its peak.

Graph of physical processors consumed

The average number of physical processors consumed by this partition (physc) was 0.30. The peak number of physical processors consumed was 0.46 from 10:00:01 to 10:10:01.

Information in this paragraph is taken from the sar -u report. This information may not be completely accurate on a micropartitioned system and is provided because people are used to seeing it. Average CPU utilization (%usr + %sys) was only 22.6 percent. This indicates that spare CPU capacity exists. If any performance problems were seen during the entire monitoring period, they were not caused by a lack of CPU power. CPU utilization peaked at 36 percent from 10:00:01 to 10:10:01. The CPU was waiting for I/O (%wio) an average of 5.7 percent of the time. The time that the system was waiting for I/O peaked at 11 percent from 09:50:01 to 10:00:01.

Graph of CPU utilization

The preceding graph shows the relationship between %entc data and the sum of %usr and %sys. The %entc data is more accurate and should be used instead of the traditional %usr and %sys metrics. The %wio column is probably not very accurate but higher values are likely to indicate times of greater I/O activity. Because the %usr, %sys, and %wio data is not accurate on micropartitioned systems, it has not been used to calculate the percent of time that the system was idle.
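
The %entc and physc figures discussed above can also be observed directly on a micropartitioned system with lparstat. The commands below are an illustrative way to take a comparable sample; they are not how SarCheck gathered its data.

    # Partition configuration, including entitled capacity
    lparstat -i

    # Ten-minute samples showing %user, %sys, %wait, %idle, physc and %entc
    lparstat 600 48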

The minimum multiprogramming level (v_min_process in schedo) has been set to 0. This is a safe value for small configurations and may be low for larger configurations. This parameter is very dependent on the workload, and the correct value cannot be determined with sar and ps data. A memory shortage has been seen, and a value which is too low may cause performance problems. More information can be found on the web by using your favorite search engine.

The average rate of System V semaphore calls (sema/s) was 5.1 per second. System V semaphore activity (sema/s) peaked at a rate of 5.12 per second from 11:40:01 to 11:50:01. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst during the period of peak semaphore activity, then that activity may be a performance bottleneck and application or database activity related to semaphore usage should be looked at more closely. No problems have been seen, and no changes have been recommended for System V semaphore parameters. Note that SarCheck only checks these parameters' relationships to each other because semaphore usage data is not available.
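
If semaphore activity needs to be investigated further, the System V IPC facilities currently in use can be listed with ipcs. This is a generic illustration, not something SarCheck runs.

    # List active System V semaphore sets and their owners
    ipcs -s

    # List all IPC facilities (shared memory, message queues, semaphores)
    ipcs -a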

The average rate of System V message calls (msg/s) was 0.060 per second. No problems have been seen, and no changes have been recommended for System V message parameters. Note that SarCheck only checks these parameters' relationships to each other because message usage data is not available.

There were no times when enforcement of the process threshold limit (kproc-ov) prevented the creation of kernel processes. This indicates that no problems were seen in this area.

The ratio of exec to fork system calls was 0.90. This indicates that PATH variables are efficient.

No buffer cache activity was seen in the sar -b data. This is normal for AIX systems, which typically do not use the traditional buffer cache.

A serious memory bottleneck was seen.

There was no indication of swapped out processes in the ps -elf data. Processes which have been swapped out are usually found only on systems that have a very severe memory shortage.

The average number of page replacement cycles per second calculated from the vmstat -s data was 1.76. The number of page replacement cycles per second (from vmstat -s) peaked at 2.51 from 09:20:00 to 09:40:01. This means that the page stealer was scanning memory at a rate of roughly 5148 megabytes per second during the peak. We are collecting data on this statistic and have not yet been able to quantify when this value is high enough to indicate a problem. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst during the period of peak replacement cycle activity, then a shortage of physical memory may be a performance bottleneck.

The average number of kernel threads waiting to be paged in (swpq-sz) was 1.57. The average number of kernel threads waiting to be paged in (swpq-sz) peaked at 2.1 from 09:50:01 to 10:00:01. When the peak was reached, the swap queue was occupied 42 percent of the time. A more useful statistic is sometimes available by multiplying the swpq-sz data by the percent of time the queue was occupied. In this case, the average was 0.34 and the peak was 0.88 from 09:50:01 to 10:00:01. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst when the number of kernel threads waiting to be paged in was at its peak, then a shortage of physical memory may be a performance bottleneck.

The following graph shows any significant statistics relating to page replacement cycle rate, number of kernel threads waiting to be paged in, and number of swapped processes. The page cycle replacement rate has been calculated using the "revolutions of the clock hand" field reported by vmstat -s.
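
As a rough illustration of how a page replacement cycle rate can be derived from that field, two vmstat -s snapshots can be differenced over a known interval and the result scaled by the size of real memory. This sketch shows the general technique; it is not SarCheck's internal calculation.

    # Take two snapshots of the clock-hand counter ten minutes apart
    vmstat -s | grep "revolutions of the clock hand"
    sleep 600
    vmstat -s | grep "revolutions of the clock hand"

    # (second value - first value) / 600 = cycles per second. Multiplying by
    # real memory (2048 megabytes here) approximates the rate at which the
    # page stealer sweeps memory; the peak of 2.51 cycles per second
    # corresponds to the scan rate reported above.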

Graph of page replacement cycle rate and swap queue size

The average page out rate to the paging spaces was 9.90 per second. The paging space page out rate peaked at 62.59 from 09:40:01 to 10:00:00. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst when the paging space page out rate was at its peak, then a shortage of physical memory may be a performance bottleneck. The following graph shows the rate of paging operations to the paging spaces.

Graph of paging space page in and page out rate

There was 1 paging space seen with the lsps -a command. The size of the paging space was 10240 megabytes and the size of physical memory was 2048 megabytes. At 11:00:00 paging space usage peaked at approximately 1536.0 megabytes, which is about 15 percent of the page space available.

Graph of paging space usage

The recorded setting for maxpin% leaves 409.60 megabytes of memory unpinnable. A memory-poor environment was seen even though most of the system's memory was not actually pinned.

Graph of the percentage of pinned memory

The average rate at which I/O was blocked because the LVM had to wait for pbufs was 0.0092 per second. The peak rate was 0.12 per second from 16:40:00 to 17:00:00. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst when the LVM had to wait for pbufs, then an insufficient number of pbufs may have been the problem.

Graph of the rate of blocked I/Os

The above graph shows that the rate of I/O blocking is small. The problem does not seem to be significant enough to justify any recommendations and the graph is presented to show the small amount of blocking that was seen.
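
Should pbuf blocking become more significant on another day, the counters below are one way to confirm it. The commands are offered as a sketch, not a recommendation; the volume group name is taken from the disk tables later in this report.

    # System-wide count of I/Os blocked because no pbuf was available
    vmstat -v | grep "blocked with no pbuf"

    # Per-volume-group pbuf settings and blocked I/O count (AIX 5.3 and later)
    lvmo -v rootvg -a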

The average context switch rate (cswch/s) was 3412.40 per second. The context switch rate (cswch/s) peaked at 7464 per second from 11:10:01 to 11:20:01. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst during the period of peak context switching, then a problem may be that too many processes were blocked for I/O or IPC.

The following graph and table show the relationship between used memory, maxperm%, maxclient%, numperm, numclient, and minperm%. Because the values of maxperm% and maxclient% are the same, only one can be seen in the graph.

Graph of memory usage based on various vmo parameters

VMM Statistics

Metric                                                                           Average  Range
Memory in use: the percentage of memory used for either file or non-file pages   98.3%    98.1 - 98.4
Non-file: what IBM frequently calls 'computational memory'                       92.5%    89.9 - 93.3
numperm: memory holding all cached file pages (JFS, JFS2, GPFS, NFS, etc.)       5.8%     4.8 - 8.4
numclient: memory holding all cached file pages except JFS                       5.8%     4.8 - 8.4

Parameter   Value
maxperm%    30.0
maxclient%  30.0
minperm%    5.0
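
The numperm and numclient percentages in the table can also be observed on the running system, which is useful when checking whether the maxperm% and maxclient% changes recommended earlier behave as expected. The commands below are a generic sketch.

    # Current file-cache percentages reported by the VMM
    vmstat -v | grep -E "numperm percentage|numclient percentage"

    # Current values of the related tunables
    vmo -a | grep -E "minperm%|maxperm%|maxclient%"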

No I/O bottleneck was seen in the sar statistics, so no changes are recommended for maxpgahead. The value of minpgahead was set to 2. Small values like this typically work best in most environments.

No I/O bottleneck was seen in the sar statistics, so no changes are recommended for j2_maxPageReadAhead. The value of j2_minPageReadAhead was set to 2. Small values like this typically work best in most environments.

The value of numclust is 1. If fast disk devices, disk arrays, or striped logical volumes are in use, the performance of disk writes could be improved by increasing this value. SarCheck does not have access to enough information about the system's disk devices to make any specific recommendation for tuning numclust.

The value of maxrandwrt was 0. This value causes random JFS writes to stay in RAM until a sync operation.

The value of j2_nPagesPerWriteBehindCluster was 64. This value determines the number of additional pages to be kept in RAM before scheduling them for I/O when the pattern is sequential. A change to the value of j2_nPagesPerWriteBehindCluster has been made to keep additional pages in RAM.

The value of j2_maxRandomWrite was 0. This value causes random JFS2 writes to stay in RAM until a sync operation.

The value of j2_nRandomCluster was 0. This value determines how far apart writes need to be to be considered random by the JFS2 write behind algorithm.

The average system-wide local I/O rate as measured by the r+w/s column in the sar -d data was 677.1 per second. This I/O rate peaked at 1673 per second from 11:10:01 to 11:20:01. The average size of an I/O, based on the r+w/s and kb/s columns, was 55.7 kilobytes, or 13.9 pages. The iostat utility reported that 47.8 percent of the disk data transferred was written and the rest was read. Most of the I/O seen on this system was sequential. None of the filesystem activity seen on this system was JFS. Because some filesystems were mounted with CIO or DIO, the values of numperm and numclient will be lower; CIO and DIO do not use the filesystem caching features of the VMM.
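
For clarity, the average I/O size quoted above follows directly from the sar -d columns: kilobytes per second divided by transfers per second, then divided by the 4 KB page size. The worked arithmetic, using the averages reported above, is:

    # average I/O size in KB    = (kb/s column) / (r+w/s column) = 55.7 KB
    # average I/O size in pages = 55.7 KB / 4 KB per page        = 13.9 pages (approximately)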

Graph of Total Disk I/O rate

I/O pacing was not in use. A significant amount of fast I/O was seen to at least one disk device and the I/O rate peaked from 11:10:01 to 11:20:01. Consider turning on I/O pacing if interactive performance or keyboard response problems were seen. This is a technique to limit the amount of I/O that a process can perform, typically as a way of preventing batch jobs from hurting interactive response time when high I/O rates are present.
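
If I/O pacing is tried, it is enabled on AIX by setting the high- and low-water marks on sys0. The values below are illustrative only; appropriate marks depend on the workload and should be tested before being left in place.

    # Check the current pacing settings (0 means pacing is disabled)
    lsattr -El sys0 | grep -E "maxpout|minpout"

    # Enable pacing with example water marks (illustrative values only)
    chdev -l sys0 -a maxpout=33 -a minpout=24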

The -dtoo switch has been used to format filesystem statistics into the following table.

Filesystem Statistics
Filesystem (mounted over)  Type     FS Size    Percent Free  Free Space         Inter policy  Intra policy  MWC        Copies  DIO  CIO
hd5 (N/A)                  boot     -          -             -                  minimum       edge          on/ACTIVE  2       No   No
hd6 (N/A)                  paging   -          -             -                  minimum       middle        off        2       No   No
lg_dumplv (N/A)            sysdump  -          -             -                  minimum       middle        on/ACTIVE  1       No   No
cbclv (/cbc)               jfs2     384.0 MB   44.63 %       171.4 - 178.1 MB   minimum       middle        on/ACTIVE  2       No   No
lvsuport (/suport)         jfs2     256.0 MB   99.21 %       254.0 MB           minimum       middle        on/ACTIVE  2       No   No
lvtech (/tech)             jfs2     128.0 MB   70.86 %       90.7 - 91.2 MB     minimum       middle        on/ACTIVE  2       No   Yes
hd8 (N/A)                  jfs2log  -          -             -                  minimum       center        off        2       No   No
hd4 (/)                    jfs2     128.0 MB   74.42 %       95.3 MB            minimum       center        on/ACTIVE  2       No   No
hd2 (/usr)                 jfs2     2688.0 MB  22.08 %       593.5 MB           minimum       center        on/ACTIVE  2       No   No
hd9var (/var)              jfs2     768.0 MB   79.80 %       612.9 - 613.7 MB   minimum       center        on/ACTIVE  2       No   No
hd3 (/tmp)                 jfs2     384.0 MB   99.43 %       381.8 MB           minimum       center        on/ACTIVE  2       No   No
hd1 (/home)                jfs2     128.0 MB   93.42 %       119.6 MB           minimum       center        on/ACTIVE  2       No   No
hd10opt (/opt)             jfs2     768.0 MB   55.65 %       427.4 MB           minimum       center        on/ACTIVE  2       No   No

Filesystem hd5 mounted on (N/A) was a mirrored boot filesystem. The inter policy was "minimum" which favors availability over performance. The intra policy was "edge". The edge is a good place to put mirrored filesystems or filesystems where most access is sequential, especially when there isn't a lot of activity in the center. DIO and CIO were not used on this filesystem. Write verification was disabled for this logical volume and this favors performance over availability.

Filesystem hd6 mounted on (N/A) was a mirrored paging filesystem. The inter policy was "minimum". The intra policy was "middle". DIO and CIO were not used on this filesystem. Write verification was disabled for this logical volume.

Filesystem lg_dumplv mounted on (N/A) was a sysdump filesystem. The inter policy was "minimum". The intra policy was "middle". The middle is a good place to put data when the center of the disk is full, especially if the data is accessed sequentially. DIO and CIO were not used on this filesystem. Write verification was disabled for this logical volume.

Filesystem cbclv mounted on (/cbc) was a mirrored jfs2 filesystem. Total size of the filesystem was 384.0 megabytes. The filesystem was 44.63 percent free and 171.4 - 178.1 megabytes of space remained. The inter policy was "minimum". The intra policy was "middle". DIO and CIO were not used on this filesystem. Write verification was disabled for this logical volume.

Filesystem lvsuport mounted on (/suport) was a mirrored jfs2 filesystem. Total size of the filesystem was 256.0 megabytes. The filesystem was 99.21 percent free and 254.0 megabytes of space remained. The inter policy was "minimum". The intra policy was "middle". DIO and CIO were not used on this filesystem. Write verification was disabled for this logical volume.

Filesystem lvtech mounted on (/tech) was a mirrored jfs2 filesystem. This filesystem was mounted with CIO which bypasses the filesystem cache and the VMM read-ahead and write-behind algorithms. Concurrent I/O is the same as DIO but without inode locking. Multiple threads can read and write data concurrently to the same file except if the inode itself needs to change. Total size of the filesystem was 128.0 megabytes. The filesystem was 70.86 percent free and 90.7 - 91.2 megabytes of space remained. The inter policy was "minimum". The intra policy was "middle". Write verification was disabled for this logical volume.
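
For comparison, concurrent or direct I/O is normally requested at mount time or in /etc/filesystems. The lines below are a generic illustration of the options involved, not a suggestion to change how any of these filesystems are mounted.

    # Mount a JFS2 filesystem with concurrent I/O (no VMM caching, no inode locking)
    mount -o cio /tech

    # Mount a JFS2 filesystem with direct I/O (no VMM caching, inode locking retained)
    mount -o dio /cbc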

Filesystem hd8 mounted on (N/A) was a mirrored jfs2log filesystem. The inter policy was "minimum". The intra policy was "center". DIO and CIO were not used on this filesystem. Write verification was disabled for this logical volume.

Filesystem hd4 mounted on (/) was a mirrored jfs2 filesystem. Total size of the filesystem was 128.0 megabytes. The filesystem was 74.42 percent free and 95.3 megabytes of space remained. The inter policy was "minimum". The intra policy was "center". DIO and CIO were not used on this filesystem. Write verification was disabled for this logical volume.

Filesystem hd2 mounted on (/usr) was a mirrored jfs2 filesystem. Total size of the filesystem was 2688.0 megabytes. The filesystem was 22.08 percent free and 593.5 megabytes of space remained. The inter policy was "minimum". The intra policy was "center". DIO and CIO were not used on this filesystem. Write verification was disabled for this logical volume.

Filesystem hd9var mounted on (/var) was a mirrored jfs2 filesystem. Total size of the filesystem was 768.0 megabytes. The filesystem was 79.80 percent free and 612.9 - 613.7 megabytes of space remained. The inter policy was "minimum". The intra policy was "center". DIO and CIO were not used on this filesystem. Write verification was enabled for this logical volume and this favors availability over performance.

Filesystem hd3 mounted on (/tmp) was a mirrored jfs2 filesystem. Total size of the filesystem was 384.0 megabytes. The filesystem was 99.43 percent free and 381.8 megabytes of space remained. The inter policy was "minimum". The intra policy was "center". DIO and CIO were not used on this filesystem. Write verification was disabled for this logical volume.

Filesystem hd1 mounted on (/home) was a mirrored jfs2 filesystem. Total size of the filesystem was 128.0 megabytes. The filesystem was 93.42 percent free and 119.6 megabytes of space remained. The inter policy was "minimum". The intra policy was "center". DIO and CIO were not used on this filesystem. Write verification was enabled for this logical volume.

Filesystem hd10opt mounted on (/opt) was a mirrored jfs2 filesystem. Total size of the filesystem was 768.0 megabytes. The filesystem was 55.65 percent free and 427.4 megabytes of space remained. The inter policy was "minimum". The intra policy was "center". DIO and CIO were not used on this filesystem. Write verification was disabled for this logical volume.

The following graph shows the average/peak percent busy and average service time for up to 5 disks, sorted by average service time.

Graph of up to 5 slowest disks

The -dtoo switch has been used to format disk statistics into the following table.

Disk Device Statistics
Results sorted by Average Service Time

Disk Device (vol group)  Avg Pct Busy  Peak Pct Busy  Queue Depth  Avg Svc Time  Disk Size  Percent Unallocated  Unallocated Space  Fragmented LVs?
hdisk1 (rootvg)          2.21          14.0           0.5          6.2           68.25 GB   76.92 %              52.50 GB           Yes
hdisk0 (rootvg)          6.29          26.0           0.4          4.0           68.25 GB   75.46 %              51.50 GB           Yes

Powerpath Disk Device Statistics
Results sorted by Average Service Time

Disk Device (vol group)   Avg Pct Busy  Peak Pct Busy  Queue Depth  Avg Svc Time
hdiskpower1 (oracle_vg1)  29.00         36.0           0.0          0.0

The following disk analysis has been sorted by the average service time of the disks.

Please note that if RAID devices were present, %busy statistics reported for them are likely to be inaccurate and should be viewed skeptically. The presence of a RAID device is generally invisible to the operating system and therefore invisible to this program.

The disk device hdisk1 was busy an average of 2.21 percent of the time and had an average queue length of 0.5 (when occupied). This indicates that the device is not a performance bottleneck. During multiple peak intervals the disk was 14 percent busy. The average service time reported for this device and its accompanying disk subsystem was 6.2 milliseconds. This is relatively fast. Service time is the delay between the time a request was sent to a device and the time that the device signaled completion of the request. The disk device was reported by lspv as being a 68.25 gigabyte disk. There were 52.50 gigabytes of space reported as being free and 0.00 gigabytes have been used. This disk is part of volume group rootvg and contained 0 logical volumes. At least one logical volume occupied noncontiguous physical extents on the disk.

The disk device hdisk0 was busy an average of 6.29 percent of the time and had an average queue length of 0.4 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 09:40:01 to 09:50:01, the disk was 26 percent busy. The average service time reported for this device and its accompanying disk subsystem was 4.0 milliseconds. This is indicative of a very fast disk or a disk controller with cache. The disk device was reported by lspv as being a 68.25 gigabyte disk. There were 51.50 gigabytes of space reported as being free and 0.00 gigabytes have been used. This disk is part of volume group rootvg and contained 13 logical volumes. At least one logical volume occupied noncontiguous physical extents on the disk.

The disk device hdiskpower1 was busy an average of 29.00 percent of the time and had an average queue length of 0.0 (when occupied). This indicates that the device is not a performance bottleneck. During multiple peak intervals the disk was 36 percent busy. The disk device was reported by lspv as being a 33.69 gigabyte disk. This disk is part of volume group oracle_vg1 and contained 0 logical volumes.
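
The noncontiguous physical extents noted for the rootvg disks can be examined, and if desired reorganized, with the standard LVM commands sketched below. reorgvg should only be run during a quiet period and after a backup; nothing in the data collected here requires it.

    # Show which logical volumes occupy each disk and how they are distributed
    lspv -l hdisk0
    lspv -l hdisk1

    # Map the physical extents of one logical volume (hd2 is used as an example)
    lslv -m hd2

    # Reorganize the logical volumes in rootvg according to their intra-disk policies
    reorgvg rootvg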

Data collected by ps -elf indicated that an average of 79.4 processes were present. The peak of 84 processes was seen during multiple time intervals. This was the largest number of processes seen with ps -elf but it is not likely to be the absolute peak because the operating system does not store the true "high-water mark" for this statistic.

Graph of the number of processes present

No runaway processes, memory leaks, or suspiciously large processes were detected in the ps -elf data file.

No table was generated because no unusual resource utilization was seen in the ps -elf data.

ARP packets were sent at an average rate of 0.0008 per second. From 09:00:01 to 09:20:00 the peak was 0.0008 per second. No ARP packets were purged during the monitoring period. There was an average of 0.00 entries per bucket, and each bucket was capable of holding 7 entries.

The number of IP packets received per second averaged 30.56 and peaked at 106.10 per second from 14:40:00 to 15:00:00. No action is required because no packets were fragmented. The number of IP packets sent per second averaged 25.90 and peaked at 93.72 per second from 14:40:00 to 15:00:00. None of the IP packets sent were fragmented so no action is required.

According to netstat statistics, the rate of IP fragments dropped after a timeout averaged 0.00 per second. The value of ipfragttl, the time-to-live for IP fragments, was 2 half-seconds.

The number of TCP packets received per second averaged 27.87 and peaked at 102.74 per second from 14:40:00 to 15:00:00. According to netstat statistics, 0.0087 percent of the packets received were duplicates. Only a small number of duplicate packets were received. This is normal and no action is required. The number of TCP packets sent per second averaged 24.87 and peaked at 92.04 per second from 14:40:00 to 15:00:00. According to netstat statistics, none of the packets were retransmitted. This is normal and no action is required.

According to netstat statistics, no UDP socket buffer overflows were seen during the monitoring period.
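
The netstat counters behind these observations can be reviewed directly if network behavior changes. The commands below are a generic sketch of where to look.

    # Per-protocol counters, including TCP duplicates and retransmits and
    # UDP socket buffer overflows
    netstat -s | more

    # Protocol-specific views
    netstat -p tcp
    netstat -p udp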

CAPACITY PLANNING SECTION

This section is designed to provide the user with a rudimentary linear capacity planning model and should be used for rough approximations only. These estimates assume that an increase in workload will affect the usage of all resources equally. These estimates should be used on days when the load is heaviest to determine approximately how much spare capacity remains at peak times.

Based on the data available, the system cannot support an increase in workload at peak times without some loss of performance or reliability, and the bottleneck is likely to be memory utilization. Implementation of some of the suggestions in the recommendations section may help to increase the system's capacity.

Graph of remaining room for growth

The CPU can support an increase in workload of at least 100 percent at peak times. The level of page outs and/or swapping indicates that the amount of memory present will have trouble supporting any increase in workload at peak times. The busiest disk can support a workload increase of at least 100 percent at peak times. For more information on peak resource utilization, refer to the Resource Analysis section of this report.
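
As an illustration of the linear model used in this section (an assumption about its general form, not SarCheck's exact method), remaining headroom for a device can be estimated from its peak utilization as (100 - peak busy) / peak busy. For the busiest disk, which peaked at 36 percent busy:

    # headroom = (100 - 36) / 36 * 100 = approximately 178 percent,
    # consistent with the statement that the busiest disk can support a
    # workload increase of at least 100 percent at peak times.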

CUSTOM SETTINGS SECTION

The default GNUPLOTDIR value was changed in the sarcheck_parms file to /tmp/usr/local/bin.

The GNUPLOT keyword was found in the parms file and was used to set the gnuplot version to 4.0.

Please note: In no event can Aptitune Corporation be held responsible for any damages, including incidental or consequent damages, in connection with or arising out of the use or inability to use this software. All trademarks belong to their respective owners. This software licensed for the exclusive use of: test. This software expires on 07/13/2010 (mm/dd/yyyy). Code version: 7.01.18. Serial number: 27817288.

This software is updated frequently. For information on the latest version, contact the party from whom SarCheck was originally purchased, or visit our web site.

NOTE: This software appears to be unregistered. Please register with us by printing the registration form using 'analyze -o', filling it out, and sending it to us via snail mail, fax, or email.

(c) copyright 1995-2010 by Aptitune Corporation, Portsmouth NH, USA, All Rights Reserved. http://www.sarcheck.com