First of all, let me say that uptimes listed on a site whose whole purpose is showing off uptimes really don't mean anything, especially when the numbers are so easy to fake. Netcraft is a somewhat better indicator because uptime isn't its primary mission and its numbers are much harder to fake, but I have shown that I can fake not only the uptime Netcraft reports but also the other statistics it gathers (OS and web server). So I have lost some respect even for Netcraft's numbers, although they are more likely to be right.
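Part of the reason the "web server" statistic is so easy to game is that, as I understand it, it is normally taken from the HTTP Server response header, which a server can set to absolutely anything (the OS guess leans partly on that same header plus network fingerprinting, which takes more work to disguise). As a toy sketch only, here is a trivial Python HTTP server that claims to be something it is not; the port number and the header string are arbitrary choices for illustration:

[code]
#!/usr/bin/env python3
# Sketch: the Server header is whatever the server chooses to send,
# so the "web server" a survey site reports is trivially spoofable.
from http.server import BaseHTTPRequestHandler, HTTPServer

class SpoofedHandler(BaseHTTPRequestHandler):
    # These two attributes make up the Server: header Python sends back
    server_version = "Apache/1.3.27 (Unix)"   # pretend to be Apache on UNIX
    sys_version = ""                          # hide the "Python/x.y" suffix

    def do_GET(self):
        body = b"hello\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), SpoofedHandler).serve_forever()
[/code]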
Second of all, average CPU utilization really is not a good indicator of how much a server is used or how important it is. Many important roles take very little CPU. For instance, I have had several Linux machines with multi-year uptimes, but you would never see those uptimes listed anywhere because the machines sit far behind firewalls, in closets, performing very important internal networking tasks.
Things like intranet servers and proxy servers for several thousand clients. These machines just sat in their closets chugging away. Even with massive traffic being processed, filtered, and access granted or denied, the load average may barely register, even on old hardware. If your CPU usage averages above 50%, or even above 25%, you might want to think about putting in a faster machine, or about analyzing other aspects of the configuration to see whether there are ways to make the machine run more efficiently. That is a fairly high average utilization, depending on the type of work the machine is doing.
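As a rough sketch (assuming a Linux box with the standard /proc layout, which is where tools like top and uptime get these numbers anyway), here is one way to read the load average and sample CPU utilization yourself; the one-second sample interval is just an arbitrary choice:

[code]
#!/usr/bin/env python3
# Sketch: read the load average and sample CPU utilization from /proc.
import time

def load_average():
    # /proc/loadavg: 1-, 5-, 15-minute load averages, then run-queue info
    with open("/proc/loadavg") as f:
        one, five, fifteen = f.read().split()[:3]
    return float(one), float(five), float(fifteen)

def cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq ..."
    values = [int(v) for v in open("/proc/stat").readline().split()[1:]]
    idle = values[3] + (values[4] if len(values) > 4 else 0)  # idle + iowait
    return sum(values), idle

def cpu_utilization(interval=1.0):
    # Utilization = share of jiffies not spent idle between two samples
    total1, idle1 = cpu_times()
    time.sleep(interval)
    total2, idle2 = cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

if __name__ == "__main__":
    print("load average (1/5/15 min):", load_average())
    print("CPU utilization over 1s: %.1f%%" % cpu_utilization())
[/code]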
Uptime is not nearly as important as reliability, but it is important. It's nice to know I don't have to schedule an upgrade and a reboot of a critical machine at 2:00am, because most of these things can be done on the fly without rebooting, at least on Linux and most UNIX systems. I have also had servers with close to two years of uptime that do carry higher utilization averages. One of them serves many fairly high-traffic web sites, IMAP/POP mail, streaming video/audio, DNS, and many other functions, and it's only an old Compaq server running Linux. During high-traffic periods the CPU utilization can be up in the 80% range, while the overall average is still under 20%. Even so, that 80% figure is too high, because while the CPU is averaging 80% there are many peaks of 100%, and during those peaks the machine cannot service requests as fast as a machine with more resources and a faster processor could.
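To make the average-versus-peak point concrete, here is another small sketch (again assuming the Linux /proc layout; the 60-second window and one-second interval are arbitrary choices) that reports both the mean and the peak utilization over a minute. A mean under 20% can easily hide repeated 100% peaks:

[code]
#!/usr/bin/env python3
# Sketch: sample /proc/stat once a second and report mean vs. peak CPU use.
import time

def busy_and_total():
    # First line of /proc/stat: cumulative jiffies per CPU state
    v = [int(x) for x in open("/proc/stat").readline().split()[1:]]
    idle = v[3] + (v[4] if len(v) > 4 else 0)   # idle + iowait count as not busy
    return sum(v) - idle, sum(v)

samples = []
prev_busy, prev_total = busy_and_total()
for _ in range(60):                              # one minute of 1-second samples
    time.sleep(1)
    busy, total = busy_and_total()
    samples.append(100.0 * (busy - prev_busy) / (total - prev_total))
    prev_busy, prev_total = busy, total

print("mean: %5.1f%%  peak: %5.1f%%" % (sum(samples) / len(samples), max(samples)))
[/code]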
When tuning a system, your goal is to get it to do the most work with the least CPU utilization. A well-running system will usually have a low load average. One thing that can affect it is not having enough RAM, which causes a lot of paging/swapping and drives up CPU time. But there are *many* places where bottlenecks can occur and keep the system from performing well.
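One quick way to check for that RAM/paging bottleneck (a sketch, assuming Linux and its /proc/vmstat counters; the 10-second window is an arbitrary choice) is to watch the kernel's swap-in/swap-out counters. If they keep climbing under normal load, the box is probably short on RAM:

[code]
#!/usr/bin/env python3
# Sketch: compare two readings of the kernel's swap-in/swap-out counters.
import time

def swap_counters():
    counts = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counts[key] = int(value)
    return counts

before = swap_counters()
time.sleep(10)
after = swap_counters()

for key in ("pswpin", "pswpout"):
    print("%s: %d pages over 10s" % (key, after[key] - before[key]))
[/code]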