New 2005 benchmark coming


Reply to
Wayne Tiffany

How can these things be benchmarks? Every year they change. So the one thing that really needs benchmarking, SW, conveniently is left out of the equation. Hmmmm.

Let's say a drag race was a benchmark: a 1/4-mile timed run from a standing start this year. Next year it's 1000 ft., the next year 1500 ft., and the year after that a 1/4 mile again but down a 1-percent grade. Does anybody think that would give a single bit of data that could be used to compare cars and tuners?

Mike Wilson's ship in a bottle still stands as a reliable, consistent comparison because it hasn't changed.

Let's call these demo-marks.

Wayne Tiffany wrote:


The benchmark is intended to be used in comparing hardware platforms, not for comparing SW versions. It changes "every year" so that users can make comparisons on the most relevant version of the software. That's usually the newest version.

If I'm buying systems for 2005, I wouldn't want to be benchmarking those systems using software designed in 2001. The demands are different. For example, the new benchmark probably has some changes in order to deal with RealView, which the 2003 benchmark couldn't test even if you could run it on 2004.

Reply to
Dale Dunn


SW benchmarks test SW more than the hardware. For instance, SW is terrible at releasing memory. What if, during the run, it overflows to the page file? That's only one flaw; I'm sure others here can think of many more. I certainly wouldn't base a hardware purchasing decision on it.

I agree with P. The ship in a bottle tests computational speed and OpenGL performance. The only other important metric is disk speed, and for that you just buy fast disks. Or if you really want to get fancy, run the SPEC suite; that's a managed standard.





I'm not happy with the benchmarks. In the past, they have done a horrible job of isolating the various hardware subsystems. On that we agree.

I was just trying to explain that they were never meant for comparing versions of SW.


Reply to
Dale Dunn

Why wouldn't we want to benchmark the software?

And how does the benchmark help with legacy hardware?

The original SPECapc benchmark, from which this one was probably derived, was primarily a graphics-card benchmark. Not really that useful for everyday work.

Further, this benchmark is not likely open source like the SPEC benchmarks are, so we can't really be sure what we are testing.

If you are going to be benchmarking hardware, I am not sure how a changing benchmark helps with that. Do you really think that if SW2005 runs certain forced rebuilds 5% slower than SW2004 on the same hardware, this new benchmark is somehow going to tell you that buying a new piece of hardware will make up for the slower running on 2005?
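To put a rough number on that scenario: under the simplifying assumption that total runtime scales with software overhead and inversely with hardware speed (a back-of-the-envelope model, not anything the benchmark itself reports), the break-even point is easy to work out. The function below is purely illustrative:

```python
def net_runtime_factor(software_slowdown: float, hardware_speedup: float) -> float:
    """Relative runtime versus the old software on the old hardware,
    assuming runtime scales with software cost and inversely with
    hardware speed. 1.0 means break-even; below 1.0 is a net win."""
    return (1.0 + software_slowdown) / hardware_speedup

# SW2005 5% slower on the same hardware (no upgrade):
print(net_runtime_factor(0.05, 1.00))  # still about 5% slower overall

# Hardware exactly 5% faster cancels the 5% software slowdown:
print(net_runtime_factor(0.05, 1.05))  # back to break-even
```

In other words, a benchmark run only on the new version can't tell you how much of any speedup came from the hardware and how much was eaten by the software, which is the original complaint.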

The only thing I have found this kind of benchmark good for is burning in hardware and checking relative performance when tuning the system.
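That relative-tuning use is easy to illustrate. Below is a generic fixed-workload timer sketched in Python; it assumes nothing about SolidWorks itself, and the workload function is just a hypothetical stand-in for any deterministic task you rerun before and after a system change:

```python
import time

def time_workload(workload, runs=5):
    """Run a fixed, deterministic workload several times and return the
    best wall-clock time; the minimum is the least noise-contaminated."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in for a fixed rebuild: exactly the same work every run.
def fixed_task():
    return sum(i * i for i in range(200_000))

baseline = time_workload(fixed_task)
# ...change a driver, a BIOS setting, etc., then rerun...
tuned = time_workload(fixed_task)
print(f"relative performance: {baseline / tuned:.2f}x")
```

Because the workload never changes, the ratio is meaningful across runs on the same machine, which is exactly what a benchmark that changes every year can't give you across years.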

Finally, if you look at hardware test websites like AnandTech and Tom's, they tend to use the same benchmarks year after year. There must be a reason.


PolyTech Forum website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.