Speed of 64-bit Pro/E and Pro/M?

Well, it's well established that the 64-bit platform opens the door to making larger models. What isn't consistent is how the performance changes - the benchmarks I've seen show apps getting faster, getting slower, or staying the same.

For those of you who have crossed the divide, has the performance improved or slowed, all other things held equal? Has memory usage changed?

Dave

Reply to
dgeesaman

I recently came across a PTC FAQ document that states the 64-bit version can be 5-20% slower than the equivalent 32-bit version. Whether that's true in all cases leaves plenty of room for comment.

Dave

Reply to
dgeesaman

> It's pretty black and white. I can't run it (assembly, analysis, Mechanica, Mechanisms, rendering, any highly memory/storage intensive function) in a 32 bit environment but I can (without errors, freezing, crashing) with 64 bit hardware, software and OS. If you're on or past the margin, get the 64 bit architecture because the demands will only get worse.

It is not a black/white issue, since we don't run the same exact analysis all day every day. Experienced FEA users know there is almost always a simpler, less resource-intensive set of assumptions available.

If Mechanica turns out to run at roughly the same speed on a 64-bit platform, then whether to run 64-bit is a matter of cost and other issues. You gain the ability to run larger analyses, but if there is an overall performance penalty, then from a user's point of view the tradeoff is larger model limits vs. performance on smaller analyses.

So I ask again: has anyone compared Pro/M on 32- and 64-bit platforms for raw performance?

Dave

Reply to
David Geesaman


I think you stated the most cogent and convincing argument in your first post. It's pretty black and white: I can't run it (assembly, analysis, Mechanica, Mechanisms, rendering, any highly memory/storage intensive function) in a 32-bit environment, but I can (without errors, freezing, crashing) with 64-bit hardware, software and OS. If you're calculating advantage on a finer scale than this, I think PTC is correct: it weighs in favor of the 32-bit architecture (and from everything I've seen, as well), but perhaps only marginally in many cases.

The rest of the argument is how you compare systems. Eventually, by price, you won't be able to avoid 64-bit machines because they've become so much more prevalent; OSes and programs you can get in either architecture. So it depends on those elusive benchmarks, which are a) few and far between when it comes to realistic Pro/E use and b) never run at the margin of utility (where 32-bit is under stress).

So, if you never have an assembly or analysis that runs over about half of the 32-bit rated capacity (by a Task Manager Process display of xtop.exe activity), hey, no worries. If you're on or past the margin, get the 64-bit architecture, because the demands will only get worse. And your scale of comparison will be pass/fail, not minute percentage differences.
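Incidentally, if you want to watch that xtop.exe number without sitting on Task Manager all day, here's a rough sketch of a poller (assuming Python with the psutil package installed; the 2 GB ceiling is my assumption - the actual 32-bit limit is 2 GB by default, 3 GB with the /3GB boot switch):

import time
import psutil

# Assumed 32-bit user address-space ceiling: 2 GB by default on 32-bit
# Windows (3 GB with the /3GB switch). Adjust to match your setup.
LIMIT_BYTES = 2 * 1024**3

def watch_xtop(interval=10):
    """Print xtop.exe memory use every `interval` seconds."""
    while True:
        for proc in psutil.process_iter(['name', 'memory_info']):
            if (proc.info['name'] or '').lower() == 'xtop.exe':
                used = proc.info['memory_info'].rss
                print("xtop.exe: %.0f MB (%.0f%% of 32-bit limit)"
                      % (used / 1024.0**2, 100.0 * used / LIMIT_BYTES))
        time.sleep(interval)

if __name__ == '__main__':
    watch_xtop()

If it regularly climbs past the halfway mark during your big runs, you're at the margin I'm talking about.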

David Janes

Reply to
David Janes

> So I ask again: has anyone compared Pro/M on 32- and 64-bit platforms for raw performance?

I'd suggest you check here for the broadest, most general experience:

formatting link
and check the benchmarks for 32- and 64-bit machines. Hopefully this represents 32-bit machines running 32-bit apps and 64-bit machines running 64-bit apps, not 64-bit machines/OSes running 32-bit apps. These can be very difficult, very complicated analyses, requiring testing professionals to get involved with their scientifically set-up testing labs (it's all about the numbers). And the numbers can be all about the test setup (witness the number of different results from the same machine). But, in general, note that the times are more than twice as long for the 64-bit machines.

David Janes

Reply to
David Janes

We ran a simple analysis on 3 machines: a Dell 370 P4 @ 3.2GHz w/2GB, a Dell 390 DuoCore @ 3GHz w/4GB, and a Dell 670 P4 @ 3.2GHz w/4GB running Win XP64.

The 370 bogged down and ran out of speed part way through the analysis. It did finish, but the CPU was running at 100%. The 390 plowed right through it in 64% of the 370 time. The 670 plowed through it in 67% of the 370 time.

Your numbers may vary, depending on actual machine specs.

Reply to
Ben Loosli

At my last company we were hitting the memory brick wall on 32-bit. As soon as we went from a 4GB 32-bit PC to a 12GB 64-bit PC, some analyses were reduced from 24 hours to 3 hours. But it did crash a bit more often.

Does anyone know what the benefits of Vista might be? 32 bit & 64 bit?

Ant

Reply to
Ant

I'm downloading an evaluation copy of Solaris, gonna give it a try on some heavy Pro/Mechanica runs. Konrad

Reply to
KA

Based on what I know of Vista, it does not change anything w.r.t. memory size, etc. compared to XP Pro 32- and 64-bit, except that Vista uses more resources for itself, all other things held equal.

Dave

Reply to
David Geesaman


> Does anyone know what the benefits of Vista might be? 32 bit & 64 bit?

That's a very good question and I don't know the answer. But from all I've heard, Vista is simply XP Plus, so it more or less depends on what you found XP to be and what you thought of its limitations. One thing I've not seen is a list of enhancements over XP, or XP limitations that Vista surpasses. IOW, I've not seen one single advantage to upgrading.

David Janes

Reply to
David Janes

Someone told me that Vista can use USB stick memory (ReadyBoost?) to reduce swapping to the HDD. I don't know if this applies to 32-bit Vista though.

Reply to
Ant

Thanks Ben.

Today I set up RAID 1 on my wife's server, and that has liberated a couple of hard drives. So this weekend I'll install XP Pro 64-bit on one of them, along with 64-bit Pro/E and Pro/M.

This way I can do a test with nearly all other things held equal. The system will only have 2GB of RAM, but that should be plenty to get a raw speed comparison on modestly sized models.

Dave

Reply to
dgeesaman

I've just put 2 discs into RAID 0. Files and Windows' swap are on it. It runs way faster, but I couldn't measure any numbers. I feel that putting more discs into that RAID would make it even faster, but the difference in my case wouldn't be that big. I think that setting up another disc/RAID, so I could put the system swap on one and Pro/M temp files on another, would be better.

Try Windows Start - Programs - Administrative Tools - Performance. By default it displays the disc queue in green. If you have long moments of 100% disc queue and simultaneously 0% processor load, you need a faster disc or RAID.

Maybe next week I'll find enough time to run Pro/M on Solaris - I can't sleep not being sure whether it manages memory/disc better than Windows. Konrad

Reply to
KA

> vs a networked one. Incomprehensible, considering how heavily networked Pro/e is and how MUCH this can influence performance.

Interesting question. However, my understanding is that license server traffic is somewhere between tiny and minuscule, so I don't see how that could affect performance.

> delays when Pro/e decides to start disk swapping.

As with anything, RAM is faster than HDD swapping. Faster hard drives almost always cost more than adding RAM and are substantially less bang for the buck. If disk swapping is slowing you down, put in RAM until the swapping stops.
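If you're not sure whether a run is disk-bound, one quick check is to sample CPU load and disk activity together and look for the pattern Konrad described (busy disk, idle CPU). A rough sketch, assuming Python with psutil; the 80%/20% thresholds are arbitrary:

import time
import psutil

def sample(interval=5):
    """Return (cpu_percent, approx disk-busy percent) over `interval` seconds."""
    before = psutil.disk_io_counters()
    cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
    after = psutil.disk_io_counters()
    # read_time/write_time are milliseconds spent servicing disk requests
    busy_ms = ((after.read_time - before.read_time)
               + (after.write_time - before.write_time))
    return cpu, min(100.0, 100.0 * busy_ms / (interval * 1000))

while True:
    cpu, disk = sample()
    note = "  <-- looks disk-bound" if disk > 80 and cpu < 20 else ""
    print("CPU %5.1f%%   disk busy ~%5.1f%%%s" % (cpu, disk, note))

The Performance console Konrad mentioned shows the same thing interactively.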

> especially to compare a few of your own configs to the average. Again, what it means, for troubleshooting purposes, I'm not sure.

My intent to compare Mechanica runtimes isn't supported by the OCUS benchmark, but I'll try to run that too if I find time. Is it true that the 32- and 64-bit versions are indeed identical scripts?

I'm a little consumed right now with configuring a new SBS2003 arrangement for my spouse's company, so this 32- vs. 64-bit comparison is not on a pressing schedule.

Dave

Reply to
dgeesaman

> delays when Pro/e decides to start disk swapping.

Pro/M makes a lot of temporary files; for my analyses it can be hundreds of gigabytes. How much RAM can I put into a motherboard? In my opinion, as long as disc operation doesn't slow down the processor, it's the processor speed that has the biggest impact on overall productivity. Sure, the less RAM, the less memory you can give the solver, and the more often a temp file gets dumped to disc. But the worst thing you can do is assign too much memory to the solver, because you end up in the system's paging mechanism, and that clogs the computer totally.

After many long battles, I still cannot find a good rule of thumb for the optimum amount of memory to assign to the solver; it depends not only on RAM, but also on the job to be done. Does anybody have any good advice on it? Konrad

Reply to
KA

Can you give any numbers about the study? How many nodes/edges/parts/contacts, SPI/MPI? Max eq. order? Regular or iterative solver? Standalone or integrated Pro/M? Konrad

Reply to
KA

We are in the process of ordering a dual quad-core system with 8GB of RAM and two 146GB drives to use as a compute engine for our engineers who don't want their workstations tied up while processing a Mechanica run.

The engineers will use Remote Desktop to access the server to submit their runs.

We have also thought about having them create a batch Mechanica file on their workstation, then copy the .bat file to the compute server and let it run.
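If we go the batch-file route, the submission step itself is easy to script. A throwaway sketch (Python assumed; the server share name and paths are made up for illustration - something on the server would still have to pick the file up and run it):

import shutil
from pathlib import Path

# Hypothetical drop folder shared by the compute server.
SERVER_DROP = Path(r"\\compute-server\mech_jobs")

def submit(batch_file):
    """Copy a Mechanica .bat file to the server's drop folder."""
    batch_file = Path(batch_file)
    dest = SERVER_DROP / batch_file.name
    shutil.copy2(batch_file, dest)
    print("Submitted %s to %s" % (batch_file.name, SERVER_DROP))

if __name__ == '__main__':
    submit("my_analysis.bat")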

Reply to
Ben Loosli

> delays when Pro/e decides to start disk swapping.

Good point. Some models are large in footprint and generate huge temp files; other models need a lot of RAM and CPU time but don't create large temp files. I totally agree - if you're seeing gigs upon gigs being written to disk, then a RAID 0 setup is worthwhile.

PTC once told us to set it to 40% of the system RAM. That seems to work OK for us, although the parameter hasn't seemed to make much difference either way.
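For what it's worth, that guidance is trivial to turn into numbers. A throwaway sketch (Python; the 40% figure is just the suggestion above, and if I remember right the setting in question is Mechanica's solver memory allocation, solram - treat both as starting points, not gospel):

def suggested_solver_mb(system_ram_mb, fraction=0.40):
    """Suggested solver memory allocation: a fixed fraction of system RAM."""
    return int(system_ram_mb * fraction)

# A few configurations mentioned in this thread:
for ram_mb in (2048, 4096, 8192, 12288):
    print("%5d MB RAM -> try ~%4d MB for the solver"
          % (ram_mb, suggested_solver_mb(ram_mb)))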

Dave

Reply to
David Geesaman
