dual xeon?

Hello, I would like to replace my current hardware configuration, a PIII, with a dual Xeon system (Nocona, socket 604). Can Pro/E (and Wildfire in particular) use these two CPUs together? Will Pro/E be faster than on a single-CPU configuration? Thanks

pikki

Reply to
pikki73

Earlier Pro/E releases cannot, but it looks as though Wildfire can. There's a setting in config.pro for 'number of processors', at any rate.

Go to Olaf Corten's site and grab the test files. Run the computer both ways (exact same computer, just remove one cpu) and see what happens. I'd kinda like to know myself ... maybe here's an excuse to get that underpriced Origin 3400 I keep seeing on eBay :-)

(My guess is that Wildfire only uses more cpu's for the rendering - since rendering is commonly done on many boxes behind the scenes, while it seems pretty unlikely that they rewrote the entire app to be thread-safe. Might be kewl if you like to leave that Live Rendering thingy turned on tho.)

Reply to
hamei

Hello

I'm using a dual Xeon configuration myself to run Pro/E Wildfire 1. Wildfire is not capable of using dual processors; only rendering and regeneration of independent parts in an assembly (but I'm not sure about that) can gain some benefit from dual CPUs. But there is one main advantage when dual CPUs are used: even if Pro/E ceases to work (is not responding - hung up) or is fully occupying one CPU, the system is still fully responsive thanks to the "free" second CPU.

Just my two cents

Greetings

Joze

Reply to
Joze BARBARIC

There are advantages to 2 CPUs, even though Pro-E doesn't of itself utilize dual processors except for some functions such as video. For one thing, you can assign one of the processors to run Pro-E and still have a processor to run other things. Or, one example I've run into recently: you can run an analysis program and still have a processor available to work on other things, such as Pro-E, while it is running.

Reply to
Dan Richards

Yes, it'll be faster than it would on a single-cpu system. But nothing like double, maybe 25-30% faster. While I've heard that Pro/e isn't optimized for dual processor use, the system takes any multi-threaded application and parcels its threads out in a way that keeps both processors busy. What I've seen on a dual Xeon system is that both processors, when running Pro/e, stay about equally busy. Optimizing Pro/e for dual processors would probably achieve results that are more like parallel processing, but it might also not be the best thing for the system. It could, in fact, cause lockups that wouldn't happen otherwise, if the application rather than the system were chiefly deciding on the allocation of resources and acting as if it were the only program/process/thread in the universe. This was always the biggest weakness of Windows systems, one that NT has somewhat gotten away from. Brings to mind the expression: "Be careful what you wish for."

Reply to
David Janes

What you're seeing then is that the system requirements are being run on one cpu while Pro/E uses the other. Actually, it's not that straightforward. A multi-tasking operating system does many tasks besides just running a single application. (DOS does not, which is what makes it actually superior to Win NT for machine control.) The kernel scheduler decides what to run and when. Under normal circumstances an application does not get to "just run." It's being interrupted all the time so that other computer services can also run. In Unix the services are usually called daemons, but they exist in WinNT as well. With a single cpu, Pro/E is being interrupted all the time so that whatever else you have going (do ctrl-alt-del in Win NT and then look at the running services) can ALSO get some cpu time.

With two cpus those services don't need to interrupt pro/e's time. HOW the time is apportioned is up to the operating system's scheduler. OS/2 uses a round-robin approach - as tasks come up to be operated upon they are stacked in a queue. As soon as one task has finished on any cpu, the next is sent to that cpu in a round-robin fashion. This ensures that the cpu's are kept full. How NT does scheduling I don't know, but it's not as efficient as OS/2 :) Tests years ago showed two-cpu OS/2 servers whupping 4-cpu NT boxes by a large margin. That was my experience with NT as well - it's actually pretty shitty at smp. Irix is probably the best of anybody - they have the unique ability to scale a single image of the operating system (not clusters, which is a bunch of separate boxes co-operating rather than a single large box) over something like a thousand cpus. If they couldn't do this they would be dead meat because today they have nothing else to offer. In fact, with their stock price dropping to where they are in danger of being delisted again, it surprises me that Sun hasn't just gobbled them up for that one capability.
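If it helps to picture the round-robin bit, here's a toy sketch in C - purely my own illustration of handing tasks to cpus in rotation, nothing to do with how OS/2 or NT actually implement their schedulers:

#include <stdio.h>

#define NUM_CPUS  2
#define NUM_TASKS 8

/* Toy round-robin dispatch: each task that becomes ready goes to the
   next cpu in rotation, so no cpu sits idle while another has a backlog.
   A real kernel scheduler is vastly more involved than this. */
int main(void)
{
    int next_cpu = 0;

    for (int task = 0; task < NUM_TASKS; task++) {
        printf("task %d -> cpu %d\n", task, next_cpu);
        next_cpu = (next_cpu + 1) % NUM_CPUS;  /* rotate to the next cpu */
    }
    return 0;
}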

Effing imbecile management :-(

No, not at all. It isn't a matter of "optimizing" at all. You have to go back into history a little bit here ... obviously the first computers only had a single cpu. Hey, gotta walk before you can run, right ? When people tried to make a computer more powerful, what could they do ? Either take the cpu you have and redesign it to run faster, or add another one. IF the task you are trying to accomplish can be broken into multiple small chunks ( think of it this way - serving ten thousand hits to a web page can be split into small pieces where twenty cpu's each handle x number, while streaming one continuous video can't really be apportioned to a bunch of separate cpus ) THEN smp makes good sense. For DOS, smp would be stupid. A real multi-tasking os doesn't necessarily mean that the user is simultaneously running PhotoShop and Pro/E. It means that you can be doing one thing while the os is also running an ftp server in the background, burning a CD, etc etc. You can see where smp would be great for this. Now instead of one DOS box (Windows 98) you have several under one controller.

This is the level where you're seeing 20-30% improvements in Pro/E performance - because it's not being interrupted by other tasks.

NEXT step is for the application itself to be written to utilize more than one cpu. Many tasks that you wouldn't think would profit from smp really *can* - for instance, opening the app requires drawing the splash screen, drawing a window, then filling the window, reading your config.pro file, setting up resources, blablabla. Each task *could* go to a separate thread rather than running them sequentially. Each thread then gets assigned to a cpu by the operating system as it comes up for processing. Boom ! NOW you got an smp-friendly app. The kewl thing about threads is that the same program can run on a single-cpu box or one with sixteen. On multi-cpu hardware, as a thread comes up for processing it goes to the next processor. If you only have one, it waits. If you have 32, it goes to the first vacant spot.
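Here's a minimal sketch of that splash-screen / config-file idea with POSIX threads - the chores are made up for illustration, this is not how Pro/E's startup is actually written:

#include <stdio.h>
#include <pthread.h>

/* Hypothetical independent startup chores - each just announces itself
   here, but in a real app each would do its own work. */
void *draw_splash(void *arg)  { (void)arg; puts("drawing splash screen"); return NULL; }
void *read_config(void *arg)  { (void)arg; puts("reading config.pro");    return NULL; }
void *setup_window(void *arg) { (void)arg; puts("setting up main window"); return NULL; }

int main(void)
{
    pthread_t t[3];
    void *(*chores[3])(void *) = { draw_splash, read_config, setup_window };

    /* Spin each chore off as its own thread instead of running them
       sequentially; the OS scheduler maps the threads onto whatever cpus it has. */
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, chores[i], NULL);

    /* Wait for all of them before carrying on. */
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);

    puts("startup finished");
    return 0;
}

Build it with cc -pthread; on a one-cpu box the threads just take turns, on a dual Xeon the scheduler can run them side by side.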

There is an obvious limit to this tho - when the overhead of scheduling tasks takes more time than *doing* the tasks, you've now gone backwards. That's why sometimes you can see an application run *slower* in smp than on a single-cpu machine. Btw, here is where Unix initially fell down - tasks can be split into processes or threads. Processes take longer to create. Threads are faster but not as safe. You can see where, if one thread is reading a file and then modifying it while another thread comes along to ALSO read that file and use it, and thread A has modified it but thread B doesn't know about that and writes its OWN modifications to the file, you could end up with Big Trouble. So apps which actually USE smp features have to be coded to be thread-safe. And tested, as well :) Unix initially didn't like threads but now there are "pthreads", which are the same idea as threads on NT or OS/2 ... anyway ...
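The classic small version of that trap looks like this in pthreads - two threads bumping the same counter, with the mutex that makes it safe (a generic sketch, not anything out of Pro/E's code):

#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Both threads read-modify-write the same shared value. Take the lock
   away and the two updates can interleave, so one of them gets lost -
   the same kind of trouble as thread B clobbering thread A's file. */
void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}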

Pretty gross generalizations here but you get the idea. There are many ways to deal with smp but overall, ANY single-threaded app (such as Pro/E) will see *some* benefit from running on a dual cpu machine. (In case you get excited by the concept, more than two for a desktop seems to make no difference whatsoever. I ran a Netfinity with four cpu's for a while to play and saw no improvement at all for desktop apps beyond two cpu's.) An app which is written specifically to *use* smp can see improvements more on the order of 80% or so. I've done benchmark tests that showed an almost linear improvement with math functions, but you couldn't even *touch* the machine while they were running or you lost your big-number benchmark :)
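For the curious, those math benchmarks have roughly this shape - chop one big loop into per-thread slices and add up the partial results afterwards. This is a generic sketch, not the actual benchmark I ran, but on an otherwise idle two-cpu box something like it comes close to a 2x speedup:

#include <stdio.h>
#include <pthread.h>

#define NUM_THREADS 2
#define N 100000000L

struct chunk { long start, end; double sum; };

/* Each thread sums its own slice of the series. Nothing shared is
   written, so the hot loop needs no locking at all. */
void *partial_sum(void *arg)
{
    struct chunk *c = arg;
    double s = 0.0;
    for (long i = c->start; i < c->end; i++)
        s += 1.0 / (double)(i + 1);
    c->sum = s;
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    struct chunk c[NUM_THREADS];
    double total = 0.0;

    for (int i = 0; i < NUM_THREADS; i++) {
        c[i].start = i * (N / NUM_THREADS);
        c[i].end   = (i + 1) * (N / NUM_THREADS);
        pthread_create(&t[i], NULL, partial_sum, &c[i]);
    }
    for (int i = 0; i < NUM_THREADS; i++) {
        pthread_join(t[i], NULL);
        total += c[i].sum;
    }
    printf("sum = %f\n", total);
    return 0;
}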

It would be kewl if Pro/E were multi-threaded. Then you'd see more significant performance improvements running on your smp box ... but the code is old, and rewriting it to please the small percentage of people who have an smp machine can't sound too exciting to PTC. Before you blast PTC for that, I don't think any of the other cadcam apps are multi-threaded either. Mastercam, Smurfcam, SaladWurx certainly are not. VX, probably not. Instead of doing something trick like that, they spent their money changing to Windows and drawing new icons. Probably gives them a better r.o.i., but sheesh :( Maya is somewhat smp-friendly. I-DEAS, I don't know, but it's about the only manufacturing app that is as nice as Pro/E ... in fact, maybe nicer. It *may* be smp-friendly but it's pretty unlikely.

The only commercial people I've seen who are *very* careful about threads are in the OS/2 world, and that's kind of a byproduct of a failing in OS/2. Due to a problem with OS/2's input queue, anything non-trivial that is *not* multi-threaded and thread-safe can lock up the desktop. Then the users will come to your house and kill you, so OS/2 developers end up with some of the best-performing, most co-operative apps on the planet, more out of necessity than anything else. Every cloud has a silver lining :)
Reply to
hamei

According to this, one might interpret VX as being multi-threaded.

"There were many goals in developing what has become UPG2," says Mike Crown, the director of Product Development at Varimetrix, "First and foremost was to develop a kernel that had seamless integration between solids and surfaces, where users could work between the two types of entities and perform various functions-all the time without having to first think about if an element is a surface or a solid or convert from one to the other."

UPG2 uses what is known as a Proximity Compliant Tolerance Scheme to selectively heal geometry on the fly, allowing for what Crown calls "the fastest, most reliable geometry engine in the industry." Essentially UPG2 is healing the model incrementally whenever necessary, such as when two surfaces are intersected. This allows UPG2 to deal with non-native data, regardless of its origin.

Another issue was performance, part of which was achieved by building a multithreaded kernel. Even without multiprocessors, Varimetrix wanted the kernel to have excellent speed when dealing with freeform surfaces.

Reply to
clintonyee

Ah. That would indicate smp-friendly. The only way to know for sure would be to run a process-watching program while VX was running. There are several for OS/2, and I'm sure Windows has some as well.

I always thought Varimetrix was kewl and Mike Crown was no slouch. Too bad they went over to the Dark Side :-(

Good to know, thanks.

Reply to
hamei

By the darkside do you mean over to the Windows platform?

I believe VX was originally designed for UNIX workstations. Perhaps that is why the kernel was designed to be multithreaded from the beginning.

Reply to
clintonyee

:-)

Yes, if you see version 4 or so around, it's a really nice-looking Irix app. Ran on other Unices as well, of course.

Nowadays it looks like a child's toybox :-( Stinking icon-mania. We spent years learning to read in school, now we're supposed to point and say "ga ! ga ! want !" or decipher what the sketch of the nun with the arrow thru her head stands for ?

Okay, I much prefer Unix to that crap from Redmond but in this case, OS/2 and Windows were way ahead of Unix. OS/2 in particular - Windows stayed stuck in DOS mode until very recently, when they finally discovered that Windows 98 et al were poop. Strange how M$ can sell people on the most wonderfullest thing since sliced bread this week, only to turn around next week and say that was crap, now y'all have to buy *this* most wonderfullest thing ! and y'all go for it ... sad. But I digress .... OS/2 and Winick were threading proponents while Unix was in love with processes. Forking a process takes a lot of overhead. It's slow. Spinning off another thread is fast and much finer-grained. Unix has come around to that with pthreads now but in the beginning the Intel users were ahead on this one. Wonder what Acorn and Amiga and those did ?
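To put a rough number on the fork-versus-thread overhead, here's a little sketch - timings vary wildly by machine and OS, it's only meant to show the two calls side by side:

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <pthread.h>

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

void *noop(void *arg) { (void)arg; return NULL; }

int main(void)
{
    double t0;

    /* Create and reap 1000 processes that do nothing ... */
    t0 = now();
    for (int i = 0; i < 1000; i++) {
        pid_t pid = fork();
        if (pid == 0)
            _exit(0);                /* child exits immediately */
        waitpid(pid, NULL, 0);
    }
    printf("1000 forks:   %.3f s\n", now() - t0);

    /* ... then 1000 threads that do the same nothing. */
    t0 = now();
    for (int i = 0; i < 1000; i++) {
        pthread_t t;
        pthread_create(&t, NULL, noop, NULL);
        pthread_join(t, NULL);
    }
    printf("1000 threads: %.3f s\n", now() - t0);
    return 0;
}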

Reply to
hamei

Mike Crown thought very little of Pro/E and how restrictive it was.

He did like UG, though.

Are you using VX ?

jon

Reply to
jon banquer

This article, IMO, was one of the top three that Joe Greco ever wrote. One of the most interesting things about this article to me was what Mike Crown had to say about what the next big thing would be in CAD.

formatting link
"Crown was excited about what he sees in the future-a whole new type of modeling called Partition. While he was sketchy with the details, the basic concept is that it will take hybrid modeling to an even higher level and make it easy to model even extremely complex shapes, by interactively providing the user with many more shape definition and modification options."

Think3 calls it Global Shape Modeling. They also have something called Zone Modeling which I need a better understanding of. VX calls it "Morphing".

IMO, thinkID has better tools in this area than I have seen in any other modeler, and a better UI to go with it.

You can decide for yourself, as I have uploaded a Camtasia video that shows a very complex model being built start to finish.

IP address - 63.173.39.175
Login - cnczone
Password - cadcam (note: password is case sensitive)
Port - 21
Max users - 3

The file is in a folder named:

Jon Banquer

The file is called:

snowshoe english.avi

This .avi file was done in a screen capture product called Camtasia. IMO, Camtasia videos look horrible played in Windows Media Player. The Camtasia Player is free and one can read about it compared to Windows Media Player here:

formatting link

The following are tips on how to work with Camtasia Player that should make viewing this video more enjoyable:

Play / Pause: spacebar

Rewind: PageUp

Forward: PageDown

Full Screen: alt+enter

Years from now we will know that Mike Crown was dead nuts on.... just like he was about why someone needs hybrid modeling and why a CAD modeler is much better off being built hybrid from the start. He thought SolidWorks would have a tough time adding surfacing functionality on top of the Parasolid kernel.. man was he ever right about that !!!

I believe VX has actually built 3 kernels over the years.

jon

Reply to
jon banquer

jon, if you were doing design to prototype (stl) output (no cnc),

what would you get: VX or thinkID?

Thanks!

Reply to
clintonyee

?

VX has had its own integrated CAM since the very beginning .... thru 5 axis simultaneous mill.

In any case, I hope more Pro/E users will consider VX CAD/CAM as it's fast, a joy to use and very robust.

VX is also a very nice company to do business with.... something you don't hear about PTC. :>)

jon

Reply to
jon banquer

I will not be doing any cnc at all.

But I assume you would pick VX over thinkID. Correct?

Reply to
clintonyee

I like thinkID's technology for GSM and for its UI.... the rest I prefer VX.

thinkID's solid operations are not as good as VX's.

VX's kernel is very robust.... very.

jon

Reply to
jon banquer
