Instead of a standard SW benchmark we need...

We need something that could press SW to its limits the same way every time, to force a crash if a crash is inevitable. That way we could easily tell whether a SP, or a new piece of hardware or software added to our system, is going to be more or less reliable than what we're using at the time. I wonder if some SW programming guru could think about writing a public domain "burn-in" utility for SW. Something that could run a suite of tests, creating solid models, assy's, drawings, using sweeps, lofts, extrudes, imports, etc., you know, all the normal stuff. And when you want to test, you just set it up and walk away until it crashes. Then you go read the logs: how long did it take, what was it doing, etc.

If there are ten thousand different things that can make SW crash, then I'm sure some AI software could cover at least a few thousand of those things. If an SP can't make it past 5 hours, then it doesn't get used.

Hmmm, I wonder if it's possible. That's how you do it with burn-in software when testing new hardware.

Any thoughts?

- Eddy

Reply to
Eddy Hicks

Would the model be built from scratch via a macro or code?

I remember seeing a post many months ago where someone was looking for the ultimate complex model. My thought at the time was that the ultimate model would take a month to load ;^)

I'm not sure what came of it, but now that I think about it, maybe you can modularize this by making an assembly of many parts. Each part would demonstrate a particular feature or technique.

You can make a list of every feature in SW and assign a different volunteer for each. They would send in a predetermined number of examples, either in one part (with several features) or several parts (one feature each).

For example...

Volunteer: Joe User
Assigned Feature: Surface Sweep

Joe sends in...
A helical Sweep
A Sweep with Guide Curves
A Sweep demonstrating Start/End tangencies
A Sweep demonstrating "Keep Normal Constant"
etc., etc.

You can have a macro open each part, do a rebuild, and report any errors, changes to faces, total rebuild times, etc.
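
Something like this rough VBA sketch could do it, assuming it runs from the SW macro editor; the folder path and log file name are just placeholders, not anything standard:

' Rough sketch: open every part in a folder, force a rebuild, log the result.
' C:\SWBench\ and the log file name are placeholders.
Sub RebuildAndLogParts()
    Dim swApp As Object, swModel As Object
    Dim folder As String, fName As String
    Dim errs As Long, warns As Long
    Dim t As Single

    Set swApp = Application.SldWorks
    folder = "C:\SWBench\"

    Open folder & "rebuild_log.txt" For Append As #1
    fName = Dir(folder & "*.sldprt")
    Do While fName <> ""
        ' 1 = swDocPART, 1 = swOpenDocOptions_Silent
        Set swModel = swApp.OpenDoc6(folder & fName, 1, 1, "", errs, warns)
        If Not swModel Is Nothing Then
            t = Timer
            swModel.ForceRebuild3 False      ' full rebuild (older API: ForceRebuild)
            Print #1, fName & "  rebuild " & Format(Timer - t, "0.00") & _
                      " s, open errors=" & errs & ", warnings=" & warns
            swApp.CloseDoc swModel.GetTitle
        Else
            Print #1, fName & "  failed to open, error code " & errs
        End If
        fName = Dir
    Loop
    Close #1
End Sub

Reporting changed faces would take more digging in the API, so the sketch only covers open/rebuild/time/errors.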

This would be a good benchmark to compare individual features, but of course in the real world a part is made up of many different features.

Are you thinking of something like the SolidSolutions benchmark? It takes an existing assembly and does things with it.

Mike Wilson

Reply to
Mike J. Wilson

Yeah, I guess the SolidSolutions (now Spec) benchmark spurred the idea. And I like the idea of volunteers being assigned a feature or function, and of a code snippet or macro putting it all together. That might be a good way to start things off. Maybe a VB applet that loads parts from a home folder until it runs out of parts to load. Maybe something along the lines of...

"IF something found THEN add to assy and perform actions and log it all ELSE stop" - could get this beast underway. And if people are so inclined or think they've got a new way to break Solidworks they could volunteer a new part or subassy and the applet would pick up on it just by finding it in that home folder.

Here's what I'm thinking now Mike...

1) An applet is coded that creates unique and specific parts, based on a specific feature or set of features (extrude+shell+edits, sweep+loft+surface+thicken+edits, etc.). This applet saves these parts into a home folder.

2) Users create parts and subassys built from challenging features, or from things that have been known to cause crashes for them in the past. These parts and subassys would be added to the home folder.

3) An applet is created whose job is to create a parent assy, adding the pieces from 1 & 2 above (only once for each unique piece) until it has parsed the entire folder. This applet performs specific actions on the parent assy as well as on specific pieces of the assy: in-context edits to the parts created in step 1, along with zooms, pans, rotates, hiding, showing, suppressing, lightweighting, resolving, etc., and all of these actions are logged to a text file. It runs until SW crashes or until a user stops it. Every action is written to the log before it's performed, so when there's a crash you know what caused it (see the sketch after this list).

4) As more users come up with more pieces, these pieces get added to the home folder and hence added to the parent assy via the main applet.
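
The "log it before you do it" piece of step 3 is what makes a crash actually tell you something, and it's simple: write the line and close the file before the action runs, so the last line in the log names whatever killed SW. A hypothetical helper (the path is a placeholder):

' Hypothetical helper: append the line and close the file BEFORE the action,
' so the log survives the crash and its last entry names the culprit.
Sub LogAction(ByVal description As String)
    Dim f As Integer
    f = FreeFile
    Open "C:\SWBench\burnin_log.txt" For Append As #f   ' placeholder path
    Print #f, Format(Now, "yyyy-mm-dd hh:nn:ss") & "  " & description
    Close #f
End Sub

' Usage: announce the action, then perform it.
'   LogAction "in-context edit on part from step 1"
'   ... do the edit / zoom / suppress / resolve here ...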

Hmmm, maybe this isn't such a weird idea? :)

- Eddy

Reply to
Eddy Hicks

It was me, and actually I was looking for the most complex PART in the sense of many different features, for a documentation tool. Check

formatting link
for the one I got (and send me your part if you have a more complex one)

I'd say the SW demo models are a good start in that sense.

A proposal for a benchmark procedure could be:

1) Run EcoSqueeze with the option to remove the Parasolid info, to "clear the model"
2) Run a macro with "ForceRebuild(false)" to rebuild everything in the assembly
3) Get the rebuild statistics from SW (will look how from API; rough sketch below)
4) Compare the geometry (how?)
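
For steps 2 and 3, a rough sketch. The FeatureStatistics / Refresh / TotalRebuildTime calls are my guess at where the API exposes the numbers; they may not exist in every SW release, so to be checked against the API docs:

' Sketch of steps 2-3: force a full rebuild of the active document and report
' the rebuild time. FeatureStatistics / TotalRebuildTime are assumed here.
Sub RebuildAndReport()
    Dim swApp As Object, swModel As Object, swStats As Object
    Dim t As Single

    Set swApp = Application.SldWorks
    Set swModel = swApp.ActiveDoc
    If swModel Is Nothing Then Exit Sub

    t = Timer
    swModel.ForceRebuild3 False        ' step 2 (older API: ForceRebuild False)
    Debug.Print "Wall-clock rebuild: " & Format(Timer - t, "0.00") & " s"

    ' step 3, assumed API: rebuild statistics for the model
    Set swStats = swModel.FeatureManager.FeatureStatistics
    If Not swStats Is Nothing Then
        swStats.Refresh
        Debug.Print "SW-reported rebuild time: " & swStats.TotalRebuildTime & " s"
    End If
End Sub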

Philippe Guglielmetti -

formatting link

Reply to
Philippe Guglielmetti

Reply to
R. Wink

formatting link
should show the model structure as a graph (if you install the Adobe SVG viewer). You cannot follow the links, as the .sldprt model is not publicly available (copyright + huge size).

Reply to
Philippe Guglielmetti

I can't believe you've included "huge size" as an excuse/reason. My cable internet connection has absolutely zero problems downloading extremely large files. :)

Reply to
kenneth b

Server space is often limited. I've had to remove stuff from my site to make room for other stuff. It sounds like a valid reason to me.

matt

"kenneth b" wrote in news:c0irne$16a9sd$ snipped-for-privacy@ID-150979.news.uni-berlin.de:

Reply to
matt

ooops, didn't think of that

Reply to
kenneth b

Eddy,

Sounds a lot like what SW "claims" to do already as part of their standard testing. When I was there, they had a whole room full of machines (different hardware configs) running real-time tests. How effective their test suites were/are is open to debate. I didn't see exactly "what" functionality they were testing, or to what depth.

Even then, the answers I got led me to believe that they made decisions "SOLELY" on statistical/bean counter/marketing data. This probably means that they only test for the level of functionality needed by the average user. Anything more comprehensive would probably be construed as not "cost effective". This was blindingly evident back then.

They seem to have gotten somewhat better as of late, but it sure wouldn't hurt to have an independent test routine. The big question, to me at least, is whether they would bother to verify and honor the results, and if so, whether they would act on them.

Regards

Mark

Reply to
MM

Hey Mark,

My idea is based on us, the users, doing tests to get the software to fail. Mostly to justify to ourselves whether a specific SP fixes what was broken for us, and whether it broke anything else in the process. I think the whole thing is doable, but whether it's a benefit to SW Corp never even entered my mind. The idea was to save us testing time, not them. That would only be an ancillary benefit.

- Eddy

Reply to
Eddy Hicks

Eddy,

Would save a lot of manual testing, I suppose. If it were flexible enough to be customized to individual companies, it would even be worth buying.

By the way, I bought the MSI "K8T Master2-FAR" with two 246s, 2GB of RAM, and an FX1000. I should get the parts on Monday.

Regards

Mark


Reply to
MM

Awesome! Should be a screamer. Like I said, can't go wrong with MSI. Did you remember to buy "registered" RAM? :)

- Eddy

Reply to
Eddy Hicks

Oh yeah, I was going to ask about this. Would this make a good product? I would think there would be more interest in the project if there were profits to be had. Of course since this could turn into a monster, one may not have a choice but to sell it?

Mike Wilson

Reply to
Mike J. Wilson
