A quick little survey ......

I'm a bit ashamed of myself, so consider this brief preface a modest apology in advance. You see, this post may actually be on topic.

People keep comparing various CAD, CAD/CAM & CAM packages based on cost or the color of buttons (jb, listen up) or the latest buzzwords in ads (I'll mention no names).

But what do you actually get? You get executable code, help and documentation. So let's compare those instead.

How many MB of executables came with package XYZ? How many MB of help files & documentation?

I know that most in these two (so far) groups use at least one system .... (not mentioning the nameless), so it should be easy to check your disks to see how many bytes XYZ takes.

If you cannot easily separate executables from support-type files, that's fine. The total will do nicely. If you can, please post the separate totals too. You may not have all the possible $$ bells & whistles ... so a note as to how "full" an install you have might be proper.
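If you want to do the tally mechanically instead of clicking through folders, here is a minimal C sketch (assumptions: a POSIX environment such as Cygwin on Windows, and that ".exe"/".dll" is a fair proxy for "executable"):

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static long long exe_bytes, other_bytes;

/* Called once per directory entry; split file sizes by extension. */
static int tally(const char *path, const struct stat *st,
                 int type, struct FTW *ftwbuf)
{
    (void)ftwbuf;
    if (type == FTW_F) {
        const char *dot = strrchr(path, '.');
        if (dot && (strcmp(dot, ".exe") == 0 || strcmp(dot, ".dll") == 0))
            exe_bytes += st->st_size;
        else
            other_bytes += st->st_size;
    }
    return 0; /* keep walking the tree */
}

int main(int argc, char **argv)
{
    if (nftw(argc > 1 ? argv[1] : ".", tally, 16, FTW_PHYS) != 0) {
        perror("nftw");
        return 1;
    }
    printf("executables:     %.1f MB\n", exe_bytes / 1048576.0);
    printf("everything else: %.1f MB\n", other_bytes / 1048576.0);
    return 0;
}

Point it at the package's install directory and it prints the two totals the survey asks for.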

How much did you pay per KB/MB of what you are using (roughly, IF you know AND care to disclose -- )?

One *might* assume that more KB/MB is better .... assuming CAD or CAD/CAM or CAM.

Any takers?

Might be interesting.

Reply to
Cliff

Edgecam, a CAD/CAM program.

Not that easy to break down. Everything is divvied up into specific subdirectories.

Quick total of some folders:

Main section that links to others: 100 MB
"Macros" (PDI and PCI files): 5.3 MB
Tool library/tool store: 32 MB
Help: 28.4 MB
Examples: 132 MB

Cost: way too much. The annual "mordida" has exceeded the original cost by a factor of 2.

PS.

The old DOS Pathtrace PAMS version is about 16MB complete. It matches the latest version's CAD and CAM capabilities until you get to very elaborate surfaces. You can still do them, but it is a major pain.

Reply to
alphonso

Many programmers correlate code quality with code efficiency.

Code efficiency can be defined in many ways:

Total time (human interface plus CPU) to get the result: faster is better.
Lines of code to get the result: smaller (MB) is better.
Man-hours to code the lines: smaller is better.

I don't think any programmer tries to do the opposite of the above.
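The CPU half of the first metric is the easy one to measure directly; a quick C sketch (the summation loop is just a stand-in workload):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    /* Stand-in workload: any computation whose result we want. */
    double sum = 0.0;
    for (long i = 1; i <= 10000000L; i++)
        sum += 1.0 / (double)i;

    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("result %.6f in %.3f s of CPU time\n", sum, secs);
    return 0;
}

(clock() counts CPU time only; the human-interface part of the total still needs a wall clock and a stopwatch.)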

Reply to
haulin79

LOFL Cliff -

What a survey.

My all-time favorite CODE SIZE VS CAPABILITY:

ANVIL-1000

Important stuff fits on one floppy. The remaining stuff takes up about 1/3 of a second floppy.

Back from the days when tight code was a true necessity.

Get a new hobby, buddy -

Later -

SMA

Reply to
Sean-Michael Adams

VX 10.91 (complete integrated CAD/CAM). This is the installed configuration:

Binaries: 97 meg
Help files: 15 meg
Examples/training: 97 meg
PDFs of manuals: 198 meg

The whole kit and caboodle downloadable is about 300 meg.

Reply to
Steve Mackay

Object-code size is often settled at compile time, and the special cases the compiler handles there can make that size run wild.

When the compiler inlines a function, it generates a copy of the body, with its behavior resolved, at each point of call, and the result compounds with how deeply the calls are nested. The overhead of too much redundant object code comes from the several tributary branches that get spun off just to perform a small task.
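A minimal C sketch of that trade-off (the clamp helper is invented for illustration): declaring a small function inline invites the compiler to paste its body at every point of call, which removes the call overhead but duplicates the code in the object file.

#include <stdio.h>

/* With "static inline" the compiler may paste the body at each call
   site instead of emitting one shared copy plus call instructions:
   faster, but the object code grows with every use. */
static inline double clamp(double v, double lo, double hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

int main(void)
{
    /* Three points of call: up to three copies of the body
       can end up in the object code. */
    printf("%f\n", clamp(-1.5, 0.0, 1.0));
    printf("%f\n", clamp(0.5, 0.0, 1.0));
    printf("%f\n", clamp(2.5, 0.0, 1.0));
    return 0;
}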

The ratio of source-function size to object-code size shows, after the fact, how efficiently the compiler traced its paths, and that is shaped by how things get packaged into libraries, which comes down to planning and organization at the start. The compiler is there to connect your dots, but it knows little about how far apart the dots are located; that behavior is left to good software design.

Let's say you have a small routine that calls a function belonging to a library of functions. The compiler then has to build a trace between your routine and the function located in the library, and that trace can be viewed as a tree too.

The compiler traces these calls to their destinations like an Etch A Sketch, and in this way the program routine acts on the library. Getting from point A to point B in the straightest possible line can still mean hoops and loops, but it gets there.

The compiler can compile twice to see what's going on up ahead and lay down an optimized trace; when that line is stretched out to hold optimizing code for speed or size, an algorithm can insert a bypass so the bulky code appears smaller than it truly is. That said, these things boil down to the organization chart of the libraries and its relationship to the start of the program: how soon execution jumps into bulky library code, that kind of thing.

Should we pull one of our functions in from an entire library, or develop a second, possibly a third, routine that traces back to our root function? And if all we need forces us to attach the entire library to our project, what are the trade-offs?

It isn't the compiler's fault how things get packaged into libraries. Looking at it another way, if you add an MS library to the project, there is some advantage in it not being too specific to the everyday approaches a developer encounters, so why not eliminate as much mundane coding for the developer as time permits? The trade-off might be object-code size, and possibly speed.
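To make that trade-off concrete, a small C sketch (the arrays are invented): leaning on the library's qsort costs almost no development time, while a hand-rolled sort is more code to write and maintain but depends on nothing beyond what you wrote.

#include <stdio.h>
#include <stdlib.h>

/* Library route: one comparator and one call to qsort. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Hand-rolled route: more lines to write, test, and maintain,
   but no dependency on how somebody else packaged their sort. */
static void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        for (; j > 0 && a[j - 1] > key; j--)
            a[j] = a[j - 1];
        a[j] = key;
    }
}

int main(void)
{
    int a[] = {5, 1, 4, 2, 3}, b[] = {5, 1, 4, 2, 3};
    qsort(a, 5, sizeof a[0], cmp_int);
    insertion_sort(b, 5);
    for (int i = 0; i < 5; i++)
        printf("%d %d\n", a[i], b[i]);
    return 0;
}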

It would seem the more programs the better, which gives us more developers, not all the same: not always the smallest or fastest, not always bug-free, but development time is reduced.

Anyway, we finish the last line of code, then come to the second and third branches of development; our new little tree keeps growing, and so on, until each branch shares something in common with the next, at which point we have a library of our own, specific to the function and behavior of the program.

By not compiling XYZ's library into our program, our code is smaller and faster, but the extra time needed to develop it has cost our investors on Wall Street their dreaded quarterly aches.

Maybe we should have used a standardized library instead, but then, as everyone knows, they would say our code was bloated. Meanwhile ABC company already hits the showroom floor because they used libraries.

John

Reply to
John Scheldroup

2,087,583,744 bytes is the size of my SURFCAM folder. It began life a few years ago as a basic folder, but has since had a few upgrades added to it. I don't know how small it would be if it were a clean installation. I have four different help/manual files ranging in size from 18 meg to 115 meg. There are lots of video clips and sample files, and with four different versions in the folder I don't have enough time to sort it all out without some kind of PO and arrangements for payment for the time involved. The above info is for SURFCAM 3-axis Plus. How much $ per meg? Not sure, but with that many megs of stuff it can't be much.
Reply to
Charlie Gary

Wow. Never really checked but...1.35 gigs w/o help. But hey, it's got lots of "stuff". : )

-- Bill

Reply to
Bill

Comparing the size of a program is not relevant.

My favorite example is WordPerfect 5.1, which came in DOS and Windows versions. The DOS version installed was about 1.8MB. The Windows version was about 10.6MB.

Just using a WIMP interface added a lot of bloat to the program.

Reply to
Ben Loosli

Surfcam isn't all that difficult to judge. Look in the "surfxxx" folder, where xxx is the version/year, and get its size. That's the executable directory. But there are more support directories for the translators, verify, etc. It's not that hard if you know what you're looking at.

Take for instance Surfcam 2002 (we have 4 seats of 2-axis+):

Binaries: 128 meg
AVI videos: 265 meg
Preditor Verify: 29.3 meg
Tool libraries: 3.35 meg
Catia translator: 6.34 meg
Sample files: 5 meg
HTML help files: 16 meg
PDF manuals: 19.8 meg

Reply to
Steve Mackay
