So now with AMD selling 64 bit processors and Microsoft coming out with a 64 bit XP operating system, how long until SolidWorks is compiled to run 64 bit? With all the bloat that's happening in the software, surely we could all benefit from this if it happens sooner rather than later.
Just pondering what system to upgrade to and how long the wait will be before I can benefit from it.
The pace at which swx is becoming less productive is likely to continue offsetting any hardware speed increases. On a modern PC, sw2004 is much slower to use than sw2001 was on a P3-750. If swx doesn't establish some priority on performance, I think hardware improvements will be treading water, at best. I'm hoping the NBT (speed/productivity) in modeling software will emerge soon; meanwhile I'm just finishing my first (and last) fixture drawing set in sw2004. My $.02 bill
I don't suspect they will do it anytime soon. Your question got me to thinking:
SolidWorks has a solid record of not supporting performance oriented hardware. For example:
AGP bus -- not supported
64 bit -- dropped. Used to support it circa 1995, 1996, 1997.
Hyperthreading -- not supported; causes a performance decrease if turned on.
Multiprocessing -- not supported, though it can result in an imperceptible speed increase. (See clustering below for other SW apps that aren't supported.)
Clustering -- not supported, even by apps that can utilize it like CosmosWorks and FloWorks.
While FloWorks has made tremendous strides in performance over the last few years, let us not forget that they are really NIKA and are licensed by SW.
That being said, apparently the AMD 64 bit processors run 32 bit OSes and apps quite well and offer two key advantages: the ability to address a great deal more memory than is possible with current Intel processors, and a great deal more memory bandwidth than is currently available. So they are on top for running 32 bit apps, and when software vendors start coming out with 64 bit apps that are cost effective, they will have little choice but to go with AMD.
Just a question - will we still have the Windows limitation of only 2GB per application (3GB if the switch is hacked), or does Windows XP work with 64 bit processors? Accessing more memory would be a HUGE advantage, especially with the way SWx is going. But it might as well not be there if the OS doesn't allow it.
Whatever the compiler used to compile the software says it is. How many bits the system 'has' is irrelevant. Solidworks compiled for a 32 bit system will still only be able to access 2GB of address space (boot.ini hacks notwithstanding) on a 64 bit system.
It's not up to the compiler settings, usually. Not if you expect it to actually run on the target hardware.
No, it's not. Not at all .. else why not be happy with 8 or 16 bit systems?
That may well be the case IF the 64 bit system is emulating 32 ..... which, as long as there ARE 32 bit parts out there to interoperate with ....
Remember: The integers in the part databases also have a data format. Reading a 64 bit integer on a 32 or 16 bit system might cause problems...
At some point in time almost all systems may be 64 bit ... THEN part databases may be migrated to a 64 bit format. The same no doubt applies to floating points.
Some software may take full advantage of a 64 bit processor but much of that will be new I suspect. Things like graphics and much floating point MAY run faster, if properly written and compiled, IF they do not affect the format of the numbers in the resulting stored/interoperable part database.
I don't know (right off) about things now declared as "double precision" in the existing applications software that might, on a 64 bit system, become single precision. There may well be some gains there IF the part's binary format remains a constant.
Actually, it is. Properly written code will have the size of native data types abstracted out. Competent C/C++ programmers have been doing this for years. It is the only way to ensure that code written on platform X will work after recompiling for platform Y.
Er, we're talking about 32 bit to 64 bit. Of course it doesn't work the other way. You can always put a half gallon of milk in a one gallon jug. The reverse, of course, is not possible without a great deal of extra code, which would slow things down too much.
We're talking about the reverse. A 32 bit integer will *always* fit in a 64 bit slot. And once again, as long as the code is written so that the actual size of various data types (int, long, long long, double, float, etc) isn't hard wired, the native platform size doesn't matter. The same is true for the data format. There is always some decomposition that takes place in the data format simply because all of those longer data types (int, long, double) are converted to streams of (8 bit) bytes for storage in the file system.
Double precision numbers on today's systems are already 64 bit. They are stored in two parts, each 32 bits long, which cannot be accessed simultaneously (unless you have dual processors, but that is another mess in and of itself). That's part of the performance advantage of 64-bit systems: one access per value instead of two.
Or if the format was created such that the size of native types has been abstracted out. Which it should be. Only SW knows if that is the case.
Jim, the part's data storage format for integers and floats has to work across the board, I think. Kind of hard to shoehorn 64 bit parts into 32 bit ones, don't you think? I suppose one could always port all the 32 to dummy 64-like data structures and use a special version of the applications software to translate it to real 32 bit every time ....
What causes software & part bloat and bugs?
Now, about that 2GB memory limit & 2**(32-1) ..... ?
Not hard. Just somewhat inefficient. As I said in my previous message, double precision floating point numbers are *already* 64 bits. There is also already a 'long long' data type, which is a 64 bit integer.
As I've already said, if the code is written properly, the size of data type on the host machine should not matter to the application software. The compiler *should* do that work. Now what percentage of software has been written properly? Not much I imagine.
In my opinion, reliance on someone else's code that was designed to do something other than what it is being used for. If you want something done right, do it yourself and all that.
In the presence of a 64-bit operating system, the 4GB limit becomes 18 exabytes. That's 18E+18. Minus whatever the host OS decides to keep for itself.
Two sequential address spots. Intel actually has datatypes that allow you to use 128-bit data (known as a double quadword).
In any event, it is a safe bet to assume that SW already uses 64 bit double precision floating point data. The 32bit floating point type typically does not have enough accuracy. So, on a 64 bit platform, the double precision data lives at one address, as opposed to two sequential addresses.
This assumes the *data* is 32 bits today. I don't think it is.
The 32-bit FPU (floating point unit) already has native math instructions to work on 64 bit values. In fact, all 32-bit floating point values stored in memory are converted to double precision extended floating point (80-bit) format when moved from memory to the FPU for processing.
They already do that every time you have some code using double precision floating point values.