David Nebenzahl wrote: : : Here's how I explained it to someone I'm working for, who expressed to : me that maybe the Canon 3.2 MP camera we were using wasn't good enough : and that we might need, say, a 16 MP camera. : That is only part of it as well. The point where "more MP" is silly is when you have more MP than your lens is able to resolve.
Most quality 35mm lenses can resolve somewhere around 200 lines/mm. So, it would make sense that you do not need more than 200 pixels per mm on your CCD, either.
Given the CCDs in use today, that is somewhere around 10 - 12 MP. : : I took him over to the : screen of his iMac and asked him how many megapixels he thought the : display was; after discovering that the screen resolution was set to : 1440x900, this was found to be ~1.3 MP. : If you are going to shoot photos for a web site, then you do not need a lot of MP, certainly.
If you are going to make enlargements, or want to enlarge just a portion of the image, then it is helpful to have the additional resolution.
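The pixels-per-mm arithmetic above can be sketched in a few lines of Python. The sensor dimensions and the one-pixel-per-line assumption are mine, for illustration; depending on the sensor size you assume (and whether you count lines or line pairs), the answer lands in the same ballpark as the 10 - 12 MP figure:

```python
# Back-of-envelope sketch of the "200 pixels per mm" argument.
# Assumptions (illustrative, not from any camera's spec sheet):
#   - the lens out-resolves the sensor up to ~200 lines/mm
#   - one pixel per resolvable line, i.e. 200 px/mm on the sensor

def megapixels_needed(width_mm, height_mm, px_per_mm=200):
    """Megapixels required to hit px_per_mm over a given sensor size."""
    return (width_mm * px_per_mm) * (height_mm * px_per_mm) / 1e6

# Typical APS-C-sized sensor (~23.6 x 15.7 mm), common in DSLRs:
print(round(megapixels_needed(23.6, 15.7), 1))   # 14.8

# Full-frame 35mm (36 x 24 mm) pushes the same math well past 30 MP:
print(round(megapixels_needed(36, 24), 1))       # 34.6
```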
What is of most interest to me is the maximum frames/second that I can shoot. Most 35mm "pro" level film cameras could do 5 - 6 frames/second, which means you could go through a roll of 36 exposure film in 6 - 7 seconds. That got expensive...
Digital SLR cameras are now at that level today, for around $1500.00, body only. And up, of course. Now, sustaining that kind of speed is still problematic. Most DSLRs only deliver that kind of frames/second using JPEG images, which is an inherently "lossy" format. So, if you want to shoot for enlargements or to enlarge just a section of the image, you are back to losing data before you can start working with it...
You can shoot RAW, which basically takes the sensor data and shoves it into a quasi-standard (each camera maker has a different idea of how their RAW images are formatted to storage, but they more or less follow the same principles. Sorta. Basically). However, you are moving a lot more data to storage, and eventually, you will fill your storage buffer, and have to stop shooting at a high frame rate. The key for me is how quickly that storage buffer is cleared. Newer cameras can write the data faster, but there are still hardware limitations.
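The fill-versus-drain tradeoff above can be modeled roughly: the buffer fills at (frame size x fps) and drains at the card's write speed. All the numbers below are made up for illustration, not specs for any particular camera, but they show why RAW chokes the burst so much sooner than JPEG:

```python
# Rough model of how long a burst can last before the buffer fills.
# All figures are illustrative placeholders, not real camera specs.

def burst_seconds(buffer_mb, frame_mb, fps, write_mb_per_s):
    """Seconds of continuous shooting before the buffer is full.

    Data accumulates at frame_mb * fps and drains at write_mb_per_s;
    if it drains as fast as it fills, the burst is unlimited."""
    fill_rate = frame_mb * fps - write_mb_per_s
    if fill_rate <= 0:
        return float("inf")
    return buffer_mb / fill_rate

# JPEG: ~3 MB/frame at 5 fps, card writes 10 MB/s -> buffer fills slowly
print(burst_seconds(64, 3, 5, 10))   # 12.8 seconds of shooting

# RAW: ~15 MB/frame at 5 fps, same card -> buffer fills in about a second
print(burst_seconds(64, 15, 5, 10))
```

A faster card or a bigger buffer moves both numbers, which is exactly why buffer-clearing speed matters more than the headline fps figure.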
As for how long it takes an AF camera to focus, that depends on the camera. Most DSLR cameras today have some kind of predictive AF mode, which calculates where a subject will be when the shutter is released. This technology was first marketed by Minolta, who sold their camera technology to Sony. With point & shoot cameras, you probably have no such luck. You also need to consider how long it takes a camera to "wake up" from a power conserving mode. Again, newer DSLRs are going to tend to be faster than their older counterparts. Nothing like missing a shot because the camera was fondling itself...
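The predictive AF idea above can be sketched as a simple extrapolation: from recent focus-distance samples, estimate where the subject will be after the shutter-release lag. Real implementations are far more sophisticated; the sample track and lag value here are invented for illustration:

```python
# Toy sketch of predictive AF: linearly extrapolate subject distance
# at (last sample time + shutter lag). Sample data is invented.

def predict_distance(samples, lag_s):
    """Extrapolate subject distance lag_s seconds past the last sample.

    samples: list of (time_s, distance_m) pairs, at least two."""
    (t0, d0), (t1, d1) = samples[-2], samples[-1]
    velocity = (d1 - d0) / (t1 - t0)    # metres per second (negative = approaching)
    return d1 + velocity * lag_s

# Subject approaching at ~2 m/s; assume 50 ms of shutter-release lag:
track = [(0.00, 10.0), (0.10, 9.8), (0.20, 9.6)]
print(predict_distance(track, 0.05))   # 9.5 -- focus is set ahead of the subject
```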
As for myself, I shoot Minolta (now Sony) DSLRs. I moved from Minolta MF bodies to the 2nd gen. AF body when Canon was thrashing around trying to get an AF camera to work even halfway well, and Nikon was still thinking that AF was a passing fad. When it came to digital, however, Minolta was asleep at the wheel. Their first DSLR was a Frankenstein of a mid-grade consumer AF body with an electronics package grafted (ungracefully) onto it. But, at 1.75 MP, it was fine for photos on a web site, and it preserved my $$$$ AF lens collection. Minolta eventually re-entered the DSLR market with a poor effort, and exited the photography business completely, which is where Sony comes in. Sony recently introduced a DSLR body that lives up to the Minolta AF legacy, and is what Minolta should have done before they took their ball and went home. In the meanwhile, of course, Canon and Nikon ate Minolta's lunch in terms of market share.
Bruce