Chalk And Cheese - A Few Further Thoughts On Impulses
When studying the mathematics associated with DSP, one encounters time and again equations from which one could deduce that the act of sampling creates impulses numerically equal in size to a multiple of the size of Dirac's Delta Function.
However, this is an a posteriori deduction about the nature of sampling, and not an a priori matter.
This is where a number of texts and tutorials about DSP fall down: they introduce, from lesson one, the explanation that sampling comes about by multiplying an input waveform by a comb of delayed Delta Functions, but without any justification or explanation of the claim.
Anybody knowing the slightest about Delta Functions can rightly complain at this claimed action. For instance, the instantaneous multiplication of one function by another, where the instantaneous height of one of them is of the order of fg, will yield a sample that is also of the order of fg, and not one that is of the order of unity.
An answer to this protest might be that one multiplies by the _AREA_ of the impulse, but anybody knowing the slightest about the calculus can rightly complain about this too, because there is no part of the calculus that brings about the instantaneous multiplication of the instantaneous value of one function by the area of another. Indeed, if one is going to introduce an area into the field of integral transforms, one had better introduce an integration to evaluate that area! It is no achievement to have studied the calculus and then to allow oneself to be fobbed off with silly claims.
Some who cannot answer the objections raised above tell themselves that the impulse is of Planck width, 10^(-43) seconds, but the inevitable conclusion is then that the amplitude of a unit-area sampling pulse is 10^(43), and not unity!
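The objections above can be put on a numerical footing. Here is a minimal sketch of my own (in Python; the figures and function are illustrative, not from any of the texts complained of): a unit-area rectangular pulse of width T must have height 1/T, so naive pointwise multiplication scales the sample by 1/T, while the sifting property, which is an integration, recovers the true value.

```python
import numpy as np

T = 1e-3                                    # pulse width, seconds
t0 = 0.013                                  # sampling instant (arbitrary)
f = lambda t: np.sin(2 * np.pi * 10 * t)    # an illustrative input waveform

# Approximate a unit-area impulse: rectangle of width T, height 1/T.
t = np.linspace(t0 - T / 2, t0 + T / 2, 10001)
dt = t[1] - t[0]
pulse = np.full_like(t, 1.0 / T)            # height 1/T, so area = 1

pointwise = f(t0) * (1.0 / T)               # the naive "multiplication": ~f(t0)/T
sifted = np.sum(f(t) * pulse) * dt          # the sifting INTEGRAL: ~f(t0)

print(pointwise)                            # off by a factor of 1/T
print(sifted)                               # close to f(t0)
```

Note that it is the integration, not the pointwise product, that yields a sample of the order of unity, which is exactly the objection raised above.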
So, what is the answer to this apparent anomaly: that the mathematics of sampling seems to throw up Delta Functions, yet taking sampling by Delta Functions as an a priori claim throws up valid objections? How does one justify or rationalise the explanation?
The worlds of discrete mathematics and continuous mathematics are as different as chalk and cheese. Chalk is gritty and made up of discrete particles, whereas cheese is a smooth continuum of material.
However, from a distance chalk appears to be smooth, and close up, microscopically close, cheese appears to be gritty. Perhaps this might account for some of the (very) cheesy explanations given to claim that sampling is the action of multiplication by a comb of delayed Delta Functions?
What I suggest is that we, as engineers, are stretching the credibility of what we are taught as nascent mathematicians.
The grittiness of cheese: when we learn about the calculus, we evaluate in small steps, based on dt tending towards zero. I suggest that our use of the impulse for sampling assumes that we are at this small-but-finite step, and not at a limit of exactly zero.
The smoothness of chalk: sampling at the Nyquist rate gives us sufficient information to recreate the input waveform. A greater sampling rate gives us no further information (although it might ease the implementation of an anti-aliasing filter). I suggest that our use of an impulse comb assumes the time between samples to be at the dt of our calculus, although this stretches credibility as the input frequency approaches half the sampling rate.
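The "smoothness" claim, that the samples alone suffice to recreate the waveform, can itself be demonstrated. A minimal sketch of my own (the rates and tone are arbitrary choices for illustration): sample a tone well below half the sampling rate, then rebuild intermediate values by Whittaker-Shannon sinc interpolation.

```python
import numpy as np

fs = 100.0                         # sampling rate, Hz
f0 = 10.0                          # tone frequency, well below fs/2
n = np.arange(-200, 201)           # sample indices (finite, so approximate)
samples = np.cos(2 * np.pi * f0 * n / fs)

def reconstruct(t):
    # Whittaker-Shannon interpolation: sum of sample-weighted sincs.
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(fs*t - n) is the kernel
    # centred on sample n.
    return np.sum(samples * np.sinc(fs * t - n))

for t in (0.0123, 0.0377):         # instants that fall between samples
    print(t, reconstruct(t), np.cos(2 * np.pi * f0 * t))
```

With a truncated (finite) comb the agreement is only approximate, which is itself a small illustration of the chalk-and-cheese point: the ideal result belongs to the continuous world, and the discrete computation merely approaches it.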
There are many shapes that can act as impulses. What is important is that the spectrum of the impulse completely covers the spectrum of the impulse response. Any frequencies outside the spectrum of the impulse response will not be passed through, and therefore it is immaterial whether we include them in the input frequency spectrum. The presence, or lack, of these higher frequencies, in whatever combination, accounts for the wide disparity in the shapes of those functions that can act as impulses. (This may well be the old tree stump that "Chimera" was sawing away at, but she did not explain it in such a fashion, and, indeed, it was difficult to extract any meaningful information from the slough of slimy childish comments in which she indulged herself.) Armed with this info, we do not even have to bring in the topic of Distributions to explain away impulses.
In the real world of sampling, our sampling takes place over an interval that is sufficient for the sample-and-hold capacitor to charge up. For the calculus-like approximation, and, indeed, to satisfy Nyquist, there must be no appreciable change in the input value for the duration of this sampling. Therefore, we can take the amplitude of the input waveform to be constant for the duration of the sample, and equal to that existing at the rising edge.
Can we take the sampling interval as the base-line for our impulse? The answer, I believe, is "Yes". This, however, would make the sampled value be in error by a factor of 1/T. This matter of 1/T was claimed by Briston-Johnsowe, but he did not explain it, claiming instead that the sampling pulse was ideally of zero width, or practically of Planck-Time width.
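The 1/T bookkeeping can be made concrete. A sketch of my own (figures illustrative): take the sampling pulse as a rectangle of unit HEIGHT over the aperture T, starting at the rising edge. The impulse-style integral then yields T times the input value, and scaling by 1/T restores the sample.

```python
import numpy as np

T = 1e-4                                    # sampling aperture, seconds
t0 = 0.013                                  # the rising edge
f = lambda t: np.sin(2 * np.pi * 10 * t)    # an illustrative input waveform

t = np.linspace(t0, t0 + T, 10001)          # the aperture interval
dt = t[1] - t[0]

raw = np.sum(f(t) * 1.0) * dt               # unit-height pulse: ~T * f(t0)
corrected = raw / T                         # the 1/T factor restores f(t0)

print(raw, corrected, f(t0))
```

The input barely changes over the aperture (the constancy assumed in the paragraph on sample-and-hold above), so the corrected value sits very close to the value at the rising edge.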
(Aficionados of my previous attempts to eschew obfuscative explanations might now like to take this 1/T as my Big-K.)