Derivation Of The Spectrum Due To f(t).d(t - T)?

There are a number of authors and a number of contributors to this NG who claim that sampling of analogue signals is represented by multiplying the incoming waveform f(t) by a comb of Diracian Delta Functions of the form d(t - T), and who then go on to claim that each such sample gives rise to a contribution to the spectrum of the sampled signal of f(T).e^(-sT).

As this claim, this apparently faulty meme, ought to be such a fundamental part of DSP, its foundation stone in fact, it should be possible to prove this assertion by appeal to Dirac's properties of his Delta Function, and to the Laplace transform. I think that such a proof should be a simple thing to be provided by that body of authors and contributors if the claim were to be true.

In our training in the Laplacian method, we are presented with a whole range of such derivations: the spectra for d(t), u(t), t, sin(t), rect(t), sinc(t), and so on. Why not a similar derivation for f(t).d(t - T)? It cannot be a difficult matter for those who make the claim that sampling is so represented!

As that body of authors and contributors seem unable to provide such a proof, and resort to side issues and rather silly and childish ad hominem attacks when challenged upon the matter, the conclusion that I reach is that the claim is false, and that that body of authors and contributors hold a religious-like stance to the matter and respond with the emotional maladjustment which is the mark of all those who are the religious loonies of the world today. (11/9 and the resultant war brought on by the religionists Bush, Blair and Windsor being prime examples of catastrophes brought on by emotional maladjustment.)

Reply to
Airy R. Bean


It appears that when you say "spectrum" you mean Laplace Transform. Assuming that's what you mean, then this is indeed very easy to show.

The definition of the LT is

(i) L(f)(s) = int_0^infinity f(t) e^(-st) dt.

And the definition of delta(t-T) is that if g is continuous (and T > 0) then

(ii) int_0^infinity g(t) delta(t-T) dt = g(T).

(I'm calling it delta instead of d to avoid confusion with the d in "dt".)

Now (i) says that

L(f(t)delta(t - T))(s) = int_0^infinity f(t)delta(t-T) e^(-st) dt,

and applying (ii) with g(t) = f(t) e^(-st) shows that

int_0^infinity f(t)delta(t-T) e^(-st) dt = f(T) e^(-sT).
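
(As a numerical aside, not part of the original argument: the sifting step can be sanity-checked by replacing delta(t - T) with a narrow rectangular pulse of width eps and height 1/eps, so that the integral tends to f(T).e^(-sT) as eps shrinks. A minimal sketch in Python, with f, T and s as arbitrary test choices:)

```python
import numpy as np

# Numerical check of the sifting step: replace delta(t - T) by a
# rectangular pulse of width eps and height 1/eps, and verify that
#   int f(t) * pulse(t) * e^(-s t) dt  ->  f(T) e^(-s T)   as eps -> 0.
# The choices of f, T and s here are arbitrary test values.
f = np.cos
T, s = 2.0, 0.7

exact = f(T) * np.exp(-s * T)
for eps in (1e-1, 1e-2, 1e-3):
    # the pulse is 1/eps on [T - eps/2, T + eps/2] and 0 elsewhere,
    # so the integral reduces to this interval
    t, dt = np.linspace(T - eps / 2, T + eps / 2, 20001, retstep=True)
    integrand = f(t) * (1.0 / eps) * np.exp(-s * t)
    approx = np.sum((integrand[:-1] + integrand[1:]) / 2) * dt  # trapezoid rule
    print(f"eps={eps:g}  approx={approx:.8f}  exact={exact:.8f}")
```

The approximation error shrinks like eps^2, since the integrand is smooth near t = T.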

Uh, right. Who is it who's been unable to give the proof above? Are you sure it's "unable" and not just "unwilling"? I mean this is all covered in every book on differential equations I've ever seen...

************************

David C. Ullrich

Reply to
David C. Ullrich

Up to the point quoted below, I agree with everything that you say, so thanks for your input.

But, to evaluate the (unilateral) Laplace transform, we either have to be able to present f(t) as a function of exponentials, in which case simple integration applies together with the adding of exponents, or we have to apply Integration by Parts (or else, for a time-domain product, we have to evaluate an s-domain convolution).

You simply cannot make a simple suggestion that g(t) = f(t).d(t - T) as you have done below.

Integration by parts.....

int(UV) = U.int(V) - int[ dU.int(V) ]

which becomes even more unwieldy when it needs to be int(UVW) as in evaluating the LT of f(t).d(t - T)

Reply to
Airy R. Bean

Uh, right. What I gave was a complete and correct proof (well, it was extremely informal by mathematical standards). You just saying I can't do what I did doesn't make the proof invalid, sorry. The fact that you don't understand something doesn't make it wrong.

************************

David C. Ullrich

Reply to
David C. Ullrich

Hi there,

I'm still a student, but I recall studying the effect of delta function sampling and I recall the proof by David. The way I understood the proof of it was that f(t).delta(t-T) = f(T), where T is often a constant delay or shift. You can say that f(t) has been sampled at point T.

This occurs because the delta function is zero everywhere except at T, so if you multiply the two, every point is zero except for the one at T (where delta(t-T) = 1). This is a basic property of a delta function and can be proved, but it's fairly obvious so I'm not going to.

Now you can evaluate f(t) at T, and use that in the Laplace transform like: integral(from 0 to inf) of ( f(T).e^(-st) ) dt. If f(T) is a constant you can pull it out the front.

If you periodically sample a signal, e.g. with a delta "comb" function: f(t).(sigma (from n=0 to inf) of (delta( t - nT))), you get an array of values, such that sampled f(t) = f(0) + f(T) + f(2T) + ... Since it is not continuous now, you can represent the signal as f(kT).

Now you are looking at a starred Laplace transform, which is no longer an integral because the signal is not continuous. Instead it is a summation of samples weighted by a complex exponential: F*(s) = sigma(k=0 to inf) of ( f(kT).e^(-skT) )

From there it is easy to see how each component contributes to the spectrum. To complete the transform, apply the proof of a geometric sum to get a neat ratio of polynomials (in basic problems).
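
(A quick numerical check of the geometric-sum step, taking f(t) = e^(-at) as an assumed example so that f(kT) = e^(-akT) and the series ratio is e^(-(a+s)T):)

```python
import numpy as np

# For f(t) = e^(-a t) the starred transform is a geometric series:
#   F*(s) = sum_{k=0}^{inf} e^(-a k T) e^(-s k T) = 1 / (1 - e^(-(a+s)T)),
# valid whenever Re(a + s) > 0.  a, T and s are arbitrary test values.
a, T, s = 0.5, 0.1, 1.0

k = np.arange(2000)                          # enough terms to converge
partial = np.sum(np.exp(-(a + s) * k * T))   # truncated F*(s)
closed = 1.0 / (1.0 - np.exp(-(a + s) * T))  # geometric-sum closed form
print(partial, closed)
```

With the substitution z = e^(sT) the same closed form becomes z/(z - e^(-aT)), which is the familiar z-transform of the sampled exponential.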

The z-transform is more commonly used and is strongly related to the starred laplace transform.

I hope that is what you were looking for, and also that it is all correct (my memory sometimes plays tricks on me). Cheers, Marc

Reply to
Marc W

The fact that you cannot do what you did _DOES_ make the "proof" invalid.

I understand fully what I am proposing.

If there is to be any criticism that there is a lack of understanding then it seems to arise in your own contribution.

Nevertheless, thank-you for your attempt at assistance, wrong though it was.

Reply to
Airy R. Bean

Thanks for joining the discussion.

What you say below is not supported by the properties of the Diracian Delta function. It has to be under an integral sign to get the f(T).

Reply to
Airy R. Bean

Thanks for your contribution, which I disagree with as follows.....

You need to do int(+/-inf)(f(t).d(t-T).e^(-st)) to determine your result.

You simply cannot say that f(t).d(t-T) yields f(T) unless you do so under an integral. This arises from the fundamental properties of the Diracian Delta function .

Certainly, if you integrate by parts, you would choose int(f(t).d(t-T)) as the integrated bit to yield f(T), but when I try this, I get 0!......

int(UV) = U.int(V) - int[dU.int(V)] giving.....

int(+/-inf)(f(t).d(t-T).e^(-st)) as .....

with "f(t).d(t-T)" as V and "e^(-sT)" as U .....

f(T).e^(-st) - int(e^(-st)/-s . f(T)).....

f(T).e^(-st) - -s.e^(-st)/-s . f(T).....

f(T).e^(-sT) - f(T).e^(-sT)....

What I seek is a sound mathematical proof of the claim that f(t).d(t-T) gives rise to a spectrum contribution of f(T).e^(-sT), and claiming that f(t).d(t-T) equals f(T) is not sound. We use the properties of the Diracian Delta Function in so many other aspects of signal processing that it is just not right to pull-a-fast-one.

Reply to
Airy R. Bean

You just saying that it was a complete and correct proof does not make the proof valid, sorry.

The fact that you don't understand something doesn't make it right.

Reply to
Airy R. Bean

It would if you gave an explanation for _why_ I can't do that, instead of just asserting it.

I begin to understand the comments that you called ad hominem a few posts up, although I haven't seen them. If you were interested in understanding this you'd ask me to explain why the steps in the proof were correct instead of just asserting they're invalid. But you're somehow convinced that you must be right, even though we're talking about standard mathematical facts that you can read in undergraduate textbooks. The idea that you can be so certain you're right and all those books are wrong is simply kooky.

************************

David C. Ullrich

Reply to
David C. Ullrich

Is not the same thing true of you, and therefore an invalid comment to make?

Reply to
Airy R. Bean

I did give you an explanation, suggesting that you must either integrate by parts or else must resort to frequency-domain convolution.

Reply to
Airy R. Bean


No, because I gave an actual proof that I was right, starting from the definitions.

************************

David C. Ullrich

Reply to
David C. Ullrich

That's exactly the part that you simply _stated_ without proof. It's not true, by the way.

************************

David C. Ullrich

Reply to
David C. Ullrich

That's not true. What's true is that

f(t).delta(t-T) = f(T).delta(t-T).

The claim that delta(t-T) = 1 when t = T is not true either.

Actually there's no such thing as the value of delta(t-T) when t = T. The delta "function" is not a function. What it is is a "generalized function". What that means is a slightly long story, but the simplest way to think of it is probably this: There's no such thing as delta(t-T) _except_ when it appears under an integral sign. And if g is a continuous function then

(*) int_-infinity^infinity g(t)delta(t-T) dt = g(T).

Really - (*) is a version of the _definition_ of what the delta "function" really is.

That's the way it has to be to make things like the Laplace transform come out right. If you set g(t) = e^(-st) in (*) you see that the Laplace transform of delta(t-T) is e^(-Ts), which is what we want it to be. If on the other hand it were true that delta(t-T) = 0 when t != T and 1 when t = T, then that Laplace transform would be the integral of a function that vanishes except at one point, and that integral is _zero_.

(If we were talking about discrete variables instead of continuous variables then what you say about delta(t-T) would be exactly right.)
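
(To illustrate the point numerically, a sketch assuming the "ordinary function" reading of delta: a function that is 1 at t = T and 0 elsewhere contributes essentially nothing to any Riemann sum, so its "Laplace transform" would be 0, not e^(-sT). T and s are arbitrary test values.)

```python
import numpy as np

# If delta(t - T) really were an ordinary function equal to 1 at t = T
# and 0 elsewhere, its Laplace transform would be the integral of a
# function that vanishes except at one point -- and that integral is 0.
# T and s are arbitrary test values.
T, s = 2.0, 0.7

t, dt = np.linspace(0.0, 10.0, 100001, retstep=True)
g = np.where(np.isclose(t, T), 1.0, 0.0)   # "1 at T, 0 elsewhere"
riemann = np.sum(g * np.exp(-s * t)) * dt  # one nonzero sample times dt
print(riemann, np.exp(-s * T))             # vanishes as dt -> 0; e^(-sT) does not
```

Refining the grid only makes the Riemann sum smaller, since the single nonzero sample is weighted by an ever-shrinking dt.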

************************

David C. Ullrich

Reply to
David C. Ullrich

If you say, "No" to the question posed below, then you are saying that you are not convinced that you must be right.

Reply to
Airy R. Bean

.....[Pantomime mode ON].....

Oh, Yes! It is!

.....[Pantomime mode OFF].....

Reply to
Airy R. Bean

No, that's not true and is not supported in any way by the properties of the Diracian Delta Function.

There are no simple multiplication properties unless expressed under an integral sign.

What is true is that int(+/- inf)(f(t).d(t-T)) is f(T), but the Delta is no longer present, having been integrated out.

Reply to
Airy R. Bean
