Derivation Of The Spectrum Due To f(t).d(t - T)?????

Dirac did not define his function as a generalised function because the concept did not exist in 1930.

The introduction of generalised functions and the Theory Of Distributions is entirely irrelevant when no mathematics resulting from that theory is applied.

The product of a generalised function with an ordinary function is another generalised function. No generalised functions appear in our DSPs.

Reply to
Airy R. Bean

No. Sorry. There was a good reason to do so.

Reply to
Airy R. Bean

Reply to
Airy R. Bean

Do what?

Reply to
Torkel Franzen

Where?

Reply to
Torkel Franzen

Perhaps you should try to understand it a bit more before saying that someone is making false statements.

The above means (i.e., is equivalent to) the following:

int_{-oo}^oo f(t).delta(t-T) dt = int_{-oo}^oo f(T).delta(t-T) dt

The left-hand side simplifies to f(T). The right-hand side also simplifies to f(T). If you don't see the latter, define h(t)=1 (for all t). Now

int_{-oo}^oo f(T).delta(t-T) dt = int_{-oo}^oo f(T).h(t).delta(t-T) dt

= f(T) . int_{-oo}^oo h(t).delta(t-T) dt = f(T).h(T) = f(T) since h(T)=1.

Hence they are indeed equal.
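
For anyone who would rather see this numerically than argue about it, here is a small Python sketch (my own illustration, not part of Wilbert's argument; f, T and the pulse width are arbitrary choices). It stands a narrow unit-area Gaussian in for delta(t-T) and evaluates both integrals above; as the pulse narrows, both come out close to f(T) = cos(1.5) ~ 0.0707.

    import numpy as np

    T = 1.5
    f = np.cos                      # arbitrary smooth test signal f(t)

    def delta_approx(t, eps):
        # unit-area Gaussian pulse of width eps (a "nascent" delta at the origin)
        return np.exp(-(t / eps)**2) / (eps * np.sqrt(np.pi))

    t = np.linspace(-10.0, 10.0, 2_000_001)
    for eps in (1e-1, 1e-2, 1e-3):
        d = delta_approx(t - T, eps)
        lhs = np.trapz(f(t) * d, t)     # int f(t).delta(t-T) dt
        rhs = np.trapz(f(T) * d, t)     # int f(T).delta(t-T) dt
        print(eps, lhs, rhs, f(T))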

Wilbert

Reply to
Wilbert Dijkhof

Your dU, i.e. dU/dt, i.e. d/dt e^(-sT), equals zero. Thus int[dU.int(V)] = int(0.f(T)) = 0.

and U.int(V) = e^(-sT) . f(T), which is equal to the left-hand side.

Wilbert

Reply to
Wilbert Dijkhof

And what would that good reason be? BTW, MS Outlook Express, despite its failings, does group things by thread even if the subject line has been changed.

Reply to
Jon Harris

You really don't know nearly as much about any of this as you think you do.

delta is a _measure_. The product of a measure and a continuous function is a standard thing; by _definition_ saying that f(t).delta(t-T) = f(T).delta(t-T) means that if g is a continuous function then

(*) int g(t).f(t).delta(t-T) = int g(t).f(T).delta(t-T)

(where int is the integral from -infinity to infinity in general, or from 0 to infinity here). And (*) is true, because both sides equal g(T)f(T).
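
If you want a machine to check (*) rather than take the measure theory on trust, sympy's DiracDelta behaves exactly this way. A minimal sketch (my own illustration; the concrete choices g(t) = exp(-t^2) and f(t) = cos(t) are arbitrary):

    import sympy as sp

    t, T = sp.symbols('t T', real=True)
    f = sp.cos(t)                   # arbitrary continuous f
    g = sp.exp(-t**2)               # arbitrary continuous test function g

    lhs = sp.integrate(g * f * sp.DiracDelta(t - T), (t, -sp.oo, sp.oo))
    rhs = sp.integrate(g * f.subs(t, T) * sp.DiracDelta(t - T), (t, -sp.oo, sp.oo))

    print(lhs)                      # cos(T)*exp(-T**2), i.e. g(T)*f(T)
    print(rhs)                      # cos(T)*exp(-T**2)
    print(sp.simplify(lhs - rhs))   # 0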

Yes, that's true. Doesn't imply that f(t).delta(t-T) = f(T).delta(t-T) is false.

************************

David C. Ullrich

Reply to
David C. Ullrich

Well actually no, that's not what it means. See my reply to Airy.

************************

David C. Ullrich

Reply to
David C. Ullrich

I didn't say that Dirac used the term "generalized function". The delta "function" _is_ a generalized function, aka distribution. Dirac's delta "function" was one of the first examples of the _concept_, which was named and put on a rigorous mathematical basis by Schwartz years later.

************************

David C. Ullrich

Reply to
David C. Ullrich

Uh, sorry, I didn't read carefully, in particular I didn't realize that you were now quoting less than complete sentences.

Yes, I am convinced I'm right. The complete sentence that I thought the "is not the same true of you" was asking about was this:

"But you're somehow convinced that you must be right, even though we're talking about standard mathematical facts that you can read in undergraduate textbooks."

Yes, I'm convinced I'm right. No, it's not true that I'm convinced I'm right _in spite of_ the fact that I'm contradicting what's in hundreds of textbooks. The person who's somehow convinced he's right and all those books are wrong is you.

************************

David C. Ullrich

Reply to
David C. Ullrich

Uh, right. Now why don't you give us a _proof_ of your assertion that the only way to evaluate a Laplace transform is integration by parts or convolution?

It's a funny thing. I've been a mathematician for over 20 years, and this is the first time I've ever seen anyone insist that X is the _only_ way to accomplish Y.

************************

David C. Ullrich

Reply to
David C. Ullrich

hey Beanie,

i have a proposal for you. would you promise to shut up and go away (or, at least, just shut up) if i show you where the flaw is in your integration by parts? it's amazing that you don't just accept the simple proof (applying the definition of the dirac impulse function) that

integral_{-inf}^{+inf} { x(t) d(t-T) e^(-st) dt }        ( d(t) = dirac impulse function )

= integral_{-inf}^{+inf} { [x(t) e^(-st)] d(t-T) dt }

= integral_{-inf}^{+inf} { x1(t) d(t-T) dt }             ( where x1(t) = x(t) e^(-st) )

= integral_{-inf}^{+inf} { x1(u+T) d(u) du }             ( substitute u = t-T )

= integral_{-inf}^{+inf} { x2(u) d(u) du }               ( where x2(t) = x1(t+T) )

= x2(0)                                                  ( by definition of the dirac impulse )

= x1(T)

= x(T) e^(-sT)
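
A numeric sanity check of that last line (my addition, not part of the post above; x, T, s and the pulse width are arbitrary): stand a narrow unit-area pulse in for d(t-T), do the integral numerically, and compare against x(T) e^(-sT).

    import numpy as np

    T = 1.5                         # arbitrary delay
    s = 0.3 + 2.0j                  # arbitrary (complex) Laplace variable
    x = np.cos                      # arbitrary test signal x(t)

    def delta_approx(t, eps):
        # unit-area Gaussian pulse of width eps (a "nascent" delta at the origin)
        return np.exp(-(t / eps)**2) / (eps * np.sqrt(np.pi))

    t = np.linspace(-10.0, 10.0, 2_000_001)
    numeric = np.trapz(x(t) * delta_approx(t - T, 1e-3) * np.exp(-s * t), t)
    exact = x(T) * np.exp(-s * T)
    print(numeric)                  # ~ exact
    print(exact)                    # cos(1.5)*exp(-(0.3+2j)*1.5)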

now that's how normal people without mental illness will evaluate that bilateral Laplace Transform integral. (people *with* mental illness may want to use a less direct and more difficult method so that they will more likely screw up and get the wrong answer.)

for some reason Beanie wants to evaluate it by parts and he insists he gets a different answer (zero). setting aside the lesson of Occam's Razor (that the simplest explanation should be used), *if* you integrate by parts *correctly* (which Beanie can't seem to do), the integral turns out to be the same.

Laplace{ x(t) d(t-T) } = x(T) e^(-sT), not zero as Beanie claims

Beanie, if you promise to behave yourself and stop trolling this newsgroup, i'll show you the flaw in your integration by parts. both integrating by parts and the straightforward integration come out with the same answer (as if any of us suspected otherwise).

but i'm gonna use the *textbook* definition of integrating by parts:

integral_a^b { u dv } = (uv)|_a^b - integral_a^b { v du }

(Beanie, you'll need a mono-spaced font to read this ASCII math well.)

Beanie wants (if he agrees to conventional notation)

u = e^(-st) and dv = x(t) d(t-T) dt

Beanie, if you agree to those (very reasonable) terms, i will show you (and everyone else reading it) the flaw in your derivation that erroneously says the result is zero.

r b-j

in article snipped-for-privacy@uni-berlin.de, Airy R. Bean at snipped-for-privacy@privacy.net wrote on 11/29/2004 04:02:

that's because your knowledge and insight is far less than you think and you're also quite mentally ill. (perhaps only the latter is true if you're the master troll Jerry makes you out to be, but both are true if you're simply the obnoxious crackpot and jerk that is more ostensible.)

Reply to
robert bristow-johnson

For what it's worth, tongue firmly in cheek, and with apologies to David Ulrich, who did it properly:

You seem to have a problem with the Dirac Delta definition.

Anyway, presumably you are happy with the Laplace transform L[f(t).delta(t-T)] = int_0^infinity f(t).delta(t-T).exp(-st) dt

Maybe the notion that the Delta function is, loosely speaking, zero nearly everywhere is sort-of vaguely familiar to you, so you might be convinced that something like this is true:

L[f(t).delta(t-T)] = lim M->0 int_(T-M)^(T+M) f(t).delta(t-T).exp(-st)dt

And maybe considering f(t) and exp(-st) might get you ruminating on their continuity, and a daring insight like this may leap out:

L[f(t).delta(t-T)] = lim M->0 int_(T-M)^(T+M) f(T).delta(t-T).exp(-sT)dt

Forgive me if I'm getting carried away, but isn't the Riemann Integral linear? I think it might be, so wouldn't that mean that

L[f(t).delta(t-T)] = lim M->0 f(T).exp(-sT) int_(T-M)^(T+M) delta(t-T)dt

I really think that I might be onto something here, but I need to work out how to integrate that pesky Dirac Delta ... any takers?
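
That pesky integral is 1, since the delta has unit area and the interval (T-M, T+M) straddles T, which turns the last line of the limit argument into f(T).exp(-sT), as required. A tiny Python sketch (my own; the numbers are arbitrary, and the pulse width must be much smaller than M) if you'd rather see it than believe it:

    import numpy as np

    T, M, eps = 1.5, 0.1, 1e-3      # arbitrary: centre, window half-width, pulse width

    def delta_approx(t, eps):
        # unit-area Gaussian pulse of width eps (a "nascent" delta at the origin)
        return np.exp(-(t / eps)**2) / (eps * np.sqrt(np.pi))

    t = np.linspace(T - M, T + M, 200_001)
    print(np.trapz(delta_approx(t - T, eps), t))   # ~ 1.0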

Reply to
Richard

Even more apologies to David Ullrich for spelling his name wrong

Reply to
Richard

in article NZQqd.13085$ snipped-for-privacy@news02.tsnz.net, Richard at richman snipped-for-privacy@hotmail.com wrote on 11/29/2004 21:32:

...

it's actually pretty straightforward to do it right (it's in the other post i just made). Beanie prefers to use integration by parts, and then makes a mistake (he actually doesn't set it up right from the beginning) and comes out with zero, so then he claims that all accumulated knowledge since Heaviside and Nyquist about it is in error. don't fall for his bullshit. he really doesn't know what he's typing about.

r b-j

Reply to
robert bristow-johnson

OK, let's integrate by parts:

L[f(t).delta(t-T)] = int_0^infinity exp(-st).f(t).delta(t-T).dt

using int_a^b udv = (uv)|_a^b - int_a^b v.du

choose u = exp(-st) => du = -s.exp(-st)dt and dv = f(t).delta(t-T) dt => v = int_0^t f(u) delta(u-T)du = 0 if t < T, or f(T) if t >= T

so, L[f(t).delta(t-T)] = [exp(-st).v(t)]_0^inf - int_0^inf -s.exp(-st).v(t)dt

= 0 + s.int_T^inf f(T)exp(-st)dt = [-f(T).exp(-st)]_T^inf = f(T).exp(-sT)
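
Purely as a cross-check (my addition, not part of the derivation above; f, T and s are arbitrary), the two pieces of the by-parts formula can also be evaluated numerically: with v(t) jumping from 0 to f(T) at t = T, the boundary term vanishes and the remaining integral alone reproduces f(T).exp(-sT). A minimal Python sketch:

    import numpy as np

    T = 1.5                         # arbitrary delay
    s = 0.3                         # arbitrary Laplace variable with Re(s) > 0
    f = np.cos                      # arbitrary test signal

    t = np.linspace(0.0, 60.0, 600_001)
    v = np.where(t >= T, f(T), 0.0)     # v(t) = int_0^t f(u).delta(u-T) du

    boundary = 0.0                      # [exp(-st).v(t)]_0^inf: 0 at t=0, -> 0 as t -> inf
    remaining = -np.trapz(-s * np.exp(-s * t) * v, t)   # - int_0^inf v(t) du, du = -s.exp(-st) dt
    print(boundary + remaining)         # ~ f(T)*exp(-s*T)
    print(f(T) * np.exp(-s * T))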

hopefully that is satisfactory for you, excuse the sloppiness

Richard

Reply to
Richard

Disclaimer, my previous post was not to be taken very seriously - the maths was intentionally hand-wavy and tongue-in-cheek. David Ullrich did it the standard way further up.

I just integrated it by parts for Mr Bean, so he is hopefully now satisfied ...

Richard

Reply to
Richard

let's integrate by parts correctly:

L[x(t).delta(t-T)] = int_0^infinity exp(-st).x(t).delta(t-T).dt

using int_a^b udv = (uv)|_a^b - int_a^b v.du as our formulation of parts,

choose: u = exp(-st) => du = -s.exp(-st)dt and dv = x(t).delta(t-T) dt => v = int_0^t x(u) delta(u-T)du = 0 if t < T, or x(T) if t >= T

so, L[x(t).delta(t-T)] = [exp(-st).v(t)]_0^inf - int_0^inf -s.exp(-st).v(t)dt

= 0 + s.int_T^inf x(T)exp(-st)dt = [-x(T).exp(-st)]_T^inf = x(T).exp(-sT)

hopefully that is satisfactory for you, I think the sloppiness is warranted.

Richard

Reply to
Richard
