10 metres audio cable going into PC = too long?

The telecom people would find it very amusing to see how much their customers take for granted. They work very hard and use many tricks and techniques to keep power-mains hum out of phone loops. Apparently they are doing a good job if we think that it is effortless.
Reply to
Richard Crowley
As with several other statements I have seen in this thread, on various sub-topics, the above seems to me an over-simplification. It is interesting to speculate whether, in this case, it is the statement itself that is ambiguous, or the ways in which engineers actually use the terms... Perhaps this supports the argument that people become engineers because they can't communicate very well... :-)
If you go back to some of the early sources [e.g. 1] then you can find some that describe what is observed by the receiver/destination as something like a 'received signal' which may include some 'noise' (and some distortion or other systematic alterations).[2]
However the sources also routinely refer to 'signal to noise' ratio.
Shannon seems to resolve this by distinguishing between the 'signal' (i.e. what the source transmitted) and the 'received signal' (i.e. what the destination actually observed to arrive).
So if we were to use a term like 'received signal' in the above statement it would essentially become either a tautology or self-referential as the signal includes the noise. Thus the problem with the statement is that it is unclear due to the ambiguous use of 'signal'. Hence, as often is the case with such ambiguous statements, people start arguing about the meaning when they are simply using different definitions which the ambiguity allows. :-)
FWIW for the above reason, when teaching Information Theory/ Comms/ Instrumentation I tended to use another approach which is common in the area. This is to say that a 'signal' means that the pattern (or part of the pattern) *is used to convey information content*.
Thus in the context of communications a 'signal' means that the sender and destination have to have pre-agreed the coding/modulation system to be employed, and the meanings of the code symbols or distinguishable patterns.
In the context of a physical scientist making observations - e.g. an astronomer observing what can be received from a distant radio galaxy - the 'signal' means that the observed pattern will be used to obtain information about the distant source.
The status of 'signal' then stems from the deliberation or requirement that it conveys information on a defined basis.
In both contexts what distinguishes 'signal' from 'noise' is the information conveyance the 'signal' provides, and that 'noise' tends to obscure, or limit, or make uncertain, the information recovery. This then helps make clear the actual meaning in practice of terms like 'signal to noise ratio'. (Although there may then be hours of fun for all the family as they argue about the distinction in this phrase between assuming 'signal' means either the intended/transmitted or the 'received' signal. :-) )
[1] e.g. Shannon
formatting link
[2] Probably best at this point not to start worrying about distortion as being 'signal' or not... ;->
Reply to
Jim Lesurf
Very true.
Not to mention that "no hum at all" is only in the perception of the customer, whereas telco people tend to actually measure it.
Granted though, a telephone installer just uses a very simple test set that gives a "good/bad" indication, not a specific number. And that would be the most that a customer would likely ever see. But when a cable is installed the pairs are very specifically measured and compared against design specifications, which were calculated very closely prior to construction. Nobody wants to invest in new cable plant and end up with a cable that can't be used...
Reply to
Floyd L. Davidson
Actually the language is probably a bit *too* precise for non-engineers... and it gets worse too, because nobody had mentioned "distortion" until your article.
First, here are the formal technical definitions, from Federal Standard 1037C, for signal, noise, and distortion. (Just be aware that they don't necessarily mean what one might think!)
signal:
1. Detectable transmitted energy that can be used to carry information.
2. A time-dependent variation of a characteristic of a physical phenomenon, used to convey information.
3. As applied to electronics, any transmitted electrical impulse.
4. Operationally, a type of message, the text of which consists of one or more letters, words, characters, signal flags, visual displays, or special sounds, with prearranged meaning and which is conveyed or transmitted by visual, acoustical, or electrical means.
Note that it is something that "can be used to carry information", but there is no requirement that "information" either be present or be useful.
The energy used for AC power *is* a signal. In this thread *all* references to hum (which clearly *does* carry information, otherwise we would not be able to hear it and distinguish it as unique) and to "power line" or "AC" energy correctly refer to a "signal", which may or may not be a "noise" depending on the circumstance.
noise:
1. An undesired disturbance within the frequency band of interest; the summation of unwanted or disturbing energy introduced into a communications system from man-made and natural sources.
2. A disturbance that affects a signal and that may distort the information carried by the signal.
3. Random variations of one or more characteristics of any entity such as voltage, current, or data.
4. A random signal of known statistical properties of amplitude, distribution, and spectral density.
5. Loosely, any disturbance tending to interfere with the normal operation of a device or system.
Each of those definitions carries some baggage, which usually goes unnoticed until someone gets pedantic about technical terms.
Definition 1, the most precise and restrictive definition, requires that the disturbance be "introduced", which implies that it originate external to the circuit itself. That is the difference between "noise" and "distortion", when the two are differentiated. Generally though, a distortion is a noise, but a noise is not necessarily a distortion. (Much as a signal might be noise, but noise is not necessarily a signal.)
Definition 2 includes the term "distort". Definitions 3 and 4 use the term "random". And definition 5 is the more commonly used catch all term.
distortion:
1. In a system or device, any departure of the output signal waveform from that which should result from the input signal waveform's being operated on by the system's specified, i.e., ideal, transfer function.
Note: Distortion may result from many mechanisms. Examples include nonlinearities in the transfer function of an active device, such as a vacuum tube, transistor, or operational amplifier. Distortion may also be caused by a passive component such as a coaxial cable or optical fiber, or by inhomogeneities, reflections, etc., in the propagation path.
2. In start-stop teletypewriter signaling, the shifting of the significant instants of the signal pulses from their proper positions relative to the beginning of the start pulse.
Note: The magnitude of the distortion is expressed in percent of an ideal unit pulse length.
The significance of the distinction between noise and distortion might be lost on anyone but a design engineer, or perhaps a theoretical physicist. At a maintenance and operations level, it makes no difference.
Ahem, Shannon is an "early source"? Telecommunications as we know it today was a hundred years old by the time Shannon began publishing! And that has only been ~60 years now. I spent many years working on equipment that was designed before Shannon...
Shannon does not exclude noise from being a signal. He merely uses the proper terms to distinguish between different signals, with the realization that we have no interest in the information carried by some signals... :-)
What is commonly called "Signal to Noise Ratio" is commonly more correctly called "Signal + Noise to Noise Ratio". In circumstances where the ratios are greater than, say, 15-20 dB or so, it is of little importance. Hence in typical telecommunications voice channels it is rarely considered. On the other hand in some data circuits and when applied to noise figures for microwave radio receivers, where the ratios are much smaller, the fact that the signal is actually Signal + Noise is important.
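To put numbers on that, here is a small back-of-the-envelope sketch (the function name is mine, not a standard one) showing how a meter that actually measures (S+N)/N diverges from the true S/N at low ratios:

```python
import math

def sn_plus_n_over_n_db(snr_db):
    """What a meter measuring (S+N)/N reads, in dB, for a true S/N in dB."""
    snr_linear = 10 ** (snr_db / 10)        # power ratio S/N
    return 10 * math.log10(1 + snr_linear)  # (S+N)/N = 1 + S/N

for snr_db in (3, 10, 20, 40):
    print(f"true S/N = {snr_db:2d} dB -> (S+N)/N reads "
          f"{sn_plus_n_over_n_db(snr_db):.2f} dB")
```

Above about 15-20 dB the two readings agree to within a few hundredths of a dB; at a true S/N of 3 dB the meter reads nearly 1.8 dB high, which is why the distinction matters for microwave receiver noise figures but is rarely considered on ordinary voice channels.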
Ah, but ignorance on the part of some is not the fault of those who actually *are* using the term without ambiguity. Some posters, Don Pearce being the most obvious, have not understood the term and have been confused, and made efforts at confusing others.
But that doesn't mean the terms are actually ambiguous.
Note the difference between something that "can" and something that "is". Also, "information" seems to be misunderstood in that definition... you seem to be suggesting that "hum" is a noise that does not contain information, which is not the case. :-)
That would not fit the typical way the term is used in practice by people who work in the telecommunications field.
Again, "can" is appropriate, but "will be" is going to cause a misunderstanding.
That is too restrictive.
And it might well be the information carried by the noise signal that makes the information from the desired signal uncertain...
Everyone who has any interest in effective communications should study what Claude Shannon summarized. It is absolutely fascinating to read.
Can it contain information?
Distortion can *always* be counteracted by the introduction of an "error signal" which is opposite to the distortion. Therefore it would seem that distortion is necessarily a signal in all cases.
Reply to
Floyd L. Davidson
Wow, that brings back memories! The Shannon Day conference/celebration was quite an interesting event. Now I'm going to have to dig through my old files for that packet of papers.
I'd tend to say that distortion adds to the noise side of the SNR, and some can be corrected... but *always*? Let's say the distortion is the result of clipping...
[ ...or maybe I've missed your point. ]
Ron Capik
Reply to
Ron Capik
Absolutely always. Recall that distortion is a known condition resulting from the communications channel itself. The output is known *before* the signal is input. (E.g., clipping is not arbitrary, and produces a very specific error signal.)
Which of course is something Shannon describes, and uses in examples, in "A Mathematical Theory of Communication".
Reply to
Floyd L. Davidson
Seems I must have missed something. From my reading of that paper it would seem that it is only the case in a closed loop, discrete system when the (mythical) perfect observer and error channel exist to generate said error correction.
With a continuous source Shannon noted: " ... Since, ordinarily, channels have a certain amount of noise, and therefore a finite capacity, exact transmission is impossible. "
From the subject line I would expect to be dealing with continuous source "ordinary" channels.
Ron Capik
Reply to
Ron Capik
That describes the theoretical "equivalent" implementation that Shannon used to illustrate the point.
For a practical example, consider typical implementations of equalizers to counter amplitude distortion. By measuring the characteristics of the channel, a one-time adjustment can be made that corrects amplitude distortion. The equalizer essentially introduces an equal and opposite error to the known distortion introduced by other parts of the channel, with the result that amplitude distortion is removed from the equation (to the degree that the equalizer can actually match the distortion).
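A toy model of that equalizer idea (the numbers and names are invented for illustration, not measured from any real channel) might look like this: the channel's per-band gain is measured once, and the equalizer simply applies the reciprocal gains:

```python
# Measured once: the channel's amplitude distortion as a gain per test frequency (Hz).
channel_gain = {300: 1.00, 1000: 0.80, 2000: 0.60, 3400: 0.40}

# The equalizer is set to the equal-and-opposite (reciprocal) gains.
equalizer_gain = {f: 1.0 / g for f, g in channel_gain.items()}

def apply_gains(levels, gains):
    """Scale each per-frequency level by the corresponding gain."""
    return {f: v * gains[f] for f, v in levels.items()}

flat_tones = {f: 1.0 for f in channel_gain}        # flat test signal
received = apply_gains(flat_tones, channel_gain)   # distorted by the channel
equalized = apply_gains(received, equalizer_gain)  # distortion cancelled
print(equalized)  # flat again, to the accuracy of the one-time measurement
```

The correction is only as good as the original measurement of the channel, which is exactly the qualification in the paragraph above.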
(I'm not sure what you mean by a "continuous source" channel. I more or less ignored your odd use of "discrete system" above, but it suffers the same problem of being ambiguous in this context. The two words should relate to analog vs. digital, but I don't think that's what you meant.)
Keep in mind that I merely said it "could" be done. I did *not* say it was practical. Of course in many cases that is exactly what is commonly done (e.g., with amplitude distortion as described above), but in others it just is not practical for any number of reasons, one of which would be when enormous bandwidth is required. For example, it would hardly make sense to reduce quantization distortion with that method!
Regardless, the point is that distortion is a known change which can always be predicted from the characteristics of the channel when a known signal is applied to the input. The difference between distortion and noise is that noise is external to the definition of the channel, and cannot be calculated before the fact. Hence there is no "known error signal" with noise, but there is with distortion.
Reply to
Floyd L. Davidson
"Laurence Payne" wrote in message news: snipped-for-privacy@4ax.com
Oh come on, we're very selective with the fights we pick. Our targets are always very weak.
Someone already did a big number on downtown Manhattan, but it was not sufficient to get a new electrical code written.
Reply to
Arny Krueger
You don't ("you" being the generic "Usenet user"). "He" on the other hand does. On many occasions he's quoted back several pages and added one or two lines to the bottom... the word "trim" doesn't exist in his vocabulary. One wonders if he uses AOL.
Still, it could be worse. At least it's not top-posted HTML.
Reply to
Glenn Richards
The first parts of Shannon's paper deal with the mathematics of discrete systems; the later parts deal with what he calls "continuous source" channels.
You said " *always* " and "absolutely always."
I'll accept that as a hand waving limit of sorts.
I'd contend that your assertion may be true within the limits of channel characterization for a noise free channel. I'd like to see the math that extends this to a system with noise; math that proves the distortion can "absolutely always" be removed.
Ron Capik
Reply to
Ron Capik
The fact that it can be done does not make it reasonable to implement.
It means simply that the cost of implementation might well be unreasonable.
Shannon discussed it in terms of a discrete channel *with* noise:
We now consider the case where the signal is perturbed by noise during transmission or at one or the other of the terminals. This means that the received signal is not necessarily the same as that sent out by the transmitter. Two cases may be distinguished. If a particular transmitted signal always produces the same received signal, i.e., the received signal is a definite function of the transmitted signal, then the effect may be called distortion. If this function has an inverse -- no two transmitted signals producing the same received signal -- distortion may be corrected, at least in principle, by merely performing the inverse functional operation on the received signal.
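Shannon's "inverse functional operation" is easy to demonstrate when the channel's transfer function is known and one-to-one. A minimal sketch (the tanh "channel" is my invention, standing in for any deterministic, invertible distortion):

```python
import math

def channel(x):
    """A deterministic, invertible distortion: soft amplitude compression."""
    return math.tanh(x)

def correct(y):
    """The inverse functional operation on the received signal."""
    return math.atanh(y)

for sent in (-1.5, -0.2, 0.0, 0.7, 2.0):
    received = channel(sent)       # a definite function of what was sent
    recovered = correct(received)  # distortion undone, in principle exactly
    print(f"sent {sent:+.3f}  received {received:+.3f}  recovered {recovered:+.3f}")
```

In floating point the round trip is exact to within rounding error, which illustrates the "at least in principle" in Shannon's wording.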
Well, certainly *I* am not qualified to take the math to a level that Claude Shannon specifically avoided, saying that it was too complex and would be a distraction anyway.
"We will not attempt, in the continuous case, to obtain our results with the greatest generality, or with the extreme rigor of pure mathematics, since this would involve a great deal of abstract measure theory and would obscure the main thread of the analysis." Shannon, "A Mathematical Theory of Communications" Part III: Mathematical Preliminaries, 2nd paragraph, P32
However, the case for an analog channel is not really different, *in principle*! It merely requires that you have an error channel of either zero noise or infinite bandwidth (and therefore infinite capacity). In practical situations, as Shannon points out, it is impossible to actually have 100% recovery at the output of an analog channel, because channels have noise and therefore a finite capacity.
And of course you quoted that passage previously, but did not note Shannon's comment on it, which is in the paragraph following what you quoted:
"This, however, evades the real issue. Practically, we are not interested in exact transmission when we have a continuous source, but only in transmission to within a certain tolerance. The question is, can we assign a definite rate to a continuous source when we require only a certain fidelity of recovery, measured in a suitable way. Of course, as the fidelity requirements are increased the rate will increase. It will be shown that we can, in very general cases, define such a rate, having the property that it is possible, by properly encoding the information, to transmit it over a channel whose capacity is equal to the rate in question, and satisfy t he fidelity requirements. A channel of smaller capacity is insufficient.
The exact same is true for an error channel which supplies the required error correction signal to cancel distortion from the channel discussed above. To the degree that one desires a "noise free" output it is equally possible to generate a "distortion free" output.
Again, it might not be reasonable to construct such a system...
Reply to
Floyd L. Davidson
On Sat, 29 Apr 2006 13:26:17 GMT, Ron Capik Gave us:
If it can be absolutely identified, it can presumably be absolutely removed or at least attenuated.
Reply to
Roy L. Fuchs
In article , Floyd L. Davidson writes
Yes, but quite some time ago now. FWIW we rarely have long runs of overhead line any more that carry baseband audio. For voiceband circuits these days it's digital end to end, with an A/D and D/A converter at each end.
And for phones it's going much the same way, well, over here at least. BT have the 21CN networks, which are data circuits; you run data or audio or whatever you like over them.
I asked a couple of cable jointers who were working beside the road the other day about that one, and it seems that it's the exception rather than the rule these days. There is some cable which has a foil screen around it, but as for woven braids, it seems they aren't used any more.
Well, the ones ntl use here, according to a friend of mine who works with their plant day in and day out, say otherwise. Seems only some of the cable they use has a foil screen, but then again they use fibre and co-ax for distances of any length; seems digital rules ;)
No it's not; you have to define what you're using it for and in what application.
Yes, except that if we're talking, as we were, about currents circulating in the "screen" of a multicore cable, then there is going to be quite a bit of difference in practice between a heavily woven copper braid and a light foil wrap where the connection to it is by a fairly thin drain wire...
Yes, we sometimes do, but very rarely these days; it's getting to be a very digital world over here. Analogue circuits are quite rare nowadays, and BT have been known to have to get guys out of retirement to work on the few remaining ones! If you wanted, say, a speech-band 300-3500 Hz point-to-point circuit these days it'd be digital end to end, and if you required a music-grade circuit that would definitely be digital; copper would only be for the patch leads to connect the gear.
Even some recording and sound reinforcement systems use digital leads from the stage area to the mixer now.
Well, they don't define what you are doing with that. Consider, say, 10 metres of Andrews LDF 4-50 cable connected to a transmitter with the correct plug: what are they connecting the other end to? Nothing, or a load partially connected?
Or do they mean the connection to the shield, referred to the point where it would normally be connected, is greater than one tenth of lambda? If that's what they meant, then they didn't describe it very well.
It seems that they were thinking of, say, a braided cable like perhaps RG214 or similar, where you "could" take that out as a pigtail perhaps...
I think it's relevant to the subject, but YMMV, as they say.
I'll have a look at that again when I get a moment and try some experiments here too...
Well, how far do you want to go with that? ;-)
What do you do over there? Are you involved in a telco?
The above was only to demonstrate what I meant by balanced working..
As above just for demo..
Well, I have tried that and it doesn't hum, at least not that I can hear! And our mains is quite unclean!
Humm... What do you use out there in deepest Alaska, batteries? ;-)
Yes, it is poor circuit design, but people do it all the time!
Reply to
tony sayer
In article , Richard Crowley writes
Do they have engineers any more? The accountants that run the industry say they don't need 'em!
That above example was for a small transmitter in a remote location, fed by a long overhead copper pair (well, two of them for stereo), which goes into line transformers and equalisers, and it didn't have any discernible hum on it. However that's about to change: a digital microwave link is to be installed as soon as possible. Copper is on its way out, it seems!
Reply to
tony sayer
In article , Floyd L. Davidson writes
They seem to do things differently over there, Floyd. My friend who works for a telco here reckons that if they get 80 working pairs out of a newly installed 100-pair cable they're doing well!
All due to employing subbies who sub the work out, and then sub it out some more ;-(
He said they don't measure things like signal-to-noise ratios and such any more, as they don't need to: it's all going digital anyway, and digital is perceived as "perfect", so no need!
Reply to
tony sayer
Most residential POTS service in the UK is fully digital from end to end?
That certainly is not true in the US, and I've never heard anyone in the UK say that it was there before either.
I'm not familiar with the terminology. However there are of course such circuits here too (ISDN, for example), but by far the majority of POTS service is delivered as an analog line, after being trunked to a remote unit with digital services.
However, none of that is relevant! Power line influence is, if anything, *more* of a problem for digital services than it is for old-fashioned POTS via an analog line.
I don't have a great deal of confidence in someone who is getting their information from "cable jointers" alongside the road.
Lets be blunt: you don't know what you are talking about.
"Woven braid" has *never* been used for telephone cable. And I'll repeat it just one more time: multipair cable for long runs is virtually *all* wrapped with a shield, and additionally has at least one single strand of bare wire running along with the shield to provide greater conductivity.
I guess I need to tell you that I am *not* guessing.
If you can't cite a valid source... please don't exaggerate what you do know.
But that doesn't change the way the shielding functions. All it does is change the effectiveness of that functionality, and clearly copper braid is much more expensive... to a degree that the difference is not worth the cost.
I take your lack of a responsive answer as an affirmative one.
I'm finding that to be a little difficult to believe, given the other statements you've made.
They detailed it precisely enough. The outer conductor is not connected. It makes virtually *no* difference what you do with the inner conductor. :-) The point is that depending on the frequency and the length (not on what it is connected to) it will (or not) act as a very good antenna.
They mean the length of the cable is longer than 1/10 of a wavelength, and that there is no connection to the shield, but there is (to virtually anything you'd like to connect, including a box of "nothing") to the center conductor. Under some circumstances, which depend on the length and the frequency, it will act as an antenna.
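For the 10-metre cable in the thread's subject line, the 1/10-wavelength rule is easy to put numbers on. A back-of-the-envelope sketch (free-space wavelength assumed; the velocity factor of real cable would lower the figure somewhat):

```python
C = 299_792_458  # speed of light in free space, m/s

def antenna_rule_frequency_hz(cable_length_m):
    """Frequency above which the cable exceeds 1/10 of a wavelength."""
    # length > wavelength/10  <=>  f > c / (10 * length)
    return C / (10 * cable_length_m)

f = antenna_rule_frequency_hz(10.0)
print(f"A 10 m run exceeds lambda/10 above about {f / 1e6:.1f} MHz")
```

So at audio frequencies a 10 m lead is electrically very short; it only starts to behave as an antenna for RF in the region of a few MHz and up.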
That would be one example.
Please review this portion of what I wrote in my last message:
But chocolate chip cookies are more relevant.
The intent is to go as far as is practical, in terms of cost.
Just over four decades in telecommunications.
Proper technology seems to work the best.
Reply to
Floyd L. Davidson
