I'm having an argument with a co-worker. I accept I could be totally wrong.
The discussion, believe it or not, is about the nature of digital vs analog signals.
My argument is that fundamentally "digital" in the context of "digital vs analog" is a reference to how information is represented in the signal. Aside from that, a digital signal is fundamentally no different than any analog signal at the "wave level." All the fundamentals are still there: phase lock, frequency lock, attenuation, impedance, etc. Effectively, at this level the term "digital" is no longer relevant and we are left dealing with traditional analog issues.
His argument is that a digital signal is fundamentally very different than analog, and that a phase-lock on a digital signal is a totally different beast than on an analog signal.
This conversation started because I argued that resetting a T1 CSU after a power hit or rain storm (should it begin taking errors) could be a valid fix if the errors cease, because a digital signal is fundamentally no different than an analog signal when it comes to signal propagation, synchronization, etc.
He argues that a T1 only breaks if a serviceable/replaceable component is malfunctioning and must be replaced, and that digital signals are not subject to the same propagation issues as an analog signal. Resetting the CSU only delays the detection and subsequent repair of the faulty component; the environmental factor only served to expose the problem.
I acknowledge you can regenerate a digitally encoded signal easily as opposed to an analog signal, but again this is only due to the difference in how the information is represented.
It's a stupid argument. Nonetheless, I'm appealing to the masses for support.
Surely at the level of Maxwell's equations, there is no distinction in theory between analog and digital.
At the other extreme, say for a computer programmer dealing strictly with binary circuits and operations, analog representations are not likely to be needed. But for a digital circuit designer, the analog nature of digital signals can be extremely important (i.e., Fourier analysis).
So at the level of fundamental theory, I would say no difference. At the level of application, I would say that appropriate (useful) modeling and analytical tools for analog and digital signals would often be different.
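The "analog nature of digital signals" point above can be made concrete with a quick Fourier sketch: a square wave, the idealized digital waveform, is in analog terms an infinite sum of sinusoids, (4/pi)*sin(2*pi*(2k+1)*f*t)/(2k+1) over the odd harmonics. A minimal illustration (all parameters are arbitrary, chosen just for this sketch):

```python
import math

# Sample one period of an ideal "digital" square wave.
N = 1000                      # samples per period (arbitrary)
f = 1.0                       # fundamental frequency in cycles/period
square = [math.copysign(1.0, math.sin(2 * math.pi * f * n / N))
          for n in range(N)]

# Rebuild it with an analog tool: a partial Fourier sum over the
# first 50 odd harmonics of the square wave's series.
partial = [sum((4 / math.pi) * math.sin(2 * math.pi * (2 * k + 1) * f * n / N)
               / (2 * k + 1) for k in range(50))
           for n in range(N)]

# Away from the edges (Gibbs ripple) the sum tracks the square wave.
rms_error = math.sqrt(sum((s - p) ** 2 for s, p in zip(square, partial)) / N)
```

The point of the sketch: the "digital" waveform is entirely describable, and approximable, with ordinary analog spectral machinery, which is exactly why a digital circuit designer still cares about bandwidth and harmonics.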
Absolutely correct. (I'll explicitly define that for you at the end of this article, with an impeccable cite.)
True, though that might mean different things to different people, so it's hard to be sure exactly what it is supposed to be saying.
The modulation process is analog, even if the modulating signal is digital. Phase is an analog characteristic of a signal, regardless of whether the signal carries digital information or analog information. You can of course modulate the phase of a signal with digital information. And one can have an analog circuit that provides a phase lock to a signal that carries digital information.
Note that virtually every T1 receiver in existence has a circuit to recover the clock rate of the incoming digital signal. That circuit is a phase locked loop. It is an *analog* circuit, in all respects. (Not that a digital circuit could not be designed to provide the clock rate, but it would be several times more expensive.)
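A toy software model of that clock-recovery idea may help; real T1 clock recovery is analog hardware, and the loop gain, oversampling factor, and frequency offset below are assumptions for illustration. Only the 1.544 MHz T1 line rate is real:

```python
# First-order phase-locked loop sketch: each step, nudge the local
# oscillator's phase toward the phase of the incoming reference.
# Phases are in fractional cycles (0.0 .. 1.0).
ref_freq = 1.544e6               # T1 line rate, Hz
dt = 1.0 / (ref_freq * 16)       # simulation step, 16x oversampled (assumed)
kp = 0.1                         # proportional loop gain (assumed)

ref_phase = 0.0
pll_phase = 0.5                  # start half a cycle off
pll_freq = ref_freq * 1.0001     # small initial frequency error (assumed)

for _ in range(20000):
    ref_phase = (ref_phase + ref_freq * dt) % 1.0
    pll_phase = (pll_phase + pll_freq * dt) % 1.0
    # Phase detector: error wrapped into [-0.5, 0.5) cycles.
    err = ((ref_phase - pll_phase + 0.5) % 1.0) - 0.5
    # Loop filter / correction: pull the local phase toward the reference.
    pll_phase = (pll_phase + kp * err) % 1.0

# After settling, the residual phase error is tiny despite the
# initial half-cycle offset and the frequency error.
final_err = abs(((ref_phase - pll_phase + 0.5) % 1.0) - 0.5)
```

Note that nothing in the loop cares whether the tracked signal carries digital or analog information; it is locking onto phase, an analog quantity.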
Ideally, once an error condition is gone, the hardware should be able to synchronize itself to the signal and resume normal operation. That doesn't always happen, and when it doesn't, it is always an equipment fault.
The difference between your colleague's concept and reality is that the malfunction can be a *design* malfunction, rather than a component malfunction. The cause might be hardware (insufficient surge protection, for example) or software (some specific signal condition sends the software into an unbreakable loop).
That is exactly true. But of course you do not want to underemphasize the significance of that statement. Noise immunity is the principal advantage of digital transmission systems. A digital system will operate without error over a medium (e.g., fiber optics) that has far too much noise to use with analog techniques. And because the signal can be precisely regenerated, noise is not cumulative. Hence multiple sections can be added in tandem to provide extremely long physical circuit lengths, all with zero errors. Analog systems cannot do that. Analog systems, on the other hand, can provide a higher output signal to noise ratio while using less bandwidth.
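A back-of-the-envelope simulation of that cumulative-noise argument; the noise level, hop count, and bit count are hypothetical and illustrate only the contrast between re-amplifying and regenerating, not any real T1 span:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# Send the same antipodal bit stream (+1/-1) through 20 noisy tandem
# sections two ways: an "analog" chain that just carries signal + noise
# forward, and a "digital" chain that re-decides each bit at every
# repeater, so each hop starts from a clean signal.
bits = [random.randint(0, 1) for _ in range(2000)]
levels = [1.0 if b else -1.0 for b in bits]
hops, sigma = 20, 0.3            # sections and per-hop noise std dev (assumed)

analog = list(levels)
digital = list(levels)
for _ in range(hops):
    analog = [v + random.gauss(0, sigma) for v in analog]          # noise piles up
    digital = [1.0 if v + random.gauss(0, sigma) > 0 else -1.0     # regenerated
               for v in digital]

def count_errors(received, sent):
    """Bits whose recovered sign disagrees with what was transmitted."""
    return sum((r > 0) != (s > 0) for r, s in zip(received, sent))

analog_errors = count_errors(analog, levels)
digital_errors = count_errors(digital, levels)
```

With these numbers the analog chain's noise grows with the square root of the hop count while the digital chain's per-hop error probability stays fixed and small, so the digital chain ends up with far fewer bit errors.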
It isn't really such a stupid argument. It comes up quite often as people migrate their concepts from an analog world to a digital one. Some 25 years ago it was fairly difficult to find telco people who had enough experience with digital facilities to know much about it. Today of course there are people retiring who have *always* worked with digital! And they can't quite get a grip on how analog circuits work... :-)
Whatever, here are the killer definitions. These are from Telecommunications: Glossary of Telecommunication Terms, Federal Standard 1037C:
ANALOG SIGNAL 1. A signal that has a continuous nature rather than a pulsed or discrete nature. Note: Electrical or physical analogies, such as continuously varying voltages, frequencies, or phases, may be used as analog signals.
ANALOG DATA Data represented by a physical quantity that is considered to be continuously variable and has a magnitude directly proportional to the data or to a suitable function of the data.
ANALOG TRANSMISSION Transmission of a continuously varying signal as opposed to transmission of a discretely varying signal.
DIGITAL Characterized by discrete states.
(Note that I prefer a definition that also mentions a finite symbol set, which is redundant information but makes the meaning much more obvious.)
DIGITAL DATA 1. Data represented by discrete values or conditions, as opposed to analog data. 2. Discrete representations of quantized values of variables, e.g., the representation of numbers by digits, perhaps with special characters and the "space" character.
DIGITAL PHASE MODULATION Modulation in which the instantaneous phase of the modulated wave is shifted between a set of predetermined discrete values in accordance with the significant conditions of the modulating signal.
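That definition describes, for example, QPSK. A tiny sketch, with an assumed Gray-coded mapping of bit pairs onto the four predetermined phases (the mapping table is mine, not from the standard):

```python
import math

# Four predetermined discrete phase values, one per two-bit symbol.
# The instantaneous phase of the carrier is still an analog quantity;
# only its allowed values are discrete.
PHASES = {(0, 0): math.pi / 4,
          (0, 1): 3 * math.pi / 4,
          (1, 1): 5 * math.pi / 4,
          (1, 0): 7 * math.pi / 4}

def qpsk_symbol(bits, fc, t):
    """Carrier sample at time t, phase selected by the bit pair."""
    return math.cos(2 * math.pi * fc * t + PHASES[bits])
```

So "digital phase modulation" is an analog carrier whose phase is shifted among a finite set of values; the modulation process itself is as analog as it gets.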
SIGNAL 1. Detectable transmitted energy that can be used to carry information. 2. A time-dependent variation of a characteristic of a physical phenomenon, used to convey information.
MODULATION The process, or result of the process, of varying a characteristic of a carrier, in accordance with an information-bearing signal.