That was one of my own main reasons for learning to type. I'm a lefty
and the idiot teachers gave me so much shit about smearing my work
that I ended up pressing harder and smearing it worse from the stress
and hate I was defending myself from. 'Twas a beeyotch.
Computers were heaven-sent. No more retyping entire pages because
of a simple typo on a résumé.
I still use the ^H for fun.
emulator software rather than the real DEC hardware, tho. It has been
a long while.
IIRC the crucial ANSI escape code sequence returned the screen
character at a designated position to the sender, which allowed them
to write a space to that position and resend the character one row
down, making the text appear to droop. I believe it was meant to be
used to save and restore the previous screen after sending a warning
message in a box.
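For the curious, the droop trick needs nothing more than ANSI cursor addressing once the character at a position is known; the readback step varied by terminal and isn't shown here. A minimal sketch (all names are illustrative):

```python
# Sketch of the "drooping text" trick using ANSI cursor addressing.
# ESC[row;colH moves the cursor; rows and columns are 1-based.
# The character-readback step was terminal-specific and is assumed
# to have already told us what character sits at (row, col).

ESC = "\x1b"

def move_to(row: int, col: int) -> str:
    """Return the escape sequence that positions the cursor."""
    return f"{ESC}[{row};{col}H"

def droop_one(row: int, col: int, ch: str) -> str:
    """Erase ch at (row, col) and redraw it one row lower."""
    return move_to(row, col) + " " + move_to(row + 1, col) + ch

def droop_column(col: int, chars_by_row: dict) -> str:
    """Drip a whole column one cell at a time, bottom-up so the
    characters don't overwrite each other on the way down."""
    out = []
    for row in sorted(chars_by_row, reverse=True):
        out.append(droop_one(row, col, chars_by_row[row]))
    return "".join(out)
```

Sending the resulting string to a VT100-compatible terminal produces the slow slide toward the bottom of the screen.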
The programmers pulled those stunts only on each other, usually when
they needed to compile and the recipient was playing a game that
bogged down the VAX. The player could change the name of the game
process but not hide its size from other users. I happened to be
watching when one hit.
I've used the same method to write a Matrix Waterfall screen saver and
a graphic display of a shift register's contents in an experimental
O.K. A program, then, not just a simple escape sequence. The
VT100 made it possible, by having that character return, but not a
single code to drop everything in a column down one line at a time. :-)
A great use.
Someone at work a few decades ago went to a user's group meeting
for the CDC Cyber-6600, and came back with a tape. He loaded it and set
it up for delayed execution.
You need to know that the 6600's console consisted of two large
round CRTs to either side above the keyboard. Typically, one CRT
would display the status of the whole system, while the other was used
to check on and interact with a specific job. The phosphor on both was
Anyway -- he hung around when the night operator came on, and
waited. Then he heard a scream.
What happened was:
1) Both screens blanked.
2) A pair of green eyes slowly rose from the bottom of the CRTs.
3) They looked at the operator.
4) They looked down at the keyboard.
5) They looked at where the wall clock would likely be.
6) They looked back at the keyboard.
7) They looked back at the operator.
8) They slowly sank off the bottom of the CRTs.
9) Normal operation returned. :-)
Sort of like the other -- but taking advantage of the particular
design of the CDC 6600's console. :-)
I guess that really would creep out someone too serious to consider
writing pranks themselves, like those who believe in faked UFOs (mine
may have inspired the Exeter, NH affairs). We had one very good female
programmer whom the manager carefully shielded from human contact,
especially with us rather rowdy hardware types.
We were too busy for anything that complicated. One of the engineers
wrote a screen saver that was normally blank and occasionally flashed
in large letters:
THE END IS NEAR!
It seemed to appear as a personal warning to anyone walking by.
I was writing test code directly on the machine we were developing,
which had extremely sensitive microvolt and picoamp meters. If I
removed the shielding their noise level rose when anyone approached,
so I made the machine come alive and greet the visitor.
The Ph.D. project manager had been an instrument designer at Keithley,
where he designed a meter that could detect 60 electrons per second.
These were custom circuits on ~2" x 4" cards that fit in the test head
over the wafer. The current meter resolved to 100 femtoamps over a
common mode input voltage range of 0 to 100V. The only suitable coax
and reed relay insulation with low enough dielectric absorption was
special Teflon foam tape from W. L. Gore.
That company built Analog Devices' production-line parametric testers,
the machines that confirm each part meets specs, and had an
arrangement to get enough of their hand-selected highest-performing op
amps to build more sensitive and accurate analog circuits than anyone
else. Most of them went back in testers for AD.
When Schlumberger bought the company, the competitors complained to the
FTC, claiming it was "unfair" that the biggest company in the industry
now owned the technology leader. The distractions of the lawsuit
O.K. My first thought was at least partially right. :-) (A
small part of the industry -- down to counting electrons as they wander
I remember that we had one of Keithley's picoammeters in a
Faraday cage with the controls remoted to the outside with Teflon
shafts. I forget what it was measuring, but they were sure careful
about stray fields.
Eventually, that demand should saturate, and more of them should make
it out to the rest of the industry. :-)
The hand-selected ones had different part numbers that weren't in the
catalog. There's always a bell curve distribution. We got the upper
sigma for input bias and offset current; Radio Shack was rumored to
get the lowest ones. Other parameters didn't matter to us, so maybe some
audio synthesizer company got the ones with the highest frequency
response. Our machines didn't check for that, only the
guaranteed-by-test data sheet parameters, though I've measured Bode
plots on the bench. As long as the circuit settled to the required
accuracy within one millisecond we were happy.
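The bin split implied by that bell curve is easy to put numbers on. Assuming a normal distribution, the fraction of parts past a given sigma cutoff is:

```python
import math

def tail_fraction(k: float) -> float:
    """Fraction of a normal distribution lying above +k sigma,
    i.e. the share of parts landing in the hand-selected top bin."""
    return 0.5 * math.erfc(k / math.sqrt(2))

# Roughly 2.3% of parts land beyond +2 sigma and only ~0.13% beyond
# +3 sigma -- which is why the top-bin op amps were scarce enough to
# ration, with most going back into AD's own testers.
```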
That was for analog measurements. The digital memory chip testers sent
out address and data at 50 MHz, state-of-the-art in the early '80s when
the test vectors had to be generated in the main rack and sent out
long cables to the test head.
I bought a batch of Chinese Schottky solar panel isolation diodes that
all are slightly below the reverse voltage leakage spec at room
temperature, and worse above it, as though they were the rejects from
a production tester.
In class they treat op-amps as ideal devices. I was exposed to the
nitty-gritty of all the ways they aren't, and how to measure the
Ouch! Perhaps a buffer near the test head which could be
pre-loaded from the computer, and then triggered to spit it out at the
RAM on command. If the tests were repetitive enough, that would
minimize the total bandwidth sent to the test head -- and local checking
for errors and only spit the errors back to the computer to minimize the
bandwidth the other way.
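The buffer-in-the-head idea sketched above, reduced to code (all names here are hypothetical, just to show the shape of it):

```python
# Test vectors are downloaded to the head once, replayed at speed next
# to the RAM under test, and only the failures travel back up the long
# cable -- minimizing bandwidth in both directions.

def run_local(vectors, read_back):
    """vectors: (address, expected) pairs preloaded in the test head.
    read_back: function simulating a read from the device under test.
    Returns only the mismatches."""
    errors = []
    for addr, expected in vectors:
        got = read_back(addr)
        if got != expected:
            errors.append((addr, expected, got))
    return errors

# Example: a "DUT" with a bad cell at address 2.
memory = {0: 0xAA, 1: 0x55, 2: 0x00, 3: 0xFF}
vectors = [(0, 0xAA), (1, 0x55), (2, 0x01), (3, 0xFF)]
print(run_local(vectors, memory.__getitem__))  # -> [(2, 1, 0)]
```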
I remember long ago when I worked for Transitron -- one of the
products was what they called a "ref-amp" -- a potted transistor with a
Zener and forward diode (called a Stabistor, and selected for
temperature behavior) in series with the emitter. The transistor B-E
forward drop, that of the diode and of the Zener were matched by a
minicomputer measuring large batches of each device at three
temperatures -- -50C, +50C, and +150C, and punching out a deck of cards
-- one per device. These went to the mainframe to be sorted to get the
minimum temperature sensitivity. The devices were in multi-device
carriers submerged in a bath of silicone oil -- a different one for each
I remember when we got a few of them back from a customer, and
I had to test them through the whole temperature range. At the -50, the
other silicone oils set up to a gel, and at the +150, the low-temp one
boiled, so we mixed about 50-50 of high and low temp oils. Put the
DUT in the bath, toss in a handful of dry ice chunks, and wait until it
got below the -50, and then turn on the hotplate under it and take a
reading every 10 degrees C on the way up. Then turn off the hotplate,
wait for it to cool down, swap in the next device, and repeat. (Yes,
the devices did go out of spec between the three points at which they
were tested before assembly. :-)
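The mainframe's card-deck sort -- pick the devices whose drops move least over temperature -- could be sketched like this (data layout assumed, one reading per test temperature):

```python
def drift(readings):
    """Spread of a device's total drop across the test temperatures.
    readings: {temp_C: total_drop_V}"""
    vals = readings.values()
    return max(vals) - min(vals)

def sort_for_stability(devices):
    """devices: {serial: {temp_C: drop_V}}. Returns serials ordered
    most-stable-first -- the job the punched-card deck went to the
    mainframe for."""
    return sorted(devices, key=lambda s: drift(devices[s]))

batch = {
    "A01": {-50: 1.302, 50: 1.298, 150: 1.305},
    "A02": {-50: 1.290, 50: 1.301, 150: 1.315},
    "A03": {-50: 1.300, 50: 1.300, 150: 1.301},
}
print(sort_for_stability(batch))  # -> ['A03', 'A01', 'A02']
```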
Anyway -- someone else wandered into the room, and tossed in
some dry ice while the bath was still quite hot. It boiled like mad
(the CO2 flashing into gas pockets in the liquid), with vapor spilling over the
edge of the really large beaker, spreading out over the bench, and
curling in to the still hot hotplate coils.
It was interesting seeing the silicone oil vapor burn. We had a CO2
extinguisher handy, and put it out, and were able to continue with the
tests. One thing which I had not expected was that there was fine white
sand over everything, from the burning silicone oil. :-)
Sounds adequate for the purpose, at least. :-)
Yep. In class, the input impedance is infinite, the output
impedance is zero, the open-loop gain is infinite, balance between
inputs is perfect, and no capacitance anywhere to delay signal changes.
For most applications, you can get away with treating them as being like
that -- but of course they are not.
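One classroom-vs-reality gap is easy to quantify: with finite open-loop gain A and feedback fraction beta, a non-inverting stage settles at G = A/(1 + A*beta) rather than the ideal 1/beta. A quick check:

```python
def closed_loop_gain(a_ol: float, ideal_gain: float) -> float:
    """Non-inverting amp with finite open-loop gain:
    G = A / (1 + A*beta), where beta = 1/ideal_gain."""
    beta = 1.0 / ideal_gain
    return a_ol / (1.0 + a_ol * beta)

# A "gain of 100" stage built around a 10^5 open-loop amp falls
# about 0.1% short of the textbook value:
print(round(closed_loop_gain(1e5, 100.0), 3))  # -> 99.9
```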
My first exposure to op-amps was the plug-in modules with two
tubes which were used as part of the Beta testers on the transistor
production line. I had no idea how they worked at the time, so the
schematics rather puzzled me. :-) Trying to remember who made them, and
I *think* that it was Philbrick -- who also made ones with discrete
transistors in little metal bricks.
I first learned how to use them from the Burr-Brown application
That was the era of the 2102 1K x 1 and 2141 4K x 1 static RAM, and
welding-sized power supplies for big (64K) banks of them. They had
samples of 6116 2K x 8 and 6264 8K x 8 CMOS static memory but IIRC
they weren't fast enough, so I was given them for my wirewrap 8080
computer. At first it had 256 bytes of memory.
DRAM was still being developed. A few years later I designed an ASIC
controller for it.
When they closed they had been trying to design ceramic hybrid pin
drivers to fit in the test head with some pattern memory on them.
The test head contains the electronics that have to be close to the
probe card that contacts the IC bonding pads to test it while it's
still on the wafer. The head can't be too large or heavy to position
within a thousandth of an inch. It's the smaller box on the positioner
on the purple machine.
I've never been in a clean room to see these machines in use. The
wafer probing I did at Unitrode was all individual setups with
standard lab test equipment, with no dust control since we were
testing only prototypes. Wafers are brittle, but otherwise exposed ICs
are surprisingly resistant to damage.
Nope -- No Dell. (I don't use Windows, FWIW.) But there was an
interesting memory test published in the manual for the Motorola MC6809
CPU. It was a position independent program -- except for the checksum
on the program itself. Once it was running, you could ask it to relocate
itself and run in the memory just checked, so it could check the memory
it just left. :-)
As it turned out -- running it in a chunk of memory was a better
test of speed problems than running it *on* the memory in question.
There was one board which had problems with rapid sequential accesses,
so running in that memory would crash the program, but running on (that
is, checking) that memory would never find a problem. It could not give
detailed information on the particular bits failing, but it was enough to
motivate me to retire that board and wire-wrap a board using 6108s
And -- with today's memory size -- exhaustive tests can take
forever -- walking bit or other tests. :-)
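For anyone who hasn't seen one, here's a walking-ones test in miniature (simulated memory; the 6809's relocation trick isn't shown). The nested loop is exactly why such tests scale badly with memory size:

```python
def walking_ones(mem, width=8):
    """Walk a single 1 through every bit of every cell and report
    failures. mem is anything indexable, e.g. a bytearray.
    O(cells * width) writes -- slow on large memories."""
    failures = []
    for addr in range(len(mem)):
        for bit in range(width):
            pattern = 1 << bit
            mem[addr] = pattern
            if mem[addr] != pattern:
                failures.append((addr, bit))
        mem[addr] = 0  # leave the cell cleared when done
    return failures

print(walking_ones(bytearray(16)))  # a good "memory" reports []
```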
There were 8 of the 2102s in my Altair 680b (kit computer based
on the Motorola 6800, not the Intel 8080 which the first Altair had).
While not too fast, they were still a lot faster than the machine
needed, simply because the clock was pulled down to 500 kHz instead of
1 MHz (the max for the CPU chip): the monitor was in 1702a EPROMs, and
nobody bothered implementing a stretchable clock, so the whole system
had to be slowed to the 1702a's speed. Otherwise, it
could have been 1 MHz -- or later, 2 MHz for the 6800B, which I
wire-wrapped into a replacement CPU card for the SWTP 6800 (moving the
baud-rate clock off the CPU board, because the original of that system
had something like a 768 kHz CPU clock to divide down to match baud
rates :-). I was also using the 6116s in that system at its end of
life. A lot more reliable than the dynamic RAM chips which were used on
some boards. So -- the replacement CPU board was wire-wrap too.
Four of the 6264 chips would have saturated the address space of
the 6800. :-)
I had fun diagnosing a problem with some Multibus RAM cards in
my first unix box -- based on the Motorola 68000. Turned out to be a
problem in a delay chip used to make the different clock pulses for the
Sure -- keep the waveshape clean for pulses fed to the chips,
and minimize delay for what comes back to test.
A pity that doesn't give closer views.
Brittle, yes -- and static sensitive -- especially memory chips.
(For that matter, memory chips are also sensitive to illumination
levels.) I remember hobby articles using RAM chips without covers as
image sensors. :-)
I scratch-built my wirewrapped computer to learn how to design and
program them instead of to use it, though the editor / assembler I
wrote for it worked well enough to compose and print my resume. The
I/O circuits were simplified versions of those in the IBM PC and the
RS Color Computer.
It had a minicomputer switch panel with enough circuitry to write and
display memory and vary the CPU clock, down to <1 cycle per second for
debugging. At first I had to toggle in a bootstrap loader that read in
the monitor program (a mini OS) from a Teletype tape.
That got old very quickly so I added NiCads to keep the 6116 alive. As
more slightly used sample RAM became available I installed more 6116s
and rewired the lowest socket to take a 2816 EEPROM. We had been
experimenting with adaptive algorithms to program them rapidly in
Eventually my growing code collection crashed into the 8080's lack of
relative jumps and I stopped working on it. By then better CPUs were
coming along too rapidly to know which to upgrade to. I would have bet
on the 6809, 68020 or 8086 instead of the 8088 that soon dominated in
the IBM PC. The company chose the DEC LSI-11 and then the TI TMS9900.
At later jobs the TMS320 DSP family was THE choice for fast dedicated
systems, an early color scanner and Mitre's digital radios. The DRAM
controller I designed was for the scanner, to prioritize competing
memory requests from the A/D, DSP and the IEEE-488 controller and then
try to fit in refresh cycles. During a scan the data stream from the
A/D converter couldn't wait, but it effectively kept the memory
refreshed by hitting every row repeatedly.
" A normal read or write cycle refreshes a row of memory, but normal
memory accesses cannot be relied on to hit all the rows within the
necessary time, necessitating a separate refresh process."
Actually DRAM holds data for several seconds without a refresh. If you
have a $100,000 IC tester to play with you can write a pattern and
then read it after 1 second, then repeat for 2 seconds, etc. I saw the
first bit drop out at 2.
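The point in the quote -- ordinary accesses refresh only the rows they happen to hit -- is easy to illustrate. Treating one refresh interval as a window, the rows an access pattern misses are the ones that need dedicated refresh cycles (a toy model, not tester code):

```python
def rows_missed(accesses, n_rows):
    """accesses: rows touched during one refresh interval.
    Any row not in that set decayed past its deadline and would have
    needed a dedicated refresh cycle."""
    return sorted(set(range(n_rows)) - set(accesses))

# A sequential scan (like the A/D stream filling the frame buffer)
# hits every row, so no extra refresh is needed during the scan:
print(rows_missed(range(128), 128))   # -> []

# A loop polling only a few buffers leaves most rows stale:
print(rows_missed([0, 1, 0, 1], 8))   # -> [2, 3, 4, 5, 6, 7]
```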
The color printer & scanner company was also driven under by a
lawsuit. I barely dodged having to testify because I knew that the
thermistor circuit I used which the plaintiff claimed as their 1980
trade secret was in my 1978 car.
The CoCo had a much better choice of CPU than the IBM PC --
just not enough hardware to support the fancier disc controllers. :-)
O.K. Somewhat similar to the Altair 8800 (Intel 8080 CPU). The
Altair 680b had a monitor program in a single 1702a, so you did not have
to load a bootstrap from front panel switches -- unlike with the Data
General Nova which I used for a while at work. :-)
Later -- I designed a specialized 6800 based system at work for
an experiment with imaging and the effect of noise patterns on usability
of the images. It generated range specified pseudo-random noise
patterns (both offset and gain) and stored them in special wire-wrapped
memory cards (more 6116s). Then the image clocked a different port to
the memory to put the gain and offset on the same pixel each time
through the image. (Only project that I was actually in control of
which had co-workers doing some of the wire-wrapping for me, once I
verified the first memory card to work. :-)
Later -- that got used for an additional experiment. Image was
formed with a slow scanning mirror bouncing it -- and the display had the
offset applied to it so the image stayed stable, but the noise patterns
moved from side to side. That made a *big* difference in the usability
of the image. It was rather like driving past a picket fence and being
able to see what is beyond it, unlike when stationary. :-)
At first, the code got loaded via the emulator probe from a
Tektronix microprocessor development lab, but that was a bit awkward, so
I wire-wrapped another card to program 2716 EPROMs, since hand keying it
all in to a Prolog PROM burner was error-prone. :-) I added the burn
program to the control program I wrote for it, so subsequent updates to
the program were easier.
Yes -- the 6809 would have been a good choice before the jump to
16-bit words or larger. It has nice relative jumps and branches to
anywhere in the address space. With the 6800, if you need a long jump,
you code that and a short branch around it on the opposite sense of the
test. Really a very nice regular instruction set. Same, of course, for
the 68000 family. The 8086 and 8088 were stuck with that weird
segmentation scheme to allow code written for the 8080 to be assembled
and run. The 6809 was an upgrade of the 6800, but the 68000 was a clean
jump to a new architecture. I was working on designing and
wire-wrapping a 68000 based system when I stumbled on a 68000 based v7
unix system at a hamfest.
The real indication of the power of the 6809 was the Microware
OS-9 operating system. Position independent and re-entrant code, and a
multi-user multi-tasking OS running in 64K of address space or less.
(And OS-9 Level 2 which required memory mapping hardware, but could make
use of the full address space that the 68000, or later the 68020,
The TI 9900 was an interesting CPU -- based on their 990
minicomputer -- but rather clumsy, especially in the initial
implementation. Having 16 16-bit registers which were mapped into
memory, so you could get a fresh set of registers by simply changing a
pointer was nice -- until you realized the number of cycles that it took
for any register operations. Later, they modified it so the registers
had copies in the CPU chip itself, and a BLWP (Branch and Load Workspace
Pointer) copied all of that region of memory into the on-chip registers,
so any access to the registers was a lot quicker -- unless you changed
the contents, at which point they still needed to be written out into
The other thing was the weird I/O approach. It had an I/O space
4K long (IIRC), and you could do any output or input length up to 16
bits, but you needed a clock cycle for each bit, making it rather slow
for even floppy disk I/O -- unless you gave up on their approach and
used a memory-mapped approach so you could get all 16 bits in a single
I don't think that there is anything using the TMS9900
instruction set these days. :-)
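The CRU's cost is simple arithmetic: one clock per bit means a 16-bit transfer takes 16 clocks of shifting, where a memory-mapped port needs a single word access. A rough figure (the 3 MHz clock is assumed for illustration; instruction overhead is ignored):

```python
def cru_transfer_us(bits: int, clock_hz: float) -> float:
    """Time spent just shifting bits over the CRU: one clock per bit.
    Setup and instruction overhead are ignored."""
    return bits / clock_hz * 1e6

# At an assumed 3 MHz clock, a 16-bit CRU transfer costs ~5.3 us of
# bit-shifting alone -- painful for floppy-disk data rates, which is
# why memory-mapping the controller was the practical escape hatch.
print(round(cru_transfer_us(16, 3e6), 2))  # -> 5.33
```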
O.K. No experience with the DSP world, though I was interested
by the Motorola one, and still have the manual somewhere. :-)
A separate refresh process -- but it can be at a board level, so
when nothing is accessing that board, it can be doing its own refresh
Any idea how temperature sensitive that might be? I would
expect shorter times near the top temperatures at which it could
operate. (Faster leakage through the semiconductor insulators. :-)
Ouch! You've had them following you around. :-(
Reminds me of a patent claim and lawsuit for a particular bit of
computer design. I read about it in comp.unix.wizards (IIRC) and it
sounded familiar, so I went into where I thought that I had read about
it, and found a description in the manuals for the CDC 6600, which
predated the claimed invention date, so there was prior art.
Also -- we (a government lab) got a demand that we go through
all of our oscilloscope probes, looking for ones which might infringe a
patent that Tektronix had. (So -- we had to list everything that we
found that was not branded by Tektronix -- including a couple of examples
of a BNC, some RG-58 coax, and a resistor soldered to the center
conductor. Obviously lab-made. :-)
I would have spewed coffee on the keyboard and laughed myself silly,
but I backed off from such pranks when I found out that a fragile
neurotic could believe that they had snapped and were hallucinating.
For homework my wife programmed the Cylon red eye on the data register
display of a PDP-8.
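That Cylon eye is just a single lit bit bouncing from end to end of the data lights; a sketch of the frame sequence for a 12-bit register:

```python
def cylon_frames(width: int = 12, sweeps: int = 1):
    """Return register values with one lit bit bouncing end to end,
    like the 'Cylon eye' on a PDP-8's 12-bit data-register lights.
    The turnaround frames are skipped so the ends don't repeat."""
    frames = []
    for _ in range(sweeps):
        for pos in range(width):              # left to right
            frames.append(1 << (width - 1 - pos))
        for pos in range(width - 2, 0, -1):   # and back again
            frames.append(1 << (width - 1 - pos))
    return frames

# First few 12-bit frames, in the octal a PDP-8 person would expect:
print([oct(f) for f in cylon_frames()][:3])  # -> ['0o4000', '0o2000', '0o1000']
```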
Now -- there would never be a fragile neurotic working with
computers, would there? :-) (Other than the user who took a fortune
cookie program output personally -- just because it happened to be for a
Libra, and she was a Libra. :-)
Ah -- the PDP-8 -- so much hardware for so little capability. :-)
Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of manufacturers or vendors discussed here.