Modern car paint and rust

Thanks. I hate waste and frozen bolts.

Reply to
Larry Jaques

The Air France crash in the South Atlantic is a good example of the difficulty of predicting in advance what warning the operator should be given while their attention is on controlling the vehicle. Too much or misleading info can be worse than not enough.

"At one point the pilot briefly pushed the stick forward. Then, in a grotesque miscue unforeseen by the designers of the fly-by-wire software, the stall warning, which had been silenced, as designed, by very low indicated airspeed, came to life. The pilot, probably inferring that whatever he had just done must have been wrong, returned the stick to its climb position and kept it there for the remainder of the flight."

I read the CVR transcript in French and it supports the article's conjectures. The airliner descended approximately level in or near a Deep Stall, relatively stable in nose-high pitch but not in roll, which kept the cockpit crew fully occupied and confused about what was happening. The flight controls had less than their normal effect and the engines showed the expected full power RPMs though they weren't receiving the airflow to produce the corresponding thrust.

The pitot tubes had iced up in the storm's rising (super?)saturated air and fed the pilots and the flight control computer incorrectly low airspeed values, which initiated the problem. They probably thawed soon afterward yet continued to show similarly low, now correct, forward airspeeds, because by then the plane had gently stalled in Coffin Corner and was falling mainly downward, its indicated airspeed below the stall warning's low cutoff until the captain tried nosing down, which was the proper way to break out of the stall and regain airspeed and control.
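The miscue can be sketched as a toy gating rule. This is purely illustrative, not Airbus's actual logic; the thresholds and the function itself are invented. The point is just that a warning suppressed whenever indicated airspeed falls below a validity floor will sound when a correct nose-down input raises airspeed back above that floor:

```python
def stall_warning(indicated_airspeed_kt, angle_of_attack_deg,
                  validity_floor_kt=60, stall_aoa_deg=10):
    """Toy model of a stall warning gated by airspeed validity.

    Below validity_floor_kt the airspeed data is treated as unreliable
    and the warning is suppressed. The counterintuitive result: a
    correct nose-down input, which raises indicated airspeed, can
    re-trigger the warning. All numbers here are invented.
    """
    if indicated_airspeed_kt < validity_floor_kt:
        return False  # data deemed invalid: warning silenced
    return angle_of_attack_deg > stall_aoa_deg

# Deep stall, falling nearly straight down: warning stays silent.
assert stall_warning(40, 35) is False
# Pilot noses down, airspeed builds: warning comes alive.
assert stall_warning(80, 30) is True
```

So the one action that would have saved the aircraft is the one the system appeared to punish.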

Similarly, a NASA engineer told me the inside story of Neil Armstrong's computer "failure" during the moon landing. The computer serviced all inputs in a program loop. A warning light was kept Off by a hardware watchdog timer that the program reset on each pass unless it hung. The timeout was comfortably long in all preflight tests, but during the moon landing some added tasking stretched the loop beyond the timeout and let the warning light flicker On before the end of each pass. Armstrong interpreted that as the failure it was supposed to indicate, not just an unexpectedly high workload.
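The watchdog pattern described above can be sketched in a few lines. This is a simulation of the general mechanism, not the actual Apollo Guidance Computer code; the task timings and the timeout are invented:

```python
import time

def control_loop(tasks, timeout_s, max_passes):
    """Toy model of a watchdog-guarded program loop: the warning
    fires on any pass that fails to finish (and so fails to reset
    the watchdog) inside the timeout. Timings here are invented."""
    warnings = 0
    for _ in range(max_passes):
        start = time.monotonic()
        for task in tasks:
            task()                   # service each input in turn
        elapsed = time.monotonic() - start
        if elapsed > timeout_s:      # reset came too late:
            warnings += 1            # light flickers On this pass
    return warnings

# Nominal workload: every pass beats the timeout, light stays Off.
assert control_loop([lambda: time.sleep(0.001)] * 3,
                    timeout_s=0.05, max_passes=5) == 0
# Added tasking stretches every pass past the timeout.
assert control_loop([lambda: time.sleep(0.02)] * 4,
                    timeout_s=0.05, max_passes=5) == 5
```

The failure mode is exactly the one in the anecdote: nothing is broken, the loop simply has more work than the preflight tests ever gave it.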

I knew something of the issue from designing industrial control panels and then watching mindless UAW drones misuse them. I learned that controls had to be not only idiot-proof but vandal-proof. Although I had no design input on the aerospace electronics I prototyped I paid attention to the discussions about their possible effect on cockpit situational awareness. There was a joke circulating at the time that the automated airliner cockpit of the future would contain a man and a dog. The dog was trained to bite the man if he touched the controls. The man's only task was to feed the dog.

-jsw

Reply to
Jim Wilkins

On 02/23/2017 8:29 AM, Jim Wilkins wrote: ...

Not only aircraft; TMI-II became something more than just a turbine trip causing a reactor trip, with the sidebar of a steam relief valve not reclosing automagically, because the stuck valve caused an anomalous level reading in the physically nearby pressurizer. The reactor operators misinterpreted this and turned off the safety-system HPI (high pressure injection) pumps, fearing they were going to overfill the pressurizer and, if that were to happen, risk over-pressurizing the primary system itself. The incident progressed downhill from there until a fresh shift came on and the SRO on that shift recognized the problem and restarted HPI plus the RCPs to restore primary coolant flow and begin the recovery process.

If the original crew had done nothing and let the control and safety systems do their job instead of intervening, the incident would have consisted of no more than an unscheduled trip and a restart once the initiating fault in the transmission yard was cleared. (They lost connection to the grid owing to a transformer failure at full power (850 MWe), which left nowhere for the generator output to go, so that initiated the turbine trip. The system was designed to handle a "full load rejection" trip, but owing to various other conditions the runback couldn't always be fast enough, so a reactor trip could also be expected maybe half the time.)

Reply to
dpb

Neon John is the expert on that incident. I've discontinued my research on recent infrastructure accidents which could be mistaken for a search for exploitable vulnerabilities.

Reply to
Jim Wilkins

On 02/23/2017 11:47 AM, Jim Wilkins wrote: ...

No idea who that might be; I was a (nuke) engineer at the reactor vendor until the summer before the incident and had at the time just moved to Oak Ridge w/ a small consulting firm; we were on the incident response team, via contract to NRC, by 9 AM the morning of the incident, so I'm pretty much familiar with both the specific reactor design and the incident...

Reply to
dpb

John DeArmond

Reply to
Jim Wilkins

Also the BFRL - Big Flashing Red Light for when anything else goes wrong.

Reply to
clare

Sorry if I wasn't clear that I meant -I- don't know much about TMI, not you.

Reply to
Jim Wilkins

On 02/23/2017 4:36 PM, Jim Wilkins wrote: ...

Ahhhh...gotcha' (now :) )...thanks.

Was only using the incident to agree with the earlier anecdote that the human often _is_ the weak link in the chain/loop.

Reply to
dpb
