Machina Speculatrix and Emergent Behaviour

Have we all lost our way in a whirlpool of ever-increasing complexity?

The seminal work by Grey Walter on his Machina Speculatrix (1950s era!) exhibited interesting and complex behaviours that "emerged" from the use of only _TWO_ active devices - a couple of valves ("tubes" for the unchristian warmongers of Yankland).

Today we build robots around microcomputers containing millions of active devices, yet the behaviours of the resulting robots are limited to what we program into them - i.e., the behaviours are a reflection of our own programming skills. We don't see an increase in behaviour that matches the orders-of-magnitude increase in active devices.

Should we, perhaps, be inventing circuitry that can exhibit behaviour of its own, rather than forcing on our robots the relatively simple behaviours that we can program into them?

Are we producing extensions of our own brains and not nascent artificial intelligence?

Reply to
Pierian Spring

Grey Walter was ahead of his time. What you describe has already been happening for a long time now, though, especially with the subsumption-type approach to robotics developed by Rodney Brooks. However, these sorts of bots cannot do much that is really intelligent. Like Grey Walter's bots, they are basically limited to operating in very specific environmental niches. They cannot deal with the sort of general everyday problems that a 5-year-old can deal with. They cannot deal with abstractions, language, analysis of general visual scenes, on and on.
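For anyone who hasn't met the idea, here is a minimal sketch of subsumption-style layering in Python. The layer names and sensor flags are my own invention, purely for illustration, not Brooks's actual code: behaviour layers are tried in priority order, and a higher layer "subsumes" the ones below it simply by claiming the actuators first.

# Minimal subsumption-style control loop (illustrative sketch only;
# layer names and sensor fields are hypothetical).

from dataclasses import dataclass

@dataclass
class Sensors:
    bumper_hit: bool = False
    light_ahead: bool = False

def avoid(s: Sensors):
    # Highest-priority layer: back away from collisions.
    if s.bumper_hit:
        return "reverse-and-turn"
    return None  # defer to lower layers

def seek_light(s: Sensors):
    # Middle layer: steer toward light, like Walter's tortoises.
    if s.light_ahead:
        return "drive-forward"
    return None

def wander(s: Sensors):
    # Lowest layer: default behaviour, always fires.
    return "random-turn"

LAYERS = [avoid, seek_light, wander]  # priority order

def control_step(sensors: Sensors) -> str:
    # The first layer that returns a command suppresses the rest.
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(control_step(Sensors(bumper_hit=True)))   # reverse-and-turn
print(control_step(Sensors(light_ahead=True)))  # drive-forward
print(control_step(Sensors()))                  # random-turn

Note how the "intelligence" here is exactly as deep as the priority list: nothing in this loop will ever generalize beyond the niches its layers were written for, which is the limitation described above.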

Reply to
dan michaels

Neither. Nor is robotics strictly a matter of reproducing ANY type of behavior, though some platforms are built on the concept. Brooks admits behaviors are perceived; they are abstract concepts that become whatever we want them to be.

Grey Walter experimented with machines that had only a few functions, because they were extremely simple mechanically. They could not right themselves if tipped over. They could not coordinate travel over uneven surfaces. They could not fetch a beer sitting right in front of them, let alone from a refrigerator in a crowded house.

Behaviors are NOT merely what's programmed into the robot. This is a common misconception. Behaviors derive first from the physical capabilities of the machine, just as they do in nature. Forget the number of transistors in a microprocessor; that's a meaningless figure when robots are primarily mechanical devices. If a robot can't physically climb stairs, it makes no sense to build in behaviors for stair climbing. Robots remain limited behaviorally because their mechanisms are constrained by the laws of physics and $$$.

-- Gordon

Reply to
Gordon McComb

"Pierian Spring"

I've worked a bit with swarm algorithms and even have a published paper on ant colony simulation, and I do believe that emergent behavior has a better chance of succeeding in tasks that need broad knowledge rather than highly specialized knowledge.

It is easy to show that a large number of simple agents (simple behavior) interacting in great volume will produce complex behavior, but it is very difficult to develop a system based on emergent behavior that actually produces a specific desired behavior. Complexity is difficult to control.
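To make that concrete, here is a toy ant-colony sketch in Python (my own illustration, not the model from the paper): each ant follows one trivial rule - step toward pheromone, leave pheromone behind - yet trails between the nest and the food emerge from the colony as a whole. Notice how little leverage the code gives you over *which* trails form; that is the control problem.

# Toy ant-colony simulation (illustrative sketch; all constants are
# arbitrary). Many simple agents random-walk on a grid, biased toward
# pheromone, and deposit pheromone where they step; trails "emerge"
# with no agent knowing the route.

import random

SIZE, ANTS, STEPS = 20, 60, 400
NEST, FOOD = (0, 0), (19, 19)
EVAPORATION, DEPOSIT = 0.98, 1.0

pheromone = [[0.0] * SIZE for _ in range(SIZE)]
ants = [list(NEST) for _ in range(ANTS)]

def neighbours(x, y):
    cells = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [(nx, ny) for nx, ny in cells if 0 <= nx < SIZE and 0 <= ny < SIZE]

for _ in range(STEPS):
    for ant in ants:
        options = neighbours(*ant)
        # Weight each step by local pheromone (plus a floor so ants still explore).
        weights = [pheromone[x][y] + 0.1 for x, y in options]
        ant[:] = random.choices(options, weights)[0]
        pheromone[ant[0]][ant[1]] += DEPOSIT
        if tuple(ant) == FOOD:  # found food: reinforce the spot, restart at nest
            pheromone[ant[0]][ant[1]] += 10 * DEPOSIT
            ant[:] = NEST
    # Evaporation keeps unused trails from persisting forever.
    for row in pheromone:
        row[:] = [p * EVAPORATION for p in row]

# Crude display: '#' marks cells with strong pheromone concentrations.
for x in range(SIZE):
    print("".join("#" if pheromone[x][y] > 5 else "." for y in range(SIZE)))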

Cheers

Padu

Reply to
Padu

"Gordon McComb" wrote in message news: snipped-for-privacy@NOgmccombSPAM.com...

I have to agree. My favorite robot architectures always include a mixture of "reflexes" and programmed micros, a method that exhibits the strong points of each type of construction.

For example, imagine a circuit that consists of little more than a pendulum or microswitch that can detect the robot being upside down. Add a pair of solenoids or other simple devices to set the robot upright, and send just enough data to the brain to let it know that it had in fact been flipped over, and when it is back on its feet again, so to speak. The flipped-over sensor constitutes a type of reflex, and the brain only has to take into account the fact that the robot has been turned over, and perhaps righted again. You barely have to program that; the hardware takes care of itself.

You can add other simple reflex circuits that act on their own and report to the brain what has happened, and the brain simply does the bookkeeping. A couple of lines of code or a loop or two will handle it, and it turns out you can compartmentalize that code with very little pain.

I built a fire extinguisher bot a few years ago for the Trinity College fire fighter competition that used a similar set of reflexes. One used a pair of fire-sensor eyes aimed together at a point a few inches in front of the robot. When one eye saw the fire, it was used to navigate toward it. When both eyes saw the fire, due to their angle and spacing, there was only one place it could be - dead ahead at a fixed distance (it was a candle). In that unique case, the robot reflexively sprayed it with water. The brain only used the sensory data to steer by, but the eye/water-squirter combination would spray the instant it had the flame in sight. The brain could then do the "paperwork": we saw fire, headed for it, and since both eyes then got it, we successfully located it through navigation; then the reflex kicked in and told us so, and now there is no fire - we must have put it out.

So with a handful of simple sensors and reflexes, and a very minimum of complex code, you get something that acts like an organism and exhibits something similar to a layered brain architecture. I strongly recommend that hobbyists and professionals alike try this approach and see how well even thorny problems respond to it.
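Here is a rough sketch of that division of labor in Python, with hypothetical names throughout (on a real bot the reflexes would be hardware, not code): the reflexes act immediately and post events, and the brain merely consumes the event log and does the bookkeeping.

# Sketch of a reflex-plus-brain architecture (hypothetical names;
# the reflex functions stand in for the hardware circuits described above).

from queue import Queue

events = Queue()  # reflexes post here; the brain only reads

def flip_reflex(upside_down: bool) -> None:
    # Hardware analogue: pendulum switch wired to righting solenoids.
    if upside_down:
        fire_solenoids()             # act immediately, no brain involved
        events.put("flipped-and-righted")

def flame_reflex(left_eye: bool, right_eye: bool) -> None:
    # Hardware analogue: two fire sensors converging a few inches ahead.
    if left_eye and right_eye:       # both eyes agree: candle dead ahead
        spray_water()                # reflex fires the instant it can
        events.put("sprayed-flame")

def fire_solenoids() -> None:
    print("[reflex] righting the robot")

def spray_water() -> None:
    print("[reflex] spraying water")

def brain_step() -> None:
    # The brain's only job here is the "paperwork".
    while not events.empty():
        event = events.get()
        if event == "flipped-and-righted":
            print("[brain] note: we tipped over but recovered")
        elif event == "sprayed-flame":
            print("[brain] note: flame located and extinguished")

# One simulated tick: the robot tips over, then finds the candle.
flip_reflex(upside_down=True)
flame_reflex(left_eye=True, right_eye=True)
brain_step()

The point of the split is that the reflex path has no latency and no failure modes beyond the hardware itself, while the brain code stays a handful of lines of bookkeeping that is trivial to compartmentalize.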

Cheers!

Sir Charles W. Shults III, K. B. B., Xenotech Research

321-206-1840
Reply to
Sir Charles W. Shults III

Since I didn't notice either Joseph Jones or Daniel Roth pipe up on this thread, I thought I would for them. I gave myself their book "Robot Programming: A Practical Guide to Behavior-Based Robotics" for Christmas, and I'm about halfway through it now. It is a very good introduction to the kind of systems Gordon and Charles have been talking about.

Paul Pawelski


Reply to
catman
