Where is behavior AI now?

I am rereading Brooks' _Cambrian Intelligence_. It occurs to me that artificial intelligence will be found in the decisions that switch behaviors rather than in the layers of behavior themselves. Is anyone aware of any recent progress in behavior-based robotics as it applies to AI advances? Or has behavior-based robotics reached a plateau as well?

Reply to
RMDumse

Hey Randy. According to a quick check, that book contains papers published between 1985 and 1991 ...

Rather dated now. Also, take a look at Brooks' more recent work. I think his BBR stuff has definitely plateaued out, COG project has stalled for years, and now he's moving into "living machines" !!!

If you will remember, about 3 or so years ago I started a thread called "Neosubsumption, a new year beckons", which was mainly about putting some definition into all the levels in between classical AI at the top end and the BBR stuff at the bottom end. I think that's where to go.

Some people looking in-between are maybe Mataric [one of Brooks' old students], plus Minsky's always around too.

Push Singh and Marvin Minsky (2003). An architecture for combining ways to think. Proceedings of the International Conference on Knowledge Intensive Multi-Agent Systems. Cambridge, MA. Describes a cognitive architecture supporting multiple "ways to think" each suited for a particular type of problem.

Reply to
dan michaels

Does this post relate to your thoughts last March?

cheers, dpa

RMDumse wrote:

Reply to
dpa

==================

Chicken - egg

What good is determining "I am trapped" if you then can't do anything different, because you can't make a transition, because there are no other states or transitions available to you?

No, I'd say the most difficult part of the problem is 1) suspecting other behaviors might have more natural goodness and then 2) actually being able to try some other behavior and then 3) actually knowing if the result is better for having taken the transition or not so you can repeat that change in future behaviors. I.e. learning. ==============

... and from the post it was responding to ....

============== To say "I am trapped, so I change behavior states in order to escape," totally skips over what is to me the most difficult part of the problem; determining that "I am trapped."

In my experience, it is not the differentiation of the required states that is problematic, but rather the accurate identification (by the robot) of the necessity to change states.

How does the robot "know" it is trapped and needs to change state? And how often is it "wrong" about that conclusion?

"Aye, there's the rub."

best regards, dpa ==================

Howdie Tex. I agree with you on this one. The real problem the robot has is in sensing and perception, which must be done adequately PRIOR TO responding. And then it needs to have an extensive enough internally stored knowledgebase for it to be able to choose the best, or at least adequate, behavioral response.

This is where Brooks' subsumption devices break down, and essentially where the BBR approach stalls. The BBR devices can survive in simple environmental situations, but they [at least Brooks' original formulation] have no memory, no long-range perception, and no planning for directed behavior. These latter items are all of the "inbetween" levels that I mentioned last time, and also the sort of things that Mataric [post Brooks-subsumption work] and Minsky [commonsense thinking work] have been addressing.

These approaches also have a lot in common with the "hybrid" systems that Arkin talks about in his book on BBR. He describes the practical limitations of the subsumption approach, and the next steps, in chapter 6 ....


Reply to
dan michaels

Exactly what I was thinking. I'm reading something basically 15 years old, as if it were the latest word. I'm wondering how to connect to more recent information.

I recently read Joe Jones' "Behavior Based Robotics" and really liked it. It was the clearest expression of BBR I'd seen. Brooks tends to write in very ethereal terms. Jones brought it out of the clouds and into clear focus.

Brooks' interconnections looked quite inspired by neural nets, very complicated and messy, whereas Jones made arbitration on a layer-by-layer basis very simple and straightforward.
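For anyone who hasn't seen Jones' scheme, fixed-priority arbitration has roughly the flavor of the toy Python sketch below (the behavior names, sensor fields, and speed values are made up for illustration, not Jones' code): each behavior either requests a motor command or abstains, and the highest-priority requester wins.

def cruise(sensors):
    # Lowest priority: always wants to drive forward.
    return (100, 100)                  # (left_speed, right_speed)

def avoid(sensors):
    # Claims control only when a sonar reading is close.
    if sensors["sonar_cm"] < 30:
        return (-50, 50)               # spin away from the obstacle
    return None                        # abstain

def escape(sensors):
    # Highest priority: claims control when a bumper is pressed.
    if sensors["bump"]:
        return (-100, -100)            # back straight up
    return None

BEHAVIORS = [escape, avoid, cruise]    # ordered highest to lowest priority

def arbitrate(sensors):
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command
    return (0, 0)

print(arbitrate({"sonar_cm": 120, "bump": False}))   # cruise wins
print(arbitrate({"sonar_cm": 20,  "bump": False}))   # avoid wins
print(arbitrate({"sonar_cm": 20,  "bump": True}))    # escape wins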

Yes, I remember the thread. I had forgotten the name of it. I would like to see a revisitation given some time has passed.

The link to Maja Mataric was very useful. I went to her publications page, and this answers the impetus behind the original post: where can one find out about BBR, since the amount of published material (i.e., books) is limited? There I can see which journals are publishing the latest articles.

So, Dan, and other readers, do you subscribe to these journals, or do you do your research online? In the GR community (and other scientific fields), one can usually keep up by scanning the preprints on arXiv.org. Is there a central collection where these sorts of papers can be found, or do you go to the publication pages of the particular researchers you like?

Ah. Well, I tried just that. Found Brooks' publications page, followed it to his 2001 Nature article, and am reading it now.
Reply to
Randy M. Dumse

I also liked Joe Jones' book VERY much. Good and very practical info on the limitations of various sensors, and the need for multiple redundancy. Biological organisms are lucky in having literally millions of [tiny] sensory fibers in 6 or 8 modalities. Difficult to replicate in our worlds, which leads to the robot's major problems.

Jones' chapter on arbitration was also very helpful. The subsumption scheme he describes, however, is pretty much in line with Brooks' original scheme, and an expansion on what Jones wrote in his older book, Mobile Robots, 1999 or so.

After leaving Brooks' lab, Mataric immediately began adding on some layers above the lowest subsumption layers. Eg, 1992 paper on "Integration of Representation into Goal-Driven Behavior-Based Robots". I see she now has a ton of students and different projects on all sorts of things robotic. You might find some of her swarm stuff, "herds of nerds" as I recall, of interest. I haven't checked recently to see if she has seriously added to the BBR ideas.

No subscriptions, mostly visitation to homepages of the particular authors.

Also, the other single best source I've found for robotics/AI is "CiteSeer". Once you find "one" paper in any given topic area, there are endless links to others to track down. I have 100s of papers cached to HD, with no time to read ;-).


I have that paper on the ole HD. It is basically a summary of what Brooks expanded upon in his later book "Flesh and Machines", 2003 or so. This is a good half-biographical book, and half about his various ideas. For my part, I totally disagree with his conclusion regarding model complexity, and also his later conclusion about some "missing stuff" ...

============= Models lack complexity Building models that are below some complexity threshold also would mean that there is nothing in principle that we do not understand about intelligent or living systems. We have all the ideas and components lying around, we just have not yet put enough of them together in one place, or one model. When, and if, we do, then everything will start working a lot better. As for the first possibility, while this may be true, it does seem unlikely that is true across so many different aspects of biology. ==============

In fact, what the fields of complexity theory, molecular biology, etc., are starting to discover is that biological organisms have an immense systems-level complexity whose secrets we are only beginning to tap into.

It's really an issue of everything affecting everything else via an immense number of internal feedback loops. This starts down at the cell level, with DNA transcription loops, namely DNA --> RNA --> proteins --> DNA regulation. There are 100s if not 1000s of such loops at work in every single cell in the body, which interact with each other and relate to why there are so many different cell types and functions. Then, of course, the rest of the body and brain build upon this basis. Levels of complexity, multiplied.

I would highly recommend 2 books (for now) ...


- Endless Forms Most Beautiful: The New Science of Evo Devo and the Making of the Animal Kingdom, by Sean Carroll, 2005

- The Web of Life, by Fritjof Capra, 1997 or so

Reply to
dan michaels

Relate, yes. However, I was apparently cycling back to the same conclusion via another route, not having that wonderful series of exchanges in mind. But perhaps missing the quality of such discussions.

This time I was more interested in filling the gap between 1991 (or so) and 2006.

So here's the route I was taking this time. Say you have a simple robot. You know, our simple robots of today are mostly hard hardware, extremely hard, I mean: almost no moving parts, no body joints, no waists, no heads that swivel, seldom knees or elbows or articulated fingertips, etc. Mostly hard materials stuck together.

Take jBot as an example. Two motors. That's it. That's the total number of (significant) outputs. (Yes, there's a display you might be able to read if you chased it down, picked it up, and held it close to your face, but otherwise, as far as actuators that deal with the world go, two motors, that's it.) Likewise there are limited sensors. Five (or 4 or 6, I don't exactly remember) sonars are the primary world sensors. Really we could stop there.

Yes, there are stall sensors, but they don't measure the world, they measure the motors. Yes, there are navigational sensors, encoders, an IMU with compass, accelerometers, gyros, GPS, but they again monitor the motors and body of the robot relative to the world, not the external world for details to make decisions about. To wit, the compass doesn't measure the magnetic field to see if there is one, or what its direction and strength are; rather it assumes a constant magnetic field and then tries to determine the robot's orientation to that existing field.

So to my point from this. How many different behaviors can jBot have? It has two outputs. It has 1 bank (of 5) rangers to sense the world with. All the remaining sensors are feedback on what the robot is doing relative to the world, rather than what the world is like external to the robot. How many different behaviors (evidenced by output actions) can jBot have?

1) Left stop, right stop?
2) Left forward, right stop?
3) Left forward, right forward?
4) ...

How many inputs can those behaviors sense?

Isn't there a limit to the number of interconnections possible?

Isn't there a limit to intelligence given that matrix of combinations?

While we can come up with many names for all sorts of fancy behaviors, isn't there a limit to how much intelligence can be evidenced with BBR?

Isn't the intelligence of the robot, not in the behaviors at all (which given a list of all outputs, can be fully described by the states of all possible outputs), but in the number of ways we can combine those behaviors to have "emergent" properties?

And there I come full circle again, back to the same place as the March discussion. Isn't intelligence not in the behaviors themselves, but in the way they are interconnected, sensed and triggered?
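Randy's counting question can be made concrete with a toy enumeration. The sketch below counts only instantaneous input/output combinations (Curt's reply below is about why history changes the picture), and the discretization - three settings per motor, each sonar thresholded to near/far - is an assumption for illustration, not jBot's actual interface.

from itertools import product

MOTOR_STATES = ("reverse", "stop", "forward")
output_states = list(product(MOTOR_STATES, repeat=2))      # 3^2 = 9 output settings
sonar_states  = list(product(("near", "far"), repeat=5))   # 2^5 = 32 sonar patterns

print(len(output_states))    # 9
print(len(sonar_states))     # 32
# A purely reactive mapping is then just a 32-row table, one output setting
# per sonar pattern; the number of such tables is:
print(len(output_states) ** len(sonar_states))             # 9^32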

Reply to
Randy M. Dumse

Yeah, but to the control unit (the brain) you should normally look at the motors as being part of the external world. So if you have some type of sensory feedback from the motor, then that really is just another world sensor.

And of course, if you have a stall sensor, it's telling you things about the external world indirectly - like the fact you have run into something which is preventing the wheels from turning.

Infinite. But forced to be limited by the finite complexity of the machine.

Infinite. But again, forced to be limited by the finite complexity of the machine.

No. Except for the finite complexity of the machine you build.

No. Again. Only limited by the complexity of the machine you build.

A lack of sensors didn't seem to limit Helen Keller's intelligence. It only slowed down her learning about the environment.

Yes. But there's a better way to understand the problem. You have to think of it as a temporal problem, and not a spatial problem. I'll explain more below.

Yes I think so.

In the context of AI, I actually think of a behavior not just as an output, or an output sequence, but as a reaction. An output sequence (spin the right wheel clockwise for 1 second) is, as you say, not intelligence. The intelligence comes from when the system chooses to generate that output. The output alone is of no importance. Only when it's paired with a system to trigger the output does it become intelligence, so only then do I think of it as a "behavior" instead of just an output.

But true intelligence is even more than a complex set of behaviors built into a machine. It's the ability of a machine to learn new behaviors on its own. The main difference between our current human intelligence and machine intelligence is that most machines we build don't include much in the way of learning. We intentionally don't make most of our machines learning machines, because they are generally more useful to us as tools if their behavior isn't constantly changing on us. They are easier to work with, and control, if we can predict their behavior than if their behavior is constantly adapting and changing. This is why most people don't see machines as intelligent (or why many think machines can't be intelligent): the machines they know about aren't learning machines.

Now, back to what I was saying about the temporal (time-based) nature of the problem.

We are commonly taught to think of the operation of complex machines (computers, for example) in the spatial domain. We are taught to think of them as state machines in which, at any one point in time, the machine is in some fixed state. The input sensors all have one fixed value at that point in time, the outputs all have some fixed value, and the environment is in some fixed configuration (which, with luck, the sensors will report to us). The way we design machines, or write computer software, is often to think of the machine in this way and to specify by hand the transitions it makes from state to state.

For an input-output machine like a typical robot with sensors and effectors, we like to think of its behavior as a function which maps the current inputs to the current outputs at a single point in time. For very simple problems, that can work fairly well. But for anything interesting, the machine has to solve problems in the temporal domain, which simply means the current outputs must be calculated not just from the current inputs, but from past inputs as well. This means the full current "state" on which the machine is basing its decision about what output to create is a function of some amount of historic sensory data as well.

For a simple example: if you have a sensor that tells you wheel position, the current input only tells you the wheel's position. But if the machine has temporal knowledge of the last position, it can calculate average speed, so the machine can "know" if the wheel is turning. Or, if it has temporal knowledge of two recent past positions, it can calculate change in speed (acceleration). This is the purpose of a PID controller: it accumulates historic sensory information and creates an output which is not just based on current sensory data (P), but on a summary of historic sensory data as well (I, the summation, and D, the difference).
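A minimal sketch of that PID idea, with the history carried as the running sum (I) and the previous error (D); the gains and the fake motor model are purely illustrative.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0       # summarized history of all past errors
        self.prev_error = 0.0     # one sample of history for the D term

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: push a wheel speed toward 100 counts/s using made-up motor dynamics.
controller = PID(kp=0.5, ki=0.1, kd=0.05)
speed = 0.0
for _ in range(20):
    command = controller.update(setpoint=100.0, measurement=speed, dt=0.1)
    speed += 0.2 * command        # crude stand-in for the real motor
print(round(speed, 1))            # approaches the setpoint over the 20 steps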

Let's look at this from another perspective. Let's say we have a very simple machine with two binary input sensors, like bumper switches on the right front and left front of the bot. Let's say the code samples the state of these two switches 10 times per second. So each 1/10 of a second, there are two bits of binary input flowing into the machine that must create all the machine's "intelligence" (what outputs it creates to react to the environment).

You can write fairly simple code that will do something trivial like back up and turn to the left when the right bumper switch is activated, or the opposite when the left bumper switch is activated. So the switches are just acting as triggers to some simple behaviors - a typical bot programming technique. But there is a lot more information available in those sensory inputs if the machine is able to track, and respond to, a recent past history of bumper inputs. For example, if the bot is driving forward and hits a wall straight on, the input sequence might look like this:

L 0000011111 R 0000011111

time ->

Where a 0 value means the switch is not activated and a 1 value means the switch is activated. Each column of 0/1 values is the bumper switch value at that point in time. We see in the above that both switches activate at the same time.

If we hit the wall at an angle, the input might look like this:

L 0000011111 R 0000111111

In this case, the Right switch activated first, followed in 1/10 of a second by the Left switch.

If we write code that uses only the current switch values to trigger behaviors, there are two binary inputs each time, which means 2^2 or 4 different possible combinations of switch values. So, we can use this to trigger a total of 4 different behaviors. The inputs would in effect have a meaning something like this:

LR
00   Touching nothing
01   Hit something small on the right
10   Hit something small on the left
11   Hit something big

But if we use not just the current input, but the current input and the last input, we now have 4 binary inputs to trigger behaviors with, which means 2^4 or 16 different combinations. And they will give us that much more information about the state of the environment to react to. They could mean things like this:

Prev  Cur
LR    LR    meaning
00    00    nothing happening
00    11    Hit wall straight on
01    01    Hit something on right
01    11    Hit wall at angle
11    11    stuck
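A sketch of what triggering off the current AND previous sample might look like in code; the reaction strings and the table-lookup form are just an illustration of the table above.

from collections import deque

REACTIONS = {
    ("00", "00"): "cruise",
    ("00", "11"): "back up straight (hit wall head on)",
    ("01", "01"): "back up, turn left (hit something on right)",
    ("01", "11"): "back up, turn hard left (hit wall at an angle)",
    ("11", "11"): "escape (stuck)",
}

history = deque(["00", "00"], maxlen=2)   # the two most recent L/R samples

def step(left, right):
    history.append(f"{left}{right}")
    prev, cur = history
    return REACTIONS.get((prev, cur), "default avoid")

# Hitting a wall at an angle: the right switch closes one sample before the left.
print(step(0, 1))   # ("00","01") isn't in the table -> "default avoid"
print(step(1, 1))   # ("01","11") -> hit wall at an angle
print(step(1, 1))   # ("11","11") -> stuck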

But why stop at just the last two sensory input values? We can use 3, or 4, or 1000. The more historic sensory data we record, and use to trigger our output behaviors, the more complex, and the more "intelligent", our behaviors can become.

So, back above, when you asked:

I said infinite, because even though you have only 2 binary sensory inputs, there's no limit on how long a history of sensory inputs you can (in theory, if not in practice) record and use as a basis for triggering behaviors. In my simple-bot example, which has 2 binary inputs sampled 10 times per second, if you were to record only 1 second of sensory data (10 x 2-bit samples) and produce reactions to that small set of sensory data, you would already have 2^20 or 1,048,576 different sensory conditions to react to. If you recorded only 2 seconds of sensory data, only 40 bits, you would have over a trillion input states to respond to.

The problem here is that the machine needs to understand what state the environment is in and produce the correct reaction (behavior) for that state. But the current input values from the sensors don't tell us much about the complete state of the environment. To form a more complete picture, we need to save historic sensory data one way or another. The simple way is to just record a history of raw sensory data, but that doesn't work well because we quickly end up with more data than we know what to do with. The data must be compressed and summarized into a smaller, more useful, set of information.

In most robots, the compression techniques are hand picked, and hand crafted, by the robot designer. We might for example build a map of the environment. How that map data is stored, and how the data is updated, is all something the robot designers normally choose to best fit the requirements of the problems the robot is trying to solve. But, that internal representation of the environment (the map), can generically be thought of as nothing more than a compressed, and summarized, history of past sensory inputs.
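A toy sketch of that idea, where the "map" is nothing more than counts folded in from past sonar returns (grid size, update rule, and threshold are arbitrary):

GRID = [[0] * 10 for _ in range(10)]   # 10 x 10 grid of occupancy counts

def record_hit(x, y):
    # Fold one sonar return into the summary instead of storing it raw.
    GRID[y][x] += 1

def looks_blocked(x, y, threshold=3):
    # Later behaviors consult the summary, not the raw sensor history.
    return GRID[y][x] >= threshold

for _ in range(4):           # the same obstacle seen on four different passes
    record_hit(2, 5)

print(looks_blocked(2, 5))   # True  - a decision driven by readings taken in the past
print(looks_blocked(7, 7))   # False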

The robot's behaviors (its outputs) end up being triggered by (aka a function of) this internal world state, and not just the current sensory inputs.

A smart robot, for example, might explore the environment and find an electrical outlet that it can plug into to recharge its batteries. Then later, when the batteries start to run low, it can drive straight to that outlet for a recharge. That entire behavior of driving to the outlet, which might require making multiple turns to navigate down a hall and into a room to get to the outlet it can reach, was in effect all controlled by what it might have learned 2 weeks ago. In other words, sensory inputs it received 2 weeks ago are controlling the fact that it just took a left turn now.

So, even a very simple machine, with two wheels, and very limited sensors, still has a hard job ahead of it to act intelligent. It needs to collect, and summarize historic sensory data, and produce different outputs, based on that summarized data. And even a small collection of summarized sensory data, can quickly lead to very complex behaviors.

But, as I said at the beginning. If you are trying to make a robot act intelligent (like animals and humans), one of the main things you have to add is learning. And that's where the problem really gets interesting.

Without learning, the robot designer will typically hand-pick what type of internal state information should be maintained, and how the sensory data will update that state information. And they will also manually create the algorithms for how the machine reacts to the internal state data. There is no end to the number of algorithms that have been created like this, and published in the literature, for different robotics problems. But to be truly intelligent (like humans), much of that has to be replaced with strong generic learning algorithms. The internal data structures stored by the system can no longer be hand-selected for the needs of the job at hand. They must use some type of generic sensory compression system. And the behaviors produced in reaction to the internal state must be learned, through experience.

In the end, the machine ends up mapping environment context (as represented by these internal models or state information) into a different behavior for every possible context (aka internal machine state). A learning machine will not use a fixed mapping, but will evolve the mapping over time.
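One minimal way to picture an evolving mapping - this is a generic tabular, reward-driven sketch, not anyone's actual algorithm - is to keep a score per (state, behavior) pair and drift toward behaviors that have historically been followed by reward.

import random
from collections import defaultdict

scores = defaultdict(float)              # (state, behavior) -> running score
BEHAVIORS = ["cruise", "avoid", "escape"]

def choose(state, explore=0.1):
    if random.random() < explore:        # occasionally try something different
        return random.choice(BEHAVIORS)
    return max(BEHAVIORS, key=lambda b: scores[(state, b)])

def learn(state, behavior, reward, rate=0.2):
    scores[(state, behavior)] += rate * (reward - scores[(state, behavior)])

# Toy run: in the "stuck" state only "escape" is ever rewarded, so the mapping
# drifts toward it.
for _ in range(500):
    b = choose("stuck")
    learn("stuck", b, reward=1.0 if b == "escape" else 0.0)
print(choose("stuck", explore=0.0))      # almost certainly "escape" after 500 trials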

The intelligence of a machine is first and foremost limited by the size of its internal state - not by its sensors or effectors. So Helen Keller, with limited sensors (compared to normal humans), still had a brain with the same internal state size, which could still learn to produce behaviors of complexity equal to any intelligent human's.

Second, the intelligence is limited by the sophistication of the algorithms that map sensory data to internal state changes, and which map state to the current outputs. Any fixed mapping is what I see as the machine's current "knowledge" - i.e., how it currently reacts to its understanding of the state of the environment. This is what I think of as the machine's current behavior set.

And third, the intelligence is limited by its learning ability - how it is able to adjust its behavior set over time.

Most robot projects, including things like the DARPA cars, are getting smarter by using more complex algorithms for updating internal state, and for producing outputs based on that internal state. But they are not very intelligent, because all this internal state has been hand-created by us humans as we design the robot. True human intelligence requires a generic form of internal state that can be adapted to all the problems humans can solve. When we write a chess program, the internal state of the environment is handcrafted for the needs of that problem - it stores the state of the chess board. When we build a DARPA car, the internal state is handcrafted to the needs of the car: we build a map of the environment around the car by accumulating sensory data, and we handcraft algorithms for controlling the steering, gas, and brake based on the current internal state and the internal goals defined by the application. It's all very hand-crafted to solve one problem.

But the brain doesn't have hand-crafted hardware created by evolution for playing chess, and different hand-crafted hardware created by evolution for driving a car. It uses the same generic learning hardware, handcrafted by evolution, to do either.

To move from the smart machines we have today, which get all their "smarts" from us hand-crafting them in, to truly intelligent machines, we must replace all these hand-crafted algorithms, customized for the applications and problems we want to solve, with generic learning algorithms that have the power to solve any of the problems humans can solve, like playing chess or driving a car. Not having the answer to how to build strong generic learning into a machine is why we currently have smart machines, but not intelligent machines.

As a hobby, I've been looking at AI and trying to understand how to create intelligent machines for about 30 years now. I can continue to talk more about how I've been approaching this problem of creating strong generic learning machines if anyone cares. Most of my work has just been with software simulations, but I've recently decided to play with robots, partly just for the fun of it and partly because I felt that more experimentation with real-world hardware might give me some improved insights into how to solve this problem.

So far, I've learned a lot about the current state of hobby robotics, but haven't gotten anything done on my AI work. :)

Reply to
Curt Welch

I'm afraid I haven't got past bump switches 101, but my robots use a very simple history-gathering algorithm to recognise when they are caught in a corner. Expressing the recent frequency of collisions as a stress factor, the robot can respond to being caught in a corner by panicking and executing a radical about-turn. I don't offer this as an example of machine intelligence, but as an illustration of the types of information the machine can build up using the simplest of sensors.
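Tim doesn't show his code, but the flavor is presumably something like a leaky accumulator: stress rises with each collision, decays with time, and a radical about-turn fires past a threshold. All the constants below are guesses.

stress = 0.0
STRESS_PER_BUMP = 1.0
DECAY = 0.95                 # applied every control cycle
PANIC_THRESHOLD = 3.0

def tick(bumped):
    global stress
    stress = stress * DECAY + (STRESS_PER_BUMP if bumped else 0.0)
    if stress > PANIC_THRESHOLD:
        stress = 0.0
        return "panic: radical about-turn"
    return "normal avoid" if bumped else "cruise"

# Several collisions in quick succession (a corner) push stress over the threshold.
for hit in [True, False, True, True, True]:
    print(tick(hit))          # the last collision triggers the panic turn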

I agree that the complexity of sensory input does not necessarily limit intelligence. A hypothetical intelligent robot armed only with a bump sensor and an ability to perform rudimentary odometry, placed in a box with an obstacle which it is allowed to discover and memorise could make some profound logical deductions about the universe if the obstacle was subsequently removed. That's assuming the robot has the kind of cognitive capability to make deductions. Is that the key point? Cognition. Or would syllogism suffice?

____________________________________________________ "I like to be organised. A place for everything. And everything all over the place."

Reply to
Tim Polmear

This is well described by Joe Jones in his latest book, mentioned earlier. Even a simple robot needs short-term memory, and not simply Brooksian reactive-subsumption, else it can be easily caught in infinite repetitive behavioral loops, and never "know" the difference, nor take corrective action.

What you're talking about here is still a VERY simple behavioral repertoire, and I think Randy is looking for something beyond that. For a "robot" [rather than just an AI program in a box] to perform robust behavior over a wide range of environmental problems, extensive sensory intelligence about the outside world is of prime importance, as well as a large stored knowledgebase, plus the ability to use both.

EG, imagine how your daily life would change if you had been deaf and blind from birth, Curt's example of Helen Keller notwithstanding. Then, reflect that onto your robot.

HK had the internal processing power necessary to overcome her sensory limitations, by virtue of 2 factors. First, she did have the great internal processing power of other humans, and secondly, she still had a very sophisticated sensory input capability - i.e., millions of touch sensors in her hands, pain sensors, heat sensors, the ability to sense the unique properties of water [her ultimate link to reality according to the movie, IIRC] - which allowed her to integrate into the world of humans.

Now, like your robot, imagine HK's life without her millions of touch/etc sensory fibers. Without vision, hearing, or even touch, she would have been forever locked into her private world, with no way out.

Reply to
dan michaels

Yeah, that's a good example. The number of ways we can dream up to create algorithms which summarize and react to past sensory data is limited only by our intelligence.

Yes, and that's exactly why learning of some type is always needed for intelligence. Complex but fixed reaction algorithms can work fine in one environment, but get stuck in a short behavior loop in another. A system that has higher goals which cause it to change its behaviors when the current behavior set is not meeting the goal (aka learning of some type) will always be able to escape the loops created by some new environment (or will at least constantly search different loops to try and find a "better" one).

I like the Brooksian reactive-subsumption approach because I think that format is closer to how practical learning machines need to be structured. It gives us insights into alternatives to how machines can be structured. Learning machines tend to be structured very differently than how we would manually write code to perform similar functions. Understanding how they might be structured is a big part of making progress with building learning machines.

Reply to
Curt Welch

And there is a remarkable issue: baby and bathwater. It does seem Brooks is so interested in promoting reactive systems that he resists things which use memory. He promotes state machines, and to me state information is equivalent to a form of memory, yet he absolutely minimizes the advantage of state.

Funny, I'd set Arkin aside a few months ago. I just picked it up, and that's exactly where my bookmark was, near the end of ch. 6. I find Arkin a difficult read. His constant return to the use of lists seems an unnatural cadence for the working of my own mind. Well, fortunately there are many different ways to think (so evidenced in that case). I suppose there is great utility in that diversity.

To try to put some more clarity on the purpose of my opening post: to me the enabling principle that makes a modern stored-program computer different from the preceding sequencers, like looms, is the ability to bifurcate the execution path - "if" branches. The looms have no choice and run the same sequence eternally. The computer can test conditions and take branches so as to respond differently according to those tests.

Likewise, I suspect there is no intelligence in behaviors at all. (That is not to say there isn't utility in behaviors, just as there is utility in sequencers that weave beautiful cloths.) Behaviors are like sequencers, rote and unchangeable. The intelligence lies in the decisions to change behaviors. The decisions are like the branches, but instead of changes of execution path they are selections of behaviors.

As such, changes of behavior seem to me to be characterizable as state changes. So again, I find state to be the fundamental factor enabling intelligence, and the transitions that change state the source of intelligence.
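In code, that framing looks roughly like the sketch below: the behaviors are rote output sequences, and every "if" - every decision - lives in the selector that switches between them. Names and numbers are illustrative only.

def forward():   return [(100, 100)] * 5      # five ticks of driving forward
def spin_left(): return [(-50, 50)] * 3
def back_up():   return [(-100, -100)] * 4

def select(sensors):
    # In this framing, the intelligence is entirely in these branches.
    if sensors["bump"]:
        return back_up()
    if sensors["sonar_cm"] < 30:
        return spin_left()
    return forward()

print(select({"bump": False, "sonar_cm": 200})[0])   # (100, 100)
print(select({"bump": True,  "sonar_cm": 10})[0])    # (-100, -100)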

Reply to
Randy M. Dumse

I'm not quite sure what you meant by this last comment, but an obvious case where a simple state machine breaks down and doesn't handle short-term history [memory] well is the repetitive collision loops that Tim P. mentioned. To avoid such situations, the machine must keep a history of recently visited states, so it can recognize repetitive looping through the same states and break the loop. Usual FSM formulations, including anything Brooks did, won't handle this type of memory problem, from what I can tell. The history of state visits in the loop might, in theory, be many states long.
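A toy version of that overseer idea: keep a short history of visited states and flag when the same cycle of states keeps repeating. The window size and the three-repetition test are arbitrary.

from collections import deque

visited = deque(maxlen=12)     # short-term memory of recent behavior states

def looping(period):
    # True if the last few states repeat with the given period.
    if len(visited) < 3 * period:
        return False
    recent = list(visited)[-3 * period:]
    return recent[:period] == recent[period:2 * period] == recent[2 * period:]

for s in ["cruise", "avoid", "cruise", "avoid", "cruise", "avoid"]:
    visited.append(s)

print(looping(2))   # True: the bot is ping-ponging between the same two states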

EG, my walking machine controller, which we've talked about several times, allows linking of sensor alarms to invoke state transitions - i.e., it's a standard reactive machine - but it doesn't have the memory capability to step back and analyze what behaviors are occurring in perspective. For that you need an "overseer" with true STM to record the history of state visits, look for repetitive patterns, and then initiate changes. In Society of Mind, Minsky briefly mentions the idea of A-Brains and B-Brains, with the B-Brains essentially monitoring activity of the A-Brains specifically to stop such habitual behavior.

However, what you've avoided again is the problem of "how" to make those decisions, which David was alluding to. For this, the first thing necessary is good intel, via sensory systems, about the nature of the external environment, plus a stored knowledgebase regarding how to use that intel.

Reply to
dan michaels

OTOH, have you ever seen a bump-and-go toy get stuck in a corner?

Is the real issue having memory large enough to record a history, or is it limited programming in ways to render the problem moot in the first place?

Though not necessarily as efficient in all cases, using a random solution to getting out of a corner may be just as effective as going through a known sequence, the latter of which would imply storing a history of the failed attempts.

Even Tim's solution, which I find elegant in its simplicity, doesn't need a history to work. Panic is an excellent behavior. In us it might arise from some chemical excreted each time we try something that doesn't get us out of a jam. We don't actually need to keep a history of what we've tried and what we haven't. At some level, we go into panic mode, becoming in effect a bump-and-go toy. At that level, we're not thinking rationally, and histories be damned. Funny thing is that this behavior is endemic to us and most living creatures.

If we're really going to get into behaviorism, we need to look at the random factor and include irrational thought processes. What I find lacking in most trials of behavior-based AI and robotics is that the behavior is too Spock-like. What we might term "negative" behaviors can be powerful problem solvers, and there are billions of examples of these negative behaviors in biology.

-- Gordon

Reply to
Gordon McComb

Except of course, you have to keep some type of historic information to know when to trigger panic mode. :)

Reply to
Curt Welch

Maybe, maybe not... It's done with chemicals in biologics, and I imagine maybe a rising voltage in a robot. There's a threshold where the mechanism would implicitly understand it can't take any more chemical/voltage without sustaining damage or overload, so there's no need to "program" this level in. It's how much to increase the voltage for each particular type of event that's the tricky part, and that could conceivably vary from individual to individual, as it does for biologics. Whether this is hard-wired (born-in evolution), learned (which *could* involve a history of some type), or from sensory data alone could be a matter of specific design.
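A rough sketch of that idea: a single accumulating "voltage", bumped by a different amount per event type, with no explicit log of what was tried. The increments and the overload limit are invented for illustration.

BUMP_VOLTS = {"stall": 0.8, "collision": 0.5, "wheel_slip": 0.3}
OVERLOAD = 2.0      # the implicit physical limit, not a programmed decision table

voltage = 0.0
for event in ["collision", "wheel_slip", "collision", "stall"]:
    voltage += BUMP_VOLTS[event]
    if voltage >= OVERLOAD:
        print("panic after", event, "- voltage", round(voltage, 1))
        voltage = 0.0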

-- Gordon

Reply to
Gordon McComb

You have a long and reasoned argument. I'm sorry, but I do not buy it. I think the argument is based on fallacious reasoning, via application of reductio ad absurdum. It starts with bump switches. No magic demon sits on the bump switch and sends back detailed Morse code describing the texture and color and arrangements of the world sliding by the whisker. No UART sits on them coding up ASCII messages of depth and complexity. No such information content exists no matter how often the bump switch is read.

There are no trillions of inputs with 20 readings on 2 sensors taken over 2 seconds. All that is there is a one-part-in-20 representation of a switch closure, not 40 bits of information. There are two parts to the information. One is purely binary: the switch is open, the switch is closed. The other part is purely a measure of time. No matter if the switches are read once a second or a million times a second, only (largely useless) resolution is gained on the timing of that bit change. It is of little "information" significance. Given two bump switches as the only inputs, the information content is relative between the two switches, so which switch went first (assuming both closed before an escape action was taken and relieved the first) is about all of significance that can be made of the situation.

But even my premise that there are at most a few bits of information in two bump switches misses the point I tried to make in my post, by putting the horse before the cart (or some such).

I was focused on outputs. If your robot has limited output combinations, it doesn't matter how intelligent it might otherwise be. It can only express itself through a very few states.

Here the poster child should not be Helen Keller, but Terri Schiavo. If there are no motor functions available to evidence intelligence, we tend to assume there is none.

jBot is a wonderful and talented robot, a credit to the state of the art. But it can't even wag its tail when it is finished hunting its spot.

So without too much hair-splitting over the analog control of outputs (so as to have infinite responses, like: this is a one-meter circle, this is a 1001-millimeter circle, this is a 1002-millimeter circle, and so on), we ought to be able to characterize all possible behaviors, as evidenced by output settings. Then, given all the possible inputs in terms of relationships (again avoiding analog hair-splitting), we should be able to come up with an equivalent machine by linking all output states via input data relations, and be able to describe the complexity of the machine we see.

Complex behaviors like escapes? Perhaps they are simple behaviors applied according to the passage of time. That takes me back full circle to the original premise: the intelligence is in the changes of behaviors, and not the behaviors themselves.

Reply to
Randy M. Dumse

Yes, there are many ways to create internal state that changes as a function of past history. Whether it's chemical, or a charge level on a cap, or a timer counting down from the last event, or the position of a shaft or a lever, it's still all creating an internal state value which changes as a function of the history of the bot. And that's what I mean by recording, and reacting to, past events as "history". No matter how it's implemented, it's still going to be a function of past history, which means the machine is recording past history in some way.

Reply to
Curt Welch

It's not really a matter of how "much" memory; rather, it's mainly the ability to record history and recognize habitual pattern loops. You can arrange ways to handle simple and specific problems, like getting out of corners, but for very general purposes of intelligence, you just cannot rely on randomness to solve your problems.

Also, panicking as a valid mode of "intelligent" behavior is just not an option, by definition. To postulate a life-and-death situation: you don't want your sentry robot going into panic mode and killing everything in sight that moves, or panicking and jumping out of the top-floor window because the elevator is broken. On and on.

The Spock problem of BBR is due to the fact that the behavioral repertoires and cognitive abilities of present-day bots are just too limited in the first place. To use a street term, they're just plain dumb, and can't do very much. Especially if they have no ability to learn built in in the first place. You can get some variation in behavior "emerging", as Brooks talks about, but still nothing like real intelligence.

Randomness or irrationality might play a part, but what you really need is a huge cognitive and behavioral repertoire to work off of in the first place. Creativity, I think, comes from having a large number of examples stored in your internal knowledgebase that you can piece together in different combinations.

Reply to
dan michaels

Howdy,

Randy

I'm not sure I understand your premise here. That limited output resources (i.e., 2 drive motors) somehow translate into limited behaviors?

I don't think that is a given.

For example, when I drive the R/C camera car, its behavior is distinctly more intelligent than when the robot is driving itself, especially when interacting with other humans. And it is limited to the same two output motors in both cases.

The complexity of the robot's behavior and the intelligence, even humor, inferred from that behavior does not seem to depend on the limited-ness of the output resources, but rather on the way they are manipulated.

regards, dpa

Reply to
dpa
