Where is behavior AI now?

[...]

Before we worry about what it is that distinguishes us from animals, wouldn't it be a reasonable goal to find out first what it is that distinguishes animals from everything else?

-- Joe Legris

Reply to
J.A. Legris

Seems to me, that's pretty much what subsumption and BBR are all about. How do you tell a rock from a subsumption robot? The rock just sits there doing nothing [although, as I recall, Curt has some strong opinions about "rock intelligence" ;-)], while the sub.s-bot both reacts to and acts upon its environment.

To me [but not to Marvin, given his past comments], the sub.s-bot is basically a good start at what distinguishes animals from everything else: autonomy, sensing, and action. However, as this thread is all about, the BBR/sub.s approach looks to be basically stalled, and there doesn't seem to be much [or enough] effort at adding all those levels in-between the 2 ends Marvin just talked about. I think Brooks' book Flesh and Machines was really the death-knell of sub.s, and his final conclusion - that there is some "missing stuff" - was totally wrong, as I mentioned in an earlier post.

I do distinguish between human and animal, in that humans have "another" level on top that adds high-level symbolic processing to what the animals have - roughly parallel to the distinction between higher-order and primary consciousness that Dennett and Edelman talk about. But you have to start adding those in-between levels to bridge the gap down to subsumption. Marvin's high-level commonsense reasoning systems can only work on a proper lower-level foundation that can function successfully in the real world.

For my part, I'm pursuing [and have been for a while] the idea of adding those in-between levels from the bottom up, which is the same way organisms evolved intelligence. I just don't think you can do this from the top down. For one thing, there is too much of an unexplained gap between the highest symbolic levels in humans and the primary-consciousness levels in the other animals. IOW, you don't start with language first and then extend that to build the communications-interaction capability of a monkey. You do it the other way around. My $0.02.

Reply to
dan michaels

But it's not a rock. It's a rock lobster!

(I guess only B-52 fans will get this one...)

-- Gordon

Reply to
Gordon McComb

That's because the few people in those areas have been banging their heads against the wall for decades now. Is there any "reflective, meaning-based, knowledge-using" system in production use anywhere? (I don't think we can consider Cyc a production system.)

John Nagle Animats

Reply to
John Nagle

Southern California beach humor?

formatting link

Reply to
dan michaels

The band is from Athens, GA, actually. Am I showing my age here?

formatting link
If you can create an AI system that can make sense of Fred Schneider's lyrics, then you can create anything!

-- Gordon

Reply to
Gordon McComb

[...]

But what are the leads into these "in-between levels"?

What are the DNA, protein, and control-loop equivalents for the evolution of intelligent neural systems?

I suspect that high-level mimicry of human intelligence may turn out to be as unreal as the high-level mimicry seen in an animatron. It looks real, with lots of clever programming, but lacks something fundamental needed to duplicate human or even animal intelligence. All of the intelligence is a product of the programmer being part of the developmental loop - and it forever requires the programmer in that loop.

-- JC

Reply to
JGCASEY

Yeah, for sure. Do I really need another kid to train to behave correctly? :)

I think the foundation of human intelligence, however, is strong general-purpose real-time reinforcement learning. I think once we master this technology to the level that would allow us to build a human-like machine, we will also be able to use it for an infinite number of sub-problems in practical engineering applications. RL techniques are already used, for example, in learning algorithms that auto-tune PID controller parameters. These sorts of applications will only grow as our technologies advance.
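To make the PID example concrete, here is a minimal sketch of critic-driven gain tuning - using simple stochastic hill-climbing as a stand-in for a full RL algorithm, with a made-up first-order plant model and a made-up reward:

```python
import random

def simulate(kp, ki, kd, setpoint=1.0, steps=200, dt=0.05):
    """Run a PID loop against a toy first-order plant; return negative
    total absolute error as the reward (higher is better)."""
    y, integral, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + u)        # toy plant: dy/dt = -y + u
        cost += abs(err) * dt
        prev_err = err
    return -cost

# Perturb the gain vector at random; the "critic" keeps a trial only if
# the reward improves.
gains = [1.0, 0.0, 0.0]
best = simulate(*gains)
for episode in range(500):
    trial = [g + random.gauss(0, 0.1) for g in gains]
    r = simulate(*trial)
    if r > best:
        gains, best = trial, r

print("learned PID gains:", gains, "reward:", best)
```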

As an example, I just saw an ad for a ping-pong robot. It sounded like they had created a machine that could actually play ping-pong. But sadly, it was just a ball-pitching machine that could shoot a steady stream of balls for you to hit back. I suspect it had some ability to sense when you had hit the ball back (maybe even detecting the vibration caused by the bounce on the table) and would shoot the next ball with the same timing you would expect if a person had returned the shot. But it wasn't smart enough to hit the ball with a paddle.

This is the type of problem that would take a large amount of custom code and hardware, but which I believe would work far better with a strong learning controller driven by a fairly simple critic. It would take a long time to train, but that training would happen as part of the product design work, and once trained, it could easily be mass produced - and yet it would retain its learning ability, so that it could constantly adapt to different conditions and different players.

I think there are an infinite number of complex control problems (many done by humans now) where the foundation of intelligence could be applied to limited domain problems like the one above.

A big difference between humans, with their RL skills, and machines is that to get a human to do something for you, you have to trade it for something the human needs - you have to reward it with something of value to the human: money, food, etc. Machines built to perform a task are wired with a critic that rewards them for doing a good job. The work they were built to do is their reward. The ping-pong machine loves to play ping-pong, and nothing else. So you don't have to give it breaks, or scooby snacks, to keep it motivated. Allowing it to play ping-pong is its reward.

This is where I think AI technology will work well for solving real engineering problems without being overkill.

A friend suggested to me that someone should build a nano robot that could travel around and capture insects, to prevent the need for using pesticides. That's very much in line with the ant-bot type idea.

Do ants have any ability to learn? Anyone know? I assume they have very little learning ability if any. But nature can be surprising. Knowing very little about ants, I would guess they are mostly just very complex reactive machines.

I think trying to create behavior as complex as an ant's or a bee's is likely best done with the help of genetic algorithms that slowly evolve a fixed but very complex controller.
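A minimal sketch of that GA idea follows - the genome length, the crossover/mutation operators, and especially the fitness function are all stand-ins; a real run would score each genome by simulating the controller on the robot in its environment:

```python
import random

GENOME_LEN = 16        # weights of a fixed-topology reactive controller
POP, GENS, MUT = 30, 100, 0.1

def fitness(genome):
    """Stand-in fitness: rewards genomes whose weights approach some
    target mapping. Replace with a simulation score in a real run."""
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome):
    return [g + random.gauss(0, MUT) if random.random() < 0.2 else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP)]
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 4]                      # keep the best quarter
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

print("best fitness:", fitness(max(pop, key=fitness)))
```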

Collecting trash might be another interesting application for smarter robots - something that roamed public areas, picked up trash, and put it in nearby trash cans. Could you build a bot, for example, that could clean up the trash at a movie theater, or what a crowd would leave after a concert or sports event? These are applications that in theory seem fairly simple, but that require a large amount of intelligence that's very hard to hand-program into a machine - and even harder to expect to work in environments it's never been tested in. Adaptive learning machines, however, should be able not only to learn complex tasks like these, but to adapt to different environments and, for example, discover new techniques for doing a better job on their own.

Reply to
Curt Welch

Ah! I get the problem between us now. It is simply one of context. Since I'd opened with Brooks' behavior-based robots as a premise, I've been speaking according to Brooks' rules. You are speaking from outside his rules. Otherwise, I think we are largely in agreement.

Brooks requires inputs to be horizontally applied to the behaviors. Within the behaviors there can be state machines. But the idea of something outside a behavior which can "read and record the activity of the inputs for some amount of time, and then produce a mapping function" is completely alien in a Brooks architecture. In fact, anything that is a mapping function of any kind is completely alien in a Brooks architecture.

Now for my part, I get state. I love state and the use of state. You'd be hard pressed to find someone more a fan of state as a concept than I am. Brooks also admits state - explicitly, btw. However, Brooks would likely take exception to the "we can't do it", because his book _Cambrian Intelligence_ is practically one persistently loud diatribe, all the way through, of "Oh Yes We Can! - and this robot experiment proves it". I have never seen an instance where Brooks has said something cannot be done with anything but reactive responses. While Jones strongly favors reactive "servo" responses, he acknowledges you may have to use state (non-servo ballistic maneuvers) sometimes for escapes. I can't remember anything like that kind of concession in Brooks.

Yes, by recording past transitions we can make more intelligent decisions. We can agree on that. However, it is never necessary to record all past history. One of my own definitions of "state information" is that it is the minimum amount of information that must be captured to adequately permit all required future decisions.

Randy

formatting link

Reply to
RMDumse

The implication in the previous messages was that BBR is essentially the lowest level, and what needs filling in is the levels in-between that and the symbolic levels at the top-end. Much higher than DNA and proteins.

There are 2 strains of people on this thread: the guys who came over from c.a.p., with their grandiose ideas about solutions to general AI [hello, Curt :)], and the guys who are currently building robots which are essentially at the BBR level. A wide gulf there. I put myself in the 2nd group. I'm not expecting to add some kind of GOFAI-type processor - or a direct link to Cyc, or an unstructured neural net, etc - to my little robot, and expect much. And neither are Randy or David, I would imagine.

Rather, what we need to look at is the "next" step up from the bottom: better sensors and perceptual processing, plus memory and learning, and simple goal-planning [cf. chap. 6 of Arkin's book]. Our robots are little better than blind, deaf, and dumb, like Tommy of the Who [now Gordon's got me doing it too :)].

The first thing is much better sensors and perceptual processing, so the robot isn't stuck forever in Helen Keller's internal world [see Curt and Joe's comments]. But this is such a radical step up in CPU requirements that it's a problem right now too. Most of our little bots don't have much more power than PICs or AVRs, and maybe a DSP chip here or there.

I realize these sorts of comments about stupid little robots always let Marvin down, but that's where OUR [this forum's] world currently is, I think.

Reply to
dan michaels

I am so pleased to see Marvin grace us with his presence again; I would hate to think we were letting him down with our interests. Personally, I think I'm on to something enabling and transcendental concerning another level of understanding above the plateau BBR has come to rest upon. But I might be self-aggrandizing.

Perhaps I should ask Marvin himself. Would the separation of layered behaviors from emergent intelligence - with the realization that the behaviors themselves are devoid of intelligence, and that the sequencing of behaviors is instead the source of intelligence - not be a significant clarifying point for AI? Or, in your opinion, is this all beneath a professional level of curiosity or research?

Randy

formatting link

Reply to
RMDumse

Thank you JC for that comment. It really fits beautifully, and gives me a platform to argue my suggestion that behaviors are only the smallest atomic components of what is now too broadly called "behavior" - which also includes combinations of behaviors, and those combinations ought to have a different name.

I have suggested that a new use of "behavior" is to mean "any static application of an output value (constant or linear variable) to an output device". Behaviors are reactive responses, and hold no inherent intelligence.

Again, using the tone-and-tune analogy and taking an ethological approach:

Consider a bird. What must a bird do to be heard in the world? Well, if the bird's call is noise, it will not be heard, it cannot be identified, and the bird is without a very important survival tool for its species.

So what is the very simplest sound a bird could possibly make to distinguish itself from noise, which would be useless? Well, the opposite of noise is a clear and pure tone. So any bird that can make a tone, any tone, is hugely more "situated" than a bird that is silent, or one that makes noise. Even so, the bird is still not well identified by making a tone. Why? There are many unintelligent and non-living things that can make a simple pure tone. For instance, if wind blows over a hollow tube, a resonance will be set up and a steady tone ensues.

Like the silence in the OM chant, the bird can further show its lifelike ability by establishing a constant cycle (the frequency of its tone) and then breaking that tone.

Now we reach the point. A bird that can do nothing but cheep cheep cheep establishes identity, and a higher-than-inanimate intelligence, this way. As other features are added - additional tones, additional silences, modifiers of the notes (trills, slides, etc.) - the bird shows more and more identity, and increases the usefulness of this survival skill.

Closing the point: a tone alone is not enough to identify a sound as being of animate or inanimate origin. There is no intelligence in a tone. However, a sequencing of tones, a cycling of cycles - that shows intelligence. The intelligence arises not from the tone, but from the melody (tune).

Likewise, there is no intelligence represented in atomic behaviors. It is the sequencing of atomic behaviors that causes the observer to suspect and identify intelligence.

Looking to robots: when a robot can do one behavior, say "cruise", and drive straight ahead, we are not impressed. It has motion, but it is not very intelligent (in fact, we would likely say it has no intelligence at all).

Now a robot that has at least two behaviors, and sequences them, seems much more intelligent than the former. Say a robot has cruise and escape. This robot never runs into something and just stays there slipping its tires. This robot escapes.

Now a very interesting extension is that a robot that does random escapes between cruises, even when it isn't trapped, looks more intelligent than one with only cruise. A robot that does escapes when trapped looks even more intelligent.

Therefore I would say I have given a strong argument for the case that there is no intelligence in atomic behaviors in and of themselves. There is intelligence in the sequencing of behaviors; whether they are appropriate or not, any sequencing of behaviors looks more intelligent than an atomic behavior which remains constant.
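To make the point concrete, here's a minimal sketch of such a two-behavior robot - the motor and bumper interfaces are hypothetical stubs; note that whatever "intelligence" we observe lives in the sequencing loop at the bottom, not in either atomic behavior:

```python
import random, time

# Hypothetical hardware stubs - stand-ins for whatever motor/bumper
# interface a real bot exposes.
def drive(left, right): print(f"motors L={left} R={right}")
def bumped(): return random.random() < 0.1   # pretend bumper hit

def cruise():                 # atomic behavior 1: constant output
    drive(1.0, 1.0)

def escape():                 # atomic behavior 2: back up, then turn
    drive(-1.0, -1.0)
    time.sleep(0.2)
    drive(-1.0, 1.0)
    time.sleep(0.2)

# The sequencing: escape when trapped, or occasionally at random.
for _ in range(50):
    if bumped() or random.random() < 0.05:
        escape()
    else:
        cruise()
    time.sleep(0.1)
```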

Randy

formatting link

Reply to
RMDumse

I've not read enough about BBR to know all the typical details yet, but from the little I've picked up so far, I think it's not quite low enough. The problem is that we (as humans with brains) tend to try to classify behavior into buckets. We look at an animal and say things like, "oh, that's wall-following behavior", or "that's goal seeking", or "that's light avoidance". The lowest level at which we tend to informally understand behavior is the highest level at which our brain is able to detect recurring patterns.

When we try to build robots, we tend to think first at similar levels: how, for example, might we create wall-following behavior, or obstacle avoidance, in a robot? And I suspect (but don't know, because of my lack of reading on the subject) that a lot of BBR work has systems for creating similar-level behaviors. For example, our robot hits a wall and keeps spinning its wheels, and we decide we need to add wall avoidance, which combines some combination of sensory triggers and behaviors to either prevent it from hitting the wall in the first place (turn before getting too close), or adjust after a collision (back up and turn). But for the most part, only one behavior is active at a time, and there is some logic for selecting which behavior to use in the current situation - which might be a priority-interrupt type of logic (drive straight is the default lowest-priority behavior until it's interrupted by a higher-priority sensory condition for turning - like there is a wall near us). A sketch of that arbitration logic follows below.
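Here's a minimal sketch of that priority-interrupt arbitration - the sensor fields, thresholds, and behavior names are all hypothetical:

```python
# Priority-interrupt behavior arbitration: the highest-priority behavior
# whose trigger fires wins; drive_straight is the always-true default.
def collided(sensors):    return sensors["bumper"]
def wall_close(sensors):  return sensors["range"] < 0.3

def arbitrate(sensors):
    table = [                      # ordered highest priority first
        (collided,        "back_up_and_turn"),
        (wall_close,      "turn_away"),
        (lambda s: True,  "drive_straight"),
    ]
    for trigger, behavior in table:
        if trigger(sensors):
            return behavior

print(arbitrate({"range": 0.8, "bumper": False}))  # -> drive_straight
print(arbitrate({"range": 0.2, "bumper": False}))  # -> turn_away
print(arbitrate({"range": 0.2, "bumper": True}))   # -> back_up_and_turn
```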

BBR (as far as I can tell) is the attempt to move to using simpler behaviors with simpler triggers, so that the system will produce the correct combination of behaviors to fit a wide range of environmental conditions - so that new sequences, never thought about by the designer, will emerge and just naturally perform a useful function.

I believe what people end up coding by hand in these systems is typically not low enough, however. The problem needs to be factored down even further, to even simpler behaviors. The reason it is not, I believe, is simply that it's too hard for us to do this by hand: it moves the complexity of the behavior down into a set of more complex trigger logic. And coding the complex logic to trigger micro behaviors is something that's not at all intuitive to a human programmer. It's just too hard for us to understand.

But for the same reason BBR seems to have the advantage of being more adaptive to different environments (environments the programmer might not have thought about), I think factoring the problem down to even simpler behaviors is likely to only improve the adaptability and the number of emergent behaviors that spontaneously arise from the machine.

The level of behavior I'm talking about is more like "turn right wheel 1 deg clockwise". This is consistent with the level at which I was trying to attack the problem with my pulse sorting network (which Dan and others from c.a.p. know about). But it would be very hard to hand-code a complex set of conditional expressions to specify, for a typical robot, when it should "turn right wheel 1 deg clockwise" - which is why this level of micro behavior is not used much. But with the correct learning system at work evolving the complex logic tests used to select micro behaviors, I think you would get excellent results (and ultimately, you could use this type of low-level micro-behavior selection system as the foundation for a system that produced human-level behaviors).

But keep in mind, I came here not to continue the discussions we have in c.a.p., but because I've been building and experimenting with robots to see if I can apply some of those grandiose ideas to real-world problems. So I want to know more about what has worked in these real-world applications - and I'm learning, as people teach me more about ideas like BBR. :)

But if we could produce a good generic RL-trained decision tree (which is basically what I was trying to create with my "pulse sorting" networks) which could be plugged in to drive micro behaviors in the simplest of bots, I think it could be of real value even for these very simple bots. I've been playing with the Vex hardware, which has only a small PIC processor, but that's more than enough to test some of the basic RL ideas.
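As a sketch of the sort of thing that fits on a small processor, here's minimal tabular Q-learning driving made-up micro behaviors - the state/action spaces and the critic (the reward function) are invented for illustration; a real bot would execute the micro behavior and score progress toward its goal:

```python
import random

# Toy spaces: 4 coarse sensor states x 3 micro behaviors
# ("left wheel +1 deg", "right wheel +1 deg", "both wheels +1 deg").
N_STATES, N_ACTIONS = 4, 3
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Stand-in for the robot + critic: returns (reward, next_state)."""
    reward = 1.0 if action == state % N_ACTIONS else 0.0
    return reward, random.randrange(N_STATES)

state = 0
for _ in range(5000):
    if random.random() < EPS:                       # explore
        action = random.randrange(N_ACTIONS)
    else:                                           # exploit best known
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    reward, nxt = step(state, action)
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt

print(Q)   # each row's max should settle on that state's rewarded action
```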

As you well know, I think more can be done with general solutions than you do. I think if we solve the lower level better than we have now, it will make filling in those higher levels much easier.

The problem with hand-coding instead of learning is that you have to hand-code all the levels - which means you have to conceptualize how to factor the problem into levels and sub-levels, and then fill in each level. Strong learning systems should do that factoring and filling in automatically. With a strong learning system to work with, the problem for the programmer becomes one of picking the correct high-level goals and motivations. If you can define them correctly, the learning system will fill in the implementation details for you.

Well, we don't really have a problem there, do we? It's the next step that has all the problems...

Right. There are plenty of cheap, high-quality, high-bandwidth sensors we can put on robots (vision, sound, vibration sensors), but the problem is that we don't have good processing systems to deal with complex high-bandwidth temporal data sources like these. Think about how useful sound is, for example, if you have a brain like ours to process it. You could add low-cost directional microphones and A/D converters to any of our cheap robots, and yet it's not done (except in the very special and limited application of sonar distance sensors). It's because extracting useful trigger data from an N-channel microphone system is beyond what we easily know how to do.

If we were driving the bot, we would learn to hear the sound of the bot hitting a wall, of the wheels spinning on the ground, and of the motors straining. These are the types of clues buried in an audio data stream that you can't expect a human to easily program against to trigger the switch to a different behavior (we need to back up and turn left because our wheels sound like they are slipping on this surface). This is the type of thing a learning system needs to be able to find and extract from complex data streams. It needs to recognize the correlations between a sound and the fact that a goal is not being reached.

And I think a big part of why current learning systems don't do this very well is that we don't yet have the right data processing algorithms - which I believe you think of as a perception problem, but I tend to see more as a behavior problem. Either way, we don't have good enough systems for finding and extracting useful information from complex sensory data streams - which I think agrees with the point you are making. I think this ability is the number one most important ability to improve on.

I think if done correctly, it won't be as radical as it seems to be. And though many robots use very cheap low-power CPUs, we have plenty of high-power CPUs that are still affordable. And I think better algorithms for the automatic processing of complex data can be of use even for simple sensors with much smaller processors. For example, we can create fairly low-bandwidth audio signals that any human could make very good use of (I'm thinking even less than phone bandwidth). Or just tactile vibration sensors, which need not be more than single-bit sensors. But the problem is always not knowing what to do with the data, because it's too complex for us to understand how to transform that data into useful behavior triggers.

I think stupid little robots are likely to be an important testing ground for the ideas that will lead us to the high level processing levels Marvin (and many of us) want to create.

Reply to
Curt Welch

Acknowledged.

Wow, I keep a separate section here for books on state machines, and I've had a hard time filling an "apple crate" full of them. I count 14 at the moment. I subscribed to FSA-research to look for activity, and have seen less than two dozen posts (other than my own trying to start conversations) in two years.

And here you find more references than I can hope to read from your CiteSeer search.

Thanks.

Reply to
Randy M. Dumse

Yes, the argument shifted. No slight intended to you, though. Just an attempt at a stair-step argument, i.e. 1) what's in it is limited, and we agree on that; 2) the next step was to see if I could get concurrence that what can be put in it is also limited (given the hardware suite remains the same).

Again returning to premises: I have noticed limited inputs and outputs on our robots. I have hypothesized that maybe that means there are limited combinations and permutations of inputs to outputs. I have not yet found an argument that convinces me one way or the other. I suspect there is a limited set. I also suspect many arguments will be presented suggesting infinite sets, but when examined carefully, it will be found the sets have non-unique members, meaning they can be reduced to limited sets. My thinking is that seemingly infinite sets can be reduced by moving the analog components of outputs into a variable, while the method of calculation of the output remains static.

Now, I do favor the idea of a limited set. If there is a limited set, then we can be sure we have programmed every possible behavior into our robot. Likewise, we can possibly identify every possible transition from one behavior to another. We can know if we have addressed all the robot can do. We can know if we have made the robot as intelligent as possible. We might even be able to see every possible emergent property.

Wouldn't that be a significant goal?

So how can I imagine this might be possible? Let's check the low-end limit. Look at Braitenberg 101. His very first vehicle has one photosensor and one motor. Can we construct a finite set of behaviors for this robot? I say yes. The robot can: 1) drive the motor toward the light with some gain function, 2) drive away from the light with some gain function, 3) drive the motor at some speed function ignoring the light, or 4) not drive the motor. That's it. Any other output behavior can be recast as a combination of the above outputs.

In fact, to demonstrate just that, notice #4 is really just a special case of #3, where the speed variable is zeroed (depending on how the hardware details are done, but as far as externally observable behavior goes, the point stands).
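A sketch of the enumeration, with the gain and speed values left as free parameters - the conjecture is about the finite set of sensor-to-motor mappings, not about the parameter values, and the inverse coupling used for "away" is just one illustrative choice:

```python
# Finite behavior set for a one-photosensor, one-motor Braitenberg vehicle.
# light is normalized to [0, 1]; each function maps light -> motor command.
def toward_light(light, gain=1.0):     return gain * light           # behavior 1
def away_from_light(light, gain=1.0):  return gain * (1.0 - light)   # behavior 2
def ignore_light(light, speed=0.5):    return speed                  # behavior 3
                                       # speed=0 gives behavior 4 as a special case

BEHAVIORS = [toward_light, away_from_light, ignore_light]
for b in BEHAVIORS:
    print(b.__name__, "-> motor =", b(light=0.8))
```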

Now, to my late-night-after-a-hard-week thinking, I can't come up with any other possible behaviors for this robot. I can come up with ideas of random behaviors, but any I can think of can be reduced to sequences of the set listed.

So I am still very fond of the conjecture: limited inputs and limited outputs means all possible atomic behaviors can be accounted for.

Any display of "emergent intelligence" will come not from the behaviors, but from the sequencing of them.

Pretty strong argument above, as it seems to me just now. No counter-argument comes to mind. Hummm... that seems an important enough thought to stop this post here, take up your wonderful examples in a separate post later, and see if anyone has a counter-argument.

Reply to
Randy M. Dumse

CiteSeer is quite overwhelming for the number of papers available for downloading in any given topic area. Every time I click on one paper, it opens up several avenues for more, more, more. Talk about your tangled web. Right now I'm downloading some papers on emergence and self-organization, which is my other main area of interest, after visual perception.

formatting link
Takes a minute or so to get a paper, and up to an hour to read. Got several 100s on the HD. Maybe will get to them after I retire ;-).

Reply to
dan michaels


I don't know what sort of argument you would find convincing, but for me your hypothesis does not comport with observation. It seems to me that the musical analogy, with which you apparently agree, is the most apt. To paraphrase, in your own words:

"I have noticed limited inputs and outputs on my violin. I have hypothesized, maybe that means there are limited combinations and permutations of inputs to outputs. I have not yet found an argument that convinces you one way or the other."

So it is your intuition that we can "identify every possible transition from one note to another. We can know if we have addressed all the violin can do."

That is certainly not my intuition. Wonder what Bach would think of that statement?

Why does that sound silly in reference to the violin but acceptable in reference to a different sort of instrument, the robot? I submit there is no difference. I have not seen anything in your musings to convince me otherwise. The attempt seems to me to be to put some sort of mathematical restraints on human creativity. That strikes me as a meaningless endeavor.

Now, if in fact you believe that you CAN put a limit on the musical possibilities of the violin by applying some sort of formula based on counting the violin's limited "inputs and outputs" then we evidently have fundamentally different understandings of human creativity, and should let it go at that.

We see this on slashdot every few years. Someone has "analyzed" pop tunes and written software that will compose #1 hits without human intervention. It never happens. As annoying as it is to the engineers, the realm of possibilities for human creativity is not so easily constrained and mimicked.

So, counting input and output resources as a means of restricting what a clever human might do with those resources seems a silly exercise to me.

(However, if you do get that #1 Hit-making software to work, I'd like to know! Let's make a little Top-40 money, pay for some of this esoteric robot research... ;)

best dpa

Reply to
dpa

And that's why late-night-after-a-hard-week thinking should probably be set aside for morning. Shortly after posting I thought...

Oh no, Curt's going to filet my argument. I've overlooked state information in the input stream. Flash the light two times, and the robot does some trick. Flash it three times and it plays dead. And so on.

Just a few moments later,

Oh no, dpa's going to point out there aren't three behaviors at all, only one, because if you use a two-variable algorithm, you can use one variable with gain to produce all the photovore-ish outputs, and the other variable with gain to produce all the photo-insensitive outputs. So from his excellent example of state reduction through non-state variables, I am hoisted by my own premise of state reduction.

I don't know. I'm sure there's a pony in my argument somewhere. I'll keep looking.

Reply to
Randy M. Dumse

Right. I've not studied Brooks' work; I've only scanned over some of it. So I'm talking general theory, and not specifically what Brooks's systems might suggest.

No, I think his architecture is a mapping function. Any system that takes data as input and produces some result dependent on the data is a mapping function to me.

So, are you saying the system can change state in response to any input at the same time it's producing some output? If so, then that state change _IS_ short term memory which is recording some historic aspect of the inputs. If there are no limits to what the state changes can include, then his system does in fact allow a complete recording of state for the last N inputs.

As a simple example to show you what I mean, assume you have a single binary input as the sensory data, and you want the system to record the last 3 inputs. How many states does that take? 8 (aka 2^3). So if the machine has 8 internal states, then every state can represent one unique pattern of the last three inputs. You then specify the state changes as a function of the current state and the current input, to accurately reflect the last 3 inputs at all times. That gives you a system with a short-term memory that records the last three inputs.
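A sketch of that 8-state machine - the state literally is the last three binary inputs, and the transition function just shifts each new input in and drops the oldest:

```python
N = 3                                  # remember last N inputs -> 2^N states

def next_state(state, bit):
    """state encodes the last N input bits; shift in the new bit."""
    return ((state << 1) | bit) & ((1 << N) - 1)

state = 0
for bit in [1, 0, 1, 1, 0, 0, 1]:
    state = next_state(state, bit)
    print(f"input={bit} state={state:0{N}b}")  # state = last 3 inputs, always
```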

If you want the last 1000, you just need a few (ha) more states. :)

Is that consistent with his use of states?

If he allows state that changes as a function of the current state and the current inputs, then he's talking about the exact same thing I am just using different names.

For sure. And in most real-world cases it's not even possible for any reasonable life span, because there's just too much data. The state, however, tends to summarize a large amount of past inputs - by calculating things like an average, for example.

Yes, that's always key. As a matter of fact, many of the techniques I've played with used that same assumption in a slightly different direction. If you have a learning machine with a finite amount of state memory (aka short-term memory), it must maximize the amount of potentially useful sensory data it stores in that memory, in order to maximize the odds of sorting out the useful state information from the non-useful state information as it learns. A key part of that process is the removal of duplicate information, and the equalization of the amount of information stored in each state variable. You don't want to store what amounts to the same information 10 times because some fact about the environment came to you through 10 different sensors at the same time. This type of thinking is important in understanding how you must (or can) transform and combine sensory data before you try to use it to control behavior.
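As a sketch of the duplicate-removal idea - using pairwise correlation between channels as the redundancy test, with synthetic data and an arbitrary 0.95 threshold standing in for a real criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.normal(size=500)
sensors = np.stack([
    base,                                            # channel 0
    base * 2.0 + rng.normal(scale=0.01, size=500),   # near-duplicate of 0
    rng.normal(size=500),                            # independent channel
])

kept = []
for i, ch in enumerate(sensors):
    # Keep a channel only if it isn't highly correlated with one already kept.
    if all(abs(np.corrcoef(ch, sensors[j])[0, 1]) < 0.95 for j in kept):
        kept.append(i)

print("channels kept:", kept)   # -> [0, 2]: the duplicate is dropped
```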

Reply to
Curt Welch
