What is a behavior?

Jones's example on pg 49 of a "primitive behavior" includes a trigger. He says,
"Primitive behaviors, as we use the term in behavior-based robotics,
have two parts:
"1. A control component that transforms sensory information into actuator commands.
"2. A trigger component that determines when it is appropriate for the control component to act."
I don't think a behavior should have a trigger in it at all. I think a behavior should be defined to be an output action, usually based on a transform of sensory information.
Thoughts?
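To make the question concrete, Jones's two-part definition might be sketched like this (an illustrative sketch only, not code from the book; the class and field names are my own invention):

```python
# Hypothetical sketch of Jones's two-part "primitive behavior":
# a trigger component plus a control component.

class PrimitiveBehavior:
    """A primitive behavior per Jones: trigger + control component."""

    def __init__(self, trigger, control):
        self.trigger = trigger    # sensors -> bool: is it appropriate to act?
        self.control = control    # sensors -> actuator command

    def act(self, sensors):
        """Return an actuator command if triggered, else None."""
        if self.trigger(sensors):
            return self.control(sensors)
        return None

# Example: an "escape" behavior whose trigger is a bump sensor.
escape = PrimitiveBehavior(
    trigger=lambda s: s["bump"],                  # act only when bumped
    control=lambda s: ("reverse", "turn_left"),   # back up and turn
)
```

The question in this thread is whether `trigger` belongs inside the object at all, or whether the behavior is just `control`.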
Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

Then it's not a behavior because even outside robotics, behaviors are the REACTION to some stimulus.
To that end, there really is no such thing as ANY object in this universe that merely acts -- has just an output. Everything has a cause and effect. If you separate cause from effect then you have defined some other system that should have its own term, but it's not behavior-based robotics.
-- Gordon
Gordon McComb wrote:

I disagree, and offer this argument in rebuttal.
On page 52 Jones lists "Cruise" as a primitive behavior. He readily admits it has no trigger, or that if it did, it would be constantly stuck on. So I think he is self-contradictory, saying all primitive behaviors have triggers, then offering one without a trigger.
I did say I think a behavior should be defined to be an output action, usually based on a transform of sensory information.
The reason I said "usually" about being a transform of input stimuli is that the other thing Cruise doesn't have is anything to react to. It is simply a constant applied to the outputs with no regard to any input. It may terminate, but subsumption terminates it, while Cruise itself continues to suggest a constant output.
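Expressed in Jones's own trigger-plus-control template, Cruise reduces to a degenerate case (a sketch with made-up names, just to show the shape of the argument):

```python
# Illustrative sketch: "Cruise" forced into the trigger + control
# template. The trigger is permanently on, and the control component
# ignores its sensory input entirely.

def cruise_trigger(sensors):
    return True  # "constantly stuck on" -- it never qualifies anything

def cruise_control(sensors):
    return ("forward", "forward")  # constant output; sensors unused
```

Both parts are vacuous: the trigger gates nothing, and the transform transforms nothing.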
Behaviors can be reactions to stimuli. But that doesn't mean the behavior IS the reaction to stimuli. For instance, if one stimulus can evoke two behaviors at different times and under different circumstances, the behavior has to be considered separate from the stimulus that caused it to be invoked. Also, if two stimuli can evoke a single behavior, then the behavior must in some sense be independent of the trigger that evoked it.
(If I stub my toe, or if I look at my property taxes, I get angry. The triggers are independent of the anger behavior, but the anger behavior, being hopping mad with release of adrenaline, increased pulse, reddening of the face, tightening of the lip, etc., is identical in both cases.)
The trigger is not the behavior, and not all behaviors have signal inputs.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
My point, Randy, is that you are taking an established field of endeavor, with its own definitions, and trying to rewrite it. You're insisting a "behavior" is something that can be quantified simply by mechanical means (an input or output), rather than an observable whole event, as originally described by Brooks. I think you're missing something very basic, and can only recommend a more thorough reading back to Brooks. Brooks is pretty clear that a behavior must have an input. (Even his "wander" behavior, which is undefined by its nature, is not inputless.)
Your statement on inputs, "But that doesn't mean the behavior IS the reaction to stimuli," is a non sequitur. A behavior is a reaction. A reaction is to some action -- if you don't like that definition, take it up with Sir Isaac. Synonyms for action: stimulus, trigger, input. True enough, some behaviors are caused by multiple stimuli, but that still means an input is required.
I think it's okay to redefine an approach, but call your system something else, rather than trying to turn around existing and accepted methodology. The problem of people redefining the same terms to mean different things is part of the reason AI keeps getting stuck. If people can't agree on the terms, they certainly cannot agree on anything else.
So, might I suggest for input-less actions we start calling the system Randysian. Anything but behavior-based.
As an aside, anger is not a behavior, it is an emotion. This is a common mistake in Brooksian BBR. A behavior that reflects anger would be smashing your fist on the table, or punching your CPA in the face and telling him he's screwed up again. By itself, anger and other emotions can have no observable output, and in Brooksian behavioralism, without being able to observe it, there is no behavior. Only what we can OBSERVE is relevant. Otherwise it gets into issues of sapience that hopelessly complicate matters.
-- Gordon
RMDumse wrote:

Gordon McComb wrote:

But that is more or less the problem, where are those established definitions?
Brooks describes "behaviors" as "task accomplishing" on pg 4 of Cambrian Intelligence. Other than that, there is no solid definition. I've quoted Jones's definition in the opening post, and made comments about how I think Jones then contradicts his own basic definition. Arkin uses yet different definitions, mainly taken from biological inspirations, and aptly cautions that the word means different things in different fields. Robin Murphy in "Introduction to AI Robotics" probably most explicitly evokes the biological origins of behavior, errs on the side of calling the method "reactive", but really doesn't offer a clear definition either.
Point is, I'd love to know the definition from the field of endeavor, to know whether I am reinforcing the definitions, extending them, or replacing them. How?
Here is this word: Behavior. No one gives a consistent definition. We might as well be talking about "thingamajigs". No one knows what a "thingamajig" is, but they are quite ready to speak up and tell you that what you are doing isn't one. Okay, their comments are ones of opinion, but where is the standard against which we can test the opinion for clarity?
My whole point in starting this thread was to see if there was a solid definition for behavior as applicable to robotics and AI. If there isn't, how can it be that I'm taking something established and trying to add my own definitions?
"The problem of people redefining the same terms to mean different things is part of the reason AI keeps getting stuck. If people can't agree on the terms, they certainly cannot agree on anything else." I very much agree!
I am pointing at the interface between behavior and arbitration and saying, Look Here! Here is one of those places where the terms are really poorly defined. I think this is a very important reason AI is stuck. AI lies in the decisions that activate behavior (behavior meaning in this case primitive action behavior, because I have no better single-word definition for this version of behavior). The trigger being in the behavior part confuses us into thinking behaviors are intelligent. They might not be. It's the triggers that are doing the trick. Until we get our heads straightened out about what's behavior and what's trigger, Behavior-Based AI will remain stuck, because it isn't the behavior part that has the AI, it's a misplaced subsumption part that contains the AI.
If I may take a departure, I am reminded of the story of Mme. Curie. She was analyzing pitchblende and discovered, beside the uranium she was familiar with, more radiation than she could account for. So she began separating away various compounds (pitchblende is a mixture of some 30 elements). Over several years of unceasing labour the Curies refined several tons of pitchblende, progressively concentrating the radioactive components and eventually isolating the chloride salts. In the final separation, the liquid was allowed to evaporate, and when it did, she looked into the container and saw nothing. It appeared the radiation must have escaped, because there was nothing visible left, and yet so powerful a source of radiation couldn't come from... nothing. Where had it gone? When the lab was dark, it became clear something was left. She had extracted radium and polonium, both new elements. There was so little of it, it was all but otherwise undetectable, but it was so powerful, it lit the darkness.
So if you will forgive me characterizing my own intentions in this thread, I am not trying to redefine Behavior-Based Robotics, as you suppose. If anything I am trying to separate the components into those which have intelligence and those which do not, because I think the part that is intelligent is exceedingly fragile and rare, and mere traces of it have been found. Refinement is necessary to extract the essence of the intelligence, and exceeding care is necessary to find it among the grosser elements of the concoctions we now observe at large, with less focused scrutiny on what is active in the intelligence and what is not.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

Things change state over time. That is all I see as a "behavior".
Some "behaviours" (changes in state) are internal and perhaps not directly observable, as in a biological brain or inside a running program. All we have is part of the behavior: the bit we see at the output, which may or may not be related to the other bit we can see, the input. When we see a relationship with an input we might say the behavior was triggered by the input.
When the same input can produce two or more possible outputs we have to assume different internal states. Emotions are internal states that modulate the relationship between inputs and outputs.
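JC's point about internal state modulating the input-to-output mapping can be sketched in a few lines (the states and stimuli here are invented for illustration):

```python
# Sketch: the same stimulus produces different outputs depending on an
# internal state (the "emotion"), so the internal state must be assumed
# even though only the input and output are observable.

def react(stimulus, mood):
    """Map a stimulus to an output, modulated by internal state."""
    if stimulus == "loud_noise":
        return "investigate" if mood == "calm" else "flee"
    return "idle"
```

Watching only `stimulus` and the return value, an observer who sees `loud_noise` yield two different reactions has to infer that something like `mood` exists inside.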
Intelligent behavior is behavior that we all decide is intelligent, just as we decide if this is jazz, rap, country, hip hop, pop, or whatever kind of classification we might want to give it.
IMHO :)
-- JC
JGCASEY wrote:

To observe and identify a behavior, and to be able to design a behavior, may or may not be quite the same thing.
Would you say, if you could observe all possible inputs and all possible output changes, that you knew you had observed all possible behaviors? (I think this question begs the question of the interface between induction and deduction, so don't trouble yourself if you don't want to answer.)

Yes, but it is a very loose definition of state. This is the system engineer's definition of state, meaning, roughly, a snapshot of all the values in the variables and settings at a given moment.
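That system-engineer's "state" can be sketched as nothing more than a snapshot of every variable at one instant (all field names invented here for illustration):

```python
# Sketch: "state" as a snapshot of all variables and settings at time t.

def snapshot(t, sensors, internals, outputs):
    """Bundle everything the system holds at time t into one state dict."""
    return {"t": t, **sensors, **internals, **outputs}

state = snapshot(
    t=12.75,
    sensors={"bump": False, "light_level": 0.82},
    internals={"mood": "calm"},          # an emotion is just more state
    outputs={"motor_left": 0.4, "motor_right": 0.4},
)
```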
So I'm not concerned if the trigger is based on a state, such as an emotion (anger, so I squint), or a current value (light too bright, so I squint). What I notice is that the trigger is not a part of the reaction, if two different triggers cause the same reaction.
When the same input produces two different outputs depending on the timing of the input, I am very much in agreement that something has modified the reaction. Again, it's not so much a concern whether it is state as in memory, or state as in level; the modification again shows the trigger is not part of the behavior, because the behavior chosen has been modified by something else between the trigger and which behavior is output.
Sounds like we are probably in agreement here.

This is very strongly resonant with Brooks, who says intelligence is in the eye of the beholder. But this is again from an observational perspective. Can we make a design and conclude, "I don't care who you are, that there's gotta be smart"? ;)
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

I don't know how you could determine that you had observed all possible inputs and all possible outputs. Also keep in mind an "input" can be a temporal pattern.

I am not sure I really get what you are trying to say. You could break the sequence of states into behavioral chunks, in which the trigger was one of those chunks, but where a chunk begins and ends is arbitrary, or decided out of practical necessity. Life is just one ongoing "behavior", and the word itself, I think, can become very debased as an explanation of anything.
The system engineer's definition of state is about the only one I can imagine, so if you have something more esoteric in mind it is out of my league.

This moves into the realm of philosophy. My interest in robotics is how can we build intelligently behaving robots without too much thought about "intelligence". If it can solve a problem it has shown a degree of intelligence.
-- JC
RMDumse wrote:

This isn't geometry, where you can set out all the rules by number and present them in a linear fashion. Any evolving methodology will have key points from which you then draw different conclusions that may or may not be written down, but you have to first recognize the key points. Assuming we're sticking with Brooksian behavior, then a key point is that a behavior is a reaction to a stimulus. Brooks never describes it as anything other than that. I don't think I've come across any other author or researcher in the field that has offered a counter concept.
Another important Brooks key point is that all behaviors must be observable. That's why lower-order emotional states -- such as anger, love, fear, etc. -- are discarded. It's not that some machine will *never* be capable of these emotions (though we will never be absolutely sure, just as we can't be sure our dog "loves" us), but in his quest to simplify AI he cuts off at the head the notion of emotionalism as part of behaviors. Where would "fast, cheap, and out of control" fit into this?
I think if you reread Brooks you may find that his overall aim is to simplify AI, not complicate it. If he has inputless behavior states that's an enormous complication because it suggests a level of randomness/unknown/undefined states that has to then be controlled by a much smarter dispatcher. You're now back to the things other AI researchers have tried to tackle, with no or limited success.
-- Gordon
Gordon McComb wrote:

Can I read that to say "a behavior is a reaction to an external stimulus"? That is, we can't have a hidden stimulus?
I see the danger of such an approach would be reading into Brooks what isn't written and isn't there. But let's go with your premise above. I personally think Brooks's Behavior-Based approach is a world more complex than Jones's. Jones is simplified and linearized for the sake of presentation, and I'm glad Jones did this. But I still think there's a world of biologically complex possibilities in Brooks's more explicit subsumption, where he has inhibit lines and subsumption lines running around almost like feedback, which can greatly complicate the nature of responses... however, let's look at a very, very simple example and see what we can do with it.
Bump switch: We have a bump switch, so we can do escape behaviors. Being a real-world bump switch, it is a bit sticky. How shall we make our robot robust, knowing sometimes our bump switch will stick for a while before releasing? When we back up, we notice the bump switch doesn't disengage as it should. Is this noticed condition a stimulus, or, since it is all internal, is it not allowed to be called a stimulus? Without Brooks's lines, which can inhibit the sensing of the stuck switch, but with Jones's "trigger in the behavior" design, how can we go on operating? Or are we required to always do constant escapes?
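One way the sticky-switch case might be sketched (names and the timeout value are invented for illustration, not from Jones or Brooks): treat "switch still pressed after backing up for a while" as an internal stimulus that inhibits the escape trigger, rather than burying the logic inside the escape behavior itself.

```python
# Sketch: an escape trigger inhibited by an internal condition --
# the switch staying closed longer than a plausible bump would.

def escape_trigger(bump_pressed, backing_time, stuck_timeout=2.0):
    """Fire on a bump, but ignore a switch that stays closed too long."""
    if bump_pressed and backing_time > stuck_timeout:
        return False   # assume the switch is stuck; stop re-triggering
    return bump_pressed
```

The point of contention stands either way: the inhibiting condition is purely internal, so is it a "stimulus" or not?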
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

Absolutely not!

You can absolutely have a "hidden" (i.e. internal) stimulus.
A stimulus is a stimulus. What is "hidden" is relative: a line follower uses sensory systems that are invisible to the human eye, for example. We feel pain from an internal injury the same way a robot might get an input that its batteries are low, and then its "find food" behavior kicks in. Watching the robot we may be aware neither that a given proprioceptor has been triggered, nor that the machine is currently in the "find food" behavior. We will realize this event after it has docked with its charger.
-- Gordon
Something that occurred to me is that "Behaviour" is now a commonly accepted term to describe a robot's (usually basic "useful") functions, and we all know what it means: the outcome of an Action/Reaction, Cause-and-Effect combination.
Perhaps, when we dwell upon it too much it's a little too abstract?
When I describe myself I wouldn't say I have an "Avoid the door behaviour"; possibly the word "Instinct" would be a better fit, and behaviours come out of a number of simultaneous instincts triggering?
Anyway, that's my 2 pence...
Mark
Gordon McComb wrote:

snipped-for-privacy@yahoo.co.uk wrote:

Well, the definition being a little too abstract was indeed my original reason for posting. The developing theme of my recent posts was that the lack of precision in the use of the word behavior was possibly one of the reasons Behavior-Based Robotics in specific, and AI in general, was not making progress. My position is that if you don't have a proper language to speak about a thing, it is almost impossible to think about the thing in the right way.
Yes, BBR has a definition of a behavior and how it is used in BBR, per the opening post and reference to Jones. However, if that definition leads to errors in thinking about problems, then it might be a good thing to review or modify. If the change is too big to live inside BBR as a modification, and we need a new name that allows behaviors to be thought of in a different way... okay. But my feeling is not that BBR needs to be thrown out, just the definition of behavior in BBR needs modification.
What are the issues I have with Jones's definition then?
He requires inputs, and he requires a trigger.
His first example of a behavior, Cruise, has no inputs and has no triggers. In "Flesh and Machines" Brooks describes Genghis's first "Stand" behavior as having no inputs and no triggers.
To me there are obvious problems with a definition that fails the first example in both books, and I suspect several concepts here are muddled together.
Let me see if I can draw attention to what I see as muddled. Look at Jones's example diagram. There are inputs for a transfer function, and there is an input for a trigger. Look at the input to the trigger and consider it separately. Isn't the trigger really just a transfer function with a digital output? Doesn't the trigger qualify as a behavior itself (with an always-on trigger, as Jones says about Cruise)? So doesn't the definition of a behavior require the behavior being defined to contain a valid behavior? But somehow its transfer function being digital, rather than an analog one, is supposed to mean it is not a behavior?
On the flip side, the trigger may cause an AFSM to advance, which doesn't look at the stimuli or trigger at all after initiation.
In the Servo Behavior the trigger has one purpose, qualifying the output, while in the Ballistic Behavior the trigger starts a sequence in which both trigger and stimuli are ignored. I do not see these two functions as similar enough to call the same thing.
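The two roles of the trigger can be put side by side in a sketch (all names here are my own, invented to illustrate the distinction, not Jones's terminology made code):

```python
# Sketch: servo vs. ballistic use of a trigger.

def transfer(sensors):
    """An illustrative analog transfer function."""
    return ("forward", sensors["speed"])

def servo_step(trigger, sensors):
    """Servo: the trigger qualifies the output on every cycle."""
    return transfer(sensors) if trigger(sensors) else None

def ballistic_run(trigger, sensors, sequence):
    """Ballistic: the trigger starts a canned sequence; after that,
    both trigger and stimuli are ignored until the sequence ends."""
    if trigger(sensors):
        return list(sequence)   # e.g. back up, turn, resume
    return []

bumped = lambda s: s["bump"]
```

In `servo_step` the trigger is consulted continuously; in `ballistic_run` it is consulted exactly once. Calling both "the trigger component" hides that difference.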
I've just got my Arkin "Behavior-Based Robotics" back. I think his approach to behavior seems more carefully thought out, but it is developed from animal behavior to robot behavior over a couple of chapters, so I'm still looking for the definition I think is most succinct. Note I did post a response from Arkin on what he thought a behavior was, elsewhere.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
Gordon McComb wrote:

Yeah, but that's a heck of an interesting "aside" to consider.
If you'll notice, with the anger (I recognized it as an emotion but called it a behavior) I listed a number of observables: 1) hopping, meant literally here, 2) release of adrenaline, 3) increased pulse, 4) reddening of the face, 5) tightening of the lip...
So if these involuntary reactions are not anger, what are they? And if they are anger, is anger an unobservable emotion, or an observable behavior?
Prior to this I would have called anger an emotion, and therefore a state. But my recent thinking is that since it has observable immediate behavioral reactions, I might change my stance and consider it a behavior as well.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

I think you'd go there if you want to be hopelessly lost, and never achieve any fundamental demonstration of a behavior-based system. Behaviors are doable; true emotions -- not just fake anthropomorphic stuff that looks like an emotion and is really just a canned behavior -- would require computing power far above the modern supercomputer's. We're not even sure anything but humans feel things like anger or hate.
-- Gordon

We have a large and complex vocabulary for describing humans. It includes words like "emotion", and "anger", and "intent", and "will", and "awareness", and "pain". We also have a large vocabulary for describing machines: shaft, and gear, and sub-assembly, and engine, and controller, and motor, software, sensors, and signals. But we aren't, for the most part, allowed to use the "human" words when talking about machines, and we aren't, for the most part, expected to use the machine words when talking about humans.
However, if you believe humans are just machines, and everything we talk about for humans, must some day have a counterpart in a machine we build that attempts to duplicate human skills, you have to end up creating your own human to machine language dictionary to translate the two languages back and forth.
Basically you start by realizing that all the words we use to describe humans, like "emotions", are nothing more than behavior classes. By saying humans have emotions, all we are really saying is that they act in ways we associate with various emotions. We say someone is mad when we see them doing any of many different behaviors we have learned to classify as "the behavior of a mad man". We learn to recognize our own madness when we see ourselves acting in those ways.
So even though we are taught to talk about "anger" as an internal state, it's really not quite like that. It's really just the condition of a human acting in ways we label as "angry". When we recognize ourselves as acting "angry" we also learn to recognize various internal conditions which we sense in ourselves, but that we never knew were happening in others. We then associate those internal behaviors with the idea that "we are angry".
In the end, what "angry" and all the other words like it mean, is simply that we sense ourselves acting in the way we call "angry".
The same thing happens for things like "wants". If we see a cat chasing a mouse, we are taught to describe that behavior by saying, "that cat wants to catch that mouse". But again, we know nothing about the internal states of the cat, or what mechanical condition exists that is causing the cat to produce that mouse-chasing behavior. So this concept of "want", which we have been taught to think of as some internal "state" of the cat, is just bogus crap. All we know is that the cat is chasing the mouse.
From programming robots, we know we produce "chasing" behavior by writing interesting code. But nowhere in that code do we typically see "want" as an internal state. Behavior-based robot code especially doesn't seem to translate to anything that looks like a "want" for the robot. Yet it can make the robot produce behavior that we might describe as the robot "wanting" the ball.
So, for any of these things, like emotions and desire, what we really have to do is figure out how to make a robot produce "anger" behavior, or "fear" behavior, or "love" behavior, or "want" behavior at the same times we might expect a human or animal to produce those types of behaviors. People have created scripted behaviors in robots to make them shake their head, or stamp a foot, or perform many other complex sequences which look to us like they are part of some "emotional" reaction. But these simple scripted behaviors don't seem to ever be triggered for the right reasons. And they tend to look the same every time, whereas real animal and human behavior is different every time. So again, you can't make a machine act emotional, or in any way "intelligent", by scripting pre-recorded behavior sequences. It must produce these behaviors on its own, as a very complex reaction to what is happening to the machine. It's not something we can hand-code.
Reinforcement learning systems, however, give us an answer as to what emotional behavior is all about -- and why we have it. This is because RL systems must assign values to everything (sensory information and behaviors). They do this to allow them to train themselves how to react to the environment. An RL system attempts to produce behaviors that increase the "good" stuff and decrease the "bad" stuff. For example, an RL-trained robot that gets rewarded for being plugged into its charger needs to learn that the sight of the charger is a "good" thing, because that's what it sees right before it gets a charge reward. As it learns that the sight of the charger is a good thing, it will learn behaviors to bring the charger into view. Doing that gives it a reward, because just seeing something "good" (the charger) now acts as a reward. So it learns behaviors to seek out the things it has learned are "good" and to avoid, or escape from, the things it has learned are "bad".
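This value-propagation story can be sketched with a toy one-step temporal-difference update (a minimal illustration of the idea, not Curt's actual system; the state names and learning constants are invented):

```python
# Toy TD(0) sketch: "seeing the charger" becomes valuable because it
# reliably precedes the charging reward.

alpha, gamma = 0.5, 0.9   # learning rate and discount factor
V = {"wander": 0.0, "see_charger": 0.0, "docked": 0.0}

def td_update(state, reward, next_state):
    """One-step temporal-difference value update."""
    V[state] += alpha * (reward + gamma * V[next_state] - V[state])

# Repeated episodes: wander -> see_charger -> docked (reward on docking).
for _ in range(50):
    td_update("wander", 0.0, "see_charger")
    td_update("see_charger", 0.0, "docked")
    td_update("docked", 1.0, "docked")
```

After training, the mere sight of the charger carries predicted reward, which is exactly the mechanism being claimed for "good" things acting as rewards themselves.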
This is where love and fear come from. Love is simply our RL hardware predicting strong future rewards in association with some sensory condition (likely caused by some object). Fear is where our hardware is predicting a high probability of future pain (low or negative rewards).
Anger is just an aggressive form of fear -- that is, when we act in a proactive way toward our fear by trying to change the thing that is causing it (aka the thing which is causing our hardware to predict lower future rewards). Sometimes we react to this prediction of a decrease in future rewards by trying to run away from it (and we call that fear), and sometimes we react to it by trying to change it (kill it, disable it), and we call that type of behavior anger.
So, the trick to creating an intelligent machine, is not to get confused about how we talk about humans. The language is really bogus. We don't have an internal state of "anger". What we really have is "anger behavior", triggered in response to a prediction of less future rewards by an RL system.
In the end, making a machine act intelligent, or human-like, in any sense, is all about making the robot act the right way at the right times. I happen to believe the only way we are going to do that is by building a reinforcement learning system to shape the behaviors in real time. If you get it right, you will see it act in ways you will want to call "angry", and "happy", and "sad", and all the other ways we describe humans and animals, without having to script these behaviors into the robot.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

I'm in that camp.

I am not inclined to agree. As I said, there are physiological changes associated with anger, and even babies can recognize emotions on others' faces before many other communication skills develop. Brooks talks at length about Cog, Kismet, and Furby. Things with faces do pretty well at connecting to our "emotions" communication channels, and we seem prone to be able to receive such signals.
I think emotions will have to be achieved. I notice almost all animals have emotions. I'm not sure if any save man have intelligent thoughts. But that animals have emotions, I am without doubt. So I think emotions are a stage we will pass through on our way to AI.

Whoa, that's a pretty big claim. I can understand you think it is complex. But is it true we can't hand-code an operating system? Well, sure we can -- where else did the first one come from? Of course you have things like C compilers written in C, and Forths written in Forth, and so on, but somebody somewhere hand-coded the first one, make no mistake. Once started, we can bootstrap our tools to make better tools. But I don't think we can exclude hand-coding as a possibility just because something is complex.

I don't know; evolution, given millions of years, has made many, many dumb animals. You'd have to say real intelligence is probably an accidental artifact, rather than something predictable.

Hey, this is no different than a human. Consider what the original happy meal looked like. Now think about the entrances to buildings. If the builder really wants you to go through a door, they'll put a symbol on it that makes it look like that happy meal, usually upside down, an arch or dome, or umbrella or circular symbol etc. Look at a picture of Notre Dame upside down, for instance, and tell me if you see any repeated imagery you hadn't noticed before. It's not only men, by the way. ... But that is a whole 'nother discussion. Anyway, humans seem fascinated with their original re-chargers to a degree of blatant obsession.

You have indicated this interest in learning several times now. You say you haven't read much on BBR; I have to admit to not yet getting to the reading I want to do on reinforcement learning, although what I've got on my reading list is Emergence, Evolutionary Robotics, and An Introduction to Genetic Algorithms. Any of them you like? What literature supporting your position on RL do you like?
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.

Well, I think you might have failed to understand what I was trying to communicate there.
I do believe we can hand-code (aka build) an intelligent machine. What we can't do is hand-code the behavior into it. Instead, we have to figure out how to hand-code a learning algorithm, which creates the behavior for us.
We already have many learning algorithms that work -- like all the neural networks. And we know that when trained, they create machines which no human can understand. So this concept of something being too complex for a human to understand is already well understood.
The TD-Gammon program that learned to play backgammon is among the strongest backgammon programs in the world and plays at the same level as the best humans. And yet, it learned on its own how to play backgammon by trial and error. The entire sum of its knowledge is represented by the weights in a fairly small neural network (80 nodes). These nodes might have a few hundred weights. In other words, everything you need to know to play backgammon at the level of the best humans is represented by a few hundred floating point numbers. That's it. If you printed it out on a sheet of paper, it would probably be less than half a sheet of numbers. But in those numbers is everything you need to know to play backgammon.
There is no human alive who could have created that small list of numbers by hand. There is no human alive who can look at those numbers and understand what they are telling us. But yet, you plug those numbers into the backgammon interpreter, and that programs it to play a very strong game of backgammon. Those numbers are the software that turns a dumb machine into a smart backgammon-playing machine.
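To make that concrete, here is a toy sketch of the idea (this is not TD-Gammon itself; the network size, board encoding, and all names are invented for illustration): a fixed-topology network whose entire "knowledge" is one flat list of weights.

```python
import math
import random

def evaluate(board, weights, n_hidden=4):
    """Score a board position with a tiny fixed-topology network.
    The network's entire 'knowledge' is the flat weight vector --
    the same idea, at toy scale, as TD-Gammon's list of weights."""
    n_in = len(board)
    hidden = [math.tanh(sum(weights[h * n_in + i] * board[i]
                            for i in range(n_in)))
              for h in range(n_hidden)]
    out = n_hidden * n_in  # output weights start after the hidden layer's
    return math.tanh(sum(weights[out + h] * hidden[h]
                         for h in range(n_hidden)))

random.seed(0)
board = [1, 0, -1, 0, 1, 0]   # a made-up 6-cell position encoding
weights = [random.uniform(-1, 1) for _ in range(4 * 6 + 4)]
score = evaluate(board, weights)   # one number in (-1, 1): how good the position looks
```

Swap in a different weight list and the same dumb interpreter plays differently; the weights, not the code, are where the skill lives.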
Human behavior is created by a similar system at work in our brain. The patterns of behavior we see at the high level, are created by the interaction of billions of simple machines (neurons), connected in complex ways with complex but finely tuned weights of association. If we could dump the configuration of our brains, we would not have half a sheet of numbers as was needed just to play backgammon, we would have trillions of numbers (the effective weights of the synapse connections). And like the small list of numbers from the TD-Gammon game, these numbers control our behavior. But there's no hope in hell of a human ever understanding the numbers, or in hand-configuring a brain, by adjusting the numbers. The only hope, is to build a learning algorithm which configures the system for us, through experience.
A typical BBR type program is what you get when you attempt to hand configure such a system. It works very well for very simple tasks, but quickly becomes too complex for a human to program when trying to make it do more complex things. The BBR approach has hit a wall of complexity because we have reached the limits of what we can understand with our very limited human brain. To make even small gains in what the robots are doing, we have to deal with exponentially greater complexity in our code.

It's a problem I've been looking at for about 25 years now. Progress has been very slow. :)
My interest however is for AI - to ultimately figure out how to build machines that can do anything a human can do. If your goal is to figure out the best way to program a robot today (which is true for most people in this group), then my obsession with learning algorithms would only prevent you from getting your robot doing something useful.

It's because the little I have seen tells me they are trying to hand-code behavior instead of building learning algorithms, so it's of limited use to me.

Yeah, all those are headed in this same direction. Genetic Algorithms are a very coarse-grained approach to reinforcement learning. But the basic algorithm and power is the same. You need a system that tries variations of a design, and it must have some way to evaluate each variation. You keep the best, give up on the worst, and then continue to explore variations of the best. It's a directed random search. It's trial-and-error learning.
Reinforcement learning is the same basic concept, but the implementation tends to be very different.
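The keep-the-best, mutate, re-evaluate loop described above can be sketched in a few lines (the one-number "design" and the toy fitness function are my own invention, not from any particular GA library):

```python
import random

def fitness(x):
    """Toy evaluation: higher is better, the best 'design' is x = 3."""
    return -(x - 3.0) ** 2

def evolve(pop_size=20, generations=100, seed=1):
    """Directed random search: evaluate every variant, keep the best
    half, refill the population with mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # best designs first
        survivors = pop[: pop_size // 2]           # give up on the worst
        mutants = [x + rng.gauss(0.0, 0.5) for x in survivors]
        pop = survivors + mutants                  # explore variations of the best
    return max(pop, key=fitness)

best = evolve()   # homes in near 3.0 by pure trial and error
```

Nothing in the loop "knows" where the answer is; the fitness signal alone directs the random search, which is the whole point of the approach.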

I've not seen any literature supporting my position. There is a ton of literature on reinforcement learning algorithms, but none of it is as fanatical as I am about how important it is.
The problem is that these ideas started back in the first half of the last century with various ideas from psychology, which led to Skinner's work in Behaviorism. Skinner was fanatical about it, and many of those who still believe in the behaviorism approach are still fanatical about it. However, Skinner lost the battle in trying to make people understand the full importance of conditioning in explaining human behavior. Most people currently believe that even though humans are conditioned, that only explains our most basic behaviors. They believe it might explain why we can train a dog to sit, but that human behavior is far too complex to be explained in such simple terms. For example, it's common for people to believe that human language behavior is what separates us from the animals, and as such, we must have language hardware that goes way beyond the simple ideas of operant and classical conditioning. Noam Chomsky, the well-known linguist (basically the founder of the field), is the big champion of the anti-Skinner crowd.
Skinner wrote the book Verbal Behavior to address the issues of how all language behavior is explained in terms of conditioning. But Chomsky wrote an argument against that book that left the majority of people believing Chomsky was right and Skinner was wrong. I happen to agree with Skinner.
Skinner lost the battle to make even his peers in psychology understand his argument. And though Skinner's work and behaviorism in general were very popular in the '50s, it's mostly gone downhill since then. The commonly accepted view seems to be that it's too simplistic. It's seen as greedy reductionism.
The real reason behaviorism and all it implies has lost favor is that it's failed to produce anything useful since the '50s. Since the beginning of AI work, people have attempted to use the ideas of reinforcement learning to build smart machines, but they have all suffered from just what people expected: they seem to be too simple. They can only learn very simple things, and that seems to be where it stops.
The failure, however, I believe was just an implementation issue. They have all failed to build the correct type of reinforcement learning machine. It's like trying to build a flying machine and getting the weight-to-lift ratio wrong because you are using too heavy an engine, and when it fails to fly, you decide the approach is just wrong (because that's what everyone is telling you anyway).
Reinforcement learning algorithms however continue to be researched, and in the past few decades, improvements have been made. This has caused a rebirth of excitement in the approach. However, even the best algorithms are still far away from producing anything that looks like intelligence. So, most people still see reinforcement learning as just one type of learning that has some limited applications, but which can't hope to be the foundation of human intelligence. I think those people just lack foresight and wisdom. I think Skinner was right 50 years ago and one day, we will find the correct implementation of a reinforcement learning machine that's going to get everyone excited again about the ideas and the approach. In the end, people are going to realize that humans are just fairly simple reinforcement learning machines, and that all our behavior, including our language and "thought" behaviors, is just conditioned into us in the same way we condition a dog to sit on command.
For a review of what's been done in reinforcement learning algorithms for AI, here's a good book:
Reinforcement Learning: An Introduction, by Richard S. Sutton and Andrew G. Barto, MIT Press, Cambridge, MA, 1998 (A Bradford Book).
The whole thing is on-line at Sutton's web site so you can read it without having to buy the book (or just scan a few chapters).
http://www.cs.ualberta.ca/~sutton/book/the-book.html
Unfortunately, the algorithms in the book only work for state signals that have what is known as the Markov property. What this means is that the algorithms only work if the sensory signals tell the system the complete state of the environment. For example, to learn to play tic-tac-toe, the sensory signals must tell you the full and complete board position (which is trivial to do). They also assume that the number of unique states the environment can be in is small and finite: small enough that the algorithm can track a "value" for every possible state the environment can be in. This means that to use these RL algorithms to learn to play a game like tic-tac-toe, you need an array large enough to hold a number for every possible board position. This is possible for a very trivial environment like tic-tac-toe, but quickly becomes impossible for anything of greater complexity.
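For illustration, here is a minimal sketch of that tabular idea (the function name and the toy three-state chain are my own, standing in for tic-tac-toe): every state gets its own table entry, which is exactly the requirement that blows up outside trivial domains.

```python
def td0_update(values, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular TD(0) backup: nudge V(state) toward
    reward + gamma * V(next_state). The 'values' table must hold an
    entry for every state ever visited -- fine for toys, hopeless
    for the real world."""
    values.setdefault(state, 0.0)
    values.setdefault(next_state, 0.0)
    values[state] += alpha * (reward + gamma * values[next_state]
                              - values[state])

# A made-up three-state chain: s0 -> s1 -> terminal (reward 1 at the end).
values = {}
for _ in range(200):                       # revisit the same states many times
    td0_update(values, "s1", 1.0, "terminal")
    td0_update(values, "s0", 0.0, "s1")
# values["s1"] converges to 1.0, and values["s0"] to 0.9 (one step earlier).
```

Note the two assumptions baked in: the state strings must fully describe the world (Markov), and learning only works because the same states are visited over and over.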
Robotics, however, uses sensory signals that are very non-Markov. This means that the sensory data doesn't tell the robot the full state of the environment. It only gives the robot a small fraction of the data needed to know what state the environment is in. In addition, even if the robot knew the full state of the environment, any real-world environment has far too many states to allow these simple RL algorithms to track a different value for each possible state (you would have to track the state of the entire universe, really). It also makes learning impractical, because these algorithms require the system to visit the same state many times before it will correctly learn the true value of that state. Robots interacting in the real world tend to never see the exact same state twice.
So nothing you find in a good RL book like the one above will give you any algorithms which can solve any real-world robotics problems. The world is still waiting for people to find and develop these better algorithms. That's what I've been exploring for many years now as a hobby, kinda like a crossword puzzle I keep picking up and working on every few months or years, trying to make progress on it.
There are many people researching, and developing, better RL algorithms. I've seen mention of various papers on the field, but haven't actually gotten my hands on any papers that seem to be useful to me.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
We know that intelligences have "hidden states". We also know that some of those states can come about without something we might perceive as an input. The trick here is to realize that any system, if known, can be modeled. I don't want to ramble, but I suppose that this might end up as a sort of haphazard set of examples and points.

First, let's examine the hidden states issue. In any system, there can be a set of values or variables that contain important information but are not necessarily obvious to an outside observer. One such concept can be seen in fuzzy logic thermostats or systems that keep track of past states. History is one such hidden state, and we cannot always predict what a record of history in a system might result in, much less how a system will react to it. A perfect example is a machine that keeps tabs on what its outputs have yielded, and then goes on to modify its outputs based on the results. Feedback systems do this in many cases. Looking at a thermostat, we could not tell what it had done in the past or what the temperatures were, but it might adjust its control outputs to best minimize energy consumption, and it might even take time of day into account. These are really simple devices, but the hidden variables can be the value of the internal clock, some preprogrammed schedule or sequence of events, and the history it has recorded.

What about "input-less states"? Some systems can perform actions based on random numbers. Humans seem to. So can game characters and robots with randomness in their subroutines. Some may question my inclusion of game characters, but recognize that these nonexistent things are a type of robot, one that lives in a conceptual space only. The difference between these and real robots is tenuous: you could create a software bot that actually outputs control codes to a robot body. And isn't this what those of us with advanced robot projects actually do?
I use this concept all the time, because it provides the ability to create really interesting "personalities" and then embed them in actual working hardware. So I include software bots and video game characters as "softbots" and will treat them as being in the same category as real robots, because they can easily be brought into the real world. Just add metal and batteries.

But states can result without an external stimulus very easily. The random example above is the trivial case. In reality, we can expect that there are internal states that can be reached due to a threshold of some sort being achieved, or a timer expiring, or the battery running low, just to name a few examples. The most interesting behaviors will result from such invisible states and will provide that spontaneous, almost lifelike set of actions that people find fascinating in animals or other people.

Emotions are far too oversold as being difficult or mysterious. They are simple internal states, but there are some very distinct components to them that are easy to master in software. Let's first admit that emotion has nothing to do with what we "think". Under anesthetic or at the borders of sleep, we can have thoughts of "gee, I should be mad or scared, but I don't feel that way". This is the first and most basic clue to the true nature of emotions. What we call emotion is nothing of the sort: it is our body's response to emotion that we sense, not the emotion itself. Panic is an ethereal thing, stored in a register in our brains, and its effect is to dump adrenaline and other chemicals into our bloodstreams. Our bodies respond with tachycardia, sweating palms, rapid breathing, tightening of the abdominal muscles... but what we feel is the body sensation, the visceral component, not the panic itself! In other words, virtually everyone calls these visceral responses panic when in fact they are nothing of the sort. They are the diagnostic, the symptom, not the actual thing.
We are getting old news and calling it the thing. Panic is a value stored in your brain that decays with time and resolution of a situation. The same is true for fear, boredom, love, and happiness. We never "feel" the emotion itself, only the response of the body to its chemistry.

Now, that having been said, we can quantify emotion as "an internal state that reflects a set of conditions". This is overly broad, but a first shot at a definition. Let me provide a perfect example of an emotion that you can produce in a machine, and you can fully know that the machine can use and respond to it. In our experience, we are constantly predicting what comes next. We have a set of expectations that we produce based on experience. As we go through the day, we expect that things will move along in a certain fashion and that we know what comes next. But imagine picking up a sun-warmed rock and finding that it is light instead of heavy, and that it is ice cold and squishy instead of hot and hard. All at once, some little circuit in our heads goes "whoa, that's not right!" We experience surprise because (and this is the crux) our expectations were violated or not met.

Software can easily duplicate this. The software for a robot can have expectations, and when they are not met, a word in memory labeled "surprise" can be set to a high value. Now the robot should have some sort of fallback position, and when this value is high, it should invoke that subroutine. As resolution of the unexpected is achieved, the value of surprise lowers in a predictable manner. Did the robot "feel" surprise? Perhaps. It will depend on how you handle the exceptions internally. If the software can keep track of how much processing it is doing that is out of the ordinary, and how large the violation of expectation was, then yes, I would say that it could feel surprise.
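That surprise register is easy to sketch in software (the class name, thresholds, and decay rate here are all invented, just one way to wire it): a value spiked by a violated expectation, decaying as the situation resolves.

```python
class SurpriseRegister:
    """'Surprise' as a plain internal state: spiked when a prediction
    is violated, decaying toward zero as the situation resolves."""

    def __init__(self, decay=0.8):
        self.level = 0.0
        self.decay = decay

    def observe(self, expected, actual, scale=1.0):
        error = abs(expected - actual) * scale
        self.level = max(self.level, error)   # big violations dominate

    def tick(self):
        self.level *= self.decay              # surprise fades with time

    def is_surprised(self, threshold=0.5):
        return self.level > threshold         # high value -> invoke fallback

s = SurpriseRegister()
s.observe(expected=40.0, actual=2.0, scale=0.1)  # warm rock turns out ice cold
assert s.is_surprised()        # the fallback subroutine should run now
for _ in range(10):
    s.tick()                   # resolution: surprise decays predictably
assert not s.is_surprised()
```

The same scaffolding works for the other internal states mentioned here: hunger driven by battery level, fear by a near fall, each just a decaying number that the control code can read.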
In particular, we can say that if it also keeps a history, so it can look back and identify times when it was surprised, or hungry (low batteries), or scared (recovering from a near fall or some other danger), then our cleverness is the only limit to what we can put in its little software brain. But to truly feel these things, it has to be able to map them to its body model, and this is why machines do not yet have emotions that we can accept as being emotions. Almost nobody creates a body model in the robot's software to map sensations and emotions to! Without this basic item, we cannot yet claim to have achieved emotion or true sensation in a robot.
Cheers!
Sir Charles W. Shults III, K.B.B. Xenotech Research 321-206-1840

Yeah, I basically agree with all your points.
My belief is that human-like behavior is created by generic sensory processing algorithms and behavior learning algorithms. When you create a generic system that simply analyzes all the data it receives equally, then the body is as much a part of the environment as the rest of the environment is. So, when such a generic system ends up creating models of the environment, it's going to be creating a body model, or a model of self, at the same time. As we learn the best ways to react to the environment, we will, at the same time, be learning how to react to our own body. So as you say, when we sense the changes in our body that result from various physiological processes at work, we learn to identify, and react to, them in different ways. It's really no different than how we sense that our car is not running normally and react to that. Our body really is just like a car we spend our life driving around in; it's just a car we (aka the brain) can't get out of.
The brain is also able to sense its own actions, since the brain is part of the same environment it's trying to sense. So, like it does with all sensory data, it learns how to best react to its own actions. That simple and obvious ability is what creates our natural sense of self.
What you said about surprise I agree with, in that the brain is always trying to predict what will happen next, and our behavior becomes a function of the results of those predictions. We act one way when our predictions are met, and we act another way when our predictions fail. However, how we react in both cases is, I think, learned. We learn through experience how to react when things unfold as we expect them to, but when something happens that we didn't expect, we generally don't know how to act. That is, we have had so little experience with this very unusual condition (or with anything even close to it) that we tend to basically not know what to do. We tend to stand around with our mouths open and do nothing, simply because the brain doesn't have any good answers about what it should do. The stuff we see all the time is the stuff we expect, and since we have seen it a lot, we have had a lot of experience trying different ways to react to it, and have long since learned good ways to deal with the most expected situations.
So surprise is a function of how much our prediction system failed to accurately predict what was about to happen, and when it fails, we have to fall back to some default behavior for dealing with the unexpected - which often is a fear reaction of some type because the less we understand what is happening, the higher the probability that something bad will happen to us - and that is something we have learned through experience. A typical reaction is an attempt to escape from the unexpected - like for your cold rock example, it's likely we would drop it because we have learned natural reactions to try and escape from the unexpected before we get hurt.
But my main point, is that I believe human behavior is all learned, so both how we react to the expected, and how we react to the unexpected, is all learned through a life time of experience.
How you program this into robots in a way that is useful for practical robotic applications I don't yet know. But I'm working on that.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
