What is a behavior?

Yes, of course it is. I was just pointing out that your object of "identifying intelligence" is chasing a moving target, because as soon as a mechanism is identified, the definition of intelligence changes. I think that's why there's the feeling of "failure" and so much mudslinging in the AI community. It's not that we aren't advancing, it's just that with every advance, the horizon retreats by the same amount... hmm... a round earth again :-).

I don't really have a better way of defining the problem, except to hope that one day there will be a Columbus who'll convince us that intelligence is circular and *continent*, as Curt and I recently seemed to agree. When that happens, everyone will stop searching for the edge of the world, the ghost in the machine beyond the horizon, and realize that there's only the machine, and *that's enough*.

Exactly my point. Common usage is the problem, and must be changed in the same way that Columbus' voyage changed ideas of the earth. Hopefully the Columbus of AI will actually do the full circumnavigation as well!

Clifford Heath.

Reply to
Clifford Heath

I don't know how you could determine that you had observed all possible inputs and all possible outputs. Also, keep in mind that an "input" can be a temporal pattern.

I am not sure I really get what you are trying to say. You could break the sequence of states into behavioral chunks in which the trigger was one of those chunks, but where each chunk begins and ends is arbitrary, or decided out of practical necessity. Life is just one ongoing "behavior", and I think the word itself can become so debased that it no longer explains anything.

The system engineer's definition of state is about the only one I can imagine, so if you have something more esoteric in mind it is out of my league.

This moves into the realm of philosophy. My interest in robotics is in how we can build intelligently behaving robots without too much thought about "intelligence". If a robot can solve a problem, it has shown a degree of intelligence.

-- JC

Reply to
JGCASEY

Yeah, but that's a heck of an interesting "aside" to consider.

If you'll notice, with the anger (which I recognized as an emotion but called a behavior) I listed a number of observables: 1) hopping, meant literally here, 2) release of adrenaline, 3) increased pulse, 4) reddening of the face, 5) tightening of the lip...

So if these involuntary reactions are not anger, what are they? And if they are anger, is anger an unobservable emotion, or an observable behavior?

Previously I would have called anger an emotion, and therefore a state. But my recent thinking is that since it has observable, immediate behavioral reactions, I might change my stance and consider it a behavior as well.

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

Well, that depends on whether you are limiting your idea of "action" to external actions (wheels spin etc.). In the most general sense, I consider electrons moving inside the controller to be actions as well.

But the normal use of behavior in English tends to be limited to acts we can see with our eyes, which would mean external actions in a robot and not the internal movement of electrons inside the controller and other circuits. Behavior-based robotics, however, I think (remember I haven't read any books on it) thinks of behavior in terms of the outputs produced by logical behavior modules - which many times, but not all times, connect to the outside world.

Reply to
Curt Welch

Words like distributor and transmission are labels for very specific parts of the hardware. They are words meant to describe the hardware.

Words like behavior are not used to label parts of the hardware. A car has behavior just as much as a robot has behavior. But do we waste time trying to define what the word behavior means in the context of a car because we think it will better help us understand how a car works? I don't think so.

The word intelligence is another word like behavior which is more confusing than it is helpful. It is best not to use it when talking about machines, or to waste any time trying to define it. If you want to build a robot that does things like humans do, then simply try to understand the actual hardware at work in a human and talk about that.

If you want to understand the actions of robots, talk about their actions.

To me, arbitration is just another type of behavior.

The problem with AI and with understanding human behavior is that it can be sliced and subdivided a million different ways. And every way someone decides to subdivide it leaves them feeling like their way is the only correct way to subdivide it. But yet, none of the subdivisions have produced a machine that anyone thinks acts like a dog, let alone a human.

I believe this is because the brain is basically a device which creates controlled chaos. And chaotic behavior is something too complex for a human to understand. We have no hope of hand-specifying behavior at the level of complexity it exists at in humans. It's just far too complex. Even a simple neural network is too complex for a human to understand: once it's been trained, no human can look at the values of the weights in the network and explain, or understand, why any one weight is set to the value it is, instead of being set to some other value.

I believe the only route to creating behavior in a machine near human behavior is to create a strong learning machine. The approaches to how such a machine could be structured that look promising to me actually share a lot in common with some of the approaches in BBR. However, the hand-designed BBR systems, with their small handful of behaviors, can't touch the complexity needed. A brain doesn't have 100 behavior modules, it's got 100 billion (aka neurons).

The type of unexpected behaviors we can see emerge from a BBR-type system is what I think happens with the brain - except its modules are even simpler, it has many more of them, and they are configured by the learning algorithms at work instead of by the programmer.

Reply to
Curt Welch

I think you'd go there if you want to be hopelessly lost, and never achieve any fundamental demonstration of a behavior-based system. Behaviors are doable; true emotions -- not just fake anthropomorphic stuff that looks like an emotion and is really just a canned behavior -- would require computing power far above the modern supercomputer. We're not even sure anything but humans feel things like anger or hate.

-- Gordon

Reply to
Gordon McComb

There is certainly a lot of that going on. But I don't believe it will keep happening. Instead of chasing the dream off the end of the world, I think we will actually catch it one day. I think one day, we will produce a machine with enough intelligence, that most people will call it intelligent.

The line retreats because so far, what we do is pick one behavior of a human which we have never seen a machine do, and think that if we saw a machine do THAT, it must be intelligent (play chess). But when we make a machine do that, we then decide it's not intelligent, because there's still so much more it can't do (play checkers, or play go). The chess program isn't smart enough to know not to stick its hand in a fire.

I think at some point, we will produce a machine that's intelligent enough, in a general way, that we will look at it and see what looks like a living conscious machine, instead of just a robot. For example, have you seen a robot that acts like a cat or dog? One that looks at you the way a cat or dog looks at you, wondering if you might be about to feed it? Or harm it? Have you seen a robot that came running because it learned the sound of the can opener? Or that came running when you came home because it knew it was you from the sound of the car and the garage door opening? How about a robot that gets excited and comes running when you open the door that leads to its charger? Have you seen a robot that you can train to do complex tricks as easily as you can train a dog?

Without producing the more complex human behavior of language use etc., these sorts of basic behaviors we see in animals are something we haven't produced for real in robots. At best we have built robots that tend to mimic some of these sorts of behaviors. When we can produce a generic learning system that can learn these sorts of seeking and attention behaviors on its own, I believe this will be a turning point where robots will start to look "alive" and "intelligent", even though they will still be far short of full human intelligence. This is the point where the robot will be smart enough to know not to stick its foot in a fire (at least the second time). It won't be good enough to convince the majority of people that human intelligence is possible in a machine, but it will be enough for AI people to know they are on a very productive path, where they have finally crossed the line from dumb machine to intelligent conscious machine. It will all be downhill from there.

Reply to
Curt Welch

"Variable priority" and "least commitment" are both still priority schemes. And they still require behaviors to sometimes be active and sometimes inactive. Each behavior is still associated with a threshold and a trigger.

As an aside, any behavior can be continuously enabled by setting its threshold to 0, or disabled by setting it to a large number. So the "cruise" behavior previously discussed that has "no input" is just like all the other behaviors, with its threshold set to 0 (continuously active).

"Motor schema" is more of a weighted continuum, like the previously discussed steering algorithm for jBot, and is continuously active. In Jones chapter 4, it is the sole exception to the more general arbiter scheme.

In the behavior-based paradigm, you would instead have a separate behavior with its own trigger for each instance. The behavior itself is only a data structure with pointers to thresholds and variables and actions and so forth, so it is easy to have as many as you want.

So, one might test a bend sensor against a threshold and close a gripper if the threshold is exceeded, grasping an object in the gripper. SR04 works that way. Another behavior might test an IR distance measurement and close the gripper if some threshold is exceeded, to try to touch a close object. A third might detect that a watchdog timer has exceeded some threshold, determine the robot is stuck, and close the gripper to try to push the robot away from an obstruction.

The point is that all three behaviors have the same action -- close the gripper -- but for three different trigger conditions.

It is the combination of the trigger condition and the action that form the behavior. That's what a behavior is, as the term is used in behavior-based robotics.

Note that the threshold could be testing some internal variable, and the action could be anything, and may not be observable.
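As a rough illustration of this trigger-plus-action idea (a hypothetical C sketch, not code from SR04, jBot, or Jones; all names and thresholds are invented), a behavior can be little more than a small struct pairing a sensor test against a threshold with an action, so three behaviors can share the close-gripper action while watching three different signals, and a threshold of 0 gives an always-on behavior like Cruise:

#include <stdio.h>

/* A behavior pairs a trigger (a stimulus tested against a threshold)
 * with an action.  Threshold 0 makes it continuously active; a huge
 * threshold effectively disables it.
 * Hypothetical sketch -- not code from SR04, jBot, or Jones. */
typedef struct {
    const char *name;
    int  (*sense)(void);          /* returns current stimulus value    */
    int    threshold;             /* trigger fires when sense() > this */
    void (*action)(void);         /* what to do when triggered         */
} behavior_t;

/* --- invented sensors -------------------------------------------- */
static int read_bend_sensor(void)    { return 42; }  /* gripper bend  */
static int read_ir_distance(void)    { return 10; }  /* close object  */
static int read_stuck_watchdog(void) { return  0; }  /* ticks stalled */

/* --- one shared action -------------------------------------------- */
static void close_gripper(void) { puts("closing gripper"); }

/* Three behaviors, three different triggers, one common action. */
static behavior_t behaviors[] = {
    { "grasp-on-bend",   read_bend_sensor,    30, close_gripper },
    { "touch-close-obj", read_ir_distance,    25, close_gripper },
    { "push-off-stuck",  read_stuck_watchdog, 50, close_gripper },
};

int main(void) {
    /* Simplest fixed-priority arbitration: the first triggered
     * behavior in the list wins this pass of the loop. */
    for (size_t i = 0; i < sizeof behaviors / sizeof *behaviors; i++) {
        if (behaviors[i].sense() > behaviors[i].threshold) {
            printf("%s triggered: ", behaviors[i].name);
            behaviors[i].action();
            break;
        }
    }
    return 0;
}

The arbitration shown is only the simplest fixed-priority pass; a variable-priority or least-commitment scheme would change how the winning behavior is chosen, not what a behavior is.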

dpa

Reply to
dpa

We have a large and complex vocabulary for describing humans. It includes words like "emotion", and "anger", and "intent", and "will", and "awareness", and "pain". We also have a large vocabulary for describing machines, like shaft, and gear, and sub-assembly, and engine, and controller, and motor, software, sensors, and signals. But we aren't, for the most part, allowed to use the "human" words when talking about machines, and we aren't, for the most part, expected to use the machine words when talking about humans.

However, if you believe humans are just machines, and that everything we talk about for humans must some day have a counterpart in a machine we build that attempts to duplicate human skills, then you end up having to create your own human-to-machine language dictionary to translate the two languages back and forth.

Basically you start by realizing that all the words we use to describe humans, like "emotions", are nothing more than behavior classes. By saying humans have emotions, all we are really saying is that they act in ways we associate with various emotions. We say someone is mad when we see them doing any of many different behaviors we have learned to classify as "the behavior of a mad man". We learn to recognize our own madness when we see ourselves acting in those ways.

So even though we are taught to talk about "anger" as an internal state, it's really not quite like that. It's really just the condition of a human acting in ways we label as "angry". When we recognize ourselves as acting "angry", we also learn to recognize various internal conditions which we sense in ourselves, but which we never know are happening in others. We then associate those internal behaviors with the idea that "we are angry".

In the end, what "angry" and all the other words like it mean, is simply that we sense ourselves acting in the way we call "angry".

The same thing happens for things like "wants". If we see a cat chasing a mouse, we are taught to describe that behavior by saying, "that cat wants to catch that mouse". But again, we know nothing about the internal states of the cat or what mechanical condition exists that is causing the cat to produce that mouse-chasing behavior. So this concept of "want", which we have been taught to think of as some internal "state" of the cat, is just bogus crap. All we know is that the cat is chasing the mouse.

From programming robots, we know we produce "chasing" behavior by writing interesting code. But nowhere in that code do we typically see "want" as an internal state. Behavior-based robot code especially doesn't seem to contain anything that looks like a "want" for the robot. Yet it can make the robot produce behavior that we might describe as the robot "wanting" the ball.

So, for any of these things, like emotions and desire, what we really have to do is figure out how to make a robot produce "anger" behavior, or "fear" behavior, or "love" behavior, or "want" behavior at the same times we might expect a human or animal to produce those types of behaviors. People have created scripted behaviors in robots to make them shake their head, or stamp a foot, or run through many other complex sequences which look to us like they are part of some "emotional" reaction. But these simple scripted behaviors don't seem to ever be triggered for the right reasons. And they tend to look the same every time, whereas real animal and human behavior is different every time. So again, you can't make a machine act emotional, or in any way "intelligent", by scripting pre-recorded behavior sequences. It must produce these behaviors on its own, as a very complex reaction to what is happening to the machine. It's not something we can hand-code.

Reinforcement learning systems however give us an answer as to what emotional behavior is all about - and why we have it. This is because an RL system must assign values to everything (sensory information and behaviors). It does this to allow it to train itself how to react to the environment. It attempts to produce behaviors that increase the "good" stuff and decrease the "bad" stuff. For example, an RL-trained robot that gets rewarded for being plugged into its charger needs to learn that the sight of the charger is a "good" thing, because that's what it sees right before it gets a charge reward. As it learns that the sight of the charger is a good thing, it will learn behaviors to bring the charger into view. Doing that gives it a reward, because just seeing something "good" (the charger) now acts as a reward. So it learns behaviors to seek out the things which it has learned are "good", and to avoid, or escape from, the things it has learned are "bad".
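As a minimal sketch of that value-assignment idea (an invented illustration, not anything from an actual RL robot; the states and constants are made up), a one-step temporal-difference update lets the reward received at the charger pull up the value of "charger in view", so the mere sight of the charger starts acting as a learned, secondary reward:

#include <stdio.h>

/* Hypothetical states the robot can distinguish. */
enum { WANDERING, CHARGER_IN_VIEW, DOCKED, NUM_STATES };

static double value[NUM_STATES];   /* learned "goodness" of each state */
static const double alpha  = 0.1;  /* learning rate                    */
static const double gamma_ = 0.9;  /* discount on future reward        */

/* One-step temporal-difference update: nudge the value of the state we
 * just left toward (reward received + discounted value of where we
 * ended up).  This is how "sight of the charger" becomes good. */
static void td_update(int from, int to, double reward) {
    value[from] += alpha * (reward + gamma_ * value[to] - value[from]);
}

int main(void) {
    /* Replay the same little episode many times:
     * wandering -> charger in view -> docked (+1 reward for charging). */
    for (int episode = 0; episode < 200; episode++) {
        td_update(WANDERING, CHARGER_IN_VIEW, 0.0);
        td_update(CHARGER_IN_VIEW, DOCKED, 1.0);
    }
    printf("value(wandering)       = %.3f\n", value[WANDERING]);
    printf("value(charger in view) = %.3f\n", value[CHARGER_IN_VIEW]);
    /* charger-in-view ends up valued highly even though the reward
     * only ever arrives at docking - a learned secondary reward. */
    return 0;
}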

This is where love and fear come from. Love is simply our RL hardware predicting strong future rewards in association with some sensory condition (likely caused by some object). Fear is our hardware predicting a high probability of future pain (low or negative rewards).

Anger is just an aggressive form of fear - that is, when we act in a proactive way toward our fear by trying to change the thing that is causing it (aka the thing which is causing our hardware to predict lower future rewards). Sometimes we react to this prediction of a decrease in future rewards by trying to run away from it (and we call that fear), and sometimes we react to it by trying to change it (kill it, disable it), and we call that type of behavior anger.

So, the trick to creating an intelligent machine, is not to get confused about how we talk about humans. The language is really bogus. We don't have an internal state of "anger". What we really have is "anger behavior", triggered in response to a prediction of less future rewards by an RL system.

In the end, making a machine act intelligent, or human-like, in any sense, is all about making the robot act the right way at the right times. I happen to believe the only way we are going to do that is by building a reinforcement learning system to shape the behaviors in real time. If you get it right, you will see it act in ways you will want to call "angry", and "happy", and "sad", and all the other ways we describe humans and animals, without having to script these behaviors into the robot.

Reply to
Curt Welch

I'm in that camp.

I am not inclined to agree. As I said, there are physiological changes associated with anger, and even babies can recognize emotions on others' faces before much else of their communication skills develop. Brooks talks at length about Cog, Kismet and Furby. Things with faces do pretty well at connecting to our "emotions" communications channels, and we seem prone to being able to receive such signals.

I think emotions will have to be achieved. I notice almost all animals have emotions. I'm not sure if any save man have intelligent thoughts. But that they have emotions, I am without doubt. So I think emotions are a stage we will pass through on our way to AI.

Whoa, that's a pretty big claim. I can understand you think it is complex. But is it true we can't hand code an operating system? Well, sure we can; where else did the first one come from? Of course you have things like C's written in C, and Forths written in Forth, and so on, but somebody somewhere hand coded the first one, make no mistake. Once started, we can bootstrap our tools to make better tools. But I don't think we can exclude hand-coding as a possibility just because something is complex.

I don't know, evolution, given millions of years, has made many many dumb animals. You'd have to say real intelligence is probably an accidental artifact, rather than something predictable.

Hey, this is no different than a human. Consider what the original happy meal looked like. Now think about the entrances to buildings. If the builder really wants you to go through a door, they'll put a symbol on it that makes it look like that happy meal, usually upside down, an arch or dome, or umbrella or circular symbol etc. Look at a picture of Notre Dame upside down, for instance, and tell me if you see any repeated imagery you hadn't noticed before. It's not only men, by the way. ... But that is a whole 'nother discussion. Anyway, humans seem fascinated with their original re-chargers to a degree of blatant obsession.

You have indicated this interest in learning several times now. You say you haven't read much on BBR; I have to admit to not yet getting to the reading I want to do on reinforcement learning, although what I've got on my reading list is Emergence, Evolutionary Robotics, and Introduction to Genetic Algorithms. Any of them you like? What literature supporting your position on RL do you like?

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

Well, I think you might have failed to understand what I was trying to communicate there.

I do believe we can hand-code (aka build) an intelligent machine. What we can't do is hand-code the behavior into it. Instead, we have to figure out how to hand-code a learning algorithm which creates the behavior for us.

We already have many learning algorithms that work - like all the neural networks. And we know that when trained, they create machines which no human can understand. So this concept of something being too complex for a human to understand is already well established.

The TD-Gammon program that learned to play backgammon is the strongest backgammon program in the world and plays at the same level as the best humans. But yet, it learned on its own how to play backgammon by trial and error. The entire sum of its knowledge is represented by the weights in a fairly small neural network (80 nodes). These nodes might have a few hundred weights. In other words, everything you need to know about playing backgammon at the level of the best humans is represented by a few hundred floating point numbers. That's it. If you printed it out on a sheet of paper, it would probably be less than a half sheet of numbers. But in those numbers is everything you need to know to play backgammon.

There is no human alive who could have created that small list of numbers by hand. There is no human alive who can look at those numbers and understand what they are telling us. But yet, you plug those numbers into the backgammon interpreter, and that programs it to play a very strong game of backgammon. Those numbers are the software that turns a dumb machine into a smart backgammon-playing machine.
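To make the "the numbers are the program" point concrete, here is a toy sketch (not TD-Gammon's real network, board encoding, or weights; everything here is scaled down and invented): the evaluator's entire knowledge is the contents of two small weight arrays, and swapping in different numbers gives a different player.

#include <math.h>
#include <stdio.h>

#define INPUTS  4   /* TD-Gammon used ~198 board inputs; 4 keeps the toy readable */
#define HIDDEN  3   /* TD-Gammon used dozens of hidden units                      */

/* The "program" is nothing but these numbers.  In TD-Gammon they were
 * found by self-play; here they are arbitrary placeholders. */
static const double w_in[HIDDEN][INPUTS] = {
    { 0.5, -0.2,  0.1,  0.7 },
    {-0.3,  0.8,  0.4, -0.1 },
    { 0.2,  0.1, -0.6,  0.3 },
};
static const double w_out[HIDDEN] = { 0.9, -0.4, 0.6 };

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* Evaluate a board position: one hidden layer, one output in [0,1]
 * interpreted as "how good this position looks". */
static double evaluate(const double board[INPUTS]) {
    double out = 0.0;
    for (int h = 0; h < HIDDEN; h++) {
        double sum = 0.0;
        for (int i = 0; i < INPUTS; i++)
            sum += w_in[h][i] * board[i];
        out += w_out[h] * sigmoid(sum);
    }
    return sigmoid(out);
}

int main(void) {
    const double position[INPUTS] = { 1.0, 0.0, 2.0, 1.0 };  /* fake encoding */
    printf("estimated position value: %.3f\n", evaluate(position));
    return 0;
}

Compile with the math library (e.g. cc toy_nn.c -lm). The surrounding code is trivial; the behavior lives entirely in the weight tables.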

Human behavior is created by a similar system at work in our brain. The patterns of behavior we see at the high level, are created by the interaction of billions of simple machines (neurons), connected in complex ways with complex but finely tuned weights of association. If we could dump the configuration of our brains, we would not have half a sheet of numbers as was needed just to play backgammon, we would have trillions of numbers (the effective weights of the synapse connections). And like the small list of numbers from the TD-Gammon game, these numbers control our behavior. But there's no hope in hell of a human ever understanding the numbers, or in hand-configuring a brain, by adjusting the numbers. The only hope, is to build a learning algorithm which configures the system for us, through experience.

A typical BBR type program is what you get when you attempt to hand configure such a system. It works very well for very simple tasks, but quickly becomes too complex for a human to program when trying to make it do more complex things. The BBR approach has hit a wall of complexity because we have reached the limits of what we can understand with our very limited human brain. To make even small gains in what the robots are doing, we have to deal with exponentially greater complexity in our code.

It's a problem I've been looking at for about 25 years now. Progress has been very slow. :)

My interest however is for AI - to ultimately figure out how to build machines that can do anything a human can do. If your goal is to figure out the best way to program a robot today (which is true for most people in this group), then my obsession with learning algorithms would only prevent you from getting your robot doing something useful.

It's because the little I have seen tells me they are trying to hand-code behavior instead of building learning algorithms, so it's of limited use to me.

Yeah, all those are headed in this same direction. Genetic Algorithms are a very coarse-grained approach to reinforcement learning. But the basic algorithm and power are the same. You need a system to try variations of a design, and it must have some way to evaluate each variation. You keep the best, and give up on the worst, and then continue to explore variations of the best. It's a directed random search. It's trial and error learning.
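The keep-the-best-and-vary loop can be sketched in a few lines (this is the most stripped-down variant, a single-parent hill climber rather than a population GA with crossover; the "design" and "fitness" here are made-up stand-ins for a real robot design and a real trial run):

#include <stdio.h>
#include <stdlib.h>

/* Toy "design": a single parameter.  Toy "evaluation": closeness to a
 * target value the algorithm never sees directly.  Stand-ins for a
 * real robot design and a real trial run. */
static double fitness(double design) {
    double err = design - 7.3;
    return -err * err;                 /* higher is better */
}

static double mutate(double design) {
    return design + ((double)rand() / RAND_MAX - 0.5);  /* small random tweak */
}

int main(void) {
    double best = 0.0;                 /* initial guess at a design    */
    for (int generation = 0; generation < 1000; generation++) {
        double variant = mutate(best); /* try a variation              */
        if (fitness(variant) > fitness(best))
            best = variant;            /* keep the best, drop the rest */
    }
    printf("best design found: %.3f\n", best);
    return 0;
}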

Reinforcement learning is the same basic concept, but the implementation tends to be very different.

I've not seen any literature supporting my position. There is a ton of literature on reinforcement learning algorithms, but none of it is as fanatical about it being so important as I believe it is.

The problem is that these ideas started back in the first half of the last century with various ideas from psychology which led to Skinner's work in Behaviorism. Skinner was fanatical about it, and many of those that still believe in the behaviorist approach are still fanatical about it. However, Skinner lost the battle in trying to make people understand the full importance of conditioning in explaining human behavior. Most people currently believe that even though humans are conditioned, that only explains our most basic behaviors. They believe it might explain why we can train a dog to sit, but that human behavior is far too complex to be explained in such simple terms. For example, it's common for people to believe that human language behavior is what separates us from the animals, and as such, that we must have language hardware that goes way beyond the simple ideas of operant and classical conditioning. Noam Chomsky, the well known linguist (basically the founder of the field), is the big champion of the anti-Skinner crowd.

Skinner wrote the book Verbal Behavior, to address the issues of how all language behavior is explained in terms of conditioning. But Chomsky, wrote an argument against that book, that left the majority of people believing Chomsky was right and Skinner was wrong. I happen to agree with Skinner.

Skinner lost the battle to make even his peers in psychology understand his argument. And though Skinner's work and behaviorism in general were very popular in the '50s, it's mostly gone downhill since then. The commonly accepted view seems to be that it's too simplistic. It's seen as greedy reductionism.

The real reason behaviorism and all it implies has lost favor is that it's failed to produce anything useful since the 50's. Since the beginning of AI work, people have attempted to use the ideas of reinforcement learning to build smart machines - but they have all suffered from just what people expected - they seem to be too simple. They can only learn very simple things, but that seems to be where it stops.

The failure however, I believe, was just an implementation issue. They have all failed to build the correct type of reinforcement learning machine. It's like trying to build a flying machine and getting the weight-to-lift ratio wrong because you are using too heavy an engine, and when it fails to fly, you decide the approach is just wrong (because that's what everyone is telling you anyway).

Reinforcement learning algorithms however continue to be researched, and in the past few decades, improvements have been made. This has caused a rebirth of excitement in the approach. However, even the best algorithms are still far away from producing anything that looks like intelligence. So, most people still see reinforcement learning as just one type of learning that has some limited applications, but which can't hope to be the foundation of human intelligence. I think those people just lack foresight and wisdom. I think Skinner was right 50 years ago, and one day we will find the correct implementation of a reinforcement learning machine that's going to get everyone excited again about the ideas and the approach, and in the end, people are going to realize that humans are just fairly simple reinforcement learning machines, and that all our behavior, including our language and "thought" behaviors, is just conditioned into us in the same way we condition a dog to sit on command.

For a review of what's been done in reinforcement learning algorithms for AI, here's a good book:

Reinforcement Learning: An Introduction, by Richard S. Sutton and Andrew G. Barto. A Bradford Book, MIT Press, Cambridge, MA, 1998.

The whole thing is on-line at Sutton's web site so you can read it without having to buy the book (or just scan a few chapters).

Unfortunately, the algorithms in the book only work for state signals that have what is known as the Markov property. What this means is that the algorithms only work if the sensory signals tell the system the complete state of the environment. For example, to learn to play tic-tac-toe, the sensory signals must tell you the full and complete board position (which is trivial to do). They also assume that the number of unique states the environment can be in is small and finite - small enough that the algorithm can track a "value" for every possible state the environment can be in. This means that to use these RL algorithms to learn to play a game like tic-tac-toe, you need an array large enough to hold a number for every possible board position. This is possible for a very trivial environment like tic-tac-toe, but quickly becomes impossible for anything of greater complexity.
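To show what "a value for every possible state" means in practice, here is a sketch in the spirit of the tic-tac-toe example in Sutton and Barto (simplified illustrative code, not theirs): the whole value function is just an array indexed by board position, which is exactly the thing that stops scaling once the state space gets large.

#include <stdio.h>

/* Tic-tac-toe: 9 cells, each empty/X/O, so 3^9 = 19683 possible board
 * encodings.  Tabular RL keeps one value per encoding -- feasible
 * here, hopeless for any real-world robot environment. */
#define NUM_STATES 19683

static double value[NUM_STATES];     /* V(s) for every board       */
static const double alpha = 0.1;     /* learning-rate step size    */

/* Encode a board (0=empty, 1=X, 2=O per cell) as a base-3 index. */
static int encode(const int board[9]) {
    int index = 0;
    for (int i = 0; i < 9; i++)
        index = index * 3 + board[i];
    return index;
}

/* After a move takes us from one board to the next, pull the old
 * board's value toward the new board's value (a TD-style update in
 * the spirit of the book's tic-tac-toe player). */
static void update(const int before[9], const int after[9]) {
    int s  = encode(before);
    int s2 = encode(after);
    value[s] += alpha * (value[s2] - value[s]);
}

int main(void) {
    int empty[9] = {0};
    int xwin[9]  = {1,1,1, 2,2,0, 0,0,0};   /* X across the top row      */

    value[encode(xwin)] = 1.0;              /* a winning board is worth 1 */
    for (int i = 0; i < 50; i++)
        update(empty, xwin);                /* exercise the table update  */

    printf("learned value of the starting board: %.3f\n", value[encode(empty)]);
    return 0;
}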

Robotics however uses sensory signals that are very non-Markov. This means that the sensory data doesn't tell the robot the full state of the environment. It only gives the robot a small fraction of the data needed to know what state the environment is in. In addition, even if the robot knew the full state of the environment, any real-world environment has far too many states to allow these simple RL algorithms to track a different value for each possible state (you would really have to track the state of the entire universe). It also makes learning impractical, because these algorithms require the system to visit the same state many times before it will correctly learn the true value of that state. Robots interacting in the real world tend to never see the exact same state twice.

So nothing you find in a good RL book like the one above will give you any algorithms which can solve any real-world robotics problems. The world is still waiting for people to find and develop these better algorithms. That's what I've been exploring for many years now as a hobby - kinda like a crossword puzzle I keep picking up and working on every few months or years, trying to make progress on it.

There are many people researching, and developing, better RL algorithms. I've seen mention of various papers on the field, but haven't actually gotten my hands on any papers that seem to be useful to me.

Reply to
Curt Welch

This isn't geometry, where you can set out all the rules by number and present them in a linear fashion. Any evolving methodology will have keypoints from which you then draw different conclusions that may or may not be written down, but you have to first recognize the keypoints. Assuming we're sticking with Brooksian behavior, then a keypoint is that a behavior is a reaction to a stimulus. Brooks never describes it as anything other than that. I don't think I've come across any other author or researcher in the field that has offered a counter concept.

Another important Brooks keypoint is that all behaviors must be observable. That's why lower-order emotional states -- such as anger, love, fear, etc. -- are discarded. It's not that some machine will *never* be capable of these emotions (though we will never be absolutely sure, just like we can't be sure our dog "loves" us), but in his quest to simplify AI he cuts off at the head the notion of emotionalism as part of behaviors. Where would "fast, cheap, and out of control" fit into this?

I think if you reread Brooks you may find that his overall aim is to simplify AI, not complicate it. If he has inputless behavior states that's an enormous complication because it suggests a level of randomness/unknown/undefined states that has to then be controlled by a much smarter dispatcher. You're now back to the things other AI researchers have tried to tackle, with no or limited success.

-- Gordon

Reply to
Gordon McComb

We know that intelligences have "hidden states". We also know that some of those states can come about without something we might perceive as an input. The trick here is to realize that any system, if known, can be modeled. I don't want to ramble, but I suppose that this might end up as a sort of haphazard set of examples and points.

First, let's encounter the hidden states issue. In any system, there can be a set of values or variables that contain important information but are not necessarily obvious to an outside observer. One such concept can be seen in fuzzy logic thermostats or systems that keep track of past states. History is one such hidden state, and we cannot always predict what a record of history in a system might result in, much less how a system will react to it. A perfect example is a machine that is keeping tabs on what its outputs have yielded, and then goes on to modify its outputs based on the results. Feedback systems do this in many cases. Looking at a thermostat, we could not tell what it had done in the past or what the temperatures were, but it might adjust its control outputs to best minimize energy consumption and it might even take time of day into account. These are really simple devices, but the hidden variables can be the value of its internal clock, some preprogrammed schedule or sequence of events, and the history it has recorded.

What about "input-less states"? Some systems can perform actions based on random numbers. Humans seem to. So can game characters and robots with randomness in their subroutines. Some may question my inclusion of game characters, but recognize that these nonexistent things are a type of robot, one that lives in a conceptual space only. The difference between these and real robots is tenuous: you could create a software bot that actually outputs control codes to a robot body. And isn't this what those of us with advanced robot projects actually do? I use this concept all the time, because it provides the ability to create really interesting "personalities" and then embed them in actual working hardware. So I include software bots and video game characters as "softbots" and will treat them as being in the same category as real robots, because they can easily be brought into the real world. Just add metal and batteries.

But states can result without an external stimulus very easily. The random example above is the trivial case. In reality, we can expect that there are internal states that can be reached due to a threshold of some sort being achieved, or a timer expiring, or the battery running low, just to name a few examples. The most interesting behaviors will result from such invisible states and will provide that spontaneous, almost lifelike set of actions that people find fascinating in animals or other people.

Emotions are far too oversold as being difficult or mysterious. They are simple internal states, but there are some very distinct components to them that are easy to master in software. Let's first admit that emotion has nothing to do with what we "think". Under anesthetic or at the borders of sleep, we can have thoughts of "gee, I should be mad or scared but I don't feel that way". This is the first and most basic clue to the true nature of emotions. What we call emotion is nothing of the sort: it is our body's response to emotion that we sense, not the emotion itself. Panic is an ethereal thing, stored in a register in our brains, and its effect is to dump adrenaline and other chemicals into our bloodstreams. Our bodies respond with tachycardia, sweating palms, rapid breathing, tightening of the abdominal muscles... but what we feel is the body sensation, the visceral component, not the panic itself! In other words, virtually everyone calls these visceral responses panic when in fact they are nothing of the sort. They are the diagnostic, the symptom, not the actual thing. We are getting old news and calling it the thing. Panic is a value stored in your brain that decays with time and with resolution of the situation. The same is true for fear, boredom, love, and happiness. We never "feel" the emotion itself, only the response of the body to its chemistry.

Now, that having been said, we can quantify emotion as "an internal state that reflects a set of conditions". This is overly broad, but it's a first shot at a definition. Let me provide a perfect example of an emotion that you can produce in a machine, one you can fully know that the machine can use and respond to. In our experience, we are constantly predicting what comes next. We have a set of expectations that we produce based on experience. As we go through the day, we expect that things will move along in a certain fashion and that we know what comes next. But imagine picking up a sun-warmed rock and finding that it is light instead of heavy, and that it is ice cold and squishy instead of hot and hard. All at once, some little circuit in our heads goes "whoa, that's not right!" We experience surprise because (and this is the crux) our expectations were violated or not met.

Software can easily duplicate this. The software for a robot can have expectations, and when they are not met, a word in memory labeled "surprise" can be set to a high value. The robot should have some sort of fallback position, and when this value is high, it should invoke that subroutine. As resolution of the unexpected is achieved, the value of surprise lowers in a predictable manner. Did the robot "feel" surprise? Perhaps. It will depend on how you handle the exceptions internally. If the software can keep track of how much out-of-the-ordinary processing it is doing, and how large the violation of expectation was, then yes, I would say that it could feel surprise. In particular, if it also keeps a history so it can look back and identify times when it was surprised, or hungry (low batteries), or scared (recovering from a near fall or some other danger), then our cleverness is the only limit to what we can put in its little software brain.

But to truly feel these things, it has to be able to map them to its body model, and this is why machines do not yet have emotions that we can accept as being emotions. Almost nobody creates a body model in the robot's software to map sensations and emotions to! Without this basic item, we cannot yet claim to have achieved emotion or true sensation in a robot.
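That surprise mechanism translates almost directly into code. The following is only a sketch of the idea as described (prediction versus observation raises a "surprise" word in memory, which invokes a fallback routine and then decays); the sensors, numbers, and decay rate are all invented:

#include <math.h>
#include <stdio.h>

/* The emotion is just a value in memory: it jumps when an expectation
 * is violated and decays as the situation resolves. */
static double surprise = 0.0;

static void fallback_behavior(void) {
    puts("  -> stop, back off, re-examine the object");
}

/* Compare what we predicted against what the sensor reports; a big
 * mismatch raises the surprise register. */
static void check_expectation(double predicted, double observed) {
    double violation = fabs(predicted - observed);
    if (violation > surprise)
        surprise = violation;
}

int main(void) {
    /* Expected a heavy, hot rock; the "sensors" report light and cold. */
    check_expectation(/* predicted weight kg */ 2.0,  /* observed */ 0.2);
    check_expectation(/* predicted temp C    */ 40.0, /* observed */ 5.0);

    for (int tick = 0; tick < 6; tick++) {
        printf("tick %d: surprise = %.2f\n", tick, surprise);
        if (surprise > 1.0)
            fallback_behavior();   /* high surprise invokes the fallback   */
        surprise *= 0.5;           /* decays as the unexpected is resolved */
    }
    return 0;
}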

Cheers!

Sir Charles W. Shults III, K.B.B. Xenotech Research

321-206-1840
Reply to
Sir Charles W. Shults III

Can I read that to say "a behavior is a reaction to an external stimulus"? That is, we can't have a hidden stimulus?

I think the danger of such an approach would be reading into Brooks what isn't written and isn't there. But let's go with your premise above. I personally think Brooks' behavior-based approach is a world more complex than Jones'. Jones is simplified and linearized for the sake of presentation, and I'm glad Jones did this. But I still think there's a world of biologically complex possibilities with Brooks' more explicit subsumption, where he has inhibit lines and subsumption lines running around almost like feedback, which can greatly complicate the nature of responses... however, let's look at a very, very simple example and see what we can do with it.

Bump switch: We have a bump switch, so we can do escape behaviors. Being a real-world bump switch, it is a bit sticky. How shall we make our robot robust, knowing that sometimes our bump switch will stick for a while before releasing? When we back up, we notice the bump switch doesn't disengage as it should. Is this noticed condition a stimulus, or since it is all internal, is it not allowed to be called a stimulus? Without Brooks' lines, which can inhibit the sensing of the stuck switch, and with only Jones' "trigger in the behavior" design, how can we go on operating? Or are we required to always do constant escapes?
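One way the stuck-switch case might be framed inside a Jones-style trigger-in-the-behavior design (a hypothetical sketch, not anything from Jones or Brooks; the names and the 20-tick figure are invented): derive an internal "switch stuck" condition from the raw switch plus how long we have been backing up, and let a higher-priority behavior trigger on that internal stimulus so the robot stops doing constant escapes.

#include <stdbool.h>
#include <stdio.h>

/* Raw external stimulus and internally derived stimulus for a sticky
 * bump switch.  Hypothetical sketch only. */
static bool bumper_pressed;        /* raw switch reading            */
static int  ticks_backing_up;      /* how long we've been reversing */

/* Derived, purely internal stimulus: the switch has stayed closed
 * even though we have been backing away for a while. */
static bool switch_stuck(void) {
    return bumper_pressed && ticks_backing_up > 20;
}

static void escape(void)        { puts("escape: back up and turn"); }
static void ignore_bumper(void) { puts("bumper judged stuck: resume cruise, ignore it"); }
static void cruise(void)        { puts("cruise: drive forward"); }

/* One pass of a fixed-priority loop: the stuck-switch behavior sits
 * above escape, so its internal trigger can override the raw one. */
static void arbitrate(void) {
    if (switch_stuck())       ignore_bumper();
    else if (bumper_pressed)  escape();
    else                      cruise();
}

int main(void) {
    bumper_pressed = true;  ticks_backing_up = 5;   arbitrate();  /* normal escape  */
    bumper_pressed = true;  ticks_backing_up = 30;  arbitrate();  /* stuck detected */
    bumper_pressed = false; ticks_backing_up = 0;   arbitrate();  /* clear: cruise  */
    return 0;
}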

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse

Absolutely not!

You can absolutely have a "hidden" (i.e. internal) stimulus.

A stimulus is a stimulus. What is "hidden" is relative: a line follower uses sensory systems that are invisible to the human eye, for example. We feel pain from an internal injury the same way a robot might get an input that its batteries are low, whereupon its "find food" behavior kicks in. Watching the robot, we may be aware neither that a given proprioceptor has been triggered, nor that the machine is currently in the "find food" behavior. We will only realize this after it has docked with its charger.
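In code, the internal and external cases really are the same test (an invented sketch; the sensor values and thresholds mean nothing in particular):

#include <stdio.h>

/* A stimulus is a stimulus: the trigger test is identical whether the
 * number comes from an external sensor or from inside the machine. */
static double read_ir_line_sensor(void) { return 0.8; }  /* external */
static double read_battery_volts(void)  { return 6.4; }  /* internal */

static int triggered(double reading, double threshold, int fire_when_below) {
    return fire_when_below ? (reading < threshold) : (reading > threshold);
}

int main(void) {
    if (triggered(read_ir_line_sensor(), 0.5, 0))
        puts("follow-line behavior active (external stimulus)");
    if (triggered(read_battery_volts(), 6.8, 1))
        puts("find-food behavior active (hidden, internal stimulus)");
    /* An observer watching the robot sees neither reading, only the
     * eventual docking with the charger. */
    return 0;
}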

-- Gordon

Reply to
Gordon McComb

Yeah, I basically agree with all your points.

My belief is that human-like behavior is created by generic sensor-processing algorithms and behavior-learning algorithms. When you create a generic system that simply analyzes all the data it receives equally, then the body is as much a part of the environment as the rest of the environment is. So, when such a generic system ends up creating models of the environment, it's going to be creating a body model, or a model of self, at the same time. As we learn the best ways to react to the environment, we will, at the same time, be learning how to react to our own body. So as you say, when we sense the changes in our body that result from various physiological processes at work, we learn to identify, and react to, them in different ways. It's really no different from how we sense that our car is not running normally and react to that. Our body really is just like a car we spend our life driving around in - it's just a car we (aka the brain) can't get out of.

The brain is also able to sense its own actions, since the brain is part of the same environment it's trying to sense. So, like it does with all sensory data, it learns how to best react to its own actions. That simple and obvious ability is what creates our natural sense of self.

What you said about surprise I agree with, in that the brain is always trying to predict what will happen next, and our behavior becomes a function of the results of those predictions. We act one way when our predictions are met, and we act another way when our predictions fail. However, how we react in both cases, I think, is learned. We learn through experience how to react when things unfold as we expect them to, but when something happens that we didn't expect, we generally don't know how to act. That is, we have had so little experience with this very unusual condition (or with anything even close to it), we tend to basically not know what to do. We tend to stand around with our mouth open and do nothing, simply because the brain doesn't have any good answers about what it should do. The stuff we see all the time is the stuff we expect, and since we have seen it a lot, we have had a lot of experience trying different ways to react to it, and have long since learned good ways to deal with the most expected situations.

So surprise is a function of how much our prediction system failed to accurately predict what was about to happen, and when it fails, we have to fall back to some default behavior for dealing with the unexpected - which often is a fear reaction of some type because the less we understand what is happening, the higher the probability that something bad will happen to us - and that is something we have learned through experience. A typical reaction is an attempt to escape from the unexpected - like for your cold rock example, it's likely we would drop it because we have learned natural reactions to try and escape from the unexpected before we get hurt.

But my main point, is that I believe human behavior is all learned, so both how we react to the expected, and how we react to the unexpected, is all learned through a life time of experience.

How you program this into robots in a way that is useful for practical robotic applications I don't yet know. But I'm working on that.

Reply to
Curt Welch

Perhaps the word "Behaviour" is a little too broad when you analyse it too closely.

For me, I'm happy to think in terms of "Cruise Behaviour", "Avoid Behaviour", "Mapping Behaviour" etc...

Think the dictionary def sums it up nicely:

be·hav·ior (b-hvyr) n.

1. The manner in which one behaves.
2. a. The actions or reactions of a person or animal in response to external or internal stimuli.
   b. One of these actions or reactions: "a hormone . . . known to directly control sex-specific reproductive and parenting behaviors in a wide variety of vertebrates" (Thomas Maugh II).
3. The manner in which something functions or operates: the faulty behavior of a computer program; the behavior of dying stars.


Reply to
markwod

Something that occurred to me is that "Behaviour" is now a commonly accepted term to describe a robot's (usually basic, "useful") functions, and we all know what it means - the outcome of an action/reaction, cause-and-effect combination.

Perhaps, when we dwell upon it too much it's a little too abstract?

When I describe myself I wouldn't say I have an "Avoid the door behaviour"; possibly the word "Instinct" would be a better fit??? And behaviours come out of a number of simultaneous instincts triggering?

Anyway, that's my 2 pence...

Mark


Reply to
markwod

Well, the definition being a little too abstract was indeed my original reason for posting. The developing theme of my recent posts was that the lack of precision in the use of the word behavior is possibly one of the reasons Behavior Based Robotics in specific, and AI in general, is not making progress. My position is that if you don't have a proper language to speak about a thing, it is almost impossible to think about the thing in the right way.

Yes, BBR has a definition of a behavior and how it is used in BBR, per the opening post and reference to Jones. However, if that definition leads to errors in thinking about problems, then it might be a good thing to review or modify it. If the change is too big to live inside BBR as a modification, and we need a new name that allows behaviors to be thought of in a different way... okay. But my feeling is not that BBR needs to be thrown out, just that the definition of behavior in BBR needs modification.

What are the issues I have with Jones's definition then?

He requires inputs, and he requires a trigger.

His first example of a behavior, Cruise, has no inputs and no triggers. In "Flesh and Machines", Brooks describes Genghis' first "Stand" behavior as having no inputs and no triggers.

To me there are obvious problems with a definition that fails the first example in both books, and I suspect several concepts here are muddled together.

Let me see if I can draw attention to what I see as muddled. Look at Jones' example diagram. There are inputs for a transfer function, and there is an input for a trigger. Look at the input to the trigger and consider it separately. Isn't the trigger really just a transfer function with a digital output? Doesn't the trigger qualify as a behavior itself (with an always-on trigger, as Jones says about Cruise)? So doesn't the definition of a behavior require the behavior being defined to contain a valid behavior? But somehow its transfer function being digital, and a qualification of an analog one, is supposed to mean it is not a behavior?

On the flip side, the trigger may cause an AFSM to advance, which doesn't look at the stimuli or trigger at all after initiation.

In the Servo Behavior the trigger has one purpose, qualifying the output, while in the Ballistic Behavior the trigger starts a sequence in which both trigger and stimuli are ignored. I do not see these two functions as similar enough to be called the same thing.
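The distinction can be shown in a few lines (my reading of it sketched in C, not Jones' diagram itself; all numbers are invented): the trigger is itself just a transfer function with a digital output, a servo behavior keeps consulting stimulus and trigger every pass, and a ballistic behavior uses the trigger only to launch a sequence that then ignores both.

#include <stdbool.h>
#include <stdio.h>

/* The trigger is itself just a transfer function with a digital
 * output: stimulus in, true/false out. */
static bool trigger(double stimulus) { return stimulus > 0.5; }

/* Servo behavior: output is recomputed from the stimulus on every
 * pass, and the trigger gates it each time. */
static void servo_behavior(double stimulus) {
    if (trigger(stimulus))
        printf("servo output = %.2f\n", 2.0 * stimulus);  /* analog transfer fn */
}

/* Ballistic behavior: the trigger only starts a fixed sequence; once
 * running, stimulus and trigger are ignored until it completes. */
static int ballistic_steps_left = 0;

static void ballistic_behavior(double stimulus) {
    if (ballistic_steps_left == 0 && trigger(stimulus))
        ballistic_steps_left = 3;                /* arm the sequence */
    if (ballistic_steps_left > 0) {
        printf("ballistic step %d (stimulus ignored)\n", ballistic_steps_left);
        ballistic_steps_left--;
    }
}

int main(void) {
    double stimulus[] = { 0.9, 0.1, 0.1, 0.1, 0.9 };
    for (int t = 0; t < 5; t++) {
        printf("t=%d  ", t);
        servo_behavior(stimulus[t]);
        ballistic_behavior(stimulus[t]);
    }
    return 0;
}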

I've just got my Arkin "Behavior Based Robotics" back. I think his approach to behavior seems more carefully thought out, but it is developed from animal behavior to robot behavior over a couple chapters, so I'm still looking for the definition I think is most succinct. Note I did post a response from Arkin on what he thought a behavior was, elsewhere.

-- Randy M. Dumse

Caution: Objects in mirror are more confused than they appear.

Reply to
RMDumse
