What is a behavior?

Perhaps the word "behaviour" is a little too broad when you analyse it too closely.
For me, I'm happy to think in terms of "Cruise Behaviour", "Avoid Behaviour", "Mapping Behaviour", etc.
I think the dictionary definition sums it up nicely:
be·hav·ior n. 1. The manner in which one behaves. 2. a. The actions or reactions of a person or animal in response to external or internal stimuli. b. One of these actions or reactions: "a hormone . . . known to directly control sex-specific reproductive and parenting behaviors in a wide variety of vertebrates" (Thomas Maugh II). 3. The manner in which something functions or operates: the faulty behavior of a computer program; the behavior of dying stars.
Curt Welch wrote:


That's right. It's simple physics. All action (behavior) is motion of matter. All motion represents the transfer of energy. Energy is conserved, so in this universe, you can't have an action without a cause. The energy that caused the action had to come from somewhere.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

So all behaviors have actions? And there are no behaviors without actions?
(I would tend to agree, but I want to be certain there isn't some odd circumstance where a behavior doesn't have an action. I haven't explored all cases. It might be that something like reaction to hidden anger, a newly formed grudge, isn't externally expressed, but then, I'd be inclined to argue that it is a change of state, and something added to memory, for future reference in selecting future behaviors.)
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.

Well, that depends on whether you are limiting your idea of "action" to external actions (wheels spin, etc.). In the most general sense, I consider electrons moving inside the controller to be actions as well.
But the normal use of "behavior" in English tends to be limited to acts we can see with our eyes, which would mean external actions in a robot and not the internal movement of electrons inside the controller and other circuits. Behavior-based robotics, however, I think (remember, I haven't read any books on it) treats behavior in terms of the outputs produced by logical behavior modules - which many times, but not all times, connect to the outside world.

--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Hi,
Jones's definition of behavior is dictated by the requirements of implementing the subsumption priority algorithm. It is not intended, I think, as a generic definition in the sense that you are defining it.
A "behavior" as in "Behavior Based Robotics" is based on an underlying subsumption-style architecture which requires a trigger to determine when a behavior output is active, and therefore subsuming any lower priority behaviors, and when the output is inactive, and therefore allowing lower priority behaviors to control the robot.
Without the triggers, only the highest priority behavior would ever control the robot. It would never "release" control. A particular low priority behavior only controls the robot when all of the higher priority behaviors are inactive -- i.e., not triggered.
The triggers are an integral part of the subsumption process.
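The fixed-priority, trigger-driven arbitration dpa describes can be sketched in a few lines. This is a minimal illustration, not code from Jones's book; the behavior names, sensor fields, and thresholds are made up for the example.

```python
class Behavior:
    """A behavior bundles a trigger test with the action it requests."""
    def __init__(self, name, trigger, action):
        self.name = name
        self.trigger = trigger   # callable: sensors -> bool (the trigger)
        self.action = action     # callable: sensors -> motor command

def arbitrate(behaviors, sensors):
    """Highest-priority triggered behavior wins; the rest are subsumed.
    Without triggers, the first behavior would never release control."""
    for b in behaviors:          # list is ordered highest priority first
        if b.trigger(sensors):
            return b.action(sensors)
    return None                  # nothing triggered: robot idles

# Example: escape subsumes avoid, which subsumes cruise.
behaviors = [
    Behavior("escape", lambda s: s["bumper"],        lambda s: "back-and-turn"),
    Behavior("avoid",  lambda s: s["ir_range"] < 20, lambda s: "veer"),
    Behavior("cruise", lambda s: True,               lambda s: "forward"),
]

print(arbitrate(behaviors, {"bumper": False, "ir_range": 50}))  # forward
print(arbitrate(behaviors, {"bumper": False, "ir_range": 10}))  # veer
print(arbitrate(behaviors, {"bumper": True,  "ir_range": 10}))  # back-and-turn
```

Note how "cruise" triggers unconditionally, so it controls the robot only when everything above it is inactive, exactly as described above.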
dpa
RMDumse wrote:

dpa wrote:

In context, I think it was intended to be a generic definition. It is at the opening of Chapter 3, titled "Behaviors," and is worded "Primitive behaviors, as we use the term in behavior-based robotics".

This is only the case for one type of arbitration. Jones shows many possible ways to arbitrate behaviors, and Arkin even more. In fact, Jones's Chapter 4 is all about arbitration, and Brooks's subsumption method is only one, listed as a minor heading under "Other Arbitration Schemes" on p. 93.

I very much agree with this comment. But that triggers are an integral part of the subsumption process does not mean triggers are an integral part of the behaviors subsumed in that process.
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

That is not so. All the schemes described in Jones's Chapter 4 use the same basic structure: a series of behaviors feeding an arbiter.
If the higher-priority behaviors never release the arbiter, none of the lower behaviors can control the robot. That's how the arbiter works. The sole exception to this is the "motor schema" method.
This is why each "behavior" as in "Behavior Based Robotics" must include a threshold of some sort and a trigger, such that at some times it asserts its outputs and subsumes other, lower-priority behaviors, and at other times it does not. That arbitration is the very core of behavior-based robotics.
dpa
dpa wrote:

In the last bullet of the summary on p. 103, Jones says, "Other types of arbitration schemes include variable priority, motor schema, and least commitment." And as mentioned, Arkin discusses many other possible ways to combine behaviors. "Motor schema" is far from the sole exception.

But this is largely my point.
Fixed priority comes from having a trigger that requests control at a fixed level, and a trigger comes from the necessity of having a fixed behavior to assign a level.
If you have a trigger built into the behavior, you don't need an intelligent arbiter. You simply have the highest priority winning. The arbiter is completely deterministic by its wiring, and therefore unintelligent. The intelligence is built into the behavior in the form of the trigger, and the wiring priority to the arbiter.
In other schemes, the arbiter can be intelligent, choosing priorities based on other inputs, state, learning; or even caprice for the sake of learning, for that matter. (My reaction is to suggest behaviors and learning are incompatible, that is, behaviors are chosen based on learning, not that behaviors are modified based on learning.)
Now I will accuse you of being led by your bias. Because you assume a fixed arbitration scheme as the only one valid, you insist a trigger must exist in the behavior. If you allow for other than fixed arbitration, then the trigger belongs outside the behavior, either as a stand-alone component or possibly in the arbiter.
I see the more general case as behavior and trigger being separate items; lumping them into one allows for only one easily useful arbitration scheme: fixed priority.
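Randy's separation can also be sketched: keep the behaviors as pure actions, keep the trigger logic stand-alone, and let an arbiter rank the eligible behaviors however it likes (fixed wiring, learned preferences, or otherwise). Everything here is a hypothetical illustration of the architecture being argued for, not code from any of the books discussed.

```python
# Behaviors are pure actions: no triggers built in.
def cruise(sensors):
    return "forward"

def avoid(sensors):
    return "veer"

behaviors = {"cruise": cruise, "avoid": avoid}

def triggered(sensors):
    """Stand-alone trigger logic, living outside any behavior."""
    active = {"cruise"}                  # cruise is always eligible
    if sensors["ir_range"] < 20:
        active.add("avoid")
    return active

def learned_arbiter(active, preferences):
    """An arbiter free to rank eligible behaviors by (possibly learned)
    preference, rather than by fixed wiring order."""
    return max(active, key=lambda name: preferences[name])

prefs = {"avoid": 2.0, "cruise": 1.0}    # could be updated by learning
sensors = {"ir_range": 10}
winner = learned_arbiter(triggered(sensors), prefs)
print(behaviors[winner](sensors))        # veer
```

Because the triggers and priorities are no longer baked into the behaviors, swapping in a different arbiter (or different preferences) needs no change to the behaviors themselves.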
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
Howdy,
Randy wrote:

Nonsense. I've assumed no such thing.
You asked why Jones defines a trigger as a fundamental part of a behavior "as the term is used in behavior-based robotics."
Let me repeat that last part for effect: AS THE TERM IS USED IN BEHAVIOR-BASED ROBOTICS.
Now, why do you suppose there is that caveat?
Oh well, it's just too hard to get a signal through the noise.
best dpa
RMDumse wrote:

"Variable priority" and "least commitment" are both still priority schemes. And they still require behaviors to sometimes be active and sometimes inactive. Each behavior is still associated with a threshold and a trigger.
As an aside, any behavior can be continuously enabled by setting its threshold to 0, or disabled by setting it to a large number. So the "cruise" behavior previously discussed that has "no input" is just like all the other behaviors, with its threshold set to 0 (continuously active).
"Motor schema" is more of a weighted continuum, like the previously discussed steering algorithm for jBot, and is continuously active. In Jones's Chapter 4, it is the sole exception to the more general arbiter scheme.
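The "weighted continuum" contrast can be made concrete: instead of one winner-take-all behavior, motor-schema arbitration sums weighted vector contributions from all behaviors at once. The vectors and gains below are invented for illustration.

```python
def blend(vectors_and_weights):
    """Motor-schema-style combination: sum each behavior's (vx, vy)
    contribution scaled by its gain. Every behavior is always active;
    there is no trigger and no subsumption."""
    x = sum(w * vx for (vx, vy), w in vectors_and_weights)
    y = sum(w * vy for (vx, vy), w in vectors_and_weights)
    return (x, y)

move_to_goal = (1.0, 0.0)     # unit vector toward the goal
avoid_obstacle = (0.0, 1.0)   # unit vector away from an obstacle

# The robot steers along the weighted resultant of both behaviors.
print(blend([(move_to_goal, 0.8), (avoid_obstacle, 0.5)]))  # (0.8, 0.5)
```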
<snip>

In the behavior-based paradigm, you would instead have a separate behavior with its own trigger for each instance. The behavior itself is only a data structure with pointers to thresholds and variables and actions and so forth, so it is easy to have as many as you want.
So, one might test a bend sensor against a threshold and close a gripper if the threshold is exceeded, grasping an object in the gripper. SR04 works that way. Another behavior might test an IR distance measurement and close the gripper if some threshold is exceeded, to try to touch a close object. A third might detect that a watchdog timer has exceeded some threshold, determine the robot is stuck, and close the gripper to try to push the robot away from an obstruction.
The point is that all three behaviors have the same action -- close the gripper -- but for three different trigger conditions.
It is the combination of the trigger condition and the action that form the behavior. That's what a behavior is, as the term is used in behavior-based robotics.
Note that the threshold could be testing some internal variable, and the action could be anything, and may not be observable.
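dpa's three gripper behaviors can be written out as the data structures he describes: the same action paired with three different trigger conditions. The field names, sensor keys, and threshold values are illustrative, not taken from SR04's actual code.

```python
from collections import namedtuple

# A behavior is just a record: a trigger condition plus an action.
Behavior = namedtuple("Behavior", ["name", "sensor", "threshold", "action"])

def close_gripper():
    return "gripper closed"

behaviors = [
    Behavior("grasp",  "bend",     100, close_gripper),  # object felt in gripper
    Behavior("touch",  "ir_dist",   80, close_gripper),  # object detected close
    Behavior("escape", "watchdog", 500, close_gripper),  # robot appears stuck
]

def run(behaviors, sensors):
    """Fire every behavior whose sensor reading exceeds its threshold."""
    return [b.action() for b in behaviors if sensors[b.sensor] > b.threshold]

print(run(behaviors, {"bend": 120, "ir_dist": 10, "watchdog": 0}))
# only the "grasp" trigger fires -> ['gripper closed']
```

All three records share one action; what distinguishes them, and what makes each a distinct behavior, is the trigger condition.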
dpa

A trigger is nothing more than another sensory input. It's just that in that context, it's common to have a separate input that works as an enable input. Whether you call it a trigger or a sensor is arbitrary. Apparently they simply find it convenient to think of it as a trigger.
For example, an AND gate with two inputs can also be thought of as a data switch with one data input and one enable signal. It's the same device either way you think about it. Neither way of describing it is right and neither way is wrong. It's just two alternate ways to describe the same thing.
It seems to me you are working too hard to try to understand what a behavior is. Behavior is just a word. Behaviors are not the things you need to understand. The hardware is what you need to understand. If you understand it, at all levels of abstraction, then you know all you need to know.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Curt Welch wrote:

If someone said the distributor is part of the transmission, because it is driven off a gear train, would it matter? Transmission is just a word; transmissions are not things you need to understand; the drive train is what you need to understand. ... Yet, funny, when I take the transmission off the engine, the engine won't run, and I can't test it separate from the rest of the drive train.
Similarly, whether the trigger belongs in the behavior or the arbiter is important so we can understand the hardware at all levels of abstraction. Otherwise we are prone to errors in understanding, and run into trouble in our ability to build and demonstrate working replicas.
My interest here is in identifying intelligence. If it has been accidentally grouped with behavior, when it should have been grouped with arbitration, then we have hidden AI from ourselves by not being critical of our definitions and subdivisions of the problem.
RMDumse wrote:

I think you're out of luck here. Intelligence is any kind of behaviour (ability to respond to environmental events) which appears to us to be purposeful, planned, but for which we're unable to discern the purpose or plan. As soon as we can discern the purpose and plan, it's just a machine and no longer thought intelligent. In other words, the word gets applied *only* to behaviours that are inscrutable. Different folk define intelligence differently because they have differing abilities to figure out how it's achieved.
You're trying to identify and discriminate what is, by common usage, inscrutable? Good luck!
Clifford Heath.
Clifford Heath wrote:

I think I hear your premise, but I have my doubts.
For instance: if we see a child reach for a pot on the stove, and, having once been burned, start to do the same then think better of it, I think we all conclude the child has gained some "smarts," or intelligence. We note the purpose and the plan in the reach, and we note the purpose and the plan in the withdrawal at the first sense of heat, the fingers going to the mouth in the memory of pain.
However, don't you think the goal of AI is to reduce the unknown mechanism of intelligence to that which is known and machine-like?

inscrutable \in-SKROO-tuh-buhl\, adjective: Difficult to fathom or understand; difficult to be explained or accounted for satisfactorily; obscure; incomprehensible; impenetrable.
Well, defining the unknown as the unknowable is one way of ensuring failure. The idea that the Earth could in any way be round was certainly inscrutable in the early 1400s for the majority of mankind. It still is for the Flat Earth Society. I think intelligence is just another unknown.
As I hear your argument, you are saying it will always be unknown, because as parts of it become known, they will become rote and machine-like, and only the unknown remainder will still be called intelligence. I can't argue against that possibility. But it is a shift in societal norms, which is different from my intent of knowing the currently unknown. Do you see it another way?
-- Randy M. Dumse www.newmicros.com Caution: Objects in mirror are more confused than they appear.
RMDumse wrote:

Yes, of course it is. I was just pointing out that your object of "identifying intelligence" is chasing a moving target, because as soon as a mechanism is identified, the definition of intelligence changes. I think that's why there's the feeling of "failure" and so much mudslinging in the AI community. It's not that we aren't advancing, it's just that with every advance, the horizon retreats by the same amount... hmm... a round earth again :-).
I don't really have a better way of defining the problem, except to hope that one day there will be a Columbus who'll convince us that intelligence is circular and *continent*, as Curt and I recently seemed to agree. When that happens, everyone will stop searching for the edge of the world, the ghost in the machine beyond the horizon, and realize that there's only the machine, and *that's enough*.

Exactly my point. Common usage is the problem, and must be changed in the same way that Columbus' voyage changed ideas of the earth. Hopefully the Columbus of AI will actually do the full circumnavigation as well!
Clifford Heath.

There is certainly a lot of that going on. But I don't believe it will keep happening. Instead of chasing the dream off the end of the world, I think we will actually catch it one day. I think one day, we will produce a machine with enough intelligence, that most people will call it intelligent.
The line retreats because so far, what we do is pick one behavior of a human which we have never seen a machine do, and think that if we saw a machine do THAT, it must be intelligent (play chess). But when we make a machine do that, we then decide it's not intelligent, because there's still so much more it can't do (play checkers, or play Go). The chess program isn't smart enough to know not to stick its hand in a fire.
I think at some point we will produce a machine that's intelligent enough, in a general way, that we will look at it and see what looks like a living, conscious machine, instead of just a robot. For example, have you seen a robot that acts like a cat or dog? One that looks at you the way a cat or dog looks at you, wondering if you might be about to feed it? Or harm it? Have you seen a robot come running because it's learned the sound of the can opener? Or one that came running when you came home because it knew it was you from the sound of the car and the garage door opening? How about a robot that gets excited and comes running when you open the door that leads to its charger? Have you seen a robot that you can train to do complex tricks as easily as you can train a dog?
Without producing the more complex human behaviors of language use, etc., these sorts of basic behaviors we see in animals are something we haven't produced for real in robots. At best we have built robots that tend to mimic some of these sorts of behaviors. When we can produce a generic learning system that can learn these sorts of seeking and attention behaviors on its own, I believe this will be a turning point where robots will start to look "alive" and "intelligent", even though they will still be far short of full human intelligence. This is the point where the robot will be smart enough to know not to stick its foot in a fire (at least the second time). It won't be good enough to convince the majority of people that human intelligence is possible, but it will be enough for AI people to know they are on a very productive path, where they have finally crossed the line from dumb machine to intelligent, conscious machine. It will all be downhill from there.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Clifford Heath wrote:

This is because people confuse "being intelligent" with being alive, conscious, sentient etc.
-- JC

Words like distributor, and transmission, are labels for very specific parts of the hardware. They are words meant to describe the hardware.
Words like behavior are not used to label parts of the hardware. A car has behavior just as much as a robot has behavior. But do we waste time trying to define what the word behavior means in the context of a car because we think it will better help us understand how a car works? I don't think so.

The word intelligence is another word, like behavior, which is more confusing than it is helpful. It is best not to use it when talking about machines, or to waste any time trying to define it. If you want to build a robot that does things like humans do, then simply try to understand the actual hardware at work in a human and talk about it.
If you want to understand the actions of robots, talk about their actions.

To me, arbitration is just another type of behavior.

The problem with AI and with understanding human behavior is that it can be sliced and subdivided a million different ways. And every way someone decides to subdivide it leaves them feeling like their way is the only correct way to subdivide it. And yet, none of the subdivisions have produced a machine that anyone thinks acts like a dog, let alone a human.
I believe this is because the brain is basically a device which creates controlled chaos. And chaotic behavior is something too complex for a human to understand. We have no hope of hand-specifying behavior at the level of complexity it exists at in humans. It's just far too complex. Just as a simple neural network is too complex for a human to understand: once it's been trained, no human can look at the values of the weights in the network and explain, or understand, why any one weight is set to the value it is, instead of being set to some other value.
I believe the only route to creating behavior in a machine near human behavior is to create a strong learning machine. The approaches to how such a machine could be structured that look promising to me actually share a lot in common with some of the approaches in BBR. However, the hand-designed BBR systems, with their small handful of behaviors, can't touch the complexity needed. A brain doesn't have 100 behavior modules; it's got 100 billion (aka neurons).
The type of unexpected behaviors we can see emerge from a BBR-type system is what I think happens with the brain - except its modules are even simpler, it's got many more of them, and they are configured by learning algorithms at work instead of being configured by the programmer.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
Prof. Ron Arkin has been kind enough to make a comment to an email request:
Behaviors mean different things to different people, depending on where their community lies. My work has been heavily influenced by ethology and psychology and thus I draw from those communities for my definitions. Engineers have a different stance. No one is right or wrong, they just have different meanings from one group to the next. That's what's particularly hard about interdisciplinary research, in getting the language/semantics correct between different groups. You may be encountering some of that here.
I'll stick with my definitions which appear on page 24 of my book.
--
An individual behavior is a stimulus/response pair for a given
environmental setting that is modulated by attention and determined by
  Click to see the full signature.

Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of manufacturers or vendors discussed here. All logos and trade names are the property of their respective owners.