Where is behavior AI now?

Well, what I wrote wasn't an argument. It's simple mathematical fact. So I'm not sure what point you are trying to make, or what you didn't understand.

I really have no clue what you are thinking here.

If you read 40 bits of information, you have 40 bits of information. Is that really so hard to understand?

Now, if you want to talk information theory, then the information content of those 40 bits for a typical bot will in fact be quite a bit less than 40 bits, because the bits you read are not statistically independent. Each time you read the switch at a high rate (like in my example), there is a high probability of it being the same as the last time you read it. And, if the bot has some algorithm for trying to avoid hitting things, then for most environments the switch will be open far more than closed. This means the true information content of the bit stream will be far less than 1 bit per bit read.
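
To make that concrete, here is a rough sketch (all the sample rates and probabilities are invented for illustration, not taken from any bot discussed here) that estimates how little information a "sticky", mostly-open bumper switch carries per read once you account for the correlation:

# Minimal sketch (invented numbers): how little information a "sticky",
# mostly-open bumper switch really carries per read.
import math, random
from collections import Counter

def sample_stream(n, p_flip=0.01, p_closed=0.02):
    """Simulate n reads of a switch that rarely changes between reads."""
    bits, state = [], 0
    for _ in range(n):
        if random.random() < p_flip:          # the state is only occasionally re-drawn
            state = 1 if random.random() < p_closed else 0
        bits.append(state)
    return bits

def entropy_given_previous(bits):
    """H(X_t | X_t-1) in bits per read -- accounts for the correlation between reads."""
    pair_counts = Counter(zip(bits, bits[1:]))
    prev_counts = Counter(bits[:-1])
    total = len(bits) - 1
    h = 0.0
    for (prev, cur), c in pair_counts.items():
        h -= (c / total) * math.log2(c / prev_counts[prev])
    return h

bits = sample_stream(200_000)
print(entropy_given_previous(bits))   # well under 1 bit per read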

Yes, I didn't really get into the amount of information potentially embedded in the time domain for an application like that.

Sure, the information coming from a bumper switch on a bot is fairly useless above a certain temporal resolution. It's effectively noise.

There's far more useful information in it than that. Writing code to take advantage of the information might not be very easy, but the information is there nonetheless.

My point is not how much useful information might be in 1 second of bumper switch data. My point was that there is useful information in the history of what has happened over time. The bot can explore the entire world using only bumper switches and two wheels for driving, and learn things about the world. The limit on the amount of information is really a function of the environment, not the bot. It could drive around for 10 years and build up detailed map data about the entire world, having only binary bumper switch data to work with. It might have collected trillions of bits of raw bumper switch data in all that time, but the data might compress down to only a megabyte of map data because much of the switch data was redundant.

If the bot is located in a square room with nothing else in it, then all the data there is to collect is the fact that you are in a square room. Once you figure that out, the information content from the bumper switches becomes effectively 0 (because you can always predict what the bumper switch data will be). Now, for a real bot, that never happens, because there's all sorts of complex information about the bot itself mixed into the data (like maybe the drive batteries are running down so it's taking the bot longer to drive from one side of the room to the other, or the left wheel is starting to turn slower, making it curve, because a gear is wearing down). All that sort of data can be extracted from the bumper switch data given enough time in a real bot.

My point is that the amount of data from the switches constantly increases based on how long you collect data from them. It's not limited to a few bits. If the bot drives around for years, it might extract a megabyte of useful data from the environment through those simple binary inputs.
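
As a toy illustration of that compression (my own sketch, with an invented grid size and coordinates, not the bot described here): fold every bump event into a small occupancy grid, and the stored map stays tiny no matter how many raw switch reads went into it.

# Toy sketch: years of raw bumper-switch reads collapse into a small map.
class BumpMap:
    def __init__(self, width=100, height=100):
        # One counter per cell is the whole "map" -- a few KB,
        # regardless of how many raw switch samples were ever read.
        self.hits = [[0] * width for _ in range(height)]

    def record_bump(self, x, y):
        """Fold one bump event (at the bot's estimated position) into the map."""
        self.hits[y][x] = min(self.hits[y][x] + 1, 255)

    def size_in_cells(self):
        return sum(len(row) for row in self.hits)

m = BumpMap()
# Millions of redundant reads along the same wall still touch the same cells:
for _ in range(1_000_000):
    m.record_bump(0, 50)
print(m.size_in_cells())   # the map stays 10,000 cells no matter how long the bot runs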

That could easily be true.

I can't really tell what point you are making here. If you put me inside a bot with two bumper switches for inputs and two wheels for output (and let's say the wheels each have only three states - forward, stopped, or reverse - no variable speed control), I can still drive this thing around the environment, build up detailed map information, find out that I've been bumping into something that keeps moving while other things seem to be the walls of a square room, and later figure out that the thing that's been moving is actually another bot with an intelligent driver. And even though they don't speak my natural language, after a few years of bumping into this other bot, we develop our own language and start to communicate using a Morse-code-like system. All that could be done with 2 outputs with 3 states each and two inputs with 2 states each. There's really no limit to the complexity of behavior you can generate if you have enough of the right type of logic inside the machine. As long as you have some outputs, and some inputs, then you can act intelligently.

BTW, I have no idea what jBot is, so maybe this is causing some confusion on my part. Should I look it up and become familiar with it to understand your points?

The complexity of behavior will be limited to the complexity of the controller inside the machine, not to the complexity of the I/O connections.

Ok, I'm a bit lost about what you're thinking here. Are you talking about any machine, or the jBot?

In general, you can study the inputs and outputs and try to predict the behavior. But, depending on the machine design, it might be so hard to do that it's effectively impossible. I could, for example, have a bot drive around a room in random directions where the "random" is defined by some standard PRNG. The machine could then record bumper switch hits and collect bits of data using some technique such as recording a 0 if the left switch is hit first, and a 1 if the right switch is hit first. It could then take about 1000 bits of collision data, run it through an encryption or hash function, and then use the output of that, with some algorithm, to select the "random" directions it picks to turn.
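
A minimal sketch of that scheme (the function names, bit-packing details, and turn-angle rule are mine, purely illustrative): collect left/right-first collision bits, hash them, and let the digest drive the "random" turns.

# Sketch of the hard-to-reverse-engineer wanderer described above.
import hashlib
import random

collision_bits = []    # 0 = left switch hit first, 1 = right switch hit first
turn_schedule = []     # pending "random" turn angles derived from the hash

def record_collision(left_first):
    collision_bits.append(0 if left_first else 1)
    if len(collision_bits) >= 1000:
        reseed_from_collisions()

def reseed_from_collisions():
    """Hash ~1000 collision bits and use the digest to pick future turn angles."""
    global turn_schedule
    packed = int("".join(map(str, collision_bits)), 2).to_bytes(125, "big")
    digest = hashlib.sha256(packed).digest()
    turn_schedule = [b * 360 // 256 for b in digest]   # 32 turn angles, 0-359 degrees
    collision_bits.clear()

def next_turn():
    """Angle to turn after the next bump; falls back to a plain PRNG at start-up."""
    return turn_schedule.pop(0) if turn_schedule else random.randrange(360)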

You could spend 100 million dollars studying the behavior of a bot like that and, most likely, get almost nowhere in being able to predict its behavior, or create an equivalent machine to predict its behavior. Now maybe, if you studied it for about a million years, you could finally decode its secrets. Some amount of study would no doubt do it. But the amount of study would be huge for what is in fact a very simple bot algorithm. Some machines are just inherently very hard to reverse engineer.

Yeah, I think that's true. Kind of. But I don't really understand what you think a behavior is. I would just say the intelligence is in its outputs, period.

A common technique for programming robots is to create a predefined output sequence and call that a "behavior". It might be something like, "drive forward for 1 second". Or "turn 90deg to the right". Then the system can use some higher level logic to trigger the selection of different behaviors. This allows bots to do useful things, but it doesn't make them look very smart. Humans and animals are far more flexible than that.

To me, that is just a two-level system for the control of the outputs (the behavior generators at the low level, and the behavior selectors at the high level). In the end, the intelligence is in how the bot controls the outputs, period - at all levels - not just the high-level behavior selector.
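
In code, that two-level arrangement might look roughly like this (a generic sketch, not anyone's actual robot firmware; the behaviors and selection rule are invented): canned output sequences at the bottom, and a selector choosing among them at the top.

# Generic sketch of the two-level scheme: low-level behavior generators
# (fixed output sequences) and a high-level selector that picks among them.
BEHAVIORS = {
    "forward_1s":  [("both_forward", 1.0)],
    "turn_right":  [("left_forward_right_back", 0.5)],
    "back_off":    [("both_reverse", 0.5), ("left_forward_right_back", 0.5)],
}

def select_behavior(left_bumper, right_bumper):
    """High-level selector: trigger a canned sequence from the sensor context."""
    if left_bumper and right_bumper:
        return "back_off"
    if left_bumper or right_bumper:
        return "turn_right"
    return "forward_1s"

def run(behavior_name, drive):
    """Low-level generator: replay the predefined output sequence."""
    for motor_command, seconds in BEHAVIORS[behavior_name]:
        drive(motor_command, seconds)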

Reply to
Curt Welch

Hey, VERY simple is all you're going to get from me; I couldn't back-propagate my way out of a wet paper bag. Making sense of the environment seems a hideously complicated task, to my way of thinking, and let's face it: people with PhDs in all sorts of things have been working in the field for decades.

But my illustration of a bot in a box was not intended to merely represent a typical bump-n-go thing on wheels. I maintain that even that minimalist environment could be used to test a robust machine intelligence system.

Driving, bumping, odometry, mapping, route planning - these are all mechanical aspects of the process. In my example, I suggest that the theoretical obstacle is removed after being mapped by the robot. It doesn't take intelligence for the robot to discover that the obstacle is missing and update its area map. An intelligent robot, however, might be able to generalise about the nature of obstacles:

I can crash into obstacles
I can drive around obstacles
Obstacles can exist
Obstacles can be in front of me or beside me or behind me
Obstacles can be close or far away
Obstacles can be in my way or not
Obstacles can be small like the one I found at x,y position
Obstacles can surround me
There can be a distance between obstacles
Obstacles cause a change in certain registers of my circuits
When I am stationary, obstacles have no effect on me unless I have already crashed into one
Obstacles can be approached from a number of directions
Obstacles can meet in a corner
Some obstacles stop my odometry circuits registering forward motion even though I am trying to drive forwards
Some obstacles can move
Obstacles do not usually move
Some obstacles can be there and then not there
If the obstacles that surround me behave in the same way as the obstacle that was in the middle of my world and which is now gone, what will happen?
And so on...

Writing these ideas as a human is a fairly simple task, but I think it would be a very elaborate robot that could fully comprehend the nature of a box with a block in the middle.

What would her mind have done then, I wonder. A 'tabula rasa' with no sensory input... The neurons would have created some kind of altered sense of reality, unless consciousness is entirely based on sensory input. Or would the brain have simply atrophied, losing its capacity to function beyond the semi-autonomous regulation of the body systems? Any evil scientist worth his reputation could answer the question with a brain in a jar and an MRI machine :)

____________________________________________________ "I like to be organised. A place for everything. And everything all over the place."

Reply to
Tim Polmear

As far as robotics goes, though, behavior AI is precisely about limiting the problem, because the robots themselves are limited by their mechanics in their mobility, and usually in their function -- they roll rather than walk, have no appendages or limited appendages, etc. I realize there is a disconnect in talking about the potential of AI versus the current reality, and all I'm saying is that for every million-dollar solution there is a dollar-store answer. That's why industry still uses the dollar-store version. Until there is a specific requirement to use sophistication, the fallback will always be toward the tried and true.

Yes, but then without a gamut of true behaviors it's not really AI. You've merely preprogrammed what you want the robot to exhibit, and by that nature it's no longer artificial intelligence, but crafty pre-wiring. If it's a sophisticated AI system the "bad" behaviors are modified by the "good" behaviors, so while the robot may have a panic mode, it will also have behaviors that prevent it from causing damage to itself, people, furniture, or the cat.

-- Gordon

Reply to
Gordon McComb
[...]

Could you, or would you, really do this? I rather doubt it. For a fair test you'd have to be confined to the bot with *no* other inputs. No food, no instructions, nothing. What seem like mere practical or implementation-specific obstacles may be more fundamental than we expect. If we cannot get a human to operate intelligently in a stimulus-poor environment, why should we expect a simpler system to do it?

Randy is onto something. The I/O complexity and properties are directly connected to the intelligence of the system. We can offer Helen Keller as counter-evidence, but she had a full set of senses up to the age of 1 1/2, later retaining touch, taste, olfaction, heat and cool, pain, and a perfectly good set of muscular-skeletal actuators with built-in position and force sensors. Imagine if she had been born with only two single-jointed fingers bearing a few binary-valued touch sensors. She'd have wound up as a vegetable. Observe that even the simplest organisms are literally covered with sensors and muscular tissue. Complex senses and actions long preceded complex brains.

-- Joe Legris

Reply to
J.A. Legris

Yes, I realize all of this, and am actually in favor of the KISS principle. However, in the end with BBR, you are stuck with devices with a rather limited repertoire of behavioral intelligence. I think Randy and David and I [and Curt] are attempting to look beyond that.

Reply to
dan michaels

Ugh, I just wrote a long reply, hit the wrong button, and lost it. So now I have to write it again....

Humans do what they do, because they were motivated by rewards to do it. This is because we are reinforcement learning machines.

I didn't get into the subject of motivation in my example, but as you say, it is important to understand why we would do something.

If I locked a person in a bot control box which had only 2 lights to represent the bumper switches, and two buttons to make the bot move, but didn't tell the person what the switches were for, and gave them no motivation to figure it out, most people would never figure anything out.

But, if you told them that the only way they were going to get food, or anything else, would be to figure out the switches and lights, then that alone would act as motivation for them to start playing with the buttons.

But, all reinforcement learning machines must be rewarded for their effort, or else, the effort (behavior) will in time die out. So you will have to give the person some simple tasks that produce rewards to keep them going. Something they can figure out to cause food to drop into the room before they starve to death.

And to learn more, there has to be constant motivations to keep them experimenting and keep them learning. Maybe one set of actions gives them food. Another gives them water. Another turns the lights on or off. As the person plays with the buttons, and learns more about what they do, and what the lights mean, there has to be constant progress towards greater rewards, or else the person will simply stop developing greater bot-control skills.

No machine, or human, can learn a complex task straight off just to get a reward at the end. You must learn the simple tasks first, and there must be a reward for doing them. You must learn to walk before you learn to run, and there must be a reward for learning to walk, or else it will never be learned. And if walking is never learned, running will never be learned.
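
A sketch of that shaping idea in code (the tasks, prerequisites, rewards, and the stand-in "learner" are all invented for illustration): each step must be rewarded and mastered before the next becomes learnable.

# Sketch of reward shaping / a training curriculum, in the spirit of the paragraph above.
import random

def train_with_shaping(tasks):
    """Each task must be learned (and rewarded) before the next is attempted."""
    skills = set()
    for task, prerequisites, reward in tasks:
        if not prerequisites <= skills:
            break                      # "running" is never learned if "walking" wasn't
        # Stand-in for real trial-and-error learning: many attempts, occasional rewarded success.
        for attempt in range(100):
            if random.random() < 0.05:
                skills.add(task)       # the reward (e.g. food) keeps the behavior alive
                break
        if task not in skills:
            break                      # no reward -> the behavior dies out
    return skills

curriculum = [
    ("press_button", set(),            "food pellet"),
    ("walk",         {"press_button"}, "water"),
    ("run",          {"walk"},         "lights on"),
]
print(train_with_shaping(curriculum))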

The behaviors we produce are always a result of the environment we are exposed to. And to produce typical complex human intelligent behavior, the environment must hand feed the complexity to us, step by step.

If you don't motivate it correctly, it will never learn. Simple as that. If you call it fair, to not give it a motivation, then of course it will not learn.

If you motivate the human correctly, it will do just fine in a stimulus-poor environment.

Yes, she still had a large number of important sensors. But more important, is that she learned to behave "intelligently" because she had a teacher that worked with her to create an environment with the correct motivations to shape her behavior. Normal human environments (like a typical home or typical classroom) that work to shape our behavior, wouldn't have worked for her, because of her limited sensory ability. But, create the correct environment, with the correct rewards, and she learns to produce complex intelligent behavior like anyone.

Humans learn to do the complex things we do for the same reasons a dog learns to jump through a hoop on command. It's because we are motivated to learn these things. The only difference is that the human brain has the ability to learn far more behaviors than a dog brain, and more important, we have hardware optimized for learning language behaviors.

Probably true. But not because she had limited senses, but instead, because a human brain is not optimized to deal with those senses.

The basics of intelligence are quite simple. It's just a problem of producing the correct behaviors in the correct context. The context is defined by the sensory inputs (including their short-term history). The "correct" behaviors are the ones that produce the most long-term reward for each context. The machine learns through experience what types of behaviors produce the best results in a given context.

To create complex human-like "intelligent" behavior, you simply need enough sensory inputs to create a complex enough context, and a machine with enough power to create billions of different responses to that changing context.

With high-bandwidth sensors, the system can create a very complex context with only a few milliseconds of sensory data (a picture is worth a thousand words and all that). The length of the short-term memory times the bandwidth of the sensory systems defines how complex a context it is responding to.

With simpler sensory systems, with less sensory information flowing in, the system simply needs to have a longer short-term memory in order to create a similar-sized context to respond to. A very limited sensory input can be compensated for by having a very long short-term memory. But the drawback is that it takes that much longer to correctly understand the current context of the environment - and the quicker the context is changing, the harder it is to keep up with the changes if the sensory system is slow.
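
One way to picture that trade-off (all numbers here are made up for illustration): the context the system responds to is roughly the sensory bandwidth multiplied by the length of the short-term memory window, so a poorer sensor needs a much longer window to give a context of the same size.

# Made-up numbers illustrating the bandwidth vs. memory-length trade-off above.
def context_bits(bits_per_second, window_seconds):
    """Size of the context the system is responding to, in raw bits."""
    return bits_per_second * window_seconds

camera  = context_bits(bits_per_second=1_000_000, window_seconds=0.05)
bumpers = context_bits(bits_per_second=20,        window_seconds=2500.0)
print(camera, bumpers)   # both ~50,000 bits, but the bumper bot needs ~42 minutes of history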

That's true. Because the more senses you have, the quicker you can understand the correct context of the environment. And that's a huge survival advantage. The quicker you can spot a predator (or any threat) that's going to do you harm, the more likely it is you are going to survive.

Imagine driving your little bot around for an hour, bumping into a tiger about 100 times before you figure out it's a tiger like the one that killed your cellmate. The limited sensory inputs just mean it takes a lot longer to collect enough data to recognize a complex object - to recognize the correct complex context of the environment and then produce the right behavior in response to that context (run for your life).

It's not that you can't make an intelligent brain with a limited sensory system. It's just that as long as you are going to have a complex brain, then it's optimal (for survival) to also invest in a large sensory system to speed up the system's ability to track the changing context of the environment. We need complex sensory systems because we are trying to survive in a complex environment that changes quickly.

Reply to
Curt Welch

Then perhaps you haven't programmed the robot all that well (not a serious suggestion here, but a rhetorical argument for the sake of amplifying the missing pieces). As soon as it sees humans, it should do a humor act like you do when you see a human close by. Oh, but it doesn't have eyes, does it? Well, what's the point of a humor output if it can't tell if there is anyone there to see it! (Hmmm... Is the problem the humor act, or is it knowing when to do it?)

So perhaps it cannot act as intelligent as you, perhaps because it does not have the sensors to detect occasions for humor, or perhaps it doesn't have the dexterity in its fingers that you do on the RC controls. Oops. It doesn't have fingers, does it? But surely, if your fingers can tickle the RC controls to jiggle-giggle (or however you express humor), the robot could make the same motor command outputs without the controller that you can with the controller, right?

So now, about that jiggle-giggle thing. What combinations of motor control does it take to do them? We go both forward for a while, then both back for a while, then forward left, back right, then back left forward right, then pause, then repeat...

Which one of these "behaviors" is it the robot can't do?

My answer: none. I see the atomic "both forward" as a behavior. Also I see the atomic "left forward, right back" as a behavior. I see all those quantifiably different output combinations as atomic behaviors.

But the jiggle-giggle thing? Right now I'm wondering if that isn't a behavior at all. It's something else. It's a time-sequenced exhibition of behaviors. It is an attempt to send "Morse code" by patterning behaviors.

Hence I suggest, "It occurs to me that artificial intelligence will be found in the decisions that switch behaviors rather than the layers of behavior themselves."

Given limited outputs, I suspect we can list all possible (or observed) outputs. So being in some state of output becomes quantifiable. But, it is the transitions between those states which shows the intelligence.

Okay, I offer a proposal. Let's put you in a steel box with your RC controller (antenna outside), but you have no vision of where you are going, no sense of the roughness of the terrain, no camera to see when humans are approaching your robot. You have only this feedback: five numbers that represent ranges, the current readings from the motors, and a few numbers showing compass heading and inertial changes. Now I'll bet you your robot will look much more intelligent without your help than you will with full control.

Again, I think the intelligence is not in creating robots with behaviors (BBRs). If the outputs are few, and the inputs also few, all possible behaviors are pretty quickly delineated. The emergence of intelligence is instead in the sequencing of the behaviors.
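
One way to make that concrete (my sketch, nothing to do with any particular bot; the behavior names and transition table are invented): enumerate the atomic output states, and put all the interesting structure in the transition rule that strings them together.

# Sketch: the atomic output states are trivially enumerable; the "intelligence"
# lives in the transition rule that sequences them.
ATOMIC_BEHAVIORS = [
    "both_forward", "both_reverse", "both_stopped",
    "left_fwd_right_rev", "left_rev_right_fwd",
]

# A transition rule: given the current behavior and the sensor context, pick the
# next behavior.  Swap this table out and the same five outputs produce a
# completely different-looking robot.
TRANSITIONS = {
    ("both_forward",       "bump_left"):  "left_rev_right_fwd",
    ("both_forward",       "bump_right"): "left_fwd_right_rev",
    ("both_forward",       "clear"):      "both_forward",
    ("left_rev_right_fwd", "clear"):      "both_forward",
    ("left_fwd_right_rev", "clear"):      "both_forward",
}

def next_behavior(current, context):
    return TRANSITIONS.get((current, context), "both_stopped")

print(next_behavior("both_forward", "bump_left"))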

even though I seldom place a close on my posts, let it be assumed as implied and understood, always, best regards,

Randy

formatting link

Reply to
RMDumse

But that was not your premise, as I understand it. You wrote:

We only have 12 musical notes. So how many different melodies can be written? Can you calculate it with some simple connection matrix? Is it 12 factorial? Is there a limit?

I reject the premise that the variety of robot behaviors is limited by the number of output resources, or can be determined by some simple connection matrix, as referenced in my original query, to wit:

Your response, a proposed experiment in a metal room, concerns itself with limiting input sensors, not output resources, and so strikes me as irrelevant.

Let me try again. Musicians have been able to get an infinite variety of music from the same 12 notes since the time of the ancient Greeks. The variety and "intelligence" of the music does not seem to be limited by the limited nature of the "output resources."

Similarly, a remotely piloted vehicle, as an extension of the human operating it, has an infinite variety of behaviors, as varied as the imagination of its operator. The attempt to limit the complex world of "behavior" to some combination of motor movements strikes me as naive, like suggesting that because there are only 88 keys on a piano keyboard, that somehow limits the number of possible piano pieces that can be composed.

Clearly we have a long way to go before our robot's autonomous behaviors begin to resemble the intelligence and creativity of that same robot as operated by a human. Until that is achieved, it seems pointless to assume that advances in AI are being held back by the lack of output resources.

It seems to me that the lack of perceived intelligent behavior in our robots is not a function of their lack of output resources, but rather how those resources are used. If you accept that as true, then I think the rest of your conjecture falls apart.

In fact you may remember from our last action-packed episode, the somewhat heretical observation that sophisticated robot behavior arises from REDUCING the number of possible states, which I still think is worthy of further contemplation.

as always, dpa

Reply to
dpa

I realize your music analogy is about how M outputs does not necessarily constrain N results, but I think it works just as well in a larger picture. To wit:

I find the above analogy doubly interesting because Pythagorean intervals aren't the only way to play music. There's the pentatonic scale, which the Greeks based their scale on: it consists of five notes, is a scale every rocker gets to know well, and is identified by the black keys on the piano. Indian music divides an octave into 22 steps and uses only a subset of them. Arabic music has 16 steps, as I recall.

The point is there is more to music than the (two) Pythagorean intervals and a 12-step scale, and different cultures have created their music using a variety of tonalities. To me, this is yet another example of how there can never be one way of doing anything. A Western ear will hear only 5 or 12 notes to an octave, because that's the music we listen to; yet there is a whole world of musical differences that have historically proven themselves to be "tuneful" and enjoyable to other cultures. This thread serves as ample proof that there is no right (or in my view wrong) way of approaching robotic programming, whether or not it includes aspects of behaviorism.

You spoke of a "heretical observation." I have one I'm sure will send me to hell: personally I find behaviors for many (I didn't say all) tasks in robotics a dead-end. The formal literature on the subject has all about ceased; Brooks hasn't written anything new on it, really, since his original papers some 10 years ago -- his Cambrian Intelligence book is a reprint of old articles. Could it be that behaviorism is not capable of all that it is cracked up to be, and the real results can not sustain the hype?

I base my contrarian observations on my box turtle, Brantley, who has escaped yet again. Turtles are rather intelligent as far as reptiles go, but what I saw of Brantley's abilities could be summed up in one phrase: "dogged determination and random actions." Every day Brantley tried the same length of bender board in his quest to see the rest of the world. He did the same things over and over again. There was no indication he ever learned that what he did even a minute ago would not work right now. But it turns out that Brantley was able to escape into the larger part of our yard (thick with ivy and underbrush) exactly because he had no memory of what didn't work. As a state machine, all he did was eat and poop. He kept trying a random assortment of things until, one day, his efforts yielded success.

I'm not saying the ideal robot just goes around doing random acts, simply because there is NO ideal robot. It seems to me that just as there is more than one musical scale and an infinite number of ways to put those tones together, just about everything works to one degree or another, depending on the application. It seems dubious to argue for a single prescriptive approach such as behavior AI when the application universe may be infinite.

-- Gordon

Reply to
Gordon McComb

In one sense, behaviour-based robotics is a much bigger task than mere methodological behaviourism, because roboticists must both justify their theories of behaviour and implement them, which probably requires a full-blown quantitative analysis of behaviour. Behaviourism is still mostly in the qualitative stage, so it cannot provide much guidance here. One thing it can do now is provide reality checks on the results, assuming that natural-like behaviour is the goal.

IMO, a quantitative analysis of behaviour is to be found in neuroscience - consequently, behaviourism will have to sneak a peek under the hood if it hopes to achieve quantitative status. Jeff Hawkins's work on intelligence is an example of trying to stay close to the neurological basis of behaviour (e.g.

formatting link

), but I think we're going to have to get much closer still.

Unfortunately, the complexity of neurological processes does not lend itself to digital implementation - we immediately run into a computational explosion, just as we do in attempting to simulate other natural phenomena with computers. We are left with a choice between a simulation whose implementation is vastly more complex than the phenomenon being simulated, or a simulation of a vastly simplified model. For example IBM's Blue Brain project requires a processor executing roughly 3 billion operations per second for just 1 or 2 neurons, and that doesn't include molecular-level processes such as gene expression.
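
Taking the post's figure at face value, a back-of-envelope scaling (my arithmetic, using the commonly cited rough estimate of about 86 billion neurons in a human brain) shows why the explosion is immediate:

# Back-of-envelope scaling from the figure quoted above (~3e9 ops/s for 1-2 neurons).
# The neuron count is a commonly cited rough estimate, not a precise number.
ops_per_sec_per_neuron = 3e9 / 1.5      # split the quoted figure over ~1.5 neurons
neurons_in_human_brain = 8.6e10         # roughly 86 billion
total_ops_per_sec = ops_per_sec_per_neuron * neurons_in_human_brain
print(f"{total_ops_per_sec:.1e} ops/sec")   # ~1.7e20 -- far beyond any single machine of that era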

If robotics is to obtain a natural model of behaviour it will need a computational platform whose intrinsic behaviour is somehow analogous to neurological processes (or some distillation of their essence, if such a thing is possible), so that instead of explicitly computing the behaviour, it just "does" the behaviour. There may be some way of doing this with analog electronic circuitry, but I suspect that the only thing that behaves like a neuron is another neuron.

And so maybe that is why behaviour-based robotics seems to have stalled. A behavioural approach leaves out the details that are vital to the implementation.

-- Joe Legris

Reply to
J.A. Legris

It's unclear if the computational explosions that people run into when attempting to implement behavior are due to some intrinsic nature of the effect (as with weather, for example), or if they're simply a fallout of using the wrong model. I've always believed it was just a problem of not having the right model.

Yes, that's true.

I think that's just a silly statement. "Computing" and "doing" are one and the same thing. Computers don't "compute"; they behave according to their design.

I suspect the basic function performed by a network of neurons will be easy to duplicate in digital hardware.
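
For what it's worth, the simplified neuron models normally used in software are only a few lines each. Here is a minimal leaky integrate-and-fire update as a sketch (standard textbook simplification, with made-up parameter values); whether such simplifications capture what real networks of neurons actually compute is exactly the open question.

# Minimal leaky integrate-and-fire neuron update -- a standard simplification,
# shown only to illustrate how small the per-neuron digital model can be.
def lif_step(v, input_current, dt=0.001, tau=0.02, v_rest=0.0, v_thresh=1.0):
    """Advance the membrane potential one time step; return (new_v, spiked)."""
    v += dt * ((v_rest - v) / tau + input_current)
    if v >= v_thresh:
        return v_rest, True          # fire and reset
    return v, False

v, spikes = 0.0, 0
for _ in range(1000):                # one simulated second of constant drive
    v, spiked = lif_step(v, input_current=60.0)
    spikes += spiked
print(spikes)                        # a steady spike train, ~28 spikes in this run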

Yes, something is left out, that's for sure. The question remains what that is however.

Reply to
Curt Welch

Hey Randy, since you never acknowledged my earlier post regards CiteSeer, I thought I'd do some homework for you to show you how useful and powerful it is .... note the ability to download PDF files, etc.

formatting link
--> 4606 docos
formatting link
formatting link
--> 1619 files
formatting link
formatting link
--> 8361 docos
formatting link
Also
formatting link
--> SICK
formatting link
On and on.

Reply to
dan michaels

Using Brooks as a primary example, one can only conclude that BBR has stalled as regards going past the basics of intelligence. And what might be your conjecture as to why this is so?

Reply to
dan michaels

Well, on a completely practical note, can you count the output states of jBot for us? I make it about 5. Now, what I don't know what to do with is the "output plus variable ratio" when it is dynamically steering. Is that a separate output? A separate output with a variable? Or shall we say each discernibly different ratio is a different output? But seriously, how many output states does jBot have? Or do you insist it is infinite? A point of understanding I'm trying to make hangs in the balance.

A note is a state. A state is static, a period without change.

With 12 notes, you can make 12 notes.

Maybe you can make another beat-frequency note with the discord of two similar notes, but I'm sure that's outside the realm of your question/point.

One note long? 12.

Given the requirement of stasis, yes, there is a limit. It is only by combining states that any melody of significance can begin to form.
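
Since the question of a limit came up: treating the 12 notes as atomic states, the count of melodies of a given length is just 12^n (order matters, repeats allowed), which passes 12! by length 9. A quick check, ignoring rhythm and octaves:

# Quick count: melodies as sequences of the 12 chromatic notes
# (order matters, repeats allowed, rhythm and octaves ignored).
import math

notes = 12
for length in (1, 2, 8, 9, 16):
    print(length, notes ** length)

print("12! =", math.factorial(12))   # 479,001,600 -- exceeded by 12**9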

That is a true pity.

I was hoping if we could see behaviors as atomic states, and not strings of atomic states, we could then have a very interesting discussion of the communications content in stringing the behaviors together. Music would be a wonderful analogical platform to consider this line. Then we could separate the intelligence from the behaviors.

Now reconsider my first post. "It occurs to me that artificial intelligence will be found in the decisions that switch behaviors rather than the layers of behavior themselves."

If I rewrote this to say, "It occurs to me that the making of music will be found in the decisions that switch notes according to a melody, rather than the separation of notes into layers themselves."

Randy

Reply to
RMDumse

A neuron does what neurons do and a computer does what computers do. Getting one to do what the other does is a matter of approximation and interpretation. They are never one and the same.

-- Joe Legris

Reply to
J.A. Legris

I've not read Brooks and I have no idea what BBR is. So I can't comment directly on why that might have stalled, since I have no idea what it is.

But on the general search for intelligence, especially approaches that attempt to use ideas from behaviorism, I think the problem has always been a lack of the correct model. That is, we just don't have the correct algorithm, or description, yet. Though behaviorism has shown us the basics, it falls far short of showing how to implement them.

It's like documenting the flight path of a bird without any understanding of how the bird actually manages to fly. Even a complete understanding of behavior doesn't make implementation obvious (but it does let you know when you have the wrong implementation).

Human and animal behavior only looks simple if you limit the environment and reinforcers to something very simple - such as what happens in a Skinner box. Otherwise, it's a huge parallel process where all our behaviors and motivations are interacting and competing with each other. What behaviorism hasn't answered is how to implement a large parallel learning system - something that produces behavior so complex that we can't even understand it unless we limit it to an isolated test in a Skinner box. This is the same missing piece which has been missing for over 50 years.

It's stalled for 50 years, because the step from how single behaviors are modified by reinforcement, to how millions are modified in parallel, is a huge gap to cross. No one has found a path across it yet. But even though we have not crossed it, I think much progress has been made in understanding the nature of the problem.

It's like trying to reverse engineer an encryption algorithm. It's just very hard to do. You really can't "see" the algorithm in the behavior. You simply have to try different algorithms until you find the one that works. Reverse engineering the basics of the brain and human intelligence seems to have much in common with this type of problem.

Reply to
Curt Welch

BBR = behavior-based robotics.

Brooks = Dr. Rodney Brooks, of MIT, who championed the concept and made it relatively popular today. He didn't "invent" it, but he added some additional elements (i.e. subsumption) that were supposed to enhance the viability of behaviors as an AI model, especially in small robots where computational power is limited.

When someone talks of "behavior AI" for robotics, in the present sense it must by its nature include Brooks and his theories. Sort of like talking about psychoanalysis and forgetting all about Freud.

-- Gordon

Reply to
Gordon McComb

Curt was just in some sort of mind block ;-). The abbreviation BBR has been used in 1/2 the posts on this thread, give or take. I'm sure he's read 1/2 of Brooks' papers too.

Reply to
dan michaels

Yeah, I guessed that. Or maybe someone wrote it in one of these messages.

Yeah, I know who he is but haven't read much of what he's published. He's published a lot.

I've seen the subsumption architecture stuff. At least some of it. Is that what people are talking about?

The subsumption architecture is interesting because it demonstrates that there are very different ways to solve the same problems. And it shows how much the state of the environment can be used in place of internal state variables to achieve some fairly complex tasks. For people used to writing traditional computer software, which has little to no access to state outside the machine, we get used to assuming most of the state that controls the action of the machine is stored inside the machine (memory states, etc). The tendency to think in those terms blinds us at times to how much the machine can do simply by reacting to the external state. Brooks's subsumption architecture makes it clear that the external state is an important part of how reactive agents work.
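
A toy rendering of the layering idea (not Brooks's actual code, just the usual textbook shape of it; the layer names and sensor fields are invented): higher layers subsume lower ones by overriding their output, and the "state" the layers react to is mostly just read back from the world each cycle.

# Toy subsumption-style controller: each layer may claim the output; higher
# layers override (subsume) lower ones.  Sensors are read fresh every cycle,
# so the environment itself carries most of the state.
def wander(sensors):
    return "both_forward"                       # layer 0: default behavior

def avoid(sensors):
    if sensors["bump_left"]:
        return "spin_right"
    if sensors["bump_right"]:
        return "spin_left"
    return None                                 # no opinion -> defer to the lower layer

def seek_dock(sensors):
    if sensors["battery_low"] and sensors["dock_visible"]:
        return "head_to_dock"
    return None

LAYERS = [wander, avoid, seek_dock]             # lowest priority first

def control_cycle(sensors):
    command = None
    for layer in LAYERS:                        # later (higher) layers win
        output = layer(sensors)
        if output is not None:
            command = output
    return command

print(control_cycle({"bump_left": True, "bump_right": False,
                     "battery_low": False, "dock_visible": False}))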

The problem with the technique in general is that, at least initially, it was not learning based. Instead of building a reaction machine that could learn, it required an intelligent programmer to hand-code all the algorithms into it. I don't think humans are smart enough to hand-code the types of programs that exist in our brain. It's like trying to hand-code the weights on a neural network. It's just not something humans can do for programs beyond the trivial.

I think the brain is basically a reaction machine that does much of what it does using subsumption-like techniques. However, most of the real complexity is created by the learning algorithms and is beyond what any human could design by hand.

If you are looking for a better way to hand-code behavior into a robot, I'm not sure the subsumption architecture is going to buy you much. Humans just aren't smart enough to be able to specify software that way for complex problems. However, I think understanding the subsumption approach can give us insights into the correct way to structure a learning machine.

Reply to
Curt Welch
