Syntax and robot behavior



It's hard to justify, of course. But it's based on a few things I've noticed about dogs (I've owned or lived with many over the years, but don't claim to be an expert in any sense). For one, they seem to learn nothing quickly. They are far more creatures of habit than humans are. If you try to change their routine, like letting them out a new door instead of the old one, they keep going back to the old door. It takes many exposures to a new routine before you see obvious signs of them learning it.
Second, they show little ability to plan - which I think we do a lot of with the help of our memory. For example, I used to play with my dog by throwing a tennis ball in the house, which the dog would fetch. When it rolled under the couch, the dog couldn't fit under the couch to get it, but it was small enough to run behind the couch and get the ball. Trying to teach the dog that trick, however, was close to impossible. It would see the ball by looking under the couch and simply want to go straight to it. If I led the dog around to the side of the couch and it saw the ball behind the couch, it would instantly run for it and get it.
But no matter how many times I did this with the dog, it would never seem to make the connection that when the ball was behind the couch, it should give up trying to go straight for it, and instead, run around it.
This, I believe, shows that the dog can learn to react to what it sees in front of it, and to what happened in the past few seconds, but has a much harder time learning a longer and more complex sequence of behaviors (like running around).
I suspect humans use their memory to help solve, and learn the solution to, these longer-term problems. Like the dog, our ability to directly learn has a very limited temporal range. So when we first solve, or are shown, the solution to a long problem, we don't instantly learn to run around the couch either. Instead, the next time we see a similar problem, rather than automatically knowing we need to run around, a memory of the past event pops into our head. We have a memory of seeing the ball behind the couch and then getting it. That memory then acts to guide our current behavior to head away from the ball (to get behind the couch).
After multiple times of using our memory to guide us to a solution, the behavior becomes automatic. We see the ball roll under the couch, and we don't sit there having memories that then trigger us to act - we just instantly head off in the direction needed.
So the ability of our mind to call up memories of similar past events seems to act as a bridge, letting us see the path to longer-term problems than the brain can easily learn automatically on its own. This seems to me to be our strength in planning and reasoning out solutions to new problems, based on memories of past experience.
The dogs I've owned never seemed to have this ability, and solving a problem as simple as running behind a couch to get a ball when you can't go over it seems beyond them. You must instead train them to do it one small step at a time, with many repetitions.
Some animals, such as some birds, have I believe shown unexpected skill at reasoning out a multiple-step solution to a food problem. I wonder if they don't in fact have some human-like memory skills?

I think they see just like us. They of course don't think of it as being an "image" but neither do kids. We just see things around us and know how to react to them.

Yes, I believe that's true. My dog certainly makes woofing-like noises and moves her feet in her sleep at times, which makes me think she is having a dream very similar to how we do. We say she must be chasing bunnies when we see her doing that.

I doubt that as well.

My point is that I can see the ball roll behind the couch, and have a mental image pop into my head of me running behind the couch from the last time I did a task like that. Once I have that "memory", I then start to act out the solution which came to me in my "vision". I suspect dogs don't have the same ability to have memories pop into their head in the middle of running around a forest. I think their head is 99.9% consumed with what they are currently seeing, and that's about all it's consumed with. When they run to the top of a rock and look around, it's not because they had a thought of rabbits just before and reacted by running to the rock to look for rabbits. Instead, they saw the rock and reacted to it by running to the top, because they have learned from experience that running to a high place is a good thing to do (in the past it has led to good things like rabbits). So where rabbits might pop into our head, and the sight of the rock, combined with the thought of rabbits, might make us run to the top of it, I suspect the dog didn't have a thought of rabbits at all and just reacted to the rock directly.
When I walk down a trail, I might just as likely be thinking about an AI problem, or about what I might be doing later that day. Even though I do a lot of that with language (which we don't expect a dog to do), there is much I might think about like that which is not language-related at all. I might be getting thirsty, and my mind might pop up an image of that water fountain I saw at the trail head. This is what I suspect doesn't happen in a dog's head - at least not anywhere near the extent it happens in ours.
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
RMDumse wrote:

Calling this syntax and technology is one way to say it. Most people would probably say something more like animals don't have a good ability to manipulate symbols. Daniel Dennett, the philosopher, says that at best chimps and apes have the ability to use "proto-symbols".
Analogous to the snake-thing, a while back I invented a similar scenario regarding 2 chimps. Imagine one chimp trying to explain to another chimp that he had just eaten a grub over behind that bush over there. With no language, how does the chimp go about explaining grub, "behind" that bush, before=past-tense, on and on? Past-tense, alone, is a killer for a chimp. Explain yesterday.
BTW, as you read Brooks' book, ask yourself if it is really announcing the death-knell of reactive robotics, as I suggested in a prior post. Towards the end, he asks why AI hasn't succeeded better, and finally comes to the conclusion that: #5 = there is some "missing stuff". So, now he's on to the "Living Machines" project.
EEHH! Wrong answer. It's actually #2 = not enough complexity in our current AI systems. [note I read the book 3 years ago, and may have the numbers wrong].

Personally, I think we need to think more about information and signal processing machines rather than state machines, per se, although the 2 might be ultimately congruent. Animals do have a very good ability to remember things, but they don't have 2 things that humans have: (1) ability to manipulate symbols [as indicated above], and (2) ability to work through complicated temporal sequences. Just try to get a chimp to write a short computer program [forget about the billion monkeys typing on keyboards, and writing Shakespeare by random chance, scenario]. Just try to teach a dog to unwrap himself when his leash is wrapped around a tree. A trivial sequential task, but no dog ever did it [that I know of].
One problem with Brooks' simple reactive machines, and that I've also been dealing with lately myself, is the problem of **getting stuck on local maxima**.
This is the problem with simple reactive bots, for which "the environment is its own best representation" [Brooks' famous punch line], and which don't have specific memory/tracking/representational systems [and I mean "specific" here], and which your own basic FSMs will not deal with very well.
This does go back to the idea of "remembering state". The typical FSM, once in a state, does not have any knowledge of how it got into that particular state, since most FSMs are big loop systems, with multiple possible trajectory paths. My walking machine controllers are the ultimate example of this. They just repeat the same leg sequences over and over. When it's in a particular state, there is no memory of its past history prior to getting into that state. It's just gone through a transition from one to the next is all.
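The point about the walking controller can be sketched in a few lines. This is a hypothetical looping FSM for a gait cycle (the state names are my own invention, not from any actual controller): the controller holds only the current state, so once it is in a state, the trajectory that led there is simply gone.

```python
# Minimal looping FSM for a walking gait (hypothetical state names).
# The controller keeps only the current state -- there is no record
# of the path of transitions that led into that state.

TRANSITIONS = {
    "lift_left":   "swing_left",
    "swing_left":  "lift_right",
    "lift_right":  "swing_right",
    "swing_right": "lift_left",
}

def step(state):
    """Advance one transition; the previous state is discarded."""
    return TRANSITIONS[state]

state = "lift_left"
for _ in range(6):          # repeat the same leg sequence over and over
    state = step(state)
# 'state' alone cannot tell us how we got here, or how many cycles ran.
```

With multiple possible trajectory paths through a bigger loop, the same limitation holds: the current state is all the machine knows.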
Also, in past weeks, I have been playing both with photovore-sensor behaviors, and this past week, with sonar "echo-vore" behavior. By this last, I mean I'm using 2 sonars differentially to keep a bot aligned with a wall or other surface, just like using 2 photo-cells to track a light source.
The problem with both of these situations is that the sensor systems can easily get locked onto false maxima, by turning so far that they lock onto a new surface or light source, different from the original. This is the downside to using the environment as its own best representation, and not having some sort of historical or internal representation of what has been happening. That's the downside to too-simple BBR. As robot programmers, we have all invented ways to get past this problem by adding rules, etc. - i.e., something on top of simple reactive BBR. Rules that kick in, in certain situations, based upon the recent history of events.
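A minimal sketch of that "rule on top of reactive BBR" idea, assuming a two-sensor differential tracker. All names and thresholds here are hypothetical, for illustration only: the purely reactive layer just turns toward the stronger reading, and one history-based rule guesses we have swung off the original surface when the recent turn commands pile up in one direction, and suppresses the turn.

```python
# Differential tracker (photovore / "echo-vore" style) with one
# history-based guard rule layered on top of the reactive behavior.
# Names, gains, and thresholds are hypothetical, for illustration.

def steer(left, right, gain=0.5):
    """Purely reactive layer: turn toward the stronger reading.
    Positive output = turn right."""
    return gain * (right - left)

class GuardedTracker:
    def __init__(self, max_swing=3.0, window=5):
        self.history = []          # recent turn commands
        self.max_swing = max_swing # total recent turn we'll tolerate
        self.window = window

    def update(self, left, right):
        cmd = steer(left, right)
        self.history = (self.history + [cmd])[-self.window:]
        # Guard rule based on recent history: a large accumulated turn
        # in one direction suggests we've locked onto a false maximum
        # (a new surface or light source), so suppress the turn.
        if abs(sum(self.history)) > self.max_swing:
            return 0.0
        return cmd
```

The reactive layer alone would happily keep turning onto the new surface; the guard is exactly the kind of extra rule, keyed to recent events, that the environment-as-representation approach lacks.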

Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of manufacturers or vendors discussed here. All logos and trade names are the property of their respective owners.