The Fish and the Robot

This is a spin-off from the "where is BBR AI now" thread, related to my afternoon in the park experience.

After work, I went downtown and happened to spot a fish in the water while crossing the bridge over Boulder Creek. Right there is a small 2' waterfall with a lot of rocks in the water below it, plus a couple of sandy spots under the water. I could see two fish lollygagging over the big sandy spot. I knew there were supposed to be fish in the creek, but didn't think there were really very many. After a minute or two, I decided they were probably sitting there waiting for a drowned bug to wash over the falls so they could have dinner. I really didn't see much else going on.

After a couple more minutes, I decided to test my theory about the fish waiting for dinner, so I tossed a penny into the creek in the middle of the sandy spot. [If the Boulder council knew about the penny in the water, they'd have me shot, BTW.]

Immediately, WHAM ... 30 or more fish came shooting out from between rocks [where they blended in well and were difficult to spot] from everywhere in the creek, and all went straight for the penny drop, coming in fast from 360 degrees around the drop site. Looked like spokes on a wheel. Well, guess there ARE fish in Boulder Creek after all. I also tried this a few more times over 10 minutes or so, and eventually most of the fish habituated to chasing fake bugs. By the 10th minute and the 10th penny, just a slight commotion.

They figured it out pretty fast, although I'm not exactly sure how they did it. It didn't look like every single fish had a chance to nibble on the fake bugs, but still the frenzy over new penny drops faded fast. Fish story-telling? Or only react in case of positive feedback - ie, your buddy reacting?

In any case, to the point. Related to the other thread.

What's the difference between the fish and our level of robots?

What would happen if we tossed a penny or ball or whatever into the center of a bunch of mini-sumos? Pretty much nothing. The mini-bots could easily run over and grab, or at least hit, the object. In this sense, they are not too different from the fish. They move, they turn, they bump into things, they can act autonomously, etc. They behave.

What they cannot do anywhere near as well as the fish is sense and perceive, and make something important of the sensory input. We have a lot of very simple sensors, but in essence our BBR bots are still blind and deaf like Helen Keller, and also very, very dumb [in the IQ sense]. For small bots, the CMUcam is about the top end, but even that is hopelessly primitive compared to fish perception.

Attending properly to this problem is really the next level beyond BBR, I think.

Comments?

Reply to
dan michaels

Living creatures must learn their behaviors. Over its lifetime, a living creature will learn some percentage of the optimum set of behaviors that would make it a "perfect" specimen. Robots have the advantage that they can be programmed with 100% of their "optimum" behavior set at day 1.

Many living creatures' behaviors are associated with a group - the herd, flock, school, etc. - & they behave according to what others in the group are doing. In the fish example, each individual fish may not necessarily be aware of the "bug" that was dropped in the water, but rather is aware that other fish are swarming aggressively, focused on some point. Each fish in the school reacts to the other fish in the school (at least 1 fish must have actually sensed the "bug" in the 1st place) & rushes toward the point that all the other fish are rushing towards. Some lucky fish - it may not even necessarily be the 1st one to have sensed the bug - ends up with a meal, rewarding his behavior.

In a similar yet opposite context, individual sheep in a herd react to the other sheep by bunching together & fleeing, even though each individual hasn't actually sensed a wolf. Some number of individual sheep must actually sense the wolf, & their behavior triggers the "instinctive" behaviors of the other sheep in the herd, who haven't actually detected the threat. Any sheep who fails to exhibit the necessary degree of "correct" behavior may end up as the wolf's lunch.

In the case of the mini-bots, they are generally missing this "herd-instinct" behavior. They may react to the ball, but not to the other mini-bots' reactions to the ball. They need to not only sense their environment for "things" of interest, but also to determine what others of their ilk are doing, & react to that as well. If the desired outcome is to grab an object, those individual mini-bots who can't actually "see" the object need to be able to detect the fact that some of their brethren are reacting to "something", & allow themselves to be drawn toward the center of that activity. An individual mini-bot may succeed in grabbing the ball if it's able to fight through the mass of its compatriots & be lucky enough to be in the right place at the right time, just like a fish that swims to the point where all the other fish are focusing their attention just might get rewarded with a bug to eat.

An interesting series of exercises falls out of this: How far might each individual mini-bot be programmed to go, to achieve its goal of grabbing the ball? How hard should each unit fight to get closer to the center of activity, & once it actually detects the desired object (as opposed to merely the commotion surrounding the object), how hard should it fight to secure the object? Should the effort stop when it's been determined that another mini-bot has secured the ball, or should it try to wrest control away? Does it go so far as to "kill" another of its own kind, if necessary to achieve its goal? How might an individual mini-bot determine when to abandon its effort to grab the ball & go back to its normal "milling about with the herd" mode? How might a mini-bot secure the ball, defensively, to keep other mini-bots from taking it away, once grabbed?

Other variations include perhaps adding colored lights to indicate some sense of "purpose", such as green means "I'm heading towards something good," red means "I'm avoiding something bad," etc., so that others detecting their light AND motion can behave appropriately. Also, different "types" of mini-bots could be mixed, with each type programmed to react differently to its own type vs. the other types. Programmed behaviors in reaction to other-than-own types could be "avoid", "flee", "attack", "ignore", etc.
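A toy sketch of that mixed-type idea (Python; the types, lights & reactions here are all invented purely for illustration):

REACTIONS = {
    # (my_type, other_type): default behavior
    ('wolf',  'wolf'):  'ignore',
    ('wolf',  'sheep'): 'attack',
    ('sheep', 'wolf'):  'flee',
    ('sheep', 'sheep'): 'flock',
}

def react(my_type, other_type, other_light):
    # green light = "heading toward something good",
    # red light   = "avoiding something bad"
    base = REACTIONS.get((my_type, other_type), 'ignore')
    if base == 'flock' and other_light == 'green':
        return 'follow'      # join the rush toward the good thing
    if base == 'flock' and other_light == 'red':
        return 'avoid_area'  # steer clear of whatever it's avoiding
    return base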

JM

Reply to
John Mianowski

Except of course, for the instinctual behavioral part.

You're kind of mixing regular schooling behavior with what happened in the case I cited. Regular schooling behavior involves a form of self-organization where each fish obeys basically simple rules: keep your distance from other fish, but also follow the pack. IE, rule #1: go in the general direction of the entire school. Rule #2: maintain a more or less moderate distance from your nearest neighbors.
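Those two rules are simple enough to sketch (a toy 2-D version in Python, with made-up gains; each fish is just a dict with x/y position and vx/vy velocity):

import math

def school_step(me, neighbors, follow_gain=0.1, spacing=2.0, repel_gain=0.5):
    # Rule #1: steer toward the average heading of the entire school
    avg_vx = sum(n['vx'] for n in neighbors) / len(neighbors)
    avg_vy = sum(n['vy'] for n in neighbors) / len(neighbors)
    me['vx'] += follow_gain * (avg_vx - me['vx'])
    me['vy'] += follow_gain * (avg_vy - me['vy'])
    # Rule #2: push away from any neighbor closer than 'spacing'
    for n in neighbors:
        dx, dy = me['x'] - n['x'], me['y'] - n['y']
        d = math.hypot(dx, dy)
        if 0 < d < spacing:
            me['vx'] += repel_gain * dx / d
            me['vy'] += repel_gain * dy / d
    me['x'] += me['vx']
    me['y'] += me['vy']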

OTOH, what I observed was 30+ fish, each sitting in its own particular spot within the rocks, suddenly converging en masse and at maximum warp speed on a central point from 360 degrees of direction. Visualize spokes on a wheel, collapsing into the center hub. This is different from schooling behavior. Their intent to get to the dinner spot overwhelmed any fish-thoughts about schooling behavior.

You kind of missed the point. You're talking here again about programming behaviors. I was talking about the fundamental problem of perception.

Reply to
dan michaels

I would suggest that the fish did not learn anything, but from instinct knew to start ignoring stimuli that weren't food or a mate. Come back the next day and I'd bet they'd start out again just as curious about the money you're tossing into their water.

There is a good book that addresses much of this. It's called "Time, Love, Memory," by Jonathan Weiner. The book is about the origins of behavior, and Weiner uses scientific studies to look at just how much lower creatures (specifically fruit flies) retain such things as memory. It is a biographical book of the players involved that also discusses a lot of scientific underpinnings, and is a very entertaining read. The author won a Pulitzer prize for a previous book.

-- Gordon

Reply to
Gordon McComb

Clearly, a fish doesn't know the value of a dollar! Ha.

Actually, I think my experiment illustrated a couple of points.

First, I do agree with you that, if I went over and did the same thing today, the fish would probably act pretty much the same, so they didn't in fact "learn" very much from yesterday's trial. Hmmm.

However, it's also clear that the fish do have some sort of short-term memory mechanism that prevented them from endlessly repeating the same behavior over and over. [and it wasn't just related to randomization of response].

To repeat the same useless thing over and over would obviously be survival-negative, as they would be wasting a lot of energy on their high-speed attacks without being rewarded for the energy expenditure. So, in the past, they evolved some sort of innate short-term memory mechanism which reduces the useless behavior. IE, rule: if several repeated attacks produce the same bad result - penny, UGH! - then stop doing it. There are obviously many ways this could be implemented, but all require some sort of short-term "memory store".
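Something like this is about all it takes (a toy Python sketch; the threshold and decay rate are pure guesses):

class Habituator:
    def __init__(self, threshold=3, decay=0.95):
        self.failures = 0.0        # the short-term "memory store"
        self.threshold = threshold
        self.decay = decay

    def tick(self):
        self.failures *= self.decay    # memory fades with time, so
                                       # tomorrow the fish attacks again

    def record_attack(self, got_food):
        if got_food:
            self.failures = 0.0        # real food resets the count
        else:
            self.failures += 1.0       # penny - UGH!

    def should_attack(self):
        return self.failures < self.threshold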

Secondly, as I indicated before, the experiment also illustrates how advanced the fish's sensory and perceptual systems are, compared to our little robots, which is really important here.

Reply to
dan michaels

Again, see the book. It's specifically about manipulating the genes that control "memory." Now, memory in a fruit fly is different from ours, because we have so many more receptors, but it turns out the genes are strikingly similar between humans and fruit flies. Here's the value of the work.

This is the real value of the experiment, IMO. Fish will follow most anything that catches their attention, as long as it doesn't exceed a size threshold. The sensory mechanism doesn't need to be complex, but it's probably more complex than the sensor array the average hobbyist can afford. I think it's important to bear in mind that there are sensors capable of picking up an incredible amount of data, but they are far beyond what we are willing to pay. Eyes are free to their owner.

(Curiously, though, they tend to ignore their own poop, and the poop of their buddies. They know it's not food, even though it might behave -- slowly float or whatever -- exactly like food. Now *THAT* is worth a doctoral thesis.)

-- Gordon

Reply to
Gordon McComb

People behave the same way.

-- JC

Reply to
JGCASEY

OTOH, he does know pennies are pretty much worthless.

Randy


Reply to
RMDumse

You didn't intersperse bugs, so you don't know what they learned.

You didn't intersperse some dimes. Maybe they learned and habituated only to the copper color.

You didn't intersperse some copper-colored bugs. There are ways to refine your assumptions.

Great first experiment. Now where's your follow-up research? Is it worth validating your claim? Or are we back to our simple little robots?

Randy


Reply to
RMDumse

The follow-up research is obviously to build a sensory system comparable to that of the fish, yet small enough to fit on my small robot. That's the way to robot intelligence. :)

Reply to
dan michaels

I'll look for it.

But our robots need this to advance to the next level.

Fish have excellent olfactory senses and, I would imagine, underwater taste-sensing capability. They probably know which of their buddies each individual poop comes from. They know the pennies aren't food by nibbling, taste, and smell. But first, they have to sense the penny drop from 20+ feet away and get over to it in jig time. That's the good part.

Reply to
dan michaels

Yes, but the point was this: If you have a high-end sensor capable of greater definition than what we commonly use today, you'll probably write some code to take advantage of it. The robot will seem to have more intelligence because it has greater information-gathering ability. There are diminishing returns in writing behavior code for a robot that is basically blind: some touch switches, a couple of IR sensors, and maybe an ultrasonic sensor on a sweeping turret. Compared to that fish with its robust vision system, such a robot is the equivalent of an amoeba.

So, in order to get to the next phase we need either lower-cost sensors, or more money to invest in our robots. Or maybe both. That will make the next level of BBR much more meaningful.

-- Gordon

Reply to
Gordon McComb

Begging to differ, it isn't obvious to me at all. My point above was that you don't know what the sensory system of the fish is doing. So how could you even tell if you were building something comparable?

It's wonderful to read other people's research, but hats off to someone who does and publishes their own.

A side note: I've run into habituation as a research subject before. Back in the days of the AIM-65 at Rockwell, I did a side project for a couple of researchers in San Diego, where we'd test baby mice by playing a loud noise and checking their reaction to it, as measured by their jerking while balanced on a platform with a load cell.

(Did you know that if you make a loud enough sound, like jingling car keys, you can kill a mouse? It will go into shock and cardiac arrest. Or at least so I am told.)

The rate of habituation was used to determine their mental soundness and abilities. By finding something to test at the very early stages of life, you could accelerate the rate of testing without having to let the mice mature to see whether they had their faculties or not.

Anyway, one of the arguments of Brooks I find most compelling is that introspection doesn't work. Our minds will make up lies rather than admit they don't know something. And so we are prone to self-delusions.

What our stupid little robots were to Brooks was a way to check hypotheses against reality, and let reality arbitrate between what was useful/effective and what was not. Likewise, I think instead of trying to build something fishy and comparing our robots to it, we should really learn what it is we should be building.

Reply to
Randy M. Dumse

"dan michaels" wrote:

Yeah, I think your example shows how good even simple animals are at learning. Though a fish does have much better sensors than our typical small bots (probably even better than most high-end multimillion-dollar bots), that really isn't the big problem. The big problem is our bots aren't making good use of the sensors they have - which means they aren't making good use of the data coming from those sensors.

How, for example, did the fish learn so quickly to stop chasing the pennies, even when many never had the chance to try to eat one? I think your answer about fish story-telling was not as far off as you might have thought.

After a lifetime of eating with other fish, the fish have probably learned to read the behavior of the other fish. When they all swim towards a splash that seems like it could be food, the first fish that tastes the "food" and decides it's not food probably behaves very differently than when it was real food. Next time, take some bread, throw in chunks, and watch how the group reacts. You will probably notice a lot of extra splashing and fighting over the real food that never happened with the penny.

The other fish, the ones that never made it to the food, or the penny, can no doubt sense this difference, just like we could. If they see a lot of splashing and fighting going on, they know it's real food, and they join the fight. If they don't see it, they know either it's fake food or the food's all gone, and there was no reason to swim over there. The behavior of the mob is telling a story to the entire group about whether there is food and, indirectly, about how much food there is.

So every time you throw the penny in, the fish swim towards it, see no resulting mob fighting, and know the splash was not food, even though they never got close enough to test the food for themselves.

This type of behavior is easily explained in terms of reinforcement combined with internal systems that predict rewards. The splash is a predictor of reward (because the fish has many times in the past received a reward from real food after sensing a nearby splash). Once the fish senses the splash, not only does it trigger the behavior of swimming towards the splash, it also causes the internal systems to increase the prediction of a future reward. That increase, by itself, acts as a reinforcer to reward whatever behavior the fish was doing at the time it heard the splash (swimming near the bridge, for example). So the splash itself acts as a reward to encourage the fish to continue to swim near the bridge.

But as the fish swims towards the splash, and the expected fighting by the mob over the food doesn't happen, the internal reward prediction drops (oh, we probably aren't going to get any food). That drop in the internal reward prediction acts as a punishment for the behavior in action - swimming towards the sound of the splash.

This is how, each time you throw a penny in, the fish are being taught a lesson to not swim towards the splash.
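A toy sketch of that loop (Python; the numbers are invented, and this is roughly a temporal-difference-style update, not a claim about what fish brains actually compute):

alpha = 0.3                    # learning rate (a guess)
value = {'splash': 0.8}        # learned prediction: splash -> food likely

def experience(cue, actual_reward):
    error = actual_reward - value[cue]   # prediction error:
    value[cue] += alpha * error          # + reinforces, - punishes

# ten pennies in ten minutes: every unrewarded splash shrinks the
# prediction, and with it the urge to chase the next splash
for trial in range(10):
    experience('splash', 0.0)
    print("penny", trial + 1, "prediction now", round(value['splash'], 3))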

But if they learn so quickly (10 examples in 10 minutes) not to swim towards the splash, is this not making the fish forget everything it knows about getting food by swimming towards a splash? Well, if the sensory system could not tell the difference between that splash, in that place, at that time of day, and all the other splashes, then sure, it would quickly forget everything it knew about getting food by swimming towards a splash. But if the sensory perception is advanced enough to tell the difference between that splash and other splashes it's experienced in the past, then what it's learning is to not swim towards that type of splash, in this place (under the bridge in the afternoon, when bot programmers tend to throw pennies at them, instead of early in the morning, when the kids from the school waiting for the bus throw crackers to them).

This is what the "better" sensory processing you talked about is needed for. We need automatic systems that can analyze and combine all the sensory data from different sensors, and create the correct "context" for associating the current rewards with the current environmental context. When the fish is being thrown crackers, the context can be different in many ways. The daylight levels might be different, the precise sound of the splash might be different, the number of other fish around might be different. He might be in a different part of the stream. He might have to swim harder because there's more current in the water. The fish might be hungrier in the morning, when the crackers tend to be thrown. Anything the fish can sense, through any sensor, is helping to establish the current environmental context. And in some contexts the fish gets more food than in others. In some contexts, swimming towards a splash tends to reward the fish with food, and in other contexts (which might be very similar, but different in some detectable way) it doesn't.

My point here is threefold. 1) To show the habituation, the machine has to have learning - it must be changing its behavior in response to the current context. Most simple bots have little or no learning coded in them, so they have no hope of acting like the fish did. 2) It must have strong generic sensory processing that can analyze all the sensory data combined, to identify the unique differences between which contexts lead to rewards and which don't. This is a complex statistical process - not something you can make work by hand-coding behaviors (well, unless you have millions of years to work with, like evolution). And 3) the system must include reward prediction, and those predictions will modify behavior - they will be the true source of the changes to behavior. This is how the fish learns to "read" the behavior of the other fish without having that skill hard-coded into it by the creator (evolution for the fish, human programmers for bots). It learns to read the behavior of the other fish just like it learns to read everything else about the environmental context. Some contexts are predictors of rewards, and some are not. It must learn, through experience, to tell the difference. Then, with a good reward-prediction system helping to shape behaviors, any behavior which causes the bot to increase its prediction of future rewards will be instantly rewarded. No need to wait for actual food to show up to act as a hard-coded reward of the behavior.

If you want to make a bot act anything like a fish (or any animal that learns from experience), that is the type of system it must have built into it. You can't do it by hard-coding behaviors into the machine, because unless the programmer is there to teach it the difference between the sound of a penny hitting the water and a cookie hitting the water, it will never learn to ignore the splash created by a penny. Instead, as the programmer, you hard-code the result you want the bot to achieve (get real food in its stomach) as a hard-coded reward. Then you use a statistically based system to do the associations between the multimodal sensory contexts, the actions that work in each context for producing rewards, and the behaviors that don't work.

This is how intelligent machines are going to work. The only unknown is the implementation details of how such a machine is best able to take multiple sensory inputs, define the "context", and map each context to behavior. But it's going to be modality-independent, because whatever mathematical techniques work for correlating sensory data to actions are going to work well no matter what the sensory data "means" or what the actions "mean". At this level, the only "meaning" that's important is how much reward each mapping produces - i.e., how "good" each behavior is, for each sensory context, based on past experience.

As a footnote, let me add that I don't know anything about fish. It's possible, for example, that fish are born hard-wired with the ability to recognize the activity of fish fighting over food as a reward. They might not have learned it in their life through experience. They might also be born with the behavior of swimming towards a splash instead of having to learn it. And their learning to avoid the penny might be only temporary. If you were to stop throwing pennies for 30 minutes, you might find they had forgotten all they learned and were back to the old behavior of swimming like mad towards the splash (i.e., it was a form of habituation instead of long-term learning). These are all variations on how evolution might have implemented learning in fish to maximize its chance of survival. But these and many other variations would be easy to code, if we simply had better generic learning algorithms to start with. Which we don't. Yet. And that's what we are missing to be able to create more intelligent behaviors in bots and all our machines.

I agree with you that we are lacking strong enough sensory-perception code to make use of more complex, multiple-modality sensory data. But I already know what the code needs to do. It's not just a perception problem - it's a generic statistical learning problem that maps sensory data to actions (aka BBR), and adjusts the priorities of those mappings (learning) through experience. It learns to abort the swimming behavior quicker and quicker in that special sensory context defined by the sound of a penny splashing in the water, in that stream, on that day, in that location. And BTW, the reason it is "aborting" the swimming behavior is simply that other behaviors (like "hide in the rocks to protect yourself from predators") are becoming more dominant in that context than "swim towards splash". Each time a behavior is used and fails to produce the expected rewards, it gets demoted in that context. The system is always picking the behaviors that seem like they will produce the most rewards. When there is no sign of food around, the behavior most likely to produce the best rewards is the "hide in rocks" behavior, or maybe "look for mate", etc.
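Here's a toy Python sketch of that demotion idea (the behavior names, priorities, and step size are all invented):

priority = {
    ('penny_splash', 'swim_toward_splash'): 0.9,   # starts dominant
    ('penny_splash', 'hide_in_rocks'):      0.5,
}

def pick(context, behaviors):
    # always run the behavior with the best expected payoff here
    return max(behaviors, key=lambda b: priority.get((context, b), 0.0))

def demote(context, behavior, amount=0.15):
    priority[(context, behavior)] -= amount

for trial in range(6):
    b = pick('penny_splash', ['swim_toward_splash', 'hide_in_rocks'])
    print("trial", trial + 1, "->", b)
    if b == 'swim_toward_splash':
        demote('penny_splash', b)    # chased the splash, got no food

After three failed chases, "hide in rocks" out-ranks "swim toward splash" in that context and the commotion stops - without the chase behavior being erased in every other context.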

Reply to
Curt Welch

Only if the programmer is able to know every possible environmental condition the robot is going to experience in its life - which they never are. Which means robots programmed like that only work well in the environments they were programmed to work in.

It seems I just duplicated a lot of what you already said in my previous post.

That is exactly why you need learning, and why the programmer can't know ahead of time how to answer these questions. If the bot is competing against a bot the programmer has never seen before, it's unlikely that he would have coded the correct amount of aggressiveness into his bot. But with context-sensitive learning, the bot can learn to recognize the "blue" bot and learn that trying to grab the ball away from it never works, so it shouldn't bother trying. But it can learn that the red bot can't hold on to shit and that he can grab the ball away every time. So if the red bot is around holding a ball, it should always try to grab the ball from it. But it might also learn that the green bot is better than he is at grabbing the ball from the red bot. So if the green bot is around and is trying to grab the ball, then he might as well go do something else that is likely to be more productive.
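A minimal sketch of that kind of context-keyed learning (Python; the bot colors, payoffs, and learning rate are all made up):

q = {}          # (context, action) -> estimated payoff
alpha = 0.2     # learning rate

def update(context, action, reward):
    old = q.get((context, action), 0.0)
    q[(context, action)] = old + alpha * (reward - old)

def best_action(context, actions):
    return max(actions, key=lambda a: q.get((context, a), 0.0))

# experience: grabbing from blue never works (and wastes effort, so
# score it negative); grabbing from red works every time
update('blue_bot_has_ball', 'grab', -1.0)
update('red_bot_has_ball',  'grab',  1.0)
print(best_action('blue_bot_has_ball', ['grab', 'wander']))  # wander
print(best_action('red_bot_has_ball',  ['grab', 'wander']))  # grab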

All these priorities about which behavior is the most productive in different environmental contexts are something that must be learned (and constantly adjusted) with experience, if you want the bot to act "intelligently". And you will never make it work very well if that "context" is defined by a hard-coded perception system. We need strong, generic, statistical algorithms for merging all the sensory data and using that to select the best current behavior to produce.

Yes, you give them "frown" and "smile" hardware.

A strong reinforcement-learning system that had only a simple hard-coded reward, like getting a ball in the goal, but which also had a strong context-sensing system, could learn a large set of different behaviors that we as humans would label as you did above. But internally, the bot only needs one state or purpose - do whatever works best, in this context, to produce the most expected future reward.

These systems, like I talked about in the other post, need to include a strong internal reward-prediction system. If you take the output of that internal reward-prediction system and wire it to an external signal, like your lights, then you have simple smile and frown hardware that other intelligent bots could learn to pick up on, as you suggested.

Reply to
Curt Welch

Except of course, the species as a whole learned those as well, through the slower learning process of evolution. It's learning either way; it just happens on different time scales. In one case the species learns as a whole, and in the other an individual learns in its lifetime.

Reply to
Curt Welch

Right. If you are coding such a system, you could think of the "memory store" as weights that might exist in a neural network. Long-term memory stores are weights that are modified and stay modified forever (creating long-term learning). Short-term memory stores are weights that can be modified, but which always slowly return to their starting condition (creating habituation). You could easily code a system that used a combination of both. The shorter-term memory would be modified quickly, allowing the fish to learn to ignore the splash in only 10 trials. The long-term memory would also be modified, but much more slowly. So if you came back the next day, the fish would have reverted (almost) to the same starting behavior. But if you did this every day for a year, the long-term memory effect might eventually cause the fish to stop swimming towards a splash altogether.
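A sketch of that dual-store idea (Python; all the rates are guesses, not fish data):

class SplashResponse:
    def __init__(self):
        self.w_long  = 1.0   # long-term weight: drifts slowly, stays put
        self.w_short = 0.0   # short-term weight: moves fast, decays back

    def unrewarded_trial(self):
        self.w_short -= 0.1      # habituates in ~10 trials
        self.w_long  -= 0.001    # would take ~a year of daily pennies

    def overnight(self, hours=16):
        for _ in range(hours):
            self.w_short *= 0.7  # relaxes toward its starting condition

    def strength(self):
        return max(0.0, self.w_long + self.w_short)

fish = SplashResponse()
for _ in range(10):
    fish.unrewarded_trial()
print("after 10 pennies:", round(fish.strength(), 2))  # ~0.0, ignores splash
fish.overnight()
print("next morning:", round(fish.strength(), 2))      # ~0.99, chases again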

Yes, I agree. It can correctly identify the sensory clues to distinguish how the context you set up that day was unique from other environments the fish had experienced in the past. Better sensors of course help, but better sensory-processing algorithms I see as more important. There is a lot of useful information in even simple sensors that most small bots fail to make any use of. This includes the bot sensing what it is doing and merging that with what it is sensing externally. I suspect few small bots make good use of what can be extracted from that data.

Reply to
Curt Welch

Forget fish sensors. They are overkill. Just go for better sensory-processing algorithms. Try to create a generic learning system that can integrate the sensors you already have on a small bot, including data from the outputs your code is producing, as extra "sensory" inputs.

For example, a bot with sonar should learn that if the wheels are being commanded to roll forward, but the sonar is not detecting things moving towards it, then something is "wrong" and the bot is not moving.
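For instance (a naive Python sketch; the names are invented, and as written it will cry wolf when there's nothing within sonar range at all):

def seems_stuck(cmd_forward, sonar_mm_history, min_closure_mm=5):
    # commanded forward, but the nearest sonar return hasn't gotten
    # any closer over the last few readings -> probably not moving
    if not cmd_forward or len(sonar_mm_history) < 2:
        return False
    closure = sonar_mm_history[0] - sonar_mm_history[-1]
    return closure < min_closure_mm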

For example, here's what I was thinking of trying to make work with my Vex hardware. Reward the system for moving forward as quickly as possible, for as long as possible, without turning. I think a simple critic using the sonar data and wheel-movement data could generate that reward signal. It would give the system more rewards as it moved forward and the sonar sensed objects approaching. The faster the objects approached, the more rewards the critic would generate. Then see how good the bot is at using all its sensory data to keep moving in a complex environment without constantly hitting things or getting stuck.
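The critic itself could be just a few lines (a sketch; the Vex-ish units and scaling here are assumptions):

def critic(prev_range_mm, cur_range_mm, left_cmd, right_cmd):
    # reward only while both wheels are commanded straight ahead
    if left_cmd <= 0 or right_cmd <= 0 or left_cmd != right_cmd:
        return 0.0
    closing = prev_range_mm - cur_range_mm   # mm closed since last ping
    return max(0.0, closing) / 100.0         # faster approach -> more reward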

A really smart system will learn its way around the environment and find things like long halls where it can do a straight run without having to stop or turn. Or it might find a more complex route through the halls which maximizes the long straight paths.

The simple sensors the Vex has are a hard enough challenge. Sonar ranging, combined with light-level detection, combined with bumper switches, is a lot to work with as it is.

For example, a good learning system might learn that when driving straight, if an object is approaching at the expected speed given how fast we are driving (aka the object is stationary), and the light on the left stays at a fairly constant level, but the light on the right takes a sudden jump higher, then the bot should turn right to get the most rewards (because the light jump was caused by a hall, or open door, or free path to the right, which is more likely to yield a long path of rewards than continuing straight will in that context).

Or it might learn to use the light sensors to help it stay in the middle of the hall and not get stuck hitting the wall, which causes it to "lose rewards" before it gets going again. The point is that it should be combining information from all the sensors it has and looking for correlations with past reward levels to trigger all its behavior. It doesn't need to "know" what the sensory data "means". It just needs to be able to correlate temporal sensory-data patterns with the behaviors that produced the most rewards (worked the best) when similar patterns were last seen.

I think simple problems like that will help lead to better sensory processing, learning systems, and intelligence, without having to deal with something like a high-bandwidth video system. But if you have the processing power and memory to process a video data stream, that's useful as well. I just don't think the simple sensors are being used as well as they could be yet.

Reply to
Curt Welch

Then it could learn something wrong. There are many instances where a robot may move forward but there's nothing in front of the sonar within range, so all the sensor sees is the normal noise and transients common to ultrasonic pings. It would appear to the robot that "I'm not moving!" when in fact it is. This reading alone is not sufficient sensory input.

Real life often screws up the best-laid plans. You need to remember, if things were always this easy, people would already be doing them.

-- Gordon

Reply to
Gordon McComb

As Gordon pointed out, this was probably not really learning, as the fish will probably react the same way the next day. Rather, the changed behavior was some sort of short-term memory and/or habituation. BTW, ever hear the one about the goldfish ...

"Goldfish are said to have such short memories that every trip around the bowl is a new experience"

How else to live in a bowl, and not go totally crazy?

Unfortunately, bread makes no noise when it hits the water, and it immediately floats downstream in that current. OTOH, I am sure the fish in the creek were especially tuned to the particular "class" of sound produced by the penny hitting the surface. Sharp and strong, maybe like a dragonfly lighting down, etc.

This is rather interesting, because it's several levels more advanced than a single fish sensing and perceiving a good stimulus [sound strike] from 20' away - what I have been talking about. Here the fish are able both to perceive the behaviors of their neighbors and to react in some appropriate manner.

You realize that, before they can do this - engage in complex social behavior - FIRST they need the adequate perceptual mechanisms.

Ditto.

No doubt, something like that.

Yes, there are several problems here to be solved. One is better sensors, another is sensory integration, and a 3rd one [hiding in the background] is the learning.

This last is why we have computers.

Yes, as mentioned. First the adequate perceptual capability, then the other stuff. Regarding evolution, one presumes this is how it happened. IE, individual fishes needed the sensory-action mechanisms before they could properly use these in the context of interacting with other fishes in large schools, etc.

You can give the bot, via both mechanical design and coded routines, the basic ability to generate actions, and then use the other stuff to subsume control over these actions. Layer upon layer of control.

Frogs are born with the ability to automatically snap at flies that fly past - courtesy of evolution. Many [or most] habitual behaviors in lower animals are generally held to be instinctual, rather than learned individually.

There are many ways that habituation to unproductive stimuli can be wired in. It may simply be that, after making several warp-speed attacks, lactic acid builds up in the fish's muscles and doesn't quickly dissipate, and fish aggressive behavior is geared inversely to the level of lactic acid. Not much to do with learning or "memory" as you're positing it. Etc.

In fact, what I think you have been clearly illustrating is that it's really a lot easier to code behavior and learning than it is to replicate good perceptual processing. Despite what you say in the next few sentences.

Reply to
dan michaels
