stupid little robots

Howdy,

Following on from the behavior-based robotics discussion, it seems clear why a researcher in artificial intelligence might view robotics as a distraction, even as irrelevant. If the main topic of interest is AI, then robotics is certainly a minor player.

However, for those of us who are interested in robotics as a field unto itself, AI is a natural component of robotics.

That is, robot designers and builders need some sort of guidance on how to organize and order their software based on the robots we are building now, not hypothetical robots of the future with human-like intelligence and the ability to learn.

From that point of view, the most useful thing that the field of AI has made available to robot builders is the subsumption architecture, or so-called "behavior based robotics," in all its many variations and dialects.

There seems to be an emerging consensus on this list that this approach has "stalled," evidently because of the paucity of new papers being published on the subject. My experience is that this is not so, but that may be because, like everyone else, I use a modified, even bastardized, concept of BBR, for which I do not see any particular plateau approaching.

We've heard a lot of statements like "the BBR approach cannot solve complex problem sets" with no actual experience or proof to back them up. Like the fellow who wanted to close the patent office in the 1800s because "everything had already been invented." At least everything he could think of. My experience is otherwise.

Perhaps this perceived plateau is actually among the theorists and their debating societies, and not among actual robot builders and their creations? That seems to ring more true.

best dpa

Reply to
dpa

The term / idea of "stupid little robots" comes from a comment Minsky made a couple of years ago. He was bewailing the fact that so much effort, especially in the "academic" community, has been directed away from the pursuit of real AI toward concentration on subsumption approaches.

However, this shouldn't dissuade us in the personal or hobby robotics field, i.e. lots of residents of this forum, from pursuing our interests, as we're not in academics and not publishing papers [well, you are!]. After all, there is only so much processing power you can strap onto the back of a mini-sumo, or even a 6WD robo-magellan bot.

OTOH, anyone familiar with the original subsumption concept that Randy mentioned in the other thread, as regards Brooks' original papers, understands the limitations inherent in simple reactive approaches that don't contain any systems more advanced than simple sensory-motor behavioral reflex loops.

My personal feeling is, the reactive architecture makes a nice "foundation platform" for extension to more complex designs. This includes adding memory systems, learning, advanced sensory integration, the hybrid approaches that Arkin talks about in his book, etc. In fact, I put it to you that very few of us have limited our designs to purely reactive systems. Most of us have already seen past that, in one way or another.

Exactly, as mentioned above.

Reply to
dan michaels

With the advent of Wifi, I'd say that even a hobbyist could consider tapping the power of enormous internet clusters for their little robots.

You can use free shell accounts to set up your own ad-hoc cluster, you can possibly use an existing cluster (most of them are rather idle, if you believe the statistics viewable on the web), or you can subscribe to a distributed processing network (though it is less straightforward to tap these for real-time processing).

Between friends and family you can easily have access to half a dozen to a dozen computers accessible through the Internet (assuming your friends and family listened to you when you told them to buy a MacOSX or Linux system ;-)).

The minimal CPU with an IDE 40GB HD, 512MB DDR2 RAM, a Sempron 1800 MHz, a motherboard with LAN, and a PSU costs about 190 €; 153 € if you remove the HD and make it network boot. So with some handiwork, you can build a cheap CPU cluster (eg 1520 € for 8 CPUs, 4GB of RAM and 320 GB of HD) with quite sizeable processing power; you could easily run between 0.1 and 1 Mneurons. Just add a Wifi router, and you're all set to make a very smart little robot.
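
As a quick back-of-the-envelope check of those figures (a sketch in Python; the prices are simply the ones quoted above):

# Back-of-the-envelope check of the cluster figures quoted above (prices in EUR).
node_with_hd = 190    # Sempron 1800, 512 MB DDR2, 40 GB IDE HD, LAN motherboard, PSU
node_diskless = 153   # the same node with the HD removed, booting over the network
nodes = 8

print("8 nodes with disks:   ", nodes * node_with_hd, "EUR")    # 1520 EUR
print("8 diskless nodes:     ", nodes * node_diskless, "EUR")   # 1224 EUR
print("total RAM:            ", nodes * 512 // 1024, "GB")      # 4 GB
print("total disk (with HDs):", nodes * 40, "GB")               # 320 GB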
Reply to
Pascal Bourguignon

Well, that puts a whole different spin on it for me.

Thanks for the explanation.

I will not expect much attention from Marvin, then.

Ahem. hem.

I hear what you're saying, but I would be careful not to exclude any folks posting here, or accidentally dissuade academia from participation. Teaching and published professionals are present, and their interests may be more robotics than AI.

I, myself, crossed that line this year, when I started teaching a robotics course at the undergraduate/graduate level. While I haven't published anything refereed in robotics yet, I am published in several peer-reviewed arenas, including electronics, computers, and general relativity. And my department head has expressed interest in creating a robotics research facility at the university for me to lead. Sort of a retirement spot back home for me (or a wild flight of fantasy, depending on how the funding and the future turn out).

I always thought that you, dan, might also do some more serious publishing, even beyond what you've done to date on your web sites, because your style of research seems rather professional.

Further, since this is one of the most civil of all the usenet forums I've participated in, I would love to see more academics join us and give us the benefit of their thoughts.

An aside: I take it you and Curt are frequenters of c.a.p. Are there other newsgroups than c.r.m. worthy of participation?

Reply to
Randy M. Dumse

This is where it gets all fuzzy for me. The "anyone familiar" and "intuitively obvious" and so on.

Can you define specifically a task which you have attempted to solve with a subsumption approach that was not solvable? Or are these conclusions all based on thought experiments? A specific example would be helpful.

regards, dpa

Reply to
dpa

Do you have a suggestion for a Wifi transceiver which will be small enough and low power enough to fit easily on a 4"x4" robot and run off of 4 AA cells?

Reply to
dan michaels

Well, as I understand it, the term behavior-based robotics was essentially adopted to distinguish it from the original subsumption idea of Brooks, which was little more than sensory-motor reactive loops, plus some emergence of behaviors based upon external environmental input. Depending upon the individual personality, BBR can probably mean a lot beyond subsumption.

However, I was mainly thinking about the sort of things Arkin discusses in chapters 5+6 of his book, namely short- and long-term memory, goal planning [short and long time frames], internal maps and representations of the external world [Brooks was adamant that with reactive robotics, the "world is its own best representation"], and everything dealing with high-level symbolic processing.

Reply to
dan michaels

There were some very long threads discussing Marvin's comment a couple of years ago, including on this forum, IIRC.

Heaven forbid. All "civil" discussion is welcome. Luckily, for an unmoderated forum, this forum has been totally civil for as long as I can recall.

I was mainly trying to point out that, as I understand it, Marvin's comments should be taken as mainly pertinent to "his" opinion of directions in academic research. Too many grad students building "small" robots when they could be working on higher-level and symbolic AI approaches. Outside of academics, I wouldn't worry too much about it - esp. in this world.

However, as everything in life actually goes in "cycles" [check out "The Fourth Turning" by Strauss and Howe], and given the comments on the other thread, it just may be that the cycle is getting back around to higher-level AI. Maybe Marvin is prescient, as usual.

I won't bore you with my life story, but suffice it to say, I've been around the track a couple of times, in and out of several professions.

Joe Legris, too, on c.a.p.

c.a.p tends to be very non-civil at times. And also, I hate to say it, extremely repetitive in the discussions. And mostly more conjecture than anything really useful in a "practical" sense, unlike this forum. I have been developing my own research directions, and have used c.a.p. to bounce ideas around. Lately c.a.p. has run down, as many of the people with something good to say [ie, providing a good mix from different areas] have stopped posting.

There is also yahoo c.a.p., where some of the old google c.a.p'ers started a couple of years ago, due to the gross incivility on c.a.p. You might like it, since they seem to talk more about physics and computation [a couple of your areas] than pure AI :). There are a couple of philosophers there who are pure dualists [if you like that stuff], and Marvin also attends it regularly ...


Reply to
dan michaels

If you want to call everything a behavior then doesn't any system reduce to a set of elementary behaviors and therefore any system can be considered behavior based?

Can you define what you mean by behavior based vs not behavior based? Every system changes its states over time and a sequence of such changes we can call a behavior.

A reactive system needs to be programmed (or evolved), and what subsumes what is decided by the programmer (or evolved). There is no real-time learning or planning based on predictions of outcomes.
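
Something like this minimal sketch (Python; the behavior names and thresholds are invented for illustration) is what I mean by the programmer deciding what subsumes what:

# Minimal sketch of a fixed-priority, subsumption-style arbiter.
# The behaviors and their priority ordering are hard-coded by the programmer.

def escape(sensors):
    """Highest priority: back away from an imminent collision."""
    if sensors["bumper"]:
        return ("reverse", 0.5)
    return None                      # no opinion: defer to lower layers

def avoid(sensors):
    """Steer away from obstacles the sonar sees up close."""
    if sensors["sonar_cm"] < 30:
        return ("turn_left", 0.3)
    return None

def cruise(sensors):
    """Lowest priority: default forward motion."""
    return ("forward", 0.4)

BEHAVIORS = [escape, avoid, cruise]  # ordered highest to lowest priority

def arbitrate(sensors):
    """Each tick, the highest-priority behavior with an output subsumes the rest."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

# Example tick: sonar sees something at 20 cm, bumper not pressed.
print(arbitrate({"bumper": False, "sonar_cm": 20}))   # ('turn_left', 0.3)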

-- JC

Reply to
JGCASEY

Hi

Thanks for the reply.

However, no one seems able to offer a tangible example of a specific task not solvable with a BBR or "subsumptive" approach. Lots of handwaving and generalities, or suggestions of classes of problems that no one seems to have solved at all with any approach.

However, given what seems to be a general consensus, it should be easy to offer a specific example. The fact that no one seems to be able to offer more than general assertions, as do you also, Dan, if you'll allow me to be so bold, makes me question the basis of that general consensus. How do you know it's true?

For example, I have a hard time understanding why an algorithm to search through an array of values returned by a sonar array for some triggering pattern for a "behavior" is acceptable as BBR, but a similar algorithm to search through an array of values of, say the last 60 seconds of sonar readings, or odometry locations, or any other history one may keep, is not. Both seem to me to fall well within the behavior based paradigm, Brooks' original theorizing notwithstanding.
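
To put that comparison in concrete terms, here is a rough sketch (Python; the thresholds, names, and data structures are invented for illustration) with both kinds of trigger living in the same behavior framework:

# One trigger looks only at the current sonar scan; the other scans a short
# history of readings. Both are just trigger tests inside a behavior.
from collections import deque

def blocked_now(sonar_scan, threshold_cm=25):
    """Purely reactive trigger: fire if anything in the current scan is too close."""
    return min(sonar_scan) < threshold_cm

class StuckDetector:
    """History-based trigger: fire if the front reading has barely changed
    over roughly the last 60 seconds of samples (say, 1 sample per second)."""
    def __init__(self, samples=60, tolerance_cm=5):
        self.history = deque(maxlen=samples)
        self.tolerance_cm = tolerance_cm

    def update(self, front_cm):
        self.history.append(front_cm)
        if len(self.history) < self.history.maxlen:
            return False
        return max(self.history) - min(self.history) < self.tolerance_cm

# Both triggers slot into the same behavior-arbitration loop:
detector = StuckDetector()
scan = [120, 80, 22, 95]                  # one sonar sweep, in cm
if blocked_now(scan) or detector.update(scan[2]):
    pass                                  # e.g. activate an avoid/unstick behavior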

So, given that distinction, I am still looking for some proof of what seems to be the widely held belief in the inapplicability of this approach to some particular problem. Surely if this is such a common and widely held belief, someone can offer at least one tangible example? An actual problem -- best would be one based on actual experience, but that seems increasingly unlikely -- but at any rate something specific, to aid in my understanding. At this point I have not encountered a problem I would so designate.

best, dpa

dan michaels wrote:

Reply to
dpa

I should add the caveat here, in case you check it out, that many of the current threads on that forum should probably be labelled [highly-OT], but you might look back a month or two.

Actually, I personally haven't found yahoo-ai.p too useful, especially because so many of the discussions involve physics and computation, which are not my areas of interest, plus the AI discussions tend to be geared more towards classical gofai-type stuff.

Reply to
dan michaels

Sounds like you have a very broad personal definition of BBR, which in essence subsumes [no pun intended !!] all other areas of AI, including the ones I just mentioned, such as internal representation and high-level symbolic reasoning.

However, if you look in chapter 6 of Arkin, I believe you'll find he doesn't view the domain of BBR, per se, quite so broadly, pg 206 ...

"... Hybrid deliberative/reactive robotic architectures have recently emerged combining aspects of traditional AI symbolic approaches and their use of abstract represnetational knowledge, but maintaining the goal of providing the responsiveness, robustness, and flexibility of purely reactive systems. Hybrid architectures permit reconfiguration of reactive control systems based on available world knowledge through their ability to reason over the underlying behavioral components. Dynamic control system reconfiguration based upon deliberation (reasoning over world models) is an important addition to the overall competence of general purpose robotics ..."

Basically, this is an issue of broadness of definitions, it would seem. If you want to say BBR is all this and more, I guess that's your way of defining it.

Also, the fact that other approaches to AI haven't solved the "general AI problem" is a different matter from how one defines the domain of BBR.

Reply to
dan michaels

I've got no suggestion, but even my three-year-old Palm Tungsten G contains a Wifi transceiver, and it works on batteries no bigger than 4 AA cells, and would easily fit on a 4"x4" structure.

There are all kinds of wifi-based devices that could be used.


Reply to
Pascal Bourguignon

To answer the question of what is behaviour-based and what is not, look at Ch. 1 of Arkin's Behavior-Based Robotics (1998). He talks of the ecological approach, "in which the robot's goals and surroundings heavily influence its design" (p.1). He also discusses situatedness, "a strong two-way coupling between organism and environment" (p.8), sensing and acting within the environment in real-time as opposed to abstract knowledge representation and planning (p.15), and multiple competing or cooperating processes (p.15).

Brooks' characterization of BBR (p.26) includes:

Situatedness - the robot is surrounded by the real world.
Embodiment - the robot has a physical body that dynamically interacts with the world.
Emergence - intelligence arises from the interaction between robot and world - it is a property of neither.

Arkin's "spectrum of robot control" (p.20) sums it up. It's a continuum. He says we should drop "the false dichotomy that exists between hierarchical control and reactive systems"

DELIBERATIVE CONTROL (CLASSICAL, SYMBOLIC)
Representation dependent
Slower response
High-level intelligence
Variable latency (complex computations)
Perception establishes and maintains internal world-model
Assumes that tasks can be decomposed, planned and scheduled
Poor tolerance of uncertainty

. . . [EVERYTHING IN BETWEEN] . . .

REACTIVE CONTROL (BBR, REFLEXIVE)
Representation-free
Real-time response
Low-level intelligence
Low latency (simple computations)
Perception leads directly to action
Assumes that tasks will decompose in real-time as they present themselves
Handles uncertainty with reactiveness and flexibility

---------------------------------------------------------

IMO classical symbolic processing might be part of the behaviour of some intelligent systems, but it is not an explanation for it. It is an effect, not a cause. Consequently, Arkin's spectrum of robot control is really just a history of robotics. I think BBR is just another step along the path to the realization that much of so-called intelligent behaviour is actually a property of biological machines that evolved in a natural environment. We might try to approximate it with machines, but it is a much bigger task than we estimate.

We already have artificial intelligence - that's why machines are so dumb, they're artificial. If you want real intelligence from a non-biological machine you're first going to need an extremely complicated machine - similar to a correspondingly intelligent animal, I suspect. Then you're going to need much of the information that was distilled through the interactions of billions of organisms over hundreds of millions of years of evolution. That's on the order of 10^17 organism-years.

Is it possible that evolution took as long as it did because the information it produced (in the guise of genomes) cannot be obtained much faster? Suppose we could somehow speed it up by a factor of a million or even a trillion. That's still 100,000 organism-years. That's a lot of small-stupid robots. My advice is to make more - many many more.
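
A quick sanity check of those orders of magnitude (Python; the round numbers are just the loose ones from this post, not real biological estimates):

# "Billions of organisms over hundreds of millions of years" ~ 1e9 * 1e8.
organisms = 1e9
years = 1e8
organism_years = organisms * years
print(f"{organism_years:.0e} organism-years")                    # ~1e+17

for speedup in (1e6, 1e12):     # "a factor of a million or even a trillion"
    print(f"speedup {speedup:.0e} -> {organism_years / speedup:.0e} organism-years")
# Even a 1e12 speedup still leaves ~1e+05, i.e. 100,000 organism-years.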

-- Joe Legris

Reply to
J.A. Legris

Yes. A Compact Flash card. I have one for my iPaq 2215.

Hi btw, just passing through :-)

Alison

(I built a new site for it just yesterday, btw)

Reply to
techie_alison

Thanks for the info. The latter looks like a nice little device. I seriously doubt you can get 328' of range indoors, even at the low-end rate of 1 Mbps, with just 25 mW [14 dBm] of transmit power ... given my experience with RF.
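
As a rough link-budget sketch (Python; the -90 dBm receiver sensitivity is an assumed figure typical of 802.11b radios at 1 Mbps, not a spec for this device, and antenna gains are ignored):

# Free-space path loss over the claimed 328' (~100 m) at 2.4 GHz.
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    c = 3e8                    # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

tx_power_dbm = 14              # 25 mW, as quoted above
rx_sensitivity_dbm = -90       # assumed sensitivity at 1 Mbps
loss = fspl_db(100, 2.4e9)     # ~80 dB over 100 m in free space

margin = tx_power_dbm - loss - rx_sensitivity_dbm
print(f"free-space loss ~{loss:.0f} dB, fade margin ~{margin:.0f} dB")
# Only ~24 dB of margin even in free space; indoor walls and multipath
# can easily eat tens of dB, which is why the 328' indoor figure looks optimistic.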

What sort of max range have you seen?

I imagine this sort of lashup will start getting very popular in robotics in the next couple of years. I've played some with 900-MHz transceivers and ZigBee, but this looks interesting too.

BTW, there's an article about using WiFi, Bluetooth, and Zigbee in the Sept 2006 issue of Servo mag, the latest issue.

Reply to
dan michaels

c.a.p gets very uncivil at times because most of what's being discussed is nothing more than opinion - yet everyone is very invested in their own opinions. It's as bad as debating religion and politics. Because it's a philosophy group, very little practical AI gets discussed. It's mostly silly debates over the meaning of intelligence and whether AI is even possible, and why everyone else is approaching the problem the wrong way. :) Someone trying to program bots is likely to be hard pressed to find anything useful in the group.

Reply to
Curt Welch

My belief is that the subsumption approach could probably solve any problem (keeping in mind that I only roughly understand the subsumption approach), including full human intelligence, but that humans just wouldn't be able to hand-code the solution - at least not without thousands of years of trial and error adjustments to the code.

As an example, try making a robot do image recognition as well as a human. Make it tell you all the things that are in a picture. This is an example of a problem that even if we know how to solve it, I don't think a human could ever hand code the solution. It must be done by hand coding a learning system, and then training it, just like you train a human, to perform the same trick.

Or if that example is too much of a sensory problem and not enough of an action problem for you, make a car that can drive from one location to another in a typical city as well as a cab driver can, without using GPS. The complexity of the sensory interaction required to respond not only to the road and traffic signals, but to things like a construction worker, cop, or crossing guard trying to direct traffic, is just beyond what humans can be expected to correctly hand-code a solution for, using the subsumption architecture or any other approach.

On a scale of 1 to 100 of what can be done with the subsumption architecture, with 100 being full human intelligence, the most complex things I've seen done with it might rate a 2. It's not that the subsumption architecture has hit some brick wall which can't be crossed, it's just that the work required to create more complex solutions is beyond what humans can easily understand - so it would require a ton of slow trial and error programming work to create systems of greater complexity.

For example, Dan talked about his walking bot and how he thought of using a learning system to evolve a better set of parameters for making it walk. This is an example of something too complex to do by hand. Neither Dan nor anyone else can look at a problem like that and just "know" how to set the parameters by hand to make it walk better. Instead, we just have to make changes and test, make changes and test, make changes and test, and slowly, by trial and error, create a better walking pattern. And this is a simple problem of controlling a few legs to make it walk forward. How do you hand-code a solution to make the same system react to complex sensory problems while at the same time navigating across complex moving terrain with simple subsumption techniques?
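
That trial-and-error loop is essentially random hill-climbing over the gait parameters. A minimal sketch (Python; walk_score() and the parameter names are invented stand-ins for "run the robot and measure how well it walked"):

# Random hill-climbing over a handful of gait parameters.
import random

def walk_score(params):
    """Placeholder fitness function; in practice this is a physical trial run."""
    stride, lift, phase = params["stride"], params["lift"], params["phase"]
    return -(stride - 0.6) ** 2 - (lift - 0.3) ** 2 - (phase - 0.25) ** 2

params = {"stride": 0.5, "lift": 0.2, "phase": 0.5}   # initial hand-tuned guess
best = walk_score(params)

for trial in range(200):                    # "make changes, and test", repeatedly
    candidate = {k: v + random.gauss(0, 0.05) for k, v in params.items()}
    score = walk_score(candidate)
    if score > best:                        # keep a change only if it walks better
        params, best = candidate, score

print(params, best)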

I think it can be specified, but I think there are real limits to what we, as programmers trying to manually specify the reactions, would ever be able to understand.

Reply to
Curt Welch

I can't read that quote above and get any useful meaning out of it.

To try to understand what people think BBR is: does it allow internal state or not? Because any system that has internal state, whose state changes as a function of internal state and inputs, and which generates outputs as a function of internal state and inputs, is basically the Turing machine of robotics. It's an FSM that can do anything that can be done.
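
In other words, something like this toy sketch (Python; the states and inputs are invented, the point is only the structure (state, input) -> (new state, output)):

# A machine whose next state and output are both functions of
# (current state, input) - i.e. a Mealy-style finite state machine.

def step(state, inp):
    """Return (next_state, output) as a function of internal state and input."""
    transitions = {
        ("wander", "obstacle"): ("avoid",  "turn"),
        ("wander", "clear"):    ("wander", "forward"),
        ("avoid",  "obstacle"): ("avoid",  "turn"),
        ("avoid",  "clear"):    ("wander", "forward"),
    }
    return transitions[(state, inp)]

state = "wander"
for inp in ["clear", "obstacle", "obstacle", "clear"]:
    state, output = step(state, inp)
    print(state, output)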

It seems to me that the distinction between BBR and other approaches is less a limit on what it can do, and more a general label for how we choose to think about the operation of the machine.

Does Brooks have a formal (aka mathematical) definition of what subsumption is, and what it isn't? Or is it just a general approach to how to program robots?

Reply to
Curt Welch

Take a look at Joe's Sept 3 post for a fuller description from Arkin's book.

Reply to
dan michaels
