Well, what I wrote wasn't an argument. It's simple mathematical fact. So I'm not sure what point you are trying to make, or what you didn't understand.
I really have no clue what you are thinking here.
If you read 40 bits of information, you have 40 bits of information. Is that really so hard to understand?
Now, if you want to talk information theory, then the information content of those 40 bits for a typical bot will in fact be quite a bit less than 40 bits, because the bits you read are not statistically independent. Each time you read the switch at a high rate (like in my example), there is a high probability of it being the same as the last time you read it. And, if the bot has some algorithm for trying to avoid hitting things, then for most environments the switch will be open far more often than closed. This means the true information content of the bit stream will be far less than 1 bit per bit read.
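To make that concrete, here's a minimal sketch (my own toy model, not from anyone's actual bot) of a switch that is mostly open and rarely changes state between high-rate reads. Even the simple per-bit entropy of the biased stream comes out below 1 bit per read - and that still overstates the content, since it ignores the correlation between successive reads.

```python
# Toy model: a bumper switch sampled at a high rate. The switch is open
# most of the time, and successive reads are highly correlated, so the
# stream carries well under 1 bit of information per bit read.
import math
import random

def shannon_entropy(p):
    """Entropy in bits of a binary source with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

random.seed(42)
reads = []
state = 0  # 0 = switch open
for _ in range(40):
    if random.random() < 0.05:  # small chance the state changed since last read
        state ^= 1
    reads.append(state)

p_closed = sum(reads) / len(reads)
print(f"{len(reads)} bits read")
print(f"entropy per read: {shannon_entropy(p_closed):.3f} bits")
```

Note this only measures the bias (open far more than closed); accounting for the read-to-read correlation would push the true information content even lower.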
Yes, I didn't really get into the amount of information potentially embedded in the time domain for an application like that.
Sure, the information coming from a bumper switch on a bot is fairly useless above a certain temporal resolution. It's effectively noise.
There's far more useful information in it than that. Writing code to take advantage of the information might not be very easy, but the information is there nonetheless.
My point is not how much useful information might be in 1 second of bumper switch data. My point was that there is useful information in the history of what has happened over time. The bot can explore the entire world using only bumper switches and two wheels for driving to learn things about the world. The limit on the amount of information is really a function of the environment, not the bot. It could drive around for 10 years and build up detailed map data about the entire world, having only binary bumper switch data to work with. It might have collected trillions of bits of raw bumper switch data in all that time, but the data might compress down to only a megabyte of map data because much of the switch data was redundant.
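A quick toy illustration (mine, not from the post) of why redundant switch data shrinks so much: a long stream that is almost all "open", with rare, regular bump events, compresses down to a tiny fraction of its raw size.

```python
# Simulate years of mostly-redundant bumper data: almost all zeros
# (switch open), with a rare "bump" byte at regular intervals.
import zlib

raw = bytearray(100_000)            # 100,000 reads, all "open"
for i in range(0, len(raw), 997):   # rare, regularly spaced bump events
    raw[i] = 1

compressed = zlib.compress(bytes(raw), level=9)
print(len(raw), "raw bytes ->", len(compressed), "compressed bytes")
```

The exact ratio depends on the data, but the redundancy is what lets trillions of raw bits boil down to a comparatively tiny map.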
If the bot is located in a square room with nothing else in it, then all the data there is to collect is the fact that you are in a square room. Once you figure that out, the information content from the bumper switches becomes effectively 0 (because you can always predict what the bumper switch data will be). Now, for a real bot, that never happens, because there's all sorts of complex information about the bot itself mixed into the data (like maybe the motor batteries are running down, so it's taking the bot longer to drive from one side of the room to the other - or the left wheel is starting to turn slower, making it curve, because a gear is wearing down). All that sort of data can be extracted from the bumper switch data given enough time in a real bot.
My point is that the amount of data from the switches constantly increases with how long you collect data from them. It's not limited to a few bits. If the bot drives around for years, it might extract a megabyte of useful data from the environment through those simple binary inputs.
That could easily be true.
I can't really tell what point you are making here. Put me inside a bot with two bumper switches for inputs and two wheels for outputs (and let's say the wheels each have only three states - forward, stopped, or reverse - no variable speed control). I can still drive this thing around the environment, build up detailed map information, and find out that I've been bumping into something that keeps moving while other things seem to be the walls of a square room - then later figure out that the thing that's been moving is actually another bot, with an intelligent drive. And even though they don't speak my natural language, after a few years of bumping into this other bot, we develop our own language and start to communicate using a Morse-code-like system. All of that could be done with 2 outputs with 3 states each and 2 inputs with 2 states each. There's really no limit to the complexity of behavior you can generate if you have enough of the right type of logic inside the machine. As long as you have some outputs and some inputs, you can act intelligently.
BTW, I have no idea what jBot is, so maybe this is causing some confusion on my part. Should I look it up and become familiar with it to understand your points?
The complexity of behavior will be limited to the complexity of the controller inside the machine, not to the complexity of the I/O connections.
Ok, I'm a bit lost as to what you're thinking about. Are you talking about any machine, or the jBot?
In general, you can study the inputs and outputs and try to predict the behavior. But, depending on the machine design, it might be so hard to do that it's effectively impossible. I could, for example, have a bot drive around a room in random directions where the "random" is defined by some standard PRNG. The machine could then record bumper switch hits, collecting bits of data using some technique such as recording a 0 if the left switch is hit first and a 1 if the right switch is hit first. It could then take about 1000 bits of collision data, run it through an encryption or hash function, and then use the output of that, with some algorithm, to select the "random" directions it picks to turn.
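The scheme above can be sketched in a few lines. All the names here are hypothetical (my own, not from any real bot), and SHA-256 stands in for whatever "encryption or hash function" you like: collision bits accumulate, get hashed, and the digest re-seeds the PRNG that picks the next "random" turn.

```python
# Sketch of the hard-to-reverse-engineer steering scheme: the bot's
# "random" turns depend, through a hash, on its full collision history.
import hashlib
import random

class ObfuscatedSteering:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.collision_bits = []

    def record_collision(self, left_first):
        # 0 if the left bumper switch was hit first, 1 if the right was.
        self.collision_bits.append(0 if left_first else 1)
        if len(self.collision_bits) >= 1000:
            self._rekey()

    def _rekey(self):
        # Pack the ~1000 collected bits into bytes, hash them, and use
        # the digest to re-seed the PRNG that picks turn directions.
        chunks = (self.collision_bits[j:j + 8]
                  for j in range(0, len(self.collision_bits), 8))
        packed = bytes(sum(bit << i for i, bit in enumerate(chunk))
                       for chunk in chunks)
        digest = hashlib.sha256(packed).digest()
        self.rng.seed(int.from_bytes(digest, "big"))
        self.collision_bits.clear()

    def next_turn_degrees(self):
        # "Random" heading change, drawn from the (re-keyed) PRNG.
        return self.rng.uniform(-180.0, 180.0)
```

An outside observer sees only turn angles, but predicting them means recovering both the hash input (the full collision history) and the PRNG state - which is exactly why studying the I/O alone gets you almost nowhere.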
You could spend 100 million dollars studying the behavior of a bot like that and, most likely, get almost nowhere in being able to predict its behavior, or create an equivalent machine to predict its behavior. Now maybe, if you studied it for about a million years, you could finally decode its secrets. Some amount of study would no doubt do it. But the amount of study would be huge for what is in fact a very simple bot algorithm. Some machines are just inherently very hard to reverse engineer.
Yeah, I think that's true. Kind of. But I don't really understand what you think a behavior is. I would just say the intelligence is in its outputs, period.
A common technique for programming robots is to create a predefined output sequence and call that a "behavior". It might be something like, "drive forward for 1 second". Or "turn 90deg to the right". Then the system can use some higher level logic to trigger the selection of different behaviors. This allows bots to do useful things, but it doesn't make them look very smart. Humans and animals are far more flexible than that.
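The two-level scheme described above might look like this - a minimal sketch with made-up behavior names and durations, not any particular robot framework. Low-level "behaviors" are canned wheel sequences; a higher-level selector triggers them from the bumper state.

```python
# Low-level behaviors: predefined output sequences for the two wheels.
# Each step is (left_wheel, right_wheel, seconds); wheel states are
# -1 (reverse), 0 (stopped), +1 (forward) - a three-state drive.
BEHAVIORS = {
    "drive_forward_1s": [(+1, +1, 1.0)],
    "turn_right_90deg": [(+1, -1, 0.5)],   # duration is a made-up calibration
    "back_off":         [(-1, -1, 0.7)],
}

def select_behavior(left_bumper, right_bumper):
    # High-level logic: pick a behavior based on the bumper switches.
    if left_bumper and right_bumper:
        return "back_off"
    if left_bumper or right_bumper:
        return "turn_right_90deg"
    return "drive_forward_1s"

def run(behavior_name, motor_command):
    # Play the canned sequence through a motor-command callback.
    for left, right, seconds in BEHAVIORS[behavior_name]:
        motor_command(left, right, seconds)
```

The point of the paragraph below still stands: this is just two layers of output control, and the "intelligence" lives in how the outputs are controlled at every layer, not only in the selector.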
To me, that is just a two-level system for the control of the outputs (the behavior generators at the low level, and the behavior selectors at the high level). In the end, the intelligence is in how the bot controls the outputs, period - at all levels - not just in the high-level behavior selector.