Can MindForth AI feel emotions?

From the rewrite-in-progress of the AI User Manual


1.5 Can MindForth feel emotions?

When a robot is in love, it needs to feel a physiological response to its internal state of mind. Regardless of what causes the love, the robot will not experience what the ancient Greeks called "damenta phrenas himero" (tamed in the heart by longing) unless some bodily manifestation of the longings of love interrupts the otherwise placid state of the robot mind and draws the conscious attention of the robot to its emotion. The affect could be as simple as the emission of a sound like "thump-thump" or "tick-tock" from a robot loudspeaker feeding back into a sensory microphone, so that the robot both generates and perceives the physiological disruption of its previous placidity.

Makers of robots could program their nuts-and-volts creations to commence the loudspeaker "thump-thump" behavior for a brief period immediately after the bot recognizes the presence of a human. This automatic reaction might simply mystify the robot, who would wonder why it reacts so dramatically to the perceived presence of its human friend. Given the beat of the thump-thump sound, and given its perception by the robot, which emotion is felt is not a given; it hinges rather on the cognitive predisposition of the robot mind to feel any one of a range of possible emotions.
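As a rough illustration of that output-input pairing, here is a minimal sketch in Python rather than Forth, with stand-in stubs for the robot's real camera, loudspeaker, and microphone drivers; none of these names come from MindForth itself. The loudspeaker thumps briefly on first recognizing a human, and the microphone feeds the same sound back into the robot's stream of percepts:

    import time

    START = time.monotonic()
    last_sound = ""          # what the loudspeaker most recently emitted
    detected_at = None       # when the human was first recognized
    percepts = []            # the robot's stream of incoming sensations

    def human_detected() -> bool:
        """Stub sensor: pretend a human walks into view after 1 second."""
        return time.monotonic() - START > 1.0

    def read_microphone() -> str:
        """Stub mic: in this closed loop it simply hears the loudspeaker."""
        return last_sound

    for _ in range(30):                      # main sense-think loop
        now = time.monotonic()
        if human_detected() and detected_at is None:
            detected_at = now                # first recognition of the human
        # Involuntary bodily response: thump-thump for a brief period only.
        if detected_at is not None and now - detected_at < 0.5:
            last_sound = "thump-thump"
            print("[speaker]", last_sound)
        else:
            last_sound = ""
        heard = read_microphone()
        if heard:
            # The robot perceives its own physiological disruption.
            percepts.append((round(now - START, 2), heard))
        time.sleep(0.1)

    print("perceived disruptions:", percepts)

The point of the closed loop is that the disruption arrives through the same sensory channel as everything else, so the robot must notice and account for it like any other percept.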

The amateur roboticist who wants to inculcate emotions in a forthmindful robot must match the physiological manifestation of each emotion with an adequate sensory perception of the physiological event. Here in the first True AI User Manual, let us initiate and henceforth maintain the following roster of possible emotions in robots and their physiological concomitants (a code sketch of the roster follows the list).

  • love -- felt as a thump-thump of the virtual heart
  • anger -- felt as the flashing of a red warning light
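For robogeeks who prefer code to prose, the roster could be held as a simple two-way mapping. This is only a sketch in Python, not actual MindForth code; the two pairings are taken from the list above, and the names EMOTION_TO_PERCEPT and PERCEPT_TO_EMOTION are invented for illustration:

    # Roster of emotions and their physiological concomitants,
    # copied from the list above.
    EMOTION_TO_PERCEPT = {
        "love": "thump-thump of the virtual heart",
        "anger": "flashing of a red warning light",
    }

    # Inverted mapping: given a perceived bodily event, which
    # emotion is being manifested?
    PERCEPT_TO_EMOTION = {v: k for k, v in EMOTION_TO_PERCEPT.items()}

    print(PERCEPT_TO_EMOTION["flashing of a red warning light"])  # anger

Keeping the mapping invertible is the design point: the affect is chosen by the robot-maker, but the percept must be distinctive enough for the robot to look it up unambiguously.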

We may add to the list as clever robogeeks invent and demonstrate robust pairings of affect and percept for each emotion. On the other hand, robot-makers could endow their robots with the output-input pairings and let the robots sort out for themselves which emotion is called into sharp focus by each physical manifestation. The one group of people whom we do not want calling the robot-emotion shots is the film directors and movie-makers. A massive, fiery explosion is not a proper evincing of anger or excitement in a robot tasked with vacuuming your carpet.

The theory behind our plan for robot emotions is that, once there is a cognitive spark that could engender an emotion, such as a sudden and drastic cognitive predicament, the robot needs the involuntary bodily response, and the sensation thereof, to sharpen and focus its attention upon the emotional feeling. Without the physiological jolt and the perception of it that bends the chain of thought, the intelligent robot has no cause to feel the target emotion. There must be a discontinuity in the thought-stream, or there can be no emotion. Even if the robot is only thinking about an emotion, there needs to be at least a memory of the actively felt physiological event.
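To make the theory concrete, here is a toy model in Python, again a sketch under stated assumptions rather than MindForth source; RobotMind and its think method are hypothetical names. An emotion registers only when a perceived bodily event interrupts the thought-stream; a placid thought by itself engenders nothing:

    from dataclasses import dataclass, field

    # Percept -> emotion pairings, as in the roster above.
    PERCEPT_TO_EMOTION = {
        "thump-thump of the virtual heart": "love",
        "flashing of a red warning light": "anger",
    }

    @dataclass
    class RobotMind:
        """Toy model: an emotion is felt only when a perceived bodily
        event interrupts the otherwise placid chain of thought."""
        thoughts: list = field(default_factory=list)
        felt_emotions: list = field(default_factory=list)

        def think(self, idea, bodily_event=None):
            if bodily_event is not None:
                # Discontinuity: the physiological jolt bends the chain
                # of thought and focuses attention on the feeling.
                self.felt_emotions.append(
                    PERCEPT_TO_EMOTION.get(bodily_event, "unknown"))
            self.thoughts.append(idea)

    mind = RobotMind()
    mind.think("the carpet needs vacuuming")   # placid thought: no emotion
    mind.think("a human is here!",
               bodily_event="thump-thump of the virtual heart")
    print(mind.felt_emotions)                  # -> ['love']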

Early, disembodied versions of Mind.Forth obviously cannot feel an emotion if they lack a body to smack the consciousness with the emotional ictus and to perceive what it feels like, but MindForth holds out the promise of robot emotions to pioneers in robot evolution who will incorporate MindForth. Try to have some interesting emotional displays that will cater but not pander to the insatiable lust of movie-makers for Götterdämmerung-gone-wild special effects and Godzillas.

Arthur


Reply to mentifex


The next step, clearly, would be reductionist robots that on reading their own source code would conclude, "Well, duh, it's just because I was programmed that way."

Reply to Axel Harvey

And then in their teenage years they would exclaim, "I didn't ask to be programmed!"

Reply to Deep Reset

This has probably been mentioned before, but wouldn't it be of more use to simulate the responses a human would expect another human to display than to lumber a robot with existential confusion in the first instance?

In terms of theories of mind, it also seems that, since we can hardly be relied on not to treat others as objects, more would be forthcoming from a mind free of the various affects of emotion when it comes to working out where logic starts and stops in each of us. That boundary differs to a greater or lesser extent from person to person, even when people actively seek to think and act like a pack with shared, immutable values and instinctive idiosyncratic responses, so a robot would learn more, and more quickly, from observing the differences between itself and us.

After all, until we know where we start and end, and where, for example, the fact that we haven't seen sunlight and made our own vitamin D has an affective edge, or why we miss our home by the seaside when in a low-salt environment, can we really hope to program a robot, by way of a servo-motor rumble circuit, to feel anything other than mildly navigationally challenged?

After all, to cite a probably meaningless cliche, all Star Trek TNG's "Mr Data" was ever originally meant to do was showcase speed-reading techniques, IIUC.

And I do not believe that, in such a metaphor-free place as Starfleet eventually became, post-Scotty, the crew would have been so habituated to a beta-phase prototype as to engage quite so collusively in the fantastic anthropomorphics they did.

Not a planned first post. I came across this page on MSNBC today and ended up wondering who Hillary Clinton is.

Is this some kind of new doorway into the rumoured postal voting scams US cynics are bracing against?

G DAEB COPYRIGHT (C) 2008 SIPSTON

--

Reply to FCS
