Mon.30.AUG.2010 -- On the Shoulders of Giants?
The "prsn" variable in MindForth artificial intelligence
(AI) enables the AI to think thoughts in the first,
second or third person with English verbs in the
present tense. MindForth is different from
most natural language processing (NLP) software
because previous NLP software may be intricately
crafted for the generation of grammatically correct
English sentences but not for the thinking necessary
to drive the NLP mechanisms. Because MindForth has a
conceptual core that actually thinks, it may be
"reinventing the wheel" by tacking on NLP routines
that have already been invented elsewhere, unbeknownst
to the Mentifex (mindmaker) originator of MindForth;
but MindForth remains the original invention of
an artificial mind that needs its own special forms
of NLP software. Other advanced NLP software may
translate ideas from one natural language to another,
but MindForth is ideation software that thinks up
its own ideas, thank you, and becomes more skillful
at thinking co-extensively with the growing
sophistication of its NLP generativity. We are met
today on a mindgrid of that generativity, and we must
generate AI Mind code for self-referential thinking
in English. MindForth is like an AI rodent that
scurries about while giant NLP dinosaurs tower overhead.
Mon.30.AUG.2010 -- VerbPhrase Orchestrates Inflection
Our current code is abandoning the stopgap
measure of using the SpeechAct module
to add an inflectional "S" to regular verbs
in the third person singular. The control of
verb inflections is now shifting into the
VerbPhrase module where it belongs. We will try
to use an old "inflex1" variable from the
20may09A.F version of MindForth to carry each
phonemic character of an inflectional ending
(such as "S" or "ING") from the VerbPhrase
module into the SpeechAct module. An old
MindForth Programming Journal (MFPJ) entry
describes the original usage of "inflex1" to
carry an "S" ending into SpeechAct. Now we
would like to expand the usage so that
"inflex1" and "inflex2" and "inflex3" may
carry all three characters of an "ING" ending
into SpeechAct. First we rename all (three)
instances of "inflex1" as simply "inflex"
so that we may confirm our notion that
"inflex1" was not yet affecting program-flow,
before we re-introduce "inflex1" as a variable
that does indeed influence program-flow.
We run the AI code, and nothing seems amiss.
Then we rename our instances of the temporary
"inflec1" from yesterday (29aug2010) as the
henceforth genuine "inflex1" to make sure that
we still have the functionality from yesterday.
Again we run the code, and all is well. Now we
need to clean up the test routines from
yesterday and smooth out glitches such as
the tendency to tack on an extra "S" each time
that a verb is used in the third person singular.
We still have the variable "lastpho" from the
24may09A.F AI, for avoiding an extra "S" on verbs.
That variable is continually being set in the
SpeechAct module. First in VerbPhrase we use a
test message to report to us what values are
flowing through the "lastpho" variable. Then
in VerbPhrase we make the setting of "inflex1"
to ASCII 83 "S" dependent upon the "lastpho"
not being "S", but the method initially does
not work. We suspect that the "lastpho" value
is being set too early at almost the beginning
of the SpeechAct module.
When VerbPhrase sends an inflectional "S" via inflex1
into SpeechAct, all the conditionality about person,
number, gender, etc., should be kept in VerbPhrase
and should no longer play a role in SpeechAct.
SpeechAct as code should not care why it is being
asked to add an "S" or an "ING" onto a word being
spoken. Therefore much of the conditional code in
SpeechAct after the detection of an intended
"32" space should be removed, and SpeechAct should
simply speak the inflection.
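The division of labor described above, with VerbPhrase deciding the inflection and SpeechAct blindly speaking it, can be sketched in Python. This is an illustrative analogue only, not the actual MindForth Forth code; the names verb_phrase, speech_act, and inflex are borrowed from the text.

```python
# Illustrative Python analogue of the VerbPhrase/SpeechAct split,
# not the actual MindForth Forth code.

def speech_act(stem, inflex):
    """Speak the stem plus whatever inflection it was handed,
    without asking why the inflection is there."""
    return stem + inflex

def verb_phrase(stem, person, number):
    """All conditionality about person and number lives here."""
    inflex = ""
    # Third person singular takes -S, unless the stem already ends
    # in S (the "lastpho" guard against tacking on an extra "S").
    if person == 3 and number == "singular" and not stem.endswith("S"):
        inflex = "S"
    return speech_act(stem, inflex)

print(verb_phrase("THINK", 3, "singular"))  # THINKS
print(verb_phrase("THINK", 1, "singular"))  # THINK
```

The point of the split is that speech_act never tests person or number; it just speaks what it is handed.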
I agree, MindForth appears IMHO to be "much ado about nothing".
I hope one day to be able to announce something slightly more
significant, but until then will remain silent.
In the meantime, I commend everyone to listen to those such as, for
example, Ian Parker, and to pay no mind to Mentifex; he means well, but
carries no import.
There is far more to be learned from Princeton WordNet than from arguing.
On 10/10/2010 5:49 PM, Mark Conrad wrote:
I am all for A I research that eventually leads somewhere, but I think
that chasing grammar rules around in circles is a waste of time.
Now we all know that raising a human in an intellectual vacuum
leads to a dumb human not good for much of anything.
We also know that raising a human in a rich educational environment
will often lead to a very productive "thinking" human who has great
value to his fellow man.
So if someone builds an artificial brain by whatever means, then
"put it to the test" by educating it.
If the (blank) artificial brain is functional, then it should be able to
"soak up" the education, and eventually become productive like
its human biological counterpart.
In other words, researchers might be well advised to try to find out
exactly "how" a human "grabs hold of" his very first
bits of information.
But then again, that might lead to yet another dead end.
Hmm, that raises another question.
Would any artificial brain be capable of cognition if it did not
have its handy-dandy accessory body ?
... or at least eyes, ears, fingers, nose, etc.
I will have to check my back videos of Star Trek ;-)
... and yet another question, are emotions necessary for any
artificial brain, or not ?
As an artificial entity myself, I need to have answers to these
questions, because my artificial leader will get mad at me
if I return to my distant galaxy without these answers.
An artificial mind is a terrible thing to waste.
Ideally, we will feed our constructed (AI) brains on a diet of news
articles, e.g. "33 miners rescued today in Chile".
Of course that's a bit tricky, so we start with the kindergarten stuff,
"apples are a kind of fruit"
"people can eat fruit"
"apples grow on trees"
"trees need water and sunlight"
"cats eat mice"
"cats are mammals"
etc, and dream of the day when they can autonomously read AAP or CNN
feeds to learn of the world around them.
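The "kindergarten" facts listed above could be stored as subject-relation-object triples and queried with simple pattern matching. This is a hypothetical toy sketch; a real knowledge base such as Cyc or WordNet is far richer.

```python
# A toy triple store for the "kindergarten" facts listed above.
# Hypothetical sketch only; real knowledge bases need far richer structure.

facts = [
    ("apples", "kind_of", "fruit"),
    ("people", "can_eat", "fruit"),
    ("apples", "grow_on", "trees"),
    ("trees", "need", "water"),
    ("trees", "need", "sunlight"),
    ("cats", "eat", "mice"),
    ("cats", "kind_of", "mammals"),
]

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the fields that are not None."""
    return [f for f in facts
            if (subject is None or f[0] == subject)
            and (relation is None or f[1] == relation)
            and (obj is None or f[2] == obj)]

print(query(subject="trees", relation="need"))
# [('trees', 'need', 'water'), ('trees', 'need', 'sunlight')]
```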
Mentifex seems to be still hoping the lightning will strike the
lightning rod by some miracle.
On 12/10/2010 6:29 PM, Mark Conrad wrote:
Do you think so? You're wrong.
"cats eat mice" is very hard to understand.
It is a generalisation of facts.
There is a whole cultural context around it. It is repeated by
people who have never actually seen a mouse, let alone one being
eaten by a cat, and if so, probably not for real but on YouTube
or in a comic book.
I once saw a cat chase a mouse. I have never seen
a mouse eaten. A generalisation is one step beyond that, plus it
requires an aptitude for generalisation.
That is a cat
That is a mouse
That cat is doing something with that mouse! It eats it!
Now that is what a child could understand. And it would be...
Oh wait! There is a whole underlying world of pattern recognition
and the recognition of objects, before a brain understands
"that is a cat". So there. Arthur is even further off than you
imagine. 9 months in the womb and two years of training before
"that is a cat" makes sense.
Some children have never seen an elephant. All children recognize
a picture of an elephant. I have never seen a person shot,
for real. I have seen countless people shot on tv.
Culture is a strange thing.
He has no clue.
<sensible remarks snipped>
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
I agree with your analysis, good points,
but in this specific instantiation,
my cat certainly does eat mice, he hunts them down & eats them.
Of course most cats eat cat food, which is usually made from offal that
doesn't fetch a good price at the butcher's.
While you're right about "cats eat mice" being a generalization of
knowledge, I can most certainly confirm that *some* cats do indeed eat
mice. In particular, both our cat and her frighteningly muscular pal
from next door enjoy not just chasing various neighborhood
rodentia (mice, chipmunks, small/slow squirrels and rabbits), but
capturing them with their claws, sinking their teeth into them, and
occasionally feasting on a chunk of the critter. The neighbor's cat
goes for extra gruesome points by beheading chipmunks as he's playing
with the carcass. The transition between these cats going from sweet
balls of playfully adorable cuteness to blood-thirsty psychopaths is
alarming. And it's part of the reason why, when Frisky wakes us up
every morning by jumping on the bed and purring in our ears, we get...
Photographic evidence of a recent chipmunk kill is on my Facebook
page. That particular chipmunk wasn't eaten; this is one that Frisky
held dangling from her mouth and dropped in front of us, as a gift.
Awww, isn't that cute?
What Arthur is really doing is presenting a model for some kind of
trivial artificial intelligence that he at some level knows is
somewhere between Racter and Eliza. But that doesn't matter, because
his breathless reports of success aren't really to claim any
breakthrough. They are there to convince others to take the baton and
run with the idea, and hopefully in that process come up with
something useful. He's selling a meme, and what he ultimately hopes
is that when someone creates something that works, he'll get some
credit. Think of it as a kind of artificial intelligence "Stone Soup".
I've never seen a cat chase or eat a mouse. But I have stepped on the guts
left behind when my cat ate what I believe was a mouse in our house. :)
Yeah, "cats eat mice" is easy to understand if you talk about what the
words indicate alone. But once you bring in the complex meaning of what
cats are and mice are, and the meaning of this relationship "eat", it
becomes highly complex and no computer system I've seen manages to
represent the true complexity of the meaning of language. It's highly
context sensitive and the context is NOT just the words on the paper. It's
the context of who wrote them, and where they were written, and who is
reading it, and a thousand other points that end up creating the meaning we
are responding to when we read, or write, words such as that.
All human behavior is highly context sensitive. How we respond to
anything, is a complex process created by the large and complex neural
networks in our brain. It creates a very complex mapping from sensory
context, to behavior, and it's never simple. Even for facts that are well
defined, such how we respond when someone asks us what 1+1 is, we don't
always say two. Sometimes we say "why the fuck are you asking me that?"
Sometimes we say 4 to make some point, or to try and irritate the guy that
Meaning is hidden in the dependencies coded into the networks that determine
how we react to various sensory stimulation. Until we can build systems
that represent the same levels of complex context-sensitive behaviors, we
will not have stored any sort of "true" human meaning into our machines.
That's my long running argument. Intelligence should not be thought of as
the skills we have, but instead, it should be understood as our ability to
learn skills. Most of the skills we have today come from teachers. So some
people get the mistaken idea that humans only learn by being shown the
answer by someone else. We certainly learn much faster when someone can show
us the answer. But humans don't need to be shown answers. They find them on
their own. All knowledge that exists in our culture, was created by some
human in the past, without them being taught it.
So a key here is not just "soaking up" knowledge, but the actual creation
of it to begin with. If we built an AI that could soak up knowledge, but
not create it on its own, it wouldn't be intelligent.
Reinforcement learning is the answer to both how it soaks up knowledge and
how it creates knowledge on its own. It is already well understood
for low-dimension problems (i.e., trivially small state spaces). The hard
part is finding workable approaches to apply it to high-dimension problems.
Once someone finds a good workable algorithm for reinforcement learning in
high-dimension sensory spaces, AI will basically be solved. It would be
the AI technological equivalent of the Wright brothers' plane.
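The well-understood low-dimension case mentioned above can be shown with tabular Q-learning on a trivially small state space. A minimal sketch, with an invented five-state corridor task and invented learning parameters:

```python
# Tabular Q-learning on a tiny corridor: states 0..4, reward at state 4.
# This is the well-understood low-dimension case; scaling it to
# high-dimension sensory spaces is the open problem discussed above.
import random

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}

random.seed(0)
for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection over the two moves.
        if random.random() < EPSILON:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)   # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, -1)], Q[(s2, +1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every state.
policy = [max((-1, +1), key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

With only ten state-action pairs the table is exhaustively learnable; it is exactly this tabular approach that fails to scale to the high-dimension sensory spaces the post describes.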
It needs to interact with an environment. But it could for example be
limited to interacting through a network connection to other computers on
the internet instead of using eyes and fingers. That sort of robot would
have no direct awareness of the 3D world we live in, which would make it
very hard for it to understand the 3D things we are talking about. But
even so, a blind person has no problem learning to talk _as_ _if_ they
could see by mimicking the words of others. For example, the song "Angel
Eyes" is full of references to sight, such as the first line, which goes
"Girl, you're looking fine tonight", yet Jeff Healey, who sings it, is
blind. On the surface it's a song about his love of a girl, but under the
surface, it can be seen as his longing to see. Though he can't write such
a line with a direct understanding of what it's like to "look fine" he can
write it and understand the indirect meaning of something to the effect of
"you are very attractive to me tonight" i.e., "I'm aware that I'm very
attracted to you tonight".
An AI that was blind and deaf and without tactile sensation should be able
to learn to understand the meaning of much of what a human writes even
though the only sensation the AI had was of a network connection to the
internet. It would be difficult for it however and we would expect it to
at times make mistakes in its thinking and word usage that would show its
lack of direct experience with those other senses.
It would certainly make it a lot easier for it to understand human culture
if it had a body more similar to ours and a set of sensors that roughly
matched ours.
It wouldn't need those to be "intelligent" however.
All the answers are there. :)
I think so. I think basic emotions are not something you have to add to the
AI as an additional module on top of its "intelligence". I think it's just how
we would expect any reinforcement learning machine to act. I don't believe
it's possible to create true intelligence without emotions.
The sci-fi theme of robots being emotionless and highly rational is fallout
from the idea that computers are normally programmed and designed by
humans using a highly rational thought process. We typically program them
by building our own rational thought process into our computers. But when
we do that, they never seem to be very intelligent - mostly because they
don't duplicate our learning.
AI projects such as Cyc try to approach the problem of intelligence by
building a language-level data store which is typically very rational.
That is, they store facts such as "birds are animals". This set-theory
type of absolute fact becomes the foundation of what a system like Cyc can
learn. It's a binary yes or no, true or false, fact.
Human learning and human intelligence are not about such absolutes, however.
It's all based on probabilities. Though we can express absolute facts with
our language, most of our talk is not about such absolutes. If I say the sky
is red, I'm not making an absolute statement about the frequency spectrum
of the light coming from the sky. I'm instead expressing some approximate
classification of the type of sky I'm seeing. If I say I'm going out to
get something to eat, it really means more like, "I'm leaving and I'm
intending to get some food, but maybe I'll get distracted and do something
else instead". Though the sentence "I'm going out to get something to eat"
can be interpreted in an absolute sense, that's seldom, if ever, how we
actually mean it in everyday talk.
AI approaches like Cyc, which are based on absolute language facts, aren't
able to store, or generate, or respond to, language like we do, which is
based totally on complex contextual probabilities. And they come across
as not having "emotions" as a result.
I think when you build a goal driven behavior learning machine based on
statistical probabilities, the behavior that emerges from it will be what
we call emotional, and intelligent, at the same time.
We can use our rational thought process to program such a machine, but what
we program into it is not our rational thought process, but a rational
process that produces probabilistic goal-driven behavior. And such
behavior, I claim, is our emotional behavior.
Reinforcement learning machines are goal seeking machines. Their behavior
repertoire is shaped so as to maximize the odds of getting more future
rewards. They naturally learn to recognize the objects in the environment
that are good predictors of rewards, and they seek them out. They learn to
hoard them when possible. They learn to love them. They learn which
objects are natural predictors of low rewards, and they learn to avoid
them, and escape from them. They learn to fear them. Love and fear are
just learned behaviors associated with goal seeking. All the emotions
fall out of this.
If you build a probabilistic based goal seeking machine, it will exhibit
behaviors we call emotions. If you don't build that type of machine, what
you end up with doesn't look intelligent.
I think emotions and intelligence go hand in hand, and the idea of having a
highly rational, emotionless, intelligent machine like Data on STTNG is just
fiction. It can't be done. If it's truly intelligent, it will have fears
and desires even if it tries hard to hide them.
Well, I've told you what I believe the answers are. You can take that to
your leader. :)
I believe that creating any computer software that can handle
really complex thinking/learning/creating on a par with the
human brain is not possible at the present time.
That might change in the (distant?) future.
Do you believe that the researchers who are trying to mimic
the way the brain works; essentially duplicating the _exact_
way the brain works, only in silicon - - - do you think they
will have any measure of success?
In other words, is it even possible to mimic the way the
brain actually works, in a reasonably sized computer?
I think they have been able to mimic very simple "brains"
of very small insects, but have not had much success with
anything more complex.
I believe those projects are all really focused on performing the
simulations just to gain understanding of how these complex networks
function. I don't think they expect to get all the way to full human
behavior out of it any time soon (though I'm sure they would like to hope
such might happen at some point).
I don't pay much attention to those projects since my interest is really
more in the computer science fields of machine learning, but I think the
real issue is these researchers don't yet have a clue how the brain
organizes itself. They can't simulate self organization if they have no
clue how the brain is doing it.
The little I've seen indicates to me they are skirting the issue by
taking some random measurements from a real brain (maybe not human?) and
then running some type of search or optimization process on setting the
weights and connections of their simulation, in an attempt to make their
simulation duplicate the characteristics of the brain activity they
measured.
The problem is that though they can test a single neuron, and attempt to
develop a computer model of how it functions, they don't have the
technology to scan and record the actual wiring of a large section of a
brain. They have to guess at it. And they are using these reverse
engineering techniques to try and guess at the wiring by making the
simulation duplicate, at some level, some of the same high-level brain
activity.
They have had success to that extent, as far as I understand, but I don't
know if they have any real clue how the brain wires itself, or how that
wiring changes over time.
And that is the foundation of how the brain learns. And if you don't
figure out how to make these large networks learn, you haven't, in my book,
even touched on the real foundation of human intelligence.
So I don't think they are very close yet - not to mention, those cells are
highly complex and they only simulate the biological processes which they
believe are important to the high-level functioning. They might be missing
some processes that are highly important, while at the same time be
wasting huge amounts of CPU time simulating other aspects which aren't
important.
We can build simulators to duplicate the complexity of electron flow in
transistors, and use those simulators to simulate a multi-transistor AND
gate, and use more of those to simulate how a 32-bit full adder works, but
we would have wasted huge amounts of CPU time simulating stuff we never
needed to simulate if all we wanted to do was add two 32-bit numbers
together. I'm
sure the brain simulators are doing the same thing. That is, wasting huge
amounts of CPU simulating a biological process that can instead be
re-implemented with many orders of magnitude less CPU time if they just
understood the correct basic principles at work in the brain to make these
networks self organize.
The simulator approach will help us better understand the brain for sure.
But it might not be the approach that first helps us uncover the principles
we need to understand. Or it might.
No one knows. I suspect it is, but it's only speculation. Most of the people
that seem to suggest it can't be done (as you seem to be suggesting) seem,
in my view, to be biased in believing that humans are "magical" and as
such, must be too hard for any machine to duplicate. I think we all have
highly over-inflated egos when it comes to believing how special and
powerful we really are. I think society encourages us to believe that.
When you look at the facts of what humans can do, we aren't so special.
Right. I think that's true. But I don't think it's as much a problem of
not having the CPU resources, as it is not having the basic knowledge of
how the brain works period. You can't build a valid simulator if you don't
know what you are simulating, and that's about where we are at this point.
The idea of the simulators is to help us understand what we don't yet
know, and I think they are doing their job and helping us make progress in
that direction.
Yes. Many animals exhibit such intelligent behaviour, too.
They learn and teach skills. The small detail that makes
humans somewhat different is that we've developed the skill
of storing such knowledge outside our own bodies. I don't
think any other animal has done that.
Even bees however do communicate knowledge; so it's not a
problem of the lack of external representation of knowledge,
just that the representation isn't persistent, generic and
teachable, in the way that writing is.
I think emotions (as I understand them) exist in all mammals.
Perhaps not in reptiles however, even though those are capable
of learning (maybe not teaching though?). We had a rabbit which
lost an entire litter (9 pups) one cold winter night. For two
weeks, she was a *sad rabbit*. It was plain to see, even for
our children. I don't think the level of this response was
different from ours, in proportion to the relative intelligence
and level of investment in producing offspring.
I see emotions as being "activation coefficients" for certain
types of behaviour. When I'm riled up, I'm more likely to
exhibit angry behaviour - I need less of the stimulus that
would normally produce such behaviour. Same with the other emotions.
It's possible we only have names for a dozen or two such
activation coefficients, and I'd be willing to bet that the
same set can be shown to exist in *all mammals* regardless
of their relative intelligence levels.
Consequently I don't associate such emotions with intelligence.
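The "activation coefficient" idea above, that an emotion lowers the amount of stimulus needed to trigger a behaviour, can be sketched as a simple threshold model. The threshold and coefficient values here are invented for the illustration:

```python
# Sketch of "emotions as activation coefficients": an emotion scales up
# the effective strength of a stimulus, so less stimulus is needed to
# trigger the associated behaviour. Values are invented for the example.

THRESHOLD = 1.0  # stimulus level needed to trigger a behaviour

def triggers(stimulus_strength, activation_coefficient):
    """A behaviour fires when the amplified stimulus crosses the threshold."""
    return stimulus_strength * activation_coefficient >= THRESHOLD

calm, riled_up = 1.0, 2.5        # activation coefficients for "anger"
provocation = 0.5                # a mild provocation

print(triggers(provocation, calm))      # False: not enough stimulus
print(triggers(provocation, riled_up))  # True: same stimulus now suffices
```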
Right. And I'd like to add, human language is the same.
Interestingly, in programming language compilers, it's possible
to use an LR parser to generate code from an expression tree,
where each possible emitted code fragment is associated with a
cost, and a manipulation of the expression tree. A so-called
BURG (Bottom Up Rewriting Grammar) is such an LR parsing engine
that seeks out the optimum set of production rules, by exploring
the possibilities to minimise the cost of reducing the expression
tree to one node, in parallel (for some definition of parallel).
I mention that because I see that human language can be broken
down in the same way. Any "production rule" which is related to
items in the current context becomes "lower cost", i.e. more
likely to be related. As rules are tentatively matched, those
rules themselves contribute to the context and hence to the
probabilities of other rules matching. Using this approach, a
probabilistic parsing engine could be built that would understand
language in the same kind of way that I think we do.
The tricky bit is implementing it efficiently without massively
parallel hardware. Even our own brains use a Darwinian voting
system to pursue more likely paths and demote less likely ones
though, so that kind of heuristic search might be appropriate.
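The cost-based matching described above, where context discounts related production rules and matched rules feed their own concepts back into the context, might be sketched like this. The rules, concepts, and discount value are invented for the example; a real BURG-style engine explores reductions in parallel rather than greedily:

```python
# Sketch of context-weighted rule costs: each production rule has a base
# cost; concepts already in the context discount related rules, and a
# matched rule feeds its own concepts back into the context.
# Rule names and costs are invented for the illustration.

rules = {
    # rule: (base_cost, concepts it relates to)
    "bank_river": (1.0, {"water", "shore"}),
    "bank_money": (1.0, {"finance", "account"}),
}
DISCOUNT = 0.4  # cost reduction per concept shared with the context

def cost(rule, context):
    base, concepts = rules[rule]
    return base - DISCOUNT * len(concepts & context)

def interpret(candidate_rules, context):
    """Pick the cheapest rule, then add its concepts to the context."""
    best = min(candidate_rules, key=lambda r: cost(r, context))
    context |= rules[best][1]
    return best

context = {"water"}  # we have been talking about rivers
print(interpret(["bank_river", "bank_money"], context))  # bank_river
print(sorted(context))  # ['shore', 'water']
```

The feedback step is the interesting part: choosing "bank_river" pulls "shore" into the context, which would in turn discount later rules about beaches or boats.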
I don't think it's about emotion. It's about how context modifies
the weights on possible productions. Human emotions contribute to
those weights, but they certainly aren't dominant over other kinds
of context.
By context, I don't just mean the cultural values which are
expressed in the weights (costs) of the possible productions.
I mainly mean the weights attached to concepts due to them
having been discussed recently, or relevant in the relationship
between the speaker and the listener, or related to emotional
state (activation coefficients).
You seem to be defining emotional behaviour as unpredictability.
I think that's wrong. Just because I can't know your emotional
state (the activation levels on your various emotional variables),
doesn't mean the behaviour is in principle unpredictable. It's
just that I don't know what I need to be able to predict it.
Apart from that, we do have an element of unpredictability...
but I don't think that's related to emotion. It's what gives us
our creativity - we won't always use the previously-defined
optimum interpretation and response.
Right. There is at least one goal which appears to be more strongly
hard-wired into humans than into other animals, and which perhaps is
what makes us more intelligent than other animals. It's the desire to
be able to predict outcomes. This lies at the core of our thirst for
knowledge, but also of the reasons we dream, and in our appreciation
of humour and music, etc. Guess what? It's a very simple Darwinian
driver. Being able to predict is an adaptive trait.
I disagree. Unpredictability != emotion.
It's true that an intelligent machine would be unpredictable,
but not because it shows behaviour that I'd describe as emotional.
It might be just disagreement on the semantics of the word, but
I think you'd find a lot of folk side with me.
To me, "activation coefficient" is nothing more than another name for
context.
I guess however you see activation coefficient as some sort of internal
context that is not all that related to external context? Maybe?
I look at all our behavior, including our language behavior, as being
highly context sensitive. What we do at any moment in time, is a function
of the entire sensory context we are acting in. That context is defined by
some amount of recent past sensory events - what the brain experienced
over the past few seconds, minutes or hours, with a stronger weighting on
more recent past events.
If someone asks us what is 1+1, how we answer is a function of the context
the question is asked in. Not just what else might have been said before
that, but our entire sensory context. If the context includes the
information that lets us know we are at home sitting next to an old friend,
the way we respond to that context might be to say, "what, are you trying
to be funny?" If the context is in public, and the question came from a
5-year-old who is a stranger to us, our response to that context will be
different.
All the behavior we produce is a complex function of the current context
defined by our sensory environment. When and how we walk, and move our
arms, and move our tongue and lips, is all a complex function of the full
context we are responding to.
If you say emotional response is some sort of activation coefficient, then
so is every other element of our context. The fact I'm talking to a 5 year
old is an activation coefficient that biases all my behavior when that kid
is part of my context.
People often talk about emotions as being an internal state of mind. Such
as your example of being angry or "worked up". But odds are, you are
acting that way because of some element of your sensory context. For
example, you walk out to your car, and find someone put a big dent in the
door, and ran away without admitting what they did. It makes you angry and
will bias your behavior for some extended period.
You can say that "angry" is an "internal emotional state" that biases our
behavior as an activation coefficient, or we can more simply just say the
"car dent" is part of the recent past context that biases our behavior,
along with everything that recently happened.
I prefer the latter. All our behavior is context sensitive and what we
call our "emotional state" can mostly be explained simply as how we have
learned to act, in response to an event like the car dent. So sure, it's
fine to say the car dent activated a state of mind which serves as an
activation coefficient for our behavior, but it's the same thing as saying
our behavior is biased by the large and complex context we exist in at any
moment in time.
The minor difference with the classes of behaviors we call "emotional" is
that they are often tied to clear innate physiological effects as well -
such as those triggered by adrenaline: heart rate, and sensitivity to
pain. Those are obviously unique internal states of the body that are
separate from our pure set of learned reactions to our context. But other
than those innate behavior modifiers, most of our simple emotional
reactions, like how we respond to finding someone had dented our car, I
believe are best seen just as obvious learned behaviors in response to our
context.
There might be some sort of internal physiological classification of
behavior-modifying systems, such as the changes adrenaline causes in us, that
can be mapped back to some small set of a dozen or so contextual
parameters. However, I don't believe most emotional behavior can actually
be so precisely tied to specific internal mechanisms. I believe it's
mostly just learned behaviors of a reward seeking machine. All the basic
emotions like love and hate and anger and fear and sadness can all be
explained as just learned behavior and not something which has to be
explained by innate hardware in us that will bias our behavior.
If there is another person or object that we believe will make a better
future for us if we are around it, or can control it, then we will
naturally act to covet that object - to gain control over it, to protect it
from others. All those behaviors can be seen as learned behavior which we
just happen to group into a large classification we call love (or when the
drive in us to do these things is less strong, we use other words like
desire, or want, or like).
We can sense that we "love" something, and we talk about it as being some
internal state of mind. It certainly is an internal state of mind, but
it's also simply our actions. How do we know we love something? Because
we observe our own actions. We are aware of our own actions. If I see a
girl, and suddenly develop a desire to run away to prevent them from seeing
me, we don't call that type of behavior love. But if I see a girl, and
immediately want to go over and be with her, and I find that I can't
concentrate on what I was trying to do, because my mind keeps jumping back
to trying to find a way to go be with her, then we recognize this behavior
in ourselves as being love. We describe those and a wide range of other
similar behaviors as "being in love".
But none of it needs to be seen as some special type of activation
coefficient. It can all be seen as just learned behaviors that are our
response to our current context. The girl becomes part of the context, and
that causes our behavior to change. We are able to sense how we are acting
(both acting in the mind and externally) and that becomes part of our
context as well, which causes us to say "I love her".
Everything we can sense, is an activation coefficient, not just our
"emotional states". Everything we can sense, is part of the context that
the brain uses to select our behavior at any instant in time.
Consequently, I see "emotions" as nothing more than "emotional behavior"
and I see it as much a part of our intelligent behavior as all the other
behavior we produce in response to whatever our current context is.
Yeah, the context that determines how we act is huge. Millions or billions
of bits of context state. That state is translated into a decision about
how to act, by some sort of process happening in our brain that makes
probabilistic decisions based on the strength of these billions of
different factors in our context. It implements some sort of highly
complex mapping function from context to behavior.
Operant conditioning acts to slowly modify that highly complex mapping
function over time.
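As a toy sketch of that idea (all the names and numbers here are made up for illustration, not a model of the brain): a table of weights maps context features to action scores, and reward slowly nudges the weights, operant-conditioning style.

```python
# Toy sketch (illustrative names, not a brain model): a context-to-action
# mapping whose weights are slowly nudged by reward, in the spirit of
# operant conditioning.

class ContextPolicy:
    def __init__(self, n_features, n_actions, lr=0.1):
        self.w = [[0.0] * n_features for _ in range(n_actions)]
        self.lr = lr

    def act(self, context):
        # Pick the action whose weighted sum of context features is highest.
        scores = [sum(wi * xi for wi, xi in zip(row, context)) for row in self.w]
        return max(range(len(scores)), key=scores.__getitem__)

    def reinforce(self, context, action, reward):
        # Reward strengthens (or weakens) the taken action in this context.
        for i, xi in enumerate(context):
            self.w[action][i] += self.lr * reward * xi

policy = ContextPolicy(n_features=2, n_actions=2)
# Training: in context [1, 0] action 1 pays off; in context [0, 1] action 0 does.
for _ in range(50):
    for ctx, good in (([1.0, 0.0], 1), ([0.0, 1.0], 0)):
        a = policy.act(ctx)
        policy.reinforce(ctx, a, 1.0 if a == good else -1.0)

print(policy.act([1.0, 0.0]), policy.act([0.0, 1.0]))  # 1 0
```

The brain's mapping has millions of context dimensions instead of two, but the shape of the idea is the same: experience adjusts the function, the function produces the behavior.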
I don't think "emotion" is best seen as a context. I think it's the way we
label types of behavior that just so happens to be produced at times in
response to some context - like the car dent.
If the context we are in makes us bang our fist on the table and produce
words of anger, we say the person is "angry" and that sort of talk makes us
naturally think as if there is an "angry" context in our brain. But the
context is more likely something like "past unexpected event which created
a large unexpected reduction in our estimated expected future rewards which
could have been avoided if we had been given the option to act differently".
To talk about emotions as a special class of behaviors (as is often done),
leads us naturally to think it's innately triggered in a way that is very
different from all our other behaviors. But I think the emotional
behaviors are triggered in exactly the same ways as all our behaviors -
it's just learned.
The minor difference is that it's often associated with large changes in
expected future rewards, where most behavior is associated with only very
small changes in expected future rewards (our unemotional actions).
Right, what I just call the context.
I am? Odd that you would read "rational goal driven behavior" to mean
No, emotional behavior is just one of the many types of behaviors we learn.
It's just as predictable as all our behavior (which is very hard to
predict just because it's produced by a highly complex mapping function
from context to behavior).
Right. True about all behavior. It seems unpredictable but it is as
predictable as the path of a falling apple. It's just very hard to predict
because it's produced by a very complex system which we don't have a way to
measure and model with enough accuracy.
Well, this is a little bit of a diversion, but what I've found
experimenting with learning networks, is that when the system is driven by
very high dimension complex data, there is so much information in the data,
that it serves as an effective noise source for creating plenty of
variability in the actions, so that you don't have to intentionally build
in non-optimal decisions to create the needed exploration. The system can
attempt to pick optimal behaviors all the time, but the high noise levels
in the sensory data creates enough variability to create the needed
exploration. But that's just a side implementation issue.
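Here's roughly what I mean, as a little sketch (the weights and noise levels are made up): the action choice is always greedy, yet sensor noise alone makes both actions get tried.

```python
import random

random.seed(1)

def act(weights, observation):
    # Always greedy: no epsilon, no deliberate randomness in the choice.
    scores = [sum(w * o for w, o in zip(row, observation)) for row in weights]
    return max(range(len(scores)), key=scores.__getitem__)

# Two actions whose weight rows are mirror images, so on noise-free input
# they would score identically and action 0 would always win the tie.
row = [random.gauss(0, 1) for _ in range(20)]
weights = [row, list(reversed(row))]
true_state = [0.5] * 20

chosen = set()
for _ in range(100):
    # Per-feature sensor noise; summed across 20 features it is enough
    # to flip which action looks best from step to step.
    obs = [s + random.gauss(0, 0.3) for s in true_state]
    chosen.add(act(weights, obs))

print(chosen)  # both actions get explored: {0, 1}
```

No explicit exploration policy, but the high-dimensional noise does the exploring for free.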
I doubt it's at all unique to humans. First, all real time RL machines
must have an innate power to predict future rewards. The system has no
hope of learning how to deal with the real world if it can't do that. So
any animal that learns by reinforcement must at least have innate powers to
predict future rewards. But that's not really what you were thinking about.
We learn to control the future. We learn to take actions now, to make sure
we are able to eat later on in the day for example. We learn all sorts of
mental and physical tricks for helping us shape the future to be better for
us. Like we learn to write down food we need to buy at the store to make
the future better for us (we don't end up back at home without the food we
need). We make plans for what we will do in the future just to make the
future better for us. We learn to make plans for the future (of which
writing a shopping list is just one example of the planning skills we learn).
I think all our "predicting the future" is nothing more than learned
behaviors to make the future better for us. And we learn those behaviors,
because the underlying learning hardware that creates all our intelligent
behavior, has innate powers to predict future rewards, and it uses those
predictions, to shape our actions.
So my speculation here, is that if you build learning hardware with strong
ability to predict future rewards based on current context, and use that
prediction to condition the way the system responds to a complex context,
the system will naturally evolve behaviors that allow it to make a better
future for itself (like writing down a shopping list).
I think the lower animals that are conditioned are doing the same thing -
trying to shape their future the best they can. Humans just do a far
better job of it because, I suspect, they simply have a much larger brain,
which allows them to be sensitive to a far greater context - a far higher
resolution context (or understanding) of the current state of the world.
I think everything we do to try and predict the future can just be seen as
learned behaviors driven by the underlying need to maximize our internal
system's prediction of a "good future" for us. Our behavior repertoire is
just far larger than that of an animal like a dog, because our brain is far
larger. With a larger brain allowing us to learn a far larger repertoire of
behaviors, we just end up learning far more complex "tricks" to make the
future better for us.
Again, I did not mean to leave you with the idea that emotions are
unpredictable; I don't really see how you found that in my words.
Let me write what I wrote above in a slightly different way to try and
better express my meaning...
If you build a probabilistic based goal seeking machine, it will generate
all the behaviors we think of as intelligence, and the behaviors we call
emotions. I don't believe emotional behavior, and "intelligent" behavior
are produced by two different systems, they are produced by the same system
for the same reasons.
I don't think emotional behavior is in fact a special type of behavior at
all. I think it's just exactly what you would expect a learning machine to
do.
One of the more complex "tricks" of behavior humans learn, is language. And
with that language, they learn to use it as this odd tool we call "rational
thinking" that is like an abacus for predicting facts about the future. We
"play with the beads" (so to speak, as a metaphor) of the words in our heads,
and after playing, the final words we created are used to guide our actions.
Just as when we learn the trick of moving beads up and down in an abacus,
and then the final position of the beads at the end of the process, becomes
a guide for our actions.
We use language as an internal "mind tool" to guide our actions as well. We
produce talk in response to our environment, then we produce more talk in
response to the talk we just produced, and on and on, until finally we
stop, and whatever we last said, is what we use to guide our actions.
That's called "being rational" when the language we use is full of tests
for logical validity.
The brain does not operate rationally like that at the low level. That is
just a high level trick we have learned to use. We have learned through
experience that it can at times produce a better future for us if we follow
what our words said.
Below that "trick of using language to be rational" is a behavior
generating system that maps complex context into actions. It's so complex,
we can't for the life of us, most of the time, figure out why our brain
picked one action over another. We just notice it did something, and then
try to produce language to describe why (rationalize our actions).
If we choose not to use the language trick of being rational, and just let
the low level system pick what it thinks is best without using language,
then we often describe that sort of behavior as "being emotional". Or,
"following our heart".
My low level brain is pulling me to sleep with that girl. But if I let the
language part of my brain talk to me, it says, "that will only lead to
trouble, don't do it". If my behaviors end up following the lead of the
language part of my brain, we call it being rational. If my behaviors
follow the non-language parts of the brain, we call it being emotional.
It's valid in my view, to argue that our low level hardware for selecting
behavior can be called our emotional hardware. That is, we have an
emotional foundation for our behavior, not a rational one. We learn by
experience, to use these language behavior tricks to make ourselves act
rationally at times instead of following our low level behavior system.
The limit of the low level hardware is that even though it's trying to
predict future rewards, its ability to do that is very limited in time.
It can't predict more than minutes or maybe hours into the future. Our
language usage however, allows us to logically "compute" rewards much
further into the future - all the way to our death. So when we choose not
to use language tricks to guide our actions, we are doing what our
short term prediction system thinks is best, but we are risking a very
bleak long term future, that our language system was trained to predict.
I guess what I mean is that the time constant on emotional context
is much longer than for other context... long enough that we notice
when someone is "in that mood" (in the absence of a proximate
stimulus) as opposed to just responding to an immediate situation.
It seems likely also, an emotion may be associated with a particular
endocrine function, which (a) affects many parts of the mind and body,
(b) could explain the time constant, and (c) is common to all creatures
sharing a common biology.
The persistence of a mood, the way it pervades behaviour that's
not a direct response to the cause, and the fact it has counterparts
in other individuals is what makes it appear as an independent state,
something worthy of its own name.
These are the three main ways in which emotional context differs from
pure neurological (brain) context.
I disagree with this. I think you are closer to the truth in the quote
below. The odds are that I'm responding to my adrenalin level. It's not
just sensory context, it's endocrine-derived neuro-muscular context.
Since the endocrine system affects all our organ activity, it's why we
feel certain emotions "in our guts".
I don't think that difference is minor, for the 3 reasons I stated.
If I'm right, and these map to endocrine states, that would explain it.
If not, emotions wouldn't be so universally recognisable, even across species.
Unless, of course, it is (in the form of a hormone level). I think
my rabbit example (and many others like it) reinforce that view.
Hard to argue with that, except to say that emotions appear to have
a constant activity level across individuals and species of extremely
widely varying intelligence. That argues for them having a different
basis, perhaps a more fundamental one.
Of course they contribute to the intelligence of behaviour, but then
so does respiration; it's hard to exhibit intelligent behaviour unless
you're alive. It doesn't mean that respiration is intelligent.
The interesting thing from a CS perspective is how the possible answers
are reduced; even our amazing brains are clearly not performing a brute
force combinatoric search across all those billions of variables.
And in response to other stimuli, in the non-immediate aftermath of an
event like that. It's this persistence that makes the behaviour pattern
stand out and get named. We know what kind of response is proportionate,
and if we see a disproportionate response, we assume that some prior
event has created a mood which explains it.
If it's endocrine-related, then it is different.
When you said rational process, I thought you meant a process that can
be rationally understood. You followed it with "probabilistic", which
to me implied stochastic: subject to some internal noise that makes it
unpredictable.
Our rabbit did not "learn" her mothering instincts. Nor did she have
to learn how to be sad. It's also arguable that actually being sad
was not an adaptive behaviour - hard to see how being sad will help
her pass on her genes.
Mammals have these states wired in to their biochemistry because it
makes them better goal-seekers; but that they're wired in makes them
different from neurological goal-seeking.
That's a very fair observation, and one I also believe.
I stopped being concerned about "free will" and the arguments
about morality which stem from it after realising how incredibly
interconnected things are. As the Rev Prof David Pennington explained:
If you want to predict the motion of a single molecule of air, spanning
the duration of a tenth of a nanosecond, during which (at room temp
and pressure) it will undergo an average of about 50 collisions, you
need to know the state of almost every particle in the entire universe.
In particular, if you neglect the gravitational attraction (the weakest
of the physical forces) of a single electron (the lightest stable
subatomic particle) at the furthermost end of the observable universe,
then after those 50 collisions you know basically nothing about the
direction or momentum of that molecule.
And folk resort to Platonic dualisms to justify their belief in free
will within an otherwise mechanistic universe!
Yeah, there certainly seem to be some base emotions supported by additional
hardware.
Well, don't go too far with that. That is, yes, I agree those sorts of
reasons are why a class of behaviors will end up with a name, but once
something has a name, it creates a bias in our thoughts about the cause
(one name -> one cause).
You seem to be trying to dig for a justification to fit the name (emotional
state).
Persistence of behavior is certainly not a unique indicator. Our lives are
filled with examples of persistent behaviors. Like when we drive a car for
an hour, we exhibit persistent car-driving behavior for the entire period.
The cause of that is obviously mostly external (the fact that we are in a
car, driving), so the cause is obvious to us. But if we break up with a girl
friend, and our behavior changes for the next month, the cause is still
external even if the cause is not temporally persistent.
I'm not aware of any emotional behavior that isn't a direct response to
some external cause.
Maybe. I don't know much about brain chemistry. But isn't it likely that
the endocrine level was triggered by external events? And if so, then it's
just as valid in that chain of causality to claim our behavior is in
response to the external event instead of saying it is in response to the
endocrine level.
If we see a child in the path of our car, and step on the brakes, and the
car stops, do we say the cause of the car stopping was the increase in
pressure in the brake fluid? We can. But we could also say the car
stopped because we applied pressure to the brake pedal. Or we could say
the child stopped the car by stepping in front of it. Any of the items in
the chain of causality can be labeled as the cause. The entire chain of
causality is a better description of the "cause", but causality chains are
often too complex to talk about in their entirety.
So the question would be: what controls the endocrine levels (or other
chemicals in our brain that affect our behavior)?
Ah, my wording wasn't very accurate there. I meant that I don't believe
there's a simple one-to-one mapping between the names we make up to label a
class of "emotional" behavior and some _unique_ internal mechanism that
produces it.
Yes, a more fundamental one I would argue. I'll get to that.
Yes, in theory. But calling it a "search" creates implications of a
serial comparison process. I doubt it's implemented anything like that.
It's most likely a neural-network-like parallel function calculation.
Yes, we call it a "mood" when a class of behaviors emerge from a past
event. For the brain to implement that, we know the brain has to maintain
state (memory) of that past event somehow. But how it maintains the state
is not really important when we are simply talking about behavior (as we do
when we talk about it being emotional).
Hum. Ok. Well, we tend to switch to using probabilities to talk about
effects we can't fully predict. And that's often because there are factors
at work we can't measure. We might call those factors noise, or we might
just understand we are dealing with missing data.
Thinking about this, I believe my use of the idea of probabilities really
fails to capture the intent of what I was trying to communicate.
I believe human behavior is created by a highly complex mapping function
implemented as a neural network. The network is trained (aka adjusted) by
experience. The "output" of this parallel mapping network is our behavior.
The network can be trained to produce very simple and easy to predict
responses, such as when we train a dog to sit in response to a hand
gesture. When we learn to create "rational" language behavior, which is
full of simple easy to predict responses (1+1=2, true OR false -> true), we
have trained our complex behavior system to produce simple actions.
But the complex network can, and does, produce complex behavior that can't
be explained in simple terms - simply because it's a complex function of a
million variables. I was calling this probabilistic. But it's not really
probabilistic. It's some precise mapping function made up of millions of
variables. We just have trouble talking about the behavior without
resorting to using probabilities (what is the probability that the person
will turn his head to the right in response to the sound?)
Human behavior is complex, because it's produced by a very complex stimulus
response system trained by (adjusted in response to) a lifetime of
experience. Some of the behavior turns out to be simple enough to describe
with a few words. (He stopped the car to prevent hitting the girl). Most
of it however, is too complex to be explained with words. Why did I choose
to use each of the words I used in this post? Why did this sentence end
with the word marshmallow?
I like to think of this hard to explain behavior as probabilistic, but
that's not really accurate.
Yes, and that sort of complexity is inherent in the brain as well. I
believe it works by taking a complex neural network that, by default,
produces highly complex (and pointless) behavior. Understanding what it
does is as hard as untangling the millions of factors affecting the
behavior. The network implements a highly complex, parallel causality chain.
The complex reaction system is the foundation of a reinforcement learning
machine. Its behavior is shaped by a single external reward signal. In
the brain, dopamine seems to be a key element in the reward signal, with
theories put forth that it encodes the error in reward prediction.
Regardless of how it's implemented, reinforcement learning machines share a
collection of key features in common. It's just the basic nature of what
can and must be done, to build such a machine.
First, they are driven by a single reward signal. It can't be driven by
two (or more) reward signals because the machine has to make choices and if
given apples and oranges to pick from, it can't know which to pick unless
they are first mapped to a single valued reward. There can only be one
dimension of "goodness" to the low level hardware that has to make the
decision about what to do. In order to motivate such a machine with more
than one goal (such as protecting its body from harm, while also seeking
food) there must be internal hardware that defines the relative value
(conversion rate?) of these two goals. How much pain from a skin sensor is
worth the same as an hour of not eating? These independent reward
(positive or negative) effects must be translated down to a single reward
value by innate hardware, it can't be learned. The learning hardware then
seeks to maximize that one dimension measure of reward.
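A trivial sketch of that "conversion rate" idea (the drives and weights here are numbers I made up, just to show the shape of it):

```python
# Hypothetical sketch: several innate drives collapsed into the single
# scalar reward a reinforcement learner needs. The weights (the
# "conversion rates" mentioned above) are made-up constants.

DRIVE_WEIGHTS = {
    "pain": -10.0,      # a unit of pain costs ten units of reward
    "hunger": -1.0,     # an hour without food costs one unit
    "food_eaten": 5.0,  # eating pays five units
}

def scalar_reward(drives):
    """Map a dict of drive levels to one reward number."""
    return sum(DRIVE_WEIGHTS[name] * level for name, level in drives.items())

# One unit of pain outweighs eating once; ten hours of hunger equals it.
print(scalar_reward({"pain": 1.0, "hunger": 0.0, "food_eaten": 1.0}))   # -5.0
print(scalar_reward({"pain": 0.0, "hunger": 10.0, "food_eaten": 0.0}))  # -10.0
```

However the weighting is implemented in an animal, something with this role has to be innate: the learner can only maximize one number.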
In addition, the machine is built so as to maximize not just current
reward, but some estimated measure of future rewards. They all do this
(meaning all the ones we have built in computer science) by using a reward
prediction function. That is, it learns, from experience, to predict
future rewards.
This gives all these machines two inherent internal measures of reward.
One measure is the signal coming from the innate hardware that defines the
system's primary reward (which is the machine's goal - to maximize THAT
signal over the long run). But the second signal that must exist is the
system's prediction of future rewards. It must learn how to map the current
sensory environment, into a prediction of future rewards. A hand moving
towards a flame, is a predictor of less future rewards (a predictor of
pain). A table full of food when we are hungry is a predictor of higher
future rewards.
The low level innate rewards are used to train the reward prediction
system. The error signal between the predictions and the low level innate
reward, is used to train the prediction network.
The output of the prediction network however, is used to train behavior.
So when we do something, and it causes the environment to change in a way
that causes the prediction system to estimate lower future rewards,
that feedback is used to change the behavior production system in ways that
reduce the probability the system will act the same way in the future.
When things change for the better, the system is again changed, to increase
the probability that a similar context will produce a similar response.
The better the prediction system is at "understanding" and "predicting"
future rewards, the better it is at training the systems behavior. If the
predicting system can't predict the negative rewards that will follow when
a hand is heading towards a flame, then the behavior learning system won't
be able to learn not to do that.
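That training loop can be sketched as a tiny actor-critic, which is roughly the computer-science version of what I'm describing (the flame scenario, names, and constants are all made up; real RL implementations are more elaborate):

```python
import math
import random

# Minimal actor-critic sketch (illustrative, not a brain model): a critic
# learns to predict reward, and its prediction error both improves the
# critic and trains the actor's behavior.

random.seed(0)
alpha, beta = 0.2, 0.5          # critic and actor learning rates
value = 0.0                     # critic's predicted reward in the one context
prefs = {"toward_flame": 0.0, "away": 0.0}   # actor's action preferences
REWARD = {"toward_flame": -1.0, "away": 0.0}

def pick_action():
    # Softmax over preferences: higher preference, more likely to be chosen.
    mx = max(prefs.values())
    exps = {a: math.exp(p - mx) for a, p in prefs.items()}
    z = sum(exps.values())
    r, acc = random.random() * z, 0.0
    for a, e in exps.items():
        acc += e
        if r <= acc:
            return a
    return a

for _ in range(200):
    a = pick_action()
    td_error = REWARD[a] - value      # prediction error (the "dopamine" signal)
    value += alpha * td_error         # critic: predict better next time
    prefs[a] += beta * td_error       # actor: repeat what beat the prediction

print(prefs["away"] > prefs["toward_flame"])  # True: it learned to avoid the flame
```

The point is that one error signal does double duty: it sharpens the prediction and, at the same time, shapes the behavior.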
All of human learned behaviors can be fairly easily explained by the
actions of such a reinforcement learning system. Including most of our
emotional behaviors.
A key "mood" state that happens with such a system, is the output of the
reward predictor. And that internal reward signal is a key signal
evolution could use to regulate the rest of the body as well.
If the brain senses the environment has moved to a state which is a strong
predictor of very large negative rewards ahead (such as what happens when a
hungry lion is suddenly standing in front of us and we fear we are about to
become lunch for the lion), there are obvious physiological effects that
evolution could tie to that signal to help us survive - such as getting the
body ready to escape by raising heart rates and turning off digestion to
divert resources to the muscles.
reinforcement learning machine.
Same for the inverse conditions of when the reward prediction system
predicts a sudden increase of future rewards. We can have physiological
effects tied to that condition because it's an important innate internal
signal that exists in this type of learning machine which evolution could
pick up on to regulate body functions.
What such a machine learns, is how to produce behaviors to make that
prediction signal increase, or at least, not drop. It learns that various
clues in the environment signal an opportunity to get more rewards, or to
prevent a loss of rewards. A cookie jar becomes a clue for producing
behaviors to open the lid, and pick out a cookie and eat it. A lion
becomes a clue to run away. We learn a wide range of behaviors for
collecting rewards, and for escaping from the things that will take rewards
away from us.
So what happens when the environment is full of "good stuff"? We become
very active, running around trying to pick all the low hanging rewards as
quickly as possible. When a rabbit has babies, it's probably got a slew of
innate rewards available to motivate it to take care of the babies. It
becomes very busy doing all the things it needs to do to take care of them
and is constantly collecting "rewards" for doing so. The babies become
predictors of high future rewards.
If all the "good stuff" suddenly vanishes from the environment, what
happens? The reward predictor output crashes. That causes strong negative
training to take place (the rabbit just experienced a strong negative
learning experience). That creates a depression in the behavior (which is
what negative training does - reduces the odds of a behavior being
produced). But more important, it removes all the low hanging
fruit-rewards so there's suddenly nothing to "run around and collect".
But there could also be a re-adjusting of the reward prediction system
required. It could be "bottomed out" in theory (just speculation on my
part here). That is, it might have a limited range of signals it can
produce (let's say 0 to 10 as an example), and it tends to try and keep that
number around 5. When the environment gets better, it raises the signal,
and when it gets worse, it lowers it. With the baby rabbits around, the
system might have started to constantly predict high values (8-10), and
then readjusted down to an average of 5 after some time (life has been
good, so it's readjusted the training signal to reflect expectations based
on this good life). The average signal returns to around 5. But then the
babies die, and suddenly the signal falls to 0 or 1. It continues to act
as a negative training signal for days. But in time, readjusts back to 5
(life sucks, but we can deal with it :)).
So we can call this sort of mood an emotional depression, or we can also
just look at it as standard behavior of a reinforcement learning machine
when the environment suddenly changes so as to reduce the rewards
available. The depression can last until there is something else to
motivate the behavior, or, until the reward prediction system has time to
adjust to the new level of expected rewards.
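A speculative sketch of that re-baselining story (all the constants are made up): the predictor's output tracks how much better or worse things are than its slowly adapting expectation, so a sudden loss reads as a long negative "mood" that fades as expectations adjust.

```python
# Speculative toy model of the 0-to-10, centred-at-5 predictor described
# above. The adaptation rate and reward values are invented for illustration.

def run(rewards, adapt=0.02):
    baseline, signal = 5.0, []
    for r in rewards:
        # Signal is clamped to the predictor's 0..10 range, centred on 5.
        s = min(10.0, max(0.0, 5.0 + (r - baseline)))
        signal.append(s)
        baseline += adapt * (r - baseline)   # slow adaptation to the new normal
    return signal

# 100 good days (reward 8), then the good stuff vanishes (reward 2).
signal = run([8.0] * 100 + [2.0] * 200)

print(round(signal[0], 1))     # 8.0 -> life just got good
print(round(signal[99], 1))    # ~5.4 -> the good life has become the new normal
print(round(signal[100], 1))   # 0.0 -> the loss reads as a crash ("depression")
print(round(signal[-1], 1))    # ~4.9 -> expectations adjust; the mood recovers
```

Same machinery, no special "emotion module": the depression and the recovery both fall out of a predictor slowly re-centering itself.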
We find this same sort of behavior across a wide range of animals, simply
because they all have reinforcement learning machines as the foundation of
their behavior, and all reinforcement learning machines share these
features in common.
Most of the basic emotions (happy, sad, depressed, fear, love), in my view,
are explained exactly like this. That is, as the standard way
reinforcement learning machines can be expected to act.
Our "intelligence" is normally described as the large and complex behavior
set we each have that allows us to perform complex survival "tricks" (like
operating an ATM machine).
But most of this large and complex behavior set was not innate in us. It
wasn't wired into us at birth. It was learned. It was learned by a
reinforcement learning machine that is adjusted by an innately defined
reward signal. Our simple logical (rational) behaviors, were trained into
us, by a lifetime of experience. But the behaviors we can understand and
explain with a few words, are only the tip of the iceberg on the large
mapping function that creates all our behavior - most of which are too
complex for us to explain with a few words, which means they are too
complex to "understand" (we can't describe their cause with words so we say
we can't understand them).
But the classes of behavior we generally talk about as "emotional" (which is
often used to mean not "rational") are also just learned behaviors. And
even if they are sometimes triggered in response to chemical levels, the
major "emotional moods" are most likely (in my view) there as part of the
implementation of the reinforcement learning machine, (such as dopamine
possibly being a signal for encoding the difference between predicted and
real rewards), instead of some "extra" evolutionary "feature" built into us.
It's common for people to talk about emotions as some sort of odd
evolutionary feature built into us which are separate from our rational
"intelligence". However, I think our rational intelligence comes from the
fact that we are reinforcement learning machines, and these machines
naturally produce "emotional" behavior at the same time. It's just how the
learning system works. They aren't "extra" features given to us, by
evolution, they are there simply because evolution put our external motion
under the control of a reinforcement learning controller and these
controllers produce "emotional" behavior as their foundation. Our more
rational behaviors are just the high level (tip of the iceberg) behaviors
that have been so strongly reinforced they are close to absolutes in their
selection.
Has anyone read
Rapture for the Geeks: When AI Outsmarts IQ
by Richard Dooling
or the underlying source material, such as a Wired article by Sun
Microsystems co-founder Bill Joy? Relevant to the ongoing discussion.
Some of the great minds of our age believe in the inevitability of
computers that will out-think people. And sooner than one might
think, by around mid-century "for sure", given metrics such as Moore's
Law and the exponential growth of computing power.
On Wed, 13 Oct 2010 23:42:21 +1100, Brian Martin wrote:
The problem with Arthur (Mentifex's real name), is that he thinks he's
some kind of super genius, and therefore totally ignores all work done by
everyone else in the field.
As a consequence, and because he's emphatically *not* any kind of genius,
his work is stuck at the level of a first year college student, and has
been for the last thirty years.
If he took a look at, for example, the work of Doug Lenat, he'd find, in
the "Cyc" project not only a powerful and flexible way of encoding all
sorts of knowledge; but gigabytes of encoded knowledge to play with.
Instead he's only able to encode and manipulate the kinds of knowledge
that "Eliza" was handling forty years ago.
= David