I want to create an android

comp.ai.philosophy wrote:


So consciousness is subjective but intelligence is not?

~v~~
Lester Zick wrote:

Intelligence can be clearly and concisely defined, consciousness can not.
Intelligence: The ability to formulate an effective initial response to a novel situation.
Consciousness is an internal trait, intelligence is an external, observable trait. If a system behaves intelligently (solves problems it hasn't seen before), then it is intelligent.
Intelligence can be measured. Human level AI will be achieved when a computer can consistently pass the Turing Test against a skilled interrogator.
Intelligence can exist in degrees. A dog is more intelligent than a fish, but less than a monkey. But does a dog, monkey or fish have consciousness? Different people and different cultures disagree. It is like asking if they have a soul.
comp.ai.philosophy wrote:

Whether intelligence can be clearly and concisely defined has no bearing on whether it is subjective. I think what you mean to say is definitions of consciousness are subjective. But I don't see anything to indicate your definition of intelligence is not subjective.

So what makes this definition not subjective? Consciousness could undoubtedly be defined in comparable terms.

Well it may be like asking if they have a soul but I don't see that intelligence is any less subjective as a mechanism. You're arguing intelligence can be measured but measures of intelligence are every bit as subjective as people want to make them.
~v~~
Lester Zick wrote:

Definition is always subjective. Give me an example of a "non-subjective" definition.
To give a definition a certain objective flavour we could bring in "experimental material" showing that statistically one definition was more often used historically than others (for example in journal articles). But strictly speaking that does not make it less subjective than the others, it just makes it more common.
Regards, Evgenij
On Thu, 26 Jan 2006 14:26:05 -0600, Evgenij Barsukov wrote:

I think you're confusing the origin of definition with the definition itself. All definitions originate mechanically in subjective terms. That doesn't mean the content of the definition is subjective. Science requires objective definition. Just saying consciousness is so and so doesn't cut it in science.

Lack of demonstrability is what makes a definition subjective whether you draw on historical or popular sources.
~v~~
Lester Zick wrote:

A definition has to be "specific" so as to include only the objects intended. But your _intent_ remains subjective.
A definition that includes more objects than intended is not "unobjective", it is just unsuccessful.
There are other criteria which make a definition useful in a scientific treatise, and which cause a particular definition to become common. The final test of the effectiveness of a particular definition is its wide propagation and common usage in discussions. In retrospect it is possible to analyse why some definitions became common and others did not, and come up with a list of criteria for success. But the final test of usefulness is also subjective, and our list of criteria will only give guidelines but no guarantees of success. In some cases the success of a particular definition is just a matter of the "first inventor" of a certain relationship giving it a definition, even though later, when the relationship is better understood, a more specific definition could have been given. In other cases it is just luck, or the higher persuasiveness of a particular scientist, that makes his definition more common than a competing one.
It would be interesting nevertheless to come up with such a list of success criteria for definitions.

I would rather say "lack of specificity", of which demonstrability is a subset that applies to definitions dealing with concepts.
Regards, Evgenij
On Thu, 26 Jan 2006 16:17:11 -0600, Evgenij Barsukov wrote:

So? Definitions are either objective or inherently problematic.

But successful definitions in the sense of being true have to be objective.

I'm talking about true definitions.

Hey, people come up with all kinds of more or less successful and unsuccessful definitions all the time. Doesn't mean they're true or even exhaustive.

Well I'm mainly concerned with whether definitions are exhaustive. If so they can certainly be true. But if not they can only be successful or more or less useful in problematic terms.
~v~~
: comp.ai.philosophy wrote:
: >Lester Zick wrote:
: >> >"Conciousness" is ill-defined and subjective. : >> >Besides, the goal is "intelligence", not : >> >conciousness. : >> : >> So consciousness is subjective but intelligence is not? : > : >Intelligence can be clearly and concisely defined, : >consciousness can not.
: Whether intelligence can be clearly and concisely defined has no
: bearing on whether it is subjective. I think what you mean to say is
: definitions of consciousness are subjective. But I don't see anything
: to indicate your definition of intelligence is not subjective.
: > Intelligence: The ability to formulate an effective
: > initial response to a novel situation.
: So what makes this definition not subjective? Consciousness could
: undoubtedly be defined in comparable terms.
We, as humans, decide. A reasonable response can be defined as one that a panel of judges decides is reasonable. There is some culture clash with this method, obviously, but it can be decided upon.
: >Consciousness is an internal trait, intelligence is
: >an external, observable trait. If a system behaves
: >intelligently (solves problems it hasn't seen before),
: >then it is intelligent.
: >
: >Intelligence can be measured. Human level AI will
: >be achieved when a computer can consistently pass
: >the Turing Test against a skilled interrogator.
: >
: >Intelligence can exist in degrees. A dog is more
: >intelligent than a fish, but less than a monkey.
: >But does a dog, monkey or fish have consciousness?
: >Different people and different cultures disagree.
: >It is like asking if they have a soul.
: Well it may be like asking if they have a soul but I don't see that
: intelligence is any less subjective as a mechanism. You're arguing
: intelligence can be measured but measures of intelligence are every
: bit as subjective as people want to make them.
I disagree. Consciousness is purely subjective. Intelligence is a display of rationality. Consciousness is the realization of self and of the spontaneous decision that the self is important. In that manner, I believe that dogs, cats, monkeys, squirrels, etc. are conscious beings; they display fear, joy, etc. (insofar as I can tell). This will be a challenge to determine in a machine. In short, I believe that both intelligence and consciousness can be defined and measured.
IMO, DLC
: ~v~~
--
============================================================================
* Dennis Clark snipped-for-privacy@frii.com www.techtoystoday.com *
comp.ai.philosophy wrote:

I don't suggest it can't. But humans decide everything subjective or objective. That alone doesn't make definitions one or the other.

Well there's some wiggle room here. Consciousness is mechanized subjectively; that's true. But its products are objective. That's how and why we can use its products interpersonally to begin with.

Which is just a synonym regression.

I've argued this at length with feedbackdroids and unsuccessfully. I don't see animals as providing evidence of their own consciousness.
~v~~
Bob wrote:

Yes, you could simulate my 'licence-plate-detector network' with a symbolic algorithm, but it would be even less understandable than a network. You could then also say everything written in a computer language is symbolic. I would say that your symbolic algorithm would fall in the sub-symbolic category.

But you would have to think of everything! Your robot has to get his coat before going outside, except when the house is on fire, when it's hot outside, when ...........
I would say it's impossible to think of everything; let genetic algorithms sort it out.
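The "let genetic algorithms sort it out" idea can be sketched in a few lines. Everything here is a made-up illustration: the eight "situations", the target responses, and all parameters are hypothetical, not anything from the post.

```python
# Minimal genetic-algorithm sketch: evolve a bit-string rule table toward a
# target behaviour instead of hand-coding every case.
import random

random.seed(0)  # deterministic run for illustration

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # desired responses to 8 hypothetical situations

def fitness(rules):
    # Count the situations this rule table handles correctly.
    return sum(r == t for r, t in zip(rules, TARGET))

def evolve(pop_size=20, generations=60, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # selection: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))  # one-point crossover
            child = a[:cut] + b[cut:]
            # Occasionally flip a bit (True == 1 in the xor below).
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Nobody wrote the rule table by hand; selection, crossover, and mutation "sorted it out", which is the point being made about not having to think of everything.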

I agree with the point you are making, but you have a few slight errors there. You can't build everything out of NAND gates, because you can't create a clock out of NAND gates. You can't even build a computer out of NAND gates alone, because of the timing problem.
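The combinational half of that claim can be shown directly: NOT, AND, and OR (and hence any truth table) compose from NAND alone. This is a generic sketch, not code from the thread; note that nothing in it can oscillate, so no clock falls out of pure composition.

```python
# Composing basic logic from NAND alone. Pure function composition is
# time-free: it can realize any truth table, but never an oscillator (clock),
# which needs feedback through a time-dependent element.
def nand(a, b):
    return int(not (a and b))

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    # De Morgan: a OR b == NOT(NOT a AND NOT b) == NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

print([not_(a) for a in (0, 1)])                     # [1, 0]
print([and_(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]
print([or_(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
```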
So, since you have ignored this very important concept of time when you talk about nand gates, how are you addressing it when you use this concept of "symbolic algorithm"?
Typical computer programs are timeless. That is, they assume the idea of sequence, but they have no concept of time encoded into the source code. And this is a big issue, because human intelligence is heavily dependent not just on symbolic sequence, but on time itself. Symbolic sequence alone is never enough to create intelligence; you need timed symbolic sequences. Timing is everything in intelligence. You can't talk, or walk, or hammer a nail, unless you get the timing right. It's not just what symbol you produce next which is important, it's the exact time at which the symbol is produced which is important. If you don't look at, understand, and solve this complex timing problem, you can't begin to solve AI.
So just be careful about oversimplifying the problem of AI down to nothing but NAND gates and symbol processing. Doing that can cause one to forget how time dependent the problem is.
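The sequence-versus-timed-sequence distinction can be made concrete. The event names, timestamps, and tolerance window below are invented for illustration (a hammering gesture), not taken from the post.

```python
# A bare symbol sequence vs. a timed one: the same symbols in the same order
# can still be behaviourally "wrong" if the timing is off.
from dataclasses import dataclass

@dataclass
class TimedSymbol:
    symbol: str
    t: float  # seconds since start

untimed = ["lift", "strike", "lift", "strike"]

good = [TimedSymbol("lift", 0.0), TimedSymbol("strike", 0.5),
        TimedSymbol("lift", 1.0), TimedSymbol("strike", 1.5)]
bad  = [TimedSymbol("lift", 0.0), TimedSymbol("strike", 0.05),
        TimedSymbol("lift", 1.0), TimedSymbol("strike", 3.0)]

def well_timed(seq, min_gap=0.2, max_gap=1.0):
    # Order alone is not enough: every inter-symbol interval must also fall
    # inside the window for the behaviour to count as correct.
    gaps = [b.t - a.t for a, b in zip(seq, seq[1:])]
    return all(min_gap <= g <= max_gap for g in gaps)

# Both runs emit the identical symbol sequence, but only one is well timed.
print([s.symbol for s in good] == untimed, well_timed(good), well_timed(bad))
# True True False
```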
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
  Click to see the full signature.
Add pictures here
<% if( /^image/.test(type) ){ %>
<% } %>
<%-name%>
Add image file
Upload

This is a common but excruciatingly lame argument, IMO. All flying systems (flapping wings or rotating props) use the same aerodynamic principles. Same with any type of locomotion: they all use the same propulsion principle based on Newtonian action/reaction.

You're kidding me? You know of another way to create intelligence without making connections?

They don't need to do it faster than that.

We must have a huge amount of memory for human-level intelligence. It's all made of transistors, lots of them. What you should have said is that computers require far fewer processors, because one processor can do the work of many neurons in the same amount of time.

So what? Nobody is claiming that we must simulate biological neurons down to their low-level bio-chemical processes. A neural network can certainly be created in software but, if you look into memory with a microscope, you're not going to see pathways and signals. You're going to see a bunch of transistors which are either on or off. That does not mean that the principles used in its operation and organization are not similar to those of biological networks. A software neural network is a virtual mechanism.
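The "virtual mechanism" point can be illustrated with a toy network. The weights and inputs below are hand-picked placeholders, not a trained model: the point is only that the "pathways" and "signals" are just arithmetic over numbers sitting in memory.

```python
# A software neural network is a virtual mechanism: no physical pathways,
# just floats in memory being multiplied and added. Weights are hand-set
# for illustration, not trained.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # One "neuron": weighted sum of inputs pushed through a squashing function.
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Two-layer virtual network over a two-element input "signal".
inputs = [1.0, 0.0]
hidden = [neuron(inputs, [2.0, -1.0], 0.5),
          neuron(inputs, [-1.5, 3.0], 0.0)]
output = neuron(hidden, [1.0, 1.0], -1.0)
print(output)
```

Under a microscope, running this is transistors switching; at the organizational level, it is still a network of weighted connections, which is the distinction being argued.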

What is more dramatic than learning to walk, speak, drive a car around town and send people to the moon? It was all accomplished by the neural networks in our brains. Are you purposely stupid or something?

ahahaha... This is really lame. The symbolic approach to AI is the work of the devil and those who promote it are children of the Devil (wake up! Minsky et al). ahahaha... It has been around for over half a century and it has been shown to be a pathetic failure. The entire symbol manipulation community should be tarred and feathered and paraded around town for having wasted humanity's time on a wild goose chase for half a century. ahahaha... Those who are still promoting this crap, at this late date in the con game, should be caned publicly as an example to the younger generation. ahahaha...

I do. It's called the brain, artificial or otherwise.

You're using emotional buzz words like "throw" and "magically" for effect because you don't have a valid argument. You are revealing yourself to be nothing but a con artist (as always, I tell it like I see it). The fact that AI is hard is precisely why the symbolic approach is crap. And you know it. One cannot understand cognition because it is too complex for our limited brains to encompass. One can only understand its basic operating principles which is what the sensible AI researcher should be trying to figure out. One cannot program the astronomical interconnectedness of intelligence using a pure symbolic approach. It's pure stupidity.
Connectedness implies zillions of signals and connections between zillions of basic processors, which is another way of saying 'neural network'. Regardless of what the clueless symbolic AI fanatics ceaselessly preach, the proper goal of the AI scientist is to search for free lunches, i.e., to look for neural methods and principles that will allow intelligence to emerge automatically through learning and self-organization. Any other approach is no better than pissing on a spark plug. ahahaha...
This is my last post on this thread. I'm really getting tired of this shit. I do it as a public service, but I think I should get paid from now on. ahahaha... Fire away!
Louis Savain
Why Software Is Bad and What We Can Do to Fix It: http://www.rebelscience.org/Cosas/Reliability.htm
Bob wrote:

Hot air balloons predate fixed-wing aircraft by over a century. Kites are even older: it's not clear just when they were invented, in fact. The Wright Bros.' plane was a powered kite. Attempts to power kites (and hot-air balloons) predate the Wright Bros. In fact, without those previous attempts, the Wrights couldn't have done it. Even their combination of a kite made with tension members for lightness plus a light motor wasn't the first one, either. There were dozens of attempts to use just this combination. But theirs worked well enough to say they were the first to do it successfully. They refined existing technology to the point of success, which is no small feat, especially when previous attempts could be seen as indications that it was a dead end.
As for "flapping wings..." Well, that's a common misconception (oh, the awful effects of simplified school histories!). Have you ever actually watched birds? Or flying fish and flying squirrels, for that matter? Many birds glide - some spend more time gliding than flapping their wings, in fact. Flying fish and squirrels glide. So do some insects, part of the time: e.g., many butterflies alternate flapping and gliding flight. Even some bats glide some of the time. FWIW, pterodactyls probably glided more than they flapped their wings. People knew long before the Wrights that flapping wasn't the only way to get into the air and move about in it.
HTH
On Wed, 25 Jan 2006 11:10:16 -0500, Wolf Kirchmeir wrote:

And yet people still flap their mouths.
~v~~
Bob wrote:

Actually, there were (are?) some primitive bacteria that evolved bearings. Notice that the biological systems you know (or that did evolve) are not the only ones possible to evolve.

Whether changes are "dramatic" or not depends only on your subjective perception of time. Since evolution is mainly based upon mutation (+crossover) and natural selection, it's blind and takes time. Of course, from the point of view of our human perception of time, conscious engineering is much faster. Also, many/most products of evolution might be sorts of dead ends. But I wouldn't make the statement that something "is not possible by evolution" as such.

I wouldn't say that. Artificial Neural Networks are merely primitive substitutes for biological brains that can be used (as can any AI algorithm) to solve some rather simplistic tasks (and often do them better than humans, but that applies to almost any computation that can be run on a computer). Also, ANNs are only one of the approaches/methods that attempt to explain how human reasoning works.

I'm not sure what you mean by symbolic algorithm, but artificial neural networks (like other heuristic algorithms) are capable of confronting problems for which classical/deterministic algorithms that would work in acceptable time don't exist (either they are not known or are not possible to create). Of course the results are only suboptimal solutions, but that's always better than nothing.

That's probably the greatest disadvantage of neural networks: they work as black boxes. Once constructed, it is very hard (or impossible) to extract the human-readable rules upon which a given neural network is implicitly based. This is true even for small networks consisting of a few "real" neurons.
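The black-box point holds even at toy scale. The sketch below uses a hand-set (not trained) weight assignment that is a known XOR solution for threshold units: the network demonstrably computes XOR, yet all an inspector sees is a table of numbers.

```python
# Even a tiny network that computes XOR correctly is "read" only as numeric
# weights; the logical rule it implements is implicit, stated nowhere.
def step(x):
    return 1 if x > 0 else 0

# Hand-set weights: [w1, w2, bias] per unit (a known XOR solution).
W_hidden = [[1.0, 1.0, -0.5],   # this unit happens to behave like OR
            [1.0, 1.0, -1.5]]   # this unit happens to behave like AND
W_out = [1.0, -1.0, -0.5]

def net(x1, x2):
    h = [step(w1 * x1 + w2 * x2 + b) for w1, w2, b in W_hidden]
    return step(W_out[0] * h[0] + W_out[1] * h[1] + W_out[2])

print([net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0] == XOR
# The weight matrices above are all that is stored; nothing in them
# literally says "exclusive or".
```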
--
677265676F727940346E6575726F6E732E636F6D


Well, I agree with the point you are trying to make here, but I just want to take the time to point out your history is a bit off. Man had been building kites that could fly without flapping wings for something like a thousand years before the Wright Brothers. Man had been flying in balloons for over a hundred years before the Wright Brothers. Other people had already published books about the physical characteristics of airfoils before the Wright Brothers built their first glider. The Wright Brothers were simply the first to take all this knowledge about flight and build the first practical (i.e. steerable) powered glider.

I agree that makes for good dreams. But the brain is solving a large parallel data processing problem. It takes thousands of parallel sensory data streams, mixes and matches them in some complex way, and produces hundreds of parallel output data streams. You can't do that with anything other than what logically has to be a large parallel signal processing network. So neural networks are not just a dream; they are in fact the way this problem must be solved. It's only a question of what type of network(s) and network processors (nodes) are needed to solve this problem correctly.
However, because of the very advantages you point out about the hardware we have to work with, we might actually build that large parallel network on a sequential processor which is able to emulate the function of the network, instead of building a physical network. So the physical implementation may be very different, but the logical function in the end has to be the same.
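The emulation idea can be sketched in a few lines: a sequential loop reproduces one synchronous parallel update step by computing every node's next state from the old state buffer before any node "fires". The ring topology and copy-your-neighbour rule below are arbitrary illustrations.

```python
# Emulating a parallel network on a sequential processor: double-buffering.
# All next states are computed from the OLD buffer, then swapped in at once,
# so the sequential loop is logically equivalent to a parallel update.
def parallel_step(state, weights):
    new_state = [0.0] * len(state)
    for i, row in enumerate(weights):  # sequential loop over nodes...
        # ...but each reads only the old buffer, never a half-updated one.
        new_state[i] = sum(w * s for w, s in zip(row, state))
    return new_state  # the "swap" is just returning the new buffer

# 3-node ring: each node copies its neighbour's previous value.
weights = [[0, 0, 1],
           [1, 0, 0],
           [0, 1, 0]]

state = [1.0, 0.0, 0.0]
for _ in range(3):
    state = parallel_step(state, weights)
print(state)  # the 1.0 travels once around the ring, back to node 0
```

Physically this is one processor doing the nodes in turn; logically it is the same function a truly parallel network would compute, which is the point of the paragraph.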
--
Curt Welch http://CurtWelch.Com /
snipped-for-privacy@kcwc.com http://NewsReader.Com /
snipped-for-privacy@msn.com wrote:

Why would you want to do that? That is the main question...
This is not a goal that can be achieved by one person, but by entire humanity over many generations. However, things are not made by human civilization just for the fun of it.
Every invention requires an "energy investment" and therefore a purpose to justify it. Non-autonomous AI (computers + software) has a clear purpose: to help people achieve the goal of Life - sustained acceleration of entropy increase. This is AI which is integrated into humanity and catalytically active only in conjunction with people, but not on its own. Together with people it makes a combined advanced life-form, but not a new independent life form.
What you are suggesting is to create AI that would be a self-sufficient life form, i.e. would compete with people for resources, i.e. stuff whose entropy can be increased. This is not only unnecessary but also a dangerous effort which should be discouraged and even punished. Fortunately, humanity is not going to provide energy for generating something which has no use whatsoever, and therefore it will not be created at this stage.
At a later time, when the human+AI life-form has become so powerful that creation of autonomous AI can be done with a negligible amount of energy, it will be created just for the fun of it as a side-result of something else. But at that time the human+AI hybrid Life will be sufficiently powerful to control the autonomous AI. Human+AI life will not be so different from AI by itself until then.
In fact it might be that human+AI hybrid life will eventually transform into pure bio-free AI just for efficiency reasons, but it will literally be an evolutionary transformation rather than a "creating" of something from scratch.
Regards, Evgenij
Evgenij Barsukov wrote:

Are you kidding? *Most* of the best inventions come from wanting to do something, with thoughts of selling it or making money second. Think about Napster. Think about GNU software.
The mountain bike was a hack of a normal bike before it became marketable. UNIX was written to run a game with no thought of making money.

Because it *isn't* there is sometimes enough.
mlw wrote:

I meant it much more philosophically than the mere "money" merit of it. The concepts of "purpose" and "meaning" go deep into our mental conditioning, and so even when we spend huge amounts of time on things that have no monetary merit or social reward, we still do it according to our internal purpose. We are very, very spiritual, or should I rather say idealistic, creatures, even though it is not on the surface and not always realized on a conscious level.

Of course. Money has nothing to do with it. But purpose does. A mountain bike is a very useful thing.
Regards, Evgenij

My conjecture is that the intelligence will come first, the body later. My second conjecture is that the intelligence will be an autonomous character in a video game. Right now they try to make them act clever and be aware of the game world. At some point designers may integrate verbal interaction with human player(s). Then graft them into mechanical bodies.

Polytechforum.com is a website by engineers for engineers. It is not affiliated with any of manufacturers or vendors discussed here. All logos and trade names are the property of their respective owners.