I want to create an android

Why would you want to do that? That is the main question...

This is not a goal that can be achieved by one person, but only by humanity as a whole over many generations. However, human civilization does not make things just for the fun of it.

Every invention requires an "energy investment" and therefore a purpose to justify it. Non-autonomous AI (computers + software) has a clear purpose: to help people achieve the goal of Life - sustained acceleration of entropy increase. This is AI which is integrated into humanity and catalytically active only in conjunction with people, not on its own. Together with people it forms a combined, advanced life-form, but not a new independent life form.

What you are suggesting is to create AI that would be a self-sufficient life form, i.e. would compete with people for resources, i.e. stuff whose entropy can be increased. This is not only unnecessary but also a dangerous effort which should be discouraged and even punished. Fortunately, humanity is not going to provide energy for generating something which has no use whatsoever, and therefore it will not be created at this stage.

At a later time, when the human+AI life-form has become so powerful that creation of autonomous AI can be done with a negligible amount of energy, it will be created just for the fun of it, as a side result of something else. But at that time the human+AI hybrid Life will be sufficiently powerful to be able to control the autonomous AI. Until then, human+AI life will not be so different from AI by itself.

In fact, it might be that the human+AI hybrid life will eventually transform into pure bio-free AI just for efficiency reasons, but that will literally be an evolutionary transformation rather than the "creation" of something from scratch.

Regards, Evgenij

Reply to
Evgenij Barsukov

Are you kidding? *Most* of the best inventions come from wanting to do something, and thinking about selling it or making money second. Think about napster. Think about GNU software.

The mountain bike was a hack of a normal bike before it became marketable. UNIX was written to run a game with no thought of making money.

Because it *isn't* there is sometimes enough.

Reply to
mlw

Actually, there were (are?) some primitive bacteria that evolved bearings. Notice that the biological systems you know (or that did evolve) are not the only ones that could possibly evolve.

Whether changes are "dramatic" or not depends only on your subjective perception of time. Since evolution is mainly based upon mutation (+ crossover) and natural selection, it's blind and takes time. Of course, from the point of view of our human perception of time, conscious engineering is much faster. Also, many or most products of evolution might be dead ends of sorts. But I wouldn't make the statement that something "is not possible by evolution" as such.

I wouldn't say that. Artificial neural networks are merely primitive substitutes for biological brains that can be used (like any AI algorithm) to solve some rather simplistic tasks (and often do it better than humans, but that applies to almost any computation that can be run on a computer). Also, ANNs are only one of the approaches/methods that attempt to explain how human reasoning works.

I'm not sure what you mean by symbolic algorithm, but artificial neural networks (like any other heuristic algorithms) are capable of confronting problems for which no classical/deterministic algorithm that would run in acceptable time exists (either none is known or none is possible to create). Of course the results are only suboptimal solutions, but that's always better than nothing.
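To make that point concrete, here is a tiny sketch in Python (my own toy example, not anything from the thread: the task is number partitioning, and the step count is arbitrary). A blind local search returns a usable but suboptimal split of 60 numbers into two halves of nearly equal sum, where an exact search over all 2^60 splits would be hopeless:

    import random

    def partition_difference(nums, mask):
        # absolute difference between the two subset sums encoded by mask
        a = sum(n for n, m in zip(nums, mask) if m)
        b = sum(n for n, m in zip(nums, mask) if not m)
        return abs(a - b)

    def hill_climb(nums, steps=10000):
        mask = [random.random() < 0.5 for _ in nums]   # random initial split
        best = partition_difference(nums, mask)
        for _ in range(steps):
            i = random.randrange(len(nums))            # move one number to the other side
            mask[i] = not mask[i]
            diff = partition_difference(nums, mask)
            if diff <= best:
                best = diff                            # keep an improving (or equal) move
            else:
                mask[i] = not mask[i]                  # undo a worsening move
        return best

    nums = [random.randint(1, 10**6) for _ in range(60)]
    print("difference found:", hill_climb(nums))       # small, but not provably optimal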

That's probably the greatest disadvantage of neural networks: they work as black boxes. Once constructed, it is very hard (or impossible) to extract the human-readable rules upon which a given neural network is implicitly based. That is true even for small networks consisting of a few "real" neurons.
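A tiny illustration of that black-box point (my own sketch, assuming numpy is available; the network size and learning rate are arbitrary): a handful of neurons can learn XOR, but what they have "learned" is only a table of weights, nothing resembling a readable rule.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR

    H = 8                                              # a handful of hidden neurons
    W1 = rng.normal(size=(2, H)); b1 = np.zeros(H)
    W2 = rng.normal(size=(H, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):                             # plain gradient descent on squared error
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

    print(np.round(out, 2))   # on most runs: close to 0, 1, 1, 0 -- so it "knows" XOR
    print(np.round(W1, 2))    # ...but the knowledge is just these numbers,
    print(np.round(W2, 2))    # not a human-readable rule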

Reply to
Grzegorz Wrób

My conjecture is that the intelligence will come first, the body later. My second conjecture is that the intelligence will be an autonomous character in a video game. Right now they try to make them act cleverly and be aware of the game world. At some point designers may integrate verbal interaction with the human player(s). Then graft them into mechanical bodies.

Reply to
rick++

So consciousness is subjective but intelligence is not?

~v~~

Reply to
Lester Zick

As the AI and robotics community is probably not really interested in creating one, the most interesting question here is really:

What do _YOU_ think has the best chance of being helpful in achieving this?

Claudio

Reply to
Claudio Grondi

Yes, you could simulate my 'licence-plate-detector network' with a symbolic algorithm, but it would be even less understandable than a network. You could then also say that everything written in a computer language is symbolic. I would say that your 'symbolic algorithm' would fall in the sub-symbolic category.

But you would have to think of everything! Your robot has to get his coat before going outside, except when the house is on fire, when it's hot outside, when ...........

I would say it's impossible to think of everything; let genetic algorithms sort it out.
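For what it's worth, here is a minimal Python sketch of that idea (the 40-bit "behaviour" and its fitness function are toy stand-ins I made up, not a real robot controller): encode a candidate behaviour as a bit string, score it, and let selection, crossover and mutation do the searching instead of hand-writing every rule.

    import random

    GENES = 40
    def fitness(bits):                      # toy fitness: how many bits are "right"
        return sum(bits)

    def crossover(a, b):
        cut = random.randrange(1, GENES)    # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(bits, rate=0.02):
        return [b ^ (random.random() < rate) for b in bits]

    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(100)]
    for generation in range(200):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == GENES:        # good enough -- stop early
            break
        parents = pop[:20]                  # keep the fittest fifth
        pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                         for _ in range(80)]

    best = max(pop, key=fitness)
    print("best behaviour found scores", fitness(best), "out of", GENES)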

Reply to
bob the builder

I meant it much more philosophically than just the "money" merit of it. The concepts of "purpose" and "meaning" go deep into our mental conditioning, and so even when we spend huge amounts of time on things that have no monetary merit or social reward, we still do it according to our internal purpose. We are very, very spiritual, or should I better say idealistic, creatures, even though it is not on the surface and not always realized on a conscious level.

Of course. Money has nothing to do with it. But purpose does. The mountain bike is a very useful thing.

Regards, Evgenij

Reply to
Evgenij Barsukov

Why wouldn't it be possible? Admittedly, there isn't much in the way of AI, but programming a mimic of human behavior is not all that difficult. Humans highly overrate the complexity of the behavior of the average human. Seems to me, to make it credible, you'd have to make it act sort of stupid, like human behavior. :-) So, the only problem is the software.

As for neural nets - I expect that is just a passing fad, like bubble memory. Neural nets were an exercise in the study of how neurons work; not a very practical solution to AI, imho.

Reply to
Stuart Grey

Intelligence can be clearly and concisely defined; consciousness cannot.

Intelligence: The ability to formulate an effective initial response to a novel situation.

Consciousness is an internal trait, intelligence is an external, observable trait. If a system behaves intelligently (solves problems it hasn't seen before), then it is intelligent.

Intelligence can be measured. Human level AI will be achieved when a computer can consistently pass the Turing Test against a skilled interrogator.

Intelligence can exist in degrees. A dog is more intelligent than a fish, but less than a monkey. But does a dog, monkey or fish have consciousness? Different people and different cultures disagree. It is like asking if they have a soul.

Reply to
Bob

Well, I agree with the point you are trying to make here, but I just want to take the time to point out that your history is a bit off. Man had been building kites that could fly without flapping wings for something like a thousand years before the Wright brothers. Man had been flying in balloons for over a hundred years before the Wright brothers. Other people had already published books about the physical characteristics of airfoils before the Wright brothers built their first glider. The Wright brothers were simply the first to take all this knowledge about flight and build the first practical (i.e. steerable) powered aircraft.

I agree that makes for good dreams. But the brain is solving a large parallel data processing problem. It takes thousands of parallel sensory data streams, mixes and matches them in some complex way, and produces hundreds of parallel output data streams. You can't do that with anything other than what logically has to be a large parallel signal processing network. So neural networks are not just a dream; they are in fact the way this problem must be solved. It's only a question of what type of network(s) and network processors (nodes) are needed to solve this problem correctly.

However, because of the very advantages you point out about the hardware we have to work with, we might actually build that large parallel network on a sequential processor which emulates the function of the network, instead of building a physical network. So the physical implementation may be very different, but the logical function in the end has to be the same.
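As a rough illustration of what I mean by emulating the network (a toy sketch, not anybody's actual design; the network size and random wiring are arbitrary): compute every node's next output from the previous tick's outputs, so a single CPU steps through the nodes one at a time, yet logically they all fire at once.

    import math, random

    N = 50
    weights = [[random.gauss(0, 0.3) for _ in range(N)] for _ in range(N)]  # random wiring
    state = [random.random() for _ in range(N)]                             # node outputs

    def tick(state):
        # one synchronous update of the whole network: every new value
        # reads only the *old* values, so update order doesn't matter
        return [math.tanh(sum(w * s for w, s in zip(row, state)))
                for row in weights]

    for t in range(10):                     # run the "parallel" network sequentially
        state = tick(state)
    print(state[:5])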

Reply to
Curt Welch

I agree with the point you are making, but you have a few slight errors there. You can't build everything out of NAND gates, because you can't create a clock out of NAND gates. You can't even build a computer out of NAND gates alone because of the timing problem.

So, since you have ignored this very important concept of time when you talk about NAND gates, how are you addressing it when you use this concept of "symbolic algorithm"?

Typical computer programs are timeless. That is, they assume the idea of sequence, but they have no concept of time encoded into the source code. And this is a big issue, because human intelligence is heavily dependent not just on symbolic sequence, but on time itself. Symbolic sequence alone is never enough to create intelligence; you need timed symbolic sequences. Timing is everything in intelligence. You can't talk, or walk, or hammer a nail, unless you get the timing right. It's not just what symbol you produce next which is important, it's the exact time at which the symbol is produced. If you don't look at, understand, and solve this complex timing problem, you can't begin to solve AI.
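To put the distinction in code (a toy sketch of mine; the hammer-swing symbols and timestamps are invented purely for illustration): in a timed symbolic sequence, every symbol carries the moment it must be produced, not just its position in the order.

    import heapq

    # (time in seconds, symbol) -- the times are part of the behaviour itself
    events = [(0.00, "raise"), (0.35, "swing"), (0.48, "strike"), (0.60, "raise")]
    queue = list(events)
    heapq.heapify(queue)                    # ordered by time, not by insertion

    clock = 0.0
    while queue:
        t, symbol = heapq.heappop(queue)
        # a timeless program would just emit the symbols in order;
        # here the gap between clock and t matters as much as the symbol
        print(f"wait {t - clock:.2f}s, then emit {symbol!r}")
        clock = t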

So just be careful about oversimplifying the problem of AI down to nothing but NAND gates and symbol processing. Doing that can cause one to forget how time dependent the problem is.

Reply to
Curt Welch

The entire field of AI has been trying to answer that question for over 50 years now and no one yet has the answer. There's not even a consensus on which direction to take or how long it's going to take. But, that doesn't stop people from forming opinions and I've got mine which I'll share with you.

Yes, some form of signal processing network, such as neural networks, is, I believe, going to be required to solve the problem. However, most neural networks today aren't likely to be the answer - it's likely to be some new type of neural network.

I, for example, have been playing with real time temporal pulse routing networks and I think they hold a lot of promise. But that's just one example of how the answer might be some type of signal processing network which is really unlike any of the more traditional ANNs.

There's been growing excitement in pulse networks of all types in the last decade. But is this going to be the answer, or just one more small step along the way? No one will really know until the job is done.

I also strongly believe that AI is a reinforcement learning problem at its core. So in addition to building a real time signal processing network, it must be trained with reinforcement learning. So that's the path I think is going to lead to a Data-Bot. It's a real time, temporal, reinforcement trained signal processing network.
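To give a flavour of what reinforcement training of a signal processing element can look like (a bare-bones sketch, emphatically not the pulse-routing network above; the tracking task and every constant are invented): the element sees only a scalar reward, jitters its weights, and reinforces the jitter in proportion to how much the resulting reward beat its running average.

    import random

    weights = [0.0, 0.0]
    baseline = -1.0                                               # running average of reward
    for step in range(20000):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]        # incoming signals
        target = 0.8 * x[0] - 0.3 * x[1]                          # unknown to the learner
        noise = [random.gauss(0, 0.1) for _ in weights]           # exploratory jitter
        out = sum((w + n) * xi for w, n, xi in zip(weights, noise, x))
        reward = -abs(out - target)                               # scalar feedback only
        weights = [w + 0.2 * (reward - baseline) * n              # reinforce jitter that
                   for w, n in zip(weights, noise)]               # beat the running average
        baseline += 0.01 * (reward - baseline)
    print("learned weights:", [round(w, 2) for w in weights])     # should drift toward 0.8 and -0.3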

I also think we are much closer to seeing some real intelligence coming out of machines than most people do. I've recently made another 10 year bet with some friends, for example. These were the same friends I made a 10 year bet about AI with back around 1974 - which I lost. I don't intend to lose this time. :)

I'm sure that we already have all the processing power we need to create human level intelligence - we just don't understand enough about the algorithms yet.

What I think we can produce in these next 10 years is a demonstration of machine learning that will be advanced enough to make people stand up and take notice for the first time in history. The type of machine that will actually look and act alive and conscious - one with maybe dog-like behavior if not monkey or human-like behavior. One where the behavior is all learned, and not simply hard-coded by some animation artist. We might even get it to learn and use language to some level in the next 10 years.

Recreating all the complexities of human personalities however is going to take longer. Maybe 50 years. So it might be 50 years before something like Data or C3P0 is walking around with us. But within 10, I think we will have intelligent conscious machines which will be smart enough for people to really start to understand that AI is not just something for sci-fi stories. And advanced enough to make people realize it's time to start talking seriously about what this technology means to the future of mankind.

And actually, if we get as far as I expect we will in the next 10 years, then that might create so much excitement in the field that it will only take another 10 after that to reach, and go past, full human intelligence. I would like that because it would mean I would have a chance to be alive to see it happen. That for me would be more fun to see than the moon walk.

BTW, the people who think evolution based systems (alife etc.) are the answer have the right idea, but the wrong approach. Reinforcement learning systems are the right approach. Reinforcement learning and evolution based learning are basically the same idea with different implementations. They both create systems of directed change which evolve the system towards a goal, and that is the basis of what intelligence is (the world and life look like they were designed by something intelligent because they were - evolution is intelligence working at a geologic time scale). But DNA-like evolution, which works with replication, mutation, and natural selection, is too slow to use as a tool for creating an intelligent machine. We can solve the problem faster using our brains. And that solution is to find a signal processing network that can be trained by reinforcement learning (i.e., one which slowly shapes its signal processing behavior based on real time reward and punishment events).

Reply to
Curt Welch

While I agree with the first half of the sentence, I have trouble accepting the second half. From my point of view, the core of the problem is more of a political than of a technical nature. It is a question of money and purpose, not a question of new, unknown technology.

I don't bet, because I don't like to, but if I did, I would bet against you. I mean, we already have such technology today, so either you have already won your next 10-year bet even from today's point of view, or you won't win in 10 years either, for the same reasons as today.

Even if it became possible to gather all the enthusiasts like you together, where would they get the money necessary to make it come true? Most people consider it much more important to buy a new car or travel around the world than to donate their money to space or AI research. Try to motivate young people who are now 16-20 years old - and who have a much more realistic chance of making it come true than you yourself do - to become as enthusiastic as you are, and you will probably see where the true obstacles on the way to androids lie. Try telling your wife (if you have one) that you would like to replace her with a more intelligent and more sportive android in the future, and it may become apparent that not only is there no need to create such an android, but there are strong forces against making it a reality.

If there is enough intention, there will surely also be a way. I currently see the problem not so much in finding the way as in spreading and creating the intention.

Claudio

Reply to
Claudio Grondi

LOL. You must be with the NSA and they've told you about one of their top secret black projects. They know their secret is safe with you because nobody's going to believe you. ahahaha...

Louis Savain

Why Software Is Bad and What We Can Do to Fix It:

formatting link

Reply to
Traveler

Is it a typical American thing that the government is secretly trying to produce AI and is actually ahead of the normal scientific community? (don't answer that :P ) I do think there is some kind of money/motivation thing. If the world population really wanted a mining facility on Mars, we would have had one yesterday.

Reply to
bob the builder

Whether intelligence can be clearly and concisely defined has no bearing on whether it is subjective. I think what you mean to say is definitions of consciousness are subjective. But I don't see anything to indicate your definition of intelligence is not subjective.

So what makes this definition not subjective? Consciousness could undoubtedly be defined in comparable terms.

Well it may be like asking if they have a soul but I don't see that intelligence is any less subjective as a mechanism. You're arguing intelligence can be measured but measures of intelligence are every bit as subjective as people want to make them.

~v~~

Reply to
Lester Zick

Definition is always subjective. Give me an example of a "non-subjective" definition.

To give a definition a certain objective flavour, we could bring in "experimental material" showing that, statistically, one definition was used more often historically than others (for example in journal articles). But strictly speaking that does not make it less subjective than the others; it just makes it more common.

Regards, Evgenij

Reply to
Evgenij Barsukov

Some of the enthusiasts happen to be rich. Like Jeff Hawkins - the guy behind the Palm Pilot. He's set up and funded his own research centers.

That's only true of basic research projects which have a low probability of producing anything of short-term practical use - which is currently true for both space and a lot of AI. However, both have produced spin-off technologies that are of practical use and which do get a lot of funding (character recognition used by banks and the post office, for example).

If anyone got the technology to the point of producing anything like actual dog intelligence, there would be a huge flow of money into the research.

Oh, don't be silly. My wife would love to get her own android assistant as much as I would. She's always said she wants her own wife. We would own 100 of them if they existed, and we could afford them, and we could build them in a way that they would actually want to work for us without us having to pay them what we would have to pay humans to work for us.

Even if human intelligence and slavery were incompatible, we would love to have android pets. I've already got a few of them, but I want ones that are as interesting and as loving as my dog and my cat as well.

There's plenty of interest and funding in AI already. Have you not seen things like the DARPA challenge, where they paid millions to whoever could build an automated car? AI funding and research have only continued to grow over the years as the technologies slowly develop.

It could even be argued that all the growth of computer technology is already AI research. And it's one of the largest single investments, other than people, that companies these days have to make. It's going to morph at some point into a pure AI business where the PCs sitting on the desks of all the office workers will actually take over the jobs of the people sitting at those desks (it's already been happening for the past 60+ years).

Reply to
Curt Welch

I think you're confusing the origin of definition with the definition itself. All definitions originate mechanically in subjective terms. That doesn't mean the content of the definition is subjective. Science requires objective definition. Just saying consciousness is so and so doesn't cut it in science.

Lack of demonstrability is what makes a definition subjective whether you draw on historical or popular sources.

~v~~

Reply to
Lester Zick
