Originally posted by FZ+
And why can't computers do the same?
Because that needs the capacity for abstraction and rationalization from incomplete, uncertain and ambiguous experience, often an nth degree of speculation. Not even in the close future.
To improve an algorithm, one needs to get out of the loop and above the problem, to see the problem abstractly and the applicability of existing solutions. From 'inside', there is only one option: action-reaction type self-adaptation. That limits the possibilities enormously.
Basically, it's coupling a degree of randomness to a selector that lets the best survive. These then develop themselves to produce their own programming - often even better than a human could think up. That's programming without a programmer: machines creating their own algorithms.
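The scheme described above - randomness coupled to a selector that lets the best survive - is essentially a genetic algorithm. A toy sketch follows; the target string, the fitness measure, and all the rates are illustrative assumptions (evolving text toward a fixed target is a classroom exercise, not real self-programming):

```python
import random

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Selection criterion: number of characters matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate):
    # The "degree of randomness": each character may be replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def evolve(rate=0.05, pop_size=100, generations=500):
    # Start from pure noise.
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            return gen, pop[0]           # a survivor matched the target
        survivors = pop[:pop_size // 2]  # the selector: best half survives
        pop = survivors + [mutate(random.choice(survivors), rate)
                           for _ in range(pop_size - len(survivors))]
    return generations, max(pop, key=fitness)

random.seed(1)
generations_used, best = evolve()
print(generations_used, repr(best))
```

Nobody writes the winning string; it emerges from mutation plus selection - which is exactly the "programming without a programmer" claim, in miniature.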
Right. But do you think that's a cheap bingo? What if none survives? What if such runaway self-programming hits a dead end, a deadlock? What if the criterion for selecting the best is stupid? What if 'often' actually means 1 in a million?
Here lies complexity of another kind. We need to invent algorithms that evolve and don't reach their limits at the level of a three-year-old, but go further. That's difficult. It's like trying to give a million balls specific initial accelerations so that, a few weeks after we let them go, they form any desired geometric shape after colliding with each other zillions of times. The difficulty is not in making evolution possible; the difficulty is in making sure it doesn't stop after a few 'generations' and dissolve into random noise.
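That tuning problem - evolution that stalls versus evolution that dissolves into noise - can be seen in a toy run where only the mutation strength varies. The task (driving numbers toward zero) and every parameter below are illustrative assumptions:

```python
import random

def evolve(mutation_scale, generations=200, pop_size=50, seed=0):
    """Evolve numbers toward zero; return the best |x| at the end."""
    rng = random.Random(seed)
    pop = [rng.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=abs)              # selector: closest to zero wins
        parents = pop[:pop_size // 5]  # best 20% reproduce
        # No elitism: every individual is a mutated copy of a parent.
        pop = [p + rng.gauss(0, mutation_scale)
               for p in rng.choices(parents, k=pop_size)]
    return min(abs(x) for x in pop)

for scale in (1e-6, 1.0, 1000.0):
    print(scale, evolve(scale))
```

With mutation set far too low, the population barely moves from its random start; far too high, and selection can no longer hold on to progress, so the result stays noisy. Only the middle setting actually converges - making sure the process neither stalls nor dissolves is exactly the hard part.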
Anyway, it's a fascinating area of research. You really start looking at computers as living things.
But I think we have a theoretical beginning for how such a machine could work, modeled on our current understanding of the brain. Currently it is thought that thought is the product of a sort of mini-evolution in the brain, in terms of competing impulses and such. It's a mega-network of small, semi-intelligent parts that work together in harmony.
I'm not up to date on brain research, dunno. I only think that it's quite a bit more difficult than that. Holographic-like memory, operating not at the bit level but with abstract images of concepts, and thus to a degree insensitive to individual 'bits'. Technical/technological details are imo secondary, although the issues are not - we don't want an intelligent computer the size of the moon. By details I mean it's to a degree irrelevant how we deliver the signals; it's how they interact algorithmically that counts.
When the brain needs to solve a problem it hasn't faced before, it somehow engages parts of the brain unrelated to the problem, creating a sort of many-to-one focus. By that, it extends the problem at hand over the whole baggage of experience it has, and thereby detaches from inside the problem to 'above' it. And even though the experience it uses isn't directly applicable, it helps to force the induction, to tip the scale. After such an induction, the solution becomes part of experience, used in the future to make inductions in other areas. This way, the bigger whole is used to make progress in smaller areas. It's something like physicists borrowing experience from mathematics, or even from the behaviour of cats as a species. Sometimes unexpected analogies can help. So yes, there is evolution.
What computers lack is that critical threshold of abstract experience; they can't detach from the problem at hand, they remain 'inside'. They can't solve problems without external help.
If you think about the brain, then a lot of the experience it has comes from the external world - from books, talks, education. All that knowledge becomes part of the 'baggage' used for thinking. None of the knowledge is rock-solid; a lot of the 'baggage' can actually be crap. Can you sense the reason for 'beliefs' and religion here? It's normal, and any brain consists of beliefs; it's actually the only possible way - without that, it can't be internally consistent. If that breaks, the mind goes nuts. Human programming is basically imparting 'baggage' so that a person's belief system changes in the wanted direction. Very dangerous to consistency if converting a mind to its opposites. But that's basically what our education is. It speeds up our 'baggage' creation and makes sure it's in the same direction as the rest of mankind.
I wonder if we'll ever see a computer that 'believes' it is Napoleon..
Our current computing is based strictly on formal logic - a deterministic processor, held at each point by its programming. Extend that, and you just get more of the same, the problem being that the programs themselves can't keep up. WE can't keep up.
When is parallel computing successful, then? When we exploit the fact that such subdivisions exist. Notice the tremendous success of the SETI@Home project. For the success of AI, we need
(a) processors with a degree of randomness - perhaps quantum computing or fuzzy logic could be the key,
(b) processors that function individually, but can communicate, and
(c) a whole new dynamic to programming - no longer instructions, but situations to react to.
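Point (c) - reacting to situations rather than executing one fixed instruction stream - can be sketched with independent workers pulling events from a shared queue, which also touches on point (b). The situations and handlers below are invented purely for illustration:

```python
import queue
import threading

events = queue.Queue()
results = []
lock = threading.Lock()

# Each named situation maps to a reaction, not to a step in a master script.
HANDLERS = {
    "temperature_high": lambda v: f"cooling to {v - 10}",
    "temperature_low":  lambda v: f"heating to {v + 10}",
}

def worker():
    # Workers run individually and react to whatever situation arrives.
    while True:
        situation, value = events.get()
        if situation == "stop":
            break
        with lock:
            results.append(HANDLERS[situation](value))

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()

events.put(("temperature_high", 90))
events.put(("temperature_low", 40))
for _ in threads:
    events.put(("stop", None))  # one stop signal per worker
for t in threads:
    t.join()

print(sorted(results))
```

No worker is told in advance what will happen or in what order; the program is just a set of reactions, which is the shift in programming style being argued for.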
SETI is not a good example. It is a brute-force approach. It scales well, but has very little capacity for intelligence. On the rest I agree with you, but with a few reservations:
I don't think any randomness is needed; the uncertainty of the external world is enough. Fuzzy logic is imo indeed a key, and quantum computing too - but not because of uncertainty; rather because we need an enormous amount of computing power in crazily small volumes. Quantum superpositions give specific benefits that can't be mimicked well with digital computers. Individual processors aren't required either; it's more the processes that count.
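To show what fuzzy logic buys over strict formal logic: a value belongs to a set to a *degree* between 0 and 1 instead of a hard true/false. A minimal sketch, with an invented 'warm' temperature set:

```python
def warm(temp_c):
    """Triangular membership: fully 'warm' at 25, fading to 0 at 15 and 35."""
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10
    return (35 - temp_c) / 10

def fuzzy_and(a, b):
    # One common choice of t-norm for fuzzy conjunction.
    return min(a, b)

def fuzzy_not(a):
    return 1.0 - a

# 25 degrees is fully warm; 20 and 30 are warm only to degree 0.5.
print(warm(25), warm(20), warm(30))
```

The point is that nothing snaps from false to true at a threshold, so a fuzzy controller degrades gracefully with uncertain inputs - closer to how the external world actually presents itself.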
And yes, that programming. It's the major beast to conquer - a complete change in our thinking about programming, even if underneath it may still be old-style instructions. It's just too much programming, and too complex for an ordinary mortal to grasp.
Anyway, our PCs are nowhere near all of the above; they are dumb. Chess engines are extremely limited in scope, always 'inside', and thus only adaptive, not creative. I believe quantum computing would be the kicker for computer intelligence, mainly because it's impossible to apply classical programming there, so the new kind of programming will be forced to make sharp advances.