Will humans still be relevant in 2500?


by DiracPool
Tags: 2500, human, relevant
DiracPool
#1
Jan27-13, 12:13 AM
My vote is No. Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute. In a thousand years, machines are gonna look back on 2012 in awe. A thousand years seems like a long time, and it is at the pace of current technological progress, but remember that it took anatomically modern humans 160,000 years to get it together to spit-paint a bison on a cave wall after they already looked like us. What does that mean? I wanna hear your opinion.

The foundations of human cognition are already known, Piaget taught us that, and the technology exists to press this operationalism into electronics, so it is just a matter of time. And not a lot of it; I am working with a team trying to push this forward now. Once it happens, it won't be long before humans define the term "superfluous". What's gonna happen to us then? I think we will be cared for by our robotic superiors just like we take care of grandma, but I don't think that what the future holds is some kind of Star Trek fantasy. What's your vote?
Evo
#2
Jan27-13, 12:28 AM
Mentor
We have rules against overly speculative posts. We have had "what if" threads like this before; they serve no purpose. I'll allow this for a while to see what happens, but I need to remind everyone that any guess needs to be rooted in today's mainstream science. Consider ethics, costs, education, and financial circumstances. Certain cultures aren't going to allow it, remote areas won't have access, and on and on.
collinsmark
#3
Jan27-13, 03:50 AM
HW Helper
PF Gold
Our future robot overlords will be pleased with your post.

bp_psy
#4
Jan27-13, 04:11 AM
Quote by DiracPool:
My vote is No. Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute.
They don't contribute what, exactly, to what, exactly? And why should anyone care?
Czcibor
#5
Jan27-13, 05:36 AM
Quote by collinsmark:
Our future robot overlords will be pleased with your post.
We wouldn't have future robot overlords. They would be future cloud-computing overlords ;)

(yes, we're used to dealing with separate, individual human beings, while any software seems more abstract)

Now the difference between human and computer is clear; in the future, however, that might not be so. What about the case of having a chip in your brain with advanced functions? OK, that's only data access. What if you use it to make calculations? What if you can order the chip to stimulate your brain to improve your mood or become more concentrated on your job? What if there are plenty of electrodes, and they are used to boost brain capabilities because part of your neural net is actually simulated by a computer? What if your body reaches its best-before date and you decide to back up your memory and migrate fully to a computer?

At which step did you cease being human and become a computer?
Jimmy Snyder
#6
Jan27-13, 05:54 AM
According to a research report by Zager and Evans (1969), we should be good till the year 10,000.
DiracPool
#7
Jan27-13, 06:07 AM
Quote by bp_psy:
They don't contribute what, exactly, to what, exactly? And why should anyone care?
I have to tread lightly here, because I've been told the science has to be sound, and this science is sound and I can prove it. The human brain works at 10 Hz, from pole to pole, back to front; local geometry (like within the primary visual or auditory cortex) runs at 40 Hz. Inter-areal dynamics proceed in what we call the beta band, at about 20-30 Hz; these are interactions between cross-cortical regions like V1-V5 or TEO to TE, or even from the medial forebrain, etc., to subcortical regions involved in salience processing.

But the general deal here is that the global frame rate of human cognition is 10 Hz; this is a limitation of biological neurons due to volume-conduction issues and membrane permeability. However, I don't think there's any reason that in a fabricated device we can't take this 10 Hz reality of human thought and turn it into 1 megahertz, or even more. What would that mean? It would mean that this device could read the entire Library of Congress in a few seconds and could live the entire existence of humans from the time of Homo erectus in perhaps the time it takes you to eat lunch. What's that gonna mean when it happens? And it probably is gonna happen, because I personally am working on it, and nobody, scientists at least, seems too interested in de-evolving this progression.
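The scaling claim above can be sanity-checked with quick back-of-envelope arithmetic. The 10 Hz and 1 MHz figures are the post's own; the ~1.9-million-year age of Homo erectus is a conventional rough estimate:

```python
# Back-of-envelope check of the claimed cognitive speedup.
biological_rate_hz = 10        # the post's claimed global "frame rate" of human cognition
device_rate_hz = 1_000_000     # the post's hypothetical 1 MHz fabricated device

speedup = device_rate_hz / biological_rate_hz
print(f"speedup factor: {speedup:,.0f}x")

homo_erectus_years = 1.9e6     # rough conventional age of Homo erectus
subjective_years = homo_erectus_years / speedup
print(f"wall-clock time to 'live' ~1.9 My of history: {subjective_years:.0f} years")
```

At a 100,000x speedup, 1.9 million years compresses to roughly 19 years rather than a lunch break, so the last claim would need a device several more orders of magnitude faster than 1 MHz.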
Sayajin
#8
Jan27-13, 06:17 AM
For sure we are not talking about our current computers. Nowadays they can hardly be compared to the intellect of insects, and the minimal size of transistors will probably be reached in 20-30 years (22 nm technology currently).
DiracPool
#9
Jan27-13, 06:40 AM
Quote by Sayajin:
For sure we are not talking about our current computers. Nowadays they can hardly be compared to the intellect of insects, and the minimal size of transistors will probably be reached in 20-30 years (22 nm technology currently).
This is exactly my point, Sayajin, and where, IMO, the confusion is, so thanks for bringing it up. I'm too lazy to look it up now, but I read an article in some popular science magazine a couple of years ago about a guy who somehow finagled several million dollars out of the US government (or the UK) to build a supercomputer designed to mimic the behavior of individual neurons in the modeling of cognition and consciousness. Well, they think they're approaching the cognitive capacities of a rat or an insect. I'm not quite sure, again, I'd have to look it up, but even if they think they did, that's not gonna help anybody very much.

The point is, you are right with what you say if you think that the brain is best modeled in this fashion, using the traditional Hodgkin and Huxley or even an '80s PDP approach. This is not the way to go, though; the brain is a chaotic system, and we can simplify the dynamics and, I believe, get it going at far more than 10 Hz with current technology. I know it CAN happen; it's just a matter of who can put a project together that actually does it. But I think this is inevitable, and probably sooner rather than later.
Sayajin
#10
Jan27-13, 07:49 AM
Quote by DiracPool:
This is exactly my point, Sayajin, and where, IMO, the confusion is, so thanks for bringing it up. I'm too lazy to look it up now, but I read an article in some popular science magazine a couple of years ago about a guy who somehow finagled several million dollars out of the US government (or the UK) to build a supercomputer designed to mimic the behavior of individual neurons in the modeling of cognition and consciousness. Well, they think they're approaching the cognitive capacities of a rat or an insect. I'm not quite sure, again, I'd have to look it up, but even if they think they did, that's not gonna help anybody very much.

The point is, you are right with what you say if you think that the brain is best modeled in this fashion, using the traditional Hodgkin and Huxley or even an '80s PDP approach. This is not the way to go, though; the brain is a chaotic system, and we can simplify the dynamics and, I believe, get it going at far more than 10 Hz with current technology. I know it CAN happen; it's just a matter of who can put a project together that actually does it. But I think this is inevitable, and probably sooner rather than later.
Actually, they have been trying to simulate the behaviour of rat neurons for quite a long time now. The thing is that although they can simulate the signal transmission between them, nobody knows what the signals mean or what functions they are responsible for. As far as I know, the only time scientists managed to link certain "neuro-wiring" to certain behaviour was with the fruit fly, and only in a single case. Nobody can actually tell you how the fruit fly thinks on the most basic level. It's much like doing something you don't actually understand; that's the point, they are trying to figure out how this works.
On the other hand, consciousness (absolutely nothing known here) and long-term memory (neural connections are constantly changing, so where is long-term memory stored, given that signal transmission is temporary?) are very controversial. They are all about speculation, and there isn't a single theory that everybody agrees with.

Maybe someday they will understand all this, but even then I doubt that machines will become smarter than us. The best thing we can do is build machines as smart as us, which, if they are conscious creatures that learn new knowledge, could start evolving much faster than us. But if we are able to do this, we would probably know some way to become smarter as well. At the end of the day, all I said is pure speculation, and I think none of us will know the answer to these questions during our lifetime.

I've always been very interested in the way living creatures work compared to the way computers work. For example, look at some little kid. Tell him to give you a cup of tea. The kid will do it without any problems. Now try to imagine building a robot that will help old or sick people, for example by giving them tea when they want it.
If you give it only a single type of cup, and the cups are always in the same place, it will be easy work.

Now imagine the robot in realistic conditions.
First of all, the robot has to recognize the cups (they vary in shape, colour, and size). This will be the hardest part: image recognition is one of the hardest things for machines. Nowadays there are many types of software that try to recognize faces, for example. The way they do it is by measuring certain features of the human face (the distance between the eyes and such); since every face is different, the software can recognize a face by matching those measurements against a database of faces. But because this also depends on the angle of the picture, 3D images are now used to improve the algorithm.

The thing with cups is much more complex than this. The robot must know some way to grab the cup without breaking it or spilling the liquid. It has to know whether the cup is made from glass, plastic, or some other material: a single-use cup can easily be deformed if the robot grabs it with too much force, while a heavy one can be dropped. After that, it should know whether the tea is too hot to give to the person (that part is actually very easy). This simple task turns out to be very complex even for modern computers. If a human tried to do the calculations a normal PC can do in a second, it would take him years; yet he can do tasks which are too hard for the machine.
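The feature-measuring approach to face recognition described above can be sketched as a toy nearest-neighbour lookup. Every name and number below is invented for illustration, not taken from any real system:

```python
import math

# Toy feature-based face matching: each face is reduced to a vector of
# measurements (eye distance, nose length, mouth width in arbitrary units),
# and a query is matched to the closest entry in a database.
database = {
    "alice": (6.2, 4.8, 3.1),
    "bob":   (6.9, 5.5, 3.6),
    "carol": (5.8, 4.4, 2.9),
}

def match(query):
    """Return the name whose feature vector is nearest (Euclidean) to `query`."""
    return min(database, key=lambda name: math.dist(database[name], query))

print(match((6.1, 4.7, 3.0)))  # closest to alice's measurements
```

Real systems face exactly the complications the post lists: the measured features shift with pose and lighting, which is why angle normalization and 3D models are layered on top of this basic idea.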

On the other hand, even simple animals have very complex behaviour. They can determine in a few seconds whether another animal standing next to them is dangerous. They can search for food in realistic, changing environments and adapt to them. Even some single-celled organisms show very interesting behaviour.
Kholdstare
#11
Jan27-13, 08:30 AM
To answer your question - NO.
mfb
#12
Jan27-13, 10:57 AM
Mentor
What do you mean by "humans"?

Will the influence of biological humans be relevant by 2500? Sure; even if we go extinct and do not leave any robots or similar behind, our influence on the landscape and climate will remain for a while.

Will there be life which sees itself as human (or as descendants of humans)? If we do not go extinct together with our technology, I would expect this:
- If mind uploading becomes possible, it allows humans to become immortal. There are humans who would use this opportunity. And who wants to replace (!) himself with something not considered human?
- If mind uploading does not become possible, and no powerful AI manages to kill us all, I don't see why humans should stop reproducing.
FreeMitya
#13
Jan27-13, 11:35 AM
It's not for one person to say, but I'm not sure the very devoted AI scientists are considering the protestations that such research might spark. How many people do they expect will be willing to let engineers, if such capabilities are attained, create a synthetic species that would eventually push humans into irrelevancy? Don't underestimate our pride. Now, do I think it's possible for machines to replace humans in theory? Sure. Do I think it will happen? Not as soon as others seem to think, when I consider the social issues that will arise.

Now, this is coming from somebody who is certainly not "in the loop" when it comes to such a subject, so keep that in mind when you consider how best to dispose of my argument and me.
zoobyshoe
#14
Jan27-13, 11:40 AM
Quote by DiracPool:
My vote is No. Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute.
What they contribute to what?
Number Nine
#15
Jan27-13, 12:48 PM
Quote by DiracPool:
I have to tread lightly here, because I've been told the science has to be sound, and this science is sound and I can prove it. The human brain works at 10 Hz, from pole to pole, back to front; local geometry (like within the primary visual or auditory cortex) runs at 40 Hz. Inter-areal dynamics proceed in what we call the beta band, at about 20-30 Hz; these are interactions between cross-cortical regions like V1-V5 or TEO to TE, or even from the medial forebrain, etc., to subcortical regions involved in salience processing.
...what? Claiming that the human brain "works" at 10 Hz is so grossly oversimplified as to be nonsense. Certainly it doesn't refer to individual neurons, which can fire at frequencies far greater than 10 Hz. We're then left with large (or small) scale oscillations between different brain regions or local clusters of neurons. In that case, you see spontaneous oscillations of many different frequencies (all the way from delta to gamma).
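For reference, the conventional EEG frequency bands spanned by "delta to gamma" can be tabulated as follows; the boundaries are approximate and vary somewhat by source:

```python
# Approximate conventional EEG frequency bands (boundaries vary by source).
eeg_bands_hz = {
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 12),    # contains the ~10 Hz rhythm the earlier post singles out
    "beta":  (12, 30),   # the 20-30 Hz inter-areal range mentioned above
    "gamma": (30, 100),  # includes the ~40 Hz local oscillations
}

def band_of(freq_hz):
    """Return the band whose half-open range [lo, hi) contains freq_hz, else None."""
    for name, (lo, hi) in eeg_bands_hz.items():
        if lo <= freq_hz < hi:
            return name
    return None

print(band_of(10))   # alpha
print(band_of(40))   # gamma
```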

Quote by Sayajin:
The thing is that although they can simulate the signal transmission between them, nobody knows what the signals mean or what functions they are responsible for.
This severely underestimates our current knowledge. Our understanding of the neural mechanisms underlying reinforcement learning in humans, for instance, is extensive.
dlgoff
#16
Jan27-13, 02:04 PM
Sci Advisor
PF Gold
Quote by Evo:
We have rules against overly speculative posts. We have had "what if" threads like this before; they serve no purpose. I'll allow this for a while to see what happens, but I need to remind everyone that any guess needs to be rooted in today's mainstream science. Consider ethics, costs, education, and financial circumstances. Certain cultures aren't going to allow it, remote areas won't have access, and on and on.
Hands Evo 2500 GOOBF cards.

Ryan_m_b
#17
Jan27-13, 02:26 PM
Mentor
Quote by FreeMitya:
It's not for one person to say, but I'm not sure the very devoted AI scientists are considering the protestations that such research might spark. How many people do they expect will be willing to let engineers, if such capabilities are attained, create a synthetic species that would eventually push humans into irrelevancy? Don't underestimate our pride. Now, do I think it's possible for machines to replace humans in theory? Sure. Do I think it will happen? Not as soon as others seem to think, when I consider the social issues that will arise.

Now, this is coming from somebody who is certainly not "in the loop" when it comes to such a subject, so keep that in mind when you consider how best to dispose of my argument and me.
I expect legal and ethical debates to rage like never before if conscious software ever approaches reality. Forget the Terminator-style arguments and just think of the very basic headaches. At what point does such software gain rights, either limited like those of animals or equal to those of humans? If it has rights, then by creating one do you have to offer the same support as you would a child; in other words, would labs and businesses be expected to pay the equivalent of child maintenance to a software project? Does shutting off the machine the software runs on without consent constitute assault? Does deleting it constitute murder? If genetic algorithms are used, does that constitute mass murder, or even genocide, for the large number of variants that were filtered out? If human-equivalent rights are awarded, how would a democracy function when one voter can copy and paste himself until he is the majority? And so on.

Biologists have been dealing with the ethical and social ramifications of their work for a while as a distinct field of study (bioethics) but nothing would compare to the scope of discussion needed if we ever get close to conscious software.

All of that is a huge discussion on its own, and it illustrates that even starting from the premise that conscious, artificial, generally intelligent entities are possible, there could be derailing social factors. However, another facet of this is the premise that such entities are even desirable in the first place. When we talk about AGI we tend to think of little more than digital humans; this is rooted in the understanding that we're talking about the need/want for an entity with general intelligence comparable to a human, so that it can pretty much do anything a human could do with equivalent training. However, I've always found odd the assumption that such an entity would also come with consciousness, ego, emotion, etc. It seems very anthropocentric to assume that because we're entities of (relatively) high general intelligence and we're conscious, emotional beings, any generally intelligent entity would be conscious and have emotions. I don't see that this is necessarily true. It seems more likely to me that if we were ever to make an artificial general intelligence, it would have about as much conscious thought, emotion, motivation, ego, etc. as a mechanical clock, with the same likelihood of intentionally hurting us. Increased complexity of intelligence does not necessarily mean increased consciousness and agency. We might throw an interface over such software to make it able to pass a Turing test, but that's by the by.

EDIT: For clarification, my points on the relationship between consciousness and intelligence stem from the epiphenomenalism debate in modern neuroscience and philosophy, in which there is a body of evidence that points to consciousness being superfluous to decision making.
AnTiFreeze3
#18
Jan27-13, 02:35 PM
Humanity isn't stupid enough to allow robots to become more intelligent than we are. There is a huge level of unpredictability that follows something like the Singularity (the technological kind, not the black-hole kind).

I don't see why we can't just cut them off at ape-like intelligence and force the robots to do manual labor for us.

