
Will humans still be relevant in 2500?

  1. Jan 27, 2013 #1
    My vote is No. Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute. In a thousand years, machines are going to look back on 2012 in awe. A thousand years seems like a long time, and it is at the pace of current technological progress, but remember that it took anatomically modern humans 160,000 years after they already looked like us to get it together to spit-paint a bison on a cave wall. What does that mean? I want to hear your opinion.

    The foundations of human cognition are already known (Piaget taught us that), and the technology exists to press this operationalism into electronics, so it is just a matter of time. And not a lot of it: I am working with a team trying to push this forward now. Once it happens, it won't be long before humans define the term "superfluous". What's going to happen to us then? I think we will be cared for by our robotic superiors, just as we take care of grandma, but I don't think that what the future holds is some kind of Star Trek fantasy. What's your vote?
  3. Jan 27, 2013 #2


    User Avatar

    Staff: Mentor

    We have rules against overly speculative posts. We have had "what if" threads like this before, and they serve no purpose. I'll allow this for a while to see what happens, but I need to remind everyone that any guess needs to be rooted in today's mainstream science. Consider ethics, costs, education, financial circumstances, and so on. Certain cultures aren't going to allow it; remote areas pose their own obstacles; and on and on.
    Last edited: Jan 27, 2013
  4. Jan 27, 2013 #3


    User Avatar
    Homework Helper
    Gold Member

    Our future robot overlords will be pleased with your post.
  5. Jan 27, 2013 #4
    They don't contribute what, exactly, to what, exactly? And why should anyone care?
  6. Jan 27, 2013 #5
    We wouldn't have future robot overlords. They would be future cloud-computing overlords ;)

    (yes, we're used to dealing with separate, individual human beings, while any software seems more abstract)

    Right now the difference between human and computer is clear; in the future, that might not be so. What about the case of having a chip in your brain with advanced functions? OK, that's only data access. What if you use it to make calculations? What if you can order the chip to stimulate your brain to improve your mood or to concentrate better on your job? What if there are plenty of electrodes, and they are used to boost brain capabilities because part of your neural net is actually simulated by a computer? What if your body reaches its best-before date and you decide to back up your memory and migrate fully onto a computer?

    At which step did you cease being human and become a computer?
  7. Jan 27, 2013 #6
    According to a research report by Zager and Evans, 1969, we should be good til the year 10,000.
  8. Jan 27, 2013 #7
    I have to tread lightly here, because I've been told the science has to be sound, and this science is sound and I can prove it. The human brain works at 10 Hz from pole to pole, back to front; local circuitry (like that within the primary visual or auditory cortex) runs at 40 Hz. Inter-areal dynamics proceed in what we call the beta band, at about 20-30 Hz; these are interactions between cross-cortical regions like V1-V5 or TEO to TE, or even from the medial forebrain and elsewhere to subcortical regions involved in salience processing.

    But the general deal here is that the global frame rate of human cognition is 10 Hz; this is a limitation of biological neurons due to volume-conduction issues and membrane permeability. However, I don't think there's any reason that, in a fabricated device, we can't take this 10 Hz reality of human thought and turn it into 1 MHz, or even more. What would that mean? It would mean that this device could read the entire Library of Congress in a few seconds, and could live the entire existence of humans since the time of Homo erectus in perhaps the time it takes you to eat lunch. What's that going to mean when it happens? And it probably is going to happen, because I personally am working on it, and nobody, scientists at least, seems too interested in reversing this progression.
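    Taking the post's own figures at face value (the 10 Hz "global frame rate" is a contested premise, not established neuroscience; see the objections later in the thread), the claimed speed-up is simple arithmetic:

```python
# Back-of-envelope arithmetic using the poster's own numbers; both
# rates below are the thread's claims, not established figures.
BIO_RATE_HZ = 10            # claimed global rate of human cognition
DEVICE_RATE_HZ = 1_000_000  # the hypothetical 1 MHz fabricated device

speedup = DEVICE_RATE_HZ / BIO_RATE_HZ
print(f"speed-up: {speedup:,.0f}x")  # 100,000x

# At that ratio, one subjective "human year" of thought would take:
SECONDS_PER_YEAR = 365.25 * 24 * 3600
wall_time_s = SECONDS_PER_YEAR / speedup
print(f"about {wall_time_s:.0f} seconds of wall time")  # ~316 s
```

    At a 100,000x ratio, a subjective year passes in about five minutes, which is where "Homo erectus to now over lunch" comes from, granting the premise.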
  9. Jan 27, 2013 #8
    For sure we are not talking about our current computers. Nowadays they can hardly be compared to the intellect of insects, and the minimal size of transistors will probably be reached in 20-30 years (22 nm technology currently).
  10. Jan 27, 2013 #9
    This is exactly my point, Sayajin, and where, IMO, the confusion lies, so thanks for bringing it up. I'm too lazy to look it up now, but a couple of years ago I read an article in some popular science magazine about a guy who somehow finagled several million dollars out of the US government (or the UK's) to build a supercomputer designed to mimic the behavior of individual neurons in modeling cognition and consciousness. Well, they think they're approaching the cognitive capacity of a rat, or an insect; I'm not quite sure, again, I'd have to look it up. But even if they think they did, that's not going to help anybody very much.

    The point is, you are right in what you say if you think the brain is best modeled in this fashion, using the traditional Hodgkin-Huxley approach or even an '80s PDP approach. That is not the way to go, though: the brain is a chaotic system, and we can simplify the dynamics and, I believe, get it going far faster than 10 Hz with current technology. I know it CAN happen; it's just a matter of who can put together a project that actually does it. But I think this is inevitable, and probably sooner rather than later.
  11. Jan 27, 2013 #10
    Actually, they have been trying to simulate the behavior of rat neurons for quite a long time now. The thing is that although they can simulate the signal transmission between neurons, nobody knows what the signals mean or what functions they are responsible for. As far as I know, the only time scientists tried to link specific "neuro-wiring" with specific behavior was with the fruit fly, and in only a single case. Nobody can actually tell you how the fruit fly thinks at the most basic level. It's much like doing something you don't actually understand; that's the point, though: they are trying to figure out how this works.
    On the other hand, consciousness (about which absolutely nothing is known) and long-term memory (neural connections are changing constantly, so where is long-term memory stored, given that signal transmission is temporary?) are very controversial. They are all about speculation, and there isn't a single theory that everybody agrees with.

    Maybe someday they will understand all this, but even then I doubt that machines will become smarter than us. The best we can do is build machines as smart as we are; if those machines are conscious creatures that learn new knowledge, they can start evolving much faster than us. But if we are able to do that, we would probably also know some way to become smarter ourselves. At the end of the day, everything I've said is pure speculation, and I think none of us will know the answer to these questions during our lifetime.

    I've always been very interested in how living creatures work compared to how computers work. For example, look at some little kid. Tell him to give you a cup of tea; the kid will do it without any problems. Now try to imagine building a robot to help old or sick people, for example by giving them tea when they want it.
    If you give it only a single type of tea cup, and the cups are always in the same place, it will be easy work.

    Now imagine that it is in realistic conditions.
    First of all, the robot has to recognize the cups (they vary in shape, color, and size). This will be the hardest part: image recognition is one of the hardest things for machines. Nowadays there are many types of software that try to recognize faces, for example. The way they do it is by measuring certain features of the human face (the distance between the eyes and such); knowing that every human is different, and using a database of human faces, they can recognize individuals. But since this also depends on the angle of the picture, nowadays 3D images are used to improve the algorithm. The thing with cups is much more complex than this. The robot must know some way to grab the cup without breaking it or spilling the liquid. It has to know whether the cup is made of glass, plastic, or some other material: a disposable cup can easily be deformed if the robot grabs it with too much force; on the other hand, if the cup is heavy, the robot can drop it. After that, it should know whether the tea is too hot to give to the person (that part is actually very easy). This simple task turns out to be very complex even for modern computers. If a human tried to do the calculations a normal PC does in a second, it would take him years; yet he can do tasks that are far too hard for that machine.
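    The feature-distance matching idea described above can be sketched in a few lines. This is a toy illustration, with entirely made-up names and measurements (real face recognition uses far richer features and 3D data, as the post notes): each face is reduced to a tuple of numbers, and a new face is matched to the nearest database entry.

```python
import math

# Hypothetical database: each face reduced to three made-up measurements
# (e.g. eye spacing, nose length, mouth width), purely for illustration.
database = {
    "alice": (6.2, 4.8, 3.1),
    "bob":   (5.9, 5.2, 3.4),
    "carol": (6.5, 4.5, 2.9),
}

def euclidean(a, b):
    """Straight-line distance between two feature tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(features):
    """Return the database entry nearest to the measured features."""
    return min(database, key=lambda name: euclidean(database[name], features))

print(best_match((6.3, 4.7, 3.0)))  # "alice" — the closest entry here
```

    The fragility the post describes is visible even in this sketch: a slightly different viewing angle changes every measurement, which is exactly why nearest-neighbor matching on raw features breaks down in realistic conditions.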

    On the other hand, even simple animals have very complex behavior. They can determine within a few seconds whether another animal standing next to them is dangerous. They can search for food in realistic, changing environments and adapt to them. Even some single-celled organisms show very interesting behavior.
  12. Jan 27, 2013 #11
    To answer your question - NO.
  13. Jan 27, 2013 #12


    User Avatar
    2017 Award

    Staff: Mentor

    What do you mean by "humans"?

    Will the influence of biological humans be relevant by 2500? Sure: even if we go extinct and leave no robots or anything similar behind, our influence on the landscape and climate will remain for a while.

    Will there be life which sees itself as human (or as descendants of humans)? If we do not go extinct together with our technology, I would expect the following:
    - if mind uploading becomes possible, it allows humans to become immortal. There are humans who would use this opportunity. And who wants to replace (!) himself with something not considered human?
    - if mind uploading does not become possible, and no powerful AI manages to kill us all, I don't see why humans should stop reproducing.
  14. Jan 27, 2013 #13
    It's not for one person to say, but I'm not sure the very devoted AI scientists are considering the protestations that such research might spark. How many people do they expect will be willing to let engineers, if such capabilities are attained, create a synthetic species that would eventually push humans into irrelevancy? Don't underestimate our pride. Now, do I think it's possible for machines to replace humans in theory? Sure. Do I think it will happen? Not as soon as others seem to think, when I consider the social issues that will arise.

    Now, this is coming from somebody who is certainly not "in the loop" when it comes to such a subject, so keep that in mind when you consider how best to dispose of my argument and me.
  15. Jan 27, 2013 #14
    What they contribute to what?
  16. Jan 27, 2013 #15
    ...what? Claiming that the human brain "works" at 10 Hz is so grossly oversimplistic as to be nonsense. It certainly doesn't refer to individual neurons, which can fire at frequencies far greater than 10 Hz. We're then left with large- (or small-) scale oscillations between different brain regions or local clusters of neurons. In that case, you see spontaneous oscillations at many different frequencies (all the way from delta to gamma).
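    For reference, the conventional EEG bands span roughly two orders of magnitude (exact cutoffs vary slightly between sources), which is the point being made against any single "frame rate". A minimal sketch:

```python
# Conventional EEG frequency bands with approximate boundaries;
# exact cutoffs differ a little from source to source.
BANDS_HZ = {
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta":  (13, 30),
    "gamma": (30, 100),
}

def classify(freq_hz):
    """Return the band a frequency falls into, or None if outside all bands."""
    for band, (lo, hi) in BANDS_HZ.items():
        if lo <= freq_hz < hi:
            return band
    return None

print(classify(10))  # alpha — one band among many, not a privileged rate
print(classify(40))  # gamma
```

    A 10 Hz rhythm sits in the alpha band, but spontaneous activity occurs across all of these bands simultaneously, which is why picking one number as "the" rate of cognition is misleading.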

    This severely underestimates our current knowledge. Our understanding of the neural mechanisms underlying reinforcement learning in humans, for instance, is extensive.
  17. Jan 27, 2013 #16


    User Avatar
    Science Advisor
    Gold Member

    Hands Evo 2500 GOOBF cards.

  18. Jan 27, 2013 #17


    User Avatar
    Staff Emeritus
    Science Advisor

    I expect legal and ethical debates to rage like never before if conscious software ever approaches reality. Forget the Terminator-style arguments and just think of the very basic headaches: at what point does such software gain rights, either limited like those of animals or equal to those of humans? If it has rights, then by creating one do you have to offer the same support you would a child; in other words, would labs and businesses be expected to pay the equivalent of child maintenance to a software project? Does shutting off the machine such software runs on without consent constitute assault? Does deleting it constitute murder? If genetic algorithms are used, does that constitute mass murder, or even genocide, for the large number of variants that were filtered out? If human-equivalent rights are awarded, how would a democracy function when one voter can copy and paste himself until he is the majority? Etc., etc.

    Biologists have been dealing with the ethical and social ramifications of their work for a while as a distinct field of study (bioethics) but nothing would compare to the scope of discussion needed if we ever get close to conscious software.

    All of that is a huge discussion on its own, and it illustrates that even starting from the premise that conscious, artificial, generally intelligent entities are possible, there could be derailing social factors. Another facet, however, is the premise that such entities are even desirable in the first place. When we talk about AGI, we tend to think of little more than digital humans; this is rooted in the understanding that we're talking about the need/want for an entity with general intelligence comparable to a human's, so that it can do pretty much anything a human could do with equivalent training. However, I've always found the assumption that such an entity would also come with consciousness, ego, emotion, etc. to be odd. It seems very anthropocentric to assume that because we're entities of (relatively) high general intelligence and we're conscious, emotional beings, any generally intelligent entity would be conscious and have emotions. I don't see that this is necessarily true. It seems more likely to me that if we were ever to make an artificial general intelligence, it would have about as much conscious thought, emotion, motivation, and ego as a mechanical clock, with the same likelihood of intentionally hurting us. Increased complexity of intelligence does not necessarily mean increased consciousness and agency. We might throw an interface over such software to make it able to pass a Turing test, but that's by the by.

    EDIT: For clarification, my points on the relationship between consciousness and intelligence stem from the epiphenomenalism debate in modern neuroscience and philosophy, in which there is a body of evidence that points to consciousness being superfluous to decision making.
    Last edited: Jan 27, 2013
  19. Jan 27, 2013 #18
    Humanity isn't stupid enough to allow robots to become more intelligent than we are. There is a huge level of unpredictability that follows something like the Singularity (the technological kind, not black holes).

    I don't see why we can't just cut them off at ape-like intelligence and force the robots to do manual labor for us.
  21. Jan 27, 2013 #20
    "Cut them off"? How? Pass a law against it? A law specifying that all research in computational neuroscience and artificial intelligence must stop as soon as our technology reaches a point at which AI is quantifiably identical to ape intelligence? No.
  22. Jan 27, 2013 #21


    User Avatar
    Staff Emeritus
    Science Advisor

    Why not? Politics and legal processes have halted scientific development in fields far less controversial than this; just think of human cloning. I'm not swayed by the "if it's possible, someone will do it" argument by itself.
  23. Jan 27, 2013 #22


    User Avatar
    Staff Emeritus
    Science Advisor

    I have to ask, relevant to what?
    It's evident that all life is not impossible and I have no idea what you mean by "overcomplex". By what reasoning did you come to that conclusion? And what do you mean "contribute". It seems like you're talking in terms of trophic flow but I can't imagine what you are getting at.
    Be wary of the narrative of progress. There is nothing set in stone that human civilisations will always continue to become more technologically advanced; the truth of the matter is that we have no precedent for high-energy civilisations, and any speculation about the future (at these timescales and concerning this type of technological change), however well reasoned, can't be more than speculation.
    It means that we stand on the shoulders of giants.
    You can't assert that the technology exists in one breath and then insist it's just a matter of time in the next. Firstly, it doesn't exist; secondly, strong AI/AGI has been coming "real soon" for decades. While there has been progress in making software that acts intelligently, it's not apparent to me that we've made strides towards strong AI, the reason being that general intelligence is ill-defined in humans and it's not even clear what it is. If we're going to try to create it from scratch or reverse engineer the brain (and both are tasks of Herculean difficulty), we'll have to address the issue of what intelligence is along the way.
    You're an AI researcher working to produce strong AI?
    Superfluous to what? We're already superfluous to a lot of things and not to others. What changes if you propose strong AI?
    My vote is that the premises of this discussion need reevaluating, see my previous post for more on this.
  24. Jan 27, 2013 #23
    I didn't mean that it couldn't be done, only that it would be nearly unenforceable. Human cloning requires tremendous and specialized resources, but I can write a biophysically realistic model of the basal ganglia that will exhibit reinforcement learning in my bedroom. This will only become easier as computing power increases. Banning intelligent computer programs would be only slightly less difficult than banning cryptography. What's more, how would you quantify the level of intelligence beyond which computer programs would be prohibited?

    The government would necessarily carve out exceptions for itself (likely for military purposes), which means that the necessary technology and knowledge would almost certainly be developed. It would be impossible to keep that knowledge from becoming public; they tried the same thing with public key cryptography and failed miserably.
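    The "in my bedroom" point above can be made concrete. What follows is not a biophysically realistic basal-ganglia model; it is a deliberately tiny tabular Q-learning sketch (all parameters illustrative) showing that reinforcement learning itself runs comfortably on any home machine. The task: an agent in a 5-cell corridor learns to walk to the rightmost cell, where it earns a reward of 1.0.

```python
import random

random.seed(0)                            # reproducible toy run
N_STATES = 5                              # corridor cells 0..4; cell 4 is terminal
ACTIONS = (-1, +1)                        # step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, discount, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)       # walls at both ends
        r = 1.0 if s2 == N_STATES - 1 else 0.0      # reward only at the goal
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + discount * best_next - q[(s, a)])
        s = s2

# Greedy policy per non-terminal state; should become +1 (move right)
# everywhere once the values have propagated back from the goal.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

    The asymmetry in the post holds: writing this takes minutes on commodity hardware, whereas detecting that someone, somewhere, is running it is essentially impossible, which is the enforcement problem being described.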
  25. Jan 27, 2013 #24


    User Avatar
    Staff Emeritus
    Science Advisor

    I agree and disagree. I see your point, but writing the program for strong AI is likely to be insanely difficult, to require very specialised training in AI science, and possibly even to require expensive, specialised equipment (the latter may change if Moore's law continues to the point where domestic/commercial machines meet the requirements for strong AI, whatever those may be...). However, I suppose it's possible that once the work is done, the data could be copied and pasted.

    Either way I maintain the topic is not as simple as "could be done = will be done".
  26. Jan 28, 2013 #25
    Ah, and I suppose you were a strong advocate for advancing nuclear weapons research? Just because we are capable of advancing a field does not mean we should. Like I said, there is a large amount of unpredictability, at least given our current understanding, as to what would happen if we allowed an AI smarter than we are to exist.