Should Science wage war again

  1. Jan 30, 2005 #1
    "Should Science wage war again"

    It was known in 1939 that German physicists were close to discovering the secret of sustaining a nuclear chain reaction. The consequence of this was understood by a physicist who had earlier fled Europe for the USA. His cooperation with Einstein on a letter to President Roosevelt changed the course of human history. If the Nazis had developed an H bomb before the USA, the history books would have been written very differently. It turned out that the USA did not need the H bomb to defeat Germany in WW II; the first bomb was dropped on Hiroshima some three months after Germany's surrender. As of today, developing that H bomb, dropping it, and showing the world the terrible mass destruction it could cause has deterred all nations, of many distinct ideologies, from detonating one again, for fear of the devastation it would bring. Or was it only because one nation had more H bombs?

    We are faced with a similar threat of human annihilation, one even more menacing. AI technology will, within a matter of decades, supersede human intelligence. This technology, put in the hands of whatever controls it - humans or the machines themselves - sets the clock back to 1939.

    So the question is: should a nation that values human rights above individual or state rights once again develop a technology which could totally destroy all human life?

    It would be interesting to know your opinions.
     
  3. Jan 30, 2005 #2

    GeD

    -So the question is: should a nation that values human rights above individual or state rights once again develop a technology which could totally destroy all human life?

    Specifically in terms of your question - NO, no nation is 'forced' to, or 'should', build such technology. The same can be said of the opposite - there is nothing that says they 'shouldn't' develop such technology (if true AI can really be built).

    However, what is technology for? It's for improving our survival, our freedoms - our quality of life. If such goals are not built into the development of AI, then it is a mistake. Developing AI for 'science' or for 'knowledge' alone is idiotic, and eventually highly destructive.

    -AI technology will, within a matter of decades, supersede human intelligence.

    Where, pray tell, did you get this assumed 'fact'? Not from the movies, I hope.
     
  4. Jan 30, 2005 #3
    The bombs that were dropped on Hiroshima and Nagasaki were ordinary fission bombs, not H bombs.

    I believe that the human race has no long-term hope and that AI does. If we could create a fully intelligent, viable AI without any defects, this would be a good thing even if it eventually kills everyone. An AI has the potential to explore space, to understand more things better, and to live longer, both individually and as a race, than any human or group of humans. Moreover, a truly intelligent AI would not kill everyone; humans are too interesting to be simply discarded by any rational being.
     
  5. Jan 30, 2005 #4

    GeD

    -I believe that the human race has no long-term hope and that AI does.

    -An AI has the potential to explore space, to understand more things better, and to live longer, both individually and as a race, than any human or group of humans.

    Why would understanding space and other things be important to humans if they have no 'long-term hope'? Knowledge by itself is worthless. The reason these apocalyptic technologies exist is this 'sake of knowledge or science'. Both are worthless objectives by themselves.

    -Moreover, a truly intelligent AI would not kill everyone; humans are too interesting to be simply discarded by any rational being.

    Assuming the eventual existence of killer robots, it's very possible that either they destroy us before they become intelligent enough, or they never become 'rational' enough to save us from extinction.
     
  6. Jan 30, 2005 #5
    Humans _do_ have no long-term hope. Space colonization is not practical for humans. The world will wind down with us on it, and that's the end. Catapulting computer chips and little robots through space, though, is relatively simple; AI can colonize space.

    Purpose is not survival; it is the health of systems. A vibrant, super-intelligent AI could surpass the vitality--and thus the worth--of humanity.

    Don't you see that it doesn't matter whether the future inhabitants of the earth share your genes or have any genes at all, so long as they are intelligent and vital people?


    It would indeed be a tragedy if the human race were destroyed by a semi-intelligent military AI without the capacity to develop further.
     
  7. Jan 30, 2005 #6

    GeD

    Actually, it is not the case that "I would see" it as necessary that the future inhabitants be 'intelligent' and 'vital' if it meant abandoning or sacrificing either myself or the human species. To abandon the affirmation and improvement (and thus the continuation) of our lives simply to make way for the existence of a technologically superior (more powerful) species would be tantamount to decadence.
    Indeed, I could see us laboring for the AI to improve them, since they presumably can do a lot more, but not with the kind of quasi-suicidal tendency that I think you are implying. Anyone who values life will never value AI as worth more than humanity.
     
  8. Jan 30, 2005 #7
    Your valuing of biological life over AI life is totally arbitrary. There is no important difference between an intelligent man and an equally intelligent machine.

    What if a strange deity cast a spell over the earth so that all infants conceived from here on are born to look and act exactly like normal children, with the exception that they are in fact machines, with immortal lifespans and very high intelligence? They would not be susceptible to disease or other ailments and they would work constructively just like humans, only better; the only difference between them and very hardy humans is that they (the machine-people) are made of non-biological substances.

    In short, what if your children were perfect citizens, and also machines?

    What would be any drawback to that scenario?

    So don't speak of "abandoning the human species." Loyalty to something or other merely because of what it is made of (flesh and blood as opposed to steel and silicon), is arbitrary and irrational.
     
  9. Jan 30, 2005 #8

    GeD

    I don't value biological life over AI; I value my life over all other life. And I agree, there is no important difference between a man and an AI that both have the ability to reason and freedom of choice. However, being AI does not guarantee their greatness over humans - this still has to be proven.

    And what about this strange deity? He seems to be more powerful than all of the AI combined. Shouldn't we instead sacrifice to make more of such deities? He is the superior life, after all.

    *****
    Extra:
    Thus:
    -This is true in the sense that anyone who values life will never value AI > Human on the basis of a simple RACE or STRENGTH/SURVIVABILITY comparison - it will take more than that (i.e. imagination, choice, intelligence, development potential). A simple look at Nazi ideology and World War 2 will show you just how dangerous your ideas can be if they are interpreted exactly as you have stated them.

    -The potential of AI is not founded on anything yet. We only know that they are physically superior (computing, lifting, moving, etc.), but they have not been shown or proven to be superior mentally - in imagination, free thought, etc. It is true that, so far, robotic components have been shown to be physically superior to biological components. But what about computer viruses? Problems in coding? Overheating? Electrical failures? Short circuits? Are they free to choose and think - do they really have an imagination?

    -Their show of intelligence has been nothing more than a show of speed and a blind ability to obey (none have shown the ability to command) - none of this superior AI intelligence has yet been demonstrated.

    -Later on, if AI prove to be 'superior', we could labor/sacrifice for the development of AI as a way of preserving our legacy (they are our children, after all), and THAT would be improving the quality of what humans call 'intelligent life' and its vitality - not simply pursuing a supposed 'perfect AI' as the end goal. Perhaps humans will later evolve into something so powerful that they become even greater than the AI? Unlikely, but at this moment there is no reason to say that we must eventually abandon the survival of the human race.
     
    Last edited: Jan 31, 2005
  10. Jan 30, 2005 #9

    GeD

    Thus, it is my opinion that at this moment it is meaningless and worthless to build AI simply because they can colonize space and learn more about spatial phenomena (knowledge and science by themselves are worthless). We need to see an improvement in our quality of life, and build AI to help ourselves out - not simply study out of a need to advance science and knowledge.

    It is true, we could try to usher in an era of 'superior' robots (although the existence of such an end goal is still under dispute at the moment). But who is to say that building AI will prove superior to, say, creating new biologically potent beings? As we said before, there is no real difference - both are atoms and energy. Perhaps an evolved super-human, or other biological or cybernetic beings with resistance to viruses, will prove more effective, being more adaptive and less rigid than AI in physical form (and perhaps in intelligence as well).

    At this moment in time, we should be asking how AI could improve our quality of life - not about their 'eventual' succession of the human race. It is odd that humans think it is OK if they go extinct, yet are so unwilling to drive animals or plants extinct, out of fear that the earth would die.
     
    Last edited: Jan 30, 2005
  11. Jan 30, 2005 #10
    What does "interesting" have to do with reason?
     
  12. Jan 31, 2005 #11
    Certainly, at the moment AI is only a tool to help humans. It would take a true breakthrough to make a real AI. But our brains obey the laws of physics just as computers do; there is no fundamental difference.

    It is possible that biological computers will always be better than non-biological computers, simply because of processing speed. No computer performs as many computations as a human brain. As for adaptability, though, an AI has the potential to surpass humans and other biological entities enormously. Already the human mind is limited and one-track compared to the variety of things a simple desktop computer can do. Humans only think, however adaptable and useful this thought may be; even current computers can do unimaginably diverse things.

    Genetic design may produce a more powerful computer than the human brain, but you don't necessarily need humans to make that design. An AI might do the same thing; it could make AI II.

    All this is under the postulate that a truly reasoning AI can be built. If this can't happen, perhaps because of limited computing speed, then none of these things I am saying about the potential superiority of silicon applies. But I believe a reasoning AI _can_ be built.

    Dan, rational beings are always interested in complex and quixotic things. You can't be intelligent if you don't have any interest in the unusual and significant. I believe it's not possible to design a truly intelligent AI which does not share this interest.
     
  13. Feb 2, 2005 #12
    It is possible that biological computers will always be better than non-biological computers, simply because of processing speed. No computer performs as many computations as a human brain. As for adaptability, though, an AI has the potential to surpass humans and other biological entities enormously. Already the human mind is limited and one-track compared to the variety of things a simple desktop computer can do. Humans only think, however adaptable and useful this thought may be; even current computers can do unimaginably diverse things.

    I think you got the comparison reversed, Bartholomew.

    Biological computers and non-biological computers are each great within their own bailiwick. They can't be compared head to head, simply because they are constructed and developed in vastly different ways.

    Non-bio computers simply process a lot of 1's and 0's, and they do it very fast to make up for the shortfall and linearity of processing imposed by their construction.


    Bio computers (our brains) can't come close to the processing speed of computers, because of the medium of which they're made. But we make up for the slower processing speed by having a lot more interconnections between neurons, and are thereby able to draw conclusions from seemingly unconnected thoughts or bits of data.

    By this comparison, unless an AI is constructed with something as complex as our brain's interconnections (literally billions of them), we'll most likely be able to maintain parity with non-bio AI.

    Besides, if an AI got as complex as our brains, wouldn't it have learned from us and therefore be a human intelligence (albeit a constructed one)? Could it not be taught an appreciation of diversity and life?

     
  14. Feb 2, 2005 #13
    No, the brain is far faster than the fastest silicon computer. Each individual neuron is much slower, but overall the processing power is much greater because, as you say, there are many more connections and also many more neurons. Also, each individual neuron performs more computation than a single logic gate, mitigating its speed disadvantage.
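
    To put rough numbers on that claim - a back-of-envelope sketch, where every figure (~10^11 neurons, ~10^4 synapses per neuron, ~100 Hz firing, a ~3 GHz circa-2005 CPU) is a commonly cited order-of-magnitude estimate I'm assuming, not something established in this thread:

    ```python
    # Back-of-envelope comparison of raw processing throughput.
    # Every constant is a rough, commonly cited estimate.

    NEURONS = 1e11              # neurons in a human brain
    SYNAPSES_PER_NEURON = 1e4   # connections per neuron
    FIRING_RATE_HZ = 1e2        # rough sustained firing rate

    # Count each synaptic event as one elementary operation.
    brain_ops = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ

    CPU_CLOCK_HZ = 3e9          # a circa-2005 desktop CPU
    OPS_PER_CYCLE = 2           # optimistic instructions per cycle
    cpu_ops = CPU_CLOCK_HZ * OPS_PER_CYCLE

    print(f"brain: ~{brain_ops:.0e} ops/s")   # ~1e+17
    print(f"cpu:   ~{cpu_ops:.0e} ops/s")     # ~6e+09
    print(f"ratio: ~{brain_ops / cpu_ops:.0e}")
    ```

    On those assumptions, the brain's slow units still come out many orders of magnitude ahead through sheer parallelism, which is the point being made here.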

    Versatility is heavily in favor of the computer. For a human to learn a new thing, he must spend hours, sometimes longer, doing so. For a computer to learn a new thing it simply loads a program. The capability of computers to alter their memories and functionality far exceeds the human capability to do the same. Think of all the multifarious programs stored on just your computer at this moment--operating system(s), word processors, internet connection managers, other system software, games, hard drive browsers, internet browsers, command prompts, device drivers--who knows what else. There must be hundreds of thousands of commercial programs in existence, and any compatible computer can execute any of them.
     
  15. Feb 3, 2005 #14
    Yep, you got me there on the second part. I agree with you on the first part; I think we were saying the same thing (mostly) but from different perspectives. I was thinking of the neuron's speed vs. electron flow.
     
  16. Feb 3, 2005 #15

    GeD

    I only disagree with the part about learning faster. In some ways - especially at non-mathematical, less rule-bound tasks like pattern recognition - humans learn things a lot faster than the computer.
     
  17. Feb 3, 2005 #16
    Pattern recognition is a skill--just part of our genetic programming. If an equivalent pattern recognition program were made for the computer, it could learn it in an instant.

    So the current advantages of humans are more speed, and better existing programs for certain things (i.e. pattern recognition). With a true AI, the human programs would no longer be better than the computer programs, and then the only human advantage would be speed, matched against the computer's versatility.
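
    For concreteness, the kind of "pattern recognition program" I mean could be as simple as a nearest-centroid classifier - a minimal sketch, where the labels and data points are made up purely for illustration:

    ```python
    # Toy nearest-centroid classifier: "pattern recognition" as a
    # loadable program. Data and labels are purely illustrative.

    def centroid(vectors):
        """Component-wise mean of a list of equal-length vectors."""
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def classify(sample, centroids):
        """Return the label whose centroid is nearest to the sample."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: dist2(sample, centroids[label]))

    # Two made-up clusters of 2-D points, labeled "A" and "B".
    training = {
        "A": [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2]],
        "B": [[1.0, 0.9], [0.8, 1.0], [0.9, 1.1]],
    }
    centroids = {label: centroid(pts) for label, pts in training.items()}

    print(classify([0.15, 0.05], centroids))  # -> A
    print(classify([0.95, 1.00], centroids))  # -> B
    ```

    Loading such a program takes an instant; what no current program matches is the flexibility with which a human picks up a genuinely new kind of pattern.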
     
  18. Feb 6, 2005 #17

    GeD

    Then what if humans were put through genetic engineering to speed up their neurons and computing speed, as well as their pattern recognition...


    As I said before, it's worthless to speculate about what AI or humans will have in the future (unless it were already almost complete). We must only look at what AI is now - not the fantastical future of what could happen.
     
  19. Feb 6, 2005 #18
    I think that AI now is very close. All it needs is the right few insights. Genetic engineering of intelligence will not happen to any significant extent in the next hundred years.
     
  20. Feb 6, 2005 #19
    Yet in 1900 it was said that people couldn't fly. Some sixty years later, we went to the moon. I would say that the impossible is never that.

    Also, Bartholomew, I think that you underestimate the human race. We have survived much. Yet you say we have no chance!?

    We will continue to evolve. We still are. So do not discount us just because you cannot see what is in front of your nose. A human's goal is always to survive. Don't discount us till we are gone.
     
    Last edited: Feb 6, 2005
  21. Feb 6, 2005 #20
    It's not impossible, but it's tremendously unlikely. Remember that very few people actually went to the moon. In terms of results it was not a significant event. Significant genetic engineering of intelligence would need to involve hundreds of thousands of people... it's just not going to happen even if the techniques were there, which they aren't, because intelligence is genetically complicated.
     