Will humans still be relevant in 2500?


by DiracPool
Tags: 2500, human, relevant
AnTiFreeze3
#19
Jan27-13, 02:58 PM
P: 245
Quote by dlgoff:
Number Nine
#20
Jan27-13, 03:11 PM
P: 771
Quote by AnTiFreeze3:
Humanity isn't stupid enough to allow robots to become more intelligent than we are. There is a huge level of unpredictability that follows something like the Singularity (not black holes).

I don't see why we can't just cut them off at ape-like intelligence and force the robots to do manual labor for us.
"Cut them off"? How? Pass a law against it? A law specifying that all research in computational neuroscience and artificial intelligence must stop as soon as our technology reaches a point at which AI is quantifiably identical to ape intelligence? No.
Ryan_m_b
#21
Jan27-13, 03:13 PM
Mentor
P: 5,350
Quote by Number Nine:
"Cut them off"? How? Pass a law against it?
Why not? Politics and legal processes have halted scientific development in fields that are far less controversial than this. Just think of human cloning. I'm not swayed by the "if it's possible someone will do it" argument by itself.
Ryan_m_b
#22
Jan27-13, 04:39 PM
Mentor
P: 5,350
Quote by DiracPool:
My vote is No.
I have to ask, relevant to what?
Quote by DiracPool:
Humans, in fact all mammals and life on earth, are impossibly overcomplex and energy hungry for what they contribute.
It's evident that all life is not impossible, and I have no idea what you mean by "overcomplex". By what reasoning did you come to that conclusion? And what do you mean by "contribute"? It seems like you're talking in terms of trophic flow, but I can't imagine what you are getting at.
Quote by DiracPool:
In a thousand years machines are gonna look back on 2012 in awe. A thousand years seems like a long time, and it is with the pace of current technological progress,
Be wary about the narrative of progress. There is nothing set in stone that human civilisations will always continue to become technologically more advanced; the truth of the matter is that we have no precedents for high-energy civilisations, and any speculation over the future (at these timescales and concerning this type of technological change), however well reasoned, can't be more than speculation.
Quote by DiracPool:
but remember that it took anatomically modern humans 160,000 years to get it together to spit-paint a bison on a cave wall after they already looked like us. What does that mean? I wanna hear your opinion.
It means that we stand on the shoulders of giants.
Quote by DiracPool:
The foundations of human cognition are already known, Piaget taught us that, and the technology exists to press this operationalism into electronics, so it is just a matter of time.
You can't assert that the technology exists in one breath and then insist it's a matter of time in the other. Firstly, it doesn't exist, and secondly, strong AI/AGI has been coming "real soon" for decades. Whilst there has been progress in making software that acts intelligently, it's not apparent to me that we've made strides towards strong AI. The reason is that general intelligence is ill-defined in humans, and it's not even clear what it is. If we're going to try to create it from scratch or reverse-engineer the brain (and they are both tasks of Herculean difficulty), we'll have to address the issue of what intelligence is along the way.
Quote by DiracPool:
And not a lot; I am working with a team trying to push this forward now.
You're an AI researcher working to produce strong AI?
Quote by DiracPool:
Once it happens, it won't be long before humans define the term "superfluous".
Superfluous to what? We're already superfluous to a lot of things and not to others. What changes with the strong AI you propose?
Quote by DiracPool:
What's gonna happen to us, then? I think we will be cared for by our robotic superiors just like we take care of grandma, but I don't think that what the future holds is some kind of Star Trek fantasy. What's your vote?
My vote is that the premises of this discussion need reevaluating, see my previous post for more on this.
Number Nine
#23
Jan27-13, 05:10 PM
P: 771
Quote by Ryan_m_b:
Why not? Politics and legal processes have halted scientific development in fields that are far less controversial than this. Just think of human cloning. I'm not swayed by the "if it's possible someone will do it" argument by itself.
I didn't mean that it couldn't be done, only that it would be nearly unenforceable. Human cloning requires tremendous and specialized resources, but I can write a biophysically realistic model of the basal ganglia that will exhibit reinforcement learning in my bedroom. This will only become easier as computing power increases. Banning intelligent computer programs would be only slightly less difficult than banning cryptography. What's more, how would you quantify the level of intelligence beyond which computer programs would be prohibited?
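To be concrete about how little specialised tooling this takes, here's a minimal sketch of tabular Q-learning on a toy chain environment (just the textbook algorithm in plain Python; the environment and constants are illustrative, a stand-in for, not a model of, the basal ganglia):

Code:
import random

# Toy chain environment: states 0..4; action 0 moves left, action 1 moves right.
# Reaching state 4 pays reward 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else state + 1
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(qrow):
    best = max(qrow)  # break ties randomly so early episodes still explore
    return random.choice([a for a, v in enumerate(qrow) if v == best])

for _ in range(500):
    state = 0
    while state != N_STATES - 1:
        action = random.randrange(N_ACTIONS) if random.random() < EPSILON else greedy(Q[state])
        nxt, reward = step(state, action)
        # Standard Q-learning temporal-difference update
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

print(Q)  # the "move right" column should dominate in every state

Thirty lines, no special hardware, no lab.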

The government would necessarily carve out exceptions for itself (likely for military purposes), which means that the necessary technology and knowledge would almost certainly be developed. It would be impossible to keep that knowledge from becoming public; they tried the same thing with public key cryptography and failed miserably.
Ryan_m_b
#24
Jan27-13, 05:41 PM
Mentor
P: 5,350
I agree and disagree. I see your point, but writing the program for strong AI is likely to be insanely difficult, require very specialised training in AI science and possibly even expensive and specialised equipment (the latter may change if Moore's law continues to the point where domestic/commercial machines match the requirements for strong AI, whatever those may be...). However, I suppose it's possible that once the work is done the data could be copied and pasted.
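For a rough sense of that Moore's-law caveat, here's a back-of-the-envelope sketch (both FLOPS figures are assumptions picked purely for illustration; nobody knows what strong AI actually requires):

Code:
import math

# Illustrative assumptions only: suppose strong AI needs ~1e18 FLOPS
# and a domestic machine today delivers ~1e12 FLOPS.
required_flops = 1e18
domestic_flops = 1e12
doubling_years = 2.0  # classic Moore's-law doubling period

doublings = math.log2(required_flops / domestic_flops)
print(f"{doublings:.1f} doublings = about {doublings * doubling_years:.0f} years")
# ~20 doublings, i.e. about 40 years, if the trend held that long

On those made-up numbers the gap closes in decades, not centuries, which is exactly why the "copied and pasted" worry matters.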

Either way I maintain the topic is not as simple as "could be done = will be done".
AnTiFreeze3
#25
Jan28-13, 12:10 PM
P: 245
Quote by Number Nine:
"Cut them off"? How? Pass a law against it? A law specifying that all research in computational neuroscience and artificial intelligence must stop as soon as our technology reaches a point at which AI is quantifiably identical to ape intelligence? No.
Ah, and I suppose you were a strong advocate for advancing the research of nuclear weapons? Just because we are capable of advancing a field does not mean that we should do it. Like I said, there is a large amount of unpredictability, at least with our current understanding, as to what would happen if we allowed an AI smarter than we are to exist.
mfb
#26
Jan28-13, 01:05 PM
Mentor
P: 10,840
Quote by AnTiFreeze3:
Ah, and I suppose you were a strong advocate for advancing the research of nuclear weapons?
I prefer this world over a hypothetical one where North Korea has nuclear weapons and other countries have not.
Quote by AnTiFreeze3:
Just because we are capable of advancing a field does not mean that we should do it.
Right, but the opposite is true as well: We should not stop all research we can do.
Quote by AnTiFreeze3:
Like I said, there is a large amount of unpredictability, at least with our current understanding, as to what would happen if we allowed an AI smarter than we are to exist.
And there is Edit: source being discussed how to make an AI friendly.
AnTiFreeze3
#27
Jan28-13, 02:10 PM
P: 245
Quote by mfb:
I prefer this world over a hypothetical one where North Korea has nuclear weapons and other countries have not.
I despise hypothetical arguments, but I'll try to humor you:

Theorize all you want, but there are virtually no circumstances under which spending trillions upon trillions of dollars on apocalyptic weapons could be considered a noble endeavor. And considering North Korea's isolation and the limited pool of intellectuals in its possession, I highly doubt they would have been able to produce any nuclear weapons of their own without outside influence; I would like to remind you that they have only recently come by six pitiful nuclear weapons, and have done so over sixty years after the U.S. made the first nuclear weapon. And I can assure you that they did not reach this point on their own.

And my original statement was meant to be an example of how we should not strive to know everything about something simply because we can. Besides, the best way to prevent any self-extinction of humanity is simply never to develop the technology to kill ourselves off in the first place.

Quote by mfb:
Right, but the opposite is true as well: We should not stop all research we can do.
I don't recall ever promoting that all research should be stopped. All I have ever said is that, when certain research could be considered a threat to humanity, we ought to think seriously about what we're doing before we make any wrong decisions. Blindly striving forward without thinking about the consequences of what we may come to know is decadent at the very least.

Quote by mfb:
And there is Edit: source being discussed how to make an AI friendly.
The edit does make it a little difficult to understand, but I'm assuming that you meant to include a source stating that there is research being done on how to make potential AI friendly. I would be all for this; I thought about including a statement at the end of my last post saying that, if research could guarantee the safety of this pursuit, I would be completely pleased.
Ryan_m_b
#28
Jan28-13, 02:21 PM
Mentor
P: 5,350
Quote by mfb:
And there is Edit: source being discussed how to make an AI friendly.
Sorry, but that source is not acceptable. I'd like to remind members to post links to credible sources, i.e. peer-reviewed papers from credible AI journals. I doubt I need to remind anyone that this is a topic that has been commented on vociferously by various biased organisations that are not suitable sources for a PF discussion.
mfb
#29
Jan28-13, 04:48 PM
Mentor
P: 10,840
Quote by AnTiFreeze3:
I don't recall ever promoting that all research should be stopped. All I have ever said is that, when certain research could be considered a threat to humanity, we ought to think seriously about what we're doing before we make any wrong decisions. Blindly striving forward without thinking about the consequences of what we may come to know is decadent at the very least.
Of course. I think we all agree on that point.

@Ryan_m_b: Sorry. It was not my intent to use it as a source of actual research.

@AnTiFreeze3: You can check the Wikipedia article for a list of publications (see the references there).
Number Nine
#30
Jan28-13, 06:42 PM
P: 771
Quote by AnTiFreeze3:
Ah, and I suppose you were a strong advocate for advancing the research of nuclear weapons? Just because we are capable of advancing a field does not mean that we should do it. Like I said, there is a large amount of unpredictability, at least with our current understanding, as to what would happen if we allowed an AI smarter than we are to exist.
You didn't respond to my question, which had to do with pragmatics and not morality. Nuclear weapons require specialized equipment; programming does not. How do you intend to enforce your proposed ban?
2112rush2112
#31
Jan28-13, 09:50 PM
P: 21
Quote by Evo:
We have rules against overly speculative posts. We have had "what if" threads like this before. They serve no purpose. I'll allow this for a while to see what happens, but I need to remind everyone that any guess needs to be rooted in today's mainstream science. Consider ethics, costs, education and financial circumstances, etc. Certain cultures aren't going to allow it, remote areas, and on and on.
Oh my gawd in jeebus I can't believe what I'm reading.
This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo! Telling this guy that there are "rules against overly speculative posts" in this forum is like telling your students on the campus green during lunch that there shall be no discussions about the possibility of interstellar life. "Enough of this balderdash talk! Not on my campus, no sir!"
[attached image]
Evo
#32
Jan28-13, 09:55 PM
Mentor
P: 25,958
Quote by 2112rush2112:
Oh my gawd in jeebus I can't believe what I'm reading. This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo!
The rules in General Discussion are the same as in the rest of the forum. We allow humor, personal interest discussions, etc... but only as long as they are within the forum guidelines.
micromass
#33
Jan28-13, 10:19 PM
Mentor
P: 16,692
Quote by 2112rush2112:
Oh my gawd in jeebus I can't believe what I'm reading.
This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo! Telling this guy that there are "rules against overly speculative posts" in this forum is like telling your students on the campus green during lunch that there shall be no discussions about the possibility of interstellar life. "Enough of this balderdash talk! Not on my campus, no sir!"
[attached image]
If you know where that picture is from, then I've got to say that you have great taste in music.
DiracPool
#34
Jan29-13, 05:18 AM
P: 492
Quote by micromass:
If you know where that picture is from, then I've got to say that you have great taste in music.
YOU! Yes YOU!!! ...the Lad reckons himself a poet... and probably a physicist as well. Absolute RUBBISH. Get back with your work!
Ryan_m_b
#35
Jan29-13, 05:35 AM
Mentor
P: 5,350
Quote by 2112rush2112:
Oh my gawd in jeebus I can't believe what I'm reading.
This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo! Telling this guy that there are "rules against overly speculative posts" in this forum is like telling your students on the campus green during lunch that there shall be no discussions about the possibility of interstellar life. "Enough of this balderdash talk! Not on my campus, no sir!"
You think you know the rules better than a moderator?
DiracPool
#36
Jan29-13, 06:11 AM
P: 492
Quote:
...what? Claiming that the human brain "works" at 10 Hz is so grossly oversimplistic as to be nonsense. Certainly it doesn't refer to individual neurons, which can fire at frequencies far greater than 10 Hz. We're then left with large (or small) scale oscillations between different brain regions or local clusters of neurons. In that case, you see spontaneous oscillations of many different frequencies (all the way from delta to gamma).
Well, I wouldn't really call it nonsense; there's a good deal of evidence for it. But that's almost beside the point, because admittedly this thread really works best with more of a philosophical or perhaps teleological flavor, which is OK, isn't it?
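As an aside, the frequency bands being argued over are easy to demonstrate numerically. Here's a small sketch (synthetic data, purely illustrative, not an EEG analysis pipeline) that mixes a 10 Hz alpha-band rhythm with weaker 40 Hz gamma-band activity and reads both peaks back off the power spectrum:

Code:
import numpy as np

# Synthetic "EEG": a 10 Hz alpha-ish rhythm plus weaker 40 Hz gamma-ish activity.
fs = 1000                      # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)    # 5 seconds of signal
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
signal += 0.3 * np.random.randn(t.size)  # measurement noise

# Power spectrum via the FFT; both rhythms show up as distinct peaks,
# which is the point: "the brain works at 10 Hz" picks out one band of many.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2
for f in freqs[np.argsort(power)[-2:]]:
    print(f"peak near {f:.1f} Hz")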

You're an AI researcher working to produce strong AI?
Sorry, this quote is by Ryan_m_b (I can't figure out how to do multiple quotes in one response yet; I've got the single ones down). In any case, no, it's not standard AI; it's more related to a field that may be referred to as cognitive neurodynamics, looking at information less as software-themed and more as layered "frames" of itinerant chaotic states in masses of neural tissue. It's speculative but not "nonsense"; a researcher named Jose Principe and co. have recently built a functioning analog VLSI chip based on the technology.

However, again, the point I was trying to make was more in reference to the expensiveness, or perhaps better stated the "vestigialness", of the energy expense of biological tissue in accomplishing the important features of what we cherish as human. Evolution has labored blindly for hundreds of millions of years to find a mechanism by which our brains can carry on this conversation we're having right now. But the mechanism is grossly inefficient and hampered/slowed down by the sloppiness of the way evolution works. I mean, can't we all at least agree on that? The obvious comparisons are vacuum tubes vs solid-state transistors vs integrated circuits. "In the year 2525..." (sing it with me, people) we are not gonna still build the equivalent of "vacuum tube humans", in the same way we don't build vacuum tube iPhones today. But we probably will keep them (humans) around as a novelty, just as Marshall Brain keeps some vacuum tubes in his museum.

The whole thing about the Terminator effect, or why we would want to create something better than ourselves, is I think a non-starter. It is simply going to happen, IMO, because they will just be better "us's", only much, much more efficient, and we will think that is OK because they actually ARE just better us's, just as today's iPads are better than that laptop Roy Scheider was using in the film 2010. Who'd wanna go back to that? And BTW, they don't even publish OMNI anymore, Roy, so you tell me!

Now, I think the situation might be different if someone decided to build a platform that wasn't based on ours. That would be FOREIGN. And we don't like foreign; we like to talk to things that think like us. So my guess is that the robots that are our keepers in the future will be built on our platform, and we will avoid thorny problems like having to deal with Arnold and his cronies.

