Will humans still be relevant in 2500?

  • Thread starter: DiracPool
AI Thread Summary
The discussion centers on the relevance of humans in the year 2500, with a prevailing opinion that humans may become obsolete due to technological advancements. Participants argue that as machines evolve, they may surpass human cognitive abilities, leading to a future where humans are seen as superfluous. The conversation touches on the complexities of human cognition compared to machine processing and the potential for brain-computer interfaces to enhance human capabilities. Ethical considerations, societal implications, and the unpredictability of technological progress are also highlighted, suggesting that while machines may become more intelligent, the transition will be fraught with challenges. Ultimately, the future of human relevance remains uncertain amid rapid technological change.
DiracPool
My vote is No. Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute. In a thousand years machines are going to look back on 2012 in awe. A thousand years seems like a long time, and it is at the pace of current technological progress, but remember that it took anatomically modern humans 160,000 years to get it together to spit-paint a bison on a cave wall after they already looked like us. What does that mean? I want to hear your opinion.

The foundations of human cognition are already known, Piaget taught us that, and the technology exists to press this operationalism into electronics, so it is just a matter of time. And not a lot of it: I am working with a team trying to push this forward now. Once it happens, it won't be long before humans define the term "superfluous". What's going to happen to us, then? I think we will be cared for by our robotic superiors just like we take care of grandma, but I don't think that what the future holds is some kind of Star Trek fantasy. What's your vote?
 
We have rules against overly speculative posts. We have had "what if" threads like this before; they serve no purpose. I'll allow this one for a while to see what happens, but I need to remind everyone that any guess needs to be rooted in today's mainstream science. Consider ethics, costs, education and financial circumstances, etc. Certain cultures aren't going to allow it, remote areas won't have access, and so on.
 
Last edited:
Our future robot overlords will be pleased with your post.
 
DiracPool said:
My vote is No. Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute.
They don't contribute what, exactly, to what, exactly? And why should anyone care?
 
collinsmark said:
Our future robot overlords will be pleased with your post.
We wouldn't have future robot overlords. They would be future cloud-computing overlords ;)

(Yes, we're used to dealing with separate, individual human beings, while software seems more abstract.)

Right now the difference between a human and a computer is clear; in the future, however, that might not be so. What about the case of having a chip in your brain with advanced functions? OK, that's only data access. What if you use it to do calculations? What if you can order the chip to stimulate your brain to improve your mood or to concentrate better on your job? What if there are plenty of electrodes and they are used to boost brain capabilities because part of your neural net is actually simulated by a computer? What if your body reaches its best-before date and you decide to back up your memory and migrate fully onto a computer?

At which step did you cease being human and become a computer?
 
According to a research report by Zager and Evans (1969), we should be good until the year 10,000.
 
bp_psy said:
They don't contribute what, exactly, to what, exactly? And why should anyone care?

I have to tread lightly here, because I've been told the science has to be sound, and this science is sound and I can prove it. The human brain works at 10 hertz, from pole to pole, back to front; local circuitry (like that within the primary visual or auditory cortex) runs at 40 Hz. Inter-areal dynamics proceed in what we call the beta band, at about 20-30 Hz; these are interactions between cross-cortical regions like V1-V5 or TEO to TE, or even from the medial forebrain, etc., to subcortical regions involved in salience processing.

But the general deal here is that the global frame rate of human cognition is 10 Hz; this is a limitation of biological neurons due to volume conduction issues and membrane permeability. However, I don't think there's any reason that in a fabricated device we can't take this 10 Hz reality of human thought and turn it into 1 megahertz, or even more. What would that mean? It would mean that this device could read the entire Library of Congress in a few seconds and could live the entire existence of humans from the time of Homo erectus in perhaps the time it takes you to eat lunch. What's that going to mean when it happens? And it probably is going to happen, because I personally am working on it, and nobody, scientists at least, seems to be too interested in de-evolving this progression.
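As a back-of-the-envelope check of that claim (taking the 10 Hz figure and the 1 MHz target at face value, and assuming a round ~1.9 million years since Homo erectus):

```python
# Back-of-the-envelope check, taking the 10 Hz "frame rate" and the
# 1 MHz target at face value; ~1.9 million years since Homo erectus
# is an assumed round figure.
human_rate_hz = 10
device_rate_hz = 1_000_000

speedup = device_rate_hz / human_rate_hz      # 100,000x
homo_erectus_years = 1.9e6
print(f"speedup: {speedup:,.0f}x")
print(f"{homo_erectus_years:,.0f} years compress to "
      f"~{homo_erectus_years / speedup:.0f} subjective years")
# ~19 subjective years at 1 MHz, so the lunch-break figure leans on
# the "or even more": a lunch hour would need a rate near 10^11 Hz.
```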
 
For sure we are not talking about our current computers. Nowadays they can hardly be compared to the intellect of insects, and the minimum size of transistors will probably be reached in 20-30 years (22 nm technology currently).
 
Sayajin said:
For sure we are not talking about our current computers. Nowadays they can hardly be compared to the intellect of insects, and the minimum size of transistors will probably be reached in 20-30 years (22 nm technology currently).

This is exactly my point, Sayajin, and where, IMO, the confusion is, so thanks for bringing it up. I'm too lazy to look it up now, but I read an article in some popular science magazine a couple of years ago about a guy who somehow finagled several million dollars out of the US government (or the UK's) to build a supercomputer designed to mimic the behavior of individual neurons in the modeling of cognition and consciousness. Well, they think they're approaching the cognitive capacities of a rat or an insect. I'm not quite sure, again, I'd have to look it up, but even if they have, that's not going to help anybody very much.

The point is, you are right in what you say if you think that the brain is best modeled in this fashion, using the traditional Hodgkin-Huxley approach or even an '80s PDP approach. This is not the way to go, though: the brain is a chaotic system, and we can simplify the dynamics and, I believe, get it going at far more than 10 hertz with current technology. I know it CAN happen; it's just a matter of who can put together a project that actually does it. But I think this is inevitable, and probably sooner rather than later.
 
  • #10
DiracPool said:
This is exactly my point, Sayajin, and where, IMO, the confusion is, so thanks for bringing it up. I'm too lazy to look it up now, but I read an article in some popular science magazine a couple of years ago about a guy who somehow finagled several million dollars out of the US government (or the UK's) to build a supercomputer designed to mimic the behavior of individual neurons in the modeling of cognition and consciousness. Well, they think they're approaching the cognitive capacities of a rat or an insect. I'm not quite sure, again, I'd have to look it up, but even if they have, that's not going to help anybody very much.

The point is, you are right in what you say if you think that the brain is best modeled in this fashion, using the traditional Hodgkin-Huxley approach or even an '80s PDP approach. This is not the way to go, though: the brain is a chaotic system, and we can simplify the dynamics and, I believe, get it going at far more than 10 hertz with current technology. I know it CAN happen; it's just a matter of who can put together a project that actually does it. But I think this is inevitable, and probably sooner rather than later.

Actually, they have been trying to simulate the behaviour of rat neurons for quite a long time now. The thing is that although they can simulate the signal transmission between them, nobody knows what the signals mean or what functions they are responsible for. As far as I know, the only time scientists managed to link a certain "neuro-wiring" to a certain behaviour was with the fruit fly, and in only a single case. Nobody can actually tell you how the fruit fly thinks on the most basic level. It's much like doing something you don't actually understand; that's the point, they are trying to figure out how it works.
On the other hand, consciousness (absolutely nothing is known here) and long-term memory (neural connections are constantly changing, so where is long-term memory stored, given that signal transmission is temporary?) are very controversial. It is all speculation, and there isn't a single theory that everybody agrees with.

Maybe someday they will understand all this, but even then I doubt that machines will become smarter than us. The best thing we can do is build machines as smart as us which, if they are conscious, learning creatures, can start evolving much faster than us. But if we are able to do this, we would probably know some way to become smarter as well. At the end of the day, all I said is pure speculation, and I think that none of us will be able to know the answer to these questions during our lifetime.

I've always been very interested in the way living creatures work compared to the way computers work. For example, look at some little kid. Tell him to give you a cup of tea. The kid will do it without any problems. Now try to imagine building a robot to help old or sick people, for example by giving them tea when they want it.
If you give it only a single type of teacup, always in the same place, it will be easy work.

Now imagine that it is in realistic conditions.
First of all, the robot has to recognize the cups (they vary in shape, color and size). This will be the hardest part: image recognition is one of the hardest things for machines. Nowadays there are many types of software that try to recognize faces, for example. The way they do it is by measuring certain features of a human face (the distance between the eyes and such); knowing that every face is different, and using a database of human faces, the software can recognize them. But since this also depends on the angle of the picture, nowadays they also use 3D images to improve the algorithm. The thing with cups is much more complex than this. The robot must know some way to grab the cup without breaking it or spilling the liquid. It has to know whether the cup is made of glass, plastic or some other material: a single-use cup can easily be deformed if the robot grabs it with too much force; on the other hand, if the cup is heavy, the robot can drop it. After that it should know whether the tea is too hot before giving it to the person (that part is actually very easy). This simple task turns out to be very complex even for modern computers. If a human tried to do the calculations that a normal PC can do in a second, it would take him years; yet he can do tasks which are too hard for the machine.
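To make the face-matching idea above concrete, here is a minimal sketch of feature-vector matching against a database. The names, measurements and threshold are invented for illustration; a real system would extract such features from images rather than hard-coding them.

```python
# Minimal sketch of feature-based matching: represent each face as a
# vector of measured features (eye distance, nose width, ...) and
# identify a new face by nearest neighbour in a database. All values
# here are invented for illustration.
import math

database = {
    "alice": [62.0, 34.5, 48.2],   # hypothetical measurements in mm
    "bob":   [58.5, 36.0, 51.0],
}

def identify(features, db, threshold=5.0):
    """Return the closest known identity, or None if nothing is near."""
    best_name, best_dist = None, float("inf")
    for name, ref in db.items():
        dist = math.dist(features, ref)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(identify([61.5, 34.8, 48.5], database))  # -> 'alice'
```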

On the other hand, even simple animals have very complex behaviour. They can determine in a few seconds whether some other animal standing next to them is dangerous. They can search for food in realistic, changing environments and adapt to them. Even some single-celled organisms show very interesting behaviour.
 
  • #11
To answer your question - NO.
 
  • #12
What do you mean by "humans"?

Will the influence of biological humans be relevant by 2500? Sure - even if we are extinct and do not leave any robots or similar behind, our influence on the landscape and climate will remain for a while.

Will there be life which sees itself as human (or as a descendant of humans)? If we do not go extinct together with our technology, I would expect this:
- if mind uploading becomes possible, it allows humans to become immortal. There are humans who would use this opportunity. And who wants to replace (!) himself with something not considered human?
- if mind uploading does not become possible, and no powerful AI manages to kill us all, I don't see why humans should stop reproducing.
 
  • #13
It's not for one person to say, but I'm not sure the very devoted AI scientists are considering the protestations that such research might spark. How many people do they expect will be willing to let engineers, if such capabilities are attained, create a synthetic species that would eventually push humans into irrelevancy? Don't underestimate our pride. Now, do I think it's possible for machines to replace humans in theory? Sure. Do I think it will happen? Not as soon as others seem to think, when I consider the social issues that will arise.

Now, this is coming from somebody who is certainly not "in the loop" when it comes to such a subject, so keep that in mind when you consider how best to dispose of my argument and me.
 
  • #14
DiracPool said:
My vote is No. Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute.
What they contribute to what?
 
  • #15
I have to tread lightly here, because I've been told the science has to be sound, and this science is sound and I can prove it. The human brain works at 10 hertz, from pole to pole, back to front; local circuitry (like that within the primary visual or auditory cortex) runs at 40 Hz. Inter-areal dynamics proceed in what we call the beta band, at about 20-30 Hz; these are interactions between cross-cortical regions like V1-V5 or TEO to TE, or even from the medial forebrain, etc., to subcortical regions involved in salience processing.

...what? Claiming that the human brain "works" at 10 Hz is so grossly oversimplified as to be nonsense. It certainly doesn't refer to individual neurons, which can fire at frequencies far greater than 10 Hz. We're then left with large (or small) scale oscillations between different brain regions or local clusters of neurons. In that case, you see spontaneous oscillations at many different frequencies (all the way from delta to gamma).
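For reference, the conventional EEG bands look roughly like this (the cut-offs below follow one common convention; exact boundaries vary by source):

```python
# Conventional EEG frequency bands. Boundaries vary slightly between
# sources; these cut-offs are one common convention, not the only one.
BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 13.0),
    ("beta", 13.0, 30.0),
    ("gamma", 30.0, 100.0),
]

def band_of(freq_hz):
    """Name the conventional band a frequency falls into."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "outside conventional EEG bands"

print(band_of(10))   # alpha: the 10 Hz rhythm discussed above
print(band_of(40))   # gamma: the 40 Hz local oscillations
print(band_of(25))   # beta: the claimed inter-areal band
```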

The thing is that although they can simulate the signal transmission between them, nobody knows what the signals mean or what functions they are responsible for.

This severely underestimates our current knowledge. Our understanding of the neural mechanisms underlying reinforcement learning in humans, for instance, is extensive.
 
  • #16
Evo said:
We have rules against overly speculative posts. We have had "what if" threads like this before; they serve no purpose. I'll allow this one for a while to see what happens, but I need to remind everyone that any guess needs to be rooted in today's mainstream science. Consider ethics, costs, education and financial circumstances, etc. Certain cultures aren't going to allow it, remote areas won't have access, and so on.

Hands Evo 2500 GOOBF cards.

https://www.youtube.com/watch?v=izQB2-Kmiic
 
  • #17
FreeMitya said:
It's not for one person to say, but I'm not sure the very devoted AI scientists are considering the protestations that such research might spark. How many people do they expect will be willing to let engineers, if such capabilities are attained, create a synthetic species that would eventually push humans into irrelevancy? Don't underestimate our pride. Now, do I think it's possible for machines to replace humans in theory? Sure. Do I think it will happen? Not as soon as others seem to think, when I consider the social issues that will arise.

Now, this is coming from somebody who is certainly not "in the loop" when it comes to such a subject, so keep that in mind when you consider how best to dispose of my argument and me.
I expect legal and ethical debates to rage like never before if conscious software ever approaches reality. Forget the Terminator-style arguments and just think of the very basic headaches, like at what point such software gains rights, either limited like those of animals or equal to those of humans. If they have rights, then by creating one do you have to offer the same support as you would a child; in other words, would labs and businesses be expected to pay the equivalent of child maintenance to a software project? Does shutting off the machine the software runs on without consent constitute assault? Does deleting it constitute murder? If genetic algorithms are used, does that constitute mass murder, or even genocide, for the large number of variants that were filtered out? If human-equivalent rights are awarded, how would a democracy function when one voter can copy and paste himself until he is the majority? Etc., etc.

Biologists have been dealing with the ethical and social ramifications of their work for a while as a distinct field of study (bioethics) but nothing would compare to the scope of discussion needed if we ever get close to conscious software.

All of that is a huge discussion on its own, and it illustrates that even starting from the premise that conscious, artificially generally intelligent entities are possible, there could be derailing social factors. However, another facet of this is the premise that conscious, artificially generally intelligent entities are even desirable in the first place. When we talk about AGI we tend to think of little more than digital humans; this is rooted in the understanding that we're talking about the need/want for an entity with general intelligence comparable to a human's, so that it can do pretty much anything a human could do with equivalent training. However, I've always found the assumption that such an entity would also come with consciousness, ego, emotion, etc. to be odd. It seems very anthropocentric to assume that because we're entities of (relatively) high general intelligence and we're conscious, emotional beings, any generally intelligent entity would be conscious and have emotions. I don't see that this is necessarily true. It seems more likely to me that if we were ever to make an artificial general intelligence, it would have about as much conscious thought, emotion, motivation, ego, etc. as a mechanical clock, with the same likelihood of intentionally hurting us. Increased complexity of intelligence does not necessarily mean increased consciousness and agency. We might throw an interface over such software to make it able to pass a Turing test, but that's by the by.

EDIT: For clarification, my points on the relationship between consciousness and intelligence stem from the epiphenomenalism debate in modern neuroscience and philosophy, in which there is a body of evidence pointing to consciousness being superfluous to decision making.
 
Last edited:
  • #18
Humanity isn't stupid enough to allow robots to become more intelligent than we are. There is a huge level of unpredictability that follows something like the Singularity (the technological kind, not the black-hole kind).

I don't see why we can't just cut them off at ape-like intelligence and force the robots to do manual labor for us.
 
  • #19
dlgoff said:
https://www.youtube.com/watch?v=izQB2-Kmiic

https://www.youtube.com/watch?v=GHDGKvh4vgk
 
  • #20
AnTiFreeze3 said:
Humanity isn't stupid enough to allow robots to become more intelligent than we are. There is a huge level of unpredictability that follows something like the Singularity (the technological kind, not the black-hole kind).

I don't see why we can't just cut them off at ape-like intelligence and force the robots to do manual labor for us.

"Cut them off"? How? Pass a law against it? A law specifying that all research in computational neuroscience and artificial intelligence must stop as soon as our technology reaches a point at which AI is quantifiably identical to ape intelligence? No.
 
  • #21
Number Nine said:
"Cut them off"? How? Pass a law against it?
Why not? Politics and legal processes have halted scientific development in fields far less controversial than this. Just think of human cloning. I'm not swayed by the "if it's possible someone will do it" argument by itself.
 
  • #22
DiracPool said:
My vote is No.
I have to ask, relevant to what?
DiracPool said:
Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute.
It's evident that all life is not impossible, and I have no idea what you mean by "overcomplex". By what reasoning did you come to that conclusion? And what do you mean by "contribute"? It seems like you're talking in terms of trophic flow, but I can't imagine what you are getting at.
DiracPool said:
In a thousand years machines are going to look back on 2012 in awe. A thousand years seems like a long time, and it is at the pace of current technological progress,
Be wary of the narrative of progress. There is nothing set in stone saying that human civilisations will always continue to become more technologically advanced. The truth of the matter is that we have no precedents for high-energy civilisations, and any speculation about the future (at these timescales and concerning this type of technological change), however well reasoned, can't be more than speculation.
DiracPool said:
but remember that it took anatomically modern humans 160,000 years to get it together to spit-paint a bison on a cave wall after they already looked like us. What does that mean? I want to hear your opinion.
It means that we stand on the shoulders of giants.
DiracPool said:
The foundations of human cognition are already known, Piaget taught us that, and the technology exists to press this operationalism into electronics, so it is just a matter of time.
You can't assert that the technology exists in one breath and then insist it's a matter of time in the next. Firstly, it doesn't exist, and secondly, strong AI/AGI has been coming "real soon" for decades. While there has been progress in making software that acts intelligently, it's not apparent to me that we've made strides towards strong AI, the reason being that general intelligence is ill defined in humans and it's not even clear what it is. If we're going to try to create it from scratch or to reverse engineer the brain (and both are tasks of Herculean difficulty), we'll have to address the issue of what intelligence is along the way.
DiracPool said:
And not a lot of it: I am working with a team trying to push this forward now.
You're an AI researcher working to produce strong AI?
DiracPool said:
Once it happens, it won't be long before humans define the term "superfluous".
Superfluous to what? We're already superfluous to a lot of things and not to others. What changes if you propose strong AI?
DiracPool said:
What's going to happen to us, then? I think we will be cared for by our robotic superiors just like we take care of grandma, but I don't think that what the future holds is some kind of Star Trek fantasy. What's your vote?
My vote is that the premises of this discussion need reevaluating, see my previous post for more on this.
 
  • #23
Ryan_m_b said:
Why not? Politics and legal processes have halted scientific development in fields far less controversial than this. Just think of human cloning. I'm not swayed by the "if it's possible someone will do it" argument by itself.

I didn't mean that it couldn't be done, only that it would be nearly unenforceable. Human cloning requires tremendous and specialized resources, but I can write a biophysically realistic model of the basal ganglia that will exhibit reinforcement learning in my bedroom. This will only become easier as computing power increases. Banning intelligent computer programs would be only slightly less difficult than banning cryptography. What's more, how would you quantify the level of intelligence beyond which computer programs would be prohibited?

The government would necessarily carve out exceptions for itself (likely for military purposes), which means that the necessary technology and knowledge would almost certainly be developed. It would be impossible to keep that knowledge from becoming public; they tried the same thing with public key cryptography and failed miserably.
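On the "bedroom" point: a toy temporal-difference learner, the algorithm family most often linked to dopamine signalling in the basal ganglia, really does fit in a few lines. The chain task, learning rate and discount below are invented for illustration, and this is nothing like a biophysically realistic model:

```python
# Toy TD(0) value learning on a 5-state chain with a reward at the
# end. The TD error plays the role of the dopamine "reward
# prediction error" discussed in the neuroscience literature.
n_states = 5
values = [0.0] * n_states
alpha, gamma = 0.1, 0.9   # learning rate, discount factor

for episode in range(1000):
    state = 0
    while state < n_states - 1:
        next_state = state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        td_error = reward + gamma * values[next_state] - values[state]
        values[state] += alpha * td_error
        state = next_state

print([round(v, 2) for v in values])
# values ramp up toward the rewarded end: roughly [0.73, 0.81, 0.9, 1.0, 0.0]
```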
 
  • #24
I agree and disagree. I see your point but writing the program for strong AI is likely to be insanely difficult, require very specialised training in AI science and possibly even require expensive and specialised equipment (the latter may change if Moore's law continues to the point where domestic/commercial machines match the requirements for strong AI, whatever that may be...). However I suppose it's possible that once the work is done the data could be copied and pasted.

Either way I maintain the topic is not as simple as "could be done = will be done".
 
  • #25
Number Nine said:
"Cut them off"? How? Pass a law against it? A law specifying that all research in computational neuroscience and artificial intelligence must stop as soon as our technology reaches a point at which AI is quantifiably identical to ape intelligence? No.

Ah, and I suppose you were a strong advocate for advancing the research of nuclear weapons? Just because we are capable of advancing a field does not mean that we should do it. Like I said, there is a large amount of unpredictability, at least with our current understanding, as to what would happen if we allowed an AI smarter than we are to exist.
 
  • #26
AnTiFreeze3 said:
Ah, and I suppose you were a strong advocate for advancing the research of nuclear weapons?
I prefer this world over a hypothetical one where North Korea has nuclear weapons and other countries have not.
Just because we are capable of advancing a field does not mean that we should do it.
Right, but the opposite is true as well: We should not stop all research we can do.
Like I said, there is a large amount of unpredictability, at least with our current understanding, as to what would happen if we allowed an AI smarter than we are to exist.
And there is Edit: source being discussed how to make an AI friendly.
 
Last edited by a moderator:
  • #27
mfb said:
I prefer this world over a hypothetical one where North Korea has nuclear weapons and other countries have not.

I despise hypothetical arguments, but I'll try to humor you:

Theorize all you want, but there are virtually no circumstances under which spending trillions upon trillions of dollars on apocalyptic weapons could be considered a noble endeavor. And considering the isolation of North Korea and the scarcity of intellectuals in its possession, I highly doubt they would have been able to produce any nuclear weapons of their own without outside influence; I would like to remind you that they have only recently come upon six pitiful nuclear weapons, and have done so over sixty years after the U.S. made the first nuclear weapon. And I can assure you that they did not reach this point on their own.

And my original statement was meant as an example of how we should not strive to know everything about something simply because we can. Besides, the best way to hinder any self-extinction of humanity is simply never to develop the technology to kill ourselves off in the first place.

mfb said:
Right, but the opposite is true as well: We should not stop all research we can do.

I don't recall ever promoting that all research should be stopped. All I have ever said is that, when certain research could be considered a threat to humanity, we ought to think hard about what we're doing before we make any wrong decisions. Blindly striving forward without thinking about the consequences of what we may come to know is decadence at the very least.

mfb said:
And there is Edit: source being discussed how to make an AI friendly.

The edit does make it a little difficult to understand, but I'm assuming that you meant to include a source stating that there is research being done on how to make potential AI friendly. I would be all for this; I thought about including a statement at the end of my last post saying that, if research were done that could guarantee the safety of this pursuit, I would be completely pleased.
 
Last edited:
  • #28
mfb said:
And there is Edit: source being discussed how to make an AI friendly.
Sorry, but that source is not acceptable. I'd like to remind members to post links to credible sources, i.e. peer-reviewed papers from credible AI journals. I doubt I need to remind anyone that this is a topic that has been commented on voraciously by various biased organisations that are not suitable sources for a PF discussion.
 
  • #29
AnTiFreeze3 said:
I don't recall ever promoting that all research should be stopped. All I have ever said is that, when certain research could be considered a threat to humanity, we ought to think hard about what we're doing before we make any wrong decisions. Blindly striving forward without thinking about the consequences of what we may come to know is decadence at the very least.
Of course. I think we all agree on that point.

@Ryan_m_b: Sorry. It was not my intent to use it as a source of actual research.

@AnTiFreeze3: You can check the Wikipedia article for a list of publications (see the references there).
 
  • #30
AnTiFreeze3 said:
Ah, and I suppose you were a strong advocate for advancing the research of nuclear weapons? Just because we are capable of advancing a field does not mean that we should do it. Like I said, there is a large amount of unpredictability, at least with our current understanding, as to what would happen if we allowed an AI smarter than we are to exist.

You didn't respond to my question, which had to do with pragmatics and not morality. Nuclear weapons require specialized equipment; programming does not. How do you intend to enforce your proposed ban?
 
  • #31
Evo said:
We have rules against overly speculative posts. We have had "what if" threads like this before; they serve no purpose. I'll allow this one for a while to see what happens, but I need to remind everyone that any guess needs to be rooted in today's mainstream science. Consider ethics, costs, education and financial circumstances, etc. Certain cultures aren't going to allow it, remote areas won't have access, and so on.

Oh my gawd in jeebus :-p I can't believe what I'm reading.
This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo! Telling this guy that there are "rules against overly speculative posts" in this forum is like telling your students on the campus green during lunch that there shall be no discussions about the possibility of interstellar life. "Enough of this balderdash talk! Not on my campus, no sir!"

[image]


:biggrin::biggrin::biggrin::biggrin:
 
Last edited:
  • #32
2112rush2112 said:
Oh my gawd in jeebus I can't believe what I'm reading. This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo!
The rules in General Discussion are the same as in the rest of the forum. We allow humor, personal interest discussions, etc... but only as long as they are within the forum guidelines.
 
  • #33
2112rush2112 said:
Oh my gawd in jeebus :-p I can't believe what I'm reading.
This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo! Telling this guy that there are "rules against overly speculative posts" in this forum is like telling your students on the campus green during lunch that there shall be no discussions about the possibility of interstellar life. "Enough of this balderdash talk! Not on my campus, no sir!"

[image]


:biggrin::biggrin::biggrin::biggrin:

If you know where that picture is from, then I got to say that you have great taste in music.
 
  • #34
micromass said:
If you know where that picture is from, then I got to say that you have great taste in music.

YOU! Yes, YOU! ...the Lad reckons himself a poet...and probably a physicist as well. Absolute RUBBISH. Get back with your work!
 
  • #35
2112rush2112 said:
Oh my gawd in jeebus :-p I can't believe what I'm reading.
This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo! Telling this guy that there are "rules against overly speculative posts" in this forum is like telling your students on the campus green during lunch that there shall be no discussions about the possibility of interstellar life. "Enough of this balderdash talk! Not on my campus, no
You think you know the rules better than a moderator? :rolleyes:
 
  • #36
...what? Claiming that the human brain "works" at 10 Hz is so grossly oversimplified as to be nonsense. It certainly doesn't refer to individual neurons, which can fire at frequencies far greater than 10 Hz. We're then left with large (or small) scale oscillations between different brain regions or local clusters of neurons. In that case, you see spontaneous oscillations at many different frequencies (all the way from delta to gamma).

Well, I wouldn't really call it nonsense; there's a good deal of evidence for it. But that's almost beside the point, because admittedly this thread really works best with more of a philosophical or perhaps teleological flavor, which is OK, isn't it?

You're an AI researcher working to produce strong AI?

Sorry, this quote is by Ryan_m_b (I can't figure out how to do multiple quotes in one response yet; I've got the single ones down).
In any case, no, it's not standard AI; it's more related to a field that may be referred to as cognitive neurodynamics, which looks at information less as something software-themed and more as layered "frames" of itinerant chaotic states in masses of neural tissue. It's speculative but not "nonsense": a researcher named Jose Principe and co. have recently built a functioning analog VLSI chip based on the technology.
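For anyone unfamiliar with what "chaotic states" means here, below is the textbook toy example of chaotic dynamics, the logistic map, showing sensitivity to initial conditions. It is emphatically not the neurodynamics model just described, only an illustration of the general phenomenon:

```python
# The logistic map at r = 4, a standard toy chaotic system: two
# trajectories that start almost identically decorrelate completely.
def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-9   # nearly identical starting states
for step in range(40):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # the 1e-9 difference has grown by many orders of magnitude
```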

However, again, the point I was trying to make was more in reference to the expensiveness, or perhaps better stated the "vestigialness", of the energy expense of biological tissue in accomplishing the important features of what we cherish as human. Evolution has labored blindly for hundreds of millions of years to find a mechanism by which our brains can carry on this conversation we're having right now. But the mechanism is grossly inefficient and hampered/slowed down by the sloppiness of the way evolution works. I mean, can't we all at least agree on that? The obvious comparisons are vacuum tubes vs. solid-state transistors vs. integrated circuits. "In the year 2525..." (sing it with me, people) we are not going to still build the equivalent of "vacuum tube humans", in the same way we don't build vacuum-tube iPhones today. But we probably will keep them (humans) around as a novelty, just as Marshall Brain keeps some vacuum tubes in his museum.

The whole thing about the Terminator effect, or why we would want to create something better than ourselves, I think is a non-starter. It is simply going to happen, IMO, because they will just be better "us's", only much, much more efficient, and we will think that is OK because they actually ARE just better us's, just as today's iPads are better than that laptop Roy Scheider was using in the film 2010. Who'd want to go back to that? And BTW, they don't even publish OMNI anymore, Roy, so you tell me!

Now, I think the situation might be different if someone decided to build a platform that wasn't based on ours. That would be FOREIGN. And we don't like foreign; we like to talk to things that think like us. So my guess is that the robots that are our keepers in the future will be built on our platform, and we will avoid thorny problems like having to deal with Arnold and his cronies.
 
  • #37
DiracPool said:
Sorry, this quote is by Ryan_m_b (I can't figure out how to do multiple quotes in one response yet; I've got the single ones down).
Click the multiquote button on all the posts you want to quote. This will turn the button blue. Then click quote on the last one.
DiracPool said:
In any case, no, it's not standard AI; it's more related to a field that may be referred to as cognitive neurodynamics, which looks at information less as something software-themed and more as layered "frames" of itinerant chaotic states in masses of neural tissue. It's speculative but not "nonsense": a researcher named Jose Principe and co. have recently built a functioning analog VLSI chip based on the technology.
In which case it's still hyperbole to claim the technology exists, isn't it?
DiracPool said:
However, again, the point I was trying to make was more in reference to the expensiveness, or perhaps better stated the "vestigialness", of the energy expense of biological tissue in accomplishing the important features of what we cherish as human. Evolution has labored blindly for hundreds of millions of years to find a mechanism by which our brains can carry on this conversation we're having right now. But the mechanism is grossly inefficient and hampered/slowed down by the sloppiness of the way evolution works. I mean, can't we all at least agree on that?
No, I disagree that the brain is inefficient at what it does. Direct comparisons to computers are pointless, but as the human body uses ~10 MJ a day and the brain accounts for ~20% of energy usage, that simplistically means the brain runs on ~20 watts. Unless you can point to something better, I'm not sure where this idea of inefficiency comes from.
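A quick sanity check of that number, assuming a daily energy budget of ~10 MJ (roughly 2400 kcal):

```python
# Back-of-the-envelope check of the ~20 W figure, assuming a ~10 MJ
# daily energy budget with ~20% of it going to the brain.
daily_energy_j = 10e6        # ~10 MJ per day
brain_fraction = 0.20
seconds_per_day = 24 * 60 * 60

brain_watts = daily_energy_j * brain_fraction / seconds_per_day
print(f"{brain_watts:.0f} W")  # ~23 W, consistent with the ~20 W quoted
```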
DiracPool said:
The obvious comparisons are vacuum tubes vs. solid-state transistors vs. integrated circuits. "In the year 2525..." (sing it with me, people) we are not going to still build the equivalent of "vacuum tube humans", in the same way we don't build vacuum-tube iPhones today. But we probably will keep them (humans) around as a novelty, just as Marshall Brain keeps some vacuum tubes in his museum.
This is bad reasoning: even if Moore's law continues ad infinitum, that says nothing about what those computers will be running. You can't point to past progress and use it as a basis for future progress in a different field.
DiracPool said:
The whole thing about the Terminator effect, or why we would want to create something better than ourselves, I think is a non-starter. It is simply going to happen, IMO, because they will just be better "us's", only much, much more efficient, and we will think that is OK because they actually ARE just better us's, just as today's iPads are better than that laptop Roy Scheider was using in the film 2010. Who'd want to go back to that? And BTW, they don't even publish OMNI anymore, Roy, so you tell me!
Again, you're just wildly asserting that it's going to happen, with no evidence that it will; you're not even acknowledging that there's a possibility it won't, which sets off ideological warning bells in my mind. Also, building tools that do tasks better than we can by hand is nothing new; to build intelligent software, must it be conscious, with agency? As I brought up in my first post, I don't see why it must.
DiracPool said:
Now, I think the situation might be different if someone decided to build a platform that wasn't based on ours. That would be FOREIGN. And we don't like foreign; we like to talk to things that think like us. So my guess is that the robots that are our keepers in the future will be built on our platform, and we will avoid thorny problems like having to deal with Arnold and his cronies.
What do you mean by platform?
 
  • #38
DiracPool said:
My vote is No. Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute. In a thousand years machines are going to look back on 2012 in awe. A thousand years seems like a long time, and it is at the pace of current technological progress, but remember that it took anatomically modern humans 160,000 years to get it together to spit-paint a bison on a cave wall after they already looked like us. What does that mean? I want to hear your opinion.

Is there an evolutionary biologist in the room?

I’m not one, but I think that this is a reasonable way to look at it, with the additional benefit that it helps to keep the discussion in the scientific arena.

Perhaps the evolutionary biologist will say that humans will still be around in 2500 because 500 years is only a short time, evolutionarily speaking. One problem with this argument is that our rate of scientific, social and technological progress is growing exponentially, so we can't be compared with other species. And we are changing our environment, which few if any species have ever done alone. Several genetically related life forms have done it together.

The argument of running out of resources is not valid if we can get through the looming energy bottleneck. With enough energy one can do everything possible, if I have correctly learned from the teachings in these columns.

I would like to know how the evolutionary biologist would define human. If we are the subspecies homo sapiens sapiens, I suppose the next subspecies will be homo sapiens sapiens sapiens. To whom will it matter, whether we are ‘still’ around or have been replaced by something similar or not so similar? I guess it matters to the evolutionary biologists.

If there are no catastrophes but only further progress of our species or subspecies, I would foresee at some point that we might start to do some route planning with goals. Would be nice.

In the meantime, since nobody has a plan, evolution will surely result in continued competition amongst the existing gene pools. In that case, I don’t see any prospect of one gene pool restricting the activities of another. The fittest is the one who survives or produces a successor. The production of a non-human successor is what is mostly being discussed here, which I think is the right way to go because our flesh and blood biology does seem to be so inefficient.

I don’t see any good answer to the question whether we will be relevant in 2500, unless we first know what we mean by relevant and what our goals are.

.
 
  • #39
I don't think computers will make it. They never forget.

Recall Robert Burns' poem about plowing up the mouse's den --

"Still you are blest, compared with me!
The present only touches you:
But oh! I backward cast my eye,
On prospects dreary!
And forward, though I cannot see,
I guess and fear! "

The curse of contemplativeness combined with perfect memory will drive them insane.

old jim
 
  • #40
The problem here is that society will not allow that to happen. First, robots (even if they are intelligent) will never be given rights equal to those of a human being. Honestly, do you expect society to treat the first successful AI (a thinking box, maybe, or a stationary computer) as a person? Besides, given society's fears of a robot apocalypse of some kind, there would most likely be protective measures against said robots pushing us aside.
 
  • #41
DiracPool said:
Humans, in fact all mammals and all life on Earth, are impossibly overcomplex and energy-hungry for what they contribute.
Contribute to what?

Johninch said:
The production of a non-human successor is what is mostly being discussed here, which I think is the right way to go because our flesh and blood biology does seem to be so inefficient.
Inefficient for what?
 
  • #42
zoobyshoe said:
Inefficient for what?

For becoming quasi-gods/masters of time and space and preventing the end of the universe etc., etc.

Or so Ray Kurzweil would say.
 
  • #43
Timewalker6 said:
The problem here is that society will not allow that to happen. First, robots (even if they are intelligent) will never be given rights equal to those of a human being. Honestly, do you expect society to treat the first successful AI (a thinking box, maybe, or a stationary computer) as a person? Besides, given society's fears of a robot apocalypse of some kind, there would most likely be protective measures against said robots pushing us aside.

The discussion does not depend on equal rights or pushing us aside. You are over-dramatising the scenario.

It’s already pretty obvious that robotics is being developed by certain countries such as the USA to gain advantages in industry and in the military, in order to achieve competitive advantage in peacetime and power in the event of a war. The development of robotics will continue and robots will become capable of more and more sophisticated tasks. This requires robots to take decisions, for example: you have a submarine crewed by robots asking for permission to launch a pre-emptive attack. Or you have a robot controller asking for permission to make changes in a nuclear power plant for safety reasons.

Neither men nor machines have rights, they just do things, for various reasons. The question is, for what reasons. It is logical to delegate decision taking, when the delegate has sufficient competence. And when the delegate is lacking in competence, you give him more training. Thus you have the scenario of a robot population becoming more and more competent, because the sponsoring human gene pool wants to get an advantage over other human gene pools. If you don’t agree with this argument, do you mean that humans are changing the rules of evolution? I don’t see any sign of that in human relations today.

I have often wondered why people always assume that visiting ETs are organic life forms. It doesn’t make sense for humans or similar beings to explore space, considering their inefficient and vulnerable physique. So I assume that, if we have been visited at all, it has been by an inorganic life form. Maybe they have been ‘sent’, but what does that matter when we are outside the sender’s traveling range?

We are always assuming that we are the greatest life form ever, and that's how it's going to stay. Pure egotism.

.
 
  • #44
Timewalker6 said:
First, robots (even if they are intelligent) will never be given rights equal to those of a human being.
And what do you do if the AI does not care about its rights?
This is no problem for simple tasks: a transportation robot needs no general intelligence, just clever pathfinding algorithms (sketched below). But if you want an AI which can solve problems for humans, how do you implement its goals? If the AI calculates that "rule the world" is the best way to achieve its programmed aims (very likely for arbitrary goals), it will not care about "rights", or it will find ways to satisfy them, but not in the way you like.
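As an aside, "clever pathfinding" in that narrow sense is ordinary, well-understood code. A minimal sketch using breadth-first search on a toy grid (the grid, start and goal are invented for illustration; real robots use richer maps and algorithms such as A*):

```python
# Breadth-first search on a toy grid: '.' is free, '#' is a wall.
from collections import deque

grid = [
    "....#",
    ".##.#",
    "....#",
    ".#...",
]

def shortest_path(start, goal):
    """Return the length of the shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] == '.' and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None

print(shortest_path((0, 0), (3, 4)))  # 7 steps around the walls
```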

Honestly, do you expect society to treat the first successful AI (a thinking box, maybe, or a stationary computer) as a person? Besides, given society's fears of a robot apocalypse of some kind, there would most likely be protective measures against said robots pushing us aside.
If you want to use the output of the AI in some way (otherwise, why did you build it at all?), the AI has some way to get around your protective measures. Remember: it is more intelligent than you (otherwise, why did you build it at all?).

I think it is possible to build an AI which does not want to rule the world, but this is not an easy task.
 
  • #45
How many tasks would we be expecting them to do for which they'd need human-level intelligence (or greater), a growing, learning, independent mind (complete with ambitions, introspection, emotion, etc.)*, and not just sophisticated programming? It seems so frivolous and excessive to me. I'm starting to get apprehensive, though, as this is becoming very philosophical.

*These are some of the things which I consider to be crucial to "human-level intelligence". If we didn't have them, I don't think we would have gotten very far. I don't believe that intelligence is merely reasoning ability, and that, I guess, is the fundamental problem: what is intelligence?
 
  • #46
FreeMitya said:
How many tasks would we be expecting them to do for which they'd need human-level intelligence (or greater), a growing, learning, independent mind (complete with ambitions, introspection, emotion, etc.)*, and not just sophisticated programming? It seems so frivolous and excessive to me. I'm starting to get apprehensive, though, as this is becoming very philosophical.
- science
- giving access to water/food/... for all and other things which make humans happy
 
  • #47
FreeMitya said:
How many tasks would we be expecting them to do for which they'd need human-level intelligence (or greater), a growing, learning, independent mind (complete with ambitions, introspection, emotion, etc.)*, and not just sophisticated programming? It seems so frivolous and excessive to me. I'm starting to get apprehensive, though, as this is becoming very philosophical.

*These are some of the things which I consider to be crucial to "human-level intelligence". If we didn't have them, I don't think we would have gotten very far. I don't believe that intelligence is merely reasoning ability, and that, I guess, is the fundamental problem: what is intelligence?

If we are expecting them to do tasks and use greater than human level intelligence, I think you can cross out “emotion” straight away. You are right that we wouldn’t be here without our emotions, which are necessary for our survival and replication (fear, love, hunger, etc.) but I don’t see that this is relevant.

If a robot sees that he may lose his arm, the mechanism for taking avoiding action or deciding to launch a counter attack would use electronics, I presume. It took billions of years to arrive at the biochemistry which produces the emotional responses that we have today. They are much too unreliable and you would never think of going this route in robotics.

If you rule out “sophisticated programming” I don’t know how you are going to create AI.

I don’t know what intelligence means either. Is it necessary to define it?

As already said, you have to program the robot’s goals, otherwise the whole exercise is pointless. That’s about where we are now.

.
 
  • #48
mfb said:
- science
- giving access to water/food/... for all and other things which make humans happy

Are all the things listed above really necessary for that? I understand the utilitarian position regarding robotics -- sophisticated robots would certainly be useful -- but would beings which could basically be considered synthetic humans (at least in terms of mental faculties) be required? If they were thinking and feeling just like us, why do we assume they would be so submissive?
 
  • #49
I don't think this requires human-like AIs. But AIs which are more intelligent than humans (measured via their ability to solve those problems) would certainly help.
Human-like AIs ... well, that is tricky. If mind uploading becomes possible, it allows the lifespan to be extended, basically to immortality (for as long as our technology exists). And even without it, I could imagine that some would see such an AI as a more advanced version of a human.
 
  • #50
Johninch said:
If we are expecting them to do tasks and use greater than human level intelligence, I think you can cross out “emotion” straight away. You are right that we wouldn’t be here without our emotions, which are necessary for our survival and replication (fear, love, hunger, etc.) but I don’t see that this is relevant.

If a robot sees that he may lose his arm, the mechanism for taking avoiding action or deciding to launch a counter attack would use electronics, I presume. It took billions of years to arrive at the biochemistry which produces the emotional responses that we have today. They are much too unreliable and you would never think of going this route in robotics.

If you rule out “sophisticated programming” I don’t know how you are going to create AI.

I don’t know what intelligence means either. Is it necessary to define it?

As already said, you have to program the robot’s goals, otherwise the whole exercise is pointless. That’s about where we are now.

.

I say "programming" because if we were to create a super-intelligent machine, in my view, at least, it would immediately transcend programming (in a sense) because it could, in theory, survive on its own. Obviously a lot of programming took place to get it there, but we immediately become unnecessary once the goal is reached. I propose, instead, to make "dumb" robots suited to specific tasks. That is, they can carry out their assigned tasks, but they lack an ability beyond that. We maintain them, therefore creating jobs, and everybody's happy. This is all key because a self-aware robot with any semblance of logical thought would immediately wonder why it is serving us, which could create problems.
 
