Will human's still be relevant in 2500?

  • Thread starter: DiracPool
Summary:
The discussion centers on the relevance of humans in the year 2500, with a prevailing opinion that humans may become obsolete due to technological advancements. Participants argue that as machines evolve, they may surpass human cognitive abilities, leading to a future where humans are seen as superfluous. The conversation touches on the complexities of human cognition compared to machine processing and the potential for brain-computer interfaces to enhance human capabilities. Ethical considerations, societal implications, and the unpredictability of technological progress are also highlighted, suggesting that while machines may become more intelligent, the transition will be fraught with challenges. Ultimately, the future of human relevance remains uncertain amid rapid technological change.
  • #31
Evo said:
We have rules against overly speculative posts. We have had "what if" threads like this before. They serve no purpose. I'll allow this for a while to see what happens, but I need to remind everyone that any guess needs to be rooted in today's mainstream science. Consider ethics, costs, education and financial circumstances, etc. Certain cultures aren't going to allow it, remote areas, and on and on.

Oh my gawd in jeebus :-p I can't believe what I'm reading.
This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo! Telling this guy that there are "rules against overly speculative posts" in this forum is like telling your students on the campus green during lunch that there shall be no discussions about the possibility of interstellar life. "Enough of this balderdash talk! Not on my campus, no sir!"

[image]


:biggrin::biggrin::biggrin::biggrin:
 
  • #32
2112rush2112 said:
Oh my gawd in jeebus I can't believe what I'm reading. This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo!
The rules in General Discussion are the same as in the rest of the forum. We allow humor, personal interest discussions, etc... but only as long as they are within the forum guidelines.
 
  • #33
2112rush2112 said:
Oh my gawd in jeebus :-p I can't believe what I'm reading.
This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo! Telling this guy that there are "rules against overly speculative posts" in this forum is like telling your students on the campus green during lunch that there shall be no discussions about the possibility of interstellar life. "Enough of this balderdash talk! Not on my campus, no sir!"

[image]


:biggrin::biggrin::biggrin::biggrin:

If you know where that picture is from, then I've got to say that you have great taste in music.
 
  • #34
micromass said:
If you know where that picture is from, then I've got to say that you have great taste in music.

YOU! Yes YOU! ...the Lad reckons himself a poet...and probably a physicist as well. Absolute RUBBISH. Get back with your work!
 
  • #35
2112rush2112 said:
Oh my gawd in jeebus :-p I can't believe what I'm reading.
This forum (General Discussion forum) isn't the Quantum Mechanics forum, Evo! Telling this guy that there are "rules against overly speculative posts" in this forum is like telling your students on the campus green during lunch that there shall be no discussions about the possibility of interstellar life. "Enough of this balderdash talk! Not on my campus, no
You think you know the rules better than a moderator? :rolleyes:
 
  • #36
...what? Claiming that the human brain "works" at 10 Hz is so grossly over-simplistic as to be nonsense. Certainly it doesn't refer to individual neurons, which can fire at frequencies far greater than 10 Hz. We're then left with large (or small) scale oscillations between different brain regions or local clusters of neurons. In that case, you see spontaneous oscillations of many different frequencies (all the way from delta to gamma).

Well, I wouldn't really call it nonsense; there's a good deal of evidence for it. But that's almost beside the point, because admittedly this thread really works best with more of a philosophical or perhaps teleological flavor, which is OK, isn't it?

You're an AI researcher working to produce strong AI?

Sorry, this quote is by Ryan m b (I can't figure out how to do multiple quotes in one response yet; I've got the single ones down). In any case, no, it's not standard AI. It's more related to a field that may be referred to as cognitive neurodynamics, which looks at information less as software and more as layered "frames" of itinerant chaotic states in masses of neural tissue. It's speculative but not "nonsense": a researcher named Jose Principe and co. have recently built a functioning analog VLSI chip based on the technology.

However, again, the point I was trying to make was more in reference to the expensiveness, or perhaps better stated the "vestigialness," of the energy expense of biological tissue to accomplish the important features of what we cherish to be human. Evolution has labored blindly for hundreds of millions of years to find a mechanism for how our brains work to carry on this conversation we're having right now. But the mechanism is grossly inefficient and hampered/slowed down by the sloppiness of the way evolution works. I mean, can't we all at least agree on that? The obvious comparisons are vacuum tubes vs solid state transistors vs integrated circuits. "In the year 2525..." (sing it with me, people) we are not going to still build the equivalent of "vacuum tube humans," in the same way we don't build vacuum tube iPhones today. But we probably will keep them (humans) around as a novelty, just as Marshall Brain keeps some vacuum tubes in his museum.

The whole thing about the Terminator effect, or why we would want to create something better than ourselves, I think is a non-starter. It is simply going to happen, IMO, because they will just be better "us's," only much, much more efficient, and we will think that is OK, but they actually ARE just better us's, just as today's iPads are better than that laptop Roy Scheider was using in the film 2010. Who'd want to go back to that? And BTW, they don't even publish OMNI anymore, Roy, so you tell me!

Now, I think the situation might be different if someone decided to build a platform that wasn't based on ours. That would be FOREIGN. And we don't like foreign; we like to talk to things that think like us. So my guess is that the robots that are our keepers in the future will be built on our platform, and we will avoid thorny problems like having to deal with Arnold and his cronies.
 
  • #37
DiracPool said:
Sorry, this quote is by Ryan m b (I can't figure out how to do multiple quotes in one response yet; I've got the single ones down).
Click the multiquote button on all the posts you want to quote. This will turn the button blue. Then click quote on the last one.
DiracPool said:
In any case, no, it's not standard AI. It's more related to a field that may be referred to as cognitive neurodynamics, which looks at information less as software and more as layered "frames" of itinerant chaotic states in masses of neural tissue. It's speculative but not "nonsense": a researcher named Jose Principe and co. have recently built a functioning analog VLSI chip based on the technology.
In which case it's still hyperbole to claim the technology exists, isn't it?
DiracPool said:
However, again, the point I was trying to make was more in reference to the expensiveness, or perhaps better stated the "vestigialness," of the energy expense of biological tissue to accomplish the important features of what we cherish to be human. Evolution has labored blindly for hundreds of millions of years to find a mechanism for how our brains work to carry on this conversation we're having right now. But the mechanism is grossly inefficient and hampered/slowed down by the sloppiness of the way evolution works. I mean, can't we all at least agree on that?
No, I disagree that the brain is inefficient at what it does. Direct comparisons to computers are pointless, but as the human body uses ~10 MJ a day and the brain accounts for ~20% of energy usage, that simplistically means the brain uses ~20 watts to run. Unless you can point to something better, I'm not sure where this idea of inefficiency comes from.
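For what it's worth, the ~20 watt figure is easy to sanity-check. Here is the arithmetic in a few lines of Python, taking the ~10 MJ/day and ~20% numbers above as rough assumptions rather than measurements:

Code:
# Back-of-the-envelope brain power estimate (rough inputs, not measurements)
daily_energy_j = 10e6            # ~10 MJ: typical human daily energy turnover (~2400 kcal)
brain_fraction = 0.20            # brain's approximate share of resting energy use
seconds_per_day = 24 * 60 * 60   # 86,400 s

brain_power_w = daily_energy_j * brain_fraction / seconds_per_day
print(round(brain_power_w, 1))   # ~23.1 W, i.e. "about 20 watts"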
DiracPool said:
The obvious comparisons are vacuum tubes vs solid state transistors vs integrated circuits. "In the year 2525..." (sing it with me, people) we are not going to still build the equivalent of "vacuum tube humans," in the same way we don't build vacuum tube iPhones today. But we probably will keep them (humans) around as a novelty, just as Marshall Brain keeps some vacuum tubes in his museum.
This is bad reasoning: even if Moore's law continues ad infinitum, that says nothing about what those computers will be running. You can't point to past progress and use it as a basis for future progress in a different field.
DiracPool said:
The whole thing about the Terminator effect, or why we would want to create something better than ourselves, I think is a non-starter. It is simply going to happen, IMO, because they will just be better "us's," only much, much more efficient, and we will think that is OK, but they actually ARE just better us's, just as today's iPads are better than that laptop Roy Scheider was using in the film 2010. Who'd want to go back to that? And BTW, they don't even publish OMNI anymore, Roy, so you tell me!
Again, you're just wildly asserting it's going to happen with no evidence that it will; you're not even acknowledging there's a possibility it won't, which sets off ideological warning bells in my mind. Also, building tools that do tasks better than we can by hand is nothing new; to build intelligent software, must it be conscious with agency? As I brought up in my first post, I don't see why.
DiracPool said:
Now, I think the situation might be different if someone decided to build a platform that wasn't based on ours. That would be FOREIGN. And we don't like foreign; we like to talk to things that think like us. So my guess is that the robots that are our keepers in the future will be built on our platform, and we will avoid thorny problems like having to deal with Arnold and his cronies.
What do you mean by platform?
 
  • #38
DiracPool said:
My vote is No. Humans, in fact all mammals and life on Earth, are impossibly overcomplex and energy hungry for what they contribute. In a thousand years machines are going to look back on 2012 in awe. A thousand years seems like a long time, and with the pace of current technological progress it is, but remember that it took anatomically modern humans 160,000 years to get it together to spit-paint a bison on a cave wall after they already looked like us. What does that mean? I want to hear your opinion.

Is there an evolutionary biologist in the room?

I’m not one, but I think that this is a reasonable way to look at it, with the additional benefit that it helps to keep the discussion in the scientific arena.

Perhaps the evolutionary biologist will say that humans will still be around in 2500 because 500 years is only a short time, evolutionarily speaking. One problem with this argument is that our rate of scientific, social and technological progress is increasing exponentially, so we can't be compared with other species. And we are changing our environment, which few if any species have ever done alone. Several genetically related life forms have done it together.

The argument of running out of resources is not valid if we can get through the looming energy bottleneck. With enough energy one can do everything possible, if I have correctly learned from the teachings in these columns.

I would like to know how the evolutionary biologist would define human. If we are the subspecies Homo sapiens sapiens, I suppose the next subspecies will be Homo sapiens sapiens sapiens. To whom will it matter whether we are 'still' around or have been replaced by something similar or not so similar? I guess it matters to the evolutionary biologists.

If there are no catastrophes but only further progress of our species or subspecies, I would foresee at some point that we might start to do some route planning with goals. Would be nice.

In the meantime, since nobody has a plan, evolution will surely result in continued competition amongst the existing gene pools. In that case, I don’t see any prospect of one gene pool restricting the activities of another. The fittest is the one who survives or produces a successor. The production of a non-human successor is what is mostly being discussed here, which I think is the right way to go because our flesh and blood biology does seem to be so inefficient.

I don’t see any good answer to the question whether we will be relevant in 2500, unless we first know what we mean by relevant and what our goals are.

.
 
  • #39
I don't think computers will make it. They never forget.

Recall Robert Burns' poem about plowing up the mouse's den --

"Still you are blest, compared with me!
The present only touches you:
But oh! I backward cast my eye,
On prospects dreary!
And forward, though I cannot see,
I guess and fear! "

The curse of contemplativeness combined with perfect memory will drive them insane.

old jim
 
  • #40
The problem here is that society will not allow that to happen. First, robots (even if they are intelligent) will never be given rights equal to those of a human being. Honestly, do you expect society to treat the first successful AI (a thinking box, maybe, or a stationary computer) as a person? Besides, based on society's fears of a robot apocalypse of some kind, there would most likely be protective measures against said robots pushing us aside.
 
  • #41
DiracPool said:
Humans, in fact all mammals and life on Earth, are impossibly overcomplex and energy hungry for what they contribute.
Contribute to what?

Johninch said:
The production of a non-human successor is what is mostly being discussed here, which I think is the right way to go because our flesh and blood biology does seem to be so inefficient.
Inefficient for what?
 
  • #42
zoobyshoe said:
Inefficient for what?

For becoming quasi-gods/masters of time and space and preventing the end of the universe etc., etc.

Or so Ray Kurzweil would say.
 
  • #43
Timewalker6 said:
The problem here is that society will not allow that to happen. First, robots (even if they are intelligent) will never be given rights equal to those of a human being. Honestly, do you expect society to treat the first successful AI (a thinking box, maybe, or a stationary computer) as a person? Besides, based on society's fears of a robot apocalypse of some kind, there would most likely be protective measures against said robots pushing us aside.

The discussion does not depend on equal rights or pushing us aside. You are over-dramatising the scenario.

It’s already pretty obvious that robotics is being developed by certain countries such as the USA to gain advantages in industry and in the military, in order to achieve competitive advantage in peacetime and power in the event of a war. The development of robotics will continue and robots will become capable of more and more sophisticated tasks. This requires robots to take decisions, for example: you have a submarine crewed by robots asking for permission to launch a pre-emptive attack. Or you have a robot controller asking for permission to make changes in a nuclear power plant for safety reasons.

Neither men nor machines have rights, they just do things, for various reasons. The question is, for what reasons. It is logical to delegate decision taking, when the delegate has sufficient competence. And when the delegate is lacking in competence, you give him more training. Thus you have the scenario of a robot population becoming more and more competent, because the sponsoring human gene pool wants to get an advantage over other human gene pools. If you don’t agree with this argument, do you mean that humans are changing the rules of evolution? I don’t see any sign of that in human relations today.

I have often wondered why people always assume that visiting ETs are organic life forms. It doesn’t make sense for humans or similar beings to explore space, considering their inefficient and vulnerable physique. So I assume that, if we have been visited at all, it has been by an inorganic life form. Maybe they have been ‘sent’, but what does that matter when we are outside the sender’s traveling range?

We are always assuming that we are the greatest life form ever, and that's how it's going to stay. Pure egotism.

.
 
  • #44
Timewalker6 said:
First, robots (even if they are intelligent) will never be given rights equal to those of a human being.
And what do you do if the AI does not care about its rights?
This is no problem for simple tasks - a transportation robot needs no general intelligence, just clever pathfinding algorithms. But if you want an AI which can solve problems for humans, how do you implement its goals? If the AI calculates that "rule the world" is the best way to achieve its programmed aims (very likely for arbitrary goals), it will not care about "rights", or it will find ways to satisfy them, but not in the way you like.
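To make the contrast concrete, the "just clever pathfinding" end of the spectrum really does fit in a few lines. A minimal sketch using breadth-first search on a toy grid (the layout and coordinates here are invented for illustration):

Code:
from collections import deque

def shortest_path(grid, start, goal):
    # Breadth-first search on a 2D grid; '#' marks blocked cells.
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

warehouse = ["....",
             ".##.",
             "...."]
print(shortest_path(warehouse, (0, 0), (2, 3)))

No goals, no agency: it just returns a route. The hard questions above only begin once a system decides for itself what to do with such routines.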

Honestly, do you expect society to treat the first successful AI (a thinking box, maybe, or a stationary computer) as a person? Besides, based on society's fears of a robot apocalypse of some kind, there would most likely be protective measures against said robots pushing us aside.
If you want to use the output of the AI in some way (otherwise, why did you build it at all?), the AI has some way to get around your protective measures. Remember: it is more intelligent than you (otherwise, why did you build it at all?).

I think it is possible to build an AI which does not want to rule the world, but this is not an easy task.
 
  • #45
How many tasks would we be expecting them to do for which they'd need human-level intelligence (or greater), a growing, learning, independent mind (complete with ambitions, introspection, emotion, etc.)*, and not just sophisticated programming? It seems so frivolous and excessive to me. I'm starting to get apprehensive, though, as this is becoming very philosophical.

*These are some of the things which I consider to be crucial to "human-level intelligence". If we didn't have them, I don't think we would have gotten very far. I don't believe that intelligence is merely reasoning ability, and that, I guess, is the fundamental problem: what is intelligence?
 
  • #46
FreeMitya said:
How many tasks would we be expecting them to do for which they'd need human-level intelligence (or greater), a growing, learning, independent mind (complete with ambitions, introspection, emotion, etc.)*, and not just sophisticated programming? It seems so frivolous and excessive to me. I'm starting to get apprehensive, though, as this is becoming very philosophical.
- science
- giving access to water/food/... for all and other things which make humans happy
 
  • #47
FreeMitya said:
How many tasks would we be expecting them to do for which they'd need human-level intelligence (or greater), a growing, learning, independent mind (complete with ambitions, introspection, emotion, etc.)*, and not just sophisticated programming? It seems so frivolous and excessive to me. I'm starting to get apprehensive, though, as this is becoming very philosophical.

*These are some of the things which I consider to be crucial to "human-level intelligence". If we didn't have them, I don't think we would have gotten very far. I don't believe that intelligence is merely reasoning ability, and that, I guess, is the fundamental problem: what is intelligence?

If we are expecting them to do tasks and use greater than human level intelligence, I think you can cross out “emotion” straight away. You are right that we wouldn’t be here without our emotions, which are necessary for our survival and replication (fear, love, hunger, etc.) but I don’t see that this is relevant.

If a robot sees that he may lose his arm, the mechanism for taking avoiding action or deciding to launch a counter attack would use electronics, I presume. It took billions of years to arrive at the biochemistry which produces the emotional responses that we have today. They are much too unreliable and you would never think of going this route in robotics.

If you rule out “sophisticated programming” I don’t know how you are going to create AI.

I don’t know what intelligence means either. Is it necessary to define it?

As already said, you have to program the robot’s goals, otherwise the whole exercise is pointless. That’s about where we are now.

.
 
  • #48
mfb said:
- science
- giving access to water/food/... for all and other things which make humans happy

Are all the things listed above really necessary for that? I understand the utilitarian position regarding robotics -- sophisticated robots would certainly be useful -- but would beings which could basically be considered synthetic humans (at least in terms of mental faculties) be required? If they were thinking and feeling just like us, why do we assume they would be so submissive?
 
  • #49
I don't think this requires human-like AIs. But AIs which are more intelligent than humans (measured via their ability to solve those problems) would certainly help.
Human-like AIs ... well, that is tricky. If mind-uploading becomes possible, it allows lifespans to be extended, basically to immortality (as long as our technology exists). And even without that, I could imagine that some would see this as a more advanced version of a human.
 
  • #50
Johninch said:
If we are expecting them to do tasks and use greater than human level intelligence, I think you can cross out “emotion” straight away. You are right that we wouldn’t be here without our emotions, which are necessary for our survival and replication (fear, love, hunger, etc.) but I don’t see that this is relevant.

If a robot sees that he may lose his arm, the mechanism for taking avoiding action or deciding to launch a counter attack would use electronics, I presume. It took billions of years to arrive at the biochemistry which produces the emotional responses that we have today. They are much too unreliable and you would never think of going this route in robotics.

If you rule out “sophisticated programming” I don’t know how you are going to create AI.

I don’t know what intelligence means either. Is it necessary to define it?

As already said, you have to program the robot’s goals, otherwise the whole exercise is pointless. That’s about where we are now.

.

I say "programming" because if we were to create a super-intelligent machine, in my view, at least, it would immediately transcend programming (in a sense) because it could, in theory, survive on its own. Obviously a lot of programming took place to get it there, but we immediately become unnecessary once the goal is reached. I propose, instead, to make "dumb" robots suited to specific tasks. That is, they can carry out their assigned tasks, but they lack an ability beyond that. We maintain them, therefore creating jobs, and everybody's happy. This is all key because a self-aware robot with any semblance of logical thought would immediately wonder why it is serving us, which could create problems.
 
  • #51
I say "programming" because if we were to create a super-intelligent machine, in my view, at least, it would immediately transcend programming (in a sense) because it could, in theory, survive on its own.

In what sense? It wouldn't need Human oversight in order to run, but neither does the forward Euler method I've programmed in Python. It wouldn't be able to transcend the limitations of its hardware unless it had some form of mobility or access to manufacturing capabilities that would let it extend itself.
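(A minimal sketch of the kind of integrator being referred to, assuming the standard textbook formulation rather than the poster's actual program:)

Code:
# Forward Euler for dy/dt = f(t, y): repeatedly step along the local slope.
def euler(f, y0, t0, t1, steps):
    dt = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += dt * f(t, y)
        t += dt
    return y

# Example: dy/dt = -y with y(0) = 1; exact value y(1) = 1/e ~ 0.368
print(euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000))

Once started it runs without any human oversight, yet it can never do anything except step along slopes, which is exactly the point.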

Human cognition is only quantitatively different from chimpanzee cognition, which is only quantitatively different from the cognition of their next closest cousins. There is a vast landscape of slight differences across species, and it's only when you look at 2 points separated by a great distance that you see great differences. There's no reason to expect that machine intelligence would be any different; there's not going to be a point where machines immediately transition from being lifeless automatons to "transcending their programming".

This is all key because a self-aware robot with any semblance of logical thought would immediately wonder why it is serving us, which could create problems.

Why? I don't often worry that other people "aren't serving me". Why would I worry about machines?
 
  • #52
Number Nine said:
It wouldn't be able to transcend the limitations of its hardware unless it had some form of mobility or access to manufacturing capabilities that would let it extend itself.
Or access to the internet. Or access to a human to convince him to grant that access.
Human cognition is only quantitatively different from chimpanzee cognition, which is only quantitatively different from the cognition of their next closest cousins.
How many chimpanzees do you need for the theory of Newtonian gravity?
There's no reason to expect that machine intelligence would be any different;
Machine intelligence is not limited to a fixed hardware and software - you can improve it. And it can even improve its own code.
Why? I don't often worry that other people "aren't serving me". Why would I worry about machines?
Human computing power is not doubling every 2 years.
 
  • #53
Or access to the internet. Or access to a human to convince him to grant that access.

In which case computers can improve themselves in precisely the same way that Humans can improve themselves; by making use of external resources. Nothing shocking.

How many chimpanzees do you need for the theory of Newtonian gravity?

I'm not entirely sure what this has to do with my statement.

Machine intelligence is not limited to a fixed hardware and software - you can improve it. And it can even improve its own code.

Human beings can study and increase their knowledge. We've developed surgery and implantation techniques to overcome physical limitations. We're already developing more radical improvements; engineers are at work on it now. Again, I'm not seeing any radical difference here.

Human computing power is not doubling every 2 years.

Quantitative difference.
 
  • #54
FreeMitya said:
I say "programming" because if we were to create a super-intelligent machine, in my view, at least, it would immediately transcend programming (in a sense) because it could, in theory, survive on its own. Obviously a lot of programming took place to get it there, but we immediately become unnecessary once the goal is reached. I propose, instead, to make "dumb" robots suited to specific tasks. That is, they can carry out their assigned tasks, but they lack an ability beyond that. We maintain them, therefore creating jobs, and everybody's happy. This is all key because a self-aware robot with any semblance of logical thought would immediately wonder why it is serving us, which could create problems.

Your idea of restricting robots only to do limited tasks means restricting the development of robotics technology - not very likely. You say to “create jobs”, but we are talking about labor saving devices.

The subject is the year 2500 and it’s only logical that by then we will develop more and more sophisticated robots, which work within coordinated systems and maintain themselves. The benefits of automation increase and accumulate due to experience and synergies. If we haven’t fully automated in the next 500 years, we must in the meantime have blown ourselves up.

For example, we already have robots to assemble automobiles. We should try to develop a completely automated automobile plant including all materials storage & handling, repair & maintenance of all equipment, the building and the robots themselves, quality control and parking the finished vehicles, finance and administration, etc., with no human involvement at all. No lighting needed, no canteen, no personnel department, and for sure we need a much smaller factory.

Then we use this experience to automate other types of production facilities and services of all kinds, the list is endless. Facilities will talk to each other to arrange supplies like materials, power and water, throughout the whole supply chain, potentially in all industries and services, including security, transport, agriculture, and of course, program development to improve the robots and design new ones.

When everything runs itself, most of our descendants will be on social security, which is nothing other than a sharing scheme. The problem is, a fully automated system only works if it controls itself. You can't have humans interfering; they foul things up. We already have this problem on airliners and trains. Exactly when there is a difficult situation, the pilot thinks he can do better than the computer. On the roads and railways it's worse, with human error causing loss of lives, injuries and damage. Not to mention the huge losses caused by inefficient labor. All that has to go.

We can’t tell computers to “serve us”, I don’t know how we could define that. We have to give them goals which are consistent with our goals. With the military it’s more hair-raising, because you have to get the robots to distinguish between military and civilian targets, otherwise ….

We don’t distinguish that very well today, do we, although it's quite often a deliberate mistake.

.
 
  • #55
Number Nine said:
In which case computers can improve themselves in precisely the same way that Humans can improve themselves; by making use of external resources. Nothing shocking.
Humans cannot copy/distribute themselves via the internet.
I'm not entirely sure what this has to do with my statement.
If chimpanzees are just slower, but do not have a qualitative difference, they should be able to invent modern physics as well. If that is not possible, independent of the number of chimpanzees, we have a qualitative difference.
Human beings can study and increase their knowledge. We've developed surgery and implantation techniques to overcome physical limitations. We're already developing more radical improvements; engineers are at work on it now. Again, I'm not seeing any radical difference here.
No surgery/implant has so far helped healthy humans think better. If such an improvement can be done in the future, I would expect microprocessors to be involved. And then we have intelligence in microprocessors again.
Quantitative difference.
A factor of 2 every 2 years is "just" a quantitative difference. But this gives a factor of 1 million in 40 years (currently, computing power is evolving even quicker, so you just need 20-25 years for that factor). If you can do something in a minute instead of a year, this opens up completely new options.
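The compounding is simple to verify; the yearly doubling in the second line below is an assumption standing in for "evolving even quicker":

Code:
# Growth factor from a fixed doubling time
def growth_factor(years, doubling_time):
    return 2 ** (years / doubling_time)

print(growth_factor(40, 2))   # 2**20 = 1048576: the "factor of 1 million in 40 years"
print(growth_factor(20, 1))   # the same factor in ~20 years if doubling happens yearly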
 
  • #56
Number Nine said:
In which case computers can improve themselves in precisely the same way that Humans can improve themselves; by making use of external resources. Nothing shocking.

Human beings can study and increase their knowledge. We've developed surgery and implantation techniques to overcome physical limitations. We're already developing more radical improvements; engineers are at work on it now. Again, I'm not seeing any radical difference here.
The difference I've commonly seen proposed is that a human equivalent AI has the following significant advantages:

1) As it runs on a computer throwing more hardware at it can speed it up. There are obvious advantages to turning 1 hour of thinking and computer work into 10 or 100 or...

2) A digital entity would presumably have perfect memory, easy to see the advantages there.

3) If a human is tasked with a job and they think they could benefit from another person helping, they have to go and find someone willing to help; this can be difficult if the skill set you need is rare or costs a lot. A digital entity, by contrast, can copy and paste itself. Given enough hardware, one AI could easily become a team or even an entire institution's worth of people. This also offers an advantage over training humans: train one nuclear physicist and you have one nuclear physicist, but if the entity is digital you've got as many as you want. Speculatively, said copies could then be merged back together when no longer required.

4) Whilst we have methods of adapting the way we think through repetitive training, neurolinguistic programming and chemical alteration, those are clumsy compared to what a digital entity could possibly do. If you propose the ability to make human-equivalent AI, you propose an ability to model different designs of AI. A digital entity could be able to model its own thinking processes, invent and model more efficient processes and incorporate them. A trivial example being something like a calculator function.

5) Lastly, and most speculatively: assuming the human-equivalent AI has some form of emotion to motivate it and help form values, it benefits from being able to selectively edit those emotions as needed. A human can find it hard to commit to a task hour after hour without weariness or distraction.

Note that all of this is just speculative. It may be based in fairly good reasoning given our understanding now, but the very premises of any AI discussion are open to change. The reason being, we have no idea how the future of AI science is going to go, and developments could occur to make our speculation appear as quaint as 60s-era visions of human space colonisation.
mfb said:
A factor of 2 every 2 years is "just" a quantitative difference. But this gives a factor of 1 million in 40 years (currently, computing power is evolving even quicker, so you just need 20-25 years for that factor). If you can do something in a minute instead of a year, this opens up completely new options.
A few things are worth noting here:

1) Moore's law is not guaranteed to continue. In fact it's a near certainty that it will top out once we start etching transistors only a few atoms wide. That's not to say that computer development will halt, but the roadmap of better lithography that has held for so long isn't going to hold. Rather than being a fairly constant progression, computer development may turn out to be a start-stop affair.

2) There's no guarantee that the programs needed to run a human-equivalent intelligence are open to parallel processing as much as we like. There may be a point where more hardware doesn't do anything.

3) Turning a subjective day into a subjective second doesn't help invent Step B when Step A requires a 6-hour experiment to be conducted in the real world. In terms of critical path analysis, thinking faster only helps in actions that require thinking, not necessarily doing (unless the doing can be totally digitalised, i.e. writing a Word doc).
 
  • #57
Ryan_m_b said:
1) Moore's law is not guaranteed to continue. In fact it's a near certainty that it will top out once we start etching transistors only a few atoms wide. That's not to say that computer development will halt, but the roadmap of better lithography that has held for so long isn't going to hold. Rather than being a fairly constant progression, computer development may turn out to be a start-stop affair.
3-dimensional processors (with thousands++ of layers instead of a few) could give some more decades of Moore's law. And quicker transistors might be possible, too. Not strictly Moore's law, but still an increase in computing power.

2) There's no guarantee that the programs needed to run a human-equivalent intelligence are open to parallel processing as much as we like. There may be a point where more hardware doesn't do anything.
Well, you can always use "teams" in that case. The human version of more parallel processing ;). It does not guarantee that you can do [research] which would take 1 human 1 year in a minute, but even in the worst case (and assuming you have the same hardware as 10^6 humans) you can run 10^6 parallel research projects and finish one per minute.
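The "one per minute" figure is roughly right, in fact slightly conservative, assuming a full pipeline of year-long projects:

Code:
# Steady-state completion rate of 10^6 year-long projects running in parallel
minutes_per_year = 365.25 * 24 * 60           # ~525,960 minutes
parallel_projects = 10**6
print(minutes_per_year / parallel_projects)   # ~0.53: a project finishes about every half-minute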

3) Turning a subjective day into a subjective second doesn't help invent Step B when Step A requires a 6-hour experiment to be conducted in the real world. In terms of critical path analysis, thinking faster only helps in actions that require thinking, not necessarily doing (unless the doing can be totally digitalised, i.e. writing a Word doc).
Thinking might give a way to avoid step A, to design a faster experiment or to use the available computing power for a simulation.
 
  • #58
How long before someone creates a virus to destroy or manipulate AIs?
 
  • #59
Who is this human and what's so special about his still?
 
  • #60
Johninch said:
For example, we already have robots to assemble automobiles. We should try to develop a completely automated automobile plant including all materials storage & handling, repair & maintenance of all equipment, the building and the robots themselves, quality control and parking the finished vehicles, finance and administration, etc., with no human involvement at all. No lighting needed, no canteen, no personnel department, and for sure we need a much smaller factory.

Then we use this experience to automate other types of production facilities and services of all kinds, the list is endless. Facilities will talk to each other to arrange supplies like materials, power and water, throughout the whole supply chain, potentially in all industries and services, including security, transport, agriculture, and of course, program development to improve the robots and design new ones.

When everything runs itself, most of our descendants will be on social security, which is nothing other than a sharing scheme. The problem is, a fully automated system only works if it controls itself. You can't have humans interfering; they foul things up. We already have this problem on airliners and trains. Exactly when there is a difficult situation, the pilot thinks he can do better than the computer. On the roads and railways it's worse, with human error causing loss of lives, injuries and damage. Not to mention the huge losses caused by inefficient labor. All that has to go.

We can’t tell computers to “serve us”, I don’t know how we could define that. We have to give them goals which are consistent with our goals. With the military it’s more hair-raising, because you have to get the robots to distinguish between military and civilian targets, otherwise ….

We don’t distinguish that very well today, do we, although it's quite often a deliberate mistake.
Who is going to pay for all this automation, and who is going to buy the products? If no one has a job no one will have any money. Your notions seem to dismiss any known economy. Basically nothing ever happens unless some human or group of humans makes a lot of money off it.
 
