Will human's still be relevant in 2500?

  • Thread starter DiracPool
  • #51
807
23
I say "programming" because if we were to create a super-intelligent machine, in my view, at least, it would immediately transcend programming (in a sense) because it could, in theory, survive on its own.

In what sense? It wouldn't need human oversight in order to run, but neither does the forward Euler method I've programmed in Python. It wouldn't be able to transcend the limitations of its hardware unless it had some form of mobility or access to manufacturing capabilities that would let it extend itself.

Human cognition is only quantitatively different from chimpanzee cognition, which is only quantitatively different from the cognition of their next closest cousins. There is a vast landscape of slight differences across species, and it's only when you look at two points separated by a great distance that you see great differences. There's no reason to expect that machine intelligence would be any different; there's not going to be a point where machines immediately transition from being lifeless automatons to "transcending their programming".

This is all key because a self-aware robot with any semblance of logical thought would immediately wonder why it is serving us, which could create problems.

Why? I don't often worry that other people "aren't serving me". Why would I worry about machines?
 
  • #52
35,630
12,174
It wouldn't be able to transcend the limitations of its hardware unless it had some form of mobility or access to manufacturing capabilities that would let it extend itself.
Or access to the internet. Or access to a human to convince him to grant that access.
Human cognition is only quantitatively different from chimpanzee cognition, which is only quantitatively different from the cognition of their next closest cousins.
How many chimpanzees do you need for the theory of Newtonian gravity?
There's no reason to expect that machine intelligence would be any different;
Machine intelligence is not limited to fixed hardware and software - you can improve it. And it can even improve its own code.
Why? I don't often worry that other people "aren't serving me". Why would I worry about machines?
Human computing power is not doubling every 2 years.
 
  • #53
807
23
Or access to the internet. Or access to a human to convince him to grant that access.

In which case computers can improve themselves in precisely the same way that humans can improve themselves: by making use of external resources. Nothing shocking.

How many chimpanzees do you need for the theory of Newtonian gravity?

I'm not entirely sure what this has to do with my statement.

Machine intelligence is not limited to fixed hardware and software - you can improve it. And it can even improve its own code.

Human beings can study and increase their knowledge. We've developed surgery and implantation techniques to overcome physical limitations. We're already developing more radical improvements; engineers are at work on them now. Again, I'm not seeing any radical difference here.

Human computing power is not doubling every 2 years.

Quantitative difference.
 
  • #54
131
1
I say "programming" because if we were to create a super-intelligent machine, in my view, at least, it would immediately transcend programming (in a sense) because it could, in theory, survive on its own. Obviously a lot of programming took place to get it there, but we immediately become unnecessary once the goal is reached. I propose, instead, to make "dumb" robots suited to specific tasks. That is, they can carry out their assigned tasks, but they lack an ability beyond that. We maintain them, therefore creating jobs, and everybody's happy. This is all key because a self-aware robot with any semblance of logical thought would immediately wonder why it is serving us, which could create problems.

Your idea of restricting robots to limited tasks means restricting the development of robotics technology - not very likely. You say to “create jobs”, but we are talking about labor-saving devices.

The subject is the year 2500 and it’s only logical that by then we will develop more and more sophisticated robots, which work within coordinated systems and maintain themselves. The benefits of automation increase and accumulate due to experience and synergies. If we haven’t fully automated in the next 500 years, we must in the meantime have blown ourselves up.

For example, we already have robots to assemble automobiles. We should try to develop a completely automated automobile plant including all materials storage & handling, repair & maintenance of all equipment, the building and the robots themselves, quality control and parking the finished vehicles, finance and administration, etc., with no human involvement at all. No lighting needed, no canteen, no personnel department, and for sure we need a much smaller factory.

Then we use this experience to automate other types of production facilities and services of all kinds, the list is endless. Facilities will talk to each other to arrange supplies like materials, power and water, throughout the whole supply chain, potentially in all industries and services, including security, transport, agriculture, and of course, program development to improve the robots and design new ones.

When everything runs itself, most of our descendants will be on social security, which is nothing other than a sharing scheme. The problem is, a fully automated system only works if it controls itself. You can’t have humans interfering; they foul things up. We already have this problem on airliners and trains. Exactly when there is a difficult situation, the pilot thinks he can do better than the computer. On the roads and railways it’s worse, with human error causing loss of lives, injuries and damage. Not to mention the huge losses caused by inefficient labor. All that has to go.

We can’t tell computers to “serve us”, I don’t know how we could define that. We have to give them goals which are consistent with our goals. With the military it’s more hair-raising, because you have to get the robots to distinguish between military and civilian targets, otherwise ….

We don’t distinguish that very well today, do we, although it's quite often a deliberate mistake.

 
  • #55
35,630
12,174
In which case computers can improve themselves in precisely the same way that humans can improve themselves: by making use of external resources. Nothing shocking.
Humans cannot copy/distribute themselves via the internet.
I'm not entirely sure what this has to do with my statement.
If chimpanzees are just slower, but do not have a qualitative difference, they should be able to invent modern physics as well. If that is not possible, independent of the number of chimpanzees, we have a qualitative difference.
Human beings can study and increase their knowledge. We've developed surgery and implantation techniques to overcome physical limitations. We're already developing more radical improvements; engineers are at work on them now. Again, I'm not seeing any radical difference here.
So far, no surgery or implant has improved thinking for healthy humans. If such an improvement can be done in the future, I would expect microprocessors to be involved. And then we have intelligence in microprocessors again.
Quantitative difference.
A factor of 2 every 2 years is "just" a quantitative difference. But this gives a factor of 1 million in 40 years (currently, computing power is evolving even quicker, so you just need 20-25 years for that factor). If you can do something in a minute instead of a year, this opens up completely new options.
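The arithmetic behind "a factor of 1 million in 40 years" is just compound doubling, and is easy to check (a sketch; the function name is my own):

```python
def growth_factor(years, doubling_time=2.0):
    """Total growth after `years` if capacity doubles every `doubling_time` years."""
    return 2 ** (years / doubling_time)

print(growth_factor(40))                     # 1048576.0 — about a million in 40 years
print(growth_factor(20, doubling_time=1.0))  # 1048576.0 — the same factor in 20 years
```

Forty years at a two-year doubling time is twenty doublings, and 2^20 ≈ 10^6; halve the doubling time and the same factor arrives in half the calendar time, matching the 20-25 year figure above.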
 
  • #56
Ryan_m_b
Staff Emeritus
Science Advisor
5,917
719
In which case computers can improve themselves in precisely the same way that humans can improve themselves: by making use of external resources. Nothing shocking.

Human beings can study and increase their knowledge. We've developed surgery and implantation techniques to overcome physical limitations. We're already developing more radical improvements; engineers are at work on them now. Again, I'm not seeing any radical difference here.
The difference I've commonly seen proposed is that a human equivalent AI has the following significant advantages:

1) As it runs on a computer, throwing more hardware at it can speed it up. There are obvious advantages to turning 1 hour of thinking and computer work into 10 or 100 or...

2) A digital entity would presumably have perfect memory, easy to see the advantages there.

3) If a human is tasked with a job and they think they could benefit from another person helping, they have to go and find someone willing to help; this can be difficult if the skill set you need is rare or costs a lot. A digital entity by contrast can copy and paste itself. Given enough hardware one AI could easily become a team or even an entire institution's worth of people. This also offers an advantage over training humans: train one nuclear physicist and you have one nuclear physicist, but if the entity is digital you've got as many as you want. Speculatively, said copies could then be merged back together when no longer required.

4) Whilst we have methods of adapting the way we think through repetitive training, neurolinguistic programming and chemical alteration, those are clumsy compared to what a digital entity could possibly do. If you propose the ability to make human equivalent AI, you propose an ability to model different designs of AI. A digital entity could be able to model its own thinking processes, invent and model more efficient processes and incorporate them. A trivial example being something like a calculator function.

5) Lastly, and most speculatively: assuming the human equivalent AI has some form of emotion to motivate it and help form values, it benefits from being able to selectively edit those emotions as needed. A human can find it hard to commit to a task hour after hour without weariness or distraction.

Note that all of this is just speculative. It may be grounded in fairly good reasoning given our understanding now, but the very premises of any AI discussion are open to change. The reason being that we have no idea how the future of AI science is going to go, and developments could occur to make our speculation appear as quaint as 60s era visions of human space colonisation.
A factor of 2 every 2 years is "just" a quantitative difference. But this gives a factor of 1 million in 40 years (currently, computing power is evolving even quicker, so you just need 20-25 years for that factor). If you can do something in a minute instead of a year, this opens up completely new options.
A few things are worth noting here:

1) Moore's law is not guaranteed to continue. In fact it's a near certainty that it will top out once we start etching transistors only a few atoms wide. That's not to say that computer development will halt, but the roadmap of better lithography that has held for so long isn't going to hold. Rather than being a fairly constant progression, computer development may turn out to be a start-stop affair.

2) There's no guarantee that the programs needed to run a human equivalent intelligence are open to as much parallel processing as we like. There may be a point where more hardware doesn't do anything.

3) Turning a subjective day into a subjective second doesn't help invent Step B when Step A requires a 6 hour experiment to be conducted in the real world. In terms of critical path analysis, thinking faster only helps in actions that require thinking, not necessarily doing (unless the doing can be totally digitalised, i.e. writing a Word doc).
 
  • #57
35,630
12,174
1) Moore's law is not guaranteed to continue. In fact it's a near certainty that it will top out once we start etching transistors only a few atoms wide. That's not to say that computer development will halt, but the roadmap of better lithography that has held for so long isn't going to hold. Rather than being a fairly constant progression, computer development may turn out to be a start-stop affair.
3-dimensional processors (with thousands++ of layers instead of a few) could give some more decades of Moore's law. And quicker transistors might be possible, too. Not strictly Moore's law, but still an increase in computing power.

2) There's no guarantee that the programs needed to run a human equivalent intelligence are open to as much parallel processing as we like. There may be a point where more hardware doesn't do anything.
Well, you can always use "teams" in that case. The human version of more parallel processing ;). It does not guarantee that you can do [research] which would take 1 human 1 year in a minute, but even in the worst case (and assuming you have the same hardware as 10^6 humans) you can run 10^6 parallel research projects and finish one per minute.
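The worst-case throughput argument here checks out numerically (a sketch using the post's assumed numbers):

```python
MINUTES_PER_YEAR = 365 * 24 * 60   # 525600
parallel_projects = 10 ** 6        # the post's assumed hardware budget

# Each project takes a full year, but with this many running staggered
# in parallel, completions arrive at a steady average rate:
completions_per_minute = parallel_projects / MINUTES_PER_YEAR
print(round(completions_per_minute, 2))  # 1.9 — about two finished per minute
```

Latency per project stays at one year; only throughput improves. That is the "teams" fallback: even with no per-project speedup at all, a million-fold hardware budget still delivers results roughly every minute.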

3) Turning a subjective day into a subjective second doesn't help invent Step B when Step A requires a 6 hour experiment to be conducted in the real world. In terms of critical path analysis, thinking faster only helps in actions that require thinking, not necessarily doing (unless the doing can be totally digitalised, i.e. writing a Word doc).
Thinking might give a way to avoid step A, to design a faster experiment or to use the available computing power for a simulation.
 
  • #58
Evo
Mentor
23,531
3,145
How long before someone creates a virus to destroy or manipulate AIs?
 
  • #59
Borg
Science Advisor
Gold Member
1,921
2,651
Who is this human and what's so special about his still?
 
  • #60
6,362
1,282
For example, we already have robots to assemble automobiles. We should try to develop a completely automated automobile plant including all materials storage & handling, repair & maintenance of all equipment, the building and the robots themselves, quality control and parking the finished vehicles, finance and administration, etc., with no human involvement at all. No lighting needed, no canteen, no personnel department, and for sure we need a much smaller factory.

Then we use this experience to automate other types of production facilities and services of all kinds, the list is endless. Facilities will talk to each other to arrange supplies like materials, power and water, throughout the whole supply chain, potentially in all industries and services, including security, transport, agriculture, and of course, program development to improve the robots and design new ones.

When everything runs itself, most of our descendants will be on social security, which is nothing other than a sharing scheme. The problem is, a fully automated system only works if it controls itself. You can’t have humans interfering; they foul things up. We already have this problem on airliners and trains. Exactly when there is a difficult situation, the pilot thinks he can do better than the computer. On the roads and railways it’s worse, with human error causing loss of lives, injuries and damage. Not to mention the huge losses caused by inefficient labor. All that has to go.

We can’t tell computers to “serve us”, I don’t know how we could define that. We have to give them goals which are consistent with our goals. With the military it’s more hair-raising, because you have to get the robots to distinguish between military and civilian targets, otherwise ….

We don’t distinguish that very well today, do we, although it's quite often a deliberate mistake.
Who is going to pay for all this automation and who is going to buy the products? If no one has a job, no one will have any money. Your notions seem to dismiss any known economy. Basically nothing ever happens unless some human or group of humans makes a lot of money off it.
 
  • #61
131
1
Who is going to pay for all this automation and who is going to buy the products? If no one has a job, no one will have any money. Your notions seem to dismiss any known economy. Basically nothing ever happens unless some human or group of humans makes a lot of money off it.

Our capitalist system has already started down the automation and labor-saving path in order to protect profits, and the result is a higher level of permanent unemployment in many advanced countries. The process is gradual. Large companies, which have the most "fat" in terms of excess labor, shed it at every opportunity and excuse.

In the 20 years leading up to my recent retirement, my financial job in a large pharmaceutical company was transformed by computerisation – productivity climbed tremendously, staffing was slashed and big bonuses were introduced to help this process. Make no mistake, the developed world is on track to produce more output at lower costs with less labor.

Social security in its various forms puts money into the pockets of the non-employed. It’s a constant struggle to keep pace with the economics of the underdeveloped world - China, Taiwan, Thailand, Indonesia, Korea - and there are many Asian countries, like India, that have hardly started. It’s a rat race, and developed countries are going to have to automate a lot more, otherwise outsourcing will decimate western industry.

I don’t have the answer to growing unemployment and increasing social security costs – ask a sociologist. I’m only evaluating the current situation and where it is leading. Higher productivity through automation is the only way. Even China is starting to automate. How you distribute the profits is a social question.

A lot goes into taxes and pension funds which spread the money around. Since it’s not enough, we fill the gap by printing more. This is not the right way to go - it's a measure born of desperation. In Europe we don’t want to print so much money, with the result that certain economies are going down the drain. This is not a “known economy”, it’s a serious problem and a big challenge. But we won’t solve it by restricting automation in favour of jobs.

.
 
  • #62
1,194
512
I just want to make two points. There is evidence that the human brain operates on a global level at 10 Hz, the so-called alpha band. As I stated in an earlier post, local coordinative conditions in the neocortex run at 40 Hz, the so-called gamma band - the well known 40 Hz oscillations in local cortex, such as visual, auditory, etc., discussed by Gray and Singer back in the 80's and continually verified up to the present. Intermediate to that is the beta band, about 15-25 Hz, which typically involves inter-regional dynamics. There are current models that place the dynamics of cognition on these levels, with global cognitive "frames" of thought occurring at the 10 Hz alpha range.

If you want to argue the validity of that particular model, that is for another thread. My point for this thread is more of a what-if. What if we could recreate human cognition in hardware that could run at 1 megahertz instead of 10 Hz? Or what about 1 gigahertz? That would mean that this contrived machine would be able to live several thousand human lives in the time it took for you to take a sip of your coffee. That would mean something. Why would we want to kill that and not let it propagate? Some machine that smart would quickly usurp any of our attempts to quell its capacities. In theory perhaps we might be able to put in some fail-safe device, but why?
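Taking the post's simplifying assumption at face value - that the 10 Hz global "frame rate" sets thinking speed, which is the post's premise rather than an established result - the clock-rate comparison alone works out to:

```python
BRAIN_HZ = 10  # the post's assumed global cognitive "frame rate" (alpha band)

# Ratio of hypothetical hardware clock to the 10 Hz alpha rhythm.
# 1 MHz gives a factor of 10^5, 1 GHz a factor of 10^8.
for hardware_hz in (1e6, 1e9):
    speedup = hardware_hz / BRAIN_HZ
    print(f"{hardware_hz:.0e} Hz hardware -> {speedup:.0e}x subjective speedup")
```

So each real second at 1 GHz would correspond to about 10^8 subjective seconds, or roughly three subjective years, under this assumption.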

This is just a progression of evolution. We are Homo sapiens. Homo erectus is not still around. Habilis? No. Heidelbergensis? No. Australopithecus? No. Neanderthal? No. Ardipithecus and Orrorin? No and no. The list goes on. Is that wrong? There's a reason for these things. Look, the sad fact is that we are not likely to ever travel anyplace past Mars. If you think that humans are going to populate those huge Stephen Hawking superstructures for multigeneration migrations out to Alpha Centauri, c'mon, that's laughable.

Again, look, humans have not gone past the moon and the Voyager spacecrafts are already at the heliopause, need I say more?
 
  • #63
1,194
512
Look, the sad fact is that we are not likely to ever travel anyplace past Mars. If you think that humans are going to populate those huge Stephen Hawking superstructures for multigeneration migrations out to Alpha Centauri, c'mon, that's laughable.

Again, look, humans have not gone past the moon and the Voyager spacecrafts are already at the heliopause, need I say more?

Is it narcissistic to quote your own quote? In any case, I just wanted to add that the way I envision it is that our trip to the outer reaches of the universe will be accomplished by the spreading of many of these "human-like" voyager spacecraft out into the nethersphere, which will sustain themselves on simple electricity by converting the ambient matter and energy they encounter. Of course, they will be able to repair and reproduce in the same manner. What do you think? Does this scenario sound more likely, or does a Soylent Green scenario sound more likely, where humans have big spacecraft with greenhouses for growing broccoli, and some guys having a fight with rubber axes? I mean, really? That is what I meant about human-being energy being expensive, unless you think you can make broccoli out of interstellar dust. The Earth and Mars as sustainable options for humans are, as we know, not going to be around forever. In fact, it is certainly possible that, "human-like voyagers" or not, humans will be extinct anyway by 2500. So I think that the answer here is clear. The real question is whether we will be able to create these interstellar pioneers in time.
 
  • #64
131
1
What if we could recreate human cognition in hardware that could run at 1 megahertz instead of 10 Hz? Or what about 1 gigahertz? That would mean that this contrived machine would be able to live several thousand human lives in the time it took for you to take a sip of your coffee. That would mean something. Why would we want to kill that and not let it propagate? Some machine that smart would quickly usurp any of our attempts to quell its capacities. In theory perhaps we might be able to put in some fail-safe device, but why?

This is just a progression of evolution. We are Homo sapiens. Homo erectus is not still around. Habilis? No. Heidelbergensis? No. Australopithecus? No. Neanderthal? No. Ardipithecus and Orrorin? No and no. The list goes on. Is that wrong? There's a reason for these things.


Human cognition in hardware? Do you mean with or without human emotions? I suppose you mean without. In this case you have to build in preordained goals, otherwise it would not know what to do.

Why would it want to explore the universe? If you program it to go explore the universe, that makes it a man-made probe. Is that what you mean? Will it report back? If not, what will you program it do when it finds something interesting? I am not seeing the motivation programmed into this new creation.

I don’t see that you are answering my point about goals. Our goals can’t imply our disappearance or suicide, can they? I assume that the created superior being would be programmed to bring us some benefits, otherwise why would we create it?

I think that human cognition in hardware is a tall order, not for technical reasons, but for psychological reasons. I think we are in a bit of a loop here.

 
  • #65
6,362
1,282
My point for this thread is more of a what if? What if we could recreate human cognition in hardware that could run at 1 megahertz instead of 10 hz? Or what about 1 gigahertz? That would mean that this contrived machine would be able to live several thousand human lives in the time in took for you to take a sip of your coffee. That would mean something. Why would we want to kill that and not let it propagate? Some machine that smart would quickly usurp any of our attempts to quell its capacities. In theory perhaps we might be able to put in some fail safe device, but why?
You're making a weird assumption that since we could (in your scenario) make a machine capable of taking over, we will do it. There's no advantage in us making it, in relinquishing control to something that puts its own ends before ours, but you assume we will anyway.

There's no reason to believe a computer could evolve sentience on its own, or emotions. The simulacrum of sentience and emotions would have to be programmed into it. To do that would be to gratuitously invite self-initiated, self-serving and irrational decisions by the machine. Emotions mean it would start having preferences, tastes; it might get religion. Why would we make such a thing?
 
  • #66
collinsmark
Homework Helper
Gold Member
2,968
1,473
[Image: the_singularity_is_way_over_there.png]

[Source: http://abstrusegoose.com/496] [Broken]
 
  • #67
1,194
512
Ha Ha Ha. You'll see. You will ALL see! Muahahahahahaha:devil:
 
  • #68
Evo
Mentor
23,531
3,145
It seems the thread has run out of new thoughts. Closed.
 
