Supercomputer Utopia: A Human Society Scenario

  • Thread starter: Chaos' lil bro Order

Discussion Overview

The discussion explores a hypothetical future society where a supercomputer manages the distribution of tasks and jobs among citizens, relying on individual inputs to optimize efficiency and resource allocation. Participants examine the implications of such a system on personal autonomy, societal structure, and the nature of cooperation.

Discussion Character

  • Exploratory
  • Debate/contested
  • Conceptual clarification

Main Points Raised

  • One participant suggests that the supercomputer would effectively manage resources and individual needs better than humans can, proposing a system where citizens contribute to its programming.
  • Another participant questions the feasibility of this system within a capitalist framework, arguing it would necessitate a socialist model and raises concerns about dealing with unmotivated individuals.
  • A different viewpoint emphasizes the importance of personal choice and satisfaction in decision-making, expressing discomfort with the idea of having decisions made by a supercomputer.
  • Some participants raise questions about the nature of cooperation in this system, including whether it would be voluntary or compulsory and the consequences of disobedience.
  • One participant draws an analogy comparing society to a community of cells and the supercomputer to a brain, suggesting a biological model for understanding the proposed system.
  • References to literature, such as Asimov's "The Evitable Conflict" and Zamyatin's "We," are made to highlight similar themes in existing works.
  • Concerns are expressed about the lack of spontaneity and individuality in a society governed by a supercomputer, with one participant finding the scenario unamusing.

Areas of Agreement / Disagreement

Participants express a range of opinions, with no consensus reached on the viability or desirability of a supercomputer-managed society. Disagreements persist regarding the implications for personal autonomy and the nature of cooperation.

Contextual Notes

Participants highlight various assumptions about human behavior, the nature of decision-making, and the potential for error in the supercomputer's operations, which remain unresolved.

Chaos' lil bro Order
Consider a future human society in which a 'supercomputer' is entrusted by all of humanity to divvy out the jobs and tasks of every citizen on Earth. The citizens all put faith in the correctness of each and every decision the supercomputer makes, ranging from the grandest of humanity's goals to the smallest minutiae of daily chores. And they are right to do so, because it turns out that the supercomputer does indeed manage the Earth's resources, as well as the interrelationships of individual humans and their wants and needs, much better than the humans can themselves. This supercomputer would not officially be an AI; its programming input would come directly from the individual citizens. Each citizen would wear a sensor/transmitter that monitored all of that citizen's life functions and sensations. These in turn would be transmitted to the supercomputer, which would run a series of algorithms on the information and decide the best route for everyone to take. The algorithms would themselves be programmed by each individual citizen as they went about their daily lives, naturally finding improvements and efficiencies in life and industry. We humans would update the supercomputer like some Wikipedia entry, and each update would hyperlink to many others, creating a kind of hyperplexus of information.

It would be nice to think, 'Oh, I need some milk, I'd better go to the store,' and twenty seconds later a neighbour rings your bell and delivers a 2 L carton, because the supercomputer told him that, since he was already at the store and planning on going home, it would be more efficient for humanity in general, and for the wants and needs of the individual, if he picked up the milk and delivered it to you. Of course, this means you must consider the flip side of the coin, which is that you will be called upon to add random tasks to your own daily routine, but I think that would be quite an amusing life, and it certainly holds the charm of a well-natured spontaneity.
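The milk example is at heart a task-assignment problem. A purely illustrative sketch (none of these names or routes come from the thread; everything here is a made-up toy) of the kind of matching such a supercomputer might do: hand the errand to whichever citizen's already-planned route passes nearest to it.

```python
def assign_errand(errand_location, citizens):
    """Pick the citizen whose planned route passes closest to the errand.

    `citizens` maps a name to a list of (x, y) stops already on that
    person's route; "extra cost" is simply the straight-line distance
    from the nearest planned stop to the errand. A real system would
    use travel times, preferences, and so on.
    """
    ex, ey = errand_location

    def detour(route):
        # Smallest straight-line distance from any planned stop to the errand.
        return min(((x - ex) ** 2 + (y - ey) ** 2) ** 0.5 for x, y in route)

    # min over the dict iterates its keys (the citizens' names).
    return min(citizens, key=lambda name: detour(citizens[name]))

# You need milk (store at (1, 0)); your neighbour happens to be there already.
routes = {
    "you":       [(0, 0), (5, 5)],   # at home, planning a trip across town
    "neighbour": [(1, 0), (0, 1)],   # at the store, heading home past you
}
print(assign_errand((1, 0), routes))  # → neighbour
```

This is a greedy, one-errand-at-a-time heuristic; matching many errands to many people at once is a harder (assignment-problem) optimization, which is presumably where the supercomputer earns its keep.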
 
It certainly would be a different sort of life...but how could it function in a capitalist society, where there are winners and losers?

I guess it couldn't. It would have to be 100% socialist. How would the system deal with lazy people?
 
What happens if someone decides they don't want to get milk for their neighbor? Is cooperation voluntary or compulsory? What are the consequences for disobedience? Are voluntary actions outside of those decreed by the computer acceptable?

I'm fond of my right to fail if I so choose. Yes, a computer could make better decisions than I could, but they wouldn't be my decisions. I wouldn't get any personal satisfaction by having decisions made for me. Without making my own decisions I have no influence over my identity.

I'd rather get milk for my neighbors because it pleases me to help them and not because it is more efficient for humanity.
 
hmmm, I need a million dollars... guess I'll just wait here 'til some guys bring them to me.
 
lisab said:
It certainly would be a different sort of life...but how could it function in a capitalist society, where there are winners and losers?

I guess it couldn't. It would have to be 100% socialist. How would the system deal with lazy people?

I believe that if we got to such a point in society, we would probably use economic/political models much more evolved than present-day capitalism or socialism.
 
Consider this analogy:

Society ---> community of cells

Supercomputer ---> brain

Today's society ---> stromatolite

Future society controlled by supercomputer ---> human body controlled by the brain
 
Read Asimov's "The Evitable Conflict".
 
Check out the book "We" by Zamyatin. It was written before computers existed, but the idea is pretty much the same.
 
Chaos' lil bro Order said:
...would be quite an amusing life and it certainly holds the charm of a well natured spontaneity.


:smile:

No spontaneity or individuality at all.

I don't see anything "amusing" in your scenario... :confused:
 
Huckleberry said:
What happens if someone decides they don't want to get milk for their neighbor? Is cooperation voluntary or compulsory? What are the consequences for disobedience? Are voluntary actions outside of those decreed by the computer acceptable?

I'm fond of my right to fail if I so choose. Yes, a computer could make better decisions than I could, but they wouldn't be my decisions. I wouldn't get any personal satisfaction by having decisions made for me. Without making my own decisions I have no influence over my identity.

I'd rather get milk for my neighbors because it pleases me to help them and not because it is more efficient for humanity.

As outlined in the scenario, the wants and needs of the individual would be transmitted to the supercomputer, so if you didn't want to get milk, the supercomputer would never have asked you in the first place. Of course you may say, 'But something made me change my mind at the last moment,' to which there is no argument, and we spiral into a free-choice debate. The idealized supercomputer would waste little time querying citizens to do tasks that they would not otherwise do, but sure, as you point out, there would be small errors among many successes. You can call that error choice if it makes you feel warm and fuzzy on the inside :)
 
