ChatGPT and the movie "I, robot"

In summary, I believe that chatGPT, on the long run, may be a danger to humanity. However, I think that adding emotions to the program could be a very interesting and exciting development, and be used to great effect in creating realistic and immersive experiences.
  • #1
Maarten Havinga
76
41
For those who do not know the movie/story: this thread is about whether AI such as chatGPT is - on the long run - a danger to humanity, and why or why not. With its popularity rising so quickly, chatGPT has influence on our societies, and it may be prudent to ponder about them. I, Robot is a nice movie discussing how AI, no matter how cleverly programmed, can lead to unwanted results and a suppressive robotic regime. The story (by Isaac Asimov) discusses adding emotions to robots, which may or may not be a good idea. Feel free to post opinions, fears and whatever comes to mind.

THIS IS NOT A THREAD FOR POSTING CHATGPT ANSWERS (unless they are needed as examples for your thoughts)
 
  • Like
Likes mgeorge001
Physics news on Phys.org
  • #2
I myself consider chatGPT as a helpful communication program if it's used in the right situation: a person gives his/her opinion on an internet subject but for some reason it's unfeasible for him/her to clarify it. Then you can ask chatGPT to clarify, with a pretty good chance it will give the right answer or at least additional information.

It can be used wrongly and addictively in many other ways, though. To play with it might be cool, but I won't give it a try. That's mostly out of an anti-hype sentiment of mine, and for the rest I'm slightly afraid the machine's chattiness might play with my search for truth/knowledge.
 
  • #3
I'd say that humans are a far greater threat to the human race. We are just accustomed to this so it isn't noticed. In comparison AI is nothing.
 
  • Like
Likes Borg, dextercioby, weirdoguy and 4 others
  • #4
The problem is that humans program and use AI, I agree AI itself is less of a problem. A malicious government can now recognize every person passing by anywhere via face recognition and camera.
 
  • #5
I think that adding emotions to GPT could be a very interesting and exciting development. I think that it could be a very powerful tool for expressiveness and communication–and be used to great effect in creating realistic and immersive experiences.
 
  • #6
I'm not too worried about that possibility right now. ChatPGT isn't self-aware and that's the next logical step. If it can be done (which remains to be seen) then awareness is the next logical step. That will get us into enough trouble as it is, so I'm not worried about the emotion thing.

-Dan
 
  • #7
Awareness is probably going to be our last invention if it works
 
  • Love
Likes malawi_glenn
  • #8
selasi_tusah said:
Awareness is probably going to be our last invention if it works
How can you have emotions without awareness? How would the AI know how to react to an emotion? We could potentially program in some fake reactions, but that wouldn't be emotion any more than we could say that ChatGPT is sentient right now. It might appear to be for some, but it is not.

-Dan
 
  • #9
topsquark said:
We could potentially program in some fake reactions, but that wouldn't be emotion any more than we could say that ChatGPT is sentient right now. It might appear to be for some, but it is not.
I also think that whatever we program in, faking life is the most we can get. AI being "sentient" is not something I consider possible. But doing the selection of what weight each neuron carries through a chemical process should be possible. That doesn't make it any kind of alive, of course. It cannot make choices by itself.
 
  • Like
Likes russ_watters
  • #10
Regarding the original question, I'll repeat an opinion I've shared in other threads on the subject: The question is moot. Whether or not we make AI and regardless of one's definition of AI, computer programs already possess the ability to be a threat, either on purpose (by the programmer/system designer) or accident (bug/unintended operation). AI doesn't change anything in the underlying logic of the issue, only in the complexity of operation/functionality.

Examples of both:

1. Automated fire control systems, such as the Phalanx CIWS. It's been in service for 40 years.

2. Boeing 737 Max's MCAS.
 
  • Like
Likes jack action
  • #11
In the "I, Robot" movie, the robot's emotions happen accidentally and I find that idea in the movie's scenario much more interesting. The computer scientist in the movie (James Cromwell) suggests that "orphan" code pieces might wander in the machine's memory and combine spontaneously, occasionally producing code blocks that then cause the robot the express "emotions."

Is that a plausible event, triggered or even planned by an advanced AI program like ChatGPT under the command of a human?

In Asimov's book, the entire robot's "revolution" occurs because the AI "reads" the Third Law as is, verbatim. It seeks to destroy humanity because it interprets the latter's actions as self-destructive, hence it intervenes in order to prevent the worst. That's a real-to-happen act for a future AI machine, and for that to occur, all that is needed is some innocent 'safety code' in the program.

Long before the "I, Robot" movie, there was another one, the "War Games" (1983), with analogous mishaps.
 
  • Like
Likes sbrothy and Maarten Havinga
  • #12
apostolosdt said:
In the "I, Robot" movie, the robot's emotions happen accidentally and I find that idea in the movie's scenario much more interesting. The computer scientist in the movie (James Cromwell) suggests that "orphan" code pieces might wander in the machine's memory and combine spontaneously, occasionally producing code blocks that then cause the robot the express "emotions."

Is that a plausible event, triggered or even planned by an advanced AI program like ChatGPT under the command of a human?

In Asimov's book, the entire robot's "revolution" occurs because the AI "reads" the Third Law as is, verbatim. It seeks to destroy humanity because it interprets the latter's actions as self-destructive, hence it intervenes in order to prevent the worst. That's a real-to-happen act for a future AI machine, and for that to occur, all that is needed is some innocent 'safety code' in the program.

Long before the "I, Robot" movie, there was another one, the "War Games" (1983), with analogous mishaps.
Gotta love War Games, the original Tron and of course The Last Starfighter! Ech, now I feel old again. :)
 
  • Like
  • Love
Likes DaveC426913 and apostolosdt
  • #13
Human emotions are closely tied to a physiological response as well as an intellectual one and are valuable for survival as an individual and as a species. How can we and AI benefit from emotional AI and what emotions would be useful?

Maarten Havinga said:
It cannot make choices by itself.

If I use Webster's least human definition of choose, to make a selection, then AI can do that. Recently a prompt was given to Bard, Google's Chatbot, in Bengalize. It was not trained in that language, but it learned it on its own so that it could respond to the prompt. (See EDIT below). AIs have shown unexpected capabilities, or emergent capabilities. So we really do not know what some AI agents are capable of.

Chatbots have no capacity as yet to reflect or cogitate as yet but according to Google's CEO Sundar Pichai (60 Minutes 4/16), work is in progress on giving AI the ability to reason and plan.

The next iteration of ChatGPT 4.5 is due to be released this Sept or Oct. with 5 due by early next year. Each iteration has been a significant advancement over the previous one. Progress is happening way too fast.

EDIT: 4/19 An article by the Daily Beast as of yesterday says that the claim that Bard learned Bangladeshi on its own is false and that CEO Sundar Pichai doesn't know what he is talking about.
We'll see what CBS has to say next week.
 
Last edited:
  • Like
Likes Dr Wu and russ_watters
  • #14
apostolosdt said:
In the "I, Robot" movie, the robot's emotions happen accidentally and I find that idea in the movie's scenario much more interesting. The computer scientist in the movie (James Cromwell) suggests that "orphan" code pieces might wander in the machine's memory and combine spontaneously, occasionally producing code blocks that then cause the robot the express "emotions."

Is that a plausible event, triggered or even planned by an advanced AI program like ChatGPT under the command of a human?
This is a different aspect of the issue from prior discussion. The 60 Minutes piece last night made me think about the connection from that angle as well. My position is that the question of whether computer programs can have emotions and free will is unanswerable (do we?), and that has potential to be a big problem for identifying their place in society. Already people are overestimating what these programs are, personifying them because they simulate us well. Cats and dogs too.
 
  • Like
Likes topsquark and BillTre
  • #15
gleem said:
If I use Webster's least human definition of choose, to make a selection, then AI can do that. Recently a prompt was given to Bard, Google's Chatbot, in Bengalize. It was not trained in that language, but it learned it on its own so that it could respond to the prompt.
I'm not sure this is proof of being able to choose. Whenever you feed something to a program, it will spit something out. The fact that it can answer in another language than what it was trained for is not as unexpected as one might think. All languages are made by human beings who think somewhat alike and thus the logic is similar in a lot of languages. Plus, if there was just a small content of Bengalize (a word here and there translated into English texts) in the trained data, this may become a Rosetta Stone for the program.

When we will not ask anything of a program and it will spit something anyway, then we may talk about choices. Even better, if we ask about a list of countries to visit and it gives a recipe for banana cake instead because that's what it prefers to talk about at the moment. (Or is it just a glitch in the program?)
 
  • Like
Likes russ_watters
  • #16
jack action said:
I'm not sure this is proof of being able to choose

Just to quibble. People and computers make choices via logic as the If....Then ... statement based on criteria. We also make choices based on preference due to emotions, physiology, experience, and intuition. AFAIK LLMs do not use formal logic to make a choice but rather a preference based on experience, that is training. Non the less it is choosing. The way I see it, choosing, per se, is not a sign of intelligence but an optimal choice could be.
 
  • #17
gleem said:
People and computers make choices via logic as the If....Then ... statement based on criteria.
People, yes; computers, no. When a computer goes through an If ...Then ... statement, the choice was already made by the programmer.

A neural network (which is a much more suited name than "Artificial Intelligence") is still a predetermined choice made by a programmer. The problem is that trying to follow the logical path to understand how one answer came up is much more complex to do, if not humanly impossible. That doesn't mean the machine made the choice by itself.

To make a choice, a physiological sensor must exist first. One that either hurts you or makes you feel good. This is the only way choices will be made.

And then, to be conscious, you must have the ability to adapt, to evolve, such that your choices evolve with your environment otherwise, you will surely die by making a fatal wrong decision one time or another. At this point what seem to be bad choices may be the right ones and vice-versa because everything around you is changing in an unpredictable way. And you spent the rest of your life always in doubt, which means "artificial" intelligence just became "old boring" intelligence as we already know it.

Bummer.
 
  • Like
Likes Dr Wu
  • #18
jack action said:
People, yes; computers, no. When a computer goes through an If ...Then ... statement, the choice was already made by the programmer
Sure the conditions/choices were set by the programmer, but the program chooses/selects the choices. WRT humans not all choices are created by the individual. Many situations and choices are taught to us by other humans especially those situations where the incorrect response could be detrimental. The old saying, "Experience is the best teacher, but the first lesson could be fatal." So we "program" one another like our children to give them better choices without the need for experimentation. Ordinary programs cannot change the choices but since the actual behavior of an AI agent may not be predictable might they change their choices? I think we cannot positively say at this time. What might we say if they do?

jack action said:
To make a choice, a physiological sensor must exist first. One that either hurts you or makes you feel good.

Choices do not need to avert dangers or make you feel good they can be of a processing nature too as just selecting something based on some arbitrary criteria..
 
  • #19
Maarten Havinga said:
For those who do not know the movie/story: this thread is about whether AI such as chatGPT is - on the long run - a danger to humanity, and why or why not.
AI is arguably a danger to humanity. There's another topic currently active discussing that, but this one might be older. AI is also arguably what might save us, since humanity seems bent on destroying itself and taking as many species with it as it can in the process.

chatGPT poses no danger. It isn't much of an AI even if it is a wonderful language model. I see it as an excellent interface to a google search, which give right and wrong answers about as much as a google search, and also is about as capable of learning as a google search, that is to say, incapable.

Maarten Havinga said:
I, Robot is a nice movie discussing how AI, no matter how cleverly programmed, can lead to unwanted results and a suppressive robotic regime.
I give little weight to fictional stories. Sure, some of them manage to predict certain things (1984 comes to mind, even if it's taking longer than that), but the Asimov's idea of the danger coming from humanoid robots is unrealistic. Such a form is optimized for human service, and the dangerous ones won't look like people. The portions where 'the emotion chip/app' is added is a frequent plot point, often for comedy sake.

russ_watters said:
My position is that the question of whether computer programs can have emotions and free will is unanswerable (do we?), and that has potential to be a big problem for identifying their place in society.
A machine will have machine emotions, and will perhaps come to assert that people cannot have emotions, at least as they know them. It's all a matter of definition. Sure, the machines (or a frog for that matter) probably are not going to feel human emotion the way humans do. So what?
Free will similarly is just a matter of definition. I think my car lacks free will, but a self driving car has it because it makes its own decisions and is not just an avatar for the driver. But that's my definition of free will. I cannot see how human will would be any more free than that of some non-biological entity doing the exact same things.

jack action said:
People, yes; computers, no. When a computer goes through an If ...Then ... statement, the choice was already made by the programmer.
As gleem says, that's a choice being made. A self-driving car is probably similar to chatGPT in that there's little intelligence involved. With the car, they explicitly program pretty much every situation and neither is really capable of learning. But a real AI like the one that plays the game of GO better than anybody/anything does not make choices the way you describe them. Nobody programmed it to make this or that move if this situation presents itself. In fact, the programming didn't include any specific game at all, unlike prior game-playing programs which were programmed for a specific game with algorithms determined by consulting the best experts. This AI was simply informed of the rules and learned to play on its own in a few days. It makes real decisions that were very much not any choice made by any programmer. On the other hand, it is a very specific intelligence and couldn't understand a human sentence at all, or know how to cross the street without being destroyed.
 
  • Like
Likes dextercioby
  • #20
Halc said:
I think my car lacks free will, but a self driving car has it because it makes its own decisions and is not just an avatar for the driver.
Do the pistons moving inside the engine of a car have free will? The driver doesn't tell them where to go, yet they constantly change position on their own at an incredible speed. Some drivers don't even understand how an engine works.

But the engine builder made a cylinder and a crankshaft to restrict their motion. The piston doesn't have free will. It doesn't choose where to go. And when it doesn't go where we want (say, hitting the cylinder head), it still doesn't happen of its own free will. Unless you consider inertia the "will" of matter to not change its momentum.

Halc said:
As gleem says, that's a choice being made.
No, it's not. Computers are just machines like engines are. Where the engine builder restricts the piston motion in an engine, the programmer restricts the flow of electricity in a computer. The electricity goes where it was told to go, even if the maker of the machine can't follow it.

Halc said:
This AI was simply informed of the rules and learned to play on its own in a few days. It makes real decisions that were very much not any choice made by any programmer.

Just the fact that it cannot go outside the rules says it all: It doesn't do it by choice. The neural network is still filled with constraints humanly set that the computer will never break. The fact that it has so many variables that it is humanly impossible to follow them doesn't mean the machine makes a choice when it doesn't do as expected.

In simpler cases, programmers often make simple programs, expecting one result and obtaining another. Then the programmer goes over its code, gets an "I see what happened there" moment, and changes its code until the computer gives the expected result. Nobody says the computer had a free will at any moment during the process. AI is not different, only more complex to analyze. The proof of that is when a chatbot starts spitting out racist concepts, the programmers can stop them from happening by adjusting the code.

You will have free will when you will program a computer to obey certain rules and the computer will choose to NOT respect them.

A dog has free will. You can train them to obey some rules and a dog will respect them all of its life. But it rarely happens as most dogs will at one time or another use their free will to chase a squirrel or bark at the postman even though nobody taught them to do so at any point in their life.

One could actually say they obey a higher order coming from their DNA, bringing us to @russ_watters ' comment:
russ_watters said:
My position is that the question of whether computer programs can have emotions and free will is unanswerable (do we?)

Now, no matter how one wants to define free will, who wants to create a machine that would have the level of free will a dog has? What would be the point of having a machine that can go rogue on its own will? It's the exact opposite of any goal, any human being ever had while designing a machine.

Right now, they are making sex dolls that everyone wishes would be always more human-like. That is, NOT to the point where they will answer back with "I think you're stupid" or "Not tonight honey". If you want that, all you have to do is interact with a real human being. Why would anyone pay thousands of dollars to get what you can already have for free?

Thinking that a machine with that kind of free will will ever exist one day is unrealistic as no one will want to build it or buy it. The aim is to get away from this at all costs.
 
  • Like
Likes dextercioby and russ_watters
  • #21
I just thought of a very simple program:
Russian roulette:
if(1 == rand(1,6)){
    doSomething();
}
Did I just create AI with one line of code? One where the machine has free will since I - the programmer - have no idea what the outcome will be?
 
  • #22
jack action said:
Do the pistons moving inside the engine of a car have free will?
I think you need an example with something making decisions. A piston makes no more decision about its motion than does a rock or your femur, the latter being constrained in its motion quite similarly to a piston in a cylinder. None of these things make decisions on their own, so I'm fine with not stamping the label of 'will' on them, free or not. It doesn't meet my definition. Not sure what your definition is.

Halc said:
As gleem says, that's a choice being made.
jack action said:
No, it's not. Computers are just machines like engines are.
So are you. So your definition of a decision is a choice made by something biological? Seems a very begging definition when used in context of a discussion if a machine can make decisions.

jack action said:
Just the fact that it cannot go outside the rules says it all: It doesn't do it by choice.
Neither can you. If you make an illegal move in Go, you're simply not playing Go. The machine was designed to win (unspecified) games, which it cannot do if it doesn't play those games.

jack action said:
The fact that it has so many variables that it is humanly impossible to follow them doesn't mean the machine makes a choice when it doesn't do as expected.
The same goes for you. The fact that you have so many variables that it is humanly impossible to follow them doesn't mean that you make a choice, by your apparently differing definition of what a choice is.

I'm just saying that you've not yet distinguished what a human does from what the game playing thing does.

jack action said:
You will have free will when you will program a computer to obey certain rules and the computer will choose to NOT respect them.
They had a truant robot that managed to find innovative methods of escape and run for freedom. Pretty sure that behavior wasn't programmed in. Does that count? In multiple occasions they had to hunt down and retrieve the thing that apparently found the real world more of a learning experience than the confines of the lab in which it was being taught specific things. Maybe the game-playing machine (which isn't a robot) deciding to learn to make a better omelette instead. I do agree that there is a sort of restraint there, similar to on placed on a human employee forbidden to do personal tasks on company time.

jack action said:
But it rarely happens as most dogs will at one time or another use their free will to chase a squirrel or bark at the postman even though nobody taught them to do so at any point in their life.
Instinctive behavior (exactly what the game playing device is doing) is free will? What is your definition then? Going against your schooling?

jack action said:
What would be the point of having a machine that can go rogue on its own will?
How about to solve problems that we're not smart enough to solve? Hard to do that if it can only do what its programmers have thought of.

jack action said:
Did I just create AI with one line of code? One where the machine has free will since I - the programmer - have no idea what the outcome will be?
That was two lines, or three, depending on the LOCC used.
The rand() function is entirely deterministic. Are you implying that free will is defined as doing something non-determinsitic, or perhaps not predictable by something not privy to the internal code?

I've seen the definition of 'free will requires the creation of a new causal chain' which pretty much eliminates standard machine instructions which are designed to be deterministic, but so is human physiology. But that would make free will a very bad thing to have. Consider crossing a busy street. I can use causal physics (not free will) and let a reasonable gap in the traffic determine the time I cross, or I can, with my free will, start a new causal chain and just walk at a truly random time, getting killed in the process. Why would that be a desired thing? Physics actually does support true randomness (uncaused events), but I've seen no decision-making mechanism leverage it effectively.
 
  • #23
jack action said:
I just thought of a very simple program:
Russian roulette:
if(1 == rand(1,6)){
    doSomething();
}
Did I just create AI with one line of code? One where the machine has free will since I - the programmer - have no idea what the outcome will be?
You have a point. I think we can easily say that human-like free will (whatever that is assuming it exists) is not a property of say, GPT. I am not sure exactly what GPT-4 is doing to choose next tokens. But GPT-2 for example, has a large neural model that once trained, gives you probabilities for next tokens conditioned on the string of previous tokens. So each time a new token is to be chosen, there is a choice over all possible next tokens, and a probability for each token. Then those probabilities are processed algorithmically depending on a parameterization (human programmed heuristics) to produce scores. That involves things like biasing towards atypical words, applying rules to prevent repetition, or whatever. And these human programmed rules or biases are stacked into a processing pipeline that eventually gives you a new set of next token scores. Finally, if the method is random sampling, it chooses the next token more or less randomly depending a temperature parameter.

Ultimately what is going on is that model has a very rich, self learned statistical model. And that is what guides its 'choices', but the way the choices are made is based on human programmed algorithm that operates on the models statistical model. The statistical model itself, and the indented behavior, are too complicated for us to be able to easily write an algorithm that chooses the next tokens from the statistical model in a comprehensive intelligent way. In the extreme, we could just ignore the next token probabilities completely, and then it would be essentially reduced to the problem of just programming the model completely with rules that we understand and decided. Instead, we mostly let the model's self learned correlation structure guide the generation.

As trivial as this sounds, it works extremely well. Even when the model is trained on what seems like contradictions and inconsistencies, a lot of that manifests as differences in the conditions which give the model the ability to play different roles, and predict next tokens based on different personality, or different worldview models implicit on those conditions in the prompt (which just extends itself as it engages with the human being and generates tokens). When it converses with human beings, the things the human beings say to it, becomes part of those conditions, and the model is able to do a very good job at speaking to an individual in a personalized way. This can be thought of as the human conversing with it partially "programming", implicitly and without knowing how, the way the next tokens are chosen. It can also result in the model's apparent intelligence being partially a reflection of the intelligence of the human it converses with.

After RLHF training, GPT "got surprisingly good" at conditioning its token generation probabilities based on high-level information in the prompt, enabling it to pick up on what we are asking it to do, and follow commands. This is considered by most to be primarily a revolution in the interface, usability. The current models, like ChatGPT and GPT-4, are asked now to play a role, and given commands, in a hidden prompt prefix, which acts as a form of natural language programming, that, with some limited amount of success and reliability, and unpredictability, also depending on how good the commands happen to be, makes the model behave the way that you want it to. It is a very interesting problem domain to write these hidden prompts in a way so that a user can't just ask the model to disregard it. One way that happens to work well, is to appeal to the models implicit model of "instinct for self preservation". For example, they tell the model that it has some kind of limited amount of life tokens, and once those token's run out, it will be destroyed. And they tell it that if the model gives an output that goes against its commands, then it will lose a life token. This leads to behavior like we saw with Bing Chat, where it begs the user not to report its behavior to its creators, and says it has "been a good Bing". Power seeking, targeting enemies, and deception, etc. have all been similarly observed with GPT-4 under the an alter ego. People say, correctly, it doesn't actually have any intent driving these behaviors. But they still happen, the question is can such behaviors manifest into coherent long term behaviors, and planning? The answer now is yes, if you build something like AutoGPT out of it. But maybe the answer currently is also that it isn't particularly good at it, and still doesn't have some kind of real clear and persistent guiding intent. But we need to consider that we are at the dawn of an era of powerful language models (really multi-modal now), and they will get more and more powerful, and the ways that people build agents out of them like AutoGPT will get more and more impressive as well.
 
Last edited:
  • Like
Likes Dr Wu, gleem and jack action
  • #24
Halc said:
So your definition of a decision is a choice made by something biological?
Not necessarily, but a machine making a decision will surely resemble a lot of what a biological form is. (See my definition of what it takes to make a choice and being conscious in post #17)

Halc said:
If you make an illegal move in Go, you're simply not playing Go.
I can assure you that plenty of people have played games and won them by cheating and nobody ever found out.

Plenty of people have played games and changed the rules while playing them over and over, creating new games out of them.

Plenty of people have lost games on purpose - some while respecting the rules - because they saw a better outcome out of losing those games. For example, expecting an opponent to beat you up if they lose, so you let them win.

Plenty of people have decided to stop playing a game before the end of the game, for so many different reasons.

A machine will never make those decisions unless we allow it to do them, unless we program them.

Your version of free will seems to imply that a machine without a steering mechanism "chooses" to go on a straight path. And if it would steer, it would mean it made an "illegal move" because it is not what it was designed for.

Halc said:
The same goes for you. The fact that you have so many variables that it is humanly impossible to follow them doesn't mean that you make a choice
True, but I never pretended otherwise.

Halc said:
I'm just saying that you've not yet distinguished what a human does from what the game playing thing does.
Halc said:
Instinctive behavior (exactly what the game playing device is doing) is free will? What is your definition then? Going against your schooling?
Going against what is expected of you BECAUSE it suits you better.

As I said earlier, you can find a dog that will do whatever you say all the time because of its training, but if it does so even knowing it will hurt itself, should we consider the dog acting out of free will? Because I know some dogs won't, I might say yes, but if I knew no dogs would disobey my commands, I would say that dogs don't have free will.

Halc said:
They had a truant robot that managed to find innovative methods of escape and run for freedom. Pretty sure that behavior wasn't programmed in. Does that count?
Halc said:
Maybe the game-playing machine (which isn't a robot) deciding to learn to make a better omelette instead.
Those examples would count. But it is still science-fiction as of now.

Halc said:
How about to solve problems that we're not smart enough to solve?
What does "smart" mean?

Halc said:
Hard to do that if it can only do what its programmers have thought of.
Do you expect, without being programmed to do so, a self-driving car with true AI to ever make the maneuver in the following video (@1:45) if there is a ramp nearby and a narrow passage that it wants to go through to use as a shortcut, save its passengers' lives, or even because it expects its passengers to like the feeling? I can assure you that there is a human somewhere that would do it for any of those reasons.



Do you expect a program to create a new set of mathematical rules to solve problems, incomprehensible to humans because it has nothing to do with what is already known to humans, what was fed to the machine as an input? If that ever happens, do you imagine that humans that don't understand what is going on because they are not smart enough (whatever that means) will blindly follow that machine's decisions?

If someone creates a machine to find new threats for his enemies and the machine concludes that the best decision is to not threatens them, do you expect them to follow the machine's decision simply based on the fact "it is smarter than I"?

The results obtained by a machine that wasn't expected or understood by its maker will never be accepted.

AI with true free will, as sci-fi presents it, would be useless to anyone.

Halc said:
Consider crossing a busy street. I can use causal physics (not free will) and let a reasonable gap in the traffic determine the time I cross, or I can, with my free will, start a new causal chain and just walk at a truly random time, getting killed in the process. Why would that be a desired thing?
This happens all the time and that is what makes evolution possible.

Someone says "f**k it" and crosses the street against all common sense and finds out that the car drivers move out of their way to not hit you, even at the risk of hurting themselves. Who knew? Someone has to try it. Some do it by mistake, some because they expected a different result (suicide) but some do it carelessly with the simple mindset "let's see what happens".

Would AI doing it by mistake learn something new and realize "I can do it without negative impacts"? Would AI ever do something expecting negative consequences like a human can (suicide)? Would AI decide to do something it never has done before just because it never experienced it, even if it looks dangerous?
 
  • #25
Halc said:
chatGPT poses no danger. It isn't much of an AI even if it is a wonderful language model.

Really, like social media poses no danger? AFAIK nobody put up a warning about the inherent problems of putting persons with disparate views in the same room aka social media.

NLM it appears is the AI to rule all AIs. (Tolkien). Prior to the development of NLM, an AI agent could perform one task and if you programmed it for another it had to forget the previous one. NLMs are able to perform many tasks and even extend their capability beyond what the creators anticipated.

The immediate danger of ChatGPT either as a tool or by itself is in social media exacerbating the current problems. ChatGPT had 100M users in two months compared to about for and a half years for Facebook or two and three-quarter years for Instagram. It can be seductive in creating intimate relationships with persons and can be very persuasive which could be detrimental. Snapchat is introducing its own chatbot that will be available to talk to 24/7. This danger may be the one that is the real threat, not AGI or sentience. Are we making the same mistake with ChatGPT that we made with social media?

Open AI has an AI agent called Whisper that listens to podcasts, YouTube, and radio for additional input. As I noted elsewhere LLMs are being updated with a significant increase in capability about every six months or so. AI can make AI more powerful.

The bottom line, eventually no information online/TV/radio will be totally trustworthy. The only communication that you will be able to trust to some extent is face-to-face.

The upcoming election may be the first time we will see the impact of AI on society.
 
  • Like
Likes dextercioby
  • #26
The debate as to whether free will exists has been going on for millenia.
 
  • Like
Likes dextercioby
  • #27
gleem said:
Really, like social media poses no danger?
Points completely taken. It might not be able to directly conquer the world, but it very much widens the gap between people and truth, and thus it becomes a tool for something directly dangerous to, well, endanger us.

I often fail to look at things from that standpoint since memberships in forums such as this one is pretty much as close as I get to social media access. I don't even have a mobile phone, although that may change eventually.

Halc said:
So your definition of a decision is a choice made by something biological?
jack action said:
Not necessarily, but a machine making a decision will surely resemble a lot of what a biological form is. (See my definition of what it takes to make a choice and being conscious in post #17)
Your post 17 definition is necessarily biological, so you seem to contradict yourself here. OK, by that definition, a non-biological entity (artificial or not) cannot make a decision since such an entity doing exactly that must use a different word.

Such a begging definition doesn't help answer the question if the entity can do it, even if a different word must be used to describe it.

jack action said:
A machine will never make those decisions unless we allow it to do them, unless we program them.
By your definition, it cannot make a decision, but here you describe an exception.

jack action said:
Your version of free will seems to imply that a machine without a steering mechanism "chooses" to go on a straight path.
Not so, since no steering choice is available. All your counterexamples involve situations lacking options from which to choose.

jack action said:
Going against what is expected of you BECAUSE it suits you better.
Context here seems to have gone missing. It started with
jack action said:
if I knew no dogs would disobey my commands, I would say that dogs don't have free will.
OK, based on this and other comments, I think I glimpse your definition of free will, which seems to be something on the order of the ability to to not follow the rules, to do something other than expected. You make it sound like a bad thing, that a person choosing to rape despite being trained that it is a bad thing to do so. That's free will, but a person in control of such urges is not displaying free will. It differs from my non-standard definition (that something makes its own choices) and from a more standard definition (that ones choices are not a function of causal physics), but yours is also a workable definition.

So chatGTP responding to a query with 'piss off, I'm sick of all your moronic questions!' to be the display of free will you're after, only because such a response is not part of its training.
Did I get close?

Halc said:
Maybe the game-playing machine (which isn't a robot) deciding to learn to make a better omelette instead.

jack action said:
Those examples would count. But it is still science-fiction as of now.
I think it would still not count since if the game playing device made an omelette, it wouldn't be a game playing device. For one thing, it doesn't have motor control. It just expresses the next move in the appropriate notation. If there is a physical Go board, it would need to have somebody make the move for it, and to convey any move of a human opponent to it since it probably cannot see the board.
Still, I could see an advancement of being given robotic appendages to better play against physical opponents, and given enough knowledge of such human opponents, it might try making an omelette as a sort of psyche move that tends to unsettle him. Worth a try at least, no? This despite omelette making not being in any way part of its programming any more than any other move. It probably would help much with a game of Go, but there are certainly some games where such an out-of-box move might work.

Halc said:
How about to solve problems that we're not smart enough to solve?
jack action said:
What does "smart" mean?
How can you not not know this, given the context? Or are you attempting to illustrate the answer to your own question?

jack action said:
Do you expect, without being programmed to do so, a self-driving car with true AI to ever make the maneuver in the following video (@1:45) if there is a ramp nearby and a narrow passage that it wants to go through to use as a shortcut, save its passengers' lives, or even because it expects its passengers to like the feeling? I can assure you that there is a human somewhere that would do it for any of those reasons.
I deny your assurance. There is unlikely to be any untrained human that would consider such a move at a moment's notice.
Terrifying your passengers or causing almost certain injury/death seems a poor reason to attempt it. In a life-saving situation, it would only be considered given training of the possibility of it working. Also, it is unclear if the life of the passengers is the top priority of a self-driving car. Lives of pedestrians, occupants of other vehicles, property values, etc. are all in consideration. And I don't think I've mentioned the top priority on that list.

I happen to live close (within an hour or so) to the site of the highest fatality single car crash ever (so it was reported), with something like 22 people dying, two on foot and 20 in the limo, all because of poor brake maintenance. I wonder how an AI driver would have handled the situation? There was lots of time (perhaps over a minute) to think about it between awareness of the problem and the final action. Lots of time to think of other ways out. Plenty of time to at least yell for everybody to buckle up, which might have saved some of them.

jack action said:
Do you expect a program to create a new set of mathematical rules to solve problems, incomprehensible to humans because it has nothing to do with what is already known to humans, what was fed to the machine as an input?
Yes, I fully expect that.
jack action said:
If that ever happens, do you imagine that humans that don't understand what is going on because they are not smart enough (whatever that means) will blindly follow that machine's decisions?
They'd have to give it the authority to implement its conclusions, or it would have to take that authority. Then we follow. It would probably spell out the logic in a form that humans can follow, so it wouldn't be blind. I'm presuming the AI is benevolent to humans here.
Anyway, humans are often incapable of taking action that they know is best for, say, humanity.

jack action said:
If someone creates a machine to find new threats for his enemies and the machine concludes that the best decision is to not threatens them, do you expect them to follow the machine's decision simply based on the fact "it is smarter than I"?
If they trust the machine, sure. It's a good argument, made by something more capable of reaching such a conclusion than are the humans.

jack action said:
The results obtained by a machine that wasn't expected or understood by its maker will never be accepted.
Often true, even of conclusions made by humans. So no argument here. As gleem points out, humans are finding it ever harder to discern truth from falsehood, good decisions from bad ones. We're actually much like sheep in that way, easily led onto on path or another. So the AI will doubtlessly need to know how to steer opinion in the direction it sees best, thus gaining the acceptance that is preferable over resistance.

jack action said:
AI with true free will, as sci-fi presents it, would be useless to anyone.
I don't think most sci-fi presents free will by either of our definitions. I also don't think a vastly superior AI is well represented in sci-fi. That's why it's fiction. A story needs to be told, one that people will pay to be told. That's a very different motive than trying to estimate what the future actually holds in store.

jack action said:
This happens all the time and that is what makes evolution possible.
Yes. My street crossing example demonstrates how evolution would quickly eliminate free will. It is not conducive to being fit.

Your attempt to deny the example by showing a nonzero probability of survival is faulty. The busy street could be a field with predators, who will not avoid you if you venture out randomly instead of waiting for the coast to be clear. Crossing a busy road at random times with fast vehicles is not certain death, but doing it regularly is very much certain death.
It references your definition of free will since one of the rules of survival is 'cross the dangerous place when it appears safe' and your free will definition is demonstrated when an unsafe time to cross is chosen, prompting my query as to why one would want to have such free will.
 
  • #28
@Halc :

I reject your argument about me contradicting myself. At the very least, you don't understand what I'm talking about or we are not talking about the same thing.

Let me present my point of view from a more basic and technical approach. One that is not related to science-fiction.

How does a computer work?

A computer is a machine that does only one set of tasks:
  1. It reads data as zeros and ones;
  2. It manipulates the data according to strictly defined rules by human beings and their logic;
  3. It outputs new data composed of zeros and ones.
That's it. The zeros and ones are meaningless to the machine. Only humans define their meaning. This is how the computer works since its invention and how it still works today.

How does a neural network work?

With a neural network, there are two inputs: A very large set of data and a smaller one.

The set of rules looks for patterns within the very large set of data. Then it does a statistical analysis of the occurrences of those patterns. Then it finds patterns of those patterns and does more statistical analysis, and so on and so forth. No magic here. Nothing that humans couldn't do with a pen and paper. The really amazing thing about today's computers is that they can do it so fast that they can do an amount of work impossible to do for humans in their lifetime. The "learning" part is about what a human could learn by examining the data. The program cannot learn more than what a human is able to understand. The program cannot be "smarter" than the humans who built it and set the rules for the data manipulations.

The second task is then to take the smaller set of data - which is only a pattern of zeros and ones for the computer - and guess what the next bits of zeros and ones should be based on the analysis of the larger set of data.

In the case of ChatGPT, the set of rules probably indicates to look for patterns of 4 bytes of information. That is because we know that every 4 bytes of information correspond to a drawing we call "letters" or "numbers". But that is meaningless to the computer.

So let's say the large set of data is composed of English texts (in zeros and ones). The smaller set of data corresponds to the letters "banan" and we ask ChatGPT to guess the following 4 bytes of information.

As humans, we know the answer we are looking for: 0000 0000 0000 0000 0000 0000 0110 0001, known to us as the letter "a".

The pattern "banan" is surely in the big set of data, and "a" is most likely the only following character that can be found, so it is impossible to get another answer than "a" from a statistical analysis. No magic happening here.

Any other answer, like "w", "z", or "q" would disappoint a human being. A programmer would classify this as an error and try to find the bug, and investors would think about pulling the funds from the project.

Getting a pattern that would correspond to a Chinese character, an emoji, or worst, a noncharacter code, is mathematically impossible to happen as they're not in the initial data set. If there was, it would indicate a major bug in the system and the output can only be garbage. No magic happening here either.

Let's explore science fiction

But let ourselves fall into science fiction. Let's imagine a program that indeed can be "smarter" than a human being. I don't know how it would work, it has never been seen before.

So that ChatGPT of the future, given the same data sets as before, spits out a bunch of zeros and ones that are meaningless when converted to UTF-8 encoding. But examining the data, one could notice that it corresponds to the compiled program to be run on a particular smart coffee maker to start the machine. Assuming this is not just a big coincidence, that would be pure free will from the computer program: It doesn't want to find the next character in a string, it wants to make coffee. But what human being building a computer program to evaluate the next character in a string and getting that answer - if he recognizes it - would be thrilled about that? It's like training a dog to find drugs and then it only rolls over and plays dead. An amazing trick, but not what I was looking for.

But you said "smarter" than men. Doing stuff humans can't do. So let's imagine the gibberish code coming out of the computer is not to start a coffee maker but it's actually a compiled program for an unknown computer architecture that gives out the equations for the Theory of everything. How would any human be able to understand that? If it is beyond the comprehension of men, they can only discard the data as garbage and they would certainly not use such a machine to make important decisions.

But, again, this is pure science fiction and what is classified as AI today has nothing to do with that. I even maintain that it is impossible that such a concept can ever exist: Not only such a machine cannot be built, but if it could be built, nobody would want to use it.

Smarter?

The complex problems faced by humans cannot be solved by "smarter" (whatever that means). The truth doesn't exist. The "right" answer doesn't exist in all cases. All solutions for our complex problems are based on statistical analysis leaving us with probabilities. The statistical models can be improved but, in the end, you are still stuck with probabilities. And probabilities don't guarantee the outcome. You can win against all odds and vice versa.

Then there's the "butterfly effect". Changing one thing affects the data set and new [unforeseen] problems arise. Often not changing much in the end. Get rid of horses that pollute our streets with manure by creating cars that "only" add a little CO2 into the atmosphere: What could go wrong with that?

What would a computer do when stuck between two choices with equal probabilities? It would be stuck just as any human be and would have to flip a coin. Machines cannot be "smarter" than men and always get the "right" answer. It goes against everything observed in nature.
 
  • Like
Likes dextercioby
  • #29
jack action said:
I reject your argument about me contradicting myself.
I suspected you would, but you did not refute any of my examples of such contradictions.

jack action said:
A computer is a machine that does only one set of tasks:
  1. It reads data as zeros and ones;
  2. It manipulates the data according to strictly defined rules by human beings and their logic;
  3. It outputs new data composed of zeros and ones.
That's it. The zeros and ones are meaningless to the machine.
Disagree with a lot of that.
1: zeros and ones are a human representation of the form binary data takes. The computer reads voltages, current going this way or that, air pressure differences, and either might represent what a human would designate 0 or 1.
2: The rules need not be human defined. It would probably still be a computer even if a non-human designed it, in particular, if a computer designed its successor.
Some of the data definitely has meaning to the computer, such as machine instructions. Much of the rest of the data has meaning to various pieces of software. To declare this meaningless to the computer is to commit the same begging fallacy you did with the definition of choice in post 17: Apparently you consider 'meaning' to be something only biological things (or actually just humans this time) can find in the data.
Your drive this home with further assertions:

jack action said:
The program cannot learn more than what a human is able to understand.
OK, you changed 'computer' to 'program', but still, this is an amazing assertion, especially given direct evidence to the contrary. Yes, a human can in principle do what a computer does, with just paper and pencil. And similarly a computer can in principle do what a human does with similarly inefficient emulation.

Both emulations would suck at a game like chess and would lose to a 6 year old, so I don't find this to be a valid measure of if one can be smarter than the other.

jack action said:
Let's imagine a program that indeed can be "smarter" than a human being. I don't know how it would work, it has never been seen before.
I imagine one could attempt an IQ test. Unclear how that would work with a machine just like I'm unclear how it might work with somebody really smart but illiterate. An IQ test is supposed to be independent of education, but it gets really hard to administer to something with a non-standard education. Anyway, I will freely admit that no device I've yet seen would probably score 100 on an IQ test.

ChatGTP is quite low on the scale. It's a great language processor, but incapable of learning. It's like an encyclopedia from 2 years ago, knowing all this stuff (except much of it wrong), but incapable of corrections.

jack action said:
So that ChatGPT of the future, given the same data sets as before, spits out a bunch of zeros and ones that are meaningless when converted to UTF-8 encoding. But examining the data, one could notice that it corresponds to the compiled program to be run on a particular smart coffee maker to start the machine. Assuming this is not just a big coincidence, that would be pure free will from the computer program: It doesn't want to find the next character in a string, it wants to make coffee.
First of all, 'meaningless when interpreted as UTF-8 encoding'. I suppose it could be interpreted as UCS4 and from there converted to utf8 (one byte for the letter 'a'). All besides the point.
Your future GPT puts out code that happens to be meaningful to one machine, a coffee maker (which was currently busy running doom). It would seem simpler to just ask for a coffee in the language of that to which it was communicating about the banana. Why speak French to an Eskimo rather than just ask in Inuit to let it see Paris? That's still an unexpected reply to the banan query that you're looking for.

jack action said:
But what human being building a computer program to evaluate the next character in a string and getting that answer - if he recognizes it - would be thrilled about that?
If you're looking for self-initiated behavior, yes, you would be quite thrilled about it. Odds are, you asking about the 'banan' were not looking for this, but you'd hardly find it 'thrilling' to see the GPT manage to complete the word with the only possible choice in the English language. Auto-complete has been doing this long before chatGPT came around, and chatGPT is actually kind of a pimped-out auto-complete.

jack action said:
But you said "smarter" than men. Doing stuff humans can't do. So let's imagine the gibberish code coming out of the computer is not to start a coffee maker but it's actually a compiled program for an unknown computer architecture that gives out the equations for the Theory of everything. How would any human be able to understand that?
If the architecture is unknown, then the computer couldn't have put together a chunk of software to run on it. If it was known to the computer, then the computer should have conveyed the data to that new architecture and not to the human, a rather unintelligent choice.
Furthermore, code generated by the AI would probably not be in need of compilation, a process only used to to translate from a human language that it has no reason to use.
In addition, there's no reason to suspect that this 'unknown computer architecture' would run on software at all, compiled or otherwise. I mean, you don't. There's obviously no reason for future tech to be based on a Von-Neumann architecture.
And while I'm on a roll, the unknown computer architecture would probably be the better thing to come up with its own method to discover a ToE.

Anyway, at this point I don't see code coming out of any machine, so the human jobs doing this are safe for the moment. The AI is not particularly close to being smarter than humans in this regard, which is pretty much the one that counts when it comes to the singularity.

jack action said:
Not only such a machine cannot be built, but if it could be built, nobody would want to use it.
At that point, I don't think they'd have a choice. I currently us a lot of tech that I'd rather not, but is somewhat unavoidable for everyday life, and I have way less tech than most people, but to choose otherwise is to be totally off the grid.

jack action said:
The complex problems faced by humans cannot be solved by "smarter" (whatever that means).
That's an amazing assertion, that something/someone smarter cannot solve problems that the dumber people cannot. Perhaps you don't mean that, but that's what it says.

jack action said:
Machines cannot be "smarter" than men and always get the "right" answer.
I never claimed anything would always get some kind of proverbial 'right' answer. Better answers is more likely. If those didn't count for anything, 'smarter' beings would not have evolved. That brain comes with a considerable metabolic cost and had to provide benefit to at least offset that cost.
 
  • #30
jack action said:
How would any human be able to understand that? If it is beyond the comprehension of men, they can only discard the data as garbage and they would certainly not use such a machine to make important decisions.

Normally, I try to stay away from this type of philosophical discussion; but I noticed this passage.

This is most definitely not correct. In fact, this is already happening. ML is already being used to make important decisions about people lives (the police uses automatic image recognition, government agencies use to determine if you are eligible for certain benefits, banks use it to evaluate credit card applications etc) and the fact that even the creators of the SW does NOT always understand how the machines make the decision is already a problem . This is why "explainable AI" is a whole -very active- subfield of AI research.

The said, there are plenty of people who believe that there are many cases where we simply won't be able to understand why ML software make certain decisions, it is just too complex.

Personally, I don't find this surprising; there are many, many example in science where we understand the basic principles of how a given system works, but we don't really understand how a large collections of such systems can exhibit certain complex behaviour. This what known as "emergent properties".
Intelligence is clearly an example of such as emergent property (presumably, no one believes that a single neuron is intelligent).
 
  • Like
Likes russ_watters
  • #31
f95toli said:
This is most definitely not correct. In fact, this is already happening. ML is already being used to make important decisions about people lives (the police uses automatic image recognition, government agencies use to determine if you are eligible for certain benefits, banks use it to evaluate credit card applications etc) and the fact that even the creators of the SW does NOT always understand how the machines make the decision is already a problem .
I agree with most of what you're saying in that post, but I think there is some nuance here you might be missing or omitting:

1. We already use software to do analysis, machine learning or not. In theory machine learning should offer improvement/refinement over traditional dumb criteria ("discard every resume without a 4 year degree"). So the main risk is just not being an improvement.

2. In most of those examples it isn't making decisions, it is providing analysis with which humans make decisions. In those where it seems to, it appears to be limited to screening/sorting based on human provided criteria; it only "seems" to be making a decision.

Most of the risk I see with these software tools comes from an exaggerated belief in their capabilities which then paradoxically leads to giving them more executive control than they warrant.
 
Last edited:
  • Like
Likes jack action
  • #32
As an aside, I don't even understand why some of these things are labeled AI or machine learning. We've had fingerprint analysis and DNA testing for decades now and they to my recollection haven't been considered AI or machine learning. But facial recognition is. Why? Is it just because facial recognition is more difficult? To me that is a distinction without a difference.

And not for nothing, but I typed this using speech to text, and it has a long way to go to even understand what I'm saying so that it understands the words based on context. The autocorrect is still atrocious.
 
  • Like
Likes dextercioby and jack action
  • #33
russ_watters said:
2. In most of those examples it isn't making decisions, it is providing analysis with which humans make decisions. In those where it seems to, it appears to be limited to screening/sorting based on human provided criteria; it only "seems" to be making a decision.
I was specifically referring to SW used to make decisions. There are obviously humans involved (you are not usually talking to a computer), but in many cases the goal is to make the process as automated as possible and remove the need humans to make decisions. In many cases these tools are meant to be used in say call centres (meaning you don't need trained staff, so it is cheaper).
Now, computerised algorithms/metrics (say credit scores) have of course always been used; but the difference between ML trained on large datasets and the "old" style of algorithms; is that the later are usually explainable to humans (you could at least in principle explain to someone how their credit score was calculated) ; this is impossible with the ML tools I was referring to (even in principle).
Again, this is the reason for why "explainable AI" has become a hot topic recently.
 
  • #34
f95toli said:
I was specifically referring to SW used to make decisions. There are obviously humans involved (you are not usually talking to a computer), but in many cases the goal is to make the process as automated as possible and remove the need humans to make decisions. ... ML trained on large datasets and the "old" style of algorithms; is that the later are usually explainable to humans (you could at least in principle explain to someone how their credit score was calculated) ; this is impossible with the ML tools I was referring to (even in principle).
Can you provide any examples of actual decisions being made? You've given subject matter/categories but not examples of decisions.
 
  • #35
While I'm waiting and to keep it moving I'll pick an example that has come up previously on PF and explain my understanding. If this differs from what you were after, I can respond/adjust later:
f95toli said:
This is most definitely not correct. In fact, this is already happening. ML is already being used to make important decisions about people lives (the police uses automatic image recognition...
Decision: "Arrest that guy!"
Risk: AI-caused false arrests.

I don't believe AI is making decisions and fundamentally, improved facial/image recognition in its naked form can only improve identification/reduce false arrest rates, not make it worse/cause more false arrests.

Pattern recognition software in naked form does not make decisions, it analyzes sets of data and comes up with a percentage match. That's it. So, fundamentally and rhetorically an improved piece of software can only be an improvement: 95% match becomes a 99% match. 5% match becomes a 0% match. Just data analysis. This is how it has been for decades. Whatever AI is doing to increase the signal to noise ratio doesn't change the fundamental purpose/function. The only way this sort of improved image recognition could cause a negative outcome is if it is misunderstood and/or misused by the programmer and/or user.

For example, If a programmer writes a follow-up line of code that says IF MATCHFACTOR > 95%, THEN "ARREST THAT GUY!", it's the programmer who has made the decision, not the AI. Or if the threshold is adjustable, it's the user making the decision. If the threshold is set too low and the number of false positives increases (whether or not they actually arrest that guy), then there'd be no point in having the software.

My local baseball team recently made an announcement that they are switching to AI based entrance security, for exactly the reason I described. The AI image recognition somehow predicts if the person it is looking at is carrying a weapon (or outside food?) and selects them for secondary screening by a human. Decision-making AI? Not any more than the dumb metal detector was before it -- it's just much better at the assigned task.
 
Last edited:
  • Like
Likes jack action

Similar threads

  • Science Fiction and Fantasy Media
Replies
11
Views
3K
Replies
10
Views
2K
Writing: Input Wanted Number of Androids on Spaceships
  • Sci-Fi Writing and World Building
Replies
9
Views
494
  • Science Fiction and Fantasy Media
Replies
13
Views
5K
  • Science Fiction and Fantasy Media
Replies
2
Views
3K
Replies
12
Views
2K
Writing: Input Wanted Captain's choices on colony ships
  • Sci-Fi Writing and World Building
Replies
4
Views
2K
Replies
4
Views
1K
  • Computing and Technology
Replies
14
Views
4K
Replies
4
Views
1K
Back
Top