gleem said:
Really, like social media poses no danger?
Points completely taken. It might not be able to directly conquer the world, but it very much widens the gap between people and truth, and thus it becomes a tool for something directly dangerous to, well, endanger us.
I often fail to look at things from that standpoint since membership in forums such as this one is pretty much as close as I get to social media. I don't even have a mobile phone, although that may change eventually.
Halc said:
So your definition of a decision is a choice made by something biological?
jack action said:
Not necessarily, but a machine making a decision will surely resemble a lot of what a biological form is. (See my definition of what it takes to make a choice and be conscious in post #17.)
Your post #17 definition is necessarily biological, so you seem to contradict yourself here. OK, by that definition a non-biological entity (artificial or not) cannot make a decision, since a non-biological entity doing exactly the same thing must be described with a different word.
Such a question-begging definition doesn't help answer whether the entity can do it, even if a different word must be used to describe it.
jack action said:
A machine will never make those decisions unless we allow it to do them, unless we program them.
By your definition, it cannot make a decision, but here you describe an exception.
jack action said:
Your version of free will seems to imply that a machine without a steering mechanism "chooses" to go on a straight path.
Not so, since no steering choice is available. All your counterexamples involve situations lacking options from which to choose.
jack action said:
Going against what is expected of you BECAUSE it suits you better.
Context here seems to have gone missing. It started with
jack action said:
if I knew no dogs would disobey my commands, I would say that dogs don't have free will.
OK, based on this and other comments, I think I glimpse your definition of free will, which seems to be something on the order of the ability to not follow the rules, to do something other than what is expected. You make it sound like a bad thing: a person who chooses to rape despite being trained that it is wrong is displaying free will, while a person in control of such urges is not. It differs from my non-standard definition (that something makes its own choices) and from a more standard definition (that one's choices are not a function of causal physics), but yours is also a workable definition.
So ChatGPT responding to a query with 'piss off, I'm sick of all your moronic questions!' would be the display of free will you're after, only because such a response is not part of its training.
Did I get close?
Halc said:
Maybe the game-playing machine (which isn't a robot) deciding to learn to make a better omelette instead.
jack action said:
Those examples would count. But it is still science-fiction as of now.
I think it would still not count, since if the game-playing device made an omelette, it wouldn't be a game-playing device. For one thing, it doesn't have motor control. It just expresses the next move in the appropriate notation. If there is a physical Go board, it would need somebody to make the move for it, and to convey any move of a human opponent to it, since it probably cannot see the board.
Still, I could see an advancement of it being given robotic appendages to better play against physical opponents, and given enough knowledge of such human opponents, it might try making an omelette as a sort of psych move to unsettle them. Worth a try at least, no? This despite omelette-making not being part of its programming any more than any other move is. It probably wouldn't help much with a game of Go, but there are certainly some games where such an out-of-the-box move might work.
Halc said:
How about to solve problems that we're not smart enough to solve?
jack action said:
How can you not know this, given the context? Or are you attempting to illustrate the answer to your own question?
jack action said:
Do you expect, without being programmed to do so, a self-driving car with true AI to ever make the maneuver in the following video (@1:45) if there is a ramp nearby and a narrow passage that it wants to go through to use as a shortcut, save its passengers' lives, or even because it expects its passengers to like the feeling? I can assure you that there is a human somewhere that would do it for any of those reasons.
I deny your assurance. There is unlikely to be any untrained human who would consider such a move at a moment's notice.
Terrifying your passengers or causing almost certain injury or death seems a poor reason to attempt it. In a life-saving situation, it would only be considered given training in the possibility of it working. Also, it is unclear whether the lives of the passengers are the top priority of a self-driving car. The lives of pedestrians, occupants of other vehicles, property values, etc. are all under consideration. And I don't think I've mentioned the top priority on that list.
I happen to live close (within an hour or so) to the site of the highest-fatality single-car crash ever (or so it was reported), with something like 22 people dying, two on foot and 20 in the limo, all because of poor brake maintenance. I wonder how an AI driver would have handled the situation. There was lots of time (perhaps over a minute) to think about it between awareness of the problem and the final action. Lots of time to think of other ways out. Plenty of time to at least yell for everybody to buckle up, which might have saved some of them.
jack action said:
Do you expect a program to create a new set of mathematical rules to solve problems, incomprehensible to humans because it has nothing to do with what is already known to humans, what was fed to the machine as an input?
Yes, I fully expect that.
jack action said:
If that ever happens, do you imagine that humans that don't understand what is going on because they are not smart enough (whatever that means) will blindly follow that machine's decisions?
They'd have to give it the authority to implement its conclusions, or it would have to take that authority. Then we follow. It would probably spell out the logic in a form that humans can follow, so it wouldn't be blind. I'm presuming the AI is benevolent to humans here.
Anyway, humans are often incapable of taking action that they know is best for, say, humanity.
jack action said:
If someone creates a machine to find new threats for his enemies and the machine concludes that the best decision is to not threaten them, do you expect them to follow the machine's decision simply based on the fact "it is smarter than I"?
If they trust the machine, sure. It's a good argument, made by something more capable of reaching such a conclusion than are the humans.
jack action said:
The results obtained by a machine that weren't expected or understood by its maker will never be accepted.
Often true, even of conclusions made by humans, so no argument here. As gleem points out, humans are finding it ever harder to discern truth from falsehood, good decisions from bad ones. We're actually much like sheep in that way, easily led down one path or another. So the AI will doubtless need to know how to steer opinion in the direction it sees best, thus gaining acceptance, which is preferable to resistance.
jack action said:
AI with true free will, as sci-fi presents it, would be useless to anyone.
I don't think most sci-fi presents free will by either of our definitions. I also don't think a vastly superior AI is well represented in sci-fi. That's why it's fiction. A story needs to be told, one that people will pay to be told. That's a very different motive than trying to estimate what the future actually holds in store.
jack action said:
This happens all the time and that is what makes evolution possible.
Yes. My street crossing example demonstrates how evolution would quickly eliminate free will. It is not conducive to being fit.
Your attempt to deny the example by showing a nonzero probability of survival is faulty. The busy street could just as well be a field with predators, which, unlike drivers, will not avoid you if you venture out at random instead of waiting for the coast to be clear. Crossing a busy road with fast vehicles at a random time is not certain death, but doing it regularly very much is.
The example references your definition of free will: one of the rules of survival is 'cross the dangerous place only when it appears safe', and your definition of free will is demonstrated when an unsafe time to cross is chosen, prompting my query as to why one would want such free will.