PeterDonis said:
Yes, as you note, this is the "hard problem", as it is called, of consciousness, but as it is framed by those who consider it a problem, it's actually worse than hard, it's impossible, because there is no way to directly test for "subjective aware experience" externally.
I agree; to tell whether someone has a conscious experience, it takes one to know one.
Klystron said:
In @artis example of the starving scientists preserving edible seeds while under siege, an 'AI' might better perform this altruistic role of preservation for future generations precisely because it does not identify as human, cares nothing for the current living population survival, does not become hungry for food and may not be designed for self-preservation.
It might better perform the task because it's a machine, yes, but here the emphasis is on the reasoning behind the task. If a human being is willing to die for the benefit of others down the road, like the example of Christ, then such a decision is made only if one is able to understand the extreme depth of emotion, reasoning, and possible outcomes that such an action would bring forth.
For an AI, saving a seed collection during a war is nothing more than a task; the subjective reason, happy children being able to live and enjoy life when the war is over, is just a piece of code to the AI.
And it would be just a bunch of spiking neurons within a brain if that brain weren't conscious, and now we're back to square one: why does a bunch of spiking neurons create this world within a world that we call subjective awareness?
I do feel the dilemma of mind vs. matter is going to be among the hardest problems science has ever faced.
Much like @PeterDonis already said, how does one test for consciousness? It might just be that if we had the ability to copy every electrical signal within a brain and then perfectly simulate those signals on a brain-like analog computer, second by second, frame by frame, we would get no conscious result within the computer, or at least nothing resembling one.
It just might be that you cannot "tap into" an existing conscious experience; you can only start one from scratch, much like you cannot regrow a forest even if you use the same trees in the same positions.
gleem said:
It has been reported since last year that AI is being used to construct viruses that are undetectable by most antivirus software. Microsoft has a program using AI to detect AI-generated viruses, but is this always going to protect us? But this is not my point. AI is used to help us write programs that we need. AI could, with the right prompt, develop the goal to try and make an escape from its current computer into the internet itself. It might, unbeknownst to humans, put subroutines into software that humans and AI are collaborating on, intended to upload itself into the Cloud and remain there covertly. To remain undetected it might create accounts disguised as human individuals or organizations which would be the agents to help it achieve its goals. No sentience is required. It would have access to everything connected to the internet. Game over, well, almost. Humans shut down every electronic device ever connected to the internet and erase all memories, but can we, or will we?
Let me give you an example of why I think this cannot happen exactly like that.
If we assume that AI doesn't have, and possibly even cannot have, a conscious subjective awareness like ours, then AI will never be able to reason like we do. AI can only "take over the world" the same way it can win a Go match or a chess match: by making precalculated moves based on previously acquired knowledge.
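To make "precalculated moves" concrete: the classic form of this is exhaustive game-tree search (minimax). The sketch below is my own toy illustration, not anything from this thread; the tree, scores, and function name are invented for the example. The machine simply scores every branch and picks the best one, with no subjective reading of the opponent at all.

```python
def minimax(node, maximizing):
    """Exhaustively search a toy game tree and return the best achievable score.

    A node is either an int (the score of a terminal position) or a list of
    child nodes. The two players alternate: one maximizes, one minimizes.
    """
    if isinstance(node, int):
        return node  # leaf: terminal position's score
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)


# Toy tree: three possible first moves, each leading to two outcomes.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, True))  # the maximizer picks the branch the minimizer cannot spoil
```

The point of the illustration: nothing in this procedure ever benefits from a deliberately bad move, which is exactly the limitation the next paragraph is about.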
But there's a problem here: AI, unlike us, cannot make a deliberate mistake, because that would require the subjective reasoning and intuition of a conscious mind. From an AI's point of view, you do not make deliberate mistakes, as that works directly against the goal of winning the game. But in life, especially if you are up to "no good", you often have to "feel" the situation and make a deliberate mistake to convince the other party that you are just as stupid as they are, so that they don't suspect you of being what you shouldn't be.
Behavior like this demands that the actor be conscious and subjective, because that is the world in which we deal and live, being what we are.
In other words, an AI trying to sneak past us would be like the "perfect kid" in school who studies endless hours and passes every exam with an A+. Surely everyone notices a kid like that; they are usually referred to as "nerds", and they stand out.
AI taking over the internet would be the ultimate nerd move; how in the world would it stay unnoticed by us?
Only if the AI doing it could make deliberate mistakes and take unnecessary detours from its main objective, just like a human would. But how do you do that if you are built to succeed and you don't have the ability to reason subjectively?
You cannot just copy us, because that would mean making the same mistakes we do, and you would fail; so you become perfect, and then you eventually stand out and get seen. There are two types of thieves: the bad ones, who get caught because they're sloppy, and the extremely good ones, who don't get caught but whose victims still know they've been robbed.
Even if you can't catch a thief, you can still tell something strange has happened when you suddenly have no money, can't you?