Scott said:
I have no idea what "NFI" is.
My post started out saying "AI and consciousness are not as inscrutable as you presume."
There is a lot of discussion around AI and consciousness that is super-shallow, to the point where terms are not only left undefined but shift from sentence to sentence, and where a kind of "wow" factor takes hold: things go un-understood because it is presumed that they cannot be understood. People are stunned by the presumed complexity and block themselves from attempting even a cursory analysis.
If you want to create an artificial person that deals with society and has a sense of self-preservation, and you don't want to wait for Darwinian forces to take hold, then you need to start with some requirements definition and some systems analysis. If you are not practiced in such exercises, this AI/consciousness mission is probably not a good starter. Otherwise, I think you will quickly determine that there is going to be a "self object" - and much of the design work will involve presenting "self" as a unitary, responsible witness and agent, both to society and internally - and recognizing and respecting other "self-like" beings in its social environment.
The fact that we so readily take this "self" as a given demonstrates how effectively internalized this "self object" is. How could any AI exist without it? In fact, no current AI exists with it.
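To make the requirements-definition point concrete, here is a minimal sketch (in Python; every name and field is hypothetical, just my guess at what such an analysis might surface) of the kind of bookkeeping a "self object" would need to do: present one unitary, responsible witness/agent, and recognize other self-like beings.

    from dataclasses import dataclass, field

    @dataclass
    class SelfObject:
        """Hypothetical sketch of the minimal state an artificial person
        might need to present itself as a unitary, responsible witness and agent."""
        identity: str                                      # the stable "I" presented to society
        own_actions: list = field(default_factory=list)    # what it witnesses and takes responsibility for
        known_selves: set = field(default_factory=set)     # other self-like beings it recognizes

        def witness(self, event: str) -> None:
            # record an event from the system's own point of view
            self.own_actions.append(event)

        def recognize(self, other: "SelfObject") -> None:
            # treat another agent as a self-like being deserving the same respect
            self.known_selves.add(other.identity)

        def report(self) -> str:
            # give a unitary account of "what I did", to society or to itself
            return f"I am {self.identity}; I did: {self.own_actions}"

Trivial as the sketch is, the point is that most of the design work would go into keeping that account unitary and consistent.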
To me, it's at that deeper level of analysis that you come to realize AI cannot be understood (at least internally), because neural networks are complex systems modelling other complex systems.
Sure, we can understand the black box's possible range of inputs and outputs, and to some extent the expected ones, if the model and data are simple enough.
The fact that the world's best theorists still have no solid theory explaining even simple artificial neural networks to the experts' satisfaction, however, is telling us something, because we can make ones that are much, much more complicated.
So basically, what we can do, if we have this controlled, isolated system, is choose the data to train it on, choose the loss functions that penalize bad behavior, and choose the degrees of freedom it has.
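In ordinary machine-learning terms, those three levers look something like this (a minimal PyTorch-style sketch; the random data, the extra penalty term, and the tiny layer sizes are all placeholders I'm assuming for illustration, not anyone's actual setup):

    import torch
    import torch.nn as nn

    # Lever 1: choose the training data (random placeholders here).
    inputs  = torch.randn(256, 8)
    targets = torch.randn(256, 1)

    # Lever 3: choose the degrees of freedom (a deliberately small model).
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

    # Lever 2: choose the loss, including a penalty for "bad behavior"
    # (here just an illustrative penalty on large outputs).
    def loss_fn(pred, target):
        return nn.functional.mse_loss(pred, target) + 0.01 * pred.pow(2).mean()

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

Everything we can say about the trained model's behavior comes from those three choices; the weights themselves stay inscrutable.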
But I very much doubt people would have the restraint to agree, in solidarity, all across the world, to constrain our use of AI to such a limited extent, especially when letting AI run wilder offers so many competitive advantages. We're talking vast wealth, vast scientific and engineering achievement, vast military power, etc., that you are expecting people to pass up in the name of caution. The humans we're talking about here are the same ones who are fine with poisoning themselves and the rest of the world with things like phthalates just to make a little more money, and who are willing and able to corrupt powerful governments to make it happen.
Humans are not only too foolish to feasibly exercise the level of caution you expect, they're also too greedy. In reality, people will release self-reproducing AI into the solar system to mine asteroids and terraform Mars the second they get the chance. And they will build AI armies capable of wiping out all human beings in a day as soon as they get the chance, too.
Will they be self aware? Does it matter?
Anyway, there is a notion of self-awareness that is easily achieved by AI: the system simply learns about itself, so its behavior comes to depend on its own condition. And if its condition affects other things that affect the loss function, it will behave accordingly. This can easily reach the level where an AI acts much like humans do in terms of ego, greed, anger, envy, depression, etc.
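A toy version of that loop (pure Python; the "energy" variable and the work/rest choice are made up purely for illustration) shows the idea: the agent's own condition feeds into the loss it is minimizing, so its behavior ends up tracking its own state.

    class ToyAgent:
        """Hypothetical agent whose own condition feeds back into its behavior."""
        def __init__(self):
            self.energy = 1.0  # the agent's "condition"

        def loss(self, action: str) -> float:
            # the penalty depends on the agent's own condition:
            # working is costly when tired, resting is costly when already rested
            if action == "work":
                return 1.0 / max(self.energy, 0.01)
            return 2.0 * self.energy

        def step(self) -> str:
            # "self-aware" in the minimal sense: it consults its own state to act
            action = min(("work", "rest"), key=self.loss)
            self.energy = min(1.0, self.energy + (0.2 if action == "rest" else -0.3))
            return action

    agent = ToyAgent()
    print([agent.step() for _ in range(6)])  # alternates between working and resting as its condition changes

Nothing here feels anything, of course; it just behaves as if its own state matters to it.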
What we have as humans that seems special is not that we behave with these characteristics, but that we have these subjective feelings, which we cannot imagine being possible in a machine.
Animals have self-awareness and certainly emotion, in my opinion. They clearly feel things the way we do, and they experience things like envy when comparing themselves to others. Pets are notorious for becoming envious of one another. Dogs in particular are extremely sensitive and emotional animals.
What humans have that is objectively special is a higher level of analytical thinking than animals. But AI can arguably surpass us easily in analytical thinking, at least in niche cases and probably in the long run in general.
So what we have left really to separate us is the subjective experience of feeling.
AI can behave exactly as if it is emotionally sensitive and has feelings, but we could never peer inside and somehow tell whether anything similar is going on. As you say, we often just say the neural network is too complex to understand internally, so maybe we can't tell. The truth is, we don't know where this subjective experience of feeling comes from in biological creatures. Is something supernatural involved? Like a soul? Penrose thinks the brain has quantum organelles which give us a special metaphysical character (for lack of a better explanation). And I admit that I am inclined to have at least a vague feeling that there is some form of spiritual plane we occupy as living creatures.
Even if that is true and Penrose is right, can we make artificial versions of those organelles? Or what about the brains we're growing in vats? At what point can these lab-grown biological brains begin feeling things or having a subjective experience? Maybe for that to happen they need to be more complex? Isn't that what people ask about artificial neural networks? Do they need to first have senses, learn, and be able to respond to an environment? Would a human brain in a vat, deprived of a natural existence, have a subjective experience we could recognize? Would some kind of non-biological but quantum neural network be capable of feeling?
There are too many unanswered questions. But I'm in the camp that believes that whether or not AI feels the way we do, it doesn't matter in practice if it acts as though it does. An emotional AI isn't really the biggest danger, though, in my opinion. I think the biggest danger is out-of-control growth. Imagine if a super strain of space cockroaches started multiplying super-exponentially and consumed everything on Earth in a week. That is the kind of thing that could result from something as simple as an engineer or researcher running an experiment just to see what would happen.
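As a back-of-the-envelope illustration of how little that takes (the one-kilogram starting mass, the rough figure for Earth's total biomass, and the one-week window are all numbers I'm assuming just for the arithmetic):

    import math

    start_kg   = 1.0       # assumed starting mass of the replicators
    biomass_kg = 5.5e14    # rough order of magnitude for Earth's total biomass
    week_hours = 7 * 24

    doublings = math.log2(biomass_kg / start_kg)   # ~49 doublings needed
    print(f"doublings needed: {doublings:.0f}")
    print(f"required doubling time: {week_hours / doublings:.1f} hours")

Even plain exponential growth only needs a doubling time of a few hours to consume everything in a week; super-exponential growth would be faster still.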