AI, and LLMs in particular, are amazing. When ChatGPT first made a splash on the web, detractors argued that it was just auto-complete. It was designed to emulate human neural networks, or at least our best guess at how they work. It worked pretty well without any specific reasoning capability. It showed, at least to me, that our language contains a great deal of hidden information about logic and culture, since its immediate responses were generally correct and reasonable. Have you ever wondered how humans functioned without language? We acquire sight, hearing, and touch first in order to function, and then acquire language. AI gets language first but does not experience sight, touch, and hearing as we do.
Since it was required to process prompts until it reached some artificial endpoint, and had no memory with which to reflect on the development of its responses, it often drifted off track when processing for too long and hallucinated, a term I find unhelpful. Yet we keep moving the goalposts as it acts more like a human. I believe that one day we will not be able to move them any farther.
Expecting AI at this stage to generate genuinely original works of art, like Picasso's, is no more reasonable than expecting an arbitrary human to do so. We think of his modern works, but he was influenced by many artists: Van Gogh, Gauguin, Cézanne, Matisse, and Braque. Artists learn from one another, and great artists evolve to develop a distinctive expression of their skill. The geometric works of Mondrian, the famous abstract artist, were probably influenced by Cubism. So, what is creativity?
Regarding creativity, here is a video whose script was generated in 2020 by GPT-3, called "The Solicitors." The model was given only the opening setting.
How much of what we think is processed in the subconscious and "revealed" to our conscious mind? That raises the question of what is actually in our subconscious. Perhaps more than we can imagine. I sometimes find myself mimicking something I heard; or am I fabricating a response based on information obtained from sources I no longer remember? Even as I compose this post, I cannot be sure how much is truly original.
When we choose one word over another because it better fits the context, do we not have some sense of its appropriateness (a kind of weighting factor, much like an LLM's)?
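To make that analogy concrete, here is a minimal sketch of the standard softmax step by which an LLM converts raw scores for candidate next words into a probability weighting. The vocabulary and the scores below are invented purely for illustration:

```python
import math

# Hypothetical raw scores (logits) a model might assign to candidate
# next words for a context like "The cat sat on the ..."
logits = {"mat": 4.2, "floor": 3.1, "roof": 1.5, "idea": -2.0}

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    # Subtracting the max score is a common numerical-stability trick;
    # it does not change the resulting probabilities.
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

for word, p in sorted(softmax(logits).items(), key=lambda x: -x[1]):
    print(f"{word}: {p:.3f}")  # "mat" ends up with the highest weight
```

The model then samples from (or takes the maximum of) this weighted distribution; our own intuition for the "right" word may be doing something loosely analogous.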
I have sometimes been befuddled by a question or statement because I understood the words differently than was intended. I either gave a stupid answer or interpretation, or I could not process it at all. Why, then, do we expect these early AI models to be perfect?
Humans can think about their own thinking, a capacity called metacognition, which can help us avoid mistakes; AI, AFAIK, cannot (so far). Still, in some sense human intelligence and AI are similar. But then, do we really want AI to be like human intelligence?
So, getting back to the opening post: anybody can become too dependent on something, whether a tool, an AI, or another human, letting it control their behavior.