DaveC426913 said:
Ehh. OK. If we dilute the definition.
But ... what if we don't?
Then it gets harder to describe/define whether what "AI" is doing counts as "intelligence", or, more to the point, whether it is going to cause fundamental changes in how (or whether) we work. My main point, or at least my starting point, is that both sides tend to force a definition so that their own position comes out "correct" by construction. Both "AI makes decisions, therefore it is 'intelligent'" and "AI doesn't make decisions, therefore it is not 'intelligent'" are wrong.
I am asserting that a calculator and an AI are qualitatively different...
I agree that calling it just a calculator goes too far. So does calling it a calculator that works in natural language... though not by much.
and that there are good reasons for thinking that the uses AI is being put to are unprecedented.
I'd like to see some [good] examples, because that to me is the rub. Not declaring it "intelligent" because it "makes decisions" or "replaces jobs," but looking at the actual decisions and actual jobs (or fractions of jobs). So far, while I see people talking about a potential upcoming apocalypse, I haven't been particularly impressed by the examples I'm seeing in that context. Moreover, the aggressive vision of the people foreseeing an apocalypse focuses only on the AI and relies on the assumption that humans won't adapt, like they always have. It's never been easy to see what the landscape looks like after a groundbreaking change happens, but IMO it's a mistake to believe that humans will no longer be able to think of other/new things to pay other humans to do, like they always have. Is it possible This Time Will Be Different? Sure. But IMO it's more likely people are just making the same mistake they always have.
There was a list posted earlier of companies replacing jobs with AI (I can't seem to find it now) where, for several of the companies listed, no one could point to any actual jobs being replaced; the layoffs were simply assumed to be because of AI. Several others were customer service chatbots, which have been around for decades in other forms, or phone support, which was long ago mostly outsourced to developing countries, to workers barely allowed to deviate from a pre-written script. No, these jobs do not impress me in the context of a predicted AI apocalypse.
Is that too far from the OP's question to be part of the same topic?
Well, OP's first question is key (actually two questions):
1. Can AI do everything a human can do
1a. [or] how close are we to that?
I'd hope we can agree the answer to #1 is a clear-cut "no," but given the arguments I'm seeing touting AI's current capabilities, I'm not sure everyone agrees.
For 1a, I do think a lot of AI's proponents believe it is close, while the detractors (like me) don't think it is very close. As I said in the other thread, about an "independent research" paper in physics: while an LLM is helpful in rewriting a paper to make it look like a scientific paper, it is of little help in actually judging or creating the physics content. It appears to me to be a long way from replacing the physicist in that equation.
Some observers are even saying LLMs have already pretty much peaked.
But yeah, that's what the thread seems to be about.