Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • #491
ChatGPT is making people more money and better at their jobs. 4 of them break down how.
https://www.yahoo.com/finance/news/chatgpt-making-people-more-money-114549005.html

AI is simply a tool, which can be used properly/productively or misused destructively.
I'm having trouble finding a kind way to critique three of those jobs, but one out of four isn't too bad if you're a baseball player. The others, yup, that's how they'll be replaced, and it's a little surprising to me that the internet didn't replace them already.
 
  • #492
With all the hype about ChatGPT, a bandwagon is forming around it.
It is as if everyone is being sucked in under the false belief that ChatGPT is infallible (as well as other AIs out there, present and future).
That is the AI problem that will 'kill' humanity IMO - not a superintelligent AI that tries to protect us from ourselves.
If we can't know what information is correct, we as humans will take the easy way out and just assume that because ChatGPT says so, it must be true.

Sorry to pick on ChatGPT, but it illustrates the true nature of what AI modelling has unleashed upon the world.

The thing makes stuff up, but presents the information as if it were written by an expert.
A typical misinformation route from ChatGPT:
https://www.msn.com/en-ca/money/new...1&cvid=e967eb65af174e2ebd8b2d04a08b8db8&ei=12
A quote from the article describing typical things that ChatGPT does:
The Research On ChatGPT Inaccuracies: This growing concern was brought into sharp focus by the study "High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content," conducted by Mehul Bhattacharyya, Valerie M. Miller, Debjani Bhattacharyya and Larry E. Miller.

Through an analysis of 30 medical papers generated by ChatGPT-3.5, each containing at least three references, the researchers uncovered startling results: of the 115 references generated by the AI, 47% were completely fabricated, 46% were authentic but used inaccurately and only 7% were authentic and accurate.

Their findings reflect the larger concern that ChatGPT is capable of not only creating fabricated citations, but whole articles and bylines that never existed. This propensity, known as “hallucination,” is now seen as a significant threat to the integrity of information across many fields.
 
