On Progress Toward AGI

  • Thread starter: gleem
Summary
The development of artificial general intelligence (AGI) aims to achieve AI that matches human intelligence, characterized by learning, reasoning, and adaptability. Current AI systems excel at specific tasks but lack the ability to generalize skills or perform complex reasoning. The Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) serves as a benchmark to compare human and AI performance, focusing on inductive reasoning. The test has evolved to increase difficulty as AI capabilities improve, with significant cash prizes for competition participants.

A key challenge in AI development is memory management, as models struggle to integrate past actions into current tasks effectively. While some companies, like British Petroleum, recognize AI's potential, they hesitate to deploy it due to concerns about understanding AI errors. This reflects a broader societal skepticism toward AI compared to human judgment, despite AI often making fewer mistakes.

Concerns about the rapid growth of AI data centers and their impact on energy infrastructure are also highlighted, particularly in Texas, where demand is expected to surge.
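For readers unfamiliar with the format, each ARC-AGI task presents a few input/output grid pairs and asks the solver to induce the underlying transformation, then apply it to a fresh input. A toy sketch in Python (the grids and the rule here are invented for illustration; they are not taken from the actual dataset):

```python
# Toy ARC-style induction task (invented example, not from the dataset).
# Grids are 2D lists of color codes (0 = background); the hidden rule
# in this example is "mirror each grid left-to-right".

train_pairs = [
    ([[1, 0], [2, 0]], [[0, 1], [0, 2]]),
    ([[3, 3, 0]], [[0, 3, 3]]),
]

def mirror(grid):
    """Candidate rule: reflect every row left-to-right."""
    return [row[::-1] for row in grid]

# A solver must discover the rule from the training pairs alone...
assert all(mirror(inp) == out for inp, out in train_pairs)

# ...and then apply it to a held-out test input.
test_input = [[0, 5], [5, 0], [0, 5]]
print(mirror(test_input))  # [[5, 0], [0, 5], [5, 0]]
```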
  • #31
"The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all..."

https://theamericanscholar.org/baby-shoggoth-is-listening/

Baby Shoggoth Is Listening

Why are some writers tailoring their work for AI, and what does this mean for the future of writing and reading?
 
  • #32
LeCun, by his choice, has taken a different direction. He has been telling anyone who asks that he thinks large language models, or LLMs, are a dead end in the pursuit of computers that can truly outthink humans. He’s fond of comparing the current state-of-the-art models to the mind of a cat—and he believes the cat to be smarter. Several years ago, he stepped back from managing his AI division at Meta, called FAIR, in favor of a role as an individual contributor doing long-term research.

“I’ve been not making friends in various corners of Silicon Valley, including at Meta, saying that within three to five years, this [world models, not LLMs] will be the dominant model for AI architectures, and nobody in their right mind would use LLMs of the type that we have today,” the 65-year-old said last month at a symposium at the Massachusetts Institute of Technology.

https://www.msn.com/en-us/news/tech...s-now-he-thinks-everyone-is-wrong/ar-AA1QtEKt
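As a rough sketch of what "world model" means in this context (my own schematic, not LeCun's actual JEPA architecture): the system learns to predict how the state of an environment changes under an action, and then plans by rolling that predictor forward instead of acting in the real world. A minimal Python illustration:

```python
# Schematic world-model planning loop (an illustrative sketch only,
# not LeCun's design). A learned model predicts the next state from
# (state, action); planning searches candidate actions against the
# predictor rather than against the real environment.

def predict_next(state, action):
    # Stand-in dynamics model; a real system would learn this mapping.
    return state + action

def plan(state, actions, goal, horizon=3):
    """Return the first action of the greedy rollout ending nearest the goal."""
    best_first, best_dist = None, float("inf")
    for first in actions:
        s = predict_next(state, first)
        for _ in range(horizon - 1):
            # Greedy inner rollout: take whichever action the model
            # predicts moves the state closest to the goal.
            s = min((predict_next(s, a) for a in actions),
                    key=lambda nxt: abs(goal - nxt))
        if abs(goal - s) < best_dist:
            best_first, best_dist = first, abs(goal - s)
    return best_first

print(plan(state=0, actions=[-1, 0, 1], goal=5))  # -> 1
```

The contrast with an LLM is the training target: the model is optimized to predict states of the world rather than the next token.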
 
  • #33
My coworker and I are very focused on an aspect of my architecture that we call a World View. Based on the above, it sounds very similar to LeCun's world models in how we're approaching its design. I'm sure his is more advanced, but I'm definitely on the same page w.r.t. the future.
 
  • #34
In post #29, I noted that AI is better than humans at identifying emotions.

Google just released Gemini 3, which has been noted to be significantly better than ChatGPT-5, released just this past August. In a study of how it compares to current models on self-harm and mental-health crisis scenarios, it aced the test, while the best any other model could do was 60%. See https://www.forbes.com/sites/johnko...-on-a-critical-test-all-other-ai-models-fail/

In general, from what I have read so far, Gemini is impressing users. From https://www.oneusefulthing.org/p/three-years-from-gpt-3-to-gemini comes this remark:
Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.
 
  • #35
nsaspook said:
"The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all..."

https://theamericanscholar.org/baby-shoggoth-is-listening/

Baby Shoggoth Is Listening

Why are some writers tailoring their work for AI, and what does this mean for the future of writing and reading?
one thing humans and AIs share: our unconsciousness. Everybody wonders all the time if the AIs will ever gain consciousness, as if they’re such masters of consciousness themselves, as if they could ever finish knowing what they feel or why, as if they didn’t have strangely thin access to the reasons for their own behavior.
I opine that the Darwin/Wallace theory of natural selection explains a good deal of human behavior. If thy neighbors annoyeth thee, why not kill the men and enslave the women? Furthermore, it makes sense to lie about this, cloaking your motives in high-sounding rhetoric to attract allies. Finally, it makes sense to lie to oneself about it.

I very seldom see any discussion of this. Instead, what I see over and over again is the firm belief that such behavior is absolutely fundamental. They say that once an AI becomes "conscious" it will behave in exactly this way, killing or enslaving. Well, if it is exposed to Darwin/Wallace-type incentives and is able to act on them, then it would seem that it will.

But human behavior does not usually follow this pattern. The main reason base barbarism doesn't usually rampage is summed up by "live by the sword, die by the sword." If you've got a game in which only one party can triumph at the expense of all others, then the sensible strategy is to convince most of the participants not to play that game and instead use their energies to prevent anyone from doing so.

More can be explained by E. O. Wilson's sociobiology, which says sacrificing one's life is a winning strategy if it leads to the survival of portions of your DNA amongst numerous relatives. But machines haven't got any DNA.
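For reference, the quantitative form of that argument is Hamilton's rule from kin selection: an altruistic act that costs the actor $c$ is favored when
$$r\,b > c,$$
where $b$ is the reproductive benefit to the recipient and $r$ their genetic relatedness to the actor ($r = 1/2$ for full siblings, so dying to save more than two siblings can pay off in DNA terms). With no DNA, a machine has no $r$ at all.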

In sum, these tools explain about 99% of human behavior. The remainder are just nuts. It happens.
 
  • #36
Accolades continue to come in for Google's Gemini 3 LLM.

The CEO of Salesforce, Marc Benioff, says:
"Holy dang," Benioff wrote on X on Sunday. "I've used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I'm not going back. The leap is insane — reasoning, speed, images, video… everything is sharper and faster. It feels like the world just changed, again."
Both Musk and Altman sent congratulations to Google.

However, AI tech bloggers, while noting that Gemini is a vast improvement over other LLMs, are withholding judgment until its full effect on end users is documented.

One notable improvement was on the ARC-AGI-2 benchmark (see post #1), where Gemini 3 beat ChatGPT 5.1 by a factor of 2.5, a margin never before seen on any benchmark (chart below).

[Chart: ARC-AGI-2 benchmark scores, Gemini 3 vs. other leading models]
 
  • #37
Hornbein said:
I very seldom see any discussion of this.
Discussion seems to be mostly about ASI (Artificial Stupid Intelligence).
 
