On Progress Toward AGI

  • Thread starter: gleem
SUMMARY

The forum discussion centers on the pursuit of Artificial General Intelligence (AGI) and the challenges faced in its development. Key points include the introduction of the Abstract and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) as a benchmark for evaluating AI's reasoning capabilities, which has been revised to increase difficulty in response to advancements in AI models. The discussion highlights the limitations of current AI systems in memory and planning, as well as the financial motivations behind AGI development, particularly by major players like Microsoft and OpenAI. Concerns about the ethical implications and societal impacts of AI technologies are also raised, emphasizing the need for responsible AI deployment.

PREREQUISITES
  • Understanding of Artificial General Intelligence (AGI)
  • Familiarity with the Abstract and Reasoning Corpus for Artificial General Intelligence (ARC-AGI)
  • Knowledge of AI memory architectures and planning capabilities
  • Awareness of ethical considerations in AI development
NEXT STEPS
  • Research the latest revisions of the ARC-AGI benchmark and its implications for AI development
  • Explore advancements in AI memory architectures and their impact on reasoning
  • Investigate the financial models driving AGI development in major tech companies
  • Examine ethical frameworks for responsible AI deployment in society
USEFUL FOR

This discussion is beneficial for AI researchers, developers, ethicists, and policymakers interested in the implications of AGI development and the challenges of integrating AI into society responsibly.

  • #31
"The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all..."

https://theamericanscholar.org/baby-shoggoth-is-listening/

Baby Shoggoth Is Listening

Why are some writers tailoring their work for AI, and what does this mean for the future of writing and reading?
 
  • #32
LeCun, by his choice, has taken a different direction. He has been telling anyone who asks that he thinks large language models, or LLMs, are a dead end in the pursuit of computers that can truly outthink humans. He’s fond of comparing the current state-of-the-art models to the mind of a cat—and he believes the cat to be smarter. Several years ago, he stepped back from managing his AI division at Meta, called FAIR, in favor of a role as an individual contributor doing long-term research.

“I’ve been not making friends in various corners of Silicon Valley, including at Meta, saying that within three to five years, this [world models, not LLMs] will be the dominant model for AI architectures, and nobody in their right mind would use LLMs of the type that we have today,” the 65-year-old said last month at a symposium at the Massachusetts Institute of Technology.

https://www.msn.com/en-us/news/tech...s-now-he-thinks-everyone-is-wrong/ar-AA1QtEKt
 
  • #33
My coworker and I are very focused on an aspect of my architecture that we call a World View. Based on the above, it sounds very similar to LeCun's world models in how we're approaching its design. I'm sure his is more advanced, but I'm definitely on that page with respect to the future.
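For anyone who hasn't met the term, here is a minimal toy sketch of what a world model does: it learns a forward model of an environment from logged transitions, then plans by imagining outcomes instead of acting blindly. Everything in it (the dynamics, the linear fit, the numbers) is a made-up stand-in for illustration, not LeCun's design or ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_env(state, action):
    """Hidden environment dynamics the agent can only sample."""
    return 0.9 * state + 0.5 * action + 0.05 * rng.standard_normal()

# Log random transitions, then fit a linear forward model
# next_state ~ a*state + b*action by least squares.
states = rng.uniform(-1.0, 1.0, 500)
actions = rng.uniform(-1.0, 1.0, 500)
next_states = np.array([true_env(s, u) for s, u in zip(states, actions)])
X = np.column_stack([states, actions])
coef, _, _, _ = np.linalg.lstsq(X, next_states, rcond=None)

def world_model(state, action):
    """Learned forward model: imagine the next state without acting."""
    return coef[0] * state + coef[1] * action

# Plan by imagination: choose the action whose *predicted* outcome
# lands closest to the goal, instead of trial and error in the world.
goal, state = 0.5, -1.0
candidates = np.linspace(-1.0, 1.0, 21)
best = min(candidates, key=lambda u: abs(world_model(state, u) - goal))
print(f"chosen action {best:+.2f}, "
      f"predicted next state {world_model(state, best):+.3f}")
```

Real systems predict in a learned latent space rather than raw observations, but the predict-then-plan loop is the core idea.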
 
  • #34
In post #29, I noted that AI is better than humans at identifying emotions.

Google just released Gemini 3, which has been reported to be significantly better than ChatGPT-5, released just this past August. In a study comparing current models on self-harm and mental-health crisis scenarios, it aced the test, while the best any other model managed was 60%. See https://www.forbes.com/sites/johnko...-on-a-critical-test-all-other-ai-models-fail/

In general, from what I have read so far, Gemini is impressing users. This remark from https://www.oneusefulthing.org/p/three-years-from-gpt-3-to-gemini stood out:
Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.
 
  • #35
nsaspook said:
"The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all..."

https://theamericanscholar.org/baby-shoggoth-is-listening/

Baby Shoggoth Is Listening

Why are some writers tailoring their work for AI, and what does this mean for the future of writing and reading?
one thing humans and AIs share: our unconsciousness. Everybody wonders all the time if the AIs will ever gain consciousness, as if they’re such masters of consciousness themselves, as if they could ever finish knowing what they feel or why, as if they didn’t have strangely thin access to the reasons for their own behavior.
I opine that the Darwin/Wallace theory of natural selection explains a good deal of human behavior. If thy neighbors annoyeth thee, why not kill the men and enslave the women? Furthermore, it makes sense to lie about this, cloaking your motives in high-sounding rhetoric to attract allies. Finally, it makes sense to lie to oneself about it.

I very seldom see any discussion of this. Instead, what I see over and over again is the firm belief that such behavior is absolutely fundamental. They say that once an AI becomes "conscious" it will behave in exactly this way, killing or enslaving. Well, if it is exposed to Darwin/Wallace-type incentives and is able to act on them, then it would seem that it will.

But human behavior does not usually follow this pattern. The main reason base barbarism doesn't usually rampage is summed up by "live by the sword, die by the sword." If you've got a game where only one party can triumph at the expense of all others, then the sensible strategy is to convince most of the participants not to play this game and instead to use their energies to prevent anyone from doing so.

More can be explained by E. O. Wilson's sociobiology, which says sacrificing one's life can be a winning strategy if it leads to the survival of portions of your DNA amongst numerous relatives. But machines haven't got any DNA.

In sum, these tools explain about 99% of human behavior. The remainder are just nuts. It happens.
 
  • #36
Accolades continue to come in for Google's Gemini 3 LLM.

The CEO of Salesforce, Marc Benioff, says:
"Holy dang," Benioff wrote on X on Sunday. "I've used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I'm not going back. The leap is insane — reasoning, speed, images, video… everything is sharper and faster. It feels like the world just changed, again."
Both Musk and Altman sent congratulations to Google.

However, AI tech bloggers, while noting that Gemini 3 is a vast improvement over other LLMs, are withholding judgment until its full effect on end users is documented.

One notable improvement was on the ARC-AGI-2 benchmark (see post #1), where Gemini 3 beat ChatGPT 5.1 by a factor of 2.5, a jump of a size never before seen on any benchmark (chart below).

[Attached chart: ARC-AGI-2 benchmark scores for Gemini 3 and other models]
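For context, ARC-AGI tasks are small colored-grid puzzles: a few demonstration input/output pairs, then a held-out test input the solver must transform the same way. The toy below follows the published ARC JSON layout as I understand it (grids are rows of integers 0-9 standing for colors); the task and its mirror-the-grid rule are invented for illustration.

```python
# Invented ARC-style task in the public JSON task shape:
# the hidden rule here is "mirror the grid left-to-right".
task = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 4, 0]], "output": [[0, 4, 3]]},
    ],
    "test": [
        {"input": [[5, 0, 0]], "output": [[0, 0, 5]]},
    ],
}

def candidate_rule(grid):
    """A solver's guessed program: horizontal mirror."""
    return [list(reversed(row)) for row in grid]

# Scoring is exact match on the full output grid; there is no
# partial credit, which is part of what keeps model scores low.
for pair in task["train"] + task["test"]:
    assert candidate_rule(pair["input"]) == pair["output"]
print("candidate rule reproduces every demonstration and test pair")
```

The benchmark's difficulty comes from each task using a different hidden rule, so solvers must induce the program from a handful of examples rather than recall it.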
 
  • #37
Hornbein said:
I very seldom see any discussion of this.
Discussion seems to be mostly about ASI (Artificial Stupid Intelligence).
 
  • #38
Hornbein said:
I opine that the Darwin/Wallace theory of natural selection explains a good deal of human behavior. If thy neighbors annoyeth thee, why not kill the men and enslave the women? Furthermore, it makes sense to lie about this, cloaking your motives in high-sounding rhetoric to attract allies. Finally, it makes sense to lie to oneself about it.

I very seldom see any discussion of this. Instead, what I see over and over again is the firm belief that such behavior is absolutely fundamental. They say that once an AI becomes "conscious" it will behave in exactly this way, killing or enslaving. Well, if it is exposed to Darwin/Wallace-type incentives and is able to act on them, then it would seem that it will.

But human behavior does not usually follow this pattern. The main reason base barbarism doesn't usually rampage is summed up by "live by the sword, die by the sword." If you've got a game where only one party can triumph at the expense of all others, then the sensible strategy is to convince most of the participants not to play this game and instead to use their energies to prevent anyone from doing so.

More can be explained by E. O. Wilson's sociobiology, which says sacrificing one's life can be a winning strategy if it leads to the survival of portions of your DNA amongst numerous relatives. But machines haven't got any DNA.

In sum, these tools explain about 99% of human behavior. The remainder are just nuts. It happens.
I find this reasoning spurious because you seem to be relying on a misquote of Darwin rather than what he actually espoused. Would you please clarify?
 
