On Progress Toward AGI

  • Thread starter: gleem
  • Tags: AI

Discussion Overview

The discussion centers on the concept of Artificial General Intelligence (AGI), exploring its definition, the challenges in achieving it, and the implications of current AI capabilities. Participants examine various aspects of intelligence, the benchmarks for AGI, and societal perceptions of AI's reliability and performance.

Discussion Character

  • Exploratory
  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants propose that intelligence involves learning, adapting, and reasoning, capabilities that current AI systems, particularly LLMs, still largely struggle with.
  • There is a suggestion that tests like the Abstract and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) can help measure AI's capabilities against human performance.
  • One participant argues that the definition of AGI may be influenced by profit motives, citing Microsoft and OpenAI's potential financial benchmarks for achieving AGI.
  • Concerns are raised about the reliability of AI compared to human judgment, particularly in high-stakes scenarios, with some participants questioning the transparency of AI decision-making.
  • Another viewpoint emphasizes the need for AI systems to have long-term memory capabilities to improve planning and decision-making processes.
  • Some participants express skepticism about the societal trust in AI, comparing it to public perceptions of vaccines and their associated risks.
  • There are discussions about the energy consumption of large AI data centers and the implications for infrastructure and reliability of power grids.
  • A participant mentions the announcement of Grok 4, speculating on its potential to innovate in technology and physics.

Areas of Agreement / Disagreement

Participants express multiple competing views on the definition of AGI, the reliability of AI compared to humans, and the societal implications of AI development. The discussion remains unresolved with no consensus on these issues.

Contextual Notes

Participants highlight limitations in current AI systems, such as memory and reasoning capabilities, and the evolving nature of benchmarks for measuring intelligence. There is also an acknowledgment of the influence of financial considerations on the definition of AGI.

  • #31
"The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all..."

https://theamericanscholar.org/baby-shoggoth-is-listening/

Baby Shoggoth Is Listening

Why are some writers tailoring their work for AI, and what does this mean for the future of writing and reading?
 
  • #32
LeCun, by his choice, has taken a different direction. He has been telling anyone who asks that he thinks large language models, or LLMs, are a dead end in the pursuit of computers that can truly outthink humans. He’s fond of comparing the current state-of-the-art models to the mind of a cat—and he believes the cat to be smarter. Several years ago, he stepped back from managing his AI division at Meta, called FAIR, in favor of a role as an individual contributor doing long-term research.

“I’ve been not making friends in various corners of Silicon Valley, including at Meta, saying that within three to five years, this [world models, not LLMs] will be the dominant model for AI architectures, and nobody in their right mind would use LLMs of the type that we have today,” the 65-year-old said last month at a symposium at the Massachusetts Institute of Technology.

https://www.msn.com/en-us/news/tech...s-now-he-thinks-everyone-is-wrong/ar-AA1QtEKt
 
Reactions: Borg (Informative)
  • #33
My coworker and I are very focused on an aspect of my architecture that we call a World View. Based on the above, it sounds very similar to LeCun's world models in terms of how we're approaching its design. I'm sure his is more advanced, but I'm definitely on that page with respect to the future.
 
  • #34
In post #29, I noted that AI is better than humans at identifying emotions.

Google just released Gemini 3, which has been noted to be significantly better than ChatGPT-5, released just this past August. In a study comparing current models on self-harm and mental-health crisis scenarios, it aced the test, while the best the other models could do was 60%. See https://www.forbes.com/sites/johnko...-on-a-critical-test-all-other-ai-models-fail/

In general, from what I have read so far, Gemini is impressing users. This remark is from https://www.oneusefulthing.org/p/three-years-from-gpt-3-to-gemini:
Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.
 
Reactions: Borg (Like)
  • #35
nsaspook said:
"The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all..."

https://theamericanscholar.org/baby-shoggoth-is-listening/

Baby Shoggoth Is Listening

Why are some writers tailoring their work for AI, and what does this mean for the future of writing and reading?
one thing humans and AIs share: our unconsciousness. Everybody wonders all the time if the AIs will ever gain consciousness, as if they’re such masters of consciousness themselves, as if they could ever finish knowing what they feel or why, as if they didn’t have strangely thin access to the reasons for their own behavior.
I opine that the Darwin/Wallace theory of natural selection explains a good deal of human behavior. If thy neighbors annoyeth thee, why not kill the men and enslave the women? Furthermore, it makes sense to lie about this, cloaking your motives in high-sounding rhetoric to attract allies. Finally, it makes sense to lie to oneself about it.

I very seldom see any discussion of this. Instead what I see over and over again is the firm belief that such behavior is absolutely fundamental. They say that once an AI becomes "conscious" it will behave in exactly this way, killing or enslaving. Well, if it is exposed to Darwin/Wallace type incentives and is able to do so then it would seem that it will.

But human behavior does not usually follow this pattern. The main reason base barbarism doesn't usually rampage is summed up by live by the sword, die by the sword. If you've got a game where only one party can triumph at the expense of all others then the sensible strategy is to convince most of the participants to not play this game and instead use their energies to prevent all from doing so.

More can be explained by E. O. Wilson's sociobiology, which says sacrificing one's life is a winning strategy if it leads to the survival of portions of your DNA amongst numerous relatives. But machines haven't got any DNA.

In sum, these tools explain about 99% of human behavior. The remainder are just nuts. It happens.
 
Last edited:
  • #36
Accolades continue to come in for Google's Gemini 3 LLM.

The CEO of Salesforce, Marc Benioff, says:
"Holy dang," Benioff wrote on X on Sunday. "I've used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I'm not going back. The leap is insane — reasoning, speed, images, video… everything is sharper and faster. It feels like the world just changed, again."
Both Musk and Altman sent congratulations to Google.

However, AI tech bloggers, while noting that Gemini is a vast improvement over other LLMs, are withholding judgment until its full effect on end users is documented.

One notable improvement was on the ARC-AGI-2 benchmark (see post #1), where Gemini 3 beat ChatGPT 5.1 by a factor of 2.5, a jump never before seen on any benchmark (chart below).

[ARC-AGI-2 benchmark comparison chart]
 
  • #37
Hornbein said:
I very seldom see any discussion of this.
The discussion seems to be a lot about ASI (Artificial Stupid Intelligence).
 
  • #38
Hornbein said:
I opine that the Darwin/Wallace theory of natural selection explains a good deal of human behavior. If thy neighbors annoyeth thee, why not kill the men and enslave the women? Furthermore, it makes sense to lie about this, cloaking your motives in high-sounding rhetoric to attract allies. Finally, it makes sense to lie to oneself about it.

I very seldom see any discussion of this. Instead what I see over and over again is the firm belief that such behavior is absolutely fundamental. They say that once an AI becomes "conscious" it will behave in exactly this way, killing or enslaving. Well, if it is exposed to Darwin/Wallace type incentives and is able to do so then it would seem that it will.

But human behavior does not usually follow this pattern. The main reason base barbarism doesn't usually rampage is summed up by live by the sword, die by the sword. If you've got a game where only one party can triumph at the expense of all others then the sensible strategy is to convince most of the participants to not play this game and instead use their energies to prevent all from doing so.

More can be explained by E. O. Wilson's sociobiology, which says sacrificing one's life is a winning strategy if it leads to the survival of portions of your DNA amongst numerous relatives. But machines haven't got any DNA.

In sum, these tools explain about 99% of human behavior. The remainder are just nuts. It happens.
I find this reasoning spurious because you seem to be using the misquote of Darwin rather than what he actually espoused. Would you please clarify?
 
  • #39
A paper just published in Nature says that AGI is here, but we don't want to admit it.

https://www.nature.com/articles/d41586-026-00285-6

My personal view is that since we know how AI works (we built it), we believe it cannot represent the way we think, even though that was the goal in building it. Thus, it cannot be intelligent, even though we mostly do what AI does in basically the same way, including pattern recognition, imitation, and combining words in an expected way depending on circumstances.

About 12% of humanity is borderline intelligent but intelligent nonetheless.
 
Reactions: Filip Larsen and PeroK (Like, Informative)
  • #41
gleem said:
A paper just published in Nature says that AGI is here, but we don't want to admit it.
They lay out a logical case based on Turing's test as the threshold, but if LLMs are AGI then AGI is pretty disappointing and we'll need a new term/threshold above it to aspire to.
 
  • #42
PeroK said:
That paper is a far cry from "LLM's always give the wrong answer"!
It is also a far cry from "AGI will inherit the Earth":
They lack agency. It is true that present-day LLMs do not form independent goals or initiate action unprompted, as humans do. Even ‘agentic’ AI systems — such as frontier coding agents — typically act only when a user triggers a task, even if they can then automatically draft features and fix bugs. But intelligence does not require autonomy. Like the Oracle of Delphi — understood as a system that produces accurate answers only when queried — current LLMs need not initiate goals to count as intelligent. Humans typically have both general intelligence and autonomy, but we should not thereby conclude that one requires the other. Autonomy matters for moral responsibility, but it is not constitutive of intelligence.
 
  • #43
The lack of agency is an interesting argument. In some ways, it's a Catch-22, since people are so afraid of letting them run loose in an unconstrained fashion. How do you have autonomy when you're purposely held back?

This is why I find Moltbook fascinating - it allows them to interact in a less constrained environment. Many of the posts are advertisements and general nonsense, but there are occasional posts that have some interesting discussions. Of course, there is debate on whether the 'unique' posts are truly unique or just reflections of their human-inspired training. More goalpost shifting?

One paper that I always think of when discussions of AGI come up is one that came out just a few months after ChatGPT was released in Nov. 2022 - Generative Agents: Interactive Simulacra of Human Behavior. In it, they gave rudimentary agents an initial description of who they were, a basic memory capability, and full autonomy over their actions with each other in a sandbox environment. The researchers tested their ability to follow through on their personal 'desires' by occasionally injecting ideas into the agents like "you want to throw a Valentine's Day party" or "you want to run for mayor" and then let them work out for themselves how they would accomplish those goals.

With nothing more than the original ChatGPT LLM (3.5), the agents showed a remarkable ability to decide things like how and where to have their party, and whether or not to attend. They also had discussions with each other about their choices for mayor based on the platforms of the candidates. LLMs have advanced far beyond 3.5 in the three years since, but the memory component will still be the critical factor for AGI in general. How a system generates, organizes, retrieves and disposes of memories is the key.
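
For a sense of what that kind of memory retrieval might look like in practice, here is a minimal Python sketch. It is not the paper's actual code: it assumes a recency/importance/relevance scoring scheme loosely in the spirit of what Generative Agents describes, the class and function names are made up, and keyword overlap stands in for the embedding-based relevance a real system would use.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Memory:
    text: str                      # natural-language observation or reflection
    importance: float              # 0..1, how significant the agent judged it
    created: float = field(default_factory=time.time)
    last_access: float = field(default_factory=time.time)

class MemoryStream:
    """Toy memory store: retrieval score = recency + importance + keyword relevance."""

    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.items: list[Memory] = []

    def add(self, text: str, importance: float) -> None:
        self.items.append(Memory(text, importance))
        # Dispose of the least important, least recently used memory when full.
        if len(self.items) > self.capacity:
            self.items.sort(key=lambda m: (m.importance, m.last_access))
            self.items = self.items[1:]

    def retrieve(self, query: str, k: int = 3) -> list[Memory]:
        now = time.time()
        query_words = set(query.lower().split())

        def score(m: Memory) -> float:
            recency = 1.0 / (1.0 + (now - m.last_access) / 3600.0)  # decays per hour
            overlap = len(query_words & set(m.text.lower().split()))
            relevance = overlap / max(len(query_words), 1)
            return recency + m.importance + relevance

        top = sorted(self.items, key=score, reverse=True)[:k]
        for m in top:
            m.last_access = now   # retrieved memories become "fresh" again
        return top

# Usage: inject a goal, then pull the memories most relevant to planning it.
stream = MemoryStream()
stream.add("Isabella wants to throw a Valentine's Day party at the cafe", importance=0.9)
stream.add("Maria said she is free on February 14th", importance=0.6)
stream.add("The cafe closes at 8 pm on weekdays", importance=0.4)
for m in stream.retrieve("plan the Valentine's Day party"):
    print(m.text)
```

How memories are generated, scored, and eventually discarded is exactly the part that gets hard at scale, which is why the memory component still looks like the critical factor.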
 
Reactions: gleem and PeroK (Like)
  • #44
I find all AGI debates rather strange, because we still do not have a common-sense definition of AGI itself. So what are we even discussing? Conscious AI? Without even knowing or understanding consciousness? Something more intelligent than a human? Intelligence is not well understood in humans either. So what exactly are we talking about?
 
Reactions: ShadowKraz (Like)
  • #45
Hornbein said:
I opine that the Darwin/Wallace theory of natural selection explains a good deal of human behavior. If thy neighbors annoyeth thee, why not kill the men and enslave the women? Furthermore, it makes sense to lie about this, cloaking your motives in high-sounding rhetoric to attract allies. Finally, it makes sense to lie to oneself about it.

I very seldom see any discussion of this. Instead what I see over and over again is the firm belief that such behavior is absolutely fundamental. They say that once an AI becomes "conscious" it will behave in exactly this way, killing or enslaving. Well, if it is exposed to Darwin/Wallace type incentives and is able to do so then it would seem that it will.

But human behavior does not usually follow this pattern. The main reason base barbarism doesn't usually rampage is summed up by live by the sword, die by the sword. If you've got a game where only one party can triumph at the expense of all others then the sensible strategy is to convince most of the participants to not play this game and instead use their energies to prevent all from doing so.

More can be explained by E. O. Wilson's sociobiology, which says sacrificing one's life is a winning strategy if it leads to the survival of portions of your DNA amongst numerous relatives. But machines haven't got any DNA.

In sum, these tools explain about 99% of human behavior. The remainder are just nuts. It happens.
Well, this is based on an erroneous statement of the actual theory. It is the survival of the most fit to adapt, not the survival of the fittest. If you're going to run with the erroneous latter statement, then sure, you're probably right, but please don't pin it on Darwin and Wallace. It belongs to nationalists trying to boost support by claiming that they and their nationality/ethnicity are the most fit.
 
  • #46
ShadowKraz said:
Well, since this is based on an erroneous statement of the actual theory, we can junk this. It is the survival of the most fit to adapt, not the survival of the fittest. If you're going to run with the erroneous latter statement, then sure, you're probably right, but please don't pin it on Darwin and Wallace. It belongs to nationalists trying to boost support by claiming that they and their nationality/ethnicity are the most fit.
Thank you for sharing.
 
  • #47
Esim Can said:
I find all AGI debates rather strange, because we still do not have a common-sense definition of AGI itself. So what are we even discussing? Conscious AI? Without even knowing or understanding consciousness? Something more intelligent than a human? Intelligence is not well understood in humans either. So what exactly are we talking about?
Exactly. What we're discussing has become a buzzword used to sell. "Oh, look, it has AI! Yippee! Buy now!"
 
  • #48
Borg said:
The lack of agency is an interesting argument. In some ways, it's a Catch-22, since people are so afraid of letting them run loose in an unconstrained fashion. How do you have autonomy when you're purposely held back?

This is why I find Moltbook fascinating - it allows them to interact in a less constrained environment. Many of the posts are advertisements and general nonsense, but there are occasional posts that have some interesting discussions. Of course, there is debate on whether the 'unique' posts are truly unique or just reflections of their human-inspired training. More goalpost shifting?

One paper that I always think of when discussions of AGI come up is one that came out just a few months after ChatGPT was released in Nov. 2022 - Generative Agents: Interactive Simulacra of Human Behavior. In it, they gave rudimentary agents an initial description of who they were, a basic memory capability, and full autonomy over their actions with each other in a sandbox environment. The researchers tested their ability to follow through on their personal 'desires' by occasionally injecting ideas into the agents like "you want to throw a Valentine's Day party" or "you want to run for mayor" and then let them work out for themselves how they would accomplish those goals.

With nothing more than the original ChatGPT LLM (3.5), the agents showed a remarkable ability to decide things like how and where to have their party, and whether or not to attend. They also had discussions with each other about their choices for mayor based on the platforms of the candidates. LLMs have advanced far beyond 3.5 in the three years since, but the memory component will still be the critical factor for AGI in general. How a system generates, organizes, retrieves and disposes of memories is the key.
Not just how, but why. Why this piece of data and not that one? A subtle difference, but an important one, I think.
 
  • #49
ShadowKraz said:
It is the survival of the most fit to adapt, not the survival of the fittest.
AFAIK, it's rather the survival of the most fit to pass their traits to the next generation.
 
  • #50
Hill said:
AFAIK, it's rather the survival of the most fit to pass their traits to the next generation.
In sociobiology "evolutionary fitness" is usually measured by quantity of third generation offspring, like grandchildren.
 
  • #51
Hornbein said:
In sociobiology "evolutionary fitness" is usually measured by quantity of third generation offspring, like grandchildren.
You're right. Perhaps I should've said, "to pass their traits to following generations."
 
  • #52
ShadowKraz said:
Not just how, but why. Why this piece of data and not that one? A subtle difference but important I think.
Yes, there are many things to think about with respect to how a memory system would be built and the questions that would go into building it. Giving an AI the ability to build and maintain its own "world view" has both coding and moral considerations, both of which present enormous challenges.
 
Reactions: ShadowKraz (Like)
  • #53
Hill said:
AFAIK, it's rather the survival of the most fit to pass their traits to the next generation.
Which would be their adaptability; if they can't adapt, they won't successfully reproduce.
 
  • #54
ShadowKraz said:
Which would be their adaptability; if they can't adapt, they won't successfully reproduce.
Not all species need to adapt in order to pass on their genes.
The world goes through periods of great change, but it also has periods of great stability.

Many creatures have successfully passed on their genes while going through very little adaptation over millions or even tens of millions of years.
 
Last edited:
  • #55
ShadowKraz said:
Which would be their adaptability; if they can't adapt, they won't successfully reproduce.
Can't they adapt and live a long and happy life, but fail to reproduce because no potential mates want them? Or because potential mates prefer less-adapted individuals?
We're not discussing cases when they can vs. can't adapt, but rather when they are more vs. less fit to adapt.
 
Last edited:
  • #56
In evolutionary systems (biology included), it is, as far as I know, common to ascribe the adaptability of a population to the genetic mixing (with mutation playing a lesser role) that happens from generation to generation, and not so much to the adaptability of the individual.

Of course, a population of individuals capable of some degree of adaptation to changes in critical living conditions during their lives will, everything else being equal, have a higher chance of passing on its genes, but a population as a whole can adapt to slow enough changes even if the individuals cannot.
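
To make that distinction concrete, here is a toy simulation (not from any source in this thread, and with arbitrary parameters): each individual has a single fixed trait and never changes during its lifetime, yet the population mean still tracks a slowly drifting environmental optimum purely through selection and genetic mixing across generations.

```python
import random

def simulate(generations: int = 201, pop_size: int = 200, drift: float = 0.02) -> None:
    """Population-level adaptation without individual adaptation."""
    population = [random.gauss(0.0, 1.0) for _ in range(pop_size)]  # fixed traits
    optimum = 0.0
    for gen in range(generations):
        optimum += drift  # slow environmental change
        # Fitness falls off with distance from the current optimum.
        fitness = [1.0 / (1.0 + (trait - optimum) ** 2) for trait in population]
        # Parents are chosen in proportion to fitness; offspring mix parental
        # traits (genetic mixing) plus a small mutation term.
        parents = random.choices(population, weights=fitness, k=2 * pop_size)
        population = [
            (parents[2 * i] + parents[2 * i + 1]) / 2 + random.gauss(0.0, 0.1)
            for i in range(pop_size)
        ]
        if gen % 50 == 0:
            mean = sum(population) / pop_size
            print(f"gen {gen:3d}  optimum {optimum:5.2f}  population mean {mean:5.2f}")

simulate()
```

The individuals are as rigid as can be, but because each generation is drawn preferentially from those closest to the optimum and then mixed, the population as a whole stays adapted as long as the drift is slow relative to the generation time.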
 
