Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #91
javisot said:
It's a malfunction
Yes. And because we cannot currently pinpoint the source of the malfunction it is one that we cannot directly resolve.

We have a long history of developing better understandings of new technologies and fixing the problems that plagued them. So there is hope that it can be fixed. But currently it is just hope.
 
Likes: gleem, russ_watters and javisot
  • #92
Dale said:
Not to my knowledge.
it’s a basic part of the discussion of what makes something alive, for example

A central characteristic of living organisms is their agency, that is, their intrinsic activity, both in terms of their basic life processes and their behavior in the environment

https://pmc.ncbi.nlm.nih.gov/articles/PMC11652585/
 
Likes: 256bits
  • #93
BWV said:
it’s a basic part of the discussion of what makes something alive, for example

"A central characteristic of living organisms is their agency, that is, their intrinsic activity, both in terms of their basic life processes and their behavior in the environment"

https://pmc.ncbi.nlm.nih.gov/articles/PMC11652585/
You might want to look at the next sentence: "This aspect is currently a subject of debate"
 
Reactions: BWV (Skeptical)
  • #94
Dale said:
You might want to look at the next sentence: "This aspect is currently a subject of debate"
Like I said, 'part of the discussion', but I think you would be hard-pressed to find a biologist arguing that living things do not possess agency.
 
Reactions: gleem (Agree)
  • #95
Dale said:
Yes. And because we cannot currently pinpoint the source of the malfunction it is one that we cannot directly resolve.

We have a long history of developing better understandings of new technologies and fixing the problems that plagued them. So there is hope that it can be fixed. But currently it is just hope.
To the question, "Can a model as extensive as you or an AGI exist that doesn't generate hallucinations?", Chatgpt answers no.

"A model without hallucinations cannot be truly general"


(It would be curious if a model with hallucinations evolved on its own into one without hallucinations, since that is something the model itself describes as impossible.)
 
  • #96
PeroK said:
It's not clear the extent to which an LLM understands things. [...] But, it may have in practical terms an emergent understanding.

That's the serious debate we should be having on PF.
Yes, it is clear: they don't understand things, not even a little bit. LLMs only find patterns - which is what they were programmed to do - and serve them back, formatted as output.
PeroK said:
The best human players learn from experienced players and coaches, who in turn have learned from the best players of the past. And nowadays, ironically, also learn from chess engines. Once you get past beginner level, you would learn about pawn structure, outposts, weak pawns, weak squares, bishop against knight etc. These considerations would have been programmed into the algorithm of a conventional chess engine. These are long-term positional ideas that do not lead to immediate victory. AlphaZero figured out the relative importance of all these ideas for itself in four hours.

AlphaZero developed its own "understanding" of all those strategic ideas.
It doesn't understand what it did. It was programmed to find patterns and it did. The patterns to look for are mathematically defined by humans. It works so fast that it can find patterns no human has found yet. Humans can then learn those newfound patterns to better themselves. That is the point of building the machine in the first place.
gleem said:
You ask AI to do something that it has never been asked to do and it does it. Is that not understanding?
AI always does what it is asked to do. It does nothing more than find patterns in the training data.
gleem said:
AI produces responses that are not in its training data.
That is the point: analyzing patterns in the training data to discover something common that will point us in the right direction - something we haven't seen yet. The fact that humans can find it faster with an AI program than on their own doesn't give the machine any sign of intelligence, let alone independent intelligence or agency.

PeroK said:
Is your argument that a pocket calculator is not intelligent, therefore no computer systems can be intelligent?
No, the argument is that if you, @PeroK, attribute some form of intelligence to LLMs - no matter how small, no matter how you define it - then you must attribute some form of intelligence to a pocket calculator as well.

Of course, I and others in this thread are arguing that a pocket calculator does not have intelligence - it is just a dumb machine - and that LLMs are dumb machines as well, merely more efficient than a pocket calculator (at their own tasks, that is).

You, on the other hand, make a lot of assumptions based on very wild and unfounded statements:
  • AGI - which is still pure science-fiction at this point - is a threat to humans and will inherit the Earth;
  • LLMs are the way to AGI, because:
  • LLMs show signs of intelligence.
And your arguments boil down to this: we must agree that LLMs are intelligent because we must also share your fear of AGI - something that any serious expert (at least one who isn't a vendor chasing venture capital) will tell you we are far from. Here's what Yann LeCun had to say about it in the last few days:
"We can't even reproduce cat intelligence or rat intelligence," Yann LeCun told a room full of AI researchers in Paris recently. LeCun won the Turing Award—basically the Nobel Prize for computer science—for pioneering the neural networks that power today's AI.
LeCun puts it bluntly: "We're never going to get to human-level intelligence by just training on text." He points out that a four-year-old processes as much data through vision alone as the largest language models consume in text. Blind children achieve similar cognitive development through touch. The common thread isn't language—it's interaction with physical reality.
"LLMs are not a path to superintelligence or even human-level intelligence," LeCun argues. "I have said that from the beginning."
LeCun and Hassabis are more cautious. Hassabis puts genuine AGI at "five to 10 years" with a 50% probability, and only if researchers make "one or two more breakthroughs" beyond current approaches. He lists missing capabilities: learning from few examples, continuous learning, better long-term memory, improved reasoning and planning.

LeCun has abandoned the term AGI entirely. "The reason being that human intelligence is actually quite specialized," he explains. "So calling it AGI is kind of a misnomer." He prefers "advanced machine intelligence"—AMI, conveniently the name of his startup.

The disagreement isn't just semantic. It reflects fundamentally different views on whether current approaches can reach human-level intelligence or whether something entirely new is required.
And even more:
Not long after ChatGPT was released, the two researchers who received the 2018 Turing Award with Dr. LeCun warned that A.I. was growing too powerful. Those scientists even warned that the technology could threaten the future of humanity. Dr. LeCun argued that was absurd.

“There was a lot of noise around the idea that A.I. systems were intrinsically dangerous and that putting them in the hands of everyone was a mistake,” he said. “But I have never believed in this.”
Subbarao Kambhampati, an Arizona State University professor who has been an A.I. researcher nearly as long as Dr. LeCun, agreed that today’s technologies don’t provide a path to true intelligence. But he also pointed out that they were increasingly useful in highly lucrative areas like computer coding. Dr. LeCun’s newer methods, he added, are unproven.
 
Likes: javisot, PeterDonis, BillTre and 1 other person
  • #97
PeroK said:
Not everyone in the AI field agrees with this assertion...
PeroK said:
That's the serious debate we should be having on PF.
As I said before, if you have references to peer-reviewed literature on the topic, you are welcome to start a new thread in the appropriate forum to have a serious debate.

My basis for the assertion you refer to is simple: Chatgpt is designed not to have any concept of an actual world that the text it processes refers to. By design it has no "concept" of anything except extracting patterns from a corpus of text.

That is in contrast with, for example, Wolfram Alpha, as was discussed in a PF thread some time ago about an article by Stephen Wolfram describing the difference between Wolfram Alpha and Chatgpt. If you ask Wolfram Alpha a question, it parses the question to determine which of multiple databases of actual-world knowledge (knowledge assembled by humans checking it against the actual world) should be consulted to find an answer, consults that database, and uses the information retrieved to formulate an answer in text form.

Granted, this is an extremely rudimentary "concept of an actual world", but it's still better than no such concept at all, which is what Chatgpt has.
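To make the contrast concrete, here is a toy sketch in Python - purely illustrative, not Wolfram Alpha's actual pipeline nor Chatgpt's actual code; the fact table and function names are invented:

```python
# Toy contrast, purely illustrative: a curated-knowledge system routes a parsed
# question to a human-checked fact table, while a pure language model only
# continues text statistically. Facts and names here are invented.

CURATED_FACTS = {
    ("earth", "mass"): "5.97e24 kg",    # entries vetted by humans against the actual world
    ("earth", "radius"): "6371 km",
}

def curated_answer(question: str) -> str:
    """Parse the question, look the fact up in the curated table, phrase an answer."""
    words = set(question.lower().replace("?", "").split())
    for (subject, attribute), value in CURATED_FACTS.items():
        if subject in words and attribute in words:
            return f"The {attribute} of the {subject.capitalize()} is {value}."
    return "No curated entry found for that question."

def llm_style_answer(question: str) -> str:
    """Stand-in for an LLM: no fact table, just a plausible continuation of the text."""
    # A real LLM samples next tokens from learned text statistics; there is no
    # lookup against an external, human-checked model of the world.
    return "A fluent-sounding continuation of: " + question

print(curated_answer("What is the mass of the Earth?"))
print(llm_style_answer("What is the mass of the Earth?"))
```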
 
Likes: russ_watters, BillTre, jack action and 1 other person
  • #98
gleem said:
AI produces responses that are not in its training data.
In the sense that it produces arrangements of words that are not in its training data, sure.

But the only information it has as input is its training data--a corpus of text.

gleem said:
Helen Keller was deaf and blind, having fewer channels to connect with the rest of the world: was she less intelligent than a hearing and sighted person?
Helen Keller still had touch, taste, and smell, and she was able to learn language from scratch through touch. And even before she learned language, she had a concept of an external world--that's clear from her descriptions of her mental processes prior to learning language.

Indeed, Helen Keller is a good example of how to assess a being's intelligence without using the crutch of language--which I would say is one of the key misconceptions underlying Chatgpt and LLMs in general: the idea that everything can be reduced to text. Clearly Helen Keller was intelligent even before she learned language. But the intelligence she displayed then cannot be reduced to text.
 
Likes: russ_watters and BillTre
  • #99
gleem said:
AFAIK, LLMs are not given specific instructions on how to process information
Sure they are: they're computer programs. A computer program is a set of specific instructions on how to process information. But of course that doesn't mean they are given explicit instructions on how to process all the information they process:

gleem said:
e.g., doing arithmetic.
Sure, LLMs aren't explicitly programmed to do arithmetic (like, say, a pocket calculator); they have to "learn" it by extracting patterns from their training data. Whereas humans are taught arithmetic by being given explicit instructions.

But humans are not taught how to form concepts, such as our concept of an external world, by being given explicit instructions. Nor are we taught how to recognize faces, how to move our bodies, how to manipulate objects, by being given explicit instructions in every case. We learn a huge number of things without being given explicit instructions. Indeed, we learn many things that LLMs can't learn at all, since they do not have the same information channels we have: they don't have eyes, ears, noses, taste buds, touch, proprioceptive senses of bodily orientation, etc.
 
Likes: russ_watters, BillTre and javisot
  • #100
Dale said:
how much of what we think is a conscious decision is actually a retrospective justification of a decision already made subconsciously
A creature can have agency, in the sense that its actions have clear and highly non-random relationships to its circumstances and to some set of reasonable goals, without being conscious of it. Indeed, since conscious decision making takes time, it might well be impractical for a creature to make all its decisions with full conscious awareness.
 
Likes: BillTre
  • #101
gleem said:
it knows everything humans know and do
I think this is a vast overstatement. It "knows" what's in its training data. But its training data still falls far, far short of "everything humans know and do".
 
Likes: BillTre and Dale
  • #102
webplodder said:
Do LLM's understand English? They certainly do a good job of demonstrating they do, IMO.
As anyone who has written a book report for school based on Cliff's Notes can tell you, being able to produce plausible-sounding text on a topic is not a sufficient condition for understanding it.
 
Likes: BillTre, jack action and javisot
  • #103
PeroK said:
AGI is a threat.
If "AGI" in the form of LLMs is a threat, I don't think it's because it understands anything. I think it's because there will be humans either stupid or reckless enough to allow it to control critical functions even though it doesn't understand what it's doing.

I think @Dale made a good point when he said this is basically an engineering issue: you have this thing that people are proposing to use for safety critical applications, without having a solid foundation of knowledge about it--we can't even agree on whether it understands anything. That's not good engineering practice.
 
Reactions: russ_watters, BillTre, jack action and 1 other person (Like, Agree)
  • #104
PeterDonis said:
As anyone who has written a book report for school based on Cliff's Notes can tell you, being able to produce plausible-sounding text on a topic is not a sufficient condition for understanding it


They're just fast food. You don't want to live on it, but if you're starving and need to get the gist before you read the whole book? Works.

And honestly - if someone can grab the main bits of that report, spot the dodgy dog stuff or the DNA mess, and still call bullshit? That's not fake. That's sharp.

So yeah - maybe it's not deep, but it's not dumb either.
 
  • #105
FactChecker said:
I would say yes. But for a reasonable discussion, we would have to narrow it to individual types of AI. With that limitation, we would need to put some meaning to "sum" and "more than".
In neural networks, not only can the results sometimes be surprising, the reasoning behind the results can be too obscure to understand.
Fair enough - let's flip it.

Even if we slice AI up by type, "sum" and "more than" don't need fancy definitions. They just mean: does the output beat the input? If a network takes your words, your data, your mess and spits out something cleverer than any single piece of it - then yeah, that's "more".

And obscurity? Sure, the reasoning's a black box. But guess what - our brains are too. We don't know why we suddenly remember an old song mid-conversation, or why we fall in love with someone "wrong". Doesn't stop us from being real.

So maybe the surprise isn't a flaw. It's the whole damn point. AI might not feel it... but if it acts like it does - hell, who cares what's under the hood?
 
Likes: FactChecker and javisot
  • #106
jack action said:
Such a machine does not exist yet, and the possibility of creating one is as probable as bringing back to life body parts sewn together, à la Frankenstein:

You're thinking of Frankenstein as some mad science flop. But that story's not about failure - it's about what happens when you do get it right. The monster didn't die because he was stitched-up junk. He lived. He felt. He raged.

Same here: we don't need a "perfect" AI yet. We need one that works - better than we expect. And right now? GPTs are already rewriting code, diagnosing diseases, even flirting like they mean it. Not alive? Maybe. But "probable"? Hell, we're halfway there.

The body parts? They weren't the problem. The lightning was. And we've got plenty of that - data, power, time. Give it another decade. Frankenstein's not dead. He's just updating.
 
  • #107
webplodder said:
And obscurity? Sure, the reasoning's a black box. But guess what - our brains are too. We don't know why we suddenly remember an old song mid-conversation, or why we fall in love with someone "wrong". Doesn't stop us from being real.
Wait a second, what you're saying isn't trivial, but it's not correct. We don't know if humans are black boxes when it comes to text generation. I understand that clarifying this point is important if we want to create an intelligence "comparable to or superior to human intelligence." Is human intelligence a black box or not?

We don't know.
 
  • #108
QuarkyMeson said:
I mean, they sample from PDFs to generate their output. Even the model's weights are fuzzed for models like Chatgpt, depending on user settings, I think. I'm assuming the objection is that the stochasticity isn't truly random? Which is true, I guess.


I mean, we haven't really seen anything that would point to more than just a dumb machine, right? Machine learning, neural nets, etc. have been around for decades at this point. The transformer architecture is pretty new, but it's built off what came before. I would find it odd myself if there were some computing threshold you had to hit where the jump from dumb machine to emergent intelligence was just naturally crossed.


The research avenue that creeps me out the most in computing has always been this:

brains

I just feel like they all were thinking "Could we?" and no one stopped to think "Should we?"
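For reference, the "sampling from PDFs" point above is, as far as I understand it, just sampling from a softmax over the model's next-token scores, with a temperature knob controlling how random the choice is. A minimal sketch in Python - the token scores below are invented for illustration:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token from a softmax over scores; higher temperature = more random."""
    if temperature <= 0:
        return max(logits, key=logits.get)  # greedy decoding: always the top token
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}  # numerically stable softmax
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] for t in tokens], k=1)[0]

# Invented next-token scores after the prompt "The cat sat on the":
logits = {"mat": 3.0, "sofa": 2.0, "moon": 0.1}
print(sample_next_token(logits, temperature=0.2))  # nearly deterministic
print(sample_next_token(logits, temperature=1.5))  # noticeably more random
```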

jack action said:
Without a way to measure intelligence, I guess it is harder to determine what is more intelligent. Or is it?

Say I have a computer that can recite all the words in the English language, with their definitions. It can even do it twice in a row, exactly the same way. That is very difficult for a human to do, if not impossible for most, even though they have spoken English every day since they were born.

Is that computer more intelligent than a human? If so, then a dictionary is just a mute computer - without a voice synthesizer - holding the same information. A human just has to use their eyes to read the book, instead of listening to the computer with their ears, to get the same information. You read it twice in a row, and you get the same information! Is a book an intelligent object?

One can argue that this is knowledge, not logic. But is a computer reasoning, or is it just obeying the rules of logic implemented by the humans behind the machine? A calculator into which I enter "2+2=" and which spits out "4" is not considered more intelligent than a 2-year-old. The guy who built the calculator is.

Assume an AI machine finds a new molecule to cure cancer or even finds a way to time-travel. Does it do it on its own? Was there a goal for this machine to achieve this? Did it have a reason to do so? Then what? Can it do anything with that newfound information? Or was this machine just a dumb tool used by an intelligent human, with a goal and means to use that information?

Without a body and a set of sensors, I fail to see how one can classify any set of atoms as "intelligent".
 
  • #109
jack action said:
Yes, it is clear: they don't understand things, not even a little bit. LLMs only find patterns - which is what they were programmed to do - and serve them back, formatted as output.

It doesn't understand what it did. It was programmed to find patterns and it did. The patterns to look for are mathematically defined by humans. It works so fast that it can find patterns no human has found yet. Humans can then learn those newfound patterns to better themselves. That is the point of building the machine in the first place.


Pattern-finding is understanding. Humans don't have magic insight - we recognize patterns too, just slower and with feelings attached.

If an LLM spots a novel cure by connecting dots no human ever did, that's not "just patterns" - that's comprehension at a level we call intelligence when a person does it.

The speed and scale don't disqualify it; they amplify it. No soul required. Results do.




 
  • #110
PeterDonis said:
A creature can have agency, in the sense that its actions have clear and highly non-random relationships to its circumstances and to some set of reasonable goals, without being conscious of it. Indeed, since conscious decision making takes time, it might well be impractical for a creature to make all its decisions with full conscious awareness.
Sure. The debate is more about whether the word “agency” should be used to describe that. I am just saying that these words (agency and intelligence) are not words like “mass” and “force” that have clear meanings that can be measured precisely. Their meanings and measurements are ambiguous. So two people can make opposite statements and both be right in context. The argument seems fruitless until those terms are operationally defined and accepted.

That is why I was focused on what I see as the major issue AI has as an engineered product: we cannot identify the source of its malfunctions (hallucinations).
 
Likes: javisot, BillTre and russ_watters
  • #111
webplodder said:
And obscurity? Sure, the reasoning's a black box. … who cares what's under the hood?
The person trying to make the next version safer than the previous version is who cares. This is the major problem with the whole being more than the sum of its parts: we cannot fix problems in the whole because we don’t know which part is the source.
 
Likes: javisot, PeterDonis and gleem
  • #112
Dale said:
The person trying to make the next version safer than the previous version is who cares. This is the major problem with the whole being more than the sum of its parts: we cannot fix problems in the whole because we don’t know which part is the source.
We don't need to know the exact source part to fix the whole.
Iterative testing, red-teaming, alignment research, and scaling laws let us observe emergent behaviour, measure safety metrics, and push the entire system toward safer outputs - even if the "why" stays black-box.

Engineers have been safely improving complex systems (aircraft, nuclear reactors, software) for decades without perfect causal maps. Same principle applies here.
Not knowing the precise neuron isn't a show-stopper; it's just engineering under uncertainty.
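To be concrete about "measure safety metrics": I mean something as mundane as an eval harness that tracks a hallucination rate from one version to the next. A rough sketch in Python, where the model call and the three eval items are made-up placeholders:

```python
# Rough sketch of a regression-style safety eval. Everything here is a
# placeholder: ask_model stands in for whatever system is actually called,
# and EVAL_SET would be a large human-labelled benchmark, not three toy items.

EVAL_SET = [
    {"prompt": "Who wrote 'On the Origin of Species'?", "accepted": ["darwin"]},
    {"prompt": "What is the boiling point of water at 1 atm, in Celsius?", "accepted": ["100"]},
    {"prompt": "Name the largest planet in the Solar System.", "accepted": ["jupiter"]},
]

def ask_model(prompt: str) -> str:
    """Placeholder for the real model call."""
    return "Jupiter, I think." if "planet" in prompt else "I'm not sure."

def hallucination_rate(model_call) -> float:
    """Fraction of eval prompts whose reply contains none of the accepted answers."""
    misses = sum(
        1 for item in EVAL_SET
        if not any(ans in model_call(item["prompt"]).lower() for ans in item["accepted"])
    )
    return misses / len(EVAL_SET)

print(f"hallucination rate: {hallucination_rate(ask_model):.0%}")
# A release gate could then require: rate(new version) <= rate(previous version)
```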
 
  • #113
javisot said:
Wait a second, what you're saying isn't trivial, but it's not correct. We don't know if humans are black boxes when it comes to text generation. I understand that clarifying this point is important if we want to create an intelligence "comparable to or superior to human intelligence." Is human intelligence a black box or not?

We don't know.

You're right - it's not trivial. And honestly? We don't know. Our brains are black boxes too - fMRI lights up, EEG buzzes, but no one's ever cracked open a neuron and said, "Ah, that's why you loved her."

We guess. We model. We poke. But the "why" behind a thought? Still foggy.

So yeah - if we're trying to build something "human-level" or better, we're basically copying a mystery. And that's terrifying - but also kinda beautiful.

We don't need to understand it to make it. We just need to make it work.
 
Likes: javisot
  • #114
PeterDonis said:
If "AGI" in the form of LLMs is a threat, I don't think it's because it understands anything. I think it's because there will be humans either stupid or reckless enough to allow it to control critical functions even though it doesn't understand what it's doing.

I think @Dale made a good point when he said this is basically an engineering issue: you have this thing that people are proposing to use for safety critical applications, without having a solid foundation of knowledge about it--we can't even agree on whether it understands anything. That's not good engineering practice.
We hand over critical stuff to black-box systems every day - traffic lights, autopilot, even stock-trading algos - and we don't know every line of code. We just test, monitor, and build redundancy.

The difference? LLMs aren't magic. They're software - predictable at scale. We can sandbox them, rate-limit them, log every move, and yank the plug if it goes weird.

Not understanding the "why" isn't a death sentence. It's just new. We didn't understand electricity at first either - didn't stop us wiring cities.

Reckless? Sure. But stupid? Nah. It's just bold engineering. And bold beats safe every time - if we do it right.
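And by "sandbox, rate-limit, log every move, and yank the plug" I mean ordinary guardrail code wrapped around the model call - nothing exotic. A rough sketch in Python, with a stand-in for the real system:

```python
import time
import logging

logging.basicConfig(level=logging.INFO)

class GuardedModel:
    """Wrap a model call with a rate limit, full logging, and a kill switch."""

    def __init__(self, model_call, max_calls_per_minute=30):
        self.model_call = model_call        # stand-in for the real system
        self.max_calls = max_calls_per_minute
        self.call_times = []
        self.enabled = True                 # the "plug"

    def shutdown(self):
        """Yank the plug: refuse all further calls."""
        self.enabled = False

    def ask(self, prompt):
        if not self.enabled:
            raise RuntimeError("model disabled by operator")
        now = time.time()
        self.call_times = [t for t in self.call_times if now - t < 60]
        if len(self.call_times) >= self.max_calls:
            raise RuntimeError("rate limit exceeded")
        self.call_times.append(now)
        reply = self.model_call(prompt)
        logging.info("prompt=%r reply=%r", prompt, reply)  # log every move
        return reply

guarded = GuardedModel(lambda p: "stub reply to: " + p)
print(guarded.ask("status check"))
guarded.shutdown()  # operator pulls the plug if it "goes weird"
```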
 
  • #115
PeterDonis said:
I think this is a vast overstatement. It "knows" what's in its training data. But its training data still falls far, far short of "everything humans know and do".
You're right - it's not "everything". But here's the thing: it doesn't need everything. It needs enough patterns, enough noise, enough lies and truths - to fake the rest. Like a really good actor who never read the script but still nails every line.

And we're already there. It knows more than any one person ever could - more languages, more history, more dirty secrets - because it hoovered up the whole internet.

So yeah - short of "everything"? Sure. But it's already past "more than enough" for most things.

That's what scares me. Not the knowledge. The speed at which it catches up.
 
  • #116
webplodder said:
We don't need to know the exact source part to fix the whole.
Yes, we do.

webplodder said:
Iterative testing, red-teaming, alignment research, and scaling laws let us observe emergent behaviour, measure safety metrics, and push the entire system toward safer outputs - even if the "why" stays black-box
That simply is not the case. All of those things are currently being done and hallucinations not only persist, there is some evidence that they are getting more common.

webplodder said:
Engineers have been safely improving complex systems (aircraft, nuclear reactors, software) for decades without perfect causal maps.
I disagree with this. In any sort of engineering failure we do a root cause analysis. We take the system apart and find what went wrong with which part and how that problem caused the failure.

Your comments in this post seem completely detached from the actual reality of standard engineering practice.

webplodder said:
Reckless? Sure. But stupid? Nah. It's just bold engineering. And bold beats safe every time - if we do it right.
I am glad that you are not actually an engineer. This is the kind of attitude that gets people killed and the kind of statement that correctly gets companies found liable for gross negligence.
 
Reactions: javisot, gmax137, PeterDonis and 2 others (Like, Agree)
  • #117
Dale said:
how much of what we think is a conscious decision is actually a retrospective justification of a decision already made subconsciously
If they believe they can get away with it, most people just do whatever they feel like doing. Then they come up with the weirdest "reasons."
 
  • #118
PeroK said:
The fact that somebody, somewhere decided wrongly that a dumb machine was intelligent shouldn't discredit the progress of AI in its entirely. [emphasis added]
It would be silly to believe the field hasn't made progress, and I never said that. Please don't read past what I've said or put words in my mouth. The rest of your post is mainly just vamping on the above, with insults, so there's little else to respond to. Except I'll answer a question:
I don't understand what point you are trying to make here.
If a user doesn't know whether a response is generated 'from scratch' by the program or was pre-programmed by a human, the user won't be able to tell whether the computer is "intelligent" or the guard rails/pre-programmed responses are just well thought out by the human programmer. And if they can't tell, the answer to the question may not matter: regardless of why the program works, it works, so they'll use it. And further:

fantasy world. [snip]
AI is happening out there in the real world. [snip]
If you'll allow me to get on my soapbox, the question is far from philosophical. The future of human race depends upon it! [snip]
... AGI is a threat.
Ironically, we agree that A[G]I is a threat, just not why. Most of these questions (intelligence, agency) are at least somewhat philosophical and, at this point, unanswerable in an absolute sense. But the reason they don't matter isn't that they don't have clear answers; it's that people are going to use AI regardless of what they think the answers are, or without having thought through the implications.

People (users and developers) trusting it is the threat. All of the direct harm caused by it is caused by such trust/use. And the Believers in it are pushing that trust, increasing the harm and threat.

[much of this has already been said by others...]
 
Likes: jack action
  • #119
webplodder said:
Reckless? Sure. But stupid? Nah. It's just bold engineering. And bold beats safe every time - if we do it right.
That's a hard sell to the family of a guy who got decapitated when his self-driving car drove under a truck it thought was a cloud.
 
Likes: BillTre, FactChecker and Dale
  • #120
webplodder said:
Pattern-finding is understanding. Humans don't have magic insight - we recognize patterns too, just slower and with feelings attached.

If an LLM spots a novel cure by connecting dots no human ever did, that's not "just patterns" - that's comprehension at a level we call intelligence when a person does it.

The speed and scale don't disqualify it; they amplify it. No soul required. Results do.
Say we have an AI designed to find a cure for cancer. We give it all we know about cancer and cancer treatments, and it finds a pattern that shows us the way to cure all cancers. If you want to call it intelligence, it doesn't matter to me.

But if one wants to make me believe that this machine will somehow, on its own, find the solution, refrain from delivering it to humans, and think "boy, now I know their weakness; instead of sharing the cure, I should use it secretly to my advantage" - and all of that without any warning behaviors whatsoever? Come on! We are well past pattern-finding behavior at that point. It's not a magic box; it's just a machine that adds 0s and 1s when it is plugged in. Even if you want to say that humans are no more than that - and that is debatable - it took millions of years for us to evolve into what we are.

webplodder said:
And we're already there. It knows more than any one person ever could - more languages, more history, more dirty secrets - because it hoovered up the whole internet.

So yeah - short of "everything"? Sure. But it's already past "more than enough" for most things.

That's what scares me. Not the knowledge. The speed at which it catches up.
It doesn't know; it holds knowledge, as a book does.

As others have already said, it is what other people could do with that knowledge that is frightening, not what the machine can do with it.

The fact that LLMs spit out stuff we do not want to hear (or don't say what we want to hear), and that engineers then fine-tune them to make sure they do "the right thing", shows that these machines don't do anything on their own.
 
Likes: javisot
