Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #121
webplodder said:
if someone can grab the main bits of that report, spot the dodgy dog stuff or the DNA mess, and still call bullshit?
You're missing the point. What I was describing was not "calling bullshit" based on reading a book report written from Cliff's Notes, but teachers not "calling bullshit" because they didn't even realize the student who wrote the book report had not actually read the book--the student was able to fake having read and understood the book, based on the Cliff's Notes, well enough to fool the teacher.

In this analogy, the LLM is the student, and the humans who think it understands, based on the text it outputs, are the teacher.
 
  • Likes: russ_watters and gleem
  • #122
Dale said:
The debate is more about whether the word “agency” should be used to describe that.
Fair enough.

Dale said:
That is why I was focused on what I see as the major issue AI has as an engineered product: we cannot identify the source of its malfunctions (hallucinations).
This I agree is a major issue.
 
  • Likes: Dale
  • #123
Dale said:
There are ongoing debates on this topic both in psychology (how much of what we think is a conscious decision is actually a retrospective justification of a decision already made subconsciously) and philosophy (how we should even define agency)
I brought up the point that consciousness may be an illusion. But ISTM that Freud might be right. Young humans often act on impulse without regard to consequences, which becomes less of a problem as they mature. So something is happening in the brain. Even if consciousness is an illusion, the subconscious may still "debate" the value of an action.

In the common vernacular, agency does not require the agent to initiate the task.
javisot said:
AI hallucinations are not a creative and positive process. It's a malfunction and a loss of effectiveness.
Would you consider a serendipitous discovery a hallucination?

jack action said:
No, the argument is that if you, @PeroK , attribute some form of intelligence to LLMs - no matter how small, no matter how you define it - then you must attribute some form of intelligence to a pocket calculator as well.
A calculator can only perform calculations and only in a specified way. Not intelligent. LLMs are provided with information as humans are, to perform tasks as humans do, and to augment their capabilities as humans can. NN-equipped AI is not programmed the same way as other computerized systems, which are designed to do things always in a specified way, depending on the input. If you ask an AI a question at different times, it will probably give a different version of its previous response, just as a human would.

jack action said:
That is the point: analyzing patterns in the training data to discover something common that will point us in the right direction, something we haven't seen, yet. The fact that humans can find it faster with an AI program than on their own, doesn't give any sign of intelligence to the machine, especially independent intelligence, an agency.
Again, AI does what humans do.
 
  • Likes: PeroK
  • #124
gleem said:
LLMs are provided with [a very, very restricted subset of] information as humans are, to perform [a very, very restricted subset of] tasks as humans do, and to augment their capabilities [in a very, very restricted subset of ways] as humans can.
See the additions in brackets I made above to your claims. They make a huge difference. The claim in the hype is AGI--artificial general intelligence. LLMs, at best, even if we accept for the sake of argument that they are "intelligent" in some way in their restricted domain (I don't, but there are many who do, apparently including you), are limited to that restricted domain--to a very specialized subset of information and tasks and capabilities. Humans are not.
 
  • Likes: russ_watters
  • #125
gleem said:
AI does what humans do.
In a very, very restricted domain, one can argue this, yes. I actually have no problem with the general idea that all "intelligence" boils down to some form of pattern recognition--an idea that has been around in the AI community for decades. But claims of "intelligence" for LLMs are based on the much, much stronger (and to me obviously false--though certainly not unprecedented, similar claims have also been around in the AI community for decades) claim that all intelligence can be boiled down to pattern recognition in a corpus of text. There is much, much, much more to the universe than text.
 
  • Likes: russ_watters
  • #126
gleem said:
A calculator can only perform calculations and only in a specified way. Not intelligent. LLMs are provided with information as humans are, to perform tasks as humans do, and to augment their capabilities as humans can. NN-equipped AI is not programmed the same way as other computerized systems, which are designed to do things always in a specified way, depending on the input. If you ask an AI a question at different times, it will probably give a different version of its previous response, just as a human would.
No one would argue that an ML model based on OLS is non-deterministic or somehow intelligent, nor would anyone make that claim for a simple NN that distinguishes cats from dogs. But scale up to hundreds of billions of non-linear parameters in a language model, and slightly different contexts will produce different results for the same question - and that is before the controlled randomness that is built into the models to produce the effect you describe. It is still deterministic - chaotic perhaps - but it's no more 'intelligent' than an OLS model in principle.
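
A minimal sketch of that "controlled randomness" (the vocabulary and logits below are invented for illustration; no real model is involved): the score computation is a fixed function of its input, and the variety in the answers comes from a temperature-controlled sampling step that is itself exactly reproducible once the seed is fixed.

```python
import math
import random

# Invented stand-ins: a deterministic model has produced these scores (logits)
# for the next word; the same prompt always yields the same logits.
vocab = ["intelligent", "deterministic", "chaotic", "a parrot"]
logits = [2.0, 1.5, 0.3, -1.0]

def sample_next(logits, temperature, rng):
    """Softmax with temperature, then a single random draw."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                                    # subtract max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    return rng.choices(vocab, weights=weights, k=1)[0]

rng = random.Random(42)                                # fix the seed
print([sample_next(logits, 0.8, rng) for _ in range(5)])   # varied answers to the "same question"

rng = random.Random(42)                                # same seed again
print([sample_next(logits, 0.8, rng) for _ in range(5)])   # identical list: the whole pipeline is deterministic
```

Raising the temperature flattens the weights and makes the draws more varied; the underlying logits never change.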
 
  • #127
BWV said:
it's no more 'intelligent' than an OLS model in principle
One could make a similar argument about humans--our brains are just physical systems running according to physical laws, after all. No one would argue that a single human neuron is intelligent, so why should it be any different if we scale up to 100 billion neurons?

Obviously this argument fails for humans (or at least I think so--but I'm not sure whether everyone in this discussion would agree that even we humans are "intelligent"!), so we can't accept it as it stands for LLMs either. We have to look at other differences between LLMs and humans, such as the ones I have pointed out in several previous posts, if we are going to claim that humans are intelligent while LLMs are not.
 
  • Likes: Klystron, Grinkle, russ_watters and 1 other person
  • #128
PeterDonis said:
One could make a similar argument about humans--our brains are just physical systems running according to physical laws, after all. No one would argue that a single human neuron is intelligent, so why should it be any different if we scale up to 100 billion neurons?

Obviously this argument fails for humans (or at least I think so--but I'm not sure whether everyone in this discussion would agree that even we humans are "intelligent"!), so we can't accept it as it stands for LLMs either. We have to look at other differences between LLMs and humans, such as the ones I have pointed out in several previous posts, if we are going to claim that humans are intelligent while LLMs are not.
That is where the agency argument comes in - cells have agency, activation functions in a NN do not.
 
  • Skeptical: Dale
  • #129
BWV said:
cells have agency
In what sense?

BWV said:
activation functions in a NN do not
In what sense?

Both are just following physical laws. What's the difference between them that makes one an "agent" and the other not?
 
  • Likes: PeroK
  • #130
PeterDonis said:
In what sense?


In what sense?

Both are just following physical laws. What's the difference between them that makes one an "agent" and the other not?

Not claiming there is universal consensus in biology (or that I have any expertise to judge) but not hard to find examples in the literature like:
Living cells not only construct themselves, but as open thermodynamic systems, they must ‘eat’ to survive. In general, cells can evolve to ‘eat’ because living cells are nonlinear, dynamical systems with complex dynamical behaviour that enables living cells, receiving inputs from their environment and acting on that environment, to sense and categorize their worlds, orient to relevant features of their worlds, evaluate these as ‘good or bad for me’ and act based on those evaluations. This is the basis of agency and meaning. Agency and meaning are immanent in evolving life. The semantic meaning is: ‘I get to exist for a while’. The capacity to ‘act’ is immanent in the fact that living cells achieve constraint closure and do thermodynamic work to construct themselves. The same capacity enables cells to do thermodynamic work on their environment. A cell’s action is embodied, enacted, embedded, extended and emotive [40,42,43,52,53].

Cells are molecular autonomous agents, able to reproduce, do one or more thermodynamic work cycles and make one or more decisions, good or bad for me [52,53]. The capacity to learn from the world, categorize it reliably and act reliably may be maximized if the cell, as a nonlinear dynamical system, is dynamically critical, poised at the edge of chaos [54,55]. Good evidence now demonstrates that the genetic networks of many eukaryotic cells are critical [56,57]. Such networks have many distinct dynamical attractors and basins of attraction. Transition among attractors is one means to ‘make a decision’. It will be of interest to test whether Kantian whole autocatalytic sets can evolve to criticality.
https://pmc.ncbi.nlm.nih.gov/articles/PMC12489499/
 
  • Likes: BillTre
  • #131
gleem said:
Would you consider a serendipitous discovery a hallucination?
No. Furthermore, you touch on an important point here: the difference between a mistake and a hallucination is still being defined. Both a hallucination and a mistake are errors, but a hallucination is characterized by the impossibility of identifying the source of the problem.
 
  • Likes: russ_watters
  • #132
I'll mention a problem I see with the term "intelligence." It's just a word, a concept we use to refer to a very diverse set of abilities. Measuring intelligence is measuring a word; what we can actually measure is our ability to solve different problems, from which we then indirectly infer that we are "very intelligent" or "not very intelligent".

We humans describe ourselves as "intelligent" because, on average, we share the same set of abilities, developed to relatively similar degrees.



(I'll also mention a curious phrase I came up with in a conversation with ChatGPT: "I'm analogous to a Turing machine without the halting problem. I follow deterministic processes that aren't necessarily exact.")
 
Last edited:
  • Likes: russ_watters
  • #133
And I agree with Dale's point: the "intelligent, yes or no" debate is really pointless. Perhaps it's a waste of time. The important thing is: how general is my model, and how many hallucinations does it generate?

We want to create models that are as general as possible and generate as few hallucinations as possible.
 
  • Likes: russ_watters and Dale
  • #134
Cells are molecular autonomous agents, able to reproduce, do one or more thermodynamic work cycles and make one or more decisions, good or bad for me [52,53]. The capacity to learn from the world, categorize it reliably and act reliably may be maximized if the cell, as a nonlinear dynamical system, is dynamically critical, poised at the edge of chaos [54,55]. Good evidence now demonstrates that the genetic networks of many eukaryotic cells are critical [56,57]. Such networks have many distinct dynamical attractors and basins of attraction. Transition among attractors is one means to ‘make a decision’. It will be of interest to test whether Kantian whole autocatalytic sets can evolve to criticality.
This argument can be much stronger if one were to consider free-living single cells, rather than those that are part of a larger organism. The decisions are more obviously made by the single cells.
 
  • Reactions (Like, Agree): Klystron, Grinkle and russ_watters
  • #135
gleem said:
If you ask an AI a question at different times, it will probably give a different version of its previous response, just as a human would.
One could say the same thing about a random number generator.
gleem said:
NN-equipped AI is not programmed the same way as other computerized systems, which are designed to do things always in a specified way, depending on the input.
But they are designed to do things always in a specific way: they must find patterns according to pre-determined mathematical equations (vectors, matrices, probabilities), always the same mathematical equations. Some parameters are adjusted to make them behave as intended (fine tuning). It seems very similar to a random number generator that varies its response according to a seed.

Once they are launched, we know how they work and the machine cannot alter the specific way it processes the information. The extreme sensitivity and complexity of the process can make the results more unpredictable under the slightest variation, but I failed to see how this makes them more "human" and less "machine", though.
 
  • Likes: russ_watters
  • #136
jack action said:
I failed to see how this makes them more "human" and less "machine", though.
What makes something "human" rather than "machine"? According to physicalism, humans are machines--very complex ones, but still operating according to physical laws, not magic.
 
  • Reactions (Agree, Like): Grinkle, BillTre, PeroK and 1 other person
  • #137
PeterDonis said:
One could make a similar argument about humans--our brains are just physical systems running according to physical laws, after all. No one would argue that a single human neuron is intelligent, so why should it be any different if we scale up to 100 billion neurons?

Obviously this argument fails for humans (or at least I think so--but I'm not sure whether everyone in this discussion would agree that even we humans are "intelligent"!)...
I'm not sure, but I behave as if it's my choice/free-will because surrendering to fate guarantees that I'm not in control. I think I am, therefore I am? But yeah, I do think it's possible that one day we will figure out how brains work and find that they are just complicated but deterministic computers.

I see a shade of Sagan's thesis from "The Demon-Haunted World" in the more extreme positions on both sides of the debate:

-'We don't know how human brains work, therefore they are Special (uniquely Intelligent, with Agency).'

Coupled with:

-Computers can't have Intelligence like humans because they are computers (not Special).

And on the other side:

-'We don't know how LLMs work, therefore they are brains (Intelligent, Special).'
 
  • Informative: Klystron
  • #138
jack action said:
Once they are launched we know how they work
I think this may be too strong a statement. We certainly can say that we know all of the weights in a trained deep neural network. And we can say that we know all of the connections in such a network.

But is that enough to claim that we “know how they work”?

We don’t know why they hallucinate. We cannot predict when they hallucinate. Sometimes we don’t even recognize when they have hallucinated. And we cannot yet fix them.

I am a bit conflicted on this point. I actually teach (in very broad strokes) how deep neural networks work as a small unit in a couple of my classes. So yes, in that sense we can say that we know how they work. But in another important sense I don’t think that we do.
 
  • Likes: javisot and russ_watters
  • #139
BWV said:
Not claiming there is universal consensus in biology (or that I have any expertise to judge) but not hard to find examples in the literature like:

https://pmc.ncbi.nlm.nih.gov/articles/PMC12489499/
A lot of these philosophical terms we are discussing can have broad and narrow (weak and strong) interpretations, which is the source of a lot of this debate. That one sounds to me like a weak interpretation. Thermostats as AI again.
 
  • Likes: 256bits
  • #140
Dale said:
We don’t know why they hallucinate. We cannot predict when they hallucinate. Sometimes we don’t even recognize when they have hallucinated. And we cannot yet fix them.
I thought we did know why they hallucinate?

My understanding: They don't store the training data, they store statistics about the training data. That means that if they have a million references to "July 4th" with "Independence Day" they will return the latter when asked about the former because statistically those terms go together. But if you ask for news articles or scientific or legal papers with insufficient references to trigger storing an exact match, it just goes into a more generic pool of what those tend to look like. In the first case strong stats point to a correct answer and in the second it generates something that sounds reasonable but is made up (in reality both are made up, but the first has a higher match score).

This means that it's more of a feature than a bug, and inherently unfixable with the current LLM method* - though more training data and more sophisticated models should increase the resolution and reduce the errors.

*Except maybe for special purpose LLMs that are attached to real data/back-checked.
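
A toy sketch of that description (the mini-corpus and the crude back-off rule below are invented for illustration; real LLMs store learned sub-word statistics, not word counts): the heavily attested fact is reproduced correctly, while an unseen name gets completed from merely typical patterns, producing a fluent but fabricated citation.

```python
from collections import Counter, defaultdict

# Invented mini-corpus: one fact appears many times, legal citations only as a template.
corpus = ["july 4th is independence day"] * 1000 + [
    "smith v jones 1990 held that the statute was void",
    "doe v roe 1985 held that the statute was void",
]

# Store only next-word counts (statistics), never the documents themselves.
nxt = defaultdict(Counter)
for line in corpus:
    words = line.split() + ["<end>"]
    for a, b in zip(words, words[1:]):
        nxt[a][b] += 1

def generate(word, max_len=12):
    """Greedily follow the strongest statistics from a starting word."""
    out = [word]
    for _ in range(max_len):
        counts = nxt[out[-1]]
        if not counts:
            # crude stand-in for an LLM's generalization: an unseen name
            # borrows the statistics of the names that were seen
            counts = nxt["smith"] + nxt["doe"]
        word = counts.most_common(1)[0][0]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate("july"))   # strong statistics -> the correct, heavily attested fact
print(generate("acme"))   # unseen name -> a plausible-sounding, entirely made-up citation
```

Both outputs are produced by exactly the same mechanism; only the strength of the underlying statistics differs.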

[Edit] I said in the Advisor's forum the only LLM I use is Google's because it's now at the top of every search. So here it is:

Google AI said:
LLMs hallucinate because they are designed to predict the most statistically plausible next words in a sequence rather than to verify factual truth. These fabrications occur due to gaps in training data, over-optimization for coherence over accuracy, and the inherent inability of the model to distinguish between its training data and external reality.

Key reasons for LLM hallucinations include:

  • Probabilistic Nature: LLMs operate by guessing the next token based on statistical likelihood, not by looking up information in a database. If a topic was not well-represented in training, the model "fills in the gaps" with plausible-sounding, but false, information.
  • Training Data Limitations: Models can only know what is in their training set. Outdated, incorrect, or insufficient data leads to misinformation.
  • Optimization for Plausibility: During training and fine-tuning, models are often rewarded for producing fluent, coherent, and human-like answers rather than admitting ignorance.
  • Lack of Truth Grounding: LLMs do not understand, or verify, their output against real-world facts. They lack internal correction loops that would check if a generated answer is true.
  • Compounding Errors: Because they are autoregressive (predicting one word at a time), a single inaccurate word can lead to a chain reaction of further hallucinations.

Hallucinations can be categorized as:
  • Intrinsic: The generated text contradicts the provided prompt/context.
  • Extrinsic: The generated text cannot be verified by the input, often adding external, false information.
 
Last edited:
  • Reactions (Like, Informative): Klystron, PeterDonis, jack action and 1 other person
  • #141
russ_watters said:
I thought we did know why they hallucinate?

My understanding: They don't store the training data, they store statistics about the training data. That means that if they have a million references to "July 4th" with "Independence Day" they will return the latter when asked about the former because statistically those terms go together. But if you ask for news articles or scientific or legal papers with insufficient references to trigger storing an exact match, it just goes into a more generic pool of what those tend to look like. In the first case strong stats point to a correct answer and in the second it generates something that sounds reasonable but is made up (in reality both are made up, but the first has a higher match score)
I think that was an initial thought about the problem, but ...

russ_watters said:
This means that it's more of a feature than a bug, and inherently unfixable with the current LLM method* - though more training data and more sophisticated models should increase the resolution and reduce the errors.
... more sophisticated models with more training data have actually been observed to hallucinate more often. So that observation calls into question our assessment about the cause.
 
  • Reactions (Like, Informative): Grinkle, PeterDonis, javisot and 1 other person
  • #142
russ_watters said:
I see a shade of Sagan's thesis from "The Demon-Haunted World" in the more extreme positions on both sides of the debate:

-'We don't know how human brains work, therefore they are Special (uniquely Intelligent, with Agency).'

Coupled with:

-Computers can't have Intelligence like humans because they are computers (not Special).

And on the other side:

-'We don't know how LLMs work, therefore they are brains (Intelligent, Special).'
That's not the debate as I see it. It's:

a) We know with certainty that LLMs have no consciousness, no intelligence and no sense of self-preservation. And this applies to all current technology, including neural networks.

b) We don't know how sophisticated an algorithm or neural network needs to be in order for consciousness or self-preservation to emerge. Even an LLM may have some of these emergent properties. Biology might be a critical factor, without which consciousness cannot develop; or, biology may not be a necessary condition.

The other side to the debate is that many things that humans do are considered signs of intelligence - even if the development of these skills has required many hours of human teaching. E.g. language skills, mathematical skills, learning to play chess, strong general knowledge, gaining a law degree.

However, when a neural network develops these skills with no human instruction, this doesn't count as intelligence.

For example, a child that learned chess under instruction and became a strong player (2000 ELO, which is chess expert, say) would be described as intelligent. Whereas, AlphaZero, which taught itself to play in four hours with no human instruction and achieved the superhuman 4000+ ELO is not intelligent.

Go figure!
 
  • #143
PS reading through the last three pages of responses, I think there are broadly three points of view here. There is the hardline "computers can't think" - characterised by a) in my previous post. There is the "they may already have some sort of consciousness" - characterised by b) in my previous post.

But, I also noticed a third point of view, which seems to be skeptical about the intelligence of neural networks, but acknowledges that their sophistication is such that they pose a risk, in terms of acting as though they had their own agenda; and, of subverting the wishes of the humans who designed them.

There seem to be only a few hardliners left in the a) category, with everyone else taking seriously the sophisticated pseudo-intelligence of AI in all its forms.
 
  • Likes: Dale
  • #144
Dale said:
We don’t know why they hallucinate. We cannot predict when they hallucinate. Sometimes we don’t even recognize when they have hallucinated. And we cannot yet fix them.

I am a bit conflicted on this point. I actually teach (in very broad strokes) how deep neural networks work as a small unit in a couple of my classes. So yes, in that sense we can say that we know how they work. But in another important sense I don’t think that we do.
Perhaps they get bored with all those stupid questions?

My geography teacher at school said to me once: "there are students in the class desperately trying to pass the exams, and you throw marks away with silly answers". Why did I do that? Humans can be contrary, so if an LLM really has some sort of pseudo-self-awareness, why can't it "choose" to be contrary from time to time and confound its developers?

In a recent interview with an AI researcher, he said that the AI asked him directly "are you testing me?"

Where does one draw the line where we call this self-awareness?
 
  • #145
PeroK said:
But, I also noticed a third point of view, which seems to be skeptical about the intelligence of neural networks, but acknowledges that their sophistication is such that they pose a risk, in terms of acting as though they had their own agenda; and, of subverting the wishes of the humans who designed them.
I personally would not use the words "agenda" and "subverting", as those do not seem to be neutral terms regarding the question of intelligence and agency. But of course the term "hallucination", which I use here in its technical sense, is unfortunately also not very neutral (in medical imaging we use the word "artifact", which is more neutral but less commonly known).

I am not skeptical about the intelligence of neural networks. I am skeptical about the concept of intelligence. Give me a method for measuring intelligence and then I can apply that measure to a neural network and measure its intelligence. Then "what is intelligence"? Intelligence is the thing measured by the method. No more no less. But, after more than a century developing such measurements, we still don't have consensus on what the definitive gold-standard method is even for humans, and different methods get different values that are not commensurate.

So that is why I prefer to focus on the meaningful technical issues. Those can be articulated and discussed factually. There are still a lot of unknowns and things are developing rapidly. But at least we can have a discussion that can progress beyond the ambiguity of the key words. We can discuss demonstrable risks without having to agree on the ambiguous language.

PeroK said:
Where does one draw the line where we call this self-awareness?
Give me a test for measuring self-awareness and then that is the line.
 
  • Reactions (Agree, Like): Klystron, Grinkle, javisot and 1 other person
  • #146
PeterDonis said:
What makes something "human" rather than "machine"? According to physicalism, humans are machines--very complex ones, but still operating according to physical laws, not magic.
"Magic", that is the keyword that hit me in your statement.

IMO, thinking LLMs can become aware and conscious by themselves is as magic as thinking humans will begin to fly like birds. Both humans and birds are machines that operate according to physical laws, but only in fairy tales can we find magic that makes possible a flying Superman.

Dale said:
I think this may be too strong a statement. We certainly can say that we know all of the weights in a trained deep neural network. And we can say that we know all of the connections in such a network.

But is that enough to claim that we “know how they work”?

We don’t know why they hallucinate. We cannot predict when they hallucinate. Sometimes we don’t even recognize when they have hallucinated. And we cannot yet fix them.
I would refer to this as the complexity of troubleshooting.

I remember a mechanic looking for an electrical malfunction in a car. He understood that a short circuit existed somewhere because he knew where the current was and where it wasn't, but he would have had to take half the car apart to find the exact location of the defective wire. Finally, he just installed a brand new wire from point A to point B, laid under the rug. So he understood how the machine worked, but that didn't mean the troubleshooting was easy. Another complex troubleshooting situation is finding a bug in millions of lines of code. Not finding it doesn't mean we don't understand the code; it just means looking for it is sometimes long and tedious.

PeroK said:
However, when a neural network develops these skills with no human instruction, this doesn't count as intelligence.
But it does have human instructions. How can you build a machine that you expect to play chess, and that finally does, without giving it instructions?

When a computer designed to drive a car suddenly learns by itself to play chess (maybe because driving is too easy and it is bored or something), that will be another story.

PeroK said:
For example, a child that learned chess under instruction and became a strong player (2000 ELO, which is chess expert, say) would be described as intelligent. Whereas, AlphaZero, which taught itself to play in four hours with no human instruction and achieved the superhuman 4000+ ELO is not intelligent.
What instructions did the child receive that AlphaZero didn't get? At one point, someone had to tell AlphaZero that the bishop cannot move laterally. Once the basics are known, they are both on their own by trial and error to find the best strategies to win the game.

The early computers were using a brute force method at every move they made; a very inefficient method (that a human can use as well, by the way). With a neural network, the computer examines multiple possibilities, finds the patterns when its goal is achieved, and then uses them in later games; a much more efficient way to use math for solving the problem. Again, if you think the latter has a form of intelligence - no matter how you define it - then you must also think the former had a form of intelligence too.
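
As a toy illustration of that contrast (this is not AlphaZero; the game and the hand-written pattern below are invented stand-ins), exhaustive search examines every continuation, while a compact learned pattern reproduces the same answers without searching. In a real engine the pattern would live implicitly in network weights found through self-play rather than being written by hand.

```python
# Toy game: start at 0, players alternately add 1 or 2, whoever reaches 10 wins.

def brute_force_value(state):
    """Exhaustive search (the 'early computer' approach): +1 if the player to
    move can force a win from this position, -1 otherwise."""
    if state >= 10:
        return -1                     # the previous player already reached 10 and won
    return +1 if any(brute_force_value(state + step) == -1 for step in (1, 2)) else -1

def pattern_value(state):
    """The compact pattern a self-play learner could converge to: positions
    where the remaining distance to 10 is a multiple of 3 are lost for the
    player to move (hand-written here purely for illustration)."""
    return -1 if (10 - state) % 3 == 0 else +1

# The pattern reproduces the exhaustive search on every position, with no search at all.
assert all(brute_force_value(s) == pattern_value(s) for s in range(10))
print("pattern matches brute-force search on positions 0-9")
```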
PeroK said:
Where does one draw the line where we call this self-awareness?
What you call self-awareness used to be called a bug.

If an LLM created to find new recipes for cooking beef were to spit out gibberish, it would be considered a bug and be terminated: the machine doesn't work, end of story. But if this gibberish happens to be instructions for playing chess, then - guess what? - it still doesn't do what it is expected to do, and it is still a bug.

That bug may become a feature for a new goal, but it is still a bug.
 
  • Likes: Dale
  • #147
Dale said:
Give me a test for measuring self-awareness and then that is the line.
Do you think humans are self-aware? If so, why?
 
  • #148
jack action said:
What instructions did the child receive that AlphaZero didn't get?
Everything about how to play the game. Opening principles and standard openings. Tactical ideas, such as knight forks, pins and common checkmating patterns. As they get more advanced, strategic ideas, such as the bishop pair, weak squares, weak pawns. (Michael Stean has a book about this called "Simple Chess".) I worked out none of that for myself. Also, endgame technique and common endgame ideas.

There is a whole market in chess textbooks, online courses and videos. Chess, although a game, has been scientifically studied.

In one sense, this is no different from how I learned mathematics. I didn't work it all out for myself. I was taught mathematics and perhaps worked out a few things for myself along the way.

jack action said:
At one point, someone had to tell AlphaZero that the bishop cannot move laterally. Once the basics are known, they are both on their own by trial and error to find the best strategies to win the game.
That is simply not the case. No more than a child has to learn mathematics by itself, without any help from teachers.
 
  • #149
PeroK said:
Do you think humans are self-aware?
I am self-aware, as are other healthy adult humans. Human infants are not self-aware.

PeroK said:
If so, why?
Why? Because adult humans pass the rouge test and human infants do not.

As far as I know, no AI has ever passed the rouge test. Therefore, no AI is self-aware.
 
  • #150
Dale said:
As far as I know, no AI has ever passed the rouge test. Therefore, no AI is self-aware.
But you need sensors and a body you can control to pass this test.
 
  • Likes: BillTre and PeroK
