Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #271
Grinkle said:
The more I consider @Dale 's approach, the more I like it. Stripping away all of the romance and drama around AI by simply talking about software products avoids all that
Yeah, but even I got sucked into the "intelligence" debate.
 
  • #272
There is a big flaw in the Turing test: it assumes that the judge is just an unaided human.

But what if the judge is using AI for assistance, or to do the job entirely? If AI is so capable, it should be able to detect AI. It should be able to detect the patterns coming from a neural network. There must be patterns in neural networks; otherwise, the whole concept of a neural network wouldn't work. And if one is tempted to say that a neural network could be intelligent enough to fool another, why wouldn't that neural network be the one used for AI detection?

Then the human using AI to detect AI is still the most intelligent in the room.

That is what is wrong with all the doomsday scenarios: they never consider that both "good" and "evil" sides can use the same tools.
 
  • #273
jack action said:
Then the human using AI to detect AI is still the most intelligent in the room.
You are conflating detection with intelligence here, no?

The Turing test can't be flawed in the manner you suggest because it is what it is by definition. It doesn't assume the judge is human; it's defined that way. As I read it, it's not defined to have anything to do with intelligence per se; that is our retrofit in this thread.

There may be many approaches to sussing out the AI in the Turing test, and they don't all have to do with intelligence. Style, for example: I speculate that an AI will be more prone than a human to including examples to justify its response, even if both responses agree with each other conceptually.

jack action said:
That is what is wrong with all the doomsday scenarios: they never consider that both "good" and "evil" sides can use the same tools.
In my own scenario of society self-dumbing down, that's actually the key part of the scenario.
 
  • #274
Grinkle said:
The Turing test can't be flawed in the manner you suggest because it is what it is by definition. It doesn't assume the judge is human; it's defined that way.
From Wikipedia [bold emphasis mine]:
The Turing test, [...], is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart.
Can the human evaluator use a tool to help him do the job? If he is nearsighted and both text transcripts look equally blurry to the point where he cannot distinguish them, has the machine passed the test? Or can the human evaluator wear glasses to help him see better? Another example: if two metal samples (one heat-treated, the other not) are indistinguishable to the naked eye but distinguishable under a microscope, are they considered the same, or is using a microscope considered cheating?

The problem with the Turing test is that everyone puts themselves as an acceptable human evaluator. But if a machine can reliably tell them apart, can we still say the machine can exhibit intelligent behaviour equivalent to that of a human?
 
  • #275
I think I am not understanding you properly - the below is to help me understand better.

jack action said:
Can the human evaluator use a tool to help him do the job?
Imo, yes. I speculate if one could ask Turing whether one could use an Encyclopedia Britannica (the Google of his day) during the evaluation, he would have shrugged and said something like "Whatever, sure, if you want, but that's not how I talk to humans, myself".

jack action said:
The problem with the Turing test is that everyone puts themselves as an acceptable human evaluator.
I am missing your point. Why is this a problem? I agree that everyone puts themselves as an acceptable human evaluator.

jack action said:
But if a machine can reliably tell them apart, can we still say the machine can exhibit intelligent behaviour equivalent to that of a human?
Maybe, and if a machine could do this without using some deterministic, trivial, style-based fingerprint in the discourse, I'd be very impressed with the machine. How are you connecting that with the Turing test? It's an entirely different test with an entirely different motivation, imo. It's an interesting test, also, imo.
 
  • #276
jack action said:
Can the human evaluator use a tool to help him do the job? If he is nearsighted and both text transcripts look equally blurry to the point where he cannot distinguish them, has the machine passed the test?
I think this is missing the forest for the trees.

The Turing Test does not place conditions on the tester (except that they are human). It can be assumed that the test's setup involves ideal conditions (ones that best facilitate an accurate test of the challenge itself - sans confounding factors).

It's not about whether the tester had a bad day, or dropped out of grade school, or whatever. It's just: given its best shot, can a human tell the difference?
 
  • Likes: Grinkle
  • #277
jack action said:
From Wikipedia [bold emphasis mine]:

Can the human evaluator use a tool to help him do the job? If he is nearsighted and both text transcripts look equally blurry to the point where he cannot distinguish them, has the machine passed the test? Or can the human evaluator wear glasses to help him see better? Another example: if two metal samples (one heat-treated, the other not) are indistinguishable to the naked eye but distinguishable under a microscope, are they considered the same, or is using a microscope considered cheating?

The problem with the Turing test is that everyone puts themselves as an acceptable human evaluator. But if a machine can reliably tell them apart, can we still say the machine can exhibit intelligent behaviour equivalent to that of a human?
The human judge represents the ability to know the reality of things, that's why the judge is not an AI.
 
  • #278
Grinkle said:
I am missing your point. Why is this a problem? I agree that everyone puts themselves as an acceptable human evaluator.
Well, anyone who wants to see intelligence in a machine can just say: "It is intelligent because it fooled me." Not a very scientific method in my opinion.
DaveC426913 said:
It's just: given its best shot, can a human tell the difference?
But can "its best shot" be done with a tool he made? What is the difference between those two statements:
  1. Can a human fly? No. What if he builds and uses an airplane to do so? Yes.
  2. Can a human tell a machine and a human apart in a conversation? No. What if he builds and uses an AI machine to do so? Yes.
The thing about the second statement is that, according to the Turing test, the answer determines the state of the machine evaluated: it either can or cannot exhibit intelligent behaviour equivalent to that of a human.

javisot said:
The human judge represents the ability to know the reality of things, that's why the judge is not an AI.
But if an AI that can exhibit intelligent behaviour equivalent to that of a human exists, doesn't it also mean that the same AI should be able to know the reality of things and be the evaluator in a Turing test? Therefore, I arrive at the paradox that if I use AI against itself and it sees the difference, then the machine does not pass the Turing test.

Or is the Turing test not really measuring intelligence, but just the ability to recognize patterns, one of the characteristics of intelligence?

That being said, why should a test developed in 1949 as a thought experiment by a guy who died in 1954, i.e. who had no clue what a modern computer does, be taken seriously by anyone? Again, no serious AI researchers do:
jack action said:
Mainstream AI researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research. Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: "AI researchers have devoted little attention to passing the Turing test".
Only doomsdayers use it to sell their point.
 
  • Likes: Dale
  • #279
jack action said:
Or is the Turing test not really measuring intelligence,
No, it is not measuring intelligence. It was not intended to measure intelligence - this is the crux of why you are getting so much push back. Can you find any reference to Turing saying he intended his test to measure intelligence?
 
  • Likes: russ_watters
  • #280
jack action said:
That being said, why should a test developed in 1949 as a thought experiment by a guy who died in 1954, i.e. who had no clue what a modern computer does, be taken seriously by anyone? Again, no serious AI researchers do:
Because the whole point is that it's not about what goes on on the inside; it's about results.

If it is so sophisticated that you can't tell from the outside whether it's a human or an AI, then what - really - is the difference?
 
  • #281
jack action said:
But if an AI that can exhibit intelligent behaviour equivalent to that of a human exists, doesn't it also mean that the same AI should be able to know the reality of things and be the evaluator in a Turing test? Therefore, I arrive at the paradox that if I use AI against itself and it sees the difference, then the machine does not pass the Turing test.
Yes, but the original test presentation omits that and directly assumes a human judge who knows the reality. I suppose you can replace the human judge with AGI.
 
  • #282
Intelligence tests measure our ability to solve intelligence tests, plain and simple.

Let's say we take an intelligence test and get a certain score. Let's say we're given feedback on what we did right and wrong on the test. Then we repeat the exact same test, and now our memory (having learned the answers) allows us to get a better score. Are we more intelligent?

Here I'd like to emphasize: if someone is able to provide a lot of correct information in conversation and is capable of winning a general quiz, that is specifically an exercise in memory, not exclusively in intelligence.

Personally, I understand human intelligence as the ability to optimally manage a certain quality and quantity of information. I mean, I don't believe intelligence is solely about having a lot of increasingly complex information stored, but rather the ability to manage that information.

The Turing test measures a machine's ability to generate information as complex and interconnected as that which an average human being can generate. (ChatGPT says: "A system can pass the Turing test without being intelligent, and can be intelligent without passing the Turing test.")
 
  • #283
javisot said:
Intelligence tests measure our ability to solve intelligence tests, plain and simple.

Let's say we take an intelligence test and get a certain score. Let's say we're given feedback on what we did right and wrong on the test. Then we repeat the exact same test, and now our memory (having learned the answers) allows us to get a better score. Are we more intelligent?
That's not really the way they work.

A well-designed intelligence test will not be one that you get to take a second time and have it be identical.

They are usually questions of a kind that you won't have encountered previously. They test your ability to lateralize and generalize - to apply general problem-solving skills to problems you have not seen before.

Example: If you see a 3D mapping puzzle that asks you which flat object can be folded into a given 3D shape, finding the solution once doesn't mean you know the answer to the next one.

Solving this, on the first test:

[attached image: a fold-the-flat-net-into-a-3D-shape puzzle]

doesn't mean you have already solved this, on the second:

[attached image: a different net-folding puzzle]

You will need to internalize the general principles. And that's the hallmark of intelligence.

javisot said:
I don't believe intelligence is solely about having a lot of increasingly complex information stored, but rather the ability to manage that information.
Specifically, to derive principles that can be applied to new problems, or better yet, a whole class of problems.
 
  • Likes: javisot
  • #284
DaveC426913 said:
That's not really the way they work.

A well-designed intelligence test will not be one that you get to take a second time and have it be identical.
Obviously, this isn't the way to conduct an intelligence test, for the reasons I mentioned. We're trying to measure intelligence, not just memory.

The only thing we can be sure about an intelligence test is that it measures our ability to solve intelligence tests, never intelligence in its entirety (which is a very common criticism).
 
  • Likes: BillTre
  • #285
javisot said:
Obviously, this isn't the way to conduct an intelligence test, for the reasons I mentioned. We're trying to measure intelligence, not just memory.
You did not read my post.

javisot said:
The only thing we can be sure about an intelligence test is that it measures our ability to solve intelligence tests, never intelligence in its entirety (which is a very common criticism).
I disagree. See my post.
 
  • #286
Grinkle said:
Can you find any reference to Turing saying he intended his test to measure intelligence?
Maybe not in literature, but the test was brought up in this thread as proof that AI is intelligent:
PeroK said:
And, my broader argument is that the more things AI can do and the more emergent characteristics it develops, the more things are deemed to be not an indicator of intelligence.

For example, LLMs can comfortably pass the Turing test. So, for the AI skeptics the Turing test is no longer valid.
PeroK said:
Once a system is sophisticated enough to pass the Turing test, then I think it's valid to start asking questions about it regarding intelligence and deception.
PeroK said:
I refuse to believe that Alan Turing overlooked this aspect of things. He must have imagined that the AI knew not to show off.

It would be ironic if we dismiss AI as intelligent precisely because it exhibits superhuman intelligence!
I must admit I worry about the direction this discussion is taking.

Just to be clear: I don't think AI is self-aware, conscious, or has intentions. If a definition of intelligence refers to these, then it is not intelligent either.

Regarding the subject of this thread, I don't think that chatbots exhibit behaviours that can’t be predicted by simply analyzing the sum of their parts, either.

The results of LLMs are based on very simple math (addition, multiplication, etc.) repeated billions of times. If you use exactly the same inputs, you get exactly the same results. Nothing about it is humanly impossible to do, except for the time it would take to do these calculations. The complexity and length of the process may make it harder to follow, but there is no magic.
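To make that concrete, here is a toy sketch (purely illustrative, nothing to do with any real model's code) of what "simple math repeated billions of times" looks like, and of the fact that fixed weights and fixed inputs always give the same output:

Python:
import numpy as np

# Toy "layer": a matrix multiply, a bias add, and a nonlinearity.
# Real LLMs just repeat variations of this arithmetic billions of times.
rng = np.random.default_rng(seed=0)   # fixed seed -> fixed "weights"
W = rng.normal(size=(4, 4))
b = rng.normal(size=4)

def layer(x):
    return np.maximum(0.0, W @ x + b)   # ReLU(Wx + b): only adds, multiplies, compares

x = np.array([1.0, 2.0, 3.0, 4.0])
out1 = layer(layer(x))   # "deep" just means stacking this arithmetic
out2 = layer(layer(x))
print(np.array_equal(out1, out2))   # True: same inputs, same weights, same result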

Some people in this discussion seem to be open to the idea that there is something more in LLMs, or at least that we are on the way to something more. Lots of discussions about intelligence, based on not much, mostly ignoring how AI researchers define intelligence:
  • "The computational part of the ability to achieve goals in the world."
  • "If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent."
Furthermore, when some people get excited about a book ("If Anyone Builds It, Everyone Dies") written by an autodidact who never even attended high school, speculating about an apocalyptic future, I'm worried. He wants researchers to design a Friendly AI :rolleyes:, which makes no physical sense. Here is an opinion about this doomsdayer:
Quoted at https://en.wikipedia.org/wiki/Eliezer_Yudkowsky#cite_note-:1-5:
But shutting it all down would call for draconian measures—perhaps even steps as extreme as those espoused by Yudkowsky, who recently wrote, in an editorial for Time, that we should "be willing to destroy a rogue datacenter by airstrike," even at the risk of sparking "a full nuclear exchange."
I still think LLMs are just dumb machines that need to be debugged and used appropriately, no need for a nuclear exchange or the like.
 
  • Likes: Grinkle
  • #287
jack action said:
I don't think that chatbots exhibit behaviours that can’t be predicted by simply analyzing the sum of their parts
Then why do they hallucinate? They hallucinate precisely because the developers cannot predict that behavior by analyzing the parts. The developers have the code, the entire training set, and the fully trained model. They have all the parts. With all of that, I think the claim that it is possible to predict AI behavior is doubtful at best.

jack action said:
The results of LLMs are based on very simple math (addition, multiplication, etc.) repeated billions of times.
You are missing one very important point here. Although the operations are simple, deep neural networks are highly non-linear. So even from a strict math operations standpoint, the total behavior is not due to a sum of parts.
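To illustrate the non-linearity with a deliberately trivial sketch (a single ReLU unit, not any actual network): superposition fails, so the output of the whole is not the sum of the outputs evaluated on the parts of the input.

Python:
import numpy as np

def relu_unit(x, w=np.array([1.0, -1.0]), bias=-0.5):
    # A single ReLU neuron: max(0, w.x + bias)
    return max(0.0, float(w @ x) + bias)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

print(relu_unit(a) + relu_unit(b))   # 0.5 + 0.0 = 0.5 (sum of the parts)
print(relu_unit(a + b))              # max(0, 1 - 1 - 0.5) = 0.0 (not the sum)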
 
  • Likes: javisot
  • #288
Dale said:
Then why do they hallucinate? They hallucinate precisely because the developers cannot predict that behavior by analyzing the parts.
I don't think it's true that "hallucination" can't be predicted based on knowing how LLMs work. They hallucinate because by design they only process the text in their training data and prompts, with nothing whatsoever in their algorithms that checks the text for correspondence with anything else. Even the term "hallucinate" is a misnomer, because it implies that the LLM is somehow "perceiving" something that's not there, instead of just executing an algorithm that, by design, produces text with no expectation of any reliable relationship to reality. In other words, this behavior is entirely predictable to anyone who understands what the LLM is, and is not, doing.

So I think that if the developers are seriously claiming that they cannot predict this behavior based on their knowledge of the code, the training set, and the model, then they are being either extremely obtuse or extremely disingenuous.
 
  • Informative: DESM
  • #289
PeterDonis said:
I don't think it's true that "hallucination" can't be predicted based on knowing how LLMs work.
I would challenge anyone to predict a specific hallucination given a trained deep neural network. I am not limiting it to LLMs, and I don't mean just a general "hallucinations will happen sometimes" but a specific "this hallucination will happen in this circumstance".
 
  • Agree/Like: Grinkle and javisot
  • #290
Dale said:
I would challenge anyone to predict a specific hallucination given a trained deep neural network.
Of course that's unrealistic--but then we humans can't even predict what specific bugs will be in code we wrote ourselves, with no LLMs anywhere in sight. So I don't see this as a significant distinction between LLMs and other kinds of computer programs.

The fact that the algorithm the LLM is running is encoded in a ginormous number of statistical weights in a nonlinear neural network might be a significant distinction--but then the unpredictability of specific behavior is simply due to...the ginormous number of parameters and the nonlinearity. Simple chaos theory leads us to expect any system with those properties to be unpredictable on some time scale. Nothing about LLMs specifically makes them any different in this respect from any other nonlinear system with a ginormous number of parameters.
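As a toy illustration of that last point (the textbook logistic map, obviously far simpler than any neural network), a deterministic nonlinear iteration with a tiny difference in its input ends up somewhere completely different after a few dozen steps:

Python:
# Logistic map x -> r*x*(1-x): the standard toy example of deterministic chaos.
r = 3.9
x1, x2 = 0.500000, 0.500001   # two nearly identical starting points

for _ in range(50):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)

print(abs(x1 - x2))   # after ~50 iterations the trajectories are macroscopically different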
 
  • Agree: Grinkle
  • #291
Dale said:
predict a specific hallucination
The discussions of hallucination that I have seen in the AI literature (of which I have not read very much so I might well be missing something) have not been about the inability to predict specific hallucinations. They have been about AI researchers apparently being surprised that hallucinations occur at all, and apparently being confused about why such a thing would happen.
 
  • Likes: javisot
  • #292
Dale said:
Then why do they hallucinate? They hallucinate precisely because the developers cannot predict that behavior by analyzing the parts. The developers have the code, the entire training set, and the fully trained model. They have all the parts. With all of that, I think the claim that it is possible to predict AI behavior is doubtful at best.
It is "unpredictable" because the bugs are harder to find. ChatGPT-4 is estimated to have roughly 1.8 trillion parameters. Good luck finding which combination of these parameters is causing the unwanted hallucination.
 
  • #293
An LLM is a formal system; we can use it to enumerate things, and we can make it work with the set of natural numbers. An interesting question I've seen several times is: is incompleteness applicable to LLMs?

Most likely, an LLM without hallucinations cannot be truly general.
 
  • Likes: Grinkle
  • #294
jack action said:
If you use exactly the same inputs, you get exactly the same results.

jack action said:
The complexity and length of the process may make it harder to follow, but there is no magic.

This discussion on hallucinations connected a dot for me that I really should have connected long ago.

I spent many years of my career working in EDA (electronic design automation), that is, software used to design ICs. Place-and-route software (an example of EDA functionality) is probably familiar to many who are reading this. My last real work in this area was twenty-some-odd years ago, so bear with me if I am dated vs. the current state of the art; I hope my analogy has validity even if that is the case.

Silicon utilization (just "utilization" for short) refers to how much of the silicon ends up containing "active area" (transistors) and "metals" (routing interconnect) vs. just unused wafer. The higher the utilization, the better the result, since the chip can be smaller. Whether or not a particular solution was optimal was a matter of endless conflict between the tool provider and the customer. The customer would want higher utilization; some would insist that entitlement was 100%, meaning they expected to end up with zero unused Si in the layout. Once one fixes the design rules, it is usually not hard to come up with simple un-routable scenarios to show that, in general, 100% utilization is not entitlement. It becomes much harder to show that in the general case 90% or 80% is not entitlement, if the placement algorithm in particular is smart enough. For those deeply familiar, I am skipping discussion of internal standard-cell utilization vs. external utilization and the resultant trade-offs. I'm sure I'm skipping lots of other things as well that are not coming to mind.

It's not efficient to try to predict in advance the outcome of the placement algorithm on a given test case; one simply runs it. If one doesn't like the outcome, one adjusts the weights in the cost function and runs it again. There is simply no value or sensible reason for a developer to do anything other than end-to-end testing to assess the performance of the software (edit: on a specific real-world test case). Discussions of why tool A achieves 80% utilization and tool B achieves 82% become very heuristic and opinion-based, partly because no one is allowed access to competitors' source code or end executables, and partly because one doesn't know for sure that adjusting the cost function in one's own tool wouldn't produce a better result for the test case under discussion.
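As a deliberately over-simplified sketch of what I mean by a weighted cost function (nothing like production PnR code; the layouts and numbers are made up): change the weights and a different layout "wins", and the only practical way to find out is to run it.

Python:
# Toy placement cost: a weighted sum of wirelength and congestion penalties.
# Illustrative only -- real place-and-route cost functions are far more elaborate.
def cost(placement, w_wire=1.0, w_congestion=0.5):
    wirelength = sum(abs(a - b) for a, b in placement["nets"])
    congestion = sum(max(0, n - placement["capacity"]) for n in placement["bin_usage"])
    return w_wire * wirelength + w_congestion * congestion

layout_a = {"nets": [(0, 2), (1, 2)], "bin_usage": [6, 5, 1], "capacity": 4}  # compact but congested
layout_b = {"nets": [(0, 5), (1, 6)], "bin_usage": [2, 2, 2], "capacity": 4}  # spread out, no congestion

print(cost(layout_a), cost(layout_b))                                       # 4.5 10.0 -> layout_a wins
print(cost(layout_a, w_congestion=5.0), cost(layout_b, w_congestion=5.0))   # 18.0 10.0 -> layout_b wins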

There was no magic or mystery to any of it. There was a frustrating amount of NP-hard solution assessment to argue optimality.

I speculate that the frequency of hallucination is much the same. A hallucination now seems to me analogous to the silicon white space left after a completed place and route. If I change the cost function, I can make this particular example better, but what have I done to the more general case? How many other scenarios have I perhaps made worse? The only practical way to assess that is end-to-end testing. It doesn't imply any fundamental non-determinism in the software.

Edit: @javisot I see you beat me to it as I was composing this! I read your post as along the same lines as mine.
 
  • Likes: jack action and javisot
  • #295
PeterDonis said:
Of course that's unrealistic--but then we humans can't even predict what specific bugs will be in code we wrote ourselves, with no LLMs anywhere in sight. So I don't see this as a significant distinction between LLMs and other kinds of computer programs.
In ordinary code when a bug arises it isn't because we cannot predict the behavior, it is because we did not predict the behavior. There is definitely a distinction between cannot and did not.

The distinction goes further. With an ordinary software bug we can predict it, but simply did not, so we released software with a bug. However, after the bug is identified, then we can also go back and analyze the behavior and determine which part caused the bug. In a deep neural network, not only can we not predict a specific hallucination, we also cannot analyze it and determine where it came from. We cannot identify a line of code, an entry in the training set, or a trained network parameter that causes the hallucination. Not only are deep neural networks unpredictable, they are also unanalyzable. They cannot be treated as the sum of their parts either prospectively or retrospectively.

PeterDonis said:
Simple chaos theory leads us to expect any system with those properties to be unpredictable on some time scale. Nothing about LLMs specifically makes them any different in this respect from any other nonlinear system with a ginormous number of parameters.
Agreed. I would make the same statement about, for example, the weather. I am not attributing any intelligence or mysticism to a deep neural network (nor the weather), just recognizing the fact that it is not possible to treat them as simply the sum of their parts. Neither in terms of predicting their behavior nor in terms of analyzing their behavior (as you said, over the chaotic time scale).
 
  • Likes: PeterDonis, russ_watters, gleem and 1 other person
  • #296
Dale said:
With an ordinary software bug
I am now questioning whether it's productive (from the software developer's perspective) to call a hallucination a bug. Certainly it's a bug from the perspective of the user experience. White space in silicon is not thought of by the developer as a bug per se, because zero white space is not an entitlement performance possibility of the PnR software. The solution that yields the best performance in terms of utilization wins in the marketplace, and the investment in improving performance is ongoing, but that is different from identifying and fixing bugs.

If my analogy in how LLM software works to how PnR software works is invalid, then so is what I just said, of course.
 
  • #297
So what would I call it if not a bug? I suppose it's an undesirable but unavoidable trade-off inherent in the architecture. Using a cost function to assess the goodness of the output as it's being constructed is nicely general, but optimality is hard to determine, and beating an algorithm that has been tuned to a specific real-world test case is usually not possible.
 
  • #298
Dale said:
With an ordinary software bug we can predict it, but simply did not, so we released software with a bug. However, after the bug is identified, then we can also go back and analyze the behavior and determine which part caused the bug. In a deep neural network, not only can we not predict a specific hallucination, we also cannot analyze it and determine where it came from. We cannot identify a line of code, an entry in the training set, or a trained network parameter that causes the hallucination. Not only are deep neural networks unpredictable, they are also unanalyzable. They cannot be treated as the sum of their parts either prospectively or retrospectively.
With ordinary software, a bug can also be a question of quality control: we haven't analyzed what a user might input into a program. For example, a user enters "0" for their age, and the programmer never considered someone doing so. The result is that a division by zero happens somewhere, causing an error. With a neural network, these are the only bugs possible, and the inputs to test for quality control can be counted in the billions, probably more.
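A minimal hypothetical snippet of that kind of quality-control bug, just to make the point concrete:

Python:
def average_spend_per_year(total_spend: float, age: int) -> float:
    # The programmer never considered that a user might enter age == 0,
    # so this input path was never tested and the division blows up.
    return total_spend / age

print(average_spend_per_year(12000.0, 30))  # works as intended
print(average_spend_per_year(12000.0, 0))   # ZeroDivisionError -- the "bug" is an untested input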

The proof that these bugs are predictable is that once something unwanted happens - a racist comment, for example - they can adjust the parameters to make sure it doesn't happen again. With hallucinations, it is much more difficult, because it's much harder to identify what in the input triggers them, since nothing about them makes sense to us. Still, they can adjust the parameters to produce fewer hallucinations (proving they know why and how), but sometimes at the expense of losing other desirable outcomes.

Dale said:
I would make the same statement about, for example, the weather. I am not attributing any intelligence or mysticism to a deep neural network (nor the weather), just recognizing the fact that it is simply not possible to treat them as simply the sum of their parts. Neither in terms of predicting their behavior nor in terms of analyzing their behavior (as you said, over the chaotic time scale).
I disagree with comparing analyzing a neural network to analyzing the weather.

The weather is unpredictable because the inputs are not fixed in time. Even if I knew every physical property of every air molecule at t=0, a butterfly flaps its wings at t=1, and all my predictions can become worthless.

With a neural network, once the inputs are set, you launch the calculations, and you get a result. Repeat the calculations with the same inputs, and you will get the same results. Of course, changing just a single input may provide a totally different outcome, but still repeatable.
 
  • #299
jack action said:
With hallucinations, it is much more difficult

A hallucination is an example of an undesirable output, as is a racist comment. I'm not sure it's easier to identify the source of a racist comment than it is to identify the source of a hallucination. Maybe hallucinations are less thematic to our minds - I don't know that they are more of a corner-case output than a racist comment as far as LLM implementation goes, though. I agree it makes intuitive sense that finding inputs that can cause a racist output should be easier, and that if you find those inputs it should be possible to deal with the unwanted output, but maybe that's just my human chauvinism at work again?
 
  • #300
jack action said:
but sometimes at the expense of losing other desirable outcomes.
And this is not trivial. One indication that we don't know how to predict and define the structure of hallucinations is that the superficial changes we make simply shift the problem. Removing hallucinations from one place and putting them in another doesn't reduce the number of hallucinations. Many hallucinations, after in-depth analysis, end up being categorized as simple errors that could be corrected, or even predicted. The models that don't hallucinate, or hallucinate very little, are those that are sufficiently limited.
 
  • Informative: Grinkle
