Is AI Overhyped?

The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #241
russ_watters said:
It's very much like an earth-sized asteroid in that way. You're of course free to live in fear of it, but it's pointless to try to defend against it.
If we were really talking about asteroids or other background/existing high-risk scenarios where we do not start out being in control, yes. But here we are talking about AI technology, where we humans pace new technology forward, presumably with the expectation that we should remain in control of any introduced risks. And in this regard my question (fueled by current world trends) is: how do we stay in control if one of the risk-increasing factors seems to be loss of effective risk control, i.e. the issue that, at all times along the path towards some of the worst-case scenarios, those with the actual power to mitigate risks are never going to mitigate this particular risk, for reasons that seem acceptable to them at that point. I accept that there are people here who, for reasons I will probably never fully get, find such questions silly, irrelevant, or outside what they feel they can constructively participate in, which is all fine, but the question still stands and is to me as relevant as ever.

I have no illusion that a discussion on PF is going to change the world, but I still have the naive hope we can have a constructive discussion about it. The reason for this, I think, is that I, for one, would really like to hear a good technical argument for why I don't have to worry about the worst-case scenarios, but so far all I have heard are the usual risk brush-off arguments along the lines of "the scenarios are all wrong and will never happen" or "it's too complex to think about, anything can happen, so ignore it until we have a clear and present danger". If people are aware of a scientific/technical reason why a class of scenarios, or even a specific scenario, will not happen, or why the consequences are guaranteed to be much less severe, then I would love to hear it.
 
  • #242
jack action said:
The most notorious "so-called experts" I can think of are the ones claiming to know what aliens' intentions could be:
Who are they, and who assessed them and called them 'expert quality'?
Perhaps it's a bit late to define an expert?
 
  • #243
Filip Larsen said:
I, for one, would really like to hear a good technical argument for why I don't have to worry about the worst-case scenarios,
To have a "good technical argument", you would first need to provide a technical description of the said worst-case scenario.

You cannot even describe what ASI will look like, except "it will be smarter than humans" and "it may want to destroy us". The only kind of argument anyone can come up with against such a vague description would be along the lines of, "Humans will have mastered good ASI machines to fight back and destroy the bad ASI machines."

But then you could add, "But if the solution from the good ASI is to build an even more efficient ASI to destroy the bad ASI, and then that super-ASI turns against humankind as well, what will we do?" It never ends.

It is impossible to raise a technical argument in a discussion about something that not only doesn't exist, but we still struggle to imagine how it would work.

From my point of view, the most technical arguments you can obtain for your ill-defined worst-case scenarios are the Three Laws of Robotics. And if you think I'm joking, 15 years ago, some experts had already studied Asimov's rules to define the Five Ethical Rules for Robotics. The CEO of Microsoft did something similar 5 years later. I fail to see how you can get more technical.
 
  • Like
Likes russ_watters
  • #244
DaveC426913 said:
Us, programmers: "It's 1990. There's no danger. There's no way any of our code will still be in use ten years from now."

heh. Ahahahahahaha!
 
  • #245
jack action said:
It is impossible to raise a technical argument in a discussion about something that not only doesn't exist, but we still struggle to imagine how it would work.
I disagree.

Companies employ strategic risk management all the time to navigate business risks, and some of those risks are surely associated with new, hyped technology or other similar incoming changes characterized by a lot of unknowns; they wouldn't do this if it were impossible to manage risks in situations with many unknowns. Perhaps the keyword to stress here is "strategic thinking", i.e. the ability to analyze and suggest measures that navigate towards "good" and away from "bad" without knowing in advance exactly how each tactical situation will play out.

And when I say "technical arguments" I mean arguments and counter-arguments that point towards mechanisms that are known, i.e. physical limits, human psychology and behavior, dynamics in competitive markets, etc. This is also the type of argument that some of the worst-case scenarios employ, so useful counter-arguments "only" need to address things at this level.

jack action said:
To have a "good technical argument", you would first need to provide a technical description of the said worst-case scenario.
I agree that the constructive discussion I naively seek to spur in existing threads has so far failed to materialize on PF, but perhaps it's worth a shot to aim for a specific scenario in a separate thread. Or maybe PF just isn't the right place for this sort of discussion.
 
  • #246
Filip Larsen said:
I agree that the constructive discussion I naively seek to spur in existing threads has so far failed to materialize on PF, but perhaps it's worth a shot to aim for a specific scenario in a separate thread. Or maybe PF just isn't the right place for this sort of discussion.

Check out post #123, where I gave a link to a scenario developed by the research group AI Futures Project. They also discuss their methodology for developing this scenario.
 
  • Like
Likes Filip Larsen
  • #247
gleem said:
I gave a link to a scenario developed by the research group AI Futures Project.
Nice story. If this were even the slightest bit close to the truth, it would already be too late to do anything, so why bother?

From the summary:
Millions of ASIs will rapidly execute tasks beyond human comprehension.
This will never happen. To say that humans will blindly use drugs, cures, machines, etc., that they have no clue how they work, is insanity; in less than 2 years, no less. Do you see yourself, 2 years from now, injecting a new drug into your body, say to cure cancer, that nobody understands how it works, with experts relying only on the stamp "made by ASI" and the fact that it has cured cancer in every patient so far?

A lot of people are afraid of getting vaccinated by medical experts today; imagine being vaccinated based on a machine's recommendation! Plus, our natural curiosity will prevent this: we have to know, to understand.

Furthermore, this utopia is based on our predisposition to think that Nature can be improved. But most "improvements" we make usually break the balance, and something else breaks somewhere else. It is difficult to imagine that a superintelligence is the solution to this problem. Superintelligence might just laugh at our naivety.

In our AI goals forecast we discuss how the difficulty of supervising ASIs might lead to their goals being incompatible with human flourishing.
This one is a very pessimistic guess.

So let's assume our utopia is real. Nature can be improved, and a superintelligence can achieve that. A superintelligence that we cannot even imagine with our current intelligence level. Yet, we estimate what the superintelligence's goals will be ... based on our current human (flawed?) intelligence. Is this a realistic scenario? Or is it equally possible - even more probable - that our utopian superintelligence will have the solutions to satisfy everybody? Otherwise, what makes this intelligence "super"?

If an individual or small group aligns ASIs to their goals, this could grant them control over humanity’s future.
So, here, we have the other very pessimistic scenario. Some humans can take control of ASI, and, of course, they will be bad people doing bad things to humanity. Why on Earth would anyone want the worst for their people? Especially when seconded by a superintelligence that has the solutions to all problems.

https://ai-2027.com/slowdown said:
Sometime around 2030, [...]

[...]

The rockets start launching. People terraform and settle the solar system, and prepare to go beyond. [...]
This is delusional. There is no way we will terraform and settle the solar system by 2030, with superintelligence or not. It is already very hard to imagine that Mars can be terraformed; beyond that is pure fantasy, especially within a few years. (It can take up to 6 years just to reach Jupiter, the next planet after Mars.)

There were clearly no experts in these domains weighing in on this scenario. This is the hype. The hype coming from ASI "experts", who believe so much in the capabilities of their future ASI that they oversell it beyond reality.

And this is why I cannot consider these "technical arguments": they do not seriously correspond to the definition of mechanisms that are known, like physical limits or human psychology and behavior:
Filip Larsen said:
arguments and counter-arguments that point towards mechanisms that are known, i.e. physical limits, human psychology and behavior, dynamics in competitive markets, etc.
 
  • #248
sbrothy said:
"efficiency" vs "efficacy".
The context of use would resolve any confusion.

Now back on track.
jack action said:
, that nobody understands how it works,
Very few people have actually 'understood' anything. I have a feeling that, in time, there will be perceived advantages and perceived perils as a result of AI. They can't be predicted accurately. Isn't that the same as with all 'advances' in science and technology?

The latest clear and present peril on the menu is what social media are doing to us. A huge casualty is the reduction in attention span for many, if not most, users. That is truly scary. Politicians welcomed it with open arms because they could see money in it. Those risks have not been addressed by the decision makers, and nor will the risks of AI be.
 
  • #249
sophiecentaur said:
The latest clear and present peril on the menu is what social media are doing to us. A huge casualty is the reduction in attention span for many, if not most, users
True. Although I'd say the even bigger dangers are:
1. The misuse of social media as news sources
2. The plethora of sources, resulting in individuals choosing ever more focused and biased sources, thereby making it much easier to ignore inconvenient news.

The sheer depth and breadth of the availability of media counterintuitively results in a narrower and less-informed audience, as well as encouraging audience polarization.

"A man hears what he wants to hear and disregards the rest."
 
  • Like
Likes PeroK and sophiecentaur
  • #250
jack action said:
Some humans can take control of ASI, and, of course, they will be bad people doing bad things to humanity. Why on Earth would anyone want the worst for their people?
What some see as good for their people does not seem all that good; consider Mao Zedong, Joseph Stalin, Pol Pot, Hibatullah Akhundzada (Taliban), or, currently, Vladimir Putin.

OK, we might not believe that AI will be the demise of humanity, but we must believe it will have a profound effect. We are already seeing the effect it is having on education, i.e., letting it do some of the thinking for us; not that many are doing all that much thinking. What do you say to your kids who, smartphone in hand, ask why they should go to school or why they have to learn this or that?

Yikes, I just had a window open up about AI controversy while writing this post. I tried to expand it to read more but it closed. Is AI watching what I am writing? Has anybody had a similar experience?
 
  • Like
Likes PeroK and sbrothy
  • #251
gleem said:
What some see as good for their people does not seem all that good; consider Mao Zedong, Joseph Stalin, Pol Pot, Hibatullah Akhundzada (Taliban), or, currently, Vladimir Putin.

OK, we might not believe that AI will be the demise of humanity, but we must believe it will have a profound effect. We are already seeing the effect it is having on education, i.e., letting it do some of the thinking for us; not that many are doing all that much thinking. What do you say to your kids who, smartphone in hand, ask why they should go to school or why they have to learn this or that?

Yikes, I just had a window open up about AI controversy while writing this post. I tried to expand it to read more but it closed. Is AI watching what I am writing? Has anybody had a similar experience?
And thus the paranoia starts! :smile:
 
  • #252
I believe there was previously a rule regarding posting threads that had AI influence. I am dyslexic, and using AI helps me a lot. I believe a rule explicitly denying threads with AI influence is unrealistic in today's environment; most experimental physics has AI involved. So may I please ask: is it still banned? I'm not referring to something that is entirely AI-produced, but to something where AI has been used as an extremely advanced spell checker.
 
  • #253
pete94857 said:
I believe there was previously a rule regarding posting threads that had AI influence. I am dyslexic, and using AI helps me a lot. I believe a rule explicitly denying threads with AI influence is unrealistic in today's environment; most experimental physics has AI involved. So may I please ask: is it still banned? I'm not referring to something that is entirely AI-produced, but to something where AI has been used as an extremely advanced spell checker.
I'm not a moderator but I cannot imagine a scenario where taking advantage of "AI" in the way you describe is against the rules. I could be wrong but if so we'd need a proper moderator in here to settle the matter.

@berkeman you're my go to in these matters. What say you?
 
  • #254
sbrothy said:
I'm not a moderator but I cannot imagine a scenario where taking advantage of "AI" in the way you describe is against the rules. I could be wrong but if so we'd need a proper moderator in here to settle the matter.

@berkeman you're my go to in these matters. What say you?
I've just re-read the rules and guidelines; there's nothing presently in them expressing any rule against it. They do seem to have changed since the last time I checked them, so that's good. It just makes things more efficient: I can run something by the AI, it can point out or correct any mistakes in my formulas etc., and then I can post it here. Then people here can concentrate on my query rather than on how it's written. I'm an older person learning as I go rather than in an academic setting. To be fair, it's only because of sites like this and AI help that I'm able to do it.
 
  • #255
gleem said:
OK, we might not believe that AI will be the demise of humanity, but we must believe it will have a profound effect. We are already seeing the effect it is having on education
The algorithms used for chess playing, social media, the stock market, x-ray reading, and others are all part of machine learning, a subset of the encompassing AI arena, and are used wholeheartedly.
The algorithms for social media drew some grumbling, as did the 'driverless' car scenario.

Not until the LLMs came out did the possibility of AGI become a more realistic 'yikes' scenario, with machines being able to do anything a human can do, in contrast to the previous 'wow' factor.
The predicted trillions of dollars of investment in a technology (ANI) that is supposed to be a helper, rather than the all-knowing guru that is hyped, will have to be paid back somehow, either as profits for some or through bankruptcy for others.
 
  • #256
pete94857 said:
I believe there was previously a rule regarding posting threads that had AI influence. I am dyslexic, and using AI helps me a lot. I believe a rule explicitly denying threads with AI influence is unrealistic in today's environment; most experimental physics has AI involved. So may I please ask: is it still banned? I'm not referring to something that is entirely AI-produced, but to something where AI has been used as an extremely advanced spell checker.
Moderators have the final say, of course, but in my view, using AI as a tool can't - and shouldn't - practically be legislated against via the rules. As you point out, it can be used as tantamount to a glorified spell checker.

Getting advice from a third-party source and implementing that advice yourself is fine; it should not matter whether that source is an AI or your aunt Betty, the retired English teacher. As long as it's still your post, your words, your ownership.

My personal criterion is that posters take personal responsibility for every word they write.

(Point of order: is this sidebar sufficiently important to break off to its own 'site policy' thread?)
 
  • Like
Likes pete94857 and sbrothy
  • #257
pete94857 said:
I've just re-read the rules and guidelines; there's nothing presently in them expressing any rule against it. They do seem to have changed since the last time I checked them, so that's good. It just makes things more efficient: I can run something by the AI, it can point out or correct any mistakes in my formulas etc., and then I can post it here. Then people here can concentrate on my query rather than on how it's written. I'm an older person learning as I go rather than in an academic setting. To be fair, it's only because of sites like this and AI help that I'm able to do it.
If you limit yourself to that use, which I guess is a kind of gentleman's agreement (to follow the rules, yes, you read that correctly), then I, too, see no problem. The problem is that this forum doesn't work like, e.g., chess.com, where AI (I may not be completely up to date) is used the other way around: to reveal "players" trying to defraud the system and their "fellow" players for some measly Elo rating points by using an engine like Stockfish - players who get instantly banned as their rating suddenly becomes 500-1000 points higher!

Now my current nick on chess.com is "sbrothy23". One could wonder why it isn't simply "sbrothy". The explanation is that I got banned for using a program I wrote myself. As always, the cheating isn't worth it, but the challenge of writing something capable of it is a pet project of mine. Still: o:)
 
  • #258
gleem said:
What do you say to your kids who, smartphone in hand, ask why they should go to school or why they have to learn this or that?
If you feel you would "have to" learn, then you shouldn't.
If you feel you would "like to" learn, then you should.

It should be like that, AI or not.

I never understood why school in the Western World was always presented as a chore, something that needs a reward, while in Third-World countries, children were ecstatic to walk miles to go to school, where they would feel pride and happiness just holding a pencil.

I liked learning in school. It didn't matter if the guy next to me could do the job for me; I wanted to know how it worked and how to do it myself. For me, AI changes nothing about that.
 
  • Agree
  • Like
Likes 256bits and gleem
  • #259
jack action said:
I never understood why school in the Western World was always presented as a chore, something that needs a reward, while in Third-World countries, children were ecstatic to walk miles to go to school, where they would feel pride and happiness just holding a pencil.
I think the answer is that, in the Western world, they know they will still get by without a strong education. We have lots of safety nets - i.e. there's little in the way of privations to escape from. Whereas, in many third world countries, they know they will not escape from poverty unless they work hard, and they have fewer opportunities to do so.
 
  • Like
Likes Hornbein and 256bits
  • #260
DaveC426913 said:
I think the answer is that, in the Western world, they know they will still get by without a strong education. We have lots of safety nets - i.e. there's little in the way of privations to escape from. Whereas, in many third world countries, they know they will not escape from poverty unless they work hard, and they have fewer opportunities to do so.
That's a pretty cynical view, but then again you're probably as old as I feel. :smile:

I loved going to school and remember competing with a classmate (3rd grade?) over who could work through the student answer books fastest. There was one for each grade, and we were way beyond 3rd grade. I have similar good memories of everything except PE - that was, until the old teacher got replaced by a fresh one from the academy. That made a difference.
 
  • #261
DaveC426913 said:
I think the answer is that, in the Western world, they know they will still get by without a strong education. We have lots of safety nets - i.e. there's little in the way of privations to escape from. Whereas, in many third world countries, they know they will not escape from poverty unless they work hard, and they have fewer opportunities to do so.
Definitely a lot of things change as societies get richer, including entitlement.

Peer pressure and parent coaching play a part.
Or is sportiness prized in the Western world, while getting good grades is something only nerds do?
Has that changed?
 
  • #262
sbrothy said:
That's a pretty cynical view, but then again you're probably as old as I feel. :smile:
Well, it wasn't my premise. It was jack that suggested Westerners see education as a chore.
Presuming that premise is true, I'm just suggesting why they might feel that way.
 
  • #263
256bits said:
Or is sportiness prized in the Western world, while getting good grades is something only nerds do?
Has that changed?
I can't say.
Yes, I was a skinny, short, geek of a kid. Yes, I had a tough time in Phys Ed. Yes, I grew up a nerd. Other than that, I can't really speak outside the tropes seen in TV and films.

(I think we may be a little off-topic here.)
 
  • #264
MIT conducted a study of AI's effect on brain activity, using EEG while students wrote essays with and without AI assistance. Results showed a significant decrease in brain activity for those using AI. Not surprisingly, those using AI wrote essays that were similar to one another, compared with the non-AI users.

Actual study: https://arxiv.org/pdf/2506.08872v1
 
  • #265
I guess.
Skinny, check. A bit nerdy, check. Top marks in math, sciences, shop, and drafting, but messy, check. Studying, not so much - could remember from day 1 so didn't need to, check. Phys ed, check: could swing around on the bars, and in volleyball had a special floater and a curved serve. Extracurricular: on a farm, rode the bus an hour each way. Someone stuck gum in my hair, scarred for life. Also scarred for life in grade 3 by the big kid in the one-room country school when he took my bike and did wheelies. Got stuck in net for soccer and took a few to the face.
-----------------------

These two, or three including the host, think AI is hyped out.
Two immigrant looking and sounding nerds.

Two Computer Scientists Debunk A.I. Hype with Arvind Narayanan and Sayash Kapoor - 281​

8 months ago, so still quite recent.

The 2 guys wrote the book AI Snake Oil: https://en.wikipedia.org/wiki/AI_Snake_Oil
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference is a 2024 non-fiction book written by scholars Arvind Narayanan and Sayash Kapoor. Their text works to debunk hype surrounding Artificial intelligence (AI), and attempts to outline the potential positives and negatives that come with different modes of the technology....

I have not read the book, and the video is 1h15 if interested.
 
  • Like
Likes jack action
  • #266
256bits said:
These two, or three including the host, think AI is hyped out.
Two immigrant looking and sounding nerds.

Two Computer Scientists Debunk A.I. Hype with Arvind Narayanan and Sayash Kapoor - 281​

Thank you for this. For people pressed for time, the talk about AGI [not] destroying the world begins at 40:55.
 
  • #267
256bits said:
8 months ago, so still quite recent.
In any other field, I'd consider this recent. But the rate at which AI is advancing - both in sophistication and in application - is outstripping anything we've ever seen. 8 months might as well be 8 years.

Even the development of the atomic bomb didn't have as many industry experts come out to officially put their John Hancocks on a petition to have its research put on ice until we could assess the ramifications.
 
  • #268
DaveC426913 said:
Even the development of the atomic bomb didn't have as many industry experts come out to officially put their John Hancocks on a petition to have its research put on ice until we could assess the ramifications.
The anti-hype is about the false promises of AI, and misapplication of AI as a result, especially regarding LLMs.

Example 1: Using an AI LLM for a suicide hotline. The app gave advice such as (paraphrasing) "Suck it up" and "Make a decision".
Example 2: An app for hiring potential employees using facial/voice recognition, scanning for whatever the designers considered as criteria, in their own opinion and from their limited experience.
Example 3: Falsely saying it's AI, but using actual people to do the work, since the people end up being cheaper.
Example 4: Fully autonomous vehicles do not exist, except for limited scenarios; exuberant designers found it harder to accomplish than they thought.
Example 5: AI subtitling and voice-over on video, riddled with mistakes.
...

A suitable application for AI, as mentioned in the video, is bird watching: using the song or a picture of a bird to identify it. No one is hurt, and if a misidentification occurs the error is not all that grave. This can be extended to monitoring and identification applications in industry and the home, as has been done without serious repercussions.
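To see why this is such a forgiving use case, here is a toy sketch of song-based identification in Python. It just matches log-spectrogram fingerprints by cosine similarity; real tools such as BirdNET use trained deep networks, and every species and signal below is synthetic, made up purely for illustration.

```python
# Toy bird-song identifier: nearest-neighbour matching on log-spectrogram
# fingerprints. Purely illustrative; real systems use trained deep networks.
import numpy as np
from scipy.signal import spectrogram

RATE = 22050  # sample rate in Hz

def chirp(f0, f1, seconds=1.0):
    """Synthesise a frequency sweep standing in for a recorded bird call."""
    t = np.linspace(0, seconds, int(RATE * seconds), endpoint=False)
    return np.sin(2 * np.pi * (f0 + (f1 - f0) * t / seconds) * t)

def fingerprint(signal):
    """Log-power spectrogram averaged over time -> unit-length vector."""
    _, _, sxx = spectrogram(signal, fs=RATE, nperseg=512)
    vec = np.log1p(sxx).mean(axis=1)
    return vec / np.linalg.norm(vec)

# Reference "calls" for two made-up species (stands in for training data).
references = {
    "species_A": fingerprint(chirp(2000, 4000)),
    "species_B": fingerprint(chirp(5000, 7000)),
}

def identify(recording):
    """Return the species whose reference call is most similar (cosine)."""
    vec = fingerprint(recording)
    return max(references, key=lambda name: float(vec @ references[name]))

# A noisy "field recording" of species A should still match species A,
# and if it doesn't, the cost of the error is low - the point made above.
noise = 0.3 * np.random.default_rng(0).standard_normal(int(RATE * 1.0))
print(identify(chirp(2000, 4000) + noise))  # -> species_A
```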
 
  • #269
256bits said:
The 2 guys wrote the book AI Snake Oil: https://en.wikipedia.org/wiki/AI_Snake_Oil
From the summary at the Wikipedia link, it seems they are primarily trying to address the AI hype as it is pushed by AI companies and researchers, and that they want to aim for a sensible middle ground on AI positions, shaped more by everyone and less by the involved AI stakeholders. This is probably a good general recipe for addressing hype.

However, I do not (from the summary) spot any actual arguments pointing towards AGI, and potentially ASI, not being possible, only that they think it will likely take a fair bit longer than the fastest estimates say (which typically already start with the condition "it may happen as soon as .."). They only seem to argue that, based on the history of hyped technology, we nearly always overestimate how fast things will go when "on the hype curve". That is not a wrong argument regarding probability estimation, but it says nothing about what is possible at all, or about the inherent potential for significant misuse of AI, given that good and bad uses of AI lie so close to each other.

I don't think anyone would contest that they may be right on the money that the most likely near-future path towards AGI will hit roadblock after roadblock, and that it is not entirely unlikely that the current "architecture" or tech stack at some point hits a dead end or plateaus in a way that makes AGI not really a thing, or at least in a way that disallows acceleration into ASI. But the possibility of dead-ending or plateauing does not in itself exclude the possibility of the opposite. We know that human-level intelligence is quite possible, and we are not aware of any reason why it should not be possible to replicate it artificially. A dead end toward AGI with the current technology stack is surely going to put a brake on things, but it doesn't in itself change the fundamental challenge associated with the drive towards AGI.

Then there is the interview, in which the host clearly argues (in the 5-minute part I saw) from a strawman position that doomers are a cult of nutcases promoting sci-fi-movie apocalyptic scenarios, with a lot of expletive language sprinkled in, signalling just what kind of "discussion" he intended it to be. To their credit, Kapoor and Narayanan did seem to keep a neutral cool despite the host trying to get them to say something provocative (as I guess every interview host would like to hear).

Again, I can only reiterate my position that in the case of AI it is a serious fallacy to consider only probabilities and ignore the full set of potential consequences, especially since the good and bad consequences are so close to each other. Considering only the good consequences and ignoring the bad (or downright catastrophic) consequences sitting right next to them is a fallacy.
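As a minimal sketch of that point (every number below is invented purely for illustration), compare expected losses when consequences span orders of magnitude:

```python
# Why probability alone is not enough: expected loss = probability x consequence.
# All figures are made up for the example; only the arithmetic matters.
scenarios = [
    # (description, probability, consequence in arbitrary "loss units")
    ("minor misuse, very likely",  0.50,  1e2),
    ("major disruption, unlikely", 0.05,  1e5),
    ("catastrophic outcome, rare", 0.001, 1e9),
]

for name, p, loss in scenarios:
    # The rare scenario dominates the risk picture despite its low probability.
    print(f"{name:30s} expected loss = {p * loss:12,.0f}")
```

The rare-but-catastrophic row comes out two to four orders of magnitude above the others, which is exactly why dismissing a scenario on low probability alone is not a complete argument.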
 
  • Like
Likes javisot and 256bits
  • #270
DaveC426913 said:
But the rate at which AI is advancing - both in sophistication and in application - is outstripping anything we've ever seen. 8 months might as well be 8 years.
Sorry, but it is the advances in other areas of tech that have allowed the AI advancement to appear to be cutting edge, using one of those terms frowned upon. <--Yoda speak.

Ever wonder why DARPA used hydraulics on their first(ish) robot? The electric motors, controllers, and switches were not yet available. Now everything is fly-by-wire - cars, airplanes, ships, robots. Materials science brought better magnets and lighter construction with carbon fibre and composites; quantum mechanics ushered in smaller and faster electronics for display, storage, and computation; relativity made way for GPS and tracking; manufacturing advances led to optimized material usage, along with strength-of-materials research and new methods (printing complex parts); the list is endless.

Try building a machine intelligence, let alone a functioning LLM, on an IBM XT: 4.77 MHz, with 40 MB of MFM storage and an included cassette recorder, just to see how "real time" that would be. Present tech allows billions of parallel matrix computations per second; the immediate display on the screen is an illusion of AI 'thinking' and 'smartness'. No one would be so impressed with an XT popping out a word from a neural net every minute, with the 'advanced' capability to reproduce 'Run, Dick, Run' in its various forms; and in a few years one could dial up the world with pings, bongs, and screeching to tell a newsgroup of the AI breakthrough, so companies can now release their employees.
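A quick back-of-envelope calculation makes the gap concrete. The figures below are rough assumptions (roughly 10 kFLOPS for software floating point on a 4.77 MHz 8088, roughly 100 TFLOPS for a modern accelerator, and a 7B-parameter model), not measurements:

```python
# Order-of-magnitude comparison: one token of LLM inference on a 1981 IBM XT
# versus a modern accelerator. All figures are rough assumptions, not specs.
flops_per_token = 2 * 7e9   # ~2 FLOPs per parameter for a 7B-parameter model
xt_flops_per_s = 1e4        # software floating point on a 4.77 MHz 8088
gpu_flops_per_s = 1e14      # a current accelerator, to the nearest magnitude

xt_seconds = flops_per_token / xt_flops_per_s
gpu_seconds = flops_per_token / gpu_flops_per_s

print(f"XT : {xt_seconds / 86400:,.0f} days per token")   # ~16 days
print(f"GPU: {gpu_seconds * 1000:,.2f} ms per token")     # ~0.14 ms
# (Never mind that ~14 GB of weights would not fit on a 40 MB disk.)
```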
 
  • Like
Likes jack action
