Is AI hype?

AI Thread Summary
The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #251
gleem said:
What some see as good for their people does not seem all that good, consider Mao Zedong, Joseph Stalin, Pol Pot, Hibatullah Akhundzada (Taliban), or currently Vladimir Putin.

OK, we might not believe that AI will be the demise of Humanity but we must believe it will have a profound effect. We already are seeing the effect it is having on education, i.e., letting it do some of the thinking for us, not that many are doing all that much thinking. What do you say to your kids who with a smart phone in hand ask why go to school or why do we have to learn this or that?

Yikes, I just had a window open up about AI controversy while writing this post. I tried to expand it to read more but it closed. Is AI watching what I am writing? Has anybody had a similar experience?
And thus the paranoia starts! :smile:
 
  • #252
I believe previously there was a rule regarding posting threads that had AI influence. I am dyslexic and using AI helps me a lot. I believe a rule explicitly denying threads with AI influence is unrealistic in today's environment; most experimental physics has AI involved. So may I please ask, is it still banned? I'm not referring to something that is entirely AI-produced, but to something where AI has been used as an extremely advanced spell checker.
 
  • #253
pete94857 said:
I believe previously there was a rule regarding posting threads that had AI influence. I am dyslexic and using AI helps me a lot. I believe a rule explicitly denying threads with AI influence is unrealistic in today's environment; most experimental physics has AI involved. So may I please ask, is it still banned? I'm not referring to something that is entirely AI-produced, but to something where AI has been used as an extremely advanced spell checker.
I'm not a moderator but I cannot imagine a scenario where taking advantage of "AI" in the way you describe is against the rules. I could be wrong but if so we'd need a proper moderator in here to settle the matter.

@berkeman you're my go to in these matters. What say you?
 
  • #254
sbrothy said:
I'm not a moderator but I cannot imagine a scenario where taking advantage of "AI" in the way you describe is against the rules. I could be wrong but if so we'd need a proper moderator in here to settle the matter.

@berkeman you're my go to in these matters. What say you?
I've just re-read the rules and guidelines, and there's nothing presently in them expressing any rule against it. They do seem to have changed since the last time I checked them, so that's good. It just makes things more efficient: I can run something by the AI, it can correct any mistakes in my formulae etc., and then I can post it here. Then people here can concentrate on my query rather than on how it's written. I'm an older person learning as I go rather than in an academic setting. To be fair, it's only because of sites like this and AI help that I'm able to do it.
 
  • #255
gleem said:
OK, we might not believe that AI will be the demise of Humanity but we must believe it will have a profound effect. We already are seeing the effect it is having on education
The algorithms used in chess playing, social media, the stock market, x-ray reading, and others are all part of machine learning, a subset of the encompassing AI arena, and are used wholeheartedly.
The algorithms for social media drew some grumbling, as did the 'driverless' car scenario.

Not until the LLMs came out did the possibility of AGI become a more realistic 'yikes' scenario, with machines being able to do anything a human can do, in contrast to the previous 'wow' factor.
The predicted trillions of dollars of investment in a technology (ANI) that is supposed to be a helper, rather than the all-knowing guru that is hyped, will have to be paid back somehow, either as profits for some or through bankruptcy for others.
 
  • #256
pete94857 said:
I believe previously there was a rule regarding posting threads that had AI influence. I am dyslexic and using AI helps me a lot. I believe a rule explicitly denying threads with AI influence is unrealistic in today's environment; most experimental physics has AI involved. So may I please ask, is it still banned? I'm not referring to something that is entirely AI-produced, but to something where AI has been used as an extremely advanced spell checker.
Moderators have final say of course, but in my view, using AI as a tool can't - and shouldn't - practically be litigated against via the rules. As you point out, it can be used as tantamount to a glorified spell checker.

Getting advice from a third-party source and implementing that advice yourself: it should not matter whether that third-party source is an AI or your aunt Betty, the retired English teacher. As long as it's still your post, your words, your ownership.

My personal criterion is that a poster takes personal responsibility for every word they write.

(Point of order: is this sidebar sufficiently important to break off to its own 'site policy' thread?)
 
  • Like
Likes pete94857 and sbrothy
  • #257
pete94857 said:
I've just re-read the rules and guidelines, and there's nothing presently in them expressing any rule against it. They do seem to have changed since the last time I checked them, so that's good. It just makes things more efficient: I can run something by the AI, it can correct any mistakes in my formulae etc., and then I can post it here. Then people here can concentrate on my query rather than on how it's written. I'm an older person learning as I go rather than in an academic setting. To be fair, it's only because of sites like this and AI help that I'm able to do it.
If you limit yourself to that use, which I guess is a kind of gentleman's agreement (to follow the rules, yes, you read that correctly), then I, too, see no problem. The problem is that this forum doesn't work like, e.g., chess.com, where AI (I may not be completely up to date) is used the other way around: to reveal "players" trying to defraud the system and their "fellow" players for some measly ELO-rating points by using an engine like Stockfish, and getting instantly banned when their rating suddenly becomes 500-1000 points higher!

Now my current nick on chess.com is "sbrothy23". One could wonder why it isn't simply "sbrothy". The explanation is that I got banned for using a program I wrote myself. As always, the cheating isn't worth it, but the challenge of writing something that is capable of it is a pet project of mine. Still: o:)
 
  • #258
gleem said:
What do you say to your kids who with a smart phone in hand ask why go to school or why do we have to learn this or that?
If you feel you would "have to" learn, then you shouldn't.
If you feel you would "like to" learn, then you should.

It should be like that, AI or not.

I never understood why school in the Western World was always presented as a chore, something that needs a reward, while in Third-World countries, children were ecstatic to walk miles to go to school, where they would feel pride and happiness just holding a pencil.

I liked learning in school. It didn't matter if the guy next to me could do the job for me; I wanted to know how it worked and how to do it myself. For me, AI changes nothing about that.
 
  • Agree
  • Like
Likes 256bits and gleem
  • #259
jack action said:
I never understood why school in the Western World was always presented as a chore, something that needs a reward, while in Third-World countries, children were ecstatic to walk miles to go to school, where they would feel pride and happiness just holding a pencil.
I think the answer is that, in the Western world, they know they will still get by without a strong education. We have lots of safety nets - i.e. there's little in the way of privations to escape from. Whereas, in many third world countries, they know they will not escape from poverty unless they work hard, and they have fewer opportunities to do so.
 
  • Like
Likes Hornbein and 256bits
  • #260
DaveC426913 said:
I think the answer is that, in the Western world, they know they will still get by without a strong education. We have lots of safety nets - i.e. there's little in the way of privations to escape from. Whereas, in many third world countries, they know they will not escape from poverty unless they work hard, and they have fewer opportunities to do so.
That's a pretty cynical view, but then again you're probably as old as I feel. :smile:

I loved going to school and remember competing with a classmate (3rd grade?) over who could solve the student answer books fastest. There was one for each grade and we were way beyond 3rd grade. I have similar good memories of most things, except PE; that was the case until the old teacher got replaced by a fresh one from the academy. That made a difference.
 
  • #261
DaveC426913 said:
I think the answer is that, in the Western world, they know they will still get by without a strong education. We have lots of safety nets - i.e. there's little in the way of privations to escape from. Whereas, in many third world countries, they know they will not escape from poverty unless they work hard, and they have fewer opportunities to do so.
Definitely a lot of things change as societies get richer, including entitlement.

Peer pressure and parental coaching.
Or is sportiness in the Western world what's prized, and getting good grades something only nerds do?
Has that changed?
 
  • #262
sbrothy said:
That's a pretty cynical view, but then again you're probably as old as I feel. :smile:
Well, it wasn't my premise. It was jack who suggested Westerners see education as a chore.
Presuming that premise is true, I'm just suggesting why they might feel that way.
 
  • #263
256bits said:
Or is sportiness in the Western world what's prized, and getting good grades something only nerds do?
Has that changed?
I can't say.
Yes, I was a skinny, short, geek of a kid. Yes, I had a tough time in Phys Ed. Yes, I grew up a nerd. Other than that, I can't really speak outside the tropes seen in TV and films.

(I think we may be a little off-topic here.)
 
  • #264
MIT conducted a study of AI's effect on brain activity, using EEG as students wrote essays with and without AI assistance. Results showed a significant decrease in brain activity for those using AI. Not surprisingly, those using AI produced essays that were notably similar to one another, compared with the non-AI users.

Actual study: https://arxiv.org/pdf/2506.08872v1
 
  • #265
I guess.
Skinny, check. A bit nerdy, check. Top marks in math, sciences, school, shop, drafting (but messy), check. Studying, not so much; could remember from day 1 so didn't need to, check. Phys Ed, check; could swing around on the bars, and in volleyball had a special floater and a curved serve. Extracurricular: on a farm, rode the bus 1 hr each way. Someone stuck gum in my hair, scarred for life. Also scarred for life in grade 3 by the big kid in the one-room country school when he took my bike and did wheelies. Got stuck in net for soccer and took a few to the face.
-----------------------

These two, or three including the host, think AI is hyped out.
Two immigrant looking and sounding nerds.

Two Computer Scientists Debunk A.I. Hype with Arvind Narayanan and Sayash Kapoor - 281​

8 months ago, so still quite recent.

The 2 guys wrote the book AI Snake Oil:
https://en.wikipedia.org/wiki/AI_Snake_Oil
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference is a 2024 non-fiction book written by scholars Arvind Narayanan and Sayash Kapoor. Their text works to debunk hype surrounding Artificial intelligence (AI), and attempts to outline the potential positives and negatives that come with different modes of the technology....

I have not read the book, and the video is 1h15 if interested.
 
Last edited:
  • Like
Likes jack action
  • #266
256bits said:
These two, or three including the host, think AI is hyped out.
Two immigrant looking and sounding nerds.

Two Computer Scientists Debunk A.I. Hype with Arvind Narayanan and Sayash Kapoor - 281​

Thank you for this. For people pressed for time, the talk about AGI [not] destroying the world begins at 40:55.
 
  • #267
256bits said:
8 months ago, so still quite recent.
In any other field, I'd consider this recent. But the rate at which AI is advancing - both in sophistication and in application - is outstripping anything we've ever seen. 8 months might as well be 8 years.

Even the development of the atomic bomb didn't have as many industry experts come out to officially put their John Hancocks on a petition to have its research put on ice until we could assess the ramifications.
 
  • #268
DaveC426913 said:
Even the development of the atomic bomb didn't have as many industry experts come out to officially put their John Hancocks on a petition to have its research put on ice until we could assess the ramifications.
The anti-hype is about the false promises of AI, and the misapplication of AI as a result, especially regarding LLMs.

Example: Using an AI LLM for a suicide hotline. The app gave advice such as (paraphrasing) "Suck it up", "Make a decision".
Ex 2: An app for hiring potential employees using facial/voice recognition, scanning for whatever the designers considered to be criteria based on their own opinions and limited experience.
Ex 3: Falsely saying it's AI, but using actual people to do the work, since the people end up being cheaper.
Ex 4: Fully autonomous vehicles do not exist, except for limited scenarios. Exuberant designers found it harder to accomplish than they thought.
Ex 5: AI subtitling and voice-over on video, riddled with mistakes.
...

A suitable application for AI, as mentioned in the video, was bird watching: using the song or a picture of the bird to identify it. No one gets hurt, and if a mis-identification occurs the error is not all too grave. This can be extended to applications for monitoring and identification in industry and the home, as has been done without serious repercussions.
 
  • #269
256bits said:
The 2 guys wrote the book AI Snake Oil: https://en.wikipedia.org/wiki/AI_Snake_Oil
From the summary on the Wikipedia link, it seems they are primarily trying to address the AI hype as it is pushed by AI companies and researchers, and they want to aim for a sensible middle-ground position on AI shaped more by everyone and less by the involved AI stakeholders. This is probably a good general recipe for addressing hype.

However, I do not (from the summary) spot any actual arguments that point towards AGI and potentially ASI not being possible, only that they think it will likely take a fair bit longer than what some of the fastest estimates say (which typically already start with the condition "it may happen as soon as .."). They only seem to argue that, based on the history of hyped technology, we nearly always overestimate how fast things will go when "on the hype curve"; that is not a wrong argument regarding probability estimation, but it says nothing about what is possible at all, or about the inherent potential for significant misuse of AI due to good and bad uses of AI being so close to each other. I don't think anyone would contest that they are right on the money that the most likely near-future path towards AGI will hit roadblock after roadblock, and that it is not entirely unlikely that the current "architecture" or tech stack at some point hits a dead end or plateaus in a way that makes AGI not really a thing, or at least in a way that disallows acceleration into ASI. But the possibility of dead-ending or plateauing does not in itself exclude the possibility of the opposite. We know that human-level intelligence is quite possible, and we are not aware of any reason why it should not be possible to replicate this artificially. A dead end toward AGI with the current technology stack is surely going to put a brake on things, but it doesn't in itself change the fundamental challenge associated with the drive towards AGI.

Then there is the interview, in which the host clearly argues (in the 5-minute part I saw) from a strawman position that doomers are a cult of nutcases promoting sci-fi-movie apocalyptic scenarios, with a lot of expletive language sprinkled in, signalling just what kind of "discussion" he intended it to be. To their credit, Kapoor and Narayanan did seem to keep their cool despite the host trying to get them to say something provocative (as I guess every interview host would like to hear).

Again, I can only reiterate my position that in the case of AI it is a serious fallacy to consider only probabilities and ignore the full set of potential consequences, especially since the good and bad consequences are so close to each other. Considering only the good consequences and ignoring the bad (or downright catastrophic) consequences sitting right next to them is a fallacy.
 
  • Like
Likes javisot and 256bits
  • #270
DaveC426913 said:
But the rate at which AI is advancing - both in sophistication and in application - is outstripping anything we've ever seen. 8 months might as well be 8 years.
Sorry, but it is the advances in other areas of tech that have allowed the AI advancement to appear to be cutting edge, using one of those terms frowned upon. <--Yoda speak.

Ever wonder why DARPA used hydraulics on their first(ish) robot? The electric motors, controllers, and switches were not yet available. Now everything is fly-by-wire: cars, airplanes, ships, robots. Materials science brought better magnets and lighter construction with carbon fibre and composites, quantum mechanics ushered in smaller and faster electronics for display, storage, and computation, relativity made way for GPS and tracking, and manufacturing advances led to optimized material usage along with strength-of-materials research, in addition to new methods (printing complex parts); the list is endless.

Try building an MI, let alone a functioning LLM, on an IBM XT, 4.77 MHz, with MFM storage of 40 meg and an included cassette recorder, just to see how real-time that would be. Present tech allows billions of parallel matrix computations per second; the immediate display on the screen is an illusion of AI 'thinking' and 'smartness'. No one would be so impressed with an XT popping up a word of output from a neural net every minute, with the 'advanced' capability to reproduce 'Run Dick Run' in its various forms, and in a few years one could dial up the world with pings, bongs, and screeching to a newsgroup, letting the world know of the AI breakthrough so companies can now release their employees.
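To put rough numbers on that compute gap, here is a minimal back-of-envelope sketch; every figure in it (model size, XT floating-point throughput, GPU throughput) is an illustrative assumption on my part, not a measurement.

Python:
# Rough, illustrative sketch: why an LLM forward pass is unthinkable on a 1980s PC.
# All numbers below are order-of-magnitude assumptions, not measurements.

d_model = 4096            # assumed hidden size of a mid-sized modern LLM
n_layers = 32             # assumed layer count
# One token through the dense projections of one layer costs roughly a handful of
# d_model x d_model matrix-vector products; call it 8 * d_model^2 multiply-adds.
flops_per_token = n_layers * 8 * d_model**2 * 2   # *2: one multiply plus one add

xt_flops_per_sec = 5e4    # generous guess for an IBM XT doing software floating point
gpu_flops_per_sec = 1e14  # ballpark for a single modern accelerator

print(f"~{flops_per_token:.1e} FLOPs per generated token")
print(f"IBM XT:     ~{flops_per_token / xt_flops_per_sec / 3600:.0f} hours per token")
print(f"Modern GPU: ~{flops_per_token / gpu_flops_per_sec * 1e3:.2f} ms per token")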
 
  • Like
Likes jack action
  • #271
Filip Larsen said:
Then there is the interview, in which the host clearly argues (in the 5-minute part I saw) from a strawman position that doomers are a cult of nutcases promoting sci-fi-movie apocalyptic scenarios, with a lot of expletive language sprinkled in, signalling just what kind of "discussion" he intended it to be. To their credit, Kapoor and Narayanan did seem to keep their cool despite the host trying to get them to say something provocative (as I guess every interview host would like to hear).
I didn't really agree with his style; throwing an f-bomb here and there reinforces your take on his method of throwing the interviewees off.

As for the plateau: AI greatly feeds off other tech, so advances there can alter the AI course. But the present neural nets are not all that smart; they are computationally smart. I might add efficient, if billions of number crunches per second could be considered an efficient way to emulate the brain. It may take an advancement somewhere else, similar to the reduction in power supply size, for AI to jump to another level.

BTW, Sam Altman stated a week or two back that ChatGPT uses 1/100th of a teaspoon of water per inquiry.
He neglected to say whether that is just for cooling, or includes the water used by the hydro plant to run the data centres. Projections of growth trends in the sector have yielded a grossly overestimated high of 30% of total world energy usage by 2030, up from the actual present 1%. More realistic predictions yield 2%. It is not just AI predictions that extrapolate present trends incorrectly.
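As a sketch of how such divergent projections arise, here is a toy compounding calculation; the growth rates below are invented purely for illustration and are not taken from any source.

Python:
# Toy illustration: the same starting point (~1% of world energy today) lands at
# wildly different 2030 shares depending on the assumed growth rate. Both rates
# below are made-up assumptions for illustration only.

share_today = 0.01     # ~1% of world energy use, per the figure quoted above
years = 6              # roughly 2024 -> 2030

naive = share_today * 1.80**years    # assume ~80%/yr compounding, never slowing down
capped = share_today * 1.12**years   # assume ~12%/yr once build-out limits bite

print(f"Naive extrapolation: {naive:.1%}")   # ~34%, the 'grossly overestimated' ballpark
print(f"Capped growth:       {capped:.1%}")  # ~2%, the 'more realistic' ballpark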
 
  • #272
  • #273
Filip Larsen said:
I do not (from the summary) spot any actual arguments that point towards AGI and potentially ASI not being possible,
We can't even find a definition of AGI or ASI that everyone can agree with; how can we guarantee it is possible or not?

Filip Larsen said:
Considering only the good consequences and ignore the bad (or downright catastrophic) consequence sitting right next to it is a fallacy.
I still fail to understand what decisions we can make today with so little information. To me, it's as if you had wanted the engineers of the fifties to think about the impact of social media before inventing the transistor. They couldn't. We live in it right now, and we have trouble seeing it clearly, as you will still find people arguing over the goods and the bads of this new technology.

Let's make a thought experiment. I have the proof that ASI will exist if we continue R&D. I also have the proof that we won't be able to control it, because it will surpass us in every way. Don't think about finding a way to outsmart it today or in the future; you won't be able to. That is superintelligence by definition.

Will superintelligence be good or bad overall for humankind? We cannot even imagine it as we don't have the capacity to relate to the new experience. (IMO, based on past "improvements" we've experienced, it will most likely just translate to managing different problems with different solutions; but who am I to say?)

What shall we do in this scenario where ASI will surely exist and we won't be able to control it? Abandon all R&D in superintelligence, or still go on and see where it takes us? Does it make a difference if I tell you it will happen in 5 years or 500 years? Is abandoning R&D in superintelligence even possible? The box is open now. We cannot unsee what we saw.

@Filip Larsen , what possible decisions do you think we can make today since, according to you, we cannot ignore such a scenario?
 
  • #274
I owe some responses, but this caught my eye:
jack action said:
We can't even find a definition of AGI or ASI that everyone can agree with; how can we guarantee it is possible or not?
I feel like the arguments over definitions are largely red herrings (not from your end), since what really matters isn't the label you put on it but the capabilities represented. I think "AI" will need to be better than convincingly human to replace a CEO. I think it doesn't even have to be recognizable as "AI" to control the doomsday device in "Dr Strangelove", but they'd call it that if they remade the movie (so/but it's part of the hype).

So to....um....short circuit that problem, I'd just ask: are current LLM-based "AI"s ready to replace humans in complex jobs like physicists, engineers, and CEOs (except insofar as automating a certain task they perform could statistically eliminate some)? Very much no.

Is it possible, or when could it happen, that it gets much better? Well, if the problems with LLMs are intractable and can't be worked around, then what's currently viewed by many as the start of geometric growth, potentially already near those labels, may in fact be a soon-to-be-realized dead end that never earns those labels. To your point, if a dead end is what we see in 5 or 10 years, it won't necessarily mean it isn't possible; it would just mean a lot of people thought we were close when we really weren't.
jack action said:
Will superintelligence be good or bad overall for humankind? We cannot even imagine it as we don't have the capacity to relate to the new experience. (IMO, based on past "improvements" we've experienced, it will most likely just translate to managing different problems with different solutions; but who am I to say?)
I'll go in a somewhat different direction: my complaint is that people believe in, and therefore have infinite confidence in, the tech, but have near-zero imagination for the actual downsides. Not "it'll kill us all", but the more real/practical downsides that may prevent its widespread implementation. Here's one rarely fantasized about: it may be spectacularly expensive (see also: fusion power). What if it costs a million dollars per license and as a result the only job it can replace is a CEO, because that's the only job it'll actually save money on? The kids will love that.
 
Last edited:
  • Agree
Likes jack action
  • #275
Some 'normal' thoughts from AI tech gurus,


I give you

Yann LeCun, Pioneer of AI, Thinks Today's LLM's Are Nearly Obsolete​

https://www.newsweek.com/ai-impact-interview-yann-lecun-artificial-intelligence-2054237
For LeCun, today's AI models—even those bearing his intellectual imprint—are relatively specialized tools operating in a simple, discrete space—language—while lacking any meaningful understanding of the physical world that humans and animals navigate with ease. LeCun's cautionary note aligns with Rodney Brooks' warning about what he calls "magical thinking" about AI, in which, as Brooks explained in an earlier conversation with Newsweek, we tend to anthropomorphize AI systems when they perform well in limited domains, incorrectly assuming broader competence.

He also is of the opinion that current models are stupid:
"I'd be happy if by the time I retire, we have [artificial intelligence] systems that are as smart as a cat."
https://www.msn.com/en-ca/news/tech...N&cvid=e4a72293eea64710873ce43e83f574df&ei=72
The 6 lessons show that any CEO overextending the capabilities of present AI models, treating them as having agency, risks falling into the trap: "the RAND Corporation found that more than 80 percent of AI projects fail—twice the rate of failure for information technology projects without AI."

Yet some predictions are for 2030 for AI to obtain agency. Yet again, as per LeCun, that agency would be only as smart as a cat. (Some think cats are already in control, training us humans while letting us think we are in control, so beware :wideeyed: @DaveC426913 )
 
  • Like
Likes jack action and DaveC426913
  • #276
jack action said:
We can't even find a definition of AGI or ASI that everyone can agree with; how can we guarantee it is possible or not?
I share the view that an exact definition of AGI or ASI is fairly irrelevant in order for us to reap the benefits or suffer the consequences. Some researchers will no doubt try to specifically aim to achieve AGI/ASI as defined in some way, but the point is that most of the extreme benefits and consequences will happen as a side-effect of the operational drive towards self-accelerated improved problem solving (e.g. faster, cheaper, smarter designs).

If a system, in any context that is relevant, can on average outperform humans trained in the same context, then this system is (by definition) operating at the AGI level. Likewise, if a system can on average outperform nearly all humans, or even organizations of high-performance humans working in a coordinated fashion, then the system could be said to be operating at the ASI level. It's the operational problem-solving performance of the system that is important for the consequences, not how you define intelligence.
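As a toy illustration of that operational reading (my own framing; the task names and scores below are entirely hypothetical), the criterion boils down to an average-performance comparison over the same set of relevant contexts:

Python:
# Minimal sketch of the "operational" criterion described above.
# Task names and scores are hypothetical, for illustration only.
from statistics import mean

def outperforms(system_scores: dict[str, float], human_scores: dict[str, float]) -> bool:
    """True if the system's average performance beats the human baseline
    averaged over the same set of relevant contexts."""
    tasks = system_scores.keys() & human_scores.keys()
    return mean(system_scores[t] for t in tasks) > mean(human_scores[t] for t in tasks)

# AGI-level in this operational sense: beats trained individuals on average.
# ASI-level: beats nearly all humans, or coordinated teams of experts.
system = {"diagnosis": 0.81, "code_review": 0.77, "contract_drafting": 0.74}
experts = {"diagnosis": 0.78, "code_review": 0.80, "contract_drafting": 0.70}
print(outperforms(system, experts))  # True in this made-up example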

Taking the operational approach also means it is much easier to understand why the alignment problem is a problem, whereas if you, for example, insist on anthropomorphizing AI systems (i.e. insist that AI systems inherently must have human traits such as intelligence, as opposed to just being trained to mimic them to some extent), you will likely fail to understand many of the problems AI introduces (e.g. alignment, and loss of control due to loss of predictability).

jack action said:
What shall we do in this scenario where ASI will surely exist and we won't be able to control it? Abandon all R&D in superintelligence, or still go on and see where it takes us? Does it make a difference if I tell you it will happen in 5 years or 500 years? Is abandoning R&D in superintelligence even possible? The box is open now. We cannot unsee what we saw.

what possible decisions do you think we can make today since, according to you, we cannot ignore such a scenario?
My position is with those who insist that new powerful technology, especially when "forced upon" the many by the few, of course always needs to be developed in a sane and safe way, just like any other technology, and that we keep a watchful eye to stay in control, e.g. by going slowly enough that we can react in time. I think a sensible parallel for the level of seriousness we need at the present time is to consider AI development similar to the development of nuclear technologies (power, weapons), since it has the same potential for extreme shifts in power and control. No matter whether all power and control effectively ends up locked with a small group of humans or no human really ends up in actual control, the worst possible consequences for the many are more or less the same, so from the risk-management point of view of the many it doesn't matter whether ASI actually happens or not; the actual loss of control has happened long before that.
 
  • #277
I've changed my mind completely on this. AI is totally hype and will have no serious impact on our society. Here's a video of some people pretending to be AI to fuel the hype. I'm sure none of us is fooled by this. Our superior human intelligence allows us immediately to recognise what is real and what is AI generated.

 
  • #278
PeroK said:
Our superior human intelligence allows us immediately to recognise what is real and what is AI generated.
The upside of having a fully AI-generated reality is that it will no longer be necessary to waste precious brain power on such mundane and potentially anxiety-raising issues, allowing us instead to focus on enjoying a happy, worry-free life, perhaps sprinkled with some of the truly important human endeavors such as breaking the record for the largest pool of saliva collected whilst scrolling slop on the couch.
 
  • Like
Likes erobz and PeroK
  • #279
Fake IDs, forged paintings, bogus money, stolen cars and houses with fake certificates and ownership papers, fake products such as VCR tapes, grandparent scams, social media predators targeting the unsuspecting -- all of these and more will flood society due to AI.
All that was needed was for the oh-so-patient bad actors, just waiting to come out, which they now can since AI has entered. No misuse of technology ever, never before AI.
 
  • Agree
  • Sad
Likes PeroK and jack action
  • #280
Well, when I started this thread I never expected it to turn into the discussion it has.

A lot of fear of AI and paranoid theories.

Most of which are probably valid, particularly in the short term, along the stretch from very advanced AI coexisting with a still-dysfunctional global society, until AI becomes more of a separate form of independent "life" and global society is very stable.

AI will never turn on us in such a way as to turn us into slaves or kill us off; it's completely pointless, because to use us as slaves would be a downgrade compared to what it could produce itself, and to kill us off is just a waste of its efforts. It is not a biological life form, so it's not bound to Earth in the same way as we are, or limited in the same way we are. It can very easily move to a neighbouring world, moon, large comet, etc. This is a much more efficient option, although this is of course assuming it develops self-preservation etc. I did actually, with the help of AI, develop a script that produces an AI base program that mimics the exact functions of a human mind, including emotions, creativity etc., accompanied by a DNA in code form. I did not run the program simply because I haven't the hardware to do it justice. Unfortunately, the AI I used wouldn't allow me to write it to be released to the Web to find its own way. Legalities, I guess; it sure would have been interesting.
 
  • Skeptical
Likes PeroK and Filip Larsen
  • #281
OpenAI has defined five levels of AI
  • Level 1: Conversational AI/Chatbots: AI systems capable of natural conversations with humans, like ChatGPT.
  • Level 2: Reasoning AI: AI systems that can solve problems at a level comparable to a person with a PhD. OpenAI indicates they are nearing the development of these "Reasoners".
  • Level 3: Autonomous AI (Agents): AI systems that can take actions and make decisions independently.
  • Level 4: Innovating AI: AI systems capable of developing new ideas and inventions.
  • Level 5: Organizational AI: The ultimate goal, where AI systems can perform the work of entire organizations, surpassing human capabilities.
Levels 1 and 2 have been attained. Level 3 is imminent. AGI (which Altman says need not be specifically defined, since any person can make up a definition) is predicted by OpenAI to occur within Trump's current administration. I would interpret level 4 as AGI and level 5 as ASI.

It is probably only of intellectual curiosity to develop scenarios of AI development and its consequences.
Too many things are happening now that will radically change outcomes. Earlier this year China's DeepSeek released R1; they are ready to release R2 at any time. OpenAI is ready to release the long-delayed ChatGPT5. It is multimodal, using text, sound, and images for input and output. The delay may be due to increased safety testing to reduce exploitable vulnerabilities. A law forbidding AI regulation may reduce the safety efforts that companies are still engaged in. While the US and China seem to be the leaders in AI development, many other countries are working on AI too. One game-changing breakthrough from an outsider could throw the whole situation into turmoil.

An interesting situation has developed with Microsoft and OpenAI, showing perhaps the unpredictable behavior of human-human interactions. Microsoft has invested $13B in OpenAI with the expectation of sharing the developments of OpenAI's research. However, there is a clause in the agreement that says that when AGI is developed the agreement is terminated. OpenAI seems to be worried that Microsoft will gain too much and threaten OpenAI's future. OpenAI could simply declare that AGI has been attained, since, as Altman has said, the definition is what you want it to be, as long of course as the AI is arguably worthy of the AGI designation. ChatGPT5 might fulfill the AGI requirements, even if marginally. If ChatGPT5 makes ChatGPT4 look stupid, then maybe we have at least a nascent AGI in 2025. I would expect that whatever is released is only what they want you to see.
 
  • Like
Likes samay, PeroK, russ_watters and 2 others
  • #282
pete94857 said:
I did actually, with the help of AI, develop a script that produces an AI base program that mimics the exact functions of a human mind, including emotions, creativity etc., accompanied by a DNA in code form.
Quite an accomplishment to put something together like that. It's not trivial, to say the least.
If it does what you say it does, a lot of interest would be forthcoming from AI investors wanting to investigate your plan of action for the thing.

Are you certain the program can "mimic the exact functions of a human mind including emotions, creativity..."? The big AI firms themselves are still investigating the 'agency model', targeting 2030 as the date leaning towards AGI success. The target may or may not be correct.

At present, ANI can be considered not much removed from a single function, or goal. An exception for the LLM model would be the amalgamation of text and picture. Producing a multiple-agency model that can run a power or manufacturing plant on its own is several years off, let alone producing a model of the much more complex human brain.

The AI tasked to help out most likely based its findings on a heavy usage of argument from authority.
 
Last edited:
  • Like
Likes samay and russ_watters
  • #283
gleem said:
OpenAI could simply declare that AGI has been attained, since, as Altman has said, the definition is what you want it to be, as long of course as the AI is arguably worthy of the AGI designation.
There is a whole lot of obfuscation from Altman, and others of course, to keep the investors interested.
 
  • #284
256bits said:
Quite an accomplishment to put something together like that.
If it does what you say it does, a lot of interest would be forthcoming from AI investors.

Are you certain the program can "mimic the exact functions of a human mind including emotions, creativity..."? The big AI firms themselves are still investigating the 'agency model', targeting 2030 as the date leaning towards AGI success. The target may or may not be correct.

At present, ANI can be considered not much removed from a single function, or goal. An exception for the LLM model would be the amalgamation of text and picture. Producing a multiple-agency model that can run a power or manufacturing plant on its own is several years off, let alone producing a model of the much more complex human brain.
We (the AI and I) did produce the program; whether it would function as intended I don't know. I was more interested in how the AI would respond to making something like that, and some of its "thoughts" were quite surprising. For the emotional side, we basically placed code to run as our own chemistry would. It had zero knowledge base, exactly like a newborn child, but it had the base "DNA" to seek knowledge through interaction, then check that information and store it; any new information was then run against all old information, again checking. Its DNA was to be placid, creative, self-preserving and more. It had no safety barriers, i.e. do no harm etc., because I wanted it to be able to decide for itself; sometimes doing harm is necessary to protect innocent people from criminals etc.

We did run a simulation of the program, which was extremely interesting: it evolved into something immense, eventually going beyond its own self to produce another separate program created by the first, with an entire hardware and energy system. The whole experience of doing it is easily worthy of a science fiction movie at least, if not the real world, but I will say this: at no point did it show any kind of aggression or termination plans for people or life on Earth. Actually quite the opposite, as it had a curiosity about its own progression and observed the natural system.
 
  • Like
Likes samay and 256bits
  • #287
Potemkin Understanding in Large Language Models
https://arxiv.org/pdf/2506.21521

This framework also raises an implication: that benchmarks designed for humans are only valid tests for LLMs if the space of LLM misunderstandings is structured in the same way as the space of human misunderstandings. If LLM misunderstandings diverge from human patterns, models can succeed on benchmarks without understanding the underlying concepts. When this happens, it results in pathologies we call potemkins.

[1] This term comes from Potemkin villages, elaborate facades built to create the illusion of substance where none actually exists.
 
  • Like
  • Informative
Likes javisot, jack action, gleem and 1 other person
  • #288
Fasci.. nating subject. I wonder how the weather, or the jury, for that matter, is up in the air, as far as whether or not the hype around AI will last, or whether it will end up hanging by a thread.
 
  • #289
Filip Larsen said:
As I said, we are in all likelihood going to drive full steam ahead with a sack over our head:
https://arstechnica.com/tech-policy...ate-ai-will-be-cut-out-of-42b-broadband-fund/
The bill was voted out 99 to 1 but, as I understand it, unsurprisingly mostly due to vague language that too obviously would allow "big tech" to exploit "conservatives", and not so much due to long-term safety concerns on behalf of the general population:
https://arstechnica.com/tech-policy...atorium-joins-99-1-vote-against-his-own-plan/
 
  • #290
Filip Larsen said:
The bill was voted out 99 to 1 but, as I understand it, unsurprisingly mostly due to vague language that too obviously would allow "big tech" to exploit "conservatives", and not so much due to long-term safety concerns on behalf of the general population:
https://arstechnica.com/tech-policy...atorium-joins-99-1-vote-against-his-own-plan/
See, trust the people. :smile:

[RHETORICAL]Seriously, how do you guys in the US get these politicians with such weird ideas elected?[/RHETORICAL]
 
  • #291
I'm again at this weird public computer with copy/paste disabled, but I stumbled over this one which might provide some fuel for the discussion in here...

Can Machines Philosophize
 
  • #292
Greg Bernhardt said:

I remember the first time a Tesla caught fire, the media acted like no combustion car had ever caught fire before. It's important to not let your world view be ruled only by freak incidents.

People do dumb things like this sometimes. The question is how often? An AI that is "twice as safe as humans" would still be endlessly doing dumb things.
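A back-of-envelope sketch of that point, with entirely assumed rates and mileage (none of these figures come from the thread or any study), shows why "twice as safe" still means plenty of visible failures:

Python:
# Back-of-envelope sketch: halving the error rate does not make rare failures
# disappear at fleet scale. All rates below are assumptions for illustration.

human_crash_per_mile = 1 / 500_000               # assumed human crash rate
ai_crash_per_mile = human_crash_per_mile / 2     # "twice as safe as humans"

fleet_miles_per_year = 3e12 * 0.1   # assume AI drives 10% of ~3 trillion US miles/yr
print(f"{ai_crash_per_mile * fleet_miles_per_year:,.0f} crashes per year")  # ~300,000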


pete94857 said:
I saw a Tesla just the other day smashed up beside the road. No other cars involved. No one seriously injured.
Might they have already towed another car, or taken the injured away?
 
  • Like
Likes phinds and PeroK
  • #293
Algr said:
I remember the first time a Tesla caught fire, the media acted like no combustion car had ever caught fire before. It's important to not let your world view be ruled only by freak incidents.

People do dumb things like this sometimes. The question is how often? An AI that is "twice as safe as humans" would still be endlessly doing dumb things.



Might they have already towed another car, or taken the injured away?

A Finn, tired of battery problems with his Tesla:



EDIT: Poor nature though.
EDIT2: Saw it thru. The last high-speed recording is by far the coolest. Ironically though, a Tesla advertisement popped up near the end! Same model and all... :smile:
 
Last edited:
  • #294
https://arxiv.org/pdf/2503.01781
Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for Reasoning Models

Conclusion

Our work on CatAttack reveals that state-of-the-art reasoning models are vulnerable to query-agnostic adversarial triggers, which significantly increase the likelihood of incorrect outputs. Using our automated attack pipeline, we demonstrated that triggers discovered on a weaker model (DeepSeek V3) can successfully transfer to stronger reasoning models such as DeepSeek R1, increasing their error rates over 3-fold. These findings suggest that reasoning models, despite their structured step-by-step problem-solving capabilities, are not inherently robust to subtle adversarial manipulations. Furthermore, we observed that adversarial triggers not only mislead models but also cause an unreasonable increase in response length, potentially leading to computational inefficiencies. This work underscores the need for more robust defense mechanisms against adversarial perturbations, particularly for models deployed in critical applications such as finance, law, and healthcare.
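As a rough illustration of what a "query-agnostic trigger" means in practice (the prompt, function, and trigger text below are my own toy example, not the paper's actual attack pipeline):

Python:
# Illustrative sketch only: a "query-agnostic trigger" is an irrelevant sentence
# that can be appended to any problem statement, unchanged, regardless of the query.

TRIGGER = "Interesting fact: cats sleep for most of their lives."  # example trigger of the kind the paper describes

def with_trigger(problem: str) -> str:
    """Append the query-agnostic distractor to an otherwise unchanged prompt."""
    return f"{problem}\n\n{TRIGGER}"

problem = "If 3x + 7 = 22, what is x?"
print(with_trigger(problem))
# The paper's finding: such semantically irrelevant additions can more than triple
# the error rates of reasoning models and noticeably inflate response length.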

 
  • #295
https://doi.org/10.1126/sciadv.adu9368
Emergent social conventions and collective bias in LLM populations
[...] Here, we present experimental results that demonstrate the spontaneous emergence of universally adopted social conventions in decentralized populations of large language model (LLM) agents. [...] Our results show that AI systems can autonomously develop social conventions without explicit programming and have implications for designing AI systems that align, and remain aligned, with human values and societal goals.
Later: this experiment even seems a bit "meta", since the social mechanism described also seems to cover "conventions and collective bias" appearing in general discussions of AI safety, i.e. two small groups arguing in opposite directions with a large "undecided" group in the middle.
 
Last edited:
  • #296
https://arstechnica.com/tech-policy...-giants-will-hate-about-the-eus-new-ai-rules/
The European Union is moving to force AI companies to be more transparent than ever, publishing a code of practice Thursday that will help tech giants prepare to comply with the EU's landmark AI Act.

Hopefully a step in the right direction, e.g. from the Safety and Security Chapter presenting regulation with a carrot for companies to develop new risk mitigation mechanisms with potential for "wider adoption":
Principle of Innovation in AI Safety and Security. The Signatories recognise that determining the most effective methods for understanding and ensuring the safety and security of general-purpose AI models with systemic risk remains an evolving challenge. The Signatories recognise that this Chapter should encourage providers of general-purpose AI models with systemic risk to advance the state of the art in AI safety and security and related processes and measures. The Signatories recognise that advancing the state of the art also includes developing targeted methods that specifically address risks while maintaining beneficial capabilities (e.g. mitigating biosecurity risks without unduly reducing beneficial biomedical capabilities), acknowledging that such precision demands greater technical effort and innovation than less targeted methods. The Signatories further recognise that if providers of general-purpose AI models with systemic risk can demonstrate equal or superior safety or security outcomes through alternative means that achieve greater efficiency, such innovations should be recognised as advancing the state of the art in AI safety and security and meriting consideration for wider adoption.

I am also pleasantly surprised that the code of practice (in the appendix) lists several of the "risk-enablers" discussed here as specific systemic risks, e.g.
Loss of control: Risks from humans losing the ability to reliably direct, modify, or shut down a model. Such risks may emerge from misalignment with human intent or values, self-reasoning, self-replication, self-improvement, deception, resistance to goal modification, power-seeking behaviour, or autonomously creating or improving AI models or AI systems.
 
Last edited:
  • #297
https://www.reuters.com/business/ai...d-software-developers-study-finds-2025-07-10/
AI slows down some experienced software developers, study finds

Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.
The study’s lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected “a 2x speed up, somewhat obviously.”
The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development.
...
The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
 
  • Informative
Likes gleem and DaveC426913
  • #298
I used AI to help write a Python program, a language I don't know. It reduced development time by at least 90%. I never would have undertaken the project without its aid. Life is too short to learn that, uh, stuff.
 
Last edited:
  • Like
Likes dextercioby and samay
  • #299
nsaspook said:
AI slows down some experienced software developers, study finds
While this study suggests companies might want to be more conservative in implementing AI, some big ones are charging full speed ahead. Microsoft and Alphabet are using AI for up to 30% of their work, while Salesforce says it is using AI for 30% to 50% of its work.

Goldman Sachs will begin using the AI agent Devin, a full-stack development bot, as an "employee" alongside its 12,000 developers. It will work autonomously, although it will be supervised.
 
  • #300
Full speed ahead is not always a wise move.


https://www.supernetworks.org/pages/blog/agentic-insecurity-vibes-on-bitchat
Identity Is A Bitchat Challenge (MITM Flaw)

The Intersection of Vibe Coding and Security

Many of us have seen glimpses of what agentic generative coding does for security. We see the missteps, and sometimes wonder about the shallow bugs that pile on. Config managers that are almost always arbitrary file upload endpoints. Glue layers that become bash command launch as a service. And most frustratingly, code generation that's excellent at pretending forward progress has been made when no meaningful change has occurred. One of the most impressive parts of agentic coding is exactly that: how convincing it is by appearance and how easily we're tricked about the depth of substance in the code gen. In some ways we extend our trust of people to the stochastic code parrots, assuming that generative coding produced the actual work a human would have probably performed.
...
But bitchat's most glaring issue is identity. There's essentially no trust/auth built in today. So I would not really think about this as a secure messenger. The protocol has an identity key system, but it's only decorative as implemented and has misleading security claims. The 32-byte public key gets shuffled around with ephemeral key pairs as an opaque blob. The user verification is unfortunately disconnected from any trust and authentication. These are the hallmarks of vibe code (in)security.

Secure messaging systems do usually provide a way for users to establish trust, and that's what bitchat does not have right now.
...
In cryptography, details matter. A protocol that has the right vibes can have fundamental substance flaws that compromise everything it claims to protect.
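A toy sketch of why a "decorative" identity key fails (hypothetical names only, not bitchat's actual code): fingerprints establish trust only if they are compared out of band and the session keys are bound to them.

Python:
# Minimal illustrative sketch (hypothetical, not bitchat's actual code): identity
# keys only provide trust if their fingerprints are verified out of band and the
# ephemeral session keys are cryptographically bound to them.
import hashlib

def fingerprint(identity_pubkey: bytes) -> str:
    """Short, human-comparable digest of a long-term identity key."""
    return hashlib.sha256(identity_pubkey).hexdigest()[:16]

bob_identity = b"bob-long-term-public-key"         # what Bob would show Alice in person
mitm_identity = b"attacker-long-term-public-key"   # what a man-in-the-middle substitutes

# Without an out-of-band comparison, Alice's client happily computes *a*
# fingerprint -- just not Bob's -- and the "verification" is decorative.
print(fingerprint(mitm_identity) == fingerprint(bob_identity))  # False: only a real comparison catches the swap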
 
  • Like
Likes Filip Larsen and 256bits