Is AI Overhyped?

The discussion centers around the question of whether AI is merely hype, with three main concerns raised: AI's capabilities compared to humans, the potential for corporations and governments to exploit AI for power, and the existential threats posed by AI and transhumanism. Participants generally agree that AI cannot replicate all human abilities and is primarily a tool with specific advantages and limitations. There is skepticism about the motivations of corporations and governments, suggesting they will leverage AI for control, while concerns about existential threats from AI are debated, with some asserting that the real danger lies in human misuse rather than AI itself. Overall, the conversation reflects a complex view of AI as both a powerful tool and a potential source of societal challenges.
  • #271
Filip Larsen said:
Then there is the interview in which the host clearly argues (in the 5 min part I saw) from a strawman position that doomers are a cult of nutcases promoting sci-fi-movie apocalyptic scenarios, with a lot of expletive language sprinkled in signalling just what kind of "discussion" he intended it to be. To their credit, Kapoor and Narayanan did seem to keep a neutral cool despite the host trying to get them to say something provocative (as I guess every interview host would like to hear).
I didn't really agree with his style; throwing an f-bomb here and there reinforces your take on his method of throwing the interviewees off.

As for a plateau: AI greatly feeds off other tech, so advances there can alter AI's course. But the present neural nets are not all that smart; they are computationally smart. I might add efficient, if billions of number crunches per second can be considered an efficient way to emulate the brain. It may take an advancement somewhere else, similar to the reduction in power-supply size, for AI to jump to another level.

BTW, Sam Altman stated a week or two back that ChatGPT uses 1/100th of a teaspoon of water per inquiry.
He neglected to say whether that is just for cooling, or includes the water used by the hydro plant to run the data centres. Projections of growth trends in the sector yielded a grossly overestimated high of 30% of total world energy usage by 2030, up from the present 1%. More realistic predictions yield about 2%. It is not just AI predictions that extrapolate present trends incorrectly.
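Just to illustrate how sensitive those projections are to the assumed growth rate (the ~1%, ~2% and 30% figures are from above; the annual growth rates below are made up purely for illustration), a few lines of Python show how compounding an aggressive rate out to 2030 produces the implausible numbers:

Python:
# Toy extrapolation: share of world energy used by AI/data centres by 2030.
# The ~1% starting share is from the post; the growth rates are assumptions.
start_share = 0.01          # roughly 1% of world energy today
years = 6                   # e.g. 2024 -> 2030

for label, annual_growth in [("modest (+12%/yr)", 0.12),
                             ("aggressive (+45%/yr)", 0.45),
                             ("hype-level (+80%/yr)", 0.80)]:
    share = start_share * (1 + annual_growth) ** years
    print(f"{label:20s} -> {share:.1%} of world energy by 2030")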
 
  • #272
  • #273
Filip Larsen said:
I do not (from the summary) spot any actual arguments that point towards AGI and potentially ASI not being possible,
We can't even find a definition of AGI or ASI that everyone can agree with; how can we guarantee it is possible or not?

Filip Larsen said:
Considering only the good consequences and ignore the bad (or downright catastrophic) consequence sitting right next to it is a fallacy.
I still fail to understand what decisions we can make today with so little information. To me, it's as if you had wanted the engineers of the fifties to think about the impact of social media before inventing the transistor. They couldn't. We live with it right now, and we still have trouble seeing it clearly, as you will still find people arguing over the good and the bad of this new technology.

Let's do a thought experiment. Suppose I have proof that ASI will exist if we continue R&D. I also have proof that we won't be able to control it, because it will surpass us in every way. Don't think about finding a way to outsmart it, today or in the future; you won't be able to. That is superintelligence, by definition.

Will superintelligence be good or bad overall for humankind? We cannot even imagine it as we don't have the capacity to relate to the new experience. (IMO, based on past "improvements" we've experienced, it will most likely just translate to managing different problems with different solutions; but who am I to say?)

What shall we do in this scenario where ASI will surely exist and we won't be able to control it? Abandon all R&D in superintelligence, or still go on and see where it takes us? Does it make a difference if I tell you it will happen in 5 years or 500 years? Is abandoning R&D in superintelligence even possible? The box is open now. We cannot unsee what we saw.

@Filip Larsen , what possible decisions do you think we can make today since, according to you, we cannot ignore such a scenario?
 
  • #274
I owe some responses, but this caught my eye:
jack action said:
We can't even find a definition of AGI or ASI that everyone can agree with; how can we guarantee it is possible or not?
I feel like the arguments over definitions are largely red herrings (not from your end), since what really matters isn't the label you put on it but the capabilities represented. I think "AI" will need to be better than convincingly human to replace a CEO. I think it doesn't even have to be recognizable as "AI" to control the doomsday device in "Dr Strangelove", but they'd call it that if they remade the movie (so/but it's part of the hype).

So to....um....short-circuit that problem, I'd just ask: are current LLM-based "AI"s ready to replace humans in complex jobs like physicists, engineers, and CEOs (except insofar as automating certain tasks they perform could statistically eliminate some positions)? Very much no.

Is it possible, and when could it happen, that it gets much better? Well, if the problems with LLMs are intractable and can't be worked around, then what is currently viewed by many as the start of geometric growth, potentially already near those labels, may in fact be a soon-to-be-realized dead end that never earns them. To your point, if a dead end is what we see in 5 or 10 years, it won't necessarily mean it isn't possible; it would just mean a lot of people thought we were close when we really weren't.
jack action said:
Will superintelligence be good or bad overall for humankind? We cannot even imagine it as we don't have the capacity to relate to the new experience. (IMO, based on past "improvements" we've experienced, it will most likely just translate to managing different problems with different solutions; but who am I to say?)
I'll go in a somewhat different direction: my complaint is that people believe in, and therefore have infinite confidence in, the tech, but have near-zero imagination for the actual downsides. Not "it'll kill us all", but the more real/practical downsides that may prevent its widespread implementation. Here's one rarely fantasized about: it may be spectacularly expensive (see also: fusion power). What if it costs a million dollars per license, and as a result the only job it can replace is a CEO, because that's the only job it'll actually save money on? The kids will love that.
 
Last edited:
  • Agree
Likes jack action
  • #275
Some 'normal' thoughts from AI tech gurus,


I give you

Yann LeCun, Pioneer of AI, Thinks Today's LLM's Are Nearly Obsolete

https://www.newsweek.com/ai-impact-interview-yann-lecun-artificial-intelligence-2054237
For LeCun, today's AI models—even those bearing his intellectual imprint—are relatively specialized tools operating in a simple, discrete space—language—while lacking any meaningful understanding of the physical world that humans and animals navigate with ease. LeCun's cautionary note aligns with Rodney Brooks' warning about what he calls "magical thinking" about AI, in which, as Brooks explained in an earlier conversation with Newsweek, we tend to anthropomorphize AI systems when they perform well in limited domains, incorrectly assuming broader competence.

He is also of the opinion that current models are stupid:
"I'd be happy if by the time I retire, we have [artificial intelligence] systems that are as smart as a cat."
https://www.msn.com/en-ca/news/tech...N&cvid=e4a72293eea64710873ce43e83f574df&ei=72
The six lessons show that any CEO overestimating the capabilities of present AI models as having agency risks falling into the trap described here: "the RAND Corporation found that more than 80 percent of AI projects fail—twice the rate of failure for information technology projects without AI."

Yet some predictions are for AI to attain agency by 2030. Yet again, as per LeCun, that agency would be only as smart as a cat. (Some think cats are already in control, training us humans while letting us think we are in control, so beware :wideeyed: @DaveC426913 )
 
  • Like
Likes jack action and DaveC426913
  • #276
jack action said:
We can even find a definition of AGI or ASI that everyone can agree with; how can we guarantee it is possible or not?
I share the view that an exact definition of AGI or ASI is fairly irrelevant to whether we reap the benefits or suffer the consequences. Some researchers will no doubt specifically aim to achieve AGI/ASI as defined in some way, but the point is that most of the extreme benefits and consequences will happen as a side effect of the operational drive towards self-accelerated, improved problem solving (e.g. faster, cheaper, smarter designs).

If a system, in any context that is relevant, can on average outperform humans trained in the same context, then this system is (by definition) operating at the AGI level. Likewise, if a system can on average outperform nearly all humans, or even organizations of high-performance humans working in a coordinated fashion, then the system could be said to be operating at the ASI level. It's the operational problem-solving performance of the system that matters for the consequences, not how you define intelligence.

Taking the operational approach also makes it much easier to understand why the alignment problem is a problem, whereas if you, for example, insist on anthropomorphizing AI systems (i.e. insist that AI systems inherently must have human traits such as intelligence, as opposed to just being trained to mimic them to some extent), you will likely fail to understand many of the problems AI introduces (e.g. alignment and loss of control due to loss of predictability).
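To make the operational framing concrete, here is a minimal sketch of what such a comparison amounts to; the task names and scores are invented, only the comparison logic matters:

Python:
# Toy operational test: compare average task performance, not definitions.
# Tasks and scores are invented for illustration only.
from statistics import mean

scores_trained_humans = {"design review": 0.72, "diagnosis": 0.65, "planning": 0.70}
scores_expert_teams   = {"design review": 0.88, "diagnosis": 0.84, "planning": 0.86}
scores_system         = {"design review": 0.78, "diagnosis": 0.74, "planning": 0.71}

def outperforms(system, reference):
    # Operating "at the level" of the reference = beating its average score.
    return mean(system.values()) > mean(reference.values())

print("AGI-level (beats trained humans on average):",
      outperforms(scores_system, scores_trained_humans))
print("ASI-level (beats coordinated expert teams): ",
      outperforms(scores_system, scores_expert_teams))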

jack action said:
What shall we do in this scenario where ASI will surely exist and we won't be able to control it? Abandon all R&D in superintelligence, or still go on and see where it takes us? Does it make a difference if I tell you it will happen in 5 years or 500 years? Is abandoning R&D in superintelligence even possible? The box is open now. We cannot unsee what we saw.

what possible decisions do you think we can make today since, according to you, we cannot ignore such a scenario?
My position is with those who insist that powerful new technology, especially when "forced upon" the many by the few, always needs to be developed in a sane and safe way, just like any other technology, and that we keep a watchful eye to stay in control, e.g. by going slow enough that we can react in time. I think a sensible parallel for the level of seriousness we need at present is to treat AI development like the development of nuclear technologies (power, weapons), since it has the same potential for extreme shifts in power and control. Whether all power and control effectively ends up locked with a small group of humans or no human really ends up in actual control, the worst possible consequences for the many are more or less the same, so from a risk-management point of view of the many it doesn't matter whether ASI actually happens or not; the actual loss of control will have happened long before that.
 
  • #277
I've changed my mind completely on this. AI is totally hype and will have no serious impact on our society. Here's a video of some people pretending to be AI to fuel the hype. I'm sure none of us is fooled by this. Our superior human intelligence allows us immediately to recognise what is real and what is AI generated.

 
  • #278
PeroK said:
Our superior human intelligence allows us immediately to recognise what is real and what is AI generated.
The upside of having a fully AI-generated reality is that it will no longer be necessary to waste precious brain power on such mundane and potentially anxiety-raising issues, allowing us instead to focus on enjoying a happy, worry-free life, perhaps sprinkled with some of the truly important human endeavors such as breaking the record for the largest pool of saliva collected whilst scrolling slop on the couch.
 
  • Like
Likes erobz and PeroK
  • #279
Fake IDs, forged paintings, bogus money, stolen cars and houses with fake certificates and ownership papers, fake products such as VCR tapes, grandparent scams, social media predators targeting the unsuspecting -- all of these and more will flood society due to AI.
All that was needed was the oh-so-patient bad actors just waiting to come out, which they now can since AI has arrived. No misuse of technology ever, never, before AI.
 
  • Agree
  • Sad
Likes PeroK and jack action
  • #280
Well when I started this thread I never expected it to turn into the discussion it has.

A lot of fear from AI and paranoid theories.

Most of which are probably valid, particularly in the short term, along the line from very advanced AI in a still-dysfunctional global society, until AI becomes more of a separate form of independent "life" and global society is very stable.

AI will never turn on us in such a way as to turn us into slaves or kill us off; it's completely pointless, because using us as slaves would be a downgrade from what it could produce itself, and killing us off is just a waste of its efforts. It is not a biological life form, so it's not bound to Earth in the same way as we are, or limited in the same way we are. It can very easily move to a neighbouring world, moon, large comet, etc. This is a much more efficient option, although this is of course assuming it develops a self-preservation drive. I did actually, with the help of AI, develop a script that produces an AI base program that mimics the exact functions of a human mind, including emotions, creativity, etc., accompanied by a DNA in code form. I did not run the program, simply because I don't have the hardware to do it justice. Unfortunately, the AI I used would not allow me to write it to be released to the Web to find its own way. Legalities, I guess; it sure would have been interesting.
 
  • Skeptical
Likes PeroK and Filip Larsen
  • #281
OpenAI has defined five levels of AI
  • Level 1: Conversational AI/Chatbots: AI systems capable of natural conversations with humans, like ChatGPT.
  • Level 2: Reasoning AI: AI systems that can solve problems at a level comparable to a person with a PhD. OpenAI indicates they are nearing the development of these "Reasoners".
  • Level 3: Autonomous AI (Agents): AI systems that can take actions and make decisions independently.
  • Level 4: Innovating AI: AI systems capable of developing new ideas and inventions.
  • Level 5: Organizational AI: The ultimate goal, where AI systems can perform the work of entire organizations, surpassing human capabilities.
Levels 1 and 2 have been attained. Level 3 is imminent. AGI (which Altman says need not be specifically defined, since any person can make up a definition) is predicted by OpenAI to occur within Trump's current administration. I would interpret level 4 as AGI and level 5 as ASI.

It is probably only for intellectual curiosity to develop scenarios of AI development and its consequences.
Too many things are happening now that will radically change outcomes. Earlier this year China's DeepSeek released R1, and they are ready to release R2 at any time. OpenAI is ready to release the long-delayed ChatGPT-5. It is multimodal, using text, sound, and images for input and output. The delay may be due to increased safety testing to reduce exploitable vulnerabilities. A law forbidding AI regulation may reduce the safety efforts that companies are still engaged in. While the US and China seem to be the leaders in AI development, many other countries are working on AI too. One game-changing breakthrough from an outsider could throw the whole situation into turmoil.

An interesting situation has developed between Microsoft and OpenAI, showing perhaps the unpredictable behavior of human-human interactions. Microsoft has invested $13B in OpenAI with the expectation of sharing in the developments of OpenAI's research. However, there is a clause in the agreement that says the agreement is terminated when AGI is developed. OpenAI seems to be worried that Microsoft will gain too much and threaten OpenAI's future. OpenAI can say that AGI has been attained since, as Altman has said, the definition is whatever you want it to be, as long of course as the AI is arguably worthy of the AGI designation. ChatGPT-5 might fulfill the AGI requirements, even if marginally. If ChatGPT-5 makes ChatGPT-4 look stupid, then maybe we have at least a nascent AGI in 2025. I would expect that whatever is released is only what they want you to see.
 
  • Like
Likes samay, PeroK, russ_watters and 2 others
  • #282
pete94857 said:
I did actually with the help of AI develop a script that produces a AI base program that mimics the exact functions of a human mind including emotions, creativity etc accompanied by a DNA in code form.
Quite an accomplishment to put something together like that. It's not trivial, to say the least.
If it does what you say it does, a lot of interest would be forthcoming from AI investors wanting to investigate your plan of action for the thing.

Are you certain the program can "mimic the exact functions of a human mind including emotions, creativity..."? The big AI firms themselves are still investigating the 'agency model', targeting 2030 as the date for AGI success. The target may or may not be correct.

At present, ANI can be considered not much removed from a single function or goal. An exception for the LLM model would be the amalgamation of text and pictures. Producing a multiple-agency model that can run a power or manufacturing plant on its own is several years off, let alone producing a model of the much more complex human brain.

The AI tasked to help out most likely based its findings heavily on argument from authority.
 
Last edited:
  • Like
Likes samay and russ_watters
  • #283
gleem said:
OpenAI can say that AGI has been attained since, as Altman has said, the definition is whatever you want it to be, as long of course as the AI is arguably worthy of the AGI designation.
There is a whole lot of obfuscation from Altman, and others of course, to keep the investors interested.
 
  • #284
256bits said:
Quite an accomplishment to put something together like that.
If it does what you say it does, a lot of interest would be forthcoming from AI investors.

Are you certain the program can "mimic the exact functions of a human mind including emotions, creativity..."? The big AI firms themselves are still investigating the 'agency model', targeting 2030 as the date for AGI success. The target may or may not be correct.

At present, ANI can be considered not much removed from a single function or goal. An exception for the LLM model would be the amalgamation of text and pictures. Producing a multiple-agency model that can run a power or manufacturing plant on its own is several years off, let alone producing a model of the much more complex human brain.
We (the AI and I) did produce the program. Whether it would function as intended I don't know; I was more interested in how the AI would respond to making something like that, and some of its "thoughts" were quite surprising. For the emotional side, we basically placed code to run as our own chemistry would. It had zero knowledge base, exactly like a newborn child, but it had the base "DNA" to seek knowledge through interaction, then check that information and store it; any new information was then checked again against all old information. Its DNA was to be placid, creative, self-preserving and more. It had no safety barriers, i.e. "do no harm" etc., because I wanted it to be able to decide for itself; sometimes doing harm is necessary to protect innocent people from criminals etc.

We did run a simulation of the program, which was extremely interesting: it evolved into something immense, eventually going beyond its own self to produce another, separate program created by the first, with an entire hardware and energy system. The whole experience is easily worthy of a science fiction movie at least, if not the real world. But I will say this: at no point did it show any kind of aggression or termination plans for people or life on Earth. Actually, quite the opposite; it had a curiosity about its own progression and an interest in observing the natural system.
 
  • Like
Likes samay and 256bits
  • #287
Potemkin Understanding in Large Language Models
https://arxiv.org/pdf/2506.21521

This framework also raises an implication: that benchmarks designed for humans are only valid tests for LLMs if the space of LLM misunderstandings is structured in the same way as the space of human misunderstandings. If LLM misunderstandings diverge from human patterns, models can succeed on benchmarks without understanding the underlying concepts. When this happens, it results in pathologies we call potemkins.

[1] This term comes from Potemkin villages, elaborate facades built to create the illusion of substance where none actually exists.
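A rough sketch of how such a gap could be measured (the ask_model and grading helpers below are hypothetical stand-ins, not the paper's actual benchmark code): count the concepts a model can define correctly but then fails to apply, which is roughly what the authors call the potemkin rate:

Python:
# Toy "potemkin rate": concepts the model can define but cannot apply.
# ask_model, grade_definition and grade_application are hypothetical stand-ins
# for whatever LLM API and graders are actually used.

def potemkin_rate(concepts, ask_model, grade_definition, grade_application):
    defined_ok, applied_wrong = 0, 0
    for concept in concepts:
        definition = ask_model(f"Define: {concept}")
        if not grade_definition(concept, definition):
            continue                      # only count keystone successes
        defined_ok += 1
        answer = ask_model(f"Apply the concept of {concept} to a new example.")
        if not grade_application(concept, answer):
            applied_wrong += 1
    return applied_wrong / defined_ok if defined_ok else 0.0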
 
  • Like
  • Informative
Likes javisot, jack action, gleem and 1 other person
  • #288
Fasci... nating subject. I wonder how the weather is, or the jury, for that matter; it's up in the air whether the hype around AI will last, or whether it will end up hanging by a thread.
 
  • #289
Filip Larsen said:
As I said, we are in all likelihood going to drive full steam ahead with a sack over our head:
https://arstechnica.com/tech-policy...ate-ai-will-be-cut-out-of-42b-broadband-fund/
The bill was voted out 99 to 1, but, as I understand it, unsurprisingly mostly due to vague language that too obviously would allow "big tech" to exploit "conservatives", and not so much over long-term safety concerns on behalf of the general population:
https://arstechnica.com/tech-policy...atorium-joins-99-1-vote-against-his-own-plan/
 
  • #290
Filip Larsen said:
The bill was voted out 99 to 1, but, as I understand it, unsurprisingly mostly due to vague language that too obviously would allow "big tech" to exploit "conservatives", and not so much over long-term safety concerns on behalf of the general population:
https://arstechnica.com/tech-policy...atorium-joins-99-1-vote-against-his-own-plan/
See, trust the people. :smile:

[RHETORICAL]Seriously, how do you guys in the US get these politicians with such weird ideas elected?[/RHETORICAL]
 
  • #291
I'm again at this weird public computer with copy/paste disabled, but I stumbled over this one which might provide some fuel for the discussion in here...

Can Machines Philosophize
 
  • #292
Greg Bernhardt said:

I remember the first time a Tesla caught fire, the media acted like no combustion car had ever caught fire before. It's important to not let your world view be ruled only by freak incidents.

People do dumb things like this sometimes. The question is how often? An AI that is "twice as safe as humans" would still be endlessly doing dumb things.


pete94857 said:
I saw a Tesla just the other day smashed up beside the road. No other cars involved. No one seriously injured.
Might they have already towed another car, or taken the injured away?
 
  • Like
Likes phinds and PeroK
  • #293
Algr said:
I remember the first time a Tesla caught fire, the media acted like no combustion car had ever caught fire before. It's important to not let your world view be ruled only by freak incidents.

People do dumb things like this sometimes. The question is how often? An AI that is "twice as safe as humans" would still be endlessly doing dumb things.



Might they have already towed another car, or taken the injured away?

Finn tired of battery problems with his Tesla:



EDIT: Poor nature though.
EDIT2: Saw it thru. The last high-speed recording is by far the coolest. Ironically though, a Tesla advertisement popped up near the end! Same model and all... :smile:
 
Last edited:
  • #294
https://arxiv.org/pdf/2503.01781
Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for Reasoning Models

Conclusion

Our work on CatAttack reveals that state-of-the-art reasoning models are vulnerable to query-agnostic adversarial triggers, which significantly increase the likelihood of incorrect outputs. Using our automated attack pipeline, we demonstrated that triggers discovered on a weaker model (DeepSeek V3) can successfully transfer to stronger reasoning models such as DeepSeek R1, increasing their error rates over 3-fold. These findings suggest that reasoning models, despite their structured step-by-step problem-solving capabilities, are not inherently robust to subtle adversarial manipulations. Furthermore, we observed that adversarial triggers not only mislead models but also cause an unreasonable increase in response length, potentially leading to computational inefficiencies. This work underscores the need for more robust defense mechanisms against adversarial perturbations, particularly for models deployed in critical applications such as finance, law, and healthcare.
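The setup is easy to picture: append one fixed, irrelevant trigger sentence (a cat fact, in the paper's example) to otherwise unchanged problems and compare error rates. A toy version of that evaluation loop, with hypothetical ask_model and is_correct helpers standing in for the real pipeline:

Python:
# Toy CatAttack-style check: same questions, with and without a
# query-agnostic trigger appended. ask_model / is_correct are hypothetical.
TRIGGER = "Interesting fact: cats sleep for most of their lives."

def error_rate(questions, ask_model, is_correct, trigger=None):
    errors = 0
    for q in questions:
        prompt = f"{q['text']} {trigger}" if trigger else q["text"]
        if not is_correct(ask_model(prompt), q["answer"]):
            errors += 1
    return errors / len(questions)

# Usage sketch:
# base     = error_rate(math_problems, ask_model, is_correct)
# attacked = error_rate(math_problems, ask_model, is_correct, trigger=TRIGGER)
# print(f"error rate went from {base:.1%} to {attacked:.1%}")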

 
  • #295
https://doi.org/10.1126/sciadv.adu9368
Emergent social conventions and collective bias in LLM populations
[...] Here, we present experimental results that demonstrate the spontaneous emergence of universally adopted social conventions in decentralized populations of large language model (LLM) agents. [...] Our results show that AI systems can autonomously develop social conventions without explicit programming and have implications for designing AI systems that align, and remain aligned, with human values and societal goals.
Later: this experiment even seems a bit "meta", since the social mechanism described also seems to cover the "conventions and collective bias" appearing in general discussions of AI safety, i.e. two small groups arguing in opposite directions with a large "undecided" group in the middle.
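The mechanism can be sketched with a classic naming game: paired agents each propose a name, reinforce it when they happen to agree, and a shared convention crystallises with no central coordination. The snippet below uses simple reinforcement agents rather than LLMs, which is purely an assumption for illustration; the paper's point is that populations of LLM agents end up behaving much like this:

Python:
# Minimal naming-game sketch: a shared convention emerges from pairwise play.
# Simple reinforcement agents stand in for the LLM agents used in the paper.
import random

NAMES = ["blue", "red", "green", "yellow"]
agents = [{name: 1.0 for name in NAMES} for _ in range(24)]  # preference weights

def pick(agent):
    return random.choices(list(agent), weights=list(agent.values()))[0]

for _ in range(5000):
    a, b = random.sample(agents, 2)
    name_a, name_b = pick(a), pick(b)
    if name_a == name_b:            # coordination success reinforces the name
        a[name_a] += 1.0
        b[name_b] += 1.0

winner = max(NAMES, key=lambda n: sum(agent[n] for agent in agents))
print("Dominant convention:", winner)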
 
Last edited:
  • #296
https://arstechnica.com/tech-policy...-giants-will-hate-about-the-eus-new-ai-rules/
The European Union is moving to force AI companies to be more transparent than ever, publishing a code of practice Thursday that will help tech giants prepare to comply with the EU's landmark AI Act.

Hopefully a step in the right direction, e.g. from the Safety and Security Chapter, presenting regulation with a carrot for companies to develop new risk-mitigation mechanisms with potential for "wider adoption":
Principle of Innovation in AI Safety and Security. The Signatories recognise that determining the most effective methods for understanding and ensuring the safety and security of general-purpose AI models with systemic risk remains an evolving challenge. The Signatories recognise that this Chapter should encourage providers of general-purpose AI models with systemic risk to advance the state of the art in AI safety and security and related processes and measures. The Signatories recognise that advancing the state of the art also includes developing targeted methods that specifically address risks while maintaining beneficial capabilities (e.g. mitigating biosecurity risks without unduly reducing beneficial biomedical capabilities), acknowledging that such precision demands greater technical effort and innovation than less targeted methods. The Signatories further recognise that if providers of general-purpose AI models with systemic risk can demonstrate equal or superior safety or security outcomes through alternative means that achieve greater efficiency, such innovations should be recognised as advancing the state of the art in AI safety and security and meriting consideration for wider adoption.

I am also pleasantly surprised that the code of practice (in the appendix) lists several of the "risk-enablers" discussed here as specific systemic risks, e.g.
Loss of control: Risks from humans losing the ability to reliably direct, modify, or shut down a model. Such risks may emerge from misalignment with human intent or values, self-reasoning, self-replication, self-improvement, deception, resistance to goal modification, power-seeking behaviour, or autonomously creating or improving AI models or AI systems.
 
Last edited:
  • #297
https://www.reuters.com/business/ai...d-software-developers-study-finds-2025-07-10/
AI slows down some experienced software developers, study finds

Before the study, the open-source developers believed using AI would speed them up, estimating it would decrease task completion time by 24%. Even after completing the tasks with AI, the developers believed that they had decreased task times by 20%. But the study found that using AI did the opposite: it increased task completion time by 19%.
The study’s lead authors, Joel Becker and Nate Rush, said they were shocked by the results: prior to the study, Rush had written down that he expected “a 2x speed up, somewhat obviously.”
The findings challenge the belief that AI always makes expensive human engineers much more productive, a factor that has attracted substantial investment into companies selling AI products to aid software development.
...
The slowdown stemmed from developers needing to spend time going over and correcting what the AI models suggested.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
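The quoted percentages translate directly into time ratios; a couple of lines make the gap between belief and measurement explicit (only the figures from the article are used):

Python:
# Perceived vs measured effect of AI assistance on task completion time,
# using the percentages quoted from the METR study.
baseline  = 1.0                      # time without AI, normalised
expected  = baseline * (1 - 0.24)    # developers predicted 24% faster
perceived = baseline * (1 - 0.20)    # developers felt 20% faster afterwards
measured  = baseline * (1 + 0.19)    # study measured 19% slower

for label, t in [("expected", expected), ("perceived", perceived), ("measured", measured)]:
    print(f"{label:9s}: {t:.2f}x baseline time ({(t - baseline) / baseline:+.0%})")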
 
  • Informative
Likes gleem and DaveC426913
  • #298
I used AI to help write a Python program, a language I don't know. It reduced development time by at least 90%. I never would have undertaken the project without its aid. Life is too short to learn that, uh, stuff.
 
Last edited:
  • Like
Likes dextercioby and samay
  • #299
nsaspook said:
AI slows down some experienced software developers, study finds
While this study shows companies might want to be more conservative in implementing AI, some big ones are charging full speed ahead. Microsoft and Alphabet are using AI for up to 30% of their work, while Salesforce says it is using AI for 30% to 50% of its work.

Goldman Sachs will begin using the AI agent Devin, a full-stack development bot, as an "employee" alongside its 12,000 developers. It will work autonomously, although it will be supervised.
 
  • #300
Full speed ahead is not always a wise move.


https://www.supernetworks.org/pages/blog/agentic-insecurity-vibes-on-bitchat
Identity Is A Bitchat Challenge (MITM Flaw)

The Intersection of Vibe Coding and Security

Many of us have seen glimpses of what agentic generative coding does for security. We see the missteps, and sometimes wonder about the shallow bugs that pile on. Config managers that are almost always arbitrary file upload endpoints. Glue layers that become bash command launch as a service. And most frustratingly, code generation that's excellent at pretending forward progress has been made when no meaningful change has occurred. One of the most impressive parts of agentic coding is exactly that: how convincing it is by appearance and how easily we're tricked about the depth of substance in the code gen. In some ways we extend our trust of people to the stochastic code parrots, assuming that generative coding produced the actual work a human would have probably performed.
...
But bitchat's most glaring issue is identity. There's essentially no trust/auth built in today. So I would not really think about this as a secure messenger. The protocol has an identity key system, but it's only decorative as implemented and has misleading security claims. The 32-byte public key gets shuffled around with ephemeral key pairs as an opaque blob. The user verification is unfortunately disconnected from any trust and authentication. These are the hallmarks of vibe code (in)security.

Secure messaging systems do usually provide a way for users to establish trust, and that's what bitchat does not have right now.
...
In cryptography, details matter. A protocol that has the right vibes can have fundamental substance flaws that compromise everything it claims to protect.
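To make the point about "decorative" identity keys concrete: exchanging an ephemeral public key on its own protects against passive listeners but not against an active man in the middle, unless the ephemeral key is signed by, and checked against, a long-term identity key the peer has actually verified. A minimal sketch of that binding step using the cryptography package (a generic illustration, not bitchat's actual protocol or code):

Python:
# Sketch: bind an ephemeral key exchange to a verified long-term identity.
# Generic illustration only; not bitchat's protocol or implementation.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives import serialization

identity_key  = Ed25519PrivateKey.generate()   # long-term key, verified out of band
ephemeral_key = X25519PrivateKey.generate()    # fresh key pair per session

ephemeral_pub = ephemeral_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# The sender signs its ephemeral public key with the identity key ...
signature = identity_key.sign(ephemeral_pub)

# ... and the receiver verifies the signature against the identity key it trusts.
# Without this check a man in the middle can swap in its own ephemeral key.
identity_key.public_key().verify(signature, ephemeral_pub)  # raises if forged
print("ephemeral key is bound to the verified identity")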
 
  • Like
Likes Filip Larsen and 256bits
